# Spanning the full range of neutron star properties within a microscopic description

Tuhin Malik, Márcio Ferreira, Milena Bastos Albino, Constança Providência

arXiv:2301.08169v4 · 2023-01-19 · http://arxiv.org/abs/2301.08169v4
###### Abstract
The high-density behavior of nuclear matter is analyzed within a relativistic mean-field description with non-linear meson interactions. To assess the model parameters and their output, a Bayesian inference technique is used. The Bayesian setup is constrained only by a few nuclear saturation properties, a neutron star maximum mass larger than 2 M\({}_{\odot}\), and the low-density pure neutron matter equation of state (EOS) produced by an accurate N\({}^{3}\)LO calculation in chiral effective field theory. Depending on the strength of the non-linear vector field contribution, we have found three distinct classes of EOSs, each correlated with a different distribution of star properties. If the non-linear vector field contribution is absent, the gravitational maximum mass and the sound velocity at high densities are the greatest; however, this case also gives the smallest speed of sound at densities below three times saturation density. On the other hand, models with the strongest non-linear vector field contribution predict the largest radii and tidal deformabilities for 1.4 M\({}_{\odot}\) stars, together with the smallest mass for the onset of the nucleonic direct Urca processes and the smallest central baryonic densities for the maximum mass configuration. These models have the largest speed of sound below three times saturation density but the smallest at high densities; in particular, above four times saturation density the speed of sound decreases, approaching approximately \(\sqrt{0.4}c\) at the center of the maximum mass star. On the contrary, a weak non-linear vector contribution gives a monotonically increasing speed of sound. A 2.75 M\({}_{\odot}\) NS maximum mass was obtained in the tail of the posterior with a weak non-linear vector field interaction, indicating that the secondary object in GW190814 could also be an NS. The possible onset of hyperons and the compatibility of the different sets of models with pQCD are discussed. It is shown that pQCD favors models with a large contribution from the non-linear vector field term or which include hyperons.
## I Introduction
It has been shown that the very large neutron-proton asymmetry and baryonic density that exist inside compact objects such as neutron stars (NSs) can be studied using multi-messenger astronomy, which provides us with comprehensive information far beyond what is available in terrestrial laboratories [1; 2; 3]. NSs are believed to contain extremely rare phases of matter within their cores [4; 5]. Using astrophysical observations together with theoretical models of the equation of state (EOS), the astrophysics community is trying to understand not only the permissible domain of the EOS but also the possible particle species present in NS matter. At high densities, a wide variety of phases or compositions may occur, including hyperons, quarks, superconducting matter, or color-superconducting matter [4]. However, up to this point in time, we know very little about the composition of NSs, and the particle composition inferred for NS matter is largely model dependent. With the presently available EOS models, the constraints from the Neutron star Interior Composition Explorer (NICER) observatory and from gravitational waves (GW) are still compatible with the sole inclusion of nucleonic degrees of freedom [6]. It is imperative to note that the calculation of the nuclear EOS is a problem of theoretical modeling of the nuclear interaction. Different models can be used to describe the nuclear EOS of NS matter. Among them, relativistic mean field (RMF) models are preferred because they are capable of describing matter with relativistic effects, important for dense matter such as NS matter, as well as finite nuclei [7; 8; 9; 10; 11; 12; 13; 14; 15].
It is well established that RMF models, in which the many-body effects of the nuclear interaction arise from meson exchange, provide a suitable description of finite nuclei and infinite nuclear matter. A relativistic mean field model is built from an effective Lorentz-scalar Lagrangian that incorporates baryon, scalar, and vector meson fields [4; 16; 17]. The mesonic fields are introduced to describe the nuclear interaction: the \(\sigma\) mesons generate an attractive force, while the \(\omega\) mesons generate a repulsive short-range force. Within the RMF formalism, two approaches are available to adequately describe the density dependence of the EOS and the symmetry energy. In one of the approaches, nonlinear meson terms are incorporated into the Lagrangian density [9; 11; 13; 17; 18], while in the other approach, density-dependent coupling parameters are used to describe the nonlinearities [19; 20; 6], avoiding the introduction of various nonlinear meson interaction terms. The coupling parameters of the Lagrangian density are not completely free but are adjusted to reproduce a few well-known experimental and empirical nuclear saturation properties. To date, it is only loosely known which properties of nuclear matter govern the high-density behavior (\(\rho\gg\rho_{0}\)) [22], but hopefully astrophysical observations will constrain them.
The Bayesian approach is commonly used to optimize a set of model parameters given a set of observational/theoretical constraints [23; 24; 25; 26; 27; 28; 29; 30; 31]. In nuclear physics and astrophysics, this method is a valuable tool because it determines joint posterior distributions and correlations between model parameters for a given set of fit data. Generally,
Bayesian analysis of a model provides a whole snapshot of the model under the given fit data. As previously discussed, the RMF model successfully describes the dense-matter EOS relevant for NSs, either with density-dependent couplings or by including a few different non-linear self- or cross-mesonic interactions. In light of the current observations of NSs as well as the pure neutron matter constraints obtained from chiral effective field theory calculations at low densities, it is imperative to study the effects of those interactions statistically. Our previous study explored the RMF model with density-dependent couplings within a Bayesian framework [6]. This study systematically examines the RMF model with constant couplings and non-linear mesonic interactions within a Bayesian framework. In [32], the nonlinear meson interactions in an RMF model were investigated using a Bayesian framework based solely on astrophysical data; pure neutron matter constraints from chiral effective field theory calculations at low densities were ignored. Indeed, low-density bounds on the pure neutron matter (PNM) EOS from \(\chi\)EFT are a very strict constraint for this family of RMF models, as will be shown in the present study. Besides, higher-order interactions of the \(\omega\) meson (e.g., \(\omega^{4}\)) and cross interactions between the two mesons \(\varrho\) and \(\omega\) were not included in that study, which was restricted to the non-linear \(\sigma\)-meson terms introduced in [17]. Recently, the model we discuss in the present study has been applied to analyze the correlations existing among nuclear matter parameters at saturation and neutron star properties [33]. In particular, the role the \(\omega^{4}\) term plays in these correlations and in controlling the maximum star mass was discussed, and it was shown that the correlations depend on the strength of the \(\omega^{4}\) term. The same model is also considered in [30], where the authors take a different approach to the one of the present study and explore the constraining power of all current astrophysical observations (X-ray, radio, and gravitational-wave detections) and of simulated future X-ray missions.
The present study aims at analyzing a large set of parameters of RMF models with several nonlinear meson interactions, employing a Bayesian approach based on a given minimal set of fit data, in order to perform a detailed statistical analysis. The fit data include a few nuclear saturation properties, the observation of two-solar-mass NSs, and an estimate of the PNM EOS from a \(\chi\)EFT calculation. Furthermore, the consistency of the EOSs obtained from the marginalized posterior distributions of the model parameters with recent NICER measurements of the NS mass-radius and with the dimensionless tidal deformability from GW170817 measured by the LIGO-Virgo collaboration will be analyzed. In particular, we will focus our study on the high-density behavior of the speed of sound. It has been shown that conditioning an EOS built within a physics-agnostic approach on perturbative QCD calculations at high densities has a direct influence on the behavior of the speed of sound, which then shows a maximum around three times saturation density or an energy density \(\approx 500\) MeV fm\({}^{-3}\) [34; 35; 36]. On the contrary, this behavior does not occur if only astrophysical constraints are imposed [34; 36].
The article's structure is as follows. Section II introduces a brief overview of the field-theoretical RMF model for the EOS at zero temperature, while Section III discusses the Bayesian parameter estimation. The results of our analysis are discussed in Section IV. The effects of hyperons and of perturbative QCD (pQCD) constraints on the present model are discussed in Section V. In Section VI, the summary and conclusions are presented.
## II Equation of state
In the present study, we consider several sets of EOSs calculated within a RMF description of nuclear matter based on a field-theoretical approach that includes non-linear meson terms, both self-interactions and mixed terms. These non-linear terms are important to define the density dependence of the EOS. Different regions of the parameter space that give an equally good description of the nuclear properties will be considered.
The nuclear interaction between nucleons is introduced through the exchange of the scalar-isoscalar meson \(\sigma\), the vector-isoscalar meson \(\omega\) and the vector-isovector meson \(\varrho\). The Lagrangian describing the baryonic degrees of freedom is given by
\[\mathcal{L}=\mathcal{L}_{N}+\mathcal{L}_{M}+\mathcal{L}_{NL} \tag{1}\]
with
\[\mathcal{L}_{N}= \bar{\Psi}\Big[\gamma^{\mu}\left(i\partial_{\mu}-g_{\omega}\omega_{\mu}-g_{\varrho}\mathbf{t}\cdot\mathbf{\varrho}_{\mu}\right)-(m-g_{\sigma}\sigma)\Big]\Psi\] \[\mathcal{L}_{M}= \frac{1}{2}\left[\partial_{\mu}\sigma\,\partial^{\mu}\sigma-m_{\sigma}^{2}\sigma^{2}\right]-\frac{1}{4}F_{\mu\nu}^{(\omega)}F^{(\omega)\mu\nu}+\frac{1}{2}m_{\omega}^{2}\omega_{\mu}\omega^{\mu}-\frac{1}{4}\mathbf{F}_{\mu\nu}^{(\varrho)}\cdot\mathbf{F}^{(\varrho)\mu\nu}+\frac{1}{2}m_{\varrho}^{2}\mathbf{\varrho}_{\mu}\cdot\mathbf{\varrho}^{\mu}\] \[\mathcal{L}_{NL}= -\frac{1}{3}b\,g_{\sigma}^{3}\,\sigma^{3}-\frac{1}{4}c\,g_{\sigma}^{4}\,\sigma^{4}+\frac{\xi}{4!}\,g_{\omega}^{4}\left(\omega_{\mu}\omega^{\mu}\right)^{2}+\Lambda_{\omega}\,g_{\varrho}^{2}\,\mathbf{\varrho}_{\mu}\cdot\mathbf{\varrho}^{\mu}\,g_{\omega}^{2}\,\omega_{\mu}\omega^{\mu},\]
The field \(\Psi\) is a Dirac spinor that describes the nucleon doublet (neutron and proton) with a bare mass \(m\); \(\gamma^{\mu}\) are the Dirac matrices and \(\mathbf{t}\) is the isospin operator. The vector meson tensors are defined as \(F^{(\omega,\varrho)\mu\nu}=\partial^{\mu}A^{(\omega,\varrho)\nu}-\partial^{ \nu}A^{(\omega,\varrho)\mu}\). \(g_{\sigma}\), \(g_{\omega}\) and \(g_{\varrho}\) are the couplings of the nucleons to the meson fields \(\sigma\), \(\omega\) and \(\varrho\), having masses, respectively, \(m_{\sigma}\), \(m_{\omega}\) and \(m_{\varrho}\).
The parameters \(b\), \(c\), \(\xi\) and \(\Lambda_{\omega}\), which define the strength of the non-linear terms, are determined together with the couplings \(g_{i},\,i=\sigma,\,\omega,\,\varrho\), by imposing a set of constraints. The terms with \(b,\,c\) were introduced in [17] to control the nuclear matter incompressibility at saturation. The \(\xi\) term controls the stiffness of the high-density EOS: the larger it is, the softer the EOS. The \(\Lambda_{\omega}\) parameter affects the density dependence of the symmetry energy: the larger it is, the smaller the symmetry energy slope at saturation. The effect of the nonlinear terms on the magnitude of the meson fields is clearly seen from the equations of motion for the mesons
\[\sigma =\frac{g_{\sigma}}{m_{\sigma,\rm eff}^{2}}\sum_{i}\rho_{i}^{s} \tag{2}\] \[\omega =\frac{g_{\omega}}{m_{\omega,\rm eff}^{2}}\sum_{i}\rho_{i} \tag{3}\] \[\varrho =\frac{g_{\varrho}}{m_{\varrho,\rm eff}^{2}}\sum_{i}I_{3i}\,\rho_{i}, \tag{4}\]
where \(\rho_{i}^{s}\) and \(\rho_{i}\) are, respectively, the scalar density and the number density of nucleon \(i\), and
\[m_{\sigma,\rm eff}^{2} =m_{\sigma}^{2}+b\,g_{\sigma}^{3}\sigma+c\,g_{\sigma}^{4}\sigma^{2} \tag{5}\] \[m_{\omega,\rm eff}^{2} =m_{\omega}^{2}+\frac{\xi}{3!}g_{\omega}^{4}\omega^{2}+2\Lambda_{\omega}g_{\varrho}^{2}g_{\omega}^{2}\varrho^{2} \tag{6}\] \[m_{\varrho,\rm eff}^{2} =m_{\varrho}^{2}+2\Lambda_{\omega}g_{\omega}^{2}g_{\varrho}^{2}\omega^{2}, \tag{7}\]
where the meson fields should be interpreted as their expectation values. Some conclusions can be drawn from these equations with respect to the density behavior of the EOS: a) the effective mass of the \(\omega\)-meson \(m_{\omega,\rm eff}\) increases as the \(\omega\)-field increases and as a result at high densities \(\omega\propto\rho^{\alpha}\) with \(\alpha<1\), giving rise to a softening of the EOS at high densities with respect to models with a zero or small \(\xi\). This will also affect the behavior of the speed of sound as we will discuss later; b) the effective mass of the \(\varrho\)-meson, \(m_{\varrho,\rm eff}\), increases with the increase of the density and, as a result, the \(\varrho\) field becomes weaker, which implies a softer symmetry energy. Notice, however, that if \(\xi\neq 0\) this softening is smaller since the \(\omega\) field does not grow so fast with the baryonic density.
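This self-consistent system, Eqs. (2)-(7), has to be solved numerically at each density. A minimal sketch of one way to do so, by damped fixed-point iteration, is given below; it assumes the scalar and baryon densities are supplied as inputs (in a full calculation the scalar densities themselves depend on \(\sigma\) through the effective nucleon mass), and the function name and parameter dictionary are illustrative, not the fitted values of this work.

```python
import numpy as np

def solve_meson_fields(rho_s, rho, i3_rho, pars, tol=1e-10, damping=0.5):
    """Damped fixed-point solution of the meson field equations (2)-(7).

    rho_s  : total scalar density, sum_i rho_i^s          [fm^-3]
    rho    : total baryon density, sum_i rho_i            [fm^-3]
    i3_rho : isospin-weighted density, sum_i I3_i rho_i   [fm^-3]
    pars   : couplings g_s, g_w, g_r, b, c, xi, Lw and meson
             masses m_s, m_w, m_r (consistent units, e.g. fm^-1)
    """
    g_s, g_w, g_r = pars["g_s"], pars["g_w"], pars["g_r"]
    b, c, xi, Lw = pars["b"], pars["c"], pars["xi"], pars["Lw"]
    m_s, m_w, m_r = pars["m_s"], pars["m_w"], pars["m_r"]

    fields = np.zeros(3)                     # sigma, omega, varrho
    for _ in range(10_000):
        sigma, omega, varrho = fields
        # effective meson masses, Eqs. (5)-(7)
        ms2 = m_s**2 + b * g_s**3 * sigma + c * g_s**4 * sigma**2
        mw2 = m_w**2 + (xi / 6.0) * g_w**4 * omega**2 \
              + 2.0 * Lw * g_r**2 * g_w**2 * varrho**2
        mr2 = m_r**2 + 2.0 * Lw * g_w**2 * g_r**2 * omega**2
        # field equations (2)-(4)
        new = np.array([g_s * rho_s / ms2, g_w * rho / mw2, g_r * i3_rho / mr2])
        if np.max(np.abs(new - fields)) < tol:
            return new
        fields = (1 - damping) * fields + damping * new  # relaxation for stability
    raise RuntimeError("meson field iteration did not converge")
```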
To a good approximation, the EOS of nuclear matter can be decomposed into two parts: (i) the EOS of symmetric nuclear matter (SNM), \(\epsilon(\rho,0)\), and (ii) a term involving the symmetry energy coefficient \(S(\rho)\) and the asymmetry \(\delta\),
\[\epsilon(\rho,\delta)\simeq\epsilon(\rho,0)+S(\rho)\delta^{2}, \tag{8}\]
where \(\epsilon\) is the energy per nucleon at a given density \(\rho\) and isospin asymmetry \(\delta=(\rho_{n}-\rho_{p})/\rho\). The EOS can be recast in terms of various properties of bulk nuclear matter of order \(n\) at saturation density: (i) for the symmetric nuclear matter, the energy per nucleon \(\epsilon_{0}=\epsilon(\rho_{0},0)\) (\(n=0\)), the incompressibility coefficient \(K_{0}\) (\(n=2\)), the skewness \(Q_{0}\) (\(n=3\)), and the kurtosis \(Z_{0}\) (\(n=4\)), respectively, given by
\[X_{0}^{(n)}=3^{n}\rho_{0}^{n}\left(\frac{\partial^{n}\epsilon(\rho,0)}{ \partial\rho^{n}}\right)_{\rho_{0}},\,n=2,3,4; \tag{9}\]
(ii) for the symmetry energy, the symmetry energy at saturation \(J_{\rm sym,0}\) (\(n=0\)),
\[J_{\rm sym,0}=S(\rho_{0}),\qquad S(\rho)=\frac{1}{2}\left(\frac{\partial^{2}\epsilon(\rho,\delta)}{\partial\delta^{2}}\right)_{\delta=0}, \tag{10}\]
the slope \(L_{\rm sym,0}\) (\(n=1\)), the curvature \(K_{\rm sym,0}\) (\(n=2\)), the skewness \(Q_{\rm sym,0}\) (\(n=3\)), and the kurtosis \(Z_{\rm sym,0}\) (\(n=4\)), respectively, defined as
\[X_{\rm sym,0}^{(n)}=3^{n}\rho_{0}^{n}\left(\frac{\partial^{n}S(\rho)}{ \partial\rho^{n}}\right)_{\rho_{0}},\,n=1,2,3,4. \tag{11}\]
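All of these coefficients can be extracted numerically from any parametrization of \(\epsilon(\rho,\delta)\). The sketch below does so with local polynomial fits around \(\rho_{0}\); the `eps` callable is a placeholder for an energy-per-nucleon function (such as one generated by the RMF model above), and the step sizes are illustrative.

```python
import numpy as np

def derivs_at(f, x0, max_order, half_width=0.03, npts=11):
    """Derivatives of f at x0 up to max_order from a local polynomial fit."""
    x = np.linspace(x0 - half_width, x0 + half_width, npts)
    p = np.polynomial.Polynomial.fit(x, [f(xi) for xi in x], deg=max_order + 1)
    return [p.deriv(n)(x0) for n in range(max_order + 1)]

def nuclear_matter_parameters(eps, rho0):
    """Expansion coefficients of Eqs. (9)-(11) for eps(rho, delta) in MeV."""
    d = derivs_at(lambda r: eps(r, 0.0), rho0, 4)      # symmetric matter
    eps0 = d[0]
    K0, Q0, Z0 = (3*rho0)**2 * d[2], (3*rho0)**3 * d[3], (3*rho0)**4 * d[4]

    def S(r, h=1e-3):   # Eq. (10): second delta-derivative at delta = 0
        return 0.5 * (eps(r, h) - 2.0 * eps(r, 0.0) + eps(r, -h)) / h**2

    s = derivs_at(S, rho0, 4)                          # symmetry energy
    J, L, Ksym = s[0], 3*rho0 * s[1], (3*rho0)**2 * s[2]
    Qsym, Zsym = (3*rho0)**3 * s[3], (3*rho0)**4 * s[4]
    return dict(eps0=eps0, K0=K0, Q0=Q0, Z0=Z0,
                J=J, L=L, Ksym=Ksym, Qsym=Qsym, Zsym=Zsym)
```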
## III The Bayesian Setup
By updating a prior belief (i.e., a prior distribution) with given information (i.e., observed or fit data) and optimizing a likelihood function, a posterior distribution can be obtained according to Bayes' theorem [37]. Hence, in order to set up a Bayesian parameter optimization system, four things must be defined: the prior, the likelihood function, the fit data, and the sampler.
_The prior_ - To define the prior distribution of our Bayesian setup, we first scan the parameter domain of the adopted RMF model through Latin hypercube sampling, requiring that it spans a relatively wide range of nuclear matter saturation properties. We then adopt the uniform priors for each parameter listed in Table 1.
_The fit data_ - The fit data, listed in Table 2, include the nuclear saturation density \(\rho_{0}\), as well as the binding energy per nucleon \(\epsilon_{0}\), the incompressibility coefficient \(K_{0}\), and the symmetry energy \(J_{\rm sym,0}\), all evaluated at \(\rho_{0}\). Additionally, we take into account the pressure of PNM at the densities 0.08, 0.12, and 0.16 fm\({}^{-3}\) from the N\({}^{3}\)LO calculation in \(\chi\)EFT [38], allowing for 2 \(\times\) the N\({}^{3}\)LO data uncertainty, as well as an NS maximum mass above 2.0 M\({}_{\odot}\) with uniform probability in the likelihood.
_The Log-Likelihood_ - With our setup, we optimize a log-likelihood as the cost function. For all the data presented in Table 2 except the low-density PNM data and the NS maximum mass, the log-likelihood function of Eq. (12) is used with the appropriate \(\sigma\) uncertainty. For the PNM data from \(\chi\)EFT we use the box-function probability given in Eq. (13), and for the NS maximum mass a step-function probability.
\[Log(\mathcal{L})=-0.5\times\sum_{j}\left\{\left(\frac{d_{j}-m_{j}(\mathbf{\theta} )}{\sigma_{j}}\right)^{2}+Log(2\pi\sigma_{j}^{2})\right\} \tag{12}\]
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{No} & \multirow{2}{*}{Parameters} & \multicolumn{2}{c}{_Set 0_} \\ \cline{3-4} & & min & max \\ \hline
1 & \(g_{\sigma}\) & 6.5 & 15.5 \\
2 & \(g_{\omega}\) & 6.5 & 15.5 \\
3 & \(g_{\varrho}\) & 6.5 & 16.5 \\
4 & \(B\) & 0.5 & 9.0 \\
5 & \(C\) & -5.0 & 5.0 \\
6 & \(\xi\) & 0.0 & 0.04 \\
7 & \(\Lambda_{\omega}\) & 0 & 0.12 \\ \hline \end{tabular}
\end{table}
Table 1: The uniform prior considered for the parameters of the RMF models. Specifically, B and C are \(b\times 10^{3}\) and \(c\times 10^{3}\), respectively. The entries 'min' and 'max' denote the minimum and maximum values of the distribution.
\[Log(\mathcal{L})=Log\left\{\prod_{j}\frac{1}{2\sigma_{j}}\,\frac{1}{\exp\left(\frac{|d_{j}-m_{j}(\mathbf{\theta})|-\sigma_{j}}{0.015}\right)+1}\right\} \tag{13}\]
Specifically, \(j\) runs over the entire dataset, \(d_{j}\) and \(m_{j}\) represent the data and the derived model values, respectively, \(\sigma_{j}\) is the uncertainty associated with each data point, and \(\mathbf{\theta}\) is the vector of model parameters. It is important to understand that, when sampling the posterior, the normalization of the log-likelihood in equations 12 and 13 is irrelevant. However, it is mandatory for calculating the Bayes _evidence_, and in some cases it also reduces the computation time.
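A minimal sketch of these likelihood pieces, assuming the model predictions \(m_{j}(\mathbf{\theta})\) have already been computed (the function names are illustrative; the 0.015 smoothing width follows Eq. (13)):

```python
import numpy as np

def log_like_gauss(d, m, sigma):
    """Gaussian log-likelihood, Eq. (12), for the saturation properties."""
    d, m, sigma = map(np.asarray, (d, m, sigma))
    return -0.5 * np.sum(((d - m) / sigma) ** 2 + np.log(2 * np.pi * sigma**2))

def log_like_box(d, m, sigma, width=0.015):
    """Smoothed box log-likelihood, Eq. (13), for the chiEFT PNM pressures."""
    d, m, sigma = map(np.asarray, (d, m, sigma))
    z = (np.abs(d - m) - sigma) / width
    # log[ 1/(2 sigma) * 1/(exp(z) + 1) ] summed over the data points
    return np.sum(-np.log(2 * sigma) - np.logaddexp(z, 0.0))

def log_like_mmax(m_max, m_min=2.0):
    """Step-function log-likelihood for the NS maximum-mass constraint."""
    return 0.0 if m_max > m_min else -np.inf
```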
To populate the seven-dimensional posterior, we use the nested sampling algorithm, first proposed in Ref. [39] and well suited to low-dimensional problems. The PyMultiNest sampler is invoked with four thousand starting "n-live" points [40; 41]. We have obtained approximately eighteen thousand samples in each posterior, with an acceptance rate of \(\approx 0.04\).
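The sampling step can be set up along the following lines with PyMultiNest; the prior transform maps the unit hypercube onto the uniform ranges of Table 1. This is a sketch: `total_log_like` stands in for the full pipeline (solve the model for \(\mathbf{\theta}\), compute the quantities of Table 2, and sum the likelihood pieces above), and the output basename is arbitrary.

```python
import numpy as np
import pymultinest

# uniform prior bounds of Table 1 (Set 0):
# g_sigma, g_omega, g_rho, B, C, xi, Lambda_omega
LOW  = np.array([ 6.5,  6.5,  6.5, 0.5, -5.0, 0.000, 0.00])
HIGH = np.array([15.5, 15.5, 16.5, 9.0,  5.0, 0.040, 0.12])

def prior(cube, ndim, nparams):
    # map the unit cube onto the uniform prior ranges, in place
    for i in range(ndim):
        cube[i] = LOW[i] + (HIGH[i] - LOW[i]) * cube[i]

def loglike(cube, ndim, nparams):
    theta = [cube[i] for i in range(ndim)]
    return total_log_like(theta)  # placeholder: Eqs. (12), (13) + mass cut

pymultinest.run(loglike, prior, n_dims=7, n_live_points=4000,
                outputfiles_basename="rmf_set0_", resume=False, verbose=True)
```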
## IV Results
In the following, we examine the posterior probability distributions of the RMF model parameters adopted for this work, namely \(g_{\sigma}\), \(g_{\omega}\), \(g_{\varrho}\), \(b\), \(c\), \(\xi\), and \(\Lambda_{\omega}\), as outlined in Sec. III. Our Bayesian setup for the RMF model parameters includes the uniform ("un-informative") prior discussed in the previous section. We first perform a Bayesian inference with prior _Set 0_, as given in Table 1, imposing the constraints given in Table 2. Besides the conditions used in [6], the PNM condition was implemented with hard cuts, and an extra constraint was introduced: the PNM pressure was imposed to be an increasing function of the density. This last condition is necessary because, although it is physically required, the inference process may otherwise produce models that satisfy all the other constraints except this one. In Fig. 1, the corner plot for the posteriors of the parameters \(g_{\sigma}\), \(g_{\omega}\), \(g_{\rho}\), \(B\), \(C\), \(\Lambda_{\omega}\) and \(\xi\) is shown. The parameters \(B\) and \(C\) are \(b\times 10^{3}\) and \(c\times 10^{3}\), respectively.
Some comments are in order: a) some models appear at large \(g_{\sigma}\), \(g_{\omega}\) and \(\xi\) and small \(\Lambda_{\omega}\). It is the value of \(\xi\) that defines this subset, and, therefore, in order to better understand the properties of these models, an independent Bayesian inference calculation is performed with a prior restriction on the parameter \(\xi\in[0.015,0.04]\) (_Set 3_); b) in order to fully understand the effect of the \(\omega^{4}\) term, which strongly affects the density dependence of the SNM EOS and, in particular, determines its high-density behavior, two other calculations will be performed, one with \(\xi\in[0,0.004]\) (_Set 1_) and a second with \(\xi\in[0.004,0.015]\) (_Set 2_).
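Sets 1-3 are thus independent runs of the same pipeline with only the \(\xi\) entry of the prior bounds replaced; schematically, reusing the names of the sampling sketch above:

```python
XI_RANGES = {"set1": (0.0, 0.004), "set2": (0.004, 0.015), "set3": (0.015, 0.04)}

for name, (xi_lo, xi_hi) in XI_RANGES.items():
    LOW[5], HIGH[5] = xi_lo, xi_hi   # index 5 is xi in the parameter vector
    pymultinest.run(loglike, prior, n_dims=7, n_live_points=4000,
                    outputfiles_basename=f"rmf_{name}_", resume=False)
```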
The corner plots that compare the three sets of \(\approx\) 20000 models each defined by a different constraint on \(\xi\) are shown in Figs. 2, 4 and 5, respectively, for the model parameters, the nuclear matter properties and NS properties (set 1 represented by solid black lines, set 2 by red and set 3 by green). The median values and associated 90% credible intervals (CI) have been compiled in Table 3. In the table, we have listed the NMPs defined in Eqs. (9) and (11), and the following NS properties: the gravitational mass of the maximum mass configuration \(M_{\rm max}\), and corresponding baryonic mass \(M_{\rm B,max}\)
\begin{table}
\begin{tabular}{l c c c} \hline \hline
 & \multicolumn{3}{c}{Constraints} \\
 & Quantity & Value/Band & Ref \\ \hline
NMP [MeV] & \(\rho_{0}\) [fm\({}^{-3}\)] & \(0.153\pm 0.005\) & [19] \\
 & \(\epsilon_{0}\) & \(-16.1\pm 0.2\) & [42] \\
 & \(K_{0}\) & \(230\pm 40\) & [18; 43] \\
 & \(J_{\rm sym,0}\) & \(32.5\pm 1.8\) & [44] \\ \hline
PNM [MeV fm\({}^{-3}\)] & \(P(\rho)\) & \(2\times\) N\({}^{3}\)LO & [38] \\
 & \(dP/d\rho\) & \(>0\) & \\ \hline
NS mass [\(M_{\odot}\)] & \(M_{\rm max}\) & \(>2.0\) & [45] \\ \hline \hline
\end{tabular}
\end{table}
Table 2: The constraints imposed in the Bayesian inference to generate all sets of models: the binding energy per nucleon \(\epsilon_{0}\), incompressibility \(K_{0}\), and symmetry energy \(J_{\rm sym,0}\) at the nuclear saturation density \(\rho_{0}\), each with a 1\(\sigma\) uncertainty; the pressure of pure neutron matter (PNM) at the densities 0.08, 0.12 and 0.16 fm\({}^{-3}\) from a \(\chi\)EFT calculation [38], with 2 \(\times\) the N\({}^{3}\)LO uncertainty in the likelihood and the requirement that the PNM pressure be an increasing function of density; and a maximum NS mass above \(2M_{\odot}\).
Figure 1: Corner plot for the posteriors of the parameters \(g_{\sigma}\), \(g_{\omega}\), \(g_{\rho}\), \(B=b\times 10^{3}\), \(C=c\times 10^{3}\), \(\Lambda_{\omega}\) and \(\xi\) of the RMF model used in the present study obtained using the uniform priors defined in Table 1. The vertical lines represent the 90% credible intervals (CIs), and the light and dark intensities represent the 1\(\sigma\), 2\(\sigma\), and 3\(\sigma\) CIs, respectively.
radius \(R_{\rm max}\), central energy density \(\varepsilon_{c}\), central baryonic number density \(\rho_{c}\), and square of the central speed of sound \(c_{s}^{2}\) of the maximum mass NS, the radius \(R_{\rm M_{i}}\) and the dimensionless tidal deformability \(\Lambda_{\rm M_{i}}\) of stars with gravitational mass \({\rm M_{i}}\in[1.4,1.6,1.8,2.08]~{}M_{\odot}\), and the effective tidal deformability \(\tilde{\Lambda}\) for the GW170817 merger with \(q=1\) (\(q\) is the mass ratio of the NSs in the binary merger), computed for the three sets.
First, let us discuss the model parameters for the three sets based on the constraints on \(\xi\). The main finding is that the parameters of sets 1 and 2 do not differ much: \(g_{\sigma}\) and \(g_{\omega}\) extend to slightly larger values, while \(B\) and \(\Lambda_{\omega}\) take slightly smaller values. In order to compensate for the \(\omega^{4}\) term, which softens the EOS, \(g_{\omega}\) must increase, a change that is reflected in the other parameters. Set 3, in contrast, differs markedly from the other two: it spreads to larger values of \(g_{\sigma}\) and \(g_{\omega}\) and smaller values of \(B\) and \(\Lambda_{\omega}\), and \(C\) takes mainly negative values. Only \(g_{\rho}\) is similar for the three sets. These differences will be reflected in the NMP and NS properties.
It is also interesting to discuss how efficiently the posterior distributions of the nuclear matter properties specified in Table 2 span the target distributions. In Fig. 3, the posterior distributions of the physical properties that define the constraints imposed in the Bayesian inference (Table 2) are compared with the target distributions. We conclude that: a) sets 1 and 2 behave very similarly; b) set 3 covers the whole target distribution for \(K_{0}\), while the other sets are restricted to values \(K_{0}\gtrsim 230\) MeV; c) all sets show a similar result for the symmetry energy at saturation \(J_{\rm sym,0}\) and are pushed to the lower limit of the target; d) concerning the PNM pressure, sets 1 and 2 are pushed to the upper (lower) values of \(P_{1}\) (\(P_{3}\)), while the opposite is true for set 3.
The corner plot for the nuclear matter properties, Fig. 4, confirms the above discussion: while sets 1 and 2 have very similar properties, set 3 differs considerably from the other two. a) Concerning the symmetric nuclear matter properties, set 3 presents larger values of \(Q_{0}\) and \(Z_{0}\), while \(K_{0}\) shows a Gaussian distribution centered just above 200 MeV and spreading between \(\sim 100\) MeV and \(\sim 300\) MeV; for the other two sets, the distribution of \(K_{0}\) is squeezed above 220 MeV. An anti-correlation between \(Z_{0}\) and \(K_{0}\) should also be noted: lower values of \(K_{0}\) are compensated by larger \(Z_{0}\). b) Considering the symmetry energy properties, all sets have the same \(J_{\rm sym,0}\) distribution, but all the other properties show differences. Set 3 takes larger values of \(L_{\rm sym,0}\) and \(K_{\rm sym,0}\), and smaller values of \(Q_{\rm sym,0}\) and \(Z_{\rm sym,0}\). Set 3 also shows a slight positive correlation between \(L_{\rm sym,0}\) and \(K_{\rm sym,0}\); similar behavior has been shown in [51] for a set of quite different nuclear models. Notice, however, that this correlation is not present in sets 1 and 2. Besides, a quite strong correlation is obtained between \(L_{\rm sym,0}\) and \(Q_{\rm sym,0}\) for all three sets. Finally, it is also interesting to point out the quite broad and flat distribution of \(Z_{\rm sym,0}\) for sets 1 and 2, while for set 3 it presents a distribution quite peaked at a low value. Lower values of \(L_{\rm sym,0}\) and \(K_{\rm sym,0}\) for sets 1 and 2 are compensated by larger values of the two higher-order parameters, \(Q_{\rm sym,0}\) and \(Z_{\rm sym,0}\).
Let us now discuss the NS properties of the three sets, plotted in Fig. 5. The largest gravitational masses are obtained with set 1. In particular, within set 1 there is a small subset for which the mass is above 2.5\(M_{\odot}\) and as high as 2.75\(M_{\odot}\). One property that clearly distinguishes the three sets is the speed of sound in the center of the maximum mass star: for set 1 the square of this quantity takes values above \(\sim 0.6c^{2}\), for set 3 values below \(0.45c^{2}\), and set 2 fills the gap between the other two distributions.
Set 3 presents the largest radius and tidal deformability for 1.4\(M_{\odot}\) stars and the smallest central baryonic densities, indicating a stiffer EOS.
Figure 3: The comparison of the marginalized posteriors and the corresponding constraints imposed in the Bayesian inference analysis.
Figure 2: Corner plot for the three sets of models, set 1 with \(\xi\in[0,0.004]\) (solid black lines), set 2 with \(\xi\in[0.004,0.015]\) (red) and set 3 with \(\xi\in[0.015,0.04]\) (green), comparing the posteriors of the parameters \(g_{\sigma}\), \(g_{\omega}\), \(g_{\rho}\), \(B=b\times 10^{3}\), \(C=c\times 10^{3}\), and \(\Lambda_{\omega}\) of the RMF model used in the present study. The vertical lines represent the 68% CIs, and the different intensities, from dark to light, represent the 1\(\sigma\), 2\(\sigma\), and 3\(\sigma\) CIs, respectively.
Notice, however, that the small subset of models of set 1 with a mass above 2.5\(M_{\odot}\) also has \(R_{1.4}\gtrsim 13\) km and \(\Lambda_{1.4}\gtrsim 700\). Besides, these models present a large central speed of sound, \(c_{s}^{2}\sim 0.7c^{2}\), and the smallest central baryonic densities, \(<0.8\) fm\({}^{-3}\).
The baryonic and gravitational masses of the maximum mass configurations are strongly correlated. Besides, the maximum gravitational mass also shows a strong correlation with the radius and the tidal deformability of a 1.4\(M_{\odot}\) NS (the larger the maximum mass, the larger these two properties) and an anti-correlation with the central baryonic density of the maximum mass configuration, with larger densities associated with smaller radii and tidal deformabilities. Similar correlations have been obtained in [6] and [52] with models with density-dependent couplings.
A comparison of the NS properties predicted by the three sets becomes more evident in Fig. 6, where the full posteriors for the three sets are plotted together with some astrophysical observations: the mass-radius prediction from the GW170817 detection [54] and the NICER observations of the pulsar PSR J0030+0451 [46; 47] and of the pulsar PSR J0740+6620 [48; 49]. None of the sets is rejected by the observations. The \(\omega^{4}\) term softens the high-density behavior of the EOS, and, therefore, set 3 does not describe stars above 2.3\(M_{\odot}\). It is interesting to discuss the properties of set 3: a strong \(\xi\) softens the EOS at high densities; therefore, in order to satisfy the 2\(M_{\odot}\) constraint, this set of models has a larger \(g_{\omega}\) coupling, see Fig. 2, which gives rise to a stiffer EOS at low and intermediate densities. At high densities, the \(\omega^{4}\) term softens the EOS and it is not possible to attain very high masses. In addition, we compare the mass-radius relationships obtained from a few RMF models with our results, in particular, BigApple [55], IUFSU, FSU2 [14], FSU2R [56], NL3\(\omega\rho\) [15], TM1-2, TM1\(\omega\rho\), and TM1-2\(\omega\rho\) [57]. It should be emphasized that
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline \multirow{3}{*}{Quantity} & \multirow{3}{*}{Units} & \multicolumn{3}{c}{Set 1} & \multicolumn{3}{c}{Set 2} & \multicolumn{3}{c}{Set 3} \\ \cline{3-10} & & & median & \multicolumn{2}{c}{\(90\%\) CI} & \multicolumn{2}{c}{median} & \multicolumn{2}{c}{\(90\%\) CI} & \multicolumn{2}{c}{median} & \multicolumn{2}{c}{\(90\%\) CI} \\ & & & min & max & & min & max & & min & max \\ \hline \multirow{8}{*}{NMP} & \(\rho_{0}\) & fm\({}^{-3}\) & \(0.152\) & \(0.145\) & \(0.160\) & \(0.152\) & \(0.145\) & \(0.160\) & \(0.153\) & \(0.145\) & \(0.161\) \\ & \(m^{\star}\) & \(0.76\) & \(0.69\) & \(0.78\) & \(0.72\) & \(0.64\) & \(0.76\) & \(0.63\) & \(0.55\) & \(0.69\) \\ & \(\varepsilon_{0}\) & & \(-16.10\) & \(-16.43\) & \(-15.76\) & \(-16.10\) & \(-16.43\) & \(-15.76\) & \(-16.10\) & \(-16.43\) & \(-15.77\) \\ & \(K_{0}\) & & \(257\) & \(234\) & \(293\) & \(252\) & \(205\) & \(300\) & \(232\) & \(169\) & \(295\) \\ & \(Q_{0}\) & & \(-444\) & \(-497\) & \(-301\) & \(-438\) & \(-548\) & \(-256\) & \(-319\) & \(-562\) & \(483\) \\ & \(Z_{0}\) & & \(1766\) & \(435\) & \(3054\) & \(2161\) & \(65\) & \(5521\) & \(4698\) & \(739\) & \(9623\) \\ & \(J_{\rm sym,0}\) & MeV & \(31.87\) & \(29.10\) & \(34.22\) & \(31.90\) & \(29.05\) & \(34.44\) & \(32.05\) & \(29.19\) & \(34.75\) \\ & \(L_{\rm sym,0}\) & & \(35\) & \(21\) & \(57\) & \(39\) & \(25\) & \(58\) & \(50\) & \(35\) & \(64\) \\ & \(K_{\rm sym,0}\) & & \(-126\) & \(-177\) & \(-57\) & \(-96\) & \(-160\) & \(4\) & \(-6\) & \(-89\) & \(71\) \\ & \(Q_{\rm sym,0}\) & & \(1438\) & \(640\) & \(1736\) & \(1328\) & \(722\) & \(1661\) & \(866\) & \(-88\) & \(1303\) \\ & \(Z_{\rm sym,0}\) & & \(-12118\) & \(-19290\) & \(236\) & \(-13057\) & \(-19030\) & \(-1147\) & \(-13422\) & \(-17643\) & \(-6877\) \\ & \(M_{\rm max}\) & M \(\odot\) & \(2.073\) & \(2.013\) & \(2.306\) & \(2.064\) & \(2.011\) & \(2.244\) & \(2.048\) & \(2.010\) & \(2.162\) \\ & \(M_{\rm B,max}\) & M \(\odot\) & \(2.457\) & \(2.378\) & \(2.772\) & \(2.437\) & \(2.367\) & \(2.677\) & \(2.400\) & \(2.348\) & \(2.546\) \\ & \(c_{s}^{2}\) & \(c^{2}\) & \(0.63\) & \(0.58\) & \(0.70\) & \(0.52\) & \(0.46\) & \(0.58\) & \(0.43\) & \(0.39\) & \(0.45\) \\ & \(\rho_{c}\) & fm\({}^{-3}\) & \(1.079\) & \(0.914\) & \(1.138\) & \(1.036\) & \(0.899\) & \(1.099\) & \(0.972\) & \(0.883\) & \(1.035\) \\ & \(\varepsilon_{c}\) & MeV fm\({}^{-3}\) & \(1377\) & \(1169\) & \(1462\) & \(1302\) & \(1127\) & \(1394\) & \(1198\) & \(1084\) & \(1288\) \\ & \(R_{\rm max}\) & & \(10.75\) & \(10.46\) & \(11.52\) & \(11.03\) & \(10.69\) & \(11.74\) & \(11.47\) & \(11.07\) & \(11.97\) \\ & \(R_{1.4}\) & & \(12.34\) & \(12.03\) & \(12.89\) & \(12.50\) & \(12.17\) & \(13.05\) & \(12.87\) & \(12.42\) & \(13.30\) \\ NS & \(R_{1.6}\) & km & \(12.21\) & \(11.89\) & \(12.86\) & \(12.39\) & \(12.04\) & \(13.02\) & \(12.77\) & \(12.31\) & \(13.26\) \\ & \(R_{1.8}\) & & \(11.98\) & \(11.62\) & \(12.79\) & \(12.18\) & \(11.79\) & \(12.93\) & \(12.57\) & \(12.09\) & \(13.14\) \\ & \(R_{2.075}\) & & \(11.67\) & \(10.96\) & \(12.86\) & \(11.88\) & \(11.21\) & \(12.92\) & \(12.25\) & \(11.65\) & \(12.96\) \\ & \(\Lambda_{1.6}\)
Figure 4: Corner plot for the three sets of models with \(\xi\in[0,0.004]\) (solid black lines), \(\xi\in[0.004,0.015]\) (red) and \(\xi\in[0.015,0.04]\) (green) comparing the respective nuclear matter properties, in particular, the binding energy \(\epsilon_{0}\), incompressibility \(K_{0}\), skewness \(Q_{0}\) and kurtosis \(Z_{0}\) at saturation, which characterize symmetric nuclear matter, and the symmetry energy \(J_{\mathrm{sym},0}\), its slope \(L_{\mathrm{sym},0}\), curvature \(K_{\mathrm{sym},0}\), skewness \(Q_{\mathrm{sym},0}\) and kurtosis \(Z_{\mathrm{sym},0}\) at saturation, which characterize the symmetry energy. The vertical lines represent the 68% CIs, and the light and dark intensities represent the \(1\sigma\), \(2\sigma\), and \(3\sigma\) CIs, respectively.
the posteriors we have obtained for the three sets do not completely encapsulate all models, particularly FSU2 and TM1-2. This is due to the fact that those models do not satisfy all the restrictions put forth in the Bayesian setup: these two are excluded by the \(J_{\mathrm{sym},0}\) requirement. All the others fall inside the full posterior for the NS mass-radius domain.
The NS mass-radius constraint obtained from HESS J1731-347 is shown in dashed dark red (solid dark red) for the 68.3% (95.4%) CI [50]. A purely nucleonic composition for this star may be questionable, since all sets lie outside the 1\(\sigma\) 2-D posterior distribution in mass-radius. However, there are some EOSs that fall within the 2\(\sigma\) limit. The EOS that, considering all sets, best matches the HESS J1731-347 1\(\sigma\) (68% CI) data, BMPF_most_HESS, is also plotted in Fig. 6. Its model parameters together with its NMP and NS properties are given in the Supplemental Material, respectively, in Tables 2 and 3. In the Supplemental Material, we also present a few selected models for NSs with maximum mass 2.2, 2.4, 2.6, and 2.75 \(\mathrm{M}_{\odot}\) (the extreme one), namely BMPF220, BMPF240, BMPF260, and BMPF275, respectively.
In Fig. 7, we plot the 90% CI region of the conditional probabilities \(P(R|M)\) (left) and \(P(\Lambda|M)\) (right) for the three sets. The gray zones in the left panel indicate the 90% (solid) and 50% (dashed) CI for the binary components of the GW170817 event [53]. The NICER x-ray data predictions for the pulsars PSR J0030+0451 and PSR J0740+6620 are also included, in particular, the 1\(\sigma\) (68%) confidence zone of the 2-D posterior distribution in the mass-radius domain for the millisecond pulsar PSR J0030+0451 (cyan and yellow) [46; 47] as well as PSR J0740+6620 (violet) [48; 49]. The horizontal (radius) and vertical (mass) error bars reflect the 1\(\sigma\) credible interval derived for the same NICER data's 1-D marginalized posterior distribution. Finally, the blue bars depict the radius of PSR J0740+6620 at 2.08\(M_{\odot}\) (left panel) and its tidal deformability at 1.36 \(M_{\odot}\) (right panel) [54]. As already indicated by the full posteriors, masses above 2.3 \(M_{\odot}\) are only obtained within sets 1 and 2. Sets 1 and 2 predict \(\sim 0.5\) km smaller radii, as we can also confirm from Table 3, and only set 3 predicts radii above 13 km at a 90% CI. Notice that, according to sets 1 and 2, the low-mass object associated with the gravitational wave event GW190814, predicted to have a mass in the range 2.5-2.67 \(M_{\odot}\) [58], could be a neutron star. The detection of masses above 2.3\(M_{\odot}\) puts strong constraints on \(\xi\). Concerning the tidal deformability (right panel), the set 1 and 2 predictions for \(\Lambda_{1.36}\), corresponding to the \(q=1\) mass ratio of the GW170817 detection, lie well inside the observations, while for set 3 some models lie outside this range.
In order to better understand how the three sets compare regarding the tidal deformability, we plot in Fig. 8 the probability distribution of the effective tidal deformability \(\tilde{\Lambda}\) calculated for the three sets for the chirp mass associated with GW170817, \(\mathrm{M}_{chirp}=1.186\,M_{\odot}\). For each mass-
Figure 5: Corner plot for the three sets of models with \(\xi\in[0,0.004]\) (solid black lines), \(\xi\in[0.004,0.015]\) (red) and \(\xi\in[0.015,0.04]\) (green) comparing the respective NS properties, in particular, the gravitational and baryonic maximum masses \(M_{\mathrm{max}}\) and \(M_{\mathrm{B,max}}\), the square of the speed of sound, the central baryonic density of the maximum mass configuration, and the radius and dimensionless tidal deformability of a 1.4\(M_{\odot}\) star. The vertical lines represent the 68% CIs, and the light and dark intensities represent the 1\(\sigma\), 2\(\sigma\), and 3\(\sigma\) CIs, respectively.
Figure 6: NS mass-radius domains (full posterior) produced in the following three scenarios: set 1 with \(\xi\in[0,0.004]\) (black dots), set 2 with \(\xi\in[0.004,0.015]\) (salmon), and set 3 with \(\xi\in[0.015,0.04]\) (green). The gray lines depict the constraints from the binary components of GW170817, along with their 90% and 50% credible intervals (CI). The 1\(\sigma\) (68%) CI for the 2D posterior distribution in the mass-radius domain for the millisecond pulsar PSR J0030+0451 (cyan and yellow) [46; 47] as well as PSR J0740+6620 (violet) [48; 49] from the NICER x-ray data are also shown. Additionally, we show the constraint obtained from HESS J1731-347 for the 68.3% (95.4%) CI in dashed dark red (solid dark red) [50]. MR curves from a few well-known RMF models are also plotted (see text for details). Also shown is BMPF_most_HESS, the EOS from our complete set that best describes HESS J1731-347.
radius curve, fixing the chirp mass at 1.186\(M_{\odot}\), we select all possible combinations of the masses \(m_{1}\) and \(m_{2}\) and calculate the combined tidal deformability; for each EOS we have 44 combinations of \(m_{1}\) and \(m_{2}\). None of the distributions goes below 300, consistent with the findings of several studies showing that the electromagnetic counterparts of GW170817, the gamma-ray burst GRB170817A [59] and the electromagnetic transient AT2017gfo [60], set a lower limit on \(\tilde{\Lambda}\) of the order of 210 [61], 300 [62], 279 [63], and 309 [64]. The medians along with the 90% CIs of the three distributions corresponding to sets 1, 2, and 3 are, respectively, \(471^{+163}_{-71}\), \(516^{+166}_{-84}\), and \(626^{+154}_{-132}\). Set 3 has a quite symmetric and wide distribution, while the other two are narrower asymmetric distributions that spread above the limit of 720 obtained from [54].
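The \(\tilde{\Lambda}\) values follow from the standard combination of the component deformabilities; a sketch of the procedure, assuming `lambda_of_m` interpolates the \(\Lambda(M)\) curve of one EOS (the \(q\) grid shown is an illustrative stand-in for the 44 combinations mentioned above):

```python
import numpy as np

def lambda_tilde(m1, m2, l1, l2):
    """Effective (combined) tidal deformability of a binary."""
    return (16.0 / 13.0) * ((m1 + 12.0 * m2) * m1**4 * l1
                            + (m2 + 12.0 * m1) * m2**4 * l2) / (m1 + m2)**5

def lambda_tilde_scan(lambda_of_m, m_chirp=1.186, qs=np.linspace(0.7, 1.0, 44)):
    """Scan mass ratios q = m2/m1 <= 1 at fixed chirp mass for one EOS."""
    out = []
    for q in qs:
        m1 = m_chirp * (1.0 + q) ** 0.2 / q ** 0.6  # from the chirp-mass definition
        m2 = q * m1
        out.append(lambda_tilde(m1, m2, lambda_of_m(m1), lambda_of_m(m2)))
    return np.array(out)
```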
In Fig. 9, we plot the \(\beta\)-equilibrium pressure as a function of the baryonic density for the three sets (\(\xi<0.004\), \(0.004<\xi<0.015\) and \(\xi>0.015\)), together with the band obtained from GW170817 [59]. All models fall inside the GW170817 band. However, their behaviors can be distinguished: a smaller \(\xi\) implies a softer EOS at lower densities and a harder one at high densities, and the other way around for a larger \(\xi\).
In Fig. 10, the symmetry energy is represented for the three scenarios considered in our study. We conclude that the larger \(\xi\), the stiffer the symmetry energy, favoring larger proton fractions, as seen in Fig. 11. As discussed in Sec. II, a nonzero \(\xi\) modifies the \(\varrho\) effective mass, Eq. (7), and therefore has a direct influence on the strength of the \(\varrho\) field. The \(\omega\)-field is proportional to the baryonic number density \(\rho\) if \(\xi=0\), while for a nonzero \(\xi\), \(\omega\) increases with a smaller
Figure 8: The probability distribution of the combined tidal deformability \(\tilde{\Lambda}\) in a binary, plotted for the _chirp_ mass M\({}_{chirp}=1.186\) M\({}_{\odot}\) and marginalized over the NS mass ratio \(q=m_{1}/m_{2}\), obtained in Sets 1, 2 and 3. The medians and 90% CIs for \(\tilde{\Lambda}\) are \(471^{+163}_{-71}\), \(516^{+166}_{-84}\), and \(626^{+154}_{-132}\) for Sets 1, 2 and 3, respectively.
Figure 7: The 90% CI region for the sets: \(\xi\in[0,0.004]\) (black dot), \(\xi\in[0.004,0.015]\) (salmon), and \(\xi\in[0.015,0.04]\) (green) derived using the conditional probabilities \(P(R|M)\) (left) and \(P(\Lambda|M)\) (right). The gray zones in the left panel indicate the 90% (solid) and 50% (dashed) CI for the binary components of the GW170817 event [53], for the \(1\sigma\) (68%) credible zone of the 2-D posterior distribution in mass-radii domain from millisecond pulsar PSR J0030+0451 (cyan and yellow) [46; 47] as well as PSR J0740 + 6620 (violet) [48; 49] are shown for the NICER x-ray data. The horizontal (radius) and vertical (mass) error bars reflect the \(1\sigma\) credible interval derived for the same NICER data’s 1-D marginalized posterior distribution. The blue bars depict the radius of PSR J0740+6620 at 2.08\(M_{\odot}\) (left panel) and its tidal deformability at 1.36 \(M_{\odot}\) (right panel) [54].
power of \(\rho\). So the larger the value of \(\xi\), the smaller the \(\varrho\) effective mass and the larger the \(\varrho\) field. A large \(\varrho\)-field gives rise to a smaller isospin asymmetry, i.e. larger proton fractions will occur. However, since larger proton fractions favor the onset of the direct Urca (DUrca) process inside NSs with smaller masses, the different scenarios represented by the three sets may be distinguished by their cooling properties.
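For reference, the DUrca threshold invoked here follows from Fermi-momentum conservation in the reaction \(n\to p+e^{-}+\bar{\nu}_{e}\); a minimal sketch of the standard threshold proton fraction in charge-neutral npe\(\mu\) matter (a textbook relation, not specific to this work):

```python
def durca_threshold(x_e):
    """Threshold proton fraction for the nucleonic direct Urca process,
    from p_Fn <= p_Fp + p_Fe with charge neutrality (rho_p = rho_e + rho_mu).
    x_e = rho_e / (rho_e + rho_mu) is the electron fraction among the leptons."""
    return 1.0 / (1.0 + (1.0 + x_e ** (1.0 / 3.0)) ** 3)

print(durca_threshold(1.0))   # electrons only: 1/9 ~ 0.111
print(durca_threshold(0.5))   # equal e/mu densities: ~ 0.148
```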
Also very interesting is the analysis of the speed-of-sound behavior for the three sets. While for the \(\xi<0.004\) set the speed of sound increases monotonically with the baryonic density, this is not so for the \(\xi>0.015\) set, see Fig. 12: in this case, the square of the speed of sound attains a maximum below 0.45\(c^{2}\) at \(\rho\sim 4\rho_{0}\) and then decreases smoothly. The average behavior of the set with \(0.004<\xi<0.015\) is intermediate, as expected; in this last case, for the densities plotted in Fig. 12, the speed of sound stabilizes just above 0.5\(c^{2}\). The blue region in the figure represents the 90% credible interval of the square of the sound velocity allowing for the onset of hyperons in Set 0, as discussed in Sec. V.
Finally, we study the correlations between the different quantities considered, in particular, model parameters, nuclear matter properties, and neutron star properties, see Fig. 13
Figure 11: A comparison of the proton, electron, and muon fractions versus the baryonic density in the three different scenarios: \(\xi\in[0,0.004]\) (dark grey), \(\xi\in[0.004,0.015]\) (salmon), and \(\xi\in[0.015,0.04]\) (green).
Figure 12: The median and 90% credible interval of the square of sound velocity (\(c_{s}^{2}\)) as a function of baryon density are shown for \(\xi\in[0,0.004]\) (black dot), \(\xi\in[0.004,0.015]\) (salmon), and \(\xi\in[0.015,0.04]\), respectively (green). The blue region represents the 90% credible interval of the square of sound velocity allowing for the onset of hyperons in Set 0.
Figure 10: The symmetry energy versus the baryonic number density for the three sets with \(\xi\in[0,0.004]\) (dark grey), \(\xi\in[0.004,0.015]\) (salmon), and \(\xi\in[0.015,0.04]\) (green). The constraint obtained from the IAS analysis is also illustrated by the light sky-blue region.
Figure 9: Pressure versus the baryonic number density for the three scenarios \(\xi\in[0,0.004]\) (dark grey), \(\xi\in[0.004,0.015]\) (salmon) and \(\xi\in[0.015,0.04]\) (green). Also shown is the band predicted from the GW170817 event (hatched grey).
where the Kendall rank correlation coefficients are shown for set 0. The strongest correlations, with coefficients of the order of 85% or above, are between: a) \(g_{\sigma}\) and \(g_{\omega}\), for which 85% was determined; the correct description of the binding energy strongly constrains these two parameters; b) the central baryonic density and the central energy density of the maximum mass star with the corresponding star radius, respectively, -87% and -92%; this correlation was reported in [65] and will be discussed below; c) the speed of sound in the center of the maximum mass star with the parameter \(\xi\), -90%; this correlation reflects the fact that the parameter \(\xi\) determines the stiffness of the EOS at high densities; d) the gravitational mass of the maximum mass star with the corresponding baryonic mass, 92%, and the central baryonic density with the energy density of the maximum mass star, also 92%.
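These coefficients can be reproduced directly from the posterior samples; a sketch assuming the draws are collected in a pandas DataFrame whose column names (e.g. "xi", "cs2_center") are illustrative:

```python
import pandas as pd
from scipy.stats import kendalltau

def kendall_matrix(samples: pd.DataFrame) -> pd.DataFrame:
    """Pairwise Kendall rank correlations between all columns (cf. Fig. 13);
    each row of `samples` is one posterior draw of model parameters, NMPs,
    and NS properties."""
    return samples.corr(method="kendall")

# a single pair, e.g. the strong anti-correlation between xi and the
# central speed of sound of the maximum mass star:
# tau, p_value = kendalltau(samples["xi"], samples["cs2_center"])
```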
As discussed above, the correlation coefficient between the central density of the maximum mass star \(\rho_{c}\) and its radius \(R_{\rm max}\) is of the order of \(0.9\), see Fig. 13. A similar result was obtained in [65] with a set of EOS determined using the sound-speed parameterization method and constrained to satisfy X-ray and gravitational-wave observations, and ab-initio calculations, in particular, low-density neutron matter chiral effective theory and high density perturbative QCD results. These authors found that the normalized central density of the maximum mass star was related to the corresponding radius through the quadratic relation
\[\frac{\rho_{c}}{0.16\ \mathrm{fm}^{-3}}=d_{0}\left[1-\left(\frac{R_{\rm max}}{10 \ \mathrm{km}}\right)\right]+d_{1}\left(\frac{R_{\rm max}}{10\ \mathrm{km}}\right)^{2},\]
with \(d_{0}=27.6\) and \(d_{1}=7.5\) and a 3.7% standard deviation of
Figure 13: The Kendall rank correlation coefficients between RMF model parameters, nuclear saturation properties (NMP), and neutron star properties (NS) were obtained from the posterior with prior Set 0. In such figures, Pearson’s correlation coefficient is typically employed. Pearson’s correlation coefficient measures a linear relationship between two variables, whereas Kendall’s correlation coefficient measures a monotonic relationship.
relative residual over the central value zero. Performing a similar analysis with Set 0, we have obtained \(d_{0}=28.89\pm 0.02\) and \(d_{1}=7.73\pm 0.01\). The parameters \(d_{0}\) and \(d_{1}\) obtained with our approach and in [65] differ by less than 5%, although very different EOS descriptions have been used. Notice, however, that a linear relation yields a chi-square fit similar to the quadratic one. We have obtained for Set 0
\[\frac{\rho_{c}}{0.16\ \mathrm{fm}^{-3}}=m_{0}\left(\frac{R_{\mathrm{max}}}{10 \ \mathrm{km}}\right)+c_{0},\]
with \(m_{0}=-11.618\pm 0.018\) and \(c_{0}=19.255\pm 0.019\). The relative residual for \(\rho_{c}\) with the Set 0 data, obtained with both the non-linear and the linear relations, shows a symmetric Gaussian distribution centered on zero with a 1.4% standard deviation.
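Both fits can be reproduced with a standard least-squares routine; a sketch, assuming `rho_c` (fm\({}^{-3}\)) and `r_max` (km) are the arrays of central densities and radii of the maximum mass configurations from the Set 0 posterior (placeholder names):

```python
import numpy as np
from scipy.optimize import curve_fit

def quad_rel(R, d0, d1):
    """rho_c / 0.16 fm^-3 = d0 [1 - (R/10 km)] + d1 (R/10 km)^2"""
    x = R / 10.0
    return d0 * (1.0 - x) + d1 * x**2

def lin_rel(R, m0, c0):
    """rho_c / 0.16 fm^-3 = m0 (R/10 km) + c0"""
    return m0 * (R / 10.0) + c0

y = rho_c / 0.16                                   # normalized central density
(d0, d1), _ = curve_fit(quad_rel, r_max, y)        # quadratic relation
(m0, c0), _ = curve_fit(lin_rel, r_max, y)         # linear relation
rel_residual = (y - quad_rel(r_max, d0, d1)) / y   # ~1.4% standard deviation
```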
## V The hyperons and perturbative QCD
In this section we complete the discussion of the previous section by addressing two frequently considered issues: a) how do non-nucleonic degrees of freedom affect the conclusions; b) do the constraints obtained from pQCD affect the present neutron star description at the densities found inside neutron stars? The two topics are discussed in the following subsections.
### Effect of hyperons
The appearance of hyperons, or of other non-nucleonic degrees of freedom, in neutron stars is an open question in astrophysics and is still the subject of ongoing research. For instance, in [66] the authors conclude, within an auxiliary field diffusion Monte Carlo description of nuclear matter with \(\Lambda\)-hyperons, that the onset of hyperons is very sensitive to the three-body force, which may disfavor their appearance. However, if hyperons are considered in a RMF description of neutron star matter, their onset generally occurs at densities of the order of \(2-3\rho_{0}\).
We will introduce hyperons following the approach described in [6]. The interaction between nucleons and hyperons is defined by the \(\sigma\), \(\omega\), \(\rho\), and \(\phi\) mesons, and we allow for the possible onset of the neutral \(\Lambda\)-hyperon and the negatively charged \(\Xi^{-}\)-hyperon. The \(\Lambda\)-hyperon generally sets in first and the \(\Xi^{-}\)-hyperon second [67; 68; 69]. The \(\Sigma\)-hyperon potential in nuclear matter is possibly repulsive, disfavoring the onset of this hyperon before the \(\Xi^{-}\)-hyperon, see [70]. We consider that the coupling of the hyperons to the vector-isoscalar mesons (the \(\omega\) and \(\phi\) mesons) is determined by the SU(6) symmetry
\[g_{\omega\Lambda}=\frac{2}{3}g_{\omega N},\quad g_{\phi\Lambda}=-\frac{\sqrt{2}}{3}g_{\omega N},\quad g_{\omega\Xi^{-}}=\frac{1}{3}g_{\omega N},\quad g_{\phi\Xi^{-}}=-\frac{2\sqrt{2}}{3}g_{\omega N}, \tag{14}\]
and for the \(\rho\)-meson we assume
\[g_{\rho B}=g_{\rho N} \tag{15}\]
In the Lagrangian density, the interaction term between the \(\rho\)-meson and the baryons takes the isospin into account explicitly. The coupling of the \(\sigma\)-meson to the baryons is written in terms of the coupling to the nucleon as \(g_{\sigma Y}=x_{\sigma Y}\,g_{\sigma N}\), with \(x_{\sigma Y}\) fitted to hypernuclei properties. Considering several models, the factor \(x_{\sigma\Lambda}\) takes values between 0.609 and 0.622, and values between 0.309 and 0.321 were calculated for \(x_{\sigma\Xi^{-}}\). These two intervals have been used in the calculation with hyperons. The same prior used to define set 0 (see Table 1), together with the above intervals for the baryon-\(\sigma\) couplings, was considered, as well as the constraints defined in Table 2.
The effect of the inclusion of hyperons on the total mass-radius domain spanned by the hyperonic EOS set is shown in Fig. 14. The maximum mass attained is reduced from 2.7 \(M_{\odot}\) for nucleonic stars to \(\sim 2.2M_{\odot}\) for hyperonic stars. A strong effect is also observed on the radius: the small-radius region is eliminated and, simultaneously, the mass-radius region extends to slightly larger radii, since the EOS has to be stiffer in order to describe \(2M_{\odot}\) stars. The EOSs obtained are characterized by a very small value of \(\xi\) (the median is 0.00137 and the 68% CI is [0.0004, 0.00326]), as expected, because a large \(\xi\) softens the EOS at high density, disfavoring the description of stars with a mass equal to or above \(2M_{\odot}\).
The behavior of the speed of sound in the presence of hyperons is shown in Fig. 12, where it can be compared with the no-hyperon calculation. The hyperon onset has a strong effect on the speed of sound, as discussed in other works [6]. The speed of sound presents a maximum at the onset of hyperons, for a density close to the one predicted in [36] and [35] with an agnostic description of the EOS. Agnostic descriptions, however, do not allow the determination of the star composition [34; 35; 71; 72].
Figure 14: The complete NS mass-radius domains generated using only nucleonic matter and using hyperonic matter, based on the Set 0 conditions, depicted in their full posterior form: the dotted (blue) region corresponds to the no-hyperon (hyperon) calculation. For the meaning of all the other regions and curves, see the caption of Fig. 6.
We conclude that the description of neutron star matter based on the microscopic model of nuclear matter considered here shows a behavior of the speed of sound compatible with the results of [36] and [35]: in our framework, if the parameter \(\xi\) is large enough, the speed of sound increases up to a value of \(\sim 0.4c-0.45c\) at \(\sim 3\rho_{0}\) and then stabilizes or decreases. If, in the future, the speed of sound is constrained and a value of the order of \(0.4c\) is obtained in the center of a NS, the present work shows that we do not necessarily need exotic degrees of freedom or a deconfinement phase transition to interpret this value.
### Effect of pQCD constraints
In our Bayesian inference, no constraints from pQCD have been included. Our framework is not valid at densities as high as the ones explored by pQCD; however, indirect constraints may be imposed. It has been shown in [36; 73] that pQCD constraints have a finite effect at densities found inside NSs, and in [73] a set of constraints on the pressure, chemical potential and baryonic density was derived using information from the thermodynamic potentials together with causality and stability conditions. In this subsection we discuss the compatibility of our different EOS sets with the pQCD constraints deduced in [73].
In Fig. 15 we plot, for three values of the QCD scale \(X\), the pressure versus energy density including the pQCD constraints on these quantities, respectively from left to right, for set 0, set 0 with hyperons, set 1 and set 3. We also identify the constrained region for selected baryonic densities, up to 8 times saturation density, taking for this quantity the reference value \(n_{s}=0.16\) fm\({}^{-3}\). This density is above the central density of the maximum mass star of all our sets. Set 1 has the largest central densities, and at 90% CI these are below 7.2\(n_{s}\). At 5\(n_{s}\) almost all models satisfy pQCD. However, at 8\(n_{s}\) some models fail the constraints, in particular some models with a small \(\xi\), depending also on the value of the QCD scale \(X\). It is interesting that all models of set 3 (large values of \(\xi\)) satisfy the pQCD constraints independently of the scale. Also, the set that includes hyperons essentially satisfies the pQCD constraints. In the future, these constraints could be imposed in the Bayesian inference, but we should point out that pQCD calculations still have uncertainties; for instance, they are performed for zero quark masses. In a similar way, the uncertainties associated with chEFT are not given with a well-defined confidence interval, and, therefore, the constraints deduced in [73] should be considered with care. At high densities, \(X=4\) imposes the strongest constraints. Models of Set 1 (with the smaller \(\xi\)) are the ones that fail the constraints at 8\(n_{s}\) most frequently. Considering the constraint \(X=1\) in set 1, out of the total 21037 models, 618 do not satisfy the pQCD constraints. The latter models have larger maximum masses (\(2.307-2.512\,M_{\odot}\) at 90% CI, in contrast with \(2.07-2.51M_{\odot}\) for the models that satisfy the constraints). The absolute maximum mass is \(\sim 2.75M_{\odot}\) for the excluded models and \(\sim 2.5M_{\odot}\) for the others.
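The green and red classification of Fig. 15 amounts to a simple membership test of each model's \((e,\,p)\) point against the pQCD-allowed region at a given density. The following minimal Python sketch illustrates this filtering step; the polygon vertices and the sample points are placeholders only, since the actual allowed regions must be taken from Ref. [73].

```python
import numpy as np
from matplotlib.path import Path

# Hypothetical polygon enclosing the pQCD-allowed (e, p) region at n = 8 n_s,
# in MeV/fm^3 (placeholder vertices; the real region comes from Ref. [73]).
allowed_region = Path([(1400, 200), (2600, 200), (2600, 1200), (1400, 1200)])

def satisfies_pqcd(e_8ns, p_8ns):
    """True if the EOS point (energy density, pressure) at 8 n_s lies
    inside the pQCD-allowed region."""
    return allowed_region.contains_point((e_8ns, p_8ns))

# Classify a few sampled EOS models (the green/red dots of Fig. 15);
# the (e, p) values below are invented for illustration.
models = np.array([[2000.0, 600.0], [1500.0, 150.0]])
print([satisfies_pqcd(e, p) for e, p in models])   # e.g. [True, False]
```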
## VI Conclusions
In the present study, we have analyzed the nuclear matter and NS properties obtained within a RMF description of nuclear matter. We have considered a RMF model that includes mesonic non-linear self-interaction and mixed interaction terms, as in the models discussed in [11; 14; 17; 18; 75]. A Bayesian inference analysis was performed, considering flat distributions for the priors of the model parameters, and imposing a small number of nuclear matter properties and the 2\(M_{\odot}\) observational constraint.
Presently, nuclear matter properties at saturation are reasonably well constrained; however, at high densities there is still too little information to constrain nuclear models. In the RMF model used in our study, the non-linear \(\omega^{4}\) term plays a special role in establishing the high-density behavior of the EOS. We have, therefore, considered three different scenarios by imposing different constraints on the coupling \(\xi\) of the \(\omega^{4}\) term.
One of the main conclusions is that the strength of the \(\omega^{4}\) term controls the magnitude of the speed of sound in the center of the star: a larger coupling originates a smaller speed of sound in the center. However, a smaller speed of sound also indicates a softer EOS at high densities. The two solar mass constraint in this model with a large coupling \(\xi\) is only satisfied if the EOS is stiff at low and intermediate densities, and, therefore, gives rise to larger 1.4\(M_{\odot}\) NS radii. At 90% CI we have obtained for the 1.4\(M_{\odot}\) star the radius \(11.99<R_{1.4}<12.66\) km for \(\xi<0.015\), which increases to \(12.44<R_{1.4}<13.29\) km if \(\xi>0.015\) is considered.
It is interesting to verify that for set 3 (\(\xi>0.015\)) the speed of sound has a non-monotonic behavior: it attains a maximum around 4\(\rho_{0}\) and decreases for larger densities. In [36; 76], the authors study the behavior of the speed of sound at high density, extrapolating the equation of state using a Gaussian process EoS description. They condition the EOS on astrophysical observations, or on both astrophysical observations and pQCD, and verify that the pQCD conditioning gives rise to a decrease of the speed of sound above \(\sim 3\ \rho_{0}\), after a steep rise until this density. Notice that this is precisely the density at which the speeds of sound of the three sets cross in Fig. 12. The softer the EOS above that density, the stiffer it is below this reference density, and the other way around. The decrease of the speed of sound with the onset of hyperons, as discussed in Sec. V.1 and in [77], occurs below \(3\rho_{0}\), but for values of \(c_{s}^{2}\) of the same order of magnitude, \(\lesssim 0.4c^{2}\). The probability distributions for sets 2 and 3 and for set 0 with hyperons in Fig. 12 are compatible with the results of [36] when pQCD constraints are imposed. If in the future the speed of sound is constrained and a value of the order of 0.4\(c\) is obtained in the center of a NS, the present study shows that it is not necessary to include exotic degrees of freedom or a deconfinement phase transition to interpret this value.
All observational constraints existing presently, from NICER, from LVC, or from the measurement of NS masses above two solar masses, can be satisfied within the RMF model discussed. Notice that the GW170817 tidal deformability constraint is well satisfied by the present model. The maximum mass attained is \(\sim 2.75\)\(M_{\odot}\) and was obtained for \(\xi<0.004\), i.e. for an almost zero \(\omega^{4}\) term. For a finite \(\xi>0.015\), the maximum mass obtained is \(\sim 2.3M_{\odot}\).
Another important nuclear matter property affected indirectly by the \(\omega^{4}\) term is the symmetry energy. It was discussed that a larger \(\omega^{4}\) term gives rise to a larger \(\varrho\)-field and, therefore, to a smaller proton-neutron asymmetry. A direct effect is the onset of nucleonic direct Urca processes at lower densities, and, therefore, at smaller NS masses.
We have also confirmed the anti-correlation, obtained in [65] with a set of EOS built using the speed-of-sound method, between the radius of the maximum-mass configuration and the corresponding central baryonic density. We have shown that both a linear and a quadratic relation give rise to a similar chi-square fit.
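Comparing the two relations amounts to a standard least-squares fit; a minimal sketch is given below, where the \((\varrho_{c},\,R_{max})\) pairs are hypothetical placeholders standing in for the actual posterior samples.

```python
import numpy as np

# Hypothetical (central density, radius) pairs for maximum-mass configurations;
# the real values come from the posterior sets discussed in the text.
rho_c = np.array([5.5, 5.9, 6.3, 6.8, 7.2])        # central density, in n_s
R_max = np.array([12.1, 11.8, 11.5, 11.1, 10.8])   # radius of M_max star, km

for deg in (1, 2):                                 # linear and quadratic fits
    coeffs = np.polyfit(rho_c, R_max, deg)
    chi2 = np.sum((R_max - np.polyval(coeffs, rho_c))**2)
    print(deg, coeffs, chi2)                       # similar chi^2 for both
```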
It is interesting to establish a comparison with the results of a similar Bayesian inference analysis carried out for a different family of RMF models in [6], where a model with density-dependent couplings was considered. The high-density behavior of the EOS in our approach is defined by the non-linear meson terms included in the Lagrangian density, which are absent in the formulation with density-dependent couplings. Comparing the outputs of both studies, we find that the conclusions drawn in [6] do not differ much from the results obtained with set 2. Set 1 predicts larger maximum masses and speeds of sound than the ones obtained in [6]. On the other hand, set 3 predicts larger radii for the canonical NS and smaller central speeds of sound, clearly showing a different high-density behavior.
In [77], the authors undertook a Bayesian inference considering the possibility that hyperons nucleate inside NS. In that study, the authors concluded that the joint effect of the presence of hyperons and the two solar mass constraint was the prediction of larger radii for intermediate-mass NS. This is a conclusion similar to the one drawn with set 3: the \(\omega^{4}\) term softens the EOS, in a way equivalent to the onset of hyperons, and, as a consequence, the EOS has to be stiffer at intermediate densities, giving rise to larger radii.
Figure 15: We display the values of energy density (e) and pressure (p) for four different sets: Set 0, Set 0 with hyperons, Set 1, and Set 3 (from left to right columns). In addition, we apply the robust equation of state constraints from Ref. [73], which ensure stability, causality, and thermodynamic consistency. The regions enclosed by the solid blue lines indicate the values of energy density and pressure allowed by the pQCD constraints for the baryon number densities \(n=2\), 3, 5, and 8\(n_{s}\) (where \(n_{s}\)=0.16 fm\({}^{-3}\)). In contrast, dotted blue lines represent excluded regions, where not all pQCD conditions are met. We show the results for the renormalization scale parameter X [74] equal to 1, 2, and 4, in order from top to bottom row. The green and red dots represent, respectively, the models in our sets that satisfy and do not satisfy the pQCD constraints.
We have also studied the onset of hyperons within the present framework. The two solar mass constraint restricts the parameter \(\xi\) to quite small values. On average, the NS radius of a 1.4\(M_{\odot}\) star increases, and the speed of sound has a steep drop around 2\(\rho_{0}\) followed by a moderate growth for larger baryonic densities, keeping inside the range constrained by pQCD [36].
It has been shown in [73] that pQCD imposes constraints at densities that can be as low as \(\sim 2n_{s}\). We have verified whether the different EoS sets generated satisfy the constraints deduced in [73] and concluded: a) the constraints are satisfied for any QCD scale \(X\in[1,4]\) if \(\xi>0.015\) is used; b) the set with hyperons, with any value of \(\xi\), satisfies the constraints almost completely, except for a very few models if \(X=1\) is chosen; c) set 1, with \(\xi<0.004\), is the one with the largest number of models that do not satisfy the pQCD constraints (e.g. \(\sim 3\%\) if \(X=1\)). For \(X=1\), the absolute maximum mass of the set 1 models drops from \(\sim 2.75M_{\odot}\) to \(\sim 2.5M_{\odot}\) for models that satisfy pQCD.
In [32] the authors performed a Bayesian inference analysis to constrain the EOS using as framework an RMF model similar to the one considered in the present study, taking, however, \(\xi=0\) and \(\Lambda_{\omega}=0\), and using only observations to constrain the parameters. They tested several different priors and the possibility of the \(\Lambda\)-hyperon onset. They generally obtained larger radii for a 1.4\(M_{\odot}\) star, possibly because they take \(\Lambda_{\omega}=0\). As a consequence, they also get quite large values of the symmetry energy slope at saturation, except when the saturation symmetry energy takes values below 20 MeV. Besides, in [32] smaller maximum masses were obtained. This property is connected to the nuclear effective mass in this kind of model: the most probable effective masses obtained are generally above 0.7 of the nucleon mass, and, as shown in [67] for the model used in [32], the larger the effective mass, the smaller the maximum mass configuration. In the model applied in our study, this correlation does not exist because of the presence of the \(\omega^{4}\) term. In [30], the authors also take astrophysical observations as the constraining power of a Bayesian inference whose underlying framework is the same as used in our study. In that study, the nuclear physics constraints are minimal and are mainly included by choosing a narrower prior that takes into account some nuclear physics prior knowledge. It is very interesting to see that observations favor a large \(\xi\) parameter, and, as a consequence, a speed of sound squared of the order of 0.4\(c^{2}\) in the center of massive stars.
In the Supplemental Material, we present a few selected models for NSs with maximum masses 2.0, 2.2, 2.4, 2.6, and 2.75 M\({}_{\odot}\) (the extreme one), namely BMPF_most_HESS, BMPF220, BMPF240, BMPF260, and BMPF275, respectively. Their model parameters, together with their NMP and NS properties, are given, respectively, in Tables 2 and 3 of the Supplemental Material.
## Acknowledgments
This work was partially supported by national funds from FCT (Fundacao para a Ciencia e a Tecnologia, I.P, Portugal) under Projects No. UIDP/04564/2020, No. UIDB/04564/2020 and 2022.06460.PTDC. MBA, one of the authors, would like to thank the FCT for its support through the Ph.D. grant number 2022.11685.BD. The authors acknowledge the Laboratory for Advanced Computing at the University of Coimbra for providing HPC resources that have contributed to the research results reported within this paper, URL: [https://www.uc.pt/lca](https://www.uc.pt/lca).
## Data availability
The final posterior of the model parameters, the equation of states, and the solutions for the star properties obtained with all the sets can be obtained from the link ([http://e.pc.cd/mootalK](http://e.pc.cd/mootalK)).
|
2304.12937 | Improved Formula for the Multi-Section of the Linear Three-Term Recurrence Sequence | The standard formula for the multi-section of the general linear three-term recurrence relation is simplified in terms of Chebyshev S-polynomials. | Gary Detlefs, Wolfdieter Lang | 2023-04-14T10:55:51Z | http://arxiv.org/abs/2304.12937v2 |
###### Abstract
The standard formula for the multi-section of the general linear three-term recurrence relation is simplified in terms of Chebyshev S-polynomials.
April 2023
**Improved Formula for the Multi-Section of the Linear Three-Term Recurrence Sequence**
**Gary Detlefs 1and Wolfdieter Lang 2**
Footnote 1: [email protected]
Footnote 2: [email protected], [http://www.itp.kit.edu/~wl](http://www.itp.kit.edu/~wl)
## 1 Introduction
The \(m\)-section (multi- or modular section) of an integer sequence consists of a set of \(m\) sequences which carry as indices the equivalence classes modulo \(m\).
The general decomposition of the ordinary generating function (o.g.f) of the sequence into the \(m\) o.g.fs of the members of the set of \(m\)-sections is given in terms of a special \(m\times m\) Vandermonde matrix. The inverse of this matrix gives the o.g.fs of these members in terms of the o.g.f. of the sequence. The computation which brings these \(m\) fractions into one, either by hand (tedious) or by computer, does not give insight into the structure of the final rational fraction.
For the general sequence satisfying a linear three-term recurrence relation (called the Horadam sequence) it is shown that the result for the o.g.fs of the \(m\)-section sequences can be given in terms of Chebyshev \(S\)- and \(R\)-polynomials (the \(R\)-polynomials are the monic \(T\)-polynomials, given by a difference of two \(S\)-polynomials).
This is achieved by a proposal for the \(m\)-section of the Horadam sequence: first a conjecture for the first element of this section by one of the authors (G. D.), then a generalization to all elements, later proved by the second author.
The first section summarizes the standard treatment of the \(m\)-section of a sequence and the o.g.fs. The second section is a reminder of some elementary properties of the Horadam sequence. In the third section the conjectures for the \(m\)-section of this sequence are formulated, and the last section gives the proof of these conjectures.
The proof uses as a lemma a (known) alternative bisection of the Chebyshev \(S\)-polynomials (not the one obtained from the improved \(m\,=\,2\) case).
## 2 Multi-Section of a sequence
This section is a reminder of the standard treatment of the \(m-\)section of a sequence.
The ordinary generating function (o.g.f) \(G(m,l,x)\,=\,\sum_{n=0}^{\infty}a(m\,n+l)\,x^{n}\) of the \(l\)th part of the \(m\)-section of a sequence \(\{a(n)\}_{n\geq 0}\) with o.g.f. \(G(x)\,=\,\sum_{n=0}^{\infty}a(n)\,x^{n}\), for integer \(m\,\geq\,2\) and \(l\,\in\,\{0,\,1,\,...,\,m-1\,\}\), satisfies
\[G(x)\,=\,\sum_{l=0}^{m-1}G(m,l,x^{m})\,x^{l}\,. \tag{1}\]
For the solution of \(G(m,l,x)\) for given \(G(x)\) one uses the roots of the polynomial \(x^{m}\,-\,1\), that is \(w(m,\,k)\,=\,e^{2\pi i k/m}\), for \(k\,\in\,\{0,\,1,\,...,\,m-1\,\}\), and considers the inhomogeneous system of \(m\) equations,
for \(k\,\in\,\{0,\,1,\,...,\,m-1\,\}\), for the \(m\) unknowns \(\{G(m,l,x^{m})\}_{l=0}^{m-1}\), using a Vandermonde matrix \(V_{m}(x)\) with elements
\[[V_{m}(x)]_{k,l}\,=\,(w(m,\,k)\,x)^{l}, \tag{2}\]
as
\[\sum_{l=0}^{m-1}[V_{m}(x)]_{k,l}\,G(m,l,x^{m})\,=\,G(w(m,\,k)\,x) \tag{3}\]
Note that \((w(m,\,k)\,x)^{m}\,=\,x^{m}\) has been used.
The inverse of a general Vandermonde matrix is known, see, _e.g._, [5]; for the present case its elements become
\[[V_{m}^{-1}(x)]_{l,j}\,=\,N(m,l,j,x)/DN(m,j,x), \tag{4}\]
with denominator
\[DN(m,j,x)\,=\,x^{m-1}\,\prod_{0\,\leq\,i\,\neq\,j\,\leq\,m-1}\,(w(m,\,i)\,-\,w(m,\,j)), \tag{5}\]
and numerator
\[N(m,l,j,x)\,=\,(-1)^{l}\,x^{m-1-l}\,\sum_{n=1}^{\#Ch(m,l,j)}\,\prod_{k=1}^{m- 1-l}(Ch(m,l,j)[n])[k], \tag{6}\]
where the list of lists (order respected, and the \(k\)th elements of a list \(L\) is denoted by \(L[k]\))
\[Ch(m,l,j)\,=\,choose(P(m,\,j),m-1-l), \tag{7}\]
with the list
\[P(m,\,j)\,=\,[w(m,\,0),...,w(m,\,j-1),w(m,\,j+1),\,...,\,w(m,\,m-1)]. \tag{8}\]
The length of list \(Ch(m,l,j)\) is \(\#Ch(m,l,j)\,=\,{m-1\choose l}\) and the length of the lists of \(Ch(m,l,j)\) is \(m-1-l\) with \(\#P(m,\,j)\,=\,m-1\).
Thus, using new arguments \(x\to x^{1/m}\), one obtains, for \(l\,\in\,\{0,\,1,\,...,\,m-1\,\}\)
\[G(m,l,x)\,=\,\sum_{j=0}^{m-1}[V_{m}^{-1}(x^{1/m})]_{l,j}\,G(w(m,j)\,x^{1/m}). \tag{9}\]
**Example 1: m = 3**
With \(w(3,\,0)\,=\,1,\,w(3,\,1)\,=\,w=\frac{1}{2}(-1\,+\,\sqrt{3}\,i)\) and \(w(3,2)\,=\,\overline{w}\,=\,-\frac{1}{2}(1\,+\,\sqrt{3}\,i)\) one finds
\([V_{3}^{-1}(x)]_{1,2}\,=\,w/(3\,x)\), because \(DN(3,2,x)\,{=}\,x^{2}\,(1\,-\overline{w})\,(w\,-\,\overline{w})\,=\,x^{2}\,\frac{1}{2}\,(3\,+\,i\,\sqrt{3})\,i\,\sqrt{3}\,=\,3\,w\,x^{2}\), and from \(P(3,2)=[1,\,w]\) and \(Ch(3,1,2)\,=\,[[1],\,[w]]\) one obtains \(N(3,1,2,x)\,=\,(-1)^{1}\,x\,(1\,+\,w)=x\,\overline{w}\). Indeed, \([V_{3}^{-1}(x)]_{1,2}\,=\,x\,\overline{w}/(3\,w\,x^{2})\,=\,w/(3\,x)\), due to \(\overline{w}^{2}\,=\,w\).
\[V_{3}^{-1}(x)\,=\,\frac{1}{3}\,\left(\begin{array}{ccc}1&1&1\\ 1/x&\overline{w}/x&w/x\\ 1/x^{2}&w/x^{2}&\overline{w}/x^{2}\end{array}\right)\,. \tag{10}\]
Therefore the standard trisection of \(G(x)\) is
\[G(3,0,x) = \frac{1}{3}\,\left(G(x^{1/3})\,+\,G(w\,x^{1/3})\,+\,G(\overline{w }\,x^{1/3})\right), \tag{11}\] \[G(3,1,x) = \frac{1}{3\,x^{1/3}}\,\left(G(x^{1/3})\,+\,\overline{w}\,G(w\,x^{1/3}) \,+\,w\,G(\overline{w}\,x^{1/3})\right),\] (12) \[G(3,2,x) = \frac{1}{3\,x^{2/3}}\,\left(G(x^{1/3})\,+\,w\,G(w\,x^{1/3})\,+\, \overline{w}\,G(\overline{w}\,x^{1/3})\right)\,. \tag{13}\]
This should then be simplified for a given \(G(x)\) by finding the rational function \(P(x)/Q(x)\), which can become tedious in the general \(m\)-section case (the computer will help).
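As a small numerical illustration of this machinery (ours, not part of the original treatment), the following Python sketch evaluates the trisection formulas (11)-(13) for the Fibonacci o.g.f. \(G(x)\,=\,x/(1\,-\,x\,-\,x^{2})\) at a sample point and compares the result with the direct definition \(G(3,l,x)\,=\,\sum_{n}F(3\,n+l)\,x^{n}\).

```python
import numpy as np

def G(x):                         # o.g.f. of the Fibonacci numbers
    return x / (1 - x - x**2)

m, x = 3, 0.02
w = np.exp(2j * np.pi / m)        # primitive m-th root of unity
u = x**(1 / m)                    # x^(1/3)
# Standard trisection, Eqs. (11)-(13):
G3 = [sum(w**(-l * k) * G(w**k * u) for k in range(m)) / (m * u**l)
      for l in range(m)]
# Direct definition G(3, l, x) = sum_n F(3n + l) x^n (truncated series):
F = [0, 1]
for _ in range(40):
    F.append(F[-1] + F[-2])
direct = [sum(F[m * n + l] * x**n for n in range(12)) for l in range(m)]
print(np.allclose(G3, direct))    # -> True
```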
The topic of this paper is to give for the general linear three-term recurrence relation the coefficients of these polynomials \(P\) and \(Q\) in terms of well known polynomials which are functions of the signature of this recurrence.
## 3 General linear three term recurrence
This section is a review of basic formulas for the considered recurrence relation.
The sequence \(\{H(p,q;r,s;n)\}_{n=0}^{\infty}\) satisfies the following linear three-term (also called second-order) recurrence relation of signature \((r,\,s)\), with integer numbers \(r\) and \(s\), both non-vanishing, and initial conditions (seeds or inputs) \((p,\,q)\), with integer numbers \(p\) and \(q\). Only integer sequences are considered. In the following, these domains for \(p,\,q,\,r,\,s\) will not be repeated in the formulas.
The letter \(H\) is used because this sequence has been studied by A. F. Horadam in many publications. See _e.g._, [2], [3],[4], and also [9].
\[H(p,q;r,s;\,n) = r\,H(p,q;r,s;\,n-1)\,+\,s\,H(p,q;r,s;\,n-2),\,\,\mbox{for}\,\,n\, \geq\,2,\,\mbox{and} \tag{14}\] \[H(p,q;r,s;\,0) = p,\,\,\,H(p,q;r,s;\,1)\,=\,q. \tag{15}\]
It is sufficient to consider the seeds \((p,\,q)\,=\,(0,\,1)\), naming the sequence \(\{H01(r,s;\,n)\}_{n=0}^{\infty}\), because
\[H(p,q;r,s;\,n)\,=\,q\,H01(r,s;\,n)\,+\,p\,s\,H01(r,s;\,n-1). \tag{16}\]
Also \(H01(r,s;-1)\,=\,1/s\) and \(H01(r,s;-2)\,=\,-r/s^{2}\) will be used.
One can also extend this sequence to all negative integer \(n\), by
\[H01(r,s;\,n)\,=\,-(-s)^{n}\,H01(r,s;\,-n), \tag{17}\]
which implies the result for negative indices for sequence \(H\).
The Binet-de Moivre formula is
\[H01(r,s;n)\,=\,\frac{\lambda(r,s)^{n}\,-\,(-s/\lambda(r,s))^{n}}{\lambda(r,s) \,-\,(-s/\lambda(r,s))},\,\,\mbox{where}\,\,\lambda(r,\,s)\,=\,\frac{1}{2}\, \left(r\,-\,\sqrt{r^{2}\,+\,4\,s}\,\right)\,. \tag{18}\]
The transfer matrix, also called \({\bf Q}\) matrix, for the \((r\,s)\) recurrence relation is
\[{\bf Q}\,=\,\left(\begin{array}{cc}r&s\\ 1&0\end{array}\right)\,. \tag{19}\]
The powers of this \(2\times 2\) matrix with trace \(Tr\,{\bf Q}\,=\,r\) and determinant \(\mbox{\it Det}\,{\bf Q}\,=\,-s\,\neq\,0\), can be found with the help of the Cayley-Hamilton Theorem in terms of Chebyshev \(S\)-polynomials by
\[{\bf Q}^{n}(r,s)\,=\,(\sqrt{-s})^{n}\,\left[S\left(n,\,\frac{r}{\sqrt{-s}} \right)\,{\bf 1}\,+\,S\left(n-1,\,\frac{r}{\sqrt{-s}}\right)\,\frac{1}{\sqrt{-s}} \left({\bf Q}(r,s)\,-\,r\,{\bf 1}\right)\right]\,. \tag{20}\]
For the Chebyshev S-polynomials see OEIS [7] A049310 for their coefficients, their properties, and references, _e.g._, [1], [8]. OEIS \(A\)-number links will henceforth be used without citation.
\[S(n,\,x)\,:=\,H(1,x;x,-1;\,n),\,\mbox{for}\,\,n\,\geq\,0. \tag{21}\]
For negative \(n\) one finds \(S(-1,\,x)\,=\,0\), and \(S(n,\,x)\,=\,-S(-n-2,\,x)\), for \(n\,\leq\,-2\).
This produces the matrix
\[{\bf Q}^{n}(r,\,s)\,=\,(\sqrt{-s})^{n}\,\left(\begin{array}{cc}S\left(n,\, \frac{r}{\sqrt{-s}}\right)&\frac{s}{\sqrt{-s}}\,S\left(n-1,\,\frac{r}{\sqrt{-s }}\right)\\ \frac{1}{\sqrt{-s}}\,S\left(n-1,\,\frac{r}{\sqrt{-s}}\right)&-S\left(n-2,\, \frac{r}{\sqrt{-s}}\right)\end{array}\right)\,. \tag{22}\]
The (generalized) Chebyshev \(T\)-polynomials are defined from the trace as
\[T\left(n,\,\frac{r}{2\,\sqrt{-s}}\right)\,:=\,\frac{1}{2}\,Tr\,{\bf Q}^{n}(r,\,s )\,=\,\frac{1}{2}\,\left(S\left(n,\,\frac{r}{\sqrt{-s}}\right)-S\left(n-2,\, \frac{r}{\sqrt{-s}}\right)\right). \tag{23}\]
For \((r,s)\,=\,(x,-1)\) these are the usual Chebyshev \(T\)-polynomials: \(T(n,\,x/2)\,=\,\frac{1}{2}\,(S(n,x)-S(n\!-\!2,\,x))\). Later \(R(n,\,x)\,=\,2\,T(n,\,x/2)\) will be used.
Because \(Det\,{\bf Q}(r,\,s)\,=\,-s\) one has \(Det\,{\bf Q}^{n}(r,\,s)\,=\,(-s)^{n}\), by the product theorem for determinants, and this leads to the Cassini-Simson identity in \(n\) and \((r,s)\) (with \(n\to n+1\))
\[S(n,\,y)^{2}\,-\,S(n-1,\,y)\,S(n+1,\,y)\,=\,1, \tag{24}\]
where \(r\) and \(s\) only enter _via_ \(y\,=\,y(r,\,s)\,=\,\frac{r}{\sqrt{-s}}\).
A further reduction of the \(H01\) sequence, important for the main part of this paper, is possible in terms of the usual Chebyshev \(S\)-polynomials by
\[H01(r,s;n)\,=\,(\sqrt{-s}\,)^{n-1}\,S\left(n-1,\,\frac{r}{\sqrt{-s}}\right)\,. \tag{25}\]
This follows from comparing the recurrence and the seeds.
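Both the Cassini-Simson identity (24) and the reduction (25) are easy to verify numerically. The following short Python sketch (our illustration) implements \(S(n,\,y)\) via its recurrence, including the negative-index extension, and checks both relations for the Fibonacci case \((r,\,s)\,=\,(1,\,1)\).

```python
import cmath

def S(n, y):
    """Chebyshev S-polynomial value, S(n,y) = H(1,y; y,-1; n), any integer n."""
    if n == -1:
        return 0
    if n <= -2:
        return -S(-n - 2, y)
    a, b = 0, 1                   # S(-1, y), S(0, y)
    for _ in range(n):
        a, b = b, y * b - a       # S(k+1, y) = y S(k, y) - S(k-1, y)
    return b

# Cassini-Simson identity, Eq. (24):
y0 = 0.7
print(all(abs(S(n, y0)**2 - S(n - 1, y0) * S(n + 1, y0) - 1) < 1e-12
          for n in range(15)))                        # -> True

# Reduction (25) for (r, s) = (1, 1), where H01 is the Fibonacci sequence:
r, s = 1, 1
sq = cmath.sqrt(-s)                                   # sqrt(-1) = i
F = [0, 1]
for _ in range(15):
    F.append(F[-1] + F[-2])
print(all(abs(sq**(n - 1) * S(n - 1, r / sq) - F[n]) < 1e-9
          for n in range(1, 12)))                     # -> True
```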
The ordinary generating functions (o.g.f) of \(\{H01(r,s;n\}_{n=0}^{\infty}\) is
\[GH01(r,s;x)\,=\,\frac{x}{1\,-\,r\,x\,-\,s\,x^{2}}\,. \tag{26}\]
The o.g.f. of \(\{H(p,q;r,s;n\}_{n=0}^{\infty}\) in terms of \(GH01(r,s;x)\) is
\[GH(p,q;r,s;x) = p+(q\,+\,p\,s\,x)\,GH01(r,s;x), \tag{27}\] \[= \frac{p\,-\,(p\,r\,-\,q)\,x}{1\,-\,r\,x\,-\,s\,x^{2}}\,. \tag{28}\]
The o.g.f. of \(\{S(n,\,y\}_{n=0}^{\infty}\) is
\[GS(x)\,=\,\frac{1}{1\,-\,y\,x\,+\,x^{2}}\,. \tag{29}\]
## 4 Conjecture for improved formulas for the m-section of the linear three-term recurrence sequences
This section contains conjectures for simplified formulas for the \(m\)-section of the special sequences \(H\), \(H01\) and \(S\). In the next section these conjectures will be proved.
One of the authors (G. D.) heuristically found a formula for the sequence \(\{H(p,q;r,s;\,m\,n)\}_{n=0}^{\infty}\), for \(m\,\geq\,0\), that identifies it as an \(H\) sequence with different input \((p,\,q^{\prime})\) and signature \((r^{\prime},\,s^{\prime})\). See his comment in A034807, where \(p,\ q,\,r,\,s\) are denoted as \(a,\,b,\,c,\,d\), respectively.
The second author generalized this conjecture to the \(m\)-section of the sequence \(H\) and its o.g.f.s. He also proved a conjecture for the sequence \(H01\) which implies the one for \(H\). In the next section the proof will be given for the conjecture for the \(m\)-section of the Chebyshev \(S\)-polynomials and the o.g.f.s, which will lead to the other two conjectures.
**Conjecture for \(H\)**
For \(n\,\geq\,0\), \(m\,\geq\,(1),2\) and \(l\,\in\,\{0,\,1,\,...,\,m-1\,\}\)
\[H(p,q;r,s;\,m\,n+l)\,=\,H(H(p,q;r,s;\,l),H(p,q;r,s;\,m+l);SUM(r,s;\,m),-(-s)^{ m};\,n)\,. \tag{30}\]
with
\[SUM(r,s;\,m)\,=\,r^{m}\,\sum_{k=0}^{\left\lfloor\frac{m}{2}\right\rfloor}\,\frac{m}{m-k}\,{m-k\choose k}\,\left(\frac{s}{r^{2}}\right)^{k}\,=\,(\sqrt{-s}\,)^{m}\,R\left(m,\,\frac{r}{\sqrt{-s}}\,\right). \tag{31}\]
**Example 2: Fibonacci trisection**
For \(m\,=\,3\) and \((r,\,s)\,=\,(1,\,1)\) one finds \(SUM(1,1;\,3)\,=\,4\) and \(-(-s)^{m}\,=\,1\), hence \(F(3\,n)\,=\,H(0,2;4,1;\,n)\).
The other parts of the trisection \(F(3\,n\,+\,1)\,=\,\)A033887\((n)\) and \(F(3\,n\,+\,2)\,=\,\)A015448\((n+1)\), for \(n\,\geq\,0\),follow similarly, and the results are
\[F(3\,n) = H(0,2;4,1;\,n)\,=\,2\,H01(4,1;\,n)\,=\,2\,i^{n-1}\,S(n-1,-4\,i), \tag{38}\] \[F(3\,n\,+\,1) = H(1,3;4,1;\,n)\,=\,H01(4,1;\,n+1)\,-\,H01(4,1;\,n)\] (39) \[=\,i^{n}\,(S(n,\,-4\,i)\,+\,i\,S(n-1,\,-4\,i)),\] \[F(3\,n\,+\,2) = H(1,5;4,1;\,n)\,=\,H01(4,1;\,n+1)\,+\,H01(4,1;\,n)\] (40) \[=\,i^{n}\,(S(n,\,-4\,i)\,-\,i\,S(n-1,\,-4\,i))\,.\]
The above conjecture for \(H\) is equivalent to the one for its o.g.f.
\[GHml(p,q;r,s;m,l;\,x)\,:=\,\sum_{n=0}^{\infty}\,H(p,q;r,s;\,m\,n\,+\,l)\,x^{n}\,. \tag{41}\]
**Conjecture for GHml**
\[GHml(p,q;r,s;m,l;\,x)\,=\,\frac{H(p,q;r,s;\,l)\,-\,(H(p,q;r,s;\,l)\,SUM(r,s;\, m)\,-\,H(p,q;r,s;\,m+l))\,\,x}{1\,-\,SUM(r,s;\,m)\,x\,+\,(-s)^{m}\,x^{2}}\,. \tag{42}\]
**Proof:** This equivalence of conjectures is clear from the o.g.f. of the sequence \(H\) given in Eq. 28. One has just to insert the conjectured values for the inputs and the signature from Eq. 30. \(\Box\)
**Example 3: O.g.f. Fibonacci trisection**
For \(m\,=\,3\), \((p,\,q)\,=\,(0,\,1)\) and \((r,\,s)\,=\,(1,\,1)\) the denominator of \(GF3l(x)\,:=\,\sum_{n=0}^{\infty}\,F(3\,n+l)\,x^{n}\) is \(1\,-\,(-i\,R(3,\,1/i))\,x\,+\,(-1)^{3}\,x^{2}\,=\,1\,-\,4\,x\,-\,x^{2}\), for \(l\,=\,0,\,1\) and \(2\). The numerators are \(F(3)\,x\,=\,2\,x\), \(1\,-\,(1\cdot 4\,-\,3)\,x\,=\,1\,-\,x\), and \(1\,-\,(1\cdot 4\,-\,5)\,x\,=\,1\,+\,x\), for these \(l\) values, respectively.
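A quick series check of the \(l\,=\,1\) case (our illustration): the Taylor coefficients of \((1\,-\,x)/(1\,-\,4\,x\,-\,x^{2})\) must reproduce \(F(3\,n\,+\,1)\).

```python
F = [0, 1]
for _ in range(40):
    F.append(F[-1] + F[-2])

c = [1, 3]                        # c(0) = 1; c(1) = 4*c(0) - 1 from numerator 1 - x
for n in range(2, 10):
    c.append(4 * c[-1] + c[-2])   # recursion from the denominator 1 - 4x - x^2
print(all(c[n] == F[3 * n + 1] for n in range(10)))   # -> True
```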
The conjecture for the \(m\)-section of \(H\) implies the one for \(H01\), and the corresponding o.g.fs are obtained by setting \((p,\,q)\,=\,(0,\,1)\) and then rewriting \(H\) in terms of \(H01\) using Eq. 16 with the new parameters. In Example 2 this second step has been used for \(m\,=\,3\) and \((r,\,s)\,=\,(1,\,1)\).
**Conjecture for H01**
\[H01(r,s;\,m\,n\,+\,l)\,=\,q^{\prime}\,H01(r^{\prime},s^{\prime};\,n)\,+\,p^{\prime}\,s^{\prime}\,H01(r^{\prime},s^{\prime};\,n-1)\,,\mbox{ where }\]
\[p^{\prime}\,=\,p^{\prime}(r,s;\,l)\,=\,H01(r,s;\,l),\,\,q^{\prime}\,=\,q^{\prime}(r,s;\,m\,+\,l)\,=\,H01(r,s;\,m+l),\]
\[r^{\prime}\,=\,r^{\prime}(r,s;\,m)\,=\,(\sqrt{-s}\,)^{m}\,R(m,\,r/\sqrt{-s}\, ),\,\,\,\,s^{\prime}\,=\,s^{\prime}(s,\,m)\,=\,-(-s)^{m}. \tag{43}\]
The part \(l\,=\,0\) simplifies to
\[H01(r,s;\,m\,n)\,=\,H01(r,s;\,m)\,H01((\sqrt{-s}\,)^{m}\,R(m,\,r/\sqrt{-s}\, ),\,-(-s)^{m};\,n)\,. \tag{44}\]
**Conjecture for GH01ml**
The conjecture for the o.g.f. \(GH01ml(r,s;m,l;\,x)\,:=\,\sum_{n=0}^{\infty}\,H01(r,s;m\,n+l)\,x^{n}\) is obtained from \(GHml(x)\) in Eq. 42, and is given with \(y\,=\,y(r,\,s)/\sqrt{-s}\) as
\[GH01ml(r,s;m,l;\,x)\,=\,\frac{H01(r,s;\,l)\,-\,((\sqrt{-s}\,)^{m}\,H01(r,s;\, l)\,R(m,\,y)\,-\,H01(r,s;\,m\,+\,l))\,x}{1\,-\,(\sqrt{-s}\,)^{m}\,R(m,\,y)\,x\,+\,(-s)^{m} \,x^{2}}\,. \tag{45}\]
Because the sequences \(H\) and \(H01\) are determined by the Chebyshev polynomials \(\{S(n,\,y\,=\,r/\sqrt{-s}\,)\}\) the conjecture for \(S(m\,n\,+\,l,\,r/\sqrt{-s}\,)\) is fundamental.
**Conjecture for S**
For \(n\,\geq\,0\), \(m\,\geq\,(1),2\) and \(l\,\in\,\{0,\,1,\,...,\,m-1\,\}\):
\[S(m\,n\,+\,l,\,y) = c(s,\,m)^{n-1}\,\left\{S(m\,+\,l,\,y)\,S(n-1,\,c(s,\,m)\,R(m,\,y))\right. \tag{46}\] \[\left.\qquad\qquad-\,c(s,\,m)\,S(l,\,y)\,S(n-2,\,c(s,\,m)\,R(m,\,y))\right\},\]
with \(y\,=\,r/\sqrt{-s}\), \(c(s,\,m)\,:=\,(\sqrt{-s})^{m}/\sqrt{(-s)^{m}}\), \(S(-2,\,y)\,=\,-1\), and \(S(-1,\,y)\,=\,0\).
The part \(l\,=\,0\) simplifies, using \(c(s,\,m)^{2}\,=\,1\), the recurrence relation of \(S\) and then the definition of \(R\), to
\[S(m\,n,\,r/\sqrt{-s})\,=\,(c(s,\,m))^{n}\,\{S(n,\,c(s,\,m)\,R(m,\,r/\sqrt{-s}\,))\,+\,c(s,\,m)\,S(m-2,\,r/\sqrt{-s}\,)\,S(n-1,\,c(s,\,m)\,R(m,\,r/\sqrt{-s}\,))\}. \tag{47}\]
The following proof that this conjecture is equivalent to the conjecture for \(H01\) uses \(y\,=\,r/\sqrt{-s}\) and Eq. 25.
**Proof of the equivalence between the conjectures H01 and S**
With \(y\,=\,r/\sqrt{-s}\) and Eq. 25, \(S(m\,n\,+\,l,\,y)\,=\,(1/\sqrt{-s})^{\,m\,n\,+\,l}\,H01(r,s;m\,n\,+\,l\,+\,1)\). With the conjecture for \(H01\) from above this becomes, in terms of \(S\) and again using Eq. 25,
\[(\sqrt{-s}\,)^{m\,n\,+\,l}\,S(m\,n\,+\,l,\,y) = \widehat{q^{\prime}}\,(\sqrt{-s^{\prime}}\,)^{n-1}\,S(n-1,r^{ \prime}/\sqrt{-s^{\prime}}\,) \tag{48}\] \[+\,\widehat{p^{\prime}}\,s^{\prime}\,(\sqrt{-s^{\prime}}\,)^{n-2 }\,S(n-2,r^{\prime}/\sqrt{-s^{\prime}}\,)\,,\]
with \(r^{\prime}\), \(s^{\prime}\), \(p^{\prime}\) and \(q^{\prime}\) from Eq. 43, where \(q^{\prime}\) and \(p^{\prime}\) are written in terms of \(S\) as \(\widehat{q^{\prime}}\,=\,(\sqrt{-s}\,)^{m+l}\,S(m+l,\,y)\) and \(\widehat{p^{\prime}}\,=\,(\sqrt{-s}\,)^{l}\,S(l,\,y)\). Also \(r^{\prime}/\sqrt{-s^{\prime}}\,=\,c(s,m)\,R(m,\,y)\).
Dividing both sides by \((\sqrt{-s}\,)^{\,m\,+\,l}\,(\sqrt{(-s)^{m}}\,)^{\,n-1}\) produces
\[c(s,m)^{n-1}\,S(m\,n\,+\,l,\,y) = \{S(m\,+\,l,\,y)\,S(n-1,\,c(s,m)\,R(m,\,y)) \tag{49}\] \[-\,(1/c(s,m))\,S(l,\,y)\,S(n-2,\,c(s,m)\,R(m,\,y))\}\,.\]
Because \(c(s,m)^{2}\,=\,1\) one replaces \(1/c(s,\,m)\) by \(c(s,\,m)\), giving the final result. \(\Box\)
Note that \(c(s,\,m)\) takes only values from \(\{+1,\,-1\}\): \(c(-1,\,m)\,=\,1\), for \(m\,\geq\,1\), and \(\{c(1,\,m)\}_{m\,\geq\,1}\,=\,\mbox{repeat}\,\{1,\,-1,\,-1,\,1\}\,=\,\)A087960 with offset 1.
**Example 4: Trisection of Chebyshev S-polynomials**
\(m\,=\,3\), \(r\,=\,y\), \(s\,=\,-1\). Note that \(y\) is now an indeterminate.
**l = 0** : \(S(m\,n,\,y)\,=\,S(n,\,R(3,\,y))\,+\,y\,S(n-1,\,R(3,\,y))\), with \(R(3,\,y)\,=\,y\,(y^{2}\,-\,3)\).
**l = 1** : \(S(m\,n\,+\,1,\,y)\,=\,S(4,\,y)\,S(n-1,\,R(3,\,y))-y\,S(n-2,R(3,\,y))\), with \(S(4,\,y)\,=\,1\,-\,3\,y^{2}\,+\,y^{4}\).
**l = 2** : \(S(m\,n\,+\,2,\,y)\,=\,S(2,\,y)\,(R(3,\,y)\,S(n-1,\,R(3,\,y))-S(n-2,\,R(3,\,y)))\,=\,S(2,\,y)\,S(n,\,R(3,\,y))\), because \(S(5,\,y)\,=\,S(2,\,y)\,R(3,\,y)\), and \(S(2,\,y)\,=\,y^{2}\,-\,1\).
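These identities can also be confirmed symbolically, e.g., with SymPy, using \(S(k,\,y)\,=\,U_{k}(y/2)\) in terms of the built-in Chebyshev \(U\)-polynomials. The sketch below (our illustration) checks them at the sample index \(n\,=\,8\); the choice of \(n\) is arbitrary.

```python
import sympy as sp

y = sp.symbols('y')
n = 8                                    # sample index for the check

def S(k, t):
    # S(k,t) = U_k(t/2), with S(-1) = 0 and S(-2) = -1 as in the text
    if k == -1:
        return sp.Integer(0)
    if k <= -2:
        return -S(-k - 2, t)
    return sp.expand(sp.chebyshevu(k, t / 2))

R3 = sp.expand(y * (y**2 - 3))           # R(3, y)
checks = [
    sp.simplify(S(3*n,     y) - (S(n, R3) + y * S(n - 1, R3))),
    sp.simplify(S(3*n + 1, y) - (S(4, y) * S(n - 1, R3) - y * S(n - 2, R3))),
    sp.simplify(S(3*n + 2, y) - S(2, y) * S(n, R3)),
]
print(checks)                            # -> [0, 0, 0]
```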
The conjecture for the o.g.f.\(\,GSml(r,\,s;m,l;\,x)\,:=\,\sum_{n=0}^{\infty}S(m\,n\,+\,l,\,y\,=\,r/\sqrt{-s}\,)\,x^{n}\) is obtained from the one for \(GHml(x)\) given above.
**Conjecture for GSml**
With \(y\,=\,\frac{r}{\sqrt{-s}}\) :
\[GSml(r,s;m,l;\,x) = \frac{1}{(\sqrt{-s}\,)^{l}}\,GH01ml\left(r,s;m,l\,+\,1;\,\frac{x} {(\sqrt{-s}\,)^{m}}\right), \tag{50}\] \[= \frac{S(l,\,y)\,-\,(S(l,\,y)\,R(m,\,y)\,-\,S(m\,+\,l,\,y))\,x}{1\, -\,R(m,\,y)\,x\,+\,x^{2}}\,.\]
Note that the advantage of working with the o.g.fs instead of the sequences is that the \((r,\,s)\) dependence appears only in \(y\) (unlike in Eq. 46, where it also enters via \(c(s,\,m)\)).
**Exercise**
In order to appreciate these formulas one should compare them with the standard computation according to _Section 2_. Done either by hand or by computer, the result will not be expressed in terms of Chebyshev polynomials.
**Proof of the equivalence between GSml and GH01ml**
This uses the relation between \(S(n,\,y)\) and \(H01(r,s;\,n+1)\) obtained from Eq. 25, for \(n\,\rightarrow\,m\,n\,+\,l\). This leads to the relation between the o.g.fs. Then in Eq. 45 the \(H01\) sequences are rewritten in terms of \(S\), with \(y\,=\,r/\sqrt{-s}\). \(\Box\)
**Example 5: O.g.f.s for the trisection of Chebyshev S polynomials**
\(m=3,\,\,r\,=\,y,\,s\,=\,-1\). Note that \(y\) is now an indeterminate.
\({\bf l}={\bf 0}:\,\,GS30(y,\,x)\,=\,(1-(R(3,\,y)\,-\,S(3,\,y))\,x)/(1\,-\,R(3,\,y) \,x\,+\,x^{2})\), With \(R(3,\,y)\) from above in Example 4, and \(S(3,\,y)\,=\,y\,R(2,\,x)\,=\,y\,(y^{2}\,-\,2)\) one obtains \(R(3,\,y)\,-\,S(3,\,y)\,=-y\), hence \(GS30(y,\,x)\,=\,(1\,+\,y\,x)/(1\,-\,y\,(y^{2}\,-\,3)\,x\,+\,x^{2})\).
\({\bf l}={\bf 1}\): In the numerator appears \(y\,R(3,\,y)\,-\,S(4,\,y)\,=\,-1\). Hence
\(GS31(y,\,x)\,=\,(y\,+\,x)/(1\,-\,y\,(y^{2}\,-\,3)\,x\,+\,x^{2})\).
\({\bf l}={\bf 2}\): In the numerator appears \(S(2,\,y)\,R(3,\,y)\,-\,S(5,\,y)\,=\,0\) (see Example 4). Hence
\(GS32(y,\,x)\,=\,(y^{2}\,-\,1)/(1\,-\,y\,(y^{2}\,-\,3)\,x\,+\,x^{2})\).
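The o.g.fs can again be checked by series expansion. The following SymPy sketch (our illustration) verifies the \(l\,=\,2\) case, i.e., that the coefficient of \(x^{n}\) in \(GS32(y,\,x)\) equals \(S(3\,n\,+\,2,\,y)\).

```python
import sympy as sp

x, y = sp.symbols('x y')
R3 = y * (y**2 - 3)
GS32 = (y**2 - 1) / (1 - R3 * x + x**2)
ser = sp.series(GS32, x, 0, 4).removeO()
print(all(sp.simplify(ser.coeff(x, n) - sp.chebyshevu(3 * n + 2, y / 2)) == 0
          for n in range(4)))            # -> True
```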
## 5 Proof of the conjectures
The proof is given for the conjectured o.g.fs, equivalent to the conjectures for the corresponding sequences. Here the proof for the conjecture of the o.g.f. of the sequence \(S\), _i.e._, \(GSml\) of Eq. 50, is given, which is equivalent to the o.g.f. of sequence \(H01\), _i.e._, \(GH01ml\) of Eq. 45.
The conjecture for the o.g.f. of the sequence \(H\), _i.e._, \(GHml\) of Eq. 42, follows from the conjecture of \(GSml\) by
\[GHml(p,q;r,s;m,l;\,x) = q\,(\sqrt{-s}\,)^{l-1}\,GSml\left(r,s;m,l-1;\,(\sqrt{-s}\,)^{m}\,x\right)\] \[+\,p\,s\,(\sqrt{-s}\,)^{l-2}\,GSml\left(r,s;m,l-2;\,(\sqrt{-s}\,)^{m}\,x\right)\,.\]
Note that for \(m\,\geq\,2\) and \(l\,=\,0\) and \(1\) the sequences \(H\), \(H01\) and \(S\) appear also with negative indices \(n\,=\,-1\) and \(-2\), namely \(H01(r,s;-1)\,=\,1/s\), \(S(-2,\,x)\,=\,-1\) and \(S(-1,\,x)\,=\,0\).
This \(GHml\) formula coincides with the original one of Eq. 42, after sequence \(H\) is replaced by sequence \(H01\), and then by sequence \(S\).
**Theorem:**_The conjecture for the o.g.f. \(GSml\) of \(\{S(m\,n+l,\,r/\sqrt{-s}\,)\}_{n=0}^{\infty}\) is true._
**Proof:**
One proves that the o.g.f. \(GS(y,\,x)\,=\,1/(1\,-\,y\,x\,+\,x^{2})\) for the Chebyshev polynomials \(\{S(n,\,y)\}_{n=0}^{\infty}\), with \(y\,=\,r/\sqrt{-s}\), satisfies the \(m-\)section formula according to Eq. 1 in terms of the conjectured part \(l\) o.g.fs \(GSml\) from Eq. 49. This can be rewritten, by bringing the identical (\(l\)-independent) denominators of \(GSml\) to the left hind side, and the denominator of \(GS(y,\,x)\) to the right-hand side as
\[LHS(m,l;\,y,\,x) := \,\,1\,-\,R(m,\,y)\,x^{m}\,+\,x^{2\,m},\] \[RHS(m,l;\,y,\,x) := \,\,(1\,-\,y\,x\,+\,x^{2})\,\sum_{l=0}^{m-1}\,x^{l}\,N(m,l;\,y,\,x^ {m}),\] \[\mbox{with} \,\,\,N(m,l;\,y,\,x^{m})\,=\,S(l,\,y)\,-\,(S(l,\,y)\,R(m,\,y)\,- \,S(m+l,\,y))\,x^{m}\,. \tag{51}\]
Remember that by working with o.g.fs instead of sequences the \((r,\,s)\) dependence appears only in \(y\). Therefore the proof will be given for the indeterminate \(y\).
Because the Vandermonde matrix has an inverse (see Eq. 4) the proof will automatically hold also for \(GSml\) in terms of \(GS\) like in Eq. 9.
All powers of \(x\) will be compared on both sides in order to prove that \(LHS\,=\,RHS\).
In \(RHS\) all powers \(x^{0}\), \(x^{1}\),..., \(x^{m}\),..., \(x^{2\,m}\), \(x^{2\,m+1}\) appear. In \(LHS\) only \(x^{0}\), \(x^{m}\), \(x^{2\,m}\) are present.
It will turn out that the proof for the two highest powers \(x^{2\,m+1}\) and \(x^{2\,m}\) differs from the one for the other powers. Usually the recurrence of the Chebyshev \(S\) polynomials will show directly that \(RHS-LHS\,=\,0\) but for the two highest powers one has to use results from the bisection of these polynomials.
For the other powers the contribution of the \(R\) terms in the numerator \(N\) of \(GSml\) will be considered separately from the remainder (pure \(S\) terms). For the powers \(x^{m}\) to \(x^{2\,m-1}\) it will turn out that the \(R\) terms are multiplied by factors which vanish because of the recurrence of the \(S\)-polynomials (the structure of \(R\) will thus be irrelevant).
One starts with the two highest powers.
For \(x^{2\,m+1}\) only the \(RHS\) is present, namely \(-(S(m-1,\,y)\,R(m,\,y)\,-\,S(2\,m\,-\,1,\,y))\). It vanishes if
\[S(2\,m\,-\,1,\,y)\,=\,S(m-1,\,y)\,R(m,\,y)\,. \tag{52}\]
For \(x^{2\,m}\) the \(RHS\) becomes \((-y)\,\{-(S(m-1,\,y)\,R(m,\,y)\,-\,S(2\,m\,-\,1,\,y))\}\,+\,(+1)\,\{-(S(m-2,\,y)\,R(m,\,y)\,-\,S(2\,m\,-\,2,\,y))\}\), and \(LHS\,=\,1\). The first term vanishes if the \(x^{2\,m+1}\) power contribution vanishes, and then for this \(x\)-power \(RHS\,-\,LHS\,=\,0\) if
\[S(2\,(m-1),\,y)\,=\,1\,+\,S(m-2,\,y)\,R(m,\,y)\,. \tag{53}\]
**Lemma 2**_: Eqs. (52) and (53) are satisfied for all \(m\,\geq\,0\,\)._
**Proof:**
These two equations are found in [6], written for Chebyshev \(T\) and \(U\)-polynomials.
Using a proof of the standard bisection will not help here. The proof is done by induction on \(m\) on both equations simultaneously, employing the Cassini-Simson identity from Eq. 24.
For \(m\,=\,0\) the first equation is fulfilled because \(S(-1,\,y)\,=\,0\), and the second one because \(S(-2,\,y)\,=\,-1\) and \(R(0,\,y)\,=\,2\).
Assume that both equations hold for \(m^{\prime}\,=\,1,\,2,\,...,\,m\). First, Eq. 53 will be proved for \(m\,\rightarrow\,m+1\). Multiplying Eq. 52 by \(y\) and subtracting Eq. 53 yields, after using recurrence relations,
\[S(2\,m,\,y)\,=\,S(m,\,y)\,R(m,\,y)\,-\,1\,. \tag{54}\]
For Eq. 53 one wants to prove \(S(2\,m,\,y)\,=\,1\,+\,S(m-1,\,y)\,R(m+1,\,y)\), _i.e._, \(0\,=\,1\,+\,S(m-1,\,y)\,R(m+1,\,y)-(S(m,\,y)\,R(m,\,y)\,-\,1)\), _i.e._, \(0\,=\,2\,-\,S(m,\,y)\,R(m,\,y)\,+\,S(m-1,\,y)\,R(m+1,\,y)\). Replacing \(R\) in terms of \(S\), using \(S(m\,-\,1,\,y)\,S(m\,+\,1,\,y)\,=\,S(m,\,y)^{2}\,-\,1\) (Cassini-Simson) leads to a cancellation of \(S(m,\,y)^{2}\), leaving \(0\,=\,1\,-\,S(m\,-\,1,\,y)^{2}\,+\,S(m,\,y)\,S(m\,-\,2,\,y)\), which is again a Cassini-Simson identity. Thus Eq. 53 is proved.
For Eq. 52 one wants to prove \(S(2\,m\,+\,1,\,y)\,=\,S(m,\,y)\,R(m\,+\,1,\,y)\). By recurrence \(S(2\,m\,+\,1,\,y)\,=\,y\,S(2\,m,\,y)\,-\,S(2\,m\,-\,1,\,y)\) which becomes, with the induction assumptions Eq. 54 and Eq. 52 (for \(m\)), \(S(2\,m+1,\,y)\,=\,-y+y\,S(m,\,y)\,R(m,\,y)-S(m-1,\,\,y)\,R(m,\,y)\). This is by recurrence \(S(2\,m+1,\,y)\,=\,-y\,+\,S(m\,+\,1,\,y)\,R(m,\,y)\). One wants now to prove \(S(2\,m\,+\,1,\,y)\,=\,S(m,\,y)\,R(m+1,\,y)\,=\,-y\,+\,S(m\,+\,1,\,y)\,R(m,\,y)\). Replacing \(R\) in terms of \(S\) gives, after cancellation of \(S(m,\,y)\,S(m\,+\,1,\,y)\), \(S(m\,+\,1,\,y)\,S(m-2,\,y)-S(m,\,y)\,S(m-1,\,y)\,=\,-y\). Replacing \(S(m-2,\,y)\) by \(y\,S(m-1,\,y)-S(m,\,y)\), and then \(S(m\,+\,1,\,y)\,S(m\,-\,1,\,y)\,=\,-1\,+\,S(m,\,y)^{2}\) (Cassini-Simson) leads to \(0\,=\,S(m,\,y)\,(y\,S(m,\,y)\,-\,S(m-1,\,y))-S(m+1,\,y)\,S(m,\,y)\), which holds because of the recurrence relation. \(\Box\)
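Independently of the induction, Eqs. (52) and (53) are easily verified symbolically for small \(m\); a short SymPy sketch (our illustration) reads:

```python
import sympy as sp

y = sp.symbols('y')

def S(k, t):                        # S(k,t) = U_k(t/2); S(-1) = 0, S(-2) = -1
    if k == -1:
        return sp.Integer(0)
    if k <= -2:
        return -S(-k - 2, t)
    return sp.chebyshevu(k, t / 2)

def R(k, t):                        # R(k,t) = S(k,t) - S(k-2,t), cf. Eq. (23)
    return S(k, t) - S(k - 2, t)

print(all(sp.simplify(S(2*m - 1, y) - S(m - 1, y) * R(m, y)) == 0 and
          sp.simplify(S(2*m - 2, y) - 1 - S(m - 2, y) * R(m, y)) == 0
          for m in range(9)))       # -> True for m = 0, ..., 8
```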
Consider the \(R\) term contributions in the numerator \(N\) together with the pre-factor with powers \(x^{i}\) for \(i\,\in\,\{0,\,1,\,2\}\). The powers are \(x^{i+l+m}\), for \(m\,\geq\,2\) and \(l\,\in\,\{0,\,1,\,...,\,m-1\,\}\), but only for \(i\,+\,l\,+\,m\,\leq\,2\,m\,-\,1\), because the powers \(x^{2\,m}\) and \(x^{2\,m+1}\) have just been treated separately. \(R\) terms appear only for the exponents \(e\,=\,2\,m\,-\,1\),..., \(m\).
In \(RHS\) the general \(l\) term contributes, for \(i\,=\,0,\,1,\,2\), with \(\widehat{R}(l)\,:=\,-R(m,\,y)\,S(l,\,y)\) to \(x^{m\,+\,l\,+\,i}\). Therefore one considers an \(m\) rows and three columns array \(AR\) with entries \(AR(l,\,i)\,=\,\widehat{R}(l)\,x^{m+l+i}\). The anti-diagonals of \(AR\) have identical powers of \(x\).
In the last row, \(l\,=\,m-1\), the last two entries, and in the row \(l\,=\,m-2\) the last entry are not relevant because the powers are \(x^{2m}\) and \(x^{2m+1}\), already treated.
A special case is \(AR(0,\,0)\,=\,\widehat{R}(0)\,x^{m}\) because here the \(LHS\) has entry \(-R(m,\,y)\,x^{m}\), and to this power also the later discussed array \(AS\) contributes with three terms \(S(m,\,y)\), \(-y\,S(m-\,1,\,y)\) and \(S(m-2,\,y)\) from the second terms of \(AS(0,\,0)\), the first terms of \(AS(m-1,\,1)\) and \(AS(m-2,\,2)\), respectively. Then \(RHS\,-\,LHS\) vanishes for \(x^{m}\) because \(-R(m,\,y)\,+\,(S(m,\,y)\,-\,y\,S(m-1,\,y)\,+\,S(m-2,\,y))\,-\,(-R(m,\,y))\,=\,0\), by cancellation of \(R\) and the recurrence relation of the \(S\) polynomials. As announced, the \(R\) term is irrelevant, only the \(S\) recurrence enters.
Another case where the length of the anti-diagonal in \(AR\) is not 3 is \(AR(1,\,0)\,=\,\widehat{R}(1)\) and \(AR(0,\,1)\,=\,\widehat{R}(0)\). This produces for the power \(x^{m+1}\) only \(-R(m,\,y)(\,S(1,\,y)\,-\,y\,S(0,\,y))\,=\,0\), because \(S(-1,\,y)\,=\,0\). There will be no contribution from the array \(AS\) for this power.
All other anti-diagonals of \(AR\), _i.e._, those with powers \(x^{m\,+\,j}\), for \(j\,\in\,\{2,\,3,\,...,\,m-1\,\}\), have length 3, and their contributions \(R(m,\,y)\,(S(j,\,y)\,-\,y\,S(j\,-\,1,\,y)\,+\,S(j\,-\,2,\,y))\) vanish because of the recurrence for the \(S\)-polynomials, independently of \(R\).
For the pure \(S\) terms of \(RHS\) one considers the companion array \(AS\) with two term entries \(AS(l,\,i)\,=\,S(l,\,y)\,x^{l+i}\,+\,S(m\,+\,l,\,y)\,x^{m\,+\,l\,+\,i}\).
The anti-diagonals with length 3, _i.e._, those for powers \(\{x^{j},\,x^{m\,+\,j}\}\), for \(j\,\in\,\{2,\,3,\,...,\,m-1\,\}\) give vanishing contributions to \(RHS\) because for both \(S\)-polynomial terms their recurrence relation appears. The first term of the entry \(AS(0,\,0)\,=\,S(0,\,y)\,x^{0}\,+\,S(m,\,y)\,x^{m}\) is identical with \(1\cdot x^{0}\) of \(LHS\), and the second term has been needed above (among others) in the proof of the vanishing of the contribution to \(x^{m}\).
The second anti-diagonal, \(AS(1,\,0)\) and \(AS(0,\,1)\), contributes to \(x^{1}\) with \(S(1,\,y)\,-\,y\,S(0,\,y)\,=\,0\,\,(S(-1,\,y)\,=\,0)\), and to \(x^{m+1}\) with \(S(m\,+\,1,\,y)\,-\,y\,S(m,\,y)\), which is needed, together with the first term of the last entry \(AS(m-1,\,2)\,=\,S(m\,-\,1,y)\,x^{m+1}\), to prove the vanishing of the contribution of \(AS\) to \(x^{m+1}\). For the corresponding vanishing of the \(AR\) contribution see above. There is no such \(LHS\) contribution.
The second term of \(AS(m-1,\,2)\,=\,S(2\,m\,-\,1,\,y)\,x^{2\,m\,+\,1}\) has been used in the treatment of the highest power above.
Finally, the second terms of the two anti-diagonal entries \(AS(m-1,\,1)\) and \(AS(m-2,\,2)\) contribute \(-y\,S(2\,m\,-\,1,\,y)\,+\,S(2\,m\,-\,2,\,y)\). They have been treated above together with the \(AR\) contribution to the second highest power above.
All \(RHS\) entries of \(AR\) and \(AS\) have been considered and shown to contribute only to the three powers \(x^{0},\,x^{m}\) and \(x^{2\,m}\), giving the \(LHS\), which ends the proof. \(\Box\)
To end this work, the results of the alternative bisection formulas Eq. 52 and Eq. 53 are given for the \(H01\) and \(H\) sequences. Their derivation is done with the help of Eqs. (25) and (16).
\[H01(r,s;\,2\,m\,+\,1) = \left(\sqrt{-s}\,\right)^{m+1}H01(r,s;\,m)\,R(m+1,\,r/\sqrt{-s}) \,+\,(-s)^{m}\,, \tag{55}\] \[= \left(-s\right)^{m}\,\left(S(m-1,\,r/\sqrt{-s})\,R(m+1,\,r/\sqrt{ -s})\,+\,1\right)\,,\] \[H01(r,s;\,2\,m) = \left(\sqrt{-s}\,\right)^{m}H01(r,s;\,m)\,R(m,\,r/\sqrt{-s})\,,\] (56) \[= \left(\sqrt{-s}\right)^{2\,m-1}S(m-1,\,r/\sqrt{-s})\,R(m,\,r/ \sqrt{-s})\,.\] \[H(p,q;r,s;\,2\,m\,+\,1) = \left(\sqrt{-s}\,\right)^{m}\,\left\{H01(r,s;\,m)\,(\sqrt{-s}\,q \,R(m\,+\,1,r/\sqrt{-s})+p\,s\,R(m,\,r/\sqrt{-s}))\right.\] (57) \[\qquad\qquad\left.+(\sqrt{-s}\,)^{m}\,q\right\}\] \[= \left(-s\right)^{m}\left\{S(m\,-\,1,\,r/\sqrt{-s})\,(q\,R(m\,+\, 1,\,r/\sqrt{-s})\,-\,\sqrt{-s}\,p\,R(m,\,r/\sqrt{-s}))\right.\] \[\qquad\qquad\left.+q\right\}\,,\] \[H(p,q;r,s;\,2\,m) = \left(\sqrt{-s}\,\right)^{m}\left\{R(m,\,r/\sqrt{-s})\,(q\,H01(r, s;\,m)\,+\,s\,p\,H01(r,s;\,m\,-\,1))\right.\] (58) \[\qquad\qquad\left.-\,(\sqrt{-s}\,)^{m}\,p\right\}\,,\] \[= \left(-s\right)^{m-1}\left\{R(m,\,r/\sqrt{-s})\,(\sqrt{-s}\,q\,S (m\,-\,1,\,r/\sqrt{-s})\,+\,s\,p\,S(m\,-\,2,\,r/\sqrt{-s}))\right.\] \[\qquad\qquad\left.+\,s\,p\right\}\,.\]
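As a final numerical cross-check (our illustration), Eq. (56) can be tested for the Fibonacci case \((r,\,s)\,=\,(1,\,1)\), where it reads \(F(2\,m)\,=\,i^{m}\,F(m)\,R(m,\,-i)\).

```python
def S(n, y):                          # Chebyshev S by recurrence, n >= -2
    if n == -1:
        return 0
    if n == -2:
        return -1
    a, b = 0, 1
    for _ in range(n):
        a, b = b, y * b - a
    return b

F = [0, 1]
for _ in range(30):
    F.append(F[-1] + F[-2])

# Eq. (56): F(2m) = i^m F(m) (S(m, -i) - S(m-2, -i))
ok = all(abs(1j**m * F[m] * (S(m, -1j) - S(m - 2, -1j)) - F[2 * m]) < 1e-6
         for m in range(1, 13))
print(ok)                             # -> True
```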
|
2301.06144 | Solitary waves and their interactions in the cylindrical Korteweg-de Vries equation | We consider approximate, exact, and numerical solutions to the cylindrical Korteweg-de Vries equation. We show that there are different types of solitary waves and obtain the dependence of their parameters on distance. Then, we study the interaction of solitary waves of different types. | Wencheng Hu, Jingli Ren, Yury Stepanyants | 2023-01-15T16:53:37Z | http://arxiv.org/abs/2301.06144v1 |
# Solitary waves and their interactions in the cylindrical Korteweg-de Vries equation
###### Abstract
We consider approximate, exact, and numerical solutions to the cylindrical Korteweg-de Vries equation. We show that there are different types of solitary waves and obtain the dependence of their parameters on distance. Then, we study the interaction of solitary waves of different types.
keywords: Nonlinear wave; Cylindrical Korteweg-de Vries equation; Soliton; Self-similar solitary wave

Footnote †: journal: Symmetry
## 1 Introduction
The study of weakly nonlinear cylindrical waves in dispersive media has a long history. In 1959 Iordansky derived the cylindrical version of the Korteweg-de Vries (cKdV) equation [15] for surface waves in a fluid. A similar equation was later derived for water and plasma waves by various authors [12; 25; 27; 29; 33; 40; 41]. Currently, the cylindrical KdV equation is one of the basic equations of contemporary mathematical physics. In application to the description of outgoing waves with axisymmetric fronts, the equation in the proper physical coordinates reads:
\[\frac{\partial u}{\partial r}+\frac{1}{c}\frac{\partial u}{\partial t}-\frac {\alpha}{c}u\frac{\partial u}{\partial t}-\frac{\beta}{2c^{5}}\frac{\partial^ {3}u}{\partial t^{3}}+\frac{u}{2r}=0, \tag{1.1}\]
where \(c\) is the speed of long linear waves for which dispersion is negligible (\(\beta=0\)), \(\alpha\) is the nonlinear coefficient, and \(\beta\) is the dispersive coefficient. Here \(r\) stands for the radial coordinate and \(t\) is time. The derivation of this equation is based on the assumption that the last three terms, which describe the effects of weak nonlinearity, dispersion, and geometric divergence, are relatively small (compared to the first two linear terms) and are of the same order of smallness. The smallness of the geometric divergence presumes that the cKdV equation is valid at large distances from the center of the polar coordinate frame, where \(r\gg\Lambda\), and \(\Lambda\) is the characteristic width of a wave perturbation. A similar equation describing incoming waves can also be derived; it differs from Eq. (1.1) only by the minus sign in front of the second term. In such a form the cKdV equation was used for the interpretation of physical experiments with plasma waves in laboratory chambers [13; 27; 33] (however, it becomes invalid when a wave approaches the origin). The importance of the cKdV equation in water wave problems is related to circular perturbations which can appear due to "point sources" produced by underwater earthquakes, volcanoes, atmospheric pressure, fallen meteorites, etc. Besides, there are many observations when quasi-cylindrical internal waves were generated due to water intrusions in certain basins (see, for example, on the Internet numerous satellite images of internal waves generated by Atlantic water intrusions in the Mediterranean Sea).
The generalized cKdV equation was derived by McMillan and Sutherland [28], who considered the generation and evolution of solitary waves by intrusive gravity currents in a two-layer fluid. Another generalised cKdV model was derived for the description of surface and internal ring waves subject to shear flows [18; 21; 22]. However, in this paper, we do not consider the influence of intrusions or shear flows, or of environmental inhomogeneity, on wave dynamics, focusing instead on the structure of solitary waves and their interactions within the standard cKdV equation.
In 1976 Dryuma discovered that the cKdV equation is completely integrable [7] and found self-similar (but singular) solutions to this equation. Non-singular self-similar solutions were found later in several papers [4; 5; 29; 30]. Approximate solutions in the form of KdV solitons with gradually varying parameters (amplitude, width, and speed) were also derived [23; 38]. As was shown in all these papers, amplitudes of outgoing waves decay as \(A(r)\sim r^{-2/3}\), and their characteristic duration increases as \(T(r)\sim r^{1/3}\). Later, exact solutions to the cKdV equation were derived by Calogero and Degasperis [3] (see also [4]), as well as by Nakamura and Chen [31]. The structure of the exact solutions constructed by these authors is mathematically very similar to N-soliton solutions of the KdV equation. Despite the numerous publications on cylindrical waves described by the cKdV equation, the structure of cylindrically diverging solitary waves has not been properly analysed in detail until now. Their role in the dynamics of initial pulse-type perturbations, as well as their interactions with each other, has not been studied either. Therefore, the main aim of this paper is to fill in this gap in the knowledge in this field.
## 2 Solitary wave solutions to the cylindrical Korteweg-de Vries equation
### Dimensionless form of the cKdV equation and connection of cKdV with the plane KdV equation
It is convenient to study solutions of the cKdV equation in the dimensionless form. To this end, we make the transformation:
\[r^{\prime}=r,\quad\tau=-(\beta/2c^{5})^{-1/3}(t-r/c),\quad v=\alpha(2c^{2}/ \beta)^{1/3}u/6 \tag{2.1}\]
and present Eq. (1.1) in the form (the symbol prime of \(r\) can be omitted):
\[\frac{\partial v}{\partial r}+6v\frac{\partial v}{\partial\tau}+\frac{ \partial^{3}v}{\partial\tau^{3}}+\frac{v}{2r}=0. \tag{2.2}\]
If we omit the last term in this equation, we obtain the classical KdV equation; one of its exact solutions in the form of a soliton is:
\[v(r,\tau)=A\,\text{sech}^{2}\frac{\tau-r/V}{T}. \tag{2.3}\]
Here \(A\) is the soliton amplitude, \(T=\sqrt{2/A}\) is its characteristic duration, and \(V=1/(2A)\) is soliton speed. (Note that in this variable the speed looks a bit unusual; it is inverse proportional to the soliton amplitude \(A\). However, in the original physical variables, the dimensional soliton speed is determined
as \(1/V_{s}=1/c-(\beta/2c^{5})^{1/3}(1/V)=1/c-(\beta/2c^{5})^{1/3}2A=1/c-\alpha A_{s}/3c\), where \(A_{s}\) is the dimensional soliton amplitude - see the transformations (2.1). This gives \(V_{s}=c/(1-\alpha A_{s}/3)\approx c(1+\alpha A_{s}/3)\), where approximation is valid for small-amplitude solitons which is in agreement with the assumption of a weak nonlinearity in the KdV equation.) Below we present an approximate and exact solutions to the cKdV equation (2.2).
There is a relationship between the ordinary KdV equation and cKdV equation established for the first time by A.A. and B.A. Lugovtsovs [26], and then found also in Refs. [2; 14]. Making the transformation:
\[\tau^{\prime}=-2\tau/r,\quad r^{\prime}=4/r^{2},\quad v^{\prime}=(v+\tau/4)/r \tag{2.4}\]
one can reduce the classic KdV equation (Eq. (2.2) without the last term on the left-hand side) to the cKdV equation (2.2). Formally, this allows us to get wide classes of exact solutions from the corresponding solutions of the KdV equation, including N-soliton solutions (some examples are presented in Refs. [14; 24]). However, all such solutions, apparently, are physically meaningless as they contain a time-dependent nonuniform background.
### Asymptotic solution of the cylindrical KdV equation
In the cylindrical case, the soliton solution (2.3) is no longer an exact solution; however, if the last term in the cKdV equation (2.2) is small compared to the nonlinear and dispersive terms, then we can assume that the structure of a pulse having the shape of the KdV soliton (2.3), given at some distance \(r_{0}\gg\Delta\equiv VT\), remains the same in the outgoing wave, whereas its amplitude and other parameters are slowly varying functions of \(r\). Therefore, the approximate solution can be presented as:
\[v(r,\tau)=A(r)\operatorname{sech}^{2}\frac{\tau-\int dr/V(r)}{T(r)}. \tag{2.5}\]
The dependence of soliton amplitude on \(r\) can be found from the equation of energy flux conservation. Multiplying Eq. (2.2) by \(v\) and integrating over \(\tau\) from minus to plus infinity, we obtain:
\[r\int\limits_{-\infty}^{+\infty}v^{2}(r,\tau)\,d\tau=const. \tag{2.6}\]
Substituting here solution (2.5) and bearing in mind the relationship between \(T\) and \(A\), we derive:
\[A(r)=A_{0}\left(r/r_{0}\right)^{-2/3},\quad T(r)=T_{0}\left(r/r_{0}\right)^{1/ 3}. \tag{2.7}\]
These are the laws of parameter variation in nonlinear outgoing waves which were obtained in the papers cited above [23; 38] and in many others (see, for example, Refs. [6; 8; 32; 35]). Both the experimental and numerical data confirm the dependences (2.7) derived in the adiabatic approximation for cylindrical solitons (see, e.g., [8; 35] and references therein). For the numerical study, we used the explicit finite-difference scheme described by Berezin [1] (see also [32]). Figure 1 illustrates a comparison of a typical cylindrical solitary wave as a function of \(\tau\) plotted on the basis of the adiabatic formulae (2.5), (2.7) and as obtained from the direct numerical solution of Eq. (2.2).
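For readers wishing to reproduce such runs, a minimal method-of-lines sketch is given below. It is our illustration, not the scheme of Berezin [1]: the grid, the domain, the periodic boundary conditions in \(\tau\), and the integrator are ad hoc choices, and the stiff dispersive term forces small steps in \(r\).

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(r, v, dtau):
    # dv/dr = -(6 v v_tau + v_tautautau + v/(2r)); central differences in tau
    v_t = (np.roll(v, -1) - np.roll(v, 1)) / (2 * dtau)
    v_ttt = (np.roll(v, -2) - 2 * np.roll(v, -1)
             + 2 * np.roll(v, 1) - np.roll(v, 2)) / (2 * dtau**3)
    return -(6 * v * v_t + v_ttt + v / (2 * r))

A0, r0, r1 = 1.0, 500.0, 600.0
tau = np.linspace(0.0, 1000.0, 4096, endpoint=False)
v0 = A0 / np.cosh((tau - 400.0) / np.sqrt(2.0 / A0))**2   # KdV soliton (2.3)
sol = solve_ivp(rhs, (r0, r1), v0, args=(tau[1] - tau[0],),
                method='RK45', rtol=1e-7, atol=1e-10)
print(sol.y[:, -1].max(), A0 * (r1 / r0)**(-2.0 / 3.0))   # numerics vs Eq. (2.7)
```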
The initial amplitude of the KdV soliton was chosen to be \(A_{0}=1\) at \(\tau=500\) (for other amplitudes, the results were very similar). After a while, at \(\tau=809.6\), the amplitude dropped to \(A(809.6)=0.625\). As one can see from Fig. 1, the shapes of the approximate and numerical solutions are not distinguishable by the naked eye. In a more detailed comparison, one can notice that a small-amplitude long tail of negative polarity forms behind the soliton in the numerical solution, as shown in Fig. 1b). The tail shape can be described in the next approximation of the asymptotic theory (see, for example, [11; 34]). The same results were obtained by Johnson [19], who also derived the analytical expression for the tail.
Figure 1: (color online). Comparison of the approximate solution (2.5), (2.7) with the numerical solution for the initial pulse in the form of a KdV soliton. Panel (a) demonstrates that the numerical solution (red line) is indistinguishable from the approximate solution (blue line). However, a small-amplitude long tail of negative polarity can be seen behind the soliton in the numerical solution when the plot is zoomed in, as shown in panel (b).
See also Appendix C in Ref. [12], where Grimshaw estimated the decay of the tail amplitude of the negative polarity as \(r^{-2/3}\).
As has been mentioned, the approximate solution is valid at large distances from the center of the polar coordinate frame, where \(r\gg\Delta\), and when the last, geometric, term is small compared to the nonlinear and dispersive terms. However, in the course of solitary wave propagation, its parameters vary, and the approximation used can become invalid. Therefore, it is of interest to estimate the validity of the approximate soliton solution (2.5), (2.7) at different distances. To this end, let us compare the last term in Eq. (2.2) with the nonlinear term on the soliton solution:
\[\frac{v}{2r}:6v\frac{\partial v}{\partial\tau}\sim\frac{T(r)}{12A(r)r}\sim\frac{T_{0}(r/r_{0})^{1/3}}{12A_{0}(r/r_{0})^{-2/3}r}=\frac{T_{0}}{12A_{0}r_{0}}=\frac{A_{0}^{-3/2}}{6\sqrt{2}\,r_{0}}. \tag{2.8}\]
From this formula one can see that the ratio of these two terms does not depend on \(r\); it remains small if it was small at the beginning when \(r=r_{0}\).
It is worth recalling that in this paper we study solitary waves within the framework of the cKdV equation when it is applicable to particular physical systems. In general, the amplitude decay of cylindrical waves can differ from the soliton amplitude dependence \(A\sim r^{-2/3}\). As is well known, amplitudes of linear waves in cylindrical systems without dispersion vary as \(A\sim r^{-1/2}\), whereas linear waves in cylindrical systems with dispersion vary as \(A\sim r^{-1}\). All these amplitude dependences for pulse-type initial perturbations were observed in experiments with electromagnetic waves in 2D lattices [6; 38]. Similar results were obtained in the numerical study of radially spreading axisymmetric intrusions and solitary waves [28].
Diverging KdV-like solitons interact in a similar manner to classical KdV solitons. Figure 2 illustrates the typical overtaking interaction of two KdV-like solitons within the framework of the cKdV equation (2.2), obtained by direct numerical modeling of this equation with the initial condition in the form of two KdV solitons of different amplitudes (\(A_{1}=0.2\); \(A_{2}=1\)).
### Exact solutions of the cKdV equation
The first nontrivial exact solutions to the cKdV equation were obtained by Calogero and Degasperis [3]. Solutions were presented in terms of the Airy function \(\text{Ai}(z)\). As was shown later by Nakamura and Chen [31], exact solutions can be presented through the Hirota transform: \(v(r,\tau)=2\partial^{2}\ln f(r,\tau)/\partial\tau^{2}\) (the logarithmic form is consistent with the self-similar limit (2.11) below). Then, the simplest solution is:
\[f(r,\tau)=1+\frac{\varepsilon\rho^{2}}{(12r)^{1/3}}\left\{\left[z(r,\tau)-z_{ 1}(r,\tau_{1})\right]\text{Ai}^{2}(z-z_{1})-\left[\text{Ai}^{\prime}(z-z_{1}) \right]^{2}\right\}, \tag{2.9}\]
where \(\varepsilon,\rho\), and \(\tau_{1}\) are some arbitrary constants, and
\[z(r,\tau)=\frac{\tau}{\left(12r\right)^{1/3}},\quad z_{1}(r,\tau_{1})=\frac{\tau _{1}}{\left(12r\right)^{1/3}}. \tag{2.10}\]
The prime in Eq. (2.9) stands for differentiation with respect to the function argument. Note that in terms of the function \(f(r,\tau)\), solution (2.9) is a typical self-similar solution on a constant pedestal. However, in the original variable \(v(r,\tau)\), the corresponding solution is more complicated: it is neither self-similar nor a traveling-wave solution. One typical exact solution is plotted in Fig. 3 for the particular parameters \(\varepsilon=-0.01\), \(\rho=1\), and \(\tau_{1}=150\). This solution represents a wave that converges toward the origin, as one can see from the right column of Fig. 3. Approaching the origin, the wavelength drastically decreases and goes to zero. However, in the vicinity of the origin the solution becomes invalid anyway because, as mentioned above, the cKdV equation is applicable only
Figure 2: (color online). The typical overtaking interaction of two KdV-like solitons in outgoing cylindrical waves.
at relatively large distances from the origin. Apparently, such solutions are of no physical interest.
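Even so, the solution (2.9)-(2.10) is instructive to evaluate numerically. The sketch below uses SciPy's Airy routines and assumes the logarithmic Hirota transform \(v=2\partial^{2}\ln f/\partial\tau^{2}\); the finite-difference step and grid are illustrative choices.

```python
import numpy as np
from scipy.special import airy

def f_hirota(r, tau, eps=-0.01, rho=1.0, tau1=150.0, kind=0):
    # f(r, tau) of Eq. (2.9); kind=0 uses Ai(z), kind=2 uses Bi(z).
    s = (12.0 * r) ** (1.0 / 3.0)
    z, z1 = tau / s, tau1 / s
    vals = airy(z - z1)                       # returns (Ai, Ai', Bi, Bi')
    w, wp = vals[kind], vals[kind + 1]
    return 1.0 + (eps * rho**2 / s) * ((z - z1) * w**2 - wp**2)

def v_field(r, tau, h=1.0e-4, **kw):
    # v = 2 d^2(ln f)/d tau^2, evaluated by a central-difference stencil.
    lp = np.log(f_hirota(r, tau + h, **kw))
    l0 = np.log(f_hirota(r, tau, **kw))
    lm = np.log(f_hirota(r, tau - h, **kw))
    return 2.0 * (lp - 2.0 * l0 + lm) / h**2

tau = np.linspace(-50.0, 250.0, 3000)
v_r50 = v_field(50.0, tau)   # cf. the r = 50 curve in the left column of Fig. 3
```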
The genuine self-similar solution in terms of function \(v(r,\tau)\) can be obtained if we set \(\varepsilon\rho^{2}\to\infty\)[16]. Then, we obtain:
\[v_{ss}(r,\tau)=\frac{2}{(12r)^{2/3}}\frac{d^{2}}{dz^{2}}\ln\Big{\{}[z(r,\tau)-z _{1}(r,\tau_{1})]\,\text{Ai}^{2}(z-z_{1})-\big{[}\text{Ai}^{\prime}(z-z_{1}) \big{]}^{2}\Big{\}}. \tag{2.11}\]
Such a solution was considered in [17] in application to the water-wave problem.
The self-similar solution to the cKdV equation can be obtained if we seek a solution in the form \(v(r,\tau)=r^{\alpha}F(\xi)\), where \(\xi=r^{\beta}\tau^{\gamma}\) (a similar approach was used in [20] for the KdV equation). Substituting this form of the solution into Eq. (2.2), we find after simple manipulation that the function \(F(\xi)\) must satisfy the ODE:
\[F^{\prime\prime\prime}+6FF^{\prime}-\frac{1}{3}zF^{\prime}+3F=0 \tag{2.12}\]
provided that \(\alpha=-2/3\), \(\beta=-1/3\), \(\gamma=1\). This agrees with the solution (2.11) if we set \(F=v_{ss}\left(12r\right)^{2/3}/2\).
Calogero and Degasperis wrote that solutions that they constructed "are in some sense the analogous of the single-soliton solutions (although they are not quite localised, having a slowly vanishing
Figure 3: (color online). The typical exact solution of the cKdV equation in terms of the Airy function Ai(\(z\)) (2.9) with the following parameters: \(\varepsilon=-0.01\), \(\rho=1\), and \(\tau_{1}=150\). In the left column, one can see the dependence of \(v(\tau)\) for two distances, \(r=50\) and \(r=100\); in the right column, the solution is presented as a function of \(r\) for two different times, \(\tau=0\) and \(\tau=20\). (Note that In the vicinity of the origin, the plot is simply cut; therefore, it looks that the solution is constant.)
wiggling tail)". The analysis of solution (2.9) shows that it describes a wave perturbation that decays in space as \(r^{-2/3}\), whereas its duration increases with distance as \(r^{1/3}\); i.e., these quantities vary in space in the same manner as the parameters (amplitude and duration) of a solitary wave in the approximate solution (2.5), (2.7). Even more complicated solutions, mathematically similar to N-soliton solutions, can be constructed, but all of them are far from real solitary waves.
Nakamura and Chen [31] found that compact pulse-type solutions can be obtained if one replaces the first-kind Airy function Ai(\(z\)) in the solution (2.9) with the second-kind Airy function Bi(\(z\)). Then, the simplest solution looks pretty much the same as the KdV soliton, at least in its leading part. As an example, we show in Fig. 4a) a comparison of solution (2.9) computed with the function Bi(\(z\)) against the
Figure 4: (color online). Exact solution of the cKdV equation in terms of the second-kind Airy function Bi(\(z\)) (2.9) with the following parameters: \(\varepsilon=10^{-4},\rho=10^{-3}\), and \(\tau_{1}=10\). Panel (a) shows the dependence of the solution on time \(\tau\) for the fixed distances, and panel (b) shows the dependence of the solution on distance \(r\) for the fixed times.
KdV soliton of the same amplitude at \(r=25\). As one can see, the leading parts of these solutions are practically the same; the only difference is in the rear parts of the solutions. Similarly good agreement was confirmed for solutions of equal amplitudes at other distances. However, in contrast to KdV-like solitons, solitary waves in the solution of Nakamura and Chen [31] are accompanied by clearly visible tails of positive polarity (cf. Fig. 1b). Solutions with the second-kind Airy function Bi(\(z\)) are also singular at \(r=0\), like solutions with the first-kind Airy function Ai(\(z\)) (see, for example, Fig. 4b). However, in solutions of this kind the leading parts far from the origin are physically meaningful, and their shapes are well approximated by KdV solitons, as shown in Fig. 4a).
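Using the `v_field` helper sketched above, the comparison in Fig. 4a) can be reproduced with the second-kind Airy function (parameters as in Fig. 4; again an illustrative sketch assuming the logarithmic transform):

```python
tau = np.linspace(-20.0, 60.0, 2000)
v_bi = v_field(25.0, tau, eps=1.0e-4, rho=1.0e-3, tau1=10.0, kind=2)
A = v_bi.max()                               # KdV soliton of equal amplitude
v_kdv = A / np.cosh((tau - tau[np.argmax(v_bi)]) / np.sqrt(2.0 / A)) ** 2
```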
Although solutions (2.9) with either first- or second-kind Airy functions are not exactly self-similar or traveling-wave solutions, we will conditionally call solution (2.9) with the second-kind Airy function Bi(\(z\)) the _self-similar soliton_ (ss-soliton). Figure 5 shows the diverging ss-soliton at different time moments. In the last frame, at \(\tau=100\), one can see a singularity at the center \(r=0\).
Figure 5: (color online). The typical cylindrically diverging self-similar soliton is described by function (2.9) with the Airy function of the second kind Bi(\(z\)) (2.9). The plot was generated for the same parameters as in Fig. 4. Here \(x\) and \(y\) are the Cartesian coordinates such that \(r^{2}=x^{2}+y^{2}\).
The "two-soliton solution" in terms of function \(f(r,\tau)\) can be presented in the form [31]:
\[f(r,\tau)=1+\varepsilon(a_{11}+a_{22})+\varepsilon^{2}\left|\begin{array}{cc}a _{11}&a_{12}\\ a_{21}&a_{22}\end{array}\right|=\left|\begin{array}{cc}1+\varepsilon a_{11}& \varepsilon a_{12}\\ \varepsilon a_{21}&1+\varepsilon a_{22}\end{array}\right|, \tag{2.13}\]
where the quantities \(a_{ij}\) are defined by the following expressions:
\[a_{ij} = \frac{\rho_{i}\rho_{j}}{(12r)^{1/3}}\frac{w_{i}(z-z_{i})\,w_{j}^{ \prime}(z-z_{j})-w_{i}^{\prime}(z-z_{i})\,w_{j}(z-z_{j})}{z_{i}-z_{j}},\quad i \neq j; \tag{2.14}\] \[a_{ii} = \frac{\rho_{i}^{2}}{(12r)^{1/3}}\left\{(z-z_{i})\,w_{i}^{2}(z-z_{ i})-\left[w_{i}^{\prime}(z-z_{i})\right]^{2}\right\},\quad i=j. \tag{2.15}\]
where \(i,j=1,2\), and \(w_{i}(z)\) is either the Airy function of the first kind Ai(\(z\)) or the Airy function of the second kind Bi(\(z\)). However, as mentioned above, solutions with the function Ai(\(z\)) do not represent pulse-type waves; therefore, we consider below only solutions with the second-kind Airy function Bi(\(z\)).
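For completeness, a direct evaluation of the two-soliton formulae is sketched below (continuing the helpers above, with \(w=\) Bi and the logarithmic transform; all parameter values are illustrative):

```python
def f_two(r, tau, eps, rhos, tau1s):
    # f of Eq. (2.13) with entries (2.14)-(2.15) and w = Bi.
    s = (12.0 * r) ** (1.0 / 3.0)
    z = tau / s
    zi = [t / s for t in tau1s]
    w  = [airy(z - zk)[2] for zk in zi]       # Bi
    wp = [airy(z - zk)[3] for zk in zi]       # Bi'
    a = [[None] * 2 for _ in range(2)]
    for i in range(2):
        for j in range(2):
            if i == j:
                a[i][j] = rhos[i] ** 2 / s * ((z - zi[i]) * w[i] ** 2 - wp[i] ** 2)
            else:
                a[i][j] = (rhos[i] * rhos[j] / s
                           * (w[i] * wp[j] - wp[i] * w[j]) / (zi[i] - zi[j]))
    # 2x2 determinant form of Eq. (2.13)
    return ((1.0 + eps * a[0][0]) * (1.0 + eps * a[1][1])
            - eps ** 2 * a[0][1] * a[1][0])

# v then follows from 2 d^2(ln f)/d tau^2, e.g. via the same stencil as before.
```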
A typical two-soliton solution described by Eqs. (2.13)-(2.15) with \(w(z)\equiv\) Bi(\(z\)) is illustrated by Fig. 6. In this figure, one can see the time dependence of function \(v(\tau)\) at three distances from the center. The interaction of two ss-solitons resembles the overtaking type interaction of KdV solitons
Figure 6: (color online). Exact two-soliton solution of the cKdV equation in terms of the second-kind Airy function Bi(\(z\)) as per Eqs. (2.13)–(2.15) with the following parameters: \(\varepsilon=10^{-4}\), \(\rho_{1}=10^{-3}\), \(\rho_{2}=10^{-6}\), \(\tau_{1}=25\), and \(\tau_{2}=-10\). To make graphics clearly visible, we multiplied function \(v(\tau)\) by 4 at \(r=400\), by 16 at \(r=2\cdot 10^{3}\), and by 25 at \(r=10^{4}\).
[10], when two peaks merge at some distance (at \(r=400\) in our figure) and then slowly separate. However, the separation lasts a very long time, and even at large distances the pulses remain coupled, as illustrated by Fig. 6.
There is also the process of fission of an initial pulse-type perturbation into ss-solitons that looks very similar to the pure soliton breakdown of a pulse in the plane KdV equation. An example of such a process is shown in Fig. 7.
The physical importance of such solutions is not clear but mathematically they are very interesting. Johnson in his paper [17] mentioned that the "choice of either Bi or Ai functions does not lead to a proper solution of the cKdV equation" but he assumed that, perhaps, there is some mileage in describing the evolution of pulse-type initial profiles in terms of such functions.
## 3 Pulse disintegration into KdV-like solitons and interaction of KdV solitons with ss-solitons
As was shown above, a KdV soliton is very robust in the cylindrical system and keeps its identity even in the process of decay due to geometrical divergence. The interaction between two KdV-like solitons is very similar to the interaction of KdV solitons in the plane case. It is natural to expect that solitons can emerge from wide initial pulses in the same manner as in the plane case. To confirm this conjecture, we conducted numerical experiments with wider initial pulses which give rise to the emergence of several solitons in the plane KdV equation. A typical example with the emergence of three solitons
Figure 7: (color online). Fission of initial pulse at \(r=10\) onto two ss-solitons within the exact solution described by Eqs. (2.13)–(2.15) with the following parameters: \(\varepsilon=10^{-4}\), \(\rho_{1}=0.1\), \(\rho_{2}=10^{-4}\), \(\tau_{1}=0\), and \(\tau_{2}=-10\). To make graphics clearly visible, we multiplied function \(v(\tau)\) by 5 at \(r=50\), by 15 at \(r=\cdot 10^{3}\), and by 45 at \(r=10^{4}\).
is shown in Fig. 8. This example corresponds to the pure soliton decay of a \(\,\mathrm{sech}^{2}\)-pulse in the plane KdV equation. We see that in the cKdV equation the same pure soliton decay occurs at the early stage of evolution; then, each soliton experiences adiabatic decay in accordance with the asymptotic formulae (2.5) and (2.7).
A similar pulse disintegration into a number of solitons was observed for pulses of positive polarity and different initial durations and amplitudes. A pure soliton disintegration was observed for the same parameters of an initial pulse as in the plane case. In general, the initial pulse breaks into solitons and a trailing dispersive wave train. Fission into solitons was also observed in a recent paper [39].
It is of interest to study also the interaction of a KdV soliton with an ss-soliton. This can be done numerically for the initial condition consisting of one KdV soliton and one ss-soliton. The result of such interaction is shown in Fig. 9.
Thus, we see that the traveling KdV-type soliton overtakes the ss-soliton and, after the interaction, both of them restore their shapes and continue moving and decaying due to the geometrical divergence. We can therefore conclude that in weakly nonlinear physical systems with small dispersion,
Figure 8: (color online). Initial pulse disintegration in the cKdV equation and emergence of KdV-like solitons. Frame a) \(r-r_{0}=0\), frame b) \(r-r_{0}=6\), frame c) \(r-r_{0}=12\).
the outgoing pulses with cylindrical fronts evolve in a similar way as in the plane KdV equation but experience amplitude decay due to the geometrical divergence.
## 4 Concluding remarks
In this paper, we have presented a detailed analysis of solitary wave solutions to the cylindrical KdV equation. It was shown that soliton-like solutions in the form of KdV solitons exist in this equation. In the process of geometrical divergence, such solitons gradually decay so that the total energy of the initial pulse is conserved, \(E=\int v^{2}r\,d\tau=\text{const}\), where the integration is carried out over \(\tau\) in the infinite limits, \(-\infty<\tau<+\infty\) [cf. Eq. (2.6)]. There are also exact solutions of the cKdV equation [31] which have pulse-type shapes (ss-solitons) and are very similar to KdV solitons of the same amplitudes. Their parameters (amplitudes and durations) vary with distance in the same manner
Figure 9: (color online). Interaction of the KdV-like soliton with the ss-soliton in outgoing cylindrical waves. The amplitude of the KdV soliton was \(A_{0}=1\) at \(r_{0}=100\). The parameters of the ss-soliton were \(\varepsilon=10^{-10}\), \(\rho=1\), \(\tau_{1}=1\). Frame a) \(r-r_{0}=160\), frame b) \(r-r_{0}=190\), frame c) \(r-r_{0}=210\), frame d) \(r-r_{0}=240\).
as in the diverging KdV-like solitons, \(A\sim r^{-2/3}\), \(T\sim r^{1/3}\). However, such solutions are not traveling waves but are closer to self-similar solutions.
A numerical study of interactions between KdV-like solitons, between ss-solitons, as well as between KdV-like and ss-solitons revealed that all such solitons are robust and, apparently, interact elastically. A general pulse-type initial perturbation of positive polarity in the course of evolution experiences a breakdown into a number of KdV-like solitons and a trailing dispersive wave train. Each of the emerged KdV-like solitons then decays individually due to the geometrical divergence.
In conclusion, we note that some asymptotic solutions to the cKdV equation were obtained in Refs. [36; 37]. Using symbolic computation, Gao and Tian [9] constructed a few self-similar solutions to the cKdV equation; some of them were mentioned in this paper, having been obtained by other authors using analytical methods. However, all these solutions are outside our current interest, as they are not of soliton type.
In perspective, we plan to study quasi-cylindrical waves within the cylindrical version of the Kadomtsev-Petviashvili equation (alias Johnson equation) [17]. The important problem to be studied is the stability of a soliton front with respect to small azimuthal perturbations and lump formations. One more problem to be studied in perspective is the dynamics of solitons within the cylindrical Gardner equation containing both quadratic and cubic nonlinearities. Such an equation is applicable to the description of internal waves in the ocean and the results obtained can be of practical interest.
_Acknowledgements._ W.H. acknowledges the financial support from China Scholarship Council (grant No. 202002425001). His study was also supported by the National Natural Science Foundation of China (grants No. 11947093 and 12204554) and the Natural Science Foundation of Henan Province of China (grant No. 222300420393). Y.S. acknowledges the financial support provided by the President Council of the Russian Federation (grant No. NSH-70.2022.1.5) for the State support of Leading Scientific Schools of the Russian Federation. The authors are grateful to K. Khusnutdinova for her helpful remarks.
|
2310.10067 | Comparative Study of Planetary Atmospheric Uncertainties and Design
Rules for Aerocapture Missions | Aerocapture uses atmospheric drag to decelerate spacecraft and achieve orbit
insertion. One of the significant risks associated with aerocapture is the
uncertainty in the atmospheric density, particularly for outer planets. The
paper performs a comparative study of the atmospheric uncertainties and
provides design rules for aerocapture missions. The atmospheres of Venus, Mars,
and Titan are well-characterized for engineering purposes. At the altitude
ranges relevant for aerocapture, the 3$\sigma$ density variation is
approximately $\pm$30%, $\pm$50%, $\pm$30% for Venus, Mars, and Titan
respectively. With no in-situ data, the atmospheres of Uranus and Neptune are
not as well characterized as the other bodies. For both Uranus and Neptune, the
GRAM suite provides a 3$\sigma$ density variation of approximately $\pm$30% for
the relevant altitude ranges which is considered an optimistic estimate. Until
in-situ data from an atmospheric probe becomes available, a more conservative
global min-max estimate is recommended to accommodate the worst-case scenario.
The study presents a graphical method for selection of the optimal entry flight
path angle when considering the atmospheric uncertainties to ensure the
on-board guidance is given the best possible initial state for targeting the
desired exit state post aerocapture. | Athul Pradeepkumar Girija | 2023-10-16T04:59:22Z | http://arxiv.org/abs/2310.10067v1 | # Comparative Study of Planetary Atmospheric Uncertainties and Design Rules for Aerocapture Missions
###### Abstract
Aerocapture uses atmospheric drag to decelerate spacecraft and achieve orbit insertion. One of the significant risks associated with aerocapture is the uncertainty in the atmospheric density, particularly for outer planets. The paper performs a comparative study of the atmospheric uncertainties and provides design rules for aerocapture missions. The atmospheres of Venus, Mars, and Titan are well-characterized for engineering purposes. At the altitude ranges relevant for aerocapture, the 3\(\sigma\) density variation is approximately \(\pm\)30%, \(\pm\)50%, \(\pm\)30% for Venus, Mars, and Titan respectively. With no in-situ data, the atmospheres of Uranus and Neptune are not as well characterized as the other bodies. For both Uranus and Neptune, the GRAM suite provides a 3\(\sigma\) density variation of approximately \(\pm\)30% for the relevant altitude ranges which is considered an optimistic estimate. Until in-situ data from an atmospheric probe becomes available, a more conservative global min-max estimate is recommended to accommodate the worst-case scenario. The study presents a graphical method for selection of the optimal entry flight path angle when considering the atmospheric uncertainties to ensure the on-board guidance is given the best possible initial state for targeting the desired exit state post aerocapture.
Planetary atmosphere, GRAM, Uncertainties, Aerocapture
## I Introduction
With the exception of Mercury, all planetary destinations in the Solar System and Saturn's moon Titan possess significant atmospheres [1]. From the hot thick Venusian CO\({}_{2}\) atmosphere to the cold icy H\({}_{2}\)-He atmospheres of Uranus and Neptune, there exists great diversity in the physical structure and chemical composition of these atmospheric layers [2]. Measurements such as the noble gas abundances and isotopic ratios in these atmospheres are critical to our understanding of the origin, formation, and evolution of the Solar System [3]. The presence of an atmosphere makes these destinations fundamentally more interesting than airless bodies, due to their ability to maintain a climate, induce weather phenomena such as storms and rainfall, and drive erosive processes which shape these bodies [4, 5]. In addition to their scientific importance, the atmospheres are also of significant engineering interest for planetary missions. All planetary entry missions to date have utilized the drag provided by the atmosphere to slow down probes and landers [6, 7]. While it has never been flown, the related technique of aerocapture, shown in Fig. 1, which uses atmospheric drag to decelerate spacecraft and achieve orbit insertion, is of great interest for future missions across the Solar System [8, 9]. One of the significant risks with aerocapture is the uncertainty in the atmospheric density, particularly for outer planets whose atmospheres have no in-situ measurements as ground truth for the engineering models [10]. The paper uses the NASA Global Reference Atmospheric Model (GRAM) to perform a comparative study of the atmospheric uncertainties and provides design rules for aerocapture missions.
Figure 1: Schematic illustration of the aerocapture maneuver.
## II II. Structure and Chemical Composition
Figure 2 shows the extent and chemical composition of the atmospheres which are of interest for aerocapture missions. The terrestrial planets Venus and Mars have well understood atmospheres that extend to about 120 km and are almost entirely composed of CO\({}_{2}\). Titan, with its low gravity, has a thick N\({}_{2}\) atmosphere that extends to nearly 1000 km above its surface. With in-situ measurements from the Huygens Atmospheric Structure Instrument (HASI), Titan's atmospheric structure and composition (CH\({}_{4}\) mass fraction) are now well understood for engineering purposes [11]. Jupiter and Saturn are not considered in this study because their extreme entry speeds and harsh aero-thermal environments make aerocapture impractical [12]. The ice giant planets Uranus and Neptune have primarily H\({}_{2}\)-He atmospheres with a small fraction of CH\({}_{4}\), which absorbs in the red and gives them their distinctive blue-green color. The only measurements of the ice giant atmospheres are remote sensing observations from the Voyager flybys, and the radio science experiments using signals that passed through the atmosphere and were then received on Earth. The lack of in-situ measurements makes their atmospheres the least well constrained, with the largest uncertainties, posing a potential risk for aerocapture. The CH\({}_{4}\) mass fraction could also influence the aerothermodynamics and add uncertainty to the heating rates encountered. Until an atmospheric probe enters the Uranus atmosphere, potentially in the 2040s, our understanding of the ice giant atmospheres will remain relatively poor, with large uncertainties.
Figure 2: Extent and chemical composition of various planetary atmospheres.
## III Effect of Atmospheric Uncertainties
Aerocapture involves entry, atmospheric flight, and exit as shown in Figure 1. The density profile encountered during the atmospheric flight greatly affects the trajectory, and hence an understanding of the expected uncertainties in the density profile is of great importance to assess the risk it poses to a future mission. If the vehicle enters too shallow or encounters an atmosphere which is less dense than the expected minimum, the spacecraft may exit the atmosphere without getting captured, as shown in Figure 3. If the vehicle enters too steep, or the density is much higher than expected, the vehicle may bleed too much speed and fail to exit the atmosphere. Both of the above are undesirable scenarios which will lead to complete loss of mission, and hence adequate margins must be provided for the guidance system against these atmospheric uncertainties, in addition to delivery error and aerodynamic uncertainties [13]. The Theoretical Corridor Width (TCW) is a useful concept which quantifies the width of the corridor, and must be large enough to accommodate the delivery and atmospheric uncertainties, and also provide sufficient safety margin for mission success even in limiting scenarios (such as the combination of a shallow entry and a thin atmosphere). NASA has developed the GRAM Suite as a unified tool to provide engineering models for all planetary atmospheres [14]. The atmospheres of Mars and Venus are well-constrained for all engineering purposes, and the atmospheric uncertainties are quite low, while the ice giant atmospheres are significantly less constrained. The remainder of the paper uses the GRAM models to perform a quantitative study of the atmospheric uncertainties at various destinations.
Figure 3: Illustration of the aerocapture theoretical corridor width.
## IV. Venus
Venus is our closest planetary neighbor, and aerocapture using its atmosphere has been shown to be feasible using both lift and drag modulation [15]. However, the large heating rates at Venus make lift modulation unattractive. Drag modulation, with its lower heating rate, is particularly attractive for small satellite orbit insertion, and has been extensively studied in recent years in the context of low-cost missions [16, 17]. Drag modulation aerocapture at Venus has been proposed for inserting independent small satellites into low-circular orbits [18, 19], small satellites as part of New Frontiers or Flagship missions [20], and a future sample return mission from the Venusian cloud layers [21]. Due to the abundance of in-situ data from the Venera and Pioneer Venus entry probes, the Venusian atmospheric density from Venus-GRAM is practically well-characterized for all engineering purposes [22]. Venus-GRAM provides the average, low, and high density (1\(\sigma\)) values. Figure 4 (left) shows the minimum, average, and maximum (3\(\sigma\)) density profiles as a function of altitude from Venus-GRAM. The shaded altitude band of 100-120 km is where most of the deceleration occurs for aerocapture at Venus, and hence is the most relevant in terms of uncertainty quantification. Note that the results in Figure 4 are only for a particular location (latitude, longitude) and time (year, month, date). However, the results are expected to provide a reasonable estimate of the expected uncertainty for any location and time of year. Figure 4 (right) shows the percentage deviation (3\(\sigma\)) from the average as a function of altitude. Figure 4 shows that in the altitude range of 100-120 km relevant to aerocapture, the expected 3\(\sigma\) density variation is approximately \(\pm\)30%. Venus aerocapture concepts should demonstrate success with these uncertainties accounted for.
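A small utility of the following kind converts the mean and 1\(\sigma\)-high profiles returned by GRAM into the 3\(\sigma\) envelopes and percent deviations plotted here. This is a sketch assuming Gaussian dispersions; the input array names are hypothetical, not GRAM variable names.

```python
import numpy as np

def three_sigma_envelope(rho_avg, rho_hi_1sigma):
    # Scale the 1-sigma departure from the mean to a 3-sigma envelope.
    d3 = 3.0 * (rho_hi_1sigma - rho_avg)
    rho_hi3 = rho_avg + d3
    rho_lo3 = np.maximum(rho_avg - d3, 0.0)   # density cannot be negative
    pct_dev = 100.0 * d3 / rho_avg            # percent deviation, cf. Fig. 4 (right)
    return rho_lo3, rho_hi3, pct_dev
```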
Figure 4: Density profiles from Venus-GRAM (left) and percent deviation from nominal (right).
## 5 V. Mars
Mars has a relatively thin atmosphere compared to the Earth, but it is nevertheless relevant for aerocapture and has been shown to provide small but still significant performance benefits [23]. The thinner atmosphere and the lower entry speed result in a relatively benign aero-thermal environment, making it an attractive destination for a low-cost aerocapture technology demonstration [24, 25]. Drag modulation at Mars has been extensively studied in recent years in the context of small low-cost planetary science missions [26]. Due to the plethora of lander and rover missions, the Martian atmosphere is also well understood, but it has relatively large seasonal variations compared to Venus, and associated uncertainties, particularly in the thinner upper atmosphere. Figure 5 (left) shows the minimum, average, and maximum (3\(\sigma\)) density profiles as a function of altitude from a Mars-GRAM run. As with Venus, the results in Figure 5 are only for a particular location and time, but are expected to be a representative estimate of the uncertainties. The shaded altitude band of 50-80 km is where most of the deceleration occurs for aerocapture at Mars. Figure 5 (right) shows the percentage deviation (3\(\sigma\)) from the average as a function of altitude. The uncertainty in the density profile increases with altitude. In the altitude range of 50-80 km relevant to aerocapture, the expected 3\(\sigma\) density variation is approximately \(\pm\)50%. Compared to Venus, the low gravity and the extended atmosphere provide a larger TCW at Mars (by a factor of 2), and hence larger atmospheric uncertainties can easily be accommodated. In particular, proposed concepts should be able to demonstrate mission success with adequate margin in two limiting scenarios: shallow entry and thin atmosphere, and thick atmosphere and steep entry.
Figure 5: Density profiles from Mars-GRAM (left) and percent deviation from nominal (right).
## VI Titan
Saturn's largest moon Titan is the only moon in our Solar System with an atmosphere, which makes it unique in several ways, such as being the only other place with surface liquids [27]. Titan's low gravity and greatly extended thick atmosphere make it the ideal destination for aerocapture, providing the largest corridor width of any destination [28]. Its small size makes it particularly difficult to insert orbiters using conventional propulsion [29, 30]. However, aerocapture is a promising alternative for future missions. Aerocapture has applications for a future Titan orbiter, following the Dragonfly mission, to perform global mapping of the Titan surface and its lakes and seas [31, 32]. Largely due to in-situ data from the Huygens lander, Titan's density profile is well constrained. Figure 6 (left) shows the minimum, average, and maximum (3\(\sigma\)) density profiles from Titan-GRAM. The shaded altitude band of 300-450 km is where most of the deceleration occurs for aerocapture at Titan. Figure 6 (right) shows the percentage deviation (3\(\sigma\)) from the average as a function of altitude. The uncertainty in the density profile increases with altitude, reaches a maximum of about 40% near 100 km above the surface, and then decreases. It is not clear whether this is an artifact of the assumptions used in the model, or indeed a real effect. In the altitude range of 300-450 km relevant to aerocapture, the expected 3\(\sigma\) density variation is approximately \(\pm\)30%, which is comparable to that at Venus. It is also worth mentioning that though the Venus and Titan atmospheres are quite different in terms of their temperature (737 K vs 94 K) and chemistry (CO\({}_{2}\) vs N\({}_{2}\)), they share several physical similarities, such as both being relatively thick, super-rotating atmospheres with the planetary body rotating slowly, and significant greenhouse warming in the lower troposphere.
Figure 6: Density profiles from Titan-GRAM (left) and percent deviation from nominal (right).
## VII Uranus and Neptune
At the far reaches of the outer Solar System, the ice giants Uranus and Neptune are the last class of planets yet to be explored using orbiter spacecraft. Their enormous heliocentric distance presents significant mission design challenges. The 2023-2032 Planetary Science Decadal Survey has identified a Uranus Orbiter and Probe (UOP) as the top priority for a Flagship mission in the next decade. Even though Uranus and Neptune are both equally compelling scientifically [33], Uranus is less demanding from a mission design perspective with propulsive insertion. Aerocapture was not considered during the Uranus mission studies [34, 35], but aerocapture has been shown to be a strongly enhancing, or even enabling, technology for ice giant missions [36]. With aerocapture, both Uranus and Neptune would be equally accessible. Recent studies have shown that aerocapture enables significantly shorter flight times to Uranus than possible with propulsive insertion [37, 38], especially with new high energy launch vehicles [39]. Similar results have been obtained for Neptune, where the performance benefits of aerocapture are even greater [40, 41]. However, the lack of any in-situ measurements presents a challenge, as the model-predicted atmospheric uncertainties cannot be verified. Figure 7 (left) shows the minimum, average, and maximum (3\(\sigma\)) density profiles. The shaded altitude band of 200-400 km above the 1-bar pressure level is where most of the deceleration occurs at Uranus [42]. Figure 7 (right) shows the percentage deviation (3\(\sigma\)) from the average as a function of altitude. In the altitude range of 200-400 km relevant to aerocapture, the expected 3\(\sigma\) density variation is approximately \(\pm\)30%. This must be taken as an "optimistic" estimate until in-situ data become available, and the actual uncertainty may be much higher.
Figure 7: Density profiles from Uranus-GRAM (left) and percent deviation from nominal (right).
Figure 8 (left) shows results from a Neptune-GRAM run. Figure 8 (right) shows the percentage deviation (\(3\sigma\)) from the average, which is similar to Uranus, likely indicating that the same uncertainty model is used for both planets. Neptune-GRAM provides a legacy FMINMAX parameter which provides global minimum (-1) and maximum (+1) bounding density profiles, shown in Figure 9. Considering the full range of FMINMAX, a more "conservative" density variation in the 200-400 km altitude range is (0.1x, 3x), where x indicates a multiplication factor. Until in-situ data from a probe become available, the more conservative estimate is recommended.
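One simple way to fold such a conservative bound into Monte Carlo trajectory dispersions is to scale a nominal profile by a factor drawn log-uniformly between the global minimum and maximum; this is an illustrative choice by the present sketch, not a GRAM feature.

```python
import numpy as np

rng = np.random.default_rng(seed=2023)

def sample_conservative_profiles(rho_nominal, n, fmin=0.1, fmax=3.0):
    # n perturbed profiles between fmin*nominal and fmax*nominal (log-uniform).
    f = np.exp(rng.uniform(np.log(fmin), np.log(fmax), size=n))
    return rho_nominal[None, :] * f[:, None]
```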
Figure 8: Density profiles from Neptune-GRAM (left) and percent deviation from nominal (right).
Figure 9: FMINMAX [-1, 0, +1] from Neptune-GRAM (left) and percent deviation (right).
## VIII Design for Atmospheric Uncertainties
The aerocapture mission design must account for the expected atmospheric uncertainties to assure the guidance scheme can successfully steer the vehicle to the desired exit state [43]. An important part of the mission design is the selection of the target entry flight path angle (EFPA) for aerocapture missions. Proper selection of the EFPA can be used to ensure the on-board guidance is given the best possible initial state for targeting the desired exit state. Figure 10 shows an example of the target EFPA selection which accounts for the atmospheric uncertainties for a Uranus mission. The blue, green, and red boxes indicate the aerocapture corridor for the minimum, average, and maximum expected density profiles. Note that the corridor changes depending on the atmospheric profile. As the atmosphere becomes thicker, the aerocapture corridor shifts to shallower angles. For example, if the atmosphere encountered is blue (minimum), then any EFPA shallower than the top boundary (overshoot limit) may result in the vehicle not getting captured. Any EFPA steeper than the bottom boundary (undershoot limit) may result in undershoot of the target orbit or failure to exit altogether. Ideally, the target EFPA is chosen such that the two limiting cases, a shallow entry into a thin atmosphere and a steep entry into a thick atmosphere, are both within the capability of the guidance with some additional margin. Graphically, this can be seen by selecting a target EFPA such that both the steep (\(-3\sigma\)) and shallow (\(+3\sigma\)) EFPA values fall within all three boxes. In the case of the Uranus example in Figure 10, the corridor even with a L/D=0.24 aeroshell and high entry speed (30 km/s) is only about 1 deg. wide, leaving little margin on either side of the min-density overshoot or the max-density undershoot limits. If the corridor width is not large enough to cover the full extent of the atmospheric uncertainties, then it is recommended to bias the target EFPA towards the steep side to reduce the high risk of overshoot, at the expense of incurring a small risk of undershoot. In addition, on-board density estimation during the descending leg of aerocapture, and using it during the apoapsis prediction phase of the guidance, is critical to ensure mission success in atmospheres with very large uncertainties [44, 45].
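The graphical selection can be automated by intersecting the three corridors and padding each side with the 3\(\sigma\) delivery error. The sketch below encodes the steep-side bias recommended above; the corridor numbers are hypothetical, and the sign convention is that EFPA is negative, so "shallower" means closer to zero.

```python
def select_target_efpa(corridors, delivery_3sigma):
    # corridors: {case: (undershoot_limit, overshoot_limit)} in deg,
    # both negative, with undershoot_limit < overshoot_limit.
    steepest_ok = max(c[0] for c in corridors.values()) + delivery_3sigma
    shallowest_ok = min(c[1] for c in corridors.values()) - delivery_3sigma
    if steepest_ok <= shallowest_ok:
        return 0.5 * (steepest_ok + shallowest_ok)   # centered target EFPA
    # Corridor too narrow for full margin: bias steep to avoid overshoot.
    return shallowest_ok

# Hypothetical corridor limits for the min/avg/max atmosphere cases:
corridors = {"min": (-12.2, -11.4), "avg": (-11.9, -11.1), "max": (-11.6, -10.8)}
print(select_target_efpa(corridors, delivery_3sigma=0.2))
```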
Figure 10: Graphical method for selection of target EFPA incorporating atmospheric uncertainties.
## IX Conclusions
The paper used the GRAM suite to perform a comparative study of the atmospheric uncertainties and provided design rules for aerocapture missions. The atmospheres of Venus, Mars, and Titan are well-characterized for engineering purposes, due to the availability of in-situ data. At the altitude ranges relevant for aerocapture, the 3\(\sigma\) density variation is approximately \(\pm\)30%, \(\pm\)50%, \(\pm\)30% for Venus, Mars, and Titan respectively. With no in-situ data, the atmospheres of Uranus and Neptune are not as well characterized. For both Uranus and Neptune, the GRAM suite provides a 3\(\sigma\) density variation of approximately \(\pm\)30% which must be considered an "optimistic" estimate. Considering the full range of FMINMAX, a more "conservative" density variation in the 200-400 km altitude range is (0.1x, 3x) where x indicates a multiplication factor. Until in-situ data from an atmospheric probe becomes available, the more conservative estimate is recommended as a worst-case scenario for mission concept studies. The study presented a graphical method for selection of the optimal EFPA when considering the atmospheric uncertainties to ensure the on-board guidance is given the best possible initial state for targeting the desired exit state.
## Data Availability
The atmospheric dataset used in the study was created using NASA Global Reference Atmospheric Model (GRAM) Suite v1.5, and is available to download as a zip file at [https://doi.org/10.13140/RG.2.28166.34880](https://doi.org/10.13140/RG.2.28166.34880). The data and the code used to make the study results will be made available by the author upon reasonable request.
|
2303.17630 | What can a detected photon with a given gravitational redshift tell us
about the maximum density of a compact star? | Far away observers can in principle bound from below the dimensionless
maximum-density parameter $\Lambda\equiv4\pi R^2\rho_{\text{max}}$ of a compact
star by measuring the gravitational redshift factor
$z\equiv\nu_{\text{e}}/\nu_{\infty}-1$ of photons that were emitted from the
{\it surface} of the star: $\Lambda\geq{3\over2}[1-(1+z)^{-2}]$ [here $R$ is
the radius of the star and $\{\nu_{\text{e}},\nu_{\infty}\}$ are respectively
the frequency of the emitted light as measured at the location of the emission
and by asymptotic observers]. However, if photons that were created somewhere
{\it inside} the star can make their way out and reach the asymptotic
observers, then the measured redshift parameter $z$ may not determine uniquely
the surface properties of the star, thus making the above bound unreliable. In
the present compact paper we prove that in these cases, in which the creation
depth of a detected photon is not known to the far away observers, the
empirically measured redshift parameter can still be used to set a (weaker)
lower bound on the dimensionless density parameter of the observed star:
$\Lambda\geq{3\over2}[1-(1+z)^{-2/3}]$. | Shahar Hod | 2023-03-30T18:00:02Z | http://arxiv.org/abs/2303.17630v1 | What can a detected photon with a given gravitational redshift tell us about the maximum density of a compact star?
###### Abstract
Far away observers can in principle bound from below the dimensionless maximum-density parameter \(\Lambda\equiv 4\pi R^{2}\rho_{\rm max}\) of a compact star by measuring the gravitational redshift factor \(z\equiv\nu_{\rm e}/\nu_{\infty}-1\) of photons that were emitted from the _surface_ of the star: \(\Lambda\geq\frac{3}{2}[1-(1+z)^{-2}]\) [here \(R\) is the radius of the star and \(\{\nu_{\rm e},\nu_{\infty}\}\) are respectively the frequency of the emitted light as measured at the location of the emission and by asymptotic observers]. However, if photons that were created somewhere _inside_ the star can make their way out and reach the asymptotic observers, then the measured redshift parameter \(z\) may not determine uniquely the surface properties of the star, thus making the above bound unreliable. In the present compact paper we prove that in these cases, in which the creation depth of a detected photon is not known to the far away observers, the empirically measured redshift parameter can still be used to set a (weaker) lower bound on the dimensionless density parameter of the observed star: \(\Lambda\geq\frac{3}{2}[1-(1+z)^{-2/3}]\).
## I Introduction
The gravitational redshift effect in general relativity [1; 2] implies that electromagnetic waves traveling out of a gravitational well (which, for example, may be created by the presence of a compact star) seem to lose energy. This physically important phenomenon is reflected in the fact that the frequencies of the detected photons as measured by flat-space asymptotic observers are lower than the original frequencies of the corresponding photons as determined at the source of the emission (the compact star).
In spherically symmetric spacetimes the ratio between these two frequencies can be expressed in terms of the \(tt\)-component of the curved line element at the emission site [1; 2]:
\[1+z\equiv\frac{\nu_{\rm e}}{\nu_{\infty}}=\frac{1}{\sqrt{g_{00}}}\, \tag{1}\]
where \(\{\nu_{\rm e},\nu_{\infty}\}\) are respectively the photon frequency as measured at the source of emission and the detected frequency as measured by far away asymptotic observers. In particular, since the spacetime region just outside the surface of a compact star of mass \(M\) and radius \(R\) is characterized by the simple relation \(g_{00}=1-2M/R\)[3], the frequencies of photons that were created near the surface of the star are gravitationally redshifted according to the simple dimensionless ratio [1; 2]
\[1+z\equiv\frac{\nu_{\rm e}}{\nu_{\infty}}=\left(1-\frac{2M}{R}\right)^{-\frac {1}{2}}. \tag{2}\]
An immediate consequence of the simple relation (2) is that the dimensionless maximum-density-area parameter
\[\Lambda\equiv 4\pi R^{2}\rho_{\rm max} \tag{3}\]
of the emitting star can in principle be bounded from below by far away observers who measure the gravitational redshift parameter \(z\) of an emitted photon that was created near the _surface_ of the star:
\[\Lambda\geq\frac{3}{2}[1-(1+z)^{-2}]. \tag{4}\]
(We have used here the simple inequality \(M\leq\frac{4}{3}\pi R^{3}\rho_{\rm max}\) for the mass of the observed star [4]).
However, caution should be taken in interpreting observational data of stellar emission spectra: one has to take into account the possibility that photons can in principle also be created in the non-vacuum region _inside_ the star, where the radial functional behavior of the metric component \(g_{00}(r)\) [see Eq. (1)] may not be known to the far away observers.
In particular, if a photon that was created in some unknown depth inside the _non_-vacuum region of the star can make its way out, then the empirically measured redshift parameter of the detected photon cannot be used by the
asymptotic observers to determine uniquely the dimensionless compactness \(M/R\) [see Eq. (2)] of the emitting star, thus making the simple lower bound (4) unreliable.
The main goal of the present compact paper is to present a compact theorem that reveals the interesting fact that in these situations, in which the detected photon was originally created in the non-vacuum region inside the star (at some depth which may _not_ be known to the far away asymptotic observers), the observers can still use the empirically measured redshift factor of the detected photon in order to derive a lower bound, albeit weaker than (4), on the dimensionless maximum density parameter \(\Lambda\) of the observed star.
## II Description of the system
We consider a compact star of mass \(M\) and radius \(R\) whose spherically symmetric asymptotically flat spacetime is described, using the Schwarzschild spacetime coordinates, by the curved line element [1; 2]
\[ds^{2}=-e^{-2\delta}\mu dt^{2}+\mu^{-1}dr^{2}+r^{2}(d\theta^{2}+\sin^{2}\theta d \phi^{2}) \tag{5}\]
with the radially-dependent metric functions \(\mu=\mu(r)\) and \(\delta=\delta(r)\).
The Einstein-matter field equations, \(G^{\mu}_{\nu}=8\pi T^{\mu}_{\nu}\), yield the non-linear differential relations [5; 6]
\[\frac{d\mu}{dr}=-8\pi r\rho+\frac{1-\mu}{r} \tag{6}\]
and
\[\frac{d\delta}{dr}=-\frac{4\pi r(\rho+p)}{\mu} \tag{7}\]
for the metric functions. Here
\[\rho\equiv-T^{t}_{t}\quad\mbox{ and }\quad p\equiv T^{r}_{r} \tag{8}\]
are respectively the energy density and radial pressure of the matter fields, which are assumed to be non-negative in the interior region of the star and respect the dominant energy condition [7; 8]:
\[0\leq p\leq\rho\quad\mbox{ for }\quad r\leq R. \tag{9}\]
In addition, the energy density and pressure are assumed to vanish outside the compact surface of the star:
\[\rho=p=0\quad\mbox{ for }\quad r>R. \tag{10}\]
We assume that the spacetime of the compact star is spatially regular, which implies the near-origin relations [5; 6]
\[\mu(r\to 0)=1+O(r^{2})\quad\mbox{ and }\quad\delta(0)<\infty. \tag{11}\]
In addition, the metric functions of the asymptotically flat spacetime of the star are characterized by the functional behaviors [5; 6]
\[\mu(r\rightarrow\infty)\to 1\quad\mbox{ and }\quad\delta(r\rightarrow\infty)\to 0 \tag{12}\]
at spatial infinity.
Taking cognizance of the Einstein equation (6), one finds the simple expression
\[\mu(r)=1-\frac{2m(r)}{r} \tag{13}\]
for the metric function, where the gravitational mass contained within a sphere of radius \(r\) is given by the integral relation [5; 6]
\[m(r)=4\pi\int_{0}^{r}x^{2}\rho(x)dx \tag{14}\]
and is characterized by the boundary condition [see Eq. (10)]
\[m(r=R)=M. \tag{15}\]
III Generic lower bound on the dimensionless maximum density parameter of an optically observed star
In the present section we shall derive, using _analytical_ techniques, a generic lower bound on the dimensionless maximum-density-area parameter \(\Lambda\) of optically observed stars. In particular, our goal is to derive a model-independent bound which would be valid even in situations in which the creation depth inside the star of the asymptotically detected photon is not known to the far away observers.
We first point out that Eqs. (1), (5), and (13) yield the functional relation
\[1+z(r_{\rm cr})=\frac{e^{\delta(r_{\rm cr})}}{\sqrt{1-\frac{2m(r_{\rm cr})}{r _{\rm cr}}}} \tag{16}\]
for the asymptotically measured gravitational redshift factor, \(z=z(r_{\rm cr})\), that characterizes a photon that originally was created inside the star at a radial distance \(r=r_{\rm cr}\) from its center and eventually was detected by the far away asymptotic observers [9].
In order to emphasize the fact that far away observers can in principle determine the redshift factor \(z\) of an asymptotically detected photon but, in general, they may not have precise knowledge about the exact depth (or, equivalently, the exact radius \(r=r_{\rm cr}\)) at which the photon was originally created inside the star, we shall henceforth use the notation \(z\) [instead of \(z(r_{\rm cr})\)] for the empirically determined redshift factor of an asymptotically detected photon.
Our main goal is to derive a lower bound on the dimensionless density parameter \(\Lambda\) of the observed star. To this end, we shall first prove that the metric function \(e^{\delta(r)}/\sqrt{\mu(r)}\), which determines the redshift parameter (16), is monotonically decreasing inside the star. Using the Einstein equations (6) and (7) with the radial relation (13), one obtains the gradient relation [see Eq. (9)]
\[\frac{d\Big{[}\frac{e^{\delta(r)}}{\sqrt{\mu(r)}}\Big{]}}{dr}=-\frac{e^{ \delta(r)}}{\sqrt{\mu(r)}}\cdot\frac{\frac{m(r)}{r}+4\pi r^{2}p}{\mu r}<0\, \tag{17}\]
which implies [see Eq. (11)]
\[\max_{r}\Bigl{\{}\frac{e^{\delta(r)}}{\sqrt{\mu(r)}}\Bigr{\}}=e^{\delta(0)} \quad\quad{\rm for}\quad\quad r\in[0,R]. \tag{18}\]
We shall next derive a useful generic upper bound on the radially-dependent dimensionless metric function \(\delta(r)\). Using the simple relation [see Eq. (14)]
\[m(r)\leq\frac{4\pi}{3}r^{3}\cdot\rho_{\rm max} \tag{19}\]
for the gravitational mass contained within a sphere of radius \(r\), one finds from Eq. (13) the characteristic inequality
\[\mu(r)\geq 1-\frac{8\pi}{3}r^{2}\rho_{\rm max}. \tag{20}\]
We shall henceforth assume the relation
\[\Lambda<\frac{3}{2} \tag{21}\]
for the dimensionless density parameter of the star [otherwise we immediately obtain the lower bound \(\Lambda\geq\frac{3}{2}\), which is actually stronger than the bound (29) that we shall prove in the present section], which implies the characteristic inequalities [see Eq. (20)]
\[0<\mu(r)\leq 1. \tag{22}\]
Taking cognizance of Eqs. (7), (9), and (20), one can write the following series of inequalities
\[-\frac{d\delta}{dr}\leq\frac{8\pi r\rho(r)}{\mu(r)}\leq\frac{8\pi r\rho_{\rm max }}{1-\frac{8\pi}{3}r^{2}\rho_{\rm max}} \tag{23}\]
for the gradient of the metric function in the interior region (\(r\leq R\)) of the compact star. From (23) one obtains the integral relation
\[-\int_{\delta(r)}^{\delta(R)}d\delta\leq\int_{r}^{R}\frac{8\pi x\rho_{\rm max}}{1 -\frac{8\pi}{3}x^{2}\rho_{\rm max}}dx. \tag{24}\]
Performing the integration in (24) with the help of the boundary relation [see Eqs. (7), (10), and (12)]
\[\delta(r=R)=0\, \tag{25}\]
one finds the characteristic inequality
\[\delta(r)\leq\frac{3}{2}\cdot\ln\Big{(}\frac{3-8\pi r^{2}\rho_{\rm max}}{3-8 \pi R^{2}\rho_{\rm max}}\Big{)} \tag{26}\]
for the radially-dependent metric function. In particular, one can write the upper bound
\[\delta(0)\leq\frac{3}{2}\cdot\ln\Big{(}\frac{3}{3-8\pi R^{2}\rho_{\rm max}} \Big{)} \tag{27}\]
on the value of the metric function at the center of the star.
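For completeness, the elementary antiderivative behind the step from (24) to (26) is

\[\int\frac{8\pi x\rho_{\rm max}}{1-\frac{8\pi}{3}x^{2}\rho_{\rm max}}\,dx=-\frac{3}{2}\ln\Big{(}1-\frac{8\pi}{3}x^{2}\rho_{\rm max}\Big{)}+C\,\]

which, evaluated between \(r\) and \(R\) and combined with the boundary condition (25), immediately yields the inequality (26).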
Taking cognizance of Eqs. (3), (16), (18), and (27), one obtains the dimensionless inequalities
\[1+z\leq e^{\delta(0)}\leq\Big{(}\frac{3}{3-2\Lambda}\Big{)}^{\frac{3}{2}}\, \tag{28}\]
which imply the lower bound
\[\Lambda\geq\frac{3}{2}[1-(1+z)^{-2/3}] \tag{29}\]
on the characteristic maximum-density-area parameter of the optically observed compact star.
We note that the bound (29) is weaker than the bound (4). It is important to emphasize, however, that while the familiar bound (4) is only valid if one assumes that the asymptotically detected photon was created near the surface of the star, the analytically derived lower bound (29) is valid even in situations in which the creation depth of the asymptotically detected photon inside the non-vacuum region of the star is not known to us.
## IV Summary and physical implications
Determining the value of the composed maximum-density-area parameter \(\Lambda\equiv 4\pi R^{2}\rho_{\rm max}\) of highly compact stars is a challenging task that may help physicists to determine the correct equation of state of highly dense nuclear matter configurations. Interestingly, using the bound (4), the value of this dimensionless physical parameter can in principle be bounded from below by far away observers who measure the gravitational redshift parameter \(z\) of an asymptotically detected photon that was created near the _surface_ of the compact star.
In the present paper we have emphasized the fact that, in principle, photons can also be created in the non-vacuum region _inside_ the star, in which case the asymptotically measured redshift parameter \(z\) may not determine uniquely the surface properties of the star, thus making the simple bound (4) unreliable.
Motivated by this observation, we have presented a compact theorem that reveals the physically important fact that in these situations, in which the detected photon may have been created in the non-vacuum region inside the compact star (at some depth which is not necessarily known to the far away observers), the asymptotic observers can still use the empirically measured redshift factor of the detected photon in order to set a lower bound on the value of the dimensionless density parameter \(\Lambda\) that characterizes the observed star.
In particular, using the non-linearly coupled Einstein-matter field equations, we have derived the relation [see Eqs. (3) and (29)]
\[\Lambda\equiv 4\pi R^{2}\rho_{\rm max}\geq\frac{3}{2}[1-(1+z)^{-2/3}] \tag{30}\]
between the dimensionless maximum-density-area parameter of an emitting star and the gravitational redshift factor that characterizes the detected photon as measured by far away asymptotic observers.
Inspection of the analytically derived lower bound (30) reveals the fact that the strongest bound on the dimensionless density parameter of the star may be deduced by substituting in (30) the value \(z_{\rm max}\) that characterizes the most redshifted photon which is detected in the emission spectrum of the star by the asymptotic observers.
In order to illustrate the important physical implications of our results, let us assume that asymptotic observers detect an emitted photon with, say, \(z=1/2\). According to the familiar vacuum relation (4), this gravitational redshift factor yields the lower bound \(\Lambda\geq 5/6\simeq 0.833\) on the dimensionless density parameter of the emitting star. However, the use of the vacuum relation (4) may not be justified if the detected photon was created in the non-vacuum region inside the star. In these cases one should instead rely on the analytically derived density-redshift relation (30), which implies that the asymptotically detected photon with the property \(z=1/2\) corresponds to the corrected bound \(\Lambda\gtrsim 0.355\) on the dimensionless maximum-density-area parameter of the observed star.
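These numbers are straightforward to verify; a minimal numerical check of the two bounds (the function names here are ours) is:

```python
def lambda_surface(z):
    # Lower bound (4): photon known to be emitted at the stellar surface.
    return 1.5 * (1.0 - (1.0 + z) ** -2)

def lambda_interior(z):
    # Lower bound (30): creation depth inside the star unknown.
    return 1.5 * (1.0 - (1.0 + z) ** (-2.0 / 3.0))

print(lambda_surface(0.5))   # 5/6 ~= 0.833
print(lambda_interior(0.5))  # ~= 0.355
```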
Finally, we would like to emphasize again that the analytically derived lower bound (30) on the dimensionless density parameter (3) of an optically observed star is valid even in situations in which the creation depth of the asymptotically detected photon inside the compact star is not known to us.
**ACKNOWLEDGMENTS**
This research is supported by the Carmel Science Foundation. I would like to thank Yael Oren, Arbel M. Ongo, Ayelet B. Lata, and Alona B. Tea for stimulating discussions.
|
2309.02500 | Monthly quasi-periodic eruptions from repeated stellar disruption by a
massive black hole | In recent years, searches of archival X-ray data have revealed galaxies
exhibiting nuclear quasi-periodic eruptions with periods of several hours.
These are reminiscent of the tidal disruption of a star by a supermassive black
hole, and the repeated, partial stripping of a white dwarf in an eccentric
orbit around a ~10^5 solar mass black hole provides an attractive model. A
separate class of periodic nuclear transients, with significantly longer
timescales, have recently been discovered optically, and may arise from the
partial stripping of a main-sequence star by a ~10^7 solar mass black hole. No
clear connection between these classes has been made. We present the discovery
of an X-ray nuclear transient which shows quasi-periodic outbursts with a
period of weeks. We discuss possible origins for the emission, and propose that
this system bridges the two existing classes outlined above. This discovery was
made possible by the rapid identification, dissemination and follow up of an
X-ray transient found by the new live Swift-XRT transient detector,
demonstrating the importance of low-latency, sensitive searches for X-ray
transients. | P. A. Evans, C. J. Nixon, S. Campana, P. Charalampopoulos, D. A. Perley, A. A. Breeveld, K. L. Page, S. R. Oates, R. A. J. Eyles-Ferris, D. B. Malesani, L. Izzo, M. R. Goad, P. T. O'Brien, J. P. Osborne, B. Sbarufatti | 2023-09-05T18:00:04Z | http://arxiv.org/abs/2309.02500v1 | # Monthly quasi-periodic eruptions from repeated stellar disruption by a massive black hole
###### Abstract
In recent years, searches of archival X-ray data have revealed galaxies exhibiting nuclear quasi-periodic eruptions with periods of several hours. These are reminiscent of the tidal disruption of a star by a supermassive black hole, and the repeated, partial stripping of a white dwarf in an eccentric orbit around a \(\sim 10^{5}M_{\odot}\) black hole provides an attractive model. A separate class of periodic nuclear transients, with significantly longer timescales, have recently been discovered optically, and may arise from the partial stripping of a main-sequence star by a \(\sim 10^{7}\)\(M_{\odot}\) black hole. No clear connection between these classes has been made. We present the discovery of an X-ray nuclear transient which shows quasi-periodic outbursts with a period of weeks. We discuss possible origins for the emission, and propose that this system bridges the two existing classes outlined above. This discovery was made possible by the rapid identification, dissemination and follow up of an X-ray transient found by the new live _Swift_-XRT transient detector, demonstrating the importance of low-latency, sensitive searches for X-ray transients.
## 1 Introduction
Swift J023017.0+283603 (hereafter Swift J0230) was discovered in _Swift_-X-ray Telescope (XRT) data by the Living _Swift_-XRT Point Source (LSXPS) catalogue's real-time transient detector [1] on 2022 June 22. The source was serendipitously present in an observation of an unconnected source, SN 2021afk (4.3' away), and had a 0.3-10 keV count rate of \(2.7^{+0.6}_{-0.5}\times 10^{-2}\) ct s\({}^{-1}\). This field had been observed on 11 previous occasions by _Swift_ between December 2021 and January 2022; combining all of those observations, Swift J0230 was undetected down to a 3-\(\sigma\) upper limit of 1.5\(\times 10^{-3}\) ct s\({}^{-1}\). The last of these observations was 164 days before the discovery of Swift J0230, placing a rather loose lower limit on its switch-on time; for convenience we give all times relative to MJD 59752 (midnight on the day of discovery). The best localization of Swift J0230 is from the XRT, and is RA = 02\({}^{h}\) 30\({}^{m}\) 17.12\({}^{s}\), Dec = +28\({}^{\circ}\) 36\({}^{\prime}\) 04.4\({}^{\prime\prime}\) (J2000), with an uncertainty of 2.8'' (radius, 90% confidence). This is consistent with the nucleus of the galaxy 2MASX J02301709+2836050, but also marginally consistent with the type-II supernova SN 2020rht (3.1'' away), discovered two years earlier on 2020 August 12 (Figure 1). An optical spectrum of 2MASX J02301709+2836050, obtained with the Nordic Optical Telescope (Figure 2), gives a redshift \(z=0.03657\pm 0.00002\). Assuming standard cosmological parameters [4], this corresponds to a luminosity distance \(D_{L}=160.7\) Mpc. The galaxy type is unclear, but it is either quiescent or, at most, a very weak AGN (see the Supplementary Information).
## 2 Results
### X-ray analysis
Following the initial, serendipitous discovery, we obtained regular monitoring with _Swift_ (see the Methods section for details). The initial outburst continued for 4 days following the discovery; on the fourth day it ended with a rapid decay, the luminosity falling by a factor of 20 in just 57 ks; there was a brief rebrightening (a factor of 4.5 in 6 ks), before it became too faint to detect; the light curve is shown in Figure 3. Fitting this decay with a power-law, \(L\propto(t-t_{0})^{-\alpha}\) (where \(t_{0}\) is set to the start of the first bin in this observation), gives \(\alpha=11.0\pm 1.7\). Eight subsequent outbursts were observed at \(\sim\) 25 d intervals, with durations \(\sim\) 10-15 d. The fifth outburst was either significantly longer (up to \(\sim\) 32 d), or consisted of a weak outburst, a return to quiescence, and then a second, longer outburst. This was followed by a long gap of \(\sim\) 70 d during which two possible short and weak outbursts were seen, before another outburst similar to the early ones.
A Lomb-Scargle analysis (see Methods and Extended Data Figure 1) reveals moderately-significant peaks at approximately 22 and 25 d periods, although each peak is \(\sim\) 1 d wide, confirming the quasi- rather than strictly-periodic nature of the variability, as may be expected from visual inspection of the light curve. Further consideration of the variability requires us to define what constitutes an outburst: the initial outburst and that from days \(\sim\) 41-48 appear clearly defined, but during the outburst from days \(\sim\) 60-75, the source underwent a sudden decline, being undetected on day 72 with an upper limit of \(L<8.9\times 10^{41}\) erg s\({}^{-1}\) (0.3-2 keV), recovering to \(L\sim 2\times 10^{42}\) erg s\({}^{-1}\) by day 74. It seems plausible to interpret this as a single outburst, with a sudden, brief dip. Hereafter, what constitutes an outburst becomes more subjective. The outbursts starting on days 89 and 102 could each be explained as comprising two short outbursts close together, or a single outburst with a quiet phase in the middle; it is worth noting that in the first of these, if we sum the upper limits during this quiet phase, we find a detection at a higher level than the upper limits found in the quiescent phases. During the long, largely quiescent period from days 111-195 the source was twice briefly detected with \(L\sim 1-2\times 10^{41}\) erg s\({}^{-1}\), but these are hardly 'outbursts' in the same way as the earlier emission. Based on visual inspection of the light curve, we define an outburst as comprising any times where \(L>2\times 10^{41}\) erg s\({}^{-1}\); the details of the outbursts thus identified are given in Table 1 - see the Methods section for full details of how these were derived. In summary: we have detected transient X-ray emission that rapidly switches on and off again with a recurrence timescale that is of the order of 25 d, but which can vary by several days between outbursts. The duration of the outbursts also shows significant variability, with the longest being of order 20 days and the shortest less than a day.
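For illustration, the threshold-based definition above can be written as a short segmentation routine. This is a minimal numpy sketch: the function name and the simple greater-than test are our simplifications, and the Methods describe the full 1-\(\sigma\) bracketing procedure actually used.

```python
import numpy as np

def outburst_intervals(t, L, threshold=2e41):
    """Return (start, end) times of contiguous runs with L above threshold.

    A simplified stand-in for the Methods procedure, which brackets each
    outburst between consecutive points on opposite sides of the threshold.
    """
    above = L > threshold
    edges = np.diff(above.astype(int))
    starts = np.flatnonzero(edges == 1) + 1   # first bin of each run
    ends = np.flatnonzero(edges == -1) + 1    # one past the last bin of each run
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, above.size]
    return [(t[s], t[e - 1]) for s, e in zip(starts, ends)]
```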
The X-ray spectrum during the outbursts was very soft, with no emission seen above 2 keV, and could be well modelled by a simple blackbody emitter with only Galactic absorption. Due to this soft spectrum, the typical energy bands used for XRT hardness ratios were inappropriate; we selected 0.3-0.9 keV and 0.9-2 keV as this gave roughly equal counts in the two bands, maximising the signal-to-noise ratio. The time-evolution of this hardness ratio (Figure 3) shows a clear correlation between luminosity and spectral hardness (a Spearman rank test gives a p-value of \(1.3\times 10^{-6}\) for the hypothesis that the data are uncorrelated), ruling out a change in absorption as the cause of the flux variation. We fitted the absorbed blackbody model to each observation in which Swift J0230 was detected. The blackbody temperature obtained is strongly correlated with luminosity (p-value: \(4.5\times 10^{-6}\); see Extended Data Figure 2); no evidence for absorption beyond the Galactic column is seen.
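The correlation statistics quoted here are standard Spearman rank tests. A minimal sketch follows; the arrays below are randomly generated placeholders for the per-observation luminosities and hardness ratios, not the real data.

```python
import numpy as np
from scipy.stats import spearmanr

# Placeholder arrays standing in for the per-observation measurements.
rng = np.random.default_rng(0)
luminosity = rng.uniform(1e41, 4e42, 30)
hardness = 0.1 * np.log10(luminosity) + rng.normal(0.0, 0.05, 30)

rho, p_value = spearmanr(luminosity, hardness)
print(rho, p_value)  # the text quotes p = 1.3e-6 for the real data
```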
As noted earlier, while coincident with the galaxy nucleus, the XRT position is also potentially in agreement with that of SN 2020rht. We obtained a 3 ks _Chandra_ DDT observation during the fourth outburst to obtain a better position, but unfortunately this observation fell on day 97, which turned out to be in one of the mid-outburst quiet phases.
### Optical and UV analysis
At optical and ultraviolet wavelengths, there is no evidence for outbursting behaviour. We obtained data from both _Swift_ UV/Optical Telescope (UVOT) and the Liverpool Telescope (Extended Data Tables 1-2). The host galaxy, 2MASX J02301709+2836050, is clearly detected in all observations, but there is no evidence for variability or an increase in flux compared to catalogued values. For UVOT, we also analysed the data from the pre-discovery observations and find no secure evidence for a change in brightness between those data and the observations taken during outburst. Full details are given in the Methods section.
## 3 Discussion
The peak luminosity of \(\sim~{}4\times 10^{42}\) erg s\({}^{-1}\), the timescales of the outbursts and their quasi-periodic, quasi-chaotic nature, the soft X-ray spectrum and lack of optical variability place significant constraints on the possible models to explain Swift J0230. While the lack of detection with _Chandra_ means that we cannot rule out a positional association with SN2020rht, it is difficult to see how a supernova could have evolved into the object which we have detected. The spectrum, luminosity and variability timescale are inconsistent with the properties of ultra-luminous X-ray sources [5], and while certain supernovae could in principle be followed by X-rays from a newly-formed millisecond pulsar [6], this should occur while the supernova is still visible in the optical. [7] have shown how this emission could be delayed, but the timescales and luminosities they predict (e.g. their fig. 5) are not consistent with our observation. Equally, neither model explains the variability or spectrum we see in Swift J0230. We discuss this further in the Supplementary Information.
We suggest that (near) periodic mass supply into an accretion flow onto the central supermassive black hole in 2MASX J02301709+2836050 presents the most likely explanation for Swift J0230. From simple energetics (see Supplementary Information) we can infer that the total mass accreted during a typical outburst is \(\sim 10^{-5}~{}M_{\odot}\). In an AGN (a supermassive black hole at the heart of a galaxy, accreting from a surrounding disc of gas), flares and outbursts are common. However, the timescale and spectrum of Swift J0230 are not consistent with typical AGN behaviour, and 2MASX J02301709+2836050 itself does not appear to be an AGN (Extended Data Figure 3; see Supplementary Information for a full discussion). We therefore consider the possibility that one (or several) stars are interacting with, and feeding mass on to, the central supermassive black hole. One possible mechanism for producing the mass flow is the interaction of two stars in orbit around the black hole; if they pass sufficiently close to each other, material can be liberated from one or both stars that can feed the central black hole [8, 9]. To generate the required timescales from this model requires a pair of stars orbiting in the same direction and in the same plane [9]. This could occur for stellar orbits that are initially randomly oriented if they can be subsequently ground down into the same plane by interaction with an AGN disc [10]. For Swift J0230, which lacks any clear signature of a standard AGN disc, it is unlikely that any stars orbiting the central black hole have the required orbits to achieve the observed timescales.
Another possibility is a repeating, partial Tidal Disruption Event (rpTDE), in which a star on a bound, highly-eccentric orbit loses some of its envelope every pericentre passage due to tides from the black hole's gravitational field. These events are a sub-class of TDEs in which the "regular" scenario sees the incoming star approach the black hole on a parabolic orbit, and the star is destroyed by the first encounter (see refs [11, 12] for reviews of TDEs).
The rpTDE model was investigated prior to the discovery of any corresponding sources (for example, ref [13]) and was suggested as the explanation of X-ray flares in the active galaxy IC 3599 [14]. Recently, it has been proposed (for example, refs [15, 16, 17]) as a possible explanation for hours-long quasi-periodic eruptions (QPEs) discovered in galactic nuclei (for example, refs [18, 19, 20, 21, 22]). These works focused on the possibility of a white dwarf interacting with a relatively low-mass central black hole of mass \(\sim~{}10^{5-6}M_{\odot}\). A second set of sources show much longer outbursts (both in duration and recurrence period) and have been referred to as periodic nuclear transients (PNTs); these may be the same rpTDE mechanism acting with a main-sequence star rather than a compact star [23, 24, 25, 26], and a more massive black hole (\(\sim~{}10^{7-8}M_{\odot}\)).
In the rpTDE model, the donor star is in a highly eccentric orbit around a black hole; at each pericentre passage the star has to approach, but not quite reach, the tidal radius at which the star would be fully disrupted. The outer layers of the star are liberated, and some of this material accretes on to the central black hole powering the outburst. The recurrence time of the outbursts is related to the orbital period of the star. The majority of the energy released from the accretion process occurs in the central regions near the black hole where the matter is most likely in the form of an accretion disc. We can therefore provide an estimate of the black hole mass in Swift J0230 by comparing the temperature of \(\sim~{}100\) eV (\(\sim 10^{6}\) K), measured from the X-ray spectrum, with the peak temperature of a standard accretion disc [27]. This yields (see the Supplementary Information) a black hole mass estimate of \(\sim 2\times 10^{5}M_{\odot}\). This is similar to the mass estimates for the QPE sources (for example, refs [18, 19, 22]). It is worth noting that the QPE sources and Swift J0230 show very little in the way of optical emission, whereas the PNTs show strong optical emission. This may simply reflect the difference in the black hole masses, i.e. the soft X-ray spectrum and the lack of optical emission seen in Swift J0230 appears consistent with this estimate of the black hole mass.
If the accreted material is stripped from the orbiting star during pericentre passage of a highly elliptical orbit, and the pericentre distance is a few gravitational radii (required to liberate any material from the surface of a white dwarf for black hole masses of a few \(\times 10^{5}~{}M_{\odot}\)), then we would expect the outburst duration to be similar in different systems, regardless of their orbital (and hence outburst) period. This is because the pericentre passage of a highly elliptical orbit is approximately that of a parabolic orbit and its duration is not connected to the orbital period. This means that it is difficult to explain both Swift J0230 (outburst duration of days) and the pre-existing QPEs (outburst duration of hours) as rpTDE of a white dwarf around a modest-mass
black hole. On the other hand, rpTDE of main-sequence stars by \(\sim~{}10^{7}-10^{8}~{}M_{\odot}\) black holes have been proposed to explain the PNTs ASASSN-14ko, for which the recurrence timescale is 114 d [23, 24], and AT2018fyk which exhibited a significant re-brightening after around 600 d of quiescence [25]. Swift J0230 clearly lies between these two classes of object.
An important question is how the star arrived on such an orbit around the central black hole. Tidal capture, in which an orbiting star loses orbital energy due to tidal forces and becomes bound to the black hole [28], is typically incapable of generating the required orbits; however, the Hills mechanism [29] was proposed as a viable formation route for the PNT ASASSN-14ko [30]. In this mechanism, a binary star system approaches the black hole with a small enough pericentre distance such that the tidal force from the black hole is stronger than the gravitational force holding the binary together. This results in the binary being disrupted, with one component ejected from the system, and the other locked into a bound, but highly eccentric, orbit about the black hole. If the progenitor of Swift J0230 were a binary consisting of a low mass, main-sequence star and, say, a white dwarf, then the main-sequence star needs to be captured into a bound orbit around the black hole with the observed \(\sim\) 25 d period. For a black hole mass of \(M_{\bullet}\sim 4\times 10^{5}~{}M_{\odot}\) this period corresponds to the most likely outcome of Hills capture from such a binary system (the calculations of [30] show that higher black hole masses are allowed but are significantly less likely to result in this period for the bound star; see Supplementary Information for details). This is consistent with the mass estimate derived from the temperature of the X-ray spectrum. Further, the expected accretion timescales from such a system (see Supplementary Information) are also consistent with those observed in Swift J0230.
The variable shape and timescales of the outbursts seen in Swift J0230 may also be explained by this model. In a standard TDE, as opposed to an rpTDE, the star arrives on a parabolic orbit, meaning that some of the stellar matter is bound to the black hole (the inner tidal stream), and this material forms an accretion flow, while the rest of the stellar debris (the outer tidal stream) is unbound and leaves the system [11]. In an rpTDE, the star must be on a bound orbit around the black hole. In this case both the inner and outer tidal streams can remain bound to the black hole. The inner stream falls back soonest, and thus with a higher mass return rate, while the outer stream can return on longer timescales. Due to relativistic precession of the stellar debris orbits (both apsidal and nodal) the returning streams can collide and partially cancel their angular momenta to augment the accretion rate on to the black hole, with the magnitude of the effect depending on the orientation of the colliding orbits (see [31, 32] for similar variability induced in accretion discs due to relativistic precession; and the processes described therein may also occur in the discs formed in Swift J0230). The exact details of this interaction between the two streams, the accretion flow, and the orbiting star are complex and require a full numerical analysis which is beyond the scope of this discovery paper; however it is clear that such interaction will produce variable emission that could at least partially erase the more exactly periodic nature of the stellar orbit. An example of such complex interacting debris streams can be seen in fig. 9 of [33]. It will be particularly important to determine if the star itself can be sufficiently perturbed during each pericentre passage, with e.g. tides imparting variations in oscillation amplitudes and rotation frequency, to change the amount of mass transferred and the structure of each outburst. Additionally, the sharp decline observed at the end of each outburst may be driven by the returning star disturbing the accretion flow. These questions can be addressed with future theoretical investigations.
We have proposed a single explanation for the QPEs and PNTs, as repeated, partial tidal disruption of a star in an eccentric orbit around a supermassive black hole; and reported the discovery of the first object that can bridge the gap between these classes. The QPEs are thought to harbour a white dwarf and a modest (\(\sim~{}10^{5}~{}M_{\odot}\)) black hole, and the PNTs are thought to host a main sequence star and a more massive (\(\sim~{}10^{7}~{}M_{\odot}\)) black hole. Swift J0230 represents an intermediate class of system which is consistent with a main sequence star orbiting a modest-mass black hole. Given the timescales, modest fluxes, and lack of emission outside of the X-ray band, Swift J0230-like systems are difficult to discover. Unlike QPEs, which were discovered in archival data, their timescales and behaviour would not be exposed by a single observation. It is only with the recent creation of a real-time transient detector [1] that objects like this can be found rapidly enough for follow-up observations to expose their behaviour. The fact that this event was found within three months of enabling this real-time search suggests that they are reasonably common and we can expect to discover more objects of this class with sensitive, wide-field X-ray instruments such as _eRosita_[34] and in the near future, the _Einstein Probe_[35].
## Methods
### Discovery
At 13:58 UTC on 2022 June 22, the LSXPS real-time transient detector [1] reported the discovery of a possible new X-ray transient, dubbed Swift J023017.0+283603. The object was detected in _Swift_ observation 00014936012, which had taken place between 08:19 and 08:46 UTC; i.e. the notification was produced 5.2 hours after the observation (most of this latency was related to the timing of ground station passes and the ingesting of the data by the Swift Data Center: the observation data were received at the UKSSDC at 13:51 UTC). No catalogued X-ray source was found at this position. Further, _Swift_ had already observed this location on 11 previous occasions (the observation target was SN 2021afk, 4.3\({}^{\prime}\) away from this serendipitous transient), for a total of 9.6 ks. These observations had been analysed as a stacked image in LSXPS (DatasetID: 19690); no source was present near the position of Swift J0230, with a 3-\(\sigma\) upper limit of 1.5\(\times 10^{-3}\) ct s\({}^{-1}\). The peak count rate of Swift J0230 in the new observation was \(2.7^{+0.6}_{-0.5}\times 10^{-2}\) ct s\({}^{-1}\) (1-\(\sigma\) errors), significantly above this upper limit, clearly indicating that a new transient had been discovered.
Due to the very soft spectrum and coincidence with the nucleus of the galaxy 2MASX J02301709+2836050, this was originally interpreted as a tidal disruption event [2, 37], and a high-urgency target-of-opportunity (ToO) request was submitted to _Swift_.
Figure 1: The location of the new transient, Swift J0230, relative to its host galaxy and an old supernova. The image is an archival Pan-STARRS [36] image of 2MASX J02301709+2836050, with colour scaled arbitrarily for aesthetic purposes. The broken circle shows the 90% confidence _Swift_-XRT position of Swift J0230; the solid one SN 2020rht.
In the following analysis we assumed \(H_{0}=67.36\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{\Lambda}=0.6847\), \(\Omega_{m}=0.3153\)[4].
### Observations and data analysis
_Swift_ follow-up observations began at 16:07 UTC on June 22 (T0+0.67 d). Due to _Swift_'s Moon observing constraint, subsequent observations were not available until day 4 (June 26). Daily observations of 1 ks exposure were obtained with _Swift_ until day 12. A subsequent ToO request was submitted (PI: Guolo) requesting weekly monitoring of this source, which began on day 21 (2022 July 13). The initial observation showed that the source had turned back on in X-rays, but in the following observations on days 27 and 35 it was again below the detection threshold. In order to better quantify the duty cycle, we submitted regular ToO requests (PI: Evans) for daily 1 ks observations, which ran until 2023 March 19 (day 270) when the source entered _Swift_'s Sun observing constraint. Note that we have not _obtained_ 1 ks per day; each month, proximity to the Moon prevents observations for 3-4 days, and due to the nature of _Swift_'s observing programme, our observations were sometimes shortened or completely superseded by other targets.
**Swift-XRT** XRT data were analysed using the on-demand tools of [38], via the swifttools Python module (v3.0.7). A 0.3-10 keV light curve was constructed, binned to one bin per observation; the soft and hard bands were set to 0.3-0.9 and 0.9-2 keV respectively. Observations 00015231018 and 00015231019 overlapped in time, as did 00015231043 and 00015231044. When this happens the per-observation binning is unreliable, so we built light curves of each of these observations individually, and then replaced the affected bins in the original light curves with those thus obtained. For each run of consecutive upper-limit bins, we merged the limits into a single bin, using the mergeLightCurveBins() function in swifttools, giving a better measurement of, or limit on, the quiescent flux.
For each observation in which the light curve showed a detection (at the 3-\(\sigma\) level), we extracted a spectrum, fitting it with a blackbody component absorbed by two absorbers. The first was a tbabs model with \(N_{\rm H}\) fixed at the Galactic value of 1.12\(\times 10^{21}\) cm\({}^{-2}\)[39]; the second
Figure 2: Optical spectrum of the host galaxy 2MASX J02301709+2836050 obtained with the Nordic Optical Telescope on day 132. _Top:_ the black line shows the observed spectrum, while the orange line shows the fit to the stellar continuum provided by starlight. The vertical lines mark prominent emission and absorption features, which together allow us to measure the redshift \(z=0.03657\pm 0.00002\) (\(1-\sigma\) confidence). _Bottom:_ the residuals between the observed data (stellar + nebular spectrum) and the fit (stellar continuum), which single out the nebular emission. The emission line fluxes were measured on the residual spectrum and allow the galaxy to be placed on the BPT plot (Extended Data Figure 3).
was a ztbabs model with \(N_{\rm H}\) free to vary, and the redshift fixed at the value obtained from our NOT spectrum. From these fits we obtained the 0.3-2 keV flux and (given the luminosity distance of 160.7 Mpc) luminosity. The dependence of this luminosity on blackbody temperature, reported in the main paper, is shown in Extended Data Figure 2. We also obtained for each spectrum the conversion factor from 0.3-10 keV count rate to 0.3-2 keV luminosity, and so converted our count-rate light curve into luminosity. For the detections with too few photons to yield a spectral fit, the upper limits in the light curve and the bins created by merging (above), we used the conversion factor of \(7.54\times 10^{43}\) erg ct\({}^{-1}\) obtained from the discovery observation. The resultant light curve is shown in Figure 3, with the merged bins marked with crosses (and in green/cyan in the online version). To explore the rapid flux decay seen at the end of the first outburst (right-hand panel of Figure 3), we rebinned the data to one bin per snapshot (using the rebinLightCurve() function), converting to luminosity using the same conversion factor (that obtained for the appropriate observation) for each bin.
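As a hedged sketch of this per-observation workflow: the PyXspec calls below are our assumption about how one would script xspec from Python, and spec.pha is a hypothetical file name; the model string, frozen Galactic column and conversion factor are those given above.

```python
from xspec import AllData, Fit, Model  # PyXspec

AllData("spec.pha")                 # hypothetical per-observation spectrum
m = Model("tbabs*ztbabs*bbody")
m.TBabs.nH = 0.112                  # Galactic column: 1.12e21 cm^-2, in units of 1e22
m.TBabs.nH.frozen = True
m.zTBabs.Redshift = 0.03657         # fixed at the NOT value
Fit.perform()

# Count rate -> 0.3-2 keV luminosity, using the discovery-observation factor
CONV = 7.54e43                      # erg ct^-1
L_discovery = 2.7e-2 * CONV         # ~2e42 erg/s for the discovery count rate
```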
In order to determine the timescales of the outbursts (Table 1), we defined outbursts as being times where the 0.3-2 keV luminosity was above \(3\times 10^{41}\) erg s\({}^{-1}\) (based on visual inspection of the light curve). We built a new light curve, still with one bin per observation, but in which all bins were created as count-rates with 1-\(\sigma\) errors, rather than allowing upper limits (using the swifttools module with the argument allowUL=False passed to the light curve), and then identified each point which was inconsistent with \(L=3\times 10^{41}\) erg s\({}^{-1}\) at at least the 1-\(\sigma\) level. The start and end times of the outbursts were then constrained to lie between consecutive datapoints from this sample which were on opposite sides of the \(3\times 10^{41}\) erg s\({}^{-1}\) line. The results are shown in Table 1. We created a Lomb-Scargle periodogram (using the astropy.timeseries.LombScargle Python package) to search this light curve for periodicity, and find possible peaks centred on 22.1 d and 25.0 d, each with widths of \(\sim\) 1 d. To determine their significance we used a bootstrapping method, whereby one 'shuffles' the data, randomly redistributing the fluxes (and their errors) among the time bins, and then recalculates the periodogram. We did this 10,000 times, and then for each trial period in the periodogram identified the 99.7th percentile of power, i.e. the 3-\(\sigma\) significance threshold. The result is shown in Extended Data Figure 1, along with the window function. These two peaks are both clearly above 3-\(\sigma\) in significance and not present in the window function.
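A minimal sketch of the periodogram and shuffling test just described, using astropy's Lomb-Scargle implementation; the 10,000 trials and the 99.7th percentile follow the text, while the function signature and the use of autopower() are our own choices.

```python
import numpy as np
from astropy.timeseries import LombScargle

def ls_with_bootstrap(t, flux, flux_err, n_trials=10_000, pct=99.7):
    """Periodogram plus a per-frequency 3-sigma power threshold, obtained by
    randomly redistributing the fluxes (and errors) among the time bins."""
    freq, power = LombScargle(t, flux, flux_err).autopower()
    rng = np.random.default_rng()
    trials = np.empty((n_trials, freq.size))
    for i in range(n_trials):
        idx = rng.permutation(t.size)
        trials[i] = LombScargle(t, flux[idx], flux_err[idx]).power(freq)
    threshold = np.percentile(trials, pct, axis=0)
    return 1 / freq, power, threshold
```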
We also investigated whether with our observing strategy we can rule out short-period variations like those seen in the QPEs reported to date. We simulated a simplistic light curve based on the period of GSN 069 (and including alternating between slightly longer and shorter cycles as in GSN 069). For each snapshot in the real XRT light curve of Swift J0230 we determined the phase of the trial period and set the count rate either to 0.03 ct s\({}^{-1}\) ('on') or 0.001 ct s\({}^{-1}\) ('off'); fractional errors were set to typical values from our real light curve. We then constructed the Lomb-Scargle periodogram of this, and repeated the bootstrap approach above. A strong signal was found at the nominal period, confirming that such a signal would have been easily recovered had it been present. Thus we can be confident that there is no short-period modulation like that in GSN 069 present in Swift J0230.
**Swift-UVOT** UVOT data were analysed using the uvotsource package in HEASoft v6.30. For the pre-outburst data, the location of Swift J0230 was only in the field of view on five occasions. No sign of variability is found in these data, so we summed the images in each filter using uvotimsum and extracted mean magnitudes. In the initial discovery observation UVOT gathered data in all filters, but no sign of the outburst is seen, any variability being swamped by the underlying galaxy emission; UVOT magnitudes from this observation and the pre-discovery data are shown in Extended Data Table 1. Due to this lack of variability, subsequent _Swift_ observations used the UVOT 'filter of the day', which rotates each day between the \(u\) and UV filters, to preserve the life of the filter wheel. No significant variability is seen. When visually examining the light curve it is tempting to claim some variability in phase with the XRT data, but the magnitude of the variability is much smaller than the errors on the UVOT photometry. To further investigate, we rebinned the XRT light curve to one bin per
Figure 3: The temporal evolution of Swift J0230. All error-bars are at the 1\(\sigma\) level. **Left:** X-ray time series, binned to one bin per observation. _Top:_ 0.3-2 keV luminosity light curve. The red data points marked with dots are the _Swift_-XRT detections; grey arrows are 3-\(\sigma\) upper limits from XRT. The dark blue upper limit marked with an asterisk is from _Chandra_. The broad bins marked with crosses were created by combining consecutive XRT non-detections (upper limits in cyan, detections in green); see the Methods section for full details. _Bottom:_ The (0.9-2)/(0.3-0.9) keV hardness ratio; the spectral hardness is strongly correlated with the luminosity. The vertical bands are at 25-d intervals. **Right:** the light curve of the XRT observation taken on days 4-5, with one bin per spacecraft orbit, showing the rapid decay at the end of the first outburst.
snapshot (i.e. the same binning as the UVOT data, which has one exposure per snapshot), and disabled upper limits, forcing a count-rate and 1-\(\sigma\) error per bin. For each UVOT filter, we identified the coincident XRT data and then performed a Spearman rank correlation analysis between the XRT and UVOT fluxes. This does not account for the uncertainty on the count-rates, and therefore is likely to overestimate the significance; however, no significant correlation was found at all, with p-values between 0.1 and 0.9, and so more complex correlation mechanisms were not deemed necessary. We also attempted image subtraction, summing all UVOT images in the \(u\) filter (that which showed the strongest signs of possible variability) during times of XRT detection and non-detection, before subtracting the latter from the former. No evidence of an excess at the XRT position was seen.
**Nordic Optical Telescope** Spectroscopy of the host galaxy 2MASX J02301709+2836050 was obtained on 2022 November 1 (day 132). A 2.4-ks optical spectrum was accumulated using the Alhambra Faint Object Spectrograph and Camera (ALFOSC) mounted on the 2.56-m Nordic Optical Telescope (NOT) located at La Palma, Spain. The ALFOSC spectrum was reduced using the spectroscopic data reduction pipeline PyNOT ([https://github.com/jkrogager/PyNOT](https://github.com/jkrogager/PyNOT)). We used a 1.0'' slit width and Grism #4, covering the wavelength range \(\sim 3200\)-9600 A at resolution \(\lambda/\Delta\lambda\approx 360\). The airmass during the observation was of the order of \(\sim 1.1\). The spectrum is shown in Figure 2, and features prominent H\(\alpha\), [N ii] and [O iii] emission lines, with a common redshift of \(0.03657\pm 0.00002\). A weak H\(\beta\) line is also seen. The flux of weaker lines is often affected by the presence of stellar absorption in the continuum. In order to recover the pure nebular fluxes, we fitted the spectrum with the starlight ([http://www.starlight.ufsc.br/](http://www.starlight.ufsc.br/)) software. starlight fits the stellar continuum, identifying the underlying stellar populations in terms of age and metallicity. Comparing the observed data with the output synthetic spectrum, the pure nebular continuum can be identified, and the emission line fluxes measured accurately.
Based on this analysis, we could build the 'BPT' (Baldwin, Phillips & Terlevich) diagram [40], which is widely adopted to identify the level of nuclear activity in a galaxy. It exploits the ratio of nearby emission lines, minimizing the effects of extinction. The result is shown in Extended Data Figure. 3. 2MASX J02301709+2836050 lies in the locus where low-power AGN, LINERs (Low-Ionization Nuclear Emission-line Region) and star-forming dominated galaxies intersect. A secure classification of 2MASX J02301709+2836050 is therefore not possible, but a powerful AGN is clearly ruled out.
**Liverpool Telescope** The position of Swift J0230 was observed with the 2 m Liverpool Telescope (LT) optical imager (IO:O) on six different occasions between 2022 Jun 28 and 2022 Aug 26, using the \(griz\) filters (the first and last epoch were \(gri\) only). Images were processed using the default IO:O pipeline and downloaded from the LT archive. We co-added exposures and performed basic image subtraction versus Pan-STARRS 1 reference imaging using a custom IDL routine. While a few subtracted image pairs show weak positive or negative residuals at the location of the \(\sim\)18 mag nuclear point source, there is no clear correlation in these residuals between filters or epochs, suggesting minimal contribution of any nuclear transient (or any AGN variability) to the optical flux at the sensitivity level of the LT images. The lack of any residual source at the location of SN 2020rht is unambiguous in all images. Limiting magnitudes of the images (5\(\sigma\)) are given in Extended Data Table 2.
**Chandra** We requested a 3 ks Director's Discretionary Time observation of Swift J0230 with _Chandra_ (Proposal 23708869). We triggered this on day 93, when the _Swift_-XRT count rate exceeded the approved threshold of 0.02 ct s\({}^{-1}\), and the observations were obtained on day 97 (obsID 27470). Our intention was to obtain an accurate (arc-second or better) position of Swift J0230, to be able to say definitively whether it was associated with the nucleus of its host, the historical supernova, or neither. Unfortunately, this observation occurred during the quiescent/faint part of the 5th outburst, and Swift J0230 was not detected. The 3-\(\sigma\) upper limit, converted to 0.3-2 keV luminosity assuming a 90 eV blackbody with a Galactic absorber, is \(L<8.0\times 10^{41}\) erg s\({}^{-1}\), consistent with the XRT measurements at the time (Fig. 3).
**Extended Data Figure 1** - Period analysis of the XRT data of Swift J0230. The Lomb-Scargle periodogram of the per-snapshot binned XRT light curve is shown in red. The window function is in grey and the line marking 3-\(\sigma\) significance as a function of period is in black. The two peaks above the 3-\(\sigma\) line and not corresponding to window-function peaks are centred on 22.1 d and 25.0 d.
**Extended Data Figure 2** - The 0.3-2 keV luminosity and blackbody temperature derived from spectral fits to the XRT observations in which Swift J0230 was detected. A Spearman rank test shows these to be strongly correlated (p-value: \(4.5\times 10^{-6}\)). The errorbars reflect the 90% confidence intervals on the parameters, obtained using \(\Delta C=2.706\) in the spectral fitting.
**Extended Data Table 1** - UVOT photometry from pre-discovery data and the discovery observation.
\begin{tabular}{c c c} \hline Filter & Magnitude & Magnitude \\ & (AB, pre-discovery) & (AB, discovery obs) \\ \hline v & \(15.78\pm 0.07\) & \(15.82\pm 0.10\) \\ b & \(16.36\pm 0.07\) & \(16.41\pm 0.08\) \\ u & \(17.40\pm 0.07\) & \(17.41\pm 0.09\) \\ uvw1 & \(18.33\pm 0.08\) & \(18.39\pm 0.10\) \\ uvm2 & \(18.95\pm 0.08\) & \(18.88\pm 0.11\) \\ uvw2 & \(18.85\pm 0.07\) & \(18.91\pm 0.09\) \\ \hline \end{tabular}
**Extended Data Figure 3** - BPT (Baldwin, Phillips & Terlevich) diagram, showing galaxy type (HII star forming region, AGN, LINER, composite) as a function of certain line flux ratios. The line ratios are [O iii] 5007 / H\(\beta\) versus [N ii] 6583 / H\(\alpha\) (left) and [S ii] 6717,6732 / H\(\alpha\) (right). In both panels, the red solid lines are the theoretical models separating star-forming regions and AGN [41]. In the left panel, the green line is the demarcation between pure star forming and composite star-forming/AGN regions, as prescribed by [42]. The straight segments separate proper AGN from LINERs (left: [43]; right: [41]). The SDSS galaxy catalogue object density is shown in greyscale [44] and the position of 2MASX J02301709+2836050 is marked by the blue cross (errorbars are 1 \(\sigma\)).
**Extended Data Table 2** - Liverpool Telescope upper limits on emission at the position of Swift J0230 after subtracting the galaxy emission (AB magnitudes).
\begin{tabular}{c c c c c} \hline MJD-59752 & \(g\) & \(r\) & \(i\) & \(z\) \\ \hline
7.20 & \(>\)20.8 & \(>\)20.8 & \(>\)20.9 & \(-\) \\
25.15 & \(>\)20.5 & \(>\)21.2 & \(>\)21.4 & \(>\)21.0 \\
29.18 & \(>\)20.6 & \(>\)20.7 & \(>\)20.6 & \(>\)20.1 \\
32.20 & \(>\)21.7 & \(>\)21.6 & \(>\)21.7 & \(>\)21.5 \\
33.21 & \(>\)22.0 & \(>\)21.2 & \(>\)19.6 & \(>\)17.0 \\
66.12 & \(>\)21.5 & \(>\)21.6 & \(>\)21.5 & \(-\) \\ \hline \end{tabular}
## Supplementary Information
### Discussion
### The nature of the host galaxy, 2MASX J02301709+2836050
A natural question is whether we have detected a new source, or are just seeing normal AGN activity. The first identified QPE occurred in an AGN, the Seyfert-2 GSN 069, but the quasi-periodic nature of the eruptions was not consistent with observed AGN activity [18]. Swift J0230 has a much longer (quasi-)period than the previously identified QPEs, but its behaviour remains inconsistent with typical AGN flaring. Rapid variability, such as that seen in the rise and decay of Swift J0230, is sometimes seen in Seyfert galaxies, and recently a narrow-line Seyfert 1 galaxy was shown to undergo a very soft outburst with a similar spectrum to Swift J0230, and a decline similar in timescale to that seen on day 4 of Swift J0230 [45]. However, in that object the spectrum hardened as the luminosity declined - the opposite behaviour to that seen in Swift J0230. Further, our optical spectrum (Figure 2) is inconsistent with a narrow-line Seyfert classification: such galaxies show broader spectral lines and strong Fe ii emission [46]. The BPT diagram (Extended Data Figure 3) further demonstrates that the optical spectrum is not consistent with a Seyfert, and is only marginally consistent with a weak AGN.
Indeed, there are many objections to classifying 2MASX J02301709+2836050 as a form of AGN. AGN are typically identified by their hard (2-10 keV) X-ray flux, but Swift J0230 was never detected in this band. Summing up all of the 2-10 keV data, 148 ks in total, we obtain a marginal detection (3.2-\(\sigma\) significance, adopting the method of [47]). Assuming a power-law spectrum with a photon index of 1.7, this yields \(L_{2-10}=(3.8^{+1.6}_{-1.4})\times 10^{40}\) erg s\({}^{-1}\). This does not alone rule out AGN activity, but does require any AGN to be very weak, consistent with the result of the optical spectrum. Furthermore, the host galaxy does not appear in the ALLWISEAGN catalogue [48], which contains 84% of all AGN brighter than 2MASX J02301709+2836050. It is present in the WISE catalogue, with colours \(W1-W2=0.14\), \(W2-W3=3.9\). The \(W1-W2\) colour is much bluer than the AGN shown in the classification plot (fig. 12) of [49].
The difficulty of explaining the quasi-periodic nature, timescale and luminosity of the outbursts in a weak, non-Seyfert AGN, combined with the above arguments against 2MASX J02301709+2836050 as an AGN, lead us to rule out variability of an existing AGN as a plausible explanation for Swift J0230.
### The association with SN2020rht
While the _Swift_-XRT position for Swift J0230 is coincident with the nucleus of the galaxy 2MASX J02301709+2836050, we cannot rule out spatial colocation with SN 2020rht. However, as noted in the main article above, it is difficult to identify a mechanism by which SN 2020rht could evolve into Swift J0230. Accretion is ruled out on simple energetics grounds: the Eddington luminosity for a stellar-mass black hole is \(\sim~{}10^{38}\) erg s\({}^{-1}\), and any theoretical super-Eddington (to the tune of 4 orders of magnitude!) emission would not exhibit the soft, thermal spectrum we observed - and would show strong optical emission along with the X-rays. Magnetar spin-down has been proposed as powering long-lived GRB emission (for example, ref [50]) and can therefore certainly provide sufficient energy; however, existing models are not consistent with Swift J0230, as already detailed above.
We thus regard the near-alignment with SN 2020rht as simply a chance occurrence. A detailed calculation of the probability of this is beyond the scope of this paper, especially as the rate of X-ray transients in the Universe is not a well-known quantity; however, a simple consideration is enough to show that such alignments cannot be rare. If the typical supernova rate is 0.01-0.1 per galaxy per year [51], and given that the XRT localisation accuracy is comparable to or larger than the angular size of a typical galaxy, it follows that the probability of an XRT transient in a given galaxy also lying close to a supernova of up to 2 years old must be of order a few per cent to a few tens of per cent.
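The order-of-magnitude estimate can be written out explicitly (the 2-yr window corresponds to the age of SN 2020rht at the time of discovery):

```python
for sn_rate in (0.01, 0.1):  # SNe per galaxy per year [51]
    print(f"P(chance alignment with a <2-yr-old SN) ~ {sn_rate * 2.0:.0%}")
# -> ~2% to ~20%: a few per cent to a few tens of per cent
```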
### Comparison of Swift J0230 with similar objects
There is a growing collection of systems identified as QPEs, with varying degrees of confidence (and cycles observed) [18, 19, 20, 21, 22]. These systems share various common properties. They have all been discovered in X-rays and show little (if any) variability at longer wavelengths. They are all located in the nucleus of their host galaxies, and have very soft spectra, with almost all emission below 2 keV and capable of being fitted by an absorbed blackbody with temperatures in the \(\sim\) 100-200 eV range. They all exhibit a clear correlation between spectral hardness and luminosity, and have peak luminosities of \(L_{\rm 0.3-2keV}\approx 10^{42-43}\) erg s\({}^{-1}\). They show outbursts that are approximately, but not strictly periodic. They are typically reported to have black hole masses of \(\sim~{}10^{5}~{}M_{\odot}\). These properties are all shared by Swift J0230, with the exception of the timescales. The QPE candidates are all found to have outbursts with durations of minutes to hours, and periods of a few hours; Swift J0230 as noted has a much longer outburst duration and period.
A smaller number of objects have been classified as PNTs [23, 24, 25]. These objects were discovered optically and show much longer periods (\(>100\) d) but are again repeating events colocated with a galaxy nucleus, this time with black hole masses \(\sim~{}10^{8}~{}M_{\odot}\). In the first of these, ASASSN-14ko [23, 24], no X-ray outburst is seen, although the X-ray emission from the host AGN is found to decrease prior to the optical outburst. The second event, AT2018fyk [25], does show a strong X-ray and UV outburst which was abruptly terminated, before rising again about 600 d later.
A further system, eRASSt J045650.3-203750 [26], shows similarities to both classes of object. It was discovered via its X-ray outbursts, which are \(\sim\) 90 d in duration and occur every \(\sim\) 220 d. Spectrally, this object is harder than the QPEs, with notable flux detections above 2 keV. It is again located coincident with a galactic nucleus, with an estimated SMBH mass of \(\sim 10^{6}~{}M_{\odot}\). In this event, some evidence for UV variability is seen but no optical variation.
All of these objects have been interpreted by the papers cited above as rpTDE events, with differing black-hole masses and donor types. Swift J0230 is clearly most similar to the QPEs and eRASSt J045650.3-203750, but with an outburst duration and period much longer than the former, and shorter than the latter.
### The nature of Swift J0230
Here, we discuss the properties of Swift J0230 and estimate physical quantities such as the mass accreted during each outburst. Given the discussion above, we assume that the variability is not caused by activity within a standard AGN disc (by, for example, the mechanisms outlined in [52, 53]). We therefore explore the implications in the context of existing models for accretion of material tidally stripped from stars orbiting near the central black hole [13, 15, 17].
If the outbursts are accretion events then the total mass accreted during an outburst is given by:
\[M_{\rm acc}\sim~{}\frac{E_{\rm ob}k}{\eta c^{2}} \tag{1}\]
where \(E_{\rm ob}\) is the measured X-ray energy during the outburst, \(k\) is a bolometric correction, and \(\eta\) is the radiative efficiency of the accretion process. We can roughly characterise the observed outbursts as lasting \(\approx\)10 days, with a mean X-ray luminosity of \(L_{\rm 0.3-2}\sim~{}3\times 10^{42}\) erg s\({}^{-1}\), thus \(E_{\rm ob}\sim~{}3\times 10^{48}\) erg, hence \(M_{\rm acc}\approx 3\times 10^{27}k/\eta\) g. A typical value for the accretion efficiency is \(\eta~{}\sim~{}0.1\)[27]. The lack
of observed emission in the UV or optical is consistent with the simple blackbody model fitted to the XRT spectrum, from which we find \(k\sim 1\). A correction is needed to account for the absorption, however this change is found (via fitting in xspec) to be very small; therefore, \(M_{\rm acc}\approx 10^{-5}M_{\odot}\) is a reasonable estimate of the mass accreted during an outburst.
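A worked version of this estimate with astropy units (the 10-d duration and \(3\times 10^{42}\) erg s\({}^{-1}\) mean luminosity are the rough characterisations used above):

```python
import astropy.units as u
from astropy.constants import c, M_sun

L_mean = 3e42 * u.erg / u.s              # mean 0.3-2 keV outburst luminosity
E_ob = (L_mean * 10 * u.day).to(u.erg)   # ~3e48 erg
k, eta = 1.0, 0.1                        # bolometric correction; radiative efficiency
M_acc = (E_ob * k / (eta * c**2)).to(u.g)
print(M_acc, (M_acc / M_sun).decompose())  # ~3e28 g, i.e. ~1.7e-5 M_sun
```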
We can provide an estimate of the black hole mass in Swift J0230 by comparing the measured temperature of \(\sim~{}100\) eV (\(1.16\times 10^{6}\) K) with the peak temperature of a standard accretion disc [27]. The temperature profile of a standard disc accreting at a rate \(\dot{M}\) is
\[T_{\rm eff}(R)=\left\{\frac{3GM\dot{M}}{8\pi R^{3}\sigma}\left[1-\left(\frac{R _{\rm in}}{R}\right)^{1/2}\right]\right\}^{1/4}\,, \tag{2}\]
and thus the maximum temperature reached is
\[T_{\rm max}=\frac{6^{6/4}}{7^{7/4}}\left(\frac{3GM\dot{M}}{8\pi\zeta^{3}R_{\rm g}^{3}\sigma}\right)^{1/4}=\frac{6^{6/4}}{7^{7/4}}\left(\frac{3c^{6}\dot{M}}{8\pi\zeta^{3}\sigma G^{2}M^{2}}\right)^{1/4} \tag{3}\]
where the innermost stable circular orbit occurs at \(R_{\rm isco}=\zeta R_{\rm g}\) (with \(R_{\rm g}=GM/c^{2}\) and \(\zeta\) ranging from unity for a maximally spinning prograde black hole to \(9\) for a maximally spinning retrograde black hole), and the numerical pre-factor of \(6^{6/4}/7^{7/4}\approx 0.488\) is due to the zero-torque inner boundary condition (see [54] and references therein for a discussion of discs with non-zero torque boundary conditions).
Taking the total mass accreted during a typical outburst (derived above) of \(10^{-5}~{}M_{\odot}\) over a period of 10 d yields an accretion rate of \(\sim 2\times 10^{22}\) g s\({}^{-1}\). Putting this into Equation 3 gives an estimate for the black hole mass of \(M_{\bullet}\sim~{}2.4\times 10^{5}~{}(k^{1/2}/\zeta^{3/2})M_{\odot}\), similar to those reported for QPE sources [18, 20, 21, 22]. We note that for the blackbody temperature range of 50-250 eV (Extended Data Figure 2), the black hole mass estimate ranges from \(6\times 10^{4}~{}M_{\odot}\) to \(1.5\times 10^{6}~{}M_{\odot}\). With an estimate of the black hole mass, we can now consider the types of system which can produce an rpTDE consistent with the behaviour of Swift J0230.
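Inverting equation (3) for the black hole mass, with the accretion rate derived above; the choices \(\zeta=1\) and \(k=1\) here are illustrative:

```python
import numpy as np
import astropy.units as u
from astropy.constants import G, c, sigma_sb, M_sun

Mdot = 2e22 * u.g / u.s                 # ~1e-5 M_sun accreted over ~10 d
zeta = 1.0                              # R_isco = zeta * R_g (illustrative)
T_max = (100 * u.eV).to(u.K, equivalencies=u.temperature_energy())
prefac = (6**1.5 / 7**1.75) ** 2        # square of the ~0.488 pre-factor
M_bh = prefac * np.sqrt(3 * c**6 * Mdot / (8 * np.pi * zeta**3 * sigma_sb)) / (G * T_max**2)
print((M_bh / M_sun).decompose())       # ~2e5 for these choices
```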
As discussed in the main paper, the formation of a system in which rpTDE can take place requires the Hills mechanism [29], in which a supermassive black hole disrupts a stellar binary system, resulting in one star being ejected and the other being bound to the black hole. In some cases the bound star may have a sufficiently small pericentre distance that it is (fully or partially) disrupted by the black hole, and [30] propose this as a route for forming rpTDEs. They show that the orbital period of the bound star is
\[P_{\bullet}\simeq\pi\frac{a_{*}^{3/2}}{\sqrt{2GM_{*}}}\left(\frac{M_{\bullet }}{M_{*}}\right)^{1/2}\,, \tag{4}\]
where \(M_{\bullet}\) is the mass of the black hole, and \(M_{*}\) the mass of the primary star. The semi-major axis of the (pre-disruption) binary star system, \(a_{*}\), is strongly constrained. For \(a_{*}\gtrsim GM_{*}/\sigma^{2}\sim 0.02\) AU (for \(M_{*}=M_{\odot}\) and stellar velocity dispersion \(\sigma=200\) km s\({}^{-1}\)), the binary system would be destroyed by the tidal field of the galaxy centre before reaching the central black hole, while for \(a_{*}\lesssim 0.001\) AU it is hard to fit a main-sequence star of any mass inside the binary orbit. If the progenitor of Swift J0230 were a binary consisting of a compact object (e.g. a white dwarf) and a low-mass main-sequence star, with a semi-major axis of \(a_{*}=\) 0.005-0.01 AU, and the compact object was ejected during the Hills encounter, then for the low-mass star to enter into a bound orbit around the black hole with the observed \(\sim\) 25 d period requires a black hole mass of \(M_{\bullet}\sim 4\times 10^{5}~{}M_{\odot}\). This is consistent with the mass estimate derived from the temperature of the X-ray spectrum. It is worth noting that the exact value of the orbital period of the bound star depends somewhat on the details of the Hills capture process. For example, it is possible to get periods of order \(\sim\) 25 d with a larger black hole mass (of order \(10^{7}~{}M_{\odot}\)), but as shown in the probability distribution functions (PDFs) displayed in fig. 1 of [30], such cases are relatively unlikely, with the PDF of the orbital period strongly peaked around the value given by equation 4.
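Equation 4 can be evaluated directly; the binary parameters below are illustrative choices within the allowed range (a lower-mass star requires a correspondingly tighter binary for the same period):

```python
import numpy as np
import astropy.units as u
from astropy.constants import G, M_sun

def hills_period(a_star, M_star, M_bh):
    """Orbital period of the captured star after a Hills breakup (equation 4)."""
    return (np.pi * a_star**1.5 / np.sqrt(2 * G * M_star)
            * np.sqrt(M_bh / M_star)).to(u.day)

# Illustrative: a 1 M_sun primary from a 0.005 au binary, M_bh = 4e5 M_sun
print(hills_period(0.005 * u.au, 1 * M_sun, 4e5 * M_sun))  # ~29 d
```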
The pericentre distance from the black hole at which the star can be partially disrupted is \(\approx 1.6(M_{\bullet}/M_{*})^{1/3}R_{*}\) (for example, ref [55]), where \(R_{*}\) is the radius of the star. The pericentre distance of the stellar orbit must therefore be \(r_{\rm p}\gtrsim 50R_{\rm g}\), where \(R_{\rm g}=GM/c^{2}\) is the gravitational radius of the black hole (assuming a low-mass star and a black hole of mass a few \(\times 10^{5}~{}M_{\odot}\)). This suggests why the outburst period of Swift J0230 is so long compared to the other QPE sources with a similar black hole mass. The QPE sources, with white-dwarf donors, must have pericentre distances that are \(\lesssim 5R_{\rm g}\) to liberate any mass from the white dwarf (indeed in many cases the orbits calculated by [16] have pericentre distances that imply the black hole must be rapidly spinning in the prograde direction to avoid directly capturing the star in a single orbit). Thus, in those sources, any accretion flow that forms will evolve rapidly due to the small circularization radius of the stellar debris, whereas here we have a much larger circularization radius of \(\sim~{}50-100R_{\rm g}\). The viscous timescale for a standard disc with \(M_{\bullet}\sim 3\times 10^{5}~{}M_{\odot}\), \(\alpha\approx 0.3\) (see ref [56]) and disc angular semi-thickness \(H/R\sim 0.1\), is of the order of hours for radii of order a few \(R_{\rm g}\), and grows to of the order of days for radii of order \(50R_{\rm g}\). This provides the correct timescales observed for both the QPE sources and Swift J0230. It is therefore possible to explain Swift J0230 as the repeated partial tidal disruption of a solar-type star by a modest-mass black hole, within the framework of a model that can also explain the shorter-period QPEs (white dwarfs orbiting modest-mass black holes) and the longer-period PNTs (main sequence stars around more massive black holes).
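The pericentre scale quoted here follows from the partial-disruption radius; a short sketch for an illustrative low-mass star:

```python
import astropy.units as u
from astropy.constants import G, c, M_sun, R_sun

M_bh = 4e5 * M_sun
M_star, R_star = 0.5 * M_sun, 0.5 * R_sun      # illustrative low-mass star
r_t = 1.6 * (M_bh / M_star)**(1 / 3) * R_star  # partial-disruption radius
R_g = G * M_bh / c**2
print((r_t / R_g).decompose())                 # ~90, consistent with r_p >~ 50 R_g
```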
Our black-hole mass estimate above was based on the X-ray spectrum, as in previous QPE papers (for example, ref [18]); however, other methods may also be considered. If we assume that the peak accretion rate is at the Eddington limit then this requires a black hole mass of \(1.6\times 10^{5}~{}M_{\odot}\), although this is probably a lower limit as the accretion is more likely to be sub-Eddington. Using the starlight fits to the optical spectrum (Figure 2) we can estimate the total stellar mass in the host galaxy 2MASX J02301709+2836050 as \((2.93\pm 0.94)\times 10^{9}~{}M_{\odot}\). The relationship between stellar mass and black hole mass shown by [57] (see their fig. 8) and [58] (fig. 4) shows that the expected black hole mass is in the range \(\log M_{\rm BH}/M_{\odot}\approx\) 5-7, consistent with the above measurements. Recently, [59] obtained spectroscopic observations to determine the black hole masses of various QPE hosts using velocity dispersion, finding low masses similar to those predicted by the X-ray temperature approach. This may indicate that QPEs preferentially occur in galaxies with unusually low black hole masses; spectroscopic measurements of 2MASX J02301709+2836050 in order to measure the velocity dispersion will be needed to confirm the black hole mass in this system.
The rpTDE model can thus account for systems spanning the range from the QPEs, through Swift J0230, to PNTs and beyond. By contrast, to explain the \(\sim 25\) d variability of Swift J0230 with the star-star interaction model, a coplanar set of stars is required. As the stars are likely to encounter the black hole on randomly distributed orientations, a mechanism is required to ensure the stars settle into a coplanar state. [9] appeal to Type I inward migration through a gaseous AGN disc, which in the case of Swift J0230 may not be present.
**Other possible interpretations** Other possible explanations for Swift J0230 may be considered, although we find none of them as compelling as the rpTDE model described above. Repeating stellar phenomena such as X-ray binary outbursts can be disregarded on luminosity grounds. Magnetar-powered emission can produce the required luminosity (for example, refs [6, 7, 60]). We have above discussed and discarded the possibility of a magnetar connected to the two-year-old SN 2020rht; more generally, while a magnetar could produce emission with luminosity and spectrum consistent with Swift J0230, the variability observed cannot be explained by this model. The spectral variation observed rules out the possibility of a steadily-emitting magnetar periodically obscured by some absorbing disc or stream, and the absence of strict periodicity rules out binary eclipses.
A class of objects known as 'Fast X-ray transients' has been identified in archival data (for example, refs [61, 62, 63]), which undergo relatively rapid X-ray outbursts. However, these are spectrally harder and more luminous than Swift J0230, and have much shorter outbursts (\(<50\) ks [63]). Further, they have not been observed to repeat; indeed [63] only identify as FXT candidates those objects which were only detected once.
A more promising type of analogous object is HLX-1 in ESO 243-49, which has been proposed as an accreting intermediate mass black hole (IMBH; a black-hole with a mass of \(10^{3-4}~{}M_{\odot}\)) [64]. This system has a similar peak luminosity to Swift J0230, but a harder spectrum and much longer period. Various models have been proposed to explain this system, including wind accretion [65] and disc instability [66], or rpTDE akin to that proposed above [16]. As shown above, simple energetics suggests that the amount of matter accreted during a single outburst (and thus stripped from the star every \(\sim\) 25 d) is \(\sim~{}10^{-5}M_{\odot}\), orders of magnitude too high to be powered by a stellar wind. In the disc-instability model [66], the accretion disc is formed by Roche-Lobe overflow, which can provide the requisite amount of mass. In their preferred scenario, the accretion is modulated by a disc wind instability. Furthermore, the HLX outbursts show a fast-rise, slow-decay morphology which is the opposite to that seen for Swift J0230. The differing shape, much shorter period, lack of optical variability, and softer spectra than HLX-1 all pose significant challenges for this interpretation. We therefore conclude that the progenitors of HLX-1 and Swift J0230 are likely to be physically distinct.
We thus conclude that rpTDE is the most likely explanation for Swift J0230, although detailed numerical modelling of the accretion flows is required to confirm whether the deviations from strict periodicity and occasional long quiescent times can be explained by this model.
## Declarations
**Data Availability.** All of the _Swift_ data are available via the Swift data archives provided in the USA ([https://swift.gsfc.nasa.gov/archive/](https://swift.gsfc.nasa.gov/archive/)), UK ([https://www.swift.ac.uk/archive/](https://www.swift.ac.uk/archive/)) and Italy ([https://www.ssdc.asi.it/mmia/index.php?mission=swiftmastr](https://www.ssdc.asi.it/mmia/index.php?mission=swiftmastr)); they have targetIDs 00014936 and 00015231. Reduced _Swift_-XRT data for this transient are available at [https://www.swift.ac.uk/LSXPS/transients/690](https://www.swift.ac.uk/LSXPS/transients/690). The _Chandra_ data are publicly available via the _Chandra_ data archive ([https://cxc.harvard.edu/cda/](https://cxc.harvard.edu/cda/)), with sequence 704871 and obsID 27470. The NOT data will be available through the NOT public interface after the expiration of the standard proprietary period; the reduced spectrum is available through the University of Leicester FigShare repository ([https://doi.org/10.25392/leicester.data.c.6444296](https://doi.org/10.25392/leicester.data.c.6444296)). The Liverpool Telescope Data will be available through the Liverpool Telescope public interface after the expiration of the standard proprietary period; the photometry was included in this published article.
## Acknowledgments
This work made use of data supplied by the UK Swift Science Data Centre at the University of Leicester. We acknowledge the following funding support: UK Space Agency, grant ST/X001881/1 (PAE, KLP, RAJE-F and AAB). The Science and Technology Facilities Council, grants ST/Y000544/1 (CJN), and ST/W000857/1 (POB). The Leverhulme Trust, grant RPG-2021-380 (CJN). Italian Space Agency, contract ASI/INAF n. I/004/11/5 (SC). European Union's Horizon 2020 Programme under the AHEAD2020 project, grant 871158 (RAJE-F). European Research Council under the European Union's Horizon 2020 research and innovation programme, grant 725246 (DBM). The Cosmic Dawn Center (DAWN) is funded by the Danish National Research Foundation under grant No. 140. The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation. LI was supported by grants from VILLUM FONDEN (project numbers 16599 and 25501). We thank Andy Beardmore for help with the bootstrapping method used in the period analysis.
**Author contributions.** PAE authored the tools that discovered the event, was PI of the _Swift_ and _Chandra_ observations, performed most of the X-ray data analysis and led the writing of the article. CJN carried out the theoretical interpretation of the data, and produced the associated text. SC first noticed the automated report of the new transient and classified it as of interest; he was CoI of the _Chandra_ observations. PC obtained the NOT spectrum and led the analysis of it, producing Figure 2 and Extended Data Figure 3. DAP obtained and analysed the Liverpool Telescope data. AAB led the analysis of the UVOT data. KLP carried out some of the XRT data analysis, particularly spectral fitting. SRO supported the UVOT data analysis. RAJE-F was a CoI of the _Chandra_ observations and was involved in many discussions concerning the interpretation of the object. DBM arranged for the acquisition of the NOT spectrum, helped with its analysis and interpretation, and was a CoI of the _Chandra_ observations. LI conducted the analysis of the NOT spectrum with starlight. MRG and PTO offered AGN expertise in support of ruling out AGN activity as the cause of the observed outburst. JPO and BS offered programmatic support and general input. All authors read the text and contributed to its editing.
**Competing interests.** The authors declare no competing interests.
|
2302.00065 | High-availability displacement sensing with multi-channel self mixing
interferometry | Laser self-mixing is in principle a simple and robust general purpose
interferometric method, with the additional expressivity which results from
nonlinearity. However, it is rather sensitive to unwanted changes in target
reflectivity, which often hinders applications with non-cooperative targets.
Here we analyze experimentally a multi-channel sensor based on three
independent self-mixing signals processed by a small neural network. We show
that it provides high-availability motion sensing, robust not only to
measurement noise but also to complete loss of signal in some channels. As a
form of hybrid sensing based on nonlinear photonics and neural networks, it
also opens perspectives for fully multimodal complex photonics sensing. | Robin Matha, Stephane Barland, François Gustave | 2023-01-20T14:28:09Z | http://arxiv.org/abs/2302.00065v1 | # High-availability displacement sensing with multi-channel self mixing interferometry
###### Abstract
Laser self-mixing is in principle a simple and robust general purpose interferometric method, with the additional expressivity which results from nonlinearity. However, it is rather sensitive to unwanted changes in target reflectivity, which often hinders applications with non-cooperative targets. Here we analyze experimentally a multi-channel sensor based on three independent self-mixing signals processed by a small neural network. We show that it provides high-availability motion sensing, robust not only to measurement noise but also to complete loss of signal in some channels. As a form of hybrid sensing based on nonlinear photonics and neural networks, it also opens perspectives for fully multimodal complex photonics sensing.
1DOTA, ONERA, Universite Paris-Saclay, F-91123, Palaiseau, France
2Universite Cote d'Azur - CNRS, Institut de Physique de Nice, 1361 route des Lucioles, F-06560, Valbonne, France
*[email protected]
## 1 Introduction
Complex (nonlinear or disordered) systems have long been considered as high-capacity "information processors" [1]. For the specific task of processing environmental changes (_ie "sensing"_), complex waves and photonic systems are particularly useful thanks to their capability to process information from a distance. Recently, hybrid solutions leveraging complex wave physics and computer neural networks have enabled outstanding achievements [2, 3], at the intersection between neural networks and optics [4]. Thanks to its nonlinearity, self-mixing interferometry can carry much more information than its linear counterpart, which makes it potentially useful for many sensing tasks [5, 6, 7, 8, 9, 10, 11].
Self mixing interferometry is based on a delayed feedback effect, where a laser beam re-enters the emitting laser itself (in most cases in a semiconductor laser) after being reflected on a target. As is well known, this feedback alters the operation point of the laser and by monitoring this operation point (either via an integrated photodiode or by simply measuring the voltage across the diode), information about the displacement of the target can be retrieved. Thanks to this simplicity and compactness, this scheme is expected to provide a robust and reliable sensing technique, for instance for the displacement of a target along the light propagation axis. However, the operation range in which information can be reliably retrieved is in fact an important constraint, notably in terms of the amount of light which re-enters the emitting laser. This is often characterized via the coupling parameter \(C\) which also depends on the target distance. We refer the reader to for instance [5] for a very complete discussion of the different feedback regimes depending on \(C\) (including more parameters in [12]) but here we only underline that in general, the shape of the signal strongly depends on \(C\), which makes signal processing difficult. More importantly, when the amount of light re-entering the device is too small (\(C<<1\)), the self mixing signal becomes harmonic and the information about the direction of the displacement of the target is lost. At the other edge of the operation regime (when \(C>4.6\)), the dynamics of the laser with reinjection becomes multistable or even unstable towards chaotic regimes. Again, in this range, the self-mixing signal does not carry reliable information about the displacement of the target. For these reasons, great care must be taken to keep the system within the useful operation
range. Different approaches have been taken to mitigate unwanted variation of \(C\) for instance due to speckle effect, including dedicated algorithms [13], on the fly parameter estimation [14, 15] or training a neural network in different alignment conditions [16, 17]. However, the hardest limitation is the progressive loss (eventually up to complete lack) of significance of the signal when \(C<<1\) or \(C>4.6\). Since this limitation is of physical origin, tracking hardware has been proposed for the case of speckle [18] and adapted beam focussing has been analysed for large displacements [19].
Here, we propose that high-availability motion sensing can be achieved with a multi-channel self-mixing interferometer equipped with a simple, embeddable neural network. Multichannel self mixing has already been envisioned for complex measurements (see \(eg\)[20, 21, 22, 23]) but only seldom considered as a potential enhancement [24] for the acquisition of a single measurement. On the other hand, machine learning has been increasingly used in self-mixing applications, including for fringe detection [25, 26, 27], parameter estimation [15], signal enhancement [28, 29], vibration measurement [30] and displacement inference [16]. Here we show that, thanks to the intrinsic capacity of neural networks to process high-dimensional data, a multichannel self-mixing sensor can provide high availability displacement measurements, robust against signal loss and with enhanced resolution.
In section 2 we present the experimental arrangement (2.1), the neural network design (2.2.1) and its training procedure (2.2.2). We assess the performance of the system in section 3 in terms of accuracy (3.1), robustness against noise (3.2) and measurement availability (3.3). We present our conclusions in section 4.
## 2 Experimental set up, model and training
### Experiment
The principle of the experimental arrangement is presented in Fig. 1. It consists of three independent self-mixing measurement channels with a (calibrated) speaker which acts as a target, a moving surface. Each channel is composed of a power supply, a laser diode and a signal amplification stage. The lasers of channels 1 and 2 emit at \(\lambda=1310\) nm (ML725B8F) and their
Figure 1: Scheme of the experimental setup. Three independent self-mixing channels monitor the displacement of a single non-cooperative target and a neural network processes the resulting high dimensional data to estimate the displacement of the target.
coherent emission threshold is 6.5 mA. The laser of channel 3 emits at \(\lambda\)=1550 nm (ML925B45F) and its threshold is around 10 mA. For all the acquisitions that will be described later, the lasers were respectively driven with DC currents of 7.7 mA (channels 1 and 2) and 41.7 mA (channel 3). The laser beams are focussed onto the speaker surface, which is located about 15 cm away from each laser. The lasers are placed very close to each other to minimize the angle between the beams and the normal to the speaker surface, minimizing the error between the actual displacement and its projection along each beam propagation direction.
The three self-mixing signals are obtained by measuring the voltage across each laser diode for two of them and the current through the internal photodiode for the third one (dictated by available equipment). These signals being of low amplitude, each voltage signal is amplified by an AC-coupled amplifier with \(10^{4}\) gain factor and several MHz bandwidth and the photodiode current is amplified by a transimpedance amplifier. Since self-mixing interferometry with non-cooperative targets is often plagued with signals of varying quality (mostly due to speckle effect) we purposefully align the lasers slightly differently so that each laser operates in a slightly different regime. An example is shown on Fig. 2. On the top row, the self-mixing signal is not particularly noisy but it corresponds to a rather low reinjection value and therefore the asymmetry of the fringes is not very visible. On the second row, the signal is of excellent quality with low amount of noise and well defined asymmetric fringes. Finally, the signal shown on the third row is hardly exploitable at all, with a very poor signal to noise ratio.
The displacement of the speaker has been calibrated and the linearity of the response of the speaker to a harmonic signal in a range of 5 to 100 Hz allows us to have access to the effective displacement of the surface. In this frequency range the response of the loudspeaker does not show any phase shift. These harmonic signals are generated by the sound card of the computer with a normalized amplitude between 0.1 and 1, which means displacements between 3.5\(\upmu\)m and 7.5\(\upmu\)m. This electrical signal is recorded during the experiment and (after some preprocessing described below) forms the basis of the truth signal. An example of the displacement signal is shown on the bottom row of Fig. 2.
### Model and Training
#### 2.2.1 Model design
In this experiment, the neural network is supposed to infer the displacement of the target on the basis of three interferometric signals. The problem is therefore in principle a (multi-)sequence to sequence task. However, as was pointed out in [16], defining a spectral band of operation for the displacements to be measured allows one to convert the task into a simpler regression task by downsampling the displacement signal. In this case, the frequency operation regime we define ranges from 10 to 100 Hz. Within this range of displacement frequency and with 4 \(\upmu\)s sampling time for the signal acquisition (both interferometric signals and displacement), we choose to analyse the time series in chunks of 1.024 ms, each of them containing 256 data points per interferometric channel and per displacement measurement. Due to the selected frequency band and the Nyquist theorem, each chunk of 256 measurements of speaker voltage can be replaced by a single average displacement value over the 256 time steps. This converts the sequence to sequence task into a much simpler regression task, where one displacement value must be inferred from three interferometric sequences. At this point, we can use the very simple convolutional neural network architecture proposed in [16], since it appeared to carry sufficient information capacity to efficiently process single-channel data. However, each input vector is now composed of three channels, one per interferometric signal. Although advanced use of multiple sensing modalities is a very relevant area of research in itself (see _eg_[31, 32, 33, 34]), here we deal with the simpler case of multiple measurement channels which operate on similar modalities. Thus, we stick to a very simple "early fusion" approach: the first convolutional layer operates on the three input channels and fuses the resulting kernels into a single output space,
irrespective of which specific channel activated them. After this stage, the network consists of a contracting stack of downsampling and 1D, single channel, convolutional layers. At the end of the stack, two fully connected layers perform the final regression towards a single displacement value per time interval. All layers use a Rectified Linear Unit activation function except for the last layer which is linear. Details about the network are available in the Appendix.
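A minimal Keras sketch of this architecture is given below. Since Table 1 with the exact layer sizes is not reproduced here, the filter counts and stack depth are illustrative assumptions; only the overall shape (early-fusion first convolution over the 3 channels, a contracting 1D convolutional stack, then two dense layers ending in a linear regression output) follows the description above.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(n_samples=256, n_channels=3):
    """Early-fusion 1D CNN: three self-mixing channels in, one displacement value out.
    Filter counts are illustrative; the paper's exact values are in its Table 1.
    Input uses channels-last layout, i.e. each sample has shape (256, 3)."""
    inp = layers.Input(shape=(n_samples, n_channels))
    # First convolution fuses the three channels into a single feature space
    x = layers.Conv1D(32, 7, padding="same", activation="relu")(inp)
    # Contracting stack: alternate downsampling and 1D convolutions
    for filters in (32, 64, 64):
        x = layers.MaxPooling1D(2)(x)
        x = layers.Conv1D(filters, 5, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    # Final regression: two fully connected layers, linear output
    x = layers.Dense(64, activation="relu")(x)
    out = layers.Dense(1, activation="linear")(x)
    return models.Model(inp, out)

model = build_model()
model.compile(optimizer="adam", loss="mse")  # trained by minimizing the MSE
model.summary()
```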
#### 2.2.2 Training
The training set [35] consists only of experimental data. We record simultaneously the three self-mixing signals and the speaker voltage, calibrated as a displacement. The sampling interval is 4 \(\mu\)s and the record length is 499968 points. The training set features only harmonic displacements, with eight evenly spaced frequencies between 53 Hz and 93 Hz and six evenly spaced amplitudes between 0.4 and 0.9 V applied to the speaker, corresponding to 3.5 and 7.5 \(\mu\)m. Thus, the training set contains 48 different configurations in terms of frequency and amplitude of periodic displacement.
Before processing by the network, the interferometric signals are centered around zero and normalized by subtracting the mean value of the signal over the full record length and dividing by its standard deviation. In order to reinforce the robustness of the network to noise, we add to each signal a gaussian white noise of standard deviation \(\sigma_{n}=\sigma_{s}\) where \(\sigma_{s}\) is the standard deviation of the original self-mixing signal.
As for the truth signal, first we remove the electronic noise (intrinsic oscilloscope noise and digitization noise) by smoothing the displacement signal using a Savitzky-Golay filter whose parameters are a sliding interval of 1001 points and a polynomial degree of 2. Then we compute the average displacement during a time window of 256 sampling points which gives a signal in
Figure 2: Experimental data example. The top three rows show the self-mixing signal provided by each laser and the bottom row the corresponding displacement. The lasers are purposefully set to provide different quality of self-mixing signals (see text).
units of Volts per time window. We then use the sampling interval \(dt=4~\mu\)s, which gives a time window of 1.024 ms, and the speaker calibration of 31.3 \(\mu\)m/V to convert the signal into physical units of \(\mu m/ms\).
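This preprocessing chain can be summarized in a few lines of Python (a sketch; array names are assumed, and `scipy.signal.savgol_filter` plays the role of the Savitzky-Golay smoothing described above):

```python
import numpy as np
from scipy.signal import savgol_filter

DT = 4e-6        # sampling interval, s
WINDOW = 256     # samples per window (1.024 ms)
CALIB = 31.3     # speaker calibration, um/V

def preprocess_channels(signals):
    """Center and normalize each channel over the full record, then add
    gaussian noise with sigma_n = sigma_s for robustness (see text)."""
    s = (signals - signals.mean(axis=-1, keepdims=True)) / signals.std(axis=-1, keepdims=True)
    return s + np.random.normal(0.0, 1.0, s.shape)

def make_truth(speaker_volts):
    """Savitzky-Golay smoothing, per-window averaging and conversion to um/ms."""
    smooth = savgol_filter(speaker_volts, window_length=1001, polyorder=2)
    n_win = len(smooth) // WINDOW
    mean_volts = smooth[: n_win * WINDOW].reshape(n_win, WINDOW).mean(axis=1)
    window_ms = WINDOW * DT * 1e3    # 1.024 ms per window
    return mean_volts * CALIB / window_ms   # um per window / ms per window
```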
The network can then be trained using as input about \(9.5*10^{4}\) arrays of size \(3\times 256\) corresponding to three interferometric signals over time windows of \(256\times dt\) and as truth the corresponding average displacement over each time window, with 10% of the samples kept for validation. Of course during training the order of samples is randomized at each epoch but an additional randomization is also operated on the order of the channels themselves, _ie_ a single displacement value can correspond to any permutation of the measurement channels. This point implies that the network must be trained in physical units of \(\mu\)m/ms but we have observed that it leads to a remarkable improvement of the reconstruction performance. We attribute it to an enhancement of training for all weights, especially at the first layer, instead of focusing training immediately on the most significant channel. In addition to randomization, during training we randomly replace one of the self-mixing signals with white noise (with probability 1/4) to prepare the network for potential channel loss during the inference phase. We train the network by minimizing the mean squared error between the inferred and the true displacement during 18 epochs in batches of 32 samples. On a (consumer grade) GTX1080 GPU the training time is about 20 min.
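The two training-time augmentations, random channel permutation and occasional channel replacement by white noise, can be sketched as follows (illustrative; in practice this would be reapplied at every epoch, e.g. inside a data generator):

```python
import numpy as np

RNG = np.random.default_rng(0)

def augment_batch(x):
    """Training-time augmentation sketch for inputs of shape (batch, 256, 3):
    randomly permute the channel order of each sample, and with probability 1/4
    replace one channel by white noise to mimic a lost channel."""
    x = x.copy()
    for i in range(x.shape[0]):
        x[i] = x[i][:, RNG.permutation(3)]      # random channel permutation
        if RNG.random() < 0.25:                 # simulate complete channel loss
            x[i, :, RNG.integers(3)] = RNG.normal(size=x.shape[1])
    return x

# Applied anew at each epoch (e.g. via a keras.utils.Sequence or tf.data map), then:
# model.fit(..., epochs=18, batch_size=32, validation_split=0.1)
```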
## 3 Results
After training, we assess the performance of a multi-channel measurement approach against that of the usual single-channel approach, first in terms of accuracy and second in terms of robustness to measurement noise up to the complete loss of one or the other channel. Since each channel
Figure 3: Reconstruction performance based on the single channels in isolation (the top three rows are three 1D models) and on the three channels simultaneously (bottom row, 3D model).
provides an independent measurement, we will refer to a model processing all three channels as a three-dimensional (3D) model, as opposed to a model processing a single dimension, which we will refer to as a one-dimensional (1D) model.
### Accuracy
In Fig. 3, we show the reconstruction of the target trajectory from each measurement channel and the reconstruction based on the three channels simultaneously. The displacement reconstruction is expected to operate correctly for arbitrarily complex displacements; therefore we analyse its performance (as in [16]) on a random displacement with a prescribed bandwidth (here 10 to 100 Hz). To achieve this, we generate a random \(\delta\)-correlated signal which we Fourier filter with a fifth order Butterworth filter between 10 and 100 Hz. This signal is sent to the speaker, which then undergoes a random motion. On the top three traces, we use three 1D models trained on the experimental data set described above, each model processing only one of the three self-mixing signals. We quantify the quality of the reconstruction by measuring the Pearson correlation coefficient and the Root Mean Square Error between the true displacement and the reconstruction on a 512-second-long time trace of this random displacement (\(5*10^{5}\) samples of 1.024 ms duration). On the top trace, the reconstruction is rather good with a correlation coefficient between the true displacement and the reconstruction of \(r=0.83\) and a Root Mean Square Error \(RMSE<0.23\)\(\mu\)m/ms. The second row is based on the best measurement channel, with low detection noise and well defined, clearly asymmetric fringes. Correspondingly, the quality of the target displacement reconstruction is excellent, with a correlation coefficient between the true displacement and the reconstruction of \(r=0.97\) and a \(RMSE<0.11\)\(\mu\)m/ms. On the third row instead, the reconstruction of the displacement is again of lower quality, which is not unexpected since the self-mixing signal is really very poor (see Fig. 2, third row). We note however that the reconstruction is essentially equivalent to that of the first channel, with a correlation coefficient \(r=0.83\) and \(RMSE<0.24\)\(\mu\)m/ms. Finally, on the bottom row, we show the reconstruction based on a 3D model processing the three channels simultaneously. This reconstruction is almost as good as the one obtained only on the single highest quality measurement (channel 2) and is markedly better (\(r=0.96,RMSE=0.13\mu\)m/ms) than the one obtained on the two poorest channels (1 and 3).
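The test trajectories and figures of merit described here are straightforward to reproduce; a sketch is given below, where the effective sampling rate of one value per 1.024 ms window is an assumption consistent with the chunking of Sec. 2.2.1:

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import pearsonr

FS = 1.0 / 1.024e-3  # one value per 1.024 ms window, ~977 Hz

def random_displacement(n_samples, f_lo=10.0, f_hi=100.0, fs=FS, seed=0):
    """Band-limited random target motion: white noise filtered through a
    fifth-order Butterworth band-pass, as used for the test trajectories."""
    b, a = butter(5, [f_lo, f_hi], btype="bandpass", fs=fs)
    white = np.random.default_rng(seed).normal(size=n_samples)
    return filtfilt(b, a, white)

def reconstruction_metrics(true_disp, inferred_disp):
    """Pearson r and RMSE between the true and reconstructed displacement."""
    r, _ = pearsonr(true_disp, inferred_disp)
    rmse = np.sqrt(np.mean((true_disp - inferred_disp) ** 2))
    return r, rmse
```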
### Robustness against signal degradation
We have seen above that (at least with the simple early fusion approach we use here in the neural network), the redundancy in measurement channels does not immediately translate into a higher measurement precision. However, as we shall see in the following, it does very strongly improve the resilience of the system to noise present in one or more channels.
To quantify this effect, we check the reconstruction performance on the same data set as above after adding to one or more channel(s) a \(\delta\)-correlated gaussian noise with standard deviation \(\sigma_{n}\) which models the noise-induced degradation of self-mixing measurements. The results of this procedure are shown on Fig. 4, where we show (top row) the correlation coefficient between the true displacement and the reconstruction of the displacement provided by several models and (bottom row) the Root Mean Square Error on the reconstruction for each model depending on noise added to one or more channels.
The three 1D model curves (red, green and purple triangles) are shown for reference and display essentially the same behavior. For each case, we use a single channel for the reconstruction and degrade this channel by adding noise to it. At first, each model provides an acceptable level of performance but even the best signal (channel 2) becomes very unreliable when the added noise is larger than \(\sigma_{n}=3\sigma_{s}\) where \(\sigma_{s}\) is the standard deviation of the uncontaminated original self-mixing signal. The fact that there is an optimal non-zero level of noise in these curves is a result of our training procedure: for robustness all models have been trained with \(\sigma_{n}=\sigma_{s}\) and
therefore have never seen uncontaminated signals.
The two curves based on 3D models (blue disks) show the robustness of a model operating on three channels simultaneously. The continuous blue line is obtained when noise is added only to the second self-mixing channel _ie_ the one with the best performance when used in isolation. We observe that when a single channel is fully degraded (\(\sigma_{n}=8\sigma_{s}\) for instance) this model still operates at a very high level of performance (\(r=0.93\), \(RMSE=0.16\)). This is an excellent point in terms of measurement quality and availability. Regarding measurement availability, this shows that a 3D measurement system can very well operate under highly degraded conditions _ie_ even when the highest performance channel is basically lost. The dashed blue line is obtained when noise is added to the second self-mixing channel and also (starting from \(\sigma_{n}>3.5\sigma_{s}\)) to the third measurement channel. When this noise term \(\sigma_{n,3}\) is added to the third channel, the measurement quality of course decreases since the model in the end operates basically on only a single remaining channel. Accordingly, when both channels 2 and 3 have become unusable due to very high noise (for instance \(\sigma_{n}=7,\sigma_{n,3}=5\)), the performance level of the 3D model is basically the performance level of the 1D model operating on the uncontaminated channel 1 (\(r=0.83\), \(RMSE=0.23\)). Again, this is an excellent point in terms of measurement availability since a three-channel measurement system can operate at the performance level of a single channel when the other two are lost.
Finally, for completeness, the performance of a 2D model (trained only on the poorest self-mixing signals 1 and 3) is shown as the dashed black line with squares. Of course the performance of that model is not affected by noise added to channel 2 which it does not process but it is important to underline that the 2D model performance is better than that of the 1D models based only on
Figure 4: Robustness of the reconstruction against noise in one or more self-mixing signals. Top: Pearson correlation coefficient between the reconstruction provided by models and the true displacement. Bottom: Root Mean Squared Error on the displacement reconstruction.
channels 1 and 3 in isolation. When noise is added also to channel 3, the performance of this model smoothly degrades down to that of the 1D model based only on an unperturbed channel 1.
Overall, the above observations demonstrate several outstanding properties (in terms of accuracy and robustness) of a 3D approach to displacement measurement:
* In the absence of noise, the 3D model's performance almost matches that of the best 1D model and is much better than that of the other two 1D models
* When noise degrades the most informative channel, the 3D model strongly outperforms all 1D models and also outperforms a 2D model based on unperturbed channels
* Even if noise strongly degrades two of the three available channels, the performance of the 3D model never degrades below that of the 1D model based on the only unperturbed channel
* In the presence of noise on all 3 channels, we checked that the 3D model performance is essentially equivalent to that of the best 1D model (channel 2), as was also observed in the absence of noise (Fig. 3).
### Measurement availability
The features described above have a considerable impact in terms of measurement availability, as we discuss below. Our goal here is to demonstrate that a displacement measurement system based on three simultaneous measurement channels processed by an adequately trained neural network
Figure 5: Channel loss and measurement availability. At different times, one or more self-mixing signals (top three rows) are replaced by white noise, simulating \(C=0\) as could be caused for instance by speckle. The 3D model transparently makes use of the available data and provides (bottom row) a meaningful reconstruction of the displacement even in the worst case scenario of two channels becoming unavailable simultaneously.
provides a considerable step towards a high-availability self mixing displacement sensor. In fact, it is well known that with uncooperative targets, speckle can strongly modify the self-mixing signal shape, including leading to an effectively vanishing feedback rate. Several possibly complementary ways to mitigate this issue exist, including dedicated tracking hardware [18], dedicated algorithms [13], on-the-fly parameter estimation [14] or training a neural network in several feedback conditions to enable self-mixing signal processing in many operation ranges [16]. Here we focus on the case in which speckle (or in fact any other unwanted perturbation) leads to the full degradation of a self-mixing measurement channel.
To simulate this phenomenon, we replace one or the other channel by a gaussian white noise during a certain time interval (simulating the case of \(C=0\) due to speckle or any other disturbance for this channel) and feed that modified interferometric data to the 3D model. The top three rows of Fig. 5 show the three self-mixing signals and the bottom row shows the true displacement and the reconstruction. This analysis is performed on the same segment of displacement as in Fig. 2. During the interval \(60<t<110\) ms, channel 1 is degraded and during the interval \(25<t<75\) ms channel 2 is degraded, again simulating \(C=0\). As we see, the 3D model always provides a meaningful reconstruction, even in the worst conditions such as the central region, during the interval \(60<t<75\) ms where two channels out of three are unusable. Most importantly for a high-availability measurement system, not only does the 3D model transparently make optimal use of the available information, but the optimal quality of the reconstruction is also recovered as soon as the quality of the self-mixing data itself is restored.
## 4 Conclusion
In conclusion, we have analyzed the performance of a high-availability displacement sensor based on three independent self-mixing interferometry channels processed by a lightweight neural network. Thanks to the inherent capacity of neural networks to process high-dimensional input, the multichannel sensor can transparently make optimal use of the self-mixing signal, using one or more input channels depending on their availability. The multichannel system performance nearly matches that of the best quality channel in absence of any disruption and as soon as one channel is degraded the multichannel sensor outperforms any single-channel sensor. Even when two channels are entirely degraded, the multichannel sensor still provides reliable displacement inference based on the only remaining channel. The network is trained to infer a displacement without relying on physical modelling, therefore the approach does not require any parameter estimation. Together with the ability of neural networks to learn many shapes of self-mixing signals [16], we believe that the multi-channel approach constitutes a key element in solving the long-standing issue of speckle-affected self-mixing interferometry with non-cooperative targets.
Since the neural network is purposefully designed to process self-mixing data with near-minimal number of parameters, the network is amenable to embedding on tiny computing devices [36, 37], which opens the way towards small-footprint and low-power smart sensors leveraging the intrinsic simplicity of self-mixing interferometry.
For future work, we underline that the approach outlined here is only meant as a proof of concept. First, about the neural network itself: as in [16], the network is extremely basic and can certainly be improved, especially through more advanced channel fusion. Second, about the photonic stages: the present work opens perspectives along the lines of more advanced and multimodal sensing [11, 38], perhaps including compressed sensing [39] for onboarding on low-footprint components.
## Appendix
The structure of the network is identical to that of [16] albeit with a larger number of convolutional kernels so as to accommodate three measurement channels instead of one. The key elements of the network are shown in Table 1. The network is implemented thanks to the Keras library [40] and
we refer the reader to deep learning fundamentals [41] and implementations [40] for background information. The total number of trainable parameters is 207 905. Networks of identical architecture with more cells per layer did not lead to significant improvements. The training of the network takes about twenty minutes on a consumer-grade GPU (NVIDIA GeForce GTX 1080).
## Disclosures
The authors declare no conflicts of interest.
|
2308.15355 | Pionic and radiative transitions from $T_{c\bar{s}0}^+(2900)$ to
$D_{s1}^+(2460)$ as a probe of the structure of $D_{s1}^+(2460)$ | In this work, we evaluated the widths of the pionic and radiative transitions
from the $T_{c\bar{s}0}^{+}(2900)$ to the $D_{s1}^{+}(2460)$ in the
$D_{s1}^{+}(2460)$ molecular frame and the $D_{s1}^{+}(2460)$ charmed-strange
meson frame. Our estimations demonstrate that the transition widths in the
$D_{s1}^{+}(2460)$ molecular frame are much larger than those in the the
$D_{s1}^{+}(2460)$ charmed-strange meson frame. Specifically, the ratio of the
widths of $\Gamma(T_{c\bar{s}0}^{+}(2900)\to D_{s1}^{+} \pi^{0})$ and
$\Gamma(T_{c\bar{s}0}^{+}(2900)\to D^{+(0)}K^{0(+)})$ is estimated to be around
0.1 in the $D_{s1}^{+}(2460)$ charmed-strange meson frame, whereas the lower
limit of this ratio is 0.67 in the $D_{s1}^{+}(2460)$ molecular frame. Thus,
the aforementioned ratio could be employed as a tool for testing the nature of
the $D_{s1}^{+}(2460)$. | Zi-Li Yue, Cheng-Jian Xiao, Dian-Yong Chen | 2023-08-29T14:52:01Z | http://arxiv.org/abs/2308.15355v2 | Pionic and radiative transitions from \(T^{+}_{c\bar{s}0}(2900)\) to \(D^{+}_{s1}(2460)\) as a probe of the structure of \(D^{+}_{s1}(2460)\)
###### Abstract
In this work, we evaluated the widths of the pionic and radiative transitions from the \(T^{+}_{c\bar{s}0}(2900)\) to the \(D^{+}_{s1}(2460)\) in the \(D^{+}_{s1}(2460)\) molecular frame and the \(D^{+}_{s1}(2460)\) charmed-strange meson frame. Our estimations demonstrate that the transition widths in the \(D^{+}_{s1}(2460)\) molecular frame are much larger than those in the \(D^{+}_{s1}(2460)\) charmed-strange meson frame. Specifically, the ratio of the widths of \(\Gamma(T^{+}_{c\bar{s}0}(2900)\to D^{+}_{s1}\pi^{0})\) and \(\Gamma(T^{+}_{c\bar{s}0}(2900)\to D^{+(0)}K^{0(+)})\) is estimated to be around 0.1 in the \(D^{+}_{s1}(2460)\) charmed-strange meson frame, whereas the lower limit of this ratio is 0.67 in the \(D^{+}_{s1}(2460)\) molecular frame. Thus, the aforementioned ratio could be employed as a tool for testing the nature of the \(D^{+}_{s1}(2460)\).
## I Introduction
As one of the typical new hadron states with one heavy quark, \(D^{+}_{s0}(2317)\) was first observed by the BABAR Collaboration in the inclusive \(D^{+}_{s}\pi^{0}\) invariant mass distribution from the electron-positron annihilation data at energies near 10.6 GeV [1], and then confirmed by the CLEO Collaboration [2]. As indicated in Ref. [1], the most probable \(J^{P}\) quantum numbers of \(D^{+}_{s0}(2317)\) were \(0^{+}\), which could be a good candidate of the \(P\)-wave charmed-strange meson [3; 4; 5; 6; 7; 8; 9]. However, the observed mass of \(D^{+}_{s0}(2317)\) is about 160 MeV below the corresponding predicted mass of the \(P\)-wave charmed-strange meson, which is 2.48 GeV [3; 10]. Moreover, the observed mass of \(D^{+}_{s0}(2317)\) is about 40 MeV below the threshold of \(DK\), which indicates \(D^{+}_{s0}(2317)\) could be a good candidate of a \(DK\) molecular state [11; 12; 13; 14; 15; 16; 17; 18].
Besides confirming the existence of \(D^{+}_{s0}(2317)\) [2], the CLEO Collaboration reported another new narrow resonance \(D^{+}_{s1}(2460)\) in the invariant mass distribution of \(D^{*+}_{s}\pi^{0}\) with the mass around 2.46 GeV, and the \(J^{P}\) quantum numbers were determined to be \(1^{+}\) [2]. Similar to the case of \(D^{+}_{s0}(2317)\), \(D^{+}_{s1}(2460)\) could be a candidate of the \(P\)-wave charmed-strange mesons [3; 4; 5; 6; 7; 8; 9]. However, the mass of \(D^{+}_{s1}(2460)\) is also about 100 MeV below the predicted mass of the corresponding \(P\)-wave charmed-strange meson [10], which makes the interpretation of \(D^{+}_{s1}(2460)\) in the conventional charmed-strange frame questionable [10; 19]. It is more interesting to notice that the mass of \(D^{+}_{s1}(2460)\) is also about 40 MeV below the threshold of \(D^{*}K\), which led to the popularity of the \(D^{*}K\) molecular interpretation [12; 17; 18; 20; 21; 22; 23; 24].
Recently, the LHCb Collaboration reported two new structures \(T^{0/++}_{c\bar{s}0}(2900)\) in the \(D_{s}^{+}\pi^{-}/D_{s}^{+}\pi^{+}\) invariant mass spectra of the \(B^{0}\to\bar{D}^{0}D_{s}^{+}\pi^{-}\) and \(B^{+}\to D^{-}D_{s}^{+}\pi^{+}\) decays, with a significance of \(9\sigma\). The masses and widths of the \(T^{0/++}_{c\bar{s}0}(2900)\) are measured to be [25; 26],
\[m_{T^{0}_{c\bar{s}0}} = 2892\pm 14\pm 15~\text{MeV},\] \[\Gamma_{T^{0}_{c\bar{s}0}} = 119\pm 26\pm 12~\text{MeV}, \tag{1}\]
and
\[m_{T^{++}_{c\bar{s}0}} = 2921\pm 17\pm 19~\text{MeV},\] \[\Gamma_{T^{++}_{c\bar{s}0}} = 137\pm 32\pm 14~\text{MeV}, \tag{2}\]
respectively.
From the above parameters, one can conclude that \(T^{0}_{c\bar{s}0}(2900)\) and \(T^{++}_{c\bar{s}0}(2900)\) should be two members of the same isospin multiplet. In addition, the experimental measurement indicates that the masses of \(T^{0/++}_{c\bar{s}0}(2900)\) are near the threshold of \(D^{*}K^{*}\), especially for the neutral one. As shown in Fig. 1, the newly observed \(T_{c\bar{s}0}(2900)\), along with \(D^{+}_{s0}(2317)\) and \(D^{+}_{s1}(2460)\), makes the states near the \(D^{(*)}K^{(*)}\) thresholds abundant [1; 2; 15; 19; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38]. However, unlike the \(D^{+}_{s0}(2317)\) and \(D^{+}_{s1}(2460)\), the isospin of \(T_{c\bar{s}0}(2900)\) is 1, which suggests that it could only be an exotic candidate rather than a conventional charmed-strange meson. Consequently, certain exotic interpretations, particularly the \(D^{*}K^{*}\) molecular interpretation, have been proposed [39; 40; 41; 42; 43; 44; 45; 46]. For instance, the authors in Ref. [44] suggested that the \(T^{0/++}_{c\bar{s}0}(2900)\) could be a \(D^{*}K^{*}\)
molecular state with \(I(J^{P})=1(0^{+})\), employing the one-boson exchange model. In our previous work [47], we examined the strong decay behavior of \(T^{0}_{c\bar{s}0}(2900)\) in the \(D^{*}K^{*}\) molecular scenario using an effective Lagrangian approach. Specifically, we investigated the decay \(T^{0}_{c\bar{s}0}(2900)\to D^{+}_{s1}(2460)\pi^{-}\), where \(D^{+}_{s1}(2460)\) is considered to be a \(P\)-wave charmed-strange meson. In the present work, we investigate the pionic and radiative transitions from \(T^{+}_{c\bar{s}0}(2900)\) to \(D^{+}_{s1}(2460)\) within a molecular framework, where \(T^{+}_{c\bar{s}0}(2900)\) and \(D^{+}_{s1}(2460)\) are assigned to be \(D^{*}K^{*}\) and \(D^{*}K\) molecules, respectively. By comparing the results in the molecular scenario with those in the \(P\)-wave charmed-strange meson scheme, we demonstrate that the pionic and radiative transition processes explored in the present work may be utilized to probe the nature of \(D^{+}_{s1}(2460)\).
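The proximity of the reported masses to the \(D^{*}K^{*}\) thresholds quoted above can be checked directly (a quick sketch with PDG masses):

```python
# Proximity of the T_cs0(2900) masses to the D*K* thresholds (PDG masses, MeV).
m = {"D*+": 2010.26, "D*0": 2006.85, "K*+": 891.67, "K*0": 895.55}
thresholds = {"D*+K*0": m["D*+"] + m["K*0"],   # ~2905.8 MeV
              "D*0K*+": m["D*0"] + m["K*+"]}   # ~2898.5 MeV

for name, mass in (("T0", 2892.0), ("T++", 2921.0)):
    for ch, thr in thresholds.items():
        print(f"{name} ({mass:.0f} MeV): {mass - thr:+.1f} MeV relative to {ch} ({thr:.1f} MeV)")
```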
This work is organized as follows. After the introduction, the hadronic molecular structures of \(T^{+}_{c\bar{s}0}(2900)\) and \(D^{+}_{s1}(2460)\) are discussed in Section II. The pionic and radiative transitions between \(T^{+}_{c\bar{s}0}(2900)\) and \(D^{+}_{s1}(2460)\) are presented in Section III. The numerical results and related discussions are presented in Section IV, and the last section is devoted to a short summary.
## II Hadronic molecular structure of \(T^{+}_{c\bar{s}0}(2900)\) and \(D^{+}_{s1}(2460)\)
In the molecular scheme, the \(T^{+}_{c\bar{s}0}(2900)\) and \(D^{+}_{s1}(2460)\) could be considered as \(S\)-wave molecular states composed of \(D^{*}K^{*}\) and \(D^{*}K\), respectively. Here, we employ the effective Lagrangian approach to describe the coupling of the molecular states with their components, and the effective Lagrangians related to \(T^{+}_{c\bar{s}0}(2900)\) and \(D^{+}_{s1}(2460)\) are,
\[\mathcal{L}_{D_{s1}} = g_{D_{s1}}D^{+\mu}_{s1}(x)\int dy\,\Phi_{D_{s1}}(y^{2})\Big[D^{*+}_{\mu}(x+\omega_{KD^{*}}y)K^{0}(x-\omega_{D^{*}K}y)+D^{*0}_{\mu}(x+\omega_{KD^{*}}y)K^{+}(x-\omega_{D^{*}K}y)\Big],\] \[\mathcal{L}_{T_{c\bar{s}0}} = g_{T_{c\bar{s}0}}T^{+}_{c\bar{s}0}(x)\int dy\,\Phi_{T_{c\bar{s}0}}(y^{2})\Big[D^{*+}_{\mu}(x+\omega_{K^{*}D^{*}}y)K^{*0\mu}(x-\omega_{D^{*}K^{*}}y)-D^{*0}_{\mu}(x+\omega_{K^{*}D^{*}}y)K^{*+\mu}(x-\omega_{D^{*}K^{*}}y)\Big], \tag{3}\]
respectively, where \(\omega_{ij}=m_{i}/(m_{i}+m_{j})\) is the kinematical parameter. The \(\Phi_{T_{c\bar{s}0}}(y^{2})\) and \(\Phi_{D_{s1}}(y^{2})\) are the correlation functions for \(T^{+}_{c\bar{s}0}(2900)\) and \(D^{+}_{s1}(2460)\), respectively, which are introduced to describe the molecular inner structure. The Fourier transformation of the correlation function is,
\[\Phi_{M}(y^{2})=\int\frac{d^{4}p}{(2\pi)^{4}}e^{-ipy}\tilde{\Phi}_{M}(-p^{2},\Lambda^{2}_{M}),\quad M=\Big(T_{c\bar{s}0},\;D_{s1}\Big). \tag{4}\]
In principle, the correlation function in momentum space should decrease sharply enough to avoid the divergence in the ultraviolet region. Here, we employ the correlation function in the Gaussian form [48; 49; 50; 51], which is,
\[\tilde{\Phi}_{M}(p_{E}^{2},\Lambda^{2}_{M})=\exp(-p_{E}^{2}/\Lambda^{2}_{M}) \tag{5}\]
where \(p_{E}\) is the Jacobi momentum in the Euclidean space, and \(\Lambda_{M}\) is a model parameter to depict the distribution of the components in the molecule.
For the coupling constants \(g_{T_{c\bar{s}0}}\) and \(g_{D_{s1}}\) in Eq. (3), they could be determined by the Weinberg compositeness condition, which means that the probability of finding the molecular state in a bare elementary state is set equal to zero [52; 53; 54; 55; 56], i.e.,
\[Z_{T_{c\bar{s}0}} = 1-\Pi^{\prime}_{T_{c\bar{s}0}}=0,\] \[Z_{D_{s1}} = 1-\Pi^{\prime}_{D_{s1}}=0, \tag{6}\]
with \(\Pi^{\prime}_{T_{c\bar{s}0}}\) being the derivative of the mass operator of the \(T_{c\bar{s}0}\). While for \(D^{+}_{s1}(2460)\), the mass operator \(\Pi^{\mu\nu}_{D_{s1}}\) could be divided into the transverse part \(\Pi_{D_{s1}}\) and the longitudinal part \(\Pi^{L}_{D_{s1}}\), which is,
\[\Pi^{\mu\nu}_{D_{s1}}(p)=g^{\mu\nu}_{\perp}\Pi_{D_{s1}}(p^{2})+\frac{p^{\mu}p ^{\nu}}{p^{2}}\Pi^{L}_{D_{s1}}(p^{2}) \tag{7}\]
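For concreteness, the mass operator whose derivative enters Eq. (6) has the schematic one-loop form below (a sketch for the scalar \(T_{c\bar{s}0}\), suppressing charge labels and overall isospin factors; the transverse part of the analogous \(D^{*}K\) loop plays the same role for \(D^{+}_{s1}(2460)\) through Eq. (7)):

\[\Pi_{T_{c\bar{s}0}}(p^{2})=g_{T_{c\bar{s}0}}^{2}\int\frac{d^{4}k}{(2\pi)^{4}}\,\tilde{\Phi}_{T_{c\bar{s}0}}^{2}\big(-(k-\omega_{D^{*}K^{*}}p)^{2},\Lambda_{T_{c\bar{s}0}}^{2}\big)\,\frac{-g^{\mu\nu}+k^{\mu}k^{\nu}/m_{D^{*}}^{2}}{k^{2}-m_{D^{*}}^{2}}\,\frac{-g_{\mu\nu}+(p-k)_{\mu}(p-k)_{\nu}/m_{K^{*}}^{2}}{(p-k)^{2}-m_{K^{*}}^{2}}.\]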
## III Pionic and radiative transitions from \(T^{+}_{c\bar{s}0}(2900)\) to \(D^{+}_{s1}(2460)\)
In the present work, the initial state \(T^{+}_{c\bar{s}0}(2900)\) is considered as a \(D^{*}K^{*}\) molecule. Subsequently, the pionic and radiative transitions from \(T^{+}_{c\bar{s}0}(2900)\) to \(D^{+}_{s1}(2460)\) could occur through two possible subprocesses. The first one proceeds via the subprocess \(K^{*}\to K\pi/\gamma\), with the \(K\) and \(D^{*}\) coupling to the \(D^{+}_{s1}(2460)\), as shown in Figs. 3-4. The second one proceeds through the subprocess \(D^{*}\to D\pi/\gamma\), with the \(D\) and \(K^{*}\) coupling to the \(D^{+}_{s1}(2460)\), where the exchanged meson is the \(D\) meson. The mass of the \(D\) meson is much greater than that of the \(K\) meson; thus, the contributions from the second subprocess should be suppressed. In addition, in the kaon-exchange diagram, the final \(D_{s1}(2460)\) couples to \(D^{*}K\), and the threshold of \(D^{*}K\) is close to the mass of the \(D_{s1}(2460)\); thus, in the triangle diagram, all the involved internal particles are almost on-shell, which will enhance the loop integral. On the contrary, in the \(D\)-meson-exchange diagram, the final \(D_{s1}(2460)\) couples to \(DK^{*}\), while the threshold of \(DK^{*}\) is far above the mass of \(D_{s1}(2460)\); thus, the involved internal particles are off-shell, which further suppresses the contributions from the \(D\)-meson-exchange diagrams. Thus, in the present estimation, we only consider the diagrams in Figs. 3-4.
In the present calculations, the diagrams in Figs. 3-4 are evaluated at the hadronic level. The interactions of the involved particles are depicted by effective Lagrangians. The effective Lagrangian describing the subprocess \(K^{*}\to K\pi\) could be constructed from the SU(3) symmetric interaction [12; 27; 47; 57; 58; 59; 60], which is
\[\mathcal{L}_{K^{*}K\pi} = -ig_{K^{*}K\pi}\left(K\partial^{\mu}\pi-\partial^{\mu}K\,\pi\right)K^{*}_{\mu}+\ \mathrm{H.c.}, \tag{9}\]
while the one for \(K^{*}\to K\gamma\) is,
\[\mathcal{L}_{K^{*}K\gamma} = \frac{g_{K^{*+}K^{+}\gamma}}{4}e\,\epsilon^{\mu\nu\alpha\beta}F_{\mu\nu}K^{*+}_{\alpha\beta}K^{-}+\frac{g_{K^{*0}K^{0}\gamma}}{4}e\,\epsilon^{\mu\nu\alpha\beta}F_{\mu\nu}K^{*0}_{\alpha\beta}\bar{K}^{0}+\ \mathrm{H.c.}, \tag{10}\]
where \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\) and \(K^{*}_{\alpha\beta}=\partial_{\alpha}K^{*}_{\beta}-\partial_{\beta}K^{*}_{\alpha}\) are the field-strength tensors. According to the decay width of \(K^{*}\to K\pi\) [61], the coupling constant \(g_{K^{*}K\pi}\) is estimated to be 3.12. Additionally, we utilize the coupling constants \(g_{K^{*0}K^{0}\gamma}=-1.27~\mathrm{GeV}^{-1}\) and \(g_{K^{*+}K^{+}\gamma}=0.83~\mathrm{GeV}^{-1}\), which are estimated from the corresponding partial widths of \(K^{*0/+}\to K^{0/+}\gamma\) [59; 60; 61; 62].
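As a rough consistency check, \(g_{K^{*}K\pi}\) can be recovered from the measured \(K^{*}\) width. The sketch below assumes the common convention in which the total \(K^{*+}\) width, summed over the \(K^{0}\pi^{+}\) and \(K^{+}\pi^{0}\) channels, reads \(\Gamma=g_{K^{*}K\pi}^{2}\,|\vec{p}\,|^{3}/(2\pi m_{K^{*}}^{2})\); other isospin conventions rescale \(g_{K^{*}K\pi}\) by an \(\mathcal{O}(1)\) factor.

```python
import numpy as np

def p_cm(m, m1, m2):
    """Daughter momentum in the two-body decay m -> m1 m2 (GeV)."""
    return np.sqrt((m**2 - (m1 + m2)**2) * (m**2 - (m1 - m2)**2)) / (2 * m)

m_Kst, m_K, m_pi = 0.89167, 0.49761, 0.13957  # GeV (PDG)
width = 0.0462                                # K*(892)+ total width in GeV (PDG)

p = p_cm(m_Kst, m_K, m_pi)
g = np.sqrt(2 * np.pi * m_Kst**2 * width / p**3)  # assumed convention, see above
print(f"g_K*Kpi ~ {g:.2f}")  # ~3.1, consistent with the value 3.12 used in the text
```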
### In the \(D^{+}_{s1}(2460)\) molecular frame
With the above preparation, we can estimate the pionic and radiative transitions from \(T^{+}_{c\bar{s}0}(2900)\) to \(D^{+}_{s1}(2460)\) in a molecular frame, where both \(T^{+}_{c\bar{s}0}(2900)\) and \(D^{+}_{s1}(2460)\) are considered as molecular states. The amplitude corresponding to Fig. 3-(a) is,
\[i\mathcal{M}_{a} = i^{3}\int\frac{d^{4}q}{(2\pi)^{4}}\left[g_{T_{c\bar{s}0}}\tilde{\Phi}_{T_{c\bar{s}0}}(-p_{12}^{2},\Lambda_{T_{c\bar{s}0}}^{2})\,g_{\alpha\mu}\right]\left[g_{D_{s1}}\tilde{\Phi}_{D_{s1}}(-p_{20}^{2},\Lambda_{D_{s1}}^{2})\,\epsilon_{\beta}(p_{4})\right]\left[-ig_{K^{*}K\pi}\,i(p_{3\nu}-q_{\nu})\right]\frac{-g^{\mu\nu}+p_{1}^{\mu}p_{1}^{\nu}/m_{1}^{2}}{p_{1}^{2}-m_{1}^{2}}\,\frac{-g^{\alpha\beta}+p_{2}^{\alpha}p_{2}^{\beta}/m_{2}^{2}}{p_{2}^{2}-m_{2}^{2}}\,\frac{1}{q^{2}-m_{q}^{2}}\,\mathcal{F}^{2}(m_{q},\Lambda), \tag{11}\]
where \(p_{12}=p_{1}\omega_{D^{*}K^{*}}-p_{2}\omega_{K^{*}D^{*}}\) and \(p_{20}=p_{2}\omega_{KD^{*}}-q\,\omega_{D^{*}K}\). The amplitude corresponding to Fig. 3-(b) can be obtained from \(\mathcal{M}_{a}\) by replacing the masses of the involved mesons and the relevant coupling constants with the corresponding values, which is,
\[i\mathcal{M}_{b}=i\mathcal{M}_{a}\Big{|}_{m_{K^{*0}}\to m_{K^{*+}},\;m_{D^{*+}}\to m_{D^{*0}},\;m_{K^{0}}\to m_{K^{+}}}. \tag{12}\]
Then, the total amplitude of \(T^{+}_{c\bar{s}0}(2900)\to D^{+}_{s1}(2460)\pi^{0}\) is,
\[i\mathcal{M}_{T_{c\bar{s}0}\to D_{s1}\pi}=i\mathcal{M}_{a}+i\mathcal{M}_{b}. \tag{13}\]
In a similar way, one can obtain the amplitudes for \(T^{+}_{c\bar{s}0}(2900)\to D^{+}_{s1}(2460)\gamma\) corresponding to the diagrams in Fig. 4, which are,
\[i\mathcal{M}_{c} = i^{3}\int\frac{d^{4}q}{(2\pi)^{4}}\left[g_{T_{c\bar{s}0}}\tilde{\Phi}_{T_{c\bar{s}0}}(-p_{12}^{2},\Lambda_{T_{c\bar{s}0}}^{2})\,g_{\sigma\delta}\right]\left[g_{D_{s1}}\tilde{\Phi}_{D_{s1}}(-p_{20}^{2},\Lambda_{D_{s1}}^{2})\,\epsilon_{\tau}(p_{4})\right]\left[-\frac{g_{K^{*0}K^{0}\gamma}}{4}e\,\epsilon^{\mu\nu\alpha\beta}\,i\left(p_{3\mu}g_{\nu\theta}-p_{3\nu}g_{\mu\theta}\right)(-i)\left(p_{1\alpha}g_{\beta\delta^{\prime}}-p_{1\beta}g_{\alpha\delta^{\prime}}\right)\epsilon^{\theta}(p_{3})\right]\frac{-g^{\delta\delta^{\prime}}+p_{1}^{\delta}p_{1}^{\delta^{\prime}}/m_{1}^{2}}{p_{1}^{2}-m_{1}^{2}}\,\frac{-g^{\sigma\tau}+p_{2}^{\sigma}p_{2}^{\tau}/m_{2}^{2}}{p_{2}^{2}-m_{2}^{2}}\,\frac{1}{q^{2}-m_{q}^{2}}\,\mathcal{F}^{2}(m_{q},\Lambda),\] \[i\mathcal{M}_{d} = i\mathcal{M}_{c}\Big{|}_{m_{K^{*0}}\to m_{K^{*+}},\;m_{D^{*+}}\to m_{D^{*0}},\;m_{K^{0}}\to m_{K^{+}},\;g_{K^{*0}K^{0}\gamma}\to g_{K^{*+}K^{+}\gamma}}, \tag{14}\]
Figure 3: Diagrams contributing to the pionic transition from the \(T^{+}_{c\bar{s}0}(2900)\) to \(D^{+}_{s1}(2460)\).
Figure 4: Diagrams contributing to the radiative transition from the \(T^{+}_{c\bar{s}0}(2900)\) to \(D^{+}_{s1}(2460)\).
then, the total amplitude of \(T^{+}_{c\bar{s}0}(2900)\to D^{+}_{s1}(2460)\gamma\) is,
\[i\mathcal{M}_{T_{c\bar{s}0}\to D_{s1}\gamma}=i\mathcal{M}_{\text{c}}+i \mathcal{M}_{d}. \tag{15}\]
Here, we introduce a monopole form factor to depict the inner configuration and the off-shell effect of the exchanged mesons,
\[\mathcal{F}(m_{q},\Lambda)=\frac{m_{q}^{2}-\Lambda^{2}}{q^{2}- \Lambda^{2}}, \tag{16}\]
After performing the loop integral, we find the amplitude of \(T^{+}_{c\bar{s}0}(2900)\to D^{+}_{s1}(2460)\gamma\) can be simplified as,
\[\mathcal{M}_{T_{c\bar{s}0}\to D_{s1}\gamma}=g_{T_{c\bar{s}0}\to D_{s1}\gamma}\,\epsilon_{\mu\nu\alpha\beta}\,p_{3}^{\mu}p_{4}^{\nu}\,\epsilon^{\alpha}(p_{3})\,\epsilon^{\beta}(p_{4}), \tag{17}\]
which satisfies the principle of gauge invariance for the photon field.
### In the \(D^{+}_{s1}(2460)\) charmed-strange meson frame
In the \(D^{+}_{s1}(2460)\) charmed-strange meson frame, the \(D^{+}_{s1}(2460)\) is considered as a \(P\)-wave charmed-strange meson. As for the interaction between the \(P\)-wave charmed-strange meson and the \(S\)-wave charmed mesons, the effective Lagrangian could be constructed by using the heavy quark limit and chiral symmetry [63; 64; 65; 66; 67; 68], which is
\[\mathcal{L}_{D_{s1}D^{*}\mathcal{P}} = ig_{D_{s1}D^{*}\mathcal{P}}\left(D^{\mu}_{s1}\stackrel{{\leftrightarrow}}{{\partial_{\nu}}}D^{*\dagger}_{\mu}\right)\partial^{\nu}\mathcal{P}+\text{H.c.}, \tag{18}\]
where the relevant coupling constant \(g_{D_{s1}D^{*}\mathcal{P}}\) is,
\[g_{D_{s1}D^{*}\mathcal{P}} = -\frac{h}{f_{\pi}}, \tag{19}\]
with \(f_{\pi}=132\) MeV being the decay constant of the \(\pi\) meson, and \(h=0.56\pm 0.04\) [63; 69; 70; 71; 72].
In our previous work [47], we have estimated the pionic transition from \(T^{+}_{c\bar{s}0}(2900)\) to \(D^{+}_{s1}(2460)\) in the \(D^{+}_{s1}(2460)\) charmed-strange meson frame. Thus, in this subsection, we only present the amplitude for \(T^{+}_{c\bar{s}0}(2900)\to D^{+}_{s1}(2460)\gamma\). The amplitudes corresponding to the diagrams in Fig. 4 are,
\[i\mathcal{M}^{\prime}_{c} = i^{3}\int\frac{d^{4}q}{(2\pi)^{4}}\Big[g_{T_{c\bar{s}0}}\tilde{\Phi}_{T_{c\bar{s}0}}(-p_{12}^{2},\Lambda_{T_{c\bar{s}0}}^{2})\,g_{\sigma\delta}\Big]\Big[-ig_{D_{s1}D^{*}\mathcal{P}}\,(p_{2}+p_{4})\cdot q\;\epsilon_{\tau}(p_{4})\Big]\Big[-\frac{g_{K^{*0}K^{0}\gamma}}{4}e\,\epsilon^{\mu\nu\alpha\beta}\,i\left(p_{3\mu}g_{\nu\theta}-p_{3\nu}g_{\mu\theta}\right)(-i)\left(p_{1\alpha}g_{\beta\delta^{\prime}}-p_{1\beta}g_{\alpha\delta^{\prime}}\right)\epsilon^{\theta}(p_{3})\Big]\frac{-g^{\delta\delta^{\prime}}+p_{1}^{\delta}p_{1}^{\delta^{\prime}}/m_{1}^{2}}{p_{1}^{2}-m_{1}^{2}}\,\frac{-g^{\sigma\tau}+p_{2}^{\sigma}p_{2}^{\tau}/m_{2}^{2}}{p_{2}^{2}-m_{2}^{2}}\,\frac{1}{q^{2}-m_{q}^{2}}\,\mathcal{F}^{2}(m_{q},\Lambda),\] \[i\mathcal{M}^{\prime}_{d} = i\mathcal{M}^{\prime}_{c}\Big{|}_{m_{K^{*0}}\to m_{K^{*+}},\;m_{D^{*+}}\to m_{D^{*0}},\;m_{K^{0}}\to m_{K^{+}},\;g_{K^{*0}K^{0}\gamma}\to g_{K^{*+}K^{+}\gamma}}, \tag{20}\]
The total amplitude for \(T^{+}_{c\bar{s}0}(2900)\to D^{+}_{s1}(2460)\gamma\) is,
\[i\mathcal{M}^{\prime}_{T_{c\bar{s}0}\to D_{s1}\gamma}=i\mathcal{M}^{\prime}_{ \text{c}}+i\mathcal{M}^{\prime}_{d}. \tag{21}\]
Similar to Eq. (15), the above amplitude also satisfies the principle of gauge invariance for the photon field.
Additionally, in the above amplitudes, a phenomenological form factor in monopole form is introduced, which describes the inner structure and off-shell effects of the exchanged mesons. The concrete form of the form factor is,
\[\mathcal{F}(m_{q},\Lambda)=\frac{m_{q}^{2}-\Lambda^{2}}{q^{2}- \Lambda^{2}}, \tag{22}\]
with \(\Lambda\) being a model parameter, which should be of the order of 1 GeV.
## IV Numerical results and discussions
In the present estimations, the loop integrals in the mass operators and the amplitudes can be evaluated using Schwinger parameterizations [73]. This parameterization scheme is convenient for handling the four-momentum integrals together with the correlation functions of Gaussian form.
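For reference, the elementary identity behind this scheme is the standard proper-time representation of a propagator (stated here in its Minkowski and Euclidean forms),

\[\frac{1}{q^{2}-m^{2}+i\epsilon}=-i\int_{0}^{\infty}d\alpha\,e^{i\alpha\left(q^{2}-m^{2}+i\epsilon\right)},\qquad\frac{1}{A^{n}}=\frac{1}{\Gamma(n)}\int_{0}^{\infty}d\alpha\,\alpha^{n-1}e^{-\alpha A}\quad(\operatorname{Re}A>0),\]

so that each denominator becomes a Gaussian in the loop momentum; after combining with the Gaussian correlation functions, the four-momentum integral can be carried out in closed form, leaving finite-dimensional integrals over the Schwinger parameters \(\alpha\).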
### Coupling constants
Since the \(T^{+}_{c\bar{s}0}(2900)\) has not been observed yet, in the present estimation we take the same resonance parameters as those of \(T^{0}_{c\bar{s}0}(2900)\), for easy comparison with our previous work in Ref. [28]. Besides the coupling constants discussed in the above section, the coupling constants related to the molecular states are also needed to estimate the relevant decay widths. In the present work, we estimate the coupling constants \(g_{D_{s1}}\) and \(g_{T_{c\bar{s}0}}\) according to the compositeness conditions as shown in Eq. (6). Here, \(\Lambda_{T_{c\bar{s}0}}\) and \(\Lambda_{D_{s1}}\) are phenomenological model parameters, which should be of order \(1\;\text{GeV}\). Considering that both \(K\) and \(K^{*}\) are \(S\)-wave strange mesons and the similarity between \(D^{+}_{s1}(2460)\) and \(T^{+}_{c\bar{s}0}(2900)\),
we take \(\Lambda_{T_{c\bar{s}0}}=\Lambda_{D_{s1}}=\Lambda_{M}\) for simplicity. In Ref. [47], our estimations indicate that the total width of \(T^{0}_{c\bar{s}0}(2900)\) could be well reproduced with \(\Lambda_{M}<1.6\) GeV; thus, in the present work, we vary the parameter \(\Lambda_{M}\) from 0.5 to 1.6 GeV, and the \(\Lambda_{M}\) dependences of \(g_{D_{s1}}\) and \(g_{T_{c\bar{s}0}}\) are presented in Fig. 5. In the considered parameter range, one finds that the coupling constants \(g_{D_{s1}}\) and \(g_{T_{c\bar{s}0}}\) depend only weakly on the model parameter \(\Lambda_{M}\). In particular, as the parameter \(\Lambda_{M}\) increases from 0.5 to 1.2 GeV, the coupling constants \(g_{T_{c\bar{s}0}}\) and \(g_{D_{s1}}\) decrease from 4.98 to 3.84 GeV and from 15.08 to 10.51 GeV, respectively.
### Decay widths
In addition to the parameter \(\Lambda_{M}\) introduced by the correlation functions for \(T^{+}_{c\bar{s}0}(2900)\) and \(D^{+}_{s1}(2460)\), there is another parameter \(\Lambda\) introduced by the form factor in the amplitudes, which is also of the order of 1 GeV. Here, we take several typical values for \(\Lambda\): 1.6, 1.8 and 2.0 GeV [47]. With the above preparations, we can evaluate the decay widths of \(T^{+}_{c\bar{s}0}(2900)\to D^{+}_{s1}(2460)\pi\) and \(T^{+}_{c\bar{s}0}(2900)\to D^{+}_{s1}(2460)\gamma\) in two different frames, namely the \(D^{+}_{s1}(2460)\) molecular frame and the \(D^{+}_{s1}(2460)\) charmed-strange meson frame.
In Fig. 6, we present the decay width of \(T^{+}_{c\bar{s}0}(2900)\to D^{+}_{s1}(2460)\pi^{0}\) in the \(D^{+}_{s1}(2460)\) molecular scenario and in the \(D^{+}_{s1}(2460)\) charmed-strange meson frame, respectively. From our estimation, one finds that the transition width in the \(D^{+}_{s1}(2460)\) molecular frame is much larger than that in the \(D^{+}_{s1}(2460)\) charmed-strange meson frame. It should be noted that in the \(D_{s1}(2460)\) molecular frame, the \(D_{s1}(2460)\) is composed of \(D^{*}\) and \(K\); thus, the coupling between \(D_{s1}(2460)\) and \(D^{*}K\) is much stronger than that in the charmed-strange meson frame. In addition, the kaon is considered as an ordinary meson in the \(D_{s1}(2460)\) molecular frame, and the coupling between \(D_{s1}(2460)\) and \(D^{*}K\) is \(S\)-wave, which is momentum independent. However, in the \(D_{s1}(2460)\) charmed-strange meson frame, the kaon is considered as a chiral particle; thus, the \(D_{s1}D^{*}K\) vertex is momentum dependent, as shown in Eq. (18), due to the chiral symmetry, which further suppresses the width of \(T_{c\bar{s}0}(2900)\to D_{s1}(2460)\pi\) in the charmed-strange meson frame.
In Ref. [47], we investigated the decay properties of \(T_{c\bar{s}0}(2900)\) in the \(D^{*}K^{*}\) molecular frame. The decay widths of the \(DK\), \(D_{s}\pi\), \(D^{*}_{s}\rho\), \(D_{s1}(2460)\pi\), \(D_{s1}(2536)\pi\) and \(D^{*}K\pi\) channels were estimated [47], where the \(D_{s1}(2460)\) is considered as a charmed-strange meson. For simplicity, we just quote the results of Ref. [47] in the following discussions. By comparing the estimated total width with the one reported by the LHCb Collaboration, we obtained the proper \(\Lambda_{M}\) ranges for different \(\Lambda\), which are collected in Table 1. In these parameter ranges, the estimated widths for the \(D_{s1}(2460)\pi\) and \(DK\) channels are also listed. The ratio of \(\Gamma_{D_{s1}\pi}\) and \(\Gamma_{DK}\) is estimated to be,
\[\frac{\Gamma_{D_{s1}\pi}}{\Gamma_{DK}}=0.10\sim 0.14, \tag{23}\]
which is very weakly dependent on the model parameters \(\Lambda\) and \(\Lambda_{M}\).
Moreover, one can investigate the decay properties of \(T^{+}_{c\bar{s}0}(2900)\) in the \(D^{+}_{s1}(2460)\) molecular frame. Considering isospin symmetry, the strong decay behaviors of \(T^{+}_{c\bar{s}0}(2900)\) and \(T^{0}_{c\bar{s}0}(2900)\) should be the same. When we investigate the decay properties of the \(T^{+}_{c\bar{s}0}(2900)\) in the \(D^{+}_{s1}(2460)\) molecular frame, the partial widths of the other decay channels, which do not involve the \(D^{+}_{s1}(2460)\), should be the same as those in Ref. [28]. Together with the width of \(T^{+}_{c\bar{s}0}(2900)\to D^{+}_{s1}(2460)\pi^{0}\) estimated in the present work, we can roughly obtain the total width of \(T^{+}_{c\bar{s}0}(2900)\). Along the same line as Ref. [47], one can determine the value of \(\Lambda_{M}\) by reproducing this total width; the resulting values are also listed in Table 1. From the table, one finds that the determined range of \(\Lambda_{M}\) in the \(D^{+}_{s1}(2460)\) molecular frame is smaller than the one in the \(D^{+}_{s1}(2460)\) charmed-strange meson frame [47]. For these model parameters, one finds that the partial widths of the \(D_{s1}(2460)\pi\) and \(DK\) channels are of the same order, and the ratio of \(\Gamma_{D_{s1}\pi}\) and \(\Gamma_{DK}\) is estimated
to be,
\[\frac{\Gamma_{D_{s1}\pi}}{\Gamma_{DK}}=0.67\sim 1.33. \tag{24}\]
By analyzing Eqs. (23) and (24), one finds that the ratio of the widths of \(T^{+}_{c\bar{s}0}(2900)\to D^{+}_{s1}\pi^{0}\) and \(T^{+}_{c\bar{s}0}(2900)\to DK\) is significantly different in the two \(D^{+}_{s1}(2460)\) frameworks. Thus, this ratio can serve as a valuable tool for probing the internal structure of the \(D^{+}_{s1}(2460)\).
In addition to the \(D_{s1}\pi\) channel, we also estimate the widths of the radiative transition from \(T^{+}_{c\bar{s}0}(2900)\) to \(D^{+}_{s1}(2460)\) in the two scenarios, and the estimated transition widths depending on the model parameters are presented in Fig. 7. Similar to the case of \(D_{s1}(2460)\pi\), our estimations indicate that the width of \(T^{+}_{c\bar{s}0}(2900)\to D^{+}_{s1}(2460)\gamma\) is about 1 keV in the \(D^{+}_{s1}(2460)\) charmed-strange meson frame and more than 50 keV in the \(D^{+}_{s1}(2460)\) molecular frame.
## V Summary
Stimulated by the observation of \(T_{c\bar{s}0}(2900)\) by the LHCb Collaboration [25; 26], theorists have proposed various exotic interpretations, among which the \(D^{*}K^{*}\) molecular interpretation is the most promising one. Together with \(D^{+}_{s0}(2317)\) and \(D^{+}_{s1}(2460)\), the newly observed \(T_{c\bar{s}0}(2900)\) makes the states around the \(D^{(*)}K^{(*)}\) thresholds abundant. Different from \(T_{c\bar{s}0}(2900)\), the \(D^{+}_{s0}(2317)\) and \(D^{+}_{s1}(2460)\) could also be candidates for \(P\)-wave charmed-strange mesons.
In the present work, we investigate the pionic and radiative transitions from the \(T^{+}_{c\bar{s}0}(2900)\) to the \(D^{+}_{s1}(2460)\) in the \(D^{+}_{s1}(2460)\) charmed-strange meson frame and the \(D^{+}_{s1}(2460)\) molecular scenario, respectively. Our estimations indicate that the ratio of the widths of \(T^{+}_{c\bar{s}0}(2900)\to D^{+}_{s1}\pi^{0}\) and \(T^{+}_{c\bar{s}0}(2900)\to DK\) is rather different in the two frames. In particular, the ratio is estimated to be around 0.1 in the \(D^{+}_{s1}(2460)\) charmed-strange meson frame, while the lower limit of this ratio is 0.67 in the molecular frame. Thus, we suggest that this ratio could be employed as a tool for testing the nature of the \(D^{+}_{s1}(2460)\). Moreover, our estimations also find that the radiative transition width estimated in the \(D^{+}_{s1}(2460)\) molecular frame is much larger than the one estimated in the \(D^{+}_{s1}(2460)\) charmed-strange meson frame.
## Acknowledgements
This work is supported by the National Natural Science Foundation of China under Grants No. 12175037 and No. 11775050.
|
2304.13397 | Filter Pruning via Filters Similarity in Consecutive Layers | Filter pruning is widely adopted to compress and accelerate the Convolutional
Neural Networks (CNNs), but most previous works ignore the relationship between
filters and channels in different layers. Processing each layer independently
fails to utilize the collaborative relationship across layers. In this paper,
we intuitively propose a novel pruning method by explicitly leveraging the
Filters Similarity in Consecutive Layers (FSCL). FSCL compresses models by
pruning filters whose corresponding features are more worthless in the model.
The extensive experiments demonstrate the effectiveness of FSCL, and it yields
remarkable improvement over state-of-the-art on accuracy, FLOPs and parameter
reduction on several benchmark models and datasets. | Xiaorui Wang, Jun Wang, Xin Tang, Peng Gao, Rui Fang, Guotong Xie | 2023-04-26T09:18:38Z | http://arxiv.org/abs/2304.13397v1 | # Filter pruning via filters similarity in consecutive layers
###### Abstract
Filter pruning is widely adopted to compress and accelerate the Convolutional Neural Networks (CNNs), but most previous works ignore the relationship between filters and channels in different layers. Processing each layer independently fails to utilize the collaborative relationship across layers. In this paper, we intuitively propose a novel pruning method by explicitly leveraging the Filters Similarity in Consecutive Layers (FSCL). FSCL compresses models by pruning filters whose corresponding features are more worthless in the model. The extensive experiments demonstrate the effectiveness of FSCL, and it yields remarkable improvement over state-of-the-art on accuracy, FLOPs and parameter reduction on several benchmark models and datasets.
Xiaorui Wang\({}^{1}\), Jun Wang\({}^{1}\)1, Xin Tang\({}^{2}\), Peng Gao\({}^{1}\), Rui Fang\({}^{2}\), Guotong Xie\({}^{1}\)1\({}^{1}\)AI Platform Group, Ping An Technology Co. Ltd., Beijing, China
\({}^{2}\)Visual Computing Group, Ping An Property \(\&\) Casualty Insurance Company, Shenzhen, China
Footnote 1: Corresponding authors. {wangjun916, xiequotong}@pingan.com.cn.
## 1 Introduction
Recently, the large model sizes and high computational costs of CNNs have remained great obstacles for their deployment on devices with limited resources. Model compression, which can reduce the size of networks, mainly falls into several categories: pruning, quantization, and knowledge distillation [1]. Pruning methods can handle any kind of CNN and do not have substantial negative influences on model performance. Typical pruning comes in two flavors: weight pruning and filter (channel) pruning [1, 2]. Weight pruning directly deletes weight values in a filter in an irregular and random way, and may cause unstructured sparsity, so it is unable to achieve acceleration on general-purpose processors [1]. Meanwhile, filter (channel) pruning discards whole selected filters, so the pruned network structure is not damaged and can easily achieve acceleration on general-purpose processors [2].
Current filter pruning can be implemented in a filter-wise or channel-wise manner. Most existing methods consider the information in each convolutional layer independently and do not explicitly use the relationship between filters and channels across layers; when evaluating the importance of filters, different layers usually cannot communicate with each other. Moreover, most of these methods assume that the norm of a filter and its importance are strongly correlated, but this has not been proved theoretically. The analysis in FPGM [2] shows that some prerequisites should be satisfied for this criterion to work, and the experiments in Taylor [3] show that there is a significant gap in the correlation between the norm and the importance.
To address the above limitations, we propose a novel filter pruning method via Filters Similarity in Consecutive Layers (FSCL), which merges the information in two consecutive layers to evaluate filter importance. FSCL calculates the similarity of filters in two consecutive layers by a convolution operation, which quantifies the worth of the features extracted by the filters. We then prune the filters that tend to contribute little to the network. FSCL not only takes into consideration the filters which generate the feature maps, but also takes advantage of the channels in the next layer which use the feature maps.
Figure 1: Filter pruning via Filters Similarity in Consecutive Layers (FSCL). There are 4 and 2 filters at the 1st and 2nd layers, respectively. If the 3rd filter in conv1 is pruned, the 3rd feature map can be removed, implying that the 3rd channels of the 2 filters at conv2 can be removed, too. On the top, FSCL calculates the similarity between the 3rd filter at conv1 and the 3rd channels of the 2 filters at conv2 to evaluate the importance of the 3rd filter at conv1. On the bottom, we similarly evaluate the importance of the 2nd filter at conv1. Then we prune the 3rd filter for its smaller importance score.
We highlight the main contributions as follows:
(1) We explore the relationship between filter-wise and channel-wise pruning, and reveal that information in consecutive convolutional layers can reasonably be used to prune filters.
(2) We propose a novel method to estimate the contribution of a filter in each convolution layer using the Filter's Similarity in Consecutive Layers (FSCL), which combines filter-wise pruning with channel-wise pruning.
(3) Experimental results show that the proposed FSCL achieves state-of-the-art performance on a wide variety of networks trained on CIFAR-10 and ImageNet.
## 2 Methodology
This section explains FSCL by performing filter pruning on CNNs. We evaluate the importance (similarity) of filters in convolutions by two consecutive layers, as shown in Figure 1.
### Preliminary
We assume that a neural network has \(L\) convolutional layers, and \(W^{i}\) is the parameters in the \(i\)-th convolutional layer. We use \(N^{i}\) to represent the number of filters in \(W^{i}\). The parameters \(W^{i}\) can be represented as a set of 3D filters \(W^{i}=\left\{w_{1}^{i},w_{2}^{i},...,w_{N^{i}}^{i}\right\}\in R^{N^{i}\times C^{i}\times K^{i}\times K^{i}}\), where \(C^{i}\) is the number of channels of the filters and \(K^{i}\) denotes the kernel size.
Filter pruning reduces the number of filters from \(N^{i}\) to a desired \(N^{i}_{0}\) \((0\leq N^{i}_{0}\leq N^{i})\). The core is to remove the less important filters, which can be formulated as an optimization problem:
\[\begin{split}&\operatorname*{arg\,max}_{\beta_{i,j}}\sum_{i=1}^{L} \sum_{j=1}^{N^{i}}\beta_{i,j}\mathcal{L}(w_{j}^{i}),\\ & s.t.\quad\sum_{j=1}^{N^{i}}\beta_{i,j}\quad\leq N^{i}_{0},\end{split} \tag{1}\]
where \(\beta_{i,j}\) is an indicator which is 1 if \(w_{j}^{i}\) is to be reserved, or 0 if \(w_{j}^{i}\) is to be removed. \(\mathcal{L}(\cdot)\) measures the importance of a filter.
Designing \(\mathcal{L}(\cdot)\) has been widely studied in the community [4, 2]. L1 [4] measures the importance of each filter by calculating its absolute weight sum. FPGM [2] calculates the distance to the geometric median of the filters as the filter importance score. Unfortunately, these definitions ignore the relationship between consecutive layers.
### Filters Similarity in Consecutive Layers
We propose to define \(\mathcal{L}\) on the worth of the features extracted by the filters. The worth is proportional to the usefulness of the features. To calculate the usefulness of features, we first look into the convolution operation in consecutive layers and the filter pruning in them. In Figure 1, we randomly sample an element in the feature map \(Y\) of the \(c\)-th (\(c\) is short for current) layer; it is in the \(j^{c}\)th channel and denoted \(y_{j^{c}}\), \(j^{c}\in\{1,2,\ldots,N^{c}\}\). A corresponding filter \(w_{j^{c}}\in R^{C^{c}\times K^{c}\times K^{c}}\) and sliding window \(x\in R^{C^{c}\times K^{c}\times K^{c}}\) can also be determined according to its location. The convolution operation is:
\[y_{j^{c}}=\sum_{c^{c}=1}^{C^{c}}\sum_{k_{1}^{c}=1}^{K^{c}}\sum_{k_{2}^{c}=1}^ {K^{c}}w_{j^{c},c^{c},k_{1}^{c},k_{2}^{c}}^{c}\times x_{c^{c},k_{1}^{c},k_{2} ^{c}}. \tag{2}\]
Similarly, a randomly sampled element \(z_{j^{n}}\) in the \(j^{n}\)th (\(n\) is short for next and \(j^{n}\in\{1,2,\ldots,N^{n}\}\)) channel of next layer's feature map \(Z\) is computed as follows:
\[z_{j^{n}}=\sum_{c^{n}=1}^{C^{n}}\sum_{k_{1}^{n}=1}^{K^{n}}\sum_{k_{2}^{n}=1}^ {K^{n}}w_{j^{n},c^{n},k_{1}^{n},k_{2}^{n}}^{n}\times y_{c^{n},k_{1}^{n},k_{2} ^{n}}. \tag{3}\]
In Figure 1, if we remove the \(j^{c}_{0}\)th filter \(w_{j^{c}_{0},:,:,:}^{c}\), the \(j^{c}_{0}\)th channel of the feature map \(Y\) is close to zero, implying that the \(j^{c}_{0}\)th input channel of the \(N^{n}\) filters in the next convolution layer is prone to be useless. The \(j^{c}_{0}\)th channel of the convolution \(w_{:,j^{c}_{0},:,:}^{n}\) can be removed too. The filter number of the current layer \(N^{c}\) is the same as the channel number of the next layer \(C^{n}\). So we find that when evaluating the usefulness of the features extracted by a filter, we should consider two parts: the filter that produces them and the channels which use them. The second part is very important but ignored by other methods. Only the features used for the subsequent calculations are valuable.
To define \(\mathcal{L}\) on the worth of the features extracted by the filters, we evaluate the similarity of filters in consecutive layers. The more similar they are, the more of the features extracted by the previous filters will be used in the next layer, and finally for the prediction.
The convolution operation in layer \(n\) is calculated by:
\[w_{:,j^{c}_{0},:,:}^{n}\times y_{j^{c}_{0},:,:}, \tag{4}\]
but without the feature map \(Y\), we can use the parameters of the filter which produces it as a stand-in for it in the calculation. For dimensional consistency, we duplicate the \(j^{n}\)th filter's \(j^{c}_{0}\)th channel \(w^{n}_{j^{n},j^{c}_{0},:,:}\) \(C^{c}\) times and concatenate the copies together, obtaining a new convolution filter \(\hat{w}^{n}_{j^{n},j^{c}_{0},:,:,:}\). Since \(j^{n}\in\{1,2,\ldots,N^{n}\}\), we obtain \(N^{n}\) such new convolution filters. A filter extracts the features most similar to itself, so we can calculate the similarity between the new convolution filters and the filter in the previous layer \(w^{c}_{j^{c}_{0},:,:,:}\) by the convolution operation. The result can be used as an importance score for model pruning. We average the sum of the absolute values of the \(N^{n}\) results as the importance score of the \(j^{c}_{0}\)th filter \(w^{c}_{j^{c}_{0},:,:,:}\):

\[\mathcal{L}(w_{j^{c}_{0}}^{c})=\frac{1}{N^{n}}\sum_{j^{n}=1}^{N^{n}}\left\|w_{j^{c}_{0},:,:,:}^{c}\otimes\hat{w}^{n}_{j^{n},j^{c}_{0},:,:,:}\right\|_{1}, \tag{5}\]
where \(\otimes\) denotes the convolution operation and \(\left\|\cdot\right\|_{1}\) denotes the \(\ell_{1}\)-norm. Instead of considering a single layer, our proposed method evaluates the filters of the current layer with respect to the filters consecutively placed in the next layer. A similarity measure calculated between filters determines whether a filter should be retained: a high similarity indicates that more of the extracted information is used by the next layer, so the corresponding filter is more important. Our FSCL can measure the worth of the features extracted by the filters without being affected by the distribution of the input. Besides, we can calculate the importance scores offline. Compared with FPGM [2], which calculates differences between filters within one convolution layer, our method calculates differences between filters in consecutive convolution layers. After the definition of \(\mathcal{L}\), Eq. (1) can be solved by pruning the filters with the \((N^{i}-N_{0}^{i})\) least importance scores. Then we fine-tune the pruned model to restore accuracy.
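To make the scoring rule concrete, the following is a minimal PyTorch sketch of Eq. (5). The function name, the tensor layout, and the "full" zero padding used to align the two kernels in the convolution are our own illustrative assumptions, since the paper does not pin down these implementation details.

```python
import torch
import torch.nn.functional as F

def fscl_scores(w_curr: torch.Tensor, w_next: torch.Tensor) -> torch.Tensor:
    """Eq. (5): FSCL importance scores for the filters of the current layer.

    w_curr: (N_c, C_c, K_c, K_c) weights of the current conv layer.
    w_next: (N_n, C_n, K_n, K_n) weights of the next conv layer, C_n == N_c.
    Returns an (N_c,) tensor of importance scores.
    """
    n_c, c_c, _, _ = w_curr.shape
    n_n, c_n, k_n, _ = w_next.shape
    assert c_n == n_c, "next layer's channels must match current layer's filters"
    scores = torch.empty(n_c)
    for j in range(n_c):
        filt = w_curr[j:j + 1]                           # (1, C_c, K_c, K_c)
        # Duplicate the j-th input channel of every next-layer filter C_c
        # times to build the new filters \hat{w} of Eq. (5).
        new_filts = w_next[:, j:j + 1].expand(-1, c_c, -1, -1).contiguous()
        # Convolve the current filter with each new filter ("full" padding is
        # an assumption; any fixed alignment gives a consistent score).
        out = F.conv2d(filt, new_filts, padding=k_n - 1)  # (1, N_n, H, W)
        scores[j] = out.abs().sum(dim=(2, 3)).mean()      # average of l1 norms
    return scores
```

Given the scores, solving Eq. (1) amounts to keeping the \(N_{0}^{i}\) filters with the largest scores in each layer, e.g. `keep = fscl_scores(w1, w2).topk(n_keep).indices`, followed by fine-tuning.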
### FSCL For Multiple Structures
We provide rules for applying FSCL to different structures, including the plain structure, the Inception module, and the residual block.
**Plain Structure**. For the Plain structure, such as VGG, we use vanilla FSCL to evaluate the importance of filters.
**Inception Module**. There are four branches in the inception modules of GoogLeNet: branch1x1, branch3x3, branch5x5, and branchpool. Their feature maps are concatenated together and fed to the next layers. To implement FSCL in GoogLeNet, we first split the channels of the next layers by the different branches; thus the four first layers of the next inception module are all split into four groups. Taking branch1x1 as an example, it links to all four branches, so we concatenate its corresponding channels in the four first layers of the next inception module together as a new filter.
**Residual Block**. (1) **For residual blocks with no convolution in the shortcut**, such as ResNet-56, we use the method explained for the Plain Structure. (2) **For residual blocks with a convolution in the shortcut**, such as ResNet-50, we multiply the importance scores of the shortcut connections and the residual connections together to get global importance scores and then prune them together, as sketched below.
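A rough sketch of this rule, reusing the `fscl_scores` function from the sketch above (the exact pairing of the shortcut and residual convolutions with the following layer is our own assumption for illustration):

```python
# Joint scores for a residual block with a conv shortcut (e.g. ResNet-50):
# multiply the two score vectors element-wise, then prune the filters with
# the smallest joint scores from both branches at once.
global_scores = fscl_scores(w_residual, w_next) * fscl_scores(w_shortcut, w_next)
prune_idx = global_scores.topk(n_prune, largest=False).indices
```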
## 3 Experiments
### Implementation Details
**Datasets and Models**. We evaluated our method on CIFAR-10 and ImageNet. Various recent network architectures were used in our experiments.
**Configurations**. All experiments for CIFAR-10 and ImageNet were conducted on 1 and 8 NVIDIA V100 GPUs, respectively. We first evaluated the filter importance by FSCL. Then we pruned the filters with less importance in each layer. Finally, we fine-tuned the slimmed network. For comparison, we first fixed similar parameters and FLOPs, and measured the accuracy, then fixed the accuracy to be similar to the baselines and measured the parameters and FLOPs reductions.
### Comparisons of Accuracy
First, to evaluate the accuracy under fixed similar reductions, we compared FSCL with recently proposed methods (Figure 2). The L1 version of FSCL used the \(\ell_{1}\)-norm and the L2 version used the \(\ell_{2}\)-norm. The other baselines include FPGM [2], Taylor [3], MeanActivation [5], L1 [4] and APoZ [6]. None of these five methods considers the relationship between layers. We pruned a VGG-16 whose accuracy was 93.99% using the different filter pruning methods and fine-tuned the pruned models for 100 epochs on CIFAR-10. We used the implementations of these methods provided by NNI ([https://github.com/microsoft/nni](https://github.com/microsoft/nni)).
We pruned some of the convolution layers in VGG-16 (feature.0, feature.24, feature.27, feature.30, feature.34 and feature.37) by 50% using the different methods. Figure 2 shows the accuracy over the epochs of the fine-tuning process. The L1 and L2 versions of FSCL ("FSCL_L1", "FSCL_L2") achieved remarkably higher accuracy compared with the other methods, which do not consider the relationship between filters and channels in consecutive layers. This indicates that merging the weight information of consecutive layers is effective for calculating the importance of convolution kernels. The L1 version was higher than the L2 version, showing that the \(\ell_{1}\)-norm is the better criterion.
### Comparisons of Parameters and FLOPs
Second, we evaluated whether FSCL can outperform baselines in reducing parameters under similar accuracy. For VGGNet on CIFAR-10, we fine-tuned the pruned network for 300 epochs, with a start learning rate of 0.009, weight decay 0.005, and momentum 0.9. The learning rate is divided by 10 at epochs 150, 225, and 265. For ResNet-56 on CIFAR-10, we fine-tuned with a start learning rate of 0.01. The other settings are the same as for VGGNet. For GoogLeNet on CIFAR-10, the training schedule was the same as for VGGNet except for the start learning rate of 0.008. After pruning ResNet-50 on ImageNet, we fine-tuned the pruned model for 220 epochs with a start learning rate of 0.009, weight decay 0.0001, and momentum 0.99. The learning rate schedule type is cosine annealing.

Figure 2: The accuracy and epochs of pruned VGG-16 on CIFAR-10 by different methods. Our FSCL_L1 and FSCL_L2 are consistently better and FSCL_L1 yields the best performance.
**VGGNet on CIFAR-10**. As shown in Table 1, FSCL achieves state-of-the-art performance. FSCL demonstrated its ability to obtain the lowest accuracy drop of 0.28%, with 81.5% FLOPs reduction and 89.5% parameters reduction. And the pruned model has the highest Top-1 accuracy (93.68%). FSCL utilizes the relationship between layers, which is the main cause of its superior performance.
**ResNet-56 on CIFAR-10**. As summarized in Table 1, compared to LRPET [14], which led to a 0.23% drop in Top-1 accuracy, our FSCL achieved a Top-1 increase of 0.39% with a higher pruned rate in FLOPs. With an even larger reduction in FLOPs (54.4%), our FSCL could still achieve a 0.26% Top-1 accuracy increase. In contrast, with a lower FLOPs reduction (52.6%), FPGM [2] and Pruning Criterion [15] result in Top-1 accuracy degradation. FSCL accomplished outstanding results.
**GoogLeNet on CIFAR-10**. As shown in Table 1, FSCL achieved the lowest accuracy drop compared to the other baselines. GAL-ApoZ is the result of ApoZ obtained with the implementation of GAL [8]. Specifically, FSCL pruned GoogLeNet with 75.3% FLOPs reduction and 66.3% parameters reduction while losing only 0.02% Top-1 accuracy. These results show that our method can reduce the complexity of models with inception modules.
**ResNet50 on ImageNet**. For ImageNet ILSVRC-12, we adopted ResNet50 for evaluation. We reported the results in Table 2. FSCL consistently outperformed the counterparts on the FLOPs reduction and the parameter reduction. FSCL pruned 53.80% parameters to achieve 55.99% FLOPs reduction with only 0.31%/0.08% Top-1/5 accuracy drop. Compared with those methods, FSCL achieves state-of-the-art performance, which shows that our FSCL can identify the redundant filters, and it is effective on large-scale datasets.
### Ablation Study
**Varying Pruned FLOPs**. We performed experiments with FSCL to explore the relationship between pruned FLOPs and the accuracy of ResNet-56 (Figure 3). Compared to the original model, FSCL could reduce over 55% of FLOPs without loss in accuracy, indicating that FSCL has a regularization effect on the neural network. Compared to other methods with the same baseline, we obtained higher accuracy at similar pruned FLOPs.
## 4 Conclusion
In this paper, we analyze the relationship of filters in consecutive convolutional layers and reveal that the filters' similarity can be utilized to slim the model. So we propose a novel filter pruning method called FSCL, which ranks the filter importance by their similarity in consecutive layers. FSCL can detect the filters whose corresponding features are more worthless for model compression. Extensive experiments on various modern CNNs show that our intuitive and effective FSCL achieves state-of-the-art performance.
\begin{table}
\begin{tabular}{c|l|c|c|c} \hline \hline Model & Method & B/P Top-1 / \(\downarrow\) (\%) & FLOPs / \(\downarrow\)(\%) & Params / \(\downarrow\)(\%) \\ \hline \multirow{5}{*}{VGG-16} & SSS [7] & 93.59 / 93.02 / 0.57 & 183.13M / 41.6 & 3.93M / 73.8 \\ & GAL-0.1 [8] & 93.96 / 93.42 / 0.54 & 171.89M / 45.2 & 2.67M / 82.2 \\ & Hinge [9] & 94.02 / 93.59 / 0.43 & - / 39.1 & - / 80.1 \\ & HRank [10] & 93.96 / 92.34 / 1.62 & 108.61M / 65.3 & 2.64M / 82.1 \\ & **FSCL (Ours)** & 93.96 / **93.68 / 0.28** & **58.06M / 81.5** & **1.58M / 89.5** \\ \hline \multirow{7}{*}{ResNet-56} & CP [11] & 92.80 / 91.80 / 1.00 & 62.00M / 50.6 & - / - \\ & NISP [12] & 93.04 / 93.01 / 0.03 & 81.00M / 35.5 & 0.49M / 42.4 \\ & GAL-0.6 [8] & 93.26 / 93.38 / -0.12 & 78.30M / 37.6 & 0.75M / 11.8 \\ & HRank [10] & 93.26 / 93.17 / 0.09 & 62.72M / 50.0 & 0.49M / 42.4 \\ & NPPM [13] & 93.04 / 93.40 / -0.36 & - / 50.0 & - / - \\ & LRPET [14] & 93.33 / 93.10 / 0.23 & 61.92M / 51.0 & 0.43M / 49.4 \\ & **FSCL (Ours)** & 93.26 / **93.65 / -0.39** & **59.95M / 52.2** & **0.42M / 50.3** \\ \hline \multirow{3}{*}{ResNet-56} & FPGM [2] & 93.59 / 93.49 / 0.10 & - / 52.6 & - / - \\ & Pruning Criterion [15] & 93.59 / 93.24 / 0.35 & - / 52.6 & - / - \\ & **FSCL (Ours)** & 93.26 / **93.52 / -0.26** & **57.26M / 54.4** & **0.43M / 49.7** \\ \hline \multirow{8}{*}{GoogLeNet} & Random & 95.05 / 94.54 / 0.51 & 960M / 36.8 & 3.58M / 41.8 \\ & GAL-ApoZ [6] & 95.05 / 92.11 / 2.94 & 760M / 50.0 & 2.58M / 53.7 \\ & GAL-0.5 [8] & 95.05 / 94.56 / 0.49 & 940M / 38.2 & 3.12M / 49.3 \\ & ABCPruner [16] & 95.05 / 94.84 / 0.21 & 513M / 66.6 & 2.46M / 60.1 \\ & HRank [10] & 95.05 / 94.53 / 0.52 & 690M / 54.9 & 2.47M / 55.4 \\ & DCFF [17] & 95.05 / 94.92 / 0.13 & 460M / 70.1 & 2.08M / 66.3 \\ & CLR-0.91 [18] & 95.05 / 94.85 / 0.20 & 490M / 67.9 & 2.18M / 64.7 \\ & **FSCL (Ours)** & 95.05 / **95.03 / 0.02** & **375M / 75.3** & **2.07M / 66.3** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparisons of performance on CIFAR-10. "-" denotes that the results are not reported. Top-1 \(\downarrow\) is the Top-1 accuracy gap between the baseline model and the pruned model. The best performance is highlighted in bold.
Figure 3: Accuracy of ResNet-56 using various pruned FLOPs. The dotted orange line indicates the accuracy of the baseline (unpruned ResNet-56). |
2302.00703 | Spectral statistics of a minimal quantum glass model | Glasses have the interesting feature of being neither integrable nor fully
chaotic. They thermalize quickly within a subspace but thermalize much more
slowly across the full space due to high free energy barriers which partition
the configuration space into sectors. Past works have examined the
Rosenzweig-Porter (RP) model as a minimal quantum model which transitions from
localized to chaotic behavior. In this work we generalize the RP model in such
a way that it becomes a minimal model which transitions from glassy to chaotic
behavior, which we term the "Block Rosenzweig-Porter" (BRP) model. We calculate
the spectral form factors of both models at all timescales. Whereas the RP
model exhibits a crossover from localized to ergodic behavior at the Thouless
timescale, the new BRP model instead crosses over from glassy to fully chaotic
behavior, as seen by a change in the slope of the ramp of the spectral form
factor. | Richard Barney, Michael Winer, Christopher L. Baldwin, Brian Swingle, Victor Galitski | 2023-02-01T19:01:02Z | http://arxiv.org/abs/2302.00703v3 | # Spectral statistics of a minimal quantum glass model
###### Abstract
Glasses have the interesting feature of being neither integrable nor fully chaotic. They thermalize quickly within a subspace but thermalize much more slowly across the full space due to high free energy barriers which partition the configuration space into sectors. Past works have examined the Rosenzweig-Porter (RP) model as a minimal quantum model which transitions from localized to chaotic behavior. In this work we generalize the RP model in such a way that it becomes a minimal model which transitions from glassy to chaotic behavior, which we term the "Block Rosenzweig-Porter" (BRP) model. We calculate the spectral form factors of both models at all timescales larger than the inverse spectral width. Whereas the RP model exhibits a crossover from localized to ergodic behavior at the Thouless timescale, the new BRP model instead crosses over from glassy to fully chaotic behavior, as seen by a change in the steepness of the ramp of the spectral form factor.
###### Contents
* 1 Introduction
* 2 Review of the spectral form factor
* 3 Construction of the spectral form factor
* 3.1 The joint probability density function
* 3.2 Correlation functions
* 3.3 The unfolded spectral form factor
* 4 The Rosenzweig-Porter model
* 4.1 The spectral form factor
* 4.2 The infinite matrix size limit
* 5 The block Rosenzweig-Porter model
* 5.1 The two-point generating function
* 5.2 The spectral form factor
* 5.3 The infinite matrix size limit
* 6 Conclusion
* Appendix
## 1 Introduction
Random Hermitian matrices provide a simple model for the energy levels of a wide variety of quantum systems, including complex nuclei [1, 2, 3, 4, 5, 6], systems with a chaotic classical limit [7, 8, 9, 10, 11, 12], strongly interacting quantum field theories [13, 14, 15, 16, 17], and much more. The wide prevalence of random-matrix-like energy levels is known as random matrix universality [7, 18, 19], and it is one of the key manifestations of quantum chaos. Given an ensemble of Hamiltonians \(\{H\}\), one way to characterize random matrix universality is in terms of the statistical correlations of eigenvalues. Letting \(\rho(E)\) denote the density of eigenvalues of a particular random \(H\), then although the average density \(\overline{\rho(E)}\) is not universal, the pair-correlation \(\overline{\rho(E)\rho(E^{\prime})}\) does turn out to be. Indeed, one finds that the pair-correlation is closely related to the pair-correlation of a Gaussian random matrix of the appropriate symmetry.
However, while many systems ultimately exhibit random-matrix-like behavior at the finest energy scales, i.e. for sufficiently small \(E-E^{\prime}\), real systems typically have additional structure in their energy spectrum which is not random-matrix-like. This structure may be eventually washed out at the finest energy scales, but it does cause a deviation from the random matrix behavior of \(\overline{\rho(E)\rho(E^{\prime})}\) when \(E-E^{\prime}\) is of some intermediate size. Perhaps the simplest such structure occurs when the Hamiltonian breaks up into approximately decoupled blocks labelled by some almost-conserved quantity. This situation is common and is broadly related to the presence of slow dynamics, e.g. a slowly diffusing charge or almost-frozen glassy dynamics. In the simplest case, each block is statistically independent and the blocks are connected by some additional weak perturbations. Then as a function of \(E-E^{\prime}\), the pair-correlation can exhibit a crossover from multiple small random blocks to a single large random block. Recently, a set of effective theories have been formulated to describe this crossover [20]. Here we present a new solvable model in which this crossover can be analytically verified and studied. The results are consistent with Ref. [20], but they also offer new insights into various regimes.
This new model is a generalization of the unitary Rosenzweig-Porter (RP) model, which is itself a slight generalization of the original model constructed by Rosenzweig and Porter to describe complex atomic spectra [21]. The RP Hamiltonian is
\[H=A+\frac{\lambda}{N^{\gamma/2}}V, \tag{1}\]
where \(A\) is a diagonal matrix with independent and identically distributed elements, and \(V\) is a random matrix drawn from the Gaussian Unitary Ensemble (GUE) [22] whose matrix elements have unit variance. \(N\) is the size of the Hilbert space. Several studies [23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37] have examined this model at different values of \(\gamma\). The RP model has also been generalized in several different ways to create a family of interesting random matrix ensembles [38, 39, 40, 41, 42, 43, 44].
Motivated by the problem of many-body localization, Kravtsov et al. [27] studied this RP ensemble as a simple random matrix model with both localization and ergodicity-breaking transitions. Consistent with previous results [23, 24, 25, 26, 45], they found Poissonian statistics for \(\gamma>2\), indicating Anderson localized behavior, and GUE statistics for \(\gamma<1\), indicating chaotic behavior. They found that intermediate values of \(\gamma\) led to a non-ergodic extended phase in which eigenstates are neither localized nor fully ergodic.
One useful tool for studying spectral statistics is the unfolded spectral form factor (SFF), which is the Fourier transform of the unfolded two-point correlation function. Unfolding refers to the process of rescaling the spectrum in such a way that the local mean level spacing becomes unity, bringing the universal features of the SFF to the fore. Ref. [27] provides a calculation of the RP SFF for all \(\gamma>1\) at the Thouless timescale, i.e., the timescale at which random matrix statistics first appear. Other timescales are then examined by rescaling time in this result.
We can form schematic expectations for the behavior of the RP SFF at different values of \(\gamma\) by comparing the Thouless time to the two other cardinal times: the inverse spectral width \(w^{-1}\) and the Heisenberg time, which is the inverse mean level spacing. These schematics are shown in Fig. 1. In the figure \(u/N\) is the unfolded time, that is the real time divided by the level density. When the Thouless time is smaller than the inverse spectral width, which occurs when \(\gamma<1\), we expect GUE statistics indicating chaotic behavior. When it is larger than the Heisenberg time, which occurs when \(\gamma>2\), we expect Poissonian statistics indicating localized behavior. If the Thouless time is larger than the inverse spectral width but smaller than the Heisenberg time we expect to see a crossover between the two behaviors. Phase transitions occur when the Thouless timescale coincides with one of the cardinal timescales.
As an initial result of the present work, we directly calculate the SFF of the RP model at all timescales larger than the inverse spectral width. This approach has the added benefit of more closely paralleling the calculation of the SFF for the new model we examine. The result is shown in Eq. (66). We find that this result does follow the schematic expectations shown in Fig. 1 and is in good agreement with the numerical results we obtain. All these findings are in agreement with Ref. [27].
In the bulk of this work, we generalize the RP model to obtain a random matrix model which transitions from chaotic to glassy behavior, where glassiness is identified by the presence of multiple thermalization timescales. This is accomplished by redefining \(A\) in Eq. (1) to be the block-diagonal matrix
\[A=\bigoplus_{i=1}^{P}A^{(i)}, \tag{2}\]

where each block \(A^{(i)}\) is an independent GUE matrix of size \(M=N/P\) whose elements have variance \(M^{-1}\) (so that the eigenvalues of each \(A^{(i)}\) are finite at large \(M\)). We call this generalization of the RP model the Block Rosenzweig-Porter (BRP) model. Because of the normalization of each \(A^{(i)}\), the system thermalizes in \(O(1)\) time within each block but can fully thermalize only at times diverging with \(N\) (if at all). Due to the different structure of the \(A\) matrix, the BRP model will have a localized phase different from that of the RP model. In this phase the system will be confined to the subspaces corresponding to the blocks of \(A\) instead of individual states.

Figure 1: Schematic representation on a log-log scale of the unfolded SFF for the RP model in each of the three identified phases. The lower gray curve is the GUE SFF while the upper gray curve is the Poissonian result.
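As a concrete illustration, a realization of \(H\) can be sampled in a few lines of NumPy. This is a minimal sketch with our own function names and random-number conventions; the GUE normalization follows the weight \(\exp(-\operatorname{tr}V^{2}/2\sigma)\) of Eq. (9) below, and replacing \(A\) by a diagonal matrix of i.i.d. entries recovers the RP model of Eq. (1).

```python
import numpy as np
from scipy.linalg import block_diag

def gue(n, var, rng):
    """GUE matrix drawn from exp(-tr H^2 / (2 var)), i.e. <|H_ij|^2> = var."""
    x = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return np.sqrt(var) * (x + x.conj().T) / 2

def brp_hamiltonian(n_dim, p_blocks, gamma, lam=1.0, seed=0):
    """Sample H = A + V for the BRP model, with block-diagonal A as in Eq. (2)."""
    rng = np.random.default_rng(seed)
    m = n_dim // p_blocks                        # block size M = N / P
    a = block_diag(*[gue(m, 1.0 / m, rng) for _ in range(p_blocks)])
    v = gue(n_dim, lam**2 / n_dim**gamma, rng)   # coupling term of Eq. (1)
    return a + v
```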
This generalization of the RP model is motivated by our recent work examining the SFF of a canonical quantum spin glass model [46]. In that work we found that the SFF has a linear ramp at times sub-exponential in the system size, as would a chaotic system. However, the ramp is steeper by a factor of the number of distinct metastable states. This enhancement of the ramp is a hallmark of glassy behavior. It indicates that, at these early timescales, the Hamiltonian can be viewed as a direct sum of independent random matrices. However, it has been argued that, for appropriate values of the coupling between spins and at a sufficiently late timescale, the system will escape from the metastable configurations and fully thermalize [47, 48, 49]. At this timescale we would expect the SFF to experience a crossover from the enhanced ramp to the GUE ramp. Due to the challenge of calculating the SFF of canonical quantum spin glass models at late times with current methods, we turn to the BRP model as a simpler model of a quantum glass for which the SFF can be calculated at late timescales and the crossover from glassy to chaotic behavior can be seen.
As we did for the RP model, we form schematic expectations for the behavior of the SFF at different values of \(\gamma\) by comparing the Thouless time with the other cardinal times. These schematics are shown in Fig. 2. When the Thouless time is less than the inverse spectral width, which we find occurs when \(\gamma<1\), we expect GUE statistics indicating chaotic behavior. When it is larger than the Heisenberg time, which we find occurs when \(\gamma>2\), we expect the statistics of uncoupled GUE blocks, indicating localization within the blocks. Note that in this case the SFF reaches its plateau value at a time earlier than the Heisenberg time. This is the block Heisenberg time, which is the inverse mean level spacing for a single block of \(A\). This means that the block Heisenberg time is smaller than the full Heisenberg time by a factor of \(P\), the number of blocks. The block Heisenberg time is an additional cardinal time in the BRP model.
If the Thouless time is larger than the inverse spectral width and smaller than the Heisenberg time we expect to see a crossover between the block localized and chaotic behaviors at the Thouless timescale. Further, if the Thouless time is smaller than the block Heisenberg time, which occurs when \(\gamma<1+d\) where \(d=\log_{N}M\), this crossover will occur before the SFF can reach its plateau value at the block Heisenberg time. If the Thouless time is greater than the block Heisenberg time, which occurs when \(\gamma>1+d\), the SFF will reach its plateau value at the block Heisenberg time, drop down to the GUE result at the Thouless timescale, then reach the plateau value again at the Heisenberg time.
Our primary result is the calculation of the SFF of the BRP model at all timescales larger than the inverse spectral width, which is \(O(1)\). This result is shown in Eqs. (131)-(132). It follows the schematic expectations shown in Fig. 2 and is in good agreement with the numerical results we obtain. At the Thouless timescale, if it is not of the same order as the Heisenberg time, the SFF decays exponentially over time from its glassy result to the GUE SFF, as shown in Eq. (154). In the case where the Thouless time is of the same order as the Heisenberg time, a more complicated crossover behavior emerges, shown in Eqs. (155)-(156).
Our results indicate that for \(\gamma<1\) the system will immediately thermalize while for \(\gamma>2\) the system will always remain localized within blocks. For intermediate values of \(\gamma\) the system will initially be confined to a single block, but will escape and thermalize at the Thouless time. We
also find a transition at \(\gamma=1+d\). This transition has an important interpretation in terms of the eigenstates. It is the point at which the eigenstates become fully delocalized across the Hilbert space.
The remainder of this work is organized as follows. In section 2 we review the spectral form factor's definition, features, and role as a diagnostic of quantum chaos. In section 3 we outline the aspects of the construction of the SFF which are common to both the RP and BRP models and discuss the unfolding procedure. In section 4 we examine the RP SFF, discussing the results of previous works and directly calculating the SFF at relevant timescales. We compare our analytical results with numerics. Section 5 contains the bulk of our results, an examination of the BRP model and the calculation of its SFF, again comparing to numerical results. Finally, we conclude with section 6.
## 2 Review of the spectral form factor
Figure 2: Schematic representation on a log-log scale of the unfolded SFF for the BRP model in each of the four identified phases. The lower gray curve is the GUE SFF while the gray curve above it is the SFF for completely uncoupled GUE blocks.

The spectral form factor (SFF) has a long history as a diagnostic of quantum chaos [7, 18, 22]. Examples of random-matrix-like SFFs in chaotic systems appear in numerous areas of physics, from nuclear systems [6, 50] to condensed matter [51, 52, 53] to holographic theories [11, 12]. The SFF diagnoses whether energy levels repel as they do in random matrices [51], have independent Poissonian statistics [54], or have some more exotic behavior [55, 56, 57]. The SFF can be written as
\[\mathrm{SFF}(t,f)=\overline{|\operatorname{tr}[e^{-iHt}f(H)]|^{2}}, \tag{3}\]
where \(f\) is a filter function used to pick an energy band of interest, and the overline denotes a disorder average over an entire ensemble of Hamiltonians.
The SFF can also be expressed as the two-point function of the density of states. Defining
\[\rho(E,f)=\sum_{n}\delta(E-E_{n})f(E_{n})=\operatorname{tr}\left(\delta(H-E)f( H)\right), \tag{4}\]
we have
\[\mathrm{SFF}(t,f)=\overline{\int dE_{1}dE_{2}\rho(E_{1},f)\rho(E_{2},f)e^{i(E_ {1}-E_{2})t}} \tag{5}\]
As a two-point function, the SFF can be broken down into connected and disconnected components. These components have very different behaviors, as discussed below.
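Explicitly, the decomposition referred to here is

\[\mathrm{SFF}(t,f)=\underbrace{\left|\,\overline{\operatorname{tr}\left[e^{-iHt}f(H)\right]}\,\right|^{2}}_{\text{disconnected}}+\underbrace{\overline{\left|\operatorname{tr}\left[e^{-iHt}f(H)\right]\right|^{2}}-\left|\,\overline{\operatorname{tr}\left[e^{-iHt}f(H)\right]}\,\right|^{2}}_{\text{connected}},\]

and, as described below, it is the connected piece that carries the universal ramp and plateau.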
For random-matrix-like systems, the spectral form factor has three regimes of interest.
* The "dip," also known as the slope, occurs at early times. It comes from the disconnected piece of the SFF (and thus its precise shape is non-universal and depends on the details of \(f\) and the thermodynamics of the system). Its downward nature reflects a loss of constructive interference: at \(t=0\) the terms in \(\operatorname{tr}[e^{-iHt}f(H)]\) are all positive, but the different terms of \(\operatorname{tr}e^{-iHt}\) acquire different phase factors as \(t\) increases.
Figure 3: A log-log plot of the disorder-averaged SFF for the Gaussian unitary ensemble (GUE). The matrices in this ensemble have dimension \(N=5000\). The SFF was computed numerically by exactly diagonalizing five hundred realizations. The three regimes of the SFF—dip, ramp, plateau—are each labeled.
* The "ramp" occurs at intermediate times. It is arguably the most interesting regime, and marks the beginning of the universal behavior in the connected spectral form factor. In the canonical matrix ensembles, it is a consequence of the result [22] \[\mathbb{E}\left[\rho\left(E+\frac{\omega}{2}\right)\rho\left(E-\frac{\omega}{2} \right)\right]-\mathbb{E}\left[\rho\left(E+\frac{\omega}{2}\right)\right] \mathbb{E}\left[\rho\left(E-\frac{\omega}{2}\right)\right]\sim-\frac{1}{ \mathfrak{b}\pi^{2}\omega^{2}},\] (6) where \(\mathfrak{b}=1\), \(2\), \(4\) for the orthogonal, unitary, and symplectic ensembles respectively. The fact that the right hand side is negative is a manifestation of level repulsion [50]. Taking the Fourier transform of Eq. (6) with respect to \(\omega\) gives a term proportional to \(t\) for the connected SFF. Such a linear-in-\(t\) ramp is often taken as a defining signature of quantum chaos. The exact coefficient of the ramp can tell us a lot about a quantum system. For instance, if \(H\) is not a GUE matrix but the direct sum \(H_{1}\oplus H_{2}\) of two GUE matrices, the ramp will be enhanced by a factor of two. This enhancement shows up in realistic systems such as the Bunimovich stadium [58, 59], which have different sectors of their Hamiltonian which behave differently under reflection symmetry. It has recently been shown [46] that at times sub-exponential in system size, all-to-all spin glasses exhibit an enhancement of the ramp equal to the number of effective "sectors," i.e., regions of configuration space rendered dynamically disconnected by large energy barriers. This work serves as an extension of that result for a toy model of spin glasses, going out to late times.
* The "plateau" occurs at late times. It is a signature of the discreteness of the spectrum. It is part of the connected spectral form factor and is completely universal to all systems, whether thermalizing, integrable, glassy, or many-body localized [60, 61]. At times much larger than the inverse level spacing or "Heisenberg time," one expects that all off-diagonal terms in the double-trace of the SFF average to zero, meaning that \[\mathrm{SFF}(t,f)=\sum_{mn}\overline{e^{-i(E_{m}-E_{n})t}f(E_{m})f(E_{n})} \sim\sum_{n}f(E_{n})^{2}.\] (7) For integrable systems, the plateau is reached very quickly with little to no ramp regime [54, 56, 57], whereas for chaotic systems the plateau isn't reached until a time exponential in system size.
In the bulk of this paper we calculate the unfolded connected SFF for the RP and BRP models at all timescales larger than the inverse spectral width, thus we capture both the ramp and the plateau regimes. We find that when the sectors of the BRP model are uncoupled the ramp of the SFF is enhanced by a factor of the number of sectors. As the coupling strength is increased there will be a crossover to the regular ramp at the Thouless timescale.
## 3 Construction of the spectral form factor
In this section we lay out the initial steps of the construction of the SFF which are common to both the RP model and the new BRP model. In fact, the results of this section are applicable to any Hamiltonian which is perturbed by a GUE matrix. Both of the models can be expressed as a Hamiltonian matrix of size \(N\) with the form
\[H=A+V, \tag{8}\]
where the spectrum of \(A\) is centered at \(0\) and has a width that is \(O(1)\) with respect to \(N\). \(V\) is a random GUE matrix drawn from the distribution
\[p_{V}(V)\sim\exp\left(-\frac{1}{2\sigma}\operatorname{tr}V^{2}\right),\ \sigma=\frac{\lambda^{2}}{N^{\gamma}}. \tag{9}\]
The exact details of \(A\) will depend on whether we are working with the RP or BRP ensemble, but are unimportant at this initial stage.
### The joint probability density function
Our first step is to find the joint probability density function (JPDF) of the eigenvalues of the Hamiltonian \(H\). We follow the method of Kunz and Shapiro [45]. For the moment we will hold \(A\) constant. The probability distribution of \(H\) is then
\[p_{H}(H)=p_{V}(H-A)\sim\exp\left[-\frac{1}{2\sigma}\sum_{i}\left(E_{i}^{2}+a_{ i}^{2}\right)\right]\exp\left(\frac{1}{\sigma}\operatorname{tr}AUEU^{\dagger} \right), \tag{10}\]
where \(\{a_{i}\}\) are the eigenvalues of \(A\), \(\{E_{i}\}\) are the eigenvalues of \(H\), \(E=\operatorname{diag}(E_{1},\ldots,E_{N})\) is the diagonal matrix similar to \(H\), and \(U\) is the unitary matrix which diagonalizes \(H\).
We now make the change of variables \(H\to\{U,E\}\). The Jacobian of this change is \(\Delta^{2}(E)\), where \(\Delta(E)=\prod_{i>j}(E_{i}-E_{j})\) is the Vandermonde determinant. The JPDF is
\[p(E)\sim\Delta^{2}(E)\exp\left[-\frac{1}{2\sigma}\sum_{i}\left(E_{i}^{2}+a_{i} ^{2}\right)\right]\int dU\exp\left(\frac{1}{\sigma}\operatorname{tr}AUEU^{ \dagger}\right), \tag{11}\]
where \(dU\) is the Haar measure over the unitary group \(U(N)\). We can evaluate the integral over \(U\) using the Itzykson-Zuber integral identity [14]
\[\int dU\exp\left(\frac{1}{\sigma}\operatorname{tr}AUEU^{\dagger}\right)\sim \frac{\det\exp(a_{i}E_{j}/\sigma)}{\Delta(a)\Delta(E)}. \tag{12}\]
Applying this identity, we find that the JPDF is
\[p(E)=c(\sigma,N)\frac{\Delta(E)}{\Delta(a)}\exp\left[-\frac{1}{2\sigma}\sum_{ i}\left(E_{i}^{2}+a_{i}^{2}\right)\right]\det\exp\left(\frac{a_{i}E_{j}}{ \sigma}\right), \tag{13}\]
with \(c(\sigma,N)\) to be determined by normalization.
To normalize we calculate
\[\begin{split} 1=\int dE\ p(E)=&\frac{c(\sigma,N)}{ \Delta(a)}\int dE\ \Delta(E)\exp\left[-\frac{1}{2\sigma}\sum_{i}\left(E_{i}^{2}+a_{i}^{2}\right) \right]\det\exp\left(\frac{a_{i}E_{j}}{\sigma}\right)\\ =&\frac{c(\sigma,N)}{\Delta(a)}\int dE\ \Delta(E)\exp \left[-\frac{1}{2\sigma}\sum_{i}\left(E_{i}^{2}+a_{i}^{2}\right)\right]\sum_{ \pi\in S_{N}}\operatorname{sign}(\pi)\prod_{i=1}^{N}\exp\left(\frac{a_{i}E_{ \pi(i)}}{\sigma}\right)\\ =& c(\sigma,N)\frac{N!}{\Delta(a)}\int dE\ \Delta(E)\exp \left[-\frac{1}{2\sigma}\operatorname{tr}(E-a)^{2}\right],\end{split} \tag{14}\]
where \(a={\rm diag}(a_{1},\ldots,a_{N})\). In the second line above \(S_{N}\) is the set of permutations of \(N\) elements. The third line follows because \(\Delta(E)\) is antisymmetric under all transpositions. We now use the identity (proven in the appendix of [45])
\[\int dE\ \Delta(E)\exp\left[-\frac{1}{2\sigma}\,{\rm tr}(E-a)^{2}\right]= \Delta(a)(2\pi\sigma)^{N/2} \tag{15}\]
to find that the normalization factor is
\[c(\sigma,N)=\frac{1}{N!}(2\pi\sigma)^{-N/2}. \tag{16}\]
We now let \(A\) also be a random matrix. From Eqs. (13) and (16) we see that, for any function \(W(E)\) which is symmetric in the eigenvalues of \(H\), the ensemble average is
\[\overline{W}(E)=(2\pi\sigma)^{-N/2}\left\langle\frac{1}{\Delta(a)}\int dE\ W(E) \Delta(E)\exp\left[-\frac{1}{2\sigma}\,{\rm tr}(E-a)^{2}\right]\right\rangle, \tag{17}\]
where the angle brackets indicate that the average over the distribution of \(A\) still needs to be performed. Fortunately the SFF and the correlation functions examined below are such symmetric functions.
### Correlation functions
We now examine two correlation functions which we will use to construct the SFF. The first is the Fourier transform of the level density
\[\overline{C_{1}}(t)=\sum_{k}\overline{e^{itE_{k}}}. \tag{18}\]
Averaging with respect to the JPDF and making use of Eq. (15) yields
\[\overline{C_{1}}(t)=e^{-\sigma t^{2}/2}\left\langle\sum_{k}e^{ita_{k}}\frac{ \Delta(a+\delta_{k}\tau)}{\Delta(a)}\right\rangle=e^{-\sigma t^{2}/2}\left \langle\sum_{k}e^{ita_{k}}\prod_{j\neq k}\left(1+\frac{\tau}{a_{k}-a_{j}} \right)\right\rangle, \tag{19}\]
where \(\tau=it\sigma\) and \(\delta_{k}\) is the projection matrix onto the \(k^{\rm th}\) dimension. In order to average over the eigenvalues of \(A\) we rewrite this as
\[\overline{C_{1}}(t)=\frac{e^{-\sigma t^{2}/2}}{\tau}\oint_{\cal C}\frac{dz}{ 2\pi i}e^{itz}\left\langle\prod_{j}\left(1+\frac{\tau}{z-a_{j}}\right)\right\rangle, \tag{20}\]
where \({\cal C}\) is a rectangular integration contour of infinitesimal width in the imaginary direction which encompasses the real axis. Using this form is advantageous because each term within the angle brackets now depends on only a single eigenvalue of \(A\).
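To verify the equivalence of Eqs. (19) and (20), note that the integrand of Eq. (20) has a simple pole at each \(z=a_{k}\), with residue

\[\operatorname*{Res}_{z=a_{k}}\left[e^{itz}\prod_{j}\left(1+\frac{\tau}{z-a_{j}}\right)\right]=\tau\,e^{ita_{k}}\prod_{j\neq k}\left(1+\frac{\tau}{a_{k}-a_{j}}\right),\]

so dividing by \(\tau\) and summing over the enclosed poles reproduces Eq. (19) term by term.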
We now consider the Fourier transform of the two-point correlation function (excepting the \(k=l\) terms)
\[\overline{C_{2}}(t)=\sum_{k\neq l}\overline{e^{it(E_{k}-E_{l})}}. \tag{21}\]
Again, averaging with respect to the JPDF and using Eq. (15) yields
\[\begin{split}\overline{C_{2}}(t)&=e^{-\sigma t^{2}} \left\langle\sum_{k\neq l}e^{it(a_{k}-a_{l})}\frac{\Delta\left(a+\tau(\delta_{k }-\delta_{l})\right)}{\Delta(a)}\right\rangle\\ &=e^{-\sigma t^{2}}\left\langle\sum_{k\neq l}e^{it(a_{k}-a_{l})} \left[1-\left(\frac{\tau}{a_{l}-a_{k}-\tau}\right)^{2}\right]\prod_{j\neq k} \left(1+\frac{\tau}{a_{k}-a_{j}}\right)\prod_{j\neq l}\left(1-\frac{\tau}{a_{l} -a_{j}}\right)\right\rangle.\end{split} \tag{22}\]
We can also write this in terms of contour integrals as
\[\begin{split}\overline{C_{2}}(t)&=-\frac{e^{-\sigma t ^{2}}}{\tau^{2}}\oint_{\mathcal{C}}\frac{dz}{2\pi i}\oint_{\mathcal{C}}\frac{ dz^{\prime}}{2\pi i}e^{it(z-z^{\prime})}\left[1-\left(\frac{\tau}{z^{\prime}-z- \tau}\right)^{2}\right]\left\langle\prod_{j}\left(1+\frac{\tau}{z-a_{j}} \right)\left(1-\frac{\tau}{z^{\prime}-a_{j}}\right)\right\rangle.\end{split} \tag{23}\]
### The unfolded spectral form factor
We now combine the correlation functions considered above to form the function
\[C(t)=\frac{1}{N}\left(\overline{C_{2}}(t)-\left|\overline{C_{1}}(t)\right|^{2 }\right), \tag{24}\]
which is the connected SFF apart from the missing \(k=l\) terms in \(C_{2}(t)\). We will now demonstrate how \(C(t)\) can be used to find the unfolded SFF. We note that
\[C(t) =-\frac{1}{N}\int dz\ dz^{\prime}e^{it(z-z^{\prime})}\left(\rho(z) \rho(z^{\prime})-\rho_{2}(z,z^{\prime})+\sum_{k}\overline{\delta(z-E_{k}) \delta(z^{\prime}-E_{k})}\right), \tag{25}\] \[\rho(z) =\sum_{k}\overline{\delta(z-E_{k})},\ \rho_{2}(z,z^{\prime})=\sum_{k,l} \overline{\delta(z-E_{k})\delta(z^{\prime}-E_{l})}. \tag{26}\]
We now unfold by making the change of variables
\[(z,z^{\prime}) =x\pm\frac{y}{2\rho(x)}, \tag{27}\] \[t =\rho(x)T. \tag{28}\]
In these new variables \(x\) is the central energy and \(y\) is the distance between the energies in units of local mean level spacings at the central energy. With this change we find
\[C(t)=\int dx\ p(x)\left(K(x,T)-1\right), \tag{29}\] \[K(x,T)=-\int dy\ Y(x,y)e^{iTy},\] (30) \[Y(x,y)=\frac{1}{[\rho(x)]^{2}}\left[\rho\left(x+\frac{y}{2\rho( x)}\right)\rho\left(x-\frac{y}{2\rho(x)}\right)-\rho_{2}\left(x+\frac{y}{2\rho(x)}, x-\frac{y}{2\rho(x)}\right)\right], \tag{31}\]
where \(p(x)=\rho(x)/N\) is the probability distribution function, \(Y(x,y)\) is the unfolded connected two-point correlation function, also called the unfolded two-point cluster function, and \(K(x,T)\) is the unfolded connected SFF. Our approach, whichever model we are examining, will be to put \(C(t)\) in the form of Eq. (29) and extract the unfolded connected SFF, which we will simply call the SFF going forward.
For the models we study in this work it is reasonable to assume that \(Y(x,y)\) vanishes in the large \(N\) limit for \(y\gg 1\), so we can restrict the interval of integration in Eq. (30) to be of order 1. For these values of \(y\) the disconnected part of \(Y(x,y)\) will tend to 1, thus it will not contribute to the SFF for \(T\neq 0\). In the calculations that follow we will therefore neglect the disconnected piece of the two-level cluster function. This is an advantage of the unfolding procedure; we no longer need to worry about subtracting out the disconnected part in order to observe the universal features of the SFF. We will also find that if we unfold the coupling parameter \(\lambda\) as well, \(Y(x,y)\) and the SFF will become independent of the center energy \(x\). So, although \(C(t)\) is not universal due to its dependence on \(p(x)\), the unfolded connected two-point correlation function and the SFF will be universal.
It will be helpful to use the time variable \(u=N|T|=|t|/p(x)\), so \(u\) is of the same order as the real time \(t\). This means that \(u\) is the time unfolded by the level probability density instead of the level density. The modulus may be taken since the SFF is symmetric in time. Because the SFF is the Fourier transform of the two-point cluster function, at time \(u\) it probes the correlations between energy levels with separations on the order of \(u^{-1}\). For this reason we consider only timescales larger than the inverse spectral width.
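One practical way to estimate the unfolded SFF from numerical spectra, used later for the figures, is to restrict to an energy window in which \(p(x)\) is roughly constant. A minimal sketch follows (our own illustrative code; the function and variable names are not from the references): \(p(x_{0})\) is estimated from the mean level count in the window, and the connected average is formed so that the \(k=l\) terms supply the plateau value of 1. As a sanity check, uncorrelated (Poissonian) levels should give \(K(u)\approx 1\) away from very early times.

```python
import numpy as np

def unfolded_sff(spectra, u_grid, x0=0.0, half_width=0.5):
    """Estimate the unfolded connected SFF K(u) from an ensemble of spectra.

    spectra    : (realizations, N) array of eigenvalues
    u_grid     : unfolded times u = |t| / p(x0)
    x0         : center of the energy window
    half_width : levels outside [x0 - half_width, x0 + half_width] are
                 discarded; p(x) is treated as constant inside the window
    """
    N = spectra.shape[1]
    windowed = [E[np.abs(E - x0) < half_width] for E in spectra]
    n_w = np.mean([len(E) for E in windowed])   # mean level count in window
    p0 = n_w / (N * 2 * half_width)             # level probability density p(x0)
    K = np.empty(len(u_grid))
    for i, u in enumerate(u_grid):
        t = u * p0                              # physical time for this u
        z = np.array([np.exp(1j * t * E).sum() for E in windowed])
        # connected average; the k = l terms supply the plateau value of 1
        K[i] = (np.mean(np.abs(z) ** 2) - np.abs(np.mean(z)) ** 2) / n_w
    return K

# Sanity check: uncorrelated (Poissonian) levels give K(u) close to 1.
rng = np.random.default_rng(1)
levels = rng.normal(size=(2000, 500))
print(unfolded_sff(levels, np.array([10.0, 100.0, 1000.0])))
```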
## 4 The Rosenzweig-Porter model
The Hamiltonian matrix for the RP model has the form of Eq. (8), with the additional condition that the eigenvalues of \(A\) are all independent and identically distributed. We can consider \(A\) to be diagonal such that
\[A=\mbox{diag}(a_{1},\ldots,a_{N}), \tag{32}\]
where each \(a_{i}\) is independently drawn from some distribution \(p(a)\) with a variance of order 1 in \(N\).
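For reference, one realization of such a Hamiltonian can be generated as below. Since Eq. (8) is not reproduced in this section, the precise form is our assumption: we take \(H=A+\lambda N^{-\gamma/2}V\) with \(V\) a GUE matrix whose entries have variance of order 1, consistent with the all-to-all couplings of order \(N^{-\gamma/2}\) discussed below and with the numerical setup quoted in the caption of Fig. 4.

```python
import numpy as np

def rp_hamiltonian(N, gamma, lam, rng):
    """One draw of the (assumed) RP Hamiltonian H = A + lam * N**(-gamma/2) * V.

    A : diagonal with iid standard-normal entries (variance of order 1)
    V : GUE matrix with entries of variance of order 1, so that the
        perturbation has spectral width of order N**((1 - gamma) / 2)
    """
    A = np.diag(rng.normal(size=N))
    m = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    V = (m + m.conj().T) / 2                    # Hermitian, O(1) entries
    return A + lam * N ** (-gamma / 2) * V

rng = np.random.default_rng(0)
E = np.linalg.eigvalsh(rp_hamiltonian(1000, 1.6, 1.0, rng))  # one spectrum
```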
We can view this as an Anderson model [62] of \(N\) sites with independent random on-site potentials and all-to-all random couplings on the order of \(N^{-\gamma/2}\). From Fermi's golden rule, we can determine that the tunneling rate from one of these sites is on the order of \(N^{1-\gamma}\). Thus the Thouless time, the time at which the system will escape a single site and random matrix statistics first appears, is
\[t_{\rm Th}\sim N^{\gamma-1}. \tag{33}\]
The ergodicity-breaking transition occurs at the point where the tunneling rate is the same order as the spectral width [32, 48, 49, 63], or, equivalently, when the Thouless time becomes of the same order as the inverse spectral width. The spectral width of the GUE matrix \(V\) is on the order of \(N^{(1-\gamma)/2}\)[22], while the spectral width of \(A\) is of order 1. So the spectral width of the RP model is
\[w\sim\max(1,N^{(1-\gamma)/2}). \tag{34}\]
This indicates that the ergodicity-breaking transition occurs at \(\gamma=1\). For all \(\gamma<1\) the RP model will exhibit GUE statistics.
The localization transition of the model, on the other hand, can be determined from the Mott criterion [32, 49, 63, 64], i.e., when the number of sites in resonance with a given one becomes finite at large \(N\). For the RP model this number of sites is on the order of \(N^{1-\gamma/2}\), indicating that the localization transition occurs at \(\gamma=2\). Equivalently, we can understand the localization transition as occurring when the Thouless and Heisenberg times are of the same order. From our result for the spectral width we find that the Heisenberg time, which is the inverse mean level spacing, is
\[t_{\rm Heis}\sim\min(N,N^{(1+\gamma)/2}). \tag{35}\]
The Thouless and Heisenberg times are of the same order when \(\gamma=2\), which matches our earlier reasoning for the localization transition through the Mott criterion. For \(\gamma>2\) the Thouless time is larger than the Heisenberg time. Since the SFF reaches its plateau value at the Heisenberg timescale, there is no chance for a random-matrix-like ramp to appear. The RP model will behave as if \(H=A\). That is, Poissonian statistics will emerge. This analysis informs our determination of which values of \(\gamma\) lead to which phase in the schematics of Fig. 1.
It is clear from the above discussion that the region \(1<\gamma<2\) is particularly interesting because random-matrix-like behavior should be found, but only after a long period of time has elapsed. This case, in which the Thouless time is larger than the inverse spectral width but smaller than the Heisenberg time, is the nonergodic extended phase [27]. In this region eigenstates are not localized, but they are not spread sufficiently to be ergodic. For these values of \(\gamma\) the calculation of the SFF is nontrivial because the behavior of the SFF depends not only on \(\gamma\) but also on the timescale. Going forward we will assume that \(\gamma>1\).
As a warm-up we calculate the Fourier transform of the density of states \(\overline{C_{1}}(t)\) to leading order. Due to the independence of the eigenvalues of \(A\) we can simplify Eq. (20) as
\[\overline{C_{1}}(t) =\frac{e^{-\sigma t^{2}/2}}{\tau}\oint_{\mathcal{C}}\frac{dz}{2 \pi i}e^{itz}[g_{1}(z,\tau)]^{N}, \tag{36}\] \[g_{1}(z,\tau) =1+\tau\left\langle\frac{1}{z-a^{\prime}}\right\rangle, \tag{37}\]
where \(a^{\prime}\) is any eigenvalue of \(A\). \(\overline{C_{1}}(t)\) contains information about the average density of states. Expanding with the binomial theorem, we find
\[\frac{1}{N}\overline{C_{1}}(t)=\frac{1}{N}e^{-\sigma t^{2}/2}\oint_{\mathcal{C }}\frac{dz}{2\pi i}e^{itz}\sum_{j=0}^{N}{N\choose j}(i\sigma t)^{j-1}\left\langle \frac{1}{z-a^{\prime}}\right\rangle^{j}. \tag{38}\]
Note that we have divided by \(N\) to probe the average density of states but we have not performed the unfolding procedure. We want to examine this quantity on the scale of the spectral width, so we let \(t\) be order \(1\). For \(\gamma>1\), \(N\sigma\) goes to \(0\), meaning only the \(j=1\) term remains. So
\[\frac{1}{N}\overline{C_{1}}(t)=\oint_{\mathcal{C}}\frac{dz}{2\pi i}e^{itz} \left\langle\frac{1}{z-a^{\prime}}\right\rangle=\left\langle e^{ita^{\prime}} \right\rangle. \tag{39}\]
This means that the average level density is the same as the probability density of the eigenvalues of \(A\)[19].
As discussed above, we can neglect the disconnected contribution\({}^{1}\), so the unfolded SFF at nonzero times is determined entirely by \(\overline{C_{2}}(t)\). In the RP model, Eq. (23) simplifies as
Footnote 1: Strictly speaking, the argument above for neglecting the disconnected contribution to the SFF only applies for \(T\neq 0\), i.e., \(t/N\neq 0\). Nonetheless, we make the same approximation for times that scale as a smaller (but still non-zero) power of \(N\) as well. We find good agreement between the resulting expressions and numerical calculations for all timescales of interest.
\[\overline{C_{2}}(t) =-\frac{e^{-\sigma t^{2}}}{\tau^{2}}\oint_{\mathcal{C}}\frac{dz}{2 \pi i}\oint_{\mathcal{C}}\frac{dz^{\prime}}{2\pi i}e^{it(z-z^{\prime})}\left[1 -\left(\frac{\tau}{z^{\prime}-z-\tau}\right)^{2}\right][g_{2}(z,\tau;z^{\prime },-\tau)]^{N}, \tag{40}\] \[g_{2}(z,\tau;z^{\prime},-\tau) =\left\langle\left(1+\frac{\tau}{z-a^{\prime}}\right)\left(1- \frac{\tau}{z^{\prime}-a^{\prime}}\right)\right\rangle\] (41) \[=1+\tau\left(1-\frac{\tau}{z^{\prime}-z}\right)\left(\left\langle \frac{1}{z-a^{\prime}}\right\rangle-\left\langle\frac{1}{z^{\prime}-a^{\prime} }\right\rangle\right).\]
Inserting Eq. (40) into Eq. (24) and neglecting the disconnected contribution, we obtain
\[C(t)=-\frac{e^{-\sigma t^{2}}}{N\tau^{2}}\oint_{\mathcal{C}}\frac{dz}{2\pi i} \oint_{\mathcal{C}}\frac{dz^{\prime}}{2\pi i}e^{it(z-z^{\prime})}\left\{\left[1 -\left(\frac{\tau}{z^{\prime}-z-\tau}\right)^{2}\right][g_{2}(z,\tau;z^{\prime },-\tau)]^{N}\right\}. \tag{42}\]
We now make the change of variables
\[(z,z^{\prime})=x\pm\frac{y}{2N}-i(q,q^{\prime})0^{+}, \tag{43}\]
where \(q,q^{\prime}=\pm 1\), indicating which leg of the contour \(\mathcal{C}\) the variables are on. Note that this definition of \(y\) differs from that of Eq. (27) by a factor of \(p(x)\), but it has the same scaling with \(N\) and the argument for taking \(y\) to be order 1 still applies. This change is made purely for convenience. With it we obtain
\[C(t)=-\frac{e^{-\sigma t^{2}}}{(N\tau)^{2}}\sum_{q,q^{\prime}}qq^{\prime}\int \frac{dx}{2\pi i}\int\frac{dy}{2\pi i}e^{iN^{-1}ty}\left[1-\left(\frac{N\tau} {y-i(q-q^{\prime})0^{+}+N\tau}\right)^{2}\right][g_{2}(z,\tau;z^{\prime},-\tau )]^{N}. \tag{44}\]
### The spectral form factor
Kunz and Shapiro [45] used the process outlined above to find the SFF when \(\gamma=2\) and \(t\sim N\) (for which the Thouless and Heisenberg timescales coincide). They found that, if the coupling parameter is also unfolded by replacing \(\lambda\) with \(\Lambda/p(x)\), the two-level cluster function and the SFF become independent of the center energy \(x\). Later Kravtsov et al. [27] generalized this result to all \(\gamma>1\) at the Thouless timescale. Additional timescales may be examined by rescaling time in this result. We first review this result and then present our direct calculation of the SFF at all timescales of interest. This calculation more closely parallels that of the SFF for the BRP model presented in section 5.
Making the change of variables in Eq. (43), we find that
\[\left\langle\frac{1}{z-a^{\prime}}\right\rangle=\mathrm{P}\!\int\frac{p(a^{\prime})\,da^{\prime}}{x-a^{\prime}}+i\pi q\,p(x)+O(N^{-1}), \tag{45}\]

where \(\mathrm{P}\) denotes the Cauchy principal value.
Inserting this result into Eq. (44), writing \(s=N^{1-\gamma}t\) for the time measured on the Thouless scale, and then unfolding by setting
\[v=s/p(x), \tag{50}\] \[\Lambda=\lambda p(x) \tag{51}\]
yields the result that the Thouless-timescale SFF is
\[K_{\rm Th}^{(\gamma)}(v)=1+e^{-2\pi\Lambda^{2}v-N^{\gamma-2} \Lambda^{2}v^{2}}\left[\frac{2I_{1}(\kappa v^{3/2})}{\kappa v^{3/2}}\right.\\ \left.-\frac{1}{4\pi}\kappa v^{5/2}N^{\gamma-2}\int_{0}^{\infty }\frac{d\xi\;\xi}{\sqrt{\xi+1}}I_{1}(\kappa v^{3/2}\sqrt{\xi+1})e^{-N^{\gamma- 2}\Lambda^{2}v^{2}\xi}\right], \tag{52}\]
where \(\kappa=\sqrt{8\pi\Lambda^{4}N^{\gamma-2}}\) and \(I_{1}(x)\) is the modified Bessel function of the first kind. See Ref. [27] for full details. Note that we have unfolded using the probability density instead of the level density, which differ by a factor of \(N\). Eq. (52) may also be used to examine other timescales by rescaling \(v\). However, we note that the intermediate step Eq. (49) only holds for \(s\ll\sqrt{N}\), that is when \(t\ll N^{\gamma-1/2}\). Even restricting ourselves to times not larger than the Heisenberg time (\(t\not\gg N\)), we see that this is not always the case.
We now present a direct calculation of the SFF for the RP model at all relevant timescales. We make the same change of variables \(\{z,z^{\prime}\}\to\{x,y\}\) shown in Eq. (43), but we will not set the \(N\)-dependence of \(t\) yet. We can now write
\[C(t)=\int dx\;p(x)e^{-\sigma t^{2}}(S_{1}+S_{2}), \tag{53}\] \[S_{1}=-\frac{1}{2\pi ip(x)(N\tau)^{2}}\sum_{q,q^{\prime}}qq^{ \prime}\int\frac{dy}{2\pi i}e^{iN^{-1}ty}[g_{2}(z,\tau;z^{\prime},-\tau)]^{N},\] (54) \[S_{2}=\frac{1}{2\pi ip(x)}\sum_{q,q^{\prime}}qq^{\prime}\int \frac{dy}{2\pi i}e^{iN^{-1}ty}\frac{[g_{2}(z,\tau;z^{\prime},-\tau)]^{N}}{[y-i (q-q^{\prime})0^{+}+N\tau]^{2}}. \tag{55}\]
Instead of seeking an asymptotic form for \([g_{2}(z,\tau;z^{\prime},-\tau)]^{N}\) as in Eq. (49), we write
\[\begin{split}[g_{2}(z,\tau;z^{\prime},-\tau)]^{N}&= \sum_{j=0}^{N}{N\choose j}\frac{\left\{N\tau^{2}\left[i\pi p(x)(q-q^{\prime}) +O(N^{-1})\right]\right\}^{j}}{\left\{1+\tau\left[i\pi p(x)(q-q^{\prime})+O(N ^{-1})\right]\right\}^{j-N}[y-i(q-q^{\prime})0^{+}]^{j}}\\ &=\sum_{j=0}^{N}p_{\rm bi}\left(N,j,i\pi\tau p(x)(q^{\prime}-q)+O (N^{-1}\tau)\right)\left(\frac{N\tau}{y-i(q-q^{\prime})0^{+}}\right)^{j}, \end{split} \tag{56}\]
where
\[p_{\rm bi}(N,j,x)={N\choose j}(1-x)^{N-j}x^{j} \tag{57}\]
is the binomial probability density function. We take \(i\tau(q^{\prime}-q)\) to be positive because, when it is not, the integrals over \(y\) in Eqs. (54)-(55) vanish due to a lack of singularities in the half of the complex plane in which the contour of integration may be closed. At this point we restrict ourselves to times not larger than the Heisenberg time, so \(|\tau|=\sigma t\ll 1\). This is not a problem since we already know that the SFF goes to 1 for times larger than the Heisenberg time. With this restriction, we see by inspection of the first line of Eq. (56) that we can neglect the \(O(N^{-1})\) terms when \(j\ll N\). The probability density \(p_{\rm bi}(N,j,x)\) is peaked near its mean at \(j=Nx\) with
a variance of \(Nx(1-x)\). Far from the peak the probability density is exponentially small in \(N\). So only the \(j\sim N|\tau|\) terms will contribute in Eq. (56). This means that the \(j\sim N\) terms will not contribute and we can therefore neglect the \(O(N^{-1})\) terms. With these considerations we can write, for positive \(i\tau(q^{\prime}-q)\),
\[[g_{2}(z,\tau;z^{\prime},-\tau)]^{N}=\sum_{j=0}^{N}p_{\rm bi}\left(N,j,i\pi\tau p (x)(q^{\prime}-q)\right)\left(\frac{N\tau}{y-i(q-q^{\prime})0^{+}}\right)^{j}. \tag{58}\]
Inserting this into Eq. (54) we find
\[S_{1}=\frac{1}{N}\sum_{q}\sum_{j=1}^{N}{N\choose j}\frac{(2\pi iN\tau^{2}p(x) )^{j-1}}{(1+2\pi i\tau qp(x))^{j-N}}\int\frac{dy}{2\pi i}\frac{e^{iN^{-1}tqy} }{(y-i0^{+})^{j}}, \tag{59}\]
where we have made the change of variable \(y\to qy\). Note that the \(j=0\) term vanishes due to the integrand having no singularities. For the remaining terms the integral over \(y\) vanishes when \(tq<0\), so we can remove the sum over \(q\) by making the replacement \(tq\to|t|\). Performing the integral over \(y\), we find that
\[S_{1}=\sum_{j=0}^{N-1}\frac{(|\tau||t|)^{j}}{(j+1)!}p_{\rm bi}\left(N-1,j,2 \pi|\tau|p(x)\right). \tag{60}\]
We now turn our attention to \(S_{2}\). Inserting Eq. (56) into Eq. (55), we find
\[S_{2}=\frac{i}{2\pi p(x)}\sum_{q}\sum_{j=1}^{N}{N\choose j}\frac{(2\pi iN\tau ^{2}p(x))^{j}}{(1+2\pi i\tau qp(x))^{j-N}}\int\frac{dy}{2\pi i}\frac{e^{iN^{- 1}tqy}}{(y+N\tau q)^{2}(y-i0^{+})^{j}}, \tag{61}\]
where we have again made the change of variable \(y\to qy\). The \(j=0\) term vanishes because the integrand has no singularities in the half of the complex plane in which the contour of integration may be closed. For the remaining terms the integral over \(y\) vanishes when \(tq<0\), so we again remove the sum over \(q\) by making the replacement \(tq\to|t|\). Performing the integral over \(y\), we find that
\[S_{2}=-e^{\sigma t^{2}}\sum_{j=0}^{N-1}p_{\rm bi}\left(N-1,j,2\pi|\tau|p(x) \right)\left[\tilde{\Gamma}(j+2,\sigma t^{2})-\frac{\sigma t^{2}}{j+1}\tilde{ \Gamma}(j+1,\sigma t^{2})\right], \tag{62}\]
where
\[\tilde{\Gamma}(j+1,z)=\frac{1}{j!}\int_{z}^{\infty}d\xi\ \xi^{j}e^{-\xi}=e^{-z} \sum_{k=0}^{j}\frac{z^{k}}{k!} \tag{63}\]
is the regularized upper incomplete gamma function. Note that \(\tilde{\Gamma}\) satisfies the recurrence relation
\[\tilde{\Gamma}(j+1,z)=\tilde{\Gamma}(j,z)+\frac{z^{j}e^{-z}}{j!}. \tag{64}\]
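Numerically, \(\tilde{\Gamma}\) is the regularized upper incomplete gamma function provided by SciPy. The short check below (illustrative only) verifies the series form in Eq. (63), the recurrence in Eq. (64), and the Poisson-CDF interpretation that will be useful in the next section.

```python
import math
import numpy as np
from scipy.special import gammaincc
from scipy.stats import poisson

j, z = 7, 4.3
lhs = gammaincc(j + 1, z)          # regularized upper incomplete gamma
series = math.exp(-z) * sum(z ** k / math.factorial(k) for k in range(j + 1))
recurrence = gammaincc(j, z) + z ** j * math.exp(-z) / math.factorial(j)
print(np.isclose(lhs, series))              # the sum in Eq. (63)
print(np.isclose(lhs, recurrence))          # the recurrence of Eq. (64)
print(np.isclose(lhs, poisson.cdf(j, z)))   # CDF of a Poisson(z) variable at j
```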
We take our results for \(S_{1}\) and \(S_{2}\) in Eqs. (60) and (62), insert them into Eq. (53), and extract the SFF using Eq. (29). Unfolding time by setting
\[u=|t|/p(x)=N|T| \tag{65}\]
and unfolding \(\lambda\) using Eq. (51), we obtain the SFF
\[K^{(\gamma)}(u/N)=1+\sum_{j=0}^{N-1}p_{\rm bi}(N-1,j,2\pi N^{- \gamma}\Lambda^{2}u)\left[\frac{e^{-N^{-\gamma}\Lambda^{2}u^{2}}}{(j+1)!}\left( N^{-\gamma}\Lambda^{2}u^{2}\right)^{j}\right.\\ \left.-\tilde{\Gamma}\left(j+2,N^{-\gamma}\Lambda^{2}u^{2}\right) +\frac{N^{-\gamma}\Lambda^{2}u^{2}}{j+1}\tilde{\Gamma}\left(j+1,N^{-\gamma} \Lambda^{2}u^{2}\right)\right]. \tag{66}\]
With this result we have generalized the Thouless-timescale result of Eq. (52) to all times larger than the inverse spectral width.
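Eq. (66) can be evaluated directly; a minimal numerical sketch (our own code) is shown below. The first term inside the brackets is computed in logarithmic form to avoid overflowing the factorial at large \(j\).

```python
import numpy as np
from scipy.stats import binom
from scipy.special import gammaincc, gammaln

def rp_sff(u, N, gamma, Lam):
    """Evaluate the RP spectral form factor of Eq. (66) at unfolded time u > 0."""
    j = np.arange(N)                                  # j = 0 .. N - 1
    q = 2 * np.pi * N ** (-gamma) * Lam ** 2 * u      # binomial parameter
    s = N ** (-gamma) * Lam ** 2 * u ** 2             # incomplete-gamma argument
    p_bi = binom.pmf(j, N - 1, q)
    first = np.exp(-s + j * np.log(s) - gammaln(j + 2))   # e^{-s} s^j / (j + 1)!
    bracket = first - gammaincc(j + 2, s) + s / (j + 1) * gammaincc(j + 1, s)
    return 1.0 + np.sum(p_bi * bracket)

# Decay from the Poissonian plateau toward the GUE ramp, cf. Fig. 4.
for u in [1.0, 10.0, 100.0, 1000.0]:
    print(u, rp_sff(u, N=1000, gamma=1.6, Lam=1.0))
```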
Fig. 4 plots Eq. (66) for \(\gamma=1.6\) and various values of the coupling parameter \(\Lambda\) and compares to numerical results obtained through exact diagonalization. There is good agreement between theory and numerics at not-too-early times (note in particular that the agreement is already good by the Thouless timescale). The discrepancy at early times that are still larger than the inverse spectral width arises because the numerical SFF is computed using only the eigenvalues within a finite window, as required by the unfolding procedure. At times earlier than the inverse width of this window the SFF probes correlations between eigenvalues at separations larger than the window width, but such pairs of eigenvalues are removed by the windowing. We observe that the SFF decays from the Poissonian result to the GUE SFF over time, with the rate of decay controlled by \(\Lambda\). As we vary \(\Lambda\) we can see the SFF moving between the behaviors we predicted schematically in Fig. 1.
Figure 4: The SFF of the RP model when \(\gamma=1.6\) for several values of the coupling parameter \(\Lambda\). The dark thin lines show the analytical results while the lighter, thicker lines show numerical results. The dotted line indicates the Heisenberg time. In the numerics the diagonal elements of \(A\) are drawn from the Gaussian distribution with 0 mean and a variance of 1. The numerics are obtained through exact diagonalization of size \(N=1000\) random matrices and are averaged over \(10^{5}\) realizations. The numerics are restricted to eigenvalues within the window [-1,1].
If we examine the Thouless timescale by setting \(u=N^{\gamma-1}v\) in Eq. (66) we can write
\[K^{(\gamma)}_{\rm Th}(v)=K^{(\gamma)}(N^{\gamma-2}v)=1+e^{-2\pi \Lambda^{2}v-N^{\gamma-2}\Lambda^{2}v^{2}}\frac{2I_{1}(\kappa v^{3/2})}{\kappa v ^{3/2}}-B^{(\gamma)}_{\rm Th}(v), \tag{67}\] \[B^{(\gamma)}_{\rm Th}(v)=e^{-2\pi\Lambda^{2}v}\sum_{j=0}^{\infty }\frac{(2\pi\Lambda^{2}v)^{j}}{j!}\left(\tilde{\Gamma}(j+2,\Lambda^{2}N^{ \gamma-2}v^{2})+\frac{\Lambda^{2}N^{\gamma-2}v^{2}}{j+1}\tilde{\Gamma}(j+1, \Lambda^{2}N^{\gamma-2}v^{2})\right), \tag{68}\]
where we have discarded some vanishing terms and used the fact that the modified Bessel function may be written as the infinite series
\[I_{1}(x)=\sum_{j=0}^{\infty}\frac{(x/2)^{2j+1}}{j!(j+1)!}. \tag{69}\]
Writing the regularized upper incomplete gamma functions in the integral form shown in Eq. (63), using Eq. (69), and making the change of variable \(\xi\to\sqrt{\xi+1}\), we recover the Thouless-timescale SFF shown in Eq. (52).
### The infinite matrix size limit
We can now determine the large \(N\) limit of the SFF at different timescales. We begin by examining the Thouless timescale where \(u\sim N^{\gamma-1}\). When \(\gamma>2\) the regularized incomplete gamma functions in Eq. (68) vanish, so
\[B^{(\gamma)}_{\rm Th}(v)=0. \tag{70}\]
Inserting this result in Eq. (67) and noting that the second term of the SFF vanishes for large \(N\), we find that the SFF is
\[K^{(\gamma)}_{\rm Th}(v)=1, \tag{71}\]
indicating localized behavior.
When \(\gamma<2\) the regularized incomplete gamma functions become complete and go to 1. Resumming then gives
\[B^{(\gamma)}_{\rm Th}(v)=1-\frac{T}{2\pi}\left(1-e^{-2\pi\Lambda^{2}v}\right), \tag{72}\]
where \(T=t/\rho(x)=N^{\gamma-2}v\) is the fully unfolded (including all factors of \(N\)) time. We again insert this in Eq. (67). We note that the second term of the SFF goes to \(e^{-2\pi\Lambda^{2}v}\), so the SFF for \(1<\gamma<2\) is
\[K^{(\gamma)}_{\rm Th}(v)=e^{-2\pi\Lambda^{2}v}+\frac{T}{2\pi}\left(1-e^{-2\pi \Lambda^{2}v}\right). \tag{73}\]
The second term is subleading, but we include it to make the crossover to GUE behavior for large \(\Lambda\) apparent. \(K^{(2)}_{\rm Th}(v)\) does not have a simple limit.
Pulling together all the results for the Thouless timescale SFF at different values of \(\gamma\) we have
\[K^{(\gamma)}_{\rm Th}(v)=\begin{cases}1,&\gamma>2\\ 1+e^{-\Lambda^{2}v(v+2\pi)}\frac{2I_{1}(\kappa v^{3/2})}{\kappa v^{3/2}}-B^{(2 )}_{\rm Th}(v),&\gamma=2\\ e^{-2\pi\Lambda^{2}v}+\frac{T}{2\pi}\left(1-e^{-2\pi\Lambda^{2}v}\right),&1< \gamma<2\end{cases}. \tag{74}\]
When \(1<\gamma\leq 2\) the behavior of the SFF is dependent on \(\Lambda\). It moves to the Poissonian result for small \(\Lambda\) and the GUE result for large \(\Lambda\). This can be seen explicitly in the above equation when \(1<\gamma<2\).
It is not surprising that the SFF does not have a simple limit when \(\gamma=2\) since, in this case, the Thouless and Heisenberg timescales are the same. By plotting this case, shown in Fig. 5, we can observe that there is still a crossover from the Poissonian to the GUE result controlled by \(\Lambda\). An interesting feature of this crossover is that, for finite \(\Lambda\), the SFF saturates at times later than the Heisenberg time. There is a trade-off: increasing the level repulsion at large energy separations decreases the level repulsion at small separations and vice versa. This trade-off is reflected in the fact that the area between the SFF and its plateau value is independent of the coupling strength for any nonzero coupling, a property demonstrated in the Appendix.
We can now examine the large \(N\) limit of the SFF at other timescales. For convenience we rewrite the SFF of Eq. (66) as
\[K^{(\gamma)}(u/N)=1+e^{-N^{-\gamma}\Lambda^{2}u^{2}}\sum_{j=0}^{N-1}\frac{(N^{ -\gamma}\Lambda^{2}u^{2})^{j}}{(j+1)!}p_{\rm bi}(N-1,j,2\pi N^{-\gamma}\Lambda ^{2}u)-B^{(\gamma)}(u), \tag{75}\]
\[B^{(\gamma)}(u)=\sum_{j=0}^{N-1}p_{\rm bi}(N-1,j,2\pi N^{-\gamma}\Lambda^{2}u )\left[\tilde{\Gamma}\left(j+2,N^{-\gamma}\Lambda^{2}u^{2}\right)-\frac{N^{- \gamma}\Lambda^{2}u^{2}}{j+1}\tilde{\Gamma}\left(j+1,N^{-\gamma}\Lambda^{2}u^ {2}\right)\right]. \tag{76}\]
The regularized upper incomplete gamma function \(\tilde{\Gamma}(j+1,x)\) is the cumulative distribution function of a Poissonian random variable with parameter \(x\), evaluated at \(j\). This means that these functions in Eq. (76) have a crossover from \(0\) to \(1\), where the location of the crossover is at approximately \(j=N^{-\gamma}\Lambda^{2}u^{2}\) and the width of the crossover is on the order of \(N^{-\gamma/2}\Lambda u\). Meanwhile the binomial probability density function in Eq. (76) has a mean and variance of approximately \(2\pi N^{1-\gamma}\Lambda^{2}u\) for large \(N\) and \(u\) not larger than order \(N\).
Figure 5: The SFF for the RP model when the Thouless and Heisenberg timescales coincide for various values of the coupling parameter \(\Lambda\). The dotted vertical line indicates the Heisenberg time.
In the following considerations we will assume that the crossover point of the upper incomplete gamma functions and the mean of the binomial distribution are much larger than the crossover width and the binomial distribution width. Fortunately, for the timescales we are considering, this is only violated when \(u\sim N\) and \(\gamma=2\). This is the case where the Thouless and Heisenberg timescales coincide, which we have already considered above.
The binomial distribution is exponentially small far from its mean, so only the terms with \(j\) close to \(2\pi N^{1-\gamma}\Lambda^{2}u\) may not vanish. If the mean of the binomial distribution is less than the crossover point of the regularized upper incomplete gamma functions, which occurs when \(u>2\pi N\) and \(\gamma<2\), we see from Eq. (63) that those functions will be exponentially small and can therefore be replaced with \(0\). If the mean of the binomial distribution is greater than the crossover point, which occurs when \(u<2\pi N\), they can be replaced with \(1\). We can also observe that when \(\gamma>2\), the regularized incomplete gamma functions will become complete and approach \(1\). With these considerations we find, after performing the sum over \(j\), that for \(u\not\gg N\)
\[B^{(\gamma)}(u)=\begin{cases}1-\frac{u}{2\pi N}\left[1-(1-2\pi N^{-\gamma} \Lambda^{2}u)^{N}\right],&u<2\pi N\text{ or }\gamma>2\\ 0,&u>2\pi N\text{ and }\gamma<2\end{cases}. \tag{77}\]
We now seek to find the large \(N\) limit of the SFF for times smaller than the Thouless time and not larger than the Heisenberg time, that is when \(u\ll t_{\rm Th}\sim N^{\gamma-1}\) and \(u\not\gg t_{\rm Heis}\sim N\). Under these conditions we find that \(e^{-N^{-\gamma}\Lambda^{2}u^{2}}\to 1\) and the mean and variance of the binomial density in Eq. (75) go to \(0\), leaving only the \(j=0\) contribution nonvanishing in the second term. We also find that \((1-2\pi N^{-\gamma}\Lambda^{2}u)^{N}\to 1\). Using these limits in Eqs. (75) and (77) we find that the SFF is
\[K^{(\gamma)}(T)=1. \tag{78}\]
This matches the result for the SFF at times much later than the Heisenberg time, so this result is valid for all times smaller than the Thouless time but larger than the inverse spectral width.
At these times the SFF indicates Poissonian statistics. This can also be determined directly from Eqs. (19) and (22). When \(\sigma t^{2}\) and \(N\tau\) are both less than order \(1\), which is the case for times smaller than the Thouless time and not larger than the Heisenberg time, we find in the \(N\to\infty\) limit that
\[\overline{C_{1}}(t)=\left\langle\sum_{k}e^{ita_{k}}\right\rangle,\ \overline{C_{2}}(t)=\left\langle\sum_{k\neq l}e^{it(a_{k}-a_{l})}\right\rangle, \tag{79}\]
indicating that the SFF is simply that of \(A\). Since the eigenvalues of \(A\) are uncorrelated, Poissonian statistics are found.
We now seek to find the large \(N\) limit of the SFF for times larger than the Thouless time but not larger than the Heisenberg time, that is when \(u\gg t_{\rm Th}\sim N^{\gamma-1}\) and \(u\not\gg t_{\rm Heis}\sim N\). We note that the maximum value of the binomial probability density is on the order of the inverse of its standard deviation. This tells us that
\[\sum_{j=0}^{N-1}\frac{(N^{-\gamma}\Lambda^{2}u^{2})^{j}}{(j+1)!}p_{\rm bi}(N-1,j,2\pi N^{-\gamma}\Lambda^{2}u)\ll\sum_{j=0}^{\infty}\frac{(N^{-\gamma} \Lambda^{2}u^{2})^{j}}{j!}=e^{N^{-\gamma}\Lambda^{2}u^{2}}. \tag{80}\]
Therefore the second term in Eq. (75) vanishes. We also note that \((1-2\pi N^{-\gamma}\Lambda^{2}u)^{N}\to 0\). With these considerations we find that the SFF is
\[K^{(\gamma)}(T)=\min\left(\frac{T}{2\pi},1\right). \tag{81}\]
This result for the SFF is equal to 1 for all times later than the Heisenberg time, so it is valid for all times larger than the Thouless time and inverse spectral width. We have recovered the GUE SFF. We know that we also get the GUE result for \(\gamma<1\), so we can conclude that this result is valid for all values of \(\gamma\).
We can visualize the results of Ref. [27] and the new results found here with a diagram for the behavior of the SFF at different timescales and values of \(\gamma\), shown in Fig. 6. The SFF is 1 before the Thouless timescale and equal to the GUE SFF after. At the Thouless timescale, when it is not greater than the Heisenberg time, there is a crossover between these two behaviors. At times larger than the Heisenberg time the SFF is simply 1.

Figure 6: The SFF of the RP model at different timescales and values of the parameter \(\gamma\). The magenta dot indicates the point where the Thouless and Heisenberg timescales coincide, where a closed form expression for the SFF is not known. The dotted lines indicate the values of \(\gamma\) at which there is a phase transition. The dashed line indicates the Heisenberg time.
## 5 The block Rosenzweig-Porter model
Motivated by our recent results for the early time ramp of the SFF for a quantum spin glass [46], we construct a minimal quantum model which can also exhibit glassy behavior. We create this model by generalizing the RP model examined in the previous section. The diagonal matrix \(A\) in Eq. (32) is redefined as a block-diagonal matrix with \(P\) independent GUE matrices of size \(M=N/P\). Each block in this matrix corresponds to a single sector and the fact that the block is a GUE matrix means that the system is ergodic within that sector. The Hamiltonian is the sum of this block-diagonal matrix and a size \(N\) GUE matrix which couples the sectors.
More formally, the Hamiltonian matrix for the BRP model has the form of Eq. (8), but now with the condition that the eigenvalues of \(A\) are those of \(P\) independent size \(M\) GUE matrices drawn from the distribution
\[p\left(A^{(i)}\right)\sim\exp\left[-\frac{M}{2}\operatorname{tr}\left(A^{(i)} \right)^{2}\right]. \tag{82}\]
This means that the spectral width of \(A\) is order 1 and, if \(M\) is large, the eigenvalues of \(A\) cannot have magnitude greater than 2 [4]. We can consider \(A\) to be block diagonal such that
\[A=\bigoplus_{i=1}^{P}A^{(i)}. \tag{83}\]
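A realization can be generated along the same lines as the RP sketch given earlier; the overall form \(H=A+\lambda N^{-\gamma/2}V\) is again our assumption, with each block normalized so that its entries have variance \(1/M\), as implied by Eq. (82).

```python
import numpy as np
from scipy.linalg import block_diag

def gue(n, scale, rng):
    """Hermitian GUE draw whose entries have variance of order scale**2."""
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return scale * (m + m.conj().T) / 2

def brp_hamiltonian(N, P, gamma, lam, rng):
    """One draw of the (assumed) BRP Hamiltonian H = A + lam * N**(-gamma/2) * V.

    A : block diagonal, P independent size-M GUE blocks with entry variance
        1/M as in Eq. (82), so each block has spectral support close to [-2, 2]
    V : size-N GUE matrix with O(1) entries that couples the sectors
    """
    M = N // P
    A = block_diag(*[gue(M, 1 / np.sqrt(M), rng) for _ in range(P)])
    V = gue(N, 1.0, rng)
    return A + lam * N ** (-gamma / 2) * V

rng = np.random.default_rng(0)
E = np.linalg.eigvalsh(brp_hamiltonian(1000, 20, 1.6, 1.0, rng))
```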
Similar to our analysis of the RP model, we can use the Fermi golden rule to determine the Thouless timescale and determine the locations of the transitions by comparing to the other cardinal timescales. We are interested in the escape rate from a particular site to sites outside that starting site's sector. The number of such sites to escape to is of order \(N\), the coupling is of order \(N^{-\gamma/2}\), and the spectral width of \(A\) is order 1. It follows that the RP results for the Thouless time, the spectral width, and the Heisenberg time, shown in Eqs. (33)-(35) respectively, are valid for the BRP model also.
The localization transition occurs when the Thouless and Heisenberg times are of the same order, which is when \(\gamma=2\). Notice that the block-diagonal structure of \(A\) leads to a localized phase in the BRP model which is distinct from that of the RP model. Within this phase the system will be confined to the subspaces corresponding to the blocks of \(A\) instead of individual states. The transition to full GUE statistics occurs when the Thouless time is of the same order as the inverse spectral width. This occurs when \(\gamma=1\).
For the BRP model there is another timescale of interest--the block Heisenberg time \(t_{\rm Heis,Bl}=t_{\rm Heis}/P\). This is the inverse mean level spacing for a single block of \(A\). In the large \(\gamma\) limit where the coupling between the blocks vanishes, \(t_{\rm Heis,Bl}\) is the time at which the SFF will reach its plateau value. Comparing the Thouless time to the block Heisenberg time, we see that there will be another transition at \(\gamma=1+d\), where \(d=\log_{N}M\), which is when these timescales are of the same order. This analysis informs our determination of which values of \(\gamma\) lead to which phase in the schematics of Fig. 2.
Two of the transitions we have discussed have an important interpretation in terms of the size of the support set of the energy eigenstates. Starting with an unperturbed energy eigenstate of \(A\), which is spread over \(M\) sites in a single block, the perturbation from the \(V\) matrix causes each of those \(M\) sites to hybridize with other sites within an energy interval on the order of \(E_{\rm Th}\sim t_{\rm Th}^{-1}\). Assuming this interval is not smaller than the mean level spacing \(\delta=w/N\), the number of sites in this interval is \(E_{\rm Th}/\delta\). This means that the size of the support set for the energy eigenstates is on the order of \(ME_{\rm Th}/\delta=N^{2-\gamma+d}\). This is of order \(M\) when \(\gamma=2\), indicating the localization transition. For larger \(\gamma\), \(E_{\rm Th}\) becomes smaller than \(\delta\) and no hybridization occurs. The size of the support set is of order \(N\) when \(\gamma=1+d\), indicating delocalization over the entire Hilbert space. It is an interesting feature of the BRP model that this transition is, in general, separate from the transition to full GUE spectral statistics at \(\gamma=1\).
We note that if the block size \(M\) is of order 1, the BRP model becomes equivalent to the RP model. In this case the full eigenstate delocalization transition occurs at \(\gamma=1\) as it does for the RP model. In terms of the SFF, the eigenvalues of \(A\) are uncorrelated on timescales larger than the block Heisenberg time. When \(M\) is of order 1 the block Heisenberg time is of the same order as the inverse spectral width. Since the eigenvalues of \(A\) are uncorrelated for the relevant timescales, the BRP model reduces to the RP model. Going forward we will consider \(M\) to be larger than order 1.
We may now turn our attention to the calculation of the SFF for the BRP model. We will once again assume in our calculations that \(\gamma>1\). With the additional information about the structure of \(A\) we can simplify Eqs. (20) and (23) for \(\overline{C_{1}}(t)\) and \(\overline{C_{2}}(t)\) respectively. Each eigenvalue of \(A\) will be correlated only with the eigenvalues from the same block so
\[\overline{C_{1}}(t)=\frac{e^{-\sigma t^{2}/2}}{\tau}\oint_{\cal C }\frac{dz}{2\pi i}e^{itz}[Z_{1}(z,\tau)]^{P}, \tag{84}\] \[Z_{1}(z,\tau)=\left\langle\frac{\det(z+\tau-A^{\prime})}{\det(z -A^{\prime})}\right\rangle, \tag{85}\]
and
\[\overline{C_{2}}(t)=-\frac{e^{-\sigma t^{2}}}{\tau^{2}}\oint\frac{ dz}{2\pi i}\oint\frac{dz^{\prime}}{2\pi i}e^{it(z-z^{\prime})}\left[1-\left( \frac{\tau}{z^{\prime}-z-\tau}\right)^{2}\right][Z_{2}(z,\tau;z^{\prime},- \tau)]^{P}, \tag{86}\] \[Z_{2}(z,\tau;z^{\prime},-\tau)=\left\langle\frac{\det(z+\tau-A^ {\prime})\det(z^{\prime}-\tau-A^{\prime})}{\det(z-A^{\prime})\det(z^{\prime} -A^{\prime})}\right\rangle, \tag{87}\]
where \(A^{\prime}\) is any of the \(A^{(i)}\). The functions \(Z_{1}\) and \(Z_{2}\) are actually generating functions. The ensemble averaged Green's function for a size \(M\) GUE matrix is
\[\left\langle G(z)\right\rangle=\left\langle\operatorname{tr}\frac{1}{z-A^{ \prime}}\right\rangle=\left[\frac{\partial}{\partial\tau}Z_{1}(z,\tau)\right]_ {\tau=0} \tag{88}\]
and the correlator of two Green's functions is
\[\left\langle G(z)G(z^{\prime})\right\rangle=\left\langle\operatorname{tr} \frac{1}{z-A^{\prime}}\operatorname{tr}\frac{1}{z^{\prime}-A^{\prime}}\right\rangle =\left[\frac{\partial^{2}}{\partial\tau\partial\tau^{\prime}}Z_{2}(z,\tau;z^ {\prime},\tau^{\prime})\right]_{\tau,\tau^{\prime}=0}. \tag{89}\]
We now consider whether the condition on the average level density which we derived in Eq. (39) for the RP model holds for the BRP model also. From Eq. (85) we find that
\[Z_{1}(z,\tau)=1+M\tau\left\langle\frac{1}{z-a^{\prime}}\right\rangle+O(\tau^{ 2}). \tag{90}\]
Using this in Eq. (84) and dividing by \(N\) while letting \(t\) be order \(1\), the same order as the inverse spectral width, we find that Eq. (39) does indeed hold for the BRP model. This means that, when \(\gamma>1\), the average level density is equal to the probability density for the eigenvalues of \(A\), which is the average level density for size \(M\) GUE matrices. For infinitely large \(M\) this is
\[p(a)=\begin{cases}\frac{1}{2\pi}\sqrt{4-a^{2}},&|a|\leq 2\\ 0,&|a|>2\end{cases}. \tag{91}\]
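This is easy to confirm numerically for a single block; the snippet below (illustrative only) compares the spectrum of one draw from Eq. (82) with the semicircle of Eq. (91).

```python
import numpy as np

rng = np.random.default_rng(0)
M = 2000
m = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))
A = (m + m.conj().T) / (2 * np.sqrt(M))     # entry variance 1/M, cf. Eq. (82)
E = np.linalg.eigvalsh(A)

hist, edges = np.histogram(E, bins=50, range=(-2, 2), density=True)
x = 0.5 * (edges[:-1] + edges[1:])
semicircle = np.sqrt(4 - x ** 2) / (2 * np.pi)
print(np.max(np.abs(hist - semicircle)))    # deviation shrinks with growing M
```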
In this section we will make the change of variables
\[(z,z^{\prime})=x\pm\frac{y}{2N}+i(q,q^{\prime})0^{+}. \tag{92}\]
This differs from the change of variables of Eq. (43) only in the signs of \(q\) and \(q^{\prime}\), which is done purely for convenience. With this change of variables we see that
\[C(t)=-\frac{e^{-\sigma t^{2}}}{(N\tau)^{2}}\sum_{qq^{\prime}}qq^{\prime}\int \frac{dx}{2\pi i}\int\frac{dy}{2\pi i}e^{iN^{-1}ty}\left[1-\left(\frac{N\tau}{ y+N\tau+i(q-q^{\prime})0^{+}}\right)^{2}\right][Z_{2}(z,\tau;z^{\prime},-\tau)]^{P}. \tag{93}\]
As we did for the calculation of the SFF for the RP model, we are neglecting the contribution from the disconnected part of the two-level correlation function under the assumption that correlations between energy levels with separations much larger than the mean level spacing will vanish.
### The two-point generating function
In this section we calculate the two-point generating function
\[Z_{2}(z,\tau;z^{\prime},-\tau)=\left\langle\frac{\det(z+\tau-A^{\prime})\det(z^{ \prime}-\tau-A^{\prime})}{\det(z-A^{\prime})\det(z^{\prime}-A^{\prime})}\right\rangle, \tag{94}\]
where \(A^{\prime}\) is a GUE matrix of size \(M\) drawn from the distribution
\[p(A^{\prime})\sim\exp\left(-\frac{M}{2}\operatorname{tr}A^{\prime 2}\right). \tag{95}\]
We begin by writing the determinants in the numerator as fermionic integrals and the determinants in the denominator as bosonic integrals. Doing so, we obtain
\[Z_{2}(z,\tau;z^{\prime},-\tau)=\pi^{-2M}\left\langle\int D(\bar{\psi},\psi)\exp\sum_{\zeta}\sum_{\alpha}\left\{\sum_{i}\bar{\psi}_{\zeta i}^{\alpha}A^{\prime}_{ii}\psi_{\zeta i}^{\alpha}+\sum_{i>j}\left(\bar{\psi}_{\zeta i}^{\alpha}\psi_{\zeta j}^{\alpha}+\bar{\psi}_{\zeta j}^{\alpha}\psi_{\zeta i}^{\alpha}\right)\Re(A^{\prime}_{ij})\right.\right.\\ \left.\left.+i\sum_{i>j}\left(\bar{\psi}_{\zeta i}^{\alpha}\psi_{\zeta j}^{\alpha}-\bar{\psi}_{\zeta j}^{\alpha}\psi_{\zeta i}^{\alpha}\right)\Im(A^{\prime}_{ij})-\left[\left(z^{(\alpha)}-(-1)^{\alpha}\tau\right)\delta_{\zeta F}+z^{(\alpha)}\delta_{\zeta B}\right]\sum_{i}\bar{\psi}_{\zeta i}^{\alpha}\psi_{\zeta i}^{\alpha}\right\}\right\rangle, \tag{96}\]
where \(\alpha=1,2\), \((z^{(1)},z^{(2)})=(z,z^{\prime})\), and \(\zeta=B,F\), indicating bosonic and fermionic fields respectively.
Performing the GUE average over \(A^{\prime}\), we obtain the following mean-field sigma model:
\[\begin{split} Z_{2}(z,\tau;z^{\prime},-\tau)=\pi^{-2M}\int D(\bar{\psi},\psi)\exp\left\{\frac{1}{2M}\operatorname{str}\tilde{\Omega}^{2}\right.\\ \left.\qquad\qquad\qquad\qquad\qquad\qquad-\sum_{\zeta}\sum_{\alpha}\sum_{i}\left[\left(z^{(\alpha)}-(-1)^{\alpha}\tau\right)\delta_{\zeta F}+z^{(\alpha)}\delta_{\zeta B}\right]\bar{\psi}_{\zeta i}^{\alpha}\psi_{\zeta i}^{\alpha}\right\}\!,\end{split} \tag{97}\] \[\tilde{\Omega}=\begin{pmatrix}\tilde{\Omega}_{BB}&\tilde{\Omega}_{BF}\\ \tilde{\Omega}_{FB}&\tilde{\Omega}_{FF}\end{pmatrix},\;\tilde{\Omega}_{\zeta\zeta^{\prime}}=\begin{pmatrix}\sum_{i}\psi_{\zeta i}^{1}\bar{\psi}_{\zeta^{\prime}i}^{1}&\sum_{i}\psi_{\zeta i}^{1}\bar{\psi}_{\zeta^{\prime}i}^{2}\\ \sum_{i}\psi_{\zeta i}^{2}\bar{\psi}_{\zeta^{\prime}i}^{1}&\sum_{i}\psi_{\zeta i}^{2}\bar{\psi}_{\zeta^{\prime}i}^{2}\end{pmatrix}, \tag{98}\]
where \(\tilde{\Omega}\) is a \(4\times 4\) matrix and \(\operatorname{str}\tilde{\Omega}=\operatorname{tr}\tilde{\Omega}_{BB}- \operatorname{tr}\tilde{\Omega}_{FF}\) is the supertrace. We perform a Hubbard-Stratonovich transformation to remove the quartic terms in the action by inserting the unity
\[1=\frac{1}{4\pi^{4}}\int d\Omega\exp\left(-\frac{1}{2M} \operatorname{str}\tilde{\Omega}^{2}-\bar{\Psi}\Omega\Psi-\frac{M}{2} \operatorname{str}\Omega^{2}\right), \tag{99}\] \[\Omega=\begin{pmatrix}\Omega_{BB}&\Omega_{BF}\\ \Omega_{FB}&i\Omega_{FF}\end{pmatrix},\;\Omega_{\zeta\zeta^{\prime}}=\begin{pmatrix} \Omega_{\zeta\zeta^{\prime}}^{11}&\Omega_{\zeta\zeta^{\prime}}^{12}\\ \Omega_{\zeta\zeta^{\prime}}^{21}&\Omega_{\zeta\zeta^{\prime}}^{22}\end{pmatrix},\;\Psi=\begin{pmatrix}\psi_{B}^{1}\\ \psi_{B}^{2}\\ \psi_{F}^{1}\\ \psi_{F}^{2}\end{pmatrix}, \tag{100}\]
where \(d\Omega=d\Omega_{BB}d\Omega_{FF}d(\Omega_{BF},\Omega_{FB})\). The field \(\Omega_{\zeta\zeta^{\prime}}^{\alpha\alpha^{\prime}}\) is bosonic when \(\zeta=\zeta^{\prime}\) and fermionic when \(\zeta\neq\zeta^{\prime}\). Inserting this unity and integrating out the \(\Psi\) field, we find
\[Z_{2}(z,\tau;z^{\prime},-\tau)=\frac{1}{4\pi^{4}}\int d\Omega[ \operatorname{sdet}(\Omega+\mathcal{E})]^{-M}e^{-M\operatorname{str}\Omega^{2} /2}, \tag{101}\] \[\mathcal{E}=\operatorname{diag}(z^{(1)},z^{(2)},z^{(1)}+\tau,z^{( 2)}-\tau). \tag{102}\]
where \(\mbox{sdet}\,\Omega=\mbox{det}(\Omega_{BB}-\Omega_{BF}\Omega_{FF}^{-1}\Omega_{ FB})(\mbox{det}\,\Omega_{FF})^{-1}=\mbox{det}\,\Omega_{BB}[\mbox{det}(\Omega_{FF}- \Omega_{FB}\Omega_{BB}^{-1}\Omega_{BF})]^{-1}\) is the superdeterminant.
We now make the change of variable \(\Omega\to U\omega U^{\dagger}-{\cal E}\), where \(U\) is an element of the superunitary group \(U(2|2)\) and \(\omega=\mbox{diag}(\omega_{B}^{1},\omega_{B}^{2},i\omega_{F}^{1},i\omega_{F}^{2})\). The Jacobian of this change of variables is the supersymmetric generalization of the Vandermonde determinant [65], \(\Delta_{s}(\omega)=\mbox{det}(1/(\omega_{B,i}-i\omega_{F,j}))\), where \(\omega_{B,i}\) are the Bose-Bose eigenvalues and \(i\omega_{F,j}\) are the Fermi-Fermi eigenvalues. We can now write
\[Z_{2}(z,\tau;z^{\prime},-\tau)=\frac{1}{4\pi^{4}}\int d(\omega-{\cal E})\Delta _{s}^{2}(\omega)(\mbox{sdet}\,\omega)^{-M}\int dUe^{-M(U\omega U^{\dagger}-{ \cal E})^{2}/2}, \tag{103}\]
where \(dU\) is the Haar measure of the superunitary group \(U(2|2)\).
We can now use the supersymmetric generalization of the Itzykson-Zuber integral [65]:
\[\int dUe^{-M\,\mbox{str}(U\omega U^{\dagger}-\beta)^{2}}\sim M^{k}[\Delta_{s} (\omega)\Delta_{s}(\eta)]^{-1}e^{-M\,\mbox{str}(\omega-\eta)^{2}}, \tag{104}\]
where \(dU\) is the Haar measure of the superunitary group \(U(k|k)\), \(\omega\) is diagonal, and \(\eta\) is the diagonal matrix similar to \(\beta\). Using this integral identity gives us
\[Z_{2}(z,\tau;z^{\prime},-\tau)\sim\left(\frac{M}{2\pi}\right)^{ 2}\int d(\omega-{\cal E})\frac{\Delta_{s}(\omega)}{\Delta_{s}({\cal E})}(\mbox {sdet}\,\omega)^{-M}e^{-M\,\mbox{str}(\omega-{\cal E})^{2}/2}=\frac{\mbox{det} \,R}{\Delta_{s}({\cal E})}, \tag{105}\] \[R^{\alpha\alpha^{\prime}}=\frac{M}{2\pi}\int_{-\infty}^{\infty} \int_{-\infty}^{\infty}\frac{d\omega_{B}d\omega_{F}}{\omega_{B}+iq^{(\alpha)} 0^{+}-i\omega_{F}}\exp\left[-Mf\left(\omega_{B}+iq^{(\alpha)}0^{+},z^{(\alpha) }\right)\right.\] (106) \[\left.+Mf\left(i\omega_{F},z^{(\alpha^{\prime})}-(-1)^{\alpha^{ \prime}}\tau\right)\right],\] \[f(\omega,c)=\frac{1}{2}(\omega-c)^{2}+\ln\omega, \tag{107}\]
with \((q^{(1)},q^{(2)})=(q,q^{\prime})\).
The saddle points of the integrand in Eq. (106) are at
\[\omega_{B}=\omega_{B\pm}^{\alpha}=\frac{1}{2}\left(z^{(\alpha)} \pm i\sqrt{4-\left(z^{(\alpha)}\right)^{2}}\right), \tag{108}\] \[i\omega_{F}=i\omega_{F\pm}^{\alpha^{\prime}}=\frac{1}{2}\left(z^ {(\alpha^{\prime})}-(-1)^{\alpha^{\prime}}\tau\pm i\sqrt{4-\left[z^{(\alpha^ {\prime})}-(-1)^{\alpha^{\prime}}\tau\right]^{2}}\right). \tag{109}\]
Since \(M\) is large, the eigenvalues of \(A^{\prime}\) lie within the interval \((-2,2)\); we only need to concern ourselves with the case when \(|\Re(z)|<2\). This means that the terms under the square roots in the equations above will always have a positive real part. Note that, due to the presence of a singularity at \(\omega_{B}=0\), only one saddle point value is reachable through deformation of the contour of integration for \(\omega_{B}\). This is \(\omega_{B+}^{\alpha}\) when \(q^{(\alpha)}=1\) and \(\omega_{B-}^{\alpha}\) when \(q^{(\alpha)}=-1\). We need to consider both possible saddle point values for \(i\omega_{F}\).
The result of the saddle point approximation is
\[R^{\alpha\alpha^{\prime}}=\sum_{\beta}L_{q^{(\alpha)}\beta}^{ \alpha\alpha^{\prime}}\exp\left(MF_{q^{(\alpha)}\beta}^{\alpha\alpha^{\prime}} \right), \tag{110}\] \[F_{\beta\beta^{\prime}}^{\alpha\alpha^{\prime}}=-f\left(\omega_ {B\beta}^{\alpha},z^{(\alpha)}\right)+f\left(i\omega_{F\beta^{\prime}}^{ \alpha^{\prime}},z^{(\alpha^{\prime})}-(-1)^{\alpha^{\prime}}\tau\right),\] (111) \[L_{\beta\beta^{\prime}}^{\alpha\alpha^{\prime}}=\left(\omega_{B \beta}^{\alpha}-i\omega_{F\beta^{\prime}}^{\alpha^{\prime}}\right)^{-1}\left\{ \left[1-\left(\omega_{B\beta}^{\alpha}\right)^{-2}\right]\left[1-\left(i \omega_{F\beta^{\prime}}^{\alpha^{\prime}}\right)^{-2}\right]\right\}^{-1/2}, \tag{112}\]
where \(\beta,\beta^{\prime}=\pm\). Expanding about infinite \(N\) and \(\tau=0\), we find that the exponents of the exponential terms in Eq. (110) are, apart from the factor of \(M\),
\[F^{\alpha\alpha^{\prime}}_{\pm\pm}=\frac{(-1)^{\alpha^{\prime}+1} }{2N}\left(x\mp 2\pi ip(x)\right)\left\{(1-\delta^{\alpha\alpha^{\prime}})[y+i(q-q ^{\prime})0^{+}]+N\tau\right\} \tag{113}\] \[+O(\tau^{2})+O(N^{-1}\tau)+O(N^{-2}),\] \[F^{\alpha\alpha^{\prime}}_{\pm\mp}=\pm\left[i\pi xp(x)+\ln\left( \frac{x-2\pi ip(x)}{x+2\pi ip(x)}\right)\right]\] \[+\frac{(-1)^{\alpha^{\prime}+1}}{2N}\left\{\left[(1-\delta^{ \alpha\alpha^{\prime}})x\pm\delta^{\alpha\alpha^{\prime}}2\pi ip(x)\right]y+ \left(x\pm 2\pi ip(x)\right)N\tau\right\}\] (114) \[+O(\tau^{2})+O(N^{-1}\tau)+O(N^{-2}).\]
The prefactors are
\[L^{\alpha\alpha^{\prime}}_{\pm\pm}=\frac{(-1)^{\alpha^{\prime}} N}{(1-\delta^{\alpha\alpha^{\prime}})[y+i(q-q^{\prime})0^{+}]+N\tau}+O(\tau)+O(N ^{-1}), \tag{115}\] \[L^{\alpha\alpha^{\prime}}_{\pm\mp}=\mp i(2\pi p(x))^{-2}+O(\tau) +O(N^{-1}). \tag{116}\]
Writing these quantities in this way is useful for times not larger than the Heisenberg time, where \(|\tau|\ll 1\). We can restrict ourselves to these timescales since the SFF is already known to be 1 at later times.
Recall that we are interested in the \(P^{\rm th}\) power of the two-point generating function to use in Eq. (93). If \(|\tau|\gg N^{-1}\), indicating times larger than the Thouless time, the \(P^{\rm th}\) power of the two-point generating function will be dominated by the saddle points that maximize the real part of \(F^{\alpha\alpha^{\prime}}_{\beta\beta^{\prime}}\), excluding the unphysical combinations of saddle points that lead to the two-point generating function having a complex phase that grows with \(N\). Considering this and using Eqs. (113)-(114), we find that the dominant Fermi-Fermi saddle point in the calculation of \(R^{\alpha\alpha^{\prime}}\) is \(i\omega^{\alpha^{\prime}}_{F\beta^{*}(\alpha^{\prime})}\), where
\[\beta^{*}(\alpha^{\prime})=\delta_{qq^{\prime}}q+(1-\delta_{qq^{\prime}})(-1) ^{\alpha^{\prime}+1}. \tag{117}\]
Our result for the \(P^{\rm th}\) power of the two-point generating function at these timescales is then
\[[Z_{2}(z,\tau;z^{\prime},-\tau)]^{P}=[\Delta_{s}(\mathcal{E})]^{-P}\left[ \det\left(L^{\alpha\alpha^{\prime}}_{q^{(\alpha)}\beta^{*}(\alpha^{\prime})} \right)\right]^{P}\exp\left[N\left(F^{11}_{q\beta^{*}(1)}+F^{22}_{q^{\prime} \beta^{*}(2)}\right)\right], \tag{118}\]
where we have used the fact that \(F^{11}_{\beta_{1}\beta_{2}}+F^{22}_{\beta_{3}\beta_{4}}=F^{12}_{\beta_{1}\beta _{4}}+F^{21}_{\beta_{3}\beta_{2}}\) to extract the exponential term from the determinant. We can verify that the normalization is correct by setting \(\tau=0\) and finding that the above expression is equal to 1, as we know it must be from Eq. (94).
If \(|\tau|\not\gg N^{-1}\), indicating times not larger than the Thouless time, the real parts of the exponents of \(R^{\alpha\alpha^{\prime}}\) will not be much larger than \(P^{-1}\), so they will not contribute to the determination of the dominant saddle points for the \(P^{\rm th}\) power of the two-point generating function. The saddle points that dominate are instead determined by the prefactors \(L^{\alpha\alpha^{\prime}}_{\beta\beta^{\prime}}\). We see from Eqs. (115)-(116) that, at these timescales, the prefactor for the \(\beta=\beta^{\prime}\) case is always greater than that for the \(\beta=-\beta^{\prime}\) case by at least a factor of \(N\), so the non-dominant contributions can be neglected. This means that the dominant Fermi-Fermi saddle point in the calculation of \(R^{\alpha\alpha^{\prime}}\) is simply \(i\omega^{\alpha^{\prime}}_{Fq^{(\alpha)}}\). With these considerations we find that the \(P^{\rm th}\) power of the two-point generating function at these
timescales is
\[\left[Z_{2}(z,\tau;z^{\prime},-\tau)\right]^{P} =\left[\Delta_{s}(\mathcal{E})\right]^{-P}\left\{\det\left[L_{q^{( \alpha)}q^{(\alpha)}}^{\alpha\alpha^{\prime}}\exp\left(MF_{q^{(\alpha)}q^{( \alpha)}}^{\alpha\alpha^{\prime}}\right)\right]\right\}^{P}\] \[=e^{-i\pi N\tau(q-q^{\prime})p(x)}\left[1-\left(\frac{N\tau}{y+i( q-q^{\prime})0^{+}+N\tau}\right)^{2}\right]^{-P}\] \[\qquad\qquad\qquad\times\left[1-\left(\frac{N\tau}{y+i(q-q^{ \prime})0^{+}+N\tau}\right)^{2}e^{i\pi(q-q^{\prime})p(x)(y+2N\tau)/P}\right]^ {P}, \tag{119}\]
where we have neglected the vanishing terms in the second equality. Once again we can verify that the normalization is correct by noting that the above expression becomes \(1\) when \(\tau=0\).
### The spectral form factor
We are now in a position to calculate the SFF. We begin with the case where time is not larger than the Thouless time. Inserting Eq. (119) into the correlation function of Eq. (93), then extracting the SFF using Eq. (29), we obtain
\[K^{(\gamma)}(T)=1+\frac{e^{-\sigma t^{2}}}{2\pi i(N\tau)^{2}p(x) }\sum_{q}\Bigg{\{}e^{-2\pi iqN\tau p(x)}\int\frac{dy}{2\pi i}e^{iN^{-1}ty} \left[1-\left(\frac{N\tau}{y+N\tau+iq0^{+}}\right)^{2}\right]^{1-P}\\ \times\left[1-\left(\frac{N\tau}{y+N\tau+iq0^{+}}\right)^{2}e^{2 \pi iqp(x)(y+2N\tau)/P}\right]^{P}-\int\frac{dy}{2\pi i}e^{iN^{-1}ty}\left[1- \left(\frac{N\tau}{y+N\tau}\right)^{2}\right]\Bigg{\}}. \tag{120}\]
The first term in the braces collects the \(q=-q^{\prime}\) contributions, while the second term collects the \(q=q^{\prime}\) contributions. The integral over \(y\) in the second term never has singularities in the half of the complex plane in which the contour of integration may be closed, so that term vanishes.
Making the changes of variable \(q\to-q\) and then \(y\to qy\) and expanding with the binomial theorem, we obtain
\[K^{(\gamma)}(T)=1+\frac{e^{-\sigma t^{2}}}{2\pi i(N\tau)^{2}p(x)}\sum_{q}e^{2\pi iqN\tau p(x)}\int\frac{dy}{2\pi i}e^{iN^{-1}tqy}\left[\frac{(y+qN\tau)^{2}}{(y-i0^{+})(y+2qN\tau)}\right]^{P-1}\\ \times\sum_{j=0}^{P}\binom{P}{j}\left(\frac{N|\tau|}{y+qN\tau}\right)^{2j}e^{-2\pi ip(x)(y+2qN\tau)j/P}. \tag{121}\]
We observe that the integrand contains no singularities in the half of the complex plane in which the contour integral may be closed when \(tq<0\). So we can remove the sum over \(q\) by making the substitution \(tq\to|t|\). This gives us
\[K^{(\gamma)}(T)=1+\frac{e^{-\sigma t^{2}-2\pi N|\tau|p(x)}}{2\pi i (N\tau)^{2}p(x)}\sum_{j=0}^{P}\binom{P}{j}\int\frac{dy}{2\pi i}D_{j}(y), \tag{122}\] \[D_{j}(y)=(N|\tau|)^{2j}\frac{(y+iN|\tau|)^{2(P-1-j)}}{(y-i0^{+}) ^{P-1}(y+2iN|\tau|)^{P-1}}\exp\left[iN^{-1}|t|y-2\pi ip(x)(y+2iN|\tau|)j/P \right]. \tag{123}\]
By examining the \(y\)-dependent terms in the exponent of the exponential term of Eq. (123), we see that the contour of integration can be closed in the lower half of the complex plane when
\(|t|/p(x)<2\pi Mj\). In this case there may be contributions from residues located at \(y=-iN|\tau|\) and \(y=-2iN|\tau|\). When \(|t|/p(x)>2\pi Mj\) the contour of integration may be closed in the upper half of the complex plane and there may be a contribution from the residue at \(y=i0^{+}\). Fig. 7 shows the contours of integration for each situation and the poles which they enclose. We find that
\[K^{(\gamma)}(T)=1+\frac{e^{-\sigma t^{2}-2\pi N|\tau|p(x)}}{2\pi i(N\tau)^{2}p (x)}\sum_{j=0}^{P}\binom{P}{j}\begin{cases}-\underset{y=-iN|\tau|}{\mathrm{Res }}D_{j}(y)-\underset{y=-2iN|\tau|}{\mathrm{Res}}D_{j}(y),&|t|/p(x)<2\pi Mj\\ \underset{y=i0^{+}}{\mathrm{Res}}D_{j}(y),&|t|/p(x)>2\pi Mj\end{cases}. \tag{124}\]
We first find the residue at \(y=-iN|\tau|\). Observing that it is nonzero only when \(j=P\), we find that
\[\underset{y=-iN|\tau|}{\mathrm{Res}}D_{P}(y)=i(N|\tau|)^{2}(N^{-1}|t|-2\pi p( x))e^{\sigma t^{2}+2\pi N|\tau|p(x)}. \tag{125}\]
The other two residues are
\[\underset{y=-2iN|\tau|}{\mathrm{Res}}D_{j}(y)=2i(-1)^{j+1}N|\tau|e^{2\sigma t ^{2}}\sum_{k=0}^{P-2}J(j,k)\left(\sigma t^{2}-2\pi jM|\tau|p(x)\right)^{k}, \tag{126}\]
\[\underset{y=i0^{+}}{\mathrm{Res}}D_{j}(y)=2i(-1)^{j}N|\tau|e^{4\pi Mj|\tau|p( x)}\sum_{k=0}^{P-2}J(j,k)\left(2\pi jM|\tau|p(x)-\sigma t^{2}\right)^{k}, \tag{127}\]
where
\[J(j,k)=\frac{2^{k-2(P-1)}}{k!}\sum_{k^{\prime}=0}^{P-2-k}2^{k^{\prime}}\binom{ 2(P-1-j)}{k^{\prime}}\binom{1-P}{P-2-k-k^{\prime}}. \tag{128}\]
Using Eqs. (125)-(127) in Eq. (124) and unfolding by setting
\[u=|t|/p(x)=N|T|, \tag{129}\] \[\Lambda=\lambda p(x), \tag{130}\]
we find that the SFF is

\[K^{(\gamma)}(T)=\min\left(\frac{T}{2\pi},1\right)-\frac{e^{-2\pi\Lambda^{2}N^{1-\gamma}u}}{\pi\Lambda^{2}N^{1-\gamma}u}\sum_{j=0}^{P}(-1)^{j}\binom{P}{j}Q_{j}^{(\gamma)}(u), \tag{131}\] \[Q_{j}^{(\gamma)}(u)=\sum_{k=0}^{\infty}J(j,k)\left(-\Lambda^{2}N^{-\gamma}u|u-2\pi Mj|\right)^{k}\begin{cases}e^{\Lambda^{2}N^{-\gamma}u^{2}},&u\leq 2\pi Mj\\ e^{\Lambda^{2}N^{-\gamma}u(4\pi Mj-u)},&u\geq 2\pi Mj\end{cases}. \tag{132}\]

Note that we have raised the upper limit for the summation index \(k\) from \(P-2\) to \(\infty\). We can do this because \(J(j,k)=0\) for \(k>P-2\). Doing so removes the need for separate treatments of the cases where \(P<3\). Although we have derived this result for times not larger than the Thouless time, it turns out to be valid at all relevant timescales, as we will now show.

Figure 7: The poles and contours of integration for the integrals over \(y\) in Eqs. (122) and (135). If \(|t|/p(x)\) is less (greater) than \(2\pi Mj\) the contour is closed in the lower (upper) half of the complex plane.
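For completeness, Eqs. (131)-(132), together with the coefficient of Eq. (128), can be evaluated directly; a sketch (our own code) follows. A generalized binomial coefficient handles the negative upper arguments that appear in Eq. (128), and \(u\) is kept moderate so that the individually large exponential factors do not overflow before recombining with the decaying prefactor.

```python
import math
import numpy as np

def gbinom(a, k):
    """Generalized binomial coefficient a (a-1) ... (a-k+1) / k!, integer k >= 0."""
    out = 1.0
    for i in range(k):
        out *= (a - i) / (i + 1)
    return out

def J(j, k, P):
    """The combinatorial coefficient of Eq. (128); vanishes for k > P - 2."""
    if k > P - 2:
        return 0.0
    tot = sum(2 ** kp * gbinom(2 * (P - 1 - j), kp)
              * gbinom(1 - P, P - 2 - k - kp) for kp in range(P - 1 - k))
    return 2.0 ** (k - 2 * (P - 1)) / math.factorial(k) * tot

def brp_sff(u, N, P, gamma, Lam):
    """Evaluate the BRP spectral form factor of Eqs. (131)-(132)."""
    M = N // P
    a = Lam ** 2 * N ** (-gamma)
    total = 0.0
    for j in range(P + 1):
        if u <= 2 * np.pi * M * j:
            expo = math.exp(a * u ** 2)
        else:
            expo = math.exp(a * u * (4 * np.pi * M * j - u))
        Qj = expo * sum(J(j, k, P) * (-a * u * abs(u - 2 * np.pi * M * j)) ** k
                        for k in range(P - 1))
        total += (-1) ** j * math.comb(P, j) * Qj
    ramp = min(u / (2 * np.pi * N), 1.0)    # T / (2 pi) with T = u / N
    return ramp - math.exp(-2 * np.pi * a * N * u) / (np.pi * a * N * u) * total

for u in [1.0, 10.0, 100.0, 1000.0]:
    print(u, brp_sff(u, N=1000, P=20, gamma=1.6, Lam=1.0))
```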
We now examine the SFF at times larger than the Thouless time, where \(|\tau|\gg N^{-1}\). We obtain the SFF by inserting Eq. (118) into the correlation function of Eq. (93) then extracting the SFF using Eq. (29). This yields
\[K^{(\gamma)}(T)=1-\frac{e^{-\sigma t^{2}}}{2\pi i(N\tau)^{2}p(x)}\sum_{qq^{\prime}}qq^{\prime}\int\frac{dy}{2\pi i}e^{iN^{-1}ty}\left[1-\left(\frac{N\tau}{y+N\tau+i(q-q^{\prime})0^{+}}\right)^{2}\right]\\ \times[\Delta_{s}(\mathcal{E})]^{-P}\left[\det\left(L_{q^{(\alpha)}\beta^{*}(\alpha^{\prime})}^{\alpha\alpha^{\prime}}\right)\right]^{P}\exp\left[N\left(F_{q\beta^{*}(1)}^{11}+F_{q^{\prime}\beta^{*}(2)}^{22}\right)\right]. \tag{133}\]
We note from Eq. (117) that, for the \(q=q^{\prime}\) terms, the dominant saddle points are the same as for the case examined above for earlier times. This means that, although there may be additional nonvanishing terms in the exponent of the exponential term of the integrand, they will be subleading and the residue analysis is unchanged. There are no singularities in the half of the complex plane in which the contour of integration may be closed. As they did for earlier times, the \(q=q^{\prime}\) terms of the SFF vanish.
We next consider the \(q=-q^{\prime}=1\) term. From Eqs. (113) and (115)-(116) we find that \((L_{++}^{11}L_{--}^{22}-L_{+-}^{12}L_{-+}^{21})^{P}\) and \(N(F_{++}^{11}+F_{--}^{22})\) have vanishing \(y\)-dependence. If we consider only positive times, which we can do since the SFF is symmetric in time, we find that the integrand once again has no singularities in the half of the complex plane in which the contour of integration may be closed. For this reason, the \(q=-q^{\prime}=1\) contribution to the SFF also vanishes.
Finally we consider the only nonvanishing term, for which \(q=-q^{\prime}=-1\). From Eqs. (115)-(116) we find that
\[L_{-+}^{11}L_{+-}^{22}-L_{--}^{12}L_{++}^{21}=X+\left(\frac{N}{y-i0^{+}+N\tau} \right)^{2}, \tag{134}\]
where \(X\) is a quantity of order 1 with its largest \(y\)-dependent terms being of order \(N^{-1}\), so they can be neglected. Expanding with the binomial theorem, we find that the SFF is
\[K^{(\gamma)}(T)=1+\frac{e^{-\sigma t^{2}}}{2\pi ip(x)(N\tau)^{2}} \sum_{j=0}^{P}\binom{P}{j}\left(|\tau|^{2}X\right)^{P-j}\int\frac{dy}{2\pi i}D _{j}^{\prime}(y), \tag{135}\] \[D_{j}^{\prime}(y)=(N|\tau|)^{2j}\frac{(y+iN|\tau|)^{2(P-1-j)}}{( y-i0^{+})^{P-1}(y+2iN|\tau|)^{P-1}}\exp\left[iN^{-1}|t|y+N(F_{-+}^{11}+F_{+-}^{22 })\right]. \tag{136}\]
From Eq. (114) we learn that that the largest \(y\)-dependent term in \(N(F_{-+}^{11}+F_{+-}^{22})\) is \(-2\pi ip(x)y\). This means that the contour of integration can be closed in the lower half of the complex plane when \(|t|/p(x)<2\pi N\). In this case there may be contributions from residues located at \(y=-iN|\tau|\)
and \(y=-2iN|\tau|\). When \(|t|/p(x)>2\pi N\) the contour of integration is instead closed in the upper half of the complex plane and there may be a contribution from the residue at \(y=i0^{+}\). So
\[K^{(\gamma)}(T)=1+\frac{e^{-\sigma t^{2}}}{2\pi ip(x)(N\tau)^{2}} \sum_{j=0}^{P}\begin{pmatrix}P\\ j\end{pmatrix}\left(|\tau|^{2}X\right)^{P-j}\\ \times\begin{cases}-\underset{y=-iN|\tau|}{\mathrm{Res}}D_{j}^{ \prime}(y)-\underset{y=-2iN|\tau|}{\mathrm{Res}}D_{j}^{\prime}(y),&|t|/p(x)<2 \pi N\\ \underset{y=i0^{+}}{\mathrm{Res}}D_{j}^{\prime}(y),&|t|/p(x)>2\pi N \end{cases}. \tag{137}\]
It turns out that only the residue located at \(y=-iN|\tau|\) contributes for large \(N\). The exponent of the exponential factor of the terms coming from the residue at \(y=-2iN|\tau|\) is
\[-\sigma t^{2}+\left[iN^{-1}|t|(y-i0^{+})+N(F_{-+}^{11}+F_{+-}^{22})\right]_{y =-2iN|\tau|}=\sigma t^{2}-2\pi N|\tau|p(x)+O(N\tau^{2}), \tag{138}\]
which becomes infinitely negative when \(|t|/p(x)<2\pi N\). Similarly, the exponent of the exponential factor of the terms coming from the residue at \(y=i0^{+}\) is
\[-\sigma t^{2}+\left[iN^{-1}|t|(y-i0^{+})+N(F_{-+}^{11}+F_{+-}^{22})\right]_{y =i0^{+}}=-\sigma t^{2}+2\pi N|\tau|p(x)+O(N\tau^{2}), \tag{139}\]
which becomes infinitely negative when \(|t|/p(x)>2\pi N\). The remaining residue at \(y=-iN|\tau|\) is only nonzero when \(j=P\). This residue is, to leading order,
\[\underset{y=-iN|\tau|}{\mathrm{Res}}D_{P}^{\prime}(y)=i(N|\tau|)^{2}(N^{-1}|t |-2\pi p(x))e^{\sigma t^{2}}. \tag{140}\]
This leads to the same contribution to the SFF as the residue at the same location in the case examined above for earlier times. With this result we find that the SFF for times larger than the Thouless time is
\[K^{(\gamma)}(T)=\min\left(\frac{T}{2\pi},1\right). \tag{141}\]
Noting that \(u\gg N^{\gamma-1}\) at these timescales, we see that this is consistent with the SFF we found for earlier times in Eq. (131). Therefore Eq. (131) holds for all relevant timescales.
In Fig. 8 we compare this result for the SFF to numerical results obtained through exact diagonalization for \(\gamma=1.6\), \(P=20\) blocks, and various values of the coupling parameter \(\Lambda\). When numerically calculating the SFF with the full spectrum there is good agreement between theory and numerics at early times, but the agreement degrades at later times. We can trade this early-time agreement for later-time agreement by using only the eigenvalues within a window smaller than the spectral width. We can combine these results to get good agreement at all times of interest by determining the time at which the two approaches converge, then using the full-spectrum result before that time and the windowed results afterward. We observe that the SFF decays from the fully uncoupled result to the GUE SFF over time, with the rate of decay controlled by \(\Lambda\). As we vary \(\Lambda\) we can see the SFF moving between the behaviors we predicted schematically in Fig. 2.
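As a concrete companion to Fig. 8, the sketch below outlines the Monte Carlo procedure just described. It assumes the BRP Hamiltonian takes the Rosenzweig-Porter-type form \(H=A+\Lambda N^{-\gamma/2}V\), with \(A\) block diagonal (\(P\) independent GUE blocks of size \(M=N/P\)) and \(V\) a full GUE matrix; this construction and its normalization are our assumptions, and the paper's conventions may differ by \(O(1)\) factors. The times used here are raw rather than unfolded, so a quantitative comparison with Eq. (131) still requires unfolding by the mean level spacing.

```python
import numpy as np

def gue(n, rng):
    # n x n GUE sample, normalized so the spectrum has O(1) width.
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / (2 * np.sqrt(n))

def brp_sff(N, P, gamma, lam, times, window=None, realizations=100, seed=0):
    # Monte Carlo SFF of the assumed BRP form H = A + lam * N**(-gamma/2) * V.
    rng = np.random.default_rng(seed)
    M = N // P
    sff = np.zeros(len(times))
    for _ in range(realizations):
        A = np.zeros((N, N), dtype=complex)
        for b in range(P):
            s = slice(b * M, (b + 1) * M)
            A[s, s] = gue(M, rng)          # independent GUE blocks
        E = np.linalg.eigvalsh(A + lam * N**(-gamma / 2) * gue(N, rng))
        if window is not None:             # spectral windowing, as in Fig. 8(b)
            E = E[(E > window[0]) & (E < window[1])]
        phases = np.exp(-1j * np.outer(times, E)).sum(axis=1)
        sff += np.abs(phases)**2 / len(E)  # normalize so the plateau is ~1
    return sff / realizations

# Example, much smaller than the paper's N = 1000 and 1e5 realizations:
t = np.linspace(1.0, 500.0, 200)
K = brp_sff(N=400, P=20, gamma=1.6, lam=1.0, times=t, window=(-0.4, 0.4))
```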
Somewhat similar SFFs, with a short-time peak followed by a crossover to RMT behavior, have been found for random quantum circuits [66, 67, 68] and the mass-deformed Sachdev-Ye-Kitaev model [69]. However, the enhanced ramps are faster than linear in time in these cases because these models transition from many-body-localized to ergodic behavior, in contrast to the glassy-to-ergodic transition we see in the BRP model.
### The infinite matrix size limit
We now seek to find the behavior of the SFF in the large \(N\) limit for all timescales of interest. Above we have already considered the SFF at times greater than the Thouless time and argued that several terms vanish at large \(N\), so Eq. (141) is the large \(N\) limit for the SFF at those timescales.
Figure 8: The SFF of the BRP model when \(\gamma=1.6\) and there are \(P=20\) blocks for several values of the coupling parameter \(\Lambda\). The dark thin lines show the analytical results while the lighter, thicker lines show numerical results. The left (right) dotted line indicates the block (full) Heisenberg time. The numerics are obtained through exact diagonalization of size \(N=1000\) random matrices and are averaged over \(10^{5}\) realizations. In (a) the numerics are calculated with the full spectrum while in (b) only the eigenvalues within the window [-0.4, 0.4] are used. In (c) the results of (a) are used at times before the numerical curves for a particular \(\Lambda\) converge and the numerical results of (b) are used after.
Going forward we will consider earlier times. Recall that we have also limited ourselves to times not larger than the Heisenberg time. We will also assume for now that we are not at the Thouless and Heisenberg timescales simultaneously, which occurs when \(\gamma=2\) and \(u\sim t_{\rm Th}\sim N\). This is a special case which we will examine later.
Under these conditions, the sum of the \(k=0\) contributions to the SFF in the time period \(2\pi Mj\leq u\leq 2\pi M(j+1)\) is
\[\frac{e^{-2\pi\Lambda^{2}N^{1-\gamma}u}}{\pi\Lambda^{2}N^{1-\gamma}u}\left[ \sum_{l=0}^{j}(-1)^{l}\binom{P}{l}J(l,0)e^{\Lambda^{2}N^{-\gamma}u(4\pi Ml-u)} +e^{\Lambda^{2}N^{-\gamma}u^{2}}\sum_{l=j+1}^{P}(-1)^{l}\binom{P}{l}J(l,0) \right]. \tag{142}\]
Since \(\gamma>1\), the exponents of the exponential terms within the brackets are small. Expanding them to first order, we can write this as
\[\frac{e^{-2\pi\Lambda^{2}N^{1-\gamma}u}}{\pi\Lambda^{2}N^{1-\gamma }u}\sum_{l=0}^{P}(-1)^{l}\binom{P}{l}J(l,0)\\ +\frac{e^{-2\pi\Lambda^{2}N^{1-\gamma}u}}{\pi}\left[\sum_{l=0}^{ j}(-1)^{l}\binom{P}{l}J(l,0)(T-4\pi l/P)-\sum_{l=j+1}^{P}(-1)^{l}\binom{P}{l}J(l,0)T \right]. \tag{143}\]
We note that \(J(j,k)\) is a polynomial of order less than \(P\) in \(j\). So, from the theory of finite differences, we find that [70]
\[\sum_{j=0}^{P}(-1)^{j}\binom{P}{j}J(j,k)=0. \tag{144}\]
Using this fact, we find that the sum of the \(k<2\) contributions is, to first order in \(N^{-1}\),
\[\frac{2}{\pi}e^{-2\pi\Lambda^{2}N^{1-\gamma}u}\sum_{l=0}^{j}(-1)^{l}\binom{P} {l}[J(l,0)+J(l,1)](T-2\pi l/P). \tag{145}\]
The sum of the \(k\geq 2\) contributions will be subleading. So, to first order, the SFF in this time range is
\[K^{(\gamma)}(T)=\min\left(\frac{T}{2\pi},1\right)+\frac{2}{\pi}e^{-2\pi\Lambda ^{2}N^{1-\gamma}u}\sum_{l=0}^{j}(-1)^{l}\binom{P}{l}[J(l,0)+J(l,1)](T-2\pi l/ P). \tag{146}\]
We now seek to simplify \(J(l,0)+J(l,1)\). By shifting the summation index in \(J(l,1)\) and using Pascal's rule
\[\binom{n-1}{k}+\binom{n-1}{k-1}=\binom{n}{k}, \tag{147}\]
we find that
\[J(l,0)+J(l,1)=2^{-2(P-2)}\sum_{k^{\prime}=0}^{P-2}2^{k^{\prime}}\binom{2(P-1- l)+1}{k^{\prime}}\binom{1-P}{P-2-k^{\prime}}=J(l-1/2,0). \tag{148}\]
We can also find a closed-form expression for \(J(l,0)\) by noting that it is the \((P-2)^{\rm th}\) coefficient in the Maclaurin series expansion
\[J(l,0)=2^{-2(P-1)}[z^{P-2}](1+z)^{1-P}\sum_{k=0}^{\infty}(2z)^{k}\binom{2(P-1- l)}{k}, \tag{149}\]
where \([z^{P-2}]\) is the \((P-2)^{\rm th}\) coefficient extractor. Now, summing over \(k\), writing the coefficient as a residue at \(z=0\), and making the change of variable \(z\to(\sqrt{4z+1}-1)/2\), we find that
\[J(l,0)=2^{-2(P-1)}\underset{z=0}{\text{Res}}\ z^{1-P}(1+4z)^{P-l-3/2}=\frac{1}{4 }\binom{P-l-3/2}{P-2}. \tag{150}\]
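Since Eq. (150) is easy to mistype, here is a quick symbolic check (a sketch we added, not from the paper) that the generating-function coefficient of Eq. (149) matches the closed form, for the integer values \(0\leq l\leq P-1\) that enter Eq. (151):

```python
import sympy as sp

z = sp.symbols('z')

def J_series(l, P):
    # Eq. (149): 2^(-2(P-1)) times the z^(P-2) Maclaurin coefficient of
    # (1+z)^(1-P) * (1+2z)^(2(P-1-l)), after summing the k-series.
    expr = (1 + z)**(1 - P) * (1 + 2*z)**(2*(P - 1 - l))
    c = sp.series(expr, z, 0, P - 1).removeO().coeff(z, P - 2)
    return c / 2**(2*(P - 1))

def J_closed(l, P):
    # Eq. (150): (1/4) * binomial(P - l - 3/2, P - 2), with the
    # generalized (non-integer) binomial coefficient.
    return sp.binomial(P - l - sp.Rational(3, 2), P - 2) / 4

for P in range(2, 8):
    for l in range(P):
        assert sp.simplify(J_series(l, P) - J_closed(l, P)) == 0
print("Eq. (150) matches Eq. (149) for P = 2..7")
```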
Using Eqs. (148) and (150) in Eq. (146), we find, for \(2\pi j/P\leq T\leq 2\pi(j+1)/P\),
\[K^{(\gamma)}(T)=\min\left(\frac{T}{2\pi},1\right)+e^{-2\pi\Lambda^{2}N^{1- \gamma}u}\begin{cases}(P-1)\frac{T}{2\pi},&j=0\\ 1-\frac{T}{2\pi},&0<j<P\\ 0,&j=P\end{cases}. \tag{151}\]
This means that, overall, the SFF is
\[K^{(\gamma)}(T)=\left(1-e^{-2\pi\Lambda^{2}N^{1-\gamma}u}\right)\min\left( \frac{T}{2\pi},1\right)+e^{-2\pi\Lambda^{2}N^{1-\gamma}u}\min\left(P\frac{T}{2 \pi},1\right). \tag{152}\]
This result becomes 1 for times greater than the Heisenberg time, so it is also valid at those times. We conclude that Eq. (152) is valid at all relevant times, excepting the case of coincident Thouless and Heisenberg timescales. For times smaller than the Thouless time, where \(u\ll N^{\gamma-1}\), we see that
\[K^{(\gamma)}(T)=\min\left(P\frac{T}{2\pi},1\right). \tag{153}\]
This result is exactly what we would expect in the weak-coupling limit: the GUE SFF with time enhanced by a factor of \(P\), the number of blocks.
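For overlaying the analytics on numerics, Eq. (152) is a one-liner; the sketch below uses the substitution \(u=N|T|\) from Eq. (129), which is the only convention it assumes. In the regime \(u\ll N^{\gamma-1}\) the weight \(w\to 1\) and the function reduces to the enhanced-ramp limit of Eq. (153).

```python
import numpy as np

def K_closed(T, N, P, Lam, gamma):
    # Eq. (152), with u = N*T per Eq. (129).
    T = np.asarray(T, dtype=float)
    w = np.exp(-2 * np.pi * Lam**2 * N**(1.0 - gamma) * N * T)
    ramp = np.minimum(T / (2 * np.pi), 1.0)          # GUE ramp
    enhanced = np.minimum(P * T / (2 * np.pi), 1.0)  # Eq. (153) limit
    return (1.0 - w) * ramp + w * enhanced
```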
For the Thouless timescale, while different from the Heisenberg timescale (meaning \(\gamma\neq 2\)), we obtain
\[K^{(\gamma)}_{\rm Th}(v)=K^{(\gamma)}(N^{\gamma-2}v)=\left(1-e^{-2\pi\Lambda^ {2}v}\right)\min\left(\frac{T}{2\pi},1\right)+e^{-2\pi\Lambda^{2}v}\min\left(P \frac{T}{2\pi},1\right), \tag{154}\]
where \(v=N^{2-\gamma}T\). We see that at the Thouless timescale there is a crossover from the GUE SFF for large \(\Lambda\) to the uncoupled blocks result for small \(\Lambda\). For times greater than the block Heisenberg time \(2\pi M\), this becomes identical to the RP SFF. The reason for this is that the correlations between the eigenvalues of \(A\) do not persist for times larger than the block Heisenberg time. At these timescales we are justified in setting \(Z_{2}(z,\tau;z^{\prime},-\tau)=[g_{2}(z,\tau;z^{\prime},-\tau)]^{M}\) in Eq. (93), which reduces the calculation of the SFF to that of the RP model.
We now examine the \(\gamma=2\) case at the Thouless timescale. From Eq. (131) we find that
\[K^{(2)}_{\rm Th}(v)=K^{(2)}(v)=\min\left(\frac{v}{2\pi},1\right)-\frac{e^{-2 \pi\Lambda^{2}v}}{\pi\Lambda^{2}v}\sum_{j=0}^{P}(-1)^{j}\binom{P}{j}Q^{(2)}_{ \rm Th,j}(v), \tag{155}\]
\[Q^{(2)}_{\rm Th,j}(v)=\sum_{k=0}^{\infty}J(j,k)\left(-\Lambda^{2}v\,|v-2\pi j/P|\right)^{k}\begin{cases}e^{\Lambda^{2}v^{2}},&v\leq 2\pi j/P\\ e^{\Lambda^{2}v(4\pi j/P-v)},&v\geq 2\pi j/P\end{cases}. \tag{156}\]
Due to the coincidence of the Thouless and Heisenberg timescales, the SFF in this case does not generally have a closed form expression. Although the SFF is unwieldy in general, it does simplify greatly for \(P<3\). When \(P=1\) we find the GUE result
\[K^{(2)}_{\rm Th}(v)=\min\left(\frac{v}{2\pi},1\right), \tag{157}\]
while for \(P=2\), the simplest nontrivial case, we find
\[K_{\text{Th}}^{(2)}(v)=\min\Big{(}\frac{v}{2\pi},1\Big{)}+\frac{e^{-2\pi\Lambda^{ 2}v}}{2\pi\Lambda^{2}v}\begin{cases}\sinh(\Lambda^{2}v^{2}),&0\leq v\leq\pi\\ e^{\Lambda^{2}v(2\pi-v)}-\cosh(\Lambda^{2}v^{2}),&\pi\leq v\leq 2\pi\\ -2\sinh^{2}(\pi\Lambda^{2}v)e^{\Lambda^{2}v(2\pi-v)},&v\geq 2\pi\end{cases}. \tag{158}\]
The Thouless-Heisenberg timescale SFF is plotted in Fig. 9, for the cases of \(P=4\) and \(P=20\). Also shown are numerical results obtained through exact diagonalization which are in strong agreement with the theory. For times much greater than the block Heisenberg time there is graphical evidence that the SFF moves to that of the RP model. By examining Fig. 9, we can see that when the number of sectors \(P\) becomes large, making \(u\gg 2\pi M\), the SFF moves to the RP SFF shown in Fig. 5. As with the RP SFF, we see that the SFF reaches its plateau value at times greater than the Heisenberg time when \(\Lambda\) is finite. In this case there is also a trade-off between level repulsion at early and late times. The area between the SFF and its plateau value is independent of the coupling strength for any nonzero coupling between the sectors, as is shown in the Appendix.
Bringing together all the results at the Thouless timescale we have that the SFF is
\[K_{\text{Th}}^{(\gamma)}(v)=\begin{cases}\min\big{(}P\frac{T}{2\pi},1\big{)},&\gamma>2\\ \min\big{(}\frac{v}{2\pi},1\big{)}-\frac{e^{-2\pi\Lambda^{2}v}}{\pi\Lambda^{2 }v}\sum_{j=0}^{P}(-1)^{j}\binom{P}{j}Q_{\text{Th},j}^{(2)}(v),&\gamma=2\\ \Big{(}1-e^{-2\pi\Lambda^{2}v}\Big{)}\min\big{(}\frac{T}{2\pi},1\big{)}+e^{-2 \pi\Lambda^{2}v}\min\big{(}P\frac{T}{2\pi},1\big{)}\,,&1<\gamma<2\end{cases}. \tag{159}\]
The behavior of the SFF is dependent on \(\Lambda\) when \(1<\gamma\leq 2\). For small \(\Lambda\) the SFF goes to the result expected for uncoupled sectors, while for large \(\Lambda\) it goes to the GUE result. This can be seen explicitly in the above equation for \(1<\gamma<2\) and can be observed graphically for the \(\gamma=2\) case in Fig. 9.
As we did for the RP model, we can create a diagram showing the behavior of the SFF at different timescales and values of \(\gamma\), shown in Fig. 10. At times larger than the Thouless time the SFF goes to the GUE result. At times smaller than the Thouless time the SFF goes to the GUE SFF with time enhanced by a factor of \(P\), the number of blocks. If \(\gamma>1+d\) this result first reaches its plateau value at the block Heisenberg time, which is smaller than the full Heisenberg time by a factor of \(P\). This means that at times larger than the block Heisenberg time the SFF is the same as the RP SFF. At the Thouless timescale, while it is not larger than the Heisenberg timescale, there is a crossover between the enhanced GUE SFF and the standard GUE SFF. The SFF is 1 at times larger than the Heisenberg time.
## 6 Conclusion
In this paper we introduce a generalization of the Rosenzweig-Porter (RP) model called the Block Rosenzweig-Porter (BRP) model. This is done by redefining the diagonal matrix in the RP model to be a block diagonal matrix with each block being a GUE matrix. In doing so we obtain a minimal quantum glass model which immediately thermalizes within the blocks but has much slower global thermalization (if it thermalizes at all) depending on the strength of the inter-block coupling. It is known that the RP model is chaotic for \(\gamma<1\) and localized for \(\gamma>2\). For intermediate values of \(\gamma\) the RP model is localized at early times, then thermalizes and becomes chaotic at the Thouless time. We find that the same is true for the BRP model. However, the localized behaviors for the two models are different. Whereas the RP model exhibits localization in single states, the BRP model instead exhibits localization within single blocks.
Figure 9: The SFF for the BRP model when the Thouless and Heisenberg timescales coincide for various values of the coupling parameter \(\Lambda\). The dark thin lines show the analytical results while the lighter, thicker lines show the numerical results. The numerics are obtained through exact diagonalization of size \(N=1000\) random matrices and are averaged over \(10^{5}\) realizations. The numerics are restricted to eigenvalues within the window [-0.75,0.75]. In (a) there are \(P=4\) blocks while in (b) there are \(P=20\). The left (right) dotted line indicates the block (full) Heisenberg time.
Our main result is a calculation of the spectral form factor (SFF) at all timescales larger than the inverse spectral width for \(\gamma>1\) in the BRP model. As a lead-in to this, we perform this calculation for the RP model. In the intermediate phase where \(1<\gamma<2\), the SFF of the RP model indicates Poissonian statistics before the Thouless timescale and GUE statistics afterward. At the Thouless timescale there is a crossover between these behaviors mediated by the unfolded coupling parameter \(\Lambda\). These results are consistent with those of Ref. [27]. Within the same range for \(\gamma\), the BRP model has statistics consistent with independent GUE blocks before the Thouless timescale. After the Thouless timescale statistics consistent with a single large GUE block are found. As with the RP model, there is a crossover between these behaviors at the Thouless timescale, again mediated by \(\Lambda\). This indicates that the system is initially frozen into a single sector, then escapes and thermalizes at the Thouless time.
An important feature of the SFF is the times at which it is equal to its plateau value, as these indicate the disappearance of repulsion between eigenvalues at the energy separations corresponding to these times. We find that if \(\gamma<1+d\), where \(d=\log_{N}M\), the SFF will first reach its plateau value at or near the Heisenberg time, meaning that eigenvalue correlations first vanish at separations smaller than the mean level spacing, as they must for all discrete quantum systems. However, if \(\gamma>1+d\) the SFF first reaches its plateau value at the block Heisenberg time, which is smaller than the full Heisenberg time by a factor of \(P\), the number of blocks. This means that correlations between eigenvalues first vanish at separations smaller than the mean level spacing of a single block.
Figure 10: The SFF of the BRP model at different timescales and values of the coupling parameter \(\gamma\) when the size of the blocks \(M\) is much less than \(N\). The green line represents a crossover between the enhanced GUE ramp and the regular GUE ramp. The magenta line represents the crossover between the Poissonian result and the regular GUE ramp. The magenta dot indicates the point at which the Thouless and Heisenberg timescales coincide and there is no closed form expression for the SFF. The dotted lines indicate the values of \(\gamma\) at which there is a phase transition. The left (right) dashed line indicates the block (full) Heisenberg time.
If \(\gamma<2\), meaning the system is not in its localized phase, level repulsion may still be present at later times (smaller energy separations), causing the SFF to drop back below its plateau value until the Heisenberg time.
While this work has been concerned with dynamics and associated spectral statistics, one can also ask about the eigenstate properties in the three phases. In the RP model the eigenstates are fully ergodic for \(\gamma<1\) and localized to a single state for \(\gamma>2\). For intermediate values of \(\gamma\) the eigenstates are neither localized nor ergodic; they are termed nonergodic extended states [27]. A similar phenomenon occurs in the BRP model. For \(\gamma>2\) the eigenstates are localized to a single sector. However, the eigenstates become fully delocalized across the Hilbert space for \(\gamma<1+d\) (rather than merely \(\gamma<1\)), where \(d=\log_{N}M\) and \(M\) is the sector size. The fact that this eigenstate transition is, in general, separate from the transition to full GUE statistics at \(\gamma=1\) is an interesting feature of the BRP model. Future work may examine this feature and the eigenstate statistics in more detail.
The BRP model we introduce here is significant because it is a solvable random matrix model with a glass transition. The SFF of the BRP model can be calculated at all relevant timescales. In more complex quantum glass models one cannot probe times exponential in the system size with currently available methods. Particularly interesting is the case in which the Thouless time is of the same order as the Heisenberg time. The BRP model is thus a useful starting point in understanding the spectral statistics of systems with slow dynamics.
Due to the simplicity of the BRP model it does not capture some features of more physical models. Each block is taken to be the same size and independent of the others. More generally we can expect the sectors to have some distribution in size with correlations between their matrix elements. The BRP Hamiltonian also has only two levels in its hierarchy, making it a model with a single nearly conserved quantity or slow mode. More realistic systems may have a richer hierarchy of long timescales, which could be captured by a generalized BRP model with more than two levels of nested blocks. An avenue for future work may be to modify the BRP model to capture these features.
## Acknowledgements
We would like to thank I. M. Khaymovich for useful discussions and comments which improved this manuscript, in particular for pointing out the interpretation of the coupling parameter \(\gamma\) in terms of eigenstate properties. This work was supported by the following: The National Science Foundation through the Quantum Leap Challenge Institute for Robust Quantum Simulation (grant OMA-2120757), NSF DMR-2037158, US-ARO Contract No. W911NF1310172, and Simons Foundation (V.G.); the Joint Quantum Institute (M.W.); the Air Force Office of Scientific Research under award numbers FA9550-17-1-0180 (M.W.) and FA9550-19-1-0360 (B.S.); the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Accelerated Research for Quantum Computing program "FAR-QC" (R.B.); the DoE ASCR Quantum Testbed Pathfinder program under award number DE-SC0019040 (C.L.B.); the DoE ASCR Accelerated Research in Quantum Computing program under award number DE-SC0020312 (C.L.B.); the DoE QSA, AFOSR, AFOSR MURI, NSF PFCQC program, NSF QLCI under award number OMA-2120757 (C.L.B.); DoE award number DE-SC0019449 (C.L.B.), ARO MURI, and DARPA SAVaNT ADVENT (C.L.B.). This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE 1840340 (R.B.), and by the National Science Foundation NRC postdoctoral fellowship program (C.L.B.).
## Appendix
Here we show that the area between the SFF and its plateau value is independent of the strength of the coupling between different states, as long as the coupling is nonzero. Consider the integral
\[I=\int_{-\infty}^{\infty}dt(\text{SFF}(t)-N), \tag{160}\]
where
\[\text{SFF}(t)=\sum_{mn}\overline{\exp[i(E_{n}-E_{m})t]} \tag{161}\]
is the SFF. We can approximate \(I\) as
\[I(t_{\text{long}})=\int_{-\infty}^{\infty}dt(\text{SFF}(t)-N)\exp\left(-\frac {t^{2}}{2t_{\text{long}}^{2}}\right), \tag{162}\]
which goes to \(I\) as \(t_{\text{long}}\rightarrow\infty\). We find that
\[\begin{split} I(t_{\text{long}})&=\int_{-\infty}^ {\infty}dt\sum_{m\neq n}\overline{\exp[i(E_{n}-E_{m})t]}\exp\left(-\frac{t^{2}} {2t_{\text{long}}^{2}}\right)\\ &=\sqrt{2\pi}t_{\text{long}}\sum_{m\neq n}\overline{\exp\left( \frac{-(E_{n}-E_{m})^{2}t_{\text{long}}^{2}}{2}\right)}.\end{split} \tag{163}\]
If there is repulsion between all levels so that each level has a nonzero distance from the others, \(I(t_{\text{long}})\) will vanish for \(t_{\text{long}}\) larger than the Heisenberg time. This indicates that \(I=0\).
Because the disconnected part of the SFF depends only on the level density, which for the models considered in this work is independent of the coupling strength when \(\gamma>1\), the area between the connected SFF and its plateau value will also be independent of the coupling strength for those values of \(\gamma\). We found above that, as \(\gamma\to 1^{+}\) for both models, the SFF goes to the GUE result, which is also the result for all \(\gamma\leq 1\). From this we can conclude that the area between the unfolded connected SFF and its plateau value is equal to the GUE result for any nonzero coupling strength. This is simply a triangle with area \(\pi N\).
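To make this argument concrete, the snippet below evaluates the regularized integral of Eq. (163) on sampled GUE spectra; the output decays toward zero once \(t_{\text{long}}\) exceeds the Heisenberg time. The Heisenberg-time estimate used here (\(2\pi\) over the mean level spacing of a semicircle of width 4) is a rough assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def gue_eigs(n, rng):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return np.linalg.eigvalsh((a + a.conj().T) / (2 * np.sqrt(n)))

def I_reg(E, t_long):
    # Eq. (163): sqrt(2 pi) * t_long * sum_{m != n} exp(-(E_n - E_m)^2 t_long^2 / 2)
    d = E[:, None] - E[None, :]
    s = np.exp(-(d**2) * t_long**2 / 2).sum() - len(E)  # drop the m == n terms
    return np.sqrt(2 * np.pi) * t_long * s

N = 200
spectra = [gue_eigs(N, rng) for _ in range(50)]
t_heis = 2 * np.pi / (4 / N)  # rough Heisenberg time: 2 pi / mean spacing
for t_long in (0.1 * t_heis, t_heis, 10 * t_heis):
    vals = [I_reg(E, t_long) for E in spectra]
    print(f"t_long/t_heis = {t_long / t_heis:5.1f}:  I = {np.mean(vals):12.3f}")
```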
|
2303.08076 | Optimized Control-Centric Communication in Cooperative Adaptive Cruise
Control Systems | In this study, we explore an innovative approach to enhance cooperative
driving in vehicle platooning systems through the use of vehicle-to-everything
(V2X) communication technologies. As Connected and Autonomous Vehicles (CAVs)
integrate into increasingly dense traffic networks, the challenge of
efficiently managing communication resources becomes crucial. Our focus is on
optimizing communication strategies to support the growing network of
interconnected vehicles without compromising traffic safety and efficiency. We
introduce a novel control-aware communication framework designed to reduce
communication overhead while maintaining essential performance standards in
vehicle platoons. This method pivots from traditional periodic communication to
more adaptable aperiodic or event-triggered schemes. Additionally, we integrate
Model-Based Communication (MBC) to enhance vehicle perception under suboptimal
communication conditions. By merging control-aware communication with MBC, our
approach effectively controls vehicle platoons, striking a balance between
communication resource conservation and control performance. The results show a
marked decrease in communication frequency by 47\%, with minimal impact on
control accuracy, such as less than 1\% variation in speed. Extensive
simulations validate the effectiveness of our combined approach in managing
communication and control in vehicle platoons, offering a promising solution
for future cooperative driving systems. | Mahdi Razzaghpour, Shahriar Shahram, Rodolfo Valiente, Mahdi Zaman, Yaser P. Fallah | 2023-03-14T17:10:04Z | http://arxiv.org/abs/2303.08076v2 | # Control-aware Communication for Cooperative Adaptive Cruise Control
###### Abstract
Utilizing vehicle-to-everything (V2X) communication technologies, vehicle platooning systems are expected to realize a new paradigm of cooperative driving with higher levels of traffic safety and efficiency. Connected and Autonomous Vehicles (CAVs) need to have proper awareness of the traffic context. However, as the quantity of interconnected entities grows, the expense of communication will become a significant factor. As a result, the cooperative platoon's performance will be influenced by the communication strategy. While maintaining desired levels of performance, periodic communication can be relaxed to more flexible aperiodic or event-triggered implementations. In this paper, we propose a control-aware communication solution for vehicle platoons. The method uses a fully distributed control-aware communication strategy, attempting to decrease the usage of communication resources while still preserving the desired closed-loop performance characteristics. We then leverage Model-Based Communication (MBC) to improve cooperative vehicle perception in non-ideal communication and propose a solution that combines control-aware communication with MBC for cooperative control of vehicle platoons. Our approach achieves a significant reduction in the average communication rate (\(47\%\)) while only slightly reducing control performance (e.g., less than \(1\%\) speed deviation). Through extensive simulations, we demonstrate the benefits of combined control-aware communication with MBC for cooperative control of vehicle platoons.
Cooperative Driving, Distributed Event-triggered Communication, Model-based Communication, Multi-Agent Systems, Platooning
## I Introduction
Cooperative Adaptive Cruise Control (CACC), which relies on Vehicle to Vehicle (V2V) communication, offers the potential to enhance traffic flow dynamics by ensuring string stability and allowing for closer inter-vehicle distances [1, 2]. The backbone of distributed Multi-Agent Systems (MASs) is information exchange to create situational awareness. Overuse of communication resources may lead to communication congestion, resulting in long latency, increased packet loss, and reduced throughput, all of which will inevitably degrade system stability, performance, and reliability [3, 4]. Therefore, a crucial consideration when developing an appropriate distributed control system for MASs is to ensure not only the desired control performance but also the ability to conserve limited communication and computation resources.
Model-Based Communication (MBC) is a relatively new communication scalability solution that has shown promise in reducing channel congestion [5]. The main objective of the MBC scheme is to utilize a content structure that is more adaptable for transmitting packets that contain the parameters of the joint vehicle dynamic/driver behavioral models than the Basic Safety Message (BSM) content structure defined by the J2735 standard [6]. To represent the vehicle's dynamic when using the MBC scheme, various modeling methods can be considered. Non-parametric Bayesian inference methods, such as Gaussian Processes (GP), hold great potential for modeling the joint vehicle's dynamic/driver's behavior.
It should be noted that the majority of existing vehicle platoon control research uses Time-Triggered Communication (TTC), in which data exchange between two successive vehicles is performed periodically according to a fixed communication rate. In this case, data communications are activated on a regular basis regardless of measurement changes, even if the difference between two successively transmitted values is very small. Because the scheduling of transmission instants is purely based on time and not on the actual status of the vehicle, TTC frequently results in inefficient use of communication resources, which is undesirable in the context of CACC. Therefore, it appears more logical to utilize communication techniques that are sensitive to control needs, determining the timing of transmissions based on output measurements in order to achieve a better balance between communication efficiency and control performance.
Because there is a tradeoff between platoon control performance and communication resource utilization, designing an efficient Event-Triggered Communication (ETC) for the vehicle platoon is critical. We describe the design of such a system in this paper and provide a new perspective on how the interaction between different components of a platoon can be modeled to improve performance. The contributions of this paper are as follows:
* We proposed a control-aware communication solution that combines ETC with MBC for cooperative control of vehicle platoons.
* We describe a fully distributed ETC design for vehicle platoons that achieves a significant reduction in the average communication rate while only mildly reducing control performance.

Fig. 1: A description of the communication topology, with the dashed lines indicating the flow of information among vehicles. The distance between the \(n^{th}\) vehicle and its preceding vehicle is denoted as \(d_{n}\).
## II Related Work
The control of multiple CAVs' collective behavior is based on the vehicles' mutual awareness of their states (e.g., inter-vehicle distance and vehicle speed), which is accomplished through inter-vehicle sensing and communication. This section will go over one of the cooperative driving applications, CACC, and V2V communication.
### _Cooperative Adaptive Cruise Control (CACC)_
CACC systems must be constructed to endure any special maneuvers, such as interfering vehicles cutting into CACC platoons or hard braking by leading vehicles [7, 8]. The faster and more precise information about the motion of preceding vehicles provided by V2V allows the CACC vehicle to more accurately follow the preceding cooperating vehicle, even at a considerably shorter distance. This not only enhances user acceptance but also has the potential to significantly improve traffic flow dynamics and lane throughput capacity. Researchers demonstrated that vehicle platooning has enormous potential to address a wide range of transportation issues [9, 10]. A well-designed CACC must keep the deviation from the desired gap, known as spacing error, as small as possible in order to reduce the risk of collision and enjoy the advantages of platoon formation, such as lower fuel consumption and higher traffic throughput [11].
Communication imperfections can significantly affect the performance of CACC systems, as excessively long communication delays or low transmission rates can compromise string stability and other performance properties for a given time gap [12]. As a result, the number of transmissions in time must be large enough and communication delays must be short enough to achieve the desired platooning behavior [13].
### _Vehicle to Vehicle (V2V) Communication_
The exchange of information is critical for platoon deployment because it allows control actions to be taken utilizing the latest information regarding road and traffic conditions. Many studies have been conducted to determine the impact of the communication network on platoon performance [14, 15]. One of TTC's major flaws is its lack of flexibility and scalability. This section discusses ETC and MBC as flexible and scalable solutions. The Cellular Vehicle to Everything (C-V2X) standard establishes a lower bound for the Minimum Inter-Event Time (MIET), which is the minimum amount of time that must pass between two consecutive transmissions [16]. This inter-packet duration is bounded below by \(100\,ms\) and above by \(600\,ms\). This strictly positive lower bound is required to avoid Zeno behavior (an infinite number of events in finite time) and to make the ETC system practical to implement.
#### II-B1 **Event-Triggered Communication**
Event-based strategies are a popular way to ensure that communication resources are used efficiently in MASs [17, 18]. In contrast to traditional TTC, event-based approaches transmit data only when necessary to meet a control system specification. It was found that event-triggered systems exhibit better real-time performance than time-triggered systems. In [19], an approach is proposed to reduce the communication burden by using a flexible event-triggering strategy based on tunable parameters for each platoon member.
Each agent sends its current state to neighboring agents only when the difference between the current state and the previously transmitted state surpasses a time-varying threshold, or when it reaches the maximum value of the inter-event interval. As a more feasible limitation for event-triggered approaches, researchers have explored imposing a minimum time interval between events [20]. Each vehicle's event-triggered scheme can be defined as follows:
\[t_{k+1}=t_{k}+\min\left(\tau_{k},\tau\right)\text{,} \tag{1}\]
where \(\tau\) is a positive constant that denotes the upper bound of the inter-event interval. \(\tau_{k}\) is determined by the following equation,
\[\tau_{k}=\inf_{t>t_{k}}\left\{t-t_{k}\mid C\left(S(t)\,,\bar{S}\left(t\right) \right)>0\right\},\quad\text{ for }t\geq 0. \tag{2}\]
When new information is received, each agent will update its control input and use the received model for prediction. It should be noted that these trigger instants are not synchronized among agents. In such techniques, each vehicle runs a local duplicate of its neighbors' dynamics. In the proposed configuration, each vehicle uses its own transmitted model to determine when to send data. If the prediction of the kinematic model since the last broadcast is still accurate, the vehicle will not transmit a message.
#### II-B2 **Model-based Communication**
When designing CACC systems, we must explicitly account for the uncertainty in vehicle state, behavior, and communication [21]. Because the information from the neighboring vehicles is not continuously available, each agent must run an estimator. In this scheme, each agent uses a model to predict the measurements of the other agents when they do not receive a packet either due to packet loss or because the transmission was not triggered.
In this paper, we consider the velocity time series of each cooperative vehicle, \(v_{n}(t)\), to be a GP defined by the mean function \(m_{n}(t)\) and the covariance kernel function \(\kappa_{n}(t,t^{\prime})\) as
\[v_{n}(\mathbf{t})\sim\mathcal{GP}\left(m_{n}(\mathbf{t}),\kappa_{n}\left( \mathbf{t},\mathbf{t}^{\prime}\right)\right). \tag{3}\]
We are interested in incorporating the knowledge that the observed velocity data provide about the underlying function, \(v_{n}(t)\), and its future values. Assuming that for each cooperative vehicle, the mean of the process is zero, \(m_{n}(t)=0\), the covariance kernel is a Radial Basis Function (RBF), and the measurement noises are independent and identically distributed (\(i.i.d.\)) with the Gaussian distribution \(\mathcal{N}(0,\,\gamma_{n,noise}^{2})\)
the covariance matrix of the observed velocity of the \(n^{th}\) cooperative vehicle is
\[K_{n}(\mathbf{t},\mathbf{t^{\prime}})=\kappa_{n}(t,t^{\prime})+\gamma_{n,noise}^{2}I \tag{4}\]
where \(I\) denotes the identity matrix of dimension equal to the size of the training (measured) data and \(\kappa_{n}(t,t^{\prime})\) can be calculated using the RBF definition as
\[\kappa_{n}(t,t^{\prime})=\exp(-\frac{||t-t^{\prime}||^{2}}{2\gamma_{n}^{2}}). \tag{5}\]
Using the aforementioned assumptions, the joint distribution of the past observed values, \(\mathcal{V}_{n}^{obs}\), and the future values \(\mathcal{V}_{n}^{*}\), can be represented as
\[\left[\begin{array}{c}\mathcal{V}_{\mathbf{n}}^{\mathbf{obs}}\\ \mathcal{V}_{\mathbf{n}}^{*}\end{array}\right]\sim\mathcal{N}\left(\mathbf{0}, \left[\begin{array}{cc}K_{n}(\mathbf{t},\mathbf{t})&K_{n}\left(\mathbf{t},\mathbf{t}^{*} \right)\\ K_{n}\left(\mathbf{t}^{*},\mathbf{t}\right)&K_{n}\left(\mathbf{t}^{*},\mathbf{t}^{*}\right) \end{array}\right]\right), \tag{6}\]
where \(\mathbf{t}\) and \(\mathbf{t}^{*}\) denote the sets of observation and future value time stamps, respectively, and \(K_{n}(.,.)\) can be obtained from (4).
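A minimal numpy sketch of the MBC prediction step implied by Eqs. (3)-(6): the receiver conditions the GP on the shared velocity history and reads off the posterior mean and variance at future time stamps. The hyperparameter values, time stamps, and velocities below are illustrative placeholders; fitting \(\Theta_{n}=\{\gamma_{n},\gamma_{n,noise}\}\), which the transmitting vehicle does in the paper, is omitted here.

```python
import numpy as np

def rbf(t1, t2, gamma):
    # RBF kernel of Eq. (5).
    d = t1[:, None] - t2[None, :]
    return np.exp(-d**2 / (2 * gamma**2))

def gp_velocity_forecast(t_obs, v_obs, t_star, gamma, gamma_noise):
    # Posterior mean/variance of future velocities, conditioning the
    # joint Gaussian of Eq. (6).  Eq. (3) assumes a zero prior mean;
    # with real data one would typically de-mean the velocities first.
    K = rbf(t_obs, t_obs, gamma) + gamma_noise**2 * np.eye(len(t_obs))  # Eq. (4)
    Ks = rbf(t_star, t_obs, gamma)                                      # K(t*, t)
    mean = Ks @ np.linalg.solve(K, v_obs)
    cov = rbf(t_star, t_star, gamma) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# Predict 1 s ahead from the 5 most recent samples (100 ms apart).
t_obs = np.arange(5) * 0.1
v_obs = np.array([20.0, 20.3, 20.5, 20.6, 20.8])  # m/s, illustrative
t_star = 0.5 + np.arange(10) * 0.1
mean, var = gp_velocity_forecast(t_obs, v_obs, t_star, 0.4, 0.05)
```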
## III Preliminaries and Problem Formulation
It is challenging to achieve a substantial reduction in V2V communication while preserving the desired performance of the vehicular platoon. Therefore, a crucial matter to tackle is how to devise suitable control strategies that can sustain acceptable MAS control performance while considerably decreasing the overuse of communication and computation resources. In our control formulation, local information such as spacing error and velocity error is used in a relative sense, that is, in comparison to the agent's own state, to adjust the control input of each subsequent vehicle to match the velocity of the leading vehicles while sustaining a consistent time gap between any two successive vehicles. During V2V communication outages, communication losses for CACC control are mitigated by using a GP to estimate the speed of the preceding vehicles.
### _Vehicle Model and Model Predictive Control Design Approach_
In this study, we consider a platoon of \(N_{v}\) vehicles, where \(n\in\{0,1,\ldots,N_{v}-1\}\) denotes the \(n^{th}\) vehicle in the platoon, and \(n=0\) represents the platoon leader as shown in Figure 1. \(d_{n}\) denotes the gap between \(n^{th}\) and \((n-1)^{th}\) vehicles and is defined as
\[d_{n}=x_{n-1}-x_{n}-l_{n}^{v}, \tag{7}\]
where \(x_{n}\) and \(l_{n}^{v}\) are the longitudinal location of the \(n^{th}\) vehicle rear bumper and the vehicle length, respectively. The desired spacing policy can be defined as follows:
\[d_{n}^{*}(t)=\delta_{n}\,v_{n}(t)+d_{n}^{s}. \tag{8}\]
In (8), \(v_{n}\) is the velocity of the \(n^{th}\) vehicle, \(\delta_{n}\) is the time gap, and \(d_{n}^{s}\) represents the standstill distance. The difference between the gap and its desired value is defined as \(\Delta d_{n}(t)=d_{n}(t)-d_{n}^{*}(t)\), and the velocity difference between \(n^{th}\) vehicle and its predecessor is defined as \(\Delta v_{n}(t)=v_{n-1}(t)-v_{n}(t)\). Hence, \(\Delta\dot{d}_{n}\) turns into \(\Delta\dot{d}_{n}(t)=\Delta v_{n}(t)-\delta_{n}\,a_{n}(t)\) and \(\Delta\dot{v}_{n}=a_{n-1}-a_{n}\), where \(a_{n}\) denotes the acceleration of the \(n^{th}\) vehicle. By taking the driveline dynamics \(f_{n}\) into account, the derivative of the acceleration of vehicle \(n\) is \(\dot{a_{n}}(t)=-f_{n}a_{n}(t)+f_{n}u_{n}(t)\), where \(u_{n}(t)\) acts as vehicle input. By considering \(S_{n}=[\Delta d_{n}\,\,\Delta v_{n}\,\,\,a_{n}]^{T}\) as the vector of states for \(n^{th}\) vehicle, the state-space representation for each vehicle is
\[\dot{S}_{n}(t)=A_{n}\,S_{n}(t)+B_{n}\,u_{n}(t)+D\,a_{n-1}(t)\] \[=\begin{bmatrix}0&1&-\delta_{n}\\ 0&0&-1\\ 0&0&-f_{n}\end{bmatrix}S_{n}(t)+\begin{bmatrix}0\\ 0\\ f_{n}\end{bmatrix}u_{n}(t)+\begin{bmatrix}0\\ 1\\ 0\end{bmatrix}a_{n-1}(t). \tag{9}\]
For \(n=0\) (leader), \(a_{n-1}(t)\) is replaced by zero. The following equation describes the discrete-time state-space model when the first-order forward time approximation is employed;
\[S_{n}(k+1)=\\ (I+t_{s}\,A_{n})\,S_{n}(k)+t_{s}\,B_{n}\,u_{n}(k)+t_{s}\,D\,a_{n-1}(k), \tag{10}\]
where \(t_{s}\) is the sampling time.
Some constraints on the system states and input are also considered, including bounds on acceleration and input, road speed limit, and distance between vehicles (note that a negative distance implies collision and therefore should not occur). The following inequalities (hard constraints) should always hold true
\[a_{n}^{min}\leq a_{n}(k)\leq a_{n}^{max}, \tag{11a}\] \[u_{n}^{min}\leq u_{n}(k)\leq u_{n}^{max},\] (11b) \[v_{n}(k)\leq v^{max},\] (11c) \[d_{n}(k)>0. \tag{11d}\]
Besides, for passenger comfort, system input changes are bounded as
\[t_{s}\,u_{n}^{min}\leq u_{n}(k+1)-u_{n}(k)\leq t_{s}\,u_{n}^{max}. \tag{12}\]
The MPC design problem for each vehicle is
\[\min_{\mathbf{u}_{n}}\sum_{k=0}^{N-1}\Bigg{[}\left(\mathbf{S}_{n}(k)-R_{n}\right)^{T}Q_{n}\left(\mathbf{S}_{n}(k)-R_{n}\right)\] \[+\sum_{i=n-r}^{n-1}\bigg{[}c_{i}^{d}\left(x_{i}(k)-x_{n}(k)-\sum_{j=i+1}^{n}(d_{j}^{s}(k)+l_{j}^{v})\right)^{2}\] \[+c_{i}^{v}\left(v_{i}(k)-v_{n}(k)\right)^{2}\bigg{]}\Bigg{]},\] \[\text{subject to: System Constraints}, \tag{13}\]
where \(\mathbf{u}_{n}\) is the system inputs from \(k=0\) to \(k=N-1\), \(c_{i}^{d}\) and \(c_{i}^{v}\) are positive coefficients, and \(r\) denotes the number of predecessors sharing information with the \(n^{th}\) vehicle.
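Section IV notes that the optimization is implemented with CVXPY and solved with Gurobi; the sketch below is a minimal single-follower rendering of problem (13). It assumes error coordinates with reference \(R_{n}=0\), a single predecessor (\(r=1\)), and a placeholder weight matrix \(Q\); it omits the cross-vehicle terms and constraints (11c)-(11d) for brevity and uses CVXPY's default QP solver instead of Gurobi.

```python
import cvxpy as cp
import numpy as np

# Model constants from Table I; delta is the 0.6 s desired time gap.
N, ts, f, delta = 10, 0.1, 10.0, 0.6
A = np.array([[0.0, 1.0, -delta], [0.0, 0.0, -1.0], [0.0, 0.0, -f]])
B = np.array([0.0, 0.0, f])
D = np.array([0.0, 1.0, 0.0])
Ad, Bd, Dd = np.eye(3) + ts * A, ts * B, ts * D  # discretization, Eq. (10)

def solve_mpc(s0, a_pred, u_prev=0.0, Q=np.diag([1.0, 1.0, 0.1])):
    # s0: current error state [Delta d, Delta v, a]; a_pred: predecessor
    # acceleration over the horizon, obtained via V2V or MBC prediction.
    S = cp.Variable((3, N + 1))
    u = cp.Variable(N)
    cost, cons = 0, [S[:, 0] == s0]
    for k in range(N):
        cost += cp.quad_form(S[:, k + 1], Q)  # track R_n = 0 (assumption)
        cons += [S[:, k + 1] == Ad @ S[:, k] + Bd * u[k] + Dd * a_pred[k]]
    cons += [u >= -4.0, u <= 3.0,              # input bounds, Eq. (11b)
             S[2, :] >= -4.0, S[2, :] <= 3.0,  # acceleration bounds, Eq. (11a)
             u[1:] - u[:-1] <= 3.0 * ts,       # comfort bounds, Eq. (12)
             u[1:] - u[:-1] >= -4.0 * ts,
             u[0] - u_prev <= 3.0 * ts,
             u[0] - u_prev >= -4.0 * ts]
    prob = cp.Problem(cp.Minimize(cost), cons)
    prob.solve()  # the Gurobi solver could be passed here instead
    return u.value, prob.value
```

The returned optimal cost is the local quantity that the control-aware trigger of the next subsection compares against its threshold.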
### _Event-triggering conditions_
Transmission instants in ETC schemes are determined online by a "smart" triggering condition that depends on, for example, system output measurements, so that transmission is only scheduled when necessary to guarantee some performance properties. It is only necessary to verify and execute the event-triggered condition periodically at each communication moment.
The event-triggered condition is "fully distributed" in the sense that it does not rely on global communication topology information. In these systems, if each agent decides when to broadcast its state information on its own, not only the control effort but also the network load will be reduced.
#### III-B1 **Control-aware Triggering**
Making explicit decisions based on current control system states results in a control-aware approach. In desirable states, control systems perform modestly, whereas, in undesirable states, systems change their states and are given higher priority for data transmission. Transmission thresholds are set so that each vehicle can meet stability goals and make transmission decisions to meet performance targets while reducing the total transmission rate. As a result, \(\tau_{k}\) in Eq. (2) for control-aware triggering will take the form of
\[\tau_{k}=\inf_{t>t_{k}}\left\{t-t_{k}\mid\|\mathcal{C}_{i}\|\geq\beta\right\}, \quad\text{ for }t\geq 0. \tag{14}\]
where \(\mathcal{C}_{i}\) is the cost function in Eq. (13).
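Putting Eqs. (1), (2), and (14) together, the per-vehicle trigger can be sketched as follows; the class and parameter names are illustrative, and the \(100\,ms\) minimum inter-event time reflects the C-V2X lower bound discussed in Section II-B.

```python
class ControlAwareTrigger:
    # Fully distributed trigger: each vehicle checks this condition
    # locally at every communication instant (every 100 ms).
    def __init__(self, beta, tau=1.0, miet=0.1):
        self.beta = beta      # cost threshold of Eq. (14) (the trigger level)
        self.tau = tau        # upper bound on the inter-event interval, Eq. (1)
        self.miet = miet      # minimum inter-event time (Zeno avoidance)
        self.last_tx = None

    def should_transmit(self, t_now, mpc_cost):
        if self.last_tx is None:          # always broadcast once at start
            self.last_tx = t_now
            return True
        elapsed = t_now - self.last_tx
        if elapsed < self.miet:           # enforce the strictly positive MIET
            return False
        if elapsed >= self.tau or mpc_cost >= self.beta:  # Eqs. (1) and (14)
            self.last_tx = t_now
            return True
        return False
```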
## IV Experimental Results
In our experiments, we considered the Packet Error Rate (PER) to be an i.i.d. random variable with two values of \(0\) (ideal communication) and \(0.6\) (randomly losing 60% of packets) to study the effect of communication loss on the CACC performance. In our studies, the simulation step is \(100\,ms\) which is the communication periodicity. To validate the proposed strategy and demonstrate its technical feasibility, the ETC policy has been simulated on a platoon of \(N_{v}=10\) vehicles. CVXPY package in Python is used for implementing the optimization problem and Gurobi optimization package is used as the solver [22, 23].
To check the event-triggering condition, each vehicle only needs to record the state of its most recent event-triggered instant and continuously monitor its own state. We used an All-Predecessor-Leader-Following (APLF) topology [12]. Our method is to establish a correlation between communication behaviors and platoon performance, which directly relies on the quantity of accurately transmitted vehicle information and, of course, the number of nodes that receive the information [3]. We investigate the behavior of the platoon, in terms of vehicle efficiency and safety, as the threshold values of the proposed triggering condition are varied over a selected range.
### _Implementation Details_
The parameters used in the simulations can be found in Table I. Each scenario takes \(60\,s\), in which the objective of the platoon is to maintain the desired gap time of \(0.6\,s\) with the preceding vehicle. Upon each transmission opportunity, each cooperative vehicle uses its \(5\) most recent velocity observations, measured at equally spaced \(100\,ms\) time intervals, to train a GP model and obtain the set of parameters \(\Theta_{n}=\{\gamma_{n},\gamma_{n,noise}\}\). After the GP parameters are learned, the transmitting vehicle shares the model parameters along with its history of the \(5\) most recent velocity measurements and the current position and acceleration with their time stamps. In addition, the \(10\) future velocity values (parameter \(N\) in Table I) predicted by the vehicle's MPC are included in the transmitted packet. The cooperative vehicles update the preceding vehicles' information either based on the newly received information from them or based on the GP predictive model every \(100\,ms\). This information is fed into the MPC for updating the control action. Furthermore, the control module provides the optimal predicted states' values of the ego vehicle. Finally, if a triggering condition is detected, the control module will send current states and predicted future velocity trajectory values to the networking module for broadcasting.
### _Analysis and Results_
We use the experimental results from a TTC scheme, which determines transmission instants based on a fixed transmission rate of \(10\,Hz\), as a benchmark to assess the performance of the proposed ETC approach and communication resource utilization. To compare the performance of control-aware triggering, the average transmission rate is used as a measure of network resource usage. We defined the distance error as the absolute value of the difference between the actual distance gap and the desired distance gap in meters. Also, the difference between the maximum and minimum speed and acceleration considering all platoon members at each time step is a good measure of traffic flow and CACC performance. We define these metrics as the speed difference and acceleration difference. The means of the absolute values of the spacing error, speed difference, and acceleration difference for an ideal \(10\,Hz\) TTC scheme are \(\textbf{0.302}\,m\), \(\textbf{4.693}\,m/s\), and \(\textbf{1.257}\,m/s^{2}\), respectively (see Figure 2). These are the smallest errors observed in our experiments.
TABLE I: Model and optimization parameters used in the simulations.

| Parameter | Value | Parameter | Value |
| --- | --- | --- | --- |
| \(N\) | \(10\) | \(t_{s}\) | \(0.1\,s\) |
| \(l_{n}^{v}\) | \(5\,m\) | \(d_{n}^{s}\) | \(2\,m\) |
| \(a_{n}^{max}\) | \(3\,m/s^{2}\) | \(a_{n}^{min}\) | \(-4\,m/s^{2}\) |
| \(u_{n}^{max}\) | \(3\,m/s^{2}\) | \(u_{n}^{min}\) | \(-4\,m/s^{2}\) |
| \(f_{n}\) | \(10\,s^{-1}\) | | |
In Figures 2, 3, and 4, the first subplot (first row) shows the distance of each vehicle from its predecessor (\(d_{n}(t)\)) while the second subplot shows the velocity of each vehicle (\(v_{n}(t)\)). The third subplot depicts the acceleration information for each vehicle, and in the last subplot, every time instance (\(\xi_{n}(t)\)) when a vehicle sends information to its following vehicles is marked. We note that \(d_{0}(t)\) cannot be defined for the leader because there is no platoon member in front of it; therefore, \(d_{0}(t)\) is not included in the first subplots of the mentioned figures. Despite the significant reduction in communication achieved by the ETC scheme, the responses under the ETC scheme look similar to those under the TTC scheme. These findings demonstrate that the frequency of communication can be significantly reduced while maintaining the desired control performance. Figure 3 represents the simulation's best-case scenario for the level 6 threshold. Because we use ideal communication (\(PER=0\)) for this figure, we only considered the effect of the ETC scheme. We considered the combined effect of the ETC and PER in Figure 4, which represents the worst-case scenario of the simulation under consideration in this paper (the highest level of threshold and the highest level of PER). Despite the smooth acceleration shown in Figure 2, the acceleration profile in all other figures fluctuates due to a lack of precise information (either because the packets were lost or because the transmissions were not triggered). Vehicles using the proposed communication paradigm can safely follow the vehicle in front of them. The threshold levels for control-aware triggering are six equally spaced steps from \(200\) to \(700\) as shown in Table II.
Fig. 2: Performance of the CACC with TTC, PER=0, and fixed communication rate of 10 Hz.

Fig. 3: Performance of the CACC with control-aware triggering ETC, PER=0, level 6 threshold, and average communication rate of 5.28 Hz.

Fig. 4: Performance of the CACC with control-aware triggering ETC, PER=0.6, level 6 threshold, and average communication rate of 5.28 Hz.

In Figure 3, communication triggering is driven by the current state of the control system. Because the platoon's last members will have a relatively worse control situation (they must compensate for the errors of preceding vehicles in order to provide a string-stable platoon), they will transmit more frequently. For instance, the communication events for \(\xi_{8}(t)\) and \(\xi_{9}(t)\) are always one.
Table II demonstrates how the trigger level can be selected to trade performance against the data transmission rate. Each data point is an average of 70 simulation rounds to provide a better sense of performance. Table II shows the means of the spacing error (\(m\)), speed difference (\(m/s\)), and acceleration difference (\(m/s^{2}\)). The ETC policy induces a tradeoff between control performance and communication frequency. As can be seen, lowering the trigger level leads to a lower error, but a higher data transmission rate. Long inter-event times result in significant performance errors. As a result, raising the threshold will degrade the control performance.
## V Conclusion
As the number of devices connected to a shared network continues to grow, distributed time-triggered coordination techniques become less scalable. Such constraints necessitate a shift away from the periodic communication paradigm toward opportunistic schemes, such as the one discussed in this paper. Excessive use of communication resources, on the other hand, can have a negative impact on their reliability. As a result, in this article, a resource-sensitive CACC communication strategy is proposed, with the goal of reducing the utilization of communication resources compared to conventional TTC methods while maintaining system performance. Alternatively, the performance of the system can be improved given a fixed communication rate. Furthermore, the minimum inter-event times are guaranteed to have a positive lower bound by design to avoid Zeno behavior.
In addition, we combined MBC with ETC to propose a communication strategy for distributed multi-agent coordination. Each agent decides, based on local information (mainly the difference between its current state/model and its latest broadcast state/model), when a new measurement has to be transmitted over the network. Only the event-triggered condition needs to be periodically checked and executed at each communication time instant. The simulation results show that it is possible to achieve an ETC with good performance that reduces network load (\(47\%\)) when compared to a TTC while only slightly reducing control performance (e.g., less than \(1\%\) speed deviation).
|
2301.06622 | IOPathTune: Adaptive Online Parameter Tuning for Parallel File System
I/O Path | Parallel file systems contain complicated I/O paths from clients to storage
servers. An efficient I/O path requires proper settings of multiple parameters,
as the default settings often fail to deliver optimal performance, especially
for diverse workloads in the HPC environment. Existing tuning strategies have
shortcomings in being adaptive, timely, and flexible. We propose IOPathTune,
which adaptively tunes PFS I/O Path online from the client side without
characterizing the workloads, doing expensive profiling, and communicating with
other machines. We implemented IOPathTune on Lustre and leveraged CloudLab to
conduct the evaluations on 20 different Filebench workloads in three different
scenarios. We observed either on-par or better performance than the default
configuration, as high as 231% on standalone executions. IOPathTune also
delivers 89.57% better overall performance than CAPES in multiple client
executions. | Md. Hasanur Rashid, Youbiao He, Forrest Sheng Bao, Dong Dai | 2023-01-16T22:09:18Z | http://arxiv.org/abs/2301.06622v1 | # IOPPathTune: Adaptive Online Parameter Tuning for Parallel File System I/O Path
###### Abstract
**Motivations.** Parallel file systems (PFSes) play a foundational role in High-Performance Computing (HPC) platforms for providing users and applications the data access ability at needed speed and capacity. It is critical to make sure PFSes could deliver the extreme performance for a largely diverse spectrum of scientific applications running in HPC systems. Parallel file systems are known for their long and complex I/O path. Specifically, in PFSes, I/O requests are issued from the computing nodes or I/O nodes using PFS client library. They then are delivered to the storage servers using RPC mechanisms over high-speed network. Finally, these requests will be buffered on the storage servers' I/O queues waiting for the internal I/O schedulers to materialize the data. An efficient and ideal I/O path means the data flows from clients to storage servers without being jammed at any places, which is then a key to deliver optimized I/O performance in PFSes. To achieve optimal I/O performance, the parallel file systems must have proper settings on multiple PFS parameters that control how data flows in the I/O path. Although carefully picked, the default settings often fail to deliver optimal performance for diverse and changing workloads.
To effectively tune I/O path-relevant parameters for parallel file systems, we look for several key properties: adaptive, timely, and flexible. The tuning framework should adjust parameters adaptively when workloads change and respond to runtime variables such as I/O contentions. The tuning decision delivery needs to be fast due to the short burst nature of the workloads. The tunable parameters should also be modifiable dynamically. The framework must accommodate the flexibility to tune different clients differently. Due to the limited I/O, computation, and communication resources for performance tuning, the framework should also avoid the expensive probing or profiling of the HPC environment.
**Design of IOPPathTune.** In this study, we propose IOPathTune, a new tuning framework designed to tune Lustre [1] I/O Path online from the client side adaptively. We focus on two parameters after closely investigating the Lustre I/O path: max_pages_per_rpc (maximum number of pages in a single RPC) and max_rpcs_in_flight (maximum number of RPCs in flight). Both of them belong to the Lustre Portal RPC (PtlRPC) subsystem. They are dynamically tunable, take immediate effect, and control the I/O flow for each Lustre Client. These important features align well with our goals. Due to these two parameters, the RPCs are formed and transferred in a structured way, helping us avoid the characterization of workloads.
One critical design choice of IOPathTune is to avoid probing storage servers or other compute nodes. We design it to solely depend on the statistics collected by the PFS client library. Our tuning algorithm proceeds with tuning every ten seconds, and in each turn, it chooses to tune either of the two parameters alternately as illustrated in Figure 1. The tuning action consists of either multiplying or dividing the parameter value by two each time, much like the primary approach of TCP congestion control.
Specifically, during tuning, we calculate different metrics along the I/O path. The metrics we derive are: how much data is in the dirty page cache, how fast the pages are getting cached, how quickly the client can generate RPCs, and what speed the client is achieving while transferring RPCs for the last observation period. By comparing these metrics, if we observe that our last tuning action improves the bandwidth, we reciprocate the previous tuning action. Otherwise, we do the opposite of the last tuning action. We also observe whether I/O contentions are developing or not based on these metrics. In the case of I/O contentions, we adopt a conservative approach: assign blame on our previous tuning action and revert it.
**Evaluations.** To test the efficacy of IOPathTune, we leverage CloudLab [2] to conduct the evaluations. Our HPC cluster consists of one machine for the MGS/MDS, four for OSSes, and five for clients (plus an extra machine when running CAPES). Every machine has two Intel Xeon Silver 4114 10-core CPUs at 2.20 GHz, 192GB ECC DDR4-2666 memory, and one Intel DC S3500 480 GB 6G SATA SSD. Each OSS hosts two 200GB OSTs mounted on SSD. A dual-port Intel X520-DA2 10Gb NIC provides the network. We set up CentOS 7 as the operating system and Lustre 2.12.8 as the primary file system on the cluster. We tested 20 different Filebench [4] workloads with varying I/O patterns (random or sequential), I/O operations (read or write), I/O request sizes, numbers of processes and threads, and other characteristics. We ran three different evaluations to check how IOPathTune handles different scenarios.

Figure 1: Heuristic Tuning Approach
The first test consisted of standalone workload executions from a single client. Across all workloads we observed performance that was either on par with (or slightly below) or better than the default configuration. Table 1 shows some of the largest improvements: 231%, 113%, and 96% for the Filebench fivestreamwriternd, whole-file read-write, and sequential read-write workloads, respectively. Our second test was dynamic: we changed the workload every 300 seconds, six times within a single run, and repeated this for five runs, each with a different combination of six workloads. We observed improvements consistent with the standalone evaluation, indicating that our algorithm quickly catches up and converges to better configurations.
Our final test executed different workloads from different clients at the same time. We compared IOPathTune's performance against the default configuration and against CAPES [3]. As shown in Table 2, IOPathTune improved the total bandwidth of the cluster by 129.30% over the default execution and by 89.57% over the CAPES execution. These findings show that IOPathTune can independently tune parameters on different clients to achieve better performance.
**Conclusions.** Our study has shown how to perform adaptive online parameter tuning without characterizing the workloads, without expensive profiling, and without communicating with other machines. We hope this approach will draw more attention to inexpensive yet effective solutions in systems research. In the future, we would like to scale our algorithm to tune more parameters following this heuristic approach. We would also like to test it in real-world HPC facilities to observe how much I/O performance improvement the solution brings.
**Acknowledgements.** This research is partially supported by NSF awards CNS-1817089 and CNS-2008265.
## CCS Concepts
* **Computing methodologies \(\rightarrow\)**_Shared memory algorithms._
|
2305.05804 | A step towards the tensorization of Sobolev spaces | We prove that Sobolev spaces on Cartesian and warped products of metric
spaces tensorize, only requiring that one of the factors is a doubling space
supporting a Poincaré inequality. | Silvia Ghinassi, Vikram Giri, Elisa Negrini | 2023-05-09T23:26:32Z | http://arxiv.org/abs/2305.05804v2 | # Tensorization of Sobolev spaces
###### Abstract.
We prove that Sobolev spaces on Cartesian and warped products of metric spaces tensorize, when one of the factors is a doubling space supporting a Poincaré inequality.
Key words and phrases: Sobolev space, metric space, doubling, Cartesian product, warped product. 2010 Mathematics Subject Classification: Primary: 53C23. Secondary: 46E35, 51F30. E.N. is supported by the Simons Postdoctoral program at IPAM and NSF DMS 1925919. S.G. is partially supported by NSF DMS-FRG-1853993. V.G. has been supported by NSF DMS-FRG-1854344. The project started while the authors were participating in the AMS MRC 2020 on Analysis in Metric Spaces. We are grateful to Nicola Gigli, Nageswari Shanmugalingam, and Luca Capogna for their helpful guidance. We are also grateful to Angela Wu for useful insights in the early stages of the project.
## 1. Introduction
## 2. Preliminaries

**Definition 2.1** (Absolutely continuous curves).: Given \(p\geq 1\), a curve \(\gamma\colon[0,1]\to X\) belongs to \(AC^{p}([0,1],X)\) if there exists \(G\in L^{p}([0,1])\) such that
\[d(\gamma_{s},\gamma_{t})\leq\int_{s}^{t}G(r)\,dr\quad\forall t,s\in[0,1],\;s<t \tag{2.1}\]
For \(p=1\) the space \(AC^{1}([0,1],X)\) is denoted \(AC([0,1],X)\) and it is the space of absolutely continuous curves.
**Theorem 2.2** (Theorem 1.1.2 in [1] ).: _For \(\gamma\in AC([0,1],X)\) there exists an a.e. minimal function \(G\) which satisfies (2.1) called the metric derivative which can be computed for a.e. \(t\in[0,1]\) as_
\[|\dot{\gamma}_{t}|:=\lim_{s\to t}\frac{d(\gamma_{t},\gamma_{s})}{|s-t|}\]
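For instance, if \(X=\mathbb{R}^{n}\) with the Euclidean distance and \(\gamma\) is continuously differentiable, the metric derivative coincides with the norm of the classical derivative:

\[|\dot{\gamma}_{t}|=\lim_{s\to t}\frac{|\gamma_{s}-\gamma_{t}|}{|s-t|}=|\gamma^{\prime}(t)|\qquad\text{for every }t\in[0,1].\]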
**Proposition 2.3** ([1]).: _The length of a curve \(\gamma\in AC([0,1],X)\) is given by:_
\[l[\gamma]:=\int_{0}^{1}|\dot{\gamma}_{t}|\,dt\]
_If \((X,d)\) is a length space then for any \(x,y\in X\)_
\[\mathrm{d}(x,y)=\inf\bigg{\{}\int_{0}^{1}|\dot{\gamma}_{t}|\,dt;\quad\gamma \in AC([0,1],X)\text{ connecting }x\text{ and }y\ \bigg{\}}\]
**Definition 2.4** (Metric doubling).: Given a metric measure space \((X,d,\mu)\), we say that \(X\) is _metric doubling_ if there exists a constant \(C_{X}>1\) such that, for all \(r>0\) every ball of radius \(r\) can be covered by \(C_{X}\) balls of radius \(r/2\).
_Remark 2.5_.: There is another, closely related, notion of doubling spaces; a space is said to be _measure doubling_ if there exists a constant \(C_{X}>1\) such that, for all \(x\in X\) and \(r>0\) we have \(\mu(B(x,2r))\leq C_{X}\mu(B(x,r))\). Every metric space that carries a doubling measure is automatically metric doubling, and every complete metric doubling space is also measure doubling. For a proof see [14]. Because we will only be dealing with complete (and separable) metric spaces, we will simply refer to spaces as _doubling_, as the two notions are equivalent in this setting (and the relative constants are quantitatively related).
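For instance, \(\mathbb{R}^{n}\) with the Euclidean distance and the Lebesgue measure \(\mathcal{L}^{n}\) is measure doubling with constant \(C_{X}=2^{n}\), since

\[\mathcal{L}^{n}(B(x,2r))=2^{n}\,\mathcal{L}^{n}(B(x,r))\qquad\text{for every }x\in\mathbb{R}^{n},\ r>0,\]

and, being complete, it is therefore also metric doubling with a constant depending only on \(n\).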
**Definition 2.6** (Local and global Lipschitz constants).: Given \(f\colon X\to[0,+\infty]\), the local Lipschitz constant of \(f\) is the function \(\mathrm{lip}(f):X\to[0,+\infty]\) defined as
\[\mathrm{lip}(f)(x)=\begin{cases}\limsup_{y\to x}\frac{|f(x)-f(y)|}{ \mathrm{d}_{X}(x,y)}&\quad\text{if $x$ is not isolated}\\ 0&\quad\text{otherwise}\end{cases} \tag{2.2}\]
Analogously, the global Lipschitz constant is defined as
\[\mathrm{Lip}(f)=\limsup_{y\neq x}\frac{|f(x)-f(y)|}{\mathrm{d}_{X}(x,y)}, \tag{2.3}\]
and if \(X\) is a length space, \(\mathrm{Lip}(f)=\sup_{x}\mathrm{lip}(f)(x)\).
**Definition 2.7** (Test plan).: Let \((X,d,m)\) be a metric measure space and \(\pi\in\mathcal{P}(C([0,1],X))\). \(\pi\) is said to have bounded compression if there exists a constant \(C>0\) such that for all \(t\in[0,1]\)
\[(e_{t})_{\#}\pi\leq Cm\]
\(\pi\) is said to be a test plan if it has bounded compression, it is concentrated on \(AC^{2}([0,1],X)\) and
\[\int_{0}^{1}\;\int\;|\dot{\gamma}_{t}|^{2}\;d\pi(\gamma)\;dt<+\infty\]
**Definition 2.8** (Sobolev class).: Let \((X,d,m)\) be a metric measure space. A Borel function \(f:X\to\mathbb{R}\) belongs to the Sobolev class \(S^{2}(X,d,m)\) (respectively \(S^{2}_{loc}(X,d,m)\) ) if there exists a non-negative function \(G\in L^{2}(X,m)\) (respectively \(G\in L^{2}_{loc}(X,m)\)) such that for all test plans \(\pi\)
\[\int\;|f(\gamma_{1})-f(\gamma_{0})|\;d\pi(\gamma)\leq\int\;\int_{0}^{1}G( \gamma_{s})|\dot{\gamma}_{s}|\;ds\;d\pi(\gamma)\]
In this case the function \(G\) is called a \(2\)-weak upper gradient of \(f\), or simply a weak upper gradient of \(f\).
_Remark 2.9_.: Among all weak upper gradients of \(f\) there exists a minimal function \(G\) in the \(m\)-a.e. sense. Such minimal function is called minimal weak upper gradient and we denote it by \(|Df|\). Notice that if \(f\) is Lipschitz, then \(|Df|\leq\mathrm{lip}(f)\)\(m\)-a.e. since \(\mathrm{lip}(f)\) is a weak upper gradient of \(f\).
We will use that minimal weak upper gradients are lower semicontinuous in the following sense: if \(f_{n}\in S^{2}(X,d,m)\) is a sequence converging \(m\)-a.e. to some \(f\) such that the sequence given by the \(|Df_{n}|\)'s is bounded in \(L^{2}(X,m)\), then \(f\in S^{2}(X,d,m)\) and for every \(G\) that is an \(L^{2}\)-weak limit of some subsequence of \(|Df_{n}|\) we have \(|Df|\leq G\). Finally, we will later use the fact that the space \(S^{2}_{loc}(X,d,m)\cap L^{\infty}_{loc}(X,d,m)\) is an algebra and for all \(f,g\in S^{2}_{loc}(X,d,m)\cap L^{\infty}_{loc}(X,d,m)\) the following inequality holds:
\[|D(fg)|\leq|f||Dg|+|g||Df|\quad m\text{-a.e.}. \tag{2.4}\]
For additional details on the properties of minimal weak upper gradients, see [1].
**Definition 2.10** (Sobolev space).: The Sobolev space \(W^{1,2}(X,d,m)\) is defined as
\[W^{1,2}(X,d,m):=S^{2}(X,d,m)\cap L^{2}(X,m).\]
\(W^{1,2}(X,d,m)\) endowed with the norm
\[\|f\|_{W^{1,2}(X,d,m)}:=\|f\|_{L^{2}(X,m)}+\||Df|\|_{L^{2}(X,m)}\]
is a Banach space, but in general it is not a Hilbert space. If \(W^{1,2}(X,d,m)\) is a Hilbert space, we say that \((X,d,m)\) is infinitesimally Hilbertian.
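A standard example: if \(X=\mathbb{R}^{n}\) with the Euclidean distance and the Lebesgue measure, then \(|Df|=|\nabla f|\) \(m\)-a.e. and \(W^{1,2}(X,d,m)\) is the classical Sobolev space \(H^{1}(\mathbb{R}^{n})\); with the equivalent norm \(\left(\|f\|_{L^{2}}^{2}+\||Df|\|_{L^{2}}^{2}\right)^{1/2}\) it satisfies the parallelogram identity

\[\|f+g\|^{2}+\|f-g\|^{2}=2\|f\|^{2}+2\|g\|^{2},\]

so \(\mathbb{R}^{n}\) is infinitesimally Hilbertian. By contrast, endowing \(\mathbb{R}^{n}\) with a norm not induced by an inner product produces a space that is not infinitesimally Hilbertian.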
**Lemma 2.11** (Density in energy of Lipschitz functions, [1]).: _Let \(Y\) be a complete and separable metric measure space, and \(f\in W^{1,2}(Y)\). Then there exists a sequence of Lipschitz functions \(f_{n}\) that converges to \(f\) in \(L^{2}\) and such that \(\mathrm{lip}(f_{n})\) converges in \(L^{2}\) to \(|Df|\)._
_Assumption 2.12_.: We will have, unless otherwise specified, the following set of assumptions for the metric measure space \((X,\mathrm{d}_{X},m_{X})\):
* \((X,\mathrm{d}_{X})\) is a complete and separable length space,
* \(m_{X}\) is a non-negative Borel measure with respect to \(\mathrm{d}_{X}\) and it is finite on bounded sets,
* \(\mathrm{supp}(m_{X})=X\).
In the following we may denote this space simply by \(X\).
_Assumption 2.13_.: We will have, unless otherwise specified, the following set of assumptions for the metric measure space \((Y,\mathrm{d}_{Y},m_{Y})\):
* \((Y,\mathrm{d}_{Y},m_{Y})\) is complete, separable, and \(C_{Y}\)-doubling length space,
* \(m_{Y}\) is a non-negative Borel measure with respect to \(\mathrm{d}_{Y}\) and it is finite on bounded sets,
* \(\mathrm{supp}(m_{Y})=Y\),
* \(Y\) supports a \((2,2)\)-Poincaré inequality, that is, for every \(r>0\) there exist constants \(\lambda\geq 1\) and \(C_{P}\) such that for any metric ball \(B\subset Y\) of radius smaller than \(r\) we have \[\fint_{B}|u-u_{B}|^{2}\,dm_{Y}\leq C_{P}\,\mathrm{rad}(B)^{2}\fint_{\lambda B}|g|^{2}\,dm_{Y},\] where \(g\) is any weak upper gradient of \(u\), and \(u_{B}=\fint_{B}u\,dm_{Y}\).
In the following we may denote this space simply by \(Y\).
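To fix ideas, \(Y=\mathbb{R}^{n}\) with the Euclidean distance and the Lebesgue measure satisfies all of the above: it is doubling with a constant depending only on \(n\), and the classical Poincaré-Wirtinger inequality on balls gives the \((2,2)\)-Poincaré inequality with \(\lambda=1\),

\[\fint_{B}|u-u_{B}|^{2}\,dx\leq C(n)\,\operatorname{rad}(B)^{2}\fint_{B}|\nabla u|^{2}\,dx.\]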
_Remark 2.14_.: The most restrictive assumption (on \(Y\)) is the existence of a Poincaré inequality. This fact will be used exclusively to obtain (3.23) in Lemma 3.9.
## 3. Cartesian products
The product space \(X\times Y\) is given the Euclidean product metric
\[\mathrm{d}_{X\times Y}((x,t),(y,s)):=\sqrt{\mathrm{d}_{X}(x,y)^{2}+\mathrm{d}_ {Y}(t,s)^{2}} \tag{3.1}\]
and the measure on it is the usual product measure and is denoted simply by \(m\).
**Definition 3.1** (Beppo Levi space).: The Beppo-Levi space \(\mathsf{BL}(X,Y)\) is the space of functions \(f\in L^{2}(X\times Y;\mathbb{R})\) such that
1. \(f(x,\cdot)\in W^{1,2}(Y)\) for \(m_{X}\) - a.e. \(x\)
2. \(f(\cdot,t)\in W^{1,2}(X)\) for \(m_{Y}\) - a.e. \(t\)
3. the function (3.2) \[|Df|_{\mathsf{BL}}(x,t):=\sqrt{|Df(x,\cdot)|_{Y}(t)^{2}+|Df(\cdot,t)|_{X}(x)^{ 2}}\] belongs to \(L^{2}(X\times Y;\mathbb{R})\)
with the Beppo-Levi norm
\[\|f\|_{\mathsf{BL}}:=\sqrt{\|f\|_{L^{2}}^{2}+\||Df|_{\mathsf{BL}}\|_{L^{2}}^{2}} \tag{3.3}\]
We will also write \(|Df(x,\cdot)|_{Y}(t)\) as \(|\partial f/\partial t|(x,t)\) and \(|Df(\cdot,t)|_{X}(x)\) as \(|\partial f/\partial x|(x,t)\).
Our main theorem for this section is:
**Theorem 3.2**.: _The sets \(W^{1,2}(X\times Y)\) and \(\mathsf{BL}(X,Y)\) coincide and for every \(f\in W^{1,2}(X\times Y)=\mathsf{BL}(X,Y)\) we have_
\[|Df|_{\mathsf{BL}}\leq|Df|_{X\times Y}\leq C_{0}|Df|_{\mathsf{BL}}\quad m_{X \times Y}\text{-a.e.}\]
As in the interval case, we only need to show \(\mathsf{BL}(X,Y)\subset W^{1,2}(X\times Y)\) as the other inclusion has already been shown in [1]:
**Proposition 3.3** (Prop. 6.18 in [1]).: _We have \(W^{1,2}(X\times Y)\subset\mathsf{BL}(X\times Y)\) and_
\[\int_{X\times Y}|Df|_{\mathsf{BL}}^{2}\leq\int_{X\times Y}|Df|_{X\times Y}^{2}. \tag{3.4}\]
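As a sanity check, not needed in what follows, consider the model case \(X=Y=\mathbb{R}\) with the Lebesgue measure, where both objects can be computed explicitly: for \(f\in W^{1,2}(\mathbb{R}^{2})\),

\[|Df|_{\mathsf{BL}}=\sqrt{|\partial f/\partial x|^{2}+|\partial f/\partial t|^{2}}=|\nabla f|=|Df|_{X\times Y}\qquad m\text{-a.e.},\]

so both inequalities of Theorem 3.2 hold with equality in this case; the constant \(C_{0}\) only reflects the doubling constant of \(Y\).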
To prove our main theorem we will need the following lemmas.
**Lemma 3.4**.: _Let \(N>0\) be a fixed natural number. Let \(f:X\times Y\to\mathbb{R}\) be of the form \(f(x,t)=\sum_{i=1}^{N}h_{i}(t)g_{i}(x)\) where \(g_{i}\in\operatorname{Lip}(X)\) and \(h_{i}\in\operatorname{Lip}(Y)\) for all \(1\leq i\leq N\). Then_
\[\operatorname{lip}_{X\times Y}(f)^{2}(x,t)\leq 2\operatorname{lip}_{X}(f(\cdot,t))^{2}(x)+\operatorname{lip}_{Y}(f(x,\cdot))^{2}(t) \tag{3.5}\]
_for every \((x,t)\in X\times Y\)_
This lemma replaces Lemma 3.3 in [1], the key difference being the use of the triangle inequality instead of Cauchy-Schwarz.
Proof.: We have
\[\operatorname{lip}_{X\times Y}(f)^{2}(x,t)=\limsup_{(y,s)\to(x,t)} \frac{|f(y,s)-f(x,t)|^{2}}{\operatorname{d}_{X\times Y}((y,s),(x,t))^{2}}\] \[\leq\limsup_{(y,s)\to(x,t)}\frac{1}{\operatorname{d}_{X\times Y} ((y,s),(x,t))^{2}}\left(\operatorname{d}_{X}(y,x)^{2}\frac{|f(y,s)-f(x,s)|^{2 }}{\operatorname{d}_{X}(y,x)^{2}}+\operatorname{d}_{Y}(s,t)^{2}\frac{|f(x,s) -f(x,t)|^{2}}{\operatorname{d}_{Y}(s,t)^{2}}\right)\] \[\leq\limsup_{(y,s)\to(x,t)}\frac{\left|\sum_{i=1}^{N}h_{i}(s)(g_{ i}(y)-g_{i}(x))\right|^{2}}{\operatorname{d}_{X}(y,x)^{2}}+\limsup_{(y,s)\to(x,t)} \frac{\left|\sum_{i=1}^{N}(h_{i}(s)-h_{i}(t))g_{i}(x)\right|^{2}}{\operatorname {d}_{Y}(s,t)^{2}}\] \[\leq\limsup_{(y,s)\to(x,t)}\frac{\left|\sum_{i=1}^{N}h_{i}(t)(g_{ i}(y)-g_{i}(x))+(h_{i}(s)-h_{i}(t))(g_{i}(y)-g_{i}(x))\right|^{2}}{\operatorname{d }_{X}(x,y)^{2}}\] \[\qquad+\limsup_{(y,s)\to(x,t)}\frac{\left|\sum_{i=1}^{N}(h_{i}(s) -h_{i}(t))g_{i}(x)\right|^{2}}{\operatorname{d}_{Y}(s,t)^{2}}\] \[\leq\left(\limsup_{(y,s)\to(x,t)}\frac{\left|\sum_{i=1}^{N}h_{i}(t )(g_{i}(y)-g_{i}(x))\right|}{\operatorname{d}_{X}(x,y)}+\sum_{i=1}^{N}|h_{i}(s )-h_{i}(t)|\,\frac{|g_{i}(y)-g_{i}(x)|}{\operatorname{d}_{X}(y,x)}\right)^{2}\] \[\qquad+\limsup_{(y,s)\to(x,t)}\frac{\left|\sum_{i=1}^{N}(h_{i}(s) -h_{i}(t))g_{i}(x)\right|^{2}}{\operatorname{d}_{Y}(s,t)^{2}}\] \[\leq 2\limsup_{(y,s)\to(x,t)}\frac{\left|\sum_{i=1}^{N}h_{i}(t)(g_{ i}(y)-g_{i}(x))\right|^{2}}{\operatorname{d}_{X}(y,x)^{2}}+2\limsup_{(y,s)\to(x,t)} \sum_{i=1}^{N}|h_{i}(s)-h_{i}(t)|^{2}\,\frac{|g_{i}(y)-g_{i}(x)|^{2}}{ \operatorname{d}_{X}(x,y)^{2}}\] \[\qquad+\limsup_{(y,s)\to(x,t)}\frac{\left|\sum_{i=1}^{N}(h_{i}(s) -h_{i}(t))g_{i}(x)\right|^{2}}{\operatorname{d}_{Y}(s,t)^{2}}\] \[=2\operatorname{lip}_{X}(f(\cdot,t))^{2}(x)+\operatorname{lip}_{Y }(f(x,\cdot))^{2}(t), \tag{3.6}\]
where in the last step we used that \(h\) is continuous.
**Definition 3.5**.: The class \(\mathcal{A}\subset\operatorname{\mathsf{BL}}_{loc}(X\times Y)\) consists of functions \(f\) that can be written as \(f(x,t)=\sum_{i=1}^{M}h_{i}(t)g_{i}(x)\) for \(g_{i}\in W^{1,2}(X)\), for some _fixed_\(M\) such that \(M>C_{Y}^{3}\) and \(h_{i}\in\operatorname{Lip}(Y)\). The class \(\tilde{\mathcal{A}}\) will denote those functions that are locally in class \(\mathcal{A}\). Note that \(\tilde{\mathcal{A}}\subset S^{2}_{loc}(X\times Y)\) (see Proposition 2.6 in [1]).
In order to discretize the problem, we need to introduce a suitable partition on \(Y\). One option would be to use generalization of dyadic cubes in metric measure spaces (such as the ones constructed in [10]). However, since we will not be using the tree-like structure, we can work with a simpler construction, such as the one in the following lemma.
**Lemma 3.6** (Lemma 7.1 in [1]).: _For every \(k\in\mathbb{N}\) there exists a collection of open subsets of \(Y\), \(Q_{i,k}\) and points \(t_{i,k}\) (the "centers" of the "cubes"), and \(i\in I_{k}\), where \(I_{k}\) is a countable set, such that_
* \(m_{Y}\left(Y\setminus\bigcup_{i}Q_{i,k}\right)=0\) _for all_ \(k\in\mathbb{N}\)_;_
* _For every_ \(i,j\in I_{k}\) _either_ \(Q_{j,k}=Q_{i,k}\) _or_ \(Q_{j,k}\cap Q_{i,k}=\varnothing\) _(i.e. the_ \(Q_{i,k}\)_'s form a partition);_
* _if_ \(i\neq j\)_, then_ \(d_{Y}(t_{i,k},t_{j,k})>\frac{1}{k}\)_;_
* _each_ \(Q_{i,k}\) _is comparable to a ball centered at_ \(t_{i,k}\) _of radius roughly_ \(\frac{1}{k}\)_,_ \[B\left(t_{i,k},\frac{1}{3}\frac{1}{k}\right)\subset Q_{i,k}\subset B\left(t_{i,k},\frac{5}{4}\frac{1}{k}\right).\]
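When \(Y=\mathbb{R}\), one admissible choice (among many) is the family of half-open intervals

\[Q_{i,k}=\left[\frac{3i}{2k},\frac{3(i+1)}{2k}\right),\qquad t_{i,k}=\frac{3(2i+1)}{4k},\qquad i\in\mathbb{Z}:\]

consecutive centers are \(\frac{3}{2k}>\frac{1}{k}\) apart, and since each interval has half-length \(\frac{3}{4k}\) we have \(B\left(t_{i,k},\frac{1}{3}\frac{1}{k}\right)\subset Q_{i,k}\subset B\left(t_{i,k},\frac{5}{4}\frac{1}{k}\right)\).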
_Remark 3.7_.: We say that \(Q_{i,k}\) and \(Q_{j,k}\) are neighbors, and we write \(Q_{i,k}\sim Q_{j,k}\), if their distance is less than \(\frac{1}{k}\). If that is the case, then it must be that \(d_{Y}(t_{i,k},t_{j,k})\leq\frac{4}{k}\). Given \(Q_{i,k}\sim Q_{j,k}\), we have that their centers are at most \(\frac{10}{4}\frac{1}{k}+\frac{1}{k}=\frac{14}{4}\frac{1}{k}\leq\frac{4}{k}\) where \(\frac{5}{4k}\) comes from the fact that both cubes are contained in a ball with radius \(\frac{5}{4k}\) while the \(\frac{1}{k}\) is from the definition of \(Q_{i,k}\sim Q_{j,k}\). This implies that each \(Q_{i,k}\) has at most \(C_{Y}^{3}\) neighbors. In fact, we can cover \(B(t_{i,k},\frac{4}{k})\) with \(C_{Y}^{3}\) balls of radius \(\frac{1}{2}\frac{1}{k}\), but the condition \(d_{Y}(t_{i,k},t_{j,k})>\frac{1}{k}\) means each ball can only contain one of the \(t_{i,k}\)'s.
**Lemma 3.8**.: _There exists a constant \(C_{0}>0\) depending only on the doubling constant of \(Y\) such that for every \(f\in\tilde{\mathcal{A}}\), we have that_
\[|Df|_{\mathsf{BL}}\leq|Df|\leq C_{0}|Df|_{\mathsf{BL}}\]
_for \((m_{X}\times m_{Y})\)-a.e. \((x,t)\)._
Proof.: Since the statement is local in nature, it suffices to show it for \(f\in\mathcal{A}\) where \(f(x,t)=\sum_{l=1}^{M}h_{l}(t)g_{l}(x)\) and the \(h_{l}\)'s have compact support. Given the other inclusion, it suffices to prove that there exists a constant \(C_{0}\) such that \(|Df|^{2}\leq C_{0}^{2}|Df|^{2}_{\mathsf{BL}}\) for \(m\)-a.e. \((x,t)\). Thus it suffices to prove that for any \(F\subset Y\) and \(E\subset X\) Borel, we have that
\[\int_{E\times F}|Df|^{2}(x,t)\,dm(x,t)\leq C_{0}\int_{E\times F}|\partial f/ \partial t|^{2}(x,t)+|\partial f/\partial x|^{2}(x,t)\,dm_{Y}(t)\,dm_{X}(x). \tag{3.7}\]
For every \(k\in\mathbb{N}\), let \(\{\chi_{i,k}\}_{i\in I}\) be a partition of unity subordinate to the partition of \(Y\) by "cubes" \(Q_{i,k}\) as above. That is, \(\chi_{i,k}=1\) on \(B\left(t_{i,k},\frac{1}{3}\frac{1}{k}\right)\), and \(\operatorname{supp}\chi_{i,k}\subset B\left(t_{i,k},\frac{5}{4}\frac{1}{k}\right)\). Each \(\chi_{i,k}\) is \(c_{1}k\)-Lipschitz, where \(c_{1}=c_{1}(C_{Y})\). See for instance the construction in [11]. Now define
\[f_{k}(x,t):=\sum_{i\in I_{k}}\chi_{i,k}(t)f_{k,i}(x), \tag{3.8}\]
where \(f_{k,i}(x):=\fint_{Q_{i,k}}f(x,t)\,dm_{Y}(t)\). For a fixed \((x,t)\) at most \(C_{Y}^{3}\) terms in the sum are non-zero, and hence \(f_{k}\in\tilde{\mathcal{A}}\). Moreover,
\[\|f_{k,i}\|_{L^{2}(X\times Y)}^{2}\leq\int_{X\times Y}\fint_{Q_{ik}}|f(x,t)|^{2 }=\fint_{Q_{ik}}\int_{X\times Y}|f(x,t)|^{2}=\|f\|_{L^{2}(X\times Y)}^{2},\]
and so
\[\|f_{k}\|_{L^{2}(X\times Y)}^{2}=\left\|\sum_{i\in I_{k}}\chi_{i,k}(t)f_{k,i}( x)\right\|_{L^{2}(X\times Y)}^{2}\leq C_{Y}^{3}\sum_{i\in I_{k}}\|f_{k,i}\|_{L^{2}(X \times Y)}^{2}\leq C_{Y}^{6}\|f\|_{L^{2}(X\times Y)}^{2}.\]
Consequently, by the dominated convergence theorem, we get
\[f_{k}\to f\qquad\text{in}\quad L^{2}(X\times Y). \tag{3.9}\]
Since \(f_{k,i}\in W^{1,2}(X)\) we have a sequence of Lipschitz functions \(f_{k,i}^{n}\colon X\to\mathbb{R}\) such that \(f_{k,i}^{n}\to f_{k,i}\) in \(L^{2}(X,m_{X})\) and \(\operatorname{lip}_{X}(f_{k,i}^{n})\to|Df_{k,i}|_{X}\) in \(L^{2}(X,m_{X})\). Now define
\[F_{k}^{n}(x,t):=\sum_{i\in I_{k}}\chi_{i,k}(t)f_{k,i}^{n}(x). \tag{3.10}\]
Note that \(F_{k}^{n}\in\operatorname{Lip}(X\times Y)\cap\tilde{\mathcal{A}}\) and, since each \(\chi_{i,k}\) is continuous, \(\operatorname{lip}_{X}(F_{k}^{n})\to|\partial f_{k}/\partial x|\) in \(L^{2}(X\times Y)\). Now fix \(t\in Y\). Recall that \(\chi_{j,k}(t)=1-\sum_{i\neq j}\chi_{i,k}(t)\) for any choice of \(j\in I_{k}\), and so we have that, for any \(s\in F\),
\[F_{k}^{n}(x,t)-F_{k}^{n}(x,s)=\sum_{i\in I_{k}}\chi_{i,k}(t)f_{k,i}^{n}(x)-\sum_{i\in I_{k}}\chi_{i,k}(s)f_{k,i}^{n}(x)\] \[=\sum_{i\neq j}\chi_{i,k}(t)f_{k,i}^{n}(x)+\chi_{j,k}(t)f_{k,j}^{ n}(x)-\chi_{j,k}(s)f_{k,j}^{n}(x)-\sum_{i\neq j}\chi_{i,k}(s)f_{k,i}^{n}(x)\] \[=\sum_{i\neq j}\chi_{i,k}(t)f_{k,i}^{n}(x)-\sum_{i\neq j}\chi_{i, k}(t)f_{k,j}^{n}(x)+\sum_{i\neq j}\chi_{i,k}(s)f_{k,j}^{n}(x)-\sum_{i\neq j} \chi_{i,k}(s)f_{k,i}^{n}(x)\] \[=\sum_{i\neq j}\left(\chi_{i,k}(t)-\chi_{i,k}(s)\right)(f_{k,i}^{ n}(x)-f_{k,j}^{n}(x)). \tag{3.11}\]
Note that the sum above is finite, as only finitely many of the \(\chi_{i,k}\) are nonzero. Dividing by \(\mathrm{d}_{Y}(t,s)\) and letting \(s\to t\) we obtain

\[\begin{split}\operatorname{lip}_{Y}(F_{k}^{n}(x,\cdot))(t)& =\limsup_{s\to t}\left|\sum_{i\neq j}\frac{\chi_{i,k}(t)-\chi_{i, k}(s)}{\mathrm{d}_{Y}(t,s)}(f_{k,i}^{n}(x)-f_{k,j}^{n}(x))\right|\\ &\leq\sum_{\begin{subarray}{c}i\neq j\\ Q_{i,k}\sim Q_{j,k}\end{subarray}}\operatorname{Lip}_{Y}(\chi_{i,k})\left|f_{ k,i}^{n}(x)-f_{k,j}^{n}(x)\right|,\end{split} \tag{3.12}\]
Note that because \(Y\) is doubling the last sum is also finite. We define
\[\begin{split} g_{k}^{n}(x,t)&:=\sum_{ \begin{subarray}{c}i\neq j\\ Q_{i,k}\sim Q_{j,k}\end{subarray}}\operatorname{Lip}_{Y}(\chi_{i,k})\left|f_{ k,i}^{n}(x)-f_{k,j}^{n}(x)\right|\quad\text{for }t\in Q_{j,k}\\ g_{k}(x,t)&:=\sum_{\begin{subarray}{c}i\neq j\\ Q_{i,k}\sim Q_{j,k}\end{subarray}}\operatorname{Lip}_{Y}(\chi_{i,k})\left|f_{ k,i}(x)-f_{k,j}(x)\right|\quad\text{for }t\in Q_{j,k}\end{split} \tag{3.13}\]
Note that we have \(g_{k}^{n}\to g_{k}\) in \(L^{2}(X\times Y)\).
Next, we want to show that the \(g_{k}\)'s are good approximations for the \(Y\)-gradient of \(f\). Observe that if \(Q_{i,k}\) and \(Q_{j,k}\) are neighbors, and \(t\in Q_{i,k}\) and \(y\in Q_{j,k}\) then \(\operatorname{d}(t,y)\leq(10/4+10/4+1)\frac{1}{k}=\frac{6}{k}\). We have
\[\lim_{k\to\infty}g_{k}(x,t) =\lim_{k\to\infty}\sum_{\begin{subarray}{c}i\neq j\\ Q_{i,k}\sim Q_{j,k}\end{subarray}}\operatorname{Lip}_{Y}(\chi_{i,k})\left|f_{k,i}(x)-f_{k,j}(x)\right|\] \[=\lim_{k\to\infty}\sum_{\begin{subarray}{c}i\neq j\\ Q_{i,k}\sim Q_{j,k}\end{subarray}}\operatorname{Lip}_{Y}(\chi_{i,k})\left| \sum_{l=1}^{M}\left[\fint_{Q_{i,k}}h_{l}(t)\,dt-\fint_{Q_{j,k}}h_{l}(t)\,dt \right]g_{l}(x)\right|\] \[\leq c_{1}\lim_{k\to\infty}\sum_{\begin{subarray}{c}i\neq j\\ Q_{i,k}\sim Q_{j,k}\end{subarray}}k\left|\sum_{l=1}^{M}(h_{l}(\tilde{t}_{i,k}) -h_{l}(\tilde{t}_{j,k}))g_{l}(x)\right|\] \[=c_{1}\lim_{k\to\infty}\sum_{\begin{subarray}{c}i\neq j\\ Q_{i,k}\sim Q_{j,k}\end{subarray}}k\operatorname{d}(\tilde{t}_{i,k},\tilde{t}_ {j,k})\left|\sum_{l=1}^{M}\frac{h_{l}(\tilde{t}_{i,k})-h_{l}(\tilde{t}_{j,k})} {\operatorname{d}(\tilde{t}_{i,k},\tilde{t}_{j,k})}g_{l}(x)\right|\] \[\leq Cc_{1}\lim_{k\to\infty}\sum_{\begin{subarray}{c}i\neq j\\ Q_{i,k}\sim Q_{j,k}\end{subarray}}\left|\sum_{l=1}^{M}\frac{h_{l}(\tilde{t}_{i, k})-h_{l}(\tilde{t}_{j,k})}{\operatorname{d}(\tilde{t}_{i,k},\tilde{t}_{j,k})}g_{l}(x)\right|\] \[\leq Cc_{1}C_{Y}^{3}|\partial f/\partial t|. \tag{3.14}\]
where \(C\) is a geometric constant such that \(k\operatorname{d}(\tilde{t}_{i,k},\tilde{t}_{j,k})<C\) for neighbouring cubes \(Q_{i,k}\sim Q_{j,k}\). We have used that, by construction, \(\operatorname{Lip}_{Y}(\chi_{i,k})=c_{1}k\).
By Lemma 3.4 we have
\[|\operatorname{lip}_{X\times Y}(F_{k}^{n})|^{2}(x,t)\leq 2|\operatorname{lip}_{ X}(F_{k}^{n}(\cdot,t))|^{2}(x)+|\operatorname{lip}_{Y}(F_{k}^{n}(x,\cdot))|^{2}(t) \qquad m\text{-a.e. }(x,t). \tag{3.15}\]
Now the above with the lower semicontinuity of minimal weak upper gradients gives us
\[\int_{E\times F}|Df|^{2}(x,t)\,dm(x,t) \leq\limsup_{k\to\infty}\int_{F\times E}|Df_{k}|^{2}(x,t)\,dm(x,t)\] \[\leq\limsup_{k\to\infty}\lim_{n\to\infty}\int_{E\times F} \operatorname{lip}_{X\times Y}(F_{k}^{n})^{2}(x,t)\,dm(x,t)\] \[\leq\limsup_{k\to\infty}\lim_{n\to\infty}\int_{E\times F} \operatorname{lip}_{Y}(F_{k}^{n})^{2}(x,t)\,dm(x,t)\] \[\qquad+2\limsup_{k\to\infty}\lim_{n\to\infty}\int_{E\times F} \operatorname{lip}_{X}(F_{k}^{n})^{2}(x,t)\,dm(x,t).\]
Then, to finish the proof, it is enough to prove the following two inequalities:

\[2\limsup_{k\to\infty}\lim_{n\to\infty}\int_{E\times F}\operatorname{lip}_{X}(F _{k}^{n})^{2}(x,t)\,dm(x,t)\leq C_{0}\int_{E\times F}|\partial f/\partial x|^{2 }(x,t)\,dm_{Y}(t)\,dm_{X}(x); \tag{3.16}\]

\[\limsup_{k\to\infty}\lim_{n\to\infty}\int_{E\times F}\operatorname{lip}_{Y}(F_{k}^{ n})^{2}(x,t)\,dm(x,t)\leq C_{0}\int_{E\times F}|\partial f/\partial t|^{2}(x,t)\,dm_{Y}(t) \,dm_{X}(x). \tag{3.17}\]
To prove (3.16),
\[\limsup_{k\to\infty}\lim_{n\to\infty}\int_{E\times F}\operatorname {lip}_{X}(F_{k}^{n})^{2}(x,t)\,dm(x,t)\] \[\leq C_{Y}^{3}\limsup_{k\to\infty}\int_{F}\left(\sum_{i\in I_{k}} \chi_{i,k}(t)\lim_{n\to\infty}\int_{E}(\operatorname{lip}_{X}f_{k,i}^{n}(x))^{ 2}\,dm_{X}(x)\right)dm_{Y}(t)\] \[=C_{Y}^{3}\limsup_{k\to\infty}\int_{F}\left(\sum_{i\in I_{k}} \chi_{i,k}(t)\int_{E}|\partial f_{k,i}/\partial x|^{2}(x)\,dm_{X}(x)\right)dm _{Y}(t)\] \[\leq(C_{Y}^{3})^{2}\limsup_{k\to\infty}\int_{F}\left(\sum_{i\in I _{k}}\chi_{i,k}(t)\int_{E}\left(\fint_{Q_{ik}}|\partial f/\partial x|^{2}(x) \,ds\right)\,dm_{X}(x)\right)dm_{Y}(t)\] \[=C_{Y}^{6}\limsup_{k\to\infty}\int_{E}\int_{F}\left(\sum_{i\in I _{k}}\chi_{i,k}(t)\fint_{Q_{ik}}|\partial f/\partial x|^{2}(x)\,ds\right)\,dm (x,t) \tag{3.18}\]
Because \(Y=\bigcup_{j}Q_{j,k}\), and the sum over \(i\in I_{k}\) has a finite number of non-zero summands, we have
\[\int_{Y}\left(\sum_{i\in I_{k}}\chi_{i,k}(t)\fint_{Q_{ik}}| \partial f/\partial x|^{2}(x,s)\,dm_{Y}(s)\right)\,dm_{Y}(t)\] \[\leq\sum_{j\in I_{k}}\sum_{i\in I_{k}}\int_{Q_{j,k}}\chi_{i,k}(t) N_{i,k}(x)\,dm_{Y}(t), \tag{3.19}\]
where we have set
\[N_{i,k}(x)=\fint_{Q_{ik}}|\partial f/\partial x|^{2}(x,s)\,dm_{Y}(s).\]
The integral inside the two sums is nonzero only if \(Q_{i,k}\) and \(Q_{j,k}\) are neighbors, or \(i=j\). In the latter case we simply have
\[\sum_{i\in I_{k}}\int_{Q_{j,k}}\chi_{i,k}(t)N_{i,k}(x)\,dm_{Y}(t) \leq\sum_{i\in I_{k}}\int_{Q_{i,k}}N_{i,k}(x)\,dm_{Y}(t)\] \[=\sum_{i\in I_{k}}|Q_{i,k}|N_{i,k}(x)\] \[=\sum_{i\in I_{k}}|Q_{i,k}|\frac{1}{|Q_{i,k}|}\int_{Q_{i,k}}| \partial f/\partial x|^{2}(x,s)\,dm_{Y}(s)\] \[=\sum_{i\in I_{k}}\int_{Q_{i,k}}|\partial f/\partial x|^{2}(x,s) \,dm_{Y}(s)\] \[=\int_{Y}|\partial f/\partial x|^{2}(x,s)\,dm_{Y}(s). \tag{3.20}\]
If \(Q_{i,k}\) and \(Q_{j,k}\) are neighbors (and there are at most \(C_{Y}^{3}\) indices \(j\) such that \(Q_{j,k}\) is a neighbor of \(Q_{i,k}\)), we have
\[\sum_{j\in I_{k}}\sum_{i\in I_{k}}\int_{Q_{j,k}}\chi_{i,k}(t)N_{i,k }(x)\,dm_{Y}(t) =\sum_{\begin{subarray}{c}i\neq j\\ Q_{i,k}\sim Q_{j,k}\end{subarray}}\sum_{i\in I_{k}}|Q_{j,k}|N_{i,k}(x)\] \[=\sum_{\begin{subarray}{c}i\neq j\\ Q_{i,k}\sim Q_{j,k}\end{subarray}}\sum_{i\in I_{k}}|Q_{j,k}|\frac{1}{|Q_{i,k}|} \int_{Q_{i,k}}|\partial f/\partial x|^{2}(x,s)\,dm_{Y}(s)\] \[\leq C_{Y}^{2}C_{Y}^{3}\sum_{i\in I_{k}}\int_{Q_{i,k}}|\partial f /\partial x|^{2}(x,s)\,dm_{Y}(s)\] \[=C_{Y}^{5}\int_{Y}|\partial f/\partial x|^{2}(x,s)\,dm_{Y}(s), \tag{3.21}\]
where we used the fact that \(\frac{|Q_{j,k}|}{|Q_{i,k}|}\leq C_{Y}^{2}\) (because cubes are comparable to balls). This concludes the proof of (3.16).
To prove (3.17), recall that \(\operatorname{lip}_{Y}(F_{k}^{n})\leq g_{k}^{n}\), that \(g_{k}^{n}\to g_{k}\) in \(L^{2}(X\times Y)\), and that we have proved \(\lim_{k\to\infty}g_{k}(x,t)\leq 6c_{1}C_{Y}^{3}|\partial f/\partial t|\).
Then we have
\[\limsup_{k\to\infty}\lim_{n\to\infty}\int_{E\times F} \operatorname{lip}_{Y}(F_{k}^{n})^{2}(x,t)\,dm(x,t)\] \[\leq\limsup_{k\to\infty}\lim_{n\to\infty}\int_{E\times F}(g_{k}^ {n})^{2}(x,t)\,dm(x,t)\] \[\leq 6c_{1}C_{Y}^{3}\limsup_{k\to\infty}\int_{E\times F}(g_{k})^{ 2}(x,t)\,dm(x,t)\] \[\leq 6c_{1}C_{Y}^{3}\int_{E\times F}|\partial f/\partial t|^{2}\, dm(x,t)\]
By choosing \(C_{0}=\max\{C_{Y}^{6},12c_{1}C_{Y}^{3}\}\) the desired result holds.
**Lemma 3.9**.: _For any \(f\in\operatorname{\mathsf{BL}}(X,Y)\) there exists a sequence \((f_{k})\subset\tilde{\mathcal{A}}\cap\operatorname{\mathsf{BL}}(X,Y)\) with \(f_{k}\to f\) in \(L^{2}(X\times Y)\) and such that \(|Df_{k}|_{\operatorname{\mathsf{BL}}}\to|Df|_{\operatorname{\mathsf{BL}}}\) in \(L^{2}(X\times Y)\) as \(k\to\infty\)._
Proof.: With a standard truncation and diagonal argument, we can assume that the given \(f\) is bounded with bounded support. Given \(f\in\mathsf{BL}(X,Y)\), we define a sequence of functions \(f_{k}\) as in the previous lemma, that is
\[f_{k}(x,t)=\sum_{i\in I_{k}}\chi_{i,k}(t)f_{k,i}(x)\]
Observe that \(f_{k}\in\tilde{\mathcal{A}}\cap\operatorname{\mathsf{BL}}(X,Y)\) and that (3.9) says that \(f_{k}\to f\) in \(L^{2}(X\times Y)\).
It remains to show \(|Df_{k}|_{\operatorname{\mathsf{BL}}}\to|Df|_{\operatorname{\mathsf{BL}}}\). To show this notice that it suffices to show the following two inequalities for every \(k\in\mathbb{N}\),
\[\int_{X\times Y}|\partial f_{k}/\partial x|^{2}(x,t)\,dm(x,t) \leq\int_{X\times Y}|\partial f/\partial x|^{2}(x,t)\,dm_{Y}(t)\, dm_{X}(x); \tag{3.22}\] \[\int_{X\times Y}|\partial f_{k}/\partial t|^{2}(x,t)\,dm(x,t) \leq\int_{X\times Y}|\partial f/\partial t|^{2}(x,t)\,dm_{Y}(t)\, dm_{X}(x). \tag{3.23}\]
To prove (3.22), we first notice that by convexity and lower semi-continuity, we have
\[\int_{X}|\partial f_{i,k}/\partial x|^{2}(x)dm_{X}(x)\leq\int_{X}\fint_{Q_{i,k}}| \partial f/\partial x|^{2}(x,s)dm_{Y}(s)dm_{X}(x)\]
Also, we have
\[|\partial f_{k}/\partial x|^{2}(x,t) =\left|\frac{\partial}{\partial x}\left(\sum_{i\in I_{k}}\chi_{ i,k}(t)f_{i,k}(x)\right)\right|^{2}\] \[\leq\sum_{i\in I_{k}}\chi_{i,k}(t)|\partial f_{i,k}/\partial x|^ {2}(x)\]
Thus, we have
\[\int_{X\times Y}|\partial f_{k}/\partial x|^{2}(x,t)\,dm(x,t) =\int_{X}\int_{Y}|\partial f_{k}/\partial x|^{2}(x,t)\,dm_{Y}(t)\, dm_{X}(x)\] \[\leq\int_{X}\int_{Y}\sum_{i\in I_{k}}\chi_{i,k}(t)|\partial f_{i, k}/\partial x|^{2}(x)\,dm_{Y}(t)\,dm_{X}(x)\] \[\leq\int_{X}\sum_{i\in I_{k}}|Q_{i,k}||\partial f_{i,k}/\partial x |^{2}(x)\,dm_{X}(x)\] \[\leq\int_{X}\sum_{i\in I_{k}}|Q_{i,k}|\frac{1}{|Q_{i,k}|}\int_{Q_ {i,k}}|\partial f/\partial x|^{2}(x,s)\,dm_{Y}(s)\,dm_{X}(x)\] \[\leq C\int_{X\times Y}|\partial f/\partial x|^{2}(x,t)\,dm_{Y}(t) \,dm_{X}(x)\,.\]
It remains to show (3.23). Set \(B=B(t_{j,k},\frac{6}{k})\), and note that if \(Q_{i,k}\) and \(Q_{j,k}\) are neighbors, then they are both contained in \(B\). We have
\[\left|\fint_{Q_{i,k}}f(x,t)\,dm_{Y}(t)-\fint_{Q_{j,k}}f(x,t)\,dm_ {Y}(t)\right|^{2}\] \[\leq\left(\left|\fint_{Q_{i,k}}f(x,t)\,dm_{Y}(t)-\fint_{B}f(x,t) \,dm_{Y}(t)\right|^{2}+\left|\fint_{B}f(x,t)\,dm_{Y}(t)-\fint_{Q_{j,k}}f(x,t) \,dm_{Y}(t)\right|^{2}\right)\] \[\leq\left(\frac{1}{|Q_{i,k}|}\int_{Q_{i,k}}|f(x,t)-f_{B}|^{2}\,\, dm_{Y}(t)+\frac{1}{|Q_{j,k}|}\int_{Q_{j,k}}|f(x,t)-f_{B}|^{2}\,\,dm_{Y}(t)\right)\] \[\leq 2C_{Y}^{5}\fint_{B}|f(x,t)-f_{B}|^{2}\,\,dm_{Y}(t) \tag{3.24}\]
Hence, using the Poincaré inequality, we obtain
\[\int_{X\times Y} |\partial f_{k}/\partial t|^{2}(x,t)\,dm(x,t)\leq C_{Y}^{3}\int_{X} \sum_{j}\int_{Q_{j,k}}k^{2}\sum_{\begin{subarray}{c}i\neq j\\ Q_{i,k}\sim Q_{j,k}\end{subarray}}|f_{k,j}(x)-f_{k,i}(x)|^{2}\,dm_{Y}(t)\,dm_{X}(x)\] \[=C_{Y}^{3}\int_{X}\sum_{j}\int_{Q_{j,k}}\sum_{\begin{subarray}{c }i\neq j\\ Q_{i,k}\sim Q_{j,k}\end{subarray}}k^{2}\left|\fint_{Q_{i,k}}f(x,s)\,dm_{Y}(s)- \fint_{Q_{j,k}}f(x,s)\,dm_{Y}(s)\right|^{2}\,dm_{Y}(t)\,dm_{X}(x)\] \[\leq 2C_{Y}^{5}C_{Y}^{3}\int_{X}\sum_{j}\int_{Q_{j,k}}\sum_{ \begin{subarray}{c}i\neq j\\ Q_{i,k}\sim Q_{j,k}\end{subarray}}k^{2}\fint_{B}\left|f(x,s)-f_{B}\right|^{2} \,dm_{Y}(s)\,dm_{Y}(t)\,dm_{X}(x)\] \[\leq 2C_{Y}^{8}C_{P}\int_{X}\sum_{j}\int_{Q_{j,k}}\sum_{ \begin{subarray}{c}i\neq j\\ Q_{i,k}\sim Q_{j,k}\end{subarray}}k^{2}\operatorname{rad}(B)^{2}\fint_{ \lambda B}|\partial f/\partial s|^{2}\,dm_{Y}(s)\,dm_{Y}(t)\,dm_{X}(x)\] \[\leq 2\cdot 6^{2}C_{Y}^{8}C_{P}C_{Y}^{3}\int_{X}\sum_{j}\int_{Q_{ j,k}}\fint_{\lambda B}|\partial f/\partial s|^{2}\,dm_{Y}(s)\,dm_{Y}(t)\,dm_{X}(x)\] \[\leq 2\cdot 6^{2}C_{Y}^{11}C_{P}\int_{X}\sum_{j}\fint_{\lambda B} \int_{Q_{j,k}}|\partial f/\partial t|^{2}\,dm_{Y}(t)\,dm_{Y}(s)\,dm_{X}(x)\] \[\leq 2\cdot 6^{2}C_{Y}^{11}C_{P}\int_{X\times Y}|\partial f/ \partial t|^{2}\,dm(x,t), \tag{3.25}\]
which concludes the proof.
We are now ready to prove the main theorem.
Proof of Theorem 3.2.: We already observed that \(W^{1,2}(X\times Y)\subset\mathsf{BL}(X\times Y)\). Let \(f\in\mathsf{BL}(X\times Y)\) and let \(\{f_{k}\}\subset\mathsf{BL}(X\times Y)\cap\tilde{\mathcal{A}}\) be as in Lemma 3.9. Lemma 3.8 says that

\[|Df_{k}|_{X\times Y}\leq C_{0}|Df_{k}|_{\mathsf{BL}}.\]

The right hand side converges to \(C_{0}|Df|_{\mathsf{BL}}\) in \(L^{2}(X\times Y)\) and, because \(f_{k}\to f\) in \(L^{2}\), the lower semicontinuity of weak upper gradients implies that \(f\in W^{1,2}(X\times Y)\) and

\[|Df|_{X\times Y}\leq C_{0}|Df|_{\mathsf{BL}},\]

which together with Proposition 3.3 concludes the proof.
## 4. Warped products
For this section, we still have Assumptions 2.12 and 2.13.
**Definition 4.1**.: Let \((X,d_{X})\) and \((Y,d_{Y})\) be length spaces, and \(w_{d}\colon Y\to[0,\infty)\) a continuous function. Let \(\gamma=(\gamma^{X},\gamma^{Y})\) be a curve such that \(\gamma^{X}\) and \(\gamma^{Y}\) are absolutely continuous. Then the \(w_{d}\)-length of \(\gamma\) is defined as
\[\ell_{w}(\gamma)=\lim_{\tau}\sum_{i=1}^{n}\sqrt{d_{Y}^{2}(\gamma_{t_{i-1}}^{Y},\gamma_{t_{i}}^{Y})+w_{d}^{2}(\gamma_{t_{i-1}}^{Y})\,d_{X}^{2}(\gamma_{t_{i-1}}^{X},\gamma_{t_{i}}^{X})},\]
where \(\tau\) ranges over the partitions \(0=t_{0}<\dots<t_{n}=1\) of \([0,1]\) and the limit is taken with respect to the refinement ordering of partitions.
The limit exists and
\[\ell_{w}(\gamma)=\int_{0}^{1}\sqrt{|\dot{\gamma}_{t}^{Y}|^{2}+w_{d}^{2}(\gamma_{t} ^{Y})|\dot{\gamma}_{t}^{X}|^{2}}\,dt\]
**Definition 4.2**.: Let \((X,d_{X})\) and \((Y,d_{Y})\) be length spaces, and \(w_{d}\colon Y\to[0,\infty)\) a continuous function. We define a pseudo-metric \(\mathrm{d}_{w}\) on the space \(X\times Y\) by
\[\mathrm{d}_{w}(p,q)=\inf\{\ell_{w}(\gamma)\mid\gamma^{X}\in AC([0,1],X),\, \gamma^{Y}\in AC([0,1],Y),\,\text{and}\,\,\gamma_{0}=p,\gamma_{1}=q\},\]
for any \(p,q\in X\times Y\).
The pseudo-metric induces an equivalence relation on \(X\times Y\) and hence a metric on the quotient. We denote (with a slight abuse of notation) the completion of this quotient by \((X\times_{w}Y,d_{w})\). If both \(X\) and \(Y\) are separable, so is \(X\times_{w}Y\). Let \(\pi\colon X\times Y\to X\times_{w}Y\) be the quotient map.
**Definition 4.3**.: Let \((X,d_{X},m_{X})\) and \((Y,d_{Y},m_{Y})\) be complete and separable length metric spaces equipped with non-negative Radon measures. Assume that \(m_{X}(X)<\infty\) and let \(w_{d},w_{m}\colon Y\to[0,\infty)\) be continuous functions. Then the warped product \((X\times_{w}Y,d_{w})\) is defined as above and the Radon measure \(m_{w}\) is defined as
\[m_{w}=\pi_{*}((w_{m}m_{Y})\times m_{X}).\]
Note that the assumption that \(m_{X}\) is a finite measure is needed to ensure that \(m_{w}\) is Radon (it is always Borel).
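To fix ideas, take \(X=S^{1}\) (the circle of length \(2\pi\) with the arc-length distance and its length measure), \(Y=[0,\infty)\), and \(w_{d}(r)=w_{m}(r)=r\). For a curve \(\gamma=(\gamma^{X},\gamma^{Y})\) the \(w_{d}\)-length

\[\ell_{w}(\gamma)=\int_{0}^{1}\sqrt{|\dot{\gamma}_{t}^{Y}|^{2}+(\gamma_{t}^{Y})^{2}|\dot{\gamma}_{t}^{X}|^{2}}\,dt\]

is the Euclidean length of the plane curve with polar coordinates \((r,\theta)=(\gamma^{Y},\gamma^{X})\). Hence \(X\times_{w}Y\) is \(\mathbb{R}^{2}\) with the Euclidean distance (the quotient collapses \(X\times\{0\}\) to the origin), and \(m_{w}\) is the Lebesgue measure, since \(dm_{w}=r\,dr\,d\theta\) in polar coordinates.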
**Definition 4.4**.: As a set, the Beppo Levi space \(\mathsf{BL}(X\times_{w}Y)\) is the subset of \(L^{2}(X\times_{w}Y,m_{w})\) of all functions \(f\) such that
* for \(m_{X}\)-a.e. \(x\in X\), we have \(f^{(x)}=f(x,\cdot)\in W^{1,2}(Y,w_{m}m_{Y})\);
* for \(w_{m}m_{Y}\)-a.e. \(t\in Y\), we have \(f^{(t)}=f(\cdot,t)\in W^{1,2}(X)\);
* the function \[|Df|_{\mathsf{BL}_{w}}=\sqrt{w_{d}^{-2}|Df^{(t)}|_{X}^{2}(x)+|Df^{(x)}|_{Y}^{2 }(t)}\] belongs to \(L^{2}(X\times_{w}Y,m_{w})\).
On \(\mathsf{BL}(X\times_{w}Y)\) we put the norm
\[\|f\|_{\mathsf{BL}(X\times_{w}Y)}=\sqrt{\|f\|_{L^{2}}^{2}+\||Df|_{\mathsf{BL} _{w}}\|_{L^{2}}^{2}}.\]
To handle the warped case we need to introduce an auxiliary space:
**Definition 4.5**.: Let \(\mathcal{V}\subset\mathsf{BL}_{w}(X,Y)\) be the space of functions \(f\) which are identically \(0\) on \(X\times\Omega\), where \(\Omega\) is an open set that satisfies \(\{w_{m}=0\}\subset\Omega\subset Y\). \(\mathsf{BL}_{0,w}(X,Y)\subset\mathsf{BL}_{w}(X,Y)\) is defined as the closure of \(\mathcal{V}\) in \(\mathsf{BL}_{w}(X,Y)\).
We want to compare the Beppo-Levi space with the Sobolev space on the warped product, and the respective notions of minimal upper gradients. Merely assuming that \(w_{m}\) and \(w_{d}\) are continuous and that \(\{w_{d}=0\}\subset\{w_{m}=0\}\) (the same assumptions as in [1] for the analogous case), we want to show that
\[\mathsf{BL}_{0,w}(X,Y)\subset W^{1,2}(X\times_{w}Y)\subset\mathsf{BL}_{w}(X,Y),\]
and that for every \(f\in W^{1,2}(X\times_{w}Y)\subset\mathsf{BL}_{w}(X,Y)\) the inequalities
\[|Df|_{\mathsf{BL}_{w}}\leq|Df|_{X\times_{w}Y}\leq C_{0}|Df|_{\mathsf{BL}_{w}}\]
hold \(m_{w}\)-a.e., where \(C_{0}>0\) is as in Theorem 3.2.
**Proposition 4.6**.: _We have \(W^{1,2}(X\times_{w}Y)\subset\mathsf{BL}_{w}(X,Y)\)._
Proof.: Let \(f\in W^{1,2}(X\times_{w}Y)\). Then by density in energy of Lipschitz functions, we can find a sequence \(f_{n}\) of Lipschitz functions on \(X\times_{w}Y\) such that \(f_{n}\to f\) and \(\operatorname{lip}(f_{n})\to|Df|_{X\times_{w}Y}\) in \(L^{2}(X\times_{w}Y)\). Passing to a subsequence, which we do not relabel, we can assume that \(\|f_{n}-f\|_{L^{2}(X\times_{w}Y,m_{w})}<n^{-4}\). Then
\[\begin{split}&\left\|\sum_{n=1}^{\infty}\|f_{n}^{(t)}-f^{(t)}\|_{L^{2}(X,m_ {X})}\right\|_{L^{2}(Y,w_{m}m_{Y})}\\ &=\left(\int_{Y}\ \left(\sum_{n=1}^{\infty}\frac{n}{n}\sqrt{\int_{X}|f_{n} ^{(t)}(x)-f^{(t)}(x)|^{2}dm_{X}(x)}\right)^{2}w_{m}(t)dm_{Y}(t)\right)^{1/2} \\ &\leq\left(\sum_{n=1}^{\infty}\frac{1}{n^{2}}\right)^{1/2}\left(\int_{Y }\ \int_{X}\sum_{n=1}^{\infty}n^{2}|f_{n}^{(t)}(x)-f^{(t)}(x)|^{2}dm_{X}(x)\,w_{m} (t)dm_{Y}(t)\right)^{1/2}\\ &=C\left(\sum_{n=1}^{\infty}n^{2}\|f_{n}-f\|_{L^{2}(X\times_{w}Y,m_{w })}^{2}\right)^{1/2}\\ &\leq C\sum_{n=1}^{\infty}n^{2}\|f_{n}-f\|_{L^{2}(X\times_{w}Y,m_ {w})}<\infty\end{split} \tag{4.1}\]
This shows that for \(w_{m}m_{Y}\)-a.e. \(t\in Y\) we have \(\sum_{n=1}^{\infty}\|f_{n}^{(t)}-f^{(t)}\|_{L^{2}(X,m_{X})}<\infty\) and so in particular \(f_{n}^{(t)}\to f^{(t)}\) in \(L^{2}(X,m_{X})\). Similarly, for \(m_{X}\)-a.e. \(x\in X\), we have \(f_{n}^{(x)}\to f^{(x)}\) in \(L^{2}(Y,w_{m}m_{Y})\).
Now observe that for \((x,t)\in X\times_{w}Y\) we have:
\[\begin{split}\operatorname{lip}(f_{n})(x,t)&=\limsup _{(y,s)\to(x,t)}\frac{|f_{n}(y,s)-f_{n}(x,t)|}{d_{w}((y,s),(x,t))}\\ &\geq\limsup_{s\to t}\frac{|f_{n}(x,s)-f_{n}(x,t)|}{d_{w}((x,s),( x,t))}\\ &=\limsup_{s\to t}\frac{|f_{n}^{(x)}(s)-f_{n}^{(x)}(t)|}{d_{Y}(s, t)}=\operatorname{lip}_{Y}(f_{n}^{(x)})(t)\end{split} \tag{4.2}\]
Then, by Fatou's lemma:
\[\begin{split}&\int_{X}\liminf_{n\to\infty}\int_{Y}\operatorname{ lip}_{Y}(f_{n}^{(x)})^{2}(t)w_{m}(t)dm_{Y}(t)dm_{X}(x)\\ &\leq\liminf_{n\to\infty}\int_{X\times_{w}Y}\operatorname{lip}_{ Y}(f_{n}^{(x)})^{2}(t)dm_{w}(x,t)\\ &\leq\liminf_{n\to\infty}\int_{X\times_{w}Y}\operatorname{lip}(f_{ n})^{2}(x,t)dm_{w}(x,t)\\ &=\int_{X\times_{w}Y}|Df|^{2}_{X\times_{w}Y}dm_{w}(x,t)<\infty \end{split} \tag{4.3}\]
Since \(f_{n}^{(x)}\to f^{(x)}\) in \(L^{2}(Y,w_{m}m_{Y})\) for \(m_{X}\)-a.e. \(x\in X\), the last inequality together with the lower semicontinuity of minimal weak upper gradients gives that
\(f^{(x)}\in W^{1,2}(Y,w_{m}m_{Y})\) for \(m_{X}\)-a.e. \(x\in X\) and
\[\int_{X\times_{w}Y}|Df^{(x)}|^{2}_{Y}(t)\,dm_{w}(x,t)\;\leq\int_{X\times_{w}Y}|Df|^{2}_{X\times_{w}Y}\,dm_{w}(x,t) \tag{4.4}\]
With an analogous argument we can get conditions on \(f^{(t)}\): Starting from the bound:
\[\begin{split}\operatorname{lip}(f_{n})(x,t)&= \limsup_{(y,s)\to(x,t)}\frac{|f_{n}(y,s)-f_{n}(x,t)|}{d_{w}((y,s),(x,t))}\\ &\geq\limsup_{y\to x}\frac{|f_{n}(y,t)-f_{n}(x,t)|}{d_{w}((y,t),( x,t))}\\ &=\limsup_{y\to x}\frac{|f_{n}^{(t)}(y)-f_{n}^{(t)}(x)|}{w_{d}(t) d_{X}(x,y)}=\frac{1}{w_{d}(t)}\operatorname{lip}_{X}(f_{n}^{(t)})(x)\end{split} \tag{4.5}\]
This inequality, valid for every \(t\in Y\) such that \(w_{d}(t)>0\), grants that \(f^{(t)}\in W^{1,2}(X)\) for \(w_{m}m_{Y}\)-a.e. \(t\in Y\) (here we are using the assumption that \(\{w_{d}=0\}\subset\{w_{m}=0\}\)) and that:
\[\int_{X\times_{w}Y}\frac{|Df^{(t)}|^{2}_{X}(x)}{w_{d}^{2}(t)}dm_{w}(x,t)\; \leq\int_{X\times_{w}Y}|Df|^{2}_{X\times_{w}Y}dm_{w}(x,t) \tag{4.6}\]
The bounds (4.4) and (4.6) ensure that \(f\in\mathsf{BL}_{w}(X,Y)\), so that the desired inclusion is proved.
**Lemma 4.7** (Lemma 3.11 in [1]).: _Let \(X\) be a set, \(d_{1},\,d_{2}\) two distances on \(X\) and \(m_{1},\,m_{2}\) two measures. Assume also that \((X,d_{1},m_{1})\) and \((X,d_{2},m_{2})\) are metric spaces that satisfy Assumption 2.12 and that for some \(C>0\) we have \(m_{2}\leq Cm_{1}\) and that for some \(L>0\) we have \(d_{1}\leq Ld_{2}\). Then, denoting by \(S(X_{1})\) and \(S(X_{2})\) the Sobolev classes relative to \((X,d_{1},m_{1})\) and \((X,d_{2},m_{2})\) respectively and by \(|Df|_{1}\) and \(|Df|_{2}\) the associated minimal weak upper gradients, we have \(S(X_{1})\subset S(X_{2})\) and for every \(f\in S(X_{1})\) the inequality \(|Df|_{2}\leq L|Df|_{1}\) holds \(m_{2}\)-a.e._
**Proposition 4.8**.: _Let \(f\in W^{1,2}(X\times_{w}Y)\subset\mathsf{BL}_{w}(X,Y)\). Then \(|Df|_{\mathsf{BL}_{w}}\leq|Df|_{X\times_{w}Y}\leq C_{0}|Df|_{\mathsf{BL}_{w}}\) m\({}_{w}\)-a.e., where \(C_{0}\) is as in Theorem 3.2._
Proof.: Fix \(\varepsilon>0\). Let \(t_{0}\) be such that \(w_{m}(t_{0})>0\), and hence \(w_{d}(t_{0})>0\). By continuity we can find \(\delta>0\) such that
\[\left|\frac{w_{d}(t)}{w_{d}(s)}\right|\leq 1+\varepsilon\qquad\text{for all }t,s\in \overline{B(t_{0},3\delta)}. \tag{4.7}\]
Let \(\chi\colon Y\to[0,1]\) be a Lipschitz function such that \(\chi\equiv 1\) on \(\overline{B(t_{0},\delta)}\), and \(\chi\equiv 0\) outside of \(\overline{B(t_{0},3\delta)}\). Define two continuous functions (in order to have the product being warped only around \(t_{0}\)) as follows:
\[\overline{w_{d}}(t)=\begin{cases}w_{d}(t)&\text{if }\operatorname{d}_{Y}(t,t_{0}) \leq 2\delta,\\ \frac{\operatorname{d}_{Y}(t,t_{0})-2\delta}{\delta}w_{d}(t_{0})+\frac{3 \delta-\operatorname{d}_{Y}(t,t_{0})}{\delta}w_{d}(t)&\text{if }2\delta< \operatorname{d}_{Y}(t,t_{0})<3\delta,\\ w_{d}(t_{0})&\text{if }\operatorname{d}_{Y}(t,t_{0})\geq 3\delta,\end{cases} \tag{4.8}\]
\[\overline{w_{m}}(t)=\begin{cases}w_{m}(t)&\text{if }\operatorname{d}_{Y}(t,t_{0}) \leq 2\delta,\\ \frac{\operatorname{d}_{Y}(t,t_{0})-2\delta}{\delta}w_{m}(t_{0})+\frac{3 \delta-\operatorname{d}_{Y}(t,t_{0})}{\delta}w_{m}(t)&\text{if }2\delta< \operatorname{d}_{Y}(t,t_{0})<3\delta,\\ w_{m}(t_{0})&\text{if }\operatorname{d}_{Y}(t,t_{0})\geq 3\delta,\end{cases} \tag{4.9}\]
and let \((X\times_{\overline{w}}Y,\mathrm{d}_{\overline{w}},m_{\overline{w}})\) be the corresponding product space. Consider the function \(\overline{f}\colon X\times_{w}Y\to\mathbb{R}\) defined by \(\overline{f}(x,t)=\chi(t)f(x,t)\). Clearly \(\overline{f}\in W^{1,2}(X\times_{w}Y)\), and hence \(\overline{f}\in\mathsf{BL}_{w}(X,Y)\). Since minimal upper gradients are local we know that
\[|Df|_{X\times_{w}Y}=|D\overline{f}|_{X\times_{w}Y}\quad\text{and}\quad|Df|_{ \mathsf{BL}_{w}}=|D\overline{f}|_{\mathsf{BL}_{w}}\quad\text{ $m_{w}$-a.e. on $X\times \overline{B(t_{0},\delta)}$.}\]
Because \(\overline{f}\) is supported on \(X\times B(t_{0},3\delta)\), where \(w_{d}\) is positive, we can think of \(\overline{f}\) as a function on \(X\times_{\overline{w}}Y\). With this identification we have
\[|D\overline{f}|_{X\times_{w}Y}=|D\overline{f}|_{X\times_{\overline{w}}Y}\quad \text{and}\quad|D\overline{f}|_{\mathsf{BL}_{w}}=|D\overline{f}|_{\mathsf{BL}_ {\overline{w}}}\quad\text{ $m_{w}$-a.e. on $X\times\overline{B(t_{0},2\delta)}$.}\]
We now want to use this "localized warped space" to utilize the results for the Cartesian product we have obtained in the previous section.
Consider first the space \(\overline{X}:=(X,w_{d}(t_{0})\,\mathrm{d}_{X},w_{m}(t_{0})m_{X})\). This is simply our original metric measure space \((X,\mathrm{d}_{X},m_{X})\) rescaled. Now consider \((\overline{X}\times Y,\mathrm{d},m)\), where with a slight abuse of notation we denote by \(\mathrm{d}\) and \(m\) the appropriate Cartesian metric and measure, respectively. Set \(c=\min_{\overline{B(t_{0},3\delta)}}w_{m}\) and \(C=\max_{\overline{B(t_{0},3\delta)}}w_{m}\). From the definition of \(m_{w}\) we immediately have
\[cm\leq m_{\overline{w}}\leq Cm,\]
and, recalling (4.7), we have
\[(1+\varepsilon)^{-1}\,\mathrm{d}\leq\mathrm{d}_{\overline{w}}\leq(1+ \varepsilon)\,\mathrm{d}\,. \tag{4.10}\]
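Let us briefly justify (4.10): on each of the three regions in (4.8) the value \(\overline{w_{d}}(t)\) is either \(w_{d}(t)\), \(w_{d}(t_{0})\), or a convex combination of the two, so (4.7) yields the pointwise bound \[(1+\varepsilon)^{-1}w_{d}(t_{0})\leq\overline{w_{d}}(t)\leq(1+\varepsilon)w_{d}(t_{0})\qquad\forall t\in Y,\] and since \(\mathrm{d}\) is precisely the warped distance built from the constant weight \(w_{d}(t_{0})\), comparing the lengths of curves computed with the two weights gives (4.10).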
Let us write \(\overline{\mathsf{BL}}:=\mathsf{BL}(\overline{X},Y)\). Using Lemma 4.7, and recalling that \(X\times_{\overline{w}}Y\) and \(\overline{X}\times Y\) coincide as sets, we obtain
\[(1+\varepsilon)^{-1}|D\overline{f}|_{\overline{X}\times Y}\leq|D\overline{f}| _{X\times_{\overline{w}}Y}\leq(1+\varepsilon)|D\overline{f}|_{\overline{X} \times Y}\]
and
\[(1+\varepsilon)^{-1}|D\overline{f}|_{\overline{\mathsf{BL}}}\leq|D\overline{f} |_{\mathsf{BL}_{\overline{w}}}\leq(1+\varepsilon)|D\overline{f}|_{\overline{ \mathsf{BL}}}.\]
We now exploit the Cartesian results applied to the pair \((\overline{X},Y)\): by Lemma 3.2, we know that
\[|D\overline{f}|_{\overline{\mathsf{BL}}}\leq|D\overline{f}|_{\overline{X} \times Y}\leq C_{0}|D\overline{f}|_{\overline{\mathsf{BL}}}\]
\(m\)-a.e. Putting everything together we obtain
\[(1+\varepsilon)^{-2}|Df|_{\mathsf{BL}_{w}}\leq|Df|_{X\times_{w}Y}\leq(1+\varepsilon)^{2}C_{0}|Df|_{\mathsf{BL}_{w}}\]
\(m_{w}\)-a.e. on \(X\times B(t_{0},\delta)\). Because \(t_{0}\) was arbitrary, and because every cover of \(\{w_{m}>0\}\subset Y\) has a countable subcover, the conclusion holds by letting \(\varepsilon\to 0\).
**Proposition 4.9**.: _We have \(\mathsf{BL}_{0,w}(X,Y)\subset W^{1,2}(X\times_{w}Y)\)._
Proof.: Thanks to Proposition 4.8, it is sufficient to prove that \(\mathcal{V}\subset W^{1,2}(X\times_{w}Y)\), where \(\mathcal{V}\) is defined as in Definition 4.5. Fix \(k\in\mathbb{N}\), let \(t_{0}\in Y\), and let
\[\psi_{n,k}(t)=\sum_{\begin{subarray}{c}j\\ \operatorname{supp}\chi_{j,k}\cap B(t_{0},n)\neq\varnothing\end{subarray}} \chi_{j,k}(t), \tag{4.11}\]
where the \(\chi_{j,k}\)'s are the partition of unity subordinate to the covering by "cubes" we defined in the proof of Lemma 3.8. Observe that \(\psi_{n,k}\equiv 1\) inside \(B(t_{0},n-1)\). With a slight abuse of notation we will denote \(\psi_{n,k}\) simply as \(\psi_{n}\), as \(k\) will remain fixed throughout the proof. For \(f\in\mathsf{BL}_{w}(X,Y)\) we define \(f_{n}(x,t):=\psi_{n}(t)f(x,t)\), and note that by definition \(f_{n}\in\mathsf{BL}_{w}(X,Y)\), while by the dominated convergence theorem and inequality (2.4) we obtain \(f_{n}\to f\) in \(\mathsf{BL}_{w}(X,Y)\).
The idea of the proof is as follows: we show that any \(f\in\mathcal{V}\) with support contained in \(X\times(Y\cap B(t_{1},R))\) for some \(R>0\) and \(t_{1}\in Y\) belongs to \(W^{1,2}(X\times_{w}Y)\). This, together with Proposition 4.8, which ensures that \(\mathsf{BL}\)-convergence implies \(W^{1,2}\)-convergence, completes the proof.
Accordingly, fix such an \(f\in\mathcal{V}\) and for \(r\in(0,1)\) let \(\Omega_{r}\subset Y\) be the \(r\)-neighborhood of \(\{w_{m}=0\}\). Now, find \(r\in(0,1)\) such that \(f\) is \(m_{w}\)-a.e. zero on \(X\times\Omega_{2r}\). Then, recalling that \(\{w_{d}=0\}\subset\{w_{m}=0\}\), by continuity and compactness there exist constants \(0<c<C<\infty\) such that
\[c<w_{d}(t),\;w_{m}(t)<C,\quad\forall t\in Y\cap B(t_{1},R)\setminus\Omega_{ \frac{r}{2}}.\]
We now use a comparison argument similar to the one used in Proposition 4.8. Let \(w^{\prime}_{d}\) and \(w^{\prime}_{m}\) be two continuous functions which agree with \(w_{d}\) and \(w_{m}\) on \(B(t_{1},R)\setminus\Omega_{\frac{r}{2}}\) and such that \(c<w^{\prime}_{d}(t),\;w^{\prime}_{m}(t)<C\) on the whole of \(Y\). Consider now the warped product \((X\times_{w^{\prime}}Y,d_{w^{\prime}},m_{w^{\prime}})\) and the Cartesian product \((X\times Y,d,m)\) of \(X\) and \(Y\). Then by Lemma 4.7 and the properties of \(w^{\prime}_{d},\;w^{\prime}_{m}\) we have the following equalities of sets:
\[\mathsf{BL}_{w^{\prime}}(X,Y)=\mathsf{BL}(X,Y)\quad\text{and}\quad W^{1,2}(X \times_{w^{\prime}}Y)=W^{1,2}(X\times Y). \tag{4.12}\]
Moreover by Theorem 3.2 the following equality of sets also holds:
\[\mathsf{BL}(X,Y)=W^{1,2}(X\times Y). \tag{4.13}\]
Finally, putting together (4.12) and (4.13) we obtain:
\[\mathsf{BL}_{w^{\prime}}(X,Y)=\mathsf{BL}(X,Y)=W^{1,2}(X\times Y)=W^{1,2}(X \times_{w^{\prime}}Y). \tag{4.14}\]
By construction of \(w^{\prime}_{d},\;w^{\prime}_{m}\) we have that \(f\in\mathsf{BL}_{w^{\prime}}(X,Y)\) so that, by equation (4.14), \(f\in W^{1,2}(X\times_{w^{\prime}}Y)\). By density in energy of Lipschitz functions (Lemma 2.11) there exists a sequence of \(d_{w^{\prime}}\)-Lipschitz functions \(f_{n}\) that converges to \(f\) in \(L^{2}(X\times_{w^{\prime}}Y)\) and, denoting by \(\mathrm{lip}^{\prime}\) the local Lipschitz constant computed w.r.t. the distance \(d_{w^{\prime}}\), we also have
\[\sup_{n\in\mathbb{N}}\int\mathrm{lip}^{\prime}(f_{n})^{2}\,dm_{w^{\prime}}<\infty\]
From now on we assume \(f_{n}\) is bounded for every \(n\in\mathbb{N}\). This is possible up to replacing the original \(f_{n}\) with \(\min(\max(f_{n},-C_{n}),C_{n})\) for sufficiently large \(C_{n}\).
Now, similarly to what we did at the beginning of the proof, find a Lipschitz function \(\psi\colon Y\to[0,1]\) which is identically \(0\) on \(\Omega_{r}\cup(Y\setminus B(t_{1},R+1))\) and identically \(1\) on \(Y\cap B(t_{1},R)\setminus\Omega_{2r}\), and set \(\tilde{f}_{n}(x,t):=\psi(t)f_{n}(x,t)\). By construction the \(\tilde{f}_{n}\)'s are still \(d_{w^{\prime}}\)-Lipschitz, they converge to \(f\) in \(L^{2}(X\times_{w^{\prime}}Y)\) and satisfy
\[\sup_{n\in\mathbb{N}}\int\mathrm{lip}^{\prime}(\tilde{f}_{n})^{2}\,dm_{w^{ \prime}}<\infty. \tag{4.15}\]
To conclude the proof we need to show that the \(\tilde{f}_{n}\)'s are also \(d_{w}\)-Lipschitz, converge to \(f\) in \(L^{2}(X\times_{w}Y)\) and satisfy
\[\sup_{n\in\mathbb{N}}\int\mathrm{lip}(\tilde{f}_{n})^{2}\,dm_{w}<\infty. \tag{4.16}\]
Once this is proved, the conclusion follows from the lower semicontinuity of weak upper gradients and the bound \(|D\tilde{f}_{n}|_{X\times_{w}Y}\leq\mathrm{lip}(\tilde{f}_{n})\), valid \(m_{w}\)-a.e.
First, \(\tilde{f}_{n}\) converges to \(f\) in \(L^{2}(X\times_{w}Y)\), since the functions \(\tilde{f}_{n}\) and \(f\) are concentrated on \(X\times(B(t_{1},R)\setminus\Omega_{r})\) and on this set the measures \(m_{w}\) and \(m_{w^{\prime}}\) agree by definition. Moreover, since \(w_{d}\) and \(w^{\prime}_{d}\) agree on \(B(t_{1},R)\setminus\Omega_{r}\), by definition we also have
\[\lim_{(y,s)\to(x,t)}\frac{d_{w}((y,s),(x,t))}{d_{w^{\prime}}((y,s),(x,t))}=1 \quad\forall\;(x,t)\in X\times(B(t_{1},R)\setminus\Omega_{r})\]
so that, in particular, \(\operatorname{lip}(\tilde{f}_{n})=\operatorname{lip}^{\prime}(\tilde{f}_{n})\). This together with (4.15) proves (4.16).
It remains to prove that the \(\tilde{f}_{n}\) are \(d_{w}\)-Lipschitz, that is, we need to prove that \(\operatorname{Lip}(\tilde{f}_{n})<\infty\). Recall that, since we are on a length space, the Lipschitz constant of a function is equal to the supremum of the local Lipschitz constants. Then, denoting by \(\operatorname{Lip}^{\prime}(\tilde{f}_{n})\) the \(d_{w^{\prime}}\)-Lipschitz constant and recalling that by construction of \(\tilde{f}_{n}\), \(\operatorname{Lip}^{\prime}(\tilde{f}_{n})<\infty\), we have:
\[\operatorname{Lip}(\tilde{f}_{n})=\sup_{X\times_{w}Y}\operatorname{lip}( \tilde{f}_{n})=\sup_{X\times_{w^{\prime}}Y}\operatorname{lip}^{\prime}(\tilde {f}_{n})=\operatorname{Lip}^{\prime}(\tilde{f}_{n})<\infty\]
which concludes the proof.
By adding a few extra assumptions on the function \(w_{m}\) we can improve the previous result and show that, with these additional assumptions, the inclusions above are all equalities.
**Proposition 4.10**.: _Assume that \(w_{m}\) satisfies_
\[\{w_{m}=0\}\subset Y\text{ is discrete;}\tag{4.17}\]
\[w_{m}\text{ decays at least linearly near its zeros, i.e. }\;w_{m}(t)\leq C\inf_{s|w_{m}(s)=0}\operatorname{d}_{Y}(t,s),\quad\forall t\in Y.\tag{4.18}\]
_Then \(\operatorname{\mathsf{BL}}_{0,w}(X,Y)=W^{1,2}(X\times_{w}Y)=\operatorname{ \mathsf{BL}}_{w}(X,Y)\)._
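As a concrete illustration of these assumptions: on \(Y=\mathbb{R}\) with the Euclidean distance, the weight \(w_{m}(t)=\min\{|t|,1\}\) has the single zero \(t=0\), so (4.17) holds, and satisfies \(w_{m}(t)\leq\operatorname{d}_{Y}(t,0)\), so (4.18) holds with \(C=1\). By contrast, \(w_{m}(t)=\sqrt{|t|}\) violates (4.18), since \(\sqrt{|t|}/|t|\to\infty\) as \(t\to 0\).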
Proof.: It is easy to see that \(\operatorname{\mathsf{BL}}_{w}(X,Y)\cap L^{\infty}(X\times_{w}Y)\) is dense in \(\operatorname{\mathsf{BL}}_{w}(X,Y)\), using a standard truncation argument, so that it is enough to show that for any \(f\in\operatorname{\mathsf{BL}}_{w}(X,Y)\cap L^{\infty}(X\times_{w}Y)\) there is a sequence in \(\mathcal{V}\) that converges to it in \(\operatorname{\mathsf{BL}}_{w}(X,Y)\).
Pick \(f\in\operatorname{\mathsf{BL}}_{w}(X,Y)\cap L^{\infty}(X\times_{w}Y)\) and define \(D(t)=\min_{s|w_{m}(s)=0}\operatorname{d}_{Y}(t,s)\). For \(m,n\in\mathbb{N}\), with \(n>1\), fix \(x_{0}\in X\) and \(t_{0}\in Y\), and let \(\psi_{n,k}(t)\) be as in (4.11). Moreover define
\[\sigma_{m}(x) =\max\{0,\min\{m-\operatorname{d}_{X}(x,x_{0}),1\}\}\] \[\eta_{n}(t) =\max\{0,\min\{1-\frac{|\log(D(t))|}{\log n},1\}\}.\]
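Let us record the two properties of \(\eta_{n}\) used below: \(\eta_{n}\equiv 0\) on \(\{D\leq n^{-1}\}\), which is a neighborhood of \(\{w_{m}=0\}\), while for every \(t\) with \(D(t)>0\) we have \(\eta_{n}(t)\to 1\) as \(n\to\infty\), because \(|\log(D(t))|/\log n\to 0\).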
Define \(f_{n,m,k}(x,t)=\psi_{n,k}(t)\eta_{n}(t)\sigma_{m}(x)f(x,t)\). Because the product of the three auxiliary functions is Lipschitz and bounded for all \(n,m,k\), \(f_{n,m,k}\in\operatorname{\mathsf{BL}}_{w}(X,Y)\), and because \(\eta_{n}\) is \(0\) in a neighborhood of \(\{w_{m}=0\}\), we also have \(f_{n,m,k}\in\mathcal{V}\), for all \(n,m,k\).
By construction, the function \((x,t)\mapsto\psi_{n,k}(t)\eta_{n}(t)\sigma_{m}(x)\) is uniformly bounded by \(1\), and hence by the dominated convergence theorem we have that \(f_{n,m,k}\to f\) in \(L^{2}(X\times_{w}Y)\), as \(n,m\to\infty\), for every \(k\).
Now, recalling (2.4), and because \(\sigma_{m}\) is \(1\)-Lipschitz, for \(m_{w}\)-a.e. \((x,t)\)
\[|\partial/\partial x(f-f_{n,m,k})(x,t)|\leq|\psi_{n,k}(t)\eta_{n}(t)\sigma_{m} (x)-1||\partial f/\partial x|(x,t)+|f(x,t)|1_{\{\operatorname{d}_{X}(\cdot,x_{0 })\geq m-1\}}(x).\]
Applying the dominated convergence theorem again we get that, as \(n,m\to\infty\),
\[\int_{X\times Y}|\partial/\partial x(f-f_{n,m,k})(x,t)|^{2}dm_{w}\to 0.\]
Using again Lipschitzness and the product rule we have
\[\begin{split}|\partial/\partial t(f-f_{n,m,k})(x,t)|&\leq|\psi_{n,k}(t)\eta_{n}(t)\sigma_{m}(x)-1||\partial f/\partial t|(x,t)\\ &\qquad+c_{1}C_{Y}^{3}k|f(x,t)|1_{\{\mathrm{d}_{Y}(\cdot,t_{0})\geq n-1\}}(t)\\ &\qquad+|f(x,t)|1_{\{\mathrm{d}_{X}(\cdot,x_{0})\leq m\}}(x)1_{\{\mathrm{d}_{Y}(\cdot,t_{0})\leq n\}}(t)|\partial\eta_{n}/\partial t|(t),\end{split}\]
for \(m_{w}\)-a.e. \((x,t)\). Once again by the dominated convergence theorem, the first two terms go to \(0\) in \(L^{2}\) as \(n,m\to\infty\). For the last term, observe first that \(|\partial\eta_{n}/\partial t|(t)\leq\frac{1_{D^{-1}([n^{-1},1])}(t)}{D(t)\log n}\). Now we can use both of the additional assumptions on \(w_{m}\): if \(y_{1},\ldots,y_{N}\) are the finitely many zeros of \(w_{m}\) inside \(\overline{B(t_{0},n-1)}\), and recalling that \(f\) is bounded, we have
\[\begin{split}\int_{X\times Y}|f(x,t)|^{2}&1_{\{\mathrm{d}_{X}(\cdot,x_{0})\leq m\}}(x)1_{\{\mathrm{d}_{Y}(\cdot,t_{0})\leq n\}}(t)|\partial\eta_{n}/\partial t|^{2}(t)\,dm_{w}(x,t)\\ &\leq\frac{\|f\|_{L^{\infty}}^{2}m_{X}(B(x_{0},m))}{(\log n)^{2}}\int_{B(t_{0},n)\cap D^{-1}([n^{-1},1])}\frac{1}{D(t)^{2}}w_{m}(t)\,dm_{Y}(t)\\ &\leq C\frac{\|f\|_{L^{\infty}}^{2}m_{X}(B(x_{0},m))}{(\log n)^{2}}\int_{B(t_{0},n)\cap D^{-1}([n^{-1},1])}\frac{1}{D(t)}\,dm_{Y}(t)\\ &\leq C\frac{\|f\|_{L^{\infty}}^{2}m_{X}(B(x_{0},m))}{(\log n)^{2}}\sum_{i=1}^{N}\int_{\{t|\mathrm{d}_{Y}(y_{i},t)\in[n^{-1},1]\}}\frac{1}{\mathrm{d}_{Y}(t,y_{i})}\,dm_{Y}(t)\\ &=C\frac{\|f\|_{L^{\infty}}^{2}m_{X}(B(x_{0},m))}{(\log n)^{2}}\sum_{i=1}^{N}\int_{\frac{1}{n}}^{1}\frac{1}{s}\,ds\\ &=NC\frac{\|f\|_{L^{\infty}}^{2}m_{X}(B(x_{0},m))}{\log n}.\end{split}\]
This last term goes to \(0\) as \(n\to\infty\) for every \(m,k\in\mathbb{N}\), so \(f_{n,m,k}\to f\) in \(\mathsf{BL}_{w}(X,Y)\) and the desired result is proved.
|
2308.05620 | A Robust and Rapidly Deployable Waypoint Navigation Architecture for
Long-Duration Operations in GPS-Denied Environments | For long-duration operations in GPS-denied environments, accurate and
repeatable waypoint navigation is an essential capability. While simultaneous
localization and mapping (SLAM) works well for single-session operations,
repeated, multi-session operations require robots to navigate to the same
spot(s) accurately and precisely each and every time. Localization and
navigation errors can build up from one session to the next if they are not
accounted for. Localization using a global reference map works well, but there
are no publicly available packages for quickly building maps and navigating
with them. We propose a new architecture using a combination of two publicly
available packages with a newly released package to create a fully functional
multi-session navigation system for ground vehicles. The system takes just a
few hours from the beginning of the first manual scan to perform autonomous
waypoint navigation. | Erik Pearson, Brendan Englot | 2023-08-10T15:09:14Z | http://arxiv.org/abs/2308.05620v1 | A Robust and Rapidly Deployable Waypoint Navigation Architecture for Long-Duration Operations in GPS-Denied Environments
###### Abstract
For long-duration operations in GPS-denied environments, accurate and repeatable waypoint navigation is an essential capability. While simultaneous localization and mapping (SLAM) works well for single-session operations, repeated, multi-session operations require robots to navigate to the same spot(s) accurately and precisely each and every time. Localization and navigation errors can build up from one session to the next if they are not accounted for. Localization using a global reference map works well, but there are no publicly available packages for quickly building maps and navigating with them. We propose a new architecture using a combination of two publicly available packages with a newly released package to create a fully functional multi-session navigation system for ground vehicles. The system takes just a few hours from the beginning of the first manual scan to perform autonomous waypoint navigation.
## I Introduction
Mobile robots are often programmed for repeatable tasks, and each instance typically requires the same code. However, repeatable tasks require consistency between attempts, and localization is an important contributing factor to this consistency. For unmanned ground vehicles (UGVs) like Clearpath's Jackal seen in Fig. 1, reliable navigation to specified waypoints can facilitate a wide range of repeatable tasks. For example, mobile manipulator platforms would be able to perform repeatable grasping and manipulation tasks at specified locations. However, there are currently no simple, publicly available localization methods and implementations compatible with repeated waypoint navigation that incorporate fast map construction from scratch.
Simultaneous localization and mapping (SLAM) has become an essential tool for mobile robotics, and successful approaches allow vehicles to navigate around a previously unknown environment with confidence. However, many repeatable tasks occur in indoor environments that undergo few, if any, changes to their structure. Creating a new map on each attempt is unnecessary and time consuming for simple tasks. Additionally, new maps may not have the same orientation or origin as prior maps, resulting in different outcomes of repeated tasks. While key locations in a map can sometimes be identified semantically, the added complexity of doing so may sometimes be undesirable on embedded systems.
Localization using a prior map will ensure waypoints are placed at the same global location every time. Some common packages exist for SLAM within the Robot Operating System (ROS) [1]. Currently there are no packages for localization using a prior map that also include navigation; therefore, we have implemented a fast approach for building and using a prior map with a localization package for waypoint navigation in ROS, which is described in detail below.
## II Related Work
Mobile robot navigation has been researched extensively to open a pathway for more advanced tasks. However, navigation requires both localization and environmental data to make informed decisions. SLAM has been successful at satisfying both of those requirements. Most SLAM algorithms define a map origin based on the initial position of the robot when it begins its operations. To perform multi-session tasks, we need to ensure a common origin is used for each session.
### _LiDAR-aided Pose Estimation_
Autonomous ground vehicles rely on accurate pose estimation for navigation [2, 3]. While there are many methods for performing pose estimation [4], simplifying the task down to a common sensor type reduces the complexity. LiDAR sensors provide enough data for localization with comparative algorithms. Managing thousands of data points from each scan can be daunting from a computational complexity standpoint, however there are methods to reduce
Fig. 1: The Clearpath Jackal unmanned ground vehicle (UGV) used in this research, equipped with a Velodyne VLP-16 “puck” lidar.
the complexity, such as semantic labeling. This approach labels and groups a subset of point cloud data together to appear as a single entity, thereby significantly reducing the overall number of comparisons. Using semantics and Random Sample Consensus (RANSAC), [5] performs stable and accurate pose estimation while eliminating dynamic obstacles from comparisons. Semantic modeling can be a powerful tool for pose estimation, as demonstrated in [6], which solved global localization by registering a single LiDAR scan overlapped with a camera to a reference map using segmentation and neural network training. Localization can be performed by focusing on one semantic object class while attaining high accuracy in handling the surrounding data [7].
### _Multi-Session SLAM_
Two forms of multi-session SLAM persist in mobile robotics research. The first expands the boundaries of a previously defined map [8], while the second revisits the same location from a previous map, where the environment may have changed [9]. Real world environments are not static, which requires maps to be updated from time to time for accurate localization. Therefore, there needs to be some tolerance for robots to use prior maps with outdated information, by either having enough static points of reference on a prior map or updating a prior map during each session. One example of enough reference data is the work produced by Labbe et al. [10] in illumination invariant visual SLAM, where distinctly different visual references are capable of localizing successfully. Another method separates changes between the current map and the prior [11]. Other researchers prefer to update the global reference map, such as Zhao et al. [12], who acknowledge that active environment changes, such as new stores within a mall, should be manageable.
### _Repeated Navigation_
While consistent localization through pose estimation is required for multi-session waypoint navigation, environments are rarely static. One method to handle this is ignoring data unrelated to localization. Indoor artificial landmarks such as fiducial markers can be placed in an environment where other features may change [13]. Visual teach and repeat methods [14, 15] allow for repeated navigation without localization, and can function even under seasonal environmental changes [16, 17]. However, robust localization methods can sometimes handle large static changes in the environment.
### _Model Localization_
Prior maps can be built from many different data sources. One helpful source is 3D models constructed using computer aided design (CAD). Building Information Models can also be used to generate maps for both geometric and semantic localization [18]. Not many real world structures have complete 3D models; however, a 3D mesh can be approximately generated from a 2D floorplan for precise robot localization [19]. 3D models of real world environments work well in scenarios where the models already exist. For scenarios that begin with a completely unknown environment, the goal of this paper is to provide an easy-to-deploy alternative solution.
The main contributions of this work are:
* Simple to use and quick to implement prior map localization with globally referenced waypoints, for repeated mobile robot navigation tasks.
* An autonomous waypoint distributor package publishing 2D waypoint locations for repeatable navigation.
* A 3D localization package robust enough to handle sources of error caused by differences in current LiDAR data and global reference maps, due to dynamic objects, displaced static objects and occlusions.
* A recommended package for building a global reference map quickly and accurately.
This unique framework will enable researchers to minimize time spent building custom solutions for repeated waypoint navigation tasks. We hope our implementation can be a stepping stone for work addressing highly complex tasks.
## III Architecture Description
To ensure accurate waypoint navigation, a global reference map and some method of performing localization are required. Global reference maps can be composed of labeled data, such as landmarks, or raw metric data, such as a point cloud. Many localization methods rely on a specific type of global reference map to perform optimally. SLAM algorithms build global reference maps from scratch.
Our framework uses publicly available packages to create a global reference map from a prior manual expedition, followed by localization for waypoint navigation in multi-session tasks. The package we selected for creating a global reference map was _hdl_graph_slam_1[20] for two main reasons. The SLAM results were successful even when only LiDAR point cloud data was given, which reduced the requirements for data collection. There is also a service included that allows users to save a copy of the completed global map in the correct data type for the subsequent localization package.
Footnote 1: [https://github.com/koide3/hdl_graph_slam](https://github.com/koide3/hdl_graph_slam)
The real-time localization package we chose is a publicly available Iterative Closest Point (ICP) [21] implementation from ETH Zurich2. With some minor modifications for our specific systems, their package was able to take an initial position for the sensor used and accept odometry data for an estimate before performing ICP localization between an active sensor and the global reference map. With further testing, this localization method performed well in cases where the sensor had partial occlusion, dynamic obstacles moved around, and even when portions of the global reference map had been changed, such as moved boxes or chairs.
Footnote 2: [https://github.com/leggedrobotics/icp_localization](https://github.com/leggedrobotics/icp_localization)
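For readers who want to prototype the same idea, the following is a minimal sketch of scan-to-map ICP with an odometry prior, written with the Open3D library; it is illustrative only and is not the code of the ETH Zurich package, and the file names and the `odom_prior` variable are placeholders.

```
import numpy as np
import open3d as o3d

# Load the global reference map and the current LiDAR scan
# (paths are placeholders for illustration).
ref_map = o3d.io.read_point_cloud("global_reference_map.pcd")
scan = o3d.io.read_point_cloud("current_scan.pcd")

# 4x4 homogeneous transform predicted from odometry since the last
# localization update; identity if no prior is available.
odom_prior = np.eye(4)

# Register the live scan against the prior map. The correspondence
# distance bounds how far a scan point may be from its map match,
# which is what tolerates dynamic obstacles and small map changes.
result = o3d.pipelines.registration.registration_icp(
    scan, ref_map,
    max_correspondence_distance=0.5,  # meters, tune per environment
    init=odom_prior,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)

print("estimated sensor pose in the map frame:\n", result.transformation)
print("inlier RMSE:", result.inlier_rmse)
```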
To enable movement with the localization package, a navigation stack was implemented. The default 2D navigation stack in ROS is the _move_base3_ package. The full navigation stack includes obstacle avoidance with 2D costmaps and
computes collision-free paths from the current position to a global goal using Dijkstra's algorithm [22] by default. Once a goal has been set, the navigation stack sends velocity commands to the robot until the goal has been reached. The new _waypoint_navigation4_ package we have published includes the complete waypoint navigation architecture shown in Fig. 2, with simulated and real-world example launch files using a Jackal ground vehicle with a mounted LiDAR. Included in our integration of the packages shown in Fig. 2 is a waypoint publisher script. The current version of that script accepts 2D goal positions as parameters in a .yaml file, before creating a list of goal positions. When the node is activated, the goals are published in order while waiting for the robot to arrive at its current goal. Once the robot has successfully arrived, the next goal is published until no further goals remain on the list. While simple, this script is effective at supporting repeated autonomous navigation to a series of waypoints.
Footnote 4: [https://github.com/RobustFieldAutonomyLab/waypoint.navigation](https://github.com/RobustFieldAutonomyLab/waypoint.navigation)
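A minimal version of such a goal-publishing node might look like the following ROS 1 Python sketch; it assumes goals are stored as `[x, y, yaw]` triples in a YAML file and is an illustration of the idea rather than the released package's source.

```
#!/usr/bin/env python
import actionlib
import rospy
import yaml
from actionlib_msgs.msg import GoalStatus
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal
from tf.transformations import quaternion_from_euler

rospy.init_node("waypoint_publisher")
client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
client.wait_for_server()

# waypoints.yaml is assumed to hold a list of [x, y, yaw] entries
# expressed in the global reference map frame.
with open(rospy.get_param("~waypoint_file", "waypoints.yaml")) as f:
    waypoints = yaml.safe_load(f)

for x, y, yaw in waypoints:
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    qx, qy, qz, qw = quaternion_from_euler(0.0, 0.0, yaw)
    goal.target_pose.pose.orientation.x = qx
    goal.target_pose.pose.orientation.y = qy
    goal.target_pose.pose.orientation.z = qz
    goal.target_pose.pose.orientation.w = qw

    # Block until move_base reports a terminal state for this goal,
    # then move on to the next waypoint in the list.
    client.send_goal(goal)
    client.wait_for_result()
    if client.get_state() != GoalStatus.SUCCEEDED:
        rospy.logwarn("failed to reach waypoint (%.2f, %.2f)", x, y)
```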
## IV Experiments
Multi-session localization for waypoint navigation requires accuracy and precision. Accuracy can be defined by how close the robot is to the target, while precision measures the consistency for a given target. As we are using a ground vehicle to test our localization algorithm, the waypoints will be defined by their 2D Cartesian coordinates and a quaternion for orientation, \(w_{i}=(x_{i},y_{i},q_{i})\) for the \(i\)th position of an ordered list \(\mathcal{W}_{n}=(w_{1},w_{2},\ldots,w_{n})\) of \(n\) waypoints. The Jackal UGV used in our work is described by its pose \(s_{\tau}=(x_{\tau},y_{\tau},q_{\tau})\) at time \(t=\tau\).
### _Simulation Comparisons_
The proposed waypoint navigation architecture of Fig. 2 was built to be simple to use and quick to implement. Given the lack of existing localization algorithm implementations that incorporate navigation, the only practical competitor was SLAM, as implemented to support waypoint navigation in the ROS navigation stack. Many SLAM algorithm implementations exist within ROS, however the simplest to use is GMapping [23], the default SLAM framework. Therefore, comparisons were made with respect to accuracy and precision of waypoint navigation for both GMapping, and ICP localization (as implemented within our architecture). Simulations were performed in ROS Noetic with an i9-9900K 3.60GHz CPU and 64GB of RAM. ICP trials were given 5 minutes to reach completion, while GMapping required 6 minutes per trial.
Extensive simulations were performed in Gazebo to compare our proposed use of ICP localization within the architecture of Fig. 2 against the conventional usage of GMapping5 for waypoint navigation within ROS. While a simulated Jackal UGV manages its state estimation process, the Gazebo simulation environment can provide ground truth. Therefore, to test for accuracy and precision we recorded the ground truth state of the robot when navigation to each designated waypoint was completed. If we denote the time when the robot arrived at waypoint \(i\) as \(\tau_{i}\), then all our ground truth data for a single trial forms the set \(\mathcal{S}_{n}=\{s_{\tau_{1}},s_{\tau_{2}},\ldots,s_{\tau_{n}}\}\).
Footnote 5: [https://wiki.ros.org/gmapping](https://wiki.ros.org/gmapping)
A simplified tunnel-like map seen in Fig. 3 was used for testing with \(n=15\) waypoints chosen to reflect a variety of positions and orientations. Each trial used the same \(15\) waypoints assigned in the global reference frame. To ensure a sufficiently large dataset, \(m=200\) trials were performed for each waypoint. However, not every trial was successful, resulting in less than \(3000\) individual datapoints. If failures occurred in the middle of a trial, data was collected up to the last successfully navigated waypoint. Therefore, we were able to compute a success rate in addition to the accuracy and precision of each framework.
```
Data: W_15
m ← 200
count ← 0
n ← 15
for count < m do
    begin localization simulation
    i ← 0
    for i < n do
        navigate to w_i ∈ W_15
        record robot state
        i++
    end simulation
    count++
```
**Algorithm 1** Simulating 200 trials of navigation to 15 waypoints
Multi-session waypoint navigation implies a first session followed by many subsequent sessions. The initial session
Fig. 2: Complete architecture of the proposed waypoint navigation package. Purple boxes represent data that only needs to be handled once. Blue boxes are sent and received by the robot. Green ellipses show the new packages used by this architecture.
can be performed by starting a robot in the ideal state, with no errors. However, under autonomous navigation, the robot needs to navigate itself back to the home position and orientation. Any errors accumulated during navigation will affect future sessions if not accounted for. Therefore, we performed two phases of simulation, in which the first phase represents an initial session where the robot was placed exactly at the origin of the map for each trial. By the comparing the final UGV ground truth locations against the assigned waypoints, we created a set \(\mathcal{E}_{n}=\mathcal{S}_{n}-\mathcal{W}_{n}=\{e_{1},e_{2},\ldots,e_{n}\}\) of target errors. These errors are plotted in Fig. 4 and show how GMapping resulted in more accurate waypoint navigation. However, it is clear that the ICP framework has higher precision, as the convex hulls around each individual waypoint's error \(e_{i}\) form relatively small clusters.
The second phase of simulation is conducted similarly to the first, with the only change being the starting location of the robot. For these trials, a random error from the first session \(e\in\mathcal{E}\) was added to the robot's initial pose. The same map, set of waypoints \(\mathcal{W}_{n}\), and number of trials were performed as in the first round of simulation. However, the results show significantly worse errors for the GMapping framework, while the ICP framework barely had any changes, as seen in Fig. 5. The success rate of each method was also affected during the change from initial session to second session. For the initial session, ICP had 198 trials reach the first goal point successfully, while only 134 managed to reach the final goal. GMapping had similar values, with 188 reaching the first goal, and 122 completing the entire trial. ICP achieved higher performance in the second round of simulation, with 148 trials reaching the final goal, although only 195 reached the first goal. As expected with the additional error, GMapping had 162 trials reach the first goal, but only 14 trials managed to navigate to all the waypoints. Besides the accuracy and precision outcomes, the robustness of each algorithm demonstrates the benefits of localization using a prior map, for repeated waypoint navigation tasks in indoor environments.
### _Measuring Accuracy_
When navigating to a waypoint, accuracy is a measure of how close the robot was to the true goal. A simple method to compute accuracy is an average of Euclidean distances:
\[\frac{1}{nm}\sum_{j=1}^{m}\sum_{i=1}^{n}||z_{ij}-z_{i}^{*}||. \tag{1}\]
For 2D Euclidean distance, \(z_{ij}=(x_{ij},y_{ij})\in s_{\tau_{ij}}\in\mathcal{S}_{j}\) and the true pose \(z_{i}^{*}=(x_{i}^{*},y_{i}^{*})\in w_{i}\in\mathcal{W}\), resulting in an average radial distance. Accuracy was recorded independently of the specific waypoint measured against. For our initial session, ICP yielded an accuracy of \(0.257994m\), while
Fig. 3: Simulated Gazebo environment with 15 marked waypoints in red. Each cardinal and inter-cardinal direction is represented at least once. The white point cloud is the global reference map used for ICP localization.
GMapping yielded a significantly smaller value of \(0.105023m\). Angular orientation accuracy used the same basic equation, with \(z_{ij}=q_{ij}\in s_{\tau_{ij}}\) applied to the yaw angle, and once again \(z_{i}^{*}=q_{i}^{*}\in w_{i}\). As seen in Fig. 4, both algorithms achieved similar rotational accuracy, which amounted to \(0.045672\) radians for ICP and \(0.04284\) radians for GMapping. Based on these results, GMapping achieves higher translational accuracy and comparable rotational accuracy relative to the ICP framework when the exact map origin is used as the robot's starting location for each trial.
When performing multiple sessions, GMapping was unable to maintain its higher translational accuracy. As seen in Fig. 5, the ICP framework resulted in nearly the same values as previously, with a translational accuracy of \(0.255417m\) and a rotational accuracy of \(0.045574\) radians. GMapping, however, became drastically worse at locating waypoints, resulting in a translational accuracy of \(0.611964m\). Rotational errors seemed to be more centered with a wider range, leading to an accuracy of \(0.043954\) radians.
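As a sketch of how Equation (1) can be evaluated from logged data (the array names and shapes below are our own convention, not the paper's code):

```
import numpy as np

def translational_accuracy(arrivals_xy, waypoints_xy):
    """Eq. (1): mean Euclidean distance between recorded arrival
    positions, shape (m, n, 2), and true waypoints, shape (n, 2)."""
    return np.mean(np.linalg.norm(arrivals_xy - waypoints_xy[None], axis=-1))

def rotational_accuracy(arrival_yaws, waypoint_yaws):
    """Mean absolute yaw error, wrapped to (-pi, pi] so that angles
    near +/- pi are not counted as nearly a full turn apart."""
    err = (arrival_yaws - waypoint_yaws[None] + np.pi) % (2 * np.pi) - np.pi
    return np.mean(np.abs(err))
```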
### _Measuring Precision_
Precision can be computed using a similar averaging of the Euclidean norm, with different parameters. This time, errors were divided into subsets based on the waypoint they corresponded to, using Equation (2) for each waypoint \(i\):
\[\frac{1}{m}\sum_{j=1}^{m}||z_{ij}-z_{i}^{*}||. \tag{2}\]
Within those subsets, centroids were computed to estimate how tightly packed the errors were. For translational precision, the reference in Equation (2) was taken to be the centroid \(z_{i}^{*}=(\bar{x}_{i},\bar{y}_{i})\), while rotational precision used the average yaw directly. Navigation precision was recorded for each waypoint, as seen in Table I. While there is some variability among the results, in the first session the precision of the ICP framework is superior to GMapping at every waypoint for both translational and rotational precision, with only two angular exceptions at waypoints \(5\) and \(13\).
Similar to the decline in accuracy, GMapping suffered drastically with respect to the precision realized in the translational errors of its second session. Most waypoints saw order-of-magnitude worse results, while all angular precision measurements were worse than those from the ICP algorithm, as seen in Table II.
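Continuing the sketch above, per-waypoint precision in the sense of Equation (2), with the reference taken as the empirical centroid, could be computed as:

```
def translational_precision(arrivals_xy):
    """Eq. (2) per waypoint: mean distance of the m arrivals at each
    waypoint to the centroid of those arrivals; returns shape (n,)."""
    centroids = arrivals_xy.mean(axis=0, keepdims=True)   # (1, n, 2)
    return np.linalg.norm(arrivals_xy - centroids, axis=-1).mean(axis=0)
```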
Fig. 4: Translational and rotational displacements of Jackal UGV when navigating to waypoints in simulation. Convex hulls around data from each waypoint illustrate navigation precision within the translational displacement plots at top.
### _Real World Experiments_
ICP localization was implemented on a Jackal UGV using a global reference map of the ABS Engineering Center at Stevens Institute of Technology seen in Fig. 7. A script published the same waypoints for five consecutive trials. The final waypoint was placed at the origin of the map so that the robot could return autonomously to its start location.
Fig. 5: Translational and rotational displacements of Jackal UGV when navigating to waypoints in simulation, during the second navigation session. Convex hulls around data from each waypoint illustrate navigation precision within the translational displacement plots at top.
To simulate multi-session navigation, the localization algorithm was restarted remotely before each trial began; however, the hardware was not touched between trials. All five trials can be seen in Fig. 6, where the pathways varied but the waypoints were reached precisely.
All of the preparations to perform real world testing, and its execution, occurred on the same day. LiDAR data was gathered by manually driving the Jackal UGV around the ABS Engineering Center while recording. For complete coverage, the manual path taken was more comprehensive than our final autonomous path. The environment was small enough that our offline SLAM solution produced an accurate prior map to use in less than 10 minutes (shown in white in Fig. 6). After updating the localization parameters, waypoint navigation testing began. Waypoints were manually placed on the global reference map via rviz. If the waypoints were reasonable (e.g., appeared to be collision-free), they were added to the autonomous navigation script. Once the script was complete, everything was put in place for autonomous navigation around the indoor environment. The total process from gathering the prior map data to defining waypoints and autonomously navigating to them took less than 4 hours.
The reference map used in this experiment included some extra data of dynamic obstacles, such as team members following the robot around. During each navigation trial, students were actively moving around this workspace as well, resulting in LiDAR scans that were not perfectly matched to the prior map. However, ICP localization combined with odometry achieved enough point-to-point matches to overcome any mismatches due to moving obstacles, which resulted in accurate localization even with these discrepancies. Obstacle avoidance was restarted at the beginning of each trial, and only current data was used to define obstacles, which enabled the Jackal to avoid new objects that did not exist when the global reference map was created.
Fig. 6: Five consecutive execution traces of autonomous waypoint navigation on the Jackal UGV using the proposed architecture of Fig. 2 in Stevens’ ABS Engineering Center. Fifteen waypoints marked with green arrows were sent via script to run autonomously. Each new trial was started by resetting the localization software, but the hardware was not moved, to ensure true multi-session navigation. A video of this process can be viewed here:
[https://youtu.be/vsRrwgDN9kg](https://youtu.be/vsRrwgDN9kg)
Fig. 7: ABS Engineering Center used for real world testing of ICP localization framework on Jackal UGV.
## V Conclusion
Using publicly available software packages, our team was able to produce a reliable and quick system for building a global reference map and performing localization that was sufficient for autonomous waypoint navigation in a previously unknown environment. The global reference map does not need to be exactly the same as the live data, thanks to the robustness of ICP combined with odometry data. This is particularly helpful in environments with constantly changing floor spaces and people actively walking around. While we were able to tackle localization for ground vehicles, the default move_base navigation package had limited success at avoiding obstacles across multiple sessions, under the accumulation of errors. We hope that the proposed architecture can serve as a foundational capability upon which the robotics community can achieve more complex task execution in GPS-denied indoor environments.
|
2303.09368 | Geodesic graphs for geodesic orbit Finsler $(α,β)$ metrics on
spheres | Invariant geodesic orbit Finsler $(\alpha,\beta)$ metrics $F$ which arise
from Riemannian geodesic orbit metrics $\alpha$ on spheres are determined. The
relation of Riemannian geodesic graphs with Finslerian geodesic graphs proved
in a previous work is now illustrated with explicit constructions. Interesting
examples are found such that $(G/H,\alpha)$ is Riemannian geodesic orbit space,
but for the geodesic orbit property of $(G/H,F)$ the isometry group has to be
extended. It is also shown that projective spaces other than ${\mathbb{R}}P^n$
do not admit invariant purely Finsler $(\alpha,\beta)$ metrics. | Teresa Arias-Marco, Zdenek Dusek | 2023-03-16T14:53:03Z | http://arxiv.org/abs/2303.09368v2 | # Geodesic graphs for geodesic orbit Finsler \((\alpha,\beta)\) metrics on spheres
###### Abstract
Invariant geodesic orbit Finsler \((\alpha,\beta)\) metrics \(F\) which arise from Riemannian geodesic orbit metrics \(\alpha\) on spheres are determined. The relation of Riemannian geodesic graphs with Finslerian geodesic graphs proved in a previous work is now illustrated with explicit constructions. Interesting examples are found such that \((G/H,\alpha)\) is Riemannian geodesic orbit space, but for the geodesic orbit property of \((G/H,F)\) the isometry group has to be extended. It is also shown that projective spaces other than \(\mathbb{R}P^{n}\) do not admit invariant purely Finsler \((\alpha,\beta)\) metrics.
**MSClassification:** 53C22, 53C60, 53C30.
**Keywords:** Homogeneous Finsler manifold, \((\alpha,\beta)\) metric, homogeneous geodesic, g.o. manifold, geodesic graph.
## 1 Preliminaries
A Finsler manifold \((M,F)\) is _homogeneous_ if it admits a connected Lie group \(G\) which acts transitively on \((M,F)\) as a group of isometries. Fundamental tools for the study of homogeneous manifolds can be found in several monographs, for example [8] by S. Deng, concerning Finsler geometry. The setting convenient for the present work was used for example in the recent work [12] by the second author. To remain self-contained, we shortly recall these concepts here. A homogeneous manifold \((M,F)\) can be naturally identified with the _homogeneous space_\((G/H,F)\), where \(H\) is the isotropy group of the origin \(p\in M\). Let \(\mathfrak{g}\) and \(\mathfrak{h}\) be the Lie algebras of \(G\) and \(H\), and consider the adjoint representation \(\mathrm{Ad}\colon H\times\mathfrak{g}\to\mathfrak{g}\) of \(H\) on \(\mathfrak{g}\). There exists a _reductive decomposition_ of the form \(\mathfrak{g}=\mathfrak{m}+\mathfrak{h}\), where \(\mathfrak{m}\subset\mathfrak{g}\) is an \(\mathrm{Ad}(H)\)-invariant vector subspace. This decomposition is usually not unique. For a fixed reductive decomposition \(\mathfrak{g}=\mathfrak{m}+\mathfrak{h}\), there is the natural identification of the vector space \(\mathfrak{m}\) with the tangent space of \(M\) at \(p\) induced by the natural projection \(G\to G/H=M\).
The invariant Finsler metric \(F\) on \(G/H\) gives the Minkowski norm and its fundamental tensor on \(T_{p}M\). Using the above identification \(\mathfrak{m}\simeq T_{p}M\), we obtain
the \(\mathrm{Ad}(H)\)-invariant Minkowski norm and the \(\mathrm{Ad}(H)\)-invariant fundamental tensor on \(\mathfrak{m}\). Conversely, if \(F_{p}\) is given on \(\mathfrak{m}\simeq T_{p}M\), we naturally define \(F_{q}\) on any \(T_{q}M\), \(q\in M\), using the action of \(\sigma\in G\) such that \(\sigma(p)=q\) by the formula
\[F_{q}(\sigma_{*}X)=F_{p}(X),\qquad X\in T_{p}M. \tag{1}\]
In the following, we work only with the vector space \(\mathfrak{m}\) and we denote the \(\mathrm{Ad}(H)\)-invariant Minkowski norm on \(\mathfrak{m}\) just by \(F\) and its fundamental tensor on \(\mathfrak{m}\) by \(g\).
We shall start with an \(\mathrm{Ad}(H)\)-invariant positive scalar product \(\alpha\) and an \(\mathrm{Ad}(H)\)-invariant one-form \(\beta\) on \(\mathfrak{m}\). Let \(\phi\colon(-b_{0},b_{0})\to(0,\infty)\) be a smooth function and let \(\|\beta\|_{\alpha}<b_{0}\). Define
\[F(y)=\sqrt{\alpha(y,y)}\cdot\phi(s),\qquad s=\frac{\beta(y)}{\sqrt{\alpha(y, y)}}. \tag{2}\]
We will use a lemma from [7], see also [19].
**Lemma 1** ([7]): _The function \(F\) defined in formula (2) is a Minkowski norm for any \(\alpha\) and \(\beta\) with \(\|\beta\|_{\alpha}<b_{0}\) if and only if the function \(\phi(s)\) satisfies_
\[\phi(s)>0,\qquad\left(\phi(s)-s\phi^{\prime}(s)\right)+(b^{2}-s^{2})\phi^{ \prime\prime}(s)>0, \tag{3}\]
_where \(s\) and \(b\) are arbitrary real numbers with \(|s|\leq b<b_{0}\)._
The \(\mathrm{Ad}(H)\)-invariant function \(F\) defined on \(\mathfrak{m}\) by formula (2) and which satisfies conditions (3) obviously determines a \(G\)-invariant Finsler metric on \(M=G/H\). This metric is called homogeneous \((\alpha,\beta)\) metric. The notion of an \((\alpha,\beta)\) metric naturally covers Randers metrics for \(\phi(s)=1+s\), quadratic metrics for \(\phi(s)=(1+s)^{2}\) and many other important Finsler metrics, see for example [7] or [19] for more information.
Recall that the standard concepts in Finsler geometry can be found for example in monographs [3] by D. Bao, S.-S. Chern and Z. Shen or [8] by S. Deng. We consider the _Chern connection_, which is the unique linear connection on the pullback vector bundle \(\pi^{*}TM\) over \(TM_{0}\) which is torsion free and almost \(g\)-compatible. Here \(g\) is the fundamental tensor related with the metric \(F\). It allows to define the derivative along a curve and geodesics in \(M\).
There are many works in which homogeneous geodesics were studied in various settings, in Riemannian geometry, pseudo-Riemannian geometry, affine geometry and Finsler geometry. See for example [1], [2], [9], [10], [11], [13], [14], [15], [16], [17] and the references therein. A geodesic \(\gamma(s)\) through the point \(p\in M\) is _homogeneous_ if it is an orbit of a one-parameter subgroup of the group \(G=I_{0}(M)\) of isometries. If \((M,F)\) is identified with the homogeneous space \((G/H,F)\), there exists a nonzero _geodesic vector_\(X\in\mathfrak{g}\) such that \(\gamma(t)=\exp(tX)(p)\) for all \(t\in\mathbb{R}\). A homogeneous space \((G/H,F)\) is called a Finsler _geodesic orbit space_ or just _g.o. space_, if every geodesic of \((G/H,F)\) is homogeneous.
A homogeneous manifold \((M,F)\) may admit more presentations as a homogeneous space in the form \((G/H,F)\), corresponding to various transitive isometry groups. There are examples such that \((G/H,F)\) is not a g.o. space, but it is a g.o. space in a presentation \((\widetilde{G}/\widetilde{H},F)\), where \(\widetilde{G}\supset G\). In a g.o. space \((G/H,F)\), we investigate sets of geodesic vectors which generate all geodesics through a fixed point. The concept of geodesic graphs comes from the work of J. Szenthe [20], in the affine setting. Later, it was used by O. Kowalski with coauthors in the Riemannian setting an by the second author in the Finslerian setting, see the recent works [9], [10], [11], [13], for example. A _geodesic graph_ is an \(\operatorname{Ad}(H)\)-equivariant map \(\xi\colon\mathfrak{m}\to\mathfrak{h}\) such that \(X+\xi(X)\) is a geodesic vector for each \(o\neq X\in\mathfrak{m}\). If the vector \(\xi(X)\) is uniquely determined, then the map \(\xi\) is \(\operatorname{Ad}(H)\)-equivariant and we are interested in the algebraic structure of the mapping \(\xi\). If there are more choices for the vector \(\xi(X)\), we want to define geodesic graph in a way that the algebraic structure of the mapping \(\xi\) is as simple as possible. In [15], O. Kowalski and S. Nikcevic developed a theory of Riemannian geodesic graphs. The existence of a linear geodesic graph is equivalent with the natural reductivity of the space \(G/H\). Or, components of the geodesic graph are rational functions \(\xi_{i}=P_{i}/P\), where \(P_{i}\) and \(P\) are homogeneous polynomials and it holds \(\deg(P_{i})=\deg(P)+1\).
Geodesic graphs on many Riemannian g.o. manifolds were constructed in several works, see for example the recent survey paper [9] by the author for more details and references and also for many important classes of Riemannian g.o. spaces. Further references and a structural approach to Riemannian g.o. manifolds using Lie theory can be found in the recent papers [14] and [17] by C.S. Gordon and Yu.G. Nikonorov. Recent results about Riemannian g.o. spaces of special types were obtained for example in [1], [2] or [6]. Finsler g.o. spaces attained attention recently. In [21], Z. Yan and S. Deng studied relations of special Finsler g.o. spaces and Riemannian g.o. spaces. Particular results were obtained for Randers g.o. metrics and for weakly symmetric metrics. Invariant Randers g.o. metrics on spheres \(S^{2n+1}=\mathrm{U}(n+1)/\mathrm{U}(n)\) were also considered in this paper. In [10], the second author investigated invariant Randers geodesic orbit metrics on modified H-type groups and he constructed Finslerian geodesic graphs on these geodesic orbit manifolds. In [11], the second author investigated special families of weakly symmetric Finsler g.o. metrics on modified H-type groups which were studied also in [21]. The relation of Finslerian geodesic graphs with the Riemannian geodesic graphs of the underlying Riemannian metric was studied.
In [12], second author studied general Finsler \((\alpha,\beta)\) metrics which arise from a Riemannian geodesic orbit metric \(\alpha\) and a one-form \(\beta\). In particular, the class of \((\alpha,\beta)\) metrics covers all Randers metrics as a special case. Geodesic lemma, which is the formula for characterization of geodesic vectors, was derived in terms of the Riemannian metric \(\alpha\) and the one-form \(\beta\). The existence of a suitable reductive decomposition \(\mathfrak{g}=\mathfrak{m}+\mathfrak{h}\) was proved in which the relation of Riemannian geodesic graph with the Finslerian geodesic graph can be easily described, in a suitable group extension, without the explicit construction of
the geodesic graph. As a consequence, it was proved that all \((\alpha,\beta)\) metrics such that \(\alpha\) is a Riemannian geodesic orbit metric are Finsler geodesic orbit metrics, possibly with respect to a bigger group of isometries. It was an open question whether this group extension is necessary only for the particular theoretical construction in the proof there or whether there are examples which require this extension to obtain the geodesic orbit property for the Finsler metrics. In the special situation when the Riemannian g.o. metric \(\alpha\) is naturally reductive, an alternative description of Finslerian geodesic graph with respect to the naturally reductive decomposition was also given, again, without the explicit construction of geodesic graph. The construction was illustrated with one example which is not naturally reductive.
In the present paper, we are going to illustrate these construction on spheres, and we observe interesting phenomena. For the list of Riemannian geodesic orbit metrics on spheres we refer for example for the monograph [4] by V. Berestovskii and Yu. Nikonorov or for the references therein for the detailed results in particular cases. In each case, we determine invariant Finsler \((\alpha,\beta)\) metrics and analyze the geodesic orbit property. In each of the series, we work just with the low-dimensional example. Due to the explicit construction, it can be easily seen that similar results hold for the same type of spaces in higher dimensions. For the general case, we argue using Corollary 5 below.
Spheres \(S^{2n+1}=\mathrm{SU}(n+1)/\mathrm{SU}(n)\) with any geodesic orbit Riemannian metric \(\alpha\) with respect to the group \(\mathrm{SU}(n+1)\) are also geodesic orbit with respect to this group with all \((\alpha,\beta)\) Finsler metrics which arise from the Riemannian metric \(\alpha\). However, for the easy description of the relation of Riemannian and Finslerian geodesic graph, we need the extended expression \(S^{2n+1}=\mathrm{U}(n+1)/\mathrm{U}(n)\). But this space with Riemannian g.o. metrics is naturally reductive and it is more convenient to use the naturally reductive decomposition, hence we illustrate this approach. For spheres \(S^{4n+3}=\mathrm{Sp}(n+1)/\mathrm{Sp}(n)\), there is a 2-parameter family of Riemannian geodesic orbit metrics. There are many invariant \((\alpha,\beta)\) metrics, but none of them is geodesic orbit with respect to \(\mathrm{Sp}(n+1)\). For each invariant \((\alpha,\beta)\) metric, the isometry group can be extended to \(\mathrm{Sp}(n+1)\cdot\mathrm{U}(1)\) and the relation of Riemannian and Finslerian geodesic graph can be easily described with respect to this group. The isometry group of all these Riemannian metrics can be further extented to \(\mathrm{Sp}(n+1)\cdot\mathrm{Sp}(1)\) to obtain natural reductivity, but there are no invariant \((\alpha,\beta)\) metrics with respect to this group. We also show that the exceptional spheres which admit Riemannian geodesic orbit metrics do not admit invariant purely Finsler \((\alpha,\beta)\) metrics. So our study concludes with the classification of geodesic orbit Finsler \((\alpha,\beta)\) metrics on spheres. We also examine the projective spaces to see that other projective spaces than those \(\mathbb{R}P^{n}\) which arise naturally from the spheres do not admit invariant Finsler \((\alpha,\beta)\) metrics.
Geodesic graphs
We recall geodesic lemma, which characterizes geodesic vectors. For general Finsler metrics it was derived by D. Latifi.
**Lemma 2** ([16]): _Let \((G/H,F)\) be a homogeneous Finsler space with a reductive decomposition \(\mathfrak{g}=\mathfrak{m}+\mathfrak{h}\). A nonzero vector \(Y\in\mathfrak{g}\) is a geodesic vector if and only if it holds_
\[g_{Y_{\mathfrak{m}}}(Y_{\mathfrak{m}},[Y,U]_{\mathfrak{m}})=0\qquad\forall U \in\mathfrak{m}, \tag{4}\]
_where the subscript \(\mathfrak{m}\) indicates the projection of vector from \(\mathfrak{g}\) to \(\mathfrak{m}\)._
In [12], the second author expressed this condition for Finsler \((\alpha,\beta)\) metrics in terms of the Riemannian metric \(\alpha\) and the one-form \(\beta\). Recall that the one-form \(\beta\) corresponds to an \(\alpha\)-equivalent vector \(V\in\mathfrak{m}\) related with \(\beta\) by the condition \(\beta(U)=\alpha(V,U)\) for all \(U\in\mathfrak{m}\). We further denote by \(\zeta=\zeta(X)\) the following function on \(\mathfrak{m}\) characteristic for the particular \((\alpha,\beta)\) metric. We write \(\phi^{\prime}=\mathrm{d}\phi/\mathrm{d}s\).
\[\zeta=\zeta(X)=\frac{\alpha\phi^{\prime}}{\sqrt{\alpha}\phi-\beta\phi^{ \prime}}. \tag{5}\]
**Lemma 3** ([12]): _Let \(F=\sqrt{\alpha}\cdot\phi(s)\) be a homogeneous Finsler \((\alpha,\beta)\) metric on \(G/H\), let \(\mathfrak{g}=\mathfrak{m}+\mathfrak{h}\) be a reductive decomposition and \(V\in\mathfrak{m}\) be the vector \(\alpha\)-equivalent with \(\beta\). The vector \(X+\xi(X)\), where \(X\in\mathfrak{m}\) and \(\xi(X)\in\mathfrak{h}\), is a geodesic vector if and only if it holds_
\[\alpha\Big{(}X+\zeta(X)\cdot V,[X+\xi(X),U]_{\mathfrak{m}}\Big{)}=0,\qquad \forall U\in\mathfrak{m}. \tag{6}\]
Let us also recall the algebraic feature which is necessary for the further results.
**Proposition 4** ([12]): _Let \((G/H,\alpha)\) be a Riemannian geodesic orbit space with a reductive decomposition \(\mathfrak{g}=\mathfrak{h}+\mathfrak{m}\) and let \(V\in\mathfrak{m}\) be an \(\mathrm{Ad}(H)\)-invariant vector. Then either there exist an invariant reductive decomposition \(\mathfrak{g}=\mathfrak{h}+\mathfrak{m}^{\prime}\) such that the projection of the vector \(V\) to \(\mathfrak{m}^{\prime}\) is in the center of \(\mathfrak{g}\), or \(M=G/H\) can be expressed in an extended form \(M=\widetilde{G}/\widetilde{H}\) and the above property holds with respect to a decomposition \(\widetilde{\mathfrak{g}}=\widetilde{\mathfrak{h}}+\mathfrak{m}^{\prime}\)._
The crucial step in the proof is the fact that the operator \(\mathrm{ad}(V)|_{\mathfrak{m}}\) acts skew-symmetrically on the suitable reductive complement \(\mathfrak{m}\) and we can define new formal operator \(W=\mathrm{ad}(V)|_{\mathfrak{m}}\) and extend the isotropy algebra by this operator, if necessary. Consequently, the vector \(V-W\) generates nontrivial center in \(\widetilde{\mathfrak{g}}\). The following easy corollary will be crucial later.
**Corollary 5**: _Let \((G/H,\alpha)\) be a Riemannian geodesic orbit space. The Riemannian metric \(\alpha\) can be modified into a Finslerian \((\alpha,\beta)\) metric \(F\) if and only if either \(\mathfrak{g}\) has nontrivial center or there exists the extended expression of the manifold in the form \(\widetilde{G}/\widetilde{H}\), where \(\widetilde{\mathfrak{g}}\) has nontrivial center._
Proof.: Projection of any vector from the center of \(\widetilde{\mathfrak{g}}\) into any reductive complement \(\mathfrak{m}\) gives an \(\operatorname{Ad}(H)\)-invariant vector \(V\in\mathfrak{m}\) and invariant Finsler \((\alpha,\beta)\) metrics, according to Lemma 1. On the other hand, the existence of an invariant Finsler \((\alpha,\beta)\) metric requires an \(\operatorname{Ad}(H)\)-invariant vector \(V\in\mathfrak{m}\) and, according to Proposition 4, there must be a nontrivial center in \(\widetilde{\mathfrak{g}}\).
Proposition 4 is also suitable for an easy construction of Finslerian geodesic graph using the Riemannian one. We shall illustrate these constructions in the next section.
**Theorem 6** ([12]): _Let \((G/H,\alpha)\) be a Riemannian geodesic orbit space. Then all invariant Finsler \((\alpha,\beta)\) metrics \(F=\sqrt{\alpha}\cdot\phi(s)\) on \(G/H\) are geodesic orbit metrics, possibly with respect to a bigger group of isometries. The geodesic graph is_
\[\xi(X)=\xi_{R}(X+\zeta(X)\cdot V), \tag{7}\]
_where \(\xi_{R}\) is the Riemannian geodesic graph in a suitable group extension and with respect to a suitable reductive decomposition._
If the Riemannian geodesic orbit metric \(\alpha\) on \(M\) is naturally reductive, the decomposition from Proposition 4 is usually not naturally reductive. An alternative construction in the naturally reductive decomposition is the following.
**Proposition 7** ([12]): _Let \((G/H,\alpha)\) be a naturally reductive Riemannian homogeneous space with the naturally reductive decomposition \(\mathfrak{g}=\mathfrak{m}+\mathfrak{h}\) and with a nontrivial center \(\mathfrak{c}\subset\mathfrak{g}\). For any Finsler \((\alpha,\beta)\) metric \(F=\sqrt{\alpha}\cdot\phi(s)\) determined by the Riemannian metric \(\alpha\) and an \(\operatorname{Ad}(H)\)-invariant vector \(V=C_{\mathfrak{m}}\), such that \(C_{\mathfrak{m}}+C_{\mathfrak{h}}\in\mathfrak{c}\), the geodesic graph is_
\[\xi(X)=-\zeta(X)\cdot C_{\mathfrak{h}}. \tag{8}\]
## 3 Geodesic orbit \((\alpha,\beta)\) metrics on spheres
We are going to consider all expressions of spheres as homogeneous spaces with Riemannian geodesic orbit metrics, according to the classification table in [4]. We analyze the existence of invariant Finsler \((\alpha,\beta)\) metrics and we construct geodesic graphs on them to illustrate Theorem 6 and Proposition 7. For the infinite series of spheres, we shall describe the low-dimensional examples explicitly.
We start with the standard representation of the sphere \(S^{n}=\operatorname{SO}(n+1)/\operatorname{SO}(n)\). It is well known that it admits a one-parameter family of invariant Riemannian metrics and these metrics are naturally reductive. The isotropy representation does not admit any invariant vectors in \(\mathfrak{m}\) and hence there are no invariant Finsler \((\alpha,\beta)\) metrics. We shall analyze other expressions of some spheres, for smaller isometry groups, which admit more invariant geodesic orbit metrics and also some invariant vectors in \(\mathfrak{m}\), and hence some invariant Finsler \((\alpha,\beta)\) metrics. Namely, we treat \(S^{2n+1}=\operatorname{SU}(n+1)/\operatorname{SU}(n)\), which admits a group extension and the expression \(S^{2n+1}={\rm U}(n+1)/{\rm U}(n)\), and \(S^{4n+3}={\rm Sp}(n+1)/{\rm Sp}(n)\), which admits two possible group extensions; see also [4] for details about Riemannian geodesic orbit metrics. Finally, we show that exceptional spheres do not admit invariant purely Finsler \((\alpha,\beta)\) metrics and we classify all Finsler \((\alpha,\beta)\) geodesic orbit metrics on spheres.
### \(S^{2n+1}={\rm SU}(n+1)/{\rm SU}(n)\)
For \(n=1\), we are reduced to the situation \(S^{3}={\rm SU}(2)\). In this case, the geodesic orbit Riemannian metrics \(\alpha\) are multiples of the bi-invariant metric. The isometry group can be enlarged and this example can be treated in the framework of the next section. To illustrate the general behaviour of the present series, we shall consider \(n=2\) and hence \(S^{5}={\rm SU}(3)/{\rm SU}(2)\). On the Lie algebra level, we choose the basis \(\{H_{1},H_{2},H_{3}\}\) of the Lie algebra \(\mathfrak{h}=\mathfrak{su}(2)\), given by the matrices
\[H_{1}=\left(\begin{array}{ccc}i&0&0\\ 0&-i&0\\ 0&0&0\end{array}\right),H_{2}=\left(\begin{array}{ccc}0&i&0\\ i&0&0\\ 0&0&0\end{array}\right),H_{3}=\left(\begin{array}{ccc}0&1&0\\ -1&0&0\\ 0&0&0\end{array}\right).\]
For the reductive complement \(\mathfrak{m}\) in the decomposition \(\mathfrak{su}(3)=\mathfrak{su}(2)+\mathfrak{m}\), we choose the basis \(B=\{X_{1},X_{2},Y_{1},Y_{2},Z\}\), given by the matrices
\[X_{1}=\left(\begin{array}{ccc}0&0&1\\ 0&0&0\\ -1&0&0\end{array}\right),\quad X_{2}=\left(\begin{array}{ccc}0&0&i\\ 0&0&0\\ i&0&0\end{array}\right),\] \[Y_{1}=\left(\begin{array}{ccc}0&0&0\\ 0&0&1\\ 0&-1&0\end{array}\right),\quad Y_{2}=\left(\begin{array}{ccc}0&0&0\\ 0&0&i\\ 0&i&0\end{array}\right),\quad Z=\left(\begin{array}{ccc}-\frac{i}{2}&0&0\\ 0&-\frac{i}{2}&0\\ 0&0&i\end{array}\right).\]
By the straightforward calculations, we obtain the Lie bracket relations on \(\mathfrak{h}={\rm span}\{H_{i}\}\), which are
\[[H_{1},H_{2}]=-2H_{3},\quad[H_{1},H_{3}]=2H_{2},\quad[H_{2},H_{3}]=-2H_{1}\]
and the Lie bracket relations on \(\mathfrak{m}={\rm span}\{X_{i},Y_{i},Z\}\), which are
\[[X_{1},X_{2}]=-2Z+H_{1},\quad[X_{1},Y_{1}]=-H_{3},\quad[X_{2},Y_{1}]=-H_{2},\] \[[X_{1},Y_{2}]=H_{2},\quad[X_{2},Y_{2}]=-H_{3},\quad[Y_{1},Y_{2}]=-2Z-H_{1},\] \[[X_{1},Z]=\frac{3}{2}X_{2},\quad[X_{2},Z]=-\frac{3}{2}X_{1},\quad[Y_{1},Z]=\frac{3}{2}Y_{2},\quad[Y_{2},Z]=-\frac{3}{2}Y_{1}. \tag{9}\]
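These relations can be double-checked symbolically. The following minimal SymPy sketch (an illustration added here, not part of the original computation) verifies a sample of the brackets in (9) directly from the matrices above; it uses only standard SymPy and the matrices already defined in the text:

```python
import sympy as sp

I = sp.I
H1 = sp.Matrix([[I, 0, 0], [0, -I, 0], [0, 0, 0]])
H3 = sp.Matrix([[0, 1, 0], [-1, 0, 0], [0, 0, 0]])
Z  = sp.Matrix([[-I/2, 0, 0], [0, -I/2, 0], [0, 0, I]])
X1 = sp.Matrix([[0, 0, 1], [0, 0, 0], [-1, 0, 0]])
X2 = sp.Matrix([[0, 0, I], [0, 0, 0], [I, 0, 0]])
Y1 = sp.Matrix([[0, 0, 0], [0, 0, 1], [0, -1, 0]])
Y2 = sp.Matrix([[0, 0, 0], [0, 0, I], [0, I, 0]])

def bracket(A, B):
    """Matrix commutator [A, B] = AB - BA."""
    return A * B - B * A

# a sample of the relations (9)
assert bracket(X1, X2) == -2 * Z + H1
assert bracket(Y1, Y2) == -2 * Z - H1
assert bracket(X2, Y2) == -H3
assert bracket(X1, Z) == sp.Rational(3, 2) * X2
print("sample relations from (9) verified")
```

The remaining relations in (9), as well as the operators (10) below, can be verified in the same way.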
The adjoint action of \(\mathfrak{h}=\mathfrak{su}(2)\) on \(\mathfrak{m}\) is generated by the operators
\[{\rm ad}(H_{1})|_{\mathfrak{m}}=A_{12}-A_{34},\quad{\rm ad}(H_{2})|_{ \mathfrak{m}}=A_{14}-A_{23},\quad{\rm ad}(H_{3})|_{\mathfrak{m}}=-A_{13}-A_{24}. \tag{10}\]
From the adjoint action given by formulas (10) we see that there is a 2-parameter family of invariant scalar products on \(\mathfrak{m}\) which determine invariant Riemannian
metrics on \(\mathrm{SU}(3)/\mathrm{SU}(2)\). Up to a multiple, these scalar products are determined by the condition that the basis \(\{X_{1},X_{2},Y_{1},Y_{2},\frac{1}{\sqrt{c}}Z\}\), for some \(c>0\), is orthonormal. These Riemannian metrics are known to be weakly symmetric and hence geodesic orbit, see for example [4], [22], [23], [24].
**Proposition 8**: _Each of the \(2\)-parameter family of Riemannian geodesic orbit metrics on the sphere \(S^{5}=\mathrm{SU}(3)/\mathrm{SU}(2)\) can be modified in one direction to obtain invariant Finsler \((\alpha,\beta)\) metrics. All these metrics are geodesic orbit with respect to the group \(\mathrm{SU}(3)\)._
_Proof._ The vector \(Z\in\mathfrak{m}\) is \(\mathrm{Ad}(H)\)-invariant and hence its \(\alpha\)-equivalent one-form \(\beta\), together with the Riemannian metric \(\alpha\) and a function \(\phi(s)\), according to Lemma 1, determines invariant Finsler \((\alpha,\beta)\) metrics. We are now going to illustrate the geodesic orbit property of these \((\alpha,\beta)\) metrics. We use equation (6) in the form
\[\alpha\Big{(}X,[\xi(X),U]_{\mathfrak{m}}\Big{)} = -\alpha\Big{(}X,[X,U]_{\mathfrak{m}}\Big{)}-\alpha\Big{(}\zeta(X )\cdot V,[X,U]_{\mathfrak{m}}\Big{)}, \tag{11}\]
which allows us to construct both the Riemannian geodesic graph \(\xi_{R}\) and the Finslerian geodesic graph \(\xi\). We write each vector \(X\in\mathfrak{m}\) and each vector \(F\in\mathfrak{h}\) in the forms
\[X = x_{1}X_{1}+x_{2}X_{2}+x_{3}Y_{1}+x_{4}Y_{2}+zZ,\] \[F = \xi_{1}H_{1}+\ldots+\xi_{3}H_{3},\]
and we consider formula (11) to determine the components \(\xi_{i}\) depending on \(x_{j}\) and on \(z\). We obtain a system of linear equations and we write down the extended matrix \((\mathbb{A}|\mathbb{B}|\mathbb{C})\) of this system, where we separate the two right-hand sides according to the two terms on the right-hand side of formula (11). The second right-hand side, which we denote by \((\mathbb{C})\) and which corresponds to the last term in formula (11), is the Finslerian term. The extended matrix is
\[(\mathbb{A}|\mathbb{B}|\mathbb{C})=\left(\begin{array}{ccc|c|c}x_{2}&x_{4}&-x_{3}&x_{2}z(\frac{3}{2}-2c)&-2x_{2}c\zeta v\\ -x_{1}&-x_{3}&-x_{4}&-x_{1}z(\frac{3}{2}-2c)&2x_{1}c\zeta v\\ -x_{4}&x_{2}&x_{1}&x_{4}z(\frac{3}{2}-2c)&-2x_{4}c\zeta v\\ x_{3}&-x_{1}&x_{2}&-x_{3}z(\frac{3}{2}-2c)&2x_{3}c\zeta v\end{array}\right).\]
The rank of this system is equal to \(3\) and it is solvable by Cramer's rule. In the Riemannian case and also in the Finslerian case, the solution is the unique geodesic graph. After cancelling out the common factor, its components are \(\xi_{i}=\frac{P_{i}}{P}\), where \(\deg(P_{i})-1=\deg(P)=2\). This shows that all the Finsler \((\alpha,\beta)\) metrics which arise from the Riemannian geodesic orbit metric \(\alpha\) and the one-form \(\beta\) which is \(\alpha\)-equivalent with the vector \(Z\in\mathfrak{m}\) are also geodesic orbit metrics. \(\square\)
We remark that the geodesic graph constructed from the system \((\mathbb{A}|\mathbb{B}|\mathbb{C})\) above proves the geodesic orbit property of the space \(G/H\) with respect to the group \(G\), but it is not constructed according to Theorem 6.
### \(S^{2n+1}=\mathrm{U}(n+1)/\mathrm{U}(n)\)
We continue with the case \(n=2\), hence we have \(S^{5}=\mathrm{U}(3)/\mathrm{U}(2)\). We are going to illustrate Proposition 7 for the space from the previous section, in the extended isometry group \(\widetilde{G}=\mathrm{U}(3)\). The space with the Riemannian metrics \(\alpha\) considered above is known to be naturally reductive with respect to the isometry group \(\mathrm{U}(3)\), see for example [4], [22], [23], [24]. We denote further
\[H_{0}=\left(\begin{array}{ccc}i&0&0\\ 0&i&0\\ 0&0&0\end{array}\right),\quad\bar{Z}=\left(\begin{array}{ccc}i(1-2c)&0&0\\ 0&i(1-2c)&0\\ 0&0&i\end{array}\right)\]
and consider the basis \(\{H_{0},H_{1},H_{2},H_{3}\}\) of the Lie algebra \(\widetilde{\mathfrak{h}}=\mathfrak{u}(2)\). We have \(\mathfrak{u}(3)=\mathfrak{u}(2)+\mathfrak{m}\), where the subspace \(\mathfrak{m}\) is again generated by the basis \(B\), as in the previous section. The adjoint action of \(\widetilde{\mathfrak{h}}\) on \(\mathfrak{m}\) is given by the operators
\[\mathrm{ad}(H_{1})|_{\mathfrak{m}}=A_{12}-A_{34},\quad\mathrm{ad }(H_{2})|_{\mathfrak{m}}=A_{14}-A_{23},\] \[\mathrm{ad}(H_{3})|_{\mathfrak{m}}=-A_{13}-A_{24},\quad\mathrm{ ad}(H_{0})|_{\mathfrak{m}}=A_{12}+A_{34}. \tag{12}\]
The invariant scalar products on \(\mathfrak{m}\) are the same as in the previous section and they determine invariant Riemannian metrics also on \(\mathrm{U}(3)/\mathrm{U}(2)\). Again, the vector \(Z\in\mathfrak{m}\) is \(\mathrm{Ad}(H)\)-invariant and hence, together with a function \(\phi(s)\), according to Lemma 1, it determines invariant Finsler \((\alpha,\beta)\) metrics. For \(F\in\widetilde{\mathfrak{h}}\), we write
\[F = \xi_{1}H_{1}+\ldots+\xi_{3}H_{3}+\xi_{4}H_{0}\]
and we consider again formula (11). We obtain the system of linear equations whose extended matrix is
\[(\mathbb{A}|\mathbb{B}|\mathbb{C})=\left(\begin{array}{cccc|c|c}x_{2}&x_{4}&-x_{3}&x_{2}&x_{2}z(\frac{3}{2}-2c)&-2x_{2}c\zeta v\\ -x_{1}&-x_{3}&-x_{4}&-x_{1}&-x_{1}z(\frac{3}{2}-2c)&2x_{1}c\zeta v\\ -x_{4}&x_{2}&x_{1}&x_{4}&x_{4}z(\frac{3}{2}-2c)&-2x_{4}c\zeta v\\ x_{3}&-x_{1}&x_{2}&-x_{3}&-x_{3}z(\frac{3}{2}-2c)&2x_{3}c\zeta v\end{array}\right).\]
Obviously, there exists a linear geodesic graph \(\xi(X)=z(\frac{3}{2}-2c)\cdot H_{0}\). We now pass to the naturally reductive decomposition \(\widetilde{\mathfrak{g}}=\widetilde{\mathfrak{h}}+\mathfrak{m}^{\prime}\), where \(\mathfrak{m}^{\prime}\) is generated by the basis \(B^{\prime}=\{X_{i},Y_{i},\bar{Z}\}\) with \(\bar{Z}=Z+\xi(Z)=Z+(\frac{3}{2}-2c)\cdot H_{0}\). The nonzero projections of the new Lie brackets of elements from \(\mathfrak{m}^{\prime}\) into \(\mathfrak{m}^{\prime}\) are
\[\begin{array}{ll}[X_{1},X_{2}]_{\mathfrak{m}^{\prime}}=-2\bar{Z},\quad[Y_{ 1},Y_{2}]_{\mathfrak{m}^{\prime}}=-2\bar{Z},\\ [X_{1},\bar{Z}]_{\mathfrak{m}^{\prime}}=2cX_{2},\quad[X_{2},\bar{Z}]_{ \mathfrak{m}^{\prime}}=-2cX_{1},\quad[Y_{1},\bar{Z}]_{\mathfrak{m}^{\prime}}=2 cY_{2},\quad[Y_{2},\bar{Z}]_{\mathfrak{m}^{\prime}}=-2cY_{1}.\end{array}\]
The adjoint action of \(\widetilde{\mathfrak{h}}=\mathfrak{u}(2)\) on \(\mathfrak{m}^{\prime}\) is given again by the operators (12). Formula (11) in the naturally reductive decomposition \(\widetilde{\mathfrak{g}}=\widetilde{\mathfrak{h}}+\mathfrak{m}^{\prime}\) is in the form
\[\alpha\Big{(}X,[\xi(X),U]_{\mathfrak{m}}\Big{)} = -\alpha\Big{(}\zeta(X)\cdot V,[X,U]_{\mathfrak{m}}\Big{)},\]
because the first term on the right-hand side vanishes. The matrix of the corresponding system of equations is now
\[(\mathbb{A}|\mathbb{C})=\left(\begin{array}{cccc|c}x_{2}&x_{4}&-x_{3}&x_{2}&-2 x_{2}c\zeta v\\ -x_{1}&-x_{3}&-x_{4}&-x_{1}&2x_{1}c\zeta v\\ -x_{4}&x_{2}&x_{1}&x_{4}&-2x_{4}c\zeta v\\ x_{3}&-x_{1}&x_{2}&-x_{3}&2x_{3}c\zeta v\end{array}\right).\]
We see that there is the solution \(\xi(X)=-2c\zeta(X)\cdot H_{0}\). Because \(\bar{Z}+2cH_{0}\in\mathfrak{c}(\mathfrak{g})\), this solution is the Finslerian geodesic graph, according to Proposition 7.
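Explicitly, with the matrices above, \(\bar{Z}+2cH_{0}=\mathrm{diag}\big{(}i(1-2c)+2ci,\,i(1-2c)+2ci,\,i\big{)}=\mathrm{diag}(i,i,i)\), which spans the center of \(\mathfrak{u}(3)\).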
**Proposition 9**: _Each of the \(2\)-parameter family of Riemannian naturally reductive metrics on the sphere \(S^{2n+1}=\mathrm{U}(n+1)/\mathrm{U}(n)\) can be modified in one direction to obtain invariant Finsler \((\alpha,\beta)\) metrics. All these metrics are geodesic orbit with respect to the group \(\mathrm{U}(n+1)\)._
_Proof._ The existence of the \(2\)-parameter family of Riemannian naturally reductive metrics is well known, see for example [4], [22], [23], [24]. The detailed construction of these metrics, and also the statement of the proposition in the particular case \(n=2\), is clear from the example above. Concerning the general case, we use Corollary 5 and Proposition 7. Let \(\mathfrak{g}=\mathfrak{m}+\mathfrak{h}\) be the naturally reductive decomposition and \(\mathfrak{c}\subset\mathfrak{g}=\mathfrak{u}(n+1)\) be the center. For any \(\operatorname{Ad}(H)\)-invariant vector \(V=C_{\mathfrak{m}}\in\mathfrak{m}\), there is a unique vector \(C_{\mathfrak{h}}\in\mathfrak{h}\) such that \(C_{\mathfrak{m}}+C_{\mathfrak{h}}\in\mathfrak{c}\). The geodesic graph for any invariant Finsler \((\alpha,\beta)\) metric \(F\) determined by the Riemannian metric \(\alpha\), the one-form \(\beta\) which is \(\alpha\)-equivalent with \(V\) and a function \(\phi(s)\), according to Lemma 1, can be constructed using Proposition 7. This proves the geodesic orbit property of \((G/H,F)\). \(\square\)
### \(S^{4n+3}=\mathrm{Sp}(n+1)/\mathrm{Sp}(n)\)
For \(n=1\), we have \(S^{7}=\mathrm{Sp}(2)/\mathrm{Sp}(1)\). On the Lie algebra level, we choose a basis \(\{H_{1},H_{2},H_{3}\}\) of the Lie algebra \(\mathfrak{h}=\mathfrak{sp}(1)\), given by the matrices
\[H_{1}=\left(\begin{array}{cc}i&0\\ 0&0\end{array}\right),H_{2}=\left(\begin{array}{cc}j&0\\ 0&0\end{array}\right),H_{3}=\left(\begin{array}{cc}k&0\\ 0&0\end{array}\right).\]
For the reductive complement \(\mathfrak{m}\) in the decomposition \(\mathfrak{sp}(2)=\mathfrak{sp}(1)+\mathfrak{m}\), we choose the basis \(B=\{X_{1},\ldots,X_{4},Z_{1},\ldots,Z_{3}\}\), given by the matrices
\[X_{1}=\left(\begin{array}{cc}0&1\\ -1&0\end{array}\right),X_{2}=\left(\begin{array}{cc}0&i\\ i&0\end{array}\right),X_{3}=\left(\begin{array}{cc}0&j\\ j&0\end{array}\right),X_{4}=\left(\begin{array}{cc}0&k\\ k&0\end{array}\right),\] \[Z_{1}=\left(\begin{array}{cc}0&0\\ 0&i\end{array}\right),Z_{2}=\left(\begin{array}{cc}0&0\\ 0&j\end{array}\right),Z_{3}=\left(\begin{array}{cc}0&0\\ 0&k\end{array}\right).\]
By the straightforward calculations, we obtain the Lie bracket relations on \(\mathfrak{h}\), which are
\[[H_{1},H_{2}]=2H_{3},\quad[H_{1},H_{3}]=-2H_{2},\quad[H_{2},H_{3}]=2H_{1}.\]
The Lie bracket relations on \(\mathfrak{m}={\rm span}\{X_{i},Z_{j}\}\) are
\[[X_{1},X_{2}]=-2Z_{1}+2H_{1},\] \[[X_{1},X_{3}]=-2Z_{2}+2H_{2},\quad[X_{2},X_{3}]=2Z_{3}+2H_{3},\] \[[X_{1},X_{4}]=-2Z_{3}+2H_{3},\quad[X_{2},X_{4}]=-2Z_{2}-2H_{2}, \quad[X_{3},X_{4}]=\ 2Z_{1}+2H_{1},\]
\[[X_{1},Z_{1}]=X_{2},\quad[X_{2},Z_{1}]=-X_{1},\quad[X_{3},Z_{1}]=-X_{4},\quad[ X_{4},Z_{1}]=X_{3},\] \[[X_{1},Z_{2}]=X_{3},\quad[X_{2},Z_{2}]=X_{4},\quad[X_{3},Z_{2}]=-X _{1},\quad[X_{4},Z_{2}]=-X_{2},\] \[[X_{1},Z_{3}]=X_{4},\quad[X_{2},Z_{3}]=-X_{3},\quad[X_{3},Z_{3}]=X _{2},\quad[X_{4},Z_{3}]=-X_{1},\]
\[[Z_{1},Z_{2}]=2Z_{3},\quad[Z_{1},Z_{3}]=-2Z_{2},\quad[Z_{2},Z_{3}]=2Z_{1}.\]
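As a sanity check, these quaternionic relations can also be verified symbolically by embedding the quaternion units \(i,j,k\) as \(2\times 2\) complex blocks. The following SymPy sketch (an illustration, not part of the original text) tests a sample of the brackets above; the embedding is the standard one and no other assumptions are made:

```python
import sympy as sp

# embed quaternion units as 2x2 complex matrices: 1 -> I2, i -> qi, j -> qj, k -> qk
I2 = sp.eye(2)
qi = sp.Matrix([[sp.I, 0], [0, -sp.I]])
qj = sp.Matrix([[0, 1], [-1, 0]])
qk = sp.Matrix([[0, sp.I], [sp.I, 0]])
O = sp.zeros(2)

def emb(a, b, c, d):
    """4x4 complex matrix representing the quaternionic 2x2 matrix [[a, b], [c, d]]."""
    return sp.BlockMatrix([[a, b], [c, d]]).as_explicit()

H1 = emb(qi, O, O, O)
X1 = emb(O, I2, -I2, O)
X2 = emb(O, qi, qi, O)
Z1, Z2, Z3 = emb(O, O, O, qi), emb(O, O, O, qj), emb(O, O, O, qk)

br = lambda A, B: A * B - B * A

assert br(X1, X2) == -2 * Z1 + 2 * H1
assert br(X1, Z1) == X2
assert br(Z1, Z2) == 2 * Z3
print("sample sp(2) relations verified")
```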
The adjoint action of \(\mathfrak{h}\) on \(\mathfrak{m}\) is given by the operators
\[{\rm ad}(H_{1})|_{\mathfrak{m}}=A_{12}+A_{34},\quad{\rm ad}(H_{2})|_{\mathfrak{ m}}=A_{13}-A_{24},\quad{\rm ad}(H_{3})|_{\mathfrak{m}}=A_{14}+A_{23}. \tag{13}\]
There is a 7-parameter family of invariant scalar products on \(\mathfrak{m}\) and consequently of invariant Riemannian metrics on \(M=G/H\). The geodesic orbit property of these Riemannian metrics was studied also with respect to extended groups in [18], [22], [23], [24], see also [4] for the overview. We are now going to generalize this to Finsler \((\alpha,\beta)\) metrics and formulate the results in terms of geodesic graphs in the following sections. Without losing any Riemannian geodesic orbit metrics, we start with the 4-parameter family of invariant scalar products on \(\mathfrak{m}\). Up to a multiple, the basis \(B\) will be orthogonal with \(\langle X_{i},X_{i}\rangle=1\), \(\langle Z_{1},Z_{1}\rangle=c_{1}\), \(\langle Z_{2},Z_{2}\rangle=c_{2}\), \(\langle Z_{3},Z_{3}\rangle=c_{3}\). We obtain the 4-parameter family of invariant Riemannian metrics on \(G/H={\rm Sp}(2)/{\rm Sp}(1)\). We use again equation (6) in the form
\[\alpha\Big{(}X,[\xi(X),U]_{\mathfrak{m}}\Big{)} = -\alpha\Big{(}X,[X,U]_{\mathfrak{m}}\Big{)}-\alpha\Big{(}\zeta(X) \cdot V,[X,U]_{\mathfrak{m}}\Big{)}. \tag{14}\]
We write each vector \(X\in\mathfrak{m}\) and each vector \(F\in\mathfrak{h}\) in the forms
\[X = x_{1}X_{1}+\ldots+x_{4}X_{4}+z_{1}Z_{1}+\ldots+z_{3}Z_{3},\] \[F = \xi_{1}H_{1}+\ldots+\xi_{3}H_{3}\]
and consider formula (14), without the last Finslerian term. We obtain the system of linear equations whose extended matrix \((\mathbb{A}|\mathbb{B})\), without the Finslerian part \((\mathbb{C})\), is
\[\left(\begin{array}{ccc|c}x_{2}&x_{3}&x_{4}&(1-2c_{1})z_{1}x_{2}+(1-2c_{2})z_{2}x_{3}+(1-2c_{3})z_{3}x_{4}\\ -x_{1}&-x_{4}&x_{3}&-(1-2c_{1})z_{1}x_{1}+(1-2c_{2})z_{2}x_{4}-(1-2c_{3})z_{3}x_{3}\\ x_{4}&-x_{1}&-x_{2}&-(1-2c_{1})z_{1}x_{4}-(1-2c_{2})z_{2}x_{1}+(1-2c_{3})z_{3}x_{2}\\ -x_{3}&x_{2}&-x_{1}&(1-2c_{1})z_{1}x_{3}-(1-2c_{2})z_{2}x_{2}-(1-2c_{3})z_{3}x_{1}\\ 0&0&0&2z_{2}z_{3}(c_{3}-c_{2})\\ 0&0&0&2z_{1}z_{3}(c_{1}-c_{3})\\ 0&0&0&2z_{1}z_{2}(c_{2}-c_{1})\end{array}\right).\]
We see immediately that the system is solvable (and \(G/H\) is a g.o. space) for the Riemannian metric (with the first right-hand side \(\mathbb{B}\) only) if and only if \(c_{1}=c_{2}=c_{3}\): the last three rows have vanishing left-hand sides, so the right-hand entries \(2z_{2}z_{3}(c_{3}-c_{2})\), \(2z_{1}z_{3}(c_{1}-c_{3})\) and \(2z_{1}z_{2}(c_{2}-c_{1})\) must vanish for all \(z_{i}\). Hence precisely the corresponding 2-parameter family of invariant Riemannian metrics on \(\mathrm{Sp}(2)/\mathrm{Sp}(1)\) consists of geodesic orbit metrics.
**Proposition 10**: _For each metric from the \(2\)-parameter family of geodesic orbit metrics on the sphere \(S^{7}=\mathrm{Sp}(2)/\mathrm{Sp}(1)\), there exists a three-dimensional \(\mathrm{Ad}(H)\)-invariant subspace in \(\mathfrak{m}\). For each vector \(V\) from this subspace, the Riemannian geodesic orbit metric \(\alpha\) can be modified using this vector to obtain invariant Finsler \((\alpha,\beta)\) metrics. None of these purely Finsler metrics is geodesic orbit with respect to the group \(\mathrm{Sp}(2)\)._
_Proof._ The existence of the \(2\)-parameter family of Riemannian geodesic orbit metrics is well known, see for example [4], and they are explicitly described in the example above. From the Lie brackets and from the adjoint representation given by the operators (13) we see that there is a three-dimensional \(\mathrm{Ad}(H)\)-invariant subspace in \(\mathfrak{m}\), generated by the vectors \(Z_{1},\ldots,Z_{3}\). Hence, any nonzero vector \(V=v_{1}Z_{1}+\ldots+v_{3}Z_{3}\), together with a function \(\phi(s)\), determines invariant Finsler \((\alpha,\beta)\) metrics on \(M=G/H\), according to Lemma 1. Concerning the geodesic orbit condition, we substitute the condition \(c_{1}=c_{2}=c_{3}=c\) into the calculations above and we obtain the extended matrix \((\mathbb{A}|\mathbb{B}|\mathbb{C})\), now also with the Finslerian part \((\mathbb{C})\), in the form
\[\left(\begin{array}{ccc|c|c}x_{2}&x_{3}&x_{4}&(1-2c)(z_{1}x_{2}+z_{2}x_{3}+z_{3}x_{4})&2\zeta c(-v_{1}x_{2}-v_{2}x_{3}-v_{3}x_{4})\\ -x_{1}&-x_{4}&x_{3}&(1-2c)(-z_{1}x_{1}+z_{2}x_{4}-z_{3}x_{3})&2\zeta c(v_{1}x_{1}-v_{2}x_{4}+v_{3}x_{3})\\ x_{4}&-x_{1}&-x_{2}&(1-2c)(-z_{1}x_{4}-z_{2}x_{1}+z_{3}x_{2})&2\zeta c(v_{1}x_{4}+v_{2}x_{1}-v_{3}x_{2})\\ -x_{3}&x_{2}&-x_{1}&(1-2c)(z_{1}x_{3}-z_{2}x_{2}-z_{3}x_{1})&2\zeta c(-v_{1}x_{3}+v_{2}x_{2}+v_{3}x_{1})\\ 0&0&0&0&2\zeta c(z_{2}v_{3}-z_{3}v_{2})\\ 0&0&0&0&2\zeta c(z_{3}v_{1}-z_{1}v_{3})\\ 0&0&0&0&2\zeta c(z_{1}v_{2}-z_{2}v_{1})\end{array}\right).\]
We see that for a nonzero vector \(V\in\mathfrak{m}\), the last three rows have vanishing left-hand sides but, in general, non-vanishing right-hand sides; hence the system of equations corresponding to the above extended matrix is not solvable for general \(X\in\mathfrak{m}\) and \((G/H,F)\) is not a geodesic orbit space. \(\square\)
Proposition 10 provides the first example answering the open question posed in [12]. However, \((G/H,F)\) becomes a geodesic orbit space with respect to the bigger group \(\widetilde{G}\supset G\), according to Theorem 6. We shall analyze this situation in more detail below.
### \(S^{4n+3}=\mathrm{Sp}(n+1)\cdot\mathrm{U}(1)/\mathrm{Sp}(n)\cdot\mathrm{diag} (\mathrm{U}(1))\)
In the previous section we have seen that any vector of the form \(V=v_{1}Z_{1}+\ldots+v_{3}Z_{3}\) is \(\mathrm{Ad}(H)\)-invariant and determines an invariant Finsler \((\alpha,\beta)\) metric. For each such metric, the isotropy algebra can be extended by the formal operator \(\mathrm{ad}(V)\), according to Proposition 4 and its proof in [12], and we obtain \(\widetilde{\mathfrak{h}}\simeq\mathrm{sp}(1)\oplus\mathrm{u}(1)\). As a particular example, let us choose \(V=v_{1}Z_{1}\), \(v_{1}\in\mathbb{R}\). Then we have \(\widetilde{\mathfrak{h}}=\mathrm{span}\{\mathfrak{h},W_{1}\}\simeq\mathrm{sp}(1)\oplus\mathrm{u}(1)\) for the operator
\[W_{1}=\mathrm{ad}(Z_{1})|_{\mathfrak{m}}=2B_{23}-A_{12}+A_{34} \tag{15}\]
and we have the new expression of our manifold in the form \(S^{7}=\mathrm{Sp}(2)\cdot\mathrm{U}(1)/\mathrm{Sp}(1)\cdot\mathrm{diag}(\mathrm{U}(1))\). We will now consider the \(3\)-parameter family of Riemannian metrics corresponding to the scalar products from the previous section which satisfy the condition \(c_{2}=c_{3}\) and which are invariant with respect to \(\widetilde{\mathfrak{h}}\). The related invariant Minkowski norms and Finsler \((\alpha,\beta)\) metrics are determined by the vector \(V\) above and a function \(\phi(s)\), according to Lemma 1. Formula (14) gives us the system of linear equations whose extended matrix \((\mathbb{A}|\mathbb{B}|\mathbb{C})\) is
\[\left(\begin{array}{cccc|c|c}x_{2}&x_{3}&x_{4}&-x_{2}&(1-2c_{1})z_{1}x_{2}+(1-2c_{2})(z_{2}x_{3}+z_{3}x_{4})&-2\zeta x_{2}v_{1}c_{1}\\ -x_{1}&-x_{3}&x_{3}&x_{1}&-(1-2c_{1})z_{1}x_{1}+(1-2c_{2})(z_{2}x_{4}-z_{3}x_{3})&2\zeta x_{1}v_{1}c_{1}\\ x_{4}&-x_{1}&-x_{2}&x_{4}&-(1-2c_{1})z_{1}x_{4}+(1-2c_{2})(-z_{2}x_{1}+z_{3}x_{2})&2\zeta x_{4}v_{1}c_{1}\\ -x_{3}&x_{2}&-x_{1}&-x_{3}&(1-2c_{1})z_{1}x_{3}-(1-2c_{2})(z_{2}x_{2}+z_{3}x_{1})&-2\zeta x_{3}v_{1}c_{1}\\ 0&0&0&2z_{3}c_{2}&2z_{1}z_{3}(c_{1}-c_{2})&2\zeta c_{1}z_{3}v_{1}\\ 0&0&0&-2z_{2}c_{2}&2z_{1}z_{2}(c_{2}-c_{1})&-2\zeta c_{1}z_{2}v_{1}\end{array}\right).\]
The rank of this system is equal to four and we see that it is solvable. Hence the Finsler \((\alpha,\beta)\) metrics determined by \(V=v_{1}Z_{1}\) are geodesic orbit metrics. However, for the components of the Riemannian geodesic graph \(\xi^{R}\), after cancelling out the common factor, we have \(\xi^{R}_{i}=P_{i}/P\), where \(\deg(P_{i})-1=\deg(P)=2\), and it is not easy to describe the Finslerian geodesic graph \(\xi\) using \(\xi^{R}\). To do so, we change the reductive decomposition, according to Proposition 4. We put \(\bar{Z}_{1}=Z_{1}-W_{1}\) (hence \(\bar{Z}_{1}\in\mathfrak{c}(\widetilde{\mathfrak{g}})\) and \(Z_{1}=\bar{Z}_{1}+W_{1}\)) and \(\mathfrak{m}^{\prime}=\mathrm{span}\{X_{i},\bar{Z}_{1},Z_{2},Z_{3}\}\). It holds that
\[[H_{1},H_{2}]=2H_{3},\quad[H_{1},H_{3}]=-2H_{2},\quad[H_{2},H_{3}]=2H_{1}, \quad[H_{i},W_{1}]=0\]
and the projections of the Lie brackets of generators of the space \(\mathfrak{m}^{\prime}\) onto \(\mathfrak{m}^{\prime}\) are
\[\begin{array}{ll}[X_{1},X_{2}]_{\mathfrak{m}^{\prime}}=-2\bar{Z}_{1},\\ [X_{1},X_{3}]_{\mathfrak{m}^{\prime}}=-2Z_{2},\quad[X_{2},X_{3}]_{\mathfrak{ m}^{\prime}}=2Z_{3},\\ [X_{1},X_{4}]_{\mathfrak{m}^{\prime}}=-2Z_{3},\quad[X_{2},X_{4}]_{\mathfrak{ m}^{\prime}}=-2Z_{2},\quad[X_{3},X_{4}]_{\mathfrak{m}^{\prime}}=\ 2\bar{Z}_{1},\end{array}\]
\[\begin{array}{ll}[X_{1},\bar{Z}_{1}]_{\mathfrak{m}^{\prime}}=0,&[X_{2}, \bar{Z}_{1}]_{\mathfrak{m}^{\prime}}=0,&[X_{3},\bar{Z}_{1}]_{\mathfrak{m}^{ \prime}}=0,&[X_{4},\bar{Z}_{1}]_{\mathfrak{m}^{\prime}}=0,\\ [X_{1},Z_{2}]_{\mathfrak{m}^{\prime}}=X_{3},&[X_{2},Z_{2}]_{\mathfrak{m}^{ \prime}}=X_{4},&[X_{3},Z_{2}]_{\mathfrak{m}^{\prime}}=-X_{1},&[X_{4},Z_{2}]_{ \mathfrak{m}^{\prime}}=-X_{2},\\ [X_{1},Z_{3}]_{\mathfrak{m}^{\prime}}=X_{4},&[X_{2},Z_{3}]_{\mathfrak{m}^{ \prime}}=-X_{3},&[X_{3},Z_{3}]_{\mathfrak{m}^{\prime}}=X_{2},&[X_{4},Z_{3}]_{ \mathfrak{m}^{\prime}}=-X_{1},\end{array}\]
\[[\bar{Z}_{1},Z_{2}]_{\mathfrak{m}^{\prime}}=0,\quad[\bar{Z}_{1},Z_{3}]_{ \mathfrak{m}^{\prime}}=0,\quad[Z_{2},Z_{3}]_{\mathfrak{m}^{\prime}}=2\bar{Z}_ {1}.\]
The adjoint action of \(\widetilde{\mathfrak{h}}\) on \(\mathfrak{m}^{\prime}\) is given by the operators
\[\begin{array}{ll}\text{ad}(H_{1})|_{\mathfrak{m}}=A_{12}+A_{34},&\text{ad}(H _{2})|_{\mathfrak{m}}=A_{13}-A_{24},\\ \text{ad}(H_{3})|_{\mathfrak{m}}=A_{14}+A_{23},&\text{ad}(W_{1})|_{\mathfrak{m}} =2B_{23}-A_{12}+A_{34}.\end{array}\]
The geodesic lemma and formula (14) give us the system of equations whose extended matrix \((\mathbb{A}|\mathbb{B}|\mathbb{C})\) is equivalent to
\[\left(\begin{array}{cccc|c|c}x_{2}&x_{3}&x_{4}&-x_{2}&-2c_{1}z_{1}x_{2}+(1-2c_{2})(z_{2}x_{3}+z_{3}x_{4})&-2\zeta vc_{1}x_{2}\\ -x_{1}&-x_{4}&x_{3}&x_{1}&2c_{1}z_{1}x_{1}+(1-2c_{2})(z_{2}x_{4}-z_{3}x_{3})&2\zeta vc_{1}x_{1}\\ x_{4}&-x_{1}&-x_{2}&x_{4}&2c_{1}z_{1}x_{4}+(1-2c_{2})(-z_{2}x_{1}+z_{3}x_{2})&2\zeta vc_{1}x_{4}\\ 0&0&0&c_{2}&z_{1}c_{1}&\zeta vc_{1}\end{array}\right).\]
Using Cramer's rule, we obtain the components of the geodesic graph
\[\xi_{1} = \Big{[}(\frac{c_{1}}{c_{2}}-2c_{1})(x_{1}^{2}+x_{2}^{2}-x_{3}^{2}-x_ {4}^{2})(z_{1}+\zeta v)+\] \[2(1-2c_{2})(x_{2}x_{3}-x_{1}x_{4})z_{2}+2(1-2c_{2})(x_{1}x_{3}+x_ {2}x_{4})z_{3}\Big{]}/\|x\|^{2},\] \[\xi_{2} = \Big{[}2(\frac{c_{1}}{c_{2}}-2c_{1})(x_{2}x_{3}+x_{1}x_{4})(z_{1} +\zeta v)+\] \[(1-2c_{2})(x_{1}^{2}-x_{2}^{2}+x_{3}^{2}-x_{4}^{2})z_{2}+2(1-2c_{2 })(x_{3}x_{4}-x_{1}x_{2})z_{3}\Big{]}/\|x\|^{2},\] \[\xi_{3} = \Big{[}2(\frac{c_{1}}{c_{2}}-2c_{1})(x_{2}x_{4}-x_{1}x_{3})(z_{1} +\zeta v)+\] \[2(1-2c_{2})(x_{1}x_{2}+x_{3}x_{4})z_{2}+(1-2c_{2})(x_{1}^{2}-x_{2 }^{2}-x_{3}^{2}+x_{4}^{2})z_{3}\Big{]}/\|x\|^{2},\] \[\xi_{4} = \frac{c_{1}}{c_{2}}(z_{1}+\zeta v),\]
where we put
\[\|x\|^{2}=x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+x_{4}^{2}.\]
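For instance, the last row of the system above reads \(c_{2}\,\xi_{4}=c_{1}z_{1}+c_{1}\zeta v\), which yields the component \(\xi_{4}=\frac{c_{1}}{c_{2}}(z_{1}+\zeta v)\) directly.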
This geodesic graph satisfies the statement of Theorem 6.
**Proposition 11**: _Each of the \(3\)-parameter family of geodesic orbit metrics on the sphere \(S^{4n+3}=\mathrm{Sp}(n+1)\cdot\mathrm{U}(1)/\mathrm{Sp}(n)\cdot\mathrm{diag}(\mathrm{U}(1))\) can be modified in one direction to obtain invariant Finsler \((\alpha,\beta)\) metrics. Each of these Finsler metrics is a geodesic orbit metric with respect to the group \(\mathrm{Sp}(n+1)\cdot\mathrm{U}(1)\)._
_Proof._ The existence of the \(3\)-parameter family of geodesic orbit Riemannian metrics on these spheres is well known, see for example [4]. The detailed construction of these metrics, and also the statement of the proposition in the particular case \(n=1\), is clear from the example above. In any reductive decomposition \(\mathfrak{g}=\mathfrak{h}+\mathfrak{m}\), the isotropy representation admits just one one-dimensional invariant subspace, which is the projection of the center \(\mathfrak{c}(\mathfrak{g})\) into \(\mathfrak{m}\). The reductive decomposition can be changed into a decomposition satisfying Proposition 4, as the example above also illustrates. The geodesic graph for any invariant Finsler \((\alpha,\beta)\) metric \(F\) determined by the Riemannian metric \(\alpha\), the one-form \(\beta\) which is \(\alpha\)-equivalent with \(V\) and a function \(\phi(s)\), according to Lemma 1, can be constructed using Theorem 6. This proves the geodesic orbit property of \((G/H,F)\). \(\square\)
### \(S^{4n+3}=\mathrm{Sp}(n+1)\cdot\mathrm{Sp}(1)/\mathrm{Sp}(n)\cdot\mathrm{diag}( \mathrm{Sp}(1))\)
We conclude by considering the \(2\)-parameter family of metrics determined by the condition \(c_{1}=c_{2}=c_{3}\) from Section 3.3. In such a case, the isotropy algebra can be extended by the operators
\[W_{1}=\mathrm{ad}(Z_{1})|_{\mathfrak{m}}=2B_{23}-A_{12}+A_{34},\] \[W_{2}=\mathrm{ad}(Z_{2})|_{\mathfrak{m}}=-2B_{13}-A_{13}-A_{24},\] \[W_{3}=\mathrm{ad}(Z_{3})|_{\mathfrak{m}}=2B_{12}-A_{14}+A_{23} \tag{16}\]
and we obtain \(\mathfrak{h}^{\prime}=\mathrm{span}(\mathfrak{h},W_{1},\ldots,W_{3})\simeq \mathrm{sp}(1)\oplus\mathrm{sp}(1)\) and \(\mathfrak{g}^{\prime}=\mathfrak{h}^{\prime}+\mathfrak{m}\). We have the new expression of our manifold \(M\) in the form \(M=G^{\prime}/H^{\prime}=\mathrm{Sp}(2)\cdot\mathrm{Sp}(1)/\mathrm{Sp}(1)\cdot \mathrm{diag}(\mathrm{Sp}(1))\). With respect to the bigger group \(G^{\prime}\), for the vector \(F\in\mathfrak{h}^{\prime}\), we write
\[F = \xi_{1}H_{1}+\ldots+\xi_{3}H_{3}+\xi_{4}W_{1}+\xi_{5}W_{2}+\xi_{6}W_{3}\]
and the Riemannian version of formula (14), without the last Finslerian term, gives us the system of equations whose extended matrix \((\mathbb{A}|\mathbb{B})\), without the Finslerian part \((\mathbb{C})\), is
\[\left(\begin{array}{cccccc|c}x_{2}&x_{3}&x_{4}&-x_{2}&-x_{3}&-x_{4}&(1-2c)(z_{1}x_{2}+z_{2}x_{3}+z_{3}x_{4})\\ -x_{1}&-x_{4}&x_{3}&x_{1}&-x_{4}&x_{3}&(1-2c)(-z_{1}x_{1}+z_{2}x_{4}-z_{3}x_{3})\\ x_{4}&-x_{1}&-x_{2}&x_{4}&x_{1}&-x_{2}&(1-2c)(-z_{1}x_{4}-z_{2}x_{1}+z_{3}x_{2})\\ -x_{3}&x_{2}&-x_{1}&-x_{3}&x_{2}&x_{1}&(1-2c)(z_{1}x_{3}-z_{2}x_{2}-z_{3}x_{1})\\ 0&0&0&0&-2z_{3}c&2z_{2}c&0\\ 0&0&0&2z_{3}c&0&-2z_{1}c&0\\ 0&0&0&-2z_{2}c&2z_{1}c&0&0\end{array}\right).\]
We easily read off the linear geodesic graph
\[\xi(X) = (1-2c)\bigl{(}-z_{1}W_{1}-z_{2}W_{2}-z_{3}W_{3}\bigr{)}\]
and hence \(M=G/H\) is naturally reductive, which is well known, see for example [23], [24].
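One can check this solution directly: substituting \(\xi_{4}=-(1-2c)z_{1}\), \(\xi_{5}=-(1-2c)z_{2}\) and \(\xi_{6}=-(1-2c)z_{3}\), the last three rows of the system vanish identically (for example, the fifth row gives \(-2z_{3}c\,\xi_{5}+2z_{2}c\,\xi_{6}=2c(1-2c)(z_{2}z_{3}-z_{2}z_{3})=0\)), while each of the first four rows reproduces the corresponding right-hand side \((1-2c)(z_{1}x_{2}+z_{2}x_{3}+z_{3}x_{4})\), etc.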
**Proposition 12**: _None of the \(2\)-parameter family of naturally reductive metrics on the sphere \(S^{4n+3}=\mathrm{Sp}(n+1)\cdot\mathrm{Sp}(1)/\mathrm{Sp}(n)\cdot\mathrm{diag}(\mathrm{Sp}(1))\) can be modified into an invariant purely Finsler \((\alpha,\beta)\) metric._
_Proof._ In the special case \(n=1\), we see from the adjoint representation given by the operators (13) and (16) that there are no \(\mathrm{Ad}(H^{\prime})\)-invariant vectors in \(\mathfrak{m}\) and hence no \(\mathrm{Sp}(n+1)\cdot\mathrm{Sp}(1)\)-invariant purely Finsler \((\alpha,\beta)\) metrics. Concerning the general case, the Lie algebra \(\mathfrak{g}^{\prime}=\mathfrak{sp}(n+1)\oplus\mathfrak{sp}(1)\) is centerless and the manifold \(M=(G^{\prime}/H^{\prime},\alpha)\) does not admit an expression in an extended form \(M=\widetilde{G}/\widetilde{H}\) for \(\widetilde{G}\supset G^{\prime}\). According to Corollary 5, there are no \(\mathrm{Ad}(H^{\prime})\)-invariant vectors in \(\mathfrak{m}\) and hence no \(\mathrm{Sp}(n+1)\cdot\mathrm{Sp}(1)\)-invariant purely Finsler \((\alpha,\beta)\) metrics. \(\square\)
### Geodesic orbit \((\alpha,\beta)\) metrics on spheres
We conclude our observations with the classification theorem.
**Theorem 13**: _The only Riemannian geodesic orbit metrics which can be modified into geodesic orbit purely Finsler \((\alpha,\beta)\) metrics on spheres are:_
_- the two-parameter family of metrics on \(S^{2n+1}=\mathrm{U}(n+1)/\mathrm{U}(n)\);_
_- the three-parameter family of metrics on \(S^{4n+3}=\mathrm{Sp}(n+1)\cdot\mathrm{U}(1)/\mathrm{Sp}(n)\cdot\mathrm{diag}(\mathrm{U}(1))\)._
_In both cases, these modifications can be done in one direction. In a fixed reductive decomposition, this direction is determined by the projection of the center \(\mathfrak{c}(\mathfrak{g})\) into \(\mathfrak{m}\)._
Proof.: The two-parameter family of Riemannian geodesic orbit metrics on \(S^{2n+1}=\mathrm{SU}(n+1)/\mathrm{SU}(n)\) coincides with the two-parameter family of Riemannian geodesic orbit metrics on \(S^{2n+1}=\mathrm{U}(n+1)/\mathrm{U}(n)\), and geodesic orbit Finsler \((\alpha,\beta)\) metrics on the latter space were described in Proposition 9. Geodesic orbit Finsler \((\alpha,\beta)\) metrics on the space \(S^{4n+3}=\mathrm{Sp}(n+1)\cdot\mathrm{U}(1)/\mathrm{Sp}(n)\cdot\mathrm{diag}(\mathrm{U}(1))\) were described in Proposition 11. Riemannian geodesic orbit metrics on the spheres \(S^{4n+3}=\mathrm{Sp}(n+1)\cdot\mathrm{Sp}(1)/\mathrm{Sp}(n)\cdot\mathrm{diag}(\mathrm{Sp}(1))\) do not admit a modification into invariant purely Finsler \((\alpha,\beta)\) metrics, as we have shown in Proposition 12. Concerning the exceptional spheres \(S^{6}=\mathrm{G}_{2}/\mathrm{SU}(3)\), \(S^{7}=\mathrm{Spin}(7)/\mathrm{G}_{2}\) and \(S^{15}=\mathrm{Spin}(9)/\mathrm{Spin}(7)\), the Lie algebras \(\mathfrak{g}_{2}\), \(\mathfrak{spin}(7)\) and \(\mathfrak{spin}(9)\) are centerless. These spheres with Riemannian geodesic orbit metrics also do not admit presentations in an extended form \(\widetilde{G}/\widetilde{H}\) for \(\widetilde{G}\supset G\). Hence, according to Corollary 5, they do not admit \(\mathrm{Ad}(H)\)-invariant vectors in \(\mathfrak{m}\), and consequently no modification into invariant purely Finsler \((\alpha,\beta)\) metrics.
## 4 Projective spaces
Having done explicit calculations for the spheres in the previous section, we can now easily illustrate similar behaviour for some projective spaces. For Riemannian metrics, the geodesic orbit property of these projective spaces was studied in [22], [23], [24], see also [4] for the overview. Again, we shall study the low-dimensional examples explicitly.
### \(\mathbb{C}P^{n}=\mathrm{SU}(n+1)/\mathrm{S}(\mathrm{U}(n)\cdot\mathrm{U}(1))\)
For \(n=2\), we have \(\mathbb{C}P^{2}=\mathrm{SU}(3)/\mathrm{S}(\mathrm{U}(2)\cdot\mathrm{U}(1))\). We choose the basis \(\{H_{1},H_{2},H_{3},Z\}\) of the Lie algebra \(\mathfrak{h}=\mathfrak{s}(\mathfrak{u}(2)\oplus\mathfrak{u}(1))\), given by the matrices
\[H_{1}=\left(\begin{array}{ccc}i&0&0\\ 0&-i&0\\ 0&0&0\end{array}\right),H_{2}=\left(\begin{array}{ccc}0&i&0\\ i&0&0\\ 0&0&0\end{array}\right),H_{3}=\left(\begin{array}{ccc}0&1&0\\ -1&0&0\\ 0&0&0\end{array}\right),Z=\left(\begin{array}{ccc}-\frac{i}{2}&0&0\\ 0&-\frac{i}{2}&0\\ 0&0&i\end{array}\right).\]
For \(\mathfrak{m}\), we choose the basis \(B=\{X_{1},X_{2},Y_{1},Y_{2}\}\), given by the matrices in Section 3.1. We use formulas (9) and (10) and we obtain the adjoint action of \(\mathfrak{h}\) on \(\mathfrak{m}\), which is given by the operators
\[\begin{array}{ccc}\mathrm{ad}(H_{1})|_{\mathfrak{m}}=A_{12}-A_{34},&\quad \mathrm{ad}(H_{2})|_{\mathfrak{m}}=A_{14}-A_{23},\\ \mathrm{ad}(H_{3})|_{\mathfrak{m}}=-A_{13}-A_{24},&\quad\mathrm{ad}(Z)|_{ \mathfrak{m}}=-\frac{3}{2}A_{12}-\frac{3}{2}A_{34}.\end{array}\]
We see that there is a \(1\)-parameter family of invariant Riemannian metrics on \(M=G/H\). These are known to be normal homogeneous, hence naturally reductive and also geodesic orbit. We also see that there is no \(\mathrm{Ad}(H)\)-invariant vector \(V\in\mathfrak{m}\) and hence no invariant purely Finsler \((\alpha,\beta)\) metric on \(M=G/H\).
### \(\mathbb{C}P^{2n+1}=\mathrm{Sp}(n+1)/\mathrm{Sp}(n)\cdot\mathrm{U}(1)\)
For \(n=1\), we have \(\mathbb{C}P^{3}=\mathrm{Sp}(2)/\mathrm{Sp}(1)\cdot\mathrm{U}(1)\). We use the matrices from Section 3.3. Now, we consider the basis \(\{H_{1},H_{2},H_{3},Z_{1}\}\) of \(\mathfrak{h}\) and the basis \(B=\{X_{1},\ldots,X_{4},Z_{2},Z_{3}\}\) of \(\mathfrak{m}\). Using formulas (13) and (16), we see that the adjoint action of \(\mathfrak{h}\) on \(\mathfrak{m}\) is given by the operators
\[\begin{array}{ll}\mathrm{ad}(H_{1})|_{\mathfrak{m}}=A_{12}+A_{34},&\quad \mathrm{ad}(H_{2})|_{\mathfrak{m}}=A_{13}-A_{24},\\ \mathrm{ad}(H_{3})|_{\mathfrak{m}}=A_{14}+A_{23},&\quad\mathrm{ad}(Z_{1})|_{ \mathfrak{m}}=2B_{23}-A_{12}+A_{34}.\end{array}\]
We see that there is a \(2\)-parameter family of invariant Riemannian metrics on \(M=G/H\). These metrics are known to be weakly symmetric, and some of them are naturally reductive, see [23], [24] for details. A detailed analysis of these Riemannian metrics and of geodesic vectors can be found in [5]. We also see that there is no \(\mathrm{Ad}(H)\)-invariant vector \(V\in\mathfrak{m}\) and hence no invariant purely Finsler \((\alpha,\beta)\) metric on \(M=G/H\).
### \(\mathbb{H}P^{n}=\mathrm{Sp}(n+1)/\mathrm{Sp}(n)\cdot\mathrm{Sp}(1)\)
For \(n=1\), we have \(\mathbb{H}P^{1}=\mathrm{Sp}(2)/\mathrm{Sp}(1)\cdot\mathrm{Sp}(1)\). We use again the matrices from Section 3.3. Now, we consider the basis \(\{H_{1},H_{2},H_{3},Z_{1},\ldots,Z_{3}\}\) of \(\mathfrak{h}\) and the basis \(B=\{X_{1},\ldots,X_{4}\}\) of \(\mathfrak{m}\). Using formulas (13) and (16), we see that the adjoint action of \(\mathfrak{h}\) on \(\mathfrak{m}\) is given by the operators
\[\begin{array}{ll}\mathrm{ad}(H_{1})|_{\mathfrak{m}}=A_{12}+A_{34},&\quad \mathrm{ad}(H_{2})|_{\mathfrak{m}}=A_{13}-A_{24},&\quad\mathrm{ad}(H_{3})|_{ \mathfrak{m}}=A_{14}+A_{23},\\ \mathrm{ad}(Z_{1})|_{\mathfrak{m}}=-A_{12}+A_{34},&\quad\mathrm{ad}(Z_{2})|_{ \mathfrak{m}}=-A_{13}-A_{24},&\quad\mathrm{ad}(Z_{3})|_{\mathfrak{m}}=-A_{14 }+A_{23}.\end{array}\]
We see again that there is a \(1\)-parameter family of invariant Riemannian metrics on \(M=G/H\). These are known to be normal homogeneous, hence naturally reductive and geodesic orbit. We also see that there is no \(\mathrm{Ad}(H)\)-invariant vector \(V\in\mathfrak{m}\) and hence no invariant purely Finsler \((\alpha,\beta)\) metric on \(M=G/H\).
### Geodesic orbit \((\alpha,\beta)\) metrics on projective spaces
All geodesic orbit metrics on spheres induce geodesic orbit metrics on real projective spaces \(\mathbb{R}P^{n}\) in the natural way. According to [4], Riemannian metrics obtained this way and certain Riemannian metrics on \(\mathbb{C}P^{n}=\mathrm{SU}(n+1)/\mathrm{S}(\mathrm{U}(n)\cdot\mathrm{U}(1))\), \(\mathbb{C}P^{2n+1}=\mathrm{Sp}(n+1)/\mathrm{Sp}(n)\cdot\mathrm{U}(1)\), \(\mathbb{H}P^{n}=\mathrm{Sp}(n+1)/\mathrm{Sp}(n)\cdot\mathrm{Sp}(1)\) and \(\mathbb{C}aP^{2}=\mathrm{F}_{4}/\mathrm{Spin}(9)\) exhaust all Riemannian geodesic orbit metrics on projective spaces.
**Proposition 14**: _The projective spaces \(\mathbb{C}P^{n}\), \(\mathbb{C}P^{2n+1}\), \(\mathbb{H}P^{n}\) and \(\mathbb{C}aP^{2}\) mentioned above do not admit invariant purely Finsler \((\alpha,\beta)\) metrics._
_Proof_. For the low-dimensional cases treated above, the statement for the first three types of spaces follows from the previous examples. Concerning the general case, the Lie algebras \(\mathfrak{g}\) of the isometry groups \(G\) of the mentioned spaces are centerless. These projective spaces with Riemannian geodesic orbit metrics also do not admit presentations in an extended form \(\widetilde{G}/\widetilde{H}\) for \(\widetilde{G}\supset G\). Hence, again according to Corollary 5, they do not admit \(\operatorname{Ad}(H)\)-invariant vectors in \(\mathfrak{m}\), and consequently no modification into invariant purely Finsler \((\alpha,\beta)\) metrics. \(\square\)
## Acknowledgements
The research is supported by grant PID2019-10519GA-C22 funded by AEI/10.13039/501100011033. The first author is also partially supported by grant GR21055 funded by Junta de Extremadura and Fondo Europeo de Desarrollo Regional.
|
2301.10040 | Interrogating Quantum Nonlocal Effects in Nanoplasmonics through
Electron-Beam Spectroscopy | A rigorous account of quantum nonlocal effects is paramount for understanding
the optical response of metal nanostructures and for designing plasmonic
devices at the nanoscale. Here, we present a scheme for retrieving the quantum
surface response of metals, encapsulated in the Feibelman $d$-parameters, from
electron energy-loss spectroscopy (EELS) and cathodoluminescence (CL)
measurements. We theoretically demonstrate that quantum nonlocal effects have a
dramatic impact on EELS and CL spectra, in the guise of spectral shifts and
nonlocal damping, when either the system size or the inverse wave vector in
extended structures approaches the nanometer scale. Our concept capitalizes on
the unparalleled ability of free electrons to supply deeply subwavelength
near-fields and, thus, probe the optical response of metals at length scales in
which quantum-mechanical effects are apparent. These results pave the way for a
widespread use of the $d$-parameter formalism, thereby facilitating a rigorous
yet practical inclusion of nonclassical effects in nanoplasmonics. | P. A. D. Gonçalves, F. Javier García de Abajo | 2023-01-24T14:31:18Z | http://arxiv.org/abs/2301.10040v2 | # Interrogating Quantum Nonlocal Effects in Nanoplasmonics through Electron-Beam Spectroscopies
###### Abstract
A rigorous account of quantum nonlocal effects is paramount for understanding the optical response of metal nanostructures and for designing plasmonic devices operating at the nanoscale. Here, we present a scheme for retrieving the quantum surface response of metals encapsulated in the Feibelman \(d\)-parameters from electron energy-loss spectroscopy (EELS) and cathodoluminescence (CL) measurements. We theoretically demonstrate that quantum nonlocal effects have a dramatic impact on EELS and CL spectra, in the guise of spectral shifts and nonlocal damping, when either the system size or the inverse wave vector in extended structures approaches the nanometer scale. Our concept capitalizes on the unparalleled ability of free electrons to supply tailored, deeply subwavelength near-fields and, thus, probe the optical response of metals at length scales in which quantum-mechanical effects are apparent. These results pave the way for a widespread use of the \(d\)-parameter formalism, thereby facilitating a rigorous yet practical inclusion of nonclassical effects in nanoplasmonics studies.
The optical response of few-nanometer-scale plasmonic structures, such as those engineered with state-of-the-art nanofabrication techniques, can exhibit substantial quantum nonlocal effects associated with the inherently quantum mechanical nature of the plasmon-supporting electron gas in the involved materials [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16]. Broadly speaking, the impact of nonclassical effects becomes non-negligible when either the characteristic size of the system falls below \(\sim 10-20\,\mathrm{nm}\) or the optical response is mediated by field components of large momenta, such as those produced by strong near-field confinement. Hence, a quantum nonlocal description of the underlying plasmon-mediated light-matter interaction is required in order to explain experimental data as well as to draw insight into the elementary processes governing that interaction in the few-nanometer regime.
Since an all-encompassing quantum-mechanical treatment of the many-electron system [e.g., using time-dependent density-functional theory [17] (TDDFT)] is severely constrained to few-atom clusters much smaller than the typical nanoplasmonic systems exploited in experiments, in practice it is necessary to resort to quantum-informed models that incorporate dominant quantum effects to leading order [1; 18; 19; 20]. Among these, the Feibelman \(d\)-parameter formalism [2] is particularly appealing because it simultaneously incorporates electron spill-out/spill-in, nonlocality (i.e., momentum-dependent response), and surface-enabled Landau damping through the introduction of two microscopic surface-response functions, \(d_{\perp}(\omega)=\int\mathrm{d}z\,z\,\rho_{\mathrm{ind}}(z,\omega)/\int\mathrm{d}z\,\rho_{\mathrm{ind}}(z,\omega)\) and \(d_{\parallel}(\omega)=\int\mathrm{d}z\,z\,\partial_{x}J_{\parallel,\mathrm{ind}}(z,\omega)/\int\mathrm{d}z\,\partial_{x}J_{\parallel,\mathrm{ind}}(z,\omega)\), corresponding to the centroids of the induced charge density along the surface normal \(\hat{\mathbf{z}}\) and of the normal derivative of the current parallel to the interface, respectively. Once they are known for the planar dielectric-metal interface(s) of interest, these parameters allow the incorporation of the above-mentioned nonclassical effects in the optical response of metallic nanostructures using standard electromagnetic solvers upon replacing the macroscopic boundary conditions [21] by their \(d\)-parameter-corrected counterparts [14; 22; 23; 24; 25; 26; 27]. Naturally, this procedure relies on our ability to compute the \(d\)-parameters in the first place using, for example, linear-response TDDFT. However, while simple metals (e.g., alkali metals or aluminum) can be well-described by jellium-level TDDFT, for which accurate \(d\)-parameter data exist [2; 28; 29; 30; 12], noble metals such as gold and silver require a more demanding atomistic treatment beyond the jellium approximation due to valence-electron screening from the lower-lying bands [30; 31]. As a result of this, and despite the relevance of noble metals in nanoplasmonics, quantitatively accurate \(d\)-parameter data remains elusive [32], thus limiting the widespread use of the \(d\)-parameter framework.
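As a side note on the centroid formulas above: for a given induced-density profile, \(d_{\perp}\) is a simple weighted average, which the following minimal Python sketch evaluates numerically. The Gaussian profile and the parameters `z0` and `sigma` are hypothetical stand-ins (with the frequency dependence suppressed), not TDDFT data for any real metal:

```python
import numpy as np

# Model induced charge density rho_ind(z) at a fixed frequency: a Gaussian
# centred slightly outside the surface at z = 0, mimicking electron spill-out.
z = np.linspace(-3.0, 3.0, 2001)   # coordinate along the surface normal (nm)
z0, sigma = 0.12, 0.35             # assumed centroid and width (nm)
rho_ind = np.exp(-(z - z0) ** 2 / (2 * sigma ** 2))

# Feibelman centroid d_perp = (int dz z rho_ind) / (int dz rho_ind); on a
# uniform grid the grid-spacing factors cancel, so plain sums suffice.
d_perp = np.sum(z * rho_ind) / np.sum(rho_ind)
print(f"d_perp ~ {d_perp:.3f} nm")  # ~0.12 nm for this model profile
```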
Here, we propose a scheme in which electron-beam (e-beam) spectroscopies [33; 34] are employed to determine the quantum surface response (i.e., the \(d\)-parameters) of metals directly from experimental spectra (Fig. 1). To that end, we present a quantum-corrected theory of electron energy-loss spectroscopy (EELS) [33; 34; 35] and cathodoluminescence (CL) [33; 34; 36] based on the aforementioned quantum surface-response formalism and use it to infer \(d_{\perp}\) and \(d_{\parallel}\) from the measured spectra by quantifying the size- or wave-vector-dependent spectral shifting and broadening due to quantum nonlocal effects. Crucial to this is the ability of e-beams to produce broadband and highly confined near-fields [33], which may be tailored by, for example, varying the electron kinetic energy or controlling the e-beam trajectory. Such fields contain evanescent components that allow free electrons to efficiently couple to strongly confined optical excitations in materials and retrieve sub-nanometer spatial information, thus rendering them first-class probes of nonclassical effects in nanoplasmonics [6; 7; 8; 9; 10; 11; 12; 13]. Our work opens a powerful route toward a better quantitative understanding of the nonclassical optical response of metallic nanostructures, which is instrumental from a fundamental viewpoint and constitutes a key ingredient in the design of nanophotonic devices operating at the few-nanometer scale.
We begin our analysis by considering the canonical scenario of a swift electron moving with constant velocity \(v\) along a straight-line trajectory \(\mathbf{r}_{e}(t)\) parallel to a metal surface placed at \(z=0\). Taking \(\mathbf{v}=v\,\hat{\mathbf{x}}\) and \(\mathbf{r}_{e}(t)=(v\,t,0,b)\), with \(b\) defining the electron-surface separation, and assuming that the medium adjacent to the metal is a lossless dielectric with relative permittivity \(\epsilon_{\mathrm{d}}\), the spectral EELS probability experienced by the |
2310.19043 | Differentially Private Permutation Tests: Applications to Kernel Methods | Recent years have witnessed growing concerns about the privacy of sensitive
data. In response to these concerns, differential privacy has emerged as a
rigorous framework for privacy protection, gaining widespread recognition in
both academic and industrial circles. While substantial progress has been made
in private data analysis, existing methods often suffer from impracticality or
a significant loss of statistical efficiency. This paper aims to alleviate
these concerns in the context of hypothesis testing by introducing
differentially private permutation tests. The proposed framework extends
classical non-private permutation tests to private settings, maintaining both
finite-sample validity and differential privacy in a rigorous manner. The power
of the proposed test depends on the choice of a test statistic, and we
establish general conditions for consistency and non-asymptotic uniform power.
To demonstrate the utility and practicality of our framework, we focus on
reproducing kernel-based test statistics and introduce differentially private
kernel tests for two-sample and independence testing: dpMMD and dpHSIC. The
proposed kernel tests are straightforward to implement, applicable to various
types of data, and attain minimax optimal power across different privacy
regimes. Our empirical evaluations further highlight their competitive power
under various synthetic and real-world scenarios, emphasizing their practical
value. The code is publicly available to facilitate the implementation of our
framework. | Ilmun Kim, Antonin Schrab | 2023-10-29T15:13:36Z | http://arxiv.org/abs/2310.19043v2 | # Differentially Private Permutation Tests: Applications to Kernel Methods
###### Abstract
Recent years have witnessed growing concerns about the privacy of sensitive data. In response to these concerns, differential privacy has emerged as a rigorous framework for privacy protection, gaining widespread recognition in both academic and industrial circles. While substantial progress has been made in private data analysis, existing methods often suffer from impracticality or a significant loss of statistical efficiency. This paper aims to alleviate these concerns in the context of hypothesis testing by introducing differentially private permutation tests. The proposed framework extends classical non-private permutation tests to private settings, maintaining both finite-sample validity and differential privacy in a rigorous manner. The power of the proposed test depends on the choice of a test statistic, and we establish general conditions for consistency and non-asymptotic uniform power. To demonstrate the utility and practicality of our framework, we focus on reproducing kernel-based test statistics and introduce differentially private kernel tests for two-sample and independence testing: dpMMD and dpHSIC. The proposed kernel tests are straightforward to implement, applicable to various types of data, and attain minimax optimal power across different privacy regimes. Our empirical evaluations further highlight their competitive power under various synthetic and real-world scenarios, emphasizing their practical value. The code is publicly available to facilitate the implementation of our framework.
## 1 Introduction
Ensuring the privacy of sensitive data has become a critical concern in modern data analysis. As organizations collect and analyze vast amounts of personal information, safeguarding individual privacy has emerged as a crucial ethical and legal imperative. In response to these challenges, differential privacy (DP), introduced by Dwork et al. (2006), has emerged as a rigorous framework for addressing privacy concerns, and has gained widespread recognition not only in academia but also in industry. For instance, major industry players such as Apple Apple (2017), Google Erlingsson et al. (2014) and Microsoft Ding et al. (2017) have embraced differential privacy as a robust definition of privacy. This growing trend has sparked a recent surge of research in statistics and related fields, aiming at integrating differential privacy and its variants Dwork et al. (2006); Bun and Steinke (2016); Mironov (2017); Dong et al. (2022) into data analysis and developing privacy-preserving methodologies. In this line of work, a major challenge is to strike
a balance between privacy guarantees and statistical efficiency. Notably, a high privacy guarantee requires substantial data perturbation, which in turn degrades statistical performance. Conversely, releasing less perturbed data can improve statistical efficiency but at the expense of reduced privacy guarantees. Therefore, balancing this trade-off between privacy and efficiency has been a central topic in the existing literature (_e.g._, Duchi et al., 2018; Cai et al., 2021; Kamath and Ullman, 2020).
Broadly, there are two major statistical problems tackled under privacy constraints: estimation and hypothesis testing (see Kamath and Ullman, 2020, for a recent review). This paper focuses on the latter problem, which requires access to the null distribution of a test statistic in order to calibrate the test effectively. Analyzing the distribution of a test statistic becomes particularly challenging in private settings due to the additional sources of randomness arising from privacy mechanisms. As we review in Section 1.1, substantial efforts have been made to address private testing problems. These efforts involve adapting classical hypothesis tests to private settings or developing new testing procedures that achieve an optimal balance between privacy and statistical power.
Despite the significant progress made over the last decade, there are still several areas where further improvements can be made. One such area is the reliance on asymptotic methods for determining the critical value of a test statistic. The practical quality of this asymptotic approach depends on the convergence rate of a privatized statistic to the limiting distribution. This convergence rate is often slow in private settings, and more importantly, the limiting distribution may vary depending on the delicate interplay between privacy and other parameters. This issue puts practitioners in a bind as it is unclear which limiting distribution should be considered a priori. All these concerns lead to the unreliability of asymptotic private tests in real-world applications.
Another area of concern is the limited practicality of existing methods. Many private statistical tests are designed specifically for discrete data and are not directly applicable to continuous or mixed-type data. Moreover, existing methods often rely on unspecified constants and heuristics, making them less user-friendly and potentially undermining their reliability. We also point out that the majority of research has concentrated on theoretical aspects of private testing, and only a handful of papers are equipped with thorough empirical evaluations and open-source code.
In this work, we aim to tackle the aforementioned concerns by introducing differentially private permutation tests. The primary goal is to extend classical non-private permutation tests to differentially private settings, applicable to any test statistic with finite global sensitivity. The proposed private permutation test inherits the finite-sample validity of the classical permutation test under the exchangeability condition, while ensuring differential privacy. The power of the proposed test depends on the choice of a test statistic, and we establish sufficient conditions for consistency and non-asymptotic uniform power. To demonstrate the effectiveness of our framework, we focus on the two-sample and independence testing problems and propose differentially private versions of the maximum mean discrepancy (MMD) and Hilbert-Schmidt independence criterion (HSIC), which we coin "dpMMD" and "dpHSIC", respectively. On the theoretical side, we prove minimax optimality of the dpMMD and dpHSIC tests across all privacy regimes in terms of kernel metrics. On the empirical side, we showcase the competitive power performance of the proposed tests across various practical scenarios. The code that implements our methods is publicly available at [https://github.com/antoninschrab/dpkernel](https://github.com/antoninschrab/dpkernel) to allow practitioners to build on our findings.
### Related Work
In recent years, there has been a growing body of research on hypothesis testing problems under privacy constraints. Since the early work by Vu and Slavkovic (2009) and Fienberg et al. (2011), numerous attempts have been made to extend classical non-private tests to their private counterparts. Examples include ANOVA (Campbell et al., 2018; Swanberg et al., 2019), likelihood ratio tests (Canonne et al., 2019), tests for regression coefficients (Sheffet, 2017; Alabi and Vadhan, 2022), rank or sign-based nonparametric tests (Task and Clifton, 2016; Couch et al., 2019), conditional independence tests (Kalemaj et al., 2023) and \(\chi^{2}\)-tests (Fienberg et al., 2011; Wang et al., 2015; Gaboardi et al., 2016; Rogers and Kifer, 2017; Kakizaki et al., 2017; Friedberg and Rogers, 2023). While most of the aforementioned work focuses on asymptotic settings where the sample size goes to infinity, a recent line of work within computer science has placed greater emphasis on finite-sample analysis. In particular, Cai et al. (2017) propose a two-step algorithm for identity testing for discrete distributions, and studies the sample complexity under DP. The work of Acharya et al. (2018) explores both identity (goodness-of-fit) testing and closeness (two-sample) testing in finite-sample settings, and improves the upper bound result by Cai et al. (2017) and Aliakbarpour et al. (2018) for identity testing. Aliakbarpour et al. (2019) privatize the non-private test proposed by Diakonikolas and Kane (2016), and investigate the sample complexity for closeness testing and independence testing. In line with these advancements, our work develops private permutation tests, and studies their non-asymptotic performance under DP settings.
Despite the extensive body of literature, the majority of research has focused on private tests designed for discrete or bounded data. There are a few notable exceptions that have explored other data types. For example, Canonne et al. (2020) and Narayanan (2022) have investigated goodness-of-fit testing for high-dimensional Gaussian distributions. In addition, Raj et al. (2020) have proposed private two-sample tests based on finite dimensional approximations of kernel mean embeddings. The flexibility offered by kernel methods enables these tests to handle a wide variety of data types. However, their tests are asymptotic in nature, which may introduce reliability concerns when working with small sample sizes. Moreover, their analysis requires the number of features to be fixed. Such a requirement potentially limits the power of the test when dealing with alternatives which are not well-represented by this fixed number of features.
Another line of work aims to develop a generic way to create private tests from non-private ones. The subsample-and-aggregate idea (Nissim et al., 2007) has emerged as a useful tool for this purpose. In particular, it offers a strategy to convert non-private sample complexity results into private ones in a black-box manner as pointed out by Cai et al. (2017); Canonne et al. (2019, 2020). Recent studies by Pena and Barrientos (2022) and Kazan et al. (2023) have focused specifically on the practical implementation of the subsample-and-aggregate approach. However, it is worth mentioning that this generality typically comes at the cost of suboptimal power, and often fails to recover the optimal sample complexity (Canonne et al., 2019, 2020). Moreover, the performance of this subsample-and-aggregate approach is sensitive to the number of subsamples, and determining the optimal value of this parameter remains an open problem.
Beyond global differential privacy, there has been a substantial amount of work on hypothesis testing under local differential privacy. Some of the notable works include Liao et al. (2017);
Gaboardi and Rogers (2018); Sheffet (2018); Acharya et al. (2019); Berrett and Butucea (2020); Dubois et al. (2023) and Lam-Weil et al. (2022); see also the references therein. Local differential privacy requires data perturbation at the individual level, proving particularly useful in settings where data providers lack trust in data analysts. This individual-level approach demands different analyses from those under global differential privacy, and the results under global and local differential privacy are not directly comparable.
Our work is also related to recent advances in kernel-based minimax testing (Li and Yuan, 2019; Albert et al., 2022; Schrab et al., 2023; Kim et al., 2022a). Specifically, we extend the non-private minimax testing rates established in this line of work to private counterparts. To achieve this, we leverage the techniques therein, such as the two moments method and exponential inequalities for permuted statistics in Kim et al. (2022a), and adapt them to private settings.
### An Overview of Our Results
The main contributions of this work are summarized below.
* **DP Permutation Tests (Section 3).** We introduce differentially private permutation tests in Algorithm 1, and establish their theoretical properties. A naive way of extending the classical permutation tests to private settings is to first make the original test statistic and its permuted counterparts differentially private, and then carry out the permutation test based on these individually privatized statistics. However, this naive approach results in an unnecessary power loss by adding more noise as the number of permutations increases. The proposed framework addresses this issue by utilizing the quantile representation of a permutation test. This strategy leads to a substantial power gain over the naive approach, while being finite-sample valid. We present sufficient conditions for pointwise consistency (Theorem 3) and non-asymptotic uniform power (Theorem 4) of the proposed tests. The latter uniform power condition can be regarded as an extension of the two moments method (Kim et al., 2022a) to private settings.
* **DP Kernel Tests (Section 4).** We showcase the versatility of our framework by applying it to two specific tasks: differentially private two-sample and independence testing based on reproducing kernel-based test statistics. We consider the plug-in estimators of the MMD and HSIC, and privatize them through the proposed method employing the standard Laplace mechanism. The practical performance of the resulting differentially private kernel tests heavily depends on the global sensitivity used in the Laplace mechanism. To boost the empirical performance, we put significant effort into establishing sharp upper bounds for the global sensitivity of the plug-in estimators of MMD and HSIC, as well as matching lower bounds for popular kernels (Lemma 5 and Lemma 6). We then establish key properties of the proposed kernel tests, including non-asymptotic validity and consistency against any fixed alternatives in Theorem 5 and Theorem 6.
* **Uniform Power and Optimality (Section 5).** We characterize the trade-off between differential privacy and statistical power through the lens of minimax analysis. To this end,
we analyze the minimum separation required for the differentially private MMD test to achieve significant power in terms of the MMD metric. We derive an upper bound on this minimum separation in Theorem 7, and a lower bound in Theorem 8, which matches in all relevant parameters including testing error rates and privacy levels. Our minimax results suggest that there is an unavoidable loss of power when the privacy parameter is smaller than a certain threshold (_i.e._, high privacy). On the other hand, the privacy guarantee comes for free in terms of separation rate when the privacy parameter exceeds this threshold (_i.e._, low privacy). We also derive the minimum separation in terms of the \(L_{2}\) metric in Theorem 9 that extends the prior work (Li and Yuan, 2019; Schrab et al., 2023) on non-private minimax testing to private settings. In Appendix B.5 and Appendix B.6, we present analogous findings for the differentially private HSIC test, which closely resemble the results obtained for the MMD test.
* **Negative Results of U-statistics (Section 5.3).** We also derive perhaps unexpected negative results of U-statistics in private settings. U-statistics, arguably more popular than V-statistics, have played an important role in deriving non-private minimax rates for various testing problems (Li and Yuan, 2019; Albert et al., 2022; Kim et al., 2022a; Berrett et al., 2021; Schrab et al., 2023). Given this trend, it is natural to consider U-statistics as an initial building block for obtaining private minimax rates. However, it turns out that U-statistics suffer from higher global sensitivity than the corresponding V-statistics, requiring a higher level of noise to effectively privatize the resulting procedure. We formalize this observation in the context of kernel testing and show that private tests based on U-statistics have sub-optimal power in high privacy regimes. This negative result naturally justifies our approach based on the V-statistics (equivalently, plug-in estimators) of the MMD and HSIC.
* **Empirical Validation (Section 6).** A significant portion of the prior work on differentially private testing has focused on theoretical aspects, often lacking practical values. On the other hand, practical approaches to differentially private testing frequently rely on heuristics without proper theoretical validation. Our work serves a role in bridging the gap between theory and practice by balancing both theoretical and practical aspects. In particular, we highlight that our method is simple to use and comes with strong theoretical guarantees as we demonstrate throughout the paper. The empirical results in Section 6 and Appendix C also illustrate the competitive performance of the proposed method across diverse scenarios, highlighting its practical value.
We further make contributions by presenting asymptotic distributions of privatized kernel statistics (Appendix B.1), general consistency results for resampling-based tests (Appendix B.2) as well as other technical innovations (Appendix G). We also introduce private kernel tests obtained through the subsample-and-aggregate idea (Appendix D). Due to space constraints, we relegate these additional results to the appendix.
### Organization
The rest of this paper is organized as follows. We begin in Section 2 by providing a brief overview of the fundamental concepts of differential privacy. Section 3 presents our main proposal, namely the
differentially private permutation test, and investigates its finite-sample validity and consistency in power. Moving forward to Section 4, we apply our proposed permutation framework to specific scenarios, focusing on differentially private kernel testing. In particular, we explore the privatization of kernel MMD and HSIC tests, and delve into minimum separation rates of the resulting tests in Section 5. To validate our theoretical findings, Section 6 presents empirical evaluations of the proposed algorithms, comparing their performance with existing methods. Finally, we conclude in Section 7 with a discussion outlining potential directions for future work. All the proofs and additional results are deferred to the appendix.
### Notation
Given two datasets \(\mathcal{X}_{n}\coloneqq(X_{1},\ldots,X_{n})\) and \(\tilde{\mathcal{X}}_{n}\coloneqq(\tilde{X}_{1},\ldots,\tilde{X}_{n})\), we denote the Hamming distance between \(\mathcal{X}_{n}\) and \(\tilde{\mathcal{X}}_{n}\) by \(d_{\text{ham}}(\mathcal{X}_{n},\tilde{\mathcal{X}}_{n})\coloneqq\sum_{i=1}^{n}\mathds{1}(X_{i}\neq\tilde{X}_{i})\). For two sequences of real numbers \(a_{n},b_{n}\), we write \(a_{n}\lesssim b_{n}\) (and similarly \(a_{n}\gtrsim b_{n}\)) if there exists some positive constant \(C>0\) independent of \(n\) such that \(a_{n}\leq Cb_{n}\) for all \(n\geq 1\). We also write \(a_{n}\asymp b_{n}\) if \(a_{n}\lesssim b_{n}\) and \(b_{n}\lesssim a_{n}\). For \(x\in\mathbb{R}\), \(\lfloor x\rfloor\) denotes the largest integer smaller than or equal to \(x\). For a natural number \(k\in\mathbb{N}\), we use \([k]\) to denote the set \(\{1,\ldots,k\}\). We let \(\mathbf{\Pi}_{n}\) denote the set of all permutations of \([n]\). For a continuous function \(f:\mathbb{R}^{d}\mapsto\mathbb{R}\), the \(L_{2}\) and \(L_{\infty}\) norms of \(f\) are given as \(\|f\|_{L_{2}}=\{\int_{\mathbb{R}^{d}}f^{2}(\mathbf{x})\mathrm{d}\mathbf{x}\}^{1/2}\) and \(\|f\|_{L_{\infty}}=\sup_{\mathbf{x}\in\mathbb{R}^{d}}|f(\mathbf{x})|\), respectively. We say \(X\sim\mathsf{Laplace}(0,1)\) if \(X\) follows a Laplace distribution with location and scale parameters \((0,1)\). We often denote
\[\xi_{\varepsilon,\delta}\coloneqq\varepsilon+\log\bigl{(}1/(1-\delta)\bigr{)} \tag{1}\]
to simplify the notation in various places.
## 2 Background: Differential Privacy
This section presents a brief overview of the basic concepts and properties regarding differential privacy. For a comprehensive treatment, we refer the readers to Dwork et al. (2014). In our work, we adhere to the definition of differential privacy (Dwork et al., 2014, page 25), allowing for the inclusion of additional auxiliary variables. This extended definition requires that the standard differential privacy condition holds for every possible value of the auxiliary variable. In our permutation testing framework, we treat random permutations as the auxiliary variables independent of the dataset.
**Definition 1** (Differential Privacy).: _Consider a randomized algorithm \(\mathcal{A}\), which takes as input a dataset \(\mathcal{X}_{n}\) and an additional auxiliary variable \(w\in\mathcal{W}\). For \(\varepsilon>0\) and \(\delta\in[0,1)\), the algorithm \(\mathcal{A}\) is said to be \((\varepsilon,\delta)\)-differentially private if for (i) all \(S\in\operatorname{range}(\mathcal{A})\), (ii) all \(w\in\mathcal{W}\) and (iii) any two datasets \(\mathcal{X}_{n}\) and \(\tilde{\mathcal{X}}_{n}\) with \(d_{\text{ham}}(\mathcal{X}_{n},\tilde{\mathcal{X}}_{n})\leq 1\), the following inequality holds:_
\[\mathbb{P}\bigl{(}\mathcal{A}(\mathcal{X}_{n};w)\in S\,|\,\mathcal{X}_{n},w \bigr{)}\ \leq\ e^{\varepsilon}\mathbb{P}\bigl{(}\mathcal{A}(\tilde{\mathcal{X}}_{n};w) \in S\,|\,\tilde{\mathcal{X}}_{n},w\bigr{)}+\delta.\]
We note that \((\varepsilon,0)\)-differential privacy is often simply referred to as \(\varepsilon\)-DP or pure-DP. On the other hand, \((\varepsilon,\delta)\)-differential privacy with \(\delta\in(0,1)\) is referred to as approximate-DP, considered as a relaxation of pure-DP. As mentioned by Dwork et al. (2014, page 18), employing a large value
of \(\delta\) may lead to serious privacy breaches, potentially exposing the complete information of a small number of individuals with a non-trivial probability. Hence, it is generally desirable to choose small values of \(\delta\) such as \(\delta\lesssim\varepsilon^{2}n^{-1}\). Nevertheless, our interest lies in exploring all privacy regimes, and developing comprehensive results applicable in a variety of settings. Consequently, we do not place restrictions on the privacy parameters other than \(\varepsilon>0\) and \(\delta\in[0,1)\).
We collect several fundamental properties of differential privacy that are useful in our contexts. The first property is called post-processing (Dwork et al., 2014, Proposition 2.1), which asserts that any arbitrary post-processing applied to the outcome of a differentially private algorithm preserves the same level of privacy.
**Lemma 1** (Post-Processing).: _Suppose that an algorithm \(\mathcal{A}\) is \((\varepsilon,\delta)\)-differentially private. Then for an arbitrary randomized function \(f\), the composition \(f\circ\mathcal{A}\) is also \((\varepsilon,\delta)\)-differentially private._
Another important property of differential privacy, called the composition theorem (Dwork et al., 2014, Theorem 3.16), presents the overall privacy guarantee for a composition of multiple DP mechanisms.
**Lemma 2** (Composition).: _Suppose that each algorithm \(\mathcal{A}_{i}\) is \((\varepsilon_{i},\delta_{i})\)-differentially private for \(i\in[m]\). Then, the composed algorithm \(\mathcal{A}_{1:m}\) defined as \(\mathcal{A}_{1:m}\coloneqq(\mathcal{A}_{1},\ldots,\mathcal{A}_{m})\) is \((\sum_{i=1}^{m}\varepsilon_{i},\sum_{i=1}^{m}\delta_{i})\)-differentially private._
The definition of \((\varepsilon,\delta)\)-DP immediately leads to the following group property (Acharya et al., 2021, Lemma 19), which plays an important role in constructing minimax lower bounds under DP.
**Lemma 3** (Group Privacy).: _Suppose that an algorithm \(\mathcal{A}\) is \((\varepsilon,\delta)\)-differentially private. Then for (i) all \(S\in\operatorname{range}(\mathcal{A})\), (ii) all \(w\in\mathcal{W}\), and (iii) any two datasets \(\mathcal{X}_{n}\) and \(\tilde{\mathcal{X}}_{n}\) with \(d_{\mathrm{ham}}(\mathcal{X}_{n},\tilde{\mathcal{X}}_{n})\leq m\), the following inequality holds:_
\[\mathbb{P}\big{(}\mathcal{A}(\mathcal{X}_{n};w)\in S\,|\,\mathcal{X}_{n},w \big{)}\ \leq\ e^{m\varepsilon}\mathbb{P}\big{(}\mathcal{A}(\tilde{\mathcal{X}}_{n};w) \in S\,|\,\tilde{\mathcal{X}}_{n},w\big{)}+me^{(m-1)\varepsilon}\delta. \tag{2}\]
Several mechanisms have been developed to safeguard differential privacy, with the Laplace mechanism (Dwork et al., 2006b) standing out as one of the most commonly used approaches. To formally state the Laplace mechanism, we first describe the global sensitivity, which is a keystone in the differential privacy framework. We stress that the definition presented below allows us to take into account an additional auxiliary variable \(w\), and thus it is more general than the definition commonly encountered in the DP literature, such as in Dwork et al. (2006b).
**Definition 2** (Global \(\ell_{p}\)-Sensitivity).: _Consider a function \(f\) taking as input a dataset \(\mathcal{X}_{n}\) and an additional auxiliary variable \(w\in\mathcal{W}\), and assume that the output of \(f\) lies in \(\mathbb{R}^{r}\). For \(p\geq 1\), the global \(\ell_{p}\)-sensitivity of \(f\) is defined as_
\[\Delta_{f}^{p}\ \coloneqq\ \sup_{w\in\mathcal{W}}\sup_{\begin{subarray}{c} \mathcal{X}_{n},\tilde{\mathcal{X}}_{n}:\\ d_{\mathrm{ham}}(\mathcal{X}_{n},\tilde{\mathcal{X}}_{n})\leq 1\end{subarray}} \big{\|}f(\mathcal{X}_{n};w)-f(\tilde{\mathcal{X}}_{n};w)\big{\|}_{p},\]
_where \(\|x\|_{p}\) denotes the \(\ell_{p}\) norm of a vector \(x\)._
The Laplace mechanism works with the \(\ell_{1}\)-sensitivity, which determines the scaling factor of the Laplace noise injected into the outputs of the function. Formally, the Laplace mechanism is given as follows.
**Definition 3** (Laplace Mechanism).: _Consider a function \(f\) with the \(\ell_{1}\)-sensitivity \(\Delta^{1}_{f}\) described in Definition 2. For a given privacy parameter \(\xi>0\), the Laplace mechanism is defined as the random function:_
\[\mathcal{M}^{\xi}_{f}(\mathcal{X}_{n};w)\coloneqq f(\mathcal{X}_{n};w)+\frac{ \Delta^{1}_{f}}{\xi}(\zeta_{1},\ldots,\zeta_{r})^{\top},\]
_where \(\zeta_{1},\ldots,\zeta_{r}\stackrel{{\mathrm{i.i.d.}}}{{\sim}} \mathsf{Laplace}(0,1)\) generated independent of \(\mathcal{X}_{n}\) and \(w\)._
The privacy guarantee of the Laplace mechanism depends crucially on the choice of privacy parameter \(\xi\). It is well-known that the Laplace mechanism is \((\varepsilon,0)\)-DP when \(\xi=\varepsilon\) (Dwork et al., 2014, Theorem 3.6). In general, Acharya et al. (2018, Lemma 5) shows that any \((\varepsilon+\delta,0)\)-DP algorithm is also \((\varepsilon,\delta)\)-DP. While this strategy is effective for small values of \(\delta\), it returns a suboptimal result when \(\delta\) is close to one. Concretely, as \(\delta\) approaches one, we enter the non-private regime where adding noise is unnecessary, yet the Laplace mechanism with \(\xi=\varepsilon+\delta\) still injects a non-negligible amount of noise into the algorithm. The refined calibration result proposed by Holohan et al. (2015) avoids this issue, proving that \(\mathcal{M}^{\xi_{\varepsilon,\delta}}_{f}\) with \(\xi_{\varepsilon,\delta}=\varepsilon+\log(1/(1-\delta))\) is also \((\varepsilon,\delta)\)-DP. We record this guarantee in the following lemma.
**Lemma 4** (Differential Privacy of Laplace Mechanism).: _Let \(\varepsilon>0\) and \(\delta\in[0,1)\). The Laplace mechanism \(\mathcal{M}^{\xi_{\varepsilon,\delta}}_{f}\) in Definition 3 with \(\xi_{\varepsilon,\delta}=\varepsilon+\log\bigl{(}1/(1-\delta)\bigr{)}\) is \((\varepsilon,\delta)\)-differentially private._
It is worth noting that Holohan et al. (2015) consider typical differential privacy without considering an auxiliary variable \(w\). Nevertheless, the same proof can be applied to differential privacy involving auxiliary variables, provided that we consider the global sensitivity holding uniformly over \(w\in\mathcal{W}\) as in Definition 2.
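To make the calibration concrete, the following minimal Python sketch (an illustrative helper of our own, not part of any released library) implements the Laplace mechanism of Definition 3 with the refined noise scale \(\Delta_{f}/\xi_{\varepsilon,\delta}\) from Lemma 4.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, delta=0.0, rng=None):
    """Privatize a numeric output f(X_n; w) via the Laplace mechanism.

    Uses the refined calibration xi = epsilon + log(1/(1-delta)) of
    Lemma 4, so the output is (epsilon, delta)-differentially private,
    provided `sensitivity` upper bounds the global l1-sensitivity of f.
    """
    rng = np.random.default_rng() if rng is None else rng
    xi = epsilon + np.log(1.0 / (1.0 - delta))   # xi_{eps, delta} in (1)
    value = np.asarray(value, dtype=float)
    return value + rng.laplace(scale=sensitivity / xi, size=value.shape)
```

For \(\delta=0\) this reduces to the standard \(\varepsilon\)-DP Laplace mechanism with scale \(\Delta_{f}/\varepsilon\).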
**Remark 1** (Gaussian Mechanism).: For \((\varepsilon,\delta)\)-DP, one can also consider the Gaussian mechanism with the \(\ell_{2}\)-sensitivity, another common method for preserving privacy (Dwork et al., 2006, 2014). The Gaussian mechanism can be beneficial over the Laplace mechanism when the \(\ell_{2}\)-sensitivity is significantly smaller than the \(\ell_{1}\)-sensitivity. However, this benefit is not immediately clear when the outcome of \(f\) is one-dimensional, in which case the \(\ell_{p}\)-sensitivity is the same for every \(p\geq 1\). As our framework is mainly concerned with one-dimensional numeric outcomes, we simply focus on the Laplace mechanism and refer to the _global \(\ell_{1}\)-sensitivity_ as the _global sensitivity_ whenever it is clear from the context, and simply denote it by \(\Delta_{f}\).
## 3 Differentially Private Permutation Tests
In this section, we introduce a general framework for constructing a differentially private permutation test. To begin, consider a class of distributions \(\mathcal{P}\), which is formed by the union of two disjoint subclasses: \(\mathcal{P}_{0}\) and \(\mathcal{P}_{1}\). Suppose that we observe a random sample \(\mathcal{X}_{n}\) of size \(n\) drawn from \(P\in\mathcal{P}\).
Given \(\mathcal{X}_{n}\), our ultimate goal is to test whether \(H_{0}:P\in\mathcal{P}_{0}\) or \(H_{1}:P\in\mathcal{P}_{1}\), while preserving differential privacy. Consider a test statistic \(T:\mathcal{X}_{n}\mapsto\mathbb{R}\), which is assumed to take a large value under the alternative hypothesis \(H_{1}\). To build on the permutation principle (Lehmann and Romano, 2005, Chapter 15.2), we make the assumption that \(\mathcal{X}_{n}\) is exchangeable under the null \(H_{0}\). That is, for any permutation \(\mathbf{\pi}\coloneqq(\pi_{1},\ldots,\pi_{n})\in\mathbf{\Pi}_{n}\), the joint distribution of \(\mathcal{X}_{n}\) is the same as that of \(\mathcal{X}_{n}^{\mathbf{\pi}}\coloneqq(X_{\pi_{1}},\ldots,X_{\pi_{n}})\). Under the exchangeability assumption, the permutation test rejects the null when \(T\) is significantly larger than the permuted counterparts. More formally, let \(\mathbf{\pi}_{1},\ldots,\mathbf{\pi}_{B}\) be i.i.d. random permutations of \([n]\), and denote by \(T(\mathcal{X}_{n}^{\mathbf{\pi}_{1}}),\ldots,T(\mathcal{X}_{n}^{\mathbf{\pi}_{B}})\), the test statistics computed based on \(\mathcal{X}_{n}^{\mathbf{\pi}_{1}},\ldots,\mathcal{X}_{n}^{\mathbf{\pi}_{B}}\), respectively. The (Monte Carlo) permutation test then rejects the null hypothesis when the permutation \(p\)-value is less than or equal to significance level \(\alpha\), _i.e._,
\[\widehat{p}\coloneqq\frac{1}{B+1}\bigg{\{}\sum_{i=1}^{B}\mathds{1}\big{(}T( \mathcal{X}_{n}^{\mathbf{\pi}_{i}})\geq T(\mathcal{X}_{n})\big{)}+1\bigg{\}}\leq\alpha. \tag{3}\]
It is well-known that \(\widehat{p}\) is super-uniform, _i.e._, \(\mathbb{P}(\widehat{p}\leq t)\leq t\) for all \(t\in[0,1]\), under exchangeability of \(\mathcal{X}_{n}\) (_e.g._, Lemma 15). Therefore the permutation test \(\mathds{1}(\widehat{p}\leq\alpha)\) controls the type I error for any finite sample size \(n\). Our aim is to privatize the permutation test under the DP constraint, while maintaining finite-sample validity and achieving competitive (potentially optimal) power.
For notational convenience, we often write \(T_{0}=T(\mathcal{X}_{n})\) and \(T_{i}=T(\mathcal{X}_{n}^{\mathbf{\pi}_{i}})\) for \(i\in[B]\), and set \(\mathbf{\pi}_{0}=(1,2,\ldots,n)\) in what follows.
### Proposed Privatization Method
To describe the proposed method, suppose that the test statistic \(T\) has the global sensitivity (Definition 2) with the permutation \(\mathbf{\pi}\) as an auxiliary variable:
\[\Delta_{T}\coloneqq\sup_{\mathbf{\pi}\in\mathbf{\Pi}_{n}}\sup_{ \begin{subarray}{c}\mathcal{X}_{n},\tilde{\mathcal{X}}_{n}:\\ d_{\text{ham}}(\mathcal{X}_{n},\tilde{\mathcal{X}}_{n})\leq 1\end{subarray}} \big{|}T(\mathcal{X}_{n}^{\mathbf{\pi}})-T(\tilde{\mathcal{X}}_{n}^{\mathbf{\pi}}) \big{|}. \tag{4}\]
Our definition of global sensitivity above, tailored to permutation tests, is stronger than the usual one since it measures the sensitivity over all possible permutations. However, this additional requirement is not overly restrictive, as we demonstrate below for integral probability metrics.
**Example 1** (Sensitivity of Integral Probability Metric).: Consider a two-sample setting where we observe random variables \(\mathcal{Y}_{n}=\{Y_{1},\ldots,Y_{n}\}\) and \(\mathcal{Z}_{m}=\{Z_{1},\ldots,Z_{m}\}\), each supported on \(\mathbb{S}\). Let \(\mathcal{F}\) be a class of real-valued functions on \(\mathbb{S}\). A plug-in estimator of the corresponding integral probability metric (IPM) is given as
\[T=\sup_{f\in\mathcal{F}}\bigg{|}\frac{1}{n}\sum_{i=1}^{n}f(Y_{i}) -\frac{1}{m}\sum_{i=1}^{m}f(Z_{i})\bigg{|}. \tag{5}\]
As detailed in Appendix B.3, the global sensitivity of \(T\) is precisely equal to
\[\Delta_{T}=\frac{1}{\min\{n,m\}}\sup_{X,X^{\prime}\in\mathbb{S}} \sup_{f\in\mathcal{F}}|f(X)-f(X^{\prime})|.\]
The IPM includes several metrics commonly used in the literature (Sriperumbudur et al., 2012) and their sensitivity can be analyzed as follows.
1. _Mean difference in \(\ell_{p}\)_: Let \(\mathbb{S}\subset\mathbb{R}^{d}\), and set \(1\leq p,q\leq\infty\) such that \(1/p+1/q=1\). Choosing \(\mathcal{F}=\{f:\mathbb{S}\mapsto\mathbb{R}\,|\,f(x)=a^{\top}x,\|a\|_{q}\leq 1\}\), the IPM becomes the \(p\)th norm of the sample mean difference between \(\mathcal{Y}_{n}\) and \(\mathcal{Z}_{m}\). In this case, we have \(\sup_{X,X^{\prime}\in\mathbb{S}}\sup_{f\in\mathcal{F}}|f(X)-f(X^{\prime})|=\sup_{X,X^{\prime}\in\mathbb{S}}\|X-X^{\prime}\|_{p}\); see the numerical sketch after this list.
2. _Wasserstein distance_: When \(\mathcal{F}=\{f:\mathbb{S}\mapsto\mathbb{R}\,|\,\|f\|_{\mathrm{Lip}}\leq 1\}\) where \(\|f\|_{\mathrm{Lip}}\) denotes the minimal Lipschitz constant for \(f\) on a metric space \((\mathbb{S},\|\cdot\|)\), the IPM corresponds to the Wasserstein 1-distance. By the Lipschitz property of \(f\), we have \(\sup_{X,X^{\prime}\in\mathbb{S}}\sup_{f\in\mathcal{F}}|f(X)-f(X^{\prime})| \leq\sup_{X,X^{\prime}\in\mathbb{S}}\|X-X^{\prime}\|\).
3. _Total variation distance_: Let \(\mathcal{F}=\{f:\mathbb{S}\mapsto\mathbb{R}\,|\,\sup_{x\in\mathbb{S}}|f(x)|\leq 1\}\). The corresponding IPM is the total variation distance for which we have \(\sup_{X,X^{\prime}\in\mathbb{S}}\sup_{f\in\mathcal{F}}|f(X)-f(X^{\prime})|\leq 2\).
4. _Kolmogorov distance_: When \(\mathcal{F}=\{\mathds{1}(-\infty,x]:x\in\mathbb{R}^{d}\}\), the IPM is called the Kolmogorov distance. In this case, we have \(\sup_{X,X^{\prime}\in\mathbb{S}}\sup_{f\in\mathcal{F}}|f(X)-f(X^{\prime})|\leq 1\) with \(\mathbb{S}=\mathbb{R}^{d}\).
5. _Maximum mean discrepancy_: Let \(\|f\|_{\mathcal{H}_{k}}\) be the norm of a function \(f\) in a reproducing kernel Hilbert space \(\mathcal{H}_{k}\) equipped with kernel \(k\). When \(\mathcal{F}=\{f:\mathbb{S}\mapsto\mathbb{R}\,|\,\|f\|_{\mathcal{H}_{k}}\leq 1\}\), the IPM corresponds to the maximum mean discrepancy and it satisfies that \(\sup_{X,X^{\prime}\in\mathbb{S}}\sup_{f\in\mathcal{F}}|f(X)-f(X^{\prime})|\leq \sqrt{2K}\) where \(K\) is the maximum value of a non-negative kernel \(k\). See Lemma 5 for details.
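As a quick numerical illustration of the mean-difference case (item 1 above), the following sketch (our own, assuming data supported on the Euclidean unit ball so that the diameter is \(2\)) checks that changing a single observation moves the \(\ell_{2}\) mean-difference statistic by at most \(\Delta_{T}=2/\min\{n,m\}\).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 50, 60, 3

def to_unit_ball(A):
    # Project rows onto the unit ball so that sup ||X - X'||_2 <= 2.
    return A / np.maximum(1.0, np.linalg.norm(A, axis=1, keepdims=True))

Y = to_unit_ball(rng.normal(size=(n, d)))
Z = to_unit_ball(rng.normal(size=(m, d)))

def mean_diff_ipm(Y, Z):
    # IPM with F = {x -> a^T x : ||a||_2 <= 1}: the l2 norm of the mean gap.
    return np.linalg.norm(Y.mean(axis=0) - Z.mean(axis=0))

Y_prime = Y.copy()
Y_prime[0] = -Y_prime[0]                      # alter exactly one record
change = abs(mean_diff_ipm(Y_prime, Z) - mean_diff_ipm(Y, Z))
assert change <= 2.0 / min(n, m) + 1e-12      # Delta_T from Example 1
```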
In general, obtaining the exact value of the global sensitivity \(\Delta_{T}\) can be challenging, and thus we often work with an upper bound for \(\Delta_{T}\). We remark that the differential privacy guarantee of the Laplace mechanism in Lemma 4 remains valid when we replace the sensitivity in the Laplace mechanism with any upper bound. With an abuse of notation, we also use \(\Delta_{T}\) to denote an upper bound for the global sensitivity, when the exact value of the global sensitivity is not available.
**Naive Approach.** Given the global sensitivity of \(T\), one naive attempt to privatize the permutation test is to apply the basic composition theorem (Lemma 2). To depict the idea, let \(\{\zeta_{i}\}_{i=0}^{B}\) be a sequence of i.i.d. \(\mathsf{Laplace}(0,1)\) random variables, and define
\[\widetilde{M}_{i}\coloneqq T_{i}+\frac{\Delta_{T}}{\varepsilon(B+1)^{-1}+\log \bigl{(}1/\{1-\delta(B+1)^{-1}\}\bigr{)}}\zeta_{i},\quad\text{for }i\in\{0\}\cup[B].\]
By the Laplace mechanism, each \(\widetilde{M}_{i}\) is \(\bigl{(}\varepsilon/(B+1),\delta/(B+1)\bigr{)}\)-DP and the composition theorem in Lemma 2 then ensures that the permutation \(p\)-value given as
\[\widehat{p}_{\mathrm{dp}}^{\mathrm{naive}}\coloneqq\frac{1}{B+1}\bigg{\{} \sum_{i=1}^{B}\mathds{1}\bigl{(}\widetilde{M}_{i}\geq\widetilde{M}_{0}\bigr{)} +1\bigg{\}} \tag{6}\]
is \((\varepsilon,\delta)\)-DP. Moreover, \(\{\widetilde{M}_{i}\}_{i=0}^{B}\) are exchangeable under the null, which in turn yields that \(\widehat{p}_{\mathrm{dp}}^{\mathrm{naive}}\) is a valid \(p\)-value by Lemma 15. While this naive approach returns rigorous guarantees on both
privacy and validity, it leaves room for improvement in terms of power. Observe that the noise level grows linearly in the number of permutations \(B\). This means that the Laplace noise overwhelms the signal when \(B\) is large, leading to a loss of power. It is also worth noting that the permutation \(p\)-value is lower bounded by \((B+1)^{-1}\), so in order to have non-zero power, the number of permutations \(B\) must be greater than \(\alpha^{-1}-1\). Hence, one cannot take \(B\) to be arbitrarily small. This tension serves as the motivation for our proposal, described below.
**Refined Approach.** The factor of \(B+1\) arises from an application of the composition theorem (Lemma 2), which cannot be improved in general (_e.g._, Section 2.1 of Steinke, 2022). As one of our key contributions, we remove this unpleasant dependence on \(B\) via the quantile representation of the permutation test (Lemma 17). To describe our proposal, define
\[M_{i}\coloneqq T_{i}+\frac{2\Delta_{T}}{\xi_{\varepsilon,\delta}}\zeta_{i},\]
for \(i\in\{0\}\cup[B]\), where \(\xi_{\varepsilon,\delta}\) is defined in (1). Notably, the noise level \(2\Delta_{T}\xi_{\varepsilon,\delta}^{-1}\) is independent of \(B\) and strictly smaller than that of the naive approach for any \(B>1\). Given \(\{M_{i}\}_{i=0}^{B}\), we define the private permutation \(p\)-value as
\[\widehat{p}_{\mathrm{dp}}\coloneqq\frac{1}{B+1}\bigg{\{}\sum_{i=1}^{B}\mathds{1 }\big{(}M_{i}\geq M_{0}\big{)}+1\bigg{\}}, \tag{7}\]
and reject the null when \(\widehat{p}_{\mathrm{dp}}\leq\alpha\). We summarize the proposed method in Algorithm 1.
```
Input: Data \(\mathcal{X}_{n}\), significance level \(\alpha\in(0,1)\), privacy parameters \(\varepsilon>0\) and \(\delta\in[0,1)\), test statistic \(T\), global sensitivity (or its upper bound) \(\Delta_{T}\), number of permutations \(B\in\mathbb{N}\).
For \(i\in[B]\) do
    Generate a random permutation \(\mathbf{\pi}_{i}\) of \([n]\).
    Generate \(\zeta_{i}\sim\mathsf{Laplace}(0,1)\).
    Set \(M_{i}\gets T(\mathcal{X}_{n}^{\mathbf{\pi}_{i}})+2\Delta_{T}\xi_{\varepsilon,\delta}^{-1}\zeta_{i}\) where \(\xi_{\varepsilon,\delta}\coloneqq\varepsilon+\log(1/(1-\delta))\).
End For
Generate \(\zeta_{0}\sim\mathsf{Laplace}(0,1)\) and set \(M_{0}\gets T(\mathcal{X}_{n})+2\Delta_{T}\xi_{\varepsilon,\delta}^{-1}\zeta_{0}\).
Compute the permutation \(p\)-value \(\widehat{p}_{\mathrm{dp}}\) as in (7).
Output: Reject \(H_{0}\) if \(\widehat{p}_{\mathrm{dp}}\leq\alpha\).
```
**Algorithm 1** Differentially Private Permutation Test
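For reference, a self-contained Python sketch of Algorithm 1 might look as follows; all function and argument names are ours for illustration and do not correspond to the released dpkernel package.

```python
import numpy as np

def dp_permutation_test(X, stat, sensitivity, alpha, epsilon, delta=0.0,
                        B=199, permute=None, rng=None):
    """Differentially private permutation test (Algorithm 1).

    X           : array whose first axis indexes the n observations
    stat        : test statistic T evaluated on the (permuted) data
    sensitivity : global sensitivity Delta_T of T, or an upper bound
    permute     : action of a permutation pi on X; defaults to row shuffling
    Returns (reject, p_value).
    """
    rng = np.random.default_rng() if rng is None else rng
    if permute is None:
        permute = lambda X, pi: X[pi]
    xi = epsilon + np.log(1.0 / (1.0 - delta))   # xi_{eps, delta} in (1)
    scale = 2.0 * sensitivity / xi               # noise level of each M_i

    M = np.empty(B + 1)
    M[0] = stat(X) + scale * rng.laplace()       # privatized original statistic
    for i in range(1, B + 1):
        pi = rng.permutation(len(X))
        M[i] = stat(permute(X, pi)) + scale * rng.laplace()

    p_value = (1 + np.sum(M[1:] >= M[0])) / (B + 1)   # equation (7)
    return p_value <= alpha, p_value
```

Note that the noise scale \(2\Delta_{T}\xi_{\varepsilon,\delta}^{-1}\) does not grow with \(B\), in contrast to the naive approach in (6).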
### Validity and Privacy Guarantee
Having introduced our method, we next investigate its theoretical guarantees and provide intuition behind our proposal. We start with the validity of the private test, which follows immediately from Lemma 15.
**Theorem 1** (Validity Guarantee).: _Suppose that \(\mathcal{X}_{n}\) are exchangeable under the null \(H_{0}:P\in\mathcal{P}_{0}\). Then for any \(\alpha\in(0,1)\) and \(B,n\geq 1\), the type I error of the test \(\mathds{1}(\widehat{p}_{\mathrm{dp}}\leq\alpha)\) from Algorithm 1 satisfies_
\[\sup_{P\in\mathcal{P}_{0}}\mathbb{P}_{P}(\widehat{p}_{\mathrm{dp}}\leq\alpha)= \frac{\lfloor(B+1)\alpha\rfloor}{B+1}\leq\alpha.\]
It is worth emphasizing that type I error control of the proposed test is both non-asymptotic and uniform over the entire class of null distributions \(\mathcal{P}_{0}\). Another distinct feature is that the type I error is equal to \(\lfloor(B+1)\alpha\rfloor/(B+1)\), which can be strictly smaller than \(\alpha\). If this small gap is a concern, one can make the type I error exactly equal to \(\alpha\) through randomization (Lemma 16). We also remark that even if we replace the global sensitivity \(\Delta_{T}\) in the procedure with any other value, type I error control remains valid. In other words, the validity of the proposed test is not affected by the noise level of the Laplace mechanism.
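The exact type I error \(\lfloor(B+1)\alpha\rfloor/(B+1)\) is also easy to reproduce empirically. The snippet below is a minimal simulation under the null, reusing the hypothetical dp_permutation_test sketch above with a two-sample mean-gap statistic on data bounded in \([-1,1]\); the reported rejection rate should be close to \(0.05\).

```python
import numpy as np

rng = np.random.default_rng(1)
n, B, alpha, eps, trials = 40, 199, 0.05, 1.0, 2000
# |mean of first half - mean of second half|; for data in [-1, 1], changing
# one point moves either half's mean by at most 2 / (n // 2), uniformly over
# permutations, which gives the sensitivity bound below.
stat = lambda X: abs(X[: n // 2].mean() - X[n // 2:].mean())
delta_T = 2.0 / (n // 2)

rejections = 0
for _ in range(trials):
    X = rng.uniform(-1.0, 1.0, size=n)    # i.i.d., hence exchangeable under H0
    reject, _ = dp_permutation_test(X, stat, delta_T, alpha, eps, B=B, rng=rng)
    rejections += reject
print(rejections / trials)   # expected: floor(200 * 0.05) / 200 = 0.05
```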
Next we turn to the privacy guarantee of the proposed test and show that it is \((\varepsilon,\delta)\)-DP.
**Theorem 2** (Privacy Guarantee).: _For any \(\alpha\in(0,1)\), the permutation test \(\mathds{1}(\widehat{p}_{\mathrm{dp}}\leq\alpha)\) from Algorithm 1 is \((\varepsilon,\delta)\)-differentially private._
It is worth highlighting that the privacy guarantee does not require the exchangeability of \(\mathcal{X}_{n}\). Hence the proposed test is \((\varepsilon,\delta)\)-DP under both the null and the alternative. As mentioned before, we prove the privacy guarantee of the proposed test via the quantile representation of the permutation test (Lemma 17). That is, rejecting the null when \(\widehat{p}\leq\alpha\) where \(\widehat{p}\) is given in (3) is equivalent to rejecting the null when \(T_{0}>Q_{1-\alpha}\) where \(Q_{1-\alpha}\) is the \(1-\alpha\) quantile of \(\{T_{i}\}_{i=0}^{B}\). Roughly speaking, our proof proceeds by privatizing \(T_{0}\) and \(Q_{1-\alpha}\) separately, which gives rise to the factor of \(2\) in \(2\Delta_{T}\xi_{\varepsilon,\delta}^{-1}\). However, a direct application of the Laplace mechanism to \(T_{0}\) and \(Q_{1-\alpha}\) destroys the exchangeability of the random variables, so that type I error control is no longer guaranteed. Making both \(T_{0}\) and \(Q_{1-\alpha}\) private while ensuring the finite-sample validity of the resulting test is non-trivial, and thus we highlight it as one of our main contributions. Along the way, we develop a general sensitivity result for quantiles in Lemma 19, which may be of independent interest. We also point out that the factor of \(2\) in the noise level is a price to pay for not knowing the null distribution of \(T\). When \(T\) is distribution-free under the null, it is possible to sharpen the constant factor from two to one.
### Power Analysis
Moving our focus to the power property, we aim to provide tractable conditions for pointwise consistency and non-asymptotic uniform power. Starting with pointwise consistency, the following result provides conditions under which the power converges to one as the sample size increases against a fixed alternative. Below, we add the subscript \(n\) to \(B_{n}\) to indicate that the number of permutations can vary with the sample size.
**Theorem 3** (Pointwise Consistency).: _Let \(\alpha\in(0,1)\) be a fixed constant. For a given alternative distribution \(P\), suppose that \(\lim_{n\to\infty}\mathbb{P}_{P}(M_{0}\leq M_{1})=0\). Then for any positive sequence of \(B_{n}\) such that \(\min_{n\geq 1}B_{n}+1>\alpha^{-1}\), the differentially private permutation test is consistent in power as \(\lim_{n\to\infty}\mathbb{P}_{P}(\widehat{p}_{\mathrm{dp}}\leq\alpha)=1\)._
In view of the above result, proving consistency of the permutation test essentially boils down to verifying the condition \(M_{0}>M_{1}\), _i.e._, the original statistic is greater than a permuted statistic, with probability approaching one. In Section 4, we showcase the consistency results based on kernel-based methods for two-sample and independence testing. We note that Theorem 3 can be proven in a straightforward manner via a union bound when \(B_{n}\) is fixed. A similar result for fixed \(B_{n}\) can be found in Dobriban (2022, Lemma 5.2) and Rindt et al. (2021, Theorem 6). Extending this result to any arbitrary sequence of \(B_{n}\) requires a different technique that exploits the conditional i.i.d. structure of given variables. To broaden the scope of our paper, we develop a consistency result for general resampling-based tests in Lemma 8 of Appendix B.2, from which we can derive Theorem 3 as a corollary.
While pointwise consistency is a useful property, it is often regarded as a relatively weak guarantee. We now shift our focus to the second result of this subsection, providing a non-asymptotic, uniform guarantee on the power under stronger assumptions. In particular, we identify the moment conditions under which the proposed test has significant power. These conditions can be regarded as the private extension of Kim et al. (2022a, Lemma 3.1). Below, the symbols \(\mathbb{E}_{P,\mathbf{\pi}}\) and \(\mathrm{Var}_{P,\mathbf{\pi}}\) denote the expectation and the variance, respectively, taken over both \(\mathcal{X}_{n}\) and \(\mathbf{\pi}\).
**Theorem 4** (Uniform Power).: _For \(\alpha\in(0,1)\), \(\beta\in(0,1-\alpha)\) and \(\xi_{\varepsilon,\delta}>0\), assume that \(B\geq 16\alpha^{-2}\log(8/\beta)\) and for any \(P\in\mathcal{P}_{1}\),_
\[\begin{split}\mathbb{E}_{P}[T(\mathcal{X}_{n})]-\mathbb{E}_{P, \mathbf{\pi}}[T(\mathcal{X}_{n}^{\mathbf{\pi}})]&\ \geq\ C_{1}\sqrt{\frac{\mathrm{Var}_{P}[T(\mathcal{X}_{n})]+ \mathrm{Var}_{P,\mathbf{\pi}}[T(\mathcal{X}_{n}^{\mathbf{\pi}})]}{\alpha\beta}}\\ &\ +\ C_{2}\frac{\Delta_{T}}{\xi_{\varepsilon,\delta}}\max\bigg{\{} \log\bigg{(}\frac{1}{\alpha}\bigg{)},\,\log\bigg{(}\frac{1}{\beta}\bigg{)} \bigg{\}},\end{split} \tag{8}\]
_where \(C_{1}\) and \(C_{2}\) are universal constants. Then the uniform power of the private permutation test is bounded below by \(1-\beta\) as_
\[\inf_{P\in\mathcal{P}_{1}}\mathbb{P}_{P}(\widehat{p}_{\mathrm{dp}}\leq\alpha) \geq 1-\beta.\]
A few remarks are in order.
* The above theorem ensures that the private permutation test has significant power as long as the signal of the problem, namely the difference between the expected values of the original test statistic and of the permuted test statistic, is larger than the noise of the problem, namely the square root of the variances and the noise level of the Laplace mechanism.
* The proof of Theorem 4, given in Appendix E.2, builds on the proof of Kim et al. (2022a, Lemma 3.1) where the key idea is to replace the random permutation threshold with a deterministic one using concentration inequalities. The main distinction from Kim et al. (2022a) is the incorporation of Laplace noises in the analysis, which results in the second line of the condition (8). We also note that Theorem 4 concerns the Monte Carlo permutation test, which is computationally more efficient than the full permutation test analyzed in Kim et al. (2022a, Lemma 3.1).
* Notably, the condition on \(B\) is independent of the sample size, which is a consequence of the Dvoretzky-Kiefer-Wolfowitz (DKW) inequality (Massart, 1990), and similar conditions can be found in Schrab et al. (2023) and Schrab et al. (2022). One can improve this restriction, especially constant factors, with more technical effort or by using other techniques, such as the one based on order statistics (Domingo-Enrich et al., 2023, Lemma 6).
* It is worth mentioning that the first line of the condition (8) relies on a polynomial dependence on \(\alpha\) and \(\beta\), which arise from the application of Chebyshev's and Markov's inequalities. If the considered test statistic has an exponential tail bound, these polynomial factors can be improved to logarithmic ones as we illustrate in Section 5.
Before moving on, let us briefly illustrate Theorem 4 based on the plug-in IPM statistic considered in Example 1.
**Example 2** (Power Analysis against IPM alternatives).: Continuing our discussion from Example 1, denote the IPM between \(P\) and \(Q\) with a class of functions \(\mathcal{F}\) as
\[\mathrm{IPM}_{\mathcal{F}}(P,Q)=\sup_{f\in\mathcal{F}}\big{|} \mathbb{E}_{P}[f(Y)]-\mathbb{E}_{Q}[f(Z)]\big{|}.\]
Without loss of generality, assume \(n\leq m\) and write the pooled sample as \(\mathcal{X}_{n+m}=\mathcal{Y}_{n}\cup\mathcal{Z}_{m}=\{X_{1},\ldots,X_{n+m}\}\). Consider the maximum Rademacher complexity of \(\mathcal{F}\) over all possible permuted samples given as
\[\mathcal{R}_{n}(\mathcal{F})=\sup_{\boldsymbol{\pi}\in\boldsymbol {\Pi}_{n+m}}\mathbb{E}\bigg{[}\sup_{f\in\mathcal{F}}\bigg{|}\frac{1}{n}\sum_{ i=1}^{n}\omega_{i}f(X_{\pi_{i}})\bigg{|}\bigg{]},\]
where \(\{\omega_{i}\}_{i=1}^{n}\) are i.i.d. Rademacher random variables independent of \(\mathcal{X}_{n+m}\). Suppose that we implement Algorithm 1 using the plug-in IPM estimator \(T(\mathcal{X}_{n+m})\) in (5). Then Theorem 4 yields that the resulting permutation test has the power greater than \(1-\beta\) if
\[\mathrm{IPM}_{\mathcal{F}}(P,Q)\geq C_{1}\frac{\mathcal{R}_{n}( \mathcal{F})}{\sqrt{\alpha\beta}}+C_{2}\frac{\sqrt{n}\Delta_{T}}{\sqrt{\alpha \beta}}+C_{3}\frac{\Delta_{T}}{\xi_{\varepsilon,\delta}}\max\bigg{\{}\log \!\bigg{(}\frac{1}{\alpha}\bigg{)},\,\log\!\bigg{(}\frac{1}{\beta}\bigg{)} \bigg{\}},\]
where \(C_{1},C_{2},C_{3}\) are some positive constants. We defer a detailed analysis that leads to the above result to Appendix B.4. We also refer to van der Vaart and Wellner (1996); Bartlett and Mendelson (2002); Wainwright (2019) for additional information on the Rademacher complexity and illustrative examples.
So far we have examined the properties of the private permutation test in a general context. In the next section, we will apply our framework to the specific problem of kernel testing, and provide a detailed analysis.
## 4 Application: Differentially Private Kernel Tests
In recent years, there has been a growing trend in employing kernel-based methods for hypothesis testing problems, such as the MMD and the HSIC. This popularity is partly due to their ability to
capture complex, non-linear relationships and to their straightforward implementation. Equipped with such benefits, the MMD (Gretton et al., 2012) is used to measure the difference between two probability distributions, while the HSIC (Gretton et al., 2005) is used to quantify the dependence between two random variables. In this and subsequent sections, we propose differentially private tests based on these two kernel-based measures, and provide an in-depth analysis of their theoretical properties.
**Terminology.** Before we begin, let us establish the terminology related to kernels. Consider a reproducing kernel \(k:\mathbb{S}\times\mathbb{S}\mapsto\mathbb{R}\) defined on a separable topological space \(\mathbb{S}\). Let \(\mathcal{H}_{k}\) be a reproducing kernel Hilbert space (RKHS) endowed with kernel \(k\). A kernel \(k\) is said to be _characteristic_ if the kernel mean embedding
\[\mu_{P}=\int_{\mathbb{S}}k(\cdot,x)\mathrm{d}P(x)\in\mathcal{H}_{k}\]
is injective. In addition, a kernel \(k:\mathbb{S}\times\mathbb{S}\mapsto\mathbb{R}\) is said to be _translation invariant_ if there exists a symmetric positive definite function \(\kappa\) such that \(k(x,y)=\kappa(x-y)\) for all \(x,y\in\mathbb{S}\). Assuming that \(0\leq k(x,y)\leq K\) for all \(x,y\in\mathbb{S}\), we say that the kernel \(k\) has _non-empty level sets_ on \(\mathbb{S}\) if, for any \(\epsilon\in(0,K)\), there exist \(x,y\in\mathbb{S}\) such that \(k(x,y)\leq\epsilon\). Some popular examples of kernels include the Gaussian kernel \(k(x,y)=e^{-\sigma\|x-y\|_{2}^{2}}\) and the Laplacian kernel \(k(x,y)=e^{-\sigma\|x-y\|_{1}}\) for \(\sigma>0\). These two kernels are translation invariant and known to be characteristic on \(\mathbb{R}^{d}\) (_e.g._, Sriperumbudur et al., 2011). They also have non-empty level sets on \(\mathbb{R}^{d}\), which can be deduced from the continuity of the kernel function.
### Differentially Private MMD Test
Starting with the MMD, suppose we are given mutually independent samples \(\mathcal{Y}_{n}\coloneqq\{Y_{1},\ldots,Y_{n}\}\stackrel{{\text{ i.i.d.}}}{{\sim}}P\) and \(\mathcal{Z}_{m}\coloneqq\{Z_{1},\ldots,Z_{m}\}\stackrel{{\text{ i.i.d.}}}{{\sim}}Q\) on a domain \(\mathbb{S}\). Without loss of generality, assume \(n\leq m\) throughout the rest of this paper. Based on these samples, the two-sample problem aims to determine whether two probability distributions \(P\) and \(Q\) coincide. A majority of two-sample methodologies target a certain metric between \(P\) and \(Q\), and use their empirical counterpart as a test statistic. One such method is the non-private MMD test (Gretton et al., 2012) where the difference between \(P\) and \(Q\) is quantified in terms of MMD. To elaborate, consider the unit ball in a RKHS \(\mathcal{H}_{k}\) denoted by \(\mathcal{F}_{k}\coloneqq\{f\in\mathcal{H}_{k}:\|f\|_{\mathcal{H}_{k}}\leq 1\}\). The maximum mean discrepancy between \(P\) and \(Q\) is defined as
\[\mathrm{MMD}_{k}(P,Q)\coloneqq\sup_{f\in\mathcal{F}_{k}}\bigl{\{}\mathbb{E}_{P }[f(Y)]-\mathbb{E}_{Q}[f(Z)]\bigr{\}}.\]
The empirical MMD is a plug-in estimator of MMD that replaces \(P\) and \(Q\) with the corresponding empirical probability measures. Formally, letting \(\mathcal{X}_{n+m}=\mathcal{Y}_{n}\cup\mathcal{Z}_{m}\) be the pooled sample as before, the empirical MMD is given as
\[\widehat{\mathrm{MMD}}(\mathcal{X}_{n+m})\coloneqq\sup_{f\in\mathcal{F}_{k}}\biggl{\{}\frac{1}{n}\sum_{i=1}^{n}f(Y_{i})-\frac{1}{m}\sum_{j=1}^{m}f(Z_{j})\biggr{\}}. \tag{9}\]
Thanks to the reproducing kernel property, the empirical MMD can be computed straightforwardly. In particular, the squared empirical MMD can be calculated in quadratic time using the kernel-based expression
\[\widehat{\mathrm{MMD}}^{2}(\mathcal{X}_{n+m})=\frac{1}{n^{2}}\sum_{i,j=1}^{n}k(Y _{i},Y_{j})+\frac{1}{m^{2}}\sum_{i,j=1}^{m}k(Z_{i},Z_{j})-\frac{2}{nm}\sum_{i=1} ^{n}\sum_{j=1}^{m}k(Y_{i},Z_{j}). \tag{10}\]
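Expression (10) translates directly into vectorized code. Below is a minimal sketch (our own illustration, using a Gaussian kernel so that \(K=1\)) computing the empirical MMD.

```python
import numpy as np
from scipy.spatial.distance import cdist

def gaussian_kernel(A, B, sigma=1.0):
    # k(x, y) = exp(-sigma * ||x - y||_2^2); bounded with K = 1
    return np.exp(-sigma * cdist(A, B, "sqeuclidean"))

def empirical_mmd(Y, Z, sigma=1.0):
    """Plug-in MMD estimator: the square root of expression (10)."""
    n, m = len(Y), len(Z)
    mmd_sq = (gaussian_kernel(Y, Y, sigma).sum() / n**2
              + gaussian_kernel(Z, Z, sigma).sum() / m**2
              - 2.0 * gaussian_kernel(Y, Z, sigma).sum() / (n * m))
    return np.sqrt(max(mmd_sq, 0.0))   # guard against tiny negative round-off
```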
In order to propose a private version of the MMD test, we begin with the global sensitivity of the empirical MMD.
**Lemma 5** (Sensitivity of Empirical MMD).: _Assume that the kernel \(k\) is bounded as \(0\leq k(x,y)\leq K\) for all \(x,y\in\mathbb{S}\). Then the global sensitivity of the empirical MMD satisfies_
\[\sup_{\boldsymbol{\pi}\in\boldsymbol{\Pi}_{n+m}}\sup_{\begin{subarray}{c} \mathcal{X}_{n+m},\tilde{\mathcal{X}}_{n+m}:\\ d_{\mathrm{ham}}(\mathcal{X}_{n+m},\tilde{\mathcal{X}}_{n+m})\leq 1\end{subarray}} \big{|}\widehat{\mathrm{MMD}}(\mathcal{X}_{n+m}^{\boldsymbol{\pi}})- \widehat{\mathrm{MMD}}(\tilde{\mathcal{X}}_{n+m}^{\boldsymbol{\pi}})\big{|} \leq\frac{\sqrt{2K}}{n}.\]
_Moreover assume that \(k\) is translation invariant, and has non-empty level sets in \(\mathbb{S}\). Then the inequality becomes an equality._
The proof of Lemma 5 can be found in Appendix E.5. Note that the sensitivity of the empirical MMD in Lemma 5 can be equivalently defined without the supremum over the permutations \(\boldsymbol{\pi}\in\boldsymbol{\Pi}_{n+m}\) as \(d_{\mathrm{ham}}(\mathcal{X}_{n+m},\tilde{\mathcal{X}}_{n+m})\) is the same as \(d_{\mathrm{ham}}(\mathcal{X}_{n+m}^{\boldsymbol{\pi}},\tilde{\mathcal{X}}_{n+ m}^{\boldsymbol{\pi}})\) for any \(\boldsymbol{\pi}\in\boldsymbol{\Pi}_{n+m}\). As we will see in Section 4.2, however, this property does not hold for independence testing. We also highlight our lower bound result, indicating that the upper bound \(\sqrt{2K}/n\) cannot be improved for translation invariant kernels with non-empty level sets on their domain.
With Lemma 5 in place, we set the sensitivity parameter \(\Delta_{T}=\sqrt{2K}n^{-1}\) and run Algorithm 1 with the empirical MMD in (9) as the test statistic. We refer to the resulting private permutation test as the dpMMD test and denote it as \(\phi_{\mathrm{dpMMD}}\). The dpMMD test has the following properties, which are proven in Appendix E.6.
**Theorem 5** (Properties of dpMMD test).: _Let \(\alpha\in(0,1)\) be a fixed constant and the kernel \(k\) be bounded as \(0\leq k(x,y)\leq K\) for all \(x,y\in\mathbb{S}\). Then \(\phi_{\mathrm{dpMMD}}\) satisfies the following properties:_
1. _(Differential Privacy) For_ \(\varepsilon>0\) _and_ \(\delta\in[0,1)\)_,_ \(\phi_{\mathrm{dpMMD}}\) _is_ \((\varepsilon,\delta)\)_-differentially private._
2. _(Validity) The type I error of_ \(\phi_{\mathrm{dpMMD}}\) _is controlled at level_ \(\alpha\) _non-asymptotically._
3. _(Consistency) Suppose that_ \(\mathrm{MMD}_{k}(P,Q)\) _is independent of the sample sizes and strictly positive for a fixed pair of_ \((P,Q)\)_. Moreover assume that_ \(n^{-1}\xi_{\varepsilon,\delta}^{-1}\to 0\) _as_ \(n\to\infty\)_. Then for any sequence_ \(B_{n}\) _such that_ \(\min_{n\geq 1}B_{n}+1>\alpha^{-1}\)_, we have_ \(\lim_{n\to\infty}\mathbb{E}_{P,Q}[\phi_{\mathrm{dpMMD}}]=1\)_._
The first two properties on differential privacy and validity are clear in view of Theorem 1 and Theorem 2. It is well-known that the population MMD is strictly positive under the alternative if the kernel is characteristic (Gretton et al., 2012). Hence the condition on the MMD metric in (_P3_) is satisfied for characteristic kernels against any fixed alternative. Another highlight is that
consistency holds irrespective of the relationship between \(n\) and \(m\). The power converges to one as long as the minimum sample size goes to infinity. We also remark that the condition \(n^{-1}\xi_{\varepsilon,\delta}^{-1}\to 0\) is critical for obtaining consistency. If not, the empirical MMD is overwhelmed by the Laplace noise, which leads to a significant loss of power.
We note in passing the recent work of Yang et al. (2023), which also utilizes the MMD in differentially private data analysis. Although both Yang et al. (2023) and our work consider the differentially private MMD, their primary focus is on differentially private data generation, which differs from our focus on hypothesis testing.
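Combining the pieces, a minimal end-to-end sketch of the dpMMD test reads as follows; it reuses the illustrative dp_permutation_test and empirical_mmd helpers above (our own names, not the dpkernel API), with \(\Delta_{T}=\sqrt{2}/\min\{n,m\}\) for the Gaussian kernel (\(K=1\)) as given by Lemma 5.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 100, 120
Y = rng.normal(0.0, 1.0, size=(n, 2))
Z = rng.normal(0.5, 1.0, size=(m, 2))            # mean-shifted alternative

X = np.vstack([Y, Z])                            # pooled sample of size n + m
stat = lambda X: empirical_mmd(X[:n], X[n:])     # re-split after permuting rows
delta_T = np.sqrt(2.0) / min(n, m)               # Lemma 5 with K = 1

reject, p_val = dp_permutation_test(X, stat, delta_T, alpha=0.05,
                                    epsilon=1.0, B=199, rng=rng)
```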
### Differentially Private HSIC Test
Turning to the second application, suppose that we are given an i.i.d. paired sample \(\mathcal{X}_{n}=\{(Y_{i},Z_{i})\}_{i=1}^{n}\) from a joint distribution \(P_{YZ}\) on domain \(\mathbb{Y}\times\mathbb{Z}\). Given \(\mathcal{X}_{n}\), the aim of independence testing is to assess whether \(Y\) and \(Z\) are statistically independent or not. As a kernel dependence measure, the HSIC compares the joint probability measure \(P_{YZ}\) to the product of marginals \(P_{Y}P_{Z}\). To formally define it, let \(k\) and \(\ell\) be kernels on \(\mathbb{Y}\) and \(\mathbb{Z}\), and let \(k\otimes\ell\) be the product kernel given as \(k\otimes\ell\big{(}(y,z),(y^{\prime},z^{\prime})\big{)}=k(y,y^{\prime})\ell(z,z^{\prime})\) for all \(y,y^{\prime}\in\mathbb{Y}\) and \(z,z^{\prime}\in\mathbb{Z}\). Further denoting the unit ball in the RKHS associated with \(k\otimes\ell\) by \(\mathcal{F}_{k\otimes\ell}\), HSIC is defined as

\[\mathrm{HSIC}_{k\otimes\ell}(P_{YZ})\coloneqq\sup_{f\in\mathcal{F}_{k\otimes\ell}}\big{\{}\mathbb{E}_{P_{YZ}}[f(Y,Z)]-\mathbb{E}_{P_{Y}P_{Z}}[f(Y,Z)]\big{\}}.\]

We remark that, in the literature, HSIC is often defined as the square of this quantity; for consistency with MMD, we define it without the square.
In other words, the HSIC of \(Y\) and \(Z\) is simply the MMD between \(P_{YZ}\) and \(P_{Y}P_{Z}\) with the product kernel \(k\otimes\ell\). The empirical HSIC is a plug-in estimator given as
\[\widehat{\mathrm{HSIC}}(\mathcal{X}_{n})\coloneqq\sup_{f\in\mathcal{F}_{k \otimes\ell}}\bigg{\{}\frac{1}{n}\sum_{i=1}^{n}f(Y_{i},Z_{i})-\frac{1}{n^{2}} \sum_{i,j=1}^{n}f(Y_{i},Z_{j})\bigg{\}}. \tag{11}\]
Similarly to the empirical MMD, the squared empirical HSIC also has an explicit form in terms of the kernels \(k\) and \(\ell\) as
\[\widehat{\mathrm{HSIC}}^{2}(\mathcal{X}_{n}) =\ \frac{1}{n^{2}}\sum_{i,j=1}^{n}k(Y_{i},Y_{j})\ell(Z_{i},Z_{j})+ \frac{1}{n^{4}}\sum_{i_{1},i_{2},j_{1},j_{2}=1}^{n}k(Y_{i_{1}},Y_{j_{1}})\ell (Z_{i_{2}},Z_{j_{2}}) \tag{12}\] \[-\frac{2}{n^{3}}\sum_{i,j_{1},j_{2}=1}^{n}k(Y_{i},Y_{j_{1}})\ell (Z_{i},Z_{j_{2}}),\]
which can be computed in quadratic time as described in Song et al. (2012, Theorem 1). For independence testing, the permutation test proceeds by randomly permuting either the \(Y\) observations or the \(Z\) observations. Here, we permute the \(Z\) observations and denote \(\mathcal{X}_{n}^{\boldsymbol{\pi}}=\{(Y_{i},Z_{\pi_{i}})\}_{i=1}^{n}\). With this notation in place, the next lemma explores the global sensitivity of the empirical HSIC.
**Lemma 6** (Sensitivity of Empirical HSIC).: _Assume that the kernels \(k\) and \(\ell\) are bounded as \(0\leq k(y,y^{\prime})\leq K\) and \(0\leq\ell(z,z^{\prime})\leq L\) for all \(y,y^{\prime}\in\mathbb{Y}\) and \(z,z^{\prime}\in\mathbb{Z}\). Then the global sensitivity of the empirical HSIC satisfies_
\[\sup_{\mathbf{\pi}\in\mathbf{\Pi}_{n}}\sup_{\begin{subarray}{c}\mathcal{X}_{n}, \tilde{\mathcal{X}}_{n}:\\ d_{\mathrm{ham}}(\mathcal{X}_{n},\tilde{\mathcal{X}}_{n})\leq 1\end{subarray}} \big{|}\widehat{\mathrm{HSIC}}(\mathcal{X}_{n}^{\mathbf{\pi}})-\widehat{\mathrm{ HSIC}}(\tilde{\mathcal{X}}_{n}^{\mathbf{\pi}})\big{|}\leq\frac{4(n-1)}{n^{2}}\sqrt{KL}.\]
_Moreover assume that \(k\) and \(\ell\) are translation invariant, and have non-empty level sets on \(\mathbb{Y}\) and \(\mathbb{Z}\), respectively. Then the global sensitivity is lower bounded by \(4(n-2.5)n^{-2}\sqrt{KL}\)._
The proof of Lemma 6 can be found in Appendix E.7. Contrary to the MMD case, we observe that the two Hamming distances, namely \(d_{\mathrm{ham}}(\mathcal{X}_{n}^{\mathbf{\pi}},\tilde{\mathcal{X}}_{n}^{\mathbf{\pi}})\) and \(d_{\mathrm{ham}}(\mathcal{X}_{n},\tilde{\mathcal{X}}_{n})\), can differ for independence testing. Consequently, the supremum over the permutations \(\mathbf{\pi}\in\mathbf{\Pi}_{n}\) plays a non-trivial role in the sensitivity of the empirical HSIC. We mention the work of Kusner et al. (2016) that also examines the global sensitivity of the empirical HSIC. Our upper bound result improves theirs by tightening the constant factor from \(12\) to \(4\) via a sharper analysis. In fact, as the lower bound result states, the proposed upper bound is asymptotically tight under mild conditions on \(k\) and \(\ell\).
In view of the above lemma, we set \(\Delta_{T}=4(n-1)n^{-2}\sqrt{KL}\) and run Algorithm 1 with the empirical HSIC in (11) as the test statistic. We refer to the resulting permutation test as the dpHSIC test and denote it as \(\phi_{\mathrm{dpHSIC}}\). Similar to Theorem 5, the dpHSIC test has the following properties, which are proven in Appendix E.8.
**Theorem 6** (Properties of dpHSIC test).: _Let \(\alpha\in(0,1)\) be a fixed constant and assume that the kernels \(k\) and \(\ell\) are bounded as \(0\leq k(y,y^{\prime})\leq K\) and \(0\leq\ell(z,z^{\prime})\leq L\) for all \(y,y^{\prime}\in\mathbb{Y}\) and \(z,z^{\prime}\in\mathbb{Z}\). Then \(\phi_{\mathrm{dpHSIC}}\) satisfies the following properties:_
1. _(Differential Privacy) For_ \(\varepsilon>0\) _and_ \(\delta\in[0,1)\)_,_ \(\phi_{\mathrm{dpHSIC}}\) _is_ \((\varepsilon,\delta)\)_-differentially private._
2. _(Validity) The type I error of_ \(\phi_{\mathrm{dpHSIC}}\) _is controlled at level_ \(\alpha\) _non-asymptotically._
3. _(Consistency) Suppose that_ \(\mathrm{HSIC}_{k\otimes\ell}(P_{YZ})\) _is independent of the sample sizes and strictly positive for a fixed distribution_ \(P_{YZ}\)_. Moreover assume that_ \(n^{-1}\xi_{\varepsilon,\delta}^{-1}\to 0\) _as_ \(n\to\infty\)_. Then for any sequence_ \(B_{n}\) _such that_ \(\min_{n\geq 1}B_{n}+1>\alpha^{-1}\)_, we have_ \(\lim_{n\to\infty}\mathbb{E}_{P_{YZ}}[\phi_{\mathrm{dpHSIC}}]=1\)_._
As for the dpMMD test, the first two properties on differential privacy and validity are direct consequences of Theorem 1 and Theorem 2. The condition for consistency is ensured under any alternative when the kernels are characteristic (Gretton, 2015). Therefore the dpHSIC test equipped with a characteristic kernel is pointwise consistent against any fixed alternative, provided that \(n^{-1}\xi_{\varepsilon,\delta}^{-1}\to 0\) and \(\min_{n\geq 1}B_{n}+1>\alpha^{-1}\).
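In code, the dpHSIC test again reduces to Algorithm 1: compute the plug-in HSIC in (12), set \(\Delta_{T}=4(n-1)n^{-2}\sqrt{KL}\), and permute only the \(Z\) observations. The sketch below (our own illustration with Gaussian kernels, so \(K=L=1\), reusing the earlier hypothetical helpers) makes this explicit.

```python
import numpy as np

def empirical_hsic(Y, Z, sigma_k=1.0, sigma_l=1.0):
    """Plug-in HSIC estimator: the square root of expression (12)."""
    n = len(Y)
    Km = gaussian_kernel(Y, Y, sigma_k)           # n x n Gram matrix of k
    Lm = gaussian_kernel(Z, Z, sigma_l)           # n x n Gram matrix of l
    hsic_sq = ((Km * Lm).sum() / n**2
               + Km.sum() * Lm.sum() / n**4
               - 2.0 * (Km.sum(axis=1) @ Lm.sum(axis=1)) / n**3)
    return np.sqrt(max(hsic_sq, 0.0))

rng = np.random.default_rng(3)
n = 150
Y = rng.normal(size=(n, 1))
Z = Y + 0.5 * rng.normal(size=(n, 1))             # dependent alternative

X = np.hstack([Y, Z])                             # one row per pair (Y_i, Z_i)
stat = lambda X: empirical_hsic(X[:, :1], X[:, 1:])
permute_z = lambda X, pi: np.hstack([X[:, :1], X[pi, 1:]])  # permute Z only
delta_T = 4.0 * (n - 1) / n**2                    # Lemma 6 with K = L = 1

reject, p_val = dp_permutation_test(X, stat, delta_T, alpha=0.05, epsilon=1.0,
                                    B=199, permute=permute_z, rng=rng)
```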
Before moving on and studying uniform power properties, let us briefly remark on the asymptotic null distributions of private kernel test statistics.
**Remark 2** (Asymptotic null distributions).: As mentioned earlier, when the null distribution is tractable, we can improve the power by eliminating the factor of \(2\) in the noise level. However, characterizing the limiting distribution is not a trivial task, even for non-private kernel statistics.
In Appendix B.1, we show that a private kernel statistic converges in distribution to a mixture of Gaussian chaos and Laplace distributions, which is even more intricate than the limiting distribution of a non-private kernel statistic. A recent line of work (Shekhar et al., 2022, 2020) proposes the cross MMD and cross HSIC, which have a tractable limiting distribution with competitive power. We leave the exploration of extending these variants to the private setting and comparing their power performance with our proposed methods as an avenue for future research.
## 5 Uniform Power and Optimality
In the previous section, we examined the fundamental properties of the private kernel tests, including their asymptotic power against fixed alternatives. This section delves into a more challenging setting where the alternative can shrink to the null as the sample size increases, and develops uniform power results. Moreover we highlight an intrinsic trade-off between privacy and statistical power through the lens of minimax analysis, and explore optimality of the proposed private tests under the differential privacy constraint. In the main text, we focus on the analysis of the dpMMD test, and defer analogous results for the dpHSIC test to Appendix B.5 and Appendix B.6.
### Separation in MMD Metric
Consider the setting described in Section 4.1, and denote by \(\mathcal{P}_{\mathbb{S}}\) the class of distributions defined on \(\mathbb{S}\). Our first goal is to determine the minimum separation for \(\phi_{\text{dpMMD}}\) based on the MMD metric with kernel \(k\). To this end, for \(\rho>0\), we define a class of paired distributions \((P,Q)\) such that
\[\mathcal{P}_{\text{MMD}_{k}}(\rho)\coloneqq\big{\{}(P,Q)\in\mathcal{P}_{ \mathbb{S}}\times\mathcal{P}_{\mathbb{S}}:\text{MMD}_{k}(P,Q)\geq\rho\big{\}}.\]
For a given target type II error \(\beta\in(0,1-\alpha)\), the minimum separation for the dpMMD test against \(\mathcal{P}_{\text{MMD}_{k}}(\rho)\) is given by
\[\rho_{\phi_{\text{dpMMD}}}(\alpha,\beta,\varepsilon,\delta,m,n)\coloneqq\inf \biggl{\{}\rho>0:\sup_{(P,Q)\in\mathcal{P}_{\text{MMD}_{k}}(\rho)}\mathbb{E}_ {P,Q}[1-\phi_{\text{dpMMD}}]\leq\beta\biggr{\}}. \tag{13}\]
In simpler terms, the minimum separation \(\rho_{\phi_{\text{dpMMD}}}\) refers to the smallest MMD metric between \(P\) and \(Q\) that can be correctly detected by the dpMMD test with probability at least \(1-\beta\). The next theorem provides an upper bound for \(\rho_{\phi_{\text{dpMMD}}}\) as a function of the parameters \(\alpha\), \(\beta\), \(\varepsilon\), \(\delta\), \(m\), and \(n\). The proof can be found in Appendix E.9.
**Theorem 7** (Minimum Separation of dpMMD over \(\mathcal{P}_{\text{MMD}_{k}}\)).: _Assume that the kernel \(k\) is bounded as \(0\leq k(x,y)\leq K\) for all \(x,y\in\mathbb{S}\), and \(n\leq m\leq\tau n\) for some fixed constant \(\tau\geq 1\). Then for all values of \(\alpha\in(0,1)\), \(\beta\in(0,1-\alpha)\), \(\varepsilon>0\), \(\delta\in[0,1)\) and \(B\geq 16\alpha^{-2}\log\bigl{(}8/\beta\bigr{)}\), the minimum separation for \(\phi_{\text{dpMMD}}\) satisfies_
\[\rho_{\phi_{\text{dpMMD}}}\leq C_{K,\tau}\max\Biggl{\{}\sqrt{\frac{\max\bigl{\{} \log(1/\alpha),\,\log(1/\beta)\bigr{\}}}{n}},\,\frac{\max\bigl{\{}\log(1/ \alpha),\,\log(1/\beta)\bigr{\}}}{n\xi_{\varepsilon,\delta}}\Biggr{\}},\]
_where \(C_{K,\tau}\) is a positive constant that depends only on \(K\) and \(\tau\), and \(\xi_{\varepsilon,\delta}\) can be recalled in (1)._
We present several comments on the upper bound result.
* Theorem 7 states that the separation rate for the dpMMD test becomes \(n^{-1/2}\) in low privacy regimes (_i.e._, \(\xi_{\varepsilon,\delta}\gtrsim n^{-1/2}\)), whereas it becomes \(n^{-1}\xi_{\varepsilon,\delta}^{-1}\) in high privacy regimes (_i.e._, \(\xi_{\varepsilon,\delta}\lesssim n^{-1/2}\)). Notably, this upper bound result allows the parameters \(\alpha,\beta,\xi_{\varepsilon,\delta}\) to vary freely within the constraints in the theorem statement. We also mention that the minimum separation is meaningful only when \(n^{-1}\xi_{\varepsilon,\delta}^{-1}\to 0\), which coincides with the condition for consistency established in Theorem 5.
* We point out that \(\phi_{\mathrm{dpMMD}}\) is equivalent to the non-DP MMD test (Gretton et al., 2012) when \(\varepsilon\to\infty\) or \(\delta\to 1\) (_i.e._, \(\xi_{\varepsilon,\delta}\to\infty\)). Thus, our result also yields the minimum separation rate for the non-DP MMD test as a byproduct.
* One can prove Theorem 7 by verifying the general conditions in Theorem 4. However this strategy results in polynomial factors of \(\alpha\) and \(\beta\) instead of logarithmic ones. To obtain logarithmic dependence in both \(\alpha\) and \(\beta\), we modify the proof of Theorem 4 and utilize exponential concentration inequalities for the empirical MMD statistic (Lemma 13) and permuted MMD statistic (Lemma 10). As we will see in Theorem 8, these logarithmic factors cannot be improved further when \(\alpha\asymp\beta\).
* The constraint on the sample size ratio can be completely removed by using the Markov inequality for a permuted MMD statistic (Lemma 11). Nevertheless, this alternative approach yields a polynomial factor of \(\alpha\) instead of a logarithmic one. See Remark 3. It is currently unknown whether the constraint on \(m\) and \(n\) can be eliminated, while preserving the logarithmic factors.
We next investigate minimax optimality of \(\phi_{\mathrm{dpMMD}}\) under certain regimes in \(\mathbb{S}=\mathbb{R}^{d}\). To set the stage, let \(\phi:\mathcal{Y}_{n}\cup\mathcal{Z}_{m}\mapsto\{0,1\}\) be a test function, and denote the set of \((\varepsilon,\delta)\)-DP level \(\alpha\) tests as
\[\Phi_{\alpha,\varepsilon,\delta}\coloneqq\Big{\{}\phi:\sup_{P\in\mathcal{P}_ {\mathbb{S}}}\mathbb{E}_{P,P}[\phi]\leq\alpha\text{ and }\phi\text{ is }(\varepsilon, \delta)\text{-DP}\Big{\}}.\]
From a theoretical point of view, it is of interest to figure out an information-theoretic lower bound on the minimum separation for any test. This is often called the minimax separation or critical radius in the literature (Ingster, 1994; Ingster et al., 2003; Baraud, 2002). Formally, the minimax separation in terms of the MMD metric is defined as
\[\rho_{\mathrm{MMD}}^{\star}(\alpha,\beta,\varepsilon,\delta,m,n)\coloneqq\inf \Big{\{}\rho>0:\inf_{\phi\in\Phi_{\alpha,\varepsilon,\delta}}\sup_{(P,Q)\in \mathcal{P}_{\mathrm{MMD}_{k}}(\rho)}\mathbb{E}_{P,Q}[1-\phi]\leq\beta\Big{\}}.\]
In simpler terms, the minimax separation \(\rho_{\mathrm{MMD}}^{\star}\) refers to the largest MMD metric between \(P\) and \(Q\) that cannot be correctly detected with probability at least \(1-\beta\) by any level \(\alpha\) test. We say that a test \(\phi\) is minimax rate optimal in terms of the MMD metric if the minimum separation of \(\phi\) is equivalent to \(\rho_{\mathrm{MMD}}^{\star}\) up to constant factors. The next theorem, proved in Appendix E.10, establishes a lower bound for the minimax separation under the DP constraint, from which we demonstrate minimax optimality of the dpMMD test.
**Theorem 8** (Minimax Separation over \(\mathcal{P}_{\mathrm{MMD}_{k}}\)).: _Let \(\alpha\) and \(\beta\) be real numbers in the interval \((0,1/5)\), \(\varepsilon>0\) and \(\delta\in[0,1)\). Assume that the kernel function \(k\) is translation invariant on \(\mathbb{R}^{d}\). In particular there exists some function \(\kappa\) such that \(k(x,y)=\kappa(x-y)\) for all \(x,y\in\mathbb{R}^{d}\). Moreover, the kernel is non-constant in the sense that there exists a positive constant \(\eta\) such that \(\kappa(0)-\kappa(z)\geq\eta\) for some \(z\in\mathbb{R}^{d}\). Then the minimax separation over \(\mathcal{P}_{\mathrm{MMD}_{k}}\) is lower bounded as_
\[\rho^{\star}_{\mathrm{MMD}}\geq C_{\eta}\max\Biggl{\{}\min\Biggl{(}\sqrt{ \frac{\log(1/(\alpha+\beta))}{n}},\,1\Biggr{)},\,\min\Biggl{(}\frac{\log(1/ \beta)}{n\xi_{\varepsilon,\delta}},\,1\Biggr{)}\Biggr{\}},\]
_where \(C_{\eta}\) is a positive constant that only depends on \(\eta\), and \(\xi_{\varepsilon,\delta}\) can be recalled in (1)._
Several remarks are in order.
* First of all, the restriction on \(\alpha\) and \(\beta\) is mild as we are typically interested in small values of \(\alpha\) and \(\beta\). In fact, the same result holds for any \(\alpha,\beta\) such that \(\alpha+\beta\leq C\) where \(C\) is some fixed constant strictly smaller than \(1/2\). We also note that for a bounded kernel ranging from \(0\) to \(K\), the MMD as well as the corresponding minimax separation cannot exceed \(\sqrt{2K}\). Our lower bound result captures this restriction through the minimum operator.
* Second, our proof builds on the \((\varepsilon,\delta)\)-DP Le Cam's method outlined in Acharya et al. (2018, 2021). This technique generalizes classical Le Cam's two-point method (Le Cam, 1973) to private settings via coupling argument. As pointed out by Acharya et al. (2018, Lemma 5), one can obtain a lower bound result for \((\varepsilon,\delta)\)-DP by replacing \(\varepsilon\) with \(\varepsilon+\delta\) in the lower bound result for \(\varepsilon\)-DP. However, this method fails to yield a tight lower bound in terms of \(\beta\). Our approach differs from Acharya et al. (2018, Lemma 5) and returns a sharp lower bound for all parameters of interest, namely \(\beta,n,\varepsilon,\delta\).
* Theorem 8 holds for translation invariant kernels. Indeed, many kernels commonly used in practice are translation invariant, including the Gaussian, Laplacian, inverse multiquadrics and Matérn kernels. Moreover, as discussed in Tolstikhin et al. (2017), if we further assume that the kernel \(k\) is characteristic, it guarantees the existence of \(z\in\mathbb{R}^{d}\) and \(\eta>0\) that satisfy the conditions of Theorem 8. For instance, for the Gaussian kernel \(k(x,y)=e^{-\sigma\|x-y\|_{2}^{2}}\), one can take \(\eta=\frac{\sigma}{2}\|z\|_{2}^{2}\) for any non-zero \(z\) such that \(\|z\|_{2}^{2}\leq\sigma^{-1}\).
* The last point worth highlighting is that our lower bound permits varying values of \(\alpha\) and \(\beta\), which is in contrast to most existing research on minimax testing. A notable exception is the recent work by Diakonikolas et al. (2021), which examines the sample complexity of testing for discrete distributions with high probability.
We now compare the results of Theorem 7 and Theorem 8, and observe that the lower bound for \(\rho^{\star}_{\mathrm{MMD}}\) matches the upper bound for \(\rho_{\phi_{\mathrm{dpMMD}}}\) in the regime where \(m\asymp n\) and \(\alpha\asymp\beta\) for \(\mathbb{S}=\mathbb{R}^{d}\). This shows that the proposed dpMMD test is minimax rate optimal against the class of alternatives determined by the MMD metric in the considered regime. It is noteworthy that there is no restriction on the privacy parameters \(\varepsilon>0\) and \(\delta\in[0,1)\), and hence the dpMMD test achieves optimal separation rates in all privacy regimes.
### Separation in \(L_{2}\) Metric
We next investigate the minimum separation of the dpMMD test in terms of the \(L_{2}\) metric. Let \(p\) and \(q\) denote the Lebesgue density functions of \(P\) and \(Q\), respectively, defined on \(\mathbb{R}^{d}\). As in Schrab et al. (2023) and Li and Yuan (2019), we restrict our attention to a smooth class of density functions defined over a Sobolev ball. In particular, for a smoothness parameter \(s>0\) and a radius \(R>0\), the Sobolev ball \(\mathcal{S}^{s}_{d}(R)\) is given as
\[\mathcal{S}^{s}_{d}(R):=\Bigg{\{}f\in L_{1}(\mathbb{R}^{d})\cap L _{2}(\mathbb{R}^{d}):\int_{\mathbb{R}^{d}}\|w\|_{2}^{2s}|\widehat{f}(w)|^{2} \mathrm{d}w\leq(2\pi)^{d}R^{2}\Bigg{\}}, \tag{14}\]
where \(\widehat{f}\) is the Fourier transform of \(f\), _i.e._, \(\widehat{f}(w)=\int_{\mathbb{R}^{d}}f(x)e^{-ix^{\top}w}\mathrm{d}x\) for \(w\in\mathbb{R}^{d}\). The condition \(f\in L_{1}(\mathbb{R}^{d})\cap L_{2}(\mathbb{R}^{d})\) simply requires the function \(f\colon\mathbb{R}^{d}\to\mathbb{R}\) to be both integrable and square-integrable with respect to the Lebesgue measure. For \(\rho>0\), let \(\mathcal{P}_{L_{2}}(\rho)\) be the collection of paired distributions \((P,Q)\) on \(\mathbb{R}^{d}\times\mathbb{R}^{d}\) where \(P\) and \(Q\) are equipped with the Lebesgue density functions \(p\) and \(q\), respectively, such that \(\|p-q\|_{L_{2}}\geq\rho\). The target class of distributions is a subset of \(\mathcal{P}_{L_{2}}(\rho)\) defined as
\[\mathcal{P}^{s}_{L_{2}}(\rho)\coloneqq\big{\{}(P,Q)\in\mathcal{P} _{L_{2}}(\rho):p-q\in\mathcal{S}^{s}_{d}(R),\ \max(\|p\|_{L_{\infty}},\|q\|_{L_{\infty}})\leq M\big{\}}.\]
The aim of this subsection is to characterize the minimum value of \(\rho\) for which the dpMMD test has significant power uniformly over \(\mathcal{P}^{s}_{L_{2}}(\rho)\). For simplicity, we focus on the dpMMD test with a Gaussian kernel. This choice is motivated by the observation that the population MMD with the Gaussian kernel approximates \(\|p-q\|_{L_{2}}^{2}\) for small bandwidth values (Li and Yuan, 2019), and a similar result can be derived using other kernels in view of Schrab et al. (2023). For \(x=(x_{1},\ldots,x_{d})^{\top}\in\mathbb{R}^{d}\) and \(y=(y_{1},\ldots,y_{d})^{\top}\in\mathbb{R}^{d}\), the Gaussian kernel with bandwidth \(\boldsymbol{\lambda}=(\lambda_{1},\ldots,\lambda_{d})^{\top}\in(0,\infty)^{d}\) is given as
\[k_{\boldsymbol{\lambda}}(x,y)=\prod_{i=1}^{d}\frac{1}{\sqrt{2 \pi}\lambda_{i}}e^{-\frac{(x_{i}-y_{i})^{2}}{2\lambda_{i}^{2}}}.\]
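For reference, this product-form kernel is straightforward to evaluate; the helper below is a direct transcription of the display and makes no further assumptions.

```python
import numpy as np

def gaussian_kernel(x, y, lam):
    # k_lambda(x, y) with one bandwidth lam[i] per coordinate.
    x, y, lam = np.asarray(x), np.asarray(y), np.asarray(lam)
    z = (x - y) / lam
    return float(np.prod(np.exp(-0.5 * z**2) / (np.sqrt(2 * np.pi) * lam)))
```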
Let us denote the minimum separation of the dpMMD test with the Gaussian kernel against \(L_{2}\) alternatives as
\[\rho_{\phi_{\mathrm{dpMMD}},L_{2}}(\alpha,\beta,\varepsilon, \delta,m,n,d,s,R,M)\coloneqq\inf\Bigg{\{}\rho>0:\sup_{(P,Q)\in\mathcal{P}^{s} _{L_{2}}(\rho)}\mathbb{E}_{P,Q}[1-\phi_{\mathrm{dpMMD}}]\leq\beta\Bigg{\}}. \tag{15}\]
The next theorem, proved in Appendix E.11, provides an upper bound for \(\rho_{\phi_{\mathrm{dpMMD}},L_{2}}\) in terms of a set of parameters, including the bandwidth \(\boldsymbol{\lambda}\) and sample sizes.
**Theorem 9** (Minimum Separation of dpMMD over \(\mathcal{P}^{s}_{L_{2}}\)).: _Assume that \(n\leq m\leq\tau n\) for some fixed constant \(\tau\geq 1\), and that \(\alpha\in(0,e^{-1})\), \(\beta\in(0,1-\alpha)\), \(\varepsilon>0\), \(\delta\in[0,1)\), \(B\geq 16\alpha^{-2}\log(8/\beta)\) and \(\prod_{i=1}^{d}\lambda_{i}\leq 1\). The minimum separation of the dpMMD test with the Gaussian kernel over \(\mathcal{P}^{s}_{L_{2}}\) is upper bounded as_

\[\rho_{\phi_{\mathrm{dpMMD}},L_{2}}^{2}\leq C_{\tau,\beta,s,R,M,d}\left\{\sum_{i=1}^{d}\lambda_{i}^{2s}+\frac{\log(1/\alpha)}{n\sqrt{\lambda_{1}\cdots\lambda_{d}}}+\frac{\log(1/\alpha)}{n^{3/2}\lambda_{1}\cdots\lambda_{d}\xi_{\varepsilon,\delta}}+\frac{\log^{2}(1/\alpha)}{n^{2}\lambda_{1}\cdots\lambda_{d}\xi_{\varepsilon,\delta}^{2}}+\frac{\log^{3/2}(1/\alpha)}{n^{3/2}(\lambda_{1}\cdots\lambda_{d})^{3/4}\xi_{\varepsilon,\delta}}\right\},\]
_where \(C_{\tau,\beta,s,R,M,d}\) is a positive constant, depending only on \(\tau,\beta,s,R,M,d\), and \(\xi_{\varepsilon,\delta}\) is as in (1)._
Several points need to be highlighted. To facilitate our discussion, assume that \(\alpha\) is a fixed number and write
\[\text{(I)}=\frac{\log(1/\alpha)}{n\sqrt{\lambda_{1}\cdots\lambda_{d}}},\ \text{(II)}=\frac{\log(1/\alpha)}{n^{3/2}\lambda_{1}\cdots\lambda_{d}\xi_{ \varepsilon,\delta}},\ \text{(III)}=\frac{\log^{2}(1/\alpha)}{n^{2}\lambda_{1}\cdots\lambda_{d}\xi_{ \varepsilon,\delta}^{2}},\ \text{(IV)}=\frac{\log^{3/2}(1/\alpha)}{n^{3/2}(\lambda_{1} \cdots\lambda_{d})^{3/4}\xi_{\varepsilon,\delta}}.\]
When \(\alpha\) is fixed, we can absorb the term (IV) into the term (II) as \(\lambda_{1}\cdots\lambda_{d}\leq 1\), and simplify the interpretation of the result as follows.
* In the low privacy regime where the first term (I) dominates the others, our result recovers Schrab et al. (2023, Theorem 6), which studies the minimum separation of the non-private MMD test against \(\mathcal{P}_{L_{2}}^{s}\). In this low privacy regime, by setting bandwidths \(\lambda_{i}=n^{-2/(4s+d)}\) for \(i\in[d]\), we can achieve the optimal separation rate over the Sobolev ball, that is \(n^{-2s/(4s+d)}\).
* In the mid privacy regime where the term (II) becomes a leading term, equating (II) with \(\sum_{i=1}^{d}\lambda_{i}^{2s}\) yields the optimal choice of bandwidths \(\lambda_{i}=n^{-3/(4s+2d)}\xi_{\varepsilon,\delta}^{-1/(2s+d)}\) for \(i\in[d]\). The resulting separation rate is \(n^{-3s/(4s+2d)}\xi_{\varepsilon,\delta}^{-s/(2s+d)}\). Similarly, in the high privacy regime where the term (III) dominates the others, equating (III) with \(\sum_{i=1}^{d}\lambda_{i}^{2s}\) yields the optimal choice of bandwidths \(\lambda_{i}=(n\xi_{\varepsilon,\delta})^{-2/(2s+d)}\) for \(i\in[d]\). This returns the separation rate \((n\xi_{\varepsilon,\delta})^{-2s/(2s+d)}\). By tracking conditions for each term dominating the others, the minimum separation rate that one can achieve using different bandwidths is summarized as \[\rho_{\phi_{\mathrm{dpMMD}},L_{2}}\lesssim\begin{cases}n^{-\frac{2s}{4s+d}},&\text{if }n^{-\frac{2s-d/2}{4s+d}}\lesssim\xi_{\varepsilon,\delta}\ \text{(low privacy)},\\ (n^{\frac{3}{2}}\xi_{\varepsilon,\delta})^{-\frac{s}{2s+d}},&\text{if }n^{-\frac{1}{2}}\lesssim\xi_{\varepsilon,\delta}\lesssim n^{-\frac{2s-d/2}{4s+d}}\ \text{(mid privacy)},\\ (n\xi_{\varepsilon,\delta})^{-\frac{2s}{2s+d}},&\text{if }\xi_{\varepsilon,\delta}\lesssim n^{-\frac{1}{2}}\ \text{(high privacy)}.\end{cases}\tag{16}\] In particular, when \(\xi_{\varepsilon,\delta}\asymp n^{-1/2}\), the separation rate becomes \(n^{-s/(2s+d)}\), which is known to be the minimax optimal rate of density estimation under the \(L_{2}\) loss. We refer to Appendix B.8 for a detailed discussion of the separation rate. These regime-dependent bandwidth choices are summarized in a short code sketch after this list.
* It is important to note that all these separation rates are achieved by using different bandwidths, which requires knowledge of the smoothness parameter \(s\). Building on the idea of Ingster (2000); Schrab et al. (2023); Biggs et al. (2023), one can develop an aggregated dpMMD test that is adaptive to \(s\) without losing much power. The main idea would be to consider a wide range of private MMD statistics with different bandwidths and aggregate them properly. A detailed analysis of this approach is left for future research.
* While the dpMMD test can achieve the minimax rate in the low privacy regime, it remains unknown whether the derived separation rates are optimal in the mid/high privacy regimes. We believe that \((\varepsilon,\delta)\)-DP Le Cam's method (Acharya et al., 2021, Theorem 1) plays an important role in constructing a lower bound for the \(L_{2}\) separation as well. The key challenge lies in finding a coupling between continuous distributions, yielding a small expected Hamming distance. We leave this important direction for future work.
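As referenced in the second bullet above, the bandwidth choices behind (16) can be collected in one small helper. The sketch below drops all constants and simply encodes the exponents derived above, with the regime boundaries taken from (16); it is an illustration, not part of the released implementation.

```python
def gaussian_bandwidth_and_rate(n, xi, s, d):
    # Regime-dependent common bandwidth lambda_i and L2 separation rate
    # implied by (16); all constant factors are ignored.
    if xi >= n ** (-(2 * s - d / 2) / (4 * s + d)):        # low privacy
        lam = n ** (-2 / (4 * s + d))
        rate = n ** (-2 * s / (4 * s + d))
    elif xi >= n ** (-1 / 2):                              # mid privacy
        lam = n ** (-3 / (4 * s + 2 * d)) * xi ** (-1 / (2 * s + d))
        rate = (n ** 1.5 * xi) ** (-s / (2 * s + d))
    else:                                                  # high privacy
        lam = (n * xi) ** (-2 / (2 * s + d))
        rate = (n * xi) ** (-2 * s / (2 * s + d))
    return lam, rate
```

For instance, with \(s=d=1\) and \(\xi_{\varepsilon,\delta}=n^{-1/2}\), both the mid- and high-privacy branches return the rate \(n^{-s/(2s+d)}=n^{-1/3}\), matching the boundary case noted above.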
In contrast to the prior work (Li and Yuan, 2019; Schrab et al., 2023) that utilize a U-statistic for minimax two-sample testing, our approach is based on a plug-in estimator, also known as a V-statistic, of the MMD. While plug-in estimators can often exhibit suboptimal performance in estimation problems due to their inherent bias, they can still achieve optimal results in testing problems. This can be explained by the interplay between the test statistic and the critical value in a testing procedure, where the bias terms in these components may offset each other. Theorem 9 demonstrates this phenomenon by showing that the test based on the plug-in estimator of the MMD attains the minimax separation rate over \(\mathcal{P}_{L_{2}}^{s}\) in the low privacy regime. Perhaps more interestingly, the plug-in estimator can outperform the U-statistic by having lower sensitivity and thus leading to greater power in high privacy regimes. This aspect of plug-in estimators has not been noticed in the literature, and we provide a more detailed discussion in the next subsection.
### Private Test based on the MMD U-statistic
It has been shown that kernel tests based on U-statistics often produce optimal separation rates in non-DP settings (Li and Yuan, 2019; Schrab et al., 2023; Albert et al., 2022; Kim et al., 2022a). Therefore, one can naturally expect that their private extensions perform similarly well across different privacy regimes. In this section, we prove that this is not necessarily the case. In particular, we illustrate that the private MMD permutation test based on a U-statistic is provably outperformed by our approach based on the plug-in MMD estimate in high privacy regimes. A similar result for HSIC can be found in Appendix B.7.
We begin with the explicit form of the MMD U-statistic, which is an unbiased estimator of \(\text{MMD}_{k}^{2}\), given as
\[U_{\text{MMD}}(\mathcal{X}_{n+m})\coloneqq\frac{1}{n(n-1)}\sum_{1\leq i\neq j \leq n}k(Y_{i},Y_{j})+\frac{1}{m(m-1)}\sum_{1\leq i\neq j\leq m}k(Z_{i},Z_{j}) -\frac{2}{nm}\sum_{i=1}^{n}\sum_{j=1}^{m}k(Y_{i},Z_{j}).\]
The following lemma calculates the global sensitivity of \(U_{\text{MMD}}\), which is proved in Appendix E.12.
**Lemma 7** (Global Sensitivity of \(U_{\text{MMD}}\)).: _Assume that the kernel \(k\) is bounded as \(0\leq k(x,y)\leq K\) for all \(x,y\in\mathbb{S}\). In addition, assume that \(k\) is translation invariant, and has non-empty level sets on \(\mathbb{S}\). Then there exists a positive sequence \(c_{m,n}\in[4,8]\) such that for all \(2\leq n\leq m\),_
\[\sup_{\boldsymbol{\pi}\in\boldsymbol{\Pi}_{n+m}}\sup_{\begin{subarray}{c} \mathcal{X}_{n+m},\tilde{\mathcal{X}}_{n+m}:\\ d_{\text{ham}}(\mathcal{X}_{n+m},\tilde{\mathcal{X}}_{n+m})\leq 1\end{subarray}} \big{|}U_{\text{MMD}}(\mathcal{X}_{n+m}^{\boldsymbol{\pi}})-U_{\text{MMD}}( \tilde{\mathcal{X}}_{n+m}^{\boldsymbol{\pi}})\big{|}=\frac{c_{m,n}K}{n}.\]
The lemma above indicates that the global sensitivity of the U-statistic has the same dependence on \(n\) as that of the plug-in MMD in Lemma 5. However, it is important to mention that their target parameters are different. The U-statistic is an estimator of \(\mathrm{MMD}_{k}^{2}\), whereas the plug-in estimator given in (9) estimates \(\mathrm{MMD}_{k}\) without squaring. This key difference can lead to a significant gap in their power performance in privacy regimes as explored below.
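The contrast drawn above is easy to see in code. Given Gram matrices \(K_{YY}\), \(K_{ZZ}\) and \(K_{YZ}\), the first function below is a direct transcription of the U-statistic display, while the second writes the plug-in statistic as the square root of the corresponding V-type quantity, a standard way to estimate \(\mathrm{MMD}_{k}\) itself; the exact form of (9) appears earlier in the paper, so this is an assumed rendering rather than a verbatim one.

```python
import numpy as np

def mmd_u_stat(KYY, KZZ, KYZ):
    # Unbiased estimator of MMD^2 (diagonal terms excluded), as displayed above.
    n, m = KYY.shape[0], KZZ.shape[0]
    yy = (KYY.sum() - np.trace(KYY)) / (n * (n - 1))
    zz = (KZZ.sum() - np.trace(KZZ)) / (m * (m - 1))
    return yy + zz - 2.0 * KYZ.sum() / (n * m)

def mmd_plug_in(KYY, KZZ, KYZ):
    # Plug-in estimator of MMD (not squared): norm of the difference of
    # empirical kernel mean embeddings (assumed to match Eq. (9)).
    n, m = KYY.shape[0], KZZ.shape[0]
    v = KYY.sum() / n**2 + KZZ.sum() / m**2 - 2.0 * KYZ.sum() / (n * m)
    return np.sqrt(max(v, 0.0))
```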
Given the sensitivity of \(U_{\mathrm{MMD}}\) in Lemma 7, we consider the private permutation test in Algorithm 1 using the test statistic \(U_{\mathrm{MMD}}\) and the global sensitivity \(\Delta_{T}=c_{m,n}Kn^{-1}\). Let us denote the resulting private test by \(\phi_{\mathrm{dpMMD}}^{u}\). We analyze the minimum separations of \(\phi_{\mathrm{dpMMD}}^{u}\) over \(\mathcal{P}_{\mathrm{MMD}_{k}}\) and \(\mathcal{P}_{L_{2}}^{s}\) in Theorem 10 and Theorem 11, respectively, and compare them with those of \(\phi_{\mathrm{dpMMD}}\) based on the plug-in estimator. Starting with the MMD alternative, the following theorem demonstrates that \(\phi_{\mathrm{dpMMD}}^{u}\) fails to achieve the minimax separation rate over \(\mathcal{P}_{\mathrm{MMD}_{k}}\).
**Theorem 10** (Suboptimality of \(\phi_{\mathrm{dpMMD}}^{u}\) against MMD Alternatives).: _Assume that the kernel \(k\) fulfills the conditions specified in Lemma 7. Moreover, assume that if \(P,Q\in\mathcal{P}_{\mathbb{S}}\), then \(wP+(1-w)Q\in\mathcal{P}_{\mathbb{S}}\) for all \(w\in[0,1]\), and there exist \(P_{0},Q_{0}\in\mathcal{P}_{\mathbb{S}}\) such that \(\mathrm{MMD}_{k}(P_{0},Q_{0})=\varrho_{0}\) for some fixed \(\varrho_{0}>0\). Let \(\alpha\in\big{(}(B+1)^{-1},1\big{)}\), \(\beta\in(0,1-\alpha)\) be fixed values. Consider the high privacy regime where \(\xi_{\varepsilon,\delta}\asymp n^{-1/2-r}\) with fixed \(r\in(0,1/2)\), for \(\xi_{\varepsilon,\delta}\) as in (1). Then the uniform power of \(\phi_{\mathrm{dpMMD}}^{u}\) is asymptotically at most \(\alpha\) over \(\mathcal{P}_{\mathrm{MMD}_{k}}(\rho)\) where_
\[\rho=\log(n)\times\max\Biggl{\{}\sqrt{\frac{\max\bigl{\{}\log(1/\alpha),\,\log( 1/\beta)\bigr{\}}}{n}},\,\frac{\max\bigl{\{}\log(1/\alpha),\,\log(1/\beta) \bigr{\}}}{n\xi_{\varepsilon,\delta}}\Biggr{\}}. \tag{17}\]
_In other words, it holds that_
\[\limsup_{n\to\infty}\inf_{(P,Q)\in\mathcal{P}_{\mathrm{MMD}_{k}}(\rho)}\mathbb{ E}_{P,Q}[\phi_{\mathrm{dpMMD}}^{u}]\leq\alpha.\]
Theorem 10, proven in Appendix E.13, clearly shows that \(\phi_{\mathrm{dpMMD}}^{u}\) is not minimax optimal in the MMD metric as \(\rho/\rho_{\mathrm{MMD}}^{\star}\to\infty\) as \(n\to\infty\). We also mention that the factor \(\log(n)\) in \(\rho\) is chosen for convenience, and it can be replaced by any other positive sequence that increases slower than \(n^{r}\) for \(r\in(0,1/2)\). The suboptimal performance of \(\phi_{\mathrm{dpMMD}}^{u}\) primarily stems from the relatively high noise level associated with the Laplace mechanism. Intuitively, we expect that \(\phi_{\mathrm{dpMMD}}^{u}\) is powerful in the private regime when the target parameter \(\mathrm{MMD}_{k}^{2}(P,Q)\) is larger than the Laplace noise level \((n\xi_{\varepsilon,\delta})^{-1}\), equivalently, \(\mathrm{MMD}_{k}(P,Q)\) is larger than \((n\xi_{\varepsilon,\delta})^{-1/2}\). Otherwise, the test statistic will be dominated by the Laplace noise. Importantly, the minimax separation under high privacy regimes in Theorem 8 is associated with \(\min\bigl{\{}(n\xi_{\varepsilon,\delta})^{-1},1\bigr{\}}\), which is smaller than \((n\xi_{\varepsilon,\delta})^{-1/2}\). This briefly explains the suboptimality of \(\phi_{\mathrm{dpMMD}}^{u}\) against the MMD alternative. Nevertheless, our analysis is limited to the U-statistic with the Laplace mechanism, and it is unknown whether the U-statistic in conjunction with other DP mechanisms can lead to optimality.
Turning to the \(L_{2}\) alternative, let us denote the minimum separation of \(\phi_{\mathrm{dpMMD}}^{u}\) with the Gaussian kernel over \(\mathcal{P}_{L_{2}}^{s}\) as \(\rho_{\phi_{\mathrm{dpMMD}}^{u},L_{2}}\), which is similarly defined as (15). Our next concern is characterizing \(\rho_{\phi_{\mathrm{dpMMD}}^{u},L_{2}}\) and comparing it with the minimum separation of the dpMMD test established in Theorem 9.
**Theorem 11** (Minimum Separation of \(\phi^{u}_{\mathrm{dpMMD}}\) over \(\mathcal{P}^{s}_{L_{2}}\)).: _Assume that \(n\leq m\leq\tau n\) for some fixed constant \(\tau\geq 1\), and that \(\alpha\in(0,e^{-1})\), \(\beta\in(0,1)\), \(\varepsilon>0\), \(\delta\in[0,1)\), \(B\geq 16\alpha^{-2}\log(8/\beta)\) and \(\prod_{i=1}^{d}\lambda_{i}\leq 1\). The minimum separation of \(\phi^{u}_{\mathrm{dpMMD}}\) with the Gaussian kernel over \(\mathcal{P}^{s}_{L_{2}}\) is upper bounded as_
\[\rho^{2}_{\phi^{u}_{\mathrm{dpMMD}},L_{2}}\leq C_{\tau,\beta,s,R,M,d}\Bigg{\{} \sum_{i=1}^{d}\lambda_{i}^{2s}+\frac{\log(1/\alpha)}{n\sqrt{\lambda_{1}\cdots \lambda_{d}}}+\frac{\log(1/\alpha)}{n\lambda_{1}\cdots\lambda_{d}\xi_{ \varepsilon,\delta}}\Bigg{\}},\]
_where \(C_{\tau,\beta,s,R,M,d}\) is a positive constant, depending only on \(\tau,\beta,s,R,M,d\), and \(\xi_{\varepsilon,\delta}\) is as in (1)._
The proof of Theorem 11 is given in Appendix E.14. To simplify our discussion, assume that \(\alpha\) is a fixed constant. In this case, by comparing Theorem 11 with Theorem 9, the upper bound for \(\rho^{2}_{\phi^{u}_{\mathrm{dpMMD}},L_{2}}\) is smaller than that for \(\rho^{2}_{\phi_{\mathrm{dpMMD}},L_{2}}\), up to a constant, only when \(n\xi_{\varepsilon,\delta}\lesssim 1\). Since \(\prod_{i=1}^{d}\lambda_{i}\leq 1\) and \(\alpha\) is fixed, the condition \(n\xi_{\varepsilon,\delta}\lesssim 1\) essentially means that \(\|p-q\|_{L_{2}}\) needs to be sufficiently larger than a specific constant for significant power. However, this condition may be infeasible as we assume that \(\|p\|_{L_{\infty}}\) and \(\|q\|_{L_{\infty}}\) are bounded by \(M\). In fact, our earlier result in Theorem 5 suggests that the test is not even consistent in a pointwise sense when \(n\xi_{\varepsilon,\delta}\lesssim 1\). Therefore, except for this boundary case, it is more beneficial to use \(\phi_{\mathrm{dpMMD}}\) than \(\phi^{u}_{\mathrm{dpMMD}}\) to achieve tighter separation rates over \(\mathcal{P}^{s}_{L_{2}}\) in high privacy regimes.
As we mentioned before, it remains an open question whether there exist alternative privacy mechanisms that could potentially yield an optimal test based on \(U_{\mathrm{MMD}}\) in high privacy regimes. While we still advocate using the plug-in MMD estimate over \(U_{\mathrm{MMD}}\) due to its smaller sensitivity, it would be interesting to explore this question in future work.
## 6 Simulations
In this section, we compare the empirical power of dpMMD against other private two-sample tests, including the naive dpMMD introduced in Equation (6) and the U-statistic dpMMD studied in Section 5.3. We also implement two generic methods for privatizing the permutation MMD test, which we refer to as TOT MMD (Kazan et al., 2023) and SARRM MMD (Pena and Barrientos, 2022). Finally, we also compare against the differentially private kernel two-sample test, TCS-ME, proposed by Raj et al. (2020). A brief overview of these alternative methods is provided in Section 6.1, with detailed information available in Appendix D.
We empirically study the power attained by dpMMD on synthetic data sampled from perturbed uniform distributions in Section 6.2, and on real-world high-dimensional CelebA image data in Section 6.3. The latter scenario involves sensitive information on human faces, justifying the incorporation of differential privacy in the analysis. In both simulation settings, we observe consistent patterns in the power behavior, and a detailed discussion on the results is presented in Section 6.4. For all our experiments, we use a Gaussian kernel, \(B=2000\) permutations, and report power results averaged over 200 repetitions.
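Reported power is the Monte Carlo rejection frequency over these repetitions, i.e. along the lines of the snippet below, where `run_test` stands for any of the tests returning a rejection indicator (the names are illustrative and not taken from the released code).

```python
def empirical_power(run_test, n_reps=200):
    # Fraction of rejections over independent repetitions of the experiment.
    return sum(bool(run_test()) for _ in range(n_reps)) / n_reps
```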
Due to space constraints, we defer the dpHSIC simulations to Appendix C, along with test-level analysis and additional low-privacy experiments. The code to run our tests and to reproduce the experiments is available at [https://github.com/antoninschrab/dpkernel](https://github.com/antoninschrab/dpkernel).
### Alternative differentially private tests
We provide a brief introduction to the alternative differentially private tests, namely TCS-ME, TOT, and SARRM, with detailed implementation information available in Appendix D.
TOT (Kazan et al., 2023). TOT is constructed based on the subsample-and-aggregate idea outlined in Canonne et al. (2019). It is guaranteed to be differentially private and to correctly control the probability of type I error for any sample size, any number of partitioned subsets, and any sub-test significance level. However, for non-parametric testing, there is no principled way to choose the last two parameters and one has to rely on heuristics in practice. This heuristic aspect presents a notable disadvantage of TOT, being highly sensitive to the choice of these parameters.
SARRM (Pena and Barrientos, 2022). As another method based on the subsample-and-aggregate idea, SARRM also depends on the number of partitioned subsets and on the sub-test significance level. Pena and Barrientos (2022) propose a method to select these parameters, which can be implemented for MMD/HSIC tests. However, the differential privacy constraint and type I error control for SARRM are only guaranteed for sufficiently large sample sizes (determined by a minimum number of partitioned subsets to use), which means that SARRM simply cannot be run in some settings depending on the values of \(\varepsilon\), \(\alpha\), and \(n\).
TCS-ME (Raj et al., 2020). The TCS-ME test is a privatized version of the ME test (Jitkrittum et al., 2016) that utilizes kernel mean embeddings. This method builds on a Hotelling-type test statistic privatized by the Gaussian mechanism, and requires a careful choice of test locations. The resulting test is \((\varepsilon,\delta)\)-DP for \(\delta>0\) (run with \(\delta=10^{-5}\)). A major limitation of TCS-ME is its potential for significant miscalibration, particularly in high-dimensional settings. See our discussion on type I error control in Section 6.5.
### Perturbed Uniform Distributions
As recalled in Section 5.2, the minimax \(L_{2}\) separation rate over the Sobolev ball \(\mathcal{S}^{s}_{d}(R)\) is \(n^{-2s/(4s+d)}\) in the non-private regime. A lower bound for this minimax separation rate is derived by constructing two densities whose difference lies in the Sobolev ball with a small \(L_{2}\) norm. As explained by Schrab et al. (2023, Appendix D), the uniform and perturbed uniform densities meet the requirements in the lower bound construction, and we adopt this setting in our two-sample experiments. In more detail, we compare the uniform distribution with its perturbed counterpart, varying the perturbation amplitude.
Specifically, the considered uniform distribution on \([0,1]^{d}\) has density \(\mathds{1}(x\in[0,1]^{d})\) for \(x\in\mathbb{R}^{d}\), while the perturbed uniform density on \([0,1]^{d}\) with a perturbation amplitude \(a\in[0,1]\) is
\[\mathds{1}\big{(}x\in[0,1]^{d}\big{)}+a\,\prod_{i=1}^{d}P(x_{i}),\quad\text{ for }x=(x_{1},\ldots,x_{d})^{\top}\in\mathbb{R}^{d}, \tag{18}\]
where the one-dimensional perturbation is defined as
\[P(x_{i})\coloneqq\exp\biggl{(}1-\frac{1}{1-(4x_{i}-1)^{2}}\biggr{)}\mathds{1} \bigl{(}x_{i}\in(0,1/2)\bigr{)}-\exp\biggl{(}1-\frac{1}{1-(4x_{i}-3)^{2}}\biggr{)} \mathds{1}\bigl{(}x_{i}\in(1/2,1)\bigr{)}.\]
This definition matches that of Schrab et al. (2023, Equation 17) with only one (scaled) perturbation per dimension. The one-dimensional and two-dimensional perturbed densities with various perturbation amplitudes are visualized in Figure 1.
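For completeness, the perturbed density (18) can be transcribed directly; the helper names below are ours, and since the density is bounded by \(1+a\), sampling from it by rejection against the uniform proposal is straightforward.

```python
import numpy as np

def perturbation_1d(x):
    # Smooth bump on (0, 1/2) and matching dip on (1/2, 1); zero elsewhere.
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    lo, hi = (x > 0) & (x < 0.5), (x > 0.5) & (x < 1)
    out[lo] = np.exp(1 - 1 / (1 - (4 * x[lo] - 1) ** 2))
    out[hi] = -np.exp(1 - 1 / (1 - (4 * x[hi] - 3) ** 2))
    return out

def perturbed_uniform_density(x, a):
    # Density (18) at points x of shape (N, d), with amplitude a in [0, 1].
    x = np.atleast_2d(np.asarray(x, dtype=float))
    in_cube = np.all((x >= 0) & (x <= 1), axis=1).astype(float)
    return in_cube + a * np.prod(perturbation_1d(x), axis=1)
```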
Figure 1: Perturbed uniform \(d\)-dimensional densities on \([0,1]^{d}\) with varying perturbation amplitude \(a\).
Figure 2: Comparing uniform vs. perturbed uniform while varying the privacy level \(\varepsilon\). We set the sample sizes \(m=n=3000\) and dimension \(d=1\), and change the privacy level \(\varepsilon\) and perturbation amplitude \(a\) as follows: _(Left)_ Privacy level \(\varepsilon\) from \(1/n\) to \(10/\sqrt{n}\), perturbation amplitude \(a=0.2\). _(Middle)_ Privacy level \(\varepsilon\) from \(10/\sqrt{n}\) to \(1\), perturbation amplitude \(a=0.15\). _(Right)_ Privacy level \(\varepsilon\) from \(1\) to \(\sqrt{n}\), perturbation amplitude \(a=0.1\).
We run our perturbed uniform experiments under three different settings where we vary the privacy level \(\varepsilon\) (Figure 2), the sample sizes \(m=n\) (Figure 3), and the dimension \(d\) (Figure 4). Additionally, we provide experiments with 'strong-signal' alternatives in the low privacy regime in Figure 12 in Appendix C.2, as well as a level analysis in Figure 14 in Appendix C.3. We discuss all experimental results of Figures 2 to 4 in Section 6.4.
Figure 3: Comparing uniform vs. perturbed uniform while varying the sample sizes \(m=n\). We set the dimension \(d=1\) and perturbation amplitude \(a=0.1\). We change the privacy level as follows: _(Left)_ Privacy level \(\varepsilon=10/\sqrt{n}\). _(Middle)_ Privacy level \(\varepsilon=1\). _(Right)_ Privacy level \(\varepsilon=\sqrt{n}/10\).

Figure 4: Comparing uniform vs. perturbed uniform while varying the dimension \(d\). We set the sample sizes \(m=n=3000\) and perturbation amplitude \(a=0.2\). We change the privacy level as follows: _(Left)_ Privacy level \(\varepsilon=10/\sqrt{n}\). _(Middle)_ Privacy level \(\varepsilon=1\). _(Right)_ Privacy level \(\varepsilon=\sqrt{n}/10\).
### CelebA
As some potential real-world applications of differentially private two-sample tests, we consider CelebA face images which in practice would be highly confidential, and hence the use of DP tests is thoroughly justified. The CelebA dataset (Liu et al., 2015) consists of \(202,599\) face images of \(10,177\) identities with a large diversity of face attributes, poses and backgrounds. For illustration purposes, we display a selection of CelebA images in Figure 5. It is worth highlighting that we run our tests on the original full-resolution images (\(3\times 178\times 218\)) without any modifications.
In our experiments, one sample consists of uniformly-sampled face images of women, while the other is 'corrupted' with corruption parameter \(c\in[0,1]\) in the following sense: we uniformly sample face images of women with probability \(1-c\), and of men with probability \(c\).
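The corruption scheme described above amounts to a Bernoulli mixture of the two image pools, e.g. as in the short sketch below (the function and variable names are illustrative, not from the released code).

```python
import numpy as np

def corrupted_sample(women, men, n, c, seed=0):
    # Each draw is a man's image with probability c and a woman's image
    # with probability 1 - c, sampled uniformly within the chosen pool.
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n):
        pool = men if rng.random() < c else women
        out.append(pool[rng.integers(len(pool))])
    return np.stack(out)
```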
We run several CelebA experiments while varying the privacy level \(\varepsilon\) (Figure 6), the sample sizes \(m=n\) (Figure 7), and the corruption \(c\) (Figure 8). As in Section 6.2, we consider the high/mid/low privacy regimes for each of these. We also verify that all tests are well-calibrated in Figure 14 in Appendix C.3. TCS-ME is excluded from our power analysis on the CelebA data as we empirically observed that this method is not well-calibrated for this dataset.
Our results consistently demonstrate that kernel tests using a simple Gaussian kernel are able to capture complex image distribution shifts (see Figure 5) even in this extremely high dimensional setting with \(d=3\times 178\times 218=116,412\). This result is surprising given that the Gaussian kernel simply compares the distance between pixels at the same location without using information about the image structure. We discuss this aspect more in Section 6.4.
Figure 5: Selected CelebA images in dimension \(3\times 178\times 218\).
### Analysis of Main Experimental Results
We now analyze the results of the perturbed uniform and CelebA experiments presented in Sections 6.2 and 6.3, respectively.
Figure 6: Comparing CelebA women/men images while varying the privacy level \(\varepsilon\). We set the sample sizes \(m=n=500\), and change the parameters as follows: _(Left)_ Privacy level \(\varepsilon\) from \(1/n\) to \(10/\sqrt{n}\), corruption \(c=1\). _(Middle)_ Privacy level \(\varepsilon\) from \(10/\sqrt{n}\) to \(1\), corruption \(c=0.6\). _(Right)_ Privacy level \(\varepsilon\) from \(1\) to \(\sqrt{n}\), corruption \(c=0.5\).
Figure 7: Comparing CelebA women/men images while varying the sample sizes \(m=n\). We set the other parameters as follows: _(Left)_ Privacy level \(\varepsilon=10/\sqrt{n}\), corruption \(c=0.7\). _(Middle)_ Privacy level \(\varepsilon=1\), corruption \(c=0.5\). _(Right)_ Privacy level \(\varepsilon=\sqrt{n}/10\), corruption \(c=0.4\).
Overview. First and foremost, we observe that dpMMD achieves significantly higher power than all other tests across all privacy regimes. This trend remains consistent when varying the other parameters {privacy, sample sizes, dimension, corruption}. In the high and mid privacy regimes illustrated in Figures 2 to 4, only dpMMD is able to detect the perturbation on the uniform distribution. In the high and mid privacy regimes presented in Figures 6 to 8, dpMMD clearly outperforms all other tests but TOT MMD and SARRM MMD eventually manage to detect the CelebA image distributional shift. In the low privacy regime, dpMMD also achieves the highest power, which is eventually matched by U-stat dpMMD as the privacy parameter increases.
Varying the sample size. When varying the sample sizes \(m=n\) in Figures 3 and 7 with fixed high/mid/low privacy level \(\varepsilon\in\{10/\sqrt{n},1,\sqrt{n}/10\}\), the power of all tests naturally increases, while the power of dpMMD increases faster than the others. In the low privacy regime of Figure 3, we see that the power of U-stat dpMMD approaches that of dpMMD as the sample size increases. This can be explained by the aforementioned reasoning along with the observation that the privacy level \(\varepsilon=\sqrt{n}/10\) also increases (_i.e._, lower privacy) in this setting.
Varying the problem difficulty. In Figures 4 and 8, the sample sizes and privacy levels are fixed while the difficulty of the problem is varied. For the perturbed uniform experiment, as the dimension of the problem increases, the perturbation becomes more difficult to detect and hence the power decreases for each test. We observe nonetheless that the power of dpMMD deteriorates at a much slower rate than that of the other tests. For the CelebA experiment, the test power increases with the corruption parameter of the image sampler, with dpMMD always achieving the highest power, followed by either TOT MMD or SARRM MMD.
The power of kernel methods. We end this discussion with some remarks regarding the CelebA experiments of Section 6.3. First, we emphasize again that the quadratic-time kernel tests run swiftly on the full-resolution CelebA image data, which has \(116,412\) pixels per image. Second, given the large diversity of faces, poses and backgrounds (see Figure 5), it is remarkable that dpMMD is able to detect such complicated differences in high-dimensional image distributions while using an off-the-shelf Gaussian kernel which entirely ignores the image structure and simply averages distances between pixel values at the same locations. Third, the fact that such complex testing problems can now be solved while guaranteeing differential privacy is extremely important, especially when dealing with data as personal and sensitive as facial data. The DP constraint essentially guarantees privacy in the sense that confidential information about a single face image cannot be recovered. Fourth, we stress that the power results reported are meaningful and that dpMMD truly detects the difference between CelebA images of women and men, which is justified as type I error control is correctly retained (Appendix C.3). We believe these CelebA experiments strongly advocate the use of tests leveraging kernel methods for differential privacy.
### Analysis of Additional Experimental Results
Before concluding the paper, we briefly summarize the results of the additional experiments in Appendix C.
Independence testing. In the HSIC testing experiments of Appendix C.1, we consider the problem of detecting the dependence of variables with the perturbed uniform joint density introduced in Section 6.2, and hence with uniform marginal densities. This setting is of particular interest as this corresponds to the joint density used in deriving the non-private \(L_{2}\) minimax independence lower bound over Sobolev balls by Albert et al. (2022). The exact same power dynamics and aforementioned observations, which hold for the MMD-based tests, also apply to the HSIC variants of all tests, and indeed dpHSIC achieves substantially higher power than all the other tests.
High-signal & low-privacy alternatives. We remark that Naive dpMMD and TCS-ME have almost no power against the alternatives considered in the previous subsections. This is not because these tests are faulty but because the signal is too weak to be detected by these tests. In fact, the Naive dpMMD test with privacy \(\varepsilon\) is exactly equivalent to the dpMMD test with privacy \(2\varepsilon/B\) (recall \(B=2000\) permutations), which justifies the poor performance of Naive dpMMD. For a sanity check, we consider 'high-signal' & 'low-privacy' alternatives in Appendix C.2 and show that dpMMD and TCS-ME are indeed able to detect the difference when the signal is large enough. The results can be found in Figure 12.
Type I error control. In Appendix C.3, we run experiments under the null hypothesis. This corresponds to no perturbation for the perturbed uniform two-sample and independence settings (amplitude \(a=0\)), and to no corruption in the CelebA sampler (corruption \(c=0\)). All tests are well-calibrated with empirical level around \(\alpha\) under all settings considered, except TCS-ME. Indeed, we observe in Figure 14 that when testing two samples from a 100-dimensional uniform distribution, TCS-ME fails to control the type I error rate, which is estimated to be around \(0.5=10\alpha\) instead of around \(\alpha=0.05\). This depicts a major limitation of TCS-ME especially in high-dimensional settings. However, we point out that this test is well-calibrated in the settings of Section 6.2, ensuring a fair comparison of power therein.
## 7 Discussion
In this work, we have proposed differentially private permutation tests and examined their theoretical and empirical performance. The prior work on differentially private testing has often been limited in its practical applicability, being restricted to discrete data or relying on asymptotic theory that does not offer confidence in finite sample scenarios. Our permutation framework addresses these challenges by introducing practical tools, which are applicable to diverse settings with finite sample guarantees for both type I error control and differential privacy. In addition to general power properties, we have provided a detailed power analysis of the proposed method in the context of kernel testing, and showed that the proposed private kernel tests achieve minimax optimal power in terms of kernel metrics in all privacy regimes. We have also analyzed the testing power against nonparametric \(L_{2}\) alternatives, and established minimum separation rates in all privacy regimes. Finally, we have conducted an extensive simulation study to validate our theoretical findings as well as to highlight the practical value of our approach.
Our work raises several intriguing open questions that deserve further investigation, as outlined below.
* **Beyond Global DP.** Our work focuses on global \((\varepsilon,\delta)\)-differential privacy along with the Laplace mechanism. While global DP is widely used as an effective concept of data protection, other privacy concepts may be more suitable depending on the context and requirements. Exploring the development of private permutation tests applicable to other privacy concepts or based on other privacy mechanisms would be an interesting direction for future investigation.
* **Other Applications.** We illustrated the proposed method in the context of two-sample and independence testing, with a focus on kernel-based tests. The permutation method has been employed successfully in other statistical problems such as testing for regression coefficients (DiCiccio and Romano, 2017) and conditional independence testing (Kim et al., 2022b). It is therefore compelling to broaden the application of our framework by tackling other statistical problems in privacy settings. Future work can also focus on conducting a detailed analysis of the private permutation test using non-kernel test statistics.
* **Variants of dpMMD and dpHSIC.** In our analysis, we utilized the plug-in estimators of MMD and HSIC based on single kernels. In recent years, significant progress has been made to reduce the computational complexity (Schrab et al., 2022; Domingo-Enrich et al., 2023) as well as to avoid the bandwidth selection issue (Schrab et al., 2023; Biggs et al., 2023) of this standard approach. Considering these developments, a promising avenue for future research would be to extend these recent advances to privacy-preserving settings.
* **Minimax Separation under DP.** Our results in Section 5.2 provide upper bounds for the minimax separation rates in terms of the \(L_{2}\) metric, which match the lower bound in low privacy regimes. However, as mentioned earlier, it is unknown whether they are still tight in mid/high privacy regimes. Consequently, future work could focus on addressing this question by establishing matching lower bounds or sharper upper bounds. More broadly, it would be interesting to establish minimax separation rates under DP in terms of other metrics as well. We leave these important questions to future work.
|
2307.02912 | LEA: Improving Sentence Similarity Robustness to Typos Using Lexical
Attention Bias | Textual noise, such as typos or abbreviations, is a well-known issue that
penalizes vanilla Transformers for most downstream tasks. We show that this is
also the case for sentence similarity, a fundamental task in multiple domains,
e.g. matching, retrieval or paraphrasing. Sentence similarity can be approached
using cross-encoders, where the two sentences are concatenated in the input
allowing the model to exploit the inter-relations between them. Previous works
addressing the noise issue mainly rely on data augmentation strategies, showing
improved robustness when dealing with corrupted samples that are similar to the
ones used for training. However, all these methods still suffer from the token
distribution shift induced by typos. In this work, we propose to tackle textual
noise by equipping cross-encoders with a novel LExical-aware Attention module
(LEA) that incorporates lexical similarities between words in both sentences.
By using raw text similarities, our approach avoids the tokenization shift
problem obtaining improved robustness. We demonstrate that the attention bias
introduced by LEA helps cross-encoders to tackle complex scenarios with textual
noise, specially in domains with short-text descriptions and limited context.
Experiments using three popular Transformer encoders in five e-commerce
datasets for product matching show that LEA consistently boosts performance
under the presence of noise, while remaining competitive on the original
(clean) splits. We also evaluate our approach in two datasets for textual
entailment and paraphrasing showing that LEA is robust to typos in domains with
longer sentences and more natural context. Additionally, we thoroughly analyze
several design choices in our approach, providing insights about the impact of
the decisions made and fostering future research in cross-encoders dealing with
typos. | Mario Almagro, Emilio Almazán, Diego Ortego, David Jiménez | 2023-07-06T10:53:50Z | http://arxiv.org/abs/2307.02912v1 | # LEA: Improving Sentence Similarity Robustness to Typos Using Lexical Attention Bias
###### Abstract.
Textual noise, such as typos or abbreviations, is a well-known issue that penalizes vanilla Transformers for most downstream tasks. We show that this is also the case for sentence similarity, a fundamental task in multiple domains, _e.g._ matching, retrieval or paraphrasing. Sentence similarity can be approached using cross-encoders, where the two sentences are concatenated in the input allowing the model to exploit the inter-relations between them. Previous works addressing the noise issue mainly rely on data augmentation strategies, showing improved robustness when dealing with corrupted samples that are similar to the ones used for training. However, all these methods still suffer from the token distribution shift induced by typos. In this work, we propose to tackle textual noise by equipping cross-encoders with a novel LExical-aware Attention module (LEA) that incorporates lexical similarities between words in both sentences. By using raw text similarities, our approach avoids the tokenization shift problem obtaining improved robustness. We demonstrate that the attention bias introduced by LEA helps cross-encoders to tackle complex scenarios with textual noise, specially in domains with short-text descriptions and limited context. Experiments using three popular Transformer encoders in five e-commerce datasets for product matching show that LEA consistently boosts performance under the presence of noise, while remaining competitive on the original (clean) splits. We also evaluate our approach in two datasets for textual entailment and paraphrasing showing that LEA is robust to typos in domains with longer sentences and more natural context. Additionally, we thoroughly analyze several design choices in our approach, providing insights about the impact of the decisions made and fostering future research in cross-encoders dealing with typos.
Sentence similarity, Transformers, typos, lexical, e-commerce.

Footnote †: These authors contributed equally to this research.
+
Footnote †: These authors contributed equally to this research.
+
Footnote †: These authors contributed equally to this research.
+
Footnote †: These authors contributed equally to this research.
+
Footnote †: These authors contributed equally to this research.
+
Footnote †: These authors contributed equally to this research.
+
Footnote †: These authors contributed equally to this research.
+
Footnote †: These authors contributed equally to this research.
+
Footnote †: These authors contributed equally to this research.
+
Footnote †: These authors contributed equally to this research.
+
Footnote †: These authors contributed equally to this research.
+
Footnote †: These authors contributed equally to this research.
+
Footnote †: These authors contributed equally to this research.
+
Footnote †: These authors contributed equally to this research.
+
Footnote †: These authors contributed equally to this research.
+
Footnote †: These authors contributed equally to this research.
+
Footnote †: These authors contributed equally to this research.
+
Footnote †: These authors contributed equally to this research.
+
Footnote †: These authors contributed equally to this research.
+
+
Footnote †: These authors contributed equally to this research.
+
Footnote †: These authors contributed equally to this research.
+
Footnote †: These authors contributed equally to this research.
+
Footnote †: These authors contributed equally to this research.
+
+
Footnote †: These authors contributed equally to this research.
+
Footnote †: These authors contributed equally to this research.
+
Footnote †: These authors contributed equally to this research.
Sentence similarity is of great interest to the scientific community for its variety of downstream applications (_e.g._ question-answering, matching or retrieval) and the unresolved challenges that arise (Beng et al., 2015; Chen et al., 2016; Chen et al., 2017). Transformer architectures dominate the state-of-the-art with two main alternatives: bi-encoders (Beng et al., 2015; Chen et al., 2016; Chen et al., 2017; Chen et al., 2018; Chen et al., 2019) and cross-encoders (Chen et al., 2016; Chen et al., 2017; Chen et al., 2019; Chen et al., 2019). Bi-encoders focus on learning meaningful representations for each sentence independently using Siamese-like architectures, making them suitable for efficient retrieval (Wang et al., 2019). However, these types of models rely only on comparing the sentence embeddings and lack any interaction between the tokens/words of the two sentences being compared. Cross-encoders (Wang et al., 2019), on the other hand, tackle this limitation by concatenating the two sentences in the input. The attention heads in the Transformer learn the intra- and inter-sentence interactions, which in many cases provide highly valuable information for achieving correct predictions in similarity tasks.
Cross-encoders are often considered an upper bound for textual similarity (Wang et al., 2019), with their main limitation being the computational cost of jointly encoding pairs of sentences. Recent works attempt to exploit their potential by using late-interaction modules for bi-encoders (Chen et al., 2016; Chen et al., 2017; Chen et al., 2019), distillation techniques (Wang et al., 2019; Wang et al., 2019), or by using cross-encoders as re-rankers after retrieving potential candidates with bi-encoders (Chen et al., 2016; Chen et al., 2019). These two stages resemble the typical pipeline of product matching, with an initial blocking stage that discards samples unlikely to be positive matches for a given product, followed by a more computationally intensive step that identifies the correct match among the selected candidates (Beng et al., 2015; Chen et al., 2017; Chen et al., 2019).
All these methods suffer when dealing with textual noise, which may appear in many forms: _e.g._ typos and misspellings, as in queries for retrieval tasks (Wang et al., 2019), or custom abbreviations in certain domains (Wang et al., 2019; Wang et al., 2019). This noise is challenging for vanilla Transformers for two main reasons: character information is not used in this type of architecture, and the shift in the token distribution caused by noise makes it harder to relate tokens referring to the same concept, _e.g._ _chocolate_ with _cholate_ or _chclt_. Prior works in information retrieval evidence performance issues (Wang et al., 2019) under these conditions and propose to address them mainly by training with typos similar to the ones seen at test time. Although this strategy has proven, to some extent, effective in mitigating the token distribution shift, all these methods still have the limitations associated with the loss of character information in the tokenization process.
All the evidence from previous works stresses the importance of character-level information for dealing with textual noise. Following the same intuition, we propose to equip cross-encoders with a LExical Attention bias (LEA) that modifies the self-attention module in Transformers, guiding it towards lexically similar words. This helps the model improve robustness in the presence of noise. We adopt standard data augmentation strategies to deal with typos and demonstrate large performance improvements when adding LEA to cross-encoders. Figure 1 shows an example of the average attention across layers in a cross-encoder. When the typo appears in the word _black_ (_i.e._ _black_ \(\rightarrow\) _blk_), the tokenizer breaks it into two sub-words (_b_ + _##lk_), preventing the self-attention mechanism from capturing the relationship between both terms. However, when LEA is introduced, the lexical similarity between them adds a bias that helps the modified attention maintain this relationship. Our main contributions are as follows:
* We propose to add a lexical bias in the self-attention module of cross-encoders, which is designed specifically to improve textual noise scenarios. This bias provides the model with information about the lexical similarities between words in pairs of sentences, increasing the attention between those that are lexically close.
* We evaluate LEA in five e-commerce datasets using three Transformer backbones and demonstrate that LEA consistently improves performance by a large margin. Concretely, we report an average improvement of around 6 absolute points of F1-score when dealing with synthetically generated typos. Results in textual entailment and paraphrasing tasks show that LEA achieves competitive performance or even surpasses the baselines.
* We thoroughly analyze the impact of the different components of LEA to shed light on the design choices made.
## 2. Related Work
### Sentence similarity
Determining the degree of similarity between two sentences is a fundamental problem for matching, entailment or paraphrasing tasks and is normally tackled using two types of approaches: bi-encoders and cross-encoders. Bi-encoders are designed to process each sentence independently, obtaining an embedding for each of them. These models are typically trained with metric learning objectives that pull together the representations of positive pairs while pushing apart those of negative pairs. The authors in (Chen et al., 2017) propose SimCSE, which exploits dropout to generate embeddings that build positive pairs in the unsupervised setup. They also propose a supervised setting where textual entailment labels are used to construct the positive pairs. Tracz et al. (Tracy et al., 2019) adopt a triplet loss in a supervised setup for product matching. The approach described in (Chen et al., 2017) extends SimCSE and proposes to learn via equivariant contrastive learning, where representations have to be insensitive to dropout and sensitive to MLM-based word replacement perturbations. Supervised contrastive learning (Krizhevsky et al., 2014) is also adopted in (Chen et al., 2017) for sentence similarity in a general domain, while (Beng et al., 2015; Chen et al., 2019) apply it to product matching in e-commerce. Another popular method is SBERT (Shen et al., 2019), which addresses both the prediction of several similarity degrees, as a regression task, and the direct matching of sentence pairs, via classification.
Cross-encoders, on the other hand, jointly process a concatenated pair of sentences. These models are often considered to outperform bi-encoders (Wang et al., 2019; Wang et al., 2019), obtaining robust results in general domains (Wang et al., 2019), product matching (Wang et al., 2019) and retrieval tasks (Wang et al., 2019). However, their main drawback is the need to recompute the encoding for each different pair of sentences. Therefore, many recent works adopt hybrid solutions to improve bi-encoders. Humeau et al. proposed Poly-encoders (Humeau et al., 2019), which utilize an attention mechanism to perform extra interaction after the Siamese encoders. The TransEncoder method (Wang et al., 2019) alternates independent bi- and cross-encoder trainings, while distilling their knowledge via pseudo-labels; the resulting bi-encoder shows improved performance. Distillation is further explored in (Wang et al., 2019), where knowledge transfer from the cross-attention of a light interaction module is adopted during training and removed at inference time.
### Dealing with typos and abbreviations
Recent literature demonstrates that Transformer-based architectures (Rendle et al., 2017) are not robust to textual noise, _i.e._ to misspellings or abbreviations of the input words (Krause et al., 2017; Wang et al., 2017; Wang et al., 2018; Wang et al., 2019). Despite using sub-word tokenizers (_e.g._ WordPiece) designed to deal with out-of-vocabulary words, Transformers exhibit performance drops in practice when exposed to typos. Words with noise are unlikely to be present in the vocabulary and are therefore split into several sub-words, yielding token distribution shifts with respect to the noise-free counterpart (Wang et al., 2017; Wang et al., 2018).
Training the model with synthetically generated perturbations (Krause et al., 2017; Wang et al., 2018; Wang et al., 2019) is a standard practice for dealing with typos. Some techniques use simple character additions, deletions or swaps, while others use more sophisticated methods such as common misspellings or keyboard and OCR errors (Wang et al., 2017). Moreover, depending on where the perturbation lands, the same type of noise can have a different impact, _i.e._ typos in different words of a sentence do not have equal influence. This issue was reported in (Wang et al., 2017; Wang et al., 2018), showing that noise in relevant words yields larger performance drops. Some approaches complement this practice with different architectural designs or specific losses. For example, in (Wang et al., 2017) the authors observe that character-based Transformers provide improved robustness to typos and exploit this fact to propose a self-teaching strategy that boosts the performance of dense retrievers. The authors in (Wang et al., 2018) add a module that recognizes the presence of typos in words just before the downstream classifier, which helps to align representations between noisy and clean versions.
### Relative attention bias
Self-attention modules in Transformers receive as input the token representations coming from the previous layers and output contextual representations for each token, estimated from a weighted combination of the tokens' representations. Modifications of the self-attention have been proposed to add a bias that accounts for the relative distance between words/tokens in the input sequence (Wang et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2019). This strategy, known as relative positional embeddings, replaces the absolute positional embeddings, where the position was injected as part of the input of the Transformer. In (Wang et al., 2017) the authors follow this idea and extend it with long- and short-term relations. Wennberg et al. (Wennberg et al., 2019) propose a more interpretable representation for translation-invariant relative positional embeddings using a Toeplitz matrix. The authors in (Wang et al., 2018) simplify the embeddings by adding fixed scalars to the attention values that vary with the distance. This way of adding relative information between tokens has also been applied to information extraction from documents, using 2D relative distances (Krause et al., 2017) or tabular structural biases for table understanding (Wang et al., 2019).
## 3. Proposed approach
In this work we propose to incorporate a Lexical-aware Attention module (LEA) into the self-attention mechanism of vanilla cross-encoder Transformers. This module considers inter-sentence lexical relations, which we demonstrate to be key for improving sentence similarity tasks, especially in the presence of typos.
_Notation_. We use capital letters to denote sets (_e.g._ \(X\)), bold capital letters for matrices (_e.g._ \(\mathbf{X}\)), bold lowercase letters for vectors (_e.g._ \(\mathbf{x}\)) and lowercase letters for scalars (_e.g._ \(x\)). For simplicity in the notation, equations only refer to a single layer and head in the Transformer architecture.
### Self-attention
A key cornerstone in the success of Transformers is the multi-head self-attention mechanism, which learns token dependencies and encodes contextual information from the input (Krause et al., 2017). In particular, a single head in this attention module receives an input sequence of \(n\) token representations coming from the previous layer \(X=(\mathbf{x}_{1},\dots,\mathbf{x}_{n})\), where \(\mathbf{x}_{i}\in\mathbb{R}^{d_{h}}\) and computes a new sequence \(Z=(\mathbf{z}_{1},\dots,\mathbf{z}_{n})\) of the same length and hidden dimension \(d_{h}\). The resulting token representations are computed as follows:
\[\mathbf{z}_{i}=\sum_{j=1}^{n}a_{ij}\left(\mathbf{x}_{j}\cdot\mathbf{W}^{V} \right),\quad\mathbf{z}_{i}\in\mathbb{R}^{d_{h}}. \tag{1}\]
Therefore, each new token representation \(\mathbf{z}_{i}\) is a weighted average of the linearly projected token representations \(\mathbf{x}_{j}\), using the value projection matrix \(\mathbf{W}^{V}\). The weight \(a_{ij}\) associated with each pair of tokens is computed using a softmax function:
\[a_{ij}=\frac{\exp e_{ij}}{\sum_{k=1}^{n}\exp e_{ik}}, \tag{2}\]
where the scalar \(e_{ij}\) is computed using a compatibility function (dot product) between tokens \(i\) and \(j\) in the sentence:
\[e_{ij}=\frac{(\mathbf{x}_{i}\mathbf{W}^{Q})(\mathbf{x}_{j}\mathbf{W}^{K})^{T}}{\sqrt{d_{h}}}. \tag{3}\]
The query, key and value projection matrices \(\{\mathbf{W}^{Q},\mathbf{W}^{K},\mathbf{W}^{V}\}\in\mathbb{R}^{d_{h}\times d_ {i}}\) are learned during training, where \(d_{i}\) refers to the dimension of the intermediate representations.
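To make the computation in Eqs. 1–3 concrete, the following minimal NumPy sketch evaluates a single attention head; the function and variable names are ours and purely illustrative, not part of any particular implementation.

```python
import numpy as np

def softmax(e):
    e = e - e.max(axis=-1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(e)
    return e / e.sum(axis=-1, keepdims=True)

def self_attention_head(X, W_q, W_k, W_v):
    """One head: X has shape (n, d_h); W_q, W_k, W_v have shape (d_h, d_i)."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v   # query/key/value projections
    E = (Q @ K.T) / np.sqrt(X.shape[-1])  # compatibility scores e_ij (Eq. 3)
    A = softmax(E)                        # attention weights a_ij (Eq. 2)
    return A @ V                          # contextual representations z_i (Eq. 1)
```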
### Lexical attention bias for cross-encoders
As we demonstrate in Section 4, in the presence of textual noise, \(a_{ij}\) struggles to relate similar terms corrupted by noise. To address this issue, we propose to add a lexical attention bias to the self-attention module of cross-encoders. This bias term guides the attention towards tokens with high lexical similarity. We illustrate our proposed architecture in Figure 2.
Cross-encoders for textual similarity receive as input the concatenation of the two sentence representations to be compared:
\[\mathbf{X}_{c}=\mathbf{X}_{l}\parallel\mathbf{X}_{r}, \tag{4}\]
where \(\mathbf{X}_{l}\) and \(\mathbf{X}_{r}\) are the left and right sentences, respectively. Inspired by previous works in relative position embeddings (Wang et al., 2018), we propose to modify the self-attention module described in Eq. 3 as follows:
\[\tilde{e}_{ij}=e_{ij}+\alpha\ \mathbf{t}_{ij}\mathbf{W}^{L},\quad\forall i,j\in \mathbf{X}_{c}, \tag{5}\]
where the second term accounts for the lexical bias. \(\mathbf{W}^{L}\in\mathbb{R}^{d_{L}\times 1}\) is a learnable projection matrix, \(\mathbf{t}_{ij}\in\mathbb{R}^{1\times d_{L}}\) is the pairwise lexical attention embedding and \(\alpha\) is a fixed scale factor that aligns the contributions of the lexical attention (\(\mathbf{t}_{ij}\mathbf{W}^{L}\)) and the scaled dot-product attention (\(e_{ij}\)). This factor is computed automatically once
at the beginning of the training based on the magnitudes of both terms.
To compute the pairwise lexical attention embedding, we first measure the similarity between words considering only inter-sentence relations, _i.e._ lexical similarities between words of the same sentence are set to 0:
\[s_{ij}=\begin{cases}Sim\left(w(\mathbf{x}_{i}),w(\mathbf{x}_{j})\right)&,\text{if }\mathbf{x}_{i}\in\mathbf{X}_{l}\text{ and }\mathbf{x}_{j}\in\mathbf{X}_{r}\\ &\text{or }\mathbf{x}_{i}\in\mathbf{X}_{r}\text{ and }\mathbf{x}_{j}\in\mathbf{X}_{l}\\ 0&,\text{otherwise},\end{cases} \tag{6}\]
where \(\mathbf{X}_{l}\) and \(\mathbf{X}_{r}\) represent the pair of input sentences to compare, \(w(\mathbf{x}_{i})\) and \(w(\mathbf{x}_{j})\) denote the input textual words associated with the i-th and j-th tokens, respectively, and \(Sim(\cdot,\cdot)\) is a metric that measures the string similarity between two words. We elaborate on our choice for the similarity metric in Section 4.3.
Inspired by (Kumar et al., 2017), we apply a sinusoidal function over \(s_{ij}\) to get an embedding that represents the lexical similarity:
\[\boldsymbol{\epsilon}_{ij}^{(s_{ij},2p)} = \sin\left(\frac{2\pi\cdot s_{ij}}{\beta^{2p/d_{L}}}\right), \tag{7}\]
\[\boldsymbol{\epsilon}_{ij}^{(s_{ij},2p+1)} = \cos\left(\frac{2\pi\cdot s_{ij}}{\beta^{2p/d_{L}}}\right), \tag{8}\]
where \(\beta=10^{4}\) and \(p\in\{0,\dots,d_{L}-1\}\). The final lexical embedding \(\mathbf{t}_{ij}\) is the concatenation of the two sinusoidal embeddings in Eq. 7 and Eq. 8. Differently from the original proposal (Kumar et al., 2017), we scale the similarity \(s_{ij}\) by \(2\pi\) to cover the full range of the sinusoidal functions. This results in embeddings that are more uniformly distributed across the output space.
Note that by equipping LEA with a learnable projection matrix \(\mathbf{W}^{L}\) we provide the model with the flexibility to adjust the contribution coming from the lexical term in the final attention values. The parameter overhead introduced by this term is \(d_{L}\times\#\text{heads}\) in all the layers where we use it.
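To make the full pipeline of Eqs. 5–8 concrete, the following sketch assembles the bias term, assuming character-level Jaccard as \(Sim\) and taking \(\alpha\), the word-to-token alignment and the dimension bookkeeping as given; all names are illustrative and this is a sketch rather than a reference implementation.

```python
import numpy as np

def char_jaccard(w1, w2):
    """Character-level Jaccard similarity between two words."""
    s1, s2 = set(w1), set(w2)
    return len(s1 & s2) / len(s1 | s2) if (s1 | s2) else 0.0

def lexical_embedding(s_ij, d_L, beta=1e4):
    """Sinusoidal embedding of a similarity score, scaled by 2*pi (Eqs. 7-8).
    Sin and cos halves are concatenated into a vector of length d_L
    (the dimension bookkeeping here is our own simplification)."""
    p = np.arange(d_L // 2)
    angles = 2 * np.pi * s_ij / beta ** (2 * p / d_L)
    return np.concatenate([np.sin(angles), np.cos(angles)])

def lexical_bias(words, sent_id, W_L, alpha, d_L):
    """Bias matrix added to the attention logits: e_tilde = e + bias (Eq. 5).
    words[i] is the word the i-th token comes from; sent_id[i] is 0/1 for the
    left/right sentence, so only inter-sentence pairs get a nonzero bias (Eq. 6)."""
    n = len(words)
    bias = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if sent_id[i] != sent_id[j]:
                s = char_jaccard(words[i], words[j])
                bias[i, j] = alpha * (lexical_embedding(s, d_L) @ W_L)
    return bias
```

Here `W_L` plays the role of \(\mathbf{W}^{L}\) and has shape `(d_L,)`; in the actual model it is learned independently per head, as discussed in Section 4.4.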
## 4. Experimental Work
We structure the experimentation around four research questions to shed light on the design and capabilities of LEA.
**RQ1.** Does LEA improve performance in a consistent way across datasets and architectures in the presence of typos while remaining competitive on clean setups?
**RQ2.** How important is the choice of the lexical similarity metric for LEA in the presence of typos?
**RQ3.** What is the impact of applying LEA at different locations in the architecture, and what is the effect of sharing its parameters at different levels, _e.g._ model or layer?
**RQ4.** Does LEA generalize to different noise strengths?
The remainder of this section presents the experimental setting in Section 4.1 and answers the four research questions in Sections 4.2, 4.3, 4.4 and 4.5.
### Experimental setting
The impact of textual noise on a model's predictions depends on whether it appears in relevant words or not (Zhu et al., 2017). We argue that when sentences are short, the probability of random noise appearing in relevant words is higher and, therefore, we expect a higher contribution from the lexical attention bias. Hence, the core of our experiments is conducted on five product matching datasets, where the sentences are short and often lack syntax: Abt-Buy (Kumar et al., 2017), Amazon-Google (Kumar et al., 2017) and WDC-Computers (small, medium and large) (Zhu et al., 2017). Moreover, we validate the contribution of LEA on two related tasks from the natural language domain: textual entailment (RTE (Zhu et al., 2017)) and paraphrasing (MRPC (Zhu et al., 2017)). Details about the datasets are provided in Tables 1 and 2, respectively. We artificially introduce typos into these datasets as described in Section 4.1.1.
#### 4.1.1. Synthetic noise generation
The original datasets mostly contain clean sentences with a low percentage of textual noise. To
| **Dataset** | Train | Val | Test | Avg. #words |
| --- | --- | --- | --- | --- |
| Abt-Buy | 5,743 | 1,916 | 1,916 | 8.0 |
| Amazon–Google | 6,874 | 2,293 | 2,293 | 6.6 |
| WDC-Computers (small) | 2,263 | 567 | 1,100 | 12.1 |
| WDC-Computers (medium) | 6,464 | 1,618 | 1,100 | 12.2 |
| WDC-Computers (large) | 26,640 | 6,659 | 1,100 | 13.4 |

Table 1. Product matching dataset details: train/val/test sizes and the average number of words per sentence.
Figure 2. Overview of the attention mechanism in Transformers where we add the proposed lexical attention bias (LEA). We use the traditional nomenclature for the key, query and value representations (Q, K, V).
evaluate the models' generalization under the presence of noise, we synthetically generate test splits containing a wide range of typos. We apply the strategies followed in previous works (Srivastava et al., 2017; Wang et al., 2018) using the _nlpaug_1 library for data augmentation. In particular, we consider the following character operations:
Footnote 1: [https://github.com/makedward/nlpaug](https://github.com/makedward/nlpaug)
* Insertion. A random character is inserted at a random position within the word, _e.g._ _screen_ \(\rightarrow\) _scrieen_.
* Deletion. A random character from the word is removed, _e.g._ _screen_ \(\rightarrow\) _scren_.
* Substitution. A random character from the word is replaced by a random character, _e.g._ _screen_ \(\rightarrow\) _sbreen_.
* Swapping. A random character is swapped with a neighboring character in the word, _e.g._ _screen_ \(\rightarrow\) _srceen_.
* Keyboard substitution. A random character from the word is replaced by a nearby character on the QWERTY keyboard, _e.g._ _screen_ \(\rightarrow\) _acreen_.
We modify all sentences in the test splits, where each word has a 20% chance of being augmented. Only one type of operation is applied to each word, chosen randomly among the five options. We limit the augmentation to words with more than 3 characters to mitigate the effect of words becoming unrecognizable from their original form, _e.g._ _ace_ \(\rightarrow\) _ate_.
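For reference, this corruption process can be reproduced with standard-library Python alone; the sketch below is our own illustrative stand-in for the _nlpaug_ configuration we use, with a deliberately abbreviated QWERTY neighbour map.

```python
import random
import string

QWERTY = {"s": "awedxz", "c": "xdfv", "e": "wsdr", "n": "bhjm"}  # abbreviated map

def corrupt_word(w):
    i = random.randrange(len(w))
    op = random.choice(["insert", "delete", "substitute", "swap", "keyboard"])
    if op == "insert":
        return w[:i] + random.choice(string.ascii_lowercase) + w[i:]
    if op == "delete":
        return w[:i] + w[i + 1:]
    if op == "substitute":
        return w[:i] + random.choice(string.ascii_lowercase) + w[i + 1:]
    if op == "swap":
        i = max(i, 1)  # swap the characters at positions i-1 and i
        return w[:i - 1] + w[i] + w[i - 1] + w[i + 1:]
    neighbours = QWERTY.get(w[i], string.ascii_lowercase)  # keyboard substitution
    return w[:i] + random.choice(neighbours) + w[i + 1:]

def corrupt_sentence(sentence, p=0.2, min_len=4):
    return " ".join(
        corrupt_word(w) if len(w) >= min_len and random.random() < p else w
        for w in sentence.split()
    )
```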
#### 4.1.2. Baseline models
Due to the lack of prior works dealing with textual noise in cross-encoders for sentence similarity tasks, we adopt a benchmark based on the comparison of three versions of cross-encoders: 1) vanilla, 2) trained with data augmentation (DA) and 3) trained with data augmentation and LEA. We adopt 2) as the reference baseline, following related works in other domains that successfully applied data augmentation to deal with typos (Kang et al., 2018; Wang et al., 2018; Wang et al., 2018).
For data augmentation during training we apply the same configuration as in the synthetic generation of the test splits (see Section 4.1.1) and use a 50% chance for each sentence to be augmented. We use three popular pre-trained language models (PLMs) of varying sizes, _i.e._ Electra-small (Wang et al., 2018), BERT-Medium (Wang et al., 2018) and BERT-Base (Wang et al., 2018).
#### 4.1.3. Implementation details
In all the experiments we fine-tune the PLMs described in Section 4.1.2 for 30 epochs, using AdamW with a batch size of 32, an initial learning rate of \(5e^{-5}\) and a weight decay of \(5e^{-5}\), and we apply a cosine annealing scheduler with a warm-up of 1.5 epochs. For LEA, \(\alpha\) in Eq. 5 is automatically fixed at the beginning of training for each layer of the Transformer, and \(\mathbf{W}^{L}\) is trained independently per head. As the similarity metric we use Jaccard (see Section 4.3 for more details) and we apply the proposed lexical attention bias to the second half of the layers in all architectures (see Section 4.4 for a detailed analysis). For more details we refer the reader to Appendix B.
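Up to framework details, this optimization setup corresponds to the following sketch using PyTorch and the HuggingFace scheduler helper; the step counts are placeholders to be derived from the actual data loader.

```python
import torch
from transformers import get_cosine_schedule_with_warmup

def build_optimization(model, steps_per_epoch, epochs=30):
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5, weight_decay=5e-5)
    scheduler = get_cosine_schedule_with_warmup(
        optimizer,
        num_warmup_steps=int(1.5 * steps_per_epoch),  # 1.5-epoch warm-up
        num_training_steps=epochs * steps_per_epoch,  # cosine annealing afterwards
    )
    return optimizer, scheduler
```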
We use the same training data for all methods and evaluate them on two different test splits: the original (clean) one and the corrupted version with typos. We run three different training seeds and create three test splits with randomly introduced typos, as their influence may differ depending on which words contain them. Thus, we report the mean and standard deviation over three and nine results for the clean and typo experiments, respectively.
The test splits with typos, the binaries of the models and the required material to reproduce results are available in our repository2.
Footnote 2: [https://github.com/m-almagro-cadir/LEA](https://github.com/m-almagro-cadir/LEA)
### Robustness across datasets
We compare in Table 3 the F1-score of LEA with that of vanilla cross-encoders trained without and with data augmentation (+ DA) on five product matching datasets. We observe that applying data augmentation to mimic typos during training improves robustness to them, as reported by previous works in the retrieval domain (Srivastava et al., 2017). When we apply LEA, we outperform the baseline by 5.4, 6.1 and 7.0 points on average across the five datasets for Electra-small, BERT-Medium and BERT-Base, respectively. Strategies based solely on data augmentation depend completely on the tokenized data, which may lose part of the lexical information when splitting into sub-words. In contrast, LEA exploits character-level similarity between words, information that does not depend on the tokenization.
Moreover, in Table 4 we analyze the impact of adding LEA to cross-encoders in the absence of typos. Here, the vanilla cross-encoders trained without data augmentation perform best on average. LEA, however, clearly outperforms training with data augmentation and provides competitive performance, achieving the best results on some datasets. We refer the reader to Sections 4.6.1 and 4.6.2 for additional experiments with a larger architecture (BERT-Large), autoregressive models (GPT-2 and GPT-Neo) and larger datasets (WDC-XLarge and WDC-All).
The results presented in Tables 3 and 4 therefore provide a positive response to \(\mathbf{RQ_{1}}\): _LEA improves cross-encoder robustness to typos by a large margin, while achieving competitive performance in their absence._
#### 4.2.1. Performance on additional domains
Previous experiments showing the improvements of LEA were conducted in the e-commerce domain, _i.e._ short product descriptions with little context. In Table 5, we further demonstrate the benefits of LEA using BERT-Medium on the RTE (textual entailment) and MRPC (paraphrasing) datasets, which represent a completely different domain with longer sentences. Again, typos dramatically reduce the performance of a cross-encoder trained without data augmentation. However, LEA palliates this drop and achieves the best results on RTE with typos (\(\sim\) 6 absolute points gain), while having comparable performance to a vanilla cross-encoder trained with data augmentation on MRPC. In contrast, in the clean setup LEA suffers small performance drops with
| **Dataset** | Train | Val | Test | Avg. #words |
| --- | --- | --- | --- | --- |
| RTE | 2,241 | 249 | 277 | 43.3 |
| MRPC | 3,668 | 408 | 1,725 | 22.0 |

Table 2. Textual similarity dataset details: train/val/test sizes and the average number of words per sentence.
respect to the cross-encoder. We argue that Jaccard may reflect similarity worse in long texts than an edit distance because it is agnostic to character order, resulting in a higher probability of highlighting unrelated words. In turn, longer sentences reduce the probability of
| | Abt-Buy (clean) | Amaz.–Goog. (clean) | WDC-Comp. Small | WDC-Comp. Medium | WDC-Comp. Large | Average |
| --- | --- | --- | --- | --- | --- | --- |
| Electra-small | 78.85 ± 0.8 | **71.80 ± 0.9** | **83.37 ± 1.8** | **88.42 ± 0.4** | **93.00 ± 0.9** | **83.09 ± 1.0** |
| + DA | 80.62 ± 1.1 | 69.83 ± 3.3 | 79.12 ± 1.7 | 85.65 ± 1.0 | 92.19 ± 0.7 | 81.48 ± 1.6 |
| + LEA | **80.95 ± 1.6** | 70.94 ± 2.5 | 78.70 ± 0.5 | 85.23 ± 1.2 | 91.34 ± 0.4 | 81.43 ± 1.2 |
| BERT-Med. | 79.21 ± 2.7 | 69.06 ± 1.0 | **80.78 ± 1.5** | **86.91 ± 1.4** | 90.98 ± 0.5 | **81.62 ± 1.6** |
| + DA | 77.86 ± 0.3 | 67.45 ± 2.8 | 73.35 ± 0.8 | 81.61 ± 0.8 | 90.60 ± 0.9 | 77.87 ± 1.2 |
| + LEA | **82.31 ± 0.2** | **73.39 ± 0.9** | 74.66 ± 0.9 | 83.76 ± 1.8 | **91.23 ± 0.5** | 79.82 ± 1.1 |
| BERT-Base | **83.03 ± 0.8** | 70.82 ± 0.6 | **82.08 ± 0.1** | **88.13 ± 1.1** | **92.69 ± 0.5** | **83.35 ± 0.6** |
| + DA | 80.43 ± 1.7 | 67.80 ± 1.6 | 75.83 ± 1.9 | 84.59 ± 1.1 | 89.51 ± 0.9 | 79.63 ± 1.4 |
| + LEA | 82.66 ± 0.9 | **72.62 ± 0.4** | 79.14 ± 1.8 | 86.40 ± 0.3 | 91.04 ± 0.7 | 82.37 ± 0.8 |

Table 4. Results obtained on five e-commerce datasets for product matching (original splits). We report the mean and standard deviation of the F1-score over three training seeds.
| | Abt-Buy (typo) | Amaz.–Goog. (typo) | WDC-Comp. Small | WDC-Comp. Medium | WDC-Comp. Large | Average |
| --- | --- | --- | --- | --- | --- | --- |
| Electra-small | 33.15 ± 8.0 | 24.61 ± 5.9 | 35.76 ± 5.4 | 49.73 ± 3.8 | 53.60 ± 1.1 | 39.37 ± 4.8 |
| + DA | 69.57 ± 2.2 | 55.50 ± 2.5 | 72.92 ± 2.5 | 76.56 ± 0.6 | 82.34 ± 0.9 | 71.38 ± 1.7 |
| + LEA | **74.04 ± 1.7** | **68.01 ± 1.3** | **75.71 ± 0.9** | **79.94 ± 1.5** | **86.27 ± 1.3** | **76.79 ± 1.3** |
| BERT-Med. | 43.69 ± 5.4 | 17.33 ± 2.5 | 54.12 ± 4.3 | 58.35 ± 2.6 | 53.25 ± 3.4 | 45.35 ± 3.6 |
| + DA | 67.08 ± 2.9 | 53.41 ± 1.7 | 69.17 ± 1.5 | 76.25 ± 1.1 | 84.89 ± 0.9 | 70.16 ± 1.6 |
| + LEA | **73.19 ± 2.0** | **69.30 ± 1.2** | **72.64 ± 0.9** | **79.51 ± 1.3** | **86.66 ± 0.8** | **76.26 ± 1.2** |
| BERT-Base | 57.01 ± 3.0 | 22.42 ± 2.7 | 51.29 ± 6.6 | 58.49 ± 5.1 | 55.32 ± 2.6 | 48.91 ± 4.0 |
| + DA | 70.60 ± 3.2 | 53.79 ± 1.1 | 71.57 ± 0.8 | 77.55 ± 2.7 | 84.69 ± 0.9 | 71.64 ± 1.8 |
| + LEA | **75.97 ± 1.4** | **70.41 ± 0.9** | **76.85 ± 1.1** | **82.10 ± 1.7** | **88.07 ± 1.0** | **78.68 ± 1.2** |

Table 3. Results obtained on five e-commerce datasets for product matching with typos. We report the mean and standard deviation of the F1-score over three training seeds and three versions of the test splits with synthetic noise (nine experiments each).
| | RTE (typo) | RTE (clean) | MRPC (typo) | MRPC (clean) |
| --- | --- | --- | --- | --- |
| BERT-Med. | 19.13 ± 3.5 | **70.19 ± 2.0** | 34.52 ± 15.5 | **86.58 ± 0.1** |
| + DA | 59.31 ± 1.9 | 65.96 ± 3.3 | **82.87 ± 0.8** | 86.21 ± 0.3 |
| + LEA | **65.18 ± 2.4** | 67.13 ± 2.0 | **82.16 ± 1.6** | 84.32 ± 1.5 |

Table 5. Results obtained on two additional sentence similarity domains, _i.e._ textual entailment (RTE) and paraphrasing (MRPC). We report the mean and standard deviation of the F1-score for the noisy (nine runs) and the original (three runs) versions of the test splits, respectively.
applying typos to relevant words, thus hiding the potential benefit of using LEA in real settings. Despite these limitations, we show that even in this situation LEA performs competitively and can even improve performance.
### Impact of the lexical similarity choice
The lexical embeddings of LEA are computed with a similarity function between two strings. In Table 6 (**Lexical similarity metric**), we analyze the impact of the choice of this similarity metric on the Abt-Buy dataset using BERT-Medium. We try LEA with the following string similarity metrics: Jaccard (Jac.), Smith-Waterman (Smith), Longest Common Subsequence (LCS), Levenshtein (Lev.) and Jaro-Winkler (Jaro) (Jaro, 2018). All the metrics improve the performance when evaluating with typos, thus supporting the positive contribution of LEA regardless of the lexical similarity metric adopted. In clean scenarios, the Smith-Waterman similarity does not outperform the regular cross-encoder (top row), while the remaining metrics do surpass it. Smith-Waterman is the metric most penalized by typos appearing in the middle of words and by lexical variations, as it relies on aligning common substrings.
We adopt the Jaccard similarity for LEA given that it consistently performs best in both the clean and the noisy scenarios for short sentences. The Jaccard coefficient applied to characters is order-agnostic and therefore more robust to character swaps. Our intuition is that Jaccard provides higher separability between word pairs with and without typos, which is beneficial in short texts. However, as the word context increases in long-sentence domains, the probability of comparing words with different meanings that share characters increases, thus reducing the advantage of swap invariance. We refer the reader to Appendix A for further details on the design choices for the relative attention bias used in LEA.
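The swap invariance is easy to verify with the character-level Jaccard coefficient from the sketch in Section 3.2, which treats a word and any reordering of its characters as identical, unlike an edit distance:

```python
def char_jaccard(w1, w2):
    s1, s2 = set(w1), set(w2)
    return len(s1 & s2) / len(s1 | s2)

print(char_jaccard("screen", "srceen"))  # 1.0: a character swap is invisible
print(char_jaccard("screen", "scren"))   # 1.0: deleting a repeated character too
print(char_jaccard("screen", "blue"))    # 0.125: unrelated words stay dissimilar
```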
The evidence presented provides a positive answer to \(\mathbf{RQ_{2}}\): _it is important to choose the right metric for better performance, although all of them help prevent performance drops against typos with respect to the vanilla cross-encoder._
### LEA on different layers and sharing strategy
Two important decisions to make when integrating LEA in a Transformer architecture are:
* Do we use the same lexical projection matrix for the entire model, one per layer, or one independent matrix per head?
* Do we apply LEA in all layers across the architecture, or is it more beneficial to apply it only in certain layers?
In Table 6 we present results to answer these questions. For the first decision (\(\mathbf{W^{L}}\) **parameter sharing**) we show that using an independent projection matrix per head behaves best, and we observe an increasing performance tendency towards sharing fewer parameters, _i.e._ sharing across all layers is the worst choice. We argue that this behaviour is reasonable given that independent \(\mathbf{W^{L}}\) matrices provide higher flexibility to learn the projection, as the term added to the standard self-attention in Eq. 5 might need to behave differently in different heads. We therefore use this non-shared alternative as the default LEA configuration.
Regarding the second decision (**Layers with LEA**), we evaluate adding LEA to different layer subsets in BERT-Medium (8 layers in total): all layers ([0-8]), all but the first two layers ([2-8]), the second half of the layers ([4-8]) and the last two layers ([6-8]). We observe that all the choices help in dealing with typos, with the best performance achieved by adding LEA to the second half of the layers. Similar behaviour is observed in clean scenarios, although only adding LEA to the last half or the last two layers outperforms the vanilla cross-encoder. Therefore, we use LEA in the deeper half of the layers in all architectures and experiments.
We argue that the character-level similarity provided by LEA can be considered high-level interaction information, which complements the high-level features of deep Transformer layers. We leave the validation of this hypothesis for future work.
The results obtained in this set of experiments address \(\mathbf{RQ_{3}}\): _it is better to use dedicated lexical projection matrices for each attention head and to add LEA on late layers for better performance._
### Impact of the noise strength
We analyze in Figure 3 (top) the robustness of LEA to different noise strengths at test time. These results demonstrate a higher robustness to typos than the vanilla cross-encoder baselines trained with and without data augmentation. For this experiment, models trained with simulated typos use a 20% probability of introducing them in a word, while at test time this probability is varied to change the noise strength. Intuitively, since the character-level similarities
| **Model** | **Typo** | **Clean** |
| --- | --- | --- |
| BERT-Med. (8 layers) | 43.69 | 79.21 |
| BERT-Med. + DA | 67.08 | 77.86 |
| **Lexical similarity metric {·}** | | |
| + LEA {Smith} | 68.29 | 78.95 |
| + LEA {LCS} | 71.82 | 80.48 |
| + LEA {Lev.} | 71.52 | 80.36 |
| + LEA {Jaro} | 72.96 | 81.23 |
| **+ LEA {Jaccard}** | **73.19** | **82.31** |
| **\(\mathbf{W^{L}}\) parameter sharing (·)** | | |
| + LEA (model) | 70.79 | 80.13 |
| + LEA (layer) | 71.56 | 81.71 |
| **+ LEA (head)** | **73.19** | **82.31** |
| **Layers with LEA [·]** | | |
| + LEA [0-8] | 68.49 | 75.87 |
| + LEA [2-8] | 69.75 | 78.14 |
| **+ LEA [4-8]** | **73.19** | **82.31** |
| + LEA [6-8] | 72.81 | 80.75 |

Table 6. Influence of the different design choices in LEA, _i.e._ the lexical similarity metric {Smith-Waterman, Longest Common Subsequence, Levenshtein, Jaro-Winkler and Jaccard}, the sharing strategy of LEA's projection matrix (shared per model, layer or head) and the layers where LEA is applied within an 8-layer BERT architecture.
exploited by LEA are not learned during training, they provide the model with information that is, to some extent, less dependent on the amount of noise. Furthermore, Figure 3 (bottom) shows an increasing gap between the performance of LEA with respect to the vanilla cross-encoder trained with data augmentation, suggesting a better generalization of LEA to different noise strengths.
Based on these results we can answer **RQ4**: _LEA maintains robust performance across noise strengths, whereas the performance drop of a vanilla cross-encoder trained without data augmentation is dramatic._
### Additional experiments
#### 4.6.1. Comparison with larger models
In order to assess the effectiveness of LEA in a larger model, we perform experiments using BERT-Large. Additionally, we adopt auto-regressive architectures (GPT-2 and GPT-Neo) to compare them with the auto-encoder models used throughout this work. In Table 7, we show that, despite following the same training procedure, the gap between the vanilla cross-encoder and LEA using BERT-Large increases to 28 absolute points. In Table 8 we show the effectiveness of LEA on the clean versions of the datasets.
For the GPT-like models, we followed the same approach as in (Zhu et al., 2017) and fine-tuned the backbones as cross-encoders using the last token embedding for sentence representation pooling (also suggested in (Beng et al., 2016; Chen et al., 2017)). We used the same hyper-parameters as in the rest of our experiments, _i.e._ number of epochs, size of the classification head, etc., and the publicly available pre-trained weights in HuggingFace (Beng et al., 2016; Chen et al., 2017). As we observe in Table 9, embeddings obtained by fine-tuning GPT-like architectures on our downstream tasks still suffer from the textual noise issue, with average drops in performance of 23 and 7 absolute points for GPT2-330M trained without and with DA, respectively. GPTNeo-125M also shows average drops of 21 and 4 absolute points when trained without and with DA, respectively. Despite these models being pre-trained on massive data and having more parameters, BERT-Base equipped with LEA outperforms the GPT-like architectures in the presence of textual noise. Note that we leave the addition of LEA to GPT-like architectures for future work.
These results suggest that larger models might reduce the gap to some extent (as depicted in Table 3), but they still suffer strongly from textual noise (as shown when comparing the results of Table 7 and Table 8). Overall, our approach mitigates the impact of noise while keeping comparable performance on clean text.
#### 4.6.2. Comparison with larger datasets
We have conducted experiments considering WDC-Computers XLarge (68,461 data points in total for training) and WDC-All (214,661 samples for training) obtaining the results in Table 10.
In all the experiments, we show that LEA consistently improves over the baselines by a significant margin, confirming the effectiveness of our proposal on larger datasets. It is worth mentioning that the average result of "BERT-M + DA" over the 3 test splits slightly improves on LEA, although with a high standard deviation. Nevertheless, LEA clearly outperforms the baselines in the remaining scenarios.
## 5. Conclusions
This work proposes LEA, a LExical-aware relative Attention module designed to improve the performance of cross-encoder architectures in sentence similarity tasks. LEA is particularly intended for scenarios with textual noise (_e.g._ typos) and short texts, where we show that vanilla Transformers drop performance due to tokenization shifts between noisy and clean data. In particular, we propose to modify the self-attention module by introducing a lexical bias.
| | Abt-Buy (typo) | Amaz.–Goog. (typo) | WDC-Comp. Med. (typo) | WDC-Comp. Large (typo) |
| --- | --- | --- | --- | --- |
| BERT-L | 48.10 ± 2.5 | 19.27 ± 1.8 | 48.97 ± 2.3 | 44.29 ± 0.9 |
| BERT-L + DA | 72.09 ± 3.0 | 48.95 ± 1.7 | 76.79 ± 0.3 | 79.98 ± 0.7 |
| BERT-L + LEA | **76.17 ± 2.5** | **69.06 ± 0.7** | **82.00 ± 0.9** | **86.85 ± 0.9** |

Table 7. Results for BERT-Large on the test sets with typos.
Figure 3. Performance comparison of the cross-encoder approaches under the influence of different noise strengths (top) and the relative improvement of LEA over the cross-encoder trained with data augmentation (bottom). We use BERT-Medium in Abt-Buy for this experiment.
This lexical information tackles the tokenization shift by providing a raw character-level similarity that tends to be high for lexically close words, with and without typos. This similarity is independent of the tokenization and does not assume any prior knowledge on the type of noise present in the data.
The results of LEA on five e-commerce datasets using several backbones of varying size demonstrate consistent improvements when dealing with typos over cross-encoder baselines. We further verify the robustness of LEA against typos in textual entailment and paraphrasing tasks and observe competitive performance despite not being strictly designed for these scenarios. Moreover, we provide insights to better understand the behaviour of LEA and explore the impact of: 1) different string similarity metrics, 2) introducing the lexical bias at varying subsets of layers and 3) sharing parameters when encoding the lexical similarities. Finally, we investigate the generalization to different noise strengths, demonstrating that LEA performs and generalizes better than the vanilla cross-encoder baselines.
### Limitations and future work
Despite making no assumption about the type of noise, LEA assumes that lexical similarity between two sentences is a relevant bias for similarity matching. It is worth mentioning that in scenarios without typos there is a slight drop in performance (lower than 2 absolute points on average, as reported in Table 4) when adding this bias. However, in the presence of typos LEA largely outperforms a vanilla cross-encoder (by more than 30 absolute points on average), thus demonstrating that the proposed lexical bias helps in these scenarios. LEA is designed for Transformer configurations where two or more sentences are used as part of the input to the model. While limited to this specific context, this encompasses a wide-ranging topic within the sentence similarity literature, and LEA can be repurposed across different but related domains.
Future work will focus on improving the use of lexical information in longer texts and on making better use of this bias in clean scenarios. Another interesting research direction is the extension of LEA to bi-encoders with late-interaction techniques.
|
2303.16248 | Polar decomposition in algebraic K-theory | We show that the Hausdorffized algebraic K-theory of a C*-algebra decomposes
naturally as a direct sum of the Hausdorffized unitary algebraic K-theory and
the space of continuous affine functions on the trace simplex. Under mild
regularity hypotheses, an analogous natural direct sum decomposition holds for
the ordinary (non-Hausdorffized) algebraic K-theory. | Pawel Sarkowicz, Aaron Tikuisis | 2023-03-28T18:53:56Z | http://arxiv.org/abs/2303.16248v2 | # Polar decomposition in algebraic \(K\)-theory
###### Abstract.
We show that the Hausdorffized algebraic \(K\)-theory of a C*-algebra decomposes naturally as a direct sum of the Hausdorffized unitary algebraic \(K\)-theory and the space of continuous affine functions on the trace simplex. Under mild regularity hypotheses, an analogous natural direct sum decomposition holds for the ordinary (non-Hausdorffized) algebraic \(K\)-theory.
###### Contents
* 1 Introduction
* 2 Preliminaries and notation
* 3 Polar decomposition
* 4 Nonstable algebraic \(K\)-theory
## 1. Introduction
\(K\)-theory is at the heart of understanding the structure of C*-algebras and their morphisms. Indeed, the recently-established classification of simple separable nuclear C*-algebras shows that an invariant consisting of \(K\)-theory paired with traces fully determines C*-algebras up to isomorphism, among a large class of "classifiable" C*-algebras. Moreover, a richer \(K\)-theoretic invariant, augmenting the aforementioned invariant by \(K\)-theory with coefficients and Hausdorff unitary \(K\)-theory, classifies the morphisms between these classifiable C*-algebras, up to approximate unitary equivalence ([CGS\({}^{+}\)]).
The components of the richer \(K\)-theoretic invariant can be directly related, through split exact sequences, to the smaller invariant of \(K\)-theory and traces. \(K\)-theory with \(\mathbb{Z}/n\) coefficients relates to ordinary \(K\)-theory through the split exact sequence
\[0\to K_{i}(A)\otimes\mathbb{Z}/n\to K_{i}(A;\mathbb{Z}/n)\to\operatorname{Tor }(K_{1-i}(A),\mathbb{Z}/n)\to 0, \tag{1.1}\]
where the connecting maps are induced by the Bockstein maps (see [Sch84, Propositions 1.8 and 2.4]). While the maps in this exact sequence are natural,
the splitting is not and cannot be - as can be seen by the existence of homomorphisms which agree on \(K\)-theory but not on \(K\)-theory with coefficients.1
Footnote 1: An example is the tensor flip map \(\sigma\) on \(A:=\mathcal{O}_{3}\otimes\mathcal{O}_{3}\). Using the Künneth formula, one can compute that \(K_{0}(A)=K_{1}(A)=\mathbb{Z}/2\), and since the flip map is an order 2 automorphism, it must induce the identity on \(K\)-theory. Suppose, for a contradiction, that \(\sigma\) induces the identity on \(K\)-theory with coefficients. One can see that \(\operatorname{Pext}(K_{0}(A),K_{0}(A))=0\) and so, since the algebra \(A\) satisfies the UCT, it would follow that \([\sigma]=[\operatorname{id}_{A}]\) in \(KK(A,A)\) by [6] (see Proposition 2.4 in particular). The Kirchberg–Phillips classification theorem (Theorem 8.3.3 (iii) of [10], for example) would then imply that \(\sigma\) is approximately inner. Hence, \(\mathcal{O}_{3}\) would have approximately inner flip, but it doesn't by [14, EST].
Similarly, Hausdorff unitary \(K\)-theory, denoted \(\overline{K}_{1}^{\operatorname{alg},u}(A)\), relates to ordinary \(K\)-theory and traces, through the split exact sequence
\[0\to\operatorname{Aff}T(A)/\overline{\rho(K_{0}(A))}\to\overline{K}_{1}^{ \operatorname{alg},u}(A)\to K_{1}(A)\to 0, \tag{1.2}\]
with the first map due to Thomsen (and intimately related to the de la Harpe-Skandalis determinant, as discussed below), and the second map arising as a natural quotient (see [11, Theorem 3.2 and Corollary 3.3]). Here, \(\rho\) is the pairing map, so that \(\rho(K_{0}(A))\) is the subgroup of \(\operatorname{Aff}T(A)\) generated by the images of projections. Again, these connecting maps are natural, though the splitting is not (see [13, Section 5] for example).
A natural question to ask is: why has only _unitary_ (Hausdorffized) algebraic \(K\)-theory been used in the classification theorem and (for the most part) related literature, as opposed to algebraic \(K\)-theories defined using all invertibles? A superficial answer may be that, since C*-algebraists are used to working only with unitaries when defining \(K_{1}\), it is natural to work with them in algebraic \(K\)-theory. But this answer misses the point, and doesn't tell us whether the classification of morphisms theorem would look any different if all invertibles were used instead of unitaries, in the Hausdorff algebraic \(K\)-theory component of the invariant.
In this article, we investigate the relation between (Hausdorffized and ordinary) algebraic \(K\)-theory defined using unitaries and using general invertible elements. For topological \(K\)-theory, the relationship is entirely straightforward: polar decomposition expresses a general invertible as a unitary multiplied by a positive invertible element. By effectively forgetting the positive invertible part, the unitary group is a strong deformation retract of the invertible group, and therefore the topological \(K\)-theory is the same whether one uses unitaries or invertibles. To study algebraic \(K\)-theory, polar decomposition is once again the crucial tool. However, since positive invertibles are non-trivial in (even Hausdorff) algebraic \(K\)-theory, they need to be carefully accounted for.
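Concretely, for an invertible element \(a\) of a unital C*-algebra \(A\), polar decomposition reads
\[a=u|a|,\qquad|a|:=(a^{*}a)^{1/2},\qquad u:=a(a^{*}a)^{-1/2}\in U(A),\]
so that \(a\mapsto u\) implements the retraction of \(GL(A)\) onto \(U(A)\) alluded to above, while the positive invertible factor \(|a|\) is what must be tracked in algebraic \(K\)-theory.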
Using the polar decomposition, we obtain a decomposition of Hausdorff algebraic \(K\)-theory, as our first main result:
**Theorem A**.: _Let \(A\) be any unital C*-algebra. Then there is a natural isomorphism of topological groups_
\[\overline{K}^{\rm alg}_{1}(A)\simeq\overline{K}^{\rm alg,u}_{1}(A)\oplus{\rm Aff }\,T(A). \tag{1.3}\]
Said differently, the Hausdorff algebraic \(K_{1}\)-class of a positive invertible is determined entirely by tracial data, and the interaction between unitaries and positive invertibles is, at the level of Hausdorff algebraic \(K_{1}\), entirely trivial. The de la Harpe-Skandalis determinant \(\Delta\) (introduced in [11]), mapping from the connected component of the unitary group to \({\rm Aff}\,T(A)/\rho(K_{0}(A))\), provides the inverse of the first map in (1.2) and plays a key role in our analysis. This works because the kernel of this determinant (when viewed as a map into \({\rm Aff}\,T(A)/\overline{\rho(K_{0}(A))}\)) consists _exactly_ of the closure of the commutator subgroup.
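For orientation, we recall the shape of the construction from [11]: given a piecewise-smooth path \(\xi\colon[0,1]\to GL^{0}(A)\) with \(\xi(0)=1\) and a trace \(\tau\), one sets
\[\tilde{\Delta}(\xi)(\tau)=\frac{1}{2\pi i}\int_{0}^{1}\tau\big(\xi^{\prime}(t)\xi(t)^{-1}\big)\,dt,\]
and the determinant of the endpoint \(\xi(1)\) is the class of \(\tilde{\Delta}(\xi)\) modulo \(\rho(K_{0}(A))\) (or its closure, in the Hausdorffized setting), the quotient absorbing the dependence on the choice of path.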
In the non-Hausdorffized setting, the kernel of the de la Harpe-Skandalis determinant (now going into \({\rm Aff}\,T(A)/\rho(K_{0}(A))\)) could be strictly larger than the commutator subgroup (this time, without the closure). Understanding when these two coincide is a well-studied problem (indeed, two problems, corresponding to the two components identified above for \(\overline{K}^{\rm alg}(A)\)), and it has been shown that they coincide under appropriate regularity hypotheses, and in particular for all classifiable C*-algebras. We state our results for the non-Hausdorffized setting in a general form, without such regularity hypotheses; to do so, we need to work with the quotient of the invertibles (respectively unitaries) by the kernel of the determinant map \(\Delta\).
**Theorem B**.: _Let \(A\) be any unital C*-algebra. Then there is a natural isomorphism_
\[GL_{\infty}(A)/{\rm ker}\Delta\simeq U_{\infty}(A)/{\rm ker}\Delta|_{U^{0}_{ \infty}(A)}\oplus{\rm Aff}\,T(A). \tag{1.4}\]
**Corollary C**.: _Let \(A\) be a unital, separable, simple C*-algebra which has stable rank one, is pure in the sense of [15, Definition 3.6], and such that every 2-quasitracial state on \(A\) is a trace (in particular, \(A\) may be any unital, separable, simple, finite, exact, \(\mathcal{Z}\)-stable C*-algebra). Then there is a natural isomorphism_
\[K^{\rm alg}_{1}(A)\simeq K^{\rm alg,u}_{1}(A)\oplus{\rm Aff}\,T(A). \tag{1.5}\]
The paper is structured as follows. First we discuss preliminaries and notation in Section 2. We introduce each of the variants of the de la Harpe-Skandalis determinant and discuss some relationships between their kernels. In Section 3, we prove the main results (Theorems A and B). In Section 4, we look at non-stable analogues of the results of Section 3, under the hypothesis of certain \(K\)-theoretic regularity conditions.
## Acknowledgements
Thanks to George Elliott, Thierry Giordano, Chris Shafhauser, and Stuart White for helpful discussions. Thanks also to the other authors of [CGS\({}^{+}\)] for agreeing to share the draft of that article with us.
## 2. Preliminaries and notation
### Notation
For a group \(G\), we denote by \(DG\) the derived subgroup of \(G\), i.e.,
\[DG:=\langle ghg^{-1}h^{-1}\mid g,h\in G\rangle. \tag{2.1}\]
If \(G\) has an underlying topology, we denote by \(CG\) the closure of \(DG\) and \(G^{0}\) the connected component of the identity.
For a unital C*-algebra \(A\), \(GL(A)\) denotes the general linear group of \(A\), while \(GL^{0}(A)\) denotes the connected component of \(GL(A)\). For \(n\in\mathbb{N}\), we write \(GL_{n}(A):=GL(M_{n}(A))\), \(GL_{n}^{0}(A):=GL^{0}(M_{n}(A))\), and we set
\[GL_{\infty}(A):=\lim_{\rightarrow}GL_{n}(A), \tag{2.2}\]
with connecting maps \(GL_{n}(A)\ni x\mapsto x\oplus 1\in GL_{n+1}(A)\). This makes \(GL_{\infty}(A)\) both a topological space (with the inductive limit topology) and a group.2 We have unitary analogues by replacing \(GL\) with \(U\), where \(U(A)\) denotes the group of unitaries of \(A\). Similarly, we define \(M_{\infty}(A)=\lim_{\rightarrow}M_{n}(A)\) (as an algebraic direct limit) with connecting maps \(x\mapsto x\oplus 0\). If \(E\) is a Banach space and \(\tau:A\to E\) is a linear map that is tracial (i.e., \(\tau(ab)=\tau(ba)\) for all \(a,b\in A\)), we extend this canonically to \(M_{\infty}(A)\) by setting \(\tau((a_{ij})):=\sum_{i}\tau(a_{ii})\) for \((a_{ij})\in M_{n}(A)\).
Footnote 2: It is not, however, a topological group. This is addressed in a footnote in [CGS\({}^{+}\)].
We write \(\pi_{1}(X)\) for the fundamental group of a topological space \(X\) with distinguished base point. (In this paper, we will only use \(X\) equal to \(U_{n}^{0}(A)\) or \(GL_{n}^{0}(A)\), and in either case, we use the unit element as the base point.)
For a C*-algebra \(A\), we let \(K_{0}(A),K_{1}(A)\) be the topological \(K\)-groups of \(A\). The set of tracial states on \(A\) will be denoted \(T(A)\), which is a Choquet simplex ([Sak12, Theorem 3.1.18]). We denote by \(\operatorname{Aff}T(A)\) the set of continuous affine functions \(T(A)\rightarrow\mathbb{R}\), which is an interpolation group with order unit (see [Goo10, Chapter 11]). The space \(\operatorname{Aff}T(A)\) can also be viewed as a topological group under addition, equipped with the uniform norm topology. For unital \(A\), the pairing map \(\rho_{A}:K_{0}(A)\rightarrow\operatorname{Aff}T(A)\) is defined as follows: if \(x\in K_{0}(A)\), we can write \(x=[p]-[q]\) where \(p,q\in M_{n}(A)\) are projections, and then define
\[\rho_{A}(x)(\tau):=\operatorname{tr}_{n}\otimes\tau(p-q),\quad\tau\in T(A). \tag{2.3}\]
We will denote by \(K_{1}^{alg,u}(A),K_{1}^{\text{alg}}(A)\) the unitary algebraic \(K_{1}\)-group and the algebraic \(K_{1}\)-group respectively:
\[K_{1}^{alg,u}(A):=U_{\infty}(A)/DU_{\infty}(A)\text{ and }K_{1}^{\text{alg}}(A):= GL_{\infty}(A)/DGL_{\infty}(A). \tag{2.4}\]
For the Hausdorff variants we write
\[\overline{K}_{1}^{alg,u}(A):=U_{\infty}(A)/CU_{\infty}(A)\text{ and }\overline{K}_{1}^{\text{alg}}(A):= GL_{\infty}(A)/CGL_{\infty}(A), \tag{2.5}\]
where the closures \(CU_{\infty}(A)=\overline{DU_{\infty}(A)}\) and \(CGL_{\infty}(A)=\overline{DGL_{\infty}(A)}\) are taken with respect to the inductive limit topologies, that is, \(CU_{\infty}(A)=\underset{\rightarrow}{\lim}\,CU_{n}(A)\) and \(CGL_{\infty}(A)=\underset{\rightarrow}{\lim}\,CGL_{n}(A)\). We note that \(\overline{K}_{1}^{alg,u}(A)\) and \(\overline{K}_{1}^{\text{alg}}(A)\) are topological groups (despite the fact that \(U_{\infty}(A)\) and \(GL_{\infty}(A)\) are not in general); this is addressed in [10, Remark 2.11].
### The de la Harpe-Skandalis determinant and Thomsen's variant
We recall the definition of the de la Harpe-Skandalis determinant [11] (see [11] for a more in-depth exposition). Let \(\text{Tr}_{A}:A\to A/\overline{[A,A]}\) be the quotient map from \(A\) to the quotient Banach space \(A/\overline{[A,A]}\) where \(\overline{[A,A]}\) is the closed linear span of additive commutators. We call \(\text{Tr}_{A}\) the universal trace on \(A\) and will usually omit the subscript when the C*-algebra is clear from context. For \(n\in\mathbb{N}\cup\{\infty\}\) and a piecewise smooth path \(\xi:[0,1]\to GL_{n}(A)\), set
\[\tilde{\Delta}^{n}(\xi):=\frac{1}{2\pi i}\int_{0}^{1}\text{Tr}(\xi^{\prime}(t )\xi(t)^{-1})dt. \tag{2.6}\]
**Proposition 2.1** (Lemme 1 of [11]).: _The map \(\tilde{\Delta}^{n}\) which takes a piecewise smooth path to an element in \(A/\overline{[A,A]}\) has the following four properties:_
1. _it takes pointwise products to sums: if_ \(\xi_{1},\xi_{2}\) _are two piecewise smooth paths, then_ (2.7) \[\tilde{\Delta}^{n}(\xi_{1}\xi_{2})=\tilde{\Delta}^{n}(\xi_{1})+\tilde{\Delta}^ {n}(\xi_{2}),\] _where_ \(\xi_{1}\xi_{2}\) _is the piecewise smooth path_ \(t\mapsto\xi_{1}(t)\xi_{2}(t)\) _from_ \(\xi_{1}(0)\xi_{2}(0)\) _to_ \(\xi_{1}(1)\xi_{2}(1)\)_;_
2. _if_ \(\|\xi(t)-1\|<1\) _for all_ \(t\in[0,1]\)_, then_ (2.8) \[2\pi i\tilde{\Delta}^{n}(\xi)=\text{Tr}\big{(}\log\xi(1)-\log\xi(0)\big{)};\]
3. \(\tilde{\Delta}^{n}(\xi)\) _depends only on the homotopy class of_ \(\xi\)_;_
4. _if_ \(p\in M_{n}(A)\) _is an idempotent, then the path_ \(\xi_{p}:[0,1]\to GL_{n}^{0}(A)\) _given by_ \(\xi_{p}(t):=pe^{2\pi it}+(1-p)\) _satisfies_ \(\tilde{\Delta}^{n}(\xi_{p})=\text{Tr}(p)\)_._
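For concreteness, the computation behind property (4) is short. Since \(e^{2\pi it}\) is a central scalar and \(p^{2}=p\), we have \(\xi_{p}(t)^{-1}=pe^{-2\pi it}+(1-p)\) and

\[\xi_{p}^{\prime}(t)\xi_{p}(t)^{-1}=2\pi i\,pe^{2\pi it}\big(pe^{-2\pi it}+(1-p)\big)=2\pi i\,p,\]

so that \(\tilde{\Delta}^{n}(\xi_{p})=\frac{1}{2\pi i}\int_{0}^{1}\text{Tr}(2\pi i\,p)\,dt=\text{Tr}(p)\).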
The de la Harpe-Skandalis determinant (at the \(n^{\text{th}}\) level) is then the map
\[\Delta^{n}:GL_{n}^{0}(A)\rightarrow\left(A/\overline{[A,A]}\right)/\tilde{\Delta}^{n}(\pi_{1}(GL_{n}^{0}(A))) \tag{2.9}\]
given by \(\Delta^{n}(x):=[\tilde{\Delta}^{n}(\xi_{x})]\) where \(\xi_{x}\) is any piecewise smooth path \(\xi_{x}:[0,1]\to GL_{n}^{0}(A)\) from \(1\) to \(x\). This is a group homomorphism to an abelian group, and therefore factors through the derived group, i.e., \(DGL_{n}^{0}(A)\subseteq\ker\Delta^{n}\). For the case \(n=\infty\), we just write \(\tilde{\Delta}\) and \(\Delta\) for \(\tilde{\Delta}^{\infty}\) and \(\Delta^{\infty}\) respectively. If the C*-algebra needs to be specified, we will write \(\Delta_{A}^{n}\) or \(\Delta_{A}\).
**Remark 2.2**.: _Unless we make any regularity assumptions, the maps \(\Delta^{n}\) have different codomains as \(n\) varies, since the images \(\tilde{\Delta}^{n}(\pi_{1}(GL_{n}^{0}(A)))\) may vary. We do however always have_
\[\tilde{\Delta}^{n}(\pi_{1}(GL_{n}^{0}(A)))\subseteq\tilde{\Delta}^{n+1}(\pi_{ 1}(GL_{n+1}^{0}(A))) \tag{2.10}\]
_since \(\tilde{\Delta}^{n+1}(\xi\oplus 1)=\tilde{\Delta}^{n}(\xi)\) whenever \(\xi\) is a piecewise smooth loop in \(GL_{n}^{0}(A)\). However, when the canonical map \(\pi_{1}(GL^{0}(A))\to K_{0}(A)\) is surjective, we have that \(\Delta^{n}=\Delta|_{GL_{n}^{0}(A)}\)._
We note that \(\pi_{1}(GL_{\infty}^{0}(A))\simeq K_{0}(A)\) canonically via the map induced by \([\xi_{p}]\mapsto[p]\), where \(\xi_{p}\) is the path in property (4) above, and consequently \(\Delta\) can be thought of as a map
\[GL_{\infty}^{0}(A)\to\left(A/\overline{[A,A]}\right)/\mathrm{Tr}(K_{0}(A)). \tag{2.11}\]
Let \(A_{0}\) consist of elements \(a\in A_{sa}\) satisfying \(\tau(a)=0\) for all \(\tau\in T(A)\). This is a norm-closed real subspace of \(A_{sa}\) such that \(A_{0}\subseteq\overline{[A,A]}\), and there is an isometric identification \(A_{sa}/A_{0}\simeq\mathrm{Aff}\,T(A)\) sending an element \([a]\) to \(\widehat{a}\), where \(\widehat{a}(\tau):=\tau(a)\). Indeed, it is not hard to see that the map \(A_{sa}/A_{0}\to\mathrm{Aff}\,T(A)\) given by \([a]\mapsto\hat{a}\) is a well-defined \(\mathbb{R}\)-linear map. Moreover, [13, Theorem 2.9], together with a convexity argument, gives that this is an isometric identification. To see that we obtain all the real-valued affine functions, we note that the image of this map contains the constant functions and separates points, so [14, Corollary 7.4] gives that the image is dense, whence equal to all of \(\mathrm{Aff}\,T(A)\) (since this is an isometry). We freely identify \(A_{sa}/A_{0}\) with \(\mathrm{Aff}\,T(A)\).
**Lemma 2.3**.: _Let \(A\) be a unital C*-algebra. The canonical map \(\Theta:\mathrm{Aff}\,T(A)\to A/\overline{[A,A]}\) given by \(\Theta(\hat{a}):=[a]\) is an \(\mathbb{R}\)-linear isometry._
Proof.: Identifying \(\mathrm{Aff}\,T(A)\) with \(A_{sa}/A_{0}\), \(\Theta\) is the map \([a]_{A_{sa}/A_{0}}\mapsto[a]_{A/\overline{[A,A]}}\). Clearly this is \(\mathbb{R}\)-linear if it is well-defined, and it is well-defined since \(A_{0}\subseteq\overline{[A,A]}\). To show that it's isometric, we have the following chain of inequalities.
For \(a\in A_{sa}\),
\[\sup_{\tau\in T(A)}|\tau(a)| \leq\sup_{\tau\text{ s.a, tracial, }\|\tau\|=1}|\tau(a)|\] \[\leq\sup_{\tau\text{ tracial, }\|\tau\|=1}|\tau(a)|\] \[=\inf_{x\in[A,A]}\|a+x\|\] \[\leq\inf_{x\in A_{0}}\|a+x\|\] \[=\sup_{\tau\in T(A)}|\tau(a)|, \tag{2.12}\]
where the inequalities are obvious, the equality on the third line comes from a standard Hahn-Banach argument, and the final equality comes from our isometric identification \(A_{sa}/A_{0}\simeq\operatorname{Aff}T(A)\). Thus all quantities in the above chain of inequalities are equal.
Consequently, we can think of the map \(\operatorname{Tr}|_{A_{sa}}\) as the map from \(A_{sa}\to\operatorname{Aff}T(A)\) (\(\simeq A_{sa}/A_{0}\)) given by \(a\mapsto\hat{a}\).
**Lemma 2.4**.: _Let \(A\) be a unital C*-algebra. For \(n\in\mathbb{N}\cup\{\infty\}\),_
\[\tilde{\Delta}^{n}(\pi_{1}(GL_{n}^{0}(A)))=\tilde{\Delta}^{n}(\pi_{1}(U_{n}^{0 }(A)))\subseteq\Theta(\operatorname{Aff}T(A)). \tag{2.13}\]
Proof.: As \(U_{n}(A)\) is a retract of \(GL_{n}(A)\), the first equality is clear by Proposition 2.1(3). Now suppose we have a piecewise smooth loop \(\xi:[0,1]\to U_{n}^{0}(A)\). By Proposition 1.4 of [5], \(\xi^{\prime}(t)\xi(t)^{-1}\) is skew-adjoint so that \(\frac{1}{2\pi i}\xi^{\prime}(t)\xi(t)^{-1}\) is self-adjoint. Therefore \(\operatorname{Tr}(\frac{1}{2\pi i}\xi^{\prime}(t)\xi(t)^{-1})\in\Theta( \operatorname{Aff}T(A))\) and, since \(\Theta(\operatorname{Aff}T(A))\) is a closed real subspace,
\[\tilde{\Delta}^{n}(\xi)=\int_{0}^{1}\operatorname{Tr}\left(\frac{1}{2\pi i} \xi^{\prime}(t)\xi(t)^{-1}\right)dt\in\Theta(\operatorname{Aff}T(A)) \tag{2.14}\]
as well.
**Corollary 2.5**.: _Let \(A\) be a unital C*-algebra. For \(n\in\mathbb{N}\cup\{\infty\}\) and \(u\in U_{n}^{0}(A)\),_
\[\Delta^{n}(u)\in\Theta(\operatorname{Aff}T(A))/\tilde{\Delta}^{n}(\pi_{1}(U_{n }^{0}(A))). \tag{2.15}\]
For \([x]\in A/\overline{[A,A]}\), we'll write
\[\operatorname{Re}([x]):=[\operatorname{Re}(x)]=\Theta\left(\frac{\widehat{x+ x^{*}}}{2}\right)\in\Theta(\operatorname{Aff}T(A)), \tag{2.16}\]
which is well-defined since \(\overline{[A,A]}\) is closed under adjoints. Note that \(\operatorname{Re}(i[x])=0\) if \([x]\in\Theta(A_{sa}/A_{0})\), and therefore \(\operatorname{Re}(i\Delta^{n}(\cdot)):GL_{n}^{0}(A)\to A_{sa}/A_{0}\) is well-defined. With this, we have the following fact:
\[\operatorname{Re}(2\pi i\Delta^{n}(x))=2\pi i\Delta^{n}(|x|)=[\log|x|]. \tag{2.17}\]
To see this, let \(\xi_{0}:[0,1]\to U_{n}^{0}(A)\) be any path from \(1\) to \(u_{x}\), the unitary part in the polar decomposition of \(x\), and let \(\xi_{1}:[0,1]\to GL_{n}^{0}(A)\) be the path from \(1\) to \(|x|\) given by \(t\mapsto e^{t\log|x|}\). Then \(\Delta^{n}(x)\) is the class of \(\tilde{\Delta}^{n}(\xi_{0})+\tilde{\Delta}^{n}(\xi_{1})\) mod \(\tilde{\Delta}^{n}(\pi_{1}(GL_{n}^{0}(A)))\) (which is contained in \(\Theta(A_{sa}/A_{0})\)). As \(\tilde{\Delta}^{n}(\xi_{0})\in\Theta(\operatorname{Aff}T(A))\), \(\operatorname{Re}(2\pi i\tilde{\Delta}^{n}(\xi_{0}))=0\), leaving \(\operatorname{Re}(2\pi i\tilde{\Delta}^{n}(\xi_{1}))=2\pi i\tilde{\Delta}^{n}(\xi_{1})\). Moreover, \(2\pi i\tilde{\Delta}^{n}(\xi_{1})\) is clearly equal to \(\Theta(\widehat{\log|x|})\) by (2.6).
### Thomsen's variant
Thomsen's variant of the de la Harpe-Skandalis determinant is the Hausdorff version, taking into account the closure of the image of the homotopy groups. We consider the map
\[\bar{\Delta}^{n}:GL_{n}^{0}(A)\to\left(A/\overline{[A,A]}\right)/\overline{ \tilde{\Delta}^{n}(\pi_{1}(GL_{n}^{0}(A)))}, \tag{2.18}\]
given by \(\bar{\Delta}^{n}(x):=[\tilde{\Delta}^{n}(\xi_{x})]\) where \(\xi_{x}:[0,1]\to GL_{n}^{0}(A)\) is any piecewise smooth path from \(1\) to \(x\in GL_{n}^{0}(A)\). This is almost the same map as \(\Delta^{n}\), except the codomain is now the quotient by the closure of the image of the fundamental group under the pre-determinant (i.e., the Hausdorffization of the codomain). Unlike with \(\Delta^{n}\), the kernel of \(\bar{\Delta}^{n}\) can be identified, without any regularity assumptions on the C*-algebra.
**Lemma 2.6** (Lemma 3.1, [16]).: _Let \(A\) be a unital C*-algebra._
1. \(\ker\bar{\Delta}^{n}|_{U_{n}^{0}(A)}=CU_{n}^{0}(A)\)_;_
2. \(\ker\bar{\Delta}^{n}=CGL_{n}^{0}(A)\)_._
We note that Lemma 3.1 of [16] only proves (1) above. However, working with exponentials \(e^{a}\) with \(a\in A\) instead of \(e^{ia}\) for \(a\in A_{sa}\) yields (2).
Two things follow for free here: the first is that \(CGL_{n}^{0}(A)\cap U_{n}^{0}(A)=CU_{n}^{0}(A)\) (the inclusion \(\supseteq\) is automatic, while \(\subseteq\) follows from (1)). The second is that the canonical map \(U_{n}^{0}(A)/CU_{n}^{0}(A)\to GL_{n}^{0}(A)/CGL_{n}^{0}(A)\) is an injection for \(n\in\mathbb{N}\cup\{\infty\}\). Thomsen also gave the following unnatural direct sum decomposition of \(\overline{K}_{1}^{alg,u}(A)\) in terms of \(K\)-theory and traces.
**Theorem 2.7** (Corollary 3.3, [16]).: _Let \(A\) be a unital C*-algebra. There is an exact sequence_
\[0\to\operatorname{Aff}T(A)/\overline{\rho_{A}(K_{0}(A))}\to\overline{K}_{1}^{ alg,u}(A)\to K_{1}(A)\to 0, \tag{2.19}\]
_which splits unnaturally._
Indeed, the splitting above is necessarily unnatural, as can be seen in [14, Section 5], which gives examples of morphisms that agree on \(K\)-theory and traces but disagree on \(U(A)/CU(A)\).3
## 3. Polar decomposition
We produce direct sum decompositions of the algebraic \(K_{1}\)-group of a C*-algebra in terms of the unitary algebraic \(K_{1}\)-group and traces. We provide Hausdorff versions as well. We motivate this with the example of the complex numbers.
**Example 3.1**.: _Let \(A=\mathbb{C}\). Then we have isomorphisms_
\[K_{1}^{\text{alg}}(A) \simeq\mathbb{C}^{\times},\quad\text{and}\] \[K_{1}^{\text{alg},u}(A) \simeq\mathbb{T}, \tag{3.1}\]
_via the (usual) determinant map, and_
\[\operatorname{Aff}T(A)\simeq\mathbb{R}, \tag{3.2}\]
_since \(A\) has a unique trace. Hence we see that \(K_{1}^{\text{alg}}(A)\simeq\mathbb{T}\oplus\operatorname{Aff}T(A)\). The projection \(K_{1}^{\text{alg}}(A)\to\operatorname{Aff}T(A)\) is given by the canonical map \(\log(|\cdot|):\mathbb{C}^{\times}\to\mathbb{R}\), so it is the map_
\[[x]\mapsto\log(|\det(x)|)=\log(\det(|x|))=\text{tr}(\log(|x|)). \tag{3.3}\]
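Concretely, for \(x=re^{i\theta}\in\mathbb{C}^{\times}\) with \(r>0\), the polar decomposition is \(u_{x}=e^{i\theta}\) and \(|x|=r\), so the decomposition reads

\[[x]\mapsto[e^{i\theta}]\oplus\log r\in\mathbb{T}\oplus\mathbb{R},\]

which is the scalar instance of the general formula \([x]\mapsto[u_{x}]\oplus\widehat{\log|x|}\) appearing in (3.25) below.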
### Non-Hausdorffized algebraic \(K\)-theory
We start by examining the structure of \(U_{n}^{0}(A)/\ker\Delta^{n}|_{U_{n}^{0}(A)}\) and \(GL_{n}^{0}(A)/\ker\Delta^{n}\). We'll then apply these results to C*-algebras satisfying
\[DU_{\infty}^{0}(A)=\ker\Delta|_{U_{\infty}^{0}(A)}\text{ and }DGL_{\infty}^{0}(A)=\ker\Delta. \tag{3.4}\]
First we show that \(\Delta^{n}\) is invariant under conjugation by elements of \(GL_{n}(A)\). The following is not obvious from the fact that \(\Delta^{n}\) is a homomorphism since \(\Delta^{n}(s)\) is not defined when \(s\not\in GL_{n}^{0}(A)\).
**Lemma 3.2**.: _Let \(A\) be a unital C*-algebra. For \(x\in GL_{n}^{0}(A),\Delta^{n}(s^{-1}xs)=\Delta^{n}(x)\) for any \(s\in GL_{n}(A)\)._
Proof.: If \(\xi:[0,1]\to GL_{n}^{0}(A)\) is a piecewise smooth path from \(1\) to \(x\), then \(\eta:=s^{-1}\xi(\cdot)s:[0,1]\to GL_{n}^{0}(A)\) is a piecewise smooth path from \(1\) to \(s^{-1}xs\) with \(\eta^{\prime}(t)=s^{-1}\xi^{\prime}(t)s\) (whenever we can differentiate). Consequently (using (2.6)),
\[\tilde{\Delta}^{n}(\eta)=\tilde{\Delta}^{n}(\xi), \tag{3.5}\]
and the result follows.
**Lemma 3.3**.: _Let \(A\) be a unital C*-algebra. For \(n\in\mathbb{N}\cup\{\infty\},x,y\in GL_{n}(A)\),_
\[\Delta^{n}(|xy|)=\Delta^{n}(|x|)+\Delta^{n}(|y|). \tag{3.6}\]
Proof.: Let \(xy=u|xy|\), \(x=u_{x}|x|\), and \(y=u_{y}|y|\) be polar decompositions. Then \(|u^{*}xy|=|xy|=u^{*}xy\), and in \(GL_{n}(A)/DGL_{n}(A)\), we have
\[[|xy|]=[u^{*}xy]=[u^{*}u_{x}|x|u_{y}|y|]=[u^{*}u_{x}u_{y}]+[|x||y|]. \tag{3.7}\]
Hence by the previous lemma and using the fact (2.17) that \(i\Delta^{n}(|z|)=\operatorname{Re}(i\Delta^{n}(z))\) for \(z\in GL_{n}^{0}(A)\),
\[\begin{split}i\Delta^{n}(|xy|)&=\operatorname{Re}(i\Delta^{n}(|xy|))\\ &=\operatorname{Re}\bigl{(}i\Delta^{n}(u^{*}u_{x}u_{y})+i\Delta^{n}(|x||y|)\bigr{)}\\ &=0+i\Delta^{n}(|x|)+i\Delta^{n}(|y|),\end{split} \tag{3.8}\]
as desired.
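For \(A=M_{k}(\mathbb{C})\) (cf. Example 3.1), Lemma 3.3 reduces, via (2.17), to the familiar determinant identity: although \(|xy|\neq|x||y|\) in general, one still has

\[\operatorname{tr}\log|xy|=\log\lvert\det(xy)\rvert=\log\lvert\det x\rvert+\log\lvert\det y\rvert=\operatorname{tr}\log|x|+\operatorname{tr}\log|y|.\]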
**Lemma 3.4**.: _Let \(A\) be a unital C*-algebra. The map_
\[\chi_{n}:GL_{n}(A)/\ker\Delta^{n}\to\operatorname{Aff}T(A) \tag{3.9}\]
_defined by \([x]\mapsto\widehat{\log|x|}\) is a well-defined surjective group homomorphism._
Proof.: With our identification \(A_{sa}/A_{0}=\operatorname{Aff}T(A)\), it is enough to show that \(GL_{n}(A)/\ker\Delta^{n}\to A_{sa}/A_{0}:x\mapsto[\log|x|]\) is well-defined. Let \(x,y\in GL_{n}(A)\) with \(x=yz\) for some \(z\in\ker\Delta^{n}\). Now we have
\[\widehat{\log|x|}=\widehat{\log|yz|}\overset{(3.6)}{=}\widehat{\log|y|}+\widehat{\log|z|}=\widehat{\log|y|}, \tag{3.10}\]
where the last equality holds because \(z\in\ker\Delta^{n}\) gives, by (2.17),
\[[\log|z|]=\operatorname{Re}(2\pi i\Delta^{n}(z))=0. \tag{3.11}\]
Surjectivity is clear, since \(\chi_{n}([e^{a}])=\widehat{\log|e^{a}|}=\widehat{a}\) for \(a\in A_{sa}\), and that \(\chi_{n}\) is a group homomorphism is immediate from Lemma 3.3 together with (2.17).

Next we record the existence of a splitting.

**Lemma 3.5**.: _Let \(A\) be a unital C*-algebra. The map_
\[s_{n}:\operatorname{Aff}T(A)\to GL_{n}^{0}(A)/\ker\Delta^{n} \tag{3.12}\]
_defined by \(\widehat{a}\mapsto[e^{a}]\) (for \(a\in A_{sa}\)) is a well-defined group homomorphism._

Proof.: Taking the path \(t\mapsto e^{ta}\) from \(1\) to \(e^{a}\) in (2.6) gives
\[\Delta^{n}(e^{a})=\frac{1}{2\pi i}\mathrm{Tr}(a), \tag{3.13}\]
so if \(\widehat{a}=\widehat{b}\), then \(a-b\in A_{0}\subseteq\overline{[A,A]}\), whence \(\mathrm{Tr}(a)=\mathrm{Tr}(b)\), \(\Delta^{n}(e^{a}e^{-b})=\Delta^{n}(e^{a})-\Delta^{n}(e^{b})=0\), and \([e^{a}]=[e^{b}]\) in \(GL_{n}^{0}(A)/\ker\Delta^{n}\),
giving well-definedness. Moreover, for \([a],[b]\in A_{sa}/A_{0}\),
\[\begin{split}\Delta^{n}(e^{a+b})&=\frac{1}{2\pi i} \mathrm{Tr}(a+b)\\ &=\frac{1}{2\pi i}\mathrm{Tr}(a)+\frac{1}{2\pi i}\mathrm{Tr}(b) \\ &=\Delta^{n}(e^{a})+\Delta^{n}(e^{b}).\end{split} \tag{3.14}\]
Hence, \([e^{a+b}]=[e^{a}e^{b}]\) in \(GL^{0}_{n}(A)/\ker\Delta^{n}\), so \(s_{n}\) is a group homomorphism.
**Remark 3.6**.: _Now we explain why we are working with \(\operatorname{Tr}\) instead of working with each tracial state concurrently. If we worked with \(\Delta_{\tau}\), where \(\tau\) ranges over \(T(A)\), the same arguments as above would hold. However, unless one makes a separability assumption (more specifically, that \(K_{0}(A)\) is countable), we don't necessarily have \(\ker\Delta=\bigcap_{\tau\in T(A)}\ker\Delta_{\tau}\). Indeed, if we had a piecewise smooth path \(\xi_{\tau}\) from 1 to \(x\) with \(\tilde{\Delta}_{\tau}(\xi_{\tau})\in\tau(K_{0}(A))\) for all \(\tau\in T(A)\), it is not necessarily true that we can find a single element \(y\in K_{0}(A)\) such that \(\tilde{\Delta}_{\tau}(\xi_{\tau})=\rho_{A}(y)(\tau)\) for all \(\tau\in T(A)\). See Lemme 5 and Proposition 6 of [10]._
**Proposition 3.7**.: _Let \(A\) be a unital C*-algebra. The sequence_
\[0\to U_{n}^{0}(A)/\ker\Delta^{n}|_{U_{n}^{0}(A)}\xrightarrow{\iota_{n}^{0}}GL_{n}^{0}(A)/\ker\Delta^{n}\xrightarrow{\chi_{n}^{0}}\operatorname{Aff}T(A)\to 0 \tag{3.15}\]
_is a split short exact sequence._
Proof.: We know that \(\iota^{0}_{n}\) is injective and that \(\chi^{0}_{n}\circ s_{n}=\mathrm{id}_{\mathrm{Aff}\,T(A)}\) (and hence \(\chi^{0}_{n}\) is surjective). We must show that
\[\ker\chi^{0}_{n}=\mathrm{Im}(\iota^{0}_{n}). \tag{3.16}\]
The containment \(\supseteq\) is trivial since a unitary has positive part equal to 1. For the reverse containment, suppose that \(x\in GL^{0}_{n}(A)\) is such that \(\mathrm{Tr}(\log|x|)=0\). Then, letting \(x=u_{x}|x|\) be the polar decomposition,
\[\begin{split}\Delta^{n}(x)&=\Delta^{n}(u_{x})+ \Delta^{n}(|x|)\\ &=\Delta^{n}(u_{x})+\frac{1}{2\pi i}\mathrm{Tr}(\log|x|)\\ &=\Delta^{n}(u_{x})\in\iota^{0}_{n}\Big{(}U^{0}_{n}(A)/\ker \Delta^{n}|_{U^{0}_{n}(A)}\Big{)}.\qed\end{split}\]
We wish to remove the superscript \(0\) to get a sequence involving \(U_{n}(A)/\ker\Delta^{n}|_{U^{0}_{n}(A)}\), \(GL_{n}(A)/\ker\Delta^{n}\), and \(A_{sa}/A_{0}\).
**Theorem 3.8**.: _Let \(A\) be a unital C*-algebra. The sequence_
\[0\to U_{n}(A)/\ker\Delta^{n}|_{U_{n}^{0}(A)}\xrightarrow{\iota_{n}}GL_{n}(A)/\ker\Delta^{n}\xrightarrow{\chi_{n}}\operatorname{Aff}T(A)\to 0 \tag{3.17}\]
_is a split short exact sequence._
Proof.: We have the following commutative diagram:
\[\begin{CD}0@>>>U_{n}^{0}(A)/\ker\Delta^{n}|_{U_{n}^{0}(A)}@>{\iota_{n}^{0}}>>GL_{n}^{0}(A)/\ker\Delta^{n}@>{\chi_{n}^{0}}>>\operatorname{Aff}T(A)@>>>0\\ @.@VVV@VVV@VV{\operatorname{id}}V@.\\ 0@>>>U_{n}(A)/\ker\Delta^{n}|_{U_{n}^{0}(A)}@>{\iota_{n}}>>GL_{n}(A)/\ker\Delta^{n}@>{\chi_{n}}>>\operatorname{Aff}T(A)@>>>0\\ @.@VVV@VVV@VVV@.\\ 0@>>>U_{n}(A)/U_{n}^{0}(A)@>>>GL_{n}(A)/GL_{n}^{0}(A)@>>>0@>>>0\end{CD} \tag{3.18}\]
where all the columns, as well as the \(1^{\rm st}\) and \(3^{\rm rd}\) rows are exact. As we have
\[\iota_{n}(U_{n}(A)/\ker\Delta^{n}|_{U_{n}^{0}(A)})\subseteq\ker\chi_{n}, \tag{3.19}\]
it follows from Exercise 2 of II.5 of [14] that the second row is also exact. It is easy to see that \(s_{n}:{\rm Aff}\,T(A)\to GL_{n}^{0}(A)/\ker\Delta^{n}\subseteq GL_{n}(A)/\ker \Delta^{n}\) is a splitting for the second row as
\[\chi_{n}\circ s_{n}(\widehat{a})=\chi_{n}([e^{a}])=\widehat{\log|e^{a}|}=\widehat{a}. \tag{3.20}\]
**Corollary 3.9**.: _Suppose that \(A\) is a unital C*-algebra and let \(n\in\mathbb{N}\cup\{\infty\}\). If \(\ker\Delta^{n}|_{U_{n}^{0}(A)}=DU_{n}^{0}(A)\) and \(\ker\Delta^{n}=DGL_{n}^{0}(A)\), then_

\[0\to U_{n}(A)/DU_{n}^{0}(A)\to GL_{n}(A)/DGL_{n}^{0}(A)\to\operatorname{Aff}T(A)\to 0 \tag{3.21}\]

_is a split short exact sequence. In particular, for \(n=\infty\), we have a natural split short exact sequence_

\[0\to K_{1}^{alg,u}(A)\to K_{1}^{\mathrm{alg}}(A)\to\operatorname{Aff}T(A)\to 0. \tag{3.22}\]
Proof.: The first part follows from the above as \(DU_{n}^{0}(A)=\ker\Delta^{n}|_{U_{n}^{0}(A)}\) and \(DGL_{n}^{0}(A)=\ker\Delta^{n}\). For the last part, if \(n=\infty\), then \(DGL_{\infty}(A)=DGL_{\infty}^{0}(A)\) by Whitehead's lemma. Indeed, if \(x\in GL_{n}(A)\) is a commutator, say \(x=yzy^{-1}z^{-1}\), then \(x\oplus 1\oplus 1\in GL_{3n}(A)\) can be written as a commutator as follows:
\[\begin{pmatrix}x&&\\ &1&\\ &&1\end{pmatrix}=\begin{pmatrix}y&&\\ &y^{-1}&\\ &&1\end{pmatrix}\begin{pmatrix}z&&\\ &1&\\ &&z^{-1}\end{pmatrix}\begin{pmatrix}y^{-1}&&\\ &y&\\ &&1\end{pmatrix}\begin{pmatrix}z^{-1}&&\\ &&1&\\ &&z\end{pmatrix}. \tag{3.23}\]
The four matrices on the right are connected to the identity by Whitehead's lemma (see [10, Lemma 2.1.5]).
The above split exact sequence yields that
\[K_{1}^{\mathrm{alg}}(A)\simeq K_{1}^{alg,u}(A)\oplus\operatorname{Aff}T(A) \tag{3.24}\]
naturally via the isomorphism
\[[x]\mapsto[u_{x}]\oplus\widehat{\log|x|}. \tag{3.25}\]
The following is an immediate consequence.
**Corollary 3.10**.: _Let \(A\) be a unital C*-algebra such that_
\[DU_{\infty}^{0}(A)=\ker\Delta|_{U_{\infty}^{0}(A)}\text{ and }DGL_{\infty}^{0}(A)=\ker\Delta. \tag{3.26}\]
_If \(x,y\in GL_{\infty}(A)\), the following are equivalent._
1. \([u_{x}]=[u_{y}]\) _in_ \(K_{1}^{alg,u}(A)\) _and_ \(\widehat{\log|x|}=\widehat{\log|y|}\) _in_ \(\operatorname{Aff}T(A)\)_;_
2. \([x]=[y]\) _in_ \(K_{1}^{alg}(A)\)_._
For \(\phi:A\to B\) a *-homomorphism between unital C*-algebras, denote by
1. \(K_{1}^{alg,u}(\phi):K_{1}^{alg,u}(A)\to K_{1}^{alg,u}(B)\);
2. \(K_{1}^{\mathrm{alg}}(\phi):K_{1}^{\mathrm{alg}}(A)\to K_{1}^{\mathrm{alg}}(B)\);
3. \(T(\phi):T(B)\to T(A)\)
the maps induced by \(\phi\).
**Corollary 3.11**.: _Let \(A,B\) be unital C*-algebras such that_
* \(DU_{\infty}^{0}(A)=\ker\Delta_{A}|_{U_{\infty}^{0}(A)}\) _and_ \(DGL_{\infty}^{0}(A)=\ker\Delta_{A}\)_;_
* \(DU_{\infty}^{0}(B)=\ker\Delta_{B}|_{U_{\infty}^{0}(B)}\) _and_ \(DGL_{\infty}^{0}(B)=\ker\Delta_{B}\)_._
_Let \(\phi,\psi:A\to B\) be unital *-homomorphisms. The following are equivalent._
1. \(K_{1}^{alg,u}(\phi)=K_{1}^{alg,u}(\psi)\) _and_ \(T(\phi)=T(\psi)\)_;_
2. \(K_{1}^{alg}(\phi)=K_{1}^{alg}(\psi)\)_._
There are many classes of unital C*-algebras satisfying the two hypotheses of the above corollary [11, 12, 13, 14], with the last of these, due to Ng and Robert, being the most general. Namely, it is shown there that they hold whenever \(A\) is a unital, separable, simple, pure C*-algebra of stable rank one such that every 2-quasitracial state is a trace.
**Corollary 3.12**.: _Let \(A\) be a unital, simple, separable, pure C*-algebra of stable rank one such that every 2-quasitrace is a trace. Then there is a natural isomorphism_
\[K_{1}^{alg}(A)\simeq K_{1}^{alg,u}(A)\oplus\operatorname{Aff}T(A). \tag{3.27}\]
### Hausdorff algebraic \(K\)-theory
In the Hausdorff setting, we obtain similar results by the same arguments. In this case, we have \(\ker\bar{\Delta}^{n}|_{U_{n}^{0}(A)}=CU_{n}^{0}(A)\) and \(\ker\bar{\Delta}^{n}=CGL_{n}^{0}(A)\) by Lemma 2.6. Let
\[\bar{\iota}_{n} :U_{n}(A)/CU_{n}^{0}(A)\to GL_{n}(A)/CGL_{n}^{0}(A),\] \[\bar{\chi}_{n} :GL_{n}(A)/CGL_{n}^{0}(A)\to\operatorname{Aff}T(A),\quad\text{and}\] \[\bar{s}_{n} :\operatorname{Aff}T(A)\to GL_{n}(A)/CGL_{n}^{0}(A) \tag{3.28}\]
be the variants of the maps \(\iota_{n},\chi_{n},s_{n}\) from the previous section (so our domains and codomains are now topological). Identifying \(CU_{n}^{0}(A)=\ker\bar{\Delta}^{n}|_{U_{n}^{0}(A)}\) and applying the arguments from Section 3.1 gives that each of these maps is a well-defined group homomorphism for \(n\in\mathbb{N}\cup\{\infty\}\). In the Hausdorff setting, these maps are also continuous. First, a lemma to handle the \(n=\infty\) case.
**Lemma 3.13**.: _Let \(G=\cup_{n}G_{n}\) be an increasing union of topological groups and equip \(G\) with the inductive limit topology. Let \(H\leq G\) be a subgroup such that the closure \(CH\) of \(H\) is also a subgroup of \(G\). Then the quotient map \(q:G\to G/CH\) is an open map._
Proof.: Let \(S\subseteq G\) be open. As \(G/CH\) has the quotient topology, the set \(q(S)\subseteq G/CH\) is open if and only if \(q^{-1}(q(S))\subseteq G\) is open in \(G\). Thinking of \(G/CH\) as the space of \(CH\)-orbits of \(G\) where \(CH\curvearrowright G\) by right translation, we have that
\[q^{-1}(q(S))=\bigcup_{h\in CH}Sh \tag{3.29}\]
which is open if \(S\) is since right translation still yields a homeomorphism in the inductive limit topology - see [14, Proposition 1.1(ii)].
**Proposition 3.14**.: _The maps in (3.28) are well-defined, continuous group homomorphisms. Moreover, \(\overline{\iota}\) and \(\overline{\chi}\) are open onto their images._
Proof.: A straightforward adaptation of the arguments of the previous section shows that these are well-defined group homomorphisms. We work with the \(n=\infty\) case throughout, as the \(n\in\mathbb{N}\) case is similar, and easier due to the fact that \(GL_{n}(A),U_{n}(A)\) are topological groups.
Let us show that \(\overline{\iota}\) is continuous. The diagram
\[\begin{CD}U_{\infty}(A)@>{\sigma}>>GL_{\infty}(A)\\ @V{q_{U}}VV@VV{q_{GL}}V\\ \overline{K}_{1}^{alg,u}(A)@>{\overline{\iota}}>>\overline{K}_{1}^{\text{alg}}(A)\end{CD} \tag{3.30}\]
commutes where the left and right maps are quotient maps and \(\sigma\) is the canonical inclusion. We note that for any subset \(S\subseteq\overline{K}^{\text{alg}}_{1}(A)\) the commutation of the above diagram gives that
\[(q_{U})^{-1}\left(\overline{\iota}^{-1}(S)\right)=\sigma^{-1}\left(q_{GL}^{-1} (S)\right). \tag{3.31}\]
Therefore if \(S\subseteq\overline{K}^{\text{alg}}_{1}(A)\) is open, then
\[\begin{split}\overline{\iota}^{-1}(S)&=q_{U}\left(q _{U}^{-1}\left(\overline{\iota}^{-1}(S)\right)\right)\\ &=q_{U}\left(\sigma^{-1}\left(q_{GL}^{-1}(S)\right)\right),\end{split} \tag{3.32}\]
where \(\sigma^{-1}\left(q_{GL}^{-1}(S)\right)\) is open because both \(q_{GL}\) and \(\sigma\) are continuous. As \(q_{U}\) is open by Lemma 3.13, it follows that \(\overline{\iota}^{-1}(S)\) is open. This shows continuity.
Let us show that \(\overline{\iota}\) is open onto its image. We note that taking the unitary part of the polar decomposition \(\omega_{n}:GL_{n}(A)\to U_{n}(A)\subseteq U_{\infty}(A)\) is continuous for all \(n\) and therefore induces a continuous map \(\omega:GL_{\infty}(A)\to U_{\infty}(A)\). Since \(CGL_{\infty}(A)\cap U_{\infty}(A)=CU_{\infty}(A)\) by Lemma 2.6, we get an induced continuous map
\[\overline{\omega}:\overline{K}^{\text{alg}}_{1}(A)\to\overline{K}^{alg,u}_{1 }(A) \tag{3.33}\]
which clearly satisfies
\[\overline{\omega}\circ\overline{\iota}=\text{id}_{\overline{K}^{alg,u}_{1}( A)}\text{ and }\overline{\iota}\circ\overline{\omega}\big{|}_{\overline{\iota}\left( \overline{K}^{alg,u}_{1}(A)\right)}=\text{id}_{\overline{\iota}\left( \overline{K}^{alg,u}_{1}(A)\right)}. \tag{3.34}\]
Now if \(S\subseteq\overline{K}^{alg,u}_{1}(A)\) is open, then it follows that
\[\overline{\iota}(S)=\overline{\iota}\left(\overline{K}^{alg,u}_{1}(A)\right) \cap(\overline{\omega})^{-1}(S). \tag{3.35}\]
As \(\overline{\omega}\) is continuous, \((\overline{\omega})^{-1}(S)\subseteq\overline{K}^{\text{alg}}_{1}(A)\) is open and so \(\overline{\iota}(S)\subseteq\overline{\iota}\left(\overline{K}^{alg,u}_{1}( A)\right)\) is open with respect to the subspace topology. This shows that \(\overline{\iota}\) is open onto its image.
For \(\overline{\chi}\), let \(g:GL_{\infty}(A)\to\operatorname{Aff}T(A)\) denote the map \(g(x):=\widehat{\log|x|}\). The diagram
\[\begin{CD}GL_{\infty}(A)@>{g}>>\operatorname{Aff}T(A)\\ @V{q_{GL}}VV@|\\ \overline{K}_{1}^{\text{alg}}(A)@>{\overline{\chi}}>>\operatorname{Aff}T(A)\end{CD} \tag{3.36}\]
commutes, so we have that for \(S\subseteq\operatorname{Aff}T(A)\)
\[\begin{split}\overline{\chi}^{-1}(S)&=q_{GL}\left(q _{GL}^{-1}\left(\overline{\chi}^{-1}(S)\right)\right)\\ &=q_{GL}\left(g^{-1}(S)\right).\end{split} \tag{3.37}\]
Thus since we know that \(q_{GL}\) is open by Lemma 3.13, it suffices to show that \(g\) is continuous. But \(g\) is continuous if \(g|_{GL_{n}(A)}\) is continuous for all \(n\),4 and this is true: indeed, \(g|_{GL_{n}(A)}\) can be written as the composition
Footnote 4: If \(X=\cup_{n}X_{n}\) is equipped with the inductive limit topology and \(f:X\to Y\) is a function such that \(f|_{X_{n}}\) is continuous for all \(n\), then \(f\) is continuous. To see this, let \(S\subseteq Y\) be open and note that \(f^{-1}(S)=\cup_{n}f|_{X_{n}}^{-1}(S)\) is open.
\[GL_{n}(A)\xrightarrow{l|_{GL_{n}(A)}}A_{sa}\xrightarrow{\operatorname{Tr}|_{A_{sa}}}\operatorname{Aff}T(A) \tag{3.38}\]
where \(l:GL_{\infty}(A)\to A_{sa}\) is the map given by \(l(x):=\operatorname{tr}\log|x|\) where \(\operatorname{tr}:M_{\infty}(A)\to A\) is the unnormalized trace. Seeing that \(l|_{GL_{n}(A)}\) is continuous follows easily: if \(x_{n}\to x\) in \(GL_{n}(A)\), then \(\operatorname{tr}\log|x_{n}|\to\operatorname{tr}\log|x|\).
To show that \(\overline{\chi}\) is open, note that we have the following commutative diagram:
\[\begin{CD}GL_{\infty}(A)@>{f}>>\operatorname{Aff}T(A)\\ @V{q_{GL}}VV@|\\ \overline{K}_{1}^{\text{alg}}(A)@>{\overline{\chi}}>>\operatorname{Aff}T(A)\end{CD} \tag{3.39}\]
where the map \(f:GL_{\infty}(A)\to\operatorname{Aff}T(A)\) is given by \(f(x):=\widehat{\log|x|}\). To see that \(\overline{\chi}\) is open, it suffices to show that \(f\) is open, and to this end it suffices to show that \(f|_{GL_{n}}\) is open for each \(n\).5 For \(GL_{n}(A)\), \(\widehat{\log|x_{0}|}=\widehat{\operatorname{tr}\log|x_{0}|}\), where \(\operatorname{tr}\) is the unnormalized trace, and so we can restrict to the case where \(n=1\). Let us without loss of generality work with open balls around the identity: let \(\varepsilon>0\) and consider
Footnote 5: If \(X=\cup_{n}X_{n}\) is an increasing union of topological spaces with the inductive limit topology and \(Y\) is another topological space, then for \(S\subseteq X\), we have that \(f(S)=\cup_{n}f(S\cap X_{n})\).
\[S:=\{x\in GL(A)\mid\|x-1\|<\varepsilon\}. \tag{3.40}\]
Looking at the image of \(S\) under \(f_{1}:=f|_{GL(A)}\), we have
\[f_{1}(S)=\{\widehat{\log|x|}\mid\|x-1\|<\varepsilon\}. \tag{3.41}\]
Let \(x_{0}\in GL(A)\) be such that \(x_{0}\approx_{\varepsilon}1\) and let us show that there is an open ball around \(\widehat{\log|x_{0}|}\) that is contained in \(f_{1}(S)\). First off, note that for \(\hat{h}\in\operatorname{Aff}T(A)\), with \(h\in A_{sa}\), we have
\[\begin{split}\|\widehat{\log|x_{0}|}-\hat{h}\|&=\|\widehat{\log|x_{0}|}-\widehat{\log|e^{h}|}\|\\ &=\|\overline{\chi}([x_{0}])-\overline{\chi}([e^{h}])\|\\ &=\|\overline{\chi}([x_{0}e^{-h}])\|\\ &=\|\widehat{\log|x_{0}e^{-h}|}\|,\end{split} \tag{3.42}\]
Now let \(\delta>0\) be such that \(\|a\|<\delta\) implies \(\|e^{a}-1\|<\varepsilon\) for \(a\in A\). Then for \(\hat{h}\in\operatorname{Aff}T(A)\) with \(\widehat{\log|x_{0}|}\approx_{\frac{\delta}{2}}\hat{h}\), we have by (3.42) that
\[\|\widehat{\log|x_{0}e^{-h}|}\|<\frac{\delta}{2}. \tag{3.43}\]
Find a self-adjoint lift, say \(k\in A_{sa}\), of \(\widehat{\log|x_{0}e^{-h}|}\) with \(\|k\|<\delta\). Then we have that \(\|e^{k}-1\|<\varepsilon\) and
\[f_{1}(e^{k})=\hat{k}=\widehat{\log|x_{0}e^{-h}|}. \tag{3.44}\]
This shows that \(B_{\delta/2}(\widehat{\log|x_{0}|})\subseteq f_{1}(S)\).
Finally, let us show that \(\overline{s}\) is continuous. We have that
\[\begin{CD}A_{sa}@>{\alpha}>>GL_{\infty}(A)\\ @V{\operatorname{Tr}|_{A_{sa}}}VV@VV{q_{GL}}V\\ \operatorname{Aff}T(A)@>{\overline{s}}>>\overline{K}_{1}^{\text{alg}}(A)\end{CD} \tag{3.45}\]
commutes where \(\alpha(a):=e^{a}\) - note that \(\alpha\) is continuous and that the image of \(\alpha\) is contained in \(GL(A)\subseteq GL_{\infty}(A)\) (\(\alpha\) is however _not_ a homomorphism). Consequently if \(S\subseteq\overline{K}_{1}^{\operatorname{alg}}(A)\) is open, then since \(\operatorname{Tr}|_{A_{sa}}\) is surjective,
\[\begin{split}\overline{s}^{-1}(S)&=\operatorname{Tr}|_{A_{sa}}\left(\operatorname{Tr}|_{A_{sa}}^{-1}\left(\overline{s}^{-1}(S)\right)\right)\\ &=\operatorname{Tr}|_{A_{sa}}\left(\alpha^{-1}\left(q_{GL}^{-1}(S)\right)\right)\end{split} \tag{3.46}\]
is open as well, since \(\alpha\) and \(q_{GL}\) are continuous and \(\operatorname{Tr}|_{A_{sa}}\), being a surjective bounded linear map of Banach spaces, is open.
**Theorem 3.15**.: _For any unital C*-algebra \(A\) and \(n\in\mathbb{N}\cup\{\infty\}\), the sequence_
\[0\to U_{n}(A)/CU_{n}^{0}(A)\xrightarrow{\overline{\iota}_{n}}GL_{n}(A)/CGL_{n}^{0}(A)\xrightarrow{\overline{\chi}_{n}}\operatorname{Aff}T(A)\to 0 \tag{3.47}\]
_is a split short exact sequence of topological groups. In particular, we have the following split short exact sequence of topological groups:_
\[0\to\overline{K}_{1}^{\text{alg},u}(A)\xrightarrow{\overline{\iota}}\overline{K}_{1}^{\text{alg}}(A)\xrightarrow{\overline{\chi}}\operatorname{Aff}T(A)\to 0. \tag{3.48}\]
Proof.: The same argument as in Theorem 3.8 gives an algebraic splitting. The fact that this is a splitting of topological groups follows as \(\overline{\iota}_{n},\overline{\chi}_{n},\overline{s}_{n}\) are all continuous and \(\overline{\iota}_{n}\) and \(\overline{\chi}_{n}\) are open onto their images by Proposition 3.14.
**Corollary 3.16**.: _Let \(A\) be a unital C*-algebra and let \(x,y\in GL_{\infty}(A)\). The following are equivalent._
1. \([u_{x}]=[u_{y}]\) _in_ \(\overline{K}_{1}^{\text{alg},u}(A)\) _and_ \(\widehat{\log|x|}=\widehat{\log|y|}\) _in_ \(\operatorname{Aff}T(A)\)_;_
2. \([x]=[y]\) _in_ \(\overline{K}_{1}^{\text{alg}}(A)\)_._
For \(A,B\) unital C*-algebras, \(\phi:A\to B\) a unital *-homomorphism, denote by
1. \(\overline{K}_{1}^{\text{alg},u}(\phi):\overline{K}_{1}^{\text{alg},u}(A) \to\overline{K}_{1}^{\text{alg},u}(B)\), and
2. \(\overline{K}_{1}^{\text{alg}}(\phi):\overline{K}_{1}^{\text{alg}}(A)\to \overline{K}_{1}^{\text{alg}}(B)\)
the maps induced by \(\phi\).
**Corollary 3.17**.: _Let \(A,B\) be unital C*-algebras. Let \(\phi,\psi:A\to B\) be unital *-homomorphisms. The following are equivalent._
1. \(\overline{K}_{1}^{\text{alg},u}(\phi)=\overline{K}_{1}^{\text{alg},u}(\psi)\) _and_ \(T(\phi)=T(\psi)\)_;_
2. \(\overline{K}_{1}^{\text{alg}}(\phi)=\overline{K}_{1}^{\text{alg}}(\psi)\)_._
## 4. Nonstable algebraic \(K\)-theory
Here we discuss some structure of the non-stable (both Hausdorff and not) algebraic \(K_{1}\)-groups. In [11, Theorem 3.2], Thomsen proved that the map
\[U_{\infty}^{0}(A)/CU_{\infty}^{0}(A)\simeq\operatorname{Aff}T(A)/\overline{\rho_{A}(K_{0}(A))} \tag{4.1}\]
given by \([u]\mapsto\overline{\Delta}(u)\) is a homeomorphic isomorphism.
It was noted that if \(\pi_{1}(U^{0}(A))\to K_{0}(A)\) is surjective, then we have that
\[U^{0}(A)/CU^{0}(A)\simeq U^{0}_{n}(A)/CU^{0}_{n}(A) \tag{4.2}\]
for all \(n\in\mathbb{N}\cup\{\infty\}\). Indeed, if the canonical map \(\pi_{1}(U^{0}(A))\to K_{0}(A)\) is surjective, then the following diagram commutes
\[\begin{CD}U^{0}(A)/CU^{0}(A)@>{\bar{i}}>>U^{0}_{\infty}(A)/CU^{0}_{\infty}(A)\\ @V{\overline{D}_{1}}VV@VV{\overline{D}}V\\ \operatorname{Aff}T(A)/\overline{\tilde{\Delta}(\pi_{1}(U^{0}(A)))}@>{\operatorname{id}}>>\operatorname{Aff}T(A)/\overline{\rho_{A}(K_{0}(A))}\end{CD} \tag{4.3}\]
where \(\bar{i}:U^{0}(A)/CU^{0}(A)\to U^{0}_{\infty}(A)/CU^{0}_{\infty}(A)\) is the canonical map, and \(\overline{D}_{1},\overline{D}\) are the maps factoring \(\bar{\Delta}^{1},\bar{\Delta}\) through \(CU^{0}(A),CU^{0}_{\infty}(A)\) respectively. As \(\operatorname{id},\overline{D}_{1}\) and \(\overline{D}\) are all homeomorphic isomorphisms, it follows that the canonical map \(\bar{i}\) is a homeomorphic isomorphism.
**Remark 4.1**.: _More generally, one can study the question of when_
\[U^{0}_{n}(A)/CU^{0}_{n}(A)\to U^{0}_{m}(A)/CU^{0}_{m}(A) \tag{4.4}\]
_is an isomorphism for all \(m\geq n\), even in the case where \(\pi_{1}(U^{0}(A))\to K_{0}(A)\) may not be surjective. See [10] for details. One can of course get similar results using the general linear invariants, as well as the purely algebraic variants under the assumptions that \(\ker\Delta^{n}|_{U^{0}_{n}(A)}=DU^{0}_{n}(A)\) or \(\ker\Delta^{n}=DGL^{0}_{n}(A)\) for every \(n\)._
A similar argument gives the following in the algebraic setting.
**Lemma 4.2**.: _Let \(A\) be a unital C*-algebra and suppose that \(\pi_{1}(U^{0}(A))\to K_{0}(A)\) is surjective._
1. _The canonical map_ \(U^{0}(A)/\ker\Delta^{1}|_{U^{0}(A)}\to U^{0}_{\infty}(A)/\ker\Delta|_{U^{0}_{ \infty}(A)}\) _is an isomorphism._
2. _The canonical map_ \(GL^{0}(A)/\ker\Delta^{1}\to GL^{0}_{\infty}(A)/\ker\Delta\) _is an isomorphism._
Proof.: Writing out a similar diagram to (4.3), we have
\[\begin{CD}U^{0}(A)/\ker\Delta^{1}|_{U^{0}(A)}@>{i}>>U^{0}_{\infty}(A)/\ker\Delta|_{U^{0}_{\infty}(A)}\\ @V{D_{1}}VV@VV{D}V\\ \operatorname{Aff}T(A)/\tilde{\Delta}(\pi_{1}(U^{0}(A)))@>{\operatorname{id}}>>\operatorname{Aff}T(A)/\rho_{A}(K_{0}(A))\end{CD} \tag{4.5}\]
The maps \(\operatorname{id},D_{1},D\) are all group isomorphisms, so \(i\) must be as well (the maps \(i,D_{1},D\) are the purely algebraic analogues of \(\bar{i},\overline{D}_{1},\overline{D}\) above). We get a similar diagram in the \(GL\) setting, with \(U\) replaced with \(GL\) and \(\operatorname{Aff}T(A)\) replaced by \(A/\overline{[A,A]}\).
Using similar techniques to [14], we have the following.
**Lemma 4.3**.: _Let \(A\) be a unital C*-algebra and suppose that \(\pi_{1}(U^{0}(A))\to K_{0}(A)\) is surjective._
1. _If_ \(\ker\Delta^{1}|_{U^{0}(A)}=DU^{0}(A)\)_, then_ \(\ker\Delta^{n}|_{U^{0}_{n}(A)}=DU^{0}_{n}(A)\) _for all_ \(n\in\mathbb{N}\cup\{\infty\}\)_._
2. _If_ \(\ker\Delta^{1}=DGL^{0}(A)\)_, then_ \(\ker\Delta^{n}=DGL^{0}_{n}(A)\) _for all_ \(n\in\mathbb{N}\cup\{\infty\}\)_._
Proof.: We show (1) holds; (2) is similar. Suppose that \(u\in\ker\Delta^{n}|_{U^{0}_{n}(A)}\). There is some \(a\in A_{sa}\) such that \([u]=[e^{2\pi ia}\oplus 1_{n-1}]\) and a piecewise smooth path \(\xi:[0,1]\to U^{0}_{n}(A)\) from \(1\) to \(u\) with \(\tilde{\Delta}(\xi)=\operatorname{Tr}(a)\).6
Footnote 6: As in the proof of Lemme 3 of [13], take any piecewise smooth path \(\xi:[0,1]\to U^{0}_{n}(A)\) from \(1\) to \(u\) and choose \(k\) such that \(\|\xi(\frac{j-1}{k})-\xi(\frac{j}{k})\|<1\) for all \(j=1,\dots,k\). Then taking \(a:=\sum_{j=1}^{k}\frac{1}{2\pi i}\log\left(\xi(\frac{j-1}{k})^{-1}\xi(\frac{ j}{k})\right)\) gives the desired self-adjoint element.
As \(u\in\ker\Delta^{n}|_{U^{0}_{n}(A)}\), there is some piecewise smooth loop \(\eta:[0,1]\to U^{0}_{n}(A)\) with
\[\tilde{\Delta}^{n}(\xi)=\tilde{\Delta}^{n}(\eta). \tag{4.6}\]
As before, the surjectivity of \(\pi_{1}(U^{0}(A))\to K_{0}(A)\) implies that \(\eta\) is homotopic to \(\eta_{0}\oplus 1_{n-1}\) for some piecewise smooth loop \(\eta_{0}:[0,1]\to U^{0}(A)\). Then \(\eta_{1}(t):=e^{2\pi ita}\eta_{0}(t)^{*}\) defines a piecewise smooth path in \(U^{0}(A)\) from \(1\) to \(e^{2\pi ia}\) such that \(\tilde{\Delta}(\eta_{1})=0\). Therefore \(e^{2\pi ia}\in\ker\Delta^{1}|_{U^{0}(A)}=DU^{0}(A)\) and consequently
\[[u]=[e^{2\pi ia}\oplus 1_{n-1}]=0\text{ in }U^{0}_{n}(A)/DU^{0}_{n}(A). \tag{4.7}\]
Now we finish by showing that we can work outside of the connected component.
**Theorem 4.4**.: _Let \(A\) be a unital C*-algebra such that_
1. _the canonical map_ \(\pi_{1}(U^{0}(A))\to K_{0}(A)\) _is surjective;_
2. _the canonical map_ \(U(A)/U^{0}(A)\to K_{1}(A)\) _is an isomorphism._
_Then the following is true._
1. _If_ \(\ker\Delta^{1}|_{U^{0}(A)}=DU^{0}(A)\)_, then_ \(U(A)/DU(A)\simeq K_{1}^{alg,u}(A)\)_._
2. _If_ \(\ker\Delta^{1}=DGL^{0}(A)\)_, then_ \(GL(A)/DGL(A)\simeq K_{1}^{\mathrm{alg}}(A)\)_._
Proof.: We show this in the unitary setting. First we again note that \(DU^{0}_{\infty}(A)=DU_{\infty}(A)\) by (3.23), and so \(K_{1}^{alg,u}(A)=U_{\infty}(A)/DU^{0}_{\infty}(A)\). Moreover, (2) implies that \(A\) is \(K_{1}\)-injective, giving that \(DU(A)=DU^{0}(A)\) and \(DGL(A)=DGL^{0}(A)\) as well. Using property (1) with the fact that \(\ker\Delta^{1}|_{U^{0}(A)}=DU^{0}(A)\) gives that the canonical map
\[U^{0}(A)/DU^{0}(A)\simeq U^{0}_{\infty}(A)/DU^{0}_{\infty}(A) \tag{4.8}\]
is an isomorphism by combining Lemmas 4.2 and 4.3. Now combining (2) with (4.8) yields a morphism of short exact sequences
\[\begin{CD}1@>>>U^{0}(A)/DU^{0}(A)@>>>U(A)/DU(A)@>>>U(A)/U^{0}(A)@>>>1\\ @.@VVV@VVV@VVV@.\\ 1@>>>U^{0}_{\infty}(A)/DU^{0}_{\infty}(A)@>>>K_{1}^{alg,u}(A)@>>>K_{1}(A)@>>>1\end{CD} \tag{4.9}\]
where the left and right vertical maps are isomorphisms. Therefore the middle vertical map is an isomorphism by the Short Five Lemma [12, Lemma I.3.1]. The argument in the general linear setting is the same, with \(U\) replaced by \(GL\) and \(K_{1}^{alg,u}(A)\) replaced by \(K_{1}^{\mathrm{alg}}(A)\).
**Remark 4.5**.: _If \(A\) is unital, with the assumptions_

1. \(\pi_{1}(U^{0}(A))\to K_{0}(A)\) _is surjective and_

2. \(U(A)/U^{0}(A)\to K_{1}(A)\) _is an isomorphism,_

_we also get that_

\[U(A)/CU(A)\simeq\overline{K}_{1}^{alg,u}(A)\quad\text{and}\quad GL(A)/CGL(A)\simeq\overline{K}_{1}^{\text{alg}}(A), \tag{4.10}\]

_even as topological groups. Indeed, looking at the unitary case for example, since the map_

\[\bar{i}:U^{0}(A)/CU^{0}(A)\to U^{0}_{\infty}(A)/CU^{0}_{\infty}(A) \tag{4.11}\]

_is a topological group isomorphism by means of (4.3), it follows that the map_

\[U(A)/CU(A)\to U_{\infty}(A)/CU_{\infty}(A)=\overline{K}_{1}^{alg,u}(A) \tag{4.12}\]

_is open as it will send small open neighbourhoods of the identity to open neighbourhoods (as sufficiently small neighbourhoods will be connected to the identity)._

_Again, as the map is injective, we also have that it is a homeomorphism onto its image. Thus \(U(A)/CU(A)\simeq\overline{K}_{1}^{alg,u}(A)\) and \(GL(A)/CGL(A)\simeq\overline{K}_{1}^{\text{alg}}(A)\) in this case._
Finally, we finish by stating that unital C*-algebras satisfying

1. \(\pi_{1}(U^{0}(A))\to K_{0}(A)\) is surjective and

2. \(U(A)/U^{0}(A)\to K_{1}(A)\) is an isomorphism

are very common. Indeed, this includes the class of stable rank one C*-algebras [14, Theorem 3.3], \(\mathcal{Z}\)-stable C*-algebras [15, Theorem 3], and tensor products with coronas of \(\sigma\)-unital C*-algebras [16, Theorem 4.9].
2310.16370 | PartRePer-MPI: Combining Fault Tolerance and Performance for MPI
Applications | As we have entered Exascale computing, the faults in high-performance systems
are expected to increase considerably. To compensate for a higher failure rate,
the standard checkpoint/restart technique would need to create checkpoints at a
much higher frequency resulting in an excessive amount of overhead which would
not be sustainable for many scientific applications. Replication allows for
fast recovery from failures by simply dropping the failed processes and using
their replicas to continue the regular operation of the application.
In this paper, we have implemented PartRePer-MPI, a novel fault-tolerant MPI
library that adopts partial replication of some of the launched MPI processes
in order to provide resilience from failures. The novelty of our work is that
it combines both fault tolerance, due to the use of the User Level Failure
Mitigation (ULFM) framework in the Open MPI library, and high performance, due
to the use of communication protocols in the native MPI library that is
generally fine-tuned for specific HPC platforms. We have implemented efficient
and parallel communication strategies with computational and replica processes,
and our library can seamlessly provide fault tolerance support to an existing
MPI application. Our experiments using seven NAS Parallel Benchmarks and two
scientific applications show that the failure-free overheads in PartRePer-MPI
when compared to the baseline MVAPICH2, are only up to 6.4% for the NAS
parallel benchmarks and up to 9.7% for the scientific applications. | Sarthak Joshi, Sathish Vadhiyar | 2023-10-25T05:18:48Z | http://arxiv.org/abs/2310.16370v1 | # PartRePer-MPI: Combining Fault Tolerance and Performance for MPI Applications
###### Abstract
As we have entered Exascale computing, the faults in high-performance systems are expected to increase considerably. To compensate for a higher failure rate, the standard checkpoint/restart technique would need to create checkpoints at a much higher frequency resulting in an excessive amount of overhead which would not be sustainable for many scientific applications. Replication allows for fast recovery from failures by simply dropping the failed processes and using their replicas to continue the regular operation of the application.
In this paper, we have implemented _PartRePer-MPI_, a novel fault-tolerant MPI library that adopts partial replication of some of the launched MPI processes in order to provide resilience from failures. The novelty of our work is that it combines both fault tolerance, due to the use of the User Level Failure Mitigation (ULFM) framework in the Open MPI library, and high performance, due to the use of communication protocols in the native MPI library that is generally fine-tuned for specific HPC platforms. We have implemented efficient and parallel communication strategies with computational and replica processes, and our library can seamlessly provide fault tolerance support to an existing MPI application. Our experiments using seven NAS Parallel Benchmarks and two scientific applications show that the failure-free overheads in PartRePer-MPI when compared to the baseline MVAPICH2, are only up to 6.4% for the NAS parallel benchmarks and up to 9.7% for the scientific applications.
MPI, ULFM, Fault Tolerance, Replication
## I Introduction
Large-scale systems are prone to failures due to both hardware and software faults. The standard method to handle failures is the checkpoint/restart mechanism [1, 2]. In this method, the application state is saved as checkpoints at regular intervals. Upon failure, the application is restarted, the last saved checkpoint is used to recover the saved state, and execution continues from that point. However, as Exascale systems are being built, the failure rate is expected to considerably increase due to the complexities of the components and the interconnections [3, 4]. Checkpoints need to be created at a higher frequency in order to compensate for the high failure rates. Furthermore, the saved checkpoints will be loaded much more frequently as a restart will be needed at every failure [5]. This results in large overheads that will result in significant performance loss for many scientific applications [6].
Hence, fault tolerance using replication was proposed [7] in order to have faster recovery from failures, increase the mean time to interruption (MTTI) of the application and allow for longer checkpoint intervals. Most of the existing fault-tolerant MPI libraries do not harness the efficient native MPI communications that are highly tuned to the network topology and other hardware aspects, thereby compromising performance. In this paper, we have developed _PartRePer-MPI_, an MPI library based on partial replication of processes that combines both fault tolerance and high-performance communications. Our library allows various degrees of replication to be used for partial replication. The novelty of our library is that it provides both fault tolerance, by utilizing ULFM (User Level Failure Mitigation) [8] from the Open MPI library [9], and high performance, by utilizing a native MPI library for communications, thereby providing the best of both worlds. We have created an interface with which any MPI library (both open and closed source) can be dynamically loaded and used for communications. We have also implemented efficient and parallel communication strategies involving computational and replica processes.
We performed experiments with our library, PartRePer-MPI, using seven NAS Parallel Benchmarks, and two scientific applications, related to solving compressible Euler equations and Particle in Cell (PIC) simulations, respectively. We show that the failure-free overheads in PartRePer-MPI, when compared to the baseline MVAPICH2 are only up to 6.4% for the NAS parallel benchmarks and 9.7% for the scientific applications. We also show that the use of partial replication in our library results in reduced checkpoint recovery overheads.
Section II gives an overview of the related work in the area of fault tolerance for MPI applications. Section III gives background and fundamentals used in our library, namely, the mechanisms of replicating the state of a process to another process and the ULFM standard for fault tolerance. In Section IV, we describe the various challenges and our solutions for combining two MPI libraries, Open MPI and a native MPI library, to provide both fault tolerance and high performance, respectively. Section V explains the implementation of PartRePer-MPI, including communicators and various MPI functions. Section VI briefly explains the failure management related to repairing the communications and dealing with lost messages after failures. Section VII provides experiments and results related to overheads due to our library. Finally, Section VIII gives conclusions and presents the scope for future work.
## II Related Work
Over the years, various approaches have emerged to counter the issue of frequent failures in large-scale systems. These involve techniques like algorithm-based fault tolerance [10] and libraries like RedMPI [11] to handle undetected soft errors such as silent data corruption. There have also been approaches to improve the efficiency of checkpointing techniques for crash failures through the use of multilevel checkpointing [12], which enables the use of multiple types of checkpoints, with varying costs, in a single run. Other solutions in this category involve checkpointing without global synchronization [13], providing failure containment by dividing the application processes across independent clusters [14], and techniques for efficient checkpoint recovery [15]. There have also been advancements in failure prediction techniques that can allow more proactive checkpointing [16]. Efforts also exist to reduce the application recovery time through the global-restart model [17, 18].
More replication-based solutions have also emerged in this period, as studies have shown the impact of replication on the mean time to application interruption [5, 19]. Much of the attention in this field is either on utilizing replicas to identify and recover from inconsistencies due to soft errors (e.g., RedMPI), or on providing efficiency higher than the 50% maximum that can be achieved with dual redundancy [20], by exploiting general application communication properties [21], and by sharing the work between two replicas to reduce redundancy [22, 23]. Furthermore, it has been shown that accurate failure predictions, along with adaptive replication, can greatly improve the efficiency [24]. To our knowledge, ours is the first work that provides fault tolerance using partial replication and ULFM, and combines it with harnessing the efficient communications of native MPI libraries.
## III Background and Preliminaries
### _Replication_
The replica of a process can be defined as a process that performs the same operations in the same order on the same inputs and produces the same outputs at the application level. However, internally, the data might be loaded from and stored at different addresses. At the process level, the mechanism of replicating a process is equivalent to checkpointing the state of the process and communicating the checkpointed state to another process. Our replication procedure consists of 3 major steps, namely, Data Segment Transfer, Heap Segment Transfer, and Stack Segment Transfer. Initially, all the basic information needed to facilitate these steps is transferred. This includes the jmp_buf structure object that maintains the stack pointer, the frame pointer, the program counter, data from certain registers, pointer addresses of heap chunks, sizes of heap chunks (covered in Heap Segment Transfer), the stack segment address range, the data segment address range, and the ending address of the bss segment in the source process.
#### III-A1 Data Segment Transfer
The data segment in a process consists of initialized and uninitialized data. Firstly, if there is a discrepancy in the total amount of memory space used for data (both initialized and uninitialized), it is equalized in the target process using the sbrk call. Then, in the target process, any variables in the initialized data segment that need to be preserved in the replica, such as custom communicators and dynamic library references (covered in later sections), are stored in temporary variables on the stack. Next, the data at the start of the data segment in the source process is transferred to the start of the data segment in the target process. Finally, the data saved on the stack in the target process is restored to its original location.
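A rough sketch of the equalization step is given below; the helper name and the assumption that the source's break address arrives during the initial information exchange are ours, not the library's actual interface.

```c
#include <unistd.h>
#include <stdint.h>

/* Grow or shrink the target's data segment so that its program break
   matches the break address received from the source process. */
static void equalize_data_segment(void *src_brk)
{
    void *tgt_brk = sbrk(0);                      /* current break address */
    intptr_t diff = (char *)src_brk - (char *)tgt_brk;
    if (diff != 0)
        sbrk(diff);                               /* adjust by the difference */
}
```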
#### III-A2 Heap Segment Transfer
Unlike the data segment, the heap segment consists of non-contiguous chunks of data, which are allocated during the execution of the process, as shown in Figure 1(a). We use a malloc wrapper to keep a record of each heap chunk allocated during execution. This is maintained as a linked list of objects of the structure datatype that consists of the starting address of the heap chunk, the address of the pointer pointing to that heap chunk (or pointer address), and the size of the chunk. The transfer of the heap data from the source to the target process consists of three steps, namely, matching the number of chunks in the source and target processes, matching the chunk sizes, and updating the pointers in the target process. These steps are illustrated in Figure 1.
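The heap bookkeeping described above can be sketched as follows; the structure, macro, and function names are illustrative stand-ins, not the library's actual identifiers. A macro (rather than a plain function wrapper) is one way to capture the address of the pointer variable itself, which is what allows the pointers to be patched later in the target process.

```c
#include <stdlib.h>

/* One record per heap chunk, kept in a linked list as described
 * in the text: chunk address, address of the owning pointer,
 * and chunk size. */
typedef struct chunk_rec {
    void  *chunk;      /* starting address of the heap chunk  */
    void **ptr_addr;   /* address of the pointer to the chunk */
    size_t size;       /* size of the chunk in bytes          */
    struct chunk_rec *next;
} chunk_rec_t;

static chunk_rec_t *chunk_list = NULL;

static void record_chunk(void *chunk, void **ptr_addr, size_t size)
{
    /* The record itself is allocated with the real malloc and is
     * deliberately not tracked. */
    chunk_rec_t *rec = malloc(sizeof(*rec));
    rec->chunk = chunk;
    rec->ptr_addr = ptr_addr;
    rec->size = size;
    rec->next = chunk_list;
    chunk_list = rec;
}

/* The macro sees the pointer variable, so &(ptr) is available. */
#define TRACKED_MALLOC(ptr, sz)                               \
    do {                                                      \
        (ptr) = malloc(sz);                                   \
        record_chunk((void *)(ptr), (void **)&(ptr), (sz));   \
    } while (0)
```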
#### III-A3 Stack Segment Transfer
Unlike the previous two segments, the stack is an active component in all the operations which are running as it keeps track of the functions called and the data used in those functions. Therefore, we cannot simply transfer the stack like in the previous two cases without corrupting the currently executing function. We utilize the \(setjmp\) function to save the current calling environment into a jmp_buf structure at the source process and send it to the target process, migrate the stack pointer to a safe data segment
Fig. 1: Heap Segment Transfer Procedure
in the target process, transfer the stack segment, and retrieve the calling environment at the source and target process by calling \(longjmp\) function with the jmp_buf structure. The state of both processes is restored to that of the source process just before the replication procedure started. This procedure is the same as the Condor checkpointing procedure [25] and is illustrated in Figure 2.
In this manner, the target process, having an equivalent data, heap, and stack segment and also having the same process state as the source process just before replication, essentially becomes a replica of the source process.
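The control flow of the stack transfer just described can be sketched as below. This is a minimal sketch: the transfer helpers are only declared (assumed to be provided elsewhere), and a real implementation must perform the stack overwrite while executing on a scratch stack, as noted above.

```c
#include <setjmp.h>

static jmp_buf env;

/* Assumed helpers, not shown here: */
void send_env_and_stack(jmp_buf *e);  /* source: ship env + stack   */
void recv_env_and_stack(jmp_buf *e);  /* target: install them while
                                         running on a scratch stack */

void replicate_stack_source(void)
{
    if (setjmp(env) == 0) {
        /* First return: capture the calling environment and send
         * it, together with the raw stack bytes, to the target. */
        send_env_and_stack(&env);
        longjmp(env, 1);          /* retrieve own environment */
    }
    /* Second return: the source resumes here ... */
}

void replicate_stack_target(void)
{
    recv_env_and_stack(&env);
    longjmp(env, 1);              /* ... and so does the target, at
                                     exactly the same point, since
                                     it now runs the source's stack */
}
```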
### _User Level Failure Mitigation_
User Level Failure Mitigation [8], or ULFM, was proposed by a working group in the MPI Forum as a set of basic interfaces and semantics to enable libraries and applications to survive process failures and, subsequently, to repair the state of the MPI world.
ULFM provides various functions to the users. These are as follows:
* **MPI_Comm_failure_ack**: Marks a communicator with the total number of failed processes.
* **MPI_Comm_failure_get_acked**: Intersects the failed process group with the communicator group to output the group of failed processes in that communicator.
* **MPI_Comm_revoke**: Revokes the communicator. This results in any operation involving that communicator returning with the MPI_ERR_REVOKED error code.
* **MPI_Comm_is_revoked**: Checks if the communicator is locally revoked.
* **MPI_Comm_shrink**: Used to shrink a communicator such that it does not have any failed processes.
The general flow of ULFM is that when a process attempts to communicate with a failed process, it gets an error code and is redirected to a user-defined error handler. Inside the error handler, the user would revoke the communicator, which would result in all the processes in the communicator eventually reaching the error handler (essentially error propagation). The user can subsequently shrink the communicator, rebalance the load, take any other measures to ensure correctness, and then continue execution.
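This flow can be sketched as a communicator error handler, using the function names listed above (in Open MPI's ULFM implementation these routines carry an MPIX_ prefix); the load-rebalancing and replay steps are application-specific and only indicated by comments.

```c
#include <mpi.h>

static MPI_Comm world;   /* the communicator being protected */

static void ft_error_handler(MPI_Comm *comm, int *errcode, ...)
{
    MPI_Group failed;
    MPI_Comm  repaired;

    /* Ensure every rank observes the failure. */
    MPI_Comm_revoke(*comm);

    /* Optional bookkeeping: which processes died? */
    MPI_Comm_failure_ack(*comm);
    MPI_Comm_failure_get_acked(*comm, &failed);

    /* Drop the dead ranks and continue on the survivors. */
    MPI_Comm_shrink(*comm, &repaired);
    world = repaired;

    /* Application-specific steps follow: rebalance the load,
     * replay lost messages, then resume normal execution. */
    (void)errcode;
}
```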
## IV Using Multiple MPI Libraries together
Production supercomputing systems have native MPI libraries that are specifically tuned to exploit the underlying hardware architecture, including the network topology (e.g., Dragonfly topology in Cray systems), in order to maximize performance. Most of these native MPI libraries on production systems do not provide fault tolerance. On the other hand, fault-tolerant MPI libraries are built either completely independently or by extending an existing MPI library. Such libraries do not take advantage of the underlying hardware architecture since they follow generic communication algorithms. Thus, in many cases, application developers will have to make a trade-off between using a library that maximizes their application's performance and a library that provides fault tolerance but with a lower performance. We have designed our library in a way that allows the user to utilize the efficient implementations of the native MPI libraries while also having fault tolerance without any changes to the code bases of the native MPI libraries. Our approach involves using two MPI libraries at the same time. One of these libraries is a fault-tolerant library (Open MPI with ULFM in our case). This library is responsible for all fault tolerance mechanisms, including detection, propagation, and recovery. The other library is a native MPI library chosen by the user and used for actual MPI communications. Our work is the first implementation of its kind that simultaneously utilizes two MPI libraries while also providing fault tolerance support when one of the libraries was not originally designed for it.
### _Conflict resolution and Dynamic Loading_
The first challenge in attempting to utilize multiple MPI libraries in a single program is related to the conflicts that would be caused due to the use of the same variables and data structures with different definitions in the libraries. For example, different MPI libraries have their own implementations of MPI_COMM_WORLD, MPI_Init, etc. We start by providing wrappers for the various conflicting declarations in the two libraries so that we can differentiate between them properly. For this, we use a script to extract all the "#define", "typedef", and "enum" declarations in the mpi.h files of both the Open MPI (OMPI) and the other MPI library (EMPI or External MPI). We modify these extracted lines such that all instances of the pattern MPI are replaced with OMPI to refer to the Open MPI library and EMPI to refer to the external (native) MPI library. Many of the patterns, like MPI_Send and MPI_Comm_rank, are functions that are declared but not defined in the header file. We dynamically load their definitions into our program at the start of MPI_Init using the functionality provided by the dl library in the UNIX API. With this, the
Fig. 2: Stack Segment Transfer Procedure
functions and symbols of both libraries can be used without conflicts by simply using OMPI or EMPI keywords for all the calls.
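A minimal sketch of this dynamic-loading step is shown below; the shared-object paths and pointer names are illustrative. MPI_Init is used as the example because its signature is identical across libraries.

```c
#include <dlfcn.h>
#include <stdio.h>

typedef int (*mpi_init_fn)(int *argc, char ***argv);

static mpi_init_fn OMPI_Init_ptr, EMPI_Init_ptr;

int load_mpi_symbols(void)
{
    /* RTLD_LOCAL keeps each library's symbols private, so the
     * two definitions of "MPI_Init" cannot clash. */
    void *ompi = dlopen("libmpi_ompi.so", RTLD_NOW | RTLD_LOCAL);
    void *empi = dlopen("libmpi_empi.so", RTLD_NOW | RTLD_LOCAL);
    if (!ompi || !empi) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return -1;
    }
    OMPI_Init_ptr = (mpi_init_fn)dlsym(ompi, "MPI_Init");
    EMPI_Init_ptr = (mpi_init_fn)dlsym(empi, "MPI_Init");
    return (OMPI_Init_ptr && EMPI_Init_ptr) ? 0 : -1;
}
```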
### _Establishing a combined MPI environment_
Most MPI libraries establish the MPI environment using the mpirun command. This command creates a server process, which forks child processes in which the program is executed. The server process is responsible for keeping track of process states, dynamically spawning new processes, printing the stdout output stream for all child processes in a single terminal, and other coordination operations. Different libraries have different implementations of mpirun, and thus, processes launched by mpirun of one library would be equivalent to singleton processes in any other library, where these processes would all have a world size of 1 and rank 0 without any way to communicate between them using MPI. However, in our library, we cannot allow the processes to initialize as a singleton process for either library as we want to utilize both the communication functions from the external MPI library and have a consistent communicator from Open MPI that can be used to check for failures. We have opted to start the processes using the external MPI's mpirun and then establish a connection between the processes and the server process of Open MPI. This allows us to have no modifications on the external MPI end as its implementation varies across libraries, and the implementations may not be open-source. As Open MPI with ULFM would be the common library across all of the implementations and is an open-source implementation, we make modifications on this end in order to establish connections.
In Open MPI, the mpirun command loads up a server process defined in the PRTE (PMIx Reference Run Time Environment) library that performs all the initialization tasks. In the general workflow of starting up processes using Open MPI, the PRTE server initializes a PMIx server module. Following this, for each MPI process required by the application, it creates four pipes, then forks the child process. The pipes are used to redirect the standard I/O streams. After the redirects are complete, all the pipes are closed, and the MPI program is executed on the child process. When the child process calls the MPI_Init function, a PMIx client module is initialized on each of the processes. It searches its environment variables for the PMIx address to the PRTE server. If an address is found and the following attempt to connect to the server is successful, the process is now treated as an MPI process, as shown in Figure 3.
In our implementation, we modify this module such that instead of forking a child process, the Open MPI server process copies the relevant environment variables and the PID of the server process to a file. This file is then read by the processes spawned by the external MPI's mpirun based on their rank after they have been initialized as non-singleton external MPI processes. Now, pipe connections for redirecting the input, output, and error streams will have to be established between Open MPI's PRTE server and EMPI's MPI processes. The primary challenge here is that the EMPI's MPI process is not a child of the PRTE server, and therefore it does not have an identical file descriptor table to establish the pipe connections. To resolve this, we use the concept of ancillary messages using UNIX domain sockets. This allows the processes to exchange a set of open file descriptors. We send the data containing the file descriptors pointing to the ends of the pipe that would normally be open in the child processes spawned by the PRTE server. The steps are illustrated in Figure 4. Through this, the process is able to initialize as an MPI process for both libraries simultaneously.
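The fd-passing step can be sketched with a standard SCM_RIGHTS ancillary message over a UNIX domain socket; this is generic POSIX code rather than the library's exact routine, and it assumes at most eight descriptors per message.

```c
#include <sys/socket.h>
#include <sys/uio.h>
#include <string.h>

/* Send nfds open file descriptors (nfds <= 8 here) over the
 * connected UNIX domain socket 'sock'. The receiver gets fresh
 * descriptor numbers referring to the same open pipe ends. */
int send_fds(int sock, const int *fds, int nfds)
{
    char dummy = 'x';                 /* 1 byte of real payload */
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
    union {                           /* aligned control buffer */
        char buf[CMSG_SPACE(8 * sizeof(int))];
        struct cmsghdr align;
    } u;
    struct msghdr msg;

    memset(&msg, 0, sizeof(msg));
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = u.buf;
    msg.msg_controllen = CMSG_SPACE(nfds * sizeof(int));

    struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
    cm->cmsg_level = SOL_SOCKET;
    cm->cmsg_type = SCM_RIGHTS;       /* transfer open fds */
    cm->cmsg_len = CMSG_LEN(nfds * sizeof(int));
    memcpy(CMSG_DATA(cm), fds, nfds * sizeof(int));

    return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
}
```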
### _Process termination_
When a process terminates, it sends a SIGCHLD signal to its parent. This signal is used by the server process as a trigger to check if the child process is alive using the waitpid system call. In the case of failure, libraries without in-built fault tolerance generally proceed to kill all the child processes. We cannot allow this to happen, as it would invalidate our main objective of tolerating failures. In Open MPI with ULFM, the failed process is deregistered from the server, while the group of failed processes is updated at the MPI layer in the processes running the program. In our case, the processes running the program are not actually the child processes of the Open
Fig. 4: Establishing pipe connection across unrelated processes
Fig. 3: Open MPI Process Structure
MPI library, and thus, they do not send the SIGCHLD signal to the PRTE server upon failure. To resolve this, we have used the ptrace system call at the PRTE server that allows the server process to trace the target process and also causes the server process to receive SIGCHLD signals when the target process terminates. On the other end, as the processes running the program are the child processes of the server started by the external MPI's mpirun, they send the SIGCHLD signal to that server on failure. This can result in the external MPI's server process killing all of its children when any of them fails. To prevent this, we override the waitpid system call with a custom implementation on the external MPI's mpirun using the LD_PRELOAD environment variable. This custom implementation internally makes the actual system call but returns in a manner that hides the failed processes from the executable.
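A minimal sketch of such an interposition is shown below. The paper does not spell out the exact return convention used to hide failures, so the policy here (reporting "no child ready", as with WNOHANG) is an assumption.

```c
#define _GNU_SOURCE
#include <sys/types.h>
#include <sys/wait.h>
#include <dlfcn.h>

pid_t waitpid(pid_t pid, int *status, int options)
{
    static pid_t (*real_waitpid)(pid_t, int *, int) = NULL;
    if (!real_waitpid)
        real_waitpid = (pid_t (*)(pid_t, int *, int))
                           dlsym(RTLD_NEXT, "waitpid");

    pid_t ret = real_waitpid(pid, status, options);
    if (ret > 0 && status && WIFSIGNALED(*status)) {
        /* A child was killed: reap it internally but report that
         * no state change was observed, so mpirun does not tear
         * down the remaining processes. */
        *status = 0;
        return 0;
    }
    return ret;
}
```

Compiled as a shared object (e.g. gcc -shared -fPIC -o hidewait.so hidewait.c -ldl) and injected with LD_PRELOAD=./hidewait.so, the override takes effect without touching the external MPI's code base.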
Figure 5 depicts the structure our processes achieve after startup. Through these modifications, we now ensure that the external MPI library does not know about process failures while the Open MPI library does.
### _Extending to multiple nodes_
In large-scale systems with multiple nodes, MPI jobs are generally executed using job schedulers like PBS and Slurm. When the node mapping is provided using a "hostfile" parameter, the mpirun process uses public functions provided by the job schedulers to start up a server daemon in each of the nodes. This server daemon is responsible for spawning (forking) and tracking the child processes on different cores in the same node. The mpirun process keeps track of all the daemons as well as all the MPI processes. It receives updates about the state of the MPI processes from the daemons and propagates them to all the other daemons if needed. ULFM in Open MPI relies on this information to keep track of the failed processes.
As the MPI environment is already established for the single node case, we only need some adjustments to account for multiple nodes. Firstly, we provide the same node mappings to the "hostfile" parameter for both the Open MPI and external MPI's mpirun. Individual process failures are already accounted for as the PRRTE library has mechanisms to propagate failures to all the PRTEDs (PRTE daemons), which pass them on to their local MPI processes through the PMIx servers, while the external MPI's server daemons would not detect the failure and thus not pass it on to the mpirun server process. We mainly need to consider the case of node failures, which are common in large-scale systems. These result in all the processes in an entire node failing, which includes the server daemon process. Open MPI with ULFM (through PRRTE) already has mechanisms to update all the alive daemons regarding the failure of all the processes in the failed node. We mainly need to ensure that the mpirun process from the external MPI library does not detect these failures. This detection is achieved by polling the sockets using the poll system call and reading them using the read system call if poll returns a POLLIN event. We override these system calls to ignore any terminated processes using the LD_PRELOAD environment variable. Through this, we can initialize our processes as both Open MPI and external MPI processes at the same time across multiple nodes and tolerate even multiple node failures. Figure 6 depicts the structure our processes achieve after startup over multiple nodes.
## V PartRePer-MPI Library Implementation
Our implementation uses six communicators from the external MPI library for communication and one communicator from the Open MPI library for error checking.
(1) **oworldComm and eworldComm**: These are initially duplicates of OMPI_COMM_WORLD (Open MPI for fault tolerance) and EMPI_COMM_WORLD (native MPI library for MPI communications) respectively.
(2) **EMPI_COMM_CMP**: This communicator contains all the computational processes. The processes in this communicator are always the first nComp (variable holding the number of computational processes) processes in the eworldComm
Fig. 5: Process structure after startup in our library
Fig. 6: Process structure after startup over multiple nodes
communicator (ordered by increasing rank). This communicator is null for all replica processes.
(3) **EMPI_COMM_REP**: This communicator contains all the replica processes. The processes in this communicator are always the last nRep (variable holding the number of replica processes) processes in the eworldComm communicator. This communicator is null for all computational processes.
(4) **EMPI_CMP_REP_INTERCOMM**: This is an inter-communicator bridging the above-mentioned communicators for computational and replica processes, respectively. This is null when there are no replica processes alive.
(5) **EMPI_CMP_NO_REP**: This communicator contains all the computational processes that do not have a replica. This communicator is null for all replica processes. It is also null for computational processes that have a replica.
(6) **EMPI_CMP_NO_REP_INTERCOMM**: This is an inter-communicator bridging the EMPI_CMP_NO_REP communicator with EMPI_COMM_REP. This is null when there are no replica processes alive or when all computational processes have a replica.
### _Initialization_
We first dynamically load the Open MPI and the external MPI libraries and dynamically define all of our OMPI and EMPI functions. We then initialize the external MPI library (EMPI_Init), identify the number of computational and replica processes, and create the six above-mentioned communicators for EMPI. We then perform the replication procedure using the EMPI communication functions along with the EMPI_CMP_REP_INTERCOMM communicator to copy the process images from the computational processes (in EMPI_COMM_CMP) to the replica processes (in EMPI_COMM_REP). At this point, we also establish the replica mappings and proceed to dynamically connect the processes with the separately started PRTE server. Following that, the processes are initialized as Open MPI processes (OMPI_Init). Finally, all the processes synchronize with a barrier.
### _Peer-to-Peer Communication_
The computational processes send to/receive from the computational process corresponding to their destination/source, and the replica processes send to/receive from the replica process corresponding to their destination/source. If the destination doesn't have a replica, then only the computational process performs the communication. If the source doesn't have a replica, then the source computational process communicates with both the computational and replica destination processes in parallel. The EMPI communicators are used for actual communication. A send message-id is piggybacked in all the transmissions and is also saved at the sender, along with all the arguments, for every successful transmission. For the transmission, the corresponding EMPI communication is invoked in a loop until success, or until the source/destination becomes invalid, with the source/destination being modified if needed to account for any failures of the source or destination processes.
For the actual calls, we first check if the corresponding OMPI communicator has been revoked. If not, then we check if any process in the communicator has failed. If not, then we call EMPI_Isend and EMPI_Irecv, followed by a loop containing EMPI_Test. Each iteration of the loop also checks for the revoked communicator and the failed processes in the communicator. This loop repeats until either the communication request successfully finishes or there is an error code generated because of the revoked communicator or failed processes. In both erroneous cases, we set the error code to the values specified in the ULFM section. The processes then enter an error handler that we have defined to deal with the failure. The entire workflow for each EMPI_Send or EMPI_Recv operation is depicted in Figure 7. We have extensively analyzed and tested this workflow to account for all possible cases of processes detecting failures, and subsequently being redirected to the error handler in this dual MPI environment, while still minimizing overhead by interleaving communication with active failure detection. Non-blocking sends and receives can be implemented in a similar way using a structure for saving the parameters related to the pending communication and providing MPI_Request datatype as a pointer to this structure.
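The workflow in Figure 7 can be summarized in code roughly as follows, assuming the generated OMPI_/EMPI_ wrapper declarations from Section IV-A are in scope; comm_has_failures(), log_send(), and enter_error_handler() are illustrative helpers rather than the library's exact internal API.

```c
/* Illustrative helpers, assumed to be provided elsewhere: */
int  comm_has_failures(OMPI_Comm c);   /* via failure_ack/get_acked */
void log_send(int dest, int tag, const void *buf, int cnt,
              EMPI_Datatype dt);       /* kept for message recovery */
int  enter_error_handler(OMPI_Comm c); /* revoke, shrink, repair    */

int ft_send(const void *buf, int cnt, EMPI_Datatype dt, int dest,
            int tag, EMPI_Comm ecomm, OMPI_Comm ocomm)
{
    EMPI_Request req;
    int done = 0, revoked = 0;

    OMPI_Comm_is_revoked(ocomm, &revoked);     /* pre-checks */
    if (revoked || comm_has_failures(ocomm))
        return enter_error_handler(ocomm);

    EMPI_Isend(buf, cnt, dt, dest, tag, ecomm, &req);
    while (!done) {                    /* interleave the transfer   */
        EMPI_Test(&req, &done, EMPI_STATUS_IGNORE);
        OMPI_Comm_is_revoked(ocomm, &revoked);  /* with detection   */
        if (revoked || comm_has_failures(ocomm))
            return enter_error_handler(ocomm);
    }
    log_send(dest, tag, buf, cnt, dt);
    return 0;
}
```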
### _Collective Communication_
We implement our collective communications by performing the equivalent EMPI collective on the computational processes and sending the results to the replica processes. The workflow here is also very similar to the peer-to-peer communication case, as depicted in Figure 7. It essentially involves calling the non-blocking variant of that EMPI collective followed by a loop checking for a revoked communicator or failed processes, progressing the Open MPI run-time engine, and calling EMPI_Test. Also similar to peer-to-peer communication, we maintain logs for each communication and store them along with a separate _last_collective_id_ that updates with every collective to recover the lost in-flight messages when a failure occurs during a collective communication.
## VI Failure Management
### _Repairing the World_
When a process enters the error handler, it first revokes the OMPI communicator if it has not already been revoked so that the failure gets propagated and all the processes eventually enter the error handler. At this point, oworldComm (the Open MPI communicator holding all the processes) is shrunk and all dead processes are removed from it. If the failed process is a replica process, it is simply dropped, and the ranks of some of the replica processes and the computational-replica maps are updated. If the failed process is a computational process that has a replica, then the newly shrunk communicator has its processes shuffled such that the replica now becomes the computational process, following which it is considered that the replica was the one that had failed. Converting to a computational process at this point simply involves being a part of the first nComp processes (by rank) in oworldComm
and setting the EMPI_COMM_REP communicator to null. We then regenerate the EMPI communicators using the shrunk processes.
### _Message Recovery_
With the communicators appropriately repaired, the final task is to account for any transmissions lost during the process. This is handled using the logs that we maintained during the communications. We can also have cases where certain processes may already have received the data that would eventually be sent to them. An example of this could be when a replica process converts into a computational process, and the replica process may already have received certain data from other replica processes that would not have been received by its computational process before failure. For peer-to-peer communications, non-blocking receives that have not been completed are recalled using the logs. We use an EMPI_Alltoall call over the eworldComm communicator to distribute the number of messages received from every other process. We then use EMPI_Alltoallv over the eworldComm communicator to distribute the actual ids of all the receives made from every other process. All the messages that were sent by a process but not received at the destination are resent using the message logs. All the messages that were received by a process but not sent from the source are marked using their sendids to be skipped in the future. For collectives, we first identify the collectives that every live process has completed. Starting from that point, processes repeat each remaining collective in the same order from their logs and exit the error handler when there are no more logs remaining. Those processes would then continue with the collective calls as specified in the application.
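The two-phase id exchange can be sketched as below; it is written against plain MPI for readability (the library invokes the EMPI_ variants over eworldComm), and recv_count[i]/recv_ids[i] are assumed to hold the count and ids of messages already received from peer i.

```c
#include <mpi.h>
#include <stdlib.h>

void exchange_recv_ids(int nprocs, int *recv_count, int **recv_ids,
                       MPI_Comm comm)
{
    /* Step 1: every rank learns how many of its sends each peer
     * has already received. */
    int *peer_counts = malloc(nprocs * sizeof(int));
    MPI_Alltoall(recv_count, 1, MPI_INT,
                 peer_counts, 1, MPI_INT, comm);

    /* Step 2: variable-length exchange of the received ids. */
    int *sdisp = malloc(nprocs * sizeof(int));
    int *rdisp = malloc(nprocs * sizeof(int));
    int stot = 0, rtot = 0;
    for (int i = 0; i < nprocs; i++) {
        sdisp[i] = stot; stot += recv_count[i];
        rdisp[i] = rtot; rtot += peer_counts[i];
    }
    int *sendbuf = malloc(stot * sizeof(int));
    for (int i = 0; i < nprocs; i++)
        for (int j = 0; j < recv_count[i]; j++)
            sendbuf[sdisp[i] + j] = recv_ids[i][j];

    int *acked = malloc(rtot * sizeof(int));
    MPI_Alltoallv(sendbuf, recv_count, sdisp, MPI_INT,
                  acked, peer_counts, rdisp, MPI_INT, comm);

    /* Compare 'acked' against the send logs: resend missing ids,
     * mark already-delivered ids to be skipped in the future. */
    free(peer_counts); free(sdisp); free(rdisp);
    free(sendbuf); free(acked);
}
```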
## VII Experiments and Results
We performed our measurements on a computational cluster. This cluster consists of 29 compute nodes with 48 cores per node and 384GB RAM connected using an Infiniband interconnect. Experiments were run utilizing up to 512 cores on the cluster.
We used the following benchmarks from the NAS parallel benchmark suite [26]: CG, BT, LU, EP, SP, IS and MG. We have also used CloverLeaf (CL), a mini-application that solves the compressible Euler equations on a Cartesian grid using an explicit, second-order accurate method [27] and another application related to Plasma Particle-In-Cell (PIC) simulation skeleton codes [28]. CloverLeaf operates on a system of three partial differential equations, which are mathematical statements of the conservation of mass, energy, and momentum. The equations are solved on a staggered grid in which each cell center stores the three quantities: energy, density, and pressure, and each node stores a velocity vector. The PIC simulation codes model plasmas as particles that interact self-consistently via the electromagnetic fields they themselves produce. PIC codes generally involve accumulating some particle quantity, such as charge, on a grid, solving Maxwell's equation or a subset to obtain the electric and/or magnetic fields, finding particle forces, and updating the particle coordinates. We have conducted our tests in failure-free conditions and in the presence of failures. We used MVAPICH2 [29] as the baseline for comparisons.1
Footnote 1: To the best of our knowledge, we could not find a replication-based MPI library available for download, installation, and comparison with our approach.
### _Failure-free Overheads_
We use a failure-free environment to measure the base overhead due to our library. We ran the benchmarks for 64, 128, and 256 processes using the base MVAPICH2 MPI library to obtain the baseline time taken for each of the benchmarks. We then ran the benchmarks using our library with MVAPICH2 loaded as the external MPI library. We ran the benchmarks using our library on 64, 128, and 256 computational processes and varied the number of replica processes in each case using replication degrees of 0 (zero replication), 6.25%, 12.5%, 25%, 50%, and 100% (full replication). Note that a replication degree of \(rDegree\) denotes the percentage of computational processes that have replicas. Thus, a 256-process benchmark execution with a replication degree of 100% involves a total
Fig. 7: Workflow for Fault Tolerant Peer-to-Peer Communication with replicas
of 512 processes, with 256 computational processes and 256 replica processes. We have measured the overall overhead our library incurs for the additional communication due to the presence of replicas as well as other local operations outside of direct calls to EMPI functions, like logging and checking for errors. Each result was obtained as an average over five executions. Figure 8 shows the results of the failure-free experiments.
We can see that most of the experiments from the NAS parallel benchmark suite give negligible overheads with the use of our library as compared to using MVAPICH2 directly. In most cases, the overheads due to the library are under 6.4% with a heavy skew towards the lower values. We even see a significant number of cases where our library outperforms the baseline MVAPICH2 with negative overheads. In fact, for the IS benchmark, we find that the use of our library results in a 14.39-74.15% reduction in execution time when compared to the baseline MVAPICH2 results. Upon further analysis, we found that this is due to the combination of using non-blocking EMPI_Ialltoallv followed by a loop of EMPI_Test in our library performing better than simply calling the blocking EMPI_Alltoallv in the case of MVAPICH2 for the IS benchmark. In other cases, the negative overheads are mostly above -5%. As the actual difference in terms of absolute time values is only a few seconds, these negative overheads could be a consequence of OS noise. Another factor could be that dynamic loading results in a small subset of all the functions defined in each library getting loaded, resulting in a better cache hit rate when only a limited number of functions are called. We also find that the overhead percentages generally do not vary with the replication degree. The only case where the use of our library results in significant overhead (up to 33.16%) is in the case of the MG benchmark execution on 256 processes with all processes replicated. We identified that the cause of this is most likely a threshold condition being met when using 512 processes that causes communication to slow down. We found this spike in execution time even when MVAPICH2 is directly used with 512 processes. We also tested this benchmark using 256 computational and 255 replica processes and only obtained a 12.12% overhead, which is significantly lower than what we obtained by adding just one more process to the job. We see similarly negligible overheads that are under 9.7% in the case of the application benchmarks.
Figure 8: Failure-Free Results
### _Experiments in the presence of failures_
Note that in replication-based fault-tolerant libraries, an application is interrupted when both a computational process and its replica fail. The objective of our replication mechanism is to work along with the traditional checkpoint/restart mechanism by effectively increasing the Mean Time To Interruption (MTTI) of the application and consequently allowing for longer checkpoint intervals to be used. We have conducted experiments with a fault injector in order to obtain a measure of the improvement in the MTTI that can be provided by replication. We have conducted these experiments at 256 computational processes using varying degrees of replication. We use a fault injector that runs independently of the benchmark program. It uses a Weibull Distribution to generate fault injection timings and randomly kills one of the MPI processes after the generated time has passed. If the killed random process had a replica, then the job continues executing. If the killed random process did not have a replica, then the job aborts and is restarted or continued later, recovering from the earlier checkpoint. Therefore, higher replication degrees would lead to higher times between application failures and checkpoint recoveries.
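The injector logic can be sketched as follows; the shape/scale parameters and the pid lookup are illustrative, and the Weibull waiting times are drawn by inverse-transform sampling.

```c
#include <math.h>
#include <stdlib.h>
#include <unistd.h>
#include <signal.h>
#include <sys/types.h>

/* Inverse CDF of Weibull(shape k, scale lambda):
 * t = lambda * (-ln(1 - u))^(1/k),  u ~ U(0,1). */
static double weibull_sample(double k, double lambda)
{
    double u = (rand() + 1.0) / ((double)RAND_MAX + 2.0); /* in (0,1) */
    return lambda * pow(-log(1.0 - u), 1.0 / k);
}

/* Runs independently of the benchmark: wait a Weibull-distributed
 * time, then SIGKILL a random MPI process. */
void inject_faults(double k, double lambda,
                   const pid_t *pids, int nprocs)
{
    for (;;) {
        sleep((unsigned)weibull_sample(k, lambda));
        int victim = rand() % nprocs;
        kill(pids[victim], SIGKILL);
    }
}
```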
We performed these runs for ten executions for each replication degree and obtained the average of the total time taken outside the error handler. We have conducted these tests on the CG, BT, and LU benchmarks. While calculating the MTTI of the application, we do not count the time taken inside the error handler as it does not contribute to useful work but instead contributes to additional overheads. Note that counting the error handling times will only increase MTTI for our library, thereby showing increased benefits due to our library. We have also measured the overheads induced by our library under failure conditions by comparing the base MVAPICH2 execution time for 256 processes under failure-free conditions with our library at 256 computational processes and 256 replica processes in the presence of failures. Note that the current objective of this library is not to break past the replication efficiency barrier but rather to minimize the overheads incurred by adding redundant processes in order to tolerate failures.
Figure 9(a) shows the results of the experiments in the presence of failures. We can observe that even under failure conditions with multiple failures striking each benchmark at each run, our library completes the program with low overheads of 11-40% when compared to MVAPICH2 execution in failure-free conditions. The LU benchmark is the only case with significant overhead, and this is mostly due to the recovery process in the error handler, as this benchmark involves many peer-to-peer communications with large message sizes occurring asynchronously. We can also observe that most of the overheads across all the benchmarks are due to the error handler. The impact of the failures only introduces negligible overheads, if any, due to the change in the number of live processes and, consequently, the communication pattern they employ.
Figure 9(b) shows the MTTI for different replication degrees. We can observe that 100% replication significantly increases the MTTI across all the benchmarks. It is to be noted that all the 100% replication tests only stopped due to completion, and thus, their actual MTTI values are even higher than what is presented. However, even for the lower replication degrees like 50% and 25%, we can observe a significant improvement in the MTTI. We can observe that 50% replication nearly doubles the MTTI in the CG benchmark. In the BT benchmark, 25% replication results in an increase of around 60% in MTTI. In the LU benchmark, the increments to MTTI are almost linear, and each replication degree increments the MTTI by a significant amount. These results show the benefits of partial replication in reducing the checkpoint recovery time under the conditions of failures with standard Weibull distribution.
## VIII Conclusions and Future Work
In this work, we have implemented a fault-tolerant MPI library, _PartRePer-MPI_, that utilizes replication for fault tolerance while also providing performance through the use of efficient communication algorithms from native MPI libraries that may not inherently have fault tolerance support. We performed experiments with our library, PartRePer-MPI,
Fig. 9: Overheads and Mean Time To Interruption in the presence of failures
using seven NAS Parallel Benchmarks and two scientific applications. We observed that the failure-free overheads in PartRePer-MPI, when compared to the baseline MVAPICH2, are only up to 6.4% for the NAS parallel benchmarks and 9.7% for the scientific applications. The overheads in the presence of failures are 11-40%. We also showed that the use of partial replication in our library results in doubling the MTTI in some cases and hence reduces checkpoint recovery overheads under conditions of failures with a Weibull distribution.
For our future work, we plan to expand our library to support even more standard MPI features such as dynamic communicator creation, I/O, and one-sided communications. We also plan to implement dynamic replication and an adaptive partial replication strategy based on failure prediction for increased application efficiency in the presence of failures.
|
2303.15741 | Gamma rays from Nebulae around Recurrent Novae | Novae were discovered to emit transient gamma rays during the period of
several days to a few weeks after initial explosion, indicating presence of
acceleration processes of particles in their expanding shells. In the case of
recurrent novae, electrons can be in principle accelerated in the nova shells
for the whole recurrence period of nova producing delayed $\gamma$ ray emission
as considered in Bednarek (2022). Here we extend the ideas presented in this
article by considering the fate of electrons which diffuse out of the shells of
novae supplying fresh relativistic electrons to the recurrent nova
super-remnants during the whole active period of nova ($\ge 10^4$ yrs). We
develop a model for the acceleration of electrons and their escape from the
nova shells. The electrons within the recurrent nova super-remnants produce
$\gamma$ rays in the comptonization process of the radiation from the red giant
companion and the Cosmic Microwave Background Radiation. As an example, the
case of a symbiotic nova RS Oph (with the recurrence period estimated on
$\sim$10-50 yrs) is considered in more detail. Predicted $\gamma$-ray emission
from the nova super-remnant around RS Oph is discussed in the context of its
observability by satellite experiments (i.e. Fermi-LAT) as well as current and
future Cherenkov telescopes. | W. Bednarek, J. Sitarek | 2023-03-28T05:33:00Z | http://arxiv.org/abs/2303.15741v1 | # Gamma rays from Nebulae around Recurrent Novae
###### Abstract
Novae were discovered to emit transient \(\gamma\) rays during the period of several days to a few weeks after initial explosion, indicating presence of acceleration processes of particles in their expanding shells. In the case of recurrent novae, electrons can be in principle accelerated in the nova shells for the whole recurrence period of nova producing delayed \(\gamma\) ray emission as considered in Bednarek (2022). Here we extend the ideas presented in this article by considering the fate of electrons which diffuse out of the shells of novae supplying fresh relativistic electrons to the recurrent nova super-remnants during the whole active period of nova (\(\geq 10^{4}\) yrs). We develop a model for the acceleration of electrons and their escape from the nova shells. The electrons within the recurrent nova super-remnants produce \(\gamma\) rays in the comptonization process of the radiation from the red giant companion and the Cosmic Microwave Background Radiation. As an example, the case of a symbiotic nova RS Oph (with the recurrence period estimated on \(\sim\)10-50 yrs) is considered in more detail. Predicted \(\gamma\)-ray emission from the nova super-remnant around RS Oph is discussed in the context of its observability by satellite experiments (i.e. _Fermi_-LAT) as well as current and future Cherenkov telescopes.
novae, cataclysmic variables -- binaries: symbiotic -- radiation mechanisms: non-thermal -- gamma-rays: stars
## 1 Introduction
Novae are thermonuclear explosions in a layer of matter accumulated on the surface of a white dwarf (WD) as a result of the accretion process from a companion star in the WD binary system. If the companion of the WD is a main sequence star, then the nova is called a classical nova. In the rare case of a Red Giant (RG) companion, the nova is called symbiotic. It is expected that the recurrence time scale of nova explosions depends on the mass of the WD and the accretion rate. If the nova appears many times during a human lifetime in the same binary system, then it is called a recurrent nova. In this article we concentrate on the case of the symbiotic novae with short recurrence periods (such as the recently observed RS Oph).
The material expelled during a recurrent nova explosion is slowed down due to the interaction with the surrounding medium, forming a nebula dubbed a nova super-remnant (NSR). The structure of such an NSR has been recently investigated by applying hydrodynamical simulations in the case of a specific recurrent nova, M31N 2008-12a (in the Andromeda Galaxy), by Healy-Kalesh et al. (2023). It is found that in subsequent eruptions an NSR is formed with a radius of tens of pc. In fact, such an optical NSR has been recently detected with a projected size of at least 134 by 90 parsecs in the case of the most rapidly recurring nova M31N 2008-12a (Darnley et al., 2019).
Novae have been recently established as a new type of GeV and sub-TeV \(\gamma\)-ray sources (Abdo et al., 2010; Ackermann et al., 2014; H. E. S. S. Collaboration et al., 2022; Acciari et al., 2022), indicating that particles (electrons, hadrons) are efficiently accelerated in their explosions. The GeV \(\gamma\)-ray emission is observed for several days to a few weeks after the nova explosion (e.g. Ackermann et al. 2014). However, for how long the process of acceleration of particles in the nova shell is active is still an open issue. The observed transient \(\gamma\)-ray emission has been proposed to be likely produced by hadrons in collisions with the matter expelled in a nova explosion (e.g. Tatischeff & Hernanz 2007, Abdo et al. 2010, Sitarek & Bednarek 2012, Martin & Dubus 2013, Metzger et al. 2015, Ahnen et al. 2015, Acciari et al. 2022; H. E. S. S. Collaboration et al. 2022, Cheung et al. 2022, Zheng et al. 2022). However, relativistic electrons can also contribute (see e.g. Abdo et al. 2010, Sitarek & Bednarek 2012, Martin & Dubus 2013, Vurm & Metzger 2018, Martin et al. 2018, Bednarek 2022), due to efficient energy losses on the IC process in the radiation from the nova photosphere or the companion RG star (in the case of symbiotic novae).
Recently, Bednarek (2022) considered the model in which the electrons can be accelerated in nova shells for a much longer period of time than indicated by the \(\gamma\)-ray observations. In fact, provided that acceleration of particles occurs within the nova shell, the acceleration process could continue during the time scale corresponding to the recurrence period of the recurrent novae, at least in the part of the shell which propagates in the polar region of the nova binary system, where the shell is not significantly decelerated. In terms of such a model, the time-dependent \(\gamma\)-ray emission produced in the Inverse Compton Scattering (ICS) of the radiation from the photosphere and the RG, as observed in the case of the recurrent Nova RS Oph, has been calculated and compared with the sensitivities of the \(\gamma\)-ray telescopes. In Bednarek (2022) it is concluded that with the future CTA this transient \(\gamma\)-ray emission from electrons in the shells should be detectable over a period of years.
In this follow-up work, we consider the fate of the relativistic electrons which were accelerated in the nova shells but after some time escaped from them into the interstellar space. We suppose that such escaping electrons are deposited in the NSR around the nova. In the case of recurrent novae, such NSRs should be able to accumulate a large amount of relativistic electrons from multiple explosions. These electrons are expected to produce \(\gamma\) rays by scattering mainly the Cosmic Microwave Background Radiation (CMBR) and the thermal radiation from the RG.
As in Bednarek (2022), we consider the recurrent nova RS Oph, which shows one of the shortest average recurrence periods (on average \(\sim\)14.7 yrs, with a spread between 8.6 and 26.6 yrs) among the known Galactic recurrent novae (see Tab. 21 in Schaefer 2010). During its latest outburst in 2021 the source showed \(\gamma\)-ray emission from GeV energies (Cheung et al. 2022, Zheng et al. 2022) up to hundreds of GeV (H. E. S. S. Collaboration et al. 2022; Acciari et al. 2022), becoming the first nova detected in the very-high-energy band. The RS Oph binary is composed of a White Dwarf (WD) and a RG with masses 1.2 - 1.4\(M_{\odot}\) and 0.68 - 0.80 \(M_{\odot}\), respectively (Brandi et al. 2009). The same authors report an orbit with a period of 453.6 days and an inclination of 49\({}^{\circ}\) - 59\({}^{\circ}\). The surface temperature of the RG is estimated at 3600 K and its radius at 67\({}^{+19}_{-16}\) R\({}_{\odot}\) (Dumm & Schild 1998), where \(R_{\odot}\) is the radius of the Sun. The mass-loss rate of the RG is estimated at \(5\times 10^{-7}M_{\odot}/\)yr (Booth, Mohamed, & Podsiadlowski 2016), although values of \(3.7\times 10^{-8}-10^{-6}M_{\odot}/\)yr have also been suggested (Iijima 2008; Schaefer 2009). The wind concentrates around the equatorial plane of the binary system. The wind velocity is estimated at 40 km s\({}^{-1}\) (Wallerstein 1958). The upper limit on the mass expelled during a nova explosion is constrained by the above mass loss rate of the RG and the recurrence period of the nova (\(\sim 15\) yrs) to \(5.6\times 10^{-7}-1.5\times 10^{-5}M_{\odot}\). This material forms a very complicated structure around the nova. The observations of the nova RS Oph a few hundred days after the explosion in 2006 with the _Hubble Space Telescope_ show a two-component flow, composed of a low-velocity high-density equatorial region and a high-velocity low-density polar region. The polar region expanded with a velocity of \((5600\pm 1100)\) km s\({}^{-1}\) (Bode et al. 2006). On the other hand, at about one day after the nova explosion, the expansion velocity was \(\sim(4000-7500)\) km s\({}^{-1}\) (Buil 2006). Therefore, the polar expansion seems not to decelerate significantly. In the case of the outburst in 2021, the radio structure of the nova shell is very similar to that observed in 2006 (Munari et al. 2022). Also the velocity of the nova shell, averaged over the first 34 days, equal to \(\sim\)7550 km s\({}^{-1}\), is generally consistent with that observed in 2006. The initial
velocity of the shell is an important parameter in our modelling since it determines the ballistic time scale of the shell.
Rupen, Mioduszewski, & Sokoloski (2008) estimated the distance to RS Oph at \((2.45\pm 0.37)\) kpc (see also Bailer-Jones et al., 2021). This value is also very similar to the distance derived with the Gaia DR3-derived parallax: \(2.68^{+0.17}_{-0.15}\) kpc (see also Schaefer 2022; Munari et al. 2022). Multiple, partially contradicting distance measurements are available (see the discussion in Acciari et al., 2022).
## 2 Injection of electrons from the Nova
In Bednarek (2022) we discuss the scenario in which electrons are assumed to be continuously accelerated in the nova shell long after the nova explosion. Electrons, confined within the expanding shell, produce \(\gamma\)-rays by scattering the RG and nova soft radiation. Here, we extend this model by assuming that electrons are finally released from the shell into the nova surroundings. They diffuse in the NSR around the recurrent nova, producing persistent \(\gamma\)-ray emission.
### Electrons within the shell
It is assumed that electrons are accelerated within the shell region for a certain period of time, \(t_{\rm inj}\), after the nova explosion. They are confined for some time within the expanding shell of the nova, losing energy on different radiation processes (i.e. synchrotron, IC, Bremsstrahlung). The shells move sub-relativistically through the complex material surrounding the nova. They are finally decelerated in the interaction with the interstellar matter. Relativistic electrons in the shell can be responsible for the extended \(\gamma\)-ray emission from novae on a time scale of years. The \(\gamma\)-ray emission produced by electrons within the shell on a time scale of years has been recently discussed in Bednarek (2022).
To our knowledge, there is no specific model for the acceleration of particles in novae on a time scale of the nova recurrence period. All the presently considered models concentrate on the time scale after the nova explosion corresponding to the observed time scale of the \(\gamma\)-ray emission already detected by the _Fermi_-LAT telescope, i.e. of the order of a month. Therefore, we introduce a general model for the acceleration
Figure 1: Schematic picture of the NSR around a recurrent nova (not to scale). In the case of the Nova RS Oph, every recurrent explosion injects a shell in which electrons are accelerated (2 shells are shown). The shells move with different velocities in the equatorial and polar regions of the binary system. A part of the sphere covered by the equatorial region is marked by the parameter \(\Omega\). In the equatorial region the shells are effectively decelerated by entraining the matter from the RG wind. Shells in both regions are decelerated on the sub-parsec distance scale by entraining the matter from the surrounding interstellar space. The electrons escape from the shells to the surrounding medium due to the (Bohm) diffusion process. These electrons form a nebula around the recurrent binary system, subsequently producing \(\gamma\)-ray by ICS of the CMBR. The emission is expected from a region with a size of the order of a few parsec at sub-TeV \(\gamma\) rays.
of the electrons, their energy losses, and their escape from the shell, which is able to treat these processes on the time scale of the recurrence period of the nova. For the acceleration and propagation of electrons in the nova shell, we exploit the time-dependent model recently considered by Bednarek (2022). Here we recall the main aspects of the model. Two regions around the nova binary system with different properties are considered. Following other works (Chesneau et al., 2007; Walder, Folini, & Shore, 2008; Montez et al., 2022), we assume that the wind from the RG is concentrated in the equatorial region of the binary system, whose extent is defined by the fraction of the whole sphere it covers (i.e. the parameter \(\Omega\)). A part of the nova shell, propagating in the equatorial region, is effectively decelerated due to the entrainment of the matter from the RG wind. On the other hand, the (1 - \(\Omega\)) part of the nova shell, propagating in the polar region, expands freely up to the distances at which the matter entrained from the interstellar space decelerates the shell. In fact, such a two-component structure of the nova shell has been observed in X-rays in the case of the previous explosion of the Nova RS Oph (see Montez et al. 2022), in which the fast shell takes about \(\sim 0.2\) of the whole sphere and the slow shell \(\sim 0.8\). As a result of propagation in two different regions, their physical conditions (such as magnetic field strength, density of matter, radiation field, etc.) differ significantly. These conditions determine the acceleration process of the electrons, their energy losses in the shell, and also the escape conditions from the shell into the NSR.
The electrons are assumed to be injected into the nova shell with a power-law spectrum and an exponential cut-off during an initial time interval starting from the nova explosion. We develop a simple model which allows us to determine the basic physical parameters of the expanding shell in the long period after the explosion, such as its expansion velocity, density of matter, magnetic field strength, and different radiation fields (see for details Sect. 3 in Bednarek, 2022). The magnetic field is assumed to be at some level of equipartition with the kinetic energy of the shell. We take into account the radiation field from the RG and also from the nova photosphere at the early time after the explosion. Then, we define the acceleration process of electrons, as a function of time, and their energy losses. The cut-off energies in the electron spectrum are obtained by balancing the acceleration time scale with their time scale for energy losses or the dynamical time scale of the shell. The total power of the electrons is normalized to the initial kinetic energy of the shell. Since the electrons lose energy on different radiation processes, we numerically calculate the evolution of the equilibrium spectrum of the electrons within the shell at an arbitrary moment after the explosion (for details see Sect. 4 in Bednarek, 2022). Due to the time-dependent conditions within the shell, we follow the fate of electrons with different energies, applying the time step method to a single shell. With \(\Delta t\) being the time step, the location of the shell at the time \(t_{\rm n}\) is calculated from \(R_{\rm n}=R_{\rm n-1}+v_{\rm sh}(t_{\rm n-1})\cdot\Delta t\). The thickness of the shell is related to its radius according to \(\Delta R=\beta R\), with \(\beta<1\). In such a model, we define the condition for the escape of the electrons from the shell into the interstellar region. We assume that the diffusion process of the electrons is well described by Bohm's prescription. However, since the physical conditions in the propagating shell vary in time, we calculate the characteristic diffusion distance of electrons in the shell from
\[R_{\rm n}^{\rm dif}=R_{\rm(n-1)}^{\rm dif}+\sqrt{D_{\rm B}/(2t_{\rm n})}\cdot \Delta t, \tag{1}\]
where the Bohm diffusion coefficient is \(D_{\rm B}=cR_{\rm L}/3\), and \(R_{\rm L}=E/(eB)\approx 3\times 10^{16}E_{10}/B_{\mu G}\) cm is the Larmor radius, \(E_{\rm e}=10E_{10}\) TeV is the electron energy, and \(B=10^{-6}B_{\mu{\rm G}}\) G is the magnetic field strength at the time \(t_{\rm(n-1)}\). The electrons can escape from the shell into the interstellar medium when the diffusion distance becomes comparable to the thickness of the shell, i.e. \(R_{\rm dif}=\beta R\).
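For orientation, a quick numerical evaluation of these expressions (with illustrative values, not fitted model parameters) for \(E_{\rm e}=1\) TeV and \(B=1\,\mu\)G gives

\[R_{\rm L}\approx 3\times 10^{15}\,{\rm cm},\qquad D_{\rm B}=\frac{cR_{\rm L}}{3}\approx 3\times 10^{25}\,{\rm cm^{2}\,s^{-1}},\]

so the Bohm diffusion distance accumulated over one recurrence period, \(\sqrt{2D_{\rm B}t}\) with \(t\approx 15\) yr \(\approx 4.7\times 10^{8}\) s, is of the order of \(1.7\times 10^{17}\) cm. This is comparable to the sub-parsec radius reached by a shell expanding at several thousand km s\({}^{-1}\) over the same period, consistent with the escape being controlled by the condition \(R_{\rm dif}=\beta R\).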
### Electrons released from the shell into the NSR
In the present work we consider the fate of electrons which were able to escape from the nova shells. We argue that these relativistic electrons accumulate in the nova surroundings. The electrons are confined around the nova for a certain time, diffusing slowly in the outward direction into the NSR. We investigate the \(\gamma\)-ray emission produced by the electrons in the IC scattering of the radiation from the RG (a companion of the WD), and the CMBR. We include the energy losses of
the electrons within the shell on the ICS of the thermal radiation from the nova photosphere during its initial propagation, for a period of about 1 month. However, averaged over the whole recurrence period of RS Oph, this photospheric radiation field is negligible when compared to the RG thermal radiation; hence it is not taken into account when computing the persistent \(\gamma\)-ray production in the nova super-remnant. In our interest in the existence of relativistic electrons in the NSR, we focus on the case of Nova RS Oph, recently detected in the sub-TeV range (Acciari et al., 2022; H. E. S. S. Collaboration et al., 2022). In the case of RS Oph, six to nine1 explosions have been documented, with a recurrence period of the order of 15 years. Those observations indicate that electrons can be accelerated to TeV energies at least in the early period after the nova explosion. A schematic picture of the scenario considered here is shown in Fig. 1.
Footnote 1: Three out of those nine outbursts were only indirectly observed.
Following the method described in the previous subsection for the evaluation of processes within the nova shell, we calculate the spectra of the electrons escaping from the shell into the interstellar medium for the parameters of Nova RS Oph and reasonable parameters of the acceleration model of the electrons. We consider electron injection spectra with the spectral index \(-2\) (Bell, 1978). In Fig. 2 we show how the maximum energies of the electrons evolve in time for a few different values of the magnetization of the shell. The magnetization parameter of the shell, \(\alpha\), is obtained assuming some level of equipartition between the kinetic energy density of the shell and the energy density of the magnetic field in the shell, i.e. \(\alpha n_{\rm sh}m_{\rm p}v_{\rm sh}^{2}/2=B^{2}/(8\pi)\), where \(n_{\rm sh}\) is the density of the matter in the shell, \(m_{\rm p}\) is the proton mass, \(v_{\rm sh}\) is the shell velocity, and \(B\) is the magnetic field strength within the shell (see for details Bednarek, 2022). Both cases, with and without deceleration in the RG wind, are considered. The maximum energies of the electrons are limited either by the energy losses (at early times) or by the dynamical time scale of the moving shell (at later times). They become larger at later times due to the decreasing magnetic field within the shell. These maximum energies are also significantly lower in the case of a drastically decelerated shell, since the acceleration parameter is assumed to depend on the shell velocity (see Eq. 3 in Bednarek, 2022). In this section we consider a relatively slow acceleration process of the electrons in the shell, with the energy gain proportional to \(\xi\sim(v_{\rm sh}/c)^{2}\) (where \(v_{\rm sh}\) and \(c\) are the speed of the shell and of light, respectively), which is characteristic of the second-order Fermi acceleration process. When calculating the \(\gamma\)-ray emission from nova super-remnants, we will also discuss a more efficient acceleration process.
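For orientation, the equipartition relation above can be inverted to give the shell magnetic field for a given magnetization \(\alpha\). The snippet below is a minimal sketch; the shell density and velocity values are assumed for illustration only.

```python
import numpy as np

m_p = 1.67e-24  # proton mass [g]

def shell_B_gauss(alpha, n_sh, v_sh):
    """B from alpha * n_sh * m_p * v_sh^2 / 2 = B^2 / (8 pi).

    n_sh in cm^-3, v_sh in cm/s; returns Gauss.
    """
    return v_sh * np.sqrt(4.0 * np.pi * alpha * n_sh * m_p)

# Illustrative early-shell values (assumed, not taken from the paper):
print(shell_B_gauss(alpha=1e-3, n_sh=1e7, v_sh=6e8))  # ~0.3 G
```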
We apply a simple model for the initial acceleration of the electrons, since our main goal is to consider a radiation model for the nova super-remnant (formed by electrons escaping from the shell) in the late stages. A few more detailed models for the high energy processes in the initial stage after a nova explosion have already been considered. For example, Tatischeff & Hernanz (2007) consider non-linear acceleration of particles at the initial blast wave and show that protons can reach TeV energies already on a time scale of days after the explosion. Martin & Dubus (2013) develop a model for the shock acceleration and radiation of electrons and protons in the shocked plasma during the initial month after the nova explosion. Martin et al. (2018) consider a detailed model for the acceleration of particles in collisions of the wind from the WD with the initial shell of the nova during the first month after the explosion. They show that the \(\gamma\)-ray emission comes mainly from hadronic interactions with the dense matter downstream of the shock. A similar general model, but limited to the equatorial region of the nova binary system, has also been considered by Metzger et al. (2015). Detailed calculations of the \(\gamma\)-ray emission in terms of the hadronic and leptonic scenarios, during the initial stage of a nova, are presented in Vurm & Metzger (2018). All these models have been developed in order to explain the \(\gamma\)-ray emission from the initial stage of the nova explosion, as observed by the _Fermi_-LAT on a time scale of the order of a month. Here we consider a model for the possible \(\gamma\)-ray emission from novae long after the explosion, during the whole activity period of the recurrent nova. We argue that the electrons escaping from the shell can accumulate for a long time in the vicinity of the recurrent nova. They produce a \(\gamma\)-ray nebula similar to those observed around rotation-powered pulsars.
We investigate the dependence of the spectrum of the escaping electrons on the basic parameters of the model, assuming the likely parameters of Nova RS Oph. In Fig. 3 we show the spectra of the electrons escaping from the polar part of the shell, assuming that they are accelerated only during a short time (30 days) after the explosion of the nova or during the whole recurrence period of RS Oph (an average period of 15 yrs is assumed). The shorter time scale (of the order of a month) is consistent with the time scale of the initial GeV \(\gamma\)-ray emission from Nova RS Oph observed by the _Fermi_-LAT telescope (Cheung et al. 2022). We suppose that the acceleration process of the electrons lasts at least for the period of the initial \(\gamma\)-ray production phase as observed by the _Fermi_-LAT. Note also that a time scale of this order for the acceleration of particles is expected in the model by Martin et al. (2018), in which the acceleration process is active during the collision of the fast wind from the white dwarf with the slower moving initial shell of the nova. The longer time scale (of the order of the recurrence period of Nova RS Oph, 15 yrs) is motivated by the fact that on this time scale the nova shell is expected to significantly decelerate due to the entrainment of the interstellar matter. In order to estimate the deceleration distance of the shell, we introduce a model for the velocity of the shell as a function of distance from the nova. We assume conservation of the momentum of the shell interacting with the surrounding medium, \(M_{\rm sh}^{0}v_{\rm sh}^{0}=v_{\rm sh}(R)[M_{\rm sh}^{0}+M_{\rm cos}(R)]\), where \(M_{\rm sh}^{0}\) and \(v_{\rm sh}^{0}\) are the initial mass of the nova shell and its initial velocity, \(M_{\rm cos}=(4/3)\pi R^{3}n_{\rm cos}m_{\rm p}\) is the mass entrained from the interstellar medium, \(R\) is the radius of the shell, \(v_{\rm sh}(R)\) is the velocity of the shell at the distance \(R\), and \(n_{\rm cos}\) is the density of the interstellar medium. For reasonable densities of the interstellar medium (in the range \(n_{\rm cos}\approx 0.1-10\) cm\({}^{-3}\)), the shell starts to significantly decelerate at a distance of the order of \(\sim 10^{17}\) cm (see the results of the calculations presented in Fig. 6 in Bednarek 2022). This distance scale is passed by the shell during a time similar to the recurrence period of Nova RS Oph. Our simple model for the velocity profile generally agrees with the observations of the outburst of Nova RS Oph during the early expansion phase reported by Tatischeff & Hernanz (2007, see their Fig. 1). In our model (described by the solid curve in Fig. 6a of Bednarek 2022), the velocity of the forward shock, moving in the polar region, drops by
Figure 2: The maximum energies of the electrons in the shell as a function of the propagation time of the shell. The electrons are accelerated within the nova shell in the case of its propagation in the polar region of the binary system, the so-called “no deceleration” case (left panel), and in the case of propagation in the equatorial wind of the RG (deceleration case), which is confined within the part of the whole sphere defined by \(\Omega=0.1\) (centre) and 0.5 (right). The magnetization of the nova shell is defined by \(\alpha=10^{-4}\) (dot-dashed curve), \(10^{-3}\) (dashed), 0.01 (dotted), and 0.1 (solid). The initial mass of the nova shell is \(M_{\rm sh}^{0}=10^{-6}\)\(M_{\odot}\) and its initial velocity is \(v_{0}^{\rm sh}=6000\) km s\({}^{-1}\). The mass loss rate of the RG wind is \(\dot{M}_{\rm RG}=10^{-7}\) M\({}_{\odot}\) yr\({}^{-1}\) and its velocity \(v_{\rm RG}=40\) km s\({}^{-1}\).
a factor of order 2 during the first \(\sim\)30 days, as also observed in the case of the outburst of RS Oph in 2006 (Tatischeff & Hernanz 2007).
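The momentum-conservation prescription for the shell velocity can be evaluated directly. The sketch below assumes the shell parameters quoted in the figure captions (\(M_{\rm sh}^{0}=10^{-6}\) M\(_{\odot}\), \(v_{\rm sh}^{0}=6000\) km s\({}^{-1}\)) and an interstellar density of 1 cm\({}^{-3}\); it reproduces the onset of strong deceleration near \(\sim 10^{17}\) cm.

```python
import numpy as np

m_p = 1.67e-24       # proton mass [g]
M_sun = 1.989e33     # solar mass [g]

def v_shell_kms(R_cm, M0_Msun=1e-6, v0_kms=6000.0, n_ism=1.0):
    """v_sh(R) from M0*v0 = v_sh(R) * [M0 + (4/3) pi R^3 n_ism m_p]."""
    M0 = M0_Msun * M_sun
    M_entrained = (4.0 / 3.0) * np.pi * R_cm**3 * n_ism * m_p
    return v0_kms * M0 / (M0 + M_entrained)

for R in (1e16, 1e17, 2e17, 1e18):
    print(f"R = {R:.0e} cm: v_sh = {v_shell_kms(R):7.1f} km/s")
```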
As we have shown above, the efficiency of the acceleration process of the electrons, \(\xi\), depends in our model on the velocity of the shell. Therefore, it has to drop significantly on the time scale of the order of the recurrence period of Nova RS Oph. Interestingly, there is observational evidence for shocks appearing at late stages after nova explosions. Babul et al. (2022) reported an enhancement of radio emission starting 1.5 months after the eruption and lasting for at least another month. It is interpreted as synchrotron emission due to internal shocks within the ejecta. Moreover, at least 25% of the novae observed by Chomiuk et al. (2021) show non-thermal synchrotron emission. Two of them, V5589 Sgr and V392 Per, appear to be on the bridge between classical and symbiotic novae. Another example of late non-thermal synchrotron emission (on a time scale of months) is the symbiotic recurrent nova V3890 Sgr (Nyamai et al. 2023).
The cases of slow and fast acceleration of the electrons within the shell are considered. They depend on the velocity of the shell. In the case of fast acceleration, the acceleration coefficient is assumed to be \(\xi\sim(v_{\rm sh}/c)\). The specific spectra of the escaping electrons depend on the magnetization parameter of the shell, which determines the diffusion process of the electrons in the shell and one of the main energy loss processes (i.e. synchrotron emission). We conclude that a long acceleration process of the electrons results in spectra extending to higher energies. In the fast acceleration scenario, the electrons are able to escape into the NSR with multi-TeV energies. In the case of slow acceleration, the typical energies of the escaping electrons are one to two orders of magnitude lower. The spectra of the electrons show an interesting dependence on the magnetization parameter of the shell. For extended acceleration (during 15 yrs), the spectra of the electrons extend to larger energies for strong magnetization. This is due to the fact that the maximum energies of the electrons are determined in this case by the dynamical time scale of the shell. For acceleration limited to the early stage of the shell propagation, the dependence of the electron spectra on the magnetization is just the opposite. This is due to the fact that, soon after the nova explosion, the maximum energies of the accelerated electrons are determined by the synchrotron energy losses. This complicated behaviour is also in line with the dependence of the maximum electron energies inside the shell shown in Fig. 2.
In Fig. 4, we investigate the spectra of the escaping electrons in the part of the shell which propagates in the equatorial region of the nova binary system. Since this part of the shell decelerates quickly, due to the entrainment of the dense RG wind material, the energies of the accelerated electrons are lower by one to two orders of magnitude. Moreover, the electrons in the shell have much more time to lose energy through radiation processes already during their confinement within the shell. Therefore, their energies are usually limited to the sub-TeV range. The electrons escaping with such energies into the nova super-remnant mainly contribute to the \(\gamma\)-ray emission in the GeV range.
Note that recurrent novae of the RS Oph type are expected to produce many explosions in a relatively short period. In fact, the electrons escaping from the last shell should pass through the content of the previous shells. This process is difficult to model due to the unknown structure and magnetization of the previous shells some time after their slow-down. We expect that such "old" shells affect the magnetic field strength within the inner part of the NSR. We account for this effect by studying a much stronger magnetic field within the NSR (than expected from the values of the interstellar magnetic field strength) in a further subsection.
It is assumed that nova shells decelerate completely and merge with the interstellar medium at the distance \(R_{\rm inj}=2\times 10^{17}\) cm, which is a typical distance for the deceleration of the shells in a medium with a density equal to 1 particle cm\({}^{-3}\) (see Fig. 6 in Bednarek 2022). Thus, all the electrons that are still present in the shell at this time feed the NSR. For simplicity, we assume that the electrons that were able to escape earlier also diffuse through the interstellar medium to \(R_{\rm inj}=2\times 10^{17}\) cm; hence the injection of all the electrons into the NSR occurs at the same distance. In most cases this is a reasonable assumption. However, electrons that have energies \(\lesssim 10\) GeV and escaped closer than \(3\times 10^{16}\) cm would not be able to diffuse in a magnetic field of 3 \(\mu\)G up to \(R_{\rm inj}\) due to IC energy losses in the RG radiation field. Nevertheless, such low-
Figure 4: As in Fig. 3 but for the part of the shell propagating in the wind of the RG expelled in the equatorial region of the nova binary system. Then, the shell decelerates according to the prescription defined in Sect. 6 in Bednarek (2022). A part of the sphere in which the shell propagates is fixed on \(\Omega=0.3\).
Figure 3: The differential spectral energy distribution of the electrons escaping from the nova shell into the NSR, assuming that their acceleration process operates only during a short period after the nova explosion (assumed to be 30 days) or during the whole recurrence period of the nova (i.e. 15 yrs). The electrons are accelerated within the shell to a power-law spectrum with a spectral index \(-2\) and an exponential cut-off at \(E_{\rm max}\). The cut-off energy \(E_{\rm max}\) is defined by one of two acceleration prescriptions, slow or fast (see text for details). The magnetization of the nova shell is defined by \(\alpha=10^{-5}\) (solid curve), \(10^{-4}\) (dotted), \(10^{-3}\) (dashed), and \(10^{-2}\) (dot-dashed). The shell propagates in the polar region of the nova binary system (no deceleration case). The initial mass of the nova shell is \(M_{\rm sh}^{0}=10^{-6}\)\(M_{\odot}\) and its initial velocity is \(v_{0}^{\rm sh}=6000\) km s\({}^{-1}\).
energy electrons are also not likely to diffuse out of the shell, which still has a strong magnetic field, and would more likely be trailed by it up to the dispersion of the shell at \(R_{\rm inj}\). Due to the Klein-Nishina effect, higher energy electrons suffer smaller energy losses during the diffusion, and e.g. 1 TeV electrons can keep most of their energy even if they escape from the shell at a distance of \(3\times 10^{15}\) cm.
## 3 Gamma rays from NSR around a recurrent nova
As we have shown above, the acceleration process of the electrons depends on the conditions within the nova shell. In the case of recurrent novae, many shells are expected to provide relativistic electrons to the medium surrounding the nova super-remnant. The electrons finally escape from the nova shell into the surrounding medium. They produce \(\gamma\) rays in the IC process by scattering the CMBR and the optical radiation from the RG. We distinguish two main regions in which a nova shell propagates: the equatorial wind region and the polar region. The physical conditions for the electrons in these two regions differ significantly. Electrons are injected into the NSR from these two regions, producing \(\gamma\)-ray emission with features which can differ significantly. In this section we investigate the \(\gamma\)-ray spectra (and their detectability) produced by the electrons from these two regions as a function of the free parameters of the model. As an example, we consider the physical parameters of Nova RS Oph and its binary system (see Introduction). We assume that this recurrent nova has exploded with an average recurrence period of 15 yrs for the last \(T_{\rm active}=10^{4}\) yrs or \(10^{5}\) yrs. This time scale for the activity period of the recurrent nova determines the total energy transferred to the relativistic electrons in the nova super-remnant.
Relativistic electrons which were able to escape from the nova shells diffuse into the surrounding medium, forming the NSR. It resembles the well-known nebulae around rotation-powered pulsars. During the diffusion process the electrons lose energy through the synchrotron and inverse Compton processes. They produce \(\gamma\) rays by scattering the CMBR and the RG radiation. In order to follow the fate of the electrons in the nova super-remnant, we modified the Monte Carlo code originally developed for the diffusion of electrons within globular clusters (see Bednarek & Sitarek 2007, and Bednarek et al. 2016). The code follows the diffusion of the electrons through the globular cluster and their energy losses through the synchrotron process and inverse Compton scattering of the optical radiation from the stars and the CMBR. In the code, we replaced the radiation field from the stars in the globular cluster by the radiation field of the RG. In this way we are able to calculate the \(\gamma\)-ray (and also synchrotron) spectra produced by the relativistic electrons during the activity time of the recurrent nova.
### Contribution from the polar parts of the shells
In Fig. 5, we show the \(\gamma\)-ray spectra produced in the nova super-remnant by the electrons accelerated in the two acceleration scenarios. They are defined by different prescriptions for the acceleration coefficient, which sets the energy gain rate of the electrons: \(\xi=v_{\rm sh}/c\) (fast acceleration, see panels (a), (b), (e) and (f)) and \(\xi=0.1(v_{\rm sh}/c)^{2}\) (slow acceleration, see panels (c), (d), (g) and (h); see also Sect. 3 in Bednarek 2022). It is assumed that the electrons are injected continuously during the recurrence period of the nova. The spectra are shown for different magnetization parameters of the nova shells, \(\alpha\). Two activity periods of the recurrent nova are considered, \(T_{\rm active}=10^{4}\) yrs and \(10^{5}\) yrs (see Fig. 5). In the case of fast acceleration, the \(\gamma\)-ray spectra clearly extend through the multi-TeV energy range, with a weak dependence on the value of \(\alpha\). They should be easily detected by the CTA. Extensive observations with the present Cherenkov telescopes, such as H.E.S.S., MAGIC, or VERITAS, can also significantly constrain the allowed range of the considered parameter space of the model. As RS Oph has a rather low Galactic latitude (Galactic coordinates \(l=19.8^{\circ}\), \(b\sim 10.37^{\circ}\)), where the sensitivity of GeV instruments varies strongly, we compare the expected emission with two bounding, publicly-available2 sensitivities: for latitudes \(b=0^{\circ}\) and \(30^{\circ}\). The sensitivity line for the location of Nova RS Oph should lie somewhere in the middle between these two (marked) lines. Therefore, we conclude that in the case of
the older nova super-remnants (with the activity time equal to \(T_{\rm active}=10^{5}\) yrs, see Figs. 5b,d), the \(\gamma\)-ray emission is expected to be constrained also by the _Fermi_-LAT telescope in the GeV \(\gamma\)-ray energy range. In the case of the slow acceleration model for the electrons, the \(\gamma\)-ray emission is limited to sub-TeV energies (see Figs. 5c,d). In this case, the \(\gamma\) rays can hardly be detected by the CTA. However, the allowed parameter space of the model is expected to be constrained by observations with the _Fermi_-LAT.
In the bottom panel of Fig. 5, we show the \(\gamma\)-ray spectra for the same parameter space as in the upper panel of Fig. 5, but assuming that the electrons are accelerated in the shells only during the initial stage of propagation after the nova explosion, i.e. within the first \(t_{\rm max}=30\) days. This time scale corresponds to the typical period of the GeV \(\gamma\)-ray emission detected from novae (e.g. Ackermann et al., 2014). In this case, only the \(\gamma\)-ray emission from the nova super-remnant fed by electrons injected from weakly magnetized shells within the fast acceleration model is within the sensitivity limit of the CTA (see Figs. 5e,f). The \(\gamma\) rays produced in the slow acceleration model have a chance to be observed only by satellite telescopes of the _Fermi_-LAT type.
### Contribution from the equatorial part of the shells
We also calculate the \(\gamma\)-ray spectra produced within the nova super-remnant which are expected from the electrons escaping from the (\(\Omega\)) part of the shells which propagates within the region of the RG wind (Fig. 6). Surprisingly, the shapes of these \(\gamma\)-ray spectra look quite similar to those expected from the electrons escaping from the polar regions of the shells. This is due to two counter-acting effects. We argued that the shell in the RG wind decelerates due to the entrainment of the wind. As a result, the acceleration coefficient of the electrons drops, resulting in lower maximum energies of the accelerated electrons. On the other hand, the lower velocity of the shell allows a more efficient escape of the electrons from the shell into the NSR. As a result, the effect of the decelerating shells becomes compensated.
The \(\gamma\)-ray spectra from the NSR produced by the electrons escaping from the decelerating parts of the shells are typically only a factor of \(\sim\)2-3 lower than those expected from the polar regions of the shells. For extreme values of the considered model parameters, the predicted \(\gamma\)-ray spectra can still be within the 50 hr sensitivity of the CTA.
### Dependence of gamma-ray emission on other model parameters
We also study the effect of other model parameters on the \(\gamma\)-ray emission from the NSR. All the calculations of the \(\gamma\)-ray emission from the NSRs have been performed assuming that the relativistic electrons escape from the shells of the novae at distances not larger than \(R_{\rm inj}=2\times 10^{17}\) cm. \(R_{\rm inj}\) corresponds to the characteristic distance at which nova shells start to significantly decelerate due to the interaction with the interstellar medium. Its value depends on the local density of the interstellar medium. In Fig. 7b, we investigate the effect of \(R_{\rm inj}\) on the \(\gamma\)-ray spectrum from the NSR. We show the spectra for \(R_{\rm inj}=10^{16}-10^{18}\) cm. We conclude that the \(\gamma\)-ray spectra do not depend significantly on the injection distance \(R_{\rm inj}\) at TeV energies. However, lower energy electrons, injected at large distances from the nova binary system, are not able to cool efficiently in the already weak radiation field of the RG companion star. Their energy losses on the CMBR are also inefficient.
Finally, in Fig. 7c, we investigate the dependence of the \(\gamma\)-ray spectrum on the activity period of the NSR, \(T_{\rm active}\). The \(\gamma\)-ray emission is at a comparable level at multi-TeV energies, independent of the age of the NSR for the considered range of ages, i.e. \(10^{4}-10^{7}\) yrs. In fact, the collision time of the electrons with the CMBR can be estimated as
\[\tau_{\rm CMBR}\sim(n_{\rm CMBR}\sigma_{\rm T}c)^{-1}\approx 4.4\times 10^{3} \quad{\rm yrs}, \tag{2}\]
where \(n_{\rm CMBR}\) is the photon number density of the CMBR and \(\sigma_{\rm T}\) is the Thomson cross section. \(\tau_{\rm CMBR}\) is clearly shorter than the activity periods of the recurrent nova considered by us. The scattering of the CMBR contributes mainly to the \(\gamma\)-ray spectrum at GeV energies. This part of the \(\gamma\)-ray spectrum saturates for large activity times of the recurrent nova (see the dashed and dot-dashed curves in Fig. 7c). We conclude that the high energy electrons in the NSR can efficiently lose energy during the time \(T_{\rm active}\).
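As a quick cross-check of Eq. (2), the collision time follows directly from the CMB photon number density:

```python
n_cmbr = 411.0        # CMB photon number density [cm^-3]
sigma_T = 6.65e-25    # Thomson cross section [cm^2]
c = 3e10              # speed of light [cm/s]
yr = 3.15e7           # seconds per year

tau = 1.0 / (n_cmbr * sigma_T * c)
print(f"tau_CMBR ~ {tau / yr:.1e} yr")  # ~4e3 yr, consistent with Eq. (2)
```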
Figure 5: The \(\gamma\)-ray spectral energy distribution produced in the IC scattering of the RG radiation and the CMBR by relativistic electrons which are accelerated in the polar region of the RG wind. The electrons are accelerated during the initial period after the nova explosion equal to \(t_{\rm max}=15\) yrs (upper panel) and \(t_{\rm max}=30\) days (bottom panel), in the sequence of nova explosions with the recurrence time \(t_{\rm rec}=15\) yrs. They lose energy through the synchrotron process in the NSR magnetic field, assumed to be \(B_{\rm NSR}=3\ \mu\)G. The electrons were accelerated in the \(1-\Omega=0.7\) fraction of the shells propagating in the polar region (without deceleration), taking 10% of the kinetic energy of the shells. The acceleration process of the electrons is defined by the acceleration efficiency \(\xi\) and the magnetization parameter of the shell \(\alpha\): (a and e) \(\gamma\)-ray spectra for \(\xi=v_{\rm sh}/c\), \(T_{\rm active}=10^{4}\) yrs and different magnetization \(\alpha=10^{-5}\) (solid curve), \(10^{-4}\) (dotted), \(10^{-3}\) (dashed), and \(10^{-2}\) (dot-dashed); (b and f) as in (a and e) but for \(T_{\rm active}=10^{5}\) yrs; (c, d, g and h) as in (a, b, e and f) but for the slow acceleration model, i.e. \(\xi=0.1(v_{\rm sh}/c)^{2}\). The 50 hr sensitivity of the CTA South is marked by the thin dashed curve (see Fig. 2 in Maier et al. 2017). The dot-dot-dashed lines show the 10 yr _Fermi_-LAT sensitivity (Ajello et al. 2021) for Galactic longitude \(l=0^{\circ}\) and latitude \(b=0^{\circ}\) (upper curve) or \(b=30^{\circ}\) (lower curve).
Figure 6: As in Fig. 5 but for the part of the shell propagating within the equatorial region of the RG wind with \(\Omega=0.3\). In this case the nova shells are significantly decelerated due to the entrainment of the RG wind (see the main text for details).
Figure 7: The \(\gamma\)-ray spectral energy distribution as a function of: (a) the strength of the magnetic field within the NSR, \(B_{\rm NSR}\) = 3 \(\mu\)G (solid), 10 \(\mu\)G (dashed), 30 \(\mu\)G (dotted), 100 \(\mu\)G (dot-dashed), and 300 \(\mu\)G (dot-dot-dashed); the other parameters are \(\alpha=10^{-4}\), \(R_{\rm inj}=2\times 10^{17}\) cm, \(T_{\rm rec}\) = 15 yrs, \(T_{\rm active}=10^{5}\) yrs, and \(\xi=0.01\); (b) the injection distance of the relativistic electrons within the NSR, \(R_{\rm inj}=10^{16}\) cm (dotted curve), \(10^{17}\) cm (dashed), and \(10^{18}\) cm (solid); the other parameters are as above and \(B_{\rm NSR}\) = 3 \(\mu\)G; and (c) the activity stage of the nova, \(T_{\rm active}\) = \(10^{4}\) yrs (solid), \(10^{5}\) yrs (dotted), \(10^{6}\) yrs (dashed), and \(10^{7}\) yrs (dot-dashed); the other parameters are the same as in (a) and (b).
### Synchrotron emission from electrons in NSR
All the above calculations have been performed under the assumption that the magnetic field strength is of the order of that observed in the interstellar medium (i.e. \(B_{\rm NSR}=3\ \mu\)G). In principle, the magnetic field in the surroundings of the recurrent nova might build up to larger values, as observed in pulsar wind nebulae. For example, such a stronger magnetic field can be supplied to the inner parts of the NSR by the multiple nova shells. In Fig. 7a, we show the \(\gamma\)-ray spectra calculated for much stronger values of the magnetic field \(B_{\rm NSR}\). As expected, the TeV part of the \(\gamma\)-ray spectrum is strongly suppressed for stronger magnetic fields due to the efficient synchrotron energy losses of the electrons. However, the GeV \(\gamma\)-ray emission remains at a comparable level provided that \(B_{\rm NSR}<100\ \mu\)G. For very strong magnetic fields, i.e. \(B_{\rm NSR}>100\ \mu\)G, the GeV \(\gamma\)-ray emission also becomes strongly suppressed. Therefore, we conclude that strongly magnetised NSRs around novae still have a chance to be detected at GeV \(\gamma\)-ray energies but not at TeV energies.
The TeV \(\gamma\)-ray emission from the NSR is expected to be very sensitive to the strength of the magnetic field within the NSR. If \(B_{\rm NSR}\) is larger than 100 \(\mu\)G, then the TeV \(\gamma\)-ray emission is below the sensitivity of the CTA even for the most favourable parameters of the model (see Fig. 7a). However, the \(\gamma\)-ray emission in the GeV energy range is quite stable, provided that \(B_{\rm NSR}<100\ \mu\)G. If the magnetic field within the NSR is significantly stronger than 3 \(\mu\)G, then synchrotron emission from the TeV electrons is expected in the range from the infrared to the X-rays, \(\varepsilon_{\rm syn}=m_{\rm e}c^{2}(B/B_{\rm cr})\gamma_{\rm e}^{2}\sim 0.05B_{\mu G}E_{\rm TeV}^{2}\) eV. We show the expected synchrotron spectra for the example parameters of the model (as used in Fig. 7a), as a function of the magnetic field within the NSR, in Fig. 8. As expected, the synchrotron emission starts to dominate over the \(\gamma\)-ray IC emission from the NSR for stronger magnetic fields. We note that this synchrotron emission in the optical range is a few orders of magnitude below the optical emission from the RG. The thermal X-ray emission detected in quiescence from RS Oph (e.g. Nelson et al., 2011), or the extended emission observed in the polar regions of RS Oph during the outburst in 2006 (Montez et al., 2022), comes from a relatively very small region (of the order of arc seconds) around the nova RS Oph. This emission cannot constrain the synchrotron X-ray emission expected in our model, which comes from the much more extended \(\gamma\)-ray NSR (see Eq. 7).
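The characteristic synchrotron energy quoted above is easy to tabulate; the following short sketch evaluates it for a few field strengths in the range explored in Fig. 8.

```python
def eps_syn_eV(B_muG, E_TeV):
    """Characteristic synchrotron photon energy ~ 0.05 * B_muG * E_TeV^2 eV."""
    return 0.05 * B_muG * E_TeV**2

for B in (3.0, 30.0, 300.0):
    print(f"B = {B:5.0f} muG: 1 TeV -> {eps_syn_eV(B, 1.0):7.2f} eV, "
          f"10 TeV -> {eps_syn_eV(B, 10.0):7.0f} eV")
```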
## 4 Estimate of the nova super-remnant size
The propagation process of charged particles in the vicinity of compact sources is not well understood. Therefore, we cannot draw definite conclusions about the size of the NSR. In the calculations above, we applied the Bohm diffusion prescription. In fact, two limiting scenarios can be considered. If the magnetic field around the nova is ordered (e.g. a magnetic field mainly perpendicular to the radial direction from the binary system), then the \(\gamma\) rays produced in the IC process can be arbitrarily confined, forming a point-like source. On the contrary, the magnetic field can be random, resulting in a rapid diffusion process, e.g. consistent with the Bohm diffusion prescription. In this second limiting case, we can simply estimate the maximum size of the NSR, provided that the injected electrons lose energy through the IC process by scattering the CMBR and the RG radiation.
We estimate the size of the \(\gamma\)-ray region of the NSR for electrons with energies \(E_{\rm e}=1E_{\rm TeV}\) TeV. Such electrons produce \(\gamma\)-ray photons with typical energies within the sensitivity of the _Fermi_-LAT telescope, \(E_{\gamma}\sim\varepsilon_{\rm CMBR}\gamma_{\rm e}^{2}\sim 2.8E_{\rm TeV}^{2}\) GeV, as a result of comptonization of the CMBR, where \(\varepsilon_{\rm CMBR}\sim 7\times 10^{-4}\) eV. They also produce \(\gamma\) rays by scattering the RG radiation, with energies comparable to the energy of the electrons, i.e. typical for the Cherenkov telescopes. The cooling time scale of electrons (in the Thomson regime) with energy equal to 1 TeV in the CMBR is
\[\tau_{\rm IC}^{\rm CMBR}\approx 3.2\times 10^{13}/E_{\rm TeV}\quad{\rm s}. \tag{3}\]
and in the RG radiation
\[\tau_{\rm IC}^{\rm RG}\approx 3\times 10^{10}R_{17}^{2}/E_{\rm TeV}\quad{\rm s}. \tag{4}\]
where \(R=10^{17}R_{17}\) cm is the distance measured from the RG, which determines the radiation field.
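For convenience, the two cooling times of Eqs. (3)-(4) can be compared directly; the snippet below shows that close to the RG (\(R_{17}=1\)) its radiation formally cools 1 TeV electrons about three orders of magnitude faster than the CMBR does.

```python
def tau_ic_cmbr_s(E_TeV):
    return 3.2e13 / E_TeV            # Eq. (3)

def tau_ic_rg_s(E_TeV, R17):
    return 3e10 * R17**2 / E_TeV     # Eq. (4); R = 1e17 * R17 cm from the RG

# Ratio of cooling times at R17 = 1 for 1 TeV electrons (~1e3):
print(tau_ic_cmbr_s(1.0) / tau_ic_rg_s(1.0, R17=1.0))
```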
The Bohm diffusion distance of these electrons during the cooling time scale on the CMBR, \(t=\tau_{\rm IC}^{\rm CMBR}\), is
\[x_{\rm dif,CMBR}=\sqrt{2D_{\rm B}t}\approx 29/B_{\rm NSR,\mu G}^{1/2}\quad{\rm pc}, \tag{5}\]
where \(B_{\rm NSR,\mu G}\) is \(B_{\rm NSR}\) in units of \(\mu G\). During the cooling time scale on the RG radiation the nominal diffusion distance is
\[x_{\rm dif,RG}\approx 0.3R_{17}/B_{\rm NSR,\mu G}^{1/2}\quad{\rm pc}, \tag{6}\]
Note that while \(x_{\rm dif,RG}<x_{\rm dif,CMBR}\) would suggest a dominant role of the RG radiation field in cooling the electrons and limiting the extent of the diffusion, this is not the case. For a broad range of parameters (\(B_{\rm NSR,\mu G}<100\)), Eq. 6 gives \(x_{\rm dif,RG}>R=10^{17}R_{17}\) cm. Therefore, as the electrons diffuse, they experience a steeply falling radiation field from the RG (as \(R^{-2}\)), which is not able to cool them down. We conclude that the IC cooling of the electrons on the RG radiation does not constrain the size of the NSR.
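The argument above can be checked numerically with Eqs. (5)-(6):

```python
import numpy as np

pc_cm = 3.086e18  # cm per parsec

def x_dif_cmbr_pc(B_muG):
    return 29.0 / np.sqrt(B_muG)         # Eq. (5)

def x_dif_rg_pc(B_muG, R17):
    return 0.3 * R17 / np.sqrt(B_muG)    # Eq. (6)

B = 3.0
# Eq. (6) already exceeds R = 1e17 cm itself for weak fields, so the RG
# field cannot keep the electrons inside its own cooling zone:
print(x_dif_rg_pc(B, R17=1.0) * pc_cm > 1e17)        # True
print(f"x_dif,CMBR = {x_dif_cmbr_pc(B):.0f} pc")     # ~17 pc for B = 3 muG
```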
In Fig. 9 we present numerical calculations of the energy loss of the electrons of different energies during their diffusion through the NSR. While in the early stages after the escape from the shock into the NSR the energy loss on RG dominates, it is not fast enough to cool down the electrons completely. As the electrons diffuse farther on, the CMBR losses start to dominate. This effect is particularly important at higher energies, where the Klein-Nishina effect further suppresses the energy losses on the RG radiation field.
The size of the electron NSR around Nova RS Oph, in the case of Bohm diffusion and IC energy losses on the CMBR, is then estimated as
\[x_{\rm dif}/D=0.67/B_{\rm\mu G}^{1/2}\quad{\rm degree}. \tag{7}\]
where \(D\) is the distance to the nova. For a magnetic field of 3 \(\mu\)G, the size of the NSR around RS Oph is of the order of \(\sim\)0.4 degree, comparable to the _Fermi_-LAT angular resolution at a few GeV. Note that this predicted size is of the order of the size of the super-remnant optically detected in the case of the M31N recurrent nova in M31 (see Darnley et al., 2019). The inner radius of the super-remnant shell in M31N is 52 pc, which is not surprising since these recurrent novae clearly differ in their basic parameters (e.g. the recurrence period of M31N is about ten times shorter, which results in a larger amount of energy provided by the nova to its super-remnant).
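Eq. (7) folds Eq. (5) together with the distance \(D\) to the nova; note that the quoted coefficient of 0.67 degrees, combined with Eq. (5), implies \(D\approx 2.5\) kpc for RS Oph. A minimal sketch:

```python
import numpy as np

def theta_deg(B_muG, D_pc=2.5e3):       # D ~ 2.5 kpc implied by Eqs. (5), (7)
    x_dif_pc = 29.0 / np.sqrt(B_muG)    # Eq. (5)
    return np.degrees(x_dif_pc / D_pc)  # small-angle approximation

print(f"{theta_deg(3.0):.2f} deg")  # ~0.4 deg for B = 3 muG, as in the text
```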
To validate these order-of-magnitude calculations, we performed a numerical tracking of the electrons, taking into account the energy losses via IC on the CMBR and the RG radiation field (including also
Figure 8: The IC (thick curves) and synchrotron (thin) spectra calculated for different strengths of the magnetic field within the NSR expected around RS Oph: \(B_{\rm NSR}=3\)\(\mu\)G (solid), 10 \(\mu\)G (dashed), 30 \(\mu\)G (dotted) and 100 \(\mu\)G (dot-dashed). The other parameters are as in Fig. 7a.
Figure 9: Energy loss of the electrons on the radiation field of the RG (dashed lines) and on the CMBR (solid lines, merging together) during their diffusion through a magnetic field of 3\(\mu\)G. The electrons are injected at a distance of \(2\times 10^{17}\) cm with energies of 0.1 TeV (red), 1 TeV (green) and 10 TeV (blue). An RG with a luminosity of \(100L_{\odot}\) and a temperature of 3600 K is assumed.
the Klein-Nishina effect), as well as the modification of the diffusion coefficient as the particle loses energy. These calculations confirm that the diffusion process is limited by the CMBR rather than by the RG radiation field (however, 5 GeV electrons still lose about 25% of their energy in the latter process). The observed size of the \(\gamma\)-ray emission obtained from these calculations depends only weakly on the energy: \(\sim 0.24^{\circ}\) for \(E_{\gamma}\lesssim 0.1\) GeV and \(\sim 0.28^{\circ}\) for \(E_{\gamma}\gtrsim 1\) TeV. Thus we conclude that the NSR around RS Oph is predicted to be a point source for the _Fermi_-LAT and at most an only moderately extended source for the Cherenkov telescopes.
At 200 GeV the angular resolution of the CTA is \(\sim 0.08^{\circ}\) (Maier et al., 2017). Hence, in this Bohm diffusion limiting case, the drop of the sensitivity of the CTA for observations of such slightly extended emission will be by a factor of \(\lesssim 3\). Therefore, the extension of the emission might prevent the detection in cases when the predicted emission is close to the point-like sensitivity limit of the CTA. However, in most of the investigated cases with a long activity time of the source and fast acceleration, the detection is still within the reach of the CTA, even despite the possible extension.
## 5 Conclusion
We investigated the consequences of the hypothesis that electrons are accelerated in the shells ejected by novae for a long period after their explosions (see Bednarek, 2022). We consider models which cover a broad range of possible parameters for the acceleration process of the electrons within the shell, i.e. slow and fast acceleration, and also short (or long) period injection of electrons after the nova explosion. As an example of the short period injection, we consider a period of the order of a month after the nova explosion (corresponding to the typical time scale of the GeV \(\gamma\)-ray emission observed by the _Fermi_-LAT, and typically obtained in a number of nova acceleration models). The long period injection model assumes acceleration of the electrons in the shell during the whole recurrence period of Nova RS Oph. Gamma-ray observations so far have been unable to probe the acceleration over such long time scales. Two regions in the expanding nova ejecta are also distinguished: (1) the equatorial region of the nova binary system, in which the shell is effectively decelerated due to the entrainment of matter from the RG wind, and (2) the polar region, in which the shell expands freely up to the moment of entrainment of the matter from the interstellar medium. As an example, the electrons are assumed to be injected within the shell with a power-law spectrum (spectral index \(-2\), Bell, 1978) and normalization to 10% of the kinetic energy of the nova shell. Such an order of the energy conversion efficiency from the kinetic energy of the shell to particles is expected in the case of the supernova remnants responsible for the cosmic-ray content of the Galaxy (e.g. Schlickeiser, 2002). With such a general acceleration model we likely cover a broad range of possible specific acceleration scenarios, which in the future will consider in more detail the evolution of the nova shell long after the explosion. The \(\gamma\)-ray emission predicted above, for the range of models with the boundary parameters considered by us, constrains the possible emission expected from such future specific models.
The electrons lose energy through radiation processes in the evolving environment of the shell. Those accelerated within the shell can finally escape from the shells into the nova environment, forming the so-called nova super-remnant. We concentrate on the case of recurrent novae which, in such a scenario, can periodically supply fresh relativistic electrons to the NSR. These electrons produce \(\gamma\) rays via ICS of mainly the optical radiation from the RG and the CMBR. The level of this \(\gamma\)-ray emission naturally depends on the activity period of the recurrent nova.
We show that in the case of fast and efficient acceleration of the electrons and weak magnetization of the shell, the \(\gamma\)-ray emission from the NSR around Nova RS Oph is expected to extend into the TeV energy range. The predicted emission is within the sensitivity of the CTA. In more optimistic cases (acceleration during the whole recurrence period and a total activity period of at least \(10^{5}\) yrs), there is a chance to detect the GeV \(\gamma\)-ray emission from the NSR with the _Fermi_-LAT. We note, however, that the NSR is predicted to have a size of up to \(\sim 0.3\) degree at GeV and TeV energies (if the electrons diffuse in the nova surroundings according to the Bohm diffusion prescription and the average magnetic field strength is \(3\ \mu\)G, i.e. comparable to that of the interstellar medium). So, the NSR is expected to be moderately extended for \(\gamma\)-ray instruments. However, the magnetic field in the NSR might be
stronger, due to its supply by the incoming shells of the nova. This should essentially limit the size of the electron NSR.
In fact, the directly correlated emission in the optical and \(\gamma\)-ray light curves of Nova V906 Car at the early stage of the nova suggests that the emission in these different energy ranges is produced by the same population of particles, likely relativistic electrons producing the broad low-energy emission through the synchrotron process and the \(\gamma\)-ray emission through the IC process (Aydi et al., 2020).
## Acknowledgments
We thank D. Green and R. Lopez-Coto for useful discussions and the Referee for many useful comments. This work is supported by the grant through the Polish National Research Centre No. 2019/33/B/ST9/01904.
|
2310.02305 | Wrinkles in Time -- I: Rapid Rotators Found in High Eccentricity Orbits | Recent space-based missions have ushered in a new era of observational
astronomy, where high-cadence photometric light curves for thousands to
millions of stars in the solar neighborhood can be used to test and apply
stellar age-dating methods, including gyrochronology. Combined with precise
kinematics, these data allow for powerful new insights into our understanding
of the Milky Way's dynamical history. Using TESS data, we build a series of
rotation period measurement and confirmation pipelines and test them on 1,560
stars across five benchmark samples: the Pleiades, Pisces--Eridanus, Praesepe,
the Hyades, and field stars from the MEarth Project. Our pipelines' recovery
rates across these groups are on average 89\%. We then apply these pipelines to
4,085 likely single stars with TESS light curves in two interesting regions of
Galactic action space. We identify 141 unique, rapidly rotating stars in highly
eccentric orbits in the disk, some of which appear as rotationally young as the
120-Myr-old Pleiades. Pending spectroscopic analysis to confirm their youth,
this indicates these stars were subject to fast-acting dynamical phenomena, the
origin of which will be investigated in later papers in this series. | Rayna Rampalli, Amy Smock, Elisabeth R. Newton, Kathryne J. Daniel, Jason L. Curtis | 2023-10-03T18:00:00Z | http://arxiv.org/abs/2310.02305v1 | # Wrinkles in Time - I: Rapid Rotators Found in High Eccentricity Orbits
###### Abstract
Recent space-based missions have ushered in a new era of observational astronomy, where high-cadence photometric light curves for thousands to millions of stars in the solar neighborhood can be used to test and apply stellar age-dating methods, including gyrochronology. Combined with precise kinematics, these data allow for powerful new insights into our understanding of the Milky Way's dynamical history. Using TESS data, we build a series of rotation period measurement and confirmation pipelines and test them on 1,560 stars across five benchmark samples: the Pleiades, Pisces-Eridanus, Praesepe, the Hyades, and field stars from the MEarth Project. Our pipelines' recovery rates across these groups are on average 89%. We then apply these pipelines to 4,085 likely single stars with TESS light curves in two interesting regions of Galactic action space. We identify 141 unique, rapidly rotating stars in highly eccentric orbits in the disk, some of which appear as rotationally young as the 120-Myr-old Pleiades. Pending spectroscopic analysis to confirm their youth, this indicates these stars were subject to fast-acting dynamical phenomena, the origin of which will be investigated in later papers in this series.
stars: ages - techniques: gyrochronology - Galaxy: kinematics and dynamics - solar neighborhood
Rayna Rampalli, Amy Smock, Elisabeth R. Newton, Kathryne J. Daniel, Jason L. Curtis
## 1 Introduction
With new near-all-sky astrometric, spectroscopic, and photometric observational surveys, we have the opportunity to understand the Milky Way dynamics impacting the solar neighborhood, using stellar ages as an additional probe. Gyrochronology is a robust age-dating method for younger, low-mass stars (\(\lesssim 2\) Gyr, 1.3 M\({}_{\odot}\)), particularly G-type stars. Upon measuring stellar rotational velocities and activity levels for stars in the Pleiades and Hyades open clusters and the Sun, Skumanich (1972) famously found that stars slow their rotation as they age. This relationship between a star's rotation period (\(P_{\rm rot}\)) and its age (\(t\)) follows a trend described by a simple power law:
\[P_{\rm rot}\propto t^{0.5}. \tag{1}\]
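As a back-of-the-envelope illustration, this power law can be inverted to scale an age from a measured period. The sketch below anchors the relation to the Sun purely for illustration; it is not the calibration adopted in this work.

```python
def gyro_age_gyr(P_rot_days, P_anchor=25.4, t_anchor=4.57, n=0.5):
    """Scale an age from a period via P ~ t^n (solar anchor, illustration only)."""
    return t_anchor * (P_rot_days / P_anchor) ** (1.0 / n)

print(f"{gyro_age_gyr(13.0):.2f} Gyr")  # ~1.2 Gyr at the 13-day TESS limit
```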
Today, one of the best ways to measure stellar rotation is using photometry to observe modulation in stellar brightness as a result of starspots rotating in and out of our view. NASA's Transiting Exoplanet Survey Satellite (TESS; Ricker et al., 2015) provides the ability to make such measurements for hundreds of thousands of nearby, bright stars. These data pair well with astrometry from the Gaia mission (Gaia Collaboration et al., 2016) and spectroscopy from surveys like GALAH (De Silva et al., 2015) and APOGEE (Majewski et al., 2017) of stars in the local volume.
This paper is the first in a series where we focus on young stars in highly eccentric Galactic orbits. It is generally assumed that stars are born on near-circular orbits. Over the lifetime of a star, its orbit can be dynamically perturbed to a more eccentric state (e.g., Sell
wood 2014 and references therein). We can explore exceptions to this trend with gyrochronological ages and precise kinematics.
### More on Gyrochronology and its Caveats
Since 1972, many research groups have used observations of stellar populations with known ages to empirically calibrate the gyrochronology age-rotation relation. The power-law exponent is typically found to be 0.50-0.65, close to Skumanich's original 0.50 value (Barnes, 2007; Mamajek and Hillenbrand, 2008; Meibom et al., 2009; Angus et al., 2015; Douglas et al., 2019; Angus et al., 2019; Curtis et al., 2020). Ultimately, this relation is intended to be inverted to obtain precise and reliable ages for field stars of unknown ages.
Gyrochronology is not applicable for all lower mass stars of all ages. As low mass stars are interacting with their circumstellar disks and contracting onto the main sequence, they start at a range of \(P_{\rm rot}\). They eventually converge to a single \(P_{\rm rot}\) for a given age and temperature (Barnes, 2007), with the timescale for convergence being mass-dependent. Gyrochronology is only applicable after this convergence occurs. At \(\approx\) 700 Myr, K dwarfs temporarily stall and only resume spinning down at 2 Gyr (Curtis et al., 2019, 2020). M dwarfs have different dynamos from their G- and K-type counterparts and likely do not follow the same power law for age-dating (Newton et al., 2016; Rebull et al., 2018; Popinchalk et al., 2021).
Thus, using gyrochronology on solely G dwarfs is advantageous due to their favorable timescales for convergence (\(\approx\) 100 Myr, see Rebull et al., 2016; Gillen et al., 2020; Boyle and Bouma, 2023), little to no stalling, and the relative abundance and brightness of these stars. This is feasibly done with TESS. TESS follows in the footsteps of Kepler (Borucki et al., 2010) and K2 (Howell et al., 2014), providing high-cadence, high-precision photometric data which can be used to measure \(P_{\rm rot}\) for large numbers of stars. Large-scale \(P_{\rm rot}\) measurements (Ruth Angus; private communication) and age-rotation relation calibration efforts (e.g., Kounkel et al., 2022) using TESS have already begun and will be incorporated into future analysis.
It is worth noting that the short \(\sim\)27-day baseline for most TESS observations limits the longest \(P_{\rm rot}\) we can reliably measure in this study to 13 days (determined in Appendix A), setting the upper age limit at \(\lessapprox\) 1.3 Gyr for stars \(\lessapprox\) 1.3 M\({}_{\odot}\) according to age-rotation relations.
### Interpreting Galactic Action Space
In this paper, we are interested in identifying young stars in orbits that are typically associated with older populations, namely high eccentricity orbits. Rather than reconstructing orbits using each star's 6D position and velocity coordinates, we approximate their orbital size and eccentricity by projecting these kinematics into Galactocentric actions. Action space is a natural space when analyzing orbital properties and can be calculated by assuming a reasonable gravitational potential for stars in the local volume (e.g., Binney and Tremaine, 2008; Trick et al., 2019).
Assuming cylindrical symmetry for the disk kinematics, with the \(z\)-axis oriented perpendicular to the plane of the disk, the azimuthal action, \(J_{\phi}\), is equal to the orbital angular momentum in the \(z\)-direction, \(L_{z}\). Stars with higher values of \(J_{\phi}\) will also have larger mean orbital radii and thus larger orbits. The radial action, \(J_{R}\), is related to the orbital eccentricity. In a disk with a flat rotation curve, the radial action can be approximated by \(J_{R}\propto E_{R}/\kappa\), where \(\kappa\) is the epicyclic frequency and \(E_{R}\) is the energy associated with the orbit's non-circular motion. In this approximation, \(E_{R}=E-E_{c}\), where \(E\) is the total orbital energy and \(E_{c}\) is the energy of a circular orbit, each evaluated at the angular momentum \(L_{z}\) of that orbit. For the remainder of this work we will use \(L_{z}\) and \(J_{R}\).
We calculated the radial and azimuthal actions for each star in our catalog following Trick et al. (2019), using the Python-based galaxy modeling package galpy v1.7 (Bovy, 2015).1 For these calculations, we approximate the underlying potential using the default settings for MWPotential2014 and use the Stäckel-fudge approximation (Binney, 2012). Our orbital analysis is described in section 2.4.
Footnote 1: The galpy package can be accessed using the following link: [http://github.com/jobovy/galpy](http://github.com/jobovy/galpy).
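A minimal sketch of this action computation with galpy is shown below; the example coordinates, the Stäckel \(\Delta\) parameter, and the solar scales are illustrative assumptions, not values taken from this work.

```python
import numpy as np
from galpy.orbit import Orbit
from galpy.potential import MWPotential2014
from galpy.actionAngle import actionAngleStaeckel

ro, vo = 8.0, 220.0  # distance and velocity scales [kpc, km/s] (assumed)
aAS = actionAngleStaeckel(pot=MWPotential2014, delta=0.45, c=True)

# Observables: ra, dec [deg], distance [kpc], pmra, pmdec [mas/yr], RV [km/s]
# (illustrative values, not a star from this catalog):
o = Orbit([229.27, -15.72, 1.60, -20.7, -25.1, -46.0],
          radec=True, ro=ro, vo=vo)

# Evaluate the actions in galpy's natural units, then rescale to kpc km/s:
jR, lz, jz = (np.atleast_1d(a)[0] * ro * vo
              for a in aAS(o.R() / ro, o.vR() / vo, o.vT() / vo,
                           o.z() / ro, o.vz() / vo))
print(f"L_z = {lz:.0f} kpc km/s, sqrt(J_R) = {np.sqrt(jR):.1f} (kpc km/s)^0.5")
```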
The distribution of actions for stars in our catalog within 200 pc of the Sun are shown in Figure 1; our catalog uses data from Gaia DR3, GALAH DR3, and APOGEE DR16 (see section 2). The horizontal axis shows orbital angular momentum \(L_{z}\), where orbits with larger mean orbital sizes will be found to the right of this plot. The vertical axis shows the square root in radial action, \(\sqrt{J_{R}}\). The distribution in the Solar Neighborhood broadly fills in an inverted triangle. Stars in circular orbits (low values for \(\sqrt{J_{R}}\)) have angular momentum close to the value of the Sun's \(L_{z,\odot}\), located at the lower point of the triangular distribution. Stars on highly eccentric orbits (high values for \(\sqrt{J_{R}}\)) may be interlopers in the Solar Neighborhood from other parts of the disk and thus may have angular momenta much
smaller (or larger) than that of the Sun and will appear in the upper left (or upper right) of Figure 1.
While an equilibrium distribution free of dynamical perturbations would appear smooth, Trick et al. (2019) first identified the diagonal, elongated substructures in the \(L_{z}-J_{R}\) plane at high values of \(J_{R}\), which we hereafter refer to as "wrinkles". These are also sometimes referred to as corrugations or simply kinematic groups, and they motivate our age investigation of stars in this space.
### This Work
In this work, we build pipelines to measure \(P_{\rm rot}\) and test them on stars of known \(P_{\rm rot}\) to assess their performance. We then apply these pipelines to regions in high eccentricity phase-space. In section 2, we discuss the light curves used to measure \(P_{\rm rot}\), the literature we test our \(P_{\rm rot}\) pipelines on, and the datasets we use to estimate actions for stars. In section 3, we outline the design of our \(P_{\rm rot}\) pipelines. We compare our \(P_{\rm rot}\) measurements to literature measurements from five benchmark samples in section 4. In section 5 we show a preliminary \(P_{\rm rot}\) distribution for stars in a selection of high eccentricity orbits. We conclude in section 6 with a summary of this work and discussion about future plans for this paper series.
## 2 Data
### Measuring \(P_{\rm rot}\) with TESS Photometry
We use TESS photometry to measure \(P_{\rm rot}\), which is the foundation for determining gyrochronological ages. TESS is a nearly all-sky survey searching the nearest and brightest stars for transiting exoplanets (Ricker et al., 2015). Each observational period, or sector, lasts \(\approx 27\) days for a field of view that is 24 by 96 degrees. The mission obtained two-minute cadence photometry for 200,000 stars that were of high priority for its exoplanet science goals and supplied full-frame images (FFIs) from each sector at a cadence of 30 minutes. Now in its extended mission, it is continuing to provide observations based on guest investigator proposals and supply FFIs with a cadence of 200 seconds. We start with photometry from the FFIs, since the shorter cadence data is limited to certain targets. If the star does not have a light curve from the FFI-reduction pipeline we use, then we check the short cadence data.
All of the stars observed by TESS are listed in the TESS Input Catalog (TIC) with their respective stellar parameters based on the second data release from Gaia (Gaia Collaboration et al., 2018) and have an associated TIC identification number (Stassun et al., 2019). We used data accessed January 2023, which includes all sectors through sector 57.
Instrument systematics can affect the TESS photometry. This includes pointing jitter, the spacecraft's 14 day orbit, crowding from neighboring stars due to the cameras' large pixels, momentum dumps, and scattered light from the Earth and Moon (e.g. Jenkins, 2020; Stumpe et al., 2012; Smith et al., 2012). Various teams have developed light curve reduction pipelines to mitigate these effects, including the reduction pipeline from the mission itself, TESS Science Processing Operations Center (TESS-SPOC, Caldwell et al., 2020).
TESS-SPOC light curves are generated from the FFIs for all stars with a TESS magnitude (\(T_{\rm mag}\)) \(\leq 13.5\) using simple aperture photometry (SAP). Here, the pixels are summed within an aperture optimized for the signal-to-noise of the particular target star (Morris et al., 2017). The resulting SAP light curve is then corrected using a set of background pixels. Given the instrument systematics discussed above, additional corrections are made by modeling and removing time-dependent instrumental signatures. Corrections to the light curve are also made to account for crowding from other stars. These light curves are known as the Presearch Data Conditioning SAP light curves, or PDC-SAP light curves. In this analysis, we measure \(P_{\rm rot}\) using the PDC-SAP light curves from the TESS-SPOC pipeline as the default. If the star does not have a TESS-SPOC light curve, we also check for SPOC light curves, which are generated the same way but for the two-minute cadence targets (Jenkins et al., 2016); these stars can have \(T_{\rm mag}>13.5\).
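One possible retrieval path for these products uses the lightkurve package; the sketch below is illustrative (the TIC number is a placeholder, and this is not necessarily the tooling used in this work), with the period search capped at the 13-day limit noted in Section 1.1.

```python
import lightkurve as lk

tic = "TIC 150428135"  # placeholder target identifier (hypothetical)

# Prefer the FFI-based TESS-SPOC products; fall back to 2-min SPOC:
sr = lk.search_lightcurve(tic, author="TESS-SPOC")
if len(sr) == 0:
    sr = lk.search_lightcurve(tic, author="SPOC")
lc = sr[0].download(flux_column="pdcsap_flux").remove_nans().normalize()

# Lomb-Scargle rotation-period estimate, restricted to P <= 13 d:
pg = lc.to_periodogram(method="lombscargle", maximum_period=13.0)
print(pg.period_at_max_power)
```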
### Literature \(P_{\rm rot}\) Measurements from Kepler, K2, and MEarth
We build and test our \(P_{\rm rot}\)-measurement pipelines on samples of stars with previously measured \(P_{\rm rot}\). While we are interested in age-dating young stars (likely 0.5-1 Gyr), high-eccentricity phase-space is predominantly made up of field stars (see section 2.4) with \(P_{\rm rot}\) too long to measure with TESS. We choose a mix of young stars and old field stars with known \(P_{\rm rot}\) to ensure our pipelines can robustly measure the \(P_{\rm rot}\) they are supposed to and do not detect \(P_{\rm rot}\) when they should not. We crossmatch each of the \(P_{\rm rot}\) catalogs used with the TIC and Gaia DR3 to obtain information such as the TESS contamination ratio and Gaia astrometric information that may indicate the star is a binary (section 3.3).
For young stars, we look at four associations, three of which are open clusters of known ages. This includes the 120-Myr-old Pleiades (Rebull et al., 2016), the 670-Myr-old Praesepe (Rampalli et al., 2021), and the 730-Myr-old Hyades (Douglas et al., 2019). The community has studied rotation in these clusters for more than a decade using observations with longer baselines (particularly with the K2 mission, which had an 80-day baseline and conducted multiple campaigns covering Praesepe and the Hyades), making these good laboratories in which to test \(P_{\rm rot}\) recovery.
We also test our pipeline on stars in the Pleiades-aged Pisces-Eridanus stellar stream (Meingast and Alves, 2019, called Meingast 1 by Ratzenbock et al., 2020). The literature \(P_{\rm rot}\) from Curtis et al. (2019) were derived from light curves generated from TESS FFIs using simple aperture photometry and were measured by visually confirming Lomb-Scargle periodograms. While our analysis also utilizes TESS data, the light curves we use are generated and confirmed with a different approach.
For the field star population, we use the MEarth M dwarf sample which contains a broad range of rapid and slow rotators (Newton et al., 2016, 2018). This catalog provides a sample where some periods are known to be too long for rotational modulation to be apparent over 27 days. We use stars with the most reliable signatures of rotation, designated in those works as "grade a" or "grade b" rotators.
We list further information on the benchmark samples used in Table 1.
### Building a Kinematics Catalog with Gaia, GALAH, and APOGEE
The Gaia mission was launched in 2013 with the goal of creating the largest and most precise 3D catalog of objects in the sky (Gaia Collaboration et al., 2016). It is equipped with three instruments: the astrometry instrument (ASTRO) to measure positions, the photometric instrument (BP/RP) to measure luminosities, and the radial velocity spectrometer (RVS) to determine line-of-sight radial velocities. The third and latest data release,
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Group & Age (Myr) & Distance (pc) & \# of stars & Survey \\ \hline Pleiades & 120 & 136 & 739 & K2 \\ Pisces–Eridanus & 120 & 80–226 & 101 & TESS \\ Praesepe & 670 & 186 & 1013 & K2 \\ Hyades & 730 & 48 & 247 & K2 \\ MEarth & field & \(<\)30 & 586 & MEarth \\ \hline \end{tabular}
\end{table}
Table 1: Literature \(P_{\rm rot}\) catalogs tested with our \(P_{\rm rot}\) pipeline
Figure 1: \(L_{z}\) versus \(\sqrt{J_{R}}\) using updated kinematics from Gaia DR3, GALAH DR3, and APOGEE DR16. Larger \(L_{z}\) is associated with a larger orbital size in the solar neighborhood, and larger \(J_{R}\) is associated with a more eccentric orbit.
DR3, provides full astrometric solutions for over 1.46 billion stars in the Galaxy and radial velocities for 33 million stars (Gaia Collaboration et al., 2023).
While Gaia has spectroscopic capabilities, the resolution of RVS is not very high (\(R\)=11,000) given that spectroscopy is not a primary science goal of the mission. Furthermore, the precession of the Gaia spacecraft prevents the measurement of truly precise RVs, with an average RV precision of 1.3 km s\({}^{-1}\) (Katz et al., 2023). However, a number of ground-based surveys provide accompanying higher-resolution spectroscopic observations for stars Gaia has observed. Two such efforts are the GALactic Archaeology with HERMES, or GALAH, Survey (\(R\) = 28,000) and the Apache Point Observatory Galactic Evolution Experiment (APOGEE; \(R\)= 22,500); these both have RV precisions of around 0.1 km s\({}^{-1}\) (Zwitter et al., 2021; Nidever et al., 2015). GALAH observes stars of all ages and locations in the Milky Way for Galactic archaeology purposes (De Silva et al., 2015) while APOGEE is a near-infrared survey designed to primarily measure spectra for red giant stars in the inner disk and bulge (Majewski et al., 2017). When available, we use the more precise radial velocities from GALAH DR3 (Buder et al., 2021) and APOGEE DR16 (Jonsson et al., 2020).
We recreate the action space plot from Trick et al. (2019) based on the most recent phase-space information for 3.9 million stars observed by the Gaia mission (Gaia Collaboration et al., 2021, 2023; Katz et al., 2023), which we supplement with RVs from GALAH and APOGEE. This is shown in Figure 1. We first query Gaia DR3 for all stars within 200 pc that have a measured RV and parallax errors less than 10%, yielding 731,961 stars. We then crossmatch these results with GALAH RVs and APOGEE RVs, limiting stars with APOGEE RVs to an RV signal-to-noise ratio \(>10\) as suggested by the survey. This resulted in 16,937 and 47,432 crossmatches, respectively. For stars that have RVs in each survey, we calculate the zero-point offset among surveys using GALAH as our standard. We find an offset of \(-0.1\pm 5.9\) km s\({}^{-1}\) between Gaia and GALAH and \(-0.5\pm 6.8\) km s\({}^{-1}\) between APOGEE and GALAH, which is reasonable given the errors associated with each survey 2. We use the GALAH RV when available; if the star does not have a GALAH RV, we use the APOGEE RV, and if it has neither, we use the Gaia RV.
Footnote 2: We also consider RVs from the Gaia-ESO, LAMOST, and RAVE surveys (Randich et al., 2022; Wang et al., 2022; Steinmetz et al., 2020). However, the calculated offsets from Gaia, GALAH, or APOGEE exceeded the typical reported errors, so we choose not to include them in our action calculations.
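A hedged sketch of this query and the RV preference order follows. The ADQL columns are real Gaia DR3 fields, but the `rv_galah` and `rv_apogee` columns stand in for a GALAH/APOGEE crossmatch we do not show, and we assume the zero-point offsets have already been applied.

```python
import numpy as np
from astroquery.gaia import Gaia

# Within 200 pc (parallax > 5 mas), parallax error < 10%, RV measured.
query = """
SELECT source_id, ra, dec, parallax, pmra, pmdec,
       radial_velocity, radial_velocity_error
FROM gaiadr3.gaia_source
WHERE parallax > 5.0
  AND parallax_over_error > 10.0
  AND radial_velocity IS NOT NULL
"""
gaia = Gaia.launch_job_async(query).get_results().to_pandas()

# Placeholders for the GALAH / APOGEE crossmatch, which we do not show;
# in practice these columns come from merging on position or source_id.
gaia["rv_galah"] = np.nan
gaia["rv_apogee"] = np.nan

# Preference order: GALAH, then APOGEE, then Gaia.
gaia["rv_best"] = (gaia["rv_galah"]
                   .fillna(gaia["rv_apogee"])
                   .fillna(gaia["radial_velocity"]))
```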
### Action Space Calculations
As outlined by Trick et al. (2019), to calculate actions for our 731,961 stars, we convert the 6D observed kinematics to Galactic heliocentric Cartesian coordinates (X, Y, Z) and the corresponding heliocentric velocities (UHC, VHC, WHC) using coordinate transformations in galpy (Bovy, 2015). U defines the radial direction toward the Galactic center, V defines the rotational direction of the Galaxy, and W defines the vertical direction toward the Galactic North pole. We incorporate the Sun's motion with respect to the Local Standard of Rest (LSR), (U, V, W) = (11.1, 12.24, 7.25) km s\({}^{-1}\) (Schonrich et al., 2010). Then we convert (UHC, VHC, WHC) to (ULSR, VLSR, WLSR) assuming the Sun's distance to the Galactic center is 8 kpc, its circular velocity is 220 km s\({}^{-1}\), and its height above the Galactic plane is 25 pc (Bovy, 2015; Bovy et al., 2012; Juric et al., 2008). We transform (X, Y, Z, ULSR, VLSR, WLSR) to Galactocentric cylindrical coordinates: (R, \(\phi\), z, vR, vT, vz). Using the Stäckel fudge approximation (Binney, 2012), we find each star's orbital actions in the Galactic potential using galpy. We remove any stars with \(\sqrt{J_{R}}>14.4\) kpc km s\({}^{-1/2}\) as these are likely halo stars.
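The action computation condenses into a few galpy calls, sketched below. The text does not name the Galactic potential, so we assume galpy's MWPotential2014 and a typical focal length `delta = 0.4` for the Stäckel fudge; `compute_actions` is our own illustrative wrapper.

```python
import astropy.units as u
from galpy.orbit import Orbit
from galpy.potential import MWPotential2014

def compute_actions(ra, dec, dist, pmra, pmdec, rv):
    """J_R and L_z for one star from its observed 6D kinematics;
    solar parameters follow the text (R0 = 8 kpc, v_circ = 220 km/s,
    z0 = 25 pc, Schoenrich et al. 2010 solar motion)."""
    o = Orbit([ra * u.deg, dec * u.deg, dist * u.kpc,
               pmra * u.mas / u.yr, pmdec * u.mas / u.yr,
               rv * u.km / u.s],
              radec=True, ro=8.0, vo=220.0, zo=0.025,
              solarmotion="schoenrich")
    # Staeckel fudge (Binney 2012); MWPotential2014 is an assumption.
    jr = o.jr(pot=MWPotential2014, type="staeckel", delta=0.4)
    return jr, o.Lz()  # kpc km/s (physical units since ro/vo are set)
```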
We simulate the propagation of parallax and RV errors in our action calculations for 1000 random stars. We find that a 10% error on parallax has minimal effect (\(\leq 1\%\)) on the action calculations. Radial actions begin to change significantly for RV errors of 5-10 km s\({}^{-1}\). The average change in \(\sqrt{J_{R}}\) was always \(<0.1\) kpc km s\({}^{-1/2}\), but the standard deviations were \(\geq 0.7\) kpc km s\({}^{-1/2}\) for RV errors of 8-10 km s\({}^{-1}\). Based on the scales of wrinkles, we choose stars with an RV error \(\leq 7\) km s\({}^{-1}\), which gives an average change in \(\sqrt{J_{R}}\) of \(0.07\pm 0.6\) kpc km s\({}^{-1/2}\)3. Angular momentum changes are small (\(<1\%\)).
Footnote 3: Making a cut on RV error also can remove binaries, which is beneficial for gyrochronology. However, it can remove fainter rapid rotators as well (e.g., see footnote 4 in Rampalli et al., 2021). This will not be a significant issue here since the stars are nearby, and the RV error cut is still quite high.
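One way to implement that Monte Carlo test is sketched below, reusing `compute_actions` from the previous sketch (assumed here to return plain floats); the `star` record and its attribute names are hypothetical stand-ins for the Gaia columns.

```python
import numpy as np

def action_uncertainty(star, n_draws=1000, seed=0):
    """Monte Carlo propagation of parallax and RV errors into
    sqrt(J_R), re-deriving the actions for each perturbed draw."""
    rng = np.random.default_rng(seed)
    plx = rng.normal(star.parallax, star.parallax_error, n_draws)  # mas
    rv = rng.normal(star.rv, star.rv_error, n_draws)               # km/s
    sqrt_jr = np.array([
        np.sqrt(compute_actions(star.ra, star.dec,
                                1.0 / p,  # parallax in mas -> kpc
                                star.pmra, star.pmdec, v)[0])
        for p, v in zip(plx, rv)
    ])
    return sqrt_jr.mean(), sqrt_jr.std()
```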
In subsequent works we will explore how combining ages and kinematics can significantly deepen our interpretation of kinematic structures in action space. For this work, we choose a vertical and a horizontal slice in high \(J_{R}\) on which to run our \(P_{\rm rot}\) pipelines. The locations of these were motivated in part by where the benchmark clusters fall in action space. In Figure 2 we see the younger clusters like the Pleiades, Pisces-Eridanus, the Hyades, and Praesepe are at lower \(J_{R}\), while the 2.7-Gyr-old Ruprecht 147 cluster sits a bit higher (Curtis et al., 2020). Note that Ruprecht 147, which is not one
of the clusters we study in this work, is at a distance of 302 pc, beyond our 200 pc limit. We have plotted its members (using membership cuts suggested by Curtis et al., 2020) for demonstration of where an older group of stars falls in action space. We also note that there is an elongation in most of the clusters. While exploring the reason for this morphology is beyond the scope of this work, we posit this could be the result of tidal effects (e.g. Roser et al., 2019; Roser and Schilbach, 2019), the presence of binaries and their effects on the measured RVs, or interloping non-members. However, because we use rotation-based catalogs, the latter is unlikely.
Both slices were selected using Glue (Beaumont et al., 2015). Glue is an interface that allows "linked-view" visualizations of astronomical image or catalog data. For example, users can draw an arbitrary shape on a plot with an associated table of data that will filter the particular data lying within the drawn boundaries. We recreate Figure 1 in the Glue interface and manually highlight the two slices, which in turn creates a sub-table with the selected stars. The vertical slice contains 8,834 stars, and the horizontal slice contains 3,952 stars, with 238 stars overlapping. We choose the vertical slice of action space as \(6\lesssim\sqrt{J_{R}}\lesssim 14\) kpc km s\({}^{-1/2}\), \(1655\lesssim L_{z}\lesssim 1700\) kpc km s\({}^{-1}\) as shown in Figure 2. The \(\sqrt{J_{R}}\) lower bound begins at the location of Ruprecht 147, since its presence indicates stars in that region are likely over 1 Gyr old, implying that finding young stars there would be anomalous. This slice also captures part of a wrinkle structure (discussed in section 1). Our horizontal slice is of a similar thickness located at \(11\lesssim\sqrt{J_{R}}\lesssim 11.5\) kpc km s\({}^{-1/2}\), \(1300\lesssim L_{z}\lesssim 2050\) kpc km s\({}^{-1}\). This slice also captures a part of two wrinkles.
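Although our selections were drawn by hand in Glue, the quoted boundaries amount to simple rectangular cuts that can be reproduced programmatically; `actions` below is a hypothetical table holding each star's \(J_R\) and \(L_z\).

```python
import numpy as np

# `actions` is a hypothetical structured table with per-star J_R and L_z.
sqrt_jr = np.sqrt(actions["J_R"])   # (kpc km/s)^(1/2)
lz = actions["L_z"]                 # kpc km/s

vertical = (sqrt_jr > 6) & (sqrt_jr < 14) & (lz > 1655) & (lz < 1700)
horizontal = (sqrt_jr > 11) & (sqrt_jr < 11.5) & (lz > 1300) & (lz < 2050)

vertical_stars = actions[vertical]      # 8,834 stars in our selection
horizontal_stars = actions[horizontal]  # 3,952 stars in our selection
```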
## 3 Methods
### Measuring \(P_{\rm rot}\)
The Lomb-Scargle (LS) periodogram is perhaps the most used form of \(P_{\rm rot}\) measurement given its efficiency and ability to use unevenly sampled data (Lomb, 1976; Scargle, 1982; Press and Rybicki, 1989; VanderPlas, 2018). After defining a grid of periods or frequencies, sine curves with the defined periods/frequencies are fit to the signal. The result is an estimate of the Fourier power as a function of period/frequency. The second panels of Figure 3 show the periodograms for an example star. Visually, we can see the \(P_{\rm rot}\) is \(\approx 1.2\) days (left panels), and this is where the periodograms have their highest peaks. This is confirmed by phase-folding the light curve with the measured \(P_{\rm rot}\) as shown in the third panels.
The Generalized-Lomb-Scargle (GLS) periodogram (Zechmeister and Kurster, 2009) is an iteration of the LS periodogram in which the sine wave fit is more rigorous and can account for measurement errors and offsets, resulting in more precise \(P_{\rm rot}\) measurements. It has been used successfully with TESS data already (Anthony et al., 2022). We use PyAstronomy's implementation (Czesla et al., 2019).
There should be at least 1.5-2 rotation cycles observed to reliably measure a \(P_{\rm rot}\). For TESS's typical observational baseline of 27 days, this means we cannot trust measured \(P_{\rm rot}\) beyond 13.5-18 days. We define a periodogram with periods ranging from 0.1 to 50 days with a grid spacing of 0.001 days. If \(P_{\rm rot}\geq 19\) days is measured, we expect this is likely a spurious signal, and our pipeline then chooses a new, shorter \(P_{\rm rot}\) corresponding to the next largest peak. This ultimately limits any measured \(P_{\rm rot}\) to \(\leq 19\) days, but we leave the periodogram limit at 50 days for better constraints on our periodograms' peak-width measurements (introduced in section 3.2.1).
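The period search just described can be sketched as follows. We use astropy's floating-mean LombScargle as a stand-in for the PyAstronomy GLS (with measurement errors and `fit_mean=True` it implements the same generalized fit), and `scipy.signal.find_peaks` to walk distinct peaks when the 19-day fallback is triggered; `measure_candidate_prot` is our own illustrative helper.

```python
import numpy as np
from astropy.timeseries import LombScargle
from scipy.signal import find_peaks

def measure_candidate_prot(time, flux, flux_err):
    """Candidate P_rot from a GLS-style periodogram on the
    0.1-50 d grid with 0.001 d spacing defined in the text."""
    periods = np.arange(0.1, 50.0, 0.001)
    ls = LombScargle(time, flux, flux_err, fit_mean=True)
    power = ls.power(1.0 / periods)
    # Walk distinct peaks from strongest to weakest, skipping
    # candidates at >= 19 d, which are likely spurious for TESS.
    peak_idx, _ = find_peaks(power)
    for idx in peak_idx[np.argsort(power[peak_idx])[::-1]]:
        if periods[idx] < 19.0:
            return periods[idx], periods, power
    return None, periods, power
```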
For our analysis in section 4 and our injection and recovery test discussed in Appendix A, we set the reliable \(P_{\rm rot}\) detection limit at 13 days. This limit matches other TESS rotation pipelines and guarantees two rotation cycles in a single TESS sector. Previous attempts at automating measuring rotation using single sectors of TESS data have only been successful for \(P_{\rm rot}\leq 13\) days (Canto Martins et al., 2020; Holcomb et al., 2022; Kounkel et al., 2022), with measurements of longer periods more likely being associated with TESS systematics.
\(P_{\rm rot}\) measurements may return half or double the true \(P_{\rm rot}\). This can result from sampling, short baselines, stellar spot evolution, or an intrinsically double-peaked signal (e.g. Basri and Nguyen, 2018). If a substantial peak (\(\geq 60\%\) of the primary peak) is seen at double the period, then we automatically choose this for our \(P_{\rm rot}\) measurement, as the code may be inadvertently folding the signal in half. If a longer harmonic is found at the pipeline \(P_{\rm rot}\) detection limit of \(\geq 19\) days, we choose the shorter harmonic as the measured \(P_{\rm rot}\). We also calculate the window function for each light curve following VanderPlas (2018). We discard any \(P_{\rm rot}\) and peak-width that overlap with the \(P_{\rm rot}\) returned from the primary peak of the window function, which is typically between 14-17 days.
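These two checks, as hedged sketches only, might look as follows; the window function is computed in the standard way (the periodogram of a constant signal at the observation times, with `fit_mean` and `center_data` disabled), and both helper names are our own.

```python
import numpy as np
from astropy.timeseries import LombScargle

def resolve_harmonics(periods, power, p_best):
    """Prefer 2x the candidate period when a substantial (>= 60% of
    the primary) peak sits there, unless 2x exceeds the 19 d limit."""
    i1 = np.argmin(np.abs(periods - p_best))
    i2 = np.argmin(np.abs(periods - 2.0 * p_best))
    if power[i2] >= 0.6 * power[i1] and 2.0 * p_best < 19.0:
        return 2.0 * p_best
    return p_best

def window_function_period(time, periods):
    """Primary peak of the window function (VanderPlas 2018):
    periodogram of a constant signal at the observation times."""
    ls = LombScargle(time, np.ones_like(time),
                     fit_mean=False, center_data=False)
    wpower = ls.power(1.0 / periods)
    return periods[np.argmax(wpower)]  # typically 14-17 d for TESS
```

Detections overlapping the window-function period would then be discarded, as described above.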
### Confirming \(P_{\rm rot}\)
Instrument systematics and spot evolution require that a candidate \(P_{\rm rot}\) be subject to a confirmation process. For \(P_{\rm rot}\) in open clusters or samples of field stars, this has largely been done by eye since the number of stars is usually under a few thousand. To confirm larger samples like the entire Kepler or K2 fields, a series of metrics are used such as amplitude of the signal or primary peak height threshold (McQuillan et al., 2014;
Reinhold and Hekker, 2020). These thresholds are determined based on initial visual confirmation decisions and work best for data less subject to systematics than that from TESS (e.g. Kepler).
With TESS surveying the majority of the sky, it is necessary to develop an accurate confirmation that is at least partially automated, despite the systematics present. Automated machine learning tools such as random forest classifiers and convolutional neural networks have been used to measure \(P_{\rm rot}\) in Kepler (e.g. Breton et al., 2021) and TESS data. These methods do not necessarily outperform non-machine-learning methods for non-consecutive sectors of data at present (Claytor et al., 2022).
We develop a quasi-automated confirmation process that significantly reduces the number of stars that require by-eye inspection. This process results in three pipelines that employ varying levels of confirmation. We illustrate these pipelines in Figure 4 and outline the process below. Each subsequent pipeline includes the previous pipeline's steps, meaning that Pipeline III is the most comprehensive and thorough. Pipelines I and II are fully automated, while Pipeline III adds visual inspection in select cases.
#### 3.2.1 Pipeline I: two metrics required to be considered a period detection, no by-eye inspection required
Before querying the star's light curve, we determine if the star is a candidate binary based on archival catalog information (see section 3.3 for how this is done). If it is, we exclude the star from further analysis. If single, we check if the star's light curve(s) displays significant periodicity by calculating two metrics for each periodogram.
First, we calculate the peak power over the median power, or signal-to-noise ratio (S/N) of the periodogram. We show examples of a star with strong periodicity in two of three sectors (S/N \(>40\), Figure 3).
Second, we measure the peak-width of the period detected as a diagnostic tool for \(P_{\rm rot}\) measurement quality within and across sectors (a code sketch follows at the end of this subsection). We use the width of the highest peak in the GLS periodogram (see right hand panels in Figure 3), which we calculate by fitting a Gaussian. We discard any stars for which I) the peak-width measurement is \(\geq 25\%\) of the candidate \(P_{\rm rot}\) or II) the peak-width measurement fails. If the fitting fails, this indicates there was no peak-like feature to fit in the periodogram. This tends to happen at longer (and likely spurious) periods, where the periodogram feature is more plateau-like than peak-like.
If a star has at least one light curve with S/N \(\geq 40\) (see determination of this value in Appendix A) and a peak-width measurement \(<25\%\) of the candidate \(P_{\rm rot}\), then we consider it a _period detection_ for Pipeline I and
Figure 2: Updated action plot as shown in Figure 1 and described in §1.2 with the Pleiades (in pink), Pisces–Eridanus (orange), Praesepe (blue), Hyades (green), and Ruprecht 147 (red) overplotted. Vertical and horizontal slices chosen for \(P_{\rm rot}\) study noted in black. As the clusters increase in age, so do their radial actions, motivating our choice for studying \(P_{\rm rot}\) and finding young stars at high \(J_{R}\).
continue to the next pipeline. The final \(P_{\rm rot}\) reported for Pipeline I is the median \(P_{\rm rot}\) if there are Pipeline I detections in multiple sectors for a star.
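The two Pipeline I metrics can be sketched as follows; the Gaussian fit in frequency space and the conversion of the fitted frequency width to a period width are our assumptions about implementation detail, and `peak_metrics` is an illustrative helper of our own.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, base):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + base

def peak_metrics(freq, power, p_rot):
    """Periodogram S/N and primary peak-width; returns (snr, width)
    with width in days, or (snr, None) when the Gaussian fit fails."""
    snr = power.max() / np.median(power)
    f0 = 1.0 / p_rot
    sel = np.abs(freq - f0) < 0.5 * f0  # fit window around the peak
    try:
        popt, _ = curve_fit(gaussian, freq[sel], power[sel],
                            p0=[power.max(), f0, 0.05 * f0,
                                np.median(power)])
    except RuntimeError:                 # no peak-like feature to fit
        return snr, None
    sigma_f = abs(popt[2])
    width = sigma_f * p_rot ** 2         # dP = dF / F^2 at F = 1/P
    return snr, width

# Pipeline I detection criterion: snr >= 40 and width < 0.25 * p_rot.
```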
#### 3.2.2 Pipeline II: multiple sectors with Pipeline I detections, no by-eye inspection required
If the star passes Pipeline I with at least one detection, we check if the star has multiple sectors with Pipeline I detections 4. If there are multiple sectors, we check whether the \(P_{\rm rot}\) from each sector agree within \(2\times\) the peak-width, including any \(1/2\times\) or \(2\times\) harmonics. This comparison is done by taking the median Pipeline I \(P_{\rm rot}\) detection and comparing each individual \(P_{\rm rot}\) measurement to the median. If there are an even number of detections, we choose the higher middle value as the median. If \(\geq 2/3\) of the detections match the median within the uncertainties we measured in Pipeline I, the star's median \(P_{\rm rot}\) is considered a Pipeline II detection and reported as the final \(P_{\rm rot}\). If we find a harmonic is present among sectors, we report the longer \(P_{\rm rot}\) as the final \(P_{\rm rot}\).
Footnote 4: Given the systematics and data gaps in TESS, it is not yet possible to uniformly detrend light curves from multiple, non-consecutive sectors, stitch them together, and measure \(P_{\rm rot}\) longer than a single sector. We leave the maximum period in the periodogram at 50 days. However, inferring long \(P_{\rm rot}\) from multiple sectors of data is an area of active work (Hattori+ in prep, Claytor et al., 2023).
This form of automated confirmation is based on results from Rampalli et al. (2021) and Reinhold and Hekker (2020), which found that, in spite of the stellar spot evolution seen in stars observed with the K2 mission, \(P_{\rm rot}\) can be expected to be recovered within 10-20% across year-long gaps between observations, excluding harmonic effects. Because of the shorter observational baseline and systematics, longer TESS \(P_{\rm rot}\) (more than 10 days) may differ by more than 20% between sectors while still tracing the same underlying \(P_{\rm rot}\). The peak-width measurement captures a similar level of uncertainty.
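A compact sketch of this multi-sector agreement check follows; `pipeline_ii` is our own illustrative helper, and the final harmonic-resolution step (reporting the longer \(P_{\rm rot}\) when sectors agree only through a harmonic) is noted but omitted.

```python
import numpy as np

def pipeline_ii(prots, widths):
    """Pipeline II confirmation: >= 2/3 of the per-sector P_rot must
    match the median detection within 2x the peak-width, allowing
    1/2x and 2x harmonics. Returns the confirmed median P_rot, or
    None to flag the star for by-eye inspection (Pipeline III)."""
    prots = np.asarray(prots, dtype=float)
    med = np.sort(prots)[len(prots) // 2]  # higher middle value if even
    n_match = sum(
        any(abs(h * p - med) <= 2.0 * w for h in (1.0, 0.5, 2.0))
        for p, w in zip(prots, widths)
    )
    if n_match / len(prots) >= 2.0 / 3.0:
        # Further step (omitted): report the longer P_rot when the
        # sectors agree only through a harmonic.
        return med
    return None
```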
#### 3.2.3 Pipeline III: by-eye inspection for stars that did not pass Pipeline II
Figure 3: Example of measurement and confirmation process for a star observed in sectors 42 (top row), 43 (middle row), and 44 (bottom row) that is detected in Pipelines I, II, and III. _Left_: Star’s light curve for sector. _Middle-Left_: GLS periodogram with measured \(P_{\rm rot}\) associated with highest peak indicated in yellow, median power indicated in green, and calculated S/N of the highest peak. The S/N of the three sectors (21, 52, 61) spans our detection threshold of 40. _Middle-Right_: Phase-folded light curve with measured \(P_{\rm rot}\). _Right_: Periodogram in frequency space with primary peak fitted for width measurement (in black). This star has a strong S/N and small peak-width in two of three sectors, which means its \(P_{\rm rot}\) is a Pipeline I detection that is automatically confirmed as part of Pipeline II and considered a Pipeline II/III detection.
A star with a period detection from Pipeline I might not be confirmed by Pipeline II for two reasons. The first case is stars where only a single sector has a Pipeline I period detection. The second case is where there are multiple sectors with Pipeline I period detections, but for which the periods do not agree. In these cases, the star is flagged for by-eye inspection in Pipeline III. To do a by-eye inspection for a star's candidate \(P_{\rm rot}\), we check if the modulation in either the light curve or the phase-folded light curve matches the measured \(P_{\rm rot}\). We use the quality check system (Q) used in Rampalli et al. (2021). We assign Q = 0 for obvious detections, Q = 1 for questionable detections (usually when there are obvious modulations in the light curve but an unclear period due to systematics), Q = 2 for spurious periodicity from instrument systematics, and Q = 3 for cases where the light curve is entirely dominated by systematics. We show examples of each classification in Figure 5. Periods are considered visually confirmed for Q = 0, 1 and discarded for Q = 2, 3.
A star that passes either automated (i.e. Pipeline II detection) or by-eye inspection (i.e. Pipeline III detection) is considered a _confirmed detection_.
### Candidate Binary & Subgiant Flagging
Gyrochronology is intended to be applied to single stars. Tight binary systems' rotational evolution can be significantly altered due to each body exerting tidal forces on the other (e.g., Meibom and Mathieu, 2005; Zahn, 2008). This can lead to a spin up or down effect that deviates from the known single star age-rotation relations. A companion can also contaminate the rotation we are attempting to measure from the source star. Additionally, whether or not stars are tidally interacting or even physically associated, TESS pixels are large and can contain multiple stars. These effects lead to incorrect placement on the color-period plane and imprecise age inference, so we remove binaries from our sample.
We also want to remove subgiants: while they might have solar colors now, they were likely warmer when on the main sequence and beyond the Kraft break (van Saders and Pinsonneault, 2013). Thus, magnetic braking and gyrochronology may no longer apply despite the star having a rapid \(P_{\rm rot}\).
We use the high-precision Gaia astrometry, GALAH spectroscopy, and TESS photometry in formulating the following tests to identify binaries (a consolidated code sketch follows the list):
Figure 4: Flowchart of the \(P_{\rm rot}\) pipelines. Starting with a star’s light curve(s), we check if it is a candidate binary using ancillary data and discard it if so. Next, we check for periodicity in the periodograms and identify stars with strong periodicity (Pipeline I detections). A Pipeline II detection is one in which a star has multiple sectors with Pipeline I detections that measure the same \(P_{\rm rot}\). A Pipeline III detection includes Pipeline II detections plus any additional Pipeline I detections that pass a by-eye inspection.
1. Gaia DR3 includes a number of additional catalogs with non-single star solutions (Gaia Collaboration, 2022). This includes spectroscopic, photometric, and eclipsing binary observations. Solutions from the combinations of these observations are also provided in Halbwachs et al. (2023) and Eyer et al. (2023). We check whether our stars are listed in any of these catalogs; if so, we flag them as binaries and discard them from future analyses.
2. We examine the renormalized unit weight error (RUWE) measurement for each star. This is a goodness-of-fit measure of the single-star model fit to the source's astrometry. If a star has a RUWE \(>\) 1.4, there is a strong likelihood the star has a companion (e.g., Jorissen, 2019; Belokurov et al., 2020).
3. We check the GALAH DR3 catalog pre-main sequence star/binary flag column. Although pre-main sequence stars would be of interest given our goal of finding young stars (despite gyrochronology not being applicable), it is more likely that the flagged stars are binaries.
Figure 5: Examples of quality flag designations. Light curves are on the left, the corresponding periodogram in the middle, and phase-folded light curve on the right. The position of the detected period is indicated by the orange line. Top row: Q = 0. There is a clear 9 day periodic modulation in the light curve and a strong, sharp peak in the periodogram (literature \(P_{\rm rot}\): 8.7 days). Second row: Q = 1. There is evidence of modulation measured at 8.3 days in the light curve, but it is weak due to instrument systematics and the data gap (literature \(P_{\rm rot}\): 7.3 days). Third row: Q = 2. The light curve shows evidence for a long-term periodic trend likely due to systematics, which the periodogram chooses as the period instead of the masked and potentially astrophysically real period at 0.16 days (literature \(P_{\rm rot}\): 0.32 days). Bottom row: Q = 3. Systematics completely dominate the light curve structure, and no astrophysical periodicity appears to be present (literature \(P_{\rm rot}\): 0.43 days).
4. Each star in the TIC has a reported contamination ratio, or the ratio of flux from neighboring stars (\(F_{\rm neighbor}\)) to that of the source (\(F_{\rm source}\)) (Stassun et al., 2019). We identify stars with contamination ratio (CR) \(>20\%\) as visual binaries. This limit derives from the following procedure. Following Rampalli et al. (2019), the dilution (\(d\)) is equal to the contribution of \(F_{\rm source}\) to the total flux observed in the light curve (\(F_{\rm total}\)): \[d=\frac{F_{\rm source}}{F_{\rm total}}.\] (2) The contamination ratio (CR) is: \[{\rm CR}=\frac{F_{\rm neighbor}}{F_{\rm source}}=\frac{1-d}{d}.\] (3) For a star with CR=20%, we find using equation 3 that \(d\) is 83%, or alternatively that the neighboring star contributes 17% of the flux. When \(F_{\rm source}\) is constant and \(F_{\rm neighbor}\) is varying by 5% and contributes 17% of the flux, the total flux variation is \(0.83\times 0+0.17\times 0.05\approx 0.0085\). A flux variation of around 0.85% has a 66% chance of detection with our pipelines (see bottom right panels in Figure A.2 in Appendix A).
5. Subgiants and equal-mass binaries can also pose as single-star interlopers. They are overluminous and sit above the single-star main sequence in a color-magnitude diagram. Following Douglas et al. (2019); Rampalli et al. (2021), we fit a 6th-order polynomial to the single-star rotator sequence of Praesepe in the Gaia color (\(G_{\rm BP}-G_{\rm RP}\)) versus absolute \(G\)-magnitude (\(M_{G}\)) color-magnitude diagram. Stars that lie \(>0.4\) magnitudes above this fit are flagged as binary or evolved stars.
Failing any one of these tests is sufficient to be labeled a candidate binary or subgiant.
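The checks above reduce to a handful of catalog cuts; a consolidated sketch is below. Every attribute name on `star` is a hypothetical stand-in for the crossmatched TIC, Gaia, and GALAH columns, and `praesepe_fit` holds the polynomial coefficients from the CMD fit in check 5.

```python
import numpy as np

def flag_binary_or_subgiant(star, praesepe_fit):
    """Candidate binary/subgiant test combining checks 1-5; all
    attribute names are hypothetical stand-ins for catalog columns."""
    if star.in_gaia_nss_tables or star.galah_binary_flag:  # checks 1, 3
        return True
    if star.ruwe > 1.4:                                    # check 2
        return True
    if star.contamination_ratio > 0.20:                    # check 4
        return True
    # Check 5: "above" the CMD fit means brighter, i.e. a smaller
    # absolute magnitude than the Praesepe sequence predicts.
    expected_mg = np.polyval(praesepe_fit, star.bp_rp)
    return star.abs_g_mag < expected_mg - 0.4
```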
While we do apply these binary and subgiant cuts to stars with calculated actions, we do not apply these cuts to any of the clusters and MEarth groups when testing our pipelines except the contamination ratio cut. This maximizes the number of stars on which we can validate our pipelines unless a star was unresolved in TESS. Whether or not a star is a binary or subgiant does not affect period detection. Rather, it is the \(P_{\rm rot}\) interpretation and subsequent gyrochronological age that is affected.
### Training, Validation, & Testing with Literature \(P_{\rm rot}\)
In order to understand how well our \(P_{\rm rot}\) pipelines work, we divide the five literature \(P_{\rm rot}\) catalogs into training, validation, and testing data sets. We begin with the training sample and use the validation sample to initially test the pipelines. Based on the validation sample results, we go back and adjust on the training sample to improve performance. The testing data set is used post-optimization to report final numbers (no changes can be made to the pipeline after reviewing the results for the testing data). We have broken up the catalogs in the following manner:
* Training: Hyades, Pleiades, 60% MEarth
* Validation: 50% Praesepe, 50% Psc-Eri, 20% of MEarth
* Testing: the remaining 50% of Praesepe, 50% of Psc-Eri, and 20% of MEarth
## 4 Pipeline Results
We calculate six metrics for each sample under each pipeline's definition of a detection to assess pipeline performance (a sketch for computing these rates follows the list). In our calculations of completeness and recovery, we use \(P_{\rm rot}\)= 13 days as the detection limit but do not impose this cut on our results, which occasionally include detections \(\geq\) 13 days.
* Recovery rate for all stars: How many of our TESS \(P_{\rm rot}\) measurements are within 2x the measured peak-width of the literature \(P_{\rm rot}\), considering only stars with literature \(P_{\rm rot}\)\(<\) 13 days? We also consider a \(P_{\rm rot}\) recovered if it is within 2x the measured peak-width of a harmonic (1/2, 2) of the reported literature \(P_{\rm rot}\). A \(P_{\rm rot}\) does not have to be considered a detection by our pipeline for the correct literature \(P_{\rm rot}\) to be accurately measured.
* Recovery rate for pipeline _detections_: How many of our TESS stars had \(P_{\rm rot}\)_detections_ for the particular pipeline within 2x the measured peak-width of the literature \(P_{\rm rot}\), given the detection limit of 13 days? We also consider a \(P_{\rm rot}\) recovered if it is within 2x the measured peak-width of a harmonic (1/2, 2) of the reported literature \(P_{\rm rot}\).
* True positive rate: How many of our TESS stars had \(P_{\rm rot}\) detections when a detection was expected (i.e., the true \(P_{\rm rot}\) was within the reliable \(P_{\rm rot}\) detection limit for TESS, \(<13\) days)? The detected \(P_{\rm rot}\) does not have to match the true \(P_{\rm rot}\).
* True negative rate: How many of our TESS stars had \(P_{\rm rot}\) non-detections when a non-detection was indeed expected (i.e. true \(P_{\rm rot}\) was outside the reliable \(P_{\rm rot}\) detection limit for TESS, \(>13\) days)?
* False positive rate: How many of our TESS stars had \(P_{\rm rot}\) detections when a detection was not expected (i.e. true \(P_{\rm rot}\) was outside the reliable \(P_{\rm rot}\) detection limit for TESS, \(>13\) days)?
* False negative rate: How many of our TESS stars had \(P_{\rm rot}\) non-detections when a detection was expected (i.e., the true \(P_{\rm rot}\) was within the reliable \(P_{\rm rot}\) detection limit for TESS, \(<13\) days)?
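A sketch of how these six rates can be computed from matched catalogs follows; `detection_stats` is our own illustrative helper and assumes the literature periods, measured periods, peak-widths, and detection flags are aligned per-star arrays.

```python
import numpy as np

def detection_stats(lit_prot, tess_prot, widths, detected, limit=13.0):
    """Six confusion-style rates against literature P_rot. A measurement
    is `recovered` when it lies within 2x the peak-width of the
    literature value or of its 1/2x or 2x harmonic. Empty subsets
    yield NaN in this simple sketch."""
    lit = np.asarray(lit_prot, dtype=float)
    meas = np.asarray(tess_prot, dtype=float)
    w = np.asarray(widths, dtype=float)
    det = np.asarray(detected, dtype=bool)
    expect = lit < limit  # a detection is expected below the limit
    recovered = np.array([
        any(abs(m - h * l) <= 2.0 * wi for h in (1.0, 0.5, 2.0))
        for m, l, wi in zip(meas, lit, w)
    ])
    return {
        "recovery": recovered[expect].mean(),
        "detection_recovery": recovered[expect & det].mean(),
        "true_pos": det[expect].mean(),
        "true_neg": (~det[~expect]).mean(),
        "false_pos": det[~expect].mean(),
        "false_neg": (~det[expect]).mean(),
    }
```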
We report these values in Table 2 for all five groups in the three pipelines. Pipeline II results have fewer stars than the other two pipelines since it only considers stars with multiple sectors of data, as shown in Figure 4. While we have divided these groups up for the training and validation phases, we combine the datasets used for training and validation when reporting numbers since they were all subjected to the same analyses. We repeat this analysis with the test set: 50% of the Pisces-Eridanus, 50% of the Praesepe, and 20% of the MEarth samples. The results are shown in Table B.1 in Appendix B. The numbers are very similar to those in Table 2, with an average recovery rate of 92%, a true positive rate of 92%, and a true negative rate of 89%.
For all five samples, the overall recovery rate is \(\geq 90\%\). When considering recovery of each pipeline's _detections_, the rates generally remain similar except for the Pleiades and Praesepe, whose distances of 136 and 186 pc lead to magnitude limitations for TESS light curves (discussed further below).
True positive rates across groups and pipelines never fall below 80%. The lowest true negative rate is 76% for MEarth after running Pipeline I; this is not unexpected, since Pipeline I has the fewest confirmation tests. The true negative rate increases to 87% and then 95% for MEarth with Pipelines II and III and their subsequent confirmation checks. The Hyades has a similarly low true negative rate of 77%. The higher false positive rate is because many Hyads have \(P_{\rm rot}\) just slightly \(>13\) days. As can be seen in Figure 6, we identify a period for many of these stars at \(<13\) days, which are then counted as
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Group & Pipeline & \# of stars & \% Recovery & \% Detection Recovery & \% True Pos. & \% True Neg. & \% False Pos. & \% False Neg. \\ \hline \multirow{3}{*}{Pleiades} & I & 491 & 92 & 79 & 83 & & & 17 \\ & II & 9 & 92 & 90 & 90 & & & 10 \\ & III & 491 & 92 & 79 & 80 & & & 20 \\ \multirow{3}{*}{Pisces-Eridanus} & I & 46 & 93 & 93 & 100 & & & 0 \\ & II & 40 & 93 & 90 & 95 & & & 5 \\ & III & 46 & 93 & 93 & 98 & & & 2 \\ \multirow{3}{*}{Praesepe} & I & 208 & 90 & 83 & 91 & 89 & 11 & 9 \\ & II & 118 & 90 & 84 & 92 & 93 & 7 & 8 \\ & III & 208 & 90 & 82 & 90 & 94 & 6 & 10 \\ \multirow{3}{*}{Hyades} & I & 160 & 94 & 91 & 97 & 77 & 23 & 3 \\ & II & 107 & 94 & 92 & 96 & 79 & 21 & 4 \\ & III & 160 & 94 & 90 & 93 & 89 & 11 & 7 \\ \multirow{3}{*}{MEarth} & I & 322 & 98 & 98 & 100 & 76 & 24 & 0 \\ & II & 104 & 98 & 98 & 99 & 87 & 13 & 1 \\ & III & 322 & 98 & 97 & 97 & 95 & 5 & 3 \\ \hline \end{tabular} Note. – Percentage recovered indicates whether 1x, 2x, or 1/2x the literature \(P_{\rm rot}\) was recovered with each pipeline within 2x the peak-width and within the reliable \(P_{\rm rot}\) detection limit (\(\leq 13\) days). Percentage of detections recovered indicates whether 1x, 2x, or 1/2x the literature \(P_{\rm rot}\) was recovered with each pipeline for a detection within 2x the peak-width and within the reliable \(P_{\rm rot}\) detection limit (\(\leq 13\) days). True positive indicates a detection within the reliable \(P_{\rm rot}\) detection limit. True negative indicates a non-detection outside of the reliable \(P_{\rm rot}\) detection limit. False positive indicates a detection outside of the reliable \(P_{\rm rot}\) detection limit. False negative indicates a non-detection within the reliable \(P_{\rm rot}\) detection limit. The Pleiades and Pisces–Eridanus do not have true negative and false positive columns since \(\leq 2\) stars have \(P_{\rm rot}>13\) days. Pipeline II only considers stars with multiple sectors.
\end{table}
Table 2: \(P_{\rm rot}\) pipeline versus literature \(P_{\rm rot}\) catalogs results
false positives given our imposed detection limit of 13 days.
With Pipeline I, true positive rates are strong (83 to 100%). However, when looking at the associated recovery rates for detections, or how well Pipeline I does at measuring the _correct_ \(P_{\rm rot}\) for detections, we see that the accuracy is somewhat lower (79 to 98%). The true negative rates are quite strong as well, with the exception of the MEarth and Hyades samples, as discussed previously.
Pipeline II only considers stars with multiple sectors of data as a means of confirming \(P_{\rm rot}\). This additional confirmation test generally increases true negative rates, indicating that it is successful in identifying bad detections.
Pipeline III, our most rigorous pipeline, includes by-eye inspection. For the Pleiades, Pisces-Eridanus, Praesepe, Hyades, and MEarth, by-eye inspection of stars with mismatching \(P_{\rm rot}\) in multiple sectors occurred for 0%, 6%, 10%, 16%, and 11% of the stars in the five respective groups. By-eye inspection is also invoked for stars with only a single sector of data, which occurred for 98%, 7%, 7%, 13%, and 42% of the samples respectively.
For Pipeline III, recovery rate for detections and true positive rates decrease slightly across all groups compared to Pipeline I. Most \(P_{\rm rot}\) detections in Pipeline I are labeled true positives despite potentially having inaccurately measured \(P_{\rm rot}\) because there are no additional confirmation steps in this pipeline; any star's light curve with a significant peak in a periodogram is deemed a detection. Adding the by-eye inspection and/or comparison of \(P_{\rm rot}\) across sectors decreases false positives; the trade-off is that it can move stars originally classified as true positives in Pipeline I to false negatives in Pipelines II and III. True negative rates increase from Pipeline I, especially for MEarth and the Hyades, indicating the importance of by-eye inspection.
The overall trend going from Pipeline I to II to III is improved true negative rates with only modest decreases in true positive rates. This is strongest in the MEarth sample. Given our motivation to explore ages of stars in high eccentricity orbits (e.g. mostly old field
Figure 6: Literature \(P_{\rm rot}\) versus TESS \(P_{\rm rot}\) for Pipeline III for Pleiades (pink), Pisces–Eridanus (orange), Praesepe (blue), Hyades (green), and MEarth (purple). Pipeline III detections considered recovered are noted with black circles, and the detection limit of 13 days is noted with the vertical dashed line.
stars with some young interlopers), the MEarth sample is our closest proxy. Thus, going forward, we will choose Pipeline II or Pipeline III, which best improve our chances of eliminating false positives.
We further investigate the reasons behind the statistics of our pipeline. We show our literature versus TESS \(P_{\rm rot}\) plots for the five groups in Figure 6 for Pipeline III. In comparison to the detections found at \(<13\) days, very few detections are found for stars with longer periods (these are primarily in Praesepe and MEarth). In the case of Praesepe and the Hyades, we measure the half-period harmonics for a fair number of the longer rotators.
In Figure 7, we project these results into literature \(P_{\rm rot}\) space. These distributions extend to the length of a TESS sector, 27 days, to illustrate where our pipeline starts failing. For MEarth, we extend the histogram out to 100 days in log space. We highlight in grey the reliable \(P_{\rm rot}\) detection range. We show the TESS magnitude distributions in Figure 8, highlighting in grey the TESS-SPOC magnitude regime. The targets for our period search -- young G dwarfs -- are generally expected to fall within the grey regions. The near-perfect recovery of Pisces-Eridanus is expected, since both studies use TESS data. However, the method of light curve extraction and confirmation in Curtis et al. (2019) differs from ours, yet we measure the same \(P_{\rm rot}\). This validates the automated aspects of our pipeline, since all of the \(P_{\rm rot}\) measured in Curtis et al. (2019) were visually inspected, in contrast to our pipelines.
Figure 8 demonstrates that the lower period recovery rates for detections in the Pleiades and Praesepe are due to their distances. Because Praesepe has rotators beyond the 13 day limit, most of which are also faint, this further contributes to the failure to recover \(P_{\rm rot}\). Less than 60% of \(P_{\rm rot}\) are recovered beyond 12 days (Figure 7). The rotators beyond 13 days are mostly non-detections despite the true \(P_{\rm rot}\) being recovered in some cases (Figure 6). This is in contrast to the Hyades, which is close by. In the Hyades, rotators beyond 13 days are mostly detected with the true \(P_{\rm rot}\) recovered. Recovery rate decreases by 29% from 11-13 days, with the sharpest drop in recovery (\(\leq 60\%\)) at 15 days. The MEarth sample has a drop to \(<50\%\) recovery in the same 10-15 day range. Figures 7 and 8 illustrate the motivation for our 13 day limit (see also Appendix A).
## 5 Testing Pipeline II on Stars in High Eccentricity Orbits
Figure 7: Literature \(P_{\rm rot}\) distributions for Pipeline III detections for Pleiades (pink), Pisces–Eridanus (orange), Praesepe (blue), Hyades (green), and MEarth (purple). Recovered \(P_{\rm rot}\) are noted by the dashed black line, and the range we expect to detect with TESS (\(\leq 13\) days) is shaded in gray.
### Pipeline Performance
Given the promising results from our pipelines, we remove binaries and query TESS light curves for the vertical (8,834 stars) and horizontal (3,952 stars) slices of action space shown in Figure 2. After applying Pipeline II to the 2,887 and 1,198 likely single stars with TESS-SPOC PDC-SAP light curves for multiple sectors, we are left with 302 and 60 rotators, respectively. The difference in the number of rotators found in each slice is expected, as there are more stars in the vertical slice. This region also samples relatively lower radial action space, which young stars are more likely to populate.
We do an additional visual inspection following the quality flag guidelines outlined in Figure 5 and keep stars with Q = 0, 1. Post-inspection, we are left with 127 rotators out of 302 stars in the vertical slice and 17 rotators out of 60 stars in the horizontal slice, with three stars overlapping. This suggests a false positive rate of \(\approx 60-70\%\), which is significantly higher than the false positive rates in the MEarth sample, the sample that most resembles this one of mostly field stars. We attribute this to the fact that 40% of the stars in MEarth only had one sector of data and were therefore immediately flagged for by-eye inspection, increasing the true negative rates. While this implies that Pipeline II can be improved in the automatic vetting of multi-sector stars, Pipeline II has significantly reduced the number of stars needing by-eye inspection (4,085 to 359).
### Rotator Distribution in Color Space
We show the rotator distribution (filled stars are from the horizontal slice, unfilled from the vertical) in color-\(P_{\rm rot}\) space in Figure 9, with the benchmark clusters' \(P_{\rm rot}\) for reference. We correct the Gaia colors for extinction, which is minimal given the proximity of the stars. We determine the \(A_{v}\) from the calculated reddening in the Gaia bandpasses using Bayestar19 (Green et al., 2019). The rotation periods detected correspond to those typical of single stars from Pleiades age (120 Myr) to beyond the age of NGC 6811 (1 Gyr).
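A sketch of that dereddening step using the dustmaps package follows; the Gaia band coefficients shown are illustrative placeholders rather than the values we adopt, and the input arrays are assumed to exist.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord
from dustmaps.bayestar import BayestarQuery

# Requires a one-time `dustmaps.bayestar.fetch()` to download the map.
bayestar = BayestarQuery(version="bayestar2019")

def deredden(ra_deg, dec_deg, dist_pc, bp_rp, abs_g):
    """Extinction-correct Gaia color and magnitude with Bayestar19.
    The band coefficients are placeholders, not our adopted values."""
    coords = SkyCoord(ra=ra_deg * u.deg, dec=dec_deg * u.deg,
                      distance=dist_pc * u.pc, frame="icrs")
    e = bayestar(coords, mode="median")  # reddening, Bayestar19 units
    a_g, a_bp, a_rp = 2.74 * e, 3.37 * e, 2.06 * e  # placeholder coeffs
    return bp_rp - (a_bp - a_rp), abs_g - a_g
```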
While we have removed binaries, there is likely still binary contamination in our sample. \(P_{\rm rot}\) work with Kepler and K2 has shown that rapid rotators with \(P_{\rm rot}\)\(\leq 2\) days largely tend to be binary stars undergoing tidal interactions (Douglas et al., 2016; Simonian et al., 2019). Following Kounkel et al. (2022), we flag stars with \(P_{\rm rot}\)\(<\) 2 days, and we consider the most robust region for gyrochronology to be 2 \(<P_{\rm rot}\)\(<\) 13 days. While there are young G dwarfs (11-20% for 87-120
Figure 8: TESS magnitude distributions for Pipeline III detections for Pleiades (pink), Pisces–Eridanus (orange), Praesepe (blue), Hyades (green), and MEarth (purple). Recovered stars are noted by the dashed black line, and the magnitude range we expect to detect with TESS-SPOC (\(\leq\) 13.5 mag) is shaded in gray.
Myr) that can have \(<2\) day periods and be misclassified as binaries, these make up a very small portion of the entire sample (Rebull et al., 2018; Boyle and Bouma, 2023). This is also on the same timescale as convergence onto the main sequence, where gyrochronology may not yet be applicable (Bouma et al., 2023). Recall that 13 days was chosen as the upper limit to account for TESS's short observational baseline, to minimize misclassifying its orbital period systematics as astrophysical periodicity, and based on injection and recovery tests (Appendix A). These limits are shown as grey-dashed horizontal lines in Figure 9. We demarcate the G-dwarf regime with grey-dashed vertical lines, where gyrochronology will work the best since age-rotation relations are best calibrated for G dwarfs. There are 30 G dwarfs within these period and color limits that have Pipeline II \(P_{\rm rot}\) detections. Stars outside of this period and temperature regime may still be useful for indicating youth. For example, K dwarfs with \(P_{\rm rot}\) where spin-down has stalled still indicate youth, as do stars that have not yet converged to a gyrochronology track.
Though rapid rotation suggests youth, it is not a definitive indicator. We will use pre-existing or obtain new spectra to provide an additional check for youth with abundance and/or activity measurements. One such elemental abundance is lithium, [Li/Fe]. Lithium is one of the most sensitive elements, burning at fairly low temperatures within a star. As a low-mass star ages, its measured [Li/Fe] ratio decreases due to this burning (Sestito and Randich, 2005). Using spectra from groups of coeval stars of known ages, empirical Li-age relations have been derived (Stanford-Moore et al., 2020; Jeffries et al., 2023) and have been used to confirm stellar ages in combination with gyrochronology (e.g. Tofflemire et al., 2021; Newton et al., 2022).
Using our kinematics catalog, we do a preliminary investigation of the [Li/Fe] abundance measurements that may exist for our rotators. Only 15/141 unique stars had GALAH spectra. Five of the fifteen had reported Li abundances, and all were [Li/Fe] \(\,\leq-0.25\) dex, indicating they are on the order of several Gyr old (Carlos et al., 2016). This strongly motivates the need for additional confirmations of youth and binary screening in our future work.
### Rotator Distribution in Action Space
While some rotators in this sample will be identified as interloping old stars, we have found stars with rotation indicative of youth in highly eccentric orbits, as indicated by their radial actions. We show the distribution of rotators in action space in the left hand panel of Figures 10 and 11, color-coded by the stars' \(P_{\rm rot}\), as well as the benchmark clusters plotted for reference. In the right hand panel of Figures 10 and 11, we show the percent
Figure 9: Color–\(P_{\rm rot}\) distribution of likely single rotators for the horizontal section of action space (black, filled stars) and the vertical section of action space (unfilled stars) with benchmark clusters plotted for reference (120-Myr-old Pleiades in pink, 670-Myr-old Praesepe in blue, 1-Gyr-old NGC 6811 in yellow, 2.7-Gyr-old Ruprecht 147 in red). The most robust region for gyrochronology (G dwarfs with reliable \(P_{\rm rot}\) measurements) is highlighted in the white region.
fraction of rotators found in binned values of radial action (vertical slice) and angular momentum (horizontal slice) with error bars. These bins are indicated in dashed lines in the left hand panel insets. We do not make the cuts on \(P_{\rm rot}\) or color delineated in Figure 9, as we are not inferring ages for these stars just yet and are interested in examining the full rotator distribution.
In the horizontal slice, the distribution of rotators is flat within the errors. Visually, there is a peak at \(L_{z}\sim 1600,\sqrt{J_{R}}\approx 11\), which corresponds to part of one of the many overdensities in action space that we call wrinkles (discussed in section 1.2).
In the right hand panel of Figure 11, we see a significant peak at \(\sqrt{J_{R}}\sim 6\). This is a dynamically exciting region, which we will explore in depth in following papers in this series. There are older associations like Ruprecht 147 (2.7 Gyr) whose stars could be in these orbits, indicating that some of the rotators we have identified could be from older associations such as these. However, we also note the presence of \(>10\) rotators that follow the Pleiades-like color-\(P_{\rm rot}\) distribution in Figure 9, which is unexpected this high in radial action space. This general region also includes part of a wrinkle.
## 6 Conclusions & Future Directions
The combination of the Gaia and TESS data offers an unprecedented opportunity to understand the dynamical history of the Milky Way using stellar ages with gyrochronology and kinematics. We build a series of TESS \(P_{\rm rot}\) measurement and confirmation pipelines that we test on 1,560 stars with literature \(P_{\rm rot}\) from 5 groups: the Pleiades, Pisces-Eridanus, Praesepe, the Hyades, and the MEarth sample. On average, we find a recovery rate of 88%, a true positive rate of 91%, and a true
Figure 11: _Left:_ Rotator distribution in vertical slice of action space with benchmark clusters overplotted as shown in Figure 2. Bins from right-hand panel indicated by dashed horizontal lines in inset. _Right:_ Fraction of rotators to all stars found in bins of square root of radial action.
Figure 10: _Left:_ Rotator distribution in horizontal slice of action space with benchmark clusters overplotted as shown in Figure 2. Bins from right-hand panel indicated by dashed vertical lines in inset. _Right:_ Fraction of rotators to all stars found in bins of angular momentum in z-direction.
negative rate of 89%. We find that our pipelines predominantly fail when stars are distant and thus faint, like the cooler members of the Pleiades (136 pc) and Praesepe (186 pc).
Given the promising \(P_{\rm rot}\) pipeline results (Table 2), we apply our Pipeline II to a vertical and a horizontal slice in Galactic action space, which we calculate using kinematics from Gaia DR3 and radial velocities from GALAH and APOGEE when possible (Figure 2). Each slice contains 1-2 overdensities of stars in highly eccentric orbits that Trick et al. (2019) identified, which we call wrinkles. We find 141 unique rotators out of the 12,786 stars in these selected regions. Five stars have GALAH [Li/Fe] measurements that indicate they are interloping, several-Gyr-old stars (beyond the age where gyrochronology is applicable). When looking at the other rotators' Gaia color-\(P_{\rm rot}\) distributions, we see the stars' rotation periods suggest they are 120 Myr to 1 Gyr old (Figure 9). What is especially interesting is the rotator distribution at \(L_{z}\approx 1675\) and \(\sqrt{J_{R}}\approx 6\). This could be an area where older \(\approx 1\) Gyr associations live, but perhaps also 120-Myr-old stars that could have been dynamically heated to these eccentric orbits.
The identification of rapid rotation (\(P_{\rm rot}<13\) days) for stars in highly eccentric orbits motivates our future studies. We will apply our rotation pipelines to all of the stars in highly eccentric orbits (\(\sqrt{J_{R}}\geq 6\)). Once \(P_{\rm rot}\) are measured, we will age-date these stars using gyrochronology relations and confirm or refute their youth with spectroscopic observations. The confirmation of young stars at high radial actions will in turn allow us to understand the dynamics that caused their orbits to become highly eccentric.
We thank the anonymous referee for constructive comments that helped us improve the manuscript. We thank the THYME Collaboration, the Cool Stars 21 Conference, the Scialog conference, Graham Edwards, Alex Mule, Aylin Garcia Soto, Adrian Price-Whelan, and the Dartmouth graduate student community for helpful discussions. We thank the CCA for hosting KD's sabbatical and the group for in-person collaborative meetings. This work was funded by the NASA ADAP (21-ADAP21-0134). RR thanks the LSSTC Data Science Fellowship Program, which is funded by LSSTC, NSF Cybertraining Grant #1829740, the Brinson Foundation, and the Moore Foundation; her participation in the program has benefited this work. Funding for the TESS mission is provided by NASA's Science Mission directorate. This work has made use of data from the European Space Agency (ESA) mission Gaia ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the Gaia Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement. This work made use of the Third Data Release of the GALAH Survey (Buder et al., 2021).
The GALAH Survey is based on data acquired through the Australian Astronomical Observatory under programs: A/2013B/13 (The GALAH pilot survey); A/2014A/25, A/2015A/19, A2017A/18 (The GALAH survey phase 1); A2018A/18 (Open clusters with HERMES); A2019A/1 (Hierarchical star formation in Ori OB1); A2019A/15 (The GALAH survey phase 2); A/2015B/19, A/2016A/22, A/2016B/10, A/2017B/16, A/2018B/15 (The HERMES-TESS program); and A/2015A/3, A/2015B/1, A/2015B/19, A/2016A/22, A/2016B/12, A/2017A/14 (The HERMES K2-follow-up program). We acknowledge the traditional owners of the land on which the AAT stands, the Gamilaraay people, and pay our respects to elders past and present. This paper includes data that has been provided by AAO Data Central (datacentral.org.au). Funding for the Sloan Digital Sky Survey V has been provided by the Alfred P. Sloan Foundation, the Heising-Simons Foundation, the National Science Foundation, and the Participating Institutions. SDSS acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org. SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration, including the Carnegie Institution for Science, Chilean National Time Allocation Committee (CNTAC) ratified researchers, the Gotham Participation Group, Harvard University, Heidelberg University, The Johns Hopkins University, L'Ecole polytechnique federale de Lausanne (EPFL), Leibniz-Institut fur Astrophysik Potsdam (AIP), Max-Planck-Institut fur Astronomie (MPIA Heidelberg), Max-Planck-Institut fur Extraterrestrische Physik (MPE), Nanjing University, National Astronomical Observatories of China (NAOC), New Mexico State University, The Ohio State University, Pennsylvania State University, Smithsonian Astrophysical Observatory, Space Telescope Science Institute (STScI), the Stellar Astrophysics Participation Group, Universidad Nacional Autonoma de Mexico, University of Arizona, University of Colorado Boulder, University of Illinois at Urbana-Champaign, University of Toronto, University of Utah, University of Virginia, Yale University, and Yunnan University.
_Facilities:_ Gaia (Gaia Collaboration et al., 2016), GALAH (De Silva et al., 2015), APOGEE (Majewski et al., 2017), TESS (Ricker et al., 2015)
_Software:_ astropy (Astropy Collaboration et al., 2013, 2018), scipy (Virtanen et al., 2020), PyAstronomy (Czesla et al., 2019), lightkurve (Barentsen et al., 2021), glue (Beaumont et al., 2015), astroquery (Ginsburg et al., 2019), numpy (Harris et al., 2020), pandas (pandas development team, 2020; Wes McKinney, 2010), topcat (Taylor, 2005), galpy (Bovy, 2015), matplotlib (Hunter, 2007)
|
2307.08581 | BuboGPT: Enabling Visual Grounding in Multi-Modal LLMs | LLMs have demonstrated remarkable abilities at interacting with humans
through language, especially with the usage of instruction-following data.
Recent advancements in LLMs, such as MiniGPT-4, LLaVA, and X-LLM, further
enlarge their abilities by incorporating multi-modal inputs, including image,
video, and speech. Despite their effectiveness at generating precise and
detailed language understanding of the given modality signal, these LLMs give
up the ability to ground specific parts of inputs, thus only constructing a
coarse-grained mapping. However, explicit and informative correspondence
between text and other modalities will not only improve the user experience but
also help to expand the application scenario of multi-modal LLMs. Therefore, we
propose BuboGPT, a multi-modal LLM with visual grounding that can perform
cross-modal interaction between vision, audio and language, providing
fine-grained understanding of visual objects and other given modalities. As a
result, BuboGPT is able to point out the specific location of an object in the
image, when it is generating response or description for that object. Our
contributions are two-fold: 1) An off-the-shelf visual grounding module based
on SAM that extracts entities in a sentence and find corresponding masks in the
image. 2) A two-stage training scheme and instruction dataset to endow joint
text-image-audio understanding. Our experiments show that BuboGPT achieves
impressive multi-modality understanding and visual grounding abilities during
the interaction with human. It performs consistently well when provided by
arbitrary modality combinations (either aligned or unaligned). Our code, model
and dataset are available at https://bubo-gpt.github.io . | Yang Zhao, Zhijie Lin, Daquan Zhou, Zilong Huang, Jiashi Feng, Bingyi Kang | 2023-07-17T15:51:47Z | http://arxiv.org/abs/2307.08581v1 | # BuboGPT: Enabling Visual Grounding
###### Abstract
LLMs have demonstrated remarkable abilities at interacting with humans through language, especially with the usage of instruction-following data. Recent advancements in LLMs, such as MiniGPT-4, LLaVA, and X-LLM, further enlarge their abilities by incorporating multi-modal inputs, including image, video, and speech. Despite their effectiveness at generating precise and detailed language understanding of the given modality signal, these LLMs give up the ability to ground specific parts of inputs, thus only constructing a coarse-grained mapping. However, explicit and informative correspondence between text and other modalities will not only improve the user experience but also help to expand the application scenario of multi-modal LLMs. Therefore, we propose BuboGPT, a multi-modal LLM with visual grounding that can perform cross-modal interaction between vision, audio and language, providing fine-grained understanding of visual objects and other given modalities. As a result, BuboGPT is able to point out the specific location of an object in the image, when it is generating response or description for that object. Our contributions are two-fold: 1) An off-the-shelf visual grounding module based on SAM that extracts entities in a sentence and find corresponding masks in the image. 2) A two-stage training scheme and instruction dataset to endow joint text-image-audio understanding. Our experiments show that BuboGPT achieves impressive multi-modality understanding and visual grounding abilities during the interaction with human. It performs consistently well when provided by arbitrary modality combinations (either aligned or unaligned). Our code, model and dataset are available at [https://bubo-gpt.github.io](https://bubo-gpt.github.io).
## 1 Introduction
The large language models (LLMs) have made significant progress and demonstrated promising abilities in few-shot and zero-shot learning by leveraging instruct tuning [1] on carefully curated datasets. To harness the potential of LLMs beyond just language, some recent studies [2; 3; 4; 5; 6; 7; 8; 9; 10] successfully connect LLMs with more input signals (_e.g._, image, video, speech and audio), and build powerful multi-modal chatbots. However, these models often perform understanding without digging into the fine-grained relation between the visual objects and other given modalities. For example, when an illustrative figure is given, a visually-enhanced LLM will generate a high-quality description with rich details, but in a black-box manner. Instead, an instructive teacher-bot is going to show its audience which part of the figure it is referring to and what is happening there. Such visual grounding abilities are intriguing to LLMs but previously under-explored in the literature.
In this paper, we propose BuboGPT, the first attempt to incorporate visual grounding into LLMs by relating visual objects with other modalities. Moreover, it is able to perform joint multi-modal understanding and chatting for text, vision and audio, which is achieved by learning a shared representation space that aligns well with pre-trained LLMs.
We also build an off-the-shelf visual grounding pipeline to explore the fine-grained relation between different visual objects and modalities. The pipeline is composed of three modules, namely, a _tagging module_, a _grounding module_ and an _entity-matching module_. The tagging module is a pre-trained model [12] that can generate multiple text tags/labels that are relevant to the input image. The SAM-based [11] grounding module [13] further localizes the semantic mask or box on the image for each tag/label. Then, the entity-matching module leverages the reasoning capabilities of LLMs to retrieve matched entities from tags and image descriptions. In this way, we connect visual objects and other modalities by using language as a bridge.
Then, to unlock the multi-modal understanding ability for arbitrarily combined inputs, we employ a two-stage training scheme similar to MiniGPT-4 [2]. More specifically, we use ImageBind [14] as the audio encoder, BLIP-2 [15] as the vision encoder and Vicuna [16] as the LLM. In the first stage, we learn a Q-Former to align vision or audio features with language on image or audio caption datasets respectively. In the second stage, we perform multi-modal instruct tuning on a high-quality instruction-following dataset. We observe that the construction of this dataset is crucial for the LLM to recognize whether a modality is provided and whether the input modalities are well matched with each other. Therefore, we devise a novel high-quality dataset, which is composed of four subsets: 1) a vision instruction dataset; 2) an audio instruction dataset; 3) a sound localization dataset with positively paired image-audio examples; 4) an image-audio captioning dataset with negative pairs. Note that by introducing negative image-audio pairs for semantic reasoning, BuboGPT learns better multi-modal alignment and demonstrates stronger capabilities of joint understanding.
Our experiments show that BuboGPT achieves impressive visual grounding abilities during multi-modal chat, even when arbitrary combinations of multi-modal inputs are provided, whether matched or unmatched. We summarize our key contributions as follows:
* We build a multi-modal LLM, BuboGPT, for multi-modal understanding including image, audio and text by learning a common semantic space, and further explore the fine-grained relation between different visual objects and different modalities.
* We construct a high-quality multi-modal instruction-tuning dataset including fine-grained audio descriptions and cross-modal sound localization, and introduce both positive and negative image-audio pairs for semantic matching to facilitate cross-modal understanding.
## 2 Related Work
**Pre-trained LLMs in Multi-modal Learning.** Due to the scaling up of training data and model size, large language models [17; 18; 19; 16] have demonstrated remarkable abilities across various linguistic tasks in a few-shot and zero-shot manner and also enabled conversational communication with humans. To leverage the powerful linguistic abilities of LLMs, some methods [20; 21] propose to connect different foundation models for multi-modal tasks by using LLMs as a dispatch scheduler.
Based on high-quality multi-modal instruction-following data, recent end-to-end methods [2; 3; 4; 5; 6; 7; 8; 9; 10] have been introduced to extend LLMs for multi-modal learning as well. Some works such as MiniGPT-4 [2], X-LLM [3] and Video-ChatGPT [10] propose to align the input features of different modalities with pre-trained LLMs via learned encoders. Other works such as LLaMA-Adapter [5] and Otter [7] insert learnable cross-attention layers into the pre-trained LLMs
Figure 1: The overall framework of BuboGPT.
to incorporate multi-modal knowledge. These prior methods mainly focus on tackling visual inputs (e.g. videos and images) [2; 5; 6; 4; 9; 7] or ignore the fine-grained relation between the visual objects and other given modalities [8; 3]. We further attempt to incorporate visual grounding into LLMs by relating visual objects with other modalities and propose to learn multi-modal alignment including image, audio and text in a common space.
**Multi-modal Instruction Tuning Dataset.** To explore instruction tuning for multi-modal learning, [22] first introduces a multi-modal instruction tuning benchmark that is composed of 62 diverse multi-modal tasks in a unified seq-to-seq format. MiniGPT-4 [2] curates an instruction-following dataset by combining Conceptual Captions [23; 24], SBU [25] and LAION [26] with hand-designed prompts, while LLaVA [6] proposes to use GPT-4 [17] to generate more detailed captions to expand the COCO dataset [27]. Otter [7] further builds a multi-modal in-context tuning dataset to facilitate the in-context learning capabilities of multi-modal LLMs. We further build a high-quality instruction tuning dataset including fine-grained audio descriptions and introduce negative image-audio pairs for semantic reasoning to enhance the reasoning capabilities of our model.
## 3 Methods
The overall framework of BuboGPT is presented in Figure 1. As the figure shows, BuboGPT performs joint multi-modal understanding and chatting for text, vision and audio, which is achieved by learning a shared representation space that aligns well with the pre-trained Vicuna [16]. We also build an off-the-shelf visual grounding pipeline to explore the fine-grained relation between different visual objects and modalities.
### Visual Grounding Pipeline
To explore the relation between different visual objects and input modalities, we build a visual grounding pipeline composed of a tagging module, a grounding module and an entity-matching module, as shown in Figure 2. Concretely, for a given image, we first use the Recognize Anything Model (RAM) [12], a strong Swin-transformer-based [28] image tagging model, to generate relevant candidate tags, denoted as \(\{t_{1},t_{2},...,t_{n_{t}}\}\), where \(t_{i}\) is the \(i\)-th semantic tag and \(n_{t}\) is the number of detected tags. We then join the tags with commas to form the prompt "\(t_{1},t_{2},...,t_{n_{t}}\)" and use Grounding DINO [13], an open-set object detection model driven by referring textual queries, to identify the visual entities and the corresponding boxes relevant to the tags. The boxes are then taken as prompts by the Segment Anything Model (SAM) [11] to obtain fine-grained semantic masks.
With the tagging and grounding modules, we then obtain all the visual entities and the corresponding grounding information, denoted as \(\{(e_{1},\mathbf{g_{1}}),(e_{2},\mathbf{g_{2}}),...,(e_{n_{e}},\mathbf{g_{n_{e}}})\}\), where \(e_{i}\) and \(\mathbf{g_{i}}\) are the \(i\)-th visual entity and its grounding information (i.e. boxes and masks), and \(n_{e}\) is the number of entities. To model the relation between different visual entities and input modalities, we employ the text output \(\mathbf{t}_{o}\) of our multi-modal LLM as the bridge and build an entity-matching module based on GPT-4 to retrieve the matching pairs. We construct the prompt template "_<List>\(e_{1},e_{2},...,e_{n_{e}}\)</List>, <Text>\(\mathbf{t}_{o}\)</Text>_" and utilize the powerful LLM to reason over it and retrieve the matching pairs, which reflect the relation between visual entities and input modalities.
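The pipeline can be summarised in a few lines of pseudo-Python. This is a minimal sketch: `run_ram`, `run_grounding_dino`, `run_sam` and `call_gpt4` are hypothetical wrapper callables standing in for the actual model interfaces, which we do not reproduce here.

```python
def ground_image(image, llm_text_output,
                 run_ram, run_grounding_dino, run_sam, call_gpt4):
    """Sketch of the three-module pipeline. The four callables are
    hypothetical wrappers around RAM, Grounding DINO, SAM and GPT-4;
    they are not the real APIs of those projects."""
    # 1) Tagging: RAM produces candidate semantic tags for the image.
    tags = run_ram(image)                      # e.g. ["swan", "water", "beak"]

    # 2) Grounding: the comma-joined tags form the referring text query for
    #    Grounding DINO; SAM then refines each returned box into a mask.
    entities, boxes = run_grounding_dino(image, ", ".join(tags))
    masks = [run_sam(image, box) for box in boxes]

    # 3) Entity matching: GPT-4 reasons over the template to decide which
    #    detected entities the LLM's text output actually refers to.
    prompt = f"<List>{', '.join(entities)}</List>, <Text>{llm_text_output}</Text>"
    matched = set(call_gpt4(prompt))           # assumed to return entity names

    return [(e, b, m) for e, b, m in zip(entities, boxes, masks) if e in matched]
```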
### Multi-Modal LLM Training
BuboGPT considers the interaction between three modalities, _i.e._, text, vision and audio. It aligns a vision encoder and an audio encoder with the LLM via a Q-Former for each modality. More specifically, we utilize the visual encoder together with the pre-trained Q-Former in BLIP-2 [15] and the audio encoder in ImageBind [14] for visual and audio perception. For joint understanding over multiple modalities, we employ Vicuna as the LLM. We use a linear projection layer to connect the modality Q-Former with the LLM. To effectively train such a model, we develop the following two-stage training scheme. The modality encoders and the Vicuna model will be fixed throughout the training procedure.
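The alignment path from a modality encoder into the LLM's embedding space can be sketched as follows; the module interfaces and dimensions here are placeholders rather than the released BuboGPT code.

```python
import torch
import torch.nn as nn

class ModalityAligner(nn.Module):
    """Illustrative sketch: frozen modality encoder -> Q-Former -> linear
    projection into the LLM word-embedding space. Module interfaces and
    dimensions are placeholder assumptions."""

    def __init__(self, encoder, qformer, enc_dim=768, llm_dim=4096,
                 train_qformer=True):
        super().__init__()
        self.encoder = encoder                 # frozen (vision or audio)
        for p in self.encoder.parameters():
            p.requires_grad = False
        self.qformer = qformer                 # frozen for vision, trained for audio
        for p in self.qformer.parameters():
            p.requires_grad = train_qformer
        self.proj = nn.Linear(enc_dim, llm_dim)  # always trained

    def forward(self, x):
        with torch.no_grad():
            feats = self.encoder(x)            # (batch, seq, enc_dim)
        queries = self.qformer(feats)          # (batch, n_query, enc_dim)
        return self.proj(queries)              # tokens in LLM embedding space
```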
**Stage 1: Single-modal Pre-training.** Similar to MiniGPT-4 [2], the first stage is designed to align the output of the linear projection layer to the word embedding space of the LLM. This is achieved by training the modality Q-Former and linear projection layer on a large number of modality-text paired data. For visual perception, we only train the projection layer for image captioning with the Q-Former from BLIP-2 fixed. For audio understanding, we jointly train the Q-Former and the projection layer
for audio captioning. No prompt is used in either setting; the model simply takes the corresponding image or audio as input and predicts the corresponding caption.
**Stage 2: Multi-Modal Instruct Tuning.** This stage aims to equip the multi-modal LLM with the ability to understand human instructions such that it can generate proper responses based on the given modality signal. To this end, we curate a high-quality multi-modal instruction-following dataset, which contains image-text, audio-text and image-audio-text pairs. To make the model adapt to arbitrary combinations of input modalities, we design a general prompt as: _###Human: <Vision><ModalityHere></Vision> <Audio><ModalityHere></Audio> <instruction> ###Assistant:_, where _<Vision></Vision>_ and _<Audio></Audio>_ are special identifiers for image and audio input, _<ModalityHere>_ is replaced by a sequence of image or audio tokens before feeding into the LLM, and _<instruction>_ is the human instruction related to the input signals for the LLM to assist on. We list a few examples for different combinations of input modalities in Tab. 1. We empirically observed that when only positively paired image-audio data are included in this stage, the model always assumes the image and audio are related to each other even when random samples are used at test time. Therefore, we manually create some negative pairs and ask the LLM to describe what each input is separately. The experiments show that introducing such negatively paired data overcomes this problem significantly. We describe the creation of these datasets in the next section.
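A sketch of assembling this template at the string level (illustrative only; in the model the _<ModalityHere>_ slots are embedding sequences spliced into the input, not literal text):

```python
def build_prompt(instruction, image_tokens=None, audio_tokens=None):
    """Assemble the general instruction template above (string-level sketch)."""
    parts = ["###Human:"]
    if image_tokens is not None:
        parts.append(f"<Vision>{image_tokens}</Vision>")
    if audio_tokens is not None:
        parts.append(f"<Audio>{audio_tokens}</Audio>")
    parts.append(instruction)
    parts.append("###Assistant:")
    return " ".join(parts)

# e.g. build_prompt("What is the image?", image_tokens="<ModalityHere>")
```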
## 4 Datasets
### Pretraining Datasets
Following MiniGPT-4 [2], we use a combined dataset of CC3M [23], CC12M [24], SBU [29] and LAION [26] to train the visual projection layer, resulting in a total of 130 million image-text pairs. For audio, we mainly use the WavCaps [30] dataset, which contains 403,050 audio clips with an average duration of 67.59 seconds and an average caption length of 7.8 words. It combines four
Table 1: Instruction-following prompt examples for various input sources.

* _###Human: <Vision><ModalityHere></Vision> What is the image? ###Assistant:_
* _###Human: <Audio><ModalityHere></Audio> Pay attention to the audio and describe what you notice. ###Assistant:_
* _###Human: <Vision><ModalityHere></Vision> <Audio><ModalityHere></Audio> Please find the source that emits the given sound in this image. ###Assistant:_
* _###Human: <Vision><ModalityHere></Vision> <Audio><ModalityHere></Audio> Are the audio and image related to each other? What are they? ###Assistant:_
Figure 2: The pipeline of visual grounding, composed of a tagging module, a grounding module and an entity-matching module.
datasets including FreeSound (262,300) [31], BBC Sound Effects (31,201)1, SoundBible (1,231)2 and the AudioSet strongly-labelled subset (108,317)3, and transforms their raw descriptions into captions with ChatGPT.
Footnote 1: [https://sound-effects.bbcrewind.co.uk/](https://sound-effects.bbcrewind.co.uk/)
Footnote 2: [https://soundbible.com/](https://soundbible.com/)
Footnote 3: [https://research.google.com/audioset/download_strong.html](https://research.google.com/audioset/download_strong.html)
### Instruction-Tuning Datasets
#### 4.2.1 Image-Text Dataset
We employ two previously published datasets for visual instruct tuning. The first one, released by MiniGPT-4, contains 3,439 high-quality text-image pairs. The second one, provided by LLaVA [6], is curated from 158K samples based on the COCO dataset, including three types of instructions, _i.e._, conversations (58K), detailed description (23K) and complex reasoning (77K).
#### 4.2.2 Audio-Text Dataset
For audio understanding, we likewise need to conduct instruction tuning on the audio Q-Former. However, unlike vision-language understanding, this field still lacks high-quality and well-organized instruction-tuning datasets. To this end, we generate a series of expressive and descriptive data to facilitate this process.
Specifically, we first investigate different kinds of existing audio caption datasets and select Clotho [32] as the base dataset for description extension. The reasons are two-fold. On the one hand, it has a moderate and acceptable scale to act as an instruction-tuning dataset, and the semantic range of the audio is large enough. On the other hand, every audio has five short captions from different annotators, covering various possible scenes related to the audio and increasing the diversity of descriptions.
After obtaining the original data, we need to rewrite the short captions into descriptive and imaginative paragraphs. Considering the extraordinary ability of GPT-4 in the fields of few-shot learning, text generation, and complex reasoning, we utilize it to automatically assemble short captions into long descriptions, mitigating the reliance on human annotation. The final description is expected to cover all the related original captions. For example, given the captions _"A person is turning a map over and over."_, _"A person is very carefully wrapping a gift for someone else."_, _"A person is very carefully wrapping a gift for someone else."_, _"He sighed as he turned the pages of the book, stopping to scan the information."_, _"Papers are being turned, stopped, then turned again, and someone is breathing."_, the description paragraph is expected to be _"A person is repeatedly flipping some papers. They might be reading a book, flipping through a map, or wrapping presents. Judging from the repeated flipping sounds, they are concentrating on repeating this action."_. We design a task-related prompt and construct a few such few-shot examples to promote the in-context reasoning process, as sketched below. As a result, we collect a novel dataset _Clotho-Detail_4 for instruction tuning in audio understanding, which contains 3,938 items with an average description length of 52.70 words.
Footnote 4: [https://huggingface.co/datasets/magic/BuboGPT/blob/main/Clotho-detail-annotation.json](https://huggingface.co/datasets/magic/BuboGPT/blob/main/Clotho-detail-annotation.json)
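A minimal sketch of this caption-expansion step; the prompt wording and the `call_gpt4` callable are illustrative assumptions, not the exact prompt used for Clotho-Detail:

```python
FEW_SHOT = [
    # (short_captions, long_description) pairs, hand-written as in-context
    # examples; contents elided here.
]

def expand_captions(captions, call_gpt4):
    """Sketch of the few-shot prompt that rewrites five short Clotho
    captions into one descriptive, imaginative paragraph."""
    lines = ["Combine the following audio captions into one descriptive, "
             "imaginative paragraph that covers all of them."]
    for caps, desc in FEW_SHOT:
        lines.append("Captions: " + " | ".join(caps))
        lines.append("Description: " + desc)
    lines.append("Captions: " + " | ".join(captions))
    lines.append("Description:")
    return call_gpt4("\n".join(lines))
```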
#### 4.2.3 Audio-Image-Text Dataset
**Positive Set** In order to further empower our model with the comprehensive ability of multi-modal reasoning, we use a set of audio-image pairs to help the model understand the correspondence between audio and its source. Among the existing audio-vision datasets, VGGSS [33] turns out to be the most suitable choice for this process: it covers a wide range of sounding objects, and each audio relates only to a specific region in the corresponding image. Therefore, we retrieve all the data cases and use a group of fixed templates to wrap the corresponding class labels into natural sentence descriptions. As a result, we generate a total of 5,158 _<audio, image, text>_ pairs to act as the triple-modality instruction tuning dataset 5.
**Negative Set** As discussed in the method section (Sec. 3.2), relying solely on the above dataset causes the LLM to fail to recognize irrelevant audio-image pairs. Therefore, we construct negative _<audio, image, text>_ pairs such that _<text>_ gives independent descriptions for the audio and image inputs. The audio is randomly sampled from the audio-text dataset presented in Sec. 4.2.2, while the image is randomly sampled from the MiniGPT-4 dataset discussed in Sec. 4.2.1. The text is constructed by concatenating two captions that start with "The image" and "The audio".
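A sketch of this construction; the field names and the assumption that the captions already begin with "The image ..." and "The audio ..." are ours, not the released data schema:

```python
import random

def make_negative_pair(audio_items, image_items, rng=random):
    """Build one negative <audio, image, text> triple (sketch)."""
    a = rng.choice(audio_items)   # from the audio-text data of Sec. 4.2.2
    i = rng.choice(image_items)   # from the MiniGPT-4 data of Sec. 4.2.1
    # Captions are assumed to already begin with "The image ..." and
    # "The audio ...", so the target text simply concatenates them.
    text = i["caption"].strip() + " " + a["caption"].strip()
    return {"audio": a["audio"], "image": i["image"], "text": text}
```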
## 5 Experiment Results
In this section, we aim to answer the following two questions: 1) whether our BuboGPT is able to provide accurate and instructive visual grounding when the inputs contain images; and 2) whether the model is able to perceive arbitrary combinations of modalities and generate proper responses.
We first consider using a single image as input for **fine-grained visual understanding with grounding**. As shown in Fig. 3-7, the model can accurately associate textual words or phrases with image regions in various scenarios of different complexities. Then, when a single audio clip is provided for **audio understanding**, BuboGPT gives informative descriptions covering nearly all acoustic parts included, even when some audio fragments are too short for humans to notice; see Fig. 8-13 for details. Next, we show that the model can perform sound localization when a matched audio-image pair is provided, which gives a perfect example of **aligned audio-image understanding**. As illustrated in Fig. 14-17, the model generates an overall description for both the input image and audio, then points out which object in the image emits the sound after reasoning. It is worth noting that our model gives correct predictions when we provide different audio and keep the image unchanged. This demonstrates that our model understands both modalities comprehensively rather than generating answers with prior bias from a single modality. Moreover, we empirically observed that if the model is only tuned with well-aligned image-audio data, it fails to discriminate when an irrelevant image and audio pair is provided, resulting in a non-factual response that is not consistent with the given image or audio (Fig. 20). After introducing the negatively paired dataset as discussed in Sec. 4.2.3, the model can tell whether the image and audio are relevant to each other and generate a high-quality response for **arbitrary audio-image understanding**, as evidenced by Fig. 18-19.
## 6 Conclusion
In this report, we propose a multi-modal LLM, BuboGPT, which is capable of joint multi-modal understanding across image, audio and text, and performs fine-grained understanding of multi-modal inputs by exploring the relation between different visual objects and modalities. We also build a high-quality instruction-tuning dataset, and the experiments show that BuboGPT achieves impressive visual grounding abilities during multi-modal chat, even when arbitrary combinations of multi-modal inputs are provided, whether matched or unmatched.
## 7 Limitations
**Language hallucination.** Following prior works, our method is based on the pre-trained Vicuna model, which inherits the limitations of LLMs including generating non-existent knowledge or non-factual information. The problem might be resolved by training with more high-quality data and developing trustworthy LLMs.
**Inadequate capacities of Grounding QA.** Since the connection between grounding results and different modalities is built upon text conversations without extra training, the capacities of QA on specific objects remain limited. The model can be improved by introducing fine-grained visual grounding datasets and considering the spatial location as extra input.
The image shows _two_ black _swans_ swimming in the _water_ and forming a heart shape with their _beak_. The swans appear to be engaging in a playful or affectionate gesture as they move around each other gracefully, displaying their elegant black feathers and white beaks. It captures a moment of harmony and beauty in the natural world, where these creatures appear to be enjoying each other's company.
Figure 4: Example 2 of fine-grained visual understanding with grounding.
The image shows a _small kitten_ in a white and brown color pattern laying in the bottom of a tan and brown leather purse or small _tote bag_ on a _wooden table_. The kitten appears to be curled up and comfortably nestled in the bag. The bag itself has a _strap_ on the side that allows the bag to be held and carried, likely making it an attractive and convenient option for the cat to nap and rest.
Figure 5: Example 3 of fine-grained visual understanding with grounding.
The image shows a _woman_ with a black fur and white hair sitting on the _grass_, holding the back of a _chimpanzee_. The woman is wearing a white shirt and is smiling. The chimpanzee is reaching its _mouth_ towards the woman's mouth and the woman is making a kissing sound with her mouth open.
Figure 6: Example 4 of fine-grained visual understanding with grounding.
Figure 7: Example 5 of fine-grained visual understanding with grounding.
Figure 8: Example 1 of audio understanding. Audio 1 can be accessed via this link.
Figure 9: Example 2 of audio understanding. Audio 2 can be accessed via this link.
Figure 10: Example 3 of audio understanding. Audio 3 can be accessed via this link.
Figure 11: Example 4 of audio understanding. Audio 4 can be accessed via this link.
Figure 12: Example 5 of audio understanding. Audio 5 can be accessed via this link.
Figure 13: Example 6 of audio understanding. Audio 6 can be accessed via this link.
Figure 14: Example 1 of aligned audio-image understanding. Audio 7 can be accessed via this link.
Figure 15: Example 2 of aligned audio-image understanding. Audio 8 can be accessed via this link.
Figure 16: Example 3 of aligned audio-image understanding. Audio 9 can be accessed via this link.
Figure 17: Example 4 of aligned audio-image understanding. Audio 10 can be accessed via this link.
In this audio, a clock strikes hour after hour, creating a haunting and mournful sound. The sound echoes through the forest, wafting through the trees and into the distance. The image depicts a _mink_ perched on a mossy _tree trunk_. The mink stares intently into the distance, possibly listening to the toll of the clock, representing the passing of time. This atmosphere might suggest a sense of solitude, introspection, or reminiscence.
Figure 18: Example 1 of arbitrary audio-image understanding. Audio 11 can be accessed via this link.
Figure 19: Example 2 of arbitrary audio-image understanding. Audio 12 can be accessed via this link.
In the image, we see a small brown squirrel perched on a branch of a tree in the forest. _The sound of wind rustling through the leaves provides the background ambiance._ Suddenly, _a bird starts chirping and singing a melody. The chirping gradually becomes louder and more persistent,_ creating a harmonious and peaceful atmosphere. As the bird sings, the squirrel starts moving around on the branch. Its ears perk up, and its tail is alert. Then, the squirrel suddenly stops and listens intently, its eyes focused on the bird's melody. _The bird's song reaches its peak and then gradually fades away, leaving the forest silent._ The squirrel remains on the branch, looking out into the woods, as if contemplating the beauty of nature and the solitude of its surroundings. The contrast between the peaceful bird song and the serene silence of the forest creates a captivating scene. This combination of the image and audio represents a moment of stillness and awe as the squirrel and bird share the beauty of their respective experiences in the forest.
Figure 20: Failure case of arbitrary audio-image understanding without using negative audio-image pairs. |
2308.07037 | Bayesian Flow Networks | This paper introduces Bayesian Flow Networks (BFNs), a new class of
generative model in which the parameters of a set of independent distributions
are modified with Bayesian inference in the light of noisy data samples, then
passed as input to a neural network that outputs a second, interdependent
distribution. Starting from a simple prior and iteratively updating the two
distributions yields a generative procedure similar to the reverse process of
diffusion models; however it is conceptually simpler in that no forward process
is required. Discrete and continuous-time loss functions are derived for
continuous, discretised and discrete data, along with sample generation
procedures. Notably, the network inputs for discrete data lie on the
probability simplex, and are therefore natively differentiable, paving the way
for gradient-based sample guidance and few-step generation in discrete domains
such as language modelling. The loss function directly optimises data
compression and places no restrictions on the network architecture. In our
experiments BFNs achieve competitive log-likelihoods for image modelling on
dynamically binarized MNIST and CIFAR-10, and outperform all known discrete
diffusion models on the text8 character-level language modelling task. | Alex Graves, Rupesh Kumar Srivastava, Timothy Atkinson, Faustino Gomez | 2023-08-14T09:56:35Z | http://arxiv.org/abs/2308.07037v5 | # Bayesian Flow Networks
###### Abstract
This paper introduces _Bayesian Flow Networks_ (BFNs), a new class of generative model in which the parameters of a set of independent distributions are modified with Bayesian inference in the light of noisy data samples, then passed as input to a neural network that outputs a second, interdependent distribution. Starting from a simple prior and iteratively updating the two distributions yields a generative procedure similar to the reverse process of diffusion models; however it is conceptually simpler in that no forward process is required. Discrete and continuous-time loss functions are derived for continuous, discretised and discrete data, along with sample generation procedures. Notably, the network inputs for discrete data lie on the probability simplex, and are therefore natively differentiable, paving the way for gradient-based sample guidance and few-step generation in discrete domains such as language modelling. The loss function directly optimises data compression and places no restrictions on the network architecture. In our experiments BFNs achieve competitive log-likelihoods for image modelling on dynamically binarized MNIST and CIFAR-10, and outperform all known discrete diffusion models on the text8 character-level language modelling task1.
Footnote 1: Code and trained models can be found at [https://github.com/nnaisense/bayesian-flow-networks](https://github.com/nnaisense/bayesian-flow-networks)
## 1 Introduction
Large-scale neural networks have revolutionised generative modelling over the last few years, with an unprecedented ability to capture complex relationships among many variables. Building a convincing joint model of all the pixels in a high resolution image, for example, was impossible before the advent of modern generative networks.
Key to the expressive power of most of these networks -- including autoregressive models e.g. [97], flow-based models [32], deep VAEs [48] and diffusion models [41] -- is that the joint distribution they encode is broken down into a series of steps, thereby eluding the "curse of dimensionality" that would doom any effort to explicitly define all the interactions among so many variables. In colloquial terms they solve a hard problem by splitting it into easy pieces.
A general way to view such distributions is as an exchange of messages between a sender, Alice, who has access to some data, and her friend Bob, who wishes to receive it in as few bits as possible. At each step Alice sends a message to Bob that reveals something about the data. Bob attempts to guess what the message is: the better his guess the fewer bits are needed to transmit it. After receiving the message, Bob uses the information he has just gained to improve his guess for the next message. The loss function is the total number of bits required for all the messages.
In an autoregressive language model, for example, the messages are the word-pieces the text is divided into. The distribution encoding Bob's prediction for the first message is of necessity uninformed: a zero-gram prior based on the relative frequencies of different word-pieces. The transmission cost is the negative log-probability under this prior. Bob then uses the first word-piece to predict the second; on average, the second prediction will be slightly more informed than the first, and the expected transmission cost will be slightly lower. The process repeats with the predictions improving at each step. The sum of the transmission costs is the negative log-probability of the complete text sequence, which is the loss function minimised by maximum likelihood training. It is also the minimum number of bits that would be required for Alice to transmit the pieces to Bob using arithmetic coding [51]. There is therefore a direct correspondence between fitting an autoregressive model with maximum likelihood and training it for data compression.
Autoregressive networks are currently state-of-the-art for language modelling [29], and in general perform well on discrete data where a natural ordering exists. However they have proved less effective in domains such as image generation, where the data is continuous and no natural order exists among variables (e.g. there is no reason to generate one pixel before another). They also have the drawback that generating samples requires as many network updates as there are variables in the data.
Diffusion models are an alternative framework that has proved particularly effective for image generation [5, 34]. In this case the transmission procedure is a little more complex2. Each message Bob receives is a noisy version of the message before, where the noise is designed so that in expectation the messages approach the data. The transmission cost at each step is the Kullback-Leibler divergence between the distribution from which Alice draws the message and Bob's prediction of that distribution (which is a reparameterisation of his prediction of the data, and which is therefore improved by the information he gained from the previous message). The sum of the KL divergences is the _evidence lower bound_ minimised by diffusion training [41]; it is also the expected number of bits needed to transmit the data using an efficient bits-back coding scheme [7, 11]. Once again there is an exact equivalence between the loss function used to train the model and the model's ability to compress data, as elucidated by previous authors [46].
Footnote 2: We are here describing the reverse process of diffusion models.
We posit that the superiority of diffusion over autoregression for image generation lies in the way diffusion progresses from coarse to fine image details as the level of noise decreases -- a more natural way to construct an image than one dot at a time. However diffusion has yet to match autoregression for discrete data, which is unfortunate, as diffusion models have the advantage of decoupling the number of generation steps from the number of variables. A fundamental challenge is that when the data is discrete, the noise in the diffusion process is also discrete, and therefore discontinuous. To return to the transmission metaphor, if the data is a piece of text, then Bob begins the process with a totally garbled text, every symbol of which is either randomly altered or left unchanged by each of Alice's messages. A key motivation for this work was our belief that a fully continuous transmission process -- where Alice's messages smoothly alter Bob's beliefs -- would be more effective for discrete data. Moreover this should open the door to gradient-based sample guidance [5] and few-step generation techniques [37, 43, 50], similar to those that have been developed for continuous diffusion.
_Bayesian Flow Networks_ (BFNs), the model introduced in this paper, differ from diffusion models in that the network operates on the parameters of a data distribution, rather than on a noisy version of the data itself. This ensures that the generative process is fully continuous and differentiable,
even when the data is discrete. BFNs can be summarised by the following transmission scheme (Figure 1). Bob has an "input distribution" which is initially a simple prior: a standard normal for continuous data, a uniform categorical for discrete data. At each transmission step he feeds the parameters of the input distribution (e.g. the mean of a normal distribution, the probabilities of a categorical distribution) into a neural network. The network outputs the parameters of a second distribution referred to as the "output distribution". Alice then creates a "sender distribution" by adding noise to the data according to a predefined schedule, and Bob creates a "receiver distribution" by convolving the output distribution with the same noise distribution used by Alice: intuitively, for every value the data could take on, Bob constructs the sender distribution Alice would have used if
Figure 1: **System Overview**. The figure represents one step of the modelling process of a Bayesian Flow Network. The data in this example is a ternary symbol sequence, of which the first two variables (‘B’ and ‘A’) are shown. At each step the network emits the parameters of the output distribution based on the parameters of the previous input distribution. The sender and receiver distributions (both of which are continuous, even when the data is discrete) are created by adding random noise to the data and the output distribution respectively. A sample from the sender distribution is then used to update the parameters of the input distribution, following the rules of Bayesian inference. Conceptually, this is the message sent by Alice to Bob, and its contribution to the loss function is the KL divergence from the receiver to the sender distribution.
that value was correct, then sums over all these hypothetical sender distributions, weighted by the probability of the corresponding value under the output distribution. Alice picks a sample from the sender distribution and sends it to Bob at a cost equal to the KL divergence from receiver to sender. Bob then uses the sample to update his input distribution, following the rules of Bayesian inference. Usefully, the Bayesian updates are available in closed-form as long as the input distribution models all the variables in the data independently. Once the update is complete, Bob again feeds the parameters of the input distribution to the network which returns the parameters of the output distribution. The process repeats for \(n\) steps, at which point Bob can predict the data accurately enough that Alice can send it to him without any noise.
Note the key difference between the input and output distributions: the input distribution receives information about each variable in the data independently (via the Bayesian updates), and is therefore unable to exploit contextual information, such as neighbouring pixels in an image or related words in a text; the output distribution, on the other hand, is produced by a neural network that jointly processes all the parameters in the input distribution, giving it access to all available context. Intuitively, the combination of the input and output distributions represents a division of labour between Bayesian inference and deep learning that plays to both of their strengths: the former provides a mathematically optimal and finely controllable way to collect and summarise information about individual variables, while the latter excels at integrating information over many interrelated variables.
The above transmission process defines an \(n\)-step loss function that can be generalised to continuous time by sending \(n\) to \(\infty\). In continuous time the Bayesian updates become a _Bayesian flow_ of information from the data to the network. As well as removing the need to predefine the number of steps during training, the continuous-time loss function is mathematically simpler and easier to compute than the discrete-time loss. A BFN trained with continuous-time loss can be run for any number of discrete steps during inference and sampling, with performance improving as the number of steps increases.
The rest of the paper is structured as follows. A short summary of related work is given in Section 2. The basic framework of BFNs, along with a general derivation of the discrete and continuous time loss functions is provided in Section 3. Specialisations of the framework to continuous, discretised and discrete data are provided in Sections 4-6, along with pseudocode for training, evaluating and sampling from the network. Experimental results on the CIFAR-10, dynamically binarized MNIST and text8 datasets are provided in Section 7 and concluding remarks are given in Section 8.
## 2 Related Work
Of existing methods, Bayesian Flow Networks are most closely related to diffusion models. However the two differ in some crucial aspects. Most obviously BFNs embody a function from one distribution to another -- rather than from data to a distribution, like diffusion models and most other probabilistic networks. One advantage of this approach is that, because the parameters of a categorical distribution are real-valued probabilities, the inputs to the network are continuous even when the data is discrete. This contrasts with discrete diffusion, which natively uses discrete samples as input [1, 14, 41].
Numerous authors have proposed continuous variants of discrete diffusion. Typically these rely either on mapping to and from a continuous embedding space [2, 6, 21, 44], or on restricting
continuous diffusion to the probability simplex [23, 24, 33]. While we do not directly compare against the above methods, we note that continuity is an inherent property of the Bayesian Flow framework (the network inputs automatically lie on the probability simplex by virtue of being the parameters of a categorical distribution), rather than a constraint added to an existing system. As well as reducing the number of free parameters and design choices (e.g. the continuous embedding space, the mapping functions), this ensures that BFNs directly optimise the negative log-likelihood of discrete data, unlike continuous diffusion methods for discrete data, which typically require either simplified loss functions [24] or auxiliary loss terms [21] to make learning stable.
For continuous data, BFNs are most closely related to variational diffusion models [17], with a very similar continuous-time loss function. The main difference in this case is that the network inputs are considerably less noisy in BFNs than in variational diffusion and other continuous diffusion models. This is because the generative process of BFNs begins with the parameters of a fixed prior, whereas that of diffusion models begins with pure noise. We hypothesise that the reduction in noise could lead to faster learning on large datasets where the model underfits; however we have yet to test this hypothesis experimentally.
Another key difference from diffusion models is that there is no need to define and invert a forward process for BFNs, which arguably makes it easier to adapt them to different distributions and data types. We showcase this flexibility by adapting BFNs to continuous, discretised and discrete data, with minimal changes to the training procedure. This contrasts with e.g. discretised diffusion, which requires carefully defined transition matrices [1].
## 3 Bayesian Flow Networks
This section covers the basic mathematical formalism of Bayesian Flow Networks, laying out the structure of the various functions and distributions required by the model, along with the discrete and continuous-time loss functions used for training. Specific instantiations of the general framework for continuous, discretised and discrete data are given in Sections 4-6.
### Input and Sender Distributions
Given \(D\)-dimensional data \(\mathbf{x}=\left(x^{(1)},\ldots,x^{(D)}\right)\in\mathcal{X}^{D}\), let \(\boldsymbol{\theta}=\left(\theta^{(1)},\ldots,\theta^{(D)}\right)\) be the parameters of a factorised _input distribution_\(p_{{}_{I}}(\cdot\mid\boldsymbol{\theta})\), with
\[p_{{}_{I}}(\mathbf{x}\mid\boldsymbol{\theta})=\prod_{d=1}^{D}p_{{}_{I}}(x^{(d) }\mid\theta^{(d)}). \tag{1}\]
For example, \(\theta^{(d)}\) may consist of the probabilities of a categorical distribution. Let \(p_{{}_{S}}\left(\cdot\mid\mathbf{x};\alpha\right)\) be a similarly factorised _sender distribution_ with \(\mathbf{y}=\left(y^{(1)},\ldots,y^{(D)}\right)\in\mathcal{Y}^{D}\) and
\[p_{{}_{S}}\left(\mathbf{y}\mid\mathbf{x};\alpha\right)=\prod_{d=1}^{D}p_{{}_{ S}}\left(y^{(d)}\mid x^{(d)};\alpha\right), \tag{2}\]
where \(\alpha\in\mathbb{R}^{+}\) is an _accuracy_ parameter defined such that when \(\alpha=0\), the sender samples are entirely uninformative about \(\mathbf{x}\) and as \(\alpha\) increases the samples become progressively more informative.
### Output Distribution \(p_{{}_{O}}(\cdot\mid\mathbf{\theta},t)\)
During the data transmission process, the input parameters \(\mathbf{\theta}\) are passed along with the process time \(t\) as input to a neural network \(\Psi\). The network then emits an output vector \(\Psi(\mathbf{\theta},t)=\left(\Psi^{(1)}(\mathbf{\theta},t),\ldots,\Psi^{(D)}(\mathbf{ \theta},t)\right)\) which is used to parameterise an _output distribution_ factorised in the same way as the input and sender distributions:
\[p_{{}_{O}}(\mathbf{x}\mid\mathbf{\theta},t)=\prod_{d=1}^{D}p_{{}_{O}}(x^{(d)}\mid \Psi^{(d)}(\mathbf{\theta},t)). \tag{3}\]
As discussed in the introduction, the key difference between the input and output distributions is that while each \(p_{{}_{I}}(x^{(d)}\mid\theta^{(d)})\) depends only on information gathered via \(p_{{}_{S}}\left(y^{(d)}\mid x^{(d)};\alpha\right)\) about \(x^{(d)}\), each \(p_{{}_{O}}(x^{(d)}\mid\Psi^{(d)}(\mathbf{\theta},t))\) depends (via the network) on all of \(\mathbf{\theta}\) and hence all of \(\mathbf{x}\). The output distribution, unlike the input distribution, can therefore exploit context information, such as surrounding pixels in an image or related words in a text.
### Receiver Distribution \(p_{{}_{R}}(\cdot\mid\mathbf{\theta};t,\alpha)\)
Given sender distribution \(p_{{}_{S}}(\cdot\mid\mathbf{x};\alpha)\) and output distribution \(p_{{}_{O}}(\cdot\mid\mathbf{\theta},t)\) the _receiver distribution_ over \(\mathcal{Y}^{D}\) is defined as
\[p_{{}_{R}}(\mathbf{y}\mid\mathbf{\theta};t,\alpha)=\mathop{\mathbb{E}}_{p_{{}_{O} }(\mathbf{x}^{\prime}|\mathbf{\theta};t)}p_{{}_{S}}\left(\mathbf{y}\mid\mathbf{x} ^{\prime};\alpha\right). \tag{4}\]
Intuitively this can be understood as a receiver who knows the form of the sender distribution \(p_{{}_{S}}(\cdot\mid\mathbf{x};\alpha)\) but does not know \(\mathbf{x}\), and therefore integrates over all \(\mathbf{x}^{\prime}\in\mathcal{X}^{D}\), and hence all possible sender distributions, weighted by the probability given to \(\mathbf{x}^{\prime}\) by the output distribution \(p_{{}_{O}}(\mathbf{x}\mid\mathbf{\theta},t)\). The receiver distribution therefore combines two sources of uncertainty: the "known unknown" of the sender distribution entropy (which is a function of \(\alpha\)), and the "unknown unknown" of the output distribution entropy.
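For a single variable, Eq. 4 is simply a mixture over the hypothetical sender distributions. As a minimal numerical sketch, assume a hypothetical discrete setting in which the output distribution is \(K\)-way categorical and the sender adds Gaussian noise with precision \(\alpha\) to a class embedding \(g(k)\) (the paper's concrete sender distributions are given in Sections 4-6):

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

def receiver_logpdf(y, out_probs, g, alpha):
    """log p_R(y | theta; t, alpha) for one variable under Eq. 4: a mixture
    of hypothetical per-class senders N(g(k), 1/alpha) weighted by the
    output probabilities (illustrative setting only)."""
    comps = [np.log(p_k) + norm.logpdf(y, loc=g(k), scale=alpha ** -0.5)
             for k, p_k in enumerate(out_probs)]
    return logsumexp(comps)

# Example: three classes embedded at -1, 0, 1 with accuracy alpha = 4.
print(receiver_logpdf(0.3, [0.2, 0.5, 0.3], g=lambda k: k - 1.0, alpha=4.0))
```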
### Bayesian Updates
Given parameters \(\mathbf{\theta}\) and sender sample \(\mathbf{y}\) drawn with accuracy \(\alpha\) the _Bayesian update function_\(h\) is derived by applying the rules of Bayesian inference to compute the updated parameters \(\mathbf{\theta}^{\prime}\):
\[\mathbf{\theta}^{\prime}\gets h(\mathbf{\theta},\mathbf{y},\alpha). \tag{5}\]
The _Bayesian update distribution_\(p_{{}_{U}}(\cdot\mid\mathbf{\theta},\mathbf{x};\alpha)\) is then defined by marginalizing out \(\mathbf{y}\):
\[p_{{}_{U}}(\mathbf{\theta}^{\prime}\mid\mathbf{\theta},\mathbf{x};\alpha)=\mathop{ \mathbb{E}}_{p_{{}_{S}}(\mathbf{y}|\mathbf{x};\alpha)}\delta\left(\mathbf{\theta}^ {\prime}-h(\mathbf{\theta},\mathbf{y},\alpha)\right), \tag{6}\]
where \(\delta\left(\cdot-\mathbf{a}\right)\) is the multivariate Dirac delta distribution centred on the vector \(\mathbf{a}\). In Sections 4.4 and 6.7 we will prove that both forms of \(p_{{}_{U}}(\cdot\mid\mathbf{\theta},\mathbf{x};\alpha)\) considered in this paper have the following property: the accuracies are additive in the sense that if \(\alpha=\alpha_{a}+\alpha_{b}\) then
\[p_{{}_{U}}(\mathbf{\theta}^{\prime\prime}\mid\mathbf{\theta},\mathbf{x};\alpha)= \mathop{\mathbb{E}}_{p_{{}_{U}}(\mathbf{\theta}^{\prime}|\mathbf{\theta},\mathbf{x}; \alpha_{a})}p_{{}_{U}}(\mathbf{\theta}^{\prime\prime}\mid\mathbf{\theta}^{\prime}, \mathbf{x};\alpha_{b}). \tag{7}\]
It follows from this property that given prior input parameters \(\mathbf{\theta}_{0}\), the probability of observing parameters \(\mathbf{\theta}_{n}\) after drawing a sequence of \(n\) sender samples \(\mathbf{y}_{1},\ldots,\mathbf{y}_{n}\) with accuracies \(\alpha_{1},\ldots,\alpha_{n}\) is
\[\mathop{\mathbb{E}}_{p_{U}(\mathbf{\theta}_{1}|\mathbf{\theta}_{0}, \mathbf{x};\alpha_{1})}\mathop{\mathbb{E}}_{p_{U}(\mathbf{\theta}_{2}|\mathbf{\theta}_ {1},\mathbf{x};\alpha_{2})}\ldots\mathop{\mathbb{E}}_{p_{U}(\mathbf{\theta}_{n-1} |\mathbf{\theta}_{n-2},\mathbf{x};\alpha_{n-1})}p_{{}_{U}}(\mathbf{\theta}_{n}\mid\mathbf{ \theta}_{n-1},\mathbf{x};\alpha_{n})=p_{{}_{U}}\left(\mathbf{\theta}_{n}\mid\mathbf{ \theta}_{0},\mathbf{x};\sum_{i=1}^{n}\alpha_{i}\right). \tag{8}\]
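To make the additivity property concrete, consider the Gaussian case treated for continuous data in Section 4 (sketched here only for intuition): if a single input-distribution factor is \(\mathcal{N}\left(\mu,\rho^{-1}\right)\) and the sender sample is \(y\sim\mathcal{N}\left(x,\alpha^{-1}\right)\), then conjugate Bayesian inference gives the update

\[\rho^{\prime}=\rho+\alpha,\qquad\mu^{\prime}=\frac{\rho\mu+\alpha y}{\rho+\alpha}.\]

Two successive updates with accuracies \(\alpha_{a}\) and \(\alpha_{b}\) accumulate precision \(\rho+\alpha_{a}+\alpha_{b}\), exactly as a single update with accuracy \(\alpha_{a}+\alpha_{b}\) would, which is the content of Eq. 7.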
### Accuracy Schedule \(\beta(t)\)
By performing an infinite number of transmission steps, the Bayesian update process can be generalized to continuous time. Let \(t\in[0,1]\) be the process _time_ and let \(\alpha(t)>0\) be the _accuracy rate_ at time \(t\). Now define the _accuracy schedule_\(\beta(t)\) as
\[\beta(t)=\int_{t^{\prime}=0}^{t}\alpha(t^{\prime})dt^{\prime}. \tag{9}\]
It follows from the above definitions that \(\beta(t)\) is a monotonically increasing function of \(t\), that \(\beta(0)=0\), and that \(\frac{d\beta(t)}{dt}=\alpha(t)\).
Specific forms of \(\beta(t)\) for continuous and discrete data are provided in Sections 4.5 and 6.8. Both are derived using simple heuristics, with a deeper investigation left for future work.
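In practice the schedule is discretised into per-step accuracies \(\alpha_{i}=\beta(t_{i})-\beta(t_{i-1})\), used by the discrete-time loss in Section 3.7. A minimal sketch, assuming a quadratic \(\beta(t)\) purely for illustration rather than one of the schedules derived later:

```python
import numpy as np

def step_accuracies(beta, n):
    """Per-step accuracies alpha_i = beta(t_i) - beta(t_{i-1}) for t_i = i/n."""
    t = np.arange(n + 1) / n
    b = beta(t)
    return b[1:] - b[:-1]

beta_1 = 20.0                                # illustrative beta(1) only
alphas = step_accuracies(lambda t: beta_1 * t ** 2, n=10)
assert np.isclose(alphas.sum(), beta_1)      # telescoping sum equals beta(1)
```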
### Bayesian Flow Distribution \(p_{{}_{F}}(\cdot\mid\mathbf{x};t)\)
Given prior parameters \(\mathbf{\theta}_{0}\), Bayesian update distribution \(p_{{}_{U}}(\cdot\mid\mathbf{\theta},\mathbf{x};\alpha)\) and accuracy schedule \(\beta(t)\), the _Bayesian flow distribution_\(p_{{}_{F}}(\cdot\mid\mathbf{x};t)\) is the marginal distribution over input parameters at time \(t\), defined by
\[p_{{}_{F}}(\mathbf{\theta}\mid\mathbf{x};t)=p_{{}_{U}}(\mathbf{\theta} \mid\mathbf{\theta}_{0},\mathbf{x};\beta(t)). \tag{10}\]
### Loss Function \(L(\mathbf{x})\)
Given prior parameters \(\mathbf{\theta}_{0}\) and accuracy schedule \(\beta(t)\), consider a sequence of \(n\) sender samples \(\mathbf{y}_{1},\ldots,\mathbf{y}_{n}\) sampled at times \(t_{1},\ldots,t_{n}\) where \(t_{i}=i/n\). The sender distribution at step \(i\) is \(p_{{}_{S}}\left(\cdot\mid\mathbf{x};\alpha_{i}\right)\) where
\[\alpha_{i}=\beta(t_{i})-\beta(t_{i-1}), \tag{11}\]
the receiver distribution at step \(i\) is \(p_{{}_{R}}(\cdot\mid\mathbf{\theta}_{i-1};t_{i-1},\alpha_{i})\), and the input parameter sequence \(\mathbf{\theta}_{1},\ldots,\mathbf{\theta}_{n}\) is recursively calculated from
\[\mathbf{\theta}_{i}=h(\mathbf{\theta}_{i-1},\mathbf{y},\alpha_{i}). \tag{12}\]
Define the \(n\)-step _discrete-time loss_\(L^{n}(\mathbf{x})\) as the expected number of nats required to first transmit \(\mathbf{y}_{1},\ldots,\mathbf{y}_{n}\), and the _reconstruction loss_\(L^{r}(\mathbf{x})\) as the expected number of nats required to then transmit \(\mathbf{x}\). Since -- using a bits-back coding scheme [7, 11] -- it requires \(D_{KL}\left(p_{{}_{S}}\parallel p_{{}_{R}}\right)\) nats to transmit a sample from \(p_{{}_{S}}\) to a receiver with \(p_{{}_{R}}\),
\[L^{n}(\mathbf{x})=\mathop{\mathbb{E}}_{p(\mathbf{\theta}_{1},\ldots, \mathbf{\theta}_{n-1})}\sum_{i=1}^{n}D_{KL}\left(p_{{}_{S}}\left(\cdot\mid\mathbf{x };\alpha_{i}\right)\parallel p_{{}_{R}}(\cdot\mid\mathbf{\theta}_{i-1};t_{i-1}, \alpha_{i})\right), \tag{13}\]
where
\[p(\boldsymbol{\theta}_{1},\ldots,\boldsymbol{\theta}_{n})=\prod_{i=1}^{n}p_{{}_{U} }(\boldsymbol{\theta}_{i}\mid\boldsymbol{\theta}_{i-1},\mathbf{x};\alpha_{i}), \tag{14}\]
and since the number of nats needed to transmit \(x\) using an arithmetic coding scheme [51] based on \(p(x)\) is \(-\ln p(x)\), and the marginal probability of \(\boldsymbol{\theta}_{n}\) is given by \(p_{{}_{F}}(\cdot\mid\mathbf{x},1)\),
\[L^{r}(\mathbf{x})=-\mathop{\mathbb{E}}_{p_{{}_{F}}(\boldsymbol{ \theta}|\mathbf{x},1)}\ln p_{{}_{O}}(\mathbf{x}\mid\boldsymbol{\theta};1). \tag{15}\]
Note that \(L^{r}(\mathbf{x})\) is not directly optimised in this paper; however it is indirectly trained by optimising \(L^{n}(\mathbf{x})\) since both are minimised by matching the output distribution to the data. Furthermore, as long as \(\beta(1)\) is high enough, the input distribution at \(t=1\) will be very close to \(\mathbf{x}\), making it trivial for the network to fit \(p_{{}_{O}}(\mathbf{x}\mid\boldsymbol{\theta};1)\).
The loss function \(L(\mathbf{x})\) is defined as the total number of nats required to transmit the data, which is the sum of the n-step and reconstruction losses:
\[L(\mathbf{x})=L^{n}(\mathbf{x})+L^{r}(\mathbf{x}) \tag{16}\]
Alternatively \(L(\mathbf{x})\) can be derived as the loss function of a variational autoencoder (VAE; [18]). Consider the sequence \(\mathbf{y}_{1},\ldots,\mathbf{y}_{n}\) as a latent code with posterior probability given by
\[q(\mathbf{y}_{1},\ldots,\mathbf{y}_{n})=\prod_{i=1}^{n}p_{{}_{S}} \left(\mathbf{y}_{i}\mid\mathbf{x};\alpha_{i}\right), \tag{17}\]
and autoregressive prior probability given by
\[p(\mathbf{y}_{1},\ldots,\mathbf{y}_{n})=\prod_{i=1}^{n}p_{{}_{R}} (\mathbf{y}_{i}\mid\boldsymbol{\theta}_{i-1};t_{i-1},\alpha_{i}). \tag{18}\]
Then, noting that the decoder probability \(p(\mathbf{x}\mid\mathbf{y}_{1},\ldots,\mathbf{y}_{n})=p_{{}_{O}}(\mathbf{x} \mid\boldsymbol{\theta}_{n};1)\), the complete transmission process defines a VAE with loss function given by the negative variational lower bound (VLB)
\[L(\mathbf{x})=-\text{VLB}(\mathbf{x}) =D_{KL}\left(q\parallel p\right)-\mathop{\mathbb{E}}_{\mathbf{y} _{1},\ldots,\mathbf{y}_{n}\sim q}\ln p(\mathbf{x}\mid\mathbf{y}_{1},\ldots, \mathbf{y}_{n}) \tag{19}\] \[=L^{n}(\mathbf{x})+L^{r}(\mathbf{x}). \tag{20}\]
### Discrete-Time Loss \(L^{n}(\mathbf{x})\)
Eq. 13 can be rewritten as
\[L^{n}(\mathbf{x})=n\mathop{\mathbb{E}}_{i\sim U\{1,n\}}\mathop{ \mathbb{E}}_{p_{{}_{U}}(\boldsymbol{\theta}_{1}|\boldsymbol{\theta}_{0}, \mathbf{x};\alpha_{1})}\cdots\mathop{\mathbb{E}}_{p_{{}_{U}}(\boldsymbol{ \theta}|\boldsymbol{\theta}_{i-2},\mathbf{x};\alpha_{i-1})}D_{KL}\left(p_{{}_{S} }\left(\cdot\mid\mathbf{x};\alpha_{i}\right)\parallel p_{{}_{R}}(\cdot\mid \boldsymbol{\theta};t_{i-1},\alpha_{i})\right), \tag{21}\]
where \(U\{1,n\}\) is the uniform distribution over the integers from \(1\) to \(n\). Furthermore, it follows from Eqs. 8 and 10 that
\[\mathop{\mathbb{E}}_{p_{{}_{U}}(\boldsymbol{\theta}_{1}| \boldsymbol{\theta}_{0},\mathbf{x};\alpha_{1})}\cdots\mathop{\mathbb{E}}_{p_{{} _{U}}(\boldsymbol{\theta}|\boldsymbol{\theta}_{i-2},\mathbf{x};\alpha_{i-1})} =\mathop{\mathbb{E}}_{p_{{}_{U}}(\boldsymbol{\theta}|\boldsymbol {\theta}_{0},\mathbf{x};\beta(t_{i-1}))} \tag{22}\] \[=\mathop{\mathbb{E}}_{p_{{}_{F}}(\boldsymbol{\theta}|\mathbf{x};t _{i-1})}, \tag{23}\]
and hence
\[L^{n}(\mathbf{x})=n\mathop{\mathbb{E}}_{i\sim U\{1,n\},p_{{}_{F}}(\boldsymbol{ \theta}|\mathbf{x};t_{i-1})}D_{KL}\left(p_{{}_{S}}\left(\cdot\mid\mathbf{x}; \alpha_{i}\right)\parallel p_{{}_{R}}(\cdot\mid\boldsymbol{\theta};t_{i-1}, \alpha_{i})\right), \tag{24}\]
which allows us to approximate \(L^{n}(\mathbf{x})\) via Monte-Carlo sampling without computing the \(n\)-step sum.
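A single-sample Monte-Carlo estimator of Eq. 24 can be sketched as follows, where `sample_flow` and `kl_sender_receiver` are placeholders for the model-specific routines derived in Sections 4-6:

```python
import numpy as np

def discrete_time_loss(x, n, beta, sample_flow, kl_sender_receiver, rng=None):
    """One-sample Monte-Carlo estimate of L^n(x) via Eq. 24 (sketch)."""
    if rng is None:
        rng = np.random.default_rng()
    i = int(rng.integers(1, n + 1))          # i ~ U{1, n}
    t_prev = (i - 1) / n
    theta = sample_flow(x, t_prev)           # theta ~ p_F(. | x; t_{i-1})
    alpha_i = beta(i / n) - beta(t_prev)     # Eq. 11
    return n * kl_sender_receiver(x, theta, t_prev, alpha_i)
```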
### Continuous-Time Loss \(L^{\infty}(\mathbf{x})\)
Eq. 24 can be used to train the network directly. However this presupposes that \(n\) is fixed during training. Furthermore, for discrete and discretised data the KL terms do not have analytic solutions, leading to noisy gradient estimates.
Inspired by Variational Diffusion Models [17] we derive a continuous-time loss function \(L^{\infty}(\mathbf{x})\) by taking the limit of \(L^{n}(\mathbf{x})\) as \(n\rightarrow\infty\). This turns out to be mathematically simpler than the discrete-time loss, as well as removing both the noisy gradients for the discrete and discretised KL terms and the need to fix \(n\) during training.
Let
\[\epsilon \stackrel{{\text{def}}}{{=}}\frac{1}{n}, \tag{25}\] \[\alpha(t,\epsilon) \stackrel{{\text{def}}}{{=}}\beta(t)-\beta(t- \epsilon),\] (26) \[L^{\infty}(\mathbf{x}) \stackrel{{\text{def}}}{{=}}\lim_{n\rightarrow\infty }L^{n}(\mathbf{x}). \tag{27}\]
Then, from the definition of \(L^{n}(\mathbf{x})\) in Eq. 24,
\[L^{\infty}(\mathbf{x})=\lim_{\epsilon\to 0}\frac{1}{\epsilon} \mathop{\mathbb{E}}_{t\sim U(\epsilon,1),p_{{}_{F}}(\boldsymbol{\theta}| \mathbf{x},t-\epsilon)}D_{KL}\left(p_{{}_{S}}\left(\cdot\mid\mathbf{x}; \alpha(t,\epsilon)\right)\parallel p_{{}_{R}}(\cdot\mid\boldsymbol{\theta};t- \epsilon,\alpha(t,\epsilon))\right), \tag{28}\]
where \(U(a,b)\) is the continuous uniform distribution over the interval \([a,b]\). As we will see, for all the sender, receiver distribution pairs in this paper,
\[D_{KL}\left(p_{{}_{S}}\left(\cdot\mid\mathbf{x};\alpha\right) \parallel p_{{}_{R}}(\cdot\mid\boldsymbol{\theta};\alpha,t)\right)=\sum_{d=1}^ {D}D_{KL}\left(\mathcal{N}\left(g(x^{(d)}),C\alpha^{-1}\right)\parallel P^{(d) }(\boldsymbol{\theta},t)*\mathcal{N}\left(0,C\alpha^{-1}\right)\right), \tag{29}\]
where \(g:\mathcal{X}\rightarrow\mathcal{Y}\) is a function from data space to sender space, \(P^{(d)}(\boldsymbol{\theta},t)\) is a distribution over \(\mathcal{Y}\) with finite expectation and variance, \(*\) denotes the convolution of two probability distributions and \(C\) is a scalar constant.
The following proposition is now required:
**Proposition 3.1**.: _For a continuous univariate probability distribution \(P\) with finite expectation \(E[P]\) and variance \(Var[P]\), the convolution \(P*\mathcal{N}\left(0,\sigma^{2}\right)\rightarrow\mathcal{N}\left(E[P],\sigma ^{2}\right)\) as \(\sigma^{2}\rightarrow\infty\)._
Proof.: Let \(\epsilon^{2}\) be some variance in the interval \(\left(0,\frac{\pi}{8}\right)\) and consider the sequence of random variables
\(X_{0},X_{1},\ldots,X_{n}\) where \(X_{0}\sim P\) and \(X_{j}\sim\mathcal{N}\left(0,\epsilon^{2}\right)\) for \(j>0\). Define
\[Y_{j} \stackrel{{\mathrm{def}}}{{=}}\begin{cases}X_{0}-E[P]& \text{if }j=0,\\ X_{j}&\text{otherwise}.\end{cases} \tag{30}\] \[R_{n} \stackrel{{\mathrm{def}}}{{=}}\sum_{j=0}^{n}Y_{j},\] (31) \[S_{n}^{2} \stackrel{{\mathrm{def}}}{{=}}\sum_{j=1}^{n}Var[Y_{ j}]=n\epsilon^{2},\] (32) \[T_{n}^{2} \stackrel{{\mathrm{def}}}{{=}}Var[P]+S_{n}^{2}. \tag{33}\]
It follows from the definition of convolution that \(\sum_{j=0}^{n}X_{j}\sim P*\mathcal{N}\left(0,n\epsilon^{2}\right)\). Since \(n\epsilon^{2}\rightarrow\infty\) as \(n\rightarrow\infty\), and \(\sum_{j=0}^{n}X_{j}=R_{n}+E[P]\), the result is proved if it can be shown that as \(n\rightarrow\infty\), \(R_{n}\rightarrow\mathcal{N}\left(0,n\epsilon^{2}\right)\) or equivalently \(R_{n}/(\epsilon\sqrt{n})\rightarrow\mathcal{N}\left(0,1\right)\).
The Lyapunov central limit theorem [8] states that if there exists \(\lambda>0\) such that \(\lim_{n\rightarrow\infty}\frac{1}{T_{n}^{2+\lambda}}\sum_{j=0}^{n}E\left(|Y_ {j}|^{2+\lambda}\right)=0\) then \(R_{n}/T_{n}\rightarrow\mathcal{N}\left(0,1\right)\). First note that \(T_{n}^{2}\to S_{n}^{2}=n\epsilon^{2}\) as \(n\rightarrow\infty\). Hence if \(R_{n}/T_{n}\rightarrow\mathcal{N}\left(0,1\right)\) then \(R_{n}/(\epsilon\sqrt{n})\rightarrow\mathcal{N}\left(0,1\right)\). Now set \(\lambda=1\) and observe that for \(Y_{j}\sim\mathcal{N}\left(0,\epsilon^{2}\right)\), \(\mathbb{E}\left(|Y_{j}|^{3}\right)\) is the third moment of the half-normal distribution, which is \(\epsilon^{3}\sqrt{\frac{8}{\pi}}\). Our choice of \(\epsilon^{2}\) therefore ensures that \(E\left(|Y_{j}|^{3}\right)<\epsilon^{2}\) for \(j>0\). Also note that \(T_{n}^{3}>S_{n}^{3}\) and, since \(E[P]\) and \(Var[P]\) are finite, \(E\left(|Y_{0}|^{3}\right)<C\) for some constant \(C\). Hence
\[\frac{1}{T_{n}^{3}}\sum_{j=0}^{n}E\left(|Y_{j}|^{3}\right)<\frac{1}{S_{n}^{3}} \left(C+n\epsilon^{2}\right)=\frac{C}{\epsilon^{3}n^{3/2}}+\frac{1}{\epsilon \sqrt{n}}\xrightarrow{n\rightarrow\infty}0. \tag{34}\]
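Though the limit is asymptotic, the convergence is easy to observe numerically. The following is a minimal sanity check of Proposition 3.1 that we include for illustration (it is not part of the original derivation), written in Python with NumPy; the bimodal mixture used for \(P\) is arbitrary:

```
import numpy as np

rng = np.random.default_rng(0)

# A deliberately non-Gaussian P with finite mean and variance.
def sample_P(n):
    comp = rng.integers(0, 2, size=n)
    return np.where(comp == 0, rng.normal(-2.0, 0.5, n), rng.normal(3.0, 1.0, n))

n = 1_000_000
x = sample_P(n)

for sigma in [0.5, 2.0, 10.0, 50.0]:
    y = x + rng.normal(0.0, sigma, n)   # samples from P * N(0, sigma^2)
    z = (y - y.mean()) / y.std()        # standardise the convolution
    # Skewness and excess kurtosis of a standard normal are both zero.
    print(f"sigma={sigma:5.1f}  skew={np.mean(z**3):+.4f}  "
          f"excess kurtosis={np.mean(z**4) - 3.0:+.4f}")
```

As \(\sigma\) grows, both statistics approach zero, consistent with the convolution approaching \(\mathcal{N}\left(E[P],\sigma^{2}\right)\).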
It follows from the continuity of \(\beta(t)\) and Eq. 26 that \(\alpha(t,\epsilon)^{-1}\rightarrow\infty\) as \(\epsilon\to 0\). Therefore, Proposition 3.1 can be applied to Eq. 29 to yield
\[\lim_{\epsilon\to 0}D_{KL}\left(p_{{}_{S}}\left(\cdot \mid\mathbf{x},\alpha_{t}\right)\parallel p_{{}_{R}}(\cdot\mid\boldsymbol{ \theta},\alpha_{t},t)\right) =\sum_{d=1}^{D}D_{KL}\left(\mathcal{N}\left(g(x^{(d)}),\frac{C}{ \alpha(t,\epsilon)}\right)\parallel\mathcal{N}\left(E[P^{(d)}(\boldsymbol{ \theta},t)],\frac{C}{\alpha(t,\epsilon)}\right)\right) \tag{35}\] \[=\frac{\alpha(t,\epsilon)}{2C}\left\|g(\mathbf{x})-E[P( \boldsymbol{\theta},t)]\right\|^{2}, \tag{36}\]
where
\[g(\mathbf{x})=\left(g(x^{(1)}),\ldots,g(x^{(D)})\right), \tag{37}\] \[E[P(\boldsymbol{\theta},t)]=\left(E[P^{(1)}(\boldsymbol{\theta}, t)],\ldots,E[P^{(D)}(\boldsymbol{\theta},t)]\right). \tag{38}\]
Therefore,
\[L^{\infty}(\mathbf{x})=\operatorname*{\mathbb{E}}_{t\sim U(0,1),p_{{}_{F}}( \boldsymbol{\theta}|\mathbf{x},t)}\lim_{\epsilon\to 0}\frac{\alpha(t, \epsilon)}{\epsilon}\frac{\left\|g(\mathbf{x})-E[P(\boldsymbol{\theta},t)] \right\|^{2}}{2C}. \tag{39}\]
Substituting from Eq. 26,
\[\lim_{\epsilon\to 0}\frac{\alpha(t,\epsilon)}{\epsilon}=\lim_{\epsilon\to 0} \frac{\beta(t)-\beta(t-\epsilon)}{\epsilon}=\frac{d\beta(t)}{dt}=\alpha(t), \tag{40}\]
and hence
\[L^{\infty}(\mathbf{x})=\operatorname*{\mathbb{E}}_{t\sim U(0,1),p_{{}_{F}}( \boldsymbol{\theta}|\mathbf{x},t)}\alpha(t)\frac{\|g(\mathbf{x})-E[P( \boldsymbol{\theta},t)]\|^{2}}{2C}. \tag{41}\]
### Sample Generation
Given prior parameters \(\boldsymbol{\theta}_{0}\), accuracies \(\alpha_{1},\ldots,\alpha_{n}\) and corresponding times \(t_{i}=i/n\), the n-step sampling procedure recursively generates \(\boldsymbol{\theta}_{1},\ldots,\boldsymbol{\theta}_{n}\) by sampling \(\mathbf{x}^{\prime}\) from \(p_{{}_{O}}(\cdot\mid\boldsymbol{\theta}_{i-1},t_{i-1})\), \(\mathbf{y}\) from \(p_{{}_{S}}(\cdot\mid\mathbf{x}^{\prime},\alpha_{i})\) (meaning that \(\mathbf{y}\sim p_{{}_{R}}(\cdot\mid\boldsymbol{\theta}_{i-1};t_{i-1},\alpha_{ i})\) -- see Eq. 4), then setting \(\boldsymbol{\theta}_{i}=h(\boldsymbol{\theta}_{i-1},\mathbf{y})\). Given \(\boldsymbol{\theta}_{n}\) the network is run one more time and the final sample is drawn from \(p_{{}_{O}}(\cdot\mid\boldsymbol{\theta}_{n},1)\).
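The loop structure is the same for every data modality; only the three distributions change. The following schematic sketch in Python is ours, for illustration only: `output_sample`, `sender_sample` and `bayes_update` are placeholders for the modality-specific distributions defined in the following sections.

```
def generate(theta_0, alphas, output_sample, sender_sample, bayes_update, rng):
    # Generic n-step sampler: at step i, draw x' from the output
    # distribution, y from the sender distribution applied to x',
    # then apply the Bayesian update h to obtain theta_i.
    theta, n = theta_0, len(alphas)
    for i, alpha in enumerate(alphas, start=1):
        t_prev = (i - 1) / n
        x_prime = output_sample(theta, t_prev)
        y = sender_sample(x_prime, alpha, rng)
        theta = bayes_update(theta, y, alpha)
    # One final network pass at t = 1 produces the sample.
    return output_sample(theta, 1.0)
```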
## 4 Continuous Data
For continuous data \(\mathcal{X}=\mathbb{R}\) and hence \(\mathbf{x}\in\mathbb{R}^{D}\). In our experiments, \(\mathbf{x}\) is normalised to lie in \([-1,1]^{D}\) to ensure that the network inputs remain in a reasonable range; however this is not essential for the mathematical framework.
### Input Distribution \(p_{{}_{I}}(\cdot\mid\boldsymbol{\theta})\)
The input distribution for continuous data is a diagonal normal:
\[\boldsymbol{\theta} \stackrel{{\text{def}}}{{=}}\{\boldsymbol{\mu},\rho\} \tag{42}\] \[p_{{}_{I}}(\mathbf{x}\mid\boldsymbol{\theta}) \stackrel{{\text{def}}}{{=}}\mathcal{N}\left( \mathbf{x}\mid\boldsymbol{\mu},\rho^{-1}\boldsymbol{I}\right), \tag{43}\]
where \(\boldsymbol{I}\) is the \(D\times D\) identity matrix. We define the prior parameters as
\[\boldsymbol{\theta}_{0}\stackrel{{\text{def}}}{{=}}\{\boldsymbol {0},1\}, \tag{44}\]
where \(\boldsymbol{0}\) is the length \(D\) vector of zeros. Hence the input prior is a standard multivariate normal:
\[p_{{}_{I}}(\mathbf{x}\mid\boldsymbol{\theta}_{0})=\mathcal{N}\left(\mathbf{x} \mid\boldsymbol{0},\boldsymbol{I}\right). \tag{45}\]
The usual Bayesian approach would be to fit the prior mean and variance to the training data. However we found that a standard prior worked better in practice, as well as simplifying the equations. It is important to remember that the distributions \(p_{{}_{I}}(\mathbf{x}\mid\boldsymbol{\theta}_{0})\) are never used directly to make predictions, but rather to inform the network's predictions. All that matters is that the parameters fed into the network accurately and accessibly encode the information received so far about \(\mathbf{x}\). The network can easily learn the empirical prior of the training set and use that to correct its predictions.
### Bayesian Update Function \(h(\mathbf{\theta}_{i-1},\mathbf{y},\alpha)\)
Given a univariate Gaussian prior \(\mathcal{N}\left(\mu_{a},\rho_{a}^{-1}\right)\) over some unknown data \(x\) it can be shown [27] that the Bayesian posterior after observing a noisy sample \(y\) from a normal distribution \(\mathcal{N}\left(x,\alpha^{-1}\right)\) with known precision \(\alpha\) is \(\mathcal{N}\left(\mu_{b},\rho_{b}^{-1}\right)\), where
\[\rho_{b} =\rho_{a}+\alpha, \tag{46}\] \[\mu_{b} =\frac{\mu_{a}\rho_{a}+y\alpha}{\rho_{b}}. \tag{47}\]
Since both \(p_{{}_{I}}(\mathbf{x}\mid\mathbf{\theta})\) and \(p_{{}_{S}}\left(\mathbf{y}\mid\mathbf{x};\alpha\right)\) distributions are normal with diagonal covariance, Eqs. 46 and 47 can be applied to obtain the following Bayesian update function for parameters \(\mathbf{\theta}_{i-1}=\{\mathbf{\mu}_{i-1},\rho_{i-1}\}\) and sender sample \(\mathbf{y}\) drawn from \(p_{{}_{S}}\left(\cdot\mid\mathbf{x};\alpha\mathbf{I}\right)=\mathcal{N}\left( \mathbf{x},\alpha^{-1}\mathbf{I}\right)\):
\[h(\{\mathbf{\mu}_{i-1},\rho_{i-1}\},\mathbf{y},\alpha)=\{\mathbf{\mu}_{i},\rho_{i}\}, \tag{48}\]
with
\[\rho_{i} =\rho_{i-1}+\alpha, \tag{49}\] \[\mathbf{\mu}_{i} =\frac{\mathbf{\mu}_{i-1}\rho_{i-1}+\mathbf{y}\,\alpha}{\rho_{i}}. \tag{50}\]
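In code this update is one line per parameter. Below is a minimal NumPy sketch of Eqs. 49 and 50 (our illustration; the accuracy value matches the first step of Figure 2):

```
import numpy as np

def bayesian_update(mu, rho, y, alpha):
    # Eqs. 49-50: add the sender accuracy to the precision and take the
    # precision-weighted average of the old mean and the noisy sample.
    rho_new = rho + alpha
    mu_new = (mu * rho + y * alpha) / rho_new
    return mu_new, rho_new

rng = np.random.default_rng(0)
x, alpha = 0.7, 2.0
y = x + rng.normal(0.0, np.sqrt(1.0 / alpha))   # y ~ N(x, 1/alpha)
print(bayesian_update(0.0, 1.0, y, alpha))      # prior {mu=0, rho=1}
```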
### Bayesian Update Distribution \(p_{{}_{U}}(\cdot\mid\mathbf{\theta},\mathbf{x};\alpha)\)
Eq. 50 computes \(\mathbf{\mu}_{i}\) given a single sample \(\mathbf{y}\) from the sender distribution. To marginalise over \(\mathbf{y}\sim\mathcal{N}\left(\mathbf{y}\mid\mathbf{x},\alpha^{-1}\mathbf{I}\right)\) as defined in Eq. 6, the following standard identity for normal distributions
Figure 2: **Bayesian updates for continuous data**. For univariate data \(x=0.7\), the initial input distribution parameters \(\theta_{0}=\{\mu_{0}=0,\rho_{0}=1\}\) are updated to \(\theta_{1}=\{\mu_{1},\rho_{1}\}\), \(\theta_{2}=\{\mu_{2},\rho_{2}\}\), \(\theta_{3}=\{\mu_{3},\rho_{3}\}\) by iterating Eqs. 49 and 50 with sender samples \(y_{1}\), \(y_{2}\), \(y_{3}\) drawn with accuracies 2, 4, 6 respectively. Note how the input mean (\(\mu_{1}\), \(\mu_{2}\), \(\mu_{3}\)) stochastically approaches the data, while the input precision smoothly increases.
can be applied:
\[X\sim\mathcal{N}\left(\mu_{X},\sigma_{X}^{2}\right)\implies aX+b\sim \mathcal{N}\left(a\mu_{X}+b,a^{2}\sigma_{X}^{2}\right)\ \forall a,b\in\mathbb{R}\,. \tag{51}\]
Substituting \(X=\mathbf{y}\), \(\mu_{X}=\mathbf{x}\), \(\sigma_{X}^{2}=\alpha^{-1}\mathbf{I}\), \(a=\frac{\alpha}{\rho_{i}}\) and \(b=\frac{\mathbf{\mu}_{i-1}\rho_{i-1}}{\rho_{i}}\), Eq. 50 gives:
\[\mathbf{\mu}_{i}\sim\mathcal{N}\left(\frac{\alpha\,\mathbf{x}\!+\!\mathbf{ \mu}_{i-1}\rho_{i-1}}{\rho_{i}},\frac{\alpha}{\rho_{i}^{2}}\mathbf{I}\right), \tag{52}\]
and therefore (since \(\mathbf{\mu}_{i}\) is the only random part of \(\mathbf{\theta}_{i}\))
\[p_{{}_{U}}(\mathbf{\theta}_{i}\mid\mathbf{\theta}_{i-1},\mathbf{x}; \alpha)=\mathcal{N}\left(\mathbf{\mu}_{i}\mid\frac{\alpha\,\mathbf{x}\!+\!\mathbf{\mu} _{i-1}\rho_{i-1}}{\rho_{i}},\frac{\alpha}{\rho_{i}^{2}}\mathbf{I}\right). \tag{53}\]
### Additive Accuracies
We can check that the sender accuracies are additive in the sense required by Eq. 7 by first observing that if \(\mathbf{\theta}_{i-1}=\{\mathbf{\mu}_{i-1},\rho_{i-1}\}\) is drawn from \(p(\cdot\mid\mathbf{\theta}_{i-2},\mathbf{x};\alpha_{a})\) then
\[\mathbf{\mu}_{i-1}\sim\mathcal{N}\left(\frac{\alpha_{a}\,\mathbf{x}\!+\!\mathbf{\mu}_{ i-2}\rho_{i-2}}{\rho_{i-1}},\frac{\alpha_{a}}{\rho_{i-1}^{2}}\mathbf{I}\right). \tag{54}\]
Define
\[\mathbf{\mu}_{i}^{\prime}\stackrel{{\rm def}}{{=}}\frac {\alpha_{b}\,\mathbf{x}\!+\!\mathbf{\mu}_{i-1}\rho_{i-1}}{\rho_{i}}=\frac{\rho_{i- 1}}{\rho_{i}}\mathbf{\mu}_{i-1}+\frac{\alpha_{b}\,\mathbf{x}}{\rho_{i}}, \tag{55}\]
Figure 3: **Bayesian update distribution for continuous data**. For \(x=0.7\), the plot shows the distribution \(p(\mu\mid\theta_{0},x;\alpha)\) over input mean \(\mu\) from Eq. 52 given initial parameters \(\mu_{0}=0,\rho_{0}=1\) and \(11\)\(\alpha\) values spaced log-linearly between \(e^{-5}\) and \(e^{5}\). Note how the distribution is tightly concentrated around \(\mu_{0}\) for very low alpha, then smoothly progresses to a tight concentration around \(x\) for high alpha.
and apply Identity 51 with \(a=\frac{\rho_{i-1}}{\rho_{i}}\) and \(b=\frac{\alpha_{b}\,\mathbf{x}}{\rho_{i}}\) to see that
\[\boldsymbol{\mu}_{i}^{\prime}\sim\mathcal{N}\left(\frac{\rho_{i-1}}{\rho_{i}}\,\frac{\alpha_{a}\,\mathbf{x}\!+\!\boldsymbol{\mu}_{i-2}\rho_{i-2}}{\rho_{i-1}}+\frac{\alpha_{b}\,\mathbf{x}}{\rho_{i}},\frac{\rho_{i-1}^{2}}{\rho_{i}^{2}}\,\frac{\alpha_{a}}{\rho_{i-1}^{2}}\boldsymbol{I}\right) \tag{56}\] \[=\mathcal{N}\left(\frac{(\alpha_{a}+\alpha_{b})\,\mathbf{x}\!+\!\boldsymbol{\mu}_{i-2}\rho_{i-2}}{\rho_{i}},\frac{\alpha_{a}}{\rho_{i}^{2}}\boldsymbol{I}\right). \tag{57}\]
Now observe that if \(\boldsymbol{\theta}_{i}=\{\boldsymbol{\mu}_{i},\rho_{i}\}\) is drawn from \(p(\cdot\mid\boldsymbol{\theta}_{i-1},\mathbf{x};\alpha_{b})\) then
\[\boldsymbol{\mu}_{i}\sim\mathcal{N}\left(\frac{\alpha_{b}\, \mathbf{x}\!+\!\boldsymbol{\mu}_{i-1}\rho_{i-1}}{\rho_{i}},\frac{\alpha_{b}}{ \rho_{i}^{2}}\boldsymbol{I}\right), \tag{58}\]
and hence
\[\boldsymbol{\mu}_{i}\sim\boldsymbol{\mu}_{i}^{\prime}+\boldsymbol{\epsilon}, \tag{59}\]
where
\[\boldsymbol{\epsilon}\sim\mathcal{N}\left(\boldsymbol{0},\frac{ \alpha_{b}}{\rho_{i}^{2}}\boldsymbol{I}\right). \tag{60}\]
Another standard identity for Gaussian variables can now be applied:
\[X\sim\mathcal{N}\left(\mu_{X},\sigma_{X}^{2}\right),Y\sim \mathcal{N}\left(\mu_{Y},\sigma_{Y}^{2}\right)\implies X+Y\sim\mathcal{N} \left(\mu_{X}+\mu_{Y},\sigma_{X}^{2}+\sigma_{Y}^{2}\right), \tag{61}\]
to see that
\[\boldsymbol{\mu}_{i}\sim\mathcal{N}\left(\frac{(\alpha_{a}+\alpha _{b})\,\mathbf{x}\!+\!\boldsymbol{\mu}_{i-2}\rho_{i-2}}{\rho_{i}},\frac{\alpha _{a}+\alpha_{b}}{\rho_{i}^{2}}\boldsymbol{I}\right), \tag{62}\]
and hence
\[\underset{p_{U}(\boldsymbol{\theta}_{i-1}|\boldsymbol{\theta}_{ i-2},\mathbf{x};\alpha_{a})}{\mathbb{E}}p_{ U}(\boldsymbol{\theta}_{i}\mid\boldsymbol{\theta}_{i-1},\mathbf{x};\alpha_{b})=p_{ U}(\boldsymbol{\theta}_{i}\mid\boldsymbol{\theta}_{i-2},\mathbf{x};\alpha_{a}+ \alpha_{b}), \tag{63}\]
as required.
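This additivity is also easy to verify by simulation. A small Monte-Carlo check (our sketch, assuming NumPy) compares two consecutive updates with accuracies \(\alpha_{a}\) and \(\alpha_{b}\) against a single update with accuracy \(\alpha_{a}+\alpha_{b}\):

```
import numpy as np

rng = np.random.default_rng(1)
x, mu_0, rho_0 = 0.7, 0.0, 1.0
a_a, a_b, n = 2.0, 3.0, 500_000

def update(mu, rho, alpha, size):
    y = x + rng.normal(0.0, np.sqrt(1.0 / alpha), size)  # sender sample
    return (mu * rho + y * alpha) / (rho + alpha), rho + alpha

mu_1, rho_1 = update(mu_0, rho_0, a_a, n)         # first update
mu_2, _ = update(mu_1, rho_1, a_b, n)             # second update
mu_direct, _ = update(mu_0, rho_0, a_a + a_b, n)  # single combined update

# Both should match Eq. 62: mean (a_a+a_b)x/rho_i, variance (a_a+a_b)/rho_i^2.
print(mu_2.mean(), mu_direct.mean())   # both ~ 0.583
print(mu_2.var(),  mu_direct.var())    # both ~ 0.139
```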
### Accuracy Schedule \(\beta(t)\)
We derive \(\beta(t)\) for continuous data by requiring that the expected entropy of the input distribution linearly decreases with \(t\). Intuitively, this means that information flows into the input distribution at a constant rate. Define
\[H(t) \overset{\text{def}}{=}\underset{p_{F}(\boldsymbol{\theta}| \mathbf{x};t)}{\mathbb{E}}H(p_{{}_{I}}(\cdot\mid\boldsymbol{\theta})) \tag{64}\] \[=\frac{D}{2}\ln\left(\frac{2\pi e}{1+\beta(t)}\right). \tag{65}\]
Then if \(H(t)\) linearly decreases with \(t\),
\[H(t) =(1-t)H(0)+tH(1) \tag{66}\] \[\implies\ln\left(\frac{2\pi}{1+\beta(t)}\right) =(1-t)\ln(2\pi)+t\ln\left(\frac{2\pi}{1+\beta(1)}\right)\] (67) \[\implies-\ln(1+\beta(t)) =-t\ln(1+\beta(1))\] (68) \[\implies(1+\beta(t))^{-1} =(1+\beta(1))^{-t}. \tag{69}\]
Define \(\sigma_{1}\) to be the standard deviation of the input distribution at \(t=1\). We will choose \(\sigma_{1}\) empirically to minimise the loss; in general it should be small enough to ensure that the reconstruction loss is low, but not so small as to create unnecessary transmission costs. Recalling that the precision \(\rho\) at time \(t\) is \(1+\beta(t)\), we see that
\[\sigma_{1}^{2}=(1+\beta(1))^{-1}. \tag{70}\]
Therefore
\[(1+\beta(t))^{-1} =\sigma_{1}^{2t} \tag{71}\] \[\implies\beta(t) =\sigma_{1}^{-2t}-1\] (72) \[\implies\alpha(t) =\frac{d\left(\sigma_{1}^{-2t}-1\right)}{dt}\] (73) \[=-\frac{2\ln\sigma_{1}}{\sigma_{1}^{2t}}. \tag{74}\]
### Bayesian Flow Distribution \(p_{{}_{F}}(\cdot\mid\mathbf{x};t)\)
Recall from Eq. 10 that
\[p_{{}_{F}}(\boldsymbol{\theta}\mid\mathbf{x};t)=p_{{}_{U}}( \boldsymbol{\theta}\mid\boldsymbol{\theta}_{0},\mathbf{x},\beta(t)). \tag{75}\]
Therefore, setting \(\boldsymbol{\theta}_{i-1}=\boldsymbol{\theta}_{0}=\{\mathbf{0},1\}\) and \(\alpha=\beta(t)\) in Eq. 53, and recalling that \(\rho=1+\beta(t)\),
\[p_{{}_{F}}(\boldsymbol{\theta}\mid\mathbf{x};t) =\mathcal{N}\left(\boldsymbol{\mu}\mid\frac{\beta(t)}{1+\beta(t )}\,\mathbf{x},\frac{\beta(t)}{(1+\beta(t))^{2}}\boldsymbol{I}\right) \tag{76}\] \[=\mathcal{N}\left(\boldsymbol{\mu}\mid\gamma(t)\,\mathbf{x}, \gamma(t)(1-\gamma(t))\boldsymbol{I}\right), \tag{77}\]
where
\[\gamma(t) \overset{\mathrm{def}}{=}\frac{\beta(t)}{1+\beta(t)} \tag{78}\] \[=\frac{\sigma_{1}^{-2t}-1}{\sigma_{1}^{-2t}}\] (79) \[=1-\sigma_{1}^{2t}. \tag{80}\]
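The schedule and flow distribution together amount to only a few lines of code. Below is a NumPy sketch (ours; \(\sigma_{1}=0.02\) is just an illustrative value) of Eqs. 72, 74, 77 and 80:

```
import numpy as np

rng = np.random.default_rng(0)
sigma_1 = 0.02   # illustrative; chosen empirically in practice

def beta(t):     # Eq. 72
    return sigma_1 ** (-2 * t) - 1.0

def alpha(t):    # Eq. 74
    return -2.0 * np.log(sigma_1) * sigma_1 ** (-2 * t)

def gamma(t):    # Eq. 80
    return 1.0 - sigma_1 ** (2 * t)

def sample_flow(x, t):
    # Draw the input mean mu ~ p_F(. | x; t) of Eq. 77.
    g = gamma(t)
    return g * x + np.sqrt(g * (1.0 - g)) * rng.standard_normal(x.shape)

x = np.array([0.8, -0.3])
for t in [0.1, 0.5, 1.0]:
    print(f"t={t}: gamma={gamma(t):.4f}, mu={sample_flow(x, t)}")
```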
### Output Distribution \(p_{{}_{O}}(\cdot\mid\mathbf{\theta};t)\)
Following standard practice for diffusion models [42], the output distribution is defined by reparameterising a prediction of the Gaussian noise vector \(\mathbf{\epsilon}\sim\mathcal{N}\left(\mathbf{0},\mathbf{I}\right)\) used to generate the mean \(\mathbf{\mu}\) passed as input to the network. Recall from Eq. 77 that
\[\mathbf{\mu}\sim\mathcal{N}\left(\gamma(t)\,\mathbf{x},\gamma(t)(1-\gamma(t))\mathbf{I }\right), \tag{81}\]
and hence
\[\mathbf{\mu} =\gamma(t)\,\mathbf{x}+\sqrt{\gamma(t)(1-\gamma(t))}\mathbf{\epsilon} \tag{82}\] \[\implies\mathbf{x} =\frac{\mathbf{\mu}}{\gamma(t)}-\sqrt{\frac{1-\gamma(t)}{\gamma(t)}} \mathbf{\epsilon}. \tag{83}\]
The network outputs an estimate \(\mathbf{\hat{\epsilon}}(\mathbf{\theta},t)\) of \(\mathbf{\epsilon}\) and this is transformed into an estimate \(\mathbf{\hat{x}}(\mathbf{\theta},t)\) of \(\mathbf{x}\) by
\[\mathbf{\hat{x}}(\mathbf{\theta},t)=\frac{\mathbf{\mu}}{\gamma(t)}-\sqrt{\frac{1- \gamma(t)}{\gamma(t)}}\mathbf{\hat{\epsilon}}(\mathbf{\theta},t). \tag{84}\]
Given \(\mathbf{\hat{x}}(\mathbf{\theta},t)\) the output distribution is
\[p_{{}_{O}}(\mathbf{x}\mid\mathbf{\theta};t)=\delta(\mathbf{x}-\mathbf{\hat{x}}(\mathbf{\theta},t)). \tag{85}\]
Note that \(\gamma(0)=0\), making the transformation from \(\mathbf{\hat{\epsilon}}(\mathbf{\theta},t)\) to \(p_{{}_{O}}(\mathbf{x}\mid\mathbf{\theta};t)\) undefined at \(t=0\). We therefore set \(\mathbf{\hat{x}}(\mathbf{\theta},t)=\mathbf{0}\) for \(t\) under some small threshold \(t_{min}\). Also, \(\mathbf{\hat{x}}(\mathbf{\theta},t)\) is clipped to lie within the allowed range \([x_{min},x_{max}]\) for \(\mathbf{x}\). In our experiments \(t_{min}=1\mathrm{e}{-6}\) and \([x_{min},x_{max}]=[-1,1]\).
Figure 4: **Bayesian flow for continuous data**. For \(x=0.8\), \(\sigma_{1}=0.02\) and \(\gamma(t)\) defined as in Eqn. 80, the plot shows stochastic parameter trajectories for the input distribution mean \(\mu\) (white lines) superimposed on a log-scale heatmap of the Bayesian flow distribution \(p(\theta\mid x;t)\). Note how the trajectories all begin at \(\mu_{0}=0\) then fan out before converging on \(x\).
### Sender Distribution \(p_{{}_{S}}\left(\cdot\mid\mathbf{x};\alpha\right)\)
The sender space \(\mathcal{Y}=\mathcal{X}=\mathbb{R}\) for continuous data, and the sender distribution is normal with precision \(\alpha\):
\[p_{{}_{S}}\left(\mathbf{y}\mid\mathbf{x};\alpha\right)=\mathcal{N}\left( \mathbf{y}\mid\mathbf{x},\alpha^{-1}\boldsymbol{I}\right). \tag{86}\]
### Receiver Distribution \(p_{{}_{R}}(\cdot\mid\boldsymbol{\theta};t,\alpha)\)
Substituting Eqs. 85 and 86 into Eq. 4,
\[p_{{}_{R}}(\mathbf{y}\mid\boldsymbol{\theta};t,\alpha) =\underset{\delta(\mathbf{x}^{\prime}-\mathbf{\hat{x}}(\boldsymbol{\theta},t))}{\mathbb{E}}\,\mathcal{N}\left(\mathbf{y}\mid\mathbf{x}^{\prime},\alpha^{-1}\boldsymbol{I}\right) \tag{87}\] \[=\mathcal{N}\left(\mathbf{y}\mid\mathbf{\hat{x}}(\boldsymbol{\theta},t),\alpha^{-1}\boldsymbol{I}\right). \tag{88}\]
### Reconstruction Loss \(L^{r}(\mathbf{x})\)
Truly continuous data requires infinite precision to reconstruct, which makes the reconstruction loss problematic. However it would be reasonable to assume that either the data is finely discretised (as all information is on a digital computer), or that it contains some noise. The reconstruction loss for discretised data is presented in Section 5.3. Alternatively, if we assume the presence of normally distributed measurement noise on \(\mathbf{x}\), with fixed isotropic variance \(\sigma^{2}\), then a noisy version of the reconstruction loss can be defined as the expected KL divergence between \(\mathcal{N}\left(\mathbf{x},\sigma^{2}\boldsymbol{I}\right)\) and the output distribution at \(t=1\):
\[L^{r}(\mathbf{x}) =\underset{p_{{}_{F}}(\boldsymbol{\theta}|\mathbf{x},1)}{\mathbb{ E}}\,D_{KL}\left(\mathcal{N}\left(\mathbf{x},\sigma^{2}\boldsymbol{I}\right)\ \|\ \mathcal{N}\left(\mathbf{\hat{x}}(\boldsymbol{\theta},1),\sigma^{2} \boldsymbol{I}\right)\right) \tag{89}\] \[=\underset{p_{{}_{F}}(\boldsymbol{\theta}|\mathbf{x},1)}{\mathbb{ E}}\,\frac{1}{2\sigma^{2}}\,\|\mathbf{x}-\mathbf{\hat{x}}(\boldsymbol{\theta},1)\|^{2}\,. \tag{90}\]
Figure 5: **Input variance for Bayesian Flow Networks and diffusion models**. For \(\sigma_{1}=0.001\) and \(\gamma(t)\) defined as in Eqn. 80, the blue line shows the variance \(\gamma(t)(1-\gamma(t))\) of the distribution over the input mean \(\mu\) as a function of \(t\) (see Eq. 77). Note that the variance is \(0\) at \(t=0\) (since the input prior \(\mu_{0}\) is deterministic) and becomes small again as \(t\) approaches \(1\) and \(\mu\) becomes increasingly concentrated around the data. The green and red lines show the equivalent network input variance for two different noise schedules from the literature (linear [12] and cosine [28]) during the reverse process of a diffusion model (note that \(t\) is reversed relative to diffusion convention). The input variance is much lower for Bayesian Flow Networks.
The noise does not directly affect training, as the reconstruction loss is not optimised. However the value of \(\sigma\) places a natural upper limit on the value that should be chosen for \(\sigma_{1}\): there is no point transmitting the data to greater precision than it was originally measured. Empirically, we find that when \(\sigma_{1}<\sigma/2\) the reconstruction loss is very small.
### Discrete-Time Loss \(L^{n}(\mathbf{x})\)
From Eqs. 86 and 88,
\[D_{KL}\left(p_{{}_{S}}\left(\cdot\mid\mathbf{x},\alpha_{i} \right)\parallel p_{{}_{R}}(\cdot\mid\boldsymbol{\theta}_{i-1};t_{i-1},\alpha _{i})\right) =D_{KL}\left(\mathcal{N}\left(\mathbf{x},\alpha_{i}^{-1}\boldsymbol {I}\right)\parallel\mathcal{N}\left(\mathbf{\hat{x}}(\boldsymbol{\theta}_{i-1},t_{i-1}),\alpha_{i}^{-1}\boldsymbol{I}\right)\right) \tag{91}\] \[=\frac{\alpha_{i}}{2}\left\|\mathbf{x}\!-\!\mathbf{\hat{x}}( \boldsymbol{\theta}_{i-1},t_{i-1})\right\|^{2}, \tag{92}\]
and from Eqs. 11 and 72,
\[\alpha_{i} =\beta(t_{i})-\beta(t_{i-1}) \tag{93}\] \[=\sigma_{1}^{-2i/n}-\sigma_{1}^{-2(i-1)/n}\] (94) \[=\sigma_{1}^{-2i/n}\left(1-\sigma_{1}^{2/n}\right). \tag{95}\]
Therefore, substituting into Eq. 24,
\[L^{n}(\mathbf{x})=\frac{n}{2}\left(1-\sigma_{1}^{2/n}\right)\operatorname*{\mathbb{E}}_{i\sim U\{1,n\},p_{{}_{F}}(\boldsymbol{\theta}_{i-1}\mid\mathbf{x};t_{i-1})}\frac{\|\mathbf{x}\!-\!\mathbf{\hat{x}}(\boldsymbol{\theta}_{i-1},t_{i-1})\|^{2}}{\sigma_{1}^{2i/n}}, \tag{96}\]
where \(t_{i-1}=(i-1)/n\).
### Continuous-time Loss \(L^{\infty}(\mathbf{x})\)
Eq. 29 claimed that
\[D_{KL}\left(p_{{}_{S}}\left(\cdot\mid\mathbf{x},\alpha\right) \parallel p_{{}_{R}}(\cdot\mid\boldsymbol{\theta},\alpha,t)\right)=D_{KL} \left(\mathcal{N}\left(g(\mathbf{x}),C\alpha^{-1}\boldsymbol{I}\right)\parallel P (\boldsymbol{\theta},t)\ast\mathcal{N}\left(\mathbf{0},C\alpha^{-1} \boldsymbol{I}\right)\right), \tag{97}\]
Figure 6: **Sender, output and receiver distributions for continuous data**. Note that the sender and receiver distributions have identical variance and the output distribution is a Dirac delta distribution centred on the network prediction \(\hat{x}(\theta,t)\).
for some embedding function \(g:\mathcal{X}\to\mathcal{Y}\), constant \(C\) and distribution \(P(\boldsymbol{\theta},t)\) over \(\mathcal{Y}^{D}\) with finite mean and variance. If \(g\) is the identity function, \(C=1\) and
\[P(\mathbf{y}\mid\boldsymbol{\theta},t)=\delta(\mathbf{y}-\mathbf{\hat{x}}( \boldsymbol{\theta},t)), \tag{98}\]
then \(P(\boldsymbol{\theta},t)\) has finite mean and variance and
\[\mathcal{N}\left(\mathbf{y}\mid g(\mathbf{x}),C\alpha^{-1} \boldsymbol{I}\right)=\mathcal{N}\left(\mathbf{y}\mid\mathbf{x},\alpha^{-1} \boldsymbol{I}\right) =p_{{}_{S}}\left(\mathbf{y}\mid\mathbf{x};\alpha\right), \tag{99}\] \[P(\mathbf{y}\mid\boldsymbol{\theta},t)\ast\mathcal{N}\left( \mathbf{0},C\alpha^{-1}\boldsymbol{I}\right)=\mathcal{N}\left(\mathbf{y}\mid \mathbf{\hat{x}}(\boldsymbol{\theta},t),\alpha^{-1}\boldsymbol{I}\right) =p_{{}_{R}}(\mathbf{y}\mid\boldsymbol{\theta},\alpha,t), \tag{100}\]
so the claim is true and the continuous-time loss from Eq. 41 applies, with \(E[P(\boldsymbol{\theta},t)]=\mathbf{\hat{x}}(\boldsymbol{\theta},t)\) and \(\alpha(t)\) as defined in Eq. 74, yielding
\[L^{\infty}(\mathbf{x})=-\ln\sigma_{1}\mathop{\mathbb{E}}_{t\sim U(0,1),p_{{}_ {F}}(\boldsymbol{\theta}|\mathbf{x};t)}\frac{\left\|\mathbf{x}-\mathbf{\hat{x} }(\boldsymbol{\theta},t)\right\|^{2}}{\sigma_{1}^{2t}}. \tag{101}\]
### Pseudocode
Pseudocode for evaluating the \(n\)-step loss \(L^{n}(\mathbf{x})\) and continuous-time loss \(L^{\infty}(\mathbf{x})\) for continuous data is presented in Algorithms 1 and 2, while the sample generation procedure is presented in Algorithm 3. The helper function cts_output_prediction, used by all three, is defined first.
```
# Note that \(\theta=\{\mu,\rho\}\), but \(\rho\) is fully determined by \(t\)
# For our experiments \(t_{min}=1\mathrm{e}{-6}\), \([x_{min},x_{max}]=[-1,1]\)
function cts_output_prediction(\(\boldsymbol{\mu}\in\mathbb{R}^{D}\), \(t\in[0,1]\), \(\gamma\in\mathbb{R}^{+}\), \(t_{min}\in\mathbb{R}^{+}\), \(x_{min},x_{max}\in\mathbb{R}\))
  if \(t<t_{min}\) then
    \(\mathbf{\hat{x}}(\boldsymbol{\theta},t)\leftarrow\mathbf{0}\)
  else
    Input \((\boldsymbol{\mu},t)\) to network, receive \(\boldsymbol{\hat{\epsilon}}(\boldsymbol{\theta},t)\) as output
    \(\mathbf{\hat{x}}(\boldsymbol{\theta},t)\leftarrow\frac{\boldsymbol{\mu}}{\gamma}-\sqrt{\frac{1-\gamma}{\gamma}}\boldsymbol{\hat{\epsilon}}(\boldsymbol{\theta},t)\)
    clip \(\mathbf{\hat{x}}(\boldsymbol{\theta},t)\) to \([x_{min},x_{max}]\)
  endif
  Return \(\mathbf{\hat{x}}(\boldsymbol{\theta},t)\)
endfunction
```
```
Require: \(\sigma_{1}\in\mathbb{R}^{+}\), number of steps \(n\in\mathbb{N}\)
Input: continuous data \(\mathbf{x}\in\mathbb{R}^{D}\)
\(i\sim U\{1,n\}\)
\(t\leftarrow\frac{i-1}{n}\)
\(\gamma\gets 1-\sigma_{1}^{2t}\)
\(\boldsymbol{\mu}\sim\mathcal{N}\left(\gamma\,\mathbf{x},\gamma(1-\gamma)\boldsymbol{I}\right)\)
\(\hat{\mathbf{x}}(\boldsymbol{\theta},t)\leftarrow\textsc{cts\_output\_prediction}(\boldsymbol{\mu},t,\gamma)\)
\(L^{n}(\mathbf{x})\leftarrow\frac{n}{2}\left(1-\sigma_{1}^{2/n}\right)\sigma_{1}^{-2i/n}\left\|\mathbf{x}-\hat{\mathbf{x}}(\boldsymbol{\theta},t)\right\|^{2}\)
```
**Algorithm 1** Discrete-Time Loss \(L^{n}(\mathbf{x})\) for Continuous Data
```
Require: \(\sigma_{1}\in\mathbb{R}^{+}\)
Input: continuous data \(\mathbf{x}\in\mathbb{R}^{D}\)
\(t\sim U(0,1)\)
\(\gamma\gets 1-\sigma_{1}^{2t}\)
\(\boldsymbol{\mu}\sim\mathcal{N}\left(\gamma\,\mathbf{x},\gamma(1-\gamma)\boldsymbol{I}\right)\)
\(\hat{\mathbf{x}}(\boldsymbol{\theta},t)\leftarrow\textsc{cts\_output\_prediction}(\boldsymbol{\mu},t,\gamma)\)
\(L^{\infty}(\mathbf{x})\leftarrow-\ln\sigma_{1}\,\sigma_{1}^{-2t}\left\|\mathbf{x}-\hat{\mathbf{x}}(\boldsymbol{\theta},t)\right\|^{2}\)
```
**Algorithm 2** Continuous-Time Loss \(L^{\infty}(\mathbf{x})\) for Continuous Data
```
Require: \(\sigma_{1}\in\mathbb{R}^{+}\), number of steps \(n\in\mathbb{N}\)
\(\boldsymbol{\mu}\leftarrow\boldsymbol{0}\)
\(\rho\gets 1\)
for \(i=1\) to \(n\) do
  \(t\leftarrow\frac{i-1}{n}\)
  \(\hat{\mathbf{x}}(\boldsymbol{\theta},t)\leftarrow\textsc{cts\_output\_prediction}(\boldsymbol{\mu},t,1-\sigma_{1}^{2t})\)
  \(\alpha\gets\sigma_{1}^{-2i/n}\left(1-\sigma_{1}^{2/n}\right)\)
  \(\mathbf{y}\sim\mathcal{N}\left(\hat{\mathbf{x}}(\boldsymbol{\theta},t),\alpha^{-1}\boldsymbol{I}\right)\)
  \(\boldsymbol{\mu}\leftarrow\frac{\rho\boldsymbol{\mu}+\alpha\mathbf{y}}{\rho+\alpha}\)
  \(\rho\leftarrow\rho+\alpha\)
endfor
\(\hat{\mathbf{x}}(\boldsymbol{\theta},1)\leftarrow\textsc{cts\_output\_prediction}(\boldsymbol{\mu},1,1-\sigma_{1}^{2})\)
Return \(\hat{\mathbf{x}}(\boldsymbol{\theta},1)\)
```
**Algorithm 3** Sample Generation for Continuous Data
## 5 Discretised Data
This section considers continuous data that has been discretised into \(K\) bins. For example, 8-bit images are discretised into 256 bins and 16-bit audio is discretised into \(2^{16}=65,536\) bins. This data is represented by tiling \([-1,1]\) into \(K\) intervals, each of length \(2/K\). Let \(k_{l}\), \(k_{c}\) and \(k_{r}\) denote respectively the left, centre and right of interval \(k\), and let \(\{1,K\}\) denote the set of integers from 1 to \(K\). Then for \(k\in\{1,K\}\),
\[k_{c} =\frac{2k-1}{K}-1, \tag{102}\] \[k_{l} =k_{c}-\frac{1}{K},\] (103) \[k_{r} =k_{c}+\frac{1}{K}. \tag{104}\]
Let \(k(\mathbf{x})=\left(k(x^{(1)}),\ldots,k(x^{(D)})\right)\in\{1,K\}^{D}\) be the vector of the indices of the bins occupied by \(\mathbf{x}=\left(x^{(1)},\ldots,x^{(D)}\right)\in\mathbb{R}^{D}\), and let \(k_{l}(\mathbf{x})\), \(k_{c}(\mathbf{x})\) and \(k_{r}(\mathbf{x})\) be the corresponding vectors of left edges, centres and right edges of the bins. If the data has not already been discretised, we set \(\mathbf{x}=k_{c}(\mathbf{x})\). For example if the red channel in an 8-bit RGB image has index 110, it will be represented by the
number \(\frac{2*(110)-1}{256}-1=-0.14453125\). Note that each \(x^{(d)}\) therefore lies in the range \([\frac{1}{K}-1,1-\frac{1}{K}]\) and not \([-1,1]\).
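These mappings are straightforward to implement. Below is a small NumPy helper (our illustration) for Eqs. 102-104 and the bin index \(k(x)\):

```
import numpy as np

def bin_centres(K):
    # Eq. 102: k_c = (2k - 1)/K - 1 for k = 1..K.
    k = np.arange(1, K + 1)
    return (2 * k - 1) / K - 1

def bin_index(x, K):
    # Index k(x) of the bin of [-1, 1] containing x, clipped to {1..K}.
    return np.clip(np.floor((x + 1) * K / 2).astype(int) + 1, 1, K)

K = 256
print(bin_centres(K)[110 - 1])    # -0.14453125, matching the example above
print(bin_index(-0.14453125, K))  # 110
```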
The input distribution \(p_{{}_{I}}(\mathbf{x}\mid\boldsymbol{\theta})\), prior parameters \(\boldsymbol{\theta}_{0}\), sender distribution \(p_{{}_{S}}\left(\mathbf{y}\mid\mathbf{x};\alpha\right)\), Bayesian update function \(h(\boldsymbol{\theta}_{i-1},\mathbf{y},\alpha)\), Bayesian update distribution \(p_{{}_{U}}(\boldsymbol{\theta}_{i}\mid\boldsymbol{\theta}_{i-1},\mathbf{x};\alpha)\), Bayesian flow distribution \(p_{{}_{F}}(\boldsymbol{\theta}\mid\mathbf{x};t)\) and accuracy schedule \(\beta(t)\) are all identical to the continuous case described in Section 4. It may surprise the reader that the output distribution is discretised while the input, sender and receiver distributions are not. We made this choice partly for mathematical convenience (Bayesian updates are considerably more complex for discretised distributions; [1]) and partly because we suspected that it would be easier for the network to interpret continuous means than discrete probabilities as input. In a similar vein to our argument for standard priors in Sec. 4.1, we remind the reader that the input distribution only serves to inform the network and not directly to model the data; all that matters is that the input parameters contain enough information to allow the network to make accurate predictions.
Section 4.11 noted that the level of measurement noise assumed for continuous data should inform the choice of standard deviation \(\sigma_{1}\) for the input distribution at \(t=1\) (which in turn defines the accuracy schedule \(\beta(t)\)). For discretised data a similar role is played by the width of the discretisation bins, as these place a natural limit on how precisely the data needs to be transmitted. For example, for 8-bit data with 256 bins and hence a bin width of \(1/128\), setting \(\sigma_{1}=1\mathrm{e}{-3}\) corresponds to a final input distribution with standard deviation roughly one eighth of the width of the bin, which should be precise enough for the network to identify the correct bin with very high probability.
One caveat with discretisation is that calculating the loss has \(O(K)\) computational cost, which may be prohibitive for very finely discretised data. In any case, the benefits of discretisation tend to decrease as the number of bins increases, as we will see in our experiments.
### Output Distribution \(p_{{}_{O}}(\cdot\mid\boldsymbol{\theta},t)\)
Discretised continuous distributions offer a natural and expressive way to model discretised data with neural networks [38]. As in Section 4.7, the network outputs \(\Psi(\boldsymbol{\theta},t)\) are not used to predict
Figure 7: **Output distribution for discretised data.** For univariate data \(x\) discretised into \(K=16\) bins, the green line shows the continuous distribution \(\mathcal{N}\left(\mu_{x},\sigma_{x}^{2}\right)\) that is discretised to yield the output distribution \(p_{{}_{O}}(x\mid\theta,t)\), as described in Section 5.1. Bin boundaries are marked with vertical grey lines. The heights of the green bars represent the probabilities assigned to the respective bins by \(p_{{}_{O}}(x\mid\theta,t)\). For ease of visualisation these heights are rescaled relative to the probability density, as indicated on the right axis. Note the clipping at \(\pm 1\): the area under the dotted green line to the left of \(-1\) is added to the probability of the first bin, the area under the dotted green line to the right of \(1\) is added to the probability of the last bin.
\(\mathbf{x}\) directly, but rather to model the Gaussian noise vector \(\boldsymbol{\epsilon}\) used to generate the mean sample \(\boldsymbol{\mu}\) passed as input to the network.
First \(\Psi(\boldsymbol{\theta},t)\) is split into two length \(D\) vectors, \(\boldsymbol{\mu}_{\epsilon}\) and \(\ln\boldsymbol{\sigma}_{\epsilon}\). Then these are transformed to \(\boldsymbol{\mu}_{x}\) and \(\boldsymbol{\sigma}_{x}\) using
\[\boldsymbol{\mu}_{x} =\begin{cases}\mathbf{0}&\text{if }t<t_{min},\\ \frac{\boldsymbol{\mu}}{\gamma(t)}-\sqrt{\frac{1-\gamma(t)}{\gamma(t)}} \,\boldsymbol{\mu}_{\epsilon}&\text{otherwise},\end{cases} \tag{105}\] \[\boldsymbol{\sigma}_{x} =\begin{cases}\mathbf{1}&\text{if }t<t_{min},\\ \sqrt{\frac{1-\gamma(t)}{\gamma(t)}}\exp(\ln\boldsymbol{\sigma}_{\epsilon})& \text{otherwise}.\end{cases} \tag{106}\]
For each \(d\in\{1,D\}\), define the following univariate Gaussian cdf
\[F\left(x\mid\mu_{x}^{(d)},\sigma_{x}^{(d)}\right)=\frac{1}{2} \left[1+\text{erf}\left(\frac{x-\mu_{x}^{(d)}}{\sigma_{x}^{(d)}\sqrt{2}} \right)\right], \tag{107}\]
and clip at \([-1,1]\) to obtain
\[G\left(x\mid\mu_{x}^{(d)},\sigma_{x}^{(d)}\right)=\begin{cases} 0&\text{if }x\leq-1,\\ 1&\text{if }x\geq 1,\\ F\left(x\mid\mu_{x}^{(d)},\sigma_{x}^{(d)}\right)&\text{otherwise}.\end{cases} \tag{108}\]
Then, for \(k\in\{1,K\}\),
\[p_{{}_{O}}^{(d)}(k\mid\boldsymbol{\theta};t)\stackrel{{\text{def} }}{{=}}G(k_{r}\mid\mu_{x}^{(d)},\sigma_{x}^{(d)})-G(k_{l}\mid\mu_{x}^{(d)}, \sigma_{x}^{(d)}), \tag{109}\]
and hence
\[p_{{}_{O}}(\mathbf{x}\mid\boldsymbol{\theta},t)=\prod_{d=1}^{D} p_{{}_{O}}^{(d)}\left(k(x^{(d)})\mid\boldsymbol{\theta};t\right). \tag{110}\]
### Receiver Distribution \(p_{{}_{R}}(\cdot\mid\boldsymbol{\theta};t,\alpha)\)
Substituting Eq. 110 and Eq. 86 into Eq. 4 gives
\[p_{{}_{R}}(\mathbf{y}\mid\boldsymbol{\theta};t,\alpha) =\underset{p_{{}_{O}}(\mathbf{x}^{\prime}\mid\boldsymbol{\theta},t)}{\mathbb{E}}\,\mathcal{N}\left(\mathbf{y}\mid k_{c}(\mathbf{x}^{\prime}),\alpha^{-1}\boldsymbol{I}\right) \tag{111}\] \[=\prod_{d=1}^{D}\int_{x^{\prime}}dx^{\prime}p_{{}_{O}}^{(d)}\left(k(x^{\prime})\mid\boldsymbol{\theta};t\right)\mathcal{N}\left(y^{(d)}\mid k_{c}(x^{\prime}),\alpha^{-1}\right)\] (112) \[=\prod_{d=1}^{D}\sum_{k=1}^{K}p_{{}_{O}}^{(d)}(k\mid\boldsymbol{\theta};t)\mathcal{N}\left(y^{(d)}\mid k_{c},\alpha^{-1}\right). \tag{113}\]
Figure 8: **Sender, output and receiver distributions for discretised data**. For data \(x\) discretised into 8 bins, the three plots depict the sender distribution (red line), the discretised output distribution (green bars; heights reflect the probabilities assigned to each bin, rescaled as in Figure 7) and receiver distribution (blue line) for progressively increasing values of \(\alpha\), and for progressively more accurate predictions of \(x\) (both of which typically happen as \(t\) increases). Also shown are the continuous distribution \(\mathcal{N}(x\mid\mu_{x},\sigma_{x}^{2})\) (dotted green line) which is discretised to create the output distribution and the continuous receiver distribution from Section 4 (dashed orange line). Bin boundaries are marked with vertical grey lines. Note the KL divergences printed in the top right: taking discretisation into account leads to a lower KL due to the density “bumps” at the bin centres where \(x\) could be. The advantage of discretisation becomes more pronounced as the prediction gets closer to \(x\) and more of the probability mass is concentrated in the correct bin.
### Reconstruction Loss \(L^{r}(\mathbf{x})\)
The reconstruction loss for discretised data is
\[L^{r}(\mathbf{x}) =-\operatorname*{\mathbb{E}}_{p_{{}_{F}}(\boldsymbol{\theta}| \mathbf{x},1)}\ln p_{{}_{O}}(\mathbf{x}\mid\boldsymbol{\theta};1) \tag{114}\] \[=-\operatorname*{\mathbb{E}}_{p_{{}_{F}}(\boldsymbol{\theta}| \mathbf{x},1)}\sum_{d=1}^{D}\ln p_{{}_{O}}^{(d)}\left(k(x^{(d)})\mid \boldsymbol{\theta};1\right). \tag{115}\]
### Discrete-time Loss \(L^{n}(\mathbf{x})\)
From Eqs. 86 and 113,
\[D_{KL}\left(p_{{}_{S}}\left(\cdot\mid\mathbf{x},\alpha_{i} \right)\parallel p_{{}_{R}}(\cdot\mid\boldsymbol{\theta}_{i-1};t_{i-1},\alpha_ {i})\right) \tag{116}\] \[=D_{KL}\left(\mathcal{N}\left(\mathbf{x},\alpha_{i}^{-1} \boldsymbol{I}\right)\parallel\prod_{d=1}^{D}\sum_{k=1}^{K}p_{{}_{O}}^{(d)}(k \mid\boldsymbol{\theta}_{i-1},t_{i-1})\mathcal{N}\left(k_{c},\alpha_{i}^{-1} \right)\right), \tag{117}\]
which cannot be calculated in closed form, but can be estimated with Monte-Carlo sampling. Substituting into Eq. 24,
\[L^{n}(\mathbf{x}) =n\operatorname*{\mathbb{E}}_{i\sim U\{1,n\},p_{{}_{F}}( \boldsymbol{\theta}|\mathbf{x};t_{i-1}),\mathcal{N}\left(\mathbf{y}|\mathbf{ x},\alpha_{i}^{-1}\boldsymbol{I}\right)}\ln\mathcal{N}\left(\mathbf{y}\mid \mathbf{x},\alpha_{i}^{-1}\boldsymbol{I}\right) \tag{118}\] \[\qquad\qquad\qquad-\sum_{d=1}^{D}\ln\left(\sum_{k=1}^{K}p_{{}_{O }}^{(d)}(k\mid\boldsymbol{\theta},t_{i-1})\mathcal{N}\left(y^{(d)}\mid k_{c}, \alpha_{i}^{-1}\right)\right). \tag{119}\]
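A single-sample Monte-Carlo estimate of this loss is cheap to compute. The following NumPy sketch is ours; it assumes the output probabilities `p_out` and bin centres `k_c` have already been computed, and for brevity it uses a naive mixture evaluation rather than a numerically safer log-sum-exp:

```
import numpy as np

def discretised_dtl_sample(x, p_out, k_c, alpha, n, rng):
    # One Monte-Carlo draw of Eqs. 118-119.
    # x: (D,) data; p_out: (D, K) output probabilities; k_c: (K,) bin centres.
    y = x + rng.normal(0.0, np.sqrt(1.0 / alpha), x.shape)  # y ~ p_S(.|x; alpha)
    log_sender = -0.5 * np.log(2 * np.pi / alpha) - 0.5 * alpha * (y - x) ** 2
    # Receiver (Eq. 113): a mixture of Gaussians centred on the bin centres.
    log_comp = (-0.5 * np.log(2 * np.pi / alpha)
                - 0.5 * alpha * (y[:, None] - k_c[None, :]) ** 2)
    log_receiver = np.log((p_out * np.exp(log_comp)).sum(axis=1) + 1e-38)
    return n * (log_sender.sum() - log_receiver.sum())
```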
### Continuous-time Loss \(L^{\infty}(\mathbf{x})\)
Justifying the claim made in Eq. 29 follows almost the same reasoning here as in Section 4.12, with \(C=1\) and \(g\) the identity function. The only difference is that
\[P(\mathbf{y}\mid\boldsymbol{\theta};t)=\prod_{d=1}^{D}\sum_{k=1}^{K}p_{{}_{O} }^{(d)}(k\mid\boldsymbol{\theta},t)\delta(y^{(d)}-k_{c}), \tag{120}\]
which clearly has finite variance and mean. Since
\[P(\mathbf{y}\mid\boldsymbol{\theta},t)*\mathcal{N}\left(\boldsymbol{0},C \alpha^{-1}\boldsymbol{I}\right)=p_{{}_{R}}(\mathbf{y}\mid\boldsymbol{\theta},\alpha,t), \tag{121}\]
the claim holds and the continuous-time loss from Eq. 41 can be applied with
\[E[P(\boldsymbol{\theta},t)]=\left(\sum_{k=1}^{K}p^{(1)}(k\mid\boldsymbol{ \theta},t)k_{c},\ldots,\sum_{k=1}^{K}p^{(D)}(k\mid\boldsymbol{\theta},t)k_{c} \right)\overset{\mathrm{def}}{=}\mathbf{\hat{k}}(\boldsymbol{\theta},t), \tag{122}\]
and \(\alpha(t)\) as defined in Eq. 74, yielding
\[L^{\infty}(\mathbf{x})=-\ln\sigma_{1}\operatorname*{\mathbb{E}}_{t\sim U(0,1),p_{{}_{F}}(\boldsymbol{\theta}|\mathbf{x};t)}\frac{\left\|\mathbf{x}-\mathbf{ \hat{k}}(\boldsymbol{\theta},t)\right\|^{2}}{\sigma_{1}^{2t}}. \tag{123}\]
Note that \(\mathbf{\hat{k}}(\boldsymbol{\theta},t)\) is a function of the complete discretised distribution \(p_{{}_{O}}(\mathbf{x}\mid\boldsymbol{\theta},t)\), hence \(L^{\infty}(\mathbf{x})\) depends on both \(\boldsymbol{\mu}_{\mathbf{x}}\) and \(\boldsymbol{\sigma}_{\mathbf{x}}\), and not only on \(\boldsymbol{\mu}_{\mathbf{x}}\), as for continuous data. This also means that calculating \(L^{\infty}(\mathbf{x})\) has \(O(K)\) computational cost for discretised data.
### Pseudocode
Pseudocode for evaluating the discrete-time loss \(L^{n}(\mathbf{x})\) and continuous-time loss \(L^{\infty}(\mathbf{x})\) for discretised data is presented in Algorithms 4 and 5, while sample generation is presented in Algorithm 6.
```
function discretised_cdf(\(\mu\in\mathbb{R}\), \(\sigma\in\mathbb{R}^{+}\), \(x\in\mathbb{R}\))
  \(F(x)\leftarrow\frac{1}{2}\left[1+\operatorname{erf}\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right]\)
  \(G(x)\leftarrow\begin{cases}0&\text{if }x\leq-1\\ 1&\text{if }x\geq 1\\ F(x)&\text{otherwise}\end{cases}\)
  Return \(G(x)\)
endfunction
```
```
# For our experiments \(t_{min}=1\mathrm{e}{-6}\)
# \(k_{l}=\frac{2(k-1)}{K}-1\), \(k_{r}=\frac{2k}{K}-1\)
function discretised_output_distribution(\(\boldsymbol{\mu}\in\mathbb{R}^{D}\), \(t\in[0,1]\), \(K\in\mathbb{N}\), \(\gamma\in\mathbb{R}^{+}\), \(t_{min}\in\mathbb{R}^{+}\))
  if \(t<t_{min}\) then
    \(\boldsymbol{\mu}_{x}\leftarrow\mathbf{0}\)
    \(\boldsymbol{\sigma}_{x}\leftarrow\mathbf{1}\)
  else
    Input \((\boldsymbol{\mu},t)\) to network, receive \((\boldsymbol{\mu}_{\epsilon},\ln\boldsymbol{\sigma}_{\epsilon})\) as output
    \(\boldsymbol{\mu}_{x}\leftarrow\frac{\boldsymbol{\mu}}{\gamma}-\sqrt{\frac{1-\gamma}{\gamma}}\,\boldsymbol{\mu}_{\epsilon}\)
    \(\boldsymbol{\sigma}_{x}\leftarrow\sqrt{\frac{1-\gamma}{\gamma}}\exp(\ln\boldsymbol{\sigma}_{\epsilon})\)
  endif
  for \(d\in\{1,D\}\), \(k\in\{1,K\}\) do
    \(p_{{}_{O}}^{(d)}(k\mid\boldsymbol{\theta};t)\leftarrow\textsc{discretised\_cdf}(\mu_{x}^{(d)},\sigma_{x}^{(d)},k_{r})-\textsc{discretised\_cdf}(\mu_{x}^{(d)},\sigma_{x}^{(d)},k_{l})\)
  endfor
  Return \(p_{{}_{O}}(\cdot\mid\boldsymbol{\theta};t)\)
endfunction
```
```
# \(k_{c}=\frac{2k-1}{K}-1\)
Require: \(\sigma_{1}\in\mathbb{R}^{+}\), number of steps \(n\in\mathbb{N}\), number of bins \(K\in\mathbb{N}\)
Input: discretised data \(\mathbf{x}\in[\frac{1}{K}-1,1-\frac{1}{K}]^{D}\)
\(i\sim U\{1,n\}\)
\(t\leftarrow\frac{i-1}{n}\)
\(\gamma\gets 1-\sigma_{1}^{2t}\)
\(\boldsymbol{\mu}\sim\mathcal{N}\left(\gamma\,\mathbf{x},\gamma(1-\gamma)\boldsymbol{I}\right)\)
\(\alpha\leftarrow\sigma_{1}^{-2i/n}\left(1-\sigma_{1}^{2/n}\right)\)
\(\mathbf{y}\sim\mathcal{N}\left(\mathbf{x},\alpha^{-1}\boldsymbol{I}\right)\)
\(p_{{}_{O}}(\cdot\mid\boldsymbol{\theta};t)\leftarrow\textsc{discretised\_output\_distribution}(\boldsymbol{\mu},t,K,\gamma)\)
\(L^{n}(\mathbf{x})\gets n\left[\ln\mathcal{N}\left(\mathbf{y}\mid\mathbf{x},\alpha^{-1}\boldsymbol{I}\right)-\sum_{d}\ln\left(\sum_{k}p_{{}_{O}}^{(d)}(k\mid\boldsymbol{\theta};t)\mathcal{N}\left(y^{(d)}\mid k_{c},\alpha^{-1}\right)\right)\right]\)
```
**Algorithm 4** Discrete-Time Loss \(L^{n}(\mathbf{x})\) for Discretised Data
```
Require: \(\sigma_{1}\in\mathbb{R}^{+}\), number of bins \(K\in\mathbb{N}\)
Input: discretised data \(\mathbf{x}\in[\frac{1}{K}-1,1-\frac{1}{K}]^{D}\)
\(t\sim U(0,1)\)
\(\gamma\gets 1-\sigma_{1}^{2t}\)
\(\boldsymbol{\mu}\sim\mathcal{N}\left(\gamma\,\mathbf{x},\gamma(1-\gamma)\boldsymbol{I}\right)\)
\(p_{{}_{O}}(\cdot\mid\boldsymbol{\theta};t)\leftarrow\textsc{discretised\_output\_distribution}(\boldsymbol{\mu},t,K,\gamma)\)
\(\hat{\mathbf{k}}(\boldsymbol{\theta},t)\leftarrow\left(\sum_{k}p_{{}_{O}}^{(1)}(k\mid\boldsymbol{\theta};t)k_{c},\ldots,\sum_{k}p_{{}_{O}}^{(D)}(k\mid\boldsymbol{\theta};t)k_{c}\right)\)
\(L^{\infty}(\mathbf{x})\leftarrow-\ln\sigma_{1}\,\sigma_{1}^{-2t}\left\|\mathbf{x}-\hat{\mathbf{k}}(\boldsymbol{\theta},t)\right\|^{2}\)
```
**Algorithm 5** Continuous-Time Loss \(L^{\infty}(\mathbf{x})\) for Discretised Data
```
# \(\mathbf{k}_{c}=\left(k_{c}^{(1)},\ldots,k_{c}^{(D)}\right)\)
Require: \(\sigma_{1}\in\mathbb{R}^{+}\), number of steps \(n\in\mathbb{N}\), number of bins \(K\in\mathbb{N}\)
\(\boldsymbol{\mu}\leftarrow\boldsymbol{0}\)
\(\rho\gets 1\)
for \(i=1\) to \(n\) do
  \(t\leftarrow\frac{i-1}{n}\)
  \(\mathbf{k}\sim\textsc{discretised\_output\_distribution}(\boldsymbol{\mu},t,K,1-\sigma_{1}^{2t})\)
  \(\alpha\leftarrow\sigma_{1}^{-2i/n}\left(1-\sigma_{1}^{2/n}\right)\)
  \(\mathbf{y}\sim\mathcal{N}\left(\mathbf{k}_{c},\alpha^{-1}\boldsymbol{I}\right)\)
  \(\boldsymbol{\mu}\leftarrow\frac{\rho\boldsymbol{\mu}+\alpha\mathbf{y}}{\rho+\alpha}\)
  \(\rho\leftarrow\rho+\alpha\)
endfor
\(\mathbf{k}\sim\textsc{discretised\_output\_distribution}(\boldsymbol{\mu},1,K,1-\sigma_{1}^{2})\)
Return \(\mathbf{k}_{c}\)
```
**Algorithm 6** Sample Generation for Discretised Data
## 6 Discrete Data
We now consider discrete data in which no meaningful order or distance exists between the classes, unlike the discretised continuous data covered in the previous section. Some obvious examples are text characters, classification labels or any binary data. In this context the data is represented as a \(D\) dimensional vector of class indices: \(\mathbf{x}=\left(x^{(1)},\ldots,x^{(D)}\right)\in\{1,K\}^{D}\), where \(\{1,K\}\) is the set of integers from \(1\) to \(K\).
### Input Distribution \(p_{{}_{I}}(\cdot\mid\boldsymbol{\theta})\)
For discrete data, the input distribution is a factorised categorical over the class indices. Let \(\boldsymbol{\theta}=\left(\theta^{(1)},\ldots,\theta^{(D)}\right)\in[0,1]^{KD}\) with \(\theta^{(d)}=\left(\theta^{(d)}_{1},\ldots,\theta^{(d)}_{K}\right)\in\Delta^ {K-1}\), where \(\theta^{(d)}_{k}\) is the probability assigned to class \(k\) for variable \(d\). Then
\[p_{{}_{I}}(\mathbf{x}\mid\boldsymbol{\theta})=\prod_{d=1}^{D} \theta^{(d)}_{x^{(d)}}. \tag{124}\]
The input prior is uniform with
\[\boldsymbol{\theta}_{0}=\frac{\mathbf{1}}{\boldsymbol{K}}, \tag{125}\]
where \(\frac{\mathbf{1}}{\boldsymbol{K}}\) is the length \(KD\) vector whose entries are all \(\frac{1}{K}\). We chose a uniform prior, rather than an empirical prior fit to the training data, for the same reasons we chose a standard normal prior for continuous data: it is mathematically simpler, and the disparity between the true prior and the simple prior can easily be corrected by the network.
### Output Distribution \(p_{{}_{O}}(\cdot\mid\boldsymbol{\theta};t)\)
Given data \(\mathbf{x}\), network inputs \(\boldsymbol{\theta},t\) and corresponding network outputs \(\Psi(\boldsymbol{\theta},t)=\left(\Psi^{(1)}(\boldsymbol{\theta},t),\ldots, \Psi^{(D)}(\boldsymbol{\theta},t)\right)\in\mathbb{R}^{KD}\), the output distribution for discrete data is as follows:
\[p_{{}_{O}}^{(d)}(k\mid\boldsymbol{\theta};t) =\left(\text{softmax}(\Psi^{(d)}(\boldsymbol{\theta},t))\right)_ {k}, \tag{126}\] \[p_{{}_{O}}(\mathbf{x}\mid\boldsymbol{\theta};t) =\prod_{d=1}^{D}p_{{}_{O}}^{(d)}(x^{(d)}\mid\boldsymbol{\theta};t). \tag{127}\]
Note that for binary data only the probability \(\theta^{(d)}_{1}\) that \(k=1\) is fed into the network, on the grounds that the probability of \(k=2\) can easily be inferred from \(\theta^{(d)}_{2}=1-\theta^{(d)}_{1}\). The output distribution for binary data is determined by applying the logistic sigmoid function elementwise to the length \(D\) output vector to get the probability for \(k=1\):
\[p_{{}_{O}}^{(d)}(1\mid\boldsymbol{\theta};t)=\sigma\left(\Psi^{(d)}(\boldsymbol{\theta},t)\right), \tag{128}\]
where
\[\sigma(x)=\frac{1}{1+e^{-x}}, \tag{129}\]
then inferring the probabilities for \(k=2\) from
\[p_{{}_{O}}^{(d)}(2\mid\boldsymbol{\theta};t)=1-p_{{}_{O}}^{(d)}(1\mid\boldsymbol{ \theta};t). \tag{130}\]
In principle one class could also be removed from the inputs and outputs when \(K>2\) and inferred from the others. However this would require the network to internalise a slightly more sophisticated inference procedure that could potentially slow down learning. We therefore followed deep-learning convention and included a redundant input and output unit for \(K>2\).
All probabilities are rescaled to the range \([-1,1]\) by multiplying by two then subtracting one before feeding them into the network.
### Sender Distribution \(p_{{}_{S}}\left(\cdot\mid\mathbf{x};\alpha\right)\)
Given \(\omega\in[0,1]\), and a vector of \(D\) class indices \(\mathbf{k}=\left(k^{(1)},\ldots,k^{(D)}\right)\in\{1,K\}^{D}\), let
\[p(k^{(d)}\mid x^{(d)};\omega)\stackrel{{\mathrm{def}}}{{=}}\frac{ 1-\omega}{K}+\omega\delta_{k^{(d)}x^{(d)}}, \tag{131}\]
where \(\delta_{ij}\) is the Kronecker delta function. Clearly \(p(k^{(d)}\mid x^{(d)};\omega)\geq 0\)\(\forall k\) and \(\sum_{k=1}^{K}p(k^{(d)}\mid x^{(d)};\omega)=1\), so the vector
\[a(x^{(d)},\omega)\stackrel{{\mathrm{def}}}{{=}}\left(p(1\mid x^{ (d)};\omega),\ldots,p(K\mid x^{(d)};\omega)\right), \tag{132}\]
defines a valid distribution over \(K\) classes. To simplify notation we will from now on drop the superscripts and refer to \(x^{(d)}\) as \(x\), \(p(k^{(d)}\mid x^{(d)};\omega)\) as \(p(k\mid x;\omega)\) and so on, except where necessary to remove ambiguity.
Consider a vector of integer counts \(c=(c_{1},\ldots,c_{K})\in\{1,m\}^{K}\), corresponding to the number of times each of the \(K\) classes is observed among \(m\) independent draws from \(a(x,\omega)\). Then the probability of observing \(c\) is given by the following multinomial distribution:
\[p(c\mid x,\omega) =\mathrm{Multi}(m,a(x,\omega)) \tag{133}\] \[=\frac{m!}{c_{1}!\ldots c_{K}!}\prod_{k=1}^{K}\left(p(k\mid x;\omega)\right)^{c_{k}}\] (134) \[=\frac{m!}{c_{1}!\ldots c_{K}!}\prod_{k=1}^{K}\left(\frac{1-\omega}{K}+\omega\delta_{kx}\right)^{c_{k}}. \tag{135}\]
Now consider the fraction \(c_{k}/m\) of observations of class \(k\) in \(c\). Clearly
\[\lim_{m\to\infty}\frac{c_{k}}{m}=p(k\mid x;\omega), \tag{136}\]
meaning that for any finite \(\omega\) it would be possible to deduce from \(c\) what the value of \(x\) is if \(m\) is sufficiently large. However as \(\omega\) shrinks, \(p(k\mid x;\omega)\) becomes closer to uniform, meaning that a larger \(m\) is required to unambiguously identify \(x\) from \(c\). By defining the accuracy \(\alpha\stackrel{{\mathrm{def}}}{{=}}m\omega^{2}\) and sending \(m\to\infty\) (and hence \(\omega\to 0\) for any finite \(\alpha\)), \(p(c\mid x,\omega)\) can therefore be used to define a continuous-valued sender distribution that smoothly varies from totally uninformative at \(\alpha=0\) to totally informative as \(\alpha\to\infty\), like the sender distribution for continuous data.
It can be proved from the central limit theorem that for any set of discrete probabilities \(p=\{p_{1},\ldots,p_{K}\}\), where \(0<p_{k}<1\)\(\forall k\), if \(c\sim\text{Multi}(m,p)\) then in the limit \(m\to\infty\) the following result holds [8]:
\[\frac{c-mp}{\sqrt{mp}}\sim\mathcal{N}\left(0,\mathbf{I}\right), \tag{137}\]
where \(\mathbf{I}\) is the \(K\times K\) identity matrix. Therefore
\[\lim_{m\to\infty}p(c_{k}\mid x,\omega) =\mathcal{N}\left(c_{k}\mid mp(k\mid x;\omega),mp(k\mid x;\omega)\right) \tag{138}\] \[=\frac{1}{\sqrt{2\pi mp(k\mid x;\omega)}}\exp\left(\frac{-\left[ c_{k}-mp(k\mid x,\omega)\right]^{2}}{2mp(k\mid x;\omega)}\right). \tag{139}\]
Now define
\[\xi\stackrel{{\text{def}}}{{=}}1+\frac{\omega K}{1-\omega}. \tag{140}\]
and the length \(K\) sender sample \(y=(y_{1},\ldots,y_{K})\) as
\[y_{k}\stackrel{{\text{def}}}{{=}}\left(c_{k}-\frac{m}{K}\right) \ln\xi. \tag{141}\]
Note that \(y\), unlike \(x\), is continuous (\(\mathcal{Y}=\mathbb{R}^{K},\mathcal{X}=\{1,K\}\)), and that \(\left(c-\frac{m}{K}\right)\) measures the number of times each class is observed, minus the average number of observations per class. Intuitively, \(y\) provides information about the relative concentration of the classes among the counts, with (since \(\ln\xi>0\)) positive values for classes observed more frequently than the mean and negative values for those observed less frequently than the mean. As \(m\omega^{2}\) grows the concentration increases around the true class, and hence \(y\) becomes more informative about \(x\).
Rearranging Eq. 141,
\[c_{k} =\frac{y_{k}}{\ln\xi}+\frac{m}{K} \tag{142}\] \[\implies\frac{dc_{k}}{dy_{k}} =\frac{1}{\ln\xi}, \tag{143}\]
which we can use for the following change of variables:
\[p(y_{k}\mid x,\omega) =\left|\frac{dc_{k}}{dy_{k}}\right|p(c_{k}\mid x,\omega) \tag{144}\] \[=\frac{1}{\ln\xi\sqrt{2\pi mp(k\mid x,\omega)}}\exp\left(\frac{- \left[\frac{y_{k}}{\ln\xi}+\frac{m}{K}-mp(k\mid x,\omega)\right]^{2}}{2mp(k \mid x,\omega)}\right), \tag{145}\]
where we have used the fact that \(\xi\geq 1\) and hence \(\frac{dc_{k}}{dy_{k}}\geq 0\). Recall that \(\alpha=m\omega^{2}\) and hence \(m=\frac{\alpha}{\omega^{2}}\), which can be substituted into the above to yield
\[p(y_{k}\mid x,\omega) =\frac{1}{\frac{1}{\omega}\ln\xi}\frac{1}{\sqrt{2\pi\alpha p(k \mid x,\omega)}}\exp\left(\frac{-\left[\frac{y_{k}}{\frac{1}{\omega}\ln\xi}+ \frac{\alpha}{\omega}\left(\frac{1}{K}-p(k\mid x,\omega)\right)\right]^{2}}{2 \alpha p(k\mid x,\omega)}\right). \tag{146}\]
Substituting from Eq. 131,
\[\frac{1}{K}-p(k\mid x,\omega)=\omega\left(\frac{1}{K}-\delta_{kx}\right), \tag{147}\]
and hence
\[p(y_{k}\mid x,\omega)=\frac{1}{\frac{1}{\omega}\ln\xi}\frac{1}{ \sqrt{2\pi\alpha p(k\mid x,\omega)}}\exp\left(\frac{-\left[\frac{y_{k}}{\frac{1 }{\omega}\ln\xi}-\alpha\left(\delta_{kx}-\frac{1}{K}\right)\right]^{2}}{2 \alpha p(k\mid x,\omega)}\right). \tag{148}\]
Applying the identity \(\ln(1+x)=\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n}x^{n}\) for \(\left|x\right|<1\) to \(\ln\xi=\ln\left(1+\frac{\omega K}{1-\omega}\right)\) it can be seen that
\[\ln\xi=\frac{\omega K}{1-\omega}+O(\omega^{2}), \tag{149}\]
and hence
\[\lim_{\omega\to 0}\frac{1}{\omega}\ln\xi=K. \tag{150}\]
Furthermore, it follows directly from Eq. 131 that
\[\lim_{\omega\to 0}p(k\mid x,\omega)=\frac{1}{K}\ \forall k\in\{1,K\}. \tag{151}\]
Now define
\[p_{{}_{S}}\left(y_{k}\mid x;\alpha\right)\stackrel{{\rm def}}{{= }}\lim_{\omega\to 0}p(y_{k}\mid x,\omega). \tag{152}\]
Plugging Eq. 150 and 151 into Eq. 148,
\[p_{{}_{S}}\left(y_{k}\mid x;\alpha\right) =\frac{1}{K\sqrt{2\pi\alpha\frac{1}{K}}}\exp\left(\frac{-\left[ \frac{y_{k}}{K}-\alpha\left(\delta_{kx}-\frac{1}{K}\right)\right]^{2}}{2 \alpha\frac{1}{K}}\right) \tag{153}\] \[=\frac{1}{\sqrt{2\pi\alpha K}}\exp\left(\frac{-\left[y_{k}- \alpha\left(K\delta_{kx}-1\right)\right]^{2}}{2\alpha K}\right)\] (154) \[=\mathcal{N}\left(\alpha\left(K\delta_{kx}-1\right),\alpha K \right). \tag{155}\]
Restoring the superscript,
\[p_{{}_{S}}\left(y^{(d)}\mid x^{(d)};\alpha\right)=\mathcal{N} \left(\alpha\left(K\mathbf{e}_{x^{(d)}}-\mathbf{1}\right),\alpha K\mathbf{I} \right), \tag{156}\]
where \(\mathbf{1}\) is a vector of ones, \(\mathbf{I}\) is the identity matrix and \(\mathbf{e}_{j}\in\mathbb{R}^{K}\) is the projection from the class index \(j\) to the length \(K\) one-hot vector defined by \((\mathbf{e}_{j})_{k}=\delta_{jk}\), and therefore
\[p_{{}_{S}}\left(\mathbf{y}\mid\mathbf{x};\alpha\right)=\mathcal{ N}\left(\mathbf{y}\mid\alpha\left(K\mathbf{e}_{\mathbf{x}}-\mathbf{1}\right), \alpha K\mathbf{I}\right), \tag{157}\]
where \(\mathbf{e}_{\mathbf{x}}\stackrel{{\rm def}}{{=}}(\mathbf{e}_{x^{ (1)}},\ldots,\mathbf{e}_{x^{(D)}})\in\mathbb{R}^{KD}\).
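Despite the lengthy derivation, sampling from this sender distribution is simple. Below is a NumPy sketch (our illustration) of Eq. 157:

```
import numpy as np

def sample_sender(x, K, alpha, rng):
    # Eq. 157: y ~ N(alpha (K e_x - 1), alpha K I).
    # x: (D,) integer class indices in {1..K}.
    e_x = np.eye(K)[x - 1]                  # one-hot rows, shape (D, K)
    mean = alpha * (K * e_x - 1.0)
    return mean + np.sqrt(alpha * K) * rng.standard_normal((x.size, K))

rng = np.random.default_rng(0)
print(sample_sender(np.array([1, 3]), K=4, alpha=2.0, rng=rng))
```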
### Receiver Distribution \(p_{{}_{R}}(\cdot\mid\mathbf{\theta};t,\alpha)\)
Substituting Eq. 127 and Eq. 157 into Eq. 4 gives the following receiver distribution for dimension \(d\):
\[p_{{}_{R}}^{(d)}(y^{(d)}\mid\mathbf{\theta};t,\alpha) =\sum_{k=1}^{K}p_{{}_{O}}^{(d)}(k\mid\mathbf{\theta};t)\mathcal{N} \left(\alpha\left(K\mathbf{e}_{k}-\mathbf{1}\right),\alpha K\mathbf{I}\right), \tag{158}\] \[p_{{}_{R}}(\mathbf{y}\mid\mathbf{\theta};t,\alpha) =\prod_{d=1}^{D}p_{{}_{R}}^{(d)}(y^{(d)}\mid\mathbf{\theta};t,\alpha). \tag{159}\]
### Bayesian Update Function \(h(\mathbf{\theta}_{i-1},\mathbf{y},\alpha)\)
Recall from Section 6.1 that \(\left(\theta_{i-1}\right)_{k}^{(d)}\) is the probability assigned to \(x^{(d)}=k\) by \(p(x^{(d)}\mid\theta_{i-1})\). Dropping the superscript and returning to the count distribution \(p(c\mid x,\omega)\) defined in Eq. 133, the posterior probability that \(x=k\) after observing \(c\) is
\[p(k\mid c;\omega)=\frac{p(c\mid k;\omega)(\theta_{i-1})_{k}}{\sum_{k^{\prime}= 1}^{K}p(c\mid k^{\prime};\omega)(\theta_{i-1})_{k^{\prime}}}. \tag{160}\]
Substituting Eq. 135 into Eq. 160 and cancelling terms in the numerator and denominator,
\[p(k\mid c;\omega) =\frac{\left[\frac{1-\omega}{K}\right]^{m-c_{k}}\left[\frac{1- \omega}{K}+\omega\right]^{c_{k}}(\theta_{i-1})_{k}}{\sum_{k^{\prime}=1}^{K} \left[\frac{1-\omega}{K}\right]^{m-c_{k^{\prime}}}\left[\frac{1-\omega}{K}+ \omega\right]^{c_{k^{\prime}}}(\theta_{i-1})_{k^{\prime}}} \tag{161}\] \[=\frac{\left[\frac{1-\omega}{K}\right]^{m}\left[1+\frac{\omega K} {1-\omega}\right]^{c_{k}}(\theta_{i-1})_{k}}{\left[\frac{1-\omega}{K}\right]^ {m}\sum_{k^{\prime}=1}^{K}\left[1+\frac{\omega K}{1-\omega}\right]^{c_{k^{ \prime}}}(\theta_{i-1})_{k^{\prime}}}\] (162) \[=\frac{\left[1+\frac{\omega K}{1-\omega}\right]^{c_{k}}(\theta_{i -1})_{k}}{\sum_{k^{\prime}=1}^{K}\left[1+\frac{\omega K}{1-\omega}\right]^{c_ {k^{\prime}}}(\theta_{i-1})_{k^{\prime}}}\] (163) \[=\frac{\xi^{c_{k}}(\theta_{i-1})_{k}}{\sum_{k^{\prime}=1}^{K}\xi ^{c_{k^{\prime}}}(\theta_{i-1})_{k^{\prime}}}. \tag{164}\]
Now define
\[h(\theta,y)\stackrel{{\mathrm{def}}}{{=}}\frac{e^{y}\theta}{\sum _{k=1}^{K}e^{y_{k}}\theta_{k}}. \tag{165}\]
Substituting the definition of \(y_{k}\) from Eq. 141 into the definition of \(h(\theta,y)\) from Eq. 165,
\[(h(\theta_{i-1},y))_{k} =\frac{\exp(-\frac{m}{K}\ln\xi)\exp(c_{k}\ln\xi)(\theta_{i-1})_{k }}{\exp(-\frac{m}{K}\ln\xi)\sum_{k^{\prime}=1}^{K}\exp(c_{k^{\prime}}\ln\xi) (\theta_{i-1})_{k^{\prime}}} \tag{166}\] \[=\frac{\exp(\ln\xi^{c_{k}})(\theta_{i-1})_{k}}{\sum_{k^{\prime}= 1}^{K}\exp(\ln\xi^{c_{k^{\prime}}})(\theta_{i-1})_{k^{\prime}}}\] (167) \[=\frac{\xi^{c_{k}}(\theta_{i-1})_{k}}{\sum_{k^{\prime}=1}^{K}\xi ^{c_{k^{\prime}}}(\theta_{i-1})_{k^{\prime}}}, \tag{168}\]
and hence, from Eq. 164,
\[h(\theta_{i-1},y)_{k}=p(k\mid c;\omega). \tag{170}\]
Therefore in the limit \(m\to\infty\) with \(m\omega^{2}=\alpha\), the stochastic parameter update from \(\theta_{i-1}\) to \(\theta_{i}\) induced by drawing \(c\) from \(\mathrm{Multi}(m,a(x,\omega))\) can be sampled by first drawing \(y\) from \(p_{{}_{S}}\left(\cdot\mid x,\alpha\right)\) then setting \(\theta_{i}=h(\theta_{i-1},y)\). Hence the Bayesian update function is
\[h(\boldsymbol{\theta}_{i-1},\mathbf{y},\alpha)\stackrel{{ \text{def}}}{{=}}\frac{e^{\mathbf{y}}\boldsymbol{\theta}_{i-1}}{\sum_{k=1}^{K }e^{\mathbf{y}_{k}}(\boldsymbol{\theta}_{i-1})_{k}}, \tag{171}\]
where the redundant parameter \(\alpha\) has been included for consistency with the update function for continuous data.
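Note that Eq. 171 is just a softmax of \(\mathbf{y}+\ln\boldsymbol{\theta}_{i-1}\), applied independently to each dimension. A NumPy sketch (ours) with the usual max-subtraction for numerical stability:

```
import numpy as np

def bayesian_update(theta, y):
    # Eq. 171: theta_i = softmax(y + ln theta_{i-1}) over the class axis.
    # theta: (D, K) probabilities; y: (D, K) sender sample.
    logits = y + np.log(theta)
    logits -= logits.max(axis=-1, keepdims=True)  # stabilise the softmax
    w = np.exp(logits)
    return w / w.sum(axis=-1, keepdims=True)
```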
### Bayesian Update Distribution \(p_{{}_{U}}(\cdot\mid\boldsymbol{\theta}_{i-1},\mathbf{x};\alpha)\)
Substituting Eqs. 157 and 171 into Eq. 6,
\[p_{{}_{U}}(\boldsymbol{\theta}\mid\boldsymbol{\theta}_{i-1}, \mathbf{x};\alpha)=\underset{\mathcal{N}(\mathbf{y}\mid\alpha(K\mathbf{e}_{ \mathbf{x}}-\mathbf{1}),\alpha KI)}{\mathbb{E}}\delta\left(\boldsymbol{ \theta}-\frac{e^{\mathbf{y}}\boldsymbol{\theta}_{i-1}}{\sum_{k=1}^{K}e^{ \mathbf{y}_{k}}(\boldsymbol{\theta}_{i-1})_{k}}\right). \tag{172}\]
### Additive Accuracies
It follows from the definition of the update distribution that if \(y_{a}\) is drawn from \(p_{{}_{S}}\left(\cdot\mid x;\alpha_{a}\right)\) then \(\theta_{i-1}=h(y_{a},\theta_{i-2})\) is drawn from \(p(\cdot\mid\theta_{i-2},x;\alpha_{a})\). Furthermore, if \(y_{b}\) is drawn from \(p_{{}_{S}}\left(\cdot\mid x;\alpha_{b}\right)\) then \(\theta_{i}=h(y_{b},\theta_{i-1})=h(y_{b},h(y_{a},\theta_{i-2}))\) is drawn from \(\mathbb{E}_{p_{{}_{U}}(\theta_{i-1}\mid\theta_{i-2},x;\alpha_{a})}\,p_{{}_{U}} (\theta_{i}\mid\theta_{i-1},x;\alpha_{b})\). Substituting the definition of \(h\) from Eqn 165,
\[h(y_{b},h(y_{a},\theta_{i-2})) =\frac{\exp(y_{b})\frac{\exp(y_{a})\theta_{i-2}}{\sum_{k^{\prime}=1}^{K}\exp\left((y_{a})_{k^{\prime}}\right)(\theta_{i-2})_{k^{\prime}}}}{\sum_{k=1}^{K}\exp\left((y_{b})_{k}\right)\frac{\exp\left((y_{a})_{k}\right)(\theta_{i-2})_{k}}{\sum_{k^{\prime}=1}^{K}\exp\left((y_{a})_{k^{\prime}}\right)(\theta_{i-2})_{k^{\prime}}}} \tag{173}\] \[=\frac{\exp(y_{b})\exp(y_{a})\theta_{i-2}}{\sum_{k=1}^{K}\exp\left((y_{b})_{k}\right)\exp\left((y_{a})_{k}\right)(\theta_{i-2})_{k}}\] (174) \[=\frac{\exp(y_{a}+y_{b})\theta_{i-2}}{\sum_{k=1}^{K}\exp\left((y_{a}+y_{b})_{k}\right)(\theta_{i-2})_{k}}\] (175) \[=h(y_{a}+y_{b},\theta_{i-2}). \tag{176}\]
From Eq. 156,
\[y_{a} \sim\mathcal{N}\left(\alpha_{a}\left(K\mathbf{e}_{x}-\mathbf{1} \right),\alpha_{a}K\boldsymbol{I}\right), \tag{177}\] \[y_{b} \sim\mathcal{N}\left(\alpha_{b}\left(K\mathbf{e}_{x}-\mathbf{1} \right),\alpha_{b}K\boldsymbol{I}\right) \tag{178}\]
and hence, from Identity 61
\[y_{a}+y_{b}\sim\mathcal{N}\left(\left(\alpha_{a}+\alpha_{b}\right)\left(K \mathbf{e}_{\mathbf{x}}-\mathbf{1}\right),(\alpha_{a}+\alpha_{b})K \boldsymbol{I}\right). \tag{180}\]
Therefore, if \(y\) is drawn from \(p_{{}_{S}}\left(\cdot\mid x;\alpha_{a}+\alpha_{b}\right)\) and \(\theta_{i}=h(y,\theta_{i-2})\) then \(\theta_{i}\) is drawn from
\(\mathbb{E}_{p_{{}_{U}}\left(\theta_{i-1}\mid\theta_{i-2},x;\alpha_{a}\right)}\, p_{{}_{U}}(\theta_{i}\mid\theta_{i-1},x;\alpha_{b})\) and
\[\underset{p_{{}_{U}}\left(\boldsymbol{\theta}_{i-1}\mid\boldsymbol{\theta}_{i- 2},\mathbf{x};\alpha_{a}\right)}{\mathbb{E}}p_{{}_{U}}(\boldsymbol{\theta}_{i }\mid\boldsymbol{\theta}_{i-1},\mathbf{x};\alpha_{b})=p_{{}_{U}}(\boldsymbol{ \theta}_{i}\mid\boldsymbol{\theta}_{i-2},\mathbf{x};\alpha_{a}+\alpha_{b}), \tag{181}\]
as required.
### Accuracy Schedule \(\beta(t)\)
As with continuous data, the guiding heuristic for \(\beta(t)\) was to decrease the expected entropy of the input distribution linearly with \(t\). In the continuous case, where the entropy is a deterministic function of \(\sigma^{2}\), applying the heuristic was straightforward; in the discrete case an explicit computation of \(\mathbb{E}_{p_{{}_{F}}\left(\boldsymbol{\theta}\mid x;t\right)}\,H\left[p_{{} _{I}}(\mathbf{x}\mid\boldsymbol{\theta})\right]\) would be needed. We were unable to derive an analytic expression for this term, but found that
\[\beta(t)=t^{2}\beta(1) \tag{182}\]
was a reasonable approximation, with \(\beta(1)\) determined empirically for each experiment. Therefore
\[\alpha(t)=\frac{d\beta(t)}{dt}=2\beta(1)t. \tag{183}\]
### Bayesian Flow Distribution \(p_{{}_{F}}(\cdot\mid\mathbf{x};t)\)
Substituting Eq. 172 into Eq. 10,
\[p_{{}_{F}}(\boldsymbol{\theta}\mid\mathbf{x};t)=\underset{\mathcal{N}\left( \mathbf{y}\mid\beta(t)(K\mathbf{e_{x}}-\mathbf{1}),\beta(t)KI\right)}{\mathbb{E }}\,\delta\left(\boldsymbol{\theta}-\frac{e^{\mathbf{y}}\boldsymbol{\theta}_{ 0}}{\sum_{k=1}^{K}e^{\mathbf{y}_{k}}(\boldsymbol{\theta}_{0})_{k}}\right). \tag{184}\]
Since the prior is uniform with \(\boldsymbol{\theta}_{0}=\frac{\mathbf{1}}{\boldsymbol{K}}\), this reduces to
\[p_{{}_{F}}(\boldsymbol{\theta}\mid\mathbf{x};t)=\underset{\mathcal{N}\left( \mathbf{y}\mid\beta(t)(K\mathbf{e_{x}}-\mathbf{1}),\beta(t)KI\right)}{\mathbb{ E}}\,\delta\left(\boldsymbol{\theta}-\text{softmax}(\mathbf{y})\right), \tag{185}\]
which can be sampled by drawing \(\mathbf{y}\) from \(\mathcal{N}\left(\beta(t)\left(K\mathbf{e_{x}}-\mathbf{1}\right),\beta(t)KI \right)\) then setting \(\boldsymbol{\theta}=\text{softmax}(\mathbf{y})\).
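In code, this two-line sampler might look as follows (a minimal NumPy sketch of ours; the helper name `sample_flow`, the argument `beta1`, and 0-indexed classes are our own conventions, and the quadratic schedule \(\beta(t)=\beta(1)t^{2}\) from Eq. 182 is assumed):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sample_flow(x, t, K, beta1, rng=np.random.default_rng()):
    # Draw theta ~ p_F(. | x; t) for discrete data (Eq. 185):
    # y ~ N(beta(t)(K e_x - 1), beta(t) K I), then theta = softmax(y).
    beta = beta1 * t**2
    e_x = np.eye(K)[x]                 # one-hot encoding, shape (D, K)
    y = beta * (K * e_x - 1.0) + rng.normal(scale=np.sqrt(beta * K), size=e_x.shape)
    return softmax(y)

theta = sample_flow(x=np.array([0, 2, 1]), t=0.5, K=3, beta1=3.0)
```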
The sender distribution for discrete data can therefore be interpreted as a source of softmax logits for the Bayesian flow distribution; the higher the sender accuracy \(\alpha\) is, the larger in expectation the logits corresponding to \(\mathbf{x}\) will be in \(\mathbf{y}\), hence the closer \(\boldsymbol{\theta}\) will be to \(\mathbf{e_{x}}\) and the more information the network will gain about \(\mathbf{x}\).
### Reconstruction Loss \(L^{r}(\mathbf{x})\)
The reconstruction loss for discrete data is
\[L^{r}(\mathbf{x})=-\underset{p_{{}_{F}}\left(\boldsymbol{\theta}\mid\mathbf{x },1\right)}{\mathbb{E}}\ln p_{{}_{O}}(\mathbf{x}\mid\boldsymbol{\theta};1). \tag{186}\]
### Discrete-time Loss \(L^{n}(\mathbf{x})\)
From Eqs. 156 and 158,
\[D_{KL}\left(p_{{}_{S}}\left(\cdot\mid x^{(d)};\alpha\right)\parallel p _{{}_{R}}^{(d)}(\cdot\mid\boldsymbol{\theta};t,\alpha)\right) \tag{187}\] \[\qquad=D_{KL}\left(\mathcal{N}\left(\alpha\left(K\mathbf{e}_{x^{( d)}}-\mathbf{1}\right),\alpha K\boldsymbol{I}\right)\parallel\sum_{k=1}^{K}p_{{}_{O}}^{ (d)}(k\mid\boldsymbol{\theta};t)\mathcal{N}\left(\alpha\left(K\mathbf{e}_{k} -\mathbf{1}\right),\alpha K\boldsymbol{I}\right)\right). \tag{188}\]
Therefore, substituting into Eq. 24,
\[L^{n}(\mathbf{x})=n\underset{i\sim U\{1,n\},p_{{}_{F}}(\boldsymbol{\theta}\mid\mathbf{x};t_{i-1}),\mathcal{N}\left(\mathbf{y}\mid\alpha_{i}\left(K\mathbf{e}_{\mathbf{x}}-\mathbf{1}\right),\alpha_{i}K\boldsymbol{I}\right)}{\mathbb{E}}\Bigg[\ln\mathcal{N}\left(\mathbf{y}\mid\alpha_{i}\left(K\mathbf{e}_{\mathbf{x}}-\mathbf{1}\right),\alpha_{i}K\boldsymbol{I}\right) \tag{189}\] \[\qquad\qquad-\sum_{d=1}^{D}\ln\left(\sum_{k=1}^{K}p_{{}_{O}}^{(d)}(k\mid\boldsymbol{\theta};t_{i-1})\mathcal{N}\left(y^{(d)}\mid\alpha_{i}\left(K\mathbf{e}_{k}-\mathbf{1}\right),\alpha_{i}K\boldsymbol{I}\right)\right)\Bigg], \tag{190}\]
Figure 9: **Accuracy schedule vs. expected entropy for discrete data**. The surface plot shows the expectation over the parameter distribution \(p(\theta\mid x;\beta)\) of the entropy of the categorical input distribution \(p(x\mid\theta)\) for \(K=2\) to \(30\) and \(\beta=0.01\) to \(4\). The red and cyan lines highlight the entropy curves for \(2\) and \(27\) classes, the two values that occur in our experiments. The red and cyan stars show the corresponding values we chose for \(\beta(1)\).
where, from Eq. 182,
\[\alpha_{i} =\beta(t_{i})-\beta(t_{i-1}) \tag{191}\] \[=\beta(1)\left(\left(\frac{i}{n}\right)^{2}-\left(\frac{i-1}{n} \right)^{2}\right)\] (192) \[=\beta(1)\left(\frac{2i-1}{n^{2}}\right). \tag{193}\]
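As a quick consistency check (our sketch), these per-step accuracies telescope, so they sum to the total accuracy \(\beta(1)=\beta(t_{n})-\beta(t_{0})\):

```python
beta1, n = 3.0, 20
alphas = [beta1 * (2 * i - 1) / n**2 for i in range(1, n + 1)]
assert abs(sum(alphas) - beta1) < 1e-12   # sum_i alpha_i = beta(1)
```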
Figure 10: **Bayesian flow for discrete data**. For \(K=3\), the input distribution parameters \(\boldsymbol{\theta}=(\theta_{1},\theta_{2},\theta_{3})\) can be visualised as points on the 2-simplex, with the data \(x\) corresponding to the bottom left corner. For the accuracy schedule \(\beta(t)\) from Eq. 182, the white line shows a single input parameter trajectory starting from \(\boldsymbol{\theta}_{0}=\left(\frac{1}{3},\frac{1}{3},\frac{1}{3}\right)\) and evolving under the Bayesian update distribution \(p_{{}_{U}}(\boldsymbol{\theta}_{i}\mid\boldsymbol{\theta}_{i-1};x,\beta(t_{ i})-\beta(t_{i-1}))\) from Eq. 172, superimposed on log-scale heatmaps of the Bayesian flow distribution \(p_{{}_{F}}(\boldsymbol{\theta}\mid x;t)\) from Eq. 185, plotted at regular intervals from \(t=0.02\) to \(1\).
### Continuous-time Loss \(L^{\infty}(\mathbf{x})\)
Let
\[\mathbf{v}\stackrel{{\mathrm{def}}}{{=}}\frac{\mathbf{y}}{\alpha}+1, \tag{194}\]
and apply Identity 51 to see that if
\[y^{(d)}\sim p_{{}_{S}}\left(\cdot\mid x^{(d)};\alpha\right)=\mathcal{N}\left( \alpha(K\mathbf{e}_{x^{(d)}}-\mathbf{1}),\alpha K\boldsymbol{I}\right), \tag{195}\]
then
\[v^{(d)}\sim\mathcal{N}\left(K\mathbf{e}_{x^{(d)}},\frac{K}{\alpha}\boldsymbol{ I}\right), \tag{196}\]
and similarly if
\[y^{(d)}\sim p_{{}_{R}}^{(d)}(\cdot\mid\boldsymbol{\theta};t,\alpha)=\sum_{k=1 }^{K}p_{{}_{O}}^{(d)}(k\mid\boldsymbol{\theta};t)\mathcal{N}\left(y^{(d)}\mid \alpha\left(K\mathbf{e}_{k}-\mathbf{1}\right),\alpha K\boldsymbol{I}\right), \tag{197}\]
then
\[v^{(d)} \sim\sum_{k=1}^{K}p_{{}_{O}}^{(d)}(k\mid\boldsymbol{\theta};t) \mathcal{N}\left(K\mathbf{e}_{k},\frac{K}{\alpha}\boldsymbol{I}\right) \tag{198}\] \[=\sum_{k=1}^{K}p_{{}_{O}}^{(d)}(k\mid\boldsymbol{\theta};t) \delta(\cdot-K\mathbf{e}_{k})*\mathcal{N}\left(\mathbf{0},\frac{K}{\alpha} \boldsymbol{I}\right). \tag{199}\]
Figure 11: **Bayesian flow for binary data**. For the input probability \(p_{1}\) of class one, the plot shows several parameter trajectories starting from \(p_{1}=0.5\) at \(t=0\) and evolving under the Bayesian update distribution to \(t=1\), superimposed on a log-scale heatmap of the Bayesian flow distribution. \(\beta(1)=4\) in this plot. Note that both here and in Figure 10 the convergence towards the data appears slower and noisier than the equivalent trajectories for continuous data in Figure 4. This is a fundamental consequence of discreteness: since all points in \(\mathcal{X}\) are equidistant the input distributions cannot concentrate on values close to \(\mathbf{x}\) as the trajectories progress.
The Kullback-Leibler divergence is invariant under affine transformations of variables, hence
\[D_{KL} \left(p_{{}_{S}}\left(\cdot\mid x^{(d)};\alpha\right)\parallel p_{{}_{R}}^{(d)}(\cdot\mid\boldsymbol{\theta};t,\alpha)\right) \tag{200}\] \[=D_{KL}\left(\mathcal{N}\left(K\mathbf{e}_{x^{(d)}},\frac{K}{\alpha}\boldsymbol{I}\right)\parallel\sum_{k=1}^{K}p_{{}_{O}}^{(d)}(k\mid\boldsymbol{\theta};t)\delta(\cdot-K\mathbf{e}_{k})*\mathcal{N}\left(\mathbf{0},\frac{K}{\alpha}\boldsymbol{I}\right)\right). \tag{201}\]
Now set \(C=K\), \(g(x^{(d)})=K\mathbf{e}_{x^{(d)}}\) and
\[P^{(d)}(\boldsymbol{\theta},t)=\sum_{k=1}^{K}p_{{}_{O}}^{(d)}(k\mid\boldsymbol{\theta};t)\delta(\cdot-K\mathbf{e}_{k}), \tag{202}\]
which has finite variance and the following finite expectation
\[E[P^{(d)}(\boldsymbol{\theta},t)]=K\mathbf{\hat{e}}^{(d)}( \boldsymbol{\theta},t), \tag{203}\]
where
\[\mathbf{\hat{e}}^{(d)}(\boldsymbol{\theta},t)\stackrel{{\text{ def}}}{{=}}\sum_{k=1}^{K}p_{{}_{O}}^{(d)}(k\mid\boldsymbol{\theta};t)\mathbf{e}_{k}. \tag{204}\]
The conditions in Eq. 29 are therefore satisfied and Eqs. 203 and 183 can be substituted into Eq. 41 to yield
\[L^{\infty}(\mathbf{x})=K\beta(1)\underset{t\sim U(0,1),p_{{}_{F}} (\boldsymbol{\theta}|\mathbf{x},t)}{\mathbb{E}}t\|\mathbf{e}_{\mathbf{x}}- \mathbf{\hat{e}}(\boldsymbol{\theta},t)\|^{2}, \tag{205}\]
where
\[\mathbf{\hat{e}}(\boldsymbol{\theta},t)\stackrel{{\text{def}}}{{= }}\left(\mathbf{\hat{e}}^{(1)}(\boldsymbol{\theta},t),\ldots,\mathbf{\hat{e}} ^{(D)}(\boldsymbol{\theta},t)\right). \tag{206}\]
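A single-sample Monte Carlo estimate of this loss can be sketched as follows (our illustration, reusing the `sample_flow` helper sketched earlier; `output_probs` is a hypothetical stand-in for the network's output distribution, returning a \((D,K)\) array of probabilities, which is exactly \(\mathbf{\hat{e}}(\boldsymbol{\theta},t)\)):

```python
import numpy as np

def cts_time_loss_discrete(x, output_probs, K, beta1, rng=np.random.default_rng()):
    # L_inf(x) = K beta(1) E_{t, theta}[ t || e_x - e_hat(theta, t) ||^2 ]  (Eq. 205)
    t = rng.uniform()
    theta = sample_flow(x, t, K, beta1, rng)   # theta ~ p_F(. | x; t)
    e_hat = output_probs(theta, t)             # expected one-hot under p_O, shape (D, K)
    e_x = np.eye(K)[x]
    return K * beta1 * t * np.sum((e_x - e_hat) ** 2)
```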
### Pseudocode
Pseudocode for evaluating the discrete-time loss \(L^{n}(\mathbf{x})\) and continuous-time loss \(L^{\infty}(\mathbf{x})\) for discrete data is presented in Algorithms 7 and 8, while sample generation is presented in Algorithm 9.
```
function discrete_output_distribution(\(\boldsymbol{\theta}\in[0,1]^{KD}\), \(t\in[0,1]\))
    Input \((\boldsymbol{\theta},t)\) to network, receive \(\Psi(\boldsymbol{\theta},t)\) as output
    for \(d\in\{1,D\}\) do
        if \(K=2\) then
            \(p_{{}_{O}}^{(d)}(1\mid\boldsymbol{\theta};t)\leftarrow\sigma\left(\Psi^{(d)}(\boldsymbol{\theta},t)\right)\)
            \(p_{{}_{O}}^{(d)}(2\mid\boldsymbol{\theta};t)\gets 1-p_{{}_{O}}^{(d)}(1\mid\boldsymbol{\theta};t)\)
        else
            \(p_{{}_{O}}^{(d)}(\cdot\mid\boldsymbol{\theta};t)\leftarrow\text{softmax}(\Psi^{(d)}(\boldsymbol{\theta},t))\)
        end if
    end for
    Return \(p_{{}_{O}}(\cdot\mid\boldsymbol{\theta};t)\)
end function
```
```
Require: \(\beta(1)\in\mathbb{R}^{+}\), number of steps \(n\in\mathbb{N}\), number of classes \(K\in\mathbb{N}\)
Input: discrete data \(\mathbf{x}\in\{1,K\}^{D}\)
\(i\sim U\{1,n\}\)
\(t\leftarrow(i-1)/n\)
\(\beta\leftarrow\beta(1)t^{2}\)
\(\mathbf{y}^{\prime}\sim\mathcal{N}\left(\beta\left(K\mathbf{e_{x}}-\mathbf{1}\right),\beta K\boldsymbol{I}\right)\)
\(\boldsymbol{\theta}\leftarrow\text{softmax}(\mathbf{y}^{\prime})\)
\(\mathbf{p}_{{}_{O}}(\cdot\mid\boldsymbol{\theta};t)\leftarrow\text{discrete\_output\_distribution}(\boldsymbol{\theta},t)\)
\(\alpha\leftarrow\beta(1)\left(\frac{2i-1}{n^{2}}\right)\)
\(\mathbf{y}\sim\mathcal{N}\left(\alpha\left(K\mathbf{e_{x}}-\mathbf{1}\right),\alpha K\boldsymbol{I}\right)\)
\(L^{n}(\mathbf{x})\gets n\left[\ln\mathcal{N}\left(\mathbf{y}\mid\alpha\left(K\mathbf{e_{x}}-\mathbf{1}\right),\alpha K\boldsymbol{I}\right)-\sum_{d}\ln\left(\sum_{k}p_{{}_{O}}^{(d)}(k\mid\boldsymbol{\theta};t)\mathcal{N}\left(y^{(d)}\mid\alpha\left(K\mathbf{e_{k}}-\mathbf{1}\right),\alpha K\boldsymbol{I}\right)\right)\right]\)
```
**Algorithm 7** Discrete-Time Loss \(L^{n}(\mathbf{x})\) for Discrete Data
```
Require: \(\beta(1)\in\mathbb{R}^{+}\), number of classes \(K\in\mathbb{N}\)
Input: discrete data \(\mathbf{x}\in\{1,K\}^{D}\)
\(t\sim U(0,1)\)
\(\beta\leftarrow\beta(1)t^{2}\)
\(\mathbf{y}\sim\mathcal{N}\left(\beta\left(K\mathbf{e_{x}}-\mathbf{1}\right),\beta K\boldsymbol{I}\right)\)
\(\boldsymbol{\theta}\leftarrow\text{softmax}(\mathbf{y})\)
\(\mathbf{p}_{{}_{O}}(\cdot\mid\boldsymbol{\theta};t)\leftarrow\text{discrete\_output\_distribution}(\boldsymbol{\theta},t)\)
\(\mathbf{\hat{e}}(\boldsymbol{\theta},t)\leftarrow\left(\sum_{k}p_{{}_{O}}^{(1)}(k\mid\boldsymbol{\theta};t)\mathbf{e_{k}},\ldots,\sum_{k}p_{{}_{O}}^{(D)}(k\mid\boldsymbol{\theta};t)\mathbf{e_{k}}\right)\)
\(L^{\infty}(\mathbf{x})\gets K\beta(1)t\left\|\mathbf{e_{x}}-\mathbf{\hat{e}}(\boldsymbol{\theta},t)\right\|^{2}\)
```
**Algorithm 8** Continuous-Time Loss \(L^{\infty}(\mathbf{x})\) for Discrete Data
```
Require: \(\beta(1)\in\mathbb{R}^{+}\), number of steps \(n\in\mathbb{N}\), number of classes \(K\in\mathbb{N}\)
\(\boldsymbol{\theta}\leftarrow\left(\frac{\mathbf{1}}{K}\right)\)
for \(i=1\) to \(n\) do
    \(t\leftarrow\frac{i-1}{n}\)
    \(\mathbf{k}\sim\text{discrete\_output\_distribution}(\boldsymbol{\theta},t)\)
    \(\alpha\leftarrow\beta(1)\left(\frac{2i-1}{n^{2}}\right)\)
    \(\mathbf{y}\sim\mathcal{N}\left(\alpha\left(K\mathbf{e_{k}}-\mathbf{1}\right),\alpha K\boldsymbol{I}\right)\)
    \(\boldsymbol{\theta}^{\prime}\leftarrow e^{\mathbf{y}}\boldsymbol{\theta}\)
    \(\boldsymbol{\theta}\leftarrow\frac{\boldsymbol{\theta}^{\prime}}{\sum_{k}\boldsymbol{\theta}_{k}^{\prime}}\)
end for
\(\mathbf{k}\sim\text{discrete\_output\_distribution}(\boldsymbol{\theta},1)\)
Return \(\mathbf{k}\)
```
**Algorithm 9** Sample Generation for Discrete Data
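For readers who prefer executable code, the following is our own NumPy rendering of Algorithm 9 (not the authors' implementation); `network` is a hypothetical stand-in for `discrete_output_distribution`, returning a \((D,K)\) array of probabilities.

```python
import numpy as np

def generate_discrete(network, D, K, n, beta1, rng=np.random.default_rng()):
    # Algorithm 9: ancestral sampling for discrete data.
    theta = np.full((D, K), 1.0 / K)                       # uniform prior
    for i in range(1, n + 1):
        t = (i - 1) / n
        probs = network(theta, t)
        k = np.array([rng.choice(K, p=p) for p in probs])  # k ~ p_O(. | theta; t)
        alpha = beta1 * (2 * i - 1) / n**2
        e_k = np.eye(K)[k]
        y = alpha * (K * e_k - 1) + rng.normal(scale=np.sqrt(alpha * K), size=(D, K))
        theta = np.exp(y) * theta                          # Bayesian update (Eq. 171)
        theta /= theta.sum(axis=-1, keepdims=True)
    probs = network(theta, 1.0)
    return np.array([rng.choice(K, p=p) for p in probs])

# Smoke test with a dummy uniform "network":
sample = generate_discrete(lambda th, t: np.full_like(th, 1 / th.shape[-1]),
                           D=8, K=27, n=10, beta1=0.75)
```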
## 7 Experiments
We evaluated Bayesian Flow Networks (BFNs) on the following generative benchmarks: CIFAR-10 (32\(\times\)32 8-bit color images), dynamically binarized MNIST (28\(\times\)28 binarized images of handwritten digits) and text8 (length 256 character sequences with a size 27 alphabet). The continuous (Sec. 4) and discretised (Sec. 5) versions of the system were compared on CIFAR-10, while the discrete version (Sec. 6) was applied to the other datasets. In all cases, the network was trained using the continuous-time loss \(L^{\infty}(\mathbf{x})\), with the discrete-time loss \(L^{n}(\mathbf{x})\) evaluated for testing only, with various values of \(n\). Standard network architectures and training algorithms were used throughout to allow for direct comparison with existing methods. Because the focus of this paper is on probabilistic modelling rather than image generation, FID scores were not calculated. However, examples of generated data are provided for all experiments.
\begin{table}
\begin{tabular}{l c c} \hline \hline Model & Dynamically Binarized MNIST & CIFAR-10 \\ \hline Improved DDPM [28] & & 2.94 \\ NVAE [48] & 78.01 & 2.91 \\ PixelVAE++\({}^{\dagger}\)[35] & 78.00 & 2.90 \\ Locally Masked PixelCNN\({}^{\dagger}\)[15] & 77.58 & 2.89 \\ Image Transformer\({}^{\dagger}\)[30] & & 2.89 \\ DDPM++ [16] & & 2.88 \\ LSGM [49] & & 2.87 \\ VDVAE [3] & & 2.87 \\ Sparse Transformer\({}^{\dagger}\)[4] & & 2.80 \\ Reflected Diffusion [23] & & 2.68 \\ VDM [17] & & 2.65 \\ ARDM-Upscale 4 [13] & & 2.64 \\ \hline
**BFN** & 77.87 & 2.66 \\ \hline CR-NVAE* [40] & 76.93 & 2.51 \\ VDM* [17] & & 2.49 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Comparison of dynamically binarized MNIST and CIFAR-10 results with other methods**. The best published results for both datasets (*) use data augmentation for regularization. Results for models marked with (\({}^{\dagger}\)) are exact values; all other results are upper bounds.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \(n\)-steps & 10 & 25 & 50 & 100 & 784 & 1000 & \(\infty\) \\ \hline NPI & 95.21 & 84.40 & 81.06 & 79.46 & 78.02 & 78.07 & 77.87 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Dynamically binarized MNIST results**. NPI is nats per image averaged over 2,000 passes through the test set with \(L^{n}(\mathbf{x})\) or \(L^{\infty}(\mathbf{x})\) sampled once per test image per pass. The reconstruction loss \(L^{r}(\mathbf{x})\) (included in NPI) was 0.46. 784 is the total number of pixels per image, hence the number of steps required to generate an image with an autoregressive model.
### Dynamically Binarized MNIST
Data.The binarized MNIST benchmark data was originally created from the MNIST dataset of handwritten images [20] by treating the grayscale pixel intensities as Bernoulli probabilities and sampling a particular binarization [36] which is held fixed during training. In recent years, a variant of the same benchmark has become more popular, with a new binarization sampled from the probabilities for every training batch. The two are not comparable, as the latter, which we refer to as dynamically binarized MNIST, effectively has a larger training set and hence gives better test set performance. All our experiments and the results referenced from the literature use dynamically binarized MNIST.
Setup.The network architecture was based on a U-Net introduced for diffusion models [28]. Starting from the hyperparameters used for the CIFAR-10 dataset (see Appendix A in the above reference), we made the following modifications: the number of resblocks was reduced from three to two and the layer widths were reduced from \([C,2C,2C,2C]\) to \([C,2C,2C]\) with \(C=128\). Finally, the input and output of the standard network were concatenated and projected back to the output size. 600 randomly selected training images (1% of the training set) were used as a validation set. The optimiser was AdamW [22] with learning rate 0.0001, weight decay 0.01 and \((\beta_{1},\beta_{2})=(0.9,0.98)\). Dropout was used with probability 0.5, the training batch size was 512, and \(\beta(1)\) was set to 3 (see Sec. 6.8). The network was trained for 150 000 weight updates until early stopping. An exponential moving average of model parameters with a decay rate of 0.9999 was used for evaluation and sample generation. The total number of learnable parameters was approximately 25M.
Results.As can be seen from Table 1, BFN is close to state-of-the-art for this task with no data augmentation. Table 2 shows the expected inverse relationship between loss and number of steps. Direct optimisation of the \(n\)-step loss would likely lead to reduced loss for low values of \(n\); however
Figure 12: MNIST real and generated data. Samples generated with 100 steps.
we leave that for future work. One issue is that the reconstruction loss was relatively high at \(0.46\) nats per image. The obvious way to decrease this would be to increase \(\beta(1)\), but we found that doing so led to slower learning and worse performance. Along with the loss curves in Figure 14, this suggests that the accuracy schedule is suboptimal for binary data.
Figure 14: **MNIST losses against time**. The left plot shows the mean over the test set of the cts. time loss \(L^{\infty}(\mathbf{x})\) used for training for transmission time \(t\) between \(0\) and \(1\). The right plot shows the average cumulative value of \(L^{\infty}(\mathbf{x})\) up to \(t\), along with the reconstruction loss \(L^{r}(\mathbf{x})\) evaluated at \(t\) and the sum of these two losses, which would be the total loss if the transmission process halted at \(t\). Note the unevenness of \(L^{\infty}(\mathbf{x})\) against \(t\): we speculate that rescaling \(\beta(t)\) to make the loss curve more uniform could improve performance.
Figure 13: **MNIST Input and output distributions**. For two test set images the figure shows the white pixel probability at \(20\) steps evenly spaced between \(t=0\) and \(t=1/3\). Note how the input probabilities are initially uniform whereas the output distribution initially predicts a superposition of multiple digits, closely matching the per-pixel marginal prior over the training set: this supports our belief that the network learns to correct for the uniform prior in the input distribution. Also note that the output distribution is much less noisy than the input distribution, and that it changes more dramatically as new information is received (e.g. the network appears to switch from predicting a \(6\) to a \(2\) to a \(7\) for the first image). This highlights the network’s use of context to resolve ambiguity and noise in the input distribution.
### CIFAR-10
Data.Two sets of generative modelling experiments were conducted on the CIFAR-10 database [19], one at the standard bit-depth of 8, corresponding to 256 discretised bins per colour channel, and one at a reduced bit-depth of 4, corresponding to 16 bins per channel. In both cases the bins evenly partitioned the interval \([-1,1]\) and the data was pre-processed by assigning each channel intensity to the nearest bin centre, as described in Section 5. The purpose of comparing 16 and 256 bin discretisation was twofold: (1) to test the hypothesis that the advantage of training with the discretised loss from Section 5 rather than the continuous loss from Section 4 would be greater when the number of bins was lower, and (2) to test whether modelling the data at lower precision would lead to improved perceptual quality. No data augmentation, such as horizontal flips or random crops, was used on the training set.
Setup.The network architecture was essentially the same as that used for Variational Diffusion Models (VDMs [17]), including the Fourier feature inputs. The only modification was an extra input-output connection similar to the network for MNIST. In total there were approximately 31M learnable parameters. The following hyperparameters were used for all CIFAR-10 experiments: a validation set of 500 randomly selected training images (1% of the training set), the AdamW [22] optimizer with weight decay 0.01, learning rate 0.0002 and \((\beta_{1},\beta_{2})=(0.9,0.99)\), dropout with probability 0.1, training batch size of 128, \(t_{min}=1\mathrm{e}{-6}\), \([x_{min},x_{max}]=[-1,1]\), and an exponential moving average of model parameters with a decay rate of 0.9999 for evaluation and sample generation. For the 256 bin experiments \(\sigma_{1}=0.001\), while for the 16 bin experiments \(\sigma_{1}=\sqrt{0.001}\). For the networks trained with continuous loss, the reconstruction loss was measured using the discretised version of \(L^{r}(\mathbf{x})\) from Section 5.3 rather than the continuous version from Section 4.10, using a discretised Gaussian with mean equal to \(\hat{x}(\boldsymbol{\theta},1)\) and std. deviation chosen empirically to be \(\sigma_{1}\) for 256 bins and \(0.7\sigma_{1}\) for 16 bins. This ensured the results were comparable between continuous and discretised training, and consistent with the literature.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \(n\)-steps & Cts. (256 bins) & Discd. (256 bins) & Cts. (16 bins) & Discd. (16 bins) \\ \hline
10 & 6.18 & 3.91 & 1.42 & 1.16 \\
25 & 3.65 & 3.16 & 1.11 & 1.02 \\
50 & 3.10 & 2.93 & 1.03 & 0.98 \\
100 & 2.86 & 2.81 & 0.99 & 0.96 \\
250 & 2.73 & 2.73 & 0.97 & 0.94 \\
500 & 2.69 & 2.71 & 0.96 & 0.94 \\
1000 & 2.67 & 2.70 & 0.96 & 0.94 \\ \hline \(\infty\) & 2.66 & 2.68 & 0.96 & 0.94 \\ \hline \hline \(L^{r}(\mathbf{x})\) & 0.001 & 0.003 & 0.073 & 0.070 \\ \hline Updates & 5M & 5M & 250K & 1M \\ \hline \hline \end{tabular}
\end{table}
Table 3: **CIFAR-10 results**. All losses are bits per dimension (BPD) averaged over 100 passes through the test set with \(L^{n}(\mathbf{x})\) or \(L^{\infty}(\mathbf{x})\) sampled once per test image per pass. The reconstruction losses \(L^{r}(\mathbf{x})\) (included in BPD) and the number of training updates for each network are shown below.
Results.Table 1 shows that the best performing BFN gives 2.66 BPD for the 256 bin data, which is close to the state-of-the-art at 2.64 BPD. The most obvious performance benchmark (given the shared network architecture and similarity in loss function) is the VDM result at 2.65 BPD [17]. However this took 10M weight updates to achieve, and due to time constraints we were only able to train BFNs for 5M updates. Validation performance was still improving after 5M updates, and it remains unclear how much performance would improve with 10M updates.
Figure 15: **CIFAR-10 real and generated data. Samples generated with 4,000 steps, using networks trained with discretised loss. The same random seed was used for both sets of samples. Note the improved image quality of the 16 bin samples compared to the 256 bin samples.**
Table 3 shows that discretised loss gave better performance than continuous loss for 16 bins, as well as much faster training time (250K updates vs. 1M). This supports the hypothesis that training with discretised loss is most beneficial when the number of bins is relatively low. Furthermore, for both 16 and 256 bins, discretised training gave much better results when the number of steps \(n\) was low (e.g. 10 or 25). However continuous loss gave better performance than discretised loss on 256 bins (2.66 BPD vs. 2.68); more investigation would be needed to understand why.
Figure 15 shows that discretised training with 16 bins gives better sample quality than training with 256 bins. This is presumably because the loss function of the former is restricted to the first four bits of the data in which -- as can be seen by comparing the test data at 16 and 256 bins -- most of the perceptually relevant information is contained. An interesting direction for future work would be to train one BFN to model the lower bits of an image, and a second BFN to conditionally upscale to higher bits, as has previously been explored for autoregressive models [13, 26].
Figure 16: CIFAR-10 Input and output distributions. For two test set images the figure shows the means of the input and output distributions at steps evenly spaced between \(t=0\) and \(t=0.25\).
Figure 17: CIFAR-10 losses against time. The plot was made using the network trained with discretised loss on 256 bins. Note the high loss at the very start of the process, which we did not observe with discrete data.
### text8
Data.The text8 dataset [25] was derived from a subset of the enwik9 Wikipedia dataset by removing punctuation and restricting the text to lowercase Latin letters and spaces, giving an alphabet of size 27. For clarity, we represent the space character with an underscore in figures.
Setup.The network architecture was a Transformer similar to the small model (\(d_{\text{model}}=768\)) used by Radford et al. [31] except that it uses the GELU activation function [10] and the depth was increased to 24 layers. The input and output of the Transformer were concatenated and then projected back to the output size to produce the final output. The standard training/validation/test split of 90M/5M/5M consecutive characters was used, and the network was trained with a batch size of 3328 sequences of length 256, randomly cropped from the training set, for 1.2 M weight updates using the AdamW optimizer[22]. The learning rate was set to \(10^{-4}\), weight decay to 0.1 and \((\beta_{1},\beta_{2})\) to \((0.9,0.98)\). An exponential moving average of model parameters with a decay rate of 0.9999 was used for evaluation and sample generation. Dropout was not used, but overfitting was observed towards the end of training indicating that regularization may further improve results. \(\beta(1)\) was 0.75. The total number of learnable parameters was approximately 170M. Note that the batch size and number of layers were larger than prior results from diffusion models. The first choice
\begin{table}
\begin{tabular}{l l l} \hline \hline & Model & BPC \\ \hline \multirow{3}{*}{Flow-based models} & IAF/SCF\({}^{\dagger}\)[52] & 1.88 \\ & Argmax Coupling Flow\({}^{\dagger}\)[14] & 1.80 \\ & Discrete Flow\({}^{\dagger}\)[47] & 1.23 \\ \hline \multirow{3}{*}{Order-agnostic Models} & OA-ARDM [13] & 1.43 \(\pm\) 0.001 \\ & MAC [39] & 1.40 \\ \hline \multirow{3}{*}{Diffusion models} & Multinomial Diffusion [14] & 1.72 \\ & D3PM uniform [1] & 1.61 \(\pm\) 0.02 \\ & D3PM NN [1] & 1.59 \(\pm\) 0.03 \\ & D3PM mask [1] & 1.45 \(\pm\) 0.02 \\ \hline \multirow{3}{*}{Autoregressive baseline} & **BFN** & **1.41** \\ \cline{1-1} & Transformer\({}^{\dagger}\)[1] & 1.23 \\ \cline{1-1} & Adaptive Span Transformer\({}^{\dagger}\)[45] & 1.07 \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Comparison of text8 results with other methods**. The best published model on this dataset (*) was trained on sequences of length 512. The rest of the above models were trained on sequences of length 256. Results for models marked with (\({}^{\dagger}\)) are exact values; all other results are upper bounds.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \(n\)-steps & 10 & 25 & 50 & 100 & 256 & 1000 & \(\infty\) \\ \hline BPC & 1.70 & 1.52 & 1.47 & 1.43 & 1.42 & 1.41 & 1.41 \\ \hline \hline \end{tabular}
\end{table}
Table 5: **text8 results**. BPC is bits per character averaged over 1M randomly cropped sequences from the test set with \(L^{n}(\mathbf{x})\) or \(L^{\infty}(\mathbf{x})\) sampled once per crop. The reconstruction loss \(L^{r}(\mathbf{x})\) (included in BPC) was 0.006.
2303.05552 | EfficientTempNet: Temporal Super-Resolution of Radar Rainfall | Rainfall data collected by various remote sensing instruments such as radars
or satellites has different space-time resolutions. This study aims to improve
the temporal resolution of radar rainfall products to help with more accurate
climate change modeling and studies. In this direction, we introduce a solution
based on EfficientNetV2, namely EfficientTempNet, to increase the temporal
resolution of radar-based rainfall products from 10 minutes to 5 minutes. We
tested EfficientTempNet over a dataset for the state of Iowa, US, and compared
its performance to three different baselines to show that EfficientTempNet
presents a viable option for better climate change monitoring. | Bekir Z Demiray, Muhammed Sit, Ibrahim Demir | 2023-03-09T19:19:56Z | http://arxiv.org/abs/2303.05552v1 | # EfficientTempNet: Temporal Super-Resolution of Radar Rainfall
###### Abstract
Rainfall data collected by various remote sensing instruments such as radars or satellites has different space-time resolutions. This study aims to improve the temporal resolution of radar rainfall products to help with more accurate climate change modeling and studies. In this direction, we introduce a solution based on EfficientNetV2, namely EfficientTempNet, to increase the temporal resolution of radar-based rainfall products from 10 minutes to 5 minutes. We tested EfficientTempNet over a dataset for the state of Iowa, US, and compared its performance to three different baselines to show that EfficientTempNet presents a viable option for better climate change monitoring.
## 1 Introduction
The importance of environmental data has grown significantly in recent years due to the increased impact of natural disasters. Rainfall data, in particular, plays a crucial role in various climate modeling applications, such as flood forecasting Sit et al. (2021); Xiang and Demir (2022), monitoring water quality Jha et al. (2007), or managing wastewater Cahoon and Hanke (2017). Given that the spatial and temporal patterns of rain are crucial in these modeling efforts, having reliable and accessible precipitation maps is vital for advancing research on climate related hazards with different objectives like risk assessment Alabbad et al. (2021) or disaster mitigation Alabbad et al. (2022).
Quantitative Precipitation Estimation (QPE) systems provide rainfall data that takes into account three dimensions, which include latitude and longitude as the spatial coordinates and temporal resolution as the third dimension. Weather radars are the primary source used in QPE, and they allow to record the space-time characteristics of precipitation, which is essential for making accurate streamflow predictions in hydrology. Improving rainfall datasets in radar hydrology largely involves addressing uncertainty factors, while the focus is on acquiring more precise precipitation data to enhance our understanding of weather patterns in terms of space and time. However, once the data is obtained, the task of creating better datasets becomes a separate challenge.
The temporal resolution of rainfall data is a critical factor in determining the accuracy of predictive modeling efforts (e.g., Atencia et al. (2011)). This paper aims to address the issue of low temporal resolution rainfall products by proposing a convolutional neural network to enhance the temporal resolution of rainfall data. The proposed CNN model, EfficientTempNet, is based on EfficientNetV2 Tan and Le (2021), and the performance of the network is compared to three different methods: the nearest frame, optical flow, and TempNet Sit et al. (2021).
### Related Work
Rainfall products are not the only data subject to temporal interpolation between two 2D maps. Various studies in the computer vision literature employ neural network based approaches for video frame interpolation Niklaus et al. (2017, 2017); Liu et al. (2017); Jiang et al. (2018). In contrast, the literature on temporal interpolation of rainfall datasets is limited. In Seo and Krajewski (2015), researchers used advection correction to create 1-minute rainfall maps from 5-minute ones. Building upon Seo and Krajewski (2015), in Sit et al. (2021), a simple residual CNN architecture, namely TempNet, was proposed and shown to be superior to the dense optical flow-based advection correction method and to a non-residual CNN over the IowaRain Sit et al. (2021) dataset. To the best of our knowledge, TempNet is the only study that tackles temporal super-resolution of radar rainfall products using neural networks, and it therefore forms the baseline of this study. Consequently, the performance of EfficientTempNet is likewise reported over the IowaRain dataset to show the improvement in performance.
The paper is organized as follows: Section 2 introduces the dataset used and provides an overview of the methodology. Section 3 presents and discusses the preliminary results of the comparison between all methods. Finally, Section 4 summarizes the findings.
## 2 Methods
### Data
IowaRain Sit et al. (2021); Seo and Krajewski (2020) is a rainfall event dataset that covers the years 2016 through 2019 with 5-minute temporal resolution. The dataset covers an area bounding the state of Iowa with a size of 1088x1760 pixels at 500 m spatial resolution. To cope with memory bottlenecks and computational complexity, we sampled a 768x768 area within the IowaRain domain from eastern Iowa; we then averaged the values in this area to downscale the rain map spatial resolution to 3 km, which effectively changed the rainfall map sizes to 128x128. After the subsampling, the event detection criteria of IowaRain were applied to the new area; 82, 83, 90, and 110 events were obtained for the years 2016 through 2019, respectively. To get an approximately 70/30 split when dividing the dataset, we used the rainfall events in 2019 as the test set and all the rainfall events before 2019 as the training set, for total set sizes of 255 and 110 events. Each snapshot, or 2D map, \(t_{s}\) from a rain event, together with the snapshot immediately following it \(t_{s+5}\) and the one immediately preceding it \(t_{s-5}\), was converted into its own entry in the dataset. Thus each dataset entry or sample consists of three 2D rainfall maps, for \(t_{s-5}\), \(t_{s+5}\), and \(t_{s}\), the first two being inputs and the last one being the output (Figure 1). In the end, there were 19,264 train and 7,762 test entries used in this study.
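The \(6\times\) spatial aggregation and the \((t_{s-5},t_{s+5})\to t_{s}\) pairing can be sketched as follows (our illustration; the array layout, with one frame every 5 minutes per event, is an assumption):

```python
import numpy as np

def block_average(rainmap, factor=6):
    # 768x768 maps at 500 m resolution -> 128x128 maps at 3 km resolution.
    h, w = rainmap.shape
    return rainmap.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def make_triples(event):
    # event: array of shape (T, 128, 128); inputs are the frames at t-5 and
    # t+5, the target is the frame at t.
    inputs = np.stack([event[:-2], event[2:]], axis=1)   # (T-2, 2, 128, 128)
    targets = event[1:-1]                                # (T-2, 128, 128)
    return inputs, targets
```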
### EfficientTempNet
The foundation of this work is EfficientNetV2 Tan and Le (2021). In order to create a model that takes two rain maps and outputs another one of the same size, EfficientNetV2 is altered by modifying its key component, the MBConv block Sandler et al. (2018), which was not
Figure 1: Input/Output shapes and problem definition for EfficientTempNet
specifically developed with this task in mind. Our model takes two 2D rain maps and combines them prior to passing them into a convolutional layer with 24 feature maps. After this layer, multiple SimpleConv and MBConv blocks are used to extract information. In the last section of the model, two convolutional layers produce the desired output. Our model is depicted visually in Figure 2.
In EfficientNetV2, there are two different blocks, namely MBConv and Fused-MBConv. In the proposed method, we modify the MBConv blocks and, instead of Fused-MBConv blocks, use SimpleConv blocks, as our experiments during model development favored them over Fused-MBConv blocks. In our MBConv blocks, batch normalization layers are removed and activation layers are replaced with LeakyReLU. The remaining parts of the MBConv are the same as in the original implementation, including the SE layer and depthwise convolutions. In our SimpleConv blocks, two convolutional layers with kernel sizes 3 and 1 are used, with LeakyReLU in between them. Similar to the MBConv block, SimpleConv blocks take advantage of residual connections. In addition, the spatial input size does not change throughout the model. Details of the SimpleConv and MBConv blocks are provided in Table 1.
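As an illustration only (our sketch, not the authors' code), a SimpleConv block as described (a kernel-size-3 convolution, LeakyReLU, a kernel-size-1 convolution, and a residual connection) could look as follows in PyTorch; placing the channel expansion from Table 1 in the 3x3 convolution is our assumption.

```python
import torch
import torch.nn as nn

class SimpleConv(nn.Module):
    # Two convolutions (kernel sizes 3 and 1) with LeakyReLU in between and a
    # residual connection; the spatial size is preserved throughout.
    def __init__(self, channels, expansion=4):
        super().__init__()
        mid = channels * expansion
        self.conv3 = nn.Conv2d(channels, mid, kernel_size=3, padding=1)
        self.act = nn.LeakyReLU()
        self.conv1 = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        return x + self.conv1(self.act(self.conv3(x)))

x = torch.randn(1, 24, 128, 128)
assert SimpleConv(24)(x).shape == x.shape
```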
The RAdam optimizer Liu et al. (2019) was employed for training with a learning rate of 0.001, with Mean Absolute Error as both the loss function and the evaluation metric. The network was implemented in PyTorch 1.9 Paszke et al. (2019), and the ReduceLROnPlateau scheduler was utilized to reduce the learning rate whenever the model's loss failed to improve over three consecutive epochs. Training was performed on NVIDIA Titan V GPUs.
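A sketch of this training configuration (ours, with a stand-in model; note that the authors used PyTorch 1.9, where RAdam was not yet part of `torch.optim`, so their exact import may have differed):

```python
import torch

model = torch.nn.Conv2d(2, 1, kernel_size=3, padding=1)  # stand-in for EfficientTempNet
criterion = torch.nn.L1Loss()                             # mean absolute error
optimizer = torch.optim.RAdam(model.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=3)        # shrink LR after 3 stagnant epochs

# One illustrative training step on random tensors:
x = torch.randn(4, 2, 128, 128)   # batch of (t-5, t+5) frame pairs
y = torch.randn(4, 1, 128, 128)   # target frames at t
loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
scheduler.step(loss.item())        # in practice: the validation loss, once per epoch
```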
## 3 Results
This section outlines the metrics used to evaluate performance and presents the results of our method as well as three baselines: nearest frame, optical flow, and TempNet. First, we describe the compared methods, then the metrics, and we conclude the section with scores and discussion.
**Nearest Frame** - The nearest-frame baseline assumes that the interpolated frame is equal to the frame closest in time to it. In our case, we selected the preceding frame, although both input frames are equally distant in time from the target.
**Optical Flow** - Although a variety of optical flow algorithms can be found in the computer vision literature, the Gunnar-Farneback method Farneback (2003) was used in this paper. It estimates a motion vector for each pixel from its intensities; in the rain map scenario, this entails determining the shift of each measurement across the two-dimensional rain map's 3 km by 3 km grid. Once the optical flow is computed, all measurements are shifted between frames based on their position in the first frame and their motion vectors in the optical flow. NumPy Harris et al. (2020), a library for numerical computation, and
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline & **SimpleConv\#1** & **SimpleConv\#2** & **SimpleConv\#3** & **MBConv\#1** & **MBConv\#2** \\ \hline
**\# of Channels** & 24 & 48 & 64 & 128 & 256 \\ \hline
**Expansion ratio** & 1 & 4 & 4 & 4 & 6 \\ \hline
**\# of Layers** & 2 & 4 & 3 & 4 & 8 \\ \hline \end{tabular}
\end{table}
Table 1: Blocks’ details in our model
Figure 2: Architecture of Proposed Method
OpenCV Bradski (2000), a library for computer vision, were utilized in implementation.
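A minimal sketch of this advection baseline (ours; the Farneback parameters shown are common defaults, not necessarily the authors' choices, and since Farneback expects 8-bit single-channel inputs, the rain rates are rescaled for the flow computation only):

```python
import cv2
import numpy as np

def interpolate_midframe(prev_map, next_map):
    # Estimate the frame halfway between two rain maps by advecting each
    # endpoint half-way along the Farneback optical flow and averaging.
    scale = 255.0 / max(prev_map.max(), next_map.max(), 1e-6)
    a = (prev_map * scale).astype(np.uint8)
    b = (next_map * scale).astype(np.uint8)
    flow = cv2.calcOpticalFlowFarneback(a, b, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_map.shape
    gx, gy = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    fwd = cv2.remap(prev_map.astype(np.float32), gx - 0.5 * flow[..., 0],
                    gy - 0.5 * flow[..., 1], cv2.INTER_LINEAR)
    bwd = cv2.remap(next_map.astype(np.float32), gx + 0.5 * flow[..., 0],
                    gy + 0.5 * flow[..., 1], cv2.INTER_LINEAR)
    return 0.5 * (fwd + bwd)
```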
**TempNet** - TempNet addresses the same task as this study. Its architecture is built from three components and completed with residual connections. More details on TempNet can be found in Sit et al. (2021b).
We measured the performance of each of the previously mentioned methods using four metrics, namely Mean Absolute Error (MAE), Probability of Detection (POD), False Alarm Ratio (FAR), and Critical Success Index (CSI). The MAE calculates the average of the absolute differences between the estimated and actual 2D rain maps in the test dataset. The POD (1), FAR (2), and CSI (3) metrics are calculated in a binary manner from the number of hits (H), false alarms (F), and misses (M). H represents the number of correctly estimated rainfall cells, meaning the number of elements in the 2D map that were correctly estimated as non-zero values. F represents the number of wrongly estimated rainfall cells, where the cells were estimated to have rain but the corresponding indices in the ground truth 2D map were zero. M represents the number of rainfall cells that were estimated as zero but had non-zero values in the ground truth. It is important to note that a value of 1.0 is best for POD and CSI, whereas a value of 0.0 is best for FAR. All metrics were calculated using a threshold value of 0.0001 over the estimated rainfall maps, since the neural networks produce small non-zero values throughout the estimated 2D rainfall maps.
\[POD=\frac{H}{H+M} \tag{1}\] \[FAR=\frac{F}{H+F} \tag{2}\] \[CSI=\frac{H}{H+F+M} \tag{3}\]
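These scores can be computed as follows (our sketch, using the paper's threshold of 0.0001; it assumes at least one hit so the denominators are non-zero):

```python
import numpy as np

def categorical_scores(pred, truth, threshold=1e-4):
    # Hits, false alarms and misses for binary rain/no-rain detection.
    p, t = pred > threshold, truth > threshold
    H = np.sum(p & t)      # hits
    F = np.sum(p & ~t)     # false alarms
    M = np.sum(~p & t)     # misses
    return H / (H + M), F / (H + F), H / (H + F + M)   # POD (1), FAR (2), CSI (3)
```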
Table 2 presents the performance of the baselines as well as the CNN-based models on the test set, in terms of the described metrics, for the interpolation of the frame at \(t_{s}\) from the frames at \(t_{s-5}\) and \(t_{s+5}\). As shown in Table 2, both TempNet and EfficientTempNet outperform the baseline methods in terms of the MAE metric, which was used to train them. Between EfficientTempNet and TempNet, EfficientTempNet shows stronger results, with a significant margin in MAE scores, as a result of its increased model size. As for the POD, optical flow provides the best result, by a small margin over EfficientTempNet. However, considering FAR and POD together, it can be argued that optical flow tends to generate non-zero values that correctly identify true positives but also cause a higher number of false positives compared to EfficientTempNet. Overall, for improving the temporal resolution of radar rainfall products, EfficientTempNet offers a better solution than the other employed methods, since its scores are either best or runner-up across all metrics used.
## 4 Conclusion
In this study, the EfficientNetV2-based CNN model, EfficientTempNet, was introduced for improving the temporal resolution of radar rainfall products and compared to three baseline methods. The results showed that EfficientTempNet outperformed the other approaches in terms of various performance metrics, including MAE. This work represents significant progress in creating improved rainfall maps for various purposes, such as flood forecasting Xiang et al. (2021); Sit & Demir (2019) and climate change modeling Rolnick et al. (2022); Sit et al. (2020).
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Methodology** & **MAE(mm/h)\(\downarrow\)** & **CSI \(\uparrow\)** & **POD \(\uparrow\)** & **FAR \(\downarrow\)** \\ \hline
**Nearest Frame** & 0.630 & 0.840 & 0.911 & 0.0878 \\ \hline
**Optical Flow** & 0.324 & 0.857 & **0.978** & 0.127 \\ \hline
**TempNet** & 0.272 & 0.898 & 0.919 & **0.0252** \\ \hline
**EfficientTempNet** & **0.208** & **0.923** & 0.972 & 0.0524 \\ \hline \end{tabular}
\end{table}
Table 2: Performance summary of tested methods for predicting intermediate frame |
2304.10846 | Maximal Hardy Fields | We show that all maximal Hardy fields are elementarily equivalent as
differential fields, and give various applications of this result and its
proof. We also answer some questions on Hardy fields posed by Boshernitzan. | Matthias Aschenbrenner, Lou van den Dries, Joris van der Hoeven | 2023-04-21T09:43:47Z | http://arxiv.org/abs/2304.10846v2 | # Maximal Hardy fields
###### Abstract.
We show that all maximal Hardy fields are elementarily equivalent as differential fields, and give various applications of this result and its proof. We also answer some questions on Hardy fields posed by Boshernitzan.
###### Contents
* 1 Preliminaries
* 1.1 Linear Differential Operators and Differential Polynomials
* 1.2 The Group of Logarithmic Derivatives
* 1.3 The Valuation of Differential Polynomials at Infinity (\({}^{*}\))
* 1.4 \(\lambda\)-freeness and \(\omega\)-freeness
* 1.5 Complements on Linear Differential Operators
* 1.6 Special Elements
* 1.7 Differential Henselianity of the Completion
* 1.8 Complements on Newtonianity
* 2 The Universal Exponential Extension
* 2.1 Some Facts about Group Rings
* 2.2 The Universal Exponential Extension
* 2.3 The Spectrum of a Differential Operator
* 2.4 Self-Adjointness and its Variants (\({}^{*}\))
* 2.5 Eigenvalues and Splittings
* 2.6 Valuations on the Universal Exponential Extension
* 3 Normalizing Holes and Slots
* 3.1 The Span of a Linear Differential Operator
* 3.2 Holes and Slots
* 3.3 The Normalization Theorem
* 3.4 Isolated Slots
* 3.5 Holes of Order and Degree One
**Part 4.**: **Slots in \(H\)-Fields** * 4.1 Some Valuation-Theoretic Lemmas * 4.2 Approximating Linear Differential Operators * 4.3 Split-Normal Slots * 4.4 Ultimate Slots and Firm Slots * 4.5 Repulsive-Normal Slots
**Part 5.**: **Hardy Fields and their Universal Exponential Extensions** * 5.1 Germs of Continuous Functions * 5.2 Linear Differential Equations * 5.3 Hardy Fields * 5.4 Upper and Lower Bounds on the Growth of Hardian Germs (*) * 5.5 Second-Order Linear Differential Equations over Hardy Fields * 5.6 Maximal Hardy Fields are \(\omega\)-Free * 5.7 Bounding Solutions of Linear Differential Equations * 5.8 Almost Periodic Functions * 5.9 Uniform Distribution Modulo One * 5.10 Universal Exponential Extensions of Hardy Fields
**Part 6.**: **Filling Holes in Hardy Fields** * 6.1 Inverting Linear Differential Operators over Hardy Fields * 6.2 Solving Split-Normal Equations over Hardy Fields * 6.3 Smoothness Considerations * 6.4 Application to Filling Holes in Hardy Fields * 6.5 Weights * 6.6 Asymptotic Similarity * 6.7 Differentially Algebraic Hardy Field Extensions
**Part 7.**: **Applications** * 7.1 Transfer Theorems * 7.2 Relative Differential Closure * 7.3 Embeddings into Transseries and Maximal Hardy Fields * 7.4 Linear Differential Equations over Hardy Fields * 7.5 Revisiting Second-Order Linear Differential Equations * 7.6 The Example of the Bessel Equation * 7.7 Holes and Slots in Perfect Hardy Fields
**Index** * **List of Symbols**
**Preface**
A Hardy field is said to be _maximal_ if it has no proper Hardy field extension. In these notes we show that all maximal Hardy fields are elementarily equivalent (as ordered differential fields) to the ordered differential field \(\mathbb{T}\) of transseries. This is part of our main result, Theorem 6.7.22.
We shall depend heavily on our book [ADH], which contains a model-theoretic analysis of \(\mathbb{T}\). Besides developing further the asymptotic differential algebra from that book we require also a good dose of analysis. These notes are divided in Parts 1-7, preceded by a somewhat lengthy Introduction including a sketch of the proof of our main result. Parts 1-4 consist of further asymptotic differential algebra and culminates in various normalization theorems for algebraic differential equations over suitable \(H\)-fields. Parts 5 and 6 are more analytic and apply the normalization theorems to Hardy fields. Part 7 consists of applications. We finish with an index and a list of symbols newly introduced in this work. (All other notation is standard or comes from [ADH].)
The present notes are probably not suitable for publication as a journal article, since we took the liberty of including extensive sections with complete proofs on classical topics such as self-adjoint linear differential operators, almost periodic functions, uniform distribution modulo 1, and Bessel functions. This was partly done for our own education, and partly to put things in a form convenient for our purpose. We also took the opportunity to develop some topics a bit further than needed for the main theorem, and in this way we could also answer in Part 5 some questions about Hardy fields raised by Boshernitzan. We have in mind further use of the material here, for example in [15] and in relation to open problems posed in [ADH]. (Our main theorem solves one of those problems.)
Readers only interested in the proof of our main result can skip Sections 1.3, 2.4, 5.4, as well as several subsections of other sections in Parts 1-6. These (sub)sections are marked by an asterisk (\({}^{*}\)).
The main results in these notes are really about _differentially algebraic_ Hardy field extensions, especially their construction. We complement this in [14] with an account of constructing _differentially transcendental_ Hardy field extensions, leading to the result that all maximal Hardy fields are \(\eta_{1}\) in the sense of Hausdorff: in other words, given any Hardy field \(H\) and countable subsets \(A<B\) in \(H\), there is an element \(f\) in a Hardy field extension of \(H\) such that \(A<f<B\). This can be used to show that all maximal Hardy fields are back-and-forth equivalent, which is considerably stronger than their elementary equivalence. We mention this here because the proof of a key ingredient in [14] makes essential use of the main result from the present notes.
We are still deliberating how to publish this material (these notes and [14]), but thought it best to make it available for now on the arxiv.
**Introduction**
Du Bois-Reymond's "orders of infinity" [28]-[31] were put on a firm basis by Hardy [87], leading to the notion of a Hardy field (Bourbaki [39]). A _Hardy field_ is a field \(H\) of germs at \(+\infty\) of differentiable real-valued functions on intervals \((a,+\infty)\) such that for any differentiable function whose germ is in \(H\) the germ of its derivative is also in \(H\). (See Section 5.3 for more precision.) Every Hardy field is naturally a differential field, and an ordered field with the germ of \(f\) being \(>0\) iff \(f(t)>0\), eventually. _Hardy fields are the natural domain of asymptotic analysis, where all rules hold, without qualifying conditions_[170, p. 297]. The basic theory of Hardy fields was mostly developed by Boshernitzan [32]-[35] and Rosenlicht [170]-[174].
The germs of Hardy's logarithmic-exponential functions [84] furnish the classical example of a Hardy field: these functions are the real-valued functions that can be built from real constants and the identity function \(x\), using addition, multiplication, division, taking logarithms, and exponentiating. Examples include the germs of the functions \((0,+\infty)\to\mathbb{R}\) given by \(x^{r}\) (\(r\in\mathbb{R}\)), \(\mathrm{e}^{x^{2}}\), and \(\log\log x\). Other Hardy fields contain (germs of) differentially transcendental functions, such as the Riemann \(\zeta\)-function and Euler's \(\Gamma\)-function [170], and even functions ultimately growing faster than each iterate of the exponential function [34]. One source of Hardy fields is o-minimality: every o-minimal structure on the real field naturally gives rise to a Hardy field (of germs of definable functions). This yields a wealth of examples such as those obtained from quasi-analytic Denjoy-Carleman classes [166], or containing certain transition maps of plane analytic vector fields [110], and explains the role of Hardy fields in model theory and its applications to real analytic geometry and dynamical systems [8, 22, 139]. Hardy fields have also found applications in computer algebra [176, 177, 185], ergodic theory (see, e.g., [20, 37, 73, 115]), and other areas of mathematics [19, 42, 44, 68, 80].
In the remainder of this introduction, \(H\) is a Hardy field. Then \(H(\mathbb{R})\) (obtained by adjoining the germs of the constant functions) is also a Hardy field, and for any \(h\in H\), the germ \(\mathrm{e}^{h}\) generates a Hardy field \(H(\mathrm{e}^{h})\) over \(H\), and so does any differentiable germ with derivative \(h\). Moreover, \(H\) has a unique Hardy field extension that is algebraic over \(H\) and real closed. (See [32, 165, 171] or Section 5.3 below.) Our main result is Theorem 6.7.22, and it yields what appears to be the ultimate fact about differentially algebraic Hardy field extensions:
**Theorem A**.: _Let \(P(Y)\) be a differential polynomial in a single differential indeterminate \(Y\) over \(H\), and let \(f<g\) in \(H\) be such that \(P(f)<0<P(g)\). Then there is a \(y\) in a Hardy field extension of \(H\) such that \(f<y<g\) and \(P(y)=0\)._
By Zorn, every Hardy field extends to a maximal Hardy field, so by the theorem above, maximal Hardy fields have the intermediate value property for differential polynomials. (In [14] we show there are very many maximal Hardy fields, namely \(2^{\mathfrak{c}}\) many, where \(\mathfrak{c}\) is the cardinality of the continuum.) By the results mentioned earlier, maximal Hardy fields are also Liouville closed \(H\)-fields in the sense of [6]; thus they contain the germs of all logarithmic-exponential functions. Hiding behind the intermediate value property of Theorem A are two more fundamental properties, \(\omega\)-_freeness_ and _newtonianity_, which are central in our book [ADH]. (Roughly speaking, \(\omega\)-freeness controls the solvability of second-order homogeneous differential equations, and newtonianity is a strong version of differential-henselianity.) We
show that any Hardy field has an \(\omega\)-free Hardy field extension (Theorem 5.6.2), and next the much harder result that any \(\omega\)-free Hardy field extends to a newtonian \(\omega\)-free Hardy field: Theorem 6.7.22, which is really the main result of this paper. It follows that every maximal Hardy field is, in the terminology of [12], an _\(H\)-closed field_ with small derivation. Now the elementary theory \(T_{H}\) of \(H\)-closed fields with small derivation (denoted by \(T_{\rm small}^{\rm nl}\) in [ADH]) is _complete_, by [ADH, 16.6.3]. This means in particular that any two maximal Hardy fields are indistinguishable as to their elementary properties:
**Corollary 1**.: _If \(H_{1}\) and \(H_{2}\) are maximal Hardy fields, then \(H_{1}\) and \(H_{2}\) are elementarily equivalent as ordered differential fields._
To derive Theorem A we use also the key results from the book [103] to the effect that \({\mathbb{T}}_{\rm g}\), the ordered differential field of grid-based transseries, is \(H\)-closed with small derivation and the intermediate value property for differential polynomials. In particular, it is a model of the complete theory \(T_{H}\). Thus maximal Hardy fields have the intermediate value property for differential polynomials as well, and this amounts to Theorem A, obtained here as a byproduct of more fundamental results. (A more detailed account of the differential intermediate value property for \(H\)-fields is in [13].) We sketch the proof of our main result (Theorem 6.7.22) later in this introduction, after describing further consequences.
### Further consequences of our main result
In [ADH] we prove more than completeness of \(T_{H}\): a certain natural extension by definitions of \(T_{H}\) has quantifier elimination. This leads to a strengthening of Corollary 1 by allowing parameters from a common Hardy subfield of \(H_{1}\) and \(H_{2}\). To fully appreciate this statement requires more knowledge of model theory, as in [ADH, Appendix B], which we do not assume for this introduction. However, we can explain a special case in a direct way, in terms of solvability of systems of algebraic differential equations, inequalities, and asymptotic inequalities. Here we find it convenient to use the notation for asymptotic relations introduced by du Bois-Reymond and Hardy instead of Bachmann-Landau's \(O\)-notation: for germs \(f\), \(g\) in a Hardy field set
\[f\preccurlyeq g\ :\Longleftrightarrow\quad f=O(g)\ :\Longleftrightarrow\quad|f|\leqslant c|g|\mbox{ for some real }c>0,\] \[f\prec g\ :\Longleftrightarrow\quad f=o(g)\ :\Longleftrightarrow\quad|f|<c|g|\mbox{ for all real }c>0.\]
Let now \(Y=(Y_{1},\ldots,Y_{n})\) be a tuple of distinct (differential) indeterminates, and consider a system of the following form:
( \[*\] ) \[\left\{\begin{array}{ccc}P_{1}(Y)&\varrho_{1}&Q_{1}(Y)\\ \vdots&\vdots&\vdots\\ P_{k}(Y)&\varrho_{k}&Q_{k}(Y)\end{array}\right.\]
Here each \(P_{i}\), \(Q_{i}\) is a differential polynomial in \(Y\) (that is, a polynomial in the indeterminates \(Y_{j}\) and their formal derivatives \(Y^{\prime}_{j},Y^{\prime\prime}_{j},\ldots\)) with coefficients in our Hardy field \(H\), and each \(\varrho_{i}\) is one of the symbols \(=\), \(\neq\), \(\leqslant\), \(<\), \(\preccurlyeq\), \(\prec\). Given a Hardy field \(E\supseteq H\), a _solution_ of (\(*\)) in \(E\) is an \(n\)-tuple \(y=(y_{1},\ldots,y_{n})\in E^{n}\) such that for \(i=1,\ldots,k\), the relation \(P_{i}(y)\,\varrho_{i}\,Q_{i}(y)\) holds in \(E\). Here is a Hardy field analogue of the "Tarski Principle" of real algebraic geometry [ADH, B.12.14]:
**Corollary 2**.: _If the system (\(*\)) has a solution in some Hardy field extension of \(H\), then (\(*\)) has a solution in every maximal Hardy field extension of \(H\)._
(The symbols \(\neq\), \(\leqslant\), \(<\), \(\preccurlyeq\) in \((*)\) are for convenience only: their occurrences can be eliminated at the cost of increasing \(k\), \(n\). But \(\prec\) is essential; see [ADH, 16.2.6].) Besides the quantifier elimination alluded to, Corollary 2 depends on Lemma 7.1.1, which says that for any Hardy field \(H\) all maximal Hardy field extensions of \(H\) induce the same \(\Lambda\Omega\)-cut on \(H\), as defined in [ADH, 16.3].
In particular, taking for \(H\) the smallest Hardy field \(\mathbb{Q}\), we see that a system \((*)\) with a solution in some Hardy field has a solution in _every_ maximal Hardy field, thus recovering a special case of our Corollary 1. Call such a system \((*)\) over \(\mathbb{Q}\)_consistent._ For example, with \(X\), \(Y\), \(Z\) denoting here single distinct differential indeterminates, the system
\[Y^{\prime}Z\ \preccurlyeq\ Z^{\prime},\qquad Y\preccurlyeq 1,\qquad 1\prec Z\]
is inconsistent, whereas for any \(Q\in\mathbb{Q}\{Y\}\) and \(n\geqslant 2\) the system
\[X^{n}Y^{\prime}\ =\ Q(Y),\qquad X^{\prime}=1,\quad Y\prec 1\]
is consistent. As a consequence of the completeness of \(T_{H}\) we obtain the existence of an algorithm (albeit a very impractical one) for deciding whether a system \((*)\) over \(\mathbb{Q}\) is consistent, and this opens up the possibility of automating a substantial part of asymptotic analysis in Hardy fields. We remark that Singer [188] proved the existence of an algorithm for deciding whether a given system \((*)\) over \(\mathbb{Q}\) without occurrences of \(\preccurlyeq\) or \(\prec\) has a solution in _some_ ordered differential field (and then it will have a solution in the ordered differential field of germs of real meromorphic functions at \(0\)); but there are such systems, like
\[X^{\prime}\ =\ 1,\qquad XY^{2}\ =\ 1-X,\]
which are solvable in an ordered differential field, but not in a Hardy field: in a Hardy field, \(X^{\prime}=1\) forces \(X=x+c\) for some \(c\in\mathbb{R}\), so eventually \(1-X<0\leqslant XY^{2}\). Also, algorithmically deciding the solvability of a system \((*)\) over \(\mathbb{Q}\) in a _given_ Hardy field \(H\) may be impossible when \(H\) is "too small": e.g., if \(H=\mathbb{R}(x)\), by [55].
As these results suggest, the aforementioned quantifier elimination for \(T_{H}\) yields a kind of "resultant" for systems \((*)\) that allows one to make explicit within \(H\) itself for which choices of coefficients of the differential polynomials \(P_{i}\), \(Q_{i}\) the system \((*)\) has a solution in a Hardy field extension of \(H\). Without going into details, we only mention here some attractive consequences for systems \((*)\) depending on parameters. For this, let \(X_{1},\ldots,X_{m},Y_{1},\ldots,Y_{n}\) be distinct indeterminates and \(X=(X_{1},\ldots,X_{m})\), \(Y=(Y_{1},\ldots,Y_{n})\), and consider a system
( \[**\] ) \[\left\{\begin{array}{cccc}&P_{1}(X,Y)&\varrho_{1}&Q_{1}(X,Y)\\ &\vdots&\vdots&\vdots\\ &P_{k}(X,Y)&\varrho_{k}&Q_{k}(X,Y)\end{array}\right.\]
where \(P_{i}\), \(Q_{i}\) are now differential polynomials in \((X,Y)\) over \(H\), and the \(\varrho_{i}\) are as before. Specializing \(X\) to \(c\in\mathbb{R}^{m}\) then yields a system
( \[*c\] ) \[\left\{\begin{array}{cccc}&P_{1}(c,Y)&\varrho_{1}&Q_{1}(c,Y)\\ &\vdots&\vdots&\vdots\\ &P_{k}(c,Y)&\varrho_{k}&Q_{k}(c,Y)\end{array}\right.\]
where \(P_{i}(c,Y)\), \(Q_{i}(c,Y)\) are differential polynomials in \(Y\) with coefficients in the Hardy field \(H(\mathbb{R})\). (We only substitute real constants, so may assume that the \(P_{i}\), \(Q_{i}\)
are _polynomial_ in \(X\), that is, none of the derivatives \(X^{\prime}_{j},X^{\prime\prime}_{j},\dots\) occur in the \(P_{i},Q_{i}\).) Using [ADH, 16.0.2(ii)] we obtain:
**Corollary 3**.: _The set of all \(c\in\mathbb{R}^{m}\) such that the system \((*c)\) has a solution in some Hardy field extension of \(H\) is semialgebraic._
Recall: a subset of \(\mathbb{R}^{m}\) is said to be _semialgebraic_ if it is a finite union of sets
\[\bigl{\{}c\in\mathbb{R}^{m}:\ p(c)=0,\ q_{1}(c)>0,\dots,q_{l}(c)>0\bigr{\}}\]
where \(p,q_{1},\dots,q_{l}\in\mathbb{R}[X]\) are ordinary polynomials. (The topological and geometric properties of semialgebraic sets have been studied extensively [24]. For example, it is well-known that a semialgebraic set can only have finitely many connected components, and that each such component is itself semialgebraic.)
In connection with Corollary 3 we mention that the asymptotics of Hardy field solutions to algebraic differential equations \(Q(Y)=0\), where \(Q\) is a differential polynomial with constant real coefficients, has been investigated by Hardy [85] and Fowler [72] in cases where \(\operatorname{order}Q\leqslant 2\) (see [18, Chapter 5]), and later by Shackell [175, 183, 184] in general. A special case of our corollary: for any differential polynomial \(P(X,Y)\) with constant real coefficients, the set of parameters \(c\in\mathbb{R}^{m}\) such that the differential equation \(P(c,Y)=0\) has a solution \(y\) in some Hardy field, possibly also satisfying given asymptotic side conditions (such as \(y\prec 1\)), is semialgebraic. Example: the set of real parameters \((c_{1},\dots,c_{m})\in\mathbb{R}^{m}\) for which the homogeneous linear differential equation
\[y^{(m)}+c_{1}y^{(m-1)}+\dots+c_{m}y\ =\ 0\]
has a nonzero solution \(y\prec 1\) in a Hardy field is semialgebraic; in fact, it is the set of all \((c_{1},\dots,c_{m})\in\mathbb{R}^{m}\) such that the polynomial \(Y^{m}+c_{1}Y^{m-1}+\dots+c_{m}\in\mathbb{R}[Y]\) has a negative real zero. (Below we discuss more general linear differential equations over Hardy fields.) Nonlinear example: for \(g_{2},g_{3}\in\mathbb{R}\) the differential equation
\[(Y^{\prime})^{2}\ =\ 4Y^{3}-g_{2}Y-g_{3}\]
has a nonconstant solution in a Hardy field iff \(g_{2}^{3}=27g_{3}^{2}\) and \(g_{3}\leqslant 0\). In both cases, the Hardy field solutions are germs of logarithmic-exponential functions. But the class of differentially algebraic germs in Hardy fields is much more extensive; for example, the antiderivatives of \(\operatorname{e}^{x^{2}}\) are not logarithmico-exponential (Liouville).
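Both criteria just stated are easy to test mechanically. Here is a minimal sketch (our own illustration, not part of the paper; it assumes numpy and substitutes floating-point tolerances for exact real-algebraic tests) deciding membership in the two semialgebraic parameter sets:

```python
import numpy as np

def has_small_solution(c):
    """Decide whether y^(m) + c[0]*y^(m-1) + ... + c[m-1]*y = 0 has a nonzero
    solution y with y(t) -> 0 in a Hardy field: by the criterion in the text,
    iff Y^m + c[0]*Y^(m-1) + ... + c[m-1] has a negative real zero."""
    roots = np.roots([1.0, *c])
    return any(abs(r.imag) < 1e-9 and r.real < -1e-9 for r in roots)

def weierstrass_in_hardy_field(g2, g3):
    """Criterion in the text for (Y')^2 = 4Y^3 - g2*Y - g3 to have a
    nonconstant solution in a Hardy field."""
    return bool(np.isclose(g2**3, 27*g3**2)) and g3 <= 0

print(has_small_solution([0.0, -1.0]))        # True: roots +-1, take y = e^{-x}
print(has_small_solution([0.0, 1.0]))         # False: roots +-i (oscillation)
print(weierstrass_in_hardy_field(3.0, -1.0))  # True: 27 = 27 and g3 <= 0
print(weierstrass_in_hardy_field(3.0, 1.0))   # False: g3 > 0
```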
Instead of \(c\in\mathbb{R}^{m}\), substitute \(h\in H^{m}\) for \(X\) in \((**)\), resulting in a system
( \[*h\] ) \[\left\{\begin{array}{ccc}P_{1}(h,Y)&\varrho_{1}&Q_{1}(h,Y)\\ \vdots&\vdots&\vdots\\ P_{k}(h,Y)&\varrho_{k}&Q_{k}(h,Y)\end{array}\right.\]
where \(P_{i}(h,Y)\), \(Q_{i}(h,Y)\) are now differential polynomials in \(Y\) with coefficients in \(H\). It is well-known that for any semialgebraic set \(S\subseteq\mathbb{R}^{m+1}\) there is a natural number \(B=B(S)\) such that for every \(c\in\mathbb{R}^{m}\), if the section \(\bigl{\{}y\in\mathbb{R}:(c,y)\in S\bigr{\}}\) has \(>B\) elements, then this section has nonempty interior in \(\mathbb{R}\). In contrast, the set of solutions of \((*h)\) for \(n=1\) in a maximal Hardy field extension \(E\) of \(H\) can be simultaneously infinite and discrete in the order topology of \(E\): this happens precisely if some nonzero one-variable differential polynomial over \(H\) vanishes on this solution set [ADH, 16.6.11]. (Consider the example of the single algebraic differential equation \(Y^{\prime}=0\), which has solution set \(\mathbb{R}\) in each maximal Hardy field.) Nevertheless, we have the
following uniform finiteness principle for solutions of \((*h)\); its proof is considerably deeper than Corollary 3 and also draws on results from [10].
**Corollary 4**.: _There is a natural number \(B=B(**)\) such that for all \(h\in H^{m}\): if the system \((*h)\) has \(>B\) solutions in some Hardy field extension of \(H\), then \((*h)\) has continuum many solutions in every maximal Hardy field extension of \(H\)._
Next we turn to issues of smoothness and analyticity in Corollary 2. By definition, a Hardy field is a differential subfield of the differential ring \(\mathcal{C}^{<\infty}\) consisting of the germs of functions \((a,+\infty)\to\mathbb{R}\)\((a\in\mathbb{R})\) which are, for each \(n\), eventually \(n\)-times continuously differentiable. Now \(\mathcal{C}^{<\infty}\) has the differential subring \(\mathcal{C}^{\infty}\) whose elements are the germs that are eventually \(\mathcal{C}^{\infty}\). A \(\mathcal{C}^{\infty}\)_-Hardy field_ is a Hardy field \(H\subseteq\mathcal{C}^{\infty}\). (See [77] for an example of a Hardy field \(H\not\subseteq\mathcal{C}^{\infty}\).) A \(\mathcal{C}^{\infty}\)-Hardy field is said to be \(\mathcal{C}^{\infty}\)_-maximal_ if it has no proper \(\mathcal{C}^{\infty}\)-Hardy field extension. Now \(\mathcal{C}^{\infty}\) in turn has the differential subring \(\mathcal{C}^{\omega}\) whose elements are the germs that are eventually real analytic, and so we define likewise \(\mathcal{C}^{\omega}\)-Hardy fields (\(\mathcal{C}^{\omega}\)-maximal Hardy fields, respectively). Our main theorems go through in the \(\mathcal{C}^{\infty}\)- and \(\mathcal{C}^{\omega}\)-settings; combined with model completeness of \(T_{H}\) shown in [ADH, 16.2] this ensures the existence of solutions with appropriate smoothness in Corollary 2:
**Corollary 5**.: _If \(H\subseteq\mathcal{C}^{\infty}\) and the system \((*)\) has a solution in some Hardy field extension of \(H\), then \((*)\) has a solution in every \(\mathcal{C}^{\infty}\)-maximal Hardy field extension of \(H\). In particular, if \(H\) is \(\mathcal{C}^{\infty}\)-maximal and \((*)\) has a solution in a Hardy field extension of \(H\), then it has a solution in \(H\)._ (_Likewise with \(\mathcal{C}^{\omega}\) in place of \(\mathcal{C}^{\infty}\)._)
We already mentioned \(\mathbb{T}_{\mathrm{g}}\) as a quintessential example of an \(H\)-closed field. Its cousin \(\mathbb{T}\), the ordered differential field of transseries, extends \(\mathbb{T}_{\mathrm{g}}\) and is also \(H\)-closed with constant field \(\mathbb{R}\) [ADH, 15.0.2]. The elements of \(\mathbb{T}\) are certain generalized series (in the sense of Hahn) in an indeterminate \(x>\mathbb{R}\) with real coefficients, involving exponential and logarithmic terms, such as
\[f\ =\ \mathrm{e}^{\frac{1}{2}\,\mathrm{e}^{x}}-5\,\mathrm{e}^{x^{2}}+\mathrm{e} ^{x^{-1}+2x^{-2}+\cdots}+\sqrt[3]{2}\log x-x^{-1}+\mathrm{e}^{-x}+\mathrm{e}^ {-2x}+\cdots+5\,\mathrm{e}^{-x^{3/2}}\,.\]
Mathematically significant examples are the more simply structured transseries
\[\mathrm{Ai}\ =\ \mathrm{e}^{-\xi}\,x^{-1/4}\sum_{n}(-1)^{n}c_{n} \xi^{-n},\quad\mathrm{Bi}\ =\ \mathrm{e}^{\xi}\,x^{-1/4}\sum_{n}c_{n}\xi^{-n},\] \[\mathrm{where}\ \ c_{n}\ =\ \frac{(2n+1)(2n+3)\cdots(6n-1)}{(216)^{n}n!} \ \ \mathrm{and}\ \ \xi\ =\ \frac{2}{3}x^{3/2},\]
which are \(\mathbb{R}\)-linearly independent solutions of the Airy equation \(Y^{\prime\prime}=xY\)[144, Chapter 11, (1.07)]. For information about \(\mathbb{T}\) see [ADH, Appendix A] or [62, 103]. We just mention here that like each \(H\)-field, \(\mathbb{T}\) comes equipped with its own versions of the asymptotic relations \(\preccurlyeq\), \(\prec\), defined as for \(H\) above. The asymptotic rules valid in all Hardy fields, such as
\[f\preccurlyeq 1\ \Rightarrow\ f^{\prime}\prec 1,\qquad f\preccurlyeq g\prec 1 \ \Rightarrow\ f^{\prime}\preccurlyeq g^{\prime},\qquad f^{\prime}=f\neq 0\ \Rightarrow\ f\succ x^{n}\]
also hold in \(\mathbb{T}\). Here \(x\) denotes, depending on the context, the germ of the identity function on \(\mathbb{R}\), as well as the element \(x\in\mathbb{T}\). (We make this precise in Section 7.3, where we also give a finite axiomatization of these rules.)
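As a numerical aside on the Airy transseries above (our own check, not part of the paper; it assumes scipy, and uses the standard fact that the classical Airy function is, asymptotically, the transseries \(\operatorname{Ai}\) above divided by \(2\sqrt{\pi}\)):

```python
import math
import numpy as np
from scipy.special import airy

def c(n):
    """c_n = (2n+1)(2n+3)...(6n-1) / (216^n n!), with c_0 = 1."""
    if n == 0:
        return 1.0
    num = float(np.prod(np.arange(2*n + 1, 6*n, 2, dtype=float)))
    return num / (216.0**n * math.factorial(n))

t = 5.0
xi = (2.0/3.0) * t**1.5
series = sum((-1)**n * c(n) * xi**-n for n in range(6))
approx = math.exp(-xi) * t**-0.25 / (2*math.sqrt(math.pi)) * series
print(approx, airy(t)[0])   # scipy returns (Ai, Ai', Bi, Bi'); close agreement
```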
Now suppose that we are given an embedding \(\iota\colon H\to\mathbb{T}\) of ordered differential fields. We may view such an embedding as a _formal expansion operator_ and its inverse as a _summation operator_. (See Section 7.3 below for an example of a Hardy field,
arising from a fairly rich o-minimal structure, which admits such an embedding.) From \((*)\) we obtain a system
( \[\iota*\] ) \[\left\{\begin{array}{ccc}\iota(P_{1})(Y)&\varrho_{1}&\iota(Q_{1})(Y)\\ \vdots&\vdots&\vdots\\ \iota(P_{k})(Y)&\varrho_{k}&\iota(Q_{k})(Y)\end{array}\right.\]
of algebraic differential equations and (asymptotic) inequalities over \(\mathbb{T}\), where \(\iota(P_{i})\), \(\iota(Q_{i})\) denote the differential polynomials over \(\mathbb{T}\) obtained by applying \(\iota\) to the coefficients of \(P_{i}\), \(Q_{i}\), respectively. A _solution_ of \((\iota*)\) is a tuple \(y=(y_{1},\ldots,y_{n})\in\mathbb{T}^{n}\) such that \(\iota(P_{i})(y)\,\varrho_{i}\,\iota(Q_{i})(y)\) holds in \(\mathbb{T}\), for \(i=1,\ldots,k\). Differential-difference equations in \(\mathbb{T}\) are sometimes amenable to functional-analytic techniques like fixed point theorems or small (compact-like) operators [102], and the formal nature of transseries also makes it possible to solve algebraic differential equations in \(\mathbb{T}\) by quasi-algorithmic methods [100, 103]. The simple example of the Euler equation
\[Y^{\prime}+Y\ =\ x^{-1}\]
is instructive: its solutions in \(\mathcal{C}^{<\infty}\) are given by the germs of
\[t\mapsto\operatorname{e}^{-t}\int_{1}^{t}\frac{\operatorname{e}^{s}}{s}\,ds+c \operatorname{e}^{-t}\colon(1,+\infty)\to\mathbb{R}\qquad(c\in\mathbb{R}),\]
all contained in a common Hardy field extension of \(\mathbb{R}(x)\). The solutions of this differential equation in \(\mathbb{T}\) are
\[\sum_{n}n!\,x^{-(n+1)}+c\operatorname{e}^{-x}\qquad(c\in\mathbb{R}),\]
where the particular solution \(\sum_{n}n!\,x^{-(n+1)}\) is obtained as the unique fixed point of the operator \(f\mapsto x^{-1}-f^{\prime}\) on the differential subfield \(\mathbb{R}(\!(x^{-1})\!)\) of \(\mathbb{T}\) (cf. [ADH, 2.2.13]). (Note: \(\sum_{n}n!\,t^{-(n+1)}\) diverges for each \(t>0\).) In general, the existence of a solution of \((\iota*)\) in \(\mathbb{T}\) entails the existence of a solution of \((*)\) in some Hardy field extension of \(H\) and vice versa; more precisely:
**Corollary 6**.: _The system \((\iota*)\) has a solution in \(\mathbb{T}\) iff \((*)\) has a solution in some Hardy field extension of \(H\). In this case, we can choose a solution of \((*)\) in a Hardy field extension \(E\) of \(H\) for which \(\iota\) extends to an embedding of ordered differential fields \(E\to\mathbb{T}\)._
In particular, a system \((*)\) over \(\mathbb{Q}\) is consistent if and only if it has a solution in \(\mathbb{T}\). (The "if" direction already follows from [ADH, Chapter 16] and [104]; the latter constructs a summation operator on the ordered differential subfield \(\mathbb{T}^{\mathrm{da}}\subseteq\mathbb{T}\) of differentially algebraic transseries.)
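Returning to the Euler example: the relation between the divergent transseries solution and the germ \(\mathrm{e}^{-t}\int_{1}^{t}(\mathrm{e}^{s}/s)\,ds\) can be observed numerically. A small check (ours, assuming scipy; the integral equals \(\mathrm{e}^{-t}\bigl(\operatorname{Ei}(t)-\operatorname{Ei}(1)\bigr)\) in terms of the exponential integral):

```python
import math
from scipy.special import expi   # expi(x) = Ei(x)

t = 20.0
exact = math.exp(-t) * (expi(t) - expi(1.0))   # e^-t * Integral_1^t e^s/s ds

for N in (5, 10, 20, 30, 40):
    partial = sum(math.factorial(n) * t**-(n + 1) for n in range(N))
    print(N, abs(partial - exact))
# The error decreases (down to ~1e-9 near N ~ t = 20) and then grows again:
# sum_n n! t^-(n+1) is a divergent asymptotic expansion, best truncated at
# about N = t terms.
```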
It may seem remarkable that a result about differential polynomials in one differential indeterminate, like Theorem A (or Theorem B below), yields similar facts about _systems_ of algebraic differential equations and asymptotic inequalities in several indeterminates over Hardy fields as in the corollaries above; we owe this to the strength of the model-theoretic methods employed in [ADH]. But our theorem in combination with [ADH] already has interesting consequences for one-variable differential polynomials over \(H\) and over its "complexification" \(K:=H[i]\) (where \(i^{2}=-1\)), which is a differential subfield of the differential ring \(\mathcal{C}^{<\infty}[i]\). Some of these facts are analogous to familiar properties of ordinary one-variable
polynomials over the real or complex numbers. First, it follows from Theorem A that every differential polynomial in a single differential indeterminate over \(H\) of odd degree has a zero in a Hardy field extension of \(H\). (See Corollary 7.1.20.) For example, a differential polynomial like
\[(Y^{\prime\prime})^{5}+\sqrt{2}\,\mathrm{e}^{x}(Y^{\prime\prime})^{4}Y^{\prime \prime\prime}-x^{-1}\log x\,Y^{2}Y^{\prime\prime}+YY^{\prime}-\Gamma\]
has a zero in every maximal Hardy field extension of the Hardy field \(\mathbb{R}\langle\mathrm{e}^{x},\log x,\Gamma\rangle\). Passing to \(K=H[\mathrm{i}]\) we have:
**Corollary 7**.: _For each differential polynomial \(P\notin K\) in a single differential indeterminate with coefficients in \(K\) there are \(f\), \(g\) in a Hardy field extension of \(H\) such that \(P(f+g\mathrm{i})=0\)._
In particular, each linear differential equation
\[y^{(n)}+a_{1}y^{(n-1)}+\cdots+a_{n}y\ =\ b\qquad(a_{1},\ldots,a_{n},b\in K)\]
has a solution \(y=f+g\mathrm{i}\) where \(f\), \(g\) lie in some Hardy field extension of \(H\). (Of course, if \(b=0\), then we may take here the trivial solution \(y=0\).) Although this special case of Corollary 7 concerns differential polynomials of degree \(1\), it seems hard to obtain this result without recourse to our more general extension theorems: a solution \(y\) of a linear differential equation of order \(n\) over \(K\) as above may simultaneously be a zero of a non-linear differential polynomial \(P\) over \(K\) of order \(<n\), and the structure of the differential field extension of \(K\) generated by \(y\) is governed by \(P\) (when taken of minimal complexity in the sense of [ADH, 4.3]).
Turning now to homogeneous linear differential equations over Hardy fields, we first introduce some notation and terminology. Let \(R[\partial]\) be the ring of linear differential operators over a differential ring \(R\): this ring is a free left \(R\)-module with basis \(\partial^{n}\) (\(n\in\mathbb{N}\)) such that \(\partial^{0}=1\) and \(\partial\cdot f=f\partial+f^{\prime}\) for \(f\in R\), where \(\partial:=\partial^{1}\). (See [ADH, 5.1] or [158, 2.1].) Any operator \(A\in R[\partial]\) gives rise to an additive map \(y\mapsto A(y)\colon R\to R\), with \(\partial^{n}(y)=y^{(n)}\) (the \(n\)th derivative of \(y\) in \(R\)) and \(r(y)=ry\) for \(r=r\cdot 1\in R\subseteq R[\partial]\). The elements of \(\partial^{n}+R\partial^{n-1}+\cdots+R\subseteq R[\partial]\) are said to be _monic_ of order \(n\). It is well-known [43, 136, 137] that for \(R=\mathcal{C}^{<\infty}[\mathrm{i}]\), each monic \(A\in R[\partial]\) factors as a product of monic operators of order \(1\) in \(R[\partial]\); if \(A\in K[\partial]\), then such a factorization already happens over the complexification of some Hardy field extension of \(H\):
**Corollary 8**.: _If \(H\) is maximal, then each monic operator in \(K[\partial]\) is a product of monic operators of order \(1\) in \(K[\partial]\)._
This follows quite easily from Corollary 7 using the Riccati transform [ADH, 5.8]. _In the remainder of this subsection we let \(A\in K[\partial]\) be monic of order \(n\), and we fix a maximal Hardy field extension \(E\) of \(H\)._ The factorization result in Corollary 8 gives rise to a description of a fundamental system of solutions for the homogeneous linear differential equation \(A(y)=0\) in terms of Hardy field germs. Here, of course, complex exponential terms naturally appear, but only in a controlled way: the \(\mathbb{C}\)-linear space consisting of all \(y\in\mathcal{C}^{<\infty}[\mathrm{i}]\) with \(A(y)=0\) has a basis of the form
\[f_{1}\,\mathrm{e}^{\phi_{1}\mathrm{i}},\ \ldots,\ f_{n}\,\mathrm{e}^{\phi_{n} \mathrm{i}}\]
where \(f_{j}\in E[\mathrm{i}]\) and \(\phi_{j}\in E\) with \(\phi_{j}=0\) or \(|\phi_{j}|>\mathbb{R}\) for \(j=1,\ldots,n\). We can arrange here that for \(i,j=1,\ldots,n\) we have \(\phi_{i}=\phi_{j}\) or \(|\phi_{i}-\phi_{j}|>\mathbb{R}\). (Note that for \(\phi\) in a Hardy field we have \(\phi>\mathbb{R}\) iff \(\phi(t)\to+\infty\) as \(t\to+\infty\).) In this case, the
basis elements \(f_{i}\,\mathrm{e}^{\phi_{i}\mathrm{i}}\) for distinct frequencies \(\phi_{i}\) are pairwise orthogonal in a sense made precise in Section 7.4.
_Example._ If \(y\in\mathcal{C}^{<\infty}[i]\) is _holonomic,_ that is, \(L(y)=0\) for some monic \(L\in\mathbb{C}(x)[\partial]\), then \(y\) is a \(\mathbb{C}\)-linear combination of germs \(f\,\mathrm{e}^{\phi\mathrm{i}}\) where \(f\in E[i]\), \(\phi\in E\), and \(\phi=0\) or \(|\phi|>\mathbb{R}\). Here, more information about the \(f\), \(\phi\) is available (see, e.g., [70, VIII.7], [204, §19.1]). Many special functions are holonomic [70, B.4].
By the usual correspondence between linear differential operators and matrix differential equations (see, e.g., [ADH, 5.5]), our results about zeros of linear differential operators also yield facts about systems \(y^{\prime}=Ny\) of linear differential equations over Hardy fields. If the matrix \(N\) has suitable symmetry, we can even guarantee the existence of a nonzero solution \(y\) which lies in \(E[i]^{n}\) (and thus does not exhibit oscillating behavior). A sample result, also shown in Section 7.4: every matrix differential equation \(y^{\prime}=Ny\), where \(N\) is an \(n\times n\) matrix over \(K\) (\(n\geqslant 1\)), has a nonzero solution \(y\in E[i]^{n}\) provided \(n\) is _odd_ and \(N\) is _skew-symmetric_. (The study of such matrix differential equations, for \(n=3\), goes back at least to Darboux [54, Livre I, Chapitre II].) For example, for each \(c,d\in\mathbb{C}\) there is a nonzero \(y\in E[i]^{3}\) such that
\[y^{\prime}\ =\ \left(\begin{array}{ccc}0&-c&-\dfrac{d}{\mathrm{e}^{x}+ \mathrm{e}^{-x}}\\ c&0&\dfrac{\mathrm{e}^{x}-\mathrm{e}^{-x}}{\mathrm{e}^{x}+\mathrm{e}^{-x}}\\ \dfrac{d}{\mathrm{e}^{x}+\mathrm{e}^{-x}}&\dfrac{\mathrm{e}^{-x}-\mathrm{e}^{ x}}{\mathrm{e}^{x}+\mathrm{e}^{-x}}&0\end{array}\right)y.\]
(For \(c=0\), \(d=2\sqrt{2}\), this equation is studied in [2].)
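The pointwise linear algebra behind the hypothesis is simple: a real skew-symmetric matrix of odd size is singular, so \(N(t)\) has a nonzero kernel for each \(t\); the result quoted above upgrades this to a solution germ. A quick numerical check of the displayed system (ours, assuming scipy) also exhibits the norm conservation forced by skew-symmetry:

```python
import numpy as np
from scipy.integrate import solve_ivp

c, d = 0.0, 2*np.sqrt(2)   # the parameter choice studied in [2]

def N(t):
    """Coefficient matrix of the displayed system; skew-symmetric for all t."""
    sech = 1.0/(np.exp(t) + np.exp(-t))
    tanh = (np.exp(t) - np.exp(-t))/(np.exp(t) + np.exp(-t))
    return np.array([[0.0,    -c,    -d*sech],
                     [c,       0.0,   tanh  ],
                     [d*sech, -tanh,  0.0   ]])

sol = solve_ivp(lambda t, y: N(t) @ y, (0.0, 10.0), [1.0, 0.0, 0.0],
                rtol=1e-10, atol=1e-12)

# (|y|^2)' = y^T (N + N^T) y = 0, so the norm is constant along solutions,
# and det N(t) = 0 since N(t) is skew-symmetric of odd size.
print(np.ptp(np.linalg.norm(sol.y, axis=0)))         # ~0: norm is conserved
print(np.linalg.det(N(1.0)), np.linalg.det(N(5.0)))  # ~0 in both cases
```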
We now return to the operator setting and focus on the case where \(A\) is real, that is, \(A\in H[\partial]\). Mammana [137] conjectured that each monic operator in \(\mathcal{C}^{<\infty}[\partial]\) of odd order has a monic factor of order \(1\); this is false in general [178] but holds in the Hardy field world, thanks to our "real" version of Corollary 8:
**Corollary 9**.: _Suppose \(A\in H[\partial]\). Then \(A\) is a product of monic operators in \(E[\partial]\), each of order \(1\) or irreducible of order \(2\)._
As a consequence, the \(\mathbb{R}\)-linear space of zeros of \(A\in H[\partial]\) in \(\mathcal{C}^{<\infty}\) has a basis
\[g_{1}\cos\phi_{1},\ g_{1}\sin\phi_{1},\ \ldots,\ g_{r}\cos\phi_{r},\ g_{r} \sin\phi_{r},\ h_{1},\ \ldots,\ h_{s}\qquad(2r+s=n)\]
where \(g_{j},\phi_{j}\in E\) with \(\phi_{j}>\mathbb{R}\) for \(j=1,\ldots,r\) and \(h_{k}\in E\) for \(k=1,\ldots,s\). In particular, if \(n\) is odd, then \(A(y)=0\) for some nonzero \(y\in E\).
A function \(y\colon[a,+\infty)\to\mathbb{R}\) (\(a\in\mathbb{R}\)) is _non-oscillating_ if \(\operatorname{sign}y(t)\) is eventually constant (and otherwise \(y\)_oscillates_). Similarly we define (non)-oscillation of germs. No germ in a Hardy field oscillates. The following corollary characterizes when \(A\) in Corollary 9 is a product of monic operators of order \(1\) in \(E[\partial]\):
**Corollary 10**.: _The \((\)monic\()\) operator \(A\in H[\partial]\) is a product of monic operators of order \(1\) in \(E[\partial]\) iff no zero of \(A\) in \(\mathcal{C}^{<\infty}\) oscillates. In this case \(E\) contains a basis \(y_{1}\prec\cdots\prec y_{n}\) of the \(\mathbb{R}\)-linear space of zeros of \(A\) in \(\mathcal{C}^{<\infty}\), and_
\[A\ =\ (\partial-a_{n})\cdots(\partial-a_{1})\]
_for a unique tuple \((a_{1},\ldots,a_{n})\in E^{n}\) such that for all sufficiently small \(f>\mathbb{R}\) in \(E\) we have \(a_{j}+(f^{\prime\prime}/f^{\prime})<a_{j+1}\) for \(j=1,\ldots,n-1\)._
Factorizations of linear differential operators as in Corollary 10 are closely connected to the classical topic of _disconjugacy._ We recall the definition, which arose from the calculus of variations [212]. Let \(f_{1},\ldots,f_{n}\colon I\to\mathbb{R}\) be continuous, where \(I=[a,+\infty)\), \(a\in\mathbb{R}\). The linear differential equation
(L) \[y^{(n)}+f_{1}y^{(n-1)}+\cdots+f_{n}y\ =\ 0\]
on \(I\) is said to be _disconjugate_ if every nonzero solution \(y\in\mathcal{C}^{n}(I)\) of (L) has at most \(n-1\) zeros, counted with their multiplicities. (For example, \(y^{(n)}=0\), on any such \(I\), is disconjugate.) The solutions of disconjugate linear differential equations are suitable for approximation and interpolation purposes; see [52, Chapter 3] and [56, Chapter 3, §11]. We also say that (L) is _eventually disconjugate_ if for some \(b\geqslant a\) the linear differential equation on \(J:=[b,+\infty)\) obtained from (L) by restricting \(f_{1},\ldots,f_{n}\) to \(J\) is disconjugate. If (L) is eventually disconjugate, then it has no oscillating solutions in \(\mathcal{C}^{n}(I)\). The converse of this implication holds when \(n\leqslant 2\) but fails for each \(n>2\) [81]. There is an extensive literature, mostly dating back to the 1970s, which develops sufficient conditions for (eventual) disconjugacy of linear differential equations (see, e.g., [52, 63, 64, 78, 198]), often by restricting the growth of the \(f_{i}\); for example, (L) is disconjugate if \(\int_{a}^{\infty}\lvert f_{i}(t)\rvert(t-a)^{i-1}\,dt<\infty\) for \(i=1,\ldots,n\) (cf. [209]).
**Corollary 11**.: _If the germs of \(f_{1},\ldots,f_{n}\) lie in a Hardy field and (L) has no oscillating solutions in \(\mathcal{C}^{n}(I)\), then (L) is eventually disconjugate._
A fundamental property of disconjugate linear differential operators is the existence of a canonical factorization discovered by Trench [200]. (See also Proposition 5.2.42 below.) Corollary 10 can also be used to strengthen this factorization in the situation of Corollary 11. See Corollary 7.4.58 for the details.
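As an illustration of the integral criterion quoted before Corollary 11 (our own toy example, assuming scipy): for \(n=2\), \(a=1\), \(f_{1}(t)=t^{-2}\), \(f_{2}(t)=t^{-3}\), both integrals are finite, so the equation is disconjugate and a nonzero solution has at most one zero.

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

a = 1.0
f1 = lambda t: t**-2.0
f2 = lambda t: t**-3.0

# Criterion: disconjugate if int_a^oo |f_i(t)| (t-a)^(i-1) dt < oo, i = 1, 2.
I1, _ = quad(lambda t: abs(f1(t)), a, np.inf)
I2, _ = quad(lambda t: abs(f2(t))*(t - a), a, np.inf)
print(I1, I2)   # 1.0 and 0.5: both finite

# Hence any nonzero solution of y'' + f1*y' + f2*y = 0 has at most one zero.
rhs = lambda t, u: [u[1], -f1(t)*u[1] - f2(t)*u[0]]
sol = solve_ivp(rhs, (a, 200.0), [1.0, -5.0], max_step=0.1)
print(int(np.sum(np.diff(np.sign(sol.y[0])) != 0)))  # sign changes: at most 1
```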
We finish with discussing the instructive case of an operator \(A\in H[\partial]\) of order \(2\). If such \(A\) has a non-oscillating zero \(y\neq 0\) in \(\mathcal{C}^{<\infty}\), then by Sturm's Oscillation Theorem all zeros of \(A\) in \(\mathcal{C}^{<\infty}\) are non-oscillating and hence contained in every maximal Hardy field, by Corollary 5.5.7 or [33, Theorem 16.7], [171, Corollary 2]. For example, the germs of the \(\mathbb{R}\)-linearly independent solutions \(\operatorname{Ai},\operatorname{Bi}\colon\mathbb{R}\to\mathbb{R}\) of the Airy equation \(Y^{\prime\prime}-xY=0\) given by
\[\operatorname{Ai}(t) =\ \frac{1}{\pi}\int_{0}^{\infty}\cos\left(\frac{s^{3}}{3}+st \right)ds,\] \[\operatorname{Bi}(t) =\ \frac{1}{\pi}\int_{0}^{\infty}\left[\exp\left(-\frac{s^{3}}{3}+ st\right)+\sin\left(\frac{s^{3}}{3}+st\right)\right]ds\]
lie in each maximal Hardy field, with \(\operatorname{Ai}\prec 1\prec\operatorname{Bi}\). In the oscillating case, we have:
**Corollary 12**.: _If \(A\in H[\partial]\) of order \(2\) has an oscillating zero in \(\mathcal{C}^{<\infty}\), then there are \(g,\phi\in E\) with \(\phi>\mathbb{R}\) such that the zeros of \(A\) in \(\mathcal{C}^{<\infty}\) are exactly the germs \(cg\cos(\phi+d)\)\((c,d\in\mathbb{R})\)._
This corollary was announced by Boshernitzan as part of [35, Theorem 5.4], but apparently a proof of this theorem never appeared in print. (See also [33, Conjecture 4 in §20].) In Section 7.5 below we state and prove a strengthening of his theorem; this includes a criterion for the uniqueness of the germs \(g\), \(\phi\). For every \(\phi>\mathbb{R}\) in \(E\) there is at most one \(g\in E\) (up to multiplication by a nonzero constant) such that
the conclusion of Corollary 12 holds; in general, the pair \((g,\phi)\) is not unique (up to multiplication of \(g\) by a nonzero constant and addition of a constant to \(\phi\)), but it is if the coefficients of \(A\) are differentially algebraic (over \(\mathbb{R}\)). As a final example, consider the Bessel equation of order \(\nu\in\mathbb{R}\) (see, e.g., [66, VII], [144, I, §9], [205]):
\[x^{2}Y^{\prime\prime}+xY^{\prime}+(x^{2}-\nu^{2})Y\ =\ 0.\]
It is well-known that each solution \(y\in\mathcal{C}^{<\infty}\) of this equation satisfies
\[y\ =\ cx^{-1/2}\cos(x+d)+o(x^{-1/2})\quad\text{for some $c,d\in\mathbb{R}$}.\]
(See [18, Chapter 6, §18], [91, Corollary XI.8.1], [204, Example 13.2].) We give a similar parametrization using germs in Hardy fields. More precisely, we show that there is a unique germ \(\phi=\phi_{\nu}\) in a Hardy field with \(\phi-x\preccurlyeq x^{-1}\) such that every solution \(y\in\mathcal{C}^{<\infty}\) of the Bessel equation has the form
\[y\ =\ cx^{-1/2}g\cos(\phi+d)\quad\text{for some $c,d\in\mathbb{R}$ and $g:=1/\sqrt{\phi^{\prime}}$}.\]
This explains the phenomenon, observed in [95, 96], that the Bessel equation admits a non-oscillating phase function. Knowing that \(\phi\) lives in a Hardy field allows one to reprove a number of classical results about Bessel functions in a short transparent way. Remarkably, every maximal Hardy field contains the phase function \(\phi\), and \(\phi\) has an asymptotic expansion
\[\phi\sim x+\frac{\mu-1}{8}x^{-1}+\frac{\mu^{2}-26\mu+25}{384}x^{-3}+\frac{\mu ^{3}-115\mu^{2}+1187\mu-1073}{5120}x^{-5}+\cdots\]
where \(\mu=4\nu^{2}\). Only for special choices of \(\nu\) is the germ \(\phi\) contained in the Liouville closure of \(\mathbb{R}(x)\), and hence easily obtainable by the classical extension results for Hardy fields from [32, 84, 171]: by results of Liouville [132] this holds precisely if \(\nu\in\frac{1}{2}+\mathbb{Z}\); see Section 7.6 for a proof.
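A quick numerical illustration of the expansion (ours, assuming scipy), for \(\nu=0\): since every solution has the form \(cx^{-1/2}g\cos(\phi+d)\), the values of \(\phi\) at consecutive zeros of the Bessel function \(J_{0}\) must differ by exactly \(\pi\).

```python
import numpy as np
from scipy.special import jn_zeros

nu = 0
mu = 4*nu**2

def phi(x):
    """Truncation of the asymptotic expansion of the phase phi_nu above."""
    return (x + (mu - 1)/8 * x**-1.0
              + (mu**2 - 26*mu + 25)/384 * x**-3.0
              + (mu**3 - 115*mu**2 + 1187*mu - 1073)/5120 * x**-5.0)

z = jn_zeros(nu, 12)             # first 12 positive zeros of J_0
print(np.diff(phi(z)) - np.pi)   # ~0: consecutive zeros are pi apart in phase
```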
Michael Boshernitzan's papers [32]-[36] on Hardy fields have been a frequent source of inspiration for us, and we dedicate this work to his memory. He paid particular attention to the germs in \(\mathcal{C}^{<\infty}\), such as \(\phi_{\nu}\) above, that lie in _every_ maximal Hardy field. They form a Liouville closed Hardy field E properly containing Hardy's differential field of logarithmic-exponential functions. In the course of our work below we prove Conjecture 1 from [32, §10] and Conjecture 4 from [33, §20] about E. We also prove Conjecture 17.11 from [33, §17] and answer Question 4 from [34, §7]. (See Corollary 7.2.14, Theorems 7.5.1 and 5.5.38, and Proposition 5.6.6, respectively.) Section 7.7 contains some additional observations which may eventually help to shed further light on the nature of the Hardy field E.
### Synopsis of the proof of our main theorem
In the rest of the paper we assume familiarity with the terminology and concepts of asymptotic differential algebra from our book [ADH]. (We review some of this in the last subsection of the introduction below.) The proof of our main result requires, besides differential-algebraic and valuation-theoretic tools from [ADH], also analytic arguments in an essential way. Some of our analytic machinery is obtained by adapting material from [104] to a more general setting. As explained earlier, our main Theorem A is a consequence of the following extension theorem:
**Theorem B**.: _Every \(\omega\)-free Hardy field has a newtonian Hardy field extension._
The proof of this is long, so it may be useful to outline the strategy behind it.
#### Holes and slots

For now, let \(K\) be an \(H\)-asymptotic field with rational asymptotic integration. In Section 3.2 below we introduce the apparatus of _holes_ in \(K\) as a means to systematize the study of solutions of algebraic differential equations over \(K\) in immediate asymptotic extensions of \(K\): such a hole in \(K\) is a triple \((P,\mathfrak{m},\widehat{f})\) where \(P\) is a differential polynomial in a single differential indeterminate \(Y\) with coefficients in \(K\), \(P\neq 0\), \(0\neq\mathfrak{m}\in K\), and \(\widehat{f}\notin K\) lies in an immediate asymptotic extension of \(K\) with \(P(\widehat{f})=0\) and \(\widehat{f}\prec\mathfrak{m}\). It is sometimes technically convenient to work with the more flexible concept of a _slot_ in \(K\), where instead of \(P(\widehat{f})=0\) we only require \(P\) to vanish at \((K,\widehat{f})\) in the sense of [ADH, 11.4]. The _complexity_ of a slot \((P,\mathfrak{m},\widehat{f})\) is the complexity of the differential polynomial \(P\) as in [ADH, p. 216]. Now if \(K\) is \(\omega\)-free, then by Lemma 3.2.1,
\[K\text{ is newtonian}\quad\Longleftrightarrow\quad K\text{ has no hole.}\]
This equivalence suggests an opening move for proving Theorem B by induction on complexity as follows: Let \(H\supseteq\mathbb{R}\) be an \(\omega\)-free Liouville closed Hardy field, and suppose \(H\) is not newtonian; it is enough to show that then \(H\) has a proper Hardy field extension. By the above equivalence, \(H\) has a hole \((P,\mathfrak{m},\widehat{f})\), and we can take here \((P,\mathfrak{m},\widehat{f})\) to be of minimal complexity among holes in \(H\). This minimality has consequences that are important for us; for example \(r:=\operatorname{order}P\geqslant 1\), \(P\) is a minimal annihilator of \(\widehat{f}\) over \(H\), and \(H\) is \((r-1)\)-newtonian as defined in [ADH, 14.2]. We arrange \(\mathfrak{m}=1\) by replacing \((P,\mathfrak{m},\widehat{f})\) with the hole \((P_{\times\mathfrak{m}},1,\widehat{f}/\mathfrak{m})\) in \(H\).
#### Solving algebraic differential equations over Hardy fields

For Theorem B it is enough to show that under these conditions \(P\) is a minimal annihilator of some germ \(f\in\mathcal{C}^{<\infty}\) that generates a (necessarily proper) Hardy field extension \(H\langle f\rangle\) of \(H\). So at a minimum, we need to find a solution in \(\mathcal{C}^{<\infty}\) to the algebraic differential equation \(P(Y)=0\). For this, it is natural to use fixed point techniques as in [104]. Notation: for \(a\in\mathbb{R}\), let \(\mathcal{C}^{n}_{a}\) be the \(\mathbb{R}\)-linear space of functions \([a,+\infty)\to\mathbb{R}\) which extend to an \(n\)-times continuously differentiable function \(U\to\mathbb{R}\) on an open subset \(U\supseteq[a,+\infty)\) of \(\mathbb{R}\). For any \(a\) and \(n\), each germ in \(\mathcal{C}^{<\infty}\) has representatives in \(\mathcal{C}^{n}_{a}\).
#### A fixed point theorem

Let \(L:=L_{P}\in H[\partial]\) be the linear part of \(P\). Replacing \((P,1,\widehat{f})\) with another minimal hole in \(H\) we arrange \(\operatorname{order}L=r\). Representing the coefficients of \(P\) (and thus of \(L\)) by functions in \(\mathcal{C}^{0}_{a}\) we obtain an \(\mathbb{R}\)-linear operator \(y\mapsto L(y)\colon\mathcal{C}^{r}_{a}\to\mathcal{C}^{0}_{a}\). For now we make the bold assumption that \(L\in H[\partial]\) splits over \(H\). Using such a splitting and increasing \(a\) if necessary, \(r\)-fold integration yields an \(\mathbb{R}\)-linear operator \(L^{-1}\colon\mathcal{C}^{0}_{a}\to\mathcal{C}^{r}_{a}\) which is a _right-inverse_ of \(L\colon\mathcal{C}^{r}_{a}\to\mathcal{C}^{0}_{a}\), that is, \(L\big{(}L^{-1}(y)\big{)}=y\) for all \(y\in\mathcal{C}^{0}_{a}\). Consider the (generally non-linear) operator
\[f\mapsto\Phi(f):=L^{-1}\big{(}R(f)\big{)}\]
on \(\mathcal{C}^{r}_{a}\); here \(P=P_{1}-R\) where \(P_{1}\) is the homogeneous part of degree \(1\) of \(P\). We try to show that \(\Phi\) restricts to a contractive operator on a closed ball of an appropriate subspace of \(\mathcal{C}^{r}_{a}\) equipped with a suitable complete norm, whose fixed points are then solutions to \(P(Y)=0\); this may also involve increasing \(a\) again and replacing the coefficient functions of \(P\) by their corresponding restrictions. To obtain such contractivity, we would need to ensure that \(R\) is asymptotically small compared to \(P_{1}\) in a certain sense. This can indeed be achieved by transforming \((P,1,\widehat{f})\)
into a certain normal form through successive _refinements_ and (_additive_, _multiplicative_, and _compositional_) _conjugations_ of the hole \((P,1,\widehat{f})\). This normalization is done under more general algebraic assumptions in Section 3.3. The analytic arguments leading to fixed points are in Sections 6.1-6.3. Developments below involve the algebraic closure \(K:=H[i]\) of \(H\) and we work more generally with a decomposition \(P=\widetilde{P}_{1}-R\) where \(\widetilde{P}_{1}\in K\{Y\}\) is homogeneous of degree \(1\), not necessarily \(\widetilde{P}_{1}=P_{1}\), such that \(L_{\widetilde{P}_{1}}\in K[\partial]\) splits and \(R\) is "small" compared to \(\widetilde{P}_{1}\).
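To convey the flavor of this in a drastically simplified setting, here is a purely numerical toy version (our own aside, not the construction of Part 6; it assumes a recent scipy): for the perturbed Euler equation \(y^{\prime}+y=x^{-1}+\varepsilon y^{2}\) take \(L=\partial+1\) and \(R(f)=x^{-1}+\varepsilon f^{2}\), build a right-inverse of \(L\) by integration against the integrating factor, and iterate \(\Phi\).

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

a, eps = 1.0, 0.1
t = np.linspace(a, 30.0, 20000)

def Linv(h):
    """Right-inverse of L = d/dt + 1: (L^-1 h)(t) = e^-t * int_a^t e^s h(s) ds."""
    return np.exp(-t) * cumulative_trapezoid(np.exp(t)*h, t, initial=0.0)

# Phi(f) = L^-1(R(f)); a fixed point solves y' + y = 1/t + eps*y^2, y(a) = 0.
f = np.zeros_like(t)
for k in range(8):
    f_new = Linv(1.0/t + eps*f**2)
    print(k, np.max(np.abs(f_new - f)))  # sup-norm differences shrink geometrically
    f = f_new
```

Since \(\sup_{t}\mathrm{e}^{-t}\int_{a}^{t}\mathrm{e}^{s}\,ds\leqslant 1\), the operator here is contractive as soon as \(\varepsilon\) times the relevant sup-norms is small, a grossly simplified analogue of \(R\) being "small" compared to \(\widetilde{P}_{1}\).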
#### Passing to the complex realm
In general we are not so lucky that \(L\) splits over \(H\). The minimality of our hole \((P,1,\widehat{f})\) does not even ensure that \(L\) splits over \(K\). At this point we recall from [ADH, 11.7.23] that \(K\) is \(\omega\)-free because \(H\) is. We can also draw hope from the fact that every nonzero linear differential operator over \(K\) would split over \(K\) if \(H\) were newtonian [ADH, 14.5.8]. Although \(H\) is not newtonian, it is \((r-1)\)-newtonian, and \(L\) is only of order \(r\), so we optimistically restart our attempt, and instead of a hole of minimal complexity in \(H\), we now let \((P,\mathfrak{m},\widehat{f})\) be a hole of minimal complexity in \(K\). Again it follows that \(r:=\operatorname{order}P\geqslant 1\), \(P\) is a minimal annihilator of \(\widehat{f}\) over \(K\), and \(K\) is \((r-1)\)-newtonian. As before we arrange that \(\mathfrak{m}=1\) and the linear part \(L_{P}\in K[\partial]\) of \(P\) has order \(r\). We can also arrange \(\widehat{f}\in\widehat{K}=\widehat{H}[i]\) where \(\widehat{H}\) is an immediate asymptotic extension of \(H\). So \(\widehat{f}=\widehat{g}+\widehat{h}i\) where \(\widehat{g},\widehat{h}\in\widehat{H}\) satisfy \(\widehat{g},\widehat{h}\prec 1\), and \(\widehat{g}\notin H\) or \(\widehat{h}\notin H\), say \(\widehat{g}\notin H\). Now minimality of \((P,1,\widehat{f})\) and algebraic closedness of \(K\) give that \(K\) is \(r\)-linearly closed, that is, every nonzero \(A\in K[\partial]\) of order \(\leqslant r\) splits over \(K\) (Corollary 3.2.4). Then \(L_{P}\) splits over \(K\) as desired, and a version of the above fixed point construction with \(\mathcal{C}_{a}^{r}[\mathrm{i}]\) in place of \(\mathcal{C}_{a}^{r}\) can be carried out successfully to solve \(P(Y)=0\) in the differential ring extension \(\mathcal{C}^{<\infty}[\mathrm{i}]\) of \(\mathcal{C}^{<\infty}\).
#### Return to the real world
But at this point we face another obstacle: even once we have our hands on a zero \(f\in\mathcal{C}^{<\infty}[\mathrm{i}]\) of \(P\), it is not clear why \(g:=\operatorname{Re}f\) should generate a proper Hardy field extension of \(H\): Let \(Q\) be a minimal annihilator of \(\widehat{g}\) over \(H\); we cannot expect that \(Q(g)=0\). If \(L_{Q}\in H[\partial]\) splits over \(K\), then we can try to apply fixed point arguments like the ones above, with \((P,1,\widehat{f})\) replaced by the hole \((Q,1,\widehat{g})\) in \(H\), to find a zero \(y\in\mathcal{C}^{<\infty}\) of \(Q\). (We do need to take care that the constructed zero is real.) Unfortunately we can only ascertain that \(1\leqslant s\leqslant 2r\) for \(s:=\operatorname{order}Q\), and since we may have \(s>r\), we cannot leverage the minimality of \((P,1,\widehat{f})\) anymore to ensure that \(L_{Q}\) splits over \(K\), or to normalize \((Q,1,\widehat{g})\) in the same way as indicated above for \((P,1,\widehat{f})\). This situation seems hopeless, but now a purely differential-algebraic observation comes to the rescue: although the linear part \(L_{Q_{+\widehat{g}}}\in\widehat{H}[\partial]\) of the differential polynomial \(Q_{+\widehat{g}}\in\widehat{H}\{Y\}\) also has order \(s\) (which may be \(>r\)), _if \(\widehat{K}\) is \(r\)-linearly closed, then \(L_{Q_{+\widehat{g}}}\) does split over \(\widehat{K}\)_; see [ADH, 5.1.37]. If moreover \(g\in H\) is sufficiently close to \(\widehat{g}\), then the linear part \(L_{Q_{+g}}\in H[\partial]\) of \(Q_{+g}\in H\{Y\}\) is close to an operator in \(H[\partial]\) that does split over \(K=H[\mathrm{i}]\), and so using \((Q_{+g},1,\widehat{g}-g)\) instead of \((Q,1,\widehat{g})\) may offer a way out of this impasse.
#### Approximating \(\widehat{g}\)
Suppose for a moment that \(H\) is (valuation) dense in \(\widehat{H}\). Then by extending \(\widehat{H}\) we arrange that \(\widehat{H}\) is the completion of \(H\), and \(\widehat{K}\) of \(K\) (as in [ADH, 4.4]). In this case \(\widehat{K}\) inherits from \(K\) the property of being \(r\)-linearly closed, by results in Section 1.8, and the desired approximation of \(\widehat{g}\) by \(g\in H\) can be achieved.
We cannot in general expect \(H\) to be dense in \(\widehat{H}\). But we are saved by the results of Section 1.6, to the effect that \(\widehat{g}\) can be made _special_ over \(H\) in the sense of [ADH, 3.4], that is, some nontrivial convex subgroup \(\Delta\) of the value group of \(H\) is cofinal in \(v(\widehat{g}-H)\). Then passing to the \(\Delta\)-specializations of the various valued differential fields encountered above (see [ADH, 9.4]) we regain density, and this allows us to implement the desired approximation. The technical details are involved, and are carried out in the first three sections of Part 4. A minor obstacle to obtaining the necessary specialness of \(\widehat{g}\) is that the hole \((Q,1,\widehat{g})\) in \(H\) may not be of minimal complexity. This can be ameliorated by using a differential polynomial of minimal complexity vanishing at \((H,\widehat{g})\) instead of \(Q\), in the process replacing the hole \((Q,1,\widehat{g})\) in \(H\) by a slot in \(H\), which we then aim to approximate by a _strongly split-normal_ slot in \(H\); see Definition 4.5.32. Another caveat: to carry out our approximation scheme we require \(\deg P>1\). Fortunately, if \(\deg P=1\), then necessarily \(r=\operatorname{order}P=1\), and this case can be dealt with through separate arguments: see Section 6.7 where we finish the proof of Theorem B.
#### Enlarging the Hardy field

Now suppose we have finally arranged things so that our Fixed Point Theorem applies: it delivers \(g\in\mathcal{C}^{<\infty}\) such that \(Q(g)=0\) and \(g\prec 1\). (Notation: for a germ \(\phi\in\mathcal{C}^{<\infty}[\mathrm{i}]\) and \(0\neq\mathfrak{n}\in H\) we write \(\phi\prec\mathfrak{n}\) if \(\phi(t)/\mathfrak{n}(t)\to 0\) as \(t\to+\infty\).) However, for \(g\) to generate a proper Hardy field extension \(H\langle g\rangle\) of \(H\) isomorphic to \(H\langle\widehat{g}\rangle\) by an isomorphism over \(H\) sending \(g\) to \(\widehat{g}\), the germs \(g\) and \(\widehat{g}\) must have similar asymptotic properties with respect to the elements of \(H\). For example, suppose \(h,\mathfrak{n}\in H\) and \(\widehat{g}-h\prec\mathfrak{n}\preccurlyeq 1\); then we must show \(g-h\prec\mathfrak{n}\). (Of course, we need to show much more about the asymptotic behavior of \(g\), and this is expressed using the notion of _asymptotic similarity_: see Sections 6.6 and 6.7.) Now the germ \((g-h)/\mathfrak{n}\in\mathcal{C}^{<\infty}\) is a zero of the conjugated differential polynomial \(Q_{+h,\times\mathfrak{n}}\in H\{Y\}\), as is the element \((\widehat{g}-h)/\mathfrak{n}\prec 1\) of \(\widehat{H}\). The Fixed Point Theorem can also be used to produce a zero \(y\prec 1\) of \(Q_{+h,\times\mathfrak{n}}\) in \(\mathcal{C}^{<\infty}\). Set \(g_{1}:=y\mathfrak{n}+h\); then \(Q(g)=Q(g_{1})=0\) and \(g,g_{1}\prec 1\). We are thus naturally led to consider the difference \(g-g_{1}\) between the solutions \(g,g_{1}\in\mathcal{C}^{<\infty}\) of the differential equation (with asymptotic side condition)
(E) \[Q(Y)\ =\ 0,\qquad Y\prec 1.\]
If we manage to show \(g-g_{1}\prec\mathfrak{n}\), then \(g-h=(g-g_{1})-y\mathfrak{n}\prec\mathfrak{n}\) as required. Simple estimates coming out of the proof of the Fixed Point Theorem are not good enough for this (cf. Lemma 6.2.5). We need a generalization of the Fixed Point Theorem for _weighted norms_ with (the germ of) the relevant weight function given by \(\mathfrak{n}\), shown in Section 6.5. To render this generalized version useful, we also have to make the construction of the right-inverse \(A^{-1}\) of the linear differential operator \(A\in H[\partial]\), which splits over \(K\) and approximates \(L_{Q}\) as postulated by strong split-normality, and which is central for the definition of the contractive operator used in the Fixed Point Theorem, in some sense uniform in \(\mathfrak{n}\). This is carried out in Section 4.5, refining our approximation arguments by improving strong split-normality to _strong repulsive-normality_ as defined in 4.5.32.
#### Exponential sums

Just for this discussion, call \(\phi\in\mathcal{C}^{<\infty}[\mathrm{i}]\) _small_ if \(\phi\prec\mathfrak{n}\) for all \(\mathfrak{n}\in H\) with \(v\mathfrak{n}\in v(\widehat{g}-H)\). Thus our aim is to show that differences between solutions of (E) in \(\mathcal{C}^{<\infty}\) are small in this sense. We show that each such difference gives rise to a zero \(z\in\mathcal{C}^{<\infty}[\mathrm{i}]\) of \(A\) with \(z\prec 1\) whose smallness would imply the smallness of the difference under consideration. To ensure that every zero \(z\prec 1\)
of \(A\) is indeed small requires us to have performed beforehand yet another (rather unproblematic) normalization procedure on our slot, transforming it into _ultimate_ shape. (See Section 4.4.) Recall the special fundamental systems of solutions to linear differential equations over maximal Hardy fields explained after Corollary 8: since \(A\) splits over \(K\), our zero \(z\) of \(A\) is a \(\mathbb{C}\)-linear combination of exponential terms. As a tool for systematically dealing with such exponential sums over \(K\) in a formal way, we introduce the concept of the _universal exponential extension_ of a differential field. Finally, from conditions like \(z\prec 1\) we need to be able to obtain asymptotic information about the summands of \(z\) when expressed as an exponential sum in a certain canonical way. For this we are able to exploit facts about uniform distribution mod 1 for germs in Hardy fields due to Boshernitzan [36]; see Sections 5.8-5.10.
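The equidistribution input can be seen in miniature (our own toy check, using numpy): the fractional parts of a Hardy-field germ such as \(t^{3/2}\), sampled at the integers, are uniformly distributed mod 1, as measured by a histogram or a first Weyl sum.

```python
import numpy as np

n = np.arange(1, 200001)
frac = np.mod(n**1.5, 1.0)            # fractional parts of n^(3/2)

hist, _ = np.histogram(frac, bins=10, range=(0.0, 1.0))
print(hist/len(frac))                 # each bin close to 0.1

print(abs(np.exp(2j*np.pi*frac).mean()))   # first Weyl sum: close to 0
```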
### Organization of the manuscript
Part 1 has preliminaries on linear differential operators and differential polynomials, on the group of logarithmic derivatives, on special elements, and on differential-henselianity and newtonianity. In Part 2 we define the universal exponential extension of a differential field, and we consider the eigenvalues of linear differential operators and their connections to splittings. Part 3 then introduces holes and slots, and proves the Normalization Theorem hinted at earlier in this introduction. In Part 4 we focus on slots in \(H\)-fields and their algebraic closures, and implement the approximation arguments for obtaining (strongly) split-normal or repulsive-normal slots. In Part 5 we begin the analytic part of the paper, introducing Hardy fields, showing that maximal Hardy fields are \(\omega\)-free, and investigating the universal exponential extensions of Hardy fields. In the final act (Part 6) we prove our Fixed Point Theorem and give the proof of Theorem B. We finish with a coda (Part 7) consisting of applications, including the proof of Theorem A and the corollaries above. We refer to the introduction of each part for more details about their respective contents.
### Previous work
Theorem A for \(P\) of order 1 is in [59]. By [104] there exists a Hardy field \(H\supseteq\mathbb{R}\) isomorphic as an ordered differential field to \(\mathbb{T}_{\mathrm{g}}\), so by [103] this \(H\) has the intermediate value property for all differential polynomials over it. We announced the \(\omega\)-freeness of maximal Hardy fields already in [12].
### Notations and terminology
We freely use the notations and conventions from our book [ADH], and recall here a few. Throughout, \(m\), \(n\) range over the set \(\mathbb{N}=\{0,1,2,\dots\}\). Given an additively written abelian group \(A\) we let \(A^{\neq}:=A\setminus\{0\}\). Rings (usually, but not always, commutative) are associative with identity 1. For a ring \(R\) we let \(R^{\times}\) be the multiplicative group of units of \(R\) (consisting of the \(a\in R\) such that \(ab=ba=1\) for some \(b\in R\)).
A _differential ring_ is a commutative ring \(R\) containing (an isomorphic copy of) \(\mathbb{Q}\) as a subring and equipped with a derivation \(\partial\colon R\to R\), in which case \(C_{R}:=\ker\partial\) is a subring of \(R\), called the ring of constants of \(R\), and \(\mathbb{Q}\subseteq C_{R}\). A _differential field_ is a differential ring \(K\) whose underlying ring is a field. In this case \(C_{K}\) is a subfield of \(K\), and if \(K\) is understood from the context we often write \(C\) instead of \(C_{K}\). An _ordered differential field_ is an ordered field equipped with a derivation on its underlying field; such an ordered differential field is in particular a differential ring.
Often we are given a differential field \(H\) in which \(-1\) is not a square, and then \(H[\mathrm{i}]\) is a differential field extension with \(\mathrm{i}^{2}=-1\). Then for \(z\in H[\mathrm{i}]\), \(z=a+b\mathrm{i}\), \(a,b\in H\)
we set \(\operatorname{Re}z:=a\), \(\operatorname{Im}z:=b\), and \(\overline{z}:=a-b\mathrm{i}\). Then \(z\mapsto\overline{z}\) is an automorphism of the differential field \(H[\mathrm{i}]\). If in addition there is given a differential field extension \(F\) of \(H\) in which \(-1\) is not a square, we always tacitly arrange \(\mathrm{i}\) to be such that \(H[\mathrm{i}]\) is a differential subfield of the differential field extension \(F[\mathrm{i}]\) of \(F\).
Let \(R\) be a differential ring and \(a\in R\). When its derivation \(\mathfrak{d}\) is clear from the context we denote \(\mathfrak{d}(a),\mathfrak{d}^{2}(a),\dots,\mathfrak{d}^{n}(a),\dots\) by \(a^{\prime},a^{\prime\prime},\dots,a^{(n)},\dots\), and if \(a\in R^{\times}\), then \(a^{\dagger}:=a^{\prime}/a\) denotes the logarithmic derivative of \(a\), so \((ab)^{\dagger}=a^{\dagger}+b^{\dagger}\) for all \(a,b\in R^{\times}\). We have the differential ring \(R\{Y\}=R[Y,Y^{\prime},Y^{\prime\prime},\dots]\) of differential polynomials in a differential indeterminate \(Y\) over \(R\). Given \(P=P(Y)\in R\{Y\}\), the smallest \(r\in\mathbb{N}\) such that \(P\in R[Y,Y^{\prime},\dots,Y^{(r)}]\) is called the order of \(P\), denoted by \(r=\operatorname{order}(P)\); if \(P\) has order \(r\), then \(P=\sum_{\boldsymbol{i}}P_{\boldsymbol{i}}Y^{\boldsymbol{i}}\), as in [ADH, 4.2], with \(\boldsymbol{i}\) ranging over tuples \((i_{0},\dots,i_{r})\in\mathbb{N}^{1+r}\), \(Y^{\boldsymbol{i}}:=Y^{i_{0}}(Y^{\prime})^{i_{1}}\cdots(Y^{(r)})^{i_{r}}\), coefficients \(P_{\boldsymbol{i}}\) in \(R\), and \(P_{\boldsymbol{i}}\neq 0\) for only finitely many \(\boldsymbol{i}\). For \(P\in R\{Y\}\) and \(a\in R\) we let \(P_{+a}(Y):=P(a+Y)\) and \(P_{\times a}(Y):=P(aY)\) be the _additive conjugate_ and the _multiplicative conjugate_ of \(P\) by \(a\), respectively. For \(\phi\in R^{\times}\) we also let \(R^{\phi}\) be the _compositional conjugate of \(R\) by \(\phi\)_: the differential ring with the same underlying ring as \(R\) but with derivation \(\phi^{-1}\mathfrak{d}\) (usually denoted by \(\delta\)) instead of \(\mathfrak{d}\). We have an \(R\)-algebra isomorphism \(P\mapsto P^{\phi}\colon R\{Y\}\to R^{\phi}\{Y\}\) such that \(P^{\phi}(y)=P(y)\) for all \(y\in R\); see [ADH, 5.7].
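The additive and multiplicative conjugations are plain substitutions, which makes them easy to experiment with symbolically. A minimal sympy sketch (ours; the helper names are ad hoc) encodes a differential polynomial as an expression in \(y(x)\) and its derivatives and verifies \(P_{+a}(b)=P(a+b)\) and \(P_{\times a}(b)=P(ab)\) on sample germs:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Sample differential polynomial of order 2: P(Y) = Y*Y'' - (Y')^2 + x*Y.
P = y(x)*y(x).diff(x, 2) - y(x).diff(x)**2 + x*y(x)

def conj_add(P, a):
    """Additive conjugate P_{+a}(Y) = P(a + Y)."""
    return P.subs(y(x), a + y(x)).doit()

def conj_mul(P, a):
    """Multiplicative conjugate P_{xa}(Y) = P(a*Y)."""
    return P.subs(y(x), a*y(x)).doit()

def ev(P, f):
    """Evaluate P at a concrete germ f."""
    return P.subs(y(x), f).doit()

a, b = sp.exp(x), sp.log(x)
print(sp.simplify(ev(conj_add(P, a), b) - ev(P, a + b)))  # 0
print(sp.simplify(ev(conj_mul(P, a), b) - ev(P, a*b)))    # 0
```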
For a field \(K\) we have \(K^{\times}=K^{\neq}\), and a (Krull) valuation on \(K\) is a surjective map \(v\colon K^{\times}\to\Gamma\) onto an ordered abelian group \(\Gamma\) (additively written) satisfying the usual laws, and extended to \(v\colon K\to\Gamma_{\infty}:=\Gamma\cup\{\infty\}\) by \(v(0)=\infty\), where the ordering on \(\Gamma\) is extended to a total ordering on \(\Gamma_{\infty}\) by \(\gamma<\infty\) for all \(\gamma\in\Gamma\). A _valued field_\(K\) is a field (also denoted by \(K\)) together with a valuation ring \(\mathcal{O}\) of that field, and the corresponding valuation \(v\colon K^{\times}\to\Gamma\) on the underlying field is such that \(\mathcal{O}=\{a\in K:va\geqslant 0\}\) as explained in [ADH, 3.1].
Let \(K\) be a valued field with valuation ring \(\mathcal{O}_{K}\) and valuation \(v\colon K^{\times}\to\Gamma_{K}\). Then \(\mathcal{O}_{K}\) is a local ring with maximal ideal \(\mathfrak{o}_{K}=\{a\in K:va>0\}\) and residue field \(\operatorname{res}(K)=\mathcal{O}_{K}/\mathfrak{o}_{K}\). If \(\operatorname{res}(K)\) has characteristic zero, then \(K\) is said to be of equicharacteristic zero. When, as here, we use the capital \(K\) for the valued field under consideration, then we denote \(\Gamma_{K}\), \(\mathcal{O}_{K}\), \(\mathfrak{o}_{K}\), by \(\Gamma\), \(\mathcal{O}\), \(\mathfrak{o}\), respectively. A very handy alternative notation system in connection with the valuation is as follows. With \(a,b\) ranging over \(K\), set
\[a\asymp b\ :\Leftrightarrow\ va=vb,\qquad a\preccurlyeq b\ :\Leftrightarrow\ va\geqslant vb,\qquad a\prec b\ :\Leftrightarrow\ va>vb,\] \[a\succcurlyeq b\ :\Leftrightarrow\ b\preccurlyeq a,\qquad a\succ b\ :\Leftrightarrow\ b\prec a,\qquad a\sim b\ :\Leftrightarrow\ a-b\prec a.\]
It is easy to check that if \(a\sim b\), then \(a,b\neq 0\) and \(a\asymp b\), and that \(\sim\) is an equivalence relation on \(K^{\times}\). Given a valued field extension \(L\) of \(K\), we identify in the usual way \(\operatorname{res}(K)\) with a subfield of \(\operatorname{res}(L)\), and \(\Gamma\) with an ordered subgroup of \(\Gamma_{L}\). We use _pc-sequence_ to abbreviate _pseudocauchy sequence_, and \(a_{\rho}\rightsquigarrow a\) indicates that \((a_{\rho})\) is a pc-sequence pseudoconverging to \(a\); here the \(a_{\rho}\) and \(a\) lie in a valued field understood from the context, see [ADH, 2.2, 3.2].
As in [ADH], a _valued differential field_ is a valued field of equicharacteristic zero together with a derivation, generally denoted by \(\mathfrak{d}\), on the underlying field. (Unlike [11] we do not assume in this definition that \(\mathfrak{d}\) is continuous with respect to the valuation topology.) A valued differential field \(K\) is said to have _small derivation
if \(\partial\mathfrak{o}\subseteq\mathfrak{o}\); then also \(\partial\mathcal{O}\subseteq\mathcal{O}\) [ADH, 4.4.2], and so \(\partial\) induces a derivation on \(\operatorname{res}(K)\) making the residue morphism \(\mathcal{O}\to\operatorname{res}(K)\) into a morphism of differential rings.
We shall also consider various special classes of valued differential fields introduced in [ADH], such as the class of _asymptotic fields_ (and their relatives, _\(H\)-asymptotic fields_) and its subclass of _pre_-\(\mathrm{d}\)_-valued fields_, which in turn contains the class of \(\mathrm{d}\)-_valued fields_ [ADH, 9.1, 10.1]. (As usual in [ADH], the prefix "\(\mathrm{d}\)" abbreviates "differential".) Every asymptotic field \(K\) has its associated asymptotic couple \((\Gamma,\psi)\), where \(\psi\colon\Gamma^{\neq}\to\Gamma\) satisfies \(\psi(vg)=v(g^{\dagger})\) for \(g\in K^{\times}\) with \(vg\neq 0\). See [ADH, 9.1, 9.2] for more on asymptotic couples, in particular the taxonomy of asymptotic fields introduced via their asymptotic couples: having a _gap,_ being _grounded,_ having _asymptotic integration,_ and having _rational asymptotic integration_.
An _ordered valued differential field_ is a valued differential field \(K\) equipped with an ordering on \(K\) making \(K\) an ordered field. An ordered differential field \(K\) is called an _\(H\)-field_ if for all \(f\in K\) with \(f\succ 1\) we have \(f^{\dagger}>0\), and \(\mathcal{O}=C+\smallo\), where \(\mathcal{O}=\big{\{}g\in K:|g|\leqslant c\text{ for some }c\in C\big{\}}\) and \(\smallo\) is the maximal ideal of the convex subring \(\mathcal{O}\) of \(K\). Thus \(K\) equipped with its valuation ring \(\mathcal{O}\) is an ordered valued differential field. _Pre-\(H\)-fields_ are the ordered valued differential subfields of \(H\)-fields. See [ADH, 10.5] for basic facts about (pre-)\(H\)-fields. An \(H\)-field \(K\) is said to be _Liouville closed_ if \(K\) is real closed and for all \(f,g\in K\) there exists \(y\in K^{\times}\) with \(y^{\prime}+fy=g\). Every \(H\)-field extends to a Liouville closed one; see [ADH, 10.6].
We alert the reader that in a few places we refer to the Liouville closed \(H\)-field \(\mathbb{T}_{\mathrm{g}}\) of grid-based transseries from [103], which is denoted there by \(\mathbb{T}\). Here we adopt the notation of [ADH] where \(\mathbb{T}\) is the larger field of logarithmic-exponential series.
**Acknowledgements.** Various institutions supported this work during its genesis: Aschenbrenner and van den Dries thank the Institut Henri Poincaré, Paris, for its hospitality during the special semester "Model Theory, Combinatorics and Valued Fields" in 2018. Aschenbrenner also thanks the Institut für Mathematische Logik und Grundlagenforschung, Universität Münster, for a Münster Research Fellowship during the spring of 2019, and he acknowledges partial support from NSF Research Grant DMS-1700439. All three authors received support from the Fields Institute for Research in Mathematical Sciences, Toronto, during its Thematic Program "Tame Geometry, Transseries and Applications to Analysis and Geometry", Spring 2022.
**Part 1. Preliminaries**
After generalities on linear differential operators and differential polynomials in Section 1.1, we investigate the group of logarithmic derivatives in valued differential fields of various kinds (Section 1.2) and recall how iterated logarithmic derivatives can be used to study the asymptotic behavior of differential polynomials over such valued differential fields for "large" arguments (Section 1.3). We also assemble some basic preservation theorems for \(\boldsymbol{\lambda}\)_-freeness_ and \(\boldsymbol{\omega}\)_-freeness_ (Section 1.4) and continue the study of linear differential operators over \(H\)-asymptotic fields initiated in [ADH, 5.6, 14.2] (Section 1.5). In our analysis of the solutions of algebraic differential equations over \(H\)-asymptotic fields in Part 3, special pc-sequences in the sense of [ADH, 3.4] play an important role; Section 1.6 explains why. A cornerstone of [ADH] is the concept of _newtonianity_, an analogue of henselianity appropriate for \(H\)-asymptotic fields with asymptotic integration [ADH, Chapter 14]. Related to this is _differential-henselianity_ [ADH, Chapter 7], which makes sense for a broader class of valued differential fields. In Sections 1.7 and 1.8 we further explore these notions. Among other things, we study their persistence under taking the completion of a valued differential field with small derivation (as defined in [ADH, 4.4]).
### 1.1. Linear Differential Operators and Differential Polynomials
This section gathers miscellaneous facts of a general nature about linear differential operators and differential polynomials, sometimes in a valued differential setting. We first discuss splittings and least common left multiples of linear differential operators, then recall the complexity and the separant of differential polynomials, and finally deduce some useful estimates for derivatives of exponential terms.
**Splittings.** In this subsection \(K\) is a differential field. Let \(A\in K[\partial]^{\neq}\) be monic of order \(r\geqslant 1\). A **splitting of \(A\) over \(K\)** is a tuple \((g_{1},\dots,g_{r})\in K^{r}\) such that \(A=(\partial-g_{1})\cdots(\partial-g_{r})\). If \((g_{1},\dots,g_{r})\) is a splitting of \(A\) over \(K\) and \(\mathfrak{n}\in K^{\times}\), then \((g_{1}-\mathfrak{n}^{\dagger},\dots,g_{r}-\mathfrak{n}^{\dagger})\) is a splitting of \(A_{\ltimes\mathfrak{n}}=\mathfrak{n}^{-1}A\mathfrak{n}\) over \(K\).
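For instance, in the case \(r=1\) this is the identity
\[(\partial-g_{1})_{\ltimes\mathfrak{n}}\ =\ \mathfrak{n}^{-1}(\partial-g_{1})\mathfrak{n}\ =\ \partial-(g_{1}-\mathfrak{n}^{\dagger}),\]
which one checks by evaluating at \(y\in K\): \(\mathfrak{n}^{-1}\big{(}(\mathfrak{n}y)^{\prime}-g_{1}\mathfrak{n}y\big{)}=y^{\prime}+(\mathfrak{n}^{\dagger}-g_{1})y\).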
Suppose \(A=A_{1}\cdots A_{m}\) where every \(A_{i}\in K[\mathfrak{d}]\) is monic of positive order \(r_{i}\) (so \(r=r_{1}+\cdots+r_{m}\)). Given any splittings
\[(g_{11},\dots,g_{1r_{1}}),\ \dots,\ (g_{m1},\dots,g_{mr_{m}})\]
of \(A_{1},\dots,A_{m}\), respectively, we obtain a splitting
\[\big{(}g_{11},\dots,g_{1r_{1}},\ \dots,\ g_{m1},\dots,g_{mr_{m}}\big{)}\]
of \(A\) by concatenating the given splittings of \(A_{1},\dots,A_{m}\) in the order indicated, and call it a splitting of \(A\) **induced** by the factorization \(A=A_{1}\cdots A_{m}\). For \(B\in K[\partial]\) of order \(r\geqslant 1\) we have \(B=bA\) with \(b\in K^{\times}\) and monic \(A\in K[\partial]\), and then a **splitting of \(B\) over \(K\)** is by definition a splitting of \(A\) over \(K\). A splitting of \(B\) over \(K\) remains a splitting of \(aB\) over \(K\), for any \(a\in K^{\times}\). Thus:
**Lemma 1.1.1**.: _If \(B\in K[\partial]\) has order \(r\geqslant 1\), and \((g_{1},\dots,g_{r})\) is a splitting of \(B\) over \(K\) and \(\mathfrak{n}\in K^{\times}\), then \((g_{1}-\mathfrak{n}^{\dagger},\dots,g_{r}-\mathfrak{n}^{\dagger})\) is a splitting of \(B_{\ltimes\mathfrak{n}}\) over \(K\) and a splitting of \(B\mathfrak{n}\) over \(K\)._
From [ADH, 5.1, 5.7] we know that if \(A\in K[\partial]\) splits over \(K\), then for any \(\phi\in K^{\times}\) the operator \(A^{\phi}\in K^{\phi}[\delta]\) splits over \(K^{\phi}\), where \(\delta:=\phi^{-1}\partial\) is the derivation of \(K^{\phi}\); here is how a splitting of \(A\) over \(K\) transforms into a splitting of \(A^{\phi}\) over \(K^{\phi}\):
**Lemma 1.1.2**.: _Let \(\phi\in K^{\times}\) and_
\[A\ =\ c(\partial-a_{1})\cdots(\partial-a_{r})\ \ \ \ \mbox{ with }c\in K^{\times}\mbox{ and }a_{1},\ldots,a_{r}\in K.\]
_Then in \(K^{\phi}[\delta]\) we have_
\[A^{\phi}\ =\ c\phi^{r}(\delta-b_{1})\cdots(\delta-b_{r})\ \ \ \ \mbox{ where }b_{j}:=\phi^{-1}\big{(}a_{j}-(r-j)\phi^{\dagger}\big{)}\ (j=1,\ldots,r).\]
Proof.: Induction on \(r\). The case \(r=0\) being obvious, suppose \(r\geqslant 1\), and set \(B:=(\partial-a_{2})\cdots(\partial-a_{r})\). By the inductive hypothesis
\[B^{\phi}=\phi^{r-1}(\delta-b_{2})\cdots(\delta-b_{r})\ \ \ \ \mbox{ where }b_{j}:=\phi^{-1}\big{(}a_{j}-(r-j)\phi^{\dagger}\big{)}\mbox{ for }j=2,\ldots,r.\]
Then
\[A^{\phi}\ =\ c\phi\left(\delta-(a_{1}/\phi)\right)B^{\phi}\ =\ c\phi^{r}\left(\delta-(a_{1}/\phi)\right)_{\ltimes\phi^{r-1}}(\delta-b_{2})\cdots(\delta-b_{r})\]
with
\[\left(\delta-(a_{1}/\phi)\right)_{\ltimes\phi^{r-1}}\ =\ \delta-(a_{1}/\phi)+(r-1)\phi^{\dagger}/\phi\ =\ \delta-b_{1}\]
by [ADH, p. 243].
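To illustrate Lemma 1.1.2 for \(r=2\): if \(A=c(\partial-a_{1})(\partial-a_{2})\), then
\[A^{\phi}\ =\ c\phi^{2}\big{(}\delta-\phi^{-1}(a_{1}-\phi^{\dagger})\big{)}\big{(}\delta-\phi^{-1}a_{2}\big{)},\]
so only the factors other than the rightmost one acquire correction terms involving \(\phi^{\dagger}\).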
A different kind of factorization, see for example [156], reduces the process of solving the differential equation \(A(y)=0\) to repeated multiplication and integration:
**Lemma 1.1.3**.: _Let \(A\in K[\partial]^{\neq}\) be monic of order \(r\geqslant 1\). If \(b_{1},\ldots,b_{r}\in K^{\times}\) and_
\[A\ =\ b_{1}\cdots b_{r-1}b_{r}(\partial b_{r}^{-1})(\partial b_{r-1}^{-1}) \cdots(\partial b_{1}^{-1}),\]
_then \((a_{r},\ldots,a_{1})\), where \(a_{j}:=(b_{1}\cdots b_{j})^{\dagger}\) for \(j=1,\ldots,r\), is a splitting of \(A\) over \(K\). Conversely, if \((a_{r},\ldots,a_{1})\) is a splitting of \(A\) over \(K\) and \(b_{1},\ldots,b_{r}\in K^{\times}\) are such that \(b_{j}^{\dagger}=a_{j}-a_{j-1}\) for \(j=1,\ldots,r\) with \(a_{0}:=0\), then \(A\) is as in the display._
This follows easily by induction on \(r\).
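For example, for \(r=2\) we have \(A(y)=b_{1}b_{2}\big{(}(y/b_{1})^{\prime}/b_{2}\big{)}^{\prime}\), so in any differential field extension of \(K\) in which \(b_{2}\) has an antiderivative \(\int b_{2}\), the solutions of \(A(y)=0\) are exactly the elements
\[y\ =\ c_{1}b_{1}+c_{2}\,b_{1}{\textstyle\int}b_{2}\]
with \(c_{1},c_{2}\) ranging over the constants of that extension.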
**Real splittings.** Let \(H\) be a differential field in which \(-1\) is not a square. Then we let \(i\) denote an element in a differential field extension of \(H\) with \(i^{2}=-1\), and consider the differential field \(K=H[i]\). Suppose \(A\in H[\partial]\) is monic of order \(2\) and splits over \(K\), so
\[A\ =\ (\partial-f)(\partial-g),\qquad f,g\in K.\]
Then
\[A\ =\ \partial^{2}-(f+g)\partial+fg-g^{\prime},\]
and thus \(f\in H\) iff \(g\in H\). One checks easily that if \(g\notin H\), then there are unique \(a,b\in H\) with \(b\neq 0\) such that
\[f\ =\ a-bi+b^{\dagger},\qquad g\ =\ a+bi,\]
and thus
\[A\ =\ \partial^{2}-(2a+b^{\dagger})\partial+a^{2}+b^{2}-a^{\prime}+ab^{\dagger}.\]
Conversely, if \(a,b\in H\) and \(b\neq 0\), then for \(f:=a-bi+b^{\dagger}\) and \(g:=a+bi\) we have \((\partial-f)(\partial-g)\in H[\partial]\).
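For instance, if \(a,b\in H\) are constants with \(b\neq 0\), then \(b^{\dagger}=0\) and the displayed formula becomes
\[\partial^{2}-2a\partial+a^{2}+b^{2}\ =\ \big{(}\partial-(a-bi)\big{)}\big{(}\partial-(a+bi)\big{)},\]
the usual factorization of an operator with constant coefficients and non-real characteristic roots \(a\pm bi\).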
Let now \(A\in H[\partial]\) be monic of order \(r\geqslant 1\).
**Lemma 1.1.4**.: _Suppose \(A\) splits over \(K\). Then \(A=A_{1}\cdots A_{m}\) for some \(A_{1},\ldots,A_{m}\) in \(H[\partial]\) that are monic and irreducible of order \(1\) or \(2\) and split over \(K\)._
Proof.: By [ADH, 5.1.35], \(A=A_{1}\cdots A_{m}\), where every \(A_{i}\in H[\partial]\) is monic and irreducible of order \(1\) or \(2\). By [ADH, 5.1.22], such \(A_{i}\) split over \(K\).
**Definition 1.1.5**.: A **real splitting of \(A\)** (over \(K\)) is a splitting of \(A\) over \(K\) that is induced by a factorization \(A=A_{1}\cdots A_{m}\) where every \(A_{i}\in H[\partial]\) is monic of order \(1\) or \(2\) and splits over \(K\). (Note that we do not require the \(A_{i}\) of order \(2\) to be irreducible in \(H[\partial]\).)
Thus if \(A\) splits over \(K\), then \(A\) has a real splitting over \(K\) by Lemma 1.1.4. Note that if \((g_{1},\ldots,g_{r})\) is a real splitting of \(A\) and \(\mathfrak{n}\in H^{\times}\), then \((g_{1}-\mathfrak{n}^{\dagger},\ldots,g_{r}-\mathfrak{n}^{\dagger})\) is a real splitting of \(A_{\ltimes\mathfrak{n}}\).
It is convenient to extend the above slightly: for \(B\in H[\partial]\) of order \(r\geqslant 1\) we have \(B=bA\) with \(b\in H^{\times}\) and monic \(A\in H[\partial]\), and then a **real splitting of \(B\)** (over \(K\)) is by definition a real splitting of \(A\) (over \(K\)).
In later use, \(H\) is a valued differential field with small derivation such that \(-1\) is not a square in the differential residue field \(\operatorname{res}(H)\). For such \(H\), let \(\mathcal{O}\) be the valuation ring of \(H\). We make \(K\) a valued differential field extension of \(H\) with small derivation by taking \(\mathcal{O}_{K}=\mathcal{O}+\mathcal{O}i\) as the valuation ring of \(K\). We have the residue map \(a\mapsto\operatorname{res}a\colon\mathcal{O}_{K}\to\operatorname{res}(K)\), so \(\operatorname{res}(K)=\operatorname{res}(H)[i]\), writing here \(i\) for \(\operatorname{res}i\). We extend this map to a ring morphism \(B\mapsto\operatorname{res}B\colon\mathcal{O}_{K}[\partial]\to\operatorname{res}(K)[\partial]\) by sending \(\partial\in\mathcal{O}_{K}[\partial]\) to \(\partial\in\operatorname{res}(K)[\partial]\).
**Lemma 1.1.6**.: _Suppose \((g_{1},\ldots,g_{r})\in\operatorname{res}(K)^{r}\) is a real splitting of a monic operator \(D\in\operatorname{res}(H)[\partial]\) of order \(r\geqslant 1\). Then there are \(b_{1},\ldots,b_{r}\in\mathcal{O}_{K}\) such that_
\[B\ :=\ (\partial-b_{1})\cdots(\partial-b_{r})\in\mathcal{O}[\partial],\]
\((b_{1},\ldots,b_{r})\) _is a real splitting of \(B\), and \(\operatorname{res}b_{j}=g_{j}\) for \(j=1,\ldots,r\)._
Proof.: We can assume \(r\in\{1,2\}\). The case \(r=1\) is obvious, so let \(r=2\). Then the case where \(g_{1},g_{2}\in\operatorname{res}(H)\) is again obvious, so let \(g_{1}=\operatorname{res}(a)-\operatorname{res}(b)i+(\operatorname{res}b)^{\dagger}\), \(g_{2}=\operatorname{res}(a)+\operatorname{res}(b)i\) where \(a,b\in\mathcal{O}\), \(\operatorname{res}b\neq 0\). Set \(b_{1}:=a-bi+b^{\dagger}\), \(b_{2}:=a+bi\). Then \(b_{1},b_{2}\in\mathcal{O}_{K}\) with \(\operatorname{res}b_{1}=g_{1}\), \(\operatorname{res}b_{2}=g_{2}\), and \(B:=(\partial-b_{1})(\partial-b_{2})\in\mathcal{O}[\partial]\) have the desired properties.
**Least common left multiples and complex conjugation.** In this subsection \(H\) is a differential field. Recall from [ADH, 5.1] the definition of the _least common left multiple_\(\operatorname{lclm}(A_{1},\ldots,A_{m})\) of operators \(A_{1},\ldots,A_{m}\in H[\partial]^{\neq}\), \(m\geqslant 1\): this is the monic operator \(A\in H[\partial]\) such that \(H[\partial]A_{1}\cap\cdots\cap H[\partial]A_{m}=H[\partial]A\). For \(A,B\in H[\partial]^{\neq}\) we have:
\[\max\big{\{}\text{order}(A),\text{order}(B)\big{\}}\ \leqslant\ \text{order} \big{(}\operatorname{lclm}(A,B)\big{)}\ \leqslant\ \text{order}(A)+\text{order}(B).\]
For the inequality on the right, note that the natural \(H[\partial]\)-module morphism
\[H[\partial]\ \to\ \big{(}H[\partial]/H[\partial]A\big{)}\times\big{(}H[\partial]/H[ \partial]B\big{)}\]
has kernel \(H[\partial]\operatorname{lclm}(A,B)\), and for \(D\in H[\partial]^{\neq}\), the \(H\)-linear space \(H[\partial]/H[\partial]D\) has dimension \(\operatorname{order}D\).
We now assume that \(-1\) is not a square in \(H\); then we have a differential field extension \(H[i]\) where \(i^{2}=-1\). The automorphism \(a+bi\mapsto\overline{a+bi}:=a-bi\) \((a,b\in H)\) of the differential field \(H[i]\) extends uniquely to an automorphism \(A\mapsto\overline{A}\) of the ring \(H[i][\partial]\) with \(\overline{\partial}=\partial\). Let \(A\in H[i][\partial]\); then \(\overline{A}=A\Longleftrightarrow A\in H[\partial]\). Hence if \(A\neq 0\) is monic, then \(L:=\operatorname{lclm}(A,\overline{A})\in H[\partial]\) and thus \(L=BA=\overline{B}\,\overline{A}\) where \(B\in H[i][\partial]\).
_Example 1.1.7_.: Let \(A=\partial-a\) where \(a\in H[i]\). If \(a\in H\), then \(\operatorname{lclm}(A,\overline{A})=A\), and if \(a\notin H\), then \(\operatorname{lclm}(A,\overline{A})=(\partial-b)(\partial-a)=(\partial-\overline{b})(\partial-\overline{a})\) where \(b\in H[i]\setminus H\).
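In fact, in the second case one can take \(b=\overline{a}+(a-\overline{a})^{\dagger}\): setting \(w:=a-\overline{a}\in H[i]^{\times}\), one checks directly that \((\partial-b)(\partial-a)\in H[\partial]\), and \(\partial-a\) sends any solution \(z\neq 0\) of \(z^{\prime}=\overline{a}z\) (in a differential field extension of \(H[i]\)) to \(-wz\), with \((-wz)^{\dagger}=w^{\dagger}+\overline{a}=b\); so \((\partial-b)(\partial-a)\) annihilates the solutions of both \(y^{\prime}=ay\) and \(y^{\prime}=\overline{a}y\).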
Let now \(F\) be a differential field extension of \(H\) in which \(-1\) is not a square; we assume that \(i\) is an element of a differential ring extension of \(F\).
**Lemma 1.1.8**.: _Let \(A\in H[i][\partial]^{\neq}\) be monic, \(b\in H[i]\), and \(f\in F[i]\) such that \(A(f)=b\). Let \(B\in H[i][\partial]\) be such that \(L:=\operatorname{lclm}(A,\overline{A})=BA\). Then \(L(f)=B(b)\) and hence \(L\big{(}\mathrm{Re}(f)\big{)}=\mathrm{Re}\big{(}B(b)\big{)}\) and \(L\big{(}\mathrm{Im}(f)\big{)}=\mathrm{Im}\big{(}B(b)\big{)}\)._
In Sections 6.4 and 6.7 we need a slight extension of this lemma:
_Remark 1.1.9_.: Let \(F\) be a differential ring extension of \(H\) in which \(-1\) is not a square and let \(i\) be an element of a commutative ring extension of \(F\) such that \(i^{2}=-1\) and the \(F\)-algebra \(F[i]=F+Fi\) is a free \(F\)-module with basis \(1\), \(i\). For \(f=g+hi\in F[i]\) with \(g,h\in F\) we set \(\operatorname{Re}(f):=g\) and \(\operatorname{Im}(f):=h\). We make \(F[i]\) into a differential ring extension of \(F\) in the only way possible (which has \(i^{\prime}=0\)). Then Lemma 1.1.8 goes through.
**Complexity and the separant.** We recall some definitions and observations from [ADH, 4.3]. Let \(K\) be a differential field and \(P\in K\{Y\}\), \(P\notin K\), and set \(r=\operatorname{order}P\), \(s=\deg_{Y^{(r)}}P\), and \(t=\deg P\). Then the _complexity_ of \(P\) is the triple \(\mathrm{c}(P)=(r,s,t)\in\mathbb{N}^{3}\); we order \(\mathbb{N}^{3}\) lexicographically. Let \(a\in K\). Then \(\mathrm{c}(P_{+a})=\mathrm{c}(P)\), and \(\mathrm{c}(P_{\times a})=\mathrm{c}(P)\) if \(a\neq 0\). The differential polynomial \(S_{P}:=\frac{\partial P}{\partial Y^{(r)}}\) is called the _separant_ of \(P\); thus \(\mathrm{c}(S_{P})<\mathrm{c}(P)\) (giving complexity \((0,0,0)\) to elements of \(K\)), and \(S_{aP}=aS_{P}\) if \(a\neq 0\). Moreover:
**Lemma 1.1.10**.: _We have_
\[S_{P_{+a}}=(S_{P})_{+a},\quad S_{P_{\times a}}=a\cdot(S_{P})_{\times a},\quad \text{and}\quad S_{P^{\phi}}=\phi^{r}(S_{P})^{\phi}\text{ for }\phi\in K^{\times}.\]
Proof.: For \(S_{P_{+a}}\) and \(S_{P_{\times a}}\) this is from [ADH, 4.3]; for \(S_{P^{\phi}}\), express \(P\) as a polynomial in \(Y^{(r)}\) and use \((Y^{(r)})^{\phi}=\phi^{r}Y^{(r)}+\text{lower order terms}\).
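For a concrete instance, take \(P=Y^{2}Y^{\prime\prime}+(Y^{\prime})^{3}\): then \(\mathrm{c}(P)=(2,1,3)\), \(S_{P}=Y^{2}\), and \(\mathrm{c}(S_{P})=(0,2,2)<\mathrm{c}(P)\); moreover, for \(a\in K^{\times}\) we have \(P_{\times a}=a^{3}P\) and thus \(S_{P_{\times a}}=a^{3}Y^{2}=a\cdot(S_{P})_{\times a}\), as Lemma 1.1.10 predicts.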
**Some transformation formulas.** In this subsection \(K\) is a differential ring. Let \(u\in K^{\times}\). Then in \(K[\partial]\) we have
\[(\partial-u^{\dagger})^{0} =\ 1,\] \[(\partial-u^{\dagger})^{1} =\ \partial-u^{\prime}u^{-1},\] \[(\partial-u^{\dagger})^{2} =\ \partial^{2}-2u^{\prime}u^{-1}\partial+\big{(}2(u^{\prime})^{2}-u^{ \prime\prime}u\big{)}u^{-2}.\]
More generally:
**Lemma 1.1.11**.: _There are differential polynomials \(Q_{k}^{n}(X)\in\mathbb{Q}\{X\}\)\((0\leqslant k\leqslant n)\), independent of \(K\) and \(u\), such that \(Q_{n}^{n}=1\) and_
\[(\partial-u^{\dagger})^{n}\ =\ Q_{n}^{n}(u)\partial^{n}+Q_{n-1}^{n}(u)u^{-1} \partial^{n-1}+\cdots+Q_{0}^{n}(u)u^{-n}.\]
_Setting \(Q_{-1}^{n}:=0\), we have_
\[Q_{k}^{n+1}(X)\ =\ Q_{k}^{n}(X)^{\prime}X+Q_{k}^{n}(X)(k-n-1)X^{\prime}+Q_{k-1} ^{n}(X)\qquad(0\leqslant k\leqslant n).\]
_Hence every \(Q_{k}^{n}\) with \(0\leqslant k\leqslant n\) has integer coefficients and is homogeneous of degree \(n-k\) and isobaric of weight \(n-k\)._
Proof.: By induction on \(n\). The case \(n=0\) is obvious. Suppose for a certain \(n\) we have \(Q_{k}^{n}\) for \(0\leqslant k\leqslant n\) as above. Then
\[(\partial-u^{\dagger})^{n+1} =\ (\partial-u^{\dagger})\sum_{k=0}^{n}Q_{k}^{n}(u)u^{k-n} \partial^{k}\] \[=\ \sum_{k=0}^{n}\Big{(}\big{(}Q_{k}^{n}(u)u^{k-n}\big{)}^{\prime}-u ^{\dagger}Q_{k}^{n}(u)u^{k-n}\Big{)}\partial^{k}+\sum_{k=0}^{n}Q_{k}^{n}(u)u^{k -n}\partial^{k+1}\] \[=\ \sum_{k=0}^{n}\Big{(}Q_{k}^{n}(u)^{\prime}u+Q_{k}^{n}(u)(k-n-1)u ^{\prime}\Big{)}u^{k-(n+1)}\partial^{k}+\] \[\ \ \ \ \ \sum_{k=1}^{n+1}Q_{k-1}^{n}(u)u^{k-(n+1)}\partial^{k},\]
and this yields the inductive step.
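For small \(n\) the recursion gives, in agreement with the display at the start of this subsection,
\[Q_{0}^{0}=1,\qquad Q_{1}^{1}=1,\quad Q_{0}^{1}=-X^{\prime},\qquad Q_{2}^{2}=1,\quad Q_{1}^{2}=-2X^{\prime},\quad Q_{0}^{2}=2(X^{\prime})^{2}-X^{\prime\prime}X;\]
for example, \(Q_{0}^{2}=(Q_{0}^{1})^{\prime}X+Q_{0}^{1}\cdot(-2)X^{\prime}=-X^{\prime\prime}X+2(X^{\prime})^{2}\).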
For \(f\in K\) we have
\[(fu^{-1})^{(n)}\ =\ (\partial^{n})_{\ltimes u^{-1}}(f)u^{-1}\ =\ (\partial_{\ltimes u^{-1}})^{n}(f)u^{-1}\ =\ (\partial-u^{\dagger})^{n}(f)u^{-1}\]
and hence:
**Corollary 1.1.12**.: _Let \(f\in K\); then_
\[(fu^{-1})^{(n)}\ =\ Q_{n}^{n}(u)f^{(n)}u^{-1}+Q_{n-1}^{n}(u)f^{(n-1)}u^{-2}+ \cdots+Q_{0}^{n}(u)fu^{-(n+1)}.\]
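For \(n=1\) this is just the quotient rule \((fu^{-1})^{\prime}=f^{\prime}u^{-1}-u^{\prime}fu^{-2}\), corresponding to \(Q_{1}^{1}=1\) and \(Q_{0}^{1}(u)=-u^{\prime}\).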
**Estimates for derivatives of exponential terms.** In this subsection \(K\) is an asymptotic differential field with small derivation, and \(\phi\in K\). We also fix \(\mathfrak{m}\in K^{\times}\) with \(\mathfrak{m}\prec 1\). Recall from [ADH, 4.2] that for \(P\in K\{Y\}^{\neq}\) the _multiplicity of \(P\) at \(0\)_ is \(\operatorname{mul}(P)=\min\{d\in\mathbb{N}:P_{d}\neq 0\}\), where \(P_{d}\) denotes the homogeneous part of degree \(d\) of \(P\). Here is a useful bound:
**Lemma 1.1.13**.: _Let \(r,m\in\mathbb{N}\) and \(y\in K\) satisfy \(y\prec\mathfrak{m}^{r+m}\prec 1\). Then \(P(y)\prec\mathfrak{m}^{m\mu}P\) for all \(P\in K\{Y\}^{\neq}\) of order at most \(r\) with \(\mu=\operatorname{mul}(P)\geqslant 1\)._
Proof.: Note that \(0\neq\mathfrak{m}\prec 1\) and \(r+m\geqslant 1\). Hence
\[y^{\prime}\ \prec\ (\mathfrak{m}^{r+m})^{\prime}\ =\ (r+m)\mathfrak{m}^{r+m-1} \mathfrak{m}^{\prime}\ \prec\ \mathfrak{m}^{r-1+m},\]
so by induction \(y^{(i)}\prec\mathfrak{m}^{r-i+m}\) for \(i=0,\ldots,r\). Hence \(y^{\boldsymbol{i}}\prec\mathfrak{m}^{(r+m)|\boldsymbol{i}|-\|\boldsymbol{i} \|}\preccurlyeq\mathfrak{m}^{m|\boldsymbol{i}|}\) for nonzero \(\boldsymbol{i}=(i_{0},\ldots,i_{r})\in\mathbb{N}^{1+r}\), which yields the lemma.
**Corollary 1.1.14**.: _If \(f\in K\) and \(f\prec\mathfrak{m}^{n}\), then \(f^{(k)}\prec\mathfrak{m}^{n-k}\) for \(k=0,\ldots,n\)._
Proof.: This is a special case of Lemma 1.1.13.
**Corollary 1.1.15**.: _Let \(f\in K^{\times}\) and \(n\geqslant 1\) be such that \(f\preccurlyeq\mathfrak{m}^{n}\). Then \(f^{(k)}\prec\mathfrak{m}^{n-k}\) for \(k=1,\ldots,n\)._
Proof.: Note that \(\mathfrak{m}^{n}\neq 0\), so \(f^{\prime}\preccurlyeq(\mathfrak{m}^{n})^{\prime}=n\mathfrak{m}^{n-1} \mathfrak{m}^{\prime}\prec\mathfrak{m}^{n-1}\) [ADH, 9.1.3]. Now apply Corollary 1.1.14 with \(f^{\prime}\), \(n-1\) in place of \(f\), \(n\).
In the remainder of this subsection we let \(\xi\in K^{\times}\) and assume \(\xi\succ 1\) and \(\zeta:=\xi^{\dagger}\succcurlyeq 1\).
**Lemma 1.1.16**.: _The elements \(\xi,\zeta\in K\) have the following asymptotic properties:_
* \(\zeta^{n}\prec\xi\) _for all_ \(n\)_;_
* \(\zeta^{(n)}\preccurlyeq\zeta^{2}\) _for all_ \(n\)_._
_Thus for each \(P\in\mathcal{O}\{Z\}\) there is an \(N\in\mathbb{N}\) with \(P(\zeta)\preccurlyeq\zeta^{N}\), and hence \(P(\zeta)\prec\xi\)._
Proof.: Part (i) follows from [ADH, 9.2.10(iv)] for \(\gamma=v(\xi)\). As to (ii), if \(\zeta^{\prime}\preccurlyeq\zeta\), then \(\zeta^{(n)}\preccurlyeq\zeta\) by [ADH, 4.5.3], and we are done. Suppose \(\zeta^{\prime}\succ\zeta\) and set \(\gamma:=v(\zeta)\). Then \(\gamma,\gamma^{\dagger}<0\), so \(\gamma^{\dagger}=o(\gamma)\) by [ADH, 9.2.10(iv)] and hence \(v(\zeta^{(n)})=\gamma+n\gamma^{\dagger}>2\gamma=v(\zeta^{2})\) by [ADH, 6.4.1(iv)].
Recall from [ADH, 5.8] that for a homogeneous differential polynomial \(P\in K\{Y\}\) of degree \(d\in\mathbb{N}\) the _Riccati transform_\(\operatorname{Ri}(P)\in K\{Z\}\) of \(P\) satisfies
\[\operatorname{Ri}(P)(z)=P(y)/y^{d}\quad\text{ for }y\in K^{\times},\,z=y^{\dagger}.\]
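For instance, \(\operatorname{Ri}(Y^{\prime\prime})=Z^{2}+Z^{\prime}\): for \(y\in K^{\times}\) and \(z=y^{\dagger}\) we have \(y^{\prime\prime}=(zy)^{\prime}=(z^{2}+z^{\prime})y\).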
In the next two corollaries, \(l\in\mathbb{Z}\), \(\xi=\phi^{\prime}\), and \(\operatorname{e}^{\phi}\) denotes a unit of a differential ring extension of \(K\) with multiplicative inverse \(\operatorname{e}^{-\phi}\) and such that \((\operatorname{e}^{\phi})^{\prime}=\phi^{\prime}\operatorname{e}^{\phi}\).
**Corollary 1.1.17**.: \((\xi^{l}\operatorname{e}^{\phi})^{(n)}\ =\ \xi^{l+n}(1+\varepsilon) \operatorname{e}^{\phi}\) _where \(\varepsilon\in K\), \(\varepsilon\prec 1\)._
Proof.: By Lemma 1.1.16(i) we have \(l\zeta+\xi\sim\xi\succ 1\). Now use \((\xi^{l}\operatorname{e}^{\phi})^{(n)}/(\xi^{l}\operatorname{e}^{\phi})=R_{n} (l\zeta+\xi)\) for \(R_{n}=\operatorname{Ri}(Y^{(n)})\) in combination with [ADH, 11.1.5].
Applying the corollary above with \(\phi\), \(\xi\) replaced by \(-\phi\), \(-\xi\), respectively, we obtain:
**Corollary 1.1.18**.: \((\xi^{l}\operatorname{e}^{-\phi})^{(n)}\ =\ (-1)^{n}\xi^{l+n}(1+\delta) \operatorname{e}^{-\phi}\) _where \(\delta\in K\), \(\delta\prec 1\)._
**Estimates for Riccati transforms.** In this subsection \(K\) is a valued differential field with small derivation. For later use we prove variants of [ADH, 11.1.5].
**Lemma 1.1.19**.: _If \(z\in K^{\succ 1}\), then \(R_{n}(z)=z^{n}(1+\varepsilon)\) with \(v\varepsilon\geqslant v(z^{-1})+o(vz)>0\)._
Proof.: This is clear for \(n=0\) and \(n=1\). Suppose \(z\succ 1\), \(n\geqslant 1\), and \(R_{n}(z)=z^{n}(1+\varepsilon)\) with \(\varepsilon\) as in the lemma. As in the proof of [ADH, 11.1.5],
\[R_{n+1}(z)\ =\ z^{n+1}\left(1+\varepsilon+n\frac{z^{\dagger}}{z}(1+\varepsilon)+ \frac{\varepsilon^{\prime}}{z}\right).\]
Now \(v(z^{\dagger})\geqslant o(vz)\): this is obvious if \(z^{\dagger}\preccurlyeq 1\), and follows from \(\gamma^{\dagger}=o(\gamma)\) for \(\gamma\neq 0\) if \(z^{\dagger}\succ 1\) [ADH, 6.4.1(iii)]. This gives the desired result in view of \(\varepsilon^{\prime}\prec 1\).
**Lemma 1.1.20**.: _Suppose \(\partial\mathcal{O}\subseteq\smallo\). If \(z\in K^{\succcurlyeq 1}\), then \(R_{n}(z)=z^{n}(1+\varepsilon)\) with \(\varepsilon\prec 1\)._
Proof.: The case \(z\succ 1\) follows from Lemma 1.1.19. For \(z\asymp 1\), proceed as in the proof of that lemma, using \(\partial\mathcal{O}\subseteq\smallo\).
By [ADH, 9.1.3(iv)] the condition \(\partial\mathcal{O}\subseteq\smallo\) is satisfied if \(K\) is d-valued, or asymptotic with \(\Psi\cap\Gamma^{>}\neq\emptyset\).
**Lemma 1.1.21**.: _Suppose \(K\) is asymptotic, and \(z\in K\) with \(0\neq z\preccurlyeq z^{\prime}\prec 1\). Then \(R_{n}(z)\sim z^{(n-1)}\) for \(n\geqslant 1\)._
Proof.: Induction on \(n\) gives \(z\preccurlyeq z^{\prime}\preccurlyeq z^{\prime\prime}\preccurlyeq\cdots\preccurlyeq z^{(n)}\prec 1\) for all \(n\). We now show \(R_{n}(z)\sim z^{(n-1)}\) for \(n\geqslant 1\), also by induction. The case \(n=1\) is clear from \(R_{1}=Z\), so suppose \(n\geqslant 1\) and \(R_{n}(z)\sim z^{(n-1)}\). Then
\[R_{n+1}(z)\ =\ zR_{n}(z)+R_{n}(z)^{\prime}\]
where \(R_{n}(z)^{\prime}\sim z^{(n)}\) by [ADH, 9.1.4(ii)] and \(zR_{n}(z)\asymp zz^{(n-1)}\prec z^{(n-1)}\preccurlyeq z^{(n)}\). Hence \(R_{n+1}(z)\sim z^{(n)}\).
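As a sanity check, take \(K=C(\!(t)\!)\), asymptotic with small derivation for the derivation determined by \(t^{\prime}=t\) and \(c^{\prime}=0\) for \(c\in C\), and \(z=t\): then \(z^{(n)}=t\) for all \(n\), and indeed \(R_{2}(t)=t^{2}+t\sim t\) and \(R_{3}(t)=t^{3}+3t^{2}+t\sim t\).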
**Valued differential fields with very small derivation (\({}^{*}\)).** The generalities in this subsection will be used in Section 7.3. Let \(K\) be a valued differential field with derivation \(\partial\). Recall that if \(K\) has small derivation (that is, \(\partial\smallo\subseteq\smallo\)), then also \(\partial\mathcal{O}\subseteq\mathcal{O}\) by [ADH, 4.4.2], so we have a unique derivation on the residue field \(\boldsymbol{k}:=\mathcal{O}/\smallo\) that makes the residue morphism \(\mathcal{O}\to\boldsymbol{k}\) into a morphism of differential rings (and we call \(\boldsymbol{k}\) with this induced derivation the differential residue field of \(K\)). We say that \(\partial\) is **very small** if \(\partial\mathcal{O}\subseteq\smallo\). So \(K\) has very small derivation iff \(K\) has small derivation and the induced derivation on \(\boldsymbol{k}\) is trivial. If \(K\) has small derivation and \(\mathcal{O}=C+\smallo\), then \(K\) has very small derivation. If \(K\) has very small derivation, then so does every valued differential subfield of \(K\), and if \(L\) is a valued differential field extension of \(K\) with small derivation and \(\boldsymbol{k}_{L}=\boldsymbol{k}\), then \(L\) has very small derivation. Moreover:
**Lemma 1.1.22**.: _Let \(L\) be a valued differential field extension of \(K\), algebraic over \(K\), and suppose \(K\) has very small derivation. Then \(L\) also has very small derivation._
Proof.: By [ADH, 6.2.1], \(L\) has small derivation. The derivation of \(\boldsymbol{k}\) is trivial and \(\boldsymbol{k}_{L}\) is algebraic over \(\boldsymbol{k}\) [ADH, 3.1.9], so the derivation of \(\boldsymbol{k}_{L}\) is also trivial.
Next we focus on pre-d-valued fields with very small derivation. First an easy observation about asymptotic couples:
**Lemma 1.1.23**.: _Let \((\Gamma,\psi)\) be an asymptotic couple; then_
\[(\Gamma,\psi)\text{ has gap }0\quad\Longleftrightarrow\quad(\Gamma,\psi) \text{ has small derivation and }\Psi\subseteq\Gamma^{<}.\]
_In particular, if \((\Gamma,\psi)\) has small derivation and does not have gap \(0\), then each asymptotic couple extending \((\Gamma,\psi)\) has small derivation._
**Corollary 1.1.24**.: _Suppose \(K\) is pre-\(\mathrm{d}\)-valued with small derivation, and suppose \(0\) is not a gap in \(K\). Then \(K\) has very small derivation._
Proof.: The previous lemma gives \(g\in K^{\times}\) with \(g\not\asymp 1\) and \(g^{\dagger}\preccurlyeq 1\). Then for each \(f\in K\) with \(f\preccurlyeq 1\) we have \(f^{\prime}\prec g^{\dagger}\preccurlyeq 1\).
**Corollary 1.1.25**.: _Suppose \(K\) is pre-\(\mathrm{d}\)-valued of \(H\)-type with very small derivation. Then the \(\mathrm{d}\)-valued hull \(\mathrm{dv}(K)\) of \(K\) has small derivation._
Proof.: By Lemma 1.1.23, if \(0\) is not a gap in \(K\), then every pre-d-valued field extension of \(K\) has small derivation. If \(0\) is a gap in \(K\), then no \(b\asymp 1\) in \(K\) satisfies \(b^{\prime}\asymp 1\), since \(K\) has very small derivation. Thus \(\Gamma_{\mathrm{dv}(K)}=\Gamma\) by [ADH, 10.3.2(ii)], so \(0\) remains a gap in \(\mathrm{dv}(K)\). In both cases, \(\mathrm{dv}(K)\) has small derivation.
If \(K\) is pre-d-valued and ungrounded, then for each \(\phi\in K\) which is active in \(K\), the pre-d-valued field \(K^{\phi}\) (with derivation \(\delta=\phi^{-1}\partial\)) has very small derivation.
Now a fact about \(A\in F[\partial]\), where \(F\) is any differential field. For the definition of \(A^{(n)}\), see [ADH, p. 243]. Recall that \(\mathrm{Ri}(A)\in F\{Z\}\). For \(P\in F\{Z\}\) we denote by \(P_{[0]}\) the isobaric part of \(P\) of weight \(0\), as in [ADH, p. 212], so \(P_{[0]}\in F[Z]\).
**Lemma 1.1.26**.: _For \(P:=\mathrm{Ri}(A)_{[0]}\) we have \(\mathrm{Ri}(A^{(n)})_{[0]}=P^{(n)}\) for all \(n\)._
Proof.: We treat the case \(n=1\); the general case then follows by induction on \(n\). By \(F\)-linearity of \(A\mapsto\mathrm{Ri}(A)\) we reduce to the case \(A=\partial^{m}\), \(m\geqslant 1\), so \(P=Z^{m}\). Then \(A^{\prime}=m\partial^{m-1}\), so \(\mathrm{Ri}(A^{\prime})=mR_{m-1}\) and hence \(\mathrm{Ri}(A^{\prime})_{[0]}=mZ^{m-1}=P^{\prime}\).
We need this for the next lemma, which in turn is used in proving Corollary 1.8.50. As usual, we extend the residue map \(a\mapsto\operatorname{res}a\colon\mathcal{O}\to\boldsymbol{k}\) to the ring morphism
\[P\mapsto\operatorname{res}P\;:\;\mathcal{O}[Y]\to\boldsymbol{k}[Y],\qquad Y \mapsto Y.\]
**Lemma 1.1.27**.: _Let \(K\) have very small derivation, \(A\in\mathcal{O}[\partial]\), \(R:=\operatorname{Ri}(A)\), so \(R\in\mathcal{O}\{Z\}\), and \(P:=R_{[0]}\in\mathcal{O}[Z]\). Let \(a\in\mathcal{O}\), so \(Q:=(R_{+a})_{[0]}\in\mathcal{O}[Z]\). Then_
\[(\operatorname{res}P)_{+\,\operatorname{res}a}\;=\;\operatorname{res}Q.\]
Proof.: It suffices to show \(P_{+a}-Q\prec 1\). We have \(R(a)\equiv R_{[0]}(a)\bmod\smallo\), as \(K\) has very small derivation. Applying this to \(\operatorname{Ri}(A^{(n)})\) in place of \(R=\operatorname{Ri}(A)\) and using Lemma 1.1.26 yields \(\operatorname{Ri}(A^{(n)})(a)\equiv P^{(n)}(a)\bmod\smallo\) for all \(n\). Now use \(P_{+a}=\sum_{n}\frac{1}{n!}P^{(n)}(a)\,Z^{n}\) by Taylor expansion and \(R_{+a}=\sum_{n}\frac{1}{n!}\operatorname{Ri}(A^{(n)})(a)\,R_{n}\) by [ADH, p. 301], so \(Q=\sum_{n}\frac{1}{n!}\operatorname{Ri}(A^{(n)})(a)\,Z^{n}\).
**Rosenlicht's proof of a result of Kolchin (\({}^{*}\)).** Corollary 1.1.30 below will be used in Section 7.4. Let \(K\) be a differential field and \(m\geqslant 1\), \(P\in K\{Y\}\), \(\deg P<m\).
**Lemma 1.1.28**.: _Let \(L\) be a differential field extension of \(K\) and \(t\in L\), \(t^{\prime}\in K+tK\), and suppose \(L\) is algebraic over \(K(t)\). If \(y^{m}=P(y)\) for some \(y\in L\), then \(z^{m}=P(z)\) for some \(z\) in a differential field extension of \(K\) which is algebraic over \(K\)._
Proof.: We may assume \(t\) is transcendental over \(K\). View \(K(t)\) as a subfield of the Laurent series field \(F=K(\!(t^{-1})\!)\). We equip \(F\) with the valuation ring \(\mathcal{O}_{F}=K[[t^{-1}]\!]\) and the continuous derivation extending that of \(K(t)\), cf. [ADH, p. 226]. Then the valued differential field \(F\) is monotone. Hence the valued differential subfield \(K(t)\) of \(F\) is also monotone. We equip \(L\) with a valuation ring \(\mathcal{O}_{L}\) lying over \(\mathcal{O}_{F}\cap K(t)\). Then \(L\) is monotone by [ADH, 6.3.10]. We identify \(K\) with its image under the residue morphism \(a\mapsto\operatorname{res}a\colon\mathcal{O}_{L}\to\boldsymbol{k}_{L}:= \operatorname{res}(L)\). Then \(K\) is a differential subfield of the differential residue field \(\boldsymbol{k}_{L}\) of \(L\), and \(\boldsymbol{k}_{L}\) is algebraic over \(K\). Let now \(y\in L\) with \(y^{m}=P(y)\), and towards a contradiction suppose \(y\succ 1\). Then \(y^{\dagger}\preccurlyeq 1\), thus \(y^{(n)}=y\,R_{n}(y^{\dagger})\preccurlyeq y\) for all \(n\), and hence \(y^{m}=P(y)\preccurlyeq y^{d}\) where \(d=\deg P<m\), a contradiction. Thus \(y\preccurlyeq 1\), and \(z:=\operatorname{res}y\in\boldsymbol{k}_{L}\) has the required property.
We recall from [ADH, p. 462] that a _Liouville extension_ of \(K\) is a differential field extension \(E\) of \(K\) such that \(C_{E}\) is algebraic over \(C\) and for each \(t\in E\) there are \(t_{1},\dots,t_{n}\in E\) such that \(t\in K(t_{1},\dots,t_{n})\) and for \(i=1,\dots,n\):
1. \(t_{i}\) is algebraic over \(K(t_{1},\dots,t_{i-1})\), or
2. \(t^{\prime}_{i}\in K(t_{1},\dots,t_{i-1})\), or
3. \(t^{\prime}_{i}\in t_{i}K(t_{1},\dots,t_{i-1})\).
**Proposition 1.1.29** (Rosenlicht [167, p. 371]).: _Suppose \(y^{m}=P(y)\) for some \(y\) in a Liouville extension of \(K\). Then \(z^{m}=P(z)\) for some \(z\) in a differential field extension of \(K\) which is algebraic over \(K\)._
Proof.: A _Liouville sequence_ over \(K\) is by definition a sequence \((t_{1},\dots,t_{n})\) of elements of a differential field extension \(E\) of \(K\) such that for \(i=1,\dots,n\):
1. \(t_{i}\) is algebraic over \(K(t_{1},\dots,t_{i-1})\), or
2. \(t^{\prime}_{i}\in K(t_{1},\dots,t_{i-1})\), or
3. \(t^{\prime}_{i}\in t_{i}K(t_{1},\dots,t_{i-1})\).
Note that then \(K_{i}:=K(t_{1},\ldots,t_{i})\) is a differential subfield of \(E\) for \(i=1,\ldots,n\). By induction on \(d\in\mathbb{N}\) we now show: if \((t_{1},\ldots,t_{n})\) is a Liouville sequence over \(K\) with \(\operatorname{trdeg}\bigl{(}K(t_{1},\ldots,t_{n})|K\bigr{)}=d\) and \(y^{m}=P(y)\) for some \(y\in K(t_{1},\ldots,t_{n})\), then the conclusion of the proposition holds. This is obvious for \(d=0\), so let \((t_{1},\ldots,t_{n})\) be a Liouville sequence over \(K\) with \(\operatorname{trdeg}\bigl{(}K(t_{1},\ldots,t_{n})|K\bigr{)}=d\geqslant 1\) and \(y^{m}=P(y)\) for some \(y\in K(t_{1},\ldots,t_{n})\). Take \(i\in\{1,\ldots,n\}\) maximal such that \(t_{i}\) is transcendental over \(K_{i-1}=K(t_{1},\ldots,t_{i-1})\). Then \(t_{i}^{\prime}\in K_{i-1}+t_{i}K_{i-1}\), and \(K(t_{1},\ldots,t_{n})\) is algebraic over \(K(t_{1},\ldots,t_{i})\). Applying Lemma 1.1.28 to \(K_{i-1}\), \(t_{i}\) in the role of \(K,t\) yields a \(z\) in an algebraic differential field extension of \(K_{i-1}\) with \(z^{m}=P(z)\). Now apply the inductive hypothesis to the Liouville sequence \((t_{1},\ldots,t_{i-1},z)\) over \(K\).
**Corollary 1.1.30** (Kolchin).: _Let \(A\in K[\partial]^{\neq}\), and suppose \(A(y)=0\) for some \(y\neq 0\) in a Liouville extension of \(K\). Then \(A(y)=0\) for some \(y\neq 0\) in a differential field extension of \(K\) with \(y^{\dagger}\) algebraic over \(K\)._
Proof.: Let \(m=\operatorname{order}A\) and note that \(\operatorname{Ri}(A)=Z^{m}+Q\) with \(\deg Q<m\)[ADH, p. 299]. Now apply the proposition above.
_Remark_.: Corollary 1.1.30 goes back to Liouville [131] in an analytic setting for \(A\) of order \(2\) and \(K=\mathbb{C}(x)\) with \(C=\mathbb{C}\), \(x^{\prime}=1\).
**Results of Srinivasan (\({}^{*}\)).** Corollary 1.1.35 and Lemma 1.1.36 below will be used in Section 7.6. In this subsection \(K\) is a differential field and \(a_{2},\ldots,a_{n}\in K\), \(n\geqslant 2\), \(a_{n}\neq 0\). We investigate the solutions of the differential equation
\[y^{\prime}\ =\ a_{2}y^{2}+a_{3}y^{3}+\cdots+a_{n}y^{n} \tag{1.1.1}\]
in Liouville extensions of \(K\). For \(n=3\) this is a special case of Abel's differential equation of the first kind, first studied by Abel [1] (cf. [111, §A.4.10]). In the next three lemmas and in Proposition 1.1.34 we let \(y\) be an element of a differential field extension \(L\) of \(K\) satisfying (1.1.1). At various places we consider a field \(E(\!(t)\!)\) of Laurent series over a field \(E\); it is to be viewed as a valued field in the usual way.
**Lemma 1.1.31**.: _Suppose \(y\) is transcendental over \(K\). Then \(K\langle y\rangle^{\dagger}\cap K=K^{\dagger}\)._
Proof.: We view \(K\langle y\rangle=K(y)\) as a differential subfield of \(K(\!(y)\!)\) equipped with the unique continuous derivation extending that of \(K(y)\). Let \(f=\sum_{j\geqslant k}f_{j}y^{j}\in K(\!(y)\!)\) with \(k\in\mathbb{Z}\), all \(f_{j}\in K\), and \(f_{k}\neq 0\). Then
\[f^{\prime}=f_{k}^{\prime}y^{k}+(f_{k+1}^{\prime}+kf_{k}a_{2})y^{k+1}+\bigl{(} f_{k+2}^{\prime}+kf_{k}a_{3}+(k+1)f_{k+1}a_{2}\bigr{)}y^{k+2}+\cdots.\]
Hence if \(f^{\prime}=af\) where \(a\in K\), then \(f_{k}^{\prime}=af_{k}\) and so \(a\in K^{\dagger}\).
**Lemma 1.1.32**.: _Suppose \(y\) is transcendental over \(K\) and \(R\) is a differential subring of \(K\) with \(C\subseteq R=\partial(R)\) and \(a_{2},\ldots,a_{n}\in R\). Then \(\partial(K\langle y\rangle)\cap K=\partial(K)\)._
Proof.: Let \(K(\!(y)\!)\) and \(f\) be as in the proof of Lemma 1.1.31. Then
\[g:=f^{\prime}=\sum_{j\geqslant k}g_{j}y^{j}\quad\text{where $g_{j}=f_{j}^{ \prime}+\sum_{i=1}^{n-1}(j-i)f_{j-i}a_{i+1}$ and $f_{l}:=0$ for $l<k$.}\]
Towards a contradiction, suppose \(f^{\prime}=a\in K\setminus\partial(K)\). By induction on \(j\geqslant k\) we show that then \(f_{j}\in R\) and \(g_{j}=0\). We have \(g_{k}=f_{k}^{\prime}\), and if \(f_{k}^{\prime}\neq 0\), then \(k=0\) and \(a=f_{k}^{\prime}\in\partial(K)\), a contradiction. Therefore \(f_{k}\in C^{\times}\) and \(g_{k}=0\). Suppose \(j\geqslant k+1\) and \(f_{k},\ldots,f_{j-1}\in R\). Take \(h\in R\) with \(h^{\prime}=\sum_{i=1}^{n-1}(j-i)f_{j-i}a_{i+1}\). Now \(g_{j}=(f_{j}+h)^{\prime}\neq a\) since \(a\notin\partial(K)\), hence \(g_{j}=0\) and thus \(f_{j}\in-h+C\subseteq R\). Thus \(g_{j}=0\) for all \(j\geqslant k\), so \(a=f^{\prime}=0\in\partial(K)\), a contradiction.
**Lemma 1.1.33**.: _Let \(t\in L^{\times}\) and suppose \(L\) is algebraic over \(K(t)\). If_
* \(t^{\prime}\in K\setminus\partial(K)\) _and there is a differential subring_ \(R\) _of_ \(K\) _with_ \(C\subseteq R=\partial(R)\) _and_ \(a_{2},\ldots,a_{n}\in R\)_, or_
* \(t^{\dagger}\in K\setminus\mathbb{Q}K^{\dagger}\)_,_
_then \(y\) is algebraic over \(K\)._
Proof.: We may assume that \(t\) is transcendental over \(K\). Suppose \(y\) is transcendental over \(K\). Then \(t\) is algebraic over \(K(y)=K\langle y\rangle\). If \(t^{\prime}\in K\), then with \(R=K\langle y\rangle\), \(r=t^{\prime}\), and \(x=t\) in [ADH, 4.6.10] we obtain \(t^{\prime}\in\partial(K\langle y\rangle)\). If \(t^{\dagger}\in K\), then with \(R=K\langle y\rangle\), \(r=t^{\dagger}\), and \(x=t\) in [ADH, 4.6.11] we get \(t^{\dagger}\in\mathbb{Q}K\langle y\rangle^{\dagger}\). Thus (i) contradicts Lemma 1.1.32 and (ii) contradicts Lemma 1.1.31.
**Proposition 1.1.34**.: _Suppose \(C\) is algebraically closed, \(R\) is a differential subring of \(K\) with \(C\subseteq R=\partial(R)\) and \(a_{2},\ldots,a_{n}\in R\), and \(L\) is a Liouville extension of \(K\). Then \(y\) is algebraic over \(K\)._
Proof.: By induction on \(d\in\mathbb{N}\) we show: if \((t_{1},\ldots,t_{m})\in L^{m}\) is a Liouville sequence over \(K\) with \(\operatorname{trdeg}\big{(}K(t_{1},\ldots,t_{m})|K\big{)}=d\) and \(y\in K(t_{1},\ldots,t_{m})\), then \(y\) is algebraic over \(K\). This is clear for \(d=0\), so let \((t_{1},\ldots,t_{m})\in L^{m}\) be a Liouville sequence over \(K\) with \(\operatorname{trdeg}\big{(}K(t_{1},\ldots,t_{m})|K\big{)}=d\geqslant 1\) and \(y\in K(t_{1},\ldots,t_{m})\). Take \(i\in\{1,\ldots,m\}\) maximal such that \(t_{i}\) is transcendental over \(K_{i-1}:=K(t_{1},\ldots,t_{i-1})\). Then \(t_{i}^{\prime}\in K_{i-1}\setminus\partial(K_{i-1})\) or \(t_{i}^{\dagger}\in K_{i-1}\setminus\mathbb{Q}K_{i-1}^{\dagger}\). Hence \(y\) is algebraic over \(K_{i-1}\) by Lemma 1.1.33 applied to \(K_{i-1}\), \(t_{i}\), \(K(t_{1},\ldots,t_{m})\) in the role of \(K\), \(t\), \(L\). Now apply the (tacit) inductive hypothesis to the Liouville sequence \((t_{1},\ldots,t_{i-1},y)\) over \(K\).
In the remainder of this subsection \(C\) is algebraically closed, \(x\in K\), \(x^{\prime}=1\) (so \(x\) is transcendental over \(C\)), and \(a_{2},\ldots,a_{n}\in C[x]\). Applying Proposition 1.1.34 with \(C(x)\), \(C[x]\) in place of \(K\), \(R\), respectively, yields [196, Proposition 4.1] with a shorter proof:
**Corollary 1.1.35** (Srinivasan).: _Any \(y\) in any Liouville extension of \(C(x)\) satisfying (1.1.1) is algebraic over \(C(x)\)._
We now assume \(a_{2},\ldots,a_{n}\in C\) and put \(P:=a_{2}Y^{2}+\cdots+a_{n}Y^{n}\in C[Y]\). We equip \(C(Y)\) with the derivation that is trivial on \(C\) and satisfies \(Y^{\prime}=1\). Thus the field isomorphism \(C(x)\to C(Y)\) over \(C\) with \(x\mapsto Y\) is an isomorphism between the differential subfield \(C(x)\) of \(K\) and \(C(Y)\). Next a special case of [194, Proposition 3.1]:
**Lemma 1.1.36** (Srinivasan).: _The following are equivalent:_
* _there is a non-constant_ \(y\) _in a differential field extension of_ \(C(x)\) _such that_ \(y\) _is algebraic over_ \(C(x)\) _and_ \(y^{\prime}=P(y)\)_;_
* _there exists_ \(Q\in C(Y)\) _such that_ \(Q^{\prime}=1/P\)_._
Proof.: Let \(y\notin C\) be algebraic over \(C(x)\) with \(y^{\prime}=P(y)\). Then \(y\) is transcendental over \(C\), hence \(x\) is algebraic over \(C(y)\) and so \(x\in C(y)\) by [ADH, 4.6.10] applied to \(R=C(y)\). Take \(Q\in C(Y)\) such that \(x=Q(y)\). Then \(1=x^{\prime}=Q(y)^{\prime}=Q^{\prime}(y)P(y)\) and thus \(Q^{\prime}=1/P\). This shows (i) \(\Rightarrow\) (ii). Conversely, let \(Q\in C(Y)\) be such that \(Q^{\prime}=1/P\), and let \(Q=A/B\) with \(A,B\in C[Y]\), \(B\neq 0\). By [ADH, 4.6.14] we have \(y\) in a differential field extension of \(C(x)\) with constant field \(C\) such that \(y^{\prime}=P(y)\) and \(B(y)\neq 0\). Then \(Q(y)^{\prime}=Q^{\prime}(y)y^{\prime}=(1/P(y))P(y)=1\)
so \(Q(y)=x+c\in C(y)\) with \(c\in C\). Then \(y\notin C\), hence \(y\) is transcendental over \(C\) and \(y\) is algebraic over \(C(x)\).
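For example, for \(P=Y^{2}\) (that is, \(n=2\), \(a_{2}=1\)) condition (ii) holds with \(Q=-Y^{-1}\), and accordingly \(y=-(x+c)^{-1}\) \((c\in C)\) is a non-constant solution of \(y^{\prime}=y^{2}\) which even lies in \(C(x)\).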
Here are two applications of Lemma 1.1.36. In the proofs we extend the derivation of \(C(Y)\) to the continuous \(C\)-linear derivation on \(C(\!(Y)\!)\) with \(Y^{\prime}=1\).
**Corollary 1.1.37**.: _Suppose \(n\geqslant 3\), \(a_{2},a_{3}\neq 0\), and \(y\) in a Liouville extension of \(C(x)\) satisfies \(y^{\prime}=P(y)\). Then \(y\in C\)._
Proof.: In \(C(\!(Y)\!)\) we have \(1/P=(1/a_{2})Y^{-2}-(a_{3}/a_{2}^{2})Y^{-1}+\cdots\) and hence \(Q^{\prime}\neq 1/P\) for all \(Q\in C(\!(Y)\!)\), so \(y\in C\) by Corollary 1.1.35 and Lemma 1.1.36.
**Corollary 1.1.38**.: _Suppose \(P\) has a simple zero in \(C\) and \(y\) in a Liouville extension of \(C(x)\) satisfies \(y^{\prime}=P(y)\). Then \(y\in C\)._
Proof.: Let \(c\in C\) with \(P(c)=0\), \(P^{\prime}(c)\neq 0\). Then in \(C(\!(Y)\!)\) we have \(1/P(Y+c)\in aY^{-1}+C[[Y]]\) where \(a\in C^{\times}\), hence \(Q^{\prime}\neq 1/P\) for all \(Q\in C(\!(Y)\!)\). Thus \(y\in C\) by Corollary 1.1.35 and Lemma 1.1.36.
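These hypotheses are used essentially: for \(P=Y^{3}\) (where \(a_{2}=0\) and the only zero of \(P\) in \(C\) is not simple) we have \(Q^{\prime}=1/P\) for \(Q=-\frac{1}{2}Y^{-2}\), and indeed \(y=(c-2x)^{-1/2}\) \((c\in C)\) is a non-constant solution of \(y^{\prime}=y^{3}\) algebraic over \(C(x)\), hence lying in a Liouville extension of \(C(x)\).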
### 1.2. The Group of Logarithmic Derivatives
Let \(K\) be a differential field. The map \(y\mapsto y^{\dagger}\colon K^{\times}\to K\) is a morphism from the multiplicative group of \(K\) to the additive group of \(K\), with kernel \(C^{\times}\). Its image
\[(K^{\times})^{\dagger}\ =\ \bigl{\{}y^{\dagger}:\,y\in K^{\times}\bigr{\}}\]
is an additive subgroup of \(K\), which we call the **group of logarithmic derivatives** of \(K\). The morphism \(y\mapsto y^{\dagger}\) induces an isomorphism \(K^{\times}/C^{\times}\to(K^{\times})^{\dagger}\). To shorten notation, set \(0^{\dagger}:=0\), so \(K^{\dagger}=(K^{\times})^{\dagger}\). For \(\phi\in K^{\times}\) we have \(\phi(K^{\phi})^{\dagger}=K^{\dagger}\). The group \(K^{\times}\) is divisible iff both \(C^{\times}\) and \(K^{\dagger}\) are divisible. If \(K\) is algebraically closed, then \(K^{\times}\) and hence \(K^{\dagger}\) are divisible, making \(K^{\dagger}\) a \(\mathbb{Q}\)-linear subspace of \(K\). Likewise, if \(K\) is real closed, then the multiplicative subgroup \(K^{>}\) of \(K^{\times}\) is divisible, so \(K^{\dagger}=(K^{>})^{\dagger}\) is a \(\mathbb{Q}\)-linear subspace of \(K\).
**Lemma 1.2.1**.: _Suppose \(K^{\dagger}\) is divisible, \(L\) is a differential field extension of \(K\) with \(L^{\dagger}\cap K=K^{\dagger}\), and \(M\) is a differential field extension of \(L\) and algebraic over \(L\). Then \(M^{\dagger}\cap K=K^{\dagger}\)._
Proof.: Let \(f\in M^{\times}\) be such that \(f^{\dagger}\in K\). Then \(f^{\dagger}\in L\), so for \(n:=[L(f):L]\),
\[nf^{\dagger}\ =\ \mbox{tr}_{L(f)|L}(f^{\dagger})\ =\ \mbox{N}_{L(f)|L}(f)^{ \dagger}\in L^{\dagger}\]
by an identity in [ADH, 4.4]. Hence \(nf^{\dagger}\in K^{\dagger}\), and thus \(f^{\dagger}\in K^{\dagger}\).
In particular, if \(K^{\dagger}\) is divisible and \(M\) is a differential field extension of \(K\) and algebraic over \(K\), then \(M^{\dagger}\cap K=K^{\dagger}\).
In the next two lemmas \(a,b\in K\); distinguishing whether or not \(a\in K^{\dagger}\) helps to describe the solutions to the differential equation \(y^{\prime}+ay=b\):
**Lemma 1.2.2**.: _Suppose \(\partial K=K\), and let \(L\) be a differential field extension of \(K\) with \(C_{L}=C\). Suppose also \(a\in K^{\dagger}\). Then for some \(y_{0}\in K^{\times}\) and \(y_{1}\in K\),_
\[\{y\in L:\,y^{\prime}+ay=b\}\ =\ \{y\in K:\,y^{\prime}+ay=b\}\ =\ Cy_{0}+y_{1}.\]
Proof.: Take \(y_{0}\in K^{\times}\) with \(y_{0}^{\dagger}=-a\), so \(y_{0}^{\prime}+ay_{0}=0\). Twisting \(\partial+a\in K[\partial]\) by \(y_{0}\) (see [ADH, p. 243]) transforms the equation \(y^{\prime}+ay=b\) into \(z^{\prime}=y_{0}^{-1}b\). This gives \(y_{1}\in K\) with \(y_{1}^{\prime}+ay_{1}=b\). Using \(C_{L}=C\), these \(y_{0},y_{1}\) have the desired properties.
**Lemma 1.2.3**.: _Let \(L\) be a differential field extension of \(K\) with \(L^{\dagger}\cap K=K^{\dagger}\). Assume \(a\notin K^{\dagger}\). Then there is at most one \(y\in L\) with \(y^{\prime}+ay=b\)._
Proof.: If \(y_{1}\), \(y_{2}\) are distinct solutions in \(L\) of the equation \(y^{\prime}+ay=b\), then we have \(-a=(y_{1}-y_{2})^{\dagger}\in L^{\dagger}\cap K=K^{\dagger}\), contradicting \(a\notin K^{\dagger}\).
**Logarithmic derivatives under algebraic closure.**_In this subsection \(K\) is a differential field._ We describe for real closed \(K\) how \(K^{\dagger}\) changes if we pass from \(K\) to its algebraic closure. More generally, suppose the underlying field of \(K\) is euclidean; in particular, \(-1\) is not a square in \(K\). We equip \(K\) with the unique ordering making \(K\) an ordered field. For \(y=a+bi\in K[i]\) (\(a,b\in K\)) we let \(|y|\in K^{\geqslant}\) be such that \(|y|^{2}=a^{2}+b^{2}\). Then \(y\mapsto|y|\colon K[i]\to K^{\geqslant}\) is an absolute value on \(K[i]\), i.e., for all \(x,y\in K[i]\),
\[|x|\ =\ 0\iff x=0,\qquad|xy|\ =\ |x||y|,\qquad|x+y|\ \leqslant\ |x|+|y|.\]
For \(a\in K\) we have \(|a|=\max\{a,-a\}\). We have the subgroup
\[S\ :=\ \big{\{}y\in K[i]:|y|=1\big{\}}\ =\ \big{\{}a+bi:a,b\in K,\ a^{2}+b^{2}=1 \big{\}}\]
of the multiplicative group \(K[i]^{\times}\). By an easy computation all elements of \(K[i]\) are squares in \(K[i]\); hence \(K[i]^{\dagger}\) is 2-divisible. The next lemma describes \(K[i]^{\dagger}\); it partly generalizes [ADH, 10.7.8].
**Lemma 1.2.4**.: _We have \(K[i]^{\times}=K^{>}\cdot S\) with \(K^{>}\cap S=\{1\}\), and_
\[K[i]^{\dagger}\ =\ K^{\dagger}\oplus S^{\dagger}\qquad(\text{internal direct sum of subgroups of $K[i]^{\dagger}$}).\]
_For \(a,b\in K\) with \(a+bi\in S\) we have \((a+bi)^{\dagger}=\operatorname{wr}(a,b)i\). Thus \(K[i]^{\dagger}\cap K=K^{\dagger}\)._
Proof.: Let \(y=a+bi\in K[i]^{\times}\) (\(a,b\in K\)), and take \(r\in K^{>}\) with \(r^{2}=a^{2}+b^{2}\); then \(y=r\cdot(y/r)\) with \(y/r\in S\). Thus \(K[i]^{\times}=K^{>}\cdot S\), and clearly \(K^{>}\cap S=\{1\}\). Hence \(K[i]^{\dagger}=K^{\dagger}+S^{\dagger}\). Suppose \(a\in K^{\times}\), \(s\in S\), and \(a^{\dagger}=s^{\dagger}\); then \(a=cs\) with \(c\in C_{K[i]}\), and \(C_{K[i]}=C[i]\) by [ADH, 4.6.20] and hence \(\max\{a,-a\}=|a|=|c|\in C\), so \(a\in C\) and thus \(a^{\dagger}=s^{\dagger}=0\); therefore the sum is direct. Now if \(a,b\in K\) and \(|a+bi|=1\), then
\[(a+bi)^{\dagger} =\ (a^{\prime}+b^{\prime}i)(a-bi)\] \[=\ (aa^{\prime}+bb^{\prime})+(ab^{\prime}-a^{\prime}b)i\] \[=\ \tfrac{1}{2}\big{(}a^{2}+b^{2}\big{)}^{\prime}+(ab^{\prime}-a^{ \prime}b)i\ =\ (ab^{\prime}-a^{\prime}b)i\ =\ \operatorname{wr}(a,b)i.\qed\]
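Informally, if \(\theta\in K\) is such that \(a=\cos\theta\) and \(b=\sin\theta\) make sense (for instance in a Hardy field setting with \(\theta\preccurlyeq 1\)), then \(\operatorname{wr}(a,b)=(\cos\theta)(\sin\theta)^{\prime}-(\cos\theta)^{\prime}(\sin\theta)=\theta^{\prime}\), so the lemma recovers the identity \((\mathrm{e}^{\theta i})^{\dagger}=\theta^{\prime}i\).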
**Corollary 1.2.5**.: _For \(y\in K[i]^{\times}\) we have \(\operatorname{Re}(y^{\dagger})=|y|^{\dagger}\), and the group morphism \(y\mapsto\operatorname{Re}(y^{\dagger})\colon K[i]^{\times}\to K\) has kernel \(C^{>}S\)._
If \(K\) is real closed and \(\mathcal{O}\) a convex valuation ring of \(K\), then \(\mathcal{O}[i]=\mathcal{O}+\mathcal{O}i\) is the unique valuation ring of \(K[i]\) that lies over \(\mathcal{O}\), and so \(S\subseteq\mathcal{O}[i]^{\times}\), hence \(y\asymp|y|\) for all \(y\in K[i]^{\times}\). Thus by [ADH, 10.5.2(i)] and Corollary 1.2.5:
**Corollary 1.2.6**.: _If \(K\) is a real closed pre-\(H\)-field, then for all \(y,z\in K[i]^{\times}\),_
\[y\prec z\quad\Longrightarrow\quad\operatorname{Re}(y^{\dagger})<\operatorname{ Re}(z^{\dagger}).\]
We also have a useful decomposition for \(S\):
**Corollary 1.2.7**.: _Suppose \(K\) is a real closed \(H\)-field. Then_
\[S\ =\ S_{C}\cdot\big{(}S\cap(1+\smallo_{K[i]})\big{)}\]
_where \(S_{C}:=S\cap C[i]^{\times}\) and \(S\cap(1+\smallo_{K[i]})\) are subgroups of \(\mathcal{O}[i]^{\times}\)._
Proof.: The inclusion \(\supseteq\) is clear. For the reverse inclusion, let \(a,b\in K\), \(a^{2}+b^{2}=1\) and take the unique \(c,d\in C\) with \(a-c\prec 1\) and \(b-d\prec 1\). Then \(c^{2}+d^{2}=1\) and \(a+bi\sim c+di\), and so \((a+bi)/(c+di)\in S\cap(1+\smallo_{K[i]})\).
**Logarithmic derivatives in asymptotic fields.**_Let \(K\) be an asymptotic field._ If \(K\) is henselian and \(\boldsymbol{k}:=\operatorname{res}K\), then by [ADH, remark before 3.3.33], \(K^{\times}\) is divisible iff the groups \(\boldsymbol{k}^{\times}\) and \(\Gamma\) are both divisible. Recall that in [ADH, 14.2] we defined the \(\mathcal{O}\)-submodule
\[\operatorname{I}(K)\ =\ \{y\in K:\,y\preccurlyeq f^{\prime}\text{ for some }f\in\mathcal{O}\}\]
of \(K\). We have \(\partial\mathcal{O}\subseteq\operatorname{I}(K)\), hence \((1+\smallo)^{\dagger}\subseteq(\mathcal{O}^{\times})^{\dagger}\subseteq\operatorname{I}(K)\). One easily verifies:
**Lemma 1.2.8**.: _Suppose \(K\) is pre-\(\operatorname{d}\)-valued. If \(\operatorname{I}(K)\subseteq\partial K\), then \(\operatorname{I}(K)=\partial\mathcal{O}\). If \(\operatorname{I}(K)\subseteq K^{\dagger}\), then \(\operatorname{I}(K)=(\mathcal{O}^{\times})^{\dagger}\), with \(\operatorname{I}(K)=(1+\smallo)^{\dagger}\) if \(K\) is \(\operatorname{d}\)-valued._
If \(K\) is \(\operatorname{d}\)-valued or \(K\) is pre-\(\operatorname{d}\)-valued without a gap, then
\[\operatorname{I}(K)\ =\ \{y\in K:y\preccurlyeq f^{\prime}\text{ for some }f\in\smallo\}.\]
For \(\phi\in K^{\times}\) we have \(\phi\operatorname{I}(K^{\phi})=\operatorname{I}(K)\). If \(K\) has asymptotic integration and \(L\) is an asymptotic extension of \(K\), then \(\operatorname{I}(K)=\operatorname{I}(L)\cap K\). The following is [ADH, 14.2.5]:
**Lemma 1.2.9**.: _If \(K\) is \(H\)-asymptotic, has asymptotic integration, and is \(1\)-linearly newtonian, then it is \(\operatorname{d}\)-valued and \(\partial\mathcal{O}=\operatorname{I}(K)=(1+\smallo)^{\dagger}\)._
We now turn our attention to the condition \(\operatorname{I}(K)\subseteq K^{\dagger}\). If \(\operatorname{I}(K)\subseteq K^{\dagger}\), then also \(\operatorname{I}(K^{\phi})\subseteq(K^{\phi})^{\dagger}\) for \(\phi\in K^{\times}\), where
\[(K^{\phi})^{\dagger}\ :=\ \{\phi^{-1}f^{\prime}/f:\,f\in K^{\times}\}\ =\ \phi^{-1}K^{\dagger}.\]
By [ADH, Section 9.5 and 10.4.3]:
**Lemma 1.2.10**.: _Let \(K\) be of \(H\)-type. If \(K\) is \(\operatorname{d}\)-valued, or pre-\(\operatorname{d}\)-valued without a gap, then \(K\) has an immediate henselian asymptotic extension \(L\) with \(\operatorname{I}(L)\subseteq L^{\dagger}\)._
**Corollary 1.2.11**.: _Suppose \(K\) has asymptotic integration. Let \(L\) be an asymptotic field extension of \(K\) such that \(L^{\times}=K^{\times}C_{L}^{\times}(1+\smallo_{L})\). Then \(L^{\dagger}=K^{\dagger}+(1+\smallo_{L})^{\dagger}\), and if \(\operatorname{I}(K)\subseteq K^{\dagger}\), then \(L^{\dagger}\cap K=K^{\dagger}\)._
Proof.: Let \(f\in L^{\times}\), and take \(b\in K^{\times}\), \(c\in C_{L}^{\times}\), \(g\in\smallo_{L}\) with \(f=bc(1+g)\); then \(f^{\dagger}=b^{\dagger}+(1+g)^{\dagger}\), showing \(L^{\dagger}=K^{\dagger}+(1+\smallo_{L})^{\dagger}\). Next, suppose \(\operatorname{I}(K)\subseteq K^{\dagger}\), let \(b\), \(c\), \(f\), \(g\) be as before, and assume \(a:=f^{\dagger}\in K\); then
\[a-b^{\dagger}\in(1+\smallo_{L})^{\dagger}\cap K\ \subseteq\ \operatorname{I}(L)\cap K\ =\ \operatorname{I}(K)\ \subseteq\ K^{\dagger}\]
and hence \(a\in K^{\dagger}\). This shows \(L^{\dagger}\cap K=K^{\dagger}\).
Two cases where the assumption on \(L\) in Corollary 1.2.11 is satisfied: (1) \(L\) is an immediate asymptotic field extension of \(K\), because then \(L^{\times}=K^{\times}(1+\smallo_{L})\); and (2) \(L\) is a \(\operatorname{d}\)-valued field extension of \(K\) with \(\Gamma=\Gamma_{L}\).
If \(F\) is a henselian valued field of residue characteristic \(0\), then clearly the subgroup \(1+\smallo_{F}\) of \(F^{\times}\) is divisible. Hence, if \(K\) and \(L\) are as in Corollary 1.2.11 and in addition \(K^{\dagger}\) is divisible and \(L\) is henselian, then \(L^{\dagger}\) is divisible.
_Example 1.2.12_.: Let \(C\) be a field of characteristic \(0\) and \(Q\) be a subgroup of \(\mathbb{Q}\) with \(1\in Q\). The Hahn field \(C(\!(t^{Q})\!)=C[[x^{Q}]]\), with \(x=t^{-1}\), is given the natural derivation with \(c^{\prime}=0\) for all \(c\in C\) and \(x^{\prime}=1\): this derivation is defined by
\[\bigg{(}\sum_{q\in Q}c_{q}x^{q}\bigg{)}^{\prime}\ :=\ \sum_{q\in Q}qc_{q}x^{q-1} \qquad(\text{all }c_{q}\in C).\]
Then \(C(\!(t^{Q})\!)\) has constant field \(C\), and is d-valued of \(H\)-type. Thus \(K:=C(\!(t^{Q})\!)\) satisfies \(\mathrm{I}(K)\subseteq K^{\dagger}\) by Lemma 1.2.10. Hence by Lemma 1.2.8,
\[\mathrm{I}(K)\ =\ (1+\smallo)^{\dagger}\ =\ \big{\{}f\in K:\,f\prec x^{\dagger}=t\big{\}}\ =\ \smallo\,t.\]
It follows easily that \(K^{\dagger}=Qt\oplus\mathrm{I}(K)\) (internal direct sum of subgroups of \(K^{\dagger}\)) and thus \((K^{t})^{\dagger}=Q\oplus\smallo\subseteq\mathcal{O}\). In particular, if \(Q=\mathbb{Z}\) (so \(K=C(\!(t)\!)\)), then \((K^{t})^{\dagger}=\mathbb{Z}\oplus tC[[t]]\). Moreover, if \(L:=\mathrm{P}(C)\subseteq C(\!(t^{\mathbb{Q}})\!)\) is the differential field of Puiseux series over \(C\), then \((L^{t})^{\dagger}=\mathbb{Q}\oplus\smallo_{L}\).
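To see the decomposition at work in \(K=C(\!(t)\!)\): for \(f=x^{2}(1+t)\), using \(x^{\dagger}=t\) and \(t^{\prime}=-t^{2}\) we get
\[f^{\dagger}\ =\ 2x^{\dagger}+(1+t)^{\dagger}\ =\ 2t-\frac{t^{2}}{1+t}\ \in\ \mathbb{Z}t\oplus\mathrm{I}(K).\]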
In the next three corollaries we continue with the \(\operatorname{d}\)-valued Hahn field \(K=C(\!(t^{Q})\!)\) from the example above. So \(CK^{\dagger}=Ct\oplus\mathrm{I}(K)\) (internal direct sum of \(C\)-linear subspaces of \(K\)) where \(\mathrm{I}(K)=\smallo\,t\), hence \(CK^{\dagger}=\mathcal{O}t\). For \(f=\sum_{q\in Q}f_{q}x^{q}\in K\) (all \(f_{q}\in C\)) we have the "residue" \(f_{-1}\) of \(f\), and we observe that \(f\mapsto f_{-1}\colon K\to C\) is \(C\)-linear with kernel \(\partial(K)\). Thus:
**Corollary 1.2.13**.: \(\partial(K)\cap CK^{\dagger}=\mathrm{I}(K)\)_._
This yields a fact needed in Section 7.6:
**Corollary 1.2.14**.: _Let \(F:=C(x)\subseteq K\). Then \(\partial(F)\cap CF^{\dagger}=\{0\}\)._
Proof.: We arrange that \(C\) is algebraically closed. Let \(f\in\partial(F)\cap CF^{\dagger}\). Then \(f\in\mathrm{I}(K)=\smallo\,t\) by Corollary 1.2.13, so it suffices to show \(f\in C[x]\). For \(c\in C\), let \(v_{c}\colon F^{\times}\to\mathbb{Z}\) be the valuation on \(F\) with \(v_{c}(C^{\times})=\{0\}\) and \(v_{c}(x-c)=1\). Then \(v_{c}=v\circ\sigma_{c}\) where \(\sigma_{c}\) is the \(C\)-linear automorphism of the field \(F\) with \(x\mapsto c+t\). Hence it suffices to show that \(\sigma_{c}(f)\preccurlyeq 1\) for all \(c\in C\). For \(c\in C\), \(g\in F\) we have \(\sigma_{c}(g)^{\prime}=-t^{2}\sigma_{c}(g^{\prime})\), so \(-t^{2}\sigma_{c}(f)\in\partial(F)\cap CF^{\dagger}\subseteq\smallo\,t\), hence \(\sigma_{c}(f)\prec x\) and thus \(\sigma_{c}(f)\preccurlyeq 1\).
For the next corollary, compare [86, p. 14] and think of \(y_{j}\) as \(\log(x-c_{j})\).
**Corollary 1.2.15** (Linear independence of logarithms).: _Let \(c_{1},\ldots,c_{n}\in C\) be distinct, and let \(y_{1},\ldots,y_{n}\) in a common differential field extension of \(C(x)\) be such that \(y_{j}^{\prime}=(x-c_{j})^{-1}\) for \(j=1,\ldots,n\). Then for all \(a_{1},\ldots,a_{n}\in C\),_
\[a_{1}y_{1}+\cdots+a_{n}y_{n}\in C(x)\ \Longrightarrow\ a_{1}=\cdots=a_{n}=0.\]
Proof.: Set \(F:=C(x)\) and suppose \(a_{1},\ldots,a_{n}\in C\) and \(f:=a_{1}y_{1}+\cdots+a_{n}y_{n}\in F\). Then \(f^{\prime}=a_{1}(x-c_{1})^{-1}+\cdots+a_{n}(x-c_{n})^{-1}\in\partial(F)\cap CF ^{\dagger}\), so by Corollary 1.2.14,
\[a_{1}(x-c_{1})^{-1}+\cdots+a_{n}(x-c_{n})^{-1}\ =\ 0.\]
Multiplying both sides of this equality by \(\prod_{j=1}^{n}(x-c_{j})\) and substituting \(c_{j}\) for \(x\) yields \(a_{j}=0\), for \(j=1,\ldots,n\)
**The real closed case.** _In this subsection \(H\) is a real closed asymptotic field whose valuation ring \(\mathcal{O}\) is convex with respect to the ordering of \(H\)._ (In later use \(H\) is often a Hardy field, which is why we use the letter \(H\) here.) The valuation ring of the asymptotic field extension \(K=H[i]\) of \(H\) is then \(\mathcal{O}_{K}=\mathcal{O}+\mathcal{O}i\), from which we obtain \(\mathrm{I}(K)=\mathrm{I}(H)\oplus\mathrm{I}(H)i\). Let
\[S\ :=\ \big{\{}y\in K:\,|y|=1\big{\}},\qquad W\ :=\ \big{\{}\mathrm{wr}(a,b):\,a,b\in H,\ a^{2}+b^{2}=1\big{\}},\]
so \(S\) is a subgroup of \(\mathcal{O}_{K}^{\times}\) with \(S^{\dagger}=Wi\) and \(K^{\dagger}=H^{\dagger}\oplus Wi\) by Lemma 1.2.4. Since \(\partial\mathcal{O}\subseteq\mathrm{I}(H)\), we have \(W\subseteq\mathrm{I}(H)\), and thus: \(W=\mathrm{I}(H)\iff\mathrm{I}(H)i\subseteq K^{\dagger}\).
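To see where the elements of \(W\) come from, assume the Wronskian convention \(\operatorname{wr}(a,b)=ab^{\prime}-a^{\prime}b\), and suppose \(a,b\in H\) satisfy \(a^{\prime}=-\theta^{\prime}b\), \(b^{\prime}=\theta^{\prime}a\), and \(a^{2}+b^{2}=1\) for some \(\theta\preccurlyeq 1\) in \(H\) (think of \(a=\cos\theta\), \(b=\sin\theta\)). Then
\[\operatorname{wr}(a,b)\ =\ ab^{\prime}-a^{\prime}b\ =\ \theta^{\prime}(a^{2}+b^{2})\ =\ \theta^{\prime}\in\partial\mathcal{O},\]
in accordance with \(W\subseteq\mathrm{I}(H)\).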
**Lemma 1.2.16**.: _The following are equivalent:_
1. \(\mathrm{I}(K)\subseteq K^{\dagger}\)_;_
2. \(W=\mathrm{I}(H)\subseteq H^{\dagger}\)_._
Proof.: Assume (i). Then \(\mathrm{I}(H)\,i\subseteq\mathrm{I}(K)\subseteq K^{\dagger}\), so \(W=\mathrm{I}(H)\) by the equivalence preceding the lemma. Also \(\mathrm{I}(H)\subseteq\mathrm{I}(K)\) and \(K^{\dagger}\cap H=H^{\dagger}\) (by Lemma 1.2.4), hence \(\mathrm{I}(H)\subseteq H^{\dagger}\), so (ii) holds. For the converse, assume (ii). Then
\[\mathrm{I}(K)\ =\ \mathrm{I}(H)\oplus\mathrm{I}(H)i\ \subseteq\ H^{\dagger} \oplus Wi\ =\ K^{\dagger}.\qed\]
Applying now Lemma 1.2.9 we obtain:
**Corollary 1.2.17**.: _If \(H\) is \(H\)-asymptotic and has asymptotic integration, and \(K\) is \(1\)-linearly newtonian, then \(K\) is \(\mathrm{d}\)-valued and \(\mathrm{I}(K)\subseteq K^{\dagger}\); in particular, \(W=\mathrm{I}(H)\)._
**Corollary 1.2.18**.: _Suppose \(H\) has asymptotic integration and \(W=\mathrm{I}(H)\). Let \(F\) be a real closed asymptotic extension of \(H\) whose valuation ring is convex. Then_
\[F[i]^{\dagger}\cap K\ =\ (F^{\dagger}\cap H)\oplus\mathrm{I}(H)i.\]
_If in addition \(H^{\dagger}=H\), then \(F[i]^{\dagger}\cap K=H\oplus\mathrm{I}(H)i=K^{\dagger}\)._
Proof.: We have
\[F^{\dagger}\cap H\subseteq F[i]^{\dagger}\cap K\quad\text{ and }\quad\mathrm{I} (H)i=Wi\subseteq K^{\dagger}\cap Hi\subseteq F[i]^{\dagger}\cap K,\]
so \((F^{\dagger}\cap H)\oplus\mathrm{I}(H)i\subseteq F[i]^{\dagger}\cap K\). For the reverse inclusion, \(F[i]^{\dagger}=F^{\dagger}\oplus W_{F}i\), with
\[W_{F}\ :=\ \big{\{}\mathrm{wr}(a,b):\,a,b\in F,\ a^{2}+b^{2}=1\big{\}}\ \subseteq\ \mathrm{I}(F),\]
hence
\[F[i]^{\dagger}\cap K =\,(F^{\dagger}\cap H)\oplus(W_{F}\cap H)i\] \[\subseteq\,(F^{\dagger}\cap H)\oplus\big{(}\mathrm{I}(F)\cap H \big{)}i\,=\,(F^{\dagger}\cap H)\oplus\mathrm{I}(H)i,\]
using \(\mathrm{I}(F)\cap H=\mathrm{I}(H)\), a consequence of \(H\) having asymptotic integration. If \(H^{\dagger}=H\) then clearly \(F^{\dagger}\cap H=H\), hence \(F[i]^{\dagger}\cap K=K^{\dagger}\).
**Trigonometric closure.** In this subsection \(H\) is a real closed \(H\)-field. Let \(\mathcal{O}\) be its valuation ring and \(\mathcal{o}\) the maximal ideal of \(\mathcal{O}\). The algebraic closure \(K=H[i]\) of \(H\) is a \(\mathrm{d}\)-valued \(H\)-asymptotic extension with valuation ring \(\mathcal{O}_{K}=\mathcal{O}+\mathcal{O}i\). We have the "complex conjugation" automorphism \(z=a+b\mathrm{i}\mapsto\overline{z}=a-b\mathrm{i}\ (a,b\in H)\) of the valued differential field \(K\). For such \(z\), \(a\), \(b\) we have
\[|z|\ =\ \sqrt{z\overline{z}}\ =\ \sqrt{a^{2}+b^{2}}\ \in\ H^{\geqslant}.\]
**Lemma 1.2.19**.: _Suppose \(\theta\in H\) and \(\theta^{\prime}i\in K^{\dagger}\). Then \(\theta^{\prime}\in\partial\mathcal{o}\), and there is a unique \(y\sim 1\) in \(K\) such that \(y^{\dagger}=\theta^{\prime}i\). For this \(y\) we have \(|y|=1\), so \(y^{-1}=\overline{y}\)._
Proof.: From \(\theta^{\prime}i\in K^{\dagger}\) we get \(\theta^{\prime}\in W\subseteq\mathrm{I}(H)\), so \(\theta\preccurlyeq 1\), hence \(\theta^{\prime}\in\partial\mathcal{O}=\partial\mathcal{o}\). Let \(z\in K^{\times}\) and \(z^{\dagger}=\theta^{\prime}i\). Then \(\mathrm{Re}\,z^{\dagger}=0\), so by Corollaries 1.2.5 and 1.2.7 we have \(z=cy\) with \(c\in C_{K}^{\times}\) and \(y\in S\cap(1+\mathcal{o}_{K})\) where \(S=\{a\in K:\ |a|=1\}\). Hence \(y\sim 1\), \(|y|=1\), and \(y^{\dagger}=\theta^{\prime}i\). If also \(y_{1}\in K\) and \(y_{1}\sim 1\), \(y_{1}^{\dagger}=\theta^{\prime}i\), then \(y_{1}=c_{1}y\) with \(c_{1}\in C_{K}^{\times}\), so \(c_{1}=1\) in view of \(y\sim y_{1}\).
By [ADH, 10.4.3], if \(y\) in an \(H\)-asymptotic extension \(L\) of \(K\) satisfies \(y\sim 1\) and \(y^{\dagger}\in\partial\mathcal{O}_{K}\), then the asymptotic field \(K(y)\subseteq L\) is an immediate extension of \(K\), and so is any algebraic asymptotic extension of \(K(y)\).
Call \(H\)**trigonometrically closed** if for all \(\theta\prec 1\) in \(H\) there is a (necessarily unique) \(y\in K\) such that \(y\sim 1\) and \(y^{\dagger}=\theta^{\prime}i\). (By convention "trigonometrically closed" includes "real closed".) For such \(\theta\) and \(y\) we think of \(y\) as \(\mathrm{e}^{i\theta}\) and accordingly of the elements \(\frac{y+\overline{y}}{2}=\frac{y+y^{-1}}{2}\) and \(\frac{y-\overline{y}}{2i}=\frac{y-y^{-1}}{2i}\) of \(H\) as \(\cos\theta\) and \(\sin\theta\); this explains the terminology. By Lemma 1.2.19 the restrictions \(\theta\prec 1\) and \(y\sim 1\) are harmless. Our aim in this subsection is to construct a canonical trigonometric closure of \(H\).
Note that if \(\mathrm{I}(K)\subseteq K^{\dagger}\), then \(H\) is trigonometrically closed. As a partial converse, if \(\mathrm{I}(H)\subseteq H^{\dagger}\cap\partial H\) and \(H\) is trigonometrically closed, then \(\mathrm{I}(K)\subseteq K^{\dagger}\); this is an easy consequence of \(\mathrm{I}(K)=\mathrm{I}(H)+\mathrm{I}(H)i\). Thus for Liouville closed \(H\) we have:
\[H\text{ is trigonometrically closed }\Longleftrightarrow\mathrm{I}(K)\subseteq K^{\dagger}.\]
Note also that for trigonometrically closed \(H\) there is no \(y\) in any \(H\)-asymptotic extension of \(K\) such that \(y\notin K\), \(y\sim 1\), and \(y^{\dagger}\in(\partial\mathcal{O})i\).
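For a concrete sketch, suppose \(H\) is real closed with \(K=H[\mathrm{i}]\) a Hahn field \(C(\!(t^{Q})\!)\) as in the example of the first subsection; then \(\mathrm{I}(K)\subseteq K^{\dagger}\) by Lemma 1.2.10, so \(H\) is trigonometrically closed by the first remark above. The witnessing elements can even be produced explicitly: for \(\theta\prec 1\) in \(H\) the family \(\big((\mathrm{i}\theta)^{n}/n!\big)_{n}\) is summable in \(K\), and granting termwise differentiation,
\[y\ :=\ \sum_{n\geqslant 0}\frac{(\mathrm{i}\theta)^{n}}{n!}\ \sim\ 1,\qquad y^{\dagger}\ =\ \theta^{\prime}\mathrm{i},\]
so \(y\) realizes \(\mathrm{e}^{\mathrm{i}\theta}\) in this case.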
If \(H\) is Schwarz closed, then \(H\) is trigonometrically closed by the next lemma:
**Lemma 1.2.20**.: _Suppose \(H\) is Liouville closed and \(\omega(H)\) is downward closed. Then \(H\) is trigonometrically closed._
Proof.: Let \(0\neq\theta\prec 1\) in \(H\). By Lemma 1.2.19 it suffices to show that then \(\theta^{\prime}i\in K^{\dagger}\). Note that \(h:=\theta^{\prime}\in\mathrm{I}(H)^{\neq}\); we arrange \(h>0\). Now
\[f\ :=\ \omega(-h^{\dagger})+4h^{2}\ =\ \sigma(2h),\qquad 2h\in H^{>}\cap \mathrm{I}(H),\]
hence \(2h\in H^{>}\setminus\Gamma(H)\) by [ADH, 11.8.19]. So \(f\in\omega(H)^{\downarrow}=\omega(H)\) by [ADH, 11.8.31], and thus \(\dim_{C_{H}}\ker(4\partial^{2}+f)\geqslant 1\) by [ADH, p. 258]. Put \(A:=\partial^{2}-h^{\dagger}\partial+h^{2}\in H[\partial]\). The isomorphism \(y\mapsto y\sqrt{h}\colon\ker(4\partial^{2}+f)\to\ker A\) of \(C_{H}\)-linear spaces [ADH, 5.1.13] then yields an element of \(\ker^{\neq}A\) that for suggestiveness we denote by \(\cos\theta\). Put \(\sin\theta:=-(\cos\theta)^{\prime}/h\). Then
\[(\sin\theta)^{\prime} =\ -(\cos\theta)^{\prime\prime}/h+(\cos\theta)^{\prime}h^{\dagger}/h\] \[=\ \bigl{(}-h^{\dagger}(\cos\theta)^{\prime}+h^{2}\cos\theta\bigr{)} /h+(\cos\theta)^{\prime}h^{\dagger}/h\ =\ h\cos\theta\]
and thus \(y^{\dagger}=\theta^{\prime}i\) for \(y:=\cos\theta+i\sin\theta\in K^{\times}\).
If \(H\) is \(H\)-closed, then \(H\) is Schwarz closed by [ADH, 14.2.20], and thus trigonometrically closed. Using also Lemma 1.2.16 and remarks preceding it this yields:
**Corollary 1.2.21**.: _If \(H\) is \(H\)-closed, then \(\mathrm{I}(K)\subseteq K^{\dagger}=H\oplus\mathrm{I}(H)i\)._
Suppose now that \(H\) is _not_ trigonometrically closed; so we have \(\theta\prec 1\) in \(H\) with \(\theta^{\prime}i\notin K^{\dagger}\). Then [ADH, 10.4.3] provides an immediate asymptotic extension \(K(y)\) of \(K\) with \(y\sim 1\) and \(y^{\dagger}=\theta^{\prime}i\). To simplify notation and for suggestiveness we set
\[\cos\theta\ :=\ \frac{y+y^{-1}}{2},\qquad\sin\theta\ :=\ \frac{y-y^{-1}}{2i},\]
so \(y=\cos\theta+\mathrm{i}\sin\theta\) and \((\cos\theta)^{2}+(\sin\theta)^{2}=1\). Moreover \((\cos\theta)^{\prime}=-\theta^{\prime}\sin\theta\) and \((\sin\theta)^{\prime}=\theta^{\prime}\cos\theta\). It follows that \(H^{+}:=H(\cos\theta,\sin\theta)\) is a differential subfield of \(K(y)\) with \(K(y)=H^{+}[\mathrm{i}]\), and thus \(H^{+}\), as a valued differential subfield of \(K(y)\), is an asymptotic extension of \(H\).
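The displayed identities are immediate from \(y^{\dagger}=\theta^{\prime}\mathrm{i}\); for instance,
\[(\cos\theta)^{\prime}\ =\ \tfrac{1}{2}\big(y^{\prime}+(y^{-1})^{\prime}\big)\ =\ \tfrac{1}{2}\theta^{\prime}\mathrm{i}\,(y-y^{-1})\ =\ -\theta^{\prime}\sin\theta,\]
and the computation for \((\sin\theta)^{\prime}\) is similar.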
**Lemma 1.2.22**.: \(H^{+}\) _is an immediate extension of \(H\)._
Proof.: Since \((y^{-1})^{\dagger}=-\theta^{\prime}\mathrm{i}\), the uniqueness property stated in [ADH, 10.4.3] allows us to extend the complex conjugation automorphism of \(K\) (which is the identity on \(H\) and sends \(i\) to \(-i\)) to an automorphism \(\sigma\) of the valued differential field \(K(y)\) such that \(\sigma(y)=y^{-1}\). Then \(\sigma(\cos\theta)=\cos\theta\) and \(\sigma(\sin\theta)=\sin\theta\), so \(H^{+}=\mathrm{Fix}(\sigma)\). Let \(\boldsymbol{k}\) be the residue field of \(H\); so \(\boldsymbol{k}[\mathrm{res}\,\mathrm{i}]\) is the residue field of \(K\) and of its immediate extension \(K(y)\). Now \(\sigma(\mathcal{O}_{K(y)})=\mathcal{O}_{K(y)}\), so \(\sigma\) induces an automorphism of this residue field \(\boldsymbol{k}[\mathrm{res}\,\mathrm{i}]\) which is the identity on \(\boldsymbol{k}\) and sends \(\mathrm{res}\,\mathrm{i}\) to \(-\,\mathrm{res}\,\mathrm{i}\). Hence \(\mathrm{res}\,\mathrm{i}\) does not lie in the residue field of \(H^{+}\), so this residue field is just \(\boldsymbol{k}\).
Equip \(H^{+}\) with the unique field ordering making it an ordered field extension of \(H\) in which \(\mathcal{O}_{H^{+}}\) is convex; see [ADH, 10.5.8]. Then \(H^{+}\) is an \(H\)-field, and its real closure is an immediate real closed \(H\)-field extension of \(H\).
**Lemma 1.2.23**.: _The \(H\)-field \(H^{+}\) embeds uniquely over \(H\) into any trigonometrically closed \(H\)-field extension of \(H\)._
Proof.: Let \(H^{*}\) be a trigonometrically closed \(H\)-field extension of \(H\). Take the unique \(z\sim 1\) in \(H^{*}\) such that \(z^{\dagger}=\theta^{\prime}\mathrm{i}\). Then any \(H\)-field embedding \(H^{+}\to H^{*}\) over \(H\) extends to a valued differential field embedding \(H^{+}[\mathrm{i}]=K(y)\to H^{*}[\mathrm{i}]\) sending \(\mathrm{i}\in K\) to \(\mathrm{i}\in H^{*}[\mathrm{i}]\), and this extension must send \(y\) to \(z\). Hence there is at most one \(H\)-field embedding \(H^{+}\to H^{*}\) over \(H\). For the existence of such an embedding, the uniqueness properties from [ADH, 10.4.3] yield a valued differential field embedding \(K(y)\to H^{*}[\mathrm{i}]\) over \(H\) sending \(\mathrm{i}\in K\) to \(\mathrm{i}\in H^{*}[\mathrm{i}]\) and \(y\) to \(z\). This embedding maps \(H^{+}\) into \(H^{*}\). The uniqueness property of the ordering on \(H^{+}\) shows that this embedding restricts to an \(H\)-field embedding \(H^{+}\to H^{*}\).
By iterating the extension step that leads from \(H\) to \(H^{+}\), alternating it with taking real closures, and taking unions at limit stages we obtain:
**Proposition 1.2.24**.: \(H\) _has a trigonometrically closed \(H\)-field extension \(H^{\mathrm{trig}}\) that embeds uniquely over \(H\) into any trigonometrically closed \(H\)-field extension of \(H\)._
This is an easy consequence of Lemma 1.2.23. Note that the universal property stated in Proposition 1.2.24 determines \(H^{\mathrm{trig}}\) up-to-unique-isomorphism of \(H\)-fields over \(H\). We refer to such \(H^{\mathrm{trig}}\) as the **trigonometric closure** of \(H\). Note that \(H^{\mathrm{trig}}\) is an immediate extension of \(H\), by Lemma 1.2.22, and that \(H^{\mathrm{trig}}[\mathrm{i}]\) is a Liouville extension of \(K\) and thus of \(H\).
A _trigonometric extension_ of \(H\) is a real closed \(H\)-field extension \(E\) of \(H\) such that for all \(a\in E\) there are real closed \(H\)-subfields \(H_{0}\subseteq H_{1}\subseteq\dots\subseteq H_{n}\) of \(E\) such that
1. \(H_{0}=H\) and \(a\in H_{n}\);
2. for \(j=0,\dots,n-1\) there are \(\theta_{j}\in H_{j}\) and \(y_{j}\in H_{j+1}[\mathrm{i}]\subseteq E[\mathrm{i}]\) such that \(y_{j}\sim 1\), \(\theta^{\prime}_{j}\mathrm{i}=y^{\dagger}_{j}\), and \(H_{j+1}[\mathrm{i}]\) is algebraic over \(H_{j}[\mathrm{i}](y_{j})\).
If \(E\) is a trigonometric extension of \(H\), then \(E\) is an immediate extension of \(H\) and \(E[\mathrm{i}]\) is an immediate Liouville extension of \(K\) and thus of \(H\). The next lemma states some further easy consequences of the definition above:
**Lemma 1.2.25**.: _If \(E\) is a trigonometric extension of \(H\), then \(E\) is a trigonometric extension of any real closed \(H\)-subfield \(F\supseteq H\) of \(E\). If \(H\) is trigonometrically closed, then \(H\) has no proper trigonometric extension._
Induction on \(m\) shows that if \(E\) is a trigonometric extension of \(H\), then for any \(a_{1},\ldots,a_{m}\in E\) there are real closed \(H\)-subfields \(H_{0}\subseteq H_{1}\subseteq\cdots\subseteq H_{n}\) of \(E\) such that \(H_{0}=H\), \(a_{1},\ldots,a_{m}\in H_{n}\) and (2) above holds. This helps in proving:
**Corollary 1.2.26**.: _A trigonometric extension of a trigonometric extension of \(H\) is a trigonometric extension of \(H\), and \(H^{\operatorname{trig}}\) is a trigonometric extension of \(H\)._
**Asymptotic fields of Hardy type.** Let \((\Gamma,\psi)\) be an asymptotic couple, \(\Psi:=\psi(\Gamma^{\neq})\), and let \(\gamma\), \(\delta\) range over \(\Gamma\). Recall that \([\gamma]\) denotes the archimedean class of \(\gamma\) [ADH, 2.4]. Following [169, Section 3] we say that \((\Gamma,\psi)\) is of **Hardy type** if for all \(\gamma,\delta\neq 0\) we have \([\gamma]\leqslant[\delta]\Longleftrightarrow\psi(\gamma)\geqslant\psi(\delta)\). Note that then \((\Gamma,\psi)\) is of \(H\)-type, and \(\psi\) induces an order-reversing bijection \([\Gamma^{\neq}]\to\Psi\). If \(\Gamma\) is archimedean, then \((\Gamma,\psi)\) is of Hardy type. If \((\Gamma,\psi)\) is of Hardy type, then so is \((\Gamma,\psi+\delta)\) for each \(\delta\). We also say that an asymptotic field is of Hardy type if its asymptotic couple is. Every asymptotic subfield and every compositional conjugate of an asymptotic field of Hardy type is also of Hardy type. Moreover, every Hardy field is of Hardy type [ADH, 9.1.11]. Let now \(\Delta\) be a convex subgroup of \(\Gamma\). Note that \(\Delta\) contains the archimedean class \([\delta]\) of each \(\delta\in\Delta\). Hence, if \(\delta\in\Delta^{\neq}\) and \(\gamma\notin\Delta\), then \([\delta]<[\gamma]\) and thus:
**Lemma 1.2.27**.: _If \((\Gamma,\psi)\) is of Hardy type and \(\gamma\notin\Delta\), \(\delta\in\Delta^{\neq}\), then \(\psi(\gamma)<\psi(\delta)\)._
**Corollary 1.2.28**.: _Suppose \((\Gamma,\psi)\) is of Hardy type with small derivation, \(\gamma,\delta\neq 0\), \(\psi(\delta)\leqslant 0\), and \([\gamma^{\prime}]>[\delta]\). Then \(\psi(\gamma)<\psi(\delta)\)._
Proof.: Let \(\Delta\) be the smallest convex subgroup of \(\Gamma\) with \(\delta\in\Delta\); then \(\gamma^{\prime}\notin\Delta\), and \(\psi(\delta)\in\Delta\) by [ADH, 9.2.10(iv)]. Thus \(\gamma\notin\Delta\) by [ADH, 9.2.25], and so \(\psi(\gamma)<\psi(\delta)\) by Lemma 1.2.27.
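For orientation, a concrete instance of the Hardy type condition: in any Hardy field containing the germs \(x\) and \(\mathrm{e}^{x}\), put \(\gamma:=v(x)\) and \(\delta:=v(\mathrm{e}^{x})\); then \([\gamma]<[\delta]\), and correspondingly
\[\psi(\gamma)\ =\ v(x^{\dagger})\ =\ v(1/x)\ >\ 0\ =\ v\big((\mathrm{e}^{x})^{\dagger}\big)\ =\ \psi(\delta).\]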
In [7, Section 7] we say that an \(H\)-field \(H\) is _closed under powers_ if for all \(c\in C\) and \(f\in H^{\times}\) there is a \(y\in H^{\times}\) with \(y^{\dagger}=cf^{\dagger}\). (Think of \(y\) as \(f^{c}\).) Thus if \(H\) is Liouville closed, then \(H\) is closed under powers. _In the rest of this subsection we let \(H\) be an \(H\)-field closed under powers, with asymptotic couple \((\Gamma,\psi)\) and constant field \(C\)._ We recall some basic facts from [7, Section 7]. First, we can make the value group \(\Gamma\) into an ordered vector space over the constant field \(C\):
**Lemma 1.2.29**.: _For all \(c\in C\) and \(\gamma=vf\) with \(f\in H^{\times}\) and each \(y\in H^{\times}\) with \(y^{\dagger}=cf^{\dagger}\), the element \(vy\in\Gamma\) only depends on \((c,\gamma)\)_(_not on the choice of \(f\) and \(y\)_)_, and is denoted by \(c\cdot\gamma\). The scalar multiplication \((c,\gamma)\mapsto c\cdot\gamma\colon C\times\Gamma\to\Gamma\) makes \(\Gamma\) into an ordered vector space over the ordered field \(C\)._
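For instance, suppose \(H\supseteq\mathbb{R}(x)\) is a Hardy field closed under powers (so \(C=\mathbb{R}\)). For \(c\in\mathbb{R}\), any \(y\in H^{\times}\) with \(y^{\dagger}=c\,x^{\dagger}=c/x\) is a real multiple of the germ \(x^{c}\), so
\[c\cdot v(x)\ =\ v(y)\ =\ v(x^{c}),\]
and the scalar multiplication on \(\Gamma\) is the expected one.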
Let \(G\) be an ordered vector space over the ordered field \(C\). From [ADH, 2.4] recall that the \(C\)-archimedean class of \(a\in G\) is defined as
\[[a]_{C}:=\big{\{}b\in G:\,\tfrac{1}{c}|a|\leqslant|b|\leqslant c|a|\text{ for some }c\in C^{>}\big{\}}.\]
Thus if \(C=\mathbb{Q}\), then \([a]_{\mathbb{Q}}\) is just the archimedean class \([a]\) of \(a\in G\). Moreover, if \(C^{*}\) is an ordered subfield of \(C\), then \([a]_{C^{*}}\subseteq[a]_{C}\) for each \(a\in G\), with equality if \(C^{*}\) is cofinal in \(C\). Hence if \(C\) is archimedean, then \([a]=[a]_{C}\) for all \(a\in G\). Put \([G]_{C}:=\big{\{}[a]_{C}:a\in G\big{\}}\) and linearly order \([G]_{C}\) by
\[[a]_{C}<[b]_{C}\quad:\Longleftrightarrow\quad[a]_{C}\neq[b]_{C}\text{ and }|a|<|b|.\]
Thus \([G]_{C}\) has smallest element \([0]_{C}=\{0\}\). We also set \([G^{\neq}]_{C}:=[G]_{C}\setminus\{[0]_{C}\}\). From [7, Proposition 7.5] we have:
**Proposition 1.2.30**.: _For all \(\gamma,\delta\neq 0\) we have_
\[[\gamma]_{C}\leqslant[\delta]_{C}\quad\Longleftrightarrow\quad\psi(\gamma) \geqslant\psi(\delta).\]
_Hence \(\psi\) induces an order-reversing bijection \([\Gamma^{\neq}]_{C}\to\Psi=\psi(\Gamma^{\neq})\)._
Proposition 1.2.30 yields:
**Corollary 1.2.31**.: \(H\) _is of Hardy type \(\Longleftrightarrow[\gamma]=[\gamma]_{C}\) for all \(\gamma\). Hence if \(C\) is archimedean, then \(H\) is of Hardy type; if \(\Gamma\neq\{0\}\), then the converse also holds._
### 1.3. The Valuation of Differential Polynomials at Infinity\((^{*})\)
Our goal in this work is to solve certain kinds of algebraic differential equations in Hardy fields. In this section we review some general facts about the asymptotic behavior of solutions of algebraic differential equations in \(H\)-asymptotic fields. We will not need these results in order to achieve our main objective, but they will be used at a few points for applications and corollaries; see Section 5.4 and Corollary 7.1.20. _Throughout this section \(K\) is an \(H\)-asymptotic field, and \(f\), \(g\) range over \(K\)._
**Iterated logarithmic derivatives.** Let \((\Gamma,\psi)\) be an \(H\)-asymptotic couple. As usual we introduce a new symbol \(\infty\notin\Gamma\), extend the ordering of \(\Gamma\) to an ordering on \(\Gamma_{\infty}=\Gamma\cup\{\infty\}\) such that \(\infty>\Gamma\), and extend \(\psi\colon\Gamma^{\neq}\to\Gamma\) to a map \(\Gamma_{\infty}\to\Gamma_{\infty}\) by setting \(\psi(0):=\psi(\infty):=\infty\). (See [ADH, 6.5].) We let \(\gamma\) range over \(\Gamma\), and we define \(\gamma^{\langle n\rangle}\in\Gamma_{\infty}\) inductively by \(\gamma^{\langle 0\rangle}:=\gamma\) and \(\gamma^{\langle n+1\rangle}:=\psi(\gamma^{\langle n\rangle})\). The following is [5, Lemma 5.2]; for the convenience of the reader we include a proof:
**Lemma 1.3.1**.: _Suppose that \(0\in(\Gamma^{<})^{\prime}\), \(\gamma\neq 0\), and \(n\geqslant 1\). If \(\gamma^{\langle n\rangle}<0\), then \(\gamma^{\langle i\rangle}<0\) for \(i=1,\ldots,n\) and \([\gamma]>[\gamma^{\langle 1\rangle}]>\cdots>[\gamma^{\langle n-1\rangle}]>[\gamma^{\langle n\rangle}]\)._
Proof.: By [ADH, 9.2.9], \((\Gamma,\psi)\) has small derivation, hence the case \(n=1\) follows from [ADH, 9.2.10(iv)]. Assume inductively that the lemma holds for a certain value of \(n\geqslant 1\), and suppose \(\gamma^{\langle n+1\rangle}<0\). Then \(\gamma^{\langle n\rangle}\neq 0\), so we can apply the case \(n=1\) to \(\gamma^{\langle n\rangle}\) instead of \(\gamma\) and get \([\gamma^{\langle n\rangle}]>[\gamma^{\langle n+1\rangle}]\). By the inductive assumption the remaining inequalities will follow from \(\gamma^{\langle n\rangle}<0\). From \(0\in(\Gamma^{<})^{\prime}\) we obtain an element \(1\) of \(\Gamma^{>}\) with \(0=(-1)^{\prime}=-1+1^{\dagger}\). Suppose \(\gamma^{\langle n\rangle}\geqslant 0\). Then \(\gamma^{\langle n\rangle}\in\Psi\), thus \(0<\gamma^{\langle n\rangle}<1+1^{\dagger}=1+1\) and so \([\gamma^{\langle n\rangle}]\leqslant[1]\). Hence \(0>\gamma^{\langle n+1\rangle}\geqslant 1^{\dagger}=1\), a contradiction.
Suppose now that \((\Gamma,\psi)\) is the asymptotic couple of \(K\). If \(y\in K^{\times}\) and \((vy)^{\langle n\rangle}\neq\infty\), then the \(n\)th iterated logarithmic derivative \(y^{\langle n\rangle}\) of \(y\) is defined (see [ADH, 4.2]), and \(v(y^{\langle n\rangle})=(vy)^{\langle n\rangle}\in\Gamma\). Recall from [ADH, p. 383] that for \(f,g\neq 0\),
\[f\prec\!\!\prec g\ :\Longleftrightarrow\ f^{\dagger}\prec g^{\dagger},\quad f\preccurlyeq\!\!\preccurlyeq g\ :\Longleftrightarrow\ f^{\dagger}\preccurlyeq g^{\dagger},\quad f\asymp\!\!\asymp g\ :\Longleftrightarrow\ f^{\dagger}\asymp g^{\dagger},\]
hence, assuming also \(f,g\not\asymp 1\),
\[f\prec\!\!\prec g\ \Longrightarrow\ [vf]<[vg],\qquad[vf]\leqslant[vg]\ \Longrightarrow\ f\preccurlyeq\!\!\preccurlyeq g.\]
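For example, in a Hardy field containing \(x\), \(\mathrm{e}^{x}\), and \(\mathrm{e}^{x^{2}}\):
\[x\prec\!\!\prec\mathrm{e}^{x}\prec\!\!\prec\mathrm{e}^{x^{2}}\qquad\text{since}\qquad x^{\dagger}=\tfrac{1}{x}\prec 1=(\mathrm{e}^{x})^{\dagger}\prec 2x=(\mathrm{e}^{x^{2}})^{\dagger},\]
while \(\mathrm{e}^{x}\asymp\!\!\asymp\mathrm{e}^{2x}\), since \((\mathrm{e}^{2x})^{\dagger}=2\asymp 1=(\mathrm{e}^{x})^{\dagger}\).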
_In the rest of this section we are given \(x\succ 1\) in \(K\) with \(x^{\prime}\asymp 1\)._ Then \(0\in(\Gamma^{<})^{\prime}\), so from the previous lemma we obtain:
**Corollary 1.3.2**.: _If \(y\in K^{\times}\), \(y\not\asymp 1\), \(n\geqslant 1\), and \((vy)^{\langle n\rangle}<0\), then \(y^{\langle i\rangle}\succ 1\) for \(i=1,\ldots,n\) and \([vy]>\big[v(y^{\dagger})\big]>\cdots>\big[v(y^{\langle n-1\rangle})\big]>\big[v(y^{\langle n\rangle})\big]\)._
Let \(\boldsymbol{i}=(i_{0},\ldots,i_{n})\in\mathbb{Z}^{1+n}\) and \(y\in K^{\times}\) be such that \(y^{\langle n\rangle}\) is defined; we put
\[y^{\langle\boldsymbol{i}\rangle}\;:=\;(y^{\langle 0\rangle})^{i_{0}}\cdots(y^{ \langle n\rangle})^{i_{n}}\in K.\]
If \(y^{\langle n\rangle}\neq 0\), then \(\boldsymbol{i}\mapsto y^{\langle\boldsymbol{i}\rangle}\colon\mathbb{Z}^{1+n} \to K^{\times}\) is a group morphism. Suppose now that \(y\in K^{\times}\), \((vy)^{\langle n\rangle}<0\), and \(\boldsymbol{i}=(i_{0},\ldots,i_{n})\in\mathbb{Z}^{1+n}\), \(\boldsymbol{i}\neq 0\), and \(m\in\{0,\ldots,n\}\) is minimal with \(i_{m}\neq 0\). Then by Corollary 1.3.2, \(\big{[}v(y^{\langle\boldsymbol{i}\rangle})\big{]}=\big{[}v(y^{\langle m \rangle})\big{]}\). Thus if \(y\succ 1\), we have the equivalence \(y^{\langle\boldsymbol{i}\rangle}\succ 1\;\Leftrightarrow\;i_{m}\geqslant 1\). If \(K\) is equipped with an ordering making it a pre-\(H\)-field and \(y\succ 1\), then \(y^{\dagger}>0\), so \(y^{\langle i\rangle}>0\) for \(i=1,\ldots,n\), and thus \(\operatorname{sign}y^{\langle\boldsymbol{i}\rangle}=\operatorname{sign}y^{i_{0}}\).
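As an illustration of these conventions, suppose \(K\) is a Hardy field containing \(x\) and \(y=\mathrm{e}^{\mathrm{e}^{x}}\). Then \(y^{\langle 1\rangle}=y^{\dagger}=\mathrm{e}^{x}\) and \((vy)^{\langle 1\rangle}=v(\mathrm{e}^{x})<0\), so Corollary 1.3.2 gives \([vy]>\big[v(y^{\dagger})\big]\); and for \(\boldsymbol{i}=(1,-2)\) (so \(m=0\)),
\[y^{\langle\boldsymbol{i}\rangle}\ =\ y\,(y^{\dagger})^{-2}\ =\ \mathrm{e}^{\mathrm{e}^{x}-2x}\ \succ\ 1,\qquad\big[v(y^{\langle\boldsymbol{i}\rangle})\big]\ =\ [vy],\]
as the remarks above predict.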
**Iterated exponentials.** _In this subsection we assume that \(\Psi\) is downward closed._
For \(f\succ 1\) we have \(f^{\prime}\succ f^{\dagger}\), so we can and do choose \(\operatorname{E}(f)\in K^{\times}\) such that \(\operatorname{E}(f)\succ 1\) and \(\operatorname{E}(f)^{\dagger}\asymp f^{\prime}\), hence \(f\prec\operatorname{E}(f)\) and \(f\prec\!\!\prec\operatorname{E}(f)\). Moreover, if \(f,g\succ 1\), then
\[f\prec g\quad\Longleftrightarrow\quad\operatorname{E}(f)\prec\!\!\prec\operatorname{E}(g).\]
For \(f\succ 1\) define \(\operatorname{E}_{n}(f)\in K^{\succ 1}\) inductively by
\[\operatorname{E}_{0}(f)\;:=\;f,\qquad\operatorname{E}_{n+1}(f)\;:=\; \operatorname{E}\bigl{(}\operatorname{E}_{n}(f)\bigr{)},\]
and thus by induction
\[\operatorname{E}_{n}(f)\;\prec\;\operatorname{E}_{n+1}(f)\quad\text{ and }\quad\operatorname{E}_{n}(f)\;\prec\!\!\prec\;\operatorname{E}_{n+1}(f)\qquad\text{ for all }n.\]
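If \(K\) is, say, a Hardy field closed under exponentiation, one admissible choice is \(\operatorname{E}(f):=\exp(f)\) for \(f\succ 1\) with \(f>0\): then \(\exp(f)\succ 1\) and
\[\operatorname{E}(f)^{\dagger}\ =\ \exp(f)^{\dagger}\ =\ f^{\prime},\]
so \(\operatorname{E}_{n}(f)\) can be taken to be the \(n\)-fold iterated exponential of \(f\).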
_In the rest of this subsection \(f\succeq x\), and \(y\) ranges over elements of \(H\)-asymptotic extensions of \(K\)._ The proof of the next lemma is like that of [7, Lemma 1.3(2)].
**Lemma 1.3.3**.: _If \(y\succeq\operatorname{E}_{n+1}(f)\), \(n\geqslant 1\), then \(y\neq 0\) and \(y^{\dagger}\succeq\operatorname{E}_{n}(f)\)._
Proof.: If \(y\succeq\operatorname{E}_{2}(f)\), then \(y\neq 0\), and using \(\operatorname{E}_{2}(f)\succ 1\) we obtain
\[y^{\dagger}\;\succeq\;\operatorname{E}_{2}(f)^{\dagger}\;\asymp\;\operatorname{E}(f)^{\prime}\;=\;\operatorname{E}(f)\operatorname{E}(f)^{\dagger}\;\asymp\;\operatorname{E}(f)f^{\prime}\;\succeq\;\operatorname{E}(f).\]
Thus the lemma holds for \(n=1\). In general, \(\operatorname{E}_{n-1}(f)\succcurlyeq f\succeq x\), hence the lemma follows from the case \(n=1\) applied to \(\operatorname{E}_{n-1}(f)\) in place of \(f\).
An obvious induction on \(n\) using Lemma 1.3.3 shows: if \(y\succeq\operatorname{E}_{n}(f)\), then \((vy)^{\langle n\rangle}\leqslant vf<0\). We shall use this fact without further reference.
**Lemma 1.3.4**.: _If \(y\succeq\operatorname{E}_{n+1}(f)\), then \(y^{\langle n\rangle}\) is defined and \(y^{\langle n\rangle}\succeq\operatorname{E}(f)\)._
Proof.: First note that if \(y\neq 0\), \(n\geqslant 1\), and \((y^{\dagger})^{\langle n-1\rangle}\) is defined, then \(y^{\langle n\rangle}\) is defined and \(y^{\langle n\rangle}=(y^{\dagger})^{\langle n-1\rangle}\). Now use induction on \(n\) and Lemma 1.3.3.
**Lemma 1.3.5**.: _If \(y\succeq\operatorname{E}_{n}(f^{2})\), then \(y^{\langle n\rangle}\) is defined and \(y^{\langle n\rangle}\succcurlyeq f\), with \(y^{\langle n\rangle}\succ f\) if \(f\succ x\)._
Proof.: This is clear if \(n=0\), so suppose \(y\succeq\operatorname{E}_{n+1}(f^{2})\). Then by Lemma 1.3.4 (applied with \(f^{2}\) in place of \(f\)) we have \(y^{\langle n\rangle}\succeq\operatorname{E}(f^{2})\succ 1\), so
\[y^{\langle n+1\rangle}\;=\;(y^{\langle n\rangle})^{\dagger}\;\succeq\;\operatorname{E}(f^{2})^{\dagger}\;\asymp\;(f^{2})^{\prime}\;=\;2ff^{\prime}\;\succcurlyeq\;f,\]
with \(y^{\langle n+1\rangle}\succ f\) if \(f\succ x\), as required.
**Corollary 1.3.6**.: _Suppose \(y\succeq\operatorname{E}_{n}(f^{2})\), and let \(\boldsymbol{i}\in\mathbb{Z}^{1+n}\) be such that \(\boldsymbol{i}>0\) lexicographically. Then \(y^{\langle n\rangle}\) is defined and \(y^{\langle\boldsymbol{i}\rangle}\succcurlyeq f\), with \(y^{\langle\boldsymbol{i}\rangle}\succ f\) if \(f\succ x\)._
Proof.: By Lemma 1.3.5, \(y^{\langle n\rangle}\) is defined with \(y^{\langle n\rangle}\succcurlyeq f\), and \(y^{\langle n\rangle}\succ f\) if \(f\succ x\). Let \(m\in\{0,\ldots,n\}\) be minimal such that \(i_{m}\neq 0\); so \(i_{m}\geqslant 1\). If \(m=n\) then \(y^{\langle\mathbf{i}\rangle}=(y^{\langle n\rangle})^{i_{n}}\succcurlyeq y^{\langle n\rangle}\), hence \(y^{\langle\mathbf{i}\rangle}\succcurlyeq f\), with \(y^{\langle\mathbf{i}\rangle}\succ f\) if \(f\succ x\). Suppose \(m<n\). Then \(y\succcurlyeq\mathrm{E}_{m+1}(f^{2})\) and hence \(y^{\langle m\rangle}\succcurlyeq\mathrm{E}(f^{2})\) by Lemma 1.3.4. Also \(f\asymp\!\!\asymp f^{2}\prec\!\!\prec\mathrm{E}(f^{2})\), thus \(f\prec\!\!\prec y^{\langle m\rangle}\) and so \([vf]<\big[v(y^{\langle m\rangle})\big]\). The remarks following Corollary 1.3.2 now yield \(y^{\langle\mathbf{i}\rangle}\succ f\).
**Asymptotic behavior of \(P(y)\) for large \(y\).** In this subsection \(\mathbf{i}\), \(\mathbf{j}\), \(\mathbf{k}\) range over \(\mathbb{N}^{1+n}\). Let \(P_{\langle\mathbf{i}\rangle}\in K\) be such that \(P_{\langle\mathbf{i}\rangle}=0\) for all but finitely many \(\mathbf{i}\) and \(P_{\langle\mathbf{i}\rangle}\neq 0\) for some \(\mathbf{i}\), and set \(P:=\sum_{\mathbf{i}}P_{\langle\mathbf{i}\rangle}Y^{\langle\mathbf{i}\rangle}\in K\langle Y\rangle\). So if \(P\in K\{Y\}\), then \(P=\sum_{\mathbf{i}}P_{\langle\mathbf{i}\rangle}Y^{\langle\mathbf{i}\rangle}\) is the logarithmic decomposition of the differential polynomial \(P\) as defined in [ADH, 4.2]. If \(y\) is an element in a differential field extension \(L\) of \(K\) such that \(y^{\langle n\rangle}\) is defined, then we put \(P(y):=\sum_{\mathbf{i}}P_{\langle\mathbf{i}\rangle}y^{\langle\mathbf{i}\rangle}\in L\) (and for \(P\in K\{Y\}\) this has the usual value). Let \(\mathbf{j}\) be lexicographically maximal such that \(P_{\langle\mathbf{j}\rangle}\neq 0\), and choose \(\mathbf{k}\) so that \(P_{\langle\mathbf{k}\rangle}\) has minimal valuation. If \(P_{\langle\mathbf{k}\rangle}/P_{\langle\mathbf{j}\rangle}\succ x\), set \(f:=P_{\langle\mathbf{k}\rangle}/P_{\langle\mathbf{j}\rangle}\); otherwise set \(f:=x^{2}\). Then \(f\succ x\) and \(f\succcurlyeq P_{\langle\mathbf{i}\rangle}/P_{\langle\mathbf{j}\rangle}\) for all \(\mathbf{i}\). The following is a more precise version of [ADH, 16.6.10] and [103, (8.8)]:
**Proposition 1.3.7**.: _Suppose \(\Psi\) is downward closed, and \(y\) in an \(H\)-asymptotic extension of \(K\) satisfies \(y\succcurlyeq\mathrm{E}_{n}(f^{2})\). Then \(y^{\langle n\rangle}\) is defined and \(P(y)\sim P_{\langle\mathbf{j}\rangle}y^{\langle\mathbf{j}\rangle}\)._
Proof.: Let \(\mathbf{i}<\mathbf{j}\). We have \(f\succ x\), so \(y^{\langle\mathbf{j}-\mathbf{i}\rangle}\succ f\succcurlyeq P_{\langle\mathbf{i}\rangle}/P_ {\langle\mathbf{j}\rangle}\) by Corollary 1.3.6. Hence \(P_{\langle\mathbf{j}\rangle}y^{\langle\mathbf{j}\rangle}\succ P_{\langle\mathbf{i}\rangle} y^{\langle\mathbf{i}\rangle}\).
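As a sketch of how Proposition 1.3.7 applies, take \(n=1\) and \(P=Y^{\prime}-Y^{2}\in K\{Y\}\). Since \(Y^{\prime}=Y\cdot Y^{\dagger}\), the logarithmic decomposition is
\[P\ =\ Y^{\langle(1,1)\rangle}-Y^{\langle(2,0)\rangle},\]
so \(\mathbf{j}=(2,0)\); both nonzero coefficients are \(\asymp 1\), hence \(f=x^{2}\). Thus (for \(\Psi\) downward closed) \(P(y)\sim-y^{2}\) whenever \(y\succcurlyeq\operatorname{E}_{1}(x^{4})\) in an \(H\)-asymptotic extension of \(K\).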
From Corollary 1.3.2, Lemma 1.3.5, and Proposition 1.3.7 we obtain:
**Corollary 1.3.8**.: _Suppose \(\Psi\) is downward closed and \(y\) in an \(H\)-asymptotic extension of \(K\) satisfies \(y\succ K\). Then \(y\) is \(\mathrm{d}\)-transcendental over \(K\), and for all \(n\), \(y^{\langle n\rangle}\) is defined, \(y^{\langle n\rangle}\succ K\), and \(y^{\langle n+1\rangle}\prec\!\!\prec y^{\langle n\rangle}\). The \(H\)-asymptotic extension \(K\langle y\rangle\) of \(K\) has residue field \(\operatorname{res}K\langle y\rangle=\operatorname{res}K\) and value group \(\Gamma_{K\langle y\rangle}=\Gamma\oplus\bigoplus_{n}\mathbb{Z}v(y^{\langle n\rangle})\)\ \((\)internal direct sum\()\), and \(\Gamma_{K\langle y\rangle}\) contains \(\Gamma\) as a convex subgroup._
Suppose now that \(K\) is equipped with an ordering making it a pre-\(H\)-field. From Proposition 1.3.7 we recover [7, Theorem 3.4] in slightly stronger form:
**Corollary 1.3.9**.: _Suppose \(y\) lies in a Liouville closed \(H\)-field extension of \(K\). If \(y\succcurlyeq\mathrm{E}_{n}(f^{2})\), then \(y^{\langle n\rangle}\) is defined and \(\operatorname{sign}P(y)=\operatorname{sign}P_{\langle\mathbf{j}\rangle}y^{j_{0}}\). In particular, if \(y^{\langle n\rangle}\) is defined and \(P(y)=0\), then \(y\prec\mathrm{E}_{n}(f^{2})\)._
_Example._ Suppose \(P\in K\{Y\}\). Using [ADH, 4.2, subsection on logarithmic decomposition] we obtain \(j_{0}=\deg P\), and the logarithmic decomposition
\[P(-Y)\ =\ \sum_{\mathbf{i}}P_{\langle\mathbf{i}\rangle}(-1)^{i_{0}}Y^{\langle\mathbf{i} \rangle}.\]
If \(\deg P\) is odd, and \(y>0\) lies in a Liouville closed \(H\)-field extension of \(K\) such that \(y\succcurlyeq\mathrm{E}_{n}(f^{2})\), then
\[\operatorname{sign}P(y)\ =\ \operatorname{sign}P_{\langle\mathbf{j}\rangle},\qquad \operatorname{sign}P(-y)\ =\ -\operatorname{sign}P_{\langle\mathbf{j}\rangle}\ =\ -\operatorname{sign}P(y).\]
### 1.4. \(\lambda\)-freeness and \(\omega\)-freeness
This section contains preservation results for the important properties of \(\lambda\)-_freeness_ and \(\omega\)-_freeness_ from [ADH]. Let \(K\) be an ungrounded \(H\)-asymptotic field such that \(\Gamma\neq\{0\}\), and as in [ADH, 11.5], fix a logarithmic sequence \((\ell_{\rho})\) for \(K\) and define the pc-sequences \((\lambda_{\rho})=(-\ell_{\rho}^{\dagger\dagger})\) and \((\omega_{\rho})=(\omega(\lambda_{\rho}))\) in \(K\), where \(\omega(z):=-2z^{\prime}-z^{2}\).
Recall that \(K\) is \(\lambda\)-_free_ iff \((\lambda_{\rho})\) does not have a pseudolimit in \(K\), and \(K\) is \(\omega\)-_free_ iff \((\omega_{\rho})\) does not have a pseudolimit in \(K\). If \(K\) is \(\omega\)-free, then \(K\) is \(\lambda\)-free. We refer to [ADH, 11.6, 11.7] for this and other basic facts about \(\lambda\)-freeness and \(\omega\)-freeness used below. (For \(\omega\)-free Hardy fields, see also Section 5.6.) As in [ADH], \(L\) being \(\lambda\)-free or \(\omega\)-free includes \(L\) being an ungrounded \(H\)-asymptotic field with \(\Gamma_{L}\neq\{0\}\).
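For orientation: if \(K\) is a Hardy field containing all iterated logarithms \(\ell_{0}=x\), \(\ell_{n+1}=\log\ell_{n}\), and these form a logarithmic sequence for \(K\) (as happens, e.g., for \(K=\mathbb{R}(x,\log x,\log\log x,\ldots)\)), then a routine computation using \(\ell_{n}^{\dagger}=1/(\ell_{0}\ell_{1}\cdots\ell_{n})\) gives
\[\lambda_{n}\ =\ -\ell_{n}^{\dagger\dagger}\ =\ \frac{1}{\ell_{0}}+\frac{1}{\ell_{0}\ell_{1}}+\cdots+\frac{1}{\ell_{0}\ell_{1}\cdots\ell_{n}}.\]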
**Preserving \(\lambda\)-freeness and \(\omega\)-freeness.**_In this subsection \(K\) is an ungrounded \(H\)-asymptotic field with \(\Gamma\neq\{0\}\), and \((\ell_{\rho})\), \((\lambda_{\rho})\), \((\omega_{\rho})\) are as above._ If \(K\) has a \(\lambda\)-free \(H\)-asymptotic field extension \(L\) such that \(\Gamma^{<}\) is cofinal in \(\Gamma^{<}_{L}\), then \(K\) is \(\lambda\)-free, and similarly with "\(\omega\)-free" in place of "\(\lambda\)-free" [ADH, remarks after 11.6.4, 11.7.19]. The property of \(\omega\)-freeness is very robust; indeed, by [ADH, 13.6.1]:
**Theorem 1.4.1**.: _If \(K\) is \(\omega\)-free and \(L\) is a pre-\(\mathrm{d}\)-valued \(\mathrm{d}\)-algebraic \(H\)-asymptotic extension of \(K\), then \(L\) is \(\omega\)-free and \(\Gamma^{<}\) is cofinal in \(\Gamma^{<}_{L}\)._
In contrast, \(\lambda\)-freeness is more delicate: Theorem 1.4.1 fails with "\(\lambda\)-free" in place of "\(\omega\)-free", as the next example shows.
_Example 1.4.2_.: The \(H\)-field \(K=\mathbb{R}\langle\omega\rangle\) from [ADH, 13.9.1] is \(\lambda\)-free, but its \(H\)-field extension \(L=\mathbb{R}\langle\lambda\rangle\) is not, and this extension is \(\mathrm{d}\)-algebraic: \(2\lambda^{\prime}+\lambda^{2}+\omega=0\).
In the rest of this subsection we consider cases where parts of Theorem 1.4.1 do hold. Recall from [ADH, 11.6.8] that if \(K\) is \(\lambda\)-free, then \(K\) has (rational) asymptotic integration, and \(K\) is \(\lambda\)-free iff its algebraic closure is \(\lambda\)-free. Moreover, \(\lambda\)-freeness is preserved under adjunction of constants:
**Proposition 1.4.3**.: _Suppose \(K\) is \(\lambda\)-free and \(L=K(D)\) is an \(H\)-asymptotic extension of \(K\) with \(D\supseteq C\) a subfield of \(C_{L}\). Then \(L\) is \(\lambda\)-free with \(\Gamma_{L}=\Gamma\)._
We are going to deduce this from the next three lemmas. Recall that \(K\) is pre-\(\mathrm{d}\)-valued, by [ADH, 10.1.3]. Let \(\mathrm{dv}(K)\) be the \(\mathrm{d}\)-valued hull of \(K\) (see [ADH, 10.3]).
**Lemma 1.4.4**.: _Suppose \(K\) is \(\lambda\)-free. Then \(L:=\mathrm{dv}(K)\) is \(\lambda\)-free and \(\Gamma_{L}=\Gamma\)._
Proof.: The first statement is [75, Theorem 10.2], and the second statement follows from [ADH, 10.3.2(i)].
If \(L=K(D)\) is a differential field extension of \(K\) with \(D\supseteq C\) a subfield of \(C_{L}\), then \(D=C_{L}\), and \(K\) and \(D\) are linearly disjoint over \(C\)[ADH, 4.6.20]. If \(K\) is \(\mathrm{d}\)-valued and \(L=K(D)\) is an \(H\)-asymptotic extension of \(K\) with \(D\supseteq C\) a subfield of \(C_{L}\), then \(L\) is \(\mathrm{d}\)-valued and \(\Gamma_{L}=\Gamma\)[ADH, 10.5.15].
**Lemma 1.4.5**.: _Suppose \(K\) is \(\mathrm{d}\)-valued and \(\lambda\)-free, and \(L=K(D)\) is an \(H\)-asymptotic extension of \(K\) with \(D\supseteq C\) a subfield of \(C_{L}\). Then \(L\) is \(\lambda\)-free._
Proof.: First, \((\lambda_{\rho})\) is of transcendental type over \(K\): otherwise, [ADH, 3.2.7] would give an algebraic extension of \(K\) that is not \(\lambda\)-free. Next, our logarithmic sequence \((\ell_{\rho})\) for \(K\) remains a logarithmic sequence for \(L\).
Zorn and the \(\forall\exists\)-form of the \(\lambda\)-freeness axiom [ADH, 11.6.1(ii)] reduce us to the case \(D=C(d)\), \(d\notin C\), \(d\) transcendental over \(K\), so \(L=K(d)\). Suppose \(L\) is not \(\lambda\)-free. Then \(\lambda_{\rho}\rightsquigarrow\lambda\in L\), and such \(\lambda\) is transcendental over \(K\) and gives an immediate extension \(K(\lambda)\) of \(K\) by [ADH, 3.2.6]. Hence \(L\) is algebraic over \(K(\lambda)\), so \(\operatorname{res}L\) is algebraic over \(\operatorname{res}K(\lambda)=\operatorname{res}K\cong C\) and thus \(d\) is algebraic over \(C\), a contradiction.
**Lemma 1.4.6**.: _Suppose \(K\) is \(\lambda\)-free and \(L\) is an \(H\)-asymptotic extension of \(K\), where \(L=K(d)\) with \(d\in C_{L}\). Then \(L\) is pre-\(\mathrm{d}\)-valued._
Proof.: Let \(L^{\mathrm{a}}\) be an algebraic closure of the \(H\)-asymptotic field \(L\), and let \(K^{\mathrm{a}}\) be the algebraic closure of \(K\) inside \(L^{\mathrm{a}}\). Then \(K^{\mathrm{a}}\) is pre-\(\mathrm{d}\)-valued by [ADH, 10.1.22]. Replacing \(K\), \(L\) by \(K^{\mathrm{a}}\), \(K^{\mathrm{a}}(d)\) we arrange that \(K\) is algebraically closed. We may assume \(d\notin C\), so \(d\) is transcendental over \(K\) by [ADH, 4.1.1, 4.1.2].
Suppose first that \(\mathrm{res}(d)\in\mathrm{res}(K)\subseteq\mathrm{res}(L)\), and take \(b\in\mathcal{O}\) such that \(y:=b-d\prec 1\). Then \(b^{\prime}\notin\partial\mathcal{o}\): otherwise \(y^{\prime}=b^{\prime}=\delta^{\prime}\) with \(\delta\in\mathcal{o}\), so \(y=\delta\in K\) and hence \(d\in K\), a contradiction. Also \(vb^{\prime}\in(\Gamma^{>})^{\prime}\): otherwise \(vb^{\prime}<(\Gamma^{>})^{\prime}\), by [ADH, 9.2.14], and \(vb^{\prime}\) would be a gap in \(K\), contradicting \(\lambda\)-freeness of \(K\). Hence \(L=K(y)\) is pre-\(\mathrm{d}\)-valued by [ADH, 10.2.4, 10.2.5(iii)] applied to \(s:=b^{\prime}\).
If \(\mathrm{res}(d)\notin\mathrm{res}(K)\), then \(\mathrm{res}(d)\) is transcendental over \(\mathrm{res}(K)\) by [ADH, 3.1.17], hence \(\Gamma_{L}=\Gamma\) by [ADH, 3.1.11], and so \(L\) has asymptotic integration and thus is pre-\(\mathrm{d}\)-valued by [ADH, 10.1.3].
Proof of Proposition 1.4.3.: By Zorn we reduce to the case \(L=K(d)\) with \(d\in C_{L}\). Then \(L\) is pre-\(\mathrm{d}\)-valued by Lemma 1.4.6. By Lemma 1.4.4, the \(\mathrm{d}\)-valued hull \(K_{1}:=\mathrm{dv}(K)\) of \(K\) is \(\lambda\)-free with \(\Gamma_{K_{1}}=\Gamma\), and by the universal property of \(\mathrm{d}\)-valued hulls we may arrange that \(K_{1}\) is a \(\mathrm{d}\)-valued subfield of \(L_{1}:=\mathrm{dv}(L)\) [ADH, 10.3.1]. The proof of [ADH, 10.3.1] gives \(L_{1}=L(E)\) where \(E=C_{L_{1}}\), and so \(L_{1}=K_{1}(E)\). Hence by Lemma 1.4.5 and the remarks preceding it, \(L_{1}\) is \(\lambda\)-free with \(\Gamma_{L_{1}}=\Gamma_{K_{1}}=\Gamma\). Thus \(L\) is \(\lambda\)-free with \(\Gamma_{L}=\Gamma\).
**Lemma 1.4.7**.: _Let \(H\) be a \(\lambda\)-free real closed \(H\)-field. Then the trigonometric closure \(H^{\mathrm{trig}}\) of \(H\) is \(\lambda\)-free._
Proof.: We show that \(H^{+}\) as in Lemma 1.2.22 is \(\lambda\)-free. There \(H^{+}[\mathrm{i}]=K(y)\) where \(K\) is the \(H\)-asymptotic extension \(H[\mathrm{i}]\) of \(H\) and \(y\sim 1\), \(y^{\dagger}\notin K^{\dagger}\), \(y^{\dagger}\in\mathrm{i}\,\partial\mathcal{o}_{H}\). Then \(K\) is \(\lambda\)-free, so \(K(y)\) is \(\lambda\)-free by [75, Proposition 7.2], hence \(H^{+}\) is \(\lambda\)-free.
In Example 1.4.2 we have a \(\lambda\)-free \(K\) and an \(H\)-asymptotic extension \(L\) of \(K\) that is not \(\lambda\)-free, with \(\mathrm{trdeg}(L|K)=1\). The next proposition shows that the second part of the conclusion of Theorem 1.4.1 nevertheless holds for such \(K,L\).
**Proposition 1.4.8**.: _The following are equivalent:_
1. \(K\) _has rational asymptotic integration;_
2. _for every_ \(H\)_-asymptotic extension_ \(L\) _of_ \(K\) _with_ \(\mathrm{trdeg}(L|K)\leqslant 1\) _we have that_ \(\Gamma^{<}\) _is cofinal in_ \(\Gamma^{<}_{L}\)_._
Proof.: For (i) \(\Rightarrow\) (ii), assume (i), and let \(L\) be an \(H\)-asymptotic extension of \(K\) with \(\mathrm{trdeg}(L|K)\leqslant 1\). Towards showing that \(\Gamma^{<}\) is cofinal in \(\Gamma^{<}_{L}\) we can arrange that \(K\) and \(L\) are algebraically closed. Suppose towards a contradiction that \(\gamma\in\Gamma_{L}\) and \(\Gamma^{<}<\gamma<0\). Then \(\Psi<\gamma^{\prime}<(\Gamma^{>})^{\prime}\), and so \(\Gamma\) is dense in \(\Gamma+\mathbb{Q}\gamma^{\prime}\) by [ADH, 2.4.16, 2.4.17], in particular, \(\gamma\notin\Gamma+\mathbb{Q}\gamma^{\prime}\). Thus \(\gamma\), \(\gamma^{\prime}\) are \(\mathbb{Q}\)-linearly independent over \(\Gamma\), which contradicts \(\mathrm{trdeg}(L|K)\leqslant 1\) by [ADH, 3.1.11].
As to (ii) \(\Rightarrow\) (i), we prove the contrapositive, so assume \(K\) does not have rational asymptotic integration. We arrange again that \(K\) is algebraically closed. Then \(K\) has a gap \(vs\) with \(s\in K^{\times}\), and so [ADH, 10.2.1 and its proof] gives an \(H\)-asymptotic extension \(K(y)\) of \(K\) with \(y^{\prime}=s\) and \(0<vy<\Gamma^{>}\).
Recall from [1, 11.6] that Liouville closed \(H\)-fields are \(\lambda\)-free. To prove the next result we also use Gehret's theorem [75, Theorem 12.1(1)] that an \(H\)-field \(H\) has
up to isomorphism over \(H\) exactly one Liouville closure iff \(H\) is grounded or \(\lambda\)-free. Here _isomorphism_ means of course _isomorphism of \(H\)-fields_, and likewise with the embeddings referred to in the next result:
**Proposition 1.4.9**.: _Let \(H\) be a grounded or \(\lambda\)-free \(H\)-field. Then \(H\) has a trigonometrically closed and Liouville closed \(H\)-field extension \(H^{\mathrm{tl}}\) that embeds over \(H\) into any trigonometrically closed Liouville closed \(H\)-field extension of \(H\)._
Proof.: We build real closed \(H\)-fields \(H_{0}\subseteq H_{1}\subseteq H_{2}\subseteq\cdots\) as follows: \(H_{0}\) is a real closure of \(H\), and, recursively, \(H_{2n+1}\) is a Liouville closure of \(H_{2n}\), and \(H_{2n+2}:=H_{2n+1}^{\mathrm{trig}}\) is the trigonometric closure of \(H_{2n+1}\). Then \(H^{*}:=\bigcup_{n}H_{n}\) is a trigonometrically closed Liouville closed \(H\)-field extension of \(H\). Induction using Lemma 1.4.7 shows that all \(H_{n}\) with \(n\geqslant 1\) are \(\lambda\)-free, and that \(H_{2n}\) has for all \(n\) up to isomorphism over \(H\) a unique Liouville closure. Given any trigonometrically closed Liouville closed \(H\)-field extension \(E\) of \(H\) we then use the embedding properties of _Liouville closure_ and _trigonometric closure_ to construct by a similar recursion embeddings \(H_{n}\to E\) that extend to an embedding \(H^{*}\to E\) over \(H\).
For \(H\) as in Proposition 1.4.9, the \(H^{*}\) constructed in its proof is minimal: Let \(E\supseteq H\) be any trigonometrically closed Liouville closed \(H\)-subfield of \(H^{*}\). Then induction on \(n\) yields \(H_{n}\subseteq\ E\) for all \(n\), so \(E=H^{*}\). It follows that any \(H^{\mathrm{tl}}\) as in Proposition 1.4.9 is isomorphic over \(H\) to \(H^{*}\), and we refer to such \(H^{\mathrm{tl}}\) as a **trigonometric-Liouville closure** of \(H\). Here are some useful facts about \(H^{\mathrm{tl}}\):
**Corollary 1.4.10**.: _Let \(H\) be a \(\lambda\)-free \(H\)-field. Then \(C_{H^{\mathrm{tl}}}\) is a real closure of \(C_{H}\), the \(H\)-asymptotic extension \(K^{\mathrm{tl}}:=H^{\mathrm{tl}}[\mathrm{i}]\) of \(H^{\mathrm{tl}}\) is a Liouville extension of \(H\) with \(\mathrm{I}(K^{\mathrm{tl}})\subseteq(K^{\mathrm{tl}})^{\dagger}\), and \(\Gamma_{H}^{<}\) is cofinal in \(\Gamma_{H^{\mathrm{tl}}}^{<}\). Moreover,_
\[H\text{ is $\omega$-free}\iff H^{\mathrm{tl}}\text{ is $\omega$-free.}\]
Proof.: The construction of \(H^{*}\) in the proof of Proposition 1.4.9 gives that \(C_{H^{*}}\) is a real closure of \(C_{H}\), and that the \(H\)-asymptotic extension \(K^{*}:=H^{*}[\mathrm{i}]\) of \(H^{*}\) is a Liouville extension of \(H\) with \(\mathrm{I}(K^{*})\subseteq(K^{*})^{\dagger}\). Induction using Lemma 1.4.7 and Proposition 1.4.8 shows that \(H_{n}\) is \(\lambda\)-free and \(\Gamma_{H}^{<}\) is cofinal in \(\Gamma_{H_{n}}^{<}\), for all \(n\), so \(\Gamma_{H}^{<}\) is cofinal in \(\Gamma_{H^{*}}^{<}\).
The final equivalence follows from Theorem 1.4.1 and a remark preceding it.
Proposition 1.4.8 and [ADH, remarks after 11.6.4 and after 11.7.19] yield:
**Corollary 1.4.11**.: _Suppose \(K\) has rational asymptotic integration, and let \(L\) be an \(H\)-asymptotic extension of \(K\) with \(\mathrm{trdeg}(L|K)\leqslant 1\). If \(L\) is \(\lambda\)-free, then so is \(K\), and if \(L\) is \(\omega\)-free, then so is \(K\)._
We also have a similar characterization of \(\lambda\)-freeness:
**Proposition 1.4.12**.: _The following are equivalent:_
1. \(K\) _is_ \(\lambda\)_-free;_
2. _every_ \(H\)_-asymptotic extension_ \(L\) _of_ \(K\) _with_ \(\mathrm{trdeg}(L|K)\leqslant 1\) _has asymptotic integration._
Proof.: Assume \(K\) is \(\lambda\)-free; let \(L\) be an \(H\)-asymptotic extension of \(K\) such that \(\mathrm{trdeg}(L|K)\leqslant 1\). By Proposition 1.4.8, \(\Gamma^{<}\) is cofinal in \(\Gamma_{L}^{<}\), so \(L\) is ungrounded. Towards a contradiction, suppose \(vf\) (\(f\in L^{\times}\)) is a gap in \(L\). Passing to algebraic closures we arrange that \(K\) and \(L\) are algebraically closed. Set \(\lambda:=-f^{\dagger}\). Then
for all active \(a\) in \(L\) we have \(\lambda+a^{\dagger}\prec a\) by [ADH, 11.5.9] and hence \(\lambda_{\rho}\rightsquigarrow\lambda\) by [ADH, 11.5.6]. By \(\lambda\)-freeness of \(K\) and [ADH, 3.2.6, 3.2.7], the valued field extension \(K(\lambda)\supseteq K\) is immediate of transcendence degree \(1\), so \(L\supseteq K(\lambda)\) is algebraic and \(\Gamma=\Gamma_{L}\). Hence \(vf\) is a gap in \(K\), a contradiction. This shows (i) \(\Rightarrow\) (ii).
To show the contrapositive of (ii) \(\Rightarrow\) (i), suppose \(\lambda\in K\) is a pseudolimit of \((\lambda_{\rho})\). If the algebraic closure \(K^{\mathrm{a}}\) of \(K\) does not have asymptotic integration, then clearly (ii) fails. If \(K^{\mathrm{a}}\) has asymptotic integration, then \(-\lambda\) creates a gap over \(K\) by [ADH, 11.5.14] applied to \(K^{\mathrm{a}}\) in place of \(K\), hence (ii) also fails.
The next two lemmas include converses to Lemmas 1.4.4 and 1.4.5.
**Lemma 1.4.13**.: _Let \(E\) be a pre-\(\mathrm{d}\)-valued \(H\)-asymptotic field. Then:_
1. _if_ \(E\) _is not_ \(\lambda\)_-free, then_ \(\mathrm{dv}(E)\) _is not_ \(\lambda\)_-free;_
2. _if_ \(E\) _is not_ \(\omega\)_-free, then_ \(\mathrm{dv}(E)\) _is not_ \(\omega\)_-free._
Proof.: This is clear if \(E\) has no rational asymptotic integration, because then \(\mathrm{dv}(E)\) has no rational asymptotic integration either, by [ADH, 10.3.2]. Assume \(E\) has rational asymptotic integration. Then \(\mathrm{dv}(E)\) is an immediate extension of \(E\) by [ADH, 10.3.2], and then (i) and (ii) follow from the characterizations of \(\lambda\)-freeness and \(\omega\)-freeness in terms of nonexistence of certain pseudolimits.
**Lemma 1.4.14**.: _Let \(E\) be a \(\mathrm{d}\)-valued \(H\)-asymptotic field and \(F\) an \(H\)-asymptotic extension of \(E\) such that \(F=E(C_{F})\). Then:_
1. _if_ \(E\) _is not_ \(\lambda\)_-free, then_ \(F\) _is not_ \(\lambda\)_-free;_
2. _if_ \(E\) _is not_ \(\omega\)_-free, then_ \(F\) _is not_ \(\omega\)_-free._
Proof.: By [ADH, 10.5.15]\(E\) and \(F\) have the same value group. The rest of the proof is like that for the previous lemma, with \(F\) instead of \(\mathrm{dv}(E)\).
_In the rest of this subsection \(K\) is in addition a pre-\(H\)-field and \(L\) a pre-\(H\)-field extension of \(K\)._ The following is shown in the proof of [75, Lemma 12.5]:
**Proposition 1.4.15** (Gehret).: _Suppose \(K\) is a \(\lambda\)-free \(H\)-field and \(L\) is a Liouville \(H\)-field extension of \(K\). Then \(L\) is \(\lambda\)-free and \(\Gamma^{<}\) is cofinal in \(\Gamma^{<}_{L}\)._
_Example 1.4.16_.: Let \(K=\mathbb{R}\langle\omega\rangle\) be the \(\lambda\)-free but non-\(\omega\)-free \(H\)-field from [ADH, 13.9.1]. Then \(K\) has a unique Liouville closure \(L\), up to isomorphism over \(K\), by [75, Theorem 12.1(1)]. By Proposition 1.4.15, \(L\) is not \(\omega\)-free; [9] has another proof of this fact. By [ADH, 13.9.5] we can take here \(K\) to be a Hardy field, and then \(L\) is isomorphic over \(K\) to a Hardy field extension of \(K\) [ADH, 10.6.11].
Applying Corollary 1.4.10 to \(H:=\mathbb{R}\langle\omega\rangle\) yields a Liouville closed \(H\)-field \(H^{\mathrm{tl}}\) that is not \(\omega\)-free but does satisfy \(\mathrm{I}(K^{\mathrm{tl}})\subseteq(K^{\mathrm{tl}})^{\dagger}\) for \(K^{\mathrm{tl}}:=H^{\mathrm{tl}}[\mathrm{i}]\).
For a pre-\(H\)-field \(H\) we use below the subsets \(\Gamma(H)\), \(\Lambda(H)\), \(\Delta(H)\) of \(H\) singled out in [ADH, p. 520].
**Lemma 1.4.17**.: _Suppose \(K\) is \(\lambda\)-free, \(\lambda\in\Lambda(L)^{\downarrow}\), \(\omega:=\omega(\lambda)\in K\), and suppose \(\omega\big{(}\Lambda(K)\big{)}<\omega<\sigma\big{(}\Gamma(K)\big{)}\). Then \(\lambda_{\rho}\rightsquigarrow\lambda\), and the pre-\(H\)-subfield \(K\langle\lambda\rangle=K(\lambda)\) of \(L\) is an immediate extension of \(K\) (and so \(K\langle\lambda\rangle\) is not \(\lambda\)-free)._
Proof.: From \(\Lambda(L)<\Delta(L)\) [ADH, p. 522] and \(\Delta(K)\subseteq\Delta(L)\) we obtain \(\lambda<\Delta(K)\). The restriction of \(\omega\) to \(\Lambda(L)^{\downarrow}\) is strictly increasing [ADH, p. 526] and \(\Lambda(K)\subseteq\Lambda(L)\), so \(\omega\big{(}\Lambda(K)\big{)}<\omega=\omega(\lambda)\) gives \(\Lambda(K)<\lambda\). Hence \(\lambda_{\rho}\rightsquigarrow\lambda\) by [ADH, 11.8.16]. Also \(\omega_{\rho}\rightsquigarrow\omega\) by [ADH, 11.8.30]. Thus \(K\langle\lambda\rangle\) is an immediate extension of \(K\) by [ADH, 11.7.13].
**Achieving \(\omega\)-freeness for pre-\(H\)-fields.**_In the rest of this section \(H\) is a pre-\(H\)-field and \(L\) is a Liouville closed \(\mathrm{d}\)-algebraic \(H\)-field extension of \(H\)._ Thus if \(H\) is \(\omega\)-free, then so is \(L\), by Theorem 1.4.1.
The lemmas below give conditions guaranteeing that \(L\) is \(\omega\)-free, while \(H\) is not.
**Lemma 1.4.18**.: _Suppose \(H\) is grounded or has a gap. Then \(L\) is \(\omega\)-free._
Proof.: Suppose \(H\) is grounded. Let \(H_{\omega}\) be the \(\omega\)-free pre-\(H\)-field extension of \(H\) introduced in connection with [ADH, 11.7.17] (where we use the letter \(F\) instead of \(H\)). Identifying \(H_{\omega}\) with its image in \(L\) under an embedding \(H_{\omega}\to L\) over \(H\) of pre-\(H\)-fields, we apply Theorem 1.4.1 to \(K:=H_{\omega}\) to conclude that \(L\) is \(\omega\)-free.
Next, suppose \(H\) has a gap \(\beta=vb\), \(b\in H^{\times}\). Take \(a\in L\) with \(a^{\prime}=b\) and \(a\not\asymp 1\). Then \(\alpha:=va\) satisfies \(\alpha^{\prime}=\beta\), and so the pre-\(H\)-field \(H(a)\subseteq L\) is grounded, by [ADH, 9.8.2 and remarks following its proof]. Now apply the previous case to \(H(a)\) in place of \(H\).
**Lemma 1.4.19**.: _Suppose \(H\) has asymptotic integration and divisible value group, and \(s\in H\) creates a gap over \(H\). Then \(L\) is \(\omega\)-free._
Proof.: Take \(f\in L^{\times}\) with \(f^{\dagger}=s\). Then by [ADH, remark after 11.5.14], \(vf\) is a gap in \(H\langle f\rangle=H(f)\), so \(L\) is \(\omega\)-free by Lemma 1.4.18 applied to \(H\langle f\rangle\) in place of \(H\).
**Lemma 1.4.20**.: _Suppose \(H\) is not \(\lambda\)-free. Then \(L\) is \(\omega\)-free._
Proof.: By [ADH, 11.6.8], the real closure \(H^{\mathrm{rc}}\) of \(H\) inside \(L\) is not \(\lambda\)-free, hence replacing \(H\) by \(H^{\mathrm{rc}}\) we arrange that \(H\) is real closed. If \(H\) does not have asymptotic integration, then we are done by Lemma 1.4.18. So suppose \(H\) has asymptotic integration. Then some \(s\in H\) creates a gap over \(H\), by [ADH, 11.6.1], so \(L\) is \(\omega\)-free by Lemma 1.4.19.
**Corollary 1.4.21**.: _Suppose \(H\) is \(\lambda\)-free and \(\lambda\in\Lambda(L)^{\downarrow}\) is such that \(\omega:=\omega(\lambda)\in H\) and \(\omega\big{(}\Lambda(H)\big{)}<\omega<\sigma\big{(}\Gamma(H)\big{)}\). Then \(L\) is \(\omega\)-free._
Proof.: By Lemma 1.4.17, the pre-\(H\)-subfield \(H\langle\lambda\rangle=H(\lambda)\) of \(L\) is an immediate non-\(\lambda\)-free extension of \(H\). Now apply Lemma 1.4.20 to \(H\langle\lambda\rangle\) in place of \(H\).
### 1.5. Complements on Linear Differential Operators
In this section we tie up loose ends from the material on linear differential operators in [ADH, 14.2] and [11, Section 8]. _Throughout \(K\) is an ungrounded asymptotic field, \(a\), \(b\), \(f\), \(g\), \(h\) range over arbitrary elements of \(K\), and \(\phi\) over those active in \(K\), in particular, \(\phi\neq 0\)_. Recall from [ADH, p. 479] our use of the term "eventually": a property \(S(\phi)\) of elements \(\phi\) is said to hold _eventually_ if for some active \(\phi_{0}\) in \(K\), \(S(\phi)\) holds for all \(\phi\preccurlyeq\phi_{0}\).
We shall consider linear differential operators \(A\in K[\partial]^{\neq}\) and set \(r:=\mathrm{order}(A)\). In [ADH, Section 11.1] we introduced the set
\[\mathscr{E}^{\mathrm{e}}(A)\ =\ \mathscr{E}^{\mathrm{e}}_{K}(A)\ :=\ \big{\{}\gamma\in\Gamma:\,\mathrm{nwt}_{A}(\gamma) \geqslant 1\big{\}}\ =\ \bigcap_{\phi}\mathscr{E}(A^{\phi})\]
of _eventual exceptional values of \(A\)_. For \(a\neq 0\) we have \(\mathscr{E}^{\mathrm{e}}(aA)=\mathscr{E}^{\mathrm{e}}(A)\) and \(\mathscr{E}^{\mathrm{e}}(Aa)=\mathscr{E}^{\mathrm{e}}(A)-va\). An easy consequence of the definitions: \(\mathscr{E}^{\mathrm{e}}(A^{f})=\mathscr{E}^{\mathrm{e}}(A)\) for \(f\neq 0\). A key fact about \(\mathscr{E}^{\mathrm{e}}(A)\) is that if \(y\in K^{\times}\), \(vy\notin\mathscr{E}^{\mathrm{e}}(A)\), then \(A(y)\asymp A^{\phi}y\), eventually. Since \(A^{\phi}y\neq 0\) for \(y\in K^{\times}\), this gives \(v(\ker^{\neq}A)\subseteq\mathscr{E}^{\mathrm{e}}(A)\).
**Lemma 1.5.1**.: _If \(L\) is an ungrounded asymptotic extension of \(K\), then \(\mathscr{E}^{\rm e}_{L}(A)\cap\Gamma\subseteq\mathscr{E}^{\rm e}(A)\), with equality if \(\Psi\) is cofinal in \(\Psi_{L}\)._
Proof.: For the inclusion, use that \({\rm dwt}(A^{\phi})\) decreases as \(v\phi\) strictly increases [ADH, 11.1.12]. Thus its eventual value \({\rm nwt}(A)\), evaluated in \(K\), cannot strictly increase when evaluated in an ungrounded asymptotic extension of \(K\).
_In the rest of this section we assume in addition that \(K\) is \(H\)-asymptotic with asymptotic integration._ Then by [ADH, 14.2.8]:
**Proposition 1.5.2**.: _If \(K\) is \(r\)-linearly newtonian, then \(v(\ker^{\neq}A)=\mathscr{E}^{\rm e}(A)\)._
_Remark 1.5.3_.: If \(K\) is d-valued, then \(|v(\ker^{\neq}A)|=\dim_{C}\ker A\leqslant r\) by [ADH, 5.6.6], using a reduction to the case of "small derivation" by compositional conjugation.
**Corollary 1.5.4**.: _Suppose \(K\) is \(\mathrm{d}\)-valued, \(\mathscr{E}^{\rm e}(A)=v(\ker^{\neq}A)\), and \(0\neq f\in A(K)\). Then \(A(y)=f\) for some \(y\in K\) with \(vy\notin\mathscr{E}^{\rm e}(A)\)._
Proof.: Let \(y\in K\), \(A(y)=f\), with \(vy\) maximal. Then \(vy\notin\mathscr{E}^{\rm e}(A)\): otherwise we have \(z\in\ker A\) with \(z\sim y\), so \(A(y-z)=f\) and \(v(y-z)>vy\).
**Corollary 1.5.5**.: _Suppose \(K\) is \(\omega\)-free. Then \(\sum_{\gamma\in\Gamma}{\rm nwt}_{A}(\gamma)=|\mathscr{E}^{\rm e}(A)|\leqslant r\)._
Proof.: The remarks following [ADH, 14.0.1] give an immediate asymptotic extension \(L\) of \(K\) that is newtonian. Then \(L\) is d-valued by Lemma 1.2.9, hence \(|\mathscr{E}^{\rm e}(A)|=|\mathscr{E}^{\rm e}_{L}(A)|\leqslant r\) by Proposition 1.5.2 and Remark 1.5.3. By [ADH, 13.7.10] we have \({\rm nwt}_{A}(\gamma)\leqslant 1\) for all \(\gamma\in\Gamma\), thus \(\sum_{\gamma\in\Gamma}{\rm nwt}_{A}(\gamma)=|\mathscr{E}^{\rm e}(A)|\).
In [ADH, Section 11.1] we defined \(v^{\rm e}_{A}\colon\Gamma\to\Gamma\) by requiring that for all \(\gamma\in\Gamma\):
\[v_{A^{\phi}}(\gamma)\ =\ v^{\rm e}_{A}(\gamma)+{\rm nwt}_{A}(\gamma)v\phi, \qquad\text{ eventually.} \tag{1.5.1}\]
We recall from that reference that for \(a\neq 0\) and \(\gamma\in\Gamma\) we have
\[v^{\rm e}_{aA}(\gamma)\ =\ va+v^{\rm e}_{A}(\gamma),\qquad v^{\rm e}_{ Aa}(\gamma)=v^{\rm e}_{A}(va+\gamma).\]
As an example from [ADH, p. 481], \(v^{\rm e}_{\partial}(\gamma)=\gamma+\psi(\gamma)\) for \(\gamma\in\Gamma\setminus\{0\}\) and \(v^{\rm e}_{\partial}(0)=0\). By [ADH, 14.2.7 and the remark preceding it] we have:
**Lemma 1.5.6**.: _The restriction of \(v^{\rm e}_{A}\) to a function \(\Gamma\setminus\mathscr{E}^{\rm e}(A)\to\Gamma\) is strictly increasing, and \(v\big{(}A(y)\big{)}=v^{\rm e}_{A}(vy)\) for all \(y\in K\) with \(vy\in\Gamma\setminus\mathscr{E}^{\rm e}(A)\). Moreover, if \(K\) is \(\omega\)-free, then \(v^{\rm e}_{A}\big{(}\Gamma\setminus\mathscr{E}^{\rm e}(A)\big{)}=\Gamma\)._
The following is [ADH, 14.2.10] without the hypothesis of \(\omega\)-freeness:
**Corollary 1.5.7**.: _Suppose \(K\) is \(r\)-linearly newtonian. Then for each \(f\neq 0\) there exists \(y\in K^{\times}\) such that \(A(y)=f\), \(vy\notin\mathscr{E}^{\rm e}(A)\), and \(v^{\rm e}_{A}(vy)=vf\)._
Proof.: If \(r=0\), then \(\mathscr{E}^{\rm e}(A)=\emptyset\) and our claim is obviously valid. Suppose \(r\geqslant 1\). Then \(K\) is d-valued by Lemma 1.2.9, and \(v(\ker^{\neq}A)=\mathscr{E}^{\rm e}(A)\) by Proposition 1.5.2. Moreover, by [ADH, 14.2.2], \(K\) is \(r\)-linearly surjective, hence \(f\in A(K)\). Now Corollary 1.5.4 yields \(y\in K^{\times}\) with \(A(y)=f\) and \(vy\notin\mathscr{E}^{\rm e}(A)\). By Lemma 1.5.6 we have \(v^{\rm e}_{A}(vy)=v\big(A(y)\big)=vf\).
From the proof of [ADH, 14.2.10] we extract the following:
**Corollary 1.5.8**.: _Suppose \(K\) is \(r\)-linearly newtonian with small derivation, and \(A\in\ \mathcal{O}[\partial]\) with \(a_{0}:=A(1)\asymp 1\), and \(f\asymp^{\flat}1\). Then there is \(y\in K^{\times}\) such that \(A(y)=f\) and \(y\sim f/a_{0}\). For any such \(y\) we have \(vy\notin\mathscr{E}^{\rm e}(A)\) and \(v^{\rm e}_{A}(vy)=vf\)._
Proof.: The case \(r=0\) is trivial. Assume \(r\geqslant 1\), so \(K\) is d-valued by Lemma 1.2.9. Hence \(f^{\dagger}\prec 1\), that is, \(f^{\prime}\prec f\), so \(f^{(n)}\prec f\) for all \(n\geqslant 1\) by [ADH, 4.4.2]. Then \(Af\preccurlyeq f\) by [ADH, (5.1.3), (5.1.2)], and \(A(f)\sim a_{0}f\), so \(A_{\ltimes f}\in\mathcal{O}[\partial]\) and \(A_{\ltimes f}(1)\sim a_{0}\). Thus we may replace \(A\), \(f\) by \(A_{\ltimes f}\), \(1\) to arrange \(f=1\). Now \(a_{0}\asymp 1\) gives \(\operatorname{dwm}(A)=0\), so \(\operatorname{dwt}(A^{\phi})=0\) eventually, by [ADH, 11.1.11(ii)], that is, \(\operatorname{nwt}(A)=0\). Also \(A^{\phi}(1)=A(1)=a_{0}\asymp 1\), so \(v^{\rm e}(A)=0\). Arguing as in the proof of [ADH, 14.2.10] we obtain \(y\in K^{\times}\) with \(A(y)=1\) and \(y\sim 1/a_{0}\). It is clear that \(vy=0\notin\mathscr{E}^{\mathrm{e}}(A)\) and \(v^{\rm e}_{A}(vy)=v^{\rm e}(A)=0=vf\) for any such \(y\).
In the next two subsections we consider more closely the case of order \(r=1\), and in the last subsection the case of arbitrary order.
### First-order operators
_In this subsection \(A=\partial-g\)._ By [ADH, p. 481],
\[\mathscr{E}^{\mathrm{e}}(A)\ =\ \mathscr{E}^{\mathrm{e}}_{K}(A)\ =\ \big{\{}vy:\,y\in K^{\times},\ v(g-y^{\dagger})>\Psi\big{\}}\]
has at most one element. We also have \(|v(\ker^{\neq}A)|=\dim_{C}\ker A\leqslant 1\) in view of \(C^{\times}\subseteq\mathcal{O}^{\times}\). Proposition 1.5.2 holds under a weaker assumption on \(K\) for \(r=1\):
**Lemma 1.5.9**.: _Suppose \(\operatorname{I}(K)\subseteq K^{\dagger}\). Then \(v(\ker^{\neq}A)=\mathscr{E}^{\mathrm{e}}(A)\)._
Proof.: It remains to show "\(\supseteq\)". Suppose \(\mathscr{E}^{\mathrm{e}}(A)=\{0\}\). Then \(g-y^{\dagger}\in\operatorname{I}(K)\) for some \(y\asymp 1\) in \(K\), hence \(g\in\operatorname{I}(K)\subseteq K^{\dagger}\), so \(g=h^{\dagger}\) with \(h\asymp 1\), and thus \(0=vh\in v(\ker^{\neq}A)\). The general case reduces to the case \(\mathscr{E}^{\mathrm{e}}(A)=\{0\}\) by twisting.
**Lemma 1.5.10**.: _Suppose \(L\) is an ungrounded \(H\)-asymptotic extension of \(K\). Then \(\mathscr{E}^{\mathrm{e}}_{L}(A)\cap\Gamma=\mathscr{E}^{\mathrm{e}}(A)\)._
Proof.: Lemma 1.5.1 gives \(\mathscr{E}^{\mathrm{e}}_{L}(A)\cap\Gamma\subseteq\mathscr{E}^{\mathrm{e}}(A)\). Next, let \(vy\in\mathscr{E}^{\mathrm{e}}(A)\), \(y\in K^{\times}\). Then \(v(g-y^{\dagger})>\Psi\) and so \(v(g-y^{\dagger})\in(\Gamma^{>})^{\prime}\) since \(K\) has asymptotic integration. Hence \(v(g-y^{\dagger})>\Psi_{L}\) and thus \(vy\in\mathscr{E}^{\mathrm{e}}_{L}(A)\), by [ADH, p. 481].
Recall also from [ADH, 9.7] that for an ordered abelian group \(G\) and \(U\subseteq G\), a function \(\eta\colon U\to G\) is said to be _slowly varying_ if \(\eta(\alpha)-\eta(\beta)=o(\alpha-\beta)\) for all \(\alpha\neq\beta\) in \(U\); then the function \(\gamma\mapsto\gamma+\eta(\gamma)\colon U\to G\) is strictly increasing. The quintessential example of a slowly varying function is \(\psi\colon\Gamma^{\neq}\to\Gamma\) [ADH, 6.5.4(ii)].
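(This yields the strict monotonicity claim: for \(\alpha<\beta\) in \(U\), slow variation gives in particular \(|\eta(\alpha)-\eta(\beta)|<\beta-\alpha\), hence \(\alpha+\eta(\alpha)<\beta+\eta(\beta)\).)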
**Proposition 1.5.11**.: _There is a unique slowly varying function \(\psi_{A}\colon\Gamma\setminus\mathscr{E}^{\mathrm{e}}(A)\to\Gamma\) such that for all \(y\in K^{\times}\) with \(vy\notin\mathscr{E}^{\mathrm{e}}(A)\) we have \(v\big{(}A(y)\big{)}=vy+\psi_{A}(vy)\)._
Proof.: For d-valued \(K\), use [11, 8.4]. In general, pass to the d-valued hull \(L:=\operatorname{d\!v}(K)\) of \(K\) from [ADH, 10.3] and use \(\Gamma_{L}=\Gamma\) [ADH, 10.3.2].
If \(b\neq 0\), then \(\mathscr{E}^{\mathrm{e}}(A_{\ltimes b})=\mathscr{E}^{\mathrm{e}}(A)-vb\) and \(\psi_{A_{\ltimes b}}(\gamma)=\psi_{A}(\gamma+vb)\) for \(\gamma\in\Gamma\setminus\mathscr{E}^{\mathrm{e}}(A_{\ltimes b})\).
_Example_.: We have \(\mathscr{E}^{\mathrm{e}}(\partial)=\{0\}\) and \(\psi_{\partial}=\psi\). More generally, if \(g=b^{\dagger}\), \(b\neq 0\), then \(A_{\ltimes b}=\partial\) and so \(\mathscr{E}^{\mathrm{e}}(A)=\{vb\}\) and \(\psi_{A}(\gamma)=\psi(\gamma-vb)\) for \(\gamma\in\Gamma\setminus\{vb\}\).
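To check the second claim: \(A_{\ltimes b}=b^{-1}Ab=\partial+b^{\dagger}-g=\partial\), so the identities preceding this example yield \(\mathscr{E}^{\mathrm{e}}(A)=\mathscr{E}^{\mathrm{e}}(A_{\ltimes b})+vb=\{vb\}\) and \(\psi_{A}(\gamma)=\psi_{A_{\ltimes b}}(\gamma-vb)=\psi(\gamma-vb)\).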
If \(\Gamma\) is divisible, then \(\Gamma\setminus v\big{(}A(K)\big{)}\) has at most one element by [ADH, 11.6.16]. Also, \(K\) is \(\lambda\)-free iff \(v\big{(}A(K)\big{)}=\Gamma_{\infty}\) for all \(A=\partial-g\) by [ADH, 11.6.17].
**Lemma 1.5.12**.: _Suppose \(K\) is \(\lambda\)-free and \(f\neq 0\). Then for some \(y\in K^{\times}\) with \(vy\notin\mathscr{E}^{\mathrm{e}}(A)\) we have \(A(y)\asymp f\). \((\)Hence \(\gamma\mapsto\gamma+\psi_{A}(\gamma)\colon\Gamma\setminus\mathscr{E}^{\mathrm{e}} (A)\to\Gamma\) is surjective.\()_
Proof.: [ADH, 11.6.17] gives \(y\in K^{\times}\) with \(A^{\phi}y\asymp f\) eventually. Now
\[A^{\phi}y\ =\ \phi y\delta-(g-y^{\dagger})y\ \text{ in }K^{\phi}[\delta], \qquad\delta:=\phi^{-1}\partial.\]
Since \(v(A^{\phi}y)=vf\) eventually, this forces \(g-y^{\dagger}\succ\phi\) eventually, so \(vy\notin\mathscr{E}^{\mathrm{e}}(A)\).
Call \(A\)**steep** if \(g\succ^{\flat}1\), that is, \(g\succ 1\) and \(g^{\dagger}\succcurlyeq 1\). If \(K\) has small derivation and \(A\) is steep, then \(g^{\dagger}\prec g\) by [ADH, 9.2.10].
**Lemma 1.5.13**.: _Suppose \(K\) has small derivation, \(A\) is steep, and \(y\in K^{\times}\) such that \(A(y)=f\neq 0\), \(g\succ f^{\dagger}\), and \(vy\notin\mathscr{E}^{\mathrm{e}}(A)\). Then \(y\sim-f/g\)._
Proof.: We have
\[(f/g)^{\dagger}-g=f^{\dagger}-g^{\dagger}-g\sim-g\succ g^{\dagger},\]
hence \(v(f/g)\notin\mathscr{E}^{\mathrm{e}}(A)\), and
\[A(f/g)\ =\ (f/g)^{\prime}-(f/g)g\ =\ (f/g)\cdot\big{(}f^{\dagger}-g^{ \dagger}-g\big{)}\ \sim\ (f/g)\cdot(-g)\ =\ -f.\]
Since \(A(y)=f\sim A(-f/g)\) and \(vy,v(f/g)\in\Gamma\setminus\mathscr{E}^{\mathrm{e}}(A)\), this gives \(y=u\cdot f/g\) where \(u\asymp 1\), by Proposition 1.5.11. Now \(u^{\dagger}\prec 1\prec g\) and \((f/g)^{\dagger}=f^{\dagger}-g^{\dagger}\prec g\), hence \(y^{\dagger}\prec g\) and so
\[f=A(y)=y\cdot(y^{\dagger}-g)\sim-yg.\]
Therefore \(y\sim-f/g\).
**Lemma 1.5.14**.: _Suppose \(K\) has small derivation and \(y\in K^{\times}\) is such that \(A(y)=f\neq 0\), \(g-f^{\dagger}\succ^{\flat}1\) and \(vy\notin\mathscr{E}^{\mathrm{e}}(A)\). Then \(y\sim f/(f^{\dagger}-g)\)._
Proof.: From \(g-f^{\dagger}\succ 1\) we get \(vf\notin\mathscr{E}^{\mathrm{e}}(A)\). Now \(A(y)=f\prec f(f^{\dagger}-g)=A(f)\), so \(y\prec f\) by [ADH, 5.6.8], and \(v(y/f)\notin\mathscr{E}^{\mathrm{e}}(A_{\ltimes f})=\mathscr{E}^{\mathrm{e}}(A)-vf\). Since \(A_{\ltimes f}=\partial-(g-f^{\dagger})\) is steep, Lemma 1.5.13 applies to \(A_{\ltimes f}\), \(y/f\), \(1\) in the role of \(A\), \(y\), \(f\).
Suppose \(K\) is \(\lambda\)-free and \(f\neq 0\). Then [ADH, 11.6.1] gives an active \(\phi_{0}\) in \(K\) with \(f^{\dagger}-g-\phi^{\dagger}\succcurlyeq\phi_{0}\) for all \(\phi\prec\phi_{0}\). The convex subgroups \(\Gamma_{\phi}^{\flat}\) of \(\Gamma\) become arbitrarily small as we let \(v\phi\) increase cofinally in \(\Psi^{\downarrow}\), so \(\phi\prec_{\phi}^{\flat}\phi_{0}\) eventually, and hence \(f^{\dagger}-g-\phi^{\dagger}\succ_{\phi}^{\flat}\phi\) eventually, that is, \(\phi^{-1}(f/\phi)^{\dagger}-g/\phi\succ_{\phi}^{\flat}1\) eventually. So replacing \(K\) by \(K^{\phi}\), \(A\) by \(\phi^{-1}A^{\phi}=\delta-(g/\phi)\) in \(K^{\phi}[\delta]\), and \(f\) and \(g\) by \(f/\phi\) and \(g/\phi\), for suitable \(\phi\), we arrange \(f^{\dagger}-g\succ^{\flat}1\). Thus by Lemma 1.5.14:
**Corollary 1.5.15**.: _If \(K\) is \(\lambda\)-free, \(y\in K^{\times}\), \(A(y)=f\neq 0\), and \(vy\notin\mathscr{E}^{\mathrm{e}}(A)\), then \(y\sim f/\big{(}(f/\phi)^{\dagger}-g\big{)}\), eventually._
_Example._ If \(K\) is \(\lambda\)-free and \(y\in K^{\times}\), \(y^{\prime}=f\neq 0\) with \(y\not\asymp 1\), then \(y\sim f/(f/\phi)^{\dagger}\), eventually. (This is Corollary 1.5.15 applied with \(A=\partial\), \(g=0\), using \(\mathscr{E}^{\mathrm{e}}(\partial)=\{0\}\).)
**From \(K\) to \(K[i]\).**_In this subsection \(K\) is a real closed \(H\)-field_. Then \(K[i]\) with \(i^{2}=-1\) is an \(H\)-asymptotic extension of \(K\), with \(\Gamma_{K[i]}=\Gamma\). Consider a linear differential operator \(B=\partial-(g+hi)\) over \(K[i]\). Note that \(g+h\mathrm{i}\in K[i]^{\dagger}\) iff \(g\in K^{\dagger}\) and \(h\mathrm{i}\in K[i]^{\dagger}\), by Lemma 1.2.4. Under further assumptions on \(K\), the next two results give explicit descriptions of \(\psi_{B}\) when \(g\in K^{\dagger}\).
**Proposition 1.5.16**.: _Suppose \(K[i]\) is \(1\)-linearly newtonian and \(g\in K^{\dagger}\). Then:_
1. _if_ \(h\mathrm{i}\in K[\mathrm{i}]^{\dagger}\)_, then for some_ \(\beta\in\Gamma\) _we have_ \[\mathscr{E}^{\rm e}(B)\ =\ \{\beta\},\qquad\psi_{B}(\gamma)\ =\ \psi(\gamma-\beta)\ \text{ for all }\gamma\in\Gamma\setminus\{\beta\};\]
2. _if_ \(h\mathrm{i}\notin K[\mathrm{i}]^{\dagger}\) _and_ \(g=b^{\dagger}\)_,_ \(b\neq 0\)_, then_ \[\mathscr{E}^{\mathrm{e}}(B)\ =\ \emptyset,\qquad\psi_{B}(\gamma)\ =\ \min\bigl{(}\psi(\gamma-vb),vh\bigr{)}\ \ \text{for all }\gamma\in\Gamma.\]
Proof.: As to (i), apply the example following Proposition 1.5.11 to \(K[\mathrm{i}]\), \(B\), \(g+h\mathrm{i}\) in the roles of \(K\), \(A\), \(g\). For (ii), assume \(hi\notin K[\mathrm{i}]^{\dagger}\), \(g=b^{\dagger}\), \(b\neq 0\). Replacing \(B\) by \(B_{\ltimes b}\) we arrange \(g=0\), \(b=1\), \(B=\partial-hi\). Corollary 1.2.17 gives \(K[\mathrm{i}]^{\dagger}=K^{\dagger}\oplus\mathrm{I}(K)i\), so \(h\notin\mathrm{I}(K)\), and thus \(vh\in\Psi^{\downarrow}\). Let \(y\in K[\mathrm{i}]^{\times}\), and take \(z\in K^{\times}\) and \(s\in\mathrm{I}(K)\) with \(y^{\dagger}=z^{\dagger}+s\mathrm{i}\). Then \(vh<vs\), hence
\[v(y^{\dagger}-hi)\ =\ \min\bigl{(}v(z^{\dagger}),v(s-h)\bigr{)}\ =\ \min\bigl{(}v(z^{\dagger}),vs,vh\bigr{)}\ =\ \min\bigl{(}v(y^{\dagger}),vh\bigr{)},\]
where the last equality uses \(v(y^{\dagger})=\min\big{(}v(z^{\dagger}),vs\big{)}\). Thus \(v(y^{\dagger}-hi)\in\Psi^{\downarrow}\) and
\[v\bigl{(}B(y)\bigr{)}-vy\ =\ v(y^{\dagger}-hi)\ =\ \min\bigl{(}v(y^{\dagger}),vh \bigr{)}\ =\ \min\bigl{(}\psi(vy),vh\bigr{)},\]
which gives the desired result.
**Corollary 1.5.17**.: _Suppose \(K\) is \(\omega\)-free and \(g\in K^{\dagger}\), say \(g=b^{\dagger}\) with \(b\neq 0\). Then either for some \(\beta\in\Gamma\) we have \(\mathscr{E}^{\mathrm{e}}(B)=\{\beta\}\) and \(\psi_{B}(\gamma)=\psi(\gamma-\beta)\) for all \(\gamma\in\Gamma\setminus\{\beta\}\), or \(\mathscr{E}^{\mathrm{e}}(B)=\emptyset\) and \(\psi_{B}(\gamma)=\min\bigl{(}\psi(\gamma-vb),vh\bigr{)}\) for all \(\gamma\in\Gamma\)._
Proof.: By [ADH, 14.0.1 and remarks following it] we have an immediate newtonian extension \(L\) of \(K\). Then \(L\) is still a real closed \(H\)-field [ADH, 10.5.8, 3.5.19], and \(L[\mathrm{i}]\) is newtonian [ADH, 14.5.7], so Proposition 1.5.16 applies to \(L\) in place of \(K\).
### Higher-order operators
We begin with the following observation:
**Lemma 1.5.18**.: _Let \(B\in K[\partial]^{\neq}\) and \(\gamma\in\Gamma\). Then \(\mathrm{nwt}_{AB}(\gamma)\geqslant\mathrm{nwt}_{B}(\gamma)\), and_
\[\gamma\notin\mathscr{E}^{\mathrm{e}}(B)\ \Longrightarrow\ \mathrm{nwt}_{AB}( \gamma)\ =\ \mathrm{nwt}_{A}\bigl{(}v_{B}^{\mathrm{e}}(\gamma)\bigr{)}\ \text{and}\ \ v_{AB}^{\mathrm{e}}(\gamma)\ =\ v_{A}^{\mathrm{e}} \bigl{(}v_{B}^{\mathrm{e}}(\gamma)\bigr{)}.\]
Proof.: We have \(\mathrm{nwt}_{AB}(\gamma)=\mathrm{dwt}_{(AB)^{\phi}}(\gamma)\) eventually, and \((AB)^{\phi}=A^{\phi}B^{\phi}\). Hence by [ADH, Section 5.6] and the definition of \(v_{B}^{\mathrm{e}}(\gamma)\) in (1.5.1):
\[\mathrm{nwt}_{AB}(\gamma) \ =\ \mathrm{dwt}_{A^{\phi}}\bigl{(}v_{B^{\phi}}(\gamma)\bigr{)}+ \mathrm{dwt}_{B^{\phi}}(\gamma)\] \[\ =\ \mathrm{dwt}_{A^{\phi}}\bigl{(}v_{B}^{\mathrm{e}}(\gamma)+ \mathrm{nwt}_{B}(\gamma)v\phi\bigr{)}+\mathrm{nwt}_{B}(\gamma),\ \text{eventually},\]
so \(\mathrm{nwt}_{AB}(\gamma)\geqslant\mathrm{nwt}_{B}(\gamma)\). Now suppose \(\gamma\notin\mathscr{E}^{\mathrm{e}}(B)\). Then \(\mathrm{nwt}_{B}(\gamma)=0\), so
\[\mathrm{nwt}_{AB}(\gamma)=\mathrm{dwt}_{A^{\phi}}\bigl{(}v_{B}^{\mathrm{e}}( \gamma)\bigr{)}=\mathrm{nwt}_{A}\bigl{(}v_{B}^{\mathrm{e}}(\gamma)\bigr{)}, \qquad\text{eventually}.\]
Moreover, \(v_{(AB)^{\phi}}=v_{A^{\phi}B^{\phi}}=v_{A^{\phi}}\circ v_{B^{\phi}}\), hence using (1.5.1):
\[v_{(AB)^{\phi}}(\gamma)\ =\ v_{A^{\phi}}\bigl{(}v_{B^{\phi}}(\gamma)\bigr{)}\ =\ v_{A^{\phi}}\bigl{(}v_{B}^{\mathrm{e}}(\gamma)\bigr{)},\ \text{eventually},\]
and thus eventually
\[v_{AB}^{\mathrm{e}}(\gamma) \ =\ v_{(AB)^{\phi}}(\gamma)-\mathrm{nwt}_{AB}(\gamma)v\phi\] \[\ =\ v_{A^{\phi}}\bigl{(}v_{B}^{\mathrm{e}}(\gamma)\bigr{)}- \mathrm{nwt}_{A}\bigl{(}v_{B}^{\mathrm{e}}(\gamma)\bigr{)}v\phi\ =\ v_{A}^{\mathrm{e}}\bigl{(}v_{B}^{\mathrm{e}}(\gamma)\bigr{)}.\qed\]
Lemmas 1.5.6 and 1.5.18 yield:
**Corollary 1.5.19**.: _Let \(B\in K[\partial]^{\neq}\). Then_
\[\mathscr{E}^{\mathrm{e}}(AB)\ =\ (v_{B}^{\mathrm{e}})^{-1}\bigl{(}\mathscr{E}^{ \mathrm{e}}(A)\bigr{)}\cup\mathscr{E}^{\mathrm{e}}(B)\]
_and hence \(|\mathscr{E}^{\mathrm{e}}(AB)|\leqslant|\mathscr{E}^{\mathrm{e}}(A)|+| \mathscr{E}^{\mathrm{e}}(B)|\), with equality if \(v_{B}^{\mathrm{e}}\bigl{(}\Gamma\setminus\mathscr{E}^{\mathrm{e}}(B)\bigr{)}=\Gamma\)._
As an easy consequence we have a variant of Corollary 1.5.5:
**Corollary 1.5.20**.: _If \(A\) splits over \(K\), then \(|\mathscr{E}^{\mathrm{e}}(A)|\leqslant r\)._
To study \(v_{A}^{\rm e}\) in more detail we introduce the function
\[\psi_{A}\,:\ \Gamma\setminus\mathscr{E}^{\rm e}(A)\to\Gamma,\qquad\gamma\mapsto v _{A}^{\rm e}(\gamma)-\gamma.\]
For monic \(A\) of order \(1\) this agrees with \(\psi_{A}\) as defined in Proposition 1.5.11. For \(A=a\) (\(a\neq 0\)) we have \(\mathscr{E}^{\rm e}(A)=\emptyset\) and \(\psi_{A}(\gamma)=va\) for all \(\gamma\in\Gamma\).
**Lemma 1.5.21**.: _Let \(B\in K[\partial]^{\neq}\) and \(\gamma\in\Gamma\setminus\mathscr{E}^{\rm e}(AB)\). Then_
\[\psi_{AB}(\gamma)\ =\ \psi_{A}\big{(}v_{B}^{\rm e}(\gamma)\big{)}+\psi_{B}( \gamma).\]
Proof.: We have \(\gamma\notin\mathscr{E}^{\rm e}(B)\) and \(v_{B}^{\rm e}(\gamma)\notin\mathscr{E}^{\rm e}(A)\) by Corollary 1.5.19, hence
\[\psi_{AB}(\gamma)\ =\ v_{A}^{\rm e}\big{(}v_{B}^{\rm e}(\gamma)\big{)}-\gamma\ =\ v_{B}^{\rm e}(\gamma)+\psi_{A}\big{(}v_{B}^{\rm e}(\gamma)\big{)}-\gamma\ =\ \psi_{A}\big{(}v_{B}^{\rm e}(\gamma)\big{)}+\psi_{B}(\gamma)\]
by Lemma 1.5.18.
Thus for \(a\neq 0\) and \(\gamma\in\Gamma\) we have
\[\psi_{aA}(\gamma)=va+\psi_{A}(\gamma)\ \text{if}\ \gamma\notin\mathscr{E}^{\rm e }(A),\ \ \ \psi_{Aa}(\gamma)=\psi_{A}(va+\gamma)+va\ \text{if}\ \gamma\notin\mathscr{E}^{\rm e}(A)-va.\]
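Both identities are instances of Lemma 1.5.21, applied to the factorizations \(aA=a\cdot A\) and \(Aa=A\cdot a\) and using \(\mathscr{E}^{\rm e}(a)=\emptyset\), \(\psi_{a}(\gamma)=va\), and \(v^{\rm e}_{a}(\gamma)=va+\gamma\).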
_Example_.: Suppose \(K\) has small derivation and \(x\in K\), \(x^{\prime}\asymp 1\). Then \(vx<0\) and \(\mathscr{E}^{\rm e}(\partial^{2})=\{vx,0\}\), and \(\psi_{\partial^{2}}(\gamma)=\psi\big{(}\gamma+\psi(\gamma)\big{)}+\psi(\gamma)\) for \(\gamma\in\Gamma\setminus\mathscr{E}^{\rm e}(\partial^{2})\).
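To verify this example: \(v^{\rm e}_{\partial}(vx)=v(x^{\prime})=0\) by Lemma 1.5.6, and \(vx\) is the only nonzero \(\gamma\) with \(v^{\rm e}_{\partial}(\gamma)=0\), since \(v^{\rm e}_{\partial}\) is strictly increasing on \(\Gamma\setminus\{0\}\); so Corollary 1.5.19 gives \(\mathscr{E}^{\rm e}(\partial^{2})=(v^{\rm e}_{\partial})^{-1}\big{(}\{0\}\big{)}\cup\{0\}=\{vx,0\}\), and Lemma 1.5.21 gives \(\psi_{\partial^{2}}(\gamma)=\psi_{\partial}\big{(}v^{\rm e}_{\partial}(\gamma)\big{)}+\psi_{\partial}(\gamma)=\psi\big{(}\gamma+\psi(\gamma)\big{)}+\psi(\gamma)\).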
**Lemma 1.5.22**.: _Suppose \(\psi_{A}\) is slowly varying. Let \(\Delta\) be a convex subgroup of \(\Gamma\) and let \(y,z\in K^{\times}\) be such that \(vy,vz\notin\mathscr{E}^{\rm e}(A)\). Then_
\[v_{\Delta}(y)<v_{\Delta}(z)\ \Longleftrightarrow\ v_{\Delta}\big{(}A(y) \big{)}<v_{\Delta}\big{(}A(z)\big{)}.\]
Proof.: By Lemma 1.5.6 we have
\[v\big{(}A(y)\big{)}-v\big{(}A(z)\big{)}\ =\ v_{A}^{\rm e}(vy)-v_{A}^{\rm e}(vz)\ =\ vy-vz+\psi_{A}(vy)-\psi_{A}(vz)\]
and \(\psi_{A}(vy)-\psi_{A}(vz)=o(vy-vz)\) if \(vy\neq vz\).
Call \(A\)**asymptotically surjective** if \(v_{A}^{\rm e}\big{(}\Gamma\setminus\mathscr{E}^{\rm e}(A)\big{)}=\Gamma\) and \(\psi_{A}\) is slowly varying. If \(A\) is asymptotically surjective, then so are \(aA\) and \(Aa\) for \(a\neq 0\), and if \(A\) has order \(0\), then \(A\) is asymptotically surjective. If \(K\) is \(\lambda\)-free and \(A\) has order \(1\), then \(A\) is asymptotically surjective, thanks to Proposition 1.5.11 and Lemma 1.5.12.
The next lemma has an obvious proof.
**Lemma 1.5.23**.: _Let \(G\) be an ordered abelian group and \(U,V\subseteq G\). If \(\eta_{1},\eta_{2}\colon U\to G\) are slowly varying, then so is \(\eta_{1}+\eta_{2}\). If \(\eta\colon U\to G\) and \(\zeta\colon V\to G\) are slowly varying and \(\gamma+\zeta(\gamma)\in U\) for all \(\gamma\in V\), then the function \(\gamma\mapsto\eta\big{(}\gamma+\zeta(\gamma)\big{)}\colon V\to G\) is also slowly varying._
**Lemma 1.5.24**.: _If \(A\) and \(B\in K[\partial]^{\neq}\) are asymptotically surjective, then so is \(AB\)._
Proof.: Let \(A\), \(B\) be asymptotically surjective and \(\gamma\in\Gamma\). This gives \(\alpha\in\Gamma\setminus\mathscr{E}^{\rm e}(A)\) with \(v_{A}^{\rm e}(\alpha)=\gamma\) and \(\beta\in\Gamma\setminus\mathscr{E}^{\rm e}(B)\) with \(v_{B}^{\rm e}(\beta)=\alpha\). Then \(\beta\notin\mathscr{E}^{\rm e}(AB)\) by Corollary 1.5.19, and \(v_{AB}^{\rm e}(\beta)=\gamma\) by Lemma 1.5.18. Moreover, \(\psi_{AB}\) is slowly varying by Lemmas 1.5.21 and 1.5.23.
A straightforward induction on \(r\) using this lemma yields:
**Corollary 1.5.25**.: _If \(K\) is \(\lambda\)-free and \(A\) splits over \(K\), then \(A\) is asymptotically surjective._
We can now add to Lemma 1.5.6:
**Corollary 1.5.26**.: _Suppose \(K\) is \(\omega\)-free. Then \(A\) is asymptotically surjective._
Proof.: By the second part of Lemma 1.5.6 it is enough to show that \(\psi_{A}\) is slowly varying. For this we may replace \(K\) by any \(\omega\)-free \(H\)-asymptotic extension \(L\) of \(K\) with \(\Psi\) cofinal in \(\Psi_{L}\). Thus we can arrange by [ADH, 14.5.7, remarks following 14.0.1] that \(K\) is newtonian, and by passing to the algebraic closure, algebraically closed. Then \(A\) splits over \(K\) by [ADH, 5.8.9, 14.5.3], and so \(A\) is asymptotically surjective by Corollary 1.5.25.
### 1.6. Special Elements
Let \(K\) be a valued field and let \(\widehat{a}\) be an element of an immediate extension of \(K\) with \(\widehat{a}\notin K\). Recall that
\[v(\widehat{a}-K)\ =\ \bigl{\{}v(\widehat{a}-a):a\in K\bigr{\}}\]
is a nonempty downward closed subset of \(\Gamma:=v(K^{\times})\) without a largest element. Call \(\widehat{a}\)_special_ over \(K\) if some nontrivial convex subgroup of \(\Gamma\) is cofinal in \(v(\widehat{a}-K)\) [ADH, p. 167]. In this case \(v(\widehat{a}-K)\cap\Gamma^{>}\neq\emptyset\), and there is a unique such nontrivial convex subgroup \(\Delta\) of \(\Gamma\), namely
\[\Delta\ =\ \bigl{\{}\delta\in\Gamma:\,|\delta|\in v(\widehat{a}-K)\bigr{\}}.\]
We also call \(\widehat{a}\)_almost special_ over \(K\) if \(\widehat{a}/\mathfrak{m}\) is special over \(K\) for some \(\mathfrak{m}\in K^{\times}\).
If \(\Gamma\neq\{0\}\) is archimedean, then \(\widehat{a}\) is special over \(K\) iff \(v(\widehat{a}-K)=\Gamma\), iff \(\widehat{a}\) is the limit of a divergent c-sequence in \(K\). (Recall that "c-sequence" abbreviates "Cauchy sequence" [ADH, p. 82].) In the next lemma \(a\) ranges over \(K\) and \(\mathfrak{m}\), \(\mathfrak{n}\) over \(K^{\times}\).
**Lemma 1.6.1**.: _Suppose \(\widehat{a}\prec\mathfrak{m}\) and \(\widehat{a}/\mathfrak{m}\) is special over \(K\). Then for all \(a\), \(\mathfrak{n}\), if \(\widehat{a}-a\prec\mathfrak{n}\prec\mathfrak{m}\), then \((\widehat{a}-a)/\mathfrak{n}\) is special over \(K\)._
Proof.: Replacing \(\widehat{a}\), \(a\), \(\mathfrak{m}\), \(\mathfrak{n}\) by \(\widehat{a}/\mathfrak{m}\), \(a/\mathfrak{m}\), \(1\), \(\mathfrak{n}/\mathfrak{m}\), respectively, we arrange \(\mathfrak{m}=1\). So let \(\widehat{a}\) be special over \(K\) with \(\widehat{a}\prec 1\). It is enough to show: (1) \(\widehat{a}-a\) is special over \(K\), for all \(a\); (2) for all \(\mathfrak{n}\), if \(\widehat{a}\prec\mathfrak{n}\prec 1\), then \(\widehat{a}/\mathfrak{n}\) is special over \(K\). Here (1) follows from \(v(\widehat{a}-a-K)=v(\widehat{a}-K)\). For (2), note that if \(\widehat{a}\prec\mathfrak{n}\preccurlyeq 1\), then \(v\mathfrak{n}\in\Delta\) with \(\Delta\) as above, and so \(v(\widehat{a}/\mathfrak{n}-K)=v(\widehat{a}-K)-v\mathfrak{n}=v(\widehat{a}-K)\).
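For a concrete instance of specialness (a standard valued-field illustration, not needed in what follows): let \(K=\mathbb{Q}(t)\subseteq\mathbb{Q}((t))\) carry the \(t\)-adic valuation, so \(\Gamma=\mathbb{Z}\), and set \(\widehat{a}:=\sum_{n\geqslant 1}t^{n!}\in\mathbb{Q}((t))\). Then \(\widehat{a}\notin K\) (the coefficient support \(\{n!:n\geqslant 1\}\) is not eventually periodic, as it would have to be for a rational function), and the truncations \(a_{m}:=\sum_{n\leqslant m}t^{n!}\in K\) satisfy \(v(\widehat{a}-a_{m})=(m+1)!\). Hence \(v(\widehat{a}-K)=\Gamma\), and \(\widehat{a}\) is special over \(K\) with \(\Delta=\Gamma\), in accordance with the archimedean criterion above.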
The remainder of this section is devoted to showing that (almost) special elements arise naturally in the analysis of certain immediate d-algebraic extensions of valued differential fields. We first treat the case of asymptotic fields with small derivation, and then focus on the linearly newtonian \(H\)-asymptotic case.
We recall some notation: for an ordered abelian group \(\Gamma\) and \(\alpha\in\Gamma_{\infty}\), \(\beta\in\Gamma\), \(\gamma\in\Gamma^{>}\) we mean by "\(\alpha\geqslant\beta+o(\gamma)\)" that \(\alpha\geqslant\beta-(1/n)\gamma\) for all \(n\geqslant 1\), while "\(\alpha<\beta+o(\gamma)\)" is its negation, that is, \(\alpha<\beta-(1/n)\gamma\) for some \(n\geqslant 1\); see [ADH, p. 312]. Here and later inequalities are in the sense of the ordered divisible hull \(\mathbb{Q}\Gamma\) of the relevant \(\Gamma\).
### A source of special elements
We recall that a differential field \(F\) is said to be \(r\)-linearly surjective (\(r\in\mathbb{N}\)) if \(A(F)=F\) for every nonzero \(A\in F[\partial]\) of order at most \(r\). _In this subsection \(K\) is an asymptotic field with small derivation, value group \(\Gamma=v(K^{\times})\neq\{0\}\), and differential residue field \(\boldsymbol{k}\); we also let \(r\in\mathbb{N}^{\geqslant 1}\)._ Below we use the notion _neatly surjective_ from [ADH, 5.6]: \(A\in K[\partial]^{\neq}\) is neatly surjective iff for all \(b\in K^{\times}\) there exists \(a\in K^{\times}\) such that \(A(a)=b\) and \(v_{A}(va)=vb\). We often let \(\widehat{f}\) be an element in an immediate asymptotic extension \(\widehat{K}\) of \(K\), but in the statement of the next lemma we take \(\widehat{f}\in K\):
**Lemma 1.6.2**.: _Assume \(\boldsymbol{k}\) is \(r\)-linearly surjective, \(A\in K[\partial]^{\neq}\) of order \(\leqslant r\) is neatly surjective, \(\gamma\in\mathbb{Q}\Gamma\), \(\gamma>0\), \(\widehat{f}\in K^{\times}\), and \(v\big{(}A(\widehat{f})\big{)}\geqslant v(A\widehat{f})+\gamma\). Then \(A(f)=0\) and \(v(\widehat{f}-f)\geqslant v(\widehat{f})+\gamma+o(\gamma)\) for some \(f\in K\)._
Proof.: Set \(B:=g^{-1}A\widehat{f}\), where we take \(g\in K^{\times}\) such that \(vg=v(A\widehat{f})\). Then \(B\asymp 1\), \(B\) is still neatly surjective, and \(B(1)=g^{-1}A(\widehat{f})\), \(v\big{(}B(1)\big{)}\geqslant\gamma\). It suffices to find \(y\in K\) such that \(B(y)=0\) and \(v(y-1)\geqslant\gamma+o(\gamma)\), because then \(f:=\widehat{f}y\) has the desired property. If \(B(1)=0\), then \(y=1\) works, so assume \(B(1)\neq 0\). By [ADH, 7.2.7] we have an immediate extension \(\widehat{K}\) of \(K\) that is \(r\)-differential henselian. Then \(\widehat{K}\) is asymptotic by [ADH, 9.4.2 and 9.4.5]. Set \(R(Z):=\operatorname{Ri}(B)\in K\{Z\}\). Then the proof of [ADH, 7.5.1] applied to \(\widehat{K}\) and \(B\) in the roles of \(K\) and \(A\) yields \(z\prec 1\) in \(\widehat{K}\) with \(R(z)=0\). Now \(R(0)=B(1)\), hence by [ADH, 7.2.2] we can take such \(z\) with \(v(z)\geqslant\beta+o(\beta)\) where \(\beta:=v\big{(}B(1)\big{)}\geqslant\gamma\). As in the proof of [ADH, 7.5.1] we next take \(y\in\widehat{K}\) with \(v(y-1)>0\) and \(y^{\dagger}=z\) to get \(B(y)=0\), and observe that then \(v(y-1)\geqslant\beta+o(\beta)\), by [ADH, 9.2.10(iv)], hence \(v(y-1)\geqslant\gamma+o(\gamma)\). It remains to note that \(y\in K\) by [ADH, 7.5.7].
By a remark following the proof of [ADH, 7.5.1] the assumption that \(\boldsymbol{k}\) is \(r\)-linearly surjective in the lemma above can be replaced for \(r\geqslant 2\) by the assumption that \(\boldsymbol{k}\) is \((r-1)\)-linearly surjective.
Next we establish a version of the above with \(\widehat{f}\) in an immediate asymptotic extension of \(K\). Recall that an asymptotic extension of \(K\) with the same value group as \(K\) has small derivation, by [ADH, 9.4.1].
**Lemma 1.6.3**.: _Assume \(\boldsymbol{k}\) is \(r\)-linearly surjective, \(A\in K[\partial]^{\neq}\) of order \(\leqslant r\) is neatly surjective, \(\gamma\in\mathbb{Q}\Gamma\), \(\gamma>0\), \(\widehat{K}\) is an immediate asymptotic extension of \(K\), \(\widehat{f}\in\widehat{K}^{\times}\), and \(v\big{(}A(\widehat{f})\big{)}\geqslant v(A\widehat{f})+\gamma\). Then for some \(f\in K\) we have_
\[A(f)\ =\ 0,\qquad v(\widehat{f}-f)\ \geqslant\ v(\widehat{f})+\gamma+o( \gamma).\]
Proof.: By extending \(\widehat{K}\) we can arrange that \(\widehat{K}\) is \(r\)-differential henselian, so \(A\) remains neatly surjective as an element of \(\widehat{K}[\![\partial]\!]\), by [ADH, 7.1.8]. Then by Lemma 1.6.2 with \(\widehat{K}\) in the role of \(K\) we get \(f\in\widehat{K}\) such that \(A(f)=0\) and \(v(\widehat{f}-f)\geqslant v(\widehat{f})+\gamma+o(\gamma)\). It remains to note that \(f\in K\) by [ADH, 7.5.7].
We actually need an inhomogeneous variant of the above:
**Lemma 1.6.4**.: _Assume \(\boldsymbol{k}\) is \(r\)-linearly surjective, \(A\in K[\partial]^{\neq}\) of order \(\leqslant r\) is neatly surjective, \(b\in K\), \(\gamma\in\mathbb{Q}\Gamma\), \(\gamma>0\), \(v(A)=o(\gamma)\), \(v(b)\geqslant o(\gamma)\), \(\widehat{K}\) is an immediate asymptotic extension of \(K\), \(\widehat{f}\in\widehat{K}\), \(\widehat{f}\preccurlyeq 1\), and \(v\big{(}A(\widehat{f})-b\big{)}\geqslant\gamma+o(\gamma)\). Then_
\[A(f)\ =\ b,\qquad v(\widehat{f}-f)\ \geqslant\ (1/2)\gamma+o(\gamma)\]
_for some \(f\in K\)._
Proof.: Take \(y\in K\) with \(A(y)=b\) and \(v(y)\geqslant o(\gamma)\). Then \(A(\widehat{g})=A(\widehat{f})-b\) for \(\widehat{g}:=\widehat{f}-y\), so \(v\big{(}A(\widehat{g})\big{)}\geqslant\gamma+o(\gamma)\) and \(v(\widehat{g})\geqslant o(\gamma)\). We distinguish two cases:
(1) \(v(\widehat{g})\geqslant(1/2)\gamma+o(\gamma)\). Then \(v(\widehat{f}-y)\geqslant(1/2)\gamma+o(\gamma)\), so \(f:=y\) works.
(2) \(v(\widehat{g})<(1/2)\gamma+o(\gamma)\). Then by [ADH, 6.1.3],
\[v(A\widehat{g})\ <\ (1/2)\gamma+o(\gamma),\qquad v\big{(}A(\widehat{g})\big{)}\ \geqslant\ \gamma+o(\gamma),\]
so \(v\big{(}A(\widehat{g})\big{)}\geqslant v(A\widehat{g})+(1/2)\gamma\). Then Lemma 1.6.3 gives an element \(g\in K\) such that \(A(g)=0\) and \(v(\widehat{g}-g)\geqslant(1/2)\gamma+o(\gamma)\). Hence \(f:=y+g\) works.
Recall from [ADH, 7.2] that \(\mathcal{O}\) is said to be \(r\)_-linearly surjective_ if for every \(A\) in \(K[\partial]^{\neq}\) of order \(r\) with \(v(A)=0\) there exists \(y\in\mathcal{O}\) with \(A(y)=1\).
**Proposition 1.6.5**.: _Assume \(\mathcal{O}\) is \(r\)-linearly surjective, \(P\in K\{Y\}\), \(\operatorname{order}(P)\leqslant r\), \(\operatorname{ddeg}P=1\), and \(P(\widehat{a})=0\), where \(\widehat{a}\preccurlyeq 1\) lies in an immediate asymptotic extension of \(K\) and \(\widehat{a}\notin K\). Then \(\widehat{a}\) is special over \(K\)._
Proof.: The hypothesis on \(\mathcal{O}\) yields: \(\boldsymbol{k}\) is \(r\)-linearly surjective and all \(A\in K[\partial]^{\neq}\) of order \(\leqslant r\) are neatly surjective. Let \(0<\gamma\in v(\widehat{a}-K)\); we claim that \(v(\widehat{a}-K)\) has an element \(\geqslant(4/3)\gamma\). We arrange \(P\asymp 1\). Take \(a\in K\) with \(v(\widehat{a}-a)=\gamma\). Then \(P_{+a}\asymp 1\), \(\operatorname{ddeg}P_{+a}=1\), so
\[P_{+a,1}\ \asymp\ 1,\quad P_{+a,>1}\ \prec\ 1,\quad P_{+a}\ =\ P(a)+P_{+a,1}+P_{+a, >1}\]
and
\[0\ =\ P(\widehat{a})\ =\ P_{+a}(\widehat{a}-a)\ =\ P(a)+P_{+a,1}(\widehat{a}-a)+P_ {+a,>1}(\widehat{a}-a),\]
with
\[v\big{(}P_{+a,1}(\widehat{a}-a)+P_{+a,>1}(\widehat{a}-a)\big{)}\geqslant\gamma +o(\gamma),\]
and thus \(v(P(a))\geqslant\gamma+o(\gamma)\). Take \(g\in K^{\times}\) with \(vg=\gamma\) and set \(Q:=g^{-1}P_{+a,\times g}\), so \(Q=Q_{0}+Q_{1}+Q_{>1}\) with
\[Q_{0}\ =\ Q(0)\ =\ g^{-1}P(a),\quad Q_{1}\ =\ g^{-1}(P_{+a,1})_{\times g},\quad Q _{>1}\ =\ g^{-1}(P_{+a,>1})_{\times g},\]
hence
\[v(Q_{0})\geqslant o(\gamma),\quad v(Q_{1})=o(\gamma),\quad v(Q_{>1})\geqslant \gamma+o(\gamma).\]
We set \(\widehat{f}:=g^{-1}(\widehat{a}-a)\), so \(Q(\widehat{f})=0\) and \(\widehat{f}\asymp 1\), and \(A:=L_{Q}\in K[\partial]\). Then \(Q(\widehat{f})=0\) gives
\[Q_{0}+A(\widehat{f})\ =\ Q_{0}+Q_{1}(\widehat{f})\ =\ -Q_{>1}(\widehat{f}),\ \ \text{with}\ v\big{(}Q_{>1}(\widehat{f})\big{)}\ \geqslant\ \gamma+o(\gamma),\]
so \(v\big{(}Q_{0}+A(\widehat{f})\big{)}\geqslant\gamma+o(\gamma)\). Since \(v(A)=v(Q_{1})=o(\gamma)\), Lemma 1.6.4 then gives \(f\in K\) with \(v(\widehat{f}-f)\geqslant(1/3)\gamma\). In view of \(\widehat{a}-a=g\widehat{f}\), this yields
\[v\big{(}\widehat{a}-(a+gf)\big{)}\ =\ \gamma+v(\widehat{f}-f)\ \geqslant\ (4/3)\gamma,\]
which proves our claim. It gives the desired result: \(v(\widehat{a})\geqslant 0\) lies in \(v(\widehat{a}-K)\) and \(v(\widehat{a}-K)\) has no largest element, so \(v(\widehat{a}-K)\cap\Gamma^{>}\neq\emptyset\); by the claim, the convex subgroup of \(\Gamma\) generated by \(v(\widehat{a}-K)\cap\Gamma^{>}\) is then cofinal in \(v(\widehat{a}-K)\), and hence \(\widehat{a}\) is special over \(K\).
### A source of almost special elements
_In this subsection_ \(K\)_,_ \(\Gamma\)_,_ \(\boldsymbol{k}\)_, and_ \(r\) _are as in the previous subsection, and we assume that_ \(\mathcal{O}\) _is_ \(r\)_-linearly surjective._ (So_ \(\boldsymbol{k}\) _is_ \(r\)_-linearly surjective, and_ \(\sup\Psi=0\) _by_ _[_ADH, 9.4.2_]__.) Let_ \(\widehat{a}\) _be an element in an immediate asymptotic extension of_ \(K\) _such that_ \(\widehat{a}\notin K\) _and_ \(K\langle\widehat{a}\rangle\) _has transcendence degree_ \(\leqslant r\) _over_ \(K\)_. We shall use Proposition_ 1.6.5 _to show:_
**Proposition 1.6.6**.: _If \(\Gamma\) is divisible, then \(\widehat{a}\) is almost special over \(K\)._
Towards the proof we first note that \(\widehat{a}\) has a minimal annihilator \(P(Y)\) over \(K\) of order \(\leqslant r\). We also fix a divergent pc-sequence \((a_{\rho})\) in \(K\) such that \(a_{\rho}\rightsquigarrow\widehat{a}\). We next show how to improve \(\widehat{a}\) and \(P\) (without assuming divisibility of \(\Gamma\)):
**Lemma 1.6.7**.: _For some \(\widehat{b}\) in an immediate asymptotic extension of \(K\) we have:_
1. \(v(\widehat{a}-K)=v(\widehat{b}-K)\)_;_
2. \((a_{\rho})\) _has a minimal differential polynomial_ \(Q\) _over_ \(K\) _of order_ \(\leqslant r\) _such that_ \(Q\) _is also a minimal annihilator of_ \(\widehat{b}\) _over_ \(K\)_._
Proof.: By [ADH, 6.8.1, 6.9.2], \((a_{\rho})\) is of d-algebraic type over \(K\) with a minimal differential polynomial \(Q(Y)\) over \(K\) such that \(\operatorname{order}Q\leqslant\operatorname{order}P\leqslant r\). By [ADH, 6.9.3, 9.4.5] this gives an element \(\widehat{b}\) in an immediate asymptotic extension of \(K\) such that \(Q\) is a minimal annihilator of \(\widehat{b}\) over \(K\) and \(a_{\rho}\rightsquigarrow\widehat{b}\). Then \(Q\) and \(\widehat{b}\) have the desired properties.
Proof of Proposition 1.6.6.: Replace \(\widehat{a}\) and \(P\) by \(\widehat{b}\) and \(Q\) from Lemma 1.6.7 (and rename) to arrange that \(P\) is a minimal differential polynomial of \((a_{\rho})\) over \(K\). Now assuming \(\Gamma\) is divisible, [160, Proposition 3.1] gives \(a\in K\) and \(g\in K^{\times}\) such that \(\widehat{a}-a\asymp g\) and \(\operatorname{ddeg}P_{+a,\times g}=1\).
Set \(F:=P_{+a,\times g}\) and \(\widehat{f}:=(\widehat{a}-a)/g\). Then \(\operatorname{ddeg}F=1\), \(F(\widehat{f})=0\), and \(\widehat{f}\preccurlyeq 1\). Applying Proposition 1.6.5 to \(F\) and \(\widehat{f}\) in the role of \(P\) and \(\widehat{a}\) yields a nontrivial convex subgroup \(\Delta\) of \(\Gamma\) that is cofinal in \(v(\widehat{f}-K)\). Setting \(\alpha:=vg\), it follows that \(\alpha+\Delta\) is cofinal in \(v\big{(}(\widehat{a}-a)-K\big{)}=v(\widehat{a}-K)\).
We can trade the divisibility assumption in Proposition 1.6.6 against a stronger hypothesis on \(K\), the proof using [160, 3.3] instead of [160, 3.1]:
**Corollary 1.6.8**.: _If \(K\) is henselian and \(\boldsymbol{k}\) is linearly surjective, then \(\widehat{a}\) is almost special over \(K\)._
**The linearly newtonian setting.**: _In this subsection \(K\) is an \(\omega\)-free \(r\)-linearly newtonian \(H\)-asymptotic field, \(r\geqslant 1\)._ Thus \(K\) is d-valued by Lemma 1.2.9. We let \(\phi\) range over the elements active in \(K\). We now mimic the material in the previous two subsections. Note that for \(A\in K[\partial]^{\neq}\) and any element \(\widehat{f}\) in an asymptotic extension of \(K\) we have \(A(\widehat{f})\preccurlyeq A^{\phi}\widehat{f}\), since \(A(\widehat{f})=A^{\phi}(\widehat{f})\).
**Lemma 1.6.9**.: _Assume that \(A\in K[\partial]^{\neq}\) has order \(\leqslant r\), \(\gamma\in\mathbb{Q}\Gamma\), \(\gamma>0\), \(\widehat{f}\in K^{\times}\), and \(v\big{(}A(\widehat{f})\big{)}\geqslant v(A^{\phi}\widehat{f})+\gamma\), eventually. Then there exists an \(f\in K\) such that \(A(f)=0\) and \(v(\widehat{f}-f)\geqslant v(\widehat{f})+\gamma+o(\gamma)\)._
Proof.: Take \(\phi\) such that \(v\phi\geqslant\gamma^{\dagger}\) and \(v\big{(}A(\widehat{f})\big{)}\geqslant v(A^{\phi}\widehat{f})+\gamma\). Next, take \(\beta\in\Gamma\) such that \(\beta\geqslant\gamma\) and \(v\big{(}A(\widehat{f})\big{)}\geqslant v(A^{\phi}\widehat{f})+\beta\). Then \(v\phi\geqslant\beta^{\dagger}\), so \(\beta>\Gamma^{\flat}_{\phi}\), hence the valuation ring of the flattening \((K^{\phi},v^{\flat}_{\phi})\) is \(r\)-linearly surjective, by [ADH, 14.2.1]. We now apply Lemma 1.6.2 to
\[(K^{\phi},v^{\flat}_{\phi}),\quad A^{\phi},\quad\dot{\beta}:=\beta+\Gamma^{ \flat}_{\phi}\]
in the role of \(K\), \(A\), \(\gamma\) to give \(f\in K\) with \(A(f)=0\) and \(v^{\flat}_{\phi}(\widehat{f}-f)\geqslant v^{\flat}_{\phi}(\widehat{f})+\dot{ \beta}+o(\dot{\beta})\). Then also \(v(\widehat{f}-f)\geqslant v(\widehat{f})+\beta+o(\beta)\), and thus \(v(\widehat{f}-f)\geqslant v(\widehat{f})+\gamma+o(\gamma)\).
**Lemma 1.6.10**.: _Assume \(A\in K[\partial]^{\neq}\) has order \(\leqslant r\), \(\widehat{K}\) is an immediate \(\mathrm{d}\)-algebraic asymptotic extension of \(K\), \(\gamma\in\mathbb{Q}\Gamma\), \(\gamma>0\), \(\widehat{f}\in\widehat{K}^{\times}\), and \(v\big{(}A(\widehat{f})\big{)}\geqslant v(A^{\phi}\widehat{f})+\gamma\) eventually. Then \(A(f)=0\) and \(v(\widehat{f}-f)\geqslant v(\widehat{f})+\gamma+o(\gamma)\) for some \(f\in K\)._
Proof.: Since \(K\) is \(\omega\)-free, so is \(\widehat{K}\) by Theorem 1.4.1. By [ADH, 14.0.1 and subsequent remarks] we can extend \(\widehat{K}\) to arrange that \(\widehat{K}\) is also newtonian. Then by Lemma 1.6.9 with \(\widehat{K}\) in the role of \(K\) we get \(f\in\widehat{K}\) with \(A(f)=0\) and \(v(\widehat{f}-f)\geqslant v(\widehat{f})+\gamma+o(\gamma)\). Now use that \(f\in K\) by [ADH, line before 14.2.10].
**Lemma 1.6.11**.: _Assume \(A\in K[\partial]^{\neq}\) has order \(\leqslant r\), \(b\in K\), \(\gamma\in\mathbb{Q}\Gamma\), \(\gamma>0\), \(\widehat{K}\) is an immediate \(\mathrm{d}\)-algebraic asymptotic extension of \(K\), and \(\widehat{f}\in\widehat{K}\), \(v(\widehat{f})\geqslant o(\gamma)\). Assume also that eventually \(v(b)\geqslant v(A^{\phi})+o(\gamma)\) and \(v\big{(}A(\widehat{f})-b\big{)}\geqslant v(A^{\phi})+\gamma+o(\gamma)\). Then for some \(f\in K\) we have \(A(f)=b\) and \(v(\widehat{f}-f)\geqslant(1/2)\gamma+o(\gamma)\)._
Proof.: We take \(y\in K\) with \(A(y)=b\) as follows: If \(b=0\), then \(y:=0\). If \(b\neq 0\), then Corollary 1.5.7 yields \(y\in K^{\times}\) such that \(A(y)=b\), \(vy\notin\mathscr{E}^{\mathrm{e}}(A)\), and \(v^{\mathrm{e}}_{A}(vy)=vb\). In any case, \(vy\geqslant o(\gamma)\): when \(b\neq 0\), the sentence preceding [ADH, 14.2.7] gives \(v_{A^{\phi}}(vy)=vb\), eventually, to which we apply [ADH, 6.1.3].
Now \(A(\widehat{g})=A(\widehat{f})-b\) for \(\widehat{g}:=\widehat{f}-y\), so \(v(\widehat{g})\geqslant o(\gamma)\), and eventually \(v\big{(}A(\widehat{g})\big{)}\geqslant v(A^{\phi})+\gamma+o(\gamma)\). We distinguish two cases:
(1) \(v(\widehat{g})\geqslant(1/2)\gamma+o(\gamma)\). Then \(v(\widehat{f}-y)\geqslant(1/2)\gamma+o(\gamma)\), so \(f:=y\) works.
(2) \(v(\widehat{g})<(1/2)\gamma+o(\gamma)\). Then by [ADH, 6.1.3] we have eventually
\[v(A^{\phi}\widehat{g})\ <\ v(A^{\phi})+(1/2)\gamma+o(\gamma),\qquad v\big{(}A( \widehat{g})\big{)}\ \geqslant\ v(A^{\phi})+\gamma+o(\gamma),\]
so \(v(A(\widehat{g}))\geqslant v(A^{\phi}\widehat{g})+(1/2)\gamma\), eventually. Lemma 1.6.10 gives an element \(g\in K\) with \(A(g)=0\) and \(v(\widehat{g}-g)\geqslant(1/2)\gamma+o(\gamma)\). Hence \(f:=y+g\) works.
**Proposition 1.6.12**.: _Suppose that \(P\in K\{Y\}\), \(\mathrm{order}\,P\leqslant r\), \(\mathrm{ndeg}\,P=1\), and \(P(\widehat{a})=0\), where \(\widehat{a}\preccurlyeq 1\) lies in an immediate asymptotic extension of \(K\) and \(\widehat{a}\notin K\). Then \(\widehat{a}\) is special over \(K\)._
The proof is like that of Proposition 1.6.5, but there are some differences that call for further details.
Proof.: Given \(0<\gamma\in v(\widehat{a}-K)\), we claim that \(v(\widehat{a}-K)\) has an element \(\geqslant(4/3)\gamma\). Take \(a\in K\) with \(v(\widehat{a}-a)=\gamma\). Then \(\mathrm{ndeg}\,P_{+a}=1\) by [ADH, 11.2.3(i)], so eventually we have
\[P(a)\ \prec\ P^{\phi}_{+a,1}\ \succ\ P^{\phi}_{+a,>1},\quad P^{\phi}_{+a}\ =\ P(a)+P^{\phi}_{+a,1}+P^{\phi}_{+a,>1}\]
and
\[0\ =\ P(\widehat{a}) =\ P^{\phi}_{+a}(\widehat{a}-a)\] \[=\ P(a)+P^{\phi}_{+a,1}(\widehat{a}-a)+P^{\phi}_{+a,>1}(\widehat{a }-a),\] \[\qquad v\big{(}P^{\phi}_{+a,1}(\widehat{a}-a)+P^{\phi}_{+a,>1}( \widehat{a}-a)\big{)}\ \geqslant\ v(P^{\phi}_{+a,1})+\gamma+o(\gamma),\]
and thus eventually \(v\big{(}P(a)\big{)}\ \geqslant\ v(P^{\phi}_{+a,1})+\gamma+o(\gamma)\). Take \(g\in K^{\times}\) with \(vg=\gamma\) and set \(Q:=g^{-1}P_{+a,\times g}\), so \(Q=Q_{0}+Q_{1}+Q_{>1}\) with
\[Q_{0}\ =\ Q(0)\ =\ g^{-1}P(a),\quad Q_{1}\ =\ g^{-1}(P_{+a,1})_{\times g},\quad Q _{>1}\ =\ g^{-1}(P_{+a,>1})_{\times g}.\]
Then \(v(Q_{0})=v\big{(}P(a)\big{)}-\gamma\geqslant v(P^{\phi}_{+a,1})+o(\gamma)\), eventually. By [ADH, 6.1.3],
\[v(Q_{1}^{\phi})\ =\ v(P^{\phi}_{+a,1})+o(\gamma),\qquad v(Q_{>1}^{\phi})\ \geqslant\ v(P^{\phi}_{+a,>1})+\gamma+o(\gamma)\]
for all \(\phi\). Since \(P^{\phi}_{+a,>1}\preccurlyeq P^{\phi}_{+a,1}\), eventually, the last two displayed inequalities give \(v(Q^{\phi}_{>1})\geqslant v(Q_{1}^{\phi})+\gamma+o(\gamma)\), eventually. We set \(\widehat{f}:=g^{-1}(\widehat{a}-a)\), so \(Q(\widehat{f})=0\) and \(\widehat{f}\asymp 1\). Set \(A:=L_{Q}\in K[\partial]\). Then \(Q(\widehat{f})=0\) gives
\[Q_{0}+A(\widehat{f})\ =\ Q_{0}+Q_{1}(\widehat{f})\ =\ -Q_{>1}^{\phi}(\widehat{f}),\]
with \(v\big{(}Q_{>1}^{\phi}(\widehat{f})\big{)}\geqslant v(Q_{1}^{\phi})+\gamma+o(\gamma)\), eventually, so
\[v\big{(}Q_{0}+A(\widehat{f})\big{)}\ \geqslant\ v(A^{\phi})+\gamma+o(\gamma), \quad\text{eventually}.\]
Moreover, \(v(Q_{0})\geqslant v(A^{\phi})+o(\gamma)\), eventually. Lemma 1.6.11 then gives \(f\in K\) with \(v(\widehat{f}-f)\geqslant(1/3)\gamma\). In view of \(\widehat{a}-a=g\widehat{f}\), this yields
\[v\big{(}\widehat{a}-(a+gf)\big{)}\ =\ \gamma+v(\widehat{f}-f)\ \geqslant\ (4/3)\gamma,\]
which proves our claim.
In the rest of this subsection we assume that \(\widehat{a}\notin K\) lies in an immediate asymptotic extension of \(K\) and \(K\langle\widehat{a}\rangle\) has transcendence degree \(\leqslant r\) over \(K\).
**Proposition 1.6.13**.: _If \(\Gamma\) is divisible, then \(\widehat{a}\) is almost special over \(K\)._
Towards the proof, we fix a minimal annihilator \(P(Y)\) of \(\widehat{a}\) over \(K\), so \(\operatorname{order}P\leqslant r\). We also fix a divergent pc-sequence \((a_{\rho})\) in \(K\) such that \(a_{\rho}\rightsquigarrow\widehat{a}\). We next show how to improve \(\widehat{a}\) and \(P\) if necessary:
**Lemma 1.6.14**.: _For some \(\widehat{b}\) in an immediate asymptotic extension of \(K\) we have:_
1. \(v(\widehat{a}-a)=v(\widehat{b}-a)\) _for all_ \(a\in K\)_;_
2. \((a_{\rho})\) _has a minimal differential polynomial_ \(Q\) _over_ \(K\) _of order_ \(\leqslant r\) _such that_ \(Q\) _is also a minimal annihilator of_ \(\widehat{b}\) _over_ \(K\)_._
Proof.: By the remarks following the proof of [ADH, 11.4.3] we have \(P\in Z(K,\widehat{a})\). Take \(Q\in Z(K,\widehat{a})\) of minimal complexity. Then \(\operatorname{order}Q\leqslant\operatorname{order}P\leqslant r\), and \(Q\) is a minimal differential polynomial of \((a_{\rho})\) over \(K\) by [ADH, 11.4.13]. By [ADH, 11.4.8 and its proof] this gives an element \(\widehat{b}\) in an immediate asymptotic extension of \(K\) such that (i) holds and \(Q\) is a minimal annihilator of \(\widehat{b}\) over \(K\). Then \(Q\) and \(\widehat{b}\) have the desired properties.
Proof of Proposition 1.6.13.: Assume \(\Gamma\) is divisible. Replace \(\widehat{a}\), \(P\) by \(\widehat{b}\), \(Q\) from Lemma 1.6.14 and rename to arrange that \(P\) is a minimal differential polynomial of \((a_{\rho})\) over \(K\). By [ADH, 14.5.1] we have \(a\in K\) and \(g\in K^{\times}\) such that \(\widehat{a}-a\asymp g\) and \(\operatorname{ndeg}P_{+a,\times g}=1\). Set \(F:=P_{+a,\times g}\) and \(\widehat{f}:=(\widehat{a}-a)/g\). Then \(\operatorname{ndeg}F=1\), \(F(\widehat{f})=0\), and \(\widehat{f}\preccurlyeq 1\). Applying Proposition 1.6.12 to \(F\) and \(\widehat{f}\) in the role of \(P\) and \(\widehat{a}\) yields a nontrivial convex subgroup \(\Delta\) of \(\Gamma\) that is cofinal in \(v(\widehat{f}-K)\). Setting \(\alpha:=vg\), it follows that \(\alpha+\Delta\) is cofinal in \(v\big{(}(\widehat{a}-a)-K\big{)}=v(\widehat{a}-K)\).
**Corollary 1.6.15**.: _If \(K\) is henselian, then \(\widehat{a}\) is almost special over \(K\)._
The proof is like that of Proposition 1.6.13, using [159, 3.3] instead of [ADH, 14.5.1].
### The case of order \(1\)
We show here that Proposition 1.6.12 goes through in the case of order \(1\) under weaker assumptions: _in this subsection \(K\) is a \(1\)-linearly newtonian \(H\)-asymptotic field with asymptotic integration._ Then \(K\) is d-valued with \(\operatorname{I}(K)\subseteq K^{\dagger}\), by Lemma 1.2.9, and \(\lambda\)-free, by [ADH, 14.2.3]. We let \(\phi\) range over elements active in \(K\). In the next two lemmas \(A\in K[\partial]^{\neq}\) has order \(\leqslant 1\), \(\gamma\in\mathbb{Q}\Gamma\), \(\gamma>0\), and \(\widehat{K}\) is an immediate asymptotic extension of \(K\).
**Lemma 1.6.16**.: _Let \(\widehat{f}\in\widehat{K}^{\times}\) be such that \(v\big{(}A(\widehat{f})\big{)}\geqslant v(A^{\phi}\widehat{f})+\gamma\) eventually. Then there exists \(f\in K\) such that \(A(f)=0\) and \(v(\widehat{f}-f)\geqslant v(\widehat{f})+\gamma\)._
Proof.: Note that \(\operatorname{order}(A)=1\): for \(A=a\in K^{\times}\) of order \(0\) we would have \(v\big{(}A(\widehat{f})\big{)}=va+v\widehat{f}=v(A^{\phi}\widehat{f})\), contradicting the hypothesis. We arrange \(A=\partial-g\) (\(g\in K\)). If \(A(\widehat{f})=0\), then \(\widehat{f}\) is in \(K\) [ADH, line before 14.2.10], and \(f:=\widehat{f}\) works. Assume \(A(\widehat{f})\neq 0\). Then
\[v\big{(}A^{\phi}(\widehat{f})\big{)}=v\big{(}A(\widehat{f})\big{)}\geqslant v (A^{\phi}\widehat{f})+\gamma>v(A^{\phi}\widehat{f}),\ \ \ \ \text{ eventually},\]
so \(v(\widehat{f})\in\mathscr{E}^{\rm e}(A)\), and Lemma 1.5.9 yields an \(f\in K\) with \(f\sim\widehat{f}\) and \(A(f)=0\). We claim that this \(f\) has the desired property. Set \(b:=A(\widehat{f})\). By the remarks preceding Corollary 1.5.15 we can replace \(K\), \(\widehat{K}\), \(A\), \(b\) by \(K^{\phi}\), \(\widehat{K}^{\phi}\), \(\phi^{-1}A^{\phi}\), \(\phi^{-1}b\), respectively, for suitable \(\phi\), to arrange that \(K\) has small derivation and \(b^{\dagger}-g\succ^{\flat}1\). Using the hypothesis of the lemma we also arrange \(vb\geqslant v(A\widehat{f})+\gamma\). It remains to show that for \(\widehat{g}:=\widehat{f}-f\neq 0\) we have \(v(\widehat{g})\geqslant v(\widehat{f})+\gamma\). Now \(A(\widehat{g})=b\) with \(v(\widehat{g})\notin\mathscr{E}^{\rm e}(A)\), hence \(\widehat{g}\sim b/(b^{\dagger}-g)\prec^{\flat}b\) by Lemma 1.5.14, and thus \(v(\widehat{g})>vb\geqslant v(A\widehat{f})+\gamma\), so it is enough to show \(v(A\widehat{f})\geqslant v(\widehat{f})\). Now \(b=A(\widehat{f})=\widehat{f}(\widehat{f}^{\dagger}-g)\) and \(A\widehat{f}=\widehat{f}\big{(}\partial+\widehat{f}^{\dagger}-g\big{)}\). As \(vb\geqslant v(A\widehat{f})+\gamma>v(A\widehat{f})\), this yields \(v(\widehat{f}^{\dagger}-g)>0\), so \(v(A\widehat{f})=v(\widehat{f})\).
**Lemma 1.6.17**.: _Let \(b\in K\) and \(\widehat{f}\in\widehat{K}\) with \(v(\widehat{f})\geqslant o(\gamma)\). Assume also that eventually \(v(b)\geqslant v(A^{\phi})+o(\gamma)\) and \(v\big{(}A(\widehat{f})-b\big{)}\geqslant v(A^{\phi})+\gamma+o(\gamma)\). Then for some \(f\in K\) we have \(A(f)=b\) and \(v(\widehat{f}-f)\geqslant(1/2)\gamma+o(\gamma)\)._
The proof is like that of Lemma 1.6.11, using Lemma 1.6.16 instead of Lemma 1.6.10. In the same way Lemma 1.6.11 gave Proposition 1.6.12, Lemma 1.6.17 now yields:
**Proposition 1.6.18**.: _If \(P\in K\{Y\}\), \(\operatorname{order}P\leqslant 1\), \(\operatorname{ndeg}P=1\), and \(P(\widehat{a})=0\), where \(\widehat{a}\preccurlyeq 1\) lies in an immediate asymptotic extension of \(K\) and \(\widehat{a}\notin K\), then \(\widehat{a}\) is special over \(K\)._
_Remark_.: Proposition 1.6.13 does not hold for \(r=1\) under present assumptions. To see this, let \(K\) be a Liouville closed \(H\)-field which is not \(\omega\)-free, as in Example 1.4.16 or [9]. Then \(K\) is \(1\)-linearly newtonian by Corollary 1.8.29 below. Consider the pc-sequences \((\lambda_{\rho})\) and \((\omega_{\rho})\) in \(K\) as in [ADH, 11.7], let \(\omega\in K\) with \(\omega_{\rho}\rightsquigarrow\omega\), and \(P=2Y^{\prime}+Y^{2}+\omega\). Then [ADH, 11.7.13] gives an element \(\lambda\) in an immediate asymptotic extension of \(K\) but not in \(K\) with \(\lambda_{\rho}\rightsquigarrow\lambda\) and \(P(\lambda)=0\). However, \(\lambda\) is not almost special over \(K\) [ADH, 3.4.13, 11.5.2].
**Relating \(Z(K,\widehat{a})\) and \(v(\widehat{a}-K)\) for special \(\widehat{a}\).** _In this subsection \(K\) is a valued differential field with small derivation \(\partial\neq 0\) such that \(\Gamma\neq\{0\}\) and \(\Gamma^{>}\) has no least element._ We recall from [11] that a valued differential field extension \(L\) of \(K\) is said to be _strict_ if for all \(\phi\in K^{\times}\),
\[\partial\mathcal{O}\subseteq\phi\mathcal{O}\ \Longrightarrow\ \partial\mathcal{O}_{L}\subseteq\phi\mathcal{O}_{L}\qquad\text{and}\qquad\partial\smallo\subseteq\phi\smallo\ \Longrightarrow\ \partial\smallo_{L}\subseteq\phi\smallo_{L},\] where \(\smallo\) and \(\smallo_{L}\) denote the maximal ideals of \(\mathcal{O}\) and \(\mathcal{O}_{L}\).
(If \(K\) is asymptotic, then any immediate asymptotic extension of \(K\) is automatically strict, by [11, 1.11].) Let \(\widehat{a}\) lie in an immediate strict extension \(\widehat{K}\) of \(K\) such that \(\widehat{a}\preccurlyeq 1\), \(\widehat{a}\notin K\), and \(\widehat{a}\) is special over \(K\). We adopt from [11, Sections 2, 4] the definitions of \(\operatorname{ndeg}P\) for \(P\in K\{Y\}^{\neq}\) and of the set \(Z(K,\widehat{a})\subseteq K\{Y\}^{\neq}\). Also recall that \(\Gamma(\partial):=\{v\phi:\,\phi\in K^{\times},\,\partial\mathcal{O}\subseteq\phi\mathcal{O}\}\).
**Lemma 1.6.19**.: _Let \(P\in Z(K,\widehat{a})\) and \(P\asymp 1\). Then \(v\big{(}P(\widehat{a})\big{)}>v(\widehat{a}-K)\)._
Proof.: Take a divergent pc-sequence \((a_{\rho})\) in \(\mathcal{O}\) with \(a_{\rho}\rightsquigarrow\widehat{a}\), and as in [ADH, 11.2] let \(\boldsymbol{a}:=c_{K}(a_{\rho})\). Then \(\operatorname{ndeg}_{\boldsymbol{a}}P\geqslant 1\) by [11, 4.7]. We arrange \(\gamma_{\rho}:=v(\widehat{a}-a_{\rho})\) to be strictly increasing as a function of \(\rho\), with \(0<2\gamma_{\rho}<\gamma_{s(\rho)}\) for all \(\rho\). Take \(g_{\rho}\in\mathcal{O}\)
with \(g_{\rho}\asymp\widehat{a}-a_{\rho}\); then \(1\leqslant d:=\operatorname{ndeg}_{\boldsymbol{a}}P=\operatorname{ndeg}P_{+a_{\rho}, \times g_{\rho}}\) for all sufficiently large \(\rho\), and we arrange that this holds for all \(\rho\). We have \(\widehat{a}=a_{\rho}+g_{\rho}y_{\rho}\) with \(y_{\rho}\asymp 1\), and
\[P(\widehat{a})\ =\ P_{+a_{\rho},\times g_{\rho}}(y_{\rho})\ =\ \sum_{i}(P_{+a_{\rho}, \times g_{\rho}})_{i}(y_{\rho}).\]
Pick for every \(\rho\) an element \(\phi_{\rho}\in K^{\times}\) such that \(0\leqslant v(\phi_{\rho})\in\Gamma(\partial)\) and \((P_{+a_{\rho},\times g_{\rho}}^{\phi_{\rho}})_{i}\preccurlyeq(P_{+a_{\rho},\times g_{\rho}}^{\phi_{\rho}})_{d}\) for all \(i\). Then for all \(\rho\) and \(i\),
\[(P_{+a_{\rho},\times g_{\rho}})_{i}(y_{\rho})\ =\ (P_{+a_{\rho},\times g_{\rho}}^{\phi_{\rho}})_{i}(y_{\rho})\ \preccurlyeq\ (P_{+a_{\rho},\times g_{\rho}}^{\phi_{\rho}})_{i}\ \preccurlyeq\ (P_{+a_{\rho},\times g_{\rho}}^{\phi_{\rho}})_{d}\ \text{ with }\] \[v\big{(}(P_{+a_{\rho},\times g_{\rho}}^{\phi_{\rho}})_{d}\big{)}\ \geqslant\ d\gamma_{\rho}+o(\gamma_{\rho})\ \geqslant\ \gamma_{\rho}+o(\gamma_{\rho}),\]
where for the next to last inequality we use [ADH, 11.1.1, 5.7.1, 5.7.5, 6.1.3]. Hence \(v\big{(}P(\widehat{a})\big{)}\geqslant\gamma_{\rho}+o(\gamma_{\rho})\) for all \(\rho\), and thus \(v\big{(}P(\widehat{a})\big{)}>v(\widehat{a}-K)\).
We also have a converse under extra assumptions:
**Lemma 1.6.20**.: _Assume \(K\) is asymptotic and \(\Psi\subseteq v(\widehat{a}-K)\). Let \(P\in K\{Y\}\) be such that \(P\asymp 1\) and \(v\big{(}P(\widehat{a})\big{)}>v(\widehat{a}-K)\). Then \(P\in Z(K,\widehat{a})\)._
Proof.: Let \(\Delta\) be the nontrivial convex subgroup of \(\Gamma\) that is cofinal in \(v(\widehat{a}-K)\). Let \(\kappa:=\operatorname{cf}(\Delta)\). Take a divergent pc-sequence \((a_{\rho})_{\rho<\kappa}\) in \(K\) such that \(a_{\rho}\rightsquigarrow\widehat{a}\). We arrange \(\gamma_{\rho}:=v(\widehat{a}-a_{\rho})\) is strictly increasing as a function of \(\rho\), with \(\gamma_{\rho}>0\) for all \(\rho\); thus \(a_{\rho}\preccurlyeq 1\) for all \(\rho\). Consider the \(\Delta\)-coarsening \(\dot{v}=v_{\Delta}\) of the valuation \(v\) of \(K\); it has valuation ring \(\dot{\mathcal{O}}\) with differential residue field \(\dot{K}\). Consider likewise the \(\Delta\)-coarsening of the valuation of the immediate extension \(L=K\langle\widehat{a}\rangle\) of \(K\). Let \(a^{*}\) be the image of \(\widehat{a}\) in the differential residue field \(\dot{L}\) of \((L,\dot{v})\). Note that \(\dot{L}\) is an immediate extension of \(\dot{K}\). The pc-sequence \((a_{\rho})\) then yields a sequence \((\dot{a_{\rho}})\) in \(\dot{K}\) with \(v(a^{*}-\dot{a_{\rho}})=\gamma_{\rho}\) for all \(\rho\). Thus \((\dot{a}_{\rho})\) is a c-sequence in \(\dot{K}\) with \(\dot{a}_{\rho}\to a^{*}\), so \(\dot{P}(\dot{a}_{\rho})\to\dot{P}(a^{*})\) by [ADH, 4.4.5]. From \(v\big{(}P(\widehat{a})\big{)}>\Delta\) we obtain \(\dot{P}(a^{*})=0\), and so \(\dot{P}(\dot{a}_{\rho})\to 0\). So far we have not used our assumption that \(K\) is asymptotic and \(\Psi\subseteq v(\widehat{a}-K)\). Using this now, we note that for \(\alpha\in\Delta^{>}\) we have \(0<\alpha^{\prime}=\alpha+\alpha^{\dagger}\), so \(\alpha^{\prime}\in\Delta\), hence the derivation of \(\dot{K}\) is nontrivial. Thus we can apply [ADH, 4.4.10] to \(\dot{K}\) and modify the \(a_{\rho}\) without changing \(\gamma_{\rho}=v(a^{*}-\dot{a}_{\rho})\) to arrange that in addition \(\dot{P}(\dot{a}_{\rho})\neq 0\) for all \(\rho\). Since \(\kappa=\operatorname{cf}(\Delta)\), we can replace \((a_{\rho})\) by a cofinal subsequence so that \(P(a_{\rho})\rightsquigarrow 0\), hence \(P\in Z(K,\widehat{a})\) by [11, 4.6].
To elaborate on this, let \(\Delta\) be a convex subgroup of \(\Gamma\) and \(\dot{K}\) the valued differential residue field of the \(\Delta\)-coarsening \(v_{\Delta}\) of the valuation \(v\) of \(K\). We view \(\dot{K}\) in the usual way as a valued differential subfield of the valued differential residue field \(\dot{\widehat{K}}\) of the \(\Delta\)-coarsening of the valuation of \(\widehat{K}\); see [ADH, pp. 159-160 and 4.4.4].
**Corollary 1.6.21**.: _Suppose \(K\) is asymptotic, \(\Psi\subseteq v(\widehat{a}-K)\), and \(\Delta\) is cofinal in \(v(\widehat{a}-K)\). Let \(P\in K\{Y\}\) with \(P\asymp 1\). Then \(P\in Z(K,\widehat{a})\) if and only if \(\dot{P}(\dot{\widehat{a}})=0\) in \(\dot{\widehat{K}}\). Also, \(P\) is an element of \(Z(K,\widehat{a})\) of minimal complexity if and only if \(\dot{P}\) is a minimal annihilator of \(\dot{\widehat{a}}\) over \(\dot{K}\) and \(\dot{P}\) has the same complexity as \(P\)._
Proof.: The first statement is immediate from Lemmas 1.6.19 and 1.6.20. For the rest use that for \(R\in\dot{\mathcal{O}}\{Y\}\) we have \(\operatorname{c}(\dot{R})\leqslant\operatorname{c}(R)\) and that for all \(Q\in\dot{K}\{Y\}\) there is an \(R\in\dot{\mathcal{O}}\{Y\}\) with \(Q=\dot{R}\) and \(Q_{\boldsymbol{i}}\neq 0\) iff \(R_{\boldsymbol{i}}\neq 0\) for all \(\boldsymbol{i}\), so \(\operatorname{c}(\dot{R})=\operatorname{c}(R)\).
### 1.7. Differential Henselianity of the Completion
_Let \(K\) be a valued differential field with small derivation._ We let \(\Gamma:=v(K^{\times})\) be the value group of \(K\) and \(\boldsymbol{k}:=\operatorname{res}(K)\) be the differential residue field of \(K\), and we let \(r\in\mathbb{N}\). The following summarizes [ADH, 7.1.1, 7.2.1]:
**Lemma 1.7.1**.: _The valued differential field \(K\) is \(r\)-\(\operatorname{d}\)-henselian iff for each \(P\) in \(K\{Y\}\) of order \(\leqslant r\) with \(\operatorname{ddeg}P=1\) there is a \(y\in\mathcal{O}\) with \(P(y)=0\)._
Recall that the derivation of \(K\) being small, it is continuous [ADH, 4.4.6], and hence extends uniquely to a continuous derivation on the completion \(K^{\operatorname{c}}\) of the valued field \(K\) [ADH, 4.4.11]. We equip \(K^{\operatorname{c}}\) with this derivation, which remains small [ADH, 4.4.12], so \(K^{\operatorname{c}}\) is an immediate valued differential field extension of \(K\) with small derivation, in particular, \(\boldsymbol{k}=\operatorname{res}(K^{\operatorname{c}})\).
Below we characterize in a first-order way when \(K^{\operatorname{c}}\) is \(r\)-\(\operatorname{d}\)-henselian. We shall use tacitly that for \(P\in K\{Y\}\) we have \(P(g)\preccurlyeq P_{\times g}\) for all \(g\in K\); to see this, replace \(P\) by \(P_{\times g}\) to reduce to \(g=1\), and observe that \(P(1)=\sum_{\|\boldsymbol{\sigma}\|=0}P_{\boldsymbol{\sigma}}\preccurlyeq P\).
**Lemma 1.7.2**.: _Let \(P\in K^{\operatorname{c}}\{Y\}\), \(b\in K^{\operatorname{c}}\) with \(b\preccurlyeq 1\) and \(P(b)=0\), and \(\gamma\in\Gamma^{>}\). Then there is an \(a\in\mathcal{O}\) with \(v\big{(}P(a)\big{)}>\gamma\)._
Proof.: To find an \(a\) as claimed we take \(f\in K\) satisfying \(f\asymp P\) and replace \(P\), \(\gamma\) by \(f^{-1}P\), \(\gamma-vf\), respectively, to arrange \(P\asymp 1\) and thus \(P_{+b}\asymp 1\). We also assume \(b\neq 0\). Since \(K\) is dense in \(K^{\operatorname{c}}\) we can take \(a\in K\) such that \(a\sim b\) (so \(a\in\mathcal{O}\)) and \(v(a-b)>2\gamma\). Then with \(g:=a-b\), using [ADH, 4.5.1(i) and 6.1.4] we conclude
\[v\big{(}P(a)\big{)}=v\big{(}P_{+b}(g)\big{)}\geqslant v\big{(}(P_{+b})_{ \times g}\big{)}\geqslant v(P_{+b})+vg+o(vg)=vg+o(vg)>\gamma\]
as required.
Recall that if \(K\) is asymptotic, then so is \(K^{\operatorname{c}}\) by [ADH, 9.1.6].
**Lemma 1.7.3**.: _Suppose \(K\) is asymptotic, \(\Gamma\neq\{0\}\), and for every \(P\in K\{Y\}\) of order at most \(r\) with \(\operatorname{ddeg}P=1\) and every \(\gamma\in\Gamma^{>}\) there is an \(a\in\mathcal{O}\) with \(v\big{(}P(a)\big{)}>\gamma\). Then \(K^{\operatorname{c}}\) is \(r\)-\(\operatorname{d}\)-henselian._
Proof.: The hypothesis applied to \(P\in\mathcal{O}\{Y\}\) of order \(\leqslant r\) with \(\operatorname{ddeg}P=\deg P=1\) yields that \(\boldsymbol{k}\) is \(r\)-linearly surjective. Let now \(P\in K^{\operatorname{c}}\{Y\}\) be of order \(\leqslant r\) with \(\operatorname{ddeg}P=1\). We need to show that there exists \(b\in K^{\operatorname{c}}\) with \(b\preccurlyeq 1\) and \(P(b)=0\). First we arrange \(P\asymp 1\). By [ADH, remarks after 9.4.11] we can take \(b\preccurlyeq 1\) in an immediate \(\operatorname{d}\)-henselian asymptotic field extension \(L\) of \(K^{\operatorname{c}}\) with \(P(b)=0\). We prove below that \(b\in K^{\operatorname{c}}\). We may assume \(b\notin K\), so \(v(b-K)\) has no largest element, since \(L\supseteq K\) is immediate. Note also that \(\operatorname{ddeg}P_{+b}=1\) by [ADH, 6.6.5(i)]; since \(P(b)=0\) we thus have \(\operatorname{ddeg}P_{+b,\times g}=1\) for all \(g\preccurlyeq 1\) in \(L^{\times}\) by [ADH, 6.6.7].
**Claim :**_Let \(\gamma\in\Gamma^{>}\) and \(a\in K\) with \(v(b-a)\geqslant 0\). There is a \(y\in\mathcal{O}\) such that \(v\big{(}P(y)\big{)}>\gamma\) and \(v(b-y)\geqslant v(b-a)\)._
To prove this claim, take \(g\in K^{\times}\) with \(vg=v(b-a)\). Then by [ADH, 6.6.6] and the observation preceding the claim we have \(\operatorname{ddeg}P_{+a,\times g}=\operatorname{ddeg}P_{+b,\times g}=1\). Thanks to density of \(K\) in \(K^{\operatorname{c}}\) we may take \(Q\in K\{Y\}\) of order \(\leqslant r\) with \(P_{+a,\times g}\sim Q\) and \(v(P_{+a,\times g}-Q)>\gamma\). Then \(\operatorname{ddeg}Q=1\), so by the hypothesis of the lemma we have \(z\in\mathcal{O}\) with \(v\big{(}Q(z)\big{)}>\gamma\). Set \(y:=a+gz\in\mathcal{O}\); then \(v\big{(}P(y)\big{)}=v\big{(}P_{+a,\times g}(z)\big{)}>\gamma\) and \(v(b-y)=v(b-a-gz)\geqslant v(b-a)=vg\) as claimed.
Let now \(\gamma\in\Gamma^{>}\); to show that \(b\in K^{\rm c}\), it is enough by [ADH, 3.2.15, 3.2.16] to show that \(v(a-b)>\gamma\) for some \(a\in K\). Let \(A:=L_{P_{+b}}\in L[\partial]\); then \(A\asymp 1\). Since \(|\mathscr{E}_{L}(A)|\leqslant r\) by [ADH, 7.5.3], the claim gives an \(a\in\mathcal{O}\) with \(v\big{(}P(a)\big{)}>2\gamma\) and \(0<v(b-a)\notin\mathscr{E}_{L}(A)\). Put \(g:=a-b\) and \(R:=(P_{+b})_{>1}\). Then \(R\prec 1\) and
\[P(a)\ =\ P_{+b}(g)\ =\ A(g)+R(g)\]
where by the definition of \(\mathscr{E}_{L}(A)\) and [ADH, 6.4.1(iii), 6.4.3] we have in \(\mathbb{Q}\Gamma\):
\[v\big{(}A(g)\big{)}\ =\ v_{A}(vg)\ =\ vg+o(vg)\ <\ vR+(3/2)vg\ \leqslant\ v_{R}( vg)\ \leqslant\ v\big{(}R(g)\big{)}\]
and so \(v\big{(}P(a)\big{)}=vg+o(vg)>2\gamma\). Therefore \(v(a-b)=vg>\gamma\) as required.
The last two lemmas yield an analogue of [ADH, 3.3.7] for \(r\)-d-henselianity and a partial generalization of [ADH, 7.2.15]:
**Corollary 1.7.4**.: _Suppose \(K\) is asymptotic and \(\Gamma\neq\{0\}\). Then the following are equivalent:_
1. \(K^{\rm c}\) _is_ \(r\)_-_\(\mathrm{d}\)_-henselian;_
2. _for every_ \(P\in K\{Y\}\) _of order at most_ \(r\) _with_ \(\mathrm{ddeg}\,P=1\) _and every_ \(\gamma\in\Gamma^{>}\) _there exists_ \(a\in\mathcal{O}\) _with_ \(v\big{(}P(a)\big{)}>\gamma\)_._
_In particular, if \(K\) is \(r\)-\(\mathrm{d}\)-henselian, then so is \(K^{\rm c}\)._
### 1.8. Complements on Newtonianity
_In this section \(K\) is an ungrounded \(H\)-asymptotic field with \(\Gamma=v(K^{\times})\neq\{0\}\)._ Note that then \(K^{\rm c}\) is also \(H\)-asymptotic. We let \(r\) range over \(\mathbb{N}\) and \(\phi\) over the active elements of \(K\). Our first aim is a newtonian analogue of Corollary 1.7.4:
**Proposition 1.8.1**.: _For \(\mathrm{d}\)-valued and \(\omega\)-free \(K\), the following are equivalent:_
1. \(K^{\rm c}\) _is_ \(r\)_-newtonian;_
2. _for every_ \(P\in K\{Y\}\) _of order at most_ \(r\) _with_ \(\mathrm{ndeg}\,P=1\) _and every_ \(\gamma\in\Gamma^{>}\) _there is an_ \(a\in\mathcal{O}\) _with_ \(v\big{(}P(a)\big{)}>\gamma\)_._
_If \(K\) is \(\mathrm{d}\)-valued, \(\omega\)-free, and \(r\)-newtonian, then so is \(K^{\rm c}\)._
The final statement in this proposition extends [ADH, 14.1.5]. Towards the proof we first state a variant of [ADH, 13.2.2] which follows easily from [ADH, 11.1.4]:
**Lemma 1.8.2**.: _Assume \(K\) has small derivation and let \(P,Q\in K\{Y\}^{\neq}\) and \(\phi\preccurlyeq 1\). Then \(P^{\phi}\asymp^{\flat}P\), and so we have the three implications_
\[P\preccurlyeq^{\flat}Q\ \Longrightarrow\ P^{\phi}\preccurlyeq^{\flat}Q^{\phi},\quad P\prec^{\flat}Q\ \Longrightarrow\ P^{\phi}\prec^{\flat}Q^{\phi},\quad P\sim^{\flat}Q\ \Longrightarrow\ P^{\phi}\sim^{\flat}Q^{\phi}.\]
_The last implication gives: \(P\sim^{\flat}Q\ \Longrightarrow\ \operatorname{ndeg}P=\operatorname{ndeg}Q\) and \(\operatorname{nmul}P=\operatorname{nmul}Q\)._
For \(P^{\phi}\asymp^{\flat}P\) and the subsequent three implications in the lemma above we can drop the assumption that \(K\) is ungrounded.
**Lemma 1.8.3**.: _Suppose \(K\) is \(\mathrm{d}\)-valued, \(\omega\)-free, and for every \(P\in K\{Y\}\) of order at most \(r\) with \(\mathrm{ndeg}\,P=1\) and every \(\gamma\in\Gamma^{>}\) there is an \(a\in\mathcal{O}\) with \(v\big{(}P(a)\big{)}>\gamma\). Then \(K^{\rm c}\) is \(\mathrm{d}\)-valued, \(\omega\)-free, and \(r\)-newtonian._
Proof.: By [ADH, 9.1.6 and 11.7.20], \(K^{\rm c}\) is \(\mathrm{d}\)-valued and \(\omega\)-free. Let \(P\in K^{\rm c}\{Y\}\) be of order \(\leqslant\,r\) with \(\mathrm{ndeg}\,P=1\). We need to show that \(P(b)=0\) for some \(b\preccurlyeq 1\) in \(K^{\rm c}\). To find \(b\) we may replace \(K\), \(P\) by \(K^{\phi}\), \(P^{\phi}\); in particular we may assume that \(K\) has small derivation and \(\Gamma^{\flat}\neq\Gamma\). By [ADH, 14.0.1 and the remarks following it] we can
take \(b\preccurlyeq 1\) in an immediate newtonian extension \(L\) of \(K^{\rm c}\) such that \(P(b)=0\). We claim that \(b\in K^{\rm c}\). To show this we may assume \(b\notin K\), so \(v(b-K)\) does not have a largest element. By [ADH, 11.2.3(i)] we have \({\rm ndeg}\,P_{+b}=1\) and so \({\rm ndeg}\,P_{+b,\times g}=1\) for all \(g\preccurlyeq 1\) in \(L^{\times}\) by [ADH, 11.2.5], in view of \(P(b)=0\).
**Claim :**_Let \(\gamma\in\Gamma^{>}\) and \(a\in K\) with \(v(b-a)\geqslant 0\). There is a \(y\in{\mathcal{O}}\) such that \(v\big{(}P(y)\big{)}>\gamma\) and \(v(b-y)\geqslant v(b-a)\)._
The proof is similar to that of the claim in the proof of Lemma 1.7.3: Take \(g\in K^{\times}\) with \(vg=v(b-a)\). Then \({\rm ndeg}\,P_{+a,\times g}={\rm ndeg}\,P_{+b,\times g}=1\) by [ADH, 11.2.4] and the observation preceding the claim. Density of \(K\) in \(K^{\rm c}\) yields \(Q\in K\{Y\}\) of order \(\leqslant r\) with \(v(P_{+a,\times g}-Q)>\gamma\) and \(P_{+a,\times g}\sim^{\flat}Q\), the latter using \(\Gamma^{\flat}\neq\Gamma\). Then \({\rm ndeg}\,Q={\rm ndeg}\,P_{+a,\times g}=1\) by Lemma 1.8.2, so the hypothesis of the lemma gives \(z\in{\mathcal{O}}\) with \(v\big{(}Q(z)\big{)}>\gamma\). Setting \(y:=a+gz\in{\mathcal{O}}\) we have \(v\big{(}P(y)\big{)}=v\big{(}P_{+a,\times g}(z)\big{)}>\gamma\) and \(v(b-y)=v(b-a-gz)\geqslant vg=v(b-a)\).
Let \(\gamma\in\Gamma^{>}\); to get \(b\in K^{\rm c}\), it is enough to show that then \(v(a-b)>\gamma\) for some \(a\in K\). Let \(A:=L_{P_{+b}}\in L[\partial]\). Since \(|\mathscr{E}_{L}^{\rm e}(A)|\leqslant r\) by [ADH, 14.2.9], by the claim we can take \(a\in{\mathcal{O}}\) with \(v\big{(}P(a)\big{)}>2\gamma\) and \(0<v(b-a)\notin\mathscr{E}_{L}^{\rm e}(A)\). Now put \(g:=a-b\) and take \(\phi\) with \(vg\notin\mathscr{E}_{L^{\phi}}(A^{\phi})\); note that then \(A^{\phi}=L_{P_{+b}^{\phi}}\). Replacing \(K\), \(L\), \(P\) by \(K^{\phi}\), \(L^{\phi}\), \(P^{\phi}\) we arrange \(vg\notin\mathscr{E}_{L}(A)\), and (changing \(\phi\) if necessary) \({\rm ddeg}\,P_{+b}=1\). We also arrange \(P_{+b}\asymp 1\), and then \((P_{+b})_{>1}\prec 1\). As in the proof of Lemma 1.7.3 above we now derive \(v(a-b)=vg>\gamma\).
Combining Lemmas 1.7.2 and 1.8.3 now yields Proposition 1.8.1.
To show that newtonianity is preserved under specialization, we assume below that \(\Psi\cap\Gamma^{>}\neq\emptyset\), so \(K\) has small derivation. Let \(\Delta\neq\{0\}\) be a convex subgroup of \(\Gamma\) with \(\psi(\Delta^{\neq})\subseteq\Delta\). Then \(1\in\Delta\) where \(1\) denotes the unique positive element of \(\Gamma\) fixed by the function \(\psi\): use that \(\psi(\gamma)\geqslant 1\) for \(0<\gamma<1\). (Conversely, any convex subgroup \(G\) of \(\Gamma\) with \(1\in G\) satisfies \(\psi(G^{\neq})\subseteq G\).) Let \(\dot{v}\) be the coarsening of the valuation \(v\) of \(K\) by \(\Delta\), with valuation ring \(\dot{\mathcal{O}}\), maximal ideal \(\dot{\mathcal{o}}\) of \(\dot{\mathcal{O}}\), and residue field \(\dot{K}=\dot{\mathcal{O}}/\dot{\mathcal{o}}\). The derivation of \(K\) is small with respect to \(\dot{v}\), and \(\dot{K}\) with the induced valuation \(v\colon\dot{K}^{\times}\to\Delta\) and induced derivation as in [ADH, p. 405] is an asymptotic field with asymptotic couple \((\Delta,\psi|\Delta^{\neq})\), and so is of \(H\)-type with small derivation. If \(K\) is d-valued, then so is \(\dot{K}\) by [ADH, 10.1.8], and if \(K\) is \(\omega\)-free, then so is \(\dot{K}\) by [ADH, 11.7.24]. The residue map \(a\mapsto\dot{a}:=a+\dot{\mathcal{o}}\colon\dot{\mathcal{O}}\to\dot{K}\) is a differential ring morphism; it extends to a morphism \(P\mapsto\dot{P}\colon\dot{\mathcal{O}}\{Y\}\to\dot{K}\{Y\}\) of differential rings sending \(Y\) to \(Y\), and \({\rm ddeg}\,P={\rm ddeg}\,\dot{P}\) for \(P\in\dot{\mathcal{O}}\{Y\}\) with \(\dot{P}\neq 0\).
We now restrict \(\phi\) to range over active elements of \({\mathcal{O}}\). Then \(v\phi\leqslant 1+1\), so \(v\phi\in\Delta\), and hence \(\phi\) is a unit of \(\dot{\mathcal{O}}\). It follows that \(\dot{\phi}\) is active in \(\dot{K}\), and every active element of \(\dot{K}\) lying in its valuation ring is of this form. Moreover, the differential subrings \(\dot{\mathcal{O}}\) of \(K\) and \(\dot{\mathcal{O}}^{\phi}:=(\dot{\mathcal{O}})^{\phi}\) of \(K^{\phi}\) have the same underlying ring, and the derivation of \(K^{\phi}\) is small with respect to \(\dot{v}\). Thus the differential residue fields \(\dot{K}=\dot{\mathcal{O}}/\dot{\mathcal{o}}\) and \(\dot{K}^{\phi}:=\dot{\mathcal{O}}^{\phi}/\dot{\mathcal{o}}\) have the same underlying field and are related as follows:
\[\dot{K}^{\phi}\;=\;(\dot{K})^{\dot{\phi}}.\]
For \(P\in\dot{\mathcal{O}}\{Y\}\) we have \(P^{\phi}\in\dot{\mathcal{O}}^{\phi}\{Y\}\), and the image of \(P^{\phi}\) under the residue map \(\dot{\mathcal{O}}^{\phi}\{Y\}\to\dot{K}^{\phi}\{Y\}\) equals \(\dot{P}^{\dot{\phi}}\); hence \({\rm ndeg}\,P={\rm ndeg}\,\dot{P}\) for \(P\in\dot{\mathcal{O}}\{Y\}\) satisfying \(\dot{P}\neq 0\). These remarks imply:
**Lemma 1.8.4**.: _If \(K\) is \(r\)-newtonian, then \(\dot{K}\) is \(r\)-newtonian._
Combining Proposition 1.8.1 and Lemmas 1.8.3 and 1.8.4 yields:
**Corollary 1.8.5**.: _Suppose \(K\) is \(\mathrm{d}\)-valued, \(\omega\)-free, and \(r\)-newtonian. Then \(\dot{K}\) and its completion are \(\mathrm{d}\)-valued, \(\omega\)-free, and \(r\)-newtonian._
We finish with a newtonian analogue of [ADH, 7.1.7]:
**Lemma 1.8.6**.: _Suppose \((K,\dot{\mathcal{O}})\) is \(r\)-\(\mathrm{d}\)-henselian and \(\dot{K}\) is \(r\)-newtonian. Then \(K\) is \(r\)-newtonian._
Proof.: Let \(P\in K\{Y\}\) be of order \(\leqslant r\) with \(\mathrm{ndeg}\,P=1\); we need to show the existence of \(b\in\mathcal{O}\) with \(P(b)=0\). Replacing \(K\) and \(P\) by \(K^{\phi}\) and \(P^{\phi}\) for suitable \(\phi\) (and renaming) we arrange \(\mathrm{ddeg}\,P=1\); this also uses [ADH, section 7.3, subsection on compositional conjugation]. We can further assume that \(P\asymp 1\). Put \(Q:=\dot{P}\in\dot{K}\{Y\}\), so \(\mathrm{ndeg}\,Q=1\), and thus \(r\)-newtonianity of \(\dot{K}\) yields an \(a\in\mathcal{O}\) with \(Q(\dot{a})=0\). Then \(P(a)\mathrel{\dot{\prec}}1\), \(P_{+a,1}\sim P_{1}\asymp 1\), and \(P_{+a,>1}\prec 1\). Since \((K,\dot{\mathcal{O}})\) is \(r\)-\(\mathrm{d}\)-henselian, this gives \(y\in\dot{\mathcal{o}}\) with \(P_{+a}(y)=0\), and then \(P(b)=0\) for \(b:=a+y\in\mathcal{O}\).
Lemmas 1.8.4, 1.8.6, and [ADH, 14.1.2] yield:
**Corollary 1.8.7**.: \(K\) _is \(r\)-newtonian iff \((K,\dot{\mathcal{O}})\) is \(r\)-\(\mathrm{d}\)-henselian and \(\dot{K}\) is \(r\)-newtonian._
**Invariance of Newton quantities.**_In this subsection \(P\in K\{Y\}^{\neq}\)._ In [ADH, 11.1] we associated to \(P\) its Newton weight \(\operatorname{nwt}P\), Newton degree \(\operatorname{ndeg}P\), and Newton multiplicity \(\operatorname{nmul}P\) at \(0\), all elements of \(\mathbb{N}\), as well as the element \(v^{\mathrm{e}}(P)\) of \(\Gamma\); these quantities do not change when passing to an \(H\)-asymptotic extension \(L\) of \(K\) with \(\Psi\) cofinal in \(\Psi_{L}\), cf. [ADH, p. 480], where the assumptions on \(K\), \(L\) are slightly weaker. Thus by Theorem 1.4.1, these quantities do not change for \(\omega\)-free \(K\) in passing to an \(H\)-asymptotic pre-\(\mathrm{d}\)-valued \(\mathrm{d}\)-algebraic extension of \(K\). Below we improve on this in several ways. First, for \(P\) of order \(\leqslant 1\) we can drop the requirement that \(\Psi\) be cofinal in \(\Psi_{L}\), by a strengthening of [ADH, 11.2.13]:
**Lemma 1.8.8**.: _Suppose \(K\) is \(H\)-asymptotic with rational asymptotic integration and \(P\in K[Y,Y^{\prime}]^{\neq}\). Then there are \(w\in\mathbb{N}\), \(\alpha\in\Gamma^{>}\), \(A\in K[Y]^{\neq}\), and an active \(\phi_{0}\) in \(K\) such that for every asymptotic extension \(L\) of \(K\) and active \(f\preccurlyeq\phi_{0}\) in \(L\),_
\[P^{f}\ =\ f^{w}A(Y)(Y^{\prime})^{w}+R_{f},\quad R_{f}\in L^{f}[Y,Y^{\prime}], \quad v(R_{f})\ \geqslant\ v(P^{f})+\alpha.\]
_For such \(w,A\) we have for any ungrounded \(H\)-asymptotic extension \(L\) of \(K\),_
\[\mathrm{nwt}_{L}\,P\ =\ w,\quad\mathrm{ndeg}_{L}\,P\ =\ \mathrm{deg}\,A{+}w, \quad\mathrm{nmul}_{L}\,P\ =\ \mathrm{mul}\,A{+}w,\quad\ v^{e}_{L}(P)=v(A).\]
Proof.: Let \(w\) be as in the proof of [ADH, 11.2.13]. Using its notations, this proof yields an active \(\phi_{0}\) in \(K\) such that
\[w\gamma+v(A_{w})\ <\ j\gamma+v(A_{j}) \tag{1.8.1}\]
for all \(\gamma\geqslant v(\phi_{0})\) in \(\Psi^{\downarrow}\) and \(j\in J\setminus\{w\}\). This gives \(\beta\in\mathbb{Q}\Gamma\) such that \(\beta>\Psi\) and (1.8.1) remains true for all \(\gamma\in\Gamma\) with \(v(\phi_{0})\leqslant\gamma<\beta\). Since \((\mathbb{Q}\Gamma,\psi)\) has asymptotic integration, \(\beta\) is not a gap in \((\mathbb{Q}\Gamma,\psi)\), so \(\beta>\beta_{0}>\Psi\) with \(\beta_{0}\in\mathbb{Q}\Gamma\). This yields an element \(\alpha\in(\mathbb{Q}\Gamma)^{>}\) such that for all \(\gamma\in\mathbb{Q}\Gamma\) with \(v(\phi_{0})\leqslant\gamma\leqslant\beta_{0}\) we have
\[w\gamma+v(A_{w})+\alpha\ \leqslant\ j\gamma+v(A_{j}) \tag{1.8.2}\]
Since \(\Gamma\) has no least positive element, we can decrease \(\alpha\) to arrange \(\alpha\in\Gamma^{>}\). Now (1.8.2) remains true for all elements \(\gamma\) of any divisible ordered abelian group extending \(\mathbb{Q}\Gamma\) with \(v(\phi_{0})\leqslant\gamma\leqslant\beta_{0}\). Thus \(w\), \(\alpha\), \(A=A_{w}\), and \(\phi_{0}\) are as required.
For any ungrounded \(H\)-asymptotic extension \(L\) of \(K\) we obtain for active \(f\preccurlyeq\phi_{0}\) in \(L\) that \(v(P^{f})=v(A)+wv(f)\), so \(v^{\rm e}_{L}(P)=v(A)\) in view of the identity in [ADH, 11.1.15] defining \(v^{\rm e}_{L}(P)\).
For quasilinear \(P\) we have:
**Lemma 1.8.9**.: _Suppose \(K\) is \(\lambda\)-free and \({\rm ndeg}\,P=1\). Then there are active \(\phi_{0}\) in \(K\) and \(a,b\in K\) with \(a\preccurlyeq b\neq 0\) such that either_ (i) _or_ (ii) _below holds:_
* \(P^{f}\,\sim^{\flat}_{\phi_{0}}\,a+bY\) _for all active_ \(f\preccurlyeq\phi_{0}\) _in all_ \(H\)_-asymptotic extensions of_ \(K\)_;_
* \(P^{f}\,\sim^{\flat}_{\phi_{0}}\,\frac{f}{\phi_{0}}\,b\,Y^{\prime}\) _for all active_ \(f\preccurlyeq\phi_{0}\) _in all_ \(H\)_-asymptotic extensions of_ \(K\)_._
_In particular, for each ungrounded \(H\)-asymptotic extension \(L\) of \(K\),_
\[{\rm nwt}_{L}\,P={\rm nwt}\,P\leqslant 1,\quad{\rm ndeg}_{L}\,P=1,\quad{ \rm nmul}_{L}\,P={\rm nmul}\,P,\quad v^{\rm e}_{L}(P)=v^{\rm e}(P).\]
Proof.: From [ADH, 13.7.10] we obtain an active \(\phi_{0}\) in \(K\) and \(a,b\in K\) with \(a\preccurlyeq b\) such that in \(K^{\phi_{0}}\{Y\}\), either \(P^{\phi_{0}}\,\sim^{\flat}_{\phi_{0}}\,a+bY\) or \(P^{\phi_{0}}\,\sim^{\flat}_{\phi_{0}}\,b\,Y^{\prime}\) (so \(b\neq 0\)). In the first case, set \(A(Y):=a+bY\), \(w:=0\); in the second case, set \(A(Y):=bY^{\prime}\), \(w:=1\). Then \(P^{\phi_{0}}=A+R\) where \(R\,\prec^{\flat}_{\phi_{0}}\,b\asymp P^{\phi_{0}}\) in \(K^{\phi_{0}}\{Y\}\).
Let \(L\) be an \(H\)-asymptotic extension of \(K\). Then \(R\,\prec^{\flat}_{\phi_{0}}\,P^{\phi_{0}}\) remains true in \(L^{\phi_{0}}\{Y\}\), and if \(f\preccurlyeq\phi_{0}\) is active in \(L\), then \(P^{f}=(P^{\phi_{0}})^{f/\phi_{0}}=(f/\phi_{0})^{w}A+R^{f/\phi_{0}}\) where \(R^{f/\phi_{0}}\,\prec^{\flat}_{\phi_{0}}\,P^{f}\) by Lemma 1.8.2 and the remark following its proof. As to \(v^{\rm e}_{L}(P)=v^{\rm e}(P)\) for ungrounded \(L\), the identity from [ADH, 11.1.15] defining these quantities shows that both are \(vb\) in case (i), and \(v(b)-v(\phi_{0})\) in case (ii).
Lemma 1.8.9 has the following consequence, partly generalizing Corollary 1.5.5:
**Corollary 1.8.10**.: _Suppose \(K\) is \(\lambda\)-free, \(A\in K[\partial]^{\neq}\), and \(L\) is an ungrounded \(H\)-asymptotic extension of \(K\). Then for \(\gamma\in\Gamma\) the quantities \(\operatorname{nwt}_{A}(\gamma)\leqslant 1\) and \(v^{\rm e}_{A}(\gamma)\) do not change when passing from \(K\) to \(L\); in particular,_
\[\mathscr{E}^{\rm e}(A)\ =\ \bigl{\{}\gamma\in\Gamma:{\rm nwt}_{A}(\gamma)=1 \bigr{\}}\ =\ \mathscr{E}^{\rm e}_{L}(A)\cap\Gamma.\]
This leads to a variant of Corollary 1.5.20:
**Corollary 1.8.11**.: _Suppose \(K\) is \(\lambda\)-free. Then \(|\mathscr{E}^{\rm e}(A)|\leqslant{\rm order}\,A\) for all \(A\in K[\partial]^{\neq}\)._
Proof.: By [ADH, 10.1.3], \(K\) is pre-d-valued, hence by [ADH, 11.7.18] it has an \(\omega\)-free \(H\)-asymptotic extension. It remains to appeal to Corollaries 1.5.5 and 1.8.10.
For completeness we next state a version of Lemma 1.8.9 for \({\rm ndeg}\,P=0\); the proof using [ADH, 13.7.9] is similar, but simpler, and hence omitted.
**Lemma 1.8.12**.: _Suppose \(K\) is \(\lambda\)-free and \({\rm ndeg}\,P=0\). Then there are an active \(\phi_{0}\) in \(K\) and \(a\in K^{\times}\) such that \(P^{f}\,\sim^{\flat}_{\phi_{0}}\,a\) for all active \(f\preccurlyeq\phi_{0}\) in all \(H\)-asymptotic extensions of \(K\)._
In particular, for \(K\), \(P\) as in Lemma 1.8.12, no \(H\)-asymptotic extension of \(K\) contains any \(y\preccurlyeq 1\) such that \(P(y)=0\).
For general \(P\) and \(\omega\)-free \(K\) we can still do better than stated earlier:
**Lemma 1.8.13**.: _Suppose \(K\) is \(\omega\)-free. Then there are \(w\in\mathbb{N}\), \(A\in K[Y]^{\neq}\), and an active \(\phi_{0}\) in \(K\) such that for all active \(f\preccurlyeq\phi_{0}\) in all \(H\)-asymptotic extensions of \(K\),_
\[P^{f}\ \sim^{\flat}_{\phi_{0}}\ (f/\phi_{0})^{w}A(Y)(Y^{\prime})^{w}.\]
_For such \(w\), \(A\), \(\phi_{0}\) we have for any ungrounded \(H\)-asymptotic extension \(L\) of \(K\),_
\[\operatorname{nwt}_{L}P\ =\ w,\quad\operatorname{ndeg}_{L}P\ =\ \deg A+w,\quad\operatorname{nmul}_{L}P\ =\ \operatorname{mul}A+w,\quad v_{L}^{\mathrm{e}}(P)\ =\ v(A)-wv(\phi_{0}).\]
Proof.: By [ADH, 13.6.11] we have active \(\phi_{0}\) in \(K\) and \(A\in K[Y]^{\neq}\) such that
\[P^{\phi_{0}}\ =\ A\cdot(Y^{\prime})^{w}+R,\quad w:=\operatorname{nwt}P,\quad R \in K^{\phi_{0}}\{Y\},\ R\prec^{\flat}_{\phi_{0}}P^{\phi_{0}}.\]
(Here \(\phi_{0}\) and \(A\) are the \(e\) and \(aA\) in [ADH, 13.6.11].) The rest of the argument is just like in the second part of the proof of Lemma 1.8.9.
**Remarks on newton position**.: For the next lemma we put ourselves in the setting of [ADH, 14.3]: \(K\) is \(\omega\)-free, \(P\in K\{Y\}^{\neq}\), and \(a\) ranges over \(K\). Recall that \(P\) is said to be in _newton position at \(a\)_ if \(\operatorname{nmul}P_{+a}=1\).
Suppose \(P\) is in newton position at \(a\); then \(A:=L_{P_{+a}}\in K[\partial]^{\neq}\). Recall the definition of \(v^{\mathrm{e}}(P,a)=v_{K}^{\mathrm{e}}(P,a)\in\Gamma_{\infty}\): if \(P(a)=0\), then \(v^{\mathrm{e}}(P,a)=\infty\); if \(P(a)\neq 0\), then \(v^{\mathrm{e}}(P,a)=vg\) where \(g\in K^{\times}\) satisfies \(P(a)\asymp(P_{+a})^{\phi}_{1,\times g}\) eventually, that is, \(v_{A^{\phi}}(vg)=v\big{(}P(a)\big{)}\) eventually. In the latter case \(\operatorname{nwt}_{A}(vg)=0\), that is, \(vg\notin\mathscr{E}^{\mathrm{e}}(A)\), and \(v_{A}^{\mathrm{e}}(vg)=v\big{(}P(a)\big{)}\), since \(v_{A^{\phi}}(vg)=v_{A}^{\mathrm{e}}(vg)+\operatorname{nwt}_{A}(vg)v\phi\) eventually. For any \(f\in K^{\times}\), \(P^{f}\) is also in newton position at \(a\), and \(v^{\mathrm{e}}(P^{f},a)=v^{\mathrm{e}}(P,a)\). Note also that \(P_{+a}\) is in newton position at \(0\) and \(v^{\mathrm{e}}(P_{+a},0)=v^{\mathrm{e}}(P,a)\). Moreover, in passing from \(K\) to an \(\omega\)-free extension, \(P\) remains in newton position at \(a\) and \(v^{\mathrm{e}}(P,a)\) does not change, by Lemma 1.8.13.
_In the rest of this subsection \(P\) is in newton position at \(a\), and \(\widehat{a}\) is an element of an \(H\)-asymptotic extension \(\widehat{K}\) of \(K\) such that \(P(\widehat{a})=0\)._ (We allow \(\widehat{a}\in K\).) We first generalize part of [ADH, 14.3.1], with a similar proof:
**Lemma 1.8.14**.: \(v^{\mathrm{e}}(P,a)>0\) _and \(v(\widehat{a}-a)\leqslant v^{\mathrm{e}}(P,a)\)._
Proof.: This is clear if \(P(a)=0\). Assume \(P(a)\neq 0\). Replace \(P\), \(\widehat{a}\), \(a\) by \(P_{+a}\), \(\widehat{a}-a\), \(0\), respectively, to arrange \(a=0\). Recall that \(K^{\phi}\) has small derivation. Set \(\gamma:=v^{\mathrm{e}}(P,0)\in\Gamma\) and take \(g\in K\) with \(vg=\gamma\). Now \((P_{1}^{\phi})_{\times g}\asymp P_{0}\), eventually, and \(\operatorname{nmul}P=1\) gives \(P(0)\prec P_{1}^{\phi}\), eventually, hence \(g\prec 1\). Moreover, for \(j\geqslant 2\), \(P_{1}^{\phi}\succcurlyeq P_{j}^{\phi}\), eventually, so \((P_{1}^{\phi})_{\times g}\succ(P_{j}^{\phi})_{\times g}\), eventually, by [ADH, 6.1.3]. Thus for \(j\geqslant 1\) we have \((P_{\times g}^{\phi})_{j}=(P_{j}^{\phi})_{\times g}\preccurlyeq P(0)\), eventually; in particular, there is no \(y\prec 1\) in any \(H\)-asymptotic extension of \(K\) with \(P_{\times g}(y)=0\). Since \(P(\widehat{a})=0\), this yields \(v(\widehat{a})\leqslant\gamma=v^{\mathrm{e}}(P,0)\).
Here is a situation where \(v(\widehat{a}-a)=v^{\mathrm{e}}(P,a)\):
**Lemma 1.8.15**.: _Suppose \(\Psi\) is cofinal in \(\Psi_{\widehat{K}}\), \(\widehat{a}-a\prec 1\), and \(v(\widehat{a}-a)\notin\mathscr{E}^{\mathrm{e}}_{\widehat{K}}(A)\) where \(A:=L_{P_{+a}}\). Then \(v(\widehat{a}-a)=v^{\mathrm{e}}(P,a)\)._
Proof.: Note that \(\widehat{K}\) is ungrounded, so \(\mathscr{E}^{\mathrm{e}}_{\widehat{K}}(A)\) is defined, and \(\widehat{K}\) is pre-d-valued. As in the proof of Lemma 1.8.14 we arrange \(a=0\). As an asymptotic subfield of \(\widehat{K}\), \(K\langle\widehat{a}\rangle\) is pre-d-valued. Hence \(K\langle\widehat{a}\rangle\) is \(\omega\)-free by Theorem 1.4.1. The remarks preceding Lemma 1.8.14 then allow us to replace \(K\) by \(K\langle\widehat{a}\rangle\) to arrange \(\widehat{a}\in K\).
The case \(\widehat{a}=0\) is trivial, so assume \(0\neq\widehat{a}\prec 1\). Now \(\operatorname{nmul}P=1\) gives for \(j\geqslant 2\) that \(P_{1}^{\phi}\succcurlyeq P_{j}^{\phi}\), eventually, hence \((P_{1}^{\phi})_{\times\widehat{a}}\succ(P_{j}^{\phi})_{\times\widehat{a}}\), eventually, by [ADH, 6.1.3]. Moreover, \(P_{1}(\widehat{a})=A(\widehat{a})=A^{\phi}(\widehat{a})\asymp A^{\phi}\widehat{a}\), eventually, using \(v(\widehat{a})\notin\mathscr{E}_{\widehat{K}}^{\rm e}(A)\) in the last step, so for \(j\geqslant 2\), eventually
\[P_{1}(\widehat{a})\ \asymp\ (P_{1}^{\phi})_{\times\widehat{a}}\ \succ\ (P_{j}^{\phi})_{ \times\widehat{a}}\ \succcurlyeq P_{j}^{\phi}(\widehat{a})\ =\ P_{j}(\widehat{a}).\]
Also \(P_{1}(\widehat{a})\neq 0\), since \(A^{\phi}\widehat{a}\neq 0\). Then \(P(\widehat{a})=0\) gives \(P(0)\asymp P_{1}(\widehat{a})\). Thus \(v\big{(}P(0)\big{)}=v_{A^{\phi}}\big{(}v(\widehat{a})\big{)}\), eventually, so \(v^{\rm e}(P,0)=v(\widehat{a})\) by the definition of \(v^{\rm e}(P,0)\).
**Corollary 1.8.16**.: _Suppose \(\widehat{K}\) is ungrounded and equipped with an ordering making it a pre-\(H\)-field, and assume \(\widehat{a}-a\prec 1\) and \(v(\widehat{a}-a)\notin\mathscr{E}_{\widehat{K}}^{\rm e}(A)\) where \(A:=L_{P_{+a}}\). Then \(v(\widehat{a}-a)=v^{\rm e}(P,a)\)._
Proof.: In view of Lemma 1.5.1 and using [ADH] we can extend \(\widehat{K}\) to arrange that it is an \(\omega\)-free newtonian Liouville closed \(H\)-field. Next, let \(H\) be the real closure of the \(H\)-field hull of \(K\langle\widehat{a}\rangle\), all inside \(\widehat{K}\). Then \(H\) is \(\omega\)-free, by Theorem 1.4.1, and hence has a Newton-Liouville closure \(L\) inside \(\widehat{K}\) [ADH]. Since \(L\preccurlyeq\widehat{K}\) by [ADH], we have \(v(\widehat{a}-a)\notin\mathscr{E}_{L}^{\rm e}(A)\). Now \(L\) is d-algebraic over \(K\) by [ADH, 14.5.9], so \(\Psi\) is cofinal in \(\Psi_{L}\) by Theorem 1.4.1. It remains to apply Lemma 1.8.15.
### Newton position in the order \(1\) case
_In this subsection \(K\) is \(\lambda\)-free, \(P\in K\{Y\}\) has order \(1\), and \(a\in K\)._ We basically copy here a definition and two lemmas from [ADH, 14.3] with the \(\omega\)-free assumption there replaced by the weaker \(\lambda\)-freeness, at the cost of restricting \(P\) to have order \(1\).
Suppose \(\operatorname{nmul}P=1\), \(P_{0}\neq 0\). Then [ADH] yields \(g\in K^{\times}\) with \(P_{0}\asymp P_{1,\times g}^{\phi}\), eventually. Since \(P_{0}\prec P_{1}^{\phi}\), eventually, we have \(g\prec 1\). Moreover, if \(i\geqslant 2\), then \(P_{1}^{\phi}\succcurlyeq P_{i}^{\phi}\), eventually, hence \(P_{1,\times g}^{\phi}\succ P_{i,\times g}^{\phi}\), eventually. Thus \(\operatorname{ndeg}P_{\times g}=1\).
Define \(P\) to be in **newton position at \(a\)** if \(\operatorname{nmul}P_{+a}=1\). Suppose \(P\) is in newton position at \(a\); set \(Q:=P_{+a}\), so \(Q(0)=P(a)\). If \(P(a)\neq 0\), then the above yields \(g\in K^{\times}\) with \(P(a)=Q(0)\asymp Q_{1,\times g}^{\phi}\), eventually; as \(vg\) does not depend on the choice of such \(g\), we set \(v^{\rm e}(P,a):=vg\). If \(P(a)=0\), then we set \(v^{\rm e}(P,a):=\infty\in\Gamma_{\infty}\). In passing from \(K\) to a \(\lambda\)-free extension, \(P\) remains in newton position at \(a\) and \(v^{\rm e}(P,a)\) does not change, by Lemma 1.8.8. _In the rest of this subsection we assume \(P\) is in newton position at \(a\)_.
**Lemma 1.8.17**.: _If \(P(a)\neq 0\), then there exists \(b\in K\) with the following properties:_
* \(P\) _is in newton position at_ \(b\)_,_ \(v(a-b)=v^{\rm e}(P,a)\)_, and_ \(P(b)\prec P(a)\)_;_
* _for all_ \(b^{*}\in K\) _with_ \(v(a-b^{*})\geqslant v^{\rm e}(P,a)\)_:_ \(P(b^{*})\prec P(a)\Leftrightarrow a-b\sim a-b^{*}\)_;_
* _for all_ \(b^{*}\in K\)_, if_ \(a-b\sim a-b^{*}\)_, then_ \(P\) _is in newton position at_ \(b^{*}\) _and_ \(v^{\rm e}(P,b^{*})>v^{\rm e}(P,a)\)_._
This is shown as in [ADH, 14.3.2]. Next an analogue of [ADH, 14.3.3], with the same proof, but using Lemma 1.8.17 in place of [ADH, 14.3.2]:
**Lemma 1.8.18**.: _If there is no \(b\) with \(P(b)=0\) and \(v(a-b)=v^{\rm e}(P,a)\), then there is a divergent pc-sequence \((a_{\rho})_{\rho<\lambda}\) in \(K\), indexed by all ordinals \(\rho\) smaller than some infinite limit ordinal \(\lambda\), such that \(a_{0}=a\), \(v(a_{\rho}-a_{\rho^{\prime}})=v^{\rm e}(P,a_{\rho})\) for all \(\rho<\rho^{\prime}<\lambda\), and \(P(a_{\rho})\rightsquigarrow 0\)._
The next result is proved just like Lemma 1.8.14:
**Lemma 1.8.19**.: _If \(P(\widehat{a})=0\) with \(\widehat{a}\) in an \(H\)-asymptotic extension of \(K\), then \(v^{\rm e}(P,a)>0\) and \(v(\widehat{a}-a)\leqslant v^{\rm e}(P,a)\)._
Next an analogue of Lemma 1.8.15 using Propositions 1.4.8 and 1.4.12 in its proof:
**Lemma 1.8.20**.: _Suppose \(\widehat{a}\) in an ungrounded \(H\)-asymptotic extension \(\widehat{K}\) of \(K\) satisfies \(P(\widehat{a})=0\), \(\widehat{a}-a\prec 1\), and \(v(\widehat{a}-a)\notin\mathscr{E}_{\widehat{K}}^{\rm e}(A)\), where \(A:=L_{P_{+a}}\). Then \(v(\widehat{a}-a)=v^{\rm e}(P,a)\)._
Proof.: We arrange \(a=0\) and assume \(\widehat{a}\neq 0\). Then \(L:=K\langle\widehat{a}\rangle\) has asymptotic integration, by Proposition 1.4.12, and \(v(\widehat{a})\notin\mathscr{E}_{L}^{\rm e}(A)\) by Lemma 1.5.10 (applied with \(L\), \(\widehat{K}\) in place of \(K\), \(L\)). Moreover, \(\Psi\) is cofinal in \(\Psi_{L}\) by Proposition 1.4.8. As in the proof of Lemma 1.8.15 this leads to \(P_{1}(\widehat{a})=A(\widehat{a})=A^{\phi}(\widehat{a})\asymp A^{\phi} \widehat{a}\), eventually, and then as in the rest of that proof we derive \(v^{\rm e}(P,0)=v(\widehat{a})\).
### Zeros of differential polynomials of order and degree \(1\)
_In this subsection \(K\) has asymptotic integration._ We fix a differential polynomial
\[P(Y)\ =\ a(Y^{\prime}+gY-u)\in K\{Y\}\qquad(a,g,u\in K,\ a\neq 0),\]
and set \(A:=L_{P}=a(\partial+g)\in K[\partial]\). Section 1.2 gives for \(y\in K\) the equivalence \(y\in{\rm I}(K)\Leftrightarrow vy>\Psi\), so by Section 1.5, \(\mathscr{E}^{\rm e}(A)=\emptyset\Leftrightarrow g\notin{\rm I}(K)+K^{\dagger}\), and \(v(\ker_{\widehat{K}}^{\neq}A)\subseteq\mathscr{E}^{\rm e}(A)\) for each immediate \(H\)-asymptotic field extension \(\widehat{K}\) of \(K\). Thus:
**Lemma 1.8.21**.: _If \(g\notin{\rm I}(K)+K^{\dagger}\), then each immediate \(H\)-asymptotic extension of \(K\) contains at most one \(y\) such that \(P(y)=0\)._
If \(\partial K=K\) and \(g\in K^{\dagger}\), then \(P(y)=0\) for some \(y\in K\), and if moreover \(K\) is d-valued, then any \(y\) in any immediate \(H\)-asymptotic extension of \(K\) with \(P(y)=0\) lies in \(K\). (Lemma 1.2.2.) If \(y\prec 1\) in an immediate \(H\)-asymptotic extension of \(K\) satisfies \(P(y)=0\), then by [ADH, 11.2.3(ii), 11.2.1] we have
\[\operatorname{nmul}P\ =\ \operatorname{nmul}P_{+y}\ =\ \operatorname{mul}P_{+y}\ =\ 1.\]
Lemma 1.8.18 yields the following partial converse (a variant of [11, Lemma 8.5]):
**Corollary 1.8.22**.: _Suppose \(K\) is \(\lambda\)-free and \(\operatorname{nmul}P=1\). Then there is a \(y\prec 1\) in an immediate \(H\)-asymptotic extension of \(K\) with \(P(y)=0\)._
Proof.: Replacing \(K\) by its henselization and using [ADH, 11.6.7], we arrange that \(K\) is henselian. Suppose that \(P\) has no zero in \(\mathcal{o}\). Then \(P\) is in newton position at \(0\), and so Lemma 1.8.18 yields a divergent pc-sequence \((a_{\rho})_{\rho<\lambda}\) in \(K\), indexed by all ordinals \(\rho\) smaller than some infinite limit ordinal \(\lambda\), with \(a_{0}=0\), \(v(a_{\rho}-a_{\rho^{\prime}})=v^{\rm e}(P,a_{\rho})\) for all \(\rho<\rho^{\prime}<\lambda\), and \(P(a_{\rho})\rightsquigarrow 0\). Since \(\deg P=\operatorname{order}P=1\) and \(K\) is henselian, \(P\) is a minimal differential polynomial of \((a_{\rho})\) over \(K\), and \(v(a_{\rho})=v^{\rm e}(P,0)>0\) for all \(\rho>0\). Hence [ADH, 9.7.6] yields a pseudolimit \(y\) of \((a_{\rho})\) in an immediate asymptotic extension of \(K\) with \(P(y)=0\) and \(y\prec 1\), as required.
We say that \(P\) is **proper** if \(u\neq 0\) and \(g+u^{\dagger}\succ^{\flat}1\). If \(P\) is proper, then so is \(bP\) for each \(b\in K^{\times}\). For \(\mathfrak{m}\in K^{\times}\) we have
\[P_{\times\mathfrak{m}}\ =\ a\mathfrak{m}\big{(}Y^{\prime}+(g+\mathfrak{m}^{ \dagger})Y-u\mathfrak{m}^{-1}\big{)},\]
hence if \(P\) is proper, then so is \(P_{\times\mathfrak{m}}\). If \(u\neq 0\), then \(P\) is proper iff \(a^{-1}A_{\ltimes u}=\partial+(g+u^{\dagger})\) is steep, as defined in Section 1.5. Note that
\[P^{\phi}\ =\ a\phi\big{(}Y^{\prime}+(g/\phi)Y-(u/\phi)\big{)}.\]
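Both identities follow by direct computation: \((\mathfrak{m}Y)^{\prime}=\mathfrak{m}\big(Y^{\prime}+\mathfrak{m}^{\dagger}Y\big)\) gives
\[P(\mathfrak{m}Y)\ =\ a\big(\mathfrak{m}Y^{\prime}+\mathfrak{m}^{\dagger}\mathfrak{m}Y+g\mathfrak{m}Y-u\big)\ =\ a\mathfrak{m}\big(Y^{\prime}+(g+\mathfrak{m}^{\dagger})Y-u\mathfrak{m}^{-1}\big),\]
and the formula for \(P^{\phi}\) results from rewriting \(\partial=\phi\delta\), with \(\delta\) the derivation of \(K^{\phi}\), so that the \(Y^{\prime}\) of \(K\{Y\}\) corresponds to \(\phi Y^{\prime}\) in \(K^{\phi}\{Y\}\).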
**Lemma 1.8.23**.: _Suppose \(K\) has small derivation, and \(P\) is proper. Then \(P^{\phi}\) is proper \((\)with respect to \(K^{\phi})\) for all \(\phi\preccurlyeq 1\)._
Proof.: Let \(\phi\preccurlyeq 1\). Then we have \(\phi\asymp^{\flat}1\) and hence \(\phi^{\dagger}\asymp^{\flat}\phi^{\prime}\preccurlyeq 1\prec^{\flat}g+u^{\dagger}\). Thus
\[g+(u/\phi)^{\dagger}=(g+u^{\dagger})-\phi^{\dagger}\sim^{\flat}g+u^{\dagger}\succ^{\flat}1\succcurlyeq\phi,\]
hence \((g/\phi)+\phi^{-1}(u/\phi)^{\dagger}\succ^{\flat}1\) and so \((g/\phi)+\phi^{-1}(u/\phi)^{\dagger}\succ^{\flat}_{\phi}1\). Therefore \(P^{\phi}\) is proper (with respect to \(K^{\phi}\)).
**Lemma 1.8.24**.: _Suppose \(K\) is \(\lambda\)-free and \(u\neq 0\). Then there is an active \(\phi_{0}\) in \(K\) such that for all \(\phi\prec\phi_{0}\), \(P^{\phi}\) is proper with \(g+(u/\phi)^{\dagger}\sim g+(u/\phi_{0})^{\dagger}\)._
Proof.: The argument before Corollary 1.5.15 yields an active \(\phi_{0}\) in \(K\) such that \(u^{\dagger}+g-\phi^{\dagger}\succcurlyeq\phi_{0}\) for all \(\phi\prec\phi_{0}\). For such \(\phi\) we have \(\phi^{\dagger}-\phi_{0}^{\dagger}\prec\phi_{0}\) as noted just before [ADH, 11.5.3], and so \((u/\phi)^{\dagger}+g\sim(u/\phi_{0})^{\dagger}+g\). The argument before Corollary 1.5.15 also gives \(\phi^{-1}(u/\phi)^{\dagger}+g/\phi\succ^{\flat}_{\phi}1\) eventually, and if \(\phi^{-1}(u/\phi)^{\dagger}+g/\phi\succ^{\flat}_{\phi}1\), then \(P^{\phi}\) is proper.
**Lemma 1.8.25**.: _We have \(\operatorname{nmul}P=1\) iff \(u\preccurlyeq g\) or \(u\in\operatorname{I}(K)\). Moreover, if \(K\) is \(\lambda\)-free, \(\operatorname{nmul}P=1\), and \(u\neq 0\), then \(u\prec^{\flat}_{\phi}g+(u/\phi)^{\dagger}\), eventually._
Proof.: For the equivalence, note that the identity above for \(P^{\phi}\) yields:
\[\operatorname{nmul}P=0\iff u\succcurlyeq g,\text{ and }u/\phi\succcurlyeq 1\text{ eventually.}\]
Suppose \(K\) is \(\lambda\)-free, \(\operatorname{nmul}P=1\), and \(u\neq 0\). If \(u\in\operatorname{I}(K)\), then \(u\prec\phi\prec^{\flat}_{\phi}g+(u/\phi)^{\dagger}\), eventually, by Lemma 1.8.24. Suppose \(u\notin\operatorname{I}(K)\). Then \(v(u)\in\Psi^{\downarrow}\) and \(u\preccurlyeq g\). Hence by [1, 9.2.11] we have \((u/\phi)^{\dagger}\prec u\preccurlyeq g\), eventually, and thus \(u\preccurlyeq g+(u/\phi)^{\dagger}\), eventually. Thus \(u\prec^{\flat}_{\phi}g+(u/\phi)^{\dagger}\), eventually.
Assume now \(P(y)=0\) with \(y\) in an immediate \(H\)-asymptotic extension of \(K\); so \(A(y)=u\). Note: if \(vy\in\Gamma\setminus\mathscr{E}^{\mathrm{e}}(A)\), then \(u\neq 0\). From Lemma 1.5.14 we get:
**Lemma 1.8.26**.: _If \(K\) has small derivation, \(P\) is proper, and \(vy\in\Gamma\setminus\mathscr{E}^{\mathrm{e}}(A)\), then \(y\sim u/(g+u^{\dagger})\)._
By Lemmas 1.8.24 and 1.8.26, and using Lemma 1.8.25 for the last part:
**Corollary 1.8.27**.: _If \(K\) is \(\lambda\)-free and \(vy\in\Gamma\setminus\mathscr{E}^{\mathrm{e}}(A)\), then_
\[y\sim u/\big{(}g+(u/\phi)^{\dagger}\big{)}\quad\text{ eventually.}\]
_If in addition \(\operatorname{nmul}P=1\), then \(y\prec 1\)._
### A characterization of \(1\)-linear newtonianity
_In this subsection_ \(K\) _has asymptotic integration._ We first expand [ADH, 14.2.4]:
**Proposition 1.8.28**.: _The following are equivalent:_
1. \(K\) _is_ \(1\)_-linearly newtonian;_
2. _every_ \(P\in K\{Y\}\) _with_ \(\operatorname{nmul}P=\deg P=1\) _and_ \(\operatorname{order}P\leqslant 1\) _has a zero in_ \(\circ\)_;_
3. \(K\) _is_ \(\mathrm{d}\)_-valued,_ \(\lambda\)_-free, and_ \(1\)_-linearly surjective, with_ \(\operatorname{I}(K)\subseteq K^{\dagger}\)_._
Proof.: The equivalence of (i) and (ii) is [ADH, 14.2.4], and the implication (i) \(\Rightarrow\) (iii) follows from [ADH, 14.2.2, 14.2.3, 14.2.5]. To show (iii) \(\Rightarrow\) (ii), suppose (iii) holds, and let \(g,u\in K\) and \(P=Y^{\prime}+gY-u\) with \(\operatorname{nmul}P=1\). We need to find \(y\in\mathcal{o}\) such that \(P(y)=0\). Corollary 1.8.22 gives an element \(y\prec 1\) in an immediate \(H\)-asymptotic extension \(L\) of \(K\) with \(P(y)=0\). It suffices to show that then \(y\in K\) (and thus \(y\in\mathcal{o}\)). If \(g\notin K^{\dagger}\), then this follows from Lemma 1.8.21, using \(\operatorname{I}(K)\subseteq K^{\dagger}\) and \(1\)-linear surjectivity of \(K\); if \(g\in K^{\dagger}\), then this follows from Lemma 1.2.2 and \(\partial K=K\).
By the next corollary, each Liouville closed \(H\)-field is \(1\)-linearly newtonian:
**Corollary 1.8.29**.: _Suppose \(K^{\dagger}=K\). Then the following are equivalent:_
1. \(K\) _is_ \(1\)_-linearly newtonian;_
2. \(K\) _is_ \(\operatorname{d}\)_-valued and_ \(1\)_-linearly surjective;_
3. \(K\) _is_ \(\operatorname{d}\)_-valued and_ \(\partial K=K\)_._
Proof.: Note that \(K\) is \(\lambda\)-free by [ADH, remarks following 11.6.2]. Hence the equivalence of (i) and (ii) follows from Proposition 1.8.28. For the equivalence of (ii) with (iii), see [ADH, example following 5.5.22].
**Linear newtonianity descends.**_In this subsection \(H\) is \(\mathrm{d}\)-valued with valuation ring \(\mathcal{O}\) and constant field \(C\). Let \(r\in\mathbb{N}^{\geqslant 1}\)._ If \(H\) is \(\omega\)-free, \(\Gamma\) is divisible, and \(H\) has a newtonian algebraic extension \(K=H(C_{K})\), then \(H\) is also newtonian, by [ADH, 14.5.6]. Here is an analogue of this for \(r\)-linear newtonianity:
**Lemma 1.8.30**.: _Let \(K=H(C_{K})\) be an algebraic asymptotic extension of \(H\) which is \(r\)-linearly newtonian. Then \(H\) is \(r\)-linearly newtonian._
Proof.: Take a basis \(B\) of the \(C\)-linear space \(C_{K}\) with \(1\in B\), and let \(b\) range over \(B\). We have \(H(C_{K})=H[C_{K}]\), and \(H\) is linearly disjoint from \(C_{K}\) over \(C\)[ADH, 4.6.16], so \(B\) is a basis of the \(H\)-linear space \(H[C_{K}]\). Let \(P\in H\{Y\}\) with \(\deg P=1\) and \(\operatorname{order}(P)\leqslant r\) be quasilinear; then \(P\) as element of \(K\{Y\}\) remains quasilinear, since \(\Gamma_{K}=\Gamma\) by [ADH, 10.5.15]. Let \(y\in\mathcal{O}_{K}\) be a zero of \(P\). Take \(y_{b}\in H\) (\(b\in B\)) with \(y_{b}=0\) for all but finitely many \(b\) and \(y=\sum_{b}y_{b}\,b\). Then \(y_{b}\in\mathcal{O}\) for all \(b\), and
\[0\ =\ P(y)\ =\ P_{0}+P_{1}(y)\ =\ P_{0}+\sum_{b}P_{1}(y_{b})b,\]
so \(P(y_{1})=P_{0}+P_{1}(y_{1})=0\).
Thus if \(H[i]\) with \(i^{2}=-1\) is \(r\)-linearly newtonian, then \(H\) is \(r\)-linearly newtonian.
**Cases of bounded order.**_In the rest of this section \(r\in\mathbb{N}^{\geqslant 1}\)._ Define \(K\) to be **strongly \(r\)-newtonian** if \(K\) is \(r\)-newtonian and for each divergent pc-sequence \((a_{\rho})\) in \(K\) with minimal differential polynomial \(G(Y)\) over \(K\) of order \(\leqslant r\) we have \(\operatorname{ndeg}_{\boldsymbol{a}}G=1\), where \(\boldsymbol{a}:=c_{K}(a_{\rho})\). Given \(P\in K\{Y\}^{\neq}\), a **\(K\)-external zero of \(P\)** is an element \(\widehat{a}\) of some immediate asymptotic extension \(\widehat{K}\) of \(K\) such that \(P(\widehat{a})=0\) and \(\widehat{a}\notin K\). Now [ADH, 14.1.11] extends as follows with the same proof:
**Lemma 1.8.31**.: _Suppose \(K\) has rational asymptotic integration and \(K\) is strongly \(r\)-newtonian. Then no \(P\in K\{Y\}^{\neq}\) of order \(\leqslant r\) can have a \(K\)-external zero._
The following is important in certain inductions on the order.
**Lemma 1.8.32**.: _Suppose \(K\) has asymptotic integration, is \(1\)-linearly newtonian, and \(r\)-linearly closed. Then \(K\) is \(r\)-linearly newtonian._
Proof.: Note that \(K\) is \(\lambda\)-free and d-valued by Proposition 1.8.28. Let \(P\in K\{Y\}\) be such that \(\operatorname{nmul}P=\deg P=1\) and \(\operatorname{order}P\leqslant r\); by [ADH, 14.2.6] it suffices to show that then \(P\) has a zero in \(\mathcal{o}\). By [ADH, proof of 13.7.10] we can compositionally conjugate, pass to an elementary extension, and multiply by an element of \(K^{\times}\) to arrange that \(K\) has small derivation, \(P_{0}\prec^{\flat}1\), and \(P_{1}\asymp 1\). Let \(A:=L_{P}\). The valuation ring of the flattening \((K,v^{\flat})\) is \(1\)-linearly surjective by [ADH, 14.2.1], so all operators in \(K[\partial]\) of order \(1\) are neatly surjective in the sense of \((K,v^{\flat})\). Since \(A\) splits over \(K\), we obtain from [ADH, 5.6.10(ii)] that \(A\) is neatly surjective in the sense of \((K,v^{\flat})\). As \(v^{\flat}(A)=0\) and \(v^{\flat}(P_{0})>0\), this gives \(y\in K\) with \(v^{\flat}(y)>0\) such that \(P_{0}+A(y)=0\), that is, \(P(y)=0\).
Using the terminology of \(K\)-external zeros, we can add another item to the list of equivalent statements in Proposition 1.8.28:
**Lemma 1.8.33**.: _Suppose \(K\) has asymptotic integration. Then we have:_
\(K\) _is \(1\)-linearly newtonian \(\iff\)\(K\) is \(\lambda\)-free and no \(P\in K\{Y\}\) with \(\deg P=1\)_
_and \(\operatorname{order}P=1\) has a \(K\)-external zero._
Proof.: Suppose \(K\) is \(1\)-linearly newtonian. Then by (i) \(\Rightarrow\) (iii) in Proposition 1.8.28, \(K\) is \(\lambda\)-free, d-valued, \(1\)-linearly surjective, and \(\operatorname{I}(K)\subseteq K^{\dagger}\). Let \(P\in K\{Y\}\) with \(\deg P=\operatorname{order}P=1\), and let \(y\) in an immediate asymptotic extension \(L\) of \(K\) satisfy \(P(y)=0\). Then [ADH, 9.1.2] and Corollary 1.2.11 give \(L^{\dagger}\cap K=K^{\dagger}\), so \(y\in K\) by Lemmas 1.2.2 and 1.2.3. This gives the direction \(\Rightarrow\). The converse follows from Corollary 1.8.22 and (ii) \(\Rightarrow\) (i) in Proposition 1.8.28.
Here is a higher-order version of Lemma 1.8.33:
**Lemma 1.8.34**.: _Suppose \(K\) is \(\omega\)-free. Then_
\(K\) _is \(r\)-linearly newtonian \(\iff\) no \(P\in K\{Y\}\) with \(\deg P=1\) and \(\operatorname{order}P\leqslant r\)_
_has a \(K\)-external zero._
Proof.: Suppose \(K\) is \(r\)-linearly newtonian. Then \(K\) is d-valued by Lemma 1.2.9. Let \(P\in K\{Y\}\) be of degree \(1\) and order \(\leqslant r\), and let \(y\) be in an immediate asymptotic extension \(L\) of \(K\) with \(P(y)=0\). Then \(A(y)=b\) for \(A:=L_{P}\in K[\partial]\), \(b:=-P(0)\in K\). By [ADH, 14.2.2] there is also a \(z\in K\) with \(A(z)=b\), hence \(y-z\in\ker_{L}A=\ker A\) by [ADH, remarks after 14.2.9] and so \(y\in K\). This gives the direction \(\Rightarrow\). For the converse note that every quasilinear \(P\in K\{Y\}\) has a zero \(\widehat{a}\preccurlyeq 1\) in an immediate asymptotic extension of \(K\) by [ADH, 14.0.1 and subsequent remarks].
We also have the following \(r\)-version of [ADH, 14.0.1]:
**Proposition 1.8.35**.: _If \(K\) is \(\lambda\)-free and no \(P\in K\{Y\}^{\neq}\) of order \(\leqslant r\) has a \(K\)-external zero, then \(K\) is \(\omega\)-free and \(r\)-newtonian._
Proof.: The \(\omega\)-freeness follows as before from [ADH, 11.7.13]. The rest of the proof is as in [ADH, p. 653] with \(P\) restricted to have order \(\leqslant r\).
**Application to solving asymptotic equations**.: _Here \(K\) is \(\mathrm{d}\)-valued, \(\omega\)-free, with small derivation, and \(\mathfrak{M}\) is a monomial group of \(K\)._ We let \(a\), \(b\), \(y\) range over \(K\). In addition we fix a \(P\in K\{Y\}^{\neq}\) of order \(\leqslant r\) and a \(\preccurlyeq\)-closed set \(\mathcal{E}\subseteq K^{\times}\). (Recall that \(r\geqslant 1\).) This gives the asymptotic equation
\[P(Y)=0,\qquad Y\in\mathcal{E}. \tag{E}\]
This gives the following \(r\)-version of [ADH, 13.8.8], with basically the same proof:
**Proposition 1.8.36**.: _Suppose \(\Gamma\) is divisible, no \(Q\in K\{Y\}^{\neq}\) of order \(\leqslant r\) has a \(K\)-external zero, \(d:=\mathrm{ndeg}_{\mathcal{E}}\,P\geqslant 1\), and there is no \(f\in\mathcal{E}\cup\{0\}\) with \(\mathrm{mul}\,P_{+f}=d\). Then_ (E) _has an unraveler._
Here is an \(r\)-version of [ADH, 14.3.4] with the same proof:
**Lemma 1.8.37**.: _Suppose \(K\) is \(r\)-newtonian. Let \(g\in K^{\times}\) be an approximate zero of \(P\) with \(\mathrm{ndeg}\,P_{\times g}=1\). Then there exists \(y\sim g\) such that \(P(y)=0\)._
For the next three results we assume the following:
\(C\) _is algebraically closed, \(\Gamma\) is divisible, and no \(Q\in K\{Y\}^{\neq}\) of order \(\leqslant r\) has a \(K\)-external zero._
These three results are \(r\)-versions of [ADH, 14.3.5, 14.3.6, 14.3.7] with the same proofs, using Propositions 1.8.35 and 1.8.36 instead of [ADH, 14.0.1, 13.8.8]:
**Proposition 1.8.38**.: _If \(\mathrm{ndeg}_{\mathcal{E}}\,P>\mathrm{mul}(P)=0\), then_ (E) _has a solution._
**Corollary 1.8.39**.: \(K\) _is weakly \(r\)-differentially closed._
**Corollary 1.8.40**.: _Suppose \(g\in K^{\times}\) is an approximate zero of \(P\). Then \(P(y)=0\) for some \(y\sim g\)._
**A useful equivalence**.: _Suppose \(K\) is \(\omega\)-free._ (No small derivation or monomial group assumed.) Recall that \(r\geqslant 1\). Here is an \(r\)-version of [159, 3.4]:
**Corollary 1.8.41**.: _The following are equivalent:_
* \(K\) _is_ \(r\)_-newtonian;_
* \(K\) _is strongly_ \(r\)_-newtonian;_
* _no_ \(P\in K\{Y\}^{\neq}\) _of order_ \(\leqslant r\) _has a_ \(K\)_-external zero._
Proof.: Since \(K\) is \(\omega\)-free it has rational asymptotic integration [ADH, p. 515]. Also, if \(K\) is \(1\)-newtonian, then \(K\) is henselian [ADH, p. 645] and d-valued [ADH, 14.2.5]. For (i) \(\Rightarrow\) (ii), use [159, 3.3], for (ii) \(\Rightarrow\) (iii), use Lemma 1.8.31, and for (iii) \(\Rightarrow\) (i), use Proposition 1.8.35.
Next an \(r\)-version of [ADH, 14.5.3]:
**Corollary 1.8.42**.: _Suppose \(K\) is \(r\)-newtonian, \(\Gamma\) is divisible, and \(C\) is algebraically closed. Then \(K\) is weakly \(r\)-differentially closed, so \(K\) is \((r+1)\)-linearly closed and thus \((r+1)\)-linearly newtonian._
Proof.: To show that \(K\) is weakly \(r\)-differentially closed we arrange by compositional conjugation and passing to a suitable elementary extension that \(K\) has small derivation and \(K\) has a monomial group. Then \(K\) is weakly \(r\)-differentially closed by Corollaries 1.8.39 and 1.8.41. The rest uses [ADH, 5.8.9] and Lemma 1.8.32.
**Complementing [ADH, 14.2.12]**\(({}^{*})\).: In this subsection \(P(Y)\in\mathcal{O}\{Y\}\) has order at most \(r\in\mathbb{N}^{\geqslant 1}\).
**Lemma 1.8.43**.: _Let \(y\in K^{\times}\), \(y^{\prime}\preccurlyeq y\prec 1\), and \(P(0)=P(y)\). Then \(L_{P}(y)\prec y\)._
Proof.: Induction on \(n\) gives \(y^{(n)}\preccurlyeq y^{(n-1)}\preccurlyeq\cdots\preccurlyeq y\prec 1\) for all \(n\). Hence if \(\boldsymbol{i}=(i_{0},\ldots,i_{r})\in\mathbb{N}^{1+r}\), \(|\boldsymbol{i}|\geqslant 2\), then \(y^{\boldsymbol{i}}=y^{i_{0}}(y^{\prime})^{i_{1}}\cdots(y^{(r)})^{i_{r}} \preccurlyeq y^{|\boldsymbol{i}|}\prec y\). Now
\[P(y)\ =\ P(0)+L_{P}(y)+\sum_{|\boldsymbol{i}|\geqslant 2}P_{\boldsymbol{i}}\,y^ {\boldsymbol{i}},\]
so \(L_{P}(y)+\sum_{|\boldsymbol{i}|\geqslant 2}P_{\boldsymbol{i}}\,y^{\boldsymbol{i}}=0\), and thus \(L_{P}(y)\prec y\).
We extend the residue map \(a\mapsto\operatorname{res}a\colon\mathcal{O}\to\boldsymbol{k}:=\operatorname {res}(K)\) to the ring morphism
\[p\mapsto\operatorname{res}p\,:\ \mathcal{O}[Y]\to\boldsymbol{k}[Y],\qquad Y\mapsto Y.\]
For \(w\in\mathbb{N}\) we let \(P_{[w]}\) be the isobaric part of \(P\) of weight \(w\), as in [ADH, 4.2]. Thus \(p:=P_{[0]}\in\mathcal{O}[Y]\).
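To illustrate: the weight of a monomial \(Y^{i_{0}}(Y^{\prime})^{i_{1}}\cdots(Y^{(r)})^{i_{r}}\) is \(i_{1}+2i_{2}+\cdots+ri_{r}\), so for example
\[P\ =\ Y^{3}+2YY^{\prime}+Y^{\prime\prime}\ \Longrightarrow\ P_{[0]}=Y^{3},\quad P_{[1]}=2YY^{\prime},\quad P_{[2]}=Y^{\prime\prime}.\]
In general \(P_{[0]}\) is the part of \(P\) free of \(Y^{\prime},\dots,Y^{(r)}\), which is why \(p=P_{[0]}\) lies in \(\mathcal{O}[Y]\).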
**Corollary 1.8.44**.: _Suppose the derivation of \(K\) is very small, and let \(a\in\mathcal{O}\), \(y\in\mathcal{o}\) with \(P(a)=P(a+y)\) and \((\operatorname{res}p)^{\prime}(\operatorname{res}a)\neq 0\). Then \(y^{\prime}\succcurlyeq y\)._
Proof.: Put \(R:=\sum_{w\geqslant 1}P_{[w]}=P-p\). Now \(a^{(n)}\prec 1\) for all \(n\geqslant 1\), so \(\bigl{(}\frac{\partial R}{\partial Y}\bigr{)}(a)\prec 1\). Towards a contradiction, assume \(y^{\prime}\prec y\). Then \(L_{P_{+a}}(y)\prec y\) by Lemma 1.8.43 applied to \(P_{+a}\) in place of \(P\). Induction on \(n\) gives \(y^{(n)}\prec y^{(n-1)}\prec\cdots\prec y\prec 1\) for all \(n\) and so \(L_{R_{+a}}(y)=\sum_{n}\big{(}\frac{\partial R}{\partial Y^{(n)}}\bigr{)}(a)y^ {(n)}\prec y\). Together with \(L_{P_{+a}}(y)=p^{\prime}(a)y+L_{R_{+a}}(y)\) and \(p^{\prime}(a)\asymp 1\) this yields the desired contradiction.
In the next corollary we assume that \(K\) has asymptotic integration. We let \(\phi\) range over active elements of \(K\), and we let \(\mathcal{o}^{\flat}_{\phi}=\{f\in\mathcal{o}:f^{\prime}\succcurlyeq f\phi\}\) be the maximal ideal of the flattened valuation ring of \(K^{\phi}\); cf. [ADH, pp. 406-407].
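Thus for \(f\neq 0\) we have \(f\in\mathcal{o}^{\flat}_{\phi}\) iff \(f\prec 1\) and \(f^{\dagger}\succcurlyeq\phi\), since \(f^{\prime}=ff^{\dagger}\).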
**Corollary 1.8.45**.: _Suppose \(K\) is \(r\)-newtonian. Let \(u\in\mathcal{O}\) and \(A\in\boldsymbol{k}[Y]\) be such that \(A(\operatorname{res}u)=0\), \(A^{\prime}(\operatorname{res}u)\neq 0\), and \(D_{P^{\phi}}\in\boldsymbol{k}^{\times}A\), eventually. Then \(P\) has a zero in \(u+\mathcal{o}\), and for all zeros \(a,b\in u+\mathcal{o}\) of \(P\) we have: \(a-b\in\mathcal{o}^{\flat}_{\phi}\), eventually._
Proof.: For the first claim, see [ADH, 14.2.12]. Suppose \(D_{P^{\phi}}\in\boldsymbol{k}^{\times}A\) and take \(\mathfrak{m}\in K^{\times}\) with \(\mathfrak{m}\asymp P^{\phi}\), so \(Q:=\mathfrak{m}^{-1}P^{\phi}\in K^{\phi}\{Y\}\) and \(q:=Q_{[0]}\) satisfy \(vQ=0\) and \(\operatorname{res}q\in\boldsymbol{k}^{\times}A\). Note that \(K\) is d-valued by [ADH, 14.2.5]; hence \(K^{\phi}\) has very small derivation. Let \(a,b\in u+\mathcal{o}\) and \(P(a)=P(b)=0\); then \(y:=b-a\in\mathcal{o}\), and so Corollary 1.8.44 applied to \(K^{\phi}\), \(Q\) in place of \(K\), \(P\) yields \(y^{\prime}\succcurlyeq y\phi\).
**Newton polynomials of Riccati transforms**\(({}^{*})\).: _In this subsection we assume that \(K\) has small derivation and asymptotic integration._ Let
\[A\ =\ a_{0}+a_{1}\partial+\cdots+a_{r}\partial^{r}\in K[\partial]\qquad\text{ where }a_{0},\ldots,a_{r}\in K\text{, }a_{r}\neq 0,\]
with Riccati transform
\[R\ :=\ \operatorname{Ri}(A)\ =\ a_{0}R_{0}(Z)+a_{1}R_{1}(Z)+\cdots+a_{r}R_{r}(Z) \in K\{Z\},\]
and set
\[P\ :=\ a_{0}+a_{1}Z+\cdots+a_{r}Z^{r}\in K[Z].\]
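For instance, for \(r=2\) the recursion \(R_{n+1}(Z)=ZR_{n}(Z)+\partial\big(R_{n}(Z)\big)\) used in the proof of Lemma 1.8.46 below gives \(R_{2}(Z)=Z^{2}+Z^{\prime}\), so \(R=a_{0}+a_{1}Z+a_{2}(Z^{2}+Z^{\prime})\). This reflects the basic identity \(A(y)=y\cdot R(y^{\dagger})\) for \(y\neq 0\): here \(y^{\prime}=y\,y^{\dagger}\) and \(y^{\prime\prime}=y\big((y^{\dagger})^{2}+(y^{\dagger})^{\prime}\big)\).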
We equip the differential fraction field \(K\langle Z\rangle\) of \(K\{Z\}\) with the gaussian extension of the valuation of \(K\) and likewise with \(K^{\phi}\) instead of \(K\). Then \(K^{\phi}\langle Z\rangle\) is a valued differential field with small derivation by [ADH, 6.3]. (Although \(K\) is asymptotic, \(K\langle Z\rangle\) is not, by [ADH, 9.4.6].)
**Lemma 1.8.46**.: _Eventually, \(R^{\phi}\sim P\)._
Proof.: It is enough to show that \(R_{n}(Z)^{\phi}\sim Z^{n}\) eventually. For \(n=0,1\) we have \(R_{n}(Z)=Z^{n}\). Now \(R_{n+1}(Z)=ZR_{n}(Z)+\partial\big{(}R_{n}(Z)\big{)}\), so by [ADH, 5.7.1],
\[R_{n+1}(Z)^{\phi}\ =\ ZR_{n}(Z)^{\phi}+\phi\delta\big{(}R_{n}(Z)^{\phi}\big{)}, \qquad\delta\ :=\ \phi^{-1}\partial.\]
Assuming \(R_{n}(Z)^{\phi}\sim Z^{n}\) eventually, this yields \(R_{n+1}(Z)^{\phi}\sim Z^{n+1}\) eventually.
_Remark_.: Suppose \(K\) is d-valued and equipped with a monomial group. In [ADH, 13.0.1] we associate to \(Q\in K\{Z\}^{\neq}\) its Newton polynomial \(N_{Q}\in C\{Z\}\) such that \(D_{Q^{\phi}}=N_{Q}\), eventually. Then \(N_{R}=D_{P}\in C[Z]\) by Lemma 1.8.46.
Next an application of Lemma 1.8.46. For simplicity, assume \(vA=0\), so \(P\in\mathcal{O}[Z]\), \(vP=0\). We also let \(Q\mapsto\operatorname{res}Q\colon\mathcal{O}[Z]\to\boldsymbol{k}[Z]\) be the extension of the residue map \(a\mapsto\operatorname{res}a\colon\mathcal{O}\to\boldsymbol{k}\) to a ring morphism with \(Z\mapsto Z\).
**Corollary 1.8.47**.: _Suppose \(K\) is \((r-1)\)-newtonian, \(r\geqslant 1\). Then for all \(\alpha\in\boldsymbol{k}\) with \(\operatorname{res}P(\alpha)=0\) and \((\operatorname{res}P)^{\prime}(\alpha)\neq 0\) there is \(a\in\mathcal{O}\) with \(R(a)=0\) and \(\operatorname{res}a=\alpha\)._
Proof.: If \(r=1\), use \(R=P=a_{0}+a_{1}Z\). Assume \(r\geqslant 2\). By Lemma 1.8.46 we have \(D_{R^{\phi}}\in\boldsymbol{k}^{\times}\cdot\operatorname{res}P\), eventually, so we can apply [ADH, 14.2.12] to \(R\), \(\operatorname{res}P\) in the role of \(P\), \(A\) there.
In the rest of this subsection we assume \(A\in\mathcal{O}[\partial]\) is monic. To what extent is the zero \(a\) of \(R\) in Corollary 1.8.47 unique? Corollaries 1.8.49 and 1.8.50 below give answers to this question.
**Lemma 1.8.48**.: _Let \(a,b\in\mathcal{O}\) be such that \(R(a)=R(b)=0\) and \(y:=b-a\prec 1\). Then \(y^{\prime}\preccurlyeq y\)._
Proof.: Replace \(R\) by \(R_{+a}\) to arrange \(a=0\), \(b=y\), so \(a_{0}=0\). Note that \(r\geqslant 1\). Towards a contradiction, assume \(y\prec y^{\prime}\). Then \(R_{n}(y)\sim y^{(n-1)}\) for all \(n\geqslant 1\) by Lemma 1.1.21, and \(y\prec y^{\prime}\prec\cdots\prec y^{(r-1)}\), so
\[R(y)\ =\ a_{1}R_{1}(y)+\cdots+a_{r-1}R_{r-1}(y)+R_{r}(y)\ \sim\ y^{(r-1)},\]
hence \(R(y)\neq 0\), a contradiction.
**Corollary 1.8.49**.: _Suppose \(K\) has very small derivation, \(\alpha\in\boldsymbol{k}\) is a simple zero of \(\operatorname{res}P\), and \(a,b\in\mathcal{O}\), \(R(a)=R(b)=0\), and \(\operatorname{res}a=\operatorname{res}b=\alpha\). Then for \(y:=b-a\) we have \(y^{\prime}\asymp y\)._
Proof.: We have \(y^{\prime}\preccurlyeq y\) by Lemma 1.8.48, and \(y^{\prime}\succcurlyeq y\) by Corollary 1.8.44 applied to \(R\) in place of \(P\), using \(R_{[0]}=P\).
In the next result we assume \(K=H[\mathrm{i}]\) where \(H\) is a real closed differential subfield of \(K\) such that the valuation ring \(\mathcal{O}_{H}:=\mathcal{O}\cap H\) of \(H\) is convex with respect to the ordering of \(H\) and \(\mathcal{O}_{H}=C_{H}+\mathcal{o}_{H}\). So \(C=C_{H}[\mathrm{i}]\) and \(\mathcal{O}=C+\mathcal{o}\) (cf. remarks after Corollary 1.2.5). We identify \(C\) with \(\boldsymbol{k}\) via the residue morphism \(\mathcal{O}\to\boldsymbol{k}\).
**Corollary 1.8.50**.: _Let \(\alpha\in C\) be a simple zero of \(\operatorname{res}P\in C[Z]\) such that for all zeros \(\beta\in C\) of \(\operatorname{res}P\) we have \(\operatorname{Re}\alpha\leqslant\operatorname{Re}\beta\). Then there is at most one \(a\in\mathcal{O}\) with \(R(a)=0\) and \(\operatorname{res}a=\alpha\)._
Proof.: Let \(a\in\mathcal{O}\), \(R(a)=0\), and \(\operatorname{res}a=\alpha\). Towards a contradiction, suppose \(b\in\mathcal{O}\), \(b\neq a\), \(R(b)=0\), and \(\operatorname{res}b=\alpha\). By Lemma 1.1.27 we may replace \(a\), \(R\) by \(0\), \(R_{+a}\) to arrange \(a=0\). Then \(0\neq b\prec 1\) and \(a_{0}=R(0)=0\), so \(P=QZ\) where \(Q\in\mathcal{O}[Z]\) and all zeros of \(\operatorname{res}Q\) in \(C\) have nonnegative real part. Moreover \(b^{\prime}\asymp b\) by Corollary 1.8.49. Take \(c\in C^{\times}\) with \(b^{\prime}\sim bc\). By Lemma 1.1.21 and [ADH, 9.1.4(ii)] we get \(R_{n}(b)\sim b^{(n-1)}\sim bc^{n-1}\) for \(n\geqslant 1\). Now \(R=a_{1}R_{1}+\cdots+a_{r}R_{r}\) gives \(Q=a_{1}+\cdots+a_{r}Z^{r-1}\), so \(R(b)\in b\cdot\big{(}Q(c)+\mathcal{o}\big{)}\), hence \((\operatorname{res}Q)(c)=0\) in view of \(R(b)=0\), and thus \(\operatorname{Re}c\geqslant 0\). On the other hand, \(b\prec 1\) and Corollary 1.2.6 give \(\operatorname{Re}(b^{\dagger})<0\), so \(\operatorname{Re}c<0\), a contradiction.
**Part \(2\). The Universal Exponential Extension**
Let \(K\) be an algebraically closed differential field. In Section 2.2 below we extend \(K\) in a canonical way to a differential integral domain \(\mathrm{U}=\mathrm{U}_{K}\) whose differential fraction field has the same constant field \(C\) as \(K\), called the _universal exponential extension_ of \(K\). (The universal exponential extension of \(\mathbb{T}[i]\) appeared in [103] in the guise of "oscillating transseries"; we explain the connection at the end of Section 2.5.) The underlying ring of \(\mathrm{U}\) is a group ring of a certain abelian group over \(K\), and we therefore first review some relevant basic facts about such group rings in Section 2.1. The main feature of \(\mathrm{U}\) is that if \(K\) is \(1\)-linearly surjective, then each \(A\in K[\partial]\) of order \(r\in\mathbb{N}\) which splits over \(K\) has \(r\) many \(C\)-linearly independent zeros in \(\mathrm{U}\). This is explained in Section 2.5, after some differential-algebraic preliminaries in Sections 2.3 and 2.4, where we consider a novel kind of _spectrum_ of a linear differential operator over a differential field. In Section 2.6 we introduce for \(H\)-asymptotic \(K\) with small derivation and asymptotic integration the _ultimate exceptional values_ of a given linear differential operator \(A\in K[\partial]^{\neq}\). These help to isolate the zeros of \(A\) in \(\mathrm{U}\) much like the exceptional values of \(A\) help to locate the zeros of \(A\) in immediate asymptotic extensions of \(K\) as in Section 1.5. In Section 5.10 below we discuss the analytic meaning of \(\mathrm{U}\) when \(K\) is the algebraic closure of a Liouville closed Hardy field containing \(\mathbb{R}\) as a subfield.
### 2.1. Some Facts about Group Rings
_In this section \(G\) is a torsion-free abelian group, written multiplicatively, \(K\) is a field, and \(\gamma\), \(\delta\) range over \(G\)._ For use in Section 2.2 below we recall some facts about the group ring \(K[G]\): a commutative \(K\)-algebra with \(1\neq 0\) that contains \(G\) as a subgroup of its multiplicative group \(K[G]^{\times}\) and which, as a \(K\)-linear space, decomposes as
\[K[G]\ =\ \bigoplus_{\gamma}K\gamma\qquad\text{(internal direct sum).}\]
Hence for any \(f\in K[G]\) we have a unique family \((f_{\gamma})\) of elements of \(K\), with \(f_{\gamma}=0\) for all but finitely many \(\gamma\), such that
\[f\ =\ \sum_{\gamma}f_{\gamma}\gamma. \tag{2.1.1}\]
With \(f\in K[G]\) written as in (2.1.1), we define its **support** by
\[\mathrm{supp}(f)\ :=\ \{\gamma:\,f_{\gamma}\neq 0\}\ \subseteq\ G.\]
_In the rest of this section \(f\), \(g\), \(h\) range over \(K[G]\)._ For any \(K\)-algebra \(R\), every group morphism \(G\to R^{\times}\) extends uniquely to a \(K\)-algebra morphism \(K[G]\to R\).
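For instance, given \(\chi\in\operatorname{Hom}(G,K^{\times})\), the group morphism \(\gamma\mapsto\chi(\gamma)\gamma\colon G\to K[G]^{\times}\) extends uniquely to a \(K\)-algebra endomorphism of \(K[G]\); this is exactly the map \(f\mapsto f_{\chi}\) of (2.1.2) below.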
Clearly \(K[G]^{\times}\supseteq K^{\times}G\); in fact:
**Lemma 2.1.1**.: _The ring \(K[G]\) is an integral domain and \(K[G]^{\times}=K^{\times}G\)._
Proof.: We take an ordering of \(G\) making \(G\) into an ordered abelian group; see [ADH, 2.4]. Let \(f,g\neq 0\) and set
\[\gamma^{-}:=\min\mathrm{supp}(f),\ \gamma^{+}:=\max\mathrm{supp}(f),\ \ \delta^{-}:=\min\mathrm{supp}(g),\ \delta^{+}:=\max\mathrm{supp}(g);\]
so \(\gamma^{-}\leqslant\gamma^{+}\) and \(\delta^{-}\leqslant\delta^{+}\). We have \((fg)_{\gamma^{-}\delta^{-}}=f_{\gamma^{-}}g_{\delta^{-}}\neq 0\), and likewise with \(\gamma^{+}\), \(\delta^{+}\) in place of \(\gamma^{-}\), \(\delta^{-}\). In particular, \(fg\neq 0\), showing that \(K[G]\) is an integral domain. Now suppose \(fg=1\). Then \(\operatorname{supp}(fg)=\{1\}\), hence \(\gamma^{-}\delta^{-}=1=\gamma^{+}\delta^{+}\), so \(\gamma^{-}=\gamma^{+}\), and thus \(f\in K^{\times}G\).
**Lemma 2.1.2**.: _Suppose \(K\) has characteristic \(0\) and \(G\neq\{1\}\). Then the fraction field \(\Omega\) of \(K[G]\) is not algebraically closed._
Proof.: Let \(\gamma\in G\setminus\{1\}\) and \(n\geqslant 1\). We claim that there is no \(y\in\Omega\) with \(y^{2}=1-\gamma^{n}\). For this, first replace \(G\) by its divisible hull to arrange that \(G\) is divisible. Towards a contradiction, suppose \(f,g\in K[G]^{\neq}\) and \(f^{2}=g^{2}(1-\gamma^{n})\). Take a divisible subgroup \(H\) of \(G\) that is complementary to the smallest divisible subgroup \(\gamma^{\mathbb{Q}}\) of \(G\) containing \(\gamma\), so \(G=H\gamma^{\mathbb{Q}}\) and \(H\cap\gamma^{\mathbb{Q}}=\{1\}\). Then \(K[G]\subseteq K(H)[\gamma^{\mathbb{Q}}]\) (inside \(\Omega\)), so we may replace \(K\), \(G\) by \(K(H)\), \(\gamma^{\mathbb{Q}}\) to arrange \(G=\gamma^{\mathbb{Q}}\). For suitable \(m\geqslant 1\) we apply the \(K\)-algebra automorphism of \(K[G]\) given by \(\gamma\mapsto\gamma^{m}\) to arrange \(f,g\in K[\gamma,\gamma^{-1}]\) (replacing \(n\) by \(mn\)). Then replace \(f\), \(g\) by \(\gamma^{m}f\), \(\gamma^{m}g\) for suitable \(m\geqslant 1\) to arrange \(f,g\in K[\gamma]\). Now use that \(1-\gamma\) is a prime divisor of \(1-\gamma^{n}\) of multiplicity \(1\) in the UFD \(K[\gamma]\) to get a contradiction.
The \(K\)-linear map
\[f\mapsto\operatorname{tr}(f):=f_{1}\;:\;K[G]\to K\]
is called the **trace** of \(K[G]\). Thus
\[\operatorname{tr}(fg)\;=\;\sum_{\gamma}f_{\gamma}g_{\gamma^{-1}}.\]
We claim that \(\operatorname{tr}\circ\sigma=\operatorname{tr}\) for every automorphism \(\sigma\) of the \(K\)-algebra \(K[G]\). This invariance comes from an intrinsic description of \(\operatorname{tr}(f)\) as follows: given \(f\) we have a unique finite set \(U\subseteq K[G]^{\times}=K^{\times}G\) such that \(f=\sum_{u\in U}u\) and \(u_{1}/u_{2}\notin K^{\times}\) for all distinct \(u_{1},u_{2}\in U\); if \(U\cap K^{\times}=\{c\}\), then \(\operatorname{tr}(f)=c\); if \(U\cap K^{\times}=\emptyset\), then \(\operatorname{tr}(f)=0\). If \(G_{0}\) is a subgroup of \(G\) and \(K_{0}\) is a subfield of \(K\), then \(K_{0}[G_{0}]\) is a subring of \(K[G]\), and the trace of \(K[G]\) extends the trace of \(K_{0}[G_{0}]\).
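For instance, if \(\gamma,\delta\neq 1\) are distinct and \(a,b,c,a',b'\in K\), then \(\operatorname{tr}(c+a\gamma+b\delta)=c\), and

\[\operatorname{tr}\big((a\gamma+b\delta)(a'\gamma^{-1}+b'\delta^{-1})\big)\ =\ aa'+bb',\]

since the cross terms are multiples of \(\gamma\delta^{-1}\neq 1\) and \(\delta\gamma^{-1}\neq 1\) and thus have trace \(0\).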
### The automorphisms of \(K[G]\)
For a commutative group \(H\), written multiplicatively, \(\operatorname{Hom}(G,H)\) denotes the set of group morphisms \(G\to H\), made into a group by pointwise multiplication. Any \(\chi\in\operatorname{Hom}(G,K^{\times})\)--sometimes called a _character_--gives a \(K\)-algebra automorphism \(f\mapsto f_{\chi}\) of \(K[G]\) defined by
\[f_{\chi}\;:=\;\sum_{\gamma}f_{\gamma}\chi(\gamma)\gamma. \tag{2.1.2}\]
This yields a group action of \(\operatorname{Hom}(G,K^{\times})\) on \(K[G]\) by \(K\)-algebra automorphisms:
\[\operatorname{Hom}(G,K^{\times})\times K[G]\to K[G],\qquad(\chi,f)\mapsto f_{\chi}.\]
Sending \(\chi\in\operatorname{Hom}(G,K^{\times})\) to \(f\mapsto f_{\chi}\) yields an embedding of the group \(\operatorname{Hom}(G,K^{\times})\) into the group \(\operatorname{Aut}(K[G]|K)\) of automorphisms of the \(K\)-algebra \(K[G]\); its image is the (commutative) subgroup of \(\operatorname{Aut}(K[G]|K)\) consisting of the \(K\)-algebra automorphisms \(\sigma\) of \(K[G]\) such that \(\sigma(\gamma)/\gamma\in K^{\times}\) for all \(\gamma\). Identify \(\operatorname{Hom}(G,K^{\times})\) with its image under this embedding. From \(K[G]^{\times}=K^{\times}G\) we obtain \(\sigma(K^{\times}G)=K^{\times}G\) for all \(\sigma\in\operatorname{Aut}(K[G]|K)\), and using this one verifies easily that \(\operatorname{Hom}(G,K^{\times})\) is a normal subgroup of \(\operatorname{Aut}(K[G]|K)\). We also have the group embedding
\[\operatorname{Aut}(G)\;\to\;\operatorname{Aut}(K[G]|K)\]
assigning to each \(\sigma\in\operatorname{Aut}(G)\) the unique automorphism of the \(K\)-algebra \(K[G]\) extending \(\sigma\). Identifying \(\operatorname{Aut}(G)\) with its image in \(\operatorname{Aut}(K[G]|K)\) via this embedding we have \(\operatorname{Hom}(G,K^{\times})\cap\operatorname{Aut}(G)=\{\operatorname{id}\}\) and \(\operatorname{Hom}(G,K^{\times})\cdot\operatorname{Aut}(G)=\operatorname{Aut}(K[G]|K)\) inside \(\operatorname{Aut}(K[G]|K)\), and thus \(\operatorname{Aut}(K[G]|K)=\operatorname{Hom}(G,K^{\times})\rtimes \operatorname{Aut}(G)\), an internal semidirect product of subgroups of \(\operatorname{Aut}(K[G]|K)\).
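For instance, for infinite cyclic \(G=x^{\mathbb{Z}}\), so that \(K[G]=K[x,x^{-1}]\), we have \(\operatorname{Hom}(G,K^{\times})\cong K^{\times}\) via \(\chi\mapsto\chi(x)\) and \(\operatorname{Aut}(G)=\{\pm 1\}\); hence the \(K\)-algebra automorphisms of \(K[x,x^{-1}]\) are exactly the maps determined by

\[x\ \mapsto\ cx^{\varepsilon}\qquad(c\in K^{\times},\ \varepsilon\in\{-1,1\}),\]

and \(\operatorname{Aut}\big(K[x,x^{-1}]|K\big)\cong K^{\times}\rtimes\{\pm 1\}\).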
**The gaussian extension.**_In this subsection \(v\colon K^{\times}\to\Gamma\) is a valuation on the field \(K\)._ We extend \(v\) to a map \(v_{\operatorname{g}}\colon K[G]^{\neq}\to\Gamma\) by setting
\[v_{\operatorname{g}}f\ :=\ \min_{\gamma}vf_{\gamma}\qquad\big(f\in K[G]^{\neq}\text{ as in (2.1.1)}\big). \tag{2.1.3}\]
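For instance, for \(\gamma\neq 1\) and \(a,b\in K\) not both zero we have \(v_{\operatorname{g}}(a+b\gamma)=\min(va,vb)\); in particular, \(v_{\operatorname{g}}\) extends \(v\), and \(v_{\operatorname{g}}\gamma=0\) for all \(\gamma\).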
**A "Hermitian form" on \(K[G]\).**_In the rest of this section \(H\) is a real closed ordered field and \(K=H[i]\) where \(i^{2}=-1\); for \(a,b\in H\) we set \(\overline{a+bi}:=a-bi\)._ For \(f\in K[G]\) put \(f^{*}:=\sum_{\gamma}\overline{f_{\gamma}}\,\gamma^{-1}\); then \(f^{**}=f\), \((fg)^{*}=f^{*}g^{*}\), and \(\gamma^{*}=\gamma^{-1}\). We equip \(K[G]\) with the "Hermitian form" \(\langle\,\ \rangle\colon K[G]\times K[G]\to K\) given by \(\langle f,g\rangle:=\operatorname{tr}(fg^{*})=\sum_{\gamma}f_{\gamma}\overline{g_{\gamma}}\). It is additive in each argument, with \(\langle g,f\rangle=\overline{\langle f,g\rangle}\) and \(\langle\lambda f,g\rangle=\lambda\langle f,g\rangle\) for \(\lambda\in K\), hence also \(\langle f,\lambda g\rangle=\overline{\lambda}\langle f,g\rangle\). (Hermitian forms are usually defined only on \(\mathbb{C}\)-linear spaces and are \(\mathbb{C}\)-valued, which is why we used quote marks, as we do below for _norm_ and _orthonormal basis_; see [122, Chapter XV, §5] for the more general case.) Note:
\[\langle f,gh\rangle\ =\ \mathrm{tr}\big{(}f(gh)^{*}\big{)}\ =\ \big{\langle}fg^{*},h\rangle.\]
**Lemma 2.1.4**.: _Let \(u,w\in K[G]^{\times}\). If \(u\notin K^{\times}w\), then \(\langle u,w\rangle=0\), and if \(u\in K^{\times}w\), then \(\langle u,w\rangle=uw^{*}\)._
Proof.: Take \(a,b\in K^{\times}\) and \(\gamma\), \(\delta\) such that \(u=a\gamma\), \(w=b\delta\). If \(u\notin K^{\times}w\), then \(\gamma\neq\delta\), so \(\langle u,w\rangle=0\). If \(u\in K^{\times}w\), then \(\gamma=\delta\), hence \(\langle u,w\rangle=a\overline{b}=uw^{*}\).
For \(z\in K\) we set \(|z|:=\sqrt{z\overline{z}}\in H^{\geqslant}\), and then define \(\|\cdot\|\colon K[G]\to H^{\geqslant}\) by
\[\|f\|^{2}\ =\ \langle f,f\rangle\ =\ \sum_{\gamma}|f_{\gamma}|^{2}.\]
As in the case \(H=\mathbb{R}\) and \(K=\mathbb{C}\) one derives the Cauchy-Schwarz Inequality:
\[|\langle f,g\rangle|\ \leqslant\ \|f\|\cdot\|g\|.\]
Thus \(\|\cdot\|\) is a "norm" on the \(K\)-linear space \(K[G]\): for all \(f,g\) and all \(\lambda\in K\),
\[\|f+g\|\leqslant\|f\|+\|g\|,\quad\|\lambda f\|=|\lambda|\cdot\|f\|,\quad\|f\|= 0\Leftrightarrow f=0.\]
Note that \(G\) is an "orthonormal basis" of \(K[G]\) with respect to \(\langle\,\ \rangle\), and \(f_{\gamma}=\langle f,\gamma\rangle\). We also use the function \(\|\cdot\|_{1}\colon K[G]\to H^{\geqslant}\) given by
\[\|f\|_{1}\ :=\ \sum_{\gamma}|f_{\gamma}|,\]
which is a "norm" on \(K[G]\) in the sense of obeying the same laws as we mentioned for \(\|\cdot\|\). The two "norms" are in some sense equivalent:
\[\|f\|\ \leqslant\ \|f\|_{1}\ \leqslant\ \sqrt{n}\,\|f\|\qquad(n:=|\mathrm{supp}(f)|),\]
where the first inequality follows from the triangle inequality for \(\|\cdot\|\) and the second is of Cauchy-Schwarz type.
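Both inequalities can be equalities: for \(f=a\gamma\) \((a\in K)\) we have \(\|f\|=\|f\|_{1}=|a|\), while for \(f=\gamma_{1}+\dots+\gamma_{n}\) with distinct \(\gamma_{1},\dots,\gamma_{n}\),

\[\|f\|\ =\ \sqrt{n},\qquad\|f\|_{1}\ =\ n\ =\ \sqrt{n}\,\|f\|.\]

Moreover: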
**Lemma 2.1.5**.: _Let \(u\in K[G]^{\times}\). Then \(\|fu\|=\|f\|\,\|u\|\) and \(\|fu\|_{1}=\|f\|_{1}\,\|u\|_{1}\)._
Proof.: We have
\[\|f\gamma\|^{2}\ =\ \langle f\gamma,f\gamma\rangle\ =\ \big{\langle}f\gamma\gamma^{*},f\big{\rangle}\ =\ \big{\langle}f,f\big{\rangle}\ =\ \|f\|^{2}\]
using \(\gamma^{*}=\gamma^{-1}\). Together with \(K[G]^{\times}=K^{\times}G\) this yields the first claim; the second claim follows easily from the definition of \(\|\cdot\|_{1}\).
**Corollary 2.1.6**.: \(\|fg\|\leqslant\|f\|\,\cdot\|g\|_{1}\) _and \(\|fg\|_{1}\leqslant\|f\|_{1}\,\cdot\|g\|_{1}\)._
Proof.: By the triangle inequality for \(\|\cdot\|\) and the previous lemma,
\[\|fg\|\ \leqslant\ \sum_{\gamma}\|fg_{\gamma}\gamma\|\ =\ \sum_{\gamma}\|f\|\, \|g_{\gamma}\gamma\|\ =\ \|f\|\sum_{\gamma}|g_{\gamma}|\ =\ \|f\|\,\|g\|_{1}.\]
The inequality involving \(\|fg\|_{1}\) follows likewise.
In the next lemma we let \(\chi\in\mathrm{Hom}(G,K^{\times})\); recall from (2.1.2) the automorphism \(f\mapsto f_{\chi}\) of the \(K\)-algebra \(K[G]\).
**Lemma 2.1.7**.: \((f_{\chi})^{*}=(f^{*})_{\chi}\) _iff \(|\chi(\gamma)|=1\) for all \(\gamma\in\mathrm{supp}(f)\)._
Proof.: Let \(a\in K\); then \(\big{(}(a\gamma)_{\chi}\big{)}^{*}=\overline{a\chi(\gamma)}\gamma^{-1}\) and \(\big{(}(a\gamma)^{*}\big{)}_{\chi}=\overline{a}\chi(\gamma)^{-1}\gamma^{-1}\).
**Corollary 2.1.8**.: _Let \(\chi\in\operatorname{Hom}(G,K^{\times})\) with \(|\chi(\gamma)|=1\) for all \(\gamma\). Then \(\langle f_{\chi},g_{\chi}\rangle=\langle f,g\rangle\) for all \(f\), \(g\), and hence \(\|f_{\chi}\|=\|f\|\) for all \(f\)._
Proof.: Since \(\operatorname{tr}\circ\sigma=\operatorname{tr}\) for every automorphism \(\sigma\) of the \(K\)-algebra \(K[G]\),
\[\langle f_{\chi},g_{\chi}\rangle\ =\ \operatorname{tr}\bigl{(}f_{\chi}(g_{\chi})^{ *}\bigr{)}\ =\ \operatorname{tr}\bigl{(}(fg^{*})_{\chi}\bigr{)}\ =\ \operatorname{tr}(fg^{*})\ =\ \langle f,g\rangle,\]
where we use Lemma 2.1.7 for the second equality.
### Valuation and norm
Let \(v\colon H^{\times}\to\Gamma\) be a convex valuation on the ordered field \(H\), extended uniquely to a valuation \(v\colon K^{\times}\to\Gamma\) on the field \(K=H[i]\), so \(a\asymp|a|\) for \(a\in K\). (See the remarks before Corollary 1.2.6.) Let \(v_{\mathrm{g}}\colon K[G]^{\neq}\to\Gamma\) be the gaussian extension of \(v\), given by (2.1.3).
**Lemma 2.1.9**.: \(\|f\|_{1}\preccurlyeq 1\Leftrightarrow f\preccurlyeq_{\mathrm{g}}1\)_, and \(\|f\|_{1}\prec 1\Leftrightarrow f\prec_{\mathrm{g}}1\)._
Proof.: Using that the valuation ring of \(H\) is convex we have
\[\|f\|_{1}=\sum_{\gamma}|f_{\gamma}|\preccurlyeq 1\iff|f_{\gamma}|\preccurlyeq 1 \text{ for all }\gamma\iff f_{\gamma}\preccurlyeq 1\text{ for all }\gamma\iff f\preccurlyeq_{\mathrm{g}}1.\]
Likewise one shows: \(\|f\|_{1}\prec 1\Leftrightarrow f\prec_{\mathrm{g}}1\).
**Corollary 2.1.10**.: \(\|f\|\asymp\|f\|_{1}\asymp_{\mathrm{g}}f\)_._
Proof.: This is trivial for \(f=0\), so assume \(f\neq 0\). Take \(a\in H^{>}\) with \(a\asymp_{\mathrm{g}}f\), and replace \(f\) by \(f/a\), to arrange \(f\asymp_{\mathrm{g}}1\). Then \(\|f\|\asymp\|f\|_{1}\asymp_{\mathrm{g}}1\) by Lemma 2.1.9.
### The Universal Exponential Extension
As in [ADH, 5.9], given a differential ring \(K\), a _differential \(K\)-algebra_ is a differential ring \(R\) with a morphism \(K\to R\) of differential rings. If \(R\) is a differential ring extension of a differential ring \(K\) we consider \(R\) as a differential \(K\)-algebra via the inclusion \(K\to R\).
### Exponential extensions
_In this subsection \(R\) is a differential ring and \(K\) is a differential subring of \(R\)_. Call \(a\in R\)**exponential over \(K\)** if \(a^{\prime}\in aK\). Note that if \(a\in R\) is exponential over \(K\), then \(K[a]\) is a differential subring of \(R\). If \(a\in R\) is exponential over \(K\) and \(\phi\in K^{\times}\), then \(a\), as element of the differential ring extension \(R^{\phi}\) of \(K^{\phi}\), is exponential over \(K^{\phi}\). Every \(c\in C_{R}\) is exponential over \(K\), and every \(u\in K^{\times}\) is exponential over \(K\). If \(a,b\in R\) are exponential over \(K\), then so is \(ab\), and if \(a\in R^{\times}\) is exponential over \(K\), then so is \(a^{-1}\). Hence the units of \(R\) that are exponential over \(K\) form a subgroup \(E\) of the group \(R^{\times}\) of units of \(R\) with \(E\supseteq C_{R}^{\times}\cdot K^{\times}\); if \(R=K[E]\), then we call \(R\)**exponential over \(K\)**. An **exponential extension of \(K\)** is a differential ring extension of \(K\) that is exponential over \(K\). If \(R=K[E]\) where \(E\) is a set of elements of \(R^{\times}\) which are exponential over \(K\), then \(R\) is exponential over \(K\). If \(R\) is an exponential extension of \(K\) and \(\phi\in K^{\times}\), then \(R^{\phi}\) is an exponential extension of \(K^{\phi}\). The following lemma is extracted from the proof of [168, Theorem 1]:
**Lemma 2.2.1** (Rosenlicht).: _Suppose \(K\) is a field and \(R\) is an integral domain with differential fraction field \(F\). Let \(I\neq R\) be a differential ideal of \(R\), and let \(u_{1},\dots,u_{n}\in R^{\times}\)\((n\geqslant 1)\) be exponential over \(K\) with \(u_{i}\notin u_{j}C_{F}^{\times}K^{\times}\) for \(i\neq j\). Then \(\sum_{i}u_{i}\notin I\)._
Proof.: Suppose \(u_{1},\ldots,u_{n}\) is a counterexample with minimal \(n\geqslant 1\). Then \(n\geqslant 2\) and \(\sum_{i}u_{i}^{\prime}\in I\), so
\[\sum_{i}u_{i}^{\prime}-u_{1}^{\dagger}\sum_{i}u_{i}\ =\ \sum_{i>1}(u_{i}/u_{1})^{ \dagger}u_{i}\in I.\]
Hence, by the minimality of \(n\), we must have \((u_{i}/u_{1})^{\dagger}=0\) and thus \(u_{i}/u_{1}\in C_{F}^{\times}\), for all \(i>1\), a contradiction.
**Corollary 2.2.2**.: _Suppose \(K\) is a field and \(F=K(E)\) is a differential field extension of \(K\) with \(C_{F}=C\), where \(E\) is a subgroup of \(F^{\times}\) whose elements are exponential over \(K\). Then \(\{y\in F^{\times}:\ y\text{ is exponential over }K\}=K^{\times}E\)._
Proof.: Let \(y\in F^{\times}\) be exponential over \(K\). Take \(K\)-linearly independent \(u_{1},\ldots,u_{n}\) in \(E\) and \(a_{1},\ldots,a_{n},b_{1},\ldots,b_{n}\in K\) with \(b_{j}\neq 0\) for some \(j\), such that
\[y\ =\ \Big{(}\sum_{i}a_{i}u_{i}\Big{)}\Big{/}\left(\sum_{j}b_{j}u_{j}\right).\]
Then \(\sum_{j}b_{j}yu_{j}-\sum_{i}a_{i}u_{i}=0\), and so Lemma 2.2.1 applied with \(R=F\), \(I=\{0\}\) gives \(b_{j}yu_{j}\in a_{i}u_{i}K^{\times}\) for some \(i\), \(j\) with \(a_{i},b_{j}\neq 0\), and thus \(y\in K^{\times}E\).
_Remark_.: In the context of Corollary 2.2.2, see [168, Theorem 1] for the structure of the group of elements of \(F^{\times}\) exponential over \(K\), for finitely generated \(E\).
**Lemma 2.2.3**.: _Suppose \(C_{R}^{\times}\) is divisible and \(E\) is a subgroup of \(R^{\times}\) containing \(C_{R}^{\times}\). Then there is a group morphism \(e\colon E^{\dagger}\to E\) such that \(e(b)^{\dagger}=b\) for all \(b\in E^{\dagger}\)._
Proof.: We have a short exact sequence of commutative groups
\[1\to C_{R}^{\times}\stackrel{{\iota}}{{\longrightarrow}}E \stackrel{{\ell}}{{\longrightarrow}}E^{\dagger}\to 0,\]
where \(\iota\) is the natural inclusion and \(\ell(a):=a^{\dagger}\) for \(a\in E\). Since \(C_{R}^{\times}\) is divisible, this sequence splits, which is what we claimed.
Let \(E\), \(e\), \(R\) be as in the previous lemma. Then \(e\) is injective, and its image is a complement of \(C_{R}^{\times}\) in \(E\). Moreover, given also a group morphism \(\widetilde{e}\colon E^{\dagger}\to E\) such that \(\widetilde{e}(b)^{\dagger}=b\) for all \(b\in E^{\dagger}\), the map \(b\mapsto e(b)\widetilde{e}(b)^{-1}\) is a group morphism \(E^{\dagger}\to C_{R}^{\times}\).
_In the rest of this section \(K\) is a differential field with algebraically closed constant field \(C\) and divisible group \(K^{\dagger}\) of logarithmic derivatives._ (These conditions are satisfied if \(K\) is an algebraically closed differential field.) In the next subsection we show that up to isomorphism over \(K\) there is a unique exponential extension \(R\) of \(K\) satisfying \(C_{R}=C\) and \((R^{\times})^{\dagger}=K\). By Lemma 2.2.3 we must then have a group embedding \(e\colon K\to R^{\times}\) such that \(e(b)^{\dagger}=b\) for all \(b\in K\); this motivates the construction below.
### The universal exponential extension
We first describe a certain exponential extension of \(K\). For this, take a **complement**\(\Lambda\) of \(K^{\dagger}\), that is, a \(\mathbb{Q}\)-linear subspace of \(K\) such that \(K=K^{\dagger}\oplus\Lambda\) (internal direct sum of \(\mathbb{Q}\)-linear subspaces of \(K\)). Below \(\lambda\) ranges over \(\Lambda\). Let \(\mathrm{e}(\Lambda)\) be a multiplicatively written abelian group, isomorphic to the additive subgroup \(\Lambda\) of \(K\), with isomorphism \(\lambda\mapsto\mathrm{e}(\lambda)\colon\Lambda\to\mathrm{e}(\Lambda)\). Put
\[\mathrm{U}\ :=\ K\big{[}\mathrm{e}(\Lambda)\big{]},\]
the group ring of \(\mathrm{e}(\Lambda)\) over \(K\), an integral domain. As \(K\)-linear space,
\[\mathrm{U}\ =\ \bigoplus_{\lambda}K\,\mathrm{e}(\lambda)\qquad\text{(an internal direct sum of $K$-linear subspaces)}.\]
For every \(f\in\mathrm{U}\) we have a unique family \((f_{\lambda})\) in \(K\) such that
\[f\ =\ \sum_{\lambda}f_{\lambda}\,\mathrm{e}(\lambda),\]
with \(f_{\lambda}=0\) for all but finitely many \(\lambda\); we call \((f_{\lambda})\) the **spectral decomposition** of \(f\) (with respect to \(\Lambda\)). We turn \(\mathrm{U}\) into a differential ring extension of \(K\) by
\[\mathrm{e}(\lambda)^{\prime}\ =\ \lambda\,\mathrm{e}(\lambda)\qquad\text{for all $\lambda$}.\]
(Think of \(\mathrm{e}(\lambda)\) as \(\exp(\int\lambda)\).) Thus for \(f\in\mathrm{U}\) with spectral decomposition \((f_{\lambda})\),
\[f^{\prime}\ =\ \sum_{\lambda}\big{(}f_{\lambda}^{\prime}+\lambda f_{\lambda} \big{)}\,\mathrm{e}(\lambda),\]
so \(f^{\prime}\) has spectral decomposition \((f_{\lambda}^{\prime}+\lambda f_{\lambda})\). Note that \(\mathrm{U}\) is exponential over \(K\) by Lemma 2.1.1: \(\mathrm{U}^{\times}=K^{\times}\,\mathrm{e}(\Lambda)\), so \((\mathrm{U}^{\times})^{\dagger}=K^{\dagger}+\Lambda=K\).
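To spell this out: for \(f\in K^{\times}\) the unit \(y=f\,\mathrm{e}(\lambda)\) of \(\mathrm{U}\) satisfies

\[y^{\dagger}\ =\ f^{\dagger}+\lambda,\]

so given \(b\in K\), writing \(b=c+\lambda\) with \(c\in K^{\dagger}\) and taking \(f\in K^{\times}\) with \(f^{\dagger}=c\) produces a solution \(y\in\mathrm{U}^{\times}\) of \(y^{\prime}=by\).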
_Example 2.2.4_.: Let \(K=C(\!(\ell^{\mathbb{Q}})\!)\) be as in Example 1.2.12, so \(K^{\dagger}=(\mathbb{Q}\oplus\sigma)t\). Take a \(\mathbb{Q}\)-linear subspace \(\Lambda_{\mathrm{c}}\) of \(C\) with \(C=\mathbb{Q}\oplus\Lambda_{\mathrm{c}}\) (internal direct sum of \(\mathbb{Q}\)-linear subspaces of \(C\)), and let
\[K_{\succ}\ :=\ \big{\{}f\in K:\ \mathrm{supp}(f)\succ 1\big{\}},\]
a \(C\)-linear subspace of \(K\). Then \(\Lambda:=(K_{\succ}\oplus\Lambda_{\mathrm{c}})t\) is a complement to \(K^{\dagger}\), and hence \(t^{-1}\Lambda=K_{\succ}\oplus\Lambda_{\mathrm{c}}\) is a complement to \((K^{t})^{\dagger}\) in \(K^{t}\). Moreover, if \(L:=\mathrm{P}(C)\subseteq K\) is the differential field of Puiseux series over \(C\) and \(L_{\succ}:=K_{\succ}\cap L\), then \(L_{\succ}\oplus\Lambda_{\mathrm{c}}\) is a complement to \((L^{t})^{\dagger}\).
A subgroup \(\Lambda_{0}\) of \(\Lambda\) yields a differential subring \(K\big{[}\mathrm{e}(\Lambda_{0})\big{]}\) of \(\mathrm{U}\) that is exponential over \(K\) as well. These differential subrings have a useful property. Recall from [ADH, 4.6] that a differential ring is said to be _simple_ if \(\{0\}\) is its only proper differential ideal.
**Lemma 2.2.5**.: _Let \(\Lambda_{0}\) be a subgroup of \(\Lambda\). Then the differential subring \(K\big{[}\mathrm{e}(\Lambda_{0})\big{]}\) of \(\mathrm{U}\) is simple. In particular, the differential ring \(\mathrm{U}\) is simple._
Proof.: Let \(I\neq R\) be a differential ideal of \(R:=K\big{[}\mathrm{e}(\Lambda_{0})\big{]}\). Let \(f_{1},\ldots,f_{n}\in K^{\times}\) and let \(\lambda_{1},\ldots,\lambda_{n}\in\Lambda_{0}\) be distinct such that \(f=\sum_{i=1}^{n}f_{i}\,\mathrm{e}(\lambda_{i})\in I\). If \(n\geqslant 1\), then Lemma 2.2.1 yields \(i\neq j\) with \(\mathrm{e}(\lambda_{i})/\,\mathrm{e}(\lambda_{j})=cg\) for some constant \(c\) in the differential fraction field of \(\mathrm{U}\) and some \(g\in K^{\times}\), so by taking logarithmic derivatives, \(\lambda_{i}-\lambda_{j}\in K^{\dagger}\) and thus \(\lambda_{i}=\lambda_{j}\), a contradiction. Thus \(f=0\).
**Corollary 2.2.6**.: _Any morphism \(K\big{[}\mathrm{e}(\Lambda_{0})\big{]}\to R\) of differential \(K\)-algebras, with \(\Lambda_{0}\) a subgroup of \(\Lambda\) and \(R\) a differential ring extension of \(K\), is injective._
The differential ring \(\mathrm{U}\) is the directed union of its differential subrings of the form \(\mathrm{U}_{0}=K\big{[}\mathrm{e}(\Lambda_{0})\big{]}\) where \(\Lambda_{0}\) is a finitely generated subgroup of \(\Lambda\). These \(\mathrm{U}_{0}\) are simple by Lemma 2.2.5 and finitely generated as a \(K\)-algebra, hence their differential fraction fields have constant field \(C\) by [ADH, 4.6.12]. Thus the differential fraction field of \(\mathrm{U}\) has constant field \(C\).
**Lemma 2.2.7**.: _Suppose \(R\) is an exponential extension of \(K\) and \(R_{0}\) is a differential subring of \(R\) with \(C_{R}^{\times}\subseteq C_{R_{0}}\) and \(K\subseteq(R_{0}^{\times})^{\dagger}\). Then \(R_{0}=R\)._
Proof.: Let \(E\) be the group of units of \(R\) that are exponential over \(K\); so \(R=K[E]\). Given \(u\in E\) we have \(u^{\dagger}\in K\subseteq(R_{0}^{\times})^{\dagger}\), hence we have \(u_{0}\in R_{0}^{\times}\) with \(u^{\dagger}=u_{0}^{\dagger}\), so \(u=cu_{0}\) with \(c\in C_{R}^{\times}\subseteq C_{R_{0}}\). Thus \(E\subseteq R_{0}\) and so \(R_{0}=R\)
**Corollary 2.2.8**.: _Every endomorphism of the differential \(K\)-algebra \(\mathrm{U}\) is an automorphism._
Proof.: Injectivity holds by Corollary 2.2.6, and surjectivity by Lemma 2.2.7.
Every exponential extension of \(K\) with constant field \(C\) embeds into \(\mathrm{U}\), and hence is an integral domain. More precisely:
**Lemma 2.2.9**.: _Let \(R\) be an exponential extension of \(K\) such that \(C_{R}^{\times}\) is divisible, and set \(\Lambda_{0}:=\Lambda\cap(R^{\times})^{\dagger}\), a subgroup of \(\Lambda\). Then there exists a morphism \(K\big{[}\mathrm{e}(\Lambda_{0})\big{]}\to R\) of differential \(K\)-algebras. Any such morphism is injective, and if \(C_{R}=C\), then any such morphism is an isomorphism._
Proof.: Let \(E\) be as in the proof of Lemma 2.2.7, and let \(e_{E}\colon E^{\dagger}\to E\) be the map \(e\) from Lemma 2.2.3. Since \(E^{\dagger}=K^{\dagger}+\Lambda_{0}\) we have
\[E\ =\ C_{R}^{\times}\,e_{E}(E^{\dagger})\ =\ C_{R}^{\times}\,e_{E}(K^{\dagger} )\,e_{E}(\Lambda_{0})\ =\ C_{R}^{\times}\,K^{\times}\,e_{E}(\Lambda_{0}). \tag{2.2.1}\]
The group morphism \(\mathrm{e}(\lambda_{0})\mapsto e_{E}(\lambda_{0})\colon\mathrm{e}(\Lambda_{0 })\to E\) (\(\lambda_{0}\in\Lambda_{0}\)) extends uniquely to a \(K\)-algebra morphism \(\iota\colon K\big{[}\mathrm{e}(\Lambda_{0})\big{]}\to R=K[E]\). One verifies easily that \(\iota\) is a differential ring morphism. The injectivity claim follows from Corollary 2.2.6. If \(C_{R}=C\), then \(E=K^{\times}e_{E}(\Lambda_{0})\) by (2.2.1), whence surjectivity.
Recall that \(\mathrm{U}\) is an exponential extension of \(K\) with \(C_{\mathrm{U}}=C\) and \((\mathrm{U}^{\times})^{\dagger}=K\). By Lemma 2.2.9, this property characterizes \(\mathrm{U}\) up to isomorphism:
**Corollary 2.2.10**.: _If \(U\) is an exponential extension of \(K\) such that \(C_{U}=C\) and \(K\subseteq(U^{\times})^{\dagger}\), then \(U\) is isomorphic to \(\mathrm{U}\) as a differential \(K\)-algebra._
Now \(\mathrm{U}\) is also an exponential extension of \(K\) with \(C_{\mathrm{U}}=C\) and with the property that every exponential extension \(R\) of \(K\) with \(C_{R}=C\) embeds into \(\mathrm{U}\) as a differential \(K\)-algebra. This property determines \(\mathrm{U}\) up to isomorphism as well:
**Corollary 2.2.11**.: _Suppose \(U\) is an exponential extension of \(K\) with \(C_{U}=C\) such that every exponential extension \(R\) of \(K\) with \(C_{R}=C\) embeds into \(U\) as a differential \(K\)-algebra. Then \(U\) is isomorphic to \(\mathrm{U}\) as a differential \(K\)-algebra._
Proof.: Any embedding \(\mathrm{U}\to U\) of differential \(K\)-algebras gives \(K\subseteq(U^{\times})^{\dagger}\).
The results above show to what extent \(\mathrm{U}\) is independent of the choice of \(\Lambda\). We call \(\mathrm{U}\) the **universal exponential extension of \(K\)**. If we need to indicate the dependence of \(\mathrm{U}\) on \(K\) we denote it by \(\mathrm{U}_{K}\). By [ADH, 5.1.40] every \(y\in\mathrm{U}=K\{\mathrm{e}(\Lambda)\}\) satisfies a linear differential equation \(A(y)=0\) where \(A\in K[\mathfrak{d}]^{\neq}\); in the next section we isolate conditions on \(K\) which ensure that every \(A\in K[\mathfrak{d}]^{\neq}\) has a zero \(y\in\mathrm{U}^{\times}=K^{\times}\,\mathrm{e}(\Lambda)\).
Corollary 2.2.10 gives for \(\phi\in K^{\times}\) an isomorphism \(\mathrm{U}_{K^{\phi}}\cong(\mathrm{U}_{K})^{\phi}\) of differential \(K^{\phi}\)-algebras. Next we investigate how \(\mathrm{U}_{K}\) behaves when passing from \(K\) to a differential field extension. _In the rest of this subsection \(L\) is a differential field extension of \(K\) such that \(C_{L}\) is algebraically closed and \(L^{\dagger}\) is divisible._
The next lemma relates the universal exponential extension \(\mathrm{U}_{L}\) of \(L\) to \(\mathrm{U}_{K}\):
**Lemma 2.2.12**.: _The inclusion \(K\to L\) extends to an embedding \(\iota\colon\mathrm{U}_{K}\to\mathrm{U}_{L}\) of differential rings. The image of any such embedding \(\iota\) is contained in \(K[E]\) where \(E:=\{u\in\mathrm{U}_{L}^{\times}:u^{\dagger}\in K\}\), and if \(C_{L}=C\), then \(\iota(\mathrm{U}_{K})=K[E]\)._
Proof.: The differential subring \(R:=K[E]\) of \(\mathrm{U}_{L}\) is exponential over \(K\) with \((R^{\times})^{\dagger}=K\), hence Lemma 2.2.9 gives an embedding \(\mathrm{U}_{K}\to R\) of differential \(K\)-algebras. Let \(\iota\colon\mathrm{U}_{K}\to\mathrm{U}_{L}\) be any embedding of differential \(K\)-algebras. Then \(\iota\big{(}\mathrm{e}(\Lambda)\big{)}\subseteq E\), so \(\iota(\mathrm{U}_{K})\subseteq R\); if \(C_{L}=C\), then \(\iota(\mathrm{U}_{K})=R\) by Lemma 2.2.7.
**Corollary 2.2.13**.: _If \(L^{\dagger}\cap K=K^{\dagger}\) and \(\iota\colon\mathrm{U}_{K}\to\mathrm{U}_{L}\) is an embedding of differential \(K\)-algebras, then \(L^{\times}\cap\iota(\mathrm{U}_{K}^{\times})=K^{\times}\)._
Proof.: Assume \(L^{\dagger}\cap K=K^{\dagger}\) and identify \(\mathrm{U}_{K}\) with a differential \(K\)-subalgebra of \(\mathrm{U}_{L}\) via an embedding \(\mathrm{U}_{K}\to\mathrm{U}_{L}\) of differential \(K\)-algebras. Let \(a\in L^{\times}\cap\mathrm{U}_{K}^{\times}\); then \(a^{\dagger}\in L^{\dagger}\cap K=K^{\dagger}\), so \(a=bc\) where \(c\in C_{L}^{\times}\), \(b\in K^{\times}\). Now \(c=a/b\in C_{L}^{\times}\cap\mathrm{U}_{K}^{\times}=C^{\times}\), since \(\mathrm{U}_{K}\) has ring of constants \(C\). So \(a\in K^{\times}\) as required.
Suppose \(L^{\dagger}\cap K=K^{\dagger}\). Then the subspace \(L^{\dagger}\) of the \(\mathbb{Q}\)-linear space \(L\) has a complement \(\Lambda_{L}\supseteq\Lambda\). We fix such \(\Lambda_{L}\) and extend \(\mathrm{e}\colon\Lambda\to\mathrm{e}(\Lambda)\) to a group isomorphism \(\Lambda_{L}\to\mathrm{e}(\Lambda_{L})\), also denoted by \(\mathrm{e}\), with \(\mathrm{e}(\Lambda_{L})\) a multiplicatively written commutative group extending \(\mathrm{e}(\Lambda)\). Let \(\mathrm{U}_{L}:=L\big{[}\mathrm{e}(\Lambda_{L})\big{]}\) be the corresponding universal exponential extension of \(L\). Then the natural inclusion \(\mathrm{U}_{K}\to\mathrm{U}_{L}\) is an embedding of differential \(K\)-algebras.
### Automorphisms of \(\mathrm{U}\)
These are easy to describe: the beginning of Section 2.1 gives a group embedding
\[\chi\mapsto\sigma_{\chi}\colon\operatorname{Hom}(\Lambda,K^{\times})\to \operatorname{Aut}\bigl{(}K[\mathrm{e}(\Lambda)]|K\bigr{)}\]
into the group of \(K\)-algebra automorphisms of \(K\big{[}\mathrm{e}(\Lambda)\big{]}\), given by
\[\sigma_{\chi}(f)\ :=f_{\chi}\ =\ \sum_{\lambda}f_{\lambda}\chi(\lambda)\, \mathrm{e}(\lambda)\qquad(\chi\in\operatorname{Hom}(\Lambda,K^{\times}),\ f\in K[ \mathrm{e}(\Lambda)]).\]
It is easy to check that if \(\chi\in\operatorname{Hom}(\Lambda,C^{\times})\subseteq\operatorname{Hom}( \Lambda,K^{\times})\), then \(\sigma_{\chi}\in\operatorname{Aut}_{\mathfrak{d}}(\mathrm{U}|K)\), that is, \(\sigma_{\chi}\) is a differential \(K\)-algebra automorphism of \(\mathrm{U}\). Moreover:
**Lemma 2.2.14**.: _The map \(\chi\mapsto\sigma_{\chi}\colon\operatorname{Hom}(\Lambda,C^{\times})\to \operatorname{Aut}_{\mathfrak{d}}(\mathrm{U}|K)\) is a group isomorphism. Its inverse assigns to any \(\sigma\in\operatorname{Aut}_{\mathfrak{d}}(\mathrm{U}|K)\) the function \(\chi\colon\Lambda\to C^{\times}\) given by \(\chi(\lambda):=\sigma\big{(}\mathrm{e}(\lambda)\big{)}\,\mathrm{e}(-\lambda)\). In particular, \(\operatorname{Aut}_{\mathfrak{d}}(\mathrm{U}|K)\) is commutative._
Proof.: Let \(\sigma\in\operatorname{Aut}_{\mathfrak{d}}(\mathrm{U}|K)\) and let \(\chi\colon\Lambda\to\mathrm{U}^{\times}\) be given by \(\chi(\lambda):=\sigma\big{(}\mathrm{e}(\lambda)\big{)}\,\mathrm{e}(-\lambda)\). Then \(\chi(\lambda)^{\dagger}=0\) for all \(\lambda\). It follows easily that \(\chi\in\operatorname{Hom}(\Lambda,C^{\times})\) and \(\sigma_{\chi}=\sigma\).
The proof of the next result uses that the additive group \(\mathbb{Q}\) embeds into \(C^{\times}\).
**Corollary 2.2.15**.: _If \(f\in\mathrm{U}\) and \(\sigma(f)=f\) for all \(\sigma\in\operatorname{Aut}_{\mathfrak{d}}(\mathrm{U}|K)\), then \(f\in K\)._
Proof.: Suppose \(f\in\mathrm{U}\) and \(\sigma(f)=f\) for all \(\sigma\in\operatorname{Aut}_{\mathfrak{d}}(\mathrm{U}|K)\). For \(\chi\in\operatorname{Hom}(\Lambda,C^{\times})\) we have \(f_{\chi}=f\), that is, \(f_{\lambda}\chi(\lambda)=f_{\lambda}\) for all \(\lambda\), so \(\chi(\lambda)=1\) whenever \(f_{\lambda}\neq 0\). Now use that for \(\lambda\neq 0\) there exists \(\chi\in\operatorname{Hom}(\Lambda,C^{\times})\) such that \(\chi(\lambda)\neq 1\), so \(f_{\lambda}=0\) for all \(\lambda\neq 0\), that is, \(f\in K\).
**Corollary 2.2.16**.: _Every automorphism of the differential field \(K\) extends to an automorphism of the differential ring \(\mathrm{U}\)._
Proof.: Lemma 2.2.3 yields a group morphism \(\mu\colon K\to\mathrm{U}^{\times}\) such that \(\mu(a)^{\dagger}=a\) for all \(a\in K\). Let \(\sigma\in\operatorname{Aut}_{\mathfrak{d}}(K)\). Then \(\sigma\) extends to an endomorphism, denoted also by \(\sigma\), of the ring \(\mathrm{U}\), such that \(\sigma\big{(}\mathrm{e}(\lambda)\big{)}=\mu\big{(}\sigma(\lambda)\big{)}\) for all \(\lambda\). Then
\[\sigma\big{(}\mathrm{e}(\lambda)^{\prime}\big{)}\ =\ \sigma\big{(}\lambda\, \mathrm{e}(\lambda)\big{)}\ =\ \sigma(\lambda)\mu\big{(}\sigma(\lambda)\big{)}\ =\ \mu\big{(}\sigma(\lambda)\big{)}^{\prime}\ =\ \sigma\big{(} \mathrm{e}(\lambda)\big{)}^{\prime},\]
hence \(\sigma\) is an endomorphism of the differential ring \(\mathrm{U}\). By Lemma 2.2.5, \(\sigma\) is injective, and by Lemma 2.2.7, \(\sigma\) is surjective.
**The real case.**_In this subsection \(K=H[i]\) where \(H\) is a real closed differential subfield of \(K\) and \(i^{2}=-1\)._ Set \(S_{C}:=\big{\{}c\in C:\,|c|=1\big{\}}\), a subgroup of \(C^{\times}\). Then by Lemmas 2.1.7 and 2.2.14:
**Corollary 2.2.17**.: _For \(\sigma\in\operatorname{Aut}_{\mathfrak{d}}(\mathrm{U}|K)\) we have the equivalence_
\[\sigma(f^{*})=\sigma(f)^{*}\mbox{ for all }f\in\mathrm{U}\quad\Longleftrightarrow \quad\sigma=\sigma_{\chi}\mbox{ for some }\chi\in\mathrm{Hom}(\Lambda,S_{C}).\]
Corollaries 2.2.17 and 2.1.8 together give:
**Corollary 2.2.18**.: _Let \(\sigma\in\operatorname{Aut}_{\mathfrak{d}}(\mathrm{U}|K)\) satisfy \(\sigma(f^{*})=\sigma(f)^{*}\) for all \(f\in\mathrm{U}\). Then \(\big{\langle}\sigma(f),\sigma(g)\big{\rangle}=\langle f,g\rangle\) for all \(f,g\in\mathrm{U}\), hence \(\|\sigma(f)\|=\|f\|\) for all \(f\in\mathrm{U}\)._
Next we consider the subgroup
\[S\ :=\ \{a+bi:\ a,b\in H,\ a^{2}+b^{2}=1\}\]
of \(K^{\times}\), which is divisible, hence so is the subgroup \(S^{\dagger}\) of \(K^{\dagger}\). Lemma 1.2.4 yields \(K^{\dagger}=H^{\dagger}\oplus S^{\dagger}\) (internal direct sum of \(\mathbb{Q}\)-linear subspaces of \(K\)) and \(S^{\dagger}\subseteq Hi\). Thus we can (and do) take the complement \(\Lambda\) of \(K^{\dagger}\) in \(K\) so that \(\Lambda=\Lambda_{\mathrm{r}}+\Lambda_{\mathrm{i}}\mathrm{i}\) where \(\Lambda_{\mathrm{r}},\Lambda_{\mathrm{i}}\) are subspaces of the \(\mathbb{Q}\)-linear space \(H\) with \(\Lambda_{\mathrm{r}}\) a complement of \(H^{\dagger}\) in \(H\) and \(\Lambda_{\mathrm{i}}\mathrm{i}\) a complement of \(S^{\dagger}\) in \(Hi\). The automorphism \(a+bi\mapsto\overline{a+bi}:=a-bi\ (a,b\in H)\) of the differential field \(K\) now satisfies in \(\mathrm{U}=K[\mathrm{e}(\Lambda)]\) the identity
\[\mathrm{e}(\overline{\lambda+\mu})\ =\ \mathrm{e}(\overline{\lambda})\,\mathrm{e}( \overline{\mu})\qquad(\lambda,\mu\in\Lambda),\]
so it extends to an automorphism \(f\mapsto\overline{f}\) of the ring \(\mathrm{U}\) as follows: for \(f\in\mathrm{U}\) with spectral decomposition \((f_{\lambda})\), set
\[\overline{f}\ :=\ \sum_{\lambda}\overline{f_{\lambda}}\,\mathrm{e}(\overline{ \lambda})\ =\ \sum_{\lambda}\overline{f_{\overline{\lambda}}}\,\mathrm{e}(\lambda),\]
so \(\overline{\mathrm{e}(\lambda)}=\mathrm{e}(\overline{\lambda})\), and \(\overline{f}\) has spectral decomposition \((\overline{f_{\overline{\lambda}}})\). We have \(\overline{\overline{f}}=f\) for \(f\in\mathrm{U}\), and \(f\mapsto\overline{f}\) lies in \(\operatorname{Aut}_{\mathfrak{d}}(\mathrm{U}|H)\). If \(H^{\dagger}=H\), then \(\Lambda_{\mathrm{r}}=\{0\}\) and hence \(\overline{f}=f^{*}\) for \(f\in\mathrm{U}\), where \(f^{*}\) is as defined in Section 2.1. For \(f\in\mathrm{U}\) we set
\[\mathrm{Re}\,f\ :=\ \tfrac{1}{2}(f+\overline{f}),\qquad\mathrm{Im}\,f\ :=\ \tfrac{1}{2\mathrm{i}}(f-\overline{f}).\]
(For \(f\in K\) these agree with the usual real and imaginary parts of \(f\) as an element of \(H[\mathrm{i}]\).) Consider the differential \(H\)-subalgebra
\[\mathrm{U}_{\mathrm{r}}\ :=\ \big{\{}f\in\mathrm{U}:\overline{f}=f\big{\}}\]
of \(\mathrm{U}\). For \(f\in\mathrm{U}\) with spectral decomposition \((f_{\lambda})\) we have \(f\in\mathrm{U}_{\mathrm{r}}\) iff \(f_{\overline{\lambda}}=\overline{f_{\lambda}}\) for all \(\lambda\); in particular \(\mathrm{U}_{\mathrm{r}}\cap K=H\). For \(f\in\mathrm{U}\) we have \(f=(\mathrm{Re}\,f)+(\mathrm{Im}\,f)i\) with \(\mathrm{Re}\,f,\mathrm{Im}\,f\in\mathrm{U}_{\mathrm{r}}\), hence
\[\mathrm{U}\ =\ \mathrm{U}_{\mathrm{r}}\oplus\mathrm{U}_{\mathrm{r}}i\quad\mbox{ (internal direct sum of $H$-linear subspaces)}.\]
Let \(D\) be a subfield of \(H\) (not necessarily the constant field of \(H\)), so \(D[\mathrm{i}]\) is a subfield of \(K\). Let \(V\) be a \(D[\mathrm{i}]\)-linear subspace of \(\mathrm{U}\); then \(V_{\mathrm{r}}:=V\cap\mathrm{U}_{\mathrm{r}}\) is a \(D\)-linear subspace of \(V\). If \(\overline{V}=V\) (that is, \(V\) is closed under \(f\mapsto\overline{f}\)), then \(\mathrm{Re}\,f,\mathrm{Im}\,f\in V_{\mathrm{r}}\) for all \(f\in V\), hence \(V=V_{\mathrm{r}}\oplus V_{\mathrm{r}}i\) (internal direct sum of \(D\)-linear subspaces of \(V\)), so any basis of the \(D\)-linear space \(V_{\mathrm{r}}\) is a basis of the \(D[\mathrm{i}]\)-linear space \(V\).
Suppose now that \(V=\bigoplus_{\lambda}V_{\lambda}\) (internal direct sum of subspaces of \(V\)) where \(V_{\lambda}\) is for each \(\lambda\) a \(D[\mathrm{i}]\)-linear subspace of \(K\operatorname{e}(\lambda)\). Then \(\overline{V}=V\) iff \(V_{\overline{\lambda}}=\overline{V_{\lambda}}\) for all \(\lambda\). Moreover:
**Lemma 2.2.19**.: _Assume \(H=H^{\dagger}\), \(V_{0}=\{0\}\), and \(\overline{V}=V\). Let \(\mathcal{V}\subseteq\operatorname{U}^{\times}\) be a basis of the subspace \(\sum_{\operatorname{Im}\lambda>0}V_{\lambda}\) of \(V\). Then the maps \(v\mapsto\operatorname{Re}v,\ v\mapsto\operatorname{Im}v\colon\mathcal{V}\to V_ {\mathrm{r}}\) are injective, \(\operatorname{Re}\mathcal{V}\) and \(\operatorname{Im}\mathcal{V}\) are disjoint, and \(\operatorname{Re}\mathcal{V}\cup\operatorname{Im}\mathcal{V}\) is a basis of \(V_{\mathrm{r}}\)._
Proof.: Note that \(\Lambda=\Lambda_{\mathrm{i}}i\). Let \(\mu\) range over \(\Lambda_{\mathrm{i}}^{>}\) and set \(\mathcal{V}_{\mu}=\mathcal{V}\cap K^{\times}\operatorname{e}(\mu i)\), a basis of the \(D[\mathrm{i}]\)-linear space \(V_{\mu i}\). Then \(\mathcal{V}=\bigcup_{\mu}\mathcal{V}_{\mu}\), a disjoint union. For \(v\in\mathcal{V}_{\mu}\) we have \(v=a\operatorname{e}(\mu i)\) with \(a=a_{v}\in K^{\times}\), so
\[\operatorname{Re}v\ =\ \tfrac{a}{2}\operatorname{e}(\mu i)+\tfrac{\overline{a}}{2}\operatorname{e}(-\mu i),\qquad\operatorname{Im}v\ =\ \tfrac{a}{2i}\operatorname{e}(\mu i)-\tfrac{\overline{a}}{2i}\operatorname{e}(-\mu i),\]
from which it is clear that the two maps \(\mathcal{V}\to V_{\mathrm{r}}\) in the statement of the lemma are injective. It is also easy to check that \(\operatorname{Re}\mathcal{V}\) and \(\operatorname{Im}\mathcal{V}\) are disjoint.
As \(\mathcal{V}\) is a basis of the \(D[\mathrm{i}]\)-linear space \(\sum_{\mu}V_{\mu i}=\sum_{\operatorname{Im}\lambda>0}V_{\lambda}\), its set of conjugates \(\overline{\mathcal{V}}\) is a basis of the \(D[\mathrm{i}]\)-linear space \(\sum_{\mu}\overline{V_{\mu i}}=\sum_{\mu}V_{-\mu i}=\sum_{\operatorname{Im} \lambda<0}V_{\lambda}\), and so \(\mathcal{V}\cup\overline{\mathcal{V}}\) (a disjoint union) is a basis of \(V\). Thus \(\operatorname{Re}\mathcal{V}\cup\operatorname{Im}\mathcal{V}\) is a basis of \(V\) as well. As \(\operatorname{Re}\mathcal{V}\cup\operatorname{Im}\mathcal{V}\) is contained in \(V_{\mathrm{r}}\), it is a basis of the \(D\)-linear space \(V_{\mathrm{r}}\).
If \(H=H^{\dagger}\), then \(V:=\sum_{\lambda\neq 0}K\operatorname{e}(\lambda)\) satisfies \(\overline{V}=V\), so Lemma 2.2.19 with \(D:=H\) and \(\mathcal{V}:=\big\{\operatorname{e}(\lambda):\,\operatorname{Im}\lambda>0\big\}\) then yields a basis of the \(H\)-linear space \(V_{\mathrm{r}}\) consisting of the elements \(\operatorname{Re}\operatorname{e}(\lambda)\), \(\operatorname{Im}\operatorname{e}(\lambda)\) \((\operatorname{Im}\lambda>0)\). Hence:
**Corollary 2.2.20**.: _Suppose \(H=H^{\dagger}\). Set \(\operatorname{c}(\lambda):=\operatorname{Re}\bigl{(}\operatorname{e}(\lambda) \bigr{)}\) and \(\operatorname{s}(\lambda):=\operatorname{Im}\bigl{(}\operatorname{e}(\lambda) \bigr{)}\), for \(\operatorname{Im}\lambda>0\). Then for \(V:=\sum_{\lambda\neq 0}K\operatorname{e}(\lambda)\) we have \(\operatorname{U}_{\mathrm{r}}=H+V_{\mathrm{r}}\), so_
\[\operatorname{U}_{\mathrm{r}}\ =\ H\oplus\bigoplus_{\operatorname{Im}\lambda>0} \bigl{(}H\operatorname{c}(\lambda)\oplus H\operatorname{s}(\lambda)\bigr{)} \quad(\text{internal direct sum of $H$-linear subspaces})\]
_and thus \(\operatorname{U}_{\mathrm{r}}=H\bigl{[}\operatorname{c}(\Lambda_{\mathrm{i}}^ {>}i)\cup\operatorname{s}(\Lambda_{\mathrm{i}}^{>}i)\bigr{]}\)._
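The elements \(\operatorname{c}(\lambda)\), \(\operatorname{s}(\lambda)\) of \(\operatorname{U}_{\mathrm{r}}\) behave like cosines and sines, in accordance with thinking of \(\operatorname{e}(\mu i)\) as \(\exp\big(i\int\mu\big)\): for \(\lambda=\mu i\) with \(\mu\in\Lambda_{\mathrm{i}}^{>}\), using \(\overline{\operatorname{e}(\lambda)}=\operatorname{e}(-\lambda)\) and \(\operatorname{e}(\lambda)\operatorname{e}(-\lambda)=1\) one computes

\[\operatorname{c}(\lambda)^{2}+\operatorname{s}(\lambda)^{2}\ =\ 1,\qquad\operatorname{c}(\lambda)^{\prime}\ =\ -\mu\operatorname{s}(\lambda),\qquad\operatorname{s}(\lambda)^{\prime}\ =\ \mu\operatorname{c}(\lambda).\]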
### The Spectrum of a Differential Operator
_In this section \(K\) is a differential field, \(a\), \(b\) range over \(K\), and \(A\), \(B\) over \(K[\partial]\)._ This and the next two sections are mainly differential-algebraic in nature, and deal with splittings of linear differential operators. In the present section we introduce the concept of _eigenvalue_ of \(A\) and the _spectrum_ of \(A\) (the collection of its eigenvalues). In Section 2.4 we give criteria for \(A\) to have eigenvalue \(0\), and in Section 2.5 we show how the eigenvalues of \(A\) relate to the behavior of \(A\) over the universal exponential extension of \(K\).

**Twisting.** Let \(L\) be a differential field extension of \(K\) with \(L^{\dagger}\supseteq K\). Let \(u\in L^{\times}\) be such that \(u^{\dagger}=a\in K\). Then the twist \(A_{\ltimes u}=u^{-1}Au\) of \(A\) by \(u\) has the same order as \(A\) and coefficients in \(K\) [ADH, 5.8.8], and only depends on \(a\), not on \(u\) or \(L\); in fact, \(\operatorname{Ri}(A_{\ltimes u})=\operatorname{Ri}(A)_{+a}\) [ADH, 5.8.5]. Hence for each \(a\) we may define

\[A_{a}\ :=\ A_{\ltimes u}\ =\ u^{-1}Au\in K[\partial]\]

where \(u\in L^{\times}\) is arbitrary with \(u^{\dagger}=a\). The map \(A\mapsto A_{\ltimes u}\) is an automorphism of the ring \(K[\partial]\) that is the identity on \(K\) (with inverse \(B\mapsto B_{\ltimes u^{-1}}\)); so \(A\mapsto A_{a}\) is an
automorphism of the ring \(K[\partial]\) that is the identity on \(K\) (with inverse \(B\mapsto B_{-a}\)). Note that \(\partial_{a}=\partial+a\), and that
\[(a,A)\mapsto A_{a}\ :\ K\times K[\partial]\to K[\partial]\]
is an action of the additive group of \(K\) on the set \(K[\partial]\), in particular, \(A_{a}=A\) for \(a=0\). For \(b\neq 0\) we have \((A_{a})_{\ltimes b}=A_{a+b^{\dagger}}\).
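For example, since \(A\mapsto A_{a}\) is the automorphism of the ring \(K[\partial]\) over \(K\) with \(\partial\mapsto\partial+a\), the commutation rule \(\partial a=a\partial+a^{\prime}\) in \(K[\partial]\) gives

\[(\partial^{2})_{a}\ =\ (\partial+a)^{2}\ =\ \partial^{2}+2a\partial+(a^{\prime}+a^{2}).\]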
**Eigenvalues.**_In the rest of this section \(A\neq 0\) and \(r:=\operatorname{order}(A)\)._ We call
\[\operatorname{mult}_{a}(A)\ :=\ \dim_{C}\ker_{K}A_{a}\in\{0,\dots,r\}\]
the **multiplicity** of \(A\) at \(a\). If \(B\neq 0\), then \(\operatorname{mult}_{a}(B)\leqslant\operatorname{mult}_{a}(AB)\), as well as
\[\operatorname{mult}_{a}(AB)\ \leqslant\ \operatorname{mult}_{a}(A)+ \operatorname{mult}_{a}(B), \tag{2.3.1}\]
with equality if and only if \(B_{a}(K)\supseteq\ker_{K}A_{a}\); see [ADH, remarks before 5.1.12]. For \(u\in K^{\times}\) we have an isomorphism
\[y\mapsto yu\ :\ \ker_{K}A_{\ltimes u}\to\ker_{K}A\]
of \(C\)-linear spaces, hence
\[\operatorname{mult}_{a}(A)\ =\ \operatorname{mult}_{b}(A)\qquad\text{whenever $a-b\in K^{\dagger}$}.\]
Thus we may define the **multiplicity** of \(A\) at the element \([a]:=a+K^{\dagger}\) of \(K/K^{\dagger}\) as \(\operatorname{mult}_{[a]}(A):=\operatorname{mult}_{a}(A)\).
_In the rest of this section \(\alpha\) ranges over \(K/K^{\dagger}\)._ We say that \(\alpha\) is an **eigenvalue** of \(A\) if \(\operatorname{mult}_{\alpha}(A)\geqslant 1\). Thus for \(B\neq 0\): if \(\alpha\) is an eigenvalue of \(B\) of multiplicity \(\mu\), then \(\alpha\) is an eigenvalue of \(AB\) of multiplicity \(\geqslant\mu\); if \(\alpha\) is an eigenvalue of \(AB\), then it is an eigenvalue of \(A\) or of \(B\); and if \(B_{a}(K)\supseteq\ker_{K}(A_{a})\), then \(\alpha=[a]\) is an eigenvalue of \(AB\) if and only if it is an eigenvalue of \(A\) or of \(B\).
_Example 2.3.1_.: Suppose \(A=\partial-a\). Then for each element \(u\neq 0\) in a differential field extension of \(K\) with \(b:=u^{\dagger}\in K\) we have \(A_{b}=A_{\ltimes u}=\partial-(a-b)\), so \(\operatorname{mult}_{b}(A)\geqslant 1\) iff \(a-b\in K^{\dagger}\). Hence the only eigenvalue of \(A\) is \([a]\).
The **spectrum** of \(A\) is the set \(\Sigma(A)=\Sigma_{K}(A)\) of its eigenvalues. Thus \(\Sigma(A)=\emptyset\) if \(r=0\), and for \(b\neq 0\) we have \(\operatorname{mult}_{a}(A)=\operatorname{mult}_{a}(bA)=\operatorname{mult}_{ a}(A_{\ltimes b})\), so \(A\), \(bA\), and \(Ab=bA_{\ltimes b}\) all have the same spectrum. By [ADH, 5.1.21] we have
\[\Sigma(A)=\big{\{}\alpha:A\in K[\partial](\partial-a)\text{ for some $a$ with $[a]=\alpha$}\big{\}}. \tag{2.3.2}\]
Hence for irreducible \(A\): \(\Sigma(A)\neq\emptyset\ \Leftrightarrow\ r=1\). From (2.3.1) we obtain:
**Lemma 2.3.2**.: _Suppose \(B\neq 0\) and set \(s:=\operatorname{order}B\). Then_
\[\operatorname{mult}_{\alpha}(B)\ \leqslant\ \operatorname{mult}_{\alpha}(AB)\ \leqslant\ \operatorname{mult}_{\alpha}(A)+ \operatorname{mult}_{\alpha}(B),\]
_where the second inequality is an equality if \(K\) is \(s\)-linearly surjective. Hence_
\[\Sigma(B)\ \subseteq\ \Sigma(AB)\ \subseteq\ \Sigma(A)\cup\Sigma(B).\]
_If \(K\) is \(s\)-linearly surjective, then \(\Sigma(AB)=\Sigma(A)\cup\Sigma(B)\)._
_Example_.: For \(n\geqslant 1\) we have \(\Sigma\big{(}(\partial-a)^{n}\big{)}=\big{\{}[a]\big{\}}\). (By induction on \(n\), using Example 2.3.1 and Lemma 2.3.2.)
It follows from Lemma 2.3.2 that \(A\) has at most \(r\) eigenvalues. More precisely:
**Lemma 2.3.3**.: _We have \(\sum_{\alpha}\operatorname{mult}_{\alpha}(A)\leqslant r\). If \(\sum_{\alpha}\operatorname{mult}_{\alpha}(A)=r\), then \(A\) splits over \(K\); the converse holds if \(r=1\) or \(K\) is \(1\)-linearly surjective._
Proof.: By induction on \(r\). The case \(r=0\) is obvious, so suppose \(r>0\). We may also assume \(\Sigma(A)\neq\emptyset\): otherwise \(\sum_{\alpha}\operatorname{mult}_{\alpha}(A)=0\) and \(A\) does not split over \(K\). Now (2.3.2) gives \(a\), \(B\) with \(A=B(\partial-a)\). By Example 2.3.1 we have \(\Sigma(\partial-a)=\bigl{\{}[a]\bigr{\}}\) and \(\operatorname{mult}_{a}(\partial-a)=1\). By the inductive hypothesis applied to \(B\) and the second inequality in Lemma 2.3.2 we thus get \(\sum_{\alpha}\operatorname{mult}_{\alpha}(A)\leqslant r\).
Suppose that \(\sum_{\alpha}\operatorname{mult}_{\alpha}(A)=r\). Then \(\sum_{\alpha}\operatorname{mult}_{\alpha}(B)=r-1\) by Lemma 2.3.2 and the inductive hypothesis applied to \(B\). Therefore \(B\) splits over \(K\), again by the inductive hypothesis, and so does \(A\). Finally, if \(K\) is \(1\)-linearly surjective and \(A\) splits over \(K\), then we arrange that \(B\) splits over \(K\), so \(\sum_{\alpha}\operatorname{mult}_{\alpha}(B)=r-1\) by the inductive hypothesis, hence \(\sum_{\alpha}\operatorname{mult}_{\alpha}(A)=r\) by Lemma 2.3.2.
Section 2.5 gives a more explicit proof of Lemma 2.3.3, under additional hypotheses on \(K\). Next, let \(L\) be a differential field extension of \(K\). Then \(\operatorname{mult}_{a}(A)\) does not strictly decrease in passing from \(K\) to \(L\) [ADH, 4.1.13]. Hence the group morphism
\[a+K^{\dagger}\mapsto a+L^{\dagger}\colon K/K^{\dagger}\to L/L^{\dagger}\]
restricts to a map \(\Sigma_{K}(A)\to\Sigma_{L}(A)\); in particular, if \(\Sigma_{K}(A)\neq\emptyset\), then \(\Sigma_{L}(A)\neq\emptyset\). If \(L^{\dagger}\cap K=K^{\dagger}\), then \(|\Sigma_{K}(A)|\leqslant|\Sigma_{L}(A)|\), and \(\sum_{\alpha}\operatorname{mult}_{\alpha}(A)\) also does not strictly decrease if \(K\) is replaced by \(L\).
**Lemma 2.3.4**.: _Let \(a_{1},\ldots,a_{r}\in K\) and_
\[A\ =\ (\partial-a_{r})\cdots(\partial-a_{1}),\quad\sum_{\alpha}\operatorname{ mult}_{\alpha}(A)\ =\ r.\]
_Then the spectrum of \(A\) is \(\bigl{\{}[a_{1}],\ldots,[a_{r}]\bigr{\}}\), and for all \(\alpha\),_
\[\operatorname{mult}_{\alpha}(A)\ =\ \bigl{|}\bigl{\{}i\in\{1,\ldots,r\}:\ \alpha=[a_{i}]\bigr{\}}\bigr{|}.\]
Proof.: Let \(i\) range over \(\{1,\ldots,r\}\). By Lemma 2.3.2 and Example 2.3.1,
\[\operatorname{mult}_{\alpha}(A)\ \leqslant\ \sum_{i}\operatorname{mult}_{ \alpha}(\partial-a_{i})\ =\ \bigl{|}\bigl{\{}i:\alpha=[a_{i}]\bigr{\}}\bigr{|}\]
and hence
\[r\ =\ \sum_{\alpha}\operatorname{mult}_{\alpha}(A)\ \leqslant\ \sum_{\alpha} \bigl{|}\bigl{\{}i:\alpha=[a_{i}]\bigr{\}}\bigr{|}\ =\ r.\]
Thus for each \(\alpha\) we have \(\operatorname{mult}_{\alpha}(A)=\bigl{|}\bigl{\{}i:\alpha=[a_{i}]\bigr{\}} \bigr{|}\) as required.
Recall from [ADH, 5.1.8] that \(D^{*}\in K[\partial]\) denotes the _adjoint_ of \(D\in K[\partial]\), and that the map \(D\mapsto D^{*}\) is an involution of the ring \(K[\partial]\) with \(a^{*}=a\) for all \(a\) and \(\partial^{*}=-\partial\). If \(A\) splits over \(K\), then so does \(A^{*}\). Furthermore, \((A_{a})^{*}=(A^{*})_{-a}\) for all \(a\). By Lemmas 2.3.3 and 2.3.4:
**Corollary 2.3.5**.: _Suppose \(K\) is \(1\)-linearly surjective and \(\sum_{\alpha}\operatorname{mult}_{\alpha}(A)=r\). Then \(\operatorname{mult}_{\alpha}(A)=\operatorname{mult}_{-\alpha}(A^{*})\) for all \(\alpha\). In particular, the map \(\alpha\mapsto-\alpha\) restricts to a bijection \(\Sigma(A)\to\Sigma(A^{*})\)._
Let \(\phi\in K^{\times}\). Then \((A^{\phi})_{a}=(A_{\phi a})^{\phi}\) and hence
\[\operatorname{mult}_{a}(A^{\phi})\ =\ \operatorname{mult}_{\phi a}(A),\]
so the group isomorphism
\[[a]\mapsto[\phi a]\,:\ K^{\phi}/\phi^{-1}K^{\dagger}\to K/K^{\dagger} \tag{2.3.3}\]
maps \(\Sigma(A^{\phi})\) onto \(\Sigma(A)\).
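For instance, \((\partial-a)^{\phi}=\phi\mathfrak{d}-a=\phi\big(\mathfrak{d}-\phi^{-1}a\big)\) where \(\mathfrak{d}:=\phi^{-1}\partial\), so by Example 2.3.1 applied in \(K^{\phi}[\mathfrak{d}]\) we get \(\Sigma\big((\partial-a)^{\phi}\big)=\big\{[\phi^{-1}a]\big\}\), and (2.3.3) indeed maps \([\phi^{-1}a]\) to \([a]\), the only eigenvalue of \(\partial-a\).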
Note that \(K[\partial]/K[\partial]A\) as a \(K\)-linear space has dimension \(r=\operatorname{order}A\). Recall from [ADH, 5.1] that \(A\) and \(B\neq 0\) are said to _have the same type_ if the (left) \(K[\partial]\)-modules \(K[\partial]/K[\partial]A\) and \(K[\partial]/K[\partial]B\) are isomorphic (and so \(\operatorname{order}B=r\)). By [ADH, 5.1.19]:
**Lemma 2.3.6**.: _The operators \(A\) and \(B\neq 0\) have the same type iff \(\operatorname{order}B=r\) and there is \(R\in K[\partial]\) of order \(<r\) with \(1\in K[\partial]R+K[\partial]A\) and \(BR\in K[\partial]A\)._
Hence if \(A\), \(B\) have the same type, then they also have the same type as elements of \(L[\partial]\), for any differential field extension \(L\) of \(K\). Since \(B\mapsto B_{a}\) is an automorphism of the ring \(K[\partial]\), Lemma 2.3.6 and [ADH, 5.1.20] yield:
**Lemma 2.3.7**.: _If \(A\) and \(B\neq 0\) have the same type, then so do \(A_{a}\), \(B_{a}\), for all \(a\), and thus \(A\), \(B\) have the same eigenvalues, with same multiplicity._
By this lemma the spectrum of \(A\) depends only on the type of \(A\), that is, on the isomorphism type of the \(K[\partial]\)-module \(K[\partial]/K[\partial]A\), suggesting one might try to associate a spectrum to each differential module over \(K\). (Recall from [ADH, 5.5] that a differential module over \(K\) is a \(K[\partial]\)-module of finite dimension as \(K\)-linear space.) Although our focus is on differential operators, we carry this out in the next subsection: it motivates the terminology of "eigenvalues" originating in the case of the differential field of Puiseux series over \(\mathbb{C}\) treated in [158]. This point of view will be further developed in the projected second volume of [ADH].
**The spectrum of a differential module \((^{*})\).**_In this subsection \(M\) is a differential module over \(K\) and \(r=\dim_{K}M\)._ For each \(B\) we let \(\ker_{M}B\) denote the kernel of the \(C\)-linear map \(y\mapsto By\colon M\to M\). For \(M=K\) as horizontal differential module over \(K\)[ADH, 5.5.2], this agrees with the \(C\)-linear subspace
\[\ker_{K}B\ =\ \ker B\ =\ \bigl{\{}y\in K:B(y)=0\bigr{\}}\]
of \(K\). Also, for \(B=\partial\) we obtain the \(C\)-linear subspace \(\ker_{M}\partial\) of horizontal elements of \(M\). We define the **spectrum** of \(M\) to be the set
\[\Sigma(M):=\bigl{\{}\alpha:\ker_{M}(\partial-a)\neq\{0\}\text{ for some $a$ with $[a]=\alpha$}\bigr{\}}.\]
The elements of \(\Sigma(M)\) are called **eigenvalues** of \(M\). If \(M=\{0\}\), then \(\Sigma(M)=\emptyset\). Isomorphic differential modules over \(K\) have clearly the same spectrum.
Let \(\phi\in K^{\times}\) and \(\mathfrak{d}=\phi^{-1}\partial\). Then \(K[\partial]=K^{\phi}[\mathfrak{d}]\) as rings, hence \(M\) is also a differential module over \(K^{\phi}\) with \(\phi^{-1}\partial_{M}\) instead of \(\partial_{M}\) as its distinguished operation; we denote it by \(M^{\phi}\) and call it the **compositional conjugate** of \(M\) by \(\phi\). Every cyclic vector of \(M\) is also a cyclic vector of \(M^{\phi}\). The group isomorphism (2.3.3) maps \(\Sigma(M^{\phi})\) onto \(\Sigma(M)\).
In the next lemma we assume \(r\geqslant 1\) and denote the \(r\times r\) identity matrix over \(K\) by \(I_{r}\). Below \(P\) is also an \(r\times r\) matrix over \(K\). The \(C\)-linear space of solutions to the matrix differential equation \(y^{\prime}=Py\) over \(K\) is the set of all column vectors \(e\in K^{r}\) such that \(e^{\prime}=Pe\), and is denoted by \(\operatorname{sol}(P)\)[ADH, p. 276]. Recall that \(a\) is said to be an eigenvalue of \(P\) over \(K\) if \(Pe=ae\) for some nonzero column vector \(e\in K^{r}\). Recall also from [ADH, p. 277] that we associate to \(P\) the differential module \(M_{P}\) having the space \(K^{r}\) of column vectors as its underlying \(K\)-linear space and satisfying \(\partial e=e^{\prime}-Pe\) for all \(e\in K^{r}\). Thus by [ADH, 5.4.8]:
**Lemma 2.3.8**.: _Let \(M=M_{-P}\) be the differential module over \(K\) associated to \(-P\). Then \(\ker_{M}(\partial-a)=\operatorname{sol}(aI_{r}-P)\), so \(\dim_{C}\ker_{M}(\partial-a)\leqslant r\) and_
\[\Sigma(M)=\bigl{\{}\alpha:\operatorname{sol}(aI_{r}-P)\neq\{0\}\text{ for some $a$ with $[a]=\alpha$}\bigr{\}}.\]
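For instance, if \(P=\operatorname{diag}(a_{1},\dots,a_{r})\), then \(\operatorname{sol}(aI_{r}-P)\neq\{0\}\) iff \(y^{\prime}=(a-a_{i})y\) has a solution \(y\in K^{\times}\) for some \(i\), that is, iff \(a-a_{i}\in K^{\dagger}\) for some \(i\); so for \(M=M_{-P}\) we have

\[\Sigma(M)\ =\ \big\{[a_{1}],\dots,[a_{r}]\big\},\qquad\operatorname{mult}_{\alpha}(M)\ =\ \big|\{i:\ \alpha=[a_{i}]\}\big|.\]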
We define \(\operatorname{mult}_{a}(M):=\dim_{C}\ker_{M}(\partial-a)\); thus \(\operatorname{mult}_{a}(M)\in\{0,\dots,r\}\) by the previous lemma. For \(b\neq 0\) we have a \(C\)-linear isomorphism
\[y\mapsto by\colon\ker_{M}(\partial-a)\to\ker_{M}(\partial-a-b^{\dagger}).\]
This observation allows us to define the **multiplicity**\(\operatorname{mult}_{\alpha}(M)\) of \(M\) at \(\alpha\) as the quantity \(\operatorname{mult}_{a}(M)\) where \(a\) with \([a]=\alpha\) is arbitrary. Clearly isomorphic differential modules over \(K\) have the same multiplicity at a given \(\alpha\).
**Lemma 2.3.9**.: _The following are equivalent:_
1. \(K\) _is_ \(r\)_-linearly surjective;_
2. _for each differential module_ \(N\) _over_ \(K\) _with_ \(\dim_{K}N=r\) _and every_ \(a\)_, the_ \(C\)_-linear map_ \(y\mapsto(\partial-a)y\colon N\to N\) _is surjective;_
3. _for each differential module_ \(N\) _over_ \(K\) _with_ \(\dim_{K}N\leqslant r\) _we have_ \(\partial N=N\)_;_
4. _for_ \(n=1,\dots,r\)_, each matrix differential equation_ \(y^{\prime}=Fy+g\) _with_ \(F\) _an_ \(n\times n\) _matrix over_ \(K\) _and_ \(g\in K^{n}\) _has a solution in_ \(K\)_._
Proof.: For (i) \(\Rightarrow\) (ii), let \(K\) be \(r\)-linearly surjective. The case \(r=0\) being trivial, let \(r\geqslant 1\), so \(C\neq K\). Let \(N\) be a differential module over \(K\) with \(\dim_{K}N=r\). Towards proving that \(y\mapsto(\partial-a)y\colon N\to N\) is surjective, we can assume by [ADH, 5.5.3] that \(N=K[\partial]/K[\partial]A\) with \(A\) of order \(r\). Let \(a\), \(B\) be given, and let \(y\) range over \(K\). It suffices to find \(R\in K[\partial]\) and \(y\) such that \((\partial-a)R=yA-B\), that is, \(yA-B\in(\partial-a)K[\partial]\) for some \(y\), equivalently, \(yA_{a}-B_{a}\in\partial K[\partial]\) for some \(y\). Taking adjoints this amounts to finding \(y\) such that \(A_{a}^{*}y-B_{a}^{*}\in K[\partial]\partial\), that is, \(A_{a}^{*}(y)=B_{a}^{*}(1)\). Such \(y\) exists because \(K\) is \(r\)-linearly surjective.
For (ii) \(\Rightarrow\) (iii), use that by [ADH, 5.5.2] each differential module over \(K\) of dimension \(\leqslant r\) is a direct summand of a differential module over \(K\) of dimension \(r\). For (iii) \(\Rightarrow\) (iv), note that for an \(n\times n\) matrix \(F\) over \(K\) (\(n\geqslant 1\)), with associated differential module \(M_{F}\) over \(K\), and \(g,y\in K^{n}\), we have \(y^{\prime}=Fy+g\) iff \(\partial y=g\) in \(M_{F}\)[ADH, p. 277]. For (iv) \(\Rightarrow\) (i), use [ADH, remarks before 5.4.3].
The previous lemma refines [ADH, 5.4.2], and leads to a more precise version of [ADH, 5.4.3] with a similar proof:
**Corollary 2.3.10**.: _Suppose \(K\) is \(mn\)-linearly surjective and \(L\) is a differential field extension of \(K\) with \([L:K]=m\). Then \(L\) is \(n\)-linearly surjective._
Proof.: Let \(F\) be an \(n\times n\) matrix over \(L\), \(n\geqslant 1\), and \(g\in L^{n}\); by (iv) \(\Rightarrow\) (i) in Lemma 2.3.9 with \(L\) in place of \(K\) it is enough to show that the equation \(y^{\prime}=Fy+g\) has a solution in \(L\). For this, take a basis \(e_{1},\dots,e_{m}\) of the \(K\)-linear space \(L\). As in the proof of [ADH, 5.4.3] we obtain an \(mn\times mn\) matrix \(F^{\diamond}\) over \(K\) and a column vector \(g^{\diamond}\in K^{mn}\) such that any solution of \(z^{\prime}=F^{\diamond}z+g^{\diamond}\) in \(K\) yields a solution of \(y^{\prime}=Fy+g\) in \(L\). Such a solution \(z\) exists by (i) \(\Rightarrow\) (iv) in Lemma 2.3.9.
Let \(0\to M_{1}\stackrel{{\iota}}{{\longrightarrow}}M\stackrel{{ \pi}}{{\longrightarrow}}M_{2}\to 0\) be a short exact sequence of differential modules over \(K\), where for notational simplicity we assume that \(M_{1}\) is a submodule of \(M\) and \(\iota\) is the natural inclusion. By restriction we obtain a sequence
\[0\to\ker_{M_{1}}(\partial-a)\stackrel{{\iota_{a}}}{{ \longrightarrow}}\ker_{M}(\partial-a)\stackrel{{\pi_{a}}}{{ \longrightarrow}}\ker_{M_{2}}(\partial-a)\to 0, \tag{2.3.4}\]
of \(C\)-linear maps, not necessarily exact, but with \(\operatorname{im}\iota_{a}=\ker\pi_{a}\). Hence
\[\operatorname{mult}_{a}(M)\ \leqslant\ \operatorname{mult}_{a}(M_{1})+ \operatorname{mult}_{a}(M_{2}). \tag{2.3.5}\]
Therefore \(\Sigma(M)\subseteq\Sigma(M_{1})\cup\Sigma(M_{2})\). If \((\partial-a)M\cap M_{1}=(\partial-a)M_{1}\), then \(\pi_{a}\) is surjective, so the sequence of \(C\)-linear maps (2.3.4) is exact, hence the inequality (2.3.5) is an equality. This yields the next corollary, whose hypothesis holds if \(M=M_{1}\oplus M_{2}\) (internal direct sum of submodules of \(M\)) and \(\iota\) and \(\pi\) are the natural morphisms, and also if \(K\) is \(r_{1}\)-linearly surjective for \(r_{1}:=\dim_{K}M_{1}\), by Lemma 2.3.9:
**Corollary 2.3.11**.: _Suppose \((\partial-a)M\cap M_{1}=(\partial-a)M_{1}\) for each \(a\). Then_
\[\operatorname{mult}_{\alpha}(M)=\operatorname{mult}_{\alpha}(M_{1})+ \operatorname{mult}_{\alpha}(M_{2})\quad\text{for each $\alpha$;}\]
_in particular, \(\Sigma(M)=\Sigma(M_{1})\cup\Sigma(M_{2})\)._
Let \(E:=\operatorname{End}_{C}(M)\) be the \(C\)-algebra of endomorphisms of the \(C\)-linear space \(M\). We have a ring morphism \(K[\partial]\to E\) which assigns to \(B\in K[\partial]\) the element \(y\mapsto By\) of \(E\), and we view \(E\) accordingly as \(K[\partial]\)-module: \((Bf)(y):=B\cdot f(y)\) for \(f\in E\), \(y\in M\). In the next corollary of Lemma 2.3.9 we let \(\partial-a\) stand for the image of \(\partial-a\in K[\partial]\) under the above ring morphism \(K[\partial]\to E\).
**Corollary 2.3.12**.: _If \(K\) is \(r\)-linearly surjective where \(r=\dim_{K}M\), then_
\[\Sigma(M)=\big{\{}\alpha:(\partial-a)\notin E^{\times}\text{ for some $a$ with $[a]=\alpha$}\big{\}}.\]
_Remark_.: The description of \(\Sigma(M)\) in the previous corollary is reminiscent of the definition of the _spectrum_ of an element \(x\) of an arbitrary \(K\)-algebra \(E\) with unit as the set of all \(a\) such that \((x-a)\notin E^{\times}\), as given in [41, §1]. (If \(C=K\), then \(K^{\dagger}=\{0\}\), and identifying \(K/K^{\dagger}\) with \(K\) in the natural way, \(\Sigma(M)\) is the spectrum of \(\partial\in E\) in this sense.)
Let now \(N\) be a differential module over \(K\) and \(s:=\dim_{K}N\). From [ADH, p. 279] recall that the \(K\)-linear space \(\operatorname{Hom}_{K}(M,N)\) of all \(K\)-linear maps \(M\to N\) (of dimension \(\dim_{K}\operatorname{Hom}_{K}(M,N)=rs\)) is a differential module over \(K\) with
\[(\partial\phi)(f)\ :=\ \partial(\phi f)-\phi(\partial f)\qquad\text{ for $\phi\in \operatorname{Hom}_{K}(M,N)$ and $f\in M$.}\]
Given a \(K[\partial]\)-linear map \(\theta\colon N\to P\) into a differential module \(P\) over \(K\), this yields a \(K[\partial]\)-linear map \(\operatorname{Hom}_{K}(M,\theta)\colon\operatorname{Hom}_{K}(M,N)\to \operatorname{Hom}_{K}(M,P)\) which sends any \(\phi\) in \(\operatorname{Hom}_{K}(M,N)\) to \(\theta\circ\phi\in\operatorname{Hom}_{K}(M,P)\). The horizontal elements of \(\operatorname{Hom}_{K}(M,N)\) are the \(K[\partial]\)-module morphisms \(M\to N\); they are the elements of a finite-dimensional \(C\)-linear subspace \(\operatorname{Hom}_{K[\partial]}(M,N)\) of \(\operatorname{Hom}_{K}(M,N)\):
**Lemma 2.3.13**.: _We have_
\[\dim_{C}\operatorname{Hom}_{K[\partial]}(M,N)\ \leqslant\ \dim_{K}\operatorname{Hom}_ {K}(M,N),\]
_with equality iff \(\operatorname{Hom}_{K}(M,N)\) is horizontal._
Proof.: By [ADH, 5.4.8 and remarks before 5.5.2], the dimension of the \(C\)-linear space of horizontal elements of \(M\) is at most \(\dim_{K}M\), with equality iff \(M\) is horizontal. Now apply this with \(\operatorname{Hom}_{K}(M,N)\) in place of \(M\).
Recall: \(M^{*}:=\operatorname{Hom}_{K}(M,K)\) is the _dual_ of \(M\); see [ADH, 5.5]. By Lemma 2.3.13, the dimension of the \(C\)-linear subspace \(\operatorname{Hom}_{K[\partial]}(M,K)=\ker_{M^{*}}\partial\) of \(M^{*}\) is at most \(\dim_{K}M\). For the differential module \(M=K[\partial]/K[\partial]A\) we can say more:
**Lemma 2.3.14**.: _Suppose \(M=K[\partial]/K[\partial]A\) and \(e:=1+K[\partial]A\in M\). Then for all \(\phi\in\operatorname{Hom}_{K[\partial]}(M,K)\) we have \(\phi(e)\in\ker A\), and the map_
\[\phi\mapsto\phi(e)\colon\operatorname{Hom}_{K[\partial]}(M,K)\to\ker A\]
_is an isomorphism of \(C\)-linear spaces._
Proof.: The first claim follows from \(A(\phi(e))=\phi(Ae)=0\), as \(Ae=A+K[\partial]A\) is the zero element of \(M\). This yields a \(C\)-linear map as displayed. To show that it is surjective, let \(y\in\ker A\) be given. Then \(B\mapsto B(y)\colon K[\partial]\to K\) is \(K[\partial]\)-linear with \(K[\partial]A\) contained in its kernel, and thus yields \(\phi\in\operatorname{Hom}_{K[\partial]}(M,K)\) with \(\phi(e)=y\). Injectivity is clear since \(M=K[\partial]e\).
Given \(a\), the map \(\partial-a\colon M\to M\) is a \(\partial\)-compatible derivation on the \(K\)-linear space \(M\) [ADH, 5.5]. Let \(M_{a}\) be the \(K\)-linear space \(M\) equipped with this \(\partial\)-compatible derivation. Thus \(M_{a}\) is a differential module over \(K\) with
\[\dim_{K}M_{a}=\dim_{K}M=r,\quad\ker_{M^{*}}(\partial-a)=\operatorname{Hom}_{K[ \partial]}(M_{a},K).\]
Moreover, if \(e\) is a cyclic vector of \(M\) with \(Ae=0\), then \(e\) is a cyclic vector of \(M_{a}\) with \(A_{a}e=0\). Hence by the previous lemma:
**Corollary 2.3.15**.: _Let \(A\), \(e\), \(M\) be as in Lemma 2.3.14. Then for each \(\phi\in\ker_{M^{*}}(\partial-a)\) we have \(\phi(e)\in\ker A_{a}\), and the map_
\[\phi\mapsto\phi(e)\colon\ker_{M^{*}}(\partial-a)\to\ker A_{a}\]
_is an isomorphism of \(C\)-linear spaces. In particular, \(\operatorname{mult}_{\alpha}(M^{*})=\operatorname{mult}_{\alpha}(A)\), so \(\alpha\) is an eigenvalue of \(M^{*}\) iff \(\alpha\) is an eigenvalue of \(A\)._
Recall that every differential module \(M\) has finite length, denoted by \(\ell(M)\) [ADH, pp. 36-38, 251], with \(\ell(M)\leqslant\dim_{K}M=r\). We say that \(M\) **splits** if \(\ell(M)=r\). By [ADH, 5.1.25], \(M=K[\partial]/K[\partial]A\) splits iff \(A\) splits over \(K\). By additivity of \(\ell(-)\) and \(\dim_{K}(-)\) on short exact sequences (see [ADH, 1.2]) we have:
**Lemma 2.3.16**.: _Let \(N\) be a differential submodule of \(M\). Then \(M\) splits iff both \(N\) and \(M/N\) split._
Hence if \(N\) is a differential module over \(K\), then \(M\oplus N\) splits iff \(M\) and \(N\) split. Thus the least common left multiple of \(A_{1},\dots,A_{m}\in K[\partial]^{\neq}\), \(m\geqslant 1\), splits over \(K\) iff \(A_{1},\dots,A_{m}\) split over \(K\): use that the differential module
\[K[\partial]/K[\partial]\operatorname{lclm}(A_{1},\dots,A_{m})\]
over \(K\) is isomorphic to the image of the natural (diagonal) \(K[\partial]\)-linear map
\[K[\partial]\to\big{(}K[\partial]/K[\partial]A_{1}\big{)}\times\dots\times(K[ \partial]/K[\partial]A_{m}\big{)}.\]
A \(K[\partial]\)-linear map \(\phi\colon M\to N\) into a differential module \(N\) over \(K\) induces a \(K[\partial]\)-linear map \(\phi^{*}\colon N^{*}\to M^{*}\) given by \(\phi^{*}(f)=f\circ\phi\), and if \(\phi\) is surjective, then \(\phi^{*}\) is injective. This gives a contravariant functor \((-)^{*}\) from the category of differential modules over \(K\) to itself; the morphisms of this category are the \(K[\partial]\)-linear maps between differential modules over \(K\). Using \(\dim_{K}M=\dim_{K}M^{*}<\infty\) it follows easily from these facts that if \(\phi\colon M\to N\) is an injective \(K[\partial]\)-linear map into a differential module \(N\), then \(\phi^{*}\colon N^{*}\to M^{*}\) is surjective.
**Lemma 2.3.17**.: \(\ell(M)=\ell(M^{*})\)_, so if \(M\) splits, then \(M^{*}\) splits as well._
Proof.: Induction on \(\ell(M)\) using the canonical \(K[\partial]\)-linear isomorphism \(M\cong M^{**}\) and what was said about the functor \((-)^{*}\) shows \(\ell(M)=\ell(M^{*})\).
Let \(L\) be a differential field extension of \(K\). Recall from [ADH, 5.9.2] that the base change \(L\otimes_{K}M\) of \(M\) to \(L\) is a differential module over \(L\) with \(\dim_{L}L\otimes_{K}M=\dim_{K}M\). A \(K[\partial]\)-linear map \(\phi\colon M\to N\) into a differential module \(N\) over \(K\) induces an \(L[\partial]\)-linear map
\[L\otimes_{K}\phi\,:\ L\otimes_{K}M\to L\otimes_{K}N,\qquad\lambda\otimes x \mapsto\lambda\otimes\phi(x),\]
and this yields a covariant functor \(L\otimes_{K}-\) from the category of differential modules over \(K\) to the category of differential modules over \(L\). Using the above invariance of dimension one checks easily that this functor transforms short exact sequences in the first category to short exact sequences in the second category. Hence, by an easy induction on \(\ell(M)\) we have \(\ell(M)\leqslant\ell(L\otimes_{K}M)\).
We say that \(M\) **splits over** \(L\) if \(L\otimes_{K}M\) splits. The \(K[\partial]\)-linear isomorphism
\[x\mapsto 1\otimes x\,:\ M\to K\otimes_{K}M\]
shows that \(M\) splits iff \(M\) splits over \(K\), and then \(M\) splits over each differential field extension of \(K\). Let \(E\) be a differential field extension of \(L\). Then we have an \(E[\partial]\)-linear isomorphism
\[E\otimes_{K}M\to E\otimes_{L}(L\otimes_{K}M),\quad e\otimes x\mapsto e \otimes(1\otimes x),\]
so if \(M\) splits over \(L\), then \(M\) also splits over \(E\). If \(N\) is a differential submodule of \(M\), then \(M\) splits over \(L\) iff both \(N\) and \(M/N\) split over \(L\).
**Lemma 2.3.18**.: _If \(M\) splits over \(L\), then so does \(M^{*}\)._
Proof.: For \(\phi\in M^{*}\) we have the \(L\)-linear map
\[\operatorname{id}_{L}\otimes\phi\,:\ L\otimes_{K}M\to L\otimes_{K}K,\quad s\otimes y\mapsto s\otimes\phi(y)\quad(s\in L,\ \phi\in M^{*},\ y\in M).\]
We also have the \(L[\partial]\)-linear isomorphism \(i_{L}\colon L\otimes_{K}K\to L\) given by \(i_{L}(s\otimes 1)=s\) for \(s\in L\). It is straightforward to check that this yields an \(L\)-linear isomorphism
\[L\otimes_{K}M^{*}\to(L\otimes_{K}M)^{*},\qquad 1\otimes\phi\mapsto i_{L} \circ(\operatorname{id}_{L}\otimes\phi)\quad(\phi\in M^{*}),\]
and that this map is even \(L[\partial]\)-linear. Now use Lemma 2.3.17.
Call \(M\) **cyclic** if it has a cyclic vector, equivalently, for some \(A\) we have
\[M\ \cong\ K[\partial]/K[\partial]A,\ \text{as}\ K[\partial]\text{-modules}.\]
**Corollary 2.3.19**.: _We have \(\sum_{\alpha}\operatorname{mult}_{\alpha}(M)\leqslant r\), hence \(|\Sigma(M)|\leqslant r\). If moreover \(\sum_{\alpha}\operatorname{mult}_{\alpha}(M)=r\), then \(M\) splits. Conversely, if \(K\) is \(1\)-linearly surjective and \(M\) splits, then \(\sum_{\alpha}\operatorname{mult}_{\alpha}(M)=r\)._
Proof.: We prove this for \(M^{*}\) instead of \(M\). (Then by various results above and the natural \(K[\partial]\)-linear isomorphism \(M\cong M^{**}\) it also follows for \(M\).) If \(M\) is cyclic, then \(\sum_{\alpha}\operatorname{mult}_{\alpha}(M^{*})\leqslant r\) by Lemma 2.3.3 and Corollary 2.3.15, and the rest follows using also Lemma 2.3.17 and remarks following Corollary 2.3.15. Thus we are done if \(C\neq K\), by [ADH, 5.5.3].
If \(C=K\), then a differential module over \(K\) is just a finite-dimensional vector space \(M\) over \(K\) equipped with a \(K\)-linear map \(\partial\colon M\to M\), so we can use the well-known internal direct sum decomposition \(\sum_{a}\ker_{M}(\partial-a)=\bigoplus_{a}\ker_{M}(\partial-a)\).
Likewise we obtain from Corollary 2.3.5 and [ADH, 5.5.8]:
**Corollary 2.3.20**.: _Suppose \(K\) is \(1\)-linearly surjective and \(\sum_{\alpha}\operatorname{mult}_{\alpha}(M)=r\). Then the map \(\alpha\mapsto-\alpha\) restricts to a bijection \(\Sigma(M)\to\Sigma(M^{*})\) with \(\operatorname{mult}_{\alpha}(M)=\operatorname{mult}_{-\alpha}(M^{*})\) for each \(\alpha\)._
We now aim for a variant of Corollary 2.3.20: Corollary 2.3.23 below.
**Lemma 2.3.21**.: _Suppose \(\dim_{C}\ker A=r\). Then \(\dim_{C}\ker A^{*}=r\)._
Proof.: The case \(r=0\) being trivial, suppose \(r\geqslant 1\) and set \(M:=K[\partial]/K[\partial]A\), so \(M\neq\{0\}\). By Lemma 2.3.14, \(M^{*}\) is horizontal, hence \(M\) is also horizontal by [ADH, remark after 5.5.5], and therefore \(\dim_{C}\ker A^{*}=\dim_{C}\ker_{M}\partial=r\) by Lemma 2.3.14 and [ADH, 5.5.8].
**Corollary 2.3.22**.: _Let \(d:=\dim_{C}\ker A\) and suppose \(K\) is \((r-d)\)-linearly surjective. Then \(\dim_{C}\ker A^{*}=d\)._
Proof.: It suffices to show \(\dim_{C}\ker A^{*}\geqslant d\), since then the reverse inequality follows by interchanging the role of \(A\) and \(A^{*}\). Let \(y_{1},\dots,y_{d}\) be a basis of the \(C\)-linear space \(\ker A\). Then
\[L(Y):=\operatorname{wr}(Y,y_{1},\dots,y_{d})\in K\{Y\}\]
is homogeneous of degree \(1\) and order \(d\) with zero set \(Z(L)=\ker A\)[ADH, 4.1.13]. So with \(B\in K[\partial]\) the linear part of \(L\) we have \(A=DB\) where \(D\in K[\partial]\) has order \(r-d\), by [ADH, 5.1.15(i)], so \(A^{*}=B^{*}D^{*}\) and \(D^{*}(K)=K\). Hence
\[\dim_{C}\ker A^{*}\ =\ \dim_{C}\ker B^{*}+\dim_{C}\ker D^{*}\ \geqslant\ \dim_{C}\ker B^{*}\ =\ \dim_{C}\ker B\ =\ d\]
where we used [ADH, remark before 5.1.12] for the first equality and the previous lemma (applied to \(B\) in place of \(A\)) for the second equality.
Suppose now that \(r\geqslant 1\) and \(K\) is \((r-1)\)-linearly surjective. Then Corollary 2.3.22 and \(A^{**}=A\) give \(\dim_{C}\ker A=\dim_{C}\ker A^{*}\) (even when \(\dim_{C}\ker A=0\)). Hence for all \(a\) we have \(\dim_{C}\ker A_{a}=\dim_{C}\ker(A^{*})_{-a}\). This leads to:
**Corollary 2.3.23**.: _If \(r\geqslant 1\) and \(K\) is \((r-1)\)-linearly surjective, then we have a bijection \(\alpha\mapsto-\alpha\colon\Sigma(M)\to\Sigma(M^{*})\), and \(\operatorname{mult}_{\alpha}(M)\ =\ \operatorname{mult}_{-\alpha}(M^{*})\) for all \(\alpha\)._
**Complex conjugation**\((^{*})\).: _In this subsection \(K=H[\mathrm{i}]\) where \(H\) is a differential subfield of \(K\), \(\mathrm{i}^{2}=-1\), and \(\mathrm{i}\notin H\); then \(C=C_{H}[\mathrm{i}]\). Recall that \(A\in K[\partial]^{\neq}\) has order \(r\)._ The complex conjugation automorphism \(z=g+h\mathrm{i}\mapsto\overline{z}:=g-h\mathrm{i}\)\((g,h\in H)\) of the differential field \(K\) induces an automorphism \(\alpha\mapsto\overline{\alpha}\) of the group \(K/K^{\dagger}\) with \(\overline{\alpha}=[\overline{a}]\) for \(\alpha=[a]\), \(a\in K\). The automorphism \(z\mapsto\overline{z}\) of \(K\) extends uniquely to an automorphism \(D\mapsto\overline{D}\) of the ring \(K[\partial]\) with \(\overline{\partial}=\partial\). If \(A\) and \(B\neq 0\) have the same type, then so do \(\overline{A}\) and \(\overline{B}\). (Lemma 2.3.6.) Now \(\overline{A}(\overline{f})=\overline{A(f)}\) for \(f\in K\), so \(\dim_{C}\ker_{K}A=\dim_{C}\ker_{K}\overline{A}\). Moreover, \(\overline{A_{a}}=\overline{A}_{\overline{a}}\), hence \(\operatorname{mult}_{\alpha}(A)=\operatorname{mult}_{\overline{\alpha}}(\overline{A})\) for all \(\alpha\); so \(\alpha\) is an eigenvalue of \(A\) iff \(\overline{\alpha}\) is an eigenvalue of \(\overline{A}\). Note that \(\overline{A^{*}}=\overline{A}^{\,*}\). We call \(\overline{A}^{\,*}\) the **conjugate adjoint** of \(A\). Corollary 2.3.5 yields:
**Corollary 2.3.24**.: _If \(K\) is \(1\)-linearly surjective and \(\sum_{\alpha}\operatorname{mult}_{\alpha}(A)=r\), then we have a bijection \(\alpha\mapsto-\overline{\alpha}\colon\Sigma(A)\to\Sigma(\overline{A^{*}})\), with \(\operatorname{mult}_{\alpha}(A)=\operatorname{mult}_{-\overline{\alpha}}( \overline{A^{*}})\) for all \(\alpha\)._
Next, let \(M\) be a (left) \(K[\partial]\)-module. Then we define \(\overline{M}\) as the \(K[\partial]\)-module arising from \(M\) by replacing its scalar multiplication \((A,f)\mapsto Af\colon K[\partial]\times M\to M\) with
\[(A,f)\mapsto\overline{A}f\,:\ K[\partial]\times\overline{M}\to\overline{M}.\]
We call \(\overline{M}\) the **complex conjugate** of \(M\). Note that \(\overline{\overline{M}}=M\). If \(\phi\colon M\to N\) is a morphism of \(K[\partial]\)-modules, then \(\phi\) is also a morphism of \(K[\partial]\)-modules \(\overline{M}\to\overline{N}\), which we denote by \(\overline{\phi}\). Hence we have a covariant functor \(\overline{(\,\cdot\,)}\) from the category of \(K[\partial]\)-modules to itself. We have \(\dim_{K}M=\dim_{K}\overline{M}\), hence if \(M\) is a differential module over \(K\), then so is \(\overline{M}\). Thus \(\overline{(\,\cdot\,)}\) restricts to a functor from the category
of differential modules over \(K\) to itself. If \(P\) is an \(r\times r\) matrix over \(K\) and \(M=M_{P}\) is the differential module associated to \(P\)[ADH, p. 277], then the differential modules \(\overline{M}\) and \(M_{\overline{P}}\) over \(K\) both have underlying additive group \(K^{r}\), and the map \(e\mapsto\overline{e}\colon\overline{M}\to M_{\overline{P}}\) is an isomorphism of differential modules over \(K\).
_Example 2.3.25_.: For \(M=K[\partial]\) we have an isomorphism \(B\mapsto\overline{B}\colon M\to\overline{M}\) of \(K[\partial]\)-modules. For \(N=K[\partial]/K[\partial]A\) we have an isomorphism
\[B+K[\partial]A\mapsto\overline{B}+K[\partial]\overline{A}\,:\ \overline{N}\to K[\partial]/K[\partial]\overline{A}\]
of differential modules over \(K\).
_Below \(M\) is a differential module over \(K\) and \(r=\dim_{K}M\)._ Then for each \(B\) we have \(\ker_{M}B=\ker_{\overline{M}}\overline{B}\). Hence \(\operatorname{mult}_{\alpha}(M)=\operatorname{mult}_{\overline{\alpha}}( \overline{M})\) for all \(\alpha\), so \(\alpha\) is an eigenvalue of \(M\) iff \(\overline{\alpha}\) is an eigenvalue of \(\overline{M}\).
Next, let \(N\) be a differential module over \(K\). A map \(\phi\colon M\to N\) is \(K\)-linear iff \(\phi\colon\overline{M}\to\overline{N}\) is \(K\)-linear, so \(\operatorname{Hom}_{K}(M,N)\) and \(\operatorname{Hom}_{K}(\overline{M},\overline{N})\) have the same underlying additive group. It is easy to check that for the differential module \(P:=\operatorname{Hom}_{K}(M,N)\) we have \(\overline{P}=\operatorname{Hom}_{K}(\overline{M},\overline{N})\). Thus \(\overline{M^{*}}=\operatorname{Hom}_{K}(\overline{M},\overline{K})\). In view of the isomorphism \(z\mapsto\overline{z}\colon K\to\overline{K}\) of differential modules over \(K\) this yields an isomorphism \(\overline{M^{*}}\cong\overline{M}^{\,*}\) of differential modules over \(K\). We call \(\overline{M}^{\,*}\) the **conjugate dual** of \(M\). From Corollaries 2.3.20 and 2.3.23 we obtain:
**Corollary 2.3.26**.: _Suppose \(K\) is \(1\)-linearly surjective and \(\sum_{\alpha}\operatorname{mult}_{\alpha}(M)=r\), or \(r\geqslant 1\) and \(K\) is \((r-1)\)-linearly surjective. Then the map \(\alpha\mapsto-\overline{\alpha}\) restricts to a bijection \(\Sigma(M)\to\Sigma(\overline{M^{*}})\) with \(\operatorname{mult}_{\alpha}(M)=\operatorname{mult}_{-\overline{\alpha}}( \overline{M^{*}})\) for all \(\alpha\)._
In the remainder of this section we discuss eigenvalues of differential modules over \(K\) in the presence of a valuation on \(K\). This is only used for the proof of Lemma 7.4.27 in Section 7.4. In preparation for this, we first study lattices over valued fields.
**Lattices \((^{*})\)**.: _In this subsection \(F\) is a valued field with valuation ring \(R\)._ Let \(L\) be an \(R\)-module, with its torsion submodule
\[L_{\operatorname{tor}}=\bigl{\{}y\in L:ry=0\text{ for some }r\in R^{\neq} \bigr{\}}.\]
Call \(L\)_torsion-free_ if \(L_{\operatorname{tor}}=\{0\}\), and a _torsion module_ if \(L_{\operatorname{tor}}=L\). For the following basic fact, cf. [40, VI, SS4, Lemme 1]:
**Lemma 2.3.27**.: _Every finitely generated torsion-free \(R\)-module is free._
Proof.: Let \(L\) be a finitely generated torsion-free \(R\)-module. Let \(x_{1},\dots,x_{m}\in L\) be distinct such that \(\{x_{1},\dots,x_{m}\}\) is a minimal set of generators of \(L\)[ADH, p. 44]. Towards a contradiction, suppose \(r_{1}x_{1}+\dots+r_{m}x_{m}=0\) with \(r_{1},\dots,r_{m}\) in \(R\) not all zero. By reordering, arrange \(r_{j}\in r_{1}R\) for \(j=2,\dots,m\). Torsion-freeness of \(L\) yields \(x_{1}+s_{2}x_{2}+\dots+s_{m}x_{m}=0\) where \(s_{j}:=r_{j}/r_{1}\) for \(j=2,\dots,m\). Hence \(\{x_{2},\dots,x_{m}\}\) is also a set of generators of \(L\), contradicting the minimality of \(\{x_{1},\dots,x_{m}\}\). Thus \(x_{1},\dots,x_{m}\) are \(R\)-linearly independent.
Let now \(M\) be a finite-dimensional \(F\)-linear space and \(m:=\dim_{F}M\).
**Lemma 2.3.28**.: _Let \(L\) be a finitely generated \(R\)-submodule of \(M\). Then \(L\) is free of rank \(\leqslant m\), and the following are equivalent:_
1. \(L\) _has rank_ \(m\)_;_
2. \(L\) _has a basis which is also a basis of the_ \(F\)_-linear space_ \(M\)_;_
3. \(L\) _contains a basis of_ \(M\)_;_
4. \(L\) _contains a generating set of_ \(M\)_;_
5. _the_ \(R\)_-module_ \(M/L\) _is a torsion module._
Proof.: Freeness of \(L\) follows from Lemma 2.3.27. Every set of \(R\)-linearly independent elements of \(L\) is \(F\)-linearly independent, so \(\operatorname{rank}(L)\leqslant m\). Let \(y_{1},\ldots,y_{n}\) be a basis of \(L\). Assuming \(n<m\) yields \(z\in M\) such that \(y_{1},\ldots,y_{n},z\) are \(F\)-linearly independent, so \(M/L\) is not a torsion module. This shows (v) \(\Rightarrow\) (i), and (i) \(\Rightarrow\) (ii) \(\Rightarrow\) (iii) \(\Rightarrow\) (iv) \(\Rightarrow\) (v) are clear.
A finitely generated \(R\)-submodule \(L\) of \(M\) is called an \(R\)**-lattice** in \(M\), or just a **lattice** in \(M\) if \(R\) is understood from the context, if it satisfies one of the equivalent conditions (i)-(v) in Lemma 2.3.28. If \(L\) is a lattice in \(M\), then every basis of \(L\) is a basis of the \(F\)-linear space \(M\), and every lattice of \(M\) is of the form \(\sigma(L)\) for some automorphism \(\sigma\) of \(M\). Next some easy consequences of Lemma 2.3.28:
**Corollary 2.3.29**.: _If \(\pi\colon M\to M^{\prime}\) is a surjective morphism of \(F\)-linear spaces and \(L\) a lattice in \(M\), then \(L^{\prime}:=\pi(L)\) is a lattice in \(M^{\prime}\)._
**Corollary 2.3.30**.: _Let \(N\) be an \(F\)-linear subspace of \(M\) and \(L\) a lattice in \(M\). Then \(L\cap N\) is a lattice in \(N\)._
Proof.: Take a basis \(x_{1},\ldots,x_{n}\) of \(N\). Lemma 2.3.28(v) gives \(r_{1},\ldots,r_{n}\in R^{\neq}\) with \(r_{1}x_{1},\ldots,r_{n}x_{n}\in L\). Then \(r_{1}x_{1},\ldots,r_{n}x_{n}\) is a basis of \(N\) contained in \(L\cap N\). Now apply Lemma 2.3.28(iii) to \(N\), \(L\cap N\) in place of \(M\), \(L\).
**Corollary 2.3.31**.: _If \(L\) is a lattice in \(M\) and \(E\) a valued field extension of \(F\) with valuation ring \(S\), then the \(E\)-linear space \(M_{E}:=E\otimes_{F}M\) has dimension \(m\), and the \(S\)-submodule \(L_{E}\) of \(M_{E}\) generated by the image of \(L\) under the \(F\)-linear embedding \(y\mapsto 1\otimes y\colon M\to M_{E}\) is an \(S\)-lattice in \(M_{E}\)._
For \(i=1,2\) let \(M_{i}\) be an \(F\)-linear space with \(m_{i}:=\dim_{F}M_{i}<\infty\) and \(L_{i}\) be a lattice in \(M_{i}\). Then \(L_{1}\oplus L_{2}\) is a lattice in \(M_{1}\oplus M_{2}\), and the \(R\)-submodule of \(M_{1}\otimes_{F}M_{2}\) generated by the elements \(y_{1}\otimes y_{2}\) (\(y_{1}\in L_{1}\), \(y_{2}\in L_{2}\)) is a lattice in \(M_{1}\otimes_{F}M_{2}\). The \(F\)-linear space \(\operatorname{Hom}(M_{1},M_{2})=\operatorname{Hom}_{F}(M_{1},M_{2})\) of \(F\)-linear maps \(M_{1}\to M_{2}\) has dimension \(m_{1}m_{2}\), and the \(R\)-module \(\operatorname{Hom}(L_{1},L_{2})\) of \(R\)-linear maps \(L_{1}\to L_{2}\) is free of rank \(m_{1}m_{2}\). Each \(R\)-linear map \(\phi\colon L_{1}\to L_{2}\) extends uniquely to an \(F\)-linear map \(\widehat{\phi}\colon M_{1}\to M_{2}\), and \(\phi\mapsto\widehat{\phi}\) is an embedding of \(\operatorname{Hom}(L_{1},L_{2})\) into \(\operatorname{Hom}(M_{1},M_{2})\) viewed as \(R\)-module. We identify \(\operatorname{Hom}(L_{1},L_{2})\) via this embedding with its image in \(\operatorname{Hom}(M_{1},M_{2})\); then \(\operatorname{Hom}(L_{1},L_{2})\) is a lattice in \(\operatorname{Hom}(M_{1},M_{2})\). In particular, if \(L\) is a lattice in \(M\), then \(L^{*}=\operatorname{Hom}(L,R)\) is a lattice in \(M^{*}=\operatorname{Hom}(M,F)\).
### Lattices in differential modules \((^{*})\)
_In the rest of this section_ \(K\) _is equipped with a valuation ring_ \(\mathcal{O}\) _making_ \(K\) _a valued differential field with small derivation. We also let_ \(M\) _be a differential module over_ \(K\)_. Thus_ \(M\) _is a_ \(K[\partial]\)_-module which is finite-dimensional as_ \(K\)_-linear space. We have the subring_ \(\mathcal{O}[\partial]\) _of_ \(K[\partial]\)_. A_ **lattice** _in_ \(M\) _is an_ \(\mathcal{O}[\partial]\)_-submodule of_ \(M\) _that is also an_ \(\mathcal{O}\)_-lattice in the_ \(K\)_-linear space_ \(M\)_._ An \(\mathcal{O}\)-lattice \(L\) in the \(K\)-linear space \(M\) is a lattice in the differential module \(M\) iff \(\partial L\subseteq L\), iff there is a generating set \(S\) of the \(\mathcal{O}\)-module \(L\) with \(\partial S\subseteq L\). If \(a\neq 0\) and \(a^{\dagger}\preccurlyeq 1\) and \(L\) is a lattice in \(M\), then \(aL\) is also a lattice in \(M\).
_Examples 2.3.32_.:
1. Suppose \(M=M_{N}\) where \(N\) is an \(n\times n\) matrix over \(\mathcal{O}\) (\(n\geqslant 1\)). The underlying \(K\)-linear space of \(M\) is \(K^{n}\), and for each \(e\in M\) we have \(\partial e=e^{\prime}-Ne\) [ADH, p. 277], so \(L:=\mathcal{O}^{n}\) is a lattice in \(M\).
2. Suppose \(M\cong K[\partial]/K[\partial]A\) where \(A\in\mathcal{O}[\partial]\) is monic. Let \(e\) be a cyclic vector of \(M\) with \(Ae=0\). Then the \(K\)-linear space \(M\) has basis \(e,\partial e,\dots,\partial^{r-1}e\) and \(L:=\mathcal{O}e+\mathcal{O}\partial e+\dots+\mathcal{O}\partial^{r-1}e\) is a lattice in \(M\).
For \(i=1,2\) let \(M_{i}\) be a differential module over \(K\) and \(L_{i}\) be a lattice in \(M_{i}\). Then \(L_{1}\oplus L_{2}\) is a lattice in the differential module \(M_{1}\oplus M_{2}\) over \(K\), and the \(\mathcal{O}\)-submodule of \(M_{1}\otimes_{K}M_{2}\) generated by the elements \(y_{1}\otimes y_{2}\) (\(y_{i}\in L_{i}\), \(i=1,2\)) is a lattice in the differential module \(M_{1}\otimes_{K}M_{2}\) over \(K\). Also, \(\operatorname{Hom}(L_{1},L_{2})\) is a lattice in the differential module \(\operatorname{Hom}_{K}(M_{1},M_{2})\) over \(K\).
Let \(L\) be a lattice in \(M\). If \(\pi\colon M\to N\) is a surjective morphism of differential modules over \(K\), then \(\pi(L)\) is a lattice in \(N\). If \(N\) is a differential submodule of \(M\), then \(L\cap N\) is a lattice in \(N\). Using the notation from Corollary 2.3.31 we have:
**Lemma 2.3.33**.: _Let \(L\) be a lattice in \(M\) and \(E\) be a valued differential field extension of \(K\) with small derivation. Then \(L_{E}\) is a lattice in the base change \(M_{E}=E\otimes_{K}M\) of the differential module \(M\) over \(K\) to a differential module over \(E\)._
If \(A\in\mathcal{O}[\partial]\) is monic (of order \(r\) by our convention), then \(\mathcal{O}[\partial]A\) is a left ideal of the ring \(\mathcal{O}[\partial]\), and the resulting left \(\mathcal{O}[\partial]\)-module \(\mathcal{O}[\partial]/\mathcal{O}[\partial]A\) is free on \(e,\partial e,\dots,\partial^{r-1}e\) for \(e:=1+\mathcal{O}[\partial]A\), as is easily verified. Conversely, if \(L\) is a left \(\mathcal{O}[\partial]\)-module free on \(e,\partial e,\dots,\partial^{r-1}e\), \(e\in L\), then the unique monic \(A\in\mathcal{O}[\partial]\) (of order \(r\)) such that \(Ae=0\) yields an isomorphism \(\mathcal{O}[\partial]/\mathcal{O}[\partial]A\to L\) sending \(1+\mathcal{O}[\partial]A\) to \(e\).
Next, let \(\boldsymbol{k}=\mathcal{O}/\mathcal{o}\) be the differential residue field of \(K\); cf. [ADH, 4.4]. Here is a version of the cyclic vector theorem [ADH, 5.5.3] for lattices:
**Proposition 2.3.34**.: _Suppose the derivation on \(\boldsymbol{k}\) is nontrivial and \(L\) is a lattice in \(M\) and \(\dim_{K}M=r\). Then \(L\cong\mathcal{O}[\partial]/\mathcal{O}[\partial]A\) for some monic \(A\in\mathcal{O}[\partial]\)._
The case \(r=0\) being trivial, we assume for the proof below that \(r\geqslant 1\). We now introduce a tuple \(Y=(Y_{0},\dots,Y_{r-1})\) of distinct differential indeterminates over \(K\), and let \(i\), \(j\), \(k\), \(l\) range over \(\{0,\dots,r-1\}\).
**Lemma 2.3.35**.: _For all \(i\), \(j\), let \(P_{ij}\in Y_{i}^{(j)}+\sum_{k<r,\ l<j}K\,Y_{k}^{(l)}\). Then the coefficient of \(Y_{0}Y_{1}^{\prime}\cdots Y_{r-1}^{(r-1)}\) in \(\det(P_{ij})\in K\{Y\}\) is \(1\)._
Proof.: For \(p=0,\dots,r-1\) we prove by induction on \(p\) that the coefficient of \(Y_{0}Y_{1}^{\prime}\cdots Y_{p}^{(p)}\) in \(\det(P_{ij})_{i,j\leqslant p}\in K\{Y\}\) is \(1\) (which for \(p=r-1\) gives the desired result). The case \(p=0\) is clear, so assume \(p\geqslant 1\). Then \(Y_{p}^{(p)}\) occurs in the matrix \((P_{ij})_{i,j\leqslant p}\) only in the \((p,p)\)-entry \(P_{pp}\in Y_{p}^{(p)}+\sum_{k<r,\ l<p}K\,Y_{k}^{(l)}\), and so the coefficient of \(Y_{0}Y_{1}^{\prime}\cdots Y_{p}^{(p)}\) in \(\det(P_{ij})_{i,j\leqslant p}\) is the coefficient of \(Y_{0}Y_{1}^{\prime}\cdots Y_{p-1}^{(p-1)}\) in \(\det(P_{ij})_{i,j\leqslant p-1}\), and the latter is \(1\) by inductive assumption.
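_Example_.: For \(r=2\) the matrix in Lemma 2.3.35 has the form
\[(P_{ij})\ =\ \begin{pmatrix}Y_{0}&Y_{0}^{\prime}+aY_{0}+bY_{1}\\ Y_{1}&Y_{1}^{\prime}+cY_{0}+dY_{1}\end{pmatrix}\qquad(a,b,c,d\in K),\]
with determinant \(Y_{0}Y_{1}^{\prime}-Y_{0}^{\prime}Y_{1}+cY_{0}^{2}+(d-a)Y_{0}Y_{1}-bY_{1}^{2}\), in which the monomial \(Y_{0}Y_{1}^{\prime}\) indeed occurs with coefficient \(1\).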
Proof of Proposition 2.3.34.: Let \(z_{0},\dots,z_{r-1}\) be a basis of the \(\mathcal{O}\)-module \(L\). Consider the base change \(\widehat{M}:=K\{Y\}\otimes_{K}M\) of \(M\) to the differential \(K\)-algebra \(K\{Y\}\);
cf. [ADH, p. 304]. So \(\widehat{M}\) is a left \(K\{Y\}[\partial]\)-module and the \(K\{Y\}\)-module \(\widehat{M}\) is free on \(1\otimes z_{0},\dots,1\otimes z_{r-1}\). Set
\[\widehat{e}\ :=\ Y_{0}\otimes z_{0}+\dots+Y_{r-1}\otimes z_{r-1}\in\widehat{M}\]
and let \(P_{ij}\in K\{Y\}\) be such that
\[\partial^{j}\widehat{e}\ =\ P_{0j}\otimes z_{0}+\dots+P_{r-1,j}\otimes z_{r-1}.\]
An easy induction on \(j\) using \(\partial L\subseteq L\) shows that \(P_{ij}\in Y_{i}^{(j)}+\sum_{k<r,\ l<j}{\mathcal{O}}\,Y_{k}^{(l)}\). Put \(P:=\det(P_{ij})\in{\mathcal{O}}\{Y\}\); then \(v(P)=0\) by the lemma above, so [ADH, 4.2.1] applied to \(\boldsymbol{k}\) and the image of \(P\) under the natural morphism \({\mathcal{O}}\{Y\}\to\boldsymbol{k}\{Y\}\) in place of \(K\) and \(P\), respectively, yields an \(a\in{\mathcal{O}}^{r}\) such that \(P(a)\asymp 1\). We obtain a \(K[\partial]\)-module morphism
\[\phi\colon\widehat{M}\to M\quad\text{ with }\quad\phi(Q\otimes z)=Q(a)z\text{ for }Q\in K\{Y\}\text{ and }z\in M.\]
Put \(R:={\mathcal{O}}\{Y\}\), a differential subring of \(K\{Y\}\), and let \(\widehat{L}\) be the \(R[\partial]\)-submodule of \(\widehat{M}\) generated by \(1\otimes z_{0},\dots,1\otimes z_{r-1}\), so
\[\widehat{L}\ =\ \{Q_{0}\otimes z_{0}+\dots+Q_{r-1}\otimes z_{r-1}:\ Q_{0}, \dots,Q_{r-1}\in{\mathcal{O}}\{Y\}\}.\]
Then \(\partial^{j}\widehat{e}\in\widehat{L}\) for all \(j\) and \(\phi(\widehat{L})=L\). With \(e:=\phi(\widehat{e})\) we have
\[\partial^{j}e=\phi(\partial^{j}\widehat{e})=P_{0j}(a)z_{0}+\dots+P_{r-1,j}(a) z_{r-1}\]
and \(\det(P_{ij}(a))=P(a)\in{\mathcal{O}}^{\times}\), so \(L={\mathcal{O}}e+{\mathcal{O}}\partial e+\dots+{\mathcal{O}}\partial^{r-1}e\). By a remark preceding Proposition 2.3.34, this concludes its proof.
_Remark_.: Taking \(K=\mathcal{O}\), Proposition 2.3.34 yields another proof of [ADH, 5.5.3], in the spirit of [51]; cf. [48]. Note also that \(e\) as constructed in the proof of Proposition 2.3.34 is a cyclic vector of \(M\), and so yields the isomorphism \(M\cong K[\partial]/K[\partial]A\) sending \(e\) to \(1+K[\partial]A\), with monic \(A\in\mathcal{O}[\partial]\) (of order \(r\) by convention) determined by the requirement \(Ae=0\).
### Eigenvalues of bounded operators \((^{*})\)
_In this subsection \(A\in{\mathcal{O}}[\partial]\) is monic, of order \(r\) by earlier convention._ Recall that \([a]:=a+K^{\dagger}\) for \(a\in K\). Put
\[[{\mathcal{O}}]:=({\mathcal{O}}+K^{\dagger})/K^{\dagger}=\big{\{}[a]:a\in{ \mathcal{O}}\big{\}}\qquad\text{(a divisible subgroup of $K/K^{\dagger}$)}.\]
Thus \(\Sigma(A)\subseteq[{\mathcal{O}}]\) by (2.3.2) and [ADH, 5.6.3]. More precisely, with \(\boldsymbol{k}\) denoting the differential residue field of \(K\), recall from [ADH, 5.6] that the residue map \(a\mapsto\operatorname{res}a\colon{\mathcal{O}}\to\boldsymbol{k}\) extends to a ring morphism \(B\mapsto\operatorname{res}B\colon{\mathcal{O}}[\partial]\to\boldsymbol{k}[ \partial]\) with \(\partial\mapsto\partial\). For each \(B\in{\mathcal{O}}[\partial]\) and \(y\in{\mathcal{O}}\) we have \(B(y)\in{\mathcal{O}}\) and \(\operatorname{res}\bigl{(}B(y)\bigr{)}=(\operatorname{res}B)(\operatorname{ res}y)\). Also, \(\operatorname{res}A\in\boldsymbol{k}[\partial]\) is monic with \(\operatorname{order}\operatorname{res}A=\operatorname{order}A=r\). By [ADH, 5.6.3], if \(B,D\in K[\partial]\) are monic and \(A=BD\), then \(B,D\in{\mathcal{O}}[\partial]\). In particular, all \(a_{1},\dots,a_{r}\in K\) such that \(A=(\partial-a_{1})\dots(\partial-a_{r})\) are in \({\mathcal{O}}\), and
\[\operatorname{res}A\ =\ (\partial-\operatorname{res}a_{1})\dots(\partial- \operatorname{res}a_{r}).\]
Moreover, using also (2.3.2), we conclude:
**Lemma 2.3.36**.: _If \(A=B(\partial-a)\), \(B\in K[\partial]\), then \(a\in{\mathcal{O}}\), \(B\in{\mathcal{O}}[\partial]\), and \(\operatorname{res}A=(\operatorname{res}B)\cdot(\partial-\operatorname{res}a)\). Hence for each \(\alpha\in\Sigma(A)\) there is an \(a\in{\mathcal{O}}\) such that \(\alpha=[a]\) and \(\operatorname{res}a+\boldsymbol{k}^{\dagger}\in\Sigma(\operatorname{res}A)\)._
Suppose now \(\partial\mathcal{O}\subseteq\mathcal{o}\). Then the derivation of \(\boldsymbol{k}\) is trivial, and we have a \(\boldsymbol{k}\)-algebra isomorphism \(P(Y)\mapsto P(\partial)\colon\boldsymbol{k}[Y]\to\boldsymbol{k}[\partial]\). We let the **characteristic polynomial** of \(B\) be the \(\chi_{B}\in\boldsymbol{k}[Y]\) satisfying \(\chi_{B}(\partial)=\operatorname{res}B\). Then \(B\mapsto\chi_{B}\colon\mathcal{O}[\partial]\to\boldsymbol{k}[Y]\) is a ring morphism extending the residue morphism \(\mathcal{O}\to\boldsymbol{k}\) with \(\partial\mapsto Y\). Identifying \(\boldsymbol{k}/\boldsymbol{k}^{\dagger}\) with \(\boldsymbol{k}\) in the natural way, the set of zeros of \(\chi_{A}\) in \(\boldsymbol{k}\) is \(\Sigma(\operatorname{res}A)\). Thus by Lemma 2.3.36:
**Corollary 2.3.37**.: _If \(A\in K[\partial](\partial-a)\), then \(a\in\mathcal{O}\) and \(\chi_{A}(\operatorname{res}a)=0\). Hence for each \(\alpha\in\Sigma(A)\) there is an \(a\in\mathcal{O}\) such that \(\alpha=[a]\) and \(\chi_{A}(\operatorname{res}a)=0\)._
If \(\mathcal{O}=C+\mathcal{o}\), then \(\partial\mathcal{O}\subseteq\mathcal{o}\) and the residue morphism \(\mathcal{O}\to\boldsymbol{k}\) restricts to an isomorphism \(C\to\boldsymbol{k}\), via which we identify \(\boldsymbol{k}\) with \(C\), making \(\chi_{A}\) an element of \(C[Y]\).
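_Example_.: A classical instance: \(K=\mathbb{C}((t))\) with \(\partial=t\frac{d}{dt}\) and valuation ring \(\mathcal{O}=\mathbb{C}[[t]]\), so \(C=\mathbb{C}\), \(\partial\mathcal{O}\subseteq t\,\mathbb{C}[[t]]=\mathcal{o}\), and \(\mathcal{O}=C+\mathcal{o}\). For monic \(A=\partial^{2}+a_{1}\partial+a_{0}\) with \(a_{0},a_{1}\in\mathcal{O}\) we get \(\chi_{A}(Y)=Y^{2}+\operatorname{res}(a_{1})Y+\operatorname{res}(a_{0})\in\mathbb{C}[Y]\), the classical indicial polynomial of \(A\) at the regular singular point \(t=0\); for instance, \(A=\partial^{2}-(3+t)\partial+2\) gives \(\chi_{A}=(Y-1)(Y-2)\).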
_In the rest of this subsection \(K=H[\mathrm{i}]\) where \(H\) is a real closed differential subfield of \(K\) such that the valuation ring \(\mathcal{O}_{H}:=\mathcal{O}\cap H\) of \(H\) is convex with respect to the ordering of \(H\) and \(\mathcal{O}_{H}=C_{H}+\mathcal{o}_{H}\). Then \(C=C_{H}[\mathrm{i}]\). A remark after Corollary 1.2.5 then yields \(\mathcal{O}=\mathcal{O}_{H}+\mathcal{O}_{H}\mathrm{i}=C+\mathcal{o}\). Using that remark and Lemma 1.2.4 we have \(K^{\dagger}\subseteq H^{\dagger}+\mathcal{o}_{H}\mathrm{i}\subseteq H^{\dagger}+\mathcal{O}\), and thus:_
**Lemma 2.3.38**.: \(H^{\dagger}+\mathcal{O}\ =\ H^{\dagger}+C_{H}+C_{H}\mathrm{i}+\mathcal{o}\ =\ \big{\{}a\in K:[a]\in[\mathcal{O}]\big{\}}\)_._
**Lemma 2.3.39**.: _Suppose that \(C_{H}\subseteq H^{\dagger}\), and let \(\alpha\in[\mathcal{O}]\). Then there is a unique \(b\in C_{H}\) such that \(\alpha=[b\mathrm{i}+\varepsilon]\) for some \(\varepsilon\in\mathcal{o}\). For this \(b\) we have_
\[\operatorname{mult}_{\alpha}(A)\ \leqslant\ \sum_{c\in C,\ \operatorname{Im}c=b} \operatorname{mult}_{c}(\chi_{A}). \tag{2.3.6}\]
Proof.: Lemma 2.3.38 and \(C_{H}\subseteq H^{\dagger}\) yield the existence of \(b\in C_{H}\) such that \(\alpha=[b\mathrm{i}+\varepsilon]\) for some \(\varepsilon\in\mathcal{o}\). Since \(K^{\dagger}\subseteq H+\mathcal{o}_{H}\mathrm{i}\), there is at most one such \(b\in C_{H}\). We prove (2.3.6) by induction on \(r\). The cases \(r=0\) and \(\operatorname{mult}_{\alpha}(A)=0\) being trivial, suppose \(r\geqslant 1\) and \(\alpha\in\Sigma(A)\). From (2.3.2) and [ADH, 5.6.3] we get \(a\in\mathcal{O}\) and monic \(B\in\mathcal{O}[\partial]\) with \([a]=\alpha\) and \(A=B(\partial-a)\). Then \(\operatorname{mult}_{\alpha}(A)\leqslant\operatorname{mult}_{\alpha}(B)+1\) by Lemma 2.3.2, and with \(c\in C\) such that \(a-c\prec 1\) we have \(b=\operatorname{Im}c\) and \(\chi_{A}(c)=0\). Now apply the inductive hypothesis to \(B\) in place of \(A\).
_Remark_.: The inequality (2.3.6) is strict in general: \(H\) can be an \(H\)-field with an element \(x\in H\) such that \(x^{\prime}=1\). Then \(x\succ 1\) and \(1/x\notin\mathrm{I}(H)\), so \(\varepsilon:=\mathrm{i}/x\in\mathcal{o}\setminus K^{\dagger}\). Then for \(A:=\big(\partial-(\mathrm{i}+\varepsilon)\big)(\partial-\mathrm{i})\) we have \(\operatorname{mult}_{[\mathrm{i}]}(A)=1\) while \(\chi_{A}=(Y-\mathrm{i})^{2}\).
**Corollary 2.3.40**.: _Suppose \(K\) has asymptotic integration and is \((r-1)\)-newtonian, \(r\geqslant 1\). Let \(c_{1},\ldots,c_{r}\in C\) be the zeros of \(\chi_{A}\), and suppose \(c_{1},\ldots,c_{r}\) are distinct and \(\operatorname{Re}c_{1}\geqslant\cdots\geqslant\operatorname{Re}c_{r}\). Then for each splitting \((a_{1},\ldots,a_{r})\) of \(A\) over \(K\) we have \(a_{1},\ldots,a_{r}\in\mathcal{O}\). Moreover, there is a unique such splitting of \(A\) over \(K\) such that \(a_{1}-c_{1},\ldots,a_{r}-c_{r}\prec 1\)._
Proof.: The first claim is immediate from [ADH, 5.6.3]. We prove the second claim by induction on \(r\). The case \(r=1\) being trivial, suppose \(r>1\). Corollary 1.8.47 yields an \(a_{r}\in\mathcal{O}\) with \(\operatorname{Ri}(A)(a_{r})=0\) and \(a_{r}-c_{r}\prec 1\), and then \(A=B(\partial-a_{r})\) where \(B\in\mathcal{O}[\partial]\) is monic, by [ADH, 5.6.3, 5.8.7]. By inductive hypothesis \(B=(\partial-a_{1})\cdots(\partial-a_{r-1})\) where \(a_{j}\in\mathcal{O}\) with \(a_{j}-c_{j}\prec 1\) for \(j=1,\ldots,r-1\). This shows existence. Uniqueness follows in a similar way, using Corollary 1.8.50.
**Bounded differential modules \((^{*})\).** _In this subsection \(A\) is monic and \(M\) is a differential module over \(K\)._ We call \(M\) **bounded** if there exists a lattice in \(M\). By remarks in an earlier subsection the class of bounded differential modules over \(K\) is quite robust: if \(M_{1}\), \(M_{2}\) are bounded differential modules over \(K\), then so are the differential modules \(M_{1}\oplus M_{2}\), \(M_{1}\otimes_{K}M_{2}\), and \(\operatorname{Hom}_{K}(M_{1},M_{2})\) over \(K\), and if \(M\) is bounded, then so is every differential submodule of \(M\), every image of \(M\) under a morphism of differential modules over \(K\), and every base change of \(M\) to a valued differential field extension of \(K\) with small derivation.
_Example 2.3.41_.: Let \(u\in K\) and suppose \(M=K\) with \(\partial a=a^{\prime}+ua\) for all \(a\). Then for \(e\in K^{\times}\), \(\mathcal{O}e\) is a lattice in \(M\) iff \(e^{\prime}+ue=\partial e\in\mathcal{O}e\). Hence \(M\) is bounded iff \(u\in\mathcal{O}+K^{\dagger}\).
If \(A_{\ltimes a}\in\mathcal{O}[\partial]\) for some \(a\neq 0\), then \(K[\partial]/K[\partial]A\) is bounded, by Example 2.3.32(2).
**Lemma 2.3.42**.: _Suppose \(M=K[\partial]/K[\partial]A\) and \(r=1\). Then_
\[M\text{ is bounded }\iff A_{\ltimes a}\in\mathcal{O}[\partial]\text{ for some }a\neq 0.\]
Proof.: Let \(A=\partial-u\), \(u\in K\). Identifying \(K\) with \(M\) via \(a\mapsto a+K[\partial]A\) we have \(\partial a=a^{\prime}+ua\) for all \(a\) in \(K=M\), so if \(M\) is bounded, then Example 2.3.41 gives \(a\neq 0\) with \(u\in\mathcal{O}+a^{\dagger}\), hence \(A_{\ltimes a}\in\mathcal{O}[\partial]\).
**Lemma 2.3.43**.: _Suppose the valuation ring \(\mathcal{O}\) is discrete \((\)that is, a DVR\()\) and \(M=K[\partial]/K[\partial]A\) is bounded. Then \(A_{\ltimes a^{-1}}\in\mathcal{O}[\partial]\) for some \(a\in\mathcal{O}^{\neq}\)._
Proof.: Let \(L\) be a lattice in \(M\) and \(e:=1+K[\partial]A\), a cyclic vector of \(M\). Since \(M/L\) is a torsion module we get \(a\in\mathcal{O}^{\neq}\) with \(f:=ae\in L\). Because \(\mathcal{O}\) is noetherian, the submodule of the finitely generated \(\mathcal{O}\)-module \(L\) generated by \(f,\partial f,\partial^{2}f,\dots\) is itself finitely generated, and this yields \(n\) with \(\partial^{n}f\in\mathcal{O}f+\mathcal{O}\partial f+\dots+\mathcal{O}\partial^{n-1}f\) [122, Chapter X, §1]. We obtain a monic \(B\in\mathcal{O}[\partial]\) of order \(n\) with \(Bf=0\). Then \(B_{\ltimes a}e=0\), so \(B\in K[\partial]A_{\ltimes a^{-1}}\), and thus \(A_{\ltimes a^{-1}}\in\mathcal{O}[\partial]\) by [ADH, 5.6.3].
If \(K\) is monotone, then \(v(B_{\ltimes a})=v(B)\) for all \(B\) and \(a\neq 0\), by [ADH, 4.5.4]. If \(\mathcal{O}\) is discrete, then \(K\) is monotone by [ADH, 6.1.2]. Hence by Lemma 2.3.43:
**Corollary 2.3.44**.: _Suppose \(\mathcal{O}\) is discrete. Then:_
_the differential module \(K[\partial]/K[\partial]A\) over \(K\) is bounded \(\iff A\in\mathcal{O}[\partial]\)._
_Remark_.: In the case \((K,\mathcal{O})=\big{(}\mathbb{C}((t)),\mathbb{C}[[t]]\big{)}\) and \(\partial=t\frac{d}{dt}\), bounded differential modules over \(K\) are called _regular singular_ in [158], and Corollary 2.3.44 in this case is implicit in the proof of [158, Proposition 3.16].
**Lemma 2.3.45**.: _Suppose \(M\cong K[\partial]/K[\partial]A\) where \(A\in\mathcal{O}[\partial]\). Then \(\Sigma(M)\subseteq[\mathcal{O}]\)._
Proof.: The case \(r=0\) is trivial, so assume \(r\geqslant 1\). Corollary 2.3.15 and remarks in the last subsection yield \(\Sigma(M^{*})=\Sigma(A)\subseteq[\mathcal{O}]\). By [ADH, 5.5.8] we have \(M^{*}\cong K[\partial]/K[\partial]B\) where \(B:=(-1)^{r}A^{*}\in\mathcal{O}[\partial]\) is monic. Also \(M\cong M^{**}\), hence \(\Sigma(M)=\Sigma(M^{**})\subseteq[\mathcal{O}]\) by the above applied to \(M^{*}\), \(B\) in place of \(M\), \(A\).
**Corollary 2.3.46**.: _Suppose \(M\) is bounded. Assume also that the derivation of \(\boldsymbol{k}\) is nontrivial, or the derivation of \(K\) is nontrivial and \(\mathcal{O}\) is discrete. Then \(\Sigma(M)\subseteq[\mathcal{O}]\)._
Proof.: The remark following the proof of Proposition 2.3.34 and Lemma 2.3.43 yield \(A\in\mathcal{O}[\partial]\) with \(M\cong K[\partial]/K[\partial]A\), and so \(\Sigma(M)\subseteq[\mathcal{O}]\) by Lemma 2.3.45.
**Corollary 2.3.47**.: _Suppose \(M\) splits and is bounded. Then \(\Sigma(M)\subseteq[\mathcal{O}]\)._
Proof.: We proceed by induction on \(r=\dim_{K}M=\ell(M)\). If \(r=0\), then \(\Sigma(M)=\emptyset\). Next suppose \(r=1\), and take \(e\in M\) with \(M=Ke\) and \(a\in K\) with \(\partial e=ae\). Then \(\Sigma(M)=\big{\{}[a]\big{\}}\) by Corollary 2.3.19, and \(M\cong K[\partial]/K[\partial](\partial-a)\), so \([a]\in[\mathcal{O}]\) by Lemma 2.3.42. Now suppose \(r>1\). Take a differential submodule \(M_{1}\) of \(M\) with \(\ell(M_{1})=r-1\), so \(\ell(M_{2})=1\) for \(M_{2}:=M/M_{1}\). Then \(M_{1}\), \(M_{2}\) split and are bounded, by Lemma 2.3.16 and the remarks before Example 2.3.41. Hence by inductive hypothesis \(\Sigma(M_{i})\subseteq[\mathcal{O}]\) for \(i=1,2\), so \(\Sigma(M)\subseteq[\mathcal{O}]\) by the remark after (2.3.5).
**Corollary 2.3.48**.: _Let \(H\) be a Liouville closed, trigonometrically closed \(H\)-field with small derivation, \(K=H[\mathrm{i}]\), and suppose \(M\) is bounded. Then \(\Sigma(M)\subseteq[\mathcal{O}]\)._
Proof.: We have \(K^{\dagger}=H\oplus\mathrm{I}(H)\mathrm{i}\) and so \(\mathcal{O}+K^{\dagger}=H\oplus\mathcal{O}_{H}\mathrm{i}\). Take an \(H\)-closed field extension \(H_{1}\) of \(H\) and set \(K_{1}:=H_{1}[\mathrm{i}]\). The base change \(M_{1}:=K_{1}\otimes_{K}M\) of \(M\) to \(K_{1}\) splits and is bounded, so \(\Sigma(M_{1})\subseteq[\mathcal{O}_{1}]\) by Corollary 2.3.47. We have \(K_{1}^{\dagger}=H_{1}\oplus\mathrm{I}(H_{1})\mathrm{i}\) by Corollary 1.2.21, so \(\mathcal{O}_{1}+K_{1}^{\dagger}=H_{1}\oplus\mathcal{O}_{H_{1}}\mathrm{i}\). This yields \(K_{1}^{\dagger}\cap K=K^{\dagger}\) and \((\mathcal{O}_{1}+K_{1}^{\dagger})\cap K=\mathcal{O}+K^{\dagger}\), so identifying \(K/K^{\dagger}\) with its image under the group embedding \(a+K^{\dagger}\mapsto a+K_{1}^{\dagger}\colon K/K^{\dagger}\to K_{1}/K_{1}^{\dagger}\) (\(a\in K\)) we have \(\Sigma(M)\subseteq\Sigma(M_{1})\) and \([\mathcal{O}]=[\mathcal{O}_{1}]\cap(K/K^{\dagger})\). Thus \(\Sigma(M)\subseteq[\mathcal{O}]\).
_Question_.: Does it follow from \(M\) being bounded and \(C\neq K\) that \(\Sigma(M)\subseteq[\mathcal{O}]\)?
### Self-Adjointness and its Variants (\({}^{*}\))
_In this section \(K\) is a differential field. We let \(A\), \(B\) range over \(K[\partial]\) with \(A\neq 0\), and set \(r:=\mathrm{order}\,A\). We also let \(\alpha\) range over \(K/K^{\dagger}\). The material in this section elaborates on Corollaries 2.3.5 and 2.3.20 and shows how symmetries of \(A\) force it to have eigenvalue \(0\in K/K^{\dagger}\), mainly by making some classical results (cf. [54, Chapitre V] and [181, §§23-25]) precise and putting them into our present context. It can be skipped on first reading, since it is only needed in Section 7.4 for applications of our main theorem to linear differential equations over complexified Hardy fields._
**Operators of the same type.** Suppose \(B\neq 0\) and set \(s:=\mathrm{order}\,B\). Consider now the \(C\)-linear subspace
\[\mathcal{E}(A,B)\ :=\ \big{\{}R\in K[\partial]:\mathrm{order}\,R<r\ \mathrm{and}\ BR\in K[ \partial]A\big{\}}\]
of \(K[\partial]\). The next lemma and its corollary elaborate on Lemma 2.3.6.
**Lemma 2.4.1**.: _Let \(M:=K[\partial]/K[\partial]A\) and \(N:=K[\partial]/K[\partial]B\). Then we have an isomorphism_
\[R\ \mapsto\ \phi_{R}\ :\ \mathcal{E}(A,B)\to\mathrm{Hom}_{K[\partial]}(N,M)\]
_of \(C\)-linear spaces where_
\[\phi_{R}(1+K[\partial]B)\ =\ R+K[\partial]A\qquad\ \ \text{for}\ R\in \mathcal{E}(A,B).\]
Proof.: Let \(R\in\mathcal{E}(A,B)\). Then the kernel of the \(K[\partial]\)-linear map \(K[\partial]\to K[\partial]/K[\partial]A\) sending \(1\) to \(R+K[\partial]A\) contains \(K[\partial]B\), hence induces a \(K[\partial]\)-linear map
\[\phi_{R}\colon N=K[\partial]/K[\partial]B\to K[\partial]/K[\partial]A=M\]
as indicated. It is easy to check that \(R\mapsto\phi_{R}\) is \(C\)-linear. If \(\phi_{R}=0\), then \(\phi_{R}(1+K[\partial]B)=K[\partial]A\) and hence \(R\in K[\partial]A\), so \(R=0\), since \(\mathrm{order}\,R<r\)
Given \(\phi\in\operatorname{Hom}_{K[\partial]}(N,M)\) we have \(\phi(1+K[\partial]B)=R+K[\partial]A\) where \(R\in K[\partial]\) has order \(<r\); then \(BR\in K[\partial]A\), so \(R\in\mathcal{E}(A,B)\), and we have \(\phi=\phi_{R}\).
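_Example_.: As a minimal illustration, take \(A=\partial-u\) and \(B=\partial-v\) with \(u,v\in K\). For \(R\in K\) we have \(BR=R\partial+R^{\prime}-vR\equiv R^{\prime}+(u-v)R\bmod K[\partial]A\), so
\[\mathcal{E}(A,B)\ =\ \big\{R\in K:\,R^{\prime}=(v-u)R\big\},\]
a \(C\)-linear space of dimension \(\leqslant 1\), nonzero iff \(v-u\in K^{\dagger}\), that is, iff \([u]=[v]\). So by Lemma 2.4.1, \(\operatorname{Hom}_{K[\partial]}(N,M)\neq\{0\}\) iff \([u]=[v]\), for \(M:=K[\partial]/K[\partial]A\) and \(N:=K[\partial]/K[\partial]B\).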
In particular, \(0\leqslant\dim_{C}\mathcal{E}(A,B)\leqslant rs\) by Lemmas 2.3.13 and 2.4.1. Moreover:
**Corollary 2.4.2**.: _Suppose \(r=s\). Then the isomorphism \(R\mapsto\phi_{R}\) from the previous lemma maps the subset_
\[\mathcal{E}(A,B)^{\times}\ :=\ \big{\{}R\in\mathcal{E}(A,B):1\in K[\partial]R+K[ \partial]A\big{\}}\]
_of \(\mathcal{E}(A,B)\) bijectively onto the set of \(K[\partial]\)-linear isomorphisms \(N\to M\)._
Set \(M:=K[\partial]/K[\partial]A\). We make the \(C\)-module \(\operatorname{End}_{K[\partial]}(M):=\operatorname{Hom}_{K[\partial]}(M,M)\) into a \(C\)-algebra with its ring multiplication given by composition. We equip the \(C\)-module \(\mathcal{E}(A):=\mathcal{E}(A,A)\) with the ring multiplication making the map
\[R\mapsto\phi_{R}\,:\ \mathcal{E}(A)\to\operatorname{End}_{K[\partial]}(M)\]
an isomorphism of \(C\)-algebras. The \(C\)-algebra \(\mathcal{E}(A)\) is called the **eigenring** of \(A\); cf. [189] or [158, §2.2]. Note that if \(r\geqslant 1\), then \(C\subseteq\mathcal{E}(A)\). If the \(K[\partial]\)-module \(M\) is irreducible, then \(\operatorname{End}_{K[\partial]}(M)\) is a division ring, by Schur's Lemma [122, Chapter XVII, Proposition 1.1]. Now \(M\) is irreducible iff \(A\) is irreducible [ADH, p. 251], hence:
**Corollary 2.4.3**.: _Suppose \(A\) is irreducible. Then \(\mathcal{E}(A)\) is a division algebra over \(C\). If \(C\) is algebraically closed, then \(\mathcal{E}(A)=C\)._
Proof.: As to the second claim, let \(e\in\mathcal{E}(A)\). The elements of \(C\) commute with \(e\), so we have a commutative domain \(C[e]\subseteq\mathcal{E}(A)\), hence \(e\) is algebraic over \(C\) in view of \(\dim_{C}\mathcal{E}(A)\leqslant r^{2}\), and thus \(e\in C\) if \(C\) is algebraically closed.
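_Example_.: For \(A=\partial\) (so \(r=1\)) and \(R\in K\) we have \(\partial R=R\partial+R^{\prime}\), hence \(\partial R\in K[\partial]\partial\) iff \(R^{\prime}=0\), so \(\mathcal{E}(\partial)=C\), in accordance with Corollary 2.4.3. Likewise \(\mathcal{E}(\partial-u)=C\) for all \(u\in K\), by the example after Lemma 2.4.1 with \(v=u\).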
We may have \(\mathcal{E}(A)=C\) without \(A\) being irreducible [158, Exercise 2.14]. If \(A\), \(B\) have the same type, then the \(C\)-algebras \(\mathcal{E}(A)\), \(\mathcal{E}(B)\) are isomorphic. By Lemma 2.4.1 and Corollary 2.4.2 we have:
**Corollary 2.4.4**.: _Suppose \(\mathcal{E}(A)=C\) and \(A\), \(B\) have the same type. Then for some \(e\in\mathcal{E}(A,B)^{\neq}\) we have \(\mathcal{E}(A,B)=Ce\), and \(\mathcal{E}(A,B)^{\times}=C^{\times}e\)._
### Self-duality
Let \(M\) be a differential module over \(K\). We say that \(M\) is **self-dual** if \(M\cong M^{*}\). If \(M\) is self-dual, then so is of course every isomorphic \(K[\partial]\)-module, in particular \(M^{*}\). Given also a differential module \(N\) over \(K\), we say that a \(K\)-bilinear map \([\,\ ]\colon M\times N\to K\) is \(\partial\)**-compatible** if
\[\partial[f,g]\ =\ [\partial f,g]+[f,\partial g]\qquad\text{for all $f\in M$, $g\in N$.}\]
The non-degenerate \(K\)-bilinear map
\[(\phi,f)\mapsto\langle\phi,f\rangle:=\phi(f)\,:\ M^{*}\times M\to K\]
is \(\partial\)-compatible by [ADH, (5.5.1)]. One verifies easily:
**Lemma 2.4.5**.: \(M\) _is self-dual iff there is a non-degenerate \(\partial\)-compatible \(K\)-bilinear form on \(M\). In more detail, any isomorphism \(\iota\colon M\to M^{*}\) yields a non-degenerate \(\partial\)-compatible \(K\)-bilinear form \((f,g)\mapsto\langle\iota(f),g\rangle\colon M\times M\to K\), and every non-degenerate \(\partial\)-compatible \(K\)-bilinear form on \(M\) arises in this way from a unique isomorphism \(\iota\colon M\to M^{*}\)\((\)of differential modules over \(K\)\()\)._
In terms of matrices, let \(e_{1},\ldots,e_{n}\) be a basis of \(M\) and let \(e_{1}^{*},\ldots,e_{n}^{*}\) be the dual basis of \(M^{*}\). Let \(\iota\colon M\to M^{*}\) be an isomorphism with matrix \(P\) with respect to these bases. Then for the corresponding \(K\)-bilinear form \([\,\ ]\) on \(M\) from the above lemma we have \([e_{i},e_{j}]=P_{ji}=(P^{\mathrm{t}})_{ij}\).
Next a consequence of Corollaries 2.3.20 and 2.3.23. It provides useful information about the spectrum of \(M\), which explains our interest in self-duality.
**Corollary 2.4.6**.: _Let \(\dim_{K}M=r\), and suppose \(\sum_{\alpha}\mathrm{mult}_{\alpha}(M)=r\) and \(K\) is \(1\)-linearly surjective, or \(r\geqslant 1\) and \(K\) is \((r-1)\)-linearly surjective. Assume also that \(M\) is self-dual. Then \(\mathrm{mult}_{\alpha}(M)=\mathrm{mult}_{-\alpha}(M)\) for all \(\alpha\). Hence if additionally \(K^{\dagger}\) is \(2\)-divisible and \(\sum_{\alpha}\mathrm{mult}_{\alpha}(M)\) is odd, then \(0\in\Sigma(M)\)._
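_Example_.: Suppose \(M=Ke\) is one-dimensional with \(\partial e=ue\), \(u\in K\); then \(\partial e^{*}=-ue^{*}\) for the dual basis vector \(e^{*}\) of \(M^{*}\), so \(\Sigma(M)=\big\{[u]\big\}\) and \(\Sigma(M^{*})=\big\{-[u]\big\}\). A \(K[\partial]\)-linear isomorphism \(M\to M^{*}\) sends \(e\) to \(fe^{*}\) with \(f\in K^{\times}\) and \(ufe^{*}=\partial(fe^{*})=(f^{\dagger}-u)fe^{*}\), that is, \(f^{\dagger}=2u\). Hence \(M\) is self-dual iff \(2u\in K^{\dagger}\), that is, iff \([u]=-[u]\).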
Suppose now that \(M=K[\partial]/K[\partial]A\) and \(r\geqslant 1\). Then \(M^{*}\cong K[\partial]/K[\partial]A^{*}\) by [ADH, 5.5.8], hence \(M\) is self-dual iff \(A\), \(A^{*}\) have the same type. By Lemma 2.3.6 this is the case iff there are \(R,S\in K[\partial]\) of order \(<r\) with \(1\in K[\partial]R+K[\partial]A\) and \(A^{*}R=SA\). When \(\mathcal{E}(A)=C\), we can replace this with a more symmetric condition:
**Lemma 2.4.7**.: _Suppose \(\mathcal{E}(A)=C\). Then \(A\), \(A^{*}\) have the same type iff for some \(R\in K[\partial]\) of order \(n<r\) we have \(A^{*}R=(-1)^{n+r}R^{*}A\) and \(1\in K[\partial]R+K[\partial]A\)._
Proof.: Suppose \(A\), \(A^{*}\) have the same type. By Corollary 2.4.4 we obtain \(R,S\in K[\partial]\) of order \(<r\) such that \(A^{*}R=SA\), \(1\in K[\partial]R+K[\partial]A\), and \(\mathcal{E}(A,A^{*})^{\times}=C^{\times}R\). Now taking adjoints yields \(A^{*}S^{*}=R^{*}A\), so \(0\neq S^{*}\in\mathcal{E}(A,A^{*})\), hence \(S^{*}\in\mathcal{E}(A,A^{*})^{\times}\) and thus \(S^{*}=cR\) (\(c\in C^{\times}\)). Comparing coefficients of the highest order terms on both sides of \(cA^{*}R=R^{*}A\) gives \(c=(-1)^{n+r}\).
We say that \(A\) is **self-dual** if \(A\), \(A^{*}\) have the same type. Thus if \(A\) is self-dual, then so is \(A^{*}\), and so is every operator of the same type as \(A\). Moreover, by Lemma 2.3.7, if \(A\) is self-dual, then \(A\), \(A^{*}\) have the same eigenvalues, with the same multiplicities. Combining Corollary 2.4.3 with the previous lemma yields:
**Corollary 2.4.8**.: _Suppose \(A\) is irreducible and \(C\) is algebraically closed. Then \(A\) is self-dual iff for some \(R\in K[\partial]\) of order \(n<r\) we have \(A^{*}R=(-1)^{n+r}R^{*}A\) and \(1\in K[\partial]R+K[\partial]A\)._
Here is the operator version of Corollary 2.4.6:
**Corollary 2.4.9**.: _Suppose \(A\) is self-dual, and set \(s:=\sum_{\alpha}\mathrm{mult}_{\alpha}(A)\). Also assume \(K\) is \(1\)-linearly surjective and \(s=r\), or \(r\geqslant 1\) and \(K\) is \((r-1)\)-linearly surjective. Then \(\mathrm{mult}_{\alpha}(A)=\mathrm{mult}_{-\alpha}(A)\) for each \(\alpha\). Hence if in addition \(K^{\dagger}\) is \(2\)-divisible and \(s\) is odd, then \(0\in\Sigma(A)\)._
Let \(\phi\in K^{\times}\), \(B\neq 0\). If \(A\), \(B\) have the same type, then so do \(A^{\phi},B^{\phi}\in K^{\phi}[\delta]\), by Lemma 2.3.6. Hence by the next lemma, if \(A\) is self-dual, then so is \(A^{\phi}\).
**Lemma 2.4.10**.: \((A^{\phi})^{*}=(A^{*})^{\phi}_{\ltimes\phi}\)_._
Proof.: We have
\[(\partial^{\phi})^{*}\ =\ (\phi\delta)^{*}\ =\ -\delta\phi\ =\ (-\phi\delta)_{\ltimes\phi}\ =\ (-\partial)^{\phi}_{\ltimes\phi}\ =\ (\partial^{*})^{\phi}_{\ltimes\phi},\]
so the lemma holds for \(A=\partial\). It remains to note that \(B\mapsto(B^{\phi})^{*}\) and \(B\mapsto(B^{*})^{\phi}_{\ltimes\phi}\) are ring morphisms \(K[\partial]\to K^{\phi}[\delta]^{\mathrm{opp}}\) that are the identity on \(K\), where \(K^{\phi}[\delta]^{\mathrm{opp}}\) is the opposite ring of \(K^{\phi}[\delta]\); cf. [ADH, proof of 5.1.8].
If \(A^{*}=(-1)^{r}A_{\ltimes a}\) (\(a\in K^{\times}\)), then \(A\) is self-dual, so there is a non-degenerate \(\partial\)-compatible \(K\)-bilinear form on \(K[\partial]/K[\partial]A\). The next proposition gives more information. We say that a \(K\)-bilinear form \([\,\ ]\) on a \(K\)-linear space \(M\) is \((-1)^{n}\)**-symmetric** if \([f,g]=(-1)^{n}[g,f]\) for all \(f,g\in M\).
**Proposition 2.4.11** (Bogner [25]).: _Suppose \(r\geqslant 1\), and let \(M=K[\partial]/K[\partial]A\) and \(e=1+K[\partial]A\in M\). Then the following are equivalent:_
1. \(A^{*}=(-1)^{r}A_{\ltimes a}\) _for some_ \(a\in K^{\times}\)_;_
2. _there is a non-degenerate_ \(\partial\)_-compatible_ \(K\)_-bilinear form_ \([\,\ ]\) _on_ \(M\) _such that_ \[[e,\partial^{j}e]=0\quad\text{for $j=0,\dots,r-2$.}\]
_Any form \([\,\ ]\) on \(M\) as in_ (ii) _is \((-1)^{r-1}\)-symmetric._
Proof.: We first arrange that \(A\) is monic. Let \(e_{0}^{*},\dots,e_{r-1}^{*}\) be the basis of \(M^{*}\) dual to the basis \(e,\partial e,\dots,\partial^{r-1}e\) of \(M\), so \(e^{*}:=e_{r-1}^{*}\) is a cyclic vector of \(M^{*}\) with \(A^{*}e^{*}=0\), by [ADH, 5.5.7]. Below we let \(i\), \(j\) range over \(\{0,\dots,r-1\}\).
Suppose \(A^{*}=(-1)^{r}A_{\ltimes a}\), \(a\in K^{\times}\). Then \(A^{*}e^{*}=0\) gives \(Aae^{*}=0\), so we have a \(K[\partial]\)-linear isomorphism \(\varphi\colon M\to M^{*}\) with \(\varphi(e)=ae^{*}\). Let \([\,\ ]\) be the non-degenerate \(\partial\)-compatible \(K\)-bilinear form on \(M\) given by \([f,g]=\langle\varphi(f),g\rangle\). Then
\[[e,\partial^{j}e]=\langle\varphi(e),\partial^{j}e\rangle=a\langle e^{*},\partial^{j}e\rangle=a\,\delta_{j,r-1}\quad\text{ for all $j$,}\]
proving (ii). Suppose conversely that \([\,\ ]\) is as in (ii). Then \(a:=[e,\partial^{r-1}e]\neq 0\) since \([\,\ ]\) is non-degenerate. Let \(\varphi\colon M\to M^{*}\) be the isomorphism with \(\varphi(f)=[f,-]\) for all \(f\in M\). Then \([e,\partial^{j}e]=\langle\varphi(e),\partial^{j}e\rangle\) for all \(j\) and thus \(\varphi(e)=a\,e^{*}\). Hence
\[Aa\,e^{*}\ =\ A\varphi(e)\ =\ \varphi(Ae)\ =\ \varphi(0)\ =\ 0,\]
and this yields \(A^{*}=(-1)^{r}A_{\ltimes a}\).
Let now \([\,\ ]\) be as in (ii) and set \(a:=[e,\partial^{r-1}e]\). Induction on \(i\) using \(\partial\)-compatibility of \([\,\ ]\) shows \([\partial^{i}e,\partial^{j}e]=0\) for \(i\leqslant r-2\), \(j\leqslant r-2-i\). In particular, \([\partial^{i}e,e]=0=[e,\partial^{i}e]\) for \(i\leqslant r-2\). Induction on \(i\) using the second display in the proof of [ADH, 5.5.7] gives
\[(\partial^{*})^{i}e^{*} \in e_{r-1-i}^{*}+\sum_{r-i\leqslant j\leqslant r-1}Ke_{j}^{*},\ \text{and hence}\] \[(-1)^{r-1}\partial^{r-1}e^{*} \in e_{0}^{*}+Ke_{1}^{*}+\dots+Ke_{r-1}^{*}.\]
It follows that \([\partial^{r-1}e,e]=\langle\partial^{r-1}(ae^{*}),e\rangle=(-1)^{r-1}a\). This covers the base case \(i=0\) of an induction on \(i\) showing \([\partial^{i}e,g]=(-1)^{r-1}[g,\partial^{i}e]\) for all \(g\in M\). Suppose this identity holds for a certain \(i\leqslant r-2\). Then by \(\partial\)-compatibility
\[[\partial^{i+1}e,g]=\partial[\partial^{i}e,g]-[\partial^{i}e,\partial g]=(-1)^{r-1}\big{(}\partial[g,\partial^{i}e]-[\partial g,\partial^{i}e]\big{)}=(-1)^{r-1}[g,\partial^{i+1}e]\]
as required.
See [25] for the geometric significance of operators as in this proposition when \(K\) is the differential field of germs of meromorphic functions at \(0\).
Let \(r\geqslant 1\), and let \(A\), \(a\) be as in (i) of Proposition 2.4.11, with
\[A\ =\ \partial^{r}+a_{r-1}\partial^{r-1}+\dots+a_{0}\quad(a_{0},\dots,a_{r-1}\in K).\]
Then \(a^{\dagger}=-(2/r)a_{r-1}\). Set \(b:=a^{\dagger}\). Then the operator \(B:=A_{b/2}\) of order \(r\) satisfies \(B^{*}=(-1)^{r}B\), so the cases \(B^{*}=B\) and \(B^{*}=-B\) deserve particular attention:
**Definition 2.4.12**.: A linear differential operator \(B\) is said to be (formally) **self-adjoint** if \(B^{*}=B\) and **skew-adjoint** if \(B^{*}=-B\).
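_Example_.: For \(q\in K\) the operator \(\partial^{2}+q\) is self-adjoint, since \((\partial^{2}+q)^{*}=(-\partial)^{2}+q=\partial^{2}+q\), while \(\partial\) is skew-adjoint, and so is \(\partial^{3}+q\partial+\frac{1}{2}q^{\prime}\):
\[\big(\partial^{3}+q\partial+\tfrac{1}{2}q^{\prime}\big)^{*}\ =\ -\partial^{3}-\partial q+\tfrac{1}{2}q^{\prime}\ =\ -\big(\partial^{3}+q\partial+\tfrac{1}{2}q^{\prime}\big),\]
using \(\partial q=q\partial+q^{\prime}\) in \(K[\partial]\).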
We discuss self-adjoint and skew-adjoint operators in more depth after reviewing a useful identity relating a linear differential operator and its adjoint, which is obtained by transferring [ADH, (5.5.1)] to the level of operators.
### The Lagrange Identity
Let \(M\) be a differential module over \(K\) with \(\dim_{K}M=n\geqslant 1\), and suppose \(e\) is a cyclic vector of \(M\). Then \(e_{0},\ldots,e_{n-1}\) with \(e_{i}:=\partial^{i}e\) is a basis of \(M\). Let \(e_{0}^{*},\ldots,e_{n-1}^{*}\) be the dual basis of \(M^{*}\). Then \(e^{*}:=e_{n-1}^{*}\) is a cyclic vector of \(M^{*}\) [ADH, 5.5.7], so \(e^{*},\partial e^{*},\ldots,\partial^{n-1}e^{*}\) is a basis of \(M^{*}\). Let \(L\in K[\partial]\) be monic of order \(n\) such that \(Le=0\). Then
\[L=a_{0}+a_{1}\partial+\cdots+a_{n}\partial^{n}\qquad\text{ where }a_{0},\ldots,a_{n}\in K \text{ (so }a_{n}=1).\]
By [ADH, 5.5.7] and its proof we have \(L^{*}e^{*}=0\) and
\[e_{n-i-1}^{*}=L_{i}e^{*}\quad\text{where }L_{i}:=\sum_{j=0}^{i}(\partial^{*})^ {i-j}a_{n-j}\in K[\partial]\quad(i=0,\ldots,n-1). \tag{2.4.1}\]
Let \(d_{0},\ldots,d_{n-1}\) be the basis of \(M\) dual to the basis \(e^{*},\partial e^{*},\ldots,\partial^{n-1}e^{*}\) of \(M^{*}\). Then
\[\langle e_{n-1}^{*},d_{j}\rangle=\delta_{0j},\quad\langle e_{n-i-1}^{*},d_{n- 1}\rangle=(-1)^{n-1}\delta_{i,n-1}\quad\text{ for }1\leqslant i\leqslant n-1 \tag{2.4.2}\]
(Kronecker deltas) using (2.4.1). Let \(y,z\in K\) and set
\[\phi:=ye_{0}^{*}+y^{\prime}e_{1}^{*}+\cdots+y^{(n-1)}e_{n-1}^{*}\in M^{*}, \quad f:=zd_{0}+z^{\prime}d_{1}+\cdots+z^{(n-1)}d_{n-1}\in M.\]
Then
\[\partial\phi\ =\ L(y)e_{n-1}^{*},\qquad\partial f\ =\ (-1)^{n}L^{*}(z)d_{n-1}.\]
For the first equality, use the first display in the proof of [ADH, 5.5.7]. The second equality follows from the first by reversing the roles of \(M\) and \(M^{*}\) and noting that \((-1)^{n}L^{*}\) is monic of order \(n\) with \((-1)^{n}L^{*}e^{*}=0\). Hence
\[\langle\partial\phi,f\rangle=L(y)\langle e^{*},f\rangle=L(y)z,\qquad\langle \phi,\partial f\rangle=(-1)^{n}L^{*}(z)\langle\phi,d_{n-1}\rangle=-L^{*}(z)y\]
by (2.4.2) and so by the identity [ADH, (5.5.1)],
\[\partial\langle\phi,f\rangle\ =\ \langle\partial\phi,f\rangle+\langle\phi, \partial f\rangle\ =\ L(y)z-L^{*}(z)y.\]
Now \(\langle\partial^{i}e^{*},f\rangle=z^{(i)}\) for \(i<n\), so \(\langle Be^{*},f\rangle=B(z)\) for all \(B\) of order \(<n\), hence
\[\langle\phi,f\rangle=\sum_{i=0}^{n-1}y^{(n-i-1)}\langle L_{i}e^{*},f\rangle= \sum_{0\leqslant j\leqslant i<n}y^{(n-i-1)}(-1)^{i-j}(a_{n-j}z)^{(i-j)}=P_{L}(y,z)\]
where
\[P_{L}(Y,Z)\ :=\ \sum_{0\leqslant i\leqslant j<n}Y^{(i)}(-1)^{j-i}(a_{j+1}Z)^{(j-i )}\in K\{Y,Z\}, \tag{2.4.3}\]
a homogeneous differential polynomial of degree \(2\). These considerations show:
**Proposition 2.4.13** (Lagrange Identity).: _The map_
\[(y,z)\mapsto[y,z]_{L}:=P_{L}(y,z)\,:\ K\times K\to K\]
_is \(C\)-bilinear, and for \(y,z\in K\) we have_
\[\partial\big{(}[y,z]_{L}\big{)}\ =\ L(y)z-L^{*}(z)y. \tag{2.4.4}\]
We assumed here that \(L\) is monic of order \(n\geqslant 1\). For an arbitrary linear differential operator \(L=a_{0}+a_{1}\partial+\cdots+a_{n}\partial^{n}\in K[\partial]\)\((a_{0},\ldots,a_{n}\in K)\) we define \(P_{L}\) as in (2.4.3) and set \([y,z]_{L}:=P_{L}(y,z)\) for \(y,z\in K\). Then Proposition 2.4.13 continues to hold; to see (2.4.4) for \(L\neq 0\), reduce to the monic case, using that for all \(a\in K\) and \(L\in K[\partial]\) we have
\[P_{aL}(Y,Z)=P_{L}(Y,aZ),\hskip 28.452756pt\text{hence}\hskip 28.452756pt[y,z]_{aL }=[y,az]_{L}\text{ for }y,z\in K. \tag{2.4.5}\]
The differential polynomial \(P_{L}\) is called the **concomitant** of \(L\); it does not change when passing from \(K\) to a differential field extension.
**Lemma 2.4.14** (Hesse).: _Let \(L,L_{1},L_{2}\in K[\partial]\); then_
\[P_{L^{*}}(Y,Z)=-P_{L}(Z,Y)\quad\text{and}\quad P_{L_{1}+L_{2}}(Y,Z)=P_{L_{1}}( Y,Z)+P_{L_{2}}(Y,Z).\]
Proof.: By (2.4.4) we have \(\partial\big{(}[y,z]_{L^{*}}\big{)}=-\partial\big{(}[z,y]_{L}\big{)}\) for all \(y\), \(z\) in every differential field extension of \(K\), hence \(\big{(}P_{L^{*}}(Y,Z)+P_{L}(Z,Y)\big{)}^{\prime}=0\) in \(K\{Y,Z\}\) and then \(P_{L^{*}}(Y,Z)+P_{L}(Z,Y)=0\), since \(P_{L^{*}}\), \(P_{L}\) are homogeneous of degree \(2\). This shows the first identity. The second identity is clear by inspection of (2.4.3).
_Example_.: For \(L=0,1,\partial,\partial^{2}\) we have
\[P_{0}=P_{1}=0,\quad P_{\partial}=YZ,\quad P_{\partial^{2}}=Y^{\prime}Z-YZ^{\prime}\]
so for \(y,z\in K\):
\[[y,z]_{0}=[y,z]_{1}=0,\quad[y,z]_{\partial}=yz,\quad[y,z]_{\partial^{2}}=y^{ \prime}z-yz^{\prime},\]
which for \(L=a\partial^{2}+b\partial+c\)\((a,b,c\in K)\), using (2.4.5), gives
\[P_{L}\ =\ aY^{\prime}Z-aYZ^{\prime}+(b-a^{\prime})YZ,\quad[y,z]_{L}\ =\ ay^{\prime}z- ayz^{\prime}+(b-a^{\prime})yz.\]
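As a quick check of (2.4.4) in this second-order case: differentiating \([y,z]_{L}=ay^{\prime}z-ayz^{\prime}+(b-a^{\prime})yz\) and collecting terms gives
\[\partial\big([y,z]_{L}\big)\ =\ (ay^{\prime\prime}+by^{\prime}+cy)z-\big(az^{\prime\prime}+(2a^{\prime}-b)z^{\prime}+(a^{\prime\prime}-b^{\prime}+c)z\big)y\ =\ L(y)z-L^{*}(z)y,\]
in accordance with \(L^{*}=\partial^{2}a-\partial b+c\).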
Below we use that evaluating the differential operator \(L\in K[\partial]\) at the element \(Y\) of the differential ring extension \(K\{Y\}\) of \(K\) results in the differential polynomial \(L(Y)\in K\{Y\}\), which is homogeneous of degree \(1\). With this notation, we have \(P_{L}(Y,Z)^{\prime}=L(Y)Z-L^{*}(Z)Y\). The next result characterizes the concomitant and adjoint of a differential operator accordingly.
**Lemma 2.4.15** (Frobenius).: _The pair \((P_{L},L^{*})\) is the only pair \((P,\widetilde{L})\) with \(P\) in \(K\{Y,Z\}\) homogeneous of degree \(2\) and \(\widetilde{L}\in K[\partial]\) such that_
\[P(Y,Z)^{\prime}\ =\ L(Y)Z-\widetilde{L}(Z)Y.\]
Proof.: If \((P,\widetilde{L})\) is such a pair, then \(P_{1}:=P_{L}-P\), \(L_{1}:=L^{*}-\widetilde{L}\) gives \(P_{1}(Y,Z)^{\prime}=-L_{1}(Z)Y\); comparing coefficients of the monomials \(Y^{(i)}Z^{(j)}\), beginning with the highest derivative of \(Y\) occurring in \(P_{1}\), one first derives \(P_{1}=0\) and then \(L_{1}=0\).
Let now \(L\in K[\partial]^{\neq}\) be of order \(n\), and set \(V:=\ker L\), \(W:=\ker L^{*}\). Then for \(y\in V\), \(z\in W\) we have \([y,z]_{L}\in C\) by (2.4.4); thus \([\,,\,]_{L}\) restricts to a \(C\)-bilinear map \(V\times W\to C\), also denoted by \([\,,\,]_{L}\).
**Corollary 2.4.16**.: _Suppose \(\dim_{C}V=n\). Then the pairing_
\[[\,,\,]_{L}\ :\ V\times W\to C\]
_is non-degenerate._
Proof.: By Lemma 2.3.21 we have \(\dim_{C}W=n\). Let \(y\in V^{\neq}\). Then \(P_{L}(y,Z)\in K\{Z\}\) is homogeneous of degree \(1\) and order \(n-1\), hence cannot vanish on the \(C\)-linear subspace \(W\) of \(K\) of dimension \(n\) [ADH, 4.1.14]. Similarly with \(z\in W^{\neq}\), \(P_{L}(Y,z)\in K\{Y\}\), \(V\) in place of \(y\), \(P_{L}(y,Z)\), \(W\), respectively.
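_Example_.: Suppose \(x\in K\) satisfies \(x^{\prime}=1\), and take \(L=\partial^{2}\) with \(\dim_{C}V=2\). Then \(L^{*}=L\), \(V=W=C\oplus Cx\), and \([y,z]_{L}=y^{\prime}z-yz^{\prime}\), so with respect to the basis \(1,x\) the pairing \([\,,\,]_{L}\) has Gram matrix
\[\begin{pmatrix}[1,1]_{L}&[1,x]_{L}\\ [x,1]_{L}&[x,x]_{L}\end{pmatrix}\ =\ \begin{pmatrix}0&-1\\ 1&0\end{pmatrix},\]
which is visibly non-degenerate.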
**The concomitant of operators which split over \(K\).** In this subsection we assume that \(A\in K[\partial]\) and \(a_{1},\dots,a_{r}\in K\) satisfy
\[A=(\partial-a_{r})\cdots(\partial-a_{1}),\quad\text{so}\quad A^{*}=(-1)^{r}( \partial+a_{1})\cdots(\partial+a_{r}).\]
For \(i=0,\dots,r\) we define
\[A_{i}:=(\partial-a_{i})\cdots(\partial-a_{1}),\qquad B_{i}:=(-1)^{r-i}( \partial+a_{i+1})\cdots(\partial+a_{r}). \tag{2.4.6}\]
Thus \(A_{i}\) has order \(i\), \(B_{i}\) has order \(r-i\), and
\[A_{0}=B_{r}=1,\quad A_{r}=A,\quad B_{0}=A^{*}.\]
We then have the following formula for \(P_{A}\):
**Lemma 2.4.17**.: \(P_{A}(Y,Z)=\sum_{i=0}^{r-1}A_{i}(Y)B_{i+1}(Z)\)_._
Towards proving this, take \(b_{1},\dots,b_{r}\neq 0\) in a differential field extension \(E\) of \(K\) with \(b_{j}^{\dagger}=a_{j}-a_{j-1}\) for \(j=1,\dots,r\), where \(a_{0}:=0\), and set \(b_{r+1}:=(b_{1}\cdots b_{r})^{-1}\). Lemma 1.1.3 then gives
\[A\ =\ b_{r+1}^{-1}(\partial b_{r}^{-1})\cdots(\partial b_{2}^{-1})(\partial b_ {1}^{-1}).\]
For \(i=0,\dots,r\),
\[L_{i}\ :=\ b_{i+1}^{-1}(\partial b_{i}^{-1})\cdots(\partial b_{2}^{-1})( \partial b_{1}^{-1})\in E[\partial]\]
has order \(i\), with \(L_{0}=b_{1}^{-1}\), \(L_{r}=A\). Likewise we introduce for \(i=0,\dots,r\),
\[M_{i}\ :=\ (-1)^{r-i}b_{i+1}^{-1}(\partial b_{i+2}^{-1})\cdots(\partial b_{r}^{-1 })(\partial b_{r+1}^{-1})\in E[\partial]\]
of order \(r-i\), so \(M_{0}=A^{*}\), \(M_{r}=b_{r+1}^{-1}\). Note that
\[L_{i+1}\ =\ b_{i+2}^{-1}\partial L_{i},\quad M_{i}=-b_{i+1}^{-1}\partial M_{i+1}\qquad(i=0,\dots,r-1). \tag{2.4.7}\]
With these notations, we have:
**Lemma 2.4.18** (Darboux).: \(P_{A}(Y,Z)\ =\ \sum_{i=0}^{r-1}L_{i}(Y)M_{i+1}(Z)\) _in \(E\{Y,Z\}\)._
Proof.: The cases \(r=0\) and \(r=1\) are easy to verify directly. Assume \(r\geqslant 2\). It suffices to show that the differential polynomial \(P(Y,Z)\) on the right-hand side of the claimed equality satisfies \(P(Y,Z)^{\prime}=A(Y)Z-YA^{*}(Z)\). From (2.4.7) we obtain
\[A(Y)Z\ =\ L_{r-1}(Y)^{\prime}M_{r}(Z),\qquad-YA^{*}(Z)\ =\ -YM_{0}(Z)\ =\ L_{0}(Y)M_{1}(Z)^{\prime}\]
and
\[\left(L_{i}(Y)M_{i+1}(Z)\right)^{\prime} =\ L_{i}(Y)^{\prime}M_{i+1}(Z)+L_{i}(Y)M_{i+1}(Z)^{\prime}\] \[=\ L_{i}(Y)M_{i+1}(Z)^{\prime}-L_{i+1}(Y)M_{i+2}(Z)^{\prime}\quad \text{for $i=0,\dots,r-2$}.\]
Now use the cancellations in \(\sum_{i=0}^{r-1}\left(L_{i}(Y)M_{i+1}(Z)\right)^{\prime}\).
This yields Lemma 2.4.17: For \(i=0,\dots,r-1\), we have
\[L_{i}\ =\ (b_{i+1}b_{i}\cdots b_{1})^{-1}(\partial-a_{i})\cdots(\partial-a_{1})\ =\ (b_{1}\cdots b_{i+1})^{-1}A_{i}\]
and
\[M_{i+1}\ =\ (-1)^{r-i-1}(b_{i+2}\cdots b_{r+1})^{-1}(\partial+a_{i+2})\cdots(\partial+a_{r})\ =\ (b_{i+2}\cdots b_{r+1})^{-1}B_{i+1},\]
and hence \(L_{i}(Y)M_{i+1}(Z)=A_{i}(Y)B_{i+1}(Z)\), since \(b_{1}\cdots b_{r+1}=1\).
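For instance, for \(r=2\), Lemma 2.4.17 reads
\[P_{A}(Y,Z)\ =\ A_{0}(Y)B_{1}(Z)+A_{1}(Y)B_{2}(Z)\ =\ -Y(\partial+a_{2})(Z)+(\partial-a_{1})(Y)Z\ =\ Y^{\prime}Z-YZ^{\prime}-(a_{1}+a_{2})YZ,\]
in agreement with the example following Lemma 2.4.14, since \(A=(\partial-a_{2})(\partial-a_{1})\) has subleading coefficient \(-(a_{1}+a_{2})\).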
**Self-adjoint and skew-adjoint operators.** If \(A\) is self-adjoint, then \(r=\operatorname{order}A\) is even. Moreover, for \(B\neq 0\): \(A\) is self-adjoint iff \(B^{*}AB\) is self-adjoint. The self-adjoint operators form a \(C\)-linear subspace of \(K[\partial]\) containing \(K\).
**Lemma 2.4.19** (Jacobi).: _Let \(s\in\mathbb{N}\) and suppose \(r=2s\). Then \(A\) is self-adjoint iff there are \(b_{0},\dots,b_{s}\in K\) such that_
\[A\ =\ \partial^{s}b_{s}\partial^{s}+\partial^{s-1}b_{s-1}\partial^{s-1}+\dots+b_{0}.\]
Proof.: If \(A\) has the displayed shape, then evidently \(A\) is self-adjoint. We show the converse by induction on \(s\). The case \(s=0\) being trivial, suppose \(s\geqslant 1\). Say \(A=a_{r}\partial^{r}+\) lower order terms (\(a_{r}\in K^{\times}\)). Then \(B=A-\partial^{s}a_{r}\partial^{s}\) is self-adjoint of order \(<r\), hence the inductive hypothesis applies to \(B\).
_Example_.: If \(r=2\), then \(A\) is self-adjoint iff \(A=a\partial^{2}+a^{\prime}\partial+b\) (\(a,b\in K\)). In particular, \(\partial^{2}+b\) (\(b\in K\)) is self-adjoint.
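Indeed, for \(s=1\) the shape in Lemma 2.4.19 is
\[A\ =\ \partial b_{1}\partial+b_{0}\ =\ b_{1}\partial^{2}+b_{1}^{\prime}\partial+b_{0},\]
which matches the example with \(a=b_{1}\) and \(b=b_{0}\).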
If \(A\) is self-adjoint, then \([y,z]_{A}=-[z,y]_{A}\) for all \(y,z\in K\), by Lemma 2.4.14. Thus \([y,y]_{A}=0\) for \(y\in K\). This fact is used in the proof of the next lemma:
**Lemma 2.4.20**.: _Suppose \(A\) is self-adjoint and splits over \(K\), and \(r=2s,\ s\in\mathbb{N}\). Then there are \(a\in K^{\times}\) and \(a_{1},\dots,a_{s}\in K\) such that_
\[A\ =\ (\partial+a_{1})\cdots(\partial+a_{s})a(\partial-a_{s})\cdots(\partial-a_{1}).\]
_If \(A\) is monic, then \(a=1\) for any such \(a\)._
Proof.: By induction on \(s\). The case \(s=0\) being trivial, suppose \(s\geqslant 1\). Let \(z\neq 0\) be a zero of \(A\) in a differential field extension \(\Omega\) of \(K\) with \(a_{1}:=z^{\dagger}\in K\). The differential polynomial \(P(Y):=P_{A}(Y,z)\) is homogeneous of degree \(1\) and order \(r-1\) with \(P(z)=[z,z]_{A}=0\); hence by [ADH, 5.1.21] we obtain \(A_{0}\in\Omega[\partial]\) with \(L_{P}=A_{0}(\partial-a_{1})\). By (2.4.4) we have \(zA=\partial L_{P}=\partial A_{0}(\partial-a_{1})\) and so
\[A=z^{-1}\partial A_{0}(\partial-a_{1})=(\partial+a_{1})A_{1}(\partial-a_{1})\qquad\text{where }A_{1}:=z^{-1}A_{0}\in\Omega[\partial].\]
The inductive hypothesis applies to \(A_{1}\): \(A_{1}\in K[\partial]\) by [ADH, 5.1.11], \(A_{1}\) is self-adjoint of order \(r-2\), and \(A_{1}\) splits over \(K\) by [ADH, 5.1.22].
This gives rise to the following corollary:
**Corollary 2.4.21** (Frobenius, Jacobi).: _Suppose \(A\) is self-adjoint and \(\dim_{C}\ker A=r=2s\). Then \(A=B^{*}bB\) where \(B=\partial b_{s}^{-1}\cdots\partial b_{1}^{-1}\) with \(b,b_{1},\dots,b_{s}\in K^{\times}\)._
Proof.: From \(\dim_{C}\ker A=r\) we obtain that \(A\) splits over \(K\). Hence the previous lemma yields \(a_{1},\dots,a_{s}\in K\), \(a\in K^{\times}\) such that
\[A=(\partial+a_{1})\cdots(\partial+a_{s})a(\partial-a_{s})\cdots(\partial-a_{1}),\]
and \(a_{1},\dots,a_{s}\in K^{\dagger}\) by Lemma 2.3.4. Lemma 1.1.3 yields \(b_{1},\dots,b_{s}\in K^{\times}\) with
\[(\partial-a_{s})\cdots(\partial-a_{1})=b_{1}\cdots b_{s}\,\partial b_{s}^{-1}\cdots\partial b_{1}^{-1}.\]
Set \(B:=\partial b_{s}^{-1}\cdots\partial b_{1}^{-1}\). Then \(A=B^{*}bB\) for \(b:=(-1)^{s}(b_{1}\cdots b_{s})^{2}a\).
Recall that \(A\) is called skew-adjoint if \(A^{*}=-A\) (and then \(r=\operatorname{order}A\) is odd). The skew-adjoint operators form a \(C\)-linear subspace of \(K[\partial]\). For \(B\neq 0\), the operator \(B^{*}AB\) is skew-adjoint iff \(A\) is skew-adjoint. We have a characterization of skew-adjoint operators analogous to Lemma 2.4.19:
**Lemma 2.4.22**.: _Let \(s\in\mathbb{N}\) and suppose \(r=2s+1\). Then \(A\) is skew-adjoint iff there are \(b_{0},\ldots,b_{s}\in K\) such that_
\[A=(\partial^{s+1}b_{s}\partial^{s}+\partial^{s}b_{s}\partial^{s+1})+(\partial^{ s}b_{s-1}\partial^{s-1}+\partial^{s-1}b_{s-1}\partial^{s})+\cdots+(\partial b _{0}+b_{0}\partial).\]
Proof.: Suppose \(A\) is skew-adjoint. Say \(A=a_{r}\partial^{r}+\text{lower order terms }(a_{r}\in K^{\times})\), and set \(b_{s}:=a_{r}/2\). Then \(A-(\partial^{s+1}b_{s}\partial^{s}+\partial^{s}b_{s}\partial^{s+1})\) is skew-adjoint of order \(<r\). Hence the forward direction follows by induction on \(s\). The converse is obvious.
_Example_.: If \(r=1\), then \(A\) is skew-adjoint iff \(A=a\partial+(a^{\prime}/2)\) (\(a\in K^{\times}\)).
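(To verify: \((a\partial)^{*}=\partial^{*}a=-\partial a=-a\partial-a^{\prime}\), so \(\big(a\partial+(a^{\prime}/2)\big)^{*}=-a\partial-a^{\prime}+(a^{\prime}/2)=-\big(a\partial+(a^{\prime}/2)\big)\).)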
For monic \(A\) of order \(3\), \(A\) is skew-adjoint iff \(A=\partial^{3}+f\partial+(f^{\prime}/2)\) for some \(f\in K\). In the next lemma we consider this case; for a more general version of this lemma, see [158, Proposition 4.26(1)].
**Lemma 2.4.23**.: _Let \(f\in K\), \(A=\partial^{3}+f\partial+(f^{\prime}/2)\), \(B=4\partial^{2}+f\) and \(y,z\in\ker B\). Then \(yz\in\ker A\). Moreover, if \(y\), \(z\) is a basis of the \(C\)-linear space \(\ker B\), then \(y^{2},yz,z^{2}\) is a basis of \(\ker A\)._
Proof.: We have
\[(yz)^{\prime}\ =\ y^{\prime}z+yz^{\prime},\quad(yz)^{\prime\prime}\ =\ y^{\prime\prime}z+2y^{\prime}z^{\prime}+yz^{\prime\prime}\ =\ 2y^{\prime}z^{\prime}-(f/2)yz,\]
hence
\[(yz)^{\prime\prime\prime} =\ 2y^{\prime\prime}z^{\prime}+2y^{\prime}z^{\prime\prime}-(f^{ \prime}/2)yz-(f/2)(yz)^{\prime}\] \[=\ -(f/2)(yz^{\prime}+y^{\prime}z)-(f^{\prime}/2)yz-(f/2)(yz)^{\prime}\] \[=\ -f(yz)^{\prime}-(f^{\prime}/2)yz,\]
and so \(yz\in\ker A\). Suppose \(ay^{2}+byz+cz^{2}=0\) for some \(a,b,c\in C\), not all zero; we claim that then \(y\), \(z\) are \(C\)-linearly dependent. We have \(a\neq 0\) or \(c\neq 0\), and so we may assume \(a\neq 0\), \(z\neq 0\). Then \(u:=y/z\) satisfies \(au^{2}+bu+c=0\), so \(u\in C\) [ADH, 4.1.1], hence \(y\in Cz\).
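_Example_.: Take \(f=0\) and suppose \(x\in K\) satisfies \(x^{\prime}=1\). Then \(1,x\in\ker B=\ker 4\partial^{2}\), and accordingly the products \(1=1\cdot 1\), \(x=1\cdot x\), \(x^{2}=x\cdot x\) lie in \(\ker A=\ker\partial^{3}\).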
If \(A\) is skew-adjoint, then \(P_{A}(Y,Z)=P_{A}(Z,Y)\), so
\[P_{A}(Y+Z,Y+Z)\ =\ P_{A}(Y,Y)+P_{A}(Z,Z)+2P_{A}(Y,Z)\]
and \([y,z]_{A}=[z,y]_{A}\) for all \(y,z\in K\).
**Lemma 2.4.24**.: _Suppose \(r\geqslant 1\). Then the following are equivalent:_
* \(A\) _is skew-adjoint;_
* _there is a homogeneous differential polynomial_ \(Q\in K\{Y\}\) _of degree_ \(2\) _such that_ \(A(Y)Y=Q(Y)^{\prime}\)_._
_Moreover, the differential polynomial \(Q\) in (ii) is unique, and \(Q(Y)=\frac{1}{2}P_{A}(Y,Y)\)._
Proof.: For (i) \(\Rightarrow\) (ii) take \(Q(Y):=\frac{1}{2}P_{A}(Y,Y)\). For the converse let \(Q\) be as in (ii). Let \(Z\) be a differential indeterminate over \(K\) different from \(Y\) and \(c\) be a constant in a differential field extension \(\Omega\), with \(c\) transcendental over \(C\). Then
\[A(Y+cZ)(Y+cZ)\ =\ Q(Y+cZ)^{\prime}\ \text{ in }\Omega\{Y,Z\}.\]
Also, in \(\Omega\{Y,Z\}\),
\[A(Y+cZ)(Y+cZ)\ =\ A(Y)Y+c\big{(}A(Y)Z+A(Z)Y\big{)}+c^{2}A(Z)Z.\]
Take \(P,R\in K\{Y,Z\}\) such that
\[Q(Y+cZ)\ =\ Q(Y)+cP(Y,Z)+c^{2}R(Y,Z).\]
Comparing the coefficients of \(c\) now yields
\[A(Y)Z+A(Z)Y\ =\ P(Y,Z)^{\prime}.\]
Now using Lemma 2.4.15 gives \(A^{*}=-A\), \(P=P_{A}\), proving (i).
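_Example_.: For \(A=\partial\) we have \(A(Y)Y=Y^{\prime}Y=\big(\frac{1}{2}Y^{2}\big)^{\prime}\), so here \(Q(Y)=\frac{1}{2}Y^{2}=\frac{1}{2}P_{\partial}(Y,Y)\), in accordance with the example following Lemma 2.4.14.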
_In the rest of this subsection \(A\) is skew-adjoint, \(r\geqslant 3\), and \(Q(Y):=\frac{1}{2}P_{A}(Y,Y)\)._
**Lemma 2.4.25**.: _If \(\dim_{C}\ker A\geqslant 2\) and \(C^{\times}\) is \(2\)-divisible, then \(A(z)=Q(z)=0\) for some \(z\in K^{\times}\)._
Proof.: Apply [122, Chapter XV, Theorem 3.1] to the symmetric bilinear form
\[(y,z)\mapsto[y,z]_{A}\ :\ \ker A\times\ker A\to C\]
on the \(C\)-linear space \(\ker A\).
**Lemma 2.4.26**.: _Suppose \(K^{\dagger}\) is \(2\)-divisible, and \(z\neq 0\) lies in a differential field extension of \(K\) with \(A(z)=0\) and \(z^{\dagger}\in K\setminus K^{\dagger}\). Then \(Q(z)=0\)._
Proof.: From \(z^{\prime}\in Kz\) it follows by induction that \((Kz)^{(n)}\subseteq Kz\) for all \(n\). Using (2.4.3) this yields \(Q(z)=az^{2}\) for a certain \(a\in K\). Also \(Q(z)^{\prime}=A(z)z=0\) and so if \(a\neq 0\), then \(z^{\dagger}=-\frac{1}{2}a^{\dagger}\in K^{\dagger}\), a contradiction.
Let \(z\neq 0\) lie in a differential field extension \(\Omega\) of \(K\) with \(A(z)=Q(z)=0\). The differential polynomial \(P(Y):=P_{A}(Yz,z)\in\Omega\{Y\}\) is homogeneous of degree \(1\) and order \(r-1\). Substitution in the identity \(P_{A}(Y,Z)^{\prime}=A(Y)Z+A(Z)Y\) gives \(P(Y)^{\prime}=zA(Yz)\). The coefficient of \(Y\) in \(P\) is \(P(1)=P_{A}(z,z)=0\), hence
\[P(Y)\ =\ A_{0}(Y^{\prime}),\qquad A_{0}\in\Omega[\partial]\ \text{of order $r-2$.}\]
**Lemma 2.4.27**.: _In \(\Omega[\partial]\) we have \(\partial A_{0}\partial=zAz\), so \(A_{0}\) is skew-adjoint._
Proof.: From \(P(Y)^{\prime}=zA(Yz)\) and \(P(Y)=A_{0}(Y^{\prime})\) we obtain \(A_{0}(Y^{\prime})^{\prime}=zA(Yz)\). In terms of operators this means \(\partial A_{0}\partial=zAz\).
Next we use these lemmas to prove a skew-adjoint version of Lemma 2.4.20.
**Factorization of skew-adjoint operators**.: _In this subsection \(K\) is \(1\)-linearly surjective, \(K^{\dagger}\) and \(C^{\times}\) are \(2\)-divisible, and \(A\) is monic._
**Proposition 2.4.28**.: _Suppose \(A\) is skew-adjoint and splits over \(K\). Then there are \(a_{1},\dots,a_{s}\in K\), where \(r=2s+1\), such that_
\[A\ =\ (\partial+a_{1})\cdots(\partial+a_{s})\partial(\partial-a_{s})\cdots( \partial-a_{1}).\]
Proof.: We proceed by induction on \(s\). The case \(s=0\) is clear (see the example following Lemma 2.4.22), so let \(s\geqslant 1\). With \(Q\) as in the previous subsection we claim that \(A(z)=Q(z)=0\) and \(z^{\dagger}\in K\) for some \(z\neq 0\) in a differential field extension \(\Omega\) of \(K\). If \(\dim_{C}\ker A=r\), then Lemma 2.4.25 yields such a \(z\) in \(\Omega=K\). Otherwise, Lemma 2.3.3 gives \(a\in K\setminus K^{\dagger}\) with \(\operatorname{mult}_{a}(A)\geqslant 1\), which in turn gives \(z\neq 0\) in a differential field extension \(\Omega\) of \(K\) with \(A(z)=0\) and \(z^{\dagger}\in a+K^{\dagger}\), and thus \(Q(z)=0\) by Lemma 2.4.26. This proves the claim.
Let \(z\) and \(\Omega\) be as in the claim, set \(a_{1}:=z^{\dagger}\), and let \(A_{0}\in\Omega[\partial]\) be the skew-adjoint differential operator from the previous subsection. Then
\[A\ =\ z^{-1}\partial A_{0}\partial z^{-1}\ =\ (\partial+a_{1})A_{1}(\partial-a_{1}) \qquad\text{where $A_{1}:=z^{-1}A_{0}z^{-1}$.}\]
By Lemma 2.4.27, \(A_{1}\) is skew-adjoint of order \(r-2\). By [ADH, 5.1.11, 5.1.22], \(A_{1}\in K[\partial]\) is monic and splits over \(K\), so the inductive hypothesis applies to \(A_{1}\).
**Corollary 2.4.29** (Darboux).: _Suppose \(A\) is skew-adjoint with \(\dim_{C}\ker A=r=2s+1\). Then \(A=B^{*}\partial B\) for some \(B\). More precisely, there are \(b_{1},\dots,b_{s}\in K^{\times}\) such that \(A=B^{*}\partial B\) for \(B:=\partial b_{s}^{-1}\cdots\partial b_{1}^{-1}\)._
Proof.: Arguing as in the proof of Corollary 2.4.21, using Proposition 2.4.28 instead of Lemma 2.4.20, gives \(A=(-1)^{s}B^{*}b\partial bB\) with \(B=\partial b_{s}^{-1}\cdots\partial b_{1}^{-1}\), \(b_{1},\dots,b_{s}\in K^{\times}\), and \(b=b_{1}\cdots b_{s}\). But \(A\) is monic, so \((-1)^{s}b^{2}=1\), hence \(b\in C\) and \(A=B^{*}\partial B\).
**Corollary 2.4.30**.: _Suppose \(A^{*}=(-1)^{r}A_{\ltimes a}\) with \(a\in K^{\times}\), and \(A\) splits over \(K\). Then there are \(a_{1},\dots,a_{r}\in K\) such that_
\[A=(\partial-a_{r})\cdots(\partial-a_{1})\quad\text{and}\quad a_{j}+a_{r+1-j}= a^{\dagger}\text{ for }j=1,\dots,r.\]
Proof.: By a remark preceding Definition 2.4.12 we have \(B^{*}=(-1)^{r}B\) where \(B:=A_{b/2}\), \(b:=a^{\dagger}\), so \(B=A_{\ltimes d}\) with \(d\in K^{\times},\ d^{2}=a\). Suppose \(r=2s\) is even. Then \(B\) is self-adjoint and Lemma 2.4.20 gives
\[B\ =\ (\partial+b_{1})\cdots(\partial+b_{s})(\partial-b_{s})\cdots(\partial-b _{1})\quad\text{ with }b_{1},\dots,b_{s}\in K.\]
Hence
\[A\ =\ B_{\ltimes d^{-1}}\ =\ (\partial+b_{1}-d^{\dagger})\cdots(\partial+b_{s}-d^{\dagger})(\partial-b_{s}-d^{\dagger})\cdots(\partial-b_{1}-d^{\dagger}),\]
which is of the desired form, with \(a_{j}=b_{j}+d^{\dagger}\) and \(a_{r+1-j}=-b_{j}+d^{\dagger}\) for \(j=1,\dots,s\). The case of odd \(r=2s+1\) is handled in the same way, using Proposition 2.4.28 instead of Lemma 2.4.20.
**Eigenrings of matrix differential equations**.: _In the rest of this section \(N\), \(N_{1}\), \(N_{2}\), \(P\) range over \(n\times n\) matrices over \(K\) \((n\geqslant 1)\). Associated to the matrix differential equation \(y^{\prime}=Ny\) over \(K\) we have the differential module \(M_{N}\) over \(K\) with \(\dim_{K}M_{N}=n\) [ADH, 5.5]. Recall that matrix differential equations \(y^{\prime}=N_{1}y\) and \(y^{\prime}=N_{2}y\) over \(K\) are said to be _equivalent_ if \(M_{N_{1}}\cong M_{N_{2}}\). Let \(K^{n\times n}\) be the \(C\)-linear space of all \(n\times n\) matrices over \(K\), and consider the subspace
\[\mathcal{E}(N_{1},N_{2})\ :=\ \big{\{}P:\ P^{\prime}=N_{2}P-PN_{1}\big{\}}\]
of \(K^{n\times n}\). Given a differential ring extension \(R\) of \(K\), each \(P\in\mathcal{E}(N_{1},N_{2})\) yields a \(C_{R}\)-linear map \(y\mapsto Py\colon\operatorname{sol}_{R}(N_{1})\to\operatorname{sol}_{R}(N_{2})\). By Lemma 2.3.13 and the next lemma we have \(\dim_{C}\mathcal{E}(N_{1},N_{2})\leqslant n^{2}\):
**Lemma 2.4.31**.: _We have an isomorphism_
\[P\mapsto\phi_{P}\ :\ \mathcal{E}(N_{1},N_{2})\to\operatorname{Hom}_{K[ \partial]}(M_{N_{1}},M_{N_{2}})\]
_of \(C\)-linear spaces given by_
\[\phi_{P}(y)\ =\ Py\quad\text{for }P\in\mathcal{E}(N_{1},N_{2})\text{ and }y\in M_{N_{1}}.\]
Proof.: Let \(P\in\mathcal{E}(N_{1},N_{2})\), and define \(\phi_{P}\in\operatorname{Hom}_{K}(M_{N_{1}},M_{N_{2}})\) by \(\phi_{P}(y)=Py\). Then for \(y\in M_{N_{1}}\) we have
\[\phi_{P}(\partial y)\ =\ Py^{\prime}-PN_{1}y\ =\ Py^{\prime}+\left(P^{\prime}y-N_{ 2}Py\right)\ =\ \left(Py\right)^{\prime}-N_{2}Py\ =\ \partial\phi_{P}(y),\]
hence \(\phi_{P}\in\operatorname{Hom}_{K[\partial]}(M_{N_{1}},M_{N_{2}})\). The rest follows easily.
One verifies easily that \(\mathcal{E}(N):=\mathcal{E}(N,N)\) is a subalgebra of the \(C\)-algebra of \(n\times n\)-matrices over \(K\) and that this yields an isomorphism
\[P\mapsto\phi_{P}\ :\ \mathcal{E}(N)\to\operatorname{End}_{K[\partial]}(M_{N})\]
of \(C\)-algebras. The \(C\)-algebra \(\mathcal{E}(N)\) is called the **eigenring** of \(y^{\prime}=Ny\). We have \(1\leqslant\dim_{C}\mathcal{E}(N)\leqslant n^{2}\), and \(C\) is algebraically closed in \(K\). It follows that the minimum polynomial of any \(P\in\mathcal{E}(N)\) over \(K\) (that is, the monic polynomial \(f(T)\in K[T]\) of least degree with \(f(P)=0\)) has degree at most \(n^{2}\) and has its coefficients in \(C\). In particular, if \(C\) is algebraically closed, then the eigenvalues of any \(P\in\mathcal{E}(N)\) are in \(C\). If \(y^{\prime}=N_{1}y\) and \(y^{\prime}=N_{2}y\) are equivalent, then their eigenrings are isomorphic as \(C\)-algebras.
**Corollary 2.4.32**.: _The isomorphism \(P\mapsto\phi_{P}\) from the previous lemma restricts to a bijection between the subset_
\[\mathcal{E}(N_{1},N_{2})^{\times}\ :=\ \operatorname{GL}_{n}(K)\cap\mathcal{E}(N _{1},N_{2})\]
_of \(\mathcal{E}(N_{1},N_{2})\) and the set of isomorphisms \(M_{N_{1}}\to M_{N_{2}}\). If \(\mathcal{E}(N_{1})=C\cdot 1\) and \(P\in\mathcal{E}(N_{1},N_{2})^{\times}\), then \(\mathcal{E}(N_{1},N_{2})=C\cdot P\) and \(\mathcal{E}(N_{1},N_{2})^{\times}=C^{\times}\cdot P\)._
Hence \(y^{\prime}=N_{1}y\) and \(y^{\prime}=N_{2}y\) are equivalent iff \(\mathcal{E}(N_{1},N_{2})^{\times}\neq\emptyset\), and in this case \(y^{\prime}=N_{1}y\) is also called a **gauge transform** of \(y^{\prime}=N_{2}y\).
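For instance, for \(n=1\) we have \(\mathcal{E}(N_{1},N_{2})=\big\{p\in K:\ p^{\prime}=(N_{2}-N_{1})p\big\}\), so \(y^{\prime}=N_{1}y\) and \(y^{\prime}=N_{2}y\) are equivalent iff \(N_{2}-N_{1}\in K^{\dagger}\).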
For \(P\in\mathcal{E}(N_{1},N_{2})^{\times}\) and each differential ring extension \(R\) of \(K\) we have the isomorphism
\[y\mapsto Py\colon\operatorname{sol}_{R}(N_{1})\to\operatorname{sol}_{R}(N_{2})\]
of \(C_{R}\)-modules, and any fundamental matrix \(F\) for \(y^{\prime}=N_{1}y\) in \(R\) yields a fundamental matrix \(PF\) for \(y^{\prime}=N_{2}y\) in \(R\).
We have a right action of the group \(\operatorname{GL}_{n}(K)\) on \(K^{n\times n}\) given by
\[(N,P)\mapsto P^{-1}(N):=P^{-1}NP-P^{-1}P^{\prime}.\]
For each \(N\) and \(P\in\operatorname{GL}_{n}(K)\), we have \(P\in\mathcal{E}(P^{-1}(N),N)^{\times}\), so the matrix differential equation \(y^{\prime}=P^{-1}(N)y\) is a gauge transform of \(y^{\prime}=Ny\).
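For \(n=1\) this action is given by \(P^{-1}(N)=N-P^{\dagger}\) \((P\in K^{\times})\); thus the gauge transforms of \(y^{\prime}=Ny\) are in this case exactly the equations \(y^{\prime}=(N-g)y\) with \(g\in K^{\dagger}\).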
Next we relate the eigenrings of linear differential operators introduced above with the eigenrings of matrix differential equations over \(K\). We precede this by some generalities about differential modules: Let \(M,M_{1},M_{2}\) be (left) \(K[\partial]\)-modules. The _dual_ \(M^{*}:=\operatorname{Hom}_{K}(M,K)\) of \(M\) is then a \(K[\partial]\)-module, and \(\left\langle\phi,f\right\rangle:=\phi(f)\in K\) for \(\phi\in M^{*}\), \(f\in M\). This yields the injective \(K[\partial]\)-linear map
\[\alpha\mapsto\alpha^{*}\colon\operatorname{Hom}_{K}(M_{2},M_{1})\to \operatorname{Hom}_{K}(M_{1}^{*},M_{2}^{*})\quad\text{where $\alpha^{*}(\phi)=\phi\circ\alpha$ for $\phi\in M_{1}^{*}$},\]
and
\[\left\langle\alpha^{*}(\phi),f\right\rangle\ =\ \left\langle\phi,\alpha(f) \right\rangle\quad\text{for $\alpha\in\operatorname{Hom}_{K}(M_{2},M_{1})$, $\phi\in M_{1}^{*}$, $f\in M_{2}$}.\]
If \(M_{1}\), \(M_{2}\) are differential modules over \(K\), then \(\alpha\mapsto\alpha^{*}\) is an isomorphism. Note that \(\operatorname{Hom}_{K[\partial]}(M_{2},M_{1})\) is a \(C\)-linear subspace of \(H:=\operatorname{Hom}_{K}(M_{2},M_{1})\), with
\[\operatorname{Hom}_{K[\partial]}(M_{2},M_{1})\ =\ \ker_{H}\partial.\]
Hence the \(K[\partial]\)-module morphism \(\alpha\mapsto\alpha^{*}\) restricts to a \(C\)-linear map
\[\operatorname{Hom}_{K[\partial]}(M_{2},M_{1})\to\operatorname{Hom}_{K[\partial]}(M_{1}^{*},M_{2}^{*}),\]
which is bijective if \(M_{1}\), \(M_{2}\) are differential modules over \(K\).
Let \(N\) be the companion matrix of a monic operator \(A\in K[\partial]\) of order \(n\), and set \(M:=K[\partial]/K[\partial]A\), a differential module over \(K\) of dimension \(n\), with cyclic vector \(e:=1+K[\partial]A\), \(Ae=0\), and with basis \(e_{0},\dots,e_{n-1}\), \(e_{j}:=\partial^{j}e\) for \(j=0,\dots,n-1\). Then \(M^{*}\) has matrix \(N\) with respect to the dual basis \(e_{0}^{*},\dots,e_{n-1}^{*}\). Accordingly we identify \(M^{*}\) with \(M_{N}\) via the isomorphism \(M^{*}\to M_{N}\) sending \(e_{j-1}^{*}\) to the \(j\)th standard basis vector of \(K^{n}\), for \(j=1,\dots,n\).
In the following lemma \(N_{1}\), \(N_{2}\) are the companion matrices of monic operators \(A_{1},A_{2}\in K[\partial]\) of order \(n\), respectively. Set \(M_{1}:=K[\partial]/K[\partial]A_{1}\), \(M_{2}:=K[\partial]/K[\partial]A_{2}\) and identify \(M_{1}^{*}\), \(M_{2}^{*}\) with \(M_{N_{1}}\), \(M_{N_{2}}\), as we just indicated for \(M\). Let \(\Phi\colon\mathcal{E}(A_{1},A_{2})\to\mathcal{E}(N_{1},N_{2})\) be the isomorphism of \(C\)-linear spaces with \(\phi_{\Phi(R)}=\phi_{R}^{*}\) for \(R\in\mathcal{E}(A_{1},A_{2})\), obtained by composing \(R\mapsto\phi_{R}\), \(\alpha\mapsto\alpha^{*}\), and the inverse of \(P\mapsto\phi_{P}\).
**Lemma 2.4.33**.: _Let \(R=r_{0}+r_{1}\partial+\dots+r_{n-1}\partial^{n-1}\in\mathcal{E}(A_{1},A_{2})\)\((r_{0},\dots,r_{n-1}\in K)\); then the first row of the \(n\times n\) matrix \(\Phi(R)\) is \((r_{0},r_{1},\dots,r_{n-1})\)._
Proof.: Set \(P=\Phi(R)\); so \(\phi_{P}=\phi_{R}^{*}\). Let \(e:=1+K[\partial]A_{1}\in M_{1}\), and let \(e_{0}^{*},\dots,e_{n-1}^{*}\) be the basis of \(M_{N_{1}}=M_{1}^{*}\) dual to the basis \(e,\partial e,\dots,\partial^{n-1}e\) of \(M_{1}\). Likewise, let \(f:=1+K[\partial]A_{2}\in M_{2}\), and let \(f_{0}^{*},\dots,f_{n-1}^{*}\) be the basis of \(M_{N_{2}}=M_{2}^{*}\) dual to the basis \(f,\partial f,\dots,\partial^{n-1}f\) of \(M_{2}\). Then for \(j=0,\dots,n-1\) we have \(\phi_{P}(e_{j}^{*})\in M_{2}^{*}\), and
\[\left\langle\phi_{P}(e_{j}^{*}),f\right\rangle\ =\ \left\langle\phi_{R}^{*}(e_{j}^{*}),f \right\rangle\ =\ \left\langle e_{j}^{*},\phi_{R}(f)\right\rangle\ =\ r_{j}.\]
Hence the matrix \(P\) of the \(K\)-linear map \(\phi_{P}\) with respect to the standard bases of \(M_{N_{1}}=K^{n}\) and \(M_{N_{2}}=K^{n}\) has first row \((r_{0},r_{1},\dots,r_{n-1})\).
**Self-dual matrix differential equations.** Recall that \(N^{*}=-N^{\mathrm{t}}\) by [ADH, 5.5.6] and \(M_{N^{*}}\cong(M_{N})^{*}\) by [ADH, p. 279]. The _adjoint equation_ of \(y^{\prime}=Ny\) is the matrix differential equation \(y^{\prime}=N^{*}y\) over \(K\). We say that \(y^{\prime}=Ny\) is **self-dual** if it is equivalent to its adjoint equation [ADH, p. 277]. Hence \(y^{\prime}=Ny\) is self-dual iff the differential module \(M_{N}\) over \(K\) is self-dual. Thus if \(y^{\prime}=Ny\) is self-dual, then so is any matrix differential equation over \(K\) equivalent to \(y^{\prime}=Ny\), as is the adjoint equation \(y^{\prime}=N^{*}y\) of \(y^{\prime}=Ny\). By [ADH, 5.5.8, 5.5.9] we have:
**Corollary 2.4.34**.: _If \(C\neq K\), then every self-dual matrix differential equation \(y^{\prime}=N_{1}y\) over \(K\) is equivalent to a matrix differential equation \(y^{\prime}=N_{2}y\) with \(N_{2}\) the companion matrix of a monic self-dual operator in \(K[\partial]\)._
We set \(\mathrm{mult}_{\alpha}(N):=\mathrm{mult}_{\alpha}(M_{N})\) and call
\[\Sigma(N)\ :=\ \Sigma(M_{N})\ =\ \left\{\alpha:\mathrm{mult}_{\alpha}(N)\geqslant 1\right\}\]
the **spectrum** of \(y^{\prime}=Ny\). The elements of \(\Sigma(N)\) are the **eigenvalues** of \(y^{\prime}=Ny\).
**Lemma 2.4.35**.: _Suppose \(B\in K[\partial]\) is monic of order \(n\) and \(N\) is the companion matrix of \(B\). Then \(\mathrm{mult}_{\alpha}(B)=\mathrm{mult}_{\alpha}(N)\) for all \(\alpha\). In particular, \(\alpha\) is an eigenvalue of \(B\) iff \(\alpha\) is an eigenvalue of \(y^{\prime}=Ny\)._
Proof.: Use Corollary 2.3.15 and \(M_{N}\cong M^{*}\) for \(M:=K[\partial]/K[\partial]B\) [ADH, 5.5.8].
From Corollary 2.4.6 we obtain:
**Corollary 2.4.36**.: _Assume that \(y^{\prime}=Ny\) is self-dual. Suppose in addition that \(\sum_{\alpha}\mathrm{mult}_{\alpha}(N)=n\) and \(K\) is \(1\)-linearly surjective, or \(K\) is \((n-1)\)-linearly surjective. Then \(\mathrm{mult}_{\alpha}(N)=\mathrm{mult}_{-\alpha}(N)\) for all \(\alpha\). Hence, if also \(K^{\dagger}\) is \(2\)-divisible and \(\sum_{\alpha}\mathrm{mult}_{\alpha}(N)\) is odd, then \(0\in\Sigma(N)\)._
Note that
\[\mathcal{E}(N,N^{*})\ =\ \big{\{}P:P^{\prime}=N^{*}P-PN\big{\}},\qquad\mathcal{E}(N, N^{*})^{\times}\ =\ \mathrm{GL}_{n}(K)\cap\mathcal{E}(N,N^{*})\]
are both closed under matrix transpose. The matrix differential equation \(y^{\prime}=Ny\) is self-dual iff \(\mathcal{E}(N,N^{*})^{\times}\neq\emptyset\). Moreover, there is a \((-1)^{n}\)-symmetric non-degenerate \(\partial\)-compatible \(K\)-bilinear form on \(M_{N}\) iff \(\mathcal{E}(N,N^{*})^{\times}\) contains a matrix \(P\) such that \(P^{\mathrm{t}}=(-1)^{n}P\). One calls \(y^{\prime}=Ny\) **self-adjoint** if \(N^{*}=N\), that is, if \(N\) is skew-symmetric (in which case \(y^{\prime}=Ny\) is self-dual).
_Example 2.4.37_.: Suppose \(n=3m\) and
\[N=\begin{pmatrix}&\kappa I&\\ -\kappa I&&\tau I\\ &-\tau I&\end{pmatrix}\]
where \(I\) denotes the \(m\times m\) identity matrix and \(\kappa,\tau\in K\). Then \(y^{\prime}=Ny\) is self-adjoint. Let \(\pi\) be the permutation of \(\{1,\dots,n\}\) given for \(i=1,\dots,m\) by
\[\pi(i)\ =\ 3i-2,\quad\pi(m+i)\ =\ 3i-1,\quad\pi(2m+i)\ =\ 3i.\]
Then \(P\in\mathrm{GL}_{n}(K)\) with \(Pe_{j}=e_{\pi(j)}\) (\(j=1,\dots,n\)) gives \(P^{\prime}=0\in K^{n\times n}\), so
\[P^{-1}(N)\ =\ \mathrm{diag}(T,\dots,T)\in K^{n\times n}\quad\text{where }T:= \begin{pmatrix}0&\kappa&0\\ -\kappa&0&\tau\\ 0&-\tau&0\end{pmatrix}\in K^{3\times 3}.\]
By Corollary 2.3.11, \(\mathrm{mult}_{\alpha}(N)=m\,\mathrm{mult}_{\alpha}(T)\) for all \(\alpha\), so \(\Sigma(N)=\Sigma(T)\). If \(F\) is a fundamental matrix for \(y^{\prime}=Ty\), then \(G:=\mathrm{diag}(F,\dots,F)\in\mathrm{GL}_{n}(K)\) is a fundamental matrix for \(y^{\prime}=P^{-1}(N)y\), that is, \(G^{\prime}=P^{-1}NPG\), so \(PG\) is a fundamental matrix for \(y^{\prime}=Ny\). Suppose now that \(K\) is \(1\)-linearly surjective, \(K^{\dagger}\) is \(2\)-divisible, and \(\sum_{\alpha}\mathrm{mult}_{\alpha}(T)=3\). Then \(\sum_{\alpha}\mathrm{mult}_{\alpha}(N)=n\) and \(\mathrm{mult}_{\alpha}(T)=\mathrm{mult}_{-\alpha}(T)\) for all \(\alpha\), so \(\Sigma(N)=\Sigma(T)=\{\alpha,-\alpha,0\}\) for some \(\alpha\).
**Lemma 2.4.38**.: _Suppose \(y^{\prime}=Ny\) is self-adjoint and let \(y,z\in\mathrm{sol}(N)\), where \(y=(y_{1},\dots,y_{n})^{\mathrm{t}}\) and \(z=(z_{1},\dots,z_{n})^{\mathrm{t}}\). Then \(y_{1}z_{1}+\dots+y_{n}z_{n}\in C\)._
Proof.: With \(\langle\,\cdot\,,\,\cdot\,\rangle\) denoting the usual inner product on \(K^{n}\), we have
\[\langle y,z\rangle^{\prime}\ =\ \langle y^{\prime},z\rangle+\langle y,z^{\prime} \rangle\ =\ \langle Ny,z\rangle+\langle y,Nz\rangle\ =\ \langle y,N^{\mathrm{t}}z\rangle+\langle y,Nz\rangle\ =\ 0\]
since \(N^{\mathrm{t}}=-N\).
Thus if \(y^{\prime}=Ny\) is self-adjoint, then we have a symmetric bilinear form
\[(y,z)\mapsto\langle y,z\rangle=y_{1}z_{1}+\dots+y_{n}z_{n}\quad(y=(y_{1}, \dots,y_{n})^{\mathrm{t}},\ z=(z_{1},\dots,z_{n})^{\mathrm{t}})\]
on the \(C\)-linear subspace \(\mathrm{sol}(N)\) of \(K^{n}\) of dimension \(\leqslant n\).
A matrix \(F\in K^{n\times n}\) is said to be _orthogonal_ if \(FF^{\mathrm{t}}=I_{n}\), where \(I_{n}\) denotes the identity of the ring \(K^{n\times n}\) of \(n\times n\)-matrices over \(K\). This yields the subgroup \(\mathrm{O}_{n}(K)\) of \(\mathrm{GL}_{n}(K)\) consisting of the orthogonal matrices \(F\in K^{n\times n}\).
Suppose \(F\in\mathrm{GL}_{n}(K)\) is a fundamental matrix for \(y^{\prime}=Ny\). By [ADH, 5.5.12] this yields a fundamental matrix \((F^{\mathrm{t}})^{-1}\in\mathrm{GL}_{n}(K)\) for \(y^{\prime}=N^{*}y\), so if \(F\) is orthogonal, then \(y^{\prime}=Ny\) is self-adjoint. Conversely:
**Lemma 2.4.39**.: _Suppose \(y^{\prime}=Ny\) is self-adjoint, \(\mathrm{dim}_{C}\,\mathrm{sol}(N)=n\), and \(C^{\times}\) is \(2\)-divisible. Then \(\mathrm{GL}_{n}(K)\) contains an orthogonal fundamental matrix for \(y^{\prime}=Ny\)._
Proof.: Take a fundamental matrix \(F\in\operatorname{GL}_{n}(K)\) for \(y^{\prime}=Ny\). Then \(G:=(F^{\mathrm{t}})^{-1}\) is also a fundamental matrix for \(y^{\prime}=Ny\), so \(F^{\mathrm{t}}F=G^{-1}F\in\operatorname{GL}_{n}(C)\) by [ADH, 5.5.11]. Now the matrix \(F^{\mathrm{t}}F\) is symmetric, so [122, Chapter XV, Theorem 3.1] gives \(D,U\) in \(\operatorname{GL}_{n}(C)\) with diagonal \(D\) such that \(F^{\mathrm{t}}F=U^{\mathrm{t}}DU\). Let \(V:=\sqrt{D}U\) where \(\sqrt{D}\) in \(C^{n\times n}\) is diagonal with \((\sqrt{D})^{2}=D\). Then \(F^{\mathrm{t}}F=V^{\mathrm{t}}V\) and so \(FV^{-1}\in\operatorname{GL}_{n}(K)\) is an orthogonal fundamental matrix for \(y^{\prime}=Ny\) by [ADH, 5.5.11].
The skew-symmetric \(n\times n\) matrices over \(K\) form a Lie subalgebra
\[\mathfrak{so}_{n}(K)\ =\ \{N:\ N^{*}=N\}\]
of \(K^{n\times n}\) equipped with the Lie bracket \([N_{1},N_{2}]=N_{1}N_{2}-N_{2}N_{1}\). Suppose now \(n=2m\) is even, and set \(J:=\bigl(\begin{smallmatrix}&I_{m}\\ -I_{m}&\end{smallmatrix}\bigr)\). Then \(J^{\mathrm{t}}=J^{-1}=-J\), and
\[\mathfrak{sp}_{n}(K)\ =\ \{N:\ N^{*}J=JN\}\]
is also a Lie subalgebra of \(K^{n\times n}\). The matrices in \(\mathfrak{sp}_{n}(K)\) are called _hamiltonian_; thus \(N\) is hamiltonian iff \(JN\) is symmetric. We say that the matrix differential equation \(y^{\prime}=Ny\) is **hamiltonian** if \(N\in\mathfrak{sp}_{n}(K)\); in that case \(J\in\mathcal{E}(N,N^{*})^{\times}\), so \(y^{\prime}=Ny\) is self-dual. A matrix \(F\in K^{n\times n}\) is said to be _symplectic_ if \(F^{\mathrm{t}}JF=J\). The symplectic matrices form a subgroup \(\operatorname{Sp}_{n}(K)\) of \(\operatorname{GL}_{n}(K)\). If \(y^{\prime}=Ny\) has a fundamental matrix \(F\in\operatorname{Sp}_{n}(K)\), then \(y^{\prime}=Ny\) is hamiltonian. In analogy with Lemma 2.4.39 we have a converse:
**Lemma 2.4.40**.: _Suppose \(y^{\prime}=Ny\) is hamiltonian and \(\dim_{C}\operatorname{sol}(N)=n\). Then \(\operatorname{GL}_{n}(K)\) contains a symplectic fundamental matrix for \(y^{\prime}=Ny\)._
Proof.: Take a fundamental matrix \(F\in\operatorname{GL}_{n}(K)\) for \(y^{\prime}=Ny\). Then \(JF\) and \(G:=(F^{\mathrm{t}})^{-1}\) are fundamental matrices for \(y^{\prime}=N^{*}y\), so \(F^{\mathrm{t}}JF=G^{-1}JF\in\operatorname{GL}_{n}(C)\). Now \(F^{\mathrm{t}}JF\) is skew-symmetric, hence \(F^{\mathrm{t}}JF=U^{\mathrm{t}}JU\) with \(U\in\operatorname{GL}_{n}(C)\) [122, Chapter XV, Corollary 8.2]; then \(FU^{-1}\) is a symplectic fundamental matrix for \(y^{\prime}=Ny\).
For a hamiltonian analogue of Lemma 2.4.38, let \(\langle\,\cdot\,,\,\cdot\,\rangle\) denote the usual inner product on \(K^{n}\) and let \((y,z)\mapsto\omega(y,z):=\langle y,Jz\rangle\) be the standard symplectic bilinear form on \(K^{n}\).
**Lemma 2.4.41**.: _Suppose \(y^{\prime}=Ny\) is hamiltonian and \(y,z\in\operatorname{sol}(N)\). Then \(\omega(y,z)\in C\)._
Proof.: We have
\[\omega(y,z)^{\prime}\ =\ \langle y^{\prime},Jz\rangle+\langle y,Jz^{\prime} \rangle\ =\ \langle Ny,Jz\rangle+\langle y,JNz\rangle\ =\ \langle y,N^{\mathrm{t}}Jz\rangle+\langle y,JNz\rangle\ =\ 0\]
where we used \(-N^{\mathrm{t}}J=N^{*}J=JN\) for the last equality.
Note also that \(N\) is hamiltonian iff \(N=JN^{\mathrm{t}}J\). It follows that \(N\) is hamiltonian iff \(N=\left(\begin{smallmatrix}Q&R\\ P&Q^{*}\end{smallmatrix}\right)\) where \(P,R\in K^{m\times m}\) are symmetric and \(Q\in K^{m\times m}\).
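For \(m=1\) this says: \(N\in K^{2\times 2}\) is hamiltonian iff \(N=\left(\begin{smallmatrix}q&r\\ p&-q\end{smallmatrix}\right)\) with \(p,q,r\in K\), that is, iff \(\operatorname{tr}N=0\).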
Hamiltonian matrix differential equations appear naturally in the study of more general (non-linear) Hamiltonian systems (as the variational equations along an integral curve of such a system). See, e.g., [47]. They also arise from self-adjoint linear differential operators: using Lemma 2.4.19 one can show that if \(A\) is self-adjoint with companion matrix \(M\), then with \(n:=r\) there is some hamiltonian \(N\) such that \(\mathcal{E}(M,N)^{\times}\neq\emptyset\); see [52, p. 76].
**Anti-self-duality.** We now continue in the setting of the subsection _Complex conjugation_ in Section 2.3. Thus \(K=H[\mathrm{i}]\) where \(H\) is a differential subfield of \(K\), \(\mathrm{i}^{2}=-1\), and \(\mathrm{i}\notin H\). The isomorphisms below are of differential modules over \(K\). Let \(M\) be a differential module over \(K\). We establish some analogues of results above for the conjugate dual of \(M\) instead of its dual.
Call \(M\) **anti-self-dual** if \(M\cong\overline{M^{*}}\). If \(M\) is anti-self-dual, then so is every isomorphic \(K[\partial]\)-module, in particular, \(\overline{M^{*}}\). Here is an analogue of Corollary 2.4.6 which follows immediately from Corollary 2.3.26:
**Corollary 2.4.42**.: _Let \(\dim_{K}M=r\) and assume \(M\) is anti-self-dual. Suppose also that \(K\) is \(1\)-linearly surjective and \(\sum_{\alpha}\mathrm{mult}_{\alpha}(M)=r\), or \(r\geqslant 1\) and \(K\) is \((r-1)\)-linearly surjective. Then \(\mathrm{mult}_{\alpha}(M)=\mathrm{mult}_{-\overline{\alpha}}(M)\) for all \(\alpha\). Hence if additionally \(K^{\dagger}\) is \(2\)-divisible and \(\sum_{\alpha}\mathrm{mult}_{\alpha}(M)\) is odd, then \([bi]\in\Sigma(M)\) for some \(b\in H\)._
Suppose now \(M=K[\partial]/K[\partial]A\) and \(r\geqslant 1\). Then \(\overline{M^{*}}\cong K[\partial]/K[\partial]\overline{A^{*}}\) by [ADH, 5.5.8] and Example 2.3.25. Hence \(M\) is anti-self-dual iff \(A\), \(\overline{A^{*}}\) have the same type. We say that \(A\) is **anti-self-dual** if \(A\), \(\overline{A^{*}}\) have the same type. If \(A\) is anti-self-dual, then so are \(\overline{A}\) and \(A^{*}\), and so is every operator of the same type as \(A\). If \(A\) is anti-self-dual, then \(A\), \(\overline{A^{*}}\) have the same eigenvalues, with the same multiplicities. The previous corollary yields:
**Corollary 2.4.43**.: _Suppose \(A\) is anti-self-dual, and set \(s:=\sum_{\alpha}\mathrm{mult}_{\alpha}(A)\). Also assume \(K\) is \(1\)-linearly surjective and \(s=r\), or \(r\geqslant 1\) and \(K\) is \((r-1)\)-linearly surjective. Then \(\mathrm{mult}_{\alpha}(A)=\mathrm{mult}_{-\overline{\alpha}}(A)\) for all \(\alpha\). Hence if in addition \(K^{\dagger}\) is \(2\)-divisible and \(s\) is odd, then \([bi]\in\Sigma(A)\) for some \(b\in H\)._
Later \(H\) is usually a Hardy field with \(H^{\dagger}=H\), so \(\alpha=-\overline{\alpha}\) for all \(\alpha\). In this case Corollaries 2.4.42 and 2.4.43 are less useful than their cousins Corollaries 2.4.6 and 2.4.9. Note also that if \(A\in H[\partial]\), then \(A\) is self-dual iff \(A\) is anti-self-dual.
We now consider anti-self-duality for a matrix differential equation \(y^{\prime}=Ny\) over \(K\). Recall that \(N\) is an \(n\times n\)-matrix over \(K\) with \(n\geqslant 1\), and that if \(M=M_{N}\) is the differential module over \(K\) associated to \(N\), then \(\overline{M}\cong M_{\overline{N}}\) by the remarks preceding Example 2.3.25, and \(M^{*}\cong M_{N^{*}}\) by [ADH, pp. 279-280]. We say that \(y^{\prime}=Ny\) is **anti-self-dual** if it is equivalent to the matrix differential equation \(y^{\prime}=\overline{N^{*}}y\) over \(K\). (Note: \(\overline{N^{*}}=-\overline{N}^{\mathrm{t}}\).) Hence \(y^{\prime}=Ny\) is anti-self-dual iff \(M_{N}\) is anti-self-dual. If \(y^{\prime}=Ny\) is anti-self-dual, then so is any matrix differential equation over \(K\) equivalent to \(y^{\prime}=Ny\), as are the matrix differential equations \(y^{\prime}=N^{*}y\) and \(y^{\prime}=\overline{N}y\) over \(K\). If \(N\in H^{n\times n}\), then the matrix differential equation \(y^{\prime}=Ny\) over \(K\) is self-dual iff it is anti-self-dual.
**Corollary 2.4.44**.: _Suppose \(C\neq K\) and \(y^{\prime}=Ny\) is anti-self-dual. Then \(y^{\prime}=Ny\) is equivalent to a matrix differential equation \(y^{\prime}=A_{L}y\) with \(A_{L}\) the companion matrix of a monic anti-self-dual \(L\in K[\partial]\)._
Proof.: By [ADH, 5.5.9], \(y^{\prime}=Ny\) is equivalent to \(y^{\prime}=A_{L}y\) where \(A_{L}\) is the companion matrix of a monic \(L\in K[\partial]\). Then \(L\) is anti-self-dual by [ADH, 5.5.8] and Example 2.3.25.
We say that \(y^{\prime}=Ny\) is **anti-self-adjoint** if \(\overline{N^{*}}=N\), that is, \(N^{\mathrm{t}}=-\overline{N}\). Then \(y^{\prime}=Ny\) is in particular anti-self-dual. If \(N\in H^{n\times n}\), then \(y^{\prime}=Ny\) is anti-self-adjoint iff it is self-adjoint. To state an anti-self-adjoint analogue of Lemma 2.4.38 we use the "hermitian" inner product \(\langle\,\cdot\,,\,\cdot\,\rangle\) on \(K^{n}\) given by \(\langle y,z\rangle=y_{1}\overline{z}_{1}+\cdots+y_{n}\overline{z}_{n}\) for \(y=(y_{1},\ldots,y_{n})^{\mathrm{t}}\in K^{n}\) and \(z=(z_{1},\ldots,z_{n})^{\mathrm{t}}\in K^{n}\).
**Lemma 2.4.45**.: _If \(y^{\prime}=Ny\) is anti-self-adjoint and \(y,z\in\operatorname{sol}(N)\), then \(\langle y,z\rangle\in C\)._
Proof.: Assume \(y^{\prime}=Ny\) is anti-self-adjoint. Then \(\overline{N}^{\mathrm{t}}=-N\), so
\[\langle y,z\rangle^{\prime}\ =\ \langle y^{\prime},z\rangle+\langle y,z^{\prime}\rangle\ =\ \langle Ny,z\rangle+\langle y,Nz\rangle\ =\ \langle y,\overline{N}^{\mathrm{t}}z\rangle+\langle y,Nz\rangle\ =\ 0.\qed\]
Thus if \(y^{\prime}=Ny\) is anti-self-adjoint, then we have a hermitian form
\[(y,z)\mapsto\langle y,z\rangle\ =\ y_{1}\overline{z_{1}}+\dots+y_{n}\overline{z_{n}}\quad(y=(y_{1},\dots,y_{n})^{\mathrm{t}},\ z=(z_{1},\dots,z_{n})^{\mathrm{t}})\]
on the \(C\)-linear subspace \(\operatorname{sol}(N)\) of \(K^{n}\) of dimension \(\leqslant n\).
A matrix \(U\in K^{n\times n}\) is _unitary_ if \(U^{\mathrm{t}}\overline{U}=I_{n}\), equivalently, \(\langle Ux,Uy\rangle=\langle x,y\rangle\) for all \(x,y\in K^{n}\). The unitary matrices form a subgroup \(\operatorname{U}_{n}(K)\) of \(\operatorname{GL}_{n}(K)\). Suppose \(F\in\operatorname{GL}_{n}(K)\) is a fundamental matrix for \(y^{\prime}=Ny\). Then \(\overline{F}\) is a fundamental matrix for \(y^{\prime}=\overline{N}y\), and so \((\overline{F}^{\mathrm{t}})^{-1}\in\operatorname{GL}_{n}(K)\) is a fundamental matrix for \(y^{\prime}=\overline{N^{*}}y\). So if \(F\) is unitary, then \(y^{\prime}=Ny\) is anti-self-adjoint. Here is a converse, analogous to Lemma 2.4.39:
**Lemma 2.4.46**.: _Suppose \(y^{\prime}=Ny\) is anti-self-adjoint, \(\dim_{C}\operatorname{sol}(N)=n\), and \(H\) is real closed. Then \(\operatorname{GL}_{n}(K)\) contains a unitary fundamental matrix for \(y^{\prime}=Ny\)._
Proof.: Take a fundamental matrix \(F\in\operatorname{GL}_{n}(K)\) for \(y^{\prime}=Ny\). Then \(G:=(\overline{F}^{\mathrm{t}})^{-1}\) is also a fundamental matrix for \(y^{\prime}=Ny\), so \(\overline{F}^{\mathrm{t}}F=G^{-1}F\in\operatorname{GL}_{n}(C)\). Now \(P:=\overline{F}^{\mathrm{t}}F\) is hermitian (i.e., \(\overline{P}^{\mathrm{t}}=P\)), so [122, Chapter XV, §§5, 6] gives a diagonal \(D\in\operatorname{GL}_{n}(C_{H})\) and a \(U\in\operatorname{U}_{n}(C)\) with \(P=\overline{U}^{\mathrm{t}}DU\). So for \(x\in C^{n}\) and \(y:=U^{-1}x\in C^{n}\),
\[\langle Dx,x\rangle\ =\ \langle DUy,Uy\rangle\ =\ \langle Py,y\rangle\ =\ \langle Fy,Fy\rangle,\]
a sum of squares in \(H\). As \(C_{H}\) is also real closed, all entries of \(D\) are squares in \(C_{H}\). Take diagonal \(E\in C_{H}^{n\times n}\) with \(E^{2}=D\). Then \(V:=EU\in\operatorname{GL}_{n}(C)\) and \(P=\overline{V}^{\mathrm{t}}V\), so \(FV^{-1}\in\operatorname{GL}_{n}(K)\) is a unitary fundamental matrix for \(y^{\prime}=Ny\).
### Eigenvalues and Splittings
_In this section \(K\) is a differential field such that \(C\) is algebraically closed and \(K^{\dagger}\) is divisible._ We let \(A\), \(B\) range over \(K[\partial]\), and we assume \(A\neq 0\) and set \(r:=\operatorname{order}A\).
### Spectral decomposition of differential operators
Fix a complement \(\Lambda\) of the subspace \(K^{\dagger}\) of the \(\mathbb{Q}\)-linear space \(K\), let \(\operatorname{U}:=K\bigl{[}\operatorname{e}(\Lambda)\bigr{]}\) be the universal exponential extension of \(K\), let \(\Omega\) be the differential fraction field of the differential \(K\)-algebra \(\operatorname{U}\), and let \(\lambda\) range over \(\Lambda\). Then
\[A_{\lambda}\ =\ A_{\ltimes\operatorname{e}(\lambda)}\ =\ \operatorname{e}(-\lambda)A\operatorname{e}(\lambda)\in K[\partial].\]
Moreover, for every \(a\in K\) there is a unique \(\lambda\) with \(a-\lambda\in K^{\dagger}\), so \(\operatorname{mult}_{[a]}(A)=\operatorname{mult}_{\lambda}(A)\). Call \(\lambda\) an **eigenvalue** of \(A\) with respect to our complement \(\Lambda\) of \(K^{\dagger}\) in \(K\) if \([\lambda]\) is an eigenvalue of \(A\); thus the group isomorphism \(\lambda\mapsto[\lambda]\colon\Lambda\to K/K^{\dagger}\) maps the set of eigenvalues of \(A\) with respect to \(\Lambda\) onto the spectrum of \(A\). For \(f\in\operatorname{U}\) with spectral decomposition \((f_{\lambda})\) we have
\[A(f)\ =\ \sum_{\lambda}A_{\lambda}(f_{\lambda})\ \operatorname{e}(\lambda),\]
so \(A(\operatorname{U}^{\times})\subseteq\operatorname{U}^{\times}\cup\{0\}\). We call the family \((A_{\lambda})\) the **spectral decomposition** of \(A\) (with respect to \(\Lambda\)). Given a \(C\)-linear subspace \(V\) of \(\operatorname{U}\), we set \(V_{\lambda}:=V\cap K\operatorname{e}(\lambda)\), a \(C\)-linear subspace of \(V\); the sum \(\sum_{\lambda}V_{\lambda}\) is direct. For \(V:=\operatorname{U}\) we have \(\operatorname{U}_{\lambda}=K\operatorname{e}(\lambda)\),
and \(\operatorname{U}=\bigoplus_{\lambda}\operatorname{U}_{\lambda}\) with \(A(\operatorname{U}_{\lambda})\subseteq\operatorname{U}_{\lambda}\) for all \(\lambda\). Taking \(V:=\ker_{\operatorname{U}}A\), we obtain \(V_{\lambda}=(\ker_{K}A_{\lambda})\operatorname{e}(\lambda)\) and hence \(\dim_{C}V_{\lambda}=\operatorname{mult}_{\lambda}(A)\), and \(V=\bigoplus_{\lambda}V_{\lambda}\). Thus
\[|\Sigma(A)|\ \leqslant\ \sum_{\lambda}\operatorname{mult}_{\lambda}(A)\ =\ \dim_{C}\ker_{\operatorname{U}}A\ \leqslant\ r. \tag{2.5.1}\]
Moreover:
**Lemma 2.5.1**.: _The \(C\)-linear space \(\ker_{\operatorname{U}}A\) has a basis contained in \(\operatorname{U}^{\times}=K^{\times}\operatorname{e}(\Lambda)\)._
_Example_.: We have a \(C\)-algebra isomorphism \(P(Y)\mapsto P(\partial)\colon C[Y]\to C[\partial]\). Suppose \(A\in C[\partial]\subseteq K[\partial]\), let \(P(Y)\in C[Y]\), \(P(\partial)=A\), and let \(c_{1},\dots,c_{n}\in C\) be the distinct zeros of \(P\), of respective multiplicities \(m_{1},\dots,m_{n}\in\mathbb{N}^{\geqslant 1}\) (so \(r=\deg P=m_{1}+\cdots+m_{n}\)). Suppose also \(C\subseteq\Lambda\), and \(x\in K\) satisfies \(x^{\prime}=1\). (This holds in Example 2.2.4.) Then the \(x^{i}\operatorname{e}(c_{j})\in\operatorname{U}\) (\(1\leqslant j\leqslant n\), \(0\leqslant i<m_{j}\)) form a basis of the \(C\)-linear space \(\ker_{\operatorname{U}}A\) by [ADH, 5.1.18]. So the eigenvalues of \(A\) with respect to \(\Lambda\) are \(c_{1},\dots,c_{n}\), with respective multiplicities \(m_{1},\dots,m_{n}\).
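_Example_.: Let \(a\in K\) and \(A=\partial-a\). Then \(A_{\lambda}=\partial-(a-\lambda)\), so \(\operatorname{mult}_{\lambda}(A)=1\) iff \(a-\lambda\in K^{\dagger}\). Thus the unique eigenvalue of \(A\) with respect to \(\Lambda\) is the \(\lambda\) with \(a\equiv\lambda\bmod K^{\dagger}\), and then \(\ker_{\mathrm{U}}A=Cb\operatorname{e}(\lambda)\) for any \(b\in K^{\times}\) with \(b^{\dagger}=a-\lambda\).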
**Corollary 2.5.2**.: _Suppose \(\dim_{C}\ker_{\operatorname{U}}A=r\geqslant 1\) and \(A=\partial^{r}+a_{r-1}\partial^{r-1}+\cdots+a_{0}\) where \(a_{0},\dots,a_{r-1}\in K\). Then_
\[\sum_{\lambda}\operatorname{mult}_{\lambda}(A)\lambda\ \equiv\ -a_{r-1}\mod K^{\dagger}.\]
_In particular, \(\sum_{\lambda}\operatorname{mult}_{\lambda}(A)\lambda=0\) iff \(a_{r-1}\in K^{\dagger}\)._
Proof.: Take a basis \(y_{1},\dots,y_{r}\) of \(\ker_{\operatorname{U}}A\) with \(y_{j}=f_{j}\operatorname{e}(\lambda_{j})\), \(f_{j}\in K^{\times}\), \(\lambda_{j}\in\Lambda\). The Wronskian matrix \(\operatorname{Wr}(y_{1},\dots,y_{r})\) of \((y_{1},\dots,y_{r})\)[ADH, p. 206] equals
\[\operatorname{Wr}(y_{1},\dots,y_{r})\ =\ M\begin{pmatrix}\operatorname{e}(\lambda_{1})&&\\ &\ddots&\\ &&\operatorname{e}(\lambda_{r})\end{pmatrix}\qquad\text{where $M\in\operatorname{GL}_{r}(K)$.}\]
Then \(w:=\operatorname{wr}(y_{1},\dots,y_{r})=\det\operatorname{Wr}(y_{1},\dots,y_ {r})\neq 0\) by [ADH, 4.1.13] and
\[-a_{r-1}\ =\ w^{\dagger}\ =\ (\det M)^{\dagger}+\lambda_{1}+\cdots+\lambda_{r}\]
where we used [ADH, 4.1.17] for the first equality.
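For instance, for \(A=P(\partial)\in C[\partial]\) as in the example above, Corollary 2.5.2 is Vieta's formula: there \(\sum_{\lambda}\operatorname{mult}_{\lambda}(A)\lambda=m_{1}c_{1}+\dots+m_{n}c_{n}=-a_{r-1}\), even on the nose rather than just modulo \(K^{\dagger}\).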
If \(A\) splits over \(K\), then so does \(A_{\lambda}\). Moreover, if \(A_{\lambda}(K)=K\), then \(A(\operatorname{U}_{\lambda})=\operatorname{U}_{\lambda}\): for \(f,g\in K\) with \(A_{\lambda}(f)=g\) we have \(A\big{(}f\operatorname{e}(\lambda)\big{)}=g\operatorname{e}(\lambda)\). Thus:
**Lemma 2.5.3**.: _Suppose \(K\) is \(r\)-linearly surjective, or \(K\) is \(1\)-linearly surjective and \(A\) splits over \(K\). Then \(A(\operatorname{U}_{\lambda})=\operatorname{U}_{\lambda}\) for all \(\lambda\) and hence \(A(\operatorname{U})=\operatorname{U}\)._
In the next subsection we study the connection between splittings of \(A\) and bases of the \(C\)-linear space \(\ker_{\operatorname{U}}A\) in more detail.
### Constructing splittings and bases
Recall that \(\operatorname{order}A=r\in\mathbb{N}\). Set \(\operatorname{U}=\operatorname{U}_{K}\), so \(\operatorname{U}^{\times}=K^{\times}\operatorname{e}(\Lambda)\). Let \(y_{1},\dots,y_{r}\in\operatorname{U}^{\times}\). We construct a sequence \(A_{0},\dots,A_{n}\) of monic operators in \(K[\partial]\) with \(n\leqslant r\) as follows. First, set \(A_{0}:=1\). Next, given \(A_{0},\dots,A_{i-1}\) in \(K[\partial]^{\neq}\) (\(1\leqslant i\leqslant r\)), set \(f_{i}:=A_{i-1}(y_{i})\); if \(f_{i}\neq 0\), then \(f_{i}\in\operatorname{U}^{\times}\), so \(f_{i}^{\dagger}\in K\), and the next term in the sequence is
\[A_{i}\ :=\ (\partial-a_{i})A_{i-1},\qquad a_{i}\ :=\ f_{i}^{\dagger},\]
whereas if \(f_{i}=0\), then \(n:=i-1\) and the construction is finished. (If \(f_{i}\neq 0\) for all \(i=1,\dots,r\), then \(n:=r\).)
**Lemma 2.5.4**.: \(\ker_{\operatorname{U}}A_{i}=Cy_{1}\oplus\cdots\oplus Cy_{i}\)__(_internal direct sum_) _for \(i=0,\dots,n\)._
Proof.: By induction on \(i\leqslant n\). The case \(i=0\) being trivial, suppose \(1\leqslant i\leqslant n\) and the claim holds for \(i-1\) in place of \(i\). Then \(A_{i-1}(y_{i})=f_{i}\neq 0\), hence \(y_{i}\notin\ker_{\mathrm{U}}A_{i-1}=Cy_{1}\oplus\cdots\oplus Cy_{i-1}\), and \(A_{i}=(\partial-f_{i}^{\dagger})A_{i-1}\), so by [ADH, 5.1.14(i)] we have \(\ker_{\mathrm{U}}A_{i}=\ker_{\mathrm{U}}A_{i-1}\oplus Cy_{i}=Cy_{1}\oplus \cdots\oplus Cy_{i}\).
We denote the tuple \((a_{1},\ldots,a_{n})\in K^{n}\) just constructed by \(\operatorname{split}(y_{1},\ldots,y_{r})\), so \(A_{n}=(\partial-a_{n})\cdots(\partial-a_{1})\). Suppose \(r\geqslant 1\). Then \(n\geqslant 1\), \(a_{1}=y_{1}^{\dagger}\), \(A_{1}=\partial-a_{1}\), \(A_{1}(y_{2}),\ldots,A_{1}(y_{n})\in\mathrm{U}^{\times}\), and we have
\[(a_{2},\ldots,a_{n})\ =\ \operatorname{split}\bigl{(}A_{1}(y_{2}),\ldots,A_{ 1}(y_{n})\bigr{)}.\]
By Lemma 2.5.4, \(n\leqslant r\) is maximal such that \(y_{1},\ldots,y_{n}\) are \(C\)-linearly independent. In particular, \(y_{1},\ldots,y_{r}\) are \(C\)-linearly independent iff \(n=r\).
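_Example_.: Suppose \(x\in K\) satisfies \(x^{\prime}=1\) and take \((y_{1},y_{2},y_{3})=(1,x,x^{2})\), so \(r=3\). Then \(f_{1}=1\), \(a_{1}=0\), \(A_{1}=\partial\); next \(f_{2}=A_{1}(x)=1\), \(a_{2}=0\), \(A_{2}=\partial^{2}\); finally \(f_{3}=A_{2}(x^{2})=2\), \(a_{3}=0\), \(A_{3}=\partial^{3}\). Thus \(n=3\) and \(\operatorname{split}(1,x,x^{2})=(0,0,0)\), in accordance with Lemma 2.5.4 and \(C\oplus Cx\oplus Cx^{2}\subseteq\ker_{\mathrm{U}}\partial^{3}\).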
**Corollary 2.5.5**.: _If \(A(y_{i})=0\) for \(i=1,\ldots,n\), then \(A\in K[\partial]A_{n}\). Thus if \(n=r\) and \(A(y_{i})=0\) for \(i=1,\ldots,r\), then \(A=a(\partial-a_{r})\cdots(\partial-a_{1})\) where \(a\in K^{\times}\)._
This follows from [ADH, 5.1.15(i)] and Lemma 2.5.4.
Suppose that \(H\) is a differential subfield of \(K\) and \(y_{1}^{\dagger},\ldots,y_{r}^{\dagger}\in H\). Then we have \(\operatorname{split}(y_{1},\ldots,y_{r})\in H^{n}\): use that \(y^{\prime}\in Hy\) with \(y\in\mathrm{U}\) gives \(y^{(m)}\in Hy\) for all \(m\), so \(B(y)\in Hy\) for all \(B\in H[\partial]\), hence for such \(B\), if \(f:=B(y)\neq 0\), then \(f^{\dagger}\in H\).
**Corollary 2.5.6**.: _Suppose \(\dim_{C}\ker_{\mathrm{U}}A=r\). Then \(\ker_{\mathrm{U}}A=\ker_{\Omega}A\) and \(A\) splits over \(K\). If \(A=(\partial-a_{r})\cdots(\partial-a_{1})\), \(a_{1},\ldots,a_{r}\in K\), then the spectrum of \(A\) is \(\bigl{\{}[a_{1}],\ldots,[a_{r}]\bigr{\}}\), and for all \(\alpha\in K/K^{\dagger}\),_
\[\operatorname{mult}_{\alpha}(A)\ =\ \bigl{|}\bigl{\{}i\in\{1,\ldots,r\}:\ \alpha=[a_{i}] \bigr{\}}\bigr{|}.\]
Proof.: \(A\) splits over \(K\) by Lemma 2.5.1 and Corollary 2.5.5. The rest follows from Lemma 2.3.4 in view of \(\sum_{\lambda}\operatorname{mult}_{\lambda}(A)=\dim_{C}\ker_{\mathrm{U}}A\).
Conversely, we can associate to a given splitting of \(A\) over \(K\) a basis of \(\ker_{\mathrm{U}}A\) consisting of \(r\) elements of \(\mathrm{U}^{\times}\), provided \(K\) is \(1\)-linearly surjective when \(r\geqslant 2\):
**Lemma 2.5.7**.: _Assume \(K\) is \(1\)-linearly surjective in case \(r\geqslant 2\). Let_
\[A\ =\ (\partial-a_{r})\cdots(\partial-a_{1})\qquad\text{where $a_{i}=b_{i}^{ \dagger}+\lambda_{i}$, $b_{i}\in K^{\times}$, $\lambda_{i}\in\Lambda\ (i=1,\ldots,r)$.}\]
_Then there are \(C\)-linearly independent \(y_{1},\ldots,y_{r}\in\ker_{\mathrm{U}}A\) with \(y_{i}\in K^{\times}\operatorname{e}(\lambda_{i})\) for \(i=1,\ldots,r\) and \(\operatorname{split}(y_{1},\ldots,y_{r})=(a_{1},\ldots,a_{r})\)._
Proof.: By induction on \(r\). The case \(r=0\) is trivial, and for \(r=1\) we can take \(y_{1}=b_{1}\operatorname{e}(\lambda_{1})\). Let \(r\geqslant 2\) and suppose inductively that for
\[B\ :=\ (\partial-a_{r})\cdots(\partial-a_{2})\]
we have \(C\)-linearly independent \(z_{2},\ldots,z_{r}\in\ker_{\mathrm{U}}B\) with \(z_{i}\in K^{\times}\operatorname{e}(\lambda_{i})\) for \(i=2,\ldots,r\) and \(\operatorname{split}(z_{2},\ldots,z_{r})=(a_{2},\ldots,a_{r})\). For \(i=2,\ldots,r\), Lemma 2.5.3 gives \(y_{i}\in K^{\times}\operatorname{e}(\lambda_{i})\) with \((\partial-a_{1})(y_{i})=z_{i}\). Set \(y_{1}:=b_{1}\operatorname{e}(\lambda_{1})\), so \(\ker_{\mathrm{U}}(\partial-a_{1})=\ Cy_{1}\). Then \(y_{1},\ldots,y_{r}\in\ker_{\mathrm{U}}A\) are \(C\)-linearly independent with \(y_{i}\in K^{\times}\operatorname{e}(\lambda_{i})\) for \(i=1,\ldots,r\), and one verifies easily that \(\operatorname{split}(y_{1},\ldots,y_{r})=(a_{1},\ldots,a_{r})\).
**Corollary 2.5.8**.: _Assume \(K\) is \(1\)-linearly surjective when \(r\geqslant 2\). Then_
\[A\text{ splits over $K$}\iff\dim_{C}\ker_{\mathrm{U}}A\ =\ r.\]
_Remark_.: If \(\dim_{C}\ker_{\mathrm{U}}A=r\) and \(\lambda_{1},\ldots,\lambda_{d}\) are the eigenvalues of \(A\) with respect to \(\Lambda\), then the differential subring \(K\big{[}\,\mathrm{e}(\lambda_{1}),\mathrm{e}(-\lambda_{1}),\ldots,\mathrm{e}( \lambda_{d}),\mathrm{e}(-\lambda_{d})\big{]}\) of \(\mathrm{U}\) is the Picard-Vessiot ring for \(A\) over \(K\); see [158, Section 1.3]. If \(K\) is linearly closed and linearly surjective, then \(\mathrm{U}\) is by Corollary 2.5.8 the universal Picard-Vessiot ring of the differential field \(K\) as defined in [158, Chapter 10]. Our construction of \(\mathrm{U}\) above is modeled on the description of the universal Picard-Vessiot ring of the algebraic closure of \(C(\!(t)\!)\) given in [158, Chapter 3].
Recalling our convention that \(r=\operatorname{order}A\), here is a complement to Lemma 2.5.1:
**Corollary 2.5.9**.: _Let \(V\) be a \(C\)-linear subspace of \(\mathrm{U}\) with \(r=\dim_{C}V\). Then there is at most one monic \(A\) with \(V=\ker_{\mathrm{U}}A\). Moreover, the following are equivalent:_
1. \(V=\ker_{\mathrm{U}}A\) _for some monic_ \(A\) _that splits over_ \(K\)_;_
2. \(V=\ker_{\mathrm{U}}B\) _for some_ \(B\neq 0\)_;_
3. \(V=\sum_{\lambda}V_{\lambda}\)_;_
4. \(V\) _has a basis contained in_ \(\mathrm{U}^{\times}\)_._
Proof.: The first claim follows from [ADH, 5.1.15] applied to the differential fraction field of \(\mathrm{U}\) in place of \(K\). The implication (i) \(\Rightarrow\) (ii) is clear, (ii) \(\Rightarrow\) (iii) was noted before Lemma 2.5.1, and (iii) \(\Rightarrow\) (iv) is obvious. For (iv) \(\Rightarrow\) (i), let \(y_{1},\ldots,y_{r}\in\mathrm{U}^{\times}\) be a basis of \(V\). Then \(\operatorname{split}(y_{1},\ldots,y_{r})=(a_{1},\ldots,a_{r})\in K^{r}\), so \(V=\ker_{\mathrm{U}}A\) for \(A=(\partial-a_{r})\cdots(\partial-a_{1})\) by Lemma 2.5.4, so (i) holds.
Let \(y_{1},\ldots,y_{r}\in\mathrm{U}^{\times}\) and \((a_{1},\ldots,a_{n}):=\operatorname{split}(y_{1},\ldots,y_{r})\). We finish this subsection with some remarks about \((a_{1},\ldots,a_{n})\). Let \(A_{0},\ldots,A_{n}\in K[\partial]\) be as above and recall that \(n\leqslant r\) is maximal such that \(y_{1},\ldots,y_{n}\) are \(C\)-linearly independent.
**Lemma 2.5.10**.: _Assume \(n=r\). Let \(z_{1},\ldots,z_{r}\in\mathrm{U}^{\times}\). The following are equivalent:_
1. \(z_{1},\ldots,z_{r}\) _are_ \(C\)_-linearly independent and_ \((a_{1},\ldots,a_{r})=\operatorname{split}(z_{1},\ldots,z_{r})\)_;_
2. _for_ \(i=1,\ldots,r\) _there are_ \(c_{ii},c_{i,i-1},\ldots,c_{i1}\in C\) _such that_ \[z_{i}\ =\ c_{ii}y_{i}+c_{i,i-1}y_{i-1}+\cdots+c_{i1}y_{1}\text{ and }c_{ii}\neq 0.\]
Proof.: The case \(r=0\) is trivial. Let \(r=1\). If (i) holds, then \(y_{1}^{\dagger}=a_{1}=z_{1}^{\dagger}\), hence \(z_{1}\in C^{\times}\,y_{1}\), so (ii) holds. The converse is obvious. Let \(r\geqslant 2\), and assume (i) holds. Put \(\widetilde{y}_{i}:=A_{1}(y_{i})\) and \(\widetilde{z}_{i}:=A_{1}(z_{i})\) for \(i=2,\ldots,r\). Then
\[\operatorname{split}(\widetilde{y}_{2},\ldots,\widetilde{y}_{r})\ =\ (a_{2},\ldots,a_{r})\ = \ \operatorname{split}(\widetilde{z}_{2},\ldots,\widetilde{z}_{r}),\]
so we can assume inductively to have \(c_{ij}\in C\ (2\leqslant j\leqslant i\leqslant r)\) with
\[\widetilde{z}_{i}\ =\ c_{ii}\widetilde{y}_{i}+c_{i,i-1}\widetilde{y}_{i-1}+ \cdots+c_{i2}\widetilde{y}_{2}\quad\text{and}\quad c_{ii}\neq 0\qquad(2 \leqslant i\leqslant r).\]
Hence for \(2\leqslant i\leqslant r\),
\[z_{i}\in c_{ii}y_{i}+c_{i,i-1}y_{i-1}+\cdots+c_{i2}y_{2}+\ker_{\mathrm{U}}A_ {1}.\]
Now use \(\ker_{\mathrm{U}}A_{1}=Cy_{1}\) to conclude (ii). For the converse, let \(c_{ij}\in C\) be as in (ii). Then clearly \(z_{1},\ldots,z_{r}\) are \(C\)-linearly independent. Let \((b_{1},\ldots,b_{r}):=\operatorname{split}(z_{1},\ldots,z_{r})\) and \(B_{r-1}:=(\partial-b_{r-1})\cdots(\partial-b_{1})\). Then \(a_{r}=f_{r}^{\dagger}\) where \(f_{r}=A_{r-1}(y_{r})\neq 0\), and \(b_{r}=g_{r}^{\dagger}\) where \(g_{r}:=B_{r-1}(z_{r})\neq 0\). Now inductively we have \(a_{j}=b_{j}\) for \(j=1,\ldots,r-1\), so \(A_{r-1}=B_{r-1}\), and \(A_{r-1}(y_{i})=0\) for \(i=1,\ldots,r-1\) by Lemma 2.5.4. Hence \(g_{r}=c_{rr}f_{r}\), and thus \(a_{r}=b_{r}\).
**Lemma 2.5.11**.: _Let \(z\in\mathrm{U}^{\times}\). Then \(\operatorname{split}(y_{1}z,\ldots,y_{r}z)=(a_{1}+z^{\dagger},\ldots,a_{n}+ z^{\dagger})\)._
Proof.: Since for \(m\leqslant r\), the units \(y_{1}z,\ldots,y_{m}z\) of \(\mathrm{U}\) are \(C\)-linearly independent iff \(y_{1},\ldots,y_{m}\) are \(C\)-linearly independent, we see that the tuples \(\operatorname{split}(y_{1}z,\ldots,y_{r}z)\) and \(\operatorname{split}(y_{1},\ldots,y_{r})\) have the same length \(n\). Let \((b_{1},\ldots,b_{n}):=\operatorname{split}(y_{1}z,\ldots,y_{r}z)\); we show \((b_{1},\ldots,b_{n})=(a_{1}+z^{\dagger},\ldots,a_{n}+z^{\dagger})\) by induction on \(n\). The case \(n=0\) is obvious, so suppose \(n\geqslant 1\). Then \(a_{1}=y_{1}^{\dagger}\) and \(b_{1}=(y_{1}z)^{\dagger}=a_{1}+z^{\dagger}\) as required. By remarks following the proof of Lemma 2.5.4 we have
\[(a_{2},\ldots,a_{n})\ =\ \operatorname{split}\bigl{(}A_{1}(y_{2}),\ldots,A_{ 1}(y_{n})\bigr{)}\qquad\text{where $A_{1}:=\partial-a_{1}$.}\]
Now \(B_{1}:=\partial-b_{1}=(A_{1})_{\times z^{-1}}\), so likewise
\[(b_{2},\ldots,b_{n})\ =\ \operatorname{split}\bigl{(}B_{1}(y_{2}z),\ldots,B_{ 1}(y_{n}z)\bigr{)}\ =\ \operatorname{split}\bigl{(}A_{1}(y_{2})z,\ldots,A_{1}(y_{n})z\bigr{)}.\]
Hence \(b_{2}=a_{2}+z^{\dagger},\ldots,b_{n}=a_{n}+z^{\dagger}\) by our inductive hypothesis.
For \(f\in\partial K\) we let \(\int f\) denote an element of \(K\) such that \((\int f)^{\prime}=f\).
**Lemma 2.5.12**.: _Let \(g_{1},\ldots,g_{r}\in K^{\times}\) and_
\[A\ =\ g_{1}\cdots g_{r}(\partial g_{r}^{-1})(\partial g_{r-1}^{-1})\cdots( \partial g_{1}^{-1}),\]
_and suppose the integrals below can be chosen such that_
\[y_{1}\ =\ g_{1},\quad y_{2}\ =\ g_{1}\int g_{2},\quad\ldots,\quad y_{r}\ =\ g_{1}\int(g_{2}\int g_{3}(\cdots(g_{r-1}\int g_{r})\cdots)).\]
_Then \(y_{1},\ldots,y_{r}\in K^{\times}\), \(n=r\), and \(a_{i}=(g_{1}\cdots g_{i})^{\dagger}\) for \(i=1,\ldots,r\)._
Proof.: Let \(b_{i}:=(g_{1}\cdots g_{i})^{\dagger}\) for \(i=1,\ldots,r\). By induction on \(i=0,\ldots,r\) we show \(n\geqslant i\) and \((a_{1},\ldots,a_{i})=(b_{1},\ldots,b_{i})\). This is clear for \(i=0\), so suppose \(i\in\{1,\ldots,r\}\), \(n\geqslant i-1\), and \((a_{1},\ldots,a_{i-1})=(b_{1},\ldots,b_{i-1})\). Then
\[A_{i-1}=(\partial-a_{i-1})\cdots(\partial-a_{1})=(\partial-b_{i-1})\cdots( \partial-b_{1})=g_{1}\cdots g_{i-1}(\partial g_{i-1}^{-1})\cdots(\partial g_ {1}^{-1}),\]
using Lemma 1.1.3 for the last equality. So \(A_{i-1}(y_{i})=g_{1}\cdots g_{i}\neq 0\), and thus \(n\geqslant i\) and \(a_{i}=A_{i-1}(y_{i})^{\dagger}=b_{i}\).
### Splittings and derivatives \((^{*})\)
The material in this subsection is only needed for the proof of Lemma 7.5.29, and not for the proof of our main theorem. _In this subsection \(A\) is monic and \(a_{0}:=A(1)\neq 0\)_. Let \(A^{\partial}\) be the unique element of \(K[\partial]\) such that \(A^{\partial}\partial=\partial A-a_{0}^{\dagger}A\). Then \(A^{\partial}\) is monic of order \(r\), and if \(A\in H[\partial]\) for some differential subfield \(H\) of \(K\), then also \(A^{\partial}\in H[\partial]\).
_Examples_.: If \(\operatorname{order}A=0\) then \(A^{\partial}=1\), and if \(\operatorname{order}A=1\) then \(A^{\partial}=\partial+(a_{0}-a_{0}^{\dagger})\). Next, suppose \(A=\partial^{2}+a_{1}\partial+a_{0}\ (a_{0},a_{1}\in K)\); then
\[A^{\partial}\ =\ \partial^{2}+(a_{1}-a_{0}^{\dagger})\partial+(a_{1}^{\prime}+a_{0 }-a_{1}a_{0}^{\dagger}).\]
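These identities are routine to verify from the defining equation \(A^{\partial}\partial=\partial A-a_{0}^{\dagger}A\); for the order \(2\) case, using \(a_{0}^{\dagger}a_{0}=a_{0}^{\prime}\):
\[\partial A-a_{0}^{\dagger}A\ =\ \partial^{3}+(a_{1}-a_{0}^{\dagger})\partial^{2}+(a_{1}^{\prime}+a_{0}-a_{1}a_{0}^{\dagger})\partial+(a_{0}^{\prime}-a_{0}^{\dagger}a_{0})\ =\ \bigl(\partial^{2}+(a_{1}-a_{0}^{\dagger})\partial+(a_{1}^{\prime}+a_{0}-a_{1}a_{0}^{\dagger})\bigr)\partial.\]
As a concrete instance of the order \(1\) case, suppose for the moment that \(x\in K\) with \(x^{\prime}=1\), \(x\neq 0\), and \(A=\partial-x^{-1}\); then \(a_{0}=-x^{-1}\) and \(a_{0}^{\dagger}=-x^{-1}\), so \(A^{\partial}=\partial\), in agreement with \(A(x)=0\) and \(A^{\partial}(x^{\prime})=\partial(1)=0\).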
If \(A(y)=0\) with \(y\) in a differential ring extension of \(K\), then \(A^{\partial}(y^{\prime})=0\). Also:
**Lemma 2.5.13**.: _Let \(R\) be a differential integral domain extending \(K\). Suppose the differential fraction field of \(R\) has constant field \(C\), and \(\dim_{C}\ker_{R}A=r\). Then \(\ker_{R}A^{\partial}=\{y^{\prime}:y\in\ker_{R}A\}\) and \(\dim_{C}\ker_{R}A^{\partial}=r\)._
Proof.: Let \(y_{1},\ldots,y_{r}\) be a basis of the \(C\)-linear space \(\ker_{R}A\), and assume towards a contradiction that \(c_{1}y_{1}^{\prime}+\cdots+c_{r}y_{r}^{\prime}=0\) with \(c_{1},\ldots,c_{r}\in C\) not all zero. Then \(y:=c_{1}y_{1}+\cdots+c_{r}y_{r}\in\ker_{R}^{\neq}A\) and \(y^{\prime}=0\). Hence \(a_{0}y=A(y)=0\) and thus \(a_{0}=0\), a contradiction. Thus \(y_{1}^{\prime},\ldots,y_{r}^{\prime}\) are \(C\)-linearly independent; they lie in \(\ker_{R}A^{\partial}\) by the remark preceding the lemma, and \(\dim_{C}\ker_{R}A^{\partial}\leqslant r\) since \(A^{\partial}\) has order \(r\) and the differential fraction field of \(R\) has constant field \(C\). Hence \(\ker_{R}A^{\partial}=\{y^{\prime}:y\in\ker_{R}A\}\) and \(\dim_{C}\ker_{R}A^{\partial}=r\).
Let \(\mathrm{U}=\mathrm{U}_{K}\) and \(f_{1}\,\mathrm{e}(\lambda_{1}),\ldots,f_{r}\,\mathrm{e}(\lambda_{r})\in\mathrm{U} ^{\times}\) be a basis of the \(C\)-linear space \(\ker_{\mathrm{U}}A\), where \(f_{j}\in K^{\times}\) and \(\lambda_{j}\in\Lambda\) for \(j=1,\ldots,r\). Then by Lemma 2.5.13,
\[(f_{1}^{\prime}+\lambda_{1}f_{1})\,\mathrm{e}(\lambda_{1}),\ldots,(f_{r}^{ \prime}+\lambda_{r}f_{r})\,\mathrm{e}(\lambda_{r})\in\mathrm{U}^{\times}\]
is a basis of the \(C\)-linear space \(\ker_{\mathrm{U}}A^{\partial}\). Hence by Corollary 2.5.6:
**Corollary 2.5.14**.: _Suppose \(\dim_{C}\ker_{\mathrm{U}}A=r\). Then \(\mathrm{mult}_{\alpha}(A)=\mathrm{mult}_{\alpha}(A^{\partial})\) for all \(\alpha\in K/K^{\dagger}\), so \(\Sigma(A)=\Sigma(A^{\partial})\), and both \(A\), \(A^{\partial}\) split over \(K\)._
Suppose now that \(K\) is \(1\)-linearly surjective when \(r\geqslant 2\), and \(A\) splits over \(K\). Then \(A^{\partial}\) splits over \(K\) by Corollaries 2.5.8 and 2.5.14.
### Splitting and adjoints \((^{*})\)
In this subsection \(y_{1},\ldots,y_{r}\in\mathrm{U}^{\times}\),
\[(a_{1},\ldots,a_{r})\ =\ \mathrm{split}(y_{1},\ldots,y_{r}),\qquad A\ =\ ( \partial-a_{r})\cdots(\partial-a_{1}).\]
So \(y_{1},\ldots,y_{r}\) is a basis of the \(C\)-linear space \(V:=\ker_{\mathrm{U}}A=\ker_{\Omega}A\),
\[A^{*}\ =\ (-1)^{r}(\partial+a_{1})\cdots(\partial+a_{r}),\]
and \(\dim_{C}W=r\) for the \(C\)-linear space \(W:=\ker_{\mathrm{U}}A^{*}=\ker_{\Omega}A^{*}\) by Lemma 2.3.21. (Recall here that \(\Omega\) denotes the differential fraction field of \(\mathrm{U}\).) Proposition 2.4.13 with \(\Omega\) instead of \(K\) yields the \(C\)-bilinear map \([\,,\,]_{A}\colon\Omega\times\Omega\to\Omega\), which restricts to a perfect pairing \(V\times W\to C\) by Corollary 2.4.16. We let \(j,k\) range over \(\{1,\ldots,r\}\) and take \(\lambda_{j}\in\Lambda\) such that \(y_{j}^{\dagger}\equiv\lambda_{j}\bmod K^{\dagger}\), so \(y_{j}\in K^{\times}\operatorname{e}(\lambda_{j})\).
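For \(r=1\) everything here is transparent: \(A=\partial-a_{1}\), \(A^{*}=-(\partial+a_{1})\), and the formula of Lemma 2.4.17 (displayed in the proof of Lemma 2.5.15 below) reduces to \([y,z]_{A}=yz\). If \(y^{\dagger}=a_{1}\) and \(z^{\dagger}=-a_{1}\), then \((yz)^{\prime}=(y^{\dagger}+z^{\dagger})yz=0\), so \([y,z]_{A}=yz\in C^{\times}\): a perfect pairing \(V\times W\to C\) in the simplest case.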
**Lemma 2.5.15**.: _Suppose \(z_{1},\ldots,z_{r}\in\mathrm{U}^{\times}\) are \(C\)-linearly independent such that_
\[\mathrm{split}(z_{r},\ldots,z_{1})\ =\ (-a_{r},\ldots,-a_{1}).\]
_Then \(z_{1},\ldots,z_{r}\) is a basis of the \(C\)-linear space \(W\), \([y_{j},z_{k}]_{A}=0\) if \(j<k\) and \([y_{k},z_{k}]_{A}\neq 0\). Moreover, \(z_{k}\in K^{\times}\,\mathrm{e}(-\lambda_{k})\), and if \([y_{j},z_{k}]_{A}\neq 0\), then \(\lambda_{j}=\lambda_{k}\)._
Proof.: Let \(i\) range over \(\{0,\ldots,r\}\). As in (2.4.6), set
\[A_{i}\ :=\ (\partial-a_{i})\cdots(\partial-a_{1}),\qquad B_{i}\ :=\ (-1)^{r-i}( \partial+a_{i+1})\cdots(\partial+a_{r}).\]
Then by Lemma 2.5.4 we have
\[\ker_{\mathrm{U}}A_{i}\ =\ Cy_{1}\oplus\cdots\oplus Cy_{i},\qquad\ker_{ \mathrm{U}}B_{i}\ =\ Cz_{r}\oplus\cdots\oplus Cz_{i+1}\]
and thus
\[A_{i}(y_{j})\ =\ 0\ \ \mathrm{if}\ i\geqslant j,\qquad B_{i}(z_{k})\ =\ 0\ \ \mathrm{if}\ i+1 \leqslant k.\]
Then Lemma 2.4.17 yields
\[[y_{j},z_{k}]_{A}\ =\ \sum_{i<r}A_{i}(y_{j})B_{i+1}(z_{k})\ =\ \sum_{k-2<i<j}A_{i}(y_{j})B_{i+1}(z_{k}),\]
so \([y_{j},z_{k}]_{A}=0\) whenever \(j<k\). Moreover,
\[[y_{k},z_{k}]_{A}\ =\ A_{k-1}(y_{k})B_{k}(z_{k})\ \neq\ 0.\]
Take \(\mu_{k}\in\Lambda\) with \(z_{k}^{\dagger}\equiv\mu_{k}\bmod K^{\dagger}\). Then \(y_{j}\in K\,\mathrm{e}(\lambda_{j})\) and \(z_{k}\in K\,\mathrm{e}(\mu_{k})\), so \([y_{j},z_{k}]_{A}\in C\cap K\,\mathrm{e}(\lambda_{j}+\mu_{k})\) by (2.4.3). Hence, if \([y_{j},z_{k}]_{A}\neq 0\), then \(\lambda_{j}+\mu_{k}=0\). In particular, \(\mu_{k}=-\lambda_{k}\) and so \(z_{k}\in K^{\times}\,\mathrm{e}(-\lambda_{k})\).
**Corollary 2.5.16**.: _Assume \(K\) is \(1\)-linearly surjective if \(r\geqslant 2\). Then there is a basis \(y_{1}^{*},\ldots,y_{r}^{*}\) of the \(C\)-linear space \(W\) such that \([y_{j},y_{k}^{*}]_{A}=\delta_{jk}\) for all \(j,k\), and_
\[y_{j}^{*}\in K^{\times}\,\mathrm{e}(-\lambda_{j})\ \text{for all}\ j,\qquad \mathrm{split}(y_{r}^{*},\ldots,y_{1}^{*})\ =\ (-a_{r},\ldots,-a_{1}).\]
Proof.: Lemma 2.5.7 gives \(C\)-linearly independent \(z_{1},\ldots,z_{r}\in\mathrm{U}^{\times}\) such that
\[\mathrm{split}(z_{r},\ldots,z_{1})\ =\ (-a_{r},\ldots,-a_{1}).\]
Lemma 2.5.15 gives constants \(c_{k}\in C^{\times}\) such that \([y_{j},c_{k}z_{k}]_{A}=\delta_{jk}\) for \(j\leqslant k\). We now set \(y_{r}^{*}:=c_{r}z_{r}\), so \([y_{j},y_{r}^{*}]_{A}=\delta_{jr}\) for all \(j\) and \(y_{r}^{*}\in K^{\times}\,\mathrm{e}(-\lambda_{r})\). Let \(1<k\leqslant r\) and assume inductively that we have \(y_{k}^{*},\ldots,y_{r}^{*}\in W\) such that for \(i=k,\ldots,r\) we have \(y_{i}^{*}\in Cz_{i}+\cdots+Cz_{r}\), \([y_{j},y_{i}^{*}]_{A}=\delta_{ji}\) for all \(j\), and \(y_{i}^{*}\in K^{\times}\,\mathrm{e}(-\lambda_{i})\). Then for
\[y_{k-1}^{*}\ :=\ c_{k-1}z_{k-1}-\sum_{i=k}^{r}[y_{i},c_{k-1}z_{k-1}]_{A}y_{i}^{*}\]
we have
\[y_{k-1}^{*}\in Cz_{k-1}+Cz_{k}+\cdots+Cz_{r},\qquad[y_{j},y_{k-1}^{*}]_{A}= \delta_{j,k-1}\text{ for all }j.\]
If \(k\leqslant i\leqslant r\) and \([y_{i},c_{k-1}z_{k-1}]_{A}\neq 0\), then \(\lambda_{i}=\lambda_{k-1}\) by the last part of Lemma 2.5.15, so \(y_{k-1}^{*}\in K^{\times}\,\mathrm{e}(-\lambda_{k-1})\) by the inductive assumption and \(z_{k-1}\in K\,\mathrm{e}(-\lambda_{k-1})\).
This recursive construction yields a basis \(y_{1}^{*},\ldots,y_{r}^{*}\) of the \(C\)-linear space \(W\) such that \([y_{j},y_{k}^{*}]_{A}=\delta_{jk}\) for all \(j\), \(k\), and \(y_{i}^{*}\in K^{\times}\,\mathrm{e}(-\lambda_{i})\) for \(i=1,\ldots,r\). It now follows from Lemma 2.5.10 that \(\mathrm{split}(y_{r}^{*},\ldots,y_{1}^{*})=(-a_{r},\ldots,-a_{1})\).
Lemma 2.5.15 also yields:
**Corollary 2.5.17**.: _If \((a_{1},\ldots,a_{r})=(-a_{r},\ldots,-a_{1})\), then \([y_{j},y_{r+1-k}]_{A}=0\) for all \(j<k\), and \([y_{k},y_{r+1-k}]_{A}\neq 0\) for all \(k\)._
**The case of real operators.** We now continue the subsection _The real case_ of Section 2.2. Thus \(K=H[\mathrm{i}]\) where \(H\) is a real closed differential subfield of \(K\) and \(\mathrm{i}^{2}=-1\), and \(\Lambda=\Lambda_{\mathrm{r}}+\Lambda_{\mathrm{i}}\mathrm{i}\) where \(\Lambda_{\mathrm{r}}\), \(\Lambda_{\mathrm{i}}\) are subspaces of the \(\mathbb{Q}\)-linear space \(H\). The complex conjugation automorphism \(z\mapsto\overline{z}\) of the differential field \(K\) extends uniquely to an automorphism \(B\mapsto\overline{B}\) of the ring \(K[\partial]\) with \(\overline{\partial}=\partial\). We have \(\overline{A(f)}=\overline{A}(\overline{f})\) for \(f\in\mathrm{U}\), from which it follows that \(\dim_{C}\ker_{K}A=\dim_{C}\ker_{K}\overline{A}\), \((\overline{A})_{\lambda}=\overline{(A_{\overline{\lambda}})}\), \(\mathrm{mult}_{\lambda}\,\overline{A}=\mathrm{mult}_{\overline{\lambda}}\,A\), and \(f\mapsto\overline{f}\colon\mathrm{U}\to\mathrm{U}\) restricts to a \(C_{H}\)-linear bijection \(\ker_{\mathrm{U}}A\to\ker_{\mathrm{U}}\overline{A}\).
_In the rest of this subsection we assume \(H=H^{\dagger}\) (so \(\Lambda=\Lambda_{\mathrm{i}}\mathrm{i}\)) and \(A\in H[\partial]\) (and by earlier conventions, \(A\neq 0\) and \(r:=\operatorname{order}A\)). Then \(A=\overline{A}\), hence for all \(\lambda\) we have \(A_{\overline{\lambda}}=\overline{A_{\lambda}}\) and \(\mathrm{mult}_{\lambda}\,A=\mathrm{mult}_{\overline{\lambda}}\,A\). Thus with \(\mu\) ranging over \(\Lambda_{\mathrm{i}}^{>}\):_
\[\sum_{\lambda}\mathrm{mult}_{\lambda}(A)\ =\ \mathrm{mult}_{0}(A)+2\sum_{\mu} \mathrm{mult}_{\mu\mathrm{i}}(A).\]
_Note that \(0\) is an eigenvalue of \(A\) iff \(\ker_{H}A\neq\{0\}\)._
Let \(V:=\ker_{\mathrm{U}}A\), a subspace of the \(C\)-linear space \(\mathrm{U}\) with \(\overline{V}=V\) and \(\dim_{C}V\leqslant r\). Recall that we have the differential \(H\)-subalgebra \(\mathrm{U}_{\mathrm{r}}=\{f\in\mathrm{U}:\overline{f}=f\}\) of \(\mathrm{U}\) and the \(C_{H}\)-linear subspace \(V_{\mathrm{r}}=\ker_{\mathrm{U}_{\mathrm{r}}}A\) of \(\mathrm{U}_{\mathrm{r}}\). Now \(V=V_{\mathrm{r}}\oplus V_{\mathrm{r}}\mathrm{i}\) (internal direct sum of \(C_{H}\)-linear subspaces), so \(\dim_{C}V=\dim_{C_{H}}V_{\mathrm{r}}\). Combining Lemma 2.5.1 and the remarks preceding it with Lemma 2.2.19 and its proof yields:
**Corollary 2.5.18**.: _The \(C\)-linear space \(V\) has a basis_
\[a_{1}\operatorname{e}(\mu_{1}\mathrm{i}),\ \overline{a_{1}}\operatorname{e}(-\mu_{1}\mathrm{i}),\ \dots,\ a_{m}\operatorname{e}(\mu_{m}\mathrm{i}),\ \overline{a_{m}}\operatorname{e}(-\mu_{m}\mathrm{i}),\ h_{1},\ \dots,\ h_{n}\qquad(2m+n\leqslant r),\]
_where \(a_{1},\dots,a_{m}\in K^{\times}\), \(\mu_{1},\dots,\mu_{m}\in\Lambda_{\mathrm{i}}^{>}\), \(h_{1},\dots,h_{n}\in H^{\times}\). For such a basis,_
\[a_{1}\operatorname{e}(\mu_{1}\mathrm{i})+\overline{a_{1}}\operatorname{e}(-\mu_{1}\mathrm{i}),\ \ \mathrm{i}\bigl(a_{1}\operatorname{e}(\mu_{1}\mathrm{i})-\overline{a_{1}}\operatorname{e}(-\mu_{1}\mathrm{i})\bigr),\ \dots,\ a_{m}\operatorname{e}(\mu_{m}\mathrm{i})+\overline{a_{m}}\operatorname{e}(-\mu_{m}\mathrm{i}),\ \ \mathrm{i}\bigl(a_{m}\operatorname{e}(\mu_{m}\mathrm{i})-\overline{a_{m}}\operatorname{e}(-\mu_{m}\mathrm{i})\bigr),\ h_{1},\ \dots,\ h_{n}\]
_is a basis of the \(C_{H}\)-linear space \(V_{\mathrm{r}}\), and \(h_{1},\dots,h_{n}\) is a basis of the \(C_{H}\)-linear subspace \(\ker_{H}A=V\cap H\) of \(H\)._
Using \(H=H^{\dagger}\), arguments as in the proof of Lemma 2.5.7 show:
**Lemma 2.5.19**.: _Assume \(H\) is \(1\)-linearly surjective when \(r\geqslant 2\). Let \(a_{1},\dots,a_{r}\in H\) be such that \(A=(\partial-a_{r})\cdots(\partial-a_{1})\). Then the \(C_{H}\)-linear space \(\ker_{H}A\) has a basis \(y_{1},\dots,y_{r}\) such that \(\operatorname{split}(y_{1},\dots,y_{r})=(a_{1},\dots,a_{r})\)._
Recall from Lemma 2.3.3 that if \(r=1\) or \(K\) is \(1\)-linearly surjective, then
\[A\text{ splits over }K\quad\Longleftrightarrow\quad\sum_{\lambda} \operatorname{mult}_{\lambda}(A)=r.\]
Now \(\operatorname{mult}_{\lambda}(A)=\operatorname{mult}_{\overline{\lambda}}(A)\) for all \(\lambda\), so if \(\operatorname{mult}_{\lambda}(A)=r\geqslant 1\), then \(\lambda=0\). Also, for \(W:=V\cap K=\ker_{K}A\) and \(W_{\mathrm{r}}:=W\cap\mathrm{U}_{\mathrm{r}}\) we have \(W_{\mathrm{r}}=\ker_{H}A\) and
\[W\ =\ W_{\mathrm{r}}\oplus W_{\mathrm{r}}i\quad\text{ (internal direct sum of $C_{H}$-linear subspaces)},\]
so \(\operatorname{mult}_{0}(A)=\dim_{C}\ker_{K}A=\dim_{C_{H}}\ker_{H}A\). If \(y_{1},\dots,y_{r}\) is a basis of the \(C_{H}\)-linear space \(\ker_{H}A\), then \(\operatorname{split}(y_{1},\dots,y_{r})\in H^{r}\) in reversed order is a splitting of \(A\) over \(H\) by Corollary 2.5.5. These remarks and Lemma 2.5.19 now yield:
**Corollary 2.5.20**.: _If \(\operatorname{mult}_{0}(A)=r\), then \(A\) splits over \(H\). The converse holds if \(H\) is \(1\)-linearly surjective or \(r=1\)._
**Corollary 2.5.21**.: _Suppose \(r\geqslant 1\), and \(K\) is \(1\)-linearly surjective if \(r\geqslant 2\). Then_
\[A\text{ splits over }H\quad\Longleftrightarrow\quad\operatorname{mult}_{0}(A )=r\quad\Longleftrightarrow\quad|\Sigma(A)|=1.\]
We now focus on the order \(2\) case:
**Lemma 2.5.22**.: _Suppose \(r=2\) and \(A\) splits over \(K\) but not over \(H\). Then_
\[\dim_{C}\ker_{\mathrm{U}}A\ =\ 2.\]
_If \(H\) is \(1\)-linearly surjective, then \(A\) has two distinct eigenvalues._
Proof.: We can assume \(A\) is monic, so \(A=(\partial-f)(\partial-g)\) with \(f,g\in K\) and \(g=a+b\mathrm{i}\), \(a,b\in H\), \(b\neq 0\). Then \(g=d^{\dagger}+\mu\mathrm{i}\) with \(d\in K^{\times}\) and \(\mu\in\Lambda_{\mathrm{i}}\), and so \(d\operatorname{e}(\mu\mathrm{i})\in\ker_{\mathrm{U}}A\). From \(A=\overline{A}\) we obtain \(\overline{d}\operatorname{e}(-\mu\mathrm{i})\in\ker_{\mathrm{U}}A\). These two elements of \(\ker_{\mathrm{U}}A\) are \(C\)-linearly independent, since
\[d\operatorname{e}(\mu\mathrm{i})/\overline{d}\operatorname{e}(-\mu\mathrm{i})\ =\ (d/\overline{d})\operatorname{e}(2\mu\mathrm{i})\notin C:\]
this is clear if \(\mu\neq 0\), and if \(\mu=0\), then \(d^{\dagger}=g\), so \((d/\overline{d})^{\dagger}=g-\overline{g}=2b\mathrm{i}\neq 0\), and hence \(d/\overline{d}\notin C\). Thus \(\dim_{C}\ker_{\mathrm{U}}A\ =\ 2\), and \(\mu\mathrm{i}\), \(-\mu\mathrm{i}\) are eigenvalues of \(A\) with respect to \(\Lambda\). Now assume \(H\) is \(1\)-linearly surjective. Then we claim that \(\mu\neq 0\). To see this note that [ADH, 5.1.21, 5.2.10] and the assumption that \(A\) does not split over \(H\) yield \(\dim_{C_{H}}\ker_{H}A=\dim_{C}\ker_{K}A=0\), hence \(g\notin K^{\dagger}\) and thus \(\mu\mathrm{i}=g-d^{\dagger}\neq 0\).
Combining Lemmas 2.5.19 and 2.5.22 yields:
**Corollary 2.5.23**.: _If \(H\) is \(1\)-linearly surjective, \(A\) has order \(2\), and \(A\) splits over \(K\), then \(\dim_{C}\ker_{\mathrm{U}}A\ =\ 2\)._
_In the rest of this subsection \(H\) is \(1\)-linearly surjective and \(A=4\partial^{2}+f\), \(f\in H\)._
Let the functions \(\omega\colon H\to H\) and \(\sigma\colon H^{\times}\to H\) be as in [ADH, 5.2]. Then we have [ADH, remarks before 5.2.1, and (5.2.1)]:
\[A\text{ splits over }H \iff f\in\omega(H),\] \[A\text{ splits over }K \iff f\in\sigma(H^{\times})\cup\omega(H).\]
If \(A\) splits over \(H\), then \(\Sigma(A)=\{0\}\) and \(\operatorname{mult}_{0}(A)=2\), by Corollary 2.5.21. Suppose \(A\) splits over \(K\) but not over \(H\), and let \(y\in H^{\times}\) satisfy \(\sigma(y)=f\notin\omega(H)\). Then by [ADH, p. 262] we have \(A=4(\partial+g)(\partial-g)\) where \(g=\frac{1}{2}(-y^{\dagger}+yi)\). Hence the two distinct eigenvalues of \(A\) are \((y/2)\mathrm{i}+K^{\dagger}\) and \(-(y/2)\mathrm{i}+K^{\dagger}\). We consider also the skew-adjoint differential operator
\[B\ :=\ \partial^{3}+f\partial+(f^{\prime}/2)\in H[\partial].\]
If \(\dim_{C}\ker_{\mathrm{U}}A=2\), then \(\dim_{C}\ker_{\mathrm{U}}B=3\) by Lemma 2.4.23. Likewise,
\[\dim_{C_{H}}\ker_{H}A=2\ \Longrightarrow\ \dim_{C_{H}}\ker_{H}B=3.\]
**Lemma 2.5.24**.: _If \(A\) splits over \(K\), then so does \(B\). Likewise with \(H\) instead of \(K\)._
Proof.: If \(A\) splits over \(K\), then \(\dim_{C}\ker_{\mathrm{U}}A=2\) by Corollary 2.5.23 and therefore \(\dim_{C}\ker_{\mathrm{U}}B=3\) by the remark preceding the lemma, so \(B\) splits over \(K\) by Corollary 2.5.6. If \(A\) splits over \(H\), then \(\dim_{C_{H}}\ker_{H}A=2\) by Lemma 2.5.19 and hence \(\dim_{C_{H}}\ker_{H}B=3\), so \(B\) splits over \(H\) by Corollary 2.5.5 and the remark following it.
**Lemma 2.5.25**.: _Let \(y\in H^{\times}\) with \(\sigma(y)=f\notin\omega(H)\). Then \(\Sigma(B)=\{\beta,0,-\beta\}\) where \(\beta:=y\mathrm{i}+K^{\dagger}\neq 0\), and \(\dim_{C_{H}}\ker_{H}B=1\)._
Proof.: Put \(g=\frac{1}{2}(-y^{\dagger}+yi)\), so \(A=4(\partial+g)(\partial-g)\), and take \(d\in K^{\times}\) and \(\mu\in\Lambda_{\mathrm{i}}\) with \(g=d^{\dagger}+\mu\mathrm{i}\). Then \(d\operatorname{e}(\mu\mathrm{i})\), \(\overline{d}\operatorname{e}(-\mu\mathrm{i})\) is a basis of \(\ker_{\mathrm{U}}A\) and \(\mu\neq 0\), by the argument in the proof of Lemma 2.5.22. Hence
\[d^{2}\operatorname{e}(2\mu\mathrm{i}),\quad|d|^{2},\quad\overline{d}^{2} \operatorname{e}(-2\mu\mathrm{i})\]
is a basis of \(\ker_{\mathrm{U}}B\) by Lemma 2.4.23, so
\[\big{(}d^{2}\operatorname{e}(2\mu\mathrm{i})\big{)}^{\dagger}+K^{\dagger}=2 \mu\mathrm{i}+K^{\dagger},\ \big{(}|d|^{2}\big{)}^{\dagger}+K^{\dagger}=[0],\ \big{(} \overline{d}^{2}\operatorname{e}(-2\mu\mathrm{i})\big{)}^{\dagger}+K^{\dagger}= -2\mu\mathrm{i}+K^{\dagger}\]
are eigenvalues of \(B\). Since \(\mu\mathrm{i}\notin K^{\dagger}\), these are distinct eigenvalues, and so there are no other eigenvalues. Note: \(g=\frac{1}{2}(-y^{\dagger}+y\mathrm{i})=d^{\dagger}+\mu\mathrm{i}\) gives \(y\mathrm{i}+K^{\dagger}=2\mu\mathrm{i}+K^{\dagger}\). Finally, \(\dim_{C_{H}}\ker_{H}B=1\) by Corollary 2.5.18.
**Factoring linear differential operators over \(H\)-fields \((^{*})\).**_In this subsection \(H\) is a real closed \(H\)-field with \(x\in H\), \(x^{\prime}=1\), \(x\succ 1\), and \(K=H[\mathrm{i}]\) where \(\mathrm{i}^{2}=-1\)._
In the proof of the next lemma we use [ADH, 10.5.2(i)]:
\[y,z\in H^{\times},\ y\prec z\ \Longrightarrow\ y^{\dagger}<z^{\dagger}. \tag{2.5.2}\]
**Lemma 2.5.26**.: _Let \(y\), \(z\) be \(C_{H}\)-linearly independent elements of \(H^{\times}\) and \((a,b):=\operatorname{split}(y,z)\). If \(xy\succ z\), then \(a>b\), and if \(xy\prec z\), then \(a<b\)._
Proof.: Replacing \((y,z)\) by \((1,z/y)\) we arrange \(y=1\), \(a=0\), by Lemma 2.5.11. Then \(z^{\prime}\neq 0\) and \(b=z^{\prime\dagger}\). Now \(x\succ z\) implies \(1\succ z^{\prime}\), and \(x\prec z\) implies \(1\prec z^{\prime}\). It remains to use the remark preceding the lemma.
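_Example_.: With \(y=1\), \(z=x^{2}\) we have \((a,b)=\operatorname{split}(1,x^{2})=\bigl(0,(2x)^{\dagger}\bigr)=(0,x^{-1})\), and indeed \(xy=x\prec x^{2}=z\) and \(a=0<x^{-1}=b\). With \(y=1\), \(z=x\) we get \((a,b)=(0,0)\), consistent with \(xy\asymp z\) falling under neither case of the lemma.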
In the next three lemmas \(y_{1},\ldots,y_{r}\in H\) (\(r\in\mathbb{N}\)) are \(C_{H}\)-linearly independent and \((a_{1},\ldots,a_{r}):=\operatorname{split}(y_{1},\ldots,y_{r})\in H^{r}\). We also assume that \(H\) is \(\lambda\)-free.
**Lemma 2.5.27**.: \(y_{1}\succ\cdots\succ y_{r}\ \Rightarrow\ a_{1}>\cdots>a_{r}\)_._
Proof.: The cases \(r=0,1\) are trivial, so suppose that \(r\geqslant 2\) and \(y_{1}\succ\cdots\succ y_{r}\). We have \((a_{1},\ldots,a_{r-1})=\operatorname{split}(y_{1},\ldots,y_{r-1})\). Assume \(a_{1}>\cdots>a_{r-1}\) as inductive hypothesis. It remains to show \(a_{r-1}>a_{r}\). Put \(B:=(\partial-a_{r-2})\cdots(\partial-a_{1})\); so \(B=A_{r-2}\) in the notation introduced before Lemma 2.5.4, and \((a_{r-1},a_{r})=\operatorname{split}\bigl{(}B(y_{r-1}),B(y_{r})\bigr{)}\). By Lemma 2.5.4, \(y_{1},\ldots,y_{r-2}\) is a basis of the \(C_{H}\)-linear subspace \(\ker_{H}B\) of \(H\), and hence
\[v(\ker_{H}^{\neq}B)\ =\ \bigl{\{}v(y_{1}),\ldots,v(y_{r-2})\bigr{\}}\ =\ \mathscr{E}^{\rm e}(B)\]
by Corollary 1.5.20, so \(v(y_{r-1}),v(y_{r})\notin\mathscr{E}^{\rm e}(B)\). Then Lemma 1.5.6 gives \(B(y_{r-1})\succ B(y_{r})\), so \(xB(y_{r-1})\succ B(y_{r})\). Now Lemma 2.5.26 yields \(a_{r-1}>a_{r}\).
**Lemma 2.5.28**.: \(y_{1}\prec^{\flat}\cdots\prec^{\flat}y_{r}\ \Rightarrow\ a_{1}<\cdots<a_{r}\)_._
Proof.: Similar to the proof of Lemma 2.5.27, using in the inductive step that \(B\) is asymptotically surjective by Corollary 1.5.25, hence if \(y,z\in H^{\times}\), \(vy,vz\notin\mathscr{E}^{\rm e}(B)\), and \(y\prec^{\flat}z\), then \(B(y)\prec^{\flat}B(z)\) by Lemma 1.5.22, and so \(xB(y)\prec B(z)\).
Along the lines of the proof of Lemma 2.5.27 we obtain:
**Lemma 2.5.29**.: _Suppose \(y_{i}\not\asymp y_{j}\) for all \(i,j\) with \(1\leqslant i<j\leqslant r\). Then_
\[a_{1}\leqslant\cdots\leqslant a_{r}\ \Rightarrow\ y_{1}\prec\cdots\prec y_{r}.\]
Under present assumptions we can strengthen the conclusion of Lemma 2.5.19:
**Lemma 2.5.30**.: _Assume \(H\) is Liouville closed. Let \(a_{1},\ldots,a_{r}\in H\) and set_
\[A\ :=\ (\partial-a_{r})\cdots(\partial-a_{1}).\]
_Then the \(C_{H}\)-linear space \(\ker_{H}A\) has a basis \(y_{1},\ldots,y_{r}\) such that \(\operatorname{split}(y_{1},\ldots,y_{r})=(a_{1},\ldots,a_{r})\) and \(y_{i}\not\asymp y_{j}\) for all \(i,j\) with \(1\leqslant i<j\leqslant r\)._
Proof.: The case \(r=0\) is clear. Let \(r\geqslant 1\) and assume inductively that
\[B\ :=\ (\partial-a_{r})\cdots(\partial-a_{2})\]
has a basis \(z_{2},\ldots,z_{r}\) of \(\ker_{H}B\) such that \(\operatorname{split}(z_{2},\ldots,z_{r})=(a_{2},\ldots,a_{r})\) and \(z_{i}\not\asymp z_{j}\) whenever \(2\leqslant i<j\leqslant r\). Take \(y_{1}\in H^{\times}\) with \(y_{1}^{\dagger}=a_{1}\), so \(\ker_{H}(\partial-a_{1})=C_{H}y_{1}\) and \(\mathscr{E}^{\rm e}_{H}(\partial-a_{1})=\{vy_{1}\}\). For \(i=2,\ldots,r\), Corollary 1.5.4 then gives \(y_{i}\in H^{\times}\) with \((\partial-a_{1})(y_{i})=z_{i}\) and \(y_{i}\not\asymp y_{1}\). Then \(y_{1},\ldots,y_{r}\in\ker_{H}A\) and \(y_{i}\not\asymp y_{j}\) for all \(i\neq j\), and \(\operatorname{split}(y_{1},\ldots,y_{r})=(a_{1},\ldots,a_{r})\).
The valuation of \(H\) being trivial on \(C_{H}\), the proof of the next lemma is obvious.
**Lemma 2.5.31**.: _Let \(g_{1}\prec\cdots\prec g_{n}\) in \(H\) and let \(h_{1},\ldots,h_{n}\) be in the \(C_{H}\)-linear subspace spanned by \(g_{1},\ldots,g_{n}\). Then the following are equivalent:_
1. \(h_{1}\prec\cdots\prec h_{n}\)_;_
2. _for_ \(i=1,\ldots,n\) _there are_ \(c_{ii},c_{i,i-1},\ldots,c_{i1}\in C_{H}\) _such that_ \[h_{i}\ =\ c_{ii}g_{i}+c_{i,i-1}g_{i-1}+\cdots+c_{i1}g_{1}\text{ and }c_{ii}\neq 0.\]
Below \(A\in H[\partial]^{\neq}\) has order \(r\geqslant 1\). Now the main results of this subsection:
**Lemma 2.5.32**.: _There is at most one splitting \((a_{r},\ldots,a_{1})\) of \(A\) over \(H\) such that \(a_{1}\leqslant\cdots\leqslant a_{r}\)._
Proof.: Let \((a_{r},\ldots,a_{1})\), \((b_{r},\ldots,b_{1})\) be splittings of \(A\) over \(H\) with \(a_{1}\leqslant\cdots\leqslant a_{r}\) and \(b_{1}\leqslant\cdots\leqslant b_{r}\). Towards showing that \(a_{i}=b_{i}\) for \(i=1,\ldots,r\) we arrange that \(H\) is Liouville closed. Then Lemma 2.5.30 yields bases \(y_{1},\ldots,y_{r}\) and \(z_{1},\ldots,z_{r}\) of \(\ker_{H}A\) such that \(\operatorname{split}(y_{1},\ldots,y_{r})=(a_{1},\ldots,a_{r})\), \(\operatorname{split}(z_{1},\ldots,z_{r})=(b_{1},\ldots,b_{r})\) and \(y_{i}\not\asymp y_{j}\), \(z_{i}\not\asymp z_{j}\) whenever \(i\neq j\). By Lemma 2.5.29 we have \(y_{1}\prec\cdots\prec y_{r}\) and \(z_{1}\prec\cdots\prec z_{r}\), and hence by Lemmas 2.5.10 and 2.5.31,
\[(a_{1},\ldots,a_{r})\ =\ \operatorname{split}(y_{1},\ldots,y_{r})\ =\ \operatorname{split}(z_{1},\ldots,z_{r})\ =\ (b_{1},\ldots,b_{r}).\qed\]
_Example_.: Let \(a,b\in H\) in this example. Then
\[(\partial-b)(\partial-a)=\partial^{2}-\partial a-b\partial+ab=\partial^{2}-(a +b)\partial+(ab-a^{\prime}),\]
so for \(f,g\in H\),
\[(\partial-b)(\partial-a)=(\partial-g)(\partial-f)\quad\Longleftrightarrow \quad a+b=f+g\text{ and }ab-a^{\prime}=fg-f^{\prime}.\]
Now take \(A=\partial^{2}\). Then \(1\), \(x\) is a basis of \(\ker_{H}A\), and
\[A=(\partial-b)(\partial-a)\quad\Longleftrightarrow\quad a+b=0\text{ and }ab-a^{\prime}=0\]
\[\Longleftrightarrow\quad a=-b=(cx+d)^{\dagger}\text{ for some }c,d\in C_{H}, \text{ not both zero}.\]
Hence if \((b,a)\) is any splitting of \(A\) over \(H\), then \(a\geqslant 0\geqslant b\), and the only splitting \((b,a)\) of \(A\) over \(H\) with \(a\leqslant b\) is \((b,a)=(0,0)\).
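The description of the splittings of \(\partial^{2}\) amounts to a Riccati equation: with \(b=-a\), the condition \(ab-a^{\prime}=0\) reads \(a^{\prime}=-a^{2}\). For \(a=(cx+d)^{\dagger}=c/(cx+d)\), \(c\neq 0\), one computes
\[a^{\prime}\ =\ -\frac{c^{2}}{(cx+d)^{2}}\ =\ -a^{2},\]
and conversely, if \(a^{\prime}=-a^{2}\) and \(a\neq 0\), then \((1/a)^{\prime}=1\), so \(1/a=x+e\) with \(e\in C_{H}\) and \(a=(x+e)^{\dagger}\); the case \(a=0\) corresponds to \(c=0\neq d\).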
We call \(A\) **scrambled** if there are \(y,z\in\ker_{H}^{\neq}A\) with \(y\not\asymp z\) and \(y\asymp^{\flat}z\), and **unscrambled** otherwise. Hence if \(r=1\), then \(A\) is unscrambled, whereas \(A=\partial^{2}\) is scrambled. For \(a,b\in H^{\times}\) we have: \(A\) is scrambled \(\Leftrightarrow aAb\) is scrambled. Moreover:
**Lemma 2.5.33**.: _Assume \(H\) has asymptotic integration, and let \(B\in H[\partial]\) have order \(s\geqslant 1\). If \(B\) is scrambled, then so is \(AB\). If \(A\) is scrambled, \(B\) is asymptotically surjective, \(\mathscr{E}^{\rm e}(B)=v(\ker_{H}^{\neq}B)\), and \(\ker_{H}A\subseteq B(H)\), then \(AB\) is scrambled._
Proof.: The first statement is clear since \(\ker_{H}B\subseteq\ker_{H}AB\). Suppose \(B\) is asymptotically surjective, \(\mathscr{E}^{\rm e}(B)=v(\ker_{H}^{\neq}B)\), and \(\ker_{H}A\subseteq B(H)\), and let \(f,g\in\ker_{H}^{\neq}A\) be such that \(f\not\asymp g\) and \(f\asymp^{\flat}g\). Corollary 1.5.4 yields \(y,z\in H\) with \(B(y)=f\), \(B(z)=g\) and \(vy,vz\notin\mathscr{E}^{\rm e}(B)\). Then \(y,z\in\ker_{H}^{\neq}AB\), and \(y\not\asymp z\), \(y\asymp^{\flat}z\) by Lemmas 1.5.6 and 1.5.22.
**Proposition 2.5.34**.: _Suppose \(H\) is Liouville closed, \(A\) splits over \(H\), and \(A\) is unscrambled. Then there is a unique splitting \((a_{r},\ldots,a_{1})\) of \(A\) over \(H\) such that \(a_{1}\leqslant\cdots\leqslant a_{r}\). For this splitting we have \(a_{1}<\cdots<a_{r}\)._
Proof.: We first arrange that \(A\) is monic. The uniqueness part is immediate from Lemma 2.5.32. To obtain a splitting \((a_{r},\ldots,a_{1})\) of \(A\) over \(H\) with \(a_{1}<\cdots<a_{r}\), Lemma 2.5.30 gives a basis \(y_{1}\prec\cdots\prec y_{r}\) of the \(C_{H}\)-linear space \(\ker_{H}A\). Now set \((a_{1},\ldots,a_{r}):=\operatorname{split}(y_{1},\ldots,y_{r})\). Then \((a_{r},\ldots,a_{1})\) is a splitting of \(A\) over \(H\), by Corollary 2.5.5. Since \(A\) is unscrambled, we have \(y_{1}\prec^{\flat}\cdots\prec^{\flat}y_{r}\), so \(a_{1}<\cdots<a_{r}\), by Lemma 2.5.28.
In [103, Exercise 7.28] it is claimed that for the Liouville closed \(H\)-field \(H=\mathbb{T}_{\mathbb{S}}\) of grid-based transseries, if \(A\) splits over \(H\), then \(A\) always has a splitting \((a_{r},\ldots,a_{1})\) over \(H\) with \(a_{1}\leqslant\cdots\leqslant a_{r}\). The next example shows this to be incorrect for \(r=2\).
_Example 2.5.35_.: Let \(z\in H\setminus C_{H}\) and suppose \(A=(\partial-g)\partial\) where \(g:=z^{\prime}{}^{\dagger}\). Then \(1\), \(z\) is a basis of \(\ker_{H}A\). With \(a,b\in H\), this fact leads to the equivalence
\[A=(\partial-b)(\partial-a)\ \ \iff\ \ a=g-b=(cz+d)^{\dagger}\text{ for some }c,d\in C _{H},\text{ not both zero}.\]
Now take \(z=x-x^{-1}\), so \(z^{\prime}=1+x^{-2}\), \(z^{\prime\prime}=-2x^{-3}\), and hence
\[g\ =\ z^{\prime}{}^{\dagger}\ =\ \frac{z^{\prime\prime}}{z^{\prime}}\ =\ -\frac{2}{x(x^{2}+1)}<0.\]
Let \((b,a)\) be a splitting of \(A\) over \(H\). We claim that then \(a\geqslant 0>b\). This is clear if \(a=0\), so assume \(a\neq 0\). For \(c,d\in C_{H}\), \(c\neq 0\) we have \((cz+d)^{\dagger}\sim x^{-1}\). Thus \(a\sim x^{-1}\) and \(b=g-a\sim-x^{-1}\). Hence \(a>0>b\) as claimed.
_In the rest of this subsection \(H\) has asymptotic integration, and \(\phi\) ranges over the elements of \(H^{>}\) that are active in \(H\)._ Note that if \(A\) is unscrambled and \(\phi\preccurlyeq 1\), then \(A^{\phi}\in H^{\phi}[\partial]\) is also unscrambled. Moreover:
**Lemma 2.5.36**.: \(A^{\phi}\) _is unscrambled, eventually._
Proof.: By Remark 1.5.3 we have \(|v(\ker_{H}^{\neq}A)|\leqslant r\), thus we can take \(\phi\) so that \(\gamma-\delta\notin\Gamma_{\phi}^{\flat}\) for all \(\gamma\neq\delta\) in \(v(\ker_{H}^{\neq}A)\). Now \(A^{\phi}\) is unscrambled since \(\ker_{H}A=\ker_{H^{\phi}}A^{\phi}\).
**Lemma 2.5.37**.: _Let \((a_{r},\ldots,a_{1})\) be a splitting of \(A\) over \(H\) such that \(a_{1}\leqslant\cdots\leqslant a_{r}\), suppose \(\phi\preccurlyeq 1\), and set \(b_{j}:=\phi^{-1}\big{(}a_{j}-(j-1)\phi^{\dagger}\big{)}\) for \(j=1,\ldots,r\). Then \((b_{r},\ldots,b_{1})\) is a splitting of \(A^{\phi}\) over \(H^{\phi}\) with \(b_{1}<\cdots<b_{r}\)._
Proof.: By Lemma 1.1.2, \((b_{r},\ldots,b_{1})\) is a splitting of \(A^{\phi}\) over \(H^{\phi}\). Since \(\phi^{\dagger}<0\), we have \(b_{1}<\cdots<b_{r}\).
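For \(r=2\) the identity underlying this computation can be checked directly: with \(\delta:=\phi^{-1}\partial\) (the derivation of \(H^{\phi}\)), \(b_{1}=\phi^{-1}a_{1}\) and \(b_{2}=\phi^{-1}(a_{2}-\phi^{\dagger})\), expanding both sides by means of \(\delta\phi=\phi\delta+\phi^{\dagger}\) in \(H^{\phi}[\delta]\) yields
\[(\partial-a_{2})(\partial-a_{1})\ =\ \phi^{2}\,(\delta-b_{2})(\delta-b_{1}),\]
so \((b_{2},b_{1})\) is indeed a splitting of \(A^{\phi}\) over \(H^{\phi}\) when \(A=(\partial-a_{2})(\partial-a_{1})\).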
**Corollary 2.5.38**.: _Suppose \(H\) is Liouville closed and \(A\) splits over \(H\). Then there is a unique splitting \((a_{r},\ldots,a_{1})\) of \(A\) over \(H\) such that eventually \(a_{j}+\phi^{\dagger}<a_{j+1}\) for \(j=1,\ldots,r-1\)._
Proof.: Let \((a_{r},\ldots,a_{1})\) be a splitting of \(A\) over \(H\) and \(\phi\) so that \(a_{j}+\phi^{\dagger}<a_{j+1}\) for \(j=1,\ldots,r-1\). Set \(b_{j}:=\phi^{-1}\big{(}a_{j}-(j-1)\phi^{\dagger}\big{)}\) for \(j=1,\ldots,r\). Then \((b_{r},\ldots,b_{1})\) is a splitting of \(A^{\phi}\) over \(H^{\phi}\) with \(b_{1}<\cdots<b_{r}\). Thus by Lemma 2.5.32 there can be at most one splitting \((a_{r},\ldots,a_{1})\) of \(A\) over \(H\) such that eventually \(a_{j}+\phi^{\dagger}<a_{j+1}\) for \(j=1,\ldots,r-1\). (Here we also use (2.5.2).)
For existence, take \(\phi\) with unscrambled \(A^{\phi}\) and a splitting \((b_{r},\ldots,b_{1})\) of \(A^{\phi}\) over \(H^{\phi}\) with \(b_{1}<\cdots<b_{r}\) as in Proposition 2.5.34. For \(j=1,\ldots,r\), take \(a_{j}\in H\) such that \(b_{j}=\phi^{-1}\big{(}a_{j}-(j-1)\phi^{\dagger}\big{)}\). Then \((a_{r},\ldots,a_{1})\) is a splitting of \(A\) over \(H\), by Lemma 1.1.2, and \(a_{j}+\phi^{\dagger}<a_{j+1}\) for \(j=1,\ldots,r-1\).
_Example_.: Suppose \(H\) is Liouville closed and \(A\), \(g\), \(z=x-x^{-1}\) are as in Example 2.5.35. Then the unique splitting \((b,a)\) of \(A\) over \(H\) such that eventually \(a+\phi^{\dagger}<b\) is \((b,a)=(g,0)\). (To see this use that eventually \(\phi^{\dagger}\sim-x^{-1}\).)
We finish this subsection with a variant of Proposition 2.5.34 for Polya-style splittings. In [ADH, 11.8] we defined \(\Gamma(H):=\{h^{\dagger}:h\in H^{\succ 1}\}\). If \(H\) is Liouville closed, then \(H^{>}\setminus\operatorname{I}(H)=\Gamma(H)\)[ADH, p. 520].
**Proposition 2.5.39**.: _Suppose \(H\) is Liouville closed and \(A\) is monic and splits over \(H\). Then there are \(g_{1},\ldots,g_{r}\in H^{>}\) such that_
\[A\ =\ g_{1}\cdots g_{r}(\partial g_{r}^{-1})\cdots(\partial g_{2}^{-1})( \partial g_{1}^{-1})\ \ \text{ and }\ \ g_{j}\in\Gamma(H)\text{ for }j=2,\ldots,r.\]
_Such \(g_{1},\ldots,g_{r}\) are unique up to multiplication by positive constants._
Proof.: Let \((a_{r},\ldots,a_{1})\) be the splitting of \(A\) over \(H\) from Corollary 2.5.38. Take \(g_{j}\in H^{>}\) such that \(g_{j}^{\dagger}=a_{j}-a_{j-1}\) for \(j=1,\ldots,r\), where \(a_{0}:=0\). Then
\[A=g_{1}\cdots g_{r}(\partial g_{r}^{-1})\cdots(\partial g_{2}^{-1})(\partial g_ {1}^{-1})\]
by Lemma 1.1.3. For \(j=2,\ldots,r\) we have \((g_{j}/\phi)^{\dagger}>0\), eventually, hence \(g_{j}/\phi\succcurlyeq 1\), eventually, so \(g_{j}\in H^{>}\setminus{\rm I}(H)=\Gamma(H)\). Suppose now \(h_{1},\ldots,h_{r}\in H^{>}\) are such that
\[A\ =\ h_{1}\cdots h_{r}(\partial h_{r}^{-1})\cdots(\partial h_{2}^{-1})( \partial h_{1}^{-1})\quad\text{and}\quad h_{j}\in\Gamma(H)\text{ for }j=2,\ldots,r.\]
Set \(b_{j}:=(h_{1}\cdots h_{j})^{\dagger}\) for \(j=1,\ldots,r\). Then \(A=(\partial-b_{r})\cdots(\partial-b_{1})\) by Lemma 1.1.3. Also \(\int h_{j}\succ 1\) for \(j=2,\ldots,r\), for any choice of the integrals in \(H\). Let
\[z_{1}:=h_{1},\quad z_{2}:=h_{1}\int h_{2},\quad\ldots,\quad z_{r}:=h_{1}\int(h _{2}\int h_{3}(\cdots(h_{r-1}\int h_{r})\cdots))\]
for some choice of the integrals in \(H\). Then \(z_{1},\ldots,z_{r}\in\ker_{H}^{\neq}A\), and induction on \(j=1,\ldots,r\) using [ADH, 9.1.3(iii)] gives \(z_{1}\prec\cdots\prec z_{j}\). Then Lemma 2.5.12 yields \(\operatorname{split}(z_{1},\ldots,z_{r})=(b_{1},\ldots,b_{r})\). Applied to \(g_{1},\ldots,g_{r}\) instead of \(h_{1},\ldots,h_{r}\), this gives \(y_{1}\prec\cdots\prec y_{r}\in\ker_{H}^{\neq}A\) such that \(\operatorname{split}(y_{1},\ldots,y_{r})=(a_{1},\ldots,a_{r})\). Now \(y_{1},\ldots,y_{r}\) and \(z_{1},\ldots,z_{r}\) are both bases of \(\ker_{H}A\), so by Lemmas 2.5.31 and 2.5.10:
\[(a_{1},\ldots,a_{r})\ =\ \operatorname{split}(y_{1},\ldots,y_{r})\ =\ \operatorname{split}(z_{1},\ldots,z_{r})\ =\ (b_{1},\ldots,b_{r}).\]
So \(g_{j}^{\dagger}=a_{j}-a_{j-1}=b_{j}-b_{j-1}=h_{j}^{\dagger}\) (\(b_{0}:=0\)), and thus \(g_{j}\in C_{H}^{>}h_{j}\), \(j=1,\ldots,r\).
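_Example_.: For \(A=\partial^{2}\) the factorization of Proposition 2.5.39 has \(g_{1}=g_{2}=1\): here the splitting from Corollary 2.5.38 is \((0,0)\) (see the example following Lemma 2.5.32), \(y_{1}=1\prec x=y_{2}\), and \(1\in\Gamma(H)\), since \(H\) being Liouville closed gives \(h\in H^{\times}\) with \(h^{\dagger}=1\), and then \(h\succ 1\) because \(h^{\dagger}=1>0\).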
### The case of oscillating transseries \((^{*})\)
The material developed above in this section applies to the algebraically closed differential field \(K={\mathbb{T}}[i]\) with constant field \({\mathbb{C}}\), which extends the (real closed) differential field \({\mathbb{T}}\) of transseries.
**Lemma 2.5.40**.: \({\mathbb{T}}[i]\) _is linearly closed and linearly surjective._
Proof.: By [ADH, 15.0.2], \({\mathbb{T}}\) is newtonian, so \({\mathbb{T}}[i]\) is newtonian by [ADH, 14.5.7]. Hence \({\mathbb{T}}[i]\) is linearly closed by [ADH, 5.8.9, 14.5.3], and \({\mathbb{T}}[i]\) is linearly surjective by [ADH, 14.2.2].
Now applying Corollary 2.5.8 and Lemma 2.5.1 to \({\mathbb{T}}[i]\) gives:
**Corollary 2.5.41**.: _For \(K={\mathbb{T}}[i]\), there are \({\mathbb{C}}\)-linearly independent units \(y_{1},\ldots,y_{r}\) of \({\rm U}_{{\mathbb{T}}[i]}\) with \(A(y_{1})=\cdots=A(y_{r})=0\)._
Next we describe another incarnation of \({\rm U}_{{\mathbb{T}}[i]}\), namely as a ring of "oscillating" transseries. Towards this goal we first note that by [ADH, 11.5.1, 11.8.2] we have
\[{\rm I}({\mathbb{T}})\ =\ \bigl{\{}y\in{\mathbb{T}}:\ y\preccurlyeq f^{ \prime}\text{ for some }f\prec 1\text{ in }{\mathbb{T}}\bigr{\}}\] \[=\ \bigl{\{}y\in{\mathbb{T}}:\ y\prec 1/(\ell_{0}\cdots\ell_{n}) \text{ for all }n\bigr{\}},\]
so a complement \(\Lambda_{\mathbb{T}}\) of \({\rm I}({\mathbb{T}})\) in \({\mathbb{T}}\) is given by
\[\Lambda_{\mathbb{T}}\ :=\ \bigl{\{}y\in{\mathbb{T}}:\ \operatorname{supp}(y) \succ 1/(\ell_{0}\cdots\ell_{n-1}\ell_{n}^{2})\text{ for all }n\bigr{\}}.\]
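For instance, \(1\), \(x\), \(x^{-1}\), and \(\mathrm{e}^{x}\) all lie in \(\Lambda_{\mathbb{T}}\): each has support a single monomial \(\succcurlyeq x^{-1}\), and \(x^{-1}\succ 1/(\ell_{0}\cdots\ell_{n-1}\ell_{n}^{2})\) for all \(n\). By contrast, \(\mathrm{e}^{-x}\prec 1/(\ell_{0}\cdots\ell_{n})\) for all \(n\), so \(\mathrm{e}^{-x}\in\mathrm{I}(\mathbb{T})\).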
Since \({\mathbb{T}}^{\dagger}={\mathbb{T}}\) and \({\rm I}\bigl{(}{\mathbb{T}}[i]\bigr{)}\subseteq{\mathbb{T}}[i]^{\dagger}\) we have \({\mathbb{T}}[i]^{\dagger}={\mathbb{T}}\oplus{\rm I}({\mathbb{T}})i\) by Lemmas 1.2.4 and 1.2.16. We now take \(\Lambda=\Lambda_{\mathbb{T}}i\) as our complement \(\Lambda\) of \({\mathbb{T}}[i]^{\dagger}\) in \({\mathbb{T}}[i]\) and explain how the universal exponential extension \({\rm U}\) of \({\mathbb{T}}[i]\) for this \(\Lambda\) was introduced in [103, Section 7.7] in a different way. Let
\[{\mathbb{T}}_{\succ}\ :=\ \{f\in{\mathbb{T}}:\operatorname{supp}f\succ 1\},\]
and similarly with \(\prec\) in place of \(\succ\); then \(\mathbb{T}_{\prec}=\smallo_{\mathbb{T}}\) and \(\mathbb{T}_{\succ}\) are \(\mathbb{R}\)-linear subspaces of \(\mathbb{T}\), and \(\mathbb{T}\) decomposes as an internal direct sum
\[\mathbb{T}\ =\ \mathbb{T}_{\succ}\oplus\mathbb{R}\oplus\mathbb{T}_{\prec} \tag{2.5.3}\]
of \(\mathbb{R}\)-linear subspaces of \(\mathbb{T}\). Let \(\mathrm{e}^{\mathrm{i}\mathbb{T}_{\succ}}=\{\mathrm{e}^{\mathrm{i}f}:f\in \mathbb{T}_{\succ}\}\) be a multiplicative copy of the additive group \(\mathbb{T}_{\succ}\), with isomorphism \(f\mapsto\mathrm{e}^{\mathrm{i}f}\). Then we have the group ring
\[\mathbb{O}\ :=\ K\big{[}\,\mathrm{e}^{\mathrm{i}\mathbb{T}_{\succ}}\,\big{]}\]
of \(\mathrm{e}^{\mathrm{i}\mathbb{T}_{\succ}}\) over \(K=\mathbb{T}[\mathrm{i}]\). We make \(\mathbb{O}\) into a differential ring extension of \(K\) by
\[(\mathrm{e}^{\mathrm{i}f})^{\prime}\ =\ if^{\prime}\,\mathrm{e}^{\mathrm{i}f} \qquad(f\in\mathbb{T}_{\succ}).\]
Hence \(\mathbb{O}\) is an exponential extension of \(K\). The elements of \(\mathbb{O}\) are called _oscillating transseries_. For each \(f\in\mathbb{T}\) there is a unique \(g\in\mathbb{T}\), to be denoted by \(\int f\), such that \(g^{\prime}=f\) and \(g\) has constant term \(g_{1}=0\). Note that the injective map \(\int\colon\mathbb{T}\to\mathbb{T}\) is \(\mathbb{R}\)-linear. We show that \(\mathrm{U}\) and \(\mathbb{O}\) are disguised versions of each other:
**Proposition 2.5.42**.: _There is a unique isomorphism \(\mathrm{U}=K\big{[}\mathrm{e}(\Lambda)\big{]}\to\mathbb{O}\) of differential \(K\)-algebras sending \(\mathrm{e}(h\mathrm{i})\) to \(\mathrm{e}^{\mathrm{i}\int h}\) for all \(h\in\Lambda_{\mathbb{T}}\)._
This requires the next lemma. We assume familiarity with [ADH, Appendix A], especially with the ordered group \(G^{\mathrm{LE}}\) (a subgroup of \(\mathbb{T}^{\times}\)) of logarithmic-exponential monomials and its subgroup \(G^{\mathrm{E}}=\bigcup_{n}G_{n}\) of exponential monomials.
**Lemma 2.5.43**.: _If \(\mathfrak{m}\in G^{\mathrm{LE}}\) and \(\mathfrak{m}\succ 1\), then \(\mathrm{supp}\,\mathfrak{m}^{\prime}\ \subseteq\ \Lambda_{\mathbb{T}}\)._
Proof.: We first prove by induction on \(n\) a fact about elements of \(G^{\mathrm{E}}\):
\[\mathrm{if}\ \mathfrak{m}\in G_{n},\,\mathfrak{m}\succ 1,\,\mathrm{then}\ \mathrm{supp}\,\mathfrak{m}^{\prime}\succ 1/x.\]
For \(r\in\mathbb{R}^{>}\) we have \((x^{r})^{\prime}=rx^{r-1}\succ 1/x\), so the claim holds for \(n=0\). Suppose the claim holds for a certain \(n\). Now \(G_{n+1}=G_{n}\exp(A_{n})\), \(G_{n}\) is a convex subgroup of \(G_{n+1}\), and
\[A_{n}\ =\ \big{\{}f\in\mathbb{R}[[G_{n}]]:\ \mathrm{supp}\,f\succ G_{n-1} \big{\}}\qquad(\mathrm{where}\ G_{-1}:=\{1\}).\]
Let \(\mathfrak{m}=\mathfrak{n}\exp(a)\in G_{n+1}\) where \(\mathfrak{n}\in G_{n}\), \(a\in A_{n}\); then
\[\mathfrak{m}\succ 1\quad\Longleftrightarrow\quad a>0,\,\mathrm{or}\ a=0,\, \mathfrak{n}\succ 1.\]
Suppose \(\mathfrak{m}\succ 1\). If \(a=0\), then \(\mathfrak{m}=\mathfrak{n}\), and we are done by inductive hypothesis, so assume \(a>0\). Then \(\mathfrak{m}^{\prime}=(\mathfrak{n}^{\prime}+\mathfrak{n}a^{\prime})\exp(a)\) and \((\mathfrak{n}^{\prime}+\mathfrak{n}a^{\prime})\in\mathbb{R}[[G_{n}]]\), a differential subfield of \(\mathbb{T}\), and \(\exp(a)>\mathbb{R}[[G_{n}]]\), hence \(\mathrm{supp}\,\mathfrak{m}^{\prime}\succ 1\succ 1/x\) as required.
Next, suppose \(\mathfrak{m}\in G^{\mathrm{LE}}\) and \(\mathfrak{m}\succ 1\). Take \(n\geqslant 1\) such that \(\mathfrak{m}\!\!\uparrow^{n}\in G^{\mathrm{E}}\). We have \((\mathfrak{m}\!\!\uparrow^{n})^{\prime}=(\mathfrak{m}^{\prime}\cdot\ell_{0}\ell _{1}\cdots\ell_{n-1})\!\!\uparrow^{n}\). For \(\mathfrak{n}\in\mathrm{supp}\,\mathfrak{m}^{\prime}\) and using \(\mathfrak{m}\!\!\uparrow^{n}\succ 1\) this gives
\[(\mathfrak{n}\cdot\ell_{0}\ell_{1}\cdots\ell_{n-1})\!\!\uparrow^{n}\ \succ\ 1/x\]
by what we proved for monomials in \(G^{\mathrm{E}}\). Applying \(\downarrow_{n}\) this yields \(\mathfrak{n}\succ 1/(\ell_{0}\ell_{1}\cdots\ell_{n})\), hence \(\mathfrak{n}\in\Lambda_{\mathbb{T}}\) as claimed.
Proof of Proposition 2.5.42.: Applying \(\partial\) to the decomposition (2.5.3) gives
\[\mathbb{T}\ =\ \partial(\mathbb{T}_{\succ})\oplus\partial(\mathbb{T}_{\prec}).\]
Now \(\partial(\mathbb{T}_{\succ})\subseteq\Lambda_{\mathbb{T}}\) by Lemma 2.5.43, and \(\partial(\mathbb{T}_{\prec})\subseteq\mathrm{I}(\mathbb{T})\), and so these two inclusions are equalities. Thus \(\int\Lambda_{\mathbb{T}}=\mathbb{T}_{\succ}\), from which the proposition follows.
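For example, \(1\in\Lambda_{\mathbb{T}}\) with \(\int 1=x\), so the isomorphism sends \(\mathrm{e}(\mathrm{i})\) to \(\mathrm{e}^{\mathrm{i}x}\), and \(2x\in\Lambda_{\mathbb{T}}\) with \(\int 2x=x^{2}\), so it sends \(\mathrm{e}(2x\mathrm{i})\) to \(\mathrm{e}^{\mathrm{i}x^{2}}\). This is compatible with the derivations: \(\mathrm{e}(h\mathrm{i})^{\prime}=h\mathrm{i}\,\mathrm{e}(h\mathrm{i})\) corresponds to \(\bigl(\mathrm{e}^{\mathrm{i}\int h}\bigr)^{\prime}=\mathrm{i}h\,\mathrm{e}^{\mathrm{i}\int h}\) for \(h\in\Lambda_{\mathbb{T}}\).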
**Proposition 2.5.44**.: _There is a unique group morphism \(\exp\colon K=\mathbb{T}[\mathrm{i}]\to\mathbb{O}^{\times}\) that extends the given exponential maps \(\exp\colon\mathbb{T}\to\mathbb{T}^{\times}\) and \(\exp\colon\mathbb{C}\to\mathbb{C}^{\times}\), and such that \(\exp(\mathrm{i}f)=\mathrm{e}^{\mathrm{i}f}\) for all \(f\in\mathbb{T}_{\succ}\) and \(\exp(\varepsilon)=\sum_{n}\frac{\varepsilon^{n}}{n!}\) for all \(\varepsilon\in\smallo\). It is surjective, has kernel \(2\pi\mathrm{i}\mathbb{Z}\subseteq\mathbb{C}\), and satisfies \(\exp(f)^{\prime}=f^{\prime}\exp(f)\) for all \(f\in K\)._
Proof.: The first statement follows easily from the decompositions
\[K\ =\ \mathbb{T}\oplus\mathrm{i}\mathbb{T}\ =\ \mathbb{T}\oplus\mathrm{i}\mathbb{T}_{\succ}\oplus\mathrm{i}\mathbb{R}\oplus\mathrm{i}\,\smallo_{\mathbb{T}},\qquad\mathbb{C}\ =\ \mathbb{R}\oplus\mathrm{i}\mathbb{R},\qquad\smallo\ =\ \smallo_{\mathbb{T}}\oplus\mathrm{i}\,\smallo_{\mathbb{T}}\]
of \(K\), \(\mathbb{C}\), and \(\smallo=\smallo_{K}\) as internal direct sums of \(\mathbb{R}\)-linear subspaces. Next,
\[\mathbb{O}^{\times}\ =\ K^{\times}\,\mathrm{e}^{\mathrm{i}\mathbb{T}_{\succ}}\ =\ \mathbb{T}^{>}\cdot S_{\mathbb{C}}\cdot(1+\smallo)\cdot\mathrm{e}^{\mathrm{i}\mathbb{T}_{\succ}},\qquad S_{\mathbb{C}}\ :=\ \big\{z\in\mathbb{C}:\ |z|=1\big\},\]
by Lemmas 2.1.1 and 1.2.4, and Corollary 1.2.7. Now \(\mathbb{T}^{>}=\exp(\mathbb{T})\) and \(S_{\mathbb{C}}=\exp(\mathrm{i}\mathbb{R})\), so surjectivity follows from \(\exp(\smallo)=1+\smallo\), a consequence of the well-known bijectivity of the map \(\varepsilon\mapsto\sum_{n}\frac{\varepsilon^{n}}{n!}\colon\smallo\to 1+\smallo\), whose inverse is given by
\[1+\delta\mapsto\log(1+\delta):=\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n}\delta^{n}\qquad(\delta\in\smallo).\]
That the kernel is \(2\pi\mathrm{i}\mathbb{Z}\) follows from the initial decomposition of the additive group of \(K\) as \(\mathbb{T}\oplus\mathrm{i}\mathbb{T}_{\succ}\oplus\mathrm{i}\mathbb{R}\oplus\mathrm{i}\,\smallo_{\mathbb{T}}\). The identity \(\exp(f)^{\prime}=f^{\prime}\exp(f)\) for \(f\in K\) follows from it being satisfied for \(f\in\mathbb{T}\), \(f\in\mathrm{i}\mathbb{T}_{\succ}\), \(f\in\mathbb{C}\), and \(f\in\smallo\).
### Valuations on the Universal Exponential Extension
_In this section \(K\) is a valued differential field with algebraically closed constant field \(C\subseteq\mathcal{O}\) and divisible group \(K^{\dagger}\) of logarithmic derivatives._ Then \(\Gamma=v(K^{\times})\) is also divisible, since we have a group isomorphism
\[va\mapsto a^{\dagger}+(\mathcal{O}^{\times})^{\dagger}\ :\ \Gamma\to K^{ \dagger}/(\mathcal{O}^{\times})^{\dagger}\qquad(a\in K^{\times}).\]
Let \(\Lambda\) be a complement of the \(\mathbb{Q}\)-linear subspace \(K^{\dagger}\) of \(K\), let \(\lambda\) range over \(\Lambda\), let \(\mathrm{U}=K\big{[}\mathrm{e}(\Lambda)\big{]}\) be the universal exponential extension of \(K\) constructed in Section 2.2 and set \(\Omega:=\mathrm{Frac}(\mathrm{U})\). Thus \(\Omega\) is a differential field with constant field \(C\).
**The gaussian extension.** We equip \(\mathrm{U}\) with the gaussian extension \(v_{\mathrm{g}}\) of the valuation of \(K\) as defined in Section 2.1; so for \(f\in\mathrm{U}\) with spectral decomposition \((f_{\lambda})\):
\[v_{\mathrm{g}}(f)\ =\ \min_{\lambda}v(f_{\lambda}),\]
and hence
\[v_{\mathrm{g}}(f^{\prime})\ =\ \min_{\lambda}v(f^{\prime}_{\lambda}+\lambda f_{ \lambda}).\]
The field \(\Omega\) with the valuation extending \(v_{\mathrm{g}}\) is a valued differential field extension of \(K\), but it can happen that \(K\) has small derivation, whereas \(\Omega\) does not:
_Example_.: Let \(K=C(\!(t^{\mathbb{Q}})\!)\) and \(\Lambda\) be as in Example 2.2.4, so \(t\prec 1\prec x=t^{-1}\) and \(t^{\prime}=-t^{2}\). Then \(K\) is d-valued of \(H\)-type with small derivation, but in \(\Omega\) with the above valuation,
\[t\,\mathrm{e}(x)\ \prec\ 1,\qquad\big{(}t\,\mathrm{e}(x)\big{)}^{\prime}\ =\ -t^{2}\,\mathrm{e}(x)+\mathrm{e}(x)\ \sim\ \mathrm{e}(x)\ \asymp\ 1.\]
To obtain an example where \(K=H[i]\) for a Liouville closed \(H\)-field \(H\) and \(\mathrm{i}^{2}=-1\), take \(K:=\mathbb{T}[i]\) and \(\Lambda:=\Lambda_{\mathbb{T}}\mathrm{i}\) as at the end of Section 2.5. Now \(x\in\Lambda_{\mathbb{T}}\) and in \(\Omega\) equipped with the above valuation we have for \(t:=x^{-1}\):
\[t\,\mathrm{e}(x\mathrm{i})\ \prec\ 1,\qquad\big{(}t\,\mathrm{e}(x\mathrm{i}) \big{)}^{\prime}\ =\ -t^{2}\,\mathrm{e}(x\mathrm{i})+i\,\mathrm{e}(x\mathrm{i})\ \sim\ i\,\mathrm{e}(x\mathrm{i})\ \asymp\ 1,\]
so \(\big{(}t\,\mathrm{e}(x\mathrm{i})\big{)}^{\prime}\not\prec t^{\dagger}\), hence \(\Omega\) is neither asymptotic nor has small derivation.
However, we show next that under certain assumptions on \(K\) with small derivation, \(\Omega\) also has a valuation which does make \(\Omega\) a valued differential field extension of \(K\) with small derivation. For this we rely on results from [ADH, 10.4]. Although such a valuation is less canonical than \(v_{\mathrm{g}}\), it is useful for harnessing the finiteness statements about the set \(\mathscr{E}^{\mathrm{e}}(A)\) of eventual exceptional values of \(A\in K[\partial]^{\neq}\) from Section 1.5 to obtain similar facts about the set of _ultimate exceptional values_ of \(A\) introduced later in this section.
### Spectral extensions
_In this subsection \(K\) is \(\mathrm{d}\)-valued of \(H\)-type with \(\Gamma\neq\{0\}\) and with small derivation._
**Lemma 2.6.1**.: _The valuation of \(K\) extends to a valuation on the field \(\Omega\) that makes \(\Omega\) a \(\mathrm{d}\)-valued extension of \(K\) of \(H\)-type with small derivation._
Proof.: Applying [ADH, 10.4.7] to an algebraic closure of \(K\) gives a \(\mathrm{d}\)-valued algebraically closed extension \(L\) of \(K\) of \(H\)-type with small derivation and \(C_{L}=C\) such that \(L^{\dagger}\supseteq K\). Let \(E:=\{y\in L^{\times}:\,y^{\dagger}\in K\}\), so \(E\) is a subgroup of \(L^{\times}\), \(E^{\dagger}=K\), and \(K[E]\) is an exponential extension of \(K\) with \(C_{K[E]}=C\). Then Corollary 2.2.10 gives an embedding \(\mathrm{U}\to L\) of differential \(K\)-algebras with image \(K[E]\), which extends to an embedding \(\Omega\to L\) of differential fields. Using this embedding to transfer the valuation of \(L\) to \(\Omega\) gives a valuation as required.
A **spectral extension** of the valuation of \(K\) to \(\Omega\) is a valuation on the field \(\Omega\) with the properties stated in Lemma 2.6.1. If \(K\) is \(\upomega\)-free, then so is \(\Omega\) equipped with any spectral extension of the valuation of \(K\), by [ADH] (and then \(\Omega\) has rational asymptotic integration by [ADH, 11.7]). We do not know whether this goes through with "\(\uplambda\)-free" instead of "\(\upomega\)-free". Here is something weaker:
**Lemma 2.6.2**.: _Suppose \(K\) is algebraically closed and \(\uplambda\)-free. Then some spectral extension of the valuation of \(K\) to \(\Omega\) makes \(\Omega\) a \(\mathrm{d}\)-valued field with divisible value group and asymptotic integration._
Proof.: Take \(L\), \(E\) and an embedding \(\Omega\to L\) as in the proof of Lemma 2.6.1. Use this embedding to identify \(\Omega\) with a differential subfield of \(L\), so \(\mathrm{U}=K[E]\) and \(\Omega=K(E)\), and equip \(\Omega\) with the spectral extension of the valuation of \(K\) obtained by restricting the valuation of \(L\) to \(\Omega\). Since \(L\) is algebraically closed, \(E\) is divisible, and \(\Gamma_{L}=\Gamma+v(E)\) by [ADH, 10.4.7(iv)]. So \(\Gamma_{\Omega}=\Gamma_{L}\) is divisible. Let \(a\in K^{\times}\), \(y\in E\). Then \(K(y)\) has asymptotic integration by Proposition 1.4.12, hence \(v(ay)\in(\Gamma_{K(y)}^{\neq})^{\prime}\subseteq(\Gamma_{\Omega}^{\neq})^{\prime}\). Thus \(\Omega\) has asymptotic integration.
In the rest of this subsection \(\Omega\) is equipped with a spectral extension \(v\) (with value group \(\Gamma_{\Omega}\)) of the valuation of \(K\). The proof of Lemma 2.6.1 and [ADH, 10.4.7] show that we can choose \(v\) so that \(\Psi_{\Omega}\subseteq\Gamma\); but under suitable hypotheses on \(K\), this is automatic:
**Lemma 2.6.3**.: _Suppose \(K\) has asymptotic integration and \(\mathrm{I}(K)\subseteq K^{\dagger}\). Then \(\Psi_{\Omega}\subseteq\Gamma\), the group morphism_
\[\lambda\mapsto v\big{(}\mathrm{e}(\lambda)\big{)}\;:\;\Lambda\to\Gamma_{\Omega} \tag{2.6.1}\]
_is injective, and \(\Gamma_{\Omega}\) is divisible with \(\Gamma_{\Omega}=\Gamma\oplus v\big{(}\mathrm{e}(\Lambda)\big{)}\)\((\)internal direct sum of \(\mathbb{Q}\)-linear subspaces of \(\Gamma_{\Omega}\)\()\). Moreover, \(\Psi_{\Omega}=\Psi^{\downarrow}\) in \(\Gamma\)._
Proof.: For \(a\in K^{\times}\) we have \(\big{(}a\,{\rm e}(\lambda)\big{)}^{\dagger}=a^{\dagger}+\lambda\in K\), and if \(a\,{\rm e}(\lambda)\asymp 1\), then
\[a^{\dagger}+\lambda\ =\ \big{(}a\,{\rm e}(\lambda)\big{)}^{\dagger}\in({\mathcal{ O}}_{\Omega}^{\times})^{\dagger}\cap K\ \subseteq\ {\rm I}(\Omega)\cap K\ =\ {\rm I}(K),\]
so \(\lambda\in\Lambda\cap\big(\mathrm{I}(K)+K^{\dagger}\big)=\Lambda\cap K^{\dagger}=\{0\}\) and \(a\asymp 1\). Thus for \(a_{1},a_{2}\in K^{\times}\) and distinct \(\lambda_{1},\lambda_{2}\in\Lambda\) we have \(a_{1}\,{\rm e}(\lambda_{1})\not\asymp a_{2}\,{\rm e}(\lambda_{2})\), and so for \(f\in{\rm U}\) with spectral decomposition \((f_{\lambda})\) we have \(vf=\min_{\lambda}v\big(f_{\lambda}\,{\rm e}(\lambda)\big)\). Hence
\[\Psi_{\Omega}\ \subseteq\big{\{}v(a^{\dagger}+\lambda):\ a\in K^{\times},\ \lambda\in\Lambda\big{\}}\ =\ v(K)\ =\ \Gamma_{\infty},\]
the map (2.6.1) is injective and \(\Gamma\cap v\big{(}{\rm e}(\Lambda)\big{)}=\{0\}\), and so \(\Gamma_{\Omega}=\Gamma\oplus v\big{(}{\rm e}(\Lambda)\big{)}\) (internal direct sum of subgroups of \(\Gamma_{\Omega}\)). Since \(\Gamma\) and \(\Lambda\) are divisible, so is \(\Gamma_{\Omega}\). Now \(\Psi_{\Omega}=\Psi^{\downarrow}\) follows from \(K=({\rm U}^{\times})^{\dagger}\subseteq\Omega^{\dagger}\) and \(K\) having asymptotic integration.
We can now improve on Lemma 2.5.1:
**Corollary 2.6.4**.: _Suppose \(K\) has asymptotic integration and \({\rm I}(K)\subseteq K^{\dagger}\), and let \(A\in K[\partial]^{\neq}\). Then the \(C\)-linear space \(\ker_{\rm U}A\) has a basis \({\mathcal{B}}\subseteq{\rm U}^{\times}\) such that \(v\) is injective on \({\mathcal{B}}\) and \(v({\mathcal{B}})=v(\ker_{\rm U}^{\neq}A)\), and thus \(|v(\ker_{\rm U}^{\neq}A)|=\dim_{C}\ker_{\rm U}A\)._
Proof.: By [ADH, 5.6.6] we have a basis \({\mathcal{B}}_{\lambda}\) of the \(C\)-linear space \(\ker_{K}A_{\lambda}\) such that \(v\) is injective on \({\mathcal{B}}_{\lambda}\) and \(v({\mathcal{B}}_{\lambda})=v(\ker_{K}^{\neq}A_{\lambda})\). Then \({\mathcal{B}}:=\bigcup_{\lambda}{\mathcal{B}}_{\lambda}\,{\rm e}(\lambda)\) is a basis of \(\ker_{\rm U}A\). It has the desired properties by Lemma 2.6.3.
**Corollary 2.6.5**.: _Suppose \(K\) is \(\uplambda\)-free and \({\rm I}(K)\subseteq K^{\dagger}\). Then \(\Omega\) has asymptotic integration, and so its \(H\)-asymptotic couple is closed, by Lemma 2.6.3._
Proof.: By Lemma 2.6.3, \(\Gamma_{\Omega}=\Gamma+v\big{(}{\rm e}(\Lambda)\big{)}\). Using Proposition 1.4.12 as in the proof of Lemma 2.6.2, with \({\rm e}(\Lambda)\) in place of \(E\), shows \(\Omega\) has asymptotic integration.
**An application \((^{*})\).** We use spectral extensions to prove an analogue of [ADH, 16.0.3]:
**Theorem 2.6.6**.: _If \(K\) is an \(\omega\)-free newtonian \({\rm d}\)-valued field, then \(K\) has no proper \({\rm d}\)-algebraic \({\rm d}\)-valued field extension \(L\) of \(H\)-type with \(C_{L}=C\) and \(L^{\dagger}\cap K=K^{\dagger}\)._
We retain of course our assumption that \(C\) is algebraically closed and \(K^{\dagger}\) is divisible. In the same way that [ADH, 16.0.3] follows from [ADH, 16.1.1], Theorem 2.6.6 follows from an analogue of [ADH, 16.1.1]:
**Lemma 2.6.7**.: _Let \(K\) be an \(\omega\)-free newtonian \({\rm d}\)-valued field, \(L\) a \({\rm d}\)-valued field extension of \(K\) of \(H\)-type with \(C_{L}=C\) and \(L^{\dagger}\cap K=K^{\dagger}\), and let \(f\in L\setminus K\). Suppose there is no \(y\in K\langle f\rangle\setminus K\) such that \(K\langle y\rangle\) is an immediate extension of \(K\). Then the \({\mathbb{Q}}\)-linear space \({\mathbb{Q}}\Gamma_{K\langle f\rangle}/\Gamma\) is infinite-dimensional._
The proof of Lemma 2.6.7 is much like that of [ADH, 16.1.1], except where the latter uses that any \(b\) in a Liouville closed \(H\)-field equals \(a^{\dagger}\) for some nonzero \(a\) in that field. This might not work with elements of \(K\), and the remedy is to take instead for every \(b\in K\) an element \(a\) in \({\rm U}^{\times}\) with \(b=a^{\dagger}\). The relevant computation should then take place in the differential fraction field \(\Omega_{L}\) of \({\rm U}_{L}\) instead of in \(L\) where \(\Omega_{L}\) is equipped with a spectral extension of the valuation of \(L\). For all this to make sense, we first take an active \(\phi\) in \(K\) and replace \(K\) and \(L\) by \(K^{\phi}\) and \(L^{\phi}\), arranging in this way that the derivation of \(L\) (and of \(K\)) is small. Next we replace \(L\) by its algebraic closure, so that \(L^{\dagger}\) is divisible, while preserving \(L^{\dagger}\cap K=K^{\dagger}\) by Lemma 1.2.1, and also preserving the other conditions on \(L\) in Lemma 2.6.7, as well
as the derivation of \(L\) being small. This allows us to identify \(\mathrm{U}\) with a differential subring of \(\mathrm{U}_{L}\) as in Lemma 2.2.12, and accordingly \(\Omega\) with a differential subfield of \(\Omega_{L}\). We equip \(\Omega_{L}\) with a spectral extension of the valuation of \(L\) (possible by Lemma 2.6.1), and make \(\Omega\) a valued subfield of \(\Omega_{L}\). Then the valuation of \(\Omega\) is a spectral extension of the valuation of \(K\) to \(\Omega\), so we have the following inclusions of d-valued fields:
\[\begin{array}{ccc}
\Omega&\subseteq&\Omega_{L}\\
\cup&&\cup\\
K&\subseteq&L
\end{array}\]
With these preparations we can now give the proof of Lemma 2.6.7:
Proof.: As we just indicated we arrange that \(L\) is algebraically closed with small derivation, and with an inclusion diagram of d-valued fields involving \(\Omega\) and \(\Omega_{L}\), as above. (This will not be used until we arrive at the Claim below.)
By [ADH, 14.0.2], \(K\) is asymptotically d-algebraically maximal. Using this and the assumption about \(K\langle f\rangle\) it follows as in the proof of [ADH, 16.1.1] that there is no divergent pc-sequence in \(K\) with a pseudolimit in \(K\langle f\rangle\). Thus every \(y\) in \(K\langle f\rangle\setminus K\) has a _best approximation in \(K\)_, that is, an element \(b\in K\) such that \(v(y-b)=\max v(y-K)\). For such \(b\) we have \(v(y-b)\notin\Gamma\), since \(C_{L}=C\).
Now pick a best approximation \(b_{0}\) in \(K\) to \(f_{0}:=f\), and set \(f_{1}:=(f_{0}-b_{0})^{\dagger}\). Then \(f_{1}\in K\langle f\rangle\setminus K\), since \(L^{\dagger}\cap K=K^{\dagger}\) and \(C=C_{L}\). Thus \(f_{1}\) has a best approximation \(b_{1}\) in \(K\), and continuing this way, we obtain a sequence \((f_{n})\) in \(K\langle f\rangle\setminus K\) and a sequence \((b_{n})\) in \(K\), such that \(b_{n}\) is a best approximation in \(K\) to \(f_{n}\) and \(f_{n+1}=(f_{n}-b_{n})^{\dagger}\) for all \(n\). Thus \(v(f_{n}-b_{n})\in\Gamma_{K\langle f\rangle}\setminus\Gamma\) for all \(n\).
**Claim:**\(v(f_{0}-b_{0}),v(f_{1}-b_{1}),v(f_{2}-b_{2}),\dots\) _are \(\mathbb{Q}\)-linearly independent over \(\Gamma\)._
To prove this claim, take \(a_{n}\in\mathrm{U}^{\times}\) with \(a_{n}^{\dagger}=b_{n}\) for \(n\geqslant 1\). Then in \(\Omega_{L}\),
\[f_{n}-b_{n}\ =\ (f_{n-1}-b_{n-1})^{\dagger}-a_{n}^{\dagger}\ =\ \left(\frac{f_{n-1}-b_{n-1}}{a_{n}}\right)^{\dagger}\qquad(n \geqslant 1).\]
With \(\psi:=\psi_{\Omega_{L}}\) and \(\alpha_{n}=v(a_{n})\in\Gamma_{\Omega}\subseteq\Gamma_{\Omega_{L}}\) for \(n\geqslant 1\), we get
\[v(f_{n}-b_{n})\ =\ \psi\big(v(f_{n-1}-b_{n-1})-\alpha_{n}\big)\qquad(n\geqslant 1),\]
so by an easy induction on \(n\),
\[v(f_{n}-b_{n})\ =\ \psi_{\alpha_{1},\dots,\alpha_{n}}\big(v(f_{0}-b_{0})\big)\qquad(n\geqslant 1).\]
Suppose towards a contradiction that \(v(f_{0}-b_{0}),\dots,v(f_{n}-b_{n})\) are \(\mathbb{Q}\)-linearly dependent over \(\Gamma\). Then we have \(m<n\) and \(q_{1},\dots,q_{n-m}\in\mathbb{Q}\) such that
\[v(f_{m}-b_{m})+q_{1}v(f_{m+1}-b_{m+1})+\dots+q_{n-m}v(f_{n}-b_{n})\in\Gamma.\]
For \(\gamma:=v(f_{m}-b_{m})\in\Gamma_{L}\setminus\Gamma\) this gives
\[\gamma+q_{1}\psi_{\alpha_{m+1}}(\gamma)+\dots+q_{n-m}\psi_{\alpha_{m+1},\dots, \alpha_{n}}(\gamma)\in\Gamma.\]
By Lemma 1.2.9 we have \(\mathrm{I}(K)\subseteq K^{\dagger}\), so the \(H\)-asymptotic couple of \(\Omega\) is closed with \(\Psi_{\Omega}\subseteq\Gamma\), by Lemma 2.6.3 and Corollary 2.6.5. Hence \(\gamma\in\Gamma_{\Omega}\) by [ADH, 9.9.2]. Together with \(\Psi_{\Omega}\subseteq\Gamma\) and \(\alpha_{m+1},\dots,\alpha_{n}\in\Gamma_{\Omega}\) this gives
\[\psi_{\alpha_{m+1}}(\gamma),\dots,\psi_{\alpha_{m+1},\dots,\alpha_{n}}(\gamma)\in\Gamma\]
and thus \(\gamma\in\Gamma\), a contradiction.
**Ultimate exceptional values.**_In this subsection \(K\) is \(H\)-asymptotic with small derivation and asymptotic integration._ Also \(A\in K[\partial]^{\neq}\) and \(r:=\operatorname{order}(A)\), and \(\gamma\) ranges over \(\Gamma=v(K^{\times})\). We have \(v(\ker^{\neq}A_{\lambda})\subseteq\mathscr{E}^{\rm e}(A_{\lambda})\), so if \(\lambda\) is an eigenvalue of \(A\) with respect to \(\Lambda\), then \(\mathscr{E}^{\rm e}(A_{\lambda})\neq\emptyset\). We call the elements of the set
\[\mathscr{E}^{\rm u}(A)\ =\ \mathscr{E}^{\rm u}_{K}(A)\ :=\ \bigcup_{\lambda} \mathscr{E}^{\rm e}(A_{\lambda})\ =\ \big{\{}\gamma:\ {\rm nwt}_{A_{\lambda}}(\gamma)\geqslant 1 \ {\rm for\ some}\ \lambda\big{\}}\]
the **ultimate exceptional values of \(A\)** with respect to \(\Lambda\). The definition of \(\mathscr{E}^{\rm u}_{K}(A)\) involves our choice of \(\Lambda\), but we are leaving this implicit to avoid complicated notation. In Section 4.4 we shall restrict \(K\) and \(\Lambda\) so that \(\mathscr{E}^{\rm u}(A)\) does not depend any longer on the choice of \(\Lambda\). There we shall use the following observation:
**Lemma 2.6.8**.: _Let \(a,b\in K\) be such that \(a-b\in(\mathcal{O}^{\times})^{\dagger}\). Then for all \(\gamma\) we have \({\rm nwt}_{A_{a}}(\gamma)={\rm nwt}_{A_{b}}(\gamma)\); in particular, \(\mathscr{E}^{\rm e}(A_{a})=\mathscr{E}^{\rm e}(A_{b})\)._
Proof.: Use that if \(u\in\mathcal{O}^{\times}\) and \(a-b=u^{\dagger}\), then \(A_{a}=(A_{b})_{\ltimes u}\).
**Corollary 2.6.9**.: _Let \(\Lambda^{*}\) be a complement of the \(\mathbb{Q}\)-linear subspace \(K^{\dagger}\) of \(K\) and let \(\lambda\mapsto\lambda^{*}\colon\Lambda\to\Lambda^{*}\) be the group isomorphism with \(\lambda-\lambda^{*}\in K^{\dagger}\) for all \(\lambda\). If \(\lambda-\lambda^{*}\in(\mathcal{O}^{\times})^{\dagger}\) for all \(\lambda\), then \({\rm nwt}_{A_{\lambda}}(\gamma)={\rm nwt}_{A_{\lambda^{*}}}(\gamma)\) for all \(\gamma\), so \(\mathscr{E}^{\rm u}(A)=\bigcup_{\lambda}\mathscr{E}^{\rm e}(A_{\lambda^{*}})\)._
_Remark 2.6.10_.: For \(a\in K^{\times}\) we have \(\mathscr{E}^{\rm u}(aA)=\mathscr{E}^{\rm u}(A)\) and \(\mathscr{E}^{\rm u}(Aa)=\mathscr{E}^{\rm u}(A)-va\). Note also that \(\mathscr{E}^{\rm e}(A)=\mathscr{E}^{\rm e}(A_{0})\subseteq\mathscr{E}^{\rm u }(A)\). Let \(\phi\in K^{\times}\) be active in \(K\), and set \(\lambda^{\phi}:=\phi^{-1}\lambda\). Then \(\Lambda^{\phi}:=\phi^{-1}\Lambda\) is a complement of the \(\mathbb{Q}\)-linear subspace \((K^{\phi})^{\dagger}=\phi^{-1}K^{\dagger}\) of \(K^{\phi}\), and \((A^{\phi})_{\lambda^{\phi}}=(A_{\lambda})^{\phi}\). Hence \(\mathscr{E}^{\rm u}_{K}(A)\) agrees with the set \(\mathscr{E}^{\rm u}_{K^{\phi}}(A^{\phi})\) of ultimate exceptional values of \(A^{\phi}\) with respect to \(\Lambda^{\phi}\).
_Remark 2.6.11_.: Suppose \(L\) is an \(H\)-asymptotic extension of \(K\) with asymptotic integration and algebraically closed constant field \(C_{L}\) such that \(L^{\dagger}\) is divisible, and \(\Psi\) is cofinal in \(\Psi_{L}\) or \(K\) is \(\lambda\)-free. Then \(\mathscr{E}^{\rm e}(A_{\lambda})=\mathscr{E}^{\rm e}_{L}(A_{\lambda})\cap\Gamma\), by Lemma 1.5.1 and Corollary 1.8.10. Hence if \(\Lambda_{L}\supseteq\Lambda\) is a complement of the subspace \(L^{\dagger}\) of the \(\mathbb{Q}\)-linear space \(L\), and \(\mathscr{E}^{\rm u}_{L}(A)\) is the set of ultimate exceptional values of \(A\) (viewed as an element of \(L[\partial]\)) with respect to \(\Lambda_{L}\), then \(\mathscr{E}^{\rm u}(A)\subseteq\mathscr{E}^{\rm u}_{L}(A)\). (Note that such a complement \(\Lambda_{L}\) exists iff \(L^{\dagger}\cap K=K^{\dagger}\).)
In the rest of this subsection we equip \(\mathrm{U}\) with the gaussian extension \(v_{\rm g}\) of the valuation of \(K\). Recall that we have a decomposition \(\ker_{\mathrm{U}}A=\bigoplus_{\lambda}(\ker A_{\lambda})\,{\rm e}(\lambda)\) of the \(C\)-linear space \(\ker_{\mathrm{U}}A\) as an internal direct sum of subspaces, and hence
\[v_{\rm g}(\ker_{\mathrm{U}}^{\neq}A)\ =\ \bigcup_{\lambda}\,v(\ker^{\neq}A_{ \lambda})\ \subseteq\ \bigcup_{\lambda}\,\mathscr{E}^{\rm e}(A_{\lambda})\ =\ \mathscr{E}^{\rm u}(A). \tag{2.6.2}\]
Here are some consequences:
**Lemma 2.6.12**.: _Suppose \(K\) is \(r\)-linearly newtonian. Then \(v_{\rm g}(\ker_{\mathrm{U}}^{\neq}A)=\mathscr{E}^{\rm u}(A)\)._
Proof.: By Proposition 1.5.2 we have \(v(\ker^{\neq}A_{\lambda})=\mathscr{E}^{\rm e}(A_{\lambda})\) for each \(\lambda\). Therefore \(v_{\rm g}(\ker_{\mathrm{U}}^{\neq}A)=\mathscr{E}^{\rm u}(A)\) by (2.6.2).
**Lemma 2.6.13**.: _Suppose \(K\) is \(\mathrm{d}\)-valued. Then \(|v_{\rm g}(\ker_{\mathrm{U}}^{\neq}A)|\leqslant\dim_{C}\ker_{\mathrm{U}}A \leqslant r\)._
Proof.: By [ADH, 5.6.6(i)] applied to \(A_{\lambda}\) in place of \(A\) we have
\[|v(\ker^{\neq}A_{\lambda})|\ =\ \dim_{C}\ker A_{\lambda}\ =\ \operatorname{mult}_{ \lambda}(A)\qquad\text{ for all }\lambda\]
and thus by (2.6.2),
\[|v_{\operatorname{g}}(\ker_{\operatorname{U}}^{\neq}A)|\ \leqslant\ \sum_{\lambda}|v(\ker^{\neq}A_{\lambda})|\ =\ \sum_{\lambda}\,\operatorname{mult}_{ \lambda}(A)\ =\ \dim_{C}\ker_{\operatorname{U}}A\ \leqslant\ r\]
as claimed.
**Lemma 2.6.14**.: _Suppose \(\operatorname{I}(K)\subseteq K^{\dagger}\) and \(r=1\). Then_
\[v_{\operatorname{g}}(\ker_{\operatorname{U}}^{\neq}A)\ =\ \mathscr{E}^{ \operatorname{u}}(A),\qquad|\mathscr{E}^{\operatorname{u}}(A)|\ =\ 1.\]
Proof.: Arrange \(A=\partial-g\), \(g\in K\), and take \(f\in K^{\times}\) and \(\lambda\) such that \(g=f^{\dagger}+\lambda\). Then \(u:=f\operatorname{e}(\lambda)\in\operatorname{U}^{\times}\) satisfies \(A(u)=0\), hence \(\ker_{\operatorname{U}}^{\neq}A=Cu\) and thus \(v_{\operatorname{g}}(\ker_{\operatorname{U}}^{\neq}A)=\{vf\}\). By Lemma 1.5.9 we have \(v(\ker^{\neq}A_{\lambda})=\mathscr{E}^{\operatorname{e}}(A_{\lambda})\) for all \(\lambda\) and hence \(v_{\operatorname{g}}(\ker_{\operatorname{U}}^{\neq}A)=\mathscr{E}^{ \operatorname{u}}(A)\) by (2.6.2).
**Corollary 2.6.15**.: _If \(\operatorname{I}(K)\subseteq K^{\dagger}\) and \(a\in K^{\times}\), then \(\mathscr{E}^{\operatorname{e}}(\partial-a^{\dagger})=\mathscr{E}^{\operatorname {u}}(\partial-a^{\dagger})=\{va\}\)._
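For example, taking \(a=1\) in Corollary 2.6.15 gives \(\mathscr{E}^{\operatorname{e}}(\partial)=\mathscr{E}^{\operatorname{u}}(\partial)=\{0\}\): the proof of Lemma 2.6.14 with \(g=0\) (so \(f=1\), \(\lambda=0\), \(u=1\)) shows \(\ker_{\operatorname{U}}\partial=C\), and hence
\[v_{\operatorname{g}}(\ker_{\operatorname{U}}^{\neq}\partial)\ =\ \{v(1)\}\ =\ \{0\}.\]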
Proposition 2.6.17 below partly extends Lemma 2.6.14.
**Spectral extensions and ultimate exceptional values.**_In this subsection \(K\) is \(\operatorname{d}\)-valued of \(H\)-type with small derivation, asymptotic integration, and \(\operatorname{I}(K)\subseteq K^{\dagger}\)._ Also \(A\in K[\partial]^{\neq}\) has order \(r\) and \(\gamma\) ranges over \(\Gamma\).
Suppose \(\Omega\) is equipped with a spectral extension \(v\) of the valuation of \(K\). Let \(g\in K^{\times}\) with \(vg=\gamma\). The Newton weight of \(A_{\lambda}g\in K[\partial]\) does not change in passing from \(K\) to \(\Omega\), since \(\Psi\) is cofinal in \(\Psi_{\Omega}\) by Lemma 2.6.3; see [ADH, 11.1]. Thus
\[\operatorname{nwt}_{A_{\lambda}}(\gamma)\ =\ \operatorname{nwt}(A_{\lambda}g)\ =\ \operatorname{nwt}\big(Ag\operatorname{e}(\lambda)\big)\ =\ \operatorname{nwt}_{A}\big(v(g\operatorname{e}(\lambda))\big)\ =\ \operatorname{nwt}_{A}\big(\gamma+v(\operatorname{e}(\lambda))\big).\]
In particular, using \(\Gamma_{\Omega}=\Gamma\oplus v\bigl{(}\operatorname{e}(\Lambda)\bigr{)}\),
\[\mathscr{E}^{\operatorname{e}}_{\Omega}(A)\ =\ \bigcup_{\lambda}\mathscr{E}^{ \operatorname{e}}(A_{\lambda})+v\bigl{(}\operatorname{e}(\lambda)\bigr{)} \qquad\text{(a disjoint union)}. \tag{2.6.3}\]
Thus \(\mathscr{E}^{\operatorname{u}}(A)=\pi\bigl{(}\mathscr{E}^{\operatorname{e}}_{ \Omega}(A)\bigr{)}\) where \(\pi\colon\Gamma_{\Omega}\to\Gamma\) is given by \(\pi\bigl{(}\gamma+v\bigl{(}\operatorname{e}(\lambda)\bigr{)}\bigr{)}=\gamma\).
**Lemma 2.6.16**.: _We have \(\dim_{C}\ker_{\operatorname{U}}A\leqslant\sum_{\lambda}|\mathscr{E}^{ \operatorname{e}}(A_{\lambda})|\), and_
\[\dim_{C}\ker_{\operatorname{U}}A\ =\ \sum_{\lambda}|\mathscr{E}^{\operatorname{e}}(A_{ \lambda})|\ \Longleftrightarrow\ v(\ker^{\neq}A_{\lambda})\ =\ \mathscr{E}^{\operatorname{e}}(A_{\lambda})\text{ for all }\lambda.\]
_Moreover, if \(\dim_{C}\ker_{\operatorname{U}}A=\sum_{\lambda}|\mathscr{E}^{\operatorname{e} }(A_{\lambda})|\), then \(v_{\operatorname{g}}(\ker_{\operatorname{U}}^{\neq}A)=\mathscr{E}^{ \operatorname{u}}(A)\)._
Proof.: Clearly, \(\dim_{C}\ker_{\operatorname{U}}A\leqslant\dim_{C}\ker_{\Omega}A\). Equip \(\Omega\) with a spectral extension of the valuation of \(K\). Then \(\dim_{C}\ker_{\Omega}A=|v(\ker_{\Omega}^{\neq}A)|\) and \(v(\ker_{\Omega}^{\neq}A)\subseteq\mathscr{E}^{\operatorname{e}}_{\Omega}(A)\) by [ADH, 5.6.6(i)] and [ADH, p. 481], respectively, applied to \(\Omega\) in the role of \(K\). Also \(|\mathscr{E}^{\operatorname{e}}_{\Omega}(A)|=\sum_{\lambda}|\mathscr{E}^{ \operatorname{e}}(A_{\lambda})|\) (a sum of cardinals) by the remarks preceding the lemma. This yields the first claim of the lemma.
Next, note that \(v(\ker^{\neq}A_{\lambda})\subseteq\mathscr{E}^{\operatorname{e}}(A_{\lambda})\) for all \(\lambda\). Hence from (2.6.3) and
\[v(\ker_{\operatorname{U}}^{\neq}A)\ =\ \bigcup_{\lambda}v(\ker^{\neq}A_{ \lambda})+v\bigl{(}\operatorname{e}(\lambda)\bigr{)}\qquad\text{(a disjoint union)}\]
we obtain:
\[v(\ker_{\operatorname{U}}^{\neq}A)=\mathscr{E}^{\operatorname{e}}_{\Omega}(A) \quad\Longleftrightarrow\quad v(\ker^{\neq}A_{\lambda})\ =\ \mathscr{E}^{\operatorname{e}}(A_{\lambda})\text{ for all }\lambda.\]
Also \(|v(\ker_{\rm U}^{\neq}A)|=\dim_{C}\ker_{\rm U}A\) by [ADH, 2.3.13], and
\[v(\ker_{\rm U}^{\neq}A)\ \subseteq\ v(\ker_{\Omega}^{\neq}A)\ \subseteq\ \mathscr{E}_{\Omega}^{\rm e}(A),\qquad|\mathscr{E}_{\Omega}^{\rm e}(A)|\ =\ \sum_{\lambda}|\mathscr{E}^{\rm e}(A_{\lambda})|.\]
This yields the displayed equivalence.
Suppose \(\dim_{C}\ker_{\rm U}A=\sum_{\lambda}|\mathscr{E}^{\rm e}(A_{\lambda})|\); we need to show \(v_{\rm g}(\ker_{\rm U}^{\neq}A)=\mathscr{E}^{\rm u}(A)\). We have \(\pi\big(\mathscr{E}_{\Omega}^{\rm e}(A)\big)=\mathscr{E}^{\rm u}(A)\) for the above projection map \(\pi\colon\Gamma_{\Omega}\to\Gamma\), so it is enough to show \(\pi\big(v(\ker_{\rm U}^{\neq}A)\big)=v_{\rm g}(\ker_{\rm U}^{\neq}A)\). For that, note that for \(\mathcal{B}\subseteq K^{\times}\operatorname{e}(\Lambda)\) in Corollary 2.6.4 we have
\[\pi\big{(}v(\ker_{\rm U}^{\neq}A)\big{)}\ =\ \pi(v\mathcal{B})\ =\ v_{\rm g}( \mathcal{B})\ =\ v_{\rm g}(\ker_{\rm U}^{\neq}A),\]
using for the last equality the details in the proof of Corollary 2.6.4.
**Proposition 2.6.17**.: _Suppose \(K\) is \(\omega\)-free. Then \({\rm nwt}_{A_{\lambda}}(\gamma)=0\) for all but finitely many pairs \((\gamma,\lambda)\) and_
\[|\mathscr{E}^{\rm u}(A)|\ \leqslant\ \sum_{\lambda}|\mathscr{E}^{\rm e}(A_{ \lambda})|\ =\ \sum_{\gamma,\lambda}{\rm nwt}_{A_{\lambda}}(\gamma)\ \leqslant\ r.\]
_If \(\dim_{C}\ker_{\rm U}A=r\), then \(\sum_{\lambda}|\mathscr{E}^{\rm e}(A_{\lambda})|=r\) and \(v_{\rm g}(\ker_{\rm U}^{\neq}A)=\mathscr{E}^{\rm u}(A)\)._
Proof.: Equip \(\Omega\) with a spectral extension \(v\) of the valuation of \(K\). Then \(\Omega\) is \(\omega\)-free, so \(\sum_{\lambda}|\mathscr{E}^{\rm e}(A_{\lambda})|=|\mathscr{E}_{\Omega}^{\rm e}(A)|\leqslant r\) by the remarks preceding Lemma 2.6.16 and Corollary 1.5.5 applied to \(\Omega\) in place of \(K\). These remarks also give \({\rm nwt}_{A_{\lambda}}(\gamma)=0\) for all but finitely many pairs \((\gamma,\lambda)\), and so
\[\sum_{\gamma,\lambda}{\rm nwt}_{A_{\lambda}}(\gamma)\ =\ \sum_{\gamma,\lambda}{\rm nwt}_{A}\big(\gamma+v(\operatorname{e}(\lambda))\big)\ =\ |\mathscr{E}_{\Omega}^{\rm e}(A)|\ \leqslant\ r.\]
Corollary 1.5.5 applied to \(A_{\lambda}\) in place of \(A\) yields \(|\mathscr{E}^{\rm e}(A_{\lambda})|=\sum_{\gamma}{\rm nwt}_{A_{\lambda}}(\gamma)\) and so \(\sum_{\lambda}|\mathscr{E}^{\rm e}(A_{\lambda})|=\sum_{\gamma,\lambda}{\rm nwt} _{A_{\lambda}}(\gamma)\). This proves the first part (including the display). The rest follows from this and Lemma 2.6.16.
In the next lemma (to be used in the proof of Proposition 3.1.26), as well as in Corollary 2.6.23, \(L\) is a d-valued \(H\)-asymptotic extension of \(K\) with algebraically closed constant field and asymptotic integration (so \(L\) has small derivation), such that \(L^{\dagger}\) is divisible, \(L^{\dagger}\cap K=K^{\dagger}\), and \({\rm I}(L)\subseteq L^{\dagger}\). We also fix there a complement \(\Lambda_{L}\) of the \(\mathbb{Q}\)-linear subspace \(L^{\dagger}\) of \(L\) with \(\Lambda\subseteq\Lambda_{L}\). Let \({\rm U}_{L}=L\big{[}\operatorname{e}(\Lambda_{L})\big{]}\) be the corresponding universal exponential extension of \(L\) containing \({\rm U}=K\big{[}\operatorname{e}(\Lambda)\big{]}\) as a differential subring, as described in the remarks following Corollary 2.2.13, with differential fraction field \(\Omega_{L}\).
**Lemma 2.6.18**.: _Assume \(C_{L}=C\). Let \(\Omega_{L}\) be equipped with a spectral extension of the valuation of \(L\), and take \(\Omega\) as a valued subfield of \(\Omega_{L}\); so the valuation of \(\Omega\) is a spectral extension of the valuation of \(K\). Suppose \(\Psi\) is cofinal in \(\Psi_{L}\) or \(K\) is \(\lambda\)-free. Then \(\mathscr{E}_{\Omega_{L}}^{\rm e}(A)\cap\Gamma_{\Omega}\ =\ \mathscr{E}_{\Omega}^{\rm e}(A)\)._
Proof.: Let \(\mu\) range over \(\Lambda_{L}\). We have
\[\Gamma_{\Omega_{L}}\ =\ \Gamma_{L}\oplus v\big{(}\operatorname{e}(\Lambda_{L}) \big{)},\qquad\Gamma_{\Omega}\ =\ \Gamma\oplus v\big{(}\operatorname{e}(\Lambda)\big{)}\]
by Lemma 2.6.3 and
\[\mathscr{E}^{\rm e}_{\Omega_{L}}(A)\ =\ \bigcup_{\mu}\mathscr{E}^{\rm e}_{L}(A_{\mu})+v\big(\operatorname{e}(\mu)\big),\qquad\mathscr{E}^{\rm e}_{\Omega}(A)\ =\ \bigcup_{\lambda}\mathscr{E}^{\rm e}(A_{\lambda})+v\big(\operatorname{e}(\lambda)\big)\]
by (2.6.3). Hence
\[\mathscr{E}^{\rm e}_{\Omega_{L}}(A)\cap\Gamma_{\Omega}\ =\ \bigcup_{\lambda}\big{(} \mathscr{E}^{\rm e}_{L}(A_{\lambda})\cap\Gamma\big{)}+v\big{(}{\rm e}(\lambda) \big{)}\ =\ \bigcup_{\lambda}\mathscr{E}^{\rm e}(A_{\lambda})+v\big{(}{\rm e}(\lambda) \big{)}\ =\ \mathscr{E}^{\rm e}_{\Omega}(A),\]
where we used the injectivity of \(\mu\mapsto v\big{(}{\rm e}(\mu)\big{)}\) for the first equality and Remark 2.6.11 for the second.
Call \(A\) **terminal** with respect to \(\Lambda\) if \(\sum_{\lambda}|\mathscr{E}^{\rm e}(A_{\lambda})|=r\). We omit "with respect to \(\Lambda\)" if it is clear from the context what \(\Lambda\) is. In Section 4.4 we shall restrict \(K\), \(\Lambda\) anyway so that this dependence on \(\Lambda\) disappears. Recall also that for a given spectral extension of the valuation of \(K\) to \(\Omega\) we have \(|\mathscr{E}^{\rm e}_{\Omega}(A)|=\sum_{\lambda}|\mathscr{E}^{\rm e}(A_{\lambda})|\). If \(A\) is terminal and \(\phi\in K^{\times}\) is active in \(K\), then \(A^{\phi}\in K^{\phi}[\delta]\) is terminal with respect to \(\Lambda^{\phi}\) (cf. remarks after Corollary 2.6.9). If \(A\) is terminal and \(a\in K^{\times}\), then \(aA\) is terminal. If \(r=0\), then \(A\) is terminal.
**Lemma 2.6.19**.: _If \(r=1\), then \(A\) is terminal._
Proof.: Assume \(r=1\). Then \(\dim_{C}\ker_{\rm U}A=1\), so \(\sum_{\lambda}|\mathscr{E}^{\rm e}(A_{\lambda})|\geqslant 1\) by Lemma 2.6.16. Equip \(\Omega\) with a spectral extension of the valuation of \(K\). Then \(\Omega\) is ungrounded by Lemma 2.6.3, and \(r=1\) gives \(|\mathscr{E}^{\rm e}_{\Omega}(A)|\leqslant 1\). Now use \(|\mathscr{E}^{\rm e}_{\Omega}(A)|=\sum_{\lambda}|\mathscr{E}^{\rm e}(A_{\lambda})|\).
**Lemma 2.6.20**.: _Suppose \(A\) and \(B\in K[\partial]^{\neq}\) are terminal, and each operator \(B_{\lambda}\) is asymptotically surjective. Then \(AB\) is terminal._
Proof.: Use that \((AB)_{\lambda}=A_{\lambda}B_{\lambda}\), and that \(|\mathscr{E}^{\rm e}(A_{\lambda}B_{\lambda})|=|\mathscr{E}^{\rm e}(A_{\lambda} )|+|\mathscr{E}^{\rm e}(B_{\lambda})|\) by Corollary 1.5.19.
Thus if \(A\) is terminal and \(a\in K^{\times}\), then \(aA,Aa\), and \(A_{\times a}\) are terminal. From Lemmas 2.6.19, 2.6.20, and Corollary 1.5.25 we conclude:
**Corollary 2.6.21**.: _If \(K\) is \(\lambda\)-free and \(A\) splits over \(K\), then \(A\) is terminal._
**Corollary 2.6.22**.: _Suppose \(K\) is \(\omega\)-free and \(B\in K[\partial]^{\neq}\). Then \(A\) and \(B\) are terminal iff \(AB\) is terminal._
Proof.: The "only if" part follows from Lemma 2.6.20 and Corollary 1.5.26. For the "if" part, use Corollary 1.5.19 and Proposition 2.6.17.
**Corollary 2.6.23**.: _Suppose \(A\) is terminal, \(\Psi\) is cofinal in \(\Psi_{L}\) or \(K\) is \(\lambda\)-free, and \(L\) is \(\omega\)-free. Then, with respect to the complement \(\Lambda_{L}\) of \(L^{\dagger}\) in \(L\), we have:_
* _as an element of_ \(L[\partial]\)_,_ \(A\) _is terminal;_
* \(\mathscr{E}^{\rm e}_{L}(A_{\mu})=\emptyset\) _for all_ \(\mu\in\Lambda_{L}\setminus\Lambda\)_;_
* \(\mathscr{E}^{\rm e}(A_{\lambda})=\mathscr{E}^{\rm e}_{L}(A_{\lambda})\) _for all_ \(\lambda\)_; and_
* \(\mathscr{E}^{\rm u}(A)=\mathscr{E}^{\rm u}_{L}(A)\)_._
Proof.: By the remarks after Corollary 2.6.9 we have \(\mathscr{E}^{\rm e}(A_{\lambda})\subseteq\mathscr{E}^{\rm e}_{L}(A_{\lambda})\) for each \(\lambda\), and so with \(\mu\) ranging over \(\Lambda_{L}\), by Proposition 2.6.17 applied to \(L\) in place of \(K\), we have \(r=\sum_{\lambda}|\mathscr{E}^{\rm e}(A_{\lambda})|\leqslant\sum_{\mu}| \mathscr{E}^{\rm e}_{L}(A_{\mu})|\leqslant r\). This yields (i)-(iv).
In [15] we shall study other situations where \(A\) is terminal.
**The real case**.: _In this subsection \(H\) is a real closed \(H\)-field with small derivation, asymptotic integration, and \(H^{\dagger}=H\); also \(K=H[i]\), \(i^{2}=-1\), for our valued differential field \(K\). We also assume \(\mathrm{I}(H)i\subseteq K^{\dagger}\). Then \(K\) is \(\mathrm{d}\)-valued of \(H\)-type with small derivation, asymptotic integration, \(K^{\dagger}=H+\mathrm{I}(H)i\), and \(\mathrm{I}(K)\subseteq K^{\dagger}\). Note that \(H\) and thus \(K\) is \(\lambda\)-free by [ADH, remark after 11.6.2, and 11.6.8]. Let \(A\in K[\partial]^{\neq}\) have order \(r\) and let \(\gamma\) range over \(\Gamma\)._
**Lemma 2.6.24**.: _If the real closed \(H\)-asymptotic extension \(F\) of \(H\) has asymptotic integration and convex valuation ring, then \(L^{\dagger}\cap K=K^{\dagger}\) for the algebraically closed \(H\)-asymptotic field extension \(L:=F[i]\) of \(K\)._
Proof.: Use Corollary 1.2.18 and earlier remarks in the same subsection.
**Corollary 2.6.25**.: _The \(H\)-field \(H\) has an \(H\)-closed extension \(F\) with \(C_{F}=C_{H}\), and for any such \(F\), the algebraically closed \(\mathrm{d}\)-valued field extension \(L:=F[i]\) of \(H\)-type of \(K\) is \(\omega\)-free with \(C_{L}=C\), \(\mathrm{I}(L)\subseteq L^{\dagger}\), and \(L^{\dagger}\cap K=K^{\dagger}\)._
Proof.: Use [ADH, 16.4.1, 9.1.2] to extend \(H\) to an \(\omega\)-free \(H\)-field with the same constant field as \(H\), next use [ADH, 11.7.23] to pass to its real closure, and then use [ADH, 14.5.9] to extend further to an \(H\)-closed \(F\), still with the same constant field as \(H\). For any such \(F\), the \(\mathrm{d}\)-valued field \(L:=F[i]\) of \(H\)-type is \(\omega\)-free by [ADH, 11.7.23] and newtonian by [ADH, 14.5.7]. Hence \(\mathrm{I}(L)\subseteq L^{\dagger}\) by Lemma 1.2.9, and \(L^{\dagger}\cap K=K^{\dagger}\) by Lemma 2.6.24.
This leads to a variant of Proposition 2.6.17:
**Proposition 2.6.26**.: _The conclusion of Proposition 2.6.17 holds. In particular:_
\[\dim_{C}\ker_{\mathrm{U}}A=r\implies A\text{ is terminal}.\]
Proof.: Corollary 2.6.25 gives an \(H\)-closed extension \(F\) of \(H\) with \(C_{F}=C_{H}\), so \(L:=F[i]\) is \(\omega\)-free, \(C_{L}=C\), \(\mathrm{I}(L)\subseteq L^{\dagger}\), and \(L^{\dagger}\cap K=K^{\dagger}\). Take a complement \(\Lambda_{L}\supseteq\Lambda\) of the subspace \(L^{\dagger}\) of the \(\mathbb{Q}\)-linear space \(L\). By Remark 2.6.11 we have \(\mathscr{E}^{\mathrm{e}}(A_{\lambda})=\mathscr{E}^{\mathrm{e}}_{L}(A_{\lambda})\cap\Gamma\). Hence Proposition 2.6.17, applied to \(L\), \(\Lambda_{L}\) in place of \(K\), \(\Lambda\), with \(A\) viewed as an element of \(L[\partial]\), yields \(\sum_{\lambda}|\mathscr{E}^{\mathrm{e}}(A_{\lambda})|\leqslant r\). Corollary 1.8.10 applied to \(A_{\lambda}\) in place of \(A\) gives \(|\mathscr{E}^{\mathrm{e}}(A_{\lambda})|=\sum_{\gamma}\mathrm{nwt}_{A_{\lambda}}(\gamma)\). This yields the conclusion of Proposition 2.6.17 as in the proof of that proposition.
Let now \(F\) be a Liouville closed \(H\)-field extension of \(H\) and suppose \(\mathrm{I}(L)\subseteq L^{\dagger}\) where \(L:=F[i]\). Lemma 2.6.24 yields \(L^{\dagger}\cap K=K^{\dagger}\), so \(L\) is as described just before Lemma 2.6.18, and we have a complement \(\Lambda_{L}\supseteq\Lambda\) of the subspace \(L^{\dagger}\) of the \(\mathbb{Q}\)-linear space \(L\). Note that if \(A\) splits over \(K\), then \(A\) is terminal by Corollary 2.6.21.
**Corollary 2.6.27**.: _Suppose \(A\) is terminal. Then, with respect to the complement \(\Lambda_{L}\) of \(L^{\dagger}\) in \(L\), the conclusions_ (i)-(iv) _of Corollary 2.6.23 hold._
Proof.: By the remarks after Corollary 2.6.9 we have \(\mathscr{E}^{\mathrm{e}}(A_{\lambda})\subseteq\mathscr{E}^{\mathrm{e}}_{L}(A_{\lambda})\) for all \(\lambda\), and so with \(\mu\) ranging over \(\Lambda_{L}\), Proposition 2.6.26 applied to \(L\) in place of \(K\) gives \(r=\sum_{\lambda}|\mathscr{E}^{\mathrm{e}}(A_{\lambda})|\leqslant\sum_{\mu}|\mathscr{E}^{\mathrm{e}}_{L}(A_{\mu})|\leqslant r\). This yields (i)-(iv).
**Part \(3\). Normalizing Holes and Slots**
In this introduction \(K\) is an \(H\)-asymptotic field with small derivation and rational asymptotic integration. In Section 3.2 we introduce _holes_ in \(K\): A _hole in \(K\)_ is a triple \((P,\mathfrak{m},\widehat{a})\) with \(P\in K\{Y\}\setminus K\), \(\mathfrak{m}\in K^{\times}\), and \(\widehat{a}\in\widehat{K}\setminus K\) for some immediate asymptotic extension \(\widehat{K}\) of \(K\), such that \(\widehat{a}\prec\mathfrak{m}\) and \(P(\widehat{a})=0\). The main goal of Part \(3\) is a Normalization Theorem, namely Theorem 3.3.33, that allows us to transform under reasonable conditions a hole \((P,\mathfrak{m},\widehat{a})\) in \(K\) into a "normal" hole; this helps to pin down the location of \(\widehat{a}\) relative to \(K\). The notion of \((P,\mathfrak{m},\widehat{a})\) being _normal_ involves the linear part of the differential polynomial \(P_{\times\mathfrak{m}}\), in particular the _span_ of this linear part. We introduce the span in the preliminary Section 3.1. In Section 3.4 we study _isolated_ holes \((P,\mathfrak{m},\widehat{a})\) in \(K\), which under reasonable conditions ensure the uniqueness of the isomorphism type of \(K\langle\widehat{a}\rangle\) as a valued differential field over \(K\); see Proposition 3.4.9. In Section 3.5 we focus on holes \((P,\mathfrak{m},\widehat{a})\) in \(K\) where \(\operatorname{order}P=\deg P=1\). For technical reasons we actually work in Part \(3\) also with _slots_ in \(K\), which are a bit more general than holes in \(K\).
First some notational conventions. Let \(\Gamma\) be an ordered abelian group. For \(\gamma,\delta\in\Gamma\) with \(\gamma\neq 0\) the expression "\(\delta=o(\gamma)\)" means "\(n|\delta|<|\gamma|\) for all \(n\geqslant 1\)" according to [ADH, 2.4], but here we find it convenient to extend this to \(\gamma=0\), in which case "\(\delta=o(\gamma)\)" means "\(\delta=0\)". Suppose \(\Gamma=v(E^{\times})\) is the value group of a valued field \(E\) and \(\mathfrak{m}\in E^{\times}\). Then we denote the archimedean class \([v\mathfrak{m}]\subseteq\Gamma\) of \(v\mathfrak{m}\in\Gamma\) by just \([\mathfrak{m}]\). Suppose \(\mathfrak{m}\not\asymp 1\). Then we have a proper convex subgroup
\[\Delta(\mathfrak{m})\ :=\ \big{\{}\gamma\in\Gamma:\,\gamma=o(v\mathfrak{m}) \big{\}}\ =\ \big{\{}\gamma\in\Gamma:\,[\gamma]<[\mathfrak{m}]\big{\}},\]
of \(\Gamma\). If \(\mathfrak{m}\asymp_{\Delta(\mathfrak{m})}\mathfrak{n}\in E\), then \(0\neq\mathfrak{n}\not\asymp 1\) and \(\Delta(\mathfrak{m})=\Delta(\mathfrak{n})\). In particular, if \(\mathfrak{m}\asymp\mathfrak{n}\in E\), then \(0\neq\mathfrak{n}\not\asymp 1\) and \(\Delta(\mathfrak{m})=\Delta(\mathfrak{n})\). Note that for \(f,g\in E\) the meaning of "\(f\asymp_{\Delta(\mathfrak{m})}g\)" does not change in passing to a valued field extension of \(E\), although \(\Delta(\mathfrak{m})\) can increase as a subgroup of the value group of the extension.
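For instance, take \(\Gamma=\mathbb{Q}\times\mathbb{Q}\) with the lexicographic order and \(v\mathfrak{m}=(1,0)\). Then for \(\gamma=(a,b)\in\Gamma\) we have \(\gamma=o(v\mathfrak{m})\) iff \(a=0\), so
\[\Delta(\mathfrak{m})\ =\ \{0\}\times\mathbb{Q},\]
and \([\mathfrak{n}]<[\mathfrak{m}]\) iff \(v\mathfrak{n}\in\{0\}\times\mathbb{Q}\).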
### 3.1. The Span of a Linear Differential Operator
_In this section \(K\) is a valued differential field with small derivation and \(\Gamma:=v(K^{\times})\). We let \(a\), \(b\), sometimes subscripted, range over \(K\), and \(\mathfrak{m}\), \(\mathfrak{n}\) over \(K^{\times}\)._ Consider a linear differential operator
\[A\ =\ a_{0}+a_{1}\partial+\cdots+a_{r}\partial^{r}\in K[\partial],\qquad a_{r} \neq 0.\]
We shall use below the quantities \(\operatorname{dwm}(A)\) and \(\operatorname{dwt}(A)\) defined in [ADH, 5.6]. We also introduce a measure \(\mathfrak{v}(A)\) for the "lopsidedness" of \(A\) as follows:
\[\mathfrak{v}(A)\ :=\ a_{r}/a_{m}\in\ K^{\times}\qquad\text{where }m:= \operatorname{dwt}(A).\]
So \(a_{r}\asymp\mathfrak{v}(A)A\) and \(\mathfrak{v}(A)\preccurlyeq 1\), with
\[\mathfrak{v}(A)\asymp 1\quad\Longleftrightarrow\quad\operatorname{dwt}(A)=r \quad\Longleftrightarrow\quad\mathfrak{v}(A)=1.\]
Also note that \(\mathfrak{v}(aA)=\mathfrak{v}(A)\) for \(a\neq 0\). Moreover,
\[\mathfrak{v}(A_{\ltimes\mathfrak{n}})A_{\ltimes\mathfrak{n}}\ \asymp\ a_{r}\ \asymp\ \mathfrak{v}(A)A\]
since \(A_{\ltimes\mathfrak{n}}=a_{r}\partial^{r}+\text{lower order terms in }\partial\).
_Example_.: \(\mathfrak{v}(a+\partial)=1\) if \(a\preccurlyeq 1\), and \(\mathfrak{v}(a+\partial)=1/a\) if \(a\succ 1\).
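_Example_.: In order \(2\), for \(A=c+b\partial+\partial^{2}\) one checks directly that
\[\mathfrak{v}(A)\ =\ \begin{cases}\ 1&\text{if }b,c\preccurlyeq 1\ \ (\operatorname{dwt}(A)=2),\\ \ 1/b&\text{if }b\succ 1,\ c\preccurlyeq b\ \ (\operatorname{dwt}(A)=1),\\ \ 1/c&\text{if }c\succ 1,\ c\succ b\ \ (\operatorname{dwt}(A)=0).\end{cases}\]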
We call \(\mathfrak{v}(A)\) the **span** of \(A\). We are mainly interested in the valuation of \(\mathfrak{v}(A)\). This is related to the gaussian valuation \(v(A)\) of \(A\): if \(A\) is monic, then \(v\big{(}\mathfrak{v}(A)\big{)}=-v(A)\). An important property of the span of \(A\) is that its valuation is not affected by small additive perturbations of \(A\):
**Lemma 3.1.1**.: _Suppose \(B\in K[\partial]\), \(\operatorname{order}(B)\leqslant r\) and \(B\prec\mathfrak{v}(A)A\). Then:_
* \(A+B\sim A\)_,_ \(\operatorname{dwm}(A+B)=\operatorname{dwm}(A)\)_, and_ \(\operatorname{dwt}(A+B)=\operatorname{dwt}(A)\)_;_
* \(\operatorname{order}(A+B)=r\) _and_ \(\mathfrak{v}(A+B)\sim\mathfrak{v}(A)\)_._
Proof.: From \(B\prec\mathfrak{v}(A)A\) and \(\mathfrak{v}(A)\preccurlyeq 1\) we obtain \(B\prec A\), and thus (i). Set \(m:=\operatorname{dwt}(A)\), let \(i\) range over \(\{0,\dots,r\}\), and let \(B=b_{0}+b_{1}\partial+\dots+b_{r}\partial^{r}\). Then \(a_{i}\preccurlyeq a_{m}\) and \(b_{i}\prec\mathfrak{v}(A)A\asymp a_{r}\preccurlyeq a_{m}\). Therefore, if \(a_{i}\asymp a_{m}\), then \(a_{i}+b_{i}\sim a_{m}\), and if \(a_{i}\prec a_{m}\), then \(a_{i}+b_{i}\prec a_{m}\). Hence \(\mathfrak{v}(A+B)=(a_{r}+b_{r})/(a_{m}+b_{m})\sim a_{r}/a_{m}=\mathfrak{v}(A)\).
For \(b\neq 0\), the valuation of \(\mathfrak{v}(Ab)\) only depends on \(vb\); it is enough to check this for \(b\asymp 1\). More generally:
**Lemma 3.1.2**.: _Let \(B\in K[\partial]^{\neq}\) and \(b\asymp B\). Then \(\mathfrak{v}(AB)\asymp\mathfrak{v}(Ab)\mathfrak{v}(B)\)._
Proof.: Let \(B=b_{0}+b_{1}\partial+\dots+b_{s}\partial^{s}\), \(b_{s}\neq 0\). Then
\[AB\ =\ a_{r}b_{s}\partial^{r+s}+\text{lower order terms in }\partial,\]
so by [ADH, 5.6.1(ii)] for \(\gamma=0\):
\[v\big{(}\mathfrak{v}(AB)\big{)} =\ v(a_{r}b_{s})-v(AB)\ =\ v(a_{r}b_{s})-v(Ab)\] \[=\ v(a_{r}b)-v(Ab)+v(b_{s})-v(B)\] \[=\ v\big{(}\mathfrak{v}(Ab)\mathfrak{v}(B)\big{)}.\qed\]
**Corollary 3.1.3**.: _Let \(B\in K[\partial]^{\neq}\). If \(\mathfrak{v}(AB)=1\), then \(\mathfrak{v}(A)=\mathfrak{v}(B)=1\). The converse holds if \(B\) is monic._
This is clear from Lemma 3.1.2, and in turn gives:
**Corollary 3.1.4**.: _Suppose \(A=a(\partial-b_{1})\dotsm(\partial-b_{r})\). Then_
\[\mathfrak{v}(A)=1\quad\Longleftrightarrow\quad b_{1},\dots,b_{r}\preccurlyeq 1.\]
_Remark_.: Suppose \(K=C(\!(t)\!)\) with the \(t\)-adic valuation and derivation \(\partial=t\frac{d}{dt}\). In the literature, \(A\) is called _regular singular_ if \(\mathfrak{v}(A)=1\), and _irregular singular_ if \(\mathfrak{v}(A)\prec 1\); see [158, Definition 3.14].
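For instance, in this setting the monic operator \(A=\partial^{2}-t\) has all coefficients in \(C[[t]]\), so \(v(A)=v(a_{2})=0\), \(\operatorname{dwt}(A)=2\), and \(\mathfrak{v}(A)=1\): it is regular singular. In contrast, \(A=\partial^{2}-t^{-1}\) has \(\operatorname{dwt}(A)=0\) and
\[\mathfrak{v}(A)\ =\ 1/(-t^{-1})\ =\ -t\ \prec\ 1,\]
so it is irregular singular. In general, a monic \(A\in K[\partial]\) here is regular singular iff all its coefficients lie in \(C[[t]]\).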
**Lemma 3.1.5**.: _Let \(B\in K[\partial]^{\neq}\). Then \(\mathfrak{v}(AB)\preccurlyeq\mathfrak{v}(B)\), and if \(B\) is monic, then \(\mathfrak{v}(AB)\preccurlyeq\mathfrak{v}(A)\)._
Proof.: Lemma 3.1.2 and \(\mathfrak{v}(Ab)\preccurlyeq 1\) for \(b\neq 0\) yield \(\mathfrak{v}(AB)\preccurlyeq\mathfrak{v}(B)\). Suppose \(B\) is monic, so \(v(B)\leqslant 0\). To show \(\mathfrak{v}(AB)\preccurlyeq\mathfrak{v}(A)\) we arrange that \(A\) is also monic. Then \(AB\) is monic, and \(\mathfrak{v}(AB)\preccurlyeq\mathfrak{v}(A)\) is equivalent to \(v(AB)\leqslant v(A)\). Now
\[v(AB)\ =\ v_{AB}(0)\ =\ v_{A}\big{(}v_{B}(0)\big{)}\ =\ v_{A}\big{(}v(B)\big{)}\ \leqslant\ v_{A}(0)\ =\ v(A)\]
by [ADH, 4.5.1(iii), 5.6.1(ii)].
**Corollary 3.1.6**.: _If \(A=a(\partial-b_{1})\dotsm(\partial-b_{r})\), then \(b_{1},\dots,b_{r}\ \preccurlyeq\mathfrak{v}(A)^{-1}\)._
Let \(\Delta\) be a convex subgroup of \(\Gamma\), let \(\dot{\mathcal{O}}\) be the valuation ring of the coarsening \(v_{\Delta}\) of the valuation \(v\) of \(K\) by \(\Delta\), with maximal ideal \(\dot{\smallo}\), and \(\dot{K}=\dot{\mathcal{O}}/\dot{\smallo}\) be the valued differential residue field of \(v_{\Delta}\). The residue morphism \(\dot{\mathcal{O}}\to\dot{K}\) extends to the ring morphism \(\dot{\mathcal{O}}[\partial]\to\dot{K}[\partial]\) with \(\partial\mapsto\partial\). If \(A\in\dot{\mathcal{O}}[\partial]\) and \(\dot{A}\neq 0\), then \(\operatorname{dwm}(\dot{A})=\operatorname{dwm}(A)\) and \(\operatorname{dwt}(\dot{A})=\operatorname{dwt}(A)\). We set \(\mathfrak{v}:=\mathfrak{v}(A)\).

**Lemma 3.1.7**.: _If \(A\in\dot{\mathcal{O}}[\partial]\) and \(\mathrm{order}(\dot{A})=r\), then \(\mathfrak{v}(\dot{A})=\dot{\mathfrak{v}}\)._
**Behavior of the span under twisting.**
Recall that \(o(\gamma):=0\in\Gamma\) for \(\gamma=0\in\Gamma\). With this convention, here is a consequence of [ADH, 6.1.3]:
**Lemma 3.1.8**.: _Let \(B\in K[\partial]^{\neq}\). Then \(v(AB)=v(A)+v(B)+o\big{(}v(B)\big{)}\)._
Proof.: Take \(b\) with \(b\asymp B\). Then
\[v(AB)=v_{AB}(0)=v_{A}\big{(}v_{B}(0)\big{)}=v_{A}(vb)=v(Ab)\]
by [ADH, 5.6.1(ii)]. Moreover, \(v(Ab)=v(A)+vb+o(vb)\), by [ADH, 6.1.3].
We have \(\mathfrak{v}(A_{\ltimes\mathfrak{n}})=\mathfrak{v}(A\mathfrak{n})\), so \(v(A_{\ltimes\mathfrak{n}})=v(A)+o(v\mathfrak{n})\) by Lemma 3.1.8. Moreover:
**Lemma 3.1.9**.: \(v\big{(}\mathfrak{v}(A\mathfrak{n})\big{)}=v\big{(}\mathfrak{v}(A)\big{)}+o( v\mathfrak{n})\)_._
Proof.: Replacing \(A\) by \(a_{r}^{-1}A\) we arrange \(A\) is monic, so \(A_{\ltimes\mathfrak{n}}\) is monic, and thus
\[v\big{(}\mathfrak{v}(A\mathfrak{n})\big{)}\ =\ v\big{(}\mathfrak{v}(A_{\ltimes \mathfrak{n}})\big{)}=-v(A_{\ltimes\mathfrak{n}})=-v(A)+o(v\mathfrak{n})=v \big{(}\mathfrak{v}(A)\big{)}+o(v\mathfrak{n})\]
by remarks preceding the lemma.
Recall: we denote the archimedean class \([v\mathfrak{n}]\subseteq\Gamma\) by \([\mathfrak{n}]\). Lemma 3.1.9 yields:
**Corollary 3.1.10**.: \(\big{[}\mathfrak{v}(A)\big{]}<[\mathfrak{n}]\iff\big{[}\mathfrak{v}(A \mathfrak{n})\big{]}<[\mathfrak{n}]\)_._
Under suitable conditions on \(K\) we can say more about the valuation of \(\mathfrak{v}(A_{\ltimes\mathfrak{n}})\): Lemma 3.1.12 below.
**Lemma 3.1.11**.: _Let \(\mathfrak{n}^{\dagger}\succcurlyeq 1\) and \(\mathfrak{m}_{0},\dots,\mathfrak{m}_{r}\in K^{\times}\) be such that_
\[v(\mathfrak{m}_{i})+v(A)\ =\ \min_{i\leqslant j\leqslant r}v(a_{j})+(j-i)v( \mathfrak{n}^{\dagger}).\]
_Then with \(m:=\mathrm{d}\mathrm{w}\mathrm{t}(A)\) we have_
\[\mathfrak{m}_{0}\ \succcurlyeq\ \cdots\ \succcurlyeq\mathfrak{m}_{r}\quad\text{and} \quad(\mathfrak{n}^{\dagger})^{m}\ \preccurlyeq\mathfrak{m}_{0}\ \preccurlyeq\ (\mathfrak{n}^{\dagger})^{r}.\]
(_In particular, \([\mathfrak{m}_{0}]\leqslant[\mathfrak{n}^{\dagger}]\), with equality if \(m>0\)._)
Proof.: From \(v(\mathfrak{n}^{\dagger})\leqslant 0\) we obtain \(v(\mathfrak{m}_{0})\leqslant\cdots\leqslant v(\mathfrak{m}_{r})\). We have \(0\leqslant v(a_{j}/a_{m})\) for \(j=0,\dots,r\) and so
\[rv(\mathfrak{n}^{\dagger})\ \leqslant\ \min_{0\leqslant j\leqslant r}v(a_{j}/a_{m}) +jv(\mathfrak{n}^{\dagger})\ =\ v(\mathfrak{m}_{0})\ \leqslant\ mv(\mathfrak{n}^{\dagger})\]
as required.
**Lemma 3.1.12**.: _Suppose \(\partial\mathcal{O}\subseteq\smallo\). Then_
\[\mathfrak{n}^{\dagger}\preccurlyeq 1\ \Longrightarrow\ v(A_{\ltimes\mathfrak{n}})=v(A),\qquad\mathfrak{n}^{\dagger}\succ 1\ \Longrightarrow\ |v(A_{\ltimes\mathfrak{n}})-v(A)|\leqslant-rv(\mathfrak{n}^{\dagger}).\]
Proof.: Let \(R:=\operatorname{Ri}A\). Then \(v(A_{\ltimes\mathfrak{n}})=v(R_{+\mathfrak{n}^{\dagger}})\) by [ADH, 5.8.11]. If \(\mathfrak{n}^{\dagger}\preccurlyeq 1\), then \(v(R_{+\mathfrak{n}^{\dagger}})=v(R)\) by [ADH, 4.5.1(i)], hence \(v(A_{\ltimes\mathfrak{n}})=v(R)=v(A)\) by [ADH, 5.8.10]. Now suppose \(\mathfrak{n}^{\dagger}\succ 1\). _Claim_: \(v(A_{\ltimes\mathfrak{n}})-v(A)\geqslant rv(\mathfrak{n}^{\dagger})\). To prove this claim we replace \(A\) by \(a^{-1}A\), where \(a\asymp A\), to arrange \(A\asymp 1\). Let \(i\), \(j\) range over \(\{0,\ldots,r\}\). We have \(R_{+\mathfrak{n}^{\dagger}}=\sum_{i}b_{i}R_{i}\) where
\[b_{i}\ =\ \sum_{j\geqslant i}\binom{j}{i}a_{j}R_{j-i}(\mathfrak{n}^{\dagger}).\]
Take \(\mathfrak{m}_{i}\in K^{\times}\) as in Lemma 3.1.11. By Lemma 1.1.20 we have \(R_{n}(\mathfrak{n}^{\dagger})\sim(\mathfrak{n}^{\dagger})^{n}\) for all \(n\); hence \(v(b_{i})\geqslant v(\mathfrak{m}_{i})\) for all \(i\). Thus
\[v(A_{\ltimes\mathfrak{n}})-v(A)\ =\ v(A_{\ltimes\mathfrak{n}})\ =\ v(R_{+ \mathfrak{n}^{\dagger}})\ \geqslant\ \min_{i}v(b_{i})\ \geqslant\ v(\mathfrak{m}_{0})\ \geqslant\ rv(\mathfrak{n}^{\dagger})\]
by Lemma 3.1.11, proving our claim. Applying this claim with \(A_{\ltimes\mathfrak{n}}\), \(\mathfrak{n}^{-1}\) in place of \(A\), \(\mathfrak{n}\) also yields \(v(A_{\ltimes\mathfrak{n}})-v(A)\leqslant-rv(\mathfrak{n}^{\dagger})\), thus \(|v(A_{\ltimes\mathfrak{n}})-v(A)|\leqslant-rv(\mathfrak{n}^{\dagger})\).
_Remark_.: Suppose that \(\partial\mathcal{O}\subseteq\smallo\) and \(\mathfrak{n}^{\dagger}\succ 1\). Then Lemma 3.1.12 improves on Lemma 3.1.9, since \(v(\mathfrak{n}^{\dagger})=o(v\mathfrak{n})\) by [ADH, 6.4.1(iii)].
**Lemma 3.1.13**.: _Suppose \(\partial\mathcal{O}\subseteq\smallo\) and \(\mathfrak{n}^{\dagger}\preccurlyeq\mathfrak{v}(A)^{-1}\). Let \(B\in K[\partial]\) and \(s\in\mathbb{N}\) be such that \(\operatorname{order}(B)\leqslant s\) and \(B\preccurlyeq\mathfrak{v}(A)^{s+1}A\). Then \(B_{\ltimes\mathfrak{n}}\preccurlyeq\mathfrak{v}(A_{\ltimes\mathfrak{n}})A_{\ltimes\mathfrak{n}}\)._
Proof.: We may assume \(B\neq 0\) and \(s=\operatorname{order}(B)\). It suffices to show \(B_{\ltimes\mathfrak{n}}\preccurlyeq\mathfrak{v}(A)A\). If \(\mathfrak{n}^{\dagger}\preccurlyeq 1\), then Lemma 3.1.12 applied to \(B\) in place of \(A\) yields \(B_{\ltimes\mathfrak{n}}\asymp B\preccurlyeq\mathfrak{v}(A)A\). Suppose \(\mathfrak{n}^{\dagger}\succ 1\). Then Lemma 3.1.12 gives \(|v(B_{\ltimes\mathfrak{n}})-v(B)|\leqslant-sv(\mathfrak{n}^{\dagger})\leqslant sv (\mathfrak{v}(A))\) and hence \(B_{\ltimes\mathfrak{n}}\preccurlyeq\mathfrak{v}(A)^{-s}B\preccurlyeq \mathfrak{v}(A)A\).
If \(\partial\mathcal{O}\subseteq\smallo\), then we have functions \(\operatorname{dwm}_{A},\operatorname{dwt}_{A}\colon\Gamma\to\mathbb{N}\) as defined in [ADH, 5.6]. Combining Lemmas 3.1.1 and 3.1.13 yields a variant of [ADH, 6.1.7]:
**Corollary 3.1.14**.: _Suppose \(\partial\mathcal{O}\subseteq\smallo\) and \(\mathfrak{n}^{\dagger}\preccurlyeq\mathfrak{v}(A)^{-1}\). Let \(B\in K[\partial]\) be such that \(\operatorname{order}(B)\leqslant r\) and \(B\preccurlyeq\mathfrak{v}(A)^{r+1}A\). Then \(\operatorname{dwm}_{A+B}(v\mathfrak{n})=\operatorname{dwm}_{A}(v\mathfrak{n})\) and \(\operatorname{dwt}_{A+B}(v\mathfrak{n})=\operatorname{dwt}_{A}(v\mathfrak{n})\). In particular,_
\[v\mathfrak{n}\in\mathscr{E}(A+B)\ \Longleftrightarrow\ v\mathfrak{n}\in \mathscr{E}(A).\]
**About \(A(\mathfrak{n}^{q})\) and \(A\mathfrak{n}^{q}\).** Suppose \(\mathfrak{m}^{l}=\pm\mathfrak{n}^{k}\) where \(k,l\in\mathbb{Z}\), \(l\neq 0\). Then \(\mathfrak{m}^{\dagger}=q\mathfrak{n}^{\dagger}\) with \(q=k/l\in\mathbb{Q}\). In particular, if \(K\) is real closed or algebraically closed, then for any \(\mathfrak{n}\) and \(q\in\mathbb{Q}\) we have \(\mathfrak{m}^{\dagger}=q\mathfrak{n}^{\dagger}\) for some \(\mathfrak{m}\).
_Below in this subsection \(K\) is \(\operatorname{d}\)-valued and \(\mathfrak{n}\) is such that for all \(q\in\mathbb{Q}^{>}\) we are given an element of \(K^{\times}\), denoted by \(\mathfrak{n}^{q}\) for suggestiveness, with \((\mathfrak{n}^{q})^{\dagger}=q\mathfrak{n}^{\dagger}\)._
Let \(q\in\mathbb{Q}^{>}\); then \(v(\mathfrak{n}^{q})=qv(\mathfrak{n})\): to see this we may arrange that \(K\) is algebraically closed by [ADH, 10.1.23], and hence contains an \(\mathfrak{m}\) such that \(v\mathfrak{m}=q\,v\mathfrak{n}\) and \(\mathfrak{m}^{\dagger}=q\mathfrak{n}^{\dagger}=(\mathfrak{n}^{q})^{\dagger}\), and thus \(v(\mathfrak{n}^{q})=v\mathfrak{m}=q\,v\mathfrak{n}\).
**Lemma 3.1.15**.: _Suppose \(\mathfrak{n}^{\dagger}\succcurlyeq 1\). Then for all but finitely many \(q\in\mathbb{Q}^{>}\),_
\[v\big{(}A(\mathfrak{n}^{q})\big{)}\ =\ v(\mathfrak{n}^{q})+\min_{j}v(a_{j})+jv( \mathfrak{n}^{\dagger}).\]
Proof.: Let \(q\in\mathbb{Q}^{>}\) and take \(b_{0},\ldots,b_{r}\in K\) with \(A\mathfrak{n}^{q}=b_{0}+b_{1}\partial+\cdots+b_{r}\partial^{r}\). Then
\[b_{0}\ =\ A(\mathfrak{n}^{q})\ =\ \mathfrak{n}^{q}\big{(}a_{0}R_{0}(q \mathfrak{n}^{\dagger})+a_{1}R_{1}(q\mathfrak{n}^{\dagger})+\cdots+a_{r}R_{r}(q \mathfrak{n}^{\dagger})\big{)}.\]
Let \(i\), \(j\) range over \(\{0,\ldots,r\}\). By Lemma 1.1.20, \(R_{i}(q\mathfrak{n}^{\dagger})\sim q^{i}(\mathfrak{n}^{\dagger})^{i}\) for all \(i\). Take \(\mathfrak{m}\) (independent of \(q\)) such that \(v(\mathfrak{m})=\min_{j}v(a_{j})+jv(\mathfrak{n}^{\dagger})\), and let \(I\) be the nonempty
set of \(i\) with \(\mathfrak{m}\asymp a_{i}(\mathfrak{n}^{\dagger})^{i}\). For \(i\in I\) we take \(c_{i}\in C^{\times}\) such that \(a_{i}(\mathfrak{n}^{\dagger})^{i}\sim c_{i}\mathfrak{m}\), and set \(R:=\sum_{i\in I}c_{i}Y^{i}\in C[Y]^{\neq}\). Therefore, if \(R(q)\neq 0\), then
\[\sum_{i\in I}a_{i}R_{i}(q\mathfrak{n}^{\dagger})\ \sim\ \mathfrak{m}R(q).\]
Assume \(R(q)\neq 0\) in what follows. Then
\[\sum_{i=0}^{r}a_{i}R_{i}(q\mathfrak{n}^{\dagger})\ \sim\ \sum_{i\in I}a_{i}R_{i}(q \mathfrak{n}^{\dagger})\ \sim\ \mathfrak{m}R(q)\ \asymp\ \mathfrak{m},\]
hence \(b_{0}\asymp\mathfrak{m}\,\mathfrak{n}^{q}\), in particular \(b_{0}\neq 0\). Since \(R\) has only finitely many zeros in \(\mathbb{Q}^{>}\), the lemma follows.
**Lemma 3.1.16**.: _Assume \(\mathfrak{n}^{\dagger}\succcurlyeq 1\) and \([\mathfrak{v}]<[\mathfrak{n}]\) for \(\mathfrak{v}:=\mathfrak{v}(A)\). Then \(\big[\mathfrak{v}(A\mathfrak{n}^{q})\big]<[\mathfrak{n}]\) for all \(q\in\mathbb{Q}^{>}\), and for all but finitely many \(q\in\mathbb{Q}^{>}\) we have \(\mathfrak{v}(A\mathfrak{n}^{q})\preccurlyeq\mathfrak{v}\), and thus \([\mathfrak{v}]\leqslant\big[\mathfrak{v}(A\mathfrak{n}^{q})\big]\)._
Proof.: Let \(q\in\mathbb{Q}^{>}\). Then \([\mathfrak{v}]<[\mathfrak{n}]=[\mathfrak{n}^{q}]\), so \([\mathfrak{v}(A\mathfrak{n}^{q})]<[\mathfrak{n}^{q}]=[\mathfrak{n}]\) by Corollary 3.1.10. To show the second part, let \(m=\operatorname{dwt}(A)\). Replacing \(A\) by \(a_{m}^{-1}A\) we arrange \(a_{m}=1\), so \(a_{r}=\mathfrak{v}\), \(A\asymp 1\). Take \(b_{0},\ldots,b_{r}\) with \(A\mathfrak{n}^{q}=b_{0}+b_{1}\partial+\cdots+b_{r}\partial^{r}\). As in the proof of Lemma 3.1.15 we obtain an \(\mathfrak{m}\) and a polynomial \(R(Y)\in C[Y]^{\neq}\) (both independent of \(q\)) such that \(v(\mathfrak{m})=\min_{j}v(a_{j})+jv(\mathfrak{n}^{\dagger})\), and \(b_{0}\asymp\mathfrak{mn}^{q}\) if \(R(q)\neq 0\). Assume \(R(q)\neq 0\) in what follows; we show that then \(\mathfrak{v}(A\mathfrak{n}^{q})\preccurlyeq\mathfrak{v}\). For \(n:=\operatorname{dwt}(A\mathfrak{n}^{q})\),
\[b_{0}\mathfrak{v}(A\mathfrak{n}^{q})\ \preccurlyeq\ b_{n}\mathfrak{v}(A \mathfrak{n}^{q})\ =\ b_{r}\ =\ \mathfrak{n}^{q}\mathfrak{v},\]
hence \(\mathfrak{v}(A\mathfrak{n}^{q})\preccurlyeq\mathfrak{v}/\mathfrak{m}\). It remains to note that \(\mathfrak{m}\succcurlyeq 1\).
**Lemma 3.1.17**.: _Assume \(\mathfrak{n}^{\dagger}\succcurlyeq 1\) and \(\mathfrak{m}\) satisfies_
\[v\mathfrak{m}+v(A)\ =\ \min_{0\leqslant j\leqslant r}v(a_{j})+jv(\mathfrak{n}^{ \dagger}).\]
_Then \([\mathfrak{m}]\leqslant[\mathfrak{n}^{\dagger}]\), with equality if \(\operatorname{dwt}(A)>0\), and for all but finitely many \(q\in\mathbb{Q}^{>}\),_
\[A\mathfrak{n}^{q}\ \asymp\ \mathfrak{m}\,\mathfrak{n}^{q}\,A,\qquad\mathfrak{v}(A) /\mathfrak{v}(A\mathfrak{n}^{q})\ \asymp\ \mathfrak{m}.\]
Proof.: Replacing \(A\) by \(a_{m}^{-1}A\) where \(m=\operatorname{dwt}(A)\) we arrange \(a_{m}=1\), so \(a_{r}=\mathfrak{v}:=\mathfrak{v}(A)\) and \(A\asymp 1\). Let \(i\), \(j\) range over \(\{0,\ldots,r\}\). Let \(q\in\mathbb{Q}^{>}\), and take \(b_{i}\in K\) such that \(A\mathfrak{n}^{q}=\sum_{i}b_{i}\partial^{i}\). By [ADH, (5.1.3)] we have
\[b_{i}\ =\ \frac{1}{i!}A^{(i)}(\mathfrak{n}^{q})\ =\ \mathfrak{n}^{q}\frac{1}{i!} \operatorname{Ri}(A^{(i)})(q\mathfrak{n}^{\dagger})\ =\ \mathfrak{n}^{q}\sum_{j\geqslant i}\binom{j}{i}a_{j}R_{j-i}(q \mathfrak{n}^{\dagger}).\]
Take \(\mathfrak{m}_{i}\in K^{\times}\) as in Lemma 3.1.11. Then \(\mathfrak{m}_{0}\asymp\mathfrak{m}\) (so \([\mathfrak{m}]\leqslant[\mathfrak{n}^{\dagger}]\), with equality if \(m>0\)), and \(\mathfrak{m}_{r}\asymp\mathfrak{v}\). Lemma 3.1.15 applied to \(A^{(i)}/i!\) instead of \(A\) gives that for all but finitely many \(q\in\mathbb{Q}^{>}\) we have \(b_{i}\asymp\mathfrak{m}_{i}\mathfrak{n}^{q}\) for all \(i\). Assume that \(q\in\mathbb{Q}^{>}\) has this property. From \(v(\mathfrak{m})=v(\mathfrak{m}_{0})\leqslant\cdots\leqslant v(\mathfrak{m}_{r})=v(\mathfrak{v})\) we obtain
\[v(\mathfrak{m})+qv(\mathfrak{n})\ =\ v(b_{0})\ \leqslant\ \cdots\ \leqslant\ v(b_{r})\ =\ v( \mathfrak{v})+q\,v(\mathfrak{n}).\]
With \(n=\operatorname{dwt}(A\mathfrak{n}^{q})\) this gives \(v(b_{0})=\cdots=v(b_{n})=v(A\mathfrak{n}^{q})\). Thus
\[\mathfrak{v}(A\mathfrak{n}^{q})\ =\ b_{r}/b_{n}\ \asymp\ b_{r}/b_{0}\ \asymp\ ( \mathfrak{n}^{q}\mathfrak{v})/(\mathfrak{n}^{q}\mathfrak{m})\ =\ \mathfrak{v}/\mathfrak{m}\]
as claimed.
Let \(\mathfrak{v}\in K^{\times}\) with \(\mathfrak{v}\not\asymp 1\); so we have the proper convex subgroup of \(\Gamma\) given by
\[\Delta(\mathfrak{v})\ =\ \big{\{}\gamma\in\Gamma:\,\gamma=o(v\mathfrak{v})\big{\}}\ =\ \big{\{}\gamma\in\Gamma:\,[\gamma]<[\mathfrak{v}]\big{\}}.\]
If \(K\) is asymptotic of \(H\)-type, then we also have the convex subgroup
\[\Delta\ =\ \big{\{}\gamma\in\Gamma:\,\gamma^{\dagger}>v(\mathfrak{v}^{\dagger}) \big{\}}\]
of \(\Gamma\) with \(\Delta\subseteq\Delta(\mathfrak{v})\), and \(\Delta=\Delta(\mathfrak{v})\) if \(K\) is of Hardy type (cf. Section 1.2).
**Corollary 3.1.18**.: _Suppose \(\mathfrak{n}^{\dagger}\succcurlyeq 1\) and \([\mathfrak{n}^{\dagger}]<[\mathfrak{v}]\) where \(\mathfrak{v}:=\mathfrak{v}(A)\)\((\)so \(0\neq\mathfrak{v}\prec 1)\). Let \(A_{*}\in K[\partial]\) and \(w\geqslant r\) be such that \(A_{*}\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w}A\). Then for all but finitely many \(q\in\mathbb{Q}^{>}\) we have \(\mathfrak{w}:=\mathfrak{v}(A\mathfrak{n}^{q})\asymp_{\Delta(\mathfrak{v})} \mathfrak{v}\) and \(A_{*}\mathfrak{n}^{q}\prec_{\Delta(\mathfrak{v})}\mathfrak{w}^{w}A\mathfrak{ n}^{q}\)._
Proof.: The case \(A_{*}=0\) is trivial, so assume \(A_{*}\neq 0\). Take \(\mathfrak{m}\) as in Lemma 3.1.17, and take \(\mathfrak{m}_{*}\) likewise with \(A_{*}\) in place of \(A\). By this lemma, \([\mathfrak{m}],[\mathfrak{m}_{*}]\leqslant[\mathfrak{n}^{\dagger}]<[\mathfrak{v}]\), hence \(\mathfrak{m},\mathfrak{m}_{*}\asymp_{\Delta(\mathfrak{v})}1\). Moreover, for all but finitely many \(q\in\mathbb{Q}^{>}\) we have \(A\mathfrak{n}^{q}\asymp\mathfrak{m}\,\mathfrak{n}^{q}A\), \(A_{*}\mathfrak{n}^{q}\asymp\mathfrak{m}_{*}\mathfrak{n}^{q}A_{*}\), and \(\mathfrak{v}/\mathfrak{w}\asymp\mathfrak{m}\) where \(\mathfrak{w}:=\mathfrak{v}(A\mathfrak{n}^{q})\); assume that \(q\in\mathbb{Q}^{>}\) has these properties. Then \(A_{*}\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w}A\) yields
\[A_{*}\mathfrak{n}^{q}\ \asymp\ \mathfrak{m}_{*}\mathfrak{n}^{q}A_{*}\ \prec_{\Delta( \mathfrak{v})}\mathfrak{m}\mathfrak{n}^{q}\mathfrak{v}^{w}A\ \asymp\mathfrak{v}^{w}A\mathfrak{n}^{q}.\]
Now \(\mathfrak{m}\asymp_{\Delta(\mathfrak{v})}1\) gives \(\mathfrak{v}\asymp_{\Delta(\mathfrak{v})}\mathfrak{w}\), hence \(A_{*}\mathfrak{n}^{q}\prec_{\Delta(\mathfrak{v})}\mathfrak{w}^{w}A\mathfrak{ n}^{q}\).
**The behavior of the span under compositional conjugation.** If \(K\) is \(H\)-asymptotic with asymptotic integration, then \(\Psi\cap\Gamma^{>}\neq\emptyset\), but it is convenient not to require "asymptotic integration" in some lemmas below. Instead: _In this subsection \(K\) is \(H\)-asymptotic and ungrounded with \(\Psi\cap\Gamma^{>}\neq\emptyset\)_. We let \(\phi\), \(\mathfrak{v}\) range over \(K^{\times}\). We say that \(\phi\) is _active_ if \(\phi\) is active in \(K\). Recall from [ADH, pp. 290-292] that \(\delta\) denotes the derivation \(\phi^{-1}\partial\) of \(K^{\phi}\), and that
\[A^{\phi}\ =\ a_{r}\phi^{r}\delta^{r}+\text{lower order terms in $\delta$}. \tag{3.1.1}\]
**Lemma 3.1.19**.: _Suppose \(\mathfrak{v}:=\mathfrak{v}(A)\prec^{\flat}1\) and \(\phi\preccurlyeq 1\) is active. Then_
\[A\ \asymp_{\Delta(\mathfrak{v})}\ A^{\phi},\qquad\mathfrak{v}\ \asymp_{\Delta(\mathfrak{v})}\mathfrak{v}(A^{\phi})\ \prec^{\flat}1,\qquad\mathfrak{v},\mathfrak{v}(A^{\phi})\ \prec^{\flat}_{\phi}\ 1.\]
Proof.: From \(\phi^{\dagger}\prec 1\preccurlyeq\mathfrak{v}^{\dagger}\) we get \([\phi]<[\mathfrak{v}]\), so \(\phi\asymp_{\Delta(\mathfrak{v})}1\). Hence \(A^{\phi}\asymp_{\Delta(\mathfrak{v})}A\) by [ADH, 11.1.4]. For the rest we can arrange \(A\asymp 1\), so \(A^{\phi}\asymp_{\Delta(\mathfrak{v})}1\) and \(\mathfrak{v}\asymp a_{r}\). In view of (3.1.1) this yields \(\mathfrak{v}(A^{\phi})\asymp_{\Delta(\mathfrak{v})}a_{r}\phi^{r}\asymp_{\Delta( \mathfrak{v})}\mathfrak{v}\). So \(\mathfrak{v}(A^{\phi})^{\dagger}\asymp\mathfrak{v}^{\dagger}\succcurlyeq 1\), which gives \(\mathfrak{v}(A^{\phi})\prec^{\flat}1\), and also \(\mathfrak{v},\mathfrak{v}(A^{\phi})\prec^{\flat}_{\phi}1\).
**Lemma 3.1.20**.: _If \(\operatorname{nwt}(A)=r\), then \(\mathfrak{v}(A^{\phi})=1\) eventually, and if \(\operatorname{nwt}(A)<r\), then \(\mathfrak{v}(A^{\phi})\prec^{\flat}_{\phi}1\) eventually._
Proof.: Clearly, if \(\operatorname{nwt}(A)=r\), then \(\operatorname{dwt}(A^{\phi})=r\) and so \(\mathfrak{v}(A^{\phi})=1\) eventually. Suppose \(\operatorname{nwt}(A)<r\). To show that \(\mathfrak{v}(A^{\phi})\prec^{\flat}_{\phi}1\) eventually, we may replace \(A\) by \(A^{\phi_{0}}\) for suitable active \(\phi_{0}\) and assume that \(n:=\operatorname{nwt}(A)=\operatorname{dwt}(A^{\phi})\) for all active \(\phi\preccurlyeq 1\). Thus \(v(A^{\phi})=v(A)+nv\phi\) for all active \(\phi\preccurlyeq 1\) by [ADH, 11.1.11(i)]. Using (3.1.1) we therefore obtain for active \(\phi\preccurlyeq 1\):
\[\mathfrak{v}(A^{\phi})\ \asymp\ a_{r}\phi^{r}/a_{n}\phi^{n}\ =\ \mathfrak{v}(A)\phi^{r-n}\ \preccurlyeq\ \phi^{r-n}\ \preccurlyeq\ \phi.\]
Take \(x\in K^{\times}\) with \(x\not\asymp 1\) and \(x^{\prime}\asymp 1\); then \(x\succ 1\), so \(x^{-1}\asymp x^{\dagger}\prec 1\) is active. Hence for active \(\phi\preccurlyeq x^{-1}\) we have \(\phi\prec^{\flat}_{\phi}1\) and thus \(\mathfrak{v}(A^{\phi})\prec^{\flat}_{\phi}1\).
**Corollary 3.1.21**.: _The following conditions on \(K\) are equivalent:_
1. \(K\) _is_ \(\lambda\)_-free;_
2. \(\operatorname{nwt}(B)\leqslant 1\) _for all_ \(B\in K[\partial]\) (_so_ \(\mathfrak{v}(B^{\phi})\prec_{\phi}^{\flat}1\) _eventually_)_;_
3. \(\operatorname{nwt}(B)\leqslant 1\) _for all_ \(B\in K[\partial]\) _of order_ \(2\)_._
Proof.: The implication (i) \(\Rightarrow\) (ii) follows from [ADH, 13.7.10] and Lemma 3.1.20, and (ii) \(\Rightarrow\) (iii) is clear. Suppose \(K\) is not \(\lambda\)-free. Take \(\lambda\in K\) such that \(\phi^{\dagger}+\lambda\prec\phi\) for all active \(\phi\) ([ADH, 11.6.1]); set \(B:=(\partial+\lambda)\partial=\partial^{2}+\lambda\partial\). Then for active \(\phi\) we have \(B^{\phi}=\phi^{2}\big(\delta^{2}+(\phi^{\dagger}+\lambda)\phi^{-1}\delta\big)\), so \(\operatorname{dwt}(B^{\phi})=2\). Thus (iii) \(\Rightarrow\) (i).
Lemma 3.1.19 leads to an "eventual" version of Corollary 3.1.14:
**Lemma 3.1.22**.: _Suppose \(K\) is \(\lambda\)-free and \(B\in K[\partial]\) is such that \(\operatorname{order}(B)\leqslant r\) and \(B\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{r+1}A\), where \(\mathfrak{v}:=\mathfrak{v}(A)\prec^{\flat}1\). Then \(\mathscr{E}^{\mathrm{e}}(A+B)=\mathscr{E}^{\mathrm{e}}(A)\)._
Proof.: By [ADH, 10.1.3, 11.7.18] and Corollary 1.8.10 we can pass to an extension to arrange that \(K\) is \(\omega\)-free. Next, by [ADH, remark following 14.0.1] we extend further to arrange that \(K\) is algebraically closed and newtonian, and thus \(\mathrm{d}\)-valued by Lemma 1.2.9. Then \(\mathscr{E}^{\mathrm{e}}(A)=v(\ker^{\neq}A)\) by Proposition 1.5.2, and \(A\) splits over \(K\) by [ADH, 14.5.3, 5.8.9]. It remains to show that \(\mathscr{E}^{\mathrm{e}}(A)\subseteq\mathscr{E}^{\mathrm{e}}(A+B)\): the reverse inclusion then follows by interchanging \(A\) and \(A+B\), using \(\mathfrak{v}(A)\sim\mathfrak{v}(A+B)\). Let \(\gamma\in\mathscr{E}^{\mathrm{e}}(A)\). Take \(\mathfrak{n}\in\ker^{\neq}A\) with \(v\mathfrak{n}=\gamma\). Then \(A\in K[\partial](\partial-\mathfrak{n}^{\dagger})\) by [ADH, 5.1.21] and so \(\mathfrak{n}^{\dagger}\preccurlyeq\mathfrak{v}^{-1}\), by [ADH, 1.22] and Corollary 3.1.6. Now \(\mathscr{E}^{\mathrm{e}}(A)\subseteq\mathscr{E}(A)\), so \(\gamma=v\mathfrak{n}\in\mathscr{E}(A+B)\) by Corollary 3.1.14. Let \(\phi\preccurlyeq 1\) be active; it remains to show that then \(\gamma\in\mathscr{E}\big((A+B)^{\phi}\big)\). By Lemma 3.1.19, \(A^{\phi}\asymp_{\Delta(\mathfrak{v})}A\); also \(B^{\phi}\preccurlyeq B\) by [ADH, 11.1.4]. Lemma 3.1.19 gives \(\mathfrak{v}\asymp_{\Delta(\mathfrak{v})}\mathfrak{v}(A^{\phi})\), hence \(B^{\phi}\prec_{\Delta(\mathfrak{v})}\mathfrak{v}(A^{\phi})^{r+1}A^{\phi}\). Thus with \(K^{\phi}\), \(A^{\phi}\), \(B^{\phi}\) in the role of \(K\), \(A\), \(B\), the above argument leading to \(\gamma\in\mathscr{E}(A+B)\) gives \(\gamma\in\mathscr{E}(A^{\phi}+B^{\phi})=\mathscr{E}\big((A+B)^{\phi}\big)\).
For \(r=1\) we can weaken the hypothesis of \(\lambda\)-freeness:
**Corollary 3.1.23**.: _Suppose \(K\) has asymptotic integration, \(r=1\), and \(B\in K[\partial]\) of order \(\leqslant 1\) satisfies \(B\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{2}A\), where \(\mathfrak{v}:=\mathfrak{v}(A)\prec^{\flat}1\). Then \(\mathscr{E}^{\mathrm{e}}(A+B)=\mathscr{E}^{\mathrm{e}}(A)\)._
Proof.: Using Lemma 1.2.10 we replace \(K\) by an immediate extension to arrange \(\mathrm{I}(K)\subseteq K^{\dagger}\). Then \(\mathscr{E}^{\mathrm{e}}(A)=v(\ker^{\neq}A)\) by Lemma 1.5.9. Now argue as in the proof of Lemma 3.1.22.
_In the next proposition and its corollary \(K\) is \(\mathrm{d}\)-valued with algebraically closed constant field \(C\) and divisible group \(K^{\dagger}\) of logarithmic derivatives._ We choose a complement \(\Lambda\) of the \(\mathbb{Q}\)-linear subspace \(K^{\dagger}\) of \(K\). Then we have the set \(\mathscr{E}^{\mathrm{u}}(A)\) of ultimate exceptional values of \(A\) with respect to \(\Lambda\). The following stability result will be crucial in Section 4.4:
**Proposition 3.1.24**.: _Suppose \(K\) is \(\omega\)-free, \(\mathrm{I}(K)\subseteq K^{\dagger}\), and \(B\in K[\partial]\) of order \(\leqslant r\) satisfies \(B\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{r+1}A\), where \(\mathfrak{v}:=\mathfrak{v}(A)\prec^{\flat}1\). Then \(\mathscr{E}^{\mathrm{u}}(A+B)=\mathscr{E}^{\mathrm{u}}(A)\)._
Proof.: Let \(\Omega\) be the differential fraction field of the universal exponential extension \(\mathrm{U}=K\big{[}\mathrm{e}(\Lambda)\big{]}\) of \(K\) from Section 2.2. Equip \(\Omega\) with a spectral extension of the valuation of \(K\); see Section 2.6. Apply Lemma 3.1.22 to \(\Omega\) in place of \(K\) to get \(\mathscr{E}^{\mathrm{e}}_{\Omega}(A+B)=\mathscr{E}^{\mathrm{e}}_{\Omega}(A)\). Hence \(\mathscr{E}^{\mathrm{u}}(A+B)=\mathscr{E}^{\mathrm{u}}(A)\) by (2.6.3).
In a similar manner we obtain an analogue of Corollary 3.1.23:
**Corollary 3.1.25**.: _Suppose \(K\) has asymptotic integration, \(\mathrm{I}(K)\subseteq K^{\dagger}\), \(r=1\), and \(B\in K[\![\partial]\!]\) satisfies \(\operatorname{order}(B)\leqslant 1\) and \(B\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{2}A\), where \(\mathfrak{v}:=\mathfrak{v}(A)\prec^{\flat}1\). Then \(\mathscr{E}^{\mathrm{u}}(A+B)=\mathscr{E}^{\mathrm{u}}(A)\)._
Proof.: Let \(\Omega\) be as in the proof of Proposition 3.1.24. Then \(\Omega\) is ungrounded by Lemma 2.6.3, hence \(|\mathscr{E}^{\mathrm{e}}_{\Omega}(A)|\leqslant 1\) and \(v(\ker_{\Omega}^{\neq}A)\subseteq\mathscr{E}^{\mathrm{e}}_{\Omega}(A)\) by [ADH, p. 481]. But \(\dim_{C}\ker_{\Omega}A=1\), so \(v(\ker_{\Omega}^{\neq}A)=\mathscr{E}^{\mathrm{e}}_{\Omega}(A)\). The proof of Lemma 3.1.22 with \(\Omega\) in place of \(K\) now gives \(\mathscr{E}^{\mathrm{e}}_{\Omega}(A+B)=\mathscr{E}^{\mathrm{e}}_{\Omega}(A)\), so \(\mathscr{E}^{\mathrm{u}}(A+B)=\mathscr{E}^{\mathrm{u}}(A)\) by (2.6.3).
In the "real" case we have the following variant of Proposition 3.1.24:
**Proposition 3.1.26**.: _Suppose \(K=H[i]\), \(i^{2}=-1\), where \(H\) is a real closed \(H\)-field with asymptotic integration such that \(H^{\dagger}=H\) and \(\mathrm{I}(H)i\subseteq K^{\dagger}\). Let \(B\in K[\partial]\) of order \(\leqslant r\) be such that \(B\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{r+1}A\) with \(\mathfrak{v}:=\mathfrak{v}(A)\prec^{\flat}1\). Let \(\Lambda\) be a complement of the subspace \(K^{\dagger}\) of the \(\mathbb{Q}\)-linear space \(K\). Then \(\mathscr{E}^{\mathrm{u}}(A+B)=\mathscr{E}^{\mathrm{u}}(A)\), where the ultimate exceptional values are with respect to \(\Lambda\)._
Proof.: Take an \(H\)-closed extension \(F\) of \(H\) with \(C_{F}=C_{H}\) as in Corollary 2.6.25. Then the algebraically closed d-valued \(H\)-asymptotic extension \(L:=F[i]\) of \(K\) is \(\omega\)-free, \(C_{L}=C\), \(\mathrm{I}(L)\subseteq L^{\dagger}\), and \(L^{\dagger}\cap K=K^{\dagger}\). Take a complement \(\Lambda_{L}\supseteq\Lambda\) of the subspace \(L^{\dagger}\) of the \(\mathbb{Q}\)-linear space \(L\). Let \(\mathrm{U}_{L}=L\big{[}\mathrm{e}(\Lambda_{L})\big{]}\) be the universal exponential extension of \(L\) from Section 2.2; it has the universal exponential extension \(\mathrm{U}:=K\big{[}\mathrm{e}(\Lambda)\big{]}\) of \(K\) as a differential subring. Let \(\Omega\), \(\Omega_{L}\) be the differential fraction fields of \(\mathrm{U}\), \(\mathrm{U}_{L}\), respectively, and equip \(\Omega_{L}\) with a spectral extension of the valuation of \(L\); then the restriction of this valuation to \(\Omega\) is a spectral extension of the valuation of \(K\) (see remarks preceding Lemma 2.6.18). Lemma 3.1.22 applied to \(\Omega_{L}\) in place of \(K\) yields \(\mathscr{E}^{\mathrm{e}}_{\Omega_{L}}(A+B)=\mathscr{E}^{\mathrm{e}}_{\Omega_{L}}(A)\), hence \(\mathscr{E}^{\mathrm{e}}_{\Omega}(A+B)=\mathscr{E}^{\mathrm{e}}_{\Omega}(A)\) by Lemma 2.6.18 and thus \(\mathscr{E}^{\mathrm{u}}(A+B)=\mathscr{E}^{\mathrm{u}}(A)\).
### The span of the linear part of a differential polynomial
In this subsection \(P\in K\{Y\}^{\neq}\) has order \(r\). Recall from [ADH, 5.1] that the _linear part_ of \(P\) is the differential operator
\[L_{P}\ :=\ \sum_{n}\frac{\partial P}{\partial Y^{(n)}}(0)\,\partial^{n}\in K[ \partial]\]
of order \(\leqslant r\). We have \(L_{P_{\times\mathfrak{m}}}=L_{P}\mathfrak{m}\) [ADH, p. 242]; hence items 3.1.9, 3.1.10 and 3.1.12 above yield information about the span of \(L_{P_{\times\mathfrak{m}}}\) (provided \(L_{P}\neq 0\)). We now want to similarly investigate the span of the linear part
\[L_{P_{+a}}\ =\ \sum_{n}\frac{\partial P}{\partial Y^{(n)}}(a)\,\partial^{n}\]
of the additive conjugate \(P_{+a}\) of \(P\) by some \(a\prec 1\). In the next two lemmas we assume \(\mathrm{order}(L_{P})=r\) (in particular, \(L_{P}\neq 0\)), \(\mathfrak{v}(L_{P})\prec 1\), and \(a\prec 1\), we set
\[L:=L_{P},\quad L^{+}:=L_{P_{+a}},\quad\mathfrak{v}:=\mathfrak{v}(L),\]
and set \(L_{n}:=\frac{\partial P}{\partial Y^{(n)}}(0)\) and \(L_{n}^{+}:=\frac{\partial P}{\partial Y^{(n)}}(a)\), so \(L=\sum_{n}L_{n}\partial^{n}\), \(L^{+}=\sum_{n}L_{n}^{+}\partial^{n}\). Recall from [ADH, 4.2] the decomposition of \(P\) into homogeneous parts: \(P=\sum_{d}P_{d}\) where \(P_{d}=\sum_{|\mathfrak{i}|=d}P_{\mathfrak{i}}Y^{\mathfrak{i}}\); we set \(P_{>1}:=\sum_{d>1}P_{d}\).
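For instance (a quick illustration of this notation): suppose \(g\succ 1\) and take \(P=Y''+YY'+gY\), so \(r=2\), \(P_{1}=Y''+gY\), and \(P_{>1}=P_{2}=YY'\). Since \(\frac{\partial P}{\partial Y}=Y'+g\), \(\frac{\partial P}{\partial Y'}=Y\), and \(\frac{\partial P}{\partial Y''}=1\), we get

\[L\ =\ \partial^{2}+g,\qquad L^{+}\ =\ \partial^{2}+a\,\partial+(a'+g),\]

with \(\mathfrak{v}=\mathfrak{v}(L)\asymp g^{-1}\prec 1\); here \(L_{r}^{+}=L_{r}=1\) and \(\operatorname{order}(L^{+})=\operatorname{order}(L)=2\).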
**Lemma 3.1.27**.: _Suppose \(P_{>1}\prec_{\Delta(\mathfrak{v})}\mathfrak{v}P_{1}\) and \(n\leqslant r\). Then_
* \(L_{r}^{+}\sim_{\Delta(\mathfrak{v})}L_{r}\)_, and thus_ \(\mathrm{order}(L^{+})=\mathrm{order}(L)=r\)_;_
* _if_ \(L_{n}\asymp_{\Delta(\mathfrak{v})}L\)_, then_ \(L_{n}^{+}\sim_{\Delta(\mathfrak{v})}L_{n}\)_, and so_ \(v(L_{n}^{+})=v(L_{n})\)_;_
* _if_ \(L_{n}\prec_{\Delta(\mathfrak{v})}L\)_, then_ \(L_{n}^{+}\prec_{\Delta(\mathfrak{v})}L\)_, and so_ \(v(L_{n}^{+})>v(L)\)_._
_In particular, \(L^{+}\sim_{\Delta(\mathfrak{v})}L\), \(\mathrm{dwt}\,L^{+}=\mathrm{dwt}\,L\), and \(\mathfrak{v}(L^{+})\sim_{\Delta(\mathfrak{v})}\mathfrak{v}\)._
Proof.: Take \(Q,R\in K\{Y\}\) with \(\deg_{Y^{(n)}}Q\leqslant 0\) and \(R\in Y^{(n)}K\{Y\}\), such that
\[P\ =\ Q+(L_{n}+R)Y^{(n)},\qquad\text{so}\qquad\frac{\partial P}{\partial Y^{(n)}} \ =\ \frac{\partial R}{\partial Y^{(n)}}Y^{(n)}+L_{n}+R.\]
Now \(R\prec_{\Delta(\mathfrak{v})}\mathfrak{v}P_{1}\), so \(\frac{\partial P}{\partial Y^{(n)}}-L_{n}\prec_{\Delta(\mathfrak{v})}\mathfrak{v}P_{1}\). In \(K[\partial]\) we thus have
\[L_{n}^{+}-L_{n}\ =\ \frac{\partial P}{\partial Y^{(n)}}(a)-L_{n}\ \prec_{\Delta(\mathfrak{v})} \mathfrak{v}L\ \asymp\ L_{r}.\]
So \(L_{n}^{+}-L_{n}\prec_{\Delta(\mathfrak{v})}L\) and (taking \(n=r\)) \(L_{r}^{+}-L_{r}\prec_{\Delta(\mathfrak{v})}L_{r}\). This yields (i)-(iii).
**Lemma 3.1.28**.: _Suppose \(P_{>1}\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{m+1}P_{1}\), and let \(A,B\in K[\partial]\) be such that \(L=A+B\), \(B\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{m+1}L\). Then_
\[L^{+}\ =\ A+B^{+}\ \text{ where }B^{+}\in K[\partial]\text{, }B^{+}\ \prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{m+1}L^{+}.\]
_In particular, \(L-L^{+}\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{m+1}L\)._
Proof.: Let \(A_{n},B_{n}\in K\) be such that \(A=\sum_{n}A_{n}\partial^{n}\) and \(B=\sum_{n}B_{n}\partial^{n}\), so \(L_{n}=A_{n}+B_{n}\). Let any \(n\) (possibly \(>r\)) be given and take \(Q,R\in K\{Y\}\) as in the proof of Lemma 3.1.27. Then \(R\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{m+1}P_{1}\). Since \(B\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{m+1}L\), this yields
\[\frac{\partial P}{\partial Y^{(n)}}-A_{n}\ =\ \frac{\partial R}{\partial Y^{(n)}}Y^ {(n)}+B_{n}+R\ \prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{m+1}P_{1}.\]
We have \(L_{n}^{+}=\frac{\partial P}{\partial Y^{(n)}}(a)\), so
\[L_{n}^{+}-A_{n}\ =\ \frac{\partial P}{\partial Y^{(n)}}(a)-A_{n}\ \prec_{\Delta( \mathfrak{v})}\mathfrak{v}^{m+1}L.\]
By Lemma 3.1.27 we have \(L^{+}\sim_{\Delta(\mathfrak{v})}L\), hence \(B^{+}=L^{+}-A\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{m+1}L^{+}\).
### Holes and Slots
_Throughout this section \(K\) is an \(H\)-asymptotic field with small derivation and with rational asymptotic integration. We set \(\Gamma:=v(K^{\times})\). So \(K\) is pre-d-valued, \(\Gamma\neq\{0\}\) has no least positive element, and \(\Psi\cap\Gamma^{>}\neq\emptyset\). We let \(a\), \(b\), \(f\), \(g\) range over \(K\), and \(\phi\), \(\mathfrak{m}\), \(\mathfrak{n}\), \(\mathfrak{v}\), \(\mathfrak{w}\) (possibly decorated) over \(K^{\times}\). As at the end of the previous section we shorten "active in \(K\)" to "active"._
**Holes.** A **hole** in \(K\) is a triple \((P,\mathfrak{m},\widehat{a})\) where \(P\in K\{Y\}\setminus K\) and \(\widehat{a}\) is an element of \(\widehat{K}\setminus K\), for some immediate asymptotic extension \(\widehat{K}\) of \(K\), such that \(\widehat{a}\prec\mathfrak{m}\) and \(P(\widehat{a})=0\). (The extension \(\widehat{K}\) may vary with \(\widehat{a}\).) The **order**, **degree**, and **complexity** of a hole \((P,\mathfrak{m},\widehat{a})\) in \(K\) are defined as the order, (total) degree, and complexity, respectively, of the differential polynomial \(P\). A hole \((P,\mathfrak{m},\widehat{a})\) in \(K\) is called **minimal** if no hole in \(K\) has smaller complexity; then \(P\) is a minimal annihilator of \(\widehat{a}\) over \(K\).
If \((P,\mathfrak{m},\widehat{a})\) is a hole in \(K\), then \(\widehat{a}\) is a \(K\)-external zero of \(P\), in the sense of Section 1.8. Conversely, every \(K\)-external zero \(\widehat{a}\) of a differential polynomial \(P\in K\{Y\}^{\neq}\) gives for every \(\mathfrak{m}\succ\widehat{a}\) a hole \((P,\mathfrak{m},\widehat{a})\) in \(K\). By Proposition 1.8.35 and Corollary 1.8.41:
**Lemma 3.2.1**.: _Let \(r\in\mathbb{N}^{\geqslant 1}\), and suppose \(K\) is \(\lambda\)-free. Then_
\[K\text{ is $\omega$-free and $r$-newtonian}\quad\Longleftrightarrow\quad K\text{ has no hole of order $\leqslant r$.}\]
Thus for \(\omega\)-free \(K\), being newtonian is equivalent to having no holes. Recall that \(K\) being henselian is equivalent to \(K\) having no proper immediate algebraic valued field extension, and hence to \(K\) having no hole of order \(0\).
Minimal holes are like the "minimal counterexamples" in certain combinatorial settings, and we need to understand such holes in a rather detailed way for later use in inductive arguments. Below we also consider the more general notion of _\(Z\)-minimal hole_, which has an important role to play as well. We recall that \(Z(K,\widehat{a})\) is the set of all \(Q\in K\{Y\}^{\neq}\) that vanish at \((K,\widehat{a})\) as defined in [ADH, 11.4].
**Lemma 3.2.2**.: _Let \((P,\mathfrak{m},\widehat{a})\) be a hole in \(K\). Then \(P\in Z(K,\widehat{a})\). If \((P,\mathfrak{m},\widehat{a})\) is minimal, then \(P\) is an element of minimal complexity of \(Z(K,\widehat{a})\)._
Proof.: Let \(a\), \(\mathfrak{v}\) be such that \(\widehat{a}-a\prec\mathfrak{v}\). Since \(\widehat{a}\notin K\) lies in an immediate extension of \(K\) we can take \(\mathfrak{n}\) with \(\mathfrak{n}\asymp\widehat{a}-a\). By [ADH, 12.2.1] we then have \(\operatorname{ndeg}_{\prec\mathfrak{v}}P_{+a}\geqslant\operatorname{ndeg}P_{+a,\times\mathfrak{n}}\geqslant 1\). Hence \(P\in Z(K,\widehat{a})\). Suppose \(P\) is not of minimal complexity in \(Z(K,\widehat{a})\). Take \(Q\in Z(K,\widehat{a})\) of minimal complexity. Then [ADH, 11.4.8] yields a \(K\)-external zero \(\widehat{b}\) of \(Q\), and any \(\mathfrak{n}\succ\widehat{b}\) gives a hole \((Q,\mathfrak{n},\widehat{b})\) in \(K\) of smaller complexity than \((P,\mathfrak{m},\widehat{a})\).
In connection with the next result, note that \(K\) being \(0\)-newtonian just means that \(K\) is henselian as a valued field.
**Corollary 3.2.3**.: _Suppose \(K\) is \(\lambda\)-free and has a minimal hole of order \(r\geqslant 1\). Then \(K\) is \((r-1)\)-newtonian, and \(\omega\)-free if \(r\geqslant 2\)._
Proof.: This is clear for \(r=1\) (and doesn't need \(\lambda\)-freeness), and for \(r\geqslant 2\) follows from Lemma 3.2.1.
**Corollary 3.2.4**.: _Suppose \(K\) is \(\omega\)-free and has a minimal hole of order \(r\geqslant 2\). Assume also that \(C\) is algebraically closed and \(\Gamma\) is divisible. Then \(K\) is \(\operatorname{d}\)-valued, \(r\)-linearly closed, and \(r\)-linearly newtonian._
Proof.: This follows from Lemma 1.2.9, Corollary 1.8.42, and Corollary 3.2.3.
Here is a linear version of Lemma 3.2.1:
**Lemma 3.2.5**.: _If \(K\) is \(\lambda\)-free, then_
\(K\) _is \(1\)-linearly newtonian \(\iff\)\(K\) has no hole of degree \(1\) and order \(1\)._
_If \(r\in\mathbb{N}^{\geqslant 1}\) and \(K\) is \(\omega\)-free, then_
\(K\) _is \(r\)-linearly newtonian \(\iff\)\(K\) has no hole of degree \(1\) and order \(\leqslant r\)._
Proof.: The first statement follows from Lemma 1.8.33, and the second statement from Lemma 1.8.34.
**Corollary 3.2.6**.: _If \(K\) is \(\omega\)-free and has a minimal hole in \(K\) of order \(r\) and degree \(>1\), then \(K\) is \(r\)-linearly newtonian._
**Lemma 3.2.7**.: _Suppose \(K\) has a hole \((P,\mathfrak{m},\widehat{a})\) of degree \(1\), and \(L_{P}\in K[\partial]^{\neq}\) splits over \(K\). Then \(K\) has a hole of complexity \((1,1,1)\)._
Proof.: Let \((P,\mathfrak{m},\widehat{a})\) as in the hypothesis have minimal order. Then \(\operatorname{order}P\geqslant 1\), so \(\operatorname{order}P=\operatorname{order}L_{P}\). Take \(A,B\in K[\partial]\) such that \(\operatorname{order}A=1\) and \(L_{P}=AB\). If \(\operatorname{order}B=0\), then \((P,\mathfrak{m},\widehat{a})\) has complexity \((1,1,1)\). Assume \(\operatorname{order}B\geqslant 1\). Then \(B(\widehat{a})\notin K\): otherwise, taking \(Q\in K\{Y\}\) of degree \(1\) with \(L_{Q}=B\) and \(Q(0)=\)
\(-B(\widehat{a})\) yields a hole \((Q,\mathfrak{m},\widehat{a})\) in \(K\) where \(\deg Q=1\) and \(L_{Q}\) splits over \(K\), and \((Q,\mathfrak{m},\widehat{a})\) has smaller order than \((P,\mathfrak{m},\widehat{a})\). Set \(\widehat{b}:=B(\widehat{a})\) and take \(R\in K\{Y\}\) of degree \(1\) with \(L_{R}=A\) and \(R(0)=P(0)\). Then
\[R(\widehat{b})\ =\ R(0)+L_{R}(\widehat{b})\ =\ P(0)+L_{P}(\widehat{a})\ =\ P( \widehat{a})\ =\ 0,\]
hence for any \(\mathfrak{n}\succ\widehat{b}\), \((R,\mathfrak{n},\widehat{b})\) is a hole in \(K\) of complexity \((1,1,1)\).
**Corollary 3.2.8**.: _Suppose \(K\) is \(\omega\)-free, \(C\) is algebraically closed, and \(\Gamma\) is divisible. Then every minimal hole in \(K\) of degree \(1\) has order \(1\). If in addition \(K\) is \(1\)-linearly newtonian, then every minimal hole in \(K\) has degree \(>1\)._
Proof.: The first statement follows from Corollary 3.2.4 and the preceding lemma. For the second statement, use the first and Lemma 3.2.5.
Let \((P,\mathfrak{m},\widehat{a})\) be a hole in \(K\). We say \((P,\mathfrak{m},\widehat{a})\) is \(Z\)**-minimal** if \(P\) has minimal complexity in \(Z(K,\widehat{a})\). Thus if \((P,\mathfrak{m},\widehat{a})\) is minimal, then it is \(Z\)-minimal by Lemma 3.2.2. If \((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal, then by [ADH, remarks following 11.4.3], the differential polynomial \(P\) is a minimal annihilator of \(\widehat{a}\) over \(K\). Note also that \(\operatorname{ndeg}P_{\times\mathfrak{m}}\geqslant 1\) by [ADH, 11.2.1]. In more detail:
**Lemma 3.2.9**.: _Let \((P,\mathfrak{m},\widehat{a})\) be a hole in \(K\). Then for all \(\mathfrak{n}\) with \(\widehat{a}\prec\mathfrak{n}\preccurlyeq\mathfrak{m}\),_
\[1\ \leqslant\ \operatorname{dmul}P_{\times\mathfrak{n}}\ \leqslant\ \operatorname{ddeg}P_{\times\mathfrak{n}}\ \leqslant\ \operatorname{ddeg}P_{\times\mathfrak{m}}.\]
_In particular, \(\operatorname{ddeg}_{\prec\mathfrak{m}}P\geqslant 1\)._
Proof.: Assume \(\widehat{a}\prec\mathfrak{n}\preccurlyeq\mathfrak{m}\). Then \(\widehat{a}=\mathfrak{n}\widehat{b}\) with \(\widehat{b}\prec 1\); put \(Q:=P_{\times\mathfrak{n}}\in K\{Y\}^{\neq}\). Then \(Q(\widehat{b})=0\), hence \(D_{Q}(0)=0\) and so \(\operatorname{dmul}Q=\operatorname{dmul}P_{\times\mathfrak{n}}\geqslant 1\). The rest follows from [ADH, 6.6.5(ii), 6.6.7, 6.6.9] and \(\Gamma^{>}\) having no least element.
In the next lemma, \((\lambda_{\rho})\), \((\omega_{\rho})\) are pc-sequences in \(K\) as in [ADH, 11.5, 11.7].
**Lemma 3.2.10**.: _Suppose \(K\) is \(\lambda\)-free and \(\omega\in K\) is such that \(\omega_{\rho}\rightsquigarrow\omega\)_(_so \(K\) is not \(\omega\)-free_)_. Then we have a hole \((P,\mathfrak{m},\lambda)\) in \(K\) where \(P=2Y^{\prime}+Y^{2}+\omega\) and \(\lambda_{\rho}\rightsquigarrow\lambda\), and each such hole in \(K\) is a \(Z\)-minimal hole in \(K\)._
Proof.: From [ADH, 11.7.13] we obtain \(\lambda\) in an immediate asymptotic extension of \(K\) such that \(\lambda_{\rho}\rightsquigarrow\lambda\) and \(P(\lambda)=0\). Taking any \(\mathfrak{m}\) with \(\lambda\prec\mathfrak{m}\) then yields a hole \((P,\mathfrak{m},\lambda)\) in \(K\) with \(\lambda_{\rho}\rightsquigarrow\lambda\), and each such hole in \(K\) is a \(Z\)-minimal hole in \(K\) by [ADH, 11.4.13, 11.7.12].
**Corollary 3.2.11**.: _If \(K\) is \(\lambda\)-free but not \(\omega\)-free, then each minimal hole in \(K\) of positive order has complexity \((1,1,1)\) or complexity \((1,1,2)\). If \(K\) is a Liouville closed \(H\)-field and not \(\omega\)-free, then \((P,\mathfrak{m},\lambda)\) is a minimal hole of complexity \((1,1,2)\), where \(\omega\), \(P\), \(\lambda\), \(\mathfrak{m}\) are as in Lemma 3.2.10._
Here the second part uses Corollary 1.8.29 and Lemma 3.2.5.
**Slots.** In some arguments the notion of a hole in \(K\) turns out to be too stringent. Therefore we introduce a more flexible version of it:
**Definition 3.2.12**.: A **slot** in \(K\) is a triple \((P,\mathfrak{m},\widehat{a})\) where \(P\in K\{Y\}\setminus K\) and \(\widehat{a}\) is an element of \(\widehat{K}\setminus K\), for some immediate asymptotic extension \(\widehat{K}\) of \(K\), such that \(\widehat{a}\prec\mathfrak{m}\) and \(P\in Z(K,\widehat{a})\). The **order**, **degree**, and **complexity** of such a slot in \(K\) are defined to be the order, degree, and complexity of the differential
polynomial \(P\), respectively. A slot in \(K\) of degree \(1\) is also called a **linear** slot in \(K\). A slot \((P,\mathfrak{m},\widehat{a})\) in \(K\) is \(Z\)**-minimal** if \(P\) is of minimal complexity among elements of \(Z(K,\widehat{a})\).
Thus by Lemma 3.2.2, holes in \(K\) are slots in \(K\), and a hole in \(K\) is \(Z\)-minimal iff it is \(Z\)-minimal as a slot in \(K\). From [ADH, 11.4.13] we obtain:
**Corollary 3.2.13**.: _Let \((P,\mathfrak{m},\widehat{a})\) be a \(Z\)-minimal slot in \(K\) and \((a_{\rho})\) be a divergent pc-sequence in \(K\) such that \(a_{\rho}\rightsquigarrow\widehat{a}\). Then \(P\) is a minimal differential polynomial of \((a_{\rho})\) over \(K\)._
We say that slots \((P,\mathfrak{m},\widehat{a})\) and \((Q,\mathfrak{n},\widehat{b})\) in \(K\) are **equivalent** if \(P=Q\), \(\mathfrak{m}=\mathfrak{n}\), and \(v(\widehat{a}-a)=v(\widehat{b}-a)\) for all \(a\); note that then \(Z(K,\widehat{a})=Z(K,\widehat{b})\), so \((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal iff \((P,\mathfrak{m},\widehat{b})\) is \(Z\)-minimal. Clearly this is an equivalence relation on the class of slots in \(K\). The following lemma often allows us to pass from a \(Z\)-minimal slot to a \(Z\)-minimal hole:
**Lemma 3.2.14**.: _Let \((P,\mathfrak{m},\widehat{a})\) be a \(Z\)-minimal slot in \(K\). Then \((P,\mathfrak{m},\widehat{a})\) is equivalent to a \(Z\)-minimal hole in \(K\)._
Proof.: By [ADH, 11.4.8] we obtain \(\widehat{b}\) in an immediate asymptotic extension of \(K\) with \(P(\widehat{b})=0\) and \(v(\widehat{a}-a)=v(\widehat{b}-a)\) for all \(a\). In particular \(\widehat{b}\notin K\), \(\widehat{b}\prec\mathfrak{m}\), so \((P,\mathfrak{m},\widehat{b})\) is a hole in \(K\) equivalent to \((P,\mathfrak{m},\widehat{a})\).
By [ADH, 11.4.8] the extension below containing \(\widehat{b}\) is not required to be immediate:
**Corollary 3.2.15**.: _If \((P,\mathfrak{m},\widehat{a})\) is a \(Z\)-minimal hole in \(K\) and \(\widehat{b}\) in an asymptotic extension of \(K\) satisfies \(P(\widehat{b})=0\) and \(v(\widehat{a}-a)=v(\widehat{b}-a)\) for all \(a\), then there is an isomorphism \(K\langle\widehat{a}\rangle\to K\langle\widehat{b}\rangle\) of valued differential fields over \(K\) sending \(\widehat{a}\) to \(\widehat{b}\)._
In particular, equivalent \(Z\)-minimal holes \((P,\mathfrak{m},\widehat{a})\), \((P,\mathfrak{m},\widehat{b})\) in \(K\) yield an isomorphism \(K\langle\widehat{a}\rangle\to K\langle\widehat{b}\rangle\) of valued differential fields over \(K\) sending \(\widehat{a}\) to \(\widehat{b}\).
From Lemmas 3.2.1 and 3.2.14 we obtain:
**Corollary 3.2.16**.: _Let \(r\in\mathbb{N}^{\geqslant 1}\), and suppose \(K\) is \(\omega\)-free. Then_
\[K\text{ is $r$-newtonian}\quad\Longleftrightarrow\quad K\text{ has no slot of order }\leqslant r.\]
Let \((P,\mathfrak{m},\widehat{a})\) be a slot in \(K\). Then \((bP,\mathfrak{m},\widehat{a})\) for \(b\neq 0\) is a slot in \(K\) of the same complexity as \((P,\mathfrak{m},\widehat{a})\), and if \((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal, then so is \((bP,\mathfrak{m},\widehat{a})\); likewise with "hole in \(K\)" in place of "slot in \(K\)". For active \(\phi\) we have the **compositional conjugate**\((P^{\phi},\mathfrak{m},\widehat{a})\) by \(\phi\) of \((P,\mathfrak{m},\widehat{a})\): it is a slot in \(K^{\phi}\) of the same complexity as \((P,\mathfrak{m},\widehat{a})\), it is \(Z\)-minimal if \((P,\mathfrak{m},\widehat{a})\) is, and it is a hole (minimal hole) in \(K^{\phi}\) if \((P,\mathfrak{m},\widehat{a})\) is a hole (minimal hole, respectively) in \(K\). If the slots \((P,\mathfrak{m},\widehat{a})\), \((Q,\mathfrak{n},\widehat{b})\) in \(K\) are equivalent, then so are \((bP,\mathfrak{m},\widehat{a})\), \((bQ,\mathfrak{n},\widehat{b})\) for \(b\neq 0\), as well as the slots \((P^{\phi},\mathfrak{m},\widehat{a})\), \((Q^{\phi},\mathfrak{n},\widehat{b})\) in \(K^{\phi}\) for active \(\phi\).
The following conventions are in force in the rest of this section:
_We let \(r\) range over natural numbers \(\geqslant 1\) and let \((P,\mathfrak{m},\widehat{a})\) denote a slot in \(K\) of order \(r\), so \(P\notin K[Y]\) has order \(r\). We set \(w:=\operatorname{wt}(P)\), so \(w\geqslant r\geqslant 1\)._
Thus \(\operatorname{wt}(P_{+a})=\operatorname{wt}(P_{\times\mathfrak{n}})= \operatorname{wt}(P^{\phi})=w\).
**Refinements and multiplicative conjugates.** For \(a\), \(\mathfrak{n}\) such that \(\widehat{a}-a\prec\mathfrak{n}\preccurlyeq\mathfrak{m}\) we obtain a slot \((P_{+a},\mathfrak{n},\widehat{a}-a)\) in \(K\) of the same complexity as \((P,\mathfrak{m},\widehat{a})\) [ADH, 4.3, 11.4]. Slots of this form are said to **refine**\((P,\mathfrak{m},\widehat{a})\) and are called **refinements** of \((P,\mathfrak{m},\widehat{a})\). A refinement of a refinement of \((P,\mathfrak{m},\widehat{a})\) is itself a refinement of \((P,\mathfrak{m},\widehat{a})\). If \((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal, then so is any refinement of \((P,\mathfrak{m},\widehat{a})\). If \((P,\mathfrak{m},\widehat{a})\) is a hole in \(K\), then so is each of its refinements, and likewise with "minimal hole" in place of "hole". For active \(\phi\), \((P_{+a},\mathfrak{n},\widehat{a}-a)\) refines \((P,\mathfrak{m},\widehat{a})\) iff \((P^{\phi}_{+a},\mathfrak{n},\widehat{a}-a)\) refines \((P^{\phi},\mathfrak{m},\widehat{a})\). If \((P,\mathfrak{m},\widehat{a})\), \((P,\mathfrak{m},\widehat{b})\) are equivalent slots in \(K\) and \((P_{+a},\mathfrak{n},\widehat{a}-a)\) refines \((P,\mathfrak{m},\widehat{a})\), then \((P_{+a},\mathfrak{n},\widehat{b}-a)\) refines \((P,\mathfrak{m},\widehat{b})\), and the slots \((P_{+a},\mathfrak{n},\widehat{a}-a)\), \((P_{+a},\mathfrak{n},\widehat{b}-a)\) in \(K\) are equivalent. Conversely, if \((P,\mathfrak{m},\widehat{a})\) and \((P,\mathfrak{m},\widehat{b})\) are slots in \(K\) with equivalent refinements, then \((P,\mathfrak{m},\widehat{a})\) and \((P,\mathfrak{m},\widehat{b})\) are equivalent.
**Lemma 3.2.17**.: _Let \((P_{+a},\mathfrak{n},\widehat{a}-a)\) be a slot in \(K\). Then \((P_{+a},\mathfrak{n},\widehat{a}-a)\) refines \((P,\mathfrak{m},\widehat{a})\), or \((P,\mathfrak{m},\widehat{a})\) refines \((P_{+a},\mathfrak{n},\widehat{a}-a)\)._
Proof.: If \(\mathfrak{n}\preccurlyeq\mathfrak{m}\), then \(\widehat{a}-a\prec\mathfrak{n}\preccurlyeq\mathfrak{m}\), so \((P_{+a},\mathfrak{n},\widehat{a}-a)\) refines \((P,\mathfrak{m},\widehat{a})\), whereas if \(\mathfrak{m}\prec\mathfrak{n}\), then \((\widehat{a}-a)-(-a)=\widehat{a}\preccurlyeq\mathfrak{m}\preccurlyeq \mathfrak{n}\), so
\[(P,\mathfrak{m},\widehat{a})=\big{(}(P_{+a})_{+(-a)},\mathfrak{m},(\widehat{ a}-a)-(-a)\big{)}\]
refines \((P_{+a},\mathfrak{n},\widehat{a}-a)\).
**Lemma 3.2.18**.: _Let \(Q\in K\{Y\}^{\neq}\) be such that \(Q\notin Z(K,\widehat{a})\). Then there is a refinement \((P_{+a},\mathfrak{n},\widehat{a}-a)\) of \((P,\mathfrak{m},\widehat{a})\) such that \(\operatorname{ndeg}Q_{+a,\times\mathfrak{n}}=0\) and \(\widehat{a}-a\prec\mathfrak{n}\prec\widehat{a}\)._
Proof.: Take \(b\), \(\mathfrak{v}\) such that \(\widehat{a}-b\prec\mathfrak{v}\) and \(\operatorname{ndeg}_{\prec\mathfrak{v}}Q_{+b}=0\). We shall find an \(a\) such that \(\operatorname{ndeg}_{\prec\mathfrak{v}}Q_{+a}=0\), \(\widehat{a}-a\preccurlyeq\widehat{a}\), and \(\widehat{a}-a\prec\mathfrak{v}\): if \(\widehat{a}-b\preccurlyeq\widehat{a}\), we take \(a:=b\); if \(\widehat{a}-b\succ\widehat{a}\), then \(-b\sim\widehat{a}-b\) and so \(\operatorname{ndeg}_{\prec\mathfrak{v}}Q=\operatorname{ndeg}_{\prec\mathfrak{v}}Q_{+b}=0\) by [ADH, 11.2.7], hence \(a:=0\) works. We next arrange \(\widehat{a}-a\prec\widehat{a}\): if \(\widehat{a}-a\asymp\widehat{a}\), take \(a_{1}\) with \(\widehat{a}-a_{1}\prec\widehat{a}\), so \(a-a_{1}\prec\mathfrak{v}\), hence \(\operatorname{ndeg}_{\prec\mathfrak{v}}Q_{+a_{1}}=\operatorname{ndeg}_{\prec\mathfrak{v}}Q_{+a}=0\), and thus \(a\) can be replaced by \(a_{1}\). Since \(\Gamma^{>}\) has no least element, we can choose \(\mathfrak{n}\) with \(\widehat{a}-a\prec\mathfrak{n}\prec\widehat{a},\mathfrak{v}\), and then \((P_{+a},\mathfrak{n},\widehat{a}-a)\) refines \((P,\mathfrak{m},\widehat{a})\) as desired.
If \((P_{+a},\mathfrak{m},\widehat{a}-a)\) refines \((P,\mathfrak{m},\widehat{a})\), then \(D_{P_{+a,\times\mathfrak{m}}}=D_{P_{\times\mathfrak{m},+(a/\mathfrak{m})}}=D_{ P_{\times\mathfrak{m}}}\) by [ADH, 6.6.5(iii)], and thus
\[\operatorname{ddeg}P_{+a,\times\mathfrak{m}}\ =\ \operatorname{ddeg}P_{\times \mathfrak{m}},\qquad\operatorname{dmul}P_{+a,\times\mathfrak{m}}\ =\ \operatorname{dmul}P_{\times\mathfrak{m}}.\]
In combination with Lemma 3.2.9 this has some useful consequences:
**Corollary 3.2.19**.: _Suppose \((P,\mathfrak{m},\widehat{a})\) is a hole in \(K\) and \(\operatorname{ddeg}P_{\times\mathfrak{m}}=1\). Then \(\operatorname{ddeg}_{\prec\mathfrak{m}}P=1\), and for all \(\mathfrak{n}\) with \(\widehat{a}\prec\mathfrak{n}\preccurlyeq\mathfrak{m}\), \((P,\mathfrak{n},\widehat{a})\) refines \((P,\mathfrak{m},\widehat{a})\) with \(\operatorname{ddeg}P_{\times\mathfrak{n}}=\operatorname{dmul}P_{\times \mathfrak{n}}=1\)._
**Corollary 3.2.20**.: _Suppose \((P_{+a},\mathfrak{n},\widehat{a}-a)\) refines the hole \((P,\mathfrak{m},\widehat{a})\) in \(K\). Then_
\[\operatorname{ddeg}P_{\times\mathfrak{m}}\ =\ 1\ \Longrightarrow\ \operatorname{ddeg}P_{+a,\times \mathfrak{n}}\ =\ \operatorname{dmul}P_{+a,\times\mathfrak{n}}\ =\ 1.\]
Proof.: Use
\[1\leqslant\operatorname{dmul}P_{+a,\times\mathfrak{n}}\leqslant\operatorname{ ddeg}P_{+a,\times\mathfrak{n}}\leqslant\operatorname{ddeg}P_{+a,\times\mathfrak{m}}= \operatorname{ddeg}P_{\times\mathfrak{m}},\]
where the first inequality follows from Lemma 3.2.9 applied to \((P_{+a},\mathfrak{n},\widehat{a}-a)\).
If \((P_{+a},\mathfrak{m},\widehat{a}-a)\) refines \((P,\mathfrak{m},\widehat{a})\), then in analogy with \(\operatorname{ddeg}\) and \(\operatorname{dmul}\),
\[\operatorname{ndeg}P_{+a,\times\mathfrak{m}}\ =\ \operatorname{ndeg}P_{\times \mathfrak{m}},\qquad\operatorname{nmul}P_{+a,\times\mathfrak{m}}\ =\ \operatorname{nmul}P_{\times\mathfrak{m}}.\]
(Use compositional conjugation by active \(\phi\).) Lemma 3.2.9 goes through for slots, provided we use \(\operatorname{ndeg}\) and \(\operatorname{nmul}\) instead of \(\operatorname{ddeg}\) and \(\operatorname{dmul}\):
**Lemma 3.2.21**.: _Suppose \(\widehat{a}\prec\mathfrak{n}\preccurlyeq\mathfrak{m}\). Then_
\[1\ \leqslant\ \operatorname{nmul}P_{\times\mathfrak{n}}\ \leqslant\ \operatorname{ ndeg}P_{\times\mathfrak{n}}\ \leqslant\ \operatorname{ndeg}P_{\times\mathfrak{m}}.\]
Proof.: By [ADH, 11.2.3(iii), 11.2.5] it is enough to show \(\operatorname{nmul}P_{\times\mathfrak{n}}\geqslant 1\). Replacing \((P,\mathfrak{m},\widehat{a})\) by its refinement \((P,\mathfrak{n},\widehat{a})\) we arrange \(\mathfrak{m}=\mathfrak{n}\). Now \(\Gamma^{>}\) has no smallest element, so by definition of \(Z(K,\widehat{a})\) and [ADH, p. 483] we have
\[1\ \leqslant\ \operatorname{ndeg}_{\prec\mathfrak{m}}P\ =\ \max\big{\{}\operatorname{nmul}P_{\times\mathfrak{v}}:\mathfrak{v}\prec\mathfrak{m}\big{\}}.\]
Thus by [ADH, 11.2.5] we can take \(\mathfrak{v}\) with \(\widehat{a}\prec\mathfrak{v}\prec\mathfrak{m}\) and \(\operatorname{nmul}P_{\times\mathfrak{v}}\geqslant 1\), and hence \(\operatorname{nmul}P_{\times\mathfrak{m}}\geqslant 1\), again by [ADH, 11.2.5].
Lemma 3.2.21 yields results analogous to Corollaries 3.2.19 and 3.2.20 above:
**Corollary 3.2.22**.: _If \(\operatorname{ndeg}P_{\times\mathfrak{m}}=1\), then for all \(\mathfrak{n}\) with \(\widehat{a}\prec\mathfrak{n}\preccurlyeq\mathfrak{m}\), \((P,\mathfrak{n},\widehat{a})\) refines \((P,\mathfrak{m},\widehat{a})\) and \(\operatorname{ndeg}P_{\times\mathfrak{n}}=\operatorname{nmul}P_{\times \mathfrak{n}}=1\)._
**Corollary 3.2.23**.: _If \((P_{+a},\mathfrak{n},\widehat{a}-a)\) refines \((P,\mathfrak{m},\widehat{a})\), then_
\[\operatorname{ndeg}P_{\times\mathfrak{m}}\ =\ 1\ \Longrightarrow\ \operatorname{ ndeg}P_{+a,\times\mathfrak{n}}\ =\ \operatorname{nmul}P_{+a,\times\mathfrak{n}}\ =\ 1.\]
Any triple \((P_{\times\mathfrak{n}},\mathfrak{m}/\mathfrak{n},\widehat{a}/\mathfrak{n})\) is also a slot in \(K\), with the same complexity as \((P,\mathfrak{m},\widehat{a})\); it is called the **multiplicative conjugate** of \((P,\mathfrak{m},\widehat{a})\) by \(\mathfrak{n}\). If \((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal, then so is any multiplicative conjugate. If \((P,\mathfrak{m},\widehat{a})\) is a hole in \(K\), then so is any multiplicative conjugate; likewise with "minimal hole" in place of "hole". If two slots in \(K\) are equivalent, then so are their multiplicative conjugates by \(\mathfrak{n}\).
Refinements and multiplicative conjugates interact in the following way: Suppose \((P_{+a},\mathfrak{n},\widehat{a}-a)\) refines \((P,\mathfrak{m},\widehat{a})\). Multiplicative conjugation of the slot \((P_{+a},\mathfrak{n},\widehat{a}-a)\) in \(K\) by \(\mathfrak{v}\) then results in the slot \((P_{+a,\times\mathfrak{v}},\mathfrak{n}/\mathfrak{v},(\widehat{a}-a)/ \mathfrak{v})\) in \(K\). On the other hand, first taking the multiplicative conjugate \((P_{\times\mathfrak{v}},\mathfrak{m}/\mathfrak{v},\widehat{a}/\mathfrak{v})\) of \((P,\mathfrak{m},\widehat{a})\) by \(\mathfrak{v}\) and then refining to \((P_{\times\mathfrak{v},+a/\mathfrak{v}},\mathfrak{n}/\mathfrak{v},\widehat{a }/\mathfrak{v}-a/\mathfrak{v})\) results in the same slot in \(K\), thanks to the identity \(P_{+a,\times\mathfrak{v}}=P_{\times\mathfrak{v},+a/\mathfrak{v}}\).
### Quasilinear slots
Note that \(\operatorname{ndeg}P_{\times\mathfrak{m}}\geqslant 1\) by Lemma 3.2.21. We call \((P,\mathfrak{m},\widehat{a})\)**quasilinear** if \(P_{\times\mathfrak{m}}\) is quasilinear, that is, \(\operatorname{ndeg}P_{\times\mathfrak{m}}=1\). If \((P,\mathfrak{m},\widehat{a})\) is quasilinear, then so is any slot in \(K\) equivalent to \((P,\mathfrak{m},\widehat{a})\), any multiplicative conjugate of \((P,\mathfrak{m},\widehat{a})\), as well as any refinement of \((P,\mathfrak{m},\widehat{a})\), by Corollary 3.2.23. If \((P,\mathfrak{m},\widehat{a})\) is linear, then it is quasilinear by Lemma 3.2.21.
Let \((a_{\rho})\) be a divergent pc-sequence in \(K\) with \(a_{\rho}\rightsquigarrow\widehat{a}\) and for each index \(\rho\), let \(\mathfrak{m}_{\rho}\in K^{\times}\) be such that \(\mathfrak{m}_{\rho}\asymp\widehat{a}-a_{\rho}\). Take an index \(\rho_{0}\) such that \(\mathfrak{m}_{\sigma}\prec\mathfrak{m}_{\rho}\prec\mathfrak{m}\) for all \(\sigma>\rho\geqslant\rho_{0}\), cf. [ADH, 2.2].
**Lemma 3.2.24**.: _Let \(\sigma\geqslant\rho\geqslant\rho_{0}\). Then_
* \((P_{+a_{\rho+1}},\mathfrak{m}_{\rho},\widehat{a}-a_{\rho+1})\) _is a refinement of_ \((P,\mathfrak{m},\widehat{a})\)_;_
* _if_ \((P_{+a},\mathfrak{n},\widehat{a}-a)\) _is a refinement of_ \((P,\mathfrak{m},\widehat{a})\)_, then_ \(\mathfrak{m}_{\rho}\preccurlyeq\mathfrak{n}\) _for all sufficiently large_ \(\rho\)_, and for such_ \(\rho\)_,_ \((P_{+a_{\rho+1}},\mathfrak{m}_{\rho},\widehat{a}-a_{\rho+1})\) _refines_ \((P_{+a},\mathfrak{n},\widehat{a}-a)\)_;_
* \((P_{+a_{\sigma+1}},\mathfrak{m}_{\sigma},\widehat{a}-a_{\sigma+1})\) _refines_ \((P_{+a_{\rho+1}},\mathfrak{m}_{\rho},\widehat{a}-a_{\rho+1})\)_._
Proof.: Part (i) follows from \(\widehat{a}-a_{\rho+1}\asymp\mathfrak{m}_{\rho+1}\prec\mathfrak{m}_{\rho}\prec \mathfrak{m}\). For (ii) let \((P_{+a},\mathfrak{n},\widehat{a}-a)\) be a refinement of \((P,\mathfrak{m},\widehat{a})\). Since \(\widehat{a}-a\prec\mathfrak{n}\), we have \(\mathfrak{m}_{\rho}\preccurlyeq\mathfrak{n}\) for all sufficiently large \(\rho\). For such \(\rho\), with \(b:=a_{\rho+1}-a\) we have
\[(P_{+a_{\rho+1}},\mathfrak{m}_{\rho},\widehat{a}-a_{\rho+1})\ =\ \big{(}(P_{+a})_{+b}, \mathfrak{m}_{\rho},(\widehat{a}-a)-b\big{)}\]
and
\[(\widehat{a}-a)-b\ =\ \widehat{a}-a_{\rho+1}\ \asymp\ \mathfrak{m}_{\rho+1}\ \prec\ \mathfrak{m}_{\rho}\ \prec\ \mathfrak{n}.\]
Hence \((P_{+a_{\rho+1}},\mathfrak{m}_{\rho},\widehat{a}-a_{\rho+1})\) refines \((P_{+a},\mathfrak{n},\widehat{a}-a)\). Part (iii) follows from (i) and (ii).
Let \(\boldsymbol{a}=c_{K}(a_{\rho})\) be the cut defined by \((a_{\rho})\) in \(K\) and \(\operatorname{ndeg}_{\boldsymbol{a}}P\) be the Newton degree of \(P\) in \(\boldsymbol{a}\) as introduced in [ADH, 11.2]. Then \(\operatorname{ndeg}_{\boldsymbol{a}}P\) is the eventual value of \(\operatorname{ndeg}P_{+a_{\rho},\times\mathfrak{m}_{\rho}}\). Increasing \(\rho_{0}\) we arrange that additionally for all \(\rho\geqslant\rho_{0}\) we have \(\operatorname{ndeg}P_{+a_{\rho},\times\mathfrak{m}_{\rho}}=\operatorname{ndeg} _{\boldsymbol{a}}P\).
**Corollary 3.2.25**.: \((P,\mathfrak{m},\widehat{a})\) _has a quasilinear refinement iff \(\operatorname{ndeg}_{\boldsymbol{a}}P=1\)._
Proof.: By Lemma 3.2.21 and [ADH, 11.2.8] we have
\[1\leqslant\operatorname{ndeg}P_{+a_{\rho+1},\times\mathfrak{m}_{\rho}}= \operatorname{ndeg}P_{+a_{\rho},\times\mathfrak{m}_{\rho}}. \tag{3.2.1}\]
Thus if \(\operatorname{ndeg}_{\boldsymbol{a}}P=1\), then for \(\rho\geqslant\rho_{0}\), the refinement \((P_{+a_{\rho+1}},\mathfrak{m}_{\rho},\widehat{a}-a_{\rho+1})\) of \((P,\mathfrak{m},\widehat{a})\) is quasilinear. Conversely, if \((P_{+a},\mathfrak{n},\widehat{a}-a)\) is a quasilinear refinement of \((P,\mathfrak{m},\widehat{a})\), then Lemma 3.2.24(ii) yields a \(\rho\geqslant\rho_{0}\) such that \(\mathfrak{m}_{\rho}\preccurlyeq\mathfrak{n}\), and then \((P_{+a_{\rho+1}},\mathfrak{m}_{\rho},\widehat{a}-a_{\rho+1})\) in \(K\) refines \((P_{+a},\mathfrak{n},\widehat{a}-a)\) and hence is also quasilinear, so \(\operatorname{ndeg}_{\boldsymbol{a}}P=\operatorname{ndeg}P_{+a_{\rho},\times \mathfrak{m}_{\rho}}=1\) by (3.2.1).
**Lemma 3.2.26**.: _Assume \(K\) is \(\operatorname{d}\)-valued and \(\omega\)-free, and \(\Gamma\) is divisible. Then every \(Z\)-minimal slot in \(K\) of positive order has a quasilinear refinement._
Proof.: Suppose \((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal. Take a divergent pc-sequence \((a_{\rho})\) in \(K\) such that \(a_{\rho}\rightsquigarrow\widehat{a}\). Then \(P\) is a minimal differential polynomial of \((a_{\rho})\) over \(K\), by Corollary 3.2.13. Hence \(\operatorname{ndeg}_{\boldsymbol{a}}P=1\) by [ADH, 14.5.1], where \(\boldsymbol{a}:=c_{K}(a_{\rho})\). Now Corollary 3.2.25 gives a quasilinear refinement of \((P,\mathfrak{m},\widehat{a})\).
_Remark_.: Suppose \(K\) is a real closed \(H\)-field that is \(\lambda\)-free but not \(\omega\)-free. (For example, the real closure of the \(H\)-field \(\mathbb{R}\langle\omega\rangle\) from [ADH, 13.9.1] satisfies these conditions, by [ADH, 11.6.8, 11.7.23, 13.9.1].) Take \((P,\mathfrak{m},\lambda)\) as in Lemma 3.2.10. Then by Corollary 3.2.25 and [ADH, 11.7.9], \((P,\mathfrak{m},\lambda)\) has no quasilinear refinement. Thus Lemma 3.2.26 fails if "\(\omega\)-free" is replaced by "\(\lambda\)-free".
**Lemma 3.2.27**.: _Let \(L\) be an \(r\)-newtonian \(H\)-asymptotic extension of \(K\) such that \(\Gamma^{<}\) is cofinal in \(\Gamma^{<}_{L}\), and suppose \((P,\mathfrak{m},\widehat{a})\) is quasilinear. Then \(P(\widehat{b})=0\) and \(\widehat{b}\prec\mathfrak{m}\) for some \(\widehat{b}\in L\)._
Proof.: Lemma 3.2.21 and \(\operatorname{ndeg}P_{\times\mathfrak{m}}=1\) give \(\mathfrak{n}\prec\mathfrak{m}\) with \(\operatorname{ndeg}P_{\times\mathfrak{n}}=1\). By [ADH, p. 480], \(\operatorname{ndeg}P_{\times\mathfrak{n}}\) does not change in passing from \(K\) to \(L\). As \(L\) is \(r\)-newtonian this yields \(\widehat{b}\preccurlyeq\mathfrak{n}\) in \(L\) with \(P(\widehat{b})=0\).
In the next two corollaries we assume that \(K\) is \(\operatorname{d}\)-valued and \(\omega\)-free, and that \(L\) is a newtonian \(H\)-asymptotic extension of \(K\).
**Corollary 3.2.28**.: _If \((P,\mathfrak{m},\widehat{a})\) is quasilinear, then \(P(\widehat{b})=0\), \(\widehat{b}\prec\mathfrak{m}\) for some \(\widehat{b}\in L\)._
Proof.: By [159, Theorem B], \(K\) has a newtonization \(K^{*}\) inside \(L\). Such \(K^{*}\) is d-algebraic over \(K\) by [ADH, remarks after 14.0.1], so \(\Gamma^{<}\) is cofinal in \(\Gamma^{<}_{K^{*}}\) by Theorem 1.4.1. Thus we can apply Lemma 3.2.27 to \(K^{*}\) in the role of \(L\).
Here is a variant of Lemma 3.2.14:
**Corollary 3.2.29**.: _Suppose \(\Gamma\) is divisible and \((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal. Then there exists \(\widehat{b}\in L\) such that \(K(\widehat{b})\) is an immediate extension of \(K\) and \((P,\mathfrak{m},\widehat{b})\) is a hole in \(K\) equivalent to \((P,\mathfrak{m},\widehat{a})\)._ (_Thus if \((P,\mathfrak{m},\widehat{a})\) is also a hole in \(K\), then there is an embedding \(K\langle\widehat{a}\rangle\to L\) of valued differential fields over \(K\)._)
Proof.: By Lemma 3.2.26 we may refine \((P,\mathfrak{m},\widehat{a})\) to arrange that \((P,\mathfrak{m},\widehat{a})\) is quasilinear. Then [ADH, 11.4.8] gives \(\widehat{b}\) in an immediate \(H\)-asymptotic extension of \(K\) with \(P(\widehat{b})=0\) and \(v(\widehat{a}-a)=v(\widehat{b}-a)\) for all \(a\). So \((P,\mathfrak{m},\widehat{b})\) is a hole in \(K\) equivalent to \((P,\mathfrak{m},\widehat{a})\). The immediate d-algebraic extension \(K\langle\widehat{b}\rangle\) of \(K\) is \(\omega\)-free by Theorem 1.4.1. Then [ADH, remarks following 14.0.1] gives a newtonian d-algebraic immediate extension \(M\) of \(K\langle\widehat{b}\rangle\) and thus of \(K\). Then \(M\) is a newtonization of \(K\) by [ADH, 14.5.4] and thus embeds over \(K\) into \(L\). The rest follows from Corollary 3.2.15.
_Remark_.: Lemma 3.2.26 and Corollary 3.2.29 go through with the hypothesis "\(\Gamma\) is divisible" replaced by "\(K\) is henselian". The proofs are the same, using [159, 3.3] in place of [ADH, 14.5.1] in the proof of Lemma 3.2.26, and [159, 3.5] in place of [ADH, 14.5.4] in the proof of Corollary 3.2.29.
For \(r=1\) we can weaken the hypothesis of \(\omega\)-freeness in Corollary 3.2.29:
**Corollary 3.2.30**.: _Suppose \(K\) is \(\lambda\)-free and \(\Gamma\) is divisible, and \((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal of order \(r=1\) with a quasilinear refinement. Let \(L\) be a newtonian \(H\)-asymptotic extension of \(K\). Then there exists \(\widehat{b}\in L\) such that \(K\langle\widehat{b}\rangle\) is an immediate extension of \(K\) and \((P,\mathfrak{m},\widehat{b})\) is a hole in \(K\) equivalent to \((P,\mathfrak{m},\widehat{a})\)._ (_So if \((P,\mathfrak{m},\widehat{a})\) is also a hole in \(K\), then we have an embedding \(K\langle\widehat{a}\rangle\to L\) of valued differential fields over \(K\)._)
Proof.: Take a divergent pc-sequence \((a_{\rho})\) in \(K\) with \(a_{\rho}\rightsquigarrow\widehat{a}\). Then \(\operatorname{ndeg}_{\boldsymbol{a}}P=1\) for \(\boldsymbol{a}:=c_{K}(a_{\rho})\), by Corollary 3.2.25, and \(P\) is a minimal differential polynomial of \((a_{\rho})\) over \(K\), by [ADH, 11.4.13]. The equality \(\operatorname{ndeg}_{\boldsymbol{a}}P=1\) remains valid when passing from \(K\), \(\boldsymbol{a}\) to \(L\), \(c_{L}(a_{\rho})\), respectively, by Lemma 1.8.8. Hence [ADH, 14.1.10] yields \(\widehat{b}\in L\) such that \(P(\widehat{b})=0\) and \(a_{\rho}\rightsquigarrow\widehat{b}\), so \(v(\widehat{a}-a)=v(\widehat{b}-a)\) for all \(a\). Then \(K\langle\widehat{b}\rangle\) is an immediate extension of \(K\) by [ADH, 9.7.6], so \((P,\mathfrak{m},\widehat{b})\) is a hole in \(K\) equivalent to \((P,\mathfrak{m},\widehat{a})\). For the rest use Corollary 3.2.15.
**The linear part of a slot.** We define the **linear part** of \((P,\mathfrak{m},\widehat{a})\) to be the linear part \(L_{P_{\times\mathfrak{m}}}\in K[\partial]\) of \(P_{\times\mathfrak{m}}\). By [ADH, p. 242] and Lemma 1.1.10 we have
\[L_{P_{\times\mathfrak{m}}}\ =\ L_{P}\,\mathfrak{m}\ =\ \sum_{n=0}^{r}\frac{\partial P_{\times\mathfrak{m}}}{\partial Y^{(n)}}(0)\,\partial^{n}\ =\ \mathfrak{m}S_{P}(0)\partial^{r}+\text{lower order terms in $\partial$.}\]
The slot \((P,\mathfrak{m},\widehat{a})\) has the same linear part as each of its multiplicative conjugates.
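Here \(L_{P}\,\mathfrak{m}\) is a product in \(K[\partial]\), that is, the composition of \(L_{P}\) with multiplication by \(\mathfrak{m}\). For instance, for \(P=Y'\) we have \(L_{P}=\partial\) and \(P_{\times\mathfrak{m}}=(\mathfrak{m}Y)'=\mathfrak{m}Y'+\mathfrak{m}'Y\), so

\[L_{P_{\times\mathfrak{m}}}\ =\ \partial\,\mathfrak{m}\ =\ \mathfrak{m}\partial+\mathfrak{m}'\ =\ \mathfrak{m}\big{(}\partial+\mathfrak{m}^{\dagger}\big{)}.\]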
The linear part of a refinement \((P_{+a},\mathfrak{n},\widehat{a}-a)\) of \((P,\mathfrak{m},\widehat{a})\) is given by
\[L_{P_{+a,\times\mathfrak{n}}}\ =\ L_{P_{+a}}\mathfrak{n} \ =\ \sum_{m=0}^{r}\left(\sum_{n=m}^{r}\binom{n}{m}\mathfrak{n}^{(n-m)}\frac{\partial P}{\partial Y^{(n)}}(a)\right)\partial^{m}\] \[\ =\ \mathfrak{n}\,S_{P}(a)\,\partial^{r}+\text{lower order terms in }\partial.\]
(See [ADH, (5.1.1)].) By [ADH, 5.7.5] we have \((P^{\phi})_{d}=(P_{d})^{\phi}\) for \(d\in\mathbb{N}\); in particular \(L_{P^{\phi}}=(L_{P})^{\phi}\) and so \(\operatorname{order}(L_{P^{\phi}})=\operatorname{order}(L_{P})\). A particularly favorable situation occurs when \(L_{P}\) splits over a given differential field extension \(E\) of \(K\) (which includes requiring \(L_{P}\neq 0\)). Typically, \(E\) is an algebraic closure of \(K\). In any case, \(L_{P}\) splits over \(E\) iff \(L_{P\times_{\mathfrak{n}}}\) splits over \(E\), iff \(L_{P^{\phi}}\) splits over \(E^{\phi}\). Thus:
**Lemma 3.2.31**.: _Suppose \(\deg P=1\) and \(L_{P}\) splits over \(E\). Then the linear part of any refinement of \((P,\mathfrak{m},\widehat{a})\) and of any multiplicative conjugate of \((P,\mathfrak{m},\widehat{a})\) also splits over \(E\), and the linear part of any compositional conjugate of \((P,\mathfrak{m},\widehat{a})\) by an active \(\phi\) splits over \(E^{\phi}\)._
Let \(\boldsymbol{i}=(i_{0},\ldots,i_{r})\) range over \(\mathbb{N}^{1+r}\). As in [ADH, 4.2] we set
\[P_{(\boldsymbol{i})}\ :=\ \frac{P^{(\boldsymbol{i})}}{\boldsymbol{i}!}\qquad\text{where }P^{(\boldsymbol{i})}\ :=\ \frac{\partial^{|\boldsymbol{i}|}P}{\partial Y^{i_{0}}\cdots\,\partial\big{(}Y^{(r)}\big{)}^{i_{r}}}.\]
If \(|\boldsymbol{i}|=i_{0}+\cdots+i_{r}\geqslant 1\), then \(\operatorname{c}(P_{(\boldsymbol{i})})<\operatorname{c}(P)\). Note that for \(\boldsymbol{i}=(0,\ldots,0,1)\) we have \(P_{(\boldsymbol{i})}=S_{P}\neq 0\), since \(\operatorname{order}P=r\). We now aim for Corollary 3.2.34.
**Lemma 3.2.32**.: _Suppose that \((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal. Then \((P,\mathfrak{m},\widehat{a})\) has a refinement \((P_{+a},\mathfrak{n},\widehat{a}-a)\) such that for all \(\boldsymbol{i}\) with \(|\boldsymbol{i}|\geqslant 1\) and \(P_{(\boldsymbol{i})}\neq 0\),_
\[\operatorname{ndeg}\,(P_{(\boldsymbol{i})})_{+a,\times\mathfrak{n}}\ =\ 0.\]
Proof.: Let \(\boldsymbol{i}\) range over the (finitely many) elements of \(\mathbb{N}^{1+r}\) satisfying \(|\boldsymbol{i}|\geqslant 1\) and \(P_{(\boldsymbol{i})}\neq 0\). Each \(P_{(\boldsymbol{i})}\) has smaller complexity than \(P\), so \(P_{(\boldsymbol{i})}\notin Z(K,\widehat{a})\). Then \(Q:=\prod_{\boldsymbol{i}}P_{(\boldsymbol{i})}\notin Z(K,\widehat{a})\) by [ADH, 11.4.4], so Lemma 3.2.18 gives a refinement \((P_{+a},\mathfrak{n},\widehat{a}-a)\) of \((P,\mathfrak{m},\widehat{a})\) with \(\operatorname{ndeg}\,Q_{+a,\times\mathfrak{n}}=0\). Then \(\operatorname{ndeg}\,(P_{(\boldsymbol{i})})_{+a,\times\mathfrak{n}}=0\) for all \(\boldsymbol{i}\), by [ADH, remarks before 11.2.6].
From [ADH, (4.3.3)] we recall that \((P_{(\boldsymbol{i})})_{+a}=(P_{+a})_{(\boldsymbol{i})}\). Also recall that \((P_{+a})_{\boldsymbol{i}}=P_{(\boldsymbol{i})}(a)\) by Taylor expansion. In particular, if \(P_{(\boldsymbol{i})}=0\), then \((P_{+a})_{\boldsymbol{i}}=0\).
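For instance, for \(r=1\) and \(P=YY'\) we have \(P_{(1,0)}=Y'\), \(P_{(0,1)}=Y=S_{P}\), and \(P_{(1,1)}=1\), while

\[P_{+a}\ =\ (Y+a)(Y'+a')\ =\ YY'+a'Y+aY'+aa',\]

so \((P_{+a})_{(1,0)}=a'=P_{(1,0)}(a)\), \((P_{+a})_{(0,1)}=a=P_{(0,1)}(a)\), and \((P_{+a})_{(1,1)}=1=P_{(1,1)}(a)\), in accordance with the Taylor identities just recalled.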
**Lemma 3.2.33**.: _Suppose \((P_{+a},\mathfrak{n},\widehat{a}-a)\) refines \((P,\mathfrak{m},\widehat{a})\) and \(\boldsymbol{i}\) is such that \(|\boldsymbol{i}|\geqslant 1\), \(P_{(\boldsymbol{i})}\neq 0\), and \(\operatorname{ndeg}\,(P_{(\boldsymbol{i})})_{\times\mathfrak{m}}=0\). Then_
\[\operatorname{ndeg}\,(P_{(\boldsymbol{i})})_{+a,\times\mathfrak{n}}\ =\ 0,\qquad(P_{+a})_{ \boldsymbol{i}}\ \sim\ P_{\boldsymbol{i}}.\]
Proof.: Using [ADH, 11.2.4, 11.2.3(iii), 11.2.5] we get
\[\operatorname{ndeg}\,(P_{(\boldsymbol{i})})_{+a,\times\mathfrak{n}}\ =\ \operatorname{ndeg}\,(P_{(\boldsymbol{i})})_{+\widehat{a},\times \mathfrak{n}}\ \leqslant\ \operatorname{ndeg}\,(P_{(\boldsymbol{i})})_{+\widehat{a},\times\mathfrak{m}}\ =\ \operatorname{ndeg}\,(P_{(\boldsymbol{i})})_{\times \mathfrak{m}}\ =\ 0,\]
so \(\operatorname{ndeg}\,(P_{(\boldsymbol{i})})_{+a,\times\mathfrak{n}}=0\). Thus \(P_{(\boldsymbol{i})}\notin Z(K,\widehat{a})\), hence \((P_{+a})_{\boldsymbol{i}}=P_{(\boldsymbol{i})}(a)\sim P_{(\boldsymbol{i})}( \widehat{a})\) by [ADH, 11.4.3]; applying this to \(a=0\), \(\mathfrak{n}=\mathfrak{m}\) yields \(P_{\boldsymbol{i}}=P_{(\boldsymbol{i})}(0)\sim P_{(\boldsymbol{i})}(\widehat{a})\).
Combining Lemmas 3.2.32 and 3.2.33 gives:
**Corollary 3.2.34**.: _Every \(Z\)-minimal slot in \(K\) of order \(r\) has a refinement \((P,\mathfrak{m},\widehat{a})\) such that for all refinements \((P_{+a},\mathfrak{n},\widehat{a}-a)\) of \((P,\mathfrak{m},\widehat{a})\) and all \(\boldsymbol{i}\) with \(|\boldsymbol{i}|\geqslant 1\) and \(P_{(\boldsymbol{i})}\neq 0\) we have \((P_{+a})_{\boldsymbol{i}}\sim P_{\boldsymbol{i}}\)\((\)and thus \(\operatorname{order}L_{P_{+a}}=\operatorname{order}L_{P}=r)\)._
Here the condition "of order \(r\)" may seem irrelevant, but is forced on us because refinements preserve order and by our convention that \(P\) has order \(r\).
**Special slots.** The slot \((P,\mathfrak{m},\widehat{a})\) in \(K\) is said to be **special** if \(\widehat{a}/\mathfrak{m}\) is special over \(K\) in the sense of [ADH]: some nontrivial convex subgroup \(\Delta\) of \(\Gamma\) is cofinal in \(v\big{(}\frac{\widehat{a}}{\mathfrak{m}}-K\big{)}\). If \((P,\mathfrak{m},\widehat{a})\) is special, then so are \((bP,\mathfrak{m},\widehat{a})\) for \(b\neq 0\), any multiplicative conjugate of \((P,\mathfrak{m},\widehat{a})\), any compositional conjugate of \((P,\mathfrak{m},\widehat{a})\), and any slot in \(K\) equivalent to \((P,\mathfrak{m},\widehat{a})\). Also, by Lemma 1.6.1:
**Lemma 3.2.35**.: _If \((P,\mathfrak{m},\widehat{a})\) is special, then so is any refinement._
Here is our main source of special slots:
**Lemma 3.2.36**.: _Let \(K\) be \(r\)-linearly newtonian, and \(\omega\)-free if \(r>1\). Suppose \((P,\mathfrak{m},\widehat{a})\) is quasilinear, and \(Z\)-minimal or a hole in \(K\). Then \((P,\mathfrak{m},\widehat{a})\) is special._
Proof.: Use Lemma 3.2.14 to arrange that \((P,\mathfrak{m},\widehat{a})\) is a hole in \(K\). Next arrange \(\mathfrak{m}=1\) by replacing \((P,\mathfrak{m},\widehat{a})\) with \((P_{\times\mathfrak{m}},1,\widehat{a}/\mathfrak{m})\). So \(\operatorname{ndeg}P=1\), hence \(\widehat{a}\) is special over \(K\) by Proposition 1.6.12 (if \(r>1\)) and 1.6.18 (if \(r=1\)).
Next an approximation result used in the proof of Corollary 6.5.19 in Part 6:
**Lemma 3.2.37**.: _Suppose \(\mathfrak{m}=1\), \((P,1,\widehat{a})\) is special and \(Z\)-minimal, and \(\widehat{a}-a\preccurlyeq\mathfrak{n}\preccurlyeq 1\) for some \(a\). Then \(\widehat{a}-b\prec\mathfrak{n}^{r+1}\) for some \(b\), and \(P(b)\prec\mathfrak{n}P\) for any such \(b\)._
Proof.: Using Lemma 3.2.14 we arrange \(P(\widehat{a})=0\). The differential polynomial \(Q(Y):=\sum_{|\boldsymbol{i}|\geqslant 1}P_{(\boldsymbol{i})}(\widehat{a})Y^{ \boldsymbol{i}}\in\widehat{K}\{Y\}\) has order \(\leqslant r\) and \(\operatorname{mul}(Q)\geqslant 1\), and Taylor expansion yields, for all \(a\):
\[P(a)\ =\ P(\widehat{a})+\sum_{|\boldsymbol{i}|\geqslant 1}P_{(\boldsymbol{i})}( \widehat{a})(a-\widehat{a})^{\boldsymbol{i}}\ =\ Q(a-\widehat{a}).\]
Since \(\widehat{a}\) is special over \(K\), we have \(b\) with \(\widehat{a}-b\prec\mathfrak{n}^{r+1}\), and then by Lemma 1.1.13 we have \(Q(b-\widehat{a})\prec\mathfrak{n}Q\preccurlyeq\mathfrak{n}P\).
### The Normalization Theorem
_Throughout this section \(K\) is an \(H\)-asymptotic field with small derivation and with rational asymptotic integration. We set \(\Gamma:=v(K^{\times})\). The notational conventions introduced in the last section remain in force: \(a\), \(b\), \(f\), \(g\) range over \(K\); \(\phi\), \(\mathfrak{m}\), \(\mathfrak{n}\), \(\mathfrak{v}\), \(\mathfrak{w}\) over \(K^{\times}\). As at the end of Section 3.1 we shall frequently use for \(\mathfrak{v}\prec 1\) the coarsening of \(v\) by the convex subgroup \(\Delta(\mathfrak{v})=\big{\{}\gamma\in\Gamma:\,\gamma=o(v\mathfrak{v})\big{\}}\) of \(\Gamma\)._
_We fix a slot \((P,\mathfrak{m},\widehat{a})\) in \(K\) of order \(r\geqslant 1\), and set \(w:=\operatorname{wt}(P)\) (so \(w\geqslant r\geqslant 1\)). In the next subsections we introduce various conditions on \((P,\mathfrak{m},\widehat{a})\). These conditions will be shown to be related as follows:_
[Diagram: implications among the conditions on \((P,\mathfrak{m},\widehat{a})\) introduced below — strictly normal, normal, steep, quasilinear, deep.]

Thus "deep \(+\) strictly normal" yields the rest. The main results of this section are Theorem 3.3.33 and its variants 3.3.34, 3.3.36, and 3.3.48.
**Steep and deep slots.** In this subsection, if \(\operatorname{order}(L_{P_{\times{\mathfrak{m}}}})=r\), then we set
\[{\mathfrak{v}}\ :=\ {\mathfrak{v}}(L_{P_{\times{\mathfrak{m}}}}).\]
The slot \((P,{\mathfrak{m}},\widehat{a})\) in \(K\) is said to be **steep** if \(\operatorname{order}(L_{P_{\times{\mathfrak{m}}}})=r\) and \({\mathfrak{v}}\prec^{\flat}1\). Thus
\[(P,{\mathfrak{m}},\widehat{a})\ \text{is steep}\ \Longleftrightarrow\ (P_{\times{ \mathfrak{n}}},{\mathfrak{m}}/{\mathfrak{n}},\widehat{a}/{\mathfrak{n}})\ \text{is steep}\ \Longleftrightarrow\ (bP,{\mathfrak{m}},\widehat{a})\ \text{is steep}\]
for \(b\neq 0\). If \((P,{\mathfrak{m}},\widehat{a})\) is steep, then so is any slot in \(K\) equivalent to \((P,{\mathfrak{m}},\widehat{a})\). If \((P,{\mathfrak{m}},\widehat{a})\) is steep, then so is any slot \((P^{\phi},{\mathfrak{m}},\widehat{a})\) in \(K^{\phi}\) for active \(\phi\preccurlyeq 1\), by Lemma 3.1.19, and thus \(\operatorname{nwt}(L_{P_{\times{\mathfrak{m}}}})<r\). Below we tacitly use that if \((P,{\mathfrak{m}},\widehat{a})\) is steep, then
\[{\mathfrak{n}}\asymp_{\Delta({\mathfrak{v}})}{\mathfrak{v}}\ \Longrightarrow\ [{ \mathfrak{n}}]=[{\mathfrak{v}}],\qquad{\mathfrak{n}}\prec 1,\ [{\mathfrak{n}}]=[{ \mathfrak{v}}]\ \Longrightarrow\ {\mathfrak{n}}\prec^{\flat}1.\]
Note also that if \((P,{\mathfrak{m}},\widehat{a})\) is steep, then \({\mathfrak{v}}^{\dagger}\asymp_{\Delta({\mathfrak{v}})}1\) by [ADH, 9.2.10(iv)].
**Lemma 3.3.1**.: _Suppose \((P,{\mathfrak{m}},\widehat{a})\) is steep, \(\widehat{a}\prec{\mathfrak{n}}\preccurlyeq{\mathfrak{m}}\) and \([{\mathfrak{n}}/{\mathfrak{m}}]\leqslant[{\mathfrak{v}}]\). Then_
\[\operatorname{order}(L_{P_{\times{\mathfrak{n}}}})\ =\ r,\qquad{\mathfrak{v}}(L_{P_{ \times{\mathfrak{n}}}})\ \asymp_{\Delta({\mathfrak{v}})}\ {\mathfrak{v}},\]
_so \((P,{\mathfrak{n}},\widehat{a})\) is a steep refinement of \((P,{\mathfrak{m}},\widehat{a})\)._
Proof.: Replace \((P,{\mathfrak{m}},\widehat{a})\) and \({\mathfrak{n}}\) by \((P_{\times{\mathfrak{m}}},1,\widehat{a}/{\mathfrak{m}})\) and \({\mathfrak{n}}/{\mathfrak{m}}\) to arrange \({\mathfrak{m}}=1\). Set \(L:=L_{P}\) and \(\widetilde{L}:=L_{P_{\times{\mathfrak{n}}}}\). Then \(\widetilde{L}=L\mathfrak{n}\asymp_{\Delta({\mathfrak{v}})}{\mathfrak{n}}L\) by [ADH, 6.1.3]. Hence
\[\widetilde{L}_{r}\ =\ {\mathfrak{n}}L_{r}\ \asymp\ {\mathfrak{n}}{\mathfrak{v}}L\ \asymp_{\Delta({\mathfrak{v}})}\ {\mathfrak{v}}\widetilde{L}.\]
Since \({\mathfrak{v}}(\widetilde{L})\widetilde{L}\asymp\widetilde{L}_{r}\), this gives \({\mathfrak{v}}(\widetilde{L})\widetilde{L}\asymp_{\Delta({\mathfrak{v}})}{ \mathfrak{v}}\widetilde{L}\), and thus \({\mathfrak{v}}(\widetilde{L})\asymp_{\Delta({\mathfrak{v}})}{\mathfrak{v}}\).
If \((P,{\mathfrak{m}},\widehat{a})\) is steep and linear, then \(L_{P_{+a,\times{\mathfrak{m}}}}=L_{P_{\times{\mathfrak{m}},+(a/{\mathfrak{m}}) }}=L_{P_{\times{\mathfrak{m}}}}\), so any refinement \((P_{+a},{\mathfrak{m}},\widehat{a}-a)\) of \((P,{\mathfrak{m}},\widehat{a})\) is also steep and linear.
**Lemma 3.3.2**.: _Suppose \(\operatorname{order}L_{P_{\times{\mathfrak{m}}}}=r\). Then \((P,{\mathfrak{m}},\widehat{a})\) has a refinement \((P,{\mathfrak{n}},\widehat{a})\) such that \(\operatorname{nwt}L_{P_{\times{\mathfrak{n}}}}=0\), and \((P^{\phi},{\mathfrak{n}},\widehat{a})\) is steep, eventually._
Proof.: Replacing \((P,{\mathfrak{m}},\widehat{a})\) by \((P_{\times{\mathfrak{m}}},1,\widehat{a}/{\mathfrak{m}})\) we arrange \({\mathfrak{m}}=1\). Take \({\mathfrak{n}}_{1}\) with \(\widehat{a}\prec{\mathfrak{n}}_{1}\prec 1\). Then \(\operatorname{order}\,(P_{1})_{\times{\mathfrak{n}}_{1}}=\operatorname{ order}P_{1}=\operatorname{order}L_{P}=r\), and thus \((P_{1})_{\times{\mathfrak{n}}_{1}}\neq 0\). So [ADH, 11.3.6] applied to \((P_{1})_{\times{\mathfrak{n}}_{1}}\) in place of \(P\) yields an \({\mathfrak{n}}\) with \({\mathfrak{n}}_{1}\prec{\mathfrak{n}}\prec 1\) and \(\operatorname{nwt}(P_{1})_{\times{\mathfrak{n}}}=0\), so \(\operatorname{nwt}L_{P_{\times{\mathfrak{n}}}}=0\). Hence by Lemma 3.1.20, \((P^{\phi},{\mathfrak{n}},\widehat{a})\) is steep, eventually.
Recall that the separant \(S_{P}=\partial P/\partial Y^{(r)}\) of \(P\) has lower complexity than \(P\). Below we sometimes use the identity \(S_{P_{\times{\mathfrak{m}}}^{\phi}}=\phi^{r}(S_{P_{\times{\mathfrak{m}}}})^{\phi}\) from Lemma 1.1.10.
The slot \((P,{\mathfrak{m}},\widehat{a})\) in \(K\) is said to be **deep** if it is steep and for all active \(\phi\preccurlyeq 1\),
* \(\operatorname{ddeg}S_{P_{\times{\mathfrak{m}}}^{\phi}}=0\) (hence \(\operatorname{ndeg}S_{P_{\times{\mathfrak{m}}}}=0\)), and
* \(\operatorname{ddeg}P_{\times{\mathfrak{m}}}^{\phi}=1\) (hence \(\operatorname{ndeg}P_{\times{\mathfrak{m}}}=1\)).
If \(\operatorname{deg}P=1\), then (D1) is automatic, for all active \(\phi\preccurlyeq 1\). If \((P,{\mathfrak{m}},\widehat{a})\) is deep, then so are \((P_{\times{\mathfrak{n}}},{\mathfrak{m}}/{\mathfrak{n}},\widehat{a}/{\mathfrak{n}})\) and \((bP,{\mathfrak{m}},\widehat{a})\) for \(b\neq 0\), as well as every slot in \(K\) equivalent to \((P,{\mathfrak{m}},\widehat{a})\) and the slot \((P^{\phi},{\mathfrak{m}},\widehat{a})\) in \(K^{\phi}\) for active \(\phi\preccurlyeq 1\). Every deep slot in \(K\) is quasilinear, by (D2). If \(\operatorname{deg}P=1\), then \((P,{\mathfrak{m}},\widehat{a})\) is quasilinear iff \((P^{\phi},{\mathfrak{m}},\widehat{a})\) is deep for some active \(\phi\preccurlyeq 1\). Moreover, if \((P,{\mathfrak{m}},\widehat{a})\) is a deep hole in \(K\), then \(\operatorname{dmul}P_{\times{\mathfrak{m}}}^{\phi}=1\) for all active \(\phi\preccurlyeq 1\), by (D2) and Lemma 3.2.9.
_Example 3.3.3_.: Suppose \(P=Y^{\prime}+gY-u\) where \(g,u\in K\) and \(\mathfrak{m}=1\), \(r=1\). Set \(L:=L_{P}=\partial+g\) and \(\mathfrak{v}:=\mathfrak{v}(L)\). Then \(\mathfrak{v}=1\) if \(g\preccurlyeq 1\), and \(\mathfrak{v}=1/g\) if \(g\succ 1\). Thus
\[(P,1,\widehat{a})\text{ is steep}\quad\Longleftrightarrow\quad g\succ^{ \flat}1\quad\Longleftrightarrow\quad g\succ 1\text{ and }g^{\dagger}\suclyeq 1.\]
Note that \((P,1,\widehat{a})\) is steep iff \(L\) is steep as defined in Section 1.5. Also,
\[(P,1,\widehat{a})\text{ is deep}\quad\Longleftrightarrow\quad(P,1,\widehat{a}) \text{ is steep and }g\suclyeq u.\]
Hence if \(u=0\), then \((P,1,\widehat{a})\) is deep iff it is steep.
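For a concrete instance of Example 3.3.3 (an illustration only; the particular \(K\), \(g\), \(u\) below are hypothetical choices, not used elsewhere): let \(K\supseteq\mathbb{R}(\mathrm{e}^{x})\) be an \(H\)-subfield of \(\mathbb{T}\) and take \(g=\mathrm{e}^{x}\), \(u=1\), so \(P=Y^{\prime}+\mathrm{e}^{x}\,Y-1\). Then

\[L\ =\ \partial+\mathrm{e}^{x},\qquad\mathfrak{v}\ =\ \mathrm{e}^{-x}\ \prec^{\flat}\ 1,\qquad g^{\dagger}\ =\ 1\ \succcurlyeq\ 1,\]

so \((P,1,\widehat{a})\) is steep, and since \(g=\mathrm{e}^{x}\succcurlyeq 1=u\) it is also deep. Taking instead \(u=\mathrm{e}^{2x}\succ g\) gives a slot that is steep but not deep.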
**Lemma 3.3.4**.: _For steep \((P,\mathfrak{m},\widehat{a})\), the following are equivalent:_
* \((P^{\phi},\mathfrak{m},\widehat{a})\) _is deep, eventually;_
* \(\operatorname{ndeg}S_{P_{\times\mathfrak{m}}}=0\) _and_ \(\operatorname{ndeg}P_{\times\mathfrak{m}}=1\)_._
Note that if \(\operatorname{ddeg}S_{P_{\times\mathfrak{m}}}=0\) or \(\operatorname{ndeg}S_{P_{\times\mathfrak{m}}}=0\), then \(S_{P_{\times\mathfrak{m}}}(0)\neq 0\), so \(\operatorname{order}L_{P_{\times\mathfrak{m}}}=r\).
**Lemma 3.3.5**.: _Suppose \((P_{+a},\mathfrak{n},\widehat{a}-a)\) refines the hole \((P,\mathfrak{m},\widehat{a})\) in \(K\). Then:_
* \(\operatorname{ddeg}S_{P_{\times\mathfrak{m}}}=0\implies\operatorname{ddeg}S_{P _{+a,\times\mathfrak{n}}}=0\)_;_
* \(\operatorname{ddeg}P_{\times\mathfrak{m}}=1\implies\operatorname{ddeg}P_{+a, \times\mathfrak{n}}=1\)_;_
* \(\operatorname{ndeg}S_{P_{\times\mathfrak{m}}}=0\implies S_{P}(a)\sim S_{P}(0)\)_._
_Thus if \((P,\mathfrak{m},\widehat{a})\) is deep and \((P_{+a},\mathfrak{n},\widehat{a}-a)\) is steep, then \((P_{+a},\mathfrak{n},\widehat{a}-a)\) is deep._
Proof.: Suppose \(\operatorname{ddeg}S_{P_{\times\mathfrak{m}}}=0\). Then \(\operatorname{ddeg}S_{P_{+a,\times\mathfrak{n}}}=0\) follows from
\[\operatorname{ddeg}S_{P_{+a,\times\mathfrak{n}}}\ =\ \operatorname{ddeg}(S_{P})_{+a, \times\mathfrak{n}}\text{ and }\operatorname{ddeg}(S_{P})_{\times\mathfrak{m}}\ =\ \operatorname{ddeg}S_{P_{\times\mathfrak{m}}}\]
(consequences of Lemma 1.1.10), and
\[\operatorname{ddeg}\left(S_{P}\right)_{+a,\times\mathfrak{n}}\ =\ \operatorname{ddeg} \left(S_{P}\right)_{+\widehat{a},\times\mathfrak{n}}\ \leqslant\ \operatorname{ddeg} \left(S_{P}\right)_{+\widehat{a},\times\mathfrak{m}}\ =\ \operatorname{ddeg} \left(S_{P}\right)_{\times\mathfrak{m}}\]
which holds by [ADH, 6.6.7]. This proves (i). Corollary 3.2.20 yields (ii), and (iii) is contained in Lemma 3.2.33.
Lemmas 3.2.14 and 3.3.5 give:
**Corollary 3.3.6**.: _If \((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal and deep, then each steep refinement of \((P,\mathfrak{m},\widehat{a})\) is deep._
Here is another sufficient condition on refinements of deep holes to remain deep:
**Lemma 3.3.7**.: _Suppose \((P,\mathfrak{m},\widehat{a})\) is a deep hole in \(K\), and \((P_{+a},\mathfrak{n},\widehat{a}-a)\) refines \((P,\mathfrak{m},\widehat{a})\) with \([\mathfrak{n}/\mathfrak{m}]\leqslant[\mathfrak{v}]\). Then \((P_{+a},\mathfrak{n},\widehat{a}-a)\) is deep with \(\mathfrak{v}(L_{P_{+a,\times\mathfrak{n}}})\asymp_{\Delta(\mathfrak{v})} \mathfrak{v}\)._
Proof.: From \((P,\mathfrak{m},\widehat{a})\) we pass to the hole \((P_{+a},\mathfrak{m},\widehat{a}-a)\) and then to \((P_{+a},\mathfrak{n},\widehat{a}-a)\). We first show that \(\operatorname{order}L_{P_{+a,\times\mathfrak{m}}}=r\) and \(\mathfrak{v}(L_{P_{+a,\times\mathfrak{m}}})\sim\mathfrak{v}\), from which it follows that \((P_{+a},\mathfrak{m},\widehat{a}-a)\) is steep, hence deep by Lemma 3.3.5. By Corollary 3.2.20,
\[\operatorname{ddeg}P_{+a,\times\mathfrak{m}}\ =\ \operatorname{dmul}P_{+a, \times\mathfrak{m}}\ =\ 1,\]
so \((P_{+a,\times\mathfrak{m}})_{1}\sim P_{+a,\times\mathfrak{m}}\). Also
\[(P_{\times\mathfrak{m}})_{1}\sim P_{\times\mathfrak{m}}\sim P_{\times \mathfrak{m},+(a/\mathfrak{m})}=P_{+a,\times\mathfrak{m}},\]
by [ADH, 4.5.1(i)], and thus \((P_{+a,\times\mathfrak{m}})_{1}\sim(P_{\times\mathfrak{m}})_{1}\). By Lemmas 1.1.10 and 3.3.5(iii),
\[S_{P_{+a,\times\mathfrak{m}}}(0)\ =\ \mathfrak{m}S_{P}(a)\ \sim\ \mathfrak{m}S_{P}(0)\ =\ S_{P_{\times \mathfrak{m}}}(0),\]
so \(S_{P_{+a,\times\mathfrak{m}}}(0)\sim S_{P_{\times\mathfrak{m}}}(0)\); together with \((P_{+a,\times\mathfrak{m}})_{1}\sim(P_{\times\mathfrak{m}})_{1}\) this gives \(\mathfrak{v}(L_{P_{+a,\times\mathfrak{m}}})\sim\mathfrak{v}\) as promised.
Next, Lemma 3.3.1 applied to \((P_{+a},\mathfrak{m},\widehat{a}-a)\) in the role of \((P,\mathfrak{m},\widehat{a})\) gives that \((P_{+a},\mathfrak{n},\widehat{a}-a)\) is steep with \(\mathfrak{v}(L_{P_{+a,\times\mathfrak{n}}})\asymp_{\Delta(\mathfrak{v})} \mathfrak{v}\). Now Lemma 3.3.5 applied
to \((P_{+a},\mathfrak{m},\widehat{a}-a)\) and \((P_{+a},\mathfrak{n},\widehat{a}-a)\) in the role of \((P,\mathfrak{m},\widehat{a})\) and \((P_{+a},\mathfrak{n},\widehat{a}-a)\), respectively, gives that \((P_{+a},\mathfrak{n},\widehat{a}-a)\) is deep.
Lemmas 3.2.14 and 3.3.7 give a version for \(Z\)-minimal slots:
**Corollary 3.3.8**.: _Suppose \((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal and deep, and \((P_{+a},\mathfrak{n},\widehat{a}-a)\) refines \((P,\mathfrak{m},\widehat{a})\) with \([\mathfrak{n}/\mathfrak{m}]\leqslant[\mathfrak{v}]\), where \(\mathfrak{v}:=\mathfrak{v}(L_{P_{\times\mathfrak{m}}})\). Then \((P_{+a},\mathfrak{n},\widehat{a}-a)\) is deep with \(\mathfrak{v}(L_{P_{+a,\times\mathfrak{n}}})\asymp_{\Delta(\mathfrak{v})}\mathfrak{v}\)._
Next we turn to the task of turning \(Z\)-minimal slots into deep ones.
**Lemma 3.3.9**.: _Every quasilinear \(Z\)-minimal slot in \(K\) of order \(r\) has a refinement \((P,\mathfrak{m},\widehat{a})\) such that:_
* \(\operatorname{ndeg}\,(P_{(\boldsymbol{i})})_{\times\mathfrak{m}}=0\) _for all_ \(\boldsymbol{i}\) _with_ \(|\boldsymbol{i}|\geqslant 1\) _and_ \(P_{(\boldsymbol{i})}\neq 0\)_;_
* \(\operatorname{ndeg}\,P_{\times\mathfrak{m}}=\operatorname{nmul}P_{\times \mathfrak{m}}=1\)_, and_
* \(\operatorname{nwt}L_{P_{\times\mathfrak{m}}}=0\)_._
Proof.: By Corollary 3.2.22, any quasilinear \((P,\mathfrak{m},\widehat{a})\) satisfies (ii). Any refinement of a quasilinear \((P,\mathfrak{m},\widehat{a})\) remains quasilinear, by Corollary 3.2.23. By Lemma 3.2.32 and a subsequent remark any quasilinear \(Z\)-minimal slot in \(K\) of order \(r\) can be refined to a quasilinear \((P,\mathfrak{m},\widehat{a})\) that satisfies (i), and by Lemma 3.2.33, any further refinement of such \((P,\mathfrak{m},\widehat{a})\) continues to satisfy (i). Thus to prove the lemma, assume we are given a quasilinear \((P,\mathfrak{m},\widehat{a})\) satisfying (i); it is enough to show that then \((P,\mathfrak{m},\widehat{a})\) has a refinement \((P,\mathfrak{n},\widehat{a})\) satisfying (iii) with \(\mathfrak{n}\) instead of \(\mathfrak{m}\) (and thus also (i) and (ii) with \(\mathfrak{n}\) instead of \(\mathfrak{m}\)).
Take \(\widetilde{\mathfrak{m}}\) with \(\widehat{a}\prec\widetilde{\mathfrak{m}}\prec\mathfrak{m}\). Then \((P_{\times\widetilde{\mathfrak{m}}})_{1}\neq 0\) by (ii), so [ADH, 11.3.6] applied to \((P_{1})_{\times\widetilde{\mathfrak{m}}}\) in place of \(P\) yields an \(\mathfrak{n}\) with \(\widetilde{\mathfrak{m}}\prec\mathfrak{n}\prec\mathfrak{m}\) and \(\operatorname{nwt}\,(P_{1})_{\times\mathfrak{n}}=0\). Hence the refinement \((P,\mathfrak{n},\widehat{a})\) of \((P,\mathfrak{m},\widehat{a})\) satisfies (iii) with \(\mathfrak{n}\) instead of \(\mathfrak{m}\).
**Corollary 3.3.10**.: _Every quasilinear \(Z\)-minimal slot in \(K\) of order \(r\) has a refinement \((P,\mathfrak{m},\widehat{a})\) such that \(\operatorname{nwt}L_{P_{\times\mathfrak{m}}}=0\), and \((P^{\phi},\mathfrak{m},\widehat{a})\) is deep, eventually._
Proof.: Given a quasilinear \(Z\)-minimal slot in \(K\) of order \(r\), we take a refinement \((P,\mathfrak{m},\widehat{a})\) as in Lemma 3.3.9. Then \(\operatorname{ndeg}\,S_{P_{\times\mathfrak{m}}}=0\) by (i) of that lemma, so \(\operatorname{order}L_{P_{\times\mathfrak{m}}}=r\) by the remark that precedes Lemma 3.3.5. Then (iii) of Lemma 3.3.9 and Lemma 3.1.20 give that \((P^{\phi},\mathfrak{m},\widehat{a})\) is steep, eventually. Using now \(\operatorname{ndeg}\,S_{P_{\times\mathfrak{m}}}=0\) and (ii) of Lemma 3.3.9 we obtain from Lemma 3.3.4 that \((P^{\phi},\mathfrak{m},\widehat{a})\) is deep, eventually.
Lemma 3.2.26 and the previous lemma and its corollary now yield:
**Lemma 3.3.11**.: _Suppose \(K\) is \(\mathrm{d}\)-valued and \(\omega\)-free, and \(\Gamma\) is divisible. Then every \(Z\)-minimal slot in \(K\) of order \(r\) has a refinement \((P,\mathfrak{m},\widehat{a})\) satisfying (i)-(iii) in Lemma 3.3.9._
**Corollary 3.3.12**.: _Suppose \(K\) is \(\mathrm{d}\)-valued and \(\omega\)-free, and \(\Gamma\) is divisible. Then every \(Z\)-minimal slot in \(K\) of order \(r\) has a quasilinear refinement \((P,\mathfrak{m},\widehat{a})\) such that \(\operatorname{nwt}L_{P_{\times\mathfrak{m}}}=0\), and \((P^{\phi},\mathfrak{m},\widehat{a})\) is deep, eventually._
**Approximating \(Z\)-minimal slots.** In this subsection we set, as before,
\[\mathfrak{v}\ :=\ \mathfrak{v}(L_{P_{\times\mathfrak{m}}}),\]
provided \(L_{P_{\times\mathfrak{m}}}\) has order \(r\). The next lemma is a key approximation result.
**Lemma 3.3.13**.: _Suppose \((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal and steep, and_
\[\operatorname{ddeg}P_{\times\mathfrak{m}}\ =\ \operatorname{ndeg}P_{\times\mathfrak{m}}\ =\ 1,\qquad\operatorname{ddeg}S_{P_{\times\mathfrak{m}}}\ =\ 0.\]
_Then there exists an \(a\) such that \(\widehat{a}-a\prec_{\Delta(\mathfrak{v})}\mathfrak{m}\)._
Proof.: We can arrange \(\mathfrak{m}=1\) and \(P\asymp 1\). Then \(\operatorname{ddeg}P=1\) gives \(P_{1}\asymp 1\), so \(S_{P}(0)\asymp\mathfrak{v}\). Take \(Q,R_{1},\ldots,R_{n}\in K\{Y\}\) (\(n\geqslant 1\)) of order \(<r\) such that
\[P\ =\ Q+R_{1}Y^{(r)}+\cdots+R_{n}(Y^{(r)})^{n},\qquad S_{P}\ =\ R_{1}+\cdots+nR_{n}(Y^{(r)})^{n-1}.\]
Then \(R_{1}(0)=S_{P}(0)\asymp\mathfrak{v}\). As \(\operatorname{ddeg}S_{P}=0\), this gives \(S_{P}\sim R_{1}(0)\), hence
\[R\ :=\ P-Q\ \sim\ R_{1}(0)Y^{(r)}\ \asymp\ \mathfrak{v}\ \prec_{\Delta( \mathfrak{v})}\ 1\ \asymp\ P,\]
so \(P\sim_{\Delta(\mathfrak{v})}Q\). Thus \(Q\neq 0\), and \(Q\notin Z(K,\widehat{a})\) because \(\operatorname{order}Q<r\). Now Lemma 3.2.18 gives a refinement \((P_{+a},\mathfrak{n},\widehat{a}-a)\) of \((P,1,\widehat{a})\) such that \(\operatorname{ndeg}Q_{+a,\times\mathfrak{n}}=0\) and \(\mathfrak{n}\prec 1\). We claim that then \(\widehat{a}-a\prec_{\Delta(\mathfrak{v})}1\). (Establishing this claim finishes the proof.) Suppose the claim is false. Then \(\widehat{a}-a\asymp_{\Delta(\mathfrak{v})}1\), so \(\mathfrak{n}\asymp_{\Delta(\mathfrak{v})}1\), hence \(Q_{+a,\times\mathfrak{n}}\asymp_{\Delta(\mathfrak{v})}Q_{+a}\asymp Q\) by [ADH, 4.5.1]. Likewise, \(R_{+a,\times\mathfrak{n}}\asymp_{\Delta(\mathfrak{v})}R\). Using \(P_{+a,\times\mathfrak{n}}=Q_{+a,\times\mathfrak{n}}+R_{+a,\times\mathfrak{n}}\) gives \(Q_{+a,\times\mathfrak{n}}\sim_{\Delta(\mathfrak{v})}P_{+a,\times\mathfrak{n}}\), so \(Q_{+a,\times\mathfrak{n}}\sim^{\flat}P_{+a,\times\mathfrak{n}}\). Then \(\operatorname{ndeg}Q_{+a,\times\mathfrak{n}}=\operatorname{ndeg}P_{+a,\times \mathfrak{n}}=1\) by Lemma 1.8.2 and Corollary 3.2.23, a contradiction.
Lemmas 3.2.9 and 3.3.13, and a remark following the definition of _deep_ give:
**Corollary 3.3.14**.: _If \((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal, steep, and linear, then there exists an \(a\) such that \(\widehat{a}-a\prec_{\Delta(\mathfrak{v})}\mathfrak{m}\)._
**Corollary 3.3.15**.: _Suppose \((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal, deep, and special. Then for all \(n\geqslant 1\) there is an \(a\) with \(\widehat{a}-a\prec\mathfrak{v}^{n}\mathfrak{m}\)._
Proof.: We arrange \(\mathfrak{m}=1\) in the usual way and set \(\delta:=v(\mathfrak{v})\), so \(\delta>0\). Since \((P,1,\widehat{a})\) is special, we have the convex subgroup \(\Delta\) of \(\Gamma\) that is cofinal in \(v(\widehat{a}-K)\). Lemma 3.3.13 gives an element \(\gamma\in v(\widehat{a}-K)\) with \(m\gamma\geqslant\delta\) for some \(m\geqslant 1\). Take \(\delta^{\prime}\in\Delta\) with \(\delta^{\prime}\geqslant\gamma\); then \(0<\delta\leqslant m\gamma\leqslant m\delta^{\prime}\in\Delta\), so \(\delta\in\Delta\) by convexity, and thus \(n\delta\in\Delta\subseteq v(\widehat{a}-K)\) for all \(n\geqslant 1\). Hence \(v(\widehat{a}-K)\) contains for every \(n\geqslant 1\) an element \(>n\delta\); that is, for every \(n\geqslant 1\) there is an \(a\) with \(\widehat{a}-a\prec\mathfrak{v}^{n}\).
Combining Lemma 3.2.36 with Corollary 3.3.15 yields:
**Corollary 3.3.16**.: _If \(K\) is \(r\)-linearly newtonian, \(\omega\)-free if \(r>1\), and \((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal and deep, then for all \(n\geqslant 1\) there is an \(a\) such that \(\widehat{a}-a\prec\mathfrak{v}^{n}\mathfrak{m}\)._
**Normal slots.** We say that our slot \((P,\mathfrak{m},\widehat{a})\) in \(K\), with linear part \(L\), is **normal** if \(\operatorname{order}L=r\) and, with \(\mathfrak{v}:=\mathfrak{v}(L)\) and \(w:=\operatorname{wt}(P)\),
* \(\mathfrak{v}\prec^{\flat}1\);
* \((P_{\times\mathfrak{m}})_{>1}\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}( P_{\times\mathfrak{m}})_{1}\).
Note that then \(\mathfrak{v}\prec 1\), \(\operatorname{dwt}(L)<r\), \((P,\mathfrak{m},\widehat{a})\) is steep, and
\[P_{\times\mathfrak{m}}\sim_{\Delta(\mathfrak{v})}P(0)+(P_{\times\mathfrak{m}})_ {1}\qquad\text{ (so $\operatorname{ddeg}P_{\times\mathfrak{m}}\leqslant 1$)}. \tag{3.3.1}\]
If \(\operatorname{order}L=r\), \(\mathfrak{v}:=\mathfrak{v}(L)\), and \(L\) is monic, then \((P_{\times\mathfrak{m}})_{1}\asymp\mathfrak{v}^{-1}\), so that (N2) is then equivalent to: \((P_{\times\mathfrak{m}})_{>1}\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w}\). If \(\deg P=1\), then \(\operatorname{order}L=r\) and (N2) automatically holds, hence \((P,\mathfrak{m},\widehat{a})\) is normal iff it is steep. Thus by Lemma 3.1.20:
**Lemma 3.3.17**.: _If \(\deg P=1\) and \(\operatorname{nwt}(L)<r\), then \((P^{\phi},\mathfrak{m},\widehat{a})\) is normal, eventually._
If \((P,\mathfrak{m},\widehat{a})\) is normal, then so are \((P_{\times\mathfrak{n}},\mathfrak{m}/\mathfrak{n},\widehat{a}/\mathfrak{n})\) and \((bP,\mathfrak{m},\widehat{a})\) for \(b\neq 0\). In particular, \((P,\mathfrak{m},\widehat{a})\) is normal iff \((P_{\times\mathfrak{m}},1,\widehat{a}/\mathfrak{m})\) is normal. If \((P,\mathfrak{m},\widehat{a})\) is normal, then so is any equivalent slot. Hence by (3.3.1) and Lemmas 3.2.9 and 3.2.14:
**Lemma 3.3.18**.: _If \((P,\mathfrak{m},\widehat{a})\) is normal, and \((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal or is a hole in \(K\), then \(\operatorname{ddeg}P_{\times\mathfrak{m}}=\operatorname{dmul}P_{\times \mathfrak{m}}=1\)._
_Example_.: Let \(K\supseteq\mathbb{R}(\mathrm{e}^{x})\) be an \(H\)-subfield of \(\mathbb{T}\), \(\mathfrak{m}=1\), \(r=2\). If \(P=D+R\) where
\[D\ =\ \mathrm{e}^{-x}\,Y^{\prime\prime}-Y,\qquad R\ =\ f+\mathrm{e}^{-4x}\,Y^{5} \quad(f\in K),\]
then \(\mathfrak{v}=-\,\mathrm{e}^{-x}\prec^{\flat}1\), \(P_{1}=D\sim-Y\), \(w=2\), and \(P_{>1}=\mathrm{e}^{-4x}\,Y^{5}\prec_{\Delta(\mathfrak{v})}\mathrm{e}^{-3x}P_{1}\), so \((P,1,\widehat{a})\) is normal. However, if \(P=D+S\) with \(D\) as above and \(S=f+\mathrm{e}^{-3x}\,Y^{5}\) (\(f\in K\)), then \(P_{>1}=\mathrm{e}^{-3x}\,Y^{5}\succcurlyeq_{\Delta(\mathfrak{v})}\mathrm{e}^{ -3x}P_{1}\), so \((P,1,\widehat{a})\) is not normal.
**Lemma 3.3.19**.: _Suppose \(\operatorname{order}(L)=r\) and \(\mathfrak{v}\) is such that (N1) and (N2) hold, and \(\mathfrak{v}(L)\asymp_{\Delta(\mathfrak{v})}\mathfrak{v}\). Then \((P,\mathfrak{m},\widehat{a})\) is normal._
Proof.: Put \(\mathfrak{w}:=\mathfrak{v}(L)\). Then \([\mathfrak{w}]=[\mathfrak{v}]\), and so \(\mathfrak{v}\prec^{\flat}1\) gives \(\mathfrak{w}\prec^{\flat}1\). Also,
\[(P_{\times\mathfrak{m}})_{>1}\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}( P_{\times\mathfrak{m}})_{1}\asymp_{\Delta(\mathfrak{v})}\mathfrak{w}^{w+1}(P_{ \times\mathfrak{m}})_{1}.\]
Hence (N1), (N2) hold with \(\mathfrak{w}\) in place of \(\mathfrak{v}\).
**Lemma 3.3.20**.: _Suppose \((P,\mathfrak{m},\widehat{a})\) is normal and \(\phi\preccurlyeq 1\) is active. Then the slot \((P^{\phi},\mathfrak{m},\widehat{a})\) in \(K^{\phi}\) is normal._
Proof.: We arrange \(\mathfrak{m}=1\) and put \(\mathfrak{v}:=\mathfrak{v}(L)\), \(\mathfrak{w}:=\mathfrak{v}(L_{P^{\phi}})\). Now \(L_{P^{\phi}}=L^{\phi}\), so \(\mathfrak{v}\asymp_{\Delta(\mathfrak{v})}\mathfrak{w}\) and \(\mathfrak{v}\prec_{\phi}^{\flat}1\) by Lemma 3.1.19. Using [ADH, 11.1.1], \([\phi]<[\mathfrak{v}]\), and (N2), we have
\[(P^{\phi})_{>1}\ =\ (P_{>1})^{\phi}\ \asymp_{\Delta(\mathfrak{v})}\ P_{>1}\ \prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}P_{1}\ \asymp_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}P_{1}^{\phi},\]
which by Lemma 3.3.19 applied to \((P^{\phi},1,\widehat{a})\) in the role of \((P,\mathfrak{m},\widehat{a})\) gives that \((P^{\phi},1,\widehat{a})\) is normal.
**Corollary 3.3.21**.: _Suppose \((P,\mathfrak{m},\widehat{a})\) is normal. Then \((P,\mathfrak{m},\widehat{a})\) is quasilinear._
Proof.: Lemma 3.2.21 gives \(\operatorname{ndeg}P_{\times\mathfrak{m}}\geqslant 1\). The parenthetical remark after (3.3.1) above and Lemma 3.3.20 give \(\operatorname{ndeg}P_{\times\mathfrak{m}}\leqslant 1\).
Combining Lemmas 3.3.18 and 3.3.20 yields:
**Corollary 3.3.22**.: _If \((P,\mathfrak{m},\widehat{a})\) is normal and linear, and \((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal or a hole in \(K\), then \((P,\mathfrak{m},\widehat{a})\) is deep._
There are a few occasions later where we need to change the "monomial" \(\mathfrak{m}\) in \((P,\mathfrak{m},\widehat{a})\) while preserving key properties of this slot. Here is what we need:
**Lemma 3.3.23**.: _Let \(u\in K\), \(u\asymp 1\). Then \((P,u\mathfrak{m},\widehat{a})\) refines \((P,\mathfrak{m},\widehat{a})\), and if \((P_{+a},\mathfrak{n},\widehat{a}-a)\) refines \((P,\mathfrak{m},\widehat{a})\), then so does \((P_{+a},u\mathfrak{n},\widehat{a}-a)\). If \((P,\mathfrak{m},\widehat{a})\) is quasilinear, respectively deep, respectively normal, then so is \((P,u\mathfrak{m},\widehat{a})\)._
Proof.: The refinement claims are clearly true, and quasilinearity is preserved since \(\operatorname{ndeg}P_{\times u\mathfrak{m}}=\operatorname{ndeg}P_{\times\mathfrak{m}}\) by [ADH, 11.2.3(iii)]. "Steep" is preserved by Lemma 3.3.1, and hence "deep" is preserved using Lemma 1.1.10 and [ADH, 6.6.5(ii)]. Normality is preserved because steepness is,

\[(P_{\times u\mathfrak{m}})_{d}\ =\ (P_{d})_{\times u\mathfrak{m}}\ \asymp\ (P_{d})_{\times\mathfrak{m}}\ =\ (P_{\times\mathfrak{m}})_{d}\quad\text{ for all }d\in\mathbb{N}\]

by [ADH, 4.3, 4.5.1(ii)], and \(\mathfrak{v}(L_{P_{\times u\mathfrak{m}}})\asymp\mathfrak{v}(L_{P_{\times\mathfrak{m}}})\) by Lemma 3.1.2.
Here is a useful invariance property of normal slots:
**Lemma 3.3.24**.: _Suppose \((P,\mathfrak{m},\widehat{a})\) is normal and \(a\prec\mathfrak{m}\). Then \(L_{P}\) and \(L_{P_{+a}}\) have order \(r\). If in addition \(K\) is \(\uplambda\)-free or \(r=1\), then \(\mathscr{E}^{\rm e}(L_{P})=\mathscr{E}^{\rm e}(L_{P_{+a}})\)._
Proof.: \(L_{P_{\times\mathfrak{m}}}=L_{P}\mathfrak{m}\) (so \(L_{P}\) has order \(r\)), and \(L_{P_{+a,\times\mathfrak{m}}}=L_{P_{\times\mathfrak{m},+a/\mathfrak{m}}}=L_{P_ {+a}}\mathfrak{m}\). The slot \((P_{\times\mathfrak{m}},1,\widehat{a}/\mathfrak{m})\) in \(K\) is normal and \(a/\mathfrak{m}\prec 1\). Thus we can apply Lemma 3.1.27(i) to \(\widehat{K}\), \(P_{\times\mathfrak{m}}\), \(a/\mathfrak{m}\) in place of \(K\), \(P\), \(a\) to give order \(L_{P_{+a}}=r\). Next, applying likewise Lemma 3.1.28 with \(L:=L_{P_{\times\mathfrak{m}}}\), \(\mathfrak{v}:=\mathfrak{v}(L_{P_{\times\mathfrak{m}}})\), \(m=r\), \(B=0\), gives
\[L_{P}\mathfrak{m}-L_{P_{+a}}\mathfrak{m}\ =\ L_{P_{\times\mathfrak{m}}}-L_{P_{ \times\mathfrak{m},+a/\mathfrak{m}}}\ \prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{r+1}L_{P}\mathfrak{m}.\]
Hence, if \(K\) is \(\uplambda\)-free, then \(\mathscr{E}^{\rm e}(L_{P}\mathfrak{m})\ =\ \mathscr{E}^{\rm e}(L_{P_{+a}} \mathfrak{m})\) by Lemma 3.1.22, so
\[\mathscr{E}^{\rm e}(L_{P})\ =\ \mathscr{E}^{\rm e}(L_{P}\mathfrak{m})+v( \mathfrak{m})\ =\ \mathscr{E}^{\rm e}(L_{P_{+a}}\mathfrak{m})+v(\mathfrak{m})\ =\ \mathscr{E}^{\rm e}(L_{P_{+a}}).\]
If \(r=1\) we obtain the same equality from Corollary 3.1.23.
**Normality under refinements.** In this subsection we study how normality behaves under more general refinements. This is not needed to prove the main result of this section, Theorem 3.3.33, but is included to obtain useful variants of it.
**Proposition 3.3.25**.: _Suppose \((P,\mathfrak{m},\widehat{a})\) is normal. Let a refinement \((P_{+a},\mathfrak{m},\widehat{a}-a)\) of \((P,\mathfrak{m},\widehat{a})\) be given. Then this refinement is also normal._
Proof.: By the remarks following the definition of "multiplicative conjugate" in Section 3.2 and after replacing the slots \((P,\mathfrak{m},\widehat{a})\) and \((P_{+a},\mathfrak{m},\widehat{a}-a)\) in \(K\) by \((P_{\times\mathfrak{m}},1,\widehat{a}/\mathfrak{m})\) and \(\big{(}P_{\times\mathfrak{m},+a/\mathfrak{m}},1,(\widehat{a}-a)/\mathfrak{m} \big{)}\), respectively, we arrange that \(\mathfrak{m}=1\). Let \(\mathfrak{v}:=\mathfrak{v}(L_{P})\). By Lemma 3.1.27 we have order\((L_{P_{+a}})=r\), \(\mathfrak{v}(L_{P_{+a}})\sim_{\Delta(\mathfrak{v})}\mathfrak{v}\), and \((P_{+a})_{1}\sim_{\Delta(\mathfrak{v})}P_{1}\). Using [ADH, 4.5.1(i)] we have for \(d>1\) with \(P_{d}\neq 0\),
\[(P_{+a})_{d}\ =\ \big{(}(P_{\geqslant d})_{+a}\big{)}_{d}\ \prec\ (P_{\geqslant d})_{+a}\ \sim\ P_{\geqslant d}\ \prec\ P_{>1},\]
and using (N2), this yields
\[(P_{+a})_{>1}\ \preccurlyeq\ P_{>1}\ \prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+ 1}P_{1}\ \asymp\mathfrak{v}^{w+1}(P_{+a})_{1}.\]
Hence (N2) holds with \(\mathfrak{m}=1\) and with \(P\) replaced by \(P_{+a}\). Thus \((P_{+a},1,\widehat{a}-a)\) is normal, by Lemma 3.3.19.
**Proposition 3.3.26**.: _Suppose \((P,\mathfrak{m},\widehat{a})\) is a normal hole in \(K\), \(\widehat{a}\prec\mathfrak{n}\preccurlyeq\mathfrak{m}\), and \([\mathfrak{n}/\mathfrak{m}]\leqslant\big{[}\mathfrak{v}(L_{P_{\times \mathfrak{m}}})\big{]}\). Then the refinement \((P,\mathfrak{n},\widehat{a})\) of \((P,\mathfrak{m},\widehat{a})\) is also normal._
Proof.: As in the proof of Lemma 3.3.1 we arrange \(\mathfrak{m}=1\) and set \(L:=L_{P}\), \(\mathfrak{v}:=\mathfrak{v}(L)\), and \(\widetilde{L}:=L_{P_{\times\mathfrak{n}}}\), to obtain \([\mathfrak{n}]\leqslant[\mathfrak{v}]\) and \(\mathfrak{v}(\widetilde{L})\asymp_{\Delta(\mathfrak{v})}\mathfrak{v}\). Recall from [ADH, 4.3] that \((P_{\times\mathfrak{n}})_{d}=(P_{d})_{\times\mathfrak{n}}\) for \(d\in\mathbb{N}\). For such \(d\) we have by [ADH, 6.1.3],
\[(P_{d})_{\times\mathfrak{n}}\ \asymp_{\Delta(\mathfrak{v})}\mathfrak{n}^{d}P_{d}\ \preccurlyeq\ \mathfrak{n}^{d}P_{\geqslant d}.\]
In particular, \((P_{\times\mathfrak{n}})_{1}\asymp_{\Delta(\mathfrak{v})}\mathfrak{n}P_{1}\). By (N2) we also have, for \(d>1\):
\[P_{\geqslant d}\ \preccurlyeq P_{>1}\ \prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+ 1}P_{1}.\]
By Lemma 3.3.18 we have \(P\sim P_{1}\). For \(d>1\) we have by [ADH, 6.1.3],
\[\mathfrak{n}^{d}P\ \asymp\mathfrak{n}^{d}P_{1}\ \asymp_{\Delta(\mathfrak{v})} \mathfrak{n}^{d-1}(P_{1})_{\times\mathfrak{n}}\ \preccurlyeq(P_{1})_{\times\mathfrak{n}}\ =\ (P_{\times\mathfrak{n}})_{1}\ \preccurlyeq P_{\times\mathfrak{n}}\]
and thus
\[(P_{\times\mathfrak{n}})_{d}\ =\ (P_{d})_{\times\mathfrak{n}}\ \preccurlyeq_{\Delta(\mathfrak{v})}\mathfrak{n}^{d}P_{\geqslant d}\ \prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}\mathfrak{n}^{d}P_{1}\ \preccurlyeq_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}(P_{\times\mathfrak{n}})_{1}.\]
Hence (N2) holds with \(\mathfrak{m}\) replaced by \(\mathfrak{n}\). Thus \((P,\mathfrak{n},\widehat{a})\) is normal, using \(\mathfrak{v}(\widetilde{L})\asymp_{\Delta(\mathfrak{v})}\mathfrak{v}\) and Lemmas 3.3.1 and 3.3.19.
From Lemma 3.2.14 and Proposition 3.3.26 we obtain:
**Corollary 3.3.27**.: _Suppose \((P,\mathfrak{m},\widehat{a})\) is normal and \(Z\)-minimal, \(\widehat{a}\prec\mathfrak{n}\preccurlyeq\mathfrak{m}\), and \([\mathfrak{n}/\mathfrak{m}]\leqslant\big{[}\mathfrak{v}(L_{P_{\times\mathfrak{ m}}})\big{]}\). Then the refinement \((P,\mathfrak{n},\widehat{a})\) of \((P,\mathfrak{m},\widehat{a})\) is also normal._
_In the rest of this subsection \(\mathfrak{m}=1\), \(\widehat{a}\prec\mathfrak{n}\prec 1\), \(\operatorname{order}(L_{P})=r\), and \([\mathfrak{v}]<[\mathfrak{n}]\) where \(\mathfrak{v}:=\mathfrak{v}(L_{P})\)._ So \((P,\mathfrak{n},\widehat{a})\) refines \((P,1,\widehat{a})\), \(L_{P_{\times\mathfrak{n}}}=L_{P}\mathfrak{n}\), and \(\operatorname{order}L_{P_{\times\mathfrak{n}}}=r\).
**Lemma 3.3.28**.: _Suppose \((P,1,\widehat{a})\) is steep, \(\mathfrak{v}(L_{P_{\times\mathfrak{n}}})\preccurlyeq\mathfrak{v}\), and \(P_{>1}\preccurlyeq P_{1}\). Then \((P,\mathfrak{n},\widehat{a})\) is normal._
Proof.: Put \(\mathfrak{w}:=\mathfrak{v}(L_{P_{\times\mathfrak{n}}})\). Then \([\mathfrak{w}]<[\mathfrak{n}]\) by Corollary 3.1.10, and \(\mathfrak{w}\preccurlyeq\mathfrak{v}\prec^{\flat}1\) gives \(\mathfrak{w}\prec^{\flat}1\). It remains to show that \((P_{\times\mathfrak{n}})_{>1}\prec_{\Delta(\mathfrak{w})}\mathfrak{w}^{w+1}(P_{\times\mathfrak{n}})_{1}\). Using \([\mathfrak{n}]>[\mathfrak{w}]\) it is enough that \((P_{\times\mathfrak{n}})_{>1}\prec_{\Delta}\mathfrak{w}^{w+1}(P_{\times\mathfrak{n}})_{1}\), where \(\Delta:=\Delta(\mathfrak{n})\). Since \(\mathfrak{w}\asymp_{\Delta}1\), it is even enough that \((P_{\times\mathfrak{n}})_{>1}\prec_{\Delta}(P_{\times\mathfrak{n}})_{1}\), to be derived below. Let \(d>1\). Then by [ADH, 6.1.3] and \(P_{d}\preccurlyeq P_{>1}\preccurlyeq P_{1}\) we have
\[(P_{\times\mathfrak{n}})_{d}\ =\ (P_{d})_{\times\mathfrak{n}}\ \prec_{\Delta}\ P_{d} \mathfrak{n}^{d}\ \preccurlyeq\ P_{1}\,\mathfrak{n}^{d}.\]
In view of \(\mathfrak{n}\prec_{\Delta}1\) and \(d>1\) we have
\[P_{1}\,\mathfrak{n}^{d}\ \prec_{\Delta}\ P_{1}\,\mathfrak{n}\ \asymp_{\Delta}\ (P_{1})_{\times\mathfrak{n}}\ =\ (P_{\times \mathfrak{n}})_{1},\]
using again [ADH, 6.1.3]. Thus \((P_{\times\mathfrak{n}})_{d}\prec_{\Delta}(P_{\times\mathfrak{n}})_{1}\), as promised.
**Corollary 3.3.29**.: _If \((P,1,\widehat{a})\) is normal and \(\mathfrak{v}(L_{P_{\times\mathfrak{n}}})\preccurlyeq\mathfrak{v}\), then \((P,\mathfrak{n},\widehat{a})\) is normal._
In the next lemma and its corollary \(K\) is d-valued and for every \(q\in\mathbb{Q}^{>}\) there is given an element \(\mathfrak{n}^{q}\) of \(K^{\times}\) such that \((\mathfrak{n}^{q})^{\dagger}=q\mathfrak{n}^{\dagger}\); the remark before Lemma 3.1.15 gives \(v(\mathfrak{n}^{q})=qv(\mathfrak{n})\) for \(q\in\mathbb{Q}^{>}\). Hence for \(0<q\leqslant 1\) in \(\mathbb{Q}\) we have \(\widehat{a}\prec\mathfrak{n}\preccurlyeq\mathfrak{n}^{q}\prec 1\), so \((P,\mathfrak{n}^{q},\widehat{a})\) refines \((P,1,\widehat{a})\).
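For instance, in \(\mathbb{T}\) (which is \(\mathrm{d}\)-valued) one may take, for \(\mathfrak{n}=\mathrm{e}^{-x}\), the elements \(\mathfrak{n}^{q}:=\mathrm{e}^{-qx}\) (a hypothetical choice, for illustration): then indeed

\[(\mathfrak{n}^{q})^{\dagger}\ =\ -q\ =\ q\,\mathfrak{n}^{\dagger},\qquad v(\mathfrak{n}^{q})\ =\ q\,v(\mathfrak{n})\qquad(q\in\mathbb{Q}^{>}).\]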
**Lemma 3.3.30**.: _Suppose \((P,1,\widehat{a})\) is steep and \(P_{>1}\preccurlyeq P_{1}\). Then \((P,\mathfrak{n}^{q},\widehat{a})\) is normal, for all but finitely many \(q\in\mathbb{Q}\) with \(0<q\leqslant 1\)._
Proof.: We have \(\mathfrak{n}^{\dagger}\succcurlyeq 1\) by \(\mathfrak{n}\preccurlyeq\mathfrak{v}\prec 1\) and \(\mathfrak{v}^{\dagger}\succcurlyeq 1\). Hence Lemma 3.1.16 gives \(\mathfrak{v}(L_{P_{\times\mathfrak{n}^{q}}})\preccurlyeq\mathfrak{v}\) for all but finitely many \(q\in\mathbb{Q}^{>}\). Suppose \(\mathfrak{v}(L_{P_{\times\mathfrak{n}^{q}}})\preccurlyeq\mathfrak{v}\), \(0<q\leqslant 1\) in \(\mathbb{Q}\). Then \((P,\mathfrak{n}^{q},\widehat{a})\) is normal by Lemma 3.3.28 applied with \(\mathfrak{n}^{q}\) instead of \(\mathfrak{n}\).
**Corollary 3.3.31**.: _If \((P,1,\widehat{a})\) is normal, then \((P,\mathfrak{n}^{q},\widehat{a})\) is normal for all but finitely many \(q\in\mathbb{Q}\) with \(0<q\leqslant 1\)._
**Normalizing.** In this subsection, if \(\operatorname{order}(L_{P_{\times\mathfrak{m}}})=r\), then we set \(\mathfrak{v}:=\mathfrak{v}(L_{P_{\times\mathfrak{m}}})\). Towards proving that normality can always be achieved we first show:
**Lemma 3.3.32**.: _Suppose \(\Gamma\) is divisible, \((P,\mathfrak{m},\widehat{a})\) is a deep hole in \(K\), and \(\widehat{a}-a\prec\mathfrak{v}^{w+2}\mathfrak{m}\) for some \(a\). Then \((P,\mathfrak{m},\widehat{a})\) has a refinement that is deep and normal._
Proof.: Replacing \((P,\mathfrak{m},\widehat{a})\) by \((P_{\times\mathfrak{m}},1,\widehat{a}/\mathfrak{m})\) and renaming we arrange \(\mathfrak{m}=1\). Take \(a\) such that \(\widehat{a}-a\prec\mathfrak{v}^{w+2}\). For \(e:=w+\frac{3}{2}\), let \(\mathfrak{v}^{e}\) be an element of \(K^{\times}\) with \(v(\mathfrak{v}^{e})=e\,v(\mathfrak{v})\). _Claim_: the refinement \((P_{+a},\mathfrak{v}^{e},\widehat{a}-a)\) of \((P,1,\widehat{a})\) is deep and normal. By Lemma 3.3.7, \((P_{+a},\mathfrak{v}^{e},\widehat{a}-a)\) is deep, so we do have \(\operatorname{order}(L_{P_{+a,\times\mathfrak{v}^{e}}})=r\) and \(\mathfrak{v}(L_{P_{+a,\times\mathfrak{v}^{e}}})\prec^{\flat}1\). Lemma 3.3.7 also yields \(\mathfrak{v}(L_{P_{+a,\times\mathfrak{v}^{e}}})\asymp_{\Delta(\mathfrak{v})}\mathfrak{v}\). Since \(\operatorname{ddeg}P=\operatorname{dmul}P=1\), we can use Corollary 3.2.20 for \(\mathfrak{n}=\mathfrak{v}^{e}\) and for \(\mathfrak{n}=1\) to obtain
\[\operatorname{ddeg}P_{+a,\times\mathfrak{v}^{e}}\ =\ \operatorname{dmul}P_{+a,\times \mathfrak{v}^{e}}\ =\ \operatorname{ddeg}P_{+a}\ =\ \operatorname{dmul}P_{+a}=1\]
and thus \((P_{+a,\times\mathfrak{v}^{e}})_{1}\sim P_{+a,\times\mathfrak{v}^{e}}\); also \(P_{1}\sim P\sim P_{+a}\sim(P_{+a})_{1}\), where \(P\sim P_{+a}\) follows from \(a\prec 1\) and [ADH, 4.5.1(i)]. Now let \(d>1\). Then
\[(P_{+a,\times\mathfrak{v}^{e}})_{d}\ \asymp_{\Delta(\mathfrak{v})}( \mathfrak{v}^{e})^{d}(P_{+a})_{d}\ \prec\ (\mathfrak{v}^{e})^{d}P_{+a}\ \sim\ ( \mathfrak{v}^{e})^{d}(P_{+a})_{1}\] \[\asymp_{\Delta(\mathfrak{v})}(\mathfrak{v}^{e})^{d-1}(P_{+a, \times\mathfrak{v}^{e}})_{1}\ \prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}(P_{+a,\times \mathfrak{v}^{e}})_{1},\]
using [ADH, 6.1.3] for \(\asymp_{\Delta(\mathfrak{v})}\). So \((P_{+a},\mathfrak{v}^{e},\widehat{a}-a)\) is normal by Lemma 3.3.19.
We can now finally show:
**Theorem 3.3.33**.: _Suppose \(K\) is \(\omega\)-free and \(r\)-linearly newtonian, and \(\Gamma\) is divisible. Then every \(Z\)-minimal slot in \(K\) of order \(r\) has a refinement \((P,\mathfrak{m},\widehat{a})\) such that \((P^{\phi},\mathfrak{m},\widehat{a})\) is deep and normal, eventually._
Proof.: By Lemma 3.2.14 it is enough to show this for \(Z\)-minimal holes in \(K\) of order \(r\). Given such a hole in \(K\), use Corollary 3.3.12 to refine it to a hole \((P,\mathfrak{m},\widehat{a})\) such that \((P^{\phi},\mathfrak{m},\widehat{a})\) is deep, eventually. Replacing \((P,\mathfrak{m},\widehat{a})\) by \((P^{\phi},\mathfrak{m},\widehat{a})\) for a suitable active \(\phi\prec 1\) we arrange that \((P,\mathfrak{m},\widehat{a})\) itself is deep. Then an appeal to Corollary 3.3.16 followed by an application of Lemma 3.3.32 yields a deep and normal refinement of \((P,\mathfrak{m},\widehat{a})\). Now apply Lemma 3.3.20 to this refinement.
Next we indicate some variants of Theorem 3.3.33:
**Corollary 3.3.34**.: _Suppose \(K\) is \(\mathrm{d}\)-valued and \(\omega\)-free, and \(\Gamma\) is divisible. Then every minimal hole in \(K\) of order \(r\) has a refinement \((P,\mathfrak{m},\widehat{a})\) such that \((P^{\phi},\mathfrak{m},\widehat{a})\) is deep and normal, eventually._
Proof.: Given a minimal hole in \(K\) of order \(r\), use Corollary 3.3.12 to refine it to a hole \((P,\mathfrak{m},\widehat{a})\) in \(K\) such that \(\operatorname{nwt}L_{P_{\times\mathfrak{m}}}=0\) and \((P^{\phi},\mathfrak{m},\widehat{a})\) is deep, eventually. If \(\deg P=1\), then \((P^{\phi},\mathfrak{m},\widehat{a})\) is normal, eventually, by Lemma 3.3.17. If \(\deg P>1\), then \(K\) is \(r\)-linearly newtonian by Corollary 3.2.6, so we can use Theorem 3.3.33.
For \(r=1\) we can follow the proof of Theorem 3.3.33, using Corollary 3.3.10 in place of Corollary 3.3.12, to obtain:
**Corollary 3.3.35**.: _If \(K\) is \(1\)-linearly newtonian and \(\Gamma\) is divisible, then every quasilinear \(Z\)-minimal slot in \(K\) of order \(1\) has a refinement \((P,\mathfrak{m},\widehat{a})\) such that \((P^{\phi},\mathfrak{m},\widehat{a})\) is deep and normal, eventually._
Here is another variant of Theorem 3.3.33:
**Proposition 3.3.36**.: _If \(K\) is \(\mathrm{d}\)-valued and \(\omega\)-free, and \(\Gamma\) is divisible, then every \(Z\)-minimal special slot in \(K\) of order \(r\) has a refinement \((P,\mathfrak{m},\widehat{a})\) such that \((P^{\phi},\mathfrak{m},\widehat{a})\) is deep and normal, eventually._
To establish this proposition we follow the proof of Theorem 3.3.33, using Lemma 3.2.35 to preserve specialness in the initial refining. Corollary 3.3.15 takes over the role of Corollary 3.3.16 in that proof.
For linear slots in \(K\) we can weaken the hypotheses of Theorem 3.3.33:
**Corollary 3.3.37**.: _Suppose \(\deg P=1\). Then \((P,\mathfrak{m},\widehat{a})\) has a refinement \((P,\mathfrak{n},\widehat{a})\) such that \((P^{\phi},\mathfrak{n},\widehat{a})\) is deep and normal, eventually. Moreover, if \(K\) is \(\lambda\)-free and \(r>1\), then \((P^{\phi},\mathfrak{m},\widehat{a})\) is deep and normal, eventually._
Proof.: By the remarks before Lemma 3.3.17, \((P,\mathfrak{m},\widehat{a})\) is normal iff it is steep. Moreover, if \((P,\mathfrak{m},\widehat{a})\) is normal, then it is quasilinear by Corollary 3.3.21, and hence \((P^{\phi},\mathfrak{m},\widehat{a})\) is deep and normal, eventually, by the remarks before Example 3.3.3 and Lemma 3.3.20. By Lemma 3.3.2, \((P,\mathfrak{m},\widehat{a})\) has a refinement \((P,\mathfrak{n},\widehat{a})\) such that \((P^{\phi},\mathfrak{n},\widehat{a})\) is steep, eventually. This yields the first part. The second part follows from Corollary 3.1.21 and Lemma 3.3.17.
**Corollary 3.3.38**.: _Suppose \(K\) is \(\lambda\)-free, \(\Gamma\) is divisible, and \((P,\mathfrak{m},\widehat{a})\) is a quasilinear minimal hole in \(K\) of order \(r=1\). Then \((P,\mathfrak{m},\widehat{a})\) has a refinement \((Q,\mathfrak{n},\widehat{b})\) such that \((Q^{\phi},\mathfrak{n},\widehat{b})\) is deep and normal, eventually._
Proof.: The case \(\deg P=1\) is part of Corollary 3.3.37. If \(\deg P>1\), then \(K\) is \(1\)-linearly newtonian by Lemma 3.2.5, so we can use Corollary 3.3.35.
**Improving normality.** _In this subsection \(L:=L_{P_{\times\mathfrak{m}}}\)._ Note that if \((P,\mathfrak{m},\widehat{a})\) is a normal hole in \(K\), then \(P_{\times\mathfrak{m}}\sim(P_{\times\mathfrak{m}})_{1}\) by Lemma 3.3.18. We call our slot \((P,\mathfrak{m},\widehat{a})\) in \(K\) **strictly normal** if it is normal, but with the condition (N2) replaced by the stronger condition
* \((P_{\times\mathfrak{m}})_{\neq 1}\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1} (P_{\times\mathfrak{m}})_{1}\).
Thus for normal \((P,\mathfrak{m},\widehat{a})\) and \(\mathfrak{v}=\mathfrak{v}(L)\) we have:
\[(P,\mathfrak{m},\widehat{a})\text{ is strictly normal}\iff P(0)\prec_{\Delta( \mathfrak{v})}\mathfrak{v}^{w+1}(P_{\times\mathfrak{m}})_{1}.\]
So if \((P,\mathfrak{m},\widehat{a})\) is normal and \(P(0)=0\), then \((P,\mathfrak{m},\widehat{a})\) is strictly normal. Note that if \((P,\mathfrak{m},\widehat{a})\) is strictly normal, then
\[P_{\times\mathfrak{m}}\sim_{\Delta(\mathfrak{v})}(P_{\times\mathfrak{m}})_{1 }\qquad(\text{and hence }\mathrm{ddeg}\,P_{\times\mathfrak{m}}=1).\]
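To see what (N2s) amounts to in the simplest case, return to Example 3.3.3 (a worked special case, included only for illustration; there \(r=1\), \(\mathfrak{m}=1\), \(\deg P=1\), so normality is equivalent to steepness by the remarks preceding Lemma 3.3.17). We have \(w=\operatorname{wt}(P)=1\), and for \(g\succ 1\) also \((P)_{1}=Y^{\prime}+gY\asymp g\) and \(\mathfrak{v}=1/g\), so \(P(0)=-u\) and \(\mathfrak{v}^{w+1}(P)_{1}\asymp\mathfrak{v}^{2}g=1/g\). Hence

\[(P,1,\widehat{a})\text{ is strictly normal}\quad\Longleftrightarrow\quad g\succ^{\flat}1\ \text{ and }\ u\prec_{\Delta(\mathfrak{v})}1/g.\]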
If \((P,\mathfrak{m},\widehat{a})\) is strictly normal, then so are \((P_{\times\mathfrak{n}},\mathfrak{m}/\mathfrak{n},\widehat{a}/\mathfrak{n})\) and \((bP,\mathfrak{m},\widehat{a})\) for \(b\neq 0\). Thus \((P,\mathfrak{m},\widehat{a})\) is strictly normal iff \((P_{\times\mathfrak{m}},1,\widehat{a}/\mathfrak{m})\) is strictly normal. If \((P,\mathfrak{m},\widehat{a})\) is strictly normal, then so is every equivalent slot in \(K\). The proof of Lemma 3.3.23 shows that if \((P,\mathfrak{m},\widehat{a})\) is strictly normal and \(u\in K\), \(u\asymp 1\), then \((P,u\mathfrak{m},\widehat{a})\) is also strictly normal. The analogue of Lemma 3.3.19 goes through, with \((P_{\times\mathfrak{m}})_{\neq 1}\) instead of \((P_{\times\mathfrak{m}})_{>1}\) in the proof:
**Lemma 3.3.39**.: _Suppose \(\mathrm{order}(L)=r\) and \(\mathfrak{v}\) are such that_ (N1) _and_ (N2s) _hold, and \(\mathfrak{v}(L)\asymp_{\Delta(\mathfrak{v})}\mathfrak{v}\). Then \((P,\mathfrak{m},\widehat{a})\) is strictly normal._
Lemma 3.3.20 goes likewise through with "strictly normal" instead of "normal":
**Lemma 3.3.40**.: _If \((P,\mathfrak{m},\widehat{a})\) is strictly normal and \(\phi\preccurlyeq 1\) is active, then the slot \((P^{\phi},\mathfrak{m},\widehat{a})\) in \(K^{\phi}\) is strictly normal._ (_Hence if \((P,\mathfrak{m},\widehat{a})\) is strictly normal, then \((P,\mathfrak{m},\widehat{a})\) is quasilinear, and if in addition \((P,\mathfrak{m},\widehat{a})\) is linear, then it is deep._)
As to Proposition 3.3.25, here is a weak version for strict normality:
**Lemma 3.3.41**.: _Suppose \((P,\mathfrak{m},\widehat{a})\) is a strictly normal hole in \(K\) and \(\widehat{a}-a\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{r+w+1}\mathfrak{m}\) where \(\mathfrak{v}:=\mathfrak{v}(L)\). Then its refinement \((P_{+a},\mathfrak{m},\widehat{a}-a)\) is also strictly normal._
Proof.: As in the proof of Proposition 3.3.25 we arrange \(\mathfrak{m}=1\). We can also assume \(P_{1}\asymp 1\). From \(P=P(0)+P_{1}+P_{>1}\) we get
\[P(a)\ =\ P(0)+P_{1}(a)+P_{>1}(a),\]
where \(P(0)\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}\) and \(P_{>1}(a)\preccurlyeq P_{>1}\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}\) by (N2s) and \(a\prec 1\); we show that also \(P_{1}(a)\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}\). To see this note that
\[0\ =\ P(\widehat{a})\ =\ P(0)+P_{1}(\widehat{a})+P_{>1}(\widehat{a}),\]
where as before \(P(0),P_{>1}(\widehat{a})\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}\), so \(P_{1}(\widehat{a})\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}\). Lemma 1.1.13 applied to \((\widehat{K},\preccurlyeq_{\Delta(\mathfrak{v})},P_{1})\) in place of \((K,\preccurlyeq,P)\), with \(m=w+1\), \(y=a-\widehat{a}\), yields \(P_{1}(a-\widehat{a})\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}\), hence
\[P_{1}(a)\ =\ P_{1}(a-\widehat{a})+P_{1}(\widehat{a})\ \prec_{\Delta(\mathfrak{v})} \mathfrak{v}^{w+1}\]
as claimed. It remains to use \(\mathfrak{v}(L_{P_{+a}})\asymp_{\Delta(\mathfrak{v})}\mathfrak{v}\) and the normality of \((P_{+a},1,\widehat{a}-a)\) obtained from Proposition 3.3.25 and its proof.
We also have a version of Lemma 3.3.41 for \(Z\)-minimal slots, obtained from that lemma via Lemma 3.2.14:
**Lemma 3.3.42**.: _Suppose \((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal and strictly normal. Set \(\mathfrak{v}:=\mathfrak{v}(L)\), and suppose \(\widehat{a}-a\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{r+w+1}\mathfrak{m}\). Then the refinement \((P_{+a},\mathfrak{m},\widehat{a}-a)\) of \((P,\mathfrak{m},\widehat{a})\) is strictly normal._
Next two versions of Proposition 3.3.26:
**Lemma 3.3.43**.: _Suppose \((P,\mathfrak{m},\widehat{a})\) is a strictly normal hole in \(K\), \(\widehat{a}\prec\mathfrak{n}\preccurlyeq\mathfrak{m}\), and \([\mathfrak{n}/\mathfrak{m}]<\big{[}\mathfrak{v}(L)\big{]}\). Then the refinement \((P,\mathfrak{n},\widehat{a})\) of \((P,\mathfrak{m},\widehat{a})\) is strictly normal._
Proof.: As in the proof of Proposition 3.3.26 we arrange \(\mathfrak{m}=1\) and, setting \(\mathfrak{v}:=\mathfrak{v}(L)\), \(\widetilde{L}:=L_{P_{\times\mathfrak{n}}}\), show that \(\operatorname{order}(\widetilde{L})=r\), \(\mathfrak{v}(\widetilde{L})\asymp_{\Delta(\mathfrak{v})}\mathfrak{v}\), and that (N2) holds with \(\mathfrak{m}\) replaced by \(\mathfrak{n}\). Now \([\mathfrak{n}]<[\mathfrak{v}]\) yields \(\mathfrak{n}\asymp_{\Delta(\mathfrak{v})}1\); together with \((P_{\times\mathfrak{n}})_{1}\asymp_{\Delta(\mathfrak{v})}\mathfrak{n}P_{1}\) this gives \(P(0)\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}P_{1}\asymp_{\Delta( \mathfrak{v})}\mathfrak{v}^{w+1}(P_{\times\mathfrak{n}})_{1}\). Hence (N2s) holds with \(\mathfrak{m}\) replaced by \(\mathfrak{n}\). Lemma 3.3.39 now yields that \((P,\mathfrak{n},\widehat{a})\) is strictly normal.
**Lemma 3.3.44**.: _Suppose \((P,\mathfrak{m},\widehat{a})\) is a strictly normal hole in \(K\) and \(\widehat{a}\prec_{\Delta(\mathfrak{v})}\mathfrak{m}\) where \(\mathfrak{v}:=\mathfrak{v}(L)\). Assume also that for all \(q\in\mathbb{Q}^{>}\) there is given an element \(\mathfrak{v}^{q}\) of \(K^{\times}\) with \(v(\mathfrak{v}^{q})=q\,v(\mathfrak{v})\). Then for all sufficiently small \(q\in\mathbb{Q}^{>}\) and \(\mathfrak{n}\) with \(\mathfrak{n}\asymp\mathfrak{v}^{q}\mathfrak{m}\) we have: \(\widehat{a}\prec\mathfrak{n}\) and the refinement \((P,\mathfrak{n},\widehat{a})\) of \((P,\mathfrak{m},\widehat{a})\) is strictly normal._
Proof.: We arrange \(\mathfrak{m}=1\) as usual, and take \(q_{0}\in\mathbb{Q}^{>}\) such that \(\widehat{a}\prec\mathfrak{v}^{q_{0}}\) and \(P(0)\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1+q_{0}}P_{1}\). Let \(q\in\mathbb{Q}\), \(0<q\leqslant q_{0}\), and suppose \(\mathfrak{n}\asymp\mathfrak{v}^{q}\). Then \((P,\mathfrak{n},\widehat{a})\) is a refinement of \((P,1,\widehat{a})\), and the proof of Proposition 3.3.26 gives: \(\widetilde{L}:=L_{P_{\times\mathfrak{n}}}\) has order \(r\) with \(\mathfrak{v}(\widetilde{L})\asymp_{\Delta(\mathfrak{v})}\mathfrak{v}\), \(\mathfrak{n}P_{1}\asymp_{\Delta(\mathfrak{v})}(P_{\times\mathfrak{n}})_{1}\), and (N2) holds with \(\mathfrak{m}\) replaced by \(\mathfrak{n}\). Hence
\[P(0)\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1+q_{0}}P_{1}\preccurlyeq \mathfrak{v}^{w+1}\mathfrak{n}P_{1}\asymp_{\Delta(\mathfrak{v})}\mathfrak{v}^{w +1}(P_{\times\mathfrak{n}})_{1}.\]
Now as in the proof of the previous lemma we conclude that \((P,\mathfrak{n},\widehat{a})\) is strictly normal.
_Remark 3.3.45_.: In Lemmas 3.3.43 and 3.3.44 we assumed that \((P,\mathfrak{m},\widehat{a})\) is a strictly normal hole in \(K\). By Lemma 3.2.14 these lemmas go through if this hypothesis is replaced by "\((P,\mathfrak{m},\widehat{a})\) is a strictly normal \(Z\)-minimal slot in \(K\)".
We now turn to refining a given normal hole to a strictly normal hole. We only do this under additional hypotheses, tailored so that we may employ Lemma 3.1.17. Therefore we assume in the rest of this subsection: _\(K\) is \(\mathrm{d}\)-valued and for all \(\mathfrak{v}\) and \(q\in\mathbb{Q}^{>}\) we are given an element \(\mathfrak{v}^{q}\) of \(K^{\times}\) with \((\mathfrak{v}^{q})^{\dagger}=q\mathfrak{v}^{\dagger}\)._ Note that
then \(v(\mathfrak{v}^{q})=q\,v(\mathfrak{v})\) for such \(q\). (In particular, \(\Gamma\) is divisible.) We also adopt the convention that if \(\operatorname{order}L=r\), then \(\mathfrak{v}:=\mathfrak{v}(L)\).
**Lemma 3.3.46**.: _Suppose \((P,\mathfrak{m},\widehat{a})\) is a normal hole in \(K\) and \(\widehat{a}-a\preccurlyeq\mathfrak{v}^{w+2}\mathfrak{m}\). Then the refinement \((P_{+a},\mathfrak{m},\widehat{a}-a)\) of \((P,\mathfrak{m},\widehat{a})\) is strictly normal._
Proof.: As usual we arrange that \(\mathfrak{m}=1\). By Proposition 3.3.25, \((P_{+a},1,\widehat{a}-a)\) is normal; the proof of this proposition gives \(\operatorname{order}(L_{P_{+a}})=r\), \(\mathfrak{v}(L_{P_{+a}})\sim_{\Delta(\mathfrak{v})}\mathfrak{v}\), \((P_{+a})_{1}\sim_{\Delta(\mathfrak{v})}P_{1}\), and (N2) holds with \(\mathfrak{m}=1\) and \(P\) replaced by \(P_{+a}\). It remains to show that \(P_{+a}(0)\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}(P_{+a})_{1}\), equivalently, \(P(a)\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}P_{1}\).
Let \(\widehat{L}:=L_{P_{+\widehat{a}}}\in\widehat{K}[\partial]\) and \(R:=P_{>1}\in K\{Y\}\); note that \(P_{(i)}=R_{(i)}\) for \(|i|>1\) and \(R\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}P_{1}\). Hence Taylor expansion and \(P(\widehat{a})=0\) give
\[P(a) =\ P(\widehat{a})+\widehat{L}(a-\widehat{a})+\sum_{|i|>1}P_{(i)} (\widehat{a})\cdot(a-\widehat{a})^{i}\] \[=\ \widehat{L}(a-\widehat{a})+\sum_{|i|>1}R_{(i)}(\widehat{a}) \cdot(a-\widehat{a})^{i}\] \[\qquad\qquad\text{where }R_{(i)}(\widehat{a})\cdot(a-\widehat{a})^{i} \ \prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}P_{1}\text{ for }|i|>1,\]
so it is enough to show \(\widehat{L}(a-\widehat{a})\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}P_{1}\). Lemma 3.1.27 applied to \((\widehat{K},\widehat{a})\) in place of \((K,a)\) gives \(\operatorname{order}\widehat{L}=r\) and \(\widehat{L}\sim_{\Delta(\mathfrak{v})}L\). Since \(\widehat{K}\) is d-valued, Lemma 3.1.17 yields a \(q\in\mathbb{Q}\) with \(w+1<q\leqslant w+2\) and a \(\mathfrak{w}\) such that \(\widehat{L}\mathfrak{v}^{q}\asymp\mathfrak{w}\mathfrak{v}^{q}\,\widehat{L}\) where \([\mathfrak{w}]\leqslant[\mathfrak{v}^{\dagger}]\) and hence \(\mathfrak{w}\asymp_{\Delta(\mathfrak{v})}1\) (see the remark before Lemma 3.3.1). With \(\mathfrak{n}\asymp a-\widehat{a}\) we have \(\mathfrak{n}\preccurlyeq\mathfrak{v}^{w+2}\preccurlyeq\mathfrak{v}^{q}\prec _{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}\) and therefore
\[\widehat{L}(a-\widehat{a})\ \preccurlyeq\widehat{L}\mathfrak{n}\ \preccurlyeq \widehat{L}\mathfrak{v}^{q}\ \asymp\mathfrak{w}\mathfrak{v}^{q}\,\widehat{L}\ \asymp_{\Delta(\mathfrak{v})}\mathfrak{v}^{q}\widehat{L}\ \prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}\widehat{L}.\]
Hence \(\widehat{L}(a-\widehat{a})\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}P_{1}\) as required.
In particular, if \((P,\mathfrak{m},\widehat{a})\) is a normal hole in \(K\) and \(\widehat{a}\preccurlyeq\mathfrak{v}^{w+2}\mathfrak{m}\), then \((P,\mathfrak{m},\widehat{a})\) is strictly normal.
**Corollary 3.3.47**.: _Suppose \((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal, deep, and normal. If \((P,\mathfrak{m},\widehat{a})\) is special, then \((P,\mathfrak{m},\widehat{a})\) has a deep and strictly normal refinement \((P_{+a},\mathfrak{m},\widehat{a}-a)\) where \(\widehat{a}-a\prec_{\Delta(\mathfrak{v})}\mathfrak{m}\) and \(\mathfrak{v}(L_{P_{+a,\times\mathfrak{m}}})\asymp_{\Delta(\mathfrak{v})}\mathfrak{v}\)._

Proof.: Since \((P,\mathfrak{m},\widehat{a})\) is special, Corollary 3.3.15 gives an \(a\) with \(\widehat{a}-a\prec\mathfrak{v}^{w+2}\mathfrak{m}\); in particular \(\widehat{a}-a\prec_{\Delta(\mathfrak{v})}\mathfrak{m}\). By Lemmas 3.2.14 and 3.3.46 the refinement \((P_{+a},\mathfrak{m},\widehat{a}-a)\) of \((P,\mathfrak{m},\widehat{a})\) is strictly normal, and by Corollary 3.3.8 it is deep with \(\mathfrak{v}(L_{P_{+a,\times\mathfrak{m}}})\asymp_{\Delta(\mathfrak{v})}\mathfrak{v}\).

**Corollary 3.3.48**.: _Suppose \(K\) is \(\omega\)-free and \(r\)-linearly newtonian. Then every \(Z\)-minimal slot in \(K\) of order \(r\) has a refinement \((P,\mathfrak{m},\widehat{a})\) such that \((P^{\phi},\mathfrak{m},\widehat{a})\) is deep and strictly normal, eventually._

Proof.: Let a \(Z\)-minimal slot in \(K\) of order \(r\) be given. Theorem 3.3.33 yields a refinement \((P,\mathfrak{m},\widehat{a})\) of it and an active \(\phi_{0}\preccurlyeq 1\) such that \((P^{\phi_{0}},\mathfrak{m},\widehat{a})\) is deep and normal. Corollary 3.3.16, applied to \(K^{\phi_{0}}\) and \((P^{\phi_{0}},\mathfrak{m},\widehat{a})\) in place of \(K\) and \((P,\mathfrak{m},\widehat{a})\), gives an \(a\) with \(\widehat{a}-a\prec\mathfrak{v}^{w+2}\mathfrak{m}\), where \(\mathfrak{v}:=\mathfrak{v}(L_{P^{\phi_{0}}_{\times\mathfrak{m}}})\). By Lemmas 3.2.14 and 3.3.46 and Corollary 3.3.8, the refinement \((P^{\phi_{0}}_{+a},\mathfrak{m},\widehat{a}-a)\) of \((P^{\phi_{0}},\mathfrak{m},\widehat{a})\) is then deep and strictly normal, and so by Lemma 3.3.40 and the remarks following the definition of _deep_, the slot \((P^{\phi}_{+a},\mathfrak{m},\widehat{a}-a)\) in \(K^{\phi}\) is deep and strictly normal, for all active \(\phi\preccurlyeq\phi_{0}\) (in \(K\)). Thus \((P_{+a},\mathfrak{m},\widehat{a}-a)\) refines the original \(Z\)-minimal slot in \(K\) and has the desired property.
Corollaries 3.2.6 and 3.3.48 have the following consequence:
**Corollary 3.3.49**.: _Suppose \(K\) is \(\omega\)-free. Then every minimal hole in \(K\) of order \(r\) and degree \(>1\) has a refinement \((P,\mathfrak{m},\widehat{a})\) such that \((P^{\phi},\mathfrak{m},\widehat{a})\) is deep and strictly normal, eventually._
Corollary 3.3.47 also gives the following variant of Corollary 3.3.48, where the role of Theorem 3.3.33 in its proof is taken over by Proposition 3.3.36:
**Corollary 3.3.50**.: _Suppose \(K\) is \(\omega\)-free. Then every \(Z\)-minimal special slot in \(K\) of order \(r\) has a refinement \((P,\mathfrak{m},\widehat{a})\) such that \((P^{\phi},\mathfrak{m},\widehat{a})\) is deep and strictly normal, eventually._
### Isolated slots
In this short section we study the concept of isolation, which plays well together with normality. _Throughout this section \(K\) is an \(H\)-asymptotic field with small derivation and with rational asymptotic integration._ We let \(a\), \(b\) range over \(K\) and \(\phi\), \(\mathfrak{m}\), \(\mathfrak{n}\), \(\mathfrak{w}\) over \(K^{\times}\). We also let \((P,\mathfrak{m},\widehat{a})\) be a slot in \(K\) of order \(r\geqslant 1\). Recall that \(v(\widehat{a}-K)\) is a cut in \(\Gamma\) without largest element. Note that \(v\big{(}(\widehat{a}-a)-K\big{)}=v(\widehat{a}-K)\) and \(v(\widehat{a}\mathfrak{n}-K)=v(\widehat{a}-K)+v\mathfrak{n}\).
**Definition 3.4.1**.: We say that \((P,\mathfrak{m},\widehat{a})\) is **isolated** if for all \(a\prec\mathfrak{m}\),
\[\text{order}(L_{P_{+a}})=r\ \text{ and }\ \mathscr{E}^{\rm e}(L_{P_{+a}})\cap v (\widehat{a}-K)\ <\ v(\widehat{a}-a);\]
equivalently, for all \(a\prec\mathfrak{m}\): \(\text{order}(L_{P_{+a}})=r\) and whenever \(\mathfrak{w}\preccurlyeq\widehat{a}-a\) is such that \(v(\mathfrak{w})\in\mathscr{E}^{\rm e}(L_{P_{+a}})\), then \(\mathfrak{w}\prec\widehat{a}-b\) for all \(b\).
In particular, if \((P,\mathfrak{m},\widehat{a})\) is isolated, then \(v(\widehat{a})\notin\mathscr{E}^{\rm e}(L_{P})\). If \((P,\mathfrak{m},\widehat{a})\) is isolated, then so is every equivalent slot in \(K\), as well as \((bP,\mathfrak{m},\widehat{a})\) for \(b\neq 0\) and the slot \((P^{\phi},\mathfrak{m},\widehat{a})\) in \(K^{\phi}\) for active \(\phi\) in \(K\). Moreover:
**Lemma 3.4.2**.: _If \((P,\mathfrak{m},\widehat{a})\) is isolated, then so is any refinement \((P_{+a},\mathfrak{n},\widehat{a}-a)\) of it._
Proof.: For the case \(\mathfrak{n}=\mathfrak{m}\), use \(v\big{(}(\widehat{a}-a)-K\big{)}=v(\widehat{a}-K)\). The case \(a=0\) is clear. The general case reduces to these two special cases.
**Lemma 3.4.3**.: _Suppose \((P,\mathfrak{m},\widehat{a})\) is isolated. Then the multiplicative conjugate \((P_{\times\mathfrak{n}},\mathfrak{m}/\mathfrak{n},\widehat{a}/\mathfrak{n})\) of \((P,\mathfrak{m},\widehat{a})\) by \(\mathfrak{n}\) is isolated._
Proof.: Let \(a\prec\mathfrak{m}/\mathfrak{n}\). Then \(a\mathfrak{n}\prec\mathfrak{m}\), so \(\text{order}(L_{P_{\times\mathfrak{n},+a}})=\text{order}(L_{P_{+a\mathfrak{n},\times\mathfrak{n}}})=\text{order}(L_{P_{+a\mathfrak{n}}})=r\). Suppose \(\mathfrak{w}\preccurlyeq(\widehat{a}/\mathfrak{n})-a\) and \(v(\mathfrak{w})\in\mathscr{E}^{\rm e}\big{(}L_{P_{\times\mathfrak{n},+a}}\big{)}\). Now \(L_{P_{\times\mathfrak{n},+a}}=L_{P_{+a\mathfrak{n},\times\mathfrak{n}}}=L_{P_{+a\mathfrak{n}}}\mathfrak{n}\) and thus \(\mathfrak{w}\mathfrak{n}\preccurlyeq\widehat{a}-a\mathfrak{n}\), \(v(\mathfrak{w}\mathfrak{n})\in\mathscr{E}^{\rm e}\big{(}L_{P_{+a\mathfrak{n}}}\big{)}\). But \((P,\mathfrak{m},\widehat{a})\) is isolated, so \(v(\mathfrak{w}\mathfrak{n})>v(\widehat{a}-K)\) and hence \(v(\mathfrak{w})>v\big{(}(\widehat{a}/\mathfrak{n})-K\big{)}\). Thus \((P_{\times\mathfrak{n}},\mathfrak{m}/\mathfrak{n},\widehat{a}/\mathfrak{n})\) is isolated.
**Lemma 3.4.4**.: _Suppose \(K\) is \(\lambda\)-free or \(r=1\), and \((P,\mathfrak{m},\widehat{a})\) is normal. Then_
\[(P,\mathfrak{m},\widehat{a})\text{ is isolated}\quad\Longleftrightarrow\quad \mathscr{E}^{\rm e}(L_{P})\cap v(\widehat{a}-K)\ \leqslant\ v\mathfrak{m}.\]
Proof.: Use Lemma 3.3.24; for the direction \(\Rightarrow\), use also that \(\widehat{a}-a\prec\mathfrak{m}\) iff \(a\prec\mathfrak{m}\).
**Lemma 3.4.5**.: _Suppose \(\deg P=1\). Then_
\[(P,\mathfrak{m},\widehat{a})\mbox{ is isolated}\quad\Longleftrightarrow\quad \mathscr{E}^{\mathrm{e}}(L_{P})\cap v(\widehat{a}-K)\ \leqslant\ v\mathfrak{m}.\]
Proof.: Use that \(\operatorname{order}L_{P}=r\) and \(L_{P_{+a}}=L_{P}\) for all \(a\).
**Proposition 3.4.6**.: _Suppose \(K\) is \(\lambda\)-free or \(r=1\), and \((P,\mathfrak{m},\widehat{a})\) is normal. Then \((P,\mathfrak{m},\widehat{a})\) has an isolated refinement._
Proof.: Suppose \((P,\mathfrak{m},\widehat{a})\) is not already isolated. Then Lemma 3.4.4 gives \(\gamma\) with
\[\gamma\in\mathscr{E}^{\mathrm{e}}(L_{P})\cap v(\widehat{a}-K),\quad\gamma>v \mathfrak{m}.\]
We have \(|\mathscr{E}^{\mathrm{e}}(L_{P})|\leqslant r\), by [ADH, p. 481] if \(r=1\), and Corollary 1.8.11 and \(\lambda\)-freeness of \(K\) if \(r>1\). Hence we can take \(\gamma:=\max\mathscr{E}^{\mathrm{e}}(L_{P})\cap v(\widehat{a}-K)\), and then \(\gamma>v\mathfrak{m}\). Take \(a\) and \(\mathfrak{n}\) with \(v(\widehat{a}-a)>\gamma=v(\mathfrak{n})\); then \((P_{+a},\mathfrak{n},\widehat{a}-a)\) is a refinement of \((P,\mathfrak{m},\widehat{a})\) and \(a\prec\mathfrak{m}\). Let \(b\prec\mathfrak{n}\); then \(a+b\prec\mathfrak{m}\), so by Lemma 3.3.24,
\[\operatorname{order}(L_{(P_{+a})_{+b}})\ =\ r,\qquad\mathscr{E}^{\mathrm{e}}(L_{(P_{+ a})_{+b}})\ =\ \mathscr{E}^{\mathrm{e}}(L_{P}).\]
Also \(v\big{(}(\widehat{a}-a)-b\big{)}>\gamma\), hence
\[\mathscr{E}^{\mathrm{e}}\big{(}L_{(P_{+a})_{+b}}\big{)}\cap v\big{(}(\widehat {a}-a)-K\big{)}\ =\ \mathscr{E}^{\mathrm{e}}(L_{P})\cap v(\widehat{a}-K)\ \leqslant\ \gamma\ <\ v\big{(}(\widehat{a}-a)-b\big{)}.\]
Thus \((P_{+a},\mathfrak{n},\widehat{a}-a)\) is isolated.
_Remark 3.4.7_.: Proposition 3.4.6 goes through if instead of assuming that \((P,\mathfrak{m},\widehat{a})\) is normal, we assume that \((P,\mathfrak{m},\widehat{a})\) is linear. (Same argument, using Lemma 3.4.5 in place of Lemma 3.4.4 and \(L_{(P_{+a})_{+b}}=L_{P}\) in place of Lemma 3.3.24.)
**Corollary 3.4.8**.: _Suppose \(r=1\), and \((P,\mathfrak{m},\widehat{a})\) is normal or linear. If \(\mathscr{E}^{\mathrm{e}}(L_{P})=\emptyset\), then \((P,\mathfrak{m},\widehat{a})\) is isolated. If \(\mathscr{E}^{\mathrm{e}}(L_{P})\neq\emptyset\), so \(\mathscr{E}^{\mathrm{e}}(L_{P})=\{v\mathfrak{g}\}\) where \(\mathfrak{g}\in K^{\times}\), then \((P,\mathfrak{m},\widehat{a})\) is isolated iff \(\mathfrak{m}\preccurlyeq\mathfrak{g}\) or \(\widehat{a}-K\succ\mathfrak{g}\)._
This follows immediately from Lemmas 3.4.4 and 3.4.5. The results in the rest of this subsection are the _raison d'être_ of isolated holes:
**Proposition 3.4.9**.: _Suppose \(K\) is \(\omega\)-free and \((P,\mathfrak{m},\widehat{a})\) is an isolated hole in \(K\) which is normal or linear. Let \(\widehat{b}\) in an immediate asymptotic extension of \(K\) satisfy \(P(\widehat{b})=0\) and \(\widehat{b}\prec\mathfrak{m}\). Then \(v(\widehat{a}-a)=v(\widehat{b}-a)\) for all \(a\), so \(\widehat{b}\notin K\)._
Proof.: Replacing \((P,\mathfrak{m},\widehat{a})\), \(\widehat{b}\) by \((P_{\times\mathfrak{m}},1,\widehat{a}/\mathfrak{m})\), \(\widehat{b}/\mathfrak{m}\), we arrange \(\mathfrak{m}=1\). Let \(a\) be given; we show \(v(\widehat{a}-a)=v(\widehat{b}-a)\). This is clear if \(a\succcurlyeq 1\), so assume \(a\prec 1\). Corollary 3.3.21 (if \((P,\mathfrak{m},\widehat{a})\) is normal) and Lemma 3.2.21 (if \((P,\mathfrak{m},\widehat{a})\) is linear) give \(\operatorname{ndeg}P=1\). Thus \(P\) is in newton position at \(a\) by Corollary 3.2.23. Moreover \(v(\widehat{a}-a)\notin\mathscr{E}^{\rm e}(L_{P_{+a}})\) since \((P,1,\widehat{a})\) is isolated, hence \(v(\widehat{a}-a)=v^{\rm e}(P,a)\) by Lemma 1.8.15. Likewise, if \(v(\widehat{b}-a)\notin\mathscr{E}^{\rm e}(L_{P_{+a}})\), then \(v(\widehat{b}-a)=v^{\rm e}(P,a)\) by Lemma 1.8.15, so \(v(\widehat{a}-a)=v(\widehat{b}-a)\).
Thus to finish the proof it is enough to show that \(\mathscr{E}^{\mathrm{e}}(L_{P_{+a}})\cap v(\widehat{b}-K)\leqslant 0\). Now \(|\mathscr{E}^{\mathrm{e}}(L_{P_{+a}})|\leqslant r\) by Corollary 1.5.5, so we have \(b\prec 1\) such that
\[\mathscr{E}^{\mathrm{e}}(L_{P_{+a}})\cap v(\widehat{b}-K)\ <\ v(\widehat{b}-b),\]
in particular, \(v(\widehat{b}-b)\notin\mathscr{E}^{\mathrm{e}}(L_{P_{+a}})\). If \((P,\mathfrak{m},\widehat{a})\) is normal, then Lemma 3.3.24 gives
\[\mathscr{E}^{\mathrm{e}}(L_{P_{+a}})\ =\ \mathscr{E}^{\mathrm{e}}(L_{P})\ =\ \mathscr{E}^{ \mathrm{e}}(L_{P_{+b}}),\]
so by the above with \(b\) instead of \(a\) we have \(v(\widehat{a}-b)=v(\widehat{b}-b)\). If \((P,\mathfrak{m},\widehat{a})\) is linear, then \(L_{P_{+a}}=L_{P}=L_{P_{+b}}\), and we obtain likewise \(v(\widehat{a}-b)=v(\widehat{b}-b)\). Hence
\[\mathscr{E}^{\mathrm{e}}(L_{P_{+a}})\cap v(\widehat{b}-K)\ \subseteq\ \mathscr{E}^{\mathrm{e}}(L_{P_{+a}})\cap\Gamma^{<v(\widehat{a}-b)}\ \subseteq\ \mathscr{E}^{\mathrm{e}}(L_{P})\cap v(\widehat{a}-K)\ \leqslant\ 0,\]
using Lemmas 3.4.4 and 3.4.5 for the last step.
Combining Proposition 3.4.9 with Corollary 3.2.15 yields:
**Corollary 3.4.10**.: _Let \(K\), \((P,\mathfrak{m},\widehat{a})\), \(\widehat{b}\) be as in Proposition 3.4.9, and assume also that \((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal. Then there is an isomorphism \(K\langle\widehat{a}\rangle\to K\langle\widehat{b}\rangle\) of valued differential fields over \(K\) sending \(\widehat{a}\) to \(\widehat{b}\)._
Using the Normalization Theorem, we now obtain:
**Corollary 3.4.11**.: _Suppose \(K\) is \(\omega\)-free and \(\Gamma\) is divisible. Then every minimal hole in \(K\) of order \(r\) has an isolated refinement \((P,\mathfrak{m},\widehat{a})\) such that for any \(\widehat{b}\) in an immediate asymptotic extension of \(K\) with \(P(\widehat{b})=0\) and \(\widehat{b}\prec\mathfrak{m}\) there is an isomorphism \(K\langle\widehat{a}\rangle\to K\langle\widehat{b}\rangle\) of valued differential fields over \(K\) sending \(\widehat{a}\) to \(\widehat{b}\)._
Proof.: Given a minimal linear hole in \(K\) of order \(r\), use Remark 3.4.7 to refine it to an isolated minimal linear hole \((P,\mathfrak{m},\widehat{a})\) in \(K\) of order \(r\), and use Corollary 3.4.10. Suppose we are given a minimal non-linear hole in \(K\) of order \(r\). Then \(K\) is \(r\)-linearly newtonian by Corollary 3.2.6. Then Theorem 3.3.33 yields a refinement \((Q,\mathfrak{w},\widehat{d})\) of it and an active \(\theta\) in \(K\) such that the minimal hole \((Q^{\theta},\mathfrak{w},\widehat{d})\) in \(K^{\theta}\) is normal. Proposition 3.4.6 gives an isolated refinement \((Q^{\theta}_{+d},\mathfrak{v},\widehat{d}-d)\) of \((Q^{\theta},\mathfrak{w},\widehat{d})\). Suitably refining \((Q^{\theta}_{+d},\mathfrak{v},\widehat{d}-d)\) further followed by compositionally conjugating with a suitable active element of \(K^{\theta}\) yields by Theorem 3.3.33 and Lemma 3.4.2 a refinement \((P,\mathfrak{m},\widehat{a})\) of \((Q,\mathfrak{w},\widehat{d})\) (and thus of the originally given hole) and an active \(\phi\) in \(K\) such that \((P^{\phi},\mathfrak{m},\widehat{a})\) is both normal and isolated. Then \((P,\mathfrak{m},\widehat{a})\) is isolated, and we can apply Corollary 3.4.10 to \(K^{\phi}\) and \((P^{\phi},\mathfrak{m},\widehat{a})\) in the role of \(K\) and \((P,\mathfrak{m},\widehat{a})\).
For \(r=1\) we can replace "\(\omega\)-free" in Proposition 3.4.9 and Corollary 3.4.10 by the weaker "\(\uplambda\)-free" (same proofs, using Lemma 1.8.20 instead of Lemma 1.8.15):
**Proposition 3.4.12**.: _Suppose \(K\) is \(\uplambda\)-free, \((P,\mathfrak{m},\widehat{a})\) is an isolated hole in \(K\) of order \(r=1\), and suppose \((P,\mathfrak{m},\widehat{a})\) is normal or linear. Let \(\widehat{b}\) in an immediate asymptotic extension of \(K\) satisfy \(P(\widehat{b})=0\) and \(\widehat{b}\prec\mathfrak{m}\). Then \(v(\widehat{a}-a)=v(\widehat{b}-a)\) for all \(a\)._ (_Hence if \((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal, then there is an isomorphism \(K\langle\widehat{a}\rangle\to K\langle\widehat{b}\rangle\) of valued differential fields over \(K\) sending \(\widehat{a}\) to \(\widehat{b}\)._)
This leads to an analogue of Corollary 3.4.11:
**Corollary 3.4.13**.: _Suppose \(K\) is \(\uplambda\)-free and \(\Gamma\) is divisible. Then every quasilinear minimal hole in \(K\) of order \(r=1\) has an isolated refinement \((P,\mathfrak{m},\widehat{a})\) such that for any \(\widehat{b}\) in an immediate asymptotic extension of \(K\) with \(P(\widehat{b})=0\) and \(\widehat{b}\prec\mathfrak{m}\) there is an isomorphism \(K\langle\widehat{a}\rangle\to K\langle\widehat{b}\rangle\) of valued differential fields over \(K\) sending \(\widehat{a}\) to \(\widehat{b}\)._
Proof.: Suppose we are given a quasilinear minimal hole in \(K\) of order \(r=1\). Then Corollary 3.3.38 yields a refinement \((Q,\mathfrak{w},\widehat{d})\) of it and an active \(\theta\) in \(K\) such that the quasilinear minimal hole \((Q^{\theta},\mathfrak{w},\widehat{d})\) in \(K^{\theta}\) of order \(1\) is normal. Proposition 3.4.6 gives an isolated refinement \((Q^{\theta}_{+d},\mathfrak{v},\widehat{d}-d)\) of \((Q^{\theta},\mathfrak{w},\widehat{d})\), and then Corollary 3.3.38
yields a refinement \((P,\mathfrak{m},\widehat{a})\) of \((Q,\mathfrak{w},\widehat{d})\) and an active \(\phi\) in \(K\) such that \((P^{\phi},\mathfrak{m},\widehat{a})\) is normal and isolated. Now apply Proposition 3.4.12 with \(K^{\phi}\) and \((P^{\phi},\mathfrak{m},\widehat{a})\) in the role of \(K\) and \((P,\mathfrak{m},\widehat{a})\).
Next a variant of Lemma 3.2.1 for \(r=1\) without assuming \(\omega\)-freeness:
**Corollary 3.4.14**.: _Suppose \(K\) is \(1\)-newtonian and \(\Gamma\) is divisible. Then \(K\) has no quasilinear \(Z\)-minimal slot of order \(1\)._
Proof.: By Proposition 1.8.28, \(K\) is \(\lambda\)-free. Towards a contradiction, let \((P,\mathfrak{m},\widehat{a})\) be a quasilinear \(Z\)-minimal slot in \(K\) of order \(1\). By Lemma 3.2.14 we arrange that \((P,\mathfrak{m},\widehat{a})\) is a hole in \(K\). Using Corollary 3.3.35, Lemma 3.4.2 and the remark before it, and Proposition 3.4.6, we can refine further so that \((P^{\phi},\mathfrak{m},\widehat{a})\) is normal and isolated for some active \(\phi\) in \(K\). Then there is no \(y\in K\) with \(P(y)=0\) and \(y\prec\mathfrak{m}\), by Proposition 3.4.12, contradicting Lemma 3.2.27 for \(L=K\).
Finally, for isolated linear holes, without additional hypotheses:
**Lemma 3.4.15**.: _Suppose \((P,\mathfrak{m},\widehat{a})\) is an isolated linear hole in \(K\), and \(\widehat{a}-a\prec\mathfrak{m}\). Then \(P(a)\neq 0\), and \(\gamma=v(\widehat{a}-a)\) is the unique element of \(\Gamma\setminus\mathscr{E}^{\rm e}(L_{P})\) such that \(v^{\rm e}_{L_{P}}(\gamma)=v\big{(}P(a)\big{)}\)._
Proof.: By Lemma 3.4.5, \(\gamma:=v(\widehat{a}-a)\in\Gamma\setminus\mathscr{E}^{\rm e}(L_{P})\). Since \(\deg P=1\),
\[L_{P}(\widehat{a}-a)\ =\ L_{P}(\widehat{a})-L_{P}(a)\ =\ -P(0)-L_{P}(a)\ =\ -P(a),\]
so \(P(a)\neq 0\). By Lemma 1.5.6, \(v^{\rm e}_{L_{P}}(\gamma)=v\big{(}L_{P}(\widehat{a}-a)\big{)}=v\big{(}P(a) \big{)}\).
In [15] we shall prove a version of Proposition 3.4.9 without the hypothesis that \(\widehat{b}\) lies in an immediate extension of \(K\). In Section 4.4 below we consider, in a more restricted setting, a variant of isolated slots, with ultimate exceptional values taking over the role played by exceptional values in Definition 3.4.1.
### 3.5. Holes of Order and Degree One
In this section \(K\) is a d-valued field of \(H\)-type with small derivation and rational asymptotic integration. (Later on we will impose additional restrictions on \(K\).) We also let \(\widehat{K}\) be an immediate asymptotic extension of \(K\). We focus here on slots of complexity \((1,1,1)\) in \(K\). As a byproduct we obtain in Corollary 3.5.18 a partial generalization of Corollary 3.3.49 to minimal holes in \(K\) of arbitrary degree. First we establish in the next subsection a useful formal identity. We let \(j\), \(k\) range over \(\mathbb{N}\) (in addition to \(m\), \(n\), as usual).
**An integration identity.** Let \(R\) be a differential ring, and let \(f\), \(g\), \(h\) range over \(R\). We use \(\int f=g+\int h\) as a suggestive way to express that \(f=g^{\prime}+h\), and likewise, \(\int f=g-\int h\) means that \(f=g^{\prime}-h\). For example,
\[\int f^{\prime}g\ =\ fg-\int fg^{\prime}\qquad\text{(integration by parts)}.\]
Let \({\rm e},\xi\in R^{\times}\) satisfy \({\rm e}^{\dagger}=\xi\). We wish to expand \(\int{\rm e}\) by iterated integration by parts. Now for \(g={\rm e}\) we have \(g^{\prime}\in R^{\times}\) with \(\frac{g}{g^{\prime}}=\frac{1}{\xi}\), so in view of \({\rm e}=g^{\prime}\frac{{\rm e}}{g^{\prime}}\):
\[\int{\rm e}\ =\ \int g^{\prime}\frac{{\rm e}}{g^{\prime}}\ =\ \frac{{\rm e}}{\xi}- \int g\bigg{(}\frac{{\rm e}}{g^{\prime}}\bigg{)}^{\prime},\]
\[\left(\frac{\mathrm{e}}{g^{\prime}}\right)^{\prime}=\left(\frac{1}{\xi}\right)^{ \prime}=\frac{-\xi^{\prime}}{\xi^{2}}=\frac{-\xi^{\dagger}}{\xi},\]
and thus
\[\int\mathrm{e}\ =\ \frac{\mathrm{e}}{\xi}+\int\frac{\xi^{\dagger}}{\xi}\, \mathrm{e}\,.\]
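For orientation, here is this identity in a classical special case (an illustration only, taking for \(R\) the differential ring of smooth germs at \(+\infty\)): with \(\mathrm{e}:=\mathrm{e}^{x^{2}/2}\) and \(\xi:=x\) we have \(\mathrm{e}^{\dagger}=\xi\) and \(\xi^{\dagger}=1/x\), and the identity reads

\[\int\mathrm{e}^{x^{2}/2}\ =\ \frac{\mathrm{e}^{x^{2}/2}}{x}+\int\frac{\mathrm{e}^{x^{2}/2}}{x^{2}},\]

as one checks by differentiating: \(\big(\mathrm{e}^{x^{2}/2}/x\big)^{\prime}=\mathrm{e}^{x^{2}/2}-\mathrm{e}^{x^{2}/2}/x^{2}\).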
More generally, using the above identities for \(g=\mathrm{e}\),
\[\int f\,\mathrm{e} =\ \int g^{\prime}f\frac{\mathrm{e}}{g^{\prime}}\ =\ \frac{f}{\xi}\,\mathrm{e}-\int g\bigg{(}f\frac{\mathrm{e}}{g^{\prime}}\bigg{)} ^{\prime}\] \[=\ \frac{f}{\xi}\,\mathrm{e}-\int g\left(f^{\prime}\frac{\mathrm{e }}{g^{\prime}}+f\bigg{(}\frac{\mathrm{e}}{g^{\prime}}\bigg{)}^{\prime}\, \right)\ =\ \frac{f}{\xi}\,\mathrm{e}-\int\left(\frac{f^{\prime}}{\xi}\,\mathrm{e}+fg \Big{(}\frac{-\xi^{\dagger}}{\xi}\Big{)}\right)\] \[=\ \frac{f}{\xi}\,\mathrm{e}-\int\left(\frac{f^{\prime}}{\xi}\, \mathrm{e}+\frac{-f\xi^{\dagger}}{\xi}\,\mathrm{e}\right)\ =\ \frac{f}{\xi}\,\mathrm{e}+\int\left(\frac{\xi^{\dagger}f-f^{\prime}}{\xi} \right)\mathrm{e}\,.\]
Replacing \(f\) by \(f/\xi^{k}\) gives the following variant of this identity:
\[\int\frac{f}{\xi^{k}}\,\mathrm{e}\ =\ \frac{f}{\xi^{k+1}}\,\mathrm{e}+\int \frac{(k+1)\xi^{\dagger}f-f^{\prime}}{\xi^{k+1}}\,\mathrm{e}\,.\]
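Indeed, applying the previous identity to \(f/\xi^{k}\) in place of \(f\) and using \(\big(f/\xi^{k}\big)^{\prime}=\big(f^{\prime}-k\xi^{\dagger}f\big)/\xi^{k}\), the new integrand becomes \(\frac{1}{\xi}\Big(\xi^{\dagger}\frac{f}{\xi^{k}}-\Big(\frac{f}{\xi^{k}}\Big)^{\prime}\Big)=\frac{(k+1)\xi^{\dagger}f-f^{\prime}}{\xi^{k+1}}\), as displayed.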
Induction on \(m\) using the last identity yields:
**Lemma 3.5.1**.: _Set \(\zeta:=\xi^{\dagger}\). Then_
\[\int f\,\mathrm{e}\ =\ \sum_{j=0}^{m}P_{j}(\zeta,f)\frac{\mathrm{e}}{\xi^{j+1}}+ \int P_{m+1}(\zeta,f)\frac{\mathrm{e}}{\xi^{m+1}},\]
_where the \(P_{j}\in\mathbb{Q}\{Z,V\}=\mathbb{Q}\{Z\}\{V\}\) are independent of \(R\), \(\mathrm{e}\), \(\xi\):_
\[P_{0}\ :=\ V,\qquad P_{j+1}\ :=\ (j+1)ZP_{j}-P_{j}^{\prime}.\]
_Thus \(P_{j}=P_{j0}V+P_{j1}V^{\prime}+\cdots+P_{jj}V^{(j)}\) with all \(P_{jk}\in\mathbb{Q}\{Z\}\) and \(P_{jj}=(-1)^{j}\)._
For example,
\[P_{0}=V,\quad P_{1}=ZV-V^{\prime},\quad P_{2}=(2Z^{2}-Z^{\prime})V-3ZV^{\prime }+V^{\prime\prime}.\]
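To see how the recursion produces these, compute \(P_{2}\) from \(P_{1}\):

\[P_{2}\ =\ 2ZP_{1}-P_{1}^{\prime}\ =\ 2Z(ZV-V^{\prime})-(Z^{\prime}V+ZV^{\prime}-V^{\prime\prime})\ =\ (2Z^{2}-Z^{\prime})V-3ZV^{\prime}+V^{\prime\prime}.\]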
**An asymptotic expansion.** _In this subsection \(\xi\in K\) and \(\xi\succ^{\flat}1\); equivalently, \(\xi\in K\) satisfies \(\xi\succ 1\) and \(\zeta:=\xi^{\dagger}\succcurlyeq 1\). We also assume that \(\xi\notin\mathrm{I}(K)+K^{\dagger}\)._ Since \(\widehat{K}\) is d-valued of \(H\)-type with asymptotic integration, it has by [ADH, 10.2.7] an immediate asymptotic extension \(\widehat{K}(\phi)\) with \(\phi^{\prime}=\xi\). Then the algebraic closure of \(\widehat{K}(\phi)\) is still d-valued of \(H\)-type, by [ADH, 9.5], and so [ADH, 10.4.1] yields a d-valued \(H\)-asymptotic extension \(L\) of this algebraic closure with an element \(\mathrm{e}\neq 0\) such that \(\mathrm{e}^{\dagger}=\xi\). All we need about \(L\) below is that it is a d-valued \(H\)-asymptotic extension of \(\widehat{K}\) with elements \(\phi\) and \(\mathrm{e}\) such that \(\phi^{\prime}=\xi\) and \(\mathrm{e}\neq 0\), \(\mathrm{e}^{\dagger}=\xi\). Note that then \(L\) has small derivation, and \(\xi\succ^{\flat}1\) in \(L\). (The element \(\phi\) will only play an auxiliary role later in this subsection.)
**Lemma 3.5.2**.: \(v(\mathrm{e})\notin\Gamma\)_._
Proof.: Suppose otherwise. Take \(a\in K^{\times}\) with \(a\,\mathrm{e}\asymp 1\). Then \(a^{\dagger}+\xi=(a\,\mathrm{e})^{\dagger}\in\mathrm{I}(L)\cap K=\mathrm{I}(K)\) and thus \(\xi\in\mathrm{I}(K)+K^{\dagger}\), a contradiction.
By Lemma 3.5.2 there is for each \(g\in L\) at most one \(\widehat{f}\in\widehat{K}\) with \(\big{(}\widehat{f}\frac{\mathrm{e}}{\xi}\big{)}^{\prime}=g\). Let \(f\in K^{\times}\) be given with \(f\preccurlyeq 1\), and suppose \(\widehat{f}\in\widehat{K}\) satisfies \(\big{(}\widehat{f}\frac{\mathrm{e}}{\xi}\big{)}^{\prime}=f\,\mathrm{e}\). Our aim is to show that with \(P_{j}\) as in Lemma 3.5.1, the series \(\sum_{j=0}^{\infty}P_{j}(\zeta,f)\frac{1}{\xi^{j}}\) is a kind of asymptotic expansion of \(\widehat{f}\). The partial sums
\[f_{m}:=\sum_{j=0}^{m}P_{j}(\zeta,f)\frac{1}{\xi^{j}}\]
of this series lie in \(K\), with \(f_{0}=f\) and \(f_{n}-f_{m}\prec\xi^{-m}\) for \(m<n\), by Lemma 1.1.16. More precisely, we show:
**Proposition 3.5.3**.: _We have \(\widehat{f}-f_{m}\prec\xi^{-m}\) for all \(m\)._ (_Thus: \(f\asymp 1\,\Rightarrow\,\widehat{f}\sim f\)._)
Towards the proof, note that by Lemma 3.5.1 with \(R=L\),
\[\widehat{f}\,\frac{\mathrm{e}}{\xi}\ =\ \sum_{j=0}^{m}P_{j}(\zeta,f)\frac{\mathrm{e}}{\xi^{j+1}}+I_{m},\quad I_{m}\in L,\quad\text{and thus}\quad\widehat{f}\ =\ f_{m}+\frac{\xi}{\mathrm{e}}I_{m}, \tag{3.5.1}\]
where \(I_{m}\in\widehat{K}\,\mathrm{e}\) satisfies \(I_{m}^{\prime}=P_{m+1}(\zeta,f)\frac{\mathrm{e}}{\xi^{m+1}}\), a condition that determines \(I_{m}\) uniquely up to an additive constant from \(C_{L}\). The proof of Proposition 3.5.3 now rests on the following lemmas:
**Lemma 3.5.4**.: _In \(L\) we have \((\mathrm{e}\,\xi^{l})^{(k)}\;\sim\;\mathrm{e}\,\xi^{l+k}\), for all \(l\in\mathbb{Z}\) and all \(k\)._
This is Corollary 1.1.17 with our \(L\) in the role of \(K\) there, and taking \(\mathrm{e}^{\phi}\) there as our \(\mathrm{e}\in L\); note that here our \(\phi\in L\) with \(\phi^{\prime}=\xi\) is needed.
**Lemma 3.5.5**.: _Suppose \(\mathrm{e}\succ\xi^{m+1}\). Then \(\frac{\xi}{\mathrm{e}}I_{m}\prec\xi^{-m}\)._
Proof.: This amounts to \(I_{m}\prec\frac{\mathrm{e}}{\xi^{m+1}}\). Suppose \(I_{m}\succcurlyeq\frac{\mathrm{e}}{\xi^{m+1}}\succ 1\). Then we have \(I_{m}^{\prime}\succcurlyeq\big{(}\frac{\mathrm{e}}{\xi^{m+1}}\big{)}^{\prime} \sim\frac{\mathrm{e}}{\xi^{m}}\) by Lemma 3.5.4, so \(P_{m+1}(\zeta,f)\frac{\mathrm{e}}{\xi^{m+1}}\succcurlyeq\frac{\mathrm{e}}{\xi^ {m}}\), and thus \(P_{m+1}(\zeta,f)\succcurlyeq\xi\), contradicting Lemma 1.1.16.
**Lemma 3.5.6**.: _Suppose \(\mathrm{e}\preccurlyeq\xi^{m}\). Then \(I_{m}\prec 1\) and \(\frac{\xi}{\mathrm{e}}I_{m}\prec\xi^{-m}\)._
Proof.: Lemma 1.1.16 gives
\[P_{m+1}(\zeta,f)\frac{\mathrm{e}}{\xi^{m+1}}\preccurlyeq\zeta^{N}\frac{ \mathrm{e}}{\xi^{m+1}}\preccurlyeq\frac{\zeta^{N}}{\xi}\quad\text{ for some }N\in\mathbb{N},\]
so \(v(I_{m}^{\prime})>\Psi_{L}\), and thus \(I_{m}\preccurlyeq 1\). If \(I_{m}\asymp 1\), then \(v(\frac{\xi}{\mathrm{e}}I_{m})=v(\xi)-v(\mathrm{e})\notin\Gamma\), contradicting \(\frac{\xi}{\mathrm{e}}I_{m}=\widehat{f}-f_{m}\in\widehat{K}\), by (3.5.1). Thus \(I_{m}\prec 1\). Now assume towards a contradiction that \(\frac{\xi}{\mathrm{e}}I_{m}\succcurlyeq\xi^{-m}\). Then \(\frac{\mathrm{e}}{\xi^{m+1}}\preccurlyeq I_{m}\prec 1\), so \(I_{m}^{\prime}\succcurlyeq\big{(}\frac{\mathrm{e}}{\xi^{m+1}}\big{)}^{\prime}\sim\frac{\mathrm{e}}{\xi^{m}}\) by Lemma 3.5.4, and this yields a contradiction as in the proof of Lemma 3.5.5.
Proof of Proposition 3.5.3.: Let \(m\) be given. If \(\mathrm{e}\succ\xi^{m+1}\), then \(\widehat{f}-f_{m}=\frac{\xi}{\mathrm{e}}I_{m}\prec\xi^{-m}\) by Lemma 3.5.5. Suppose \(\mathrm{e}\preccurlyeq\xi^{m+1}\). Then Lemma 3.5.6 (with \(m+1\) instead of \(m\)) gives \(\widehat{f}-f_{m+1}\prec\xi^{-(m+1)}\), hence \(\widehat{f}-f_{m}=(\widehat{f}-f_{m+1})+(f_{m+1}-f_{m})\prec\xi^{-m}\).
**Application to linear differential equations of order \(1\).** Proposition 3.5.3 yields information about the asymptotics of solutions (in \(\widehat{K}\)) of certain linear differential equations of order \(1\) over \(K\):
**Corollary 3.5.7**.: _Let \(f,\xi\in K\), \(f\preccurlyeq 1\), \(\xi\succ^{\flat}1\), \(\xi\notin\mathrm{I}(K)+K^{\dagger}\), and suppose \(y\in\widehat{K}\) satisfies \(y^{\prime}+\xi y=f\). Then there is for every \(m\) an element \(y_{m}\in K\) with \(y-y_{m}\prec\xi^{-m}\). Also, \(f\asymp 1\ \Rightarrow\ y\sim f\xi^{-1}\)._
Proof.: Take \(L\) and \(\mathrm{e}\in L\) as at the beginning of the previous subsection, and set \(\widehat{f}:=y\xi\in\widehat{K}\). Then for \(A:=\partial+\xi\) we have \(A(\widehat{f}/\xi)=f\), so
\[\left(\widehat{f}\,\frac{\mathrm{e}}{\xi}\right)^{\prime}\ =\ (\widehat{f}/\xi)^{ \prime}\,\mathrm{e}+(\widehat{f}/\xi)\xi\,\mathrm{e}\ =\ A(\widehat{f}/\xi)\,\mathrm{e}\ =\ f\,\mathrm{e},\]
hence \(\widehat{f}\) is as in the previous subsection. Now apply Proposition 3.5.3.
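To illustrate Corollary 3.5.7 in a transparent special case (a sketch only; the hypotheses on \(\xi\) below are assumptions made for this illustration): suppose \(\xi\in K\) satisfies \(\xi^{\prime}=\xi\succ 1\) and \(\xi\notin\mathrm{I}(K)+K^{\dagger}\) — think of \(\xi=\mathrm{e}^{x}\) in a suitable \(H\)-field — so \(\zeta=\xi^{\dagger}=1\). For \(f=1\) the recursion in Lemma 3.5.1 gives \(P_{j}(\zeta,f)=P_{j}(1,1)=j!\), so for \(y\in\widehat{K}\) with \(y^{\prime}+\xi y=1\) the proof above produces the approximants

\[y_{m}\ =\ \frac{f_{m}}{\xi}\ =\ \frac{1}{\xi}+\frac{1}{\xi^{2}}+\frac{2!}{\xi^{3}}+\cdots+\frac{m!}{\xi^{m+1}},\qquad y-y_{m}\ \prec\ \xi^{-(m+1)},\]

the familiar divergent expansion \(\sum_{j}j!\,\xi^{-(j+1)}\) truncated at stage \(m\).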
**Corollary 3.5.8**.: _Let \(g\in K\), \(u\in K^{\times}\) be such that \(g\notin\mathrm{I}(K)+K^{\dagger}\) and \(\xi:=g+u^{\dagger}\succ^{\flat}1\). Suppose \(z\in\widehat{K}\) satisfies \(z^{\prime}+gz=u\). Then \(z\sim u/\xi\), and for every \(m\) there is a \(z_{m}\in K\) such that \(z-z_{m}\prec u\xi^{-m}\)._
Proof.: Set \(A:=\partial+g\). Then \(A_{\ltimes u}=\partial+\xi\), so \(A(z)=u\) yields for \(y:=z/u\) that \(y^{\prime}+\xi y=1\). Now observe that \(\xi\notin\mathrm{I}(K)+K^{\dagger}\) and use the previous corollary.
**Slots of order and degree \(1\).** In the rest of this section we use the material above to analyze slots of order and degree \(1\) in \(K\). _Below \(K\) is henselian and \((P,\mathfrak{m},\widehat{f})\) is a slot in \(K\) with \(\mathrm{order}\,P=\deg P=1\) and \(\widehat{f}\in\widehat{K}\setminus K\). We let \(f\) range over \(K\), \(\mathfrak{n}\) over \(K^{\times}\), and \(\phi\) over active elements of \(K\)._ Thus
\[P\ =\ a(Y^{\prime}+gY-u)\quad\text{where }a\in K^{\times},\ g,u\in K,\] \[P_{\times\mathfrak{n}}\ =\ a\mathfrak{n}\big{(}Y^{\prime}+(g+\mathfrak{n}^{\dagger})Y-\mathfrak{n}^{-1}u\big{)}.\]
Since \(K\) is henselian, \((P,\mathfrak{m},\widehat{f})\) is \(Z\)-minimal and thus equivalent to a hole in \(K\), by Lemma 3.2.14. Also, \(\operatorname{nmul}P_{\times\mathfrak{m}}=\operatorname{ndeg}P_{\times\mathfrak{m}}=1\) by Lemma 3.2.21. We have \(L_{P}=a(\partial+g)\), so
\[g\in K^{\dagger}\iff\ker L_{P}\neq\{0\},\qquad\qquad g\in\mathrm{I}(K)+K^{ \dagger}\iff\mathscr{E}^{\mathrm{e}}(L_{P})\neq\emptyset,\]
using for the second equivalence the remark on \(\mathscr{E}^{\mathrm{e}}(A)\) preceding Lemma 1.5.9. If \((P,\mathfrak{m},\widehat{f})\) is isolated, then \(P(f)\neq 0\) for \(\widehat{f}-f\prec\mathfrak{m}\) by Lemmas 3.2.14 and 3.4.15, so, taking \(f=0\), we have \(u\neq 0\).
**Lemma 3.5.9**.: _Suppose \(\partial K=K\) and \(\mathrm{I}(K)\subseteq K^{\dagger}\). Then \(\mathscr{E}^{\mathrm{e}}(L_{P})=\emptyset\), so \((P,\mathfrak{m},\widehat{f})\) is isolated by Lemma 3.4.5._
Proof.: Passing to an equivalent hole in \(K\), arrange that \((P,\mathfrak{m},\widehat{f})\) is a hole in \(K\). Since \(\partial K=K\) and \(\widehat{f}\in\widehat{K}\setminus K\), the remark following Lemma 1.8.21 yields \(g\notin K^{\dagger}=\mathrm{I}(K)+K^{\dagger}\), therefore \(\mathscr{E}^{\mathrm{e}}(L_{P})=\emptyset\).
Set \(\mathfrak{v}:=\mathfrak{v}(L_{P_{\times\mathfrak{m}}})\); thus \(\mathfrak{v}=1\) if \(g+\mathfrak{m}^{\dagger}\preccurlyeq 1\) and \(\mathfrak{v}=1/(g+\mathfrak{m}^{\dagger})\) otherwise. Hence from Example 3.3.3 and the remarks before Lemma 3.3.17 we obtain:
\[(P,\mathfrak{m},\widehat{f})\text{ is normal}\quad\iff\quad(P,\mathfrak{m},\widehat{f})\text{ is steep}\quad\iff\quad\mathfrak{v}\prec^{\flat}1,\] \[(P,\mathfrak{m},\widehat{f})\text{ is deep}\quad\iff\quad\mathfrak{v}\prec^{\flat}1\text{ and }u\preccurlyeq\mathfrak{m}/\mathfrak{v}.\]
We have \(P(0)=-au\), and if \(\mathfrak{v}\prec 1\), then \((P_{\times\mathfrak{m}})_{1}\sim(a\mathfrak{m}/\mathfrak{v})Y\). Thus
\[(P,\mathfrak{m},\widehat{f})\text{ is strictly normal }\iff\mathfrak{v} \prec^{\flat}1\text{ and }u\prec_{\Delta(\mathfrak{v})}\mathfrak{m}\mathfrak{v}.\]
We say that \((P,\mathfrak{m},\widehat{f})\) is **balanced** if \((P,\mathfrak{m},\widehat{f})\) is steep and \(P(0)\preccurlyeq S_{P_{\times\mathfrak{m}}}(0)\), equivalently, \((P,\mathfrak{m},\widehat{f})\) is steep and \(u\preccurlyeq\mathfrak{m}\). Thus
\[(P,\mathfrak{m},\widehat{f})\text{ is strictly normal }\Longrightarrow(P, \mathfrak{m},\widehat{f})\text{ is balanced }\Longrightarrow(P,\mathfrak{m},\widehat{f})\text{ is deep,}\]
and with \(b\in K^{\times}\),
\[(P,\mathfrak{m},\widehat{f})\text{ is balanced }\Longleftrightarrow(P_{\times \mathfrak{n}},\mathfrak{m}/\mathfrak{n},\widehat{f}/\mathfrak{n})\text{ is balanced }\Longleftrightarrow(bP,\mathfrak{m},\widehat{f})\text{ is balanced.}\]
If \((P,\mathfrak{m},\widehat{f})\) is balanced, then so is any slot in \(K\) equivalent to \((P,\mathfrak{m},\widehat{f})\). Moreover, if \((P,\mathfrak{m},\widehat{f})\) is a hole in \(K\), then \(P(0)=-L_{P}(\widehat{f})\), so \((P,\mathfrak{m},\widehat{f})\) is balanced iff it is steep and \(L_{P}(\widehat{f})\preccurlyeq S_{P_{\times\mathfrak{m}}}(0)\). By Corollary 3.3.14, if \((P,\mathfrak{m},\widehat{f})\) is steep, then \(\widehat{f}-f\prec_{\Delta(\mathfrak{v})}\mathfrak{m}\) for some \(f\). For balanced \((P,\mathfrak{m},\widehat{f})\) we have a variant of this fact:
**Lemma 3.5.10**.: _Suppose \((P,\mathfrak{m},\widehat{f})\) is balanced and \(g\notin\operatorname{I}(K)+K^{\dagger}\). Then there is for all \(n\) an \(f\) such that \(\widehat{f}-f\prec\mathfrak{v}^{n}\mathfrak{m}\)._
Proof.: Replacing \((P,\mathfrak{m},\widehat{f})\) by an equivalent hole in \(K\), we arrange that \((P,\mathfrak{m},\widehat{f})\) is a hole in \(K\), and replacing \((P,\mathfrak{m},\widehat{f})\) by \((P_{\times\mathfrak{m}},1,\widehat{f}/\mathfrak{m})\), that \(\mathfrak{m}=1\). Then \(\widehat{f}^{\prime}+g\widehat{f}=u\) with \(g=1/\mathfrak{v}\succ^{\flat}1\), \(g\notin\operatorname{I}(K)+K^{\dagger}\), and \(u\preccurlyeq 1\). Hence the lemma follows from Corollary 3.5.7.
In the next corollary we assume that the subgroup \(K^{\dagger}\) of \(K\) is divisible. (Since \(K\) is henselian and d-valued, this holds if the groups \(C^{\times}\) and \(\Gamma\) are divisible.)
**Corollary 3.5.11**.: _Suppose \((P,\mathfrak{m},\widehat{f})\) is balanced and \(g\notin\operatorname{I}(K)+K^{\dagger}\). Then \((P,\mathfrak{m},\widehat{f})\) has a strictly normal refinement \((P_{+f},\mathfrak{m},\widehat{f}-f)\)._
Proof.: First arrange that \((P,\mathfrak{m},\widehat{f})\) is a hole in \(K\). The previous lemma yields an \(f\) such that \(\widehat{f}-f\preccurlyeq\mathfrak{v}^{3}\mathfrak{m}\). Then \((P_{+f},\mathfrak{m},\widehat{f}-f)\) is a strictly normal refinement of \((P,\mathfrak{m},\widehat{f})\), by Lemma 3.3.46 (where the latter uses divisibility of \(K^{\dagger}\)).
**Lemma 3.5.12**.: _Suppose \((P,\mathfrak{m},\widehat{f})\) is balanced with \(v\widehat{f}\notin\mathscr{E}^{\rm e}(L_{P})\) and \(\widehat{f}-f\preccurlyeq\widehat{f}\). Then the refinement \((P_{+f},\mathfrak{m},\widehat{f}-f)\) of \((P,\mathfrak{m},\widehat{f})\) is balanced._
Proof.: By Lemma 3.2.14 we arrange \((P,\mathfrak{m},\widehat{f})\) is a hole. Replacing \((P,\mathfrak{m},\widehat{f})\) and \(f\) by \((P_{\times\mathfrak{m}},1,\widehat{f}/\mathfrak{m})\) and \(f/\mathfrak{m}\) we arrange next that \(\mathfrak{m}=1\). By the remark preceding Lemma 3.3.2, \((P_{+f},1,\widehat{f}-f)\) is steep. Take \(\phi\) such that \(v\widehat{f}\notin\mathscr{E}\big{(}(L_{P})^{\phi}\big{)}\), and set \(\widehat{g}:=\widehat{f}-f\), so \(0\neq\widehat{g}\preccurlyeq\widehat{f}\). Recall from [ADH, 5.7.5] that \(L_{P^{\phi}}=(L_{P})^{\phi}\) and hence \(L_{P^{\phi}}(\widehat{f})=L_{P}(\widehat{f})\) and \(L_{P}(\widehat{g})=L_{P^{\phi}}(\widehat{g})\). Thus
\[L_{P_{+f}}(\widehat{g})=L_{P}(\widehat{g})\preccurlyeq L_{P^{\phi}}\widehat{g} \preccurlyeq L_{P^{\phi}}\widehat{f}\asymp L_{P^{\phi}}(\widehat{f})=L_{P}( \widehat{f})\preccurlyeq S_{P}(0)=S_{P_{+f}}(0),\]
using [ADH, 4.5.1(iii)] to get the second \(\preccurlyeq\) and \(v\widehat{f}\notin\mathscr{E}(L_{P^{\phi}})\) to get \(\asymp\); the last \(\preccurlyeq\) uses \((P,1,\widehat{f})\) being a hole. Therefore \((P_{+f},1,\widehat{g})\) is balanced.
Combining Lemmas 3.4.2 and 3.5.12 yields:
**Corollary 3.5.13**.: _If \((P,\mathfrak{m},\widehat{f})\) is balanced and isolated, and \(\widehat{f}-f\preccurlyeq\widehat{f}\), then the refinement \((P_{+f},\mathfrak{m},\widehat{f}-f)\) of \((P,\mathfrak{m},\widehat{f})\) is also balanced and isolated._
We call \((P,\mathfrak{m},\widehat{f})\) **proper** if the differential polynomial \(P\) is proper as defined in Section 1.8 (that is, \(u\neq 0\) and \(g+u^{\dagger}\succ^{\flat}1\)). If \((P,\mathfrak{m},\widehat{f})\) is proper, then so are \((bP,\mathfrak{m},\widehat{f})\) for \(b\neq 0\) and \((P_{\times\mathfrak{n}},\mathfrak{m}/\mathfrak{n},\widehat{f}/\mathfrak{n})\), as well as each refinement \((P,\mathfrak{n},\widehat{f})\) of \((P,\mathfrak{m},\widehat{f})\) and each slot in \(K\) equivalent to \((P,\mathfrak{m},\widehat{f})\). By Lemma 1.8.23, if \((P,\mathfrak{m},\widehat{f})\) is proper, then so is \((P^{\phi},\mathfrak{m},\widehat{f})\) for \(\phi\preccurlyeq 1\).
**Lemma 3.5.14**.: _Suppose \((P,\mathfrak{m},\widehat{f})\) is proper and \(\mathfrak{m}\asymp u\); then \((P,\mathfrak{m},\widehat{f})\) is balanced._
Proof.: Replacing \((P,\mathfrak{m},\widehat{f})\) by \((P_{\times\mathfrak{m}},1,\widehat{f}/\mathfrak{m})\), we arrange \(\mathfrak{m}=1\). Then \(u\asymp 1\) and thus \((P,1,\widehat{f})\) is balanced.
**Proposition 3.5.15**.: _Suppose \((P,\mathfrak{m},\widehat{f})\) is proper and \(v\widehat{f}\notin\mathscr{E}^{\mathrm{e}}(L_{P})\). Then \((P,\mathfrak{m},\widehat{f})\) has a balanced refinement._
Proof.: We arrange \(\mathfrak{m}=1\) as usual. By Lemmas 1.8.26 and 3.2.14 we have
\[\widehat{f}\ \sim\ u/(g+u^{\dagger})\prec^{\flat}u.\]
Hence if \(u\preccurlyeq 1\), then \((P,u,\widehat{f})\) refines \((P,1,\widehat{f})\), and so \((P,u,\widehat{f})\) is balanced by Lemma 3.5.14. Assume now that \(u\succ 1\). Then \(1\prec u\prec g\) by Lemma 1.8.25 and \(\operatorname{nmul}P=1\), and hence \(u^{\dagger}\preccurlyeq g^{\dagger}\preccurlyeq g\). So \(g\sim g+u^{\dagger}\succ^{\flat}1\), hence \((P,1,\widehat{f})\) is steep, and \(\widehat{f}\sim u/g\). Set \(f:=u/g\prec 1\); then \((P_{+f},1,\widehat{f}-f)\) is a steep refinement of \((P,1,\widehat{f})\). Moreover
\[P_{+f}(0)=P(f)=af^{\prime}\prec a=S_{P_{+f}}(0),\]
hence \((P_{+f},1,\widehat{f}-f)\) is balanced.
**Corollary 3.5.16**.: _Suppose \(K\) is \(\lambda\)-free. Then there exists \(\phi\preccurlyeq 1\) and a refinement \((P_{+f},\mathfrak{n},\widehat{f}-f)\) of \((P,\mathfrak{m},\widehat{f})\) such that \((P_{+f}^{\phi},\mathfrak{n},\widehat{f}-f)\) is balanced._
Proof.: Using Remark 3.4.7 we can replace \((P,\mathfrak{m},\widehat{f})\) by a refinement to arrange that \((P,\mathfrak{m},\widehat{f})\) is isolated. Then \(u\neq 0\) by the remark before Lemma 3.5.9, so by Lemma 1.8.24, \(P^{\phi}\) is proper, eventually. Now apply Proposition 3.5.15 to a proper (and isolated) \((P^{\phi},\mathfrak{m},\widehat{f})\) with \(\phi\preccurlyeq 1\).
**Corollary 3.5.17**.: _Suppose \(K\) is \(\lambda\)-free, \(\partial K=K\), \(\mathrm{I}(K)\subseteq K^{\dagger}\), and \(K^{\dagger}\) is divisible. Then \((P,\mathfrak{m},\widehat{f})\) has a refinement \((P_{+f},\mathfrak{n},\widehat{f}-f)\) such that \((P_{+f}^{\phi},\mathfrak{n},\widehat{f}-f)\) is strictly normal for some \(\phi\preccurlyeq 1\)._
Proof.: Corollary 3.5.16 yields a refinement \((P_{+f_{1}},\mathfrak{n}_{1},\widehat{f}-f_{1})\) of \((P,\mathfrak{m},\widehat{f})\) and a \(\phi\preccurlyeq 1\) such that \((P_{+f_{1}}^{\phi},\mathfrak{n}_{1},\widehat{f}-f_{1})\) is balanced. By Lemma 3.5.9 with \(K^{\phi}\) in the role of \(K\) and \((P_{+f_{1}}^{\phi},\mathfrak{n}_{1},\widehat{f}-f_{1})\) in the role of \((P,\mathfrak{m},\widehat{f})\) we can apply Corollary 3.5.11 to \((P_{+f_{1}}^{\phi},\mathfrak{n}_{1},\widehat{f}-f_{1})\) to give a strictly normal refinement \((P_{+f_{1}+f_{2}}^{\phi},\mathfrak{n},\widehat{f}-f_{1}-f_{2})\) of it. Thus for \(f:=f_{1}+f_{2}\) the refinement \((P_{+f},\mathfrak{n},\widehat{f}-f)\) of \((P,\mathfrak{m},\widehat{f})\) has the property that \((P_{+f}^{\phi},\mathfrak{n},\widehat{f}-f)\) is strictly normal.
Combining this corollary with Corollaries 3.2.8, 3.3.49, and Lemma 3.3.40 yields:
**Corollary 3.5.18**.: _If \(K\) is \(\omega\)-free and algebraically closed with \(\partial K=K\) and \(\mathrm{I}(K)\subseteq K^{\dagger}\), then every minimal hole in \(K\) of order \(\geqslant 1\) has a refinement \((Q,\mathfrak{n},\widehat{g})\) such that \((Q^{\phi},\mathfrak{n},\widehat{g})\) is deep and strictly normal, eventually._
_Remark_.: Suppose \(K\) is \(\lambda\)-free, with \(\partial K=K\), \(\operatorname{I}(K)\subseteq K^{\dagger}\), and \(K^{\dagger}\) is divisible. By Corollary 3.3.37 every linear slot in \(K\) of order \(r\geqslant 1\) has a refinement \((Q,\mathfrak{n},\widehat{g})\) such that \((Q^{\phi},\mathfrak{n},\widehat{g})\) is deep and normal, eventually. We don't know whether every linear minimal hole in \(K\) of order \(r\geqslant 1\) has a refinement \((Q,\mathfrak{n},\widehat{g})\) such that \((Q^{\phi},\mathfrak{n},\widehat{g})\) is deep and strictly normal, eventually. (For \(r=1\) this holds by Corollary 3.5.17.)
**Part 4. Slots in \(H\)-Fields**
Here we specialize to the case that \(K\) is the algebraic closure of a Liouville closed \(H\)-field \(H\) with small derivation. After the preliminary Sections 4.1 and 4.2 we come in Sections 4.3-4.5 to the technical heart of Part 4. Section 4.3 shows that every minimal hole in \(K\) gives rise to a _split-normal_ slot \((Q,\mathfrak{n},\widehat{b})\) in \(H\): a normal slot in \(H\) whose linear part \(L_{Q_{\times\mathfrak{n}}}\in H[\partial]\) "asymptotically" splits over \(K\); see Definition 4.3.3 for the precise definition, and Theorem 4.3.9 for the detailed statement of the main result of this section. In the intended setting where \(H\) is a Hardy field, this asymptotic splitting will allow us to define in Part 6 a contractive operator on a space of real-valued functions; this operator then has a fixed point whose germ \(y\) satisfies \(Q(y)=0\), \(y\prec\mathfrak{n}\). A main difficulty in that part will lie in guaranteeing that such germs \(y\) have similar asymptotic properties as \(\widehat{b}\). Sections 4.4 and 4.5 prepare the ground for dealing with this: In Section 4.4 we strengthen the concept of isolated slot to _ultimate_ slot (in \(H\), or in \(K\)). This relies on the ultimate exceptional values of linear differential operators over \(K\) introduced in Part 2. In Section 4.5 we single out among split-normal slots in \(H\) those that are _repulsive-normal_, culminating in the proof of Theorem 4.5.28: an analogue of Theorem 4.3.9 producing repulsive-normal ultimate slots in \(H\) from minimal holes in \(K\).
### 4.1. Some Valuation-Theoretic Lemmas
The present section contains preliminaries for the next section on approximating splittings of linear differential operators; these facts in turn will be used in Section 4.3 on split-normality. We shall often deal with real closed fields with extra structure, denoted usually by \(H\), since the results in this section about such \(H\) will later be applied to \(H\)-fields and Hardy fields. We begin by summarizing some purely valuation-theoretic facts.
**Completion and specialization of real closed valued fields.** Let \(H\) be a real closed valued field whose valuation ring \(\mathcal{O}\) is convex in \(H\) (with respect to the unique ordering on \(H\) making \(H\) an ordered field). Using [ADH, 3.5.15] we equip the algebraic closure \(K=H[\mathrm{i}]\) (\(\mathrm{i}^{2}=-1\)) of \(H\) with its unique valuation ring lying over \(\mathcal{O}\), which is \(\mathcal{O}+\mathcal{O}\mathrm{i}\). We set \(\Gamma:=v(H^{\times})\), so \(\Gamma_{K}=\Gamma\).
**Lemma 4.1.1**.: _The completion \(H^{\mathrm{c}}\) of the valued field \(H\) is real closed, its valuation ring is convex in \(H^{\mathrm{c}}\), and there is a unique valued field embedding \(H^{\mathrm{c}}\to K^{\mathrm{c}}\) over \(H\). Identifying \(H^{\mathrm{c}}\) with its image under this embedding we have \(H^{\mathrm{c}}[\mathrm{i}]=K^{\mathrm{c}}\)._
Proof.: For the first two claims, see [ADH, 3.5.20]. By [ADH, 3.2.20] we have a unique valued field embedding \(H^{\mathrm{c}}\to K^{\mathrm{c}}\) over \(H\), and viewing \(H^{\mathrm{c}}\) as a valued subfield of \(K^{\mathrm{c}}\) via this embedding we have \(K^{\mathrm{c}}=H^{\mathrm{c}}K=H^{\mathrm{c}}[\mathrm{i}]\) by [ADH, 3.2.29].
We identify \(H^{\mathrm{c}}\) with its image in \(K^{\mathrm{c}}\) as in the previous lemma. Fix a convex subgroup \(\Delta\) of \(\Gamma\). Let \(\hat{\mathcal{O}}\) be the valuation ring of the coarsening of \(H\) by \(\Delta\), with maximal ideal \(\dot{\smallo}\). Then by [ADH, 3.5.11 and subsequent remarks] \(\hat{\mathcal{O}}\) and \(\dot{\smallo}\) are convex in \(H\), the specialization \(\dot{H}=\hat{\mathcal{O}}/\dot{\smallo}\) of \(H\) by \(\Delta\) is naturally an ordered and valued field, and the valuation ring of \(\dot{H}\) is convex in \(\dot{H}\). Moreover, \(\dot{H}\) is even real closed by [ADH, 3.5.16]. Likewise, the coarsening of \(K\) by \(\Delta\) has valuation ring \(\hat{\mathcal{O}}_{K}\) with maximal ideal \(\dot{\smallo}_{K}\) and valued residue field \(\dot{K}\). Thus \(\hat{\mathcal{O}}_{K}\) lies over \(\hat{\mathcal{O}}\)
by [ADH, 3.4, subsection _Coarsening and valued field extensions_], so \((K,\hat{\mathcal{O}}_{K})\) is a valued field extension of \((H,\hat{\mathcal{O}})\). In addition:
**Lemma 4.1.2**.: \(\dot{K}\) _is a valued field extension of \(\dot{H}\) and an algebraic closure of \(\dot{H}\)._
Proof.: The second part follows by general valuation theory from \(K\) being an algebraic closure of \(H\). In fact, with the image of \(i\in\mathcal{O}_{K}\subseteq\hat{\mathcal{O}}_{K}\) in \(\dot{K}\) denoted by the same symbol, we have \(\dot{K}=\dot{H}[i]\).
Next, let \(\widehat{H}\) be an immediate valued field extension of \(H\). We equip \(\widehat{H}\) with the unique field ordering making it an ordered field extension of \(H\) in which \(\mathcal{O}_{\widehat{H}}\) is convex; see [ADH, 3.5.12]. Choose \(i\) in a field extension of \(\widehat{H}\) with \(i^{2}=-1\). Equip \(\widehat{H}[i]\) with the unique valuation ring of \(\widehat{H}[i]\) that lies over \(\mathcal{O}_{\widehat{H}}\), namely \(\mathcal{O}_{\widehat{H}}+\mathcal{O}_{\widehat{H}}i\)[ADH, 3.5.15]. Let \(\widehat{a}=\widehat{b}+\widehat{c}\,i\in\widehat{H}[i]\setminus H[i]\) with \(\widehat{b},\widehat{c}\in\widehat{H}\), and let \(b\), \(c\) range over \(H\). Then
\[v\big{(}\widehat{a}-(b+c{\rm i})\big{)}\ =\ \min\bigl{\{}v(\widehat{b}-b),v( \widehat{c}-c)\bigr{\}}\]
and thus \(v\big{(}\widehat{a}-H[i]\big{)}\subseteq v(\widehat{b}-H)\) and \(v\big{(}\widehat{a}-H[i]\big{)}\subseteq v(\widehat{c}-H)\).
**Lemma 4.1.3**.: _We have \(v(\widehat{b}-H)\subseteq v(\widehat{c}-H)\) or \(v(\widehat{c}-H)\subseteq v(\widehat{b}-H)\). Moreover, the following are equivalent:_
* \(v(\widehat{b}-H)\subseteq v(\widehat{c}-H)\)_;_
* _for all_ \(b\) _there is a_ \(c\) _with_ \(v\big{(}\widehat{a}-(b+c{\rm i})\big{)}=v(\widehat{b}-b)\)_;_
* \(v\big{(}\widehat{a}-H[i]\big{)}=v(\widehat{b}-H)\)_._
Proof.: For the first assertion, use that \(v(\widehat{b}-H),v(\widehat{c}-H)\subseteq\Gamma_{\infty}\) are downward closed. Suppose \(v(\widehat{b}-H)\subseteq v(\widehat{c}-H)\), and let \(b\) be given. If \(\widehat{c}\in H\), then for \(c:=\widehat{c}\) we have \(v\big{(}\widehat{a}-(b+c{\rm i})\big{)}=v(\widehat{b}-b)\). Suppose \(\widehat{c}\notin H\). Then \(v(\widehat{c}-H)\subseteq\Gamma\) does not have a largest element and \(v(\widehat{b}-b)\in v(\widehat{c}-H)\), so we have \(c\) with \(v(\widehat{b}-b)<v(\widehat{c}-c)\); thus
\[v\big{(}\widehat{a}-(b+c{\rm i})\big{)}=\min\bigl{\{}v(\widehat{b}-b),v( \widehat{c}-c)\bigr{\}}=v(\widehat{b}-b).\]
This shows (i) \(\Rightarrow\) (ii). Moreover, (ii) \(\Rightarrow\) (iii) follows from \(v\big{(}\widehat{a}-H[i]\big{)}\subseteq v(\widehat{b}-H)\), and (iii) \(\Rightarrow\) (i) from \(v\big{(}\widehat{a}-H[i]\big{)}\subseteq v(\widehat{c}-H)\).
So if \(v(\widehat{b}-H)\subseteq v(\widehat{c}-H)\), then: \(\widehat{a}\) is special over \(H[i]\iff\widehat{b}\) is special over \(H\).
To apply Lemma 4.1.3 to \(H\)-fields we assume in the next lemma more generally that \(H\) is equipped with a derivation making it a d-valued field and that \(\widehat{H}\) is equipped with a derivation \(\mathfrak{d}\) making it an asymptotic field extension of \(H\); then \(\widehat{H}\) is also d-valued with the same constant field as \(H\)[ADH, 9.1.2].
**Lemma 4.1.4**.: _Suppose \(H\) is closed under integration. Then we have:_
\[v(\widehat{b}-H)\subseteq v(\widehat{c}-H)\ \Longrightarrow\ v(\mathfrak{d}\widehat{b}-H)\subseteq v(\mathfrak{d}\widehat{c}-H).\]
Proof.: Assume \(v(\widehat{b}-H)\subseteq v(\widehat{c}-H)\). Let \(b\in H\), and take \(g\in H\) with \(g^{\prime}=b\); adding a suitable constant to \(g\) we arrange \(\widehat{b}-g\not\asymp 1\). Next, take \(h\in H\) with \(\widehat{b}-g\asymp\widehat{c}-h\). Then
\[\mathfrak{d}\widehat{b}-b\ =\ \mathfrak{d}(\widehat{b}-g)\ \asymp\ \mathfrak{d}(\widehat{c}-h)\ =\ \mathfrak{d}\widehat{c}-h^{\prime},\]
so \(v(\mathfrak{d}\widehat{b}-b)\in v(\mathfrak{d}\widehat{c}-H)\).
**Embedding into the completion.** In this subsection \(K\) is an asymptotic field, \(\Gamma:=v(K^{\times})\neq\{0\}\), and \(L\) is an asymptotic field extension of \(K\) such that \(\Gamma\) is cofinal in \(\Gamma_{L}\).
**Lemma 4.1.5**.: _Let \(a\in L\) and let \((a_{\rho})\) be a c-sequence in \(K\) with \(a_{\rho}\to a\) in \(L\). Then for each \(n\), \((a_{\rho}^{(n)})\) is a c-sequence in \(K\) with \(a_{\rho}^{(n)}\to a^{(n)}\) in \(L\)._
Proof.: By induction on \(n\) it suffices to treat the case \(n=1\). Let \(\gamma\in\Gamma_{L}\); we need to show the existence of an index \(\sigma\) such that \(v(a^{\prime}-a_{\rho}^{\prime})>\gamma\) for all \(\rho>\sigma\). By [ADH, 9.2.6] we have \(f\in L^{\times}\) with \(f\prec 1\) and \(v(f^{\prime})\geqslant\gamma\). Take \(\sigma\) such that \(v(a-a_{\rho})>vf\) for all \(\rho>\sigma\). Then \(v(a^{\prime}-a_{\rho}^{\prime})>v(f^{\prime})\geqslant\gamma\) for \(\rho>\sigma\).
Let \(K^{\rm c}\) be the completion of the valued differential field \(K\); then \(K^{\rm c}\) is asymptotic by [ADH, 9.1.6]. Lemma 4.1.5 and [ADH, 3.2.13 and 3.2.15] give:
**Corollary 4.1.6**.: _Let \((a_{i})_{i\in I}\) be a family of elements of \(L\) such that \(a_{i}\) is the limit in \(L\) of a c-sequence in \(K\), for each \(i\in I\). Then there is a unique embedding \(K\big{\langle}(a_{i})_{i\in I}\big{\rangle}\to K^{\rm c}\) of valued differential fields over \(K\)._
Next suppose that \(H\) is a real closed asymptotic field whose valuation ring \(\mathcal{O}\) is convex in \(H\) with \(\mathcal{O}\neq H\), the asymptotic extension \(\widehat{H}\) of \(H\) is immediate, and \(i\) is an element of an asymptotic extension of \(\widehat{H}\) with \(i^{2}=-1\). Then \(i\notin\widehat{H}\), and we identify \(H^{\rm c}\) with a valued subfield of \(H[i]^{\rm c}\) as in Lemma 4.1.1, so that \(H^{\rm c}[i]=H[i]^{\rm c}\) as in that lemma. Using also Lemma 4.1.5 we see that \(H^{\rm c}\) is actually a valued _differential_ subfield of the asymptotic field \(H[i]^{\rm c}\), and so \(H^{\rm c}[i]=H[i]^{\rm c}\) also as _asymptotic_ fields. Thus by Corollary 4.1.6 applied to \(K:=H\) and \(L:=\widehat{H}\):
**Corollary 4.1.7**.: _Let \(a\in\widehat{H}[i]\) be the limit in \(\widehat{H}[i]\) of a c-sequence in \(H[i]\). Then \(\operatorname{Re}a\), \(\operatorname{Im}a\) are limits in \(\widehat{H}\) of c-sequences in \(H\), hence there is a unique embedding \(H[i]\big{\langle}\operatorname{Re}a,\operatorname{Im}a\big{\rangle}\to H^{ \rm c}[i]\) of valued differential fields over \(H[i]\)._
### 4.2. Approximating Linear Differential Operators
_In this section \(K\) is a valued differential field with small derivation, \(\Gamma:=v(K^{\times})\). For later use we prove here Corollaries 4.2.6 and 4.2.9 and consider strong splitting. Much of this section rests on the following basic estimate for linear differential operators which split over \(K\):_
**Lemma 4.2.1**.: _Let \(b_{1},\ldots,b_{r}\in K\) and \(n\) be given. Then there exists \(\gamma_{0}\in\Gamma^{\geqslant}\) such that for all \(b_{1}^{\star},\ldots,b_{r}^{\star}\in K\) and \(\gamma\in\Gamma\) with \(\gamma>\gamma_{0}\) and \(v(b_{i}-b_{i}^{\star})\geqslant(n+r)\gamma\) for \(i=1,\ldots,r\), we have \(v(B-B^{\star})\geqslant vB+n\gamma\), where_
\[B\ :=\ (\partial-b_{1})\cdots(\partial-b_{r})\in K[\partial],\quad B^{\star}\ :=\ ( \partial-b_{1}^{\star})\cdots(\partial-b_{r}^{\star})\in K[\partial].\]
Proof.: By induction on \(r\in\mathbb{N}\). The case \(r=0\) is clear (any \(\gamma_{0}\in\Gamma^{\geqslant}\) works). Suppose the lemma holds for a certain \(r\). Let \(b_{1},\ldots,b_{r+1}\in K\) and \(n\) be given. Set \(\beta_{i}:=vb_{i}\ (i=1,\ldots,r+1)\). Take \(\gamma_{0}\) as in the lemma applied to \(b_{1},\ldots,b_{r}\) and \(n+1\) in place of \(n\), and let \(\gamma_{1}:=\gamma_{0}\) if \(b_{r+1}=0\), \(\gamma_{1}:=\max\big{\{}\gamma_{0},|\beta_{r+1}|\big{\}}\) otherwise. Let \(b_{1}^{\star},\ldots,b_{r+1}^{\star}\in K\) and \(\gamma\in\Gamma\) with \(\gamma>\gamma_{1}\) and \(v(b_{i}-b_{i}^{\star})\geqslant(n+r+1)\gamma\) for \(i=1,\ldots,r+1\). Set
\[B\ :=\ (\partial-b_{1})\cdots(\partial-b_{r}),\qquad B^{\star}\ :=\ (\partial-b_{1}^{\star})\cdots( \partial-b_{r}^{\star}),\qquad E\ :=\ B-B^{\star}.\]
Then
\[B(\partial-b_{r+1})\ =\ B^{\star}(\partial-b_{r+1}^{\star})+B^{\star}(b_{r+1}^{ \star}-b_{r+1})+E(\partial-b_{r+1}).\]
Inductively we have \(vE\geqslant vB+(n+1)\gamma\). Suppose \(E\neq 0\) and \(0\neq b_{r+1}\not\asymp 1\). Then by [ADH, 6.1.5],
\[v_{E}(\beta_{r+1})-v_{B}(\beta_{r+1}) =\ vE-vB+o(\beta_{r+1})\] \[\geqslant\ (n+1)\gamma+o(\beta_{r+1})\] \[\geqslant\ n\gamma+|\beta_{r+1}|+o(\beta_{r+1})\ >\ n\gamma.\]
Hence, using \(E(\partial-b_{r+1})=E\partial-Eb_{r+1}\) and \(v(E\partial)=v(E)\neq v_{E}(\beta_{r+1})\),
\[v\big{(}E(\partial-b_{r+1})\big{)}\ =\ \min\bigl{\{}vE,v_{E}(\beta_{r+1}) \bigr{\}}\ >\ \min\bigl{\{}vB,v_{B}(\beta_{r+1})\bigr{\}}+n\gamma\] \[=\ v\big{(}B(\partial-b_{r+1})\big{)}+n\gamma,\]
where for the last equality we use \(vB\neq v_{B}(\beta_{r+1})\). Also,
\(v\big{(}B^{\star}(b_{r+1}^{\star}-b_{r+1})\big{)}=v_{B^{\star}}\big{(}v(b_{r+ 1}^{\star}-b_{r+1})\big{)}\geqslant v_{B^{\star}}\big{(}(n+r+1)\gamma\big{)}=v _{B}\big{(}(n+r+1)\gamma\big{)}\)
where we use [ADH, 6.1.7] for the last equality. Moreover, by [ADH, 6.1.4],
\(v_{B}\big{(}(n+r+1)\gamma\big{)}-n\gamma\ \geqslant\ vB+(r+1)\gamma+o(\gamma)\ >\ vB\ \geqslant\ v\big{(}B(\partial-b_{r+1})\big{)}\).
This yields the desired result for \(E\neq 0\), \(0\neq b_{r+1}\not\asymp 1\). The cases \(E\neq 0\), \(b_{r+1}=0\) and \(E=0\), \(0\neq b_{r+1}\not\asymp 1\) are simpler versions of the above, and so is the case \(E\neq 0\), \(b_{r+1}\asymp 1\) using [ADH, 5.6.1(i)]. The remaining cases, \(E=0\), \(b_{r+1}=0\) and \(E=0\), \(b_{r+1}\asymp 1\), are even simpler to handle.
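For orientation, the case \(r=1\) is already instructive (and any \(\gamma_{0}\in\Gamma^{\geqslant}\) works there): with \(B=\partial-b_{1}\) and \(B^{\star}=\partial-b_{1}^{\star}\) we have \(B-B^{\star}=b_{1}^{\star}-b_{1}\), so if \(v(b_{1}-b_{1}^{\star})\geqslant(n+1)\gamma\) and \(\gamma>\gamma_{0}\geqslant 0\), then

\[v(B-B^{\star})\ \geqslant\ (n+1)\gamma\ >\ n\gamma\ \geqslant\ vB+n\gamma,\]

using \(vB=\min\{0,vb_{1}\}\leqslant 0\).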
**Corollary 4.2.2**.: _Let \(a,b_{1},\ldots,b_{r}\in K\), \(a\neq 0\). Then there exists \(\gamma_{0}\in\Gamma^{\geqslant}\) such that for all \(a^{\star},b_{1}^{\star},\ldots,b_{r}^{\star}\in K\) and \(\gamma\in\Gamma\) with \(\gamma>\gamma_{0}\), \(v(a-a^{\star})\geqslant va+\gamma\), and \(v(b_{i}-b_{i}^{\star})\geqslant(r+1)\gamma\) for \(i=1,\ldots,r\), we have \(v(A-A^{\star})\geqslant vA+\gamma\), where_
\(A\ :=\ a(\partial-b_{1})\cdots(\partial-b_{r})\in K[\partial],\quad A^{\star}\ :=\ a^{\star}(\partial-b_{1}^{\star})\cdots(\partial-b_{r}^{\star})\in K[ \partial].\)__
Proof.: Take \(\gamma_{0}\) as in the previous lemma applied to \(b_{1},\ldots,b_{r}\) and \(n=1\), and let \(B=(\partial-b_{1})\cdots(\partial-b_{r})\), \(A=aB\). Let \(a^{\star},b_{1}^{\star},\ldots,b_{r}^{\star}\in K\) and \(\gamma\in\Gamma\) be such that \(\gamma>\gamma_{0}\), \(v(a-a^{\star})\geqslant va+\gamma\), and \(v(b_{i}-b_{i}^{\star})\geqslant(r+1)\gamma\) for \(i=1,\ldots,r\). Set \(B^{\star}:=(\partial-b_{1}^{\star})\cdots(\partial-b_{r}^{\star})\), \(A^{\star}:=a^{\star}B^{\star}\). Then
\(E\ :=\ A-A^{\star}\ =\ a(B-B^{\star})+(a-a^{\star})B^{\star}\).
Lemma 4.2.1 gives \(vB^{\star}=vB\), and so
\(v\big{(}a(B-B^{\star})\big{)}\ \geqslant\ va+vB+\gamma\ =\ vA+\gamma,\quad v \big{(}(a-a^{\star})B^{\star}\big{)}\ =\ v(a-a^{\star})+vB\ \geqslant\ vA+\gamma\),
so \(vE\geqslant vA+\gamma\).
_In the rest of this subsection we assume \(P\in K\{Y\}\setminus K\), set \(r:=\operatorname{order}P\), and let \(i\), \(j\) range over \(\mathbb{N}^{1+r}\)._
**Lemma 4.2.3**.: _For \(\delta:=v\big{(}P-P(0)\big{)}\) and all \(h\in\smallo\) we have \(v\big{(}P_{+h}-P\big{)}\geqslant\delta+\frac{1}{2}vh\)._
Proof.: Note that \(\delta\in\Gamma\) and \(v(P_{\boldsymbol{j}})\geqslant\delta\) for all \(\boldsymbol{j}\) with \(|\boldsymbol{j}|\geqslant 1\). Let \(h\in\smallo^{\neq}\) and \(\boldsymbol{i}\) be given; we claim that \(v\big{(}(P_{+h})_{\boldsymbol{i}}-P_{\boldsymbol{i}}\big{)}\geqslant\delta+\frac{1}{2}vh\). By [ADH, (4.3.1)] we have
\[(P_{+h})_{\boldsymbol{i}}\ =\ P_{\boldsymbol{i}}+Q(h)\quad\text{where}\ Q(Y):= \sum_{|\boldsymbol{j}|\geqslant 1}\binom{\boldsymbol{i}+\boldsymbol{j}}{\boldsymbol{i}}P_{ \boldsymbol{i}+\boldsymbol{j}}\,Y^{\boldsymbol{j}}\in K\{Y\}.\]
From \(Q(0)=0\) and [ADH, 6.1.4] we obtain
\[v(Q_{\times h})\ \geqslant\ v(Q)+vh+o(vh)\ \geqslant\ \delta+\frac{1}{2}vh.\]
Together with \(v\big{(}Q(h)\big{)}\geqslant v(Q_{\times h})\) this yields the lemma.
**Corollary 4.2.4**.: _Let \(f\in K\). Then there exists \(\delta\in\Gamma\) such that for all \(f^{\star}\in K\) with \(f-f^{\star}\prec 1\) we have \(v\big{(}P_{+f^{\star}}-P_{+f}\big{)}\geqslant\delta+\frac{1}{2}v(f^{\star}-f)\)._
Proof.: Take \(\delta\) as in the preceding lemma with \(P_{+f}\) in place of \(P\) and \(h=f^{\star}-f\).
**Corollary 4.2.5**.: _Let \(a,b_{1},\ldots,b_{r},f\in K\) be such that_
\[A\ :=\ L_{P_{+f}}\ =\ a(\partial-b_{1})\cdots(\partial-b_{r}),\qquad a\ \neq\ 0.\]
_Then there exists \(\gamma_{1}\in\Gamma^{\geqslant}\) such that for all \(a^{\star},b_{1}^{\star},\ldots,b_{r}^{\star},f^{\star}\in K\) and \(\gamma\in\Gamma\), if_
\[\gamma>\gamma_{1},\ v(a-a^{\star})\geqslant va+\gamma,\ v(b_{i}-b_{i}^{\star} )\geqslant(r+1)\gamma\ (i=1,\ldots,r),\ \text{and}\ v(f-f^{\star})\geqslant 4\gamma,\]
_then_
* \(v\big{(}P_{+f^{\star}}-P_{+f}\big{)}\geqslant vA+\gamma\)_; and_
* \(L_{P_{+f^{\star}}}=a^{\star}(\partial-b_{1}^{\star})\cdots(\partial-b_{r}^{ \star})+E\) _where_ \(vE\geqslant vA+\gamma\)_._
Proof.: Take \(\gamma_{0}\) as in Corollary 4.2.2 applied to \(a,b_{1},\ldots,b_{r}\), and take \(\delta\) as in Corollary 4.2.4. Then \(\gamma_{1}:=\max\{\gamma_{0},vA-\delta\}\) has the required property.
In the next result \(L\) is a valued differential field extension of \(K\) with small derivation such that \(\Gamma\) is cofinal in \(\Gamma_{L}\). Then the natural inclusion \(K\to L\) extends uniquely to an embedding \(K^{\rm c}\to L^{\rm c}\) of valued fields by [ADH, 3.2.20]. It is easy to check that this is even an embedding of valued _differential_ fields; we identify \(K^{\rm c}\) with a valued differential subfield of \(L^{\rm c}\) via this embedding.
**Corollary 4.2.6**.: _Let \(a,b_{1},\ldots,b_{r}\in L^{\rm c}\) and \(f\in K^{\rm c}\) be such that in \(L^{\rm c}[\partial]\),_
\[A\ :=\ L_{P_{+f}}\ =\ a(\partial-b_{1})\cdots(\partial-b_{r}),\qquad a,f\neq 0, \quad\mathfrak{v}:=\mathfrak{v}(A)\prec 1,\]
_and let \(w\in\mathbb{N}\). Then there are \(a^{\star},b_{1}^{\star},\ldots,b_{r}^{\star}\in L\) and \(f^{\star}\in K\) such that_
\[a^{\star}\ \sim\ a,\qquad f^{\star}\ \sim\ f,\qquad A^{\star}:=L_{P_{+f^{ \star}}}\ \sim\ A,\qquad\operatorname{order}A^{\star}\ =\ r,\qquad\mathfrak{v}(A^{\star})\ \sim\ \mathfrak{v},\]
_and such that for \(\Delta:=\big{\{}\alpha\in\Gamma_{L}:\,\alpha=o\big{(}v(\mathfrak{v})\big{)}\big{\}}\) we have in \(L[\partial]\),_
\[A^{\star}\ =\ a^{\star}(\partial-b_{1}^{\star})\cdots(\partial-b_{r}^{\star})+E,\qquad E\prec_{\Delta}\mathfrak{v}^{w+1}A.\]
Proof.: Let \(\gamma_{1}\in\Gamma_{L}^{\geqslant}\) be as in Corollary 4.2.5 applied to \(L^{\rm c}\) in place of \(K\), and take \(\gamma_{2}\in\Gamma\) such that \(\gamma_{2}\geqslant\max\{\gamma_{1},\frac{1}{4}vf\}+vA\) and \(\gamma_{2}\geqslant v\big{(}(P_{+f})_{\boldsymbol{i}}\big{)}\) for all \(\boldsymbol{i}\) with \((P_{+f})_{\boldsymbol{i}}\neq 0\). Let \(\gamma\in\Gamma\) and \(\gamma>\gamma_{2}\). Then \(\gamma-vA>\gamma_{1}\). By the density of \(K\), \(L\) in \(K^{\rm c}\), \(L^{\rm c}\), respectively, we can take \(a^{\star},b_{1}^{\star},\ldots,b_{r}^{\star}\in L\) and \(f^{\star}\in K\) such that
\[v(a-a^{\star})\ \geqslant\ va+(\gamma-vA),\qquad v(b_{i}-b_{i}^{\star})\geqslant(r +1)(\gamma-vA)\ \ \text{for}\ i=1,\ldots,r,\]
and \(v(f-f^{\star})\geqslant 4(\gamma-vA)>vf\). Then \(a^{\star}\sim a\), \(f^{\star}\sim f\), and by Corollary 4.2.5,
\[v\big{(}P_{+f^{\star}}-P_{+f}\big{)}\ \geqslant\ \gamma,\quad A^{\star}:=L_{P_{+f^{ \star}}}\ =\ a^{\star}(\partial-b_{1}^{\star})\cdots(\partial-b_{r}^{\star})+E,\quad vE \geqslant\gamma.\]
Hence \((P_{+f^{\star}})_{\boldsymbol{i}}\sim(P_{+f})_{\boldsymbol{i}}\) if \((P_{+f})_{\boldsymbol{i}}\neq 0\), and \(v\big{(}(P_{+f^{\star}})_{\boldsymbol{i}}\big{)}>\gamma_{2}\geqslant vA\) if \((P_{+f})_{\boldsymbol{i}}=0\), so \(A^{\star}\sim A\), \(\operatorname{order}A^{\star}=r\), and \(\mathfrak{v}(A^{\star})\sim\mathfrak{v}\). Choosing \(\gamma\) so that also \(\gamma>v(\mathfrak{v}^{w+1}A)+\Delta\) we achieve in addition that \(E\prec_{\Delta}\mathfrak{v}^{w+1}A\).
**Keeping it real.** _In this subsection \(H\) is a real closed \(H\)-asymptotic field with small derivation whose valuation ring is convex, with \(\Gamma:=v(H^{\times})\neq\{0\}\), and \(K\) is the asymptotic extension \(H[\mathrm{i}]\) of \(H\) with \(\mathrm{i}^{2}=-1\)._ Then \(H^{\mathrm{c}}\) is real closed and \(H^{\mathrm{c}}[\mathrm{i}]=K^{\mathrm{c}}\) as valued field extension of \(H\) according to Lemma 4.1.1, and as asymptotic field extension of \(H\) by the discussion after Corollary 4.1.6. Using the real splittings from Definition 1.1.5 we show here that we can "preserve the reality of \(A\)" in Corollary 4.2.6.
**Lemma 4.2.7**.: _Let \(A\in H^{\mathrm{c}}[\partial]\) be of order \(r\geqslant 1\) and let \((g_{1},\dots,g_{r})\in H^{\mathrm{c}}[\mathrm{i}]^{r}\) be a real splitting of \(A\) over \(H^{\mathrm{c}}[\mathrm{i}]\). Then for every \(\gamma\in\Gamma\) there are \(g_{1}^{\star},\dots,g_{r}^{\star}\) in \(H[\mathrm{i}]\) such that \(v(g_{i}-g_{i}^{\star})>\gamma\) for \(i=1,\dots,r\),_
\[A^{\star}\ :=\ (\partial-g_{1}^{\star})\cdots(\partial-g_{r}^{\star})\in H [\partial],\]
_and \((g_{1}^{\star},\dots,g_{r}^{\star})\) is a real splitting of \(A^{\star}\) over \(H[\mathrm{i}]\)._
Proof.: We can reduce to the case where \(r=1\) or \(r=2\). If \(r=1\), then the lemma holds trivially, so suppose \(r=2\). Then again the lemma holds trivially if \(g_{1},g_{2}\in H^{\mathrm{c}}\), so we can assume instead that
\[g_{1}\ =\ a-bi+b^{\dagger},\quad g_{2}\ =\ a+bi,\qquad a\in H^{\mathrm{c}},\ b \in(H^{\mathrm{c}})^{\times}.\]
Let \(\gamma\in\Gamma\) be given. The density of \(H\) in \(H^{\mathrm{c}}\) gives \(a^{\star}\in H\) with \(v(a-a^{\star})\geqslant\gamma\). Next, choose \(\gamma^{\star}\in\Gamma\) such that \(\gamma^{\star}\geqslant\max\{\gamma,vb\}\) and \(\alpha^{\prime}>\gamma\) for all nonzero \(\alpha>\gamma^{\star}-vb\) in \(\Gamma\), and take \(b^{\star}\in H\) with \(v(b-b^{\star})>\gamma^{\star}\). Then \(v(b-b^{\star})>\gamma\) and \(b\sim b^{\star}\). In fact, \(b=b^{\star}(1+\varepsilon)\) where \(v\varepsilon+vb=v(b-b^{\star})>\gamma^{\star}\) and so \(v\big{(}(b/b^{\star})^{\dagger}\big{)}=v(\varepsilon^{\prime})>\gamma\). Set \(g_{1}^{\star}:=a^{\star}-b^{\star}\mathrm{i}+b^{\star\dagger}\) and \(g_{2}^{\star}:=a^{\star}+b^{\star}\mathrm{i}\). Then
\[v(g_{1}-g_{1}^{\star})\ =\ v\big{(}a-a^{\star}+(b/b^{\star})^{\dagger}+(b^{\star}-b)\mathrm{i}\big{)}\ >\ \gamma,\quad v(g_{2}-g_{2}^{\star})\ >\ \gamma,\]
\[(\partial-g_{1}^{\star})(\partial-g_{2}^{\star})\ =\ \partial^{2}-\big{(}2a^{\star}+b^{\star\dagger}\big{)}\partial+\big{(}(-a^{\star})^{\prime}+a^{\star 2}+a^{\star}b^{\star\dagger}+b^{\star 2}\big{)}\ \in\ H[\partial].\]
Hence \((g_{1}^{\star},g_{2}^{\star})\) is a real splitting of \(A^{\star}:=(\partial-g_{1}^{\star})(\partial-g_{2}^{\star})\in H[\partial]\).
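For the record, the same computation without stars confirms that a pair of this form multiplies out to an operator over \(H^{\mathrm{c}}\):

\[(\partial-g_{1})(\partial-g_{2})\ =\ \partial^{2}-(g_{1}+g_{2})\partial+\big(g_{1}g_{2}-g_{2}^{\prime}\big)\ =\ \partial^{2}-(2a+b^{\dagger})\partial+\big({-a^{\prime}}+a^{2}+ab^{\dagger}+b^{2}\big),\]

since \(g_{1}+g_{2}=2a+b^{\dagger}\), \(g_{1}g_{2}=a^{2}+ab^{\dagger}+b^{2}+b^{\prime}\mathrm{i}\), and \(g_{2}^{\prime}=a^{\prime}+b^{\prime}\mathrm{i}\).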
In the next two corollaries \(a\in(H^{\mathrm{c}})^{\times}\) and \(b_{1},\dots,b_{r}\in K^{\mathrm{c}}\) are such that
\[A\ :=\ a(\partial-b_{1})\cdots(\partial-b_{r})\in H^{\mathrm{c}}[\partial],\]
\((b_{1},\dots,b_{r})\) is a real splitting of \(A\) over \(K^{\mathrm{c}}\), and \(\mathfrak{v}:=\mathfrak{v}(A)\prec 1\). We set \(\Delta:=\Delta(\mathfrak{v})\).
**Corollary 4.2.8**.: _Suppose \(A=L_{P_{+f}}\) with \(P\in H\{Y\}\) of order \(r\geqslant 1\) and \(f\) in \((H^{\mathrm{c}})^{\times}\). Let \(\gamma\in\Gamma\) and \(w\in\mathbb{N}\). Then there is \(f^{\star}\in H^{\times}\) such that \(v(f^{\star}-f)\geqslant\gamma\),_
\[f^{\star}\ \sim\ f,\quad A^{\star}\ :=\ L_{P_{+f^{\star}}}\ \sim\ A,\quad\mathrm{order}\,A^{\star}\ =\ r,\quad\mathfrak{v}(A^{\star})\ \sim\ \mathfrak{v}, \tag{4.2.1}\]
_and we have \(a^{\star}\in H^{\times}\), \(b_{1}^{\star},\dots,b_{r}^{\star}\in K\), and \(B^{\star},E^{\star}\in H[\partial]\) with \(A^{\star}=B^{\star}+E^{\star}\), \(E^{\star}\prec_{\Delta}\mathfrak{v}^{w+1}A\), such that_
\[B^{\star}\ =\ a^{\star}(\partial-b_{1}^{\star})\cdots(\partial-b_{r}^{\star}),\qquad v(a-a^{\star}),\ v(b_{1}-b_{1}^{\star}),\ \dots,\ v(b_{r}-b_{r}^{\star})\ \geqslant\ \gamma,\]
_and \((b_{1}^{\star},\dots,b_{r}^{\star})\) is a real splitting of \(B^{\star}\) over \(K\)._
Proof.: We apply Corollary 4.2.6 with \(H\), \(K\) in the role of \(K\), \(L\), and take \(\gamma_{1}\), \(\gamma_{2}\) as in the proof of that corollary. We can assume \(\gamma>\gamma_{2}\), so that \(\gamma-vA>0\). The density of \(H\) in \(H^{\mathrm{c}}\) gives \(a^{\star}\in H\) such that \(v(a-a^{\star})\geqslant\max\bigl{\{}va+(\gamma-vA),\gamma\bigr{\}}\) (so \(a^{\star}\sim a\)), and Lemma 4.2.7 gives \(b_{1}^{\star},\dots,b_{r}^{\star}\in K\) such that \(v(b_{i}-b_{i}^{\star})\geqslant\max\bigl{\{}(r+1)(\gamma-vA),\gamma\bigr{\}}\) for \(i=1,\dots,r\), and \((b_{1}^{\star},\dots,b_{r}^{\star})\) is a real splitting of
\[B^{\star}\ :=\ a^{\star}(\partial-b_{1}^{\star})\cdots(\partial-b_{r}^{\star})\in H [\partial]\]
over \(K\). Take \(f^{\star}\in H\) with \(v(f-f^{\star})\geqslant\max\bigl{\{}4(\gamma-vA),\gamma\bigr{\}}\). Then (4.2.1) follows from the proof of Corollary 4.2.6. We can increase \(\gamma\) so that \(\gamma>v(\mathfrak{v}^{w+1}A)+\Delta\), and then we have \(A^{\star}-B^{\star}\prec_{\Delta}\mathfrak{v}^{w+1}A\).
This result persists after multiplicative conjugation:
**Corollary 4.2.9**.: _Suppose \(A=L_{P_{+f,\times\mathfrak{m}}}\) with \(P\in H\{Y\}\) of order \(r\geqslant 1\), and \(f\) in \((H^{\mathrm{c}})^{\times}\), \(\mathfrak{m}\in H^{\times}\). Let \(\gamma\in\Gamma\), \(w\in\mathbb{N}\). Then there is \(f^{\star}\in H^{\times}\) such that_
\[v(f^{\star}-f)\geqslant\gamma,\quad f^{\star}\ \sim\ f,\quad A^{\star}\ :=\ L_{P_{+f^{\star},\times \mathfrak{m}}}\ \sim\ A,\quad\mathrm{order}\,A^{\star}\ =\ r,\quad\mathfrak{v}(A^{\star})\ \sim\ \mathfrak{v},\]
_and we have \(a^{\star}\in H^{\times}\), \(b^{\star}_{1},\ldots,b^{\star}_{r}\in K\), and \(B^{\star},E^{\star}\in H[\partial]\) with the properties stated in the previous corollary._
Proof.: Put \(Q:=P_{\times\mathfrak{m}}\in H\{Y\}\), \(g:=f/\mathfrak{m}\in H^{\mathrm{c}}\); then \(Q_{+g}=P_{+f,\times\mathfrak{m}}\). Applying the previous corollary to \(Q\), \(g\) in place of \(P\), \(f\) yields \(g^{\star}\in H^{\times}\), \(a^{\star}\in H^{\times}\), and \(b^{\star}_{1},\ldots,b^{\star}_{r}\in K\) such that \(v(g^{\star}-g)\geqslant\ \gamma-v\mathfrak{m}\),
\[g^{\star}\ \sim\ g,\qquad A^{\star}\ :=\ L_{Q_{+g^{\star}}}\ \sim\ A,\qquad \mathrm{order}\,A^{\star}\ =\ r,\qquad\mathfrak{v}(A^{\star})\ \sim\ \mathfrak{v}\]
and \(A^{\star}=B^{\star}+E^{\star}\), with \(B^{\star},E^{\star}\in H[\partial]\), \(E^{\star}\prec_{\Delta}\mathfrak{v}^{w+1}A\), and
\[B^{\star}\ =\ a^{\star}(\partial-b^{\star}_{1})\cdots(\partial-b^{\star}_{r}),\qquad v(a-a^{\star}),\ v(b_{1}-b^{\star}_{1}),\ \ldots,\ v(b_{r}-b^{\star}_{r})\ \geqslant\ \gamma,\]
and \((b^{\star}_{1},\ldots,b^{\star}_{r})\) is a real splitting of \(B^{\star}\) over \(K\). Therefore \(f^{\star}:=g^{\star}\mathfrak{m}\in H^{\times}\) and \(a^{\star},b^{\star}_{1},\ldots,b^{\star}_{r}\) have the required properties.
### Strong splitting
_In this subsection \(H\) is a real closed \(H\)-field with small derivation and asymptotic integration_. Thus \(K:=H[\mathrm{i}]\) is a d-valued extension of \(H\). Let \(A\in K[\partial]^{\neq}\) have order \(r\geqslant 1\) and set \(\mathfrak{v}:=\mathfrak{v}(A)\), and let \(f\), \(g\), \(h\) (possibly subscripted) range over \(K\). Recall from Section 1.1 that a splitting of \(A\) over \(K\) is an \(r\)-tuple \((g_{1},\ldots,g_{r})\) such that
\[A\ =\ f(\partial-g_{1})\cdots(\partial-g_{r})\quad\text{where $f\neq 0$.}\]
We call such a splitting \((g_{1},\ldots,g_{r})\) of \(A\) over \(K\)**strong** if \(\mathrm{Re}\,g_{j}\succcurlyeq\mathfrak{v}^{\dagger}\) for \(j=1,\ldots,r\), and we say that \(A\)**splits strongly over \(K\)** if there is a strong splitting of \(A\) over \(K\). This notion is mainly of interest for \(\mathfrak{v}\prec 1\), since otherwise \(\mathfrak{v}=1\), and then any splitting of \(A\) over \(K\) is a strong splitting of \(A\) over \(K\).
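For orientation, we record the routine expansion in the simplest nontrivial case \(r=2\): if \(A=f(\partial-g_{1})(\partial-g_{2})\), then

\[A\ =\ f\bigl(\partial^{2}-(g_{1}+g_{2})\partial+(g_{1}g_{2}-g_{2}')\bigr),\]

since \((\partial-g_{1})(\partial-g_{2})y=y''-(g_{1}+g_{2})y'+(g_{1}g_{2}-g_{2}')y\). The term \(g_{2}'\) shows that the factors do not commute in general, so a splitting is an ordered factorization; strength additionally requires \(\operatorname{Re}g_{1},\operatorname{Re}g_{2}\succcurlyeq\mathfrak{v}^{\dagger}\).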
**Lemma 4.2.10**.: _Let \((g_{1},\ldots,g_{r})\) be a strong splitting of \(A\) over \(K\). If \(h\neq 0\), then \((g_{1},\ldots,g_{r})\) is a strong splitting of \(hA\) over \(K\). If \(h\asymp 1\), then \((g_{1}-h^{\dagger},\ldots,g_{r}-h^{\dagger})\) is a strong splitting of \(Ah\) over \(K\)._
Proof.: Suppose \(h\asymp 1\). Now use Lemma 1.1.1, and the fact that if \(\mathfrak{v}\prec 1\), then \(\mathrm{Re}\,h^{\dagger}\preccurlyeq h^{\dagger}\prec\mathfrak{v}^{\dagger}\). If \(\mathfrak{v}=1\), then use that \(\mathfrak{v}(Ah)=1\) by Corollary 3.1.3.
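The basic twisting identity underlying such computations (for \(h\in K^{\times}\)) is

\[(\partial-g)\,h\ =\ h\,\bigl(\partial-(g-h^{\dagger})\bigr)\quad\text{in }K[\partial],\]

since \((hy)'-g\,hy=h\bigl(y'-(g-h^{\dagger})y\bigr)\) for all \(y\); applying this to each factor in turn transforms a splitting \((g_{1},\ldots,g_{r})\) of \(A\) into the splitting \((g_{1}-h^{\dagger},\ldots,g_{r}-h^{\dagger})\) of \(Ah\).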
**Lemma 4.2.11**.: _Suppose \(g\asymp\mathrm{Re}\,g\). Then \(A=\partial-g\) splits strongly over \(K\)._
Proof.: Assuming \(\mathfrak{v}\prec 1\) gives \(\mathfrak{v}^{\prime}\prec 1\), so \(\mathfrak{v}^{\dagger}\prec 1/\mathfrak{v}\asymp g\asymp\mathrm{Re}\,g\).
In particular, every \(A\in H[\partial]^{\neq}\) of order \(1\) splits strongly over \(K\).
**Lemma 4.2.12**.: _Suppose \((g_{1},\ldots,g_{r})\) is a strong splitting of \(A\) over \(K\) and \(\mathfrak{v}\prec^{\flat}1\). Let \(\phi\preccurlyeq 1\) be active in \(H\) and set \(h_{j}:=\phi^{-1}\bigl{(}g_{j}-(r-j)\phi^{\dagger}\bigr{)}\) for \(j=1,\ldots,r\). Then \((h_{1},\ldots,h_{r})\) is a strong splitting of \(A^{\phi}\) over \(K^{\phi}=H^{\phi}[\mathrm{i}]\)._
Proof.: By Lemma 1.1.2, \((h_{1},\ldots,h_{r})\) is a splitting of \(A^{\phi}\) over \(K^{\phi}\). We have \(\phi^{\dagger}\prec 1\preccurlyeq\mathfrak{v}^{\dagger}\), so \(\operatorname{Re}h_{j}\sim\phi^{-1}\operatorname{Re}g_{j}\succcurlyeq\phi^{-1} \mathfrak{v}^{\dagger}\) for \(j=1,\ldots,r\). Set \(\mathfrak{w}:=\mathfrak{v}(A^{\phi})\) and \(\delta:=\phi^{-1}\partial\). Lemma 3.1.19 gives \(\mathfrak{v}^{\dagger}\asymp\mathfrak{w}^{\dagger}\), so \(\phi^{-1}\mathfrak{v}^{\dagger}\asymp\delta(\mathfrak{w})/\mathfrak{w}\).
_In the next two results we assume that for all \(q\in\mathbb{Q}^{>}\) and \(\mathfrak{n}\in H^{\times}\) there is given an element \(\mathfrak{n}^{q}\in H^{\times}\) such that \((\mathfrak{n}^{q})^{\dagger}=q\mathfrak{n}^{\dagger}\)_(_and thus \(v(\mathfrak{n}^{q})=q\,v(\mathfrak{n})\)_).
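Such \(\mathfrak{n}^{q}\) are available here: \(H\) is real closed, so for \(q=m/n\) \((m,n\geqslant 1)\) one may take

\[\mathfrak{n}^{q}\ :=\ \text{the unique }y\in H^{>}\text{ with }y^{n}=|\mathfrak{n}|^{m},\]

and then \(n\,(\mathfrak{n}^{q})^{\dagger}=(y^{n})^{\dagger}=m\,|\mathfrak{n}|^{\dagger}=m\,\mathfrak{n}^{\dagger}\), so \((\mathfrak{n}^{q})^{\dagger}=q\,\mathfrak{n}^{\dagger}\); compare the convention fixed before Lemma 4.3.20.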
**Lemma 4.2.13**.: _Suppose \((g_{1},\ldots,g_{r})\) is a splitting of \(A\) over \(K\), \(\mathfrak{v}\prec 1\), \(\mathfrak{n}\in H^{\times}\), and \([\mathfrak{v}]\leqslant[\mathfrak{n}]\). Then for all \(q\in\mathbb{Q}^{>}\) with at most \(r\) exceptions, \((g_{1}-q\mathfrak{n}^{\dagger},\ldots,g_{r}-q\mathfrak{n}^{\dagger})\) is a strong splitting of \(A\mathfrak{n}^{q}\) over \(K\)._
Proof.: Let \(q\in\mathbb{Q}^{>}\). Then \((g_{1}-q\mathfrak{n}^{\dagger},\ldots,g_{r}-q\mathfrak{n}^{\dagger})\) is a splitting of \(A\mathfrak{n}^{q}\) over \(K\), by Lemma 1.1.1. Moreover, \(\big{[}\mathfrak{v}(A\mathfrak{n}^{q})\big{]}\leqslant[\mathfrak{n}]\), by Lemma 3.1.9, so \(\mathfrak{v}(A\mathfrak{n}^{q})^{\dagger}\preccurlyeq\mathfrak{n}^{\dagger}\). Thus if \(\operatorname{Re}g_{j}\not\sim q\mathfrak{n}^{\dagger}\) for \(j=1,\ldots,r\), then \((g_{1}-q\mathfrak{n}^{\dagger},\ldots,g_{r}-q\mathfrak{n}^{\dagger})\) is a strong splitting of \(A\mathfrak{n}^{q}\) over \(K\). For each \(j\) there is at most one \(q\in\mathbb{Q}^{>}\) with \(\operatorname{Re}g_{j}\sim q\mathfrak{n}^{\dagger}\) (if also \(\operatorname{Re}g_{j}\sim q'\mathfrak{n}^{\dagger}\), then \(q\mathfrak{n}^{\dagger}\sim q'\mathfrak{n}^{\dagger}\), so \(q=q'\)), which accounts for the at most \(r\) exceptions.
**Corollary 4.2.14**.: _Let \((P,\mathfrak{m},\widehat{a})\) be a steep slot in \(K\) of order \(r\geqslant 1\) whose linear part \(L:=L_{P_{\times\mathfrak{m}}}\) splits over \(K\) and such that \(\widehat{a}\prec_{\Delta}\mathfrak{m}\) for \(\Delta:=\Delta\big{(}\mathfrak{v}(L)\big{)}\). Then for all sufficiently small \(q\in\mathbb{Q}^{>}\), any \(\mathfrak{n}\asymp|\mathfrak{v}(L)|^{q}\mathfrak{m}\) in \(K^{\times}\) gives a steep refinement \(\big{(}P,\mathfrak{n},\widehat{a}\big{)}\) of \((P,\mathfrak{m},\widehat{a})\) whose linear part \(L_{P_{\times\mathfrak{n}}}\) splits strongly over \(K\)._
Proof.: Note that \(|f|\asymp f\) for all \(f\). Lemma 3.3.1 gives \(q_{0}\in\mathbb{Q}^{>}\) such that for all \(q\in\mathbb{Q}^{>}\) with \(q\leqslant q_{0}\) and any \(\mathfrak{n}\asymp|\mathfrak{v}(L)|^{q}\mathfrak{m}\), \((P,\mathfrak{n},\widehat{a})\) is a steep refinement of \((P,\mathfrak{m},\widehat{a})\). Now apply Lemma 4.2.13 with \(L\), \(\mathfrak{v}(L)\), \(|\mathfrak{v}(L)|\) in the respective roles of \(A\), \(\mathfrak{v}\), \(\mathfrak{n}\), and use Lemma 4.2.10 and the fact that for \(\mathfrak{n}\asymp|\mathfrak{v}(L)|^{q}\mathfrak{m}\) we have \(L_{P_{\times\mathfrak{n}}}=L\cdot\mathfrak{n}/\mathfrak{m}=L|\mathfrak{v}(L)|^{q}h\) with \(h\asymp 1\).
We finish this section with a useful fact on slots in \(K\). Given such a slot \((P,\mathfrak{m},\widehat{a})\), the element \(\widehat{a}\) lies in an immediate asymptotic extension of \(K\) that might not be of the form \(\widehat{H}[\mathrm{i}]\) with \(\widehat{H}\) an immediate \(H\)-field extension of \(H\). By the next lemma we can nevertheless often reduce to this situation, and more:
**Lemma 4.2.15**.: _Suppose \(H\) is \(\omega\)-free. Then every \(Z\)-minimal slot in \(K\) of positive order is equivalent to a hole \((P,\mathfrak{m},\widehat{b})\) in \(K\) with \(\widehat{b}\in\widehat{K}=\widehat{H}[\mathrm{i}]\) for some immediate \(\omega\)-free newtonian \(H\)-field extension \(\widehat{H}\) of \(H\)._
Proof.: Let \((P,\mathfrak{m},\widehat{a})\) be a \(Z\)-minimal slot in \(K\) of order \(\geqslant 1\). Take an immediate \(\omega\)-free newtonian \(H\)-field extension \(\widehat{H}\) of \(H\); such \(\widehat{H}\) exists by remarks following [ADH, 14.0.1]. Then \(\widehat{K}=\widehat{H}[\mathrm{i}]\) is also newtonian by [ADH, 14.5.7]. Now apply Corollary 3.2.29 with \(L:=\widehat{K}\) to obtain \(\widehat{b}\in\widehat{K}\) such that \((P,\mathfrak{m},\widehat{b})\) is a hole in \(K\) equivalent to \((P,\mathfrak{m},\widehat{a})\).
### Split-Normal Slots
_In this section \(H\) is a real closed \(H\)-field with small derivation and asymptotic integration. We let \(\mathcal{O}:=\mathcal{O}_{H}\) be its valuation ring and \(C:=C_{H}\) its constant field. We fix an immediate asymptotic extension \(\widehat{H}\) of \(H\) with valuation ring \(\widehat{\mathcal{O}}\) and an element \(\mathrm{i}\) of an asymptotic extension of \(\widehat{H}\) with \(\mathrm{i}^{2}=-1\). Then \(\widehat{H}\) is also an \(H\)-field by [ADH, 10.5.8], \(\mathrm{i}\notin\widehat{H}\) and \(K:=H[\mathrm{i}]\) is an algebraic closure of \(H\). With \(\widehat{K}:=\widehat{H}[\mathrm{i}]\)
we have the inclusion diagram

\[\begin{array}{ccc} K & \subseteq & \widehat{K}\\ \cup & & \cup\\ H & \subseteq & \widehat{H} \end{array}\]
By [ADH, 3.5.15, 10.5.7], \(K\) and \(\widehat{K}\) are d-valued with valuation rings \(\mathcal{O}+\mathcal{O}\mathrm{i}\) and \(\widehat{\mathcal{O}}+\widehat{\mathcal{O}}\mathrm{i}\) and with the same constant field \(C[\mathrm{i}]\), and \(\widehat{K}\) is an immediate extension of \(K\). Thus \(H\), \(K\), \(\widehat{H}\), \(\widehat{K}\) have the same \(H\)-asymptotic couple \((\Gamma,\psi)\).
**Lemma 4.3.1**.: _Let \(\widehat{a}\in\widehat{H}\setminus H\). Then \(Z(H,\widehat{a})=Z\big{(}K,\widehat{a}\big{)}\cap H\{Y\}\)._
Proof.: The inclusion "\(\supseteq\)" is obvious since the Newton degree of a differential polynomial \(Q\in H\{Y\}^{\neq}\) does not change when \(H\) is replaced by its algebraic closure; see [ADH, 11.1]. Conversely, let \(P\in Z(H,\widehat{a})\). Then for all \(\mathfrak{v}\in H^{\times}\) and \(a\in H\) such that \(a-\widehat{a}\prec\mathfrak{v}\) we have \(\operatorname{ndeg}_{\prec\mathfrak{v}}P_{+a}\geqslant 1\). Let \(\mathfrak{v}\in H^{\times}\) and \(z\in K\) be such that \(z-\widehat{a}\prec\mathfrak{v}\). Take \(a,b\in H\) such that \(z=a+b\mathrm{i}\). Then \(a-\widehat{a},b\mathrm{i}\prec\mathfrak{v}\) and hence \(\operatorname{ndeg}_{\prec\mathfrak{v}}P_{+z}=\operatorname{ndeg}_{\prec\mathfrak{v}}P_{+a}\geqslant 1\), using [ADH, 11.2.7]. Thus \(P\in Z\big{(}K,\widehat{a}\big{)}\).
**Corollary 4.3.2**.: _Let \((P,\mathfrak{m},\widehat{a})\) be a slot in \(H\) with \(\widehat{a}\in\widehat{H}\). Then \((P,\mathfrak{m},\widehat{a})\) is also a slot in \(K\), and if \((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal as a slot in \(K\), then \((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal as a slot in \(H\). Moreover, \((P,\mathfrak{m},\widehat{a})\) is a hole in \(H\) iff \((P,\mathfrak{m},\widehat{a})\) is a hole in \(K\), and if \((P,\mathfrak{m},\widehat{a})\) is a minimal hole in \(K\), then \((P,\mathfrak{m},\widehat{a})\) is a minimal hole in \(H\)._
Proof.: The first three claims are obvious from \(\widehat{K}\) being an immediate extension of \(K\) and the previous lemma. Suppose \((P,\mathfrak{m},\widehat{a})\) is minimal as a hole in \(K\). Let \((Q,\mathfrak{n},\widetilde{b})\) be a hole in \(H\); thus \(\widetilde{b}\in\widetilde{H}\) where \(\widetilde{H}\) is an immediate asymptotic extension of \(H\). By the first part of the corollary applied to \((Q,\mathfrak{n},\widetilde{b})\) and \(\widetilde{H}\) in place of \((P,\mathfrak{m},\widehat{a})\) and \(\widehat{H}\), respectively, \((Q,\mathfrak{n},\widetilde{b})\) is also a hole in \(K\). Hence \(\mathrm{c}(P)\leqslant\mathrm{c}(Q)\), proving the last claim.
In the next subsection we define the notion of a _split-normal_ slot in \(H\). Later in this section we employ the results of Sections 3.3-4.2 to show, under suitable hypotheses on \(H\), that minimal holes in \(K\) of order \(\geqslant 1\) give rise to split-normal \(Z\)-minimal slots in \(H\) (Theorem 4.3.9). We then investigate which kinds of refinements preserve split-normality, and also consider a strengthening of split-normality.
**Defining split-normality**.: _In this subsection \(b\) ranges over \(H\) and \(\mathfrak{m},\mathfrak{n}\) over \(H^{\times}\). Also, \((P,\mathfrak{m},\widehat{a})\) is a slot in \(H\) of order \(r\geqslant 1\) with \(\widehat{a}\in\widehat{H}\setminus H\) and linear part \(L:=L_{P_{\times m}}\). Set \(w:=\operatorname{wt}(P)\), so \(w\geqslant r\); if \(\operatorname{order}L=r\), we set \(\mathfrak{v}:=\mathfrak{v}(L)\)._
**Definition 4.3.3**.: We say that \((P,\mathfrak{m},\widehat{a})\) is **split-normal** if \(\operatorname{order}L=r\), and
(SN1) \(\mathfrak{v}\prec^{\flat}1\);
(SN2) \((P_{\times\mathfrak{m}})_{\geqslant 1}=Q+R\) where \(Q,R\in H\{Y\}\), \(Q\) is homogeneous of degree \(1\) and order \(r\), \(L_{Q}\) splits over \(K\), and \(R\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}(P_{\times\mathfrak{m}})_{1}\).
Note that in (SN2) we do not require that \(Q=(P_{\times\mathfrak{m}})_{1}\).
**Lemma 4.3.4**.: _Suppose \((P,\mathfrak{m},\widehat{a})\) is split-normal. Then \((P,\mathfrak{m},\widehat{a})\) is normal, and with \(Q\), \(R\) as in (SN2) we have \((P_{\times\mathfrak{m}})_{1}-Q\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}(P _{\times\mathfrak{m}})_{1}\), so \((P_{\times\mathfrak{m}})_{1}\sim Q\)._
Proof.: We have \((P_{\times{\mathfrak{m}}})_{1}=Q+R_{1}\) and \(R_{1}\preccurlyeq R\prec_{\Delta({\mathfrak{v}})}{\mathfrak{v}}^{w+1}(P_{\times{ \mathfrak{m}}})_{1}\), and thus \((P_{\times{\mathfrak{m}}})_{1}-Q\prec_{\Delta({\mathfrak{v}})}{\mathfrak{v}}^{ w+1}(P_{\times{\mathfrak{m}}})_{1}\). Now \((P,{\mathfrak{m}},\widehat{a})\) is normal because \((P_{\times{\mathfrak{m}}})_{>1}=R_{>1}\prec_{\Delta({\mathfrak{v}})}{\mathfrak{ v}}^{w+1}(P_{\times{\mathfrak{m}}})_{1}\).
If \((P,{\mathfrak{m}},\widehat{a})\) is normal and \((P_{\times{\mathfrak{m}}})_{1}=Q+R\) where \(Q,R\in H\{Y\}\), \(Q\) is homogeneous of degree \(1\) and order \(r\), \(L_{Q}\) splits over \(K\), and \(R\prec_{\Delta({\mathfrak{v}})}{\mathfrak{v}}^{w+1}(P_{\times{\mathfrak{m}}}) _{1}\), then \((P,{\mathfrak{m}},\widehat{a})\) is split-normal. Thus if \((P,{\mathfrak{m}},\widehat{a})\) is normal and \(L\) splits over \(K\), then \((P,{\mathfrak{m}},\widehat{a})\) is split-normal; in particular, if \((P,{\mathfrak{m}},\widehat{a})\) is normal of order \(r=1\), then it is split-normal. If \((P,{\mathfrak{m}},\widehat{a})\) is split-normal, then so are \((bP,{\mathfrak{m}},\widehat{a})\) for \(b\neq 0\) and \((P_{\times{\mathfrak{n}}},{\mathfrak{m}}/{\mathfrak{n}},\widehat{a}/{ \mathfrak{n}})\). Note also that if \((P,{\mathfrak{m}},\widehat{a})\) is split-normal, then with \(Q\) as in (SN2) we have \({\mathfrak{v}}(L)\sim{\mathfrak{v}}(L_{Q})\), by Lemma 3.1.1. If \((P,{\mathfrak{m}},\widehat{a})\) is split-normal and \(H\) is \(\lambda\)-free, then \(\mathscr{E}^{\rm c}(L)=\mathscr{E}^{\rm c}(L_{Q})\) with \(Q\) as in (SN2), by Lemmas 4.3.4 and 3.1.22.
**Lemma 4.3.5**.: _Suppose \((P,{\mathfrak{m}},\widehat{a})\) is split-normal and \(\phi\preccurlyeq 1\) is active in \(H\) and \(\phi>0\)\((\)so \(H^{\phi}\) is still an \(H\)-field\()\). Then the slot \((P^{\phi},{\mathfrak{m}},\widehat{a})\) in \(H^{\phi}\) is split-normal._
Proof.: We first arrange \(\mathfrak{m}=1\). Note that \(L_{P^{\phi}}=L^{\phi}\) has order \(r\). Put \(\mathfrak{w}:=\mathfrak{v}(L_{P^{\phi}})\), and take \(Q\), \(R\) as in (SN2). Then \(\mathfrak{v}\asymp_{\Delta(\mathfrak{v})}\mathfrak{w}\prec_{\phi}^{\flat}1\) by Lemma 3.1.19. Moreover, \(L_{Q^{\phi}}=L_{Q}^{\phi}\) splits over \(K^{\phi}\); see [ADH, p. 291] or Lemma 1.1.2. By [ADH, 11.1.4],
\[R^{\phi}\ \asymp_{\Delta({\mathfrak{v}})}R\ \prec_{\Delta({\mathfrak{v}})} {\mathfrak{v}}^{w+1}P_{1}\ \asymp_{\Delta({\mathfrak{v}})}{\mathfrak{w}}^{w+1}P_{1}^{\phi},\]
so \((P^{\phi},{\mathfrak{m}},\widehat{a})\) is split-normal.
Since we need to preserve \(H\) being an \(H\)-field when compositionally conjugating, we say: \((P^{\phi},{\mathfrak{m}},\widehat{a})\)_is eventually split-normal_ if there exists an active \(\phi_{0}\) in \(H\) such that \((P^{\phi},{\mathfrak{m}},\widehat{a})\) is split-normal for all active \(\phi\preccurlyeq\phi_{0}\) in \(H\) with \(\phi>0\). We use this terminology in a similar way with "split-normal" replaced by other properties of slots of order \(r\geqslant 1\) in real closed \(H\)-fields with small derivation and asymptotic integration, such as "deep" and "deep and split-normal".
**Achieving split-normality**.: _Assume \(H\) is \(\omega\)-free and \((P,\mathfrak{m},\widehat{a})\) is a minimal hole in \(K=H[i]\) of order \(r\geqslant 1\), with \(\mathfrak{m}\in H^{\times}\) and \(\widehat{a}\in\widehat{K}\setminus K\)._ Note that then \(K\) is \(\omega\)-free by [ADH, 11.7.23], \(K\) is \((r-1)\)-newtonian by Corollary 3.2.3, and \(K\) is \(r\)-linearly closed by Corollary 3.2.4. In particular, the linear part of \((P,\mathfrak{m},\widehat{a})\) is \(0\) or splits over \(K\). If \(\deg P=1\), then \(r=1\) by Corollary 3.2.8. If \(\deg P>1\), then \(K\) and \(H\) are \(r\)-linearly newtonian by Corollary 3.2.6 and Lemma 1.8.30. In particular, if \(H\) is \(1\)-linearly newtonian, then \(H\) is \(r\)-linearly newtonian. _In this subsection we let \(a\) range over \(K\), \(b\), \(c\) over \(H\), and \(\mathfrak{n}\) over \(H^{\times}\)._
**Lemma 4.3.6**.: _Let \((Q,{\mathfrak{n}},\widehat{b})\) be a hole in \(H\) with \({\rm c}(Q)\leqslant{\rm c}(P)\) and \(\widehat{b}\in\widehat{H}\). Then \({\rm c}(Q)={\rm c}(P)\), \((Q,{\mathfrak{n}},\widehat{b})\) is minimal and remains a minimal hole in \(K\). The linear part of \((Q,{\mathfrak{n}},\widehat{b})\) is \(0\) or splits over \(K\), and \((Q,{\mathfrak{n}},\widehat{b})\) has a refinement \((Q_{+b},{\mathfrak{p}},\widehat{b}-b)\)\((\)in \(H)\) such that \((Q_{+b}^{\phi},{\mathfrak{p}},\widehat{b}-b)\) is eventually deep and split-normal._
Proof.: By Corollary 4.3.2, \((Q,{\mathfrak{n}},\widehat{b})\) is a hole in \(K\), and this hole in \(K\) is minimal with \({\rm c}(Q)={\rm c}(P)\), since \((P,{\mathfrak{m}},\widehat{a})\) is minimal. By Corollary 4.3.2 again, \((Q,{\mathfrak{n}},\widehat{b})\) as a hole in \(H\) is also minimal. Since \(K\) is \(r\)-linearly closed, the linear part of \((Q,{\mathfrak{n}},\widehat{b})\) is \(0\) or splits over \(K\). Corollary 3.3.34 gives a refinement \((Q_{+b},{\mathfrak{p}},\widehat{b}-b)\) of the minimal hole \((Q,{\mathfrak{n}},\widehat{b})\) in \(H\) such that \((Q_{+b}^{\phi},{\mathfrak{p}},\widehat{b}-b)\) is deep and normal, eventually. Thus the linear part of \((Q_{+b},{\mathfrak{p}},\widehat{b}-b)\) is not \(0\), and as \({\rm c}(Q_{+b})={\rm c}(P)\), this linear
part splits over \(K\). Hence for active \(\phi\) in \(H\) the linear part of \((Q_{+b}^{\phi},\mathfrak{p},\widehat{b}-b)\) splits over \(K^{\phi}=H^{\phi}[i]\). Thus \((Q_{+b}^{\phi},\mathfrak{p},\widehat{b}-b)\) is eventually split-normal.
Now \(\widehat{a}=\widehat{b}+\widehat{c}\,i\) with \(\widehat{b},\widehat{c}\in\widehat{H}\), and \(\widehat{b},\widehat{c}\prec\mathfrak{m}\). Moreover, \(\widehat{b}\notin H\) or \(\widehat{c}\notin H\). Since \(\widehat{a}\) is differentially algebraic over \(H\), so is its conjugate \(\widehat{b}-\widehat{c}\,i\), and therefore its real and imaginary parts \(\widehat{b}\) and \(\widehat{c}\) are differentially algebraic over \(H\); thus \(Z(H,\widehat{b})\neq\emptyset\) for \(\widehat{b}\notin H\), and \(Z(H,\widehat{c})\neq\emptyset\) for \(\widehat{c}\notin H\). More precisely:
**Lemma 4.3.7**.: _We have \(\operatorname{trdeg}\bigl{(}H\langle\widehat{b}\rangle|H\bigr{)}\leqslant 2r\). If \(\widehat{b}\notin H\), then \(Z(H,\widehat{b})\cap H[Y]=\emptyset\), so \(1\leqslant\operatorname{order}Q\leqslant 2r\) for all \(Q\in Z(H,\widehat{b})\) of minimal complexity. These statements also hold for \(\widehat{c}\) instead of \(\widehat{b}\)._
Proof.: The first statement follows from \(\widehat{b}\in H\langle\widehat{b}+\widehat{c}\,i,\widehat{b}-\widehat{c}\,i\rangle\). Suppose \(\widehat{b}\notin H\). If \(Q\in Z(H,\widehat{b})\) has minimal complexity, then [ADH, 11.4.8] yields an element \(f\) in a proper immediate asymptotic extension of \(H\) with \(Q(f)=0\), so \(Q\notin H[Y]\).
**Lemma 4.3.8**.: _Suppose \(\deg P=1\) and \(\widehat{b}\notin H\). Let \(Q\in Z(H,\widehat{b})\) be of minimal complexity; then either \(\operatorname{order}Q=1\), or \(\operatorname{order}Q=2\), \(\deg Q=1\). Let \(\widehat{Q}\in H\{Y\}\) be a minimal annihilator of \(\widehat{b}\) over \(H\); then either \(\operatorname{order}\widehat{Q}=1\), or \(\operatorname{order}\widehat{Q}=2\), \(\deg\widehat{Q}=1\), and \(L_{\widehat{Q}}\in H[\partial]\) splits over \(K\)._
Proof.: Recall that \(r=1\) by Corollary 3.2.8. Example 1.1.7 and Lemma 1.1.8 give a \(\widetilde{Q}\in H\{Y\}\) of degree \(1\) and order \(1\) or \(2\) such that \(\widetilde{Q}(\widehat{b})=0\) and \(L_{\widetilde{Q}}\) splits over \(K\). Then \(\operatorname{c}(\widetilde{Q})=(1,1,1)\) or \(\operatorname{c}(\widetilde{Q})=(2,1,1)\), which proves the claim about \(Q\), using also Lemma 4.3.7. Also, \(\widehat{Q},\widetilde{Q}\in Z(H,\widehat{b})\), hence \(\operatorname{c}(Q)\leqslant\operatorname{c}(\widehat{Q})\leqslant\operatorname{c}(\widetilde{Q})\). If \(\operatorname{c}(\widehat{Q})=\operatorname{c}(\widetilde{Q})\), then \(\widehat{Q}=a\widetilde{Q}\) for some \(a\in H^{\times}\). The claim about \(\widehat{Q}\) now follows easily.
By Corollary 3.3.34 and Lemma 3.3.23, our minimal hole \((P,\mathfrak{m},\widehat{a})\) in \(K\) has a refinement \((P_{+a},\mathfrak{n},\widehat{a}-a)\) such that eventually \((P_{+a}^{\phi},\mathfrak{n},\widehat{a}-a)\) is deep and normal. Moreover, as \(K\) is \(r\)-linearly closed, the linear part of \((P_{+a}^{\phi},\mathfrak{n},\widehat{a}-a)\) (for active \(\phi\) in \(K\)) splits over \(K^{\phi}=H^{\phi}[i]\). Our main goal in this subsection is to prove analogues of these facts for suitable \(Z\)-minimal slots \((Q,\mathfrak{m},\widehat{b})\) or \((R,\mathfrak{m},\widehat{c})\) in \(H\):
**Theorem 4.3.9**.: _If \(H\) is \(1\)-linearly newtonian, then one of the following holds:_
(i) \(\widehat{b}\notin H\) _and there exists a \(Z\)-minimal slot \((Q,\mathfrak{m},\widehat{b})\) in \(H\) with a refinement \((Q_{+b},\mathfrak{n},\widehat{b}-b)\) such that \((Q_{+b}^{\phi},\mathfrak{n},\widehat{b}-b)\) is eventually deep and split-normal;_
(ii) \(\widehat{c}\notin H\) _and there exists a \(Z\)-minimal slot \((R,\mathfrak{m},\widehat{c})\) in \(H\) with a refinement \((R_{+c},\mathfrak{n},\widehat{c}-c)\) such that \((R_{+c}^{\phi},\mathfrak{n},\widehat{c}-c)\) is eventually deep and split-normal._
Lemmas 4.3.10, 4.3.11 and Corollaries 4.3.13-4.3.16 below are more precise (only Corollary 4.3.15 has \(H\) being \(1\)-linearly newtonian as a hypothesis) and together give Theorem 4.3.9. We first deal with the case where \(\widehat{b}\) or \(\widehat{c}\) is in \(H\):
**Lemma 4.3.10**.: _Suppose \(\widehat{c}\in H\). Then some hole \((Q,\mathfrak{m},\widehat{b})\) in \(H\) has the same complexity as \((P,\mathfrak{m},\widehat{a})\). Any such hole \((Q,\mathfrak{m},\widehat{b})\) in \(H\) is minimal and has a refinement \((Q_{+b},\mathfrak{n},\widehat{b}-b)\) such that \((Q_{+b}^{\phi},\mathfrak{n},\widehat{b}-b)\) is eventually deep and split-normal._
Proof.: Let \(A,B\in H\{Y\}\) be such that \(P_{+\widehat{c}\,i}=A+Bi\). Then \(A(\widehat{b}),B(\widehat{b})\in\widehat{H}\) and \(A(\widehat{b})+B(\widehat{b})i=P_{+\widehat{c}\,i}(\widehat{b})=P(\widehat{a})=0\), so \(A(\widehat{b})=B(\widehat{b})=0\). Moreover, \(\widehat{b}=\widehat{a}-\widehat{c}\,i\notin H\) (otherwise \(\widehat{a}\in K\)) and \(\widehat{b}\prec\mathfrak{m}\). Now \(A\neq 0\) or \(B\neq 0\), and \(\operatorname{c}(A),\operatorname{c}(B)\leqslant\operatorname{c}(P_{+\widehat{c}\,i})=\operatorname{c}(P)\), so we obtain a hole \((Q,\mathfrak{m},\widehat{b})\) in \(H\) with \(Q\in\{A,B\}^{\neq}\) and \(\operatorname{c}(Q)\leqslant\operatorname{c}(P)\); by Lemma 4.3.6, \(\operatorname{c}(Q)=\operatorname{c}(P)\). The remaining claims are those of Lemma 4.3.6.
Then \(\dot{H}\) is real closed with convex valuation ring and \(\dot{K}\) is an algebraic closure of \(\dot{H}\). Moreover, \(\hat{\dot{H}}\) is an immediate extension of \(\dot{H}\) and \(\hat{\dot{K}}\) is an immediate extension of \(\dot{K}\). Denoting the image of \(i\) under the residue morphism \(\dot{\mathcal{O}}_{\hat{K}}\to\hat{\dot{K}}\) by the same symbol, we then have \(\dot{K}=\dot{H}[i]\), \(\hat{\dot{K}}=\hat{\dot{H}}[i]\), and \(i\notin\hat{\dot{H}}\). This gives the following inclusion diagram:

\[\begin{array}{ccc} \dot{K} & \subseteq & \hat{\dot{K}}\\ \cup & & \cup\\ \dot{H} & \subseteq & \hat{\dot{H}} \end{array}\]
Now \(\widehat{a}\in\mathcal{O}_{\hat{K}}\subseteq\dot{\mathcal{O}}_{\hat{K}}\) and \(\widehat{b},\widehat{c}\in\mathcal{O}_{\hat{H}}\subseteq\dot{\mathcal{O}}_{\hat{H}}\), and \(\hat{\dot{a}}=\hat{\dot{b}}+\hat{\dot{c}}i\), \(\operatorname{Re}\hat{\dot{a}}=\hat{\dot{b}}\), \(\operatorname{Im}\hat{\dot{a}}=\hat{\dot{c}}\). For all \(a\in\dot{\mathcal{O}}_{K}\) we have \(v(\hat{\dot{a}}-\dot{a})=v(\widehat{a}-a)\in\Delta\), hence \(\hat{\dot{a}}\notin\dot{K}\); likewise \(v(\hat{\dot{b}}-\dot{b})=v(\widehat{b}-b)\in\Delta\) for all \(b\in\dot{\mathcal{O}}\), so \(\hat{\dot{b}}\notin\dot{H}\). Moreover, for all \(\delta\in\Delta\) there is an \(a\in\dot{\mathcal{O}}_{K}\) with \(v(\hat{\dot{a}}-\dot{a})=\delta\); hence \(\hat{\dot{a}}\) is the limit of a c-sequence in \(\dot{K}\). This leads us to consider the completions \(\dot{H}^{\text{c}}\) and \(\dot{K}^{\text{c}}\) of \(\dot{H}\) and \(\dot{K}\). By [ADH, 4.4.11] and Lemma 4.1.1, these yield an inclusion diagram of valued differential field extensions:

\[\begin{array}{ccc} \dot{K} & \subseteq & \dot{K}^{\text{c}}\\ \cup & & \cup\\ \dot{H} & \subseteq & \dot{H}^{\text{c}} \end{array}\]
where \(\dot{H}^{\text{c}}\) is real closed with algebraic closure \(\dot{K}^{\text{c}}=\dot{H}^{\text{c}}[i]\). These completions are d-valued by [ADH, 9.1.6]. By Corollary 1.8.5, \(\dot{K}\) and \(\dot{K}^{\text{c}}\) are \(\omega\)-free and \((r-1)\)-newtonian; thus \(\dot{K}^{\text{c}}\) is \(r\)-linearly closed by Corollary 1.8.42. We identify the valued differential subfield \(\dot{K}\big{\langle}\operatorname{Re}\hat{\dot{a}},\operatorname{Im}\hat{\dot{a}}\big{\rangle}\) of \(\hat{\dot{K}}\) with its image under the embedding into \(\dot{K}^{\text{c}}\) over \(\dot{K}\) from Corollary 4.1.7; then \(\hat{\dot{a}}\in\dot{K}^{\text{c}}\) and \(\hat{\dot{b}}=\operatorname{Re}\hat{\dot{a}}\in\dot{H}^{\text{c}}\). This leads to the next inclusion diagram:

\[\begin{array}{ccccc} \dot{K} & \subseteq & \dot{K}\big{\langle}\operatorname{Re}\hat{\dot{a}},\operatorname{Im}\hat{\dot{a}}\big{\rangle} & \subseteq & \dot{K}^{\text{c}}\\ \cup & & & & \cup\\ \dot{H} & & \subseteq & & \dot{H}^{\text{c}} \end{array}\]
By Corollary 1.6.21, \(\dot{P}\in\dot{K}\{Y\}\) is a minimal annihilator of \(\hat{\dot{a}}\) over \(\dot{K}\) and has the same complexity as \(P\). Likewise, \(\dot{Q}\in\dot{H}\{Y\}\) is a minimal annihilator of \(\hat{\dot{b}}\) over \(\dot{H}\) and has the same complexity as \(Q\). Let \(s:=\operatorname{order}Q=\operatorname{order}\dot{Q}\), so \(1\leqslant s\leqslant 2r\) by Lemma 4.3.7, and the linear part \(A\in\dot{H}^{\text{c}}[\partial]\) of \(\dot{Q}_{+\hat{\dot{b}}}\) has order \(s\) as well. By [ADH, 5.1.37] applied to \(\dot{H}^{\text{c}}\), \(\dot{H}\), \(\dot{P}\), \(\dot{Q}\), \(\hat{\dot{a}}\) in the role of \(K\), \(F\), \(P\), \(S\), \(f\), respectively, \(A\) splits over \(\dot{K}^{\text{c}}=\dot{H}^{\text{c}}[i]\), so Lemma 1.1.4 gives a real splitting \((g_{1},\dots,g_{s})\) of \(A\) over \(\dot{K}^{\text{c}}\):
\[A\ =\ f(\partial-g_{1})\cdots(\partial-g_{s}),\qquad f,g_{1},\dots,g_{s}\in\dot{ K}^{\text{c}},\ f\neq 0.\]
The slot \((Q,1,\widehat{b})\) in \(H\) is normal, so \(\mathfrak{v}(L_{Q_{+\widehat{b}}})\sim\mathfrak{v}(L_{Q})\prec^{\flat}1\) by Lemma 3.1.27, hence \(\mathfrak{v}(A)\prec^{\flat}1\) in \(\dot{K}^{\text{c}}\) by Lemma 3.1.7. Then Corollary 4.2.8 gives \(a,b\in\dot{\mathcal{O}}\) and \(b_{1},\ldots,b_{s}\in\dot{\mathcal{O}}_{K}\) with \(\dot{a},\dot{b}\neq 0\) in \(\dot{H}\) such that for the linear part \(\widetilde{A}\in\dot{H}[\partial]\) of \(\dot{Q}_{+\dot{b}}\),
\[\dot{b}\ \sim\ \overset{\wedge}{b},\qquad\widetilde{A}\ \sim\ A,\qquad \operatorname{order}\widetilde{A}\ =\ s,\qquad\mathfrak{w}\ :=\ \mathfrak{v}(\widetilde{A})\ \sim\ \mathfrak{v}(A),\]
and such that for \(w:=\operatorname{wt}(Q)\) and with \(\Delta(\mathfrak{w})\subseteq\Delta\):
\[\widetilde{A}=\widetilde{B}+\widetilde{E},\quad\widetilde{B}=\dot{a}(\partial-\dot{b}_{1})\cdots(\partial-\dot{b}_{s})\in\dot{H}[\partial],\quad\widetilde{E}\in\dot{H}[\partial],\quad\widetilde{E}\prec_{\Delta(\mathfrak{w})}\mathfrak{w}^{w+1}\widetilde{A},\]
and \((\dot{b}_{1},\ldots,\dot{b}_{s})\) is a real splitting of \(\widetilde{B}\) over \(\dot{K}\). Lemma 1.1.6 shows that we can change \(b_{1},\ldots,b_{s}\) if necessary, without changing \(\dot{b}_{1},\ldots,\dot{b}_{s}\), to arrange that \(B:=a(\partial-b_{1})\cdots(\partial-b_{s})\) lies in \(\dot{\mathcal{O}}[\partial]\subseteq H[\partial]\) and \((b_{1},\ldots,b_{s})\) is a real splitting of \(B\) over \(K\). Now \(\widehat{b}-b\prec\widehat{b}\prec 1\), so \((Q_{+b},1,\widehat{b}-b)\) is a refinement of the normal slot \((Q,1,\widehat{b})\). Hence \((Q_{+b},1,\widehat{b}-b)\) is normal by Proposition 3.3.25, so \(\mathfrak{v}:=\mathfrak{v}(L_{Q_{+b}})\prec^{\flat}1\). By Lemma 3.1.7 we have \(\dot{\mathfrak{v}}=\mathfrak{w}\), so \(\Delta(\mathfrak{v})=\Delta(\mathfrak{w})\subseteq\Delta\). Hence in \(H[\partial]\):
\[L_{Q_{+b}}\ =\ B+E,\quad E\in\dot{\mathcal{O}}[\partial],\ E\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}L_{Q_{+b}}.\]
Thus \((Q_{+b},1,\widehat{b}-b)\) is split-normal.
Recall from the beginning of this subsection that if \(\deg P>1\), then \(K=H[i]\) is \(r\)-linearly newtonian; this allows us to remove the assumptions that \((P,\mathfrak{m},\widehat{a})\) is special and \((Q,\mathfrak{m},\widehat{b})\) is normal in Proposition 4.3.12, by reducing to that case:
**Corollary 4.3.13**.: _Suppose \(\deg P>1\) and \(v(\widehat{b}-H)\subseteq v(\widehat{c}-H)\). Then \((Q,\mathfrak{m},\widehat{b})\) has a special refinement \((Q_{+b},\mathfrak{n},\widehat{b}-b)\) such that \((Q_{+b}^{\phi},\mathfrak{n},\widehat{b}-b)\) is eventually deep and split-normal._
Proof.: By Lemmas 3.2.26 and 3.3.23, the hole \((P,\mathfrak{m},\widehat{a})\) in \(K\) has a quasilinear refinement \((P_{+a},\mathfrak{n},\widehat{a}-a)\). (The use of Lemma 3.3.23 is because we require \(\mathfrak{n}\in H^{\times}\).) Let \(b=\operatorname{Re}a\). Then, using Lemma 4.1.3 for the second equality,
\[v\big{(}(\widehat{a}-a)-K\big{)}\ =\ v(\widehat{a}-K)\ =\ v(\widehat{b}-H)\ =\ v \big{(}(\widehat{b}-b)-H\big{)},\]
and \((Q_{+b},\mathfrak{n},\widehat{b}-b)\) is a \(Z\)-minimal refinement of \((Q,\mathfrak{m},\widehat{b})\). We replace \((P,\mathfrak{m},\widehat{a})\) and \((Q,\mathfrak{m},\widehat{b})\) by \((P_{+a},\mathfrak{n},\widehat{a}-a)\) and \((Q_{+b},\mathfrak{n},\widehat{b}-b)\), respectively, to arrange that the hole \((P,\mathfrak{m},\widehat{a})\) in \(K\) is quasilinear. Then by Proposition 1.6.12 and \(K\) being \(r\)-linearly newtonian, \((P,\mathfrak{m},\widehat{a})\) is special. Hence \((Q,\mathfrak{m},\widehat{b})\) is also special, so Proposition 3.3.36 gives a refinement \((Q_{+b},\mathfrak{n},\widehat{b}-b)\) of \((Q,\mathfrak{m},\widehat{b})\) and an active \(\phi_{0}\in H^{>}\) such that \((Q_{+b}^{\phi_{0}},\mathfrak{n},\widehat{b}-b)\) is deep and normal. Refinements of \((P,\mathfrak{m},\widehat{a})\) remain quasilinear, by Corollary 3.2.23. Since \(v(\widehat{b}-H)\subseteq v(\widehat{c}-H)\) we have a refinement \((P_{+a},\mathfrak{n},\widehat{a}-a)\) of \((P,\mathfrak{m},\widehat{a})\) with \(\operatorname{Re}a=b\). Then by Lemma 3.2.35 the minimal hole \((P_{+a}^{\phi_{0}},\mathfrak{n},\widehat{a}-a)\) in \(H^{\phi_{0}}[i]\) is special. Now apply Proposition 4.3.12 with \(H^{\phi_{0}}\), \((P_{+a}^{\phi_{0}},\mathfrak{n},\widehat{a}-a)\), \((Q_{+b}^{\phi_{0}},\mathfrak{n},\widehat{b}-b)\) in place of \(H\), \((P,\mathfrak{m},\widehat{a})\), \((Q,\mathfrak{m},\widehat{b})\), respectively: it gives \(b_{0}\in H\) and a refinement
\[\big{(}(Q_{+b}^{\phi_{0}})_{+b_{0}},\mathfrak{n},(\widehat{b}-b)-b_{0}\big{)} \ =\ \big{(}Q_{+(b+b_{0})}^{\phi_{0}},\mathfrak{n},\widehat{b}-(b+b_{0})\big{)}\]
of \((Q_{+b}^{\phi_{0}},\mathfrak{n},\widehat{b}-b)\), and thus a refinement \(\big{(}Q_{+(b+b_{0})},\mathfrak{n},\widehat{b}-(b+b_{0})\big{)}\) of \((Q_{+b},\mathfrak{n},\widehat{b}-b)\), such that \(\big{(}Q_{+(b+b_{0})}^{\phi},\mathfrak{n},\widehat{b}-(b+b_{0})\big{)}\) is eventually split-normal. By the remark before Proposition 4.3.12, \(\big{(}Q_{+(b+b_{0})}^{\phi},\mathfrak{n},\widehat{b}-(b+b_{0})\big{)}\) is also eventually deep.
Recall that \(v(\widehat{b}-H)\subseteq v(\widehat{c}-H)\) or \(v(\widehat{c}-H)\subseteq v(\widehat{b}-H)\). The following corollary concerns the second case:
**Corollary 4.3.14**.: _If \(\deg P>1\), \(v(\widehat{c}-H)\subseteq v(\widehat{b}-H)\), and \(R\in Z(H,\widehat{c})\) has minimal complexity, then the \(Z\)-minimal slot \((R,\mathfrak{m},\widehat{c})\) in \(H\) has a special refinement \((R_{+c},\mathfrak{n},\widehat{c}-c)\) such that \((R_{+c}^{\phi},\mathfrak{n},\widehat{c}-c)\) is eventually deep and split-normal._
Proof.: Apply Corollary 4.3.13 to the minimal hole \((P_{\times i},\mathfrak{m},-i\widehat{a})\) in \(H[i]\).
In the next two corollaries we handle the case \(\deg P=1\). Recall from Lemma 4.3.8 that then \(\operatorname{order}Q=1\) or \(\operatorname{order}Q=2\), \(\deg Q=1\). Theorem 3.3.3 gives:
**Corollary 4.3.15**.: _Suppose \(H\) is \(1\)-linearly newtonian and \(\operatorname{order}Q=1\). Then the slot \((Q,\mathfrak{m},\widehat{b})\) in \(H\) has a refinement \((Q_{+b},\mathfrak{n},\widehat{b}-b)\) such that \((Q_{+b}^{\phi},\mathfrak{n},\widehat{b}-b)\) is eventually deep and split-normal._
**Corollary 4.3.16**.: _Suppose \(\deg P=1\) and \(\operatorname{order}Q=2\), \(\deg Q=1\). Let \(\widehat{Q}\in H\{Y\}\) be a minimal annihilator of \(\widehat{b}\) over \(H\). Then \(\big{(}\widehat{Q},\mathfrak{m},\widehat{b}\big{)}\) is a \(Z\)-minimal hole in \(H\) and has a refinement \(\big{(}\widehat{Q}_{+b},\mathfrak{n},\widehat{b}-b\big{)}\) such that \(\big{(}\widehat{Q}_{+b}^{\phi},\mathfrak{n},\widehat{b}-b\big{)}\) is eventually deep and split-normal._
Proof.: By the proof of Lemma 4.3.8 we have \(\operatorname{c}(Q)=\operatorname{c}(\widehat{Q})\) (hence \(\big{(}\widehat{Q},\mathfrak{m},\widehat{b}\big{)}\) is a \(Z\)-minimal hole in \(H\)) and \(L_{\widehat{Q}}\) splits over \(H[i]\). Corollary 3.3.12 gives a refinement \(\big{(}\widehat{Q}_{+b},\mathfrak{n},\widehat{b}-b\big{)}\) of \(\big{(}\widehat{Q},\mathfrak{m},\widehat{b}\big{)}\) whose linear part has Newton weight \(0\) and such that the slot \(\big{(}\widehat{Q}_{+b}^{\phi},\mathfrak{n},\widehat{b}-b\big{)}\) in \(H^{\phi}\) is deep, eventually. Moreover, by Lemmas 3.3.17 and 3.2.31, \(\big{(}\widehat{Q}_{+b}^{\phi},\mathfrak{n},\widehat{b}-b\big{)}\) is normal and its linear part splits over \(H^{\phi}[i]\), eventually. Thus \(\big{(}\widehat{Q}_{+b}^{\phi},\mathfrak{n},\widehat{b}-b\big{)}\) is eventually deep and split-normal.
This concludes the proof of Theorem 4.3.9.
### Split-normality and refinements
We now study the behavior of split-normality under refinements. In this subsection \(a\) ranges over \(H\) and \(\mathfrak{m}\), \(\mathfrak{n}\), \(\mathfrak{v}\) range over \(H^{\times}\). Let \((P,\mathfrak{m},\widehat{a})\) be a slot in \(H\) of order \(r\geqslant 1\) with \(\widehat{a}\in\widehat{H}\setminus H\), and \(L:=L_{P_{\times m}}\), \(w:=\operatorname{wt}(P)\). Here is the split-normal analogue of Lemma 3.3.19:
**Lemma 4.3.17**.: _Suppose \(\operatorname{order}(L)=r\) and \(\mathfrak{v}\) is such that (SN1) and (SN2) hold, and \(\mathfrak{v}(L)\asymp_{\Delta(\mathfrak{v})}\mathfrak{v}\). Then \((P,\mathfrak{m},\widehat{a})\) is split-normal._
Proof.: Same as that of 3.3.19, but with \(R\) as in (SN2) instead of \((P_{\times\mathfrak{m}})_{>1}\).
Now split-normal analogues of Propositions 3.3.25 and 3.3.26:
**Lemma 4.3.18**.: _Suppose \((P,\mathfrak{m},\widehat{a})\) is split-normal. Let a refinement \((P_{+a},\mathfrak{m},\widehat{a}-a)\) of \((P,\mathfrak{m},\widehat{a})\) be given. Then \((P_{+a},\mathfrak{m},\widehat{a}-a)\) is also split-normal._
Proof.: As in the proof of Proposition 3.3.25 we arrange \(\mathfrak{m}=1\) and show for \(\mathfrak{v}:=\mathfrak{v}(L_{P})\), using Lemmas 3.1.27 and 4.3.4, that \(\operatorname{order}(L_{P_{+a}})=r\) and
\[(P_{+a})_{1}\sim_{\Delta(\mathfrak{v})}P_{1},\quad\mathfrak{v}(L_{P_{+a}})\sim_ {\Delta(\mathfrak{v})}\mathfrak{v},\quad(P_{+a})_{>1}\prec_{\Delta(\mathfrak{v })}\mathfrak{v}^{w+1}(P_{+a})_{1}.\]
Now take \(Q\), \(R\) as in (SN2) for \(\mathfrak{m}=1\). Then \(P_{1}=Q+R_{1}\), and so by Lemma 3.1.28 for \(A=L_{Q}\) we obtain \((P_{+a})_{1}-Q\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}(P_{+a})_{1}\), and thus \((P_{+a})_{\geqslant 1}-Q\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}(P_{+a})_{1}\). Hence (SN2) holds with \(\mathfrak{m}=1\) and \(P_{+a}\) instead of \(P\). Thus the slot \((P_{+a},\mathfrak{m},\widehat{a}-a)\) in \(H\) is split-normal by Lemma 4.3.17.
**Lemma 4.3.19**.: _Suppose \((P,\mathfrak{m},\widehat{a})\) is split-normal, \(\widehat{a}\prec\mathfrak{n}\preccurlyeq\mathfrak{m}\), and \([\mathfrak{n}/\mathfrak{m}]\leqslant[\mathfrak{v}]\), \(\mathfrak{v}:=\mathfrak{v}(L)\). Then the refinement \((P,\mathfrak{n},\widehat{a})\) of \((P,\mathfrak{m},\widehat{a})\) is split-normal: if \(\mathfrak{m}\), \(Q\), \(R\), \(\mathfrak{v}\) are as in (SN2), then (SN2) holds with \(\mathfrak{n}\), \(Q_{\times\mathfrak{n}/\mathfrak{m}}\), \(R_{\times\mathfrak{n}/\mathfrak{m}}\), \(\mathfrak{v}(L_{P_{\times\mathfrak{n}}})\) in place of \(\mathfrak{m}\), \(Q\), \(R\), \(\mathfrak{v}\)._
Proof.: Set \(\widetilde{L}:=L_{P_{\times\mathfrak{n}}}\). Lemma 3.3.1 gives \(\operatorname{order}(\widetilde{L})=r\) and \(\mathfrak{v}(\widetilde{L})\asymp_{\Delta(\mathfrak{v})}\mathfrak{v}\). Thus \((P_{\times\mathfrak{n}})_{>1}\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}( P_{\times\mathfrak{n}})_{1}\) by Proposition 3.3.26. Now arrange \(\mathfrak{m}=1\) in the usual way, and take \(Q\), \(R\) as in (SN2) for \(\mathfrak{m}=1\). Then
\[(P_{\times\mathfrak{n}})_{1}\ =\ (P_{1})_{\times\mathfrak{n}}\ =\ Q_{\times \mathfrak{n}}+(R_{1})_{\times\mathfrak{n}},\qquad(P_{\times\mathfrak{n}})_{>1 }\ =\ (R_{\times\mathfrak{n}})_{>1}\ =\ (R_{>1})_{\times\mathfrak{n}}\]
by [ADH, 4.3], where \(Q_{\times\mathfrak{n}}\) is homogeneous of degree \(1\) and order \(r\), and \(L_{Q_{\times\mathfrak{n}}}=L_{Q}\mathfrak{n}\) splits over \(K\). Using [ADH, 4.3, 6.1.3] and \([\mathfrak{n}]\leqslant[\mathfrak{v}]\) we obtain
\[(R_{1})_{\times\mathfrak{n}}\ \asymp_{\Delta(\mathfrak{v})}\ \mathfrak{n}R_{1}\ \preccurlyeq \mathfrak{n}R\ \prec_{\Delta(\mathfrak{v})}\mathfrak{n}\mathfrak{v}^{w+1}P_{1}\ \asymp_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}(P_{1})_{\times\mathfrak{n}} \ =\ \mathfrak{v}^{w+1}(P_{\times\mathfrak{n}})_{1}.\]
Hence (SN2) holds for \(\mathfrak{n},Q_{\times\mathfrak{n}},R_{\times\mathfrak{n}},\mathfrak{v}( \widetilde{L})\) in place of \(\mathfrak{m},Q,R,\mathfrak{v}\).
Recall our standing assumption in this section that \(H\) is a real closed \(H\)-field. Thus \(H\) is d-valued, and for all \(\mathfrak{n}\) and \(q\in\mathbb{Q}^{>}\) we have \(\mathfrak{n}^{q}\in H^{\times}\) such that \((\mathfrak{n}^{q})^{\dagger}=q\mathfrak{n}^{\dagger}\). _In the rest of this section we fix such an \(\mathfrak{n}^{q}\) for all \(\mathfrak{n}\) and \(q\in\mathbb{Q}^{>}\)_. Now we upgrade Corollary 3.3.31 with "split-normal" instead of "normal":
**Lemma 4.3.20**.: _Suppose \(\mathfrak{m}=1\), \((P,1,\widehat{a})\) is split-normal, \(\widehat{a}\prec\mathfrak{n}\prec 1\), and for \(\mathfrak{v}:=\mathfrak{v}(L_{P})\) we have \([\mathfrak{n}^{\dagger}]<[\mathfrak{v}]<[\mathfrak{n}]\). Then \((P,\mathfrak{n}^{q},\widehat{a})\) is a split-normal refinement of \((P,1,\widehat{a})\) for all but finitely many \(q\in\mathbb{Q}\) with \(0<q<1\)._
Proof.: Corollary 3.3.31 gives that \((P,\mathfrak{n}^{q},\widehat{a})\) is a normal refinement of \((P,1,\widehat{a})\) for all but finitely many \(q\in\mathbb{Q}\) with \(0<q<1\). Take \(Q\), \(R\) as in (SN2) for \(\mathfrak{m}=1\). Then \(L=L_{Q}+L_{R}\) where \(L_{Q}\) splits over \(H[i]\) and \(L_{R}\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}L\), for \(\mathfrak{v}:=\mathfrak{v}(L)\). Applying Corollary 3.1.18 to \(A:=L\), \(A_{*}:=L_{R}\) we obtain: \(L_{R}\mathfrak{n}^{q}\prec_{\Delta(\mathfrak{w})}\mathfrak{w}^{w+1}L\mathfrak{n}^{q}\), \(\mathfrak{w}:=\mathfrak{v}(L\mathfrak{n}^{q})\), for all but finitely many \(q\in\mathbb{Q}^{>}\).
Let \(q\in\mathbb{Q}\) be such that \(0<q<1\), \((P,\mathfrak{n}^{q},\widehat{a})\) is a normal refinement of \((P,1,\widehat{a})\), and \(L_{R}\mathfrak{n}^{q}\prec_{\Delta(\mathfrak{w})}\mathfrak{w}^{w+1}L\mathfrak{n}^{q}\), with \(\mathfrak{w}\) as above. Then \((P_{\times\mathfrak{n}^{q}})_{1}=Q_{\times\mathfrak{n}^{q}}+(R_{1})_{\times\mathfrak{n}^{q}}\) where \(Q_{\times\mathfrak{n}^{q}}\) is homogeneous of degree \(1\) and order \(r\), \(L_{Q_{\times\mathfrak{n}^{q}}}=L_{Q}\mathfrak{n}^{q}\) splits over \(H[i]\), and \((R_{1})_{\times\mathfrak{n}^{q}}\prec_{\Delta(\mathfrak{w})}\mathfrak{w}^{w+1}(P_{\times\mathfrak{n}^{q}})_{1}\) for \(\mathfrak{w}:=\mathfrak{v}(L_{P_{\times\mathfrak{n}^{q}}})\). Since \((P,\mathfrak{n}^{q},\widehat{a})\) is normal, we also have \((P_{\times\mathfrak{n}^{q}})_{>1}\prec_{\Delta(\mathfrak{w})}\mathfrak{w}^{w+1}(P_{\times\mathfrak{n}^{q}})_{1}\). Thus \((P,\mathfrak{n}^{q},\widehat{a})\) is split-normal.
_Remark_.: We do not know if in this last lemma we can drop the assumption \([\mathfrak{n}^{\dagger}]<[\mathfrak{v}]\).
**Strengthening split-normality**.: _In this subsection \(a,b\) range over \(H\) and \(\mathfrak{m},\mathfrak{n}\) over \(H^{\times}\), and \((P,\mathfrak{m},\widehat{a})\) is a slot in \(H\) of order \(r\geqslant 1\) and weight \(w:=\operatorname{wt}(P)\), so \(w\geqslant 1\), and \(L:=L_{P_{\times\mathfrak{m}}}\). If \(\operatorname{order}L=r\), we set \(\mathfrak{v}:=\mathfrak{v}(L)\)._
With an eye towards later use in connection with fixed point theorems over Hardy fields we strengthen here the concept of split-normality; in the next subsection we show how to improve Theorem 4.3.9 accordingly. See the last subsection of Section 4.2 for the notion of strong splitting.
**Definition 4.3.21**.: Call \((P,\mathfrak{m},\widehat{a})\)**almost strongly split-normal** if \(\operatorname{order}L=r\), \(\mathfrak{v}\prec^{\flat}1\), and the following strengthening of (SN2) holds:
(SN2as) \((P_{\times\mathfrak{m}})_{\geqslant 1}=Q+R\) where \(Q,R\in H\{Y\}\), \(Q\) is homogeneous of degree \(1\) and order \(r\), \(L_{Q}\) splits strongly over \(K\), and \(R\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}(P_{\times\mathfrak{m}})_{1}\).
We say that \((P,\mathfrak{m},\widehat{a})\) is **strongly split-normal** if \(\operatorname{order}L=r\), \(\mathfrak{v}\prec^{\flat}1\), and the following condition is satisfied:
(SN2s) \(P_{\times\mathfrak{m}}=Q+R\) where \(Q,R\in H\{Y\}\), \(Q\) is homogeneous of degree \(1\) and order \(r\), \(L_{Q}\) splits strongly over \(K\), and \(R\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}(P_{\times\mathfrak{m}})_{1}\).
To facilitate use of (SN2s) we observe:
**Lemma 4.3.22**.: _Suppose \((P,\mathfrak{m},\widehat{a})\) is strongly split-normal and \(P_{\times\mathfrak{m}}=Q+R\) as in (SN2s). Then \(Q\sim(P_{\times\mathfrak{m}})_{1}\), \(\mathfrak{v}_{Q}:=\mathfrak{v}(L_{Q})\sim\mathfrak{v}\), so \(R\prec_{\Delta(\mathfrak{v})}\mathfrak{v}_{Q}^{w+1}Q\)._
Proof.: We have \((P_{\times\mathfrak{m}})_{1}=Q+R_{1}\), so \(Q=(P_{\times\mathfrak{m}})_{1}-R_{1}\) with \(R_{1}\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}(P_{\times\mathfrak{m}})_ {1}\). Now apply Lemma 3.1.1 to \(A:=L\) and \(B:=-L_{R_{1}}\).
If \((P,\mathfrak{m},\widehat{a})\) is almost strongly split-normal, then \((P,\mathfrak{m},\widehat{a})\) is split-normal and hence normal by Lemma 4.3.4. If \((P,\mathfrak{m},\widehat{a})\) is normal and \(L\) splits strongly over \(K\), then \((P,\mathfrak{m},\widehat{a})\) is almost strongly split-normal; in particular, if \((P,\mathfrak{m},\widehat{a})\) is normal of order \(r=1\), then \((P,\mathfrak{m},\widehat{a})\) is almost strongly split-normal, by Lemma 4.2.11. Moreover:
**Lemma 4.3.23**.: _The following are equivalent:_
(i) \((P,\mathfrak{m},\widehat{a})\) _is strongly split-normal;_
(ii) \((P,\mathfrak{m},\widehat{a})\) _is almost strongly split-normal and strictly normal;_
(iii) \((P,\mathfrak{m},\widehat{a})\) _is almost strongly split-normal and_ \(P(0)\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}(P_{1})_{\times\mathfrak{m}}\)_._
Proof.: Suppose \((P,\mathfrak{m},\widehat{a})\) is strongly split-normal, and let \(Q\), \(R\) be as in (SN2s). Then \((P_{\times\mathfrak{m}})_{\geqslant 1}=Q+R_{\geqslant 1}\), \(L_{Q}\) splits strongly over \(K\), and \(R_{\geqslant 1}\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}(P_{\times \mathfrak{m}})_{1}\). Hence \((P,\mathfrak{m},\widehat{a})\) is almost strongly split-normal, and thus normal. Also \(P(0)=R(0)\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}(P_{\times \mathfrak{m}})_{1}\), so \((P,\mathfrak{m},\widehat{a})\) is strictly normal. This shows (i) \(\Rightarrow\) (ii), and (ii) \(\Rightarrow\) (iii) is clear. For (iii) \(\Rightarrow\) (i) suppose \((P,\mathfrak{m},\widehat{a})\) is almost strongly split-normal and \(P(0)\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}(P_{1})_{\times\mathfrak{m}}\). Take \(Q\), \(R\) as in (SN2as). Then \(P_{\times\mathfrak{m}}=Q+\widetilde{R}\) where \(\widetilde{R}:=P(0)+R\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}(P_{1})_{ \times\mathfrak{m}}\). Thus \((P,\mathfrak{m},\widehat{a})\) is strongly split-normal.
**Corollary 4.3.24**.: _If \(L\) splits strongly over \(K\), then \((P,\mathfrak{m},\widehat{a})\) is strongly split-normal if and only if \((P,\mathfrak{m},\widehat{a})\) is strictly normal._
The following diagram summarizes some implications between these variants of normality, for slots \((P,\mathfrak{m},\widehat{a})\) in \(H\) of order \(r\geqslant 1\):
\[\begin{array}{ccccc} \text{strongly split-normal} & \Longrightarrow & \text{almost strongly split-normal} & \Longrightarrow & \text{split-normal}\\ \Big\Downarrow & & & & \Big\Downarrow\\ \text{strictly normal} & & \Longrightarrow & & \text{normal} \end{array}\]
If \((P,\mathfrak{m},\widehat{a})\) is almost strongly split-normal, then so are \((bP,\mathfrak{m},\widehat{a})\) for \(b\neq 0\) and \((P_{\times\mathfrak{n}},\mathfrak{m}/\mathfrak{n},\widehat{a}/\mathfrak{n})\), and likewise with "strongly" in place of "almost strongly".
Here is a version of Lemma 4.3.18 for (almost) strong split-normality:
**Lemma 4.3.25**.: _Suppose \((P_{+a},\mathfrak{m},\widehat{a}-a)\) refines \((P,\mathfrak{m},\widehat{a})\). If \((P,\mathfrak{m},\widehat{a})\) is almost strongly split-normal, then so is \((P_{+a},\mathfrak{m},\widehat{a}-a)\). If \((P,\mathfrak{m},\widehat{a})\) is strongly split-normal, \(Z\)-minimal, and \(\widehat{a}-a\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{r+w+1}\mathfrak{m}\), then \((P_{+a},\mathfrak{m},\widehat{a}-a)\) is strongly split-normal._
Proof.: The first part follows from Lemma 4.3.18 and its proof. In combination with Lemmas 3.3.42 and 4.3.23, this also yields the second part.
**Lemma 4.3.26**.: _Suppose that \((P,\mathfrak{m},\widehat{a})\) is split-normal and \(\widehat{a}\prec_{\Delta(\mathfrak{v})}\mathfrak{m}\). Then for all sufficiently small \(q\in\mathbb{Q}^{>}\), any \(\mathfrak{n}\asymp\mathfrak{v}^{q}\mathfrak{m}\) yields an almost strongly split-normal refinement \((P,\mathfrak{n},\widehat{a})\) of \((P,\mathfrak{m},\widehat{a})\)._
Proof.: We arrange \(\mathfrak{m}=1\), so \(\widehat{a}\prec_{\Delta(\mathfrak{v})}1\). Take \(Q\), \(R\) as in (SN2) with \(\mathfrak{m}=1\), and take \(q_{0}\in\mathbb{Q}^{>}\) such that \(\widehat{a}\prec\mathfrak{v}^{q_{0}}\prec 1\). By Lemma 4.2.13 we can decrease \(q_{0}\) so that for all \(q\in\mathbb{Q}\) with \(0<q\leqslant q_{0}\) and any \(\mathfrak{n}\asymp\mathfrak{v}^{q}\), \(L_{Q_{\times n}}=L_{Q}\mathfrak{n}\) splits strongly over \(K\). Suppose \(q\in\mathbb{Q}\), \(0<q\leqslant q_{0}\), and \(\mathfrak{n}\asymp\mathfrak{v}^{q}\). Then \((P,\mathfrak{n},\widehat{a})\) is an almost strongly split-normal refinement of \((P,1,\widehat{a})\), by Lemma 4.3.19.
**Corollary 4.3.27**.: _Suppose that \((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal, deep, and split-normal. Then \((P,\mathfrak{m},\widehat{a})\) has a refinement which is deep and almost strongly split-normal._
Proof.: Lemma 3.3.13 gives \(a\) such that \(\widehat{a}-a\prec_{\Delta(\mathfrak{v})}\mathfrak{m}\). By Corollary 3.3.8, the refinement \((P_{+a},\mathfrak{m},\widehat{a}-a)\) of \((P,\mathfrak{m},\widehat{a})\) is deep with \(\mathfrak{v}(L_{P_{+a,\times m}})\asymp_{\Delta(\mathfrak{v})}\mathfrak{v}\), and by Lemma 4.3.18 it is also split-normal. Now apply Lemma 4.3.26 to \((P_{+a},\mathfrak{m},\widehat{a}-a)\) in place of \((P,\mathfrak{m},\widehat{a})\) and again use Corollary 3.3.8.
We now turn to the behavior of these properties under compositional conjugation.
**Lemma 4.3.28**.: _Let \(\phi\) be active in \(H\) with \(0<\phi\preccurlyeq 1\). If \((P,\mathfrak{m},\widehat{a})\) is almost strongly split-normal, then so is the slot \((P^{\phi},\mathfrak{m},\widehat{a})\) in \(H^{\phi}\). Likewise with "strongly" in place of "almost strongly"._
Proof.: We arrange \(\mathfrak{m}=1\), assume \((P,\mathfrak{m},\widehat{a})\) is almost strongly split-normal, and take \(Q\), \(R\) as in (SN2as). The proof of Lemma 4.3.5 shows that with \(\mathfrak{w}:=\mathfrak{v}(L_{P^{\phi}})\) we have \(\mathfrak{w}\prec_{\phi}^{\flat}1\) and \((P^{\phi})_{\geqslant 1}=Q^{\phi}+R^{\phi}\) where \(Q^{\phi}\in H^{\phi}\{Y\}\) is homogeneous of degree \(1\) and order \(r\), \(L_{Q^{\phi}}\) splits over \(H^{\phi}[i]\), and \(R^{\phi}\prec_{\Delta(\mathfrak{w})}\mathfrak{w}^{w+1}(P^{\phi})_{1}\). By Lemma 4.2.12, \(L_{Q^{\phi}}=L_{Q}^{\phi}\) even splits strongly over \(K^{\phi}=H^{\phi}[i]\). Hence \((P^{\phi},\mathfrak{m},\widehat{a})\) is almost strongly split-normal. The rest follows from Lemma 4.3.23 and the fact that if \((P,\mathfrak{m},\widehat{a})\) is strictly normal, then so is \((P^{\phi},\mathfrak{m},\widehat{a})\).
If \(H\) is \(\omega\)-free and \(r\)-linearly newtonian, then by Corollary 3.3.48, every \(Z\)-minimal slot in \(H\) of order \(r\) has a refinement \((P,\mathfrak{m},\widehat{a})\) such that the slot \((P^{\phi},\mathfrak{m},\widehat{a})\) in \(H^{\phi}\) is eventually deep and strictly normal. Corollary 4.3.30 of the next lemma is a variant of this fact for strong split-normality.
**Lemma 4.3.29**.: _Assume \(H\) is \(\omega\)-free and \(r\)-linearly newtonian, and every \(A\in H[\partial]\) of order \(r\) splits over \(K\). Suppose \((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal. Then there is a refinement \((P_{+a},\mathfrak{n},\widehat{a}-a)\) of \((P,\mathfrak{m},\widehat{a})\) and an active \(\phi\) in \(H\) with \(0<\phi\preccurlyeq 1\) such that \((P_{+a}^{\phi},\mathfrak{n},\widehat{a}-a)\) is deep and strictly normal, and its linear part splits strongly over \(K^{\phi}\) (so \((P_{+a}^{\phi},\mathfrak{n},\widehat{a}-a)\) is strongly split-normal by Corollary 4.3.24)._
Proof.: For any active \(\phi\) in \(H\) with \(0<\phi\preccurlyeq 1\) we may replace \(H\), \((P,\mathfrak{m},\widehat{a})\) by \(H^{\phi}\), \((P^{\phi},\mathfrak{m},\widehat{a})\), respectively. We may also replace \((P,\mathfrak{m},\widehat{a})\) by any of its refinements. Now Theorem 3.3.33 gives a refinement \((P_{+a},\mathfrak{n},\widehat{a}-a)\) of \((P,\mathfrak{m},\widehat{a})\) and
an active \(\phi\) in \(H\) such that \(0<\phi\preccurlyeq 1\) and \((P_{+a}^{\phi},\mathfrak{n},\widehat{a}-a)\) is deep and normal. Replacing \(H\), \((P,\mathfrak{m},\widehat{a})\) by \(H^{\phi}\), \((P_{+a}^{\phi},\mathfrak{n},\widehat{a}-a)\), respectively, we thus arrange that \((P,\mathfrak{m},\widehat{a})\) itself is deep and normal. We show that then the lemma holds with \(\phi=1\). For this we first replace \((P,\mathfrak{m},\widehat{a})\) by a suitable refinement \((P_{+a},\mathfrak{m},\widehat{a}-a)\) to arrange by Corollary 3.3.47 that \((P,\mathfrak{m},\widehat{a})\) is strictly normal and \(\widehat{a}\prec_{\Delta(\mathfrak{v})}\mathfrak{m}\). Now \(L\) splits over \(K\), so by Corollary 4.2.14, for sufficiently small \(q\in\mathbb{Q}^{>}\), any \(\mathfrak{n}\asymp|\mathfrak{v}|^{q}\mathfrak{m}\) gives a refinement \((P,\mathfrak{n},\widehat{a})\) of \((P,\mathfrak{m},\widehat{a})\) whose linear part \(L_{P_{\times\mathfrak{n}}}\) has order \(r\) and splits strongly over \(K\). For each such \(\mathfrak{n}\), \((P,\mathfrak{n},\widehat{a})\) is deep by Corollary 3.3.8, and for some such \(\mathfrak{n}\), \((P,\mathfrak{n},\widehat{a})\) is also strictly normal, by Remark 3.3.45.
The previous lemma in combination with Lemma 4.3.28 yields:
**Corollary 4.3.30**.: _With the same assumptions on \(H\), \(K\) as in Lemma 4.3.29, every \(Z\)-minimal slot in \(H\) of order \(r\) has a refinement \((P,\mathfrak{m},\widehat{a})\) such that \((P^{\phi},\mathfrak{m},\widehat{a})\) is eventually deep and strongly split-normal._
For \(r=1\) the splitting assumption is automatically satisfied (and this is the case most relevant later). We do not know whether "every \(A\in H[\partial]^{\neq}\) of order \(\leqslant r\) splits over \(K\)" is strictly weaker than "\(K\) is \(r\)-linearly closed".
### Achieving strong split-normality
We make the same assumptions as in the subsection _Achieving split-normality_: _\(H\) is \(\omega\)-free and \((P,\mathfrak{m},\widehat{a})\) is a minimal hole in \(K=H[i]\) of order \(r\geqslant 1\), with \(\mathfrak{m}\in H^{\times}\) and \(\widehat{a}\in\widehat{K}\setminus K\)._ Recall that \(K\) is also \(\omega\)-free [ADH, 11.7.23]. We have
\[\widehat{a}\ =\ \widehat{b}+\widehat{c}\,i,\qquad\widehat{b},\widehat{c}\in \widehat{H}.\]
We let \(a\) range over \(K\), \(b\), \(c\) over \(H\), and \(\mathfrak{n}\) over \(H^{\times}\). In connection with the next two lemmas we note that given an active \(\phi\) in \(H\) with \(0<\phi\preccurlyeq 1\), if \((P,\mathfrak{m},\widehat{a})\) is normal (strictly normal, respectively), then so is \((P^{\phi},\mathfrak{m},\widehat{a})\), by Lemma 3.3.20 (Lemma 3.3.40, respectively); moreover, if the linear part of \((P,\mathfrak{m},\widehat{a})\) splits strongly over \(K\), then the linear part of \((P^{\phi},\mathfrak{m},\widehat{a})\) splits strongly over \(K^{\phi}=H^{\phi}[i]\), by Lemma 4.2.12. Here is a "complex" version of Lemma 4.3.29, with a similar proof:
**Lemma 4.3.31**.: _For some refinement \((P_{+a},\mathfrak{n},\widehat{a}-a)\) of \((P,\mathfrak{m},\widehat{a})\) and active \(\phi\) in \(H\) with \(0<\phi\preccurlyeq 1\), the hole \((P_{+a}^{\phi},\mathfrak{n},\widehat{a}-a)\) in \(K^{\phi}\) is deep and normal, its linear part splits strongly over \(K^{\phi}\), and it is moreover strictly normal if \(\deg P>1\)._
Proof.: For any active \(\phi\) in \(H\) with \(0<\phi\preccurlyeq 1\) we may replace \(H\) and \((P,\mathfrak{m},\widehat{a})\) by \(H^{\phi}\) and the minimal hole \((P^{\phi},\mathfrak{m},\widehat{a})\) in \(K^{\phi}\). We may also replace \((P,\mathfrak{m},\widehat{a})\) by any of its refinements \((P_{+a},\mathfrak{n},\widehat{a}-a)\). As noted before Theorem 4.3.9, Corollary 3.3.34 and Lemma 3.3.23 give a refinement \((P_{+a},\mathfrak{n},\widehat{a}-a)\) of \((P,\mathfrak{m},\widehat{a})\) and an active \(\phi\) in \(H\) with \(0<\phi\preccurlyeq 1\) such that \((P_{+a}^{\phi},\mathfrak{n},\widehat{a}-a)\) is deep and normal. Replacing \(H\), \((P,\mathfrak{m},\widehat{a})\) by \(H^{\phi}\), \((P_{+a}^{\phi},\mathfrak{n},\widehat{a}-a)\), respectively, we thus arrange that \((P,\mathfrak{m},\widehat{a})\) itself is deep and normal. We show that then the lemma holds with \(\phi=1\).
Set \(L:=L_{P_{\times\mathfrak{m}}}\) and \(\mathfrak{v}:=\mathfrak{v}(L)\). Lemma 3.3.13 gives \(a\) with \(\widehat{a}-a\prec_{\Delta(\mathfrak{v})}\mathfrak{m}\). If \(\deg P>1\), then \(K\) is \(r\)-linearly newtonian and we use Corollary 3.3.16 to take \(a\) such that even \(\widehat{a}-a\preccurlyeq\mathfrak{v}^{w+2}\mathfrak{m}\). Replacing \((P,\mathfrak{m},\widehat{a})\) by \((P_{+a},\mathfrak{m},\widehat{a}-a)\), we thus arrange by Lemma 3.3.7 and Proposition 3.3.25 that \(\widehat{a}\prec_{\Delta(\mathfrak{v})}\mathfrak{m}\), and also by Lemma 3.3.46 that \((P,\mathfrak{m},\widehat{a})\) is strictly normal if \(\deg P>1\). Now \(L\) splits over \(K\), since \(K\) is \(r\)-linearly closed by Corollary 3.2.4. Then by Corollary 4.2.14, for sufficiently small \(q\in\mathbb{Q}^{>}\), any \(\mathfrak{n}\asymp|\mathfrak{v}|^{q}\mathfrak{m}\) gives a refinement \((P,\mathfrak{n},\widehat{a})\) of \((P,\mathfrak{m},\widehat{a})\) whose
linear part \(L_{P_{\times\mathfrak{n}}}\) splits strongly over \(K\). For such \(\mathfrak{n}\), \((P,\mathfrak{n},\widehat{a})\) is deep by Lemma 3.3.7 and normal by Proposition 3.3.26. If \((P,\mathfrak{m},\widehat{a})\) is strictly normal, then for some such \(\mathfrak{n}\), \((P,\mathfrak{n},\widehat{a})\) is also strictly normal, thanks to Lemma 3.3.44.
The following version of Lemma 4.3.31 also encompasses linear \((P,\mathfrak{m},\widehat{a})\):
**Lemma 4.3.32**.: _Suppose \(\partial K=K\) and \(\mathrm{I}(K)\subseteq K^{\dagger}\). Then there is a refinement \((P_{+a},\mathfrak{n},\widehat{a}-a)\) of \((P,\mathfrak{m},\widehat{a})\) and an active \(\phi\) in \(H\) with \(0<\phi\preccurlyeq 1\) such that the hole \((P_{+a}^{\phi},\mathfrak{n},\widehat{a}-a)\) in \(K^{\phi}\) is deep and strictly normal, and its linear part splits strongly over \(K^{\phi}\)._
Proof.: Thanks to Lemma 4.3.31 we need only consider the case \(\deg P=1\). Then we have \(r=1\) by Corollary 3.2.8. (See now the remark following this proof.) As in the proof of Lemma 4.3.31 we may replace \(H\) and \((P,\mathfrak{m},\widehat{a})\) for any active \(\phi\preccurlyeq 1\) in \(H^{>}\) by \(H^{\phi}\) and \((P^{\phi},\mathfrak{m},\widehat{a})\), and also \((P,\mathfrak{m},\widehat{a})\) by any of its refinements \((P_{+a},\mathfrak{n},\widehat{a}-a)\). Recall here that \(\mathfrak{n}\in H^{\times}\). Hence using a remark preceding Lemma 3.3.39 and Corollary 3.5.17 we arrange that \((P,\mathfrak{m},\widehat{a})\) is strictly normal, and thus balanced and deep. We show that then the lemma holds with \(\phi=1\).
Set \(L:=L_{P_{\times\mathfrak{m}}}\), \(\mathfrak{v}:=\mathfrak{v}(L)\). Lemmas 3.5.9 and 3.5.10 yield an \(a\) with \(\widehat{a}-a\preccurlyeq\mathfrak{v}^{4}\mathfrak{m}\). Replacing \((P,\mathfrak{m},\widehat{a})\) by \((P_{+a},\mathfrak{m},\widehat{a}-a)\) arranges that \(\widehat{a}\prec_{\Delta(\mathfrak{v})}\mathfrak{m}\), by Lemmas 3.3.7 and 3.3.41. As in the proof of Lemma 4.3.31, for sufficiently small \(q\in\mathbb{Q}^{>}\), any \(\mathfrak{n}\asymp|\mathfrak{v}|^{q}\mathfrak{m}\) now gives a strictly normal and deep refinement \((P,\mathfrak{n},\widehat{a})\) of \((P,\mathfrak{m},\widehat{a})\) whose linear part splits strongly over \(K\).
_Remark_.: Suppose we replace our standing assumption that \(H\) is \(\omega\)-free and \((P,\mathfrak{m},\widehat{a})\) is a minimal hole in \(K\) by the assumption that \(H\) is \(\lambda\)-free and \((P,\mathfrak{m},\widehat{a})\) is a slot in \(K\) of order and degree \(1\) (so \(K\) is \(\lambda\)-free by [ADH, 11.6.8] and \((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal). Then Lemma 4.3.32 goes through with "hole" replaced by "slot". Its proof also goes through with the references to Lemmas 3.3.7 and 3.3.41 replaced by references to Corollary 3.3.8 and Lemma 3.3.42. The end of that proof refers to the end of the proof of Lemma 4.3.31, and there one should replace Proposition 3.3.26 by Corollary 3.3.27, and Lemma 3.3.44 by Remark 3.3.45.
In the remainder of this subsection we prove the following variant of Theorem 4.3.9:
**Theorem 4.3.33**.: _If \(H\) is \(1\)-linearly newtonian, then one of the following holds:_
(i) \(\widehat{b}\notin H\) _and there exists a_ \(Z\)_-minimal slot_ \((Q,\mathfrak{m},\widehat{b})\) _in_ \(H\) _with a refinement_ \((Q_{+b},\mathfrak{n},\widehat{b}-b)\) _such that_ \((Q_{+b}^{\phi},\mathfrak{n},\widehat{b}-b)\) _is eventually deep and almost strongly split-normal;_

(ii) \(\widehat{c}\notin H\) _and there exists a_ \(Z\)_-minimal slot_ \((R,\mathfrak{m},\widehat{c})\) _in_ \(H\) _with a refinement_ \((R_{+c},\mathfrak{n},\widehat{c}-c)\) _such that_ \((R_{+c}^{\phi},\mathfrak{n},\widehat{c}-c)\) _is eventually deep and almost strongly split-normal._
_Moreover, if \(H\) is \(1\)-linearly newtonian and either \(\deg P>1\), or \(\widehat{b}\notin H\) and \(Z(H,\widehat{b})\) contains an element of order \(1\), or \(\widehat{c}\notin H\) and \(Z(H,\widehat{c})\) contains an element of order \(1\), then_ (i) _holds with "almost" omitted, or_ (ii) _holds with "almost" omitted._
Towards the proof of this theorem we first show:
**Lemma 4.3.34**.: _Suppose \(\widehat{b}\notin H\) and \((Q,\mathfrak{m},\widehat{b})\) is a \(Z\)-minimal slot in \(H\) with a refinement \((Q_{+b},\mathfrak{n},\widehat{b}-b)\) such that \((Q_{+b}^{\phi},\mathfrak{n},\widehat{b}-b)\) is eventually deep and split-normal. Then \((Q,\mathfrak{m},\widehat{b})\) has a refinement \((Q_{+b},\mathfrak{n},\widehat{b}-b)\) such that \((Q_{+b}^{\phi},\mathfrak{n},\widehat{b}-b)\) is eventually deep and almost strongly split-normal._
Proof.: Let \((Q_{+b},\mathfrak{n},\widehat{b}-b)\) be a refinement of \((Q,\mathfrak{m},\widehat{b})\) and let \(\phi_{0}\) be active in \(H\) such that \(0<\phi_{0}\preccurlyeq 1\) and \((Q_{+b}^{\phi_{0}},\mathfrak{n},\widehat{b}-b)\) is deep and split-normal. Then Corollary 4.3.27 yields a refinement \(\big{(}(Q_{+b}^{\phi_{0}})_{+b_{0}},\mathfrak{n}_{0},(\widehat{b}-b)-b_{0} \big{)}\) of \((Q_{+b}^{\phi_{0}},\mathfrak{n},\widehat{b}-b)\) which is deep and almost strongly split-normal. Hence
\[\big{(}(Q_{+b})_{+b_{0}},\mathfrak{n}_{0},(\widehat{b}-b)-b_{0}\big{)}\ =\ \big{(}Q_{+(b+b_{0})},\mathfrak{n}_{0},\widehat{b}-(b+b_{0})\big{)}\]
is a refinement of \((Q,\mathfrak{m},\widehat{b})\), and \(\big{(}Q_{+(b+b_{0})}^{\phi},\mathfrak{n}_{0},\widehat{b}-(b+b_{0})\big{)}\) is eventually deep and almost strongly split-normal by Lemma 4.3.28.
Likewise:
**Lemma 4.3.35**.: _Suppose \(\widehat{c}\notin H\), and \((R,\mathfrak{m},\widehat{c})\) is a \(Z\)-minimal slot in \(H\) with a refinement \((R_{+c},\mathfrak{n},\widehat{c}-c)\) such that \((R_{+c}^{\phi},\mathfrak{n},\widehat{c}-c)\) is eventually deep and split-normal. Then \((R,\mathfrak{m},\widehat{c})\) has a refinement \((R_{+c},\mathfrak{n},\widehat{c}-c)\) such that \((R_{+c}^{\phi},\mathfrak{n},\widehat{c}-c)\) is eventually deep and almost strongly split-normal._
Theorem 4.3.9 and the two lemmas above give the first part of Theorem 4.3.33. We break up the proof of the "moreover" part into several cases, along the lines of the proof of Theorem 4.3.9. We begin with the case where \(\widehat{b}\in H\) or \(\widehat{c}\in H\).
**Lemma 4.3.36**.: _Suppose \(H\) is \(1\)-linearly newtonian, \(\widehat{b}\notin H\), \((Q,\mathfrak{m},\widehat{b})\) is a \(Z\)-minimal slot in \(H\) of order \(r\), and some refinement \((Q_{+b},\mathfrak{n},\widehat{b}-b)\) of \((Q,\mathfrak{m},\widehat{b})\) is such that \((Q_{+b}^{\phi},\mathfrak{n},\widehat{b}-b)\) is eventually deep and split-normal. Then \((Q,\mathfrak{m},\widehat{b})\) has a refinement \((Q_{+b},\mathfrak{n},\widehat{b}-b)\) with \((Q_{+b}^{\phi},\mathfrak{n},\widehat{b}-b)\) eventually deep and strongly split-normal._
Proof.: Lemma 4.3.34 gives a refinement \((Q_{+b},\mathfrak{n},\widehat{b}-b)\) of \((Q,\mathfrak{m},\widehat{b})\) with \((Q_{+b}^{\phi},\mathfrak{n},\widehat{b}-b)\) eventually deep and almost strongly split-normal. We upgrade this to "strongly split-normal" as follows: Take active \(\phi_{0}\) in \(H\) with \(0<\phi_{0}\preccurlyeq 1\) such that the slot \((Q_{+b}^{\phi_{0}},\mathfrak{n},\widehat{b}-b)\) in \(H^{\phi_{0}}\) is deep and almost strongly split-normal. Now \(H\) is \(1\)-linearly newtonian, hence \(r\)-linearly newtonian. Therefore Corollary 3.3.47 yields a deep and strictly normal refinement \(\big{(}(Q_{+b}^{\phi_{0}})_{+b_{0}},\mathfrak{n},(\widehat{b}-b)-b_{0}\big{)}\) of \(\big{(}Q_{+b}^{\phi_{0}},\mathfrak{n},\widehat{b}-b\big{)}\). By Lemma 4.3.25, this refinement is still almost strongly split-normal, thus strongly split-normal by Lemma 4.3.23. Then by Lemma 4.3.28, \(\big{(}Q_{+(b+b_{0})},\mathfrak{n},\widehat{b}-(b+b_{0})\big{)}\) is a refinement of \((Q,\mathfrak{m},\widehat{b})\) such that \(\big{(}Q_{+(b+b_{0})}^{\phi},\mathfrak{n},\widehat{b}-(b+b_{0})\big{)}\) is eventually deep and strongly split-normal.
Lemmas 4.3.10 and 4.3.36 give the following:
**Corollary 4.3.37**.: _Suppose \(H\) is \(1\)-linearly newtonian and \(\widehat{c}\in H\). Then there is a hole \((Q,\mathfrak{m},\widehat{b})\) in \(H\) of the same complexity as \((P,\mathfrak{m},\widehat{a})\). Every such hole \((Q,\mathfrak{m},\widehat{b})\) in \(H\) is minimal and has a refinement \((Q_{+b},\mathfrak{n},\widehat{b}-b)\) such that \((Q_{+b}^{\phi},\mathfrak{n},\widehat{b}-b)\) is eventually deep and strongly split-normal._
Just as Lemma 4.3.10 gave rise to Lemma 4.3.11, Corollary 4.3.37 leads to:
**Corollary 4.3.38**.: _Suppose \(H\) is \(1\)-linearly newtonian and \(\widehat{b}\in H\). Then there is a hole \((R,\mathfrak{m},\widehat{c})\) in \(H\) of the same complexity as \((P,\mathfrak{m},\widehat{a})\). Every such hole in \(H\) is minimal and has a refinement \((R_{+c},\mathfrak{n},\widehat{c}-c)\) such that \((R_{+c}^{\phi},\mathfrak{n},\widehat{c}-c)\) is eventually deep and strongly split-normal._
In the following two lemmas we assume that \(\widehat{b},\widehat{c}\notin H\). Let \(Q\in Z(H,\widehat{b})\) be of minimal complexity, so \((Q,\mathfrak{m},\widehat{b})\) is a \(Z\)-minimal slot in \(H\), as is each of its refinements. The next lemma strengthens Corollary 4.3.13:
**Lemma 4.3.39**.: _Suppose \(\deg P>1\) and \(v(\widehat{b}-H)\subseteq v(\widehat{c}-H)\). Then \((Q,\mathfrak{m},\widehat{b})\) has a refinement \((Q_{+b},\mathfrak{n},\widehat{b}-b)\) such that \((Q_{+b}^{\phi},\mathfrak{n},\widehat{b}-b)\) is eventually deep and strongly split-normal._
Proof.: Corollary 4.3.13 and Lemma 4.3.34 give a refinement \((Q_{+b},\mathfrak{n},\widehat{b}-b)\) of \((Q,\mathfrak{m},\widehat{b})\) and an active \(\phi_{0}\) in \(H\) with \(0<\phi_{0}\preccurlyeq 1\) such that the slot \((Q_{+b}^{\phi_{0}},\mathfrak{n},\widehat{b}-b)\) in \(H^{\phi_{0}}\) is deep and almost strongly split-normal. From \(\deg P>1\) we obtain that \(H\) is \(r\)-linearly newtonian. Now argue as in the proof of Lemma 4.3.36.
Similarly we obtain a strengthening of Corollary 4.3.14, using that corollary and Lemma 4.3.35 in place of Corollary 4.3.13 and Lemma 4.3.34 in the proof:
**Lemma 4.3.40**.: _If \(\deg P>1\), \(v(\widehat{c}-H)\subseteq v(\widehat{b}-H)\), and \(R\in Z(H,\widehat{c})\) has minimal complexity, then the \(Z\)-minimal slot \((R,\mathfrak{m},\widehat{c})\) in \(H\) has a refinement \((R_{+c},\mathfrak{n},\widehat{c}-c)\) such that \((R_{+c}^{\phi},\mathfrak{n},\widehat{c}-c)\) is eventually deep and strongly split-normal._
We now prove the "moreover" part of Theorem 4.3.33. Thus, suppose \(H\) is \(1\)-linearly newtonian. If \(\widehat{b}\in H\), then \(\widehat{c}\notin H\) and Corollary 4.3.38 yields a strong version of (ii) with "almost" omitted. Likewise, if \(\widehat{c}\in H\), then \(\widehat{b}\notin H\) and Corollary 4.3.37 yields a strong version of (i) with "almost" omitted. Suppose now that \(\widehat{b},\widehat{c}\notin H\). If \(\deg P>1\), then \(v(\widehat{b}-H)\subseteq v(\widehat{c}-H)\) or \(v(\widehat{c}-H)\subseteq v(\widehat{b}-H)\), since these sets are downward closed, and so Lemma 4.3.39 or Lemma 4.3.40 yields (i) or (ii) with "almost" omitted. If \(Z(H,\widehat{b})\) contains an element of order \(1\), then (i) holds with "almost" omitted. Likewise, if \(Z(H,\widehat{c})\) contains an element of order \(1\), then (ii) holds with "almost" omitted.
### 4.4. Ultimate Slots and Firm Slots
_In this section \(H\) is a Liouville closed \(H\)-field with small derivation, \(\widehat{H}\) is an immediate asymptotic extension of \(H\), and \(i\) is an element of an asymptotic extension of \(\widehat{H}\) with \(i^{2}=-1\)._ Then \(\widehat{H}\) is an \(H\)-field, \(i\notin\widehat{H}\), \(K:=H[i]\) is an algebraic closure of \(H\), and \(\widehat{K}:=\widehat{H}[i]\) is an immediate d-valued extension of \(K\). (See the beginning of Section 4.3.) Let \(C\) be the constant field of \(H\), let \(\mathcal{O}\) denote the valuation ring of \(H\) and \(\Gamma\) its value group. Accordingly, the constant field of \(K\) is \(C_{K}=C[i]\) and the valuation ring of \(K\) is \(\mathcal{O}_{K}=\mathcal{O}+\mathcal{O}i\). Let \(\mathfrak{m}\), \(\mathfrak{n}\), \(\mathfrak{w}\) range over \(H^{\times}\) and \(\phi\) over the elements of \(H^{>}\) which are active in \(H\) (and hence in \(K\)).
In Section 1.2 we introduced
\[W\ :=\ \big{\{}\mathrm{wr}(a,b):\ a,b\in H,\ a^{2}+b^{2}=1\big{\}}.\]
Note that \(W\) is a subspace of the \(\mathbb{Q}\)-linear space \(H\), because \(Wi=S^{\dagger}\) where
\[S\ :=\ \{a+bi:\ a,b\in H,\ a^{2}+b^{2}=1\}\]
is a divisible subgroup of \(K^{\times}\). We have \(K^{\dagger}=H+Wi\) by Lemma 1.2.4. Thus there exists a complement \(\Lambda\) of the subspace \(K^{\dagger}\) of \(K\) such that \(\Lambda\subseteq Hi\), and in this section we fix such \(\Lambda\) and let \(\lambda\) range over \(\Lambda\). Let \(\mathrm{U}=K\big{[}\mathrm{e}(\Lambda)\big{]}\) be the universal exponential extension of \(K\) defined in Section 2.2.
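For orientation, we record these decompositions in display form; the first merely restates that \(\Lambda\) is a complement of \(K^{\dagger}\), and the second is the normal form of the units of \(\mathrm{U}\) recalled from Section 2.2 (the group ring \(K\big[\mathrm{e}(\Lambda)\big]\) has only trivial units):

\[K\ =\ K^{\dagger}\oplus\Lambda,\qquad\mathrm{U}^{\times}\ =\ \big\{f\,\mathrm{e}(\lambda):\ f\in K^{\times},\ \lambda\in\Lambda\big\}.\]

In particular, every \(g\in K\) can be written as \(g=f^{\dagger}+\lambda\) with \(f\in K^{\times}\) and \(\lambda\in\Lambda\); this normal form is used repeatedly below, for instance in the proofs of Corollary 4.4.2 and Lemma 4.4.21.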
For \(A\in K[\partial]^{\neq}\) we have its set \(\mathscr{E}^{\rm u}(A)\subseteq\Gamma\) of ultimate exceptional values, which a priori might depend on our choice of \(\Lambda\). We now make good on a promise from Section 2.6 by showing that under the mild assumption \({\rm I}(K)\subseteq K^{\dagger}\), and with our restriction \(\Lambda\subseteq Hi\), there is no such dependence:
**Corollary 4.4.1**.: _Suppose \({\rm I}(K)\subseteq K^{\dagger}\). Then for \(A\in K[\partial]^{\neq}\), the status of \(A\) being terminal does not depend on the choice of \(\Lambda\), and the set \(\mathscr{E}^{\rm u}(A)\) of ultimate exceptional values of \(A\) also does not depend on this choice._
Proof.: Let \(\Lambda^{*}\subseteq Hi\) also be a complement of \(K^{\dagger}\). Let \(\lambda\mapsto\lambda^{*}\) be the \(\mathbb{Q}\)-linear bijection \(\Lambda\to\Lambda^{*}\) with \(\lambda-\lambda^{*}\in Wi\) for all \(\lambda\). Then by Lemmas 1.2.8 and 1.2.16,
\[\lambda-\lambda^{*}\in{\rm I}(H)i\ \subseteq\ {\rm I}(K)\ \subseteq\ ({\mathcal{O}}_{K}^{\times})^{\dagger}\]
for all \(\lambda\). Now use Lemma 2.6.8 and Corollary 2.6.9.
**Corollary 4.4.2**.: _Suppose \({\rm I}(K)\subseteq K^{\dagger}\). Let \(A=\partial-g\in K[\partial]\) where \(g\in K\) and let \(\mathfrak{g}\in H^{\times}\) be such that \(\mathfrak{g}^{\dagger}={\rm Re}\,g\). Then_
\[\mathscr{E}^{\rm u}(A)\ =\ v_{\rm g}(\ker_{\rm U}^{\neq}A)\ =\ \{v\mathfrak{g}\}.\]
_In particular, if \({\rm Re}\,g\in{\rm I}(H)\), then \(\mathscr{E}^{\rm u}(A)=\{0\}\)._
Proof.: Let \(f\in K^{\times}\) and \(\lambda\) be such that \(g=f^{\dagger}+\lambda\). Then
\[\mathscr{E}^{\rm u}(A)\ =\ v_{\rm g}(\ker_{\rm U}^{\neq}A)\ =\ \{vf\}\]
by Lemma 2.6.14 and its proof. Recall that \(K^{\dagger}=H+{\rm I}(H)i\) by Lemma 1.2.16 and remarks preceding it, so \(g\in K^{\dagger}\) iff \({\rm Im}\,g\in{\rm I}(H)\). Consider first the case \(g\notin K^{\dagger}\). Then by Corollary 4.4.1 we can change \(\Lambda\) if necessary to arrange \(\lambda:=({\rm Im}\,g)i\in\Lambda\), so that we can take \(f:=\mathfrak{g}\) in the above. Now suppose \(g\in K^{\dagger}\). Then \(g=(\mathfrak{g}h)^{\dagger}\) where \(h\in K^{\times}\), \(h^{\dagger}=({\rm Im}\,g)i\). Then we can take \(f:=\mathfrak{g}h\), \(\lambda:=0\), and we have \(h\asymp 1\) since \(h^{\dagger}\in{\rm I}(H)i\subseteq{\rm I}(K)\).
**Corollary 4.4.3**.: _Suppose \({\rm I}(K)\subseteq K^{\dagger}\), and let \(F\) be a Liouville closed \(H\)-field extension of \(H\), and \(L:=F[i]\). Then the subspace \(L^{\dagger}\) of the \(\mathbb{Q}\)-linear space \(L\) has a complement \(\Lambda_{L}\) with \(\Lambda\subseteq\Lambda_{L}\subseteq Fi\). For any such \(\Lambda_{L}\) and \(A\in K[\partial]^{\neq}\) we have \(\mathscr{E}^{\rm e}(A_{\lambda})=\mathscr{E}^{\rm e}_{L}(A_{\lambda})\cap\Gamma\) for all \(\lambda\), and thus \(\mathscr{E}^{\rm u}(A)\subseteq\mathscr{E}^{\rm u}_{L}(A)\), where \(\mathscr{E}^{\rm u}_{L}(A)\) is the set of ultimate exceptional values of \(A\in L[\partial]^{\neq}\) with respect to \(\Lambda_{L}\)._
Proof.: By the remarks at the beginning of this subsection applied to \(F\), \(L\) in place of \(H\), \(K\) we have \(L^{\dagger}=F+W_{F}i\) where \(W_{F}\) is a subspace of the \(\mathbb{Q}\)-linear space \(F\). Also \(K^{\dagger}=H+{\rm I}(H)i\) by Lemma 1.2.16, and \(L^{\dagger}\cap K=K^{\dagger}\) by Lemma 2.6.24. This yields a complement \(\Lambda_{L}\) of \(L^{\dagger}\) in \(L\) with \(\Lambda\subseteq\Lambda_{L}\subseteq Fi\). Since \(H\) is Liouville closed and hence \(\lambda\)-free by [ADH, 11.6.2], its algebraic closure \(K\) is \(\lambda\)-free by [ADH, 11.6.8]. Now the rest follows from remarks preceding Lemma 2.6.12.
Given \(A\in K[\partial]^{\neq}\), let \(\mathscr{E}^{\rm u}(A^{\phi})\) be the set of ultimate exceptional values of the linear differential operator \(A^{\phi}\in K^{\phi}[\delta]\), \(\delta=\phi^{-1}\partial\), with respect to \(\Lambda^{\phi}=\phi^{-1}\Lambda\). We summarize some properties of ultimate exceptional values used later in this section:
**Lemma 4.4.4**.: _Let \(A\in K[\partial]^{\neq}\) have order \(r\). Then for all \(b\in K^{\times}\) and all \(\phi\),_
\[\mathscr{E}^{\rm u}(bA)\ =\ \mathscr{E}^{\rm u}(A),\quad\mathscr{E}^{\rm u}(Ab)\ =\ \mathscr{E}^{\rm u}(A)-vb,\quad\mathscr{E}^{\rm u}(A^{\phi})\ =\ \mathscr{E}^{\rm u}(A).\]
_Moreover, if \({\rm I}(K)\subseteq K^{\dagger}\), then:_
(i) \(|\mathscr{E}^{\rm u}(A)|\leqslant r\);

(ii) \(\dim_{C[i]}\ker_{\mathrm{U}}A=r\implies\mathscr{E}^{\rm u}(A)=v_{\mathrm{g}}(\ker_{\mathrm{U}}^{\neq}A)\);

(iii) under the assumption that \(\mathfrak{v}:=\mathfrak{v}(A)\prec^{\flat}1\) and \(B\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{r+1}A\) where \(B\in K[\partial]\) has order \(\leqslant r\), we have \(\mathscr{E}^{\rm u}(A+B)=\mathscr{E}^{\rm u}(A)\);

(iv) for \(r=1\) we have \(|\mathscr{E}^{\rm u}(A)|=1\) and \(\mathscr{E}^{\rm u}(A)=v_{\mathrm{g}}(\ker_{\mathrm{U}}^{\neq}A)\).
Proof.: For the displayed equalities, see Remark 2.6.10. Now assume \(\mathrm{I}(K)\subseteq K^{\dagger}\). Then \(K^{\dagger}=H+\mathrm{I}(H)i\), so (i) and (ii) follow from Proposition 2.6.26 and (iii) from Proposition 3.1.26. Corollary 4.4.2 yields (iv).
Recall from Lemma 1.2.9 that if \(K\) is \(1\)-linearly newtonian, then \(\mathrm{I}(K)\subseteq K^{\dagger}\).
Suppose \(\mathrm{I}(K)\subseteq K^{\dagger}\). Then \(K^{\dagger}=H+\mathrm{I}(H)i\), so our \(\Lambda\) has the form \(\Lambda_{H}i\) with \(\Lambda_{H}\) a complement of \(\mathrm{I}(H)\) in \(H\). Conversely, any complement \(\Lambda_{H}\) of \(\mathrm{I}(H)\) in \(H\) yields a complement \(\Lambda=\Lambda_{H}i\) of \(K^{\dagger}\) in \(K\) with \(\Lambda\subseteq Hi\). Now \(\mathrm{I}(H)\) is a \(C\)-linear subspace of \(H\), so \(\mathrm{I}(H)\) has a complement \(\Lambda_{H}\) in \(H\) that is a \(C\)-linear subspace of \(H\), and then \(\Lambda:=\Lambda_{H}i\) is also a \(C\)-linear subspace of \(K\).
**Lemma 4.4.5**.: _Suppose \(\mathrm{I}(K)\subseteq K^{\dagger}\) and \(g\in K\), \(g-\lambda\in K^{\dagger}\). Then_
\[\mathrm{Im}\,g\in\mathrm{I}(H)\iff\lambda=0,\qquad\mathrm{Im}\,g\notin \mathrm{I}(H)\implies\lambda\sim(\mathrm{Im}\,g)i.\]
Proof.: Recall that \(\Lambda=\Lambda_{H}i\) where \(\Lambda_{H}\) is a complement of \(\mathrm{I}(H)\) in \(H\), so \(\lambda=\lambda_{H}i\) where \(\lambda_{H}\in\Lambda_{H}\). Also, \(K^{\dagger}=H\oplus\mathrm{I}(H)i\), hence \(\mathrm{Im}(g)-\lambda_{H}\in\mathrm{I}(H)\); this proves the displayed equivalence. Suppose \(\mathrm{Im}\,g\notin\mathrm{I}(H)\); since \(\mathrm{I}(H)\) is an \(\mathcal{O}_{H}\)-submodule of \(H\) and \(\lambda_{H}\notin\mathrm{I}(H)\), we then have \(\mathrm{Im}(g)-\lambda_{H}\prec\lambda_{H}\), so \(\lambda=\lambda_{H}i\sim\mathrm{Im}(g)i\).
**Corollary 4.4.6**.: _Suppose \(\mathrm{I}(K)\subseteq K^{\dagger}\), \(A\in K[\partial]^{\neq}\) has order \(r\), \(\dim_{C[i]}\ker_{\mathrm{U}}A=r\), and \(\lambda\) is an eigenvalue of \(A\) with respect to \(\Lambda\). Then \(\lambda\preccurlyeq\mathfrak{v}(A)^{-1}\)._

Proof.: Take \(f\neq 0\) and \(g_{1},\dots,g_{r}\) in \(K\) with \(A=f(\partial-g_{1})\cdots(\partial-g_{r})\). By Corollary 3.1.6 we have \(g_{1},\dots,g_{r}\preccurlyeq\mathfrak{v}(A)^{-1}\), and so Corollary 2.5.6 gives \(j\in\{1,\dots,r\}\) with \(g_{j}-\lambda\in K^{\dagger}\). Now use Lemma 4.4.5.
**Ultimate slots in \(H\).**_In this subsection \(a\), \(b\) range over \(H\)._ Also, \((P,\mathfrak{m},\widehat{a})\) is a slot in \(H\) of order \(r\geqslant 1\), where \(\widehat{a}\in\widehat{H}\setminus H\). Recall that \(L_{P_{\times\mathfrak{m}}}=L_{P}\mathfrak{m}\), so if \((P,\mathfrak{m},\widehat{a})\) is normal, then \(L_{P}\) has order \(r\).
**Corollary 4.4.7**.: _Suppose \(\mathrm{I}(K)\subseteq K^{\dagger}\) and the slot \((P,\mathfrak{m},\widehat{a})\) is split-normal with linear part \(L:=L_{P_{\times\mathfrak{m}}}\). Then with \(Q\) and \(R\) as in (SN2) we have \(\mathscr{E}^{u}(L)=\mathscr{E}^{u}(L_{Q})\)._
This follows from Lemmas 4.3.4 and 4.4.4(iii). In a similar vein we have an analogue of Lemma 3.3.24:
**Lemma 4.4.8**.: _Suppose \((P,\mathfrak{m},\widehat{a})\) is normal and \(a\prec\mathfrak{m}\). Then \(L_{P}\) and \(L_{P_{+a}}\) have order \(r\), and if \(\mathrm{I}(K)\subseteq K^{\dagger}\), then \(\mathscr{E}^{u}(L_{P})=\mathscr{E}^{u}(L_{P_{+a}})\)._
Proof.: We have \(L_{P_{\times\mathfrak{m}}}=L_{P}\mathfrak{m}\) and \(L_{P_{+a,\times\mathfrak{m}}}=L_{P_{\times\mathfrak{m},+a/\mathfrak{m}}}=L_{P_{+ a}}\mathfrak{m}\). The slot \((P_{\times\mathfrak{m}},1,\widehat{a}/\mathfrak{m})\) in \(H\) is normal and \(a/\mathfrak{m}\prec 1\). Lemma 3.1.28 applied to \(\widehat{H}\), \(P_{\times\mathfrak{m}}\), \(\widehat{a}/\mathfrak{m}\) in place of \(K\), \(P\), \(a\), respectively, gives: \(L_{P}\) and \(L_{P_{+a}}\) have order \(r\), and
\[L_{P}\mathfrak{m}-L_{P_{+a}}\mathfrak{m}\ =\ L_{P_{\times\mathfrak{m}}}-L_{P_{\times \mathfrak{m},+a/\mathfrak{m}}}\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{r+1}L_{P} \mathfrak{m}\]
where \(\mathfrak{v}:=\mathfrak{v}(L_{P}\mathfrak{m})\prec^{\flat}1\) by (N1). Suppose now that \(\mathrm{I}(K)\subseteq K^{\dagger}\). Then
\[\mathscr{E}^{u}(L_{P})\ =\ \mathscr{E}^{u}(L_{P}\mathfrak{m})+v(\mathfrak{m})\ =\ \mathscr{E}^{u}(L_{P_{+a}} \mathfrak{m})+v(\mathfrak{m})\ =\ \mathscr{E}^{u}(L_{P_{+a}})\]
by Lemma 4.4.4(iii).
The notion introduced below is modeled on that of "isolated slot" (Definition 3.4.1):
**Definition 4.4.9**.: Call \((P,\mathfrak{m},\widehat{a})\)**ultimate** if for all \(a\prec\mathfrak{m}\),
\[\operatorname{order}(L_{P_{+a}})=r\ \text{ and }\ \mathscr{E}^{\mathfrak{u}}(L_{P_{+a}}) \cap v(\widehat{a}-H)\ <\ v(\widehat{a}-a);\]
equivalently, for all \(a\prec\mathfrak{m}\): \(\operatorname{order}(L_{P_{+a}})=r\) and whenever \(\mathfrak{w}\preccurlyeq\widehat{a}-a\) is such that \(v(\mathfrak{w})\in\mathscr{E}^{\mathfrak{u}}(L_{P_{+a}})\), then \(\mathfrak{w}\prec\widehat{a}-b\) for all \(b\). (Thus if \((P,\mathfrak{m},\widehat{a})\) is ultimate, then it is isolated.)
If \((P,\mathfrak{m},\widehat{a})\) is ultimate, then so is every equivalent slot in \(H\) and \((bP,\mathfrak{m},\widehat{a})\) for \(b\neq 0\), as well as the slot \((P^{\phi},\mathfrak{m},\widehat{a})\) in \(H^{\phi}\) (by Lemma 4.4.4). The proofs of the next two lemmas are like those of their "isolated" versions, Lemmas 3.4.2 and 3.4.3:
**Lemma 4.4.10**.: _If \((P,\mathfrak{m},\widehat{a})\) is ultimate, then so is any of its refinements._
**Lemma 4.4.11**.: _If \((P,\mathfrak{m},\widehat{a})\) is ultimate, then so is any of its multiplicative conjugates._
The ultimate condition is most useful in combination with other properties:
**Lemma 4.4.12**.: _If \(\operatorname{I}(K)\subseteq K^{\dagger}\) and \((P,\mathfrak{m},\widehat{a})\) is normal, then_
\[(P,\mathfrak{m},\widehat{a})\text{ is ultimate}\quad\Longleftrightarrow\quad \mathscr{E}^{\mathfrak{u}}(L_{P})\cap v(\widehat{a}-H)\leqslant v\mathfrak{m}.\]
Proof.: Use Lemma 4.4.8 and the equivalence \(\widehat{a}-a\prec\mathfrak{m}\Leftrightarrow a\prec\mathfrak{m}\), which holds since \(\widehat{a}\prec\mathfrak{m}\).
The "ultimate" version of Lemma 3.4.5 has the same proof:
**Lemma 4.4.13**.: _If \(\deg P=1\), then_
\[(P,\mathfrak{m},\widehat{a})\text{ is ultimate}\quad\Longleftrightarrow\quad \mathscr{E}^{\mathfrak{u}}(L_{P})\cap v(\widehat{a}-H)\leqslant v\mathfrak{m}.\]
The next proposition is the "ultimate" version of Proposition 3.4.6:
**Proposition 4.4.14**.: _Suppose \(\operatorname{I}(K)\subseteq K^{\dagger}\), and \((P,\mathfrak{m},\widehat{a})\) is normal. Then \((P,\mathfrak{m},\widehat{a})\) has an ultimate refinement._
Proof.: Suppose \((P,\mathfrak{m},\widehat{a})\) is not already ultimate. Then Lemma 4.4.12 gives \(\gamma\) with
\[\gamma\in\mathscr{E}^{\mathfrak{u}}(L_{P})\cap v(\widehat{a}-H),\quad\gamma> v\mathfrak{m}.\]
Lemma 4.4.4(i) gives \(|\mathscr{E}^{\mathfrak{u}}(L_{P})|\leqslant r\), so we can take
\[\gamma\ :=\ \max\big(\mathscr{E}^{\mathfrak{u}}(L_{P})\cap v(\widehat{a}-H)\big),\]
and then \(\gamma>v\mathfrak{m}\). Take \(a\) and \(\mathfrak{n}\) with \(v(\widehat{a}-a)>\gamma=v(\mathfrak{n})\); then \((P_{+a},\mathfrak{n},\widehat{a}-a)\) is a refinement of \((P,\mathfrak{m},\widehat{a})\) and \(a\prec\mathfrak{m}\). Let \(b\prec\mathfrak{n}\); then \(a+b\prec\mathfrak{m}\), so by Lemma 4.4.8,
\[\operatorname{order}(L_{(P_{+a})_{+b}})\ =\ r,\qquad\mathscr{E}^{\mathfrak{u}}(L_{(P_{+ a})_{+b}})\ =\ \mathscr{E}^{\mathfrak{u}}(L_{P}).\]
Also \(v\big{(}(\widehat{a}-a)-b\big{)}>\gamma\), hence
\[\mathscr{E}^{\mathfrak{u}}\big{(}L_{(P_{+a})_{+b}}\big{)}\cap v\big{(}( \widehat{a}-a)-H\big{)}\ =\ \mathscr{E}^{\mathfrak{u}}(L_{P})\cap v(\widehat{a}-H)\ \leqslant\ \gamma\ <\ v\big{(}(\widehat{a}-a)-b\big{)}.\]
Thus \((P_{+a},\mathfrak{n},\widehat{a}-a)\) is ultimate.
_Remark 4.4.15_.: Proposition 4.4.14 goes through if instead of assuming that \((P,\mathfrak{m},\widehat{a})\) is normal, we assume that \((P,\mathfrak{m},\widehat{a})\) is linear. (Same argument, using Lemma 4.4.13 in place of Lemma 4.4.12.)
Finally, here is a consequence of Corollaries 2.6.15, 4.4.2, and Lemma 4.4.12, where we recall that \(\operatorname{order}(L_{P_{\times\mathfrak{m}}})=\operatorname{order}(L_{P} \mathfrak{m})=\operatorname{order}(L_{P})\):
**Corollary 4.4.16**.: _Suppose \({\rm I}(K)\subseteq K^{\dagger}\) and \((P,{\mathfrak{m}},\widehat{a})\) is normal of order \(r=1\). Then \(L_{P}=f(\partial-g)\) with \(f\in H^{\times}\), \(g\in H\), and for \({\mathfrak{g}}\in H^{\times}\) with \({\mathfrak{g}}^{\dagger}=g\) we have:_
\[(P,{\mathfrak{m}},\widehat{a})\mbox{ is ultimate}\quad\Longleftrightarrow\quad(P,{\mathfrak{m}},\widehat{a})\mbox{ is isolated}\quad\Longleftrightarrow\quad{\mathfrak{g}}\succcurlyeq\mathfrak{m}\mbox{ or }{\mathfrak{g}}\prec\widehat{a}-H.\]
\((\)_In particular, if \(g\in{\rm I}(H)\) and \({\mathfrak{m}}\preccurlyeq 1\), then \((P,{\mathfrak{m}},\widehat{a})\) is ultimate.\()\)_
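A toy instance of the order-\(1\) criterion (hypothetical, assuming \(H\) contains an element \(x\succ 1\) with \(x'=1\); since \(H\) is Liouville closed we may then take \(\mathfrak{g}=\mathrm{e}^{x}\in H^{>}\)): for \(L_{P}=\partial-1\) we have \(g=1=\mathfrak{g}^{\dagger}\), and

\[\mathfrak{g}\ =\ \mathrm{e}^{x}\ \succ\ 1\ \succcurlyeq\ \mathfrak{m}\qquad\text{whenever }\mathfrak{m}\preccurlyeq 1,\]

so under the hypotheses of Corollary 4.4.16 any such slot with \(\mathfrak{m}\preccurlyeq 1\) is ultimate.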
**Ultimate slots in \(K\).** _In this subsection \(a\), \(b\) range over \(K=H[i]\). Also \((P,{\mathfrak{m}},\widehat{a})\) is a slot in \(K\) of order \(r\geqslant 1\), where \(\widehat{a}\in\widehat{K}\setminus K\)._ Lemma 4.4.8 goes through in this setting, with \(H\) in the proof replaced by \(K\):
**Lemma 4.4.17**.: _Suppose \((P,{\mathfrak{m}},\widehat{a})\) is normal, and \(a\prec\mathfrak{m}\). Then \(L_{P}\) and \(L_{P_{+a}}\) have order \(r\), and if \({\rm I}(K)\subseteq K^{\dagger}\), then \(\mathscr{E}^{\rm u}(L_{P})=\mathscr{E}^{\rm u}(L_{P_{+a}})\)._
We adapt Definition 4.4.9 to slots in \(K\): call \((P,{\mathfrak{m}},\widehat{a})\)**ultimate** if for all \(a\prec\mathfrak{m}\) we have \(\operatorname{order}(L_{P_{+a}})=r\) and \(\mathscr{E}^{\rm u}(L_{P_{+a}})\cap v(\widehat{a}-K)<v(\widehat{a}-a)\). If \((P,{\mathfrak{m}},\widehat{a})\) is ultimate, then it is isolated. Moreover, if \((P,{\mathfrak{m}},\widehat{a})\) is ultimate, then so is \((bP,{\mathfrak{m}},\widehat{a})\) for \(b\neq 0\) as well as the slot \((P^{\phi},{\mathfrak{m}},\widehat{a})\) in \(K^{\phi}\). Lemmas 4.4.10 and 4.4.11 go through in the present context, and so do Lemmas 4.4.12 and 4.4.13 with \(H\) replaced by \(K\). The analogue of Proposition 4.4.14 follows likewise:
**Proposition 4.4.18**.: _If \({\rm I}(K)\subseteq K^{\dagger}\) and \((P,{\mathfrak{m}},\widehat{a})\) is normal, then \((P,{\mathfrak{m}},\widehat{a})\) has an ultimate refinement._
_Remark 4.4.19_.: Proposition 4.4.18 also holds if instead of assuming that \((P,{\mathfrak{m}},\widehat{a})\) is normal, we assume that \((P,{\mathfrak{m}},\widehat{a})\) is linear.
Corollary 4.4.2 and the \(K\)-versions of Lemmas 4.4.12 and 4.4.13 yield:
**Corollary 4.4.20**.: _Suppose \({\rm I}(K)\subseteq K^{\dagger}\), \(r=1\), and \((P,{\mathfrak{m}},\widehat{a})\) is normal or linear. Then \(L_{P}=f(\partial-g)\) with \(f\in K^{\times},g\in K\), and for \({\mathfrak{g}}\in H^{\times}\) with \({\mathfrak{g}}^{\dagger}={\rm Re}\,g\) we have:_
\[(P,{\mathfrak{m}},\widehat{a})\mbox{ is ultimate}\quad\Longleftrightarrow \quad{\mathfrak{g}}\succcurlyeq\mathfrak{m}\mbox{ or }{\mathfrak{g}}\prec\widehat{a}-K.\]
\((\)_In particular, if \({\rm Re}\,g\in{\rm I}(H)\) and \({\mathfrak{m}}\preccurlyeq 1\), then \((P,{\mathfrak{m}},\widehat{a})\) is ultimate.\()\)_
**Using the norm to characterize being ultimate.** We use here the "norm" \(\|\cdot\|\) on \({\rm U}\) and the gaussian extension \(v_{\rm g}\) of the valuation of \(K\) from Section 2.1.
**Lemma 4.4.21**.: _For \(u\in{\rm U}^{\times}\) we have \(\|u\|^{\dagger}={\rm Re}\,u^{\dagger}\)._
Proof.: For \(u=f\,{\rm e}(\lambda)\), \(f\in K^{\times}\) we have \(\|u\|=|f|\) and \(u^{\dagger}=f^{\dagger}+\lambda\), so
\[\|u\|^{\dagger}\ =\ |f|^{\dagger}\ =\ {\rm Re}\,f^{\dagger}\ =\ {\rm Re}\,u^{\dagger},\]
using Corollary 1.2.5 for the second equality.
Using Corollary 2.1.10, Lemma 4.4.21, and [ADH, 10.5.2(i)] we obtain:
**Lemma 4.4.22**.: _Let \({\mathfrak{W}}\subseteq H^{\times}\) be \(\preccurlyeq\)-closed. Then for all \(u\in{\rm U}^{\times}\),_
\[\|u\|\in{\mathfrak{W}}\quad\Longleftrightarrow\quad v_{\rm g}u\in v({ \mathfrak{W}})\quad\Longleftrightarrow\quad{\rm Re}\,u^{\dagger}<{ \mathfrak{n}}^{\dagger}\mbox{ for all }{\mathfrak{n}}\notin{\mathfrak{W}}.\]
Let \((P,\mathfrak{m},\widehat{a})\) be a slot in \(H\) of order \(r\geqslant 1\). Applying Lemma 4.4.22 to the set \(\mathfrak{W}=\{\mathfrak{w}:\ \mathfrak{w}\prec\widehat{a}-H\}\) (so that \(v(\mathfrak{W})=\Gamma\setminus v(\widehat{a}-H)\)) we obtain a reformulation of the condition "\((P,\mathfrak{m},\widehat{a})\) is ultimate" in terms of the "norm" \(\|\cdot\|\) on \(\mathrm{U}\):
**Corollary 4.4.23**.: _The following are equivalent_ (_with a ranging over \(H\)_)_:_
1. \((P,\mathfrak{m},\widehat{a})\) _is ultimate;_
2. _for all_ \(a\prec\mathfrak{m}\)_:_ \(\operatorname{order}(L_{P_{+a}})=r\) _and whenever_ \(u\in\mathrm{U}^{\times}\)_,_ \(v_{\mathrm{g}}u\in\mathscr{E}^{\mathrm{u}}(L_{P_{+a}})\)_, and_ \(\|u\|\prec\widehat{a}-a\)_, then_ \(\|u\|\prec\widehat{a}-H\)_;_
3. _for all_ \(a\prec\mathfrak{m}\)_:_ \(\operatorname{order}(L_{P_{+a}})=r\) _and whenever_ \(u\in\mathrm{U}^{\times}\)_,_ \(v_{\mathrm{g}}u\in\mathscr{E}^{\mathrm{u}}(L_{P_{+a}})\)_, and_ \(\|u\|\prec\widehat{a}-a\)_, then_ \(\operatorname{Re}u^{\dagger}<\mathfrak{n}^{\dagger}\) _for all_ \(\mathfrak{n}\) _with_ \(v(\mathfrak{n})\in v(\widehat{a}-H)\)_._
**Firm slots and flabby slots in \(H\) \((^{*})\).**
Let \((P,\mathfrak{m},\widehat{a})\) be a slot in \(H\) of order \(r\geqslant 1\), where \(\widehat{a}\in\widehat{H}\setminus H\). We let \(a\), \(b\) range over \(H\).
**Definition 4.4.24**.: We call \((P,\mathfrak{m},\widehat{a})\)**firm** if for all \(a\prec\mathfrak{m}\),
\[\operatorname{order}(L_{P_{+a}})=r\quad\text{and}\quad\mathscr{E}^{\mathrm{u} }(L_{P_{+a}})\subseteq v(\widehat{a}-H).\]
We call \((P,\mathfrak{m},\widehat{a})\)**flabby** if it is not firm, that is, if there is an \(a\prec\mathfrak{m}\) such that \(\operatorname{order}(L_{P_{+a}})<r\), or \(\operatorname{order}(L_{P_{+a}})=r\) and \(\gamma>v(\widehat{a}-H)\) for some \(\gamma\in\mathscr{E}^{\mathrm{u}}(L_{P_{+a}})\).
If \((P,\mathfrak{m},\widehat{a})\) is firm, then so are \((bP,\mathfrak{m},\widehat{a})\) for \(b\neq 0\) and any slot \((P,\mathfrak{m},\widehat{b})\) in \(H\) that is equivalent to \((P,\mathfrak{m},\widehat{a})\). For any \(\phi\), the slot \((P,\mathfrak{m},\widehat{a})\) in \(H\) is firm iff the slot \((P^{\phi},\mathfrak{m},\widehat{a})\) in \(H^{\phi}\) is firm.
**Lemma 4.4.25**.: _If \((P,\mathfrak{m},\widehat{a})\) is firm, then so is any of its refinements. If \((P,\mathfrak{m},\widehat{a})\) is flabby, then so is any refinement \((P_{+a},\mathfrak{m},\widehat{a}-a)\) of it._
The proof is like that of Lemma 4.4.10.
**Lemma 4.4.26**.: _Suppose \((P,\mathfrak{m},\widehat{a})\) is firm. Then \((P_{\times\mathfrak{n}},\mathfrak{m}/\mathfrak{n},\widehat{a}/\mathfrak{n})\) is firm._
Proof.: Let \(a\prec\mathfrak{m}/\mathfrak{n}\), so \(a\mathfrak{n}\prec\mathfrak{m}\) with \(L_{P_{\times\mathfrak{n},+a}}=L_{P_{+a\mathfrak{n}}}\mathfrak{n}\). Since \((P,\mathfrak{m},\widehat{a})\) is firm, this yields \(\operatorname{order}(L_{P_{\times\mathfrak{n},+a}})=\operatorname{order}(L_{P_{+a\mathfrak{n}}})=r\) and

\[\mathscr{E}^{\mathrm{u}}(L_{P_{\times\mathfrak{n},+a}})=\mathscr{E}^{\mathrm{u}}(L_{P_{+a\mathfrak{n}}})-v\mathfrak{n}\subseteq v(\widehat{a}-H)-v\mathfrak{n}=v\big((\widehat{a}/\mathfrak{n})-H\big),\]
using Lemma 4.4.4 for the first equality.
The proofs of the next two lemmas are clear, using Lemma 4.4.8 for the first one:
**Lemma 4.4.27**.: _If \(\mathrm{I}(K)\subseteq K^{\dagger}\) and \((P,\mathfrak{m},\widehat{a})\) is normal, then_
\[(P,\mathfrak{m},\widehat{a})\text{ is firm}\quad\Longleftrightarrow\quad \mathscr{E}^{\mathrm{u}}(L_{P})\subseteq v(\widehat{a}-H).\]
**Lemma 4.4.28**.: _If \(\deg P=1\), then_
\[(P,\mathfrak{m},\widehat{a})\text{ is firm}\quad\Longleftrightarrow\quad \mathscr{E}^{\mathrm{u}}(L_{P})\subseteq v(\widehat{a}-H).\]
_Remark 4.4.29_.: If the hypothesis of Lemma 4.4.27 or Lemma 4.4.28 holds, then
\[(P,\mathfrak{m},\widehat{a})\text{ is firm and ultimate}\quad\Longleftrightarrow \quad\mathscr{E}^{\mathrm{u}}(L_{P})\leqslant v\mathfrak{m},\]
as a consequence of Lemmas 4.4.12 and 4.4.13.
Lemma 4.4.8 yields:
**Corollary 4.4.30**.: _If the hypothesis of Lemma 4.4.27 or Lemma 4.4.28 holds and \((P,\mathfrak{m},\widehat{a})\) is flabby, then so is each refinement of \((P,\mathfrak{m},\widehat{a})\)._
**Firm slots and flabby slots in \(K\) \((^{*})\).** Let now \((P,\mathfrak{m},\widehat{a})\) be a slot in \(K\) of order \(r\geqslant 1\) with \(\widehat{a}\in\widehat{K}\setminus K\), and let \(a\), \(b\) range over \(K\). We define \((P,\mathfrak{m},\widehat{a})\) to be **firm** if for all \(a\prec\mathfrak{m}\) we have \(\operatorname{order}(L_{P_{+a}})=r\) and \(\mathscr{E}^{\mathfrak{u}}(L_{P_{+a}})\subseteq v(\widehat{a}-K)\), and we say that \((P,\mathfrak{m},\widehat{a})\) is **flabby** if it is not firm. The results in the subsection above about a slot \((P,\mathfrak{m},\widehat{a})\) in \(H\) go through for the slot \((P,\mathfrak{m},\widehat{a})\) in \(K\), replacing \(H\), \(\widehat{H}\) by \(K\), \(\widehat{K}\) throughout.
**Corollary 4.4.31**.: _Suppose \(\mathrm{I}(K)\subseteq K^{\dagger}\), \(r=1\), and \((P,\mathfrak{m},\widehat{a})\) is normal or linear. Then \(L_{P}=f(\partial-g)\) with \(f\in K^{\times},\ g\in K\). For \(\mathfrak{g}\in H^{\times}\) with \(\mathfrak{g}^{\dagger}=\operatorname{Re}g\) we have:_
(i) \((P,\mathfrak{m},\widehat{a})\) _is flabby_ \(\iff\) \(\mathfrak{g}\prec\widehat{a}-K\) \(\implies\) \((P,\mathfrak{m},\widehat{a})\) _is ultimate;_

(ii) \((P,\mathfrak{m},\widehat{a})\) _is firm and ultimate_ \(\iff\) \(\mathfrak{g}\succcurlyeq\mathfrak{m}\)_;_

(iii) \(\mathfrak{g}\succcurlyeq 1\iff\operatorname{Re}g\in\mathrm{I}(H)\) _or_ \(\operatorname{Re}g>0\)_._
Proof.: The equivalence in (i) follows from Corollary 4.4.2 and the \(K\)-versions of Lemmas 4.4.27 and 4.4.28. Corollary 4.4.20 yields the last part of (i). For (ii), use Corollary 4.4.2 and the \(K\)-version of the equivalence in Remark 4.4.29. As to (iii), this is an elementary fact about the relation between \(\mathfrak{g}\in H^{\times}\) and \(\mathfrak{g}^{\dagger}\).
For the significance of firm slots in the Hardy field setting, see Section 7.7 below.
**Counterexamples \((^{*})\).** Suppose \(\mathrm{I}(K)\subseteq K^{\dagger}\) and \(H\) is not \(\omega\)-free. (In Example 7.5.40 we provide an \(H\) with these properties.) Let \((\lambda_{\rho})\) and \((\omega_{\rho})\) be as in Lemma 3.2.10 with \(H\) in the role of \(K\) there. That lemma yields a minimal hole \((P,\mathfrak{m},\lambda)\) in \(H\) with \(P=2Y^{\prime}+Y^{2}+\omega\) \((\omega\in H)\). This is a good source of counterexamples:
**Lemma 4.4.32**.: _The minimal hole \((P,\mathfrak{m},\lambda)\) in \(H\) is ultimate, and none of its refinements is quasilinear, normal, or firm._
Proof.: Let \(a\in H\). Then \(P_{+a}=2Y^{\prime}+2aY+Y^{2}+P(a)\) and thus \(L_{P_{+a}}=2(\partial+a)\), so for \(b\in H^{\times}\) with \(b^{\dagger}=-a\) we have \(\mathscr{E}^{\mathfrak{u}}(L_{P_{+a}})=\{vb\}\), by Corollary 4.4.2. Thus \((P,\mathfrak{m},\lambda)\) is ultimate iff \(\lambda-a\prec b\) for all \(a\prec\mathfrak{m}\) in \(H\) and \(b\in H^{\times}\) with \(b^{\dagger}=-a\) and \(vb\in v(\lambda-H)\); the latter holds by [ADH, 11.5.6] since \(v(\lambda-H)=\Psi\). Hence \((P,\mathfrak{m},\lambda)\) is ultimate. No refinement of \((P,\mathfrak{m},\lambda)\) is quasilinear by Corollary 3.2.25 and [ADH, 11.7.9], and so by Corollary 3.3.21, no refinement of \((P,\mathfrak{m},\lambda)\) is normal.
It remains to show that no refinement of \((P,\mathfrak{m},\lambda)\) is firm. Let \((\ell_{\rho})\), \((\gamma_{\rho})\) be the sequences from [ADH, 11.5] that give rise to \(\lambda_{\rho}=-\gamma_{\rho}^{\dagger}\) with \(H\) in place of \(K\). If \((P,\mathfrak{m},\lambda)\) has a firm refinement, then it has a firm refinement \((P_{+\lambda_{\rho}},\gamma_{\rho},\lambda-\lambda_{\rho})\), by Lemmas 3.2.24 and 4.4.25, so it suffices to show that \((P_{+\lambda_{\rho}},\gamma_{\rho},\lambda-\lambda_{\rho})\) is flabby for all \(\rho\). For \(a\in H\) we have \(L_{P_{+(\lambda_{\rho}+a)}}=2(\partial+\lambda_{\rho}+a)\), so \(\mathscr{E}^{\mathfrak{u}}(L_{P_{+(\lambda_{\rho}+a)}})=\{vb\}\) with \(b\in H^{\times}\), \(b^{\dagger}=-(\lambda_{\rho}+a)\). Also \(v\big{(}(\lambda-\lambda_{\rho})-H\big{)}=v(\lambda-H)=\Psi\). Hence \((P_{+\lambda_{\rho}},\gamma_{\rho},\lambda-\lambda_{\rho})\) is flabby if there is \(a\prec\gamma_{\rho}\) in \(H\) and \(b\in H^{\times}\), not active in \(H\), such that \(b^{\dagger}=-(\lambda_{\rho}+a)\). We take \(a:=2\gamma_{\rho+1}\), \(b:=\gamma_{\rho}/\ell_{\rho+1}^{2}\). Then \(b^{\dagger}=\gamma_{\rho}^{\dagger}-2\ell_{\rho+1}^{\dagger}=-(\lambda_{\rho}+a)\) as required. Also, \(b\) is not active in \(H\). To see this let \(\sigma>\rho+1\). Then \(\gamma_{\rho}/\gamma_{\rho+1},\ \gamma_{\rho+1}/\gamma_{\sigma}\succ 1\) and
\[(\gamma_{\rho}/\gamma_{\rho+1})^{\dagger}\ =\ \lambda_{\rho+1}-\lambda_{\rho}\ \sim\ \gamma_{\rho+1}\ \succ\ \gamma_{\rho+2}\ \sim\ \lambda_{\sigma}-\lambda_{\rho+1}\ =\ (\gamma_{\rho+1}/\gamma_{\sigma})^{\dagger}\]
by [ADH, 11.5.2], hence \(\gamma_{\rho}/\gamma_{\rho+1}\succ\gamma_{\rho+1}/\gamma_{\sigma}\). Also \(\ell_{\rho+1}\asymp\gamma_{\rho}/\gamma_{\rho+1}\) by [ADH, proof of 11.5.2] and thus \(b=\gamma_{\rho}/\ell_{\rho+1}^{2}\asymp\gamma_{\rho+1}^{2}/\gamma_{\rho}\prec \gamma_{\sigma}\).
### 4.5. Repulsive-Normal Slots
_In this section \(H\) is a real closed \(H\)-field with small derivation and asymptotic integration, with \(\Gamma:=v(H^{\times})\). Also \(K:=H[\mathrm{i}]\) with \(\mathrm{i}^{2}=-1\) is an algebraic closure of \(H\)._ We study here the concept of a repulsive-normal slot in \(H\), which strengthens that of a split-normal slot in \(H\). Despite their name, repulsive-normal slots will turn out to have attractive analytic properties in the realm of Hardy fields.
**Attraction and repulsion.** In this subsection \(a\), \(b\) range over \(H\), \(\mathfrak{m}\), \(\mathfrak{n}\) over \(H^{\times}\), \(f\), \(g\), \(h\) (possibly with subscripts) over \(K\), and \(\gamma\), \(\delta\) over \(\Gamma\). We say that \(f\) is **attractive** if \(\mathrm{Re}\,f\succcurlyeq 1\) and \(\mathrm{Re}\,f<0\), and **repulsive** if \(\mathrm{Re}\,f\succcurlyeq 1\) and \(\mathrm{Re}\,f>0\). If \(\mathrm{Re}\,f\sim\mathrm{Re}\,g\), then \(f\) is attractive iff \(g\) is attractive, and likewise with "repulsive" in place of "attractive". Moreover, if \(a>0\), \(a\succcurlyeq 1\), and \(f\) is attractive (repulsive), then \(af\) is attractive (repulsive, respectively).
**Definition 4.5.1**.: Let \(\gamma>0\); we say \(f\) is \(\gamma\)**-repulsive** if \(v(\mathrm{Re}\,f)<\gamma^{\dagger}\) or \(\mathrm{Re}\,f>0\). Given \(S\subseteq\Gamma\), we say \(f\) is \(S\)**-repulsive** if \(f\) is \(\gamma\)-repulsive for all \(\gamma\in S\cap\Gamma^{>}\), equivalently, \(\mathrm{Re}\,f>0\), or \(v(\mathrm{Re}\,f)<\gamma^{\dagger}\) for all \(\gamma\in S\cap\Gamma^{>}\).
Note the following implications for \(\gamma>0\):

\[f\text{ is }\gamma\text{-repulsive}\ \implies\ \mathrm{Re}\,f\neq 0,\qquad\qquad f\text{ is }\gamma\text{-repulsive,}\ \mathrm{Re}\,g\sim\mathrm{Re}\,f\ \implies\ g\text{ is }\gamma\text{-repulsive}.\]
The following is easy to show:
**Lemma 4.5.2**.: _Suppose \(\gamma>0\) and \(\mathrm{Re}\,f\succcurlyeq 1\). Then \(f\) is \(\gamma\)-repulsive iff \(v(\mathrm{Re}\,f)<\gamma^{\dagger}\) or \(f\) is repulsive. Hence, if \(f\) is repulsive, then \(f\) is \(\Gamma\)-repulsive; the converse of this implication holds if \(\Psi\) is not bounded from below in \(\Gamma\)._
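A quick sketch of the first equivalence, directly from Definition 4.5.1 and assuming \(\mathrm{Re}\,f\succcurlyeq 1\):

\[f\text{ is }\gamma\text{-repulsive}\ \iff\ v(\mathrm{Re}\,f)<\gamma^{\dagger}\ \text{or}\ \mathrm{Re}\,f>0\ \iff\ v(\mathrm{Re}\,f)<\gamma^{\dagger}\ \text{or}\ f\text{ is repulsive},\]

where the second equivalence holds because \(\mathrm{Re}\,f\succcurlyeq 1\) makes the conditions "\(\mathrm{Re}\,f>0\)" and "\(f\) is repulsive" interchangeable.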
Let \(\gamma,\delta>0\). If \(f\) is \(\gamma\)-repulsive and \(a>0\), \(a\succcurlyeq 1\), then \(af\) is \(\gamma\)-repulsive. If \(f\) is \(\gamma\)-repulsive and \(\delta\)-repulsive, then \(f\) is \((\gamma+\delta)\)-repulsive. If \(f\) is \(\gamma\)-repulsive and \(\gamma>\delta\), then \(f\) is \((\gamma-\delta)\)-repulsive. Moreover:
**Lemma 4.5.3**.: _Suppose \(\gamma\geqslant\delta=v\mathfrak{n}>0\). Set \(g:=f-\mathfrak{n}^{\dagger}\). Then:_
\[f\text{ is }\gamma\text{-repulsive}\quad\Longleftrightarrow\quad f\text{ is }\delta\text{-repulsive and }g\text{ is }\gamma\text{-repulsive}.\]
Proof.: Note that \(\gamma\geqslant\delta>0\) gives \(\gamma^{\dagger}\leqslant\delta^{\dagger}\). Suppose \(f\) is \(\gamma\)-repulsive; by our remark, \(f\) is \(\delta\)-repulsive. Now if \(v(\mathrm{Re}\,f)<\gamma^{\dagger}\), then \(\mathrm{Re}\,g\sim\mathrm{Re}\,f\), whereas if \(\mathrm{Re}\,f>0\), then \(\mathrm{Re}(g)=\mathrm{Re}(f)-\mathfrak{n}^{\dagger}>\mathrm{Re}(f)>0\); in both cases, \(g\) is \(\gamma\)-repulsive. Conversely, suppose \(f\) is \(\delta\)-repulsive and \(g\) is \(\gamma\)-repulsive. If \(\mathrm{Re}\,f>0\), then clearly \(f\) is \(\gamma\)-repulsive. Otherwise, \(v(\mathrm{Re}\,f)<\delta^{\dagger}\), hence \(\mathrm{Re}\,g\sim\mathrm{Re}\,f\), so \(f\) is also \(\gamma\)-repulsive.
In a similar way we deduce a useful characterization of repulsiveness:
**Lemma 4.5.4**.: _Suppose \(\gamma=v\mathfrak{m}>0\). Set \(g:=f-\mathfrak{m}^{\dagger}\). Then:_

\[f\text{ is repulsive}\quad\Longleftrightarrow\quad\mathrm{Re}\,f\succcurlyeq 1,\ f\text{ is }\gamma\text{-repulsive, and }g\text{ is repulsive}.\]
Proof.: Suppose \(f\) is repulsive; then by Lemma 4.5.2, \(f\) is \(\gamma\)-repulsive. Moreover, \(\mathrm{Re}\,g=\mathrm{Re}(f)-\mathfrak{m}^{\dagger}>\mathrm{Re}\,f>0\), hence \(\mathrm{Re}\,g\succcurlyeq 1\) and \(\mathrm{Re}\,g>0\), that is, \(g\) is repulsive. Conversely, suppose \(\mathrm{Re}\,f\succcurlyeq 1\), \(f\) is \(\gamma\)-repulsive, and \(g\) is repulsive. If \(v(\mathrm{Re}\,f)<\gamma^{\dagger}\), then \(\mathrm{Re}\,f\sim\mathrm{Re}\,g\); otherwise \(\mathrm{Re}\,f>0\). In both cases, \(f\) is repulsive.
**Corollary 4.5.5**.: _Suppose \(f\) is \(\gamma\)-repulsive where \(\gamma=v\mathfrak{m}>0\), and \(\mathrm{Re}\,f\succcurlyeq 1\). Then \(f\) is repulsive iff \(f-\mathfrak{m}^{\dagger}\) is repulsive, and \(f\) is attractive iff \(f-\mathfrak{m}^{\dagger}\) is attractive._
Proof.: The first equivalence is immediate from Lemma 4.5.4; this equivalence yields
\[f\text{ is attractive}\ \Longleftrightarrow\ f\text{ is not repulsive}\ \Longleftrightarrow\ f-\mathfrak{m}^{\dagger}\text{ is not repulsive}\ \Longleftrightarrow\ \operatorname{Re}(f)-\mathfrak{m}^{\dagger}\prec 1\text{ or }f-\mathfrak{m}^{\dagger}\text{ is attractive}.\]
Thus if \(f-\mathfrak{m}^{\dagger}\) is attractive, so is \(f\). Now assume towards a contradiction that \(f\) is attractive and \(f-\mathfrak{m}^{\dagger}\) is not. Then \(\operatorname{Re}f<0\) and \(\operatorname{Re}(f)-\mathfrak{m}^{\dagger}\prec 1\) by the above equivalence, so \(\operatorname{Re}f\sim\mathfrak{m}^{\dagger}\) thanks to \(\operatorname{Re}f\succcurlyeq 1\). But \(f\) is \(\gamma\)-repulsive, that is, \(\operatorname{Re}f\succ\mathfrak{m}^{\dagger}\) or \(\operatorname{Re}f>0\), a contradiction.
**Lemma 4.5.6**.: _Suppose \(\gamma=v\mathfrak{m}>0\) and \(v(\operatorname{Re}g)\geqslant\gamma^{\dagger}\). Then for all sufficiently large \(c\in C^{>}\) we have \(\operatorname{Re}(g)-c\mathfrak{m}^{\dagger}>0\)\((\)and hence \(g-c\mathfrak{m}^{\dagger}\) is \(\Gamma\)-repulsive\()\)._
Proof.: If \(v(\operatorname{Re}g)>\gamma^{\dagger}\), then \(\operatorname{Re}(g)-c\mathfrak{m}^{\dagger}\sim-c\mathfrak{m}^{\dagger}>0\) for all \(c\in C^{>}\). Suppose \(v(\operatorname{Re}g)=\gamma^{\dagger}\). Take \(c_{0}\in C^{\times}\) with \(\operatorname{Re}g\sim c_{0}\mathfrak{m}^{\dagger}\); then \(\operatorname{Re}(g)-c\mathfrak{m}^{\dagger}>0\) for \(c>c_{0}\).
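A toy illustration of Lemma 4.5.6 (hypothetical, in a Hardy-field-like setting containing a germ \(x\succ 1\) with \(x'=1\)): take \(\mathfrak{m}=x^{-1}\), so \(\mathfrak{m}^{\dagger}=-x^{-1}\), and suppose \(\operatorname{Re}g\sim c_{0}\mathfrak{m}^{\dagger}\) with \(c_{0}\in C^{\times}\). Then

\[\operatorname{Re}(g)-c\,\mathfrak{m}^{\dagger}\ \sim\ (c-c_{0})\,x^{-1}\ >\ 0\qquad\text{for all }c\in C\text{ with }c>c_{0},\]

exactly as in the second case of the proof.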
_In the rest of this subsection we assume that \(S\subseteq\Gamma\)._ If \(f\) is \(S\)-repulsive, then so is \(af\) for \(a>0\), \(a\succcurlyeq 1\). If \(S>0\), \(\delta>0\), and \(f\) is \(S\)-repulsive and \(\delta\)-repulsive, then \(f\) is \((S+\delta)\)-repulsive.
**Lemma 4.5.7**.: _Suppose \(f\) is \(S\)-repulsive and \(0<\delta=v\mathfrak{n}\in S\). Then_
(i) \(f\) _is_ \((S-\delta)\)_-repulsive;_

(ii) \(g:=f-\mathfrak{n}^{\dagger}\) _is_ \(S\)_-repulsive._
Proof.: Let \(\gamma\in(S-\delta)\), \(\gamma>0\). Then \(\gamma+\delta\in S\), so \(f\) is \((\gamma+\delta)\)-repulsive, hence \(\gamma\)-repulsive. This shows (i). For (ii), suppose \(\gamma\in S\), \(\gamma>0\); we need to show that \(g\) is \(\gamma\)-repulsive. If \(\gamma\geqslant\delta\), then \(g\) is \(\gamma\)-repulsive by Lemma 4.5.3. Taking \(\gamma=\delta\) we see that \(g\) is \(\delta\)-repulsive, hence if \(\gamma<\delta\), then \(g\) is also \(\gamma\)-repulsive.
Let \(A\in K[\partial]^{\neq}\) have order \(r\geqslant 1\). An \(S\)**-repulsive splitting** of \(A\) over \(K\) is a splitting \((g_{1},\dots,g_{r})\) of \(A\) over \(K\) where \(g_{1},\dots,g_{r}\) are \(S\)-repulsive. An \(S\)-repulsive splitting of \(A\) over \(K\) remains an \(S\)-repulsive splitting of \(hA\) over \(K\) for \(h\neq 0\). We say that \(A\)**splits \(S\)-repulsively** over \(K\) if there is an \(S\)-repulsive splitting of \(A\) over \(K\). From Lemmas 1.1.1 and 4.5.7 we obtain:
**Lemma 4.5.8**.: _Suppose \((g_{1},\dots,g_{r})\) is an \(S\)-repulsive splitting of \(A\) over \(K\) and \(0<\delta=v\mathfrak{n}\in S\). Then \((g_{1},\dots,g_{r})\) is an \((S-\delta)\)-repulsive splitting of \(A\) over \(K\), and \((h_{1},\dots,h_{r}):=(g_{1}-\mathfrak{n}^{\dagger},\dots,g_{r}-\mathfrak{n}^{ \dagger})\) is an \(S\)-repulsive splitting of \(A\mathfrak{n}\) over \(K\). \((\)Hence \((h_{1},\dots,h_{r})\) is also an \((S-\delta)\)-repulsive splitting of \(A\mathfrak{n}\) over \(K\).\()\)_
Note that if \(\phi\) is active in \(H\) with \(0<\phi\preccurlyeq 1\), and \(f\) is \(\gamma\)-repulsive (in \(K\)), then \(\phi^{-1}f\) is \(\gamma\)-repulsive in \(K^{\phi}=H^{\phi}[\text{i}]\).
**Lemma 4.5.9**.: _Suppose \((g_{1},\dots,g_{r})\) is an \(S\)-repulsive splitting of \(A\) over \(K\) and \(S\cap\Gamma^{>}\not\subseteq\Gamma^{\flat}\). Let \(\phi\) be active in \(H\) with \(0<\phi\prec 1\), and set \(h_{j}:=g_{j}-(r-j)\phi^{\dagger}\) for \(j=1,\dots,r\). Then \((\phi^{-1}h_{1},\dots,\phi^{-1}h_{r})\) is an \(S\)-repulsive splitting of \(A^{\phi}\) over \(K^{\phi}\)._
Proof.: By Lemma 1.1.2, \((\phi^{-1}h_{1},\dots,\phi^{-1}h_{r})\) is a splitting of \(A^{\phi}\) over \(K^{\phi}\). Let \(j\in\{1,\dots,r\}\). If \(\operatorname{Re}g_{j}>0\), then \(\phi^{\dagger}<0\) yields \(\operatorname{Re}h_{j}\geqslant\operatorname{Re}g_{j}>0\). Otherwise, \(v(\operatorname{Re}g_{j})<\gamma^{\dagger}\) whenever \(0<\gamma\in S\); in particular, \(\operatorname{Re}g_{j}\succ 1\succ\phi^{\dagger}\), so \(\operatorname{Re}h_{j}\sim\operatorname{Re}g_{j}\). In both cases \(h_{j}\) is \(S\)-repulsive, so \(\phi^{-1}h_{j}\) is \(S\)-repulsive in \(K^{\phi}\).
**Proposition 4.5.10**.: _Suppose \(S\cap\Gamma^{>}\neq\emptyset\), \(nS\subseteq S\) for all \(n\geqslant 1\), the ordered constant field \(C\) of \(H\) is archimedean, and \((g_{1},\ldots,g_{r})\) is a splitting of \(A\) over \(K\). Then there exists \(\gamma\in S\cap\Gamma^{>}\) such that for any \(\mathfrak{m}\) with \(\gamma=v\mathfrak{m}\): \((g_{1}-n\mathfrak{m}^{\dagger},\ldots,g_{r}-n\mathfrak{m}^{\dagger})\) is an \(S\)-repulsive splitting of \(A\mathfrak{m}^{n}\) over \(K\), for all big enough \(n\)._
Proof.: Let \(J\) be the set of \(j\in\{1,\ldots,r\}\) such that \(g_{j}\) is not \(S\)-repulsive. If \(\gamma>0\) and \(g\) is not \(\gamma\)-repulsive, then \(g\) is not \(\delta\)-repulsive, for all \(\delta\geqslant\gamma\). Hence we can take \(\gamma\in S\cap\Gamma^{>}\) such that \(g_{j}\) is not \(\gamma\)-repulsive, for all \(j\in J\). Suppose \(\gamma=v\mathfrak{m}\). Lemma 4.5.6 yields \(m\geqslant 1\) such that for all \(n\geqslant m\), setting \(\mathfrak{n}:=\mathfrak{m}^{n}\), \(g_{j}-\mathfrak{n}^{\dagger}\) is \(\Gamma\)-repulsive for all \(j\in J\). For such \(\mathfrak{n}\) we have \(v\mathfrak{n}\in S\), so by Lemma 4.5.7(ii), \(g_{j}-\mathfrak{n}^{\dagger}\) is also \(S\)-repulsive for \(j\notin J\).
**Corollary 4.5.11**.: _If \(C\) is archimedean and \((g_{1},\ldots,g_{r})\) is a splitting of \(A\) over \(K\), then there exists \(\gamma>0\) such that for all \(\mathfrak{m}\) with \(\gamma=v\mathfrak{m}\): \((g_{1}-n\mathfrak{m}^{\dagger},\ldots,g_{r}-n\mathfrak{m}^{\dagger})\) is a \(\Gamma\)-repulsive splitting of \(A\mathfrak{m}^{n}\) over \(K\), for all big enough \(n\). If \(\Gamma\neq\Gamma^{\flat}\) then we can choose such \(\gamma>\Gamma^{\flat}\)._
Proof.: Taking \(S=\Gamma\) this follows from Proposition 4.5.10 and its proof.
In logical jargon, the condition that \(C\) is archimedean is not _first-order_. But it is satisfied when \(H\) is a Hardy field, the case where the results of this section will be applied. For other possible uses we indicate here a first-order variant of Proposition 4.5.10 with essentially the same proof:
**Corollary 4.5.12**.: _Suppose \((g_{1},\ldots,g_{r})\) is a splitting of \(A\) over \(K\). Then there exists \(\mathfrak{m}\prec 1\) such that for all sufficiently large \(c\in C^{>}\) and all \(\mathfrak{n}\), if \(\mathfrak{n}^{\dagger}=c\mathfrak{m}^{\dagger}\), then \((g_{1}-\mathfrak{n}^{\dagger},\ldots,g_{r}-\mathfrak{n}^{\dagger})\) is a \(\Gamma\)-repulsive splitting of \(A\mathfrak{n}\) over \(K\)._
In connection with this corollary we recall from [7, p. 105] that \(H\) is said to be _closed under powers_ if for all \(c\in C\) and \(\mathfrak{m}\) there is an \(\mathfrak{n}\) with \(c\mathfrak{m}^{\dagger}=\mathfrak{n}^{\dagger}\).
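In symbols, \(H\) is closed under powers iff

\[\forall c\in C\ \forall\mathfrak{m}\ \exists\mathfrak{n}\colon\ c\,\mathfrak{m}^{\dagger}\ =\ \mathfrak{n}^{\dagger},\]

with the informal reading that such an \(\mathfrak{n}\) plays the role of a power \(\mathfrak{m}^{c}\).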
_In the rest of this section \(\widehat{H}\) is an immediate asymptotic extension of \(H\) and \(\mathrm{i}\) with \(\mathrm{i}^{2}=-1\) lies in an asymptotic extension of \(\widehat{H}\). Also \(K:=H[\mathrm{i}]\) and \(\widehat{K}:=\widehat{H}[\mathrm{i}]\)._
Let \(\widehat{a}\in\widehat{H}\setminus H\), so \(v(\widehat{a}-H)\) is a downward closed subset of \(\Gamma\). We say that \(f\) is \(\widehat{a}\)**-repulsive** if \(f\) is \(v(\widehat{a}-H)\)-repulsive; that is, \(\operatorname{Re}f>0\), or \(\operatorname{Re}f\succ\mathfrak{m}^{\dagger}\) for all \(a\), \(\mathfrak{m}\) with \(\mathfrak{m}\asymp\widehat{a}-a\prec 1\). (Of course, this notion is only interesting if \(v(\widehat{a}-H)\cap\Gamma^{>}\neq\emptyset\), since otherwise every \(f\) is \(\widehat{a}\)-repulsive.) Various earlier results give:
**Lemma 4.5.13**.: _Suppose \(f\) is \(\widehat{a}\)-repulsive. Then_
(i) \(b>0,\ b\succcurlyeq 1\implies bf\) _is_ \(\widehat{a}\)_-repulsive;_

(ii) \(f\) _is_ \((\widehat{a}-a)\)_-repulsive;_

(iii) \(\mathfrak{m}\asymp 1\implies f\) _is_ \(\widehat{a}\mathfrak{m}\)_-repulsive;_

(iv) \(\mathfrak{n}\asymp\widehat{a}-a\prec 1\implies f\) _is_ \(\widehat{a}/\mathfrak{n}\)_-repulsive and_ \(f-\mathfrak{n}^{\dagger}\) _is_ \(\widehat{a}\)_-repulsive._
For (iv), use Lemma 4.5.7. An \(\widehat{a}\)**-repulsive splitting** of \(A\) over \(K\) is a \(v(\widehat{a}-H)\)-repulsive splitting \((g_{1},\ldots,g_{r})\) of \(A\) over \(K\):
\[A\ =\ f(\partial-g_{1})\cdots(\partial-g_{r})\qquad\text{where $f\neq 0$ and $g_{1},\ldots,g_{r}$ are $\widehat{a}$-repulsive.}\]
We say that \(A\)**splits \(\widehat{a}\)-repulsively** over \(K\) if it splits \(v(\widehat{a}-H)\)-repulsively over \(K\). Thus if \(A\) splits \(\widehat{a}\)-repulsively over \(K\), then so does \(hA\) (\(h\neq 0\)), and \(A\) splits \((\widehat{a}-a)\)-repulsively over \(K\), and splits \(\widehat{a}\mathfrak{m}\)-repulsively over \(K\) for \(\mathfrak{m}\asymp 1\). Moreover, from Lemma 4.5.8 we obtain:
**Corollary 4.5.14**.: _Suppose \((g_{1},\dots,g_{r})\) is an \(\widehat{a}\)-repulsive splitting of \(A\) over \(K\) and \(\mathfrak{n}\asymp\widehat{a}-a\prec 1\). Then \((g_{1},\dots,g_{r})\) is an \(\widehat{a}/\mathfrak{n}\)-repulsive splitting of \(A\) over \(K\) and \((g_{1}-\mathfrak{n}^{\dagger},\dots,g_{r}-\mathfrak{n}^{\dagger})\) is an \(\widehat{a}\)-repulsive splitting of \(A\mathfrak{n}\) over \(K\)._
Proposition 4.5.10 yields:
**Corollary 4.5.15**.: _If \(\widehat{a}\prec 1\) is special over \(H\), \(C\) is archimedean, and \(A\) splits over \(K\), then \(A\mathfrak{n}\) splits \(\widehat{a}\)-repulsively over \(K\) for some \(a\) and \(\mathfrak{n}\asymp\widehat{a}-a\prec 1\)._
Recall that in Section 4.2 we defined a splitting \((g_{1},\dots,g_{r})\) of \(A\) over \(K\) to be _strong_ if \(\operatorname{Re}g_{j}\succcurlyeq\mathfrak{v}(A)^{\dagger}\) for \(j=1,\dots,r\).
**Lemma 4.5.16**.: _Suppose \(\widehat{a}-a\prec^{\flat}1\) for some \(a\). Let \((g_{1},\dots,g_{r})\) be an \(\widehat{a}\)-repulsive splitting of \(A\) over \(K\), let \(\phi\) be active in \(H\) with \(0<\phi\prec 1\), and set_
\[h_{j}\ :=\ \phi^{-1}\big{(}g_{j}-(r-j)\phi^{\dagger}\big{)}\qquad(j=1,\dots,r).\]
_Then \((h_{1},\dots,h_{r})\) is an \(\widehat{a}\)-repulsive splitting of \(A^{\phi}\) over \(K^{\phi}=H^{\phi}[i]\). If \(\mathfrak{v}(A)\prec^{\flat}1\) and \((g_{1},\dots,g_{r})\) is strong, then \((h_{1},\dots,h_{r})\) is strong._
This follows from Lemmas 4.2.12 and 4.5.9.
**Lemma 4.5.17**.: _Suppose \(\mathfrak{v}:=\mathfrak{v}(A)\prec 1\) and \(\widehat{a}\prec_{\Delta(\mathfrak{v})}1\). Let \((g_{1},\dots,g_{r})\) be an \(\widehat{a}\)-repulsive splitting of \(A\) over \(K\). Then for all sufficiently small \(q\in\mathbb{Q}^{>}\) and any \(\mathfrak{n}\asymp|\mathfrak{v}|^{q}\), \((g_{1}-\mathfrak{n}^{\dagger},\dots,g_{r}-\mathfrak{n}^{\dagger})\) is a strong \(\widehat{a}/\mathfrak{n}\)-repulsive splitting of \(A\mathfrak{n}\) over \(K\)._
Proof.: Take \(q_{0}\in\mathbb{Q}^{>}\) with \(\widehat{a}\prec|\mathfrak{v}|^{q_{0}}\prec 1\). Then for any \(q\in\mathbb{Q}\) with \(0<q\leqslant q_{0}\) and any \(\mathfrak{n}\asymp|\mathfrak{v}|^{q}\), \((g_{1}-\mathfrak{n}^{\dagger},\dots,g_{r}-\mathfrak{n}^{\dagger})\) is an \(\widehat{a}/\mathfrak{n}\)-repulsive splitting of \(A\mathfrak{n}\) over \(K\), by Corollary 4.5.14. Using Lemmas 4.2.13 and 4.2.10 (in that order) we can decrease \(q_{0}\) so that for all \(q\in\mathbb{Q}\) with \(0<q\leqslant q_{0}\) and \(\mathfrak{n}\asymp|\mathfrak{v}|^{q}\), \((g_{1}-\mathfrak{n}^{\dagger},\dots,g_{r}-\mathfrak{n}^{\dagger})\) is also a strong splitting of \(A\mathfrak{n}\) over \(K\).
_In the rest of this subsection we assume that \(H\) is Liouville closed with \(\operatorname{I}(K)\subseteq K^{\dagger}\)._
We choose a complement \(\Lambda\subseteq H\) of \(K^{\dagger}\) in \(K\) as in Section 4.4 and set \(\operatorname{U}:=K\big{[}\mathrm{e}(\Lambda)\big{]}\). We then have the set \(\mathscr{E}^{\mathrm{u}}(A)\subseteq\Gamma\) of ultimate exceptional values of \(A\) (which doesn't depend on \(\Lambda\) by Corollary 4.4.1). Recall from Corollary 1.2.31 that \(H\) is of Hardy type iff \(C\) is archimedean. _We now assume \(r=1\) and \(\widehat{a}\prec 1\) is special over \(H\), and let \(\Delta\) be the nontrivial convex subgroup of \(\Gamma\) that is cofinal in \(v(\widehat{a}-H)\)._
**Lemma 4.5.18**.: _Suppose \(C\) is archimedean and \(\mathscr{E}^{\mathrm{u}}(A)\cap v(\widehat{a}-H)<0\). Then \(A\) splits \(\widehat{a}\)-repulsively over \(K\)._
Proof.: We may arrange \(A=\partial-f\). Take \(u\in\operatorname{U}^{\times}\) with \(u^{\dagger}=f\), and set \(b:=\|u\|\in H^{>}\). Then \(\mathscr{E}^{\mathrm{u}}(A)=\{vb\}\) by Lemma 2.6.14 and its proof, hence
\[\mathscr{E}^{\mathrm{u}}(A)\cap v(\widehat{a}-H)<0\quad\Longleftrightarrow\quad b \succ 1\text{ or }vb>\Delta,\]
and \(\operatorname{Re}f=b^{\dagger}\) by Lemma 4.4.21. If \(b\succ 1\), then \(\operatorname{Re}f>0\), and if \(vb>\Delta\), then for all \(\delta\in\Delta^{\neq}\) we have \(\psi(vb)<\psi(\delta)\) by Lemma 1.2.27, so \(\operatorname{Re}f\succ\mathfrak{m}^{\dagger}\) for all \(a\), \(\mathfrak{m}\) with \(\widehat{a}-a\asymp\mathfrak{m}\prec 1\). In both cases \(A\) splits \(\widehat{a}\)-repulsively over \(K\).
**Lemma 4.5.19**.: _Suppose \(A\in H[\partial]\) and \(\mathfrak{v}(A)\prec 1\). Then \(0\notin\mathscr{E}^{\mathrm{u}}(A)\), and if \(A\) splits \(\widehat{a}\)-repulsively over \(K\), then \(\mathscr{E}^{\mathrm{u}}(A)\cap v(\widehat{a}-H)<0\)._
Proof.: We again arrange \(A=\partial-f\) and take \(u\), \(b\) as in the proof of Lemma 4.5.18. Then \(f\in H\) and \(b^{\dagger}=f=-1/\mathfrak{v}(A)\succ 1\), so \(b\not\asymp 1\), and thus \(0\notin\{vb\}=\mathscr{E}^{\mathrm{u}}(A)\). Now suppose \(A\) splits \(\widehat{a}\)-repulsively over \(K\), that is, \(f>0\) or \(f\succ\mathfrak{m}^{\dagger}\) for all \(a\), \(\mathfrak{m}\) with \(\widehat{a}-a\asymp\mathfrak{m}\prec 1\). In the first case \(f=b^{\dagger}\) and \(b\not\asymp 1\) yield \(b\succ 1\). In the second case \(\psi(vb)=vf<\psi(\delta)\) for all \(\delta\in\Delta^{\neq}\), hence \(vb\notin\Delta\).
Combining Lemma 4.2.11 with the previous two lemmas yields:
**Corollary 4.5.20**.: _Suppose \(A\in H[\partial]\) and \(\mathfrak{v}(A)\prec 1\), and \(H\) is of Hardy type. Then \(A\) splits strongly over \(K\), and we have the equivalence_
\[A\text{ splits }\widehat{a}\text{-repulsively over }K\iff\mathscr{E}^{\mathrm{u}}(A)\cap v(\widehat{a}-H)\ \leqslant\ 0.\]
**Defining repulsive-normality.** _In this subsection \((P,\mathfrak{m},\widehat{a})\) is a slot in \(H\) of order \(r\geqslant 1\) with \(\widehat{a}\in\widehat{H}\setminus H\) and linear part \(L:=L_{P_{\times\mathfrak{m}}}\)._ Set \(w:=\operatorname{wt}(P)\); if \(\operatorname{order}L=r\), set \(\mathfrak{v}:=\mathfrak{v}(L)\). We let \(a\), \(b\) range over \(H\) and \(\mathfrak{n}\) over \(H^{\times}\).
**Definition 4.5.21**.: Call \((P,\mathfrak{m},\widehat{a})\) repulsive-normal if \(\operatorname{order}L=r\), and
(RN1) \(\mathfrak{v}\prec^{\flat}1\);
(RN2) \((P_{\times\mathfrak{m}})_{\geqslant 1}=Q+R\) where \(Q,R\in H\{Y\}\), \(Q\) is homogeneous of degree \(1\) and order \(r\), \(L_{Q}\) splits \(\widehat{a}/\mathfrak{m}\)-repulsively over \(K\), and \(R\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}(P_{\times\mathfrak{m}})_{1}\).
Compare this with "split-normality" from Definition 4.3.3: clearly repulsive-normal implies split-normal, and hence normal. If \((P,\mathfrak{m},\widehat{a})\) is normal and \(L\) splits \(\widehat{a}/\mathfrak{m}\)-repulsively over \(K\), then \((P,\mathfrak{m},\widehat{a})\) is repulsive-normal. If \((P,\mathfrak{m},\widehat{a})\) is repulsive-normal, then so are \((bP,\mathfrak{m},\widehat{a})\) for \(b\neq 0\) and \((P_{\times\mathfrak{n}},\mathfrak{m}/\mathfrak{n},\widehat{a}/\mathfrak{n})\).
**Lemma 4.5.22**.: _Suppose \((P,\mathfrak{m},\widehat{a})\) is repulsive-normal and \(\phi\) is active in \(H\) such that \(0<\phi\prec 1\), and \(\widehat{a}-a\prec^{\flat}\mathfrak{m}\) for some \(a\). Then the slot \((P^{\phi},\mathfrak{m},\widehat{a})\) in \(H^{\phi}\) is repulsive-normal._
Proof.: First arrange \(\mathfrak{m}=1\), and let \(Q\), \(R\) be as in (RN2) for \(\mathfrak{m}=1\). Now \((P^{\phi},1,\widehat{a})\) is split-normal by Lemma 4.3.5. In fact, \(P^{\phi}_{\geqslant 1}=Q^{\phi}+R^{\phi}\), and the proof of this lemma shows that \(R^{\phi}\prec_{\Delta(\mathfrak{w})}\mathfrak{w}^{w+1}P^{\phi}_{1}\) where \(\mathfrak{w}:=\mathfrak{v}(L_{P^{\phi}})\). By Lemma 4.5.16, \(L_{Q^{\phi}}=L^{\phi}_{Q}\) splits \(\widehat{a}\)-repulsively over \(K^{\phi}\). So \((P^{\phi},1,\widehat{a})\) is repulsive-normal.
If \(\operatorname{order}L=r\), \(\mathfrak{v}\prec^{\flat}1\), and \(\widehat{a}-a\prec_{\Delta(\mathfrak{v})}\mathfrak{m}\), then \(\widehat{a}-a\prec^{\flat}\mathfrak{m}\). Thus we obtain from Lemmas 3.3.13 and 4.5.22 the following result:
**Corollary 4.5.23**.: _Suppose \((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal, deep, and repulsive-normal. Let \(\phi\) be active in \(H\) with \(0<\phi\prec 1\). Then the slot \((P^{\phi},\mathfrak{m},\widehat{a})\) in \(H^{\phi}\) is repulsive-normal._
Before we turn to the task of obtaining repulsive-normal slots, we deal with the preservation of repulsive-normality under refinements.
**Lemma 4.5.24**.: _Suppose \((P,\mathfrak{m},\widehat{a})\) is repulsive-normal, and let \(Q\), \(R\) be as in (RN2). Let \((P_{+a},\mathfrak{n},\widehat{a}-a)\) be a steep refinement of \((P,\mathfrak{m},\widehat{a})\) where \(\mathfrak{n}\prec\mathfrak{m}\) or \(\mathfrak{n}=\mathfrak{m}\). Suppose_
\[(P_{+a,\times\mathfrak{n}})_{\geqslant 1}-Q_{\times\mathfrak{n}/\mathfrak{m}}\ \prec_{\Delta(\mathfrak{w})}\ \mathfrak{w}^{w+1}(P_{+a,\times\mathfrak{n}})_{1}\qquad\text{where }\mathfrak{w}:=\mathfrak{v}(L_{P_{+a,\times\mathfrak{n}}}).\]
_Then \((P_{+a},\mathfrak{n},\widehat{a}-a)\) is repulsive-normal._
Proof.: By (RN2), \(L_{Q}\) splits \(\widehat{a}/\mathfrak{m}\)-repulsively over \(K\), so \(L_{Q}\) also splits \((\widehat{a}-a)/\mathfrak{m}\)-repulsively over \(K\). We have \((\widehat{a}-a)/\mathfrak{m}\prec\mathfrak{n}/\mathfrak{m}\prec 1\) or \((\widehat{a}-a)/\mathfrak{m}\prec 1=\mathfrak{n}/\mathfrak{m}\), so \(L_{Q}\) splits \((\widehat{a}-a)/\mathfrak{n}\)-repulsively over \(K\) by the first part of Corollary 4.5.14, and hence \(L_{Q_{\times\mathfrak{n}/\mathfrak{m}}}=L_{Q}\cdot(\mathfrak{n}/\mathfrak{m})\) splits \((\widehat{a}-a)/\mathfrak{n}\)-repulsively over \(K\) by the second part of that Corollary 4.5.14. Thus \((P_{+a},\mathfrak{n},\widehat{a}-a)\) is repulsive-normal.
The proofs of Lemmas 4.3.18, 4.3.19, 4.3.20 give the following repulsive-normal analogues of these lemmas, using also Lemma 4.5.24; for Lemma 4.5.27 below we adopt the notational conventions about \(\mathfrak{n}^{q}\) (\(q\in\mathbb{Q}^{>}\)) stated before Lemma 4.3.20.
**Lemma 4.5.25**.: _If \((P,\mathfrak{m},\widehat{a})\) is repulsive-normal and \((P_{+a},\mathfrak{m},\widehat{a}-a)\) is a refinement of \((P,\mathfrak{m},\widehat{a})\), then \((P_{+a},\mathfrak{m},\widehat{a}-a)\) is also repulsive-normal._
**Lemma 4.5.26**.: _Suppose \((P,\mathfrak{m},\widehat{a})\) is repulsive-normal, \(\widehat{a}\prec\mathfrak{n}\prec\mathfrak{m}\), and \([\mathfrak{n}/\mathfrak{m}]\leqslant[\mathfrak{v}]\). Then the refinement \((P,\mathfrak{n},\widehat{a})\) of \((P,\mathfrak{m},\widehat{a})\) is repulsive-normal: if \(Q\), \(R\) are as in (RN2), then (RN2) holds with \(\mathfrak{n}\), \(Q_{\times\mathfrak{n}/\mathfrak{m}}\), \(R_{\times\mathfrak{n}/\mathfrak{m}}\), \(\mathfrak{v}(L_{P_{\times\mathfrak{n}}})\) in place of \(\mathfrak{m}\), \(Q\), \(R\), \(\mathfrak{v}\)._
**Lemma 4.5.27**.: _Suppose \(\mathfrak{m}=1\), \((P,1,\widehat{a})\) is repulsive-normal, \(\widehat{a}\prec\mathfrak{n}\prec 1\), and for \(\mathfrak{v}:=\mathfrak{v}(L_{P})\) we have \([\mathfrak{n}^{\dagger}]<[\mathfrak{v}]<[\mathfrak{n}]\); then \((P,\mathfrak{n}^{q},\widehat{a})\) is a repulsive-normal refinement of \((P,1,\widehat{a})\) for all but finitely many \(q\in\mathbb{Q}\) with \(0<q<1\)._
**Achieving repulsive-normality.** In this subsection we adopt the setting of the subsection _Achieving split-normality_ of Section 4.3: _\(H\) is \(\omega\)-free and \((P,\mathfrak{m},\widehat{a})\) is a minimal hole in \(K\) of order \(r\geqslant 1\), \(\mathfrak{m}\in H^{\times}\), and \(\widehat{a}\in\widehat{K}\setminus K\), with \(\widehat{a}=\widehat{b}+\widehat{c}i\), \(\widehat{b},\widehat{c}\in\widehat{H}\)._ We let \(a\) range over \(K\), \(b\), \(c\) over \(H\), and \(\mathfrak{n}\) over \(H^{\times}\). We prove here the following variant of Theorem 4.3.9:
**Theorem 4.5.28**.: _Suppose the constant field \(C\) of \(H\) is archimedean and \(\deg P>1\). Then one of the following conditions is satisfied:_
* \(\widehat{b}\notin H\) _and some_ \(Z\)_-minimal slot_ \((Q,\mathfrak{m},\widehat{b})\) _in_ \(H\) _has a special refinement_ \((Q_{+b},\mathfrak{n},\widehat{b}-b)\) _such that_ \((Q_{+b}^{\phi},\mathfrak{n},\widehat{b}-b)\) _is eventually deep and repulsive-normal;_
* \(\widehat{c}\notin H\) _and some_ \(Z\)_-minimal slot_ \((R,\mathfrak{m},\widehat{c})\) _in_ \(H\) _has a special refinement_ \((R_{+c},\mathfrak{n},\widehat{c}-c)\) _such that_ \((R_{+c}^{\phi},\mathfrak{n},\widehat{c}-c)\) _is eventually deep and repulsive-normal._
To establish this theorem we need to take up the approximation arguments in the proof of Theorem 4.3.9 once again. While in that proof we treated the cases \(\widehat{b}\in H\) and \(\widehat{c}\in H\) separately to obtain stronger results in those cases (Lemmas 4.3.10, 4.3.11), here we proceed differently and first show a repulsive-normal version of Proposition 4.3.12 which also applies to those cases. _In the rest of this subsection we assume that \(C\) is archimedean._
**Proposition 4.5.29**.: _Suppose the hole \((P,\mathfrak{m},\widehat{a})\) in \(K\) is special and \(v(\widehat{b}-H)\subseteq v(\widehat{c}-H)\) (so \(\widehat{b}\notin H\)). Let \((Q,\mathfrak{m},\widehat{b})\) be a \(Z\)-minimal deep normal slot in \(H\). Then \((Q,\mathfrak{m},\widehat{b})\) has a repulsive-normal refinement._
Proof.: As in the proof of Proposition 4.3.12 we first arrange \(\mathfrak{m}=1\), and set
\[\Delta\ :=\ \big{\{}\delta\in\Gamma:\ |\delta|\in v\big{(}\widehat{a}-K\big{)} \big{\}},\]
a convex subgroup of \(\Gamma\) which is cofinal in \(v(\widehat{a}-K)=v(\widehat{b}-H)\), so \(\widehat{b}\) is special over \(H\). Lemma 3.3.13 applied to \((Q,1,\widehat{b})\) and \(\mathfrak{v}(L_{Q})\prec^{\flat}1\) gives that \(\Gamma^{\flat}\) is strictly contained in \(\Delta\). To show that \((Q,1,\widehat{b})\) has a repulsive-normal refinement, we follow the proof of Proposition 4.3.12, skipping the initial compositional conjugation, and arranging first that \(P,Q\asymp 1\). Recall from that proof that \(\dot{\widehat{a}}\in\dot{K}^{\rm c}=\dot{H}^{\rm c}[i]\) and \(\operatorname{Re}\dot{\widehat{a}}=\dot{\widehat{b}}\in\dot{H}^{\rm c}\setminus\dot{H}\), with \(\dot{\widehat{b}}\prec 1\), \(\dot{Q}\in\dot{H}\{Y\}\), and so \(\dot{Q}_{+\dot{\widehat{b}}}\in\dot{H}^{\rm c}\{Y\}\). Let \(A\in\dot{H}^{\rm c}[\partial]\) be the linear part of \(\dot{Q}_{+\dot{\widehat{b}}}\). Recall from that proof
that \(1\leqslant s:=\operatorname{order}Q=\operatorname{order}A\leqslant 2r\) and that \(A\) splits over \(\dot{K}^{\mathrm{c}}\). Then Lemma 1.1.4 gives a _real_ splitting \((g_{1},\ldots,g_{s})\) of \(A\) over \(\dot{K}^{\mathrm{c}}\):
\[A\ =\ f(\partial-g_{1})\cdots(\partial-g_{s}),\qquad 0\neq f\in\dot{H}^{ \mathrm{c}},\ g_{1},\ldots,g_{s}\in\dot{K}^{\mathrm{c}}.\]
It follows easily from [ADH, 10.1.8] that the real closed d-valued field \(\dot{H}\) is an \(H\)-field, and so its completion \(\dot{H}^{\mathrm{c}}\) is also a real closed \(H\)-field by [ADH, 10.5.9]. Recall also that \(\Delta=v(\dot{H}^{\times})\) is the value group of \(\dot{H}^{\mathrm{c}}\) and properly contains \(\Gamma^{\flat}\). Thus we can apply Corollary 4.5.11 with \(\dot{H}^{\mathrm{c}}\) in the role of \(H\) to get \(\mathfrak{n}\in\dot{\mathcal{O}}\) with \(0\neq\dot{\mathfrak{n}}\prec^{\flat}1\) and \(m\) such that for all \(n>m\), \((h_{1},\ldots,h_{s}):=(g_{1}-n\dot{\mathfrak{n}}^{\dagger},\ldots,g_{s}-n\dot{ \mathfrak{n}}^{\dagger})\) is a \(\Delta\)-repulsive splitting of \(A\dot{\mathfrak{n}}^{n}\) over \(\dot{K}^{\mathrm{c}}\), so \(\operatorname{Re}h_{1},\ldots,\operatorname{Re}h_{s}\neq 0\). For any \(n\), \(A\dot{\mathfrak{n}}^{n}\) is the linear part of \(\dot{Q}_{+\dot{b},\times\dot{\mathfrak{n}}^{n}}\in\dot{H}^{\mathrm{c}}\{Y\}\), and \((h_{1},\ldots,h_{s})\) is also a real splitting of \(A\dot{\mathfrak{n}}^{n}\) over \(\dot{K}^{\mathrm{c}}\):
\[A\dot{\mathfrak{n}}^{n}\ =\ \dot{\mathfrak{n}}^{n}f(\partial-h_{1})\cdots( \partial-h_{s}).\]
By increasing \(m\) we arrange that for all \(n>m\) we have \(g_{j}\not\sim n\dot{\mathfrak{n}}^{\dagger}\)\((j=1,\ldots,s)\), and also \(\mathfrak{v}(A\dot{\mathfrak{n}}^{n})\preccurlyeq\mathfrak{v}(A)\) provided \(\big{[}\mathfrak{v}(A)\big{]}<[\dot{\mathfrak{n}}]\); for the latter part use Lemma 3.1.16. Below we assume \(n>m\). Then \(\mathfrak{v}(A\dot{\mathfrak{n}}^{n})\prec 1\): to see this use Corollary 3.1.4, \(\mathfrak{v}(A)\prec 1\), and \(g_{j}\preccurlyeq h_{j}\)\((j=1,\ldots,s)\). Note that \(h_{1},\ldots,h_{s}\succcurlyeq 1\). We now apply Corollary 4.2.9 to \(\dot{H}\), \(\dot{K}\), \(\dot{Q}\), \(s\), \(\mathfrak{n}^{n}\), \(\dot{\widehat{b}}\), \(\dot{\mathfrak{n}}^{n}f\), \(h_{1},\ldots,h_{s}\) in place of \(H\), \(K\), \(P\), \(r\), \(\mathfrak{m}\), \(f\), \(a\), \(b_{1},\ldots,b_{r}\), respectively, and any \(\gamma\in\Delta\) with \(\gamma>v(\dot{\mathfrak{n}}^{n}),v(\operatorname{Re}h_{1}),\ldots,v( \operatorname{Re}h_{s})\). This gives \(a,b\in\dot{\mathcal{O}}\) and \(b_{1},\ldots,b_{s}\in\dot{\mathcal{O}}_{K}\) such that \(\dot{a},\dot{b}\neq 0\) in \(\dot{H}\) and such that for the linear part \(\widetilde{A}\in\dot{H}[\partial]\) of \(\dot{Q}_{+\dot{b},\times\dot{\mathfrak{n}}^{n}}\) we have
\[\dot{b}\ -\dot{\widehat{b}}\ \prec\ \dot{\mathfrak{n}}^{n},\qquad\widetilde{A}\ \sim\ A\dot{\mathfrak{n}}^{n},\qquad\operatorname{order}\widetilde{A}\ =\ s,\qquad\mathfrak{w}\ :=\ \mathfrak{v}( \widetilde{A})\ \sim\ \mathfrak{v}(A\dot{\mathfrak{n}}^{n}),\]
and such that for \(w:=\operatorname{wt}(Q)\) and with \(\Delta(\mathfrak{w})\subseteq\Delta\):
\[\widetilde{A}\ =\ \widetilde{B}+\widetilde{E},\ \ \ \widetilde{B}\ =\ \dot{a}(\partial-\dot{b}_{1})\cdots(\partial-\dot{b}_{s})\in\dot{H}[ \partial],\quad\widetilde{E}\in\dot{H}[\partial],\] \[v(\dot{b}_{1}-h_{1}),\ldots,v(\dot{b}_{s}-h_{s})\ >\ \gamma,\quad \widetilde{E}\ \prec_{\Delta(\mathfrak{w})}\ \mathfrak{w}^{w+1}\widetilde{A},\]
and \((\dot{b}_{1},\ldots,\dot{b}_{s})\) is a real splitting of \(\widetilde{B}\) over \(\dot{K}\). This real splitting over \(\dot{K}\) has a consequence that will be crucial at the end of the proof: by changing \(b_{1},\ldots,b_{s}\) if necessary, without changing \(\dot{b}_{1},\ldots,\dot{b}_{s}\) we arrange that \(B:=a(\partial-b_{1})\cdots(\partial-b_{s})\) lies in \(\dot{\mathcal{O}}[\partial]\subseteq H[\partial]\) and that \((b_{1},\ldots,b_{s})\) is a real splitting of \(B\) over \(K\). (Lemma 1.1.6.)
Since \(\operatorname{Re}\dot{b}_{1}\sim\operatorname{Re}h_{1},\ldots,\operatorname{Re}\dot{b}_{s}\sim\operatorname{Re}h_{s}\), the implication just before Lemma 4.5.2 gives that \((\dot{b}_{1},\ldots,\dot{b}_{s})\) is a \(\Delta\)-repulsive splitting of \(\widetilde{B}\) over \(\dot{K}\). Now \(\widehat{b}-b\prec\mathfrak{n}^{n}\prec 1\), so \((Q_{+b},1,\widehat{b}-b)\) is a refinement of the normal slot \((Q,1,\widehat{b})\) in \(H\), hence \((Q_{+b},1,\widehat{b}-b)\) is normal by Proposition 3.3.25. We claim that the refinement \((Q_{+b},\mathfrak{n}^{n},\widehat{b}-b)\) of \((Q_{+b},1,\widehat{b}-b)\) is also normal. If \([\mathfrak{n}]\leqslant\big{[}\mathfrak{v}(L_{Q_{+b}})\big{]}\), this claim holds by Corollary 3.3.27. From Lemmas 3.1.27 and 3.1.7 we obtain:
\[\operatorname{order}L_{Q_{+b}}\ =\ \operatorname{order}L_{Q}\ =\ \operatorname{order}L_{Q_{+\widehat{b}}}\ =\ s,\] \[\mathfrak{v}(L_{Q_{+b}})\ \sim\ \mathfrak{v}(L_{Q})\ \sim\ \mathfrak{v}(L_{Q_{+\widehat{b}}}),\qquad v \big{(}\mathfrak{v}(L_{Q_{+\widehat{b}}})\big{)}\ =\ v\big{(}\mathfrak{v}(A)\big{)},\]
so \(v\big{(}\mathfrak{v}(L_{Q_{+b}})\big{)}=v\big{(}\mathfrak{v}(A)\big{)}\). Moreover, by Lemma 3.1.7 and the facts about \(\widetilde{A}\),
\[v\big{(}\mathfrak{v}(L_{Q_{+b,\times\mathfrak{n}^{n}}})\big{)}\ =\ v\big{(}\mathfrak{v}(\widetilde{A})\big{)}\ =\ v\big{(}\mathfrak{v}(A\dot{\mathfrak{n}}^{n})\big{)}\ =\ v(\mathfrak{w}).\]
Suppose \(\big{[}\mathfrak{v}(L_{Q_{+b}})\big{]}<[\mathfrak{n}]\). Then \([\mathfrak{v}(A)]<[\dot{\mathfrak{n}}]\), so \(\mathfrak{v}(A\dot{\mathfrak{n}}^{n})\preccurlyeq\mathfrak{v}(A)\) using \(n>m\). Now the asymptotic relations among the various \(\mathfrak{v}(\dots)\) above give
\[\mathfrak{v}(L_{Q_{+b,\times\mathfrak{n}^{n}}})\ \preccurlyeq\ \mathfrak{v}(L_{Q_{+b}}),\]
hence \((Q_{+b},\mathfrak{n}^{n},\widehat{b}-b)\) is normal by Corollary 3.3.29 applied to \(H\) and the normal slot \((Q_{+b},1,\widehat{b}-b)\) in \(H\) in the role of \(K\) and \((P,1,\widehat{a})\), respectively. Put \(\mathfrak{v}:=\mathfrak{v}(L_{Q_{+b,\times\mathfrak{n}^{n}}})\), so \(\dot{\mathfrak{v}}\asymp\mathfrak{w}\). Note that \(Q_{+b,\times\mathfrak{n}^{n}}\in\dot{\mathcal{O}}\{Y\}\), so the image of \(L_{Q_{+b,\times\mathfrak{n}^{n}}}\in\dot{\mathcal{O}}[\partial]\) in \(\dot{H}[\partial]\) is \(\widetilde{A}\). Thus in \(H[\partial]\) we have:
\[L_{Q_{+b,\times\mathfrak{n}^{n}}}\ =\ B+E\qquad\text{where }E\in\dot{\mathcal{O}}[ \partial],\ \ E\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}L_{Q_{+b,\times \mathfrak{n}^{n}}}.\]
Now \(\dot{b}_{1},\dots,\dot{b}_{s}\) are \(\Delta\)-repulsive, so \(b_{1},\dots,b_{s}\) are \(\Delta\)-repulsive, hence
\[B\ =\ a(\partial-b_{1})\cdots(\partial-b_{s})\]
splits \(\Delta\)-repulsively, and thus \((\widehat{b}-b)/\mathfrak{n}^{n}\)-repulsively. Therefore \((Q_{+b},\mathfrak{n}^{n},\widehat{b}-b)\) is repulsive-normal.
Instead of assuming in the above proposition that \((P,\mathfrak{m},\widehat{a})\) is special and \((Q,\mathfrak{m},\widehat{b})\) is deep and normal, we can assume, as with Corollary 4.3.13, that \(\deg P>1\):
**Corollary 4.5.30**.: _Suppose \(\deg P>1\) and \(v(\widehat{b}-H)\subseteq v(\widehat{c}-H)\). Let \(Q\in Z(H,\widehat{b})\) have minimal complexity. Then the \(Z\)-minimal slot \((Q,\mathfrak{m},\widehat{b})\) in \(H\) has a special refinement \((Q_{+b},\mathfrak{n},\widehat{b}-b)\) such that \((Q_{+b}^{\phi},\mathfrak{n},\widehat{b}-b)\) is eventually deep and repulsive-normal._
Proof.: The beginning of the subsection _Achieving split-normality_ of Section 4.3 and \(\deg P>1\) give that \(K\) is \(r\)-linearly newtonian. Lemmas 3.2.26 and 3.3.23 yield a quasilinear refinement \((P_{+a},\mathfrak{n},\widehat{a}-a)\) of our hole \((P,\mathfrak{m},\widehat{a})\) in \(K\). Set \(b:=\operatorname{Re}a\). By Lemma 4.1.3 we have
\[v\big{(}(\widehat{a}-a)-K\big{)}\ =\ v(\widehat{a}-K)\ =\ v\big{(}\widehat{b}-H \big{)}\ =\ v\big{(}(\widehat{b}-b)-H\big{)}.\]
Replacing \((P,\mathfrak{m},\widehat{a})\) and \((Q,\mathfrak{m},\widehat{b})\) by \((P_{+a},\mathfrak{n},\widehat{a}-a)\) and \((Q_{+b},\mathfrak{n},\widehat{b}-b)\), respectively, we arrange that \((P,\mathfrak{m},\widehat{a})\) is quasilinear. Then by Proposition 1.6.12 and \(K\) being \(r\)-linearly newtonian, \((P,\mathfrak{m},\widehat{a})\) is special; hence so is \((Q,\mathfrak{m},\widehat{b})\). Proposition 3.3.36 gives a refinement \((Q_{+b},\mathfrak{n},\widehat{b}-b)\) of \((Q,\mathfrak{m},\widehat{b})\) and an active \(\phi_{0}\in H^{>}\) such that \((Q_{+b}^{\phi_{0}},\mathfrak{n},\widehat{b}-b)\) is deep and normal. Refinements of \((P,\mathfrak{m},\widehat{a})\) remain quasilinear by Corollary 3.2.23. Since \(v(\widehat{b}-H)\subseteq v(\widehat{c}-H)\), Lemma 4.1.3(ii) gives a refinement \((P_{+a},\mathfrak{n},\widehat{a}-a)\) of \((P,\mathfrak{m},\widehat{a})\) with \(\operatorname{Re}a=b\). By Lemma 3.2.35 the minimal hole \((P_{+a}^{\phi_{0}},\mathfrak{n},\widehat{a}-a)\) in \(K^{\phi_{0}}\) is special. Proposition 4.5.29 applied to \((P_{+a}^{\phi_{0}},\mathfrak{n},\widehat{a}-a)\), \((Q_{+b}^{\phi_{0}},\mathfrak{n},\widehat{b}-b)\) in place of \((P,\mathfrak{m},\widehat{a})\), \((Q,\mathfrak{m},\widehat{b})\), respectively, gives us \(b_{0}\in H\), \(\mathfrak{n}_{0}\in H^{\times}\) and a repulsive-normal refinement \(\big{(}Q_{+(b+b_{0})}^{\phi_{0}},\mathfrak{n}_{0},\widehat{b}-(b+b_{0})\big{)}\) of \((Q_{+b}^{\phi_{0}},\mathfrak{n},\widehat{b}-b)\). This refinement is steep and hence deep by Corollary 3.3.6, since \((Q_{+b}^{\phi_{0}},\mathfrak{n},\widehat{b}-b)\) is deep. Thus by Corollary 4.5.23, \(\big{(}Q_{+(b+b_{0})},\mathfrak{n}_{0},\widehat{b}-(b+b_{0})\big{)}\) is a refinement of \((Q,\mathfrak{m},\widehat{b})\) such that \(\big{(}Q_{+(b+b_{0})}^{\phi},\mathfrak{n}_{0},\widehat{b}-(b+b_{0})\big{)}\) is eventually deep and repulsive-normal. As a refinement of \((Q,\mathfrak{m},\widehat{b})\), it is special.
In the same way that Corollary 4.3.13 gave rise to Corollary 4.3.14, Corollary 4.5.30 gives rise to the following:
**Corollary 4.5.31**.: _If \(\deg P>1\), \(v(\widehat{c}-H)\subseteq v(\widehat{b}-H)\), and \(R\in Z(H,\widehat{c})\) has minimal complexity, then the \(Z\)-minimal slot \((R,\mathfrak{m},\widehat{c})\) in \(H\) has a special refinement \((R_{+c},\mathfrak{n},\widehat{c}-c)\) such that \((R_{+c}^{\phi},\mathfrak{n},\widehat{c}-c)\) is eventually deep and repulsive-normal._
By Lemma 4.1.3 we have \(v(\widehat{b}-H)\subseteq v(\widehat{c}-H)\) or \(v(\widehat{c}-H)\subseteq v(\widehat{b}-H)\), hence the two corollaries above yield Theorem 4.5.28, completing its proof.
**Strengthening repulsive-normality.** In this subsection we adopt the setting of the subsection _Strengthening split-normality_ of Section 4.3. Thus \((P,\mathfrak{m},\widehat{a})\) is a slot in \(H\) of order \(r\geqslant 1\) and weight \(w:=\operatorname{wt}(P)\), and \(L:=L_{P_{\times\mathfrak{m}}}\). If \(\operatorname{order}L=r\), we set \(\mathfrak{v}:=\mathfrak{v}(L)\). We let \(a\), \(b\) range over \(H\) and \(\mathfrak{m}\), \(\mathfrak{n}\) over \(H^{\times}\).
**Definition 4.5.32**.: We say that \((P,\mathfrak{m},\widehat{a})\) is **almost strongly repulsive-normal** if \(\operatorname{order}L=r\), \(\mathfrak{v}\prec^{\flat}1\), and there are \(Q,R\in H\{Y\}\) such that
(RN2as) \((P_{\times\mathfrak{m}})_{\geqslant 1}=Q+R\), \(Q\) is homogeneous of degree \(1\) and order \(r\), \(L_{Q}\) has a strong \(\widehat{a}/\mathfrak{m}\)-repulsive splitting over \(K\), and \(R\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}(P_{\times\mathfrak{m}})_{1}\).
We say that \((P,\mathfrak{m},\widehat{a})\) is **strongly repulsive-normal** if \(\operatorname{order}L=r\), \(\mathfrak{v}\prec^{\flat}1\), and there are \(Q,R\in H\{Y\}\) such that:
(RN2s) \(P_{\times\mathfrak{m}}=Q+R\), \(Q\) is homogeneous of degree \(1\) and order \(r\), \(L_{Q}\) has a strong \(\widehat{a}/\mathfrak{m}\)-repulsive splitting over \(K\), and \(R\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{w+1}(P_{\times\mathfrak{m}})_{1}\).
If \((P,\mathfrak{m},\widehat{a})\) is almost strongly repulsive-normal, then \((P,\mathfrak{m},\widehat{a})\) is almost strongly split-normal; likewise without "almost". Thus we can augment our diagram from Section 4.3 as follows, the implications holding for slots of order \(\geqslant 1\) in real closed \(H\)-fields with small derivation and asymptotic integration:
\[\begin{array}{ccccc}
\text{strongly repulsive-normal} & \Longrightarrow & \text{almost strongly repulsive-normal} & \Longrightarrow & \text{repulsive-normal}\\
\Big\Downarrow & & \Big\Downarrow & & \Big\Downarrow\\
\text{strongly split-normal} & \Longrightarrow & \text{almost strongly split-normal} & \Longrightarrow & \text{split-normal}
\end{array}\]
If \((P,\mathfrak{m},\widehat{a})\) is almost strongly repulsive-normal, then so are \((bP,\mathfrak{m},\widehat{a})\) for \(b\neq 0\) and \((P_{\times\mathfrak{n}},\mathfrak{m}/\mathfrak{n},\widehat{a}/\mathfrak{n})\), and likewise with "strongly" in place of "almost strongly". The proof of the next lemma is like that of Lemma 4.3.25, using Lemmas 4.5.25 and 4.5.33 in place of Lemmas 4.3.18 and 4.3.23, respectively.
**Lemma 4.5.35**.: _Suppose \((P_{+a},\mathfrak{m},\widehat{a}-a)\) refines \((P,\mathfrak{m},\widehat{a})\). If \((P,\mathfrak{m},\widehat{a})\) is almost strongly repulsive-normal, then so is \((P_{+a},\mathfrak{m},\widehat{a}-a)\). If \((P,\mathfrak{m},\widehat{a})\) is strongly repulsive-normal, \(Z\)-minimal, and \(\widehat{a}-a\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{r+w+1}\mathfrak{m}\), then \((P_{+a},\mathfrak{m},\widehat{a}-a)\) is strongly repulsive-normal._
Here is the key to achieving almost strong repulsive-normality; its proof is similar to that of Lemma 4.3.26:
**Lemma 4.5.36**.: _Suppose that \((P,\mathfrak{m},\widehat{a})\) is repulsive-normal and \(\widehat{a}\prec_{\Delta(\mathfrak{v})}\mathfrak{m}\). Then for all sufficiently small \(q\in\mathbb{Q}^{>}\), any \(\mathfrak{n}\asymp\mathfrak{v}^{q}\mathfrak{m}\) yields an almost strongly repulsive-normal refinement \((P,\mathfrak{n},\widehat{a})\) of \((P,\mathfrak{m},\widehat{a})\)._
Proof.: First arrange \(\mathfrak{m}=1\). Take \(Q\), \(R\) as in (RN2) for \(\mathfrak{m}=1\). Then Lemma 4.5.17 gives \(q_{0}\in\mathbb{Q}^{>}\) such that \(\widehat{a}\prec\mathfrak{v}^{q_{0}}\) and for all \(q\in\mathbb{Q}\) with \(0<q\leqslant q_{0}\) and \(\mathfrak{n}\asymp\mathfrak{v}^{q}\), \(L_{Q_{\times\mathfrak{n}}}=L_{Q}\mathfrak{n}\) has a strong \(\widehat{a}/\mathfrak{n}\)-repulsive splitting over \(K\). Now Lemma 4.5.26 yields that \((P,\mathfrak{n},\widehat{a})\) is almost strongly repulsive-normal for such \(\mathfrak{n}\).
Using this lemma we now adapt the proof of Corollary 4.3.27 to obtain:
**Corollary 4.5.37**.: _Suppose \((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal, deep, and repulsive-normal. Then \((P,\mathfrak{m},\widehat{a})\) has a deep and almost strongly repulsive-normal refinement._
Proof.: Lemma 3.3.13 gives \(a\) such that \(\widehat{a}-a\prec_{\Delta(\mathfrak{v})}\mathfrak{m}\). By Corollary 3.3.8, the refinement \((P_{+a},\mathfrak{m},\widehat{a}-a)\) of \((P,\mathfrak{m},\widehat{a})\) is deep with \(\mathfrak{v}(L_{P_{+a,\times\mathfrak{m}}})\asymp_{\Delta(\mathfrak{v})} \mathfrak{v}\), and by Lemma 4.5.25 it is also repulsive-normal. Now apply Lemma 4.5.36 to \((P_{+a},\mathfrak{m},\widehat{a}-a)\) in place of \((P,\mathfrak{m},\widehat{a})\) and again use Corollary 3.3.8 to preserve being deep.
Next we adapt the proof of Lemma 4.3.28 to obtain a result about the behavior of (almost) repulsive-normality under compositional conjugation:
**Lemma 4.5.38**.: _Suppose \(\phi\) is active in \(H\) with \(0<\phi\prec 1\), and there exists a with \(\widehat{a}-a\prec^{\flat}\mathfrak{m}\). If \((P,\mathfrak{m},\widehat{a})\) is almost strongly repulsive-normal, then so is the slot \((P^{\phi},\mathfrak{m},\widehat{a})\) in \(H^{\phi}\). Likewise with "strongly" in place of "almost strongly"._
Proof.: We arrange \(\mathfrak{m}=1\), assume \((P,\mathfrak{m},\widehat{a})\) is almost strongly repulsive-normal, and take \(Q\), \(R\) as in (RN2as). The proof of Lemma 4.3.5 shows that with \(\mathfrak{w}:=\mathfrak{v}(L_{P^{\phi}})\) we have \(\mathfrak{w}\prec_{\phi}^{\flat}1\) and \((P^{\phi})_{\geqslant 1}=Q^{\phi}+R^{\phi}\) where \(Q^{\phi}\in H^{\phi}\{Y\}\) is homogeneous of degree \(1\) and order \(r\), \(L_{Q^{\phi}}\) splits over \(K^{\phi}\), and \(R^{\phi}\prec_{\Delta(\mathfrak{w})}\mathfrak{w}^{w+1}(P^{\phi})_{1}\). By Lemma 4.5.16, \(L_{Q^{\phi}}=L_{Q}^{\phi}\) has even a strong \(\widehat{a}\)-repulsive splitting over \(K\). Hence \((P^{\phi},\mathfrak{m},\widehat{a})\) is almost strongly repulsive-normal. For the rest we use Lemma 4.5.33 and the fact that if \((P,\mathfrak{m},\widehat{a})\) is strictly normal, then so is \((P^{\phi},\mathfrak{m},\widehat{a})\).
Lemma 3.3.13, the remark preceding Corollary 4.5.23, and Lemma 4.5.38 yield:
**Corollary 4.5.39**.: _Suppose \((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal and deep, and \(\phi\) is active in \(H\) with \(0<\phi\prec 1\). If \((P,\mathfrak{m},\widehat{a})\) is almost strongly repulsive-normal, then so is the slot \((P^{\phi},\mathfrak{m},\widehat{a})\) in \(H^{\phi}\). Likewise with "strongly" in place of "almost strongly"._
In the case \(r=1\), ultimateness yields almost strong repulsive-normality, under suitable assumptions; more precisely:
**Lemma 4.5.40**.: _Suppose \(H\) is Liouville closed and of Hardy type, and \(\mathrm{I}(K)\subseteq K^{\dagger}\). Assume also that \((P,\mathfrak{m},\widehat{a})\) is normal and special, of order \(r=1\). Then_
\[(P,\mathfrak{m},\widehat{a})\text{ is ultimate }\quad\Longleftrightarrow\quad L \text{ has a strong }\widehat{a}/\mathfrak{m}\text{-repulsive splitting over }K\text{,}\]
_in which case \((P,\mathfrak{m},\widehat{a})\) is almost strongly repulsive-normal._
Proof.: By Lemma 4.4.12, \((P,\mathfrak{m},\widehat{a})\) is ultimate iff \(\mathscr{E}^{\mathrm{u}}(L)\cap v\big{(}(\widehat{a}/\mathfrak{m})-H\big{)}\leqslant 0\), and the latter is equivalent to \(L\) having a strong \(\widehat{a}/\mathfrak{m}\)-repulsive splitting over \(K\), by Corollary 4.5.20. For the rest use Corollary 4.5.34.
Liouville closed \(H\)-fields are \(1\)-linearly newtonian by Corollary 1.8.29, so in view of Lemma 3.2.36 and Corollary 3.3.21 we may replace the hypothesis "\((P,\mathfrak{m},\widehat{a})\) is special" in the previous lemma by "\((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal or a hole in \(H\)". This leads to repulsive-normal analogues of Lemma 4.3.29 and Corollary 4.3.30 for \(r=1\):
**Lemma 4.5.41**.: _Assume \(H\) is Liouville closed and of Hardy type, and \(\mathrm{I}(K)\subseteq K^{\dagger}\). Suppose \((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal and quasilinear of order \(r=1\). Then there is a refinement \((P_{+a},\mathfrak{n},\widehat{a}-a)\) of \((P,\mathfrak{m},\widehat{a})\) and an active \(\phi\) in \(H\) with \(0<\phi\preccurlyeq 1\) such that \((P_{+a}^{\phi},\mathfrak{n},\widehat{a}-a)\) is deep, strictly normal, and ultimate \((\)so \((P_{+a}^{\phi},\mathfrak{n},\widehat{a}-a)\) is strongly repulsive-normal by Lemmas 4.5.40 and 4.5.33\()\)._
Proof.: For any active \(\phi\) in \(H\) with \(0<\phi\preccurlyeq 1\) we may replace \(H\), \((P,\mathfrak{m},\widehat{a})\) by \(H^{\phi}\), \((P^{\phi},\mathfrak{m},\widehat{a})\). We may also replace \((P,\mathfrak{m},\widehat{a})\) by any of its refinements. Since \(H\) is \(1\)-linearly newtonian, Corollary 3.3.35 gives a refinement \((P_{+a},\mathfrak{n},\widehat{a}-a)\) of \((P,\mathfrak{m},\widehat{a})\) and an active \(\phi\) in \(H\) such that \(0<\phi\preccurlyeq 1\) and \((P_{+a}^{\phi},\mathfrak{n},\widehat{a}-a)\) is normal. Replacing \(H\), \((P,\mathfrak{m},\widehat{a})\) by \(H^{\phi}\), \((P_{+a}^{\phi},\mathfrak{n},\widehat{a}-a)\), we arrange that \((P,\mathfrak{m},\widehat{a})\) itself is normal. Then \((P,\mathfrak{m},\widehat{a})\) has an ultimate refinement by Proposition 4.4.14, and applying Corollary 3.3.35 to this refinement and using Lemma 4.4.10, we obtain an ultimate refinement \((P_{+a},\mathfrak{n},\widehat{a}-a)\) of \((P,\mathfrak{m},\widehat{a})\) and an active \(\phi\) in \(H\) with \(0<\phi\preccurlyeq 1\) such that the \(Z\)-minimal slot \((P_{+a}^{\phi},\mathfrak{n},\widehat{a}-a)\) in \(H^{\phi}\) is deep, normal, and ultimate. Again replacing \(H\), \((P,\mathfrak{m},\widehat{a})\) by \(H^{\phi}\), \((P_{+a}^{\phi},\mathfrak{n},\widehat{a}-a)\), we arrange that \((P,\mathfrak{m},\widehat{a})\) is deep, normal, and ultimate. Corollary 3.3.47 yields a deep and strictly normal refinement \((P_{+a},\mathfrak{m},\widehat{a}-a)\) of \((P,\mathfrak{m},\widehat{a})\); this refinement is still ultimate by Lemma 4.4.10. Hence \((P_{+a},\mathfrak{m},\widehat{a}-a)\) is a refinement of \((P,\mathfrak{m},\widehat{a})\) as required, with \(\phi=1\).
Combining Lemmas 3.2.26 and 4.5.41 with Corollary 4.5.39 yields:
**Corollary 4.5.42**.: _Assume \(H\) is Liouville closed, \(\omega\)-free, and of Hardy type, and \(\mathrm{I}(K)\subseteq K^{\dagger}\). Then every \(Z\)-minimal slot in \(H\) of order \(r=1\) has a refinement \((P,\mathfrak{m},\widehat{a})\) such that \((P^{\phi},\mathfrak{m},\widehat{a})\) is eventually deep, ultimate, and strongly repulsive-normal._
In the next subsection we show how minimal holes of degree \(>1\) in \(K\) give rise to deep, ultimate, strongly repulsive-normal, \(Z\)-minimal slots in \(H\).
**Achieving strong repulsive-normality.** Let \(H\) be an \(\omega\)-free Liouville closed \(H\)-field with small derivation and constant field \(C\), and \((P,\mathfrak{m},\widehat{a})\) a minimal hole of order \(r\geqslant 1\) in \(K:=H[i]\). Other conventions are as in the subsection _Achieving repulsive-normality._ Our goal is to prove a version of Theorem 4.5.28 with "repulsive-normal" improved to "strongly repulsive-normal + ultimate":
**Theorem 4.5.43**.: _Suppose \(C\) is archimedean, \(\mathrm{I}(K)\subseteq K^{\dagger}\), and \(\deg P>1\). Then one of the following conditions is satisfied:_
1. \(\widehat{b}\notin H\) _and some_ \(Z\)_-minimal slot_ \((Q,\mathfrak{m},\widehat{b})\) _in_ \(H\) _has a special refinement_ \((Q_{+b},\mathfrak{n},\widehat{b}-b)\) _such that_ \((Q_{+b}^{\phi},\mathfrak{n},\widehat{b}-b)\) _is eventually deep, strongly repulsive-normal, and ultimate;_
2. \(\widehat{c}\notin H\) _and some_ \(Z\)_-minimal slot_ \((R,\mathfrak{m},\widehat{c})\) _in_ \(H\) _has a special refinement_ \((R_{+c},\mathfrak{n},\widehat{c}-c)\) _such that_ \((R_{+c}^{\phi},\mathfrak{n},\widehat{c}-c)\) _is eventually deep, strongly repulsive-normal, and ultimate._
The proof of this theorem rests on the following two lemmas, where the standing assumption that \(H\) is Liouville closed can be dropped.
**Lemma 4.5.44**.: _Suppose \(\widehat{b}\notin H\) and \((Q,\mathfrak{m},\widehat{b})\) is a \(Z\)-minimal slot in \(H\) with a refinement \((Q_{+b},\mathfrak{n},\widehat{b}-b)\) such that \((Q_{+b}^{\phi},\mathfrak{n},\widehat{b}-b)\) is eventually deep and repulsive-normal. Then \((Q,\mathfrak{m},\widehat{b})\) has a refinement \((Q_{+b},\mathfrak{n},\widehat{b}-b)\) such that \((Q_{+b}^{\phi},\mathfrak{n},\widehat{b}-b)\) is eventually deep and almost strongly repulsive-normal._
Proof.: We adapt the proof of Lemma 4.3.34. Let \((Q_{+b},\mathfrak{n},\widehat{b}-b)\) be a refinement of \((Q,\mathfrak{m},\widehat{b})\) and let \(\phi_{0}\) be active in \(H\) such that \(0<\phi_{0}\preccurlyeq 1\) and \((Q_{+b}^{\phi_{0}},\mathfrak{n},\widehat{b}-b)\) is deep and repulsive-normal. Then Corollary 4.5.37 yields a refinement
\[\big{(}(Q_{+b}^{\phi_{0}})_{+b_{0}},\mathfrak{n}_{0},(\widehat{b}-b)-b_{0} \big{)}\]
of \((Q_{+b}^{\phi_{0}},\mathfrak{n},\widehat{b}-b)\) which is deep and almost strongly repulsive-normal. Hence
\[\big{(}(Q_{+b})_{+b_{0}},\mathfrak{n}_{0},(\widehat{b}-b)-b_{0}\big{)}\ =\ \big{(}Q_{+(b+b_{0})},\mathfrak{n}_{0},\widehat{b}-(b+b_{0})\big{)}\]
is a refinement of \((Q,\mathfrak{m},\widehat{b})\), and \(\big{(}Q_{+(b+b_{0})}^{\phi},\mathfrak{n}_{0},\widehat{b}-(b+b_{0})\big{)}\) is eventually deep and almost strongly repulsive-normal by Corollary 4.5.39.
In the same way we obtain:
**Lemma 4.5.45**.: _Suppose \(\widehat{c}\notin H\) and \((R,\mathfrak{m},\widehat{c})\) is a \(Z\)-minimal slot in \(H\) with a refinement \((R_{+c},\mathfrak{n},\widehat{c}-c)\) such that \((R_{+c}^{\phi},\mathfrak{n},\widehat{c}-c)\) is eventually deep and repulsive-normal. Then \((R,\mathfrak{m},\widehat{c})\) has a refinement \((R_{+c},\mathfrak{n},\widehat{c}-c)\) such that \((R_{+c}^{\phi},\mathfrak{n},\widehat{c}-c)\) is eventually deep and almost strongly repulsive-normal._
Theorem 4.5.28 and the two lemmas above give Theorem 4.5.28 with "repulsive-normal" improved to "almost strongly repulsive-normal". We now upgrade this further to "strongly repulsive-normal + ultimate" (under an extra assumption).
Recall from Lemma 4.1.3 that \(v(\widehat{b}-H)\subseteq v(\widehat{c}-H)\) or \(v(\widehat{c}-H)\subseteq v(\widehat{b}-H)\). Thus the next two lemmas finish the proof of Theorem 4.5.43.
**Lemma 4.5.46**.: _Suppose \(C\) is archimedean, \(\mathrm{I}(K)\subseteq K^{\dagger}\), \(\deg P>1\), and_
\[v(\widehat{b}-H)\ \subseteq\ v(\widehat{c}-H).\]
_Let \(Q\in Z(H,\widehat{b})\) have minimal complexity. Then the \(Z\)-minimal slot \((Q,\mathfrak{m},\widehat{b})\) in \(H\) has a special refinement \((Q_{+b},\mathfrak{n},\widehat{b}-b)\) such that \((Q_{+b}^{\phi},\mathfrak{n},\widehat{b}-b)\) is eventually deep, strongly repulsive-normal, and ultimate._
Proof.: Here are two ways of modifying \((Q,\mathfrak{m},\widehat{b})\). First, let \((Q_{+b},\mathfrak{n},\widehat{b}-b)\) be a refinement of \((Q,\mathfrak{m},\widehat{b})\). Lemma 4.1.3 gives \(c\in H\) such that for \(a:=b+ci\) we have \(v(\widehat{a}-a)=v(\widehat{b}-b)\), and so the minimal hole \((P_{+a},\mathfrak{n},\widehat{a}-a)\) in \(K\) is a refinement of \((P,\mathfrak{m},\widehat{a})\) that relates to \((Q_{+b},\mathfrak{n},\widehat{b}-b)\) as \((P,\mathfrak{m},\widehat{a})\) relates to \((Q,\mathfrak{m},\widehat{b})\). So we can replace \((P,\mathfrak{m},\widehat{a})\) and \((Q,\mathfrak{m},\widehat{b})\) by \((P_{+a},\mathfrak{n},\widehat{a}-a)\) and \((Q_{+b},\mathfrak{n},\widehat{b}-b)\), whenever convenient. Second, let \(\phi\) be active in \(H\) with \(0<\phi\preccurlyeq 1\). Then we can likewise replace \(H\), \(K\), \((P,\mathfrak{m},\widehat{a})\), \((Q,\mathfrak{m},\widehat{b})\) by \(H^{\phi}\), \(K^{\phi}\), \((P^{\phi},\mathfrak{m},\widehat{a})\), \((Q^{\phi},\mathfrak{m},\widehat{b})\).
In this way we first arrange as in the proof of Corollary 4.5.30 that \((Q,\mathfrak{m},\widehat{b})\) is special. Next, we use Proposition 3.3.36 likewise to arrange that \((Q,\mathfrak{m},\widehat{b})\) is also normal. By Propositions 4.4.14 (where the assumption \(\mathrm{I}(K)\subseteq K^{\dagger}\) comes into play) and 3.3.25 we arrange that \((Q,\mathfrak{m},\widehat{b})\) is ultimate as well. The properties "special" and "ultimate" persist under further refinements and compositional conjugations.
Now Corollary 4.5.30 and Lemma 4.5.44 give a refinement \((Q_{+b},\mathfrak{n},\widehat{b}-b)\) of the slot \((Q,\mathfrak{m},\widehat{b})\) in \(H\) and an active \(\phi_{0}\) in \(H\) with \(0<\phi_{0}\preccurlyeq 1\) such that the slot \((Q_{+b}^{\phi_{0}},\mathfrak{n},\widehat{b}-b)\) in \(H^{\phi_{0}}\) is deep and almost strongly repulsive-normal. Corollary 3.3.47 then yields a deep and strictly normal refinement
\[\big{(}(Q_{+b}^{\phi_{0}})_{+b_{0}},\mathfrak{n},(\widehat{b}-b)-b_{0}\big{)}\]
of \(\big{(}Q_{+b}^{\phi_{0}},\mathfrak{n},\widehat{b}-b\big{)}\). This refinement is still almost strongly repulsive-normal by Lemma 4.5.35, and therefore strongly repulsive-normal by Lemma 4.5.33. Corollary 4.5.39 then gives that \(\big{(}Q_{+(b+b_{0})},\mathfrak{n},\widehat{b}-(b+b_{0})\big{)}\) is a special refinement of our slot \((Q,\mathfrak{m},\widehat{b})\) such that \(\big{(}Q_{+(b+b_{0})}^{\phi},\mathfrak{n},\widehat{b}-(b+b_{0})\big{)}\) is eventually deep and strongly repulsive-normal.
Likewise:
**Lemma 4.5.47**.: _Suppose \(C\) is archimedean, \(\mathrm{I}(K)\subseteq K^{\dagger}\), \(\deg P>1\), and_
\[v(\widehat{c}-H)\ \subseteq\ v(\widehat{b}-H).\]
_Let \(R\in Z(H,\widehat{c})\) have minimal complexity. Then the \(Z\)-minimal slot \((R,\mathfrak{m},\widehat{c})\) in \(H\) has a special refinement \((R_{+c},\mathfrak{n},\widehat{c}-c)\) such that \((R_{+c}^{\phi},\mathfrak{n},\widehat{c}-c)\) is eventually deep, strongly repulsive-normal, and ultimate._
**Part \(5\). Hardy Fields and their Universal Exponential Extensions**
In this part we turn to Hardy fields. Section 5.1 contains basic definitions and facts about germs of one-variable (real- or complex-valued) functions, and in Section 5.2 we collect the main facts we need about linear differential equations. In Section 5.3 we introduce Hardy fields and review some extension results due to Boshernitzan [32, 33, 34] and Rosenlicht [171]. In Section 5.4 we discuss upper and lower bounds on the growth of germs in Hardy fields from [34, 33, 170], and Section 5.5 contains a first study of second-order linear differential equations over Hardy fields (to be completed in Section 7.5, with our main theorem available). Section 5.6 contains the proof of a significant result about maximal Hardy fields, Theorem 5.6.2: every such Hardy field is \(\omega\)-free. (See the beginning of that section for a review of this important property of \(H\)-asymptotic fields, introduced in [ADH, 11.7].) The rest of Section 5.6 contains refinements and applications of this fact. In Section 5.7 we then prove a general fact about bounding the derivatives of solutions to linear differential equations, based on [67, 88, 121]. In Section 5.10 we give an analytic description of the universal exponential extension \(\mathrm{U}=\mathrm{U}_{K}\), introduced in Part 2, of the algebraic closure \(K\) of a Liouville closed Hardy field extending \(\mathbb{R}\). The elements of \(\mathrm{U}\) are exponential sums with coefficients and exponents in \(K\). To extract asymptotic information about the summands in such a sum we use results of Boshernitzan [36] about uniform distribution mod \(1\) over Hardy fields. We include proofs of these results in Section 5.9, preceded by a development of the required classical facts concerning almost periodic functions in Section 5.8. (None of the material in Sections 5.8 and 5.9 is original; we only aim for an efficient and self-contained exposition.)
### Germs of Continuous Functions
Hardy fields consist of germs of one-variable differentiable real-valued functions. In this section we first consider the ring \(\mathcal{C}\) of germs of _continuous_ real-valued functions, and its complex counterpart \(\mathcal{C}[i]\). With an eye towards applications to Hardy fields, we pay particular attention to extending subfields of \(\mathcal{C}\).
**Germs.** As in [ADH, 9.1] we let \(\mathcal{G}\) be the ring of germs at \(+\infty\) of real-valued functions whose domain is a subset of \(\mathbb{R}\) containing an interval \((a,+\infty)\), \(a\in\mathbb{R}\); the domain may vary and the ring operations are defined as usual. If \(g\in\mathcal{G}\) is the germ of a real-valued function on a subset of \(\mathbb{R}\) containing an interval \((a,+\infty)\), \(a\in\mathbb{R}\), then we simplify notation by letting \(g\) also denote this function if the resulting ambiguity is harmless. With this convention, given a property \(P\) of real numbers and \(g\in\mathcal{G}\) we say that \(P\big{(}g(t)\big{)}\)_holds eventually_ if \(P\big{(}g(t)\big{)}\) holds for all sufficiently large real \(t\). Thus for \(g\in\mathcal{G}\) we have \(g=0\) iff \(g(t)=0\) eventually (and so \(g\neq 0\) iff \(g(t)\neq 0\) for arbitrarily large \(t\)). Note that the multiplicative group \(\mathcal{G}^{\times}\) of units of \(\mathcal{G}\) consists of the \(f\in\mathcal{G}\) such that \(f(t)\neq 0\), eventually. We identify each real number \(r\) with the germ at \(+\infty\) of the function \(\mathbb{R}\to\mathbb{R}\) that takes the constant value \(r\). This makes the field \(\mathbb{R}\) into a subring of \(\mathcal{G}\). Given \(g,h\in\mathcal{G}\), we set
\[g\leqslant h\quad:\Longleftrightarrow\quad g(t)\leqslant h(t),\,\text{ eventually.} \tag{5.1.1}\]
This defines a partial ordering \(\leqslant\) on \(\mathcal{G}\) which restricts to the usual ordering of \(\mathbb{R}\).
Let \(g,h\in\mathcal{G}\). Then \(g,h\geqslant 0\Rightarrow g+h\), \(g\cdot h\), \(g^{2}\geqslant 0\), and \(g\geqslant r\in\mathbb{R}^{>}\Rightarrow g\in\mathcal{G}^{\times}\). We define \(g<h:\Leftrightarrow g\leqslant h\) and \(g\neq h\). Thus if \(g(t)<h(t)\), eventually, then \(g<h\); the converse is not generally valid.
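For a concrete instance of the failure of the converse, take \(h=0\) and let \(g\in\mathcal{G}\) be the germ of \(t\mapsto-|\sin t|\). Then
\[g(t)\ \leqslant\ 0\ =\ h(t)\ \text{ for all }t,\qquad g\ \neq\ h\ \text{ in }\mathcal{G},\]
so \(g<h\), yet \(g(t)<h(t)\) fails at \(t=k\pi\) for arbitrarily large \(k\in\mathbb{N}\).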
**Continuous germs.** We call a germ \(g\in\mathcal{G}\)_continuous_ if it is the germ of a continuous function \((a,+\infty)\to\mathbb{R}\) for some \(a\in\mathbb{R}\), and we let \(\mathcal{C}\supseteq\mathbb{R}\) be the subring of \(\mathcal{G}\) consisting of the continuous germs \(g\in\mathcal{G}\). We have \(\mathcal{C}^{\times}=\mathcal{G}^{\times}\cap\mathcal{C}\); thus for \(f\in\mathcal{C}^{\times}\), we have \(f(t)\neq 0\), eventually, hence either \(f(t)>0\), eventually, or \(f(t)<0\), eventually, and so \(f>0\) or \(f<0\). More generally, if \(g,h\in\mathcal{C}\) and \(g(t)\neq h(t)\), eventually, then \(g(t)<h(t)\), eventually, or \(h(t)<g(t)\), eventually. We let \(x\) denote the germ at \(+\infty\) of the identity function on \(\mathbb{R}\), so \(x\in\mathcal{C}^{\times}\).
**The ring \(\mathcal{C}[i]\).** In analogy with \(\mathcal{C}\) we define its complexification \(\mathcal{C}[i]\) as the ring of germs at \(+\infty\) of \(\mathbb{C}\)-valued continuous functions whose domain is a subset of \(\mathbb{R}\) containing an interval \((a,+\infty)\), \(a\in\mathbb{R}\). It has \(\mathcal{C}\) as a subring. Identifying each complex number \(c\) with the germ at \(+\infty\) of the function \(\mathbb{R}\to\mathbb{C}\) that takes the constant value \(c\) makes \(\mathbb{C}\) also a subring of \(\mathcal{C}[i]\) with \(\mathcal{C}[i]=\mathcal{C}+\mathcal{C}i\), justifying the notation \(\mathcal{C}[i]\). The "eventual" terminology for germs \(f\in\mathcal{C}\) (like "\(f(t)\neq 0\), eventually") is extended in the obvious way to germs \(f\in\mathcal{C}[i]\). Thus for \(f\in\mathcal{C}[i]\) we have: \(f(t)\neq 0\), eventually, if and only if \(f\in\mathcal{C}[i]^{\times}\). In particular \(\mathcal{C}^{\times}=\mathcal{C}[i]^{\times}\cap\mathcal{C}\).
Let \(\Phi\colon U\to\mathbb{C}\) be a continuous function where \(U\subseteq\mathbb{C}\), and let \(f\in\mathcal{C}[i]\) be such that \(f(t)\in U\), eventually; then \(\Phi(f)\) denotes the germ in \(\mathcal{C}[i]\) with \(\Phi(f)(t)=\Phi\big{(}f(t)\big{)}\), eventually. For example, taking \(U=\mathbb{C}\), \(\Phi(z)=\mathrm{e}^{z}\), we obtain for \(f\in\mathcal{C}[i]\) the germ \(\exp f=\mathrm{e}^{f}\in\mathcal{C}[i]\) with \((\mathrm{e}^{f})(t)=\mathrm{e}^{f(t)}\), eventually. Likewise, for \(f\in\mathcal{C}\) with \(f(t)>0\), eventually, we have the germ \(\log f\in\mathcal{C}\). For \(f\in\mathcal{C}[i]\) we have \(\overline{f}\in\mathcal{C}[i]\) with \(\overline{f}(t)=\overline{f(t)}\), eventually; the map \(f\mapsto\overline{f}\) is an automorphism of the ring \(\mathcal{C}[i]\) with \(\overline{\overline{f}}=f\) and \(f\in\mathcal{C}\Leftrightarrow\overline{f}=f\). For \(f\in\mathcal{C}[i]\) we also have \(\mathrm{Re}\,f,\mathrm{Im}\,f,|f|\in\mathcal{C}\) with \(f(t)=(\mathrm{Re}\,f)(t)+(\mathrm{Im}\,f)(t)i\) and \(|f|(t)=|f(t)|\), eventually.
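For instance, applying this with \(U=\mathbb{C}\), \(\Phi(z)=\mathrm{e}^{z}\), and \(f=ix\) gives the germ \(\mathrm{e}^{ix}\in\mathcal{C}[i]\) with
\[(\mathrm{e}^{ix})(t)\ =\ \cos t+i\sin t,\qquad\operatorname{Re}\mathrm{e}^{ix}\ =\ \cos x,\qquad|\mathrm{e}^{ix}|\ =\ 1,\qquad\overline{\mathrm{e}^{ix}}\ =\ \mathrm{e}^{-ix}.\]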
**Asymptotic relations on \(\mathcal{C}[i]\).** Although \(\mathcal{C}[i]\) is not a valued field, it will be convenient to equip \(\mathcal{C}[i]\) with the asymptotic relations \(\preccurlyeq\), \(\prec\), \(\sim\) (which are defined on any valued field [ADH, 3.1]) as follows: for \(f,g\in\mathcal{C}[i]\),
\[\begin{aligned}
f\preccurlyeq g\quad&:\Longleftrightarrow\quad\text{there exists $c\in\mathbb{R}^{>}$ such that $|f|\leqslant c|g|$},\\
f\prec g\quad&:\Longleftrightarrow\quad g\in\mathcal{C}[i]^{\times}\text{ and }\lim_{t\to\infty}f(t)/g(t)=0\\
&\phantom{:}\Longleftrightarrow\quad g\in\mathcal{C}[i]^{\times}\text{ and }|f|\leqslant c|g|\text{ for all $c\in\mathbb{R}^{>}$},\\
f\sim g\quad&:\Longleftrightarrow\quad g\in\mathcal{C}[i]^{\times}\text{ and }\lim_{t\to\infty}f(t)/g(t)=1\\
&\phantom{:}\Longleftrightarrow\quad f-g\prec g.
\end{aligned}\]
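These relations are easily checked on familiar germs: for example,
\[x\ \prec\ x^{2},\qquad x^{2}+x\ \sim\ x^{2},\qquad\sin x\ \preccurlyeq\ 1,\]
while \(\sin x\not\prec 1\) (since \(\sin t\) does not tend to \(0\)) and \(1\not\preccurlyeq\sin x\) (since \(\sin x\) has arbitrarily large zeros).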
We also use these notations for continuous functions \([a,+\infty)\to\mathbb{C}\), \(a\in\mathbb{R}\); for example, for continuous \(f\colon[a,+\infty)\to\mathbb{C}\) and \(g\colon[b,+\infty)\to\mathbb{C}\)\((a,b\in\mathbb{R})\), \(f\preccurlyeq g\) means: (germ of \(f\)) \(\preccurlyeq\) (germ of \(g\)). If \(h\in\mathcal{C}[i]\) and \(1\preccurlyeq h\), then \(h\in\mathcal{C}[i]^{\times}\). Also, for \(f,g\in\mathcal{C}[i]\) and \(h\in\mathcal{C}[i]^{\times}\) we have
\[f\preccurlyeq g\ \Leftrightarrow\ fh\preccurlyeq gh,\qquad f\prec g\ \Leftrightarrow\ fh\prec gh,\qquad f\sim g\ \Leftrightarrow\ fh\sim gh.\]
The binary relation \(\preccurlyeq\) on \(\mathcal{C}[i]\) is reflexive and transitive, and \(\sim\) is an equivalence relation on \(\mathcal{C}[i]^{\times}\). Moreover, for \(f,g,h\in\mathcal{C}[i]\) we have
\[f\prec g\ \Rightarrow\ f\preccurlyeq g,\qquad f\preccurlyeq g\prec h\ \Rightarrow\ f\prec h,\qquad f\prec g\prec h\ \Rightarrow\ f\prec h.\]
Note that \(\prec\) is a transitive binary relation on \(\mathcal{C}[i]\). For \(f,g\in\mathcal{C}[i]\) we also set
\[f\asymp g\ :\Leftrightarrow\ f\preccurlyeq g\ \&\ g\preccurlyeq f,\qquad f\succcurlyeq g\ :\Leftrightarrow\ g\preccurlyeq f,\qquad f\succ g\ :\Leftrightarrow\ g\prec f,\]
so \(\asymp\) is an equivalence relation on \(\mathcal{C}[i]\), and \(f\sim g\Rightarrow f\asymp g\). Thus for \(f,g,h\in\mathcal{C}[i]\),
\[f\preccurlyeq g\ \Rightarrow\ fh\preccurlyeq gh,\quad f\preccurlyeq h\ \&\ g\preccurlyeq h\ \Rightarrow\ f+g\preccurlyeq h,\quad f\preccurlyeq 1\ \&\ g\prec 1\ \Rightarrow\ fg\prec 1,\]
hence
\[\mathcal{C}[i]^{\preccurlyeq}\ :=\ \big{\{}f\in\mathcal{C}[i]:f\preccurlyeq 1 \big{\}}\ =\ \big{\{}f\in\mathcal{C}[i]:|f|\leqslant n\text{ for some }n\big{\}}\]
is a subalgebra of the \(\mathbb{C}\)-algebra \(\mathcal{C}[i]\) and
\[\mathcal{C}[i]^{\prec}\ :=\ \big{\{}f\in\mathcal{C}[i]:f\prec 1\big{\}}\ =\ \Big{\{}f\in\mathcal{C}[i]:\lim_{t\to\infty}f(t)=0\Big{\}}\]
is an ideal of \(\mathcal{C}[i]^{\preccurlyeq}\). The group of units of \(\mathcal{C}[i]^{\preccurlyeq}\) is
\[\mathcal{C}[i]^{\asymp}\ :=\ \big{\{}f\in\mathcal{C}[i]:f\asymp 1\big{\}}\ =\ \big{\{}f\in\mathcal{C}[i]:1/n\leqslant|f| \leqslant n\text{ for some }n\geqslant 1\big{\}}\]
and has the subgroup
\[\mathbb{C}^{\times}\big{(}1+\mathcal{C}[i]^{\prec}\big{)}\ =\ \Big{\{}f\in \mathcal{C}[i]:\lim_{t\to\infty}f(t)\in\mathbb{C}^{\times}\Big{\}}\,.\]
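This subgroup is proper: the germ \(2+\sin x\), for example, satisfies
\[1\ \leqslant\ |2+\sin t|\ \leqslant\ 3\quad\text{for all }t,\]
so \(2+\sin x\in\mathcal{C}[i]^{\asymp}\), but \(\lim_{t\to\infty}\big{(}2+\sin t\big{)}\) does not exist, so \(2+\sin x\notin\mathbb{C}^{\times}\big{(}1+\mathcal{C}[i]^{\prec}\big{)}\).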
We set \(\mathcal{C}^{\preccurlyeq}:=\mathcal{C}[i]^{\preccurlyeq}\cap\mathcal{C}\), and similarly with \(\prec,\asymp\) in place of \(\preccurlyeq\).
**Lemma 5.1.1**.: _Let \(f,g,f^{*},g^{*}\in\mathcal{C}[i]^{\times}\) with \(f\sim f^{*}\) and \(g\sim g^{*}\). Then \(1/f\sim 1/f^{*}\) and \(fg\sim f^{*}g^{*}\). Moreover, \(f\preccurlyeq g\Leftrightarrow f^{*}\preccurlyeq g^{*}\), and similarly with \(\prec\), \(\asymp\), or \(\sim\) in place of \(\preccurlyeq\)._
This follows easily from the observations above. For later reference we also note:
**Lemma 5.1.2**.: _Let \(f,g\in\mathcal{C}^{\times}\) be such that \(1\prec f\preccurlyeq g\); then \(\log|f|\preccurlyeq\log|g|\)._
Proof.: Clearly \(\log|g|\succ 1\). Take \(c\in\mathbb{R}^{>}\) such that \(|f|\leqslant c|g|\). Then \(\log|f|\leqslant\log c+\log|g|\) where \(\log c+\log|g|\sim\log|g|\); hence \(\log|f|\preccurlyeq\log|g|\).
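As an illustration of Lemma 5.1.2: \(1\prec x^{2}\preccurlyeq\mathrm{e}^{x}\) in \(\mathcal{C}^{\times}\), and indeed
\[\log\lvert x^{2}\rvert\ =\ 2\log x\ \preccurlyeq\ x\ =\ \log\lvert\mathrm{e}^{x}\rvert.\]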
**Lemma 5.1.3**.: _Let \(f,g,h\in\mathcal{C}^{\times}\) be such that \(f-g\prec h\) and \((f-h)(g-h)=0\). Then \(f\sim g\)._
Proof.: Take \(a\in\mathbb{R}\) and representatives \((a,+\infty)\to\mathbb{R}\) of \(f\), \(g\), \(h\), denoted by the same symbols, such that for each \(t>a\) we have \(f(t),g(t),h(t)\neq 0\), and \(f(t)=h(t)\) or \(g(t)=h(t)\). Let \(\varepsilon\in\mathbb{R}\) with \(0<\varepsilon\leqslant 1\) be given, and choose \(b\geqslant a\) such that \(|f(t)-g(t)|\leqslant\frac{1}{2}\varepsilon|h(t)|\) for all \(t>b\). Set \(q:=f/g\) and let \(t>b\); we claim that then \(|q(t)-1|\leqslant\varepsilon\). This is clear if \(g(t)=h(t)\), so suppose otherwise; then \(f(t)=h(t)\), and \(|1-1/q(t)|\leqslant\frac{1}{2}\varepsilon\leqslant\frac{1}{2}\). In particular, \(0<q(t)\leqslant 2\) and so \(|1-q(t)|=|1-1/q(t)|\cdot q(t)\leqslant\varepsilon\) as claimed.
**Subfields of \(\mathcal{C}\).** Let \(H\) be a _Hausdorff field_, that is, a subring of \(\mathcal{C}\) that happens to be a field; see [12]. Then \(H\) has the subfield \(H\cap\mathbb{R}\). If \(f\in H^{\times}\), then \(f(t)\neq 0\) eventually, hence either \(f(t)<0\) eventually or \(f(t)>0\) eventually. The partial ordering of \(\mathcal{G}\) from (5.1.1) thus restricts to a total ordering on \(H\) making \(H\) an ordered field in the usual sense of that term. By [32, Propositions 3.4 and 3.6]:
**Proposition 5.1.4**.: _Let \(H^{\mathrm{rc}}\) consist of the \(y\in\mathcal{C}\) with \(P(y)=0\) for some \(P(Y)\) in \(H[Y]^{\neq}\). Then \(H^{\mathrm{rc}}\) is the unique real closed Hausdorff field that extends \(H\) and is algebraic over \(H\). In particular, \(H^{\mathrm{rc}}\) is a real closure of the ordered field \(H\)._
Boshernitzan [32] assumes \(H\supseteq\mathbb{R}\) for this result, but this is not really needed in the proof, much of which already occurs in Hausdorff [94]. For the reader's convenience we include a proof of Proposition 5.1.4, after some lemmas. Let
\[P(Y)\ =\ P_{0}Y^{n}+P_{1}Y^{n-1}+\cdots+P_{n}\in H[Y]\qquad(P_{0},\ldots,P_{n}\in H),\]
and take \(a\in\mathbb{R}\) such that \(P_{0},\ldots,P_{n}\) have representatives in \(\mathcal{C}_{a}\), also denoted by \(P_{0},\ldots,P_{n}\). This yields for \(t\geqslant a\) the polynomial
\[P(t,Y)\ :=\ P_{0}(t)Y^{n}+P_{1}(t)Y^{n-1}+\cdots+P_{n}(t)\in\mathbb{R}[Y].\]
For any other choice of \(a\) and representatives of \(P_{0},\ldots,P_{n}\) in \(\mathcal{C}_{a}\) this gives for large enough \(t\) the same polynomial \(P(t,Y)\in\mathbb{R}[Y]\), so the "eventual" terminology makes sense for properties mentioning \(P(t,Y)\) with \(t\) ranging over \(\mathbb{R}\). For example, for \(y\in\mathcal{C}[i]\), we have: \(P(y)=0\Leftrightarrow y(t)\in\mathbb{C}\) is a zero of \(P(t,Y)\), eventually.
**Lemma 5.1.5**.: _Suppose \(P\) is irreducible \((\)in \(H[Y])\) of degree \(n\), so \(n\geqslant 1\). Then there are \(y_{1},\ldots,y_{m}\in\mathcal{C}\) such that \(y_{1}(t)<\cdots<y_{m}(t)\), eventually, and the distinct real zeros of the polynomial \(P(t,Y)\in\mathbb{R}[Y]\) are exactly \(y_{1}(t),\ldots,y_{m}(t)\), eventually. Thus \(P(y_{1})=\cdots=P(y_{m})=0\), and if \(n\) is odd, then \(m\geqslant 1\)._
Proof.: Take \(A,B\in H[Y]\) with \(1=AP+BP^{\prime}\). Then
\[1\ =\ A(t,Y)P(t,Y)+B(t,Y)P(t,Y)^{\prime},\quad\text{ eventually}.\]
Hence \(P(t,Y)\in\mathbb{R}[Y]\) has exactly \(n\) distinct complex zeros, eventually. Now use "continuity of roots" as used for example in [58, Chapter II, (2.4)].
**Lemma 5.1.6**.: _Let \(P,Q\in H[Y]\) be monic and irreducible with \(P\neq Q\), and let \(y,z\in\mathcal{C}[i]\), \(P(y)=Q(z)=0\). Then \(y(t)\neq z(t)\), eventually. In particular, if \(y,z\in\mathcal{C}\), then either \(y(t)<z(t)\) eventually, or \(y(t)>z(t)\) eventually._
Proof.: Take \(A,B\in H[Y]\) such that \(1=AP+BQ\). Then
\[1\ =\ A(t,Y)P(t,Y)+B(t,Y)Q(t,Y),\quad\text{ eventually}.\]
Hence \(Q(t,y(t))\neq 0\), eventually, and thus \(y(t)\neq z(t)\), eventually.
**Corollary 5.1.7**.: _Let \(y\in\mathcal{C}\) and \(P(y)=0\), \(P\in H[Y]^{\neq}\). Then \(Q(y)=0\) for some monic irreducible \(Q\in H[Y]\)._
Proof.: We have \(P=hQ_{1}^{e_{1}}\cdots Q_{n}^{e_{n}}\) where \(h\in H^{\neq}\), \(e_{1},\ldots,e_{n}\in\mathbb{N}^{\geqslant 1}\), and \(Q_{1},\ldots,Q_{n}\) in \(H[Y]\) are distinct and monic irreducible. Lemmas 5.1.5 and 5.1.6 now yield germs \(y_{1},\ldots,y_{m}\in\mathcal{C}\) such that, eventually, \(y_{1}(t)<\cdots<y_{m}(t)\) are the real zeros of the polynomials \(Q_{1}(t,Y),\ldots,Q_{n}(t,Y)\in\mathbb{R}[Y]\), and thus of \(P(t,Y)\), and such that for each \(i\in\{1,\ldots,m\}\) there is a unique \(j\in\{1,\ldots,n\}\) with \(Q_{j}\big{(}t,y_{i}(t)\big{)}=0\), eventually. Continuity arguments and the connectedness of the half-lines \([a,+\infty)\) yield a single \(i\) with \(y_{i}(t)=y(t)\), eventually, and thus \(Q_{j}(y)=0\) for some \(j\).
Proof of Proposition 5.1.4.: Given \(y\in H^{\mathrm{rc}}\) we have by Corollary 5.1.7 a monic irreducible \(Q\in H[Y]\) with \(Q(y)=0\), so \(H[y]\subseteq H^{\mathrm{rc}}\) is a Hausdorff field and algebraic over \(H\). Since "algebraic over" is transitive, it follows that \(H^{\mathrm{rc}}\) is a Hausdorff field and algebraic over \(H\). Such transitivity also gives \((H^{\mathrm{rc}})^{\mathrm{rc}}=H^{\mathrm{rc}}\). Obviously, any algebraic Hausdorff field extension of \(H\) is contained in \(H^{\mathrm{rc}}\). So it only remains to show that the ordered field \(H^{\mathrm{rc}}\) is real closed. First, if \(y\in H^{\mathrm{rc}}\) and \(y\geqslant 0\), then clearly \(\sqrt{y}\in\mathcal{C}\) is algebraic over \(H^{\mathrm{rc}}\), and thus in it. Next, let \(P(Y)\in H^{\mathrm{rc}}[Y]\) have odd degree. Then \(P\) has a zero in \(H^{\mathrm{rc}}\): this follows from Lemma 5.1.5 by considering an irreducible factor of \(P\) in \(H^{\mathrm{rc}}[Y]\) of odd degree.
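To illustrate Proposition 5.1.4, consider the Hausdorff field \(H=\mathbb{R}(x)\). The germ \(y=\sqrt{1+x^{2}}\in\mathcal{C}\) satisfies
\[P(y)\ =\ 0\qquad\text{for}\qquad P(Y)\ =\ Y^{2}-(1+x^{2})\ \in\ H[Y]^{\neq},\]
so \(y\in H^{\mathrm{rc}}\), and likewise \(-y\in H^{\mathrm{rc}}\); here \(-y<y\) are the distinct real zeros of \(P(t,Y)\), eventually.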
We record the following useful consequence of Corollary 5.1.7 and its proof:
**Corollary 5.1.8**.: _Let \(P\in H[Y]^{\neq}\) and let \(y_{1},\ldots,y_{m}\) be the distinct zeros of \(P\) in \(H^{\rm rc}\). Then \(y_{1}(t),\ldots,y_{m}(t)\) are the distinct real zeros of \(P(t,Y)\), eventually._
Note that \(H[{\rm i}]\) is a subfield of \({\mathcal{C}}[{\rm i}]\), and by Proposition 5.1.4 and [ADH, 3.5.4], the subfield \(H^{\rm rc}[i]\) of \({\mathcal{C}}[{\rm i}]\) is an algebraic closure of the field \(H\). If \(f\in{\mathcal{C}}[{\rm i}]\) is integral over \(H\), then so is \(\overline{f}\), and hence so are the elements \({\rm Re}\,f=\frac{1}{2}(f+\overline{f})\) and \({\rm Im}\,f=\frac{1}{2{\rm i}}(f-\overline{f})\) of \({\mathcal{C}}\) [ADH, 1.3.2]. Thus \(H^{\rm rc}[{\rm i}]\) consists of the \(y\in{\mathcal{C}}[{\rm i}]\) with \(P(y)=0\) for some \(P(Y)\in H[Y]^{\neq}\).
The ordered field \(H\) has a convex subring
\[{\mathcal{O}}\ =\ \bigl{\{}f\in H:\ |f|\leqslant n\ \mbox{for some}\ n\bigr{\}}\ =\ {\mathcal{C}}^{\preccurlyeq}\cap H,\]
which is a valuation ring of \(H\), and we consider \(H\) accordingly as a valued ordered field. The maximal ideal of \(\mathcal{O}\) is \(\mathfrak{o}=\mathcal{C}^{\prec}\cap H\). The residue morphism \(\mathcal{O}\to{\rm res}(H)\) restricts to an ordered field embedding \(H\cap\mathbb{R}\to{\rm res}(H)\), which is bijective if \(\mathbb{R}\subseteq H\). Restricting the binary relations \(\preccurlyeq\), \(\prec\), \(\sim\) from the previous subsection to \(H\) gives exactly the asymptotic relations \(\preccurlyeq\), \(\prec\), \(\sim\) on \(H\) that it comes equipped with as a valued field. By [ADH, 3.5.15],
\[{\mathcal{O}}+{\mathcal{O}}{\rm i}\ =\ \bigl{\{}f\in H[{\rm i}]:|f|\leqslant n \ \mbox{for some}\ n\bigr{\}}\ =\ {\mathcal{C}}[{\rm i}]^{\preccurlyeq}\cap H[{\rm i}]\]
is the unique valuation ring of \(H[{\rm i}]\) whose intersection with \(H\) is \(\mathcal{O}\). In this way we consider \(H[{\rm i}]\) as a valued field extension of \(H\). The maximal ideal of \(\mathcal{O}+\mathcal{O}{\rm i}\) is \(\mathfrak{o}+\mathfrak{o}{\rm i}=\mathcal{C}[{\rm i}]^{\prec}\cap H[{\rm i}]\). The asymptotic relations \(\preccurlyeq\), \(\prec\), \(\sim\) on \(\mathcal{C}[{\rm i}]\) restricted to \(H[{\rm i}]\) are exactly the asymptotic relations \(\preccurlyeq\), \(\prec\), \(\sim\) on \(H[{\rm i}]\) that \(H[{\rm i}]\) has as a valued field. Moreover, \(f\asymp|f|\) in \(\mathcal{C}[{\rm i}]\) for all \(f\in H[{\rm i}]\).
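For the Hausdorff field \(H=\mathbb{R}(x)\), for example, \(\mathcal{O}\) consists of the germs of rational functions with a finite limit at \(+\infty\):
\[\frac{x+1}{x}\ \in\ \mathcal{O},\qquad\frac{x+1}{x}-1\ =\ \frac{1}{x}\ \in\ \mathfrak{o},\qquad x\ \notin\ \mathcal{O},\]
and \(f\mapsto\lim_{t\to\infty}f(t)\) induces an isomorphism \({\rm res}(H)\cong\mathbb{R}\).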
**Composition.** Let \(g\in{\mathcal{C}}\), and suppose that \(\lim_{t\to+\infty}g(t)=+\infty\); equivalently, \(g\geqslant 0\) and \(g\succ 1\). Then the composition operation
\[f\mapsto f\circ g\ :\ {\mathcal{C}}[{\rm i}]\to{\mathcal{C}}[{\rm i}],\qquad(f \circ g)(t)\ :=\ f\bigl{(}g(t)\bigr{)}\ \mbox{ eventually},\]
is an injective endomorphism of the ring \({\mathcal{C}}[{\rm i}]\) that is the identity on the subring \({\mathbb{C}}\). For \(f_{1},f_{2}\in{\mathcal{C}}[{\rm i}]\) we have: \(f_{1}\preccurlyeq f_{2}\Leftrightarrow f_{1}\circ g\preccurlyeq f_{2}\circ g\), and likewise with \(\prec\), \(\sim\). This endomorphism of \({\mathcal{C}}[{\rm i}]\) commutes with the automorphism \(f\mapsto\overline{f}\) of \({\mathcal{C}}[{\rm i}]\), and maps each subfield \(K\) of \({\mathcal{C}}[{\rm i}]\) isomorphically onto the subfield \(K\circ g=\{f\circ g:f\in K\}\) of \({\mathcal{C}}[{\rm i}]\). Note that if the subfield \(K\) of \({\mathcal{C}}[{\rm i}]\) contains \(x\), then \(K\circ g\) contains \(g\). Moreover, \(f\mapsto f\circ g\) restricts to an endomorphism of the subring \({\mathcal{C}}\) of \({\mathcal{C}}[{\rm i}]\) such that if \(f_{1},f_{2}\in{\mathcal{C}}\) and \(f_{1}\leqslant f_{2}\), then \(f_{1}\circ g\leqslant f_{2}\circ g\). This endomorphism of \({\mathcal{C}}\) maps each Hausdorff field \(H\) isomorphically (as an ordered field) onto the Hausdorff field \(H\circ g\).
Occasionally it is convenient to extend the composition operation on \({\mathcal{C}}\) to the ring \({\mathcal{G}}\) of all (not necessarily continuous) germs. Let \(g\in{\mathcal{G}}\) with \(\lim_{t\to+\infty}g(t)=+\infty\). Then for \(f\in{\mathcal{G}}\) we have the germ \(f\circ g\in{\mathcal{G}}\) with
\[(f\circ g)(t)\ :=\ f\bigl{(}g(t)\bigr{)}\ \mbox{ eventually}.\]
The map \(f\mapsto f\circ g\) is an endomorphism of the \({\mathbb{R}}\)-algebra \({\mathcal{G}}\). Let \(f_{1},f_{2}\in{\mathcal{G}}\). Then \(f_{1}\leqslant f_{2}\Rightarrow f_{1}\circ g\leqslant f_{2}\circ g\), and likewise with \(\preccurlyeq\) and \(\prec\) instead of \(\leqslant\), where we
extend the binary relations \(\preccurlyeq\), \(\prec\) from \(\mathcal{C}\) to \(\mathcal{G}\) in the natural way:
\[\begin{array}{ll}f_{1}\preccurlyeq f_{2}&\mathrel{\mathop{:}}\Longleftrightarrow \quad\text{there exists $c\in\mathbb{R}^{>}$ such that $|f_{1}(t)|\leqslant c|f_{2}(t)|$, eventually;}\\ f_{1}\prec f_{2}&\mathrel{\mathop{:}}\Longleftrightarrow\quad f_{2}\in \mathcal{G}^{\times}\text{ and }\lim_{t\to\infty}f_{1}(t)/f_{2}(t)=0.\end{array}\]
**Compositional inversion.** Suppose that \(g\in\mathcal{C}\) is eventually strictly increasing such that \(\lim_{t\to+\infty}g(t)=+\infty\). Then its compositional inverse \(g^{\mathrm{inv}}\in\mathcal{C}\) is given by \(g^{\mathrm{inv}}\big{(}g(t)\big{)}=t\), eventually, and \(g^{\mathrm{inv}}\) is also eventually strictly increasing with \(\lim_{t\to+\infty}g^{\mathrm{inv}}(t)=+\infty\). Then \(f\mapsto f\circ g\) is an automorphism of the ring \(\mathcal{C}[i]\), with inverse \(f\mapsto f\circ g^{\mathrm{inv}}\). In particular, \(g\circ g^{\mathrm{inv}}=g^{\mathrm{inv}}\circ g=x\). Moreover, \(f\mapsto f\circ g\) restricts to an automorphism of \(\mathcal{C}\), and if \(h\in\mathcal{C}\) is eventually strictly increasing with \(g\leqslant h\), then \(h^{\mathrm{inv}}\leqslant g^{\mathrm{inv}}\).
Let now \(f,g\in\mathcal{C}\) with \(f,g\geqslant 0\), \(f,g\succ 1\). It is not true in general that if \(f\), \(g\) are eventually strictly increasing and \(f\sim g\), then \(f^{\mathrm{inv}}\sim g^{\mathrm{inv}}\). (Counterexample: \(f=\log x\), \(g=\log 2x\).) Corollary 5.1.10 below gives a useful condition on \(f\), \(g\) under which this implication does hold. In addition, let \(h\in\mathcal{C}^{\times}\) be eventually monotone and continuously differentiable with \(h^{\prime}/h\preccurlyeq 1/x\).
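To see the parenthetical counterexample explicitly: \(\log 2x=\log x+\log 2\), so \(f\sim g\), and both germs are eventually strictly increasing, but
\[f^{\mathrm{inv}}\ =\ \mathrm{e}^{x},\qquad g^{\mathrm{inv}}\ =\ \tfrac{1}{2}\,\mathrm{e}^{x},\qquad f^{\mathrm{inv}}/g^{\mathrm{inv}}\ =\ 2,\]
so \(f^{\mathrm{inv}}\not\sim g^{\mathrm{inv}}\). Note also that no \(h\) as above can satisfy \(h\sim\mathrm{e}^{x}\): the condition \(h^{\prime}/h\preccurlyeq 1/x\) gives \(|h(t)|\leqslant|h(t_{0})|\,(t/t_{0})^{c}\) for suitable \(c,t_{0}\in\mathbb{R}^{>}\), so \(h\) grows at most polynomially.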
**Lemma 5.1.9** (Entringer [65]).: _Suppose \(f\sim g\). Then \(h\circ f\sim h\circ g\)._
Proof.: Replacing \(h\) by \(-h\) if necessary we arrange that \(h\geqslant 0\), so \(h(t)>0\) eventually. Set \(p:=\min(f,g)\in\mathcal{C}\) and \(q:=\max(f,g)\in\mathcal{C}\). Then \(0\leqslant p\succ 1\) and \(f-g\prec p\). The Mean Value Theorem gives \(\xi\in\mathcal{G}\) such that \(p\leqslant\xi\leqslant q\) (so \(0\leqslant\xi\succ 1\)) and
\[h\circ f-h\circ g\ =\ (h^{\prime}\circ\xi)\cdot(f-g).\]
From \(h^{\prime}/h\preccurlyeq 1/x\) we obtain \(h^{\prime}\circ\xi\preccurlyeq(h\circ\xi)/\xi\preccurlyeq(h\circ\xi)/p\), hence \(h\circ f-h\circ g\prec h\circ\xi\). Set \(u:=\max(h\circ p,h\circ q)\). Then \(0\leqslant h\circ\xi\leqslant u\), hence \(h\circ f-h\circ g\prec u\). Also \((u-h\circ f)(u-h\circ g)=0\), so Lemma 5.1.3 yields \(h\circ f\sim h\circ g\).
**Corollary 5.1.10**.: _Suppose \(f\), \(g\) are eventually strictly increasing with \(f\sim g\) and \(f^{\mathrm{inv}}\sim h\). Then \(g^{\mathrm{inv}}\sim h\)._
Proof.: By the lemma above we have \(h\circ f\sim h\circ g\), and from \(f^{\mathrm{inv}}\sim h\) we obtain \(x=f^{\mathrm{inv}}\circ f\sim h\circ f\). Therefore \(g^{\mathrm{inv}}\circ g=x\sim h\circ g\) and thus \(g^{\mathrm{inv}}\sim h\).
**Corollary 5.1.11**.: _If \(g\), \(h\) are eventually strictly increasing, \(0\leqslant h\succ 1\), and \(g\sim h^{\mathrm{inv}}\), then \(g^{\mathrm{inv}}\sim h\)._
Proof.: Take \(f=h^{\mathrm{inv}}\) in Corollary 5.1.10.
Sometimes we prefer "big O" and "little o" notation instead of \(\preccurlyeq\) and \(\prec\): for \(\phi,\xi,\theta\in\mathcal{C}[i]\), \(\phi=\xi+O(\theta):\Leftrightarrow\phi-\xi\preccurlyeq\theta\) and \(\phi=\xi+o(\theta):\Leftrightarrow\phi-\xi\prec\theta\). For use in Section 7.5 we note:
**Corollary 5.1.12**.: _Suppose \(g=x+cx^{-1}+o(x^{-1})\), \(c\in\mathbb{R}\), and \(g\) is eventually strictly increasing. Then \(g^{\mathrm{inv}}=x-cx^{-1}+o(x^{-1})\)._
Proof.: We have \(g^{\mathrm{inv}}\sim x\) by Corollary 5.1.11 (for \(h=x\)), so \(g^{\mathrm{inv}}=x(1+\varepsilon)\) where \(\varepsilon\in\mathcal{C}\), \(\varepsilon\prec 1\). Now \((1+\varepsilon)^{-1}=1+\delta\) with \(\delta\in\mathcal{C}\), \(\delta\prec 1\). Then
\[x\ =\ g\circ g^{\mathrm{inv}}\ =\ x(1+\varepsilon)+cx^{-1}(1+\delta)+o(x^{-1})\]
and thus \(\varepsilon=-cx^{-2}(1+\delta)+o(x^{-2})=-cx^{-2}+o(x^{-2})\). This yields \(g^{\mathrm{inv}}=x(1+\varepsilon)=x-cx^{-1}+o(x^{-1})\), as claimed.
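For instance (with \(c=1\) and vanishing \(o(x^{-1})\)-term), \(g=x+x^{-1}\) is eventually strictly increasing, and
\[g\circ\big(x-x^{-1}\big)\ =\ x-x^{-1}+\frac{1}{x-x^{-1}}\ =\ x+\frac{x^{-3}}{1-x^{-2}}\ =\ x+o(x^{-1}),\]
consistent with \(g^{\mathrm{inv}}=x-x^{-1}+o(x^{-1})\).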
**Extending ordered fields inside an ambient partially ordered ring.** Let \(R\) be a commutative ring with \(1\neq 0\), equipped with a translation-invariant partial ordering \(\leqslant\) such that \(r^{2}\geqslant 0\) for all \(r\in R\), and \(rs\geqslant 0\) for all \(r,s\in R\) with \(r,s\geqslant 0\). It follows that for \(a,b,r\in R\) we have:
1. if \(a\leqslant b\) and \(r\geqslant 0\), then \(ar\leqslant br\);
2. if \(a\) is a unit and \(a>0\), then \(a^{-1}=a\cdot(a^{-1})^{2}>0\);
3. if \(a\), \(b\) are units and \(0<a\leqslant b\), then \(0<b^{-1}\leqslant a^{-1}\).
Relevant cases: \(R=\mathcal{G}\) and \(R=\mathcal{C}\), with partial ordering given by (5.1.1).
An _ordered subring of \(R\)_ is a subring of \(R\) that is totally ordered by the partial ordering of \(R\). An _ordered subfield of \(R\)_ is an ordered subring \(H\) of \(R\) which happens to be a field; then \(H\) equipped with the induced ordering is indeed an ordered field, in the usual sense of that term. (Thus any Hausdorff field is an ordered subfield of the partially ordered ring \(\mathcal{C}\).) We identify \(\mathbb{Z}\) with its image in \(R\) via the unique ring embedding \(\mathbb{Z}\to R\), and this makes \(\mathbb{Z}\) with its usual ordering into an ordered subring of \(R\).
**Lemma 5.1.13**.: _Assume \(D\) is an ordered subring of \(R\) and every nonzero element of \(D\) is a unit of \(R\). Then \(D\) generates an ordered subfield \(\operatorname{Frac}D\) of \(R\)._
Proof.: It is clear that \(D\) generates a subfield \(\operatorname{Frac}D\) of \(R\). For \(a\in D\), \(a>0\), we have \(a^{-1}>0\). It follows that \(\operatorname{Frac}D\) is totally ordered.
Thus if every \(n\geqslant 1\) is a unit of \(R\), then we may identify \(\mathbb{Q}\) with its image in \(R\) via the unique ring embedding \(\mathbb{Q}\to R\), making \(\mathbb{Q}\) into an ordered subfield of \(R\).
**Lemma 5.1.14**.: _Suppose \(H\) is an ordered subfield of \(R\), all \(g\in R\) with \(g>H\) are units of \(R\), and \(H<f\in R\). Then we have an ordered subfield \(H(f)\) of \(R\)._
Proof.: For \(P(Y)\in H[Y]\) of degree \(d\geqslant 1\) with leading coefficient \(a>0\) we have \(P(f)=af^{d}(1+\varepsilon)\) with \(-1/n<\varepsilon<1/n\) for all \(n\geqslant 1\), in particular, \(P(f)>H\) is a unit of \(R\). It remains to appeal to Lemma 5.1.13.
**Lemma 5.1.15**.: _Let \(H\) be a real closed ordered subfield of \(R\). Let \(A\) be a nonempty downward closed subset of \(H\) such that \(A\) has no largest element and \(B:=H\setminus A\) is nonempty and has no least element. Let \(f\in R\) be such that \(A<f<B\). Then the subring \(H[f]\) of \(R\) has the following properties:_
1. \(H[f]\) _is a domain;_
2. \(H[f]\) _is an ordered subring of_ \(R\)_;_
3. \(H\) _is cofinal in_ \(H[f]\)_;_
4. _for all_ \(g\in H[f]\setminus H\) _and_ \(a\in H\)_, if_ \(a<g\)_, then_ \(a<b<g\) _for some_ \(b\in H\)_, and if_ \(g<a\)_, then_ \(g<b<a\) _for some_ \(b\in H\)_._
Proof.: Let \(P\in H[Y]^{\neq}\); to obtain (i) and (ii) it suffices to show that then \(P(f)<0\) or \(P(f)>0\). We have
\[P(Y)\ =\ c\,Q(Y)\,(Y-a_{1})\cdots(Y-a_{n})\]
where \(c\in H^{\neq}\), \(Q(Y)\) is a product of monic quadratic irreducibles in \(H[Y]\), and \(a_{1},\ldots,a_{n}\in H\). This gives \(\delta\in H^{>}\) such that \(Q(r)\geqslant\delta\) for all \(r\in R\). Assume \(c>0\). (The case \(c<0\) is handled similarly.) We can arrange that \(m\leqslant n\) is such that \(a_{i}\in A\) for \(1\leqslant i\leqslant m\) and \(a_{j}\in B\) for \(m<j\leqslant n\). Take \(\varepsilon>0\) in \(H\) such that \(a_{i}+\varepsilon\leqslant f\) for \(1\leqslant i\leqslant m\) and \(f\leqslant a_{j}-\varepsilon\) for \(m<j\leqslant n\). Then
\[P(f)\ =\ c\,Q(f)\,(f-a_{1})\cdots(f-a_{m})(f-a_{m+1})\cdots(f-a_{n}),\]
and \((f-a_{1})\cdots(f-a_{m})\geqslant\varepsilon^{m}\). If \(n-m\) is even, then \((f-a_{m+1})\cdots(f-a_{n})\geqslant\varepsilon^{n-m}\), so \(P(f)\geqslant c\delta\varepsilon^{n}>0\). If \(n-m\) is odd, then \((f-a_{m+1})\cdots(f-a_{n})\leqslant-\varepsilon^{n-m}\), so \(P(f)\leqslant-c\delta\varepsilon^{n}<0\). These estimates also yield (iii) and (iv).
**Lemma 5.1.16**.: _With \(H\), \(A\), \(f\) as in Lemma 5.1.15, suppose all \(g\in R\) with \(g\geqslant 1\) are units of \(R\). Then we have an ordered subfield \(H(f)\) of \(R\) such that_ (iii) _and_ (iv) _of Lemma 5.1.15 go through for \(H(f)\) in place of \(H[f]\)._
Proof.: Note that if \(g\in R\) and \(g\geqslant\delta\in H^{>}\), then \(g\delta^{-1}\geqslant 1\), so \(g\) is a unit of \(R\) and \(0<g^{-1}\leqslant\delta^{-1}\). For \(Q\in H[Y]^{\neq}\) with \(Q(f)>0\) we can take \(\delta\in H^{>}\) such that \(Q(f)\geqslant\delta\), so \(Q(f)\in R^{\times}\) and \(0<Q(f)^{-1}\leqslant\delta^{-1}\). Thus we have an ordered subfield \(H(f)\) of \(R\) by Lemma 5.1.13, and the rest now follows easily.
**Adjoining pseudolimits and increasing the value group.** Let \(H\) be a real closed Hausdorff field and view \(H\) as an ordered valued field as before. Let \((a_{\rho})\) be a strictly increasing divergent pc-sequence in \(H\). Set
\[A\ :=\ \{a\in H:\ a<a_{\rho}\ \text{for some}\ \rho\},\qquad B\ :=\ \{b\in H:\ b>a_{\rho}\ \text{for all}\ \rho\},\]
so \(A\) is nonempty and downward closed without a largest element. Moreover, \(B=H\setminus A\) is nonempty and has no least element, since a least element of \(B\) would be a limit and thus a pseudolimit of \((a_{\rho})\). Let \(f\in\mathcal{C}\) satisfy \(A<f<B\). Then by Lemma 5.1.16 for \(R=\mathcal{C}\) we have an ordered subfield \(H(f)\) of \(\mathcal{C}\), and:
**Lemma 5.1.17**.: \(H(f)\) _is an immediate valued field extension of \(H\) with \(a_{\rho}\rightsquigarrow f\)._
Proof.: We can assume that \(v(a_{\tau}-a_{\sigma})>v(a_{\sigma}-a_{\rho})\) for all indices \(\tau>\sigma>\rho\). Set \(d_{\rho}:=a_{s(\rho)}-a_{\rho}\) (\(s(\rho):=\) successor of \(\rho\)). Then \(a_{\rho}+2d_{\rho}\in B\) for all indices \(\rho\); see the discussion preceding [1, 2.4.2]. It then follows from that lemma that \(a_{\rho}\rightsquigarrow f\). Now \((a_{\rho})\) is a divergent pc-sequence in the henselian valued field \(H\), so it is of transcendental type over \(H\), and thus \(H(f)\) is an immediate extension of \(H\).
**Lemma 5.1.18**.: _Let \(H\) be a Hausdorff field with divisible value group \(\Gamma:=v(H^{\times})\). Let \(P\) be a nonempty upward closed subset of \(\Gamma\), and let \(f\in\mathcal{C}\) be such that \(a<f\) for all \(a\in H^{>}\) with \(va\in P\), and \(f<b\) for all \(b\in H^{>}\) with \(vb<P\). Then \(f\) generates a Hausdorff field \(H(f)\) with \(P>vf>Q\), \(Q:=\Gamma\setminus P\)._
Proof.: For any positive \(a\in H^{\text{rc}}\) there is \(b\in H^{>}\) with \(a\asymp b\) and \(a<b\), and also an element \(b\in H^{>}\) with \(a\asymp b\) and \(a>b\). Thus by Proposition 5.1.4 we can replace \(H\) by \(H^{\text{rc}}\) and arrange in this way that \(H\) is real closed. Set
\[A\ :=\ \{a\in H:\ a\leqslant 0\ \text{or}\ va\in P\},\qquad B:=H\setminus A.\]
Then we are in the situation of Lemma 5.1.15 for \(R=\mathcal{C}\), so by that lemma and Lemma 5.1.16 we have a Hausdorff field \(H(f)\). Clearly then \(P>vf>Q\).
**Non-oscillation.** A germ \(f\in\mathcal{C}\) is said to **oscillate** if \(f(t)=0\) for arbitrarily large \(t\) and \(f(t)\neq 0\) for arbitrarily large \(t\). Thus for \(f,g\in\mathcal{C}\),
\[f-g\ \text{is non-oscillating}\quad\iff\ \ \begin{cases}&\text{either}\ f(t)<g(t) \ \text{eventually, or}\ f=g,\\ &\text{or}\ f(t)>g(t)\ \text{eventually.}\end{cases}\]
In particular, \(f\in\mathcal{C}\) does not oscillate iff \(f=0\) or \(f\in\mathcal{C}^{\times}\). If \(g\in\mathcal{C}\) and \(g(t)\to+\infty\) as \(t\to+\infty\), then \(f\in\mathcal{C}\) oscillates iff \(f\circ g\) oscillates.
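_Example_.: The germs of \(t\mapsto\sin t\) and \(t\mapsto\mathrm{e}^{-t}\sin t\) oscillate; the germ of \(t\mapsto\mathrm{e}^{-t}\) lies in \(\mathcal{C}^{\times}\) and so does not oscillate. Likewise, with \(g=x^{2}\), the germ of \(\sin t\) oscillates iff that of \(\sin(t^{2})\) does.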
**Lemma 5.1.19**.: _Let \(f\in\mathcal{C}\) be such that for every \(q\in\mathbb{Q}\) the germ \(f-q\) is non-oscillating. Then \(\lim\limits_{t\to\infty}f(t)\) exists in \(\mathbb{R}\cup\{-\infty,+\infty\}\)._
Proof.: Set \(A\ :=\ \{q\in\mathbb{Q}:f(t)>q\text{ eventually}\}\). If \(A=\emptyset\), then \(\lim_{t\to\infty}f(t)=-\infty\), whereas if \(A=\mathbb{Q}\), then \(\lim_{t\to\infty}f(t)=+\infty\). If \(A\neq\emptyset,\mathbb{Q}\), then for \(\ell:=\sup A\in\mathbb{R}\) we have \(\lim_{t\to\infty}f(t)=\ell\).
**Lemma 5.1.20**.: _Let \(H\) be a real closed Hausdorff field and \(f\in\mathcal{C}\). Then \(f\) lies in a Hausdorff field extension of \(H\) iff \(f-h\) is non-oscillating for all \(h\in H\)._
Proof.: The forward direction is clear. For the converse, suppose \(f-h\) is non-oscillating for all \(h\in H\). We assume \(f\notin H\), so \(h<f\) or \(h>f\) for all \(h\in H\). Set \(A:=\{h\in H:h<f\}\), a downward closed subset of \(H\). If \(A=H\), then we are done by Lemma 5.1.14 applied to \(R=\mathcal{C}\); if \(A=\emptyset\) then we apply the same lemma to \(R=\mathcal{C}\) and \(-f\) in place of \(f\). Suppose \(A\neq\emptyset,H\). If \(A\) has a largest element \(a\), then we replace \(f\) by \(f-a\) to arrange \(0<f(t)<h(t)\) eventually, for all \(h\in H^{>}\), and then Lemma 5.1.14 applied to \(R=\mathcal{C}\), \(f^{-1}\) in place of \(f\) yields that \(f^{-1}\), and hence also \(f\), lies in a Hausdorff field extension of \(H\). The case that \(B:=H\setminus A\) has a least element is handled in the same way. If \(A\) has no largest element and \(B\) has no least element, then we are done by Lemma 5.1.16.
### 5.2. Linear Differential Equations
In this section we fix notations and conventions concerning differentiable functions and summarize well-known results on linear differential equations as needed later, focusing on the case of order 2. We also discuss disconjugate linear differential equations, mainly following [52, Chapter 3], as well as work by Lyapunov and Perron on "bounded" matrix differential equations; this material is only used in Section 7.4 on applications and can be skipped upon first reading.
**Differentiable functions.** Let \(r\) range over \(\mathbb{N}\cup\{\infty\}\), and let \(U\) be a nonempty open subset of \(\mathbb{R}\). Then \(\mathcal{C}^{r}(U)\) denotes the \(\mathbb{R}\)-algebra of \(r\)-times continuously differentiable functions \(U\to\mathbb{R}\), with the usual pointwise defined algebra operations. (We use "\(\mathcal{C}\)" instead of "\(C\)" since \(C\) will often denote the constant field of a differential field.) For \(r=0\) this is the \(\mathbb{R}\)-algebra \(\mathcal{C}(U)\) of continuous real-valued functions on \(U\), so
\[\mathcal{C}(U)\ =\ \mathcal{C}^{0}(U)\ \supseteq\ \mathcal{C}^{1}(U)\ \supseteq\ \mathcal{C}^{2}(U)\ \supseteq\ \cdots\ \supseteq\ \mathcal{C}^{\infty}(U).\]
For \(r\geqslant 1\) we have the derivation \(f\mapsto f^{\prime}\colon\mathcal{C}^{r}(U)\to\mathcal{C}^{r-1}(U)\) (with \(\infty-1:=\infty\)). This makes \(\mathcal{C}^{\infty}(U)\) a differential ring, with its subalgebra \(\mathcal{C}^{\omega}(U)\) of real-analytic functions \(U\to\mathbb{R}\) as a differential subring. The algebra operations on the algebras below are also defined pointwise. Note that
\[\mathcal{C}^{r}(U)^{\times}\ =\ \big{\{}f\in\mathcal{C}^{r}(U):f(t)\neq 0\text{ for all }t\in U\big{\}},\]
also for \(\omega\) in place of \(r\) [57, (9.2), ex. 4].
Let \(a\) range over \(\mathbb{R}\). Then \(\mathcal{C}^{r}_{a}\) denotes the \(\mathbb{R}\)-algebra of functions \([a,+\infty)\to\mathbb{R}\) that extend to a function in \(\mathcal{C}^{r}(U)\) for some open \(U\supseteq[a,+\infty)\). Thus \(\mathcal{C}^{0}_{a}\) (also denoted by \(\mathcal{C}_{a}\)) is the \(\mathbb{R}\)-algebra of real-valued continuous functions on \([a,+\infty)\), and
\[\mathcal{C}^{0}_{a}\ \supseteq\ \mathcal{C}^{1}_{a}\ \supseteq\ \mathcal{C}^{2}_{a}\ \supseteq\ \cdots\ \supseteq\mathcal{C}^{\infty}_{a}.\]
We have the subalgebra \(\mathcal{C}^{\omega}_{a}\) of \(\mathcal{C}^{\infty}_{a}\), consisting of the functions \([a,+\infty)\to\mathbb{R}\) that extend to a real-analytic function \(U\to\mathbb{R}\) for some open \(U\supseteq[a,+\infty)\). For \(f\in\mathcal{C}^{1}_{a}\) and \(g\in\mathcal{C}^{1}(U)\) extending \(f\) with open \(U\subseteq\mathbb{R}\) containing \([a,+\infty)\), the
restriction of \(g^{\prime}\) to \([a,+\infty)\) depends only on \(f\), not on \(g\), so we may define \(f^{\prime}:=g^{\prime}|_{[a,+\infty)}\in\mathcal{C}_{a}\). For \(r\geqslant 1\) this gives the derivation \(f\mapsto f^{\prime}\colon\mathcal{C}_{a}^{r}\to\mathcal{C}_{a}^{r-1}\). This makes \(\mathcal{C}_{a}^{\infty}\) a differential ring with \(\mathcal{C}_{a}^{\omega}\) as a differential subring.
For each of the algebras \(A\) above we also consider its complexification \(A[i]\) which consists by definition of the \(\mathbb{C}\)-valued functions \(f=g+hi\) with \(g,h\in A\), so \(g=\operatorname{Re}f\) and \(h=\operatorname{Im}f\) for such \(f\). We consider \(A[i]\) as a \(\mathbb{C}\)-algebra with respect to the natural pointwise defined algebra operations. We identify each complex number with the corresponding constant function to make \(\mathbb{C}\) a subfield of \(A[i]\) and \(\mathbb{R}\) a subfield of \(A\). (This justifies the notation \(A[i]\).) We have \(\mathcal{C}_{a}^{r}[i]^{\times}=\mathcal{C}_{a}[i]^{\times}\cap\mathcal{C}_{a }^{r}[i]\) and \((\mathcal{C}_{a}^{r})^{\times}=\mathcal{C}_{a}^{\times}\cap\mathcal{C}_{a}^{r}\), and likewise with \(r\) replaced by \(\omega\).
For \(r\geqslant 1\) we extend \(g\mapsto g^{\prime}\colon\mathcal{C}_{a}^{r}\to\mathcal{C}_{a}^{r-1}\) to the derivation
\[g+hi\mapsto g^{\prime}+h^{\prime}i\ :\ \mathcal{C}_{a}^{r}[i]\to\mathcal{C}_{a}^{r-1}[i]\qquad(g,h\in\mathcal{C}_{a}^{r}),\]
which for \(r=\infty\) makes \(\mathcal{C}_{a}^{\infty}\) a differential subring of \(\mathcal{C}_{a}^{\infty}[i]\). We shall use the map
\[f\mapsto f^{\dagger}:=f^{\prime}/f\,:\ \mathcal{C}_{a}^{1}[i]^{\times}=\big{(} \mathcal{C}_{a}^{1}[i]\big{)}^{\times}\to\mathcal{C}_{a}^{0}[i],\]
with
\[(fg)^{\dagger}=f^{\dagger}+g^{\dagger}\qquad\text{ for }f,g\in\mathcal{C}_{a}^{ 1}[i]^{\times},\]
in particular the fact that \(f\in\mathcal{C}_{a}^{1}[i]^{\times}\) and \(f^{\dagger}\in\mathcal{C}_{a}^{0}[i]\) are related by
\[f(t)\ =\ f(a)\exp\biggl{[}\int_{a}^{t}f^{\dagger}(s)\,ds\biggr{]}\qquad(t \geqslant a).\]
For \(g\in\mathcal{C}_{a}^{0}[i]\), let \(\exp\int g\) denote the function \(t\mapsto\exp\biggl{[}\int_{a}^{t}g(s)\,ds\biggr{]}\) in \(\mathcal{C}_{a}^{1}[i]^{\times}\). Then
\[(\exp\int g)^{\dagger}=g\quad\text{ and }\quad\exp\int(g+h)=(\exp\int g) \cdot(\exp\int h)\qquad\text{ for }g,h\in\mathcal{C}_{a}^{0}[i].\]
Therefore \(f\mapsto f^{\dagger}\colon\mathcal{C}_{a}^{1}[i]^{\times}\to\mathcal{C}_{a}^{ 0}[i]\) is surjective.
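For example, for \(a=1\) and \(g=x^{-1}\in\mathcal{C}_{1}^{0}\) we get
\[\Big(\exp\!\int g\Big)(t)\ =\ \exp\biggl[\int_{1}^{t}\frac{ds}{s}\biggr]\ =\ t\qquad(t\geqslant 1),\]
and indeed \(\big(\exp\!\int g\big)^{\dagger}=x^{-1}=g\).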
_Notation_.: For \(b\geqslant a\) and \(f\in\mathcal{C}_{a}[i]\) we set \(f|_{b}:=f|_{[b,+\infty)}\in\mathcal{C}_{b}[i]\).
**Differentiable germs.** Let \(r\in\mathbb{N}\cup\{\infty\}\) and let \(a\) range over \(\mathbb{R}\). Let \(\mathcal{C}^{r}\) be the partially ordered subring of \(\mathcal{C}\) consisting of the germs at \(+\infty\) of the functions in \(\bigcup_{a}\mathcal{C}_{a}^{r}\); thus \(\mathcal{C}^{0}=\mathcal{C}\) consists of the germs at \(+\infty\) of the continuous real-valued functions on intervals \([a,+\infty)\), \(a\in\mathbb{R}\). Note that \(\mathcal{C}^{r}\) with its partial ordering satisfies the conditions on \(R\) from Section 5.1. Also, every \(g\geqslant 1\) in \(\mathcal{C}^{r}\) is a unit of \(\mathcal{C}^{r}\), so Lemmas 5.1.14 and 5.1.16 apply to ordered subfields of \(\mathcal{C}^{r}\). We have
\[\mathcal{C}^{0}\ \supseteq\ \mathcal{C}^{1}\ \supseteq\ \mathcal{C}^{2}\ \supseteq\ \cdots\ \supseteq\ \mathcal{C}^{\infty}.\]
Each subring \(\mathcal{C}^{r}\) of \(\mathcal{C}\) yields the subring \(\mathcal{C}^{r}[i]=\mathcal{C}^{r}+\mathcal{C}^{r}i\) of \(\mathcal{C}^{0}[i]=\mathcal{C}[i]\), with
\[\mathcal{C}^{0}[i]\ \supseteq\ \mathcal{C}^{1}[i]\ \supseteq\ \mathcal{C}^{2}[i]\ \supseteq\ \cdots\ \supseteq\ \mathcal{C}^{\infty}[i].\]
Suppose \(r\geqslant 1\); then for \(f\in\mathcal{C}_{a}^{r}[i]\) the germ of \(f^{\prime}\in\mathcal{C}_{a}^{r-1}[i]\) only depends on the germ of \(f\), and we thus obtain a derivation \(g\mapsto g^{\prime}\colon\mathcal{C}^{r}[i]\to\mathcal{C}^{r-1}[i]\) with (germ of \(f\))\({}^{\prime}=\) (germ of \(f^{\prime}\)) for \(f\in\bigcup_{a}\mathcal{C}_{a}^{r}[i]\). This derivation restricts to a derivation \(\mathcal{C}^{r}\to\mathcal{C}^{r-1}\). Note that \(\mathcal{C}[i]^{\times}\cap\mathcal{C}^{r}[i]=\mathcal{C}^{r}[i]^{\times}\), and hence \(\mathcal{C}^{\times}\cap\mathcal{C}^{r}=(\mathcal{C}^{r})^{\times}\).
For open \(U\subseteq\mathbb{C}\) and \(\Phi\colon U\to\mathbb{C}\) of class \(\mathcal{C}^{r}\) (that is, its real and imaginary parts are of class \(\mathcal{C}^{r}\)), if \(f\in\mathcal{C}^{r}[i]\) and \(f(t)\in U\), eventually, then \(\Phi(f)\in\mathcal{C}^{r}[i]\). For example, if \(f\in\mathcal{C}^{r}\), then \(\exp f\in\mathcal{C}^{r}\), and if in addition \(f(t)>0\), eventually, then \(\log f\in\mathcal{C}^{r}\).
We set
\[\mathcal{C}^{<\infty}[\mathrm{i}]\ :=\ \bigcap_{n}\mathcal{C}^{n}[\mathrm{i}].\]
Thus \(\mathcal{C}^{<\infty}[\mathrm{i}]\) is naturally a differential ring with \(\mathbb{C}\) as its ring of constants. We also have the differential subring
\[\mathcal{C}^{<\infty}\ :=\ \bigcap_{n}\mathcal{C}^{n}\]
of \(\mathcal{C}^{<\infty}[\mathrm{i}]\), with \(\mathbb{R}\) as its ring of constants and \(\mathcal{C}^{<\infty}[\mathrm{i}]=\mathcal{C}^{<\infty}+\mathcal{C}^{<\infty} \mathrm{i}\). Note that \(\mathcal{C}^{<\infty}[\mathrm{i}]\) has \(\mathcal{C}^{\infty}[\mathrm{i}]\) as a differential subring. Similarly, \(\mathcal{C}^{<\infty}\) has \(\mathcal{C}^{\infty}\) as a differential subring, and the differential ring \(\mathcal{C}^{\infty}\) has in turn the differential subring \(\mathcal{C}^{\omega}\), whose elements are the germs at \(+\infty\) of the functions in \(\bigcup_{a}\mathcal{C}^{\omega}_{a}\). We have \(\mathcal{C}[\mathrm{i}]^{\times}\cap\mathcal{C}^{<\infty}[\mathrm{i}]=( \mathcal{C}^{<\infty}[\mathrm{i}])^{\times}\) and \(\mathcal{C}^{\times}\cap\mathcal{C}^{<\infty}=(\mathcal{C}^{<\infty})^{\times}\), and likewise with \(\mathcal{C}^{\omega}\) in place of \(\mathcal{C}^{<\infty}\). If \(R\) is a subring of \(\mathcal{C}^{1}\) such that \(f^{\prime}\in R\) for all \(f\in R\), then \(R\subseteq\mathcal{C}^{<\infty}\) is a differential subring of \(\mathcal{C}^{<\infty}\).
**Basic facts about linear differential equations.**
In this subsection we review the main analytic facts about linear differential equations used later. Let \(a\in\mathbb{R}\), \(r\in\mathbb{N}^{\geqslant 1}\), and \(f_{1},\ldots,f_{r}\in\mathcal{C}_{a}[\mathrm{i}]\). This gives the \(\mathbb{C}\)-linear map
\[y\mapsto A(y)\ :=\ y^{(r)}+f_{1}y^{(r-1)}+\cdots+f_{r}y\,:\ \mathcal{C}^{r}_{a}[ \mathrm{i}]\to\mathcal{C}_{a}[\mathrm{i}].\]
We now have the classical existence and uniqueness theorem (see, e.g., [57, (10.6.3)] or [203, §19, I, II]):
**Proposition 5.2.1**.: _Let \(t\in\mathbb{R}^{\geqslant a}\) be given. Then for any \(b\in\mathcal{C}_{a}[\mathrm{i}]\) and \(c\in\mathbb{C}^{r}\) there is a unique \(y=y(b,c)\in\mathcal{C}^{r}_{a}[\mathrm{i}]\) such that_
\[A(y)\ =\ b,\qquad\big{(}y(t),y^{\prime}(t),\ldots,y^{(r-1)}(t)\big{)}\ =\ c.\]
_The map \(c\mapsto y(0,c)\colon\mathbb{C}^{r}\to\ker A\) is an isomorphism of \(\mathbb{C}\)-linear spaces, and so in particular, \(\dim_{\mathbb{C}}\ker A=r\)._
**Corollary 5.2.2**.: _Let \(y\in\ker A\). If for some \(t\in\mathbb{R}^{\geqslant a}\) we have \(y^{(j)}(t)=0\) for \(j=0,\ldots,r-1\), then \(y=0\)._
Proposition 5.2.1 and \(\operatorname{Re}A(y)=A(\operatorname{Re}y)\) for \(f_{1},\ldots,f_{r}\in\mathcal{C}_{a}\) and \(y\in\mathcal{C}^{r}_{a}[\mathrm{i}]\) give:
**Corollary 5.2.3**.: _Suppose \(f_{1},\ldots,f_{r}\in\mathcal{C}_{a}\) and \(t\in\mathbb{R}^{\geqslant a}\). Then for any \(b\in\mathcal{C}_{a}\) and \(c\in\mathbb{R}^{r}\) we have \(y=y(b,c)\in\mathcal{C}^{r}_{a}\), and the map \(c\mapsto y(0,c)\colon\mathbb{R}^{r}\to\mathcal{C}^{r}_{a}\cap\ker A\) is an isomorphism of \(\mathbb{R}\)-linear spaces._
Let \(b\in\mathcal{C}_{a}[\mathrm{i}]\). Using \(y^{(r)}=b-\sum_{i=1}^{r}f_{i}y^{(r-i)}\) for \(y\in A^{-1}(b)\subseteq\mathcal{C}^{r}_{a}[\mathrm{i}]\) gives
\[b,f_{1},\ldots,f_{r}\in\mathcal{C}^{n}_{a}[\mathrm{i}]\ \Longrightarrow\ A^{-1}(b)\subseteq\mathcal{C}^{n+r}_{a}[ \mathrm{i}]\]
by induction on \(n\). Hence \(b,f_{1},\ldots,f_{r}\in\mathcal{C}^{\infty}_{a}[\mathrm{i}]\Rightarrow A^{-1} (b)\subseteq\mathcal{C}^{\infty}_{a}[\mathrm{i}]\), in particular, \(f_{1},\ldots,f_{r}\in\mathcal{C}^{\infty}_{a}[\mathrm{i}]\Rightarrow\ker A \subseteq\mathcal{C}^{\infty}_{a}[\mathrm{i}]\). Also \(b,f_{1},\ldots,f_{r}\in\mathcal{C}^{\omega}_{a}[\mathrm{i}]\Rightarrow A^{-1} (b)\subseteq\mathcal{C}^{\omega}_{a}[\mathrm{i}]\) by Lemma 6.3.4 below (see also [57, (10.5.3)]), so \(f_{1},\ldots,f_{r}\in\mathcal{C}^{\omega}_{a}[\mathrm{i}]\Rightarrow\ker A \subseteq\mathcal{C}^{\omega}_{a}[\mathrm{i}]\).
Let \(y_{1},\ldots,y_{r}\in\mathcal{C}^{r}_{a}[\mathrm{i}]\). The _Wronskian_ \(w=\operatorname{wr}(y_{1},\ldots,y_{r})\) of \(y_{1},\ldots,y_{r}\) is
\[\operatorname{wr}(y_{1},\ldots,y_{r})\ :=\ \det\begin{pmatrix}y_{1}&\cdots&y_{r} \\ y_{1}^{\prime}&\cdots&y_{r}^{\prime}\\ \vdots&&\vdots\\ y_{1}^{(r-1)}&\cdots&y_{r}^{(r-1)}\end{pmatrix}\in\mathcal{C}^{1}_{a}[\mathrm{ i}].\]
Hence if \(w\neq 0\) (that is, \(w(t)\neq 0\) for some \(t\geqslant a\)), then \(y_{1},\ldots,y_{r}\) are \(\mathbb{C}\)-linearly independent. The converse does not hold in general, even for \(r=2\) and \(y_{1},y_{2}\in\mathcal{C}_{a}^{\infty}\), see [23], but we do have:
**Lemma 5.2.4**.: _The following are equivalent:_
* \(w\in\mathcal{C}_{a}[i]^{\times}\)__(_that is,_ \(w(t)\neq 0\) _for all_ \(t\geqslant a\)_);_
* \(y_{1},\ldots,y_{r}\) _is a basis of_ \(\ker A\) _for some choice of_ \(f_{1},\ldots,f_{r}\in\mathcal{C}_{a}[i]\)_._
Proof.: For (i) \(\Rightarrow\) (ii), assume \(w\in\mathcal{C}_{a}[i]^{\times}\), and use that \(y_{1},\ldots,y_{r}\in\ker A\), where \(A\) is the \(\mathbb{C}\)-linear differential operator given by
\[y\mapsto A(y)\ :=\ \operatorname{wr}(y_{1},\ldots,y_{r},y)/w\,:\ \mathcal{C}_{a}^{r}[i] \to\mathcal{C}_{a}[i].\]
For (ii) \(\Rightarrow\) (i), assume (ii) and suppose towards a contradiction that \(t\geqslant a\) is such that \(w(t)=0\). This gives \(c_{1},\ldots,c_{r}\in\mathbb{C}\), not all \(0\), such that for \(y=\sum_{k=1}^{r}c_{k}y_{k}\) we have \(y^{(j)}(t)=0\) for \(j=0,\ldots,r-1\). Hence \(y=0\) by Corollary 5.2.2, contradicting the \(\mathbb{C}\)-linear independence of the basis \(y_{1},\ldots,y_{r}\) of \(\ker A\).
Let now \(y_{1},\ldots,y_{r}\in\ker A\). Then by the above
\[w\neq 0\iff w\in\mathcal{C}_{a}^{1}[i]^{\times}\iff y_{1},\ldots,y_{r}\ \text{are}\ \mathbb{C}\text{-linearly independent.}\]
Moreover, \(w^{\prime}=-f_{1}w\) (_Abel's Identity_, see [203, §19, p. 200]) and hence
\[w(t)\ =\ w(a)\exp\Bigl{(}-\int_{a}^{t}f_{1}(s)\,ds\Bigr{)}\quad\text{ for }t\geqslant a.\]
In particular, \(w=w(a)\in\mathbb{C}\) if \(f_{1}=0\).
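For instance, for \(r=2\) and \(A(y)=y^{\prime\prime}+k^{2}y\) (\(k\in\mathbb{R}^{\times}\); so \(f_{1}=0\)), the solutions \(y_{1}=\cos kt\), \(y_{2}=\sin kt\) have constant Wronskian
\[w\ =\ y_{1}y_{2}^{\prime}-y_{1}^{\prime}y_{2}\ =\ k\cos^{2}kt+k\sin^{2}kt\ =\ k.\]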
In the next corollary we let \(g_{1},\ldots,g_{r}\in\mathcal{C}_{a}[i]\) and consider the \(\mathbb{C}\)-linear map
\[y\mapsto B(y)\ :=\ y^{(r)}+g_{1}y^{(r-1)}+\cdots+g_{r}y\,:\ \mathcal{C}_{a}^{r}[i] \to\mathcal{C}_{a}[i].\]
**Corollary 5.2.5**.: _\(f_{1}=g_{1},\ldots,f_{r}=g_{r}\iff A=B\iff\ker A=\ker B\)._
Proof.: The implications \(f_{1}=g_{1},\ldots,f_{r}=g_{r}\Rightarrow A=B\Rightarrow\ker A=\ker B\) are obvious, so suppose \(\ker A=\ker B\); it suffices to show that then \(f_{j}=g_{j}\) for \(j=1,\ldots,r\). Let \(y_{1},\ldots,y_{r}\) be a basis of \(\ker A\), and set \(h_{j}:=f_{j}-g_{j}\) (\(j=1,\ldots,r\)). Towards a contradiction suppose \(h_{j}\neq 0\) for some \(j\), and take \(j\) minimal with this property. Take a nonempty open interval \(I\subseteq\mathbb{R}^{\geqslant a}\) with \(h_{j}\in\mathcal{C}(I)[i]^{\times}\). (Here and below we denote the restrictions of \(h_{1},\ldots,h_{r}\) to functions \(I\to\mathbb{C}\) by the same symbols.) Then \(y_{1},\ldots,y_{r}\) restrict to \(\mathbb{C}\)-linearly independent functions in \(\mathcal{C}^{r}(I)[i]\) each satisfying the equation
\[y^{(r-j)}+(h_{j+1}/h_{j})y^{(r-j-1)}+\cdots+(h_{r}/h_{j})y=0,\]
contradicting [203, §19, II].
Next some basic properties of Wronskians:
**Lemma 5.2.6**.: _Let \(u\in\mathcal{C}_{a}^{r}[i]\). Then \(\operatorname{wr}(uy_{1},\ldots,uy_{r})=u^{r}\operatorname{wr}(y_{1},\ldots, y_{r})\). In particular, if \(y_{1}\in\mathcal{C}_{a}^{r}[i]^{\times}\), \(r\geqslant 2\), and \(z_{j}:=(y_{j+1}/y_{1})^{\prime}\in\mathcal{C}_{a}^{r-1}[i]\) (\(j=1,\ldots,r-1\)), then \(\operatorname{wr}(y_{1},\ldots,y_{r})=y_{1}^{r}\operatorname{wr}(z_{1},\ldots, z_{r-1})\)._
Proof.: For the first identity, use that there are \(u_{ij}\in\mathcal{C}_{a}[i]\) (\(0\leqslant i\leqslant j<r\)) with \(u_{0j}=u\) such that for all \(y\in\mathcal{C}_{a}^{r}[i]\) we have
\[(uy)^{(j)}\ =\ u_{0j}y^{(j)}+u_{1j}y^{(j-1)}+\cdots+u_{jj}y.\]
The first identity yields the second by taking \(u:=y_{1}^{-1}\).
**Lemma 5.2.7**.: _Suppose \(v:=\operatorname{wr}(y_{1},\ldots,y_{r-1})\) and \(w:=\operatorname{wr}(y_{1},\ldots,y_{r})\) lie in \(\mathcal{C}_{a}^{1}[i]^{\times}\), with \(v:=1\) if \(r=1\). Then we have for all \(y\in\mathcal{C}_{a}^{r}[i]\),_
\[\bigl{(}\operatorname{wr}(y_{1},\ldots,y_{r-1},y)/w\bigr{)}^{\prime}\ =\ (v/w^{2}) \operatorname{wr}(y_{1},\ldots,y_{r},y).\]
Proof.: Expand the determinants \(\operatorname{wr}(y_{1},\ldots,y_{r-1},y)\) and \(\operatorname{wr}(y_{1},\ldots,y_{r},y)\) according to their last column to get functions \(g_{1},\ldots,g_{r},h_{1},\ldots,h_{r}\in\mathcal{C}_{a}[i]\) such that
\[\big{(}\operatorname{wr}(y_{1},\ldots,y_{r-1},y)/w\big{)}^{\prime} =\ (v/w)y^{(r)}+g_{1}y^{(r-1)}+\cdots+g_{r}y,\] \[(v/w^{2})\operatorname{wr}(y_{1},\ldots,y_{r},y) =\ (v/w)y^{(r)}+h_{1}y^{(r-1)}+\cdots+h_{r}y\]
for all \(y\in\mathcal{C}_{a}^{r}[i]\). Both left hand sides have the \(\mathbb{C}\)-linearly independent \(y_{1},\ldots,y_{r}\) among their zeros. Now use Corollary 5.2.5.
Let \(y\in\mathcal{C}_{a}^{r}[i]\) and \(t\geqslant a\). The **multiplicity** of \(y\) at \(t\) is the largest \(m\leqslant r\) such that \(y(t)=y^{\prime}(t)=\cdots=y^{(m-1)}(t)=0\); notation: \(\operatorname{mult}_{t}(y)\), or \(\operatorname{mult}_{t}^{r}(y)\) if we need to indicate the dependence on \(r\). (So for \(a\leqslant 0\) we have \(\operatorname{mult}_{0}^{2}(x^{3})=2\), but \(\operatorname{mult}_{0}^{r}(x^{3})=3\) for \(r\geqslant 3\).) Thus \(t\) is a zero of \(y\) (that is, \(y(t)=0\)) iff \(\operatorname{mult}_{t}(y)\geqslant 1\). If \(y\in\ker A\) has a zero of multiplicity \(r\), then \(y=0\) by Corollary 5.2.2. Note that \(\operatorname{mult}_{t}(y)=\min\bigl\{\operatorname{mult}_{t}(\operatorname{Re}y),\operatorname{mult}_{t}(\operatorname{Im}y)\bigr\}\). For \(z\in\mathcal{C}_{a}^{r}[i]\) we have
\[\operatorname{mult}_{t}(y+z)\geqslant\min\bigl{\{}\operatorname{mult}_{t}(y), \operatorname{mult}_{t}(z)\bigr{\}},\]
and using the Product Rule:
\[\operatorname{mult}_{t}(yz)\ =\ \min\big{\{}r,\operatorname{mult}_{t}(y)+ \operatorname{mult}_{t}(z)\bigr{\}}.\]
If \(r\geqslant 2\) and \(y(t)=0\), then \(y^{\prime}\in\mathcal{C}_{a}^{r-1}[i]\) and \(\operatorname{mult}_{t}^{r-1}(y^{\prime})=\operatorname{mult}_{t}^{r}(y)-1\). The following is obvious:
**Lemma 5.2.8**.: _Let \(y_{1},\ldots,y_{r}\in\mathcal{C}_{a}^{r}[i]\), \(w:=\operatorname{wr}(y_{1},\ldots,y_{r})\), and \(t\geqslant a\). If \(w(t)=0\), then \(\operatorname{mult}_{t}(y)=r\) for some \(\mathbb{C}\)-linear combination \(y=c_{1}y_{1}+\cdots+c_{r}y_{r}\) of \(y_{1},\ldots,y_{r}\), where \(c_{1},\ldots,c_{r}\in\mathbb{C}\) are not all zero._
We also call the sum
\[\operatorname{mult}(y)\ :=\ \sum_{t\geqslant a}\operatorname{mult}_{t}(y)\in \mathbb{N}\cup\{\infty\}\]
of the multiplicities of all zeros of \(y\) the (total) **multiplicity** of \(y\), and we denote it by \(\operatorname{mult}^{r}(y)\) if we need to exhibit the dependence on \(r\). Note that \(\operatorname{mult}(y)<\infty\) iff \(y\) has finitely many zeros. If \(z\in\mathcal{C}_{a}^{r}[i]^{\times}\), then \(\operatorname{mult}(yz)=\operatorname{mult}(y)\).
**Lemma 5.2.9**.: _Suppose \(y\in\mathcal{C}_{a}^{r}\), \(r\geqslant 2\). Then (with \(\infty-1:=\infty\)):_
\[\operatorname{mult}^{r-1}(y^{\prime})\ \geqslant\ \operatorname{mult}^{r}(y)-1.\]
Proof.: Let \(m\leqslant\operatorname{mult}^{r}(y)\); it is enough to show that then \(m-1\leqslant\operatorname{mult}^{r-1}(y^{\prime})\). Let \(t_{1}<\cdots<t_{n}\) be zeros of \(y\) such that \(\sum_{i}\operatorname{mult}_{t_{i}}(y)\geqslant m\). For \(i=1,\ldots,n-1\), Rolle's Theorem yields \(s_{i}\in(t_{i},t_{i+1})\) such that \(y^{\prime}(s_{i})=0\). Hence
\[m\ \leqslant\ \sum_{i=1}^{n}\operatorname{mult}_{t_{i}}^{r}(y) =\ n+\sum_{i=1}^{n}\operatorname{mult}_{t_{i}}^{r-1}(y^{\prime})\] \[\leqslant\ 1+\sum_{i=1}^{n-1}\operatorname{mult}_{s_{i}}^{r-1}(y^{ \prime})+\sum_{i=1}^{n}\operatorname{mult}_{t_{i}}^{r-1}(y^{\prime})\ \leqslant\ 1+ \operatorname{mult}^{r-1}(y^{\prime})\]
as required.
**Oscillation.** Let \(y\in\mathcal{C}_{a}\). We say that \(y\)**oscillates** if its germ in \(\mathcal{C}\) oscillates. So \(y\) does not oscillate iff \(\operatorname{sign}y(t)\) is constant, eventually. If \(y\) oscillates, then so does \(cy\) for \(c\in\mathbb{R}^{\times}\). If \(y\in\mathcal{C}_{a}^{1}\) oscillates, then so does \(y^{\prime}\in\mathcal{C}_{a}\), by Rolle's Theorem.
Let now \(r\in\mathbb{N}^{\geqslant 1}\) and \(f_{1},\dots,f_{r}\in\mathcal{C}_{a}\), and consider the \(\mathbb{R}\)-linear map
\[y\mapsto A(y)\ :=\ y^{(r)}+f_{1}y^{(r-1)}+\dots+f_{r}y\,:\ \mathcal{C}_{a}^{r} \to\mathcal{C}_{a}.\]
By Corollary 5.2.3, the \(\mathbb{R}\)-linear subspace \(\mathcal{C}_{a}^{r}\cap\ker A\) of \(\mathcal{C}_{a}^{r}\) has dimension \(r\).
Let \(y\in\mathcal{C}_{a}^{r}\cap\ker A\), \(y\neq 0\), and let \(Z:=y^{-1}(0)\) be the set of zeros of \(y\), so \(Z\subseteq[a,+\infty)\) is closed in \(\mathbb{R}\). By a _limit point_ of a set \(S\subseteq\mathbb{R}\) we mean a point \(b\in\mathbb{R}\) such that for every real \(\varepsilon>0\) we have \(0<|s-b|<\varepsilon\) for some \(s\in S\).
**Lemma 5.2.10**.: \(Z\) _has no limit points._
Proof.: For \(j=0,\dots,r\) let \(Z_{j}:=(y^{(j)})^{-1}(0)\) be the set of zeros of \(y^{(j)}\), so \(Z=Z_{0}\). Each \(Z_{j}\) is closed and hence contains its limit points. If \(t_{0}<t_{1}\) are in \(Z_{j}\), \(0\leqslant j<r\), then \(Z_{j+1}\cap(t_{0},t_{1})\neq\emptyset\), by Rolle, hence each limit point of \(Z_{j}\) is a limit point of \(Z_{j+1}\). Thus if \(t\) is a limit point of \(Z\), then \(t\geqslant a\) and \(y(t)=y^{\prime}(t)=\dots=y^{(r-1)}(t)=0\), hence \(y=0\) by Corollary 5.2.2, a contradiction.
By Lemma 5.2.10, \(Z\cap[a,b]\) is finite for every \(b\geqslant a\). Thus
\[y\text{ does not oscillate}\quad\Longleftrightarrow\quad Z\text{ is finite}\quad \Longleftrightarrow\quad Z\text{ is bounded}.\]
If \(t_{0}\in Z\) is not the largest element of \(Z\), then \(Z\cap(t_{0},t_{1})=\emptyset\) for some \(t_{1}>t_{0}\) in \(Z\). We say that a pair of zeros \(t_{0}<t_{1}\) of \(y\) is **consecutive** if \(Z\cap(t_{0},t_{1})=\emptyset\).
Next we consider the set \(Z_{1}:=(y^{\prime})^{-1}(0)\) of stationary points of \(y\).
**Lemma 5.2.11**.: _Suppose \(f_{r}\in\mathcal{C}_{a}^{\times}\). Then \(Z_{1}\) has no limit points._
Proof.: The proof of Lemma 5.2.10 shows that if \(t\) is a limit point of \(Z_{1}\), then \(t\geqslant a\) and \(y^{\prime}(t)=y^{\prime\prime}(t)=\dots=y^{(r)}(t)=0\), and as \(y\in\ker A\), this gives \(0=A(y)(t)=f_{r}(t)y(t)\), so \(y(t)=0\), and thus \(y=0\), a contradiction.
Thus if \(f_{r}\in\mathcal{C}_{a}^{\times}\), then \(Z_{1}\cap[a,b]\) is finite for all \(b\geqslant a\).
**Second-order differential equations.** Let \(f\in\mathcal{C}_{a}\), that is, \(f\colon[a,\infty)\to\mathbb{R}\) is continuous. We consider the differential equation
(L) \[Y^{\prime\prime}+fY\ =\ 0.\]
The solutions \(y\in\mathcal{C}_{a}^{2}\) of (L) form an \(\mathbb{R}\)-linear subspace \(\operatorname{Sol}(f)\) of \(\mathcal{C}_{a}^{2}\). The solutions \(y\in\mathcal{C}_{a}^{2}[\mathrm{i}]\) of (L) are the \(y_{1}+y_{2}\mathrm{i}\) with \(y_{1},y_{2}\in\operatorname{Sol}(f)\) and form a \(\mathbb{C}\)-linear subspace \(\operatorname{Sol}_{\mathbb{C}}(f)\) of \(\mathcal{C}_{a}^{2}[\mathrm{i}]\). For any complex numbers \(c\), \(d\) there is a unique solution \(y\in\mathcal{C}_{a}^{2}[\mathrm{i}]\) of (L) with \(y(a)=c\) and \(y^{\prime}(a)=d\), and the map that assigns to \((c,d)\) in \(\mathbb{C}^{2}\) this unique solution is an isomorphism \(\mathbb{C}^{2}\to\operatorname{Sol}_{\mathbb{C}}(f)\) of \(\mathbb{C}\)-linear spaces; it restricts to an \(\mathbb{R}\)-linear bijection \(\mathbb{R}^{2}\to\operatorname{Sol}(f)\). We have \(f\in\mathcal{C}_{a}^{n}\Rightarrow\operatorname{Sol}(f)\subseteq\mathcal{C}_{a}^{n+2}\) (hence \(f\in\mathcal{C}_{a}^{\infty}\Rightarrow\operatorname{Sol}(f)\subseteq\mathcal{C}_{a}^{\infty}\)) and \(f\in\mathcal{C}_{a}^{\omega}\Rightarrow\operatorname{Sol}(f)\subseteq\mathcal{C}_{a}^{\omega}\). From [203, §27, XI]:
**Lemma 5.2.12** (Sonin-Pólya).: _Suppose \(f\in(\mathcal{C}_{a}^{1})^{\times}\), \(y\in\operatorname{Sol}(f)^{\neq}\), and \(t_{0}<t_{1}\) are stationary points of \(y\). If \(f\) is increasing, then \(|y(t_{0})|\geqslant|y(t_{1})|\). If \(f\) is decreasing, then \(|y(t_{0})|\leqslant|y(t_{1})|\). If \(f\) is strictly increasing, respectively strictly decreasing, then these inequalities are strict._
Proof.: Put \(u:=y^{2}+\bigl{(}(y^{\prime})^{2}/f\bigr{)}\in\mathcal{C}^{1}_{a}\). Then \(u^{\prime}=-f^{\prime}(y^{\prime}/f)^{2}\). Thus if \(f\) is increasing, then \(u\) is decreasing, and as \(u(t_{i})=y(t_{i})^{2}\) for \(i=0,1\), we get \(|y(t_{0})|\geqslant|y(t_{1})|\). The other cases are similar, using also Lemma 5.2.11 for the strict inequalities.
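For illustration, take \(a>0\) and \(f(t)=t\), a strictly increasing element of \((\mathcal{C}_{a}^{1})^{\times}\): by the lemma, the values \(|y(s)|\) of any \(y\in\operatorname{Sol}(f)^{\neq}\) at its successive stationary points \(s\) strictly decrease. (The solutions here are the Airy-type functions \(t\mapsto\operatorname{Ai}(-t)\), \(t\mapsto\operatorname{Bi}(-t)\) and their real linear combinations.)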
**Lemma 5.2.13**.: _Suppose \(f\in(\mathcal{C}^{1}_{a})^{\times}\), \(y\in\operatorname{Sol}(f)\), and \(t_{0}<t_{1}\) are consecutive zeros of \(y\). Then there is exactly one stationary point of \(y\) in the interval \((t_{0},t_{1})\)._
Proof.: By Rolle's Theorem, \(y^{\prime}\) has a zero in \((t_{0},t_{1})\), so \(y\) has at least one stationary point there. If \(s_{0}<s_{1}\) were stationary points of \(y\) in the interval \((t_{0},t_{1})\), then by Rolle \(y^{\prime\prime}\), and thus \(y\) (in view of \(y^{\prime\prime}=-fy\) and \(f\in(\mathcal{C}^{1}_{a})^{\times}\)), would have a zero in the interval \((s_{0},s_{1})\), contradicting that \(t_{0}<t_{1}\) are consecutive zeros of \(y\).
Let \(y_{1},y_{2}\in\operatorname{Sol}(f)\), with Wronskian \(w=y_{1}y_{2}^{\prime}-y_{1}^{\prime}y_{2}\). Then \(w\in\mathbb{R}\), and
\[w\neq 0\ \Longleftrightarrow\ y_{1},\,y_{2}\text{ are $\mathbb{R}$-linearly independent.}\]
By [18, Chapter 6, Lemmas 2 and 3] we have:
**Lemma 5.2.14**.: _Let \(y_{1},y_{2}\in\operatorname{Sol}(f)\) be \(\mathbb{R}\)-linearly independent and \(g\in\mathcal{C}_{a}\). Then_
\[t\mapsto y(t)\ :=\ -y_{1}(t)\int_{a}^{t}\frac{y_{2}(s)}{w}g(s)\,ds+y_{2}(t) \int_{a}^{t}\frac{y_{1}(s)}{w}g(s)\,ds\,:\ [a,+\infty)\to\mathbb{R}\]
_lies in \(\mathcal{C}^{2}_{a}\) and satisfies \(y^{\prime\prime}+fy=g\), \(y(a)=y^{\prime}(a)=0\)._
**Lemma 5.2.15**.: _Let \(y_{1}\in\operatorname{Sol}(f)\) with \(y_{1}(t)\neq 0\) for \(t\geqslant a\). Then the function_
\[t\mapsto y_{2}(t):=y_{1}(t)\int_{a}^{t}\frac{1}{y_{1}(s)^{2}}\,ds\colon[a,+ \infty)\to\mathbb{R}\]
_also lies in \(\operatorname{Sol}(f)\), and \(y_{1}\), \(y_{2}\) are \(\mathbb{R}\)-linearly independent._
From [18, Chapter 2, Lemma 1] we also recall:
**Lemma 5.2.16** (Gronwall's Lemma).: _Let \(C\in\mathbb{R}^{\geqslant}\), \(v,y\in\mathcal{C}_{a}\) satisfy \(v(t),y(t)\geqslant 0\) for all \(t\geqslant a\) and_
\[y(t)\ \leqslant\ C+\int_{a}^{t}v(s)y(s)\,ds\quad\text{for all $t\geqslant a$.}\]
_Then_
\[y(t)\ \leqslant\ C\exp\biggl{[}\int_{a}^{t}v(s)\,ds\biggr{]}\quad\text{for all $t \geqslant a$.}\]
Here is a simpler differential version:
**Lemma 5.2.17**.: _Let \(u\in\mathcal{C}_{a}\) and \(y\in\mathcal{C}^{1}_{a}\) satisfy \(y^{\prime}(t)\leqslant u(t)y(t)\) for all \(t\geqslant a\). Then \(y(t)\leqslant y(a)\exp\Bigl{(}\int_{a}^{t}u(s)\,ds\Bigr{)}\) for all \(t\geqslant a\)._
Proof.: Put \(z(t):=y(t)\exp\Bigl{(}-\int_{a}^{t}u(s)\,ds\Bigr{)}\) for \(t\geqslant a\). Then \(z\in\mathcal{C}^{1}_{a}\), and \(z^{\prime}(t)\leqslant 0\) for all \(t\geqslant a\), so \(z(t)\leqslant z(a)=y(a)\) for all \(t\geqslant a\). This yields the desired result.
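For instance, if \(a\geqslant 1\), \(c\in\mathbb{R}^{>}\), and \(y^{\prime}(t)\leqslant(c/t)\,y(t)\) for all \(t\geqslant a\), then
\[y(t)\ \leqslant\ y(a)\,(t/a)^{c}\qquad\text{for all }t\geqslant a:\]
a logarithmic-derivative bound of order \(1/t\) allows at most polynomial growth.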
_In the rest of this subsection we assume that \(a\geqslant 1\) and that \(c\in\mathbb{R}^{>}\) is such that \(|f(t)|\leqslant c/t^{2}\) for all \(t\geqslant a\)._ Under this hypothesis, Lemma 5.2.16 yields the following bound on the growth of the solutions \(y\in\operatorname{Sol}(f)\); the proof we give is similar to that of [18, Chapter 6, Theorem 5].
**Proposition 5.2.18**.: _Let \(y\in\operatorname{Sol}(f)\). Then there is \(C\in\mathbb{R}^{\geqslant}\) such that \(|y(t)|\leqslant Ct^{c+1}\) and \(|y^{\prime}(t)|\leqslant Ct^{c}\) for all \(t\geqslant a\)._
Proof.: Let \(t\) range over \([a,+\infty)\). Integrating \(y^{\prime\prime}=-fy\) twice between \(a\) and \(t\), we obtain constants \(c_{1}\), \(c_{2}\) such that for all \(t\),
\[y(t)\ =\ c_{1}+c_{2}t-\int_{a}^{t}\int_{a}^{t_{1}}f(t_{2})y(t_{2})\,dt_{2}\,dt_{1 }\ =\ c_{1}+c_{2}t-\int_{a}^{t}(t-s)f(s)y(s)\,ds\]
and hence, with \(C:=|c_{1}|+|c_{2}|\),
\[|y(t)|\ \leqslant\ Ct+t\int_{a}^{t}|f(s)|\cdot|y(s)|\,ds,\ \ \text{so}\ \ \frac{|y(t)|}{t}\ \leqslant\ C+\int_{a}^{t}s|f(s)|\cdot\frac{|y(s)|}{s}\,ds.\]
Then by Gronwall's Lemma (Lemma 5.2.16),
\[\frac{|y(t)|}{t}\ \leqslant\ C\exp\biggl{[}\int_{a}^{t}s|f(s)|\,ds\biggr{]}\ \leqslant\ C\exp\biggl{[}\int_{1}^{t}c/s\,ds\biggr{]}\ =\ Ct^{c}\]
and thus \(|y(t)|\leqslant Ct^{c+1}\). Now
\[y^{\prime}(t) \ =\ c_{2}-\int_{a}^{t}f(s)y(s)\,ds,\ \text{so}\] \[|y^{\prime}(t)| \ \leqslant\ |c_{2}|+\int_{a}^{t}|f(s)y(s)|\,ds\ \leqslant\ C+Cc\int_{1}^{t}s^{c-1}\,ds\] \[\ =\ C+Cc\left[\frac{t^{c}}{c}-\frac{1}{c}\right]\ =\ Ct^{c}.\qed\]
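For orientation: for \(0<c<1/4\) the Euler equation \(y^{\prime\prime}+(c/t^{2})y=0\), where \(|f(t)|=c/t^{2}\) exactly, has the solutions \(t^{\alpha_{\pm}}\) with
\[\alpha_{\pm}\ =\ \tfrac{1}{2}\bigl(1\pm\sqrt{1-4c}\,\bigr)\in(0,1),\]
so all its solutions are \(O(t)\), comfortably within the bound \(Ct^{c+1}\) of Proposition 5.2.18.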
Let \(y_{1},y_{2}\in\operatorname{Sol}(f)\) be \(\mathbb{R}\)-linearly independent. Recall that \(w=y_{1}y_{2}^{\prime}-y_{1}^{\prime}y_{2}\in\mathbb{R}^{\times}\). It follows that \(y_{1}\) and \(y_{2}\) cannot be simultaneously very small:
**Lemma 5.2.19**.: _There is a positive constant \(d\) such that_
\[\max\bigl{(}|y_{1}(t)|,|y_{2}(t)|\bigr{)}\ \geqslant\ dt^{-c}\ \ \ \ \text{ for all }t\geqslant a.\]
Proof.: Proposition 5.2.18 yields \(C\in\mathbb{R}^{>}\) such that \(|y_{i}^{\prime}(t)|\leqslant Ct^{c}\) for \(i=1,2\) and all \(t\geqslant a\). Hence \(|w|\leqslant 2\max\bigl{(}|y_{1}(t)|,|y_{2}(t)|\bigr{)}Ct^{c}\) for \(t\geqslant a\), so
\[\max\bigl{(}|y_{1}(t)|,|y_{2}(t)|\bigr{)}\ \geqslant\ \frac{|w|}{2C}t^{-c}\ \ \ \ \ \ (t\geqslant a).\qed\]
**Corollary 5.2.20**.: _Set \(y:=y_{1}+y_{2}\mathrm{i}\) and \(z:=y^{\dagger}\). Then for some \(D\in\mathbb{R}^{>}\),_
\[|z(t)|\ \leqslant\ Dt^{2c}\ \ \ \ \text{for all }t\geqslant a.\]
Proof.: Take \(C\) as in the proof of Lemma 5.2.19, and \(d\) as in that lemma. Then
\[|z(t)|\ =\ \frac{|y_{1}^{\prime}(t)+y_{2}^{\prime}(t)\mathrm{i}|}{|y_{1}(t)+y_{ 2}(t)\mathrm{i}|}\ \leqslant\ \frac{|y_{1}^{\prime}(t)|+|y_{2}^{\prime}(t)|}{\max\bigl{(}|y_{1}(t)|,|y_{2}(t )|\bigr{)}}\ \leqslant\ \left(\frac{2C}{d}\right)t^{2c}\]
for \(t\geqslant a\).
**More on oscillation.** We continue with the study of (L). Sturm's Separation Theorem says that if \(y,z\in\operatorname{Sol}(f)\) are \(\mathbb{R}\)-linearly independent and \(t_{0}<t_{1}\) are consecutive zeros of \(z\), then \((t_{0},t_{1})\) contains a unique zero of \(y\) [203, §27, VI]. Thus:
**Lemma 5.2.21**.: _Some \(y\in\operatorname{Sol}(f)^{\neq}\) oscillates \(\ \Longleftrightarrow\ \text{every }y\in\operatorname{Sol}(f)^{\neq}\) oscillates._
We say that \(f\)**generates oscillations** if some element of \(\operatorname{Sol}(f)^{\neq}\) oscillates.
**Lemma 5.2.22**.: _Let \(b\in\mathbb{R}^{\geqslant a}\). Then_
\[f\ \text{generates oscillations}\ \ \ \Longleftrightarrow\ \ f|_{b}\in\mathcal{C}_{b}\ \text{generates oscillations}.\]
Proof.: The forward direction is obvious. For the backward direction, use that every \(y\in\mathcal{C}_{b}^{2}\) with \(y^{\prime\prime}+gy=0\) for \(g:=f|_{b}\) extends uniquely to a solution of (L).
By this lemma, whether \(f\) generates oscillations depends only on its germ in \(\mathcal{C}\). So this induces the notion of an element of \(\mathcal{C}\) generating oscillations. Here is another result of Sturm [203, loc. cit.] that we use below:
**Theorem 5.2.23** (Sturm's Comparison Theorem).: _Let \(g\in\mathcal{C}_{a}\) with \(f(t)\geqslant g(t)\) for all \(t\geqslant a\). Let \(y\in\operatorname{Sol}(f)^{\neq}\) and \(z\in\operatorname{Sol}(g)^{\neq}\), and let \(t_{0}<t_{1}\) be consecutive zeros of \(z\). Then either \((t_{0},t_{1})\) contains a zero of \(y\), or on \([t_{0},t_{1}]\) we have \(f=g\) and \(y=cz\) for some constant \(c\in\mathbb{R}^{\times}\)._
Here is an immediate consequence:
**Corollary 5.2.24**.: _If \(g\in\mathcal{C}_{a}\) generates oscillations and \(f(t)\geqslant g(t)\), eventually, then \(f\) also generates oscillations._
_Example_.: For \(k\in\mathbb{R}^{\times}\) we have the differential equation of the harmonic oscillator,
\[y^{\prime\prime}+k^{2}y\ =\ 0.\]
A function \(y\in\mathcal{C}_{a}^{2}\) is a solution iff for some real constants \(c,t_{0}\) and all \(t\geqslant a\),
\[y(t)\ =\ c\sin k(t-t_{0}).\]
For \(c\neq 0\), any function \(y\in\mathcal{C}_{a}^{2}\) as displayed oscillates. Thus if \(f(t)\geqslant\varepsilon\), eventually, for some constant \(\varepsilon>0\), then \(f\) generates oscillations.
To (L) we associate the corresponding **Riccati equation**
(R) \[z^{\prime}+z^{2}+f\ =\ 0.\]
Let \(y\in\operatorname{Sol}(f)^{\neq}\) be a non-oscillating solution to (L), and take \(b\geqslant a\) with \(y(t)\neq 0\) for \(t\geqslant b\). Then the function
\[t\mapsto z(t)\ :=\ y^{\prime}(t)/y(t)\,:\ [b,+\infty)\to\mathbb{R}\]
in \(\mathcal{C}_{b}^{1}\) satisfies (R). Conversely, if \(z\in\mathcal{C}_{b}^{1}\) (\(b\geqslant a\)) is a solution to (R), then
\[t\mapsto y(t)\ :=\ \exp\left(\int_{b}^{t}z(s)\,ds\right)\ :\ [b,+\infty)\to\mathbb{R}\]
is a non-oscillating solution to (L) with \(y\in(\mathcal{C}_{b}^{1})^{\times}\) and \(z=y^{\dagger}\).
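_Example_.: For \(f=-1\), the non-oscillating solution \(y=\mathrm{e}^{t}\) of (L) gives the constant solution \(z=y^{\dagger}=1\) of (R): \(z^{\prime}+z^{2}+f=0+1-1=0\).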
Let \(g\in\mathcal{C}_{a}^{1}\), \(h\in\mathcal{C}_{a}^{0}\) and consider the second-order linear differential equation
(L\({}^{\prime}\)) \[y^{\prime\prime}+gy^{\prime}+hy\ =\ 0.\]
**Corollary 5.2.25**.: _Set \(f:=-\frac{1}{2}g^{\prime}-\frac{1}{4}g^{2}+h\in\mathcal{C}_{a}\). Then the following are equivalent:_
1. _some nonzero solution of_ (L\({}^{\prime}\)) _oscillates;_
2. _all nonzero solutions of_ (L\({}^{\prime}\)) _oscillate;_
3. \(f\) _generates oscillations._
Proof.: Let \(G\in(\mathcal{C}_{a}^{2})^{\times}\) be given by \(G(t):=\exp\Bigl{(}-\frac{1}{2}\int_{a}^{t}g(s)\,ds\Bigr{)}\). Then \(y\in\mathcal{C}_{a}^{2}\) is a solution to (L) iff \(Gy\) is a solution to (L\({}^{\prime}\)); since \(G(t)>0\) for all \(t\geqslant a\), oscillating solutions correspond to oscillating solutions. Cf. [ADH, 5.1.13].
**More on non-oscillation.** We continue with (L). Let \(y_{1}\), \(y_{2}\) range over elements of \(\operatorname{Sol}(f)\), and recall that their Wronskian \(w=y_{1}y_{2}^{\prime}-y_{1}^{\prime}y_{2}\) lies in \(\mathbb{R}\).
**Lemma 5.2.26**.: _Suppose \(b\geqslant a\) is such that \(y_{2}(t)\neq 0\) for \(t\geqslant b\). Then for \(q:=y_{1}/y_{2}\in\mathcal{C}_{b}^{2}\) we have \(q^{\prime}(t)=-w/y_{2}(t)^{2}\) for \(t\geqslant b\), so \(q\) is monotone and \(\lim_{t\to\infty}q(t)\) exists in \(\mathbb{R}\cup\{-\infty,+\infty\}\)._
This leads to:
**Corollary 5.2.27**.: _Suppose \(b\geqslant a\) and \(y_{1}(t),y_{2}(t)\neq 0\) for \(t\geqslant b\). For \(i=1,2\), set_
\[h_{i}(t)\ :=\ \int_{b}^{t}\frac{1}{y_{i}(s)^{2}}\,ds\quad\text{ for }t\geqslant b \text{, so }h_{i}\in\mathcal{C}_{b}^{3}.\]
_Then: \(y_{1}\prec y_{2}\iff h_{1}\succ 1\succcurlyeq h_{2}\)._
Proof.: Suppose \(y_{1}\prec y_{2}\). Then \(y_{1}\), \(y_{2}\) are \(\mathbb{R}\)-linearly independent, so \(w\neq 0\). Moreover, \(q\prec 1\) with \(q\) as in Lemma 5.2.26, and \(q^{\prime}=-wh_{2}^{\prime}\) by that lemma, so \(q+wh_{2}\) is constant, and thus \(h_{2}\preccurlyeq 1\). Note that \(h_{1}\) is strictly increasing. If \(h_{1}(t)\to r\in\mathbb{R}\) as \(t\to+\infty\), then \(z:=(r-h_{1})y_{1}\in\operatorname{Sol}(f)\) by Lemma 5.2.26 with \(y_{1}\) and \(y_{2}\) interchanged, and \(z\prec y_{1}\), so \(z=0\), hence \(h_{1}=r\), a contradiction. Thus \(h_{1}\succ 1\).
For the converse, suppose \(h_{1}\succ 1\succcurlyeq h_{2}\). Then \(y_{1}\), \(y_{2}\) are \(\mathbb{R}\)-linearly independent, so \(w\neq 0\). From \(h_{2}\preccurlyeq 1\) and \(q+wh_{2}\) being constant we obtain \(q\preccurlyeq 1\). If \(q(t)\to r\neq 0\) as \(t\to+\infty\), then \(y_{1}=qy_{2}\asymp y_{2}\), and thus \(h_{1}\asymp h_{2}\), a contradiction. Hence \(q\prec 1\), and thus \(y_{1}\prec y_{2}\).
The pair \((y_{1},y_{2})\) is said to be a **principal system** of solutions of (L) if
1. \(y_{1}(t),y_{2}(t)>0\) eventually, and
2. \(y_{1}\prec y_{2}\).
Then \(y_{1}\), \(y_{2}\) form a basis of the \(\mathbb{R}\)-linear space \(\operatorname{Sol}(f)\), and \(f\) does not generate oscillations, by Lemma 5.2.21. Moreover, for \(y=c_{1}y_{1}+c_{2}y_{2}\) with \(c_{1},c_{2}\in\mathbb{R}\), \(c_{2}\neq 0\) we have \(y\sim c_{2}y_{2}\). Here are some facts about this notion:
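_Example_.: For \(f=0\), so (L) reads \(Y^{\prime\prime}=0\), the pair \((1,x)\) is a principal system of solutions. In Corollary 5.2.27, taking \(b>\max(a,0)\), we get \(h_{1}(t)=t-b\succ 1\) and \(h_{2}(t)=b^{-1}-t^{-1}\preccurlyeq 1\), in accordance with \(1\prec x\).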
**Lemma 5.2.28**.: _If \((y_{1},y_{2})\), \((z_{1},z_{2})\) are principal systems of solutions of (L), then there are \(c_{1},d_{1},d_{2}\in\mathbb{R}\) such that \(z_{1}=c_{1}y_{1}\), \(z_{2}=d_{1}y_{1}+d_{2}y_{2}\), and \(c_{1},d_{2}>0\)._
**Lemma 5.2.29**.: _Suppose \(f\) does not generate oscillations. Then (L) has a principal system of solutions._
Proof.: It suffices to find a basis \(y_{1}\), \(y_{2}\) of \(\operatorname{Sol}(f)\) with \(y_{1}\prec y_{2}\): since \(f\) does not generate oscillations, such \(y_{1}\), \(y_{2}\) have constant sign eventually, so after replacing \(y_{i}\) by \(-y_{i}\) where necessary we arrange \(y_{1},y_{2}>0\) eventually. Now let \(y_{1}\), \(y_{2}\) be any basis of \(\operatorname{Sol}(f)\); by Lemma 5.2.26, \(c:=\lim_{t\to\infty}y_{1}(t)/y_{2}(t)\in\mathbb{R}\cup\{-\infty,+\infty\}\) exists. If \(c=\pm\infty\), then interchange \(y_{1}\), \(y_{2}\); otherwise replace \(y_{1}\) by \(y_{1}-cy_{2}\). Then \(c=0\), so \(y_{1}\prec y_{2}\).
One calls \(y_{1}\) a **principal** solution of (L) if \((y_{1},y_{2})\) is a principal system of solutions of (L) for some \(y_{2}\). (See [91, Theorem XI.6.4] and [125, 127].) By the previous two lemmas, (L) has a principal solution iff \(f\) does not generate oscillations, and any two principal solutions differ by a multiplicative factor in \(\mathbb{R}^{>}\). If \(y_{1}\in(\mathcal{C}_{a})^{\times}\) and \(y_{2}\) is given as in Lemma 5.2.15, then \(y_{2}\) is a non-principal solution of (L) and \(y_{1}\notin\mathbb{R}y_{2}\).
**Chebyshev systems and Markov systems \((^{*})\).** Let \(r\in\mathbb{N}^{\geqslant 1}\) and \(y_{1},\dots,y_{r}\in\mathcal{C}_{a}^{r}\), and let \(V\) be the \(\mathbb{R}\)-linear subspace of \(\mathcal{C}_{a}^{r}\) spanned by \(y_{1},\dots,y_{r}\). We call \(y_{1},\dots,y_{r}\) a **Chebyshev system** (on \(\mathbb{R}^{\geqslant a}\)) if for all \(y=c_{1}y_{1}+\dots+c_{r}y_{r}\) with \(c_{1},\dots,c_{r}\in\mathbb{R}\) not all zero, we have \(\operatorname{mult}^{r}(y)<r\). Note that if \(y_{1},\dots,y_{r}\) is a Chebyshev system, then \(y_{1},\dots,y_{r}\) are \(\mathbb{R}\)-linearly independent, and every basis of \(V\) is a Chebyshev system. Chebyshev systems can be used for interpolation:
**Lemma 5.2.30**.: _Suppose \(y_{1},\ldots,y_{r}\) are \(\mathbb{R}\)-linearly independent. Let \(t_{1},\ldots,t_{n}\in\mathbb{R}^{\geqslant a}\) be pairwise distinct and let \(r_{1},\ldots,r_{n}\in\mathbb{N}\) satisfy \(r_{1}+\cdots+r_{n}=r\). (So \(n\geqslant 1\)). Then the following are equivalent:_
1. _the only_ \(y\in V\) _with_ \(\operatorname{mult}_{t_{i}}^{r}(y)\geqslant r_{i}\) _for_ \(i=1,\ldots,n\) _is_ \(y=0\)_;_
2. _for all_ \(b_{ij}\in\mathbb{R}\)__\((i=1,\ldots,n\)_,_ \(j=1,\ldots,r_{i})\)_, there exists_ \(y\in V\) _with_ \[y^{(j-1)}(t_{i})\ =\ b_{ij}\qquad(i=1,\ldots,n,\ j=1,\ldots,r_{i}).\]
_Moreover, in this case, given any \(b_{ij}\) as in_ (ii)_, the element \(y\in V\) in_ (ii) _is unique._
Proof.: Each \(y\in V\) equals \(c_{1}y_{1}+\cdots+c_{r}y_{r}\) for a unique \((c_{1},\ldots,c_{r})\in\mathbb{R}^{r}\). Let the \(b_{ij}\) in \(\mathbb{R}\) be as in (ii), and set
\[M:=\begin{pmatrix}y_{1}(t_{1})&\cdots&y_{r}(t_{1})\\ \vdots&&\vdots\\ y_{1}^{(r_{1}-1)}(t_{1})&\cdots&y_{r}^{(r_{1}-1)}(t_{1})\\ y_{1}(t_{2})&\cdots&y_{r}(t_{2})\\ \vdots&&\vdots\\ y_{1}^{(r_{n}-1)}(t_{n})&\cdots&y_{r}^{(r_{n}-1)}(t_{n})\end{pmatrix}\in \mathbb{R}^{r\times r},\quad b:=\begin{pmatrix}b_{11}\\ \vdots\\ b_{1r_{1}}\\ b_{21}\\ \vdots\\ b_{nr_{n}}\end{pmatrix}\in\mathbb{R}^{r}.\]
Then given \(c=(c_{1},\ldots,c_{r})^{\mathrm{t}}\in\mathbb{R}^{r}\), the element \(y=c_{1}y_{1}+\cdots+c_{r}y_{r}\) of \(V\) satisfies the vanishing conditions in (i) iff \(Mc=0\), and the displayed equations in (ii) iff \(Mc=b\). Thus (i) means injectivity of \(M\colon\mathbb{R}^{r}\to\mathbb{R}^{r}\), and (ii) its surjectivity. Since \(M\) is a square matrix, these are equivalent; and injectivity of \(M\) gives the uniqueness of \(y\) in (ii).
In particular, if \(y_{1},\ldots,y_{r}\) is a Chebyshev system, then for all \(t_{i}\), \(r_{i}\)\((i=1,\ldots,n)\) as in the previous lemma and for all \(b_{ij}\in\mathbb{R}\)\((i=1,\ldots,n,\)\(j=1,\ldots,r_{i})\), there is a unique \(y\in V\) with \(y^{(j-1)}(t_{i})=b_{ij}\)\((i=1,\ldots,n,\)\(j=1,\ldots,r_{i})\).
_Remark_.: Suppose \(y_{1},\ldots,y_{r}\) are \(\mathbb{R}\)-linearly independent. If \(y_{1},\ldots,y_{r}\) is a Chebyshev system, then each \(y\in V^{\neq}\) has \(<r\) zeros. Remarkably, the converse of this implication also holds; this is due to Arama [4] and (in greater generality) Hartman [89]; a simple proof, from [145], is in [52, Chapter 3, Proposition 3]. This links the notion of Chebyshev system considered here with the concept of the same name in approximation theory [46, Chapter 3, SS4]. (These remarks are not used later.)
If \(y_{1},\ldots,y_{r}\) is a Chebyshev system, then \(\operatorname{wr}(y_{1},\ldots,y_{r})\in\mathcal{C}_{a}^{\times}\) by Lemma 5.2.8. If \(\operatorname{wr}(y_{1},\ldots,y_{j})\in\mathcal{C}_{a}^{\times}\) for \(j=1,\ldots,r\), then \(y_{1},\ldots,y_{r}\) is called a **Markov system** (on \(\mathbb{R}^{\geqslant a}\)). Thus by Lemma 5.2.8, if \(y_{1},\ldots,y_{j}\) is a Chebyshev system for \(j=1,\ldots,r\), then \(y_{1},\ldots,y_{r}\) is a Markov system. Here is a partial converse:
**Lemma 5.2.31**.: _If \(y_{1},\ldots,y_{r}\) is a Markov system, then it is a Chebyshev system._
Proof.: The case \(r=1\) is trivial, so let \(r\geqslant 2\) and let \(y_{1},\ldots,y_{r}\) be a Markov system; in particular, \(y_{1}\in\mathcal{C}_{a}^{\times}\). Set \(z_{j}:=(y_{j+1}/y_{1})^{\prime}\in\mathcal{C}_{a}^{r-1}\) for \(j=1,\ldots,r-1\). Then \(z_{1},\ldots,z_{r-1}\) is a Markov system by Lemma 5.2.6. Assume inductively that it is a Chebyshev system, and let \(y=c_{1}y_{1}+\cdots+c_{r}y_{r}\), \(c_{1},\ldots,c_{r}\in\mathbb{R}\) not all zero; we need to show \(\operatorname{mult}(y)<r\). Towards a contradiction, assume \(\operatorname{mult}(y)\geqslant r\). Then \(z:=(y/y_{1})^{\prime}\) satisfies \(\operatorname{mult}(z)\geqslant r-1\), by Lemma 5.2.9 and the remarks before it. Moreover, \(z=c_{2}z_{1}+\cdots+c_{r}z_{r-1}\), and so \(c_{2}=\cdots=c_{r}=0\) and hence \(y=c_{1}y_{1}\), and thus \(c_{1}=0\), a contradiction.
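_Example_.: The monomials \(1,x,\ldots,x^{r-1}\) form a Markov system on \(\mathbb{R}^{\geqslant a}\): for \(j=1,\ldots,r\),
\[\operatorname{wr}\big(1,x,\ldots,x^{j-1}\big)\ =\ 0!\,1!\cdots(j-1)!\ \in\ \mathbb{R}^{>},\]
a constant unit of \(\mathcal{C}_{a}\). By Lemma 5.2.31 they form a Chebyshev system; this recovers the fact that a nonzero real polynomial of degree \(<r\) has fewer than \(r\) zeros, counted with multiplicity.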
If \(y_{1},\ldots,y_{r}\) is a Markov system and \(b\geqslant a\), then \(y_{1}|_{b},\ldots,y_{r}|_{b}\) is a Markov system on \(\mathbb{R}^{\geqslant b}\), and likewise with "Chebyshev" in place of "Markov".
**Disconjugacy**\((^{*})\).: Let \(r\in\mathbb{N}^{\geqslant 1}\) and \(f_{1},\ldots,f_{r}\in\mathcal{C}_{a}\), and consider the linear differential equation
(D) \[y^{(r)}+f_{1}y^{(r-1)}+\cdots+f_{r}y\ =\ 0\]
on \(\mathbb{R}^{\geqslant a}\). Let \(\operatorname{Sol}\left(\mathrm{D}\right)\) be its set of solutions in \(\mathcal{C}_{a}^{r}\), so \(\operatorname{Sol}\left(\mathrm{D}\right)\) is the kernel of the \(\mathbb{R}\)-linear map
\[y\mapsto A(y):=y^{(r)}+f_{1}y^{(r-1)}+\cdots+f_{r}y\,:\ \mathcal{C}_{a}^{r} \to\mathcal{C}_{a}.\]
Recall that by Corollary 5.2.3 we have \(\dim_{\mathbb{R}}\operatorname{Sol}\left(\mathrm{D}\right)=r\). The linear differential equation (D) is said to be **disconjugate** if \(\operatorname{Sol}\left(\mathrm{D}\right)\) contains a Chebyshev system; that is, every nonzero \(y\in\operatorname{Sol}(D)\) has multiplicity \(<r\). If (D) is disconjugate, then it has no oscillating solutions.
_Example_.: The equation \(y^{(r)}=0\) is disconjugate, since its solutions in \(\mathcal{C}_{a}^{r}\) are the polynomial functions \(c_{0}+c_{1}x+\cdots+c_{r-1}x^{r-1}\) with \(c_{0},\ldots,c_{r-1}\in\mathbb{R}\).
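For contrast, a simple non-disconjugate equation, illustrating that disconjugacy excludes oscillation:

_Example_.: The equation \(y^{\prime\prime}+y=0\) is not disconjugate: each nonzero solution \(y=c_{1}\cos x+c_{2}\sin x\) \((c_{1},c_{2}\in\mathbb{R})\) has infinitely many zeros on \(\mathbb{R}^{\geqslant a}\), so \(\operatorname{mult}(y)\geqslant 2=r\); indeed, every such \(y\) oscillates.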
From Lemma 5.2.30 we obtain:
**Corollary 5.2.32** (de la Vallée-Poussin [202]).: _Suppose (D) is disconjugate. Then for all pairwise distinct \(t_{1},\ldots,t_{n}\geqslant a\), all \(r_{1},\ldots,r_{n}\in\mathbb{N}\) with \(r_{1}+\cdots+r_{n}=r\), and all \(b_{ij}\in\mathbb{R}\)\((i=1,\ldots,n\), \(j=1,\ldots,r_{i})\), there is a unique \(y\in\operatorname{Sol}\left(\mathrm{D}\right)\) such that_
\[y^{(j-1)}(t_{i})\ =\ b_{ij}\qquad(i=1,\ldots,n,\ j=1,\ldots,r_{i}).\]
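For a concrete instance of the corollary (with the data chosen for illustration):

_Example_.: Take for (D) the equation \(y^{\prime\prime}=0\), so \(r=2\). For \(n=2\), \(r_{1}=r_{2}=1\), the corollary yields the unique affine function \(y\) with \(y(t_{1})=b_{11}\) and \(y(t_{2})=b_{21}\); for \(n=1\), \(r_{1}=2\), it yields \(y=b_{11}+b_{12}(x-t_{1})\), the unique solution with \(y(t_{1})=b_{11}\), \(y^{\prime}(t_{1})=b_{12}\).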
Let \(b\geqslant a\) and set \(g_{j}:=f_{j}|_{b}\in\mathcal{C}_{b}\) for \(j=1,\ldots,r\). This yields the linear differential equation
(D\({}_{b}\)) \[y^{(r)}+g_{1}y^{(r-1)}+\cdots+g_{r}y\ =\ 0\]
on \(\mathbb{R}^{\geqslant b}\) with the \(\mathbb{R}\)-linear isomorphism \(y\mapsto y|_{b}\colon\operatorname{Sol}\left(\mathrm{D}\right)\to \operatorname{Sol}\left(\mathrm{D}_{b}\right)\).
**Corollary 5.2.33**.: _If (D) is disconjugate, then some basis \(y_{1},\ldots,y_{r}\) of the \(\mathbb{R}\)-linear space \(\operatorname{Sol}\left(\mathrm{D}\right)\) yields for every \(b>a\) a Markov system \(y_{1}|_{b},\ldots,y_{r}|_{b}\) on \(\mathbb{R}^{\geqslant b}\)._
Proof.: Let \(y_{1},\ldots,y_{r}\in\mathcal{C}_{a}^{r}\) be solutions of (D) such that
\[y_{j}(a)\ =\ y_{j}^{\prime}(a)\ =\ \cdots\ =\ y_{j}^{(r-j-1)}(a)\ =\ 0,\ y_{j}^{(r-j)}(a)\neq 0 \qquad\text{for $j=1,\ldots,r$.}\]
Then \(\operatorname{wr}(y_{1},\ldots,y_{r})(a)\neq 0\), so \(y_{1},\ldots,y_{r}\) are \(\mathbb{R}\)-linearly independent. Suppose (D) is disconjugate. Let \(j\in\{1,\ldots,r\}\), \(t\in\mathbb{R}^{>a}\). Then \(\operatorname{wr}(y_{1},\ldots,y_{j})(t)\neq 0\): otherwise Lemma 5.2.8 yields an \(\mathbb{R}\)-linear combination \(y\neq 0\) of \(y_{1},\ldots,y_{j}\) with \(\operatorname{mult}_{t}(y)\geqslant j\), but also \(\operatorname{mult}_{a}(y)\geqslant r-j\) by choice of \(y_{1},\ldots,y_{r}\), hence \(\operatorname{mult}(y)\geqslant r\), contradicting disconjugacy of (D). Thus \(y_{1},\ldots,y_{r}\) has the desired property.
With \(n\geqslant 1\) understood from the context, let \(\mathfrak{d}\) denote the \(\mathbb{R}\)-linear map
\[y\mapsto y^{\prime}\,:\ \mathcal{C}_{a}^{n}\to\mathcal{C}_{a}^{n-1},\]
identify \(f\in\mathcal{C}_{a}^{n-1}\) with the \(\mathbb{R}\)-linear operator \(y\mapsto fy\colon\mathcal{C}_{a}^{n-1}\to\mathcal{C}_{a}^{n-1}\), and for maps \(\alpha\colon\mathcal{C}_{a}^{n}\to\mathcal{C}_{a}^{n-1}\), \(\beta\colon\mathcal{C}_{a}^{n+1}\to\mathcal{C}_{a}^{n}\), denote \(\alpha\circ\beta\colon\mathcal{C}_{a}^{n+1}\to\mathcal{C}_{a}^{n-1}\) simply by \(\alpha\beta\). With these conventions we can state an analytic version of Lemma 1.1.3:
**Lemma 5.2.34**.: _If \(g_{j}\in(\mathcal{C}_{a}^{r-j+1})^{\times}\) for \(j=1,\ldots,r\) and we set_
\[A\ =\ g_{1}\cdots g_{r}(\mathfrak{d}g_{r}^{-1})\cdots(\mathfrak{d}g_{2}^{-1})( \mathfrak{d}g_{1}^{-1})\ :\ \mathcal{C}_{a}^{r}\to\mathcal{C}_{a}, \tag{5.2.1}\]
_then \(A=(\partial-h_{r})\cdots(\partial-h_{1})\) with \(h_{j}:=(g_{1}\cdots g_{j})^{\dagger}\in\mathcal{C}_{a}^{r-j}\) for \(j=1,\ldots,r\). Conversely, if \(h_{j}\in\mathcal{C}_{a}^{r-j}\) for \(j=1,\ldots,r\) and \(A:=(\partial-h_{r})\cdots(\partial-h_{1})\colon\mathcal{C}_{a}^{r}\to\mathcal{ C}_{a}\) and \(h_{0}:=0\), then (5.2.1) holds for \(g_{j}\in(\mathcal{C}_{a}^{r-j+1})^{\times}\) given by_
\[g_{j}(t)\ :=\ \exp\int_{a}^{t}\big{(}h_{j}(s)-h_{j-1}(s)\big{)}\,ds\qquad(j=1, \ldots,r),\]
_and \(h_{j}=(g_{1}\cdots g_{j})^{\dagger}\) for those \(j\)._
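As a sanity check in the lowest-order cases: for \(r=1\), (5.2.1) reads \(A=g_{1}\mathfrak{d}g_{1}^{-1}\), and indeed \(g_{1}\cdot(y/g_{1})^{\prime}=y^{\prime}-g_{1}^{\dagger}y=(\partial-h_{1})(y)\) with \(h_{1}=g_{1}^{\dagger}\); for \(r=2\) one computes likewise that \(g_{1}g_{2}(\mathfrak{d}g_{2}^{-1})(\mathfrak{d}g_{1}^{-1})\) sends \(y\) to \((\partial-h_{2})(\partial-h_{1})(y)\) with \(h_{1}=g_{1}^{\dagger}\), \(h_{2}=(g_{1}g_{2})^{\dagger}\).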
We now link the notion of disconjugacy with factorization of the operator \(A=\partial^{r}+f_{1}\partial^{r-1}+\cdots+f_{r}\colon\mathcal{C}_{a}^{r}\to \mathcal{C}_{a}\) considered earlier in connection with (D).
**Proposition 5.2.35** (Frobenius [74], Libri [129]).: _Suppose \(y_{1},\ldots,y_{r}\in\operatorname{Sol}\,(\mathrm{D})\) is a Markov system. Set \(w_{0}:=1\), \(w_{j}:=\operatorname{wr}(y_{1},\ldots,y_{j})\in(\mathcal{C}_{a}^{r-j+1})^{\times}\) for \(j=1,\ldots,r\), and_
\[g_{1}\ :=\ w_{1},\qquad g_{j}\ :=\ w_{j}w_{j-2}/w_{j-1}^{2}\quad(j=2,\ldots,r).\]
_Then \(g_{j}\in(\mathcal{C}_{a}^{r-j+1})^{\times}\) for \(j=1,\ldots,r\) and \(A=g_{1}\cdots g_{r}(\partial g_{r}^{-1})\cdots(\partial g_{2}^{-1})(\partial g _{1}^{-1})\)._
Proof.: It is clear that \(g_{j}\in(\mathcal{C}_{a}^{r-j+1})^{\times}\) and easy to check that \(w_{j}/w_{j-1}=g_{1}\cdots g_{j}\) for \(j=1,\ldots,r\). We define for \(j=0,\ldots,r\) the \(\mathbb{R}\)-linear map
\[y\mapsto A_{j}(y):=\operatorname{wr}(y_{1},\ldots,y_{j},y)/w_{j}\,:\ \mathcal{C}_{a}^{r}\to\mathcal{C}_{a}^{r-j}.\]
We claim that \(A_{j}=g_{1}\cdots g_{j}\partial g_{j}^{-1}\partial\cdots\partial g_{1}^{-1}\). The case \(j=0\) is trivial. Suppose the claim holds for a certain \(j<r\). Then
\[g_{1}\cdots g_{j+1}\partial g_{j+1}^{-1}\partial\cdots\partial g_{2}^{-1} \partial g_{1}^{-1}\ =\ g_{1}\cdots g_{j+1}\partial(g_{1}\cdots g_{j+1})^{-1}A_{j},\]
which sends \(y\in\mathcal{C}_{a}^{r}\) to
\[\frac{w_{j+1}}{w_{j}}\left(\frac{w_{j}}{w_{j+1}}\frac{\operatorname{wr}(y_{1},\ldots,y_{j},y)}{w_{j}}\right)^{\prime}=\frac{w_{j+1}}{w_{j}}\left(\frac{ \operatorname{wr}(y_{1},\ldots,y_{j},y)}{w_{j+1}}\right)^{\prime},\]
and this in turn equals \(A_{j+1}(y)=\operatorname{wr}(y_{1},\ldots,y_{j},y_{j+1},y)/w_{j+1}\) by Lemma 5.2.7.
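For a concrete illustration of Proposition 5.2.35 (a routine computation):

_Example_.: For \(y^{\prime\prime}-y=0\) the solutions \(y_{1}=\mathrm{e}^{-x}\), \(y_{2}=\mathrm{e}^{x}\) form a Markov system: \(w_{1}=\mathrm{e}^{-x}\) and \(w_{2}=\operatorname{wr}(\mathrm{e}^{-x},\mathrm{e}^{x})=2\). The proposition gives \(g_{1}=\mathrm{e}^{-x}\), \(g_{2}=w_{2}w_{0}/w_{1}^{2}=2\,\mathrm{e}^{2x}\), and one checks directly that \(g_{1}g_{2}(\partial g_{2}^{-1})(\partial g_{1}^{-1})\) sends \(y\) to \(y^{\prime\prime}-y\). In the notation of Lemma 5.2.34, \(h_{1}=g_{1}^{\dagger}=-1\) and \(h_{2}=(g_{1}g_{2})^{\dagger}=1\), so \(A=(\partial-1)(\partial+1)=\partial^{2}-1\).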
Here is a converse, with \(A\) still the operator \(\partial^{r}+f_{1}\partial^{r-1}+\cdots+f_{r}\) figuring in (D):
**Theorem 5.2.36** (Pólya [156]).: _Suppose_
\[A=g_{1}\cdots g_{r}(\partial g_{r}^{-1})\cdots(\partial g_{2}^{-1})(\partial g _{1}^{-1})\quad\text{ with }g_{j}\in(\mathcal{C}_{a}^{r-j+1})^{\times}\text{ for }j=1,\ldots,r.\]
_Then \(\operatorname{Sol}\,(\mathrm{D})\) contains a Markov system \(y_{1},\ldots,y_{r}\)._
Proof.: Let \(t_{1},\ldots,t_{r}\) range over \(\mathbb{R}^{\geqslant a}\) and define \(y_{1},\ldots,y_{r}\in\mathcal{C}_{a}^{r}\) by
\[y_{1}(t_{1})\ :=\ g_{1}(t_{1}),\qquad y_{2}(t_{1})\ :=\ g_{1}(t_{1})\int_{a}^{t_{1}}g_{2}(t_{2})\,dt_{2},\qquad\ldots,\]
\[y_{r}(t_{1})\ :=\ g_{1}(t_{1})\int_{a}^{t_{1}}g_{2}(t_{2})\int_{a}^{t_{2}}g_{3}(t_{3})\cdots\int_{a}^{t_{r-1}}g_{r}(t_{r})\,dt_{r}\cdots dt_{3}\,dt_{2}.\]
Then \((\partial g_{j}^{-1})\cdots(\partial g_{1}^{-1})\) sends \(y_{i}\) to \(0\) for \(i\leqslant j\), so \(A(y_{j})=0\) for \(j=1,\ldots,r\), that is, \(y_{1},\ldots,y_{r}\in\operatorname{Sol}\left(\mathrm{D}\right)\). Moreover, an induction on \(j\) gives \(\operatorname{wr}(y_{1},\ldots,y_{j})\ =\ g_{1}^{j}g_{2}^{j-1}\cdots g_{j}\in\mathcal{C}_{a}^{\times}\) for \(j=1,\ldots,r\), so \(y_{1},\ldots,y_{r}\) is a Markov system.
_Remark_.: Suppose \(\operatorname{Sol}\left(\mathrm{D}\right)\) contains a Markov system \(y_{1},\ldots,y_{r}\). If \(f_{1},\ldots,f_{r}\in\mathcal{C}_{a}^{n}\), then \(y_{1},\ldots,y_{r}\in\mathcal{C}_{a}^{n+r}\), so \(g_{j}\in(\mathcal{C}_{a}^{n+r-j+1})^{\times}\) for \(j=1,\ldots,r\) where \(g_{1},\ldots,g_{r}\) are as in Proposition 5.2.35. Likewise, if \(f_{1},\ldots,f_{r}\in\mathcal{C}_{a}^{\infty}\), then those \(g_{j}\) lie in \((\mathcal{C}_{a}^{\infty})^{\times}\), and the same with \(\omega\) in place of \(\infty\).
**Corollary 5.2.37**.: _With \(A=\partial^{r}+f_{1}\partial^{r-1}+\cdots+f_{r}:\mathcal{C}_{a}^{r}\to \mathcal{C}_{a}\), \(\operatorname{Sol}\left(\mathrm{D}\right)\) contains a Markov system iff there are \(g_{j}\in(\mathcal{C}_{a}^{r-j+1})^{\times}\)\((j=1,\ldots,r)\) such that_
\[A\ =\ g_{1}\cdots g_{r}(\partial g_{r}^{-1})\cdots(\partial g_{2}^{-1})( \partial g_{1}^{-1}).\]
_Moreover, \(\operatorname{Sol}\left(\mathrm{D}_{b}\right)\) contains a Markov system for all \(b>a\) iff (\(\mathrm{D}_{b}\)) is disconjugate for all \(b>a\)._
Proof.: The first equivalence follows from Proposition 5.2.35 and Theorem 5.2.36, and the second equivalence follows from Lemma 5.2.31 and Corollary 5.2.33.
We say that (D) is **eventually disconjugate** if (\(\mathrm{D}_{b}\)) is disconjugate for some \(b\geqslant a\). If (D) is disconjugate, then so is (\(\mathrm{D}_{b}\)) for all \(b\geqslant a\), and likewise with "eventually disconjugate" in place of "disconjugate". If (D) is eventually disconjugate, then no solution of (D) in \(\mathcal{C}_{a}^{r}\) oscillates. If \(r=1\), then (D) is always disconjugate, since its solutions are the functions \(t\mapsto c\exp\Bigl{(}-\int_{a}^{t}f_{1}(s)\,ds\Bigr{)}\) with \(c\in\mathbb{R}\). Returning to the special case where \(r=2\) we have:
**Corollary 5.2.38**.: _Suppose (\(\mathrm{L}\)) has a non-oscillating solution \(y\neq 0\). Then (\(\mathrm{L}\)) is eventually disconjugate._
Proof.: Here \(f_{1}=0\), \(f_{2}=f\), and \(f\) does not generate oscillations by Lemma 5.2.21. Let \(y_{1},y_{2}\in\operatorname{Sol}(f)\) be non-oscillating and \(\mathbb{R}\)-linearly independent. Then \(\operatorname{wr}(y_{1},y_{2})\in\mathbb{R}^{\times}\). Take \(b\geqslant a\) such that \(y_{1}|_{b}\in(\mathcal{C}_{b})^{\times}\). Then \(y_{1}|_{b},y_{2}|_{b}\) is a Markov system.
_Remark_.: By [81], there is for each \(r>2\) a linear differential equation (D) with only non-oscillating solutions in \(\mathcal{C}_{a}^{r}\), but which is not eventually disconjugate. (This will not be used later but motivates Corollary 7.4.58 below.)
Passing to germs instead of functions, we now consider a monic operator
\[A\ =\ \partial^{r}+\phi_{1}\partial^{r-1}+\cdots+\phi_{r}\in\mathcal{C}^{< \infty}[\partial]\qquad(\phi_{1},\ldots,\phi_{r}\in\mathcal{C}^{<\infty}).\]
It gives rise to the \(\mathbb{R}\)-linear map
\[y\mapsto A(y)=y^{(r)}+\phi_{1}y^{(r-1)}+\cdots+\phi_{r}y\ :\ \mathcal{C}^{< \infty}\to\mathcal{C}^{<\infty},\]
whose kernel we denote by \(\ker A\).
**Lemma 5.2.39**.: \(\dim_{\mathbb{R}}\ker A=r\)_, and if \(\theta_{1},\ldots,\theta_{r}\in\mathcal{C}^{<\infty}\) and \(\ker A=\ker B\) for \(B=\partial^{r}+\theta_{1}\partial^{r-1}+\cdots+\theta_{r}\in\mathcal{C}^{< \infty}[\partial]\), then \(A=B\), that is \(\phi_{i}=\theta_{i}\) for \(i=1,\ldots,r\)._
Proof.: Take \(a\in\mathbb{R}\) and \(f_{1},\ldots,f_{r}\in\mathcal{C}_{a}\) representing \(\phi_{1},\ldots,\phi_{r}\). This gives an equation (D). Let \(y_{1},\ldots,y_{r}\) be a basis of the \(\mathbb{R}\)-linear space \(\operatorname{Sol}\left(\mathrm{D}\right)\). Then the germs of \(y_{1},\ldots,y_{r}\) lie in \(\mathcal{C}^{<\infty}\), and denoting these germs also by \(y_{1},\ldots,y_{r}\) one verifies easily that then \(y_{1},\ldots,y_{r}\) is a basis of \(\ker A\). The second part of the lemma follows in a similar way from Corollary 5.2.5.
We call \(A\) as above **disconjugate** if for some \(a\in\mathbb{R}\) the germs \(\phi_{1},\ldots,\phi_{r}\) have representatives \(f_{1},\ldots,f_{r}\) in \(\mathcal{C}_{a}\) such that the linear differential equation (D) on \(\mathbb{R}^{\geqslant a}\) is disconjugate.
**Lemma 5.2.40**.: _For \(A\) as above, the following are equivalent:_
* \(A\) _is disconjugate;_
* \(A=g_{1}\cdots g_{r}(\partial g_{r}^{-1})\cdots(\partial g_{2}^{-1})(\partial g_{ 1}^{-1})\) _for some_ \(g_{1},\ldots,g_{r}\in(\mathcal{C}^{<\infty})^{\times}\)_;_
* \(A=(\partial-h_{r})\cdots(\partial-h_{1})\) _for some_ \(h_{1},\ldots,h_{r}\in\mathcal{C}^{<\infty}\)_._
_Thus if monic \(A_{1},A_{2}\in\mathcal{C}^{<\infty}[\partial]\) of order \(\geqslant 1\) are disconjugate, then so is \(A_{1}A_{2}\)._
Proof.: Assume (i). Then Corollary 5.2.33 yields \(a\in\mathbb{R}\), representatives \(f_{1},\ldots,f_{r}\in\mathcal{C}_{a}\) of \(\phi_{1},\ldots,\phi_{r}\), and a Markov system \(y_{1},\ldots,y_{r}\in\operatorname{Sol}\left(\operatorname{D}\right)\). Let \(g_{1},\ldots,g_{r}\) be as in Proposition 5.2.35. Then for \(b\geqslant a\) with \(f_{1}|_{b},\ldots,f_{r}|_{b}\in\mathcal{C}_{b}^{n}\) we have \(g_{j}|_{b}\in(\mathcal{C}_{b}^{n+r-j+1})^{\times}\) for \(j=1,\ldots,r\). So the germs of \(g_{1},\ldots,g_{r}\) are in \((\mathcal{C}^{<\infty})^{\times}\), and denoting these germs also by \(g_{1},\ldots,g_{r}\) gives \(A=g_{1}\cdots g_{r}(\partial g_{r}^{-1})\cdots(\partial g_{2}^{-1})(\partial g _{1}^{-1})\) by Proposition 5.2.35 and Lemma 5.2.39. We have now shown (i) \(\Rightarrow\) (ii). For the converse, we reverse the argument using Theorem 5.2.36. The equivalence (ii) \(\Leftrightarrow\) (iii) is shown just like Lemma 1.1.3, using also that \(f\mapsto f^{\dagger}\colon(\mathcal{C}^{<\infty})^{\times}\to\mathcal{C}^{<\infty}\) is surjective.
_Remark_.: Lemma 5.2.40 goes through for monic \(A\in\mathcal{C}^{\infty}[\partial]\) of order \(r\), with \(\mathcal{C}^{\infty}\) in place of \(\mathcal{C}^{<\infty}\) everywhere. Likewise for monic \(A\in\mathcal{C}^{\omega}[\partial]\) of order \(r\), with \(\mathcal{C}^{\omega}\) in place of \(\mathcal{C}^{<\infty}\) everywhere.
A **principal system** of solutions of (D) is a tuple \(y_{1},\ldots,y_{r}\) in \(\operatorname{Sol}\left(\operatorname{D}\right)\) such that
* \(y_{1}(t),\ldots,y_{r}(t)>0\) eventually, and
* \(y_{1}\prec\cdots\prec y_{r}\) (in \(\mathcal{C}\)).
Note that then \(y_{1},\ldots,y_{r}\) are \(\mathbb{R}\)-linearly independent, and \(z_{1},\ldots,z_{r}\in\mathcal{C}_{a}^{r}\) is a principal system of solutions of (D) iff there are \(c_{ij}\in\mathbb{R}\) (\(1\leqslant j\leqslant i\leqslant r\)) such that
\[z_{i}\ =\ c_{ii}y_{i}+c_{i,i-1}y_{i-1}+\cdots+c_{i1}y_{1}\quad\text{and}\quad c _{ii}>0.\]
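The simplest instance:

_Example_.: For the equation \(y^{(r)}=0\) the tuple \(1,x,\ldots,x^{r-1}\) is a principal system of solutions: each \(x^{j}\) is eventually positive, and \(1\prec x\prec\cdots\prec x^{r-1}\).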
The next result generalizes Lemma 5.2.29. It seems slightly stronger than a similar result by Hartman [92] and Levin [128]:
**Proposition 5.2.41**.: _Suppose (D) has no oscillating solutions. Then it has a principal system of solutions._
Proof.: Let \(y,z\in\operatorname{Sol}\left(\operatorname{D}\right)\), \(y(t),z(t)>0\) eventually. Claim: \(\lim\limits_{t\to+\infty}y(t)/z(t)\) exists in \([0,+\infty]\). Suppose this limit doesn't exist. Then we have \(c\in\mathbb{R}^{>}\) such that
\[\liminf\limits_{t\to+\infty}y(t)/z(t)\ <c\ <\limsup\limits_{t\to+\infty}y(t)/z(t),\]
so \(y(t)/z(t)=c\) for arbitrarily large \(t\); then \(y-cz\in\operatorname{Sol}\left(\operatorname{D}\right)\) has arbitrarily large zeros and hence, as (D) has no oscillating solutions, \(y-cz=0\), so \(y(t)/z(t)=c\) eventually, a contradiction. In particular, for such \(y,z\) we have either \(y\prec z\), or \(y\sim cz\) for some \(c\in\mathbb{R}^{>}\), or \(y\succ z\). If \(y_{1},\ldots,y_{n}\in\operatorname{Sol}\left(\operatorname{D}\right)^{\neq}\) and \(y_{1}\prec\cdots\prec y_{n}\), then \(y_{1},\ldots,y_{n}\) are \(\mathbb{R}\)-linearly independent, so \(n\leqslant r\), and for any nonzero \(z\in\mathbb{R}y_{1}+\cdots+\mathbb{R}y_{n}\) we have \(z\sim cy_{j}\) for some \(j\in\{1,\ldots,n\}\) and \(c\in\mathbb{R}^{\times}\). Now take such \(y_{1},\ldots,y_{n}\) with maximal \(n\), so \(n\geqslant 1\). We claim that then \(\operatorname{Sol}\left(\operatorname{D}\right)=\mathbb{R}y_{1}+\cdots+\mathbb{R}y_{n}\) (so \(n=r\)). Let \(z\in\operatorname{Sol}\left(\operatorname{D}\right)^{\neq}\). We cannot have \(z\prec y_{1}\), nor \(y_{j}\prec z\prec y_{j+1}\) with \(1\leqslant j\leqslant n-1\), nor \(z\succ y_{n}\); hence \(z\sim cy_{j}\) where \(1\leqslant j\leqslant n\) and \(c\in\mathbb{R}^{\times}\). Then \(z-cy_{j}\prec y_{j}\). If \(z\neq cy_{j}\), we take \(z-cy_{j}\) as our new \(z\) and obtain likewise \(z-cy_{j}\sim dy_{i}\) with \(1\leqslant i<j\) and \(d\in\mathbb{R}^{\times}\). Continuing this way leads in a finite number of steps to \(z\in\mathbb{R}y_{1}+\cdots+\mathbb{R}y_{n}\).
The next result is due to Trench [200]. We do not give a proof, since we shall establish in Section 7.4 a version of it in the Hardy field context; see also Proposition 2.5.39.
**Proposition 5.2.42**.: _Suppose \(\operatorname{Sol}\left(\mathrm{D}\right)\) contains a Markov system. Then there are \(g_{j}\in(\mathcal{C}_{a}^{r-j+1})^{\times}\ (j=1,\ldots,r)\) such that for \(A=\partial^{r}+f_{1}\partial^{r-1}+\cdots+f_{r}:\mathcal{C}_{a}^{r}\to\mathcal{ C}_{a}\),_
\[A\ =\ g_{1}\cdots g_{r}(\partial g_{r}^{-1})\cdots(\partial g_{2}^{-1})( \partial g_{1}^{-1})\quad\text{and}\quad\int_{a}^{\infty}|g_{j}(s)|\,ds=\infty \text{ for }j=2,\ldots,r.\]
_Moreover, such \(g_{1},\ldots,g_{r}\) are unique up to multiplication by nonzero constants._
An application of l'Hôpital's Rule shows that for \(g_{1},\ldots,g_{r}\) as in Proposition 5.2.42 the tuple \(y_{1},\ldots,y_{r}\) in the proof of Theorem 5.2.36 is a principal system of solutions of (D).
### Lyapunov exponents \((^{*})\)
In this subsection \(f\), \(g\), \(h\) range over \(\mathcal{C}[i]\). Consider the downward closed subset
\[\Lambda=\Lambda(f):=\big{\{}\lambda\in\mathbb{R}:f\,\mathrm{e}^{\lambda x} \preccurlyeq 1\big{\}}\]
of \(\mathbb{R}\). If \(\lambda<\mu\in\Lambda\), then \(f\,\mathrm{e}^{\lambda x}\preccurlyeq 1\). Also
\[\Lambda(f)\ =\ \Lambda(\overline{f})\ =\ \Lambda(|f|),\quad f\preccurlyeq g\ \Rightarrow\ \Lambda(f)\supseteq\Lambda(g).\]
_Notation._ Set \(\mathbb{R}_{\pm\infty}:=\mathbb{R}\cup\{-\infty,+\infty\}\). Then for \(S\subseteq\mathbb{R}\) we have \(\sup S\in\mathbb{R}_{\pm\infty}\) with \(\sup\emptyset:=-\infty\).
The **Lyapunov exponent** of \(f\) is \(\lambda(f):=\sup\Lambda(f)\in\mathbb{R}_{\pm\infty}\). (See [45, §3.12].) Note:
\[\lambda(f)=+\infty\quad\Longleftrightarrow\quad\Lambda(f)=\mathbb{R}\quad \Longleftrightarrow\quad f\prec\mathrm{e}^{\lambda x}\ \text{ for all }\lambda\in\mathbb{R},\]
and
\[\lambda(f)\ =\ \lambda(\overline{f})\ =\ \lambda(|f|),\quad f\preccurlyeq g \Rightarrow\lambda(f)\ \geqslant\ \lambda(g),\quad f\asymp g\ \Rightarrow\ \lambda(f)\ =\ \lambda(g).\]
If \(\lambda=\lambda(f)\in\mathbb{R}\), then for each \(\varepsilon\in\mathbb{R}^{>}\) we have \(f\,\mathrm{e}^{(\lambda-\varepsilon)x}\preccurlyeq 1\) and \(f\,\mathrm{e}^{(\lambda+\varepsilon)x}\not\preccurlyeq 1\). One also verifies easily that for \(f\in\mathcal{C}[i]^{\times}\),
\[\lambda(f)\ =\ -\limsup_{t\to+\infty}\frac{\log|f(t)|}{t}. \tag{5.2.2}\]
If \(f=\mathrm{e}^{g}\), then \(\lambda(f)=-\limsup_{t\to+\infty}\mathrm{Re}\,g(t)/t\). Thus \(\lambda(c\,\mathrm{e}^{\mathrm{i}\phi})=0\) for \(c\in\mathbb{C}^{\times}\), \(\phi\in\mathcal{C}\).
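Some sample computations with (5.2.2), for illustration:

_Example_.: \(\lambda(x^{k})=0\) for all \(k\in\mathbb{Z}\), and \(\lambda(\mathrm{e}^{cx})=-c\) for \(c\in\mathbb{R}\); moreover, \(\lambda(\mathrm{e}^{x^{2}})=-\infty\) and \(\lambda(\mathrm{e}^{-x^{2}})=+\infty\).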
**Lemma 5.2.43**.: _Assume \(\lambda(f),\lambda(g)>-\infty\). Then:_
* \(\lambda(f+g)\geqslant\min\big{\{}\lambda(f),\lambda(g)\big{\}}\)_, with equality if_ \(\lambda(f)\neq\lambda(g)\)_;_
* \(\lambda(fg)\geqslant\lambda(f)+\lambda(g)\)_;_
* \(\lambda(f^{m})=m\lambda(f)\) _for all_ \(m\)_._
Proof.: For (i) suppose \(\lambda(f)\leqslant\lambda(g)\). Then for each \(\lambda\in\Lambda(f)\) and \(\varepsilon\in\mathbb{R}^{>}\) we have \((f+g)\,\mathrm{e}^{(\lambda-\varepsilon)x}\preccurlyeq 1\) and so \(\lambda-\varepsilon\in\Lambda(f+g)\). This shows \(\lambda(f+g)\geqslant\lambda(f)\), and \(\lambda(f+g)=\lambda(f)\) if \(\lambda(f)<\lambda(g)\) then follows using \(f=(f+g)-g\). Parts (ii) and (iii) follow in a similar way.
By Lemma 5.2.43(ii), if \(f\in\mathcal{C}[i]^{\times}\) and \(\lambda(f),\lambda(f^{-1})\in\mathbb{R}\), then \(\lambda(f^{-1})\leqslant-\lambda(f)\).
_Example._ If \(f=\mathrm{e}^{g}\) and \(\mathrm{Re}\,g+cx\prec x\), \(c\in\mathbb{R}\), then \(\lambda(f)=c\), \(\lambda(f^{-1})=-c\).
Set
\[\mathcal{C}[\mathrm{i}]^{\preccurlyeq}\ :=\ \big\{f:\ f\preccurlyeq\mathrm{e}^{nx}\text{ for some }n\big\}\ =\ \big\{f:\ \lambda(f)>-\infty\big\},\]
a \(\mathbb{C}\)-subalgebra of \(\mathcal{C}[\mathrm{i}]\). By Lemma 5.2.43,
\[\big\{f:\ f\preccurlyeq\mathrm{e}^{-nx}\text{ for all }n\big\}\ =\ \big\{f:\ \lambda(f)=+\infty\big\}\]
is an ideal of \(\mathcal{C}[\mathrm{i}]^{\preccurlyeq}\). The group of units of \(\mathcal{C}[\mathrm{i}]^{\preccurlyeq}\) is
\[\mathcal{C}[\mathrm{i}]^{\asymp}\ :=\ \big\{f\in\mathcal{C}[\mathrm{i}]^{\times}:\ \lambda(f),\lambda(f^{-1})\in\mathbb{R}\big\}\ =\ \big\{f:\ \mathrm{e}^{-nx}\preccurlyeq f\preccurlyeq\mathrm{e}^{nx}\text{ for some }n\big\}.\]
**Lemma 5.2.44**.: _Suppose \(f\in\mathcal{C}^{1}[\mathrm{i}]\). If \(\lambda(f^{\prime})\leqslant 0\), then \(\lambda(f^{\prime})\leqslant\lambda(f)\). If \(\lambda(f^{\prime})>0\), then \(c:=\lim_{s\to\infty}f(s)\in\mathbb{C}\) exists and \(\lambda(f^{\prime})\leqslant\lambda(f-c)\)._
Proof.: Let \(\lambda\in\Lambda(f^{\prime})\). Take \(a\in\mathbb{R}\) and a representative of \(f\) in \(\mathcal{C}^{1}_{a}[\mathrm{i}]\), also denoted by \(f\), as well \(C\in\mathbb{R}^{>}\), such that \(|f^{\prime}(t)|\leqslant C\,\mathrm{e}^{-\lambda t}\) for \(t\geqslant a\). If \(\lambda<0\), then for \(t\geqslant a\):
\[|f(t)|-|f(a)|\ \leqslant\ |f(t)-f(a)|\ =\ \left|\int_{a}^{t}f^{\prime}(s)\,ds \right|\ \leqslant\ \int_{a}^{t}|f^{\prime}(s)|\,ds\ \leqslant\\ C\int_{a}^{t}\mathrm{e}^{-\lambda s}\ ds\ =\ -\frac{C}{\lambda}( \mathrm{e}^{-\lambda t}-\mathrm{e}^{-\lambda a}),\]
hence \(f\preccurlyeq\mathrm{e}^{-\lambda x}\). This yields \(\lambda(f^{\prime})\leqslant\lambda(f)\) if \(\lambda(f^{\prime})\leqslant 0\). Suppose \(\lambda>0\). Then for \(a\leqslant s\leqslant t\):
\[|f(t)-f(s)|\ =\ \left|\int_{s}^{t}f^{\prime}(u)\,du\right|\ \leqslant\ \int_{s}^{t}|f^{\prime}(u)|\,du\ \leqslant\ -\frac{C}{\lambda}( \mathrm{e}^{-\lambda t}-\mathrm{e}^{-\lambda s}).\]
Therefore \(c:=\lim_{s\to\infty}f(s)\) exists and \(|c-f(s)|\leqslant\frac{C}{\lambda}\,\mathrm{e}^{-\lambda s}\) for \(s\geqslant a\). Hence \(f-c\preccurlyeq\mathrm{e}^{-\lambda x}\), so \(\lambda\in\Lambda(f-c)\). This yields \(\lambda(f^{\prime})\leqslant\lambda(f-c)\).
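The inequality in Lemma 5.2.44 can be strict:

_Example_.: For \(f=\mathrm{e}^{-x}\sin(\mathrm{e}^{2x})\) we have \(f^{\prime}=-\mathrm{e}^{-x}\sin(\mathrm{e}^{2x})+2\,\mathrm{e}^{x}\cos(\mathrm{e}^{2x})\), and (5.2.2) yields \(\lambda(f)=1\), \(\lambda(f^{\prime})=-1\).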
Let \(y=(y_{1},\ldots,y_{n})\in\mathcal{C}[\mathrm{i}]^{n}\), \(n\geqslant 1\). Put \(\lambda(y):=\min\bigl{\{}\lambda(y_{1}),\ldots,\lambda(y_{n})\bigr{\}}\). Then the function \(\lambda\colon\mathcal{C}[\mathrm{i}]^{n}\to\mathbb{R}_{\pm\infty}\) on the product ring \(\mathcal{C}[\mathrm{i}]^{n}\) also satisfies (i)-(iii) in Lemma 5.2.43 with \(f\), \(g\) replaced by \(y,z\in\mathcal{C}[\mathrm{i}]^{n}\) with \(\lambda(y),\lambda(z)>-\infty\). Thus:
**Corollary 5.2.45**.: _If \(m\geqslant 2\), \(y_{1},\ldots,y_{m}\in\mathcal{C}[\mathrm{i}]^{n}\) are \(\mathbb{C}\)-linearly dependent, and \(\lambda(y_{1}),\ldots,\lambda(y_{m})>-\infty\), then \(\lambda(y_{i})=\lambda(y_{j})\) for some \(i\neq j\)._
We define \(y\preccurlyeq g:\Leftrightarrow y_{1},\ldots,y_{n}\preccurlyeq g\)\(\bigl{(}\Rightarrow\lambda(y)\geqslant\lambda(g)\bigr{)}\). Note that \(\lambda(y)\in\mathbb{R}\) iff \(y\preccurlyeq\mathrm{e}^{mx}\) and \(y\not\preccurlyeq\mathrm{e}^{-mx}\) for some \(m\).
Let \(\|\cdot\|\) be a norm on the \(\mathbb{C}\)-linear space \(\mathbb{C}^{n}\), and accordingly, let \(\|y\|\) denote the germ of \(t\mapsto\big{\|}\big{(}y_{1}(t),\ldots,y_{n}(t)\big{)}\|\), so \(\|y\|\in\mathcal{C}\).
**Corollary 5.2.46**.: \(y\preccurlyeq\|y\|\)_, \(y\preccurlyeq g\Leftrightarrow\|y\|\preccurlyeq g\), and \(\lambda(\|y\|)=\lambda(y)\)._
Proof.: Any two norms on \(\mathbb{C}^{n}\) are equivalent, so we may arrange \(\|\,\cdot\,\|=\|\,\cdot\,\|_{1}\). Then \(y\preccurlyeq g\Rightarrow\|y\|=|y_{1}|+\cdots+|y_{n}|\preccurlyeq g\). From \(|y_{j}|\leqslant\|y\|\) we get \(y_{j}\preccurlyeq\|y\|\) for \(j=1,\ldots,n\) and thus \(y\preccurlyeq\|y\|\). Thus \(\|y\|\preccurlyeq g\Rightarrow y\preccurlyeq g\); also \(\lambda(\|y\|)\leqslant\lambda(y)\). Finally, Lemma 5.2.43(i) and \(\lambda(|f|)=\lambda(f)\) yield \(\lambda(\|y\|)\geqslant\lambda(y)\).
In particular, \(\lambda(y)=\lambda(\|y\|)=\lambda(\|y\|_{2})=\lambda\big{(}(\mathrm{Re}\,y_{1}, \ldots,\mathrm{Re}\,y_{n},\mathrm{Im}\,y_{1},\ldots,\mathrm{Im}\,y_{n})\big{)}\).
**Remarks on matrix differential equations**\((^{*})\).: In this subsection \(N\) is an \(n\times n\) matrix with entries in \(\mathcal{C}[\mathrm{i}]\), \(n\geqslant 1\). We consider tuples \(y\in\mathcal{C}^{1}[\mathrm{i}]^{n}\) as column vectors \(y=(y_{1},\ldots,y_{n})^{\mathrm{t}}\) with entries \(y_{j}\) in \(\mathcal{C}^{1}[\mathrm{i}]\). Later in this subsection and in Section 7.4 we shall tacitly use the following:
1. The \(\mathbb{C}\)-linear space of \(y\in\mathcal{C}^{1}[\mathrm{i}]^{n}\) such that \(y^{\prime}=Ny\) has dimension \(n\).
2. If all entries of \(N\) are in \(\mathcal{C}^{<\infty}[\mathrm{i}]\) and \(y\in\mathcal{C}^{1}[\mathrm{i}]^{n}\), \(y^{\prime}=Ny\), then \(y\in\mathcal{C}^{<\infty}[\mathrm{i}]^{n}\).
Classical existence and uniqueness results on matrix linear differential equations give (1), and induction on the degree of smoothness of \(y\) yields (2).
Call a matrix \((f_{ij})\) over \(\mathcal{C}[\mathrm{i}]\) **bounded** if \(f_{ij}\preccurlyeq 1\) for all \(i\), \(j\). Similarly with \(\mathcal{C}_{a}[\mathrm{i}]\) (\(a\in\mathbb{R}\)) in place of \(\mathcal{C}[\mathrm{i}]\). In the proof of Corollary 7.4.28 we shall use the following (cf. [45, §3.13], [111, §A.3.11]):
**Lemma 5.2.47** (Lyapunov [135], Perron [152]).: _Suppose \(N\) is bounded. If \(y\in\mathcal{C}^{1}[\mathrm{i}]^{n}\), \(y^{\prime}=Ny\), and \(y\neq 0\), then \(\lambda(y)\in\mathbb{R}\)._
Proof.: We have \(N=A+B\mathrm{i}\) where \(A\), \(B\) are \(n\times n\) matrices over \(\mathcal{C}\). Consider the bounded \(2n\times 2n\) matrix \(M:=\bigl{(}\begin{smallmatrix}A&-B\\ B&A\end{smallmatrix}\bigr{)}\) over \(\mathcal{C}\). For \(y=(y_{1},\ldots,y_{n})^{\mathrm{t}}\in\mathcal{C}^{1}[\mathrm{i}]^{n}\), set \(v:=(\mathrm{Re}\,y_{1},\ldots,\mathrm{Re}\,y_{n},\mathrm{Im}\,y_{1},\ldots, \mathrm{Im}\,y_{n})^{\mathrm{t}}\in(\mathcal{C}^{1})^{2n}\); then \(y^{\prime}=Ny\) iff \(v^{\prime}=Mv\). Now assume \(y^{\prime}=Ny\) and \(y\neq 0\). Then by the remark after Corollary 5.2.46 we may replace \(N\), \(n\), \(y\) by \(M\), \(2n\), \(v\) to arrange that the entries of \(N\) are in \(\mathcal{C}\) and \(y\in(\mathcal{C}^{1})^{n}\).
Let \(\lambda,\mu\in\mathbb{R}\) and consider \(z:=\mathrm{e}^{-\lambda x}\,y\). Then \(z^{\prime}=(N-\lambda I_{n})z\) where \(I_{n}\) is the \(n\times n\) identity matrix over \(\mathcal{C}\), and thus
\[\langle z,z\rangle^{\prime}\ =\ 2\langle z,z^{\prime}\rangle\ =\ 2\bigl{\langle}z, \bigl{(}N-(\lambda-\tfrac{1}{2})I_{n}\bigr{)}z\bigr{\rangle}-\langle z,z\rangle.\]
The lemma below gives \(\lambda\) such that \(\bigl{\langle}z,\bigl{(}N-(\lambda-\tfrac{1}{2})I_{n}\bigr{)}z\bigr{\rangle}\leqslant 0\), so \(\langle z,z\rangle\in\mathcal{C}^{1}\) and \(\langle z,z\rangle^{\prime}\leqslant 0\), and thus \(\langle z,z\rangle\preccurlyeq 1\). Corollary 5.2.46 yields \(z\preccurlyeq 1\), so \(y\preccurlyeq\mathrm{e}^{\lambda x}\). Likewise, set \(w:=\mathrm{e}^{\mu x}\,y\); then \(w^{\prime}=(N+\mu I_{n})w\), and apply Lemma 5.2.48 to a representative \(F\) of \(-N\) to get \(\mu\) with \(\langle w,w\rangle^{\prime}\geqslant\langle w,w\rangle\), so \(\langle w,w\rangle\succcurlyeq\mathrm{e}^{x}\) by Lemma 5.2.17, hence \(w\not\preccurlyeq 1\), and thus \(y\not\preccurlyeq\mathrm{e}^{-\mu x}\). So \(\lambda(y)\in\mathbb{R}\).
In the next lemma \(F=(f_{ij})\) is an \(n\times n\) matrix over \(\mathcal{C}_{a}\), \(a\in\mathbb{R}\). For \(t\in\mathbb{R}^{\geqslant a}\) this yields the \(n\times n\) matrix \(F(t):=\bigl{(}f_{ij}(t)\bigr{)}\) over \(\mathbb{R}\). Let \(I_{n}\) also be the \(n\times n\) identity matrix over \(\mathbb{R}\).
**Lemma 5.2.48**.: _Suppose \(F\) is bounded. Then there exists \(\mu\in\mathbb{R}^{>}\) such that for all real \(\lambda\geqslant\mu\), \(t\geqslant a\), and \(z\in\mathbb{R}^{n}\): \(\bigl{\langle}z,\bigl{(}F(t)-\lambda I_{n}\bigr{)}z\bigr{\rangle}\leqslant 0\)._
Proof.: Put \(G:=\tfrac{1}{2}(F+F^{\mathrm{t}})\), a symmetric bounded \(n\times n\) matrix over \(\mathcal{C}_{a}\) such that \(\bigl{\langle}z,F(t)z\bigr{\rangle}=\bigl{\langle}z,G(t)z\bigr{\rangle}\) for \(t\geqslant a\) and \(z\in\mathbb{R}^{n}\), and replace \(F\) by \(G\) to arrange that \(F\) is symmetric. Let
\[P(Y)\ :=\ \det(YI_{n}-F)\ =\ Y^{n}+P_{1}Y^{n-1}+\cdots+P_{n}\in\mathcal{C}_{a}[Y] \qquad(P_{1},\ldots,P_{n}\in\mathcal{C}_{a}),\]
and for \(t\in\mathbb{R}^{\geqslant a}\) put
\[P(t,Y)\ :=\ Y^{n}+P_{1}(t)Y^{n-1}+\cdots+P_{n}(t)\in\mathbb{R}[Y],\]
so for each \(\lambda\in\mathbb{R}\), \(P(t,Y+\lambda)\) is the characteristic polynomial of the symmetric \(n\times n\) matrix \(F(t)-\lambda I_{n}\) over \(\mathbb{R}\). Now \(P_{1},\ldots,P_{n}\preccurlyeq 1\) since \(F\) is bounded, so [ADH, 3.5.2] yields \(\mu\in\mathbb{R}^{>}\) such that for all \(t\in\mathbb{R}^{\geqslant a}\), all zeros of \(P(t,Y)\) in \(\mathbb{R}\) are in \([-\mu,\mu]\). Let \(\lambda\geqslant\mu\). Then for \(t\geqslant a\), no real zero of \(P(t,Y+\lambda)\), and thus no eigenvalue of \(F(t)-\lambda I_{n}\), is positive. Hence \(\bigl{\langle}z,(F(t)-\lambda I_{n})z\bigr{\rangle}\leqslant 0\) for all \(z\in\mathbb{R}^{n}\).
Let \(V:=\bigl{\{}y\in\mathcal{C}^{1}[\mathrm{i}]^{n}:y^{\prime}=Ny\bigr{\}}\), an \(n\)-dimensional \(\mathbb{C}\)-linear subspace of \(\mathcal{C}^{1}[\mathrm{i}]\). Suppose \(N\) is bounded. Then \(S:=\lambda(V^{\neq})\subseteq\mathbb{R}\) by Lemma 5.2.47, and \(S\), called the **Lyapunov spectrum** of \(y^{\prime}=Ny\), has at most \(n\) elements by Corollary 5.2.45. According to [ADH, 2.3] the surjective map
\[y\mapsto\lambda(y)\colon V\to S_{\infty}:=S\cup\{\infty\}\]
makes \(V\) a valued vector space over \(\mathbb{C}\). Thus by [ADH, remark before 2.3.10]:
**Corollary 5.2.49** (Lyapunov [135]).: _If \(N\) is bounded, then \(V\) has a basis \(y_{1},\ldots,y_{n}\) such that for all \(c_{1},\ldots,c_{n}\in\mathbb{C}\), not all zero, and \(y=c_{1}y_{1}+\cdots+c_{n}y_{n}\),_
\[\lambda(y)\ =\ \min\bigl{\{}\lambda(y_{j}):c_{j}\neq 0\bigr{\}}.\]
Whether or not \(N\) is bounded, a **Lyapunov fundamental system of solutions** of \(y^{\prime}=Ny\) is a basis \(y_{1},\ldots,y_{n}\) of \(V\) as in the corollary above. (In [45, §3.14] this is called a _normal_ fundamental system of solutions of \(y^{\prime}=Ny\).) A **Lyapunov fundamental matrix** for \(y^{\prime}=Ny\) is an \(n\times n\) matrix with entries in \(\mathcal{C}^{1}[\mathrm{i}]\) whose columns form a Lyapunov fundamental system of solutions of \(y^{\prime}=Ny\).
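To illustrate these notions in the simplest (constant, diagonal) case:

_Example_.: Let \(n=2\) and \(N=\bigl(\begin{smallmatrix}-1&0\\ 0&-2\end{smallmatrix}\bigr)\), a bounded matrix. Then \(V=\big\{(c_{1}\mathrm{e}^{-x},c_{2}\mathrm{e}^{-2x})^{\mathrm{t}}:\ c_{1},c_{2}\in\mathbb{C}\big\}\), and for nonzero \(y\in V\) we have \(\lambda(y)=1\) if \(c_{1}\neq 0\), and \(\lambda(y)=2\) otherwise. Thus the Lyapunov spectrum is \(S=\{1,2\}\), and \((\mathrm{e}^{-x},0)^{\mathrm{t}}\), \((0,\mathrm{e}^{-2x})^{\mathrm{t}}\) is a Lyapunov fundamental system of solutions of \(y^{\prime}=Ny\).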
Lemma 5.2.47 also gives:
**Corollary 5.2.50**.: _Let \(f_{1},\ldots,f_{n}\in\mathcal{C}[i]\) be such that \(f_{1},\ldots,f_{n}\preccurlyeq 1\). Then there are \(\lambda_{1},\ldots,\lambda_{m}\in\mathbb{R}\)\((1\leqslant m\leqslant n)\) such that for all \(y\in\mathcal{C}^{n}[i]^{\neq}\) such that_
\[y^{(n)}+f_{1}y^{(n-1)}+\cdots+f_{n}y\ =\ 0\]
_we have \(\lambda(y,y^{\prime},\ldots,y^{(n-1)})\in\{\lambda_{1},\ldots,\lambda_{m}\}\)._
### Hardy Fields
Here we introduce Hardy fields and review some classical extension theorems for Hardy fields.
#### Hardy fields
A _Hardy field_ is a subfield of \(\mathcal{C}^{<\infty}\) that is closed under the derivation of \(\mathcal{C}^{<\infty}\); see also [ADH, 9.1]. Let \(H\) be a Hardy field. Then \(H\) is considered as an ordered valued differential field in the obvious way; see Section 5.1 for the ordering and valuation on \(H\). The field of constants of \(H\) is \(\mathbb{R}\cap H\). Hardy fields are pre-\(H\)-fields, and \(H\)-fields if they contain \(\mathbb{R}\); see [ADH, 9.1.9(i), (iii)]. As in Section 5.1 we equip the differential subfield \(H[i]\) of \(\mathcal{C}^{<\infty}[i]\) with the unique valuation ring lying over that of \(H\). Then \(H[i]\) is a pre-d-valued field of \(H\)-type with small derivation and constant field \(\mathbb{C}\cap H[i]\); if \(H\supseteq\mathbb{R}\), then \(H[i]\) is d-valued with constant field \(\mathbb{C}\).
We also consider variants: a \(\mathcal{C}^{\infty}\)_-Hardy field_ is a Hardy field \(H\subseteq\mathcal{C}^{\infty}\), and a \(\mathcal{C}^{\omega}\)_-Hardy field_ (also called an _analytic Hardy field_) is a Hardy field \(H\subseteq\mathcal{C}^{\omega}\). Most Hardy fields arising in practice are actually \(\mathcal{C}^{\omega}\)-Hardy fields. Boshernitzan [32] (with details worked out in [77]) first suggested a Hardy field \(H\not\subseteq\mathcal{C}^{\infty}\), and [79, Theorem 1] shows that each Hardy field with a largest comparability class extends to a Hardy field \(H\not\subseteq\mathcal{C}^{\infty}\). Rolin, Speissegger, Wilkie [166] construct o-minimal expansions \(\widetilde{\mathbb{R}}\) of the ordered field of real numbers such that \(H\subseteq\mathcal{C}^{\infty}\) and \(H\not\subseteq\mathcal{C}^{\omega}\) for the Hardy field \(H\) consisting of the germs of functions \(\mathbb{R}\to\mathbb{R}\) that are definable in \(\widetilde{\mathbb{R}}\). Le Gal and Rolin [124] construct such expansions such that \(H\not\subseteq\mathcal{C}^{\infty}\) for the corresponding Hardy field \(H\).
#### Hardian germs
Let \(y\in\mathcal{G}\). Following [190] we call \(y\)**hardian** if it lies in a Hardy field (and thus \(y\in\mathcal{C}^{<\infty}\)). We also say that \(y\) is \(\mathcal{C}^{\infty}\)**-hardian** if \(y\) lies in a \(\mathcal{C}^{\infty}\)-Hardy field, equivalently, \(y\in\mathcal{C}^{\infty}\) and \(y\) is hardian; likewise with \(\mathcal{C}^{\omega}\) in place of \(\mathcal{C}^{\infty}\). Let \(H\) be a Hardy field. Call \(y\in\mathcal{G}\)\(H\)**-hardian** (or **hardian over \(H\)**) if \(y\) lies in a Hardy field extension of \(H\). (Thus \(y\) is hardian iff \(y\) is \(\mathbb{Q}\)-hardian.) If \(H\) is a \(\mathcal{C}^{\infty}\)-Hardy field and \(y\in\mathcal{C}^{\infty}\) is hardian over \(H\), then \(y\) generates a \(\mathcal{C}^{\infty}\)-Hardy field extension \(H\langle y\rangle\) of \(H\); likewise with \(\mathcal{C}^{\omega}\) in place of \(\mathcal{C}^{\infty}\).
**Maximal and perfect Hardy fields.** Let \(H\) be a Hardy field. Call \(H\) _maximal_ if no Hardy field properly contains \(H\). Following Boshernitzan [33] we denote by \(\operatorname{E}(H)\) the intersection of all maximal Hardy fields containing \(H\); thus \(\operatorname{E}(H)\) is a Hardy field extension of \(H\), and a maximal Hardy field contains \(H\) iff it contains \(\operatorname{E}(H)\), so \(\operatorname{E}(\operatorname{E}(H))=\operatorname{E}(H)\). If \(H^{*}\) is a Hardy field extension of \(H\), then \(\operatorname{E}(H)\subseteq\operatorname{E}(H^{*})\); hence if \(H^{*}\) is a Hardy field with \(H\subseteq H^{*}\subseteq\operatorname{E}(H)\), then \(\operatorname{E}(H^{*})=\operatorname{E}(H)\). Note that \(\operatorname{E}(H)\) consists of the \(f\in\mathcal{G}\) that are hardian over each Hardy field \(E\supseteq H\). Hence \(\operatorname{E}(\mathbb{Q})\) consists of the germs in \(\mathcal{G}\) that are hardian over each Hardy field. As in [33] we also say that \(H\) is **perfect** if \(\operatorname{E}(H)=H\). (This terminology is slightly unfortunate, since Hardy fields, being of characteristic zero, are perfect as fields.) Thus \(\operatorname{E}(H)\) is the smallest perfect Hardy field extension of \(H\). Maximal Hardy fields are perfect.
**Differentially maximal Hardy fields.** Let \(H\) be a Hardy field. We now define differentially-algebraic variants of the above: call \(H\)**differentially maximal**, or \(\operatorname{d}\)**-maximal** for short, if \(H\) has no proper \(\operatorname{d}\)-algebraic Hardy field extension. Every maximal Hardy field is \(\operatorname{d}\)-maximal, so each Hardy field is contained in a \(\operatorname{d}\)-maximal one; in fact, by Zorn, each Hardy field \(H\) has a \(\operatorname{d}\)-maximal Hardy field extension which is \(\operatorname{d}\)-algebraic over \(H\). Let \(\operatorname{D}(H)\) be the intersection of all \(\operatorname{d}\)-maximal Hardy fields containing \(H\). Then \(\operatorname{D}(H)\) is a \(\operatorname{d}\)-algebraic Hardy field extension of \(H\) with \(\operatorname{D}(H)\subseteq\operatorname{E}(H)\). By the next lemma, \(\operatorname{D}(H)=\operatorname{E}(H)\) iff \(\operatorname{E}(H)\) is \(\operatorname{d}\)-algebraic over \(H\):
**Lemma 5.3.1**.: \(\operatorname{D}(H)=\big{\{}f\in\operatorname{E}(H):f\text{ is $\operatorname{d}$-algebraic over $H$}\big{\}}\)_._
Proof.: We only need to show the inclusion "\(\supseteq\)". For this let \(f\in\operatorname{E}(H)\) be \(\operatorname{d}\)-algebraic over \(H\), and let \(E\) be a \(\operatorname{d}\)-maximal Hardy field extension of \(H\); we need to show \(f\in E\). To see this extend \(E\) to a maximal Hardy field \(M\); then \(f\in M\), hence \(f\) generates a Hardy field extension \(E\langle f\rangle\) of \(E\). Since \(f\) is \(\operatorname{d}\)-algebraic over \(H\) and thus over \(E\), this yields \(f\in E\) by \(\operatorname{d}\)-maximality of \(E\), as required.
A \(\operatorname{d}\)-maximal Hardy field contains \(H\) iff it contains \(\operatorname{D}(H)\), hence \(\operatorname{D}(\operatorname{D}(H))=\operatorname{D}(H)\). If \(H^{*}\) is a Hardy field extension of \(H\), then \(\operatorname{D}(H)\subseteq\operatorname{D}(H^{*})\); hence for each Hardy field \(H^{*}\) with \(H\subseteq H^{*}\subseteq\operatorname{D}(H)\) we have \(\operatorname{D}(H^{*})=\operatorname{D}(H)\). We say that \(H\) is \(\operatorname{d}\)**-perfect** if \(\operatorname{D}(H)=H\). Thus \(\operatorname{D}(H)\) is the smallest \(\operatorname{d}\)-perfect Hardy field extension of \(H\). Every perfect Hardy field is \(\operatorname{d}\)-perfect, as is every \(\operatorname{d}\)-maximal Hardy field. The following diagram summarizes the various implications among these properties of Hardy fields:
\[\begin{array}{ccc}\text{maximal}&\Longrightarrow&\text{perfect}\\ \Big\Downarrow&&\Big\Downarrow\\ \text{d-maximal}&\Longrightarrow&\text{d-perfect}\end{array}\]
We call \(\operatorname{D}(H)\) the \(\operatorname{d}\)**-perfect hull** of \(H\), and \(\operatorname{E}(H)\) the **perfect hull** of \(H\). It seems that the following question asked by Boshernitzan [33, p. 144] is still open:
_Question_.: Is \(\operatorname{E}(H)\) \(\operatorname{d}\)-algebraic over \(H\), in other words, is \(\operatorname{D}(H)=\operatorname{E}(H)\)?
Boshernitzan gave support for a positive answer: Lemma 5.4.1, Corollary 5.4.15, and Theorem 5.4.20 below. Our Theorems 5.6.11 and 7.5.32 (in combination with Theorem 1.4.1) can be seen as further support.
**Variants of the perfect hull.** Let \(H\) be a \(\mathcal{C}^{r}\)-Hardy field where \(r\in\{\infty,\omega\}\). We say that \(H\) is \(\mathcal{C}^{r}\)-**maximal** if no \(\mathcal{C}^{r}\)-Hardy field properly contains it. By Zorn, \(H\) has a \(\mathcal{C}^{r}\)-maximal extension. In analogy with \(\operatorname{E}(H)\), define the \(\mathcal{C}^{r}\)**-perfect hull**\(\operatorname{E}^{r}(H)\) of \(H\) to be the intersection of all \(\mathcal{C}^{r}\)-maximal Hardy fields containing \(H\). We say that \(H\) is \(\mathcal{C}^{r}\)**-perfect** if \(\operatorname{E}^{r}(H)=H\). The penultimate subsection goes through with _Hardy field_, _maximal_, _hardian_, \(\operatorname{E}(\,\cdot\,)\), and _perfect_ replaced by \(\mathcal{C}^{r}\)_-Hardy field_, \(\mathcal{C}^{r}\)_-maximal_, \(\mathcal{C}^{r}\)_-hardian_, \(\operatorname{E}^{r}(\,\cdot\,)\), and \(\mathcal{C}^{r}\)_-perfect_, respectively. (Corollary 7.2.13 shows that no analogue of \(\operatorname{D}(H)\) is needed for the \(\mathcal{C}^{r}\)-category.)
**Some basic extension theorems.** We summarize some well-known extension results for Hardy fields:
**Proposition 5.3.2**.: _Any Hardy field \(H\) has the following Hardy field extensions:_
* \(H(\mathbb{R})\)_, the subfield of_ \(\mathcal{C}^{<\infty}\) _generated by_ \(H\) _and_ \(\mathbb{R}\)_;_
* \(H^{\operatorname{rc}}\)_, the real closure of_ \(H\) _as defined in Proposition_ 5.1.4_;_
* \(H(\operatorname{e}^{f})\) _for any_ \(f\in H\)_;_
* \(H(f)\) _for any_ \(f\in\mathcal{C}^{1}\) _with_ \(f^{\prime}\in H\)_;_
* \(H(\log f)\) _for any_ \(f\in H^{>}\)_._
_If \(H\) is contained in \(\mathcal{C}^{\infty}\), then so are the Hardy fields in (i), (ii), (iii), (iv), (v); likewise with \(\mathcal{C}^{\omega}\) instead of \(\mathcal{C}^{\infty}\)._
Note that (v) is a special case of (iv), since \((\log f)^{\prime}=f^{\dagger}\in H\) for \(f\in H^{>}\). Another special case of (iv) is that \(H(x)\) is a Hardy field. A consequence of the proposition is that any Hardy field \(H\) has a smallest real closed Hardy field extension \(L\) with \(\mathbb{R}\subseteq L\) such that for all \(f\in L\) we have \(\operatorname{e}^{f}\in L\) and \(g^{\prime}=f\) for some \(g\in L\). Note that then \(L\) is a Liouville closed \(H\)-field as defined in [ADH, 10.6]. Let \(H\) be a Hardy field with \(H\supseteq\mathbb{R}\). As in [6] and [ADH, p. 460] we then denote the above \(L\) by \(\operatorname{Li}(H)\); so \(\operatorname{Li}(H)\) is the smallest Liouville closed Hardy field containing \(H\), called the _Hardy-Liouville closure_ of \(H\) in [12]. We have \(\operatorname{Li}(H)\subseteq\operatorname{D}(H)\), hence if \(H\) is d-perfect, then \(H\) is a Liouville closed \(H\)-field. Moreover, if \(H\subseteq\mathcal{C}^{\infty}\) then \(\operatorname{Li}(H)\subseteq\mathcal{C}^{\infty}\), and similarly with \(\mathcal{C}^{\omega}\) in place of \(\mathcal{C}^{\infty}\).
The next more general result in Rosenlicht [171] is attributed there to M. Singer:
**Proposition 5.3.3**.: _Let \(H\) be a Hardy field and \(p(Y),q(Y)\in H[Y]\), \(y\in\mathcal{C}^{1}\), such that \(y^{\prime}q(y)=p(y)\) with \(q(y)\in\mathcal{C}^{\times}\). Then \(y\) generates a Hardy field \(H(y)\) over \(H\)._
Note that for \(H\), \(p\), \(q\), \(y\) as in the proposition we have \(y\in\operatorname{D}(H)\).
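A concrete case of Proposition 5.3.3, with \(p\), \(q\) over \(\mathbb{Q}\) and thus applicable over every Hardy field \(H\):

_Example_.: The germ \(y=(1+\mathrm{e}^{-x})^{-1}\in\mathcal{C}^{\omega}\) satisfies \(y^{\prime}=y(1-y)\); so with \(p(Y)=Y-Y^{2}\) and \(q(Y)=1\), Proposition 5.3.3 shows that \(y\) generates a Hardy field \(H(y)\) over every Hardy field \(H\). In particular, \(y\in\operatorname{D}(\mathbb{Q})\).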
**Compositional conjugation of differentiable germs.** Let \(\ell\in\mathcal{C}^{1}\), \(\ell^{\prime}(t)>0\) eventually (so \(\ell\) is eventually strictly increasing) and \(\ell(t)\to+\infty\) as \(t\to+\infty\). Then \(\phi:=\ell^{\prime}\in\mathcal{C}^{\times}\), and the compositional inverse \(\ell^{\operatorname{inv}}\in\mathcal{C}^{1}\) of \(\ell\) satisfies
\[\ell^{\operatorname{inv}}>\mathbb{R},\qquad(\ell^{\operatorname{inv}})^{\prime }\ =\ (1/\phi)\circ\ell^{\operatorname{inv}}\in\mathcal{C}.\]
The \(\mathbb{C}\)-algebra automorphism \(f\mapsto f^{\circ}:=f\circ\ell^{\operatorname{inv}}\) of \(\mathcal{C}[i]\) (with inverse \(g\mapsto g\circ\ell\)) maps \(\mathcal{C}^{1}[i]\) onto itself and satisfies for \(f\in\mathcal{C}^{1}[i]\) a useful identity:
\[(f^{\circ})^{\prime}\ =\ (f\circ\ell^{\operatorname{inv}})^{\prime}\ =\ (f^{\prime}\circ\ell^{ \operatorname{inv}})\cdot(\ell^{\operatorname{inv}})^{\prime}\ =\ (f^{\prime}/\ell^{\prime})\circ\ell^{ \operatorname{inv}}\ =\ (\phi^{-1}f^{\prime})^{\circ}.\]
Hence if \(n\geqslant 1\) and \(\ell\in\mathcal{C}^{n}\), then \(\ell^{\operatorname{inv}}\in\mathcal{C}^{n}\) and \(f\mapsto f^{\circ}\) maps \(\mathcal{C}^{n}[i]\) and \(\mathcal{C}^{n}\) onto themselves, for each \(n\). Therefore, if \(\ell\in\mathcal{C}^{<\infty}\), then \(\ell^{\operatorname{inv}}\in\mathcal{C}^{<\infty}\) and \(f\mapsto f^{\circ}\) maps \(\mathcal{C}^{<\infty}[i]\) and \(\mathcal{C}^{<\infty}\) onto themselves; likewise with \(\mathcal{C}^{\infty}\) or \(\mathcal{C}^{\omega}\) in place of \(\mathcal{C}^{<\infty}\). _In the rest of this subsection we assume \(\ell\in\mathcal{C}^{<\infty}\)._ Denote the differential ring \(\mathcal{C}^{<\infty}[i]\)
by \(R\), and as usual let \(R^{\phi}\) be \(R\) with its derivation \(f\mapsto\partial(f)=f^{\prime}\) replaced by the derivation \(f\mapsto\delta(f)=\phi^{-1}f^{\prime}\) [ADH, 5.7]. Then \(f\mapsto f^{\circ}\colon R^{\phi}\to R\) is an isomorphism of differential rings by the identity above. We extend it to the isomorphism
\[Q\mapsto Q^{\circ}\ \colon\ R^{\phi}\{Y\}\to R\{Y\}\]
of differential rings given by \(Y^{\circ}=Y\). Let \(y\in R\). Then
\[Q(y)^{\circ}\ =\ Q^{\circ}(y^{\circ})\qquad\text{for $Q\in R^{\phi}\{Y\}$}\]
and thus
\[P(y)^{\circ}\ =\ P^{\phi}(y)^{\circ}\ =\ (P^{\phi})^{\circ}(y^{\circ})\qquad \text{for $P\in R\{Y\}$}.\]
This leads to a useful generalization of the identity for \((f^{\circ})^{\prime}\) above. For this, let \(n\geqslant 1\) and let \(G^{n}_{k}\in\mathbb{Q}\{X\}\) (\(1\leqslant k\leqslant n\)) be the differential polynomial introduced in [ADH, 5.7]; so \(G^{n}_{k}\) is homogeneous of degree \(n\) and isobaric of weight \(n-k\). Viewing the \(G^{n}_{k}\) as elements of \(R\{X\}\) and \(\delta=\phi^{-1}\partial\) as an element of \(R[\partial]\) we have
\[\delta^{n}\ =\ G^{n}_{n}(\phi^{-1})\partial^{n}+\cdots+G^{n}_{1}(\phi^{-1}) \partial\qquad\text{in the ring $R[\partial]$}.\]
Thus
\[\delta^{2}\ =\ \phi^{-2}\partial^{2}-\phi^{\prime}\phi^{-3}\partial,\qquad \delta^{3}\ =\ \phi^{-3}\partial^{3}-3\phi^{\prime}\phi^{-4}\partial^{2}+\big{(}3(\phi^{\prime })^{2}-\phi\phi^{\prime\prime}\big{)}\phi^{-5}\partial,\quad\ldots\]
Set \(\lambda:=-\phi^{\dagger}\), and let
\[R^{n}_{k}\ :=\ \text{Ri}(G^{n}_{k})\in\mathbb{Q}\{Z\},\quad\text{ so }\quad G^{n}_{k}(\phi^{-1})=\phi^{-n}R^{n}_{k}(\lambda)\qquad(0\leqslant k \leqslant n).\]
Thus
\[\delta^{n}\ =\ \phi^{-n}\big{(}R^{n}_{n}(\lambda)\partial^{n}+\cdots+R^{n}_{1 }(\lambda)\partial\big{)}.\]
For instance,
\[\delta^{3} =\ \phi^{-3}\big{(}R^{3}_{3}(\lambda)\partial^{3}+R^{3}_{2}( \lambda)\partial^{2}+R^{3}_{1}(\lambda)\partial\big{)}\] \[=\ \phi^{-3}\big{(}\partial^{3}+3\lambda\partial^{2}+\big{(}2 \lambda^{2}+\lambda^{\prime}\big{)}\partial\big{)}.\]
We now have:
**Lemma 5.3.4**.: _Let \(f\in R\) and \(n\geqslant 1\). Then_
\[(f^{\circ})^{(n)}\ =\ \left(\phi^{-n}\big{(}R^{n}_{n}(\lambda)f^{(n)}+\cdots+R^{n }_{1}(\lambda)f^{\prime}\big{)}\right)^{\circ}.\]
Proof.: Let \(Q=Y^{(n)}\in R^{\phi}\{Y\}\), so \(Q^{\circ}=Y^{(n)}\in R\{Y\}\). Then \((f^{\circ})^{(n)}=Q^{\circ}(f^{\circ})=Q(f)^{\circ}=\delta^{n}(f)^{\circ}\). Now use the above identity for \(\delta^{n}\).
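For instance, for \(n=2\) the lemma reads \((f^{\circ})^{\prime\prime}=\big(\phi^{-2}(f^{\prime\prime}+\lambda f^{\prime})\big)^{\circ}\), since \(R_{2}^{2}(\lambda)=1\) and \(R_{1}^{2}(\lambda)=\lambda\); this also follows from applying the identity \((f^{\circ})^{\prime}=(\phi^{-1}f^{\prime})^{\circ}\) twice.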
Note also: \((Q_{+f})^{\circ}=(Q^{\circ})_{+f^{\circ}}\) and \((Q_{\times f})^{\circ}=(Q^{\circ})_{\times f^{\circ}}\) for \(Q\in R^{\phi}\{Y\}\), \(f\in R\).
### Compositional conjugation in Hardy fields
Let now \(H\) be a Hardy field, and let \(\ell\in\mathcal{C}^{1}\) be such that \(\ell>\mathbb{R}\) and \(\ell^{\prime}\in H\). Then \(\ell\in\mathcal{C}^{<\infty}\), \(\phi:=\ell^{\prime}\) is active in \(H\), \(\phi>0\), and we have a Hardy field \(H(\ell)\). The \(\mathbb{C}\)-algebra automorphism \(f\mapsto f^{\circ}:=f\circ\ell^{\text{inv}}\) of \(\mathcal{C}[i]\) restricts to an ordered field isomorphism
\[h\mapsto h^{\circ}\ :\ H\to H^{\circ}:=H\circ\ell^{\text{inv}}.\]
The identity \((f^{\circ})^{\prime}=(\phi^{-1}f^{\prime})^{\circ}\), valid for each \(f\in\mathcal{C}^{1}[i]\), shows that \(H^{\circ}\) is again a Hardy field. Conversely, if \(E\) is a subfield of \(\mathcal{C}^{<\infty}\) with \(\phi\in E\) and \(E^{\circ}:=E\circ\ell^{\text{inv}}\) is a Hardy field, then \(E\) is a Hardy field. If \(H\subseteq\mathcal{C}^{\infty}\) and \(\ell\in\mathcal{C}^{\infty}\), then \(H^{\circ}\subseteq\mathcal{C}^{\infty}\); likewise with \(\mathcal{C}^{\omega}\) instead of \(\mathcal{C}^{\infty}\). If \(E\) is a Hardy field extension of \(H\), then \(E^{\circ}\) is a Hardy field extension of \(H^{\circ}\), and \(E\) is d-algebraic over \(H\) iff \(E^{\circ}\) is d-algebraic over \(H^{\circ}\). Hence \(H\) is maximal iff \(H^{\circ}\) is maximal, and likewise with "d-maximal" in place of "maximal". So \(\text{E}(H^{\circ})=\text{E}(H)^{\circ}\) and \(\text{D}(H^{\circ})=\text{D}(H)^{\circ}\), and thus \(H\) is
perfect iff \(H^{\circ}\) is perfect, and likewise with "\(\mathrm{d}\)-perfect" in place of "perfect". The next lemma is [32, Corollary 6.5]; see also [8, Theorem 1.7].
**Lemma 5.3.5**.: _The germ \(\ell^{\mathrm{inv}}\) is hardian. Moreover, if \(\ell\) is \(\mathcal{C}^{\infty}\)-hardian, then \(\ell^{\mathrm{inv}}\) is also \(\mathcal{C}^{\infty}\)-hardian, and likewise with \(\mathcal{C}^{\omega}\) in place of \(\mathcal{C}^{\infty}\)._
Proof.: By Proposition 5.3.2(iv) we can arrange that our Hardy field \(H\) contains both \(\ell\) and \(x\). Then \(\ell^{\mathrm{inv}}=x\circ\ell^{\mathrm{inv}}\) is an element of the Hardy field \(H\circ\ell^{\mathrm{inv}}\).
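To illustrate:

_Example_.: Taking \(\ell=\mathrm{e}^{x}\), which lies in the \(\mathcal{C}^{\omega}\)-Hardy field \(\mathbb{R}(x,\mathrm{e}^{x})\) by Proposition 5.3.2(iii), the lemma shows that \(\ell^{\mathrm{inv}}=\log\) is \(\mathcal{C}^{\omega}\)-hardian; iterating, so are \(\log\log\), \(\log\log\log,\ldots\).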
Next we consider the pre-\(\mathrm{d}\)-valued field \(K:=H[i]\) of \(H\)-type, which gives rise to
\[K^{\circ}:=K\circ\ell^{\mathrm{inv}}=H^{\circ}[i],\]
also a pre-\(\mathrm{d}\)-valued field of \(H\)-type, and we have the valued field isomorphism
\[h\mapsto h^{\circ}\ :\ K\to K^{\circ}.\]
Note: \(h\mapsto h^{\circ}\colon H^{\phi}\to H^{\circ}\) is an isomorphism of pre-\(H\)-fields, and \(h\mapsto h^{\circ}\colon K^{\phi}\to K^{\circ}\) is an isomorphism of valued differential fields. Recall that \(K\) and \(K^{\phi}\) have the same underlying field. For \(f,g\in K\) we have
\[f\preccurlyeq_{\phi}^{\flat}g\ (\mathrm{in}\ K)\quad\Longleftrightarrow\quad f \preccurlyeq^{\flat}g\ (\mathrm{in}\ K^{\phi})\quad\Longleftrightarrow\quad f^{\circ}\preccurlyeq^{ \flat}g^{\circ}\ (\mathrm{in}\ K^{\circ}),\]
and likewise with \(\preccurlyeq_{\phi}^{\flat}\), \(\preccurlyeq^{\flat}\) replaced by \(\prec_{\phi}^{\flat}\), \(\prec^{\flat}\).
**Lemma 5.3.6**.: _From the isomorphisms \(H^{\phi}\cong H^{\circ}\) and \(K^{\phi}\cong K^{\circ}\) we obtain: If \(H\) is Liouville closed, then so is \(H^{\circ}\). If \(\mathrm{I}(K)\subseteq K^{\dagger}\), then \(\mathrm{I}(K^{\circ})\subseteq(K^{\circ})^{\dagger}\)._
So far we focused on pre-composition with \(\ell^{\mathrm{inv}}\). As to pre-composition with \(\ell\), it seems not to be known whether \(H\circ\ell\subseteq H\) whenever \(H\) is maximal. However, we have the following (cf. [33, Lemma 11.6(7)]):
**Lemma 5.3.7**.: \(\mathrm{E}(\mathbb{Q})\circ\ell\subseteq\mathrm{E}(H)\)_._
Proof.: \(\mathrm{E}(H^{\circ})=\mathrm{E}(H)^{\circ}\) gives \(\mathrm{E}(H^{\circ})\circ\ell=\mathrm{E}(H)\). Now use \(\mathrm{E}(\mathbb{Q})\subseteq\mathrm{E}(H^{\circ})\).
Lemma 5.3.7 gives \(\mathrm{E}(\mathbb{Q})\circ\mathrm{E}(\mathbb{Q})^{>\mathbb{R}}\subseteq \mathrm{E}(\mathbb{Q})\); cf. [32, Theorem 6.8]. Boshernitzan's conjecture [32, §10, Conjecture 3] that \(\mathrm{E}(\mathbb{Q})^{>\mathbb{R}}\) is also closed under compositional inversion seems to be still open.
### Differential algebraicity of compositional inverses \((^{*})\)
In the next lemma we let \(\ell\in\mathcal{C}^{<\infty}\) be hardian with \(\ell>\mathbb{R}\). The argument in the proof of Lemma 5.3.5 shows that \(\ell\) and \(\ell^{\mathrm{inv}}\) are both \(\mathbb{R}(x)\)-hardian; moreover (cf. [33, Lemma 14.10]):
**Lemma 5.3.8**.: _We have_
\[\mathrm{trdeg}\big{(}\mathbb{R}\langle x,\ell^{\mathrm{inv}}\rangle|\mathbb{R }\big{)}\ =\ \mathrm{trdeg}\big{(}\mathbb{R}\langle x,\ell\rangle|\mathbb{R}\big{)}, \tag{5.3.1}\]
_hence if \(\ell\) is \(\mathrm{d}\)-algebraic over \(\mathbb{R}\), then so is \(\ell^{\mathrm{inv}}\), with_
\[\mathrm{trdeg}\big{(}\mathbb{R}\langle\ell^{\mathrm{inv}}\rangle|\mathbb{R} \big{)}\ \leqslant\ \mathrm{trdeg}\big{(}\mathbb{R}\langle\ell\rangle|\mathbb{R}\big{)}+1.\]
Proof.: Set \(H:=\mathbb{R}\langle x,\ell\rangle=\mathbb{R}(x)\langle\ell\rangle\) and \(\phi:=\ell^{\prime}\). With \(\partial\) and \(\delta=\phi^{-1}\partial\) denoting the derivations of \(H\) and \(H^{\phi}\), we have \(\phi=1/\delta(x)\) and for all \(f\in H\) and \(n\geqslant 1\),
\[\partial^{n}(f)\in\mathbb{Q}\big{[}\delta(f),\delta^{2}(f),\dots,\phi,\delta( \phi),\delta^{2}(\phi),\dots\big{]}\]
by [ADH, remarks before 5.7.3]. The differential fields \(H\) and \(H^{\phi}\) have the same underlying field, and the former is generated as a field over \(\mathbb{R}\) by \(x\) and the \(\ell^{(n)}\), so applying the above to \(f=\ell\) shows that \(H^{\phi}\) is generated as a differential field over \(\mathbb{R}\) by \(x\) and \(\ell\). We also have a differential field isomorphism \(h\mapsto h^{\circ}\colon H^{\phi}\to H^{\circ}=H\circ\ell^{\mathrm{inv}}\). This yields \(H^{\circ}=\mathbb{R}\langle\ell^{\mathrm{inv}},x\rangle\) and (5.3.1). Suppose now that \(\ell\) is d-algebraic over \(\mathbb{R}\); then by additivity of \(\mathrm{trdeg}\),
\[\mathrm{trdeg}\big{(}\mathbb{R}\langle x,\ell\rangle|\mathbb{R}\big{)}\ =\ \mathrm{trdeg}\big{(}\mathbb{R}\langle\ell,x\rangle|\mathbb{R}\langle\ell \rangle\big{)}+\mathrm{trdeg}\big{(}\mathbb{R}\langle\ell\rangle|\mathbb{R} \big{)}\ \leqslant\ 1+\mathrm{trdeg}\big{(}\mathbb{R}\langle\ell\rangle|\mathbb{R}\big{)},\]
and so by (5.3.1):
\[\mathrm{trdeg}\big(\mathbb{R}\langle\ell^{\mathrm{inv}}\rangle|\mathbb{R}\big)\ \leqslant\ \mathrm{trdeg}\big(\mathbb{R}\langle x,\ell^{\mathrm{inv}}\rangle|\mathbb{R}\big)\ =\ \mathrm{trdeg}\big(\mathbb{R}\langle x,\ell\rangle|\mathbb{R}\big)\ \leqslant\ \mathrm{trdeg}\big(\mathbb{R}\langle\ell\rangle|\mathbb{R}\big)+1,\]
hence \(\ell^{\mathrm{inv}}\) is d-algebraic over \(\mathbb{R}\).
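The bound in Lemma 5.3.8 can be attained:

_Example_.: For \(\ell=\mathrm{e}^{x}\) we have \(\mathrm{trdeg}\big(\mathbb{R}\langle\ell\rangle|\mathbb{R}\big)=1\), while \(\ell^{\mathrm{inv}}=\log\) gives \(\mathbb{R}\langle\log\rangle=\mathbb{R}(\log,x^{-1})\), so \(\mathrm{trdeg}\big(\mathbb{R}\langle\ell^{\mathrm{inv}}\rangle|\mathbb{R}\big)=2\).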
In Corollary 5.3.12 below we prove a uniform version of Lemma 5.3.8. To prepare for this we prove the next two lemmas, where \(R\) is a differential ring and \(x\in R\), \(x^{\prime}=1\). Also, \(\phi\in R^{\times}\), and we take distinct differential indeterminates \(U\), \(X\), \(Y\) and let \(G^{n}_{k}\in\mathbb{Q}\{U\}\subseteq R^{\phi}\{U\}\) (\(k=1,\ldots,n\)) be as in [ADH, p. 292], so with \(\partial\) and \(\mathfrak{d}=\phi^{-1}\partial\) denoting the derivations of \(R\) and \(R^{\phi}\), we have in \(R^{\phi}[\mathfrak{d}]\) for \(\partial=\phi\mathfrak{d}\):
\[\partial^{n}\ =\ G^{n}_{n}(\phi)\cdot\mathfrak{d}^{n}+G^{n}_{n-1}(\phi)\cdot \mathfrak{d}^{n-1}+\cdots+G^{n}_{1}(\phi)\cdot\mathfrak{d}.\]
Recall that the \(G^{n}_{k}\) do not depend on \(R\), \(x\), \(\phi\).
**Lemma 5.3.9**.: _There are \(H^{n}_{k}\in\mathbb{Q}\{X^{\prime}\}\subseteq\mathbb{Q}\{X\}\subseteq R^{\phi }\{X\}\)\((k=1,\ldots,n)\), independent of \(R\), \(x\), \(\phi\), such that \(G^{n}_{k}(\phi)=\phi^{2n-1}H^{n}_{k}(x)\)._
Proof.: By induction on \(n\geqslant 1\). For \(n=1\) we have \(G^{1}_{1}=U\), so \(H^{1}_{1}:=1\) does the job. Suppose for a certain \(n\geqslant 1\) we have \(H^{n}_{k}\) (\(k=1,\ldots,n\)) with the desired properties, and let \(k\in\{1,\ldots,n+1\}\). Now \(G^{n+1}_{k}=U\cdot\big{(}\mathfrak{d}(G^{n}_{k})+G^{n}_{k-1}\big{)}\) by [ADH, (5.7.2)] (with \(G^{n}_{0}:=0\)), so using \(\mathfrak{d}(\phi)=-\phi^{2}\mathfrak{d}^{2}(x)\) and setting \(H^{n}_{0}:=0\),
\[G^{n+1}_{k}(\phi) =\ \phi\cdot\big{(}(2n-1)\phi^{2n-2}\mathfrak{d}(\phi)H^{n}_{k}(x )+\phi^{2n-1}\mathfrak{d}(H^{n}_{k}(x))+\phi^{2n-1}H^{n}_{k-1}(x)\big{)}\] \[=\ \phi^{2n+1}\big{(}(1-2n)\mathfrak{d}^{2}(x)H^{n}_{k}(x)+ \mathfrak{d}(x)\mathfrak{d}(H^{n}_{k}(x))+\mathfrak{d}(x)H^{n}_{k-1}(x)\big{)}.\]
Thus we can take
\[H^{n+1}_{k}\ :=\ (1-2n)X^{\prime\prime}H^{n}_{k}+X^{\prime}(H^{n}_{k})^{\prime}+ X^{\prime}H^{n}_{k-1}.\qed\]
**Lemma 5.3.10**.: _Let \(C\) be a subfield of \(C_{R}\) and \(P\in C\{X,Y\}\subseteq R\{X,Y\}\). Then there are \(N\in\mathbb{N}\) and \(Q\in C\{X,Y\}\subseteq R^{\phi}\{X,Y\}\) such that \(P(x,Y)^{\phi}=\phi^{N}Q(x,Y)\) in \(R^{\phi}\{Y\}\). Here we can take \(N\), \(Q\) independent of \(x\), \(\phi\)._
Note that \(C\{X,Y\}\) as a differential subring of \(R\{X,Y\}\) is the same as \(C\{X,Y\}\) as a differential subring of \(R^{\phi}\{X,Y\}\), but "\(P\in C\{X,Y\}\subseteq R\{X,Y\}\)" indicates that \(P\) is considered as an element of \(R\{X,Y\}\) when substituting in \(P\), while "\(Q\in C\{X,Y\}\subseteq R^{\phi}\{X,Y\}\)" indicates that \(Q\) is taken as an element of \(R^{\phi}\{X,Y\}\) when substituting in \(Q\).
Proof.: For \(i=1,2\), let \(P_{i}\in C\{X,Y\}\), \(N_{i}\in\mathbb{N}\), and \(Q_{i}\in C\{X,Y\}\subseteq R^{\phi}\{X,Y\}\) be such that \(P_{i}(x,Y)^{\phi}=\phi^{N_{i}}Q_{i}(x,Y)\). Then
\[(P_{1}\cdot P_{2})(x,Y)^{\phi}\ =\ \phi^{N_{1}+N_{2}}(Q_{1}\cdot Q_{2})(x,Y).\]
Moreover, \(\mathfrak{d}(x)=\phi^{-1}\), hence if \(N_{1}\leqslant N_{2}\), then
\[(P_{1}+P_{2})(x,Y)^{\phi}\ =\ \phi^{N_{2}}Q(x,Y)\quad\text{for }Q:=(X^{\prime})^{N_{2 }-N_{1}}Q_{1}+Q_{2}.\]
For \(P=X\) we have \(P(x,Y)^{\phi}=x=\phi\cdot x\mathfrak{d}(x)\), so \(N=1\) and \(Q=XX^{\prime}\) works. For \(P=Y\) we can take \(N=0\) and \(Q=Y\). Since substituting \(x\) for \(X\) replaces each factor \(X^{\prime}\) by \(1\) and each factor \(X^{(m)}\) with \(m\geqslant 2\) by \(0\), it is enough to prove the lemma for \(P\) such that no monomial in \(P\) has any factor \(X^{(m)}\) with \(m\geqslant 1\). Thus it only remains to do the case \(P=Y^{(n)}\) (\(n\geqslant 1\)). With \(H^{n}_{k}\) as in Lemma 5.3.9 we have
\[(Y^{(n)})^{\phi}\ =\ G^{n}_{n}(\phi)Y^{(n)}+\dots+G^{n}_{1}(\phi)Y^{\prime}\ =\ \phi^{N}Q(x,Y)\]
for \(N:=2n-1\) and \(Q:=H^{n}_{n}(X)Y^{(n)}+\dots+H^{n}_{1}(X)Y^{\prime}\).
In the next lemma \(x\) has its usual meaning as the germ in \(\mathcal{C}^{<\infty}\) of the identity function on \(\mathbb{R}\), we take \(R\) as the differential ring \(\mathcal{C}^{<\infty}[\mathrm{i}]\) and \(C\{X,Y\}\) as a differential subring of \(R\{X,Y\}\) for any subfield \(C\) of \(\mathbb{C}=C_{R}\).
**Lemma 5.3.11**.: _Let \(P\in C\{X,Y\}\) where \(C\) is a subfield of \(\mathbb{C}\). Then there are \(N\in\mathbb{N}\) and \(P^{\bullet}\in C\{X,Y\}\) such that for all \(y\in R\) and \(\ell\in\mathcal{C}^{<\infty}\) with \(\ell(t)\to+\infty\) as \(t\to+\infty\) and \(\ell^{\prime}(t)>0\), eventually, we have for \(\phi:=\ell^{\prime}\):_
\[P(x,y)\circ\ell^{\mathrm{inv}}\ =\ \bigl{(}\phi\circ\ell^{\mathrm{inv}}\bigr{)} ^{N}\cdot P^{\bullet}(\ell^{\mathrm{inv}},y\circ\ell^{\mathrm{inv}})\ \text{ in }R.\]
Proof.: Let \(\ell\in\mathcal{C}^{<\infty}\) be such that \(\ell(t)\to+\infty\) as \(t\to+\infty\) and \(\ell^{\prime}(t)>0\), eventually, and set \(\phi:=\ell^{\prime}\). For \(P_{x}:=P(x,Y)\in R\{Y\}\) and \(y\in R\) we have \(P(x,y)=P_{x}(y)=P_{x}^{\phi}(y)=\phi^{N}Q(x,y)\), with \(N\in\mathbb{N}\) and \(Q\in R^{\phi}\{X,Y\}\) as in Lemma 5.3.10, so
\[P(x,y)\circ\ell^{\mathrm{inv}}\ =\ \phi^{N}Q(x,y)\circ\ell^{\mathrm{inv}}\ =\ \bigl{(}\phi\circ\ell^{\mathrm{inv}}\bigr{)}^{N}\cdot Q(x,y)\circ\ell^{ \mathrm{inv}}.\]
Let \(P^{\bullet}\) be the element of \(C\{X,Y\}\) that is mapped to \(Q\in R^{\phi}\{X,Y\}\) under the ring inclusion \(C\{X,Y\}\to R^{\phi}\{X,Y\}\). The latter is not in general a differential ring morphism, but we have the differential ring isomorphism
\[y\mapsto y\circ\ell^{\mathrm{inv}}\ :\ R^{\phi}\to R\circ\ell^{\mathrm{inv}}=R,\]
which gives for \(y\in R\) that
\[Q(x,y)\circ\ell^{\mathrm{inv}}\ =\ P^{\bullet}(x\circ\ell^{\mathrm{inv}},y\circ \ell^{\mathrm{inv}})\ =\ P^{\bullet}(\ell^{\mathrm{inv}},y\circ\ell^{\mathrm{inv}}).\qed\]
**Corollary 5.3.12**.: _For each \(P\in\mathbb{R}\{X,Y\}\) there is a \(P^{\bullet}\in\mathbb{R}\{X,Y\}\) such that for all \(\ell\in\mathcal{C}^{<\infty}\) with \(\ell(t)\to+\infty\) as \(t\to+\infty\) and \(\ell^{\prime}(t)>0\), eventually, we have_
\[P(x,\ell)=0\iff P^{\bullet}(\ell^{\mathrm{inv}},x)=0.\]
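As an illustration (a routine instance of the recipe in the proofs of Lemmas 5.3.10 and 5.3.11): for \(P=Y^{\prime}-Y\) one finds \(P^{\bullet}=Y^{\prime}-X^{\prime}Y\), so for \(\ell\) as in the corollary,
\[P(x,\ell)=\ell^{\prime}-\ell=0\iff 1-(\ell^{\mathrm{inv}})^{\prime}\,x=0\iff(\ell^{\mathrm{inv}})^{\prime}=\frac{1}{x},\]
in agreement with the germs \(\ell=c\,\mathrm{e}^{x}\) \((c\in\mathbb{R}^{>})\) having compositional inverses \(\ell^{\mathrm{inv}}=\log(x/c)\).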
We now indicate how Lemma 5.3.11 and Corollary 5.3.12 go through for transseries. Recall from [ADH, A.7] that there is a unique operation
\[(f,g)\mapsto f\circ g\colon\mathbb{T}\times\mathbb{T}^{>\mathbb{R}}\to\mathbb{T}\]
such that the following conditions hold for all \(g\in\mathbb{T}^{>\mathbb{R}}\):
1. \(x\circ g=g\);
2. \(f\mapsto f\circ g\colon\mathbb{T}\to\mathbb{T}\) is an \(\mathbb{R}\)-linear embedding of ordered exponential fields;
3. \(f\mapsto f\circ g\colon\mathbb{T}\to\mathbb{T}\) is strongly additive.
By [60, Proposition 6.3] the Chain Rule holds:
\[(f\circ g)^{\prime}=(f^{\prime}\circ g)\cdot g^{\prime}\qquad(f\in\mathbb{T}, \ g\in\mathbb{T}^{>\mathbb{R}}).\]
Moreover, \((f,g)\mapsto f\circ g\) restricts to a binary operation on \(\mathbb{T}^{>\mathbb{R}}\) which makes \(\mathbb{T}^{>\mathbb{R}}\) a group with identity element \(x\). For \(f\in\mathbb{T}^{>\mathbb{R}}\) we denote the unique \(g\in\mathbb{T}^{>\mathbb{R}}\) with \(f\circ g=x\) by \(g=f^{\mathrm{inv}}\). We extend \(\circ\) in a unique way to an operation
\[(f,g)\mapsto f\circ g\colon\mathbb{T}[\mathrm{i}]\times\mathbb{T}^{>\mathbb{R}} \to\mathbb{T}[\mathrm{i}]\]
by requiring that for all \(g\in\mathbb{T}^{>\mathbb{R}}\), the operation \(f\mapsto f\circ g\colon\mathbb{T}[\mathrm{i}]\to\mathbb{T}[\mathrm{i}]\) is \(\mathbb{C}\)-linear. It follows that for all \(g\in\mathbb{T}^{>\mathbb{R}}\) the operation \(f\mapsto f\circ g\colon\mathbb{T}[\mathrm{i}]\to\mathbb{T}[\mathrm{i}]\) is a field
embedding. For \(f\in\mathbb{T}[i]\), \(g,h\in\mathbb{T}^{>\mathbb{R}}\) we have \((f\circ g)\circ h=f\circ(g\circ h)\) [ADH, A.7(vi)], so \(\mathbb{T}[i]\circ h=\mathbb{T}[i]\). For \(\ell\in\mathbb{T}^{>\mathbb{R}}\) and \(\phi:=\ell^{\prime}\) we have a differential field isomorphism
\[y\mapsto y\circ\ell^{\mathrm{inv}}\colon\mathbb{T}[i]^{\phi}\to\mathbb{T}[i] \circ\ell^{\mathrm{inv}}=\mathbb{T}[i].\]
Let \(P\in C\{X,Y\}\) where \(C\) is a subfield of \(\mathbb{C}\). Let \(N\in\mathbb{N}\) and \(P^{\bullet}\in C\{X,Y\}\) be as obtained in the proof of Lemma 5.3.11. Then that proof gives for all \(y\in\mathbb{T}[i]\), \(\ell\in\mathbb{T}^{>\mathbb{R}}\), and \(\phi:=\ell^{\prime}\):
\[P(x,y)\circ\ell^{\mathrm{inv}}\ =\ \big{(}\phi\circ\ell^{\mathrm{inv}}\big{)}^{N }\cdot P^{\bullet}(\ell^{\mathrm{inv}},y\circ\ell^{\mathrm{inv}})\ \text{ in }\mathbb{T}[i].\]
Hence for \(C=\mathbb{R}\) we have \(P^{\bullet}\in\mathbb{R}\{X,Y\}\) and for all \(\ell\in\mathbb{T}^{>\mathbb{R}}\):
\[P(x,\ell)=0\iff P^{\bullet}(\ell^{\mathrm{inv}},x)=0.\]
### Upper and Lower Bounds on the Growth of Hardian Germs (\({}^{*}\))
This section elaborates on [33, 34, 170]. It is not used for proving our main theorem, but some of it is needed later, in the proofs of Corollary 5.5.40, Proposition 5.6.6, and Theorem 5.6.11.
### Generalizing logarithmic decomposition
In this subsection \(K\) is a differential ring and \(y\in K\). In [ADH, p. 213] we defined the \(n\)th iterated logarithmic derivative \(y^{\langle n\rangle}\) of \(y\) when \(K\) is a differential field. Generalizing this, set \(y^{\langle 0\rangle}:=y\), and recursively, if \(y^{\langle n\rangle}\in K\) is defined and a unit in \(K\), then \(y^{\langle n+1\rangle}:=(y^{\langle n\rangle})^{\dagger}\), while otherwise \(y^{\langle n+1\rangle}\) is not defined. (Thus if \(y^{\langle n\rangle}\) is defined, then so are \(y^{\langle 0\rangle},\dots,y^{\langle n-1\rangle}\).) With \(L_{n}\) in \(\mathbb{Z}[X_{1},\dots,X_{n}]\) as in [ADH, p. 213], if \(y^{\langle n\rangle}\) is defined, then
\[y^{(n)}\ =\ y^{\langle 0\rangle}\cdot L_{n}(y^{\langle 1\rangle},\dots,y^{\langle n\rangle}).\]
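For instance, \(y^{\prime}=y\,y^{\langle 1\rangle}\) and \(y^{\prime\prime}=y\bigl((y^{\langle 1\rangle})^{2}+y^{\langle 1\rangle}y^{\langle 2\rangle}\bigr)\), so \(L_{1}=X_{1}\) and \(L_{2}=X_{1}^{2}+X_{1}X_{2}\).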
If \(y^{\langle n\rangle}\) is defined and \(\boldsymbol{i}=(i_{0},\dots,i_{n})\in\mathbb{N}^{1+n}\), we set
\[y^{\langle\boldsymbol{i}\rangle}\ :=\ (y^{\langle 0\rangle})^{i_{0}}(y^{ \langle 1\rangle})^{i_{1}}\cdots(y^{\langle n\rangle})^{i_{n}}\in K.\]
Hence if \(H\) is a differential subfield of \(K\), \(P\in H\{Y\}\) has order at most \(n\) and logarithmic decomposition \(P=\sum_{\boldsymbol{i}}P_{\langle\boldsymbol{i}\rangle}Y^{\langle \boldsymbol{i}\rangle}\) (\(\boldsymbol{i}\) ranging over \(\mathbb{N}^{1+n}\), all \(P_{\langle\boldsymbol{i}\rangle}\in H\), and \(P_{\langle\boldsymbol{i}\rangle}=0\) for all but finitely many \(\boldsymbol{i}\)), and \(y^{\langle n\rangle}\) is defined, then \(P(y)=\sum_{\boldsymbol{i}}P_{\langle\boldsymbol{i}\rangle}y^{\langle \boldsymbol{i}\rangle}\). Below we apply these remarks to \(K=\mathcal{C}^{<\infty}\), where for \(y\in K^{\times}\) we have \(y^{\dagger}=(\log|y|)^{\prime}\), hence \(y^{\langle n+1\rangle}=(\log|y^{\langle n\rangle}|)^{\prime}\) if \(y^{\langle n+1\rangle}\) is defined.
### Transexponential germs
For \(f\in\mathcal{C}\) we recursively define the germs \(\exp_{n}f\) in \(\mathcal{C}\) by \(\exp_{0}f:=f\) and \(\exp_{n+1}f:=\exp(\exp_{n}f)\). Following [33] we say that a germ \(y\in\mathcal{C}\) is **transexponential** if \(y\geqslant\exp_{n}x\) for all \(n\). _In the rest of this subsection \(H\) is a Hardy field._ By Corollary 1.3.9 and Proposition 5.3.2:
**Lemma 5.4.1**.: _If the \(H\)-hardian germ \(y\) is \(\mathrm{d}\)-algebraic over \(H\), then \(y\leqslant\exp_{n}h\) for some \(n\) and some \(h\in H(x)\)._
Thus each transexponential hardian germ is \(\mathrm{d}\)-transcendental (over \(\mathbb{R}\)). _In the rest of this subsection: \(y\in\mathcal{C}^{<\infty}\) is transexponential and hardian, and \(z\in\mathcal{C}^{<\infty}[i]\)._ Then \(y^{\langle n\rangle}\) is defined, and \(y^{\langle n\rangle}\) is also transexponential and hardian, for all \(n\). Next some variants of results from Section 1.3. For this, let \(n\) be given and let \(f\in\mathcal{C}^{<\infty}\), not necessarily hardian, be such that \(f\succ 1\), \(f\geqslant 0\), and \(y\succcurlyeq\exp_{n+1}f\).
**Lemma 5.4.2**.: _We have \(y^{\dagger}\succcurlyeq\exp_{n}f\) and \(y^{\langle n\rangle}\succcurlyeq\exp f\)._
Proof.: Since \(y\succcurlyeq\exp_{2}x\), we have \(\log y\succcurlyeq\exp x\) by Lemma 5.1.2, and thus \(y^{\dagger}=(\log y)^{\prime}\succcurlyeq\log y\). Since \(y\succcurlyeq\exp_{n+1}f\), the same lemma gives \(\log y\succcurlyeq\exp_{n}f\). Thus \(y^{\dagger}\succcurlyeq\exp_{n}f\). Now the second statement follows by an easy induction.
**Corollary 5.4.3**.: _Let \(\mathbf{i}\in\mathbb{Z}^{1+n}\) and suppose \(\mathbf{i}>0\) lexicographically. Then \(y^{\langle\mathbf{i}\rangle}\succ f\)._
Proof.: Let \(m\in\{0,\ldots,n\}\) be minimal such that \(i_{m}\neq 0\); so \(i_{m}\geqslant 1\). The remarks after Corollary 1.3.2 then give \(y^{\langle\mathbf{i}\rangle}\succ 1\) and \([v(y^{\langle\mathbf{i}\rangle})]=[v(y^{\langle m\rangle})]\), so we have \(k\in\mathbb{N}\), \(k\geqslant 1\), such that \(y^{\langle\mathbf{i}\rangle}\succcurlyeq(y^{\langle m\rangle})^{1/k}\). Then Lemma 5.4.2 gives \(y^{\langle\mathbf{i}\rangle}\succcurlyeq(y^{\langle m\rangle})^{1/k}\succcurlyeq( \exp f)^{1/k}\succ f\) as required.
In the next proposition and lemma \(P\in H\{Y\}^{\neq}\) has order at most \(n\), and \(\mathbf{i}\), \(\mathbf{j}\), \(\mathbf{k}\) range over \(\mathbb{N}^{1+n}\). Let \(\mathbf{j}\) be lexicographically maximal such that \(P_{\langle\mathbf{j}\rangle}\neq 0\), and choose \(\mathbf{k}\) so that \(P_{\langle\mathbf{k}\rangle}\) has minimal valuation. If \(P_{\langle\mathbf{k}\rangle}/P_{\langle\mathbf{j}\rangle}\succ x\), set \(f:=|P_{\langle\mathbf{k}\rangle}/P_{\langle\mathbf{j}\rangle}|\); otherwise set \(f:=x\). Then \(f\in H(x)\), \(f>0\), \(f\succ 1\), and \(f\succcurlyeq P_{\langle\mathbf{i}\rangle}/P_{\langle\mathbf{j}\rangle}\) for all \(\mathbf{i}\).
**Proposition 5.4.4**.: _We have \(P(y)\sim P_{\langle\mathbf{j}\rangle}y^{\langle\mathbf{j}\rangle}\) and thus_
\[P(y)\in\big{(}\mathcal{C}^{<\infty}\big{)}^{\times},\qquad\operatorname{sign }P(y)\ =\ \operatorname{sign}P_{\langle\mathbf{j}\rangle}\neq 0.\]
Proof.: For \(\mathbf{i}<\mathbf{j}\) we have \(y^{\langle\mathbf{j}-\mathbf{i}\rangle}\succ f\succcurlyeq P_{\langle\mathbf{i}\rangle}/P_ {\langle\mathbf{j}\rangle}\) by Corollary 5.4.3, therefore \(P_{\langle\mathbf{j}\rangle}y^{\langle\mathbf{j}\rangle}\succ P_{\langle\mathbf{i}\rangle} y^{\langle\mathbf{i}\rangle}\). Thus \(P(y)\sim P_{\langle\mathbf{j}\rangle}y^{\langle\mathbf{j}\rangle}\).
**Lemma 5.4.5**.: _Suppose that \(z^{\langle n\rangle}\) is defined and \(y^{\langle i\rangle}\sim z^{\langle i\rangle}\) for \(i=0,\ldots,n\). Then \(P(y)\sim P(z)\)._
Proof.: For all \(\mathbf{i}\) with \(P_{\langle\mathbf{i}\rangle}\neq 0\) we have \(P_{\langle\mathbf{i}\rangle}y^{\langle\mathbf{i}\rangle}\sim P_{\langle\mathbf{i}\rangle}z ^{\langle\mathbf{i}\rangle}\), by Lemma 5.1.1. Now use that for \(\mathbf{i}\neq\mathbf{j}\) we have \(P_{\langle\mathbf{i}\rangle}y^{\langle\mathbf{i}\rangle}\prec P_{\langle\mathbf{j}\rangle} y^{\langle\mathbf{j}\rangle}\) by the proof of Proposition 5.4.4.
From here on \(n\) is no longer fixed.
**Corollary 5.4.6** (Boshernitzan [33, Theorem 12.23]).: _Suppose \(y\geqslant\exp_{n}h\) for all \(h\in H(x)\) and all \(n\). Then \(y\) is \(H\)-hardian._
This is an immediate consequence of Proposition 5.4.4. (In [33], the proof of this fact is only indicated.) From Lemma 5.4.5 we also obtain:
**Corollary 5.4.7**.: _Suppose that \(y\) is as in Corollary 5.4.6 and \(z\in\mathcal{C}^{<\infty}\), and \(z^{\langle n\rangle}\) is defined and \(y^{\langle n\rangle}\sim z^{\langle n\rangle}\), for all \(n\). Then \(z\) is \(H\)-hardian, and there is a unique ordered differential field isomorphism \(H\langle y\rangle\to H\langle z\rangle\) over \(H\) which sends \(y\) to \(z\)._
Lemma 5.4.13 below contains another criterion for \(z\) to be \(H\)-hardian. This involves a certain binary relation \(\sim_{\infty}\) on germs defined in the next subsection. Lemma 5.4.5 also yields a complex version of Corollary 5.4.7:
**Corollary 5.4.8**.: _Suppose that \(y\) is as in Corollary 5.4.6 and that \(z^{\langle n\rangle}\) is defined and \(y^{\langle n\rangle}\sim z^{\langle n\rangle}\), for all \(n\). Then \(z\) generates a differential subfield \(H\langle z\rangle\) of \(\mathcal{C}^{<\infty}[i]\), and there is a unique differential field isomorphism \(H\langle y\rangle\to H\langle z\rangle\) over \(H\) which sends \(y\) to \(z\). Moreover, the binary relation \(\preccurlyeq\) on \(\mathcal{C}[i]\) restricts to a dominance relation on \(H\langle z\rangle\) which makes this an isomorphism of valued differential fields._
**A useful equivalence relation.** We set
\[{\mathcal{C}}^{<\infty}[i]^{\preccurlyeq}\ :=\ \big{\{}f\in{\mathcal{C}}^{<\infty}[i] :f^{(n)}\preccurlyeq 1\ \text{for all}\ n\big{\}}\ \subseteq\ {\mathcal{C}}[i]^{\preccurlyeq},\]
a differential \({\mathbb{C}}\)-subalgebra of \({\mathcal{C}}^{<\infty}[i]\), and
\[{\mathcal{I}}\ :=\ \big{\{}f\in{\mathcal{C}}^{<\infty}[i]:f^{(n)}\prec 1\ \text{for all}\ n\big{\}}\ \subseteq\ {\mathcal{C}}^{<\infty}[i]^{\preccurlyeq},\]
a differential ideal of \({\mathcal{C}}^{<\infty}[i]^{\preccurlyeq}\) (thanks to the Product Rule). Recall from the remarks preceding Lemma 5.1.1 that \(({\mathcal{C}}[i]^{\preccurlyeq})^{\times}={\mathcal{C}}[i]^{\asymp}\).
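For example, \(x^{-1}\in\mathcal{I}\) and \(\mathrm{e}^{-x}\in\mathcal{I}\), but \(\mathcal{I}\) is strictly smaller than \(\big\{f\in\mathcal{C}^{<\infty}[i]:f\prec 1\big\}\): the germ \(f:=\sin(x^{2})/x\) satisfies \(f\prec 1\), yet \(f^{\prime}=2\cos(x^{2})-\sin(x^{2})/x^{2}\not\prec 1\), so \(f\notin\mathcal{I}\).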
**Lemma 5.4.9**.: _The group of units of \({\mathcal{C}}^{<\infty}[i]^{\preccurlyeq}\) is_
\[{\mathcal{C}}^{<\infty}[i]^{\asymp}\ :=\ {\mathcal{C}}^{<\infty}[i]^{\preccurlyeq}\cap{\mathcal{C}}[i]^{\asymp}\ =\ \big{\{}f\in{\mathcal{C}}^{<\infty}[i]:f\asymp 1,\ f^{(n)}\preccurlyeq 1\ \text{for all}\ n\big{\}}.\]
_Moreover, \(1+{\mathcal{I}}\) is a subgroup of \({\mathcal{C}}^{<\infty}[i]^{\asymp}\)._
Proof.: It is clear that
\[({\mathcal{C}}^{<\infty}[i]^{\preccurlyeq})^{\times}\ \subseteq\ {\mathcal{C}}^{<\infty}[i]^{\preccurlyeq}\cap({\mathcal{C}}[i]^{\preccurlyeq})^ {\times}\ =\ {\mathcal{C}}^{<\infty}[i]^{\preccurlyeq}\cap{\mathcal{C}}[i]^{\asymp}\ =\ { \mathcal{C}}^{<\infty}[i]^{\asymp}.\]
Conversely, suppose \(f\in{\mathcal{C}}^{<\infty}[i]\) satisfies \(f\asymp 1\) and \(f^{(n)}\preccurlyeq 1\) for all \(n\). For each \(n\) we have \(Q_{n}\in{\mathbb{Q}}\{X\}\) such that \((1/f)^{(n)}=Q_{n}(f)/f^{n+1}\), hence \((1/f)^{(n)}\preccurlyeq 1\). Thus \(f\in({\mathcal{C}}^{<\infty}[i]^{\preccurlyeq})^{\times}\). This shows the first statement. Clearly \(1+{\mathcal{I}}\subseteq{\mathcal{C}}^{<\infty}[i]^{\asymp}\), and \(1+{\mathcal{I}}\) is closed under multiplication. If \(\delta\in{\mathcal{I}}\), then \(1+\delta\) is a unit of \({\mathcal{C}}^{<\infty}[i]^{\preccurlyeq}\) and \((1+\delta)^{-1}=1+\varepsilon\) where \(\varepsilon=-\delta(1+\delta)^{-1}\in{\mathcal{I}}\).
For \(y,z\in{\mathcal{C}}[i]^{\times}\) we define
\[y\sim_{\infty}z\quad:\Longleftrightarrow\quad y\in z\cdot(1+{\mathcal{I}});\]
hence \(y\sim_{\infty}z\Rightarrow y\sim z\). Lemma 5.4.9 yields that \(\sim_{\infty}\) is an equivalence relation on \({\mathcal{C}}[i]^{\times}\), and for \(y_{i},z_{i}\in{\mathcal{C}}[i]^{\times}\) (\(i=1,2\)) we have
\[y_{1}\sim_{\infty}y_{2}\quad\&\quad z_{1}\sim_{\infty}z_{2}\qquad\Longrightarrow \qquad y_{1}z_{1}\sim_{\infty}y_{2}z_{2},\quad y_{1}^{-1}\sim_{\infty}y_{2}^ {-1}.\]
**Lemma 5.4.10**.: _Let \(y,z\in{\mathcal{C}}^{1}[i]^{\times}\) with \(y\sim_{\infty}z\) and \(z\in z^{\prime}\,{\mathcal{C}}^{<\infty}[i]^{\preccurlyeq}\). Then_
\[y^{\prime},z^{\prime}\in{\mathcal{C}}[i]^{\times},\qquad y^{\prime}\sim_{ \infty}z^{\prime}.\]
Proof.: Let \(\delta\in{\mathcal{I}}\) and \(f\in{\mathcal{C}}^{<\infty}[i]^{\preccurlyeq}\) with \(y=z(1+\delta)\) and \(z=z^{\prime}f\). Then \(z^{\prime}\in{\mathcal{C}}[i]^{\times}\) and \(y^{\prime}=z^{\prime}(1+\delta)+z\delta^{\prime}=z^{\prime}(1+\delta+f\delta^ {\prime})\) where \(\delta+f\delta^{\prime}\in{\mathcal{I}}\), so \(y^{\prime}\sim_{\infty}z^{\prime}\).
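The hypothesis \(z\in z^{\prime}\,\mathcal{C}^{<\infty}[i]^{\preccurlyeq}\) cannot be omitted here: for \(z:=x\) and \(y:=x+\sin x=z\bigl(1+\sin(x)/x\bigr)\) we have \(\sin(x)/x\in\mathcal{I}\) and thus \(y\sim_{\infty}z\), but \(y^{\prime}=1+\cos x\notin\mathcal{C}[i]^{\times}\); here \(x\notin\mathcal{C}^{<\infty}[i]^{\preccurlyeq}\), so \(z\notin z^{\prime}\,\mathcal{C}^{<\infty}[i]^{\preccurlyeq}\).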
If \(\ell\in{\mathcal{C}}^{n}[i]\) and \(f\in{\mathcal{C}}^{n}\) with \(f\geqslant 0\), \(f\succ 1\), then \(\ell\circ f\in{\mathcal{C}}^{n}[i]\). In fact, for \(n\geqslant 1\) and \(1\leqslant k\leqslant n\) we have a differential polynomial \(Q_{k}^{n}\in{\mathbb{Q}}\{X^{\prime}\}\subseteq{\mathbb{Q}}\{X\}\) of order \(\leqslant n\), isobaric of weight \(n\), and homogeneous of degree \(k\), such that for all such \(\ell\), \(f\),
\[(\ell\circ f)^{(n)}\ =\ (\ell^{(n)}\circ f)\,Q_{n}^{n}(f)+\cdots+(\ell^{\prime} \circ f)\,Q_{1}^{n}(f).\]
For example,
\[Q_{1}^{1}=X^{\prime},\quad Q_{2}^{2}=(X^{\prime})^{2},\ Q_{1}^{2}=X^{\prime \prime},\quad Q_{3}^{3}=(X^{\prime})^{3},\ Q_{2}^{3}=3X^{\prime}X^{\prime \prime},\ Q_{1}^{3}=X^{\prime\prime\prime}.\]
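For \(n=2\) this is the familiar second-order chain rule:
\[(\ell\circ f)^{\prime\prime}\ =\ (\ell^{\prime\prime}\circ f)\,(f^{\prime})^{2}+(\ell^{\prime}\circ f)\,f^{\prime\prime}.\]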
The following lemma is only used in the proof of Theorem 5.6.11 below.
**Lemma 5.4.11**.: _Let \(f,g\in{\mathcal{C}}^{<\infty}\) be such that \(f,g\geqslant 0\) and \(f,g\succ 1\), and set \(r:=g-f\). Suppose \(P(f)\cdot Q(r)\prec 1\) for all \(P,Q\in{\mathbb{Q}}\{Y\}\) with \(Q(0)=0\), and let \(\ell\in{\mathcal{C}}^{<\infty}[i]\) be such that \(\ell^{\prime}\in{\mathcal{I}}\). Then \(\ell\circ g-\ell\circ f\in{\mathcal{I}}\)._
Proof.: Treating real and imaginary parts separately we arrange \(\ell\in\mathcal{C}^{<\infty}\). Note that \(r\prec 1\). Taylor expansion [ADH, 4.2] for \(P\in\mathbb{Q}\{X\}\) of order \(\leqslant n\) gives
\[P(g)-P(f)\ =\ \sum_{|\boldsymbol{i}|\geqslant 1}\frac{1}{\boldsymbol{i}!}P^{( \boldsymbol{i})}(f)\cdot r^{\boldsymbol{i}}\qquad(\boldsymbol{i}\in\mathbb{N} ^{1+n}),\]
and thus \(P(g)-P(f)\prec 1\) and \(rP(g)\prec 1\). The Mean Value Theorem yields a germ \(r_{n}\in\mathcal{G}\) such that
\[\ell^{(n)}\circ g-\ell^{(n)}\circ f\ =\ \bigl{(}\ell^{(n+1)}\circ(f+r_{n}) \bigr{)}\cdot r\quad\text{and}\quad|r_{n}|\leqslant|r|.\]
Now \(r_{0}\prec 1\), so \(\ell^{\prime}\circ(f+r_{0})\prec 1\), hence \(\ell\circ g-\ell\circ f\prec 1\). For \(1\leqslant k\leqslant n\),
\[(\ell^{(k)}\circ g)\,Q_{k}^{n}(g)-(\ell^{(k)}\circ f)\,Q_{k}^{n} (f)=\\ (\ell^{(k)}\circ f)\,\bigl{(}Q_{k}^{n}(g)-Q_{k}^{n}(f)\bigr{)}+ \bigl{(}\ell^{(k+1)}\circ(f+r_{k})\bigr{)}\cdot rQ_{k}^{n}(g),\]
so \((\ell^{(k)}\circ g)\,Q_{k}^{n}(g)-(\ell^{(k)}\circ f)\,Q_{k}^{n}(f)\prec 1\), and thus \(\bigl{(}\ell\circ g-\ell\circ f\bigr{)}^{(n)}\prec 1\).
We consider next the differential \(\mathbb{R}\)-subalgebra
\[(\mathcal{C}^{<\infty})^{\preccurlyeq}\ :=\ \mathcal{C}^{<\infty}[i]^{\preccurlyeq}\cap\mathcal{C}^{<\infty}\ \subseteq\ \mathcal{C}^{\preccurlyeq}\]
of \(\mathcal{C}^{<\infty}\). In the rest of this subsection \(H\) is a Hardy field and \(y,z\in\mathcal{C}^{<\infty}\), \(y,z\succ 1\). Note that \((\mathcal{C}^{<\infty})^{\preccurlyeq}\cap H=\mathcal{O}_{H}\) and \(\mathcal{I}\cap H=\mathcal{o}_{H}\). This yields:
**Lemma 5.4.12**.: _Suppose \(y-z\in(\mathcal{C}^{<\infty})^{\preccurlyeq}\) and \(z\) is hardian. Then \(y\sim_{\infty}z\)._
Proof.: From \(y=z+f\) with \(f\in(\mathcal{C}^{<\infty})^{\preccurlyeq}\) we obtain \(y=z(1+fz^{-1})\). Now \(z^{-1}\in\mathcal{I}\), so \(fz^{-1}\in\mathcal{I}\), and thus \(y\sim_{\infty}z\).
We now formulate a sufficient condition involving \(\sim_{\infty}\) for \(y\) to be \(H\)-hardian.
**Lemma 5.4.13**.: _Suppose \(z\) is \(H\)-hardian with \(z\geqslant\exp_{n}h\) for all \(h\in H(x)\) and all \(n\), and \(y\sim_{\infty}z\). Then \(y\) is \(H\)-hardian, and there is a unique ordered differential field isomorphism \(H\langle y\rangle\to H\langle z\rangle\) which is the identity on \(H\) and sends \(y\) to \(z\)._
Proof.: By Lemma 5.4.1 we may replace \(H\) by the Hardy subfield \(\operatorname{Li}\bigl{(}H(\mathbb{R})\bigr{)}\) of \(\operatorname{E}(H)\) to arrange that \(H\supseteq\mathbb{R}\) is Liouville closed. By Corollary 5.4.7 (with the roles of \(y\), \(z\) reversed) it is enough to show that for each \(n\), \(y^{\langle n\rangle}\) is defined, \(y^{\langle n\rangle}\succ 1\), and \(y^{\langle n\rangle}\sim_{\infty}z^{\langle n\rangle}\). This holds by hypothesis for \(n=0\). By Lemma 1.3.3, \(z>H\) gives \(z^{\dagger}>H\), so \(z=z^{\prime}f\) with \(f\prec 1\) in the Hardy field \(H\langle z\rangle\), hence \(f^{(n)}\prec 1\) for all \(n\). So by Lemma 5.4.10, \(y^{\langle 1\rangle}=y^{\dagger}\) is defined, \(y^{\langle 1\rangle}\in(\mathcal{C}^{<\infty})^{\times}\), \(y^{\langle 1\rangle}\sim_{\infty}z^{\langle 1\rangle}\), and thus \(y^{\langle 1\rangle}\succ 1\). Assume for a certain \(n\geqslant 1\) that \(y^{\langle n\rangle}\) is defined, \(y^{\langle n\rangle}\succ 1\), and \(y^{\langle n\rangle}\sim_{\infty}z^{\langle n\rangle}\). Then \(z^{\langle n\rangle}\) is \(H\)-hardian and \(H<z^{\langle n\rangle}\) by Lemma 1.3.5. Hence by the case \(n=1\) applied to \(y^{\langle n\rangle}\), \(z^{\langle n\rangle}\) in place of \(y\), \(z\), respectively, \(y^{\langle n+1\rangle}=(y^{\langle n\rangle})^{\dagger}\) is defined, \(y^{\langle n+1\rangle}\succ 1\), and \(y^{\langle n+1\rangle}\sim_{\infty}z^{\langle n+1\rangle}\).
The next two corollaries are Theorems 13.6 and 13.10, respectively, in [33].
**Corollary 5.4.14**.: _Suppose \(z\) is transexponential and hardian, and \(y-z\in(\mathcal{C}^{<\infty})^{\preccurlyeq}\). Then \(y\) is hardian, and there is a unique isomorphism \(\mathbb{R}\langle y\rangle\to\mathbb{R}\langle z\rangle\) of ordered differential fields that is the identity on \(\mathbb{R}\) and sends \(y\) to \(z\)._
Proof.: Take \(H:=\operatorname{Li}(\mathbb{R})\). Then \(z\) lies in a Hardy field extension of \(H\), namely \(\operatorname{Li}\bigl{(}\mathbb{R}\langle z\rangle\bigr{)}\), and \(H<z\). So \(y\sim_{\infty}z\) by Lemma 5.4.12. Now use Lemma 5.4.13.
**Corollary 5.4.15**.: _If \(z\in\operatorname{E}(H)^{>\mathbb{R}}\), then \(z\leqslant\exp_{n}h\) for some \(h\in H(x)\) and some \(n\)._ (_Thus if \(x\in H\) and \(\exp H\subseteq H\), then \(H^{>\mathbb{R}}\) is cofinal in \(\operatorname{E}(H)^{>\mathbb{R}}\)._)
Proof.: Towards a contradiction, suppose \(z\in\operatorname{E}(H)^{>\mathbb{R}}\) and \(z>\exp_{n}h\) in \(\operatorname{E}(H)\) for all \(h\in H(x)\) and all \(n\). Set \(y:=z+\sin x\). Then \(y\) is \(H\)-hardian by Lemmas 5.4.12 and 5.4.13, so \(y\), \(z\) lie in a common Hardy field extension of \(H\), a contradiction.
The same proof shows that Corollary 5.4.15 remains true if \(H\) is assumed to be a \(\mathcal{C}^{\infty}\)-Hardy field and \(\operatorname{E}(H)\) is replaced by \(\operatorname{E}^{\infty}(H)\); likewise for \(\omega\) in place of \(\infty\).
### Remarks on differential subfields of \(\mathcal{C}^{<\infty}[\mathrm{i}]\)
Let \(K\) be a subfield of \(\mathcal{C}[\mathrm{i}]\). Then the following are equivalent:
1. The binary relation \(\preccurlyeq\) on \(\mathcal{C}[\mathrm{i}]\) restricts to a dominance relation on \(K\);
2. for all \(f,g\in K\): \(f\preccurlyeq g\) or \(g\preccurlyeq f\);
3. for all \(f\in K\): \(f\preccurlyeq 1\) or \(1\preccurlyeq f\).
If \(K\subseteq H[\mathrm{i}]\) where \(H\) is a Hausdorff field, then \(\preccurlyeq\) restricts to a dominance relation on \(K\). (See Section 5.1.) Moreover, the following are equivalent:
1. \(K=H[\mathrm{i}]\) for some Hausdorff field \(H\);
2. \(i\in K\) and \(\overline{f}\in K\) for each \(f\in K\);
3. \(i\in K\) and \(\operatorname{Re}f,\operatorname{Im}f\in K\) for each \(f\in K\).
Next a lemma similar to Lemma 5.4.13, but obtained using Corollary 5.4.8 instead of Corollary 5.4.7:
**Lemma 5.4.16**.: _Let \(H\) be a Hardy field, let \(z\in\mathcal{C}^{<\infty}\) be \(H\)-hardian with \(z\geqslant\exp_{n}h\) for all \(h\in H(x)\) and all \(n\), and \(y\in\mathcal{C}^{<\infty}[\mathrm{i}]\) with \(y\sim_{\infty}z\). Then \(y\) generates a differential subfield \(H\langle y\rangle\) of \(\mathcal{C}^{<\infty}[\mathrm{i}]\), and there is a unique differential field isomorphism \(H\langle y\rangle\to H\langle z\rangle\) which is the identity on \(H\) and sends \(y\) to \(z\). The binary relation \(\preccurlyeq\) on \(\mathcal{C}[\mathrm{i}]\) restricts to a dominance relation on \(H\langle y\rangle\) which makes this an isomorphism of valued differential fields._
We use the above at the end of the next subsection to produce a differential subfield of \(\mathcal{C}^{<\infty}[\mathrm{i}]\) that is not contained in \(H[\mathrm{i}]\) for any Hardy field \(H\).
### Boundedness
Let \(H\subseteq\mathcal{C}\). We say that \(b\in\mathcal{C}\)**bounds**\(H\) if \(h\leqslant b\) for each \(h\in H\). We call \(H\)**bounded** if some \(b\in\mathcal{C}\) bounds \(H\), and we call \(H\)**unbounded** if \(H\) is not bounded. If \(H_{1},H_{2}\subseteq\mathcal{C}\) and for each \(h_{2}\in H_{2}\) there is an \(h_{1}\in H_{1}\) with \(h_{2}\leqslant h_{1}\), then any \(b\in\mathcal{C}\) bounding \(H_{1}\) also bounds \(H_{2}\). Every bounded subset of \(\mathcal{C}\) is bounded by a germ in \(\mathcal{C}^{\omega}\); this follows from [33, Lemma 14.3]:
**Lemma 5.4.17**.: _For every \(b\geqslant 0\) in \(\mathcal{C}^{\times}\) there is a \(\phi\geqslant 0\) in \((\mathcal{C}^{\omega})^{\times}\) such that \(\phi^{(n)}\prec b\) for all \(n\)._
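(This yields the claim preceding the lemma: if \(H\subseteq\mathcal{C}\) is bounded by \(b\), we may replace \(b\) by \(\max(b,1)\) to arrange \(b\geqslant 1\); Lemma 5.4.17 applied to \(1/b\) then gives \(\phi\geqslant 0\) in \((\mathcal{C}^{\omega})^{\times}\) with \(\phi\prec 1/b\), and \(1/\phi\in\mathcal{C}^{\omega}\) bounds \(H\).)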
Every countable subset of \(\mathcal{C}\) is bounded, by du Bois-Reymond [30]; see also [87, Chapter II] or [39, Chapitre V, p. 53, ex. 8]. Thus \(H\subseteq\mathcal{C}\) is bounded if it is totally ordered by the partial ordering \(\leqslant\) of \(\mathcal{C}\) and has countable cofinality. If \(H\) is a Hausdorff field and \(b\in\mathcal{C}\) bounds \(H\), then \(b\) also bounds the real closure \(H^{\operatorname{rc}}\subseteq\mathcal{C}\) of \(H\) [ADH, 5.3.2]. _In the rest of this subsection \(H\) is a Hardy field._
**Lemma 5.4.18**.: _Let \(H^{*}\) be a \(\mathrm{d}\)-algebraic Hardy field extension of \(H\) and suppose \(H\) is bounded. Then \(H^{*}\) is also bounded._
Proof.: By [ADH, 3.1.11] we have \(f\in H(x)^{>}\) such that for all \(g\in H(x)^{\times}\) there are \(h\in H^{\times}\) and \(q\in\mathbb{Q}\) with \(g\asymp hf^{q}\). Hence \(H(x)\) is bounded. Replacing \(H\), \(H^{*}\) by \(H(x)^{\operatorname{rc}}\), \(\operatorname{Li}\bigl{(}H^{*}(\mathbb{R})\bigr{)}\), respectively, we arrange that \(H\) is real closed with \(x\in H\), and \(H^{*}\supseteq\mathbb{R}\) is Liouville closed. Let \(b\in\mathcal{C}\) bound \(H\). Then any \(b^{*}\in\mathcal{C}\) such that \(\exp_{n}b\leqslant b^{*}\) for all \(n\) bounds \(H^{*}\), by Lemma 5.4.1.
**Lemma 5.4.19**.: _Suppose that \(H\) is bounded and \(f\in\mathcal{C}^{<\infty}\) is hardian over \(H\). Then \(H\langle f\rangle\) is bounded._
Proof.: Lemma 5.4.18 gives that \(\operatorname{Li}\bigl{(}H(\mathbb{R})\bigr{)}\) is bounded; also, \(f\) remains hardian over \(\operatorname{Li}\bigl{(}H(\mathbb{R})\bigr{)}\). Using this we arrange that \(H\) is Liouville closed. The case that \(H\langle f\rangle\) has no element \(>H\) is trivial, so assume we have \(y\in H\langle f\rangle\) with \(y>H\). Then \(y\) is d-transcendental over \(H\) and the sequence \(y,y^{2},y^{3},\dots\) is cofinal in \(H\langle y\rangle\), by Corollary 1.3.8, so \(H\langle y\rangle\) is bounded. Now use that \(f\) is d-algebraic over \(H\langle y\rangle\).
**Theorem 5.4.20** (Boshernitzan [33, Theorem 14.4]).: _Suppose \(H\) is bounded. Then the perfect hull \(\operatorname{E}(H)\) of \(H\) is d-algebraic over \(H\) and hence bounded. If \(H\subseteq\mathcal{C}^{\infty}\), then \(\operatorname{E}^{\infty}(H)\) is d-algebraic over \(H\); likewise with \(\omega\) in place of \(\infty\)._
Using the results above the proof is not difficult. It is omitted in [33], but we include it here for the sake of completeness. First, a lemma:
**Lemma 5.4.21**.: _Let \(b\in\mathcal{C}^{\times}\) bound \(H\), let \(\phi\geqslant 0\) in \(\mathcal{C}^{<\infty}\) satisfy \(\phi^{(n)}\prec b^{-1}\) for all \(n\), and let \(r\in\phi\cdot(\mathcal{C}^{<\infty})^{\preccurlyeq}\). Then \(Q(r)\prec 1\) for all \(Q\in H\{Y\}\) with \(Q(0)=0\)._
Proof.: From \(\phi\in\mathcal{I}\) we obtain \(r\in\mathcal{I}\), so it is enough that \(hr^{(n)}\prec 1\) for all \(h\in H\) and all \(n\). Now use the Product Rule and \(h\phi^{(n)}\prec hb^{-1}\preccurlyeq 1\) for \(h\in H^{\times}\).
Proof of Theorem 5.4.20.: Using Lemma 5.4.18, replace \(H\) by \(\operatorname{Li}\bigl{(}H(\mathbb{R})\bigr{)}\) to arrange that \(H\supseteq\mathbb{R}\) is Liouville closed. Let \(b\in\mathcal{C}\) bound \(H\). Then \(b\) also bounds \(\operatorname{E}(H)\), by Corollary 5.4.15. Lemma 5.4.17 yields \(\phi\geqslant 0\) in \((\mathcal{C}^{\omega})^{\times}\) such that \(\phi^{(n)}\prec b^{-1}\) for all \(n\); set \(r:=\phi\cdot\sin x\in\mathcal{C}^{\omega}\). Then \(Q(r)\prec f\) for all \(f\in\operatorname{E}(H)^{\times}\) and \(Q\in\operatorname{E}(H)\{Z\}\) with \(Q(0)=0\), by Lemma 5.4.21.
Suppose towards a contradiction that \(f\in\operatorname{E}(H)\) is d-transcendental over \(H\), and set \(g:=f+r\in\mathcal{C}^{<\infty}\). Then \(f\), \(g\) are not in a common Hardy field, so \(g\) is not hardian over \(H\). On the other hand, let \(P\in H\{Y\}^{\neq}\). Then \(P(f)\in\operatorname{E}(H)^{\times}\), and by Taylor expansion,
\[P(f+Z)\ =\ P(f)+Q(Z)\quad\text{ where }Q\in\operatorname{E}(H)\{Z\}\text{ with }Q(0)=0,\]
so \(P(g)=P(f+r)\sim P(f)\). Hence \(g\) is hardian over \(H\), a contradiction.
The proof in the case where \(H\subseteq\mathcal{C}^{\infty}\) is similar, using the version of Corollary 5.4.15 for \(\operatorname{E}^{\infty}(H)\); similarly for \(\omega\) in place of \(\infty\).
As to the existence of transexponential hardian germs, we have:
**Theorem 5.4.22**.: _For every \(b\in\mathcal{C}\) there is a \(\mathcal{C}^{\omega}\)-hardian germ \(y\geqslant b\)._
This is Boshernitzan [34, Theorem 1.2], and leads to [34, Theorem 1.1]:
**Corollary 5.4.23**.: _No maximal Hardy field is bounded._
Proof.: Suppose \(x\in H\), and \(b\in\mathcal{C}\) bounds \(H\). Take some \(b^{*}\in\mathcal{C}\) such that \(b^{*}\geqslant\exp_{n}b\) for each \(n\). Now Theorem 5.4.22 yields a \(\mathcal{C}^{\omega}\)-hardian germ \(y\geqslant b^{*}\). By Corollary 5.4.6, \(y\) is \(H\)-hardian, so \(H\langle y\rangle\) is a proper Hardy field extension of \(H\).
The same proof shows also that no \(\mathcal{C}^{\infty}\)-maximal Hardy field and no \(\mathcal{C}^{\omega}\)-maximal Hardy field is bounded. In particular (Boshernitzan [34, Theorem 1.3]):
**Corollary 5.4.24**.: _Every maximal Hardy field contains a transexponential germ. Likewise with "\(\mathcal{C}^{\infty}\)-maximal" or "\(\mathcal{C}^{\omega}\)-maximal" in place of "maximal"._
_Remark_.: For \(\mathcal{C}^{\infty}\)-Hardy fields, some of the above is in Sjodin's [190], predating [33, 34]: if \(H\) is a bounded \(\mathcal{C}^{\infty}\)-Hardy field, then so is \(\operatorname{Li}\bigl{(}H(\mathbb{R})\bigr{)}\)[190, Theorem 2]; no maximal \(\mathcal{C}^{\infty}\)-Hardy field is bounded [190, Theorem 6]; and \(E:=\operatorname{E}^{\infty}(\mathbb{Q})\) is bounded [190, Theorem 10] with \(E\circ E^{>\mathbb{R}}\subseteq E\)[190, Theorem 11].
We can now produce a differential subfield \(K\) of \(\mathcal{C}^{\omega}[\mathrm{i}]\) containing \(\mathrm{i}\) such that \(\preccurlyeq\) restricts to a dominance relation on \(K\) making \(K\) a d-valued field of \(H\)-type with constant field \(\mathbb{C}\), yet \(K\not\subseteq H[\mathrm{i}]\) for every Hardy field \(H\):
Take a transexponential \(\mathcal{C}^{\omega}\)-hardian germ \(z\), and \(h\in\mathbb{R}(x)\) with \(0\neq h\prec 1\). Then \(\varepsilon:=h\,\mathrm{e}^{x\mathrm{i}}\in\mathcal{I}\), so \(y:=z(1+\varepsilon)\in\mathcal{C}^{\omega}[\mathrm{i}]\) with \(y\sim_{\infty}z\). Lemma 5.4.16 applied with \(H=\mathbb{R}\) shows that \(y\) generates a differential subfield \(K_{0}:=\mathbb{R}\langle y\rangle\) of \(\mathcal{C}^{\omega}[\mathrm{i}]\), and \(\preccurlyeq\) restricts to a dominance relation on \(K_{0}\) making \(K_{0}\) a d-valued field of \(H\)-type with constant field \(\mathbb{R}\). Then \(K:=K_{0}[\mathrm{i}]\) is a differential subfield of \(\mathcal{C}^{\omega}[\mathrm{i}]\) with constant field \(\mathbb{C}\). Moreover, \(\preccurlyeq\) also restricts to a dominance relation on \(K\), and this dominance relation makes \(K\) a d-valued field of \(H\)-type [ADH, 10.5.15]. We cannot have \(K\subseteq H[\mathrm{i}]\) where \(H\) is a Hardy field, since \(\operatorname{Im}y=zh\sin x\notin H\).
### Lower bounds on \(\mathrm{d}\)-algebraic hardian germs
In this subsection \(H\) is a Hardy field. Let \(f\in\mathcal{C}\) and \(f\succ 1\), \(f\geqslant 0\). Then the germ \(\log f\in\mathcal{C}\) also satisfies \(\log f\succ 1\), \(\log f\geqslant 0\). So we may inductively define the germs \(\log_{n}f\) in \(\mathcal{C}\) by \(\log_{0}f:=f\), \(\log_{n+1}f:=\log\log_{n}f\). Lemma 5.4.1 gives exponential upper bounds on \(\mathrm{d}\)-algebraic \(H\)-hardian germs. The next result leads to logarithmic lower bounds on such germs when \(H\) is grounded.
**Theorem 5.4.25** (Rosenlicht [170, Theorem 3]).: _Suppose \(H\) is grounded, and let \(E\) be a Hardy field extension of \(H\) such that \(|\Psi_{E}\setminus\Psi_{H}|\leqslant n\)\((\)so \(E\) is also grounded\()\). Then there are \(r,s\in\mathbb{N}\) with \(r+s\leqslant n\) such that_
* _for any_ \(h\in H^{>}\) _with_ \(h\succ 1\) _and_ \(\max\Psi_{H}=v(h^{\dagger})\)_, there exists_ \(g\in E^{>}\) _such that_ \(g\asymp\log_{r}h\) _and_ \(\max\Psi_{E}=v(g^{\dagger})\)_;_
* _for any_ \(g\in E\) _there exists_ \(h\in H\) _such that_ \(g<\exp_{s}h\)_._
This theorem is most useful in combination with the following lemma, which is [170, Proposition 5] (and also [7, Lemma 2.1] in the context of pre-\(H\)-fields).
**Lemma 5.4.26**.: _Let \(E\) be a Hardy field extension of \(H\) such that \(\operatorname{trdeg}(E|H)\leqslant n\). Then \(|\Psi_{E}\setminus\Psi_{H}|\leqslant n\)._
From [ADH, 9.1.11] we recall that for \(f,g\succ 1\) in a Hardy field we have \(f^{\dagger}\preccurlyeq g^{\dagger}\) iff \(|f|\leqslant|g|^{n}\) for some \(n\geqslant 1\). (See also the discussion before Lemma 1.2.27.) Thus by Lemma 5.4.26 and Theorem 5.4.25:
**Corollary 5.4.27**.: _Let \(E\) be a Hardy field extension of \(H\) with \(\operatorname{trdeg}(E|H)\leqslant n\), and let \(h\in H^{>}\) be such that \(h\succ 1\) and \(\max\Psi_{H}=v(h^{\dagger})\). Then \(E\) is grounded, and for all \(g\in E\) with \(g\succ 1\) there is an \(m\geqslant 1\) such that \(\log_{n}h\preccurlyeq g^{m}\)\((\)and hence \(\log_{n+1}h\preccurlyeq g\) for all \(g\in E\) with \(g\succ 1\)\()\)._
Applying Corollary 5.4.27 to \(H=\mathbb{R}(x)\), \(h=x\) yields:
**Corollary 5.4.28** (Boshernitzan [33, Proposition 14.11]).: _If \(y\in\mathcal{C}\) is hardian and \(\operatorname{d}\)-algebraic over \(\mathbb{R}\), then the Hardy field \(E=\mathbb{R}(x)\langle y\rangle\) is grounded, and there is an \(n\) such that \(\log_{n}x\prec g\) for all \(g\in E\) with \(g\succ 1\)._
### Second-Order Linear Differential Equations over Hardy Fields
In this section we review Boshernitzan's work [33, §16] on adjoining non-oscillating solutions of second-order linear differential equations to Hardy fields, deduce some consequences about complex exponentials over Hardy fields used later, and prove a conjecture from [33, §17]. _Throughout this section \(H\) is a Hardy field._
**Oscillation over Hardy fields.** In this subsection we assume \(f\in H\) and consider the linear differential equation
\[4Y^{\prime\prime}+fY\ =\ 0 \tag{4L}\]
over \(H\). The factor \(4\) is to simplify certain expressions, in conformity with [ADH, 5.2]. In [ADH, 5.2] we defined for any differential field \(K\) functions \(\omega\colon K\to K\) and \(\sigma\colon K^{\times}\to K\). We define likewise
\[\omega\ :\ \mathcal{C}^{1}[\text{i}]\to\mathcal{C}^{0}[\text{i}],\qquad \sigma\ :\ \mathcal{C}^{2}[\text{i}]^{\times}\to\mathcal{C}^{0}[\text{i}]\]
by
\[\omega(z)\ =\ -2z^{\prime}-z^{2}\quad\text{ and }\quad\sigma(y)\ =\ \omega(z)+y^{2}\text{ for }z:=-y^{\dagger}.\]
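Unwinding the definition: \(\sigma(y)=2(y^{\dagger})^{\prime}-(y^{\dagger})^{2}+y^{2}\) for \(y\in\mathcal{C}^{2}[\text{i}]^{\times}\), since \(\omega(-y^{\dagger})=2(y^{\dagger})^{\prime}-(y^{\dagger})^{2}\).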
Note that \(\omega(\mathcal{C}^{1})\subseteq\mathcal{C}^{0}\) and \(\sigma\big{(}(\mathcal{C}^{2})^{\times}\big{)}\subseteq\mathcal{C}^{0}\), and \(\sigma(y)=\omega(z+y\text{i})\) for \(z:=-y^{\dagger}\). To clarify the role of \(\omega\) and \(\sigma\) in connection with second-order linear differential equations, suppose \(y\in\mathcal{C}^{2}\) is a non-oscillating solution to (4L) with \(y\neq 0\). Then \(z\!:=\!2y^{\dagger}\!\in\!\mathcal{C}^{1}\) satisfies \(-2z^{\prime}-z^{2}=f\), so \(z\) generates a Hardy field \(H(z)\) with \(\omega(z)=f\), by Proposition 5.3.3, which in turn yields a Hardy field \(H(z,y)\) with \(2y^{\dagger}=z\). Thus \(y_{1}:=y\) lies in a Hardy field extension of \(H\). From Lemma 5.2.15 and Proposition 5.3.2(iv) we also obtain a solution \(y_{2}\) to (4L) in a Hardy field extension of \(H\langle y_{1}\rangle=H(y,z)\) such that \(y_{1}\), \(y_{2}\) are \(\mathbb{R}\)-linearly independent; see also [171, Theorem 2, Corollary 2]. This shows:
**Proposition 5.5.1**.: _If \(f/4\) does not generate oscillations, then \(\operatorname{D}(H)\) contains \(\mathbb{R}\)-linearly independent solutions \(y_{1}\), \(y_{2}\) to (4L)._
Indeed, if \(f/4\) does not generate oscillations, then \(\operatorname{D}(H)\) contains solutions \(y_{1}\), \(y_{2}\) to (4L) with \(y_{1},y_{2}>0\) and \(y_{1}\prec y_{2}\). Here \(y_{1}\) is determined up to multiplication by a factor in \(\mathbb{R}^{>}\); we call such \(y_{1}\) a **principal solution** to (4L). (Lemmas 5.2.28, 5.2.29.) See Section 1.4 for the subsets \(\Gamma(H)\), \(\Lambda(H)\) of \(H\).
**Lemma 5.5.2**.: _Suppose \(H\) is \(\operatorname{d}\)-perfect and \(f/4\) does not generate oscillations, and let \(y\in H\) be a principal solution to (4L). Then \(z:=2y^{\dagger}\) is the unique solution of the equation \(\omega(z)=f\) in \(\Lambda(H)\)._
Proof.: We already know \(\omega(z)=f\). The restriction of \(\omega\) to \(\Lambda(H)\) is strictly increasing [ADH, 11.8.20], so it remains to show that \(z\in\Lambda(H)\). Let \(h\in H\), \(h^{\prime}=1/y^{2}\). Then \(h\succ 1\) by Corollary 5.2.27, hence \(1/y^{2}\in\Gamma(H)\), so \(z=-(1/y^{2})^{\dagger}\in\Lambda(H)\).
By [ADH, p. 259], with \(A=4\partial^{2}+f\in H[\partial]\) we have
\[4y^{\prime\prime}+fy=0\text{ for some }y\in H^{\times}\ \Rightarrow\ A\text{ splits over }H\iff f\in\omega(H).\]
To simplify the discussion we now also introduce the subset
\[\overline{\omega}(H):=\big{\{}f\in H:f/4\text{ does not generate oscillations}\big{\}}\]
of \(H\). If \(E\) is a Hardy field extension of \(H\), then \(\overline{\omega}(E)\cap H=\overline{\omega}(H)\). By Corollary 5.2.24, \(\overline{\omega}(H)\) is downward closed, and \(\omega(H)\subseteq\overline{\omega}(H)\) by the discussion following (R) in Section 5.2.
**Corollary 5.5.3**.: _If \(H\) is \(\mathrm{d}\)-perfect, then_
\[\omega(H)\ =\ \overline{\omega}(H)\ =\ \bigl{\{}f\in H:\ 4y^{\prime\prime}+fy=0 \text{ for some }y\in H^{\times}\bigr{\}},\]
_and \(\omega(H)\) is downward closed in \(H\)._
If \(H\) is \(\mathrm{d}\)-perfect, then \(H^{\dagger}=H\) by Proposition 5.3.2. The remarks after (R) show that a part of Corollary 5.5.3 holds under this weaker condition:
**Corollary 5.5.4**.: _If \(H^{\dagger}=H\), then_
\[\omega(H)\ =\ \bigl{\{}f\in H:\ 4y^{\prime\prime}+fy=0\text{ for some }y\in H^{\times}\bigr{\}}.\]
Lemma 5.2.14 and Proposition 5.5.1 also yield:
**Corollary 5.5.5**.: _If \(f\in\overline{\omega}(H)\), then each \(y\in\mathcal{C}^{2}\) such that \(4y^{\prime\prime}+fy\in H\) is in \(\mathrm{D}(H)\)._
For use in the proof of Corollary 5.5.32 we record the following property of \(\overline{\omega}(H)\):
**Lemma 5.5.6**.: \(\Gamma(H)\cap\overline{\omega}(H)=\emptyset\)_._
Proof.: We arrange that \(H\) is \(\mathrm{d}\)-perfect. Hence \(H\supseteq\mathbb{R}\) is Liouville closed and \(\overline{\omega}(H)=\omega(H)\) by Corollary 5.5.3. From \(x^{-1}=x^{\dagger}\in\Gamma(H)\) and \(\sigma(x^{-1})=2x^{-2}\asymp(x^{-1})^{\prime}\prec\ell^{\dagger}\) for all \(\ell\succ 1\) in \(H\) we obtain that each element of \(\Gamma(H)\) exceeds \(\sigma(x^{-1})\in\sigma\bigl(\Gamma(H)\bigr)\); since \(\omega(H)\) is downward closed, \(\Gamma(H)\cap\omega(H)=\emptyset\) now follows from \(\omega(H)\cap\sigma\bigl(\Gamma(H)\bigr)=\emptyset\) [ADH, remark before 11.8.29].
Next some consequences of Proposition 5.5.1 for more general linear differential equations of order \(2\): Let \(g,h\in H\), and consider the linear differential equation
\[Y^{\prime\prime}+gY^{\prime}+hY\ =\ 0 \tag{$\widetilde{\mathrm{L}}$}\]
over \(H\). An easy induction on \(n\) shows that for a solution \(y\in\mathcal{C}^{2}\) of (\(\widetilde{\mathrm{L}}\)) we have \(y\in\mathcal{C}^{n}\) with \(y^{(n)}\in Hy+Hy^{\prime}\) for all \(n\), so \(y\in\mathcal{C}^{<\infty}\). To reduce (\(\widetilde{\mathrm{L}}\)) to an equation (4L) we take \(f:=\omega(g)+4h=-2g^{\prime}-g^{2}+4h\in H\), take \(a\in\mathbb{R}\), and take a representative of \(g\) in \(\mathcal{C}^{1}_{a}\), also denoted by \(g\), and let \(G\in(\mathcal{C}^{2})^{\times}\) be the germ of
\[t\mapsto\exp\biggl{(}-\frac{1}{2}\int_{a}^{t}g(s)\,ds\biggr{)}\qquad(t\geqslant a).\]
This gives an isomorphism \(y\mapsto Gy\) from the \(\mathbb{R}\)-linear space of solutions of (4L) in \(\mathcal{C}^{2}\) onto the \(\mathbb{R}\)-linear space of solutions of (\(\widetilde{\mathrm{L}}\)) in \(\mathcal{C}^{2}\), and \(y\in\mathcal{C}^{2}\) oscillates iff \(Gy\) oscillates: from \(G^{\dagger}=-\frac{1}{2}g\) a direct computation gives \((Gy)^{\prime\prime}+g(Gy)^{\prime}+h\,Gy=G\bigl(y^{\prime\prime}+\frac{f}{4}y\bigr)\) for \(y\in\mathcal{C}^{2}\). By Proposition 5.3.2, \(G\in\mathrm{D}(H)\). Using \(\frac{f}{4}=-\frac{1}{2}g^{\prime}-\frac{1}{4}g^{2}+h\) we now obtain the following germ version of Corollary 5.2.25:
**Corollary 5.5.7**.: _The following are equivalent:_
* _some solution in_ \(\mathcal{C}^{2}\) _of_ (\(\widetilde{\mathrm{L}}\)) _oscillates;_
* _all nonzero solutions in_ \(\mathcal{C}^{2}\) _of_ (\(\widetilde{\mathrm{L}}\)) _oscillate;_
* \(-\frac{1}{2}g^{\prime}-\frac{1}{4}g^{2}+h\) _generates oscillations._
_Moreover, if \(-\frac{1}{2}g^{\prime}-\frac{1}{4}g^{2}+h\) does not generate oscillations, then all solutions of (\(\widetilde{\mathrm{L}}\)) in \(\mathcal{C}^{2}\) belong to \(\mathrm{D}(H)\)._
Set \(A:=\partial^{2}+g\partial+h\), and let \(f=\omega(g)+4h\), \(G\) be as above. Then \(A_{\ltimes G}=\partial^{2}+\frac{f}{4}\). Thus by combining Corollary 5.5.5 and Corollary 5.5.7 we obtain:
**Corollary 5.5.8**.: _If (\(\widetilde{\mathrm{L}}\)) has no oscillating solution in \(\mathcal{C}^{2}\), and \({y\in\mathcal{C}^{2}}\) is such that \({y^{\prime\prime}+gy^{\prime}+hy\in H}\), then \({y\in\mathrm{D}(H)}\)._
The next corollary follows from Proposition 5.5.1 and [ADH, 5.1.21]:
**Corollary 5.5.9**.: _The following are equivalent, for \({A\in H[\partial]}\) and \({f}\) as above:_
* \({f}/{4}\) _does not generate oscillations;_
* \({A}\) _splits over some Hardy field extension of_ \(H\)_;_
* \({A}\) _splits over_ \(\mathrm{D}(H)\)_._
For \({A\in H[\partial]}\) and \({f}\) as before we have \({A}_{\ltimes G}=\partial^{2}+\frac{f}{4}\) and \({G^{\dagger}=-\frac{1}{2}g\in H}\), so:
**Corollary 5.5.10**.: _A splits over \({H[i]}\Longleftrightarrow\partial^{2}+\frac{f}{4}\) splits over \({H[i]}\)._
Proposition 5.5.1 and its corollaries 5.5.5-5.5.8 are from [33, Theorems 16.17, 16.18, 16.19], and Corollary 5.5.3 is essentially [33, Lemma 17.1].
Proposition 5.5.1 applies only when (4L) has a solution in \((\mathcal{C}^{2})^{\times}\). Such a solution might not exist, but (4L) does have \(\mathbb{R}\)-linearly independent solutions \({y_{1},y_{2}\in\mathcal{C}^{2}}\), so \({w:=y_{1}y_{2}^{\prime}-y_{1}^{\prime}y_{2}\in\mathbb{R}^{\times}}\). Set \({y:=y_{1}+y_{2}i}\). Then \({4y^{\prime\prime}+fy=0}\) and \({y\in\mathcal{C}^{2}[i]^{\times}}\), and for \({z=2y^{\dagger}\in\mathcal{C}^{1}[i]^{\times}}\) we have \({-2z^{\prime}-z^{2}=f}\). Now
\[z\ =\ \frac{2y_{1}^{\prime}+2{\mathrm{i}}y_{2}^{\prime}}{y_{1}+{ \mathrm{i}}y_{2}}\ =\ \frac{2y_{1}^{\prime}y_{1}+2y_{2}^{\prime}y_{2}-2{\mathrm{i}}(y_{1}^{ \prime}y_{2}-y_{1}y_{2}^{\prime})}{y_{1}^{2}+y_{2}^{2}}\ =\ \frac{2(y_{1}^{\prime}y_{1}+y_{2}^{\prime}y_{2})+2{ \mathrm{i}}w}{y_{1}^{2}+y_{2}^{2}},\] \[\quad\quad\text{so}\ \ \operatorname{Re}z\ =\ \frac{2(y_{1}^{\prime}y_{1}+y_{2}^{\prime}y_{2})}{y_{1}^{2}+y_{2}^{2}}\in \mathcal{C}^{1},\qquad\operatorname{Im}z\ =\ \frac{2w}{y_{1}^{2}+y_{2}^{2}}\in\mathcal{C}^{2}.\]
Thus \(\operatorname{Im}z\in(\mathcal{C}^{2})^{\times}\) and \((\operatorname{Im}z)^{\dagger}=-\operatorname{Re}z\), and so
\[\sigma(\operatorname{Im}z)\ =\ \omega\big{(}-(\operatorname{Im}z)^{\dagger}+( \operatorname{Im}z){\mathrm{i}}\big{)}\ =\ \omega(z)\ =\ f\qquad\text{in }\mathcal{C}^{1}.\]
Replacing \(y_{1}\) by \(-y_{1}\) changes \(w\) to \(-w\); this way we can arrange \(w>0\), so \(\operatorname{Im}z>0\).
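A concrete instance: for \(f=4\) the equation (4L) reads \(Y^{\prime\prime}+Y=0\); taking \(y_{1}=\cos x\), \(y_{2}=\sin x\) gives \(w=1\), \(y=\mathrm{e}^{x\mathrm{i}}\), \(z=2y^{\dagger}=2\mathrm{i}\), hence \(\operatorname{Re}z=0\), \(\operatorname{Im}z=2\), and indeed \(\sigma(2)=2^{2}=4=f\).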
Conversely, every \({u\in(\mathcal{C}^{2})^{\times}}\) such that \({u>0}\) and \({\sigma(u)=f}\) arises in this way. To see this, suppose we are given such \({u}\), take \({\phi\in\mathcal{C}^{3}}\) with \({\phi^{\prime}=\frac{1}{2}u}\), and set
\[y_{1}\ :=\ \frac{1}{\sqrt{u}}\cos{\phi},\qquad y_{2}\ :=\ \frac{1}{\sqrt{u}}\sin{ \phi}\qquad\text{(elements of $\mathcal{C}^{2}$)}.\]
Then \(\operatorname{wr}(y_{1},y_{2})=1/2\), and \(y_{1}\), \(y_{2}\) solve (4L). To see the latter, consider
\[y\ :=\ y_{1}+y_{2}{\mathrm{i}}\ =\ \frac{1}{\sqrt{u}}\operatorname{e}^{\phi{ \mathrm{i}}}\in\mathcal{C}^{2}[{\mathrm{i}}]^{\times}\]
and note that \({z:=2y^{\dagger}}\) satisfies
\[\omega(z)\ =\ \omega(-{u^{\dagger}+ui})\ =\ \sigma(u)\ =\ f,\]
hence \({4y^{\prime\prime}+fy=0}\). The computation above shows \(\operatorname{Im}z=1/(y_{1}^{2}+y_{2}^{2})=u\). We have \({\phi^{\prime}>0}\), so either \({\phi>\mathbb{R}}\) or \({\phi-c}\prec 1\) for some \({c\in\mathbb{R}}\), with \({\phi>\mathbb{R}}\) iff \({f}/{4}\) generates oscillations. As to uniqueness of the above pair \((y_{1},y_{2})\), we have:
**Lemma 5.5.11**.: _Suppose \({f\notin\overline{\omega}(H)}\). Let \({\widetilde{y}_{1},\widetilde{y}_{2}\in\mathcal{C}^{2}}\) be \(\mathbb{R}\)-linearly independent solutions of (4L) with \(\operatorname{wr}({\widetilde{y}_{1},\widetilde{y}_{2}})=1/2\). Set \({\widetilde{y}:=\widetilde{y}_{1}+\widetilde{y}_{2}{\mathrm{i}}}\), \({\widetilde{z}:=2\widetilde{y}^{\dagger}}\). Then_
\[\operatorname{Im}{\widetilde{z}=u}\quad\Longleftrightarrow\quad{\widetilde{y} =\operatorname{e}^{\theta{\mathrm{i}}}y\ \text{for some }\theta\in\mathbb{R}}.\]
Proof.: If \(\widetilde{y}=\mathrm{e}^{\theta\mathrm{i}}\,y\) (\(\theta\in\mathbb{R}\)), then clearly \(\widetilde{z}=2\widetilde{y}^{\dagger}=2y^{\dagger}=z\), hence \(\operatorname{Im}z=\operatorname{Im}\widetilde{z}\). For the converse, let \(A\) be the invertible \(2\times 2\) matrix with real entries and \(Ay=\widetilde{y}\); here \(y=(y_{1},y_{2})^{t}\) and \(\widetilde{y}=(\widetilde{y}_{1},\widetilde{y}_{2})^{t}\), column vectors with entries in \(\mathcal{C}^{2}\). As in the proof of [ADH, 4.1.18], \(\operatorname{wr}(y_{1},y_{2})=\operatorname{wr}(\widetilde{y}_{1},\widetilde{y}_{2})\) yields \(\det A=1\).
Suppose \(\operatorname{Im}\widetilde{z}=u\), so \(y_{1}^{2}+y_{2}^{2}=\widetilde{y}_{1}^{2}+\widetilde{y}_{2}^{2}\). Choose \(a\in\mathbb{R}\) and representatives for \(u\), \(y_{1}\), \(y_{2}\), \(\widetilde{y}_{1}\), \(\widetilde{y}_{2}\) in \(\mathcal{C}_{a}\), denoted by the same symbols, such that in \(\mathcal{C}_{a}\) we have \(Ay=\widetilde{y}\) and \(y_{1}^{2}+y_{2}^{2}=\widetilde{y}_{1}^{2}+\widetilde{y}_{2}^{2}\), and \(u(t)\cdot\big{(}y_{1}(t)^{2}+y_{2}(t)^{2}\big{)}=1\) for all \(t\geqslant a\). With \(\|\cdot\|\) the usual euclidean norm on \(\mathbb{R}^{2}\), we then have \(\|Ay(t)\|=\|y(t)\|=1/\sqrt{u(t)}\) for \(t\geqslant a\). Since \(f/4\) generates oscillations, we have \(\phi>\mathbb{R}\), and we conclude that \(\|Av\|=1\) for all \(v\in\mathbb{R}^{2}\) with \(\|v\|=1\). It is well-known that then \(A=\big{(}\begin{smallmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{smallmatrix}\big{)}\) with \(\theta\in\mathbb{R}\) (see, e.g., [122, Chapter XV, Exercise 2]), so \(\widetilde{y}=\mathrm{e}^{\theta\mathrm{i}}\,y\).
The observations above will be used in the proofs of Theorems 5.6.2 and 7.5.32 below. We finish with miscellaneous historical remarks (not used later):
_Remarks_.: The connection between the second-order linear differential equation (4L) and the third-order non-linear differential equation \(\sigma(y)=f\) was first investigated by Kummer [119] in 1834. Appell [3] noted that the linear differential equation
\[Y^{\prime\prime\prime}+fY^{\prime}+(f^{\prime}/2)Y\ =\ 0\]
has \(\mathbb{R}\)-linearly independent solutions \(y_{1}^{2},y_{1}y_{2},y_{2}^{2}\in\mathcal{C}^{<\infty}\), though some cases were known earlier [49, 131]; in particular, \(1/u=y_{1}^{2}+y_{2}^{2}\) is a solution. See also Lemma 2.4.23. Hartman [90, 93] investigates monotonicity properties of \(y_{1}^{2}+y_{2}^{2}\). Steen [197] in 1874, and independently Pinney [153], remarked that \(r:=1/\sqrt{u}=\sqrt{y_{1}^{2}+y_{2}^{2}}\in\mathcal{C}^{<\infty}\) satisfies \(4r^{\prime\prime}+fr=1/r^{3}\). (See also [163].)
**Complex exponentials over Hardy fields.** We now use some of the above to prove an extension theorem for Hardy fields (cf. [33, Lemma 11.6(6)]):
**Proposition 5.5.12**.: _If \(\phi\in H\) and \(\phi\preccurlyeq 1\), then \(\cos\phi,\sin\phi\in\operatorname{D}(H)\)._
Proof.: Replacing \(H\) by \(\operatorname{D}(H)\) we arrange \(\operatorname{D}(H)=H\). Then by Proposition 5.3.2, \(H\supseteq\mathbb{R}\) is a Liouville closed \(H\)-field, and by Corollary 5.5.3, \(\omega(H)\) is downward closed. Hence by Lemma 1.2.20, \(H\) is trigonometrically closed. Let now \(\phi\in H\) and \(\phi\preccurlyeq 1\), and set \(K:=H[\mathrm{i}]\). Then \((\mathrm{e}^{\phi\mathrm{i}})^{\dagger}=\phi^{\prime}\mathrm{i}\in K^{\dagger}\), so \(\cos\phi+\mathrm{i}\sin\phi=\mathrm{e}^{\phi\mathrm{i}}\in K\) using \(K\supseteq\mathbb{C}\). Thus \(\cos\phi,\sin\phi\in H\).
**Corollary 5.5.13**.: _Let \(\phi\in H\) and \(\phi\preccurlyeq 1\). Then \(\cos\phi\), \(\sin\phi\) generate a \(\operatorname{d}\)-algebraic Hardy field extension \(E:=H(\cos\phi,\sin\phi)\) of \(H\). If \(H\) is a \(\mathcal{C}^{\infty}\)-Hardy field, then so is \(E\), and likewise with \(\mathcal{C}^{\omega}\) in place of \(\mathcal{C}^{\infty}\)._
Recall that for \(\phi,\theta\in\mathbb{R}\) we have
\[\cos(\phi+\theta)\ =\ \cos(\phi)\cos(\theta)-\sin(\phi)\sin(\theta),\] \[\cos(\phi-\theta)\ =\ \cos(\phi)\cos(\theta)+\sin(\phi)\sin(\theta).\]
Recall also the bijection \(\arccos\colon[-1,1]\to[0,\pi]\), the inverse of the cosine function on \([0,\pi]\). It follows that for any \(a,b\in\mathbb{R}\) we have \(d\in\mathbb{R}\) such that
\[a\cos(\phi)+b\sin(\phi)\ =\ \sqrt{a^{2}+b^{2}}\cdot\cos(\phi+d)\text{ for all }\phi\in \mathbb{R}\text{:}\]
for \(a\), \(b\) not both \(0\) this holds with \(d=\arccos\big{(}a/\sqrt{a^{2}+b^{2}}\big{)}\) when \(b\leqslant 0\), and with \(d=-\arccos\big{(}a/\sqrt{a^{2}+b^{2}}\big{)}\) when \(b\geqslant 0\): by the first displayed identity it suffices that \(\cos d=a/\sqrt{a^{2}+b^{2}}\) and \(\sin d=-b/\sqrt{a^{2}+b^{2}}\), and these choices of \(d\) achieve this. For later use we record some consequences:
**Lemma 5.5.14** (Addition of sinusoids).: _Let \(y\in\mathcal{C}\). Then_
\[y=a\cos x+b\sin x\text{ for some }a,b\in\mathbb{R}\quad\Longleftrightarrow\quad y=c \cos(x+d)\text{ for some }c,d\in\mathbb{R}.\]
**Corollary 5.5.15**.: _Let \(\phi\in\mathcal{C}\). Then \(\mathbb{R}\cos\phi+\mathbb{R}\sin\phi\,=\,\big{\{}c\cos(\phi+d):\,c,d\in \mathbb{R}\big{\}}\)._
**Corollary 5.5.16**.: _Suppose \(H\supseteq\mathbb{R}\) is real closed and closed under integration, and let \(g,h\in H\). Then there is \(u\in H\) such that \(-\pi\leqslant u\leqslant\pi\) and \(g\cos\phi+h\sin\phi=\sqrt{g^{2}+h^{2}}\cdot\cos(\phi+u)\) for all \(\phi\in\mathcal{C}\): if \(h<0\) this holds for \(u=\arccos\big{(}g/\sqrt{g^{2}+h^{2}}\big{)}\), and if \(h>0\) it holds for \(u=-\arccos\big{(}g/\sqrt{g^{2}+h^{2}}\big{)}\)._
Proof.: On the interval \((-1,1)\) the function arccos is real analytic with derivative \(t\mapsto-1/\sqrt{1-t^{2}}\). Thus \(\arccos\big{(}g/\sqrt{g^{2}+h^{2}}\big{)}\in H\) for \(h\neq 0\).
**Corollary 5.5.17**.: _Let \(a\in\mathbb{R}\) and let \(g,\phi\in\mathcal{C}^{1}_{a}\) have germs in \(H\) such that \(g(t)\neq 0\) eventually, and \(\phi(t)\to+\infty\) as \(t\to+\infty\). Then there is a real \(b\geqslant a\) with the property that if \(s_{0},s_{1}\in[b,+\infty)\) with \(s_{0}<s_{1}\) are any successive zeros of \(y:=g\cos\phi\), then \(y^{\prime}\) has exactly one zero in the interval \((s_{0},s_{1})\)._
Proof.: By increasing \(a\) we arrange \(g(t)\neq 0\) and \(\phi^{\prime}(t)>0\) for all \(t\geqslant a\). Replacing \(g\) by \(-g\) if necessary we further arrange \(g(t)>0\) for all \(t\geqslant a\). Let \(s_{0},s_{1}\in[a,+\infty)\) with \(s_{0}<s_{1}\) be successive zeros of \(y\). Later we impose a suitable lower bound \(b\geqslant a\) on \(s_{0}\). Then \(\phi(s_{1})=\phi(s_{0})+\pi\), since \(s_{1}\) is the next zero of \(\cos\phi\) after \(s_{0}\). Also
\[y^{\prime}\ =\ g^{\prime}\cos\phi-g\phi^{\prime}\sin\phi\ =\ \sqrt{g^{\prime 2 }+(g\phi^{\prime})^{2}}\cos(\phi+u),\ \text{ where }\]
\[u\ =\ \arccos\big{(}g^{\prime}/\sqrt{g^{\prime 2}+(g\phi^{\prime})^{2}} \big{)},\ \text{ so }0<u(t)<\pi\text{ for all }t\geqslant a.\]
By Rolle, \(y^{\prime}\) has a zero in \((s_{0},s_{1})\). Let \(t\in(s_{0},s_{1})\) be a zero of \(y^{\prime}\). Then
\[\phi(s_{0})<\phi(t)<\phi(s_{0})+\pi,\quad\phi(t)+u(t)\in\phi(s_{0})+\mathbb{Z}\pi,\]
so \(\phi(t)+u(t)=\phi(s_{0})+\pi\). Take \(b\geqslant a\) in \(\mathbb{R}\) so large that \(u\) is differentiable on \([b,+\infty)\) and \(\phi^{\prime}(t)+u^{\prime}(t)>0\) for all \(t\geqslant b\); this is possible because \(u\preccurlyeq 1\) is \(H\)-hardian by Corollary 5.5.16, and \(\phi(t)+u(t)\to+\infty\) as \(t\to+\infty\). Assuming now that \(b\leqslant s_{0}\), we conclude that \(t\in(s_{0},s_{1})\) is uniquely determined by \(\phi(t)+u(t)=\phi(s_{0})+\pi\).
The \(H\)-asymptotic field extension \(K:=H[\mathrm{i}]\) of \(H\) is a differential subring of \(\mathcal{C}^{<\infty}[\mathrm{i}]\). To handle ultimate dents in \(H\) in Section 4.4, we sometimes assumed \(\mathrm{I}(K)\subseteq K^{\dagger}\), a condition that we consider more closely in the next proposition:
**Proposition 5.5.18**.: _Suppose \(H\supseteq\mathbb{R}\) is closed under integration. Then the following conditions are equivalent:_
* \(\mathrm{I}(K)\subseteq K^{\dagger}\)_;_
* \(\mathrm{e}^{f}\in K\) _for all_ \(f\in K\) _with_ \(f\prec 1\)_;_
* \(\mathrm{e}^{\phi},\cos\phi,\sin\phi\in H\) _for all_ \(\phi\in H\) _with_ \(\phi\prec 1\)_._
Proof.: Assume (i), and let \(f\in K\), \(f\prec 1\). Then \(f^{\prime}\in\mathrm{I}(K)\), so we have \(g\in K^{\times}\) with \(f^{\prime}=g^{\dagger}\) and thus \(\mathrm{e}^{f}=cg\) for some \(c\in\mathbb{C}^{\times}\). Therefore \(\mathrm{e}^{f}\in K\). This shows (i) \(\Rightarrow\) (ii), and (ii) \(\Rightarrow\) (iii) is clear. Assume (iii), and let \(f\in\mathrm{I}(K)\). Then \(f=g+h\mathrm{i}\), \(g,h\in\mathrm{I}(H)\). Taking \(\phi,\theta\prec 1\) in \(H\) with \(\phi^{\prime}=g\) and \(\theta^{\prime}=h\),
\[\exp(\phi+\theta\mathrm{i})\ =\ \exp(\phi)\big{(}\cos(\theta)+\sin(\theta) \mathrm{i}\big{)}\in H[\mathrm{i}]\ =\ K\]
has the property that \(f=\big{(}\exp(\phi+\theta\mathrm{i})\big{)}^{\dagger}\in K^{\dagger}\). This shows (iii) \(\Rightarrow\) (i).
From Propositions 5.5.12 and 5.5.18 we obtain:
**Corollary 5.5.19**.: _If \(H\) is \(\mathrm{d}\)-perfect, then \(\mathrm{I}(K)\subseteq K^{\dagger}\)._
Next we consider "polar coordinates" of nonzero elements of \(K\):
**Lemma 5.5.20**.: _Let \(f\in\mathcal{C}[\mathrm{i}]^{\times}\). Then \(|f|\in\mathcal{C}^{\times}\), and there exists \(\phi\in\mathcal{C}\) with \(f=|f|\,\mathrm{e}^{\phi\mathrm{i}}\); such \(\phi\) is unique up to addition of an element of \(2\pi\mathbb{Z}\). If also \(f\in\mathcal{C}^{r}[\mathrm{i}]^{\times}\), \(r\in\mathbb{N}\cup\{\infty,\omega\}\), then \(|f|\in\mathcal{C}^{r}\) and \(\phi\in\mathcal{C}^{r}\) for such \(\phi\)._
Proof.: The claims about \(|f|\) are clearly true. To show existence of \(\phi\) we may replace \(f\) by \(f/|f|\) to arrange \(|f|=1\). Take \(a\in\mathbb{R}\) and a representative of \(f\) in \(\mathcal{C}_{a}[\mathrm{i}]\), also denoted by \(f\), such that \(|f(t)|=1\) for all \(t\geqslant a\). The proof of [57, (9.8.1)] shows that for \(b\in(a,+\infty)\) and \(\phi_{a}\in\mathbb{R}\) with \(f(a)=\mathrm{e}^{\phi_{a}\mathrm{i}}\) there is a unique continuous function \(\phi\colon[a,b]\to\mathbb{R}\) such that \(\phi(a)=\phi_{a}\) and \(f(t)=\mathrm{e}^{\phi(t)\mathrm{i}}\) for all \(t\in[a,b]\), and if also \(f|_{[a,b]}\) is of class \(\mathcal{C}^{1}\), then so is this \(\phi\) with \(i\phi^{\prime}(t)=f^{\prime}(t)/f(t)\) for all \(t\in[a,b]\). With \(b\to+\infty\) this yields the desired result.
**Lemma 5.5.21**.: _Suppose \(H\supseteq\mathbb{R}\) is Liouville closed and \(f\in\mathcal{C}^{1}[\mathrm{i}]^{\times}\). Then \(f^{\dagger}\in K\) iff \(|f|\in H^{>}\) and \(f=|f|\,\mathrm{e}^{\phi\mathrm{i}}\) for some \(\phi\in H\). If in addition \(f\in K^{\times}\), then \(f=|f|\,\mathrm{e}^{\phi\mathrm{i}}\) for some \(\phi\preccurlyeq 1\) in \(H\)._
Proof.: Take \(\phi\in\mathcal{C}\) as in Lemma 5.5.20. Then \(\phi\in\mathcal{C}^{1}\) and \(\operatorname{Re}f^{\dagger}=|f|^{\dagger}\), \(\operatorname{Im}f^{\dagger}=\phi^{\prime}\). If \(f\in K^{\times}\), then the remarks preceding Lemma 1.2.16 give \(\phi^{\prime}\in\mathrm{I}(H)\), so \(\phi\preccurlyeq 1\).
**Corollary 5.5.22**.: _Suppose \(H\supseteq\mathbb{R}\) is Liouville closed with \(\mathrm{I}(K)\subseteq K^{\dagger}\). Let \(L\) be a differential subfield of \(\mathcal{C}^{<\infty}[\mathrm{i}]\) containing \(K\). Then \(L^{\dagger}\cap K=K^{\dagger}\)._
Proof.: Let \(f\in L^{\times}\) satisfy \(f^{\dagger}\in K\). Then \(f=|f|\,\mathrm{e}^{\phi\mathrm{i}}\) with \(|f|\in H^{>}\) and \(\phi\in H\), by Lemma 5.5.21. Hence \(\mathrm{e}^{\phi\mathrm{i}},\mathrm{e}^{-\phi\mathrm{i}}\in L\) and so \(\cos\phi=\frac{1}{2}(\mathrm{e}^{\phi\mathrm{i}}+\mathrm{e}^{-\phi\mathrm{i}})\in L\). In particular, \(\cos\phi\) does not oscillate, so \(\phi\preccurlyeq 1\) and thus \(f=|f|(\cos\phi+\mathrm{i}\sin\phi)\in K\) by Proposition 5.5.18.
**Corollary 5.5.23**.: _Let \(\phi\in H\), and suppose \(\mathrm{e}^{\phi\mathrm{i}}\sim f\) with \(f\in E[\mathrm{i}]^{\times}\) for some Hardy field extension \(E\) of \(H\). Then \(\phi\preccurlyeq 1\)._
Proof.: We can assume that \(E=H\) is Liouville closed and contains \(\mathbb{R}\). Towards a contradiction assume \(\phi\succ 1\). Lemma 5.5.21 yields \(\theta\preccurlyeq 1\) in \(H\) such that \(f=|f|\,\mathrm{e}^{\theta\mathrm{i}}\). Then \(\mathrm{e}^{(\phi-\theta)\mathrm{i}}\sim|f|\) and \(\phi-\theta\sim\phi\). Thus replacing \(f\), \(\phi\) by \(|f|\), \(\phi-\theta\), respectively, we arrange \(f\in H^{\times}\). Then \(\mathrm{e}^{\phi\mathrm{i}}=\cos\phi+\mathrm{i}\sin\phi\sim f\) in \(\mathcal{C}^{<\infty}[\mathrm{i}]\) gives \(\cos\phi\sim f\), contradicting that \(\cos\phi\) has arbitrarily large zeros.
**Corollary 5.5.24**.: _Let \(f\in K^{\times}\), \(\phi\in H\), so \(y:=f\,\mathrm{e}^{\phi\mathrm{i}}\in\mathcal{C}^{<\infty}[\mathrm{i}]^{\times}\). Then the following are equivalent:_
* \(\phi\preccurlyeq 1\)_;_
* \(y\in\mathrm{D}(H)[\mathrm{i}]\)_;_
* \(y\in E[\mathrm{i}]\) _for some Hardy field extension_ \(E\) _of_ \(H\)_;_
* \(y\sim g\) _for some Hardy field extension_ \(E\) _of_ \(H\) _and_ \(g\in E[\mathrm{i}]^{\times}\)_._
Proof.: Use Proposition 5.5.18 and Corollaries 5.5.19 and 5.5.23 to obtain the chain of implications (i) \(\Rightarrow\) (ii) \(\Rightarrow\) (iii) \(\Rightarrow\) (iv) \(\Rightarrow\) (i).
Finally, some observations about solutions to linear differential equations involving trigonometric functions.
**Lemma 5.5.25**.: _Let \(A\in K[\partial]^{\neq}\) and \(\phi\in H\). Then \(A(K\,\mathrm{e}^{\phi\mathrm{i}})\subseteq K\,\mathrm{e}^{\phi\mathrm{i}}\). Moreover, if \(K\) is \(r\)-linearly surjective with \(r:=\operatorname{order}A\), or \(K\) is \(1\)-linearly surjective and \(A\) splits over \(K\), then \(A(K\,\mathrm{e}^{\phi\mathrm{i}})=K\,\mathrm{e}^{\phi\mathrm{i}}\)._
Proof.: The differential operator \(B:=A_{\ltimes\mathrm{e}^{\phi\mathrm{i}}}=\mathrm{e}^{-\phi\mathrm{i}}A\,\mathrm{e}^{\phi\mathrm{i}}\in(\mathcal{C}^{<\infty}[\mathrm{i}])[\partial]\) of order \(r\) has coefficients in \(K\). This follows from extending [ADH, 5.8.8] by allowing the element \(h\) there (which is \(\mathrm{e}^{\phi\mathrm{i}}\) here) to be a unit in a differential ring extension of \(K\) instead of a nonzero element in a differential field extension of \(K\); the proof of [ADH, 5.8.8] goes through, _mutatis mutandis_, to give this extension. Thus if \(y\in K\), then \(A(y\,\mathrm{e}^{\phi\mathrm{i}})=B(y)\,\mathrm{e}^{\phi\mathrm{i}}\). Also, if \(A\) splits over \(K\), then so does \(B\). Hence if \(K\) is \(r\)-linearly surjective, or \(K\) is \(1\)-linearly surjective and \(A\) splits over \(K\), then for each \(b\in K\) we obtain \(y\in K\) with \(B(y)=b\), and so \(A(y\,\mathrm{e}^{\phi\mathrm{i}})=b\,\mathrm{e}^{\phi\mathrm{i}}\).
**Lemma 5.5.26**.: _Let \(A\in H[\partial]^{\neq}\), and suppose \(K\) is \(r\)-linearly surjective with \(r:=\operatorname{\mathrm{order}}A\), or \(K\) is \(1\)-linearly surjective and \(A\) splits over \(K\). Let also \(h,\phi\in H\). Then there are \(f,g\in H\) such that \(A(f\cos\phi+g\sin\phi)=h\cos\phi\)._
Proof.: Lemma 5.5.25 gives \(y\in K\) such that
\[A(y\operatorname{\mathrm{e}}^{\phi i})\ =\ h\operatorname{\mathrm{e}}^{\phi i}\ =\ (h \cos\phi)+(h\sin\phi)i.\]
Take \(f,g\in H\) with \(y=f-gi\). Then
\[y\operatorname{\mathrm{e}}^{\phi i}\ =\ (f\cos\phi+g\sin\phi)+(-g\cos\phi+f\sin \phi)i\]
and hence \(A(f\cos\phi+g\sin\phi)\ =\ h\cos\phi\).
**Lemma 5.5.27**.: _Let \(f,g\in K\), \(\phi\in H\), \(\phi\succ 1\), and \(f\cos\phi+g\sin\phi\in\mathbb{C}\subseteq\mathcal{C}[i]\). Then \(f=g=0\)._
Proof.: Take \(c\in\mathbb{C}\) such that \(f\cos\phi+g\sin\phi=c\). Since \(\phi\in H\) and \(\phi\succ 1\), there are arbitrarily large \(t\) with \(\phi(t)\in 2\mathbb{Z}\pi\), so \(f(t)=c\), and thus \(f=c\). There are also arbitrarily large \(t\) with \(\phi(t)\in(2\mathbb{Z}+1)\pi\), and this gives likewise \(-f=c\), so \(f=c=0\). Hence \(g\sin\phi=0\), which easily gives \(g=0\).
Combining Lemmas 5.5.26 and 5.5.27 gives:
**Corollary 5.5.28**.: _If \(K\) is \(1\)-linearly surjective, and \(h,\phi\in H\), \(\phi\succ 1\), then there are unique \(f,g\in H\) such that \((f\cos\phi+g\sin\phi)^{\prime}=h\cos\phi\)._
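For a concrete instance of Corollary 5.5.28, take \(h=\phi=x\) (for any \(H\) containing \(\mathbb{R}(x)\)): then \(f=1\), \(g=x\) satisfy the conclusion, since
\[(\cos x+x\sin x)^{\prime}\ =\ -\sin x+\sin x+x\cos x\ =\ x\cos x.\]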
**Behavior of \(\sigma\) and \(\omega\) under composition.** In this subsection we fix \(\ell\in\mathcal{C}^{1}\) with \(\ell>\mathbb{R}\) and \(\phi:=\ell^{\prime}\in H\), so \(\phi>0\). We use the superscript \(\circ\) as in the subsection on compositional conjugation in Hardy fields of Section 5.3. We refer to [ADH, 11.8] (or Section 1.4) for the definition of the subsets \(\Gamma(H)\), \(\Lambda(H)\), and \(\Delta(H)\) of \(H\). The bijection
\[y\mapsto(y/\phi)^{\circ}\ :\ H\to H^{\circ}\]
restricts to bijections \(\operatorname{I}(H)\to\operatorname{I}(H^{\circ})\) and \(\Gamma(H)\to\Gamma(H^{\circ})\), and the bijection
\[z\mapsto\big{(}(z+\phi^{\dagger})/\phi\big{)}^{\circ}\ :\ H\to H^{\circ}\]
restricts to bijections \(\Lambda(H)\to\Lambda(H^{\circ})\) and \(\Delta(H)\to\Delta(H^{\circ})\). (See the transformation formulas in [ADH, p. 520].) Consider the bijection
\[f\mapsto\Phi(f)\ :=\ \big{(}\big{(}f-\omega(-\phi^{\dagger})\big{)}/\phi^{2} \big{)}^{\circ}\ :\ H\to H^{\circ}.\]
Then for \(y\in H^{\times}\), \(z\in H\) we have
\[\sigma\big{(}(y/\phi)^{\circ}\big{)}\ =\ \Phi\big{(}\sigma(y)\big{)},\qquad\omega \big{(}\big{(}(z+\phi^{\dagger})/\phi\big{)}^{\circ}\big{)}\ =\ \Phi\big{(}\omega(z)\big{)}.\]
(See the formulas in [ADH, pp. 518-519].) Hence \(\Phi\) restricts to bijections
\[\sigma(H^{\times})\to\sigma\big{(}(H^{\circ})^{\times}\big{)},\quad\sigma \big{(}\operatorname{I}(H)^{\neq}\big{)}\to\sigma\big{(}\operatorname{I}(H^{ \circ})^{\neq}\big{)},\quad\sigma\big{(}\Gamma(H)\big{)}\to\sigma\big{(} \Gamma(H^{\circ})\big{)},\]
\[\omega(H)\to\omega(H^{\circ}),\quad\omega\big{(}\Lambda(H)\big{)}\to\omega\big{(} \Lambda(H^{\circ})\big{)},\quad\omega\big{(}\Delta(H)\big{)}\to\omega\big{(} \Delta(H^{\circ})\big{)}.\]
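For example, take \(\ell=\log x\) and \(\phi=1/x\) (assuming \(x\in H\) and \(H\) Liouville closed), so \(\ell^{\mathrm{inv}}=\mathrm{e}^{x}\): the first bijection then sends \((\log x)^{\dagger}=1/(x\log x)\in\Gamma(H)\) to \((1/\log x)^{\circ}=1/x=x^{\dagger}\in\Gamma(H^{\circ})\).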
**An example of compositional conjugation.** Which "changes of variable" preserve the general form of the linear differential equation (4L)? The next lemma and Corollary 5.5.30 below give an answer.
**Lemma 5.5.29**.: _Let \(K\) be a differential field, \(f\in K\), and \(P(Y):=4Y^{\prime\prime}+fY\). Then for \(g\in K^{\times}\) and \(\phi:=g^{-2}\) we have_
\[g^{3}P^{\phi}_{\times g}(Y)\ =\ 4Y^{\prime\prime}+g^{3}P(g)Y.\]
Proof.: Let \(g,\phi\in K^{\times}\). Then
\[P_{\times g}(Y) =\ 4gY^{\prime\prime}+8g^{\prime}Y^{\prime}+(4g^{\prime\prime}+fg )Y\ =\ 4gY^{\prime\prime}+8g^{\prime}Y^{\prime}+P(g)Y,\quad\text{so}\] \[P^{\phi}_{\times g}(Y) =\ 4g(\phi^{2}Y^{\prime\prime}+\phi^{\prime}Y^{\prime})+8g^{ \prime}\phi Y^{\prime}+P(g)Y\] \[=\ 4g\phi^{2}Y^{\prime\prime}+(4g\phi^{\prime}+8g^{\prime}\phi)Y ^{\prime}+P(g)Y.\]
Now \(4g\phi^{\prime}+8g^{\prime}\phi=0\) is equivalent to \(\phi^{\dagger}=-2g^{\dagger}\), which holds for \(\phi=g^{-2}\). For this \(\phi\) we get \(P^{\phi}_{\times g}(Y)=g^{-3}\big{(}4Y^{\prime\prime}+g^{3}P(g)Y\big{)}\), that is, \(g^{3}P^{\phi}_{\times g}(Y)=4Y^{\prime\prime}+g^{3}P(g)Y\).
Now let \(\ell\in\mathcal{C}^{1}\) be such that \(\ell>\mathbb{R}\) and \(\phi:=\ell^{\prime}\in H\), and let \(P:=4Y^{\prime\prime}+fY\) where \(f\in H\). Note that if \(y\in\mathcal{C}^{2}[\mathrm{i}]\) and \(4y^{\prime\prime}+fy=0\), then \(y\in\mathcal{C}^{<\infty}[\mathrm{i}]\). Towards using Lemma 5.5.29, suppose \(\phi=g^{-2}\), \(g\in H^{\times}\). Using notation from the previous subsection we set \(h:=(g^{3}P(g))^{\circ}\in H^{\circ}\) to obtain the following reduction of solving the differential equation (4L) to solving a similar equation over \(H^{\circ}\):
**Corollary 5.5.30**.: _Let \(y\in\mathcal{C}^{2}[\mathrm{i}]\). Then \(z:=(y/g)^{\circ}\in\mathcal{C}^{2}[\mathrm{i}]\), and_
\[4y^{\prime\prime}+fy\ =\ 0\quad\Longleftrightarrow\quad 4z^{\prime\prime}+hz \ =\ 0.\]
In particular, \(f/4\) generates oscillations iff \(h/4\) does. In connection with the formulas in the previous subsection, note that
\[g^{3}P(g)\ =\ g^{3}(4g^{\prime\prime}+fg)\ =\ \big{(}f-\omega(-\phi^{\dagger}) \big{)}/\phi^{2},\]
so \(h=\Phi(f)\).
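To illustrate with the Euler equation: take \(f=(1+c)/x^{2}\) \((c\in\mathbb{R})\), \(\ell=\log x\), \(\phi=1/x\), \(g=x^{1/2}\). Then \(-\phi^{\dagger}=1/x\) and \(\omega(1/x)=-2(1/x)^{\prime}-(1/x)^{2}=1/x^{2}\), so
\[h\ =\ \Phi(f)\ =\ \Big{(}\big{(}(1+c)/x^{2}-1/x^{2}\big{)}\,x^{2}\Big{)}^{\circ}\ =\ c,\]
and \(4y^{\prime\prime}+\frac{1+c}{x^{2}}\,y=0\) transforms into \(4z^{\prime\prime}+cz=0\). Hence \(f/4\) generates oscillations iff \(c>0\); this is the case \(n=0\) of the Kneser criterion recalled later in this section.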
**Lemma 5.5.31**.: _The increasing bijection_
\[f\mapsto\Phi(f)\ =\ \big{(}\big{(}f-\omega(-\phi^{\dagger})\big{)}/\phi^{2} \big{)}^{\circ}\ :\ H\to H^{\circ}\]
_maps \(\overline{\omega}(H)\) onto \(\overline{\omega}(H^{\circ})\)._
Proof.: First replace \(H\) by its real closure to arrange that \(H\) is real closed, then take \(g\in H^{\times}\) with \(g^{-2}=\phi\), and use the remarks following Corollary 5.5.30.
We use the above to prove the Fite-Leighton-Wintner oscillation criterion for self-adjoint second-order linear differential equations over \(H\) [69, 126, 211]. (See also [99, §2] and [198, p. 45].) Let \(A\in H[\partial]\) be self-adjoint of order \(2\). Then \(A=f\partial^{2}+f^{\prime}\partial+g\) where \(f,g\in H\), \(f\neq 0\), by the example following Lemma 2.4.19. For \(h\in\mathcal{C}\), let \(\int h\) denote a germ in \(\mathcal{C}^{1}\) with \((\int h)^{\prime}=h\).
**Corollary 5.5.32**.: _Suppose \(\int f^{-1}>\mathbb{R}\) and \(\int g>\mathbb{R}\). Then \(A(y)=0\) for some oscillating \(y\in\mathcal{C}^{<\infty}\)._
Proof.: We arrange that \(H\supseteq\mathbb{R}\) is Liouville closed. Then \(f^{-1},g\in\Gamma(H)\) by [ADH, 11.8.19]. Note that \(\phi:=f^{-1}\) is active in \(H\). Put \(B:=4\phi A_{\ltimes\phi^{1/2}}\), so \(B=4\partial^{2}+h\) with \(h:=\omega(-\phi^{\dagger})+4g\phi\). Then \(A(y)=0\) for some oscillating \(y\in\mathcal{C}^{<\infty}\) iff \(B(z)=0\) for some oscillating \(z\in\mathcal{C}^{<\infty}\) iff \(h\notin\overline{\omega}(H)\), by Corollary 5.5.7. The latter is equivalent to \((4g/\phi)^{\circ}\notin\overline{\omega}(H^{\circ})\), by Lemma 5.5.31 applied to \(h\) in place of \(f\). Now \(\Gamma(H^{\circ})\cap\overline{\omega}(H^{\circ})=\emptyset\) by Lemma 5.5.6, so it remains to note that \(4g\in\Gamma(H)\) yields \((4g/\phi)^{\circ}\in\Gamma(H^{\circ})\), by remarks in the previous subsection.
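For instance, \(A=\partial^{2}+x^{-1}\) (that is, \(f=1\), \(g=1/x\)) satisfies the hypotheses of Corollary 5.5.32, since \(\int f^{-1}=x>\mathbb{R}\) and \(\int g=\log x>\mathbb{R}\); hence \(y^{\prime\prime}+y/x=0\) has oscillating solutions.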
**More about \(\overline{\omega}(H)\,(^{*})\).** For later use (in particular, in Section 7.5) we study here the downward closed subset \(\overline{\omega}(H)\) of \(H\) in more detail. Recall that \(\omega(H)\subseteq\overline{\omega}(H)\), with equality for \(\mathrm{d}\)-perfect \(H\). (Corollary 5.5.3.) In [ADH, 16.3] we introduced the concept of a _\(\Lambda\Omega\)-cut_ in a pre-\(H\)-field; every pre-\(H\)-field has exactly one or exactly two \(\Lambda\Omega\)-cuts [ADH, remark before 16.3.19]. By [ADH, 16.3.14, 16.3.16]:
**Lemma 5.5.33**.: _Suppose \(H\) is \(\mathrm{d}\)-perfect. Then \(\big{(}\operatorname{I}(H),\Lambda(H),\overline{\omega}(H)\big{)}\) is a \(\Lambda\Omega\)-cut in \(H\), and this is the unique \(\Lambda\Omega\)-cut in \(H\) iff \(H\) is \(\omega\)-free._
Thus in general,
\[\big{(}\operatorname{I}\!\big{(}\mathrm{D}(H)\big{)}\cap H,\,\Lambda\big{(} \mathrm{D}(H)\big{)}\cap H,\,\overline{\omega}(H)\big{)}\]
is a \(\Lambda\Omega\)-cut in \(H\), and hence (see [ADH, p. 692]):
\[\omega(H)^{\downarrow}\ \subseteq\ \overline{\omega}(H)\ \subseteq\ H\setminus \sigma\big{(}\Gamma(H)\big{)}^{\uparrow}.\]
The classification of \(\Lambda\Omega\)-cuts in \(H\) from [ADH, 16.3] can be used to narrow down the possibilities for \(\overline{\omega}(H)\):
**Lemma 5.5.34**.: _Let \(\phi\in H^{>}\) be such that \(v\phi\notin(\Gamma_{H}^{\neq})^{\prime}\). Then_
\[\overline{\omega}(H)\ =\ \omega(-\phi^{\dagger})+\phi^{2}\smallo_{H}^{\downarrow}\quad\text{or}\quad\overline{\omega}(H)\ =\ \omega(-\phi^{\dagger})+\phi^{2}\mathcal{O}_{H}^{\downarrow}.\]
_The first alternative holds if \(H\) is grounded, and the second alternative holds if \(v\phi\) is a gap in \(H\) with \(\phi\asymp b^{\prime}\) for some \(b\asymp 1\) in \(H\)._
Proof.: Either \(v\phi=\max\Psi_{H}\) or \(v\phi\) is a gap in \(H\), by [ADH, 9.2]. The remark before the lemma yields a \(\Lambda\Omega\)-cut \((I,\Lambda,\Omega)\) in \(H\) where \(\Omega=\overline{\omega}(H)\). Now use the proofs of [ADH, 16.3.11, 16.3.12, 16.3.13] together with the transformation formulas [ADH, (16.3.1)] for \(\Lambda\Omega\)-cuts.
By [ADH, 16.3.15] we have:
**Lemma 5.5.35**.: _If \(H\) has asymptotic integration and the set \(2\Psi_{H}\) does not have a supremum in \(\Gamma_{H}\), then_
\[\overline{\omega}(H)\ =\ \omega\big{(}\Lambda(H)\big{)}^{\downarrow}\ =\ \omega(H)^{\downarrow}\quad\text{or}\quad\overline{\omega}(H)\ =\ H\setminus\sigma\big{(}\Gamma(H)\big{)}^{\uparrow}.\]
**Corollary 5.5.36**.: _Suppose \(H\) is \(\omega\)-free. Then_
\[\overline{\omega}(H)\ =\ \omega\big{(}\Lambda(H)\big{)}^{\downarrow}\ =\ \omega(H)^{ \downarrow}\ =\ H\setminus\sigma\big{(}\Gamma(H)\big{)}^{\uparrow}.\]
Proof.: By [ADH, 11.8.30] we have \(\omega\big{(}\Lambda(H)\big{)}^{\downarrow}=\omega(H)^{\downarrow}=H\setminus \sigma\big{(}\Gamma(H)\big{)}^{\uparrow}\). It follows from [ADH, 9.2.19] that \(2\Psi_{H}\) has no supremum in \(\Gamma_{H}\). Now use Lemma 5.5.35.
In the next lemma \(L\supseteq\mathbb{R}\) is a Liouville closed \(\mathrm{d}\)-algebraic Hardy field extension of \(H\) such that \(\omega(L)=\overline{\omega}(L)\). (By Corollary 5.5.3, this holds for \(L=\mathrm{D}(H)\).) Note that then \(\overline{\omega}(L)=\omega\big{(}\Lambda(L)\big{)}\) by [ADH, 11.8.20].
**Lemma 5.5.37**.: _If \(H\) is not \(\lambda\)-free or \(\overline{\omega}(H)=H\setminus\sigma\big{(}\Gamma(H)\big{)}^{\uparrow}\), then \(L\) is \(\omega\)-free._
Proof.: If \(H\) is \(\omega\)-free or not \(\lambda\)-free, then \(L\) is \(\omega\)-free by Lemmas 1.4.18 and 1.4.20. Suppose \(H\) is \(\lambda\)-free but not \(\omega\)-free, and \(\overline{\omega}(H)=H\setminus\sigma\big{(}\Gamma(H)\big{)}^{\uparrow}\). So [ADH, 11.8.30] gives \(\omega\in H\) with \(\omega\big{(}\Lambda(H)\big{)}<\omega<\sigma\big{(}\Gamma(H)\big{)}\). Then \(\omega\in\overline{\omega}(H)\subseteq\overline{\omega}(L)\subseteq\omega \big{(}\Lambda(L)\big{)}\). Thus \(L\) is \(\omega\)-free by Corollary 1.4.21.
Theorem 7.5.32, which depends on much of what follows, shows that for \(L=\mathrm{D}(H)\) the converse of Lemma 5.5.37 also holds.
**Proof of a conjecture of Boshernitzan \((^{*})\).** In this last subsection we establish [33, Conjecture 17.11]: Corollary 5.5.40. For this, with \(\ell_{n}:=\log_{n}x\) we set \(\gamma_{n}:=\ell_{n}^{\dagger}\), \(\lambda_{n}:=-\gamma_{n}^{\dagger}\), and \(\omega_{n}:=\omega(\lambda_{n})\), as in [ADH, 11.5, 11.7], so
\[\gamma_{n}\ =\ \frac{1}{\ell_{0}\ell_{1}\cdots\ell_{n}},\qquad\omega_{n}\ =\ \frac{1}{\ell_{0}^{2}}+\frac{1}{\ell_{0}^{2}\ell_{1}^{2}}+\cdots+\frac{1}{\ell_ {0}^{2}\ell_{1}^{2}\cdots\ell_{n}^{2}}.\]
(See also the beginning of Section 5.6 below.) For \(c\in\mathbb{R}\), the germ
\[\frac{\omega_{n}+c\gamma_{n}^{2}}{4}\ =\ \frac{1}{4}\left(\frac{1}{\ell_{0}^{2}}+ \frac{1}{(\ell_{0}\ell_{1})^{2}}+\cdots+\frac{1}{(\ell_{0}\cdots\ell_{n-1})^{2 }}+\frac{c+1}{(\ell_{0}\cdots\ell_{n})^{2}}\right)\]
generates oscillations iff \(c>0\). (A. Kneser [112], Riemann-Weber [206, p. 63]; cf. [97].) This follows from the next corollary applied to \(f=\omega_{n}+c\gamma_{n}^{2}\) and the grounded Hardy subfield \(H:=\mathbb{R}\langle\ell_{n}\rangle=\mathbb{R}(\ell_{0},\ldots,\ell_{n})\) of \(\mathrm{Li}(\mathbb{R})\):
**Corollary 5.5.38**.: _Let \(H\) be a grounded Hardy field such that for some \(m\) we have \(h\succ\ell_{m}\) for all \(h\in H\) with \(h\succ 1\). Then for \(f\in H\), the following are equivalent:_
1. \(f\in\overline{\omega}(H)\)_;_
2. \(f<\omega_{n}\) _for some_ \(n\)_;_
3. _there exists_ \(c\in\mathbb{R}^{>}\) _such that for all_ \(n\) _we have_ \(f<\omega_{n}+c\gamma_{n}^{2}\)_;_
4. \(f<\omega_{n}+c\gamma_{n}^{2}\) _for all_ \(n\) _and all_ \(c\in\mathbb{R}^{>}\)_._
Proof.: By [ADH, 10.3.2, 10.5.15] we may replace \(H\) by \(H(\mathbb{R})\) to arrange \(H\supseteq\mathbb{R}\). By Lemma 1.4.18, \(L:=\mathrm{Li}(H)\) is \(\omega\)-free. With \(H_{\omega}\) as in the proof of that lemma, one verifies easily that for each \(g\in H_{\omega}\) with \(g\succ 1\) there is an \(m\) such that \(g\succ\ell_{m}\). Hence \((\ell_{n})\) is a logarithmic sequence in \(L\), in the sense of [ADH, p. 499]. Now the implication (i) \(\Rightarrow\) (iv) follows from Corollary 5.5.36 and [ADH, 11.8.22], and (iv) \(\Rightarrow\) (iii) is obvious. Since \(0<\gamma_{n+1}\prec\gamma_{n}\) we obtain for \(c\in\mathbb{R}^{>}\):
\[\omega_{n+1}+c\gamma_{n+1}^{2}\ =\ \omega_{n}+\gamma_{n+1}^{2}+c\gamma_{n+1}^{2 }\ <\ \omega_{n}+\gamma_{n}^{2}\ =\ \sigma(\gamma_{n}).\]
In view of [ADH, 11.8.30, 11.8.21] and Corollary 5.5.36, this yields (iii) \(\Rightarrow\) (ii). Downward closedness of \(\overline{\omega}(H)\) implies (ii) \(\Rightarrow\) (i).
Using the above equivalence (i) \(\Leftrightarrow\) (ii) we recover [33, Theorem 17.7]:
**Corollary 5.5.39**.: _Suppose \(f\in\mathcal{C}\) is hardian and \(\mathrm{d}\)-algebraic over \(\mathbb{R}\). Then_
\[f\text{ generates oscillations}\ \Longleftrightarrow\ f>\omega_{n}/4\text{ for all }n.\]
Proof.: By Corollary 5.4.28 the Hardy field \(H:=\mathbb{R}\langle f\rangle\) satisfies the hypotheses of Corollary 5.5.38. Also, \(f\) generates oscillations iff \(4f\notin\overline{\omega}(H)\).
Using the above implication (iii) \(\Rightarrow\) (i) we obtain in the same way:
**Corollary 5.5.40**.: _Let \(f\in\mathcal{C}\) be hardian and \(\mathrm{d}\)-algebraic over \(\mathbb{R}\), and suppose there is a \(c\in\mathbb{R}^{>}\) with \(f<\omega_{n}+c\gamma_{n}^{2}\) for all \(n\). Then \(f/4\) does not generate oscillations._
In the beginning of this subsection we introduced the germs \(\ell_{n}\), and so this may be a good occasion to observe that the Hardy field \(H=\mathbb{R}(\ell_{0},\ell_{1},\ell_{2},\dots)\) they generate over \(\mathbb{R}\) is \(\omega\)-free: since \(H\) is ungrounded and \(H\) is the union of the grounded Hardy subfields \(\mathbb{R}(\ell_{0},\dots,\ell_{n})\), this follows from [ADH, 11.7.15]. Thus the Hardy field \(\operatorname{Li}(\mathbb{R})=\operatorname{Li}(H)\) is \(\omega\)-free as well.
### 5.6. Maximal Hardy Fields are \(\omega\)-Free
In this section we discuss the fundamental property of \(\omega\)-freeness from [ADH] in the context of Hardy fields. The main result is Theorem 5.6.2, from which it follows that every maximal Hardy field is \(\omega\)-free. As an application of this theorem, we answer a question from [34].
**The property of \(\omega\)-freeness for Hardy fields.**
Let \(H\supseteq\mathbb{R}\) be a Liouville closed Hardy field. Note that then \(x\in H\) and \(\log f\in H\) for all \(f\in H^{>}\). To work with \(\omega\)-freeness for \(H\) we introduce the "iterated logarithms" \(\ell_{\rho}\); more precisely, transfinite recursion yields a sequence \((\ell_{\rho})\) in \(H^{>\mathbb{R}}\) indexed by the ordinals \(\rho\) less than some infinite limit ordinal \(\kappa\) as follows: \(\ell_{0}=x\), and \(\ell_{\rho+1}:=\log\ell_{\rho}\); if \(\lambda\) is an infinite limit ordinal such that all \(\ell_{\rho}\) with \(\rho<\lambda\) have already been chosen, then we pick \(\ell_{\lambda}\) to be any element in \(H^{>\mathbb{R}}\) such that \(\ell_{\lambda}\prec\ell_{\rho}\) for all \(\rho<\lambda\), if there is such an \(\ell_{\lambda}\), while if there is no such \(\ell_{\lambda}\), we put \(\kappa:=\lambda\). From \((\ell_{\rho})\) we obtain the sequences \((\gamma_{\rho})\) in \(H^{>}\) and \((\lambda_{\rho})\) in \(H\) as follows:
\[\gamma_{\rho}\ :=\ \ell_{\rho}^{\dagger},\qquad\lambda_{\rho}\ :=\ -\gamma_{\rho}^{\dagger}\ =\ -\ell_{\rho}^{\dagger\dagger},\qquad\text{where}\quad\ell_{\rho}^{\dagger\dagger}\ :=\ (\ell_{\rho}^{\dagger})^{\dagger}.\]
Then \(\lambda_{\rho+1}=\lambda_{\rho}+\gamma_{\rho+1}\) and we have
\[\gamma_{0}\ =\ \ell_{0}^{-1},\qquad\gamma_{1}\ =\ (\ell_{0}\ell_{1})^{ -1}, \gamma_{2}\ =\ (\ell_{0}\ell_{1}\ell_{2})^{-1},\] \[\lambda_{0}\ =\ \ell_{0}^{-1},\qquad\lambda_{1}\ =\ \ell_{0}^{-1}+(\ell_{0}\ell_{1})^{-1}, \lambda_{2}\ =\ \ell_{0}^{-1}+(\ell_{0}\ell_{1})^{-1}+(\ell_{0}\ell_{1}\ell_{2})^{-1},\]
and so on. Indeed, \(v(\gamma_{\rho})\) is strictly increasing as a function of \(\rho\) and is cofinal in \(\Psi_{H}=\big{\{}v(f^{\dagger}):f\in H,\ 0\neq f\not\asymp 1\big{\}}\); we refer to [ADH, 11.5, 11.8] for this and some of what follows. Also, \((\lambda_{\rho})\) is a strictly increasing pc-sequence which is cofinal in \(\Lambda(H)\subseteq H\). We recall here the relevant descriptions from [ADH, 11.8]:
\[\Gamma(H)\ =\ \big{\{}a^{\dagger}:\ a\in H,\ a\succ 1\big{\}} \ =\ \{b\in H:\ b>\gamma_{\rho}\text{ for some }\rho\},\] \[\Lambda(H)\ =\ -\Gamma(H)^{\dagger} =\ \big{\{}-a^{\dagger\dagger}:\ a\in H,\ a\succ 1\big{\}}.\]
Here, \(\Gamma(H)\subseteq H^{>}\) is upward closed and \(\Lambda(H)\) is downward closed, since \(H\) is Liouville closed. The latter also gives that \(H\) is \(\lambda\)-free, that is, \((\lambda_{\rho})\) has no pseudolimit in \(H\). The function \(\omega\colon H\to H\) is strictly increasing on \(\Lambda(H)\) and setting \(\omega_{\rho}:=\omega(\lambda_{\rho})\) we obtain a strictly increasing pc-sequence \((\omega_{\rho})\) which is cofinal in \(\omega\big{(}\Lambda(H)\big{)}=\omega(H)\):
\[\omega_{0}\ =\ \ell_{0}^{-2},\qquad\omega_{1}\ =\ \ell_{0}^{-2}+(\ell_{0}\ell_{1})^{ -2},\qquad\omega_{2}\ =\ \ell_{0}^{-2}+(\ell_{0}\ell_{1})^{-2}+(\ell_{0}\ell_{1}\ell_{2})^{-2},\]
and so on; see [ADH, 11.7, 11.8] for this and some of what follows. Now \(H\) being \(\omega\)-free is equivalent to \((\omega_{\rho})\) having no pseudolimit in \(H\). By [ADH, 11.8.30] the pseudolimits of \((\omega_{\rho})\) in \(H\) are exactly the \(\omega\in H\) such that \(\omega(H)<\omega<\sigma\big{(}\Gamma(H)\big{)}\). Also, \(\sigma\) is strictly increasing on \(\Gamma(H)\). Thus \(H\) is not \(\omega\)-free if and only if there exists an \(\omega\in H\) such that \(\omega(H)<\omega<\sigma\big{(}\Gamma(H)\big{)}\).
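As a check on these formulas, with \(\omega(z)=-2z^{\prime}-z^{2}\) as in [ADH, 11.7] we have \(\omega(\lambda_{0})=\omega(1/x)=2/x^{2}-1/x^{2}=1/x^{2}=\omega_{0}\). Note also that \(\sigma(\gamma_{\rho})=\omega(-\gamma_{\rho}^{\dagger})+\gamma_{\rho}^{2}=\omega_{\rho}+\gamma_{\rho}^{2}\), since \(-\gamma_{\rho}^{\dagger}=\lambda_{\rho}\); this relation was already used in the proof of Corollary 5.5.38.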
**Lemma 5.6.1**.: _Let \(\gamma\in(\mathcal{C}^{1})^{\times}\), \(\gamma>0\), and \(\lambda:=-\gamma^{\dagger}\) with \(\lambda_{\rho}<\lambda<\lambda_{\rho}+\gamma_{\rho}\) in \(\mathcal{C}\), for all \(\rho\). Then \(\gamma_{\rho}>\gamma>\gamma_{\rho}/\ell_{\rho}=(-1/\ell_{\rho})^{\prime}\) in \(\mathcal{C}\), for all \(\rho\)._
Proof.: Pick \(a\in\mathbb{R}\) (independent of \(\rho\)) and functions in \(\mathcal{C}_{a}\) whose germs at \(+\infty\) are the elements \(\ell_{\rho}\), \(\boldsymbol{\gamma}_{\rho}\), \(\boldsymbol{\lambda}_{\rho}\) of \(H\); denote these functions also by \(\ell_{\rho}\), \(\boldsymbol{\lambda}_{\rho}\), \(\boldsymbol{\gamma}_{\rho}\). From \(\ell_{\rho}^{\dagger}=\boldsymbol{\gamma}_{\rho}\) and \(\boldsymbol{\gamma}_{\rho}^{\dagger}=-\boldsymbol{\lambda}_{\rho}\) in \(H\) we obtain \(c_{\rho},d_{\rho}>0\) such that for all sufficiently large \(t\geqslant a\),
\[\ell_{\rho}(t)\ =\ c_{\rho}\exp\biggl{[}\int_{a}^{t}\boldsymbol{\gamma}_{\rho}(s)\, ds\biggr{]}\,,\quad\boldsymbol{\gamma}_{\rho}(t)\ =\ d_{\rho}\exp\biggl{[}-\int_{a}^{t}\boldsymbol{\lambda}_{\rho}(s)\, ds\biggr{]}\,.\]
(How large is "sufficiently large" depends on \(\rho\).) Likewise we pick functions in \(\mathcal{C}_{a}\) whose germ at \(+\infty\) are \(\boldsymbol{\gamma}\), \(\boldsymbol{\lambda}\), and also denote these functions by \(\boldsymbol{\gamma}\), \(\boldsymbol{\lambda}\). From \(\boldsymbol{\gamma}^{\dagger}=-\boldsymbol{\lambda}\) in \(H\) we obtain a real constant \(d>0\) such that for all sufficiently large \(t\geqslant a\),
\[\boldsymbol{\gamma}(t)\ =\ d\exp\biggl{[}-\int_{a}^{t}\boldsymbol{\lambda}(s)\, ds\biggr{]}\,.\]
Also, \(\boldsymbol{\lambda}_{\rho}<\boldsymbol{\lambda}<\boldsymbol{\lambda}_{\rho}+ \boldsymbol{\gamma}_{\rho}\) yields constants \(a_{\rho},b_{\rho}\in\mathbb{R}\) such that for all \(t\geqslant a\)
\[\int_{a}^{t}\boldsymbol{\lambda}_{\rho}(s)\,ds\ <\ a_{\rho}+\int_{a}^{t} \boldsymbol{\lambda}(s)\,ds\ <\ b_{\rho}+\int_{a}^{t}\boldsymbol{\lambda}_{\rho}(s)\, ds+\int_{a}^{t}\boldsymbol{\gamma}_{\rho}(s)\,ds,\]
which by applying \(\exp(-\ast)\) yields that for all sufficiently large \(t\geqslant a\),
\[\frac{1}{d_{\rho}}\boldsymbol{\gamma}_{\rho}(t)\ >\ \frac{1}{\mathrm{e}^{a_{\rho} }\,d}\boldsymbol{\gamma}(t)\ >\ \frac{c_{\rho}}{\mathrm{e}^{b_{\rho}}\,d_{\rho}} \boldsymbol{\gamma}_{\rho}(t)/\ell_{\rho}(t).\]
Here the positive constant factors don't matter, since the valuation of \(\boldsymbol{\gamma}_{\rho}\) is strictly increasing and that of \(\boldsymbol{\gamma}_{\rho}/\ell_{\rho}=(-1/\ell_{\rho})^{\prime}\) is strictly decreasing with \(\rho\). Thus for all \(\rho\) we have \(\boldsymbol{\gamma}_{\rho}>\boldsymbol{\gamma}>\boldsymbol{\gamma}_{\rho}/ \ell_{\rho}=(-1/\ell_{\rho})^{\prime}\), in \(\mathcal{C}\).
We are now ready to prove a key result:
**Theorem 5.6.2**.: _Every Hardy field has a \(\mathrm{d}\)-algebraic \(\omega\)-free Hardy field extension._
Proof.: It is enough to show that every \(\mathrm{d}\)-maximal Hardy field is \(\omega\)-free. That reduces to showing that every non-\(\omega\)-free Liouville closed Hardy field containing \(\mathbb{R}\) has a proper \(\mathrm{d}\)-algebraic Hardy field extension. So assume \(H\supseteq\mathbb{R}\) is a Liouville closed Hardy field and \(H\) is not \(\omega\)-free. We shall construct a proper \(\mathrm{d}\)-algebraic Hardy field extension of \(H\). We have \(\omega\in H\) such that
\[\omega(H)\ <\ \omega\ <\ \sigma\bigl{(}\Gamma(H)\bigr{)}.\]
With \(\omega\) in the role of \(f\) in the discussion following Corollary 5.5.10, we have \(\mathbb{R}\)-linearly independent solutions \(y_{1},y_{2}\in\mathcal{C}^{2}\) of the differential equation \(4Y^{\prime\prime}+\omega Y=0\); in fact, \(y_{1},y_{2}\in\mathcal{C}^{<\infty}\). Then the complex solution \(y=y_{1}+y_{2}\mathrm{i}\) is a unit of \(\mathcal{C}^{<\infty}[\mathrm{i}]\), and so we have \(z:=2y^{\dagger}\in\mathcal{C}^{<\infty}[\mathrm{i}]\). We shall prove that the elements \(\boldsymbol{\lambda}:=\mathrm{Re}\,z\) and \(\boldsymbol{\gamma}:=\mathrm{Im}\,z\) of \(\mathcal{C}^{<\infty}\) generate a Hardy field extension \(K=H(\boldsymbol{\lambda},\boldsymbol{\gamma})\) of \(H\) with \(\omega=\sigma(\boldsymbol{\gamma})\in\sigma(K^{\times})\). We can assume that \(w:=y_{1}y_{2}^{\prime}-y_{1}^{\prime}y_{2}\in\mathbb{R}^{>}\), so \(\boldsymbol{\gamma}=2w/|y|^{2}\in(\mathcal{C}^{<\infty})^{\times}\) and \(\boldsymbol{\gamma}>0\).
We have \(\omega_{\rho}\rightsquigarrow\omega\), with \(\omega-\omega_{\rho}\sim\boldsymbol{\gamma}_{\rho+1}^{2}\) by [ADH, 11.7.1]. Till further notice we fix \(\rho\) and set \(g_{\rho}:=\boldsymbol{\gamma}_{\rho}^{-1/2}\), so \(2g_{\rho}^{\dagger}=\boldsymbol{\lambda}_{\rho}=-\boldsymbol{\gamma}_{\rho}^{\dagger}\). For \(h\in H^{\times}\) we also have \(\omega(2h^{\dagger})=-4h^{\prime\prime}/h\), hence \(P:=4Y^{\prime\prime}+\omega Y\in H\{Y\}\) gives
\[P(g_{\rho})\ =\ g_{\rho}(\omega-\omega_{\rho})\ \sim\ g_{\rho}\boldsymbol{\gamma}_{\rho+1}^{2},\]
and so with an eye towards using Lemma 5.5.29:
\[g_{\rho}^{3}P(g_{\rho})\ \sim\ g_{\rho}^{4}\boldsymbol{\gamma}_{\rho+1}^{2}\ \sim\ \boldsymbol{\gamma}_{\rho+1}^{2}/\boldsymbol{\gamma}_{\rho}^{2}\ \asymp\ 1/\ell_{\rho+1}^{2}.\]
Thus with \(g:=g_{\rho}=\gamma_{\!\rho}^{-1/2}\), \(\phi=g^{-2}=\gamma_{\!\rho}\) we have \(A_{\rho}\in{\mathbb{R}}^{>}\) such that
\[g^{3}P_{\times g}^{\phi}(Y)\ =\ 4Y^{\prime\prime}+g^{3}P(g)Y,\quad|g^{3}P(g)|\ \leqslant\ A_{\rho}/\ell_{\rho+1}^{2}. \tag{5.6.1}\]
From \(P(y)=0\) we get \(P_{\times g}^{\phi}(y/g)=0\), that is, \(y/g\in{\mathcal{C}}^{<\infty}[{\rm i}]^{\phi}\) is a solution of \(4Y^{\prime\prime}+g^{3}P(g)Y=0\), with \(g^{3}P(g)\in H\subseteq{\mathcal{C}}^{<\infty}\). Set \(\ell:=\ell_{\rho+1}\), so \(\ell^{\prime}=\ell_{\rho}^{\dagger}=\phi\). The subsection on compositional conjugation in Section 5.3 yields the isomorphism \(h\mapsto h^{\circ}=h\circ\ell^{\rm inv}\colon H^{\phi}\to H^{\circ}\) of \(H\)-fields, where \(\ell^{\rm inv}\) is the compositional inverse of \(\ell\). Under this isomorphism the equation \(4Y^{\prime\prime}+g^{3}P(g)Y=0\) corresponds to the equation
\[4Y^{\prime\prime}+f_{\rho}Y\ =\ 0,\qquad f_{\rho}\ :=\ (g^{3}P(g))^{\circ}\in H ^{\circ}\ \subseteq\ {\mathcal{C}}^{<\infty}.\]
By Corollary 5.5.30, the equation \(4Y^{\prime\prime}+f_{\rho}Y=0\) has the "real" solutions
\[y_{j,\rho}\ :=\ (y_{j}/g)^{\circ}\in({\mathcal{C}}^{<\infty})^{\circ}\ =\ { \mathcal{C}}^{<\infty}\qquad(j=1,2),\]
and the "complex" solution
\[y_{\rho}\ :=\ y_{1,\rho}+y_{2,\rho}{\rm i}\ =\ (y/g)^{\circ},\]
which is a unit of the ring \({\mathcal{C}}^{<\infty}[{\rm i}]\). Set \(z_{\rho}:=2y_{\rho}^{\dagger}\in{\mathcal{C}}^{<\infty}[{\rm i}]\). The bound in (5.6.1) gives \(|f_{\rho}|\ \leqslant\ A_{\rho}/x^{2}\), which by Corollary 5.2.20 yields positive constants \(B_{\rho}\), \(c_{\rho}\) such that \(|z_{\rho}|\ \leqslant\ B_{\rho}x^{c_{\rho}}\). Using \((f^{\circ})^{\prime}=(\phi^{-1}f^{\prime})^{\circ}\) for \(f\in{\mathcal{C}}^{<\infty}[{\rm i}]\) we obtain
\[z_{\rho}\ =\ 2\big{(}(y/g)^{\circ}\big{)}^{\dagger}\ =\ 2\big{(}\phi^{-1}(y/g)^{ \dagger}\big{)}^{\circ}\ =\ \big{(}(z-2g^{\dagger})/\phi\big{)}^{\circ}\]
In combination with the bound on \(|z_{\rho}|\) this yields
\[\left|\frac{z-2g^{\dagger}}{\phi}\right| \leqslant\ B_{\rho}\,\ell_{\rho+1}^{c_{\rho}},\quad\text{hence}\] \[|z-\uplambda_{\rho}| \leqslant\ B_{\rho}\,\ell_{\rho+1}^{c_{\rho}}\,\phi\ =\ B_{\rho}\,\ell_{\rho+1}^{c_{\rho}}\,\gamma_{\!\rho},\quad\text{and so}\] \[z\ =\ \uplambda_{\rho}+R_{\rho}\quad\text{where}\quad|R_{\rho}| \leqslant B_{\rho}\,\ell_{\rho+1}^{c_{\rho}}\,\gamma_{\!\rho}.\]
We now use this last estimate with \(\rho+1\) instead of \(\rho\), together with
\[\uplambda_{\rho+1}\ =\ \uplambda_{\rho}+\gamma_{\!\rho+1},\quad\ell_{\rho+1} \gamma_{\!\rho+1}\ =\ \gamma_{\!\rho}.\]
This yields
\[z\,=\,\uplambda_{\rho}+\gamma_{\!\rho+1}+R_{\rho+1}\] \[\quad\text{with}\ |R_{\rho+1}|\,\leqslant\,B_{\rho+1}\,\ell_{\rho+2}^ {c_{\rho+1}}\,\gamma_{\!\rho+1}\,=\,B_{\rho+1}\big{(}\ell_{\rho+2}^{c_{\rho+1} }/\ell_{\rho+1}\big{)}\,\gamma_{\!\rho},\] \[\quad\text{so}\quad z\,=\,\uplambda_{\rho}+o(\gamma_{\!\rho})\ \ \text{that is,}\ z-\uplambda_{\rho}\prec\gamma_{\!\rho},\] \[\quad\text{and thus}\quad\uplambda\,=\,\operatorname{Re}z\,=\, \uplambda_{\rho}+o(\gamma_{\!\rho}),\quad\gamma\,=\,\operatorname{Im}z\, \prec\,\gamma_{\!\rho}.\]
Now varying \(\rho\) again, \((\uplambda_{\rho})\) is a strictly increasing divergent pc-sequence in \(H\) which is cofinal in \(\Lambda(H)\). By the above, \(\uplambda=\operatorname{Re}z\) satisfies \(\Lambda(H)<\uplambda<\Delta(H)\). This yields an ordered subfield \(H(\uplambda)\) of \({\mathcal{C}}^{<\infty}\), which by Lemma 5.1.17 is an immediate valued field extension of \(H\) with \(\uplambda_{\rho}\leadsto\uplambda\). Now \(\uplambda=-\gamma^{\dagger}\) (see discussion before Lemma 5.5.11), so Lemma 5.6.1 gives \(\gamma_{\rho}>\gamma>(-1/\ell_{\rho})^{\prime}\) in \({\mathcal{C}}^{<\infty}\), for all \(\rho\). In view of Lemma 5.1.18 applied to \(H(\uplambda)\), \(\gamma\) in the roles of \(K\), \(f\), this yields an ordered subfield \(H(\uplambda,\gamma)\) of \({\mathcal{C}}^{<\infty}\). Moreover, \(\gamma\) is transcendental over \(H(\uplambda)\) and \(\gamma\) satisfies the second-order differential equation \(2yy^{\prime\prime}-3(y^{\prime})^{2}+y^{4}-\omega y^{2}=0\) over \(H\) (obtained from the relation \(\sigma(\gamma)=\omega\) by multiplication with \(\gamma^{2}\)). It follows that \(H(\uplambda,\gamma)\) is closed under the derivation of \(\mathcal{C}^{<\infty}\), and hence \(H(\uplambda,\gamma)=H\langle\gamma\rangle\) is a Hardy field that is d-algebraic over \(H\).
The proof also shows that every \(\mathcal{C}^{\infty}\)-Hardy field has an \(\omega\)-free d-algebraic \(\mathcal{C}^{\infty}\)-Hardy field extension, and the same with \(\mathcal{C}^{\omega}\) in place of \(\mathcal{C}^{\infty}\). In Section 7.5 below we show that the perfect hull of an \(\omega\)-free Hardy field remains \(\omega\)-free (Lemma 7.5.39), but that not every perfect Hardy field is \(\omega\)-free (Example 7.5.40).
**Improving Theorem 5.6.2** (\({}^{*}\)).: _In this subsection \(H\supseteq\mathbb{R}\) is a Liouville closed Hardy field and \(\omega\in H\), \(\gamma\in(\mathcal{C}^{2})^{\times}\) satisfy \(\omega(H)<\omega<\sigma\big{(}\Gamma(H)\big{)}\) and \(\sigma(\gamma)=\omega\)._
Lemma 1.4.18 leads to a more explicit version of Theorem 5.6.2:
**Corollary 5.6.3**.: _The germ \(\gamma\) generates a Hardy field extension \(H\langle\gamma\rangle\) of \(H\) with a gap \(v\gamma\), and so \(\operatorname{Li}\bigl{(}H\langle\gamma\rangle\bigr{)}\) is an \(\omega\)-free Hardy field extension of \(H\)._
Proof.: Since \(\sigma(-\gamma)=\sigma(\gamma)\), we may arrange \(\gamma>0\). The discussion before Lemma 5.5.11 with \(\omega\), \(\gamma\) in the roles of \(f\), \(g\), respectively, yields \(\mathbb{R}\)-linearly independent solutions \(y_{1},y_{2}\in\mathcal{C}^{<\infty}\) of the differential equation \(4Y^{\prime\prime}+\omega Y=0\) with Wronskian \(1/2\) such that \(\gamma=1/(y_{1}^{2}+y_{2}^{2})\). The proof of Theorem 5.6.2 shows that \(\gamma\) generates a Hardy field extension \(H\langle\gamma\rangle=H(\lambda,\gamma)\) of \(H\). Recall that \(v(\gamma_{\rho})\) is strictly increasing as a function of \(\rho\) and cofinal in \(\Psi_{H}\); as \(\gamma\prec\gamma_{\rho}\) for all \(\rho\), this gives \(\Psi_{H}<v\gamma\). Also \(\gamma>(-1/\ell_{\rho})^{\prime}>0\) for all \(\rho\) and \(v(1/\ell_{\rho})^{\prime}\) is strictly decreasing as a function of \(\rho\) and coinitial in \((\Gamma_{H}^{>})^{\prime}\), and so \(v\gamma<(\Gamma_{H}^{>})^{\prime}\). Then by [ADH, 13.7.1 and subsequent remark (2) on p. 626], \(v\gamma\) is a gap in \(H\langle\gamma\rangle\), so \(\operatorname{Li}\bigl{(}H\langle\gamma\rangle\bigr{)}\) is \(\omega\)-free by Lemma 1.4.18.
**Corollary 5.6.4**.: _Suppose \(\gamma>0\). Then with \(L:=\operatorname{Li}\bigl{(}H\langle\gamma\rangle\bigr{)}\),_
\[\omega\notin\overline{\omega}(H)\iff\gamma\in\Gamma(L),\qquad\omega\in \overline{\omega}(H)\iff\gamma\in\operatorname{I}(L).\]
Proof.: If \(\gamma\notin\Gamma(L)\), then \(\omega\in\omega(L)^{\downarrow}\) by [ADH, 11.8.31], hence \(\omega\in\overline{\omega}(H)\). If \(\gamma\in\Gamma(L)\), then we can use Corollary 5.5.36 for \(L\) to conclude \(\omega\notin\overline{\omega}(H)\). The equivalence on the right now follows from that on the left and [ADH, 11.8.19].
We also note that if \(\omega/4\) generates oscillations, then we have many choices for \(\gamma\):
**Corollary 5.6.5**.: _Suppose \(\omega/4\) generates oscillations. Then there are continuum many \(\widetilde{\gamma}\in(\mathcal{C}^{<\infty})^{\times}\) with \(\widetilde{\gamma}>0\) and \(\sigma(\widetilde{\gamma})=\omega\), and no Hardy field extension of \(H\) contains more than one such germ \(\widetilde{\gamma}\)._ (_In particular, \(H\) has continuum many maximal Hardy field extensions._)
Proof.: As before we arrange \(\gamma>0\) and set \(L:=\operatorname{Li}\bigl{(}H\langle\gamma\rangle\bigr{)}\). Take \(\phi\in L\) with \(\phi^{\prime}=\frac{1}{2}\gamma\) and consider the germs
\[y_{1}:=\frac{1}{\sqrt{\gamma}}\cos\phi,\quad y_{2}:=\frac{1}{\sqrt{\gamma}} \sin\phi\qquad\text{in }\mathcal{C}^{<\infty}.\]
The remarks preceding Lemma 5.5.11 show: \(y_{1}\), \(y_{2}\) solve the differential equation \(4Y^{\prime\prime}+\omega Y=0\), their Wronskian equals \(1/2\), and \(\phi\succ 1\) (since \(\omega/4\) generates oscillations). We now dilate \(y_{1}\), \(y_{2}\): let \(r\in\mathbb{R}^{>}\) be arbitrary and set
\[y_{1r}\ :=\ ry_{1},\qquad y_{2r}\ :=\ r^{-1}y_{2}.\]
Then \(y_{1r}\), \(y_{2r}\) still solve the equation \(4Y^{\prime\prime}+\omega Y=0\), and their Wronskian is \(1/2\). Put \(\gamma_{r}:=1/(y_{1r}^{2}+y_{2r}^{2})\in\mathcal{C}^{<\infty}\). Then \(\sigma(\gamma_{r})=\omega\). Let \(r,s\in\mathbb{R}^{>}\). Then
\[\gamma_{r}=\gamma_{s}\quad\Longleftrightarrow\quad y_{1r}^{2}+y_{2r}^{2}=y_{1s }^{2}+y_{2s}^{2}\quad\Longleftrightarrow\quad(r^{2}-s^{2})\cos^{2}\phi+\left( \frac{1}{r^{2}}-\frac{1}{s^{2}}\right)\sin^{2}\phi=0,\]
and hence \(\gamma_{r}=\gamma_{s}\) iff \(r=s\). Next, suppose \(M\) is a d-perfect Hardy field extension of \(H\) containing both \(\gamma\) and \(\widetilde{\gamma}\in(\mathcal{C}^{<\infty})^{\times}\) with \(\widetilde{\gamma}>0\) and \(\sigma(\widetilde{\gamma})=\omega\). Corollary 5.5.3
gives \(\omega\notin\omega(M)\), hence \(\gamma,\gamma\in\Gamma(M)\) by [ADH, 11.8.31], and thus \(\gamma=\gamma\) by [ADH, 11.8.29].
**Answering a question of Boshernitzan \((^{*})\).**
Following [34] we say that a germ \(y\) in \(\mathcal{C}\) is **translogarithmic** if \(r\leqslant y\leqslant\ell_{n}\) for all \(n\) and all \(r\in\mathbb{R}\). Thus for eventually strictly increasing \(y\succ 1\) in \(\mathcal{C}\), \(y\) is translogarithmic iff its compositional inverse \(y^{\mathrm{inv}}\) is transexponential. By Lemma 5.3.5 and Corollary 5.4.24 there exist \(\mathcal{C}^{\omega}\)-hardian translogarithmic germs; see also [ADH, 13.9]. Translogarithmic hardian germs are d-transcendental, by Corollary 5.4.28. In this subsection we use Theorem 5.6.2 to prove the following analogue of Corollary 5.4.24 for translogarithmic germs, thus giving a positive answer to Question 4 in [34, SS7]:
**Proposition 5.6.6**.: _Every maximal Hardy field contains a translogarithmic germ._
Let \(H\supseteq\mathbb{R}\) be a Liouville closed Hardy field; then \(H\) has no translogarithmic element iff \((\ell_{n})\) is a logarithmic sequence for \(H\) in the sense of [ADH, 11.5]. In this case, if \(H\) is also \(\omega\)-free, then for each translogarithmic \(H\)-hardian germ \(y\) the isomorphism type of the ordered differential field \(H\langle y\rangle\) over \(H\) is uniquely determined; more generally, by [ADH, 13.6.7, 13.6.8]:
**Lemma 5.6.7**.: _Let \(H\) be an \(\omega\)-free \(H\)-field, with asymptotic couple \((\Gamma,\psi)\), and let \(L=H\langle y\rangle\) be a pre-\(H\)-field extension of \(H\) with \(\Gamma^{<}<vy<0\). Then for all \(P\in H\{Y\}^{\neq}\) we have_
\[v\big{(}P(y)\big{)}\ =\ \gamma+\mathrm{ndeg}(P)vy+\mathrm{nwt}(P)\psi_{L}(vy) \qquad\text{where $\gamma=v^{\mathrm{e}}(P)\in\Gamma$,}\]
_and thus_
\[\Gamma_{L}\ =\ \Gamma\oplus\mathbb{Z}vy\oplus\mathbb{Z}\psi_{L}(vy)\qquad( \text{internal direct sum}).\]
_Moreover, if \(L^{*}=H\langle y^{*}\rangle\) is a pre-\(H\)-field extension of \(H\) with \(\Gamma^{<}<vy^{*}<0\) and \(\mathrm{sign}\,y=\mathrm{sign}\,y^{*}\), then there is a unique pre-\(H\)-field isomorphism \(L\to L^{*}\) which is the identity on \(H\) and sends \(y\) to \(y^{*}\)._
This lemma suggests how to obtain Proposition 5.6.6: follow the arguments in the proof of [ADH, 13.6.7]. In the rest of this subsection we carry out this plan. For this, let \(H\supseteq\mathbb{R}\) be a Liouville closed Hardy field and \(y\in\mathcal{C}^{<\infty}\).
**Lemma 5.6.8**.: _Suppose \(H\) is \(\omega\)-free and for all \(\ell\in H^{>\mathbb{R}}\) we have, in \(\mathcal{C}\):_
* \(1\prec y\prec\ell\)_;_
* \(\delta^{n}(y)\prec 1\) _for all_ \(n\geqslant 1\)_, where_ \(\delta:=\phi^{-1}\partial\)_,_ \(\phi:=\ell^{\prime}\)_;_
* \(y^{\prime}\in\mathcal{C}^{\times}\) _and_ \((1/\ell)^{\prime}\preccurlyeq y^{\dagger}\)_._
_Let \(P\in H\{Y\}^{\neq}\). Then in \(\mathcal{C}\) we have_
\[P(y)\sim a\,y^{d}\,(y^{\dagger})^{w}\qquad\text{where $a\in H^{\times}$, $d=\mathrm{ndeg}(P)$, $w=\mathrm{nwt}(P)$.}\]
\((\)_Hence \(y\) is hardian over \(H\) and \(\mathrm{d}\)-transcendental over \(H\).\()\)_
Proof.: Since \(H\) is real closed, it has a monomial group, so the material of [ADH, 13.3] applies. Then [ADH, 13.3.3] gives a monic \(D\in\mathbb{R}[Y]^{\neq}\), \(b\in H^{\times}\), \(w\in\mathbb{N}\), and an active element \(\phi\) of \(H\) with \(0<\phi\prec 1\) such that:
\[P^{\phi}\ =\ b\cdot D\cdot(Y^{\prime})^{w}+R,\qquad R\in H^{\phi}\{Y\},\ R\prec ^{\flat}_{\phi}b.\]
Set \(d:=\mathrm{ndeg}\,P\), and note that by [ADH, 13.1.9] we have \(d=\mathrm{deg}\,D+w\) and \(w=\mathrm{nwt}\,P\). Replace \(P\), \(b\), \(R\) by \(b^{-1}P\), \(1\), \(b^{-1}R\), respectively, to arrange \(b=1\). Take \(\ell\in H\) with \(\ell^{\prime}=\phi\), so \(\ell>\mathbb{R}\); we use the superscript \(\circ\) as in the subsection on compositional
conjugation of Section 5.3; in particular, \(y^{\circ}=y\circ\ell^{\mathrm{inv}}\) with \((y^{\circ})^{\prime}=(\phi^{-1}y^{\prime})^{\circ}\), so \((y^{\circ})^{\dagger}\succcurlyeq 1/x^{2}\) by hypothesis (iii) of our lemma. In \(H^{\circ}\{Y\}\) we now have
\[(P^{\phi})^{\circ}\;=\;D\cdot(Y^{\prime})^{w}+R^{\circ}\qquad\text{where}\qquad R ^{\circ}\preccurlyeq 1.\]
Evaluating at \(y^{\circ}\) we have \(D(y^{\circ})\big{(}(y^{\circ})^{\prime}\big{)}^{w}\sim(y^{\circ})^{d}\big{(}(y^{\circ})^{\dagger}\big{)}^{w}\) and so \(D(y^{\circ})\big{(}(y^{\circ})^{\prime}\big{)}^{w}\succcurlyeq x^{-2w}\), since \((y^{\circ})^{d}\succcurlyeq 1\) and \((y^{\circ})^{\dagger}\succcurlyeq 1/x^{2}\). By (i) we have \((y^{\circ})^{m}\prec x\) for \(m\geqslant 1\), and by (ii) we have \((y^{\circ})^{(n)}\preccurlyeq 1\) for \(n\geqslant 1\). Hence \(R^{\circ}(y^{\circ})\preccurlyeq h^{\circ}\) for some \(h\in H\) with \(h^{\circ}\preccurlyeq 1\). Thus in \(\mathcal{C}\) we have
\[(P^{\phi})^{\circ}(y^{\circ})\sim(y^{\circ})^{d}\big{(}(y^{\circ})^{\dagger} \big{)}^{w}.\]
Since \(P(y)^{\circ}=(P^{\phi})^{\circ}(y^{\circ})\), this yields \(P(y)\sim a\cdot y^{d}\cdot(y^{\dagger})^{w}\) for \(a=\phi^{-w}\).
**Corollary 5.6.9**.: _Suppose \(H\) is \(\omega\)-free and \(1\prec y\prec\ell\) for all \(\ell\in H^{>\mathbb{R}}\). Then the following are equivalent:_
1. \(y\) _is hardian over_ \(H\)_;_
2. _for all_ \(h\in H^{>\mathbb{R}}\) _there is an_ \(\ell\in H^{>\mathbb{R}}\) _such that_ \(\ell\preccurlyeq h\) _and_ \(y\)_,_ \(\ell\) _lie in a common Hardy field;_
3. _for all_ \(h\in H^{>\mathbb{R}}\) _there is an_ \(\ell\in H^{>\mathbb{R}}\) _such that_ \(\ell\preccurlyeq h\) _and_ \(y\circ\ell^{\mathrm{inv}}\) _is hardian._
Proof.: The implications (i) \(\Rightarrow\) (ii) \(\Rightarrow\) (iii) are obvious. Let \(\ell\in H^{>\mathbb{R}}\) be such that \(y^{\circ}:=y\circ\ell^{\mathrm{inv}}\) lies in a Hardy field \(H_{0}\); we arrange \(x\in H_{0}\). For \(\phi:=\ell^{\prime}\) we have \((\phi^{-1}y^{\dagger})^{\circ}=(y^{\circ})^{\dagger}\succ(1/x)^{\prime}=-1/x^{2}\) and thus \(y^{\dagger}\succ-\phi/\ell^{2}=(1/\ell)^{\prime}\). Also \(y^{\circ}\prec x\), hence \(z:=(y^{\circ})^{\prime}\prec x^{\prime}=1\) and so \(z^{(n)}\prec 1\) for all \(n\). With \(\updelta:=\phi^{-1}\partial\) and \(n\geqslant 1\) we have \(\updelta^{n}(y)^{\circ}=z^{(n-1)}\) and thus \(\updelta^{n}(y)\prec 1\). Moreover, for \(h\in H^{>\mathbb{R}}\) with \(\ell\preccurlyeq h\) and \(\theta:=h^{\prime}\) we have \(\theta^{-1}\partial=f\updelta\) where \(f:=\phi/\theta\in H\), \(f\preccurlyeq 1\). Let \(n\geqslant 1\). Then
\[(\theta^{-1}\partial)^{n}=(f\updelta)^{n}=G_{n}^{n}(f)\updelta^{n}+\cdots+G_{1}^{n}(f)\updelta\quad\text{ on }\mathcal{C}^{<\infty}\\ \text{ where }G_{j}^{n}\in\mathbb{Q}\{X\}\subseteq H^{\phi}\{X\}\text{ for }j=1,\ldots,n.\]
As \(\updelta\) is small as a derivation on \(H\), we have \(G_{j}^{n}(f)\preccurlyeq 1\) for \(j=1,\ldots,n\), and so \((\theta^{-1}\partial)^{n}(y)\prec 1\). Thus (iii) \(\Rightarrow\) (i) by Lemma 5.6.8.
Proof of Proposition 5.6.6.: Let \(H\supseteq\mathbb{R}\) be any \(\omega\)-free Liouville closed Hardy field not containing any translogarithmic element; in view of Theorem 5.6.2 it suffices to show that then some Hardy field extension of \(H\) contains a translogarithmic element. The remark before Proposition 5.6.6 yields a translogarithmic germ \(y\) in a \(\mathcal{C}^{\omega}\)-Hardy field \(H_{0}\supseteq\mathbb{R}\). Then for each \(n\), the germs \(y\), \(\ell_{n}\) are contained in a common Hardy field, namely \(\mathrm{Li}(H_{0})\). Hence \(y\) generates a proper Hardy field extension of \(H\) by (ii) \(\Rightarrow\) (i) in Corollary 5.6.9.
Proposition 5.6.6 goes through when "maximal" is replaced by "\(\mathcal{C}^{\infty}\)-maximal" or "\(\mathcal{C}^{\omega}\)-maximal". This follows from its proof, using also remarks after the proof of Theorem 5.6.2. Here is a conjecture that is much stronger than Proposition 5.6.6; it postulates an analogue of Corollary 5.4.23 for infinite "lower bounds":
_Conjecture_.: If \(H\) is maximal, then there is no \(y\in\mathcal{C}^{1}\) such that \(1\prec y\prec h\) for all \(h\in H^{>\mathbb{R}}\), and \(y^{\prime}\in\mathcal{C}^{\times}\).
We observe that in this conjecture we may restrict attention to \(\mathcal{C}^{\omega}\)-hardian germs \(y\):
**Lemma 5.6.10**.: _Suppose there exists \(y\in\mathcal{C}^{1}\) such that \(1\prec y\prec h\) for all \(h\in H^{>\mathbb{R}}\) and \(y^{\prime}\in\mathcal{C}^{\times}\). Then there exists such a germ \(y\) which is \(\mathcal{C}^{\omega}\)-hardian._
Proof.: Take \(y\) as in the hypothesis. Replace \(y\) by \(-y\) if necessary to arrange \(y>\mathbb{R}\). Now Theorem 5.4.22 yields a \(\mathcal{C}^{\omega}\)-hardian germ \(z\geqslant y^{\operatorname{inv}}\). By Lemma 5.3.5, the germ \(z^{\operatorname{inv}}\) is also \(\mathcal{C}^{\omega}\)-hardian, and \(\mathbb{R}<z^{\operatorname{inv}}\leqslant y\prec h\) for all \(h\in H^{>\mathbb{R}}\).
**Generalizing a theorem of Boshernitzan**(\({}^{*}\)).: _In this subsection \(H\) is a Hardy field._ Recall from Corollary 5.4.15 that for all \(f\in\operatorname{E}(H)\) there are \(h\in H(x)\) and \(n\) such that \(f\leqslant\exp_{n}h\). In particular, the sequence \((\exp_{n}x)\) is cofinal in \(\operatorname{E}(\mathbb{Q})\). By Theorem 5.4.20 and Corollary 5.4.27, \((\ell_{n})\) is coinitial in \(\operatorname{E}(\mathbb{Q})^{>\mathbb{R}}\); see also [33, Theorem 13.2]. In particular, for the Hardy field \(H=\operatorname{Li}(\mathbb{R})\), the subset \(H^{>\mathbb{R}}\) is coinitial in \(\operatorname{E}(\mathbb{Q})^{>\mathbb{R}}=\operatorname{E}(H)^{>\mathbb{R}}\), equivalently, \(\Gamma_{H}^{<}\) is cofinal in \(\Gamma_{\operatorname{E}(H)}^{<}\). We now generalize this fact, recalling from the end of Section 5.5 that \(\operatorname{Li}(\mathbb{R})\) is \(\omega\)-free:
**Theorem 5.6.11**.: _Suppose \(H\) is \(\omega\)-free. Then \(\Gamma_{H}^{<}\) is cofinal in \(\Gamma_{\operatorname{E}(H)}^{<}\)._
Proof.: Replacing \(H\) by \(\operatorname{Li}\bigl{(}H(\mathbb{R})\bigr{)}\) and using Theorem 1.4.1 we arrange that \(H\) is Liouville closed and \(H\supseteq\mathbb{R}\). Let \(y\in\operatorname{E}(H)\) and suppose towards a contradiction that \(\mathbb{R}<y<H^{>\mathbb{R}}\). Then \(f:=y^{\operatorname{inv}}\) is transexponential and hardian (Lemma 5.3.5). Lemma 5.4.19 gives a bound \(b\in\mathcal{C}\) for \(\mathbb{R}\langle f\rangle\). Lemma 5.4.17 gives \(\phi\in(\mathcal{C}^{\omega})^{\times}\) such that \(\phi^{(n)}\prec 1/b\) for all \(n\); set \(r:=\phi\cdot\sin x\in\mathcal{C}^{\omega}\). Then by Lemma 5.4.21 (with \(\mathbb{R}\langle f\rangle\) in place of \(H\)) we have \(Q(r)\prec 1\) for all \(Q\in\mathbb{R}\langle f\rangle\{Y\}\) with \(Q(0)=0\). Hence \(g:=f+r\) is eventually strictly increasing with \(g\succ 1\), and \(y=f^{\operatorname{inv}}\) and \(z:=g^{\operatorname{inv}}\in\mathcal{C}^{<\infty}\) do not lie in a common Hardy field. Thus in order to achieve the desired contradiction it suffices to show that \(z\) is \(H\)-hardian. For this we use Corollary 5.6.9. It is clear that \(f\sim g\), so \(y\sim z\) by Corollary 5.1.10, and thus \(1\prec z\prec\ell\) for all \(\ell\in H^{>\mathbb{R}}\). Let \(\ell\in H^{>\mathbb{R}}\) and \(\ell\prec x\); we claim that \(z\circ\ell^{\operatorname{inv}}\) is hardian, equivalently, by Lemma 5.3.5, that \(\ell\circ g=(z\circ\ell^{\operatorname{inv}})^{\operatorname{inv}}\) is hardian. Now \(\ell\circ f=(y\circ\ell^{\operatorname{inv}})^{\operatorname{inv}}\) is hardian and \(\ell\circ f\succ 1\), and Lemma 5.4.11 gives \(\ell\circ f-\ell\circ g\in(\mathcal{C}^{<\infty})^{\prec}\). Hence \(\ell\circ f\sim_{\infty}\ell\circ g\) by Lemma 5.4.12. For all \(n\) we have \(\ell_{n}\circ\ell=\log_{n}\ell\in H^{>\mathbb{R}}\), so \(y\leqslant\ell_{n}\circ\ell\), hence \(y\circ\ell^{\operatorname{inv}}\leqslant\ell_{n}\), which by compositional inversion gives \(\ell\circ f\geqslant\exp_{n}x\). So \(\ell\circ g\) is hardian by Corollary 5.4.14. Thus \(z\) is \(H\)-hardian by (iii) \(\Rightarrow\) (i) of Corollary 5.6.9.
If \(H\subseteq\mathcal{C}^{\infty}\) is \(\omega\)-free, then \(\Gamma_{H}^{<}\) is also cofinal in \(\Gamma_{\operatorname{E}^{\infty}(H)}^{<}\), and similarly with \(\omega\) in place of \(\infty\). (Same proof as that of the previous theorem.) We also note that if \(\operatorname{D}(H)=\operatorname{E}(H)\) (e.g., if \(H\) is bounded; cf. Theorem 5.4.20), then Theorem 5.6.11 already follows from Theorem 1.4.1.
### 5.7. Bounding Solutions of Linear Differential Equations
Let \(r\in\mathbb{N}^{\geqslant 1}\), and with \(\boldsymbol{i}\) ranging over \(\mathbb{N}^{r}\), let
\[P\ =\ P(Y,Y^{\prime},\ldots,Y^{(r-1)})\ =\ \sum_{\|\boldsymbol{i}\|<r}P_{ \boldsymbol{i}}Y^{\boldsymbol{i}}\ \in\ \mathcal{C}[i]\bigl{[}Y,Y^{\prime},\ldots,Y^{(r-1)}\bigr{]}\]
with \(P_{\boldsymbol{i}}\in\mathcal{C}[i]\) for all \(\boldsymbol{i}\) with \(\|\boldsymbol{i}\|<r\), and \(P_{\boldsymbol{i}}\neq 0\) for only finitely many such \(\boldsymbol{i}\). Then \(P\) gives rise to an evaluation map
\[y\mapsto P\bigl{(}y,y^{\prime},\ldots,y^{(r-1)}\bigr{)}\ :\ \mathcal{C}^{r-1}[i]\to\mathcal{C}[i].\]
Let \(y\in\mathcal{C}^{r}[i]\) satisfy the differential equation
\[y^{(r)}\ =\ P\bigl{(}y,y^{\prime},\ldots,y^{(r-1)}\bigr{)}. \tag{5.7.1}\]
In addition, \(\mathfrak{m}\) with \(0<\mathfrak{m}\prec 1\) is a hardian germ, and \(\eta\in\mathcal{C}\) is eventually increasing with \(\eta(t)>0\) eventually, and \(n\geqslant r\).
**Proposition 5.7.1**.: _Suppose \(P_{i}\preccurlyeq\eta\) for all \(i\), \(P(0)\preccurlyeq\eta\,\mathfrak{m}^{n}\), and \(y\preccurlyeq\mathfrak{m}^{n}\). Then_
\[y^{(j)}\ \preccurlyeq\eta^{j}\mathfrak{m}^{n-j(1+\varepsilon)}\qquad\text{ for }j=0,\ldots,r\text{ and all }\varepsilon\in\mathbb{R}^{>}\text{,}\]
_with \(\prec\) in place of \(\preccurlyeq\) if \(y\prec\mathfrak{m}^{n}\) and \(P(0)\prec\eta\,\mathfrak{m}^{n}\)._
The following immediate consequence is used in Section 5.10:
**Corollary 5.7.2**.: _Suppose \(f_{1},\ldots,f_{r}\in\mathcal{C}[i]\) and \(y\in\mathcal{C}^{r}[i]\) satisfy_
\[y^{(r)}+f_{1}y^{(r-1)}+\cdots+f_{r}y\ =\ 0,\qquad f_{1},\ldots,f_{r}\preccurlyeq \eta,\quad y\ \preccurlyeq\mathfrak{m}^{n}\text{.}\]
_Then \(y^{(j)}\preccurlyeq\eta^{j}\mathfrak{m}^{n-j(1+\varepsilon)}\) for \(j=0,\ldots,r\) and all \(\varepsilon\in\mathbb{R}^{>}\), with \(\prec\) in place of \(\preccurlyeq\) if \(y\prec\mathfrak{m}^{n}\)._
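In the simplest case \(r=1\) the conclusion of Corollary 5.7.2 can be checked directly: \(y^{\prime}=-f_{1}y\preccurlyeq\eta\,\mathfrak{m}^{n}\preccurlyeq\eta\,\mathfrak{m}^{n-(1+\varepsilon)}\), using \(\mathfrak{m}^{1+\varepsilon}\prec 1\).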
We obtain Proposition 5.7.1 from estimates due to Esclangon and Landau. To prepare for this we review an argument of Hardy-Littlewood which bounds the derivative \(f^{\prime}\) of a twice continuously differentiable function \(f\) in terms of \(f\), \(f^{\prime\prime}\). (For another statement in the same spirit see Lemma 5.9.10.)
**Bounding \(f^{\prime}\) in terms of \(f\), \(f^{\prime\prime}\).** Let \(a\in\mathbb{R}\), let \(\phi,\psi\colon[a,+\infty)\to(0,+\infty)\) be continuous and increasing, and \(f\in\mathcal{C}^{2}_{a}[\mathrm{i}]\). If \(f\) and \(f^{\prime\prime}\) are bounded, then so is \(f^{\prime}\) by the next lemma:
**Lemma 5.7.3** (Hardy-Littlewood [88]).: _Suppose \(|f|\leqslant\phi\), \(|f^{\prime\prime}|\leqslant\psi\), and let \(\varepsilon\in\mathbb{R}^{>}\). Then \(|f^{\prime}(t)|\leqslant(2+\varepsilon)\sqrt{\phi(t)\psi(t)}\), eventually._
Proof.: (Mordell [140]). First arrange \(a=0\) by translating the domain. Let \(0<s<t\). Taylor expansion at \(t\) yields \(\theta=\theta(s,t)\in[0,1]\) such that
\[f(t-s)\ =\ f(t)-sf^{\prime}(t)+\tfrac{1}{2}s^{2}f^{\prime\prime}(t-\theta s),\]
hence
\[|f(t-s)-f(t)+sf^{\prime}(t)|\ \leqslant\ \tfrac{1}{2}s^{2}\psi(t-\theta s) \leqslant\tfrac{1}{2}s^{2}\psi(t).\]
Since \(|f(t)|\leqslant\phi(t)\) and \(|f(t-s)|\leqslant\phi(t-s)\leqslant\phi(t)\), this yields
\[|f^{\prime}(t)|\ \leqslant\ (2/s)\phi(t)+(s/2)\psi(t).\]
Put \(\rho(t):=\sqrt{\phi(t)/\psi(t)}\) for \(t>0\). If \(t>2\rho(t)\), then \(s:=2\rho(t)\) in the above gives \(|f^{\prime}(t)|\leqslant 2\sqrt{\phi(t)\psi(t)}\). Hence if eventually \(t>2\rho(t)\), then we are done. Suppose otherwise; then \(\rho\) is unbounded, hence so is \(\psi\rho=\sqrt{\phi\psi}\). Take \(b>0\) such that \(\sqrt{\phi(t)\psi(t)}\geqslant|f^{\prime}(0)|/\varepsilon\) for all \(t\geqslant b\). We claim that \(|f^{\prime}(t)|\leqslant(2+\varepsilon)\sqrt{\phi(t)\psi(t)}\) for \(t\geqslant b\). If \(t>2\rho(t)\), then \(|f^{\prime}(t)|\leqslant 2\sqrt{\phi(t)\psi(t)}<(2+\varepsilon)\sqrt{\phi(t)\psi(t)}\), so suppose \(t\leqslant 2\rho(t)\). Then
\[|f^{\prime}(t)-f^{\prime}(0)|\ =\ \left|\int_{0}^{t}f^{\prime\prime}(s)\,ds \right|\ \leqslant\ \int_{0}^{t}|f^{\prime\prime}(s)|\,ds\ \leqslant\ \int_{0}^{t}\psi(s)\,ds\ \leqslant\ t\psi(t)\]
and hence
\[|f^{\prime}(t)|\ \leqslant\ |f^{\prime}(0)|+t\psi(t)\leqslant|f^{\prime}(0)|+2 \sqrt{\phi(t)\psi(t)}\ \leqslant\ (2+\varepsilon)\sqrt{\phi(t)\psi(t)}.\qed\]
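As a sanity check on Lemma 5.7.3, take \(f(t)=\sin(t^{2})\) with \(\phi=1\) (constant) and \(\psi(t)=5t^{2}\): then \(|f|\leqslant\phi\), and \(|f^{\prime\prime}(t)|=|2\cos(t^{2})-4t^{2}\sin(t^{2})|\leqslant 2+4t^{2}\leqslant 5t^{2}\) for \(t\geqslant\sqrt{2}\), while \(|f^{\prime}(t)|=|2t\cos(t^{2})|\leqslant 2t\leqslant(2+\varepsilon)\sqrt{\phi(t)\psi(t)}=(2+\varepsilon)\sqrt{5}\,t\), in accordance with the lemma.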
**Corollary 5.7.4**.: _If \(f\preccurlyeq\phi\), \(f^{\prime\prime}\preccurlyeq\psi\), then \(f^{\prime}\preccurlyeq\sqrt{\phi\psi}\), with \(f^{\prime}\prec\sqrt{\phi\psi}\) if also \(f\prec\phi\) or \(f^{\prime\prime}\prec\psi\)._
In Corollary 5.7.4 we cannot drop the assumption that \(\phi\), \(\psi\) are increasing. (Take \(a>1\), \(f(t)=t\log t\) for \(t\geqslant a\), \(\phi=f\), \(\psi=f^{\prime\prime}\).) However, Mordell [140] also shows that if instead of assuming that \(\phi\), \(\psi\) are increasing, we assume that they are decreasing, then Lemma 5.7.3 holds in a stronger form: \(|f|\leqslant\phi\ \&\ |f^{\prime\prime}|\leqslant\psi\,\Rightarrow|f^{\prime}| \leqslant 2(\phi\psi)^{1/2}\). The next lemma (not used later) yields a variant of Corollary 5.7.4 where the germ of \(f\) lies in a complexified Hardy field; see also [88, SS7].
**Lemma 5.7.5**.: _Let \(H\) be a Hardy field, \(K=H[i]\), and \(g\in K^{\times}\) such that \(g\prec 1\) or \(g\succ 1\), \(g^{\dagger}\not\asymp x^{-1}\). Then \(g^{\prime}\preccurlyeq|gg^{\prime\prime}|^{1/2}\)._
Proof.: Arranging that \(H\) is real closed and \(x\in H\) and using \(|h|\asymp h\) for \(h\in K\) (see the remarks before Corollary 1.2.6), the lemma now follows from parts (1), (2), (4) of [7, Lemma 5.2] applied to the asymptotic couple of \(K\).
We now generalize Corollary 5.7.4:
**Lemma 5.7.6** (Hardy-Littlewood [88]).: _Suppose \(f\in\mathcal{C}_{a}^{n}[i]\), \(n\geqslant 1\), such that \(f\preccurlyeq\phi\), \(f^{(n)}\preccurlyeq\psi\). Then for \(j=0,\ldots,n\) we have \(f^{(j)}\preccurlyeq\phi^{1-j/n}\psi^{j/n}\). If additionally \(f\prec\phi\) or \(f^{(n)}\prec\psi\), then \(f^{(j)}\prec\phi^{1-j/n}\psi^{j/n}\) for \(j=1,\ldots,n-1\)._
Proof.: The case \(n=1\) is trivial, so let \(n\geqslant 2\). We may also assume \(f\not=0\), and by increasing \(a\) we arrange \(f(a)\not=0\). Let \(j\) range over \(\{0,\ldots,n\}\). Consider the continuous increasing functions
\[\chi_{j}\,:\ [a,+\infty)\to(0,+\infty),\qquad\chi_{j}(t)\ :=\ \max_{a\leqslant s \leqslant t}|f^{(j)}(s)|\,\big{/}\,\big{(}\phi(s)^{1-j/n}\psi(s)^{j/n}\big{)},\]
and set \(\chi:=\max\{\chi_{0},\ldots,\chi_{n}\}\). Then \(\chi(t)\geqslant\chi_{0}(t)>0\) for all \(t\geqslant a\). We have
\[|f^{(j)}|\big{/}\big{(}\phi^{1-j/n}\psi^{j/n}\big{)}\ \leqslant\ \chi_{j}\ \leqslant\ \chi,\]
therefore
\[f^{(j)}\ \preccurlyeq\phi^{1-j/n}\psi^{j/n}\chi.\]
By induction on \(j=0,\ldots,n-2\) we now show
\[f^{(j)}\ \preccurlyeq\phi^{1-j/n}\psi^{j/n}\chi^{1-1/2^{j}}. \tag{5.7.2}\]
The case \(j=0\) follows from \(f\preccurlyeq\phi\). Suppose (5.7.2) holds for a certain \(j<n-2\). Then Corollary 5.7.4 with \(f^{(j)}\), \(\phi^{1-j/n}\psi^{j/n}\chi^{1-1/2^{j}}\), \(\phi^{1-(j+2)/n}\psi^{(j+2)/n}\chi\) in the role of \(f\), \(\phi\), \(\psi\), respectively, yields:
\[f^{(j+1)} \preccurlyeq\big{(}\phi^{1-j/n}\psi^{j/n}\chi^{1-1/2^{j}}\cdot \phi^{1-(j+2)/n}\psi^{(j+2)/n}\chi\big{)}^{1/2}\] \[= \phi^{1-(j+1)/n}\psi^{(j+1)/n}\chi^{1-1/2^{j+1}}.\]
This proves (5.7.2). We claim \(\chi\preccurlyeq 1\). Suppose otherwise; so \(\chi(t)\to+\infty\) as \(t\to+\infty\), since \(\chi\) is increasing, hence \(f^{(n)}\preccurlyeq\psi\preccurlyeq\psi\chi^{1-1/2^{n}}\preccurlyeq\psi\chi\). Corollary 5.7.4 with \(f^{(n-2)}\), \(\phi^{2/n}\psi^{1-2/n}\chi^{1-1/2^{n-2}}\), \(\psi\chi\) in the role of \(f\), \(\phi\), \(\psi\), respectively, yields
\[f^{(n-1)}\ \preccurlyeq\big{(}\phi^{2/n}\psi^{1-2/n}\chi^{1-1/2^{n-2}}\cdot \psi\chi\big{)}^{1/2}\ =\ \phi^{1/n}\psi^{1-1/n}\chi^{1-1/2^{n-1}}.\]
So (5.7.2) then also holds for \(j=n-1\), and it clearly holds for \(j=n\). But then \(\chi_{j}\preccurlyeq\chi^{1-1/2^{n}}\) for all \(j\) and so \(\chi\preccurlyeq\chi^{1-1/2^{n}}\), contradicting \(\chi\succ 1\).
Now suppose \(f\prec\phi\). By induction on \(j=0,\ldots,n-1\) we show \(f^{(j)}\prec\phi^{1-j/n}\psi^{j/n}\). The case \(j=0\) holds by assumption; suppose it holds for a certain \(j\leqslant n-2\). Then \(f^{(j+2)}\preccurlyeq\phi^{1-(j+2)/n}\psi^{(j+2)/n}\), so Corollary 5.7.4 with
\[f^{(j)},\quad\phi^{1-j/n}\psi^{j/n},\quad\phi^{1-(j+2)/n}\psi^{(j+2)/n}\]
in the role of \(f\), \(\phi\), \(\psi\), respectively, yields
\[f^{(j+1)}\prec\big{(}\phi^{1-j/n}\psi^{j/n}\cdot\phi^{1-(j+2)/n}\psi^{(j+2)/n} \big{)}^{1/2}=\phi^{1-(j+1)/n}\psi^{(j+1)/n}.\]
If \(f^{(n)}\prec\psi\), then likewise \(f^{(n-j)}\prec\phi^{j/n}\psi^{1-j/n}\) for \(j=0,\ldots,n-1\).
**Corollary 5.7.7**.: _Suppose \(f\in\mathcal{C}_{a}^{n}[\text{i}]\) and \(f\preccurlyeq\phi\), \(f^{(n)}\preccurlyeq\phi\). Then \(f^{\prime},\ldots,f^{(n-1)}\preccurlyeq\phi\), and if in addition \(f\prec\phi\) or \(f^{(n)}\prec\phi\), then \(f^{\prime},\ldots,f^{(n-1)}\prec\phi\)._
**The theorem of Esclangon-Landau.** In this subsection \(n\geqslant r\geqslant 1\) and \(P\) is as at the beginning of this section, and \(y\in\mathcal{C}^{r}[\text{i}]\) satisfies (5.7.1). Also, \(\eta\in\mathcal{C}\) is eventually increasing and positive, so \(\eta\succcurlyeq 1\). The next theorem covers the case \(\mathfrak{m}\asymp 1\) of Proposition 5.7.1:
**Theorem 5.7.8** (Landau [121]).: _Suppose \(y\preccurlyeq 1\) and \(P_{\boldsymbol{i}}\preccurlyeq\eta\) for all \(\boldsymbol{i}\). Then \(y^{(j)}\preccurlyeq\eta^{j}\) for \(j=0,\ldots,r\). Moreover, if \(y\prec 1\), then \(y^{(j)}\prec\eta^{j}\) for \(j=0,\ldots,r-1\), and if in addition \(P(0)\prec\eta\), then also \(y^{(r)}\prec\eta^{r}\)._
Proof.: Take \(a\in\mathbb{R}\) such that \(\eta\) is represented by an increasing continuous function \(\eta\colon[a,+\infty)\to(0,+\infty)\), and \(y\) by a function \(y\in\mathcal{C}_{a}^{r}[\text{i}]\). Then
\[t\mapsto\psi(t)\ :=\ \max\bigg{(}1,\max_{a\leqslant s\leqslant t}|y^{(r)}(s)| \bigg{)}\ :\ [a,+\infty)\to[1,+\infty)\]
is continuous and increasing with \(|y^{(r)}|\leqslant\psi\). By Lemma 5.7.6 we have \(y^{(j)}\preccurlyeq\psi^{j/r}\) for \(j=0,\ldots,r-1\) and thus \(P_{\boldsymbol{i}}y^{\boldsymbol{i}}\preccurlyeq\eta\psi^{\|\boldsymbol{i}\|/r}\preccurlyeq\eta\psi^{1-1/r}\) if \(\|\boldsymbol{i}\|<r\). So \(y^{(r)}=P\big{(}y,\ldots,y^{(r-1)}\big{)}\preccurlyeq\eta\psi^{1-1/r}\). Take \(C\in\mathbb{R}^{>}\) such that
\[|y^{(r)}(t)|\ \leqslant\ C\eta(t)\psi(t)^{1-1/r}\qquad\text{for all $t\geqslant a$.}\]
Increasing \(C\) we arrange \(C\eta(a)\psi(a)^{1-1/r}\geqslant 1\). As \(\eta\psi^{1-1/r}\) is increasing,
\[\psi(t)\ \leqslant\ \max\bigg{(}1,\max_{a\leqslant s\leqslant t}C\eta(s)\psi(s )^{1-1/r}\bigg{)}\ \leqslant\ C\eta(t)\psi(t)^{1-1/r}\qquad\text{for $t\geqslant a$.}\]
Hence \(|y^{(r)}(t)|\leqslant\psi(t)\leqslant C^{r}\eta^{r}(t)\) for \(t\geqslant a\), so \(y^{(r)}\preccurlyeq\eta^{r}\). By Lemma 5.7.6 again this yields \(y^{(j)}\preccurlyeq\eta^{j}\) for \(j=0,\ldots,r\). Assume now that \(y\prec 1\). Then by that same lemma, \(y^{(j)}\prec\eta^{j}\) for \(j<r\). We have \(\eta\succcurlyeq 1\), so if \(0<\|\boldsymbol{i}\|<r\), then \(y^{\boldsymbol{i}}\prec\eta^{\|\boldsymbol{i}\|}\preccurlyeq\eta^{r-1}\). Hence if additionally \(P(0)\prec\eta\), then \(y^{(r)}=P\big{(}y,\ldots,y^{(r-1)}\big{)}\prec\eta^{r}\).
**Corollary 5.7.9** (Esclangon [67]).: _Suppose \(f_{1},\ldots,f_{r},g\in\mathcal{C}[\text{i}]\) and \(y\in\mathcal{C}^{r}[\text{i}]\) satisfy_
\[y^{(r)}+f_{1}y^{(r-1)}+\cdots+f_{r}y\ =\ g,\qquad f_{1},\ldots,f_{r},g,y\ \preccurlyeq\ 1.\]
_Then \(y^{\prime},\ldots,y^{(r)}\preccurlyeq 1\). If in addition \(y\prec 1\) and \(g\prec 1\), then \(y^{\prime},\ldots,y^{(r)}\prec 1\)._
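_Remark_.: The assumption \(f_{1},\ldots,f_{r}\preccurlyeq 1\) in Corollary 5.7.9 cannot be omitted: the bounded function \(y(t)=\sin(\mathrm{e}^{t})\) satisfies

\[y^{\prime\prime}-y^{\prime}+\mathrm{e}^{2t}y\ =\ 0,\]

so here \(r=2\), \(f_{1}=-1\preccurlyeq 1\), \(g=0\), and \(y\preccurlyeq 1\), but \(y^{\prime}(t)=\mathrm{e}^{t}\cos(\mathrm{e}^{t})\not\preccurlyeq 1\); the coefficient \(f_{2}=\mathrm{e}^{2t}\) is not \(\preccurlyeq 1\).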
Below \(H\) is a Hardy field and \(\mathfrak{m}\in H\), \(0<\mathfrak{m}\prec 1\). Recall also that \(n\geqslant r\geqslant 1\).
**Lemma 5.7.10**.: _Let \(z\in\mathcal{C}^{r}[\text{i}]\). If \(z^{(j)}\preccurlyeq\eta^{j}\) for \(j=0,\ldots,r\), then \((z\mathfrak{m}^{n})^{(j)}\preccurlyeq\eta^{j}\mathfrak{m}^{n-j}\) for \(j=0,\ldots,r\), and likewise with \(\prec\) instead of \(\preccurlyeq\)._
Proof.: Corollary 1.1.15 yields \((\mathfrak{m}^{n})^{(m)}\preccurlyeq\mathfrak{m}^{n-m}\) for \(m\leqslant n\). Thus if \(z^{(j)}\preccurlyeq\eta^{j}\) for \(j=0,\ldots,r\), then
\[z^{(k)}(\mathfrak{m}^{n})^{(j-k)}\preccurlyeq\eta^{k}\mathfrak{m}^{n-(j-k)} \preccurlyeq\eta^{j}\mathfrak{m}^{n-j}\qquad(0\leqslant k\leqslant j \leqslant r),\]
so \((z\mathfrak{m}^{n})^{(j)}\preccurlyeq\eta^{j}\mathfrak{m}^{n-j}\) for \(j=0,\ldots,r\), by the Product Rule. The argument with \(\prec\) instead of \(\preccurlyeq\) is similar.
We now return to the assumptions on \(P,y\) in the beginning of this section, so \(y\in\mathcal{C}^{r}[\mathrm{i}]\) satisfies (5.7.1). Suppose also that \(P_{\boldsymbol{i}}\preccurlyeq\eta\) for all \(\boldsymbol{i}\), \(P(0)\preccurlyeq\eta\,\mathfrak{m}^{n}\), and \(y\preccurlyeq\mathfrak{m}^{n}\). Let \(\varepsilon\in\mathbb{R}^{>}\) and set for \(i=0,\ldots,r\),
\[Y_{i}\ :=\ \sum_{j=0}^{i}\binom{i}{j}Y^{(i-j)}(\mathfrak{m}^{n})^{(j)}\in H \big{[}Y,Y^{\prime},\ldots,Y^{(i)}\big{]}\ \subseteq\ \mathcal{C}[\mathrm{i}]\big{[}Y,Y^{\prime},\ldots,Y^{(r)}\big{]}.\]
Then for \(z:=y\,\mathfrak{m}^{-n}\preccurlyeq 1\) in \(\mathcal{C}^{r}[\mathrm{i}]\) the product rule gives
\[Y_{i}(z,z^{\prime},\ldots,z^{(i)})\ =\ (z\,\mathfrak{m}^{n})^{(i)}\ =\ y^{(i)} \qquad(i=0,\ldots,r),\]
so with
\[Q\ :=\ Y^{(r)}-\mathfrak{m}^{-n}\big{(}Y_{r}-P(Y_{0},\ldots,Y_{r-1})\big{)} \in\mathcal{C}[\mathrm{i}]\big{[}Y,Y^{\prime},\ldots,Y^{(r-1)}\big{]}\]
we have by substitution of \(z,\ldots,z^{(r)}\) for \(Y,Y^{\prime},\ldots,Y^{(r)}\),
\[z^{(r)} =\ Q\big{(}z,z^{\prime},\ldots,z^{(r-1)}\big{)}+\mathfrak{m}^{-n }\big{(}y^{(r)}-P(y,y^{\prime},\ldots,y^{(r-1)})\big{)}\] \[=\ Q\big{(}z,z^{\prime},\ldots,z^{(r-1)}\big{)}.\]
For \(Y_{0},\ldots,Y_{r}\in H\{Y\}\) we have \((Y^{\boldsymbol{i}})_{\times\mathfrak{m}^{n}}=Y_{0}^{i_{0}}\cdots Y_{r}^{i_{r}}\) for \(\boldsymbol{i}=(i_{0},\ldots,i_{r})\in\mathbb{N}^{1+r}\). Now \(\mathfrak{m}^{-\varepsilon}\in\mathrm{Li}\big{(}H(\mathbb{R})\big{)}\). We equip \(\mathrm{Li}\big{(}H(\mathbb{R})\big{)}\{Y\}\) with the gaussian extension of the valuation of \(\mathrm{Li}\big{(}H(\mathbb{R})\big{)}\). Then by [ADH, 6.1.4],
\[\mathfrak{m}^{-n}(Y^{\boldsymbol{i}})_{\times\mathfrak{m}^{n}}\ \preccurlyeq\mathfrak{m}^{-\varepsilon}\qquad\text{for $ \boldsymbol{i}\in\mathbb{N}^{1+r}\setminus\{0\}$}.\]
Let \(\boldsymbol{i}\) range over \(\mathbb{N}^{r}\) and take \(Q_{\boldsymbol{i}}\in\mathcal{C}[\mathrm{i}]\) for \(\|\boldsymbol{i}\|<r\) such that
\[Q\ =\ \sum_{\|\boldsymbol{i}\|<r}Q_{\boldsymbol{i}}Y^{\boldsymbol{i}}, \qquad(Q_{\boldsymbol{i}}\neq 0\text{ for only finitely many $\boldsymbol{i}$}).\]
Together with \(P_{\boldsymbol{i}}\preccurlyeq\eta\) for all \(\boldsymbol{i}\) and \(P(0)\preccurlyeq\eta\,\mathfrak{m}^{n}\), the remarks above yield \(Q_{\boldsymbol{i}}\preccurlyeq\eta\,\mathfrak{m}^{-\varepsilon}\) for all \(\boldsymbol{i}\). By Theorem 5.7.8 applied to \(P\), \(y\), \(\eta\) replaced by \(Q\), \(z\), \(\eta\,\mathfrak{m}^{-\varepsilon}\), respectively, we now obtain \(z^{(j)}\preccurlyeq(\eta\,\mathfrak{m}^{-\varepsilon})^{j}\) (\(j=0,\ldots,r\)), with \(\prec\) in place of \(\preccurlyeq\) if \(y\prec\mathfrak{m}^{n}\) and \(P(0)\prec\eta\,\mathfrak{m}^{n}\). Using Lemma 5.7.10 with \(\eta\,\mathfrak{m}^{-\varepsilon}\) and \(\mathrm{Li}\big{(}H(\mathbb{R})\big{)}\) in place of \(\eta\) and \(H\) finishes the proof of Proposition 5.7.1.
### 5.8. Almost Periodic Functions
For later use we now discuss trigonometric polynomials, almost periodic functions, and their mean values; see [27, 53] for this material in the case \(n=1\). _In this section we assume \(n\geqslant 1\), and for vectors \(r=(r_{1},\ldots,r_{n})\) and \(s=(s_{1},\ldots,s_{n})\) in \(\mathbb{R}^{n}\) we let \(r\cdot s:=r_{1}s_{1}+\cdots+r_{n}s_{n}\in\mathbb{R}\) be the usual dot product of \(r\) and \(s\). We also set \(rs:=(r_{1}s_{1},\ldots,r_{n}s_{n})\in\mathbb{R}^{n}\), not to be confused with \(r\cdot s\in\mathbb{R}\). Moreover, we let \(v,w\colon\mathbb{R}^{n}\to\mathbb{C}\) be complex-valued functions on \(\mathbb{R}^{n}\), and let \(s\) range over \(\mathbb{R}^{n}\), and \(T\) over \(\mathbb{R}^{>}\); integrals are with respect to the usual Lebesgue measure of \(\mathbb{R}^{n}\)._ Set
\[\|w\|\ :=\ \sup_{s}|w(s)|\ \in\ [0,+\infty].\]
We shall also have occasion to consider various functions \(\mathbb{R}^{n}\to\mathbb{C}\) obtained from \(w\): \(\overline{w}\), \(|w|\), as well as \(w_{+r}\) and \(w_{\times r}\) (for \(r\in\mathbb{R}^{n}\)), defined by
\[\overline{w}(s)\ :=\ \overline{w(s)},\quad|w|(s)\ :=\ |w(s)|,\quad w_{+r}(s)\ :=\ w(r+s),\quad w_{ \times r}(s)\ :=\ w(rs).\]
We say that \(w\) is \(1\)**-periodic** if \(w_{+k}=w\) for all \(k\in\mathbb{Z}^{n}\).
**Trigonometric polynomials.** Let \(\alpha\) range over \(\mathbb{R}^{n}\). Call \(w\) a **trigonometric polynomial** if there are \(w_{\alpha}\in\mathbb{C}\), with \(w_{\alpha}=0\) for all but finitely many \(\alpha\), such that for all \(s\),
\[w(s)\ =\ \sum_{\alpha}w_{\alpha}\,\mathrm{e}^{(\alpha\cdot s)i}\,. \tag{5.8.1}\]
The trigonometric polynomials form a subalgebra of the \(\mathbb{C}\)-algebra of uniformly continuous bounded functions \(\mathbb{R}^{n}\to\mathbb{C}\). Let \(w\) be a trigonometric polynomial. Then \(\overline{w}\) is a trigonometric polynomial, and for \(r\in\mathbb{R}^{n}\), so are the functions \(w_{+r}\) and \(w_{\times r}\). Note that \(w\) extends to a complex-analytic function \(\mathbb{C}^{n}\to\mathbb{C}\), that \(\operatorname{Re}w\) and \(\operatorname{Im}w\) are real analytic, and that \(\partial w/\partial x_{j}:=(\partial\operatorname{Re}w/\partial x_{j})+( \partial\operatorname{Im}w/\partial x_{j})i\) for \(j=1,\dots,n\) is also a trigonometric polynomial. The functions \(s\mapsto\sin(\alpha\cdot s)\) and \(s\mapsto\cos(\alpha\cdot s)\) on \(\mathbb{R}^{n}\) are real valued trigonometric polynomials. By Corollary 5.8.18 below the coefficients \(w_{\alpha}\) in (5.8.1) are uniquely determined by \(w\).
If \(w(s)=\mathrm{e}^{(\alpha\cdot s)i}\) for all \(s\), then \(w_{+r}=w\) for all \(r\in\mathbb{R}^{n}\) with \(\alpha\cdot r\in 2\pi\mathbb{Z}\). So if \(w\) is a trigonometric polynomial as in (5.8.1) with \(w_{\alpha}=0\) for all \(\alpha\notin 2\pi\mathbb{Z}^{n}\), then \(w\) is \(1\)-periodic. Next we state a well-known consequence of the Stone-Weierstrass Theorem; see [57, (7.4.2)] for the case \(n=1\).
**Proposition 5.8.1**.: _If \(v\) is continuous and \(1\)-periodic, then for every \(\varepsilon\) in \(\mathbb{R}^{>}\) there is a \(1\)-periodic trigonometric polynomial \(w\) with \(\|v-w\|<\varepsilon\)._
**Almost periodic functions.** We call \(w\) **almost periodic** (in the sense of Bohr) if for every \(\varepsilon\) in \(\mathbb{R}^{>}\) there is a trigonometric polynomial \(v\) such that \(\|v-w\|\leqslant\varepsilon\). If \(w\) is almost periodic, then \(w\) is uniformly continuous and bounded (as the uniform limit of a sequence of uniformly continuous bounded functions \(\mathbb{R}^{n}\to\mathbb{C}\)). If \(w\) is almost periodic, then so are \(\overline{w}\), and \(w_{+r}\), \(w_{\times r}\) for \(r\in\mathbb{R}^{n}\).
Note that the \(\mathbb{C}\)-algebra of uniformly continuous bounded functions \(\mathbb{R}^{n}\to\mathbb{C}\) is a Banach algebra with respect to \(\|\cdot\|\): it is complete with respect to this norm. The closure of its subalgebra of trigonometric polynomials with respect to this norm is \(\{w:w\text{ is almost periodic}\}\), which is therefore a Banach subalgebra. In particular, if \(v,w\colon\mathbb{R}^{n}\to\mathbb{C}\) are almost periodic, so are \(v+w\) and \(vw\). Moreover:
**Corollary 5.8.2**.: _Let \(v_{1},\dots,v_{m}\colon\mathbb{R}^{n}\to\mathbb{C}\) be almost periodic, let \(X\subseteq\mathbb{C}^{m}\) be closed, and suppose \(F\colon X\to\mathbb{C}\) is continuous with \(\big{(}v_{1}(s),\dots,v_{m}(s)\big{)}\in X\) for all \(s\). Then the function \(F(v_{1},\dots,v_{m})\colon\mathbb{R}^{n}\to\mathbb{C}\) is almost periodic._
Proof.: Since \(v_{1},\dots,v_{m}\) are bounded we can arrange that \(X\) is compact. Let \(\varepsilon\in\mathbb{R}^{>}\). Then Weierstrass Approximation [57, (7.4.1)] gives a polynomial
\[P(x_{1},y_{1},\dots,x_{m},y_{m})\ \in\ \mathbb{C}[x_{1},y_{1},\dots,x_{m},y_{m}]\]
such that \(|F(z_{1},\dots,z_{m})-P(z_{1},\overline{z}_{1},\dots,z_{m},\overline{z}_{m})|\leqslant\varepsilon\) for all \((z_{1},\dots,z_{m})\in X\). Hence \(\|F(v_{1},\dots,v_{m})-P(v_{1},\overline{v}_{1},\dots,v_{m},\overline{v}_{m})\|\leqslant\varepsilon\). It remains to note that the function \(P(v_{1},\overline{v}_{1},\dots,v_{m},\overline{v}_{m})\) is almost periodic.
Call \(w\)**normal** if \(w\) is bounded and for every sequence \((r_{m})\) in \(\mathbb{R}^{n}\) the sequence \((w_{+r_{m}})\) of functions \(\mathbb{R}^{n}\to\mathbb{C}\) has a uniformly converging subsequence. One verifies easily that if \(v\), \(w\) are normal, then so are the functions \(v+w\) and \(cv\) (\(c\in\mathbb{C}\)); hence by the next lemma, each trigonometric polynomial is normal:
**Lemma 5.8.3**.: _Suppose \(w(s)=\mathrm{e}^{(\alpha\cdot s)i}\) for all \(s\). Then \(w\) is normal._
Proof.: Let \((r_{m})\) be a sequence in \(\mathbb{R}^{n}\). Passing to a subsequence of \((r_{m})\) we arrange that the sequence \(\big{(}w(r_{m})\big{)}\) of complex numbers of modulus \(1\) converges. Now use that for all \(l\), \(m\) and all \(s\) we have \(|w_{+r_{l}}(s)-w_{+r_{m}}(s)|=|w(r_{l})-w(r_{m})|\), and thus \(\|w_{+r_{l}}-w_{+r_{m}}\|\leqslant|w(r_{l})-w(r_{m})|\).
**Lemma 5.8.4**.: _Let \((w_{m})\) be a sequence of normal functions with \(\|w_{m}-w\|\to 0\) as \(m\to\infty\). Then \(w\) is normal._
Proof.: Let \((r_{k})_{k\in\mathbb{N}}\) be a sequence in \(\mathbb{R}^{n}\). Using normality of the \(w_{m}\) we obtain inductively subsequences \((r_{k0}),(r_{k1}),\dots\) of \((r_{k})\) such that for all \(m\), \(\big{(}(w_{m})_{+r_{km}}\big{)}\) converges uniformly and \((r_{k,m+1})\) is a subsequence of \((r_{km})\). Then for every \(m\), \((r_{m+l,m+l})_{l\geqslant 0}\) is a subsequence of \((r_{km})\); so \(\big{(}(w_{m})_{+r_{kk}}\big{)}\) converges uniformly. Now let \(\varepsilon\in\mathbb{R}^{>}\) be given. Take \(m\) so that \(\|w_{m}-w\|\leqslant\varepsilon\), and then take \(k_{0}\) so that \(\|(w_{m})_{+r_{kk}}-(w_{m})_{+r_{ll}}\|\leqslant\varepsilon\) for all \(k,l\geqslant k_{0}\). For such \(k\), \(l\) we have
\[\|w_{+r_{kk}}-w_{+r_{ll}}\|\leqslant\] \[\|w_{+r_{kk}}-(w_{m})_{+r_{kk}}\|+\|(w_{m})_{+r_{kk}}-(w_{m})_{+r _{ll}}\|+\|(w_{m})_{+r_{ll}}-w_{+r_{ll}}\|\leqslant 3\varepsilon.\]
Thus \((w_{+r_{kk}})\) converges uniformly.
**Corollary 5.8.5** (Bochner).: _Every almost periodic function \(\mathbb{R}^{n}\to\mathbb{C}\) is normal._
For \(\varepsilon\in\mathbb{R}^{>}\), we say that \(r\in\mathbb{R}^{n}\) is an \(\varepsilon\)**-translation vector** for \(w\) if \(\|w_{+r}-w\|<\varepsilon\). We define an \(n\)**-cube** of side length \(\ell\in\mathbb{R}^{>}\) to be a subset of \(\mathbb{R}^{n}\) of the form \(I=I_{1}\times\dots\times I_{n}\) where \(I_{1},\dots,I_{n}\) are open intervals of length \(\ell\).
**Proposition 5.8.6**.: _If \(w\) is normal, then for all \(\varepsilon\in\mathbb{R}^{>}\) there is an \(\ell=\ell(w,\varepsilon)\in\mathbb{R}^{>}\) such that every \(n\)-cube of side length \(\ell\) contains an \(\varepsilon\)-translation vector for \(w\)._
Proof.: We assume that \(w\) is bounded and show the contrapositive. Let \(\varepsilon\in\mathbb{R}^{>}\) be such that there are \(n\)-cubes of arbitrarily large side length that contain no \(\varepsilon\)-translation vector for \(w\); to conclude that \(w\) is not normal it suffices to have a sequence \((r_{i})_{i\in\mathbb{N}}\) in \(\mathbb{R}^{n}\) such that \(r_{j}-r_{i}\) is not an \(\varepsilon\)-translation vector for \(w\), for all \(i<j\), since then \(\|w_{+r_{j}}-w_{+r_{i}}\|=\|w_{+(r_{j}-r_{i})}-w\|\geqslant\varepsilon\) for all \(i<j\). Now suppose \(r_{0},\dots,r_{m}\in\mathbb{R}^{n}\) are such that \(r_{j}-r_{i}\) is not an \(\varepsilon\)-translation vector for \(w\), for all \(i<j\leqslant m\). Then for \(k=1,\dots,n\) we take intervals \(I_{k}=(a_{k},b_{k})\) (\(a_{k}<b_{k}\) in \(\mathbb{R}\)) of equal length \(b_{k}-a_{k}>2\max\big{\{}|r_{0}|_{\infty},\dots,|r_{m}|_{\infty}\big{\}}\) such that \(I:=I_{1}\times\dots\times I_{n}\) does not contain an \(\varepsilon\)-translation vector for \(w\). Set \(r_{m+1}:=\frac{1}{2}(a_{1}+b_{1},\dots,a_{n}+b_{n})\); then for \(i\leqslant m\) we have \(r_{m+1}-r_{i}\in I\), hence \(r_{m+1}-r_{i}\) is not an \(\varepsilon\)-translation vector for \(w\).
By Corollary 5.8.5, Proposition 5.8.6 applies to almost periodic \(w\). Bohr [27] showed conversely that if \(w\) is continuous and satisfies the conclusion of Proposition 5.8.6, then \(w\) is almost periodic, but we do not use this elegant characterization of almost periodicity below. We now improve Proposition 5.8.6 for almost periodic \(w\). _In the rest of this subsection we assume that \(w\) is almost periodic._
**Lemma 5.8.7**.: _Let \(\varepsilon\in\mathbb{R}^{>}\); then there are \(\delta,\ell\in\mathbb{R}^{>}\) such that every \(n\)-cube of side length \(\ell\) contains an \(n\)-cube of side length \(\delta\) consisting entirely of \(\varepsilon\)-translation vectors for \(w\)._
Proof.: Uniform continuity of \(w\) yields \(\delta_{1}\in\mathbb{R}^{>}\) such that all \(d\in\mathbb{R}^{n}\) with \(|d|_{\infty}<\delta_{1}\) are \((\varepsilon/3)\)-translation vectors for \(w\). Take \(\ell_{1}:=\ell(w,\varepsilon/3)\) as in Proposition 5.8.6, and set \(\delta:=2\delta_{1}\), \(\ell:=\ell_{1}+\delta\). Let \(J=a+(0,\ell)^{n}\) be a cube of side length \(\ell\)
where \(a\in\mathbb{R}^{n}\). Take an \((\varepsilon/3)\)-translation vector \(r\in a+(\delta_{1},\ell_{1}+\delta_{1})^{n}\) for \(w\). The cube \(I:=r+(-\delta_{1},\delta_{1})^{n}\) of side length \(\delta\) is entirely contained in \(J\). Let \(p\in I\). Then for \(d:=p-r\) we have \(|d|_{\infty}<\delta_{1}\), so for all \(s\),
\[|w(s+p)-w(s)|\ \leqslant\ |w(s+d+r)-w(s+d)|+|w(s+d)-w(s)|\ <\ \frac{\varepsilon} {3}+\frac{\varepsilon}{3}\ <\ \varepsilon,\]
hence \(p\) is an \(\varepsilon\)-translation vector for \(w\).
**Corollary 5.8.8**.: _Suppose \(w(\mathbb{R}^{n})\subseteq\mathbb{R}\), \(s_{0}\in\mathbb{R}^{n}\), and \(w(s_{0})>0\). Then there are \(\delta_{1},\ell_{1}\in\mathbb{R}^{>}\) such that every \(n\)-cube of side length \(\ell_{1}\) contains an \(n\)-cube \(I\) of side length \(\delta_{1}\) with \(w(s)\geqslant w(s_{0})/3\) for all \(s\in I\)._
Proof.: Let \(\delta\), \(\ell\) be as in Lemma 5.8.7 for \(\varepsilon:=w(s_{0})/3\). By decreasing \(\delta\) we obtain from the uniform continuity of \(w\) that all \(d\in\mathbb{R}^{n}\) with \(|d|_{\infty}<\delta/2\) are \(\varepsilon\)-translation vectors for \(w\). Set \(\delta_{1}:=\delta\), \(\ell_{1}:=\ell+\delta/2\). Let \(J=a-(0,\ell_{1})^{n}\) with \(a\in\mathbb{R}^{n}\) be an \(n\)-cube of side length \(\ell_{1}\); we claim that \(J\) contains an \(n\)-cube \(I\) of side length \(\delta\) with \(w(s)\geqslant\varepsilon\) for all \(s\in I\). To prove this claim, consider the \(n\)-cube \(J_{0}:=(s_{0}-a)+(\delta/2,\ell+\delta/2)^{n}\) of side length \(\ell\). Our choice of \(\delta\), \(\ell\) gives an \(\varepsilon\)-translation vector \(r\in J_{0}\) for \(w\) such that \(r+(-\delta/2,\delta/2)^{n}\subseteq J_{0}\). Then
\[I\ :=\ (s_{0}-r)+(-\delta/2,\delta/2)^{n}\ \subseteq\ s_{0}-J_{0}\ =\ a-( \delta/2,\ell+\delta/2)^{n}\ \subseteq\ J,\]
and for every \(s\in I\), setting \(d=s-s_{0}+r\), we have \(|d|_{\infty}<\delta/2\), so
\[w(s)\ =\ w(s_{0})+\big{(}w(s_{0}+d)-w(s_{0})\big{)}-\big{(}w(s+r)-w(s)\big{)}\ \geqslant\ w(s_{0})-\varepsilon-\varepsilon\ =\ \varepsilon\]
as required.
**Lemma 5.8.9**.: _Suppose \(w(\mathbb{R}^{n})\subseteq\mathbb{R}\). Then with \(|s|:=|s|_{\infty}\),_
\[\liminf_{|s|\to\infty}w(s)\ =\ \inf_{s}w(s),\qquad\limsup_{|s|\to\infty}w(s)\ =\ \sup_{s}w(s).\]
Proof.: It suffices to prove the second equality: applying it to \(-w\) in place of \(w\) gives the first one. Set \(\sigma:=\sup_{s}w(s)\). Let \(\varepsilon\in\mathbb{R}^{>}\), and take \(s_{0}\in\mathbb{R}^{n}\) with \(w(s_{0})>\sigma-\varepsilon\). By Corollary 5.8.8 applied to \(s\mapsto v(s):=w(s)+\varepsilon-\sigma\) instead of \(w\) there are \(s\) with arbitrarily large \(|s|\) and \(v(s)\geqslant v(s_{0})/3>0\), hence \(w(s)>\sigma-\varepsilon\). Thus \(\limsup_{|s|\to\infty}w(s)\geqslant\sigma\); the reverse inequality holds trivially.
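Lemma 5.8.9 is easy to observe numerically. For \(n=1\) and \(w(s)=\cos s+\cos(\sqrt{2}\,s)\) we have \(\sup_{s}w(s)=2\), attained only at \(s=0\), and the lemma predicts that the value \(2\) is approached again along arbitrarily large \(|s|\). A small Python sketch (ours, purely illustrative):

```python
# Numerical illustration of Lemma 5.8.9 for w(s) = cos(s) + cos(sqrt(2) s):
# sup_s w(s) = 2 is attained only at s = 0, yet
# limsup_{|s| -> infinity} w(s) = 2.
import numpy as np

def w(s):
    return np.cos(s) + np.cos(np.sqrt(2.0) * s)

for k in range(2, 7):
    a = 10.0 ** k
    s = np.linspace(a, a + 1e4, 2_000_000)
    print(f"max of w on [1e{k}, 1e{k} + 1e4] = {np.max(w(s)):.6f}")
# The window maxima come arbitrarily close to 2, as the lemma predicts.
```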
**The mean value.** _In this subsection \(v\) and \(w\) are bounded and measurable._ If
\[\lim_{T\to\infty}\frac{1}{T^{n}}\int_{[0,T]^{n}}w(s)\,ds \tag{5.8.2}\]
exists (in \(\mathbb{C}\)), then we say that \(w\) has a mean value, and we call the quantity (5.8.2) the **mean value** of \(w\) and denote it by \(M(w)\). One verifies easily that if \(v\) and \(w\) have a mean value, then so do the functions \(v+w\), \(cw\) (\(c\in\mathbb{C}\)), and \(\overline{w}\), with
\[M(v+w)\ =\ M(v)+M(w),\quad M(cw)\ =\ cM(w),\quad\text{ and }\quad M( \overline{w})\ =\ \overline{M(w)}.\]
If \(w\) has a mean value, then \(|M(w)|\leqslant\|w\|\). If \(w\) and \(|w|\) have a mean value, then \(|M(w)|\leqslant M(|w|)\). If \(w\) has a mean value and \(w(\mathbb{R}^{n})\subseteq\mathbb{R}\), then \(M(w)\in\mathbb{R}\), with \(M(w)\geqslant 0\) if \(w\big{(}(\mathbb{R}^{\geq})^{n}\big{)}\subseteq\mathbb{R}^{\geq}\).
**Lemma 5.8.10**.: _Let \(d\in\mathbb{R}^{n}\). Then \(w\) has a mean value iff \(w_{+d}\) has a mean value, in which case \(M(w)=M(w_{+d})\)._
Proof.: It suffices to treat the case \(d=(d_{1},0,\ldots,0)\), \(d_{1}\in\mathbb{R}^{>}\). For \(T>d_{1}\) we have
\[\left|\int_{[0,T]^{n}}w_{+d}(s)\,ds-\int_{[0,T]^{n}}w(s)\,ds\right| \ =\\ \left|\int_{[T,d_{1}+T]\times[0,T]^{n-1}}w(s)\,ds-\int_{[0,d_{1}] \times[0,T]^{n-1}}w(s)\,ds\right|\ \leqslant\ 2d_{1}\|w\|T^{n-1},\]
and this yields the claim.
**Corollary 5.8.11**.: _Suppose \(w\) has a mean value, and let \(T_{0}\in\mathbb{R}^{>}\). If \(w(s)=0\) for all \(s\in(\mathbb{R}^{\geqslant})^{n}\) with \(|s|\geqslant T_{0}\), then \(M(w)=0\). If \(w(\mathbb{R}^{n})\subseteq\mathbb{R}\) and \(w(s)\geqslant 0\) for all \(s\in(\mathbb{R}^{\geqslant})^{n}\) with \(|s|\geqslant T_{0}\), then \(M(w)\geqslant 0\). (As before, \(|s|:=|s|_{\infty}\).)_
**Lemma 5.8.12**.: _Suppose \(w\) has a mean value and \(w(\mathbb{R}^{n})\subseteq\mathbb{R}\); then_
\[\inf_{s}w(s)\ \leqslant\ \liminf_{|s|\to\infty}w(s)\ \leqslant\ M(w)\ \leqslant\ \limsup_{|s|\to\infty}w(s)\ \leqslant\ \sup_{s}w(s).\]
Proof.: The first and last inequalities are clear. Towards a contradiction assume \(L:=\limsup_{|s|\to\infty}w(s)<M(w)\), and let \(\varepsilon=\frac{1}{2}\big{(}M(w)-L\big{)}\). Take \(T_{0}\in\mathbb{R}^{>}\) such that \(w(s)\leqslant M(w)-\varepsilon\) for all \(s\) with \(|s|\geqslant T_{0}\). The previous corollary applied to \(s\mapsto M(w)-\varepsilon-w(s)\) instead of \(w\) implies \(M(w)\leqslant M(w)-\varepsilon\), a contradiction. This shows the third inequality; the second inequality is proved in a similar way.
Note that if \(w\) has a mean value, then so does every \(v\) having the same restriction to \((\mathbb{R}^{\geqslant})^{n}\) as \(w\), with \(M(v)=M(w)\).
**Lemma 5.8.13**.: _Let \((v_{m})\) be a sequence of bounded measurable functions \(\mathbb{R}^{n}\to\mathbb{C}\) with a mean value, such that \(\lim_{m\to\infty}\lVert v_{m}-w\rVert=0\). Then \(w\) has a mean value, and \(\lim_{m\to\infty}M(v_{m})=M(w)\)._
Proof.: Let \(\varepsilon\in\mathbb{R}^{>}\) be given, and take \(m\) with \(\|v_{m}-w\|\leqslant\varepsilon\). Since \(v:=v_{m}\) has a mean value, we have \(T_{0}\in\mathbb{R}^{>}\) such that for all \(T_{1},T_{2}\geqslant T_{0}\),
\[\left|\frac{1}{T_{1}^{n}}\int_{[0,T_{1}]^{n}}v(s)\,ds-\frac{1}{T_{2}^{n}}\int_ {[0,T_{2}]^{n}}v(s)\,ds\right|\ \leqslant\ \varepsilon.\]
Then for such \(T_{1},T_{2}\) we have
\[\left|\frac{1}{T_{1}^{n}}\int_{[0,T_{1}]^{n}}w(s)\,ds-\frac{1}{T_ {2}^{n}}\int_{[0,T_{2}]^{n}}w(s)\,ds\right|\ \leqslant\ \frac{1}{T_{1}^{n}}\int_{[0,T_{1}]^{n}}\big{|}w(s)-v(s)\big{|}\,ds+\\ \left|\frac{1}{T_{1}^{n}}\int_{[0,T_{1}]^{n}}v(s)\,ds-\frac{1}{T_ {2}^{n}}\int_{[0,T_{2}]^{n}}v(s)\,ds\right|+\frac{1}{T_{2}^{n}}\int_{[0,T_{2}] ^{n}}\big{|}w(s)-v(s)\big{|}\,ds\]
where each term on the right of \(\leqslant\) is \(\leqslant\varepsilon\). Hence the limit (5.8.2) exists. To show \(\lim_{m\to\infty}M(v_{m})=M(w)\), use \(|M(v_{m})-M(w)|=|M(v_{m}-w)|\leqslant\lVert v_{m}-w\rVert\).
**The mean value of an almost periodic function.**_In this subsection \(v\) and \(w\) are almost periodic. As before, \(\alpha\) ranges over \(\mathbb{R}^{n}\)._
**Lemma 5.8.14**.: _Suppose \(w(s)=\mathrm{e}^{\mathrm{i}(\alpha\cdot s)}\) for all \(s\). Then \(w\) has a mean value, with \(M(w)=1\) if \(\alpha=0\) and \(M(w)=0\) otherwise._
Proof.: This is obvious for \(\alpha=0\). Assume \(\alpha\neq 0\). Then
\[\int_{[0,T]^{n}}\mathrm{e}^{\mathrm{i}(\alpha\cdot s)}\ ds\ =\ T^{\mid\{j:\alpha_{j}=0\} \mid}\cdot\prod_{j,\alpha_{j}\neq 0}\frac{\mathrm{e}^{\mathrm{i}\alpha_{j}T}-1}{ \mathrm{i}\alpha_{j}},\quad\text{ so}\]
\[\left|\frac{1}{T^{n}}\int_{[0,T]^{n}}\mathrm{e}^{\mathrm{i}(\alpha\cdot s)}\ ds\right|\ \ \leqslant\ \frac{1}{T^{\mid\{j:\ \alpha_{j}\neq 0\}\mid}}\cdot\prod_{j, \alpha_{j}\neq 0}\frac{2}{|\alpha_{j}|},\]
and thus \(\frac{1}{T^{n}}\int_{[0,T]^{n}}\mathrm{e}^{\mathrm{i}(\alpha\cdot s)}\ ds\to 0\) as \(T\to\infty\).
It follows that every trigonometric polynomial \(w\) has a mean value. Using also Lemma 5.8.13, we see that every almost periodic function \(\mathbb{R}^{n}\to\mathbb{C}\) has a mean value.
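The decay of these averages can be made quantitative for \(n=1\): by the computation in the proof of Lemma 5.8.14, \(\frac{1}{T}\int_{0}^{T}\mathrm{e}^{\mathrm{i}\alpha t}\,dt=(\mathrm{e}^{\mathrm{i}\alpha T}-1)/(\mathrm{i}\alpha T)\) for \(\alpha\neq 0\). A small Python sketch (ours, purely illustrative) evaluates this closed form:

```python
# Mean value of t -> e^{i alpha t} (Lemma 5.8.14 with n = 1): for
# alpha != 0 the averages (1/T) * integral_0^T e^{i alpha t} dt are
# bounded by 2/(|alpha| T) and hence tend to 0.
import numpy as np

alpha = np.sqrt(3.0)
for T in [1e1, 1e2, 1e3, 1e4]:
    avg = (np.exp(1j * alpha * T) - 1.0) / (1j * alpha * T)  # closed form
    print(f"T = {T:8.0f}   |average| = {abs(avg):.2e}")
# The moduli are O(1/T), so the mean value is 0.
```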
**Lemma 5.8.15**.: _Suppose \(u\colon\mathbb{R}^{n}\to\mathbb{C}\) is continuous and \(1\)-periodic. Then \(u\) is almost periodic with mean value \(M(u)=\int_{[0,1]^{n}}u(s)\,ds\)._
Proof.: By Proposition 5.8.1, \(u\) is almost periodic. Now use that for \(T\in\mathbb{N}^{\geqslant 1}\),
\[\int_{[0,T]^{n}}u(s)\,ds\ =\ T^{n}\int_{[0,1]^{n}}u(s)\,ds.\qed\]
**Lemma 5.8.16**.: _Let \(r\in(\mathbb{R}^{\times})^{n}\). Then the almost periodic function \(w_{\times r}\) has the same mean value as \(w\)._
Proof.: Choose a sequence \((w_{m})\) of trigonometric polynomials with \(\|w_{m}-w\|\to 0\) as \(m\to\infty\). Then \((w_{m})_{\times r}\) is a trigonometric polynomial and \(\|(w_{m})_{\times r}-w_{\times r}\|\to 0\) as \(m\to\infty\). Lemma 5.8.14 gives \(M\big{(}(w_{m})_{\times r}\big{)}=M(w_{m})\); now use Lemma 5.8.13.
**Proposition 5.8.17** (Bohr).: _Suppose \(w(\mathbb{R}^{n})\subseteq\mathbb{R}^{\geqslant}\). If \(M(w)=0\), then \(w=0\)._
Proof.: Suppose \(s_{0}\in\mathbb{R}^{n}\) and \(w(s_{0})>0\). We claim that then \(M(w)>0\). Take \(\delta_{1}\), \(\ell_{1}\) as in Corollary 5.8.8. Let \(k\) range over \(\mathbb{N}^{n}\). Then \(\int_{\ell_{1}k+[0,\ell_{1}]^{n}}w(s)\,ds\geqslant\delta_{1}^{n}w(s_{0})/3\) for all \(k\), and hence for \(m\geqslant 1\) and \(T=\ell_{1}m\):
\[\frac{1}{T^{n}}\int_{[0,T]^{n}}w(s)\,ds\ =\ \frac{1}{T^{n}}\sum_{|k|<m}\int_{\ell_{1 }k+[0,\ell_{1}]^{n}}w(s)\,ds\ \geqslant\ (\delta_{1}/\ell_{1})^{n}w(s_{0})/3.\]
Thus \(M(w)\ \geqslant\ (\delta_{1}/\ell_{1})^{n}w(s_{0})/3\ >\ 0\).
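Proposition 5.8.17 can also be probed numerically. The function \(w(s)=\max\big(0,\cos s+\cos(\sqrt{2}\,s)-\tfrac{3}{2}\big)\) (with \(n=1\)) is almost periodic by Corollary 5.8.2, and \(w\geqslant 0\), \(w(0)=\tfrac{1}{2}>0\), so \(M(w)>0\). A rough Python estimate (ours, purely illustrative):

```python
# Estimating M(w) for w(s) = max(0, cos(s) + cos(sqrt(2) s) - 3/2):
# w >= 0 and w != 0, so Proposition 5.8.17 forces M(w) > 0.
import numpy as np

def w(s):
    return np.maximum(0.0, np.cos(s) + np.cos(np.sqrt(2.0) * s) - 1.5)

for T in [1e3, 1e4, 1e5]:
    s = np.linspace(0.0, T, int(20 * T))   # Riemann-sum approximation
    print(f"T = {T:6.0f}   (1/T) * integral of w over [0, T] ~ {np.mean(w(s)):.5f}")
# The estimates settle at a strictly positive value (about 0.02).
```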
By Proposition 5.8.17, the map \((v,w)\mapsto\langle v,w\rangle:=M(v\overline{w})\) is a positive definite hermitian form on the \(\mathbb{C}\)-linear space of almost periodic functions \(\mathbb{R}^{n}\to\mathbb{C}\). Lemma 5.8.10 yields \(\langle v_{+d},w_{+d}\rangle=\langle v,w\rangle\) for \(d\in\mathbb{R}^{n}\). By Lemma 5.8.14, the family \(\big{(}s\mapsto\mathrm{e}^{(\alpha\cdot s)\mathrm{i}}\,\big{)}_{\alpha}\) of trigonometric polynomials is orthonormal with respect to \(\langle\,\ \rangle\). In particular, for a trigonometric polynomial \(w\) as in (5.8.1) we have \(w_{\alpha}=\langle w,\mathrm{e}^{(\alpha\cdot s)\mathrm{i}}\rangle\), and thus:
**Corollary 5.8.18**.: _If \(w=0\), then \(w_{\alpha}=0\) for all \(\alpha\)._
**Corollary 5.8.19**.: _Suppose \(w\) is a trigonometric polynomial as in (5.8.1). Then_
\[w\ \text{is}\ 1\text{-periodic}\ \Longleftrightarrow\ w_{\alpha}=0\ \text{for all}\ \alpha\notin 2\pi\mathbb{Z}^{n}.\]
Proof.: If \(w\) is \(1\)-periodic, then for \(k\in\mathbb{Z}^{n}\) we have
\[w_{\alpha}\ =\ \langle w,\mathrm{e}^{(\alpha\cdot s)\mathrm{i}}\rangle\ =\ \langle w_{+k},(\mathrm{e}^{(\alpha\cdot s)\mathrm{i}})_{+k}\rangle\ =\ \mathrm{e}^{-(\alpha\cdot k)\mathrm{i}}\langle w,\mathrm{e}^{(\alpha\cdot s) \mathrm{i}}\rangle\ =\ \mathrm{e}^{-(\alpha\cdot k)\mathrm{i}}\,w_{\alpha},\]
which for \(w_{\alpha}\neq 0\) gives \(\alpha\cdot k\in 2\pi\mathbb{Z}\) for all \(k\in\mathbb{Z}^{n}\), and thus \(\alpha\in 2\pi\mathbb{Z}^{n}\). This yields the forward implication, and the backward direction is obvious.
In the next corollary we equip \(\mathbb{R}^{n}\) with the lexicographic ordering.
**Corollary 5.8.20**.: _Suppose \(w\) is a trigonometric polynomial. Then \(w(\mathbb{R}^{n})\subseteq\mathbb{R}\) iff there are \(c\in\mathbb{R}\) and \(u_{\alpha},v_{\alpha}\in\mathbb{R}\) for \(\alpha>0\), with \(u_{\alpha}=v_{\alpha}=0\) for all but finitely many \(\alpha>0\), such that for all \(s\in\mathbb{R}^{n}\),_
\[w(s)\ =\ c+\sum_{\alpha>0}\big{(}u_{\alpha}\cos(\alpha\cdot s)+v_{\alpha}\sin( \alpha\cdot s)\big{)}. \tag{5.8.3}\]
_Moreover, in this case \(c\) and the coefficients \(u_{\alpha}\), \(v_{\alpha}\) are unique, and \(w\) is \(1\)-periodic iff \(u_{\alpha}=v_{\alpha}=0\) for all \(\alpha>0\) with \(\alpha\notin 2\pi\mathbb{Z}^{n}\)._
Proof.: Clearly if \(w\) has the stated form, then \(w(\mathbb{R}^{n})\subseteq\mathbb{R}\). For the converse, suppose \(w(\mathbb{R}^{n})\subseteq\mathbb{R}\), and \(w\) is given as in (5.8.1). Then for \(s\in\mathbb{R}^{n}\),
\[\sum_{\alpha}\overline{w_{\alpha}}\,\mathrm{e}^{-(\alpha\cdot s)i}\ =\ \overline{w}(s)\ =\ w(s)\ =\ \sum_{\alpha}w_{\alpha}\,\mathrm{e}^{(\alpha\cdot s)i},\]
and hence \(w_{0}\in\mathbb{R}\) and \(\overline{w_{\alpha}}=w_{-\alpha}\) for \(\alpha>0\), by Corollary 5.8.18, so
\[w(s)\ =\ w_{0}+\sum_{\alpha>0}\big{(}w_{\alpha}\,\mathrm{e}^{(\alpha\cdot s)i}+ \overline{w_{\alpha}}\,\mathrm{e}^{-(\alpha\cdot s)i}\,\big{)}\qquad(s\in \mathbb{R}^{n}).\]
Put \(c:=w_{0}\) and \(u_{\alpha}:=\mathrm{Re}(2w_{\alpha})\), \(v_{\alpha}:=-\mathrm{Im}(2w_{\alpha})\) for \(\alpha>0\). Then (5.8.3) holds for \(s\in\mathbb{R}^{n}\). The rest follows from Corollaries 5.8.18 and 5.8.19.
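The passage between (5.8.1) and (5.8.3) is easy to verify numerically. In the Python sketch below (ours, purely illustrative), random coefficients \(w_{\alpha}\) are drawn and the two expressions for \(w\) are compared on a grid:

```python
# Consistency check for the real form (5.8.3) of a trigonometric polynomial:
# with c = w_0, u_alpha = Re(2 w_alpha), v_alpha = -Im(2 w_alpha), the
# expressions (5.8.1) and (5.8.3) agree.
import numpy as np

rng = np.random.default_rng(0)
alphas = np.array([1.0, np.sqrt(2.0), np.pi])          # the alpha > 0 with w_alpha != 0
coeffs = rng.normal(size=3) + 1j * rng.normal(size=3)  # the coefficients w_alpha
c = 0.7                                                # w_0, real

s = np.linspace(-10.0, 10.0, 1001)
w_complex = c + sum(wa * np.exp(1j * a * s) + np.conj(wa) * np.exp(-1j * a * s)
                    for a, wa in zip(alphas, coeffs))
w_real = c + sum(2 * wa.real * np.cos(a * s) - 2 * wa.imag * np.sin(a * s)
                 for a, wa in zip(alphas, coeffs))
print(np.max(np.abs(w_complex - w_real)))   # ~ 1e-15: the two forms agree
```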
### 5.9. Uniform Distribution Modulo One
In this section we collect some basic facts about uniform distribution modulo \(1\) of functions as needed later. Our main references are [118, Chapter 1, §9] and [36].
**Natural density.** Below \(\mathbb{R}\) is given its usual Lebesgue measure, _measurable_ means _Lebesgue-measurable_, and \(\mu\) denotes the Lebesgue measure on \(\mathbb{R}\). By an "interval" we mean here a set \(I=[a,b)\) where \(a,b\in\mathbb{R}\), \(a<b\), so \(\mu(I)=b-a\). _In the rest of this subsection \(I\) is an interval and \(X\), \(Y\) are measurable subsets of \(\mathbb{R}\)._ We let
\[\rho(I,X)\ :=\ \frac{\mu(I\cap X)}{\mu(I)}\in[0,1]\]
be the **density of \(X\) in \(I\)**. So \(\rho(I,X)=0\) if \(I\cap X=\emptyset\) and \(\rho(I,X)=1\) if \(I\subseteq X\), and \(\rho(I+d,X+d)=\rho(I,X)\) for \(d\in\mathbb{R}\). Clearly \(\rho(I,X)\leqslant\rho(I,Y)\) if \(X\subseteq Y\), and if \((X_{n})\) is a family of pairwise disjoint measurable subsets of \(\mathbb{R}\) and \(X=\bigcup_{n}X_{n}\), then \(\rho(I,X)=\sum_{n}\rho(I,X_{n})\).
Let \(X\triangle Y:=(X\setminus Y)\cup(Y\setminus X)\) be the symmetric difference of \(X\), \(Y\). If \(\mu(X)<\infty\) and \(\mu(Y)<\infty\), then \(\mu(X)-\mu(Y)\leqslant\mu(X\setminus Y)\) and \(|\mu(X)-\mu(Y)|\leqslant\mu(X\triangle Y)\), so:
**Lemma 5.9.1**.: \(|\rho(I,X)-\rho(I,Y)|\leqslant\rho(I,X\triangle Y)\)_._
Moreover:
**Lemma 5.9.2**.: _Let \(d\in\mathbb{R}\); then \(\big{|}\rho(I,X)-\rho(I+d,X)\big{|}\leqslant|d|/\mu(I)\)._
Proof.: We need to show \(\big{|}\mu(I\cap X)-\mu\big{(}(I+d)\cap X\big{)}\big{|}\leqslant|d|\). Replacing \(I\) and \(d\) by \(I+d\) and \(-d\), if necessary, we arrange \(d\geqslant 0\). Then
\[-\mu(I)\ =\ -\mu(I+d)\ \leqslant\ \mu(I\cap X)-\mu\big{(}(I+d)\cap X\big{)} \leqslant\ \mu(I),\]
hence we are done if \(\mu(I)\leqslant d\). Suppose \(\mu(I)>d\) and let \(I=[a,b)\), \(a,b\in\mathbb{R}\); so \(\mu(I)=b-a\). Then \(I\setminus(I+d)=[a,a+d)\) and \((I+d)\setminus I=[b,b+d)\), hence
\[-d\ =\ -\mu\big{(}(I+d)\setminus I\big{)}\ \leqslant\ \mu(I\cap X)-\mu\big{(}(I+d) \cap X\big{)}\ \leqslant\ \mu\big{(}I\setminus(I+d)\big{)}\ =\ d\]
as required.
Let \(\rho\) range over \([0,1]\) and \(T\) over \(\mathbb{R}^{>}\). Lemma 5.9.2 gives:
**Corollary 5.9.3**.: _The following conditions on \(X\) are equivalent:_
1. \(\lim_{T\to\infty}\rho\big{(}[0,T),X\big{)}\ =\ \rho\)_;_
2. _for all_ \(a\in\mathbb{R}\)_,_ \(\lim_{T\to\infty}\rho\big{(}[a,a+T),X\big{)}\ =\ \rho\)_;_
3. _for some_ \(a\in\mathbb{R}\)_,_ \(\lim_{T\to\infty}\rho\big{(}[a,a+T),X\big{)}\ =\ \rho\)_._
We say that \(X\) has **natural density \(\rho\) at \(+\infty\)** (short: \(X\) has density \(\rho\)) if one of the equivalent conditions in the corollary above holds, and in this case we set \(\rho(X):=\rho\). If \(X\) has an upper bound in \(\mathbb{R}\), then \(\rho(X)=0\), whereas if \(X\) contains a half-line \(\mathbb{R}^{\geqslant a}\) (\(a\in\mathbb{R}\)), then \(\rho(X)=1\). By Lemma 5.9.1 we have:
**Corollary 5.9.4**.: _If \(X\) has density \(\rho\) and \(X\triangle Y\) has density \(0\), then \(Y\) has density \(\rho\)._
In particular, the density of \(X\) only depends on the germ of \(X\) at \(+\infty\), in the following sense: if \(X\cap\mathbb{R}^{>a}=Y\cap\mathbb{R}^{>a}\) for some \(a\in\mathbb{R}\), then \(X\) has density \(\rho\) iff \(Y\) has density \(\rho\). The collection of measurable subsets of \(\mathbb{R}\) that have a density is a boolean algebra of subsets of \(\mathbb{R}\), and \(X\mapsto\rho(X)\) is a finitely additive measure on this boolean algebra taking values in \([0,1]\). If \(X\) has a density and \(d\in\mathbb{R}\), then \(X+d\) has the same density.
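To illustrate these notions: the set \(X=\bigcup_{k\in\mathbb{N}}[k,k+\tfrac{1}{2})\) has density \(\tfrac{1}{2}\). The Python sketch below (ours, purely illustrative) evaluates \(\rho\big([0,T),X\big)\) in closed form for growing \(T\):

```python
# Natural density of X = union over k in N of [k, k + 1/2): the ratios
# rho([0, T), X) = mu([0, T) intersect X)/T tend to 1/2 as T -> infinity.
def measure_X_up_to(T):
    # Lebesgue measure of [0, T) intersected with X
    k, frac = divmod(T, 1.0)
    return k * 0.5 + min(frac, 0.5)

for T in [10.3, 100.7, 1000.2, 100000.9]:
    print(f"T = {T:9.1f}   rho([0, T), X) = {measure_X_up_to(T) / T:.6f}")
# The output approaches 0.500000, so X has density 1/2.
```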
**Uniform distribution mod 1.** Let \(f\colon\mathbb{R}^{\geqslant a}\to\mathbb{R}\) (\(a\in\mathbb{R}\)) be measurable. For \(t\in\mathbb{R}\) we let \(\{t\}\) be the fractional part of \(t\): the element of \([0,1)\) such that \(t\in\mathbb{Z}+\{t\}\). Let \(Y\subseteq[0,1)\) be measurable; then \(Y+\mathbb{Z}\) is measurable and hence so is
\[f^{-1}(Y+\mathbb{Z})\ =\ \big{\{}t\in\mathbb{R}^{\geqslant a}:\{f(t)\}\in Y \big{\}}.\]
For \(a\leqslant b<c\) we have
\[\mu\big{(}[b,c)\cap f^{-1}(Y+\mathbb{Z})\big{)}\ =\ \int_{b}^{c}\chi_{Y}\big{(} \{f(t)\}\big{)}\,dt.\]
Let \(\rho\in\mathbb{R}\); then \(f^{-1}(Y+\mathbb{Z})\) has density \(\rho\) iff for some \(b\geqslant a\) we have
\[\lim_{T\to\infty}\frac{1}{T}\int_{b}^{b+T}\chi_{Y}\big{(}\{f(t)\}\big{)}dt\ =\ \rho,\]
and in this case the displayed identity holds for all \(b\geqslant a\). Hence if \(f^{-1}(Y+\mathbb{Z})\) has density \(\rho\) and \(g\colon\mathbb{R}^{\geqslant b}\to\mathbb{R}\) (\(b\in\mathbb{R}\)) is measurable with the same germ at \(+\infty\) as \(f\), then \(g^{-1}(Y+\mathbb{Z})\) also has density \(\rho\).
**Definition 5.9.5**.: We say that \(f\) is **uniformly distributed** mod 1 (abbreviated: u.d. mod 1) if for every interval \(I\subseteq[0,1)\) the set \(f^{-1}(I+\mathbb{Z})\) has density \(\mu(I)\).
The function \(f:\mathbb{R}^{\geqslant}\to\mathbb{R}\) with \(f(t)=t\) for all \(t\geqslant 0\) has \(f^{-1}(I+\mathbb{Z})=I+\mathbb{N}\) for \(I\) as above, so \(f\) is u.d. mod 1. By the remarks above, if \(f\) is u.d. mod 1, then so is any measurable function \(\mathbb{R}^{\geqslant b}\to\mathbb{R}\) (\(b\in\mathbb{R}\)) with the same germ at \(+\infty\) as \(f\). If \(f\) is u.d. mod 1 and eventually increasing or eventually decreasing, then \(|f(t)|\to+\infty\) as \(t\to+\infty\). If \(f\) is u.d. mod 1, then so are the functions \(t\mapsto k\cdot f(t)\colon\mathbb{R}^{\geqslant a}\to\mathbb{R}\) for \(k\in\mathbb{Z}^{\neq}\), and \(t\mapsto f(d+t)\colon\mathbb{R}^{\geqslant(a-d)}\to\mathbb{R}\) with \(d\in\mathbb{R}\).
**The Weyl Criterion.** In this subsection we fix a measurable function \(f\colon\mathbb{R}^{\geqslant}\to\mathbb{R}\). For a bounded measurable function \(w\colon[0,1]\to\mathbb{R}\) we consider the relation
(W) \[\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}w\big{(}\{f(t)\}\big{)}dt\ =\ \int_{0}^{1}w(s)\,ds.\]
Then \(f\) is u.d. mod \(1\) iff (W) holds whenever \(w=\chi_{I}\) is the characteristic function of some interval \(I\subseteq[0,1]\). It follows that if \(f\) is u.d. mod \(1\) and \(w\colon[0,1]\to\mathbb{R}\) is a step function (that is, an \(\mathbb{R}\)-linear combination of characteristic functions \(\chi_{I}\) of intervals \(I\subseteq[0,1]\)), then (W) holds.
**Lemma 5.9.6**.: _Let \(w\colon[0,1]\to\mathbb{R}\) be bounded and measurable, and suppose that for every \(\varepsilon\in\mathbb{R}^{>}\) there are bounded measurable functions \(w_{1},w_{2}\colon[0,1]\to\mathbb{R}\) such that_
* \(w_{1}\leqslant w\leqslant w_{2}\) _on_ \([0,1)\)_,_
* \(\int_{0}^{1}\big{(}w_{2}(s)-w_{1}(s)\big{)}\,ds\leqslant\varepsilon\)_, and_
* _for_ \(i=1,2\)_, (W) holds for_ \(w_{i}\) _instead of_ \(w\)_._
_Then (W) holds._
Proof.: Given \(\varepsilon\in\mathbb{R}^{>}\) and \(w_{1},w_{2}\colon[0,1]\to\mathbb{R}\) satisfying (i), (ii), (iii), we have
\[\int_{0}^{1}w(s)\,ds-\varepsilon \leqslant\ \int_{0}^{1}w_{1}(s)\,ds\ =\ \lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}w_{1}\big{(}\{f(t)\}\big{)}dt\] \[\leqslant\ \liminf_{T\to\infty}\frac{1}{T}\int_{0}^{T}w\big{(}\{f(t)\} \big{)}dt\ \leqslant\ \limsup_{T\to\infty}\frac{1}{T}\int_{0}^{T}w\big{(}\{f(t)\}\big{)}dt\] \[\leqslant\ \lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}w_{2}\big{(}\{f(t)\} \big{)}dt\ =\ \int_{0}^{1}w_{2}(s)\,ds\] \[\leqslant\ \int_{0}^{1}w(s)\,ds+\varepsilon.\qed\]
**Proposition 5.9.7**.: \(f\) _is u.d. mod \(1\) iff (W) holds for all continuous \(w\colon[0,1]\to\mathbb{R}\)._
Proof.: If \(w\colon[0,1]\to\mathbb{R}\) is continuous, then partitioning \([0,1)\) into intervals as in Riemann integration we obtain for any \(\varepsilon\in\mathbb{R}^{>}\) step functions \(w_{1},w_{2}\colon[0,1]\to\mathbb{R}\) such that \(w_{1}\leqslant w\leqslant w_{2}\) on \([0,1)\) and \(\int_{0}^{1}\big{(}w_{2}(s)-w_{1}(s)\big{)}\,ds\leqslant\varepsilon\). Moreover, if \(I\subseteq[0,1)\) is an interval, then for any \(\varepsilon\in\mathbb{R}^{>}\) there are continuous functions \(w_{1},w_{2}\colon[0,1]\to\mathbb{R}\) with \(w_{1}\leqslant\chi_{I}\leqslant w_{2}\) and \(\int_{0}^{1}\big{(}w_{2}(s)-w_{1}(s)\big{)}ds\leqslant\varepsilon\). The proposition follows from these facts and Lemma 5.9.6.
It is convenient to extend the notion of mean value to bounded measurable functions \(g\colon\mathbb{R}^{\geqslant}\to\mathbb{C}\): for such \(g\), if \(\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}g(t)\,dt\) exists in \(\mathbb{C}\), then we say that \(g\) has mean value \(M(g):=\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}g(t)\,dt\).
**Corollary 5.9.8**.: _The following conditions on \(f\) are equivalent:_
* \(f\) _is u.d._ mod \(1\)_;_
* \(\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}(w\circ f)(t)\,dt=\int_{0}^{1}w(s)\,ds\) _for all continuous_ \(1\)_-periodic functions_ \(w\colon\mathbb{R}\to\mathbb{C}\)_;_
* _for every continuous_ \(1\)_-periodic_ \(w\colon\mathbb{R}\to\mathbb{C}\)_, the function_ \(w\circ f\colon\mathbb{R}^{\geqslant}\to\mathbb{C}\) _has mean value_ \(M(w\circ f)=M(w)\)_._
Proof.: We first show (i) \(\Leftrightarrow\) (ii). For the forward direction, apply Proposition 5.9.7 to the real and imaginary parts of \(w\), using \(w(\{t\})=w(t)\) for \(t\in\mathbb{R}\) and \(1\)-periodic \(w\). The converse follows from Lemma 5.9.6 and the observation that if \(I\subseteq[0,1)\) is an interval, then for any \(\varepsilon\in\mathbb{R}^{>}\) we can take continuous functions \(w_{1},w_{2}\colon[0,1]\to\mathbb{R}\) with \(w_{1}\leqslant\chi_{I}\leqslant w_{2}\) and \(\int_{0}^{1}\big{(}w_{2}(s)-w_{1}(s)\big{)}ds\leqslant\varepsilon\) as in the proof of the proposition above, such that in addition \(w_{i}(0)=w_{i}(1)\) for \(i=1,2\), and then \(v_{i}\colon\mathbb{R}\to\mathbb{R}\) given by \(v_{i}(t)=w_{i}(\{t\})\) for \(t\in\mathbb{R}\) (\(i=1,2\)) is continuous and \(1\)-periodic. The equivalence of (ii) and (iii) is immediate from Lemma 5.8.15.
**Theorem 5.9.9** (Weyl [207]).: _The function \(f\) is u.d. \(\mathrm{mod}\ 1\) iff for all \(n\geqslant 1\) we have_
\[\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}\mathrm{e}^{2\pi\mathrm{i}\,nf(t)}\ dt\ =\ 0. \tag{5.9.1}\]
Proof.: The forward direction follows from Corollary 5.9.8. Conversely, suppose that (5.9.1) holds for all \(n\geqslant 1\). Note that then for all \(k\in\mathbb{Z}^{\neq}\),
\[\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}\mathrm{e}^{2\pi\mathrm{i}\,kf(t)}\ dt\ =\ 0.\]
Thus by Corollary 5.8.19, every \(1\)-periodic trigonometric polynomial \(v\colon\mathbb{R}\to\mathbb{C}\) gives a function \(v\circ f\) with mean value \(M(v\circ f)=M(v)\). Now let \(w\colon\mathbb{R}\to\mathbb{C}\) be continuous and \(1\)-periodic. Proposition 5.8.1 yields a sequence \((v_{m})\) of \(1\)-periodic trigonometric polynomials \(\mathbb{R}\to\mathbb{C}\) with \(\|v_{m}-w\|\to 0\) as \(m\to\infty\). So \(M(v_{m})\to M(w)\) as \(m\to\infty\), by Lemma 5.8.13. Extend \(f\) to a measurable function \(\mathbb{R}\to\mathbb{R}\), also denoted by \(f\). Then \(\|(v_{m}\circ f)-(w\circ f)\|\to 0\) as \(m\to\infty\). Hence by Lemma 5.8.13 again, \(w\circ f\) has a mean value and \(M(v_{m})=M(v_{m}\circ f)\to M(w\circ f)\) as \(m\to\infty\). Therefore \(M(w\circ f)=M(w)\). Hence \(f\) is u.d. \(\mathrm{mod}\ 1\) by Corollary 5.9.8.
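Weyl's criterion yields a practical numerical test. By Theorem 5.9.13 below, \(f(t)=\sqrt{t}\) is u.d. \(\mathrm{mod}\ 1\) (its germ lies in a Hardy field with \(\sqrt{x}\succ\log x\)), whereas \(f(t)=\log t\) is not. The Python sketch below (ours, purely illustrative; the name `weyl_average` is not from the text) approximates the averages in (5.9.1) for \(n=1\):

```python
# Approximate Weyl averages (1/T) * integral e^{2 pi i f(t)} dt for
# f = sqrt (expected to tend to 0) and f = log (expected not to).
import numpy as np

def weyl_average(f, T, m=2_000_000):
    t = np.linspace(1.0, T, m)          # start at 1 to avoid log(0)
    return np.abs(np.mean(np.exp(2j * np.pi * f(t))))

for T in [1e3, 1e4, 1e5]:
    print(f"T = {T:6.0f}   sqrt: {weyl_average(np.sqrt, T):.4f}"
          f"   log: {weyl_average(np.log, T):.4f}")
# The sqrt-averages decay like 1/(pi sqrt(T)); the log-averages stay near
# 1/|1 + 2 pi i| ~ 0.157, so t -> log(t) is not u.d. mod 1.
```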
_Remark_.: Let \(w(s)=\mathrm{e}^{2\pi is}\) (\(s\in\mathbb{R}\)) and let \(g\colon\mathbb{R}\to\mathbb{R}\) be a continuous function whose restriction to \(\mathbb{R}^{\geqslant}\) is u.d. \(\mathrm{mod}\ 1\). By Corollary 5.9.8, \(w\circ g\) has a mean value. When is \(w\circ g\) almost periodic? This happens only for very special \(g\): if \(w\circ g\) is almost periodic, then there are \(r\in\mathbb{R}\) and an almost periodic \(h\colon\mathbb{R}\to\mathbb{R}\) such that \(g(t)=rt+h(t)\) for all \(t\in\mathbb{R}\), by a theorem of Bohr [26]. Moreover, [143, Theorem 1] says that if \(h\colon\mathbb{R}\to\mathbb{R}\) is almost periodic, then for all but countably many \(r\in\mathbb{R}\) the function \(t\mapsto rt+h(t)\colon\mathbb{R}^{\geqslant}\to\mathbb{R}\) is u.d. \(\mathrm{mod}\ 1\). These facts are not used later.
**Uniform distribution \(\mathrm{mod}\ 1\) of differentiable functions.** Let \(f\colon\mathbb{R}^{\geqslant a}\to\mathbb{R}\) (\(a\in\mathbb{R}\)) be continuously differentiable. We give here sufficient conditions for \(f\) to be u.d. \(\mathrm{mod}\ 1\) and for \(f\) not to be u.d. \(\mathrm{mod}\ 1\). First a lemma in the spirit of Corollary 5.7.4:
**Lemma 5.9.10**.: _Let \(F\colon\mathbb{R}^{>}\to\mathbb{R}\) be twice continuously differentiable such that \(F(t)/t\to 0\) as \(t\to+\infty\). Assume \(t\mapsto tF^{\prime\prime}(t)\colon\mathbb{R}^{>}\to\mathbb{R}\) is bounded. Then \(F^{\prime}(t)\to 0\) as \(t\to+\infty\)._
Proof.: Let \(t,\eta>0\). Taylor's Theorem [16, Theorem 19.9] yields \(\theta\in[0,1]\) such that
\[F(t+\eta)-F(t)\ =\ \eta F^{\prime}(t)+\tfrac{1}{2}\eta^{2}F^{\prime\prime}(t+ \theta\eta),\]
and thus
\[F^{\prime}(t)\ =\ \frac{F(t+\eta)-F(t)}{\eta}-\tfrac{1}{2}\eta F^{\prime\prime}(t+ \theta\eta).\]
Take \(M\in\mathbb{R}^{>}\) such that \(|tF^{\prime\prime}(t)|\leqslant M\) for all \(t\in\mathbb{R}^{>}\). Let \(\varepsilon\in\mathbb{R}^{>}\), and set \(\delta:=\varepsilon/M\). Then for all \(t>0\), \(\eta=\delta t\) yields \(\theta=\theta_{t}\in[0,1]\) with
\[F^{\prime}(t)\ =\ \left(\frac{F(t+\delta t)}{t+\delta t}\cdot\frac{1+\delta}{\delta}-\frac{F(t)}{t}\cdot\frac{1}{\delta}\right)-\frac{\delta}{2(1+\theta\delta)}(t+\theta\delta t)F^{\prime\prime}(t+\theta\delta t).\]
The difference in the parentheses tends to zero as \(t\to\infty\) whereas the remaining term is \(\leqslant\varepsilon/2\) in absolute value for all \(t\in\mathbb{R}^{>}\).
**Proposition 5.9.11** (Kuipers-Meulenbeld [117]).: _Suppose the function_
\[t\mapsto f^{\prime}(t)t\,:\ \mathbb{R}^{\geqslant a}\to\mathbb{R}\]
_is bounded. Then \(f\) is not u.d. \(\mathrm{mod}\ 1\)._
Proof.: Replacing \(f\) by \(t\mapsto f(a+t)\colon\mathbb{R}^{\geqslant}\to\mathbb{R}\) we arrange \(a=0\). Assume towards a contradiction that (5.9.1) holds for \(n=1\), and consider \(F\colon\mathbb{R}^{>}\to\mathbb{R}\) given by
\[F(t)\ :=\ \mathrm{Re}\left(\int_{0}^{t}\mathrm{e}^{2\pi if(s)}\ ds\right)\ =\ \int_{0}^{t}\cos\big{(}2\pi f(s)\big{)}\,ds.\]
Then \(F\) is twice continuously differentiable with
\[F^{\prime}(t)\ =\ \cos\big{(}2\pi f(t)\big{)},\quad F^{\prime\prime}(t)\ =\ -2\pi f^{\prime}(t)\sin\big{(}2\pi f(t)\big{)},\]
and \(F(t)/t\to 0\) as \(t\to\infty\) and \(t\mapsto tF^{\prime\prime}(t)\colon\mathbb{R}^{>}\to\mathbb{R}\) is bounded. Hence by Lemma 5.9.10 we have \(\cos\big{(}2\pi f(t)\big{)}\to 0\) as \(t\to\infty\); likewise we show \(\sin\big{(}2\pi f(t)\big{)}\to 0\) as \(t\to\infty\). Hence \(\mathrm{e}^{2\pi if(t)}\to 0\) as \(t\to\infty\), a contradiction.
In the next proposition we assume \(a=0\) and consider the continuously differentiable function \(t\mapsto g(t):=f(\mathrm{e}^{t})\colon\mathbb{R}\to\mathbb{R}\) (so \(f(t)=g(\log t)\) for \(t\in\mathbb{R}^{>}\)).
**Proposition 5.9.12** (Tsuji [201]).: _Suppose \(g\) and \(g^{\prime}\) are eventually strictly increasing with \(g(t)/t\to+\infty\) as \(t\to+\infty\). Then \(f\) is u.d. \(\mathrm{mod}\ 1\)._
Proof.: Let \(n\geqslant 1\); we claim that (5.9.1) holds. The continuous functions
\[t \mapsto\varphi(t):=2\pi nf(t)\,:\ \mathbb{R}^{\geqslant}\to\mathbb{R},\] \[t \mapsto\gamma(t):=\varphi^{\prime}(t)t=2\pi n\,g^{\prime}(\log t) \,:\ \mathbb{R}^{>}\to\mathbb{R}\]
are eventually strictly increasing. We have \(\varphi(t)/\log t\to+\infty\) as \(t\to+\infty\). Therefore \(\gamma(t)\to+\infty\) as \(t\to+\infty\): otherwise \(\varphi^{\prime}(t)\leqslant M/t\) for some \(b,M>0\) and all \(t\geqslant b\), and then integration gives \(\varphi(t)=O(\log t)\) as \(t\to+\infty\), a contradiction.
Take \(a_{0}\in\mathbb{R}^{>}\) such that \(\varphi\) and \(\gamma\) are strictly increasing on \(\mathbb{R}^{\geqslant a_{0}}\) and \(\gamma(a_{0})>0\). Set \(\rho_{0}=\varphi(a_{0})\), and take \(\eta\colon\mathbb{R}^{\geqslant\rho_{0}}\to\mathbb{R}^{\geqslant a_{0}}\) so that \((\eta\circ\varphi)(t)=t\) for \(t\in\mathbb{R}^{\geqslant a_{0}}\). Then \(\eta^{\prime}\big{(}\varphi(t)\big{)}>0\) and \(\gamma(t)=\eta\big{(}\varphi(t)\big{)}/\eta^{\prime}\big{(}\varphi(t)\big{)}\) for \(t>a_{0}\). Hence the function
\[u\mapsto\eta^{\dagger}(u)\ :=\ \eta^{\prime}(u)/\eta(u)\,:\ \mathbb{R}^{>\rho_{0}} \to\mathbb{R}^{>}\]
is strictly decreasing with \(\lim\limits_{u\to+\infty}\eta^{\dagger}(u)=0\). Let now \(T>a_{0}\) and consider
\[I(T)\ :=\ \int_{a_{0}}^{T}\sin\varphi(t)\,dt.\]
Set \(\rho_{T}=\varphi(T)\), and let \(\tau\in(\rho_{0},\rho_{T})\). Substituting \(u=\varphi(t)\) gives
\[I(T)\ =\ \int_{\rho_{0}}^{\rho_{T}}\eta^{\prime}(u)\sin u\,du\ =\ \int_{\rho_{0}}^{\tau}\eta^{\prime}(u)\sin u\,du+\int_{\tau}^{\rho_{T}}\eta(u) \eta^{\dagger}(u)\sin u\,du.\]
Two applications of the Second Mean Value Theorem for Integrals [16, Theorem 23.7] yield first \(\tau_{2}\) and then \(\tau_{1}\) such that \(\tau\leqslant\tau_{1}\leqslant\tau_{2}\leqslant\rho_{T}\) and
\[\int_{\tau}^{\rho_{T}}\eta(u)\eta^{\dagger}(u)\sin u\,du\ =\ \eta^{\dagger}(\tau)\int_{ \tau}^{\tau_{2}}\eta(u)\sin u\,du\ =\ \eta^{\dagger}(\tau)\eta(\tau_{2})\int_{\tau_{1}}^{\tau_{2}}\sin u\,du,\]
hence
\[\int_{\tau}^{\rho_{T}}\eta(u)\eta^{\dagger}(u)\sin u\,du\ =\ \eta^{\dagger}(\tau) \,C\qquad\text{where }|C|\leqslant 2\eta(\rho_{T})=2T.\]
Let now \(\varepsilon\in\mathbb{R}^{>}\) be given. Take \(\tau>\rho_{0}\) so large that \(\eta^{\dagger}(\tau)\leqslant\varepsilon/4\). Then for \(T>a_{0}\) so large that \(\varphi(T)>\tau\), and
\[\left|\int_{\rho_{0}}^{\tau}\eta^{\prime}(u)\sin u\,du\right|\ \leqslant\ \varepsilon T/2\]
we have \(|I(T)|\leqslant\varepsilon T\). Thus as \(T\to\infty\) we have
\[\frac{1}{T}\int_{0}^{T}\sin\varphi(t)\,dt\ =\ \frac{1}{T}\int_{0}^{a_{0}}\sin \varphi(t)\,dt+\frac{I(T)}{T}\ \to\ 0.\]
Likewise, \(\frac{1}{T}\int_{0}^{T}\cos\varphi(t)\,dt\to 0\) as \(T\to\infty\). Thus (5.9.1) is satisfied.
**Theorem 5.9.13** (Boshernitzan [36]).: _Suppose the germ of \(f\), also denoted by \(f\), lies in a Hardy field \(H\). Then: \(f\) is u.d. \(\mathrm{mod}\ 1\Longleftrightarrow f\succ\log x\)._
Proof.: By increasing \(H\) we arrange \(\log x\in H\). The claim is obvious if \(f\preccurlyeq 1\), since then \(f\) is neither \(\succ\log x\) nor u.d. \(\mathrm{mod}\ 1\). So suppose \(f\succ 1\); then \(f\succ\log x\) iff \(f^{\prime}\succ 1/x\). If \(f^{\prime}\preccurlyeq 1/x\), then \(f\) is not u.d. \(\mathrm{mod}\ 1\) by Proposition 5.9.11. Suppose \(f^{\prime}\succ 1/x\); to show that \(f\) is u.d. \(\mathrm{mod}\ 1\) we replace \(f\) by \(t\mapsto f(a+t)\colon\mathbb{R}^{\geqslant}\to\mathbb{R}\) and \(H\) by \(H\circ(a+x)\) to arrange \(a=0\). Replacing \(f\) by \(-f\) if necessary we also arrange \(f>0\) in \(H\). Then \(f(t)\to+\infty\) as \(t\to+\infty\), hence the function \(t\mapsto g(t):=f(\mathrm{e}^{t})\colon\mathbb{R}\to\mathbb{R}\) is eventually strictly increasing and its germ, also denoted by \(g\), lies in some Hardy field and satisfies \(g\succ x\); thus \(g^{\prime}\succ 1\), hence \(t\mapsto g^{\prime}(t)\colon\mathbb{R}\to\mathbb{R}\) is also eventually strictly increasing. Thus \(f\) is u.d. \(\mathrm{mod}\ 1\) by Proposition 5.9.12.
In particular, if \(f\) is u.d. \(\mathrm{mod}\ 1\) and its germ lies in a Hardy field, then \(\alpha f\) is u.d. \(\mathrm{mod}\ 1\) for every \(\alpha\in\mathbb{R}^{\times}\).
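_Example_.: By Theorem 5.9.13, the functions \(t\mapsto t^{\varepsilon}\) (\(\varepsilon\in\mathbb{R}^{>}\)) and \(t\mapsto(\log t)^{1+\delta}\) (\(\delta\in\mathbb{R}^{>}\)) are u.d. \(\mathrm{mod}\ 1\), since their germs lie in a Hardy field and dominate \(\log x\); in contrast, \(t\mapsto c\log t\) (\(c\in\mathbb{R}^{\times}\)) and \(t\mapsto(\log t)^{1/2}\) are not u.d. \(\mathrm{mod}\ 1\).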
**Uniform distribution \(\mathrm{mod}\ 1\) in higher dimensions.** In this subsection \(n\geqslant 1\), \(\mathbb{R}^{n}\) is equipped with its usual Lebesgue measure \(\mu_{n}\), and _measurable_ for a subset of \(\mathbb{R}^{n}\) means measurable with respect to \(\mu_{n}\). Let \(a\in\mathbb{R}\) and consider measurable functions \(f_{1},\ldots,f_{n}\colon\mathbb{R}^{\geqslant a}\to\mathbb{R}\), which we combine into a single map
\[f\ :=\ (f_{1},\ldots,f_{n})\ :\ \mathbb{R}^{\geqslant a}\to\mathbb{R}^{n}.\]
By a **box** (in \(\mathbb{R}^{n}\)) we mean a set \(I=I_{1}\times\cdots\times I_{n}\) where \(I_{1},\ldots,I_{n}\) are intervals, so \(\mu_{n}(I)=\mu(I_{1})\cdots\mu(I_{n})\) and \(f^{-1}(I+\mathbb{Z}^{n})=\bigcap_{j=1}^{n}f_{j}^{-1}(I_{j}+\mathbb{Z})\) is measurable.
**Definition 5.9.14**.: We say that \(f\) is **uniformly distributed**\(\mathrm{mod}\ 1\) (abbreviated: u.d. \(\mathrm{mod}\ 1\)) if for every box \(I\subseteq[0,1)^{n}\) the set \(f^{-1}(I+\mathbb{Z}^{n})\) has density \(\mu_{n}(I)\).
For \(s=(s_{1},\ldots,s_{n})\in\mathbb{R}^{n}\), set \(\{s\}:=\big{(}\{s_{1}\},\ldots,\{s_{n}\}\big{)}\in[0,1)^{n}\). With this notation, \(f\) is u.d. \(\mathrm{mod}\ 1\) iff for every box \(I\subseteq[0,1)^{n}\) we have
\[\lim_{T\to\infty}\frac{1}{T}\int_{a}^{a+T}\chi_{I}\big{(}\{f(t)\}\big{)}dt\ =\ \mu_{n}(I).\]
Let \(b\in\mathbb{R}^{\geqslant a}\), \(d\in\mathbb{R}\). Then \(f\) is u.d. \(\bmod 1\) iff the restriction of \(f\) to \(\mathbb{R}^{\geqslant b}\) is u.d. \(\bmod 1\), and if \(f\) is u.d. \(\bmod 1\), then so is \(t\mapsto f(d+t)\colon\mathbb{R}^{\geqslant(a-d)}\to\mathbb{R}^{n}\).
_In the rest of this subsection we assume \(a=0\)._ Proposition 5.9.7 and its Corollary 5.9.8 generalize to this setting:
**Proposition 5.9.15**.: _The map \(f\) is u.d. \(\bmod 1\) if and only if_
\[\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}w\big{(}\{f(t)\}\big{)}dt\ =\ \int_{[0,1]^{n}}w(s)\,ds\]
_for every continuous function \(w\colon[0,1]^{n}\to\mathbb{R}\)._
Proof.: First we use the proof of Lemma 5.9.6 to obtain the analogue of that lemma for bounded measurable functions \(w\colon[0,1]^{n}\to\mathbb{R}\). Now, given a continuous function \(w\colon[0,1]^{n}\to\mathbb{R}\) and \(\varepsilon\in\mathbb{R}^{>}\), there are \(\mathbb{R}\)-linear combinations \(w_{1},w_{2}\colon[0,1]^{n}\to\mathbb{R}\) of characteristic functions of pairwise disjoint boxes contained in \([0,1]^{n}\) such that \(w_{1}\leqslant w\leqslant w_{2}\) on \([0,1)^{n}\) and \(\int_{[0,1]^{n}}\big{(}w_{2}(s)-w_{1}(s)\big{)}\,ds\leqslant\varepsilon\). This gives one direction.
Next, let \(I=I_{1}\times\cdots\times I_{n}\subseteq[0,1)^{n}\) be a box and \(\varepsilon\in\mathbb{R}^{>}\). For \(j=1,\ldots,n\) we have continuous functions \(w_{1j},w_{2j}\colon[0,1]\to\mathbb{R}^{\geqslant}\) such that
\[0\ \leqslant\ w_{1j}\ \leqslant\ \chi_{I_{j}}\ \leqslant\ w_{2j}\ \leqslant\ 1\quad\text{and}\quad\int_{0}^{1}\big{(}w_{2j}(t)-w_{1j}(t)\big{)}dt \leqslant\varepsilon/2^{n}.\]
For \(s=(s_{1},\ldots,s_{n})\in[0,1]^{n}\) set \(w_{i}(s):=w_{i1}(s_{1})\cdots w_{in}(s_{n})\). Then the functions \(w_{1},w_{2}\colon[0,1]^{n}\to\mathbb{R}\) are continuous with
\[w_{1}\leqslant\chi_{I}\leqslant w_{2}\quad\text{and}\quad\int_{[0,1]^{n}} \big{(}w_{2}(s)-w_{1}(s)\big{)}ds\leqslant\varepsilon.\]
The proposition follows from these facts just as in the proof of Proposition 5.9.7.
As Proposition 5.9.7 led to Corollary 5.9.8, so Proposition 5.9.15 gives:
**Corollary 5.9.16**.: _The following conditions on \(f\) are equivalent:_
* _the map_ \(f\) _is u.d._ \(\bmod 1\)_;_
* \(\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}(w\circ f)(t)\,dt\ =\ \int_{[0,1]^{n}}w(s)\,ds\) _for every continuous_ \(1\)_-periodic function_ \(w\colon\mathbb{R}^{n}\to\mathbb{C}\)_;_
* _for every continuous_ \(1\)_-periodic_ \(w\colon\mathbb{R}^{n}\to\mathbb{C}\)_, the function_ \(w\circ f\colon\mathbb{R}^{\geqslant}\to\mathbb{C}\) _has mean value_ \(M(w\circ f)=M(w)\)_._
**Corollary 5.9.17**.: _Let \(w\colon\mathbb{R}^{n}\to\mathbb{R}^{\geqslant}\) be \(1\)-periodic and continuous, and suppose \(f\) is u.d. \(\bmod 1\). Then_
\[\limsup_{t\to\infty}w\big{(}f(t)\big{)}=0\ \Longleftrightarrow\ \limsup_{|s|\to\infty}w(s)=0\ \Longleftrightarrow\ w=0.\]
Proof.: Corollary 5.9.16 gives \(M(w\circ f)=M(w)\), and \(\|w\|=\limsup_{|s|\to\infty}w(s)\) by Lemma 5.8.9. One verifies easily that \(M(w\circ f)\leqslant\limsup_{t\to\infty}w\big{(}f(t)\big{)}\). The equivalences now follow from these facts and Proposition 5.8.17.
**Corollary 5.9.18**.: _Let \(w\colon\mathbb{R}^{n}\to\mathbb{R}\) be \(1\)-periodic and continuous, and suppose \(f\) is u.d. \(\bmod 1\). Then \(\limsup_{t\to\infty}w\big{(}f(t)\big{)}=\sup_{s}w(s)\) and \(\liminf_{t\to\infty}w\big{(}f(t)\big{)}=\inf_{s}w(s)\)._
Proof.: Let \(a\in\mathbb{R}\), \(a<\sup_{s}w(s)\). Then \(w=u+v\) where \(u,v\colon\mathbb{R}^{n}\to\mathbb{R}\) are given by \(u(s):=\min\bigl{(}a,w(s)\bigr{)}\) for all \(s\) and \(v:=w-u\). Then \(u\), and thus \(v\), is \(1\)-periodic and continuous. Now \(v\geqslant 0\), but \(v\neq 0\), so \(\limsup_{t\to\infty}v\bigl{(}f(t)\bigr{)}>0\) by Corollary 5.9.17. This gives \(\varepsilon>0\) with \(v(f(t))>\varepsilon\) for arbitrarily large \(t\). For such \(t\) we have \(u\bigl{(}f(t)\bigr{)}=a\): \(u\bigl{(}f(t)\bigr{)}<a\) would give \(u\bigl{(}f(t)\bigr{)}=w\bigl{(}f(t)\bigr{)}\), so \(v\bigl{(}f(t)\bigr{)}=0\). Hence \(w\bigl{(}f(t)\bigr{)}=u\bigl{(}f(t)\bigr{)}+v\bigl{(}f(t)\bigr{)}>a+\varepsilon\) for such \(t\). The other equality follows likewise.
Weyl's Theorem 5.9.9 also generalizes:
**Theorem 5.9.19**.: _The map \(f\) is u.d. \(\mathrm{mod}\ 1\) if and only if for all \(k\in(\mathbb{Z}^{n})^{\neq}\),_
\[\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}\mathrm{e}^{2\pi\mathrm{i}(k\cdot f(t) )}\ dt\ =\ 0.\]
Proof.: Like that of Theorem 5.9.9, using Corollary 5.9.16 instead of Corollary 5.9.8.
Theorems 5.9.9 and 5.9.19 yield:
**Corollary 5.9.20**.: _The map \(f\) is u.d. \(\mathrm{mod}\ 1\) if and only if for all \(k\in(\mathbb{Z}^{n})^{\neq}\), the function \(t\mapsto k\cdot f(t)\colon\mathbb{R}^{\geqslant}\to\mathbb{R}\) is u.d. \(\mathrm{mod}\ 1\)._
**Strengthening uniform distribution.**_In this subsection \(n\geqslant 1\), the functions \(f_{1},\ldots,f_{n}\colon\mathbb{R}^{\geqslant}\to\mathbb{R}\) are measurable, \(f=(f_{1},\ldots,f_{n})\colon\mathbb{R}^{\geqslant}\to\mathbb{R}^{n}\) is the resulting map, and for \(\alpha\in\mathbb{R}^{n}\) we set \(\alpha f:=(\alpha_{1}f_{1},\ldots,\alpha_{n}f_{n})\colon\mathbb{R}^{\geqslant} \to\mathbb{R}^{n}\)._
**Lemma 5.9.21**.: _The following conditions on \(f\) are equivalent:_
* \(\alpha f\) _is u.d._ \(\mathrm{mod}\ 1\) _for all_ \(\alpha\in(\mathbb{R}^{\times})^{n}\)_;_
* \(\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}\mathrm{e}^{2\pi\mathrm{i}(\beta\cdot f (t))}\ dt\ =\ 0\) _for all_ \(\beta\in(\mathbb{R}^{n})^{\neq}\)_;_
* _for every almost periodic_ \(w\colon\mathbb{R}^{n}\to\mathbb{C}\)_, the function_ \(w\circ f\colon\mathbb{R}^{\geqslant}\to\mathbb{C}\) _has mean value_ \(M(w\circ f)=M(w)\)_._
Proof.: Assume (i); let \(\beta\in(\mathbb{R}^{n})^{\neq}\). For \(i=1,\ldots,n\) set \(\alpha_{i}:=1\), \(k_{i}:=0\) if \(\beta_{i}=0\) and \(\alpha_{i}:=\beta_{i}\), \(k_{i}:=1\) if \(\beta_{i}\neq 0\). Then \(k=(k_{1},\ldots,k_{n})\in(\mathbb{Z}^{n})^{\neq}\), \(\alpha=(\alpha_{1},\ldots,\alpha_{n})\) is in \((\mathbb{R}^{\times})^{n}\), and \(\beta\cdot f(t)=k\cdot(\alpha f)(t)\) for all \(t\in\mathbb{R}^{\geqslant}\). Now (ii) follows from Theorem 5.9.19 applied to \(\alpha f\) in place of \(f\). The implication (ii) \(\Rightarrow\) (iii) follows as in the proof of Theorem 5.9.9, using the definition of almost periodicity instead of Proposition 5.8.1. Finally, assume (iii), and let \(\alpha\in(\mathbb{R}^{\times})^{n}\); to show that \(\alpha f\) is u.d. \(\mathrm{mod}\ 1\) we verify that condition (iii) in Corollary 5.9.16 holds for \(\alpha f\) in place of \(f\). Thus let \(w\colon\mathbb{R}^{n}\to\mathbb{C}\) be continuous and \(1\)-periodic. By (iii) applied to the almost periodic function
\[s\mapsto w_{\times\alpha}(s)=w(\alpha s)\,:\ \mathbb{R}^{n}\to\mathbb{C}\]
in place of \(w\), the function \(w_{\times\alpha}\circ f=w\circ(\alpha f)\colon\mathbb{R}^{\geqslant}\to \mathbb{C}\) has a mean value and \(M(w_{\times\alpha}\circ f)=M(w_{\times\alpha})\); now use that \(M(w_{\times\alpha})=M(w)\) by Lemma 5.8.16.
We say that \(f\) is **uniformly distributed** (abbreviated: u.d.) if it satisfies one of the equivalent conditions in Lemma 5.9.21. This lemma also yields:
**Corollary 5.9.22**.: _The map \(f\) is u.d. if and only if for all \(\beta\in(\mathbb{R}^{n})^{\neq}\), the function \(t\mapsto\beta\cdot f(t)\colon\mathbb{R}^{\geqslant}\to\mathbb{R}\) is u.d. \(\mathrm{mod}\ 1\)._
The proof of the next result is like that of Corollary 5.9.17, using Lemma 5.9.21 instead of Corollary 5.9.16:
**Corollary 5.9.23**.: _Suppose \(w\colon\mathbb{R}^{n}\to\mathbb{R}^{\geqslant}\) is almost periodic and \(f\) is u.d. Then_
\[\limsup_{t\to\infty}w\big{(}f(t)\big{)}=0\iff\limsup_{|s|\to\infty}w(s)=0\iff w=0.\]
**Application to Hardy fields.**_In this subsection \(f_{1},\dots,f_{n}\colon\mathbb{R}^{\geqslant}\to\mathbb{R}\) with \(n\geqslant 1\) are continuous, their germs, denoted also by \(f_{1},\dots,f_{n}\), lie in a common Hardy field, and \(f:=(f_{1},\dots,f_{n})\colon\mathbb{R}^{\geqslant}\to\mathbb{R}^{n}\)._ Theorem 5.9.13 with Corollary 5.9.20 gives:
**Corollary 5.9.24** (Boshernitzan).: _We have the following equivalence:_
\[f\text{ is u.d. }\mathrm{mod}\ 1\iff k_{1}f_{1}+\dots+k_{n}f_{n}\succ\log x \text{ for all }(k_{1},\dots,k_{n})\in(\mathbb{Z}^{n})^{\neq}.\]
Combining Theorem 5.9.13 with Corollary 5.9.22 yields likewise:
**Corollary 5.9.25**.: _We have the following equivalence:_
\[f\text{ is u.d. }\iff\alpha_{1}f_{1}+\dots+\alpha_{n}f_{n}\succ\log x \text{ for all }(\alpha_{1},\dots,\alpha_{n})\in(\mathbb{R}^{n})^{\neq}.\]
_In particular, if \(\log x\prec f_{1}\prec\dots\prec f_{n}\), then \(f\) is u.d._
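As a quick illustration of the last clause (the example is ours, but immediate): the germs of \(t^{1/2}\), \(t\), \(t^{2}\) lie in the common Hardy field \(\mathbb{R}(x^{1/2})\) with \(\log x\prec x^{1/2}\prec x\prec x^{2}\), so \(t\mapsto(t^{1/2},t,t^{2})\colon\mathbb{R}^{\geqslant}\to\mathbb{R}^{3}\) is u.d. Directly from the displayed equivalence: for \((\alpha_{1},\alpha_{2},\alpha_{3})\in(\mathbb{R}^{3})^{\neq}\),
\[\alpha_{1}x^{1/2}+\alpha_{2}x+\alpha_{3}x^{2}\ \succ\ \log x,\]
since the left-hand side is asymptotic to a nonzero multiple of the largest of \(x^{1/2}\), \(x\), \(x^{2}\) whose coefficient is nonzero.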
Here is an immediate application of Corollary 5.9.24:
**Corollary 5.9.26** (Weyl).: _Let \(\lambda_{1},\dots,\lambda_{n}\in\mathbb{R}\). Define \(g\colon\mathbb{R}^{\geqslant}\to\mathbb{R}^{n}\) by \(g(t)=(\lambda_{1}t,\dots,\lambda_{n}t)\). Then \(g\) is u.d. \(\mathrm{mod}\ 1\) iff \(\lambda_{1},\dots,\lambda_{n}\) are \(\mathbb{Q}\)-linearly independent._
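For instance (a standard illustration of Weyl's result): \(t\mapsto(t,\sqrt{2}\,t)\) is u.d. \(\mathrm{mod}\ 1\), since \(1,\sqrt{2}\) are \(\mathbb{Q}\)-linearly independent; in terms of Corollary 5.9.24, for \((k_{1},k_{2})\in(\mathbb{Z}^{2})^{\neq}\),
\[k_{1}x+k_{2}\sqrt{2}\,x\ =\ (k_{1}+k_{2}\sqrt{2})\,x\ \succ\ \log x.\]
By contrast, \(t\mapsto(t,2t)\) is not u.d. \(\mathrm{mod}\ 1\): for \((k_{1},k_{2})=(-2,1)\) the combination \(-2x+2x\) vanishes identically.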
We now get to the result that we actually need in Section 5.10:
**Proposition 5.9.27**.: _Suppose \(w\colon\mathbb{R}^{n}\to\mathbb{R}^{\geqslant}\) is almost periodic, \(1\prec f_{1}\prec\dots\prec f_{n}\), and \(\limsup_{t\to+\infty}w\big{(}f(t)\big{)}=0\). Then \(w=0\)._
Proof.: We first arrange \(f_{1}>\mathbb{R}\), replacing \(f_{1},\dots,f_{n}\) and \(w\) by \(-f_{1},\dots,-f_{n}\) and the function \(s\mapsto w(-s)\colon\mathbb{R}^{n}\to\mathbb{R}^{\geqslant}\), if \(f_{1}<\mathbb{R}\). Pick \(a\geqslant 0\) such that the restriction of \(f_{1}\) to \(\mathbb{R}^{\geqslant a}\) is strictly increasing, set \(b:=f_{1}(a)\), and let \(f_{1}^{\mathrm{inv}}\colon\mathbb{R}^{\geqslant b}\to\mathbb{R}\) be the compositional inverse of this restriction. Set \(g_{j}(t):=(f_{j}\circ f_{1}^{\mathrm{inv}})(t)\) for \(t\geqslant b\) and \(j=1,\dots,n\) and consider the map
\[g\ =\ (g_{1},\dots,g_{n})\ =\ f\circ f_{1}^{\mathrm{inv}}\ :\ \mathbb{R}^{ \geqslant b}\to\mathbb{R}^{n}.\]
The germs of \(g_{1},\dots,g_{n}\), denoted by the same symbols, lie in a common Hardy field and satisfy \(x=g_{1}\prec g_{2}\prec\dots\prec g_{n}\). Now \(f_{1}^{\mathrm{inv}}\) is strictly increasing and moreover \(f_{1}^{\mathrm{inv}}(t)\to+\infty\) as \(t\to+\infty\), so
\[\limsup_{t\to\infty}w\big{(}f(t)\big{)}\ =\ \limsup_{t\to\infty}w\big{(}f \big{(}f_{1}^{\mathrm{inv}}(t)\big{)}\big{)}\ =\ \limsup_{t\to\infty}w\big{(}g(t)\big{)}\ =\ 0.\]
Thus replacing \(f_{1},\dots,f_{n}\) by continuous functions \(\mathbb{R}^{\geqslant}\to\mathbb{R}\) with the same germs as \(g_{1},\dots,g_{n}\), we arrange \(x=f_{1}\prec f_{2}\prec\dots\prec f_{n}\). Then \(f\) is u.d. by Corollary 5.9.25. Now use Corollary 5.9.23.
The next three results are not used later but included for their independent interest.
**Corollary 5.9.28**.: _Assume \(w\colon\mathbb{R}^{n}\to\mathbb{R}\) is almost periodic and \(1\prec f_{1}\prec\dots\prec f_{n}\). Then \(\limsup_{t\to\infty}w\big{(}f(t)\big{)}=\sup_{s}w(s)\) and \(\liminf_{t\to\infty}w\big{(}f(t)\big{)}=\inf_{s}w(s)\)._
Proof.: Let \(a\in\mathbb{R}\), \(a<\sup_{s}w(s)\). Then \(w=u+v\) where \(u,v\colon\mathbb{R}^{n}\to\mathbb{R}\) are given by \(u(s):=\min\bigl{(}a,w(s)\bigr{)}\) for all \(s\) and \(v:=w-u\). Then \(u\), and thus \(v\), is almost periodic by Corollary 5.8.2. Now argue as in the proof of Corollary 5.9.18, using Proposition 5.9.27 instead of Corollary 5.9.17.
**Corollary 5.9.29**.: _If \(w\colon\mathbb{R}^{n}\to\mathbb{C}\) is almost periodic and \(1\prec f_{1}\prec\cdots\prec f_{n}\), then_
\[\lim_{t\to\infty}w\big{(}f(t)\big{)}\ \text{exists in}\ \mathbb{C}\quad \Longleftrightarrow\quad w\ \text{is constant}.\]
Proof.: Apply the previous corollary to the real and imaginary part of \(w\).
Finally, we use these results to reprove [101, Theorem 8]. Given \(\alpha=(\alpha_{1},\ldots,\alpha_{n})\in\mathbb{C}^{n}\) we put \(\mathrm{e}^{\alpha}:=(\mathrm{e}^{\alpha_{1}},\ldots,\mathrm{e}^{\alpha_{n}}) \in\mathbb{C}^{n}\). Let \(m\geqslant 1\) and set
\[S\ :=\ \big{\{}(z_{1},\ldots,z_{m})\in\mathbb{C}^{m}:\ |z_{1}|=\cdots=|z_{m}|=1 \big{\}}.\]
**Corollary 5.9.30**.: _Suppose \(1\prec f_{1}\prec\cdots\prec f_{n}\). Let \(\varphi\colon S\to\mathbb{R}\) be continuous and let \(k_{1},\ldots,k_{n}\in\mathbb{N}^{\geqslant 1}\) with \(k_{1}+\cdots+k_{n}=m\) and \(\lambda_{j}=(\lambda_{j1},\ldots,\lambda_{jk_{j}})\in\mathbb{R}^{k_{j}}\) for \(j=1,\ldots,n\) be such that \(\lambda_{j1},\ldots,\lambda_{jk_{j}}\) are \(\mathbb{Q}\)-linearly independent. Then_
\[\limsup_{t\to\infty}\varphi\big{(}\,\mathrm{e}^{\mathrm{i}f_{1}(t)\lambda_{1 }},\ldots,\mathrm{e}^{\mathrm{i}f_{n}(t)\lambda_{n}}\,\big{)}\ =\ \max\varphi(S).\]
Proof.: By Corollary 5.8.2, the function
\[s=(s_{1},\ldots,s_{n})\mapsto w(s):=\varphi\big{(}\,\mathrm{e}^{\mathrm{i}s_{1 }\lambda_{1}},\ldots,\mathrm{e}^{\mathrm{i}s_{n}\lambda_{n}}\,\big{)}\ :\ \mathbb{R}^{n}\to\mathbb{R}\]
is almost periodic. We have
\[w\big{(}f(t)\big{)}\ =\ \varphi\big{(}\,\mathrm{e}^{\mathrm{i}f_{1}(t)\lambda_{1 }},\ldots,\mathrm{e}^{\mathrm{i}f_{n}(t)\lambda_{n}}\,\big{)}\quad\text{for }t \geqslant 0,\]
so by Corollary 5.9.28,
\[\limsup_{t\to\infty}\varphi\big{(}\,\mathrm{e}^{\mathrm{i}f_{1}(t)\lambda_{1 }},\ldots,\mathrm{e}^{\mathrm{i}f_{n}(t)\lambda_{n}}\,\big{)}\ =\ \limsup_{t\to\infty}w\big{(}f(t)\big{)}\ =\ \sup_{s}w(s).\]
For \(j=1,\ldots,n\) it follows from Corollary 5.9.26 that the image of the map
\[t\mapsto\mathrm{e}^{\mathrm{i}t\lambda_{j}}\ :\ \mathbb{R}^{\geqslant}\to\big{\{}(z_{1},\ldots,z_{k_{j}})\in\mathbb{C}^{k_{j}}:|z_{1}|=\cdots=|z_{k_{j}}|=1\big{\}}\]
is dense in its codomain, so the image of the map
\[(s_{1},\ldots,s_{n})\mapsto\big{(}\,\mathrm{e}^{\mathrm{i}s_{1}\lambda_{1}}, \ldots,\mathrm{e}^{\mathrm{i}s_{n}\lambda_{n}}\,\big{)}\ :\ \mathbb{R}^{n}\to S\]
is dense in \(S\). Hence \(\sup_{s}w(s)=\max\varphi(S)\).
### Examples involving real-valued trigonometric polynomials \((^{*})\)
The material in this subsection is only used later to justify a remark after Corollary 5.10.11.
_Example 5.9.31_.: Let \(a,b\in\mathbb{R}^{\times}\), and consider the \(1\)-periodic trigonometric polynomial \(w\colon\mathbb{R}^{2}\to\mathbb{R}\) given by
\[w(s)\ =\ a\cos(2\pi s_{1})+b\cos(2\pi s_{2})\qquad\text{for }s=(s_{1},s_{2}) \in\mathbb{R}^{2}.\]
Let \(\lambda,\mu\in\mathbb{R}\) be \(\mathbb{Q}\)-linearly independent. Then by Corollaries 5.9.18 and 5.9.26:
\[\limsup_{t\to-\infty}w(\lambda t,\mu t)\ =\ \limsup_{t\to+\infty}w(\lambda t,\mu t )\ =\ |a|+|b|.\]
Note also that \(|a|+|b|>w(\lambda t,\mu t)\) for all \(t\in\mathbb{R}^{\neq}\).
Now let \(\alpha\in\mathbb{R}\setminus\mathbb{Q}\) and consider
\[v\colon\mathbb{R}\to\mathbb{R},\qquad v(t):=2-\cos(t)-\cos(\alpha t).\]
Then \(v(t)>0\) for all \(t\in\mathbb{R}^{\neq}\). Moreover, \(\liminf_{t\to+\infty}v(t)=0\), that is, for each \(\varepsilon>0\) there are arbitrarily large \(t\in\mathbb{R}\) with \(v(t)<\varepsilon\). With a suitable choice of \(\alpha\) we can replace here \(\varepsilon\) by any prescribed function \(\varepsilon\colon\mathbb{R}\to\mathbb{R}^{>}\) with \(\varepsilon(t)\to 0\) as \(t\to+\infty\):
**Theorem 5.9.32** (Basu-Bose-Vijayaraghavan [17]).: _Let \(\phi\colon\mathbb{R}\to\mathbb{R}^{>}\) be such that \(\phi(t)\to+\infty\) as \(t\to+\infty\). Then there exists \(\alpha\in\mathbb{R}\setminus\mathbb{Q}\) such that_
\[2-\cos(t)-\cos(\alpha t)<1/\phi(t)\quad\text{ for arbitrarily large $t\in\mathbb{R}$}.\]
Proof.: We first arrange \(\phi\geqslant 1\). We then choose a sequence \((d_{n})_{n\geqslant 1}\) of positive integers such that with \(q_{n}:=d_{1}d_{2}\cdots d_{n}\) (so \(q_{0}=1\)):
\[d_{n}\geqslant(2\pi+1)\,\phi(2\pi q_{n-1})\qquad\text{for $n\geqslant 1$},\]
and set
\[\alpha\;:=\;\sum_{n=1}^{\infty}\frac{1}{q_{n}}.\]
We have \(q_{m+1}=q_{m}d_{m+1}\geqslant(2\pi+1)q_{m}\,\phi(2\pi q_{m})\), so if \(q_{m+n}\geqslant(2\pi+1)^{n}q_{m}\,\phi(2\pi q_{m})\), then
\[q_{m+n+1}\geqslant(2\pi+1)q_{m+n}\,\phi(2\pi q_{m+n})\geqslant(2\pi+1)q_{m+n }\geqslant(2\pi+1)^{n+1}q_{m}\,\phi(2\pi q_{m}).\]
Thus by induction on \(n\) we obtain
\[q_{m+n}\geqslant(2\pi+1)^{n}q_{m}\,\phi(2\pi q_{m})\qquad\text{for $n\geqslant 1$}.\]
Since \(\sum_{n\geqslant 1}(2\pi+1)^{-n}=\frac{1}{2\pi}\), this yields
\[\sum_{n=1}^{\infty}\frac{1}{q_{m+n}}\leqslant\frac{1}{2\pi q_{m}\,\phi(2\pi q_{m})}\qquad\text{for all $m\geqslant 1$}. \tag{5.9.2}\]
Take \(p_{m}\in\mathbb{N}\) (\(m\geqslant 1\)) such that
\[\sum_{n=1}^{m}\frac{1}{q_{n}}=\frac{p_{m}}{q_{m}}.\]
Then
\[0<\alpha-\frac{p_{m}}{q_{m}}=\sum_{n=1}^{\infty}\frac{1}{q_{m+n}}\leqslant \frac{1}{2\pi q_{m}\,\phi(2\pi q_{m})}\qquad\text{for all $m\geqslant 1$}.\]
Suppose \(\alpha=p/q\) where \(p,q\in\mathbb{N}^{\geqslant 1}\); then for all \(m\geqslant 1\) we have
\[\frac{pq_{m}-qp_{m}}{qq_{m}}=\alpha-\frac{p_{m}}{q_{m}}\leqslant\frac{1}{2\pi q _{m}\,\phi(2\pi q_{m})}\]
and so
\[1\leqslant pq_{m}-qp_{m}\leqslant\frac{q}{2\pi\,\phi(2\pi q_{m})},\]
contradicting \(\phi(2\pi q_{m})\to+\infty\) as \(m\to+\infty\). Hence \(\alpha\notin\mathbb{Q}\). Next note that \(q_{m}\) and \(\alpha q_{m}-q_{m}\sum_{n\geqslant 1}\frac{1}{q_{m+n}}\) are integers and so
\[2-\cos(2\pi q_{m})-\cos(2\pi\alpha q_{m})=1-\cos\left(2\pi q_{m}\sum_{n=1}^{ \infty}\frac{1}{q_{m+n}}\right)<\frac{1}{\phi(2\pi q_{m})}\]
using (5.9.2): with \(u:=2\pi q_{m}\sum_{n=1}^{\infty}\frac{1}{q_{m+n}}\) we have \(0<u\leqslant 1/\phi(2\pi q_{m})\leqslant 1\), hence \(1-\cos u\leqslant u^{2}/2<u\leqslant 1/\phi(2\pi q_{m})\). This yields the theorem.
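To get a feel for the construction, here is one admissible choice of the data (the specific numbers are merely illustrative): take \(\phi(t)=\max(1,t)\). Then \(q_{0}=1\) and the requirement on \(d_{1}\) reads
\[d_{1}\ \geqslant\ (2\pi+1)\,\phi(2\pi)\ =\ 4\pi^{2}+2\pi\ \approx\ 45.8,\]
so \(d_{1}=46\), \(q_{1}=46\) works; next \(d_{2}\geqslant(2\pi+1)\,\phi(92\pi)\approx 2105\), so \(d_{2}=2106\), \(q_{2}=96876\) works, and so on, giving \(\alpha=\frac{1}{46}+\frac{1}{96876}+\cdots\approx 0.02175\). A faster growing \(\phi\) forces correspondingly faster growth of the \(q_{m}\).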
### Universal Exponential Extensions of Hardy Fields
_In this section \(H\supseteq\mathbb{R}\) is a Liouville closed Hardy field._ Then \(\mathcal{C}^{<\infty}[i]\) is a differential ring extension of the d-valued field \(K:=H[i]\) with the same ring of constants as \(K\), namely \(\mathbb{C}\). Note that for any \(f\in\mathcal{C}^{<\infty}[i]\) we have a \(g\in\mathcal{C}^{<\infty}[i]\) with \(g^{\prime}=f\), and then \(u=\mathrm{e}^{g}\in\mathcal{C}^{<\infty}[i]^{\times}\) satisfies \(u^{\dagger}=f\).
**Lemma 5.10.1**.: _Suppose \(f\in\mathcal{C}^{<\infty}[i]\) is purely imaginary, that is, \(f\in i\mathcal{C}^{<\infty}\). Then there is a \(u\in\mathcal{C}^{<\infty}[i]^{\times}\) such that \(u^{\dagger}=f\) and \(|u|=1\)._
Proof.: Taking \(g\in i\mathcal{C}^{<\infty}\) with \(g^{\prime}=f\), the resulting \(u=\mathrm{e}^{g}\) works.
We define the subgroup \(\mathrm{e}^{Hi}\) of \(\mathcal{C}^{<\infty}[i]^{\times}\) by
\[\mathrm{e}^{Hi}\ :=\ \big{\{}\mathrm{e}^{hi}:\ h\in H\big{\}}\ =\ \big{\{}u\in\mathcal{C}^{<\infty}[i]^{\times}:\ |u|=1,\ u^{\dagger}\in Hi\big{\}}.\]
Then \((\mathrm{e}^{Hi})^{\dagger}=Hi\) by Lemma 5.10.1, so \((H^{\times}\cdot\mathrm{e}^{Hi})^{\dagger}=K\) and thus \(K[\mathrm{e}^{Hi}]\) is an exponential extension of \(K\) (in the sense of Section 2.2) with the same ring of constants \(\mathbb{C}\) as \(K\).
As in the beginning of Section 4.4 we fix a complement \(\Lambda\) of \(K^{\dagger}\) with \(\Lambda\subseteq Hi\), set \(\mathrm{U}:=K\big{[}\mathrm{e}(\Lambda)\big{]}\) as usual, and let \(\lambda\) range over \(\Lambda\). The differential \(K\)-algebras \(\mathrm{U}\) and \(K[\mathrm{e}^{Hi}]\) are isomorphic by Corollary 2.2.10, but we need something better:
**Lemma 5.10.2**.: _There is an isomorphism \(\mathrm{U}\to K[\mathrm{e}^{Hi}]\) of differential \(K\)-algebras that maps \(\mathrm{e}(\Lambda)\) into \(\mathrm{e}^{Hi}\)._
Proof.: We have a short exact sequence of commutative groups
\[1\to S\stackrel{{\subseteq}}{{\longrightarrow}}\mathrm{e}^{Hi} \stackrel{{\ell}}{{\longrightarrow}}Hi\to 0,\]
where \(S=\big{\{}z\in\mathbb{C}^{\times}:\,|z|=1\big{\}}\) and \(\ell(u):=u^{\dagger}\) for \(u\in\mathrm{e}^{Hi}\). Since the subgroup \(S\) of \(\mathbb{C}^{\times}\) is divisible, this sequence splits: we have a group embedding \(e\colon Hi\to\mathrm{e}^{Hi}\) such that \(e(b)^{\dagger}=b\) for all \(b\in Hi\). Then the group embedding
\[\mathrm{e}(\lambda)\mapsto e(\lambda)\ :\ \mathrm{e}(\Lambda)\to\mathrm{e}^{Hi}\]
extends uniquely to a \(K\)-algebra morphism \(\mathrm{U}\to K[\mathrm{e}^{Hi}]\). Since \(\mathrm{e}(\lambda)^{\dagger}=\lambda=e(\lambda)^{\dagger}\) for all \(\lambda\), this is a differential \(K\)-algebra morphism, and even an isomorphism by Lemma 2.2.9 applied to \(R=K[\mathrm{e}^{Hi}]\).
Complex conjugation \(f+g\mathrm{i}\mapsto\overline{f+g\mathrm{i}}=f-g\mathrm{i}\) (\(f,g\in\mathcal{C}^{<\infty}\)) is an automorphism of the differential ring \(\mathcal{C}^{<\infty}[i]\) over \(H\) and maps \(K[\mathrm{e}^{Hi}]\) onto itself, sending each \(u\in\mathrm{e}^{Hi}\) to \(u^{-1}\). Thus any isomorphism \(\iota\colon\mathrm{U}\to K[\mathrm{e}^{Hi}]\) of differential \(K\)-algebras with \(\iota\big{(}\mathrm{e}(\Lambda)\big{)}\subseteq\mathrm{e}^{Hi}\)--such \(\iota\) exists by Lemma 5.10.2--also satisfies
\[\iota(\overline{f})\ =\ \overline{\iota(f)}\qquad(f\in\mathrm{U}).\]
(See Section 2.2 for the definition of \(\overline{f}\) for \(f\in\mathrm{U}\). Given such an isomorphism \(\iota\), any differential \(K\)-algebra isomorphism \(\mathrm{U}\to K[\mathrm{e}^{Hi}]\) mapping \(\mathrm{e}(\Lambda)\) into \(\mathrm{e}^{Hi}\) equals \(\iota\circ\sigma_{\chi}\) for a unique character \(\chi\colon\Lambda\to\mathbb{C}^{\times}\) with \(|\chi(\lambda)|=1\) for all \(\lambda\), by Lemma 2.2.17.) Fix such an isomorphism \(\iota\) and identify \(\mathrm{U}\) with its image \(K[\mathrm{e}^{Hi}]\) via \(\iota\). We have the asymptotic relations \(\preccurlyeq_{\mathrm{g}}\) and \(\prec_{\mathrm{g}}\) on \(\mathrm{U}\) coming from the gaussian extension \(v_{\mathrm{g}}\) of the valuation on \(K\). But we also have the asymptotic relations induced on \(\mathrm{U}=K[\mathrm{e}^{Hi}]\) by the relations \(\preccurlyeq\) and \(\prec\) defined on \(\mathcal{C}[i]\) in Section 5.1. It is clear that for \(f\in\mathrm{U}\):
\[\begin{array}{ccccc}f\preccurlyeq_{\mathrm{g}}1&\implies&f\preccurlyeq 1&\iff&\text{for some $n$ we have $|f(t)|\leqslant n$ eventually,}\\ f\prec_{\mathrm{g}}1&\implies&f\prec 1&\iff&\lim_{t\to+\infty}f(t)=0.\end{array}\]
As a tool for later use we derive a converse of the implication \(f\prec_{\rm g}1\Rightarrow f\prec 1\): Lemma 5.10.8 below, where we assume in addition that \({\rm I}(K)\subseteq K^{\dagger}\) and \(\Lambda\) is an \(\mathbb{R}\)-linear subspace of \(K\). This requires the material from Section 5.9 and some considerations about exponential sums treated in the next subsection.
**Exponential sums over Hardy fields.** In this subsection \(n\geqslant 1\). In the next lemma, \(f=(f_{1},\ldots,f_{m})\in H^{m}\) where \(m\geqslant 1\) and \(1\prec f_{1}\prec\cdots\prec f_{m}\). (In that lemma it doesn't matter which functions we use to represent the germs \(f_{1},\ldots,f_{m}\).) For \(r=(r_{1},\ldots,r_{m})\in\mathbb{R}^{m}\) we set \(r\cdot f:=r_{1}f_{1}+\cdots+r_{m}f_{m}\in H\).
**Lemma 5.10.3**.: _Let \(r^{1},\ldots,r^{n}\in\mathbb{R}^{m}\) be distinct and \(c_{1},\ldots,c_{n}\in\mathbb{C}^{\times}\). Then_
\[\limsup_{t\to\infty}\left|c_{1}\,{\rm e}^{(r^{1}\cdot f)(t)i}+\cdots+c_{n}\,{ \rm e}^{(r^{n}\cdot f)(t)i}\right|\ >\ 0.\]
Proof.: Consider the trigonometric polynomial \(w\colon\mathbb{R}^{m}\to\mathbb{R}^{\geqslant}\) given by
\[w(s)\ :=\ \left|c_{1}\,{\rm e}^{(r^{1}\cdot s)i}+\cdots+c_{n}\,{\rm e}^{(r^{n} \cdot s)i}\right|^{2}.\]
By Corollary 5.8.18 we have \(w(s)>0\) for some \(s\in\mathbb{R}^{m}\). Taking continuous representatives \(\mathbb{R}^{\geqslant}\to\mathbb{R}\) of \(f_{1},\ldots,f_{m}\), to be denoted also by \(f_{1},\ldots,f_{m}\), the lemma now follows from Proposition 5.9.27.
Next, let \(h_{1},\ldots,h_{n}\in H\) be distinct such that \((\mathbb{R}h_{1}+\cdots+\mathbb{R}h_{n})\cap{\rm I}(H)=\{0\}\). Since \(H\) is Liouville closed we have \(\phi_{1},\ldots,\phi_{n}\in H\) such that \(\phi_{1}^{\prime}=h_{1},\ldots,\phi_{n}^{\prime}=h_{n}\).
**Lemma 5.10.4**.: _Let \(c_{1},\ldots,c_{n}\in\mathbb{C}^{\times}\). Then for \(\phi_{1},\ldots,\phi_{n}\) as above,_
\[\limsup_{t\to\infty}\left|c_{1}\,{\rm e}^{\phi_{1}(t)i}+\cdots+c_{n}\,{\rm e}^ {\phi_{n}(t)i}\right|\ >\ 0.\]
Proof.: The case \(n=1\) is trivial, so let \(n\geqslant 2\). Then \(\phi_{1},\ldots,\phi_{n}\) are not all in \(\mathbb{R}\). Set \(V:=\mathbb{R}+\mathbb{R}\phi_{1}+\cdots+\mathbb{R}\phi_{n}\subseteq H\), so \(\partial V=\mathbb{R}h_{1}+\cdots+\mathbb{R}h_{n}\). We claim that \(V\cap\smallo_{H}=\{0\}\). To see this, let \(\phi\in V\cap\smallo_{H}\); then \(\phi^{\prime}\in\partial(V)\cap{\rm I}(H)=\{0\}\) and hence \(\phi\in\mathbb{R}\cap\smallo_{H}=\{0\}\), proving the claim. Now \(H\) is a Hahn space over \(\mathbb{R}\) by [ADH, p. 109], so by [ADH, 2.3.13] we have \(f_{1},\ldots,f_{m}\in V\) (\(1\leqslant m\leqslant n\)) such that \(V=\mathbb{R}+\mathbb{R}f_{1}+\cdots+\mathbb{R}f_{m}\) and \(1\prec f_{1}\prec\cdots\prec f_{m}\). For \(j=1,\ldots,n\), \(k=1,\ldots,m\), take \(t_{j},r_{jk}\in\mathbb{R}\) such that \(\phi_{j}=t_{j}+\sum_{k=1}^{m}r_{jk}f_{k}\) and set \(r^{j}:=(r_{j1},\ldots,r_{jm})\in\mathbb{R}^{m}\). Since \(\phi_{j_{1}}-\phi_{j_{2}}\notin\mathbb{R}\) for \(j_{1}\neq j_{2}\), we have \(r^{j_{1}}\neq r^{j_{2}}\) for \(j_{1}\neq j_{2}\). It remains to apply Lemma 5.10.3 to \(c_{1}\,{\rm e}^{t_{1}i},\ldots,c_{n}\,{\rm e}^{t_{n}i}\) in place of \(c_{1},\ldots,c_{n}\).
**Corollary 5.10.5**.: _Let \(f_{1},\ldots,f_{n}\in K\) and set \(f:=f_{1}\,{\rm e}^{\phi_{1}i}+\cdots+f_{n}\,{\rm e}^{\phi_{n}i}\in\mathcal{C}^{ <\infty}[i]\), and suppose \(f\prec 1\). Then \(f_{1},\ldots,f_{n}\prec 1\)._
Proof.: We may assume \(0\neq f_{1}\preccurlyeq\cdots\preccurlyeq f_{n}\). Towards a contradiction, suppose that \(f_{n}\succcurlyeq 1\), and take \(m\leqslant n\) minimal such that \(f_{m}\asymp f_{n}\). Then with \(g_{j}:=f_{j}/f_{n}\in K^{\times}\) and \(g:=g_{1}\,{\rm e}^{\phi_{1}i}+\cdots+g_{n}\,{\rm e}^{\phi_{n}i}\) we have \(g\prec 1\) and \(g_{1},\ldots,g_{n}\preccurlyeq 1\), with \(g_{j}\prec 1\) iff \(j<m\). Replacing \(f_{1},\ldots,f_{n}\) by \(g_{m},\ldots,g_{n}\) and \(\phi_{1},\ldots,\phi_{n}\) by \(\phi_{m},\ldots,\phi_{n}\) we arrange \(f_{1}\asymp\cdots\asymp f_{n}\asymp 1\). So
\[f_{1}=c_{1}+\varepsilon_{1},\,\ldots,\,f_{n}=c_{n}+\varepsilon_{n}\quad\text{ with }c_{1},\ldots,c_{n}\in\mathbb{C}^{\times}\text{ and }\varepsilon_{1},\ldots,\varepsilon_{n}\in\smallo.\]
Then \(\varepsilon_{1}\,{\rm e}^{\phi_{1}i}+\cdots+\varepsilon_{n}\,{\rm e}^{\phi_{n}i}\prec 1\), hence
\[c_{1}\,{\rm e}^{\phi_{1}i}+\cdots+c_{n}\,{\rm e}^{\phi_{n}i}\ =\ f-\left(\varepsilon_{1}\,{\rm e}^{\phi_{1}i}+\cdots+\varepsilon_{n}\,{\rm e}^{\phi_{n}i}\right)\ \prec\ 1.\]
Now Lemma 5.10.4 yields the desired contradiction.
Here is an application of Corollary 5.10.5:
**Lemma 5.10.6**.: _Let \(f_{1},\ldots,f_{n},g_{1},\ldots,g_{n}\in K\) be such that in \(\mathcal{C}[i]\) we have_
\[f_{1}\,\mathrm{e}^{\phi_{1}i}+\cdots+f_{n}\,\mathrm{e}^{\phi_{n}i}\ \sim\ g_{1}\, \mathrm{e}^{\phi_{1}i}+\cdots+g_{n}\,\mathrm{e}^{\phi_{n}i}\,.\]
_Let \(j\in\{1,\ldots,n\}\) be such that \(0\neq f_{j}\succcurlyeq f_{k}\) for all \(k\in\{1,\ldots,n\}\). Then \(f_{j}\sim g_{j}\), and \(f_{k}-g_{k}\prec f_{j}\) for all \(k\neq j\)._
Proof.: We arrange \(j=1\) and \(f_{1}=1\). Then
\[\mathrm{e}^{\phi_{1}i}+f_{2}\,\mathrm{e}^{\phi_{2}i}+\cdots+f_{n}\,\mathrm{e}^ {\phi_{n}i}\ \sim\ g_{1}\,\mathrm{e}^{\phi_{1}i}+\cdots+g_{n}\,\mathrm{e}^{\phi_{n}i}, \qquad f_{2},\ldots,f_{n}\preccurlyeq 1.\]
Hence
\[(1-g_{1})\,\mathrm{e}^{\phi_{1}i}+(f_{2}-g_{2})\,\mathrm{e}^{\phi_{2}i}+\cdots +(f_{n}-g_{n})\,\mathrm{e}^{\phi_{n}i}\prec\mathrm{e}^{\phi_{1}i}+f_{2}\, \mathrm{e}^{\phi_{2}i}+\cdots+f_{n}\,\mathrm{e}^{\phi_{n}i}\preccurlyeq 1,\]
so \(1-g_{1}\prec 1\) and \(f_{k}-g_{k}\prec 1\) for all \(k\neq j\), by Corollary 5.10.5.
This leads to a partial generalization of Corollary 5.5.23, included for use in [15]:
**Corollary 5.10.7**.: _Let \(f\in K^{\times}\), \(g_{1},\ldots,g_{n}\in K\), and \(j\in\{1,\ldots,n\}\) such that in \(\mathcal{C}[i]\),_
\[f\,\mathrm{e}^{\phi_{j}i}\sim g_{1}\,\mathrm{e}^{\phi_{1}i}+\cdots+g_{n}\, \mathrm{e}^{\phi_{n}i}\,.\]
_Then \(f\sim g_{j}\), and \(g_{j}\succ g_{k}\) for all \(k\neq j\)._
Proof.: Use Lemma 5.10.6 with \(f_{j}:=f\) and \(f_{k}:=0\) for \(k\neq j\).
_In the rest of this subsection we assume that \(\mathrm{I}(K)\subseteq K^{\dagger}\). As noted in Section 4.4 we can then take \(\Lambda=\Lambda_{H}i\) where \(\Lambda_{H}\) is an \(\mathbb{R}\)-linear complement of \(\mathrm{I}(H)\) in \(H\). We assume \(\Lambda\) has this form, giving rise to the valuation \(v_{\mathrm{g}}\) on \(\mathrm{U}=K[\mathrm{e}^{Hi}]\) as explained in the beginning of this section._
**Lemma 5.10.8**.: _Let \(f\in\mathrm{U}\) be such that \(f\prec 1\). Then \(f\prec_{\mathrm{g}}1\)._
Proof.: We have \(f=f_{1}\,\mathrm{e}(h_{1}\mathrm{i})+\cdots+f_{n}\,\mathrm{e}(h_{n}\mathrm{i})\) with \(f_{1},\ldots,f_{n}\in K\) and distinct \(h_{1},\ldots,h_{n}\in\Lambda_{H}\), so \((\mathbb{R}h_{1}+\cdots+\mathbb{R}h_{n})\cap\mathrm{I}(H)=\{0\}\). For \(h\in\Lambda_{H}\) we have \(\mathrm{e}(h\mathrm{i})=\mathrm{e}^{\phi\mathrm{i}}\) with \(\phi\in H\) and \(\phi^{\prime}=h\). Hence \(f=f_{1}\,\mathrm{e}^{\phi_{1}\mathrm{i}}+\cdots+f_{n}\,\mathrm{e}^{\phi_{n}\mathrm{i}}\) with \(\phi_{1},\ldots,\phi_{n}\in H\) such that \(\phi_{1}^{\prime}=h_{1},\ldots,\phi_{n}^{\prime}=h_{n}\). Now Corollary 5.10.5 yields \(f_{1},\ldots,f_{n}\prec 1\), and thus \(f\prec_{\mathrm{g}}1\).
**Corollary 5.10.9**.: _Let \(f\in\mathrm{U}\) and \(\mathfrak{m}\in H^{\times}\). Then \(f\prec\mathfrak{m}\) iff \(f\prec_{\mathrm{g}}\mathfrak{m}\)._
**Lemma 5.10.10**.: _Let \(f\in\mathrm{U}\) and \(\mathfrak{m}\in H^{\times}\). Then \(f\preccurlyeq\mathfrak{m}\) iff \(f\preccurlyeq_{\mathrm{g}}\mathfrak{m}\)._
Proof.: Replace \(f\), \(\mathfrak{m}\) by \(f/\mathfrak{m}\), \(1\), respectively, to arrange \(\mathfrak{m}=1\). The backward direction was observed earlier in this section. For the forward direction suppose \(f\preccurlyeq 1\). Then \(f\prec\mathfrak{n}\) for all \(\mathfrak{n}\in H^{\times}\) with \(1\prec\mathfrak{n}\), hence \(f\prec_{\mathrm{g}}\mathfrak{n}\) for all \(\mathfrak{n}\in H^{\times}\) with \(1\prec_{\mathrm{g}}\mathfrak{n}\), by two applications of Corollary 5.10.9, and thus \(f\preccurlyeq_{\mathrm{g}}1\).
**Corollary 5.10.11**.: _Let \(f,g\in\mathrm{U}\). Then_
\[f\preccurlyeq g\ \Longrightarrow\ f\preccurlyeq_{\mathrm{g}}g, \tag{5.10.1}\]
_and likewise with \((\preccurlyeq,\preccurlyeq_{\mathrm{g}})\) replaced by \((\prec,\prec_{\mathrm{g}})\), \((\asymp,\asymp_{\mathrm{g}})\), or \((\sim,\sim_{\mathrm{g}})\). In particular, \(\mathrm{e}^{\phi\mathrm{i}}\asymp_{\mathrm{g}}1\) for all \(\phi\in H\)._
Proof.: The case \(g=0\) is trivial, so let \(g\neq 0\). Then \(g\asymp_{\mathrm{g}}\mathfrak{n}\) with \(\mathfrak{n}\in H^{\times}\), so \(g\preccurlyeq_{\mathrm{g}}\mathfrak{n}\) and \(\mathfrak{n}\preccurlyeq_{\mathrm{g}}g\), and thus \(g\preccurlyeq\mathfrak{n}\) by Lemma 5.10.10. If \(f\preccurlyeq g\), then \(f\preccurlyeq\mathfrak{n}\), hence \(f\preccurlyeq_{\mathrm{g}}\mathfrak{n}\) by Lemma 5.10.10, so \(f\preccurlyeq_{\mathrm{g}}g\). Likewise, if \(f\prec g\), then \(f\prec\mathfrak{n}\), so \(f\prec_{\mathrm{g}}\mathfrak{n}\) by Corollary 5.10.9, hence \(f\prec_{\mathrm{g}}g\). The rest is now clear.
_Remark_.: The converse of (5.10.1) doesn't hold in general, even when we restrict to \(f=1\) and \(g\in\mathrm{U}\cap\mathcal{C}^{\times}\): let \(\lambda,\mu\in\mathbb{R}\) be \(\mathbb{Q}\)-linearly independent and set
\[g\ :=\ 2-\cos(\lambda x)-\cos(\mu x)\in\mathrm{U};\]
then \(1\preccurlyeq_{\mathrm{g}}g\), and by Example 5.9.31 we have \(g\in\mathcal{C}^{\times}\) and \(1\not\preccurlyeq g\). Next, take \(\phi\in H\) with \(\phi>\mathbb{R}\), choose \(\alpha\in\mathbb{R}\setminus\mathbb{Q}\) as in Theorem 5.9.32 applied to a representative of the germ \(\phi\), and set
\[h\ :=\ \phi\cdot\big{(}2-\cos(x)-\cos(\alpha x)\big{)}\in\mathrm{U}.\]
Then \(h\in\mathcal{C}^{\times}\) and \(h\asymp_{\mathrm{g}}\phi\), so \(1\prec_{\mathrm{g}}h\). By choice of \(\alpha\) we also have \(1\not\prec h\). Hence the converse of (5.10.1) for \((\prec,\prec_{\mathrm{g}})\) in place of \((\preccurlyeq,\preccurlyeq_{\mathrm{g}})\) fails for \(f:=1\), \(g:=h\).
**An application to slots in \(H\).**_In this subsection we assume \(\mathrm{I}(K)\subseteq K^{\dagger}\). We take \(\Lambda=\Lambda_{H}\mathrm{i}\) where \(\Lambda_{H}\) is an \(\mathbb{R}\)-linear complement of \(\mathrm{I}(H)\) in \(H\), and accordingly identify \(\mathrm{U}\) with \(K[\mathrm{e}^{H\mathrm{i}}]\) as explained in the beginning of this section. Until further notice we let \((P,1,\widehat{h})\) be a slot in \(H\) of order \(r\geqslant 1\). We also let \(A\in K[\partial]\) have order \(r\), and we let \(\mathfrak{m}\) range over the elements of \(H^{\times}\) such that \(v\mathfrak{m}\in v(\widehat{h}-H)\)._ We begin with an important consequence of the material in Section 5.7:
**Lemma 5.10.12**.: _Suppose \((P,1,\widehat{h})\) is \(Z\)-minimal, deep, and special, and \(\mathfrak{v}(L_{P})\asymp\mathfrak{v}:=\mathfrak{v}(A)\). Let \(y\in\mathcal{C}^{r}[\mathrm{i}]\) satisfy \(A(y)=0\) and \(y\prec\mathfrak{m}\) for all \(\mathfrak{m}\). Then \(y^{\prime},\dots,y^{(r)}\prec\mathfrak{m}\) for all \(\mathfrak{m}\)._
Proof.: Corollary 3.3.15 gives an \(\mathfrak{m}\preccurlyeq\mathfrak{v}\), so it is enough to show \(y^{\prime},\dots,y^{(r)}\prec\mathfrak{m}\) for all \(\mathfrak{m}\preccurlyeq\mathfrak{v}\). Accordingly we assume \(0<\mathfrak{m}\preccurlyeq\mathfrak{v}\) below. As \(\widehat{h}\) is special over \(H\), we have \(2(r+1)v\mathfrak{m}\in v(\widehat{h}-H)\), so \(y\prec\mathfrak{m}^{2(r+1)}\). Then Corollary 5.7.2 with \(n=2(r+1)\), \(\eta=|\mathfrak{v}|^{-1}\), \(\varepsilon=1/r\) gives for \(j=0,\dots,r\):
\[y^{(j)}\ \prec\ \mathfrak{v}^{-j}\mathfrak{m}^{n-j(1+\varepsilon)}\ \prec\ \mathfrak{m}^{n-j(2+\varepsilon)}\ \preccurlyeq\mathfrak{m}^{n-r(2+\varepsilon)}\ =\ \mathfrak{m}.\qed\]
Note that by Proposition 5.2.1, if \(\dim_{\mathbb{C}}\ker_{\mathrm{U}}A=r\) and \(A(y)=0\), \(y\in\mathcal{C}^{r}[\mathrm{i}]\), then \(y\in\mathrm{U}=K[\mathrm{e}^{H\mathrm{i}}]\subseteq\mathcal{C}^{<\infty}[ \mathrm{i}]\). Corollary 5.10.9 is typically used in combination with the ultimate condition. Here is a first easy application:
**Lemma 5.10.13**.: _Suppose \((P,1,\widehat{h})\) is linear and ultimate, \(\dim_{\mathbb{C}}\ker_{\mathrm{U}}L_{P}=r\), and \(y\in\mathcal{C}^{r}[\mathrm{i}]\) satisfies \(L_{P}(y)=0\) and \(y\prec 1\). Then \(y\prec\mathfrak{m}\) for all \(\mathfrak{m}\)._
Proof.: We have \(y\in\mathrm{U}\), so \(y\prec_{\mathrm{g}}1\) by Lemma 5.10.8. If \(y=0\) we are done, so assume \(y\neq 0\). Lemma 4.4.4(ii) gives \(0<v_{\mathrm{g}}y\in v_{\mathrm{g}}(\ker_{\mathrm{U}}^{\neq}L_{P})=\mathscr{E}^{\mathrm{u}}(L_{P})\), hence \(v_{\mathrm{g}}y>v(\widehat{h}-H)\) by Lemma 4.4.13, so \(y\prec_{\mathrm{g}}\mathfrak{m}\) for all \(\mathfrak{m}\). Now Corollary 5.10.9 yields the desired conclusion.
**Corollary 5.10.14**.: _Suppose that \((P,1,\widehat{h})\) is \(Z\)-minimal, deep, special, linear, and ultimate, and that \(\dim_{\mathbb{C}}\ker_{\mathrm{U}}L_{P}=r\). Let \(f,g\in\mathcal{C}^{r}[\mathrm{i}]\) be such that \(P(f)=P(g)=0\) and \(f,g\prec 1\). Then \((f-g)^{(j)}\prec\mathfrak{m}\) for \(j=0,\dots,r\) and all \(\mathfrak{m}\)._
Proof.: Use Lemmas 5.10.12 and 5.10.13 for \(A=L_{P}\) and \(y=f-g\).
_In the rest of this subsection we assume that \((P,1,\widehat{h})\) is ultimate and normal, \(\dim_{\mathbb{C}}\ker_{\mathrm{U}}A=r\), and \(L_{P}=A+B\) where_
\[B\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{r+1}A,\qquad\mathfrak{v}:=\mathfrak{ v}(A)\prec^{\flat}1.\]
Then Lemma 3.1.1 gives \(\mathfrak{v}(L_{P})\sim\mathfrak{v}\), and by Lemma 4.4.4,
\[v_{\rm g}(\ker_{\rm U}^{\neq}A)\ =\ \mathscr{E}^{\rm u}(A)\ =\ \mathscr{E}^{\rm u}(L_{P}).\]
This yields a variant of Lemma 5.10.13:
**Proposition 5.10.15**.: _If \(y\in\mathcal{C}^{r}[{\rm i}]\) and \(A(y)=0\), \(y\prec 1\), then \(y\prec\mathfrak{m}\) for all \(\mathfrak{m}\)._
Proof.: Like that of Lemma 5.10.13, using Lemma 4.4.12 instead of 4.4.13.
The following result will be used in establishing a crucial non-linear version of Corollary 5.10.14, namely Proposition 6.5.14.
**Corollary 5.10.16**.: _If \((P,1,\widehat{h})\) is \(Z\)-minimal, deep, and special, and \(y\in\mathcal{C}^{r}[{\rm i}]\) is such that \(A(y)=0\) and \(y\prec 1\), then \(y,y^{\prime},\ldots,y^{(r)}\prec\mathfrak{m}\) for all \(\mathfrak{m}\)._
Proof.: Use first Proposition 5.10.15 and then Lemma 5.10.12.
So far we didn't have to name an immediate asymptotic extension of \(H\) where \(\widehat{h}\) is located, but for the "complex" version of the above we need to be more specific.
As in the beginning of Section 4.4, let \(\widehat{H}\) be an immediate asymptotic extension of \(H\) and \(\widehat{K}=\widehat{H}[{\rm i}]\supseteq\widehat{H}\) a corresponding immediate d-valued extension of \(K\). The results in this subsection then go through if instead of \((P,1,\widehat{h})\) being a slot in \(H\) of order \(r\geqslant 1\) we assume that \((P,1,\widehat{h})\) is a slot in \(K\) of order \(r\geqslant 1\) with \(\widehat{h}\in\widehat{K}\setminus K\), with \(\mathfrak{m}\) now ranging over the elements of \(K^{\times}\) such that \(v\mathfrak{m}\in v(\widehat{h}-K)\).
**Solution spaces of linear differential operators.** Recall that \(\Lambda\subseteq Hi\), \({\rm U}=K\big{[}{\rm e}(\Lambda)\big{]}=K[{\rm e}^{H{\rm i}}]\) where \({\rm e}(\Lambda)\subseteq{\rm e}^{H{\rm i}}\subseteq\mathcal{C}^{<\infty}[{\rm i}]^{\times}\). Hence for each \(\lambda\) we have an element \(\phi(\lambda)\) of \(H\) (unique up to addition of an element of \(2\pi\mathbb{Z}\)) such that \({\rm e}(\lambda)={\rm e}^{\phi(\lambda){\rm i}}\); we take \(\phi(0):=0\). Then \({\rm e}(\lambda)^{\dagger}=\lambda\) gives \(\phi(\lambda)^{\prime}{\rm i}=\lambda\), and
\[\phi(\lambda_{1}+\lambda_{2})\ \equiv\ \phi(\lambda_{1})+\phi(\lambda_{2})\bmod 2 \pi\mathbb{Z}\quad\text{for $\lambda_{1},\lambda_{2}\in\Lambda$.}\]
If \({\rm I}(K)\subseteq K^{\dagger}\), then \(\Lambda\cap{\rm I}(H){\rm i}=\{0\}\) (see Lemma 1.2.16), so \(\phi(\lambda)\succ 1\) for \(\lambda\neq 0\), hence for \(\mu\in\Lambda\): \(\lambda=\mu\Leftrightarrow\phi(\lambda)=\phi(\mu)\Leftrightarrow\phi( \lambda)-\phi(\mu)\prec 1\).
**Lemma 5.10.17**.: _Let \(\phi\in H\). Then there exists \(\lambda\) such that \(\phi-\phi(\lambda)\preccurlyeq 1\). If \(\phi\succ 1\), then for any such \(\lambda\) we have \(\operatorname{sign}\phi=\operatorname{sign}\operatorname{Im}\lambda\)._
Proof.: From \({\rm e}^{\phi{\rm i}}\in{\rm e}^{H{\rm i}}\subseteq{\rm U}^{\times}=K^{ \times}\,{\rm e}(\Lambda)\) we get \(f\in K^{\times}\) and \(\lambda\) with \({\rm e}^{\phi{\rm i}}=f\,{\rm e}(\lambda)=f\,{\rm e}^{\phi(\lambda){\rm i}}\). Note that \(|f|=1\), so \(f={\rm e}^{\theta{\rm i}}\) where \(\theta\in H\) with \(\theta\prec 1\), by Lemma 5.5.21. This yields \(\phi-\phi(\lambda)-\theta\in 2\pi\mathbb{Z}\) and so \(\phi-\phi(\lambda)\preccurlyeq 1\). This proves the first statement. Now suppose we have any \(\lambda\) with \(\phi-\phi(\lambda)\preccurlyeq 1\). Then \(\phi\sim\phi(\lambda)\) if \(\phi\succ 1\). So if \(\phi>\mathbb{R}\), then \(\phi(\lambda)>\mathbb{R}\) and thus \(\operatorname{Im}\lambda=\phi(\lambda)^{\prime}>0\); likewise, \(\phi<\mathbb{R}\) implies \(\operatorname{Im}\lambda<0\).
**Corollary 5.10.18**.: _Suppose \({\rm I}(K)\subseteq K^{\dagger}\). Let \(f=f_{1}\,{\rm e}^{\phi_{1}{\rm i}}+\cdots+f_{m}\,{\rm e}^{\phi_{m}{\rm i}}\in{ \rm U}\) where \(f_{1},\ldots,f_{m}\in K\) and \(\phi_{1},\ldots,\phi_{m}\in H\) are such that \(\phi_{j}=\phi_{k}\) or \(\phi_{j}-\phi_{k}\succ 1\) for \(j,k=1,\ldots,m\). Then_
\[f=0\quad\Longleftrightarrow\quad\sum_{1\leqslant k\leqslant m,\ \phi_{k}=\phi_{j}}f_{k}\ =\ 0\ \text{ for $j=1,\ldots,m$,}\]
_and for \(\mathfrak{m}\in H^{\times}\):_
\[f\prec\mathfrak{m}\quad\Longleftrightarrow\quad\sum_{1\leqslant k\leqslant m,\ \phi_{k}=\phi_{j}}f_{k}\ \prec\ \mathfrak{m}\text{ for $j=1,\ldots,m$,}\]
_and likewise with \(\preccurlyeq\) in place of \(\prec\)._
Proof.: We first arrange that \(\phi_{1},\dots,\phi_{m}\) are distinct, and we then need to show: \(f=0\Leftrightarrow f_{1}=\dots=f_{m}=0\), and \(f\prec\mathfrak{m}\Leftrightarrow f_{1},\dots,f_{m}\prec\mathfrak{m}\), and likewise with \(\preccurlyeq\) in place of \(\prec\). To make Corollary 5.10.9 applicable we also arrange that \(\Lambda=\Lambda_{H}\mathrm{i}\) with \(\Lambda_{H}\) an \(\mathbb{R}\)-linear complement of \(\operatorname{I}(H)\) in \(H\). Lemma 5.10.17 yields \(\lambda_{j}\in\Lambda\) with \(\phi_{j}-\phi(\lambda_{j})\preccurlyeq 1\) for \(j=1,\dots,m\); then \(\lambda_{1},\dots,\lambda_{m}\) are distinct. For \(j=1,\dots,m\), put \(g_{j}:=f_{j}\operatorname{e}^{(\phi_{j}-\phi(\lambda_{j}))\mathrm{i}}\in K\), so \(f_{j}\operatorname{e}^{\phi_{j}\mathrm{i}}=g_{j}\operatorname{e}(\lambda_{j})\) and \(g_{j}\asymp|g_{j}|=|f_{j}|\asymp f_{j}\). Now the claim follows from the \(K\)-linear independence of \(\operatorname{e}(\lambda_{1}),\dots,\operatorname{e}(\lambda_{m})\), Corollary 5.10.9, and Lemma 5.10.10.
Let \(A\in K[\partial]^{\neq}\), \(r:=\operatorname{order}A\), and set \(V:=\ker_{\mathrm{U}}A\), a \(\mathbb{C}\)-linear subspace of \(\mathrm{U}\) of dimension at most \(r\), with \(\dim_{\mathbb{C}}V=r\) iff \(V=\ker_{\mathcal{C}^{<\infty}[\mathrm{i}]}A\). We describe in our present setting some consequences of the results obtained in Sections 2.3 and 2.5 about zeros of linear differential operators in the universal exponential extension.
**Lemma 5.10.19**.: _The \(\mathbb{C}\)-linear space \(V\) has a basis_
\[f_{1}\operatorname{e}(\lambda_{1}),\dots,f_{d}\operatorname{e}(\lambda_{d}) \quad\text{ where }f_{j}\in K^{\times}\text{, }\lambda_{j}\in\Lambda\ (j=1,\dots,d).\]
_For any such basis the set of eigenvalues of \(A\) with respect to \(\Lambda\) is \(\{\lambda_{1},\dots,\lambda_{d}\}\), and_
\[\operatorname{mult}_{\lambda}(A)\ =\ |\{j\in\{1,\dots,d\}:\lambda_{j}=\lambda\}|\quad \text{ for every }\lambda.\]
This follows from Lemma 2.5.1 and the considerations preceding it.
Call \(\phi_{1},\dots,\phi_{m}\in H\) **apart** if \(\phi_{j}=0\) or \(\phi_{j}\succ 1\) for \(j=1,\dots,m\), and \(\phi_{j}=\phi_{k}\) or \(\phi_{j}-\phi_{k}\succ 1\) for \(j,k=1,\dots,m\). (This holds in particular if \(\phi_{1}=\dots=\phi_{m}=0\).) If \(\operatorname{I}(K)\subseteq K^{\dagger}\), then \(\phi(\lambda_{1}),\dots,\phi(\lambda_{m})\) are apart for any \(\lambda_{1},\dots,\lambda_{m}\in\Lambda\).
**Corollary 5.10.20**.: _The \(\mathbb{C}\)-linear space \(V\) has a basis_
\[f_{1}\operatorname{e}^{\phi_{1}\mathrm{i}},\dots,f_{d}\operatorname{e}^{\phi_{d}\mathrm{i}}\quad\text{ where }f_{j}\in K^{\times}\text{, }\phi_{j}\in H\ (j=1,\dots,d).\]
_If \(\operatorname{I}(K)\subseteq K^{\dagger}\), then for any such basis the \(f_{j}\operatorname{e}^{\phi_{j}\mathrm{i}}\) with \(\phi_{j}\preccurlyeq 1\) form a basis of the \(\mathbb{C}\)-linear space \(V\cap K=\ker_{K}A\), and we can choose the \(f_{j}\), \(\phi_{j}\) such that additionally \(\phi_{1},\dots,\phi_{d}\) are apart and \((\phi_{1},vf_{1}),\dots,(\phi_{d},vf_{d})\) are distinct._
Proof.: The first claim holds by Lemma 5.10.19. Suppose \(\operatorname{I}(K)\subseteq K^{\dagger}\), and let a basis of \(V\) as in the corollary be given. Then by Lemma 5.10.17 we obtain \(\lambda_{j}\in\Lambda\) such that \(\phi_{j}-\phi(\lambda_{j})\preccurlyeq 1\), and so \(f_{j}\operatorname{e}^{\phi_{j}\mathrm{i}}=g_{j}\operatorname{e}(\lambda_{j})\) where \(g_{j}:=f_{j}\operatorname{e}^{(\phi_{j}-\phi(\lambda_{j}))\mathrm{i}}\in K^{\times}\) by Proposition 5.5.18. Now \(\lambda_{j}=0\Leftrightarrow\phi_{j}\preccurlyeq 1\), by the remarks preceding Lemma 5.10.17, hence the \(f_{j}\operatorname{e}^{\phi_{j}\mathrm{i}}\) with \(\phi_{j}\preccurlyeq 1\) form a basis of \(V\cap K=\ker_{K}A\). Moreover, \(g_{1}\operatorname{e}^{\phi(\lambda_{1})\mathrm{i}},\dots,g_{d}\operatorname{e}^{\phi(\lambda_{d})\mathrm{i}}\) is a basis of \(V\) and \(\phi(\lambda_{1}),\dots,\phi(\lambda_{d})\) are apart.
We have \(V=\bigoplus_{\lambda}V_{\lambda}\) (internal direct sum of \(\mathbb{C}\)-linear subspaces) where \(V_{\lambda}=(\ker_{K}A_{\lambda})\operatorname{e}(\lambda)\), by the remarks before (2.5.1). For each \(\lambda\), the subspace \(\ker_{K}A_{\lambda}\) of the \(\mathbb{C}\)-linear space \(K\) is generated by the \(g_{j}\) with \(\lambda_{j}=\lambda\). Applying [ADH, 5.6.6] to each \(A_{\lambda}\) we obtain \(h_{j}\in K^{\times}\) such that \(h_{1}\operatorname{e}^{\phi(\lambda_{1})\mathrm{i}},\dots,h_{d}\operatorname{e}^{\phi(\lambda_{d})\mathrm{i}}\) is a basis of \(V\) where for all \(j\neq k\) with \(\phi(\lambda_{j})=\phi(\lambda_{k})\) we have \(h_{j}\not\asymp h_{k}\).
A **Hahn basis** of \(V\) is a basis of \(V\) as in Corollary 5.10.20 such that \(\phi_{1},\dots,\phi_{d}\) are apart and \((\phi_{1},vf_{1}),\dots,(\phi_{d},vf_{d})\) are distinct. (It should really be "Hahn basis with respect to \(\phi_{1},\dots,\phi_{d}\)" but in the few cases we use this notion we shall rely on the context as to what tuple \((\phi_{1},\dots,\phi_{d})\in H^{d}\) we are dealing with.) If \(\operatorname{I}(K)\subseteq K^{\dagger}\), then for such a Hahn basis the \(f_{j}\) with \(\phi_{j}=0\) form a valuation basis of the subspace \(V\cap K\) of the valued \(\mathbb{C}\)-linear space \(K\) [ADH, 2.3].
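A minimal illustrative example (ours): take \(A=\partial^{2}+1\in H[\partial]\), so \(r=2\). Since \(H\) is Liouville closed we have \(x\in H\), so \(\mathrm{e}^{\pm x\mathrm{i}}\in\mathrm{e}^{H\mathrm{i}}\subseteq\mathrm{U}\), and
\[V\ =\ \ker_{\mathrm{U}}A\ =\ \mathbb{C}\,\mathrm{e}^{x\mathrm{i}}\oplus\mathbb{C}\,\mathrm{e}^{-x\mathrm{i}}.\]
Then \(1\cdot\mathrm{e}^{x\mathrm{i}}\), \(1\cdot\mathrm{e}^{-x\mathrm{i}}\) is a Hahn basis of \(V\): we have \(x\succ 1\), \(-x\succ 1\), and \(x-(-x)=2x\succ 1\), so \(x\), \(-x\) are apart, and the pairs \((x,0)\), \((-x,0)\) are distinct. No \(\phi_{j}\) here is \(\preccurlyeq 1\), in accordance with \(V\cap K=\ker_{K}A=\{0\}\) (a nonzero element of \(\mathbb{C}\cos x+\mathbb{C}\sin x\) oscillates, so does not lie in \(K\)).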
In the next lemma we assume \(\mathrm{I}(K)\subseteq K^{\dagger}\), and we recall that then
\[d\ \leqslant\ \sum_{\lambda}|\mathscr{E}^{\mathrm{e}}(A_{\lambda})|\ \leqslant\ r\]
by Lemma 2.6.16 and Proposition 2.6.26, and so by Lemma 2.6.16,
\[\sum_{\lambda}|\mathscr{E}^{\mathrm{e}}(A_{\lambda})|\ =\ d\ \Longrightarrow\ \mathscr{E}^{\mathrm{e}}(A_{\lambda})\ =\ v(\ker^{\neq}A_{\lambda})\ \text{ for all }\lambda.\]
**Lemma 5.10.21**.: _Suppose \(\mathrm{I}(K)\subseteq K^{\dagger}\), and let \(f_{1}\,\mathrm{e}^{\phi_{1}\mathrm{i}},\ldots,f_{d}\,\mathrm{e}^{\phi_{d} \mathrm{i}}\) be a Hahn basis of \(V\) as in Corollary 5.10.20. Then for all \(\lambda\),_
\[\mathscr{E}^{\mathrm{e}}(A_{\lambda})\ \supseteq\ v(\ker^{\neq}A_{\lambda})\ =\ \big{\{}vf_{j}:\ 1\leqslant j\leqslant d,\ \phi_{j}-\phi(\lambda)\preccurlyeq 1\big{\}},\]
_and so \(\mathscr{E}^{\mathrm{u}}(A)\supseteq\{vf_{1},\ldots,vf_{d}\}\), with \(\mathscr{E}^{\mathrm{u}}(A)=\{vf_{1},\ldots,vf_{d}\}\) if \(\sum_{\lambda}|\mathscr{E}^{\mathrm{e}}(A_{\lambda})|=d\)._
Proof.: Take \(g_{j}\), \(\lambda_{j}\) as in the proof of Corollary 5.10.20. Then \(g_{j}\asymp|g_{j}|=|f_{j}|\asymp f_{j}\), and \(\lambda_{j}=\lambda\Leftrightarrow\phi_{j}-\phi(\lambda)\preccurlyeq 1\), for all \(\lambda\). So we can replace \(f_{j}\), \(\phi_{j}\) by \(g_{j}\), \(\phi(\lambda_{j})\) to arrange \(\phi_{j}=\phi(\lambda_{j})\) for \(j=1,\ldots,d\). Then for all \(\lambda\) the \(\mathbb{C}\)-linear space \(\ker A_{\lambda}\subseteq K\) is generated by the \(f_{j}\) with \(\lambda_{j}=\lambda\), so
\[\mathscr{E}^{\mathrm{e}}(A_{\lambda})\ \supseteq\ v(\ker^{\neq}A_{\lambda})\ =\ \{vf_{j}:\ 1\leqslant j \leqslant d,\ \lambda_{j}=\lambda\}.\]
For the rest use \(\mathscr{E}^{\mathrm{u}}(A)=\bigcup_{\lambda}\mathscr{E}^{\mathrm{e}}(A_{ \lambda})\) and the remarks preceding the lemma.
Corollaries 2.5.8 and 2.5.23 yield conditions on \(A\), \(K\) that guarantee \(\dim_{\mathbb{C}}V=r\):
**Lemma 5.10.22**.: _Suppose \(A\) splits over \(K\). If \(r\leqslant 1\), or \(r=2\) and \(A\in H[\partial]\), or \(K\) is \(1\)-linearly surjective, then_
\[\dim_{\mathbb{C}}V\ =\ \sum_{\lambda}\mathrm{mult}_{\lambda}(A)\ =\ r.\]
Next a complement to Lemma 5.5.25:
**Corollary 5.10.23**.: _Suppose \(K\) is \(r\)-linearly surjective, or \(K\) is \(1\)-linearly surjective and \(A\) splits over \(K\). Let \(\phi\in H\) be such that \(\phi^{\prime}\mathrm{i}+K^{\dagger}\) is not an eigenvalue of \(A\). Then \(A\) maps \(K\,\mathrm{e}^{\phi\mathrm{i}}\) bijectively onto \(K\,\mathrm{e}^{\phi\mathrm{i}}\)._
Proof.: Let \(y\in K\), \(A(y\,\mathrm{e}^{\phi\mathrm{i}})=0\). By Lemma 5.5.25 it is enough to show that \(y=0\). Suppose towards a contradiction that \(y\neq 0\). Then \(y\,\mathrm{e}^{\phi\mathrm{i}}\in\mathrm{U}^{\times}=K^{\times}\,\mathrm{e}(\Lambda)\), so \(y\,\mathrm{e}^{\phi\mathrm{i}}=z\,\mathrm{e}(\lambda)\), \(z\in K^{\times}\). Then \(\phi^{\prime}\mathrm{i}-\lambda\in K^{\dagger}\), so \(\lambda\) is not an eigenvalue of \(A\) with respect to \(\Lambda\). Since \(y\in V\), this contradicts Lemma 5.10.19.
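A worked instance of Corollary 5.10.23, with a computation that is ours: assume \(K\) is \(1\)-linearly surjective and take \(A=\partial^{2}+1=(\partial-\mathrm{i})(\partial+\mathrm{i})\), which splits over \(K\) and has eigenvalues \(\pm\mathrm{i}+K^{\dagger}\) (coming from the basis \(\mathrm{e}^{\pm x\mathrm{i}}\) of \(\ker_{\mathrm{U}}A\)). For \(\phi=2x\) the coset \(\phi^{\prime}\mathrm{i}+K^{\dagger}=2\mathrm{i}+K^{\dagger}\) is not an eigenvalue, since \(\mathrm{i},3\mathrm{i}\notin K^{\dagger}\) by \(K^{\dagger}\cap H\mathrm{i}=\mathrm{I}(H)\mathrm{i}\) and \(1,3\notin\mathrm{I}(H)\). For \(f\in K\) we compute
\[A\big{(}f\,\mathrm{e}^{2x\mathrm{i}}\big{)}\ =\ \big{(}f^{\prime\prime}+4\mathrm{i}f^{\prime}-3f\big{)}\,\mathrm{e}^{2x\mathrm{i}},\]
so for a constant \(c\in\mathbb{C}^{\times}\) the unique solution of \(A(y)=c\,\mathrm{e}^{2x\mathrm{i}}\) in \(K\,\mathrm{e}^{2x\mathrm{i}}\) is \(y=-\frac{c}{3}\,\mathrm{e}^{2x\mathrm{i}}\).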
Let \(N\) be an \(n\times n\) matrix over \(K\), \(n\geqslant 1\). We end this subsection with a variant of Lemma 5.10.19 for the matrix differential equation \(y^{\prime}=Ny\). Set \(S:=\mathrm{sol}_{\mathrm{U}}(N)\), so \(S\) is a \(\mathbb{C}\)-linear subspace of \(\mathrm{U}^{n}\) of dimension \(\leqslant n\).
**Lemma 5.10.24**.: _Suppose \(S\) has a basis_
\[\mathrm{e}^{\phi_{1}\mathrm{i}}f_{1},\ldots,\mathrm{e}^{\phi_{d}\mathrm{i}}\, f_{d}\ \ \ \text{ where }\phi_{1},\ldots,\phi_{d}\in H\text{ and }f_{1},\ldots,f_{d}\in K^{n}\subseteq\mathrm{U}^{n}.\]
_Set \(\alpha_{j}:=\phi^{\prime}_{j}\mathrm{i}+K^{\dagger}\in K/K^{\dagger}\) for \(j=1,\ldots,d\). Then_
\[\mathrm{mult}_{\alpha}(N)\ =\ |\{j\in\{1,\ldots,d\}:\ \alpha_{j}=\alpha\}|\ \ \ \text{ for all }\alpha\in K/K^{\dagger}.\]
Proof.: We have \(f_{j}=(f_{1j},\ldots,f_{nj})^{\mathrm{t}}\in\mathrm{U}^{n}\) for \(j=1,\ldots,d\). We first consider the case that \(N\) is the companion matrix of a monic \(B\in K[\partial]\) of order \(n\). Then we have the \(\mathbb{C}\)-linear isomorphism \(z\mapsto(z,z^{\prime},\ldots,z^{(n-1)})^{\mathrm{t}}\colon\ker_{\mathrm{U}}B\to S\); its inverse maps the given basis to a basis \(\mathrm{e}^{\phi_{1}\mathrm{i}}f_{11},\ldots,\mathrm{e}^{\phi_{d}\mathrm{i}}f_{1d}\) of \(\ker_{\mathrm{U}}B\). For \(j=1,\ldots,d\) we have \(\phi_{j}-\phi(\lambda_{j})\preccurlyeq 1\) with \(\lambda_{j}\in\Lambda\), and so this basis has the form \(g_{1}\,\mathrm{e}(\lambda_{1}),\ldots,g_{d}\,\mathrm{e}(\lambda_{d})\) with \(g_{1},\ldots,g_{d}\in K^{\times}\). Now use Lemmas 2.4.35 and 5.10.19, and the fact that \(\alpha_{j}=\lambda_{j}+K^{\dagger}\) for \(j=1,\ldots,d\).
For the general case, [ADH, 5.5.9] gives the companion matrix \(M\) of a monic \(B\in K[\partial]\) of order \(n\) such that \(y^{\prime}=Ny\) is equivalent to \(y^{\prime}=My\). This yields \(P\in\mathrm{GL}_{n}(K)\) such that \(f\mapsto Pf\colon S\to\mathrm{sol}_{\mathrm{U}}(M)\) is a \(\mathbb{C}\)-linear isomorphism, and so \(PS=\mathrm{sol}_{\mathrm{U}}(M)\). Since \(P\,\mathrm{e}^{\phi_{j}\mathrm{i}}\,f_{j}=\mathrm{e}^{\phi_{j}\mathrm{i}}\,g_{j}\) with \(g_{j}\in K^{n}\) for \(j=1,\ldots,d\), we obtain a basis \(\mathrm{e}^{\phi_{1}\mathrm{i}}\,g_{1},\ldots,\mathrm{e}^{\phi_{d}\mathrm{i}}\,g_{d}\) of the \(\mathbb{C}\)-linear subspace \(\mathrm{sol}_{\mathrm{U}}(M)\) of \(\mathrm{U}^{n}\), so we are in the special case treated earlier.
### A relative version of Corollary 5.10.20 (\({}^{*}\))
_In this subsection \(\mathrm{I}(K)\subseteq K^{\dagger}\)._ We use an isomorphism as in Lemma 5.10.2 to identify \(\mathrm{U}=K[\mathrm{e}(\Lambda)]\) with \(K[\mathrm{e}^{H\mathrm{i}}]\).
Let \(F\) be a Liouville closed Hardy field extension of \(H\); set \(L:=F[\mathrm{i}]\subseteq\mathcal{C}^{<\infty}[\mathrm{i}]\). We show here how various results about \(H\), \(K\) extend in a coherent way to \(F\), \(L\). First, Corollary 4.4.3 yields a complement \(\Lambda_{L}\) of the \(\mathbb{Q}\)-linear subspace \(L^{\dagger}\) of \(L\) with \(\Lambda\subseteq\Lambda_{L}\subseteq F\mathrm{i}\). Let \(\mathrm{U}_{L}=L\big{[}\mathrm{e}(\Lambda_{L})\big{]}\) be the universal exponential extension of \(L\) containing \(\mathrm{U}=K\big{[}\mathrm{e}(\Lambda)\big{]}\) as a differential subring described in the remarks following Corollary 2.2.13. We also have the differential subring \(L[\mathrm{e}^{F\mathrm{i}}]\) of \(\mathcal{C}^{<\infty}[\mathrm{i}]\) with \(\mathrm{U}=K[\mathrm{e}^{H\mathrm{i}}]\subseteq L[\mathrm{e}^{F\mathrm{i}}]\).
**Lemma 5.10.25**.: _There is an isomorphism \(\iota\colon\mathrm{U}_{L}\to L[\mathrm{e}^{F\mathrm{i}}]\) of differential \(L\)-algebras with \(\iota\big{(}\mathrm{e}(\Lambda_{L})\big{)}\subseteq\mathrm{e}^{F\mathrm{i}}\) that is the identity on \(\mathrm{U}\). Thus the diagram below commutes:_
\[\begin{array}{ccc}\mathrm{U}_{L}&\stackrel{{\iota}}{{\longrightarrow}}&L[\mathrm{e}^{F\mathrm{i}}]\\ \cup&&\cup\\ \mathrm{U}&\stackrel{{\mathrm{id}}}{{\longrightarrow}}&K[\mathrm{e}^{H\mathrm{i}}]\end{array}\]
Proof.: Lemma 5.10.2 yields an isomorphism \(\iota_{L}\colon\mathrm{U}_{L}\to L[\mathrm{e}^{F\mathrm{i}}]\) of differential \(L\)-algebras with \(\iota_{L}\big{(}\mathrm{e}(\Lambda_{L})\big{)}\subseteq\mathrm{e}^{F\mathrm{i}}\). By Lemma 2.2.12 we have \(\iota_{L}^{-1}\big{(}K[\mathrm{e}^{H\mathrm{i}}]\big{)}=K[E]\) where \(E=\{u\in\mathrm{U}_{L}^{\times}:u^{\dagger}\in K\}\). From \(\mathrm{U}_{L}^{\times}=L^{\times}\,\mathrm{e}(\Lambda_{L})\) we get \(E=K^{\times}\,\mathrm{e}(\Lambda)\), so \(K[E]=\mathrm{U}\). Hence \(\iota_{L}^{-1}\) restricts to an automorphism of the differential \(K\)-algebra \(\mathrm{U}\). So this restriction equals \(\sigma_{\chi}\) where \(\chi\in\mathrm{Hom}(\Lambda,\mathbb{C}^{\times})\). (Lemma 2.2.14.) Extending \(\chi\) to \(\chi_{L}\in\mathrm{Hom}(\Lambda_{L},\mathbb{C}^{\times})\) yields an isomorphism
\[\iota:=\iota_{L}\circ\sigma_{\chi_{L}}\,:\ \mathrm{U}_{L}\to L[\mathrm{e}^{F\mathrm{i}}]\]
of differential \(L\)-algebras with the desired property.
Fix an isomorphism \(\iota\colon\mathrm{U}_{L}\to L[\mathrm{e}^{F\mathrm{i}}]\) as in the previous lemma and identify \(\mathrm{U}_{L}\) with its image via \(\iota\); thus \(\mathrm{U}=K[\mathrm{e}^{H\mathrm{i}}]\subseteq L[\mathrm{e}^{F\mathrm{i}}]=\mathrm{U}_{L}\subseteq\mathcal{C}^{<\infty}[\mathrm{i}]\). For each \(\mu\in\Lambda_{L}\) we have an element \(\phi(\mu)\) of \(F\) (unique up to addition of an element of \(2\pi\mathbb{Z}\)) such that \(\mathrm{e}(\mu)=\mathrm{e}^{\phi(\mu)\mathrm{i}}\); we take \(\phi(0):=0\). The \(\phi(\lambda)\in F\) are actually in \(H\) and agree with the \(\phi(\lambda)\) defined earlier, up to addition of elements of \(2\pi\mathbb{Z}\). _In the rest of this subsection we assume \(\mathrm{I}(L)\subseteq L^{\dagger}\)._ So for \(\mu_{1},\mu_{2}\in\Lambda_{L}\): \(\mu_{1}=\mu_{2}\Leftrightarrow\phi(\mu_{1})-\phi(\mu_{2})\preccurlyeq 1\).
**Lemma 5.10.26**.: _Let \(\mu\in\Lambda_{L}\). Then_
\[\phi(\mu)\in H\iff\phi(\mu)\in H+\mathcal{O}_{F}\iff\mu\in\Lambda.\]
Proof.: We have \(\mu=\phi(\mu)^{\prime}\mathrm{i}\). So if \(\phi(\mu)\in H+\mathcal{O}_{F}\), then \(\mu\in H\mathrm{i}+\mathrm{I}(L)\subseteq K+L^{\dagger}=\Lambda+L^{\dagger}\), and hence \(\mu\in\Lambda\). Conversely, if \(\mu\in\Lambda\), then \(\phi(\mu)^{\prime}\mathrm{i}\in\Lambda\subseteq H\mathrm{i}\), so \(\phi(\mu)^{\prime}\in H\), and thus \(\phi(\mu)\in H\).
Lemma 5.10.17 with \(F\), \(L\), \(\Lambda_{L}\) in place of \(H\), \(K\), \(\Lambda\), and Lemma 5.10.26 yield:
**Corollary 5.10.27**.: \(H+\mathcal{O}_{F}=\big{\{}\phi\in F:\phi-\phi(\lambda)\preccurlyeq 1\text{ for some }\lambda\big{\}}\)_._
Let \(A\in K[\partial]^{\neq}\), \(r:=\operatorname{order}A\), \(V:=\operatorname{ker}_{\mathrm{U}}A\), \(V_{L}:=\operatorname{ker}_{\mathrm{U}_{L}}A\), so \(V=V_{L}\cap\mathrm{U}\). Corollary 5.10.20 applied to \(F\), \(L\) in place of \(H\), \(K\) then gives a Hahn basis
\[f_{1}\operatorname{e}^{\phi_{1}i},\ \ldots,\ f_{d}\operatorname{e}^{\phi_{d}i} \qquad(f_{j}\in L^{\times},\phi_{j}\in F)\]
of \(V_{L}\). Recall from Corollary 4.4.3 that \(\mathscr{E}^{\mathrm{e}}(A_{\lambda})=\mathscr{E}^{\mathrm{e}}_{L}(A_{\lambda})\cap\Gamma\) for all \(\lambda\). Applying Lemma 5.10.21 to such a Hahn basis of \(V_{L}\) and \(F\), \(L\), \(\Lambda_{L}\), \(V_{L}\) in place of \(H\), \(K\), \(\Lambda\), \(V\), and using Corollary 5.10.27 we obtain:
\[\mathscr{E}^{\mathrm{u}}(A)\ \supseteq\ \{vf_{j}:j=1,\ldots,d,\ \phi_{j}\in H+ \mathcal{O}_{F}\}\cap\Gamma.\]
Recall from Corollary 2.6.27 that if \(A\) is terminal, then \(\mathscr{E}^{\mathrm{e}}(A_{\lambda})=\mathscr{E}^{\mathrm{e}}_{L}(A_{\lambda})\) for all \(\lambda\), and \(\mathscr{E}^{\mathrm{u}}(A)=\mathscr{E}^{\mathrm{u}}_{L}(A)\). We have \(d=\dim_{\mathbb{C}}V_{L}\leqslant r\), and Lemma 5.10.22 gives conditions on \(A\), \(F\), \(L\) which guarantee \(d=r\). The next corollary shows that if \(A\) is terminal and \(d=r\), then the "frequencies" \(\phi_{j}\) of the elements of our Hahn basis of \(V_{L}\) above can be taken in \(H\):
**Corollary 5.10.28**.: _Suppose \(A\) is terminal and \(d=r\). Then \(V_{L}\) has a Hahn basis_
\[f_{1}\operatorname{e}^{\phi_{1}i},\ \ldots,\ f_{r}\operatorname{e}^{\phi_{r}i} \qquad(f_{j}\in L^{\times},\ \phi_{j}\in H).\]
_For any such basis and all \(\lambda\) we have_
\[\mathscr{E}^{\mathrm{e}}(A_{\lambda})\ =\ \mathscr{E}^{\mathrm{e}}_{L}(A_{ \lambda})\ =\ v(\ker^{\neq}_{L}A_{\lambda})\ =\ \big{\{}vf_{j}:j=1,\ldots,r,\ \phi_{j}-\phi(\lambda)\preccurlyeq 1\big{\}}\]
_and \(\mathscr{E}^{\mathrm{u}}(A)=\mathscr{E}^{\mathrm{u}}_{L}(A)=\{vf_{1},\ldots,vf_{r}\}\subseteq\Gamma\), and the eigenvalues of \(A\) viewed as element of \(L[\partial]\) are \(\phi_{1}^{\prime}\mathrm{i}+L^{\dagger},\ldots,\phi_{r}^{\prime}\mathrm{i}+L^{\dagger}\)._
Proof.: Lemma 2.6.16 and Corollary 2.6.27 give \(\mathscr{E}^{\mathrm{e}}(A_{\lambda})=\mathscr{E}^{\mathrm{e}}_{L}(A_{ \lambda})=v(\ker^{\neq}_{L}A_{\lambda})\) for all \(\lambda\), and \(\ker^{\neq}_{L}A_{\mu}=\emptyset\) for \(\mu\in\Lambda_{L}\setminus\Lambda\). Take any Hahn basis of \(V_{L}\) as described before the corollary. Lemma 5.10.17 yields \(\lambda_{j}\in\Lambda_{L}\) with \(\phi_{j}-\phi(\lambda_{j})\preccurlyeq 1\). We have \(g_{j}:=f_{j}\operatorname{e}^{(\phi_{j}-\phi(\lambda_{j}))i}\in L^{\times}\) by Proposition 5.5.18 and \(f_{j}\operatorname{e}^{\phi_{j}i}=g_{j}\operatorname{e}(\lambda_{j})\), so \(g_{j}\in\ker^{\neq}_{L}A_{\lambda_{j}}\). This yields \(\lambda_{j}\in\Lambda\) for \(j=1,\ldots,r\). Replacing each pair \(f_{j}\), \(\phi_{j}\) by \(g_{j}\), \(\phi(\lambda_{j})\) we obtain a Hahn basis of \(V_{L}\) as claimed. The rest follows from Lemmas 5.10.21 and 5.10.19.
### Duality considerations \((^{*})\)
As before, \(A\in K[\partial]^{\neq}\) has order \(r\), and \(V:=\ker_{\mathrm{U}}A\). Recall from Section 2.4 the bilinear form \([\ ,\ ]_{A}\) on the \(\mathbb{C}\)-linear space \(\Omega=\operatorname{Frac}(\mathrm{U})\). As in the previous subsection we take \(\mathrm{U}=K[\operatorname{e}^{H\mathrm{i}}]\) and fix values \(\mathrm{e}(\lambda)\).
**Corollary 5.10.29**.: _Suppose \(A\) splits over \(K\), \(\mathrm{I}(K)\subseteq K^{\dagger}\), and \(r\leqslant 1\) or \(K\) is \(1\)-linearly surjective. Let \(f_{j}\), \(\phi_{j}\) be as in Corollary 5.10.20. Then the \(\mathbb{C}\)-linear space \(\ker_{\mathcal{C}<\infty[\dot{\mathrm{i}}]}A^{*}\) equals \(W:=\ker_{\mathrm{U}}A^{*}\) and has a basis_
\[f_{1}^{*}\operatorname{e}^{-\phi_{1}i},\ \ldots,\ f_{r}^{*}\operatorname{e}^{-\phi_{r}i} \qquad\text{ where }f_{k}^{*}\in K^{\times}\ (k=1,\ldots,r)\]
_such that \(\big{[}f_{j}\operatorname{e}^{\phi_{j}i},f_{k}^{*}\operatorname{e}^{-\phi_{k}i} \big{]}_{A}=\delta_{jk}\) for \(j,k=1,\ldots,r\)._
Proof.: By Lemma 5.10.22 we have \(\dim_{\mathbb{C}}V=\dim_{\mathbb{C}}W=r\). As in the proof of Corollary 5.10.20 we obtain \(g_{j}\in K^{\times}\), \(\lambda_{j}\in\Lambda\) with \(\phi_{j}-\phi(\lambda_{j})\preccurlyeq 1\), and
\[y_{j}\ :=\ f_{j}\,\mathrm{e}^{\phi_{j}\mathrm{i}}=g_{j}\,\mathrm{e}(\lambda_{j}) \in\mathrm{U}^{\times},\quad j=1,\ldots,r.\]
The basis \(y_{1},\ldots,y_{r}\) of \(V\) yields by Corollary 2.5.5 that \(A=a(\partial-a_{r})\cdots(\partial-a_{1})\) with \(a\in K^{\times}\) and \((a_{1},\ldots,a_{r})=\operatorname{split}(y_{1},\ldots,y_{r})\). It is easy to reduce to the case \(a=1\). Then Corollary 2.5.16 provides a basis \(y_{1}^{*},\ldots,y_{r}^{*}\) of \(W\) with \([y_{j},y_{k}^{*}]_{A}=\delta_{jk}\) for all \(j\), \(k\), \(\operatorname{split}(y_{r}^{*},\ldots,y_{1}^{*})=(-a_{r},\ldots,-a_{1})\), and \(y_{k}^{*}=h_{k}\,\mathrm{e}(-\lambda_{k})\), \(h_{k}\in K^{\times}\), so \(y_{k}^{*}=f_{k}^{*}\,\mathrm{e}^{-\phi_{k}\mathrm{i}}\), where \(f_{k}^{*}:=h_{k}\,\mathrm{e}^{(\phi_{k}-\phi(\lambda_{k}))\mathrm{i}}\in K^{ \times}\), for \(k=1,\ldots,r\).
**Corollary 5.10.30**.: _Suppose \(\dim_{\mathbb{C}}V=r\geqslant 1\) and \(\mathrm{I}(K)\subseteq K^{\dagger}\), and let \(f_{j}\), \(\phi_{j}\) be as in Corollary 5.10.20. Let \(A=\partial^{r}+a_{r-1}\partial^{r-1}+\cdots+a_{0}\ (a_{0},\ldots,a_{r-1}\in K)\). Then_
\[\phi_{1}+\cdots+\phi_{r}\equiv b\bmod\mathcal{O}_{H}\qquad\text{for any $b\in H$ with $b^{\prime}=-\operatorname{Im}a_{r-1}$,}\]
_and hence \(\phi_{1}+\cdots+\phi_{r}\preccurlyeq 1\Longleftrightarrow a_{r-1}\in K^{\dagger}\). In particular, if \(A^{*}=(-1)^{r}A_{\ltimes a}\)\((a\in K^{\times})\) or \(a_{r-1}\in H\), then \(\phi_{1}+\cdots+\phi_{r}\preccurlyeq 1\)._
Proof.: Take \(g_{j}\), \(\lambda_{j}\) as in the proof of Corollary 5.10.20. Then
\[\lambda_{1}+\cdots+\lambda_{r}\ \equiv\ -a_{r-1}\bmod K^{\dagger}\]
by Corollary 2.5.2 and Lemma 5.10.19. Now \(K^{\dagger}\cap H\mathrm{i}=\mathrm{I}(H)\mathrm{i}\) by Lemma 1.2.16 and the remarks preceding it. Also \(\phi(\lambda_{j})^{\prime}\mathrm{i}=\lambda_{j}\) for all \(j\), and this yields the first claim. For the rest note that if \(A^{*}=(-1)^{r}A_{\ltimes a}\)\((a\in K^{\times})\), then \(a_{r-1}\in K^{\dagger}\) by the remarks after the proof of Proposition 2.4.11.
**Corollary 5.10.31**.: _Suppose \(A\) is self-dual and \(\mathrm{I}(K)\subseteq K^{\dagger}\). Also assume \(K\) is \(1\)-linearly surjective and \(\dim_{\mathbb{C}}V=r\), or \(r\geqslant 1\) and \(K\) is \((r-1)\)-linearly surjective. Then with the \(f_{j}\), \(\phi_{j}\) as in Corollary 5.10.20 we have \(\phi_{1}+\cdots+\phi_{d}\preccurlyeq 1\), and there is for each \(i\in\{1,\ldots,d\}\) a \(j\in\{1,\ldots,d\}\) with \(\phi_{i}+\phi_{j}\preccurlyeq 1\)._
Proof.: By Corollary 2.4.9 and (2.5.1) we have \(\operatorname{mult}_{\lambda}A=\operatorname{mult}_{-\lambda}A\) for all \(\lambda\). With \(\lambda_{1},\ldots,\lambda_{d}\) as in the proof of Corollary 5.10.20 this gives \(\lambda_{1}+\cdots+\lambda_{d}=0\), by Lemma 5.10.19, and thus \(\phi_{1}+\cdots+\phi_{d}\preccurlyeq 1\). For \(i=1,\ldots,d\) we have \(\operatorname{mult}_{\lambda_{i}}A=\operatorname{mult}_{-\lambda_{i}}A>0\) by Lemma 5.10.19, so that same lemma gives \(j\in\{1,\ldots,d\}\) such that \(\lambda_{i}+\lambda_{j}=0\), hence \(\phi_{i}+\phi_{j}\preccurlyeq 1\).
**Corollary 5.10.32**.: _Let \(A\), \(K\) be as in Corollary 5.10.31. Then \(V\) has a basis_
\[f_{1}\,\mathrm{e}^{\phi_{1}\mathrm{i}},g_{1}\,\mathrm{e}^{-\phi_{1}\mathrm{i}}, \,\ldots,\,f_{m}\,\mathrm{e}^{\phi_{m}\mathrm{i}},\,g_{m}\,\mathrm{e}^{-\phi_{ m}\mathrm{i}},\,\,h_{1},\ldots,h_{n}\qquad(2m+n=d)\]
_where \(f_{1},\ldots,f_{m},g_{1},\ldots,g_{m},h_{1},\ldots,h_{n}\in K^{\times}\), and \(\phi_{1},\ldots,\phi_{m}\in H^{>\mathbb{R}}\) are apart._
Proof.: By the proof of Corollary 5.10.31, if \(\lambda\) is an eigenvalue of \(A\), then so is \(-\lambda\), with the same multiplicity. Hence Lemma 5.10.19 yields a basis
\[f_{1}\,\mathrm{e}(\lambda_{1}),g_{1}\,\mathrm{e}(-\lambda_{1}),\,\ldots,\,f_{m }\,\mathrm{e}(\lambda_{m}),g_{m}\,\mathrm{e}(-\lambda_{m}),\,\,h_{1},\ldots,h_ {n}\qquad(2m+n=d)\]
of \(V\) where \(f_{j},g_{j},h_{k}\in K\) for \(j=1,\ldots,m\), \(k=1,\ldots,n\) and \(\lambda_{j}\in\Lambda\) with \(\operatorname{Im}\lambda_{j}>0\) for \(j=1,\ldots,m\). Note \(\mathrm{e}(-\lambda)=\mathrm{e}(\lambda)^{-1}=\mathrm{e}^{-\phi(\lambda) \mathrm{i}}\). Setting \(\phi_{j}:=\phi(\lambda_{j})\) for \(j=1,\ldots,m\) thus yields a basis of \(V\) as claimed.
In Section 2.1 we defined a "positive definite hermitian form" on the \(K\)-linear space \(\mathrm{U}=K\bigl{[}\mathrm{e}(\Lambda)\bigr{]}\), which via our isomorphism \(\iota\colon\mathrm{U}\to K[\mathrm{e}^{H\mathrm{i}}]\) transfers to a "positive definite hermitian form" \(\langle\,\ \rangle\) on the \(K\)-linear space \(K[\mathrm{e}^{H\mathrm{i}}]\). Note that \(\langle\,\ \rangle\) does not depend on the initial choice of isomorphism \(\iota\) as in Lemma 5.10.2 at the
beginning of this section, by the remarks following that lemma and Corollary 2.2.18. Suppose
\[y_{1}=f_{1}\,\mathrm{e}^{\phi_{1}i},\ \ldots,\ y_{d}=f_{d}\,\mathrm{e}^{\phi_{d }i}\]
is a basis of the \(\mathbb{C}\)-linear space \(V\) as in Corollary 5.10.20 such that for \(j,k=1,\ldots,d\) we have \(\phi_{j}=\phi_{k}\) or \(\phi_{j}-\phi_{k}\succ 1\). Then by Lemma 2.1.4 and Corollary 5.5.23,
\[\langle y_{j},y_{k}\rangle\ =\ 0\ \ \text{if }\phi_{j}\neq\phi_{k},\qquad \langle y_{j},y_{k}\rangle\ =\ f_{j}\overline{f_{k}}\neq 0\ \ \text{if }\phi_{j}=\phi_{k}.\]
**The case that \(A\in H[\partial]\).**_In this subsection we assume \(A\in H[\partial]^{\neq}\) has order \(r\)._ Then \(V:=\ker_{\mathrm{U}}A\) is closed under the complex conjugation automorphism of the differential ring \(\mathcal{C}^{<\infty}[\mathrm{i}]\). We have \(\mathrm{U}_{\mathrm{r}}=\mathrm{U}\cap\mathcal{C}^{<\infty}\) and by Corollary 2.2.20 a decomposition of \(\mathrm{U}_{\mathrm{r}}\) as an internal direct sum of \(H\)-linear subspaces:
\[\mathrm{U}_{\mathrm{r}}\ =\ H\oplus\bigoplus_{\mathrm{Im}\,\lambda>0}\big{(}H \cos\phi(\lambda)\oplus H\sin\phi(\lambda)\big{)}.\]
Set \(V_{\mathrm{r}}:=V\cap\mathcal{C}^{<\infty}\), an \(\mathbb{R}\)-linear subspace of \(V\) with \(V=V_{\mathrm{r}}\oplus V_{\mathrm{r}}i\) (internal direct sum of \(\mathbb{R}\)-linear subspaces of \(V\)). Each basis of the \(\mathbb{R}\)-linear space \(V_{\mathrm{r}}\) is a basis of the \(\mathbb{C}\)-linear space \(V\); in particular, \(\dim_{\mathbb{C}}V=\dim_{\mathbb{R}}V_{\mathrm{r}}\). If \(\dim_{\mathbb{C}}V=r\), then \(V_{\mathrm{r}}=\ker_{\mathcal{C}^{<\infty}}A\). If \(\lambda\) is an eigenvalue of \(A\), then so is \(-\lambda\), with \(\mathrm{mult}_{\lambda}(A)=\mathrm{mult}_{-\lambda}(A)\).
**Lemma 5.10.33**.: _The \(\mathbb{C}\)-linear space \(V=\ker_{\mathrm{U}}A\) has a basis_
\[g_{1}\,\mathrm{e}^{\phi_{1}i},\,g_{1}\,\mathrm{e}^{-\phi_{1}i},\ \ldots,\ g_{m}\, \mathrm{e}^{\phi_{m}i},\,g_{m}\,\mathrm{e}^{-\phi_{m}i},\ h_{1},\ \ldots,\ h_{n}\qquad(2m+n\leqslant r),\]
_where \(g_{1},\ldots,g_{m}\in H^{>}\), \(\phi_{1},\ldots,\phi_{m}\in H\) with \(\phi_{j}-\phi(\lambda_{j})\preccurlyeq 1\) and \(\mathrm{Im}\,\lambda_{j}>0\) for some \(\lambda_{j}\in\Lambda\) for \(j=1,\ldots,m\), and \(h_{1},\ldots,h_{n}\in H^{\times}\). For any such basis of \(V\),_
\[g_{1}\cos\phi_{1},\,g_{1}\sin\phi_{1},\ \ldots,\ g_{m}\cos\phi_{m},\,g_{m}\sin \phi_{m},\ h_{1},\ \ldots,\ h_{n}\]
_is a basis of the \(\mathbb{R}\)-linear space \(V_{\mathrm{r}}\), and \(h_{1},\ldots,h_{n}\) is a basis of the \(\mathbb{R}\)-linear subspace \(\ker_{H}A=V\cap H\) of \(H\)._
Proof.: By Corollary 2.5.18 the \(\mathbb{C}\)-linear space \(V\) has a basis
\[f_{1}\,\mathrm{e}(\lambda_{1}),\ \overline{f_{1}}\,\mathrm{e}(-\lambda_{1}),\ \ldots,\ f_{m}\,\mathrm{e}(\lambda_{m}),\ \overline{f_{m}}\,\mathrm{e}(-\lambda_{m}),\ h_{1},\ \ldots,\ h_{n}\]
with \(f_{1},\ldots,f_{m}\in K^{\times}\), \(\lambda_{1},\ldots,\lambda_{m}\in\Lambda\) with \(\mathrm{Im}\,\lambda_{1},\ldots,\mathrm{Im}\,\lambda_{m}>0\) and \(h_{1},\ldots,h_{n}\) in \(H^{\times}\). Moreover, for each such basis,
\[\mathrm{Re}\big{(}f_{1}\,\mathrm{e}(\lambda_{1})\big{)},\ \mathrm{Im}\big{(}f_{1}\,\mathrm{e}(\lambda_{1})\big{)},\ \ldots,\ \mathrm{Re}\big{(}f_{m}\,\mathrm{e}(\lambda_{m})\big{)},\ \mathrm{Im}\big{(}f_{m}\,\mathrm{e}(\lambda_{m})\big{)},\ h_{1},\ \ldots,\ h_{n}\]
is a basis of the \(\mathbb{R}\)-linear space \(V_{\mathrm{r}}\), and \(h_{1},\ldots,h_{n}\) is a basis of its \(\mathbb{R}\)-linear subspace \(\ker_{H}A=V\cap H\). Set \(g_{j}:=|f_{j}|=|\overline{f_{j}}|\in H^{>}\) (\(j=1,\ldots,m\)). Lemma 5.5.21 gives \(\phi_{j}\in H\) such that \(\phi_{j}-\phi(\lambda_{j})\preccurlyeq 1\) and \(f_{j}=g_{j}\,\mathrm{e}^{(\phi_{j}-\phi(\lambda_{j}))i}\), and thus \(f_{j}\,\mathrm{e}(\lambda_{j})=g_{j}\,\mathrm{e}^{\phi_{j}i}\), for \(j=1,\ldots,m\). Then \(g_{1},\ldots,g_{m},\phi_{1},\ldots,\phi_{m},h_{1},\ldots,h_{n}\) have the desired properties.
**Corollary 5.10.34**.: _Suppose \(K\) is \(1\)-linearly surjective when \(r\geqslant 2\), \(\mathrm{I}(K)\subseteq K^{\dagger}\), and \(A\) splits over \(K\). Then \(V=\ker_{\mathcal{C}^{<\infty}[\mathrm{i}]}A\) and the \(\mathbb{C}\)-linear space \(V\) has a basis_
\[g_{1}\,\mathrm{e}^{\phi_{1}i},\,g_{1}\,\mathrm{e}^{-\phi_{1}i},\ \ldots,\ g_{m}\, \mathrm{e}^{\phi_{m}i},\,g_{m}\,\mathrm{e}^{-\phi_{m}i},\ h_{1},\ \ldots,\ h_{n}\qquad(2m+n=r),\]
_where \(g_{j},\phi_{j}\in H^{>}\) with \(\phi_{j}\succ 1\)\((j=1,\ldots,m)\) and \(h_{k}\in H^{\times}\)\((k=1,\ldots,n)\). For any such basis of \(V\), the \(\mathbb{R}\)-linear space \(\ker_{\mathcal{C}^{<\infty}}A\) has basis_
\[g_{1}\cos\phi_{1},\,g_{1}\sin\phi_{1},\ \ldots,\ g_{m}\cos\phi_{m},\,g_{m}\sin\phi_{m},\ h_{1},\ \ldots,\ h_{n},\]
_and the \(\mathbb{R}\)-linear subspace \(\ker_{H}A=H\cap\ker_{\mathcal{C}^{<\infty}}A\) of \(H\) has basis \(h_{1},\ldots,h_{n}\)._
Proof.: By Corollary 2.5.8 we have \(\dim_{\mathbb{C}}\ker_{\mathrm{U}}A=r\), hence \(V=\ker_{\mathcal{C}^{<\infty}[i]}A\) and \(V_{r}=\ker_{\mathcal{C}^{<\infty}}A\). Now use Lemmas 5.10.17 and 5.10.33.
From Lemma 5.10.33 we obtain likewise, using Lemma 2.5.22 and Corollary 5.5.15:
**Corollary 5.10.35**.: _Suppose \(r=2\) and \(A\) splits over \(K\) but not over \(H\). Then there are \(g,\phi\in H^{>}\) such that_
\[\ker_{\mathcal{C}^{<\infty}}A\ =\ \mathbb{R}g\cos\phi+\mathbb{R}g\sin\phi\ =\ \big{\{}cg\cos(\phi+d):\ c,d\in\mathbb{R}\big{\}}.\]
_Moreover, if \(\mathrm{I}(K)\subseteq K^{\dagger}\), then we can choose here in addition \(\phi\succ 1\)._
_Remark_.: Let \(A\), \(g\), \(\phi\), \(r\) be as in Corollary 5.10.35. If \(\phi\succ 1\), then all \(y\in\ker_{\mathcal{C}^{<\infty}}^{\neq}A\) oscillate. If \(\phi\preccurlyeq 1\), then no \(y\in\ker_{\mathcal{C}^{<\infty}}A\) oscillates.
The following generalizes Corollary 5.5.28:
**Corollary 5.10.36**.: _Suppose \(K\) is \(1\)-linearly surjective and \(A\) splits over \(K\). Let \(\phi\) be an element of \(H\) with \(\phi>\mathbb{R}\) such that \(\phi^{\prime}\mathrm{i}+K^{\dagger}\) is not an eigenvalue of \(A\). Then for every \(h\in H\) there are unique \(f,g\in H\) such that \(A(f\cos\phi+g\sin\phi)=h\cos\phi\)._
Proof.: Let \(f,g\in H\) and \(A(f\cos\phi+g\sin\phi)=0\). By Lemma 5.5.26 it is enough to show \(f=g=0\). Set \(y:=\frac{1}{2}(f-g\mathrm{i})\in K\), so \(y\,\mathrm{e}^{\phi\mathrm{i}}+\overline{y}\,\mathrm{e}^{-\phi\mathrm{i}}=f \cos\phi+g\sin\phi\). The hypothesis and Corollary 2.5.8 give \(V=\ker_{\mathcal{C}^{<\infty}[i]}A\), so \(f\cos\phi+g\sin\phi\in V\). Suppose towards a contradiction that \(y\neq 0\). As in the proof of Corollary 5.10.23 we obtain \(y\,\mathrm{e}^{\phi\mathrm{i}}=z\,\mathrm{e}(\lambda)\), \(z\in K^{\times}\), where \(\lambda\) is not an eigenvalue with respect to \(\Lambda\). Also \(\lambda\neq 0\) in view of \(K^{\dagger}\subseteq H+\mathrm{I}(H)\mathrm{i}\). Hence
\[0\ =\ A(y\,\mathrm{e}^{\phi\mathrm{i}}+\overline{y}\,\mathrm{e}^{-\phi\mathrm{i} })\ =\ A\big{(}z\,\mathrm{e}(\lambda)+\overline{z}\,\mathrm{e}(-\lambda)\big{)}\ =\ A_{\lambda}(z)\,\mathrm{e}(\lambda)+A_{-\lambda}( \overline{z})\,\mathrm{e}(-\lambda),\]
so \(A_{\lambda}(z)=0\), contradicting that \(\lambda\) is not an eigenvalue of \(A\) with respect to \(\Lambda\).
Next a version of Lemma 5.10.8 for \(\mathrm{U}_{\mathrm{r}}\). _In the rest of this subsection \(\mathrm{I}(K)\subseteq K^{\dagger}\) and \(\Lambda=\Lambda_{H}\mathrm{i}\) where \(\Lambda_{H}\) is an \(\mathbb{R}\)-linear complement of \(\mathrm{I}(H)\) in \(H\)._
**Lemma 5.10.37**.: _Let \(\lambda_{1},\ldots,\lambda_{n}\in\Lambda\) be distinct, \(\mathrm{Im}\,\lambda_{j}>0\) for \(j=1,\ldots,n\), and_
\[y\ =\ f_{1}\cos\phi_{1}+g_{1}\sin\phi_{1}+\cdots+f_{n}\cos\phi_{n}+g_{n}\sin \phi_{n}+h\]
_where \(f_{1},\ldots,f_{n},g_{1},\ldots,g_{n},h\in H\) and \(\phi_{j}\in\phi(\lambda_{j})+\mathcal{O}_{H}\) for \(j=1,\ldots,n\). Then_
\[y\prec 1\quad\Longrightarrow\quad f_{1},\ldots,f_{n},g_{1},\ldots,g_{n},h\prec 1.\]
Proof.: Let \(j\) range over \(\{1,\ldots,n\}\). Setting \(a_{j}:=\frac{1}{2}(f_{j}-g_{j}\mathrm{i})\in K\) we have
\[y\ =\ a_{1}\,\mathrm{e}^{\phi_{1}\mathrm{i}}+\overline{a_{1}}\,\mathrm{e}^{-\phi_{1}\mathrm{i}}+\cdots+a_{n}\,\mathrm{e}^{\phi_{n}\mathrm{i}}+\overline{a_{n}}\,\mathrm{e}^{-\phi_{n}\mathrm{i}}+h,\]

and so with \(b_{j}:=a_{j}\,\mathrm{e}^{(\phi_{j}-\phi(\lambda_{j}))\mathrm{i}}\in K\) we have \(a_{j}\asymp b_{j}\) and

\[y\ =\ b_{1}\,\mathrm{e}^{\phi(\lambda_{1})\mathrm{i}}+\overline{b_{1}}\,\mathrm{e}^{-\phi(\lambda_{1})\mathrm{i}}+\cdots+b_{n}\,\mathrm{e}^{\phi(\lambda_{n})\mathrm{i}}+\overline{b_{n}}\,\mathrm{e}^{-\phi(\lambda_{n})\mathrm{i}}+h.\]
Set \(h_{j}:=\phi(\lambda_{j})^{\prime}\in H\). Then \(h_{j}\mathrm{i}=\lambda_{j}\), so the elements
\[h_{1},\,\ldots,\,h_{n},\,-h_{1},\,\ldots,\,-h_{n},\,0\]
of \(H\) are distinct, and \((\mathbb{R}h_{1}+\cdots+\mathbb{R}h_{n})\cap\mathrm{I}(H)=\{0\}\) in view of \(\Lambda\cap\mathrm{I}(H)\mathrm{i}=\{0\}\). Assuming \(y\prec 1\), Corollary 5.10.5 then yields \(b_{1},\overline{b_{1}},\ldots,b_{n},\overline{b_{n}},h\prec 1\), and thus \(f_{1},\ldots,f_{n},g_{1},\ldots,g_{n},h\prec 1\).
**Lemma 5.10.38**.: _Recalling that \(\mathrm{U}_{r}=\mathrm{U}\cap\mathcal{C}^{<\infty}\) we have:_
\[H\ =\ \bigl{\{}y\in\mathrm{U}_{\mathrm{r}}:\,y-h\text{ is non-oscillating for all }h\in H\bigr{\}}\] \[\ =\ \bigl{\{}y\in\mathrm{U}_{\mathrm{r}}:\,y\text{ lies in a Hausdorff field extension of }H\bigr{\}}.\]
Proof.: Let \(j\) range over \(\{1,\ldots,n\}\). Suppose \(y\in\mathrm{U}_{\mathrm{r}}\) and \(y-h\) is non-oscillating for all \(h\in H\). Take distinct \(\lambda_{1},\ldots,\lambda_{n}\in\Lambda\) with \(\mathrm{Im}\,\lambda_{1},\ldots,\mathrm{Im}\,\lambda_{n}>0\), and take \(f_{1},\ldots,f_{n},g_{1},\ldots,g_{n},h\in H\) such that
\[y\ =\ f_{1}\cos\phi(\lambda_{1})+g_{1}\sin\phi(\lambda_{1})+\cdots+f_{n}\cos \phi(\lambda_{n})+g_{n}\sin\phi(\lambda_{n})+h.\]
We claim that \(y=h\). To prove this claim, replace \(y\) by \(y-h\) to arrange \(h=0\). Towards a contradiction, assume \(y\neq 0\). Then \(f_{j}\neq 0\) or \(g_{j}\neq 0\) for some \(j\). Divide \(y\) and \(f_{1},\ldots,f_{n},g_{1},\ldots,g_{n}\) by a suitable element of \(H^{\times}\) to arrange \(f_{j},g_{j}\preccurlyeq 1\) for all \(j\) and \(f_{j}\asymp 1\) or \(g_{j}\asymp 1\) for some \(j\). Then \(y\preccurlyeq 1\) and \(y-s\) is non-oscillating for all \(s\in\mathbb{R}\), and so Lemma 5.1.19 yields \(\ell\in\mathbb{R}\) such that \(y-\ell\prec 1\). Then Lemma 5.10.37 gives \(f_{j},g_{j}\prec 1\) for all \(j\), a contradiction. This proves the first equality. The second equality follows from Lemma 5.1.20.
In combination with Corollary 5.10.34 this yields:
**Corollary 5.10.39**.: _Recalling that \(V_{r}=\ker_{\mathrm{U}}A\cap\mathcal{C}^{<\infty}\) we have:_
\[\ker_{H}A\ =\ \bigl{\{}y\in V_{\mathrm{r}}:\,y-h\text{ is non-oscillating for all }h\in H\bigr{\}}\] \[\ =\ \bigl{\{}y\in V_{\mathrm{r}}:\,y\text{ lies in a Hausdorff field extension of }H\bigr{\}}.\]
_Hence if \(K\) is \(1\)-linearly surjective in case \(r\geqslant 2\), and \(A\) splits over \(K\), then every \(y\) in \(\ker_{\mathcal{C}^{<\infty}}A\) such that \(y-h\) is non-oscillating for all \(h\in H\) lies in \(H\)._
**Connection to Lyapunov exponents**\((^{*})\).: _In this subsection \(\mathrm{I}(K)\subseteq K^{\dagger}\), and we take \(\Lambda=\Lambda_{H}i\) where \(\Lambda_{H}\) is an \(\mathbb{R}\)-linear complement of \(\mathrm{I}(H)\) in \(H\). Accordingly, \(\mathrm{U}=K[\mathrm{e}^{Hi}]\). Let also \(n\geqslant 1\). In Section 5.2 we introduced the Lyapunov exponent \(\lambda(f)\in\mathbb{R}_{\pm\infty}\) of \(f\in\mathcal{C}[i]^{n}\). For use in Section 7.4 we collect here some properties of these exponents \(\lambda(f)\) for \(f\in\mathrm{U}^{n}\subseteq\mathcal{C}[i]^{n}\). Recall: \(f,g\in\mathcal{C}[i],\ f\preccurlyeq g\Rightarrow\lambda(f)\geqslant \lambda(g)\)._
**Lemma 5.10.40**.: _Let \(f,g\in\mathrm{U}\). Then_
\[f\preccurlyeq_{\mathrm{g}}g\Rightarrow\lambda(f)\geqslant\lambda(g),\qquad f \asymp_{\mathrm{g}}g\Rightarrow\lambda(f)=\lambda(g).\]
Proof.: We first treat the special case \(g=\mathfrak{m}\in H^{\times}\). Then the first statement follows from the remark before the lemma and Lemma 5.10.10. Suppose \(f\asymp_{\mathrm{g}}\mathfrak{m}\); thanks to the first statement it suffices to show \(\Lambda(f)\subseteq\Lambda(\mathfrak{m})\). Towards a contradiction, suppose \(\Lambda(f)\not\subseteq\Lambda(\mathfrak{m})\). Then we have \(a\in\mathbb{R}\) with \(f\preccurlyeq\mathrm{e}^{-ax}\) and \(\mathfrak{m}\not\preccurlyeq\mathrm{e}^{-ax}\), so \(\mathrm{e}^{-ax}\prec\mathfrak{m}\) (since \(\mathfrak{m},\mathrm{e}^{-ax}\in H\)), hence \(f\prec\mathfrak{m}\) and thus \(f\prec_{\mathrm{g}}\mathfrak{m}\) by Corollary 5.10.9, contradicting \(f\asymp_{\mathrm{g}}\mathfrak{m}\).
The case \(g=0\) being trivial, we now assume \(g\neq 0\) for the general case and take \(\mathfrak{m}\in H^{\times}\) with \(g\asymp_{\mathrm{g}}\mathfrak{m}\); then \(\lambda(g)=\lambda(\mathfrak{m})\) by the special case (with \(f=g\)), so we may replace \(g\) by \(\mathfrak{m}\) to reduce the lemma to the special case.
We turn \(\mathrm{U}^{n}\) into a valued \(\mathbb{C}\)-linear space with valuation \(v_{\mathrm{g}}\colon\mathrm{U}^{n}\to\Gamma_{\infty}\) given by
\[v_{\mathrm{g}}(f):=\min\bigl{\{}v_{\mathrm{g}}(f_{1}),\ldots,v_{\mathrm{g}}(f_{ n})\bigr{\}}\quad\text{for }f=(f_{1},\ldots,f_{n})\in\mathrm{U}^{n},\]
and denote by \(\preccurlyeq_{\mathrm{g}}\) the associated dominance relation on \(\mathrm{U}^{n}\). In the next four corollaries, \(f\), \(g\) range over \(\mathrm{U}^{n}\).
**Corollary 5.10.41**.: \(f\preccurlyeq_{\mathrm{g}}g\Rightarrow\lambda(f)\geqslant\lambda(g)\) _and \(f\asymp_{\mathrm{g}}g\Rightarrow\lambda(f)=\lambda(g)\)._
Proof.: Suppose \(f=(f_{1},\ldots,f_{n})\preccurlyeq_{\rm g}g=(g_{1},\ldots,g_{n})\). Take \(k\) with \(v_{\rm g}g=v_{\rm g}g_{k}\). Then \(f_{j}\preccurlyeq_{\rm g}g_{k}\) and so \(\lambda(f_{j})\geqslant\lambda(g_{k})\geqslant\lambda(g)\), for all \(j\), by Lemma 5.10.40, and thus \(\lambda(f)\geqslant\lambda(g)\).
**Corollary 5.10.42**.: _Let \(m\geqslant 1\), \(g_{1},\ldots,g_{m}\in{\rm U}^{n}\), and \(g=g_{1}+\cdots+g_{m}\) be such that \(v_{\rm g}(g)=\min\bigl{\{}v_{\rm g}(g_{1}),\ldots,v_{\rm g}(g_{m})\bigr{\}}\). Then \(\lambda(g)=\min\bigl{\{}\lambda(g_{1}),\ldots,\lambda(g_{m})\bigr{\}}\)._
Proof.: We may arrange \(g_{i}\preccurlyeq_{\rm g}g_{1}\) for all \(i\), so \(v_{\rm g}g=v_{\rm g}g_{1}\). Then \(\lambda(g_{1})\leqslant\lambda(g_{i})\) for all \(i\) and \(\lambda(g)=\lambda(g_{1})\), by Corollary 5.10.41.
Here is a special case of Corollary 5.10.42:
**Corollary 5.10.43**.: _Suppose \(m\geqslant 1\), \(f={\rm e}(h_{1}i)f_{1}+\cdots+{\rm e}(h_{m}i)f_{m}\) with \(f_{1},\ldots,f_{m}\) in \(K^{n}\) and distinct \(h_{1},\ldots,h_{m}\in\Lambda_{H}\). Then \(\lambda(f)=\min\bigl{\{}\lambda(f_{1}),\ldots,\lambda(f_{m})\bigr{\}}\)._
For the notion of valuation-independence, see [ADH, p. 92].
**Corollary 5.10.44**.: _Suppose \(m\geqslant 1\), \(f={\rm e}^{\phi_{1}i}\,f_{1}+\cdots+{\rm e}^{\phi_{m}i}\,f_{m}\), \(f_{1},\ldots,f_{m}\in K^{n}\) and \(\phi_{1},\ldots,\phi_{m}\in H\). Suppose also \(\phi_{j}=\phi_{k}\) or \(\phi_{j}-\phi_{k}\succ 1\) for \(j,k=1,\ldots,m\), and for \(k=1,\ldots,m\) the \(f_{j}\) with \(1\leqslant j\leqslant m\) and \(\phi_{j}=\phi_{k}\) are valuation-independent. Then \(v_{\rm g}(f)=\min\{v(f_{1}),\ldots,v(f_{m})\}\), and thus \(\lambda(f)=\min\bigl{\{}\lambda(f_{1}),\ldots,\lambda(f_{m})\bigr{\}}\)._
Proof.: First arrange that \(l\in\{1,\ldots,m\}\) is such that \(\phi_{1},\ldots,\phi_{l}\) are distinct and each \(\phi_{j}\) with \(l<j\leqslant m\) equals one of \(\phi_{1},\ldots,\phi_{l}\). For \(k=1,\ldots,l\), take \(\lambda_{k}\in\Lambda\) with \(\phi_{k}-\phi(\lambda_{k})\preccurlyeq 1\) and put \(g_{k}:=\sum_{1\leqslant j\leqslant m,\ \phi_{j}=\phi_{k}}f_{j}\) and \(h_{k}:={\rm e}^{(\phi_{k}-\phi(\lambda_{k}))i}\,g_{k}\in K^{n}\). Then \(v(g_{k})=v(h_{k})\), \({\rm e}^{\phi_{k}i}\,g_{k}={\rm e}(\lambda_{k})h_{k}\), and
\[f\ =\ {\rm e}^{\phi_{1}i}\,g_{1}+\cdots+{\rm e}^{\phi_{l}i}\,g_{l}\ =\ {\rm e}( \lambda_{1})h_{1}+\cdots+{\rm e}(\lambda_{l})h_{l}\]
with distinct \(\lambda_{1},\ldots,\lambda_{l}\). Hence
\[v_{\rm g}(f)\ =\ \min\{v(h_{1}),\ldots,v(h_{l})\}\ =\ \min\{v(g_{1}),\ldots,v(g_{l})\}.\]
Now use \(v(g_{k})=\min\{v(f_{j}):\ 1\leqslant j\leqslant m,\ \phi_{j}=\phi_{k}\}\) for \(k=1,\ldots,l\).
For what we say below about \(\Delta\) and \(\Gamma^{\flat}\), see [ADH, 9.1.11]. Set
\[\Delta\ :=\ \bigl{\{}\gamma\in\Gamma:\psi(\gamma)\geqslant 0\bigr{\}}\ =\ \bigl{\{}\gamma\in\Gamma:\gamma=O\bigl{(}v({\rm e}^{x})\bigr{)}\bigr{\}},\]
the smallest convex subgroup of \(\Gamma=v(K^{\times})\) containing \(v({\rm e}^{x})\in\Gamma^{<}\). Then \(\Delta\) has the convex subgroup
\[\Gamma^{\flat}\ =\ \bigl{\{}\gamma\in\Gamma:\psi(\gamma)>0\bigr{\}}\ =\ \bigl{\{}\gamma\in\Gamma:\gamma=o\bigl{(}v({\rm e}^{x})\bigr{)}\bigr{\}},\]
and we have an ordered group isomorphism \(r\mapsto v({\rm e}^{-rx})+\Gamma^{\flat}\colon\mathbb{R}\to\Delta/\Gamma^{\flat}\). Note also that for \(f\in K\) we have: \(v(f)\in\Gamma^{\flat}\Leftrightarrow\lambda(f)=0\).
**Lemma 5.10.45**.: _Let \(f\in{\rm U}\). Then_
\[\lambda(f)\ =\ +\infty\ \Leftrightarrow\ v_{\rm g}(f)\ >\ \Delta,\qquad\lambda(f)\ =\ -\infty\ \Leftrightarrow\ v_{\rm g}(f)\ <\ \Delta,\]
_and if \(\lambda(f)\in\mathbb{R}\), then \(v_{\rm g}(f)\in\Delta\) and \(v_{\rm g}(f)\equiv v({\rm e}^{-\lambda(f)x})\bmod\Gamma^{\flat}\)._
Proof.: We assume \(f\neq 0\), and use Lemma 5.10.40 to replace \(f\) by \(\mathfrak{m}\in H^{\times}\) with \(f\asymp_{\rm g}\mathfrak{m}\) so as to arrange \(f\in H^{\times}\). The displayed claims then follow. Suppose \(\lambda(f)\in\mathbb{R}\), and let \(a\in\mathbb{R}^{>}\). Then \(f\,{\rm e}^{(\lambda(f)-\frac{1}{2}a)x}\preccurlyeq 1\), so \(f\,{\rm e}^{\lambda(f)x}\preccurlyeq{\rm e}^{\frac{1}{2}ax}\prec{\rm e}^{ax}\). Also \(f\,{\rm e}^{\lambda(f)x}\not\preccurlyeq{\rm e}^{-ax}\), thus \({\rm e}^{-ax}\prec f\,{\rm e}^{\lambda(f)x}\prec{\rm e}^{ax}\). This holds for all \(a\in\mathbb{R}^{>}\), so \(v(f\,{\rm e}^{\lambda(f)x})\in\Gamma^{\flat}\).
Lemma 5.10.45 yields \(\lambda(fg)=\lambda(f)+\lambda(g)\) for all \(f,g\in{\rm U}\cap\mathcal{C}[i]^{\preceq}\).
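For a concrete illustration (a routine check; here we assume the germs of \(t^{3}\mathrm{e}^{-2t}\) and \(\mathrm{e}^{-3t}\) lie in \(H\)): with \(f=x^{3}\,\mathrm{e}^{-2x}\) and \(g=\mathrm{e}^{-3x}\) we have

\[\lambda(f)\ =\ -\lim_{t\to\infty}\tfrac{1}{t}\log\bigl(t^{3}\mathrm{e}^{-2t}\bigr)\ =\ 2,\qquad\lambda(g)\ =\ 3,\qquad\lambda(fg)\ =\ 5\ =\ \lambda(f)+\lambda(g),\]

in accordance with Lemma 5.10.45: \(v(f\,\mathrm{e}^{2x})=v(x^{3})\in\Gamma^{\flat}\) and \(v(g\,\mathrm{e}^{3x})=0\in\Gamma^{\flat}\).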
**Corollary 5.10.46**.: _Assume \(f,g\in K\), \(g\preccurlyeq f\), and \(\lambda(f)\in\mathbb{R}\). Then \(g^{\prime}+\lambda(f)g\prec f\)._
Proof.: Lemma 5.10.45 gives \(v(f\,\mathrm{e}^{\lambda(f)x})\in\Gamma^{\flat}\), so we can replace \(f\), \(g\) by \(f\,\mathrm{e}^{\lambda(f)x}\), \(g\,\mathrm{e}^{\lambda(f)x}\), to arrange \(f^{\prime}\prec f\) and \(\lambda(f)=0\). Now if \(f\preccurlyeq 1\), then \(g\preccurlyeq f\asymp 1\) and so \(g^{\prime}\prec 1\asymp f\), and if \(f\not\preccurlyeq 1\), then \(g^{\prime}\preccurlyeq f^{\prime}\prec f\) using [ADH, 9.1.3(iii) and 9.1.4(i)].
From Corollary 5.10.46 we easily obtain:
**Corollary 5.10.47**.: _Suppose \(f\in K^{n}\) is such that \(\lambda(f)\in\mathbb{R}\). Then \(f^{\prime}+\lambda(f)f\prec f\)._
Note that \(K\cap\mathcal{C}[i]^{\preceq}=\mathcal{O}_{\Delta}\) is by Lemma 5.10.45 the valuation ring of the coarsening \(v_{\Delta}\) of the valuation of \(K\) by \(\Delta\), with maximal ideal \(K\cap\mathcal{C}[i]^{\prec}=o_{\Delta}\), cf. [ADH, 3.4]. By Corollary 5.10.43, the \(\mathbb{C}\)-subalgebra \(\mathrm{U}\cap\mathcal{C}[i]^{\preceq}\) of \(\mathrm{U}\) satisfies
\[\mathrm{U}\cap\mathcal{C}[i]^{\preceq}=\bigoplus_{h\in\Lambda_{H}}\mathcal{O} _{\Delta}\,\mathrm{e}(hi)\quad\text{(internal direct sum of $\mathcal{O}_{\Delta}$-submodules of $\mathrm{U}\cap\mathcal{C}[i]^{\preceq}$)}.\]
We put
\[\mathrm{U}^{\preceq}\ :=\ \bigoplus_{h\in\Lambda_{H}\cap\mathcal{O}_{H}} \mathcal{O}_{\Delta}\,\mathrm{e}(hi),\]
a \(\mathbb{C}\)-subalgebra of \(\mathrm{U}\cap\mathcal{C}[i]^{\preceq}\). Then
\[(\mathrm{U}^{\preceq})^{\times} =\big{\{}g\,\mathrm{e}(hi):g\in K^{\times},\,h\in\Lambda_{H},\ g^{ \dagger},h\preccurlyeq 1\big{\}}\] \[=\big{\{}g\,\mathrm{e}(hi):g\in K^{\times},\,h\in\Lambda_{H},\ \lambda(g)\in\mathbb{R},\,h\preccurlyeq 1 \big{\}}.\]
In the next lemma \(f=g\,\mathrm{e}^{\phi i}\in\mathrm{U}^{\times}\) where \(g\in K^{\times}\), \(\phi\in H\). Then \(|f|=|g|\in H\) and so \(-\lambda(f)=-\lambda(g)=\lim_{t\to\infty}\frac{1}{t}\log|g(t)|\).
**Lemma 5.10.48**.: \(f\in(\mathrm{U}^{\preceq})^{\times}\ \Leftrightarrow\ g^{\dagger},\phi^{ \prime}\preccurlyeq 1\ \Leftrightarrow\ f^{\dagger}\preccurlyeq 1\)_. If \(f^{\dagger}\preccurlyeq 1\), then_
\[-\lambda(f)\ =\ \lim_{t\to\infty}\mathrm{Re}\,f^{\dagger}(t)\ =\ \lim_{t\to\infty}\mathrm{Re}\,g^{\dagger}(t),\qquad\lim_{t\to\infty} \mathrm{Im}\,f^{\dagger}(t)\ =\ \lim_{t\to\infty}\phi^{\prime}(t)\]
_and these limits are in \(\mathbb{R}\)._
Proof.: Take \(h\in\Lambda_{H}\) with \(\phi-\phi(hi)\preccurlyeq 1\) and put \(g_{1}:=g\,\mathrm{e}^{(\phi-\phi(hi))i}\in K^{\times}\), so \(f=g_{1}\,\mathrm{e}^{\phi(hi)i}=g_{1}\,\mathrm{e}(h\mathrm{i})\). Now \(g^{\dagger}-|g|^{\dagger}\prec 1\), since \(g\asymp|g|\). Also \(\mathrm{e}(hi)=\mathrm{e}^{\phi(hi)i}\) gives \(h=\phi(hi)^{\prime}\) by differentiation. Hence \(g_{1}^{\dagger}-g^{\dagger}=(\phi^{\prime}-h)i=(\phi-\phi(hi))^{\prime}i\prec 1\), so
\[|g|^{\dagger}\preccurlyeq 1\ \Leftrightarrow\ g^{\dagger}\preccurlyeq 1\ \Leftrightarrow\ g_{1}^{ \dagger}\preccurlyeq 1,\qquad\phi^{\prime}\preccurlyeq 1\ \Leftrightarrow\ h\preccurlyeq 1.\]
This yields the equivalences of the Lemma, using for \(f^{\dagger}\preccurlyeq 1\Rightarrow g^{\dagger},\phi^{\prime}\preccurlyeq 1\) that \(f^{\dagger}=g^{\dagger}+\phi^{\prime}i\), and \(\mathrm{Im}(g^{\dagger})\in\mathrm{I}(H)i\subseteq K^{\prec 1}\), the latter a consequence of Lemma 1.2.16 and the remarks preceding it. Now assume \(f^{\dagger}\preccurlyeq 1\). Then \(g^{\dagger},\phi^{\prime}\preccurlyeq 1\), so \(vg\in\Delta\), hence by Lemma 5.10.45, \(\lambda(f)=\lambda(g)\in\mathbb{R}\) and \(v\big{(}g\,\mathrm{e}^{\lambda(g)x}\,\big{)}\in\Gamma^{\flat}\), that is, \(g^{\dagger}+\lambda(g)\prec 1\), so \(\mathrm{Re}(g^{\dagger})+\lambda(f)\prec 1\), and thus \(-\lambda(f)=\ \lim_{t\to\infty}\mathrm{Re}\,g^{\dagger}(t)\). Now use \(\mathrm{Re}\,f^{\dagger}=\mathrm{Re}\,g^{\dagger}\) and \(\mathrm{Im}\,f^{\dagger}=\mathrm{Im}\,g^{\dagger}+\phi^{\prime}\) and \(\mathrm{Im}\,g^{\dagger}\prec 1\).
**Lemma 5.10.49**.: _Let \(f\in\mathrm{U}^{\preceq}\). Then \(f^{\prime}\in\mathrm{U}^{\preceq}\) and \(\lambda(f)\leqslant\lambda(f^{\prime})\). Moreover, if \(\lambda(f)\in\mathbb{R}\), then \(f^{\prime}\preccurlyeq_{\mathrm{g}}f\), and if \(\lambda(f)\in\mathbb{R}^{\times}\), then \(f^{\prime}\asymp_{\mathrm{g}}f\)._
Proof.: Suppose first that \(f=g\,\mathrm{e}(hi)\) where \(g\in\mathcal{O}_{\Delta}^{\neq}\), \(h\in\Lambda_{H}\cap\mathcal{O}_{H}\), so \(\lambda(f)=\lambda(g)\) and \(f^{\prime}=(g^{\prime}+gh\mathrm{i})\,\mathrm{e}(hi)\). Then by [ADH, 9.2.24, 9.2.26] we have \(g^{\prime}\in\mathcal{O}_{\Delta}\), with \(g^{\prime}\in o_{\Delta}\) if \(g\in o_{\Delta}\). So \(f^{\prime}\in\mathcal{O}_{\Delta}\,\mathrm{e}(hi)\), with \(f^{\prime}\in o_{\Delta}\,\mathrm{e}(hi)\) if \(g\in o_{\Delta}\). This yields \(f^{\prime}\in\mathrm{U}^{\preceq}\) as well as \(\lambda(f^{\prime})=+\infty\) if \(\lambda(f)=+\infty\), by Lemma 5.10.45. Now suppose \(\lambda(f)\in\mathbb{R}\). Then \(v(g\,\mathrm{e}^{\lambda(g)x})\in\Gamma^{\flat}\) by Lemma 5.10.45, hence \(g^{\dagger}+\lambda(g)\prec 1\), so \(g^{\dagger}\preccurlyeq 1\), and
thus \(f^{\prime}=g(g^{\dagger}+h\mathrm{i})\,\mathrm{e}(hi)\preccurlyeq_{\mathrm{g}}f\), and this yields \(\lambda(f^{\prime})\geqslant\lambda(f)\) by Lemma 5.10.40. If \(\lambda(f)\neq 0\), then \(g^{\dagger}\sim-\lambda(g)\) and so \(g^{\dagger}+h\mathrm{i}\sim-\lambda(g)+h\mathrm{i}\asymp 1\), and thus \(f^{\prime}\asymp_{\mathrm{g}}f\).
The case \(f=0\) is trivial, so we can assume next that \(f=f_{1}+\cdots+f_{m}\), \(f_{j}=g_{j}\,\mathrm{e}(h_{j}\mathrm{i})\), \(g_{j}\in\mathcal{O}_{\Delta}^{\neq}\), \(h_{j}\in\Lambda_{H}\cap\mathcal{O}_{H}\) for \(j=1,\ldots,m\), \(m\geqslant 1\), with distinct \(h_{1},\ldots,h_{m}\). We arrange \(f_{1}\succcurlyeq_{\mathrm{g}}\cdots\succcurlyeq_{\mathrm{g}}f_{m}\), so \(f\asymp_{\mathrm{g}}f_{1}\) and \(\lambda(f_{1})\leqslant\cdots\leqslant\lambda(f_{m})\), and \(\lambda(f)=\min\bigl\{\lambda(f_{1}),\ldots,\lambda(f_{m})\bigr\}=\lambda(f_{1})\) by Corollary 5.10.43. The special case gives \(f_{j}^{\prime}\in\mathrm{U}^{\preceq}\) and \(\lambda(f_{j})\leqslant\lambda(f_{j}^{\prime})\) for \(j=1,\ldots,m\), so \(f^{\prime}\in\mathrm{U}^{\preceq}\) and \(\lambda(f)\leqslant\lambda(f^{\prime})\). Suppose \(\lambda(f)\in\mathbb{R}\). Then \(v_{\mathrm{g}}(f_{1})\in\Delta\) by Lemma 5.10.45. If \(\lambda(f_{j})=+\infty\), then \(\lambda(f_{j}^{\prime})=+\infty\) by the special case, so \(v_{\mathrm{g}}(f_{j}^{\prime})>\Delta\) and thus \(f_{j}^{\prime}\prec_{\mathrm{g}}f_{1}\asymp_{\mathrm{g}}f\). If \(\lambda(f_{j})\in\mathbb{R}\), then \(f_{j}^{\prime}\preccurlyeq_{\mathrm{g}}f_{j}\preccurlyeq_{\mathrm{g}}f\), again by the special case. This yields \(f^{\prime}\preccurlyeq_{\mathrm{g}}f\). Likewise, if \(\lambda(f)\in\mathbb{R}^{\times}\), then \(f_{1}^{\prime}\asymp_{\mathrm{g}}f_{1}\asymp_{\mathrm{g}}f\) and thus \(f^{\prime}\asymp_{\mathrm{g}}f\).
**Corollary 5.10.50**.: _If \(f\in\mathrm{U}^{\preceq}\) and \(\lambda(f)\in\mathbb{R}^{\times}\), then for all \(n\),_
\[f^{(n)}\ \asymp_{\mathrm{g}}\ f,\qquad\lambda(f^{(n)})\ =\ \lambda(f).\]
For use in the next lemma and then in Section 7.4 we also define for \(f\in\mathcal{C}^{1}[i]^{\times}\),
\[\mu(f)\ :=\ \limsup_{t\to\infty}\mathrm{Im}\left(f^{\prime}(t)/f(t)\right) \in\mathbb{R}_{\pm\infty}.\]
If \(f\in(\mathrm{U}^{\preceq})^{\times}\), then \(\lambda(f),\mu(f)\in\mathbb{R}\) by Lemma 5.10.48, and \(f^{\dagger}=\mathrm{Re}(f^{\dagger})+\mathrm{Im}(f^{\dagger})\mathrm{i}\) then yields \(f^{\dagger}-\bigl{(}-\lambda(f)+\mu(f)\mathrm{i}\bigr{)}\prec 1\).
In the next lemma, suppose \(f_{1},\ldots,f_{n}\in(\mathrm{U}^{\preceq})^{\times}\) are such that
\[c_{1}\ :=\ -\lambda(f_{1})+\mu(f_{1})\mathrm{i},\ \ldots,\ c_{n}\ :=\ -\lambda(f_{n})+ \mu(f_{n})\mathrm{i}\in\mathbb{C}\]
are distinct. Also, let \(c\in\mathbb{C}\) and suppose \(f:=f_{1}+\cdots+f_{n}\in\mathcal{C}[i]^{\times}\) and \(c-f^{\dagger}\prec 1\).
**Lemma 5.10.51**.: _Let \(i\in\{1,\ldots,n\}\) be such that \(f_{i}\succcurlyeq f_{k}\) for all \(k\in\{1,\ldots,n\}\). Then \(c_{i}=c\) and \(\mathrm{Re}\,c_{k}\leqslant\mathrm{Re}\,c\) for all \(k\)._
Proof.: We let \(j\), \(k\), \(l\) range over \(\{1,\ldots,n\}\). Take \(g_{k}\in\mathcal{O}_{\Delta}^{\neq}\) and \(h_{k}\in\Lambda_{H}\cap\mathcal{O}_{H}\) such that \(f_{k}=g_{k}\,\mathrm{e}(h_{k}\mathrm{i})\). Then \(f_{k}^{\dagger}=g_{k}^{\dagger}+h_{k}\mathrm{i}\in\mathcal{O}\), \(c_{k}-f_{k}^{\dagger}\prec 1\), and
\[f^{\prime}\ =\ f_{1}^{\dagger}g_{1}\,\mathrm{e}(h_{1}\mathrm{i})+\cdots+f_{n}^{ \dagger}g_{n}\,\mathrm{e}(h_{n}\mathrm{i}).\]
Suppose \(h_{j}=h_{k}\) and \(g_{j}\asymp g_{k}\); then \(f_{j}^{\dagger}-f_{k}^{\dagger}=(g_{j}/g_{k})^{\dagger}\in\mathrm{I}(K)\subseteq\smallo\) and so \(c_{j}-c_{k}\prec 1\), hence \(j=k\). We arrange \(l\geqslant i\) so that \(h_{1},\ldots,h_{l}\) are distinct and the \(h_{k}\) with \(k>l\) are in \(\{h_{1},\ldots,h_{l}\}\). For \(j\leqslant l\), set \(g_{j}^{*}:=\sum_{h_{k}=h_{j}}g_{k}\) and \(g_{j}^{\partial}:=\sum_{h_{k}=h_{j}}f_{k}^{\dagger}g_{k}\), so
\[f\ =\ \sum_{j\leqslant l}g_{j}^{*}\,\mathrm{e}(h_{j}\mathrm{i}),\qquad f^{ \prime}\ =\ \sum_{j\leqslant l}g_{j}^{\partial}\,\mathrm{e}(h_{j}\mathrm{i}).\]
For \(j\leqslant l\) we have a unique \(k=k(j)\) with \(g_{j}^{*}\sim g_{k}\). Now \(g_{i}\asymp f_{i}\succcurlyeq f_{k}\asymp g_{k}\) for all \(k\), so \(i=k(i)\), hence \(0\neq g_{i}^{*}\sim g_{i}\succcurlyeq g_{k(j)}\asymp g_{j}^{*}\) for \(j\leqslant l\). In particular, \(f\asymp_{\mathrm{g}}g_{i}\).
Suppose \(c\neq 0\). Then \(c-f^{\dagger}\prec 1\) gives \(cf\sim f^{\prime}\). Hence by Lemma 5.10.6 we have \(cg_{i}^{*}\sim g_{i}^{\partial}\) and \(\sum_{h_{k}=h_{j}}(c-f_{k}^{\dagger})g_{k}=cg_{j}^{*}-g_{j}^{\partial}\prec cg_{i}^{*}\) for \(j\neq i\), \(j\leqslant l\). Then \(cg_{i}\sim g_{i}^{\partial}=\sum_{h_{k}=h_{i}}f_{k}^{\dagger}g_{k}\). But if \(k\neq i\) and \(h_{k}=h_{i}\), then \(f_{k}^{\dagger}g_{k}\preccurlyeq g_{k}\prec g_{i}\), hence \(cg_{i}\sim f_{i}^{\dagger}g_{i}\), so \(c\sim f_{i}^{\dagger}\). This proves \(c=c_{i}\). Also, if \(k\neq i\) and \(h_{k}=h_{i}\), then \(g_{k}\prec g_{i}\), so \(\mathrm{Re}(f_{k}^{\dagger})=\mathrm{Re}(g_{k}^{\dagger})<\mathrm{Re}(g_{i}^{\dagger})=\mathrm{Re}(f_{i}^{\dagger})\) by Corollary 1.2.6, hence \(\mathrm{Re}\,c_{k}\leqslant\mathrm{Re}\,c_{i}=\mathrm{Re}\,c\). If \(j\leqslant l\), \(j\neq i\) and \(h_{k}=h_{j}\), then \(c\neq c_{k}\) gives \(g_{k}\asymp(c-f_{k}^{\dagger})g_{k}\preccurlyeq cg_{j}^{*}-g_{j}^{\partial}\prec cg_{i}^{*}\asymp g_{i}\), and as before this yields \(\mathrm{Re}\,c_{k}\leqslant\mathrm{Re}\,c\). Hence \(\mathrm{Re}\,c_{k}\leqslant\mathrm{Re}\,c\) for all \(k\).
Next suppose \(c=0\). Then \(f^{\prime}\prec f\) and so \(f^{\prime}\prec_{\rm g}f\asymp_{\rm g}g_{i}\) by Corollary 5.10.11, hence \(g_{j}^{\partial}\prec g_{i}\) for \(j\leqslant l\), and this yields \(f_{k}^{\dagger}g_{k}\prec g_{i}\) for all \(k\). Taking \(k=i\) now gives \(f_{i}^{\dagger}\prec 1\) and so \(c_{i}=0\), and if \(k\neq i\), then \(c_{k}\neq 0\) and thus \(f_{k}^{\dagger}\asymp 1\), so \(g_{k}\asymp f_{k}^{\dagger}g_{k}\prec g_{i}\), and as in the case \(c\neq 0\) this gives \(\operatorname{Re}c_{k}\leqslant\operatorname{Re}c_{i}=0\). \(\square\)
## Part 6. Filling Holes in Hardy Fields
This part contains in Section 6.7 the proof of our main theorem. Important tools for this are the normalization and approximation theorems for holes and slots established in Parts 3 and 4. On the analytic side we need a suitable fixed point theorem proved in Section 6.2: Theorem 6.2.3. The definition of the operator used there is based on the right-inverses for linear differential operators over Hardy fields constructed in Section 6.1. Section 6.3 complements Section 6.2 by showing how to recover suitable smoothness for the fixed points obtained this way.
Let \((P,\mathfrak{m},\widehat{f})\) be a hole in a Liouville closed Hardy field \(H\supsetneq\mathbb{R}\) and recall that \(\widehat{f}\) lies in an immediate \(H\)-field extension of \(H\) and satisfies \(P(\widehat{f})=0\), \(\widehat{f}\prec\mathfrak{m}\). (This extension is not assumed to be a Hardy field.) Under suitable hypotheses on \(H\) and \((P,\mathfrak{m},\widehat{f})\), our fixed point theorem (or rather its "real" variant, Corollary 6.2.8) produces a germ \(f\) of a one-variable real-valued function such that \(P(f)=0\), \(f\prec\mathfrak{m}\); see Section 6.4. The challenge in the proof of our main result is to show that such an \(f\) generates a Hardy field extension \(H\langle f\rangle\) of \(H\) isomorphic to \(H\langle\widehat{f}\rangle\) over \(H\) (as ordered differential fields). In particular, we need to demonstrate that this zero \(f\) of \(P\) has the same asymptotic properties (relative to \(H\)) as its formal counterpart \(\widehat{f}\), and the notion of _asymptotic similarity_ established in Section 6.6 provides a suitable general framework for doing so. In order to show that \(f\) is indeed asymptotically similar to \(\widehat{f}\) over \(H\), we are naturally led to the following task: _given another germ \(g\) satisfying \(P(g)=0\), \(g\prec\mathfrak{m}\), bound the growth of \(h,h^{\prime},\ldots,h^{(r)}\) where \(h:=(f-g)/\mathfrak{m}\) and \(r:=\operatorname{order}P\)._ Assuming (among other things) that \((P,\mathfrak{m},\widehat{f})\) is repulsive-normal in the sense of Part 4, this is accomplished in Section 6.5, after revisiting parts of the material from Sections 6.1, 6.2, and 6.4 for certain weighted function spaces. (See Proposition 6.5.14.)
### 6.1. Inverting Linear Differential Operators over Hardy Fields
Given a Hardy field \(H\) and \(A\in H[\partial]\) we shall construe \(A\) as a \(\mathbb{C}\)-linear operator on various spaces of functions. We wish to construct right-inverses to such operators. A key assumption here is that \(A\) splits over \(H[i]\). This reduces the construction of such inverses mainly to the case of order \(1\), and this case is handled in the first two subsections using suitable twisted integration operators. In the third subsection we put things together and also show how to "preserve reality" by taking real parts. In the fourth subsection we introduce damping factors. Throughout we pay attention to the continuity of various operators with respect to various norms, for use in Section 6.2.
We let \(a\) range over \(\mathbb{R}\) and \(r\) over \(\mathbb{N}\cup\{\infty,\omega\}\). If \(r\in\mathbb{N}\), then \(r-1\) and \(r+1\) have the usual meaning, while for \(r\in\{\infty,\omega\}\) we set \(r-1=r+1:=r\). (This convention is just to avoid case distinctions.) We have the usual absolute value on \(\mathbb{C}\) given by \(|a+bi|=\sqrt{a^{2}+b^{2}}\in\mathbb{R}^{\geqslant}\) for \(a,b\in\mathbb{R}\), so for \(f\in\mathcal{C}_{a}[i]\) we have \(|f|\in\mathcal{C}_{a}\).
### Integration and some useful norms
For \(f\in\mathcal{C}_{a}[i]\) we define \(\partial_{a}^{-1}f\in\mathcal{C}_{a}^{1}[i]\) by
\[\partial_{a}^{-1}f(t)\ :=\ \int_{a}^{t}f(s)\,ds\ :=\ \int_{a}^{t}\operatorname{Re}f(s) \,ds+i\int_{a}^{t}\operatorname{Im}f(s)\,ds,\]
so \(\partial_{a}^{-1}f\) is the unique \(g\in\mathcal{C}_{a}^{1}[i]\) such that \(g^{\prime}=f\) and \(g(a)=0\). The integration operator \(\partial_{a}^{-1}\colon\mathcal{C}_{a}[i]\to\mathcal{C}_{a}^{1}[i]\) is \(\mathbb{C}\)-linear and maps \(\mathcal{C}_{a}^{r}[i]\) into \(\mathcal{C}_{a}^{r+1}[i]\). For \(f\in\mathcal{C}_{a}[i]\)
we have
\[\big{|}\partial_{a}^{-1}f(t)\big{|}\leqslant\big{(}\partial_{a}^{-1}|f|\big{)}(t) \qquad\text{ for all }t\geqslant a.\]
Let \(f\in\mathcal{C}_{a}[\mathrm{i}]\). Call \(f\)**integrable at \(\infty\)** if \(\lim_{t\to\infty}\int_{a}^{t}f(s)\,ds\) exists in \(\mathbb{C}\). In that case we denote this limit by \(\int_{a}^{\infty}f(s)\,ds\) and put
\[\int_{\infty}^{a}f(s)\,ds\ :=\ -\int_{a}^{\infty}f(s)\,ds,\]
and define \(\partial_{\infty}^{-1}f\in\mathcal{C}_{a}^{1}[\mathrm{i}]\) by
\[\partial_{\infty}^{-1}f(t)\ :=\ \int_{\infty}^{t}f(s)\,ds\ =\ \int_{\infty}^{a}f(s)\, ds+\int_{a}^{t}f(s)\,ds\ =\ \int_{\infty}^{a}f(s)\,ds+\partial_{a}^{-1}f(t),\]
so \(\partial_{\infty}^{-1}f\) is the unique \(g\in\mathcal{C}_{a}^{1}[\mathrm{i}]\) such that \(g^{\prime}=f\) and \(\lim_{t\to\infty}g(t)=0\). Note that
\[\mathcal{C}_{a}[\mathrm{i}]^{\mathrm{int}}\ :=\ \big{\{}f\in\mathcal{C}_{a}[ \mathrm{i}]:\ f\text{ is integrable at }\infty\big{\}} \tag{6.1.1}\]
is a \(\mathbb{C}\)-linear subspace of \(\mathcal{C}_{a}[\mathrm{i}]\) and that \(\partial_{\infty}^{-1}\) defines a \(\mathbb{C}\)-linear operator from this subspace into \(\mathcal{C}_{a}^{1}[\mathrm{i}]\) which maps \(\mathcal{C}_{a}^{r}[\mathrm{i}]\cap\mathcal{C}_{a}[\mathrm{i}]^{\mathrm{int}}\) into \(\mathcal{C}_{a}^{r+1}[\mathrm{i}]\). If \(f\in\mathcal{C}_{a}[\mathrm{i}]\) and \(g\in\mathcal{C}_{a}^{\mathrm{int}}:=\mathcal{C}_{a}[\mathrm{i}]^{\mathrm{int}} \cap\mathcal{C}_{a}\) with \(|f|\leqslant g\) as germs in \(\mathcal{C}\), then \(f\in\mathcal{C}_{a}[\mathrm{i}]^{\mathrm{int}}\); in particular, if \(f\in\mathcal{C}_{a}[\mathrm{i}]\) and \(|f|\in\mathcal{C}_{a}^{\mathrm{int}}\), then \(f\in\mathcal{C}_{a}[\mathrm{i}]^{\mathrm{int}}\). Moreover:
**Lemma 6.1.1**.: _Let \(f\in\mathcal{C}_{a}[\mathrm{i}]\) and \(g\in\mathcal{C}_{a}^{\mathrm{int}}\) be such that \(|f(t)|\leqslant g(t)\) for all \(t\geqslant a\). Then \(|\partial_{\infty}^{-1}f(t)|\leqslant|\partial_{\infty}^{-1}g(t)|\) for all \(t\geqslant a\)._
Proof.: Let \(t\geqslant a\). We have \(g\geqslant 0\) on \([a,\infty)\), hence \(\partial_{\infty}^{-1}g(t)\leqslant 0\). Also \(\big{|}\int_{t}^{\infty}f(s)\,ds\big{|}\leqslant\int_{t}^{\infty}\lvert f(s) \rvert\,ds\leqslant\int_{t}^{\infty}g(s)\,ds\). Thus
\[|\partial_{\infty}^{-1}f(t)|\ =\ \left|\int_{t}^{\infty}f(s)\,ds\right|\ \leqslant\ \int_{t}^{\infty}g(s)\,ds\ =\ -\partial_{\infty}^{-1}g(t)\ =\ |\partial_{\infty}^{-1}g(t)|\]
as claimed.
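To keep a concrete instance in mind: for \(f(t)=\mathrm{e}^{-t}\) (with \(a=0\), say) we have \(\partial_{\infty}^{-1}f(t)=\int_{\infty}^{t}\mathrm{e}^{-s}\,ds=-\mathrm{e}^{-t}\), the unique antiderivative of \(f\) tending to \(0\) at \(+\infty\), whereas \(\partial_{a}^{-1}f(t)=\mathrm{e}^{-a}-\mathrm{e}^{-t}\).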
For \(f\in\mathcal{C}_{a}[\mathrm{i}]\) we set
\[\|f\|_{a}\ :=\ \sup_{t\geqslant a}|f(t)|\ \in\ [0,\infty],\]
so (with b for "bounded"):
\[\mathcal{C}_{a}[\mathrm{i}]^{\mathrm{b}}\ :=\ \big{\{}f\in\mathcal{C}_{a}[ \mathrm{i}]:\ \|f\|_{a}<\infty\big{\}}\]
is a \(\mathbb{C}\)-linear subspace of \(\mathcal{C}_{a}[\mathrm{i}]\), and \(f\mapsto\|f\|_{a}\) is a norm on \(\mathcal{C}_{a}[\mathrm{i}]^{\mathrm{b}}\) making it a Banach space over \(\mathbb{C}\). It is also convenient to define for \(t\geqslant a\) the seminorm
\[\|f\|_{[a,t]}\ :=\ \max_{a\leqslant s\leqslant t}|f(s)|\]
on \(\mathcal{C}_{a}[\mathrm{i}]\). More generally, let \(r\in\mathbb{N}\). Then for \(f\in\mathcal{C}_{a}^{r}[\mathrm{i}]\) we set
\[\|f\|_{a;r}\ :=\ \max\big{\{}\|f\|_{a},\ldots,\|f^{(r)}\|_{a}\big{\}}\ \in\ [0,\infty],\]
so
\[\mathcal{C}_{a}^{r}[\mathrm{i}]^{\mathrm{b}}:=\big{\{}f\in\mathcal{C}_{a}^{r} [\mathrm{i}]:\ \|f\|_{a;r}<\infty\big{\}}\]
is a \(\mathbb{C}\)-linear subspace of \(\mathcal{C}_{a}^{r}[\mathrm{i}]\), and \(f\mapsto\|f\|_{a;r}\) makes \(\mathcal{C}_{a}^{r}[\mathrm{i}]^{\mathrm{b}}\) a normed vector space over \(\mathbb{C}\). Note that by Corollary 5.7.7,
\[\mathcal{C}_{a}^{r}[\mathrm{i}]^{\mathrm{b}}\ =\ \big{\{}f\in\mathcal{C}_{a}^{r}[ \mathrm{i}]:\ \|f\|_{a}<\infty\text{ and }\|f^{(r)}\|_{a}<\infty\big{\}},\]
although we do not use this later. Note that for \(f,g\in\mathcal{C}_{a}^{r}[\mathrm{i}]\) we have
\[\|fg\|_{a;r}\ \leqslant\ 2^{r}\|f\|_{a;r}\|g\|_{a;r},\]
so \(\mathcal{C}^{r}_{a}[{\rm i}]^{\rm b}\) is a subalgebra of the \(\mathbb{C}\)-algebra \(\mathcal{C}^{r}_{a}[{\rm i}]\). If \(f\in\mathcal{C}^{r+1}_{a}[{\rm i}]\), then \(f^{\prime}\in\mathcal{C}^{r}_{a}[{\rm i}]\) with \(\|f^{\prime}\|_{a;r}\leqslant\|f\|_{a;r+1}\).
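(The displayed product bound comes from the Leibniz rule: for \(k\leqslant r\) we have \((fg)^{(k)}=\sum_{j=0}^{k}\binom{k}{j}f^{(j)}g^{(k-j)}\), so \(\|(fg)^{(k)}\|_{a}\leqslant\sum_{j=0}^{k}\binom{k}{j}\|f\|_{a;r}\|g\|_{a;r}=2^{k}\|f\|_{a;r}\|g\|_{a;r}\leqslant 2^{r}\|f\|_{a;r}\|g\|_{a;r}\).)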
With \(\boldsymbol{i}=(i_{0},\ldots,i_{r})\) ranging over \(\mathbb{N}^{1+r}\), let \(P=\sum_{\boldsymbol{i}}P_{\boldsymbol{i}}Y^{\boldsymbol{i}}\) (all \(P_{\boldsymbol{i}}\in\mathcal{C}_{a}[{\rm i}]\)) be a polynomial in \(\mathcal{C}_{a}[{\rm i}]\big{[}Y,Y^{\prime},\ldots,Y^{(r)}\big{]}\). For \(f\in\mathcal{C}^{r}_{a}[{\rm i}]\) we set
\[P(f)\ :=\ \sum_{\boldsymbol{i}}P_{\boldsymbol{i}}f^{\boldsymbol{i}}\in \mathcal{C}_{a}[{\rm i}]\qquad\text{where }f^{\boldsymbol{i}}:=f^{i_{0}}(f^{\prime})^{i_{1}}\cdots(f^{(r)})^{i_{r}} \in\mathcal{C}_{a}[{\rm i}].\]
We also let
\[\|P\|_{a}\ :=\ \max_{\boldsymbol{i}}\|P_{\boldsymbol{i}}\|_{a}\in[0,\infty].\]
Then \(\|P\|_{a}<\infty\) iff \(P\in\mathcal{C}_{a}[{\rm i}]^{\rm b}\big{[}Y,\ldots,Y^{(r)}\big{]}\), and \(\|\cdot\|_{a}\) is a norm on the \(\mathbb{C}\)-linear space \(\mathcal{C}_{a}[{\rm i}]^{\rm b}\big{[}Y,\ldots,Y^{(r)}\big{]}\). In the following assume \(\|P\|_{a}<\infty\). Then for \(j=0,\ldots,r\) such that \(\partial P/\partial Y^{(j)}\neq 0\) we have
\[\|\partial P/\partial Y^{(j)}\|_{a}\ \leqslant\ (\deg_{Y^{(j)}}P)\cdot\|P\|_{a}.\]
Moreover:
**Lemma 6.1.2**.: _If \(P\) is homogeneous of degree \(d\in\mathbb{N}\) and \(f\in\mathcal{C}^{r}_{a}[{\rm i}]^{\rm b}\), then_
\[\|P(f)\|_{a}\ \leqslant\ \binom{d+r}{r}\cdot\|P\|_{a}\cdot\|f\|_{a;r}^{d}.\]
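(To see this, note that a homogeneous \(P\) of degree \(d\) in the \(1+r\) indeterminates \(Y,\ldots,Y^{(r)}\) has at most \(\binom{d+r}{r}\) monomials, while \(\|P_{\boldsymbol{i}}f^{\boldsymbol{i}}\|_{a}\leqslant\|P\|_{a}\,\|f\|_{a;r}^{d}\) for each \(\boldsymbol{i}\) with \(|\boldsymbol{i}|=d\); summing these counts over the degrees \(d\leqslant|\boldsymbol{i}|\leqslant e\) likewise yields the constant \(D\) in the next corollary.)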
**Corollary 6.1.3**.: _Let \(d\leqslant e\) in \(\mathbb{N}\) be such that \(P_{\boldsymbol{i}}=0\) whenever \(|\boldsymbol{i}|<d\) or \(|\boldsymbol{i}|>e\). Then for \(f\in\mathcal{C}^{r}_{a}[{\rm i}]^{\rm b}\) we have_
\[\|P(f)\|_{a}\ \leqslant\ D\cdot\|P\|_{a}\cdot\big{(}\|f\|_{a;r}^{d}+\cdots+\|f \|_{a;r}^{e}\big{)}\]
_where \(D=D(d,e,r):=\binom{e+r+1}{r+1}-\binom{d+r}{r+1}\in\mathbb{N}^{\geqslant 1}\)._
Let \(B\colon V\to\mathcal{C}^{r}_{a}[{\rm i}]^{\rm b}\) be a \(\mathbb{C}\)-linear map from a normed vector space \(V\) over \(\mathbb{C}\) into \(\mathcal{C}^{r}_{a}[{\rm i}]^{\rm b}\). Then we set
\[\|B\|_{a;r}\ :=\ \sup\big{\{}\|B(f)\|_{a;r}:\ f\in V,\ \|f\|\leqslant 1\big{\}} \ \in\ [0,\infty],\]
the **operator norm of \(B\)**. Hence with the convention \(\infty\cdot b:=b\cdot\infty:=\infty\) for \(b\in[0,\infty]\) we have
\[\|B(f)\|_{a;r}\ \leqslant\ \|B\|_{a;r}\cdot\|f\|\qquad\text{for }f\in V.\]
Note that \(B\) is continuous iff \(\|B\|_{a;r}<\infty\). If the map \(D\colon\mathcal{C}^{r}_{a}[{\rm i}]^{\rm b}\to\mathcal{C}^{s}_{a}[{\rm i}]^{\rm b}\)\((s\in\mathbb{N})\) is also \(\mathbb{C}\)-linear, then
\[\|D\circ B\|_{a;s}\ \leqslant\ \|D\|_{a;s}\cdot\|B\|_{a;r}.\]
For \(r=0\) we drop the subscript: \(\|B\|_{a}:=\|B\|_{a;0}\).
**Lemma 6.1.4**.: _Let \(r\in\mathbb{N}^{\geqslant 1}\) and \(\phi\in\mathcal{C}^{r-1}_{a}[{\rm i}]^{\rm b}\). Then the \(\mathbb{C}\)-linear operator_
\[\partial-\phi\ :\ \mathcal{C}^{r}_{a}[{\rm i}]\to\mathcal{C}^{r-1}_{a}[{\rm i}],\quad f\mapsto f^{\prime}-\phi f\]
_maps \(\mathcal{C}^{r}_{a}[{\rm i}]^{\rm b}\) into \(\mathcal{C}^{r-1}_{a}[{\rm i}]^{\rm b}\), and its restriction \(\partial-\phi\colon\mathcal{C}^{r}_{a}[{\rm i}]^{\rm b}\to\mathcal{C}^{r-1}_{a} [{\rm i}]^{\rm b}\) is continuous with operator norm \(\|\partial-\phi\|_{a;r-1}\leqslant 1+2^{r-1}\|\phi\|_{a;r-1}\)._
Let \(r\in\mathbb{N}\), \(a_{0}\in\mathbb{R}\), and let \(a\) range over \([a_{0},\infty)\). The \(\mathbb{C}\)-linear map
\[f\mapsto f|_{[a,+\infty)}\,:\ \mathcal{C}^{r}_{a_{0}}[{\rm i}]\to\mathcal{C}^{r}_{a}[{ \rm i}]\]
satisfies \(\|f|_{[a,+\infty)}\|_{a;r}\leqslant\|f\|_{a_{0};r}\) for \(f\in\mathcal{C}^{r}_{a_{0}}[i]\), so it maps \(\mathcal{C}^{r}_{a_{0}}[i]^{\mathrm{b}}\) into \(\mathcal{C}^{r}_{a}[i]^{\mathrm{b}}\). For \(f\in\mathcal{C}^{0}_{a_{0}}[i]\) also denoting its germ at \(+\infty\) and its restriction \(f|_{[a,+\infty)}\), we have:
\[f\preccurlyeq 1 \quad\Longleftrightarrow\quad\|f\|_{a}<\infty\text{ for some }a\quad \Longleftrightarrow\quad\|f\|_{a}<\infty\text{ for all }a,\] \[f\prec 1 \quad\Longleftrightarrow\quad\|f\|_{a}\to 0\text{ as }a\to\infty.\]
### Twisted integration
For \(f\in\mathcal{C}_{a}[i]\) we have the \(\mathbb{C}\)-linear operator
\[g\mapsto fg\;:\;\mathcal{C}_{a}[i]\to\mathcal{C}_{a}[i],\]
which we also denote by \(f\). We now fix an element \(\phi\in\mathcal{C}_{a}[i]\), and set \(\Phi:=\partial_{a}^{-1}\phi\), so \(\Phi\in\mathcal{C}^{1}_{a}[i]\), \(\Phi(t)=\int_{a}^{t}\phi(s)\,ds\) for \(t\geqslant a\), and \(\Phi^{\prime}=\phi\). Thus \(\mathrm{e}^{\Phi},\mathrm{e}^{-\Phi}\in\mathcal{C}^{1}_{a}[i]\) with \((\mathrm{e}^{\Phi})^{\dagger}=\phi\). Consider the \(\mathbb{C}\)-linear operator
\[B:=\mathrm{e}^{\Phi}\circ\partial_{a}^{-1}\circ\mathrm{e}^{-\Phi}\;:\;\mathcal{ C}_{a}[i]\to\mathcal{C}^{1}_{a}[i],\]
so
\[Bf(t)\;=\;\mathrm{e}^{\Phi(t)}\int_{a}^{t}\mathrm{e}^{-\Phi(s)}\,f(s)\,ds\quad \text{ for }f\in\mathcal{C}_{a}[i].\]
It is easy to check that \(B\) is a right inverse to \(\partial-\phi\colon\mathcal{C}^{1}_{a}[i]\to\mathcal{C}_{a}[i]\) in the sense that \((\partial-\phi)\circ B\) is the identity on \(\mathcal{C}_{a}[i]\). Note that for \(f\in\mathcal{C}_{a}[i]\) we have \(Bf(a)=0\), and thus \((Bf)^{\prime}(a)=f(a)\), using \((Bf)^{\prime}=f+\phi B(f)\). Set \(R:=\mathrm{Re}\,\Phi\) and \(S:=\mathrm{Im}\,\Phi\), so \(R,S\in\mathcal{C}^{1}_{a}\), \(R^{\prime}=\mathrm{Re}\,\phi\), \(S^{\prime}=\mathrm{Im}\,\phi\), and \(R(a)=S(a)=0\). Note also that if \(\phi\in\mathcal{C}^{r}_{a}[i]\), then \(\mathrm{e}^{\Phi}\in\mathcal{C}^{r+1}_{a}[i]\), so \(B\) maps \(\mathcal{C}^{r}_{a}[i]\) into \(\mathcal{C}^{r+1}_{a}[i]\).
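In passing, here is a minimal numerical sketch of \(B\) (no part of the formal development; the choices \(a=1\), \(\phi(t)=-1-1/t\), \(f(t)=\sin t\), and the grid are ours, purely for illustration), checking \((\partial-\phi)\circ B=\mathrm{id}\) on a finite interval:

```python
# Minimal numerical sketch (illustrative only): B = e^Phi . d_a^{-1} . e^{-Phi}
# as a right-inverse of d/dt - phi, for the sample data phi(t) = -1 - 1/t,
# f(t) = sin t, a = 1 (these choices are ours, not from the text).
import numpy as np

a = 1.0
t = np.linspace(a, 20.0, 200_001)
dt = t[1] - t[0]
phi = -1.0 - 1.0 / t
f = np.sin(t)

def cumtrap(y):
    # cumulative trapezoidal integral from a; vanishes at t = a
    return np.concatenate([[0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * dt)])

Phi = cumtrap(phi)                       # Phi = d_a^{-1} phi, so Phi(a) = 0
Bf = np.exp(Phi) * cumtrap(np.exp(-Phi) * f)

# (d/dt - phi)(Bf) should reproduce f up to discretization error,
# and Bf(a) = 0, in accordance with the text.
residual = np.gradient(Bf, dt) - phi * Bf - f
print(Bf[0], np.max(np.abs(residual[1:-1])))   # 0.0 and a small number
```

On such a grid the printed residual is of the order of the \(O(dt^{2})\) quadrature and differentiation errors.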
Suppose \(\varepsilon>0\) and \(\mathrm{Re}\,\phi(t)\leqslant-\varepsilon\) for all \(t\geqslant a\). Then \(-R\) has derivative \(-R^{\prime}(t)\geqslant\varepsilon\) for all \(t\geqslant a\), so \(-R\) is strictly increasing with image \([-R(a),\infty)=[0,\infty)\) and compositional inverse \((-R)^{\mathrm{inv}}\in\mathcal{C}^{1}_{0}\). Making the change of variables \(-R(s)=u\) for \(s\geqslant a\), we obtain for \(t\geqslant a\) and \(f\in\mathcal{C}_{a}[i]\), and with \(s:=(-R)^{\mathrm{inv}}(u)\),
\[\int_{a}^{t}\mathrm{e}^{-\Phi(s)}\,f(s)\,ds = \int_{0}^{-R(t)}\mathrm{e}^{-\Phi(s)}\,f(s)\frac{1}{-R^{\prime}(s )}\,du,\;\text{ and thus}\] \[|Bf(t)| \leqslant \mathrm{e}^{R(t)}\cdot\left(\int_{0}^{-R(t)}\mathrm{e}^{u}\,du \cdot\|f\|_{[a,t]}\right)\cdot\left\|\frac{1}{\mathrm{Re}\,\phi}\right\|_{[a,t]}\] \[= \left[1-\mathrm{e}^{R(t)}\,\right]\cdot\|f\|_{[a,t]}\cdot\left\| \frac{1}{\mathrm{Re}\,\phi}\right\|_{[a,t]}\] \[\leqslant \|f\|_{[a,t]}\cdot\left\|\frac{1}{\mathrm{Re}\,\phi}\right\|_{[a, t]}\;\leqslant\;\|f\|_{a}\cdot\left\|\frac{1}{\mathrm{Re}\,\phi}\right\|_{a}.\]
Thus \(B\) maps \(\mathcal{C}_{a}[i]^{\mathrm{b}}\) into \(\mathcal{C}_{a}[i]^{\mathrm{b}}\cap\mathcal{C}^{1}_{a}[i]\) and \(B\colon\mathcal{C}_{a}[i]^{\mathrm{b}}\to\mathcal{C}_{a}[i]^{\mathrm{b}}\) is continuous with operator norm \(\|B\|_{a}\leqslant\left\|\frac{1}{\mathrm{Re}\,\phi}\right\|_{a}\).
Next, suppose \(\varepsilon>0\) and \(\mathrm{Re}\,\phi(t)\geqslant\varepsilon\) for all \(t\geqslant a\). Then \(R^{\prime}(t)\geqslant\varepsilon\) for all \(t\geqslant a\), so \(R(t)\geqslant\varepsilon\cdot(t-a)\) for such \(t\). Hence if \(f\in\mathcal{C}_{a}[i]^{\mathrm{b}}\), then \(\mathrm{e}^{-\Phi}\,f\) is integrable at \(\infty\). Recall from (6.1.1) that \(\mathcal{C}_{a}[i]^{\mathrm{int}}\) is the \(\mathbb{C}\)-linear subspace of \(\mathcal{C}_{a}[i]\) consisting of the \(g\in\mathcal{C}_{a}[i]\) that are integrable at \(\infty\). We have the \(\mathbb{C}\)-linear maps
\[f\mapsto\mathrm{e}^{-\Phi}\,f\colon\mathcal{C}_{a}[i]^{\mathrm{b}}\to\mathcal{C} _{a}[i]^{\mathrm{int}},\qquad\partial_{\infty}^{-1}\colon\mathcal{C}_{a}[i]^{ \mathrm{int}}\to\mathcal{C}^{1}_{a}[i],\quad\;f\mapsto\mathrm{e}^{\Phi}\,f \colon\mathcal{C}^{1}_{a}[i]\to\mathcal{C}^{1}_{a}[i].\]
Composition yields the \(\mathbb{C}\)-linear operator \(B\colon\mathcal{C}_{a}[i]^{\mathrm{b}}\to\mathcal{C}^{1}_{a}[i]\),
\[Bf(t)\;:=\;\mathrm{e}^{\Phi(t)}\int_{\infty}^{t}\mathrm{e}^{-\Phi(s)}\,f(s)\,ds \qquad(f\in\mathcal{C}_{a}[i]^{\mathrm{b}}).\]
It is a right inverse to \(\partial-\phi\) in the sense that \((\partial-\phi)\circ B\) is the identity on \(\mathcal{C}_{a}[\mathrm{i}]^{\mathrm{b}}\). Note that \(R\) is strictly increasing with image \([0,\infty)\) and compositional inverse \(R^{\mathrm{inv}}\in\mathcal{C}_{0}^{1}\). Making the change of variables \(R(s)=u\) for \(s\geqslant a\), we obtain for \(t\geqslant a\) and \(f\in\mathcal{C}_{a}[\mathrm{i}]^{\mathrm{b}}\) with \(s:=R^{\mathrm{inv}}(u)\),
\[\int_{\infty}^{t}\mathrm{e}^{-\Phi(s)}\,f(s)\,ds =\ -\int_{R(t)}^{\infty}\mathrm{e}^{-\Phi(s)}\,f(s)\frac{1}{R^{ \prime}(s)}\,du,\ \ \mbox{and thus}\] \[|Bf(t)| \leqslant\ \mathrm{e}^{R(t)}\cdot\left(\int_{R(t)}^{\infty}\mathrm{e}^{- u}\ du\right)\cdot\|f\|_{t}\cdot\left\|\frac{1}{\mathrm{Re}\,\phi}\right\|_{t}\] \[\leqslant\ \|f\|_{t}\cdot\left\|\frac{1}{\mathrm{Re}\,\phi} \right\|_{t}\ \leqslant\ \|f\|_{a}\cdot\left\|\frac{1}{\mathrm{Re}\,\phi}\right\|_{a}\]
Hence \(B\) maps \(\mathcal{C}_{a}[\mathrm{i}]^{\mathrm{b}}\) into \(\mathcal{C}_{a}[\mathrm{i}]^{\mathrm{b}}\cap\mathcal{C}_{a}^{1}[\mathrm{i}]\), and as a \(\mathbb{C}\)-linear operator \(\mathcal{C}_{a}[\mathrm{i}]^{\mathrm{b}}\to\mathcal{C}_{a}[\mathrm{i}]^{ \mathrm{b}}\), \(B\) is continuous with operator norm \(\|B\|_{a}\leqslant\big{\|}\frac{1}{\mathrm{Re}\,\phi}\big{\|}_{a}\). If \(\phi\in\mathcal{C}_{a}^{r}[\mathrm{i}]\), then \(B\) maps \(\mathcal{C}_{a}[\mathrm{i}]^{\mathrm{b}}\cap\mathcal{C}_{a}^{r}[\mathrm{i}]\) into \(\mathcal{C}_{a}[\mathrm{i}]^{\mathrm{b}}\cap\mathcal{C}_{a}^{r+1}[\mathrm{i}]\).
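As a degenerate but instructive sanity check (constant \(\phi\), which we may certainly take in \(\mathcal{C}_{a}[\mathrm{i}]\)): for \(\phi=1\) and \(f=1\) we get \(\Phi(t)=t-a\) and

\[Bf(t)\ =\ \mathrm{e}^{t-a}\int_{\infty}^{t}\mathrm{e}^{-(s-a)}\,ds\ =\ -1,\]

so indeed \((\partial-\phi)(Bf)=0-1\cdot(-1)=1=f\), and \(\|Bf\|_{a}=1=\bigl\|\frac{1}{\operatorname{Re}\phi}\bigr\|_{a}\), attaining the bound on \(\|B\|_{a}\).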
The case that for some \(\varepsilon>0\) we have \(\mathrm{Re}\,\phi(t)\leqslant-\varepsilon\) for all \(t\geqslant a\) is called the _attractive case_, and the case that for some \(\varepsilon>0\) we have \(\mathrm{Re}\,\phi(t)\geqslant\varepsilon\) for all \(t\geqslant a\) is called the _repulsive case_. In both cases the above yields a continuous operator \(B\colon\mathcal{C}_{a}[\mathrm{i}]^{\mathrm{b}}\to\mathcal{C}_{a}[\mathrm{i}] ^{\mathrm{b}}\) with operator norm \(\leqslant\big{\|}\frac{1}{\mathrm{Re}\,\phi}\big{\|}_{a}\) which is right-inverse to the operator \(\partial-\phi\colon\mathcal{C}_{a}^{1}[\mathrm{i}]\to\mathcal{C}_{a}[\mathrm{ i}]\). We denote this operator \(B\) by \(B_{\phi}\) if we need to indicate its dependence on \(\phi\). Note also its dependence on \(a\). In both the attractive and the repulsive case, \(B\) maps \(\mathcal{C}_{a}[\mathrm{i}]^{\mathrm{b}}\) into \(\mathcal{C}_{a}[\mathrm{i}]^{\mathrm{b}}\cap\mathcal{C}_{a}^{1}[\mathrm{i}]\), and if \(\phi\in\mathcal{C}_{a}^{r}[\mathrm{i}]\) then \(B\) maps \(\mathcal{C}_{a}[\mathrm{i}]^{\mathrm{b}}\cap\mathcal{C}_{a}^{r}[\mathrm{i}]\) into \(\mathcal{C}_{a}[\mathrm{i}]^{\mathrm{b}}\cap\mathcal{C}_{a}^{r+1}[\mathrm{i}]\).
Given a Hardy field \(H\) and \(f\in H[\mathrm{i}]\) with \(\mathrm{Re}\,f\succcurlyeq 1\) we can choose \(a\) and a representative of \(f\) in \(\mathcal{C}_{a}[\mathrm{i}]\), to be denoted also by \(f\), such that \(\mathrm{Re}\,f(t)\neq 0\) for all \(t\geqslant a\), and then \(f\in\mathcal{C}_{a}[\mathrm{i}]\) falls either under the attractive case or under the repulsive case. The original germ \(f\in H[\mathrm{i}]\) as well as the function \(f\in\mathcal{C}_{a}[\mathrm{i}]\) is accordingly said to be attractive, respectively repulsive. (This agrees with the terminology introduced at the beginning of Section 4.5.)
### Twists and right-inverses of linear operators over Hardy fields

Let \(H\) be a Hardy field, \(K:=H[\mathrm{i}]\), and let \(A\in K[\partial]\) be a monic operator of order \(r\geqslant 1\),
\[A\ =\ \partial^{r}+f_{1}\partial^{r-1}+\cdots+f_{r},\qquad f_{1},\ldots,f_{r} \in K.\]
Take a real number \(a_{0}\) and functions in \(\mathcal{C}_{a_{0}}[\mathrm{i}]\) that represent the germs \(f_{1},\ldots,f_{r}\) and to be denoted also by \(f_{1},\ldots,f_{r}\). Whenever we increase below the value of \(a_{0}\), it is understood that we also update the functions \(f_{1},\ldots,f_{r}\) accordingly, by restriction; the same holds for any function on \([a_{0},\infty)\) that gets named. Throughout, \(a\) ranges over \([a_{0},\infty)\), and \(f_{1},\ldots,f_{r}\) denote also the restrictions of these functions to \([a,\infty)\), and likewise for any function on \([a_{0},\infty)\) that we name. Thus for any \(a\) we have the \(\mathbb{C}\)-linear operator
\[A_{a}\,:\ \mathcal{C}_{a}^{r}[\mathrm{i}]\to\mathcal{C}_{a}[\mathrm{i}],\quad y \mapsto y^{(r)}+f_{1}y^{(r-1)}+\cdots+f_{r}y.\]
Next, let \(\mathfrak{m}\in H^{\times}\) be given. It gives rise to the twist \(A_{\ltimes\mathfrak{m}}\in K[\partial]\),
\[A_{\ltimes\mathfrak{m}}\ :=\ \mathfrak{m}^{-1}A\mathfrak{m}\ =\ \partial^{r}+g_{1} \partial^{r-1}+\cdots+g_{r},\qquad g_{1},\ldots,g_{r}\in K.\]
Now [ADH, (5.1.1), (5.1.2), (5.1.3)] gives universal expressions for \(g_{1},\ldots,g_{r}\) in terms of \(f_{1},\ldots,f_{r},\mathfrak{m},\mathfrak{m}^{-1}\); for example, \(g_{1}=f_{1}+r\mathfrak{m}^{\dagger}\). Suppose the germ \(\mathfrak{m}\)
is represented by a function in \(\mathcal{C}^{r}_{a_{0}}[i]^{\times}\), also denoted by \(\mathfrak{m}\). Let \(\mathfrak{m}^{-1}\) likewise do double duty as the multiplicative inverse of \(\mathfrak{m}\) in \(\mathcal{C}^{r}_{a_{0}}[i]\). The expressions above can be used to show that the germs \(g_{1},\ldots,g_{r}\) are represented by functions in \(\mathcal{C}_{a_{0}}[i]\), to be denoted also by \(g_{1},\ldots,g_{r}\), such that for all \(a\) and all \(y\in\mathcal{C}^{r}_{a}[i]\) we have
\[\mathfrak{m}^{-1}A_{a}(\mathfrak{m}y)\ =\ (A_{\ltimes\mathfrak{m}})_{a}(y),\ \text{ where }(A_{\ltimes\mathfrak{m}})_{a}(y)\ :=\ y^{(r)}+g_{1}y^{(r-1)}+\cdots+g_{r}y.\]
The operator \(A_{a}\colon\mathcal{C}^{r}_{a}[i]\to\mathcal{C}_{a}[i]\) is surjective (Proposition 5.2.1); we aim to construct a right-inverse of \(A_{a}\) on the subspace \(\mathcal{C}_{a}[i]^{\mathrm{b}}\) of \(\mathcal{C}_{a}[i]\). For this, we assume given a splitting of \(A\) over \(K\),
\[A\ =\ (\partial-\phi_{1})\cdots(\partial-\phi_{r}),\qquad\phi_{1},\ldots, \phi_{r}\in K.\]
Take functions in \(\mathcal{C}_{a_{0}}[i]\), to be denoted also by \(\phi_{1},\ldots,\phi_{r}\), that represent the germs \(\phi_{1},\ldots,\phi_{r}\). We increase \(a_{0}\) to arrange \(\phi_{1},\ldots,\phi_{r}\in\mathcal{C}^{r-1}_{a_{0}}[i]\). Note that for \(j=1,\ldots,r\) the \(\mathbb{C}\)-linear map \(\partial-\phi_{j}\colon\mathcal{C}^{1}_{a}[i]\to\mathcal{C}_{a}[i]\) restricts to a \(\mathbb{C}\)-linear map \(A_{j}\colon\mathcal{C}^{j}_{a}[i]\to\mathcal{C}^{j-1}_{a}[i]\), so that we obtain a map \(A_{1}\circ\cdots\circ A_{r}\colon\mathcal{C}^{r}_{a}[i]\to\mathcal{C}_{a}[i]\). It is routine to verify that for all sufficiently large \(a\) we have
\[A_{a}\ =\ A_{1}\circ\cdots\circ A_{r}\,:\ \mathcal{C}^{r}_{a}[i]\to\mathcal{C}_{a}[i].\]
We increase \(a_{0}\) so that \(A_{a}=A_{1}\circ\cdots\circ A_{r}\) for all \(a\). Note that \(A_{1},\ldots,A_{r}\) depend on \(a\), but we prefer not to indicate this dependence notationally.
Now \(\mathfrak{m}\in H^{\times}\) gives over \(K\) the splitting
\[A_{\ltimes\mathfrak{m}}\ =\ (\partial-\phi_{1}+\mathfrak{m}^{\dagger})\cdots(\partial-\phi_{r}+\mathfrak{m}^{\dagger}).\]
Suppose as before that the germ \(\mathfrak{m}\) is represented by a function \(\mathfrak{m}\in\mathcal{C}^{r}_{a_{0}}[i]^{\times}\). With the usual notational conventions we have \(\phi_{j}-\mathfrak{m}^{\dagger}\in\mathcal{C}^{r-1}_{a_{0}}[i]\), giving the \(\mathbb{C}\)-linear map \(\widetilde{A}_{j}:=\partial-(\phi_{j}-\mathfrak{m}^{\dagger})\colon\mathcal{C }^{j}_{a}[i]\to\mathcal{C}^{j-1}_{a}[i]\) for \(j=1,\ldots,r\), which for all sufficiently large \(a\) gives, just as for \(A_{a}\), a factorization
\[(A_{\ltimes\mathfrak{m}})_{a}\ =\ \widetilde{A}_{1}\circ\cdots\circ\widetilde{A}_{r}.\]
To construct a right-inverse of \(A_{a}\) we now assume \(\operatorname{Re}\phi_{1},\ldots,\operatorname{Re}\phi_{r}\succcurlyeq 1\). Then we increase \(a_{0}\) once more so that for all \(t\geqslant a_{0}\),
\[\operatorname{Re}\phi_{1}(t),\ldots,\operatorname{Re}\phi_{r}(t)\neq 0.\]
Recall that for \(j=1,\ldots,r\) we have the continuous \(\mathbb{C}\)-linear operator
\[B_{j}\ :=\ B_{\phi_{j}}\ :\ \mathcal{C}_{a}[i]^{\mathrm{b}}\to\mathcal{C}_{a}[i]^{ \mathrm{b}}\]
from the previous subsection. The subsection on twisted integration now yields:
**Lemma 6.1.5**.: _The continuous \(\mathbb{C}\)-linear operator_
\[A_{a}^{-1}\ :=\ B_{r}\circ\cdots\circ B_{1}\ :\ \mathcal{C}_{a}[i]^{\mathrm{b}} \to\mathcal{C}_{a}[i]^{\mathrm{b}}\]
_is a right-inverse of \(A_{a}\): it maps \(\mathcal{C}_{a}[i]^{\mathrm{b}}\) into \(\mathcal{C}_{a}[i]^{\mathrm{b}}\cap\mathcal{C}^{r}_{a}[i]\), and \(A_{a}\circ A_{a}^{-1}\) is the identity on \(\mathcal{C}_{a}[i]^{\mathrm{b}}\). For its operator norm we have \(\|A_{a}^{-1}\|_{a}\ \leqslant\ \prod_{j=1}^{r}\big{\|}\frac{1}{ \operatorname{Re}\phi_{j}}\big{\|}_{a}\)._
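By way of illustration, with constant splitting (a degenerate choice, but one where everything can be computed in closed form): take \(r=2\), \(A=(\partial-1)(\partial+1)=\partial^{2}-1\), so \(\phi_{1}=1\) is repulsive and \(\phi_{2}=-1\) is attractive, and let \(f=1\). The previous subsection gives \(B_{1}f=-1\), and then

\[A_{a}^{-1}f(t)\ =\ B_{2}(B_{1}f)(t)\ =\ \mathrm{e}^{-(t-a)}\int_{a}^{t}\mathrm{e}^{\,s-a}\,(-1)\,ds\ =\ -1+\mathrm{e}^{-(t-a)},\]

so indeed \(A_{a}\bigl(A_{a}^{-1}f\bigr)=(\partial^{2}-1)\bigl(-1+\mathrm{e}^{-(t-a)}\bigr)=1=f\), with \(\|A_{a}^{-1}f\|_{a}=1\leqslant\prod_{j=1}^{2}\bigl\|\frac{1}{\operatorname{Re}\phi_{j}}\bigr\|_{a}=1\).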
Suppose \(A\) is real in the sense that \(A\in H[\partial]\). Then by increasing \(a_{0}\) we arrange that \(f_{1},\ldots,f_{r}\in\mathcal{C}_{a_{0}}\). Next, set
\[\mathcal{C}^{\mathrm{b}}_{a}\ :=\ \mathcal{C}_{a}[i]^{\mathrm{b}}\cap\mathcal{C}_{a} \ =\ \big{\{}f\in\mathcal{C}_{a}:\,\|f\|_{a}<\infty\big{\}},\]
an \(\mathbb{R}\)-linear subspace of \(\mathcal{C}_{a}\). Then the real part
\[\operatorname{Re}A_{a}^{-1}\ :\ \mathcal{C}^{\mathrm{b}}_{a}\to\mathcal{C}^{ \mathrm{b}}_{a},\qquad(\operatorname{Re}A_{a}^{-1})(f)\ :=\ \operatorname{Re}\bigl{(}A_{a}^{-1}(f)\bigr{)}\]
is \(\mathbb{R}\)-linear and maps \(\mathcal{C}_{a}^{\mathrm{b}}\) into \(\mathcal{C}_{a}^{r}\). Moreover, it is right-inverse to \(A_{a}\) on \(\mathcal{C}_{a}^{\mathrm{b}}\) in the sense that \(A_{a}\circ\mathrm{Re}\,A_{a}^{-1}\) is the identity on \(\mathcal{C}_{a}^{\mathrm{b}}\), and for \(f\in\mathcal{C}_{a}^{\mathrm{b}}\),
\[\|(\mathrm{Re}\,A_{a}^{-1})(f)\|_{a}\ \leqslant\ \|A_{a}^{-1}(f)\|_{a}.\]
**Damping factors.** Here \(H\), \(K\), \(A\), \(f_{1},\ldots,f_{r}\), \(\phi_{1},\ldots,\phi_{r}\), \(a_{0}\) are as in Lemma 6.1.5. In particular, \(r\in\mathbb{N}^{\geqslant 1}\), \(\mathrm{Re}\,\phi_{1},\ldots,\mathrm{Re}\,\phi_{r}\succcurlyeq 1\), and \(a\) ranges over \([a_{0},\infty)\). For later use we choose damping factors \(u\) to make the operator \(uA_{a}^{-1}\) more manageable than \(A_{a}^{-1}\). For \(j=0,\ldots,r\) we set
\[A_{j}^{\circ}\ :=\ A_{1}\circ\cdots\circ A_{j}\,:\ \mathcal{C}_{a}^{j}[i] \rightarrow\mathcal{C}_{a}[i], \tag{6.1.2}\]
with \(A_{0}^{\circ}\) the identity on \(\mathcal{C}_{a}[i]\) and \(A_{r}^{\circ}=A_{a}\), and
\[B_{j}^{\circ}\ :=\ B_{j}\circ\cdots\circ B_{1}\,:\ \mathcal{C}_{a}[i]^{ \mathrm{b}}\rightarrow\mathcal{C}_{a}[i]^{\mathrm{b}}, \tag{6.1.3}\]
where \(B_{0}^{\circ}\) is the identity on \(\mathcal{C}_{a}[i]^{\mathrm{b}}\) and \(B_{r}^{\circ}=A_{a}^{-1}\). Then \(B_{j}^{\circ}\) maps \(\mathcal{C}_{a}[i]^{\mathrm{b}}\) into \(\mathcal{C}_{a}[i]^{\mathrm{b}}\cap\mathcal{C}_{a}^{j}[i]\), and \(A_{j}^{\circ}\circ B_{j}^{\circ}\) is the identity on \(\mathcal{C}_{a}[i]^{\mathrm{b}}\), by Lemma 6.1.5.
**Lemma 6.1.6**.: _Let \(u\in\mathcal{C}_{a}^{r}[i]^{\times}\). Then for \(i=0,\ldots,r\) and \(f\in\mathcal{C}_{a}[i]^{\mathrm{b}}\),_
\[\big{[}u\cdot A_{a}^{-1}(f)\big{]}^{(i)}\ =\ \sum_{j=r-i}^{r}u_{i,j}\cdot u\cdot B_{j}^{ \circ}(f)\quad\text{ in }\mathcal{C}_{a}^{r-i}[i] \tag{6.1.4}\]
_with coefficient functions \(u_{i,j}\in\mathcal{C}_{a}^{r-i}[i]\) given by \(u_{i,r-i}=1\), and for \(0\leqslant i<r\),_
\[u_{i+1,j}\ =\ \begin{cases}u_{i,r}^{\prime}+u_{i,r}(u^{\dagger}+\phi_{r})& \text{if }j=r\text{,}\\ u_{i,j}^{\prime}+u_{i,j}(u^{\dagger}+\phi_{j})+u_{i,j+1}&\text{if }r-i\leqslant j<r \text{.}\end{cases}\]
Proof.: Recall that for \(j=1,\ldots,r\) and \(f\in\mathcal{C}_{a}[i]^{\mathrm{b}}\) we have \(B_{j}(f)^{\prime}=f+\phi_{j}B_{j}(f)\). It is obvious that (6.1.4) holds for \(i=0\). Assuming (6.1.4) for a certain \(i<r\) we get
\[\big{[}uA_{a}^{-1}(f)\big{]}^{(i+1)}\ =\ \sum_{j=r-i}^{r}u_{i,j}^{\prime}\cdot uB_{j} ^{\circ}(f)+\sum_{j=r-i}^{r}u_{i,j}\cdot\big{[}uB_{j}^{\circ}(f)\big{]}^{ \prime},\]
and for \(j=r-i,\ldots,r\),
\[\big{[}uB_{j}^{\circ}(f)\big{]}^{\prime}\ =\ u^{\prime}B_{j}^{\circ}(f)+u\cdot \big{[}B_{j}^{\circ}(f)\big{]}^{\prime}\ =\ u^{\dagger}\cdot uB_{j}^{\circ}(f)+uB_{j-1}^{\circ}(f)+\phi_{j}uB_{j}^{ \circ}(f),\]
which gives the desired result.
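For instance, for \(r=1\) and \(i=1\) the recursion yields \(u_{1,0}=1\) and \(u_{1,1}=u^{\dagger}+\phi_{1}\), so that (6.1.4) becomes

\[\big[u\,B_{1}(f)\big]^{\prime}\ =\ u\,f+(u^{\dagger}+\phi_{1})\,u\,B_{1}(f),\]

which is also immediate from \(B_{1}(f)^{\prime}=f+\phi_{1}B_{1}(f)\) and the product rule.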
Let \(\mathfrak{v}\in\mathcal{C}_{a_{0}}^{r}\) be such that \(\mathfrak{v}(t)>0\) for all \(t\geqslant a_{0}\), \(\mathfrak{v}\in H\), \(\mathfrak{v}\prec 1\). Then we have the convex subgroup
\[\Delta\ :=\ \big{\{}\gamma\in v(H^{\times}):\ \gamma=o(v\mathfrak{v})\big{\}}\]
of \(v(H^{\times})\). _We assume that \(\phi_{1},\ldots,\phi_{r}\preccurlyeq_{\Delta}\mathfrak{v}^{-1}\) in the asymptotic field \(K\), where \(\phi_{j}\) and \(\mathfrak{v}\) also denote their germs._ For real \(\nu>0\) we have \(\mathfrak{v}^{\nu}\in(\mathcal{C}_{a_{0}}^{r})^{\times}\), so
\[u\ :=\ \mathfrak{v}^{\nu}|_{[a,\infty)}\in(\mathcal{C}_{a}^{r})^{\times},\qquad \|u\|_{a}<\infty.\]
In the next proposition \(u\) has this meaning, a meaning which accordingly varies with \(a\). Recall that \(A_{a}^{-1}\) maps \(\mathcal{C}_{a}[i]^{\mathrm{b}}\) into \(\mathcal{C}_{a}[i]^{\mathrm{b}}\cap\mathcal{C}_{a}^{r}[i]\) with \(\|A_{a}^{-1}\|_{a}<\infty\).
**Proposition 6.1.7**.: _Assume \(H\) is real closed and \(\nu\in\mathbb{Q}\), \(\nu>r\). Then:_
1. _the_ \(\mathbb{C}\)_-linear operator_ \(uA_{a}^{-1}\colon\mathcal{C}_{a}[i]^{\mathrm{b}}\rightarrow\mathcal{C}_{a}[i]^{ \mathrm{b}}\) _maps_ \(\mathcal{C}_{a}[i]^{\mathrm{b}}\) _into_ \(\mathcal{C}_{a}^{r}[i]^{\mathrm{b}}\)_;_
2. \(uA_{a}^{-1}\colon\mathcal{C}_{a}[i]^{\mathrm{b}}\rightarrow\mathcal{C}_{a}^{r}[i ]^{\mathrm{b}}\) _is continuous;_
3. _there is a real constant_ \(c\geqslant 0\) _such that_ \(\|uA_{a}^{-1}\|_{a;r}\leqslant c\) _for all_ \(a\)_;_
4. _for all_ \(f\in\mathcal{C}_{a}[i]^{\mathrm{b}}\) _we have_ \(uA_{a}^{-1}(f)\preccurlyeq\mathfrak{v}^{\nu}\prec 1\)_;_
5. \(\|uA_{a}^{-1}\|_{a;r}\to 0\) _as_ \(a\rightarrow\infty\)_._
Proof.: Note that \(\mathfrak{v}^{\dagger}\preccurlyeq_{\Delta}1\) by [ADH, 9.2.10(iv)]. Denoting the germ of \(u\) also by \(u\) we have \(u\in H\) and \(u^{\dagger}=\nu\mathfrak{v}^{\dagger}\preccurlyeq_{\Delta}1\), in particular, \(u^{\dagger}\preccurlyeq\mathfrak{v}^{-1/2}\). Note that the \(u_{i,j}\) from Lemma 6.1.6--that is, their germs--lie in \(K\). Induction on \(i\) gives \(u_{i,j}\preccurlyeq_{\Delta}\mathfrak{v}^{-i}\) for \(r-i\leqslant j\leqslant r\). Hence \(uu_{i,j}\prec_{\Delta}\mathfrak{v}^{\nu-i}\prec_{\Delta}1\) for \(r-i\leqslant j\leqslant r\). Thus for \(i=0,\ldots,r\) we have a real constant
\[c_{i,a}\ :=\ \sum_{j=r-i}^{r}\|u\,u_{i,j}\|_{a}\cdot\|B_{j}\|_{a}\cdots\|B_{1} \|_{a}\in[0,\infty)\]
with \(\big{\|}\big{[}uA_{a}^{-1}(f)\big{]}^{(i)}\big{\|}_{a}\leqslant c_{i,a}\|f\|_ {a}\) for all \(f\in\mathcal{C}_{a}[i]^{\mathrm{b}}\). Therefore \(uA_{a}^{-1}\) maps \(\mathcal{C}_{a}[i]^{\mathrm{b}}\) into \(\mathcal{C}_{a}^{r}[i]^{\mathrm{b}}\), and the operator \(uA_{a}^{-1}\colon\mathcal{C}_{a}[i]^{\mathrm{b}}\to\mathcal{C}_{a}^{r}[i]^{ \mathrm{b}}\) is continuous with
\[\|uA_{a}^{-1}\|_{a;r}\ \leqslant\ c_{a}:=\max\{c_{0,a},\ldots,c_{r,a}\}.\]
As to (iii), this is because for all \(i\), \(j\), \(\|u\,u_{i,j}\|_{a}\) is decreasing as a function of \(a\), and \(\|B_{j}\|_{a}\leqslant\big{\|}\frac{1}{\operatorname{Re}\phi_{j}}\big{\|}_{a}\) for all \(j\). For \(f\in\mathcal{C}_{a}[i]^{\mathrm{b}}\) we have \(A_{a}^{-1}(f)\in\mathcal{C}_{a}[i]^{\mathrm{b}}\), so (iv) holds. As to (v), \(u\,u_{i,j}\prec 1\) gives \(\|u\,u_{i,j}\|_{a}\to 0\) as \(a\to\infty\), for all \(i\), \(j\). In view of \(\|B_{j}\|_{a}\leqslant\big{\|}\frac{1}{\operatorname{Re}\phi_{j}}\big{\|}_{a}\) for all \(j\), this gives \(c_{i,a}\to 0\) as \(a\to\infty\) for \(i=0,\ldots,r\), so \(c_{a}\to 0\) as \(a\to\infty\).
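For orientation we include an example (assuming, purely for illustration, that the germ \(x\) of the identity function lies in \(H\)): take \(\mathfrak{v}:=x^{-1}\) and \(\phi_{1}=\cdots=\phi_{r}:=x\); then \(\operatorname{Re}\phi_{j}=x\succcurlyeq 1\) and \(\phi_{j}\preccurlyeq_{\Delta}\mathfrak{v}^{-1}\), and for \(\nu\in\mathbb{Q}\), \(\nu>r\), we get \(u=x^{-\nu}\) with \(u^{\dagger}=-\nu x^{-1}\prec 1\). Part (iv) of the proposition then says that \(uA_{a}^{-1}(f)\preccurlyeq x^{-\nu}\) for all \(f\in\mathcal{C}_{a}[i]^{\mathrm{b}}\).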
### Solving Split-Normal Equations over Hardy Fields
We construct here solutions of suitable algebraic differential equations over Hardy fields. These solutions lie in rings \(\mathcal{C}_{a}^{r}[i]^{\mathrm{b}}\) (\(r\in\mathbb{N}^{\geqslant 1}\)) and are obtained as fixed points of certain contractive maps, as is common in solving differential equations. Here we use that \(\mathcal{C}_{a}^{r}[i]^{\mathrm{b}}\) is a Banach space with respect to the norm \(\|\cdot\|_{a;r}\). It will take some effort to define the right contractions using the operators from Section 6.1.
In this section \(H\), \(K\), \(A\), \(f_{1},\ldots,f_{r}\), \(\phi_{1},\ldots,\phi_{r}\), \(a_{0}\) are as in Lemma 6.1.5. In particular, \(H\) is a Hardy field, \(K=H[i]\), and
\[A=(\partial-\phi_{1})\cdots(\partial-\phi_{r})\qquad\text{where $r\in\mathbb{N}^{ \geqslant 1}$, $\phi_{1},\ldots,\phi_{r}\in K$, $\operatorname{Re}\phi_{1},\ldots,\operatorname{Re}\phi_{r}\succcurlyeq 1$}.\]
Here \(a_{0}\) is chosen so that we have representatives for \(\phi_{1},\ldots,\phi_{r}\) in \(\mathcal{C}_{a_{0}}^{r-1}[i]\), denoted also by \(\phi_{1},\ldots,\phi_{r}\). We let \(a\) range over \([a_{0},\infty)\). In addition we assume that \(H\) is real closed, and that we are given a germ \(\mathfrak{v}\in H^{>}\) such that \(\mathfrak{v}\prec 1\) and \(\phi_{1},\ldots,\phi_{r}\preccurlyeq_{\Delta}\mathfrak{v}^{-1}\) for the convex subgroup
\[\Delta\ :=\ \big{\{}\gamma\in v(H^{\times}):\ \gamma=o(v\mathfrak{v})\big{\}}\]
of \(v(H^{\times})\). We increase \(a_{0}\) so that \(\mathfrak{v}\) is represented by a function in \(\mathcal{C}_{a_{0}}^{r}\), also denoted by \(\mathfrak{v}\), with \(\mathfrak{v}(t)>0\) for all \(t\geqslant a_{0}\).
### Constructing fixed points over \(H\)
Consider a differential equation
\[A(y)\ =\ R(y),\qquad y\prec 1, \tag{$*$}\]
where \(R\in K\{Y\}\) has order \(\leqslant r\), degree \(\leqslant d\in\mathbb{N}^{\geqslant 1}\) and weight \(\leqslant w\in\mathbb{N}^{\geqslant r}\), with \(R\prec_{\Delta}\mathfrak{v}^{w}\). Now \(R=\sum_{\boldsymbol{j}}R_{\boldsymbol{j}}Y^{\boldsymbol{j}}\) with \(\boldsymbol{j}\) ranging here and below over the tuples \((j_{0},\ldots,j_{r})\in\mathbb{N}^{1+r}\) with \(|\boldsymbol{j}|\leqslant d\) and \(\|\boldsymbol{j}\|\leqslant w\); likewise for \(\boldsymbol{i}\). For each \(\boldsymbol{j}\) we take a function in \(\mathcal{C}_{a_{0}}[i]\) that represents the germ \(R_{\boldsymbol{j}}\in K\) and let \(R_{\boldsymbol{j}}\) denote this function as well as its restriction to any \([a,\infty)\). Thus \(R\) is represented on \([a,\infty)\) by
a polynomial \(\sum_{\boldsymbol{j}}R_{\boldsymbol{j}}Y^{\boldsymbol{j}}\in\mathcal{C}_{a}[i] \big{[}Y,\ldots,Y^{(r)}\big{]}\), to be denoted also by \(R\) for simplicity. This yields for each \(a\) an evaluation map
\[f\mapsto R(f):=\sum_{\boldsymbol{j}}R_{\boldsymbol{j}}f^{\boldsymbol{j}}\ :\ \mathcal{C}_{a}^{r}[i]\to\mathcal{C}_{a}[i].\]
As in [ADH, 4.2] we also have for every \(\boldsymbol{i}\) the formal partial derivative
\[R^{(\boldsymbol{i})}\ :=\ \frac{\partial^{|\boldsymbol{i}|}R}{\partial^{i_{0}}Y \cdots\partial^{i_{r}}Y^{(r)}}\ \in\ \mathcal{C}_{a}[i]\big{[}Y,\ldots,Y^{(r)}\big{]}\]
with \(R^{(\boldsymbol{i})}=\sum_{\boldsymbol{j}}R^{(\boldsymbol{i})}_{\boldsymbol{ j}}Y^{\boldsymbol{j}}\), all \(R^{(\boldsymbol{i})}_{\boldsymbol{j}}\in\mathcal{C}_{a}[i]\) having their germs in \(K\).
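For example (assuming \(d\geqslant 2\), purely for illustration): \(R:=\mathfrak{v}^{w+1}(1+YY^{\prime})\) is admissible, being of order \(1\leqslant r\), degree \(2\leqslant d\) and weight \(1\leqslant w\), with \(R\prec_{\Delta}\mathfrak{v}^{w}\) since \(\mathfrak{v}^{w+1}\prec_{\Delta}\mathfrak{v}^{w}\); its evaluation at \(f\in\mathcal{C}^{r}_{a}[i]\) is \(R(f)=\mathfrak{v}^{w+1}(1+ff^{\prime})\).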
A _solution of \((*)\) on \([a,\infty)\)_ is a function \(f\in\mathcal{C}^{r}_{a}[i]^{\mathrm{b}}\) such that \(A_{a}(f)=R(f)\) and \(f\prec 1\). One might try to obtain a solution as a fixed point of the operator \(f\mapsto A_{a}^{-1}\big{(}R(f)\big{)}\), but this operator might fail to be contractive on a useful space of functions. Therefore we twist \(A\) and arrange things so that we can use Proposition 6.1.7. In the rest of this section we fix \(\nu\in\mathbb{Q}\) with \(\nu>w\) (so \(\nu>r\)) such that \(R\prec_{\Delta}\mathfrak{v}^{\nu}\) and \(\nu\mathfrak{v}^{\dagger}\not\sim\mathrm{Re}\,\phi_{j}\) in \(H\) for \(j=1,\ldots,r\). (Note that such \(\nu\) exists.) Then the twist \(\widetilde{A}:=A_{\times\mathfrak{v}^{\nu}}=\mathfrak{v}^{-\nu}A\mathfrak{v}^{\nu}\in K[\partial]\) splits over \(K\) as follows:
\[\widetilde{A}\ =\ (\partial-\phi_{1}+\nu\mathfrak{v}^{\dagger})\cdots(\partial-\phi_{r}+\nu\mathfrak{v}^{\dagger}),\quad\text{ with }\] \[\phi_{j}-\nu\mathfrak{v}^{\dagger}\preccurlyeq_{\Delta}\mathfrak{v}^{-1},\quad\operatorname{Re}\phi_{j}-\nu\mathfrak{v}^{\dagger}\ \succcurlyeq\ 1\qquad(j=1,\ldots,r).\]
We also increase \(a_{0}\) so that \(\mathrm{Re}\,\phi_{j}(t)-\nu\mathfrak{v}^{\dagger}(t)\neq 0\) for all \(t\geqslant a_{0}\) and such that for all \(a\) and \(u:=\mathfrak{v}^{\nu}|_{[a,\infty)}\in(\mathcal{C}_{a}^{r})^{\times}\) the operator \(\widetilde{A}_{a}\colon\mathcal{C}_{a}^{r}[i]\to\mathcal{C}_{a}[i]\) satisfies
\[\widetilde{A}_{a}(y)\ =\ u^{-1}A_{a}(uy)\qquad(y\in\mathcal{C}_{a}^{r}[i]).\]
(See the explanations before Lemma 6.1.5 for definitions of \(A_{a}\) and \(\widetilde{A}_{a}\).) We now increase \(a_{0}\) once more, fixing it for the rest of the section except in the subsection "Preserving reality", so as to obtain as in Lemma 6.1.5, with \(\widetilde{A}\) in the role of \(A\), a distinguished right-inverse \(\widetilde{A}_{a}^{-1}\colon\mathcal{C}_{a}[i]^{\mathrm{b}}\to\mathcal{C}_{a }[i]^{\mathrm{b}}\) for such \(\widetilde{A}_{a}\).
**Lemma 6.2.1**.: _We have a continuous operator (not necessarily \(\mathbb{C}\)-linear)_
\[\Xi_{a}\ :\ \mathcal{C}_{a}^{r}[i]^{\mathrm{b}}\to\mathcal{C}_{a}^{r}[i]^{ \mathrm{b}},\quad f\mapsto u\widetilde{A}_{a}^{-1}\big{(}u^{-1}R(f)\big{)}.\]
_It has the property that \(\Xi_{a}(f)\preccurlyeq\mathfrak{v}^{\nu}\prec 1\) and \(A_{a}\big{(}\Xi_{a}(f)\big{)}=R(f)\) for all \(f\in\mathcal{C}_{a}^{r}[i]^{\mathrm{b}}\)._
Proof.: We have \(\|u^{-1}R_{\boldsymbol{i}}\|_{a}<\infty\) for all \(\boldsymbol{i}\), so \(u^{-1}R(f)=\sum_{\boldsymbol{i}}u^{-1}R_{\boldsymbol{i}}f^{\boldsymbol{i}}\in \mathcal{C}_{a}[i]^{\mathrm{b}}\) for all \(f\in\mathcal{C}_{a}^{r}[i]^{\mathrm{b}}\), and thus \(u\widetilde{A}_{a}^{-1}\big{(}u^{-1}R(f)\big{)}\in\mathcal{C}_{a}^{r}[i]^{ \mathrm{b}}\) for such \(f\), by Proposition 6.1.7(i). Continuity of \(\Xi_{a}\) follows from Proposition 6.1.7(ii) and continuity of \(f\mapsto u^{-1}R(f)\colon\mathcal{C}_{a}^{r}[i]^{\mathrm{b}}\to\mathcal{C}_{a }[i]^{\mathrm{b}}\). For \(f\in\mathcal{C}_{a}^{r}[i]^{\mathrm{b}}\) we have \(\Xi_{a}(f)\preccurlyeq\mathfrak{v}^{\nu}\prec 1\) by Proposition 6.1.7(iv), and
\[u^{-1}A_{a}\big{(}\Xi_{a}(f)\big{)}\ =\ u^{-1}A_{a}\big{[}u\widetilde{A}_{a}^{-1} \big{(}u^{-1}R(f)\big{)}\big{]}\ =\ \widetilde{A}_{a}\big{[}\widetilde{A}_{a}^{-1}\big{(}u^{-1}R(f)\big{)}\big{]}\ =\ u^{-1}R(f),\]
so \(A_{a}\big{(}\Xi_{a}(f)\big{)}=R(f)\).
By Lemma 6.2.1, each \(f\in\mathcal{C}_{a}^{r}[i]^{\mathrm{b}}\) with \(\Xi_{a}(f)=f\) is a solution of \((*)\) on \([a,\infty)\).
**Lemma 6.2.2**.: _There is a constant \(C_{a}\in\mathbb{R}^{\geqslant}\) such that for all \(f,g\in\mathcal{C}_{a}^{r}[i]^{\mathrm{b}}\),_
\[\|\Xi_{a}(f+g)-\Xi_{a}(f)\|_{a;r}\ \leqslant\ C_{a}\cdot\max\bigl{\{}1,\|f\|_{a;r}^{d} \bigr{\}}\cdot\bigl{(}\|g\|_{a;r}+\cdots+\|g\|_{a;r}^{d}\bigr{)}.\]
_We can take these \(C_{a}\) such that \(C_{a}\to 0\) as \(a\to\infty\), and we do so below._
Proof.: Let \(f,g\in{\mathcal{C}}^{r}_{a}[i]^{\rm b}\). We have the Taylor expansion
\[R(f+g)\ =\ \sum_{\boldsymbol{i}}\frac{1}{\boldsymbol{i}!}R^{(\boldsymbol{i})}(f)g^{\boldsymbol{i}}\ =\ \sum_{\boldsymbol{i}}\frac{1}{\boldsymbol{i}!}\bigg{[}\sum_{\boldsymbol{j}}R^{(\boldsymbol{i})}_{\boldsymbol{j}}f^{\boldsymbol{j}}\bigg{]}g^{\boldsymbol{i}}.\]
Now for all \(\boldsymbol{i}\), \(\boldsymbol{j}\) we have \(R^{(\boldsymbol{i})}_{\boldsymbol{j}}\prec_{\Delta}\mathfrak{v}^{\nu}\) in \(K\), so \(u^{-1}R^{(\boldsymbol{i})}_{\boldsymbol{j}}\prec 1\). Hence
\[D_{a}\ :=\ \sum_{\boldsymbol{i},\boldsymbol{j}}\big{\|}u^{-1}R^{(\boldsymbol{ i})}_{\boldsymbol{j}}\big{\|}_{a}\ \in\ [0,\infty)\]
has the property that \(D_{a}\to 0\) as \(a\to\infty\), and
\[\big{\|}u^{-1}\big{(}R(f+g)-R(f)\big{)}\big{\|}_{a}\ \leqslant\ D_{a}\cdot\max \bigl{\{}1,\|f\|_{a;r}^{d}\bigr{\}}\cdot\bigl{(}\|g\|_{a;r}+\cdots+\|g\|_{a;r}^ {d}\bigr{)}.\]
So \(h:=u^{-1}\big{(}R(f+g)-R(f)\big{)}\in{\mathcal{C}}^{0}_{a}[i]^{\rm b}\) gives \(\Xi_{a}(f+g)-\Xi_{a}(f)=u\widetilde{A}_{a}^{-1}(h)\), and
\[\|\Xi_{a}(f+g)-\Xi_{a}(f)\|_{a;r}=\|u\widetilde{A}_{a}^{-1}(h)\|_{a;r}\leqslant \|u\widetilde{A}_{a}^{-1}\|_{a;r}\cdot\|h\|_{a}.\]
Thus the lemma holds for \(C_{a}:=\|u\widetilde{A}_{a}^{-1}\|_{a;r}\cdot D_{a}\).
In the proof of the next theorem we use the well-known fact that the normed vector space \({\mathcal{C}}^{r}_{a}[i]^{\rm b}\) over \({\mathbb{C}}\) is actually a Banach space. Hence if \(S\subseteq{\mathcal{C}}^{r}_{a}[i]^{\rm b}\) is nonempty and closed and \(\Phi\colon S\to S\) is contractive (that is, there is a real number \(\lambda\in[0,1)\) such that \(\|\Phi(f)-\Phi(g)\|_{a;r}\leqslant\lambda\|f-g\|_{a;r}\) for all \(f,g\in S\)), then \(\Phi\) has a unique fixed point \(f_{0}\), and \(\Phi^{n}(f)\to f_{0}\) as \(n\to\infty\), for every \(f\in S\). (See for example [203, Chapter II, §5, IX].)
**Theorem 6.2.3**.: _For all sufficiently large \(a\) the operator \(\Xi_{a}\) maps the closed ball_
\[\bigl{\{}f\in{\mathcal{C}}^{r}_{a}[i]:\ \|f\|_{a;r}\leqslant 1/2\bigr{\}}\]
_of the Banach space \({\mathcal{C}}^{r}_{a}[i]^{\rm b}\) into itself and has a unique fixed point on this ball._
Proof.: We have \(\Xi_{a}(0)=u\widetilde{A}_{a}^{-1}(u^{-1}R_{0})\), so \(\|\Xi_{a}(0)\|_{a;r}\leqslant\|u\widetilde{A}_{a}^{-1}\|_{a;r}\|u^{-1}R_{0}\|_ {a}\). Now \(\|u^{-1}R_{0}\|_{a}\to 0\) as \(a\to\infty\), so by Proposition 6.1.7(iii) we can take \(a\) so large that \(\|u\widetilde{A}_{a}^{-1}\|_{a;r}\|u^{-1}R_{0}\|_{a}\leqslant\frac{1}{4}\). For \(f\), \(g\) in the closed ball above we have by Lemma 6.2.2,
\[\|\Xi_{a}(f)-\Xi_{a}(g)\|_{a;r}\ =\ \|\Xi_{a}(f+(g-f))-\Xi_{a}(f)\|_{a;r}\ \leqslant\ C_{a}\cdot d\|f-g\|_{a;r}.\]
Take \(a\) so large that also \(C_{a}d\leqslant\frac{1}{2}\). Then \(\|\Xi_{a}(f)-\Xi_{a}(g)\|_{a;r}\leqslant\frac{1}{2}\|f-g\|_{a;r}\). Applying this to \(g=0\) we see that \(\Xi_{a}\) maps the closed ball above to itself. Thus \(\Xi_{a}\) has a unique fixed point on this ball.
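As a concrete instance of the theorem (assuming, for illustration, that the germ \(x\) of the identity lies in \(H\)): with \(r=1\), \(\phi_{1}=x\), \(\mathfrak{v}=x^{-1}\), \(d=2\), \(w=1\), and \(R=x^{-2}(1+Y^{2})\), the equation \((*)\) reads

\[y^{\prime}-xy\ =\ x^{-2}(1+y^{2}),\qquad y\prec 1,\]

and for sufficiently large \(a\) we obtain a solution \(f\in\mathcal{C}^{1}_{a}[i]^{\mathrm{b}}\) of this equation on \([a,\infty)\) with \(\|f\|_{a;1}\leqslant 1/2\); since \(f=\Xi_{a}(f)\), Lemma 6.2.1 gives in addition \(f\preccurlyeq\mathfrak{v}^{\nu}=x^{-\nu}\).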
Note that if \(\deg R\leqslant 0\) (so \(R=R_{0}\)), then \(\Xi_{a}(f)=u\widetilde{A}_{a}^{-1}(u^{-1}R_{0})\) is independent of \(f\in{\mathcal{C}}^{r}_{a}[i]^{\rm b}\), so for sufficiently large \(a\), the fixed point \(f\in{\mathcal{C}}^{r}_{a}[i]^{\rm b}\) of \(\Xi_{a}\) with \(\|f\|_{a;r}\leqslant 1/2\) is \(f=\Xi_{a}(0)=u\widetilde{A}_{a}^{-1}(u^{-1}R_{0})\). Here is a variant of Theorem 6.2.3:
**Lemma 6.2.4**.: _Let \(h\in{\mathcal{C}}^{r}_{a_{0}}[i]\) be such that \(\|h\|_{a_{0};r}\leqslant 1/8\). Then for sufficiently large \(a\) there is a unique \(f\in{\mathcal{C}}^{r}_{a}[i]^{\rm b}\) such that \(\|f\|_{a;r}\leqslant 1/2\) and \(\Xi_{a}(f)=f+h\)._
Proof.: Consider the operator \(\Theta_{a}=\Xi_{a}-h\colon{\mathcal{C}}^{r}_{a}[i]^{\rm b}\to{\mathcal{C}}^{r}_ {a}[i]^{\rm b}\) given by
\[\Theta_{a}(y)\ :=\ \Xi_{a}(y)-h.\]
Arguing as in the proof of Theorem 6.2.3 we take \(a\) so large that \(\|\Xi_{a}(0)\|_{a;r}\leqslant\frac{1}{8}\). Then \(\|\Theta_{a}(0)\|_{a;r}\leqslant\|\Xi_{a}(0)\|_{a;r}+\|h\|_{a;r}\leqslant\frac{1}{4}\). Also, take \(a\) so large that \(C_{a}d\leqslant\frac{1}{2}\). Then for \(f,g\in\mathcal{C}_{a}^{r}[i]^{\mathrm{b}}\) with \(\|f\|_{a;r},\|g\|_{a;r}\leqslant 1/2\) we have
\[\|\Theta_{a}(f)-\Theta_{a}(g)\|_{a;r}\ =\ \|\Xi_{a}(f)-\Xi_{a}(g)\|_{a;r}\ \leqslant\ \tfrac{1}{2}\|f-g\|_{a;r}.\]
Now finish as in the proof of Theorem 6.2.3.
Next we investigate the difference between solutions of \((*)\) on \([a_{0},\infty)\):
**Lemma 6.2.5**.: _Suppose \(f,g\in\mathcal{C}_{a_{0}}^{r}[i]^{\mathrm{b}}\) and \(A_{a_{0}}(f)=R(f)\), \(A_{a_{0}}(g)=R(g)\). Then there are positive reals \(E\), \(\varepsilon\) such that for all \(a\) there exists an \(h_{a}\in\mathcal{C}_{a}^{r}[i]^{\mathrm{b}}\) with the property that for \(\theta_{a}:=(f-g)|_{[a,\infty)}\),_
\[A_{a}(h_{a})=0,\quad\theta_{a}-h_{a}\prec\mathfrak{v}^{w},\quad\|\theta_{a}-h _{a}\|_{a;r}\ \leqslant\ E\cdot\|\mathfrak{v}^{\varepsilon}\|_{a}\cdot\big{(}\|\theta_{a}\| _{a;r}+\cdots+\|\theta_{a}\|_{a;r}^{d}\big{)},\]
_and thus \(h_{a}\prec 1\) in case \(f-g\prec 1\)._
Proof.: Set \(\eta_{a}:=A_{a}(\theta_{a})=R(f)-R(g)\), where \(f\) and \(g\) stand for their restrictions to \([a,\infty)\). From \(R\prec\mathfrak{v}^{w}\) we get \(u^{-1}R(f)\in\mathcal{C}_{a}[i]^{\mathrm{b}}\) and \(u^{-1}R(g)\in\mathcal{C}_{a}[i]^{\mathrm{b}}\), so \(u^{-1}\eta_{a}\in\mathcal{C}_{a}[i]^{\mathrm{b}}\). By Proposition 6.1.7(i),(iv) we have
\[\xi_{a}\ :=\ u\widetilde{A}_{a}^{-1}(u^{-1}\eta_{a})\in\mathcal{C}_{a}^{r}[i]^{ \mathrm{b}},\qquad\xi_{a}\prec\mathfrak{v}^{w}.\]
Now \(\widetilde{A}_{a}(u^{-1}\xi_{a})=u^{-1}\eta_{a}\), that is, \(A_{a}(\xi_{a})=\eta_{a}\). Note that then \(h_{a}:=\theta_{a}-\xi_{a}\) satisfies \(A_{a}(h_{a})=0\). Now \(\xi_{a}=\theta_{a}-h_{a}\) and \(\xi_{a}=\Xi_{a}(g+\theta_{a})-\Xi_{a}(g)\), hence by Lemma 6.2.2 and its proof,
\[\|\theta_{a}-h_{a}\|_{a;r}\ =\ \|\xi_{a}\|_{a;r}\ \leqslant\ C_{a}\cdot\max\bigl\{1,\|g\|_{a;r}^{d}\bigr\}\cdot\bigl(\|\theta_{a}\|_{a;r}+\cdots+\|\theta_{a}\|_{a;r}^{d}\bigr),\ \text{with}\] \[C_{a}\ :=\ \bigl\|u\widetilde{A}_{a}^{-1}\bigr\|_{a;r}\cdot\sum_{\boldsymbol{i},\boldsymbol{j}}\bigl\|u^{-1}R_{\boldsymbol{j}}^{(\boldsymbol{i})}\bigr\|_{a}.\]
Take a real \(\varepsilon>0\) such that \(R\prec\mathfrak{v}^{\nu+\varepsilon}\). This gives a real \(e>0\) such that \(\sum_{\boldsymbol{i},\boldsymbol{j}}\big\|u^{-1}R_{\boldsymbol{j}}^{(\boldsymbol{i})}\big\|_{a}\leqslant e\|\mathfrak{v}^{\varepsilon}\|_{a}\) for all \(a\). Now use Proposition 6.1.7(iii).
The situation we have in mind in the lemma above is that \(f\) and \(g\) are close at infinity, in the sense that \(\|f-g\|_{a;r}\to 0\) as \(a\to\infty\). Then the lemma yields solutions of \(A(y)=0\) that are very close to \(f-g\) at infinity. However, being very close at infinity as stated in Lemma 6.2.5, namely \(\theta_{a}-h_{a}\prec\mathfrak{v}^{w}\) and the rest, is too weak for later use. We take up this issue again in Section 6.5 below. (In Corollary 6.2.15 later in the present section we already show: if \(f\neq g\) as germs, then \(h_{a}\neq 0\) for sufficiently large \(a\).)
### Preserving reality
We now assume in addition that \(A\) and \(R\) are real, that is, \(A\in H[\partial]\) and \(R\in H\{Y\}\). It is not clear that the fixed points constructed in the proof of Theorem 6.2.3 are then also real. Therefore we slightly modify this construction using real parts. We first apply the discussion following Lemma 6.1.5 to \(\widetilde{A}\) as well as to \(A\), increasing \(a_{0}\) so that for all \(a\) the \(\mathbb{R}\)-linear real part
\[\operatorname{Re}\widetilde{A}_{a}^{-1}\,:\ \mathcal{C}_{a}^{\mathrm{b}} \to\mathcal{C}_{a}^{\mathrm{b}}\]
maps \(\mathcal{C}_{a}^{\mathrm{b}}\) into \(\mathcal{C}_{a}^{r}\) and is right-inverse to \(\widetilde{A}_{a}\) on \(\mathcal{C}_{a}^{\mathrm{b}}\), with
\[\bigl{\|}(\operatorname{Re}\widetilde{A}_{a}^{-1})(f)\bigr{\|}_{a}\leqslant \bigl{\|}\widetilde{A}_{a}^{-1}(f)\bigr{\|}_{a}\qquad\text{ for all }f\in\mathcal{C}_{a}^{\mathrm{b}}.\]
Next we set
\[(\mathcal{C}_{a}^{r})^{\mathrm{b}}\ :=\ \bigl{\{}f\in\mathcal{C}_{a}^{r}:\ \|f\|_{a;r}< \infty\bigr{\}}\ =\ \mathcal{C}_{a}^{r}[i]^{\mathrm{b}}\cap\mathcal{C}_{a}^{r},\]
which is a real Banach space with respect to \(\|\cdot\|_{a;r}\). Finally, this increasing of \(a_{0}\) is done so that the original \(R_{\boldsymbol{j}}\in\mathcal{C}_{a_{0}}[\mathrm{i}]\) restrict to updated functions \(R_{\boldsymbol{j}}\in\mathcal{C}_{a_{0}}\). For all \(a\), take \(u\), \(\Xi_{a}\) as in Lemma 6.2.1. This lemma has the following real analogue as a consequence:
**Lemma 6.2.6**.: _The operator_
\[\mathrm{Re}\ \Xi_{a}\ :\ (\mathcal{C}_{a}^{r})^{\mathrm{b}}\to(\mathcal{C}_{a}^{r })^{\mathrm{b}},\quad f\mapsto\mathrm{Re}\big{(}\Xi_{a}(f)\big{)}\]
_satisfies \((\mathrm{Re}\ \Xi_{a})(f)\preccurlyeq\mathfrak{v}^{\nu}\) for \(f\in(\mathcal{C}_{a}^{r})^{\mathrm{b}}\), and any fixed point of \(\mathrm{Re}\ \Xi_{a}\) is a solution of \((*)\) on \([a,\infty)\)._
Below the constants \(C_{a}\) are as in Lemma 6.2.2.
**Lemma 6.2.7**.: _For \(f,g\in(\mathcal{C}_{a}^{r})^{\mathrm{b}}\),_
\[\big{\|}(\mathrm{Re}\ \Xi_{a})(f+g)-(\mathrm{Re}\,\Xi_{a})(f)\big{\|}_{a;r}\ \leqslant\ C_{a}\cdot\max\bigl{\{}1,\|f\|_{a;r}^{d}\bigr{\}}\cdot\bigl{(}\|g \|_{a;r}+\cdots+\|g\|_{a;r}^{d}\bigr{)}.\]
The next corollary is derived from Lemma 6.2.7 in the same way as Theorem 6.2.3 from Lemma 6.2.2:
**Corollary 6.2.8**.: _For all sufficiently large \(a\) the operator \(\mathrm{Re}\ \Xi_{a}\) maps the closed ball_
\[\bigl{\{}f\in\mathcal{C}_{a}^{r}:\ \|f\|_{a;r}\leqslant 1/2\bigr{\}}\]
_of the Banach space \((\mathcal{C}_{a}^{r})^{\mathrm{b}}\) into itself and has a unique fixed point on this ball._
Here is the real analogue of Lemma 6.2.4, with a similar proof:
**Corollary 6.2.9**.: _Let \(h\in\mathcal{C}_{a_{0}}^{r}\) be such that \(\|h\|_{a_{0};r}\leqslant 1/8\). Then for sufficiently large \(a\) there is a unique \(f\in\mathcal{C}_{a}^{r}\) such that \(\|f\|_{a;r}\leqslant 1/2\) and \((\mathrm{Re}\,\Xi_{a})(f)=f+h\)._
We also have a real analogue of Lemma 6.2.5:
**Corollary 6.2.10**.: _Suppose \(f,g\in(\mathcal{C}_{a_{0}}^{r})^{\mathrm{b}}\) and \(A_{a_{0}}(f)=R(f)\), \(A_{a_{0}}(g)=R(g)\). Then there are positive reals \(E\), \(\varepsilon\) such that for all \(a\) there exists an \(h_{a}\in(\mathcal{C}_{a}^{r})^{\mathrm{b}}\) with the property that for \(\theta_{a}:=(f-g)|_{[a,\infty)}\),_
\[A_{a}(h_{a})=0,\quad\theta_{a}-h_{a}\prec\mathfrak{v}^{w},\quad\|\theta_{a}-h_{a}\|_{a;r}\ \leqslant\ E\cdot\|\mathfrak{v}^{\varepsilon}\|_{a}\cdot\bigl(\|\theta_{a}\|_{a;r}+\cdots+\|\theta_{a}\|_{a;r}^{d}\bigr).\]
Proof.: Take \(h_{a}\) to be the real part of an \(h_{a}\) as in Lemma 6.2.5.
**Some useful bounds.** To prepare for Section 6.5 we derive in this subsection some bounds from Lemmas 6.2.2 and 6.2.5. Throughout we assume \(d,r\in\mathbb{N}^{\geqslant 1}\). We begin with an easy inequality:
**Lemma 6.2.11**.: _Let \((V,\|\cdot\|)\) be a normed \(\mathbb{C}\)-linear space, and \(f,g\in V\). Then_
\[\|f+g\|^{d}\ \leqslant\ 2^{d}\cdot\max\bigl{\{}1,\|f\|^{d}\bigr{\}}\cdot \max\bigl{\{}1,\|g\|^{d}\bigr{\}}.\]
Proof.: Use that \(\|f+g\|\leqslant\|f\|+\|g\|\leqslant 2\max\bigl{\{}1,\|f\|\bigr{\}}\cdot \max\bigl{\{}1,\|g\|\bigr{\}}\).
Now let \(u\), \(\Xi_{a}\) be as in Lemma 6.2.1. By that lemma, the operator
\[\Phi_{a}\ :\ \mathcal{C}_{a}^{r}[\mathrm{i}]^{\mathrm{b}}\times\mathcal{C}_{a}^{r }[\mathrm{i}]^{\mathrm{b}}\to\mathcal{C}_{a}^{r}[\mathrm{i}]^{\mathrm{b}}, \quad(f,y)\mapsto\Xi_{a}(f+y)-\Xi_{a}(f)\]
is continuous. Furthermore \(\Phi_{a}(f,0)=0\) for \(f\in\mathcal{C}_{a}^{r}[\mathrm{i}]^{\mathrm{b}}\) and
\[\Phi_{a}(f,g+y)-\Phi_{a}(f,g)\ =\ \Phi_{a}(f+g,y)\qquad\text{for $f,g,y\in \mathcal{C}_{a}^{r}[\mathrm{i}]^{\mathrm{b}}$}. \tag{6.2.1}\]
**Lemma 6.2.12**.: _There are \(C_{a},C_{a}^{+}\in\mathbb{R}^{\geqslant}\) such that for all \(f,g,y\in\mathcal{C}_{a}^{r}[\mathrm{i}]^{\mathrm{b}}\),_

\[\|\Phi_{a}(f,y)\|_{a;r}\ \leqslant\ C_{a}\cdot\max\bigl\{1,\|f\|_{a;r}^{d}\bigr\}\cdot\bigl(\|y\|_{a;r}+\cdots+\|y\|_{a;r}^{d}\bigr), \tag{6.2.2}\]

\[\|\Phi_{a}(f,g+y)-\Phi_{a}(f,g)\|_{a;r}\ \leqslant\ C_{a}^{+}\cdot\max\bigl\{1,\|f\|_{a;r}^{d}\bigr\}\cdot\max\bigl\{1,\|g\|_{a;r}^{d}\bigr\}\cdot\bigl(\|y\|_{a;r}+\cdots+\|y\|_{a;r}^{d}\bigr). \tag{6.2.3}\]
_We can take these \(C_{a},C_{a}^{+}\) such that \(C_{a},C_{a}^{+}\to 0\) as \(a\to\infty\), and do so below._
Proof.: The \(C_{a}\) as in Lemma 6.2.2 satisfy the requirements on the \(C_{a}\) here. Now let \(f,g,y\in\mathcal{C}_{a}^{r}[\mathrm{i}]^{\mathrm{b}}\). Then by (6.2.1) and (6.2.2) we have
\[\|\Phi_{a}(f,g+y)-\Phi_{a}(f,g)\|_{a;r}\ \leqslant\ C_{a}\cdot\max\bigl{\{}1,\|f+g \|_{a;r}^{d}\bigr{\}}\cdot\bigl{(}\|y\|_{a;r}+\cdots+\|y\|_{a;r}^{d}\bigr{)}.\]
Thus by Lemma 6.2.11, \(C_{a}^{+}:=2^{d}\cdot C_{a}\) has the required property.
Next, let \(f\), \(g\) be as in the hypothesis of Lemma 6.2.5 and take \(E\), \(\varepsilon\), and \(h_{a}\) (for each \(a\)) as in its conclusion. Thus for all \(a\) and \(\theta_{a}:=(f-g)|_{[a,\infty)}\),
\[\|\theta_{a}-h_{a}\|_{a;r}\ \leqslant\ E\cdot\|\mathfrak{v}^{\varepsilon}\|_{a} \cdot\bigl{(}\|\theta_{a}\|_{a;r}+\cdots+\|\theta_{a}\|_{a;r}^{d}\bigr{)},\]
and if \(f-g\prec 1\), then \(h_{a}\prec 1\). So
\[\|\theta_{a}-h_{a}\|_{a;r}\ \leqslant\ E\cdot\|\mathfrak{v}^{\varepsilon}\|_{a} \cdot\bigl{(}\rho+\cdots+\rho^{d}\bigr{)},\quad\rho:=\|f-g\|_{a_{0};r}.\]
We let
\[B_{a}\ :=\ \bigl{\{}y\in\mathcal{C}_{a}^{r}[\mathrm{i}]^{\mathrm{b}}:\ \|y-h_{a}\|_{a;r}\leqslant 1/2 \bigr{\}}\]
be the closed ball of radius \(1/2\) around \(h_{a}\) in \(\mathcal{C}_{a}^{r}[\mathrm{i}]^{\mathrm{b}}\). Using \(\mathfrak{v}^{\varepsilon}\prec 1\) we take \(a_{1}\geqslant a_{0}\) so that \(\theta_{a}\in B_{a}\) for all \(a\geqslant a_{1}\). Then for \(a\geqslant a_{1}\) we have
\[\|h_{a}\|_{a;r}\ \leqslant\ \|h_{a}-\theta_{a}\|_{a;r}+\|\theta_{a}\|_{a;r}\ \leqslant\ \tfrac{1}{2}+\rho,\]
and hence for \(y\in B_{a}\),
\[\|y\|_{a;r}\ \leqslant\ \|y-h_{a}\|_{a;r}+\|h_{a}\|_{a;r}\ \leqslant\ \tfrac{1}{2}+\bigl{(}\tfrac{1}{2}+\rho\bigr{)}\ =\ 1+\rho. \tag{6.2.4}\]
Consider now the continuous operators
\[\Phi_{a},\Psi_{a}\ :\ \mathcal{C}_{a}^{r}[\mathrm{i}]^{\mathrm{b}}\to\mathcal{C}_{a}^{r}[ \mathrm{i}]^{\mathrm{b}},\qquad\Phi_{a}(y):=\Xi_{a}(g+y)-\Xi_{a}(g),\quad \Psi_{a}(y):=\Phi_{a}(y)+h_{a}.\]
In the notation introduced above, \(\Phi_{a}(y)=\Phi_{a}(g,y)\) for \(y\in\mathcal{C}_{a}^{r}[\mathrm{i}]^{\mathrm{b}}\). With \(\xi_{a}\) as in the proof of Lemma 6.2.5 we also have \(\Phi_{a}(\theta_{a})=\xi_{a}\) and \(\Psi_{a}(\theta_{a})=\xi_{a}+h_{a}=\theta_{a}\). Below we reconstruct the fixed point \(\theta_{a}\) of \(\Psi_{a}\) from \(h_{a}\), for sufficiently large \(a\).
**Lemma 6.2.13**.: _There exists \(a_{2}\geqslant a_{1}\) such that for all \(a\geqslant a_{2}\) we have \(\Psi_{a}(B_{a})\subseteq B_{a}\), and \(\|\Psi_{a}(y)-\Psi_{a}(z)\|_{a;r}\leqslant\tfrac{1}{2}\|y-z\|_{a;r}\) for all \(y,z\in B_{a}\)._
Proof.: Take \(C_{a}\) as in Lemma 6.2.12, and let \(y\in B_{a}\). Then by (6.2.2),
\[\|\Phi_{a}(y)\|_{a;r}\ \leqslant\ C_{a}\cdot\max\bigl\{1,\|g\|_{a;r}^{d}\bigr\}\cdot\bigl(\|y\|_{a;r}+\cdots+\|y\|_{a;r}^{d}\bigr),\quad y\in B_{a},\ \text{so by (6.2.4)}\] \[\|\Psi_{a}(y)-h_{a}\|_{a;r}\ \leqslant\ C_{a}M,\quad M:=\max\bigl\{1,\|g\|_{a_{0};r}^{d}\bigr\}\cdot\bigl((1+\rho)+\cdots+(1+\rho)^{d}\bigr).\]
Recall that \(C_{a}\to 0\) as \(a\to\infty\). Suppose \(a\geqslant a_{1}\) is so large that \(C_{a}M\leqslant 1/2\). Then \(\Psi_{a}(B_{a})\subseteq B_{a}\). With \(C_{a}^{+}\) as in Lemma 6.2.12, (6.2.3) gives for \(y,z\in\mathcal{C}_{a}^{r}[\mathrm{i}]^{\mathrm{b}}\),
\[\|\Phi_{a}(y)-\Phi_{a}(z)\|_{a;r}\leqslant\\ C_{a}^{+}\cdot\max\bigl{\{}1,\|g\|_{a;r}^{d}\bigr{\}}\cdot \max\bigl{\{}1,\|z\|_{a;r}^{d}\bigr{\}}\cdot\bigl{(}\|y-z\|_{a;r}+\cdots+\|y- z\|_{a;r}^{d}\bigr{)}.\]
Hence with \(N:=\max\bigl{\{}1,\|g\|_{a_{0};r}^{d}\bigr{\}}\cdot(1+\rho)^{d}\cdot d\) we obtain for \(y,z\in B_{a}\) that
\[\|\Psi_{a}(y)-\Psi_{a}(z)\|_{a;r}\ \leqslant\ C_{a}^{+}N\|y-z\|_{a;r},\]
so \(\|\Psi_{a}(y)-\Psi_{a}(z)\|_{a;r}\leqslant\frac{1}{2}\|y-z\|_{a;r}\) if \(C_{a}^{+}N\leqslant 1/2\).
Below \(a_{2}\) is as in Lemma 6.2.13.
**Corollary 6.2.14**.: _If \(a\geqslant a_{2}\), then \(\lim_{n\to\infty}\Psi_{a}^{n}(h_{a})=\theta_{a}\) in \(\mathcal{C}_{a}^{r}[i]^{\mathrm{b}}\)._
Proof.: Let \(a\geqslant a_{2}\). Then \(\Psi_{a}\) has a unique fixed point on \(B_{a}\). As \(\Psi_{a}(\theta_{a})=\theta_{a}\in B_{a}\), this fixed point is \(\theta_{a}\).
**Corollary 6.2.15**.: _If \(f\neq g\) as germs, then \(h_{a}\neq 0\) for \(a\geqslant a_{2}\)._
Proof.: Let \(a\geqslant a_{2}\). Then \(\lim_{n\to\infty}\Psi_{a}^{n}(h_{a})=\theta_{a}\). If \(h_{a}=0\), then \(\Psi_{a}=\Phi_{a}\), and hence \(\theta_{a}=0\), since \(\Phi_{a}(0)=0\).
### Smoothness Considerations
We assume \(r\in\mathbb{N}\) in this section. We prove here as much smoothness of solutions of algebraic differential equations over Hardy fields as could be hoped for. In particular, the solutions in \(\mathcal{C}_{a}^{r}[i]^{\mathrm{b}}\) of the equation \((*)\) in Section 6.2 actually have their germs in \(\mathcal{C}^{<\infty}[i]\). To make this precise, consider a "differential" polynomial
\[P\ =\ P(Y,\dots,Y^{(r)})\ \in\ \mathcal{C}^{n}[i]\bigl{[}Y,\dots,Y^{(r)} \bigr{]}.\]
We put _differential_ in quotes since \(\mathcal{C}^{n}[i]\) is not naturally a differential ring. Nevertheless, \(P\) defines an obvious evaluation map
\[f\mapsto P\bigl{(}f,\dots,f^{(r)}\bigr{)}\ :\ \mathcal{C}^{r}[i]\to\mathcal{C}[i].\]
We also have the "separant"
\[S_{P}\ :=\ \frac{\partial P}{\partial Y^{(r)}}\ \in\ \mathcal{C}^{n}[i]\bigl{[}Y, \dots,Y^{(r)}\bigr{]}\]
of \(P\).
**Proposition 6.3.1**.: _Assume \(n\geqslant 1\). Let \(f\in\mathcal{C}^{r}[i]\) be such that_
\[P\bigl{(}f,\dots,f^{(r)}\bigr{)}=0\in\mathcal{C}[i]\quad\text{ and }\quad S_{P} \bigl{(}f,\dots,f^{(r)}\bigr{)}\in\mathcal{C}[i]^{\times}.\]
_Then \(f\in\mathcal{C}^{n+r}[i]\). Thus if \(P\in\mathcal{C}^{<\infty}[i]\bigl{[}Y,\dots,Y^{(r)}\bigr{]}\), then \(f\in\mathcal{C}^{<\infty}[i]\). Moreover, if \(P\in\mathcal{C}^{\infty}[i]\bigl{[}Y,\dots,Y^{(r)}\bigr{]}\), then \(f\in\mathcal{C}^{\infty}[i]\), and likewise with \(\mathcal{C}^{\omega}[i]\) in place of \(\mathcal{C}^{\infty}[i]\)._
We deduce this from the lemma below, which has a complex-analytic hypothesis. Let \(U\subseteq\mathbb{R}\times\mathbb{C}^{1+r}\) be open. Let \(t\) range over \(\mathbb{R}\) and \(z_{0},\dots,z_{r}\) over \(\mathbb{C}\), and set \(x_{j}:=\operatorname{Re}z_{j}\), \(y_{j}:=\operatorname{Im}z_{j}\) for \(j=0,\dots,r\). We also set
\[U(t,z_{0},\dots,z_{r-1})\ :=\ \bigl{\{}z_{r}:(t,z_{0},\dots,z_{r-1},z_{r})\in U \bigr{\}},\]
an open subset of \(\mathbb{C}\), and we assume \(\Phi\colon U\to\mathbb{C}\) and \(n\geqslant 1\) are such that
\[\operatorname{Re}\Phi,\ \operatorname{Im}\Phi\ :\ U\to\mathbb{R}\]
are \(\mathcal{C}^{n}\)-functions of \((t,x_{0},y_{0},\dots,x_{r},y_{r})\), and for all \(t,z_{0},\dots,z_{r-1}\) the function
\[z_{r}\mapsto\Phi(t,z_{0},\dots,z_{r-1},z_{r})\ :\ U(t,z_{0},\dots,z_{r-1})\to \mathbb{C}\]
is holomorphic (the complex-analytic hypothesis alluded to).
**Lemma 6.3.2**.: _Let \(I\subseteq\mathbb{R}\) be a nonempty open interval and suppose \(f\in\mathcal{C}^{r}(I)[\mathrm{i}]\) is such that for all \(t\in I\),_
* \(\big{(}t,f(t),\ldots,f^{(r)}(t)\big{)}\in U\)_;_
* \(\Phi\big{(}t,f(t),\ldots,f^{(r)}(t)\big{)}=0\)_; and_
* \((\partial\Phi/\partial z_{r})\big{(}t,f(t),\ldots,f^{(r)}(t)\big{)}\neq 0\)_._
_Then \(f\in\mathcal{C}^{n+r}(I)[\mathrm{i}]\)._
Proof.: Set \(A:=\operatorname{Re}\Phi,\ B:=\operatorname{Im}\Phi\) and \(g:=\operatorname{Re}f,\ h:=\operatorname{Im}f\). Then for all \(t\in I\),
\[A\big{(}t,g(t),h(t),g^{\prime}(t),h^{\prime}(t),\ldots,g^{(r)}(t),h^{(r)}(t)\big{)} =\ 0\] \[B\big{(}t,g(t),h(t),g^{\prime}(t),h^{\prime}(t),\ldots,g^{(r)}(t),h^{(r)}(t)\big{)} =\ 0.\]
Consider the \(\mathcal{C}^{n}\)-map \((A,B)\colon U\to\mathbb{R}^{2}\), with \(U\) identified in the usual way with an open subset of \(\mathbb{R}^{1+2(1+r)}\). The Cauchy-Riemann equations give
\[\frac{\partial\Phi}{\partial z_{r}}\ =\ \frac{\partial A}{\partial x_{r}}+ \mathrm{i}\frac{\partial B}{\partial x_{r}},\qquad\frac{\partial A}{\partial x _{r}}\ =\ \frac{\partial B}{\partial y_{r}},\qquad\frac{\partial B}{\partial x_{r}}\ =\ -\frac{ \partial A}{\partial y_{r}}.\]
Thus the Jacobian matrix of the map \((A,B)\) with respect to its last two variables \(x_{r}\) and \(y_{r}\) has determinant
\[D\ =\ \left(\frac{\partial A}{\partial x_{r}}\right)^{2}+\left(\frac{ \partial B}{\partial x_{r}}\right)^{2}\ =\ \left|\frac{\partial\Phi}{\partial z_{r}}\right|^{2}\ :\ U\to\mathbb{R}.\]
Let \(t_{0}\in I\). Then
\[D\big{(}t_{0},g(t_{0}),h(t_{0}),\ldots,g^{(r)}(t_{0}),h^{(r)}(t_{0})\big{)} \neq 0,\]
so by the Implicit Mapping Theorem [57, (10.2.2), (10.2.3)] we have a connected open neighborhood \(V\) of the point
\[\big{(}t_{0},g(t_{0}),h(t_{0}),\ldots,g^{(r-1)}(t_{0}),h^{(r-1)}(t_{0})\big{)} \in\mathbb{R}^{1+2r},\]
open intervals \(J,K\subseteq\mathbb{R}\) containing \(g^{(r)}(t_{0})\), \(h^{(r)}(t_{0})\), respectively, and a \(\mathcal{C}^{n}\)-map
\[(G,H)\colon V\to J\times K\]
such that \(V\times J\times K\subseteq U\) and the zero set of \(\Phi\) on \(V\times J\times K\) equals the graph of \((G,H)\). Take an open subinterval \(I_{0}\) of \(I\) with \(t_{0}\in I_{0}\) such that for all \(t\in I_{0}\),
\[\big{(}t,g(t),h(t),g^{\prime}(t),h^{\prime}(t),\ldots,g^{(r-1)}(t),h^{(r-1)}(t ),g^{(r)}(t),h^{(r)}(t)\big{)}\in V\times J\times K.\]
Then the above gives that for all \(t\in I_{0}\) we have
\[g^{(r)}(t)\ =\ G\big{(}t,g(t),h(t),g^{\prime}(t),h^{\prime}(t), \ldots,g^{(r-1)}(t),h^{(r-1)}(t)\big{)},\] \[h^{(r)}(t)\ =\ H\big{(}t,g(t),h(t),g^{\prime}(t),h^{\prime}(t), \ldots,g^{(r-1)}(t),h^{(r-1)}(t)\big{)}.\]
It follows easily from these two equalities that \(g,h\) are of class \(\mathcal{C}^{n+r}\) on \(I_{0}\).
Let \(f\) continue to be as in Lemma 6.3.2. If \(\operatorname{Re}\Phi\), \(\operatorname{Im}\Phi\) are \(\mathcal{C}^{\infty}\), then by taking \(n\) arbitrarily high we conclude that \(f\in\mathcal{C}^{\infty}(I)[\mathrm{i}]\). Moreover:
**Corollary 6.3.3**.: _If \(\operatorname{Re}\Phi\), \(\operatorname{Im}\Phi\) are real-analytic, then \(f\in\mathcal{C}^{\omega}(I)[\mathrm{i}]\)._
Proof.: Same as that of Lemma 6.3.2, with the reference to [57, (10.2.3)] replaced by [57, (10.2.4)] to obtain that \(G\), \(H\) are real-analytic, and noting that then the last displayed relations for \(t\in I_{0}\) force \(g\), \(h\) to be real-analytic on \(I_{0}\) by [57, (10.5.3)].
**Lemma 6.3.4**.: _Let \(I\subseteq\mathbb{R}\) be a nonempty open interval, \(n\geqslant 1\), and_
\[P\ =\ P\big{(}Y,\ldots,Y^{(r)}\big{)}\ \in\ \mathcal{C}^{n}(I)[\mathrm{i}] \big{[}Y,\ldots,Y^{(r)}\big{]}.\]
_Let \(f\in\mathcal{C}^{r}(I)[\mathrm{i}]\) be such that_
\[P\big{(}f,\ldots,f^{(r)}\big{)}=0\in\mathcal{C}(I)[\mathrm{i}]\quad\text{ and }\quad(\partial P/\partial Y^{(r)})\big{(}f,\ldots,f^{(r)}\big{)}\in \mathcal{C}(I)[\mathrm{i}]^{\times}.\]
_Then \(f\in\mathcal{C}^{n+r}(I)[\mathrm{i}]\). Moreover, if \(P\in\mathcal{C}^{\infty}(I)[\mathrm{i}]\big{[}Y,\ldots,Y^{(r)}\big{]}\), then \(f\in\mathcal{C}^{\infty}(I)[\mathrm{i}]\), and likewise with \(\mathcal{C}^{\omega}(I)[\mathrm{i}]\) in place of \(\mathcal{C}^{\infty}(I)[\mathrm{i}]\)._
Proof.: Let \(P=\sum_{\boldsymbol{i}}P_{\boldsymbol{i}}Y^{\boldsymbol{i}}\) where all \(P_{\boldsymbol{i}}\in\mathcal{C}^{n}(I)[\mathrm{i}]\). Set \(U:=I\times\mathbb{C}^{1+r}\), and consider the map \(\Phi\colon U\to\mathbb{C}\) given by
\[\Phi(t,z_{0},\ldots,z_{r}):=\sum_{\boldsymbol{i}}P_{\boldsymbol{i}}(t)z^{ \boldsymbol{i}}\quad\text{where }z^{\boldsymbol{i}}:=z_{0}^{i_{0}}\cdots z_{r}^{i_{r}}\text{ for } \boldsymbol{i}=(i_{0},\ldots,i_{r})\in\mathbb{N}^{1+r}.\]
From Lemma 6.3.2 we obtain \(f\in\mathcal{C}^{n+r}(I)[\mathrm{i}]\). In view of Corollary 6.3.3 and the remark preceding it, and replacing \(n\) by \(\infty\) respectively \(\omega\), this argument also gives the second part of the lemma.
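A typical special case: if \(f\in\mathcal{C}^{1}(I)[\mathrm{i}]\) satisfies the linear differential equation \(f^{\prime}=gf+h\) with \(g,h\in\mathcal{C}^{\infty}(I)[\mathrm{i}]\), then \(P:=Y^{\prime}-gY-h\) has \(\partial P/\partial Y^{\prime}=1\), so Lemma 6.3.4 with \(r=1\) yields \(f\in\mathcal{C}^{\infty}(I)[\mathrm{i}]\).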
Proposition 6.3.1 follows from Lemma 6.3.4 by taking suitable representatives of the germs involved. Let now \(H\) be a Hardy field and \(P\in H[\mathrm{i}]\{Y\}\) of order \(r\). Then \(P\in\mathcal{C}^{<\infty}[\mathrm{i}]\big{[}Y,\ldots,Y^{(r)}\big{]}\), and so \(P(f):=P\big{(}f,\ldots,f^{(r)}\big{)}\in\mathcal{C}[\mathrm{i}]\) for \(f\in\mathcal{C}^{r}[\mathrm{i}]\) as explained in the beginning of this section.
For notational convenience we set
\[\mathcal{C}^{n}[\mathrm{i}]^{\preccurlyeq}:=\big{\{}f\in\mathcal{C}^{n}[ \mathrm{i}]:f,f^{\prime},\ldots,f^{(n)}\preccurlyeq 1\big{\}},\qquad( \mathcal{C}^{n})^{\preccurlyeq}:=\mathcal{C}^{n}[\mathrm{i}]^{\preccurlyeq} \cap\mathcal{C}^{n},\]
and likewise with \(\prec\) instead of \(\preccurlyeq\). Then \(\mathcal{C}^{n}[\mathrm{i}]^{\preccurlyeq}\) is a \(\mathbb{C}\)-subalgebra of \(\mathcal{C}^{n}[\mathrm{i}]\) and \((\mathcal{C}^{n})^{\preccurlyeq}\) is an \(\mathbb{R}\)-subalgebra of \(\mathcal{C}^{n}\). Also, \(\mathcal{C}^{n}[\mathrm{i}]^{\prec}\) is an ideal of \(\mathcal{C}^{n}[\mathrm{i}]^{\preccurlyeq}\), and likewise with \(\mathcal{C}^{n}\) instead of \(\mathcal{C}^{n}[\mathrm{i}]\). We have \(\mathcal{C}^{n}[\mathrm{i}]^{\preccurlyeq}\supseteq\mathcal{C}^{n+1}[\mathrm{i}]^{\preccurlyeq}\) and \((\mathcal{C}^{n})^{\preccurlyeq}\supseteq(\mathcal{C}^{n+1})^{\preccurlyeq}\), and likewise with \(\prec\) instead of \(\preccurlyeq\). Thus in the notation from Section 5.4:
**Corollary 6.3.5**.: _Suppose_
\[P=Y^{(r)}+f_{1}Y^{(r-1)}+\cdots+f_{r}Y-R\quad\text{ with }f_{1},\ldots,f_{r} \text{ in }H[\mathrm{i}]\text{ and }R_{\geqslant 1}\prec 1.\]
_Let \(f\in\mathcal{C}^{r}[\mathrm{i}]^{\preccurlyeq}\) be such that \(P(f)=0\). Then \(f\in\mathcal{C}^{<\infty}[\mathrm{i}]\). If \(H\subseteq\mathcal{C}^{\infty}\), then \(f\in\mathcal{C}^{\infty}[\mathrm{i}]\). If \(H\subseteq\mathcal{C}^{\omega}\), then \(f\in\mathcal{C}^{\omega}[\mathrm{i}]\)._
Proof.: We have \(S_{P}=\frac{\partial P}{\partial Y^{(r)}}=1-S\) with \(S:=\frac{\partial R_{\geqslant 1}}{\partial Y^{(r)}}\prec 1\) and thus
\[S_{P}(f,\ldots,f^{(r)})\ =\ 1-S(f,\ldots,f^{(r)}),\qquad S(f,\ldots,f^{(r)}) \prec 1,\]
so \(S_{P}(f,\ldots,f^{(r)})\in\mathcal{C}[\mathrm{i}]^{\times}\). Now appeal to Proposition 6.3.1.
Thus the germ of any solution on \([a,\infty)\) of the asymptotic equation \((*)\) of Section 6.2 lies in \(\mathcal{C}^{<\infty}[\mathrm{i}]\), and even in \(\mathcal{C}^{\infty}[\mathrm{i}]\) (respectively \(\mathcal{C}^{\omega}[\mathrm{i}]\)) if \(H\) is in addition a \(\mathcal{C}^{\infty}\)-Hardy field (respectively, a \(\mathcal{C}^{\omega}\)-Hardy field).
**Corollary 6.3.6**.: _Suppose \((P,1,\widehat{a})\) is a normal slot in \(H[\mathrm{i}]\) of order \(r\geqslant 1\), and \(f\in\mathcal{C}^{r}[\mathrm{i}]^{\preccurlyeq}\), \(P(f)=0\). Then \(f\in\mathcal{C}^{<\infty}[\mathrm{i}]\). If \(H\subseteq\mathcal{C}^{\infty}\), then \(f\in\mathcal{C}^{\infty}[\mathrm{i}]\). If \(H\subseteq\mathcal{C}^{\omega}\), then \(f\in\mathcal{C}^{\omega}[\mathrm{i}]\)._
Proof.: Multiplying \(P\) by an element of \(H[\mathrm{i}]^{\times}\) we arrange
\[P_{1}\ =\ Y^{(r)}+f_{1}Y^{(r-1)}+\cdots+f_{r}Y,\quad f_{1},\ldots,f_{r}\in H[ \mathrm{i}],\]
and then the hypothesis of Corollary 6.3.5 is satisfied.
For the differential subfield \(K:=H[\mathrm{i}]\) of the differential ring \(\mathcal{C}^{<\infty}[\mathrm{i}]\) we have:
**Corollary 6.3.7**.: _Suppose \(f\in\mathcal{C}^{<\infty}[\mathrm{i}]\) is such that \(P(f)=0\) and \(f\) generates a differential subfield \(K\langle f\rangle\) of \(\mathcal{C}^{<\infty}[\mathrm{i}]\) over \(K\). If \(H\) is a \(\mathcal{C}^{\infty}\)-Hardy field, then \(K\langle f\rangle\subseteq\mathcal{C}^{\infty}[\mathrm{i}]\), and likewise with \(\mathcal{C}^{\omega}\) in place of \(\mathcal{C}^{\infty}\)._
Proof.: Suppose \(H\) is a \(\mathcal{C}^{\infty}\)-Hardy field; it suffices to show \(f\in\mathcal{C}^{\infty}[\mathrm{i}]\). We may assume that \(P\) is a minimal annihilator of \(f\) over \(K\); then \(S_{P}(f)\neq 0\) in \(K\langle f\rangle\) and so \(S_{P}(f)\in\mathcal{C}[\mathrm{i}]^{\times}\). Hence the claim follows from Proposition 6.3.1.
With \(H\) replacing \(K\) in this proof we obtain the "real" version:
**Corollary 6.3.8**.: _Suppose \(f\in\mathcal{C}^{<\infty}\) is hardian over \(H\) and \(P(f)=0\) for some \(P\in H\{Y\}^{\neq}\). Then \(H\subseteq\mathcal{C}^{\infty}\ \Rightarrow\ f\in\mathcal{C}^{\infty}\), and \(H\subseteq\mathcal{C}^{\omega}\ \Rightarrow\ f\in\mathcal{C}^{\omega}\)._
This leads to:
**Corollary 6.3.9**.: _Suppose \(H\) is a \(\mathcal{C}^{\infty}\)-Hardy field. Then every \(\mathrm{d}\)-algebraic Hardy field extension of \(H\) is a \(\mathcal{C}^{\infty}\)-Hardy field; in particular, \(\mathrm{D}(H)\subseteq\mathcal{C}^{\infty}\). Likewise with \(\mathcal{C}^{\infty}\) replaced by \(\mathcal{C}^{\omega}\)._
In particular, \(\mathrm{D}(\mathbb{Q})\subseteq\mathcal{C}^{\omega}\) [33, Theorems 14.3, 14.9].
Let \(H\) be a \(\mathcal{C}^{\infty}\)-Hardy field. Then by Corollary 6.3.9, \(H\) is \(\mathrm{d}\)-maximal iff \(H\) has no proper \(\mathrm{d}\)-algebraic \(\mathcal{C}^{\infty}\)-Hardy field extension; thus every \(\mathcal{C}^{\infty}\)-maximal Hardy field is \(\mathrm{d}\)-maximal (so \(\mathrm{D}(H)\subseteq\mathrm{E}^{\infty}(H)\)), and \(H\) has a \(\mathrm{d}\)-maximal \(\mathrm{d}\)-algebraic \(\mathcal{C}^{\infty}\)-Hardy field extension. The same remarks apply with \(\omega\) in place of \(\infty\).
### Existence and uniqueness theorems \((^{*})\)
We finish this section with some existence and uniqueness results for algebraic differential equations. From this subsection, only Corollary 6.3.13 is used later (in the proofs of Lemmas 7.7.42 and 7.7.51, which are not needed for the proof of our main theorem). First, let \(U\), \(\Phi\) be as in Lemma 6.3.2 for \(n=1\); the argument in the proof of that lemma combined with the existence and uniqueness theorem for scalar differential equations [203, §11] yields:
**Lemma 6.3.10**.: _Let \((t_{0},c_{0},\ldots,c_{r})\in U\) be such that_
\[\Phi(t_{0},c_{0},\ldots,c_{r})=0\quad\text{and}\quad(\partial\Phi/\partial z_{ r})(t_{0},c_{0},\ldots,c_{r})\neq 0.\]
_Then for some open interval \(I\subseteq\mathbb{R}\) containing \(t_{0}\) there is a unique \(f\in\mathcal{C}^{r}(I)[\mathrm{i}]\) such that \(\big{(}f(t_{0}),f^{\prime}(t_{0}),\ldots,f^{(r)}(t_{0})\big{)}=(c_{0},\ldots,c _{r})\) and for all \(t\in I\),_
\[\big{(}t,f(t),\ldots,f^{(r)}(t)\big{)}\in U\quad\text{and}\quad\Phi\big{(}t,f( t),\ldots,f^{(r)}(t)\big{)}=0. \tag{6.3.1}\]
Proof.: Set \(A:=\mathrm{Re}\,\Phi\), \(B:=\mathrm{Im}\,\Phi\), \(a_{j}:=\mathrm{Re}\,c_{j}\), \(b_{j}:=\mathrm{Im}\,c_{j}\) (\(j=0,\ldots,r\)). As in the proof of Lemma 6.3.2 we identify \(U\) with an open subset of \(\mathbb{R}^{1+2(1+r)}\) and consider the \(\mathcal{C}^{1}\)-map \((A,B)\colon U\to\mathbb{R}^{2}\). The Jacobian matrix of the map \((A,B)\) with respect to its last two variables \(x_{r}\) and \(y_{r}\) has determinant
\[D\ =\ \left(\frac{\partial A}{\partial x_{r}}\right)^{2}+\left(\frac{\partial B}{ \partial x_{r}}\right)^{2}\ =\ \left|\frac{\partial\Phi}{\partial z_{r}}\right|^{2}\ :\ U\to\mathbb{R},\]
with \(D(t_{0},a_{0},b_{0},\ldots,a_{r},b_{r})\neq 0\), hence the Implicit Mapping Theorem [57, (10.2.2)] yields a connected open neighborhood \(V\) in \(\mathbb{R}^{1+2r}\) of the point
\[u_{0}:=(t_{0},a_{0},b_{0},\ldots,a_{r-1},b_{r-1}),\]
open intervals \(J,K\subseteq\mathbb{R}\) containing \(a_{r}\), \(b_{r}\), respectively, such that \(V\times J\times K\subseteq U\), as well as a \(\mathcal{C}^{1}\)-map \(F=(G,H)\colon V\to J\times K\) whose graph is \(\Phi^{-1}(0)\cap(V\times J\times K)\). Now by [203, §11, II] we have an open interval \(I\subseteq\mathbb{R}\) containing \(t_{0}\) as well as a \(\mathcal{C}^{r}\)-map \(u\colon I\to\mathbb{R}^{2}\) such that \(\big{(}t_{0},u(t_{0}),u^{\prime}(t_{0}),\ldots,u^{(r-1)}(t_{0})\big{)}=u_{0}\) and for all \(t\in I\):
\[\big{(}t,u(t),\ldots,u^{(r-1)}(t)\big{)}\in V\quad\text{and}\quad u^{(r)}(t)= F\big{(}t,u(t),\ldots,u^{(r-1)}(t)\big{)}.\]
Then the function \(f\colon I\to\mathbb{C}\) with \((\operatorname{Re}f,\operatorname{Im}f)=u\) is an element of \(\mathcal{C}^{r}(I)[i]\) such that \(\big{(}f(t_{0}),f^{\prime}(t_{0}),\ldots,f^{(r)}(t_{0})\big{)}=(c_{0},\ldots,c_{r})\) and (6.3.1) holds for all \(t\in I\).
Let any \(f_{1}\in\mathcal{C}^{r}(I)[i]\) be given with \(\big{(}f_{1}(t_{0}),f_{1}^{\prime}(t_{0}),\ldots,f_{1}^{(r)}(t_{0})\big{)}=(c _{0},\ldots,c_{r})\) such that (6.3.1) holds for all \(t\in I\) with \(f_{1}\) in place of \(f\). The closed subset
\[S\ :=\ \big{\{}t\in I:\ f_{1}^{(j)}(t)=f^{(j)}(t)\text{ for }j=0,\ldots,r\big{\}}\]
of \(I\) contains \(t_{0}\); it is enough to show that \(S\) is open. Towards this, let \(t_{1}\in S\). The map \(u_{1}:=(\operatorname{Re}f_{1},\operatorname{Im}f_{1})\colon I\to\mathbb{R}^{2}\) is of class \(\mathcal{C}^{r}\) and
\[\big{(}t_{1},u_{1}(t_{1}),\ldots,u_{1}^{(r)}(t_{1})\big{)}\ =\ \big{(}t_{1},u(t_{1}),\ldots,u^{(r)}(t_{1})\big{)}\in V \times J\times K,\]
which gives an open interval \(I_{1}\subseteq I\) containing \(t_{1}\) such that \(\big{(}t,u_{1}(t),\ldots,u_{1}^{(r)}(t)\big{)}\in V\times J\times K\) for all \(t\in I_{1}\). Since \(\Phi\big{(}t,f_{1}(t),\ldots,f_{1}^{(r)}(t)\big{)}=0\) for \(t\in I_{1}\), this yields \(u_{1}^{(r)}(t)=F\big{(}t,u_{1}(t),\ldots,u_{1}^{(r-1)}(t)\big{)}\) for \(t\in I_{1}\). So \(u_{1}=u\) on \(I_{1}\) by the uniqueness part of [203, §11, III], hence \(f_{1}=f\) on \(I_{1}\), and thus \(I_{1}\subseteq S\).
The second part of the proof gives a bit more: Suppose \(I,J\subseteq\mathbb{R}\) are open intervals with \(t_{0}\in I\cap J\) and the functions \(f\in\mathcal{C}^{r}(I)[i]\), \(g\in\mathcal{C}^{r}(J)[i]\) are such that
\[\big{(}f(t_{0}),f^{\prime}(t_{0}),\ldots,f^{(r)}(t_{0})\big{)}\ =\ (c_{0},\ldots,c_{r})\ =\ \big{(}g(t_{0}),g^{\prime}(t_{0}),\ldots,g^{(r)}(t_{0})\big{)},\]
(6.3.1) holds for all \(t\in I\), and (6.3.1) holds for all \(t\in J\) with \(g\) instead of \(f\). Assume also that \((\partial\Phi/\partial z_{r})\big{(}t,f(t),\ldots,f^{(r)}(t)\big{)}\neq 0\) for all \(t\in I\). Then
\[f(t)\ =\ g(t)\ \text{ for all }t\in I\cap J.\]
Next, let \(I\subseteq\mathbb{R}\) be a nonempty open interval and
\[P\ =\ P\big{(}Y,\ldots,Y^{(r)}\big{)}\ \in\ \mathcal{C}^{1}(I)[i]\big{[}Y, \ldots,Y^{(r)}\big{]}.\]
Applying Lemma 6.3.10 to the map \(\Phi\colon U:=I\times\mathbb{C}^{1+r}\to\mathbb{C}\) introduced in the proof of Lemma 6.3.4, we obtain:
**Lemma 6.3.11**.: _Let \(t_{0}\in I\) and \(c_{0},\ldots,c_{r}\in\mathbb{C}\) be such that_
\[\Phi(t_{0},c_{0},\ldots,c_{r})=0\quad\text{and}\quad(\partial\Phi/\partial z_{ r})(t_{0},c_{0},\ldots,c_{r})\neq 0.\]
_Then there is an open interval \(J\subseteq I\) containing \(t_{0}\) with a unique \(f\in\mathcal{C}^{r}(J)[i]\) such that \(\big{(}f(t_{0}),f^{\prime}(t_{0}),\ldots,f^{(r)}(t_{0})\big{)}=(c_{0},\ldots,c_ {r})\) and \(P\big{(}f,\ldots,f^{(r)}\big{)}=0\in\mathcal{C}(J)[i]\)._
This lemma and the remark following the proof of Lemma 6.3.10 yield:
**Corollary 6.3.12**.: _Given \(t_{0}\in I\) and \(c_{0},\ldots,c_{r}\in\mathbb{C}\), there is at most one function \(f\in\mathcal{C}^{r}(I)[i]\) such that \(\big{(}f(t_{0}),f^{\prime}(t_{0}),\ldots,f^{(r)}(t_{0})\big{)}=(c_{0},\ldots,c_ {r})\) as well as_
\[P\big{(}f,\ldots,f^{(r)}\big{)}=0\in\mathcal{C}(I)[i]\quad\text{ and }\quad( \partial P/\partial Y^{(r)})\big{(}f,\ldots,f^{(r)}\big{)}\in\mathcal{C}(I)[i]^{ \times}.\]
Now let \(a\) range over \(\mathbb{R}\), \(\boldsymbol{i}\) over \(\mathbb{N}^{1+r}\), and
\[P\ =\ P\big{(}Y,\ldots,Y^{(r)}\big{)}\ =\ \sum_{\boldsymbol{i}}P_{\boldsymbol{i}}Y^{ \boldsymbol{i}}\qquad\text{(all $P_{\boldsymbol{i}}\in\mathcal{C}_{a}^{1}[ \mathrm{i}]$)}\]
range over polynomials in \(\mathcal{C}_{a}^{1}[\mathrm{i}]\big{[}Y,\ldots,Y^{(r)}\big{]}\) of degree at most \(d\in\mathbb{N}^{\geqslant 1}\), and set \(P_{\geqslant 1}:=\sum_{|\boldsymbol{i}|\geqslant 1}P_{\boldsymbol{i}}Y^{\boldsymbol{i}}=P-P_{0}\). Recall that in Section 6.1 we defined
\[P(f)\ :=\ \sum_{\boldsymbol{i}}P_{\boldsymbol{i}}f^{\boldsymbol{i}}\in \mathcal{C}_{a}[\mathrm{i}]\qquad(f\in\mathcal{C}_{a}^{r}[\mathrm{i}]).\]
**Corollary 6.3.13**.: _There is an \(E=E(d,r)\in\mathbb{N}^{\geqslant 1}\) with the following property: if_
\[P=Y^{(r)}+f_{1}Y^{(r-1)}+\cdots+f_{r}Y-R,\quad f_{1},\ldots,f_{r}\in\mathcal{C }_{a}^{1}[\mathrm{i}],\ \|R_{\geqslant 1}\|_{a}\leqslant 1/E,\]
_then for any \(t_{0}\in\mathbb{R}^{>a}\) and \(c_{0},\ldots,c_{r}\in\mathbb{C}\) there is at most one \(f\in\mathcal{C}_{a}^{r}[\mathrm{i}]\) such that_
\[P(f)=0,\quad\big{(}f(t_{0}),f^{\prime}(t_{0}),\ldots,f^{(r)}(t_{0})\big{)}=(c _{0},\ldots,c_{r}),\quad\text{ and }\quad\|f\|_{a;r}\leqslant 1.\]
Proof.: Set \(E:=2d(d+1)D\) with \(D=D(0,d,r)\) as in Corollary 6.1.3. Let \(P\) be as in the hypothesis and \(f\in\mathcal{C}_{a}^{r}[\mathrm{i}]\), \(\|f\|_{a;r}\leqslant 1\). Then \(\partial P/\partial Y^{(r)}=1-S\) where \(S:=\partial R_{\geqslant 1}/\partial Y^{(r)}\) and therefore \((\partial P/\partial Y^{(r)})(f,\ldots,f^{(r)})=1-S(f,\ldots,f^{(r)})\). We have \(\|S\|_{a}\leqslant d/E=1/\big{(}2(d+1)D\big{)}\) and hence by Corollary 6.1.3:
\[\|S(f,\ldots,f^{(r)})\|_{a}\ \leqslant\ D\cdot\|S\|_{a}\cdot\big{(}1+\|f\|_{a;r} ^{1}+\cdots+\|f\|_{a;r}^{d}\big{)}\ \leqslant\ 1/2.\]
Thus \((\partial P/\partial Y^{(r)})(f,\ldots,f^{(r)})\in\mathcal{C}_{a}[\mathrm{i}]^ {\times}\). Now use Corollary 6.3.12.
Thus in the context of Section 6.2, if \(a\) is so large that the functions \(f_{1},\ldots,f_{r}\) and the \(R_{\boldsymbol{j}}\) there are \(\mathcal{C}^{1}\) on \([a,\infty)\) with \(\|R_{\boldsymbol{j}}\|_{a}\leqslant 1/E(d,r)\), then for all \(t_{0}\in\mathbb{R}^{>a}\) and \(c_{0},\ldots,c_{r}\in\mathbb{C}\), there is at most one \(f\in\mathcal{C}_{a}^{r}[\mathrm{i}]\) with \(\|f\|_{a;r}\leqslant 1\) such that \(A_{a}(f)=R(f)\) and \(\big{(}f(t_{0}),f^{\prime}(t_{0}),\ldots,f^{(r)}(t_{0})\big{)}=(c_{0},\ldots,c _{r})\).
### A theorem of Boshernitzan \((^{*})\)
Here we supply a proof of the following result stated in [33, Theorem 11.8], to be used in Section 7.7. (In loc. cit. the proof is only indicated.) Below, \(Y\) and \(Z\) are distinct indeterminates.
**Theorem 6.3.14**.: _Let \(H\) be a Hardy field, \(P\in H[Y,Z]^{\neq}\), and suppose \(P(y,y^{\prime})=0\) with \(y\in\mathcal{C}^{1}\) lying in a Hausdorff field extension of \(H\). Then \(y\in\mathrm{D}(H)\)._
In particular, if \(H\) is a d-perfect Hardy field and \(F\) is a Hardy field properly extending \(H\), then \(\mathrm{trdeg}(F|H)\geqslant 2\).
For the proof of Theorem 6.3.14 we first observe:
**Corollary 6.3.15**.: _Let \(H\) be a Hausdorff field. Then \(H\subseteq\mathcal{C}^{n}\ \Rightarrow\ H^{\mathrm{rc}}\subseteq\mathcal{C}^{n}\), and likewise with \(<\infty\), \(\infty\), and \(\omega\) in place of \(n\)._
Proof.: If \(y\in H^{\mathrm{rc}}\) has minimum polynomial \(P\in H[Y]\) over \(H\), then \(P(y)=0\in\mathcal{C}\) and \(S_{P}(y)=P^{\prime}(y)\in\mathcal{C}^{\times}\). Now use Proposition 6.3.1.
**Lemma 6.3.16**.: _Suppose \(f\in\mathcal{C}^{1}\) oscillates. Then we are in case_ (i) _or case_ (ii)_:_
* _there are arbitrarily large_ \(s\) _with_ \(f^{\prime}(s)=0\) _and_ \(f(s)>0\)_,_
* _there are arbitrarily large_ \(s\) _with_ \(f^{\prime}(s)=0\) _and_ \(f(s)<0\)_._
_In case_ (i) _there are also arbitrarily large \(s\) with \(f^{\prime}(s)=0\) and \(f(s)\leqslant 0\), and in case_ (ii) _there are also arbitrarily large \(s\) with \(f^{\prime}(s)=0\) and \(f(s)\geqslant 0\)._
Proof.: Let \(f\) be represented by a \(\mathcal{C}^{1}\)-function on an interval \((a,+\infty)\), also denoted by \(f\). Take \(b>a\) such that \(f(b)=0\), and then \(c>b\) with \(f(c)=0\) such that \(f(t)\neq 0\) for some \(t\in(b,c)\). Next, take \(s\in[b,c]\) such that \(|f(s)|=\max_{b\leqslant t\leqslant c}|f(t)|\). Then \(f(s)\neq 0\) and \(f^{\prime}(s)=0\). Since \(b\) can be taken arbitrarily large, we are in case (i) or in case (ii) above. (Of course, this is not an exclusive _or_.) For the remaining part of the lemma, use that in case (i) there are arbitrarily large \(s>a\) where \(f\) has a local minimum \(f(s)\leqslant 0\), and that in case (ii) there are arbitrarily large \(s>a\) where \(f\) has a local maximum \(f(s)\geqslant 0\).
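(For instance, the germ of \(\sin\) oscillates and falls under both cases: \(f^{\prime}(s)=0\) and \(f(s)=1>0\) for \(s=\frac{\pi}{2}+2k\pi\), while \(f^{\prime}(s)=0\) and \(f(s)=-1<0\) for \(s=\frac{3\pi}{2}+2k\pi\), \(k\in\mathbb{N}\).)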
In the next two lemmas \(H\), \(P\), \(y\) are as in Theorem 6.3.14.
**Lemma 6.3.17**.: _The germ \(y\) generates a Hardy field extension \(H\langle y\rangle\) of \(H\). If \(H\subseteq\mathcal{C}^{\infty}\), then \(H\langle y\rangle\subseteq\mathcal{C}^{\infty}\), and likewise with \(\omega\) in place of \(\infty\)._
Proof.: We are done if \(y\in H^{\mathrm{rc}}\), since \(H^{\mathrm{rc}}\) is a Hardy field with \(H\subseteq\mathcal{C}^{\infty}\Rightarrow H^{\mathrm{rc}}\subseteq\mathcal{C} ^{\infty}\) and \(H\subseteq\mathcal{C}^{\omega}\Rightarrow H^{\mathrm{rc}}\subseteq\mathcal{C} ^{\omega}\), by Proposition 5.3.2.
Suppose \(y\notin H^{\mathrm{rc}}\). We have the Hausdorff field \(F:=H(y)\subseteq\mathcal{C}^{1}\), and its real closure is by Proposition 5.1.4 the Hausdorff field
\[F^{\mathrm{rc}}\ =\ \big{\{}z\in\mathcal{C}:\ Q(z)=0\ \text{for some}\ Q\in F[Z]^{\neq}\big{\}}.\]
By Corollary 6.3.15 we have \(F^{\mathrm{rc}}\subseteq\mathcal{C}^{1}\). Set \(Q(Z):=P(y,Z)\in F[Z]^{\neq}\). We have \(Q(y^{\prime})=0\), so \(y^{\prime}\in F^{\mathrm{rc}}\), and thus \(\partial F\subseteq F^{\mathrm{rc}}\). Let now \(z\in F^{\mathrm{rc}}\), and let \(A(Z)\) be the minimum polynomial of \(z\) over \(F\), say \(A=Z^{n}+A_{1}Z^{n-1}+\cdots+A_{n}\)\((A_{1},\ldots,A_{n}\in F\), \(n\geqslant 1)\). With \(A^{\partial}:=A_{1}^{\prime}Z^{n-1}+\cdots+A_{n}^{\prime}\in F^{\mathrm{rc}}[Z]\) we have
\[0\ =\ A(z)^{\prime}\ =\ A^{\partial}(z)+A^{\prime}(z)\cdot z^{\prime}\]
with \(0\neq A^{\prime}(z)\in F^{\mathrm{rc}}\) and so \(z^{\prime}=-A^{\partial}(z)/A^{\prime}(z)\in F^{\mathrm{rc}}\). Hence \(F^{\mathrm{rc}}\) is a Hardy field, and so \(y\) generates a Hardy field extension \(H\langle y\rangle\subseteq F^{\mathrm{rc}}\) of \(H\). For the rest use Corollary 6.3.9.
In the proof of the next lemma we encounter an ordered field isomorphism
\[f\mapsto\widetilde{f}\ :\ E\to\widetilde{E}\]
between Hausdorff fields \(E\) and \(\widetilde{E}\). It extends uniquely to an ordered field isomorphism \(E^{\mathrm{rc}}\to\widetilde{E}^{\mathrm{rc}}\), also denoted by \(f\mapsto\widetilde{f}\), and to a ring isomorphism
\[Q\mapsto\widetilde{Q}\ :\ E[Y]\to\widetilde{E}[Y],\ \ \ \ \text{with}\ \widetilde{Y}=Y.\]
Let \(Q\in E[Y]^{\neq}\), and let \(y_{1}<\cdots<y_{m}\) be the distinct zeros of \(Q\) in \(E^{\mathrm{rc}}\). Then by Corollary 5.1.8, \(y_{1}(t)<\cdots<y_{m}(t)\) are the distinct real zeros of \(Q(t,Y)\), eventually. By the isomorphism, \(\widetilde{y}_{1}<\cdots<\widetilde{y}_{m}\) are the distinct zeros of \(\widetilde{Q}\) in \(\widetilde{E}^{\mathrm{rc}}\), and so \(\widetilde{y}_{1}(t)<\cdots<\widetilde{y}_{m}(t)\) are the distinct real zeros of \(\widetilde{Q}(t,Y)\), eventually. This has the trivial but useful consequence that, eventually, that is, for all sufficiently large \(t\),
\[Q(t,Y)=\widetilde{Q}(t,Y)\ \text{in}\ \mathbb{R}[Y]\ \Longrightarrow\ y_{1}(t)= \widetilde{y}_{1}(t),\ldots,y_{m}(t)=\widetilde{y}_{m}(t).\]
**Lemma 6.3.18**.: _Let \(u\) be an \(H\)-hardian germ. Then \(y-u\) is non-oscillating._
Proof.: By Lemma 6.3.17, \(y\) is \(H\)-hardian. Replacing \(H\) by \(H^{\mathrm{rc}}\) we arrange that \(H\) is real closed. Suppose towards a contradiction that \(w:=y-u\) oscillates. Then \(w^{\prime}=y^{\prime}-u^{\prime}\) oscillates. But \(y^{\prime}\) and \(u^{\prime}\) are \(H\)-hardian, so \(y^{\prime},u^{\prime}\notin H\): if, say, \(y^{\prime}\in H\), then \(w^{\prime}\) lies in the Hardy field \(H\langle u^{\prime}\rangle\) and so does not oscillate. Moreover, for all \(h\in H\) we have \(y^{\prime}>h\Leftrightarrow u^{\prime}>h\): if, say, \(y^{\prime}>h\) and \(u^{\prime}<h\), then eventually \(w^{\prime}=y^{\prime}-u^{\prime}>0\), again contradicting that \(w^{\prime}\) oscillates. This yields an ordered field isomorphism \(H(y^{\prime})\to H(u^{\prime})\) over \(H\) mapping \(y^{\prime}\) to \(u^{\prime}\), which extends uniquely to an ordered field isomorphism
\[f\mapsto\widetilde{f}\ :\ H(y^{\prime})^{\mathrm{rc}}\to H(u^{\prime})^{ \mathrm{rc}}.\]
Now \(P(y,y^{\prime})=0\) gives \(y\in H(y^{\prime})^{\mathrm{rc}}\), so \(\widetilde{y}\in H(u^{\prime})^{\mathrm{rc}}\subseteq H(u)^{\mathrm{rc}}\). The remarks preceding the lemma applied to \(E=H(y^{\prime})\), \(\widetilde{E}=H(u^{\prime})\) and \(Q(Y):=P(Y,y^{\prime})\) in \(E[Y]\) give that for all sufficiently large \(t\) with \(y^{\prime}(t)=u^{\prime}(t)\) (that is, \(w^{\prime}(t)=0\)) we have \(y(t)=\widetilde{y}(t)\). Now \(u,\widetilde{y}\in H(u)^{\mathrm{rc}}\), so \(\widetilde{y}<u\) or \(\widetilde{y}=u\) or \(u<\widetilde{y}\). Suppose \(\widetilde{y}<u\). (The other two cases lead to a contradiction in a similar way.) Then for all sufficiently large \(t\) with \(w^{\prime}(t)=0\) we have \(y(t)=\widetilde{y}(t)<u(t)\), so \(w(t)<0\), contradicting Lemma 6.3.16 for \(f:=w\).
With these lemmas in place, Theorem 6.3.14 now follows quickly:
Proof of Theorem 6.3.14.: Let \(E\) be a d-maximal Hardy field extension of \(H\); we show that then \(y\in E\). Now \(E\) is real closed by the remarks after Proposition 5.3.2, and \(y-u\) is non-oscillating for all \(u\in E\) by Lemma 6.3.18, so \(y\) lies in a Hausdorff field extension of \(E\) by Lemma 5.1.20, hence \(y\) is \(E\)-hardian by Lemma 6.3.17 with \(E\) in place of \(H\), and thus \(y\in E\) by d-maximality of \(E\).
As an application of Theorem 6.3.14 we record [32, Theorem 8.1]:
**Corollary 6.3.19**.: _Let \(\ell\in\mathrm{D}(\mathbb{Q})\) be such that \(\ell>\mathbb{R}\) and \(\mathrm{trdeg}\big{(}\mathbb{R}\langle x,\ell\rangle|\mathbb{R}\big{)}\leqslant 2\). Then \(\ell^{\mathrm{inv}}\in\mathrm{D}(\mathbb{Q})\)._
Proof.: By Lemma 5.3.8 and the remark preceding it, \(\ell^{\mathrm{inv}}\) is \(\mathbb{R}(x)\)-hardian with

\[P\big(\ell^{\mathrm{inv}},(\ell^{\mathrm{inv}})^{\prime}\big)\ =\ 0\qquad\text{for some }P\in\mathbb{R}(x)[Y,Z]^{\neq}.\]
Now Theorem 6.3.14 with \(H:=\mathbb{R}(x)\) and \(y:=\ell^{\mathrm{inv}}\) yields \(y\in\mathrm{D}(H)=\mathrm{D}(\mathbb{Q})\).
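For example (an illustration, not part of the surrounding argument): the germ \(\ell:=x\log x\) lies in \(\mathrm{D}(\mathbb{Q})\) with \(\ell>\mathbb{R}\), and \(\mathbb{R}\langle x,\ell\rangle=\mathbb{R}(x,\log x)\) has transcendence degree \(2\) over \(\mathbb{R}\), so Corollary 6.3.19 applies: the compositional inverse of \(x\log x\) lies in \(\mathrm{D}(\mathbb{Q})\).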
### 6.4. Application to Filling Holes in Hardy Fields
This section combines the analytic material above with the normalization results of Parts 3 and 4. _Throughout \(H\) is a Hardy field with \(H\not\subseteq\mathbb{R}\), and \(r\in\mathbb{N}^{\geqslant 1}\)._ Thus \(K:=H[\mathrm{i}]\subseteq\mathcal{C}^{<\infty}[\mathrm{i}]\) is an \(H\)-asymptotic extension of \(H\). (Later we impose extra assumptions on \(H\) like being real closed with asymptotic integration.) Note that \(v(H^{\times})\neq\{0\}\): take \(f\in H\setminus\mathbb{R}\); if \(f\not\asymp 1\), then \(v(f)\neq 0\), and if \(f\asymp 1\), then \(f^{\prime}\neq 0\) and \(f^{\prime}\prec 1\), so \(v(f^{\prime})>0\).
**Evaluating differential polynomials at germs.** Any \(Q\in K\{Y\}\) of order \(\leqslant r\) can be evaluated at any germ \(y\in\mathcal{C}^{r}[\mathrm{i}]\) to give a germ \(Q(y)\in\mathcal{C}[\mathrm{i}]\), with \(Q(y)\in\mathcal{C}\) for \(Q\in H\{Y\}\) of order \(\leqslant r\) and \(y\in\mathcal{C}^{r}\). (See the beginning of Section 6.3.) Here is a variant that we shall need. Let \(\phi\in H^{\times}\); with \(\partial\) denoting the derivation of \(K\), the derivation of the differential field \(K^{\phi}\) is then \(\delta:=\phi^{-1}\partial\). We also let \(\delta\) denote its extension \(f\mapsto\phi^{-1}f^{\prime}\colon\mathcal{C}^{1}[\mathrm{i}]\to\mathcal{C}[\mathrm{i}]\), which maps \(\mathcal{C}^{n+1}[\mathrm{i}]\) into \(\mathcal{C}^{n}[\mathrm{i}]\) and \(\mathcal{C}^{n+1}\) into \(\mathcal{C}^{n}\), for all \(n\). Thus for \(j\leqslant r\) we have the maps

\[\mathcal{C}^{r}[\mathrm{i}]\ \stackrel{\delta}{\longrightarrow}\ \mathcal{C}^{r-1}[\mathrm{i}]\ \stackrel{\delta}{\longrightarrow}\ \cdots\ \stackrel{\delta}{\longrightarrow}\ \mathcal{C}^{r-j+1}[\mathrm{i}]\ \stackrel{\delta}{\longrightarrow}\ \mathcal{C}^{r-j}[\mathrm{i}],\]

which by composition yield \(\delta^{j}\colon\mathcal{C}^{r}[\mathrm{i}]\to\mathcal{C}^{r-j}[\mathrm{i}]\), mapping \(\mathcal{C}^{r}\) into \(\mathcal{C}^{r-j}\). This allows us to define for \(Q\in K^{\phi}\{Y\}\) of order \(\leqslant r\) and \(y\in\mathcal{C}^{r}[\mathrm{i}]\) the germ \(Q(y)\in\mathcal{C}[\mathrm{i}]\) by

\[Q(y)\ :=\ q\big(y,\delta(y),\ldots,\delta^{r}(y)\big)\ \ \ \text{ where }Q=q\big(Y,\ldots,Y^{(r)}\big)\in K^{\phi}\big[Y,\ldots,Y^{(r)}\big].\]
Note that \(H^{\phi}\) is a differential subfield of \(K^{\phi}\), and if \(Q\in H^{\phi}\{Y\}\) is of order \(\leqslant r\) and \(y\in\mathcal{C}^{r}\), then \(Q(y)\in\mathcal{C}\).
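As a small illustration (under the extra assumption that \(x\in H\)): take \(\phi:=x^{-1}\), so \(\delta(y)=xy^{\prime}\) for \(y\in\mathcal{C}^{1}[\mathrm{i}]\). For \(Q:=YY^{\prime}-1\in K^{\phi}\{Y\}\) of order \(1\) and \(y\in\mathcal{C}^{1}[\mathrm{i}]\) we then get

\[Q(y)\ =\ y\,\delta(y)-1\ =\ xyy^{\prime}-1\ \in\ \mathcal{C}[\mathrm{i}],\]

a germ that in general differs from the evaluation \(yy^{\prime}-1\) of the same differential polynomial read in \(K\{Y\}\).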
**Lemma 6.4.1**.: _Let \(y\in\mathcal{C}^{r}[\mathrm{i}]\) and \(\mathfrak{m}\in K^{\times}\). Each of the following conditions implies \(y\in\mathcal{C}^{r}[\mathrm{i}]^{\preccurlyeq}\):_

* \(\phi\preccurlyeq 1\) _and_ \(\delta^{0}(y),\ldots,\delta^{r}(y)\preccurlyeq 1\)_;_
* \(\mathfrak{m}\preccurlyeq 1\) _and_ \(y\in\mathfrak{m}\,\mathcal{C}^{r}[\mathrm{i}]^{\preccurlyeq}\)_._

_Moreover, if \(\mathfrak{m}\preccurlyeq 1\) and \((y/\mathfrak{m})^{(0)},\ldots,(y/\mathfrak{m})^{(r)}\prec 1\), then \(y^{(0)},\ldots,y^{(r)}\prec 1\)._
Proof.: For (i), use the smallness of the derivation of \(H\) and the transformation formulas in [ADH, 5.7] expressing the iterates of \(\partial\) in terms of the iterates of \(\delta\). For (ii) and the "moreover" part, set \(y=\mathfrak{m}z\) with \(z=y/\mathfrak{m}\) and use the Product Rule and the smallness of the derivation of \(K\).
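For instance, for \(r=2\) the relevant transformation formulas read

\[y^{\prime}\ =\ \phi\,\delta(y),\qquad y^{\prime\prime}\ =\ \phi^{\prime}\,\delta(y)+\phi^{2}\,\delta^{2}(y),\]

and since \(\phi\preccurlyeq 1\) gives \(\phi^{\prime}\prec 1\) (smallness of the derivation of \(H\)), the bounds \(\delta(y),\delta^{2}(y)\preccurlyeq 1\) indeed yield \(y^{\prime},y^{\prime\prime}\preccurlyeq 1\), as claimed in (i).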
**Equations over Hardy fields and over their complexifications.** Let \(\phi>0\) be active in \(H\). We recall here from Section 5.3 how the asymptotic field \(K^{\phi}=H[\mathrm{i}]^{\phi}\) (with derivation \(\delta=\phi^{-1}\partial\)) is isomorphic to the asymptotic field \(K^{\circ}:=H^{\circ}[\mathrm{i}]\) for a certain Hardy field \(H^{\circ}\): Let \(\ell\in\mathcal{C}^{1}\) be such that \(\ell^{\prime}=\phi\); then \(\ell>\mathbb{R}\), \(\ell\in\mathcal{C}^{<\infty}\), and \(\ell^{\mathrm{inv}}\in\mathcal{C}^{<\infty}\) for the compositional inverse \(\ell^{\mathrm{inv}}\) of \(\ell\). The \(\mathbb{C}\)-algebra automorphism \(f\mapsto f^{\circ}:=f\circ\ell^{\mathrm{inv}}\) of \(\mathcal{C}[\mathrm{i}]\) (with inverse \(g\mapsto g\circ\ell\)) maps \(\mathcal{C}^{n}[\mathrm{i}]\) and \(\mathcal{C}^{n}\) onto themselves, and hence restricts to a \(\mathbb{C}\)-algebra automorphism of \(\mathcal{C}^{<\infty}[\mathrm{i}]\) mapping \(\mathcal{C}^{<\infty}\) onto itself. Moreover,

\[(\partial,\circ,\delta)\qquad\qquad(f^{\circ})^{\prime}\ =\ (\phi^{\circ})^{-1}(f^{\prime})^{\circ}\ =\ \delta(f)^{\circ}\qquad\text{for }f\in\mathcal{C}^{1}[\mathrm{i}].\]

Thus we have an isomorphism \(f\mapsto f^{\circ}\colon(\mathcal{C}^{<\infty}[\mathrm{i}])^{\phi}\to\mathcal{C}^{<\infty}[\mathrm{i}]\) of differential rings, and likewise with \(\mathcal{C}^{<\infty}\) in place of \(\mathcal{C}^{<\infty}[\mathrm{i}]\). As already pointed out in Section 5.3,
\[H^{\circ}\ :=\ \{h^{\circ}:h\in H\}\ \subseteq\ \mathcal{C}^{<\infty}\]
is a Hardy field, and \(f\mapsto f^{\circ}\) restricts to an isomorphism \(H^{\phi}\to H^{\circ}\) of pre-\(H\)-fields, and to an isomorphism \(K^{\phi}\to K^{\circ}\) of asymptotic fields. We extend the latter to the isomorphism
\[Q\mapsto Q^{\circ}\ :\ K^{\phi}\{Y\}\to K^{\circ}\{Y\}\]
of differential rings given by \(Y^{\circ}=Y\), which restricts to a differential ring isomorphism \(H^{\phi}\{Y\}\to H^{\circ}\{Y\}\). Using the identity \((\partial,\circ,\delta)\) it is routine to check that for \(Q\in K^{\phi}\{Y\}\) of order \(\leqslant r\) and \(y\in\mathcal{C}^{r}[\mathrm{i}]\),
\[Q(y)^{\circ}\ =\ (Q^{\circ})(y^{\circ}).\]
This allows us to translate algebraic differential equations over \(K\) into algebraic differential equations over \(K^{\circ}\): Let \(P\in K\{Y\}\) have order \(\leqslant r\) and let \(y\in\mathcal{C}^{r}[\mathfrak{i}]\).
**Lemma 6.4.2**.: \(P(y)^{\circ}=P^{\phi}(y)^{\circ}=P^{\phi\circ}(y^{\circ})\) _where \(P^{\phi\circ}:=(P^{\phi})^{\circ}\in K^{\circ}\{Y\}\), hence_
\[P(y)=0\ \Longleftrightarrow\ P^{\phi\circ}(y^{\circ})=0.\]
Moreover, \(y\prec\mathfrak{m}\ \Longleftrightarrow\ y^{\circ}\prec\mathfrak{m}^{\circ}\) for \(\mathfrak{m}\in K^{\times}\), so asymptotic side conditions are automatically taken care of under this "translation". Also, if \(\phi\preccurlyeq 1\) and \(y^{\circ}\in\mathcal{C}^{r}[\mathrm{i}]^{\preccurlyeq}\), then \(y\in\mathcal{C}^{r}[\mathrm{i}]^{\preccurlyeq}\), by Lemma 6.4.1(i) and \((\partial,\circ,\delta)\).
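To see the translation at work in a simple case (an illustration only, under the extra assumption that \(x\in H\) and \(\phi:=x^{-1}\) is active in \(H\)): we can take \(\ell=\log x\), hence \(\ell^{\mathrm{inv}}=\exp\) and \(f^{\circ}=f\circ\exp\). For \(P:=Y^{\prime}-1\in K\{Y\}\) we get \(P^{\phi}=\phi Y^{\prime}-1\) and \(P^{\phi\circ}=\phi^{\circ}Y^{\prime}-1\) with \(\phi^{\circ}(t)=\mathrm{e}^{-t}\). Accordingly, the solutions \(y=x+c\) of \(P(y)=0\) correspond under \(y\mapsto y^{\circ}\) to the solutions \(y^{\circ}(t)=\mathrm{e}^{t}+c\) of \(\mathrm{e}^{-t}(y^{\circ})^{\prime}-1=0\), as Lemma 6.4.2 predicts.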
_In the rest of this section \(H\supseteq\mathbb{R}\) is real closed with asymptotic integration._ Then \(H\) is an \(H\)-field, and \(K=H[\mathfrak{i}]\) is the algebraic closure of \(H\), a d-valued field with small derivation extending \(H\), constant field \(\mathbb{C}\), and value group \(\Gamma:=v(K^{\times})=v(H^{\times})\).
**Slots in Hardy fields and compositional conjugation.**_In this subsection we let \(\phi>0\) be active in \(H\); as in the previous subsection we take \(\ell\in\mathcal{C}^{1}\) such that \(\ell^{\prime}=\phi\) and use the superscript \(\circ\) accordingly: \(f^{\circ}:=f\circ\ell^{\mathrm{inv}}\) for \(f\in\mathcal{C}[\mathfrak{i}]\)._
Let \((P,\mathfrak{m},\widehat{a})\) be a slot in \(K\) of order \(r\), so \(\widehat{a}\notin K\) is an element of an immediate asymptotic extension \(\widehat{K}\) of \(K\) with \(P\in Z(K,\widehat{a})\) and \(\widehat{a}\prec\mathfrak{m}\). We associate to \((P,\mathfrak{m},\widehat{a})\) a slot in \(K^{\circ}\) as follows: choose an immediate asymptotic extension \(\widehat{K}^{\circ}\)
of \(K^{\circ}\) and an isomorphism \(\widehat{f}\mapsto\widehat{f}^{\circ}\colon\widehat{K}^{\phi}\to\widehat{K}^{\circ}\) of asymptotic fields extending the isomorphism \(f\mapsto f^{\circ}\colon K^{\phi}\to K^{\circ}\). Then \((P^{\phi\circ},\mathfrak{m}^{\circ},\widehat{a}^{\circ})\) is a slot in \(K^{\circ}\) of the same complexity as \((P,\mathfrak{m},\widehat{a})\). The equivalence class of \((P^{\phi\circ},\mathfrak{m}^{\circ},\widehat{a}^{\circ})\) does not depend on the choice of \(\widehat{K}^{\circ}\) and the isomorphism \(\widehat{K}^{\phi}\to\widehat{K}^{\circ}\). If \((P,\mathfrak{m},\widehat{a})\) is a hole in \(K\), then \((P^{\phi\circ},\mathfrak{m}^{\circ},\widehat{a}^{\circ})\) is a hole in \(K^{\circ}\), and likewise with "minimal hole" in place of "hole". Moreover, by Lemmas 3.1.19, 3.3.20, and 3.3.40:
**Lemma 6.4.3**.: _If \((P,\mathfrak{m},\widehat{a})\) is \(Z\)-minimal, then so is \((P^{\phi\circ},\mathfrak{m}^{\circ},\widehat{a}^{\circ})\), and likewise with "quasi-linear" and "special" in place of "\(Z\)-minimal". If \((P,\mathfrak{m},\widehat{a})\) is steep and \(\phi\preccurlyeq 1\), then \((P^{\phi\circ},\mathfrak{m}^{\circ},\widehat{a}^{\circ})\) is steep, and likewise with "deep", "normal", and "strictly normal" in place of "steep"._
Next, let \((P,\mathfrak{m},\widehat{a})\) be a slot in \(H\) of order \(r\), so \(\widehat{a}\notin H\) is an element of an immediate asymptotic extension \(\widehat{H}\) of \(H\) with \(P\in Z(H,\widehat{a})\) and \(\widehat{a}\prec\mathfrak{m}\). We associate to \((P,\mathfrak{m},\widehat{a})\) a slot in \(H^{\circ}\) as follows: choose an immediate asymptotic extension \(\widehat{H}^{\circ}\) of \(H^{\circ}\) and an isomorphism \(\widehat{f}\mapsto\widehat{f}^{\circ}\colon\widehat{H}^{\phi}\to\widehat{H}^{\circ}\) of asymptotic fields extending the isomorphism \(f\mapsto f^{\circ}\colon H^{\phi}\to H^{\circ}\). Then \((P^{\phi\circ},\mathfrak{m}^{\circ},\widehat{a}^{\circ})\) is a slot in \(H^{\circ}\) of the same complexity as \((P,\mathfrak{m},\widehat{a})\). The equivalence class of \((P^{\phi\circ},\mathfrak{m}^{\circ},\widehat{a}^{\circ})\) does not depend on the choice of \(\widehat{H}^{\circ}\) and the isomorphism \(\widehat{H}^{\phi}\to\widehat{H}^{\circ}\). If \((P,\mathfrak{m},\widehat{a})\) is a hole in \(H\), then \((P^{\phi\circ},\mathfrak{m}^{\circ},\widehat{a}^{\circ})\) is a hole in \(H^{\circ}\), and likewise with "minimal hole" in place of "hole". Lemma 6.4.3 goes through in this setting. Also, recalling Lemma 5.3.6, if \(H\) is Liouville closed and \((P,\mathfrak{m},\widehat{a})\) is ultimate, then \((P^{\phi\circ},\mathfrak{m}^{\circ},\widehat{a}^{\circ})\) is ultimate.
Moreover, by Lemmas 4.3.5 and 4.3.28, and Corollaries 4.5.23 and 4.5.39:
**Lemma 6.4.4**.:
1. _If_ \(\phi\preccurlyeq 1\) _and_ \((P,\mathfrak{m},\widehat{a})\) _is split-normal, then_ \((P^{\phi\circ},\mathfrak{m}^{\circ},\widehat{a}^{\circ})\) _is split-normal; likewise with "split-normal" replaced by "_(_almost_) _strongly split-normal"._
2. _If_ \(\phi\prec 1\) _and_ \((P,\mathfrak{m},\widehat{a})\) _is_ \(Z\)_-minimal, deep, and repulsive-normal, then_ \((P^{\phi\circ},\mathfrak{m}^{\circ},\widehat{a}^{\circ})\) _is repulsive-normal; likewise with "repulsive-normal" replaced by "_(_almost_) _strongly repulsive-normal"._
**Reformulations.** We reformulate here some results of Sections 6.2 and 6.3 to facilitate their use. As in Section 3.1 we set for \(\mathfrak{v}\in K^{\times}\), \(\mathfrak{v}\prec 1\):
\[\Delta(\mathfrak{v})\ :=\ \big{\{}\gamma\in\Gamma:\gamma=o(v\mathfrak{v})\big{\}},\]
a proper convex subgroup of \(\Gamma\). In the next lemma, \(P\in K\{Y\}\) has order \(r\) and \(P=Q-R\), where \(Q,R\in K\{Y\}\) and \(Q\) is homogeneous of degree \(1\) and order \(r\). We set \(w:=\operatorname{wt}(P)\), so \(w\geqslant r\geqslant 1\).
**Lemma 6.4.5**.: _Suppose that \(L_{Q}\) splits strongly over \(K\), \(\mathfrak{v}(L_{Q})\preccurlyeq 1\), and_
\[R\ \prec_{\Delta}\ \mathfrak{v}(L_{Q})^{w+1}Q,\qquad\Delta:=\Delta\big(\mathfrak{v}(L_{Q})\big).\]
_Then \(P(y)=0\) and \(y^{\prime},\ldots,y^{(r)}\preccurlyeq 1\) for some \(y\prec\mathfrak{v}(L_{Q})^{w}\) in \(\mathcal{C}^{<\infty}[\mathrm{i}]\). Moreover:_
1. _if_ \(P,Q\in H\{Y\}\)_, then there is such_ \(y\) _in_ \(\mathcal{C}^{<\infty}\)_;_
2. _if_ \(H\subseteq\mathcal{C}^{\infty}\)_, then for any_ \(y\in\mathcal{C}^{r}[\mathrm{i}]^{\preccurlyeq}\) _with_ \(P(y)=0\) _we have_ \(y\in\mathcal{C}^{\infty}[\mathrm{i}]\)_; likewise with_ \(\mathcal{C}^{\omega}\) _in place of_ \(\mathcal{C}^{\infty}\)_._
Proof.: Set \(\mathfrak{v}:=|\mathfrak{v}(L_{Q})|\in H^{>}\), so \(\mathfrak{v}\asymp\mathfrak{v}(L_{Q})\). Take \(f\in K^{\times}\) such that \(A:=f^{-1}L_{Q}\) is monic; then \(\mathfrak{v}(A)=\mathfrak{v}(L_{Q})\asymp\mathfrak{v}\) and \(f^{-1}R\prec_{\Delta}f^{-1}\mathfrak{v}^{w+1}Q\asymp\mathfrak{v}^{w}\). We have \(A=(\partial-\phi_{1})\cdots(\partial-\phi_{r})\) where \(\phi_{j}\in K\) and \(\operatorname{Re}\phi_{j}\succcurlyeq\mathfrak{v}^{\dagger}\succcurlyeq 1\) for \(j=1,\ldots,r\), by the strong splitting assumption. Also \(\phi_{1},\ldots,\phi_{r}\preccurlyeq\mathfrak{v}^{-1}\) by Corollary 3.1.6. The claims now follow from various results in Section 6.2 applied to the equation \(A(y)=f^{-1}R(y)\), \(y\prec 1\), in the role of \((*)\), using also Corollary 6.3.5.
**Lemma 6.4.6**.: _Let \((P,\mathfrak{n},\widehat{h})\) be a slot in \(H\) of order \(r\) and let \(\phi\) be active in \(H\), \(0<\phi\preccurlyeq 1\), such that \((P^{\phi},\mathfrak{n},\widehat{h})\) is strongly split-normal. Then for some \(y\) in \(\mathcal{C}^{<\infty}\),_
\[P(y)\ =\ 0,\quad y\ \prec\ \mathfrak{n},\quad y\in\mathfrak{n}\,(\mathcal{C}^{ r})^{\preccurlyeq}.\]
_If \(H\subseteq\mathcal{C}^{\infty}\), then there exists such \(y\) in \(\mathcal{C}^{\infty}\), and likewise with \(\mathcal{C}^{\omega}\) in place of \(\mathcal{C}^{\infty}\)._
Proof.: First we consider the case \(\phi=1\). Replace \((P,\mathfrak{n},\widehat{h})\) by \((P_{\times\mathfrak{n}},1,\widehat{h}/\mathfrak{n})\) to arrange \(\mathfrak{n}=1\). Then \(L_{P}\) has order \(r\), \(\mathfrak{v}:=\mathfrak{v}(L_{P})\preccurlyeq 1\), and \(P=Q-R\) where \(Q,R\in H\{Y\}\), \(Q\) is homogeneous of degree \(1\) and order \(r\), \(L_{Q}\in H[\partial]\) splits strongly over \(K\), and \(R\prec_{\Delta}\mathfrak{v}^{w+1}P_{1}\), where \(\Delta:=\Delta(\mathfrak{v})\) and \(w:=\operatorname{wt}(P)\). Now \(P_{1}=Q-R_{1}\), so \(\mathfrak{v}\sim\mathfrak{v}(L_{Q})\) by Lemma 3.1.1(ii), and thus \(\Delta=\Delta\big{(}\mathfrak{v}(L_{Q})\big{)}\). Lemma 6.4.5 gives \(y\) in \(\mathcal{C}^{<\infty}\) such that \(y\prec\mathfrak{v}^{w}\prec 1\), \(P(y)=0\), and \(y^{(j)}\preccurlyeq 1\) for \(j=1,\ldots,r\). Then \(y\) has for \(\mathfrak{n}=1\) the properties displayed in the lemma.
Now suppose \(\phi\) is arbitrary. Employing \((\ \ )^{\circ}\) as explained earlier in this section, the slot \((P^{\phi\circ},\mathfrak{n}^{\circ},\widehat{h}^{\circ})\) in the Hardy field \(H^{\circ}\) is strongly split-normal, hence by the case \(\phi=1\) we have \(z\in\mathcal{C}^{<\infty}\) with \(P^{\phi\circ}(z)=0\), \(z\prec\mathfrak{n}^{\circ}\), and \((z/\mathfrak{n}^{\circ})^{(j)}\preccurlyeq 1\) for \(j=1,\ldots,r\). Take \(y\in\mathcal{C}^{<\infty}\) with \(y^{\circ}=z\). Then \(P(y)=0\), \(y\prec\mathfrak{n}\), and \(y\in\mathfrak{n}\,(\mathcal{C}^{r})^{\preccurlyeq}\) by Lemma 6.4.2 and a subsequent remark. Moreover, if \(\phi,z\in\mathcal{C}^{\infty}\), then \(y\in\mathcal{C}^{\infty}\), and likewise with \(\mathcal{C}^{\omega}\) in place of \(\mathcal{C}^{\infty}\).
In the next "complex" version, \((P,\mathfrak{m},\widehat{a})\) is a slot in \(K\) of order \(r\) with \(\mathfrak{m}\in H^{\times}\).
**Lemma 6.4.7**.: _Let \(\phi\) be active in \(H\), \(0<\phi\preccurlyeq 1\), such that the slot \((P^{\phi},\mathfrak{m},\widehat{a})\) in \(K^{\phi}\) is strictly normal, and its linear part splits strongly over \(K^{\phi}\). Then for some \(y\in\mathcal{C}^{<\infty}[\mathrm{i}]\) we have_
\[P(y)\ =\ 0,\quad y\ \prec\ \mathfrak{m},\quad y\in\mathfrak{m}\,\mathcal{C}^ {r}[\mathrm{i}]^{\preccurlyeq}.\]
_If \(H\subseteq\mathcal{C}^{\infty}\), then there is such \(y\) in \(\mathcal{C}^{\infty}[\mathrm{i}]\). If \(H\subseteq\mathcal{C}^{\omega}\), then there is such \(y\) in \(\mathcal{C}^{\omega}[\mathrm{i}]\)._
Proof.: Consider first the case \(\phi=1\). Replacing \((P,\mathfrak{m},\widehat{a})\) by \((P_{\times\mathfrak{m}},1,\widehat{a}/\mathfrak{m})\) we arrange \(\mathfrak{m}=1\). Set \(L:=L_{P}\in K[\partial]\), \(Q:=P_{1}\), and \(R:=P-Q\). Since \((P,1,\widehat{a})\) is strictly normal, we have \(\operatorname{order}(L)=r\), \(\mathfrak{v}:=\mathfrak{v}(L)\preccurlyeq 1\), and \(R\prec_{\Delta}\mathfrak{v}^{w+1}Q\) where \(\Delta:=\Delta(\mathfrak{v})\), \(w:=\operatorname{wt}(P)\). As \(L\) splits strongly over \(K\), Lemma 6.4.5 gives \(y\) in \(\mathcal{C}^{<\infty}[\mathrm{i}]\) such that \(P(y)=0\), \(y\prec\mathfrak{v}^{w}\prec 1\), and \(y^{(j)}\preccurlyeq 1\) for \(j=1,\ldots,r\). For the last part of the lemma, use the last part of Lemma 6.4.5. The general case reduces to this special case as in the proof of Lemma 6.4.6.
**Finding germs in holes.**_In this subsection_ \(\widehat{H}\) _is an immediate asymptotic extension of_ \(H\)_._ This fits into the setting of Section 4.3 on split-normal slots: \(K=H[\mathrm{i}]\) and \(\widehat{H}\) have \(H\) as a common asymptotic subfield and \(\widehat{K}:=\widehat{H}[\mathrm{i}]\) as a common asymptotic extension, \(\widehat{H}\) is an \(H\)-field, and \(\widehat{K}\) is d-valued. Assume also that \(H\) is \(\omega\)-free. Thus \(K\) is \(\omega\)-free by [ADH, 11.7.23]. _Let \((P,\mathfrak{m},\widehat{a})\) with \(\mathfrak{m}\in H^{\times}\) and \(\widehat{a}\in\widehat{K}\setminus K\) be a minimal hole in \(K\) of order \(r\geqslant 1\). Take \(\widehat{b},\widehat{c}\in\widehat{H}\) so that \(\widehat{a}=\widehat{b}+\widehat{c}\,\mathrm{i}\)._
**Proposition 6.4.8**.: _Suppose \(\deg P>1\). Then for some \(y\in\mathcal{C}^{<\infty}[\mathrm{i}]\) we have_
\[P(y)\ =\ 0,\quad y\ \prec\ \mathfrak{m},\quad y\in\mathfrak{m}\,\mathcal{C}^ {r}[\mathrm{i}]^{\preccurlyeq}.\]
_If \(\mathfrak{m}\preccurlyeq 1\), then \(y\prec 1\) and \(y\in\mathcal{C}^{r}[\mathrm{i}]^{\preccurlyeq}\) for such \(y\). Moreover, if \(H\subseteq\mathcal{C}^{\infty}\), then we can take such \(y\) in \(\mathcal{C}^{\infty}[\mathrm{i}]\), and if \(H\subseteq\mathcal{C}^{\omega}\), then we can take such \(y\) in \(\mathcal{C}^{\omega}[\mathrm{i}]\)._
Proof.: Lemma 4.3.31 gives a refinement \((P_{+a},\mathfrak{n},\widehat{a}-a)\) of \((P,\mathfrak{m},\widehat{a})\) with \(\mathfrak{n}\in H^{\times}\) and an active \(\phi\) in \(H\) with \(0<\phi\preccurlyeq 1\) such that the hole \((P_{+a}^{\phi},\mathfrak{n},\widehat{a}-a)\) in \(K^{\phi}\) is strictly normal and its linear part splits strongly over \(K^{\phi}\). Lemma 6.4.7 applied to \((P_{+a},\mathfrak{n},\widehat{a}-a)\) in place of \((P,\mathfrak{m},\widehat{a})\) yields \(z\in\mathcal{C}^{<\infty}[\mathrm{i}]\) with \(P_{+a}(z)=0\), \(z\prec\mathfrak{n}\) and \((z/\mathfrak{n})^{(j)}\preccurlyeq 1\) for \(j=1,\ldots,r\). Lemma 6.4.1(ii) applied to \(z/\mathfrak{m}\), \(\mathfrak{n}/\mathfrak{m}\) in place of \(y\), \(\mathfrak{m}\), respectively, yields \((z/\mathfrak{m})^{(j)}\preccurlyeq 1\) for \(j=0,\ldots,r\). Also, \(a\prec\mathfrak{m}\) (in \(K\)), hence \((a/\mathfrak{m})^{(j)}\prec 1\) for \(j=0,\ldots,r\). Set \(y:=a+z\); then \(P(y)=0\), \(y\prec\mathfrak{m}\), and \((y/\mathfrak{m})^{(j)}\preccurlyeq 1\) for \(j=1,\ldots,r\). For the rest use Lemma 6.4.1(ii) and the last statement in Lemma 6.4.7.
Next we treat the linear case:
**Proposition 6.4.9**.: _Suppose \(\deg P=1\). Then for some \(y\in\mathcal{C}^{<\infty}[\mathrm{i}]\) we have_
\[P(y)\ =\ 0,\quad y\ \prec\ \mathfrak{m},\quad(y/\mathfrak{m})^{\prime}\ \preccurlyeq 1.\]
_If \(\mathfrak{m}\preccurlyeq 1\), then \(y\prec 1\) and \(y^{\prime}\preccurlyeq 1\) for each such \(y\). Moreover, if \(H\subseteq\mathcal{C}^{\infty}\), then we can take such \(y\) in \(\mathcal{C}^{\infty}[\mathrm{i}]\), and if \(H\subseteq\mathcal{C}^{\omega}\), then we can take such \(y\) in \(\mathcal{C}^{\omega}[\mathrm{i}]\)._
Proof.: We have \(r=1\) by Corollary 3.2.8. If \(\partial K=K\) and \(\mathrm{I}(K)\subseteq K^{\dagger}\), then Lemma 4.3.32 applies, and we can argue as in the proof of Proposition 6.4.8, using this lemma instead of Lemma 4.3.31. We reduce the general case to this special case as follows: Set \(H_{1}:=\mathrm{D}(H)\); then \(H_{1}\) is an \(\omega\)-free Hardy field by Theorem 1.4.1, and \(K_{1}:=H_{1}[\mathrm{i}]\) satisfies \(\partial K_{1}=K_{1}\) and \(\mathrm{I}(K_{1})\subseteq K_{1}^{\dagger}\), by Corollary 5.5.19. Moreover, by Corollary 6.3.9, if \(H\subseteq\mathcal{C}^{\infty}\), then \(H_{1}\subseteq\mathcal{C}^{\infty}\), and likewise with \(\mathcal{C}^{\omega}\) in place of \(\mathcal{C}^{\infty}\). The newtonization \(\widehat{H}_{1}\) of \(H_{1}\) is an immediate asymptotic extension of \(H_{1}\), and \(\widehat{K}_{1}:=\widehat{H}_{1}[\mathrm{i}]\) is newtonian [ADH, 14.5.7]. Corollary 3.2.29 gives an embedding \(K\langle\widehat{a}\rangle\to\widehat{K}_{1}\) over \(K\); let \(\widehat{a}_{1}\) be the image of \(\widehat{a}\) under this embedding. If \(\widehat{a}_{1}\in K_{1}\), then we are done by taking \(y:=\widehat{a}_{1}\), so we may assume \(\widehat{a}_{1}\notin K_{1}\). Then \((P,\mathfrak{m},\widehat{a}_{1})\) is a minimal hole in \(K_{1}\), and the above applies with \(H\), \(K\), \(\widehat{a}\) replaced by \(H_{1}\), \(K_{1}\), \(\widehat{a}_{1}\), respectively.
We can improve on these results in a useful way:
**Corollary 6.4.10**.: _Suppose \(\widehat{a}\sim a\in K\). Then for some \(y\in\mathcal{C}^{<\infty}[\mathrm{i}]\) we have_
\[P(y)\ =\ 0,\quad y\ \sim\ a,\quad(y/a)^{(j)}\ \prec\ 1\ \text{ for }j=1,\ldots,r.\]
_If \(H\subseteq\mathcal{C}^{\infty}\), then there is such \(y\) in \(\mathcal{C}^{\infty}[\mathrm{i}]\). If \(H\subseteq\mathcal{C}^{\omega}\), then there is such \(y\) in \(\mathcal{C}^{\omega}[\mathrm{i}]\)._
Proof.: Take \(a_{1}\in K\) and \(\mathfrak{n}\in H^{\times}\) with \(\mathfrak{n}\asymp\widehat{a}-a\sim a_{1}\), and set \(b:=a+a_{1}\). Then \((P_{+b},\mathfrak{n},\widehat{a}-b)\) is a refinement of \((P,\mathfrak{m},\widehat{a})\). Propositions 6.4.8 and 6.4.9 give \(z\in\mathcal{C}^{<\infty}[\mathrm{i}]\) with \(P(b+z)=0\), \(z\prec\mathfrak{n}\) and \((z/\mathfrak{n})^{(j)}\preccurlyeq 1\) for \(j=1,\ldots,r\). We have \((a_{1}/a)^{(j)}\prec 1\) for \(j=0,\ldots,r\), since \(K\) has small derivation. Likewise, \((\mathfrak{n}/a)^{(j)}\prec 1\) for \(j=0,\ldots,r\), and hence \((z/a)^{(j)}\prec 1\) for \(j=0,\ldots,r\), by \(z/a=(z/\mathfrak{n})\cdot(\mathfrak{n}/a)\) and the Product Rule. So \(y:=b+z\) has the desired property. The rest follows from the "moreover" parts of these propositions.
_Remark 6.4.11_.: Suppose we replace our standing assumption that \(H\) is \(\omega\)-free and \((P,\mathfrak{m},\widehat{a})\) is a minimal hole in \(K\) by the assumption that \(H\) is \(\lambda\)-free, \(\partial K=K\), \(\mathrm{I}(K)\subseteq K^{\dagger}\), and \((P,\mathfrak{m},\widehat{a})\) is a slot in \(K\) of order and degree \(1\). Then Proposition 6.4.9 and Corollary 6.4.10 go through by the remark following Lemma 4.3.32.
Now also drawing upon Theorem 4.3.33, we arrive at the main result of this section:
**Corollary 6.4.12**.: _Suppose \(H\) is \(1\)-linearly newtonian. Then one of the following two conditions is satisfied:_
* \(\widehat{b}\notin H\)_, and there are_ \(Q\in Z(H,\widehat{b})\) _of minimal complexity and_ \(y\in\mathcal{C}^{<\infty}\) _such that_ \(Q(y)=0\) _and_ \(y\prec\mathfrak{m}\)_;_
* \(\widehat{c}\notin H\)_, and there are_ \(R\in Z(H,\widehat{c})\) _of minimal complexity and_ \(y\in\mathcal{C}^{<\infty}\) _such that_ \(R(y)=0\) _and_ \(y\prec\mathfrak{m}\)_._
_If \(H\subseteq\mathcal{C}^{\infty}\), then there is such \(y\) in \(\mathcal{C}^{\infty}\), and likewise with \(\mathcal{C}^{\infty}\) replaced by \(\mathcal{C}^{\omega}\)._
Proof.: Suppose \(\deg P>1\), or \(\widehat{b}\notin H\) and \(Z(H,\widehat{b})\) has an element of order \(1\), or \(\widehat{c}\notin H\) and \(Z(H,\widehat{c})\) has an element of order \(1\). Let \(\phi\) range over active elements of \(H\) with \(0<\phi\preccurlyeq 1\). By the "moreover" part of Theorem 4.3.33, one of the following holds:
* \(\widehat{b}\notin H\) and there exist \(\phi\) and a \(Z\)-minimal slot \((Q,\mathfrak{m},\widehat{b})\) in \(H\) with a refinement \((Q_{+b},\mathfrak{n},\widehat{b}-b)\) such that \((Q_{+b}^{\phi},\mathfrak{n},\widehat{b}-b)\) is strongly split-normal;
* \(\widehat{c}\notin H\) and there exist \(\phi\) and a \(Z\)-minimal slot \((R,\mathfrak{m},\widehat{c})\) in \(H\) with a refinement \((R_{+c},\mathfrak{n},\widehat{c}-c)\) such that \((R_{+c}^{\phi},\mathfrak{n},\widehat{c}-c)\) is strongly split-normal.
Suppose \(\widehat{b}\notin H\) and \(\phi,Q,b\) are as in (1); then Lemma 6.4.6 applied to \((Q_{+b},\mathfrak{n},\widehat{b}-b)\) in place of \((P,\mathfrak{n},\widehat{h})\) yields \(z\in\mathcal{C}^{<\infty}\) with \(Q_{+b}(z)=0\), \(z\prec\mathfrak{n}\); hence \(Q(y)=0\), \(y\prec\mathfrak{m}\) for \(y:=b+z\), so (i) holds. Similarly, (2) implies (ii).
Suppose now that \(\deg P=1\), that if \(\widehat{b}\notin H\), then \(Z(H,\widehat{b})\) has no element of order \(1\), and that if \(\widehat{c}\notin H\), then \(Z(H,\widehat{c})\) has no element of order \(1\). Since \(\deg P=1\), Proposition 6.4.9 gives \(z\in\mathcal{C}^{<\infty}[i]\) such that \(P(z)=0\) and \(z\prec\mathfrak{m}\). Recall also that \(P\) has order \(1\) by Corollary 3.2.8. Consider now the case \(\widehat{b}\notin H\). In view of \(P(\widehat{a})=0\) and \(P(z)=0\) we obtain from Example 1.1.7 and Remark 1.1.9 a \(Q\in H\{Y\}\) of degree \(1\) and order \(1\) or \(2\) such that \(Q(\widehat{b})=0\) and \(Q(y)=0\) for \(y:=\operatorname{Re}z\prec\mathfrak{m}\). But \(Z(H,\widehat{b})\) has no element of order \(1\), so \(\operatorname{order}Q=2\) and \(Q\in Z(H,\widehat{b})\) has minimal complexity. Thus (i) holds. If \(\widehat{c}\notin H\), then the same reasoning shows that (ii) holds.
Is \(y\) as in (i) or (ii) of Corollary 6.4.12 hardian over \(H\)? At this stage we cannot claim this. In the next section we introduce weights and their corresponding norms as a more refined tool. This will allow us to obtain Corollary 6.5.20 as a key approximation result for later use.
### 6.5. Weights
In this section we prove Proposition 6.5.14 to strengthen Lemma 6.2.5. This uses the material on repulsive-normal slots from Section 4.5, but we also need more refined norms for differentiable functions, to which we turn now.
**Weighted spaces of differentiable functions.**_In this subsection we fix \(r\in\mathbb{N}\) and a weight function \(\tau\in\mathcal{C}_{a}[i]^{\times}\)._ For \(f\in\mathcal{C}_{a}^{r}[i]\) we set

\[\|f\|_{a;r}^{\tau}\ :=\ \max\big\{\|\tau^{-1}f\|_{a},\|\tau^{-1}f^{\prime}\|_{a},\ldots,\|\tau^{-1}f^{(r)}\|_{a}\big\}\ \in\ [0,+\infty],\]

and \(\|f\|_{a}^{\tau}:=\|f\|_{a;0}^{\tau}\) for \(f\in\mathcal{C}_{a}[i]\). Then

\[\mathcal{C}_{a}^{r}[i]^{\tau}\ :=\ \big\{f\in\mathcal{C}_{a}^{r}[i]\,:\,\|f\|_{a;r}^{\tau}<+\infty\big\}\]

is a \(\mathbb{C}\)-linear subspace of

\[\mathcal{C}_{a}[i]^{\tau}\ :=\ \mathcal{C}_{a}^{0}[i]^{\tau}\ =\ \tau\,\mathcal{C}_{a}[i]^{\mathrm{b}}\ =\ \big\{f\in\mathcal{C}_{a}[i]\,:\,f\preccurlyeq\tau\big\}.\]

Below we consider the \(\mathbb{C}\)-linear space \(\mathcal{C}^{r}_{a}[i]^{\tau}\) to be equipped with the norm

\[f\mapsto\|f\|_{a;r}^{\tau}.\]
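For orientation (an illustrative choice of weight, not used later): recall from Section 6.1 that \(\|g\|_{a}\) is the supremum of \(|g|\) on \([a,\infty)\). Take \(r=0\) and \(\tau(t):=\mathrm{e}^{-t}\). Then \(\mathcal{C}_{a}[i]^{\tau}\) consists of the \(f\in\mathcal{C}_{a}[i]\) with \(f\preccurlyeq\mathrm{e}^{-t}\), and for instance \(f(t):=\mathrm{e}^{-2t}\) gives

\[\|f\|_{a}^{\tau}\ =\ \sup_{t\geqslant a}\mathrm{e}^{t}\,\mathrm{e}^{-2t}\ =\ \mathrm{e}^{-a}\ \to\ 0\quad\text{as }a\to\infty.\]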
Recall from Section 6.1 the convention \(b\cdot\infty=\infty\cdot b=\infty\) for \(b\in[0,\infty]\). Note that
\[\|fg\|_{a;r}^{\tau}\ \leqslant\ 2^{r}\|f\|_{a;r}\|g\|_{a;r}^{\tau}\quad\text{ for }f,g\in\mathcal{C}^{r}_{a}[i], \tag{6.5.1}\]

so \(\mathcal{C}^{r}_{a}[i]^{\tau}\) is a \(\mathcal{C}^{r}_{a}[i]^{\mathrm{b}}\)-submodule of \(\mathcal{C}^{r}_{a}[i]\). Note also that \(\|1\|_{a;r}^{\tau}=\|\tau^{-1}\|_{a}\), hence

\[\|f\|_{a;r}^{\tau}\ \leqslant\ 2^{r}\|f\|_{a;r}\,\|\tau^{-1}\|_{a}\quad\text{ for }f\in\mathcal{C}^{r}_{a}[i]\]

and

\[\tau^{-1}\in\mathcal{C}_{a}[i]^{\mathrm{b}}\quad\Longleftrightarrow\quad 1\in\mathcal{C}^{r}_{a}[i]^{\tau}\quad\Longleftrightarrow\quad\mathcal{C}^{r}_{a}[i]^{\mathrm{b}}\subseteq\mathcal{C}^{r}_{a}[i]^{\tau}.\]

We have

\[\|f\|_{a;r}\ \leqslant\ \|f\|_{a;r}^{\tau}\,\|\tau\|_{a}\quad\text{ for }f\in\mathcal{C}^{r}_{a}[i], \tag{6.5.2}\]

and thus

\[\tau\in\mathcal{C}_{a}[i]^{\mathrm{b}}\quad\Longleftrightarrow\quad\mathcal{C}^{r}_{a}[i]^{\mathrm{b}}\subseteq\mathcal{C}^{r}_{a}[i]^{\tau^{-1}}\quad\Longrightarrow\quad\mathcal{C}^{r}_{a}[i]^{\tau}\subseteq\mathcal{C}^{r}_{a}[i]^{\mathrm{b}}. \tag{6.5.3}\]

Hence if \(\tau,\tau^{-1}\in\mathcal{C}_{a}[i]^{\mathrm{b}}\), then \(\mathcal{C}^{r}_{a}[i]^{\mathrm{b}}=\mathcal{C}^{r}_{a}[i]^{\tau}\), and the norms \(\|\,\cdot\,\|_{a;r}^{\tau}\) and \(\|\,\cdot\,\|_{a;r}\) on this \(\mathbb{C}\)-linear space are equivalent. (In later use, \(\tau\in\mathcal{C}_{a}[i]^{\mathrm{b}}\), \(\tau^{-1}\notin\mathcal{C}_{a}[i]^{\mathrm{b}}\).) If \(\tau\in\mathcal{C}_{a}[i]^{\mathrm{b}}\), then \(\mathcal{C}^{r}_{a}[i]^{\tau}\) is an ideal of the commutative ring \(\mathcal{C}^{r}_{a}[i]^{\mathrm{b}}\). From (6.5.1) and (6.5.2) we obtain

\[\|fg\|_{a;r}^{\tau}\ \leqslant\ 2^{r}\,\|\tau\|_{a}\,\|f\|_{a;r}^{\tau}\,\|g\|_{a;r}^{\tau}\quad\text{ for }f,g\in\mathcal{C}^{r}_{a}[i].\]

For \(f\in\mathcal{C}^{r+1}_{a}[i]^{\tau}\) we have \(\|f\|_{a;r}^{\tau},\|f^{\prime}\|_{a;r}^{\tau}\leqslant\|f\|_{a;r+1}^{\tau}\). From (6.5.2) and (6.5.3):

**Lemma 6.5.1**.: _Suppose \(\tau\in\mathcal{C}_{a}[i]^{\mathrm{b}}\) (so \(\mathcal{C}^{r}_{a}[i]^{\tau}\subseteq\mathcal{C}^{r}_{a}[i]^{\mathrm{b}}\)) and \(f\in\mathcal{C}^{r}_{a}[i]^{\tau}\). If \((f_{n})\) is a sequence in \(\mathcal{C}^{r}_{a}[i]^{\tau}\) and \(f_{n}\to f\) in \(\mathcal{C}^{r}_{a}[i]^{\tau}\), then also \(f_{n}\to f\) in \(\mathcal{C}^{r}_{a}[i]^{\mathrm{b}}\)._
This is used to show:
**Lemma 6.5.2**.: _Suppose \(\tau\in\mathcal{C}_{a}[i]^{\mathrm{b}}\). Then the \(\mathbb{C}\)-linear space \(\mathcal{C}^{r}_{a}[i]^{\tau}\) equipped with the norm \(\|\,\cdot\,\|_{a;r}^{\tau}\) is complete._

Proof.: We proceed by induction on \(r\). Let \((f_{n})\) be a Cauchy sequence in the normed space \(\mathcal{C}_{a}[i]^{\tau}\). Then the sequence \((\tau^{-1}f_{n})\) in the Banach space \(\mathcal{C}^{0}_{a}[i]^{\mathrm{b}}\) is Cauchy, hence has a limit \(g\in\mathcal{C}_{a}[i]^{\mathrm{b}}\), so with \(f:=\tau g\in\mathcal{C}_{a}[i]^{\tau}\) we have \(\tau^{-1}f_{n}\to\tau^{-1}f\) in \(\mathcal{C}_{a}[i]^{\mathrm{b}}\) and hence \(f_{n}\to f\) in \(\mathcal{C}_{a}[i]^{\tau}\). Thus the lemma holds for \(r=0\). Suppose the lemma holds for a certain value of \(r\), and let \((f_{n})\) be a Cauchy sequence in \(\mathcal{C}^{r+1}_{a}[i]^{\tau}\). Then \((f^{\prime}_{n})\) is a Cauchy sequence in \(\mathcal{C}^{r}_{a}[i]^{\tau}\) and hence has a limit \(g\in\mathcal{C}^{r}_{a}[i]^{\tau}\), by the inductive hypothesis. By Lemma 6.5.1, \(f^{\prime}_{n}\to g\) in \(\mathcal{C}_{a}[i]^{\mathrm{b}}\). Now \((f_{n})\) is also a Cauchy sequence in \(\mathcal{C}_{a}[i]^{\tau}\), hence has a limit \(f\in\mathcal{C}_{a}[i]^{\tau}\) (by the case \(r=0\)), and by Lemma 6.5.1 again, \(f_{n}\to f\) in \(\mathcal{C}_{a}[i]^{\mathrm{b}}\). Thus \(f\) is differentiable and \(f^{\prime}=g\) by [57, (8.6.4)]. This yields \(f_{n}\to f\) in \(\mathcal{C}^{r+1}_{a}[i]^{\tau}\).

**Lemma 6.5.3**.: _Suppose \(\tau\in\mathcal{C}^{r}_{a}[i]^{\mathrm{b}}\). If \(f\in\mathcal{C}^{r}_{a}[i]\) and \(f^{(k)}\preccurlyeq\tau^{r-k+1}\) for \(k=0,\ldots,r\), then \(f\tau^{-1}\in\mathcal{C}^{r}_{a}[i]^{\mathrm{b}}\). (Thus \(\mathcal{C}^{r}_{a}[i]^{\tau^{r+1}}\subseteq\tau\,\mathcal{C}^{r}_{a}[i]^{\mathrm{b}}\).)_

Proof.: Let \(Q^{n}_{k}\in\mathbb{Q}\{X\}\) (\(0\leqslant k\leqslant n\)) be as in Lemma 1.1.11. Although \(\mathcal{C}^{r}_{a}[i]\) is not closed under taking derivatives, the proof of that lemma and the computation leading to Corollary 1.1.12 do give for \(f\in\mathcal{C}^{r}_{a}[i]\) and \(n\leqslant r\):

\[(f\tau^{-1})^{(n)}\ =\ \sum_{k=0}^{n}Q^{n}_{k}(\tau)f^{(k)}\tau^{k-n-1}.\]
Now use that \(Q_{k}^{n}(\tau)\preccurlyeq 1\) for \(n\leqslant r\) and \(k=0,\ldots,n\).
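For \(n=1\) the displayed formula is just the Quotient Rule: \((f\tau^{-1})^{\prime}=f^{\prime}\tau^{-1}-\tau^{\prime}f\tau^{-2}\), so \(Q^{1}_{1}=1\) and \(Q^{1}_{0}=-X^{\prime}\), and \(Q^{1}_{0}(\tau)=-\tau^{\prime}\preccurlyeq 1\) thanks to \(\tau\in\mathcal{C}^{r}_{a}[i]^{\mathrm{b}}\) (here \(r\geqslant 1\)).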
Next we generalize the inequality (6.5.1):
**Lemma 6.5.4**.: _Let \(f_{1},\ldots,f_{m-1},g\in\mathcal{C}_{a}^{r}[i]\), \(m\geqslant 1\); then_
\[\|f_{1}\cdots f_{m-1}g\|_{a;r}^{\tau}\ \leqslant\ m^{r}\,\|f_{1}\|_{a;r}\cdots\|f_{m-1 }\|_{a;r}\,\|g\|_{a;r}^{\tau}.\]
Proof.: Use the generalized Product Rule [1] and the well-known identity \(\sum\frac{n!}{i_{1}!\cdots i_{m}!}=m^{n}\) with the sum over all \((i_{1},\ldots,i_{m})\in\mathbb{N}^{m}\) with \(i_{1}+\cdots+i_{m}=n\).
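For instance, for \(m=2\) and \(r=1\) this is the estimate behind (6.5.1):

\[\|\tau^{-1}(fg)^{\prime}\|_{a}\ \leqslant\ \|f^{\prime}\|_{a}\,\|\tau^{-1}g\|_{a}+\|f\|_{a}\,\|\tau^{-1}g^{\prime}\|_{a}\ \leqslant\ 2\,\|f\|_{a;1}\,\|g\|_{a;1}^{\tau},\]

and \(\|\tau^{-1}fg\|_{a}\leqslant\|f\|_{a}\,\|\tau^{-1}g\|_{a}\) gives the \(0\)-th derivative term.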
With \(\boldsymbol{i}\) ranging over \(\mathbb{N}^{1+r}\), let \(P=\sum_{\boldsymbol{i}}P_{\boldsymbol{i}}Y^{\boldsymbol{i}}\) (all \(P_{\boldsymbol{i}}\in\mathcal{C}_{a}[i]\)) be a polynomial in \(\mathcal{C}_{a}[i]\big{[}Y,Y^{\prime},\ldots,Y^{(r)}\big{]}\); for \(f\in\mathcal{C}_{a}^{r}[i]\) we have \(P(f)=\sum_{\boldsymbol{i}}P_{\boldsymbol{i}}f^{\boldsymbol{i}}\in\mathcal{C}_ {a}[i]\). (See also the beginning of Section 6.2.) We set
\[\|P\|_{a}\ :=\ \max_{\boldsymbol{i}}\|P_{\boldsymbol{i}}\|_{a}\in[0,\infty].\]
_In the rest of this subsection we assume \(\|P\|_{a}<\infty\), that is, \(P\in\mathcal{C}_{a}[i]^{\mathrm{b}}\big[Y,\ldots,Y^{(r)}\big]\)._ Hence if \(\tau\in\mathcal{C}_{a}[i]^{\mathrm{b}}\), \(P(0)\in\mathcal{C}_{a}[i]^{\tau}\), and \(f\in\mathcal{C}_{a}^{r}[i]^{\tau}\), then \(P(f)\in\mathcal{C}_{a}[i]^{\tau}\). Here are weighted versions of Lemmas 6.1.2 and 6.1.3:
**Lemma 6.5.5**.: _Suppose \(P\) is homogeneous of degree \(d\geqslant 1\), and let \(\tau\in\mathcal{C}_{a}[i]^{\mathrm{b}}\) and \(f\in\mathcal{C}_{a}^{r}[i]^{\tau}\). Then_
\[\|P(f)\|_{a}^{\tau}\ \leqslant\ \binom{d+r}{r}\cdot\|P\|_{a}\cdot\|f\|_{a;r}^{d-1 }\cdot\|f\|_{a;r}^{\tau}.\]
Proof.: For \(j=0,\ldots,r\) we have \(\|f^{(j)}\|_{a}\leqslant\|f\|_{a;r}\) and \(\|f^{(j)}\|_{a}^{\tau}\leqslant\|f\|_{a;r}^{\tau}\). Now \(f^{\boldsymbol{i}}\), where \(\boldsymbol{i}=(i_{0},\ldots,i_{r})\in\mathbb{N}^{r+1}\) and \(i_{0}+\cdots+i_{r}=d\), is a product of \(d\) such factors \(f^{(j)}\), so Lemma 6.5.4 with \(m:=d\), \(r:=0\), gives
\[\|f^{\boldsymbol{i}}\|_{a}^{\tau}\ \leqslant\ \|f\|_{a;r}^{d-1}\cdot\|f\|_{a;r}^{ \tau}.\]
It remains to note that by (6.5.1) we have \(\|P_{\boldsymbol{i}}f^{\boldsymbol{i}}\|_{a}^{\tau}\leqslant\|P_{\boldsymbol {i}}\|_{a}\cdot\|f^{\boldsymbol{i}}\|_{a}^{\tau}\).
**Corollary 6.5.6**.: _Let \(1\leqslant d\leqslant e\) in \(\mathbb{N}\) be such that \(P_{\boldsymbol{i}}=0\) if \(|\boldsymbol{i}|<d\) or \(|\boldsymbol{i}|>e\). Then for \(f\in\mathcal{C}_{a}^{r}[i]^{\tau}\) and \(\tau\in\mathcal{C}_{a}[i]^{\mathrm{b}}\) we have_
\[\|P(f)\|_{a}^{\tau}\ \leqslant\ D\cdot\|P\|_{a}\cdot\big{(}\|f\|_{a;r}^{d-1}+ \cdots+\|f\|_{a;r}^{e-1}\big{)}\cdot\|f\|_{a;r}^{\tau}\]
_where \(D\ =\ D(d,e,r)\ :=\ \binom{e+r+1}{r+1}-\binom{d+r}{r+1}\in\mathbb{N}^{\geqslant 1}\)._
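As a consistency check: for \(d=e\) this gives \(D(d,d,r)=\binom{d+r+1}{r+1}-\binom{d+r}{r+1}=\binom{d+r}{r}\) by Pascal's rule, in agreement with the constant in Lemma 6.5.5.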
**Doubly-twisted integration.** In this subsection we adopt the setting in _Twisted integration_ of Section 6.1. Thus \(\phi\in\mathcal{C}_{a}[i]\) and \(\Phi=\partial_{a}^{-1}\phi\). Let \(\tau\in\mathcal{C}_{a}^{1}\) satisfy \(\tau(s)>0\) for \(s\geqslant a\), and set \(\widetilde{\phi}:=\phi-\tau^{\dagger}\in\mathcal{C}_{a}[i]\) and \(\widetilde{\Phi}:=\partial_{a}^{-1}\widetilde{\phi}\). Thus
\[\widetilde{\Phi}(t)\ =\ \int_{a}^{t}(\phi-\tau^{\dagger})(s)\,ds\ =\ \Phi(t)-\log\tau(t)+\log\tau(a)\qquad\text{for $t \geqslant a$.}\]
Consider the right inverses \(B,\widetilde{B}\colon\mathcal{C}_{a}[i]\to\mathcal{C}_{a}^{1}[i]\) to, respectively, \(\partial-\phi\colon\mathcal{C}_{a}^{1}[i]\to\mathcal{C}_{a}[i]\) and \(\partial-\widetilde{\phi}\colon\mathcal{C}_{a}^{1}[i]\to\mathcal{C}_{a}[i]\), given by
\[B\ :=\ \mathrm{e}^{\Phi}\circ\partial_{a}^{-1}\circ\mathrm{e}^{-\Phi},\quad \widetilde{B}\ :=\ \mathrm{e}^{\widetilde{\Phi}}\circ\partial_{a}^{-1}\circ\mathrm{e}^{- \widetilde{\Phi}}\,.\]
For \(f\in\mathcal{C}_{a}[i]\) and \(t\geqslant a\) we have
\[\widetilde{B}f(t)\ =\ \mathrm{e}^{\widetilde{\Phi}(t)}\int_{a}^{t}\mathrm{e}^{-\widetilde{\Phi}(s)}\,f(s)\,ds\ =\ \tau(t)^{-1}\tau(a)\,\mathrm{e}^{\Phi(t)}\int_{a}^{t}\mathrm{e}^{-\Phi(s)}\,\tau(s)\tau(a)^{-1}f(s)\,ds\]
\[=\ \tau(t)^{-1}\,\mathrm{e}^{\Phi(t)}\int_{a}^{t}\mathrm{e}^{-\Phi(s)}\,\tau(s)f(s)\,ds\ =\ \tau^{-1}(t)\big(B(\tau f)\big)(t)\]

and so \(\widetilde{B}=\tau^{-1}\circ B\circ\tau\). Hence if \(\widetilde{\phi}\) is attractive, then \(B_{\ltimes\tau}:=\tau^{-1}\circ B\circ\tau\) maps \(\mathcal{C}_{a}[i]^{\mathrm{b}}\) into \(\mathcal{C}_{a}[i]^{\mathrm{b}}\cap\mathcal{C}_{a}^{1}[i]\), and the operator \(B_{\ltimes\tau}\colon\mathcal{C}_{a}[i]^{\mathrm{b}}\to\mathcal{C}_{a}[i]^{\mathrm{b}}\) is continuous with \(\|B_{\ltimes\tau}\|_{a}\leqslant\big\|\frac{1}{\mathrm{Re}\,\widetilde{\phi}}\big\|_{a}\); if in addition \(\widetilde{\phi}\in\mathcal{C}_{a}^{r}[i]\), then \(B_{\ltimes\tau}\) maps \(\mathcal{C}_{a}[i]^{\mathrm{b}}\cap\mathcal{C}_{a}^{r}[i]\) into \(\mathcal{C}_{a}[i]^{\mathrm{b}}\cap\mathcal{C}_{a}^{r+1}[i]\). Note that if \(\phi\in\mathcal{C}_{a}^{r}[i]\) and \(\tau\in\mathcal{C}_{a}^{r+1}\), then \(\widetilde{\phi}\in\mathcal{C}_{a}^{r}[i]\).
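As a quick sanity check of \(\widetilde{B}=\tau^{-1}\circ B\circ\tau\) with constant data (an illustration only): take \(\phi:=-2\) and \(\tau(t):=\mathrm{e}^{-t}\), so \(\tau^{\dagger}=-1\), \(\widetilde{\phi}=-1\), \(\Phi(t)=-2(t-a)\), and \(\widetilde{\Phi}(t)=-(t-a)\). Then for \(f:=1\),

\[\widetilde{B}f(t)\ =\ \mathrm{e}^{-(t-a)}\int_{a}^{t}\mathrm{e}^{s-a}\,ds\ =\ 1-\mathrm{e}^{-(t-a)},\]

while \(B(\tau)(t)=\mathrm{e}^{-2(t-a)}\int_{a}^{t}\mathrm{e}^{s-2a}\,ds=\mathrm{e}^{-t}-\mathrm{e}^{-2t+a}\), so that \(\tau(t)^{-1}B(\tau f)(t)=1-\mathrm{e}^{-(t-a)}\) as well.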
Next, suppose \(\phi\), \(\widetilde{\phi}\) are both repulsive. Then we have the \(\mathbb{C}\)-linear operators \(B,\widetilde{B}\colon\mathcal{C}_{a}[i]^{\mathrm{b}}\to\mathcal{C}_{a}^{1}[i]\) given, for \(f\in\mathcal{C}_{a}[i]^{\mathrm{b}}\) and \(t\geqslant a\), by

\[Bf(t)\ :=\ \mathrm{e}^{\Phi(t)}\int_{\infty}^{t}\mathrm{e}^{-\Phi(s)}\,f(s)\,ds,\qquad\widetilde{B}f(t)\ :=\ \mathrm{e}^{\widetilde{\Phi}(t)}\int_{\infty}^{t}\mathrm{e}^{-\widetilde{\Phi}(s)}\,f(s)\,ds.\]

Now assume \(\tau\in\mathcal{C}_{a}[i]^{\mathrm{b}}\). Then we have the \(\mathbb{C}\)-linear operator

\[B_{\ltimes\tau}\ :=\ \tau^{-1}\circ B\circ\tau\,:\ \mathcal{C}_{a}[i]^{\mathrm{b}}\to\mathcal{C}_{a}^{1}[i].\]

A computation as above shows \(\widetilde{B}=B_{\ltimes\tau}\); thus \(B_{\ltimes\tau}\) maps \(\mathcal{C}_{a}[i]^{\mathrm{b}}\) into \(\mathcal{C}_{a}[i]^{\mathrm{b}}\cap\mathcal{C}_{a}^{1}[i]\), and the operator \(B_{\ltimes\tau}\colon\mathcal{C}_{a}[i]^{\mathrm{b}}\to\mathcal{C}_{a}[i]^{\mathrm{b}}\) is continuous with \(\|B_{\ltimes\tau}\|_{a}\leqslant\big\|\frac{1}{\mathrm{Re}\,\widetilde{\phi}}\big\|_{a}\). If \(\widetilde{\phi}\in\mathcal{C}_{a}^{r}[i]\), then \(B_{\ltimes\tau}\) maps \(\mathcal{C}_{a}[i]^{\mathrm{b}}\cap\mathcal{C}_{a}^{r}[i]\) into \(\mathcal{C}_{a}[i]^{\mathrm{b}}\cap\mathcal{C}_{a}^{r+1}[i]\).
**More on twists and right-inverses of linear operators over Hardy fields.** In this subsection we adopt the assumptions in force for Lemma 6.1.5, which we repeat here. Thus \(H\) is a Hardy field, \(K=H[i]\), \(r\in\mathbb{N}^{\geqslant 1}\), and \(f_{1},\dots,f_{r}\in K\). We fix \(a_{0}\in\mathbb{R}\) and functions in \(\mathcal{C}_{a_{0}}[i]\) representing the germs \(f_{1},\dots,f_{r}\), denoted by the same symbols. We let \(a\) range over \([a_{0},\infty)\), and we denote the restriction of each \(f\in\mathcal{C}_{a_{0}}[i]\) to \([a,\infty)\) also by \(f\). For each \(a\) we then have the \(\mathbb{C}\)-linear map \(A_{a}\colon\mathcal{C}_{a}^{r}[i]\to\mathcal{C}_{a}[i]\) given by
\[A_{a}(y)\;=\;y^{(r)}+f_{1}y^{(r-1)}+\dots+f_{r}y.\]
We are in addition given a splitting \((\phi_{1},\dots,\phi_{r})\) of the linear differential operator \(A=\partial^{r}+f_{1}\partial^{r-1}+\dots+f_{r}\in K[\partial]\) over \(K\) with \(\mathrm{Re}\,\phi_{1},\dots,\mathrm{Re}\,\phi_{r}\succcurlyeq 1\), as well as functions in \(\mathcal{C}_{a_{0}}^{r-1}[i]\) representing \(\phi_{1},\dots,\phi_{r}\), denoted by the same symbols and satisfying \(\mathrm{Re}\,\phi_{1},\dots,\mathrm{Re}\,\phi_{r}\in(\mathcal{C}_{a_{0}})^{\times}\). This gives rise to the continuous \(\mathbb{C}\)-linear operators
\[B_{j}\;:=\;B_{\phi_{j}}\;:\;\mathcal{C}_{a}[i]^{\mathrm{b}}\to\mathcal{C}_{a}[ i]^{\mathrm{b}}\qquad(j=1,\dots,r)\]
and the right-inverse
\[A_{a}^{-1}\;:=\;B_{r}\circ\dots\circ B_{1}\;:\;\mathcal{C}_{a}[i]^{\mathrm{b}} \to\mathcal{C}_{a}[i]^{\mathrm{b}}\]
of \(A_{a}\) with the properties stated in Lemma 6.1.5.
Now let \(\mathfrak{m}\in H^{\times}\) with \(\mathfrak{m}\prec 1\), and set \(\widetilde{A}:=A_{\ltimes\mathfrak{m}}\in K[\partial]\). Let \(\tau\in(\mathcal{C}_{a_{0}}^{r})^{\times}\) be a representative of \(\mathfrak{m}\). Then \(\tau\in(\mathcal{C}_{a_{0}}^{r})^{\mathrm{b}}\) and \(\widetilde{\phi}_{j}:=\phi_{j}-\tau^{\dagger}\in\mathcal{C}_{a_{0}}^{r-1}[i]\) for \(j=1,\dots,r\).
We have the \(\mathbb{C}\)-linear maps
\[\widetilde{A}_{j}\ :=\ \partial-\widetilde{\phi}_{j}\,:\ {\mathcal{C}}_{a}^{j}[i] \to{\mathcal{C}}_{a}^{j-1}[i]\qquad(j=1,\dots,r)\]
and for sufficiently large \(a\) a factorization
\[\widetilde{A}_{a}\ =\ \widetilde{A}_{1}\circ\dots\circ\widetilde{A}_{r}\,:\ {\mathcal{C}}_{a}^{r}[i]\to{\mathcal{C}}_{a}[i].\]
Below we assume this holds for all \(a\), as can be arranged by increasing \(a_{0}\). We call \(f,g\in\mathcal{C}_{a}[i]\) **alike** if \(f\), \(g\) are both attractive or both repulsive. In the same way we define when germs \(f,g\in\mathcal{C}[i]\) are alike. Suppose that \(\phi_{j}\), \(\widetilde{\phi}_{j}\) are alike for \(j=1,\dots,r\). Then we have continuous \(\mathbb{C}\)-linear operators
\[\widetilde{B}_{j}\ :=\ B_{\widetilde{\phi}_{j}}\,:\ {\mathcal{C}}_{a}[i]^{ \mathrm{b}}\to{\mathcal{C}}_{a}[i]^{\mathrm{b}}\qquad(j=1,\dots,r)\]
and the right-inverse
\[\widetilde{A}_{a}^{-1}\ :=\ \widetilde{B}_{r}\circ\dots\circ\widetilde{B}_{1} \,:\ {\mathcal{C}}_{a}[i]^{\mathrm{b}}\to{\mathcal{C}}_{a}[i]^{\mathrm{b}}\]
of \(\widetilde{A}_{a}\), and the arguments in the previous subsection show that \(\widetilde{B}_{j}=(B_{j})_{\ltimes\tau}=\tau^{-1}\circ B_{j}\circ\tau\) for \(j=1,\dots,r\), and hence \(\widetilde{A}_{a}^{-1}=\tau^{-1}\circ A_{a}^{-1}\circ\tau\). For \(j=0,\dots,r\) we set, in analogy with \(A_{j}^{\circ}\) and \(B_{j}^{\circ}\) from (6.1.2) and (6.1.3),
\[\widetilde{A}_{j}^{\circ}\ :=\ \widetilde{A}_{1}\circ\dots\circ\widetilde{A}_{j} \,:\ {\mathcal{C}}_{a}^{j}[i]\to{\mathcal{C}}_{a}[i],\quad\widetilde{B}_{j}^{\circ} \ :=\ \widetilde{B}_{j}\circ\dots\circ\widetilde{B}_{1}\,:\ {\mathcal{C}}_{a}[i]^{ \mathrm{b}}\to{\mathcal{C}}_{a}[i]^{\mathrm{b}}.\]
Then \(\widetilde{B}_{j}^{\circ}\) maps \(\mathcal{C}_{a}[i]^{\mathrm{b}}\) into \(\mathcal{C}_{a}[i]^{\mathrm{b}}\cap\mathcal{C}_{a}^{j}[i]\), \(\widetilde{A}_{j}^{\circ}\circ\widetilde{B}_{j}^{\circ}\) is the identity on \(\mathcal{C}_{a}[i]^{\mathrm{b}}\), and \(\widetilde{B}_{j}^{\circ}=\tau^{-1}\circ B_{j}^{\circ}\circ\tau\) by the above.
**A weighted version of Proposition 6.1.7.** We adopt the setting of the subsection _Damping factors_ of Section 6.1, and make the same assumptions as in the paragraph before Proposition 6.1.7. Thus \(H\), \(K\), \(A\), \(f_{1},\dots,f_{r}\), \(\phi_{1},\dots,\phi_{r}\), \(a_{0}\) are as in the previous subsection, \(\mathfrak{v}\in\mathcal{C}_{a_{0}}^{r}\) satisfies \(\mathfrak{v}(t)>0\) for all \(t\geqslant a_{0}\), and its germ \(\mathfrak{v}\) is in \(H\) with \(\mathfrak{v}\prec 1\). As part of those assumptions we also have \(\phi_{1},\dots,\phi_{r}\prec_{\Delta}\mathfrak{v}^{-1}\) in the asymptotic field \(K\), for the convex subgroup
\[\Delta\ :=\ \big{\{}\gamma\in v(H^{\times}):\ \gamma=o(v\mathfrak{v})\big{\}}\]
of \(v(H^{\times})=v(K^{\times})\). Also \(\nu\in\mathbb{R}^{>}\) and \(u:=\mathfrak{v}^{\nu}|_{[a,\infty)}\in({\mathcal{C}}_{a}^{r})^{\times}\).
To state a weighted version of Proposition 6.1.7, let \(\mathfrak{m}\in H^{\times}\), \(\mathfrak{m}\prec 1\), and let \(\mathfrak{m}\) also denote a representative in \((\mathcal{C}_{a_{0}}^{r})^{\times}\) of the germ \(\mathfrak{m}\). Set \(\tau:=\mathfrak{m}|_{[a,\infty)}\), so we have \(\tau\in(\mathcal{C}_{a}^{r})^{\times}\cap(\mathcal{C}_{a}^{r})^{\mathrm{b}}\) and thus \(\mathcal{C}_{a}^{r}[i]^{\tau}\subseteq\mathcal{C}_{a}^{r}[i]^{\mathrm{b}}\). (Note that \(\tau\), like \(u\), depends on \(a\), but we do not indicate this dependence notationally.) With notations as in the previous subsection we assume that for all \(a\) we have the factorization
\[\widetilde{A}_{a}\ =\ \widetilde{A}_{1}\circ\dots\circ\widetilde{A}_{r}\,:\ { \mathcal{C}}_{a}^{r}[i]\to{\mathcal{C}}_{a}[i],\]
as can be arranged by increasing \(a_{0}\) if necessary.
**Proposition 6.5.7**.: _Assume \(H\) is real closed, \(\nu\in\mathbb{Q}\), \(\nu>r\), and the elements \(\phi_{j}\), \(\phi_{j}-\mathfrak{m}^{\dagger}\) of \(\mathcal{C}_{a_{0}}[i]\) are alike for \(j=1,\dots,r\). Then:_

1. _the_ \(\mathbb{C}\)_-linear operator_ \(uA_{a}^{-1}\colon\mathcal{C}_{a}[i]^{\mathrm{b}}\to\mathcal{C}_{a}[i]^{\mathrm{b}}\) _maps_ \(\mathcal{C}_{a}[i]^{\tau}\) _into_ \(\mathcal{C}_{a}^{r}[i]^{\tau}\)_;_
2. _its restriction to a_ \(\mathbb{C}\)_-linear map_ \(\mathcal{C}_{a}[i]^{\tau}\to\mathcal{C}_{a}^{r}[i]^{\tau}\) _is continuous; and_
3. _denoting this restriction also by_ \(uA_{a}^{-1}\)_, we have_ \(\|uA_{a}^{-1}\|_{a;r}^{\tau}\to 0\) _as_ \(a\to\infty\)_._
Proof.: Let \(f\in\mathcal{C}_{a}[i]^{\tau}\), so \(g:=\tau^{-1}f\in\mathcal{C}_{a}[i]^{\mathrm{b}}\). Let \(i\in\{0,\ldots,r\}\); then with \(\widetilde{B}_{j}^{\circ}\) as in the previous subsection and \(u_{i,j}\) as in Lemma 6.1.6, that lemma gives

\[\tau^{-1}\big(uA_{a}^{-1}(f)\big)^{(i)}\ =\ \sum_{j=r-i}^{r}u_{i,j}u\cdot\tau^{-1}B_{j}^{\circ}(\tau g)\ =\ \sum_{j=r-i}^{r}u_{i,j}u\,\widetilde{B}_{j}^{\circ}(g).\]

The proof of Proposition 6.1.7 shows \(u_{i,j}u\in\mathcal{C}_{a}[i]^{\mathrm{b}}\) with \(\|u_{i,j}u\|_{a}\to 0\) as \(a\to\infty\). Set

\[\widetilde{c}_{i,a}\ :=\ \sum_{j=r-i}^{r}\|u\,u_{i,j}\|_{a}\cdot\|\widetilde{B}_{j}\|_{a}\cdots\|\widetilde{B}_{1}\|_{a}\in[0,\infty)\qquad(i=0,\ldots,r).\]

Then \(\big\|\tau^{-1}\big(uA_{a}^{-1}(f)\big)^{(i)}\big\|_{a}\leqslant\widetilde{c}_{i,a}\|g\|_{a}=\widetilde{c}_{i,a}\|f\|_{a}^{\tau}\) where \(\widetilde{c}_{i,a}\to 0\) as \(a\to\infty\). This yields (i)-(iii).
**Weighted variants of results in Section 6.2.** In this subsection we adopt the hypotheses in force for Lemma 6.2.1. To summarize those, \(H\), \(K\), \(A\), \(f_{1},\ldots,f_{r}\), \(\phi_{1},\ldots,\phi_{r}\), \(a_{0}\), \(\mathfrak{v}\), \(\nu\), \(u\), \(\Delta\) are as in the previous subsection, \(d,r\in\mathbb{N}^{\geqslant 1}\), \(H\) is real closed, and \(R\in K\{Y\}\) has order \(\leqslant r\), degree \(\leqslant d\), and weight \(\leqslant w\in\mathbb{N}^{\geqslant r}\). Also \(\nu\in\mathbb{Q}\), \(\nu>w\), \(R\prec_{\Delta}\mathfrak{v}^{\nu}\), \(\nu\mathfrak{v}^{\dagger}\not\sim\mathrm{Re}\,\phi_{j}\) and \(\mathrm{Re}\,\phi_{j}-\nu\mathfrak{v}^{\dagger}\in(\mathcal{C}_{a_{0}})^{\times}\) for \(j=1,\ldots,r\). Finally, \(\widetilde{A}:=A_{\ltimes\mathfrak{v}^{\nu}}\in K[\partial]\) and \(\widetilde{A}_{a}(y)=u^{-1}A_{a}(uy)\) for \(y\in\mathcal{C}_{a}^{r}[i]\). Next, let \(\mathfrak{m}\), \(\tau\) be as in the previous subsection. As in Lemma 6.2.12 we consider the continuous operator
\[\Phi_{a}\colon\mathcal{C}_{a}^{r}[i]^{\mathrm{b}}\times\mathcal{C}_{a}^{r}[i] ^{\mathrm{b}}\to\mathcal{C}_{a}^{r}[i]^{\mathrm{b}}\]
given by
\[\Phi_{a}(f,y)\ :=\ \Xi_{a}(f+y)-\Xi_{a}(f)\ =\ u\widetilde{A}_{a}^{-1}\big{(}u^{ -1}\big{(}R(f+y)-R(f)\big{)}\big{)}\,.\]
Here is our weighted version of Lemma 6.2.12:
**Lemma 6.5.8**.: _Suppose the elements \(\phi_{j}-\nu\mathfrak{v}^{\dagger}\), \(\phi_{j}-\nu\mathfrak{v}^{\dagger}-\mathfrak{m}^{\dagger}\) of \(\mathcal{C}_{a_{0}}[i]\) are alike, for \(j=1,\ldots,r\), and let \(f\in\mathcal{C}_{a}^{r}[i]^{\mathrm{b}}\). Then the operator \(y\mapsto\Phi_{a}(f,y)\) maps \(\mathcal{C}_{a}^{r}[i]^{\tau}\) into itself. Moreover, there are \(E_{a},E_{a}^{+}\in\mathbb{R}^{\geqslant}\) such that for all \(g\in\mathcal{C}_{a}^{r}[i]^{\mathrm{b}}\) and \(y\in\mathcal{C}_{a}^{r}[i]^{\tau}\),_

\[\|\Phi_{a}(f,y)\|_{a;r}^{\tau}\ \leqslant\ E_{a}\cdot\max\big\{1,\|f\|_{a;r}^{d}\big\}\cdot\big(1+\|y\|_{a;r}+\cdots+\|y\|_{a;r}^{d-1}\big)\cdot\|y\|_{a;r}^{\tau},\]

\[\|\Phi_{a}(f,g+y)-\Phi_{a}(f,g)\|_{a;r}^{\tau}\ \leqslant\ E_{a}^{+}\cdot\max\big\{1,\|f\|_{a;r}^{d}\big\}\cdot\max\big\{1,\|g\|_{a;r}^{d}\big\}\cdot\big(1+\|y\|_{a;r}+\cdots+\|y\|_{a;r}^{d-1}\big)\cdot\|y\|_{a;r}^{\tau}.\]
_We can take these \(E_{a}\), \(E_{a}^{+}\) such that \(E_{a},E_{a}^{+}\to 0\) as \(a\to\infty\), and do so below._
Proof.: Let \(y\in\mathcal{C}_{a}^{r}[i]^{\tau}\). By Taylor expansion we have

\[R(f+y)-R(f)\ =\ \sum_{|\boldsymbol{i}|>0}\frac{1}{\boldsymbol{i}!}R^{(\boldsymbol{i})}(f)y^{\boldsymbol{i}}\ =\ \sum_{|\boldsymbol{i}|>0}S_{\boldsymbol{i}}(f)y^{\boldsymbol{i}}\quad\text{ where }S_{\boldsymbol{i}}(f):=\frac{1}{\boldsymbol{i}!}\sum_{\boldsymbol{j}}R_{\boldsymbol{j}}^{(\boldsymbol{i})}f^{\boldsymbol{j}},\]

and \(u^{-1}S_{\boldsymbol{i}}(f)\in\mathcal{C}_{a}[i]^{\mathrm{b}}\). So \(h:=u^{-1}\big(R(f+y)-R(f)\big)\in\mathcal{C}_{a}[i]^{\tau}\), since \(\mathcal{C}_{a}[i]^{\tau}\) is an ideal of \(\mathcal{C}_{a}[i]^{\mathrm{b}}\). Applying Proposition 6.5.7(i) with \(\phi_{j}-\nu\mathfrak{v}^{\dagger}\) in the role of \(\phi_{j}\) yields \(\Phi_{a}(f,y)=u\widetilde{A}_{a}^{-1}(h)\in\mathcal{C}_{a}^{r}[i]^{\tau}\), establishing the first claim. Next, let \(g\in\mathcal{C}_{a}^{r}[i]^{\mathrm{b}}\). Then \(\Phi_{a}(f,g+y)-\Phi_{a}(f,g)=\Phi_{a}(f+g,y)\) by (6.2.1). Therefore,

\[\Phi_{a}(f,g+y)-\Phi_{a}(f,g)\ =\ u\widetilde{A}_{a}^{-1}(h),\qquad h\ :=\ u^{-1}\big(R(f+g+y)-R(f+g)\big),\]

so

\[\|\Phi_{a}(f,g+y)-\Phi_{a}(f,g)\|_{a;r}^{\tau}\ =\ \|u\widetilde{A}_{a}^{-1}(h)\|_{a;r}^{\tau}\ \leqslant\ \|u\widetilde{A}_{a}^{-1}\|_{a;r}^{\tau}\cdot\|h\|_{a}^{\tau}.\]

By Corollary 6.5.6 we have

\[\|h\|_{a}^{\tau}\ \leqslant\ D\cdot\max_{|\boldsymbol{i}|>0}\|u^{-1}S_{\boldsymbol{i}}(f+g)\|_{a}\cdot\big(1+\|y\|_{a;r}+\cdots+\|y\|_{a;r}^{d-1}\big)\cdot\|y\|_{a;r}^{\tau}\]

where \(D=D(1,d,r)=\binom{d+r+1}{r+1}-1\). Let \(D_{a}\) be as in the proof of Lemma 6.2.2. Then \(D_{a}\to 0\) as \(a\to\infty\), and Lemma 6.2.11 gives for \(|\boldsymbol{i}|>0\),

\[\|u^{-1}S_{\boldsymbol{i}}(f)\|_{a}\ \leqslant\ D_{a}\cdot\max\big\{1,\|f\|_{a;r}^{d}\big\},\]
\[\|u^{-1}S_{\boldsymbol{i}}(f+g)\|_{a}\ \leqslant\ D_{a}\cdot\max\big\{1,\|f+g\|_{a;r}^{d}\big\}\ \leqslant\ 2^{d}D_{a}\cdot\max\big\{1,\|f\|_{a;r}^{d}\big\}\cdot\max\big\{1,\|g\|_{a;r}^{d}\big\}.\]

This gives the desired result for \(E_{a}:=\|u\widetilde{A}_{a}^{-1}\|_{a;r}^{\tau}\cdot D\cdot D_{a}\) and \(E_{a}^{+}:=2^{d}E_{a}\), using also Proposition 6.5.7(iii) with \(\phi_{j}-\nu\mathfrak{v}^{\dagger}\) in the role of \(\phi_{j}\).
Lemma 6.5.8 allows us to refine Theorem 6.2.3 as follows:
**Corollary 6.5.9**.: _Suppose the elements \(\phi_{j}-\nu\mathfrak{v}^{\dagger}\), \(\phi_{j}-\nu\mathfrak{v}^{\dagger}-\mathfrak{m}^{\dagger}\) of \(\mathcal{C}_{a_{0}}[i]\) are alike, for \(j=1,\ldots,r\), and \(R(0)\preccurlyeq\mathfrak{v}^{\nu}\mathfrak{m}\). Then for sufficiently large \(a\) the operator \(\Xi_{a}\) maps the closed ball \(B_{a}:=\big\{f\in\mathcal{C}_{a}^{r}[i]:\,\|f\|_{a;r}\leqslant 1/2\big\}\) of the normed space \(\mathcal{C}_{a}^{r}[i]^{\mathrm{b}}\) into itself, has a unique fixed point in \(B_{a}\), and this fixed point lies in \(\mathcal{C}_{a}^{r}[i]^{\tau}\)._

Proof.: Take \(a\) such that \(\|\tau\|_{a}\leqslant 1\). Then by (6.5.2), \(B_{a}\) contains the closed ball

\[B_{a}^{\tau}\ :=\ \big\{f\in\mathcal{C}_{a}^{r}[i]:\,\|f\|_{a;r}^{\tau}\leqslant 1/2\big\}\]

of the normed space \(\mathcal{C}_{a}^{r}[i]^{\tau}\). Let \(f,g\in B_{a}^{\tau}\). Then \(\Xi_{a}(g)-\Xi_{a}(f)=\Phi_{a}(f,g-f)\) lies in \(\mathcal{C}_{a}^{r}[i]^{\tau}\) by Lemma 6.5.8, and with \(E_{a}\) as in that lemma,

\[\|\Xi_{a}(f)-\Xi_{a}(g)\|_{a;r}^{\tau}\ =\ \|\Phi_{a}(f,g-f)\|_{a;r}^{\tau}\ \leqslant\ E_{a}\cdot\max\big\{1,\|f\|_{a;r}^{d}\big\}\cdot\big(1+\cdots+\|g-f\|_{a;r}^{d-1}\big)\cdot\|g-f\|_{a;r}^{\tau}\ \leqslant\ E_{a}\cdot d\cdot\|g-f\|_{a;r}^{\tau}.\]

Taking \(a\) so that moreover \(E_{a}d\leqslant\frac{1}{2}\) we obtain

\[\|\Xi_{a}(f)-\Xi_{a}(g)\|_{a;r}^{\tau}\ \leqslant\ \frac{1}{2}\|f-g\|_{a;r}^{\tau}\qquad\text{for all }f,g\in B_{a}^{\tau}. \tag{6.5.4}\]

Next we consider the case \(g=0\). Our hypothesis \(R(0)\preccurlyeq\mathfrak{v}^{\nu}\mathfrak{m}\) gives \(u^{-1}R(0)\in\mathcal{C}_{a}[i]^{\tau}\). Proposition 6.5.7(i),(ii) with \(\phi_{j}-\nu\mathfrak{v}^{\dagger}\) in the role of \(\phi_{j}\) gives \(\Xi_{a}(0)\in\mathcal{C}_{a}^{r}[i]^{\tau}\) and \(\|\Xi_{a}(0)\|_{a;r}^{\tau}\leqslant\|u\widetilde{A}_{a}^{-1}\|_{a;r}^{\tau}\,\|u^{-1}R(0)\|_{a}^{\tau}\). Using Proposition 6.5.7(iii) we now take \(a\) so large that \(\|\Xi_{a}(0)\|_{a;r}^{\tau}\leqslant\frac{1}{4}\). Then (6.5.4) for \(g=0\) gives \(\Xi_{a}(B_{a}^{\tau})\subseteq B_{a}^{\tau}\). By Lemma 6.5.2 the normed space \(\mathcal{C}_{a}^{r}[i]^{\tau}\) is complete, hence \(\Xi_{a}\) has a unique fixed point in \(B_{a}^{\tau}\).
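The fixed point can be obtained explicitly as the limit of the iterates \(\Xi_{a}^{n}(0)\): by (6.5.4) and \(\|\Xi_{a}(0)\|_{a;r}^{\tau}\leqslant\frac{1}{4}\) we have

\[\|\Xi_{a}^{n+1}(0)-\Xi_{a}^{n}(0)\|_{a;r}^{\tau}\ \leqslant\ 2^{-n}\,\|\Xi_{a}(0)\|_{a;r}^{\tau}\ \leqslant\ 2^{-n-2},\]

so the iterates form a Cauchy sequence in \(B_{a}^{\tau}\) converging to the fixed point; this is the usual contraction-mapping argument invoked above.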
Now suppose in addition that \(A\in H[\mathfrak{d}]\) and \(R\in H\{Y\}\). Set
\[(\mathcal{C}_{a}^{r})^{\mathfrak{t}}\ :=\ \bigl\{f\in\mathcal{C}_{a}^{r}:\|f\|_{a;r}^{\mathfrak{t}}<\infty\bigr\}\ =\ \mathcal{C}_{a}^{r}[\mathrm{i}]^{\mathfrak{t}}\cap\mathcal{C}_{a}^{r},\]
a real Banach space with respect to \(\|\cdot\|_{a;r}^{\mathfrak{t}}\). Increase \(a_{0}\) as at the beginning of the subsection _Preserving reality_ of Section 6.2. Then we have the map
\[\operatorname{Re}\Phi_{a}\colon(\mathcal{C}_{a}^{r})^{\mathrm{b}}\times( \mathcal{C}_{a}^{r})^{\mathrm{b}}\to(\mathcal{C}_{a}^{r})^{\mathrm{b}},\ \ \ \ \ \ \ (f,y)\mapsto\operatorname{Re}\bigl{(}\Phi_{a}(f,y)\bigr{)}.\]
Suppose the elements \(\phi_{j}-\nu\mathfrak{v}^{\dagger}\), \(\phi_{j}-\nu\mathfrak{v}^{\dagger}-\mathfrak{m}^{\dagger}\) are alike for \(j=1,\ldots,r\), and let \(a\) and \(E_{a},E_{a}^{+}\) be as in Lemma 6.5.8. Then this lemma yields:
**Lemma 6.5.10**.: _Let \(f,g\in(\mathcal{C}^{r}_{a})^{\mathrm{b}}\) and \(y\in(\mathcal{C}^{r}_{a})^{\mathfrak{t}}\). Then \((\mathrm{Re}\,\Phi_{a})(f,y)\in(\mathcal{C}^{r}_{a})^{\mathfrak{t}}\) and_
\[\|(\mathrm{Re}\,\Phi_{a})(f,y)\|_{a;r}^{\mathfrak{t}}\ \leqslant\ E_{a}\cdot\max\bigl\{1,\|f\|_{a;r}^{d}\bigr\}\cdot\bigl(1+\|y\|_{a;r}+\dots+\|y\|_{a;r}^{d-1}\bigr)\cdot\|y\|_{a;r}^{\mathfrak{t}},\]
\[\|(\mathrm{Re}\,\Phi_{a})(f,g+y)-(\mathrm{Re}\,\Phi_{a})(f,g)\|_{a;r}^{ \mathfrak{t}}\ \leqslant\]
\[E_{a}^{+}\cdot\max\bigl{\{}1,\|f\|_{a;r}^{d}\bigr{\}}\cdot\max\bigl{\{}1,\|g \|_{a;r}^{d}\bigr{\}}\cdot\bigl{(}1+\|y\|_{a;r}+\dots+\|y\|_{a;r}^{d-1}\bigr{)} \cdot\|y\|_{a;r}^{\mathfrak{t}}.\]
In the same way that we derived Corollary 6.5.9 from Lemma 6.5.8, this leads to:
**Corollary 6.5.11**.: _If \(R(0)\preccurlyeq\mathfrak{v}^{\nu}\mathfrak{m}\), then for sufficiently large \(a\) the operator \(\mathrm{Re}\,\Xi_{a}\) maps the closed ball \(B_{a}:=\bigl\{f\in\mathcal{C}^{r}_{a}:\,\|f\|_{a;r}\leqslant 1/2\bigr\}\) of the normed space \((\mathcal{C}^{r}_{a})^{\mathrm{b}}\) into itself, has a unique fixed point in \(B_{a}\), and this fixed point lies in \((\mathcal{C}^{r}_{a})^{\mathfrak{t}}\)._
**Revisiting Lemma 6.2.13**.: Here we adopt the setting of the previous subsection. As usual, \(a\) ranges over \([a_{0},\infty)\). We continue our investigation of the differences \(f-g\) between solutions \(f\), \(g\) of the equation \((*)\) on \([a_{0},\infty)\) from Section 6.2 which we began in Lemma 6.2.5, and so we take \(f\), \(g\), \(E\), \(\varepsilon\), \(h_{a}\), \(\theta_{a}\) as in that lemma. Recall that in the remarks preceding Lemma 6.2.13 we defined continuous operators \(\Phi_{a},\Psi_{a}\colon\mathcal{C}^{r}_{a}[\mathrm{i}]^{\mathrm{b}}\to\mathcal{ C}^{r}_{a}[\mathrm{i}]^{\mathrm{b}}\) by
\[\Phi_{a}(y)\ :=\ \Phi_{a}(g,y)\ =\ \Xi_{a}(g+y)-\Xi_{a}(g),\quad\Psi_{a}(y)\ :=\ \Phi_{a}(y)+h_{a}\qquad(y\in \mathcal{C}^{r}_{a}[\mathrm{i}]^{\mathrm{b}}).\]
As in those remarks, we set \(\rho:=\|f-g\|_{a_{0};r}\) and
\[B_{a}\ :=\ \bigl{\{}y\in\mathcal{C}^{r}_{a}[\mathrm{i}]^{\mathrm{b}}:\ \|y-h_{a}\|_{a;r} \leqslant 1/2\bigr{\}},\]
and take \(a_{1}\geqslant a_{0}\) so that \(\theta_{a}\in B_{a}\) for all \(a\geqslant a_{1}\). Then by (6.2.4) we have \(\|y\|_{a;r}\leqslant 1+\rho\) for \(a\geqslant a_{1}\) and \(y\in B_{a}\). Next, take \(a_{2}\geqslant a_{1}\) as in Lemma 6.2.13; thus for \(a\geqslant a_{2}\) and \(y,z\in B_{a}\) we have \(\Psi_{a}(y)\in B_{a}\) and \(\|\Psi_{a}(y)-\Psi_{a}(z)\|_{a;r}\leqslant\frac{1}{2}\|y-z\|_{a;r}\). As in the previous subsection, \(\mathfrak{m}\in H^{\times}\), \(\mathfrak{m}\prec 1\), \(\mathfrak{m}\) denotes also a representative in \((\mathcal{C}^{r}_{a_{0}})^{\times}\) of the germ \(\mathfrak{m}\), and \(\mathfrak{t}:=\mathfrak{m}|_{[a,\infty)}\in(\mathcal{C}^{r}_{a})^{\times}\cap(\mathcal{C}^{r}_{a})^{\mathrm{b}}\), so \(\mathcal{C}^{r}_{a}[\mathrm{i}]^{\mathfrak{t}}\subseteq\mathcal{C}^{r}_{a}[\mathrm{i}]^{\mathrm{b}}\).
_In the rest of this subsection \(\phi_{1}-\nu\mathfrak{v}^{\dagger},\dots,\phi_{r}-\nu\mathfrak{v}^{\dagger}\in K\) are \(\gamma\)-repulsive for \(\gamma:=v\mathfrak{m}\in v(H^{\times})^{>}\), and \(h_{a}\in\mathcal{C}^{r}_{a}[\mathrm{i}]^{\mathfrak{t}}\) for all \(a\geqslant a_{2}\)._ Then Corollary 4.5.5 gives \(a_{3}\geqslant a_{2}\) such that for all \(a\geqslant a_{3}\) and \(j=1,\dots,r\), the functions \(\phi_{j}-u^{\dagger},\phi_{j}-(u\mathfrak{t})^{\dagger}\in\mathcal{C}_{a}[ \mathrm{i}]\) are alike and hence \(\Psi_{a}\bigl{(}\mathcal{C}^{r}_{a}[\mathrm{i}]^{\mathfrak{t}}\bigr{)}\subseteq \mathcal{C}^{r}_{a}[\mathrm{i}]^{\mathfrak{t}}\) by Lemma 6.5.8. Thus \(\Psi_{a}^{n}(h_{a})\in\mathcal{C}^{r}_{a}[\mathrm{i}]^{\mathfrak{t}}\) for all \(n\) and \(a\geqslant a_{3}\).
For \(a\geqslant a_{2}\) we have \(\lim_{n\to\infty}\Psi_{a}^{n}(h_{a})=\theta_{a}\) in \(\mathcal{C}^{r}_{a}[\mathrm{i}]^{\mathrm{b}}\) by Corollary 6.2.14; we now aim to strengthen this to "in \(\mathcal{C}^{r}_{a}[\mathrm{i}]^{\mathfrak{t}}\)" (possibly for a larger \(a_{2}\)). Towards this:
**Lemma 6.5.12**.: _There exists \(a_{4}\geqslant a_{3}\) such that \(\|\Psi_{a}(y)-\Psi_{a}(z)\|_{a;r}^{\mathfrak{t}}\leqslant\frac{1}{2}\|y-z\|_{a; r}^{\mathfrak{t}}\) for all \(a\geqslant a_{4}\) and \(y,z\in B_{a}\cap\mathcal{C}^{r}_{a}[\mathrm{i}]^{\mathfrak{t}}\)._
Proof.: For \(a\geqslant a_{3}\) and \(y,z\in\mathcal{C}^{r}_{a}[\mathrm{i}]^{\mathfrak{t}}\), and with \(E_{a}^{+}\) as in Lemma 6.5.8 we have
\[\|\Psi_{a}(y)-\Psi_{a}(z)\|_{a;r}^{\mathfrak{t}}\ \leqslant\] \[E_{a}^{+}\cdot\max\bigl{\{}1,\|g\|_{a;r}^{d}\bigr{\}}\cdot\max \bigl{\{}1,\|z\|_{a;r}^{d}\bigr{\}}\cdot\bigl{(}1+\|y-z\|_{a;r}+\dots+\|y-z\|_{a; r}^{d-1}\bigr{)}\cdot\|y-z\|_{a;r}^{\mathfrak{t}}.\]
For each \(a\geqslant a_{1}\) and \(y,z\in B_{a}\) we then have
\[\max\bigl{\{}1,\|z\|_{a;r}^{d}\bigr{\}}\cdot\bigl{(}1+\|y-z\|_{a;r}+\dots+\|y-z \|_{a;r}^{d-1}\bigr{)}\ \leqslant\ (1+\rho)^{d}\cdot d,\]
so taking \(a_{4}\geqslant a_{3}\) with
\[E_{a}^{+}\max\bigl{\{}1,\|g\|_{a_{0};r}^{d}\bigr{\}}(1+\rho)^{d}d\leqslant 1/2 \quad\text{ for all }a\geqslant a_{4},\]
we have \(\|\Psi_{a}(y)-\Psi_{a}(z)\|_{a;r}^{\mathfrak{t}}\leqslant\frac{1}{2}\|y-z\|_{a; r}^{\mathfrak{t}}\) for all \(a\geqslant a_{4}\) and \(y,z\in B_{a}\cap\mathcal{C}^{r}_{a}[\mathrm{i}]^{\mathfrak{t}}\).
Let \(a_{4}\) be as in the previous lemma.
**Corollary 6.5.13**.: _Suppose \(a\geqslant a_{4}\). Then \(\theta_{a}\in\mathcal{C}_{a}^{r}[\mathrm{i}]^{\mathfrak{t}}\) and \(\lim_{n\to\infty}\Psi_{a}^{n}(h_{a})=\theta_{a}\) in the normed space \(\mathcal{C}_{a}^{r}[\mathrm{i}]^{\mathfrak{t}}\). In particular, \(f-g,(f-g)^{\prime},\ldots,(f-g)^{(r)}\preccurlyeq\mathfrak{m}\)._
Proof.: We have \(\Phi_{a}(h_{a})=\Psi_{a}(h_{a})-h_{a}\in\mathcal{C}_{a}^{r}[\mathrm{i}]^{\mathfrak{t}}\), so \(M:=\|\Phi_{a}(h_{a})\|_{a;r}^{\mathfrak{t}}<\infty\). Since \(\Psi_{a}(B_{a})\subseteq B_{a}\), induction on \(n\) using Lemma 6.5.12 gives
\[\|\Psi_{a}^{n+1}(h_{a})-\Psi_{a}^{n}(h_{a})\|_{a;r}^{\mathfrak{t}}\leqslant M/2^{n}\qquad\text{ for all }n.\]
Hence \(\|\Psi_{a}^{m}(h_{a})-\Psi_{a}^{n}(h_{a})\|_{a;r}^{\mathfrak{t}}\leqslant\sum_{k=n}^{m-1}M/2^{k}\leqslant M/2^{n-1}\) for \(m>n\), so \(\bigl(\Psi_{a}^{n}(h_{a})\bigr)\) is a Cauchy sequence in the normed space \(\mathcal{C}_{a}^{r}[\mathrm{i}]^{\mathfrak{t}}\), and so converges in \(\mathcal{C}_{a}^{r}[\mathrm{i}]^{\mathfrak{t}}\) by Lemma 6.5.2. In the normed space \(\mathcal{C}_{a}^{r}[\mathrm{i}]^{\mathrm{b}}\) we have \(\lim_{n\to\infty}\Psi_{a}^{n}(h_{a})=\theta_{a}\), by Corollary 6.2.14. Thus \(\lim_{n\to\infty}\Psi_{a}^{n}(h_{a})=\theta_{a}\) in \(\mathcal{C}_{a}^{r}[\mathrm{i}]^{\mathfrak{t}}\) by Lemma 6.5.1.
**An application to slots in \(H\).** Here we adopt the setting of the subsection _An application to slots in \(H\)_ in Section 5.10. Thus \(H\supseteq\mathbb{R}\) is a Liouville closed Hardy field, \(K:=H[i]\), \(\mathrm{I}(K)\subseteq K^{\dagger}\), and \((P,1,\widehat{h})\) is a slot in \(H\) of order \(r\geqslant 1\); we set \(w:=\mathrm{wt}(P)\), \(d:=\deg P\). _Assume also that \(K\) is \(1\)-linearly surjective if \(r\geqslant 3\)._
**Proposition 6.5.14**.: _Suppose \((P,1,\widehat{h})\) is special, ultimate, \(Z\)-minimal, deep, and strongly repulsive-normal. Let \(f,g\in\mathcal{C}^{r}[i]\) and \(\mathfrak{m}\in H^{\times}\) be such that_
\[P(f)\ =\ P(g)\ =\ 0,\qquad f,g\ \prec\ 1,\qquad v\mathfrak{m}\in v(\widehat{h}-H).\]
_Then \((f-g)^{(j)}\preccurlyeq\mathfrak{m}\) for \(j=0,\ldots,r\)._
Proof.: We arrange \(\mathfrak{m}\prec 1\). Let \(\mathfrak{v}:=|\mathfrak{v}(L_{P})|\in H^{>}\), so \(\mathfrak{v}\prec^{\flat}1\), and set \(\Delta:=\Delta(\mathfrak{v})\). Take \(Q,R\in H\{Y\}\) where \(Q\) is homogeneous of degree \(1\) and order \(r\), \(A:=L_{Q}\in H[\mathfrak{d}]\) has a strong \(\widehat{h}\)-repulsive splitting over \(K\), \(P=Q-R\), and \(R\prec_{\Delta}\mathfrak{v}^{w+1}P_{1}\), so \(\mathfrak{v}(A)\sim\mathfrak{v}(L_{P})\) by Lemma 3.1.1. Multiplying \(P\), \(Q\), \(R\) by some \(b\in H^{\times}\) we arrange that \(A\) is monic, so \(A=\mathfrak{d}^{r}+f_{1}\mathfrak{d}^{r-1}+\cdots+f_{r}\) with \(f_{1},\ldots,f_{r}\in H\) and \(R\prec_{\Delta}\mathfrak{v}^{w}\). Let \((\phi_{1},\ldots,\phi_{r})\in K^{r}\) be a strong \(\widehat{h}\)-repulsive splitting of \(A\) over \(K\), so \(\phi_{1},\ldots,\phi_{r}\) are \(\widehat{h}\)-repulsive and
\[A\ =\ (\mathfrak{d}-\phi_{1})\cdots(\mathfrak{d}-\phi_{r}),\qquad\operatorname{Re }\phi_{1},\ldots,\operatorname{Re}\phi_{r}\ \succcurlyeq\mathfrak{v}^{\dagger}\ \succcurlyeq 1.\]
By Corollary 3.1.6 we have \(\phi_{1},\ldots,\phi_{r}\preccurlyeq\mathfrak{v}^{-1}\). Thus we can take \(a_{0}\in\mathbb{R}\) and functions on \([a_{0},\infty)\) representing the germs \(\phi_{1},\ldots,\phi_{r}\), \(f_{1},\ldots,f_{r}\), \(f\), \(g\) and the \(R_{\boldsymbol{j}}\) with \(\boldsymbol{j}\in\mathbb{N}^{1+r}\), \(|\boldsymbol{j}|\leqslant d\), \(\|\boldsymbol{j}\|\leqslant w\) (using the same symbols for the germs mentioned as for their chosen representatives) so as to be in the situation described in the beginning of Section 6.2, with \(f\) and \(g\) solutions on \([a_{0},\infty)\) of the differential equation \((*)\) there. As there, we take \(\nu\in\mathbb{Q}\) with \(\nu>w\) so that \(R\prec_{\Delta}\mathfrak{v}^{\nu}\) and \(\nu\mathfrak{v}^{\dagger}\not\sim\operatorname{Re}\phi_{j}\) for \(j=1,\ldots,r\), and then increase \(a_{0}\) to satisfy all assumptions for Lemma 6.2.1. Corollary 3.3.15 gives \(v(\mathfrak{v}^{\nu})\in v(\widehat{h}-H)\), so \(\phi_{j}-\nu\mathfrak{v}^{\dagger}=\phi_{j}-(\mathfrak{v}^{\nu})^{\dagger}\) (\(j=1,\ldots,r\)) is \(\widehat{h}\)-repulsive by Lemma 4.5.13(iv), so \(\gamma\)-repulsive for \(\gamma:=v\mathfrak{m}>0\). Now \(A\) splits over \(K\), and \(K\) is \(1\)-linearly surjective if \(r\geqslant 3\), hence \(\dim_{\mathbb{C}}\ker_{\mathrm{U}}A=r\) by Lemma 5.10.22. Thus by Corollary 5.10.16 we have \(y,y^{\prime},\ldots,y^{(r)}\prec\mathfrak{m}\) for all \(y\in\mathcal{C}^{r}[i]\) with \(A(y)=0\), \(y\prec 1\). In particular, \(\mathfrak{m}^{-1}h_{a},\mathfrak{m}^{-1}h_{a}^{\prime},\ldots,\mathfrak{m}^{-1}h_{ a}^{(r)}\prec 1\) for all \(a\geqslant a_{0}\). Thus the assumptions on \(\mathfrak{m}\) and the \(h_{a}\) made just before Lemma 6.5.12 are satisfied for a suitable choice of \(a_{2}\), so we can appeal to Corollary 6.5.13.
The assumption that \(K\) is \(1\)-linearly surjective for \(r\geqslant 3\) was only used in the proof above to obtain \(\dim_{\mathbb{C}}\ker_{\mathrm{U}}A=r\). So if \(A\) as in this proof satisfies \(\dim_{\mathbb{C}}\ker_{\mathrm{U}}A=r\), then we can drop this assumption about \(K\), also in the next corollary.
**Corollary 6.5.15**.: _Suppose \((P,1,\widehat{h})\), \(f\), \(g\), \(\mathfrak{m}\) are as in Proposition 6.5.14. Then_
\[f-g\in\mathfrak{m}\,\mathcal{C}^{r}[\mathrm{i}]^{\preccurlyeq}.\]
Proof.: If \(\mathfrak{m}\succcurlyeq 1\), then Lemma 6.4.1(ii) applied with \(y=(f-g)/\mathfrak{m}\) and \(1/\mathfrak{m}\) in place of \(\mathfrak{m}\) gives what we want. Now assume \(\mathfrak{m}\prec 1\). Since \(\widehat{h}\) is special over \(H\), Proposition 6.5.14 applies to \(\mathfrak{m}^{r+1}\) in place of \(\mathfrak{m}\), so \((f-g)^{(j)}\preccurlyeq\mathfrak{m}^{r+1}\) for \(j=0,\ldots,r\). Now apply Lemma 6.5.3 to suitable representatives of \(f-g\) and \(\mathfrak{m}\).
Later in this section we use Proposition 6.5.14 and its Corollary 6.5.15 to strengthen some results from Section 6.4. In Section 7.7 we give further refinements of that proposition for the case of firm and flabby slots, but these are not needed for the proof of our main result, given in Section 6.7.
**Weighted refinements of results in Section 6.4.** We now adopt the setting of the subsection _Reformulations_ of Section 6.4. Thus \(H\supseteq\mathbb{R}\) is a real closed Hardy field with asymptotic integration, and \(K:=H[\mathrm{i}]\subseteq\mathcal{C}^{<\infty}[\mathrm{i}]\) is its algebraic closure, with value group \(\Gamma:=v(H^{\times})=v(K^{\times})\). The next lemma and its corollary refine Lemma 6.4.5. Let \(P\), \(Q\), \(R\), \(L_{Q}\), \(w\) be as introduced before that lemma, set \(\mathfrak{v}:=|\mathfrak{v}(L_{Q})|\in H^{>}\), and, in case \(\mathfrak{v}\prec 1\), \(\Delta:=\Delta(\mathfrak{v})\).
**Lemma 6.5.16**.: _Let \(f\in K^{\times}\) and \(\phi_{1},\ldots,\phi_{r}\in K\) be such that_
\[L_{Q}\ =\ f(\partial-\phi_{1})\cdots(\partial-\phi_{r}),\qquad\mathrm{Re}\,\phi_{ 1},\ldots,\mathrm{Re}\,\phi_{r}\ \succcurlyeq 1.\]
_Assume \(\mathfrak{v}\prec 1\) and \(R\prec_{\Delta}\mathfrak{v}^{w+1}Q\). Let \(\mathfrak{m}\in H^{\times}\), \(\mathfrak{m}\prec 1\), \(P(0)\preccurlyeq\mathfrak{v}^{w+2}\mathfrak{m}Q\). Suppose that for \(j=1,\ldots,r\) and all \(\nu\in\mathbb{Q}\) with \(w<\nu<w+1\), \(\phi_{j}-(\mathfrak{v}^{\nu})^{\dagger}\) and \(\phi_{j}-(\mathfrak{v}^{\nu}\mathfrak{m})^{\dagger}\) are alike. Then \(P(y)=0\) and \(y,y^{\prime},\ldots,y^{(r)}\preccurlyeq\mathfrak{m}\) for some \(y\prec\mathfrak{v}^{w}\) in \(\mathcal{C}^{<\infty}[\mathrm{i}]\). If \(P,Q\in H\{Y\}\), then there is such \(y\) in \(\mathcal{C}^{<\infty}\)._
Proof.: Note that \(\phi_{1},\ldots,\phi_{r}\preccurlyeq\mathfrak{v}^{-1}\) by Corollary 3.1.6 and that \(R\prec_{\Delta}\mathfrak{v}^{w+1}Q\) gives \(f^{-1}R\prec_{\Delta}\mathfrak{v}^{w}\). Take \(\nu\in\mathbb{Q}\) such that \(w<\nu<w+1\), \(f^{-1}R\prec_{\Delta}\mathfrak{v}^{\nu}\) and \(\nu\mathfrak{v}^{\dagger}\not\sim\mathrm{Re}\,\phi_{j}\) for \(j=1,\ldots,r\). Set \(A\ :=\ f^{-1}L_{Q}\). From \(\nu<w+1\) and
\[R(0)\ =\ -P(0)\ \prec_{\Delta}\mathfrak{v}^{w+2}\mathfrak{m}Q\]
we obtain \(f^{-1}R(0)\prec_{\Delta}\mathfrak{v}^{\nu}\mathfrak{m}\). Thus we can apply successively Corollary 6.5.9, Lemma 6.2.1, and Corollary 6.3.5 to the equation \(A(y)=f^{-1}R(y)\), \(y\prec 1\) in the role of \((*)\) in Section 6.2 to obtain the first part. For the real variant, use instead Corollary 6.5.11 and Lemma 6.2.6.
Lemma 6.5.16 with \(\mathfrak{m}^{r+1}\) for \(\mathfrak{m}\) has the following consequence, using Lemma 6.5.3:
**Corollary 6.5.17**.: _Let \(f\in K^{\times}\) and \(\phi_{1},\ldots,\phi_{r}\in K\) be such that_
\[L_{Q}\ =\ f(\partial-\phi_{1})\cdots(\partial-\phi_{r}),\qquad\mathrm{Re}\,\phi_{ 1},\ldots,\mathrm{Re}\,\phi_{r}\ \succcurlyeq 1.\]
_Assume \(\mathfrak{v}\prec 1\) and \(R\prec_{\Delta}\mathfrak{v}^{w+1}Q\). Let \(\mathfrak{m}\in H^{\times}\), \(\mathfrak{m}\prec 1\), \(P(0)\preccurlyeq\mathfrak{v}^{w+2}\mathfrak{m}^{r+1}Q\). Suppose that for \(j=1,\ldots,r\) and all \(\nu\in\mathbb{Q}\) with \(w<\nu<w+1\), \(\phi_{j}-(\mathfrak{v}^{\nu})^{\dagger}\) and \(\phi_{j}-(\mathfrak{v}^{\nu}\mathfrak{m}^{r+1})^{\dagger}\) are alike. Then for some \(y\prec\mathfrak{v}^{w}\) in \(\mathcal{C}^{<\infty}[\mathrm{i}]\) we have \(P(y)=0\) and \(y\in\mathfrak{m}\,\mathcal{C}^{r}[\mathrm{i}]^{\preccurlyeq}\). If \(P,Q\in H\{Y\}\), then there is such \(y\) in \(\mathcal{C}^{<\infty}\)._
_Remark_.: If \(H\) is a \(\mathcal{C}^{\infty}\)-Hardy field, then Lemma 6.5.16 and Corollary 6.5.17 go through with \(\mathcal{C}^{<\infty}[\mathrm{i}]\), \(\mathcal{C}^{<\infty}\) replaced by \(\mathcal{C}^{\infty}[\mathrm{i}]\), \(\mathcal{C}^{\infty}\), respectively. Likewise with \(\mathcal{C}^{\omega}\) in place of \(\mathcal{C}^{\infty}\). (Use Corollary 6.3.5.)
Next a variant of Lemma 6.4.6. _In the rest of this subsection \((P,\mathfrak{n},\widehat{h})\) is a deep, strongly repulsive-normal, \(Z\)-minimal slot in \(H\) of order \(r\geqslant 1\) and weight \(w:=\operatorname{wt}(P)\). We assume also that \((P,\mathfrak{n},\widehat{h})\) is special_ (_as will be the case if \(H\) is \(r\)-linearly newtonian, and \(\omega\)-free if \(r>1\), by Lemma 3.2.36_).
**Lemma 6.5.18**.: _Let \(\mathfrak{m}\in H^{\times}\) be such that \(v\mathfrak{m}\in v(\widehat{h}-H)\), \(\mathfrak{m}\prec\mathfrak{n}\), and \(P(0)\preccurlyeq\mathfrak{v}(L_{P_{\times\mathfrak{n}}})^{w+2}\,(\mathfrak{m}/ \mathfrak{n})^{r+1}\,P_{\times\mathfrak{n}}\). Then for some \(y\in\mathcal{C}^{<\infty}\),_
\[P(y)\ =\ 0,\quad y\in\mathfrak{m}\,(\mathcal{C}^{r})^{\preccurlyeq}.\]
_If \(H\subseteq\mathcal{C}^{\infty}\), then there is such \(y\) in \(\mathcal{C}^{\infty}\); likewise with \(\mathcal{C}^{\omega}\) in place of \(\mathcal{C}^{\infty}\)._
Proof.: Replace \((P,\mathfrak{n},\widehat{h})\), \(\mathfrak{m}\) by \((P_{\times\mathfrak{n}},1,\widehat{h}/\mathfrak{n})\), \(\mathfrak{m}/\mathfrak{n}\) to arrange \(\mathfrak{n}=1\). Then \(L_{P}\) has order \(r\), \(\mathfrak{v}(L_{P})\prec^{\flat}1\), and \(P=Q-R\) where \(Q,R\in H\{Y\}\), \(Q\) is homogeneous of degree \(1\) and order \(r\), \(L_{Q}\in H[\mathfrak{d}]\) has a strong \(\widehat{h}\)-repulsive splitting \((\phi_{1},\dots,\phi_{r})\in K^{r}\) over \(K=H[\mathrm{i}]\), and \(R\prec_{\Delta^{*}}\mathfrak{v}(L_{P})^{w+1}P_{1}\) with \(\Delta^{*}:=\Delta\bigl(\mathfrak{v}(L_{P})\bigr)\). By Lemma 3.1.1(ii) we have \(\mathfrak{v}(L_{P})\sim\mathfrak{v}(L_{Q})\asymp\mathfrak{v}\), so \(\operatorname{Re}\phi_{j}\succcurlyeq\mathfrak{v}^{\dagger}\succcurlyeq 1\) for \(j=1,\dots,r\), and \(\Delta=\Delta^{*}\). Moreover, \(P(0)\preccurlyeq\mathfrak{v}^{w+2}\mathfrak{m}^{r+1}Q\). Let \(\nu\in\mathbb{Q}\), \(\nu>w\), and \(j\in\{1,\dots,r\}\). Then \(0<v(\mathfrak{v}^{\nu})\in v(\widehat{h}-H)\) by Corollary 3.3.15, so \(\phi_{j}\) is \(\gamma\)-repulsive for \(\gamma=v(\mathfrak{v}^{\nu})\), hence \(\phi_{j}\) and \(\phi_{j}-(\mathfrak{v}^{\nu})^{\dagger}\) are alike by Corollary 4.5.5. Likewise, \(0<v(\mathfrak{v}^{\nu}\mathfrak{m}^{r+1})\in v(\widehat{h}-H)\) since \(\widehat{h}\) is special over \(H\), so \(\phi_{j}\) and \(\phi_{j}-(\mathfrak{v}^{\nu}\mathfrak{m}^{r+1})^{\dagger}\) are alike. Thus \(\phi_{j}-(\mathfrak{v}^{\nu})^{\dagger}\) and \(\phi_{j}-(\mathfrak{v}^{\nu}\mathfrak{m}^{r+1})^{\dagger}\) are alike as well. Hence Corollary 6.5.17 gives \(y\prec\mathfrak{v}^{w}\) in \(\mathcal{C}^{<\infty}\) with \(P(y)=0\) and \(y\in\mathfrak{m}\,(\mathcal{C}^{r})^{\preccurlyeq}\). For the rest use the remark following that corollary.
**Corollary 6.5.19**.: _Suppose \(\mathfrak{n}=1\), and let \(\mathfrak{m}\in H^{\times}\) be such that \(v\mathfrak{m}\in v(\widehat{h}-H)\). Then there are \(h\in H\) and \(y\in\mathcal{C}^{<\infty}\) such that:_
\[\widehat{h}-h\ \preccurlyeq\mathfrak{m},\qquad P(y)\ =\ 0,\qquad y\ \prec\ 1,\ y\in(\mathcal{C}^{r})^{\preccurlyeq},\qquad y-h\in \mathfrak{m}\,(\mathcal{C}^{r})^{\preccurlyeq}.\]
_If \(H\subseteq\mathcal{C}^{\infty}\), then we have such \(y\in\mathcal{C}^{\infty}\); likewise with \(\mathcal{C}^{\omega}\) in place of \(\mathcal{C}^{\infty}\)._
Proof.: Suppose first that \(\mathfrak{m}\succcurlyeq 1\), and let \(h:=0\) and \(y\) be as in Lemma 6.4.6 for \(\phi=\mathfrak{n}=1\). Then \(y\prec 1\) and \(y\in(\mathcal{C}^{r})^{\preccurlyeq}\); since \(1/\mathfrak{m}\in H\) with \(1/\mathfrak{m}\preccurlyeq 1\), the Product Rule gives \(y/\mathfrak{m}\prec 1\) and \(y/\mathfrak{m}\in(\mathcal{C}^{r})^{\preccurlyeq}\). Next assume \(\mathfrak{m}\prec 1\) and set \(\mathfrak{v}:=|\mathfrak{v}(L_{P})|\in H^{>}\). By Corollary 3.3.15 we can take \(h\in H\) such that \(\widehat{h}-h\prec(\mathfrak{v}\mathfrak{m})^{(w+3)(r+1)}\), and then by Lemma 3.2.37 we have
\[P_{+h}(0)\ =\ P(h)\ \prec\ (\mathfrak{vm})^{w+3}P\ \prec\ \mathfrak{v}^{w+3} \mathfrak{m}^{r+1}P_{+h}.\]
By Lemma 4.5.35, \((P_{+h},1,\widehat{h}-h)\) is strongly repulsive-normal, and by Corollary 3.3.8 it is deep with \(\mathfrak{v}(L_{P_{+h}})\asymp_{\Delta(\mathfrak{v})}\mathfrak{v}\). Hence Lemma 6.5.18 applies to the slot \((P_{+h},1,\widehat{h}-h)\) in place of \((P,1,\widehat{h})\) to yield a \(z\in\mathcal{C}^{<\infty}\) with \(P_{+h}(z)=0\) and \((z/\mathfrak{m})^{(j)}\preccurlyeq 1\) for \(j=0,\dots,r\). Lemma 6.4.1 gives \(z^{(j)}\prec 1\) for \(j=0,\dots,r\). Set \(y:=h+z\); then \(P(y)=0\), \(y^{(j)}\prec 1\) and \(\big{(}(y-h)/\mathfrak{m}\big{)}^{(j)}\preccurlyeq 1\) for \(j=0,\dots,r\).
We now use the results above to approximate zeros of \(P\) in \(\mathcal{C}^{<\infty}\) by elements of \(H\):
**Corollary 6.5.20**.: _Suppose \(H\) is Liouville closed, \(\mathrm{I}(K)\subseteq K^{\dagger}\), \(\mathfrak{n}=1\), and our slot \((P,1,\widehat{h})\) in \(H\) is ultimate. Assume also that \(K\) is \(1\)-linearly surjective if \(r\geqslant 3\). Let \(y\in\mathcal{C}^{<\infty}\) and \(h\in H,\ \mathfrak{m}\in H^{\times}\) be such that \(P(y)=0\), \(y\prec 1\), and \(\widehat{h}-h\preccurlyeq\mathfrak{m}\). Then_
\[y-h\in\mathfrak{m}\,(\mathcal{C}^{r})^{\preccurlyeq}.\]
Proof.: Corollary 6.5.19 gives \(h_{1}\in H\), \(z\in\mathcal{C}^{<\infty}\) with \(\widehat{h}-h_{1}\preccurlyeq\mathfrak{m}\), \(P(z)=0\), \(z\prec 1\), and \(\big{(}(z-h_{1})/\mathfrak{m}\big{)}^{(j)}\preccurlyeq 1\) for \(j=0,\ldots,r\). Now
\[\frac{y-h}{\mathfrak{m}}\ =\ \frac{y-z}{\mathfrak{m}}+\frac{z-h_{1}}{\mathfrak{m}}+ \frac{h_{1}-h}{\mathfrak{m}}\]
with \(\big{(}(y-z)/\mathfrak{m}\big{)}^{(j)}\preccurlyeq 1\) for \(j=0,\ldots,r\) by Corollary 6.5.15. Also \((h_{1}-h)/\mathfrak{m}\in H\) and \((h_{1}-h)/\mathfrak{m}\preccurlyeq 1\), so \(\big{(}(h_{1}-h)/\mathfrak{m}\big{)}^{(j)}\preccurlyeq 1\) for all \(j\in\mathbb{N}\).
The above corollary is the only part of this section used towards establishing our main result, Theorem 6.7.22. But this use, in proving Theorem 6.7.13, is essential, and obtaining Corollary 6.5.20 required much of the above section.
### Asymptotic Similarity
Let \(H\) be a Hausdorff field and \(\widehat{H}\) an immediate valued field extension of \(H\). Equip \(\widehat{H}\) with the unique field ordering making it an ordered field extension of \(H\) such that \(\mathcal{O}_{\widehat{H}}\) is convex [ADH, 3.5.12]. Let \(f\in\mathcal{C}\) and \(\widehat{f}\in\widehat{H}\) be given.
**Definition 6.6.1**.: Call \(f\)**asymptotically similar** to \(\widehat{f}\) over \(H\) (notation: \(f\sim_{H}\widehat{f}\)) if \(f\sim\phi\) in \(\mathcal{C}\) and \(\phi\sim\widehat{f}\) in \(\widehat{H}\) for some \(\phi\in H\). (Note that then \(f\in\mathcal{C}^{\times}\) and \(\widehat{f}\neq 0\).)
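A toy illustration may help fix ideas (it is routine and not used later): take for \(H\) the Hausdorff field \(\mathbb{R}(x)\), where \(x\) denotes the germ of the identity function, and let \(f:=x+\sin x\in\mathcal{C}\). Then \(\sin x\prec x\), so \(f\sim x\) in \(\mathcal{C}\); hence for any \(\widehat{f}\) in any immediate valued field extension \(\widehat{H}\) of \(H\) with \(\widehat{f}-x\prec x\) we have \(f\sim_{H}\widehat{f}\), witnessed by \(\phi:=x\) in Definition 6.6.1.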
Recall that the binary relations \(\sim\) on \(\mathcal{C}^{\times}\) and \(\sim\) on \(\widehat{H}^{\times}\) are equivalence relations which restrict to the same equivalence relation on \(H^{\times}\). As a consequence, if \(f\sim_{H}\widehat{f}\), then \(f\sim\phi\) in \(\mathcal{C}\) for any \(\phi\in H\) with \(\phi\sim\widehat{f}\) in \(\widehat{H}\), and \(\phi\sim\widehat{f}\) in \(\widehat{H}\) for any \(\phi\in H\) with \(f\sim\phi\) in \(\mathcal{C}\). Moreover, if \(f\in H\), then \(f\sim_{H}\widehat{f}\Leftrightarrow f\sim\widehat{f}\) in \(\widehat{H}\), and if \(\widehat{f}\in H\), then \(f\sim_{H}\widehat{f}\Leftrightarrow f\sim\widehat{f}\) in \(\mathcal{C}\).
**Lemma 6.6.2**.: _Let \(f_{1}\in\mathcal{C}\), \(f_{1}\sim f\), let \(\widehat{f}_{1}\in\widehat{H}_{1}\) for an immediate valued field extension \(\widehat{H}_{1}\) of \(H\), and suppose \(\widehat{f}\sim\theta\) in \(\widehat{H}\) and \(\widehat{f}_{1}\sim\theta\) in \(\widehat{H}_{1}\) for some \(\theta\in H\). Then: \(f\sim_{H}\widehat{f}\Leftrightarrow f_{1}\sim_{H}\widehat{f}_{1}\)._
For \(\mathfrak{n}\in H^{\times}\) we have \(f\sim_{H}\widehat{f}\Leftrightarrow\mathfrak{n}f\sim_{H}\mathfrak{n}\widehat{f}\). Moreover, by Lemma 5.1.1:
**Lemma 6.6.3**.: _Let \(g\in\mathcal{C}\), \(\widehat{g}\in\widehat{H}\), and suppose \(f\sim_{H}\widehat{f}\) and \(g\sim_{H}\widehat{g}\). Then \(1/f\sim_{H}1/\widehat{f}\) and \(fg\sim_{H}\widehat{f}\widehat{g}\). Moreover,_
\[f\preccurlyeq g\ \text{in}\ \mathcal{C}\quad\Longleftrightarrow\quad\widehat{f} \preccurlyeq\widehat{g}\ \text{in}\ \widehat{H}\text{,}\]
_and likewise with \(\prec\), \(\asymp\), or \(\sim\) in place of \(\preccurlyeq\)._
Lemma 6.6.3 readily yields:
**Corollary 6.6.4**.: _Suppose \(\widehat{f}\) is transcendental over \(H\) and \(Q(f)\sim_{H}Q(\widehat{f})\) for all \(Q\in H[Y]^{\neq}\). Then we have:_
* _a subfield_ \(H(f)\supseteq H\) _of_ \(\mathcal{C}\) _generated by_ \(f\) _over_ \(H\)_;_
* _a field isomorphism_ \(\iota\colon H(f)\to H(\widehat{f})\) _over_ \(H\) _with_ \(\iota(f)=\widehat{f}\)_;_
* _with_ \(H(f)\) _and_ \(\iota\) _as in_ (i) _and_ (ii) _we have_ \(g\sim_{H}\iota(g)\) _for all_ \(g\in H(f)^{\times}\)_, hence for all_ \(g_{1},g_{2}\in H(f)\)_:_ \(g_{1}\preccurlyeq g_{2}\) _in_ \(\mathcal{C}\)__\(\Leftrightarrow\)__\(\iota(g_{1})\preccurlyeq\iota(g_{2})\) _in_ \(\widehat{H}\)_._
_Also, \(\iota\) in_ (ii) _is unique and is an ordered field isomorphism, where the ordering on \(H(f)\) is its ordering as a Hausdorff field._
Proof.: To see that \(\iota\) is order preserving, use that \(\iota\) is a valued field isomorphism by (iii), and apply [ADH, 3.5.12].
Here is the analogue when \(\widehat{f}\) is algebraic over \(H\):
**Corollary 6.6.5**.: _Suppose \(\widehat{f}\) is algebraic over \(H\) with minimum polynomial \(P\) over \(H\) of degree \(d\), and \(P(f)=0\), \(Q(f)\sim_{H}Q(\widehat{f})\) for all \(Q\in H[Y]^{\neq}\) of degree \(<d\). Then we have:_
* _a subfield_ \(H[f]\supseteq H\) _of_ \(\mathcal{C}\) _generated by_ \(f\) _over_ \(H\)_;_
* _a field isomorphism_ \(\iota\colon H[f]\to H[\widehat{f}]\) _over_ \(H\) _with_ \(\iota(f)=\widehat{f}\)_;_
* _with_ \(H[f]\) _and_ \(\iota\) _as in_ (i) _and_ (ii) _we have_ \(g\sim_{H}\iota(g)\) _for all_ \(g\in H[f]^{\times}\)_, hence for all_ \(g_{1},g_{2}\in H[f]\)_:_ \(g_{1}\preccurlyeq g_{2}\) _in_ \(\mathcal{C}\) _\(\Leftrightarrow\)__\(\iota(g_{1})\preccurlyeq\iota(g_{2})\) _in_ \(\widehat{H}\)_._
_Also, \(H[f]\) and \(\iota\) in_ (i) _and_ (ii) _are unique and \(\iota\) is an ordered field isomorphism, where the ordering on \(H[f]\) is its ordering as a Hausdorff field._
If \(\widehat{f}\notin H\), then to show that \(f-\phi\sim_{H}\widehat{f}-\phi\) for all \(\phi\in H\) it is enough to do this for \(\phi\) arbitrarily close to \(\widehat{f}\):
**Lemma 6.6.6**.: _Let \(\phi_{0}\in H\) be such that \(f-\phi_{0}\sim_{H}\widehat{f}-\phi_{0}\). Then \(f-\phi\sim_{H}\widehat{f}-\phi\) for all \(\phi\in H\) with \(\widehat{f}-\phi_{0}\prec\widehat{f}-\phi\)._
Proof.: Let \(\phi\in H\) with \(\widehat{f}-\phi_{0}\prec\widehat{f}-\phi\). Then \(\phi_{0}-\phi\succ\widehat{f}-\phi_{0}\), so \(\widehat{f}-\phi=(\widehat{f}-\phi_{0})+(\phi_{0}-\phi)\sim\phi_{0}-\phi\). By Lemma 6.6.3 we also have \(\phi_{0}-\phi\succ f-\phi_{0}\), and hence likewise \(f-\phi\sim\phi_{0}-\phi\).
We define: \(f\approx_{H}\widehat{f}:\Leftrightarrow f-\phi\sim_{H}\widehat{f}-\phi\) for all \(\phi\in H\). If \(f\approx_{H}\widehat{f}\), then \(f\sim_{H}\widehat{f}\) as well as \(f,\widehat{f}\notin H\), and \(\mathfrak{n}f\approx_{H}\mathfrak{n}\widehat{f}\) for all \(\mathfrak{n}\in H^{\times}\). Hence \(f\approx_{H}\widehat{f}\) iff \(f,\widehat{f}\notin H\) and the isomorphism \(\iota\colon H+Hf\to H+H\widehat{f}\) of \(H\)-linear spaces that is the identity on \(H\) and sends \(f\) to \(\widehat{f}\) satisfies \(g\sim_{H}\iota(g)\) for all nonzero \(g\in H+Hf\).
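(To justify the claim \(f,\widehat{f}\notin H\): if \(f\in H\), then taking \(\phi:=f\) gives \(f-\phi=0\notin\mathcal{C}^{\times}\), which is incompatible with \(f-\phi\sim_{H}\widehat{f}-\phi\); likewise, \(\widehat{f}\in H\) is ruled out by taking \(\phi:=\widehat{f}\), since then \(\widehat{f}-\phi=0\). This routine verification is recorded here for convenience.)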
Here is an easy consequence of Lemma 6.6.6:
**Corollary 6.6.7**.: _Suppose \(\widehat{f}\notin H\) and \(f-\phi_{0}\sim_{H}\widehat{f}-\phi_{0}\) for all \(\phi_{0}\in H\) such that \(\phi_{0}\sim\widehat{f}\). Then \(f\approx_{H}\widehat{f}\)._
Proof.: Take \(\phi_{0}\in H\) with \(\phi_{0}\sim\widehat{f}\), and let \(\phi\in H\) be given. If \(\widehat{f}-\phi\prec\widehat{f}\), then \(f-\phi\sim_{H}\widehat{f}-\phi\) by hypothesis; otherwise we have \(\widehat{f}-\phi\succcurlyeq\widehat{f}\succ\widehat{f}-\phi_{0}\), and then \(f-\phi\sim_{H}\widehat{f}-\phi\) by Lemma 6.6.6.
Lemma 6.6.2 yields an analogue for \(\approx_{H}\):
**Lemma 6.6.8**.: _Let \(f_{1}\in\mathcal{C}\) be such that \(f_{1}-\phi\sim f-\phi\) for all \(\phi\in H\), and let \(\widehat{f}_{1}\) be an element of an immediate valued field extension of \(H\) such that \(v(\widehat{f}-\phi)=v(\widehat{f}_{1}-\phi)\) for all \(\phi\in H\). Then \(f\approx_{H}\widehat{f}\) iff \(f_{1}\approx_{H}\widehat{f}_{1}\)._
Let \(g\in\mathcal{C}\) be eventually strictly increasing with \(g(t)\to+\infty\) as \(t\to+\infty\); we then have the Hausdorff field \(H\circ g=\{h\circ g:h\in H\}\), with ordered valued field isomorphism \(h\mapsto h\circ g\colon H\to H\circ g\). (See Section 5.1.) Suppose
\[\widehat{h}\mapsto\widehat{h}\circ g\,:\,\,\widehat{H}\to\widehat{H}\circ g\]
extends this isomorphism to a valued field isomorphism, where \(\widehat{H}\circ g\) is an immediate valued field extension of the Hausdorff field \(H\circ g\). Then
\[f\sim_{H}\widehat{f}\quad\Longleftrightarrow\quad f\circ g\sim_{H\circ g} \widehat{f}\circ g,\qquad f\approx_{H}\widehat{f}\quad\Longleftrightarrow\quad f \circ g\approx_{H\circ g}\widehat{f}\circ g.\]
**The complex version.** We now assume in addition that \(H\) is real closed, with algebraic closure \(K:=H[i]\subseteq\mathcal{C}[i]\). We take \(i\) with \(i^{2}=-1\) also as an element of a field \(\widehat{K}:=\widehat{H}[i]\) extending both \(\widehat{H}\) and \(K\), and equip \(\widehat{K}\) with the unique valuation ring of \(\widehat{K}\) lying over \(\mathcal{O}_{\widehat{H}}\); see the remarks following Lemma 4.1.2. Then \(\widehat{K}\) is an immediate valued field extension of \(K\). Let \(f\in\mathcal{C}[i]\) and \(\widehat{f}\in\widehat{K}\) below.
Call \(f\)**asymptotically similar** to \(\widehat{f}\) over \(K\) (notation: \(f\sim_{K}\widehat{f}\)) if for some \(\phi\in K\) we have \(f\sim\phi\) in \(\mathcal{C}[i]\) and \(\phi\sim\widehat{f}\) in \(\widehat{K}\). Then \(f\in\mathcal{C}[i]^{\times}\) and \(\widehat{f}\neq 0\). As before, if \(f\sim_{K}\widehat{f}\), then \(f\sim\phi\) in \(\mathcal{C}[i]\) for any \(\phi\in K\) for which \(\phi\sim\widehat{f}\) in \(\widehat{K}\), and \(\phi\sim\widehat{f}\) in \(\widehat{K}\) for any \(\phi\in K\) for which \(f\sim\phi\) in \(\mathcal{C}[i]\). Moreover, if \(f\in K\), then \(f\sim_{K}\widehat{f}\) reduces to \(f\sim\widehat{f}\) in \(\widehat{K}^{\times}\). Likewise, if \(\widehat{f}\in K\), then \(f\sim_{K}\widehat{f}\) reduces to \(f\sim\widehat{f}\) in \(\mathcal{C}[i]^{\times}\).
**Lemma 6.6.9**.: _Let \(f_{1}\in\mathcal{C}[i]\) with \(f_{1}\sim f\). Let \(\widehat{H}_{1}\) be an immediate valued field extension of \(H\), let \(\widehat{K}_{1}:=\widehat{H}_{1}[i]\) be the corresponding immediate valued field extension of \(K\) obtained from \(\widehat{H}_{1}\) as \(\widehat{K}\) was obtained from \(\widehat{H}\). Let \(\widehat{f}_{1}\in\widehat{K}_{1}\), and \(\theta\in K\) be such that \(\widehat{f}\sim\theta\) in \(\widehat{K}\) and \(\widehat{f}_{1}\sim\theta\) in \(\widehat{K}_{1}\). Then \(f\sim_{K}\widehat{f}\) iff \(f_{1}\sim_{K}\widehat{f}_{1}\)._
For \(\mathfrak{n}\in K^{\times}\) we have \(f\sim_{K}\widehat{f}\Leftrightarrow\mathfrak{n}f\sim_{K}\mathfrak{n}\widehat{f}\), and \(f\sim_{K}\widehat{f}\Leftrightarrow\overline{f}\sim_{K}\overline{\widehat{f}}\) (complex conjugation). Here is a useful observation relating \(\sim_{K}\) and \(\sim_{H}\):
**Lemma 6.6.10**.: _Suppose \(f\sim_{K}\widehat{f}\) and \(\operatorname{Re}\widehat{f}\succcurlyeq\operatorname{Im}\widehat{f}\); then_
\[\operatorname{Re}f\ \succcurlyeq\operatorname{Im}f,\qquad\operatorname{Re}f\ \sim_{H}\ \operatorname{Re}\widehat{f}.\]
Proof.: Let \(\phi\in K\) be such that \(f\sim\phi\) in \(\mathcal{C}[i]\) and \(\phi\sim\widehat{f}\) in \(\widehat{K}\). The latter yields \(\operatorname{Re}\phi\succcurlyeq\operatorname{Im}\phi\) in \(H\) and \(\operatorname{Re}\phi\sim\operatorname{Re}\widehat{f}\) in \(\widehat{H}\). Using that \(f=(1+\varepsilon)\phi\) with \(\varepsilon\prec 1\) in \(\mathcal{C}[i]\) it follows easily that \(\operatorname{Re}f\succcurlyeq\operatorname{Im}f\) and \(\operatorname{Re}f\sim\operatorname{Re}\phi\) in \(\mathcal{C}\).
**Corollary 6.6.11**.: _Suppose \(f\in\mathcal{C}\) and \(\widehat{f}\in\widehat{H}\). Then \(f\sim_{H}\widehat{f}\) iff \(f\sim_{K}\widehat{f}\)._
Lemmas 6.6.3 and 6.6.6 go through with \(\mathcal{C}[i]\), \(K\), \(\widehat{K}\), and \(\sim_{K}\) in place of \(\mathcal{C}\), \(H\), \(\widehat{H}\), and \(\sim_{H}\). We define: \(f\approx_{K}\widehat{f}:\Leftrightarrow f-\phi\sim_{K}\widehat{f}-\phi\) for all \(\phi\in K\). Now Corollary 6.6.7 goes through with \(K\), \(\sim_{K}\), \(\approx_{K}\) in place of \(H\), \(\sim_{H}\), \(\approx_{H}\).
**Lemma 6.6.12**.: _Suppose \(f\in\mathcal{C}\), \(\widehat{f}\in\widehat{H}\), and \(f\sim_{H}\widehat{f}\). Then \(f+g\mathrm{i}\sim_{K}\widehat{f}+g\mathrm{i}\) for all \(g\in H\)._
Proof.: Let \(g\in H\), and take \(\phi\in H\) with \(f\sim\phi\) in \(\mathcal{C}\) and \(\phi\sim\widehat{f}\) in \(\widehat{H}\). Suppose first that \(g\prec\phi\). Then \(g\mathrm{i}\prec\phi\), and together with \(f-\phi\prec\phi\) this yields \((f+g\mathrm{i})-\phi\prec\phi\), that is, \(f+g\mathrm{i}\sim\phi\) in \(\mathcal{C}[i]\) (cf. the basic properties of the relation \(\prec\) on \(\mathcal{C}[i]\) stated before Lemma 5.1.1). Using likewise the analogous properties of \(\prec\) on \(\widehat{K}\) we obtain \(\phi\sim\widehat{f}+g\mathrm{i}\) in \(\widehat{K}\). If \(\phi\prec g\), then \(f\preccurlyeq\phi\prec g\mathrm{i}\) and thus \(f+g\mathrm{i}\sim g\mathrm{i}\) in \(\mathcal{C}[i]\), and likewise \(\widehat{f}+g\mathrm{i}\sim g\mathrm{i}\) in \(\widehat{K}\). Finally, suppose \(g\asymp\phi\). Take \(c\in\mathbb{R}^{\times}\) and \(\varepsilon\in H\) with \(g=c\phi(1+\varepsilon)\) and \(\varepsilon\prec 1\). We have \(f=\phi(1+\delta)\) where \(\delta\in\mathcal{C}\), \(\delta\prec 1\), so \(f+g\mathrm{i}=\phi(1+c\mathrm{i})(1+\rho)\) where \(\rho=(1+c\mathrm{i})^{-1}(\delta+c\mathrm{i}\varepsilon)\prec 1\) in \(\mathcal{C}[i]\), so \(f+g\mathrm{i}\sim\phi(1+c\mathrm{i})\) in \(\mathcal{C}[i]\). Likewise, \(\widehat{f}+g\mathrm{i}\sim\phi(1+c\mathrm{i})\) in \(\widehat{K}\).
**Corollary 6.6.13**.: _Suppose \(f\in\mathcal{C}\) and \(\widehat{f}\in\widehat{H}\). Then \(f\approx_{H}\widehat{f}\) iff \(f\approx_{K}\widehat{f}\)._
Proof.: If \(f\approx_{K}\widehat{f}\), then for all \(\phi\in H\) we have \(f-\phi\sim_{K}\widehat{f}-\phi\), so \(f-\phi\sim_{H}\widehat{f}-\phi\) by Corollary 6.6.11, hence \(f\approx_{H}\widehat{f}\). Conversely, suppose \(f\approx_{H}\widehat{f}\). Then for all \(\phi\in K\) we have \(f-\operatorname{Re}\phi\sim_{H}\widehat{f}-\operatorname{Re}\phi\), so \(f-\phi\sim_{K}\widehat{f}-\phi\) by Lemma 6.6.12.
Next we exploit that \(K\) is algebraically closed:
**Lemma 6.6.14**.: \(f\approx_{K}\widehat{f}\implies Q(f)\sim_{K}Q(\widehat{f})\) _for all \(Q\in K[Y]^{\neq}\)._
Proof.: Factor \(Q\in K[Y]^{\neq}\) as
\[Q=a(Y-\phi_{1})\cdots(Y-\phi_{n}),\qquad a\in K^{\times},\ \phi_{1},\ldots,\phi_{n}\in K\]
and use \(f-\phi_{j}\sim_{K}\widehat{f}-\phi_{j}\) (\(j=1,\ldots,n\)) and the complex version of Lemma 6.6.3.
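Spelled out, with nothing beyond the complex version of Lemma 6.6.3: since \(\sim_{K}\) is compatible with products and \(a\sim_{K}a\),
\[Q(f)\ =\ a\prod_{j=1}^{n}(f-\phi_{j})\ \sim_{K}\ a\prod_{j=1}^{n}(\widehat{f}-\phi_{j})\ =\ Q(\widehat{f}).\]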
This yields a more useful "complex" version of Corollary 6.6.4:
**Corollary 6.6.15**.: _Suppose \(f\approx_{K}\widehat{f}\). Then \(\widehat{f}\) is transcendental over \(K\), and:_
1. \(f\) _generates over_ \(K\) _a subfield_ \(K(f)\) _of_ \(\mathcal{C}[\mathrm{i}]\)_;_
2. _we have a field isomorphism_ \(\iota\colon K(f)\to K(\widehat{f})\) _over_ \(K\) _with_ \(\iota(f)=\widehat{f}\)_;_
3. \(g\sim_{K}\iota(g)\) _for all_ \(g\in K(f)^{\times}\)_, hence for all_ \(g_{1},g_{2}\in K(f)\)_:_ \[g_{1}\preccurlyeq g_{2}\text{ in }\mathcal{C}[\mathrm{i}]\iff\iota(g_{1}) \preccurlyeq\iota(g_{2})\text{ in }\widehat{K}.\]
(_Thus the restriction of the binary relation \(\preccurlyeq\) on \(\mathcal{C}[\mathrm{i}]\) to \(K(f)\) is a dominance relation on the field \(K(f)\) in the sense of_ [ADH, 3.1.1].)
In the next lemma \(f=g+h\mathrm{i}\), \(g,h\in\mathcal{C}\), and \(\widehat{f}=\widehat{g}+\widehat{h}\mathrm{i}\), \(\widehat{g},\widehat{h}\in\widehat{H}\). Recall from Lemma 4.1.3 that if \(\widehat{f}\notin K\), then \(v(\widehat{g}-H)\subseteq v(\widehat{h}-H)\) or \(v(\widehat{h}-H)\subseteq v(\widehat{g}-H)\).
**Lemma 6.6.16**.: _Suppose \(f\approx_{K}\widehat{f}\) and \(v(\widehat{g}-H)\subseteq v(\widehat{h}-H)\). Then \(g\approx_{H}\widehat{g}\)._
Proof.: Let \(\rho\in H\) be such that \(\rho\sim\widehat{g}\); by Corollary 6.6.7 it is enough to show that then \(g-\rho\sim_{H}\widehat{g}-\rho\). Take \(\sigma\in H\) with \(\widehat{g}-\rho\succcurlyeq\widehat{h}-\sigma\), and set \(\phi:=\rho+\sigma\mathrm{i}\in K\). Then
\[\operatorname{Re}(f-\phi)=g-\rho\quad\text{ and }\quad\operatorname{Re}( \widehat{f}-\phi)=\widehat{g}-\rho\succcurlyeq\widehat{h}-\sigma= \operatorname{Im}(\widehat{f}-\phi),\]
and so by \(f-\phi\sim_{K}\widehat{f}-\phi\) and Lemma 6.6.10 we have \(g-\rho\sim_{H}\widehat{g}-\rho\).
**Corollary 6.6.17**.: _If \(f\approx_{K}\widehat{f}\), then \(\operatorname{Re}f\approx_{H}\operatorname{Re}\widehat{f}\) or \(\operatorname{Im}f\approx_{H}\operatorname{Im}\widehat{f}\)._
Let \(g\in\mathcal{C}\) be eventually strictly increasing with \(g(t)\to+\infty\) as \(t\to+\infty\); we then have the subfield \(K\circ g=(H\circ g)[\mathrm{i}]\) of \(\mathcal{C}[\mathrm{i}]\). Suppose the valued field isomorphism
\[h\mapsto h\circ g\colon H\to H\circ g\]
is extended to a valued field isomorphism
\[\widehat{h}\mapsto\widehat{h}\circ g\,:\ \widehat{H}\to\widehat{H}\circ g,\]
where \(\widehat{H}\circ g\) is an immediate valued field extension of the Hausdorff field \(H\circ g\). In the same way we took a common valued field extension \(\widehat{K}=\widehat{H}[\mathrm{i}]\) of \(\widehat{H}\) and \(K=H[\mathrm{i}]\) we now take a common valued field extension \(\widehat{K}\circ g=(\widehat{H}\circ g)[\mathrm{i}]\) of \(\widehat{H}\circ g\) and \(K\circ g=(H\circ g)[\mathrm{i}]\). This makes \(\widehat{K}\circ g\) an immediate extension of \(K\circ g\), and we have a unique valued field isomorphism \(y\mapsto y\circ g\colon\widehat{K}\to\widehat{K}\circ g\) extending the above map \(\widehat{h}\mapsto\widehat{h}\circ g\colon\widehat{H}\to\widehat{H}\circ g\) and sending \(\mathrm{i}\in\widehat{K}\) to \(\mathrm{i}\in\widehat{K}\circ g\). This map \(\widehat{K}\to\widehat{K}\circ g\) also extends \(f\mapsto f\circ g\colon K\to K\circ g\) and is the identity on \(\mathbb{C}\). See the commutative diagram below, where the labeled arrows are valued field isomorphisms and all unlabeled arrows are natural inclusions.
Now we have
\[f\sim_{K}\widehat{f}\quad\Longleftrightarrow\quad f\circ g\sim_{K\circ g}\widehat{ f}\circ g,\qquad f\approx_{K}\widehat{f}\quad\Longleftrightarrow\quad f\circ g \approx_{K\circ g}\widehat{f}\circ g.\]
At various places in the next section we use this for a Hardy field \(H\) and active \(\phi>0\) in \(H\), with \(g=\ell^{\rm inv}\), \(\ell\in\mathcal{C}^{1},\ \ell^{\prime}=\phi\). In that situation, \(H^{\circ}:=H\circ g\), \(\widehat{H}^{\circ}:=\widehat{H}\circ g\), and \(h^{\circ}:=h\circ g\), \(\widehat{h}^{\circ}:=\widehat{h}\circ g\) for \(h\in H\) and \(\widehat{h}\in\widehat{H}\), and likewise with \(K\) and \(\widehat{K}\) and their elements instead of \(H\) and \(\widehat{H}\).
### Differentially Algebraic Hardy Field Extensions
In this section we are finally able to generate, under reasonable conditions, Hardy field extensions by solutions in \(\mathcal{C}^{<\infty}\) of algebraic differential equations, culminating in the proof of our main theorem. We begin with a generality about enlarging differential fields within an ambient differential ring. Here, a _differential subfield_ of a differential ring \(E\) is a differential subring of \(E\) whose underlying ring is a field.
**Lemma 6.7.1**.: _Let \(K\) be a differential field with irreducible \(P\in K\{Y\}^{\neq}\) of order \(r\geqslant 1\), and \(E\) a differential ring extension of \(K\) with \(y\in E\) such that \(P(y)=0\) and \(Q(y)\in E^{\times}\) for all \(Q\in K\{Y\}^{\neq}\) of order \(<r\). Then \(y\) generates over \(K\) a differential subfield \(K\langle y\rangle\supseteq K\) of \(E\). Moreover, \(y\) has \(P\) as a minimal annihilator over \(K\) and \(K\langle y\rangle\) equals_
\[\left\{\frac{A(y)}{B(y)}:\ A,B\in K\{Y\},\,{\rm order}\,A\leqslant r,\,{\rm deg }_{Y^{(r)}}\,A<{\rm deg}_{Y^{(r)}}\,P,\,B\neq 0,\,{\rm order}\,B<r\right\}.\]
Proof.: Let \(p\in K[Y_{0},\ldots,Y_{r}]\) with distinct indeterminates \(Y_{0},\ldots,Y_{r}\) be such that \(P(Y)=p(Y,Y^{\prime},\ldots,Y^{(r)})\). The \(K\)-algebra morphism \(K[Y_{0},\ldots,Y_{r}]\to E\) sending \(Y_{i}\) to \(y^{(i)}\) for \(i=0,\ldots,r\) extends to a \(K\)-algebra morphism \(K(Y_{0},\ldots,Y_{r-1})[Y_{r}]\to E\) with \(p\) in its kernel, and so induces a \(K\)-algebra morphism
\[\iota\ :\ K(Y_{0},\ldots,Y_{r-1})[Y_{r}]/(p)\to E,\qquad(p)\ :=\ pK(Y_{0}, \ldots,Y_{r-1})[Y_{r}].\]
Now \(p\) as an element of \(K(Y_{0},\ldots,Y_{r-1})[Y_{r}]\) remains irreducible [122, Chapter IV, §2]. Thus \(K(Y_{0},\ldots,Y_{r-1})[Y_{r}]/(p)\) is a field, so \(\iota\) is injective, and it is routine to check that the image of \(\iota\) is \(K\langle y\rangle\) as described; see also [ADH, 4.1.6].
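By way of illustration (a standard instance, not needed in the sequel): for \(P=Y^{\prime}-Y\), irreducible of order \(r=1\) with \(\deg_{Y^{\prime}}P=1\), the constraints \(\operatorname{order}A\leqslant 1\), \(\deg_{Y^{\prime}}A<1\), \(\operatorname{order}B<1\) force \(A,B\in K[Y]\), so the description above reads
\[K\langle y\rangle\ =\ \left\{\frac{A(y)}{B(y)}:\ A,B\in K[Y],\ B\neq 0\right\}\ =\ K(y),\]
which is indeed closed under the derivation of \(E\), since \(y^{\prime}=y\in K(y)\).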
In passing we also note the obvious d-transcendental version of this lemma:
**Lemma 6.7.2**.: _Let \(K\) be a differential field and \(E\) be a differential ring extension of \(K\) with \(y\in E\) such that \(Q(y)\in E^{\times}\) for all \(Q\in K\{Y\}^{\neq}\). Then \(y\) generates
_over \(K\) a differential subfield \(K\langle y\rangle\) of \(E\). Moreover, \(y\) is \(\mathrm{d}\)-transcendental over \(K\) and_
\[K\langle y\rangle\ =\ \left\{\frac{P(y)}{Q(y)}:\ P,Q\in K\{Y\},\ Q\neq 0\right\}.\]
We now apply the material above to generate Hardy field extensions.
**Application to Hardy fields.**
_In the rest of this section \(H\) is a real closed Hardy field, \(H\supseteq\mathbb{R}\), and \(\widehat{H}\) is an immediate \(H\)-field extension of \(H\)._ Let \(f\in\mathcal{C}^{<\infty}\) and \(\widehat{f}\in\widehat{H}\). Note that if \(Q\in H\{Y\}\) and \(Q(f)\sim_{H}Q(\widehat{f})\), then \(Q(f)\in\mathcal{C}^{\times}\). Hence by Lemma 6.7.1 with \(E=\mathcal{C}^{<\infty}\), \(K=H\), we have:
**Lemma 6.7.3**.: _Suppose \(\widehat{f}\) is \(\mathrm{d}\)-algebraic over \(H\) with minimal annihilator \(P\) over \(H\) of order \(r\geqslant 1\), and \(P(f)=0\) and \(Q(f)\sim_{H}Q(\widehat{f})\) for all \(Q\in H\{Y\}\setminus H\) with \(\mathrm{order}\,Q<r\). Then \(f\notin H\) and:_
1. \(f\) _is hardian over_ \(H\)_;_
2. _we have a_ \((\)_necessarily unique_\()\) _isomorphism_ \(\iota\colon H\langle f\rangle\to H\langle\widehat{f}\rangle\) _of differential fields over_ \(H\) _such that_ \(\iota(f)=\widehat{f}\)_._
With an extra assumption \(\iota\) in Lemma 6.7.3 is an isomorphism of \(H\)-fields:
**Corollary 6.7.4**.: _Let \(\widehat{f}\), \(f\), \(P\), \(r\), \(\iota\) be as in Lemma 6.7.3, and suppose also that \(Q(f)\sim_{H}Q(\widehat{f})\) for all \(Q\in H\{Y\}\) with \(\mathrm{order}\,Q=r\) and \(\deg_{Y^{(r)}}Q<\deg_{Y^{(r)}}P\). Then \(g\sim_{H}\iota(g)\) for all \(g\in H\langle f\rangle^{\times}\), hence for \(g_{1},g_{2}\in H\langle f\rangle\) we have_
\[g_{1}\preccurlyeq g_{2}\text{ in }\mathcal{C}\ \Longleftrightarrow\ \iota(g_{1})\preccurlyeq\iota(g_{2})\text{ in }\widehat{H}.\]
_Moreover, \(\iota\) is an isomorphism of \(H\)-fields._
Proof.: Most of this follows from Lemmas 6.6.3 and 6.7.3 and the description of \(H\langle f\rangle\) in Lemma 6.7.1. For the last statement, use [ADH, 10.5.8].
Here is a \(\mathrm{d}\)-transcendental version of Lemma 6.7.3:
**Lemma 6.7.5**.: _Suppose \(Q(f)\sim_{H}Q(\widehat{f})\) for all \(Q\in H\{Y\}\setminus H\). Then:_
1. \(f\) _is hardian over_ \(H\)_;_
2. _we have a_ \((\)_necessarily unique_\()\) _isomorphism_ \(\iota\colon H\langle f\rangle\to H\langle\widehat{f}\rangle\) _of differential fields over_ \(H\) _with_ \(\iota(f)=\widehat{f}\)_; and_
3. \(g\sim_{H}\iota(g)\) _for all_ \(g\in H\langle f\rangle^{\times}\)_, hence for all_ \(g_{1},g_{2}\in H\langle f\rangle\)_:_ \[g_{1}\preccurlyeq g_{2}\text{ in }\mathcal{C}\ \Longleftrightarrow\ \iota(g_{1})\preccurlyeq\iota(g_{2})\text{ in }\widehat{H}.\]
_Moreover, \(\iota\) is an isomorphism of \(H\)-fields._
This follows easily from Lemma 6.7.2.
**Analogues for \(K=H[\mathrm{i}]\).**
We have the \(\mathrm{d}\)-valued extension \(K:=H[\mathrm{i}]\subseteq\mathcal{C}^{<\infty}[\mathrm{i}]\) of \(H\). As before we arrange that \(\widehat{K}=\widehat{H}[\mathrm{i}]\) is a \(\mathrm{d}\)-valued extension of \(\widehat{H}\) as well as an immediate extension of \(K\). Let \(f\in\mathcal{C}^{<\infty}[\mathrm{i}]\) and \(\widehat{f}\in\widehat{K}\). We now have the obvious "complex" analogues of Lemma 6.7.3 and Corollary 6.7.4:
**Lemma 6.7.6**.: _Suppose \(\widehat{f}\) is \(\mathrm{d}\)-algebraic over \(K\) with minimal annihilator \(P\) over \(K\) of order \(r\geqslant 1\), and \(P(f)=0\) and \(Q(f)\sim_{K}Q(\widehat{f})\) for all \(Q\in K\{Y\}\setminus K\) with \(\mathrm{order}\,Q<r\). Then_
1. \(f\) _generates over_ \(K\) _a differential subfield_ \(K\langle f\rangle\) _of_ \(\mathcal{C}^{<\infty}[\mathrm{i}]\)_;_
2. _we have a_ \((\)_necessarily unique_\()\) _isomorphism_ \(\iota\colon K\langle f\rangle\to K\langle\widehat{f}\rangle\) _of differential fields over_ \(K\) _such that_ \(\iota(f)=\widehat{f}\)_._
**Corollary 6.7.7**.: _Let \(\widehat{f}\), \(f\), \(P\), \(r\), \(\iota\) be as in Lemma 6.7.6, and suppose also that \(Q(f)\sim_{K}Q(\widehat{f})\) for all \(Q\in K\{Y\}\) with \(\operatorname{order}Q=r\) and \(\deg_{Y^{(r)}}Q<\deg_{Y^{(r)}}P\). Then \(g\sim_{K}\iota(g)\) for all \(g\in K\langle f\rangle^{\times}\), so for all \(g_{1},g_{2}\in K\langle f\rangle\) we have:_
\[g_{1}\preccurlyeq g_{2}\text{ in }\mathcal{C}[i]\iff\iota(g_{1})\preccurlyeq \iota(g_{2})\text{ in }\widehat{K}.\]
_Thus the relation \(\preccurlyeq\) on \(\mathcal{C}[i]\) restricts to a dominance relation on the field \(K\langle f\rangle\)._
From \(K\) being algebraically closed we obtain a useful variant of Corollary 6.7.7:
**Corollary 6.7.8**.: _Suppose \(f\approx_{K}\widehat{f}\), and \(P\in K\{Y\}\) is irreducible with_
\[\operatorname{order}P\ =\ \deg_{Y^{\prime}}P\ =\ 1,\quad P(f)=0\ \text{ in }\ \mathcal{C}^{<\infty}[i],\quad P(\widehat{f})=0\ \text{ in }\ \widehat{K}.\]
_Then \(P\) is a minimal annihilator of \(\widehat{f}\) over \(K\), \(f\) generates over \(K\) a differential subfield \(K\langle f\rangle=K(f)\) of \(\mathcal{C}^{<\infty}[i]\), and we have an isomorphism \(\iota\colon K\langle f\rangle\to K\langle\widehat{f}\rangle\) of differential fields over \(K\) such that \(\iota(f)=\widehat{f}\) and \(g\sim_{K}\iota(g)\) for all \(g\in K\langle f\rangle^{\times}\). Thus for all \(g_{1},g_{2}\in K\langle f\rangle\): \(g_{1}\preccurlyeq g_{2}\) in \(\mathcal{C}[i]\iff\iota(g_{1})\preccurlyeq\iota(g_{2})\) in \(\widehat{K}\)._
Proof.: By Corollary 6.6.15, \(\widehat{f}\) is transcendental over \(K\), so \(P\) is a minimal annihilator of \(\widehat{f}\) over \(K\) by [ADH, 4.1.6]. Now use Lemma 6.7.1 and Corollary 6.6.15.
This corollary leaves open whether \(\operatorname{Re}f\) or \(\operatorname{Im}f\) is hardian over \(H\). This issue is critical for us, and we treat a special case in Proposition 6.7.18 below. The example following Corollary 5.4.24 shows that there is a differential subfield of \(\mathcal{C}^{<\infty}[i]\) such that the binary relation \(\preccurlyeq\) on \(\mathcal{C}[i]\) restricts to a dominance relation on it, but which is not contained in \(F[i]\) for any Hardy field \(F\).
**Sufficient conditions for asymptotic similarity.**
Let \(\widehat{h}\) be an element of our immediate \(H\)-field extension \(\widehat{H}\) of \(H\). Note that in the next variant of [ADH, 11.4.3] we use ddeg instead of ndeg.
**Lemma 6.7.9**.: _Let \(Q\in H\{Y\}^{\neq}\), \(r:=\operatorname{order}Q\), \(h\in H\), and \(\mathfrak{v}\in H^{\times}\) be such that \(\widehat{h}-h\prec\mathfrak{v}\) and \(\operatorname{ddeg}_{\prec\mathfrak{v}}Q_{+h}=0\), and assume \(y\in\mathcal{C}^{<\infty}\) and \(\mathfrak{m}\in H^{\times}\) satisfy_
\[y-h\preccurlyeq\mathfrak{m}\prec\mathfrak{v},\qquad\left(\frac{y-h}{ \mathfrak{m}}\right)^{\prime},\ldots,\left(\frac{y-h}{\mathfrak{m}}\right)^{( r)}\preccurlyeq 1.\]
_Then \(Q(y)\sim Q(h)\) in \(\mathcal{C}^{<\infty}\) and \(Q(h)\sim Q(\widehat{h})\) in \(\widehat{H}\); in particular, \(Q(y)\sim_{H}Q(\widehat{h})\)._
Proof.: We have \(y=h+\mathfrak{m}u\) with \(u=\frac{y-h}{\mathfrak{m}}\in\mathcal{C}^{<\infty}\) and \(u,u^{\prime},\ldots,u^{(r)}\preccurlyeq 1\). Now
\[Q_{+h,\times\mathfrak{m}}\ =\ Q(h)+R\qquad\text{ with }R\in H\{Y\},\ R(0)=0,\]
which in view of \(\operatorname{ddeg}Q_{+h,\times\mathfrak{m}}=0\) gives \(R\prec Q(h)\). Thus
\[Q(y)\ =\ Q_{+h,\times\mathfrak{m}}(u)\ =\ Q(h)+R(u),\qquad R(u)\ \preccurlyeq\ R\ \prec\ Q(h),\]
so \(Q(y)\sim Q(h)\) in \(\mathcal{C}^{<\infty}\). Increasing \(|\mathfrak{m}|\) if necessary we arrange \(\widehat{h}-h\preccurlyeq\mathfrak{m}\), and then a similar computation with \(\widehat{h}\) instead of \(y\) gives \(Q(h)\sim Q(\widehat{h})\) in \(\widehat{H}\).
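The step \(R(u)\preccurlyeq R\) above is the usual coefficient estimate, recorded here for convenience: writing \(R=\sum_{|\boldsymbol{j}|\geqslant 1}R_{\boldsymbol{j}}Y^{\boldsymbol{j}}\) with each coefficient \(R_{\boldsymbol{j}}\preccurlyeq R\), the bounds \(u,u^{\prime},\ldots,u^{(r)}\preccurlyeq 1\) give \(u^{\boldsymbol{j}}:=u^{j_{0}}(u^{\prime})^{j_{1}}\cdots(u^{(r)})^{j_{r}}\preccurlyeq 1\), hence
\[R(u)\ =\ \sum_{|\boldsymbol{j}|\geqslant 1}R_{\boldsymbol{j}}\,u^{\boldsymbol{j}}\ \preccurlyeq\ R.\]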
_In the remainder of this subsection we assume that \(H\) is ungrounded and \(H\neq\mathbb{R}\)._
**Corollary 6.7.10**.: _Suppose \(\widehat{h}\) is \(\mathrm{d}\)-algebraic over \(H\) with minimal annihilator \(P\) over \(H\) of order \(r\geqslant 1\), and let \(y\in\mathcal{C}^{<\infty}\) satisfy \(P(y)=0\). Suppose for all \(Q\) in \(H\{Y\}\setminus H\) of order \(<r\) there are \(h\in H\), \(\mathfrak{m},\mathfrak{v}\in H^{\times}\), and an active \(\phi>0\) in \(H\) such that \(\widehat{h}-h\preccurlyeq\mathfrak{m}\prec\mathfrak{v}\), \(\mathrm{ddeg}_{\prec\mathfrak{v}}\,Q_{+h}^{\phi}=0\), and_
\[\mathfrak{s}^{j}\bigg{(}\frac{y-h}{\mathfrak{m}}\bigg{)}\preccurlyeq 1\qquad \text{ for }j=0,\dots,r-1\text{ and }\mathfrak{s}:=\phi^{-1}\partial.\]
_Then \(y\notin H\) and \(y\) is hardian over \(H\)._
Proof.: Let \(Q\in H\{Y\}\setminus H\) have order \(<r\), and take \(h\), \(\mathfrak{m}\), \(\mathfrak{v}\), \(\phi\) as in the statement of the corollary. By Lemma 6.7.3 it is enough to show that then \(Q(y)\sim_{H}Q(\widehat{h})\). We use \((\quad)^{\circ}\) as explained at the beginning of Section 6.4. Thus we have the Hardy field \(H^{\circ}\) and the \(H\)-field isomorphism \(h\mapsto h^{\circ}\colon H^{\phi}\to H^{\circ}\), extended to an \(H\)-field isomorphism \(\widehat{f}\mapsto\widehat{f}^{\circ}\colon\widehat{H}^{\phi}\to\widehat{H}^{\circ}\), for an immediate \(H\)-field extension \(\widehat{H}^{\circ}\) of \(H^{\circ}\). Set \(u:=(y-h)/\mathfrak{m}\in\mathcal{C}^{<\infty}\). We have \(\mathrm{ddeg}_{\prec\mathfrak{v}^{\circ}}\,Q_{+h^{\circ}}^{\phi\circ}=0\) and \((u^{\circ})^{(j)}\preccurlyeq 1\) for \(j=0,\dots,r-1\); hence \(Q^{\phi\circ}(y^{\circ})\sim_{H^{\circ}}\,Q^{\phi\circ}(\widehat{h}^{\circ})\) by Lemma 6.7.9. Now \(Q^{\phi\circ}(y^{\circ})=Q(y)^{\circ}\) in \(\mathcal{C}^{<\infty}\) and \(Q^{\phi\circ}(\widehat{h}^{\circ})=Q(\widehat{h})^{\circ}\) in \(\widehat{H}^{\circ}\), hence \(Q(y)\sim_{H}Q(\widehat{h})\).
Using Corollary 6.7.4 instead of Lemma 6.7.3 we show likewise:
**Corollary 6.7.11**.: _Suppose \(\widehat{h}\) is \(\mathrm{d}\)-algebraic over \(H\) with minimal annihilator \(P\) over \(H\) of order \(r\geqslant 1\), and let \(y\in\mathcal{C}^{<\infty}\) satisfy \(P(y)=0\). Suppose for all \(Q\) in \(H\{Y\}\setminus H\) with \(\mathrm{order}\,Q\leqslant r\) and \(\mathrm{deg}_{Y^{(r)}}\,Q<\mathrm{deg}_{Y^{(r)}}\,P\) there are \(h\in H\), \(\mathfrak{m},\mathfrak{v}\in H^{\times}\), and an active \(\phi>0\) in \(H\) such that \(\widehat{h}-h\preccurlyeq\mathfrak{m}\prec\mathfrak{v}\), \(\mathrm{ddeg}_{\prec\mathfrak{v}}\,Q_{+h}^{\phi}=0\), and_
\[\mathfrak{s}^{j}\bigg{(}\frac{y-h}{\mathfrak{m}}\bigg{)}\preccurlyeq 1\qquad \text{ for }j=0,\dots,r\text{ and }\mathfrak{s}:=\phi^{-1}\partial.\]
_Then \(y\) is hardian over \(H\) and there is an isomorphism \(H\langle y\rangle\to H\langle\widehat{h}\rangle\) of \(H\)-fields over \(H\) sending \(y\) to \(\widehat{h}\)._
In the next subsection we use Corollary 6.7.11 to fill in certain kinds of holes in Hardy fields. Recall from [ADH, remark after 11.4.3] that if \(\widehat{h}\notin H\) and \(Z(H,\widehat{h})=\emptyset\), then \(\widehat{h}\) is \(\mathrm{d}\)-transcendental over \(H\). The next result is a version of Corollary 6.7.11 for that situation. (This will not be used until Section 7.5 below.)
**Corollary 6.7.12**.: _Suppose \(\widehat{h}\notin H\) and \(Z(H,\widehat{h})=\emptyset\). Let \(y\in\mathcal{C}^{<\infty}\) be such that for all \(h\in H\), \(\mathfrak{m}\in H^{\times}\) with \(\widehat{h}-h\preccurlyeq\mathfrak{m}\) and all \(n\) there is an active \(\phi_{0}\) in \(H\) such that for all active \(\phi>0\) in \(H\) with \(\phi\preccurlyeq\phi_{0}\) we have \(\mathfrak{s}^{n}\big(\frac{y-h}{\mathfrak{m}}\big)\preccurlyeq 1\) for \(\mathfrak{s}=\phi^{-1}\partial\). Then \(y\) is hardian over \(H\), and there is an isomorphism \(H\langle y\rangle\to H\langle\widehat{h}\rangle\) of \(H\)-fields over \(H\) sending \(y\) to \(\widehat{h}\)._
Proof.: Let \(Q\in H\{Y\}\setminus H\); by Lemma 6.7.5 it is enough to show that \(Q(y)\sim_{H}Q(\widehat{h})\). Since \(Q\notin Z(H,\widehat{h})\), we obtain \(h\in H\) and \(\mathfrak{m},\mathfrak{v}\in H^{\times}\) such that \(\widehat{h}-h\preccurlyeq\mathfrak{m}\prec\mathfrak{v}\) and \(\mathrm{ddeg}_{\prec\mathfrak{v}}\,Q_{+h}=0\). Let \(r:=\mathrm{order}\,Q\) and choose an active \(\phi>0\) in \(H\) such that \(\mathrm{ddeg}_{\prec\mathfrak{v}}\,Q_{+h}^{\phi}=0\) and \(\mathfrak{s}^{n}\big{(}\frac{y-h}{\mathfrak{m}}\big{)}\preccurlyeq 1\) for \(\mathfrak{s}=\phi^{-1}\partial\) and \(n=0,\dots,r\). As in the proof of Corollary 6.7.10 this yields \(Q(y)\sim_{H}Q(\widehat{h})\).
**Generating immediate \(\mathrm{d}\)-algebraic Hardy field extensions.**_In this subsection \(H\) is Liouville closed, \((P,\mathfrak{n},\widehat{h})\) is a special \(Z\)-minimal slot in \(H\) of order \(r\geqslant 1\), \(K:=H[\mathrm{i}]\subseteq\mathcal{C}^{<\infty}[\mathrm{i}]\), \(\mathrm{I}(K)\subseteq K^{\dagger}\), and \(K\) is \(1\)-linearly surjective if \(r\geqslant 3\)._ We first treat the case where \((P,\mathfrak{n},\widehat{h})\) is a hole in \(H\) (not just a slot):
**Theorem 6.7.13**.: _Assume \((P,\mathfrak{n},\widehat{h})\) is a deep, ultimate, and strongly repulsive-normal hole in \(H\), and \(y\in\mathcal{C}^{<\infty}\), \(P(y)=0\), \(y\prec\mathfrak{n}\). Then \(y\) is hardian over \(H\), and there is an isomorphism \(H\langle y\rangle\to H\langle\widehat{h}\rangle\) of \(H\)-fields over \(H\) sending \(y\) to \(\widehat{h}\)._
Proof.: Replacing \((P,\mathfrak{n},\widehat{h})\), \(y\) by \((P_{\times\mathfrak{n}},1,\widehat{h}/\mathfrak{n})\), \(y/\mathfrak{n}\) we arrange \(\mathfrak{n}=1\). Let \(Q\in H\{Y\}\setminus H\) with \(\operatorname{order}Q\leqslant r\) and \(\deg_{Y^{(r)}}Q<\deg_{Y^{(r)}}P\). Then \(Q\notin Z(H,\widehat{h})\), so we have \(h\in H\) and \(\mathfrak{v}\in H^{\times}\) such that \(h-\widehat{h}\prec\mathfrak{v}\) and \(\operatorname{ndeg}_{\prec\mathfrak{v}}Q_{+h}=0\). Take any \(\mathfrak{m}\in H^{\times}\) with \(\widehat{h}-h\prec\mathfrak{m}\prec\mathfrak{v}\). Take \(\mathfrak{w}\in H^{\times}\) with \(\mathfrak{m}\prec\mathfrak{w}\prec\mathfrak{v}\). Then \(\operatorname{ndeg}Q_{+h,\times\mathfrak{w}}=0\), so we have active \(\phi\) in \(H\), \(0<\phi\prec 1\), with \(\operatorname{ddeg}Q_{+h,\times\mathfrak{w}}^{\phi}=0\), and hence \(\operatorname{ddeg}_{\prec\mathfrak{w}}Q_{+h}^{\phi}=0\). Thus renaming \(\mathfrak{w}\) as \(\mathfrak{v}\) we have arranged \(\operatorname{ddeg}_{\prec\mathfrak{v}}Q_{+h}^{\phi}=0\).
Set \(\mathfrak{s}:=\phi^{-1}\partial\); by Corollary 6.7.11 it is enough to show that \(\mathfrak{s}^{j}\left(\frac{y-h}{\mathfrak{m}}\right)\preccurlyeq 1\) for \(j=0,\ldots,r\). Now using \((\quad)^{\circ}\) as before, the hole \((P^{\phi\circ},1,\widehat{h}^{\circ})\) in \(H^{\circ}\) is special, \(Z\)-minimal, deep, ultimate, and strongly repulsive-normal, by Lemmas 6.4.3 and 6.4.4. It remains to apply Corollary 6.5.20 to this hole in \(H^{\circ}\) with \(h^{\circ}\), \(\mathfrak{m}^{\circ}\), \(y^{\circ}\) in place of \(h\), \(\mathfrak{m}\), \(y\).
**Corollary 6.7.14**.: _Let \(\phi\) be active in \(H\), \(0<\phi\preccurlyeq 1\), and suppose the slot \((P^{\phi},\mathfrak{n},\widehat{h})\) in \(H^{\phi}\) is deep, ultimate, and strongly split-normal. Then \(P(y)=0\) and \(y\prec\mathfrak{n}\) for some \(y\in\mathcal{C}^{<\infty}\). If \((P^{\phi},\mathfrak{n},\widehat{h})\) is strongly repulsive-normal, then any such \(y\) is hardian over \(H\) with \(y\notin H\)._
Proof.: Lemma 6.4.6 gives \(y\in\mathcal{C}^{<\infty}\) with \(P(y)=0\), \(y\prec\mathfrak{n}\). Now suppose \((P^{\phi},\mathfrak{n},\widehat{h})\) is strongly repulsive-normal, and \(y\in\mathcal{C}^{<\infty}\), \(P(y)=0\), \(y\prec\mathfrak{n}\). Using Lemma 3.2.14 we arrange that \((P,\mathfrak{n},\widehat{h})\) is a hole in \(H\). The hole \((P^{\phi\circ},\mathfrak{n}^{\circ},\widehat{h}^{\circ})\) in \(H^{\circ}\) is special, \(Z\)-minimal, deep, ultimate, and strongly repulsive-normal. Then Theorem 6.7.13 with \(H^{\circ}\), \((P^{\phi\circ},\mathfrak{n}^{\circ},\widehat{h}^{\circ})\), \(y^{\circ}\) in place of \(H\), \((P,\mathfrak{n},\widehat{h})\), \(y\) shows that \(y^{\circ}\) is hardian over \(H^{\circ}\) with \(y^{\circ}\notin H^{\circ}\). Hence \(y\) is hardian over \(H\) and \(y\notin H\).
**Achieving \(1\)-linear newtonianity.** For the proof of our main theorem we need to show first that for any \(\mathrm{d}\)-maximal Hardy field \(H\) the corresponding \(K=H[\mathrm{i}]\) is \(1\)-linearly newtonian, the latter being a key hypothesis in Lemma 6.7.21 below. In this subsection we take this vital step: Corollary 6.7.20.
**Lemma 6.7.15**.: _Every \(\mathrm{d}\)-maximal Hardy field is \(1\)-newtonian._
Proof.: Let \(H\) be a \(\mathrm{d}\)-maximal Hardy field. Then \(H\) satisfies the conditions at the beginning of the previous subsection, by Corollary 5.5.19 and Theorem 5.6.2, for any special \(Z\)-minimal slot \((P,\mathfrak{n},\widehat{h})\) in \(H\) of order \(1\). By Corollary 1.8.29, \(H\) is \(1\)-linearly newtonian. Towards a contradiction assume that \(H\) is not \(1\)-newtonian. Then Lemma 3.2.1 yields a minimal hole \((P,\mathfrak{n},\widehat{h})\) in \(H\) of order \(r=1\). Using Lemma 3.2.26 we replace \((P,\mathfrak{n},\widehat{h})\) by a refinement to arrange that \((P,\mathfrak{n},\widehat{h})\) is quasilinear. Then \((P,\mathfrak{n},\widehat{h})\) is special, by Lemma 3.2.36. Using Corollary 4.5.42 we further refine \((P,\mathfrak{n},\widehat{h})\) to arrange that \((P^{\phi},\mathfrak{n},\widehat{h})\) is eventually deep, ultimate, and strongly repulsive-normal. Now Corollary 6.7.14 gives a proper \(\mathrm{d}\)-algebraic Hardy field extension of \(H\), contradicting \(\mathrm{d}\)-maximality of \(H\).
_In the rest of this subsection \(H\) has asymptotic integration._ We have the d-valued extension \(K:=H[\mathrm{i}]\subseteq\mathcal{C}^{<\infty}[\mathrm{i}]\) of \(H\) and as before we arrange that \(\widehat{K}=\widehat{H}[\mathrm{i}]\) is a d-valued extension of \(\widehat{H}\) as well as an immediate d-valued extension of \(K\).
**Lemma 6.7.16**.: _Suppose \(H\) is Liouville closed and \(\mathrm{I}(K)\subseteq K^{\dagger}\). Let \((P,\mathfrak{n},\widehat{f})\) be an ultimate linear minimal hole in \(K\) of order \(r\geqslant 1\), where \(\widehat{f}\in\widehat{K}\), such that \(\dim_{\mathbb{C}}\ker_{\mathrm{U}}L_{P}=r\). Assume also that \(K\) is \(\omega\)-free if \(r\geqslant 2\). Let \(f\in\mathcal{C}^{<\infty}[\mathrm{i}]\) be such that \(P(f)=0\), \(f\prec\mathfrak{n}\). Then \(f\approx_{K}\widehat{f}\)._
Proof.: Replacing \((P,\mathfrak{n},\widehat{f})\), \(f\) by \((P_{\times\mathfrak{n}},1,\widehat{f}/\mathfrak{n})\), \(f/\mathfrak{n}\) we arrange \(\mathfrak{n}=1\). Let \(\theta\in K^{\times}\) be such that \(\theta\sim\widehat{f}\); we claim that \(f\sim\theta\) in \(\mathcal{C}[i]\) (and so \(f\sim_{K}\widehat{f}\)). Applying Proposition 6.4.9 and Remark 6.4.11 to the linear minimal hole \((P_{+\theta},\theta,\widehat{f}-\theta)\) in \(K\) gives \(g\in\mathcal{C}^{<\infty}[\mathrm{i}]\) such that \(P_{+\theta}(g)=0\) and \(g\prec\theta\). Then \(P(\theta+g)=0\) and \(\theta+g\prec 1\), thus \(L_{P}(y)=0\) and \(y\prec 1\) for \(y:=f-(\theta+g)\in\mathcal{C}^{<\infty}[\mathrm{i}]\). Hence \(y\prec\theta\) by the version of Lemma 5.10.13 for slots in \(K\); see the remark following Corollary 5.10.16. Therefore \(f-\theta=y+g\prec\theta\) and so \(f\sim\theta\), as claimed.
The refinement \((P_{+\theta},1,\widehat{f}-\theta)\) of \((P,1,\widehat{f})\) is ultimate thanks to the \(K\)-version of Lemma 4.4.10, and \(L_{P_{+\theta}}=L_{P}\), so we can apply the claim to \((P_{+\theta},1,\widehat{f}-\theta)\) instead of \((P,1,\widehat{f})\) and \(f-\theta\) instead of \(f\) to give \(f-\theta\sim_{K}\widehat{f}-\theta\). Since this holds for all \(\theta\in K\) with \(\theta\sim\widehat{f}\), the \(K\)-version of Corollary 6.6.7 then yields \(f\approx_{K}\widehat{f}\).
**Corollary 6.7.17**.: _Let \((P,\mathfrak{n},\widehat{f})\) be a linear hole of order \(1\) in \(K\)._ (_We do not assume here that \(\widehat{f}\in\widehat{K}\)._) _Then there is an embedding \(\iota\colon K\langle\widehat{f}\rangle\to\mathcal{C}^{<\infty}[\mathrm{i}]\) of differential \(K\)-algebras such that \(\iota(g)\sim_{K}g\) for all \(g\in K\langle\widehat{f}\rangle^{\times}\)._
Proof.: Note that \((P,\mathfrak{n},\widehat{f})\) is minimal. We first show how to arrange that \(H\) is Liouville closed and \(\omega\)-free with \(\mathrm{I}(K)\subseteq K^{\dagger}\) and \(\widehat{f}\in\widehat{K}\). Let \(H_{1}\) be a maximal Hardy field extension of \(H\). Then \(H_{1}\) is Liouville closed and \(\omega\)-free, with \(\mathrm{I}(K_{1})\subseteq K_{1}^{\dagger}\) for \(K_{1}:=H_{1}[\mathrm{i}]\subseteq\mathcal{C}^{<\infty}[\mathrm{i}]\). Let \(\widehat{H}_{1}\) be the newtonization of \(H_{1}\); then \(\widehat{K}_{1}:=\widehat{H}_{1}[\mathrm{i}]\) is newtonian [ADH, 14.5.7]. Corollary 3.2.29 gives an embedding \(K\langle\widehat{f}\rangle\to\widehat{K}_{1}\) of valued differential fields over \(K\); let \(\widehat{f}_{1}\) be the image of \(\widehat{f}\) under this embedding. If \(\widehat{f}_{1}\in K_{1}\subseteq\mathcal{C}^{<\infty}[\mathrm{i}]\), then we are done, so assume \(\widehat{f}_{1}\notin K_{1}\). Then \((P,\mathfrak{n},\widehat{f}_{1})\) is a hole in \(K_{1}\), and we replace \(H\), \(K\), \((P,\mathfrak{n},\widehat{f})\) by \(H_{1}\), \(K_{1}\), \((P,\mathfrak{n},\widehat{f}_{1})\), and \(\widehat{K}\) by \(\widehat{K}_{1}\), to arrange that \(H\) is Liouville closed and \(\omega\)-free with \(\mathrm{I}(K)\subseteq K^{\dagger}\) and \(\widehat{f}\in\widehat{K}\).
Replacing \((P,\mathfrak{n},\widehat{f})\) by a refinement we also arrange that \((P,\mathfrak{n},\widehat{f})\) is ultimate and \(\mathfrak{n}\in H^{\times}\), by Proposition 4.4.18 and Remark 4.4.19. Then Proposition 6.4.9 yields an \(f\in\mathcal{C}^{<\infty}[\mathrm{i}]\) with \(P(f)=0\), \(f\prec\mathfrak{n}\). Now Lemma 6.7.16 gives \(f\approx_{K}\widehat{f}\), and it remains to appeal to Corollary 6.7.8.
**Proposition 6.7.18**.: _Suppose \(H\) is \(\omega\)-free and \(1\)-newtonian. Let \((P,\mathfrak{n},\widehat{f})\) be a linear hole in \(K\) of order \(1\) with \(\widehat{f}\in\widehat{K}\), and \(f\in\mathcal{C}^{<\infty}[\mathrm{i}]\), \(P(f)=0\), and \(f\approx_{K}\widehat{f}\). Then \(\operatorname{Re}f\) or \(\operatorname{Im}f\) generates a proper \(\mathrm{d}\)-algebraic Hardy field extension of \(H\)._
Proof.: Let \(\widehat{g}:=\operatorname{Re}\widehat{f}\) and \(\widehat{h}:=\operatorname{Im}\widehat{f}\). By Lemma 4.1.3 we have \(v(\widehat{g}-H)\subseteq v(\widehat{h}-H)\) or \(v(\widehat{h}-H)\subseteq v(\widehat{g}-H)\). Below we assume \(v(\widehat{g}-H)\subseteq v(\widehat{h}-H)\) (so \(\widehat{g}\in\widehat{H}\setminus H\)) and show that then \(g:=\operatorname{Re}f\) generates a proper \(\mathrm{d}\)-algebraic Hardy field extension of \(H\). (If \(v(\widehat{h}-H)\subseteq v(\widehat{g}-H)\) one shows likewise that \(\operatorname{Im}f\) generates a proper \(\mathrm{d}\)-algebraic Hardy field extension of \(H\).) The hole \((P,\mathfrak{n},\widehat{f})\) in \(K\) is minimal, and by arranging \(\mathfrak{n}\in H^{\times}\) we see that \(\widehat{g}\) is d-algebraic over \(H\), by a remark preceding Lemma 4.3.7.
Every element of \(Z(H,\widehat{g})\) has order \(\geqslant 2\), by Corollary 3.2.16 and \(1\)-newtonianity of \(H\). We arrange that the linear part \(A\) of \(P\) is monic, so \(A=\partial-a\) with \(a\in K\), \(A(\widehat{f})=-P(0)\) and \(A(f)=-P(0)\). Then Example 1.1.7 and Remark 1.1.9 applied to \(F=\mathcal{C}^{<\infty}\) yield \(Q\in H\{Y\}\) with \(1\leqslant\operatorname{order}Q\leqslant 2\) and \(\deg Q=1\) such that \(Q(\widehat{g})=0\) and \(Q(g)=0\). Hence \(\operatorname{order}Q=2\) and \(Q\) is a minimal annihilator of \(\widehat{g}\) over \(H\).
Towards applying Corollary 6.7.10 to \(Q\), \(\widehat{g}\), \(g\) in the role of \(P\), \(\widehat{h}\), \(y\) there, let \(R\) in \(H\{Y\}\setminus H\) have order \(<2\). Then \(R\notin Z(H,\widehat{g})\), so we have \(h\in H\) and \(\mathfrak{v}\in H^{\times}\) such that \(\widehat{g}-h\prec\mathfrak{v}\) and \(\operatorname{ndeg}_{\prec\mathfrak{v}}R_{+h}=0\). Take any \(\mathfrak{m}\in H^{\times}\) with \(\widehat{g}-h\prec\mathfrak{m}\prec\mathfrak{v}\). By Lemma 6.6.16 we have \(g\approx_{H}\widehat{g}\) and thus \(g-h\preccurlyeq\mathfrak{m}\). After changing \(\mathfrak{v}\) as in the proof of Theorem 6.7.13 we obtain an active \(\phi\) in \(H\), \(0<\phi\preccurlyeq 1\), such that \(\operatorname{ddeg}_{\prec\mathfrak{v}}R_{+h}^{\phi}=0\). Set \(\mathfrak{s}:=\phi^{-1}\partial\); by Corollary 6.7.10 it is now enough to show that \(\mathfrak{s}\big{(}(g-h)/\mathfrak{m}\big{)}\preccurlyeq 1\).
Towards this and using \((\ \ )^{\circ}\) as before, we have \(f^{\circ}\approx_{K^{\circ}}\widehat{f}^{\circ}\), and \(g^{\circ}\approx_{H^{\circ}}\widehat{g}^{\circ}\) by the facts about composition in Section 6.6. Moreover, \((g-h)^{\circ}\preccurlyeq\mathfrak{m}^{\circ}\), and \(H^{\circ}\) is \(\omega\)-free and \(1\)-newtonian, hence closed under integration by [ADH, 14.2.2]. We now apply Corollary 6.7.8 with \(H^{\circ}\), \(K^{\circ}\), \(P^{\phi\circ}\), \(f^{\circ}\), \(\widehat{f}^{\circ}\) in the role of \(H\), \(K\), \(P\), \(f\), \(\widehat{f}\) to give
\[\big{(}f^{\circ}/\mathfrak{m}^{\circ}\big{)}^{\prime}\ \approx_{K^{\circ}} \big{(}\widehat{f}^{\circ}/\mathfrak{m}^{\circ}\big{)}^{\prime},\]
hence \((g^{\circ}/\mathfrak{m}^{\circ})^{\prime}\approx_{H^{\circ}}(\widehat{g}^{ \circ}/\mathfrak{m}^{\circ})^{\prime}\) by Lemmas 4.1.4 and 6.6.16. Therefore,
\[\big{(}(g-h)^{\circ}/\mathfrak{m}^{\circ}\big{)}^{\prime}\ =\ (g^{\circ}/ \mathfrak{m}^{\circ})^{\prime}-(h^{\circ}/\mathfrak{m}^{\circ})^{\prime}\ \sim_{H^{\circ}}(\widehat{g}^{\circ}/\mathfrak{m}^{\circ})^{\prime}-(h^{\circ}/ \mathfrak{m}^{\circ})^{\prime}\ =\ \big{(}(\widehat{g}-h)^{\circ}/\mathfrak{m}^{\circ}\big{)}^{\prime}.\]
Now \((\widehat{g}-h)^{\circ}/\mathfrak{m}^{\circ}\preccurlyeq 1\), so \(\big{(}(\widehat{g}-h)^{\circ}/\mathfrak{m}^{\circ}\big{)}^{\prime}\prec 1\), hence \(\big{(}(g-h)^{\circ}/\mathfrak{m}^{\circ}\big{)}^{\prime}\prec 1\) by the last display, and thus \(\mathfrak{s}\big{(}(g-h)/\mathfrak{m}\big{)}\preccurlyeq 1\), which is more than enough.
If \(K\) has a linear hole of order \(1\), then \(K\) has a proper d-algebraic differential field extension inside \(\mathcal{C}^{<\infty}[i]\), by Corollary 6.7.17. We can now prove a Hardy analogue:
**Lemma 6.7.19**.: _Suppose \(K\) has a linear hole of order \(1\). Then \(H\) has a proper \(\operatorname{d}\)-algebraic Hardy field extension._
Proof.: If \(H\) is not d-maximal, then \(H\) has indeed a proper d-algebraic Hardy field extension, and if \(H\) is d-maximal, then \(H\) is Liouville closed, \(\omega\)-free, \(1\)-newtonian, and \(\mathrm{I}(K)\subseteq K^{\dagger}\), by Proposition 5.3.2, Corollary 5.5.19, Theorem 5.6.2, and Lemma 6.7.15. So assume below that \(H\) is Liouville closed, \(\omega\)-free, \(1\)-newtonian, and \(\mathrm{I}(K)\subseteq K^{\dagger}\), and that \((P,\mathfrak{n},\widehat{f})\) is a linear hole of order \(1\) in \(K\). By Lemma 4.2.15 we arrange that \(\widehat{f}\in\widehat{K}:=\widehat{H}[i]\) where \(\widehat{H}\) is an immediate \(\omega\)-free newtonian \(H\)-field extension of \(H\). Then \(\widehat{K}\) is also newtonian by [ADH, 14.5.7]. By Remark 4.4.19 we can replace \((P,\mathfrak{n},\widehat{f})\) by a refinement to arrange that \((P,\mathfrak{n},\widehat{f})\) is ultimate and \(\mathfrak{n}\in H^{\times}\). Proposition 6.4.9 now yields \(f\in\mathcal{C}^{<\infty}[i]\) with \(P(f)=0\) and \(f\prec\mathfrak{n}\). Then \(f\approx_{K}\widehat{f}\) by Lemma 6.7.16, and so \(\operatorname{Re}f\) or \(\operatorname{Im}f\) generates a proper d-algebraic Hardy field extension of \(H\), by Proposition 6.7.18.
**Corollary 6.7.20**.: _If \(H\) is \(\operatorname{d}\)-maximal, then \(K\) is \(1\)-linearly newtonian._
Proof.: Assume \(H\) is d-maximal. Then \(K\) is \(\omega\)-free by Theorem 5.6.2 and [ADH, 11.7.23]. If \(K\) is not \(1\)-linearly newtonian, then it has a linear hole of order \(1\), by Lemma 3.2.5, and so \(H\) has a proper d-algebraic Hardy field extension, by Lemma 6.7.19, contradicting d-maximality of \(H\).
**Finishing the story.** With one more lemma we will be done.
**Lemma 6.7.21**.: _Suppose \(H\) is Liouville closed, \(\omega\)-free, not newtonian, and \(K:=H[i]\) is \(1\)-linearly newtonian. Then \(H\) has a proper \(\mathrm{d}\)-algebraic Hardy field extension._
Proof.: By Proposition 1.8.28, \(K\) is \(1\)-linearly surjective and \(\mathrm{I}(K)\subseteq K^{\dagger}\). Since \(H\) is not newtonian, neither is \(K\), by [ADH, 14.5.6], and so by Lemma 3.2.1 we have a minimal hole \((P,\mathfrak{m},\widehat{f})\) in \(K\) of order \(r\geqslant 1\), with \(\mathfrak{m}\in H^{\times}\). Then \(\deg P>1\) by Corollary 3.2.8. As in the proof of Lemma 6.7.19 we take for \(\widehat{H}\) an immediate \(\omega\)-free newtonian \(H\)-field extension of \(H\) and arrange \(\widehat{f}\in\widehat{K}:=\widehat{H}[i]\). Now \(\widehat{f}=\widehat{g}+\widehat{h}i\) with \(\widehat{g},\widehat{h}\in\widehat{H}\). By Theorem 4.5.43, there are two cases:
1. \(\widehat{g}\notin H\) and some \(Z\)-minimal slot \((Q,\mathfrak{m},\widehat{g})\) in \(H\) has a special refinement \((Q_{+g},\mathfrak{n},\widehat{g}-g)\) such that \((Q_{+g}^{\phi},\mathfrak{n},\widehat{g}-g)\) is eventually deep, strongly repulsive-normal, and ultimate;
2. \(\widehat{h}\notin H\) and some \(Z\)-minimal slot \((R,\mathfrak{m},\widehat{h})\) in \(H\) has a special refinement \((R_{+h},\mathfrak{n},\widehat{h}-h)\) such that \((R_{+h}^{\phi},\mathfrak{n},\widehat{h}-h)\) is eventually deep, strongly repulsive-normal, and ultimate.
Suppose \(\widehat{g}\notin H\) and \((Q,\mathfrak{m},\widehat{g})\) is as in (1). Then \(1\leqslant\mathrm{order}\,Q\leqslant 2r\) by Lemma 4.3.7. _Claim_: \(Q(y)=0\) for some \(y\in\mathcal{C}^{<\infty}\setminus H\) that is hardian over \(H\). To prove this claim, take a special refinement \((Q_{+g},\mathfrak{n},\widehat{g}-g)\) of \((Q,\mathfrak{m},\widehat{g})\) and an active \(\phi\) in \(H\) with \(0<\phi\prec 1\) such that the slot \((Q_{+g}^{\phi},\mathfrak{n},\widehat{g}-g)\) in \(H^{\phi}\) is deep, strongly repulsive-normal, and ultimate. Corollary 6.7.14 applied to \((Q_{+g},\mathfrak{n},\widehat{g}-g)\) in place of \((P,\mathfrak{n},\widehat{h})\) gives a \(z\in\mathcal{C}^{<\infty}\setminus H\) that is hardian over \(H\) with \(Q_{+g}(z)=0\). Thus \(y:=g+z\in\mathcal{C}^{<\infty}\) is as in the Claim. Case (2) is handled likewise.
Recall from the introduction that an _\(H\)-closed field_ is an \(\omega\)-free newtonian Liouville closed \(H\)-field. Recall also that Hardy fields containing \(\mathbb{R}\) are \(H\)-fields. The main result of these notes can now be established in a few lines:
**Theorem 6.7.22**.: _A Hardy field is \(\mathrm{d}\)-maximal iff it contains \(\mathbb{R}\) and is \(H\)-closed._
Proof.: The "if" part is a special case of [1, 16.0.3]. By Proposition 5.3.2 and Theorem 5.6.2, every \(\mathrm{d}\)-maximal Hardy field contains \(\mathbb{R}\) and is Liouville closed and \(\omega\)-free. Suppose \(H\) is \(\mathrm{d}\)-maximal. Then \(K:=H[i]\) is \(1\)-linearly newtonian by Corollary 6.7.20, so \(H\) is newtonian by Lemma 6.7.21.
Theorem 6.7.22 and Corollary 6.3.9 yield Theorem B from the introduction in a refined form:
**Corollary 6.7.23**.: _Any Hardy field \(F\) has a \(\mathrm{d}\)-algebraic \(H\)-closed Hardy field extension. If \(F\) is a \(\mathcal{C}^{\infty}\)-Hardy field, then so is any such extension, and likewise with \(\mathcal{C}^{\omega}\) in place of \(\mathcal{C}^{\infty}\)._
**Part 7**.: **Applications**
Here we apply the material in the previous parts. In Section 7.1 we show how to transfer first-order logical properties of the differential field \(\mathbb{T}\) of transseries to maximal Hardy fields, proving in particular Theorem A and Corollaries 1-5 as well as the first part of Corollary 6 from the introduction. In Section 7.2 we obtain Corollary 7, elaborate on [ADH, Chapter 16], and relate Newton-Liouville closure to relative differential closure. In Section 7.3 we investigate embeddings of Hardy fields into \(\mathbb{T}\), and finish the proof of Corollary 6. There we also determine the universal theory of Hardy fields. Section 7.4 contains applications of our main theorem to linear differential equations over Hardy fields, including proofs of Corollaries 8-11 from the introduction. The final Corollary 12 from the introduction is established in Section 7.5, where we focus on the structure of perfect and d-perfect Hardy fields.
### 7.1. Transfer Theorems
From [ADH, 16.3] we recall the notion of a _pre-\(\Lambda\Omega\)-field_ \(\boldsymbol{H}=(H,\mathrm{I},\Lambda,\Omega)\): this is a pre-\(H\)-field \(H\) equipped with a \(\Lambda\Omega\)-cut \((\mathrm{I},\Lambda,\Omega)\) of \(H\). (See also Section 5.6.) A _\(\Lambda\Omega\)-field_ is a pre-\(\Lambda\Omega\)-field \(\boldsymbol{H}=(H;\dots)\) where \(H\) is an \(H\)-field. If \(\boldsymbol{M}=(M;\dots)\) is a pre-\(\Lambda\Omega\)-field and \(H\) is a pre-\(H\)-subfield of \(M\), then \(H\) has a unique expansion to a pre-\(\Lambda\Omega\)-field \(\boldsymbol{H}\) such that \(\boldsymbol{H}\subseteq\boldsymbol{M}\). By [ADH, 16.3.19], a pre-\(H\)-field \(H\) has a unique expansion to a pre-\(\Lambda\Omega\)-field iff one of the following conditions holds:
1. \(H\) is grounded;
2. there exists \(b\asymp 1\) in \(H\) such that \(v(b^{\prime})\) is a gap in \(H\);
3. \(H\) is \(\omega\)-free.
In particular, each d-maximal Hardy field \(M\) (being \(\omega\)-free) has a unique expansion to a pre-\(\Lambda\Omega\)-field \(\boldsymbol{M}\), namely \(\boldsymbol{M}=\big{(}M;\mathrm{I}(M),\Lambda(M),\omega(M)\big{)}\), and then \(\boldsymbol{M}\) is a \(\Lambda\Omega\)-field with constant field \(\mathbb{R}\). Below we always view any d-maximal Hardy field as a \(\Lambda\Omega\)-field in this way.
**Lemma 7.1.1**.: _Let \(H\) be a Hardy field. Then \(H\) has an expansion to a pre-\(\Lambda\Omega\)-field \(\boldsymbol{H}\) such that \(\boldsymbol{H}\subseteq\boldsymbol{M}\) for every \(\mathrm{d}\)-maximal Hardy field \(M\supseteq H\)._
Proof.: Since every d-maximal Hardy field containing \(H\) also contains \(\mathrm{D}(H)\), it suffices to show this for \(\mathrm{D}(H)\) in place of \(H\). So we assume \(H\) is d-perfect, and thus a Liouville closed \(H\)-field. For each d-maximal Hardy field \(M\supseteq H\) we now have \(\mathrm{I}(H)=\mathrm{I}(M)\cap H\) by [ADH, 11.8.2], \(\Lambda(H)=\Lambda(M)\cap H\) by [ADH, 11.8.6], and \(\omega(H)=\overline{\omega}(H)=\overline{\omega}(M)\cap H=\omega(M)\cap H\) by Corollary 5.5.3, as required.
Given a Hardy field \(H\), we call the unique expansion \(\boldsymbol{H}\) of \(H\) to a pre-\(\Lambda\Omega\)-field with the property stated in the previous lemma the **canonical \(\Lambda\Omega\)-expansion** of \(H\).
**Corollary 7.1.2**.: _Let \(H\), \(H^{*}\) be Hardy fields, with their canonical \(\Lambda\Omega\)-expansions \(\boldsymbol{H}\), \(\boldsymbol{H}^{*}\), respectively, such that \(H\subseteq H^{*}\). Then \(\boldsymbol{H}\subseteq\boldsymbol{H}^{*}\)._
Proof.: Let \(M^{*}\) be any d-maximal Hardy field extension of \(H^{*}\). Then \(\boldsymbol{H}\subseteq\boldsymbol{M}^{*}\) as well as \(\boldsymbol{H}^{*}\subseteq\boldsymbol{M}^{*}\), hence \(\boldsymbol{H}\subseteq\boldsymbol{H}^{*}\).
_In the rest of this section \(\mathcal{L}=\{0,1,-,+,\ \cdot\,,\partial,\leqslant,\preccurlyeq\}\) is the language of ordered valued differential rings_[ADH, p. 678]. We view each ordered valued differential field as an \(\mathcal{L}\)-structure in the natural way. Given an ordered valued differential field \(H\) and a subset \(A\) of \(H\) we let \(\mathcal{L}_{A}\) be \(\mathcal{L}\) augmented by names for the elements
of \(A\), and expand the \(\mathcal{L}\)-structure \(H\) to an \(\mathcal{L}_{A}\)-structure by interpreting the name of any \(a\in A\) as the element \(a\) of \(H\); cf. [ADH, B.3]. Let \(H\) be a Hardy field and \(\sigma\) be an \(\mathcal{L}_{H}\)-sentence. We now have our Hardy field analogue of the "Tarski principle" [ADH, B.12.14] in real algebraic geometry promised in the introduction:
**Theorem 7.1.3**.: _The following are equivalent:_
1. \(M\models\sigma\) _for some_ \(\mathrm{d}\)_-maximal Hardy field_ \(M\supseteq H\)_;_
2. \(M\models\sigma\) _for every_ \(\mathrm{d}\)_-maximal Hardy field_ \(M\supseteq H\)_;_
3. \(M\models\sigma\) _for every maximal Hardy field_ \(M\supseteq H\)_;_
4. \(M\models\sigma\) _for some maximal Hardy field_ \(M\supseteq H\)_._
Proof.: The implications (ii) \(\Rightarrow\) (iii) \(\Rightarrow\) (iv) \(\Rightarrow\) (i) are obvious, since "maximal \(\Rightarrow\)\(\mathrm{d}\)-maximal"; so it remains to show (i) \(\Rightarrow\) (ii). Let \(M\), \(M^{*}\) be \(\mathrm{d}\)-maximal Hardy field extensions of \(H\). By Lemma 7.1.1 and Corollary 7.1.2 expand \(M\), \(M^{*}\), \(H\) to pre-\(\Lambda\Omega\)-fields \(\boldsymbol{M}\), \(\boldsymbol{M}^{*}\), \(\boldsymbol{H}\), respectively, such that \(\boldsymbol{H}\subseteq\boldsymbol{M}\) and \(\boldsymbol{H}\subseteq\boldsymbol{M}^{*}\). In [ADH, introduction to Chapter 16] we extended \(\mathcal{L}\) to a language \(\mathcal{L}_{\Lambda\Omega}^{\iota}\), and explained in [ADH, 16.5] how each pre-\(\Lambda\Omega\)-field \(\boldsymbol{K}\) is construed as an \(\mathcal{L}_{\Lambda\Omega}^{\iota}\)-structure in such a way that every extension \(\boldsymbol{K}\subseteq\boldsymbol{L}\) of pre-\(\Lambda\Omega\)-fields corresponds to an extension of the associated \(\mathcal{L}_{\Lambda\Omega}^{\iota}\)-structures. By [ADH, 16.0.1], the \(\mathcal{L}_{\Lambda\Omega}^{\iota}\)-theory \(T_{\Lambda\Omega}^{\mathrm{nl},\iota}\) of \(H\)-closed \(\Lambda\Omega\)-fields eliminates quantifiers, and by Theorem 6.7.22, the canonical \(\Lambda\Omega\)-expansion of each \(\mathrm{d}\)-maximal Hardy field is a model of \(T_{\Lambda\Omega}^{\mathrm{nl},\iota}\). Hence \(\boldsymbol{M}\equiv_{H}\boldsymbol{M}^{*}\) [ADH, B.11.6], so if \(\boldsymbol{M}\models\sigma\), then \(\boldsymbol{M}^{*}\models\sigma\).
Corollaries 1 and 2 from the introduction are special cases of Theorem 7.1.3. By Corollary 6.3.8, \(\mathcal{C}^{\infty}\)-maximal and \(\mathcal{C}^{\omega}\)-maximal Hardy fields are \(\mathrm{d}\)-maximal, so the theorem above also yields Corollary 5 from the introduction in the following stronger form:
**Corollary 7.1.4**.: _If \(H\subseteq\mathcal{C}^{\infty}\) and \(M\models\sigma\) for some \(\mathrm{d}\)-maximal Hardy field extension \(M\) of \(H\), then \(M\models\sigma\) for every \(\mathcal{C}^{\infty}\)-maximal Hardy field \(M\supseteq H\). Likewise with \(\mathcal{C}^{\omega}\) in place of \(\mathcal{C}^{\infty}\)._
**The structure induced on \(\mathbb{R}\).** In the next corollary \(H\) is a Hardy field and \(\varphi(x)\) is an \(\mathcal{L}_{H}\)-formula where \(x=(x_{1},\ldots,x_{n})\) and \(x_{1},\ldots,x_{n}\) are distinct variables. Also, \(\mathcal{L}_{\mathrm{OR}}=\{0,1,-,+,\,\cdot\,,\leqslant\}\) is the language of ordered rings, and the ordered field \(\mathbb{R}\) of real numbers is interpreted as an \(\mathcal{L}_{\mathrm{OR}}\)-structure in the obvious way. By Theorem 6.7.22, \(\mathrm{d}\)-maximal Hardy fields are \(H\)-closed fields, so from [ADH, 16.6.7, B.12.13] in combination with Theorem 7.1.3 we obtain:
**Corollary 7.1.5**.: _There is a quantifier-free \(\mathcal{L}_{\mathrm{OR}}\)-formula \(\varphi_{\mathrm{OR}}(x)\) such that for all \(\mathrm{d}\)-maximal Hardy fields \(M\supseteq H\) and \(c\in\mathbb{R}^{n}\) we have_
\[M\models\varphi(c)\quad\Longleftrightarrow\quad\mathbb{R}\models\varphi_{ \mathrm{OR}}(c).\]
This yields Corollary 3 from the introduction. We now justify what we claimed about the examples after that corollary. The first of these examples is already covered by [ADH, 5.1.18, 11.8.25, 11.8.26], so we only deal with the second example here:
**Proposition 7.1.6**.: _Let \(g_{2},g_{3}\in\mathbb{R}\). Then the following are equivalent:_
1. _there exists a hardian germ_ \(y\notin\mathbb{R}\) _such that_ \((y^{\prime})^{2}=4y^{3}-g_{2}y-g_{3}\)_;_
2. \(g_{2}^{3}=27g_{3}^{2}\) _and_ \(g_{3}\leqslant 0\)_._
For (i) \(\Rightarrow\) (ii) we take a more general setting, and recycle arguments used in the proof of [ADH, 10.7.1]. Let \(K\) be a valued differential field such that \(\partial\mathcal{O}\subseteq\mathfrak{o}\)
and \(C\subseteq\mathcal{O}\). (This holds for any d-valued field with small derivation.) Consider a polynomial \(P(Y)=4Y^{3}-g_{2}Y-g_{3}\) with \(g_{2},g_{3}\in C\). Its discriminant is \(16\Delta\) where \(\Delta:=g_{2}^{3}-27g_{3}^{2}\). Take \(e_{1}\), \(e_{2}\), \(e_{3}\) in an algebraic closure of \(C\) such that
\[P(Y)=4(Y-e_{1})(Y-e_{2})(Y-e_{3}).\]
Then
\[e_{1}+e_{2}+e_{3}\ =\ 0,\quad e_{1}e_{2}+e_{2}e_{3}+e_{3}e_{1}\ =\ -\tfrac{1}{4}g_{2},\quad e_{1}e_{2}e_{3}\ =\ \tfrac{1}{4}g_{3}, \tag{7.1.1}\]
and \(\Delta\neq 0\) iff \(e_{1}\), \(e_{2}\), \(e_{3}\) are distinct. In the next two lemmas \(y\in K\) and \((y^{\prime})^{2}=P(y)\). Then \(y\preccurlyeq 1\): otherwise \((y^{\prime})^{2}\prec 4y^{3}\sim P(y)=(y^{\prime})^{2}\) by [ADH, 4.4.3], a contradiction. Hence \(P(y)=(y^{\prime})^{2}\prec 1\). Moreover, if \(y\asymp 1\), then \(\Delta\neq 0\) or \(g_{3}\neq 0\).
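The factor \(16\) above can be checked from the general formula \(18abcd-4b^{3}d+b^{2}c^{2}-4ac^{3}-27a^{2}d^{2}\) for the discriminant of a cubic \(aY^{3}+bY^{2}+cY+d\): with \(a=4\), \(b=0\), \(c=-g_{2}\), \(d=-g_{3}\) it evaluates to

\[-4\cdot 4\cdot(-g_{2})^{3}\ -\ 27\cdot 4^{2}\cdot g_{3}^{2}\ =\ 16g_{2}^{3}-432g_{3}^{2}\ =\ 16\big{(}g_{2}^{3}-27g_{3}^{2}\big{)}\ =\ 16\Delta.\]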
**Lemma 7.1.7**.: _Suppose \(P^{\prime}(y)\asymp 1\). Then \(y\in\{e_{1},e_{2},e_{3}\}\) (so \(y\in C\))._
Proof.: The property \(\partial\mathcal{O}\subseteq\mathfrak{o}\) means that the derivation of \(K\) is small with trivial induced derivation on its residue field. By [ADH, 6.2.1, 3.1.9] this property is inherited by any algebraic closure of \(K\), and so is the property \(C\subseteq\mathcal{O}\) by [ADH, 4.1.2]. Thus by passing to an algebraic closure we arrange that \(K\) is algebraically closed. Then \(C\) is also algebraically closed, so \(e_{1},e_{2},e_{3}\in C\) and thus \(y-e_{j}\prec 1\) for some \(j\in\{1,2,3\}\), say \(y=e_{1}+z\) where \(z\prec 1\). Since \(P^{\prime}(y)\asymp 1\) we have \(y-e_{2}\asymp y-e_{3}\asymp 1\) and thus
\[z\ \asymp\ 4z(e_{1}-e_{2}+z)(e_{1}-e_{3}+z)\ =\ P(y)\ =\ (y^{\prime})^{2}\ =\ (z^{\prime})^{2}.\]
Now if \(z\neq 0\), then \((z^{\prime})^{2}\prec z\), again by [ADH, 4.4.3], a contradiction. So \(y=e_{1}\).
In the next lemma \(K\) is in addition equipped with an ordering making \(K\) a valued ordered differential field whose valuation ring is convex. (Any \(H\)-field with small derivation satisfies the conditions we imposed.) Suppose \(\Delta=0\). Then \(e_{1}\), \(e_{2}\), \(e_{3}\) lie in the real closure of \(C\), and after arranging \(e_{1}\geqslant e_{2}\geqslant e_{3}\), the first and the last of the equations (7.1.1) yield \(e_{1}=e_{2}\Longleftrightarrow g_{3}\leqslant 0\), and \(e_{2}=e_{3}\Longleftrightarrow g_{3}\geqslant 0\).
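As a concrete illustration, take \(g_{2}=12\) and \(g_{3}=-8\): then \(\Delta=12^{3}-27\cdot 64=0\) and

\[P(Y)\ =\ 4Y^{3}-12Y+8\ =\ 4(Y-1)^{2}(Y+2),\]

so \(e_{1}=e_{2}=1>e_{3}=-2\), in accordance with \(e_{1}=e_{2}\Longleftrightarrow g_{3}\leqslant 0\).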
**Lemma 7.1.8**.: _Suppose \(\Delta=0\) and \(g_{3}>0\). Then \(y\in C\)._
Proof.: Passing to the real closure of \(K\) with convex valuation extending that of \(K\), cf. [ADH, 3.5.18], we arrange that \(K\), and hence \(C\), is real closed. Arranging also \(e_{1}\geqslant e_{2}\geqslant e_{3}\), we set \(e:=e_{2}=e_{3}\). Then \(e_{1}=-2e>0>e\) and \(P(Y)=4(Y+2e)(Y-e)^{2}\). We have \(y\preccurlyeq 1\) and \(P(y)\prec 1\). Suppose \(y\notin C\). Then \(P^{\prime}(y)\prec 1\) by Lemma 7.1.7, so \(y-e\prec 1\). Set \(z:=y-e\), so \(0\neq z\prec 1\) and hence
\[12ez^{2}\ \sim\ 4(z+3e)z^{2}\ =\ P(y)\ =\ (y^{\prime})^{2}\ >\ 0,\]
contradicting \(e<0\).
Proof of Proposition 7.1.6.: Suppose \(y\notin\mathbb{R}\) is a hardian germ such that \((y^{\prime})^{2}=P(y)\) with \(P(Y)=4Y^{3}-g_{2}Y-g_{3}\) and \(g_{2},g_{3}\in\mathbb{R}\). Then \(y\preccurlyeq 1\) and \(P(y)\prec 1\), but also \(P^{\prime}(y)\prec 1\) by Lemma 7.1.7, hence \(\Delta\prec 1\). As \(\Delta\in\mathbb{R}\) this gives \(\Delta=0\), so \(g_{3}\leqslant 0\) by Lemma 7.1.8. This proves (i) \(\Rightarrow\) (ii). For the converse, let \(K\) be the Hardy field \(\mathbb{R}\) in the considerations above and suppose \(\Delta=0\), so \(e_{1},e_{2},e_{3}\in\mathbb{R}\). Arrange \(e_{1}\geqslant e_{2}\geqslant e_{3}\). If \(g_{3}=0\), then \(e_{1}=e_{2}=e_{3}=0\), and if \(g_{3}<0\), then \(e_{1}=e_{2}>0\). In Corollaries 7.1.9 and 7.1.12 we deal exhaustively with these two cases. In particular, we show there that in each case there is a hardian \(y\notin\mathbb{R}\) such that \((y^{\prime})^{2}=P(y)\), thus finishing the proof of Proposition 7.1.6.
Accordingly we assume below that \(g_{2},g_{3}\in\mathbb{R}\) and \(\Delta=0\), so \(e_{1},e_{2},e_{3}\in\mathbb{R}\). Note that the \(y\in\mathbb{R}\) such that \((y^{\prime})^{2}=P(y)\) are exactly \(e_{1}\), \(e_{2}\), \(e_{3}\).
**Corollary 7.1.9**.: _Suppose \(e_{1}=e_{2}=e_{3}=0\) and \(y\in\mathcal{C}^{1}\). Then_
\[y\in\mathcal{C}^{\times}\text{ and }(y^{\prime})^{2}=P(y)\quad\Longleftrightarrow \quad y=\frac{1}{(x-c)^{2}}\text{ for some }c\in\mathbb{R}.\]
Proof.: We have \(P(Y)=4Y^{3}\). The direction \(\Leftarrow\) is routine. For \(\Rightarrow\), suppose \(y\in\mathcal{C}^{\times}\) and \((y^{\prime})^{2}=4y^{3}\). Then \(y^{\prime}\in\mathcal{C}^{\times}\), \(y>0\), \(z:=y^{-1/2}>0\), and \(z^{\prime}=-\frac{1}{2}y^{-3/2}y^{\prime}\). We have \(y^{\prime}<0\): otherwise \(0<y^{\prime}=2y^{3/2}\) and so \(z^{\prime}=-1\), hence \(z<0\), a contradiction. Therefore \(y^{\prime}=-2y^{3/2}\), so \(z^{\prime}=1\) and thus \(y=\frac{1}{z^{2}}=\frac{1}{(x-c)^{2}}\) for some \(c\in\mathbb{R}\).
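The routine direction \(\Leftarrow\) amounts to the following computation: for \(y=(x-c)^{-2}\) we have \(y^{\prime}=-2(x-c)^{-3}\), hence

\[(y^{\prime})^{2}\ =\ 4(x-c)^{-6}\ =\ 4y^{3}\ =\ P(y),\]

and clearly \(y\in\mathcal{C}^{\times}\).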
Corollary 7.1.12 below is an analogue of Corollary 7.1.9 for \(e_{1}=e_{2}>0\), but first we make some observations about hyperbolic functions. Recall that for \(t\in\mathbb{R}\),
\[\sinh t\ : =\ \frac{1}{2}(\mathrm{e}^{t}-\mathrm{e}^{-t}),\qquad\cosh t\ :=\ \frac{1}{2}(\mathrm{e}^{t}+\mathrm{e}^{-t}),\text{ so}\] \[\cosh^{2}t-\sinh^{2}t\ =\ 1,\qquad\frac{d}{dt}\sinh t\ =\ \cosh t,\qquad\frac{d}{dt}\cosh t\ =\ \sinh t.\]
We also set for \(t\in\mathbb{R}\):
\[\operatorname{sech}t\ :=\ \frac{1}{\cosh t}\ (\text{hyperbolic secant}),\qquad\tanh t\ :=\ \frac{\sinh t}{\cosh t}\ (\text{hyperbolic tangent})\]
and for \(t\neq 0\):
\[\operatorname{csch}t\ :=\ \frac{1}{\sinh t}\ (\text{hyperbolic cosecant}),\quad \operatorname{coth}t\ :=\ \frac{\cosh t}{\sinh t}\ (\text{hyperbolic cotangent}).\]
Now \(\sinh\colon\mathbb{R}\to\mathbb{R}\) is an increasing bijection, so \(t\mapsto\operatorname{csch}t\colon\mathbb{R}^{>}\to\mathbb{R}^{>}\) is a decreasing bijection. We have \(\operatorname{sech}^{2}t=1-\tanh^{2}t\), and for \(t\neq 0\), \(\operatorname{csch}^{2}t=\operatorname{coth}^{2}t-1\). Moreover, \(\frac{d}{dt}\operatorname{sech}t=-\tanh t\operatorname{sech}t\) and for \(t\neq 0\): \(\frac{d}{dt}\operatorname{csch}t=-\operatorname{coth}t\operatorname{csch}t\). Hence both \(-\operatorname{sech}^{2}\colon\mathbb{R}\to\mathbb{R}\) and \(\operatorname{csch}^{2}\colon\mathbb{R}^{\times}\to\mathbb{R}\) satisfy the differential equation \((u^{\prime})^{2}=4u^{2}(u+1)\). We use these facts to prove:
**Lemma 7.1.10**.: _Let \(w\in\mathcal{C}^{1}\) and \(e\in\mathbb{R}^{>}\). Then the following are equivalent:_
* \(w(t)>0\)_, eventually, and_ \((w^{\prime})^{2}=ew^{2}(w+1)\)_;_
* \(w=\operatorname{csch}^{2}\circ(c+\frac{\sqrt{e}}{2}x)\) _for some_ \(c\in\mathbb{R}\)_._
Proof.: Direct computation gives (ii) \(\Rightarrow\) (i). Assume (i). Consider the decreasing bijection \(t\mapsto u(t):=\operatorname{csch}^{2}(t)\colon\mathbb{R}^{>}\to\mathbb{R}^{>}\); let \(u^{\mathrm{inv}}:\mathbb{R}^{>}\to\mathbb{R}^{>}\) be its (strictly decreasing) compositional inverse, so \(u^{\mathrm{inv}}\in\mathcal{C}^{1}(\mathbb{R}^{>})\). Then for \(v:=u^{\mathrm{inv}}\circ w\in\mathcal{C}^{1}\) we have \(v(t)>0\), eventually, \(v^{\prime}=\frac{w^{\prime}}{u^{\prime}\circ v}\), and \(u\circ v=w\). Thus
\[(v^{\prime})^{2}\ =\ \frac{(w^{\prime})^{2}}{(u^{\prime})^{2}\circ v}\ =\ \frac{e}{4}.\]
Hence \(v^{\prime}=\frac{\sqrt{e}}{2}\), since \(v^{\prime}=-\frac{\sqrt{e}}{2}\) contradicts \(v>0\). Now use \(w=u\circ v\).
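The direct computation invoked at the start of this proof is quickly recorded: for \(u=\operatorname{csch}^{2}\) we have \(u^{\prime}=-2\coth t\operatorname{csch}^{2}t\), so

\[(u^{\prime})^{2}\ =\ 4\operatorname{csch}^{4}t\cdot\coth^{2}t\ =\ 4\operatorname{csch}^{4}t\,\big{(}\operatorname{csch}^{2}t+1\big{)}\ =\ 4u^{2}(u+1),\]

hence for \(w=u\circ(c+\frac{\sqrt{e}}{2}x)\) we get \((w^{\prime})^{2}=\frac{e}{4}\cdot 4w^{2}(w+1)=ew^{2}(w+1)\), with \(w>0\). The analogous computation for \(-\operatorname{sech}^{2}\) underlies Lemma 7.1.11 below.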
The increasing bijection \(t\mapsto\cosh t\colon(0,+\infty)\to(1,+\infty)\) yields the increasing bijection \(t\mapsto-\operatorname{sech}^{2}t\colon(0,+\infty)\to(-1,0)\). We use this to prove likewise:
**Lemma 7.1.11**.: _Let \(w\in\mathcal{C}^{1}\) and \(e\in\mathbb{R}^{>}\). Then the following are equivalent:_
* \(-1<w(t)<0\)_, eventually, and_ \((w^{\prime})^{2}\ =\ ew^{2}(w+1)\)_;_
* \(w=-\operatorname{sech}^{2}\circ(c+\frac{\sqrt{e}}{2}x)\) _for some_ \(c\in\mathbb{R}\)_._
**Corollary 7.1.12**.: _Suppose \(e_{1}=e_{2}>0\). Then the hardian germs \(y\notin\mathbb{R}\) such that \((y^{\prime})^{2}=P(y)\) are all in \(\mathbb{R}(\mathrm{e}^{x\sqrt{3e_{1}}})\) and are given by_
\[y\ =\ e_{1}+3e_{1}\cdot\mathrm{csch}^{2}\circ(c+x\sqrt{3e_{1}}),\quad y\ =\ e_{1}-3e_{1}\cdot\mathrm{sech}^{2}\circ(c+x\sqrt{3e_{1}}),\]
_where \(c\in\mathbb{R}\)._
Proof.: We have \(P(Y)=4(Y-e_{1})^{2}(Y+2e_{1})\). Let \(y\in\mathcal{C}^{1}\) and \(w:=(y-e_{1})/3e_{1}\). Then \((y^{\prime})^{2}=P(y)\) iff \((w^{\prime})^{2}=12e_{1}w^{2}(w+1)\). There is no hardian \(y<-2e_{1}\) with \((y^{\prime})^{2}=P(y)\), so we can use Lemmas 7.1.10 and 7.1.11 with \(e:=12e_{1}\).
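To spell out the substitution used in this proof: with \(y=e_{1}+3e_{1}w\) we have \(y-e_{1}=3e_{1}w\), \(y+2e_{1}=3e_{1}(w+1)\), and \(y^{\prime}=3e_{1}w^{\prime}\), so

\[P(y)\ =\ 4\,(3e_{1}w)^{2}\cdot 3e_{1}(w+1)\ =\ 108\,e_{1}^{3}\,w^{2}(w+1),\]

and \((y^{\prime})^{2}=P(y)\) becomes \(9e_{1}^{2}(w^{\prime})^{2}=108e_{1}^{3}w^{2}(w+1)\), that is, \((w^{\prime})^{2}=12e_{1}w^{2}(w+1)\).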
**Uniform finiteness.** We now let \(H\) be a Hardy field and \(\varphi(x,y)\) and \(\theta(x)\) be \(\mathcal{L}_{H}\)-formulas, where \(x=(x_{1},\ldots,x_{m})\) and \(y=(y_{1},\ldots,y_{n})\).
**Lemma 7.1.13**.: _There is a \(B=B(\varphi)\in\mathbb{N}\) such that for all \(f\in H^{m}\): if for some \(\mathrm{d}\)-maximal Hardy field extension \(M\) of \(H\) there are more than \(B\) tuples \(g\in M^{n}\) with \(M\models\varphi(f,g)\), then for every \(\mathrm{d}\)-maximal Hardy field extension \(M\) of \(H\) there are infinitely many \(g\in M^{n}\) with \(M\models\varphi(f,g)\)._
Proof.: Fix a \(\mathrm{d}\)-maximal Hardy field extension \(M^{*}\) of \(H\). By [10, Proposition 6.4] we have \(B=B(\varphi)\in\mathbb{N}\) such that for all \(f\in(M^{*})^{m}\): if \(M^{*}\models\varphi(f,g)\) for more than \(B\) many \(g\in(M^{*})^{n}\), then \(M^{*}\models\varphi(f,g)\) for infinitely many \(g\in(M^{*})^{n}\). Now use Theorem 7.1.3.
In the proof of the next lemma we use that \(\mathcal{C}\) has the cardinality \(\mathfrak{c}=2^{\aleph_{0}}\) of the continuum, hence \(|H|=\mathfrak{c}\) if \(H\supseteq\mathbb{R}\).
**Lemma 7.1.14**.: _Suppose \(H\) is \(\mathrm{d}\)-maximal and \(S:=\big{\{}f\in H^{m}:H\models\theta(f)\big{\}}\) is infinite. Then \(|S|=\mathfrak{c}\)._
Proof.: Let \(d:=\dim(S)\) be the dimension of the definable set \(S\subseteq H^{m}\) as introduced in [10]. If \(d=0\), then \(|S|=|\mathbb{R}|=\mathfrak{c}\) by remarks following [10, Proposition 6.4]. Suppose \(d>0\), and for \(g=(g_{1},\ldots,g_{m})\in H^{m}\) and \(i\in\{1,\ldots,m\}\), let \(\pi_{i}(g):=g_{i}\). Then for some \(i\in\{1,\ldots,m\}\), the subset \(\pi_{i}(S)\) of \(H\) has nonempty interior, by [10, Corollary 3.2], and hence \(|S|=|H|=\mathfrak{c}\).
The two lemmas above together now yield Corollary 4 from the introduction.
**Transfer between maximal Hardy fields and transseries.** Let \(\boldsymbol{T}\) be the unique expansion of \(\mathbb{T}\) to a pre-\(\Lambda\Omega\)-field, so \(\boldsymbol{T}\) is an \(H\)-closed \(\Lambda\Omega\)-field with small derivation and constant field \(\mathbb{R}\).
**Lemma 7.1.15**.: _Let \(H\) be a pre-\(H\)-subfield of \(\mathbb{T}\) with \(H\not\subseteq\mathbb{R}\). Then \(H\) has a unique expansion to a pre-\(\Lambda\Omega\)-field._
Proof.: If \(H\) is grounded, this follows from [ADH, 16.3.19]. Suppose \(H\) is not grounded. Then \(H\) has asymptotic integration by the proof of [ADH, 10.6.19] applied to \(\Delta:=v(H^{\times})\). Starting with an \(h_{0}\succ 1\) in \(H\) with \(h_{0}^{\prime}\asymp 1\) we construct a logarithmic sequence \((h_{n})\) in \(H\) as in [ADH, 11.5], so \(h_{n}\asymp\ell_{n}\) for all \(n\). Hence \(\Gamma^{<}\) is cofinal in \(\Gamma^{<}_{\mathbb{T}}\), so \(H\) is \(\omega\)-free by [ADH, remark before 11.7.20]. Now use [ADH, 16.3.19] again.
In the rest of this subsection \(H\) is a Hardy field with canonical \(\Lambda\Omega\)-expansion \(\boldsymbol{H}\), and \(\iota\colon H\to\mathbb{T}\) is an embedding of ordered differential fields, and thus of pre-\(H\)-fields.
**Corollary 7.1.16**.: _The map \(\iota\) is an embedding \(\boldsymbol{H}\to\boldsymbol{T}\) of pre-\(\Lambda\Omega\)-fields._
Proof.: If \(H\not\subseteq\mathbb{R}\), then this follows from Lemma 7.1.15. Suppose \(H\subseteq\mathbb{R}\). Then \(\iota\) is the identity on \(H\), so extends to the embedding \(\mathbb{R}(x)\to\mathbb{T}\) that is the identity on \(\mathbb{R}\) and sends the germ \(x\) to \(x\in\mathbb{T}\). Now use that \(\mathbb{R}(x)\not\subseteq\mathbb{R}\) and Corollary 7.1.2.
Recall from [ADH, B.4] that for any \(\mathcal{L}_{H}\)-sentence \(\sigma\) we obtain an \(\mathcal{L}_{\mathbb{T}}\)-sentence \(\iota(\sigma)\) by replacing the name of each \(h\in H\) occurring in \(\sigma\) with the name of \(\iota(h)\).
**Corollary 7.1.17**.: _Let \(\sigma\) be an \(\mathcal{L}_{H}\)-sentence. Then_ (i)_-_(iv) _in Theorem 7.1.3 are also equivalent to:_
* \(\mathbb{T}\models\iota(\sigma)\)_._
Proof.: Let \(M\) be a d-maximal Hardy field extension of \(H\); it suffices to show that \(M\models\sigma\) iff \(\mathbb{T}\models\iota(\sigma)\). For this, mimic the proof of (i) \(\Rightarrow\) (ii) in Theorem 7.1.3, using Corollary 7.1.16.
Corollary 7.1.17 yields the first part of Corollary 6 from the introduction, even in a stronger form. After an intermezzo on differential closure in Section 7.2 we prove the second part of that corollary in Section 7.3: Corollary 7.3.2. There we also use:
**Lemma 7.1.18**.: \(\iota\) _extends uniquely to an embedding \(H(\mathbb{R})\to\mathbb{T}\) of pre-\(H\)-fields._
Proof.: Let \(\widehat{H}\) be the \(H\)-field hull of \(H\) in \(H(\mathbb{R})\). Then \(\iota\) extends uniquely to an \(H\)-field embedding \(\widehat{\iota}:\widehat{H}\to\mathbb{T}\) by [ADH, 10.5.13]. By [ADH, remark before 4.6.21] and [ADH, 10.5.16], \(\widehat{\iota}\) extends uniquely to an embedding \(H(\mathbb{R})\to\mathbb{T}\) of \(H\)-fields.
We finish by indicating how Theorem A from the introduction (again, in strengthened form) follows from [103] and the results above:
**Corollary 7.1.19**.: _If \(P\in H\{Y\}\), \(f<g\) in \(H\), and \(P(f)<0<P(g)\), then each \(\mathrm{d}\)-maximal Hardy field extension of \(H\) contains a \(y\) with \(f<y<g\) and \(P(y)=0\)._
Proof.: By [103], the ordered differential field \(\mathbb{T}_{\mathrm{g}}\) of grid-based transseries is \(H\)-closed with small derivation and has the differential intermediate value property (DIVP). Hence \(\mathbb{T}\) also has DIVP, by completeness of \(T_{H}\) (see the introduction). Now use Corollary 7.1.17.
**Corollary 7.1.20**.: _Let \(P\in H\{Y\}\) have odd degree. Then there is an \(H\)-hardian germ \(y\) with \(P(y)=0\)._
Proof.: This follows from Theorem 6.7.23 and [ADH, 14.5.3]. Alternatively, we can use Corollary 7.1.19: Replace \(H\) by \(\operatorname{Li}\bigl{(}H(\mathbb{R})\bigr{)}\) to arrange that \(H\supseteq\mathbb{R}\) is Liouville closed, and appeal to the example following Corollary 1.3.9.
Note that if \(H\subseteq\mathcal{C}^{\infty}\), then in the previous two corollaries we have \(H\langle y\rangle\subseteq\mathcal{C}^{\infty}\), by Corollary 6.3.9; likewise with \(\mathcal{C}^{\omega}\) in place of \(\mathcal{C}^{\infty}\).
### 7.2. Relative Differential Closure
Let \(K\subseteq L\) be an extension of differential fields, and let \(r\) range over \(\mathbb{N}\). We say that \(K\) is \(r\)**-differentially closed** in \(L\) if for every \(P\in K\{Y\}^{\neq}\) of order \(\leqslant r\), each zero of \(P\) in \(L\) lies in \(K\). We also say that \(K\) is **weakly \(r\)-differentially closed** in \(L\) if every \(P\in K\{Y\}^{\neq}\) of order \(\leqslant r\) with a zero in \(L\) has a zero in \(K\). We abbreviate "\(r\)-differentially closed" by "\(r\)-d-closed." Thus
\[K\text{ is $r$-d-closed in $L$}\quad\Longrightarrow\quad K\text{ is weakly $r$-d-closed in $L$},\]
\[K\text{ is 0-d-closed in }L \iff K\text{ is weakly 0-d-closed in }L\] \[\iff K\text{ is algebraically closed in }L.\]
Hence
\[K\text{ is weakly 0-d-closed in }L\quad\Longrightarrow\quad C\text{ is algebraically closed in }C_{L}. \tag{7.2.1}\]
Also, if \(K\) is weakly 0-d-closed in \(L\) and \(L\) is algebraically closed, then \(K\) is algebraically closed, and similarly with "real closed" in place of "algebraically closed". In [ADH, 5.8] we defined \(K\) to be _weakly \(r\)-d-closed_ if every \(P\in K\{Y\}\setminus K\) of order \(\leqslant r\) has a zero in \(K\). Thus
\[K\text{ is weakly }r\text{-d-closed }\iff\begin{cases}&K\text{ is weakly }r\text{-d-closed in every differential field}\\ &\text{extension of }K.\end{cases}\]
If \(K\) is weakly \(r\)-d-closed in \(L\), then \(P(K)=P(L)\cap K\) for all \(P\in K\{Y\}\) of order \(\leqslant r\); in particular,
\[K\text{ is weakly 1-d-closed in }L\quad\Longrightarrow\quad\partial K= \partial L\cap K. \tag{7.2.2}\]
Also,
\[K\text{ is 1-d-closed in }L\quad\Longrightarrow\quad C=C_{L}\text{ and }K^{ \dagger}=L^{\dagger}\cap K. \tag{7.2.3}\]
Moreover:
**Lemma 7.2.1**.: _Suppose \(K\) is weakly \(r\)-d-closed in \(L\). If \(L\) is \(r\)-linearly surjective, then so is \(K\), and if \(L\) is \((r+1)\)-linearly closed, then so is \(K\)._
Proof.: The first statement is clear from the remarks preceding the lemma, and the second statement is shown similarly to [ADH, 5.8.9].
Sometimes we get more than we bargained for:
**Lemma 7.2.2**.: _Suppose \(K\) is not algebraically closed, \(C\neq K\), and \(K\) is weakly \(r\)-d-closed in \(L\). Let \(Q_{1},\dots,Q_{m}\in K\{Y\}^{\neq}\) of order \(\leqslant r\) have a common zero in \(L\), \(m\geqslant 1\). Then they have a common zero in \(K\)._
Proof.: Take a polynomial \(\Phi\in K[X_{1},\dots,X_{m}]\) whose only zero in \(K^{m}\) is the origin \((0,\dots,0)\in K^{m}\). Then the differential polynomial \(P:=\Phi(Q_{1},\dots,Q_{m})\in K\{Y\}\) is nonzero (use [ADH, 4.2.1]) and has order \(\leqslant r\). For \(y\in L\) we have
\[Q_{1}(y)=\dots=Q_{m}(y)=0\quad\Longrightarrow\quad P(y)=0,\]
and for \(y\in K\) the converse of this implication also holds.
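For instance, if \(K\) is an ordered field (as is the case for Hardy fields), one may take \(\Phi=X_{1}^{2}+\cdots+X_{m}^{2}\) in the proof above; in general, the existence of such a \(\Phi\) follows from \(K\) not being algebraically closed, by composing norm forms of a proper finite field extension of \(K\) to obtain forms in any number of variables whose only zero is the trivial one.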
We say that \(K\) is **differentially closed** in \(L\) if \(K\) is \(r\)-d-closed in \(L\) for each \(r\), and similarly we define when \(K\) is **weakly differentially closed** in \(L\). We also use "d-closed" to abbreviate "differentially closed". If \(K\), as a differential ring, is an elementary substructure of \(L\), then \(K\) is weakly d-closed in \(L\). The elements of \(L\) that are d-algebraic over \(K\) form the smallest differential subfield of \(L\) containing \(K\) which is d-closed in \(L\); we call it the **differential closure** ("d-closure" for short) of \(K\) in \(L\). Thus \(K\) is d-closed in \(L\) iff no d-subfield of \(L\) properly containing \(K\) is d-algebraic over \(K\). This notion of being differentially closed does not seem prominent in the differential algebra literature, though the definition occurs (as "differentially algebraic closure") in [114, p. 102]. Here is a useful fact about it:
**Lemma 7.2.3**.: _Let \(F\) be a differential field extension of \(L\) and \(E\) be a subfield of \(F\) containing \(K\) such that \(E\) is algebraic over \(K\) and \(F=L(E)\)._
_Then \(K\) is \(\mathrm{d}\)-closed in \(L\) iff \(E\cap L=K\) and \(E\) is \(\mathrm{d}\)-closed in \(F\)._
Proof.: Suppose \(K\) is \(\mathrm{d}\)-closed in \(L\). Then \(K\) is algebraically closed in \(L\), so \(L\) is linearly disjoint from \(E\) over \(K\). (See [122, Chapter VIII, §4].) In particular \(E\cap L=K\). Now let \(y\in F\) be \(\mathrm{d}\)-algebraic over \(E\); we claim that \(y\in E\). Note that \(y\) is \(\mathrm{d}\)-algebraic over \(K\). Take a field extension \(E_{0}\subseteq E\) of \(K\) with \([E_{0}:K]<\infty\) (so \(E_{0}\) is a \(\mathrm{d}\)-subfield of \(E\)) such that \(y\in L(E_{0})\); replacing \(E\), \(F\) by \(E_{0}\), \(L(E_{0})\), respectively, we arrange that \(n:=[E:K]<\infty\). Let \(b_{1},\ldots,b_{n}\) be a basis of the \(K\)-linear space \(E\); then \(b_{1},\ldots,b_{n}\) is also a basis of the \(L\)-linear space \(F\). Let \(\sigma_{1},\ldots,\sigma_{n}\) be the distinct field embeddings \(F\to L^{\mathrm{a}}\) over \(L\). Then the vectors
\[\big{(}\sigma_{1}(b_{1}),\ldots,\sigma_{1}(b_{n})\big{)},\ldots,\big{(}\sigma _{n}(b_{1}),\ldots,\sigma_{n}(b_{n})\big{)}\in(L^{\mathrm{a}})^{n}\]
are \(L^{\mathrm{a}}\)-linearly independent [122, Chapter VI, Theorem 4.1]. Let \(a_{1},\ldots,a_{n}\in L\) be such that \(y=a_{1}b_{1}+\cdots+a_{n}b_{n}\). Then
\[\sigma_{j}(y)=a_{1}\sigma_{j}(b_{1})+\cdots+a_{n}\sigma_{j}(b_{n})\quad\text{ for }j=1,\ldots,n,\]
hence by Cramer's Rule,
\[a_{1},\ldots,a_{n}\in K\big{(}\sigma_{j}(y),\sigma_{j}(b_{i}):i,j=1,\ldots,n \big{)}.\]
Therefore \(a_{1},\ldots,a_{n}\) are \(\mathrm{d}\)-algebraic over \(K\), since \(\sigma_{j}(y)\) and \(\sigma_{j}(b_{i})\) for \(i,j=1,\ldots,n\) are. Hence \(a_{1},\ldots,a_{n}\in K\) since \(K\) is \(\mathrm{d}\)-closed in \(L\), so \(y\in E\) as claimed. This shows the forward implication. The backward direction is clear.
**Corollary 7.2.4**.: _If \(-1\) is not a square in \(L\) and \(\mathrm{i}\) in a differential field extension of \(L\) satisfies \(\mathrm{i}^{2}=-1\), then: \(K\) is \(\mathrm{d}\)-closed in \(L\)\(\Leftrightarrow\)\(K[\mathrm{i}]\) is \(\mathrm{d}\)-closed in \(L[\mathrm{i}]\)._
In the next lemma _extension_ refers to an extension of valued differential fields.
**Lemma 7.2.5**.: _Suppose \(K\) is a \(\lambda\)-free \(H\)-asymptotic field and is \(r\)-\(\mathrm{d}\)-closed in an \(r\)-newtonian ungrounded \(H\)-asymptotic extension \(L\). Then \(K\) is also \(r\)-newtonian._
Proof.: Let \(P\in K\{Y\}^{\neq}\) be quasilinear of order \(\leqslant r\). Then \(P\) remains quasilinear when viewed as differential polynomial over \(L\), by Lemma 1.8.9. Hence \(P\) has a zero \(y\preccurlyeq 1\) in \(L\), which lies in \(K\) since \(K\) is \(r\)-\(\mathrm{d}\)-closed in \(L\).
**Relative differential closure in \(H\)-fields.** We now return to the \(H\)-field setting. Let \(\mathcal{L}_{\partial}=\{0,1,-,+,\,\cdot\,,\partial\}\) be the language of differential rings, a sublanguage of the language \(\mathcal{L}=\mathcal{L}_{\partial}\cup\{\leqslant,\preccurlyeq\}\) of ordered valued differential rings from Section 7.1.
Let now \(M\) be an \(H\)-closed field, and let \(H\) be a pre-\(H\)-subfield of \(M\) whose valuation ring and constant field we denote by \(\mathcal{O}\) and \(C\). Construing \(H\) and \(M\) as \(\mathcal{L}\)-structures in the usual way, \(H\) is an \(\mathcal{L}\)-substructure of \(M\). We also use the sublanguage \(\mathcal{L}_{\preccurlyeq}:=\mathcal{L}_{\partial}\cup\{\preccurlyeq\}\) of \(\mathcal{L}\), so \(\mathcal{L}_{\preccurlyeq}\) is the language of valued differential rings. We
expand the \(\mathcal{L}_{\partial}\)-structure \(H[\mathrm{i}]\) to an \(\mathcal{L}_{\preccurlyeq}\)-structure by interpreting \(\preccurlyeq\) as the dominance relation associated to the valuation ring \(\mathcal{O}+\mathcal{O}\mathrm{i}\) of \(H[\mathrm{i}]\); we expand likewise \(M[\mathrm{i}]\) to an \(\mathcal{L}_{\preccurlyeq}\)-structure by interpreting \(\preccurlyeq\) as the dominance relation associated to the valuation ring \(\mathcal{O}_{M[\mathrm{i}]}=\mathcal{O}_{M}+\mathcal{O}_{M}\mathrm{i}\) of \(M[\mathrm{i}]\). Then \(H[\mathrm{i}]\) is an \(\mathcal{L}_{\preccurlyeq}\)-substructure of \(M[\mathrm{i}]\). By \(H\preccurlyeq_{\mathcal{L}}M\) we mean that \(H\) is an elementary \(\mathcal{L}\)-substructure of \(M\), and we use expressions like "\(H[\mathrm{i}]\preccurlyeq_{\mathcal{L}_{\preccurlyeq}}M[\mathrm{i}]\)" in the same way; of course, the two uses of the symbol \(\preccurlyeq\) in the latter are unrelated.
By Corollary 7.2.4, \(H\) is d-closed in \(M\) iff \(H[\mathrm{i}]\) is d-closed in \(M[\mathrm{i}]\).
**Lemma 7.2.6**.: _Suppose \(M\) has small derivation. Then_
\[H\ \preccurlyeq_{\mathcal{L}_{\partial}}M\ \Longleftrightarrow\ H[\mathrm{i}]\ \preccurlyeq_{\mathcal{L}_{\partial}}M[\mathrm{i}].\]
_Also, if \(H\preccurlyeq_{\mathcal{L}_{\partial}}M\), then \(H\preccurlyeq_{\mathcal{L}}M\) and \(H[\mathrm{i}]\preccurlyeq_{\mathcal{L}_{\preccurlyeq}}M[\mathrm{i}]\)._
Proof.: The forward direction in the equivalence is obvious. For the converse, let \(H[\mathrm{i}]\preccurlyeq_{\mathcal{L}_{\partial}}M[\mathrm{i}]\). We have \(M\equiv_{\mathcal{L}_{\partial}}\mathbb{T}\) by [ADH, 16.6.3]. Then [ADH, 10.7.10] yields an \(\mathcal{L}_{\partial}\)-formula defining \(M\) in \(M[\mathrm{i}]\), so the same formula defines \(M\cap H[\mathrm{i}]=H\) in \(H[\mathrm{i}]\), and thus \(H\preccurlyeq_{\mathcal{L}_{\partial}}M\). For the "also" part, use that the squares of \(M\) are the nonnegative elements in its ordering, that \(\mathcal{O}_{M}\) is then definable as the convex hull of \(C_{M}\) in \(M\) with respect to this ordering, and if \(H\preccurlyeq_{\mathcal{L}_{\partial}}M\), then each \(\mathcal{L}_{\partial}\)-formula defining \(\mathcal{O}_{M}\) in \(M\) also defines \(\mathcal{O}=\mathcal{O}_{M}\cap H\) in \(H\).
The next proposition complements [ADH, 16.0.3, 16.2.5]:
**Proposition 7.2.7**.: _The following are equivalent:_
* \(H\) _is_ \(\mathrm{d}\)_-closed in_ \(M\)_;_
* \(C=C_{M}\) _and_ \(H\preccurlyeq_{\mathcal{L}}M\)_;_
* \(C=C_{M}\) _and_ \(H\) _is_ \(H\)_-closed._
Proof.: Assume (i). Then \(C=C_{M}\) and \(H\) is a Liouville closed \(H\)-field, by (7.2.1), (7.2.2), and (7.2.3). We have \(\omega(M)\cap H=\omega(H)\) since \(H\) is weakly 1-d-closed in \(M\), and \(\sigma\big{(}\Gamma(M)\big{)}\cap H=\sigma\big{(}\Gamma(M)\cap H\big{)}= \sigma\big{(}\Gamma(H)\big{)}\) since \(H\) is 2-d-closed in \(M\) and \(\Gamma(M)\cap H=\Gamma(H)\) by [ADH, p. 520]. Now \(M\) is Schwarz closed [ADH, 14.2.20], so \(M=\omega(M)\cup\sigma\big{(}\Gamma(M)\big{)}\), hence also \(H=\omega(H)\cup\sigma\big{(}\Gamma(H)\big{)}\), thus \(H\) is also Schwarz closed [ADH, 11.8.33]; in particular, \(H\) is \(\omega\)-free. By Lemma 7.2.5, \(H\) is newtonian. This shows (i) \(\Rightarrow\) (iii). The implication (iii) \(\Rightarrow\) (i) is [ADH, 16.0.3], and (iii) \(\Leftrightarrow\) (ii) follows from [ADH, 16.2.5].
Next a consequence of [ADH, 16.2.1], but note first that \(H(C_{M})\) is an \(H\)-subfield of \(M\) and d-algebraic over \(H\), and recall that each \(\omega\)-free \(H\)-field has a Newton-Liouville closure, as defined in [ADH, p. 669].
**Corollary 7.2.8**.: _If \(H\) is \(\omega\)-free, then the differential closure of \(H\) in \(M\) is a Newton-Liouville closure of the \(\omega\)-free \(H\)-subfield \(H(C_{M})\) of \(M\)._
Let \(\boldsymbol{M}\) be the expansion of \(M\) to a \(\Lambda\Omega\)-field, and let \(\boldsymbol{H}\), \(\boldsymbol{H}(C_{M})\) be the expansions of \(H\), \(H(C_{M})\), respectively, to pre-\(\Lambda\Omega\)-subfields of \(\boldsymbol{M}\); then \(\boldsymbol{H}(C_{M})\) is a \(\Lambda\Omega\)-field. By Proposition 7.2.7, the d-closure \(H^{\operatorname{da}}\) of \(H\) in \(M\) is \(H\)-closed and hence has a unique expansion \(\boldsymbol{H}^{\operatorname{da}}\) to a \(\Lambda\Omega\)-field. Then \(\boldsymbol{H}\subseteq\boldsymbol{H}(C_{M})\subseteq\boldsymbol{H}^{\operatorname{da}}\subseteq\boldsymbol{M}\). For the Newton-Liouville closure of a pre-\(\Lambda\Omega\)-field, see [ADH, 16.4.8].
**Corollary 7.2.9**.: _The \(\Lambda\Omega\)-field \(\boldsymbol{H}^{\operatorname{da}}\) is a Newton-Liouville closure of \(\boldsymbol{H}(C_{M})\)._
Proof.: Let \(\boldsymbol{H}(C_{M})^{\operatorname{nl}}\) be a Newton-Liouville closure of \(\boldsymbol{H}(C_{M})\). Since \(\boldsymbol{H}^{\operatorname{da}}\) is \(H\)-closed and extends \(\boldsymbol{H}(C_{M})\), there is an embedding \(\boldsymbol{H}(C_{M})^{\operatorname{nl}}\to\boldsymbol{H}^{\operatorname{da}}\) over \(\boldsymbol{H}(C_{M})\), and any such embedding is an isomorphism, thanks to [ADH, 16.0.3].
**Relative differential closure in Hardy fields.** Specializing to Hardy fields, assume below that \(H\) is a Hardy field and set \(K:=H[i]\subseteq\mathcal{C}^{<\infty}[i]\), an \(H\)-asymptotic extension of \(H\). By definition, \(H\) is d-maximal iff \(H\) is d-closed in every Hardy field extension of \(H\). The following contains Corollary 7 from the introduction:
**Corollary 7.2.10**.: _Suppose \(H\) is \(\operatorname{d}\)-maximal. Then \(K\) is weakly \(\operatorname{d}\)-closed, hence linearly closed by [ADH, 5.8.9], and linearly surjective. If \(E\) is a Hardy field extension of \(H\), then \(K\) is \(\operatorname{d}\)-closed in \(E[i]\)._
Proof.: By our main Theorem 6.7.22, \(H\) is newtonian, hence \(K\) is weakly \(\operatorname{d}\)-closed by [ADH, 14.5.7, 14.5.3], proving the first statement; the second statement follows from Corollary 7.2.4.
We now strengthen the second part of Corollary 7.2.10:
**Corollary 7.2.11**.: _Suppose \(H\) is \(\operatorname{d}\)-maximal and \(L\supseteq K\) is a differential subfield of \(\mathcal{C}^{<\infty}[i]\) such that \(L\) is a \(\operatorname{d}\)-valued \(H\)-asymptotic extension of \(K\) with respect to some dominance relation on \(L\). Then \(K\) is \(\operatorname{d}\)-closed in \(L\)._
Proof.: The \(\operatorname{d}\)-valued field \(K\) is \(\omega\)-free and newtonian by [ADH, 11.7.23, 14.5.7]. Also \(L^{\dagger}\cap K=K^{\dagger}\) by Corollary 5.5.22. Now apply Theorem 2.6.6.
We do not require that the dominance relation on \(L\) in Corollary 7.2.11 is the restriction to \(L\) of the relation \(\preccurlyeq\) on \(\mathcal{C}[i]\).
Recall also that in Section 5.3 we defined the \(\operatorname{d}\)-perfect hull \(\operatorname{D}(H)\) of \(H\) as the intersection of all \(\operatorname{d}\)-maximal Hardy field extensions of \(H\). By the next result we only need to consider here \(\operatorname{d}\)-algebraic Hardy field extensions of \(H\):
**Corollary 7.2.12**.: _If \(H\) is \(\operatorname{d}\)-closed in some \(\operatorname{d}\)-maximal Hardy field extension of \(H\), then \(H\) is \(\operatorname{d}\)-maximal. Hence_
\[\operatorname{D}(H)\ =\ \bigcap\big{\{}M:M\text{ is a $\operatorname{d}$-maximal $\operatorname{d}$-algebraic Hardy field extension of $H$}\big{\}}.\]
Proof.: The first part follows from Theorem 6.7.22 and (i) \(\Rightarrow\) (iii) in Proposition 7.2.7. To prove the displayed equality we only need to show the inclusion "\(\supseteq\)". So let \(f\) be an element of every \(\operatorname{d}\)-maximal \(\operatorname{d}\)-algebraic Hardy field extension of \(H\), and let \(M\) be any \(\operatorname{d}\)-maximal Hardy field extension of \(H\); we need to show \(f\in M\). Let \(E\) be the \(\operatorname{d}\)-closure of \(H\) in \(M\). Then \(E\) is \(\operatorname{d}\)-algebraic over \(H\), and by the first part, \(E\) is \(\operatorname{d}\)-maximal; thus \(f\in E\), hence \(f\in M\) as required.
We can now prove a variant of Lemma 5.3.1 for \(\mathcal{C}^{\infty}\)- and \(\mathcal{C}^{\omega}\)-Hardy fields:
**Corollary 7.2.13**.: _Suppose \(H\) is a \(\mathcal{C}^{\infty}\)-Hardy field. Then_
\[\mathrm{D}(H) =\bigcap\big{\{}M:M\supseteq H\,\operatorname{d}\!\text{-maximal $\mathcal{C}^{\infty}$-Hardy field}\big{\}}\] \[=\big{\{}f\in\operatorname{E}^{\infty}(H):f\text{ is $\operatorname{d}\!\text{-algebraic over $H$}\big{\}}.\]
_Likewise with \(\omega\) in place of \(\infty\)._
Proof.: With both equalities replaced by "\(\subseteq\)", this follows from the definitions and the remarks following Corollary 6.3.9. Let \(f\in\operatorname{E}^{\infty}(H)\) be \(\mathrm{d}\)-algebraic over \(H\); we claim that \(f\in\mathrm{D}(H)\). To prove this claim, let \(E\) be a \(\mathrm{d}\)-maximal Hardy field extension of \(H\); it is enough to show that then \(f\in E\). Now \(F:=E\cap\mathcal{C}^{\infty}\) is a \(\mathcal{C}^{\infty}\)-Hardy field extension of \(H\) which is \(\mathrm{d}\)-closed in \(E\), by Corollary 6.3.9, and hence \(\mathrm{d}\)-maximal by the previous corollary. Thus we may replace \(E\) by \(F\) to arrange that \(E\subseteq\mathcal{C}^{\infty}\), and then take a \(\mathcal{C}^{\infty}\)-maximal Hardy field extension \(M\) of \(E\). Now \(f\in\operatorname{E}^{\infty}(H)\) gives \(f\in M\), and \(E\) being \(\mathrm{d}\)-maximal and \(f\) being \(\mathrm{d}\)-algebraic over \(E\) yields \(f\in E\). The proof for \(\omega\) in place of \(\infty\) is similar.
Combining Theorem 5.4.20 with Corollary 7.2.13 yields:
**Corollary 7.2.14**.: _If \(H\subseteq\mathcal{C}^{\infty}\) is bounded, then \(\mathrm{D}(H)=\operatorname{E}(H)=\operatorname{E}^{\infty}(H)\). Likewise with \(\omega\) in place of \(\infty\)._
_Question._ Do the following implications hold for all \(H\)?
\[H\subseteq\mathcal{C}^{\infty}\implies\operatorname{E}(H)\subseteq \operatorname{E}^{\infty}(H),\qquad H\subseteq\mathcal{C}^{\omega}\implies \operatorname{E}(H)\subseteq\operatorname{E}^{\infty}(H)\subseteq \operatorname{E}^{\omega}(H).\]
Let \(\operatorname{E}:=\operatorname{E}(\mathbb{Q})\) be the perfect hull of the Hardy field \(\mathbb{Q}\). Boshernitzan [33, (20.1)] showed that \(\operatorname{E}\subseteq\operatorname{E}^{\infty}(\mathbb{Q})\subseteq \operatorname{E}^{\omega}(\mathbb{Q})\). From Corollary 7.2.14 we obtain
\[\operatorname{E}\ =\ \operatorname{E}^{\infty}(\mathbb{Q})\ =\ \operatorname{E}^{ \omega}(\mathbb{Q})\ =\ \mathrm{D}(\mathbb{Q}),\]
thus establishing [32, §10, Conjecture 1].
Note that \(\operatorname{E}\) is \(1\)-\(\operatorname{d}\)-closed in all its Hardy field extensions, by Theorem 6.3.14. However, \(\operatorname{E}\) is not \(2\)-linearly surjective by [35, Proposition 3.7], so \(\operatorname{E}\) is not weakly \(2\)-\(\operatorname{d}\)-closed in any \(\operatorname{d}\)-maximal Hardy field extension of \(\operatorname{E}\) (see Lemma 7.2.1) and \(\operatorname{E}\) is not \(2\)-linearly newtonian (see [ADH, 14.2.2]).
More generally, by Theorem 6.3.14 each \(\operatorname{d}\)-perfect Hardy field is \(1\)-\(\operatorname{d}\)-closed in all its Hardy field extensions. Together with Lemmas 6.7.15 and 7.2.5, this yields the following generalization of Lemma 6.7.15:
**Corollary 7.2.15**.: _Every \(\operatorname{d}\!\text{-perfect}\) Hardy field is \(1\)-newtonian._
Let \(M\) be a \(\operatorname{d}\!\text{-maximal}\) Hardy field extension of \(H\) and \(H^{\operatorname{da}}\) the \(\operatorname{d}\!\text{-closure}\) of \(H\) in \(M\), so \(H(\mathbb{R})\subseteq H^{\operatorname{da}}\subseteq M\). From Corollary 7.2.8 we obtain a description of \(H^{\operatorname{da}}\):
**Corollary 7.2.16**.: _If \(H\) is \(\omega\)-free, then \(H^{\operatorname{da}}\) is a Newton-Liouville closure of \(H(\mathbb{R})\)._
Next, let \(\boldsymbol{H}(\mathbb{R})\), \(\boldsymbol{H}^{\operatorname{da}}\), \(\boldsymbol{M}\) be the canonical \(\Lambda\Omega\)-expansions of the Hardy fields \(H(\mathbb{R})\), \(H^{\operatorname{da}}\), \(M\), respectively, so \(\boldsymbol{H}(\mathbb{R})\subseteq\boldsymbol{H}^{\operatorname{da}}\subseteq\boldsymbol{M}\). Corollary 7.2.9 yields:
**Corollary 7.2.17**.: \(\boldsymbol{H}^{\operatorname{da}}\) _is a Newton-Liouville closure of \(\boldsymbol{H}(\mathbb{R})\)._
### 7.3. Embeddings into Transseries and Maximal Hardy Fields
We begin with a direct consequence of facts about "Newton-Liouville closure" in [ADH, 14.5, 16.2]. Let \(\boldsymbol{H}\) be a \(\Lambda\Omega\)-field with underlying \(H\)-field \(H\). By [ADH, 14.5.10, 16.4.1, 16.4.8], the constant field of the Newton-Liouville closure of \(\boldsymbol{H}\) is the real closure of \(C:=C_{H}\). Let \(\boldsymbol{M}\) be an \(H\)-closed \(\Lambda\Omega\)-field extension of \(\boldsymbol{H}\), with underlying \(H\)-field \(M\), and let \(\boldsymbol{H}^{\mathrm{da}}\) be the \(\mathrm{d}\)-closure of \(\boldsymbol{H}\) in \(\boldsymbol{M}\).
**Proposition 7.3.1**.: _Let \(\boldsymbol{H}^{*}\) be a \(\mathrm{d}\)-algebraic \(\Lambda\Omega\)-field extension of \(\boldsymbol{H}\) such that the constant field of \(\boldsymbol{H}^{*}\) is algebraic over \(C\). Then there is an embedding \(\boldsymbol{H}^{*}\to\boldsymbol{M}\) over \(\boldsymbol{H}\), and the image of any such embedding is contained in \(\boldsymbol{H}^{\mathrm{da}}\)._
Proof.: The image of any embedding \(\boldsymbol{H}^{*}\to\boldsymbol{M}\) over \(\boldsymbol{H}\) is \(\mathrm{d}\)-algebraic over \(H\) and thus contained in \(\boldsymbol{H}^{\mathrm{da}}\). For existence, take a Newton-Liouville closure \(\boldsymbol{M}^{*}\) of \(\boldsymbol{H}^{*}\). Then \(\boldsymbol{M}^{*}\) is also a Newton-Liouville closure of \(\boldsymbol{H}\), by [ADH, 16.0.3], and thus embeds into \(\boldsymbol{M}\) over \(\boldsymbol{H}\).
Let \(\mathcal{L}\) be the language of ordered valued differential rings, as in Section 7.1. The second part of Corollary 6 in the introduction now follows from the next result:
**Corollary 7.3.2**.: _Let \(H\) be a Hardy field, \(\iota\colon H\to\mathbb{T}\) an ordered differential field embedding, and \(H^{*}\) a \(\mathrm{d}\)-maximal \(\mathrm{d}\)-algebraic Hardy field extension of \(H\). Then \(\iota\) extends to an ordered valued differential field embedding \(H^{*}\to\mathbb{T}\), and so for any \(\mathcal{L}_{H}\)-sentence \(\sigma\), \(H^{*}\models\sigma\) iff \(\mathbb{T}\models\iota(\sigma)\)._
Proof.: We have \(H(\mathbb{R})\subseteq H^{*}\), and so by Lemma 7.1.18 we arrange that \(H\supseteq\mathbb{R}\). Let \(\boldsymbol{H}\), \(\boldsymbol{H}^{*}\) be the canonical \(\Lambda\Omega\)-expansions of \(H\), \(H^{*}\), respectively, and let \(\boldsymbol{T}\) be the expansion of \(\mathbb{T}\) to a \(\Lambda\Omega\)-field. Then \(\boldsymbol{H}\subseteq\boldsymbol{H}^{*}\), and by Lemma 7.1.16, \(\iota\) is an embedding \(\boldsymbol{H}\to\boldsymbol{T}\). By Proposition 7.3.1, \(\iota\) extends to an embedding \(\boldsymbol{H}^{*}\to\boldsymbol{T}\).
At the end of Section 5.5 we introduced the Hardy field \(H:=\mathbb{R}(\ell_{0},\ell_{1},\ell_{2},\dots)\), and we now mimic this in \(\mathbb{T}\) by setting \(\ell_{0}:=x\) and \(\ell_{n+1}:=\log\ell_{n}\) in \(\mathbb{T}\). This yields the unique ordered differential field embedding \(H\to\mathbb{T}\) over \(\mathbb{R}\) sending \(\ell_{n}\in H\) to \(\ell_{n}\in\mathbb{T}\) for all \(n\). Its image is the \(H\)-subfield \(\mathbb{R}(\ell_{0},\ell_{1},\dots)\) of \(\mathbb{T}\). Since the sequence \((\ell_{n})\) in \(\mathbb{T}\) is coinitial in \(\mathbb{T}^{>\mathbb{R}}\), each ordered differential subfield of \(\mathbb{T}\) containing \(\mathbb{R}(\ell_{0},\ell_{1},\dots)\) is an \(\omega\)-free \(H\)-field, by the remark preceding [ADH, 11.7.20].
From Lemma 7.1.15 and Proposition 7.3.1 we obtain:
**Corollary 7.3.3**.: _If \(H\supseteq\mathbb{R}\) is an \(\omega\)-free \(H\)-subfield of \(\mathbb{T}\) and \(H^{*}\) is a \(\mathrm{d}\)-algebraic \(H\)-field extension of \(H\) with constant field \(\mathbb{R}\), then there exists an \(H\)-field embedding \(H^{*}\to\mathbb{T}\) over \(H\)._
Corollary 7.3.3 goes through with \(\mathbb{T}\) replaced by its \(H\)-subfield
\[\mathbb{T}^{\mathrm{da}}\ :=\ \bigl{\{}f\in\mathbb{T}:f\text{ is $\mathrm{d}$-algebraic (over $\mathbb{Q}$)}\bigr{\}},\]
a Newton-Liouville closure of \(\mathbb{R}(\ell_{0},\ell_{1},\dots)\); see [ADH, 16.6] and Section 7.2 above. We now apply this observation to o-minimal structures. The _Pfaffian closure_ of an expansion of the ordered field of real numbers is its smallest expansion that is closed under taking Rolle leaves of definable \(1\)-forms of class \(\mathcal{C}^{1}\). See Speissegger [192] for complete definitions, and the proof that the Pfaffian closure of an o-minimal expansion of the ordered field of reals remains o-minimal.
**Corollary 7.3.4**.: _The Hardy field \(H\) of the Pfaffian closure of the ordered field of real numbers embeds as an \(H\)-field over \(\mathbb{R}\) into \(\mathbb{T}^{\mathrm{da}}\)._
Proof.: Let \(f\colon\mathbb{R}\to\mathbb{R}\) be definable in the Pfaffian closure of the ordered field of real numbers. The proof of [130, Theorem 3] gives \(r\in\mathbb{N}\), semialgebraic \(g\colon\mathbb{R}^{r+2}\to\mathbb{R}\), and \(a\in\mathbb{R}\) such that \(f|_{(a,\infty)}\) is \(\mathcal{C}^{r+1}\) and \(f^{(r+1)}(t)=g\big{(}t,f(t),\dots,f^{(r)}(t)\big{)}\) for all \(t>a\). Take \(P\in\mathbb{R}[Y_{1},\dots,Y_{r+3}]^{\neq}\) vanishing identically on the graph of \(g\); see [ADH, B.12.18]. Then \(P\big{(}t,f(t),\dots,f^{(r+1)}(t)\big{)}=0\) for \(t>a\). Hence the germ of \(f\) is d-algebraic over \(\mathbb{R}\), and so \(H\) is d-algebraic over \(\mathbb{R}\). As \(H\) contains the \(\omega\)-free Hardy field \(\mathbb{R}(\ell_{0},\ell_{1},\dots)\), we can use the remark following Corollary 7.3.3.
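_Illustration_. As a toy instance of the preceding argument (ours, for orientation only): the exponential function is definable in the Pfaffian closure of the ordered field of reals, and for \(f(t)=\mathrm{e}^{t}\) one may take \(r=0\), the semialgebraic function \(g(t,y):=y\), any \(a\in\mathbb{R}\), and \(P:=Y_{3}-Y_{2}\in\mathbb{R}[Y_{1},Y_{2},Y_{3}]^{\neq}\), which vanishes identically on the graph of \(g\); then \(P\big(t,f(t),f^{\prime}(t)\big)=f^{\prime}(t)-f(t)=0\) for all \(t\), witnessing that the germ of \(f\) is \(\mathrm{d}\)-algebraic over \(\mathbb{R}\).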
_Question_.: Let \(H\) be the Hardy field of an o-minimal expansion of the ordered field of reals, and let \(H^{*}\supseteq H\) be the Hardy field of the Pfaffian closure of this expansion. Does every embedding \(H\to\mathbb{T}\) extend to an embedding \(H^{*}\to\mathbb{T}\)?
We mentioned in the introduction that an embedding \(H\to\mathbb{T}\) as in Corollaries 7.3.2 and 7.3.4 can be viewed as an _expansion operator_ for the Hardy field \(H\) and its inverse as a _summation operator_. The corollaries above concern the existence of expansion operators; this relied on the \(H\)-closedness of \(\mathbb{T}\). Likewise, Theorem 6.7.22 and Proposition 7.3.1 also give rise to summation operators:
**Corollary 7.3.5**.: _Let \(H\) be an \(\omega\)-free \(H\)-field and let \(H^{*}\) be a \(\mathrm{d}\)-algebraic \(H\)-field extension of \(H\) with \(C_{H^{*}}\) algebraic over \(C_{H}\). Then any \(H\)-field embedding \(H\to M\) into a \(\mathrm{d}\)-maximal Hardy field extends to an \(H\)-field embedding \(H^{*}\to M\)._
In particular, given any ordered differential subfield \(H\supseteq\mathbb{R}(\ell_{0},\ell_{1},\dots)\) of \(\mathbb{T}\) with d-closure \(H^{*}\) in \(\mathbb{T}\), any \(\mathcal{L}\)-isomorphism between \(H\) and a Hardy field \(F\) extends to an \(\mathcal{L}\)-isomorphism between \(H^{*}\) and a Hardy field extension of \(F\). For \(H=\mathbb{R}(\ell_{0},\ell_{1},\dots)\subseteq\mathbb{T}\) (so \(H^{*}=\mathbb{T}^{\mathrm{da}}\)) we recover the main result of [104]:
**Corollary 7.3.6**.: _The \(H\)-field \(\mathbb{T}^{\mathrm{da}}\) is \(\mathcal{L}\)-isomorphic to a Hardy field \(\supseteq\mathbb{R}(\ell_{0},\ell_{1},\dots)\)._
Any Hardy field that is \(\mathcal{L}\)-isomorphic to \(\mathbb{T}^{\mathrm{da}}\) is d-maximal, so contains the Hardy field \(\mathrm{E}=\mathrm{E}(\mathbb{Q})=\mathrm{D}(\mathbb{Q})\); see the remarks following Lemma 5.3.1. Thus we have an \(\mathcal{L}\)-embedding \(e\colon\mathrm{E}\to\mathbb{T}^{\mathrm{da}}\), which we can view as an expansion operator for the Hardy field \(\mathrm{E}\). We suspect that \(e(\mathrm{E})\) is independent of the choice of \(e\).
In the remainder of this section we draw some consequences of Corollary 7.3.6 for the universal theory of Hardy fields.
### The universal theory of Hardy fields
Recall from Section 7.1 that \(\mathcal{L}=\{0,1,-,+,\,\cdot\,,\partial,\leqslant,\preccurlyeq\}\) is the language of ordered valued differential rings. Let \(\mathcal{L}^{\iota}\) be \(\mathcal{L}\) augmented by a new unary function symbol \(\iota\). We view each pre-\(H\)-field \(H\) as an \(\mathcal{L}^{\iota}\)-structure by interpreting the symbols from \(\mathcal{L}\) in the natural way and \(\iota\) by the function \(\iota\colon H\to H\) given by \(\iota(a):=a^{-1}\) for \(a\in H^{\times}\) and \(\iota(0):=0\).
Since every Hardy field extends to a maximal one, each universal \(\mathcal{L}^{\iota}\)-sentence which holds in every maximal Hardy field also holds in every Hardy field; likewise with "d-maximal", "perfect", or "d-perfect" in place of "maximal". We now use Corollary 7.3.6 to show:
**Proposition 7.3.7**.: _Let \(\Sigma\) be the set of universal \(\mathcal{L}^{\iota}\)-sentences true in all Hardy fields. Then the models of \(\Sigma\) are the pre-\(H\)-fields with very small derivation._
For this we need a refinement of [ADH, 14.5.11]:
**Lemma 7.3.8**.: _Let \(H\) be a pre-\(H\)-field with very small derivation. Then \(H\) extends to an \(H\)-closed field with small derivation._
Proof.: By Corollary 1.1.25, replacing \(H\) by its \(H\)-field hull, we first arrange that \(H\) is an \(H\)-field. Let \((\Gamma,\psi)\) be the asymptotic couple of \(H\). Then \(\Psi^{\geqslant 0}\neq\emptyset\) or \((\Gamma,\psi)\) has gap \(0\). Suppose \((\Gamma,\psi)\) has gap \(0\). Let \(H(y)\) be the \(H\)-field extension from [ADH, 10.5.11] for \(K:=H\), \(s:=1\). Then \(y\succ 1\) and \(y^{\dagger}=1/y\prec 1\), so replacing \(H\) by \(H(y)\) we can arrange that \(\Psi^{\geqslant 0}\neq\emptyset\). Then every pre-\(H\)-field extension of \(H\) has small derivation, and so we are done by [ADH, 14.5.11].
Proof of Proposition 7.3.7.: The natural axioms for pre-\(H\)-fields with very small derivation formulated in \(\mathcal{L}^{\iota}\) are universal, so all models of \(\Sigma\) are pre-\(H\)-fields with very small derivation. Conversely, given any pre-\(H\)-field \(H\) with very small derivation we show that \(H\) is a model of \(\Sigma\): use Lemma 7.3.8 to extend \(H\) to an \(H\)-closed field with small derivation, and note that the \(\mathcal{L}^{\iota}\)-theory of \(H\)-closed fields with small derivation is complete by [ADH, 16.6.3] and has a Hardy field model by Corollary 7.3.6.
Similar arguments allow us to settle a conjecture from [6], in slightly strengthened form. For this, let \(\mathcal{L}^{\iota}_{x}\) be \(\mathcal{L}^{\iota}\) augmented by a constant symbol \(x\). We view each Hardy field containing the germ of the identity function on \(\mathbb{R}\) as an \(\mathcal{L}^{\iota}_{x}\)-structure by interpreting the symbols from \(\mathcal{L}^{\iota}\) as described at the beginning of this subsection and the symbol \(x\) by the germ of the identity function on \(\mathbb{R}\), which we also denote by \(x\) as usual. Each universal \(\mathcal{L}^{\iota}_{x}\)-sentence which holds in every maximal Hardy field also holds in every Hardy field containing \(x\).
**Proposition 7.3.9**.: _Let \(\Sigma_{x}\) be the set of universal \(\mathcal{L}^{\iota}_{x}\)-sentences true in all Hardy fields that contain \(x\). Then the models of \(\Sigma_{x}\) are the pre-\(H\)-fields with distinguished element \(x\) satisfying \(x^{\prime}=1\) and \(x\succ 1\)._
This follows from [ADH, 14.5.11] and the next lemma just like Proposition 7.3.7 followed from Lemma 7.3.8 and [ADH, 16.6.3].
**Lemma 7.3.10**.: _The \(\mathcal{L}^{\iota}_{x}\)-theory of \(H\)-closed fields with distinguished element \(x\) satisfying \(x^{\prime}=1\) and \(x\succ 1\) is complete._
Proof.: Let \(K_{1}\), \(K_{2}\) be models of this theory, and let \(x_{1},x_{2}\) be the interpretations of \(x\) in \(K_{1}\), \(K_{2}\). Then [ADH, 10.2.2, 10.5.11] gives an isomorphism \(\mathbb{Q}(x_{1})\to\mathbb{Q}(x_{2})\) of valued ordered differential fields sending \(x_{1}\) to \(x_{2}\). To show that \(K_{1}\equiv K_{2}\) as \(\mathcal{L}^{\iota}_{x}\)-structures we identify \(\mathbb{Q}(x_{1})\) with \(\mathbb{Q}(x_{2})\) via this isomorphism. View \(\Lambda\Omega\)-fields as \(\mathcal{L}^{\iota}_{\Lambda\Omega}\)-structures where \(\mathcal{L}^{\iota}_{\Lambda\Omega}\) extends \(\mathcal{L}^{\iota}\) as specified in [ADH, Chapter 16]. (See also the proof of Theorem 7.1.3.) By [ADH, 16.3.19] the \(\omega\)-free \(H\)-fields \(K_{1}\), \(K_{2}\) uniquely expand to \(\Lambda\Omega\)-fields \(\boldsymbol{K}_{1}\), \(\boldsymbol{K}_{2}\). The \(H\)-subfield \(\mathbb{Q}(x_{1})\) of \(K_{1}\) is grounded, so also expands uniquely to a \(\Lambda\Omega\)-field, and this \(\Lambda\Omega\)-field is a common substructure of both \(\boldsymbol{K}_{1}\) and \(\boldsymbol{K}_{2}\). Hence \(\boldsymbol{K}_{1}\equiv_{\mathbb{Q}(x_{1})}\boldsymbol{K}_{2}\) by [ADH, 16.0.1, B.11.6]. This yields the claim.
From the completeness of the \(\mathcal{L}^{\iota}\)-theory of \(H\)-closed fields with small derivation and Lemma 7.3.10 in combination with Theorem 6.7.22 we obtain:
**Corollary 7.3.11**.: _The set \(\Sigma\) of universal \(\mathcal{L}^{\iota}\)-sentences true in all Hardy fields is decidable, and so is the set \(\Sigma_{x}\) of universal \(\mathcal{L}^{\iota}_{x}\)-sentences true in all Hardy fields containing \(x\)._
We finish with an example of a not-so-obvious property of asymptotic integrals, expressible by universal \(\mathcal{L}^{\iota}\)-sentences, which holds in all Hardy fields. For this, let \(Y=(Y_{0},\ldots,Y_{n})\) be a tuple of distinct indeterminates and \(P,Q\in\mathbb{Z}[Y]^{\neq}\).
_Example_.: For all hardian germs \(\ell_{0},\ldots,\ell_{n+1},y\), and \(\vec{y}:=(y,y^{\prime},\ldots,y^{(n)})\):
\[\text{if }\ \ell_{0}^{\prime}=1,\ \ \ell_{j+1}^{\prime}\ell_{j}=\ell_{j}^{\prime}\ \text{ for }j=0,\ldots,n,\ \ P(\vec{y})=0,\ \text{ and }\ q:=Q(\vec{y})\neq 0,\ \text{ then}\]
\[(\ell_{0}\cdots\ell_{n+1}q)^{\prime}\neq 0,\quad(\ell_{0}\cdots\ell_{n+1}q)^{\dagger}\not\asymp q,\quad\text{and}\quad\big(q/(\ell_{0}\cdots\ell_{n+1}q)^{\dagger}\big)^{\prime}\asymp q.\]
To see this, let \(\ell_{0},\ldots,\ell_{n+1},y\) be as hypothesized. Induction on \(j\) shows \(\ell_{j}\in\operatorname{Li}(\mathbb{R})\) and \(\ell_{j}\asymp\log_{j}x\) for \(j=0,\ldots,n+1\). Put \(H:=\mathbb{R}(x)\) and \(E:=H\langle y\rangle\). Then \(\operatorname{trdeg}(E|H)\leqslant n\), so Theorem 5.4.25 and Lemma 5.4.26 yield an \(r\in\{0,\ldots,n\}\) and \(g\in E^{>}\) with \(g\asymp\ell_{r}\) such that \(E\) is grounded with \(\max\Psi_{E}=v(g^{\dagger})\). Iterating Proposition 5.3.2 and [ADH, 10.2.3 and remark after it], starting with \(g^{\dagger}\) and \(\log g\) in the role of \(s\) and \(y\) in [ADH, 10.2.3], produces a grounded Hardy field \(F\) with \(E\subseteq F\subseteq\operatorname{Li}(E)\) and \(\max\Psi_{F}=v(f^{\dagger})\) where \(f\in F^{\times}\), \(f\asymp\ell_{n+1}\). Then
\[\Psi_{E}\ <\ v(f^{\dagger})\ <\ (\Gamma_{E}^{>})^{\prime},\]
so \(f^{\dagger}\not\asymp q:=Q(\vec{y})\), and thus \((f^{\dagger}/q)^{\dagger}\asymp(\ell_{0}\cdots\ell_{n+1}q)^{\dagger}\). Now the conclusion follows from [6, remarks after Lemma 2.7] applied to \(q\), \(f^{\dagger}\), \(F\) in the role of \(a\), \(b_{0}\), \(K\).
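_Illustration_. A minimal concrete instance (ours) with \(n=0\): take \(P:=Y_{0}-1\), \(Q:=Y_{0}\), \(y=1\), \(\ell_{0}=x\), \(\ell_{1}=\log x\), so \(q=1\). Then
\[(\ell_{0}\ell_{1}q)^{\prime}=(x\log x)^{\prime}=\log x+1\neq 0,\qquad(x\log x)^{\dagger}=\frac{\log x+1}{x\log x}\asymp\frac{1}{x}\not\asymp 1=q,\]
and
\[\bigg(\frac{q}{(x\log x)^{\dagger}}\bigg)^{\prime}=\bigg(\frac{x\log x}{\log x+1}\bigg)^{\prime}=1-\frac{\log x}{(\log x+1)^{2}}\asymp 1=q,\]
in accordance with the displayed property.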
### 7.4. Linear Differential Equations over Hardy Fields
In this section we draw some consequences of our main Theorem 6.7.22 for linear differential equations over Hardy fields. This also uses results from Section 5.10. Throughout this section \(H\) is a Hardy field and \(K:=H[i]\subseteq\mathcal{C}^{<\infty}[i]\). Recall from Corollary 7.2.10 that if \(H\) is d-maximal, then \(K\) is linearly surjective and linearly closed; we use this fact freely below. Let \(A\in K[\partial]^{\neq}\) be monic and \(r:=\operatorname{order}A\).
**Solutions in the complexification of a \(\operatorname{d}\)-maximal Hardy field.**
**Theorem 7.4.1**.: _Suppose \(H\) is \(\operatorname{d}\)-maximal. Then \(A\) splits over \(K\) and the \(\mathbb{C}\)-linear space of zeros of \(A\) in \(\mathcal{C}^{<\infty}[i]\) has a basis_
\[f_{1}\operatorname{e}^{\phi_{1}i},\ \ldots,\ f_{r}\operatorname{e}^{\phi_{r}i} \qquad\mbox{where $f_{1},\ldots,f_{r}\in K^{\times}$, $\phi_{1},\ldots,\phi_{r}\in H$.}\]
_For any such basis, set \(\alpha_{j}:=\phi^{\prime}_{j}i+K^{\dagger}\in K/K^{\dagger}\) for \(j=1,\ldots,r\). Then the spectrum of \(A\) is \(\{\alpha_{1},\ldots,\alpha_{r}\}\), with_
\[\operatorname{mult}_{\alpha}(A)\ =\ |\{j\in\{1,\ldots,r\}:\alpha_{j}=\alpha\}| \quad\mbox{ for every $\alpha\in K/K^{\dagger}$,}\]
_and for any \(a_{1},\ldots,a_{r}\in K\) with \(A\ =\ (\partial-a_{r})\cdots(\partial-a_{1})\) we have_
\[\operatorname{mult}_{\alpha}(A)\ =\ |\{j\in\{1,\ldots,r\}:a_{j}+K^{\dagger}= \alpha\}|\quad\mbox{ for every $\alpha\in K/K^{\dagger}$.}\]
(The spectrum of \(A\) is as defined in Section 2.3, and does not refer to eigenvalues of the \(\mathbb{C}\)-linear operator \(y\mapsto A(y)\) on \(\mathcal{C}^{<\infty}[i]\).)
Proof.: The first part follows from Corollaries 2.5.6, 5.10.20, and Lemma 5.10.22. For the rest, also use Lemma 5.10.19 and the proof of Corollary 5.10.20.
_Remarks_.: Suppose \(H\) is d-maximal; so \(\operatorname{I}(K)\subseteq K^{\dagger}\) by Corollary 5.5.19. Hence by Corollary 5.10.20 we can choose the germs \(f_{j}\), \(\phi_{j}\) (\(j=1,\ldots,r\)) in Theorem 7.4.1 such that additionally \(f_{1}\operatorname{e}^{\phi_{1}i},\ldots,f_{r}\operatorname{e}^{\phi_{r}i}\) is a Hahn basis of \(\ker_{\mathcal{C}^{<\infty}[i]}A\); then the \(f_{j}\) with \(\phi_{j}=0\) form a valuation basis of the valued \(\mathbb{C}\)-linear space \(\ker_{K}A\). Fix such \(f_{j}\), \(\phi_{j}\), and let \(\langle\,\ \rangle\) be the "positive definite hermitian form" on the \(K\)-linear subspace \(K[\operatorname{e}^{Hi}]\) of \(\mathcal{C}^{<\infty}[i]\), as specified in the remarks after Corollary 5.10.32, with
associated "norm" \(\|\cdot\|\) on \(K[\operatorname{e}^{H{\rm i}}]\) given by \(\|f\|:=\sqrt{\langle f,f\rangle}\in H^{\geqslant}\). Those remarks give
\[\langle f_{j}\operatorname{e}^{\phi_{j}{\rm i}},f_{k}\operatorname{e}^{\phi_{k} {\rm i}}\rangle\ =\ \begin{cases}0&\text{if }\phi_{j}\neq\phi_{k},\\ f_{j}\overline{f_{k}}&\text{if }\phi_{j}=\phi_{k},\end{cases}\]
and so \(\|f_{j}\operatorname{e}^{\phi_{j}{\rm i}}\|=|f_{j}|\).
Next, let \(H_{0}\supseteq\mathbb{R}\) be a Liouville closed Hardy subfield of \(H\), set \(K_{0}:=H_{0}[\mathrm{i}]\) and suppose \(\mathrm{I}(K_{0})\subseteq K_{0}^{\dagger}\), \(A\in K_{0}[\partial]\), and \(A\) splits over \(K_{0}\). Then we can choose the \(\phi_{j}\), \(f_{j}\) in Theorem 7.4.1 such that \(f_{1}\operatorname{e}^{\phi_{1}\mathrm{i}},\ldots,f_{r}\operatorname{e}^{\phi_{r}\mathrm{i}}\) is a Hahn basis of \(\ker_{\mathcal{C}^{<\infty}[\mathrm{i}]}A\), \(\phi_{1},\ldots,\phi_{r}\in H_{0}\), and \(vf_{1},\ldots,vf_{r}\in v(H_{0}^{\times})\), by Corollaries 2.6.21 and 5.10.28.
For each \(\phi\in H\), the \(\mathbb{C}\)-linear operator \(y\mapsto A(y)\) on \(\mathcal{C}^{<\infty}[{\rm i}]\) maps the \(\mathbb{C}\)-linear subspace \(K\operatorname{e}^{\phi{\rm i}}\) of \(\mathcal{C}^{<\infty}[{\rm i}]\) into itself (Lemma 5.5.25); more precisely, by Corollary 5.10.23:
**Corollary 7.4.2**.: _Suppose \(H\) is \({\rm d}\)-maximal, and let \(\phi\in H\). Then \(A(K\operatorname{e}^{\phi{\rm i}})=K\operatorname{e}^{\phi{\rm i}}\). Moreover, if \(\phi^{\prime}{\rm i}+K^{\dagger}\in K/K^{\dagger}\) is not an eigenvalue of \(A\), then for each \(b\in K\) there is a unique \(y\in K\) with \(A(y\operatorname{e}^{\phi{\rm i}})=b\operatorname{e}^{\phi{\rm i}}\)._
Can the assumption "\(H\) is \({\rm d}\)-maximal" in Theorem 7.4.1 and Corollary 7.4.2 be weakened to "\(H\) is perfect"? The case \(H={\rm E}:={\rm E}(\mathbb{Q})\) is illuminating: \({\rm E}\supseteq\mathbb{R}\) is a Liouville closed \(H\)-field, so contains the germs \(x\) and \(\operatorname{e}^{x^{2}}\), but Boshernitzan [35, Proposition 3.7] showed that \({\rm E}\) is not \(2\)-linearly surjective, as there is no \(y\in{\rm E}\) with \(y^{\prime\prime}+y=\operatorname{e}^{x^{2}}\). In fact, the conclusion of Corollary 7.4.2 fails for \(H={\rm E}\):
**Lemma 7.4.3**.: _Suppose \(H={\rm E}\). Then \(K\) is not \(1\)-linearly surjective: there is no \(y\in K\) with \(y^{\prime}-y{\rm i}=\operatorname{e}^{x^{2}}\)._
Proof.: Suppose \(y=a+b{\rm i}\) (\(a,b\in H\)) satisfies \(y^{\prime}-y{\rm i}=\operatorname{e}^{x^{2}}\). Now
\[y^{\prime}-y{\rm i}=(a^{\prime}+b^{\prime}{\rm i})-(-b+ai)=(a^{\prime}+b)+(b^{ \prime}-a){\rm i},\]
hence \(a^{\prime}+b=\operatorname{e}^{x^{2}}\) and \(b^{\prime}=a\), so \(b^{\prime\prime}+b=\operatorname{e}^{x^{2}}\), contradicting [35, Proposition 3.7].
It follows that the conclusion of Theorem 7.4.1 fails for \(H={\rm E}\):
**Corollary 7.4.4**.: _Let \(H={\rm E}\) and \(A=(\partial-2x)(\partial-{\rm i})\). Then \(\ker_{K[\operatorname{e}^{H{\rm i}}]}A=\mathbb{C}\operatorname{e}^{x{\rm i}}\)._
Proof.: In Section 5.10 we identified the universal exponential extension of \(K\) with \(K[\operatorname{e}^{H{\rm i}}]\). We have \(\operatorname{e}^{x{\rm i}}\in\ker_{K[\operatorname{e}^{H{\rm i}}]}A\). Suppose \(\dim_{\mathbb{C}}\ker_{K[\operatorname{e}^{H{\rm i}}]}A=2\). Then by Corollary 2.5.6, the eigenvalues of \(A\) are \(2x+K^{\dagger}\) and \({\rm i}+K^{\dagger}\). Now \(2x\in K^{\dagger}\), which gives \(f\in K^{\times}\) with \(A(f)=0\), so \(\ker_{K[\operatorname{e}^{H{\rm i}}]}A\) has basis \(f,\operatorname{e}^{x{\rm i}}\). Also \({\rm i}\notin K^{\dagger}\) by a remark preceding Lemma 1.2.16, so [ADH, 5.1.14(ii)] yields \((\partial-{\rm i})(cf)=\operatorname{e}^{x^{2}}\) for some \(c\in\mathbb{C}^{\times}\), contradicting the lemma above.
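_Remark_. For orientation (our verification): \((\partial-\mathrm{i})(\mathrm{e}^{x\mathrm{i}})=\mathrm{i}\,\mathrm{e}^{x\mathrm{i}}-\mathrm{i}\,\mathrm{e}^{x\mathrm{i}}=0\), so indeed \(A(\mathrm{e}^{x\mathrm{i}})=0\); and \(2x\in K^{\dagger}\) as used in the proof, because \(\mathrm{e}^{x^{2}}\in\mathrm{E}\subseteq K\) and \((\mathrm{e}^{x^{2}})^{\dagger}=2x\).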
A vestige of linear surjectivity is retained by \({\rm d}\)-perfect Hardy fields:
**Corollary 7.4.5**.: _Suppose \(H\supseteq\mathbb{R}\) is Liouville closed and \({\rm I}(K)\subseteq K^{\dagger}\), and \(A\) splits over \(K\). Then there are \(\mathfrak{m},\mathfrak{n}\in H^{\times}\) such that for each Hardy field extension \(F\) of \(H\) and \(b\in F[{\rm i}]\) with \(b\prec\mathfrak{n}\), there exists \(y\in{\rm D}(F)[{\rm i}]\) that is the unique \(y\in\mathcal{C}^{<\infty}[{\rm i}]\) with \(A(y)=b\) and \(y\prec\mathfrak{m}\)._ (_So if \(H\) is \({\rm d}\)-perfect, then for such \(\mathfrak{m}\), \(\mathfrak{n}\) and all \(b\in K\) with \(b\prec\mathfrak{n}\) there is a unique \(y\in K\) with \(A(y)=b\) and \(y\prec\mathfrak{m}\)._)
Proof.: Let \(E\) be a d-maximal Hardy field extension of \(H\). Theorem 7.4.1 and the remark following it yield a Hahn basis
\[f_{1}\,\mathrm{e}^{\phi_{1}i},\ \ldots,\ f_{r}\,\mathrm{e}^{\phi_{r}i}\qquad(f_{1}, \ldots,f_{r}\in E[i]^{\times},\ \phi_{1},\ldots,\phi_{r}\in E)\]
of \(\ker_{\mathcal{C}^{<\infty}[i]}A\) with \(\phi_{1},\ldots,\phi_{r}\in H\) and \(vf_{1},\ldots,vf_{r}\in\Gamma=v(K^{\times})\). It follows from Corollaries 2.6.21 and 5.10.28 that \(\mathscr{E}^{\mathrm{e}}(A)=\mathscr{E}^{\mathrm{e}}_{E[i]}(A)=\{vf_{j}:\,j=1, \ldots,r,\,\phi_{j}=0\}\). By Corollary 1.8.10 the quantity \(v^{\mathrm{e}}_{A}(\gamma)\), for \(\gamma\in\Gamma\), does not change when passing from \(K\) to any ungrounded \(H\)-asymptotic extension of \(K\).
Take \(\mathfrak{m},\mathfrak{n}\in H^{\times}\) with \(\mathfrak{m}\prec f_{1},\ldots,f_{r}\) and \(v\mathfrak{n}=v^{\mathrm{e}}_{A}(v\mathfrak{m})\). Consider a Hardy field extension \(F\) of \(H\). Let \(b\in F[\mathrm{i}]^{\times}\), \(b\prec\mathfrak{n}\), and let \(M\) be a d-maximal Hardy field extension of \(F\). Then linear newtonianity of \(L:=M[\mathrm{i}]\) and Corollary 1.5.7 yield \(y\in L\) with \(A(y)=b\), \(vy\notin\mathscr{E}^{\mathrm{e}}_{L}(A)=\mathscr{E}^{\mathrm{e}}(A)\), and \(v^{\mathrm{e}}_{A}(vy)=vb\). Then \(v^{\mathrm{e}}_{A}(vy)=vb>v\mathfrak{n}=v^{\mathrm{e}}_{A}(v\mathfrak{m})\). Since \(v\mathfrak{m}>\mathscr{E}^{\mathrm{e}}_{L}(A)\), this yields \(y\prec\mathfrak{m}\) by Lemma 1.5.6. Suppose \(z\in\mathcal{C}^{<\infty}[\mathrm{i}]\), \(A(z)=b\), and \(y\neq z\prec\mathfrak{m}\). Then \(u:=y-z\in\ker_{\mathcal{C}^{<\infty}[\mathrm{i}]}A\) and \(0\neq u\prec\mathfrak{m}\), so \(f_{j}\prec\mathfrak{m}\) for some \(j\) by Corollary 5.10.18 (applied to \(E\), \(E[\mathrm{i}]\) in place of \(H\), \(K\)), a contradiction. This last argument also takes care of the case \(b=0\): there is no nonzero \(u\prec\mathfrak{m}\) in \(\mathcal{C}^{<\infty}[\mathrm{i}]\) such that \(A(u)=0\).
In [15] we shall prove that if \(H\) is \(\omega\)-free and d-perfect, then \(K\) is linearly closed. (This applies to \(H=\mathrm{E}\).) In particular, if the d-perfect hull \(\mathrm{D}(H)\) of \(H\) is \(\omega\)-free, then \(A\) splits over the algebraic closure \(\mathrm{D}(H)[\mathrm{i}]\) of \(\mathrm{D}(H)\). (In Section 7.5 below we characterize when \(\mathrm{D}(H)\) is \(\omega\)-free.) Now if \(A\) splits over \(\mathrm{D}(H)[\mathrm{i}]\), then there are \(g,\phi\in\mathrm{D}(H)\), \(g\neq 0\), such that \(A(g\,\mathrm{e}^{\phi\mathrm{i}})=0\). The next lemma helps to clarify when for \(H\supseteq\mathbb{R}\) we may take here \(g\), \(\phi\) in the Hardy subfield \(\mathrm{Li}(H)\) of \(\mathrm{D}(H)\).
**Lemma 7.4.6**.: _Suppose \(H\supseteq\mathbb{R}\). The following are equivalent:_
1. _there exists_ \(y\neq 0\) _in a Liouville extension of the differential field_ \(K\) _such that_ \(A(y)=0\)_;_
2. _there exists_ \(f\in\mathrm{Li}(H)[i]\) _such that_ \(f^{\prime}\) _is algebraic over_ \(K\) _and_ \(A(\mathrm{e}^{f})=0\)_;_
3. _there exists_ \(f\in\mathrm{Li}(H)[i]\) _such that_ \(A(\mathrm{e}^{f})=0\)_._
Proof.: Suppose (i) holds. Then Corollary 1.1.30 gives \(y\neq 0\) in a differential field extension \(L\) of \(K\) such that \(A(y)=0\) and \(g:=y^{\dagger}\) is algebraic over \(K\). We arrange that \(L\) contains the algebraic closure \(K^{\mathrm{a}}=H^{\mathrm{rc}}[i]\) of \(K\), where \(H^{\mathrm{rc}}\subseteq\mathcal{C}^{<\infty}\) is the real closure of the Hardy field \(H\). Thus \(g\in K^{\mathrm{a}}\), and hence \(A=B(\partial-g)\) where \(B\in K^{\mathrm{a}}[\partial]\) by [ADH, 5.1.21]. Take \(f\in\mathrm{Li}(H)[i]\) with \(f^{\prime}=g\) and set \(z:=\mathrm{e}^{f}\in\mathcal{C}^{<\infty}[i]^{\times}\). Then \(z^{\dagger}=g\) and thus \(A(z)=0\). This shows (i) \(\Rightarrow\) (ii), and (ii) \(\Rightarrow\) (iii) is trivial. To prove (iii) \(\Rightarrow\) (i), let \(f\) be as in (iii) and \(y:=\mathrm{e}^{f}\in\mathcal{C}^{<\infty}[i]^{\times}\). By [ADH, 10.6.6] the differential field
\[L\ :=\mathrm{Li}(H)[i]\ \subseteq\ \mathcal{C}^{<\infty}[i]\]
is a Liouville extension of \(K=H[i]\). Now \(L[y]\subseteq\mathrm{U}_{L}:=L\big{[}\,\mathrm{e}^{\mathrm{Li}(H)i}\,\big{]} \subseteq\mathcal{C}^{<\infty}[i]\) and \(y^{\dagger}=f^{\prime}\in L\), so the differential fraction field \(L(y)\) of \(L\) is a Liouville extension of \(L\), and hence of \(K\).
**Corollary 7.4.7**.: _If \(H\supseteq\mathbb{R}\) is real closed and \(A(y)=0\) for some \(y\neq 0\) in a Liouville extension of \(K\), then \(A(\mathrm{e}^{f})=0\) for some \(f\in\mathrm{Li}(H)[i]\) with \(f^{\prime}\in K\)._
_We assume \(H\supseteq\mathbb{R}\) in the next three results._ Let \(E\) be a d-maximal Hardy field extension of \(H\). By Theorem 7.4.1, \(A\) splits over \(E[i]\). When does \(A\) already split over the d-subfield \(\mathrm{Li}(H)[i]\) of \(E[i]\)? Here is a necessary condition:
**Corollary 7.4.8**.: _If \(A\) splits over \(\operatorname{Li}(H)[\mathrm{i}]\), then it splits over \(H^{\mathrm{rc}}[\mathrm{i}]\)._
Proof.: We arrange \(H=H^{\mathrm{rc}}\) and proceed by induction on \(r\). The case \(r=0\) being trivial, suppose \(r\geqslant 1\) and \(A\) splits over \(L:=\operatorname{Li}(H)[\mathrm{i}]\). Then [ADH, 5.1.21] yields \(y\neq 0\) in a differential field extension of \(L\) with constant field \(\mathbb{C}\) and \(y^{\dagger}\in L\) such that \(A(y)=0\). Now \(L\langle y\rangle\) is a Liouville extension of \(L\) and hence of \(K\), so Lemma 7.4.6 gives \(f\in L\) with \(f^{\prime}\in K\) and \(A(\mathrm{e}^{f})=0\). Then \(A=B(\partial-f^{\prime})\) where \(B\in K[\partial]\) by [ADH, 5.1.21], and \(B\) splits over \(\operatorname{Li}(H)[\mathrm{i}]\) by [ADH, 5.1.22]. We can assume inductively that \(B\) splits over \(K\), and then \(A\) does too.
With a weaker hypothesis on \(A\), we have:
**Lemma 7.4.9**.: _Suppose \(A(y)=0\) for some \(y\neq 0\) in a Liouville extension of \(K\). Then there is a monic \(B\in K[\partial]\) of order \(n\geqslant 1\) such that \(A\in K[\partial]B\) and the \(\mathbb{C}\)-linear space of zeros of \(B\) in \(\mathcal{C}^{<\infty}[\mathrm{i}]\) has a basis_
\[g_{1}\,\mathrm{e}^{\phi_{1}\mathrm{i}},\ \dots,\ g_{n}\,\mathrm{e}^{\phi_{n} \mathrm{i}}\qquad\text{where }g_{1},\dots,g_{n}\in\operatorname{Li}(H)^{\times} \text{, }\phi_{1},\dots,\phi_{n}\in\operatorname{Li}(H)\]
_and \(g_{1}^{\dagger},\dots,g_{n}^{\dagger},\phi_{1}^{\prime},\dots,\phi_{n}^{ \prime}\in H^{\mathrm{rc}}\). Any such \(B\) splits over \(H^{\mathrm{rc}}[\mathrm{i}]\)._
Proof.: Put \(L:=\operatorname{Li}(H)[\mathrm{i}]\) and identify \(\mathrm{U}_{L}\) with the differential subring \(L\big[\,\mathrm{e}^{\operatorname{Li}(H)\mathrm{i}}\,\big]\) of \(\mathcal{C}^{<\infty}[\mathrm{i}]\) as explained at the beginning of Section 5.10. We consider also the differential subfield \(K^{\mathrm{a}}:=H^{\mathrm{rc}}[\mathrm{i}]\) of \(L\), and use Lemma 2.2.12 to identify \(\mathrm{U}:=\mathrm{U}_{K^{\mathrm{a}}}\) with a differential subring of \(\mathrm{U}_{L}\), so for all \(u\in\mathcal{C}^{<\infty}[\mathrm{i}]^{\times}\) with \(u^{\dagger}\in K^{\mathrm{a}}\) we have \(u\in\mathrm{U}^{\times}\). Corollary 7.4.7 yields \(f\in L\) such that \(f^{\prime}\in K^{\mathrm{a}}\) and \(A(\mathrm{e}^{f})=0\). Then \(g:=\mathrm{e}^{\operatorname{Re}f}\in\operatorname{Li}(H)^{\times}\) and \(\phi:=\operatorname{Im}f\in\operatorname{Li}(H)\) with \(\mathrm{e}^{f}=g\,\mathrm{e}^{\phi\mathrm{i}}\), so \(g^{\dagger}=\operatorname{Re}f^{\prime}\), \(\phi^{\prime}=\operatorname{Im}f^{\prime}\), and thus \(g^{\dagger},\phi^{\prime}\in H^{\mathrm{rc}}\). Now \((\mathrm{e}^{f})^{\dagger}=f^{\prime}\in K^{\mathrm{a}}\), so \(y:=\mathrm{e}^{f}\in\mathrm{U}^{\times}\). Let \(V\) be the \(\mathbb{C}\)-linear subspace of \(\mathrm{U}\) spanned by the \(\sigma(y)\) with \(\sigma\in\operatorname{Aut}_{\partial}(\mathrm{U}|K)\). Then \(y\in V\subseteq\ker_{\mathrm{U}}A\) and so \(n:=\dim_{\mathbb{C}}V\in\{1,\dots,r\}\). Corollary 2.5.9 yields a unique monic \(B\in K^{\mathrm{a}}[\partial]\) of order \(n\) such that \(V=\ker_{\mathrm{U}}B\). From \(\sigma(V)=V\) for all \(\sigma\in\operatorname{Aut}_{\partial}(\mathrm{U}|K)\) we get \(B\in K[\partial]\) by Corollary 2.2.16. Then \(A\in K[\partial]B\) by [ADH, 5.1.15(i), 5.1.11]. To show \(V\) has a basis as described in the lemma, let \(\sigma\in\operatorname{Aut}_{\partial}(\mathrm{U}|K)\). Then \(\sigma(y)\in\mathrm{U}^{\times}\), so \(\sigma(y)^{\dagger}\in K^{\mathrm{a}}=H^{\mathrm{rc}}+H^{\mathrm{rc}}\mathrm{i}\), hence \(\sigma(y)^{\dagger}=g_{\sigma}^{\dagger}+\phi_{\sigma}^{\prime}\mathrm{i}\) with \(g_{\sigma},\phi_{\sigma}\in H^{\mathrm{rc}}\), \(g_{\sigma}\neq 0\). Also \(\big(g_{\sigma}\,\mathrm{e}^{\phi_{\sigma}\mathrm{i}}\big)^{\dagger}=g_{\sigma}^{\dagger}+\phi_{\sigma}^{\prime}\mathrm{i}\), and thus \(\sigma(y)=c_{\sigma}g_{\sigma}\,\mathrm{e}^{\phi_{\sigma}\mathrm{i}}\) with \(c_{\sigma}\in\mathbb{C}^{\times}\). This yields a basis of \(V\) as claimed. The final splitting claim follows from Corollary 2.5.9.
Lemma 7.4.9 yields the following corollary inspired by [187, Corollary 3].
**Corollary 7.4.10**.: _If \(A\) is irreducible, then the following are equivalent:_
* \(A(y)=0\) _for some_ \(y\neq 0\) _in a Liouville extension of_ \(K\)_;_
* _the_ \(\mathbb{C}\)_-linear space of zeros of_ \(A\) _in_ \(\mathcal{C}^{<\infty}[\mathrm{i}]\) _has a basis_ \[g_{1}\,\mathrm{e}^{\phi_{1}\mathrm{i}},\ \dots,\ g_{r}\,\mathrm{e}^{\phi_{r} \mathrm{i}}\qquad\text{where }g_{1},\dots,g_{r}\in\operatorname{Li}(H)^{\times} \text{, }\phi_{1},\dots,\phi_{r}\in\operatorname{Li}(H)\] _and_ \(g_{1}^{\dagger},\dots,g_{r}^{\dagger},\phi_{1}^{\prime},\dots,\phi_{r}^{\prime}\) _are algebraic over_ \(H\)_._
Next we improve the bounds on the derivatives of solutions to linear differential equations from Corollary 5.7.2 when the coefficients of the equation are in \(K\):
**Corollary 7.4.11**.: _Let \(\mathfrak{m}\in H\) with \(0<\mathfrak{m}\preccurlyeq 1\) and \(y\in\mathcal{C}^{r}[\mathrm{i}]\) be such that \(A(y)=0\) and \(y\preccurlyeq\mathfrak{m}^{n}\). Then \(y\in\mathcal{C}^{<\infty}[\mathrm{i}]\) and_
\[y^{(j)}\ \preccurlyeq\mathfrak{m}^{n-j}\mathfrak{v}(A)^{-j}\qquad\text{ for }j=0,\dots,n\text{,}\]
_with \(\prec\) in place of \(\preccurlyeq\) if \(y\prec\mathfrak{m}^{n}\)._
Proof.: First arrange that \(H\) is \(\mathrm{d}\)-maximal. Choose a complement \(\Lambda_{H}\) of the \(\mathbb{R}\)-linear subspace \(\mathrm{I}(H)\) of \(H\), set \(\Lambda:=\Lambda_{H}\mathrm{i}\), and identify the universal exponential extension \(\mathrm{U}=\mathrm{U}_{K}\) of \(K\) with the differential subring \(K[\mathrm{e}^{H\mathrm{i}}]\) of \(\mathcal{C}^{<\infty}[\mathrm{i}]\) as described at the beginning of Section 5.10. By Lemmas 5.10.19 and 5.10.22 we have \(y\in\ker_{\mathcal{C}^{<\infty}[\mathrm{i}]}A=\ker_{\mathrm{U}}A\) and
\[y\ =\ f_{1}\,\mathrm{e}^{\phi_{1}\mathrm{i}}+\cdots+f_{m}\,\mathrm{e}^{\phi_{m} \mathrm{i}},\quad f_{1},\ldots,f_{m}\in K,\quad\phi_{1},\ldots,\phi_{m}\in H,\]
where \(\lambda_{1}:=\phi_{1}^{\prime}\mathrm{i},\ldots,\lambda_{m}:=\phi_{m}^{\prime}\mathrm{i}\in\Lambda\) are the distinct eigenvalues of \(A\) with respect to \(\Lambda\) and \(f_{1}\,\mathrm{e}^{\phi_{1}\mathrm{i}},\ldots,f_{m}\,\mathrm{e}^{\phi_{m}\mathrm{i}}\in\ker_{\mathrm{U}}A\). By Corollary 5.10.9 and Lemma 5.10.10, we have for \(j=1,\ldots,m\): \(f_{j}\,\mathrm{e}^{\phi_{j}\mathrm{i}}\preccurlyeq\mathfrak{m}^{n}\), with \(f_{j}\,\mathrm{e}^{\phi_{j}\mathrm{i}}\prec\mathfrak{m}^{n}\) if \(y\prec\mathfrak{m}^{n}\). Hence we may arrange that \(y=f\,\mathrm{e}^{\phi\mathrm{i}}\) where \(f\in K\), \(\phi\in H\), and \(\lambda:=\phi^{\prime}\mathrm{i}\in\Lambda\) is an eigenvalue of \(A\) with respect to \(\Lambda\), so \(\lambda\prec\mathfrak{v}^{-1}\) by Corollary 4.4.6, where \(\mathfrak{v}:=\mathfrak{v}(A)\preccurlyeq 1\).
Now for each \(j\in\mathbb{N}\): \((\mathrm{e}^{\phi\mathrm{i}})^{(j)}=R_{j}(\lambda)\,\mathrm{e}^{\phi\mathrm{i }}\prec\mathfrak{v}^{-j}\), using Lemma 1.1.20 if \(\lambda\succcurlyeq 1\), and so by the Product Rule: if \(g\in\mathcal{C}^{j}[\mathrm{i}]^{\preccurlyeq}\), then \((g\,\mathrm{e}^{\phi\mathrm{i}})^{(j)}\preccurlyeq\mathfrak{v}^{-j}\), and likewise with \(\prec\) in place of \(\preccurlyeq\). If \(\mathfrak{m}\asymp 1\), then this observation with \(g:=f\) already yields the desired conclusion. Suppose \(\mathfrak{m}\prec 1\). Then with \(z:=y\mathfrak{m}^{-n}=f\mathfrak{m}^{-n}\,\mathrm{e}^{\phi\mathrm{i}}\) this same observation with \(g:=f\mathfrak{m}^{-n}\) gives for \(j=0,\ldots,n\): \(z^{(j)}\preccurlyeq\mathfrak{v}^{-j}\), with \(z^{(j)}\prec\mathfrak{v}^{-j}\) if \(y\prec\mathfrak{m}^{n}\). Now \(z\in\mathcal{C}^{n}[\mathrm{i}]\), so we can use Lemma 5.7.10 for \(r=n\) and \(\eta=|\mathfrak{v}|^{-1}\).
For \(\mathfrak{m}=1\) we obtain from Corollary 7.4.11:
**Corollary 7.4.12**.: _Let \(y\in\mathcal{C}^{r}[\mathrm{i}]\) be such that \(A(y)=0\) and \(y\preccurlyeq 1\). Then \(y\in\mathcal{C}^{<\infty}[\mathrm{i}]\) and \(y^{(n)}\preccurlyeq\mathfrak{v}(A)^{-n}\) for all \(n\), with \(\prec\) in place of \(\preccurlyeq\) if \(y\prec 1\)._
Recall from (2.4.3) the _concomitant_\(P_{A}\in K\{Y,Z\}\) of \(A\). It yields a \(\mathbb{C}\)-bilinear map
\[(y,z)\mapsto[y,z]_{A}:=P_{A}(y,z)\ :\ \mathcal{C}^{<\infty}[\mathrm{i}]\times \mathcal{C}^{<\infty}[\mathrm{i}]\to\mathcal{C}^{<\infty}[\mathrm{i}]\]
used in the next result, which is immediate from Corollaries 5.10.29 and 7.2.10.
**Corollary 7.4.13**.: _Suppose \(H\) is \(\mathrm{d}\)-maximal, and let \(f_{j}\), \(\phi_{j}\) be as in Theorem 7.4.1. Then the \(\mathbb{C}\)-linear space of zeros of the adjoint \(A^{*}\) of \(A\) in \(\mathcal{C}^{<\infty}[\mathrm{i}]\) has a basis_
\[f_{1}^{*}\,\mathrm{e}^{-\phi_{1}\mathrm{i}},\ \ldots,\ f_{r}^{*}\,\mathrm{e}^{- \phi_{r}\mathrm{i}}\qquad\text{ where }f_{j}^{*}\in K^{\times}\ (j=1,\ldots,r)\]
_such that \(\big{[}f_{j}\,\mathrm{e}^{\phi_{j}\mathrm{i}},f_{k}^{*}\,\mathrm{e}^{-\phi_{k }\mathrm{i}}\,\big{]}_{A}=\delta_{jk}\) for \(j,k=1,\ldots,r\)._
Recall that \(A\) is said to be _self-adjoint_ if \(A^{*}=A\), and _skew-adjoint_ if \(A^{*}=-A\). (See Definition 2.4.12.) Self-adjoint operators play an important role in boundary value problems; see, e.g., [61, Chapter XIII]. The next result follows from Corollaries 2.4.30, 5.10.31 and 7.2.10 and applies to such operators:
**Corollary 7.4.14**.: _Suppose \(H\) is \(\mathrm{d}\)-maximal, and \(A^{*}=(-1)^{r}A_{\ltimes a}\), \(a\in K^{\times}\). Then there are \(a_{1},\ldots,a_{r}\in K\) such that_
\[A\ =\ (\partial-a_{r})\cdots(\partial-a_{1})\quad\text{and}\quad a_{j}+a_{r-j+1} \ =\ a^{\dagger}\ \ \text{for }j=1,\ldots,r.\]
_For \(f_{j}\), \(\phi_{j}\) as in Theorem 7.4.1 we have: \(\phi_{1}+\cdots+\phi_{r}\preccurlyeq 1\), and for each \(i\in\{1,\ldots,r\}\) there is a \(j\in\{1,\ldots,r\}\) such that \(\phi_{i}+\phi_{j}\preccurlyeq 1\)._
Operators satisfying the hypothesis of Corollary 7.4.14 are _self-dual_ in the sense of Section 2.4. For sources of such operators in physics, see [38]. The next result is immediate from Corollary 2.4.9 and Theorem 7.4.1 and gives a sufficient condition for such operators to have nontrivial zeros in complexified Hardy fields:
**Corollary 7.4.15**.: _If \(A\) is self-dual \((\)which is the case if \(A\) is skew-adjoint\()\), \(r\) is odd, and \(L\) is a \(\mathrm{d}\)-maximal Hardy field extension of \(H\), then there are \(y,z\in L\), not both zero, such that \(A(y+zi)=0\)._
The space of zeros of a self-dual \(A\) has a special kind of basis, by Corollary 5.10.32:
**Corollary 7.4.16**.: _Suppose \(A\) is self-dual and \(H\) is \(\mathrm{d}\)-maximal. Then the \(\mathbb{C}\)-linear space of zeros of \(A\) in \(\mathcal{C}^{<\infty}[\mathrm{i}]\) has a basis_
\[f_{1}\,\mathrm{e}^{\phi_{1}\mathrm{i}},g_{1}\,\mathrm{e}^{-\phi_{1}\mathrm{i} },\,\ldots,\,f_{m}\,\mathrm{e}^{\phi_{m}\mathrm{i}},g_{m}\,\mathrm{e}^{-\phi_{ m}\mathrm{i}},\,\,h_{1},\ldots,h_{n}\qquad(2m+n=r)\]
_where \(f_{1},\ldots,f_{m},g_{1},\ldots,g_{m},h_{1},\ldots,h_{n}\in K^{\times}\), and \(\phi_{1},\ldots,\phi_{m}\in H^{>\mathbb{R}}\) are apart._
### Bounded operators
_In this subsection_ \(H\) _is_ \(\mathrm{d}\)_-maximal._ (One can often reduce to this situation by extending a given Hardy field to a \(\mathrm{d}\)-maximal Hardy field.) _We choose an_ \(\mathbb{R}\)_-linear complement_ \(\Lambda_{H}\) _of_ \(\mathrm{I}(H)\) _in_ \(H\)_, set_ \(\Lambda:=\Lambda_{H}\mathrm{i}\)_, and identify_ \(\mathrm{U}:=\mathrm{U}_{K}\) _with_ \(K[\mathrm{e}^{H\mathrm{i}}]\) _as explained at the beginning of Section_ 5.10_. Also_ \(A\in\mathcal{O}[\partial]\) (so \(\mathfrak{v}(A)=1\))_._ Thus \(\mathrm{U}^{\times}=K^{\times}\,\mathrm{e}^{H\mathrm{i}}\) and \(V:=\ker_{\mathcal{C}^{<\infty}[\mathrm{i}]}A=\ker_{\mathrm{U}}A\). See Sections 5.2 and 5.10 for definitions of Lyapunov exponents and of \(\mathcal{C}[\mathrm{i}]^{\cong}\) and \(\mathrm{U}^{\cong}\).
**Lemma 7.4.17**.: \(V\subseteq\mathrm{U}^{\cong}\)_, and \(\lambda(y)=\lambda(y,y^{\prime},\ldots,y^{(r-1)})\in\mathbb{R}\) for all \(y\in V^{\neq}\)._
Proof.: Lemma 2.3.36 gives \(\Sigma(A)\subseteq[\mathcal{O}]\), and Corollary 5.2.50 yields \(V\subseteq\mathrm{U}\cap\mathcal{C}[\mathrm{i}]^{\cong}\). Lemma 5.10.19 gives a basis \(f_{1}\,\mathrm{e}(h_{1}\mathrm{i}),\ldots,f_{r}\,\mathrm{e}(h_{r}\mathrm{i})\) of the \(\mathbb{C}\)-linear space \(V\) with \(f_{1},\ldots,f_{r}\in K^{\times}\) and \(h_{1},\ldots,h_{r}\in\Lambda_{H}\), and it says that then the eigenvalues of \(A\) with respect to \(\Lambda\) are \(h_{1}\mathrm{i},\ldots,h_{r}\mathrm{i}\). So for \(j=1,\ldots,r\) we have \(h_{j}\mathrm{i}-a\in K^{\dagger}=H+\mathrm{I}(H)\mathrm{i}\) with \(a\in\mathcal{O}\). Then \(h_{j}-\operatorname{Im}a\in\mathrm{I}(H)\subseteq\mathcal{O}_{H}\) and \(\operatorname{Im}a\in\mathcal{O}_{H}\), so \(h_{j}\in\Lambda_{H}\cap\mathcal{O}_{H}\). From \(f_{j}\,\mathrm{e}(h_{j}\mathrm{i})\in\mathcal{C}[\mathrm{i}]^{\cong}\) we obtain \(f_{j}\in K\cap\mathcal{C}[\mathrm{i}]^{\cong}=\mathcal{O}_{\Delta}\), so \(V\subseteq\mathrm{U}^{\cong}\). The rest follows from Corollary 5.2.50 and Lemma 5.10.49.
For \(y\in\mathcal{C}^{1}[\mathrm{i}]^{\times}\), in Section 5.10 we also defined
\[\mu(y)\ =\ \limsup_{t\to+\infty}\mathrm{Im}\,\frac{y^{\prime}(t)}{y(t)}\in \mathbb{R}_{\pm\infty}.\]
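_Illustration_. For instance (our computation) \(\mu(\mathrm{e}^{x\mathrm{i}})=1\): with \(y=\mathrm{e}^{x\mathrm{i}}\) we have \(y^{\prime}/y=\mathrm{i}\), so \(\operatorname{Im}\big(y^{\prime}(t)/y(t)\big)=1\) for all \(t\); likewise \(\mu(\mathrm{e}^{-x\mathrm{i}})=-1\), and \(\mu(f)=0\) for \(f\in H^{\times}\), since then \(f^{\prime}/f=f^{\dagger}\) is real-valued.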
The zeros of the characteristic polynomial \(\chi_{A}\in\mathbb{C}[Y]\) of \(A\) (defined in Section 2.3) contain information about elements of \(V\cap\mathrm{U}^{\times}\):
**Lemma 7.4.18**.: _Let \(f\in K^{\times}\) and \(\phi\in H\) be such that \(y=f\,\mathrm{e}^{\phi\mathrm{i}}\in V\). Then \(y\in(\mathrm{U}^{\cong})^{\times}\), \(\lambda:=\lambda(y)\in\mathbb{R}\), \(\mu:=\mu(y)\in\mathbb{R}\), \(\phi-\mu x\prec x\), and with \(\alpha:=\phi^{\prime}\mathrm{i}+K^{\dagger}\):_
\[\chi_{A}(-\lambda+\mu\mathrm{i})=0,\qquad\mathrm{mult}_{\alpha}(A)\ \leqslant\sum_{c\in\mathbb{C},\ \mathrm{Im}\,c=\mu}\mathrm{mult}_{c}(\chi_{A}).\]
Proof.: Corollary 2.3.37 gives \(y^{\dagger}\preccurlyeq 1\), so \(y\in(\mathrm{U}^{\cong})^{\times}\), \(\lambda,\mu\in\mathbb{R}\) with \(y^{\dagger}-(-\lambda+\mu\mathrm{i})\prec 1\) and \(\phi^{\prime}\preccurlyeq 1\) by Lemma 5.10.48 and an observation following Corollary 5.10.50. Then \(\phi^{\prime}-\mu\prec 1\), so \(\phi-\mu x\prec x\). The rest follows from Corollary 2.3.37 and Lemma 2.3.39.
A **Lyapunov basis** of \(V\) is a basis \(y_{1},\ldots,y_{r}\) of the \(\mathbb{C}\)-linear space \(V\) such that for all \(c_{1},\ldots,c_{r}\in\mathbb{C}\), not all zero, and \(y=c_{1}y_{1}+\cdots+c_{r}y_{r}\) we have \(\lambda(y)=\min\bigl{\{}\lambda(y_{j}):c_{j}\neq 0\bigr{\}}\). There is a Lyapunov basis of \(V\); indeed, by the remarks after Theorem 7.4.1 and Corollary 5.10.44:
**Corollary 7.4.19**.: _The \(\mathbb{C}\)-linear space \(V\) has a Hahn basis_
\[f_{1}\,\mathrm{e}^{\phi_{1}\mathrm{i}},\ldots,f_{r}\,\mathrm{e}^{\phi_{r}\mathrm{i}}\qquad(f_{1},\ldots,f_{r}\in K^{\times},\ \phi_{1},\ldots,\phi_{r}\in H),\]
_and every such Hahn basis of \(V\) is a Lyapunov basis of \(V\)._
_Question_.: By Perron [148, Satz 8] (see also [191, Satz VI]), \(V\) has a Lyapunov basis \(y_{1},\ldots,y_{r}\) such that for each \(\lambda\in\mathbb{R}\), the number of \(j\) with \(\lambda(y_{j})=\lambda\) is equal to \(\sum_{\mu\in\mathbb{R}}\operatorname{mult}_{-\lambda+\mu\mathrm{i}}(\chi_{A})\). Can we choose here \(y_{1},\ldots,y_{r}\) to be a Hahn basis of \(V\)?
**Corollary 7.4.20**.: _If \(\chi_{A}\) has no real zeros, then \(K^{\dagger}\) is not an eigenvalue of \(A\), and so there is no \(y\in K^{\times}\) such that \(A(y)=0\)._
Proof.: Take a Hahn basis of \(V\) as in Corollary 7.4.19. Then the eigenvalues of \(A\) are \(\phi_{1}^{\prime}\mathrm{i}+K^{\dagger},\ldots,\phi_{r}^{\prime}\mathrm{i}+K^{\dagger}\), by Theorem 7.4.1. Suppose \(K^{\dagger}\) is an eigenvalue of \(A\). Then we have \(j\) with \(\phi_{j}^{\prime}\mathrm{i}\in K^{\dagger}=H+\mathrm{I}(H)\mathrm{i}\), so \(\phi_{j}^{\prime}\in\mathrm{I}(H)\), hence \(\phi_{j}\preccurlyeq 1\), and thus \(\phi_{j}=0\). Then Lemma 7.4.18 yields a real zero of \(\chi_{A}\). For the rest, use that the \(f_{j}\) with \(\phi_{j}=0\) form a basis of \(\ker_{K}A\) by remarks after Theorem 7.4.1.
_Example_.: The linear differential equation
\[y^{\prime\prime\prime}-\left(i+\frac{1}{\mathrm{e}^{x}}\right)y^{\prime\prime }+\left(1-\frac{1}{\log x}\right)y^{\prime}-\left(i+\frac{1}{x^{2}}\right)y=0\]
has no nonzero solution in \(F[i]\) for any Hardy field \(F\).
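_Illustration_. To see this (assuming, as the examples in this section suggest, that \(\chi_{A}\) is obtained by replacing each bounded coefficient of \(A\) by its limit in \(\mathbb{C}\); its precise definition is in Section 2.3): here
\[\chi_{A}(Y)\ =\ Y^{3}-\mathrm{i}Y^{2}+Y-\mathrm{i}\ =\ (Y^{2}+1)(Y-\mathrm{i})\ =\ (Y-\mathrm{i})^{2}(Y+\mathrm{i}),\]
whose zeros \(\mathrm{i}\), \(\mathrm{i}\), \(-\mathrm{i}\) are all non-real, so Corollary 7.4.20 applies.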
We can now prove a strong version of a theorem of Perron [147, Satz 5] in the setting of linear differential equations over complexified Hardy fields. (A precursor of Perron's theorem for \(A\in\mathbb{C}(x)[\partial]\) is due to Poincaré [154].) Perron assumes additionally that the real parts of distinct zeros of \(\chi_{A}\) are distinct.
**Proposition 7.4.21**.: _Suppose all \((\)complex\()\) zeros of \(\chi_{A}\) are simple. Let_
\[y_{1}=f_{1}\,\mathrm{e}^{\phi_{1}\mathrm{i}},\ \ldots,\ y_{r}=f_{r}\,\mathrm{e}^{\phi_{r}\mathrm{i}}\qquad(f_{1},\ldots,f_{r}\in K^{\times},\ \phi_{1},\ldots,\phi_{r}\in H)\]
_be a Hahn basis of \(V\). Then the zeros of \(\chi_{A}\) are_
\[c_{1}\ :=\ -\lambda(y_{1})+\mu(y_{1})\mathrm{i},\ \ldots,\ c_{r}\ :=\ -\lambda(y_{r})+\mu(y_{r})\mathrm{i},\]
_and \(\left(y_{j}^{(n)}/y_{j}\right)-c_{j}^{n}\prec 1\) for \(j=1,\ldots,r\) and all \(n\)._
Proof.: By Lemma 7.4.18 each \(c_{j}\) is a zero of \(\chi_{A}\), and we claim that there are no others. Let \(c=-\lambda+\mu\mathrm{i}\) (\(\lambda,\mu\in\mathbb{R}\)) be a zero of \(\chi_{A}\). Then Corollary 1.8.47 and [ADH, 5.1.21, 5.8.7] yield \(A\in K[\partial]\big(\partial-(p+q\mathrm{i})\big)\) with \(p,q\in\mathcal{O}_{H}\), \(p+\lambda,q-\mu\prec 1\). Taking \(f\in H^{\times}\) and \(\phi\in H\) with \(f^{\dagger}=p\) and \(\phi^{\prime}=q\) we have \(y:=f\,\mathrm{e}^{\phi\mathrm{i}}\in V^{\neq}\) and so \(\lambda(y)=\lambda\) and \(\mu(y)=\mu\).
Take \(a_{1},\ldots,a_{r}\in\mathbb{C}\) such that \(y=a_{1}f_{1}\,\mathrm{e}^{\phi_{1}\mathrm{i}}+\cdots+a_{r}f_{r}\,\mathrm{e}^{\phi_{r}\mathrm{i}}\). As in the proof of Corollary 5.10.44 (but with \(r\) instead of \(m\)) we arrange, with \(l\in\{1,\ldots,r\}\), that \(\phi_{1},\ldots,\phi_{l}\) are distinct and each \(\phi_{j}\) with \(l<j\leqslant r\) equals one of \(\phi_{1},\ldots,\phi_{l}\). For \(k=1,\ldots,l\) we take (as in that proof, but with other notation) \(h_{k}\in\Lambda_{H}\) such that \(\phi_{k}-\phi(h_{k}\mathrm{i})\preccurlyeq 1\) and put \(g_{k}:=\sum_{1\leqslant j\leqslant r,\ \phi_{j}=\phi_{k}}a_{j}f_{j}\) and \(u_{k}:=\mathrm{e}^{(\phi_{k}-\phi(h_{k}\mathrm{i}))\mathrm{i}}\), and likewise \(h\in\Lambda_{H}\) with \(\phi-\phi(h\mathrm{i})\preccurlyeq 1\), and set \(u:=\mathrm{e}^{(\phi-\phi(h\mathrm{i}))\mathrm{i}}\). Then
\[y\ =\ uf\,\mathrm{e}(h\mathrm{i})\ =\ g_{1}\,\mathrm{e}^{\phi_{1}\mathrm{i}}+\cdots+g_{l}\,\mathrm{e}^{\phi_{l}\mathrm{i}}\ =\ u_{1}g_{1}\,\mathrm{e}(h_{1}\mathrm{i})+\cdots+u_{l}g_{l}\,\mathrm{e}(h_{l}\mathrm{i}).\]
Now \(u,u_{1},\ldots,u_{l}\in K\), so \(h=h_{k}\) for some \(k\in\{1,\ldots,l\}\), say \(h=h_{1}\), hence \(uf=u_{1}g_{1}\) and so \(y=g_{1}\,\mathrm{e}^{\phi_{1}\mathrm{i}}\). Since the \(f_{j}\) with \(\phi_{j}=\phi_{1}\) are valuation-independent
and \(f\asymp g_{1}\), this yields \(j\) with \(\phi_{j}=\phi_{1}\) and \(a_{j}\neq 0\) such that \(f\asymp f_{j}\). Then \(\lambda=\lambda(y)=\lambda(f)=\lambda(f_{j})=\lambda(y_{j})\). The proof of Lemma 7.4.18 gives
\[\mu\ =\ \mu(y)\ =\ \lim_{t\to\infty}\phi^{\prime}(t),\qquad\mu(y_{j})\ =\ \lim_{t\to\infty}\phi^{\prime}_{j}(t)\ =\ \lim_{t\to\infty}\phi^{\prime}_{1}(t).\]
But \(\phi_{1}-\phi(h_{1}i)\preccurlyeq 1\), \(\phi-\phi(hi)\preccurlyeq 1\), and \(h=h_{1}\), so \(\phi-\phi_{1}\preccurlyeq 1\), hence \(\phi^{\prime}-\phi^{\prime}_{1}\prec 1\), and thus \(\mu=\mu(y_{j})\). This yields \(c=c_{j}\). For the last claim, let \(j\in\{1,\ldots,r\}\). Then for \(z_{j}:=y^{\dagger}_{j}\) we have \(z_{j}-c_{j}\prec 1\) (see for example the proof of Lemma 7.4.18), and \(y^{(n)}_{j}/y_{j}=R_{n}(z_{j})\). Now use Lemma 1.1.20 if \(c_{j}\neq 0\). If \(c_{j}=0\), then \(z_{j}\prec 1\), so we can use that then \(R_{n}(z_{j})\prec 1\) for \(n\geqslant 1\).
**Corollary 7.4.22**.: _Suppose the real part of each complex zero of \(\chi_{A}\) is negative. Then for all \(y\in V\) and all \(n\) we have \(y^{(n)}\prec 1\)._
Proof.: By Corollary 7.4.19 it is enough to consider the case \(y=f\operatorname{e}^{\phi\mathrm{i}}\in V\) where \(f\in K^{\times}\), \(\phi\in H\). Then \(\lambda:=\lambda(y)=\lambda(f)\in\mathbb{R}^{>}\) by Lemma 7.4.18, which for \(0<\varepsilon<\lambda\) gives \(f\prec\operatorname{e}^{-(\lambda-\varepsilon)x}\prec 1\). Now use Corollary 7.4.12.
We use Corollary 7.4.22 to strengthen another theorem of Perron [149, 150] in the Hardy field context:
**Corollary 7.4.23**.: _Suppose \(a_{0}:=\chi_{A}(0)\neq 0\). Let \(b\in K\), \(b\preccurlyeq 1\). Then there exists \(y\in K\) such that_
\[A(y)\ =\ b,\quad y-(b/a_{0})\ \prec\ 1,\quad y^{(n)}\ \prec\ 1\ \text{ for all }n\geqslant 1.\]
_Moreover, if the real part of each complex zero of \(\chi_{A}\) is negative, then all \(y\in\mathcal{C}^{<\infty}[\mathrm{i}]\) with \(A(y)=b\) satisfy \(y-(b/a_{0})\prec 1\) and \(y^{(n)}\prec 1\) for all \(n\geqslant 1\)._
Proof.: By Theorem 6.7.22 and [ADH, 14.5.7], \(K\) is \(r\)-linearly newtonian. As \(\partial\mathcal{O}\) is contained in the maximal ideal of \(\mathcal{O}\), for the first part it is enough to find \(y\in K\) such that \(A(y)=b\) and \(y-(b/a_{0})\prec 1\). Corollary 1.5.8 yields such \(y\) if \(b\asymp 1\), so suppose \(0\neq b\prec 1\) (since for \(b=0\) we can take \(y=0\)). From \(A(1)\asymp 1\) and Proposition 1.5.2 we obtain \(0\notin v(\ker_{K}^{\neq}A)=\mathscr{E}^{\mathrm{e}}(A)\). Corollary 1.5.7 then yields \(y\in K^{\times}\) with \(A(y)=b\), \(vy\notin\mathscr{E}^{\mathrm{e}}(A)\), and \(v^{\mathrm{e}}_{A}(vy)=vb\). Now \(A(1)\asymp 1\) gives \(v^{\mathrm{e}}_{A}(0)=0\), so \(y\prec 1\) by Lemma 1.5.6. The second statement now follows from Corollary 7.4.22.
_Example_.: Each \(y\in\mathcal{C}^{2}[\mathrm{i}]\) with
\[y^{\prime\prime}+(2-\mathrm{i})(1+x^{-1}\log x)y^{\prime}+(1-\mathrm{i})y\ =\ 2+ \operatorname{e}^{-x^{2}}\]
satisfies \(y\sim\mathrm{i}+1\) and \(y^{(n)}\prec 1\) for each \(n\geqslant 1\), and there is such a \(y\in F[\mathrm{i}]\) for some Hardy field \(F\).
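_Illustration_. With the same reading of \(\chi_{A}\) as above (ours): here
\[\chi_{A}(Y)\ =\ Y^{2}+(2-\mathrm{i})Y+(1-\mathrm{i})\ =\ (Y+1)\big(Y+(1-\mathrm{i})\big),\]
with zeros \(-1\) and \(-1+\mathrm{i}\), both of real part \(-1<0\); moreover \(a_{0}=\chi_{A}(0)=1-\mathrm{i}\) and \(b=2+\mathrm{e}^{-x^{2}}\sim 2\), so \(b/a_{0}\sim 2/(1-\mathrm{i})=1+\mathrm{i}\), in accordance with \(y\sim\mathrm{i}+1\).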
Here is a version of Corollary 2.3.40 in the Hardy field setting:
**Proposition 7.4.24**.: _Let \(H_{0}\) be a \(\mathrm{d}\)-perfect Hardy subfield of \(H\) such that \(A\in K_{0}[\partial]\) for \(K_{0}:=H_{0}[\mathrm{i}]\). Suppose \(r:=\operatorname{order}(A)\geqslant 1\) and \(\chi_{A}\) has distinct zeros \(c_{1},\ldots,c_{r}\in\mathbb{C}\) with \(\operatorname{Re}c_{1}\geqslant\cdots\geqslant\operatorname{Re}c_{r}\). Then there is a unique splitting \((a_{1},\ldots,a_{r})\) of \(A\) over \(K\) such that \(a_{1}-c_{1},\ldots,a_{r}-c_{r}\prec 1\). If in addition \(\operatorname{Re}c_{1}>\cdots>\operatorname{Re}c_{r}\), then for this splitting of \(A\) over \(K\) we have \(a_{1},\ldots,a_{r}\in\mathcal{O}_{K_{0}}\)._
Proof.: The first claim holds by Corollary 2.3.40. Suppose \(\operatorname{Re}c_{1}>\cdots>\operatorname{Re}c_{r}\); it remains to show \(a_{1},\ldots,a_{r}\in\mathcal{O}_{K_{0}}\). For this we proceed by induction on \(r\). The case \(r=1\) being obvious, suppose \(r>1\). Let \(y_{1},\ldots,y_{r}\) be a Hahn basis of \(V\) with \(c_{j}=-\lambda(y_{j})+\mu(y_{j})\mathrm{i}\) for \(j=1,\ldots,r\) as in Proposition 7.4.21. Lemma 5.5.21
yields \(\theta_{j}\in H\) with \(y_{j}=|y_{j}|\,\mathrm{e}^{\theta_{j}\mathrm{i}}\) and \(|y_{j}|\in H^{>}\), for \(j=1,\ldots,r\); these \(\theta_{j}\) might be different from the \(\phi_{j}\) of Proposition 7.4.21. Let now \(F\) be any d-maximal Hardy field extension of \(H_{0}\); we claim that \(|y_{r}|,\theta_{r}\in F\). To see this, use Lemma 5.5.21 and Proposition 7.4.21 applied to \(F\) in place of \(H\) to get \(f\in F^{>}\), \(\theta\in F\) such that \(y:=f\,\mathrm{e}^{\theta\mathrm{i}}\) satisfies \(A(y)=0\) and \(c_{r}-y^{\dagger}\prec 1\). Then \(y\in V\cap\mathcal{C}[\mathrm{i}]^{\times}\). Take \(d_{1},\ldots,d_{r}\in\mathbb{C}\) such that \(y=d_{1}y_{1}+\cdots+d_{r}y_{r}\). Lemma 5.10.51 applied to the \(d_{j}y_{j}\) with \(d_{j}\neq 0\) in place of \(f_{1},\ldots,f_{n}\), and with \(c_{r}\) in the role of \(c\), yields \(i\in\{1,\ldots,r\}\) such that \(d_{i}\neq 0\), \(c_{i}=c_{r}\), and \(\operatorname{Re}c_{j}\leqslant\operatorname{Re}c_{r}\) for all \(j\) with \(d_{j}\neq 0\). Hence \(i=r\) is the only such \(j\), and thus \(y=d_{r}y_{r}\). This yields \(|y_{r}|\in\mathbb{R}^{>}f\subseteq F^{>}\) and \(\theta_{r}\in\theta+\mathbb{R}\subseteq F\) by the uniqueness part of Lemma 5.5.20, as claimed. This claim and d-perfectness of \(H_{0}\) now give \(|y_{r}|,\theta_{r}\in H_{0}\), hence \(y_{r}^{\dagger}=|y_{r}|^{\dagger}+\theta_{r}^{\prime}\mathrm{i}\in K_{0}\). By [ADH, 5.1.21] we get \(A=B(\partial-y_{r}^{\dagger})\) where \(B\in K_{0}[\partial]\) is monic, and by [ADH, 5.6.3] we have \(B\in\mathcal{O}_{K_{0}}[\partial]\) with \(\chi_{A}=\chi_{B}\cdot(Y-c_{r})\), hence the zeros of \(B\) are \(c_{1},\ldots,c_{r-1}\). Now apply the inductive hypothesis to \(B\).
_Example_.: The linear differential operator
\[\partial^{3}-(1-{\rm e}^{-\,{\rm e}^{x}}){\rm i}\partial^{2}-(1+i+x^{-2}\log x ^{2})\partial+(\log\log x)^{-1/2}\in{\mathcal{O}}[\partial]\]
splits over \({\rm E}({\mathbb{Q}})[i]\). In fact, there is a unique splitting \((a_{1},a_{2},a_{3})\) of this linear differential operator over \(K\) with \(a_{1}-(1+{\rm i})\prec 1\), \(a_{2}\prec 1\), and \(a_{3}+1\prec 1\), and we have \(a_{1},a_{2},a_{3}\in{\rm E}({\mathbb{Q}})[i]\).
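_Illustration_. Under the same reading of \(\chi_{A}\) (ours): the coefficients have limits \(-\mathrm{i}\), \(-(1+\mathrm{i})\), and \(0\), so
\[\chi_{A}(Y)\ =\ Y^{3}-\mathrm{i}Y^{2}-(1+\mathrm{i})Y\ =\ Y\big(Y-(1+\mathrm{i})\big)(Y+1),\]
with zeros \(1+\mathrm{i}\), \(0\), \(-1\) of distinct real parts \(1>0>-1\). This matches \(a_{1}\sim 1+\mathrm{i}\), \(a_{2}\prec 1\), \(a_{3}\sim-1\), and the last part of Proposition 7.4.24, applied with \(H_{0}:=\mathrm{E}(\mathbb{Q})=\mathrm{D}(\mathbb{Q})\), accounts for \(a_{1},a_{2},a_{3}\in\mathrm{E}(\mathbb{Q})[\mathrm{i}]\).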
_Question_.: Can we drop the assumption \(\operatorname{Re}c_{1}>\cdots>\operatorname{Re}c_{r}\) in the last part of Proposition 7.4.24?
Next we derive consequences of Theorem 7.4.1 for matrix differential equations.
**Matrix differential equations.** _In this subsection \(H\) is \(\mathrm{d}\)-maximal. We take an \(\mathbb{R}\)-linear complement \(\Lambda_{H}\) of \(\mathrm{I}(H)\) in \(H\), set \(\Lambda:=\Lambda_{H}\mathrm{i}\), and identify \(\mathrm{U}=\mathrm{U}_{K}\) with \(K[\mathrm{e}^{H\mathrm{i}}]\) as usual. Let \(N\) be an \(n\times n\) matrix over \(K\), where \(n\geqslant 1\)._ Recall from [ADH, 5.5] the definition of _fundamental matrix_ for the matrix differential equation \(y^{\prime}=Ny\) over any differential ring extension of \(K\).
**Corollary 7.4.25**.: _There are \(M\in\operatorname{GL}_{n}(K)\) and \(\phi_{1},\ldots,\phi_{n}\in H\) with \(\phi_{1},\ldots,\phi_{n}\) apart such that for \(D:=\operatorname{diag}(\mathrm{e}^{\phi_{1}\mathrm{i}},\ldots,\mathrm{e}^{\phi_{n}\mathrm{i}})\), the \(n\times n\) matrix \(MD\) over \(K[\mathrm{e}^{H\mathrm{i}}]\) is a fundamental matrix for \(y^{\prime}=Ny\). Moreover, for any such \(M\) and \(\phi_{1},\ldots,\phi_{n}\), setting \(\alpha_{j}:=\phi_{j}^{\prime}\mathrm{i}+K^{\dagger}\) for \(j=1,\ldots,n\), the spectrum of \(y^{\prime}=Ny\) is \(\{\alpha_{1},\ldots,\alpha_{n}\}\), and for all \(\alpha\in K/K^{\dagger}\),_
\[{\rm mult}_{\alpha}(N)\ =\ |\{j\in\{1,\ldots,n\}:\alpha_{j}=\alpha\}|.\]
Proof.: The hypothesis of [ADH, 5.5.14] is satisfied for \(R:=\mathrm{U}=K[\mathrm{e}^{H\mathrm{i}}]\). To see why, let \(L\in K[\partial]\) be monic of order \(n\). Then Theorem 7.4.1 and a subsequent remark provide \(f_{1},\ldots,f_{n}\in K^{\times}\) and \(\phi_{1},\ldots,\phi_{n}\in H\) such that \(\phi_{1},\ldots,\phi_{n}\) are apart and \(f_{1}\,\mathrm{e}^{\phi_{1}\mathrm{i}},\ldots,f_{n}\,\mathrm{e}^{\phi_{n}\mathrm{i}}\) is a basis of \(\ker_{{\mathcal{C}}^{<\infty}[\mathrm{i}]}L=\ker_{\Omega}L\), where \(\Omega:=\operatorname{Frac}\mathrm{U}\). Hence \(W:=\operatorname{Wr}\bigl(f_{1}\,\mathrm{e}^{\phi_{1}\mathrm{i}},\ldots,f_{n}\,\mathrm{e}^{\phi_{n}\mathrm{i}}\bigr)\in\operatorname{GL}_{n}(\Omega)\). Also \(\det W\in\mathrm{e}^{(\phi_{1}+\cdots+\phi_{n})\mathrm{i}}\,K^{\times}\subseteq\mathrm{U}^{\times}\), hence \(W\in\operatorname{GL}_{n}(\mathrm{U})\). Thus by the remarks preceding [ADH, 5.5.14]: \(W\) is a fundamental matrix for \(y^{\prime}=A_{L}y\) with \(\mathrm{U}\) as the ambient differential ring. Note that \(W=QD\) where \(D=\operatorname{diag}(\mathrm{e}^{\phi_{1}\mathrm{i}},\ldots,\mathrm{e}^{\phi_{n}\mathrm{i}})\) and \(Q\in\operatorname{GL}_{n}(K)\).
We now follow the proof of [ADH, 5.5.14]: take monic \(L\in K[\partial]\) of order \(n\) such that \(y^{\prime}=Ny\) is equivalent to \(y^{\prime}=A_{L}y\), and take \(P\in\operatorname{GL}_{n}(K)\) such that \(P\operatorname{sol}_{R}(A_{L})=\operatorname{sol}_{R}(N)\). With \(W\) the above fundamental matrix for \(y^{\prime}=A_{L}y\),
\(PW\in\operatorname{GL}_{n}\big{(}R\big{)}\) is then a fundamental matrix for \(y^{\prime}=Ny\). So \(M:=PQ\in\operatorname{GL}_{n}(K)\) gives \(MD=PW\) as a fundamental matrix for \(y^{\prime}=Ny\).
Let now any \(M\in\operatorname{GL}_{n}(K)\), \(\phi_{1},\dots,\phi_{n}\in H\), and \(D:=\operatorname{diag}(\operatorname{e}^{\phi_{1}\operatorname{i}},\dots, \operatorname{e}^{\phi_{n}\operatorname{i}})\) be given such that \(MD\) is a fundamental matrix for \(y^{\prime}=Ny\). Let \(f_{1},\dots,f_{n}\) be the successive columns of \(M\). Then \(\operatorname{e}^{\phi_{1}\operatorname{i}}f_{1},\dots,\operatorname{e}^{ \phi_{n}\operatorname{i}}f_{n}\) is a basis of the \(\mathbb{C}\)-linear space \(\operatorname{sol}_{\operatorname{U}}(N)\). The "moreover" part now follows from Lemma 5.10.24.
Recall from Corollary 5.2.46 that for \(f=(f_{1},\dots,f_{n})\in\mathcal{C}[\operatorname{i}]^{n}\),
\[\lambda(f)\ =\ \min\bigl{\{}\lambda(f_{1}),\dots,\lambda(f_{n})\bigr{\}}\ =\ \lambda\bigl{(}|f_{1}|+\dots+|f_{n}|\bigr{)}\in\mathbb{R}_{\pm\infty}.\]
If \(f\in\mathcal{C}^{1}[\operatorname{i}]\) and \(f\notin\mathcal{C}^{1}[\operatorname{i}]^{\times}\), we set \(\mu(f):=-\infty\). With this convention,
\[\mu(f)\ :=\ \max\bigl\{\mu(f_{1}),\dots,\mu(f_{n})\bigr\}\quad\text{ for }f=(f_{1},\dots,f_{n})\in\mathcal{C}^{1}[\operatorname{i}]^{n}.\]
We also turn \(K^{n}\) into a valued \(\mathbb{C}\)-linear space with valuation \(v\colon K^{n}\to\Gamma_{\infty}\) given by \(v(f):=\min\bigl{\{}v(f_{1}),\dots,v(f_{n})\bigr{\}}\) for \(f=(f_{1},\dots,f_{n})\in K^{n}\).
**Corollary 7.4.26**.: _We can choose \(M\), \(D\) as in Corollary 7.4.25 such that the successive columns \(f_{1},\dots,f_{n}\) of \(M\) have the property that for \(k=1,\dots,n\) the \(f_{j}\) with \(\phi_{j}=\phi_{k}\) are valuation-independent. For any such \(M,D\), the matrix \(MD\) is a Lyapunov fundamental matrix for \(y^{\prime}=Ny\)._
Proof.: Take \(M\), \(\phi_{1},\dots,\phi_{n}\) as in Corollary 7.4.25 such that \(\phi_{1},\dots,\phi_{m}\) are distinct, \(m\leqslant n\), and each \(\phi_{j}\) with \(m<j\leqslant n\) is equal to some \(\phi_{k}\) with \(1\leqslant k\leqslant m\). For \(V:=\operatorname{sol}_{\operatorname{U}}(N)\) this yields an internal direct sum decomposition
\[V\ =\ \operatorname{e}^{\phi_{1}\operatorname{i}}V_{1}\oplus\dots\oplus \operatorname{e}^{\phi_{m}\operatorname{i}}V_{m}\]
into \(\mathbb{C}\)-linear subspaces of \(V\). Now [ADH, remark before 2.3.10] yields for \(k=1,\dots,m\) a valuation basis of \(V_{k}\). Modifying \(M\) accordingly, this yields \(M,D\) with the desired property. The rest follows from Corollary 5.10.44.
If the matrix \(N\) is bounded, then the solutions of \(y^{\prime}=Ny\) grow only moderately, by Lemma 5.2.47; their oscillation is also moderate:
**Lemma 7.4.27**.: _Suppose \(N\) is bounded. Let \(M\in\operatorname{GL}_{n}(K)\) and \(\phi_{1},\dots,\phi_{n}\in H\) be such that for \(D:=\operatorname{diag}(\operatorname{e}^{\phi_{1}\operatorname{i}},\dots, \operatorname{e}^{\phi_{n}\operatorname{i}})\) the \(n\times n\) matrix \(MD\) over \(K[\operatorname{e}^{H\operatorname{i}}]\) is a fundamental matrix for \(y^{\prime}=Ny\). Then \(\phi_{1},\dots,\phi_{n}\preccurlyeq x\)._
Proof.: Corollary 7.4.25 yields \(\Sigma(N)=\{\phi_{1}^{\prime}\mathrm{i}+K^{\dagger},\dots,\phi_{n}^{\prime}\mathrm{i}+K^{\dagger}\}\). The differential module over \(K\) associated to \(N\) (cf. [ADH, p. 277]) is bounded, by Example 2.3.32(1), hence each \(\alpha\in\Sigma(N)\) has the form \(a+K^{\dagger}\) with \(a\in\mathcal{O}\), by Corollary 2.3.48. Together with Lemma 2.3.38 this yields \(\phi_{1},\dots,\phi_{n}\preccurlyeq x\).
**Corollary 7.4.28**.: _Suppose \(N\) is bounded. Then \(\operatorname{sol}_{\operatorname{U}}(N)\subseteq(\operatorname{U}^{\preceq})^{n}\)._
Proof.: Take \(M\) and \(D\) as in Lemma 7.4.27. Since the columns of \(MD\) span \(\operatorname{sol}_{\operatorname{U}}(N)\) over \(\mathbb{C}\), it suffices to show that the entries of \(MD\) are in \(\operatorname{U}^{\preceq}\).
**Corollary 7.4.29**.: _Suppose \(N\preccurlyeq\ell^{\prime}\), \(\ell\in H^{>\mathbb{R}}\). Let \(M\in\operatorname{GL}_{n}(K)\), \(\phi_{1},\ldots,\phi_{n}\in H\), and \(D:=\operatorname{diag}(\operatorname{e}^{\phi_{1}\operatorname{i}},\ldots, \operatorname{e}^{\phi_{n}\operatorname{i}})\) be such that \(MD\) is a fundamental matrix for \(y^{\prime}=Ny\) over \(K[\operatorname{e}^{Hi}]\). Then \(\phi_{1},\ldots,\phi_{n}\preccurlyeq\ell\), and there exists \(m\geqslant 1\) such that \(f\preccurlyeq\operatorname{e}^{m\ell}\) and \(f\not\preccurlyeq\operatorname{e}^{-m\ell}\), for each column \(f\) of \(M\)._
Proof.: Put \(\phi:=\ell^{\prime}\) and use the superscript \(\circ\) as in Section 6.4. Then for \(R:=\mathcal{C}^{<\infty}[\operatorname{i}]\) and any fundamental matrix \(F\in\operatorname{GL}_{n}(R)\) for \(y^{\prime}=Ny\), the matrix \(F^{\circ}\in\operatorname{GL}_{n}(R)\) is a fundamental matrix for \(z^{\prime}=(\phi^{-1}N)^{\circ}z\). As \(H^{\circ}\) is d-maximal and \((\phi^{-1}N)^{\circ}\) is bounded, we can apply Lemmas 7.4.27 and 5.2.47 (and a remark following Corollary 5.2.45) to \(M^{\circ}\) and \(D^{\circ}\) in the role of \(M\) and \(D\), and convert this back to information about \(M\) and \(D\) as claimed.
In the next lemma and its corollary we assume \(N\) is bounded and \(M\), \(D\) are as in Lemma 7.4.27. Let \(\operatorname{st}(N)\) (the _standard part_ of \(N\)) be the \(n\times n\) matrix over \(\mathbb{C}\) such that \(N-\operatorname{st}(N)\prec 1\). For \(f\in K^{n}\), put \(|f|:=\max\bigl{\{}|f_{1}|,\ldots,|f_{n}|\bigr{\}}\in H\), so \(vf=v|f|\).
**Lemma 7.4.30**.: _Let \(y=\operatorname{e}^{\phi\operatorname{i}}f\) where \(f=f_{k}\) is the \(k\)th column of \(M\) and \(\phi=\phi_{k}\), \(k\in\{1,\ldots,n\}\). Set \(s:=|f|^{-1}f\in\mathcal{O}^{n}\). Then \(\lambda:=\lambda(y)\in\mathbb{R}\), \(\mu:=\mu(y)\in\mathbb{R}\), and \(-\lambda+\mu\operatorname{i}\in\mathbb{C}\) is an eigenvalue of \(\operatorname{st}(N)\) with eigenvector \(\operatorname{st}(s)\in\mathbb{C}^{n}\)._
Proof.: Note that \(y\) is the \(k\)th column of \(MD\). From Lemma 5.2.47 we get \(\lambda\in\mathbb{R}\). Let \(g\) be a nonzero entry of \(f\). Then for the corresponding entry \(g\operatorname{e}^{\phi\operatorname{i}}\) of \(y\) we have \((g\operatorname{e}^{\phi\operatorname{i}})^{\dagger}=g^{\dagger}+\phi^{\prime}i\), so \(\operatorname{Im}\bigl{(}(g\operatorname{e}^{\phi\operatorname{i}})^{\dagger} \bigr{)}=\operatorname{Im}(g^{\dagger})+\phi^{\prime}\) with \(\operatorname{Im}(g^{\dagger})\prec 1\) by a remark preceding Lemma 1.2.16, and \(\phi^{\prime}\preccurlyeq 1\) by Lemma 7.4.27. Hence \(\mu(g\operatorname{e}^{\phi\operatorname{i}})=\lim_{t\to+\infty}\phi^{\prime}( t)\in\mathbb{R}\). This gives \(\mu=\lim_{t\to+\infty}\phi^{\prime}(t)\in\mathbb{R}\) and so \(\mu-\phi^{\prime}\prec 1\).
Next, \(y^{\prime}=Ny\) gives \(\phi^{\prime}\mathrm{i}f+f^{\prime}=Nf\), and then using also Corollary 5.10.47,
\[Nf\ =\ (-\lambda+\mu\operatorname{i})f+(\phi^{\prime}\operatorname{i}-\mu \operatorname{i})f+\lambda f+f^{\prime}\ =\ (-\lambda+\mu\operatorname{i})f+r,\quad r\in K^{n},\ r \prec f.\]
Dividing by \(|f|\in H^{\times}\) then yields the claim about \(-\lambda+\mu\operatorname{i}\) and \(s\).
The proof of Lemma 7.4.30 also gives the next corollary, where \(I_{n}\) denotes the \(n\times n\) identity matrix over \(K\), and \(\operatorname{mult}_{c}\bigl{(}\operatorname{st}(N)\bigr{)}:=\dim_{\mathbb{C}} \ker_{\mathbb{C}^{n}}\bigl{(}\operatorname{st}(N)-cI_{n}\bigr{)}\) for \(c\in\mathbb{C}\):
**Corollary 7.4.31**.: _For \(k=1,\ldots,n\), let \(f_{k}\) be the \(k\)th column of \(M\), so \(y_{k}:=f_{k}\operatorname{e}^{\phi_{k}\operatorname{i}}\) is the \(k\)th column of \(MD\), and put \(\lambda_{k}:=\lambda(y_{k})\), \(\mu_{k}:=\mu(y_{k})\), and \(c_{k}:=-\lambda_{k}+\mu_{k}\operatorname{i}\). If for a certain \(k\) the \(f_{j}\) with \(\mu_{j}=\mu_{k}\) are valuation-independent, then for this \(k\) we have_
\[\operatorname{mult}_{c_{k}}\bigl{(}\operatorname{st}(N)\bigr{)}\ \geqslant\ |\{j:( \lambda_{j},\mu_{j})=(\lambda_{k},\mu_{k})\}|.\]
_Question_. Suppose \(N\) is bounded and \(\operatorname{st}(N)\) is the \(n\times n\) matrix over \(\mathbb{C}\) such that \(N-\operatorname{st}(N)\prec 1\). By Perron [151, Satz 13] there is a Lyapunov fundamental matrix \(F\) for \(y^{\prime}=Ny\) such that for each \(\lambda\in\mathbb{R}\), the number of columns \(f\) of \(F\) with \(\lambda(f)=\lambda\) equals \(\sum_{\mu\in\mathbb{R}}\operatorname{mult}_{-\lambda+\mu\operatorname{i}} \bigl{(}\operatorname{st}(N)\bigr{)}\). Can one take here \(F\) of the form \(F=MD\) where \(M\in\operatorname{GL}_{n}(K)\) and \(D=\operatorname{diag}(\operatorname{e}^{\phi_{1}\operatorname{i}},\ldots, \operatorname{e}^{\phi_{n}\operatorname{i}})\) with \(\phi_{1},\ldots,\phi_{n}\in H\)?
Recall: a column vector \((y_{1},\ldots,y_{n})^{\mathrm{t}}\in\mathcal{C}[\operatorname{i}]^{n}\) is said to be _bounded_ if \(y_{1},\ldots,y_{n}\preccurlyeq 1\).
**Lemma 7.4.32**.: _Suppose \(y^{\prime}=Ny\) where \(y\in\mathcal{C}^{1}[\operatorname{i}]^{n}\) is bounded. Then_
\[y\ =\ \operatorname{e}^{\phi_{1}\operatorname{i}}z_{1}+\cdots+\operatorname{e}^{\phi_{m}\operatorname{i}}z_{m}\]
_where \(m\leqslant n\), \(\phi_{1},\ldots,\phi_{m}\in H\) are distinct and apart, \(z_{1},\ldots,z_{m}\in K^{n}\) are bounded, and \(\operatorname{e}^{\phi_{1}\operatorname{i}}z_{1},\ldots,\operatorname{e}^{\phi _{m}\operatorname{i}}z_{m}\in\operatorname{sol}_{\mathrm{U}}(N)\)._
Proof.: Let \(M\in\operatorname{GL}_{n}(K)\) and \(\phi_{1},\dots,\phi_{n}\in H\) be as in Corollary 7.4.25, in particular, \(\phi_{1},\dots,\phi_{n}\) are apart. For \(j=1,\dots,n\), let \(f_{j}\in K^{n}\) be the \(j\)th column of \(M\). Take \(c_{1},\dots,c_{n}\in\mathbb{C}\) such that \(y=c_{1}\operatorname{e}^{\phi_{1}\operatorname{i}}f_{1}+\dots+c_{n}\operatorname{e}^{\phi_{n}\operatorname{i}}f_{n}\). We arrange that \(\phi_{1},\dots,\phi_{m}\) are distinct, \(m\leqslant n\), and each \(\phi_{j}\) with \(m<j\leqslant n\) is equal to one of the \(\phi_{k}\) with \(1\leqslant k\leqslant m\). This gives \(y=\operatorname{e}^{\phi_{1}\operatorname{i}}z_{1}+\dots+\operatorname{e}^{\phi_{m}\operatorname{i}}z_{m}\) with \(z_{1},\dots,z_{m}\in K^{n}\) and \(\operatorname{e}^{\phi_{1}\operatorname{i}}z_{1},\dots,\operatorname{e}^{\phi_{m}\operatorname{i}}z_{m}\in\operatorname{sol}_{\operatorname{U}}(N)\). The \(\preccurlyeq\)-version of Corollary 5.10.18 with \(\mathfrak{m}=1\) then shows that \(z_{1},\dots,z_{m}\) are bounded.
See [18, Chapter 2] and [45, Chapter II, §3] for classical conditions on a matrix differential equation to have only bounded solutions.
Despite Corollary 7.4.25, the differential fraction field of \(K[\operatorname{e}^{H{\rm i}}]\) is not pv-closed, since it is not even algebraically closed; see [ADH, 5.1.31] and Lemma 2.1.2. Combining Corollary 7.2.10 and [ADH, 5.4.2] also yields:
**Corollary 7.4.33**.: _For every column \(b\in K^{n}\) the matrix differential equation \(y^{\prime}=Ny+b\) has a solution in \(K^{n}\)._
We also have a version of Corollary 7.4.15 for matrix differential equations:
**Corollary 7.4.34**.: _Suppose \(y^{\prime}=Ny\) is self-dual. If \(\alpha\) is an eigenvalue of \(y^{\prime}=Ny\), then so is \(-\alpha\), with the same multiplicity. If \(n\) is odd, then the matrix differential equation \(y^{\prime}=Ny\) has a solution \(y\neq 0\) in \(K^{n}\)._
This follows from Corollaries 2.4.36 and 7.4.25. Note that the hypothesis on \(N\) in Corollary 7.4.34 is satisfied if \(y^{\prime}=Ny\) is self-adjoint or hamiltonian.
If \(y^{\prime}=Ny\) is self-adjoint, and \(M\in\operatorname{GL}_{n}(K)\) and \(\phi_{1},\dots,\phi_{n}\in H\) (as in Corollary 7.4.25) are such that \(MD\) is a fundamental matrix for \(y^{\prime}=Ny\) where \(D:=\operatorname{diag}(\mathrm{e}^{\phi_{1}\mathrm{i}},\dots,\mathrm{e}^{\phi_{n}\mathrm{i}})\), then there exists \(U\in\operatorname{GL}_{n}(\mathbb{C})\) such that the fundamental matrix \(MDU\in\operatorname{GL}_{n}\left(K[\mathrm{e}^{H\mathrm{i}}]\right)\) of \(y^{\prime}=Ny\) is orthogonal as an element of \(\operatorname{GL}_{n}(\Omega)\), where \(\Omega\) is the differential fraction field of \(K[\mathrm{e}^{H\mathrm{i}}]\); likewise with "hamiltonian" and "symplectic" instead of "self-adjoint" and "orthogonal": Lemmas 2.4.39 and 2.4.40.
_Example_.: Any matrix differential equation \(y^{\prime}=Ny\) with
\[N=\begin{pmatrix}0&a&b\\ -a&0&c\\ -b&-c&0\end{pmatrix}\qquad(a,b,c\in K=H[{\rm i}])\]
has a nonzero solution \(y=(y_{1},y_{2},y_{3})^{\rm t}\in K^{3}\).
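For constant \(a,b,c\) such a solution can be written down explicitly: the vector \((c,-b,a)^{\rm t}\) lies in the kernel of \(N\). The following sympy sketch (an illustration added here, with constancy of \(a,b,c\) as an extra assumption not made in the example) verifies this:

```python
# Illustrative check (not part of the formal development): for *constant*
# a, b, c the skew-symmetric system y' = Ny has the constant solution
# y = (c, -b, a)^t, since then y' = 0 = N y.
import sympy as sp

a, b, c = sp.symbols('a b c')
N = sp.Matrix([[0, a, b], [-a, 0, c], [-b, -c, 0]])
y = sp.Matrix([c, -b, a])
print(sp.simplify(N * y))  # Matrix([[0], [0], [0]])
```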
In the self-dual case we can improve on Corollary 7.4.25:
**Corollary 7.4.35**.: _Suppose \(y^{\prime}=Ny\) is self-dual. Then there are \(M\in\operatorname{GL}_{n}(K)\) and \(\phi_{1},\dots,\phi_{n}\in H\) that are apart such that_
1. _for each_ \(j\in\{1,\dots,n\}\) _there is a_ \(k\in\{1,\dots,n\}\) _with_ \(\phi_{j}=-\phi_{k}\)_;_
2. _with_ \(D:=\operatorname{diag}(\operatorname{e}^{\phi_{1}i},\dots,\operatorname{e}^{ \phi_{n}i})\)_, the_ \(n\times n\) _matrix_ \(MD\) _over_ \(K[\operatorname{e}^{H{\rm i}}]\) _is a fundamental matrix for_ \(y^{\prime}=Ny\)_._
Proof.: Corollary 2.4.34 yields a matrix differential equation \(y^{\prime}=A_{L}y\) over \(K\), equivalent to \(y^{\prime}=Ny\), where \(L\in K[\partial]\) is monic self-dual of order \(n\). Then we can use Corollary 7.4.16 instead of Theorem 7.4.1 in the proof of Corollary 7.4.25.
Let \(\Omega\) be the differential fraction field of \(K[\operatorname{e}^{H{\rm i}}]\) and \(V:=\operatorname{sol}_{\Omega}(N)\subseteq K[\operatorname{e}^{H{\rm i}}]^{n}\), a \(\mathbb{C}\)-linear subspace of \(\Omega^{n}\). Then \(\dim_{\mathbb{C}}V=n\) and \(V=\operatorname{sol}_{\mathcal{C}^{<\infty}[{\rm i}]}(N)\). In the corollary
below we assume that \(y^{\prime}=Ny\) is self-adjoint, and we equip \(V\) with the symmetric bilinear form \(\langle\,\ \rangle\) defined after Lemma 2.4.38 (with \(\Omega\) instead of \(K\)).
**Corollary 7.4.36**.: _There are \(m\leqslant n\) and distinct \(\theta_{1},\ldots,\theta_{m}\) in \(H^{>\mathbb{R}}\) that are apart, subspaces \(V_{1},\ldots,V_{m},W\) of the \(\mathbb{C}\)-linear space \(V\) with \(W\subseteq K^{n}\), and for \(j=1,\ldots,m\), nonzero subspaces \(V_{j}^{+},V_{j}^{-}\) of \(K^{n}\), such that_
\[V_{j}\ =\ V_{j}^{+}\,\mathrm{e}^{\theta_{j}\mathrm{i}}\oplus V_{j}^{-}\,\mathrm{e}^{-\theta_{j}\mathrm{i}}\qquad\text{(internal direct sum of subspaces of $V_{j}$)},\] \[V\ =\ V_{1}\perp\cdots\perp V_{m}\perp W\qquad\text{(orthogonal sum with respect to $\langle\,\ \rangle$)}.\]
_For any such \(m\) and \(\theta_{j}\), \(V_{j}\), \(V_{j}^{+},V_{j}^{-}\), \(W\) we have \(\dim_{\mathbb{C}}V_{j}^{+}=\dim_{\mathbb{C}}V_{j}^{-}\) and \(\langle\,\ \rangle\) restricts to a null form on \(V_{j}^{+}\,\mathrm{e}^{\theta_{j}\mathrm{i}}\) and on \(V_{j}^{-}\,\mathrm{e}^{-\theta_{j}\mathrm{i}}\)._
Proof.: Take \(M\) and \(\phi_{1},\ldots,\phi_{n}\) as in Corollary 7.4.35, and set
\[D\ :=\ \mathrm{diag}(\mathrm{e}^{\phi_{1}\mathrm{i}},\ldots,\mathrm{e}^{\phi_{n}\mathrm{i}}).\]
Let \((f_{1},\ldots,f_{n})^{\mathrm{t}}\) be the \(j\)th column of \(M\) and \((g_{1},\ldots,g_{n})^{\mathrm{t}}\) be the \(k\)th column of \(M\). The \(j\)th column of \(MD\) is \(f=(f_{1}\,\mathrm{e}^{\phi_{j}\mathrm{i}},\ldots,f_{n}\,\mathrm{e}^{\phi_{j}\mathrm{i}})^{\mathrm{t}}\), and the \(k\)th column of \(MD\) is \(g=(g_{1}\,\mathrm{e}^{\phi_{k}\mathrm{i}},\ldots,g_{n}\,\mathrm{e}^{\phi_{k}\mathrm{i}})^{\mathrm{t}}\), and \(f,g\in\mathrm{sol}_{\Omega}(N)\) by Corollary 7.4.35(ii). Thus by Lemma 2.4.38 applied to \(\Omega\) in place of \(K\) we have
\[\langle f,g\rangle\ =\ (f_{1}g_{1}+\cdots+f_{n}g_{n})\,\mathrm{e}^{(\phi_{j}+\phi_{k})\mathrm{i}}\in\mathbb{C}.\]
Corollary 7.4.35(i) gives \(l\in\{1,\ldots,n\}\) with \(\phi_{k}=-\phi_{l}\); then \(\phi_{j}+\phi_{k}=\phi_{j}-\phi_{l}\). Hence if \(\phi_{j}\neq-\phi_{k}\), then \(\phi_{j}+\phi_{k}\succ 1\) by \(\phi_{j}\), \(\phi_{l}\) being apart, so \(\mathrm{e}^{(\phi_{j}+\phi_{k})\mathrm{i}}\notin K\) by Corollary 5.5.23 and thus \(\langle f,g\rangle=0\). Taking \(\theta_{1},\ldots,\theta_{m}\) to be the distinct positive elements of \(\{\phi_{1},\ldots,\phi_{n}\}\), this yields the existence statement. The rest follows from Corollary 2.4.36, Lemma 5.10.24, and again Corollary 5.5.23.
_Remark_.: Suppose \(y^{\prime}=Ny\) is hamiltonian. Then Corollary 7.4.36 remains true with \(\langle\,\ \rangle\) replaced by the alternating bilinear form \(\omega\) on \(V\) of Lemma 2.4.41. (Same proof, using Lemma 2.4.41 instead of Lemma 2.4.38.)
The complex conjugation automorphism of the differential ring \(\mathcal{C}^{<\infty}[\mathrm{i}]\) restricts to an automorphism of the differential integral domain \(\mathrm{U}=K[\mathrm{e}^{H\mathrm{i}}]\), which in turn extends uniquely to an automorphism \(g\mapsto\overline{g}\) of the differential field \(\Omega\), with \(\overline{\overline{g}}=g\) for all \(g\in\Omega\). Let \(\Omega_{\mathrm{r}}\) be the fixed field of this automorphism of \(\Omega\). Then \(\Omega_{\mathrm{r}}\) is a differential subfield of \(\Omega\), and \(\Omega=\Omega_{\mathrm{r}}[\mathrm{i}]\). Set \(\mathrm{U}_{\mathrm{r}}:=\Omega_{\mathrm{r}}\cap\mathrm{U}\). Then
\[\Omega_{\mathrm{r}}\ =\ \mathrm{Frac}(\mathrm{U}_{\mathrm{r}})\ \ \mathrm{inside}\ \Omega, \qquad\mathrm{U}_{\mathrm{r}}\ =\ \mathrm{U}\cap\ \mathcal{C}^{<\infty}\ \ \mathrm{inside}\ \mathcal{C}^{<\infty}[i], \quad\mathrm{U}\ =\ \mathrm{U}_{\mathrm{r}}[i].\]
_Assume in the rest of this subsection that \(y^{\prime}=Ny\) is anti-self-adjoint, and equip the \(\mathbb{C}\)-linear space \(V=\mathrm{sol}_{\Omega}(N)\) with the positive definite hermitian form \(\langle\,\ \rangle\) introduced after Lemma 2.4.45, with \(\Omega\) in the role of \(K\)._ Then we have the following analogue of Corollary 7.4.36:
**Corollary 7.4.37**.: _There are \(m\in\{1,\ldots,n\}\), distinct \(\theta_{1},\ldots,\theta_{m}\in H\) that are apart, and nonzero \(\mathbb{C}\)-linear subspaces \(V_{1},\ldots,V_{m}\) of \(K^{n}\) such that \(V\) is the following orthogonal sum with respect to \(\langle\,\ \rangle\):_
\[V\ =\ V_{1}\,\mathrm{e}^{\theta_{1}\mathrm{i}}\perp\cdots\perp V_{m}\,\mathrm{e}^{\theta_{m}\mathrm{i}}.\]
Proof.: Corollary 7.4.25 gives \(M\in\mathrm{GL}_{n}(K)\) and \(\phi_{1},\ldots,\phi_{n}\in H\) that are apart such that \(MD\) is a fundamental matrix for \(y^{\prime}=Ny\) where \(D:=\mathrm{diag}\big(\mathrm{e}^{\phi_{1}\mathrm{i}},\ldots,\mathrm{e}^{\phi_{n}\mathrm{i}}\big)\). For \(j,k=1,\ldots,n\), let \(f=(f_{1}\,\mathrm{e}^{\phi_{j}\mathrm{i}},\ldots,f_{n}\,\mathrm{e}^{\phi_{j}\mathrm{i}})^{\mathrm{t}}\) be the \(j\)th column of \(MD\) and
\((g_{1}\,\mathrm{e}^{\phi_{k}\mathrm{i}},\ldots,g_{n}\,\mathrm{e}^{\phi_{k}\mathrm{i}})^{\mathrm{t}}\) the \(k\)th column of \(MD\), where \(f_{1},\ldots,f_{n}\) and \(g_{1},\ldots,g_{n}\) are in \(K\). Then by Lemma 2.4.45,
\[\langle f,g\rangle\ =\ (f_{1}\overline{g_{1}}+\cdots+f_{n}\overline{g_{n}})\,\mathrm{e}^{(\phi_{j}-\phi_{k})\mathrm{i}}\in\mathbb{C}\]
and hence \(\langle f,g\rangle=0\) if \(\phi_{j}\neq\phi_{k}\), by Corollary 5.5.23. Taking \(\theta_{1},\ldots,\theta_{m}\) to be the distinct elements of \(\{\phi_{1},\ldots,\phi_{n}\}\), this yields the desired result.
Corollary 7.4.37 and [122, Chapter XV, Corollary 5.2] yield \(M\in\operatorname{GL}_{n}(K)\) and \(\phi_{1},\ldots,\phi_{n}\in H\) that are apart, such that \(MD\) with \(D:=\operatorname{diag}\big(\mathrm{e}^{\phi_{1}\mathrm{i}},\ldots,\mathrm{e}^{\phi_{n}\mathrm{i}}\big)\) is a fundamental matrix for \(y^{\prime}=Ny\) which is unitary as an element of \(\operatorname{GL}_{n}(\Omega)\).

_Example_.: Let \(S\) be a hermitian \(n\times n\) matrix over \(K\). Then the matrix differential equation \(y^{\prime}=-\mathrm{i}Sy\) (a Schrödinger equation) is anti-self-adjoint.

**Lemma 7.4.39**.: _There is a quantifier-free \(\mathcal{L}^{\iota}_{\Lambda\Omega}\)-formula \(\beta(u,v)\) such that for every Hardy field \(H\) and \(n\times n\) matrix \(N\) over \(K=H[\mathrm{i}]\):_

\[\boldsymbol{H}\models\beta(\operatorname{Re}N,\operatorname{Im}N)\iff\text{every solution of }y^{\prime}=Ny\text{ in }\mathcal{C}^{1}[\mathrm{i}]^{n}\text{ is bounded.}\]
Proof.: Let \(N\) be an \(n\times n\) matrix over \(K\), \(\phi\in H\), \(z\in K^{n}\). Then \(\operatorname{e}^{\phi i}z\in K[\operatorname{e}^{H\!i}]^{n}\) is a solution of \(y^{\prime}=Ny\) iff \(z^{\prime}+\phi^{\prime}iz=Nz\). Moreover, for d-maximal \(H\) it follows from Corollary 7.4.25 that all solutions of \(y^{\prime}=Ny\) in \(\mathcal{C}^{1}[\operatorname{i}]^{n}\) are bounded iff all solutions \(\operatorname{e}^{\phi i}z\) with \(\phi\in H\) and \(z\in K^{n}\) are bounded. Now use that d-maximal Hardy fields are \(H\)-closed and that the theory of \(H\)-closed \(H\)-fields admits quantifier elimination in the language \(\mathcal{L}^{\iota}_{\Lambda\Omega}\).
Using Lemma 7.4.32 we obtain in the same way:
**Lemma 7.4.40**.: _There is a quantifier-free \(\mathcal{L}^{\iota}_{\Lambda\Omega}\)-formula \(\gamma(u,v)\) such that for every Hardy field \(H\) and \(n\times n\) matrix \(N\) over \(K=H[\operatorname{i}]\):_
\[\boldsymbol{H}\models\gamma(\operatorname{Re}N,\operatorname{Im}N)\iff\text{ some nonzero solution of }y^{\prime}=Ny\text{ in }\mathcal{C}^{1}[\operatorname{i}]^{n}\text{ is bounded.}\]
_Example._ Let \(a,b\in H\), and take \(g,\phi\in\operatorname{Li}\big{(}H(\mathbb{R})\big{)}\) with \(g\neq 0\), \(g^{\dagger}=a\), and \(\phi^{\prime}=b\). Then \(\big{\{}y\in\mathcal{C}^{1}[\operatorname{i}]:\,y^{\prime}=(a+bi)y\big{\}}= \mathbb{C}g\operatorname{e}^{\phi i}\), and \(g\operatorname{e}^{\phi i}\asymp g\). Thus if \(H\) is Liouville closed, then by [ADH, 11.8.19]:
\[\text{ every }y\in\mathcal{C}^{1}[\operatorname{i}]\text{ with }y^{\prime}=(a+bi)y\text{ is bounded}\] \[\iff\text{ some }y\in\mathcal{C}^{1}[\operatorname{i}]^{\neq}\text{ with }y^{\prime}=(a+bi)y\text{ is bounded}\] \[\iff a\notin\Gamma(H)\] \[\iff a\leqslant 0\text{ or }a\in\operatorname{I}(H).\]
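To make the dichotomy concrete, here is a small sympy computation (our illustration, not from the text; we take three specific germs \(a\) over a Liouville closed \(H\supseteq\mathbb{R}(x)\), with the usual placement of \(1/x\) in \(\Gamma(H)\) and of \(1/x^{2}\) in \(\mathrm{I}(H)\)). Since \(|\mathrm{e}^{\phi\mathrm{i}}|=1\), the modulus of any solution is \(|C|\,\mathrm{e}^{\int a}\), so boundedness is governed by \(\int a\) alone:

```python
# Illustration (added here): |y| = |C| * exp(int a) for any solution of
# y' = (a + b*i)y, so boundedness depends only on an antiderivative of a.
import sympy as sp

x = sp.symbols('x', positive=True)
for a in (-1/x, 1/x, 1/x**2):
    F = sp.integrate(a, x)                      # an antiderivative of a
    print(a, '->', sp.limit(sp.exp(F), x, sp.oo))
# -1/x    -> 0    (a <= 0: all solutions bounded)
#  1/x    -> oo   (a in Gamma(H): unbounded solutions)
#  1/x**2 -> 1    (a > 0 but a in I(H): all solutions bounded)
```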
**The real case.**_In this subsection we assume \(A\in H[\partial]\)._ Recall that if \(H\) is d-maximal, then \(K\) is linearly closed, so [ADH, 5.1.35] yields the following, which includes Corollary 9 from the introduction:
**Corollary 7.4.41**.: _If \(H\) is \(\operatorname{d}\)-maximal, then \(A\) is a product of irreducible operators in \(H[\partial]\) which are monic of order \(1\) or monic of order \(2\)._
The next result follows from Corollaries 5.5.19 and 5.10.34, and is a version of Theorem 7.4.1 in the case of a real operator:
**Corollary 7.4.42**.: _Let \(E\) be a \(\operatorname{d}\)-maximal Hardy field extension of \(H\). Then the \(\mathbb{C}\)-linear space \(V:=\ker_{\mathcal{C}^{<\infty}[\operatorname{i}]}A\) of zeros of \(A\) in \(\mathcal{C}^{<\infty}[\operatorname{i}]\) has a basis_
\[g_{1}\operatorname{e}^{\phi_{1}\operatorname{i}},\,g_{1}\operatorname{e}^{- \phi_{1}\operatorname{i}},\,\,\,\dots,\,\,g_{m}\operatorname{e}^{\phi_{m} \operatorname{i}},\,g_{m}\operatorname{e}^{-\phi_{m}\operatorname{i}},\,\,h_{1},\,\,\dots,\,\,h_{n}\qquad(2m+n=r),\]
_where \(g_{j},\phi_{j}\in E^{>}\) with \(\phi_{j}\succ 1\)\((j=1,\dots,m)\) and \(h_{k}\in E^{\times}\)\((k=1,\dots,n)\). For any such basis of \(V\), the \(\mathbb{R}\)-linear space \(V\cap\mathcal{C}^{<\infty}\) of zeros of \(A\) in \(\mathcal{C}^{<\infty}\) has basis_
\[g_{1}\cos\phi_{1},\,g_{1}\sin\phi_{1},\,\,\dots,\,\,g_{m}\cos\phi_{m},\,g_{m} \sin\phi_{m},\,\,h_{1},\,\,\dots,\,\,h_{n},\]
_and the \(\mathbb{R}\)-linear space \(V\cap E\) has basis \(h_{1},\dots,h_{n}\)._
_Remarks._ Let \(E\) be a d-maximal Hardy field extension of \(H\). The quantity \(n=\dim_{\mathbb{R}}\ker_{E}A\) in Corollary 7.4.42 (and hence also \(m=(r-n)/2\)) is independent of the choice of \(E\), by Theorem 7.1.3. Likewise, the number of distinct eigenvalues of \(A\) with respect to \(E[\operatorname{i}]\) does not depend on \(E\). In more detail, the tuple \((d,\mu_{1},\dots,\mu_{d})\) where \(d\) is the number of distinct eigenvalues of \(A\) and \(\mu_{1}\geqslant\dots\geqslant\mu_{d}\geqslant 1\) are their multiplicities, with respect to \(E[\operatorname{i}]\), does not depend on \(E\).
Corollary 7.4.42 yields:
**Corollary 7.4.43**.: _If \(r\) is odd, then \(A(y)=0\) for some \(H\)-hardian germ \(y\neq 0\)._
From Corollary 5.10.36 we obtain:
**Corollary 7.4.44**.: _Suppose \(H\) is \(\mathrm{d}\)-maximal, and let \(\phi>\mathbb{R}\) be an element of \(H\) such that \(\phi^{\prime}\mathrm{i}+K^{\dagger}\) is not an eigenvalue of \(A\). Then for every \(h\in H\) there are unique \(f,g\in H\) such that \(A(f\cos\phi+g\sin\phi)=h\cos\phi\)._
Taking \(A=\partial\), with \(K^{\dagger}\) as the only eigenvalue (Example 2.3.1), we recover the following result due to Shackell [186, Theorem 2]; his proof is based on [35].
**Corollary 7.4.45**.: _Let \(h,\phi\in H\) and \(\phi>\mathbb{R}\). The germ \(h\cos\phi\in\mathcal{C}^{<\infty}\) has an antiderivative \(f\cos\phi+g\sin\phi\in\mathcal{C}^{<\infty}\) with \(f\), \(g\) in a Hardy field extension of \(H\), and any Hardy field extension of \(H\) contains at most one such pair \((f,g)\)._
Besides Corollary 7.4.44 we use here that by a remark preceding Lemma 1.2.16 we have \(\phi^{\prime}\mathrm{i}\notin K^{\dagger}\) for \(\phi\in H\) with \(\phi>\mathbb{R}\).
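For the reader's convenience we record the elementary computation behind Corollaries 7.4.44 and 7.4.45 (a routine expansion added here, not part of the original argument): for \(f,g,\phi\) in a Hardy field,

\[(f\cos\phi+g\sin\phi)^{\prime}\ =\ (f^{\prime}+g\phi^{\prime})\cos\phi+(g^{\prime}-f\phi^{\prime})\sin\phi,\]

so, comparing coefficients of \(\cos\phi\) and \(\sin\phi\) (legitimate for \(\phi>\mathbb{R}\)), the condition \((f\cos\phi+g\sin\phi)^{\prime}=h\cos\phi\) amounts to the system \(f^{\prime}+g\phi^{\prime}=h\), \(g^{\prime}=f\phi^{\prime}\).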
We also record a real version of Corollary 7.4.25, which follows from Corollary 7.4.42 in the same way that Corollary 7.4.25 followed from Theorem 7.4.1. Let \(I_{m}\) denote the \(m\times m\) identity matrix. Recall that \(\mathrm{U}_{\mathrm{r}}=K[\mathrm{e}^{H\mathrm{i}}]\cap\mathcal{C}^{<\infty}\).
**Corollary 7.4.46**.: _Suppose \(H\) is \(\mathrm{d}\)-maximal and \(N\) is an \(n\times n\) matrix over \(H\), \(n\geqslant 1\). Then there are \(M\in\mathrm{GL}_{n}(H)\) as well as \(k,l\in\mathbb{N}\) with \(2k+l=n\) and_
\[D=\begin{pmatrix}D_{1}&&&\\ &\ddots&&\\ &&D_{k}&\\ &&&I_{l}\end{pmatrix}\quad\text{where }D_{j}=\begin{pmatrix}\cos\phi_{j}&\sin\phi_{j}\\ -\sin\phi_{j}&\cos\phi_{j}\end{pmatrix}\text{, }\phi_{j}\in H\text{, }\phi_{j}>\mathbb{R}\]
_such that the \(n\times n\) matrix \(MD\) is a fundamental matrix for \(y^{\prime}=Ny\) with respect to \(\mathrm{U}_{\mathrm{r}}\). In particular, if \(n\) is odd, then \(y^{\prime}=Ny\) for some \(0\neq y\in H^{n}\)._
Proof.: Let \(R\) and \(\Omega\) be as in the proof of Corollary 7.4.25, and let \(L\in H[\partial]\) be monic of order \(n\). Then Corollary 7.4.42 yields \(g_{1},\dots,g_{k},h_{1},\dots,h_{l}\) and \(\phi_{1},\dots,\phi_{k}>\mathbb{R}\) in \(H\), where \(2k+l=n\), such that the \(\mathbb{R}\)-linear space \(\ker_{\mathcal{C}^{<\infty}}L\) has basis
\[g_{1}\cos\phi_{1}\text{, }g_{1}\sin\phi_{1}\text{, }\dots\text{, }g_{k}\cos\phi_{k} \text{, }g_{k}\sin\phi_{k}\text{, }h_{1}\text{, }\dots\text{, }h_{l}.\]
This is also a basis of the \(\mathbb{C}\)-linear space \(\ker_{\mathcal{C}^{<\infty}[i]}L=\ker_{\Omega}L\). Thus
\[W:=\mathrm{Wr}(g_{1}\cos\phi_{1},g_{1}\sin\phi_{1},\dots,g_{k}\cos\phi_{k},g_ {k}\sin\phi_{k},h_{1},\dots,h_{l})\in\mathrm{GL}_{n}(\Omega).\]
Note that \(\mathrm{U}_{\mathrm{r}}=R\cap\mathcal{C}^{<\infty}\). It is routine to verify that \(W=QD\) where \(Q\) is an \(n\times n\) matrix over \(H\) and \(D\) is the \(n\times n\) matrix over \(\mathrm{U}_{\mathrm{r}}\) displayed above. We have \(\det D=1\), hence
\[\det W=\det Q\in H\cap\Omega^{\times}=H^{\times}\subseteq\mathrm{U}_{\mathrm{ r}}^{\times}\]
and thus \(Q\in\mathrm{GL}_{n}(H)\) and \(W\in\mathrm{GL}_{n}(\mathrm{U}_{\mathrm{r}})\). So by the remarks before [ADH, 5.5.14], \(W\) is a fundamental matrix for \(y^{\prime}=A_{L}y\) with \(\mathrm{U}_{\mathrm{r}}\) as the ambient differential ring.
Now take monic \(L\in H[\partial]\) such that \(y^{\prime}=Ny\) is equivalent to \(y^{\prime}=A_{L}y\), with respect to the differential field \(H\). Then take \(P\in\mathrm{GL}_{n}(H)\) such that \(P\operatorname{sol}_{\mathrm{U}_{\mathrm{r}}}(A_{L})=\operatorname{sol}_{ \mathrm{U}_{\mathrm{r}}}(N)\), and let \(W\) be as above. Then \(PW\in\mathrm{GL}_{n}(\mathrm{U}_{\mathrm{r}})\) is a fundamental matrix for \(y^{\prime}=Ny\). With \(D\), \(Q\) as before such that \(W=QD\), and \(M:=PQ\in\mathrm{GL}_{n}(H)\), we have \(MD=PW\).
_Example_.: Let \(T\) be an \(n\times n\) matrix over \(H\), \(n\geqslant 1\), and suppose \(T\) is skew-symmetric, that is, \(T^{\mathrm{t}}=-T\). Then the purely imaginary matrix \(S:=-\mathrm{i}T\) is hermitian, giving the Schrödinger equation \(y^{\prime}=Ty\)\((=-\mathrm{i}Sy)\) as in the example after Corollary 7.4.37.
If \(n\) is odd, then this equation has a solution \(y\in E^{n}\) with \(y_{1}^{2}+\dots+y_{n}^{2}=1\) for some Hardy field extension \(E\) of \(H\); such \(y\) exhibits no oscillatory behavior and hence is a "degenerate" wave.
We don't know whether in Corollary 7.4.46 for \(n\geqslant 4\) we can choose \(\phi_{1},\dots,\phi_{k}\) to be apart. In the next corollary we let \(\langle\,\ \rangle:\Omega_{\mathrm{r}}^{n}\times\Omega_{\mathrm{r}}^{n} \to\Omega_{\mathrm{r}}\) denote the usual symmetric bilinear form on \(\Omega_{\mathrm{r}}^{n}\), \(n\geqslant 1\), where \(\Omega_{\mathrm{r}}=\mathrm{Frac}(\mathrm{U}_{\mathrm{r}})\).
**Corollary 7.4.47**.: _Suppose \(H\) is \(\mathrm{d}\)-maximal, \(y^{\prime}=Ny\) is self-adjoint, and \(M\), \(k\), \(l\), and \(D\) are as in Corollary 7.4.46. Then \(\langle f,f\rangle\in\mathbb{R}^{>}\) for each column \(f\) of \(MD\). Let \(f_{1},g_{1},\dots,f_{k},g_{k},h_{1},\dots,h_{l}\in\mathrm{U}_{\mathrm{r}}^{n}\) be the \(1\)st, \(2\)nd,..., \(n\)th column of \(MD\). Then for \(i=1,\dots,k\), \(j=1,\dots,l\) we have \(\langle f_{i},g_{i}\rangle=\langle f_{i},h_{j}\rangle=\langle g_{i},h_{j} \rangle=0\)._
Proof.: For any columns \(f,g\) of \(MD\) we have \(\langle f,g\rangle\in\mathbb{R}\), by Lemma 2.4.38. This proves the first claim. Let \(i\in\{1,\dots,k\}\) and set \(\phi:=\phi_{i}\). Then \(f_{i}=f\cos\phi-g\sin\phi\), \(g_{i}=f\sin\phi+g\cos\phi\) where \(f,g\in H^{n}\). Hence
\[\langle f_{i},g_{i}\rangle\ =\ (\langle f,f\rangle-\langle g,g\rangle)\cos\phi\sin \phi+\langle f,g\rangle(\cos^{2}\phi-\sin^{2}\phi)\in\mathbb{R}.\]
Lemma 5.10.17 gives \(\mathrm{e}^{2\phi\mathrm{i}}=\theta\,\mathrm{e}(\lambda)\), \(\mathrm{e}^{-2\phi\mathrm{i}}=\theta^{-1}\,\mathrm{e}(-\lambda)\) with \(\theta\in K^{\times}\), \(\lambda\in\Lambda^{\neq}\), so the elements \(1\), \(\mathrm{e}^{2\phi\mathrm{i}}\), \(\mathrm{e}^{-2\phi\mathrm{i}}\) of \(K[\mathrm{e}^{H\mathrm{i}}]\) are \(K\)-linearly independent. In view of
\[\cos\phi\sin\phi\ =\ \tfrac{1}{4\mathrm{i}}(\mathrm{e}^{2\phi\mathrm{i}}-\mathrm{e}^{-2\phi\mathrm{i}}),\qquad\cos^{2}\phi-\sin^{2}\phi\ =\ \tfrac{1}{2}(\mathrm{e}^{2\phi\mathrm{i}}+\mathrm{e}^{-2\phi\mathrm{i}}),\]
this yields \(\langle f_{i},g_{i}\rangle=0\). For \(j\in\{1,\dots,l\}\) we have
\[\langle f_{i},h_{j}\rangle=\langle f,h_{j}\rangle\cos\phi-\langle g,h_{j} \rangle\sin\phi\in\mathbb{R},\quad\langle g_{i},h_{j}\rangle=\langle f,h_{j} \rangle\sin\phi+\langle g,h_{j}\rangle\cos\phi\in\mathbb{R},\]
and we obtain likewise \(\langle f_{i},h_{j}\rangle=\langle g_{i},h_{j}\rangle=0\).
Corollary 7.4.42 holds for \(r=1\) with the assumption "\(E\) is \(\mathrm{d}\)-maximal" weakened to "\(E\) is Liouville closed". The next section has more about the case \(r=2\). The next lemma shows that Corollary 7.4.42 fails for \(r=3\) with the hypothesis "\(E\) is \(\mathrm{d}\)-maximal" replaced by "\(E\) is perfect".
**Lemma 7.4.48**.: _Suppose \(H=\mathrm{E}(\mathbb{Q})\) and \(A=(\partial-2x)(\partial^{2}+1)\). Then with \(\mathrm{U}:=K[\mathrm{e}^{H\mathrm{i}}]\) we have \(\ker_{\mathrm{U}}A=\mathbb{C}\,\mathrm{e}^{-x\mathrm{i}}\oplus\mathbb{C}\,\mathrm{e}^{x\mathrm{i}}\)._
The proof is similar to that of Corollary 7.4.4, using \(\partial^{2}+1=(\partial-i)(\partial+i)\) in \(K[\partial]\) and the fact that \(y^{\prime\prime}+y\neq\mathrm{e}^{x^{2}}\) for all \(y\in K\).
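The inclusion \(\supseteq\) is a direct computation, since the right-hand factor \(\partial^{2}+1\) already annihilates \(\mathrm{e}^{\pm x\mathrm{i}}\); the following sympy sketch (our illustration) checks this:

```python
# Illustrative check (not from the text): A = (d - 2x)(d^2 + 1) annihilates
# e^{ix} and e^{-ix}, because the right-hand factor d^2 + 1 already does.
import sympy as sp

x = sp.symbols('x')

def A(u):
    v = sp.diff(u, x, 2) + u        # apply d^2 + 1
    return sp.diff(v, x) - 2*x*v    # then apply d - 2x

for u in (sp.exp(sp.I*x), sp.exp(-sp.I*x)):
    print(sp.simplify(A(u)))        # 0 in both cases
```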
_Remark_.: Let \(H\) and \(A\) be as in the previous lemma. There is an \(H\)-hardian germ \(y\) with \(y\neq 0\) and \(A(y)=0\), but by the lemma, no such \(y\) is in \(H\). Thus Theorem 1 in [161] is false.
If \(H\) is \(\mathrm{d}\)-maximal and \(A\) has exactly one eigenvalue, then this eigenvalue is \(0\) by Corollary 2.5.21. This situation will be investigated in the next subsection.
### Non-oscillation and disconjugacy
_In this subsection we continue to assume that \(A\in H[\partial]\). In light of Corollary 7.4.42 one may ask whether every non-oscillating \(y\in\ker_{\mathcal{C}^{<\infty}}A\) is \(H\)-hardian. The answer is "no" for some \(A\): Suppose \(y\) in \(H\) satisfies \(y^{\prime\prime}+y=x\). (If \(H\) is \(\mathrm{d}\)-maximal, then \(H\) is linearly surjective and such \(y\) exists.) Then \(y\succ 1\), and for \(A:=(\partial-x^{-1})(\partial^{2}+1)\) the germ \(y+\sin x\in\ker_{\mathcal{C}^{<\infty}}A\) is non-oscillating, but not \(H\)-hardian._
Here is a better question: if \(y\in\ker_{\mathcal{C}^{<\infty}}A\) and \(y-h\) is non-oscillating for all \(h\in H\), does it follow that \(y\) is \(H\)-hardian? The next two results show that the answer may depend on \(H\). The first is a consequence of Corollary 5.10.39.
**Lemma 7.4.49**.: _Suppose \(H\) is \(\mathrm{d}\)-maximal. Then every \(y\in\ker_{\mathcal{C}^{<\infty}}A\) such that \(y-h\) is non-oscillating for all \(h\in H\) lies in \(H\)._
**Lemma 7.4.50**.: _Let \(H:=\mathrm{E}(\mathbb{Q})\). Then there is a monic \(A\in H[\partial]\) of order \(5\) and a \(y\in\ker_{\mathcal{C}^{<\infty}}A\) such that \(y-h\) is non-oscillating for all \(h\in H\), but \(y\) is not hardian._
Proof.: Recall that each \(\mathrm{d}\)-maximal Hardy field contains an \(f\) with \(f^{\prime\prime}+f=\mathrm{e}^{x^{2}}\), by Theorem 6.7.22. Take an \(H\)-hardian \(z\in\mathcal{C}^{<\infty}\) with \(z^{\prime\prime}+z=\mathrm{e}^{x^{2}}\). Then \(z-h\succ x^{n}\) for all \(n\) and all \(h\in H\), by [35, Proposition 3.7 and Theorem 3.9]. Set
\[B_{1}\ :=\ \partial^{3}-2x\partial^{2}+\partial-2x\in H[\partial],\quad B_{2} \ :=\ \partial^{2}+4\in H[\partial].\]
Then \(B_{1}(z)=B_{2}(\sin 2x)=0\), hence \(y:=z+\sin 2x\in\mathcal{C}^{<\infty}\) satisfies \(A(y)=0\) for some monic \(A\in H[\partial]\) of order \(5\), by [ADH, 5.1.39]. For all \(h\in H\) we have \(y-h=(z-h)+\sin 2x\sim z-h\), so \(y-h\) is non-oscillating. Towards a contradiction, assume \(y\) is hardian. Then \(y\) is \(H\)-hardian. Take an \(H\langle y\rangle\)-hardian \(u\in\mathcal{C}^{<\infty}\) such that \(u^{\prime\prime}+u=\mathrm{e}^{x^{2}}\). Then \((u-z)^{\prime\prime}+(u-z)=0\), so \(u-z=c\cos(x+d)\), with \(c,d\in\mathbb{R}\). But \(u-y=c\cos(x+d)-\sin 2x\) is not hardian: this is clear if \(c=0\), and otherwise follows from \(B_{2}(u-y)=3c\cos(x+d)\). This is the desired contradiction.
We can also ask: if _no_\(y\in\ker_{\mathcal{C}^{<\infty}}A\) oscillates, does it follow that \(\ker_{\mathcal{C}^{<\infty}}A\) is contained in some Hardy field extension of \(H\)? We now extend Corollary 5.5.9 to give a positive answer:
**Theorem 7.4.51**.: _The following are equivalent:_
* _no_ \(y\in\ker_{\mathcal{C}^{<\infty}}A\) _oscillates;_
* \(\ker_{\mathcal{C}^{<\infty}}A\subseteq\mathrm{D}(H)\)_;_
* _A splits over_ \(\mathrm{D}(H)\)_;_
* _A splits over some Hardy field extension of_ \(H\)_._
Proof.: Corollary 7.4.42 gives (i) \(\Rightarrow\) (ii). For (ii) \(\Rightarrow\) (iii) use that \(A\) splits over \(\mathrm{D}(H)\) whenever \(\dim_{\mathbb{R}}\ker_{\mathrm{D}(H)}A=r\), by Corollary 7.4.42, and Corollary 2.5.5 with the remark following it. The implication (iii) \(\Rightarrow\) (iv) is obvious. Suppose (iv) holds; to show (i), arrange that \(A\) splits over \(H\) and \(H\) is Liouville closed. Then \(\ker_{\mathcal{C}^{<\infty}}A\) is contained in \(H\) by Lemma 2.5.30, so (i) holds.
_Remark_.: The implication (i) \(\Rightarrow\) (ii) in Theorem 7.4.51 is also claimed in [162, Theorem 1]; but the proof given there is defective: in the proof of the auxiliary [162, Lemma 1] it is assumed that if \(y\in\mathcal{C}^{<\infty}\) is non-oscillating and \(A(y)=0\), \(y\neq 0\), then \(y^{\dagger}\) is also non-oscillating; but \(A=\partial^{3}+\partial\), \(y=2+\sin x\) contradicts this.
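This counterexample is easy to verify directly; the following sympy sketch (our illustration) confirms both claims:

```python
# Check of the counterexample (illustration): A = d^3 + d kills y = 2 + sin x,
# y >= 1 is non-oscillating, yet y† = cos x/(2 + sin x) changes sign cofinally.
import sympy as sp

x = sp.symbols('x')
y = 2 + sp.sin(x)
print(sp.simplify(sp.diff(y, x, 3) + sp.diff(y, x)))   # 0, so A(y) = 0
ydag = sp.diff(y, x)/y
print([float(ydag.subs(x, t)) for t in (1, 3, 7)])     # signs +, -, +
```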
We say that \(A\) **does not generate oscillations** if it satisfies one of the equivalent conditions in the theorem above. Thus if \(r\leqslant 1\), then \(A\) does not generate oscillations, and by Corollary 5.5.7, the operator \(\partial^{2}+g\partial+h\) \((g,h\in H)\) generates oscillations iff the germ \(-\frac{1}{2}g^{\prime}-\frac{1}{4}g^{2}+h\) generates oscillations in the sense of Section 5.2. The property of \(A\) to not generate oscillations is uniformly definable in the canonical \(\Lambda\Omega\)-expansion \(\boldsymbol{H}\) of \(H\) viewed as a structure in the language \(\mathcal{L}^{\iota}_{\Lambda\Omega}\) from [ADH, Chapter 16] (see also the proof of Theorem 7.1.3); more precisely:
**Corollary 7.4.52**.: _There is a quantifier-free \(\mathcal{L}^{\iota}_{\Lambda\Omega}\)-formula \(\omega_{r}(x_{1},\ldots,x_{r})\) such that for every Hardy field \(H\) and all \((h_{1},\ldots,h_{r})\in H^{r}\):_
\[\boldsymbol{H}\models\omega_{r}(h_{1},\ldots,h_{r})\quad\Longleftrightarrow\quad\left\{\begin{array}{l}\partial^{r}+h_{1}\partial^{r-1}+\cdots+h_{r}\in H[\partial]\text{ does }\\ \text{not generate oscillations.}\end{array}\right.\]
Proof.: Note that \(A\) does not generate oscillations iff \(A\) splits over some \(\mathrm{d}\)-maximal Hardy field extension of \(H\). Now use that the \(\mathcal{L}_{\Lambda\Omega}^{\iota}\)-theory of canonical \(\Lambda\Omega\)-expansions of \(\mathrm{d}\)-maximal Hardy fields eliminates quantifiers, by [ADH, 16.0.1] and Theorem 6.7.22.
_Example_.: For \(\omega_{2}(x_{1},x_{2})\) we may take the \(\mathcal{L}_{\Lambda\Omega}^{\iota}\)-formula \(\Omega(-2x_{1}^{\prime}-x_{1}^{2}+4x_{2})\). Let \(\alpha,\beta\in\mathbb{R}\), and let \(\omega_{n}\) be as in Corollary 5.5.38. Then for \(H=\mathbb{R}\) in that corollary,
\[\partial^{2}+\alpha\partial+\beta\text{ does not generate oscillations } \Longleftrightarrow\ -\alpha^{2}+4\beta<\omega_{n}\text{ for some }n\] \[\Longleftrightarrow\ \alpha^{2}-4\beta\geqslant 0,\]
and applying the corollary to \(H=\mathbb{R}(x)\) gives
\[\partial^{2}+\alpha x^{-1}\partial+\beta x^{-2}\text{ does not generate oscillations} \Longleftrightarrow\ (2\alpha-\alpha^{2}+4\beta)x^{-2}<\omega_{n}\text{ for some }n\] \[\Longleftrightarrow\ (1-\alpha)^{2}-4\beta\geqslant 0,\]
in accordance with Corollary 7.1.5. (By the way, \(y^{\prime\prime}+\alpha x^{-1}y^{\prime}+\beta x^{-2}y=0\) is Euler's differential equation of order \(2\), cf. [111, §22.3], [203, §20, V].)
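The criterion \((1-\alpha)^{2}-4\beta\geqslant 0\) can also be read off from the classical indicial computation; here is a sympy sketch of it (our illustration, not part of the text):

```python
# Substituting y = x^r into y'' + (alpha/x) y' + (beta/x^2) y = 0 gives the
# indicial polynomial r^2 + (alpha - 1) r + beta; its roots are real, i.e.
# the solutions x^r do not oscillate, exactly when (1 - alpha)^2 - 4 beta >= 0.
import sympy as sp

x, r, alpha, beta = sp.symbols('x r alpha beta', positive=True)
y = x**r
expr = sp.diff(y, x, 2) + alpha/x*sp.diff(y, x) + beta/x**2*y
indicial = sp.powsimp(sp.expand(expr * x**(2 - r)))   # divide out x^(r-2)
print(sp.expand(indicial))                    # r**2 + alpha*r - r + beta
print(sp.discriminant(sp.expand(indicial), r))  # alpha**2 - 2*alpha - 4*beta + 1
```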
From Corollary 2.5.20 we obtain:
**Corollary 7.4.53**.: _Suppose \(H\) is \(\mathrm{d}\)-perfect. Then:_
\[\text{A does not generate oscillations }\Longleftrightarrow\ \mathrm{mult}_{0}(A)=r.\]
_If \(H\) is moreover \(\mathrm{d}\)-maximal, then:_
\[\text{A does not generate oscillations }\Longleftrightarrow\ A\text{ has no eigenvalue different from }0.\]
We say that \(B\in H[\partial]^{\neq}\) does not generate oscillations if \(bB\) does not generate oscillations, for \(b\in H^{\times}\) such that \(bB\) is monic. Using [ADH, 5.1.22] we obtain:
**Corollary 7.4.54**.: _Let \(B_{1},B_{2}\in H[\partial]^{\neq}\). Then \(B_{1}\) and \(B_{2}\) do not generate oscillations iff \(B_{1}B_{2}\) does not generate oscillations._
Note also that if \(E\) is a Hardy field extension of \(H\), then \(A\) generates oscillations iff \(A\) generates oscillations when viewed as an element of \(E[\partial]\). Moreover, \(A\) generates oscillations iff its adjoint \(A^{*}\) does.
In the next corollary \(\phi\) ranges over elements of \(\mathrm{D}(H)^{>}\) that are active in \(\mathrm{D}(H)\).
**Corollary 7.4.55**.: _Suppose \(A\) does not generate oscillations. Then the \(\mathbb{R}\)-linear space \(\ker_{\mathcal{C}<\infty}A\) has a basis \(y_{1},\dots,y_{r}\) with all \(y_{j}\in\mathrm{D}(H)\) and \(y_{1}\prec\dots\prec y_{r}\), and there is a unique splitting \((a_{r},\dots,a_{1})\) of \(A\) over \(\mathrm{D}(H)\) such that eventually we have \(a_{j}+\phi^{\dagger}<a_{j+1}\) for \(j=1,\dots,r-1\)._
Proof.: By Theorem 7.4.51, \(A\) splits over \(\mathrm{D}(H)\). Now use Lemma 2.5.30 and Corollary 2.5.38 applied to the Liouville closed \(H\)-field \(\mathrm{D}(H)\) in place of \(H\).
Theorem 7.4.51 and Corollary 7.4.55 yield Corollary 10 from the introduction. The next lemma complements this Corollary 10 by taking a look at splittings over the differential ring extension \(R:=\mathcal{C}^{<\infty}\) of \(H\):
**Lemma 7.4.56**.: _Suppose \(H\supseteq\mathbb{R}\) is Liouville closed, \(A\) splits over \(H\), \(a_{1},\dots,a_{r}\) lie in \(R\), and \(A=(\partial-a_{r})\dots(\partial-a_{1})\) in \(R[\partial]\). Then \(a_{1},\dots,a_{r}\in H\)._
Proof.: By induction on \(r\). The case \(r=0\) being trivial, suppose \(r\geqslant 1\). By Lemma 2.5.19 we have \(\ker_{R}A\subseteq H\). Take \(y\in R^{\times}\) with \(y^{\dagger}=a_{1}\). Then \(A(y)=0\), hence \(y\in H^{\times}\), so \(a_{1}=y^{\dagger}\in H\). Set \(B:=(\partial-a_{r})\cdots(\partial-a_{2})\in R[\partial]\), so \(A=B(\partial-a_{1})\). Now \(A_{\ltimes y}\in H[\partial]\) and \(A_{\ltimes y}=B_{\ltimes y}\partial\), so \(B_{\ltimes y}\in H[\partial]\). Thus \(B\in H[\partial]\), and \(B\) splits over \(H\) by [ADH, 5.1.22]. Hence, inductively, \(a_{2},\ldots,a_{r}\in H\).
**Corollary 7.4.57**.: _Suppose \(A\) does not generate oscillations, and let \(b\in H\). Then all \(y\in\mathcal{C}^{<\infty}\) with \(A(y)=b\) are in \(\operatorname{D}(H)\)._
Proof.: This follows from Corollary 7.4.55 and variation of constants [ADH, 5.5.22], using that \(\operatorname{D}(H)\) is closed under integration.
As promised in the remarks following Corollary 11 from the Introduction we now strengthen the Trench normal form of disconjugate linear differential operators. (See Section 5.2, just before Lemma 5.2.40, for "disconjugate" in the present context.) Below we use for \(h\in\mathcal{C}\) the suggestive notation \(\int^{\infty}h=\infty\) to indicate that for some \(a\in\mathbb{R}\) and representative \(h\in\mathcal{C}_{a}\) (and thus for every \(a\in\mathbb{R}\) and every representative \(h\in\mathcal{C}_{a}\)) we have \(\int_{a}^{t}h(s)\,ds\to+\infty\) as \(t\to+\infty\).
**Corollary 7.4.58**.: _Let \(r\geqslant 1\). Then \(A\) does not generate oscillations iff \(A\) is disconjugate. Suppose \(A\) is disconjugate. Then there are \(g_{1},\ldots,g_{r}\in\operatorname{D}(H)^{>}\) with_
\[A\ =\ g_{1}\cdots g_{r}(\partial g_{r}^{-1})\cdots(\partial g_{2}^{-1})( \partial g_{1}^{-1}),\quad g_{j}\in\Gamma\big{(}\operatorname{D}(H)\big{)} \text{ for }j=2,\ldots,r. \tag{7.4.1}\]
_Given any such \(g_{1},\ldots,g_{r}\), if \(h_{1},\ldots,h_{r}\in(\mathcal{C}^{<\infty})^{\times}\) satisfy_
\[A\ =\ h_{1}\cdots h_{r}(\partial h_{r}^{-1})\cdots(\partial h_{2}^{-1})( \partial h_{1}^{-1}),\quad\int^{\infty}h_{j}\ =\ \infty\text{ for }j=2,\ldots,r, \tag{7.4.2}\]
_then \(g_{j}\in\mathbb{R}^{>}\cdot h_{j}\) for \(j=1,\ldots,r\)._
Proof.: If \(A\) does not generate oscillations, then \(A\) is disconjugate by Lemma 5.2.40 and Theorem 7.4.51. The converse is clear. Now suppose \(A\) is disconjugate. Then Proposition 2.5.39 yields \(g_{1},\ldots,g_{r}\in\operatorname{D}(H)^{>}\) such that (7.4.1) holds. Let \(h_{1},\ldots,h_{r}\in(\mathcal{C}^{<\infty})^{\times}\) be such that (7.4.2) holds, and set \(a_{j}:=(h_{1}\cdots h_{j})^{\dagger}\in\mathcal{C}^{<\infty}\) (\(j=1,\ldots,r\)). Then \(A=(\partial-a_{r})\cdots(\partial-a_{1})\), so \(a_{1},\ldots,a_{r}\in\operatorname{D}(H)\) by Lemma 7.4.56, hence \(h_{1},\ldots,h_{r}\in\operatorname{D}(H)\) as well. Thus \(h_{1},\ldots,h_{r}>0\) and \(h_{2},\ldots,h_{r}\in\Gamma\big{(}\operatorname{D}(H)\big{)}\). The uniqueness part of Proposition 2.5.39 gives \(g_{j}\in\mathbb{R}^{>}\cdot h_{j}\) for \(j=1,\ldots,r\).
This yields in particular Corollary 11 from the Introduction.
_Example_ (Trench [200, p. 321]).: Suppose \(H=\mathbb{R}\), \(r=3\), and \(A=\partial^{3}-\partial\). Then \(A\) splits as \((\partial-1)\partial(\partial+1)\) over \(H\), so \(A\) does not generate oscillations. In \(\operatorname{D}(H)[\partial]\),
\[A\ =\ \operatorname{e}^{x}\partial\operatorname{e}^{-2x}\partial\operatorname{e}^ {x}\partial\ =\ \operatorname{e}^{-x}\partial\operatorname{e}^{x}\partial\operatorname{e}^{x} \partial\operatorname{e}^{-x}\ =\ \operatorname{e}^{x}\partial\operatorname{e}^{-x}\partial \operatorname{e}^{-x}\partial\operatorname{e}^{x},\]
where only the last factorization is as in (7.4.1).
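All three factorizations are easy to verify by expanding the composites; the following sympy sketch (our illustration) does this for a generic smooth \(f\), reading \(\mathrm{e}^{g_{1}}\partial\,\mathrm{e}^{g_{2}}\partial\,\mathrm{e}^{g_{3}}\partial\,\mathrm{e}^{g_{4}}\) as the map \(y\mapsto\mathrm{e}^{g_{1}}(\mathrm{e}^{g_{2}}(\mathrm{e}^{g_{3}}(\mathrm{e}^{g_{4}}y)^{\prime})^{\prime})^{\prime}\):

```python
# Verification sketch (our addition): each displayed factorization of
# A = d^3 - d agrees with the operator itself on a generic f.
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)
D = lambda u: sp.diff(u, x)

def comp(g1, g2, g3, g4, y):
    return sp.exp(g1)*D(sp.exp(g2)*D(sp.exp(g3)*D(sp.exp(g4)*y)))

A = D(D(D(f))) - D(f)
for gs in ((x, -2*x, x, 0), (-x, x, x, -x), (x, -x, -x, x)):
    print(sp.simplify(sp.expand(comp(*gs, f) - A)))   # 0 for all three
```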
Generating oscillations is an invariant of the type of \(A\):
**Lemma 7.4.59**.: _Suppose \(A\) does not generate oscillations and \(B\in H[\partial]\) has the same type as \(A\). Then \(B\) also does not generate oscillations._
Proof.: By [ADH, 5.1.19]: \(r=\operatorname{order}(B)\) and we have \(R\in H[\partial]\) of order \(<r\) such that \(1\in H[\partial]R+R[\partial]A\) and \(BR\in H[\partial]A\). Now \(\ker_{\mathcal{C}^{<\infty}}A=\ker_{\operatorname{D}(H)}A\), and [ADH, 5.1.20] gives an isomorphism \(y\mapsto R(y)\colon\ker_{\operatorname{D}(H)}A\to\ker_{\operatorname{D}(H)}B\) of \(\mathbb{R}\)-linear spaces. Hence \(\ker_{\mathcal{C}^{<\infty}}B=\ker_{\operatorname{D}(H)}B\), so \(B\) does not generate oscillations.
Next, let \(M\) be a differential module over \(H\). Recall from Section 2.3 the notions of \(M\) splitting, and of \(M\) splitting over a given differential field extension of \(H\).
**Lemma 7.4.60**.: _The following are equivalent:_
1. \(M\) _splits over some Hardy field extension of_ \(H\)_;_
2. \(M\) _splits over_ \(\operatorname{D}(H)\)_;_
3. \(M\) _splits over_ \(\operatorname{E}(H)\)_._
Proof.: Let \(E\) be a Hardy field extension of \(H\) such that \(M\) splits over \(E\). To show that \(M\) splits over \(\operatorname{D}(H)\), replace \(E\) by \(\operatorname{D}(E)\) to arrange \(E=\operatorname{D}(E)\). Next, using \(E\otimes_{H}M\cong E\otimes_{\operatorname{D}(H)}(\operatorname{D}(H) \otimes_{H}M)\) and replacing \(H\), \(M\) by \(\operatorname{D}(H)\), \(\operatorname{D}(H)\otimes_{H}M\), respectively, also arrange \(H=\operatorname{D}(H)\). In particular, \(H\not\subseteq\mathbb{R}\), so \(M\cong H[\partial]/H[\partial]B\) for some monic \(B\in H[\partial]\), by [ADH, 5.5.3]. Then \(B\) splits over \(E\) by [ADH, 5.9.2] and hence over \(H\) (Theorem 7.4.51), so \(M\) splits. This shows (i) \(\Rightarrow\) (ii). The implications (ii) \(\Rightarrow\) (iii) \(\Rightarrow\) (i) are obvious.
We define: \(M\)**does not generate oscillations** if it satisfies one of the equivalent conditions in the previous lemma. If \(M=H[\partial]/H[\partial]A\), then \(M\) does not generate oscillations iff \(A\) does not generate oscillations.
**Corollary 7.4.61**.: _Let \(E\) be a Hardy field extension of \(H\). Then \(M\) does not generate oscillations iff its base change \(E\otimes_{H}M\) to \(E\) does not generate oscillations._
Proof.: Use \(\operatorname{D}(E)\otimes_{H}M\cong\operatorname{D}(E)\otimes_{E}\bigl(E\otimes_{H}M\bigr)\) as differential modules over \(\operatorname{D}(E)\).
If \(N\) is a differential submodule of \(M\), then \(M\) does not generate oscillations iff \(N\) and \(M/N\) do not generate oscillations. Hence:
**Corollary 7.4.62**.: _Let \(A_{1},\dots,A_{m}\in H[\partial]^{\neq}\), \(m\geqslant 1\). Then \(A_{1},\dots,A_{m}\) do not generate oscillations iff \(\operatorname{lclm}(A_{1},\dots,A_{m})\in H[\partial]\) does not generate oscillations._
Let now \(N\) be an \(n\times n\) matrix over \(H\), where \(n\geqslant 1\). We also say that the matrix differential equation \(y^{\prime}=Ny\) over \(H\)**does not generate oscillations** if the differential module over \(H\) associated to \(N\)[ADH, p. 277] does not generate oscillations. If a matrix differential equation over \(H\) generates oscillations, then so does every equivalent matrix differential equation over \(H\). Moreover, given a Hardy field extension \(E\) of \(H\), the matrix differential equation \(y^{\prime}=Ny\) over \(H\) does not generate oscillations iff \(y^{\prime}=Ny\) viewed as matrix differential equation over \(E\) does not generate oscillations (by Corollary 7.4.61).
**Lemma 7.4.63**.: _Suppose \(B\in H[\partial]\) is monic and \(N\) is the companion matrix of \(B\). Then \(y^{\prime}=Ny\) does not generate oscillations iff \(B\) does not generate oscillations. For each Hardy field extension \(E\) of \(H(\mathbb{R})\) we have an isomorphism_
\[y\mapsto(y,y^{\prime},\dots,y^{(n-1)})^{\mathrm{t}}\colon\ker_{E}(B)\to \operatorname{sol}_{E}(N)\]
_of \(\mathbb{R}\)-linear spaces._
Proof.: For the first claim, use that \(M_{N}\cong H[\partial]/H[\partial]B^{*}\) by [ADH, 5.5.8], and \(B\) does not generate oscillations iff \(B^{*}\) does not generate oscillations (remark before Corollary 7.4.55). For the second claim, see [ADH, pp. 271-272].
**Corollary 7.4.64**.: _Suppose \(H\) is \(\operatorname{d}\)-perfect. Then \(y^{\prime}=Ny\) does not generate oscillations iff \(\operatorname{mult}_{0}(N)=n\)._
Proof.: Using [ADH, 5.5.9], arrange that \(N\) is the companion matrix of the monic \(B\in H[\partial]\). Then \(\operatorname{mult}_{0}(B)=\operatorname{mult}_{0}(N)\) by Lemma 2.4.35. Now use Corollary 7.4.53 and Lemma 7.4.63.
**Proposition 7.4.65**.: _The following are equivalent:_
1. \(y^{\prime}=Ny\) _does not generate oscillations;_
2. \(\operatorname{D}(H)\) _contains a fundamental matrix of solutions for_ \(y^{\prime}=Ny\)_;_
3. \(\operatorname{E}(H)\) _contains a fundamental matrix of solutions for_ \(y^{\prime}=Ny\)_;_
4. _every maximal Hardy field extension of_ \(H\) _contains a fundamental matrix of solutions for_ \(y^{\prime}=Ny\)_;_
5. _some Hardy field extension of_ \(H\) _contains a fundamental matrix of solutions for_ \(y^{\prime}=Ny\)_._
Proof.: Suppose \(y^{\prime}=Ny\) does not generate oscillations. Then \(y^{\prime}=Ny\) viewed as matrix differential equation over \(\operatorname{D}(H)\) does not generate oscillations. Hence to show (ii) we may arrange that \(H=\operatorname{D}(H)\). Then \(H\not\subseteq\mathbb{R}\), so by [ADH, 5.5.9] we arrange that \(N\) is the companion matrix of the monic \(B\in H[\partial]\). Then \(B\) does not generate oscillations, so \(H\) contains a fundamental matrix of solutions for \(y^{\prime}=Ny\), by Theorem 7.4.51 and Lemma 7.4.63. This proves (ii). The implications (ii) \(\Rightarrow\) (iii) \(\Rightarrow\) (iv) are clear. Suppose (v) holds. To prove (i), we first arrange that \(H\) is d-perfect and contains a fundamental matrix of solutions for \(y^{\prime}=Ny\), and as in the proof of (i) \(\Rightarrow\) (ii) we then arrange that \(N\) is the companion matrix of some monic operator in \(H[\partial]\). Then \(y^{\prime}=Ny\) does not generate oscillations, by Theorem 7.4.51 and Lemma 7.4.63.
In [33, Definition 16.14], Boshernitzan defines \(y^{\prime}=Ny\) to be \(H\)_-regular_ if it satisfies condition (iii) in the proposition above. In [33, Theorem 16.16] he then notes the following version of Corollary 7.4.57, with \(\operatorname{E}(H)\) in place of \(\operatorname{D}(H)\):
**Corollary 7.4.66**.: _Suppose the matrix differential equation \(y^{\prime}=Ny\) does not generate oscillations. Let \(b\in H^{n}\) be a column vector. Then each solution \(y\) in \((\mathcal{C}^{<\infty})^{n}\) to the differential equation \(y^{\prime}=Ny+b\) lies in \(\operatorname{D}(H)^{n}\)._
Proof.: By Proposition 7.4.65, \(\operatorname{D}(H)\) contains a fundamental matrix of solutions for \(y^{\prime}=Ny\). Now use [ADH, 5.5.21] and \(\operatorname{D}(H)\) being closed under integration.
Here is an application of the material above to the parametrization of curves in euclidean \(n\)-space, where for simplicity we only treat the case \(n=3\), denoting the usual euclidean norm on \(\mathbb{R}^{3}\) by \(|\cdot|\).
_Example_ (Frenet-Serret formulas).: Let \(U\subseteq\mathbb{R}\) be a nonempty open interval and \(\gamma\colon U\to\mathbb{R}^{3}\) be a \(\mathcal{C}^{\infty}\)-curve, parametrized by arc length, that is, \(|\gamma^{\prime}(t)|=1\) for all \(t\in U\). Let \(T:=\gamma^{\prime}\) and \(\kappa:=|T^{\prime}|\) (the _curvature_ of \(\gamma\)). Suppose \(\kappa(t)\neq 0\) for each \(t\in U\), set \(N:=T^{\prime}/|T^{\prime}|\) and \(B:=T\times N\) (vector cross product). Then for \(t\in U\) the vectors \(T(t),N(t),B(t)\in\mathbb{R}^{3}\) are orthonormal and \(y=(T,N,B)\colon U\to\mathbb{R}^{9}\) is a solution of the matrix differential equation \(y^{\prime}=Fy\) on \(\mathcal{C}^{\infty}(U)\), where
\[F=\begin{pmatrix}&\kappa I&\\ -\kappa I&&\tau I\\ &-\tau I&\end{pmatrix}\qquad(I=\text{the $3\times 3$ identity matrix}),\]
for some \(\mathcal{C}^{\infty}\)-function \(\tau\colon U\to\mathbb{R}\) (the _torsion_ of \(\gamma\)).
Conversely, let \(\mathcal{C}^{\infty}\)-functions \(\kappa,\tau\colon U\to\mathbb{R}\) such that \(\kappa(t)>0\) for all \(t\in U\) be given. Then there is a \(\mathcal{C}^{\infty}\)-curve \(\gamma=(\gamma_{1},\gamma_{2},\gamma_{3})\colon U\to\mathbb{R}^{3}\), parametrized by arc length, with curvature \(\kappa\) and torsion \(\tau\). In fact, \(\gamma\) is unique up to proper euclidean motions in \(\mathbb{R}^{3}\). (See [193, Chapter 1] for these facts.) Fix such \(\gamma\) and assume in addition that \(U=(c,+\infty)\) with \(c\in\mathbb{R}\cup\{-\infty\}\) and that \(H\) is d-maximal and contains the germs of \(\kappa\), \(\tau\), also denoted by \(\kappa\), \(\tau\). Then for some \(\alpha\in K/K^{\dagger}\), the matrix differential equation \(y^{\prime}=Fy\) over \(K\) has spectrum \(\{\alpha,-\alpha,0\}\). (Example 2.4.37.) If \(\alpha=0\), then \(y^{\prime}=Fy\) does not generate oscillations, hence the germs of \(\gamma_{1}\), \(\gamma_{2}\), \(\gamma_{3}\) lie in \(H\) and so \(\gamma\) does not exhibit oscillating behavior. If \(\alpha\neq 0\), then \(\alpha=\phi^{\prime}\mathrm{i}+K^{\dagger}\) where \(\phi\in H\), \(\phi>\mathbb{R}\), and then by Corollaries 7.4.45 and 7.4.46, the germs of \(\gamma_{1}\), \(\gamma_{2}\), \(\gamma_{3}\) lie in
\[H\cos\phi+H\sin\phi+H\subseteq\mathcal{C}^{\infty}.\]
For example, if \(\kappa\in\mathbb{R}^{>}\) and \(\tau\in\mathbb{R}\) are constant, then \(\gamma\) is the helix given by
\[t\mapsto\left(-a\cos(t/D),-a\sin(t/D),bt/D\right)\]
where \(D=\sqrt{a^{2}+b^{2}}\), \(\kappa=a/D^{2}\), \(\tau=b/D^{2}\).
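_Remark_.: The curvature and torsion of this helix are easy to confirm by machine. The following Python/sympy fragment is a minimal illustrative sketch (not part of the formal development; `gamma`, `T`, `N`, `B` are just the objects defined above for this particular curve):

```python
# Symbolic check that the helix has curvature a/D^2 and torsion b/D^2.
import sympy as sp

t, a, b = sp.symbols('t a b', positive=True)
D = sp.sqrt(a**2 + b**2)
gamma = sp.Matrix([-a*sp.cos(t/D), -a*sp.sin(t/D), b*t/D])

T = gamma.diff(t)                         # unit tangent (arc-length parametrization)
print(sp.simplify(T.dot(T)))              # 1, i.e. |gamma'| = 1
Tp = T.diff(t)
kappa = sp.sqrt(sp.simplify(Tp.dot(Tp)))  # curvature = |T'|
N = sp.simplify(Tp / kappa)
B = T.cross(N)
tau = sp.simplify(-B.diff(t).dot(N))      # torsion, read off from B' = -tau*N
print(sp.simplify(kappa - a/D**2))        # 0
print(sp.simplify(tau - b/D**2))          # 0
```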
### Revisiting Second-Order Linear Differential Equations
In this section we analyze the oscillating solutions of second-order linear differential equations over Hardy fields in more detail. In particular, we prove Corollary 12 from the introduction. This is connected to the \(\omega\)-freeness of the perfect hull of a Hardy field, which is characterized in Theorem 7.5.32. _Throughout this section \(H\) is a Hardy field and \(K:=H[\mathfrak{i}]\subseteq\mathcal{C}^{<\infty}[\mathfrak{i}]\)._
**Parametrizing the solution space.** Let \(a,b\in H\). We now continue the study of the linear differential equation
( \[\widetilde{\mathrm{L}}\] ) \[Y^{\prime\prime}+aY^{\prime}+bY\ =\ 0\]
over \(H\) from Section 5.3 (with slightly changed notation), and focus on the oscillating case, viewed in the light of our main theorem. (Corollaries 5.5.7 and 5.5.9 already dealt with the non-oscillating case, which did not need our main result.) Most of the following theorem was claimed without proof by Boshernitzan [35, Theorem 5.4]:
**Theorem 7.5.1**.: _Suppose \((\widetilde{\mathrm{L}})\) has an oscillating solution \((\)in \(\mathcal{C}^{<\infty})\). Then there are \(H\)-hardian germs \(g>0\), \(\phi>\mathbb{R}\) such that for all \(y\in\mathcal{C}^{<\infty}\),_
\[y\text{ is a solution of }(\widetilde{\mathrm{L}})\quad\Longleftrightarrow \quad y=cg\cos(\phi+d)\text{ for some }c,d\in\mathbb{R}.\]
_Any such \(H\)-hardian germs \(g\), \(\phi\) are \(\mathrm{d}\)-algebraic over \(H\) and lie in a common Hardy field extension of \(H\). If \(\mathrm{D}(H)\) is \(\omega\)-free, then these properties force \(g,\phi\in\mathrm{D}(H)\), and determine \(g\) uniquely up to multiplication by a positive real number and \(\phi\) uniquely up to addition of a real number._
_Remarks._ If \(H\) is \(\omega\)-free, then \(\mathrm{D}(H)\) is \(\omega\)-free by Theorem 1.4.1. Also, if \(H\) is not \(\lambda\)-free or \(\overline{\omega}(H)=H\setminus\sigma\big{(}\Gamma(H)\big{)}^{\uparrow}\), then \(\mathrm{D}(H)\) is \(\omega\)-free, by Lemma 5.5.37. (See Section 5.5 or [ADH, 5.2] for the definition of the function \(\sigma\), and recall that \(\overline{\omega}(H)\) is the set of all \(f\in H\) such that \(f/4\) does not generate oscillations. If \(H\) is \(\omega\)-free, then \(\overline{\omega}(H)=H\setminus\sigma\big{(}\Gamma(H)\big{)}^{\uparrow}\), by Corollary 5.5.36.) Recall also that \(\lambda\)-freeness includes having asymptotic integration. In the last sentence of Theorem 7.5.1 we cannot drop the hypothesis that \(\mathrm{D}(H)\) is \(\omega\)-free; see Remark 7.5.34.
Let \(V\) be an \(\mathbb{R}\)-linear subspace of \(\mathcal{C}\). A pair \((g,\phi)\) is said to **parametrize**\(V\) if
\[g\in\mathcal{C}^{\times},\ g>0,\quad\phi\in\mathcal{C},\ \phi>\mathbb{R},\qquad V \ =\ \big{\{}cg\cos(\phi+d):c,d\in\mathbb{R}\big{\}};\]
equivalently, \(g\in\mathcal{C}^{\times}\), \(g>0\), \(\phi\in\mathcal{C}\), \(\phi>\mathbb{R}\), and \(V=\mathbb{R}g\cos\phi+\mathbb{R}g\sin\phi\), by Corollary 5.5.15. If \((g,\phi)\) parametrizes \(V\), then so does \((cg,\phi+d)\) for any \(c\in\mathbb{R}^{>}\), \(d\in\mathbb{R}\).
_Example_.: The example following Corollary 5.2.24 shows that for \(f\in\mathbb{R}^{>}\) the pair \((1,\frac{\sqrt{f}}{2}x)\) parametrizes \(\ker_{\mathcal{C}^{<\infty}}(4\partial^{2}+f)\).
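Indeed, for \(c,d\in\mathbb{R}\) the germ \(y=c\cos\bigl(\frac{\sqrt{f}}{2}x+d\bigr)\) satisfies

\[4y^{\prime\prime}+fy\ =\ 4\cdot\Bigl(-\tfrac{f}{4}\Bigr)y+fy\ =\ 0,\]

and these germs exhaust \(\ker_{\mathcal{C}^{<\infty}}(4\partial^{2}+f)\): this kernel is a \(2\)-dimensional \(\mathbb{R}\)-linear space containing \(\cos\frac{\sqrt{f}}{2}x\) and \(\sin\frac{\sqrt{f}}{2}x\), and \(c\cos\bigl(\frac{\sqrt{f}}{2}x+d\bigr)\) ranges over \(\mathbb{R}\cos\frac{\sqrt{f}}{2}x+\mathbb{R}\sin\frac{\sqrt{f}}{2}x\).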
Suppose \(V=\ker_{\mathcal{C}^{<\infty}}(\partial^{2}+a\partial+b)\), and let \(g\in\mathcal{C}^{\times}\), \(g>0\), and \(\phi\in\mathcal{C}\), \(\phi>\mathbb{R}\). Then \((g,\phi)\) parametrizes \(V\) iff \(g\,\mathrm{e}^{\phi i}\in\ker_{\mathcal{C}^{<\infty}[i]}(\partial^{2}+a \partial+b)\).
For later use we record the next lemma where \(V\) is an \(\mathbb{R}\)-linear subspace of \(\mathcal{C}^{1}\) and \(V^{\prime}:=\{y^{\prime}:y\in V\}\) (an \(\mathbb{R}\)-linear subspace of \(\mathcal{C}\)).
**Lemma 7.5.2**.: _Suppose \(H\supseteq\mathbb{R}\) is real closed and closed under integration, and \((g,\phi)\in H\times H\) parametrizes \(V\). Set \(q:=\sqrt{(g^{\prime})^{2}+(g\phi^{\prime})^{2}}\) and \(u:=\arccos(g^{\prime}/q)\). Then \(q,u\in H\) and \((q,\phi+u)\in H\times H\) parametrizes \(V^{\prime}\)._
Proof.: Note that \(u\) is as in Corollary 5.5.16 with \(g^{\prime}\), \(-g\phi^{\prime}\) in place of \(g\), \(h\). Let \(y\in V\), so \(y=cg\cos(\phi+d)\) where \(c,d\in\mathbb{R}\). Then
\[y^{\prime}\ =\ cg^{\prime}\cos(\phi+d)-cg\phi^{\prime}\sin(\phi+d)\ =\ cq\cos(\phi+u+d).\]
Conversely, for \(c,d\in\mathbb{R}\) we have \(cq\cos(\phi+u+d)=y^{\prime}\) for \(y=cg\cos(\phi+d)\in V\).
**Lemma 7.5.3**.: _Set \(f:=-2a^{\prime}-a^{2}+4b\). Let \(h\) be an \(H\)-hardian germ such that \(h>0\) and \(h^{\dagger}=-\frac{1}{2}a\). Let \(g\in\mathcal{C}^{\times}\), \(g>0\) and \(\phi\in\mathcal{C}\), \(\phi>\mathbb{R}\). Then:_
* (i) \((g,\phi)\) _parametrizes_ \(\ker_{\mathcal{C}^{<\infty}}4\partial^{2}+f\) _iff_ \((gh,\phi)\) _parametrizes_ \(\ker_{\mathcal{C}^{<\infty}}\partial^{2}+a\partial+b\)_._
_Assume also that \(\phi\) is hardian \((\)so \(\phi^{\prime}\) is hardian with \(\phi^{\prime}>0)\). Then:_
* (ii) \((1/\sqrt{\phi^{\prime}},\phi)\) _parametrizes_ \(\ker_{\mathcal{C}^{<\infty}}4\partial^{2}+\sigma(2\phi^{\prime})\)_._
Proof.: The arguments leading up to Corollary 5.5.7 yield (i). As to (ii), the definition of \(\sigma\) in [ADH, p. 262] gives
\[\sigma(2\phi^{\prime})\ =\ \omega\big{(}-(2\phi^{\prime})^{\dagger}+2\phi^{ \prime}i\big{)}\ =\ \omega\big{(}-\phi^{\prime\dagger}+2\phi^{\prime}i\big{)}\ =\ \omega(2y^{\dagger})\]
where \(y:=(1/\sqrt{\phi^{\prime}})\,\mathrm{e}^{\phi i}\). Hence \(A(y)=0\) for \(A=4\partial^{2}+\sigma(2\phi^{\prime})\) by the computation in [ADH, p. 258], and thus \(\big{(}1/\sqrt{\phi^{\prime}},\phi\big{)}\) parametrizes \(\ker_{\mathcal{C}^{<\infty}}A\).
Item (i) in Lemma 7.5.3 reduces the proof of Theorem 7.5.1 to the case \(a=0\), and (ii) is about that case. Next we isolate an argument in the proof of [ADH, 14.2.18]:
**Lemma 7.5.4**.: _Let \(E\) be a \(2\)-newtonian \(H\)-asymptotic field with asymptotic integration, \(e\in E^{\times}\), \(f\in E\), and \(\upgamma\) be active in \(E\) such that \(e^{2}=f-\sigma(\upgamma)\) and \(e\succ\upgamma\). Then \(\sigma(y)=f\) and \(y\sim e\) for some \(y\in E^{\times}\)._
Proof.: Note that \(e\) is active in \(E\) since \(e\succ\upgamma\). By [ADH, 11.7.6] we have
\[\sigma(e)-f\ =\ \sigma(e)-\sigma(\upgamma)-e^{2}\ =\ \omega(-e^{\dagger})- \omega(-\upgamma^{\dagger})-\upgamma^{2}\ \prec\ e^{2},\]
and so \(\omega(-e^{\dagger})-f\sim-e^{2}\). Eventually \(\phi\prec e\), so \((\phi/e)^{\dagger}\prec e\) by [ADH, 9.2.11]. Hence with \(R\), \(Q\) as defined before [ADH, 14.2.18], eventually we have \(R^{\phi}\prec e^{2}\), and thus \(Q^{\phi}\sim e^{2}Y^{2}(Y^{2}-1)\). Now [ADH, 14.2.12] yields \(u\in E\) with \(u\sim 1\) and \(Q(u)=0\), thus \(\sigma(y)=f\) for \(y:=eu\sim e\)
Let \(A=4\partial^{2}+f\in H[\partial]\) where \(f/4\in H\) generates oscillations, and set \(V:=\ker_{\mathcal{C}^{<\infty}}A\). If \(H\) is \(\omega\)-free, then \(f\in\sigma\big{(}\Gamma(H)\big{)}^{\uparrow}\), by Corollary 5.5.36. Theorem 7.5.1 now follows from Lemmas 7.5.5, 7.5.6, and 7.5.7 below, which give more information.
**Lemma 7.5.5**.: _There is a pair of \(H\)-hardian germs parametrizing \(V\). For any such pair \((g,\phi)\) we have \(\sigma(2\phi^{\prime})=f\) and \(g^{2}\phi^{\prime}\in\mathbb{R}^{>}\), so \(g\), \(\phi\) are \(\mathrm{d}\)-algebraic over \(\mathbb{Q}\langle f\rangle\) and lie in a common Hardy field extension of \(H\). If \(f\in\mathcal{C}^{\infty}\), then each pair of \(H\)-hardian germs parametrizing \(V\) is in \((\mathcal{C}^{\infty})^{2}\); likewise with \(\mathcal{C}^{\omega}\) in place of \(\mathcal{C}^{\infty}\)._
Proof.: The first statement follows from Corollaries 5.5.19, 7.2.10, and 5.10.35. Next, let \((g,\phi)\) be a pair of \(H\)-hardian germs parametrizing \(V\). Set \(y:=g\,\mathrm{e}^{\phi i}\in\mathcal{C}^{<\infty}[i]^{\times}\); then we have \(A(y)=0\) and hence \(\omega(2y^{\dagger})=f\) where \(y^{\dagger}=g^{\dagger}+\phi^{\prime}i\in\mathcal{C}^{<\infty}[i]\). Now for \(p,q\in\mathcal{C}^{1}\) we have \(\omega(p+q\mathrm{i})=\omega(p)+q^{2}-2(pq+q^{\prime})\mathrm{i}\), so
\[\omega(p+q\mathrm{i})\in\mathcal{C}\ \Leftrightarrow\ pq+q^{\prime}=0.\]
Therefore \(2g^{\dagger}=-(2\phi^{\prime})^{\dagger}=-(\phi^{\prime})^{\dagger}\) and so \(g^{2}\phi^{\prime}\in\mathbb{R}^{>}\), and \(\sigma(2\phi^{\prime})=f\) [ADH, p. 262]. If \(f\in\mathcal{C}^{\infty}\), then \(y\in\mathcal{C}^{\infty}[i]\), so \(g^{2}=|y|^{2}\in\mathcal{C}^{\infty}\) and hence also \(\phi\in\mathcal{C}^{\infty}\) since \(g^{2}\phi^{\prime}\in\mathbb{R}^{>}\); likewise with \(\mathcal{C}^{\omega}\) in place of \(\mathcal{C}^{\infty}\).
**Lemma 7.5.6**.: _Suppose that \(H\supseteq\mathbb{R}\) is real closed with asymptotic integration, and that \(f\in\sigma\big{(}\Gamma(H)\big{)}^{\uparrow}\). Then there is an active \(e>0\) in \(H\) such that \(\phi^{\prime}\sim e\) for all pairs \((g,\phi)\) of \(H\)-hardian germs parametrizing \(V\)._
Proof.: Choose a logarithmic sequence \((\ell_{\rho})\) for \(H\) and set \(\upgamma_{\rho}:=\ell_{\rho}^{\dagger}\) [ADH, 11.5]; then \((\upgamma_{\rho})\) is strictly decreasing and coinitial in \(\Gamma(H)\) [ADH, p. 528]. Take \(\rho\) such that \(f>\sigma(\upgamma_{\rho})\). As in the proof of [ADH, 14.2.18], we increase \(\rho\) so that \(f-\sigma(\upgamma_{\rho})\succ\upgamma_{\rho}^{2}\), and take \(e\in H^{>}\) with \(e^{2}=f-\sigma(\upgamma_{\rho})\). Then \(e\succ\upgamma_{\rho}\) and so \(e\in\Gamma(H)^{\uparrow}\). Let \((g,\phi)\) be a pair of elements in a Hardy field extension \(E\) of \(H\) parametrizing \(V\). We claim that \(\phi^{\prime}\sim e/2\) (so \(e/2\) in place of \(e\) has the property desired in the lemma). We arrange that \(E\) is \(\mathrm{d}\)-maximal. Then \(E\) is Liouville closed and \(\phi>\mathbb{R}\), so \(e,2\phi^{\prime}\in\Gamma(E)\) by [ADH, 11.8.19]. Now \(E\) is newtonian by Theorem 6.7.22, so Lemma 7.5.4 yields \(u\sim 1\) in \(E\) such that \(\sigma(eu)=f\). Now the map \(y\mapsto\sigma(y)\colon\Gamma(E)\to E\) is strictly increasing [ADH, 11.8.29], hence \(2\phi^{\prime}=eu\) by Lemma 7.5.5, and thus \(\phi^{\prime}\sim e/2\).
**Lemma 7.5.7**.: _Suppose \(\mathrm{D}(H)\) is \(\omega\)-free or \(f\in\sigma\big{(}\Gamma(H)\big{)}^{\uparrow}\). Let \(H_{i}\) be a Hardy field extension of \(H\) with \((g_{i},\phi_{i})\in H_{i}\times H_{i}\) parametrizing \(V\), for \(i=1,2\). Then_
\[g_{1}/g_{2}\in\mathbb{R}^{>},\qquad\phi_{1}-\phi_{2}\in\mathbb{R}.\]
_Thus \(g,\phi\in\mathrm{D}(H)\) for any pair \((g,\phi)\) of \(H\)-hardian germs parametrizing \(V\)._
Proof.: We arrange that \(H_{1}\), \(H_{2}\) are \(\mathrm{d}\)-maximal and thus contain \(\mathrm{D}(H)\). Replacing \(H\) by \(\mathrm{D}(H)\) we further arrange that \(H\) is \(\mathrm{d}\)-perfect and \(f\in\sigma\big{(}\Gamma(H)\big{)}^{\uparrow}\). Then \(\phi_{1}^{\prime}\sim\phi_{2}^{\prime}\) by Lemma 7.5.6, and for \(i=1,2\) we have \(c_{i}\in\mathbb{R}^{>}\) with \(\phi_{i}^{\prime}=c_{i}/g_{i}^{2}\), by Lemma 7.5.5. Replacing \(g_{i}\) by \(g_{i}/\sqrt{c_{i}}\) we arrange \(c_{i}=1\)\((i=1,2)\), so \(g_{1}\sim g_{2}\). Consider now the elements \(g_{1}\cos\phi_{1}\), \(g_{1}\sin\phi_{1}\) of \(V\); take \(a,b,c,d\in\mathbb{R}\) such that
\[g_{1}\cos\phi_{1}\ =\ ag_{2}\cos(\phi_{2}+b),\qquad g_{1}\sin\phi_{1}\ =\ cg_{2}\cos(\phi_{2}+d).\]
Then
\[g_{1}^{2}\ =\ g_{1}^{2}(\cos^{2}\phi_{1}+\sin^{2}\phi_{1})\ =\ g_{2}^{2}\big{(}a^{2}\cos^{2}(\phi_{2}+b)+c^{2}\cos^{2}(\phi_{2}+d)\big{)}, \tag{7.5.1}\]
and hence
\[a^{2}\cos^{2}(\phi_{2}+b)+c^{2}\cos^{2}(\phi_{2}+d)\ \sim\ 1.\]
Thus the \(2\pi\)-periodic function
\[t\mapsto F(t)\ :=\ a^{2}\cos^{2}(t+b)+c^{2}\cos^{2}(t+d)\,:\ \mathbb{R}\to \mathbb{R}\]
satisfies \(F(t)\to 1\) as \(t\to+\infty\), hence \(F(t)=1\) for all \(t\), so \(g_{1}=g_{2}\) by (7.5.1). It follows that \(\phi_{1}^{\prime}=\phi_{2}^{\prime}\), so \(\phi_{1}-\phi_{2}\in\mathbb{R}\).
For the final claim, let \((g,\phi)\) be a pair of \(H\)-hardian germs parametrizing \(V\). Let \(M\) be any d-maximal extension of \(H\). Then Lemma 7.5.5 gives a pair \((g_{M},\phi_{M})\in M^{2}\) that also parametrizes \(V\). By the above, \(g/g_{M}\in\mathbb{R}^{>}\) and \(\phi-\phi_{M}\in\mathbb{R}\), hence \(g,\phi\in M\). Since \(M\) is arbitrary, this gives \(g,\phi\in\mathrm{D}(H)\).
This finishes the proof of Theorem 7.5.1 (and Corollary 12 from the introduction).
**Corollary 7.5.8**.: _Suppose that \(H\) is \(\mathrm{d}\)-perfect. Then \(\omega(H)=\overline{\omega}(H)\) is downward closed and \(\sigma\big{(}\Gamma(H)\big{)}\) is upward closed._
Proof.: By Corollary 5.5.3, \(\omega(H)=\overline{\omega}(H)\) is downward closed.
Let \(f\in\sigma\big{(}\Gamma(H)\big{)}^{\uparrow}\). The last part of Lemma 7.5.7 gives \(g,\phi\in H\) such that \((g,\phi)\) parametrizes \(V\). Then \(\sigma(2\phi^{\prime})=f\) by Lemma 7.5.5. Now \(2\phi^{\prime}\in\Gamma(H)\) by [ADH, 11.8.19], so \(f\) lies in \(\sigma\big{(}\Gamma(H)\big{)}\). Thus \(\sigma\big{(}\Gamma(H)\big{)}\) is upward closed.
Recall from [ADH, 11.8] that \(H\supseteq\mathbb{R}\) is said to be _Schwarz closed_ if \(H\) is Liouville closed and \(H=\omega\big{(}\Lambda(H)\big{)}\cup\sigma\big{(}\Gamma(H)\big{)}\).
**Corollary 7.5.9**.: _Suppose \(H\) is \(\mathrm{d}\)-perfect. Then the following are equivalent:_
* (i) \(H\) _is Schwarz closed;_
* (ii) \(H\) _is_ \(\omega\)_-free;_
* (iii) _for all_ \(f\in H\) _the operator_ \(4\partial^{2}+f\in H[\partial]\) _splits over_ \(K\)_;_
* (iv) _for all_ \(a,b\in H\) _the operator_ \(\partial^{2}+a\partial+b\in H[\partial]\) _splits over_ \(K\)_._
Proof.: The equivalence (iii) \(\Leftrightarrow\) (iv) holds by Corollary 5.5.10, and the equivalences (i) \(\Leftrightarrow\) (ii) \(\Leftrightarrow\) (iii) follow from [ADH, 11.8.33] and Corollary 7.5.8.
_In the rest of this subsection \(A=\partial^{2}+a\partial+b\)\((a,b\in H)\). We set \(V:=\ker_{\mathcal{C}^{<\infty}}A\) and \(f:=-2a^{\prime}-a^{2}+4b\), and we take \(H\)-hardian \(h>0\) such that \(h^{\dagger}=-\frac{1}{2}a\). Note the relevance of Lemma 7.5.3(i) in this situation._
**Corollary 7.5.10**.: _Suppose \(f>0\) and \(f\succ 1/x^{2}\). Then \(f\notin\overline{\omega}(H)\), and there is an \(H\)-hardian germ \(\phi\) with \(\phi^{\prime}\sim\frac{1}{2}\sqrt{f}\) such that \((gh,\phi)\) parametrizes \(V\) for \(g:=1/\sqrt{\phi^{\prime}}\)._
Proof.: By Theorem 5.6.2 we arrange that \(H\supseteq\mathbb{R}\) is Liouville closed and \(\omega\)-free. With notation as at the beginning of Section 5.6 we have \(\omega_{\rho}\sim 1/x^{2}\) for all \(\rho\); hence \(f/4>\omega_{\rho}\) for all \(\rho\), so \(f/4\) generates oscillations by [ADH, 11.8.21] and Corollary 5.5.36, and \(f\notin\overline{\omega}(H)\), \(f\in\sigma\big{(}\Gamma(H)\big{)}^{\uparrow}\). Lemma 7.5.5 gives a pair \((g,\phi)\) parametrizing \(\ker_{\mathcal{C}^{<\infty}}(4\partial^{2}+f)\) with \(H\)-hardian \(\phi\) and \(g:=1/\sqrt{\phi^{\prime}}\). Now \(\upgamma:=1/x\) is active in \(H\) with \(\sigma(\upgamma)=2\upgamma^{2}\) and so \(f>\sigma(\upgamma)\) and \(f-\sigma(\upgamma)\sim f\). Then \(\phi^{\prime}\sim\frac{1}{2}\sqrt{f}\) by the proof of Lemma 7.5.6, so \(\phi\) has the property stated in Corollary 7.5.10.
**Corollary 7.5.11**.: _Suppose \(f\notin\overline{\omega}(H)\) and let \((g,\phi)\) be a pair of \(H\)-hardian germs parametrizing \(V\). Then \(\phi\prec x\) iff \(f\prec 1\), and the same with \(\preccurlyeq\) in place of \(\prec\). Also, if \(f\sim c\in\mathbb{R}^{>}\), then \(\phi\sim\frac{\sqrt{c}}{2}x\) and \((f<c\Rightarrow\phi^{\prime\prime}>0)\), \((f>c\Rightarrow\phi^{\prime\prime}<0)\)._
Proof.: We arrange that \(H\supseteq\mathbb{R}\) is \(\omega\)-free and Liouville closed with \(g,\phi\in H\). Then \(y:=2\phi^{\prime}\in\Gamma(H)\) by [ADH, 11.8.19]. Lemma 7.5.5 gives \(\sigma(y)=f\); also \(\sigma(c)=c^{2}\) for all \(c\in\mathbb{R}^{>}\). As the restriction of \(\sigma\) to \(\Gamma(H)\) is strictly increasing [ADH, 11.8.29], this yields the first part. Now suppose \(f\sim c\in\mathbb{R}^{>}\), and take \(\lambda\in\mathbb{R}^{>}\) with \(\phi\sim\lambda x\). Then \(y\sim 2\lambda\), and with \(z:=-y^{\dagger}\prec 1\) we have \(f=\sigma(y)=\omega(z)+y^{2}\sim 4\lambda^{2}\). Hence \(\lambda=\sqrt{c}/2\). Suppose \(f<c\); then \(f=\sigma(y)<c=\sigma(\sqrt{c})\) yields \(y<\sqrt{c}\), so \(\phi^{\prime}<\lambda\). With \(\theta:=\phi-\lambda x\) we have \(\theta\prec x\), \(\theta^{\prime}\prec 1\) and \(\theta^{\prime}=\phi^{\prime}-\lambda<0\), so \(\theta^{\prime\prime}=\phi^{\prime\prime}>0\). The case \(f>c\) is similar.
Combining Lemma 7.5.5 and Corollary 7.5.11 yields:
**Corollary 7.5.12**.: _Suppose \(f\notin\overline{\omega}(H)\). Then for every \(y\in V^{\neq}\) we have:_
* (i) _if_ \(f\prec 1\)_, then_ \(y\not\prec h\) \((\)_so_ \(y\) _is unbounded if in addition_ \(a\leqslant 0\)\()\)_;_
* (ii) _if_ \(f\succ 1\)_, then_ \(y\prec h\)_; and_
* (iii) _if_ \(f\asymp 1\)_, then_ \(y\preccurlyeq h\)_._
_Remarks_.: See [18, Chapter 6] for related (though generally weaker) results in a more general setting. For example, if \(g\in\mathcal{C}\) is eventually increasing with \(g\succ 1\), or \(g\in\mathcal{C}^{1}\) and \(g\sim 1\) with \(\int|g^{\prime}|\prec 1\), then every \(y\in\mathcal{C}^{2}\) with \(y^{\prime\prime}+gy=0\) satisfies \(y\preccurlyeq 1\); cf. §§6, 18 in loc. cit.
**Corollary 7.5.13**.: _Suppose \(f\notin\overline{\omega}(H)\), \(H\supseteq\mathbb{R}\), and \(H\) does not have asymptotic integration or \(H\) is \(\omega\)-free. Then the following are equivalent:_
* (i) \(A(y)=0\) _for some_ \(y\neq 0\) _in a Liouville extension of_ \(K\)_;_
* (ii) _some pair_ \((g,\phi)\in\operatorname{Li}(H)^{2}\) _with_ \(g^{\dagger}\)_,_ \(\phi^{\prime}\) _algebraic over_ \(H\) _parametrizes_ \(V\)_;_
* (iii) _some pair in_ \(\operatorname{Li}(H)^{2}\) _parametrizes_ \(V\)_;_
* (iv) _every pair of_ \(H\)_-hardian germs parametrizing_ \(V\) _lies in_ \(\operatorname{Li}(H)^{2}\)_._
Proof.: Suppose (i) holds. Then Lemma 7.4.6 gives \(g,\phi\in L:=\operatorname{Li}(H)\), \(g\neq 0\), such that \(g^{\dagger}\), \(\phi^{\prime}\) are algebraic over \(H\) and \(A(g\operatorname{e}^{\phi i})=0\). Replacing \(g\) by \(-g\) if necessary we arrange \(g>0\). We have \(\phi\succ 1\): otherwise \(g\operatorname{e}^{\phi i}\in E[i]^{\times}\) for some Hardy field extension \(E\) of \(L\), by Corollary 5.5.24, hence \(\operatorname{Re}(g\operatorname{e}^{\phi i})\in V^{\neq}\) does not oscillate, or \(\operatorname{Im}(g\operatorname{e}^{\phi i})\in V^{\neq}\) does not oscillate, a contradiction. Replacing \(\phi\) by \(-\phi\) if necessary we arrange \(\phi>\mathbb{R}\). Then \((g,\phi)\) parametrizes \(V\). This yields (ii). The implication (ii) \(\Rightarrow\) (iii) is trivial. By the assumptions on \(H\), \(\operatorname{Li}(H)\), and thus \(\operatorname{D}(H)\), is \(\omega\)-free, so (iii) \(\Rightarrow\) (iv) follows from Theorem 7.5.1. For (iv) \(\Rightarrow\) (i), note that the differential fraction field of \(K[\operatorname{e}^{Hi}]\subseteq\mathcal{C}^{<\infty}[i]\) is a Liouville extension of \(K\).
**Corollary 7.5.14**.: _Suppose \(H\supseteq\mathbb{R}\) is Liouville closed and \(f\notin\overline{\omega}(H)\). Then the following are equivalent:_
* (i) \(g,\phi\in H\) _for every pair_ \((g,\phi)\) _of_ \(H\)_-hardian germs parametrizing_ \(V\)_;_
* (ii) _there is a pair of germs in_ \(H\) _parametrizing_ \(V\)_;_
* (iii) \(f\in\sigma(H^{\times})\)_._
Proof.: The implications (i) \(\Rightarrow\) (ii) \(\Rightarrow\) (iii) follow from Lemma 7.5.5 and the remarks preceding Lemma 7.5.3. Suppose \(f\in\sigma(H^{\times})\). Since \(f\notin\overline{\omega}(H)\) and \(\omega(H)^{\downarrow}\subseteq\overline{\omega}(H)\), we have \(f\notin\omega(H)^{\downarrow}\), so \(f\in\sigma\big{(}\Gamma(H)\big{)}\) by [ADH, 11.8.31]. Also, \(4\partial^{2}+f\) splits over \(K\) but not over \(H\) (cf. [ADH, pp. 259, 262]) and \(f/4\) generates oscillations. Hence Corollary 5.10.35 and the remark following it yield a pair of germs in \(H\) parametrizing \(V\). Now (i) follows from Lemma 7.5.7.
The case of Theorem 7.5.1 where \(a\), \(b\) are d-algebraic over \(\mathbb{Q}\) is used later. In that case the \(\Psi\)-set of the Hardy subfield \(H_{0}:=\mathbb{Q}\langle a,b\rangle\) of \(H\) is finite by Lemma 5.4.26, so \(H_{0}\) has no asymptotic integration. Thus the relevance of the next result:
**Corollary 7.5.15**.: _Suppose \(f\notin\overline{\omega}(H)\) and \(H\) has no asymptotic integration. Then there is a pair \((g,\phi)\in\mathrm{D}(H)^{2}\) parametrizing \(V\) such that every pair of \(H\)-hardian germs parametrizing \(V\) equals \((cg,\phi+d)\) for some \(c\in\mathbb{R}^{>}\) and \(d\in\mathbb{R}\)._
Proof.: The assumption on \(H\) gives that \(\mathrm{D}(H)\) is \(\omega\)-free. Now use Theorem 7.5.1.
_In the rest of this subsection \(f\notin\overline{\omega}(H)\), and \((g,\phi)\) is a pair of \(H\)-hardian germs parametrizing \(V\)._ Then \(\sigma(2\phi^{\prime})=f\) (cf. Lemma 7.5.5) and thus
\[P(2\phi^{\prime})\ =\ 0\quad\text{where }P(Y)\ :=\ 2YY^{\prime\prime}-3(Y^{ \prime})^{2}+Y^{4}-fY^{2}\in H\{Y\}.\]
Hence for grounded \(H\), Theorem 5.4.25 applied to \(E:=H\langle\phi\rangle=H(\phi,\phi^{\prime},\phi^{\prime\prime})\) gives elements \(h_{0},h_{1}\in H^{>}\) and \(m\), \(n\) with \(h_{0},h_{1}\succ 1\) and \(m+n\leqslant 3\), such that
\[\log_{m+1}h_{0}\ \prec\ \phi\ \preccurlyeq\ \exp_{n}h_{1}.\]
In the next two lemmas we improve on these bounds:
**Lemma 7.5.16**.: _Suppose \(\ell_{0}\in H^{>}\), \(\ell_{0}\succ 1\), and \(\max\Psi_{H}=v(\upgamma_{0})\) for \(\upgamma_{0}:=\ell_{0}^{\dagger}\). Then \(f-\omega(-\upgamma_{0}^{\dagger})\succ\upgamma_{0}^{2}\) and \(\phi\succ\log\ell_{0}\), or \(f-\omega(-\upgamma_{0}^{\dagger})\asymp\upgamma_{0}^{2}\) and \(\phi\asymp\log\ell_{0}\)._
Proof.: By Lemma 5.5.34 we have \(f\notin\overline{\omega}(H)=\omega(-\upgamma_{0}^{\dagger})+\upgamma_{0}^{2}\smallo_{H}^{\downarrow}\) and hence \(f=\omega(-\upgamma_{0}^{\dagger})+\upgamma_{0}^{2}u\) where \(u\succcurlyeq 1\), \(u>0\), so \(f-\sigma(\upgamma_{0})=\upgamma_{0}^{2}(u-1)\). Suppose \(u\succ 1\); then \(f-\sigma(\upgamma_{0})\sim u\upgamma_{0}^{2}\succ\upgamma_{0}^{2}\), and the proof of Lemma 7.5.6 shows that then \(\phi^{\prime}\sim e/2\) where \(e^{2}=f-\sigma(\upgamma_{0})\), so \(e\succ\upgamma_{0}\) and thus \(\phi\succ\log\ell_{0}\). Now suppose \(u\asymp 1\), and put \(\ell_{1}:=\log\ell_{0}\), \(\upgamma_{1}:=\ell_{1}^{\dagger}\). Then by [ADH, 11.7.6],
\[f-\sigma(\upgamma_{1})\ =\ \omega\big{(}-\upgamma_{0}^{\dagger}\big{)}-\omega \big{(}-\upgamma_{1}^{\dagger}\big{)}+u\upgamma_{0}^{2}-\upgamma_{1}^{2}\ \sim\ u\upgamma_{0}^{2}\ \succ\ \upgamma_{1}^{2},\]
and arguing as in the proof of Lemma 7.5.6 gives, as before, \(\phi\asymp\log\ell_{0}\).
**Lemma 7.5.17**.: _Suppose \(f\in\sigma\big{(}\Gamma(H)\big{)}^{\uparrow}\) or \(H\) is not \(\lambda\)-free, and \(u\in H^{>}\) is such that \(u\succ 1\) and \(v(u^{\dagger})=\min\Psi_{H}\). Then \(\phi\leqslant u^{n}\) for some \(n\geqslant 1\)._
Proof.: We have \(H\)-hardian \(\phi\succ 1\), but this is not enough to get \(\theta\in H^{\times}\) with \(\phi\asymp\theta\). That is why we consider first the case that \(H\supseteq\mathbb{R}\) is real closed with asymptotic integration, and \(f\in\sigma\big{(}\Gamma(H)\big{)}^{\uparrow}\). Then Lemma 7.5.6 gives \(e\in H^{>}\) such that \(\phi^{\prime}\sim e\), and as \(H\) has asymptotic integration we obtain \(\theta\in H^{\times}\) with \(\phi\asymp\theta\). Hence \(\phi^{\dagger}\asymp\theta^{\dagger}\preccurlyeq u^{\dagger}\), and thus \(\phi\leqslant u^{n}\) for some \(n\geqslant 1\), by [ADH, 9.1.11].
We now reduce the general case to this special case. Take a d-maximal Hardy field extension \(E\) of \(H\) with \(g,\phi\in E\). Suppose \(H\) is \(\lambda\)-free. Then \(f\in\sigma\big{(}\Gamma(H)\big{)}^{\uparrow}\). Also, \(H(\mathbb{R})\) is \(\lambda\)-free with the same value group as \(H\), by Proposition 1.4.3, so \(L:=H(\mathbb{R})^{\mathrm{rc}}\subseteq E\) has asymptotic integration, with \(v(u^{\dagger})=\min\Psi_{L}\). Thus \(\phi\leqslant u^{n}\) for some \(n\geqslant 1\) by the special case applied to \(L\) in the role of \(H\).
For the rest of the proof we assume \(H\) is not \(\lambda\)-free. Then \(H(\mathbb{R})\) is not \(\lambda\)-free by Lemmas 1.4.13 and 1.4.14, and so \(L:=H(\mathbb{R})^{\mathrm{rc}}\subseteq E\) is not \(\lambda\)-free by [ADH, 11.6.8]. Using [ADH, 10.3.2] we also have \(v(u^{\dagger})=\min\Psi_{L}\). Hence replacing \(H\) by \(L\) we arrange that \(H\supseteq\mathbb{R}\) and \(H\) is real closed in what follows.
Suppose \(H\) has no asymptotic integration. As in the proof of Lemma 1.4.18 this yields an \(\omega\)-free Hardy subfield \(L\supseteq H\) of \(E\) such that \(\Gamma_{H}^{>}\) is cofinal in \(\Gamma_{L}^{>}\), so \(v(u^{\dagger})=\min\Psi_{L}\). Moreover, \(f\in L\,\backslash\,\overline{\omega}(L)=\sigma\big{(}\Gamma(L)\big{)}^{\uparrow}\) by Corollary 5.5.36. Hence replacing \(H\) by \(L^{\mathrm{rc}}\) we have a reduction to the special case.
Suppose \(H\) has asymptotic integration. Since \(H\) is not \(\lambda\)-free, [ADH, 11.6.1] gives \(s\in H\) creating a gap over \(H\). Take \(y\in E^{\times}\) with \(y^{\dagger}=s\). Then \(vy\) is a gap in \(H(y)\) by the remark following [ADH, 11.5.14], and thus a gap in
\(L:=H(y)^{\rm rc}\). Moreover, \(\Gamma_{H}^{>}\) is cofinal in \(\Gamma_{H(y)}^{>}\) by [ADH, 10.4.5(i)], hence cofinal in \(\Gamma_{L}^{>}\), so \(v(u^{\dagger})=\min\Psi_{L}\). Thus replacing \(H\) by \(L\) yields a reduction to the "no asymptotic integration" case.
**Corollary 7.5.18**.: _Let \(a,b\in H:={\mathbb{R}}(x)\). Then \(g,\phi\in{\rm D}({\mathbb{Q}})\subseteq{\mathcal{C}}^{\omega}\), and \(\phi\preccurlyeq x^{n}\) for some \(n\geqslant 1\). Moreover, \(f\succ 1/x^{2}\) and \(\log x\prec\phi\), or \(f\asymp 1/x^{2}\) and \(\log x\asymp\phi\)._
Proof.: Apply Lemma 7.5.16 with \(\ell_{0}:=x\), and Lemma 7.5.17 with \(u=x\), and note that \(f\prec 1/x^{2}\) is excluded by the standing assumption \(f\notin\overline{\omega}(H)\).
_Remark_.: Suppose \(a=0\). Then \(b=f/4\), and \(g^{2}\phi^{\prime}\in{\mathbb{R}}^{>}\) by Lemma 7.5.5; hence bounds on \(\phi\) give bounds on \(g\). Thus by Corollary 7.5.18, if \(b\in H:={\mathbb{R}}(x)\), then \(g\succcurlyeq x^{-n}\) for some \(n\geqslant 1\), and either \(f\succ 1/x^{2}\), \(g\prec\sqrt{x}\), or \(f\asymp 1/x^{2}\), \(g\asymp\sqrt{x}\).
_Examples_.: Let \(H:={\mathbb{R}}(x)\). Then for \(a=0\) and \(b=\frac{5}{4}x^{-2}\) the standing assumption \(f\notin\overline{\omega}(H)\) holds, since \(f=5x^{-2}\). The germ \(y=x^{1/2}\cos\log x\in{\mathcal{C}}^{\omega}\) solves the corresponding second-order linear differential equation \(4Y^{\prime\prime}+fY=0\). Another example: let \(H\) contain \(x\) and \(x^{r}\) where \(r\in{\mathbb{R}}\), \(r>-1\). Then for \(a=0\) and \(b:=\frac{1}{4}\big{(}x^{2r}-r(r+2)x^{-2}\big{)}\in{\mathcal{C}}^{\omega}\) the standing assumption \(f\notin\overline{\omega}(H)\) holds in view of \(f=4b\sim x^{2r}\succ 1/x^{2}\). Here \(z=x^{-r/2}\cos\left(\frac{x^{r+1}}{2(r+1)}\right)\in{\mathcal{C}}^{\omega}\) satisfies \(4z^{\prime\prime}+fz=0\).
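_Remark_.: Both examples can be confirmed symbolically. The following Python/sympy fragment is an illustrative sketch (for simplicity it declares \(r>0\), while the text allows any real \(r>-1\)):

```python
# First example: 4y'' + (5/x^2) y = 0 for y = sqrt(x) cos(log x).
# Second example: 4z'' + f z = 0 for f = x^(2r) - r(r+2)/x^2 and
#                 z = x^(-r/2) cos(x^(r+1)/(2(r+1))).
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.sqrt(x)*sp.cos(sp.log(x))
print(sp.simplify(4*y.diff(x, 2) + (5/x**2)*y))   # 0

r = sp.symbols('r', positive=True)                # assumption: r > 0
f = x**(2*r) - r*(r + 2)/x**2
z = x**(-r/2)*sp.cos(x**(r + 1)/(2*(r + 1)))
print(sp.simplify(4*z.diff(x, 2) + f*z))          # 0
```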
We now set \(B:=\partial^{3}+f\partial+(f^{\prime}/2)\in H[\partial]\), and observe:
**Lemma 7.5.19**.: \(B(1/\phi^{\prime})=0\)_._
Proof.: We arrange that \(H\supseteq{\mathbb{R}}\) contains \(\phi\) and is Liouville closed, and identify the universal exponential extension \({\rm U}={\rm U}_{K}\) of \(K=H[i]\) with a differential subring of \({\mathcal{C}}^{<\infty}[i]\) as explained at the beginning of Section 5.10. Then
\[(\phi^{\prime})^{-1/2}\,{\rm e}^{\phi i},(\phi^{\prime})^{-1/2}\,{\rm e}^{-\phi i}\in\ker_{\rm U}4\partial^{2}+f.\]
Thus \(B(1/\phi^{\prime})=0\) by Lemma 2.4.23 applied to \({\rm Frac}({\rm U})\) in the role of \(K\).
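_Remark_.: For a concrete instance, take the first example in this section: \(a=0\), \(f=5x^{-2}\), with \(\phi=\log x\) (so that \(g^{2}\phi^{\prime}\in\mathbb{R}^{>}\) for \(g=\sqrt{x}\)). Then \(1/\phi^{\prime}=x\), and indeed

\[B(x)\ =\ x^{\prime\prime\prime}+fx^{\prime}+(f^{\prime}/2)\,x\ =\ 0+5x^{-2}+\bigl(-5x^{-3}\bigr)x\ =\ 0.\]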
For the canonical \(\Lambda\Omega\)-expansion of a Hardy field, see Section 7.1.
**Lemma 7.5.20**.: _Let \(E\) be a pre-\(\Lambda\Omega\)-field extension of the canonical \(\Lambda\Omega\)-expansion of \(H\langle\phi^{\prime}\rangle\). Then \(\ker_{E}B=C_{E}(1/\phi^{\prime})\)._
Proof.: Using [ADH, 16.3.20, remark after 4.1.13] we arrange \(E\) to be Schwarz closed. Then \(f\notin\overline{\omega}(H)=\omega(E)\cap H\), hence \(f\in\sigma(E^{\times})\), so \(\dim_{C_{E}}\ker_{E}B=1\) by Lemma 2.5.25.
We can now complement Corollary 7.5.13:
**Corollary 7.5.21**.: _Suppose \(\phi^{\prime}\) is algebraic over \(H\). Then \((\phi^{\prime})^{2}\in H\) and \(g^{\dagger}\in H\)._
Proof.: Let \(E:=H^{\rm rc}\subseteq{\mathcal{C}}^{<\infty}\). Then by Corollary 7.1.2, the canonical \(\Lambda\Omega\)-expansion of \(E\) extends that of \(H\langle\phi^{\prime}\rangle\). Set \(L:=E[i]\subseteq{\mathcal{C}}^{<\infty}[i]\), so \(L\) is an algebraic closure of the differential field \(H\). Put \(u:=2\phi^{\prime}\in E\), and let \(\tau\in{\rm Aut}(L|H)\). Then \(B(\tau(1/u))=0\) by Lemma 7.5.19. So \({\rm Re}\,\tau(1/u)\) and \({\rm Im}\,\tau(1/u)\) in \(E\) are also zeros of \(B\), hence Lemma 7.5.20 yields \(c\in{\mathbb{C}}^{\times}\) with \(\tau(1/u)=c/u\) and thus \(\tau(u)=c^{-1}u\). Now with
\[P(Y)\ :=\ 2YY^{\prime\prime}-3(Y^{\prime})^{2}+Y^{4}-fY^{2}\in H\{Y\}\]
we have \(P(u)=0\), so \(P(\tau(u))=0\), hence
\[0\ =\ P(u)-c^{2}P(\tau(u))\ =\ P(u)-c^{2}P(c^{-1}u)\ =\ (1-c^{-2})u^{4}\]
and thus \(c\in\{-1,1\}\), so \(\tau(u^{2})=u^{2}\). This proves the first statement. The second statement follows from the first and \(g^{2}\phi^{\prime}\in\mathbb{R}^{>}\) by Lemma 7.5.5.
**Distribution of zeros.** Let \(a,b\in H\) and consider again the differential equation
( \[\widetilde{\mathrm{L}}\] ) \[Y^{\prime\prime}+aY^{\prime}+bY\ =\ 0.\]
Below we use Theorem 7.5.1 to show that for any oscillating solution \(y\in\mathcal{C}^{<\infty}\) of (\(\widetilde{\mathrm{L}}\)) the sequence of successive zeros of \(y\) grows very regularly, with growth comparable to that of the sequence of successive relative maxima of \(y\), and also to that of a function whose germ is hardian. (For the equation \(Y^{\prime\prime}+fY=0\), where \(f\in\mathrm{E}(\mathbb{Q})\) generates oscillations, this was suggested after [33, §20, Conjecture 4].)
To make this precise we first define a preordering \(\leqslant\) on the set \(\mathbb{R}^{\mathbb{N}}\) of sequences of real numbers by
\[(s_{n})\leqslant(t_{n})\quad:\Longleftrightarrow\quad s_{n}\leqslant t_{n} \text{ eventually }\quad:\Longleftrightarrow\quad\exists m\,\forall n\geqslant m\ s_{n} \leqslant t_{n}.\]
(A preordering on a set is a reflexive and transitive binary relation on that set.) We say that \((s_{n}),(t_{n})\in\mathbb{R}^{\mathbb{N}}\) are _comparable_ if \((s_{n})\leqslant(t_{n})\) or \((t_{n})\leqslant(s_{n})\). The induced equivalence relation \(\sim_{\mathrm{tail}}\) on \(\mathbb{R}^{\mathbb{N}}\) is that of having the same tail:
\[(s_{n})\sim_{\mathrm{tail}}(t_{n})\quad:\Longleftrightarrow\quad(s_{n}) \leqslant(t_{n})\text{ and }(t_{n})\leqslant(s_{n})\ \Longleftrightarrow\ s_{n}=t_{n}\text{ eventually}.\]
Given a germ \(f\in\mathcal{C}\), we take a representative in \(\mathcal{C}_{0}\), denoted here also by \(f\) for convenience, and associate to the germ the tail of the sequence \(\big{(}f(n)\big{)}\), noting that this tail is independent of the choice of representative.
For example, if the germs of \(f,g\in\mathcal{C}_{0}\) are contained in a common Hardy field, then the sequences \(\big{(}f(n)\big{)}\), \(\big{(}g(n)\big{)}\) are comparable. Given an infinite set \(S\subseteq\mathbb{R}\) with a lower bound in \(\mathbb{R}\) and without a limit point, the **enumeration** of \(S\) is the strictly increasing sequence \((s_{n})\) with \(S=\{s_{0},s_{1},\dots\}\) (so \(s_{n}\to+\infty\) as \(n\to+\infty\)).
We take representatives of \(a\), \(b\) in \(\mathcal{C}_{e}^{1}\) with \(e\in\mathbb{R}\), denoting these by \(a\) and \(b\) as well, and set \(f:=-2a^{\prime}-a^{2}+4b\in\mathcal{C}_{e}\). Let \(y\in\mathcal{C}_{e}^{2}\) be oscillating with
\[y^{\prime\prime}+ay^{\prime}+by\ =\ 0,\qquad\text{(so the germ of $f$ does not lie in $\overline{\omega}(H)$)},\]
and let \((s_{n})\) be the enumeration of \(y^{-1}(0)\). (See Lemma 5.2.10.) Theorem 7.5.1 yields \(e_{0}\geqslant e\), \(g\in\mathcal{C}_{e_{0}}^{\times}\), and strictly increasing \(\phi\in\mathcal{C}_{e_{0}}\) such that \(y|_{e_{0}}=g\cos\phi\), and \(g\), \(\phi\) lie in a common Hardy field extension of \(H\) with \((g,\phi)\) parametrizing \(\ker_{\mathcal{C}^{<\infty}}(\partial^{2}+a\partial+b)\) (where \(g\), \(\phi\) also denote their own germs).
**Lemma 7.5.22**.: _There is a strictly increasing \(\zeta\in\mathcal{C}_{n_{0}}\)\((n_{0}\in\mathbb{N})\) such that \(s_{n}=\zeta(n)\) for all \(n\geqslant n_{0}\) and the germ of \(\zeta\) is hardian with \(H\)-hardian compositional inverse._
Proof.: Take \(n_{0}\in\mathbb{N}\) such that \(s_{n}\geqslant e_{0}\) for all \(n\geqslant n_{0}\), and then \(k_{0}\in\frac{1}{2}+\mathbb{Z}\) such that \(\phi(s_{n})=(k_{0}+n)\pi\) for all \(n\geqslant n_{0}\). Thus \(n_{0}=\big{(}\phi(s_{n_{0}})/\pi\big{)}-k_{0}\). Let \(\zeta\in\mathcal{C}_{n_{0}}\) be the compositional inverse of \((\phi/\pi)-k_{0}\) on \([s_{n_{0}},+\infty)\). Then \(\zeta\) has the desired properties: the germ of \(\zeta\) is hardian by Lemma 5.3.5.
If \(a,b\in\mathcal{C}^{\infty}\), then we can choose \(\zeta\) in Lemma 7.5.22 such that its germ is in \(\mathcal{C}^{\infty}\); likewise with \(\mathcal{C}^{\omega}\) in place of \(\mathcal{C}^{\infty}\). We do not know whether we can always choose \(\zeta\) in Lemma 7.5.22 to have \(H\)-hardian germ. For \(\phi\) not growing too slowly we can describe the asymptotic behavior of \(\zeta\) in terms of \(\phi\):
**Corollary 7.5.23**.: _If \(\phi\succcurlyeq x^{1/n}\) for some \(n\geqslant 1\), then in Lemma 7.5.22 one can choose \(\zeta\sim\phi^{\mathrm{inv}}\circ\pi x\)._
Proof.: Let \(n_{0}\), \(k_{0}\), \(\zeta\) be as in the proof of Lemma 7.5.22. Then
\[\zeta^{\rm inv}\ \sim\ (\phi/\pi)-k_{0}\ \sim\ \phi/\pi.\]
Now assume \(\phi\succcurlyeq x^{1/n}\), \(n\geqslant 1\). Then \(\zeta^{\rm inv}\succcurlyeq x^{1/n}\), so \(\zeta\preccurlyeq x^{n}\), and thus the condition stated just before Lemma 5.1.9 is satisfied for \(h:=\zeta\). We can therefore use Corollary 5.1.11 with \(\phi/\pi\), \(\zeta\) in the role of \(g\), \(h\) to get \(\phi^{\rm inv}\circ\pi x\sim\zeta\).
Combining Corollaries 7.5.11, 7.5.23, and 5.1.11 we obtain:
**Corollary 7.5.24**.: _If \(f\sim c\ (c\in\mathbb{R}^{>})\), then \(s_{n}\sim\frac{2}{\sqrt{c}}\pi n\) as \(n\to\infty\)._
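_Remark_.: This growth rate is easy to observe numerically. The following Python sketch is illustrative only; the coefficient \(b(x)=\frac{1}{4}(1+1/x)\) is a made-up test case with \(a=0\), so that \(f=4b\sim c=1\) and the corollary predicts \(s_{n}\sim 2\pi n\):

```python
# Approximate the zeros s_n of a solution of Y'' + b(x) Y = 0 with b = (1 + 1/x)/4.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(x, y):                                   # y = (Y, Y')
    return [y[1], -(1 + 1/x)/4 * y[0]]

sol = solve_ivp(rhs, (1, 2000), [0.0, 1.0], dense_output=True,
                rtol=1e-10, atol=1e-12)
xs = np.linspace(1, 2000, 400_000)
Y = sol.sol(xs)[0]
s = xs[1:][np.sign(Y[1:]) != np.sign(Y[:-1])]    # sign changes locate s_0, s_1, ...
n = len(s) - 1
print(s[-1] / (2*np.pi*n))                       # close to 1 (tends to 1 as t grows)
```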
Combining Corollary 7.5.18 with the proof of Lemma 7.5.22 yields crude bounds on the growth of \((s_{n})\) when \(H=\mathbb{R}(x)\):
**Corollary 7.5.25**.: _Suppose \(a,b\in\mathbb{R}(x)\). If \(f\asymp 1/x^{2}\), then for some \(r\in\mathbb{R}^{>}\) we have \({\rm e}^{n/r}\leqslant s_{n}\leqslant{\rm e}^{rn}\) eventually. If \(f\succ 1/x^{2}\), then for some \(m\geqslant 1\) and every \(\varepsilon\in\mathbb{R}^{>}\) we have \(n^{1/m}\leqslant s_{n}\leqslant{\rm e}^{\varepsilon n}\) eventually._
The next lemma is a version of the Sturm Convexity Theorem [21, p. 318] concerning the differences between consecutive zeros of \(y\):
**Lemma 7.5.26**.: _If \(f\prec 1\), then the sequence \((s_{n+1}-s_{n})\) is eventually strictly increasing with \(s_{n+1}-s_{n}\to+\infty\) as \(n\to\infty\). If \(f\succ 1\), then \((s_{n+1}-s_{n})\) is eventually strictly decreasing with \(s_{n+1}-s_{n}\to 0\) as \(n\to\infty\). Now suppose \(f\sim c\ (c\in\mathbb{R}^{>})\). Then \(s_{n+1}-s_{n}\to 2\pi/\sqrt{c}\) as \(n\to\infty\), and if \(f<c\), then \((s_{n+1}-s_{n})\) is eventually strictly decreasing, if \(f=c\), then \((s_{n+1}-s_{n})\) is eventually constant, and if \(f>c\), then \((s_{n+1}-s_{n})\) is eventually strictly increasing._
Proof.: We arrange \(\phi\in\mathcal{C}^{2}_{e_{0}}\) such that \(\phi^{\prime}(t)>0\) for all \(t\geqslant e_{0}\). Take \(\zeta\) as in the proof of Lemma 7.5.22. Then \(\zeta\in\mathcal{C}^{2}_{n_{0}}\) with
\[\zeta^{\prime}\ =\ \pi\frac{1}{\phi^{\prime}\circ\zeta},\qquad\zeta^{\prime \prime}\ =\ -\pi^{2}\frac{\phi^{\prime\prime}\circ\zeta}{(\phi^{\prime}\circ\zeta)^{3}}.\]
The Mean Value Theorem gives for every \(n\geqslant n_{0}\) a \(t_{n}\in(n,n+1)\) such that
\[s_{n+1}-s_{n}\ =\ \zeta(n+1)-\zeta(n)\ =\ \zeta^{\prime}(t_{n}).\]
If \(f\prec 1\), then Corollary 7.5.11 gives \(\phi\prec x\), so \(\zeta\succ x\), hence \(\zeta^{\prime}\succ 1\); this proves the first claim of the lemma. The other claims follow likewise using Corollary 7.5.11 and the above remarks on \(\zeta^{\prime}\) and \(\zeta^{\prime\prime}\).
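_Remark_.: A concrete instance of the second case: for \(a=0\), \(b=x\) (so \(f=4x\succ 1\)) the germ of \(\operatorname{Ai}(-x)\) is an oscillating solution of \(Y^{\prime\prime}+xY=0\), and the gaps between its successive zeros decrease to \(0\), roughly like \(\pi/\sqrt{x}\). A short numerical sketch (illustrative only):

```python
# Gaps between consecutive zeros of Ai(-x), an oscillating solution of Y'' + xY = 0.
import numpy as np
from scipy.special import airy

xs = np.linspace(1, 200, 1_000_000)
Ai = airy(-xs)[0]                                # airy returns (Ai, Ai', Bi, Bi')
s = xs[1:][np.sign(Ai[1:]) != np.sign(Ai[:-1])]  # zeros of Ai(-x) in [1, 200]
gaps = np.diff(s)
print(gaps[:3])     # about 1.75, 1.43, 1.27: decreasing
print(gaps[-3:])    # near pi/sqrt(200), i.e. about 0.22
```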
**Corollary 7.5.27**.: _Let \(h\in\mathcal{C}_{0}\) and suppose the germ of \(h\) is in \(\mathrm{E}(\mathbb{Q})\). Then the sequences \((s_{n})\) and \(\big{(}h(n)\big{)}\) are comparable._
Proof.: Let \(\zeta\) be as in Lemma 7.5.22, and note that the germs of \(\zeta\) and \(h\) lie in a common Hardy field.
Let also \(\underline{a},\underline{b}\in H\), and take representatives of \(\underline{a}\), \(\underline{b}\) in \(\mathcal{C}^{1}_{\underline{e}}\) (\(\underline{e}\in\mathbb{R}\)), denoting these by \(\underline{a}\) and \(\underline{b}\) as well. Let \(\underline{y}\in\mathcal{C}^{2}_{\underline{e}}\) be an oscillating solution of the differential equation
\[Y^{\prime\prime}+\underline{a}Y^{\prime}+\underline{b}Y\ =\ 0,\]
and let \((\underline{s}_{n})\) be the enumeration of \(\underline{y}^{-1}(0)\).
**Lemma 7.5.28**.: _The sequences \((s_{n})\) and \((\underline{s}_{n})\) are comparable._
Proof.: We arrange that \(H\) is maximal and take \(\zeta\) as in Lemma 7.5.22. This lemma also provides a strictly increasing \(\underline{\zeta}\in\mathcal{C}_{\underline{n}_{0}}\) (\(\underline{n}_{0}\in\mathbb{N}\)) such that \(\underline{s}_{n}=\underline{\zeta}(n)\) for all \(n\geqslant\underline{n}_{0}\) and the germ of \(\underline{\zeta}\) is hardian with \(H\)-hardian compositional inverse. With \(\zeta\) and \(\underline{\zeta}\) denoting also their germs this gives \(\zeta^{\rm inv}\leqslant\underline{\zeta}^{\rm inv}\) or \(\zeta^{\rm inv}\geqslant\underline{\zeta}^{\rm inv}\), hence \(\zeta\geqslant\underline{\zeta}\) or \(\zeta\leqslant\underline{\zeta}\). Thus \((s_{n})\) and \((\underline{s}_{n})\) are comparable.
Now \(y^{\prime}\) also oscillates, so by Corollary 5.5.17 there is for all sufficiently large \(n\) exactly one \(t\in(s_{n},s_{n+1})\) with \(y^{\prime}(t)=0\). Also \(b\neq 0\) in \(H\): if \(b=0\), then \(z:=y^{\prime}\) would satisfy \(z^{\prime}+az=0\), so \(z\) would be \(H\)-hardian, contradicting that \(y^{\prime}\) oscillates. This leads to the following: Let \(m\geqslant 1\) and suppose \(y\in\mathcal{C}_{e}^{m+2}\) (and \(y^{\prime\prime}+ay^{\prime}+by=0\) with oscillating \(y\) as before). Then the zero sets of \(y,y^{\prime},\ldots,y^{(m)}\) are eventually parametrized by hardian germs as follows:
**Lemma 7.5.29**.: _For \(i=0,\ldots,m\) we have an \(n_{i}\in\mathbb{N}\) and a strictly increasing function \(\zeta_{i}\in\mathcal{C}_{n_{i}}\), such that:_
* (i) \(\zeta_{i}(n_{i})\geqslant e\) _and_ \(\zeta_{i}(t)\to+\infty\) _as_ \(t\to+\infty\)_;_
* (ii) _the germ of_ \(\zeta_{i}\) _is hardian with_ \(H\)_-hardian compositional inverse;_
* (iii) \(\left\{\zeta_{i}(n):\ n\geqslant n_{i}\right\}=\left\{t\geqslant\zeta_{i}(n_{i})\ :\ y^{(i)}(t)=0\right\}\)_;_
* (iv) \(\zeta_{i}^{\rm inv}-\zeta_{0}^{\rm inv}\preccurlyeq 1\)_;_
* (v) _if_ \(i<m\)_, then_ \(\zeta_{i}(n)<\zeta_{i+1}(n)<\zeta_{i}(n+1)\) _for all_ \(n\geqslant n_{i+1}\)_._
Proof.: We arrange that \(H\) is maximal. For simplicity we only do the case \(m=1\); the general case just involves more notation. For \(\zeta_{0}\) we take a function \(\zeta\) as constructed in the proof of Lemma 7.5.22, and also take \(n_{0}\) as in that proof, so clauses (i), (ii), (iii) are satisfied for \(i=0\). Set \(A:=\partial^{2}+a\partial+b\in H[\partial]\). As \(b\neq 0\), we have the monic operator \(A^{\partial}\in H[\partial]\) of order \(2\) as defined before Lemma 2.5.13, with \(A^{\partial}(y^{\prime})=0\). Take a pair \((g_{1},\phi_{1})\) of elements of \(H\) parametrizing \(\ker_{\mathcal{C}^{<\infty}}A^{\partial}\). Then with \(A^{\partial}\), \(g_{1}\), \(\phi_{1}\) instead of \(A\), \(g\), \(\phi\), and taking suitable representatives of the relevant germs, the proof of Lemma 7.5.22 provides likewise an \(n_{1}\in\mathbb{N}\), a \(k_{1}\in\frac{1}{2}+\mathbb{Z}\), and a strictly increasing function \(\zeta_{1}\in\mathcal{C}_{n_{1}}\) satisfying clauses (i), (ii), (iii) for \(i=1\) and with compositional inverse given by \((\phi_{1}/\pi)-k_{1}\).
Recall that \(\mathrm{U}=\mathrm{U}_{K}\subseteq\mathcal{C}^{<\infty}[i]\) is a differential integral domain extending \(K\), and that \(\ker_{\mathcal{C}^{<\infty}[i]}B=\ker_{\mathrm{U}}B\) for all \(B\in K[\partial]^{\neq}\), by Theorem 7.4.1. Therefore \(\ker_{\mathcal{C}^{<\infty}[i]}A^{\partial}=\left\{y^{\prime}:y\in\ker_{ \mathcal{C}^{<\infty}[i]}A\right\}\) by Lemma 2.5.13, so in view of \(A\in H[\partial]\),
\[\ker_{\mathcal{C}^{<\infty}}A^{\partial}\ =\ \left\{y^{\prime}:y\in\ker_{ \mathcal{C}^{<\infty}}A\right\}.\]
Now (iv) for \(i=1\) follows from Lemmas 7.5.2 and 7.5.7.
As to (v), the remark preceding the lemma gives \(\ell\in\mathbb{N}\) and \(p\in\mathbb{Z}\) such that for all \(n\geqslant n_{1}+\ell\) we have: \(n+p\geqslant n_{0}\) and \(\zeta_{1}(n)\) is the unique zero of \(y^{\prime}\) in the interval \(\big{(}\zeta(n+p),\zeta(n+p+1)\big{)}\). Set \(n_{1}^{*}:=n_{1}+\ell+|p|\), and modify \(\zeta_{1}\) to \(\zeta_{1}^{*}\colon[n_{1}^{*},+\infty)\to\mathbb{R}\) by setting \(\zeta_{1}^{*}(t)=\zeta_{1}(t-p)\). Then \(\zeta(n)<\zeta_{1}^{*}(n)<\zeta(n+1)\) for all \(n\geqslant n_{1}^{*}\). The compositional inverse of \(\zeta_{1}^{*}\) is given by \((\phi_{1}/\pi)-(k_{1}-p)\). Thus replacing \(\zeta_{1}\), \(n_{1}\), \(k_{1}\) by \(\zeta_{1}^{*}\), \(n_{1}^{*}\), \(k_{1}-p\), all clauses are satisfied.
Define \(N\colon\mathbb{R}^{\geqslant e}\to\mathbb{N}\) by
\[N(t)\ :=\ \big{|}[e,t]\cap y^{-1}(0)\big{|}\ =\ \min\{n:s_{n}>t\},\]
so for \(n\geqslant 1\): \(N(t)=n\Leftrightarrow s_{n-1}\leqslant t<s_{n}\). Thus \(N(t)\to+\infty\) as \(t\to+\infty\); in fact:
**Lemma 7.5.30**.: \(N\sim\phi/\pi\)_._
Proof.: Take \(n_{0}\), \(k_{0}\) as in the proof of Lemma 7.5.22, so \(\phi(s_{n})=(k_{0}+n)\pi\) for \(n\geqslant n_{0}\). Let \(t\geqslant e\) be such that \(N(t)\geqslant n_{0}+1\); then \(s_{N(t)-1}\leqslant t<s_{N(t)}\) and thus
\[N(t)+k_{0}-1=\phi(s_{N(t)-1})/\pi\leqslant\phi(t)/\pi<\phi(s_{N(t)})/\pi=N(t)+k_{0}.\]
This yields \(N\sim\phi/\pi\).
The quantity \(N(t)\) has been studied extensively in connection with second-order linear differential equations; see [91, Chapter IX, §5, and the literature quoted on p. 401]. For example, the lemma below is a consequence of a result due to Wiman [210] that holds under more general assumptions (see [91, Chapter IX, Corollary 5.3]), but also follows easily using our Hardy field calculus. Here we assume \(a=0\), so \(f=4b\in\mathcal{C}_{e}\).
**Lemma 7.5.31**.: _Suppose \(f(t)>0\) for all \(t\geqslant e\), and \(\big{(}1/\sqrt{f}\big{)}^{\prime}\prec 1\). Then_
\[N(t)\ \sim\ \frac{1}{2\pi}\int_{e}^{t}\sqrt{f(s)}\,ds\quad\text{as $t\to+\infty$.}\]
Proof.: From \(\big{(}1/\sqrt{f}\big{)}^{\prime}\prec 1\) we get \(f^{\dagger}\prec\sqrt{f}\). Now \(f\) is hardian, so \(f\preccurlyeq 1/x^{2}\) would give \(f^{\dagger}\succcurlyeq 1/x\), which together with \(f\preccurlyeq 1/x^{2}\) contradicts \(f^{\dagger}\prec\sqrt{f}\). Thus \(f\succ 1/x^{2}\). For the rest of the argument we arrange that \(H\) is maximal with \((g,\phi)\in H^{2}\). Corollary 7.5.10 yields a pair \((g_{1},\phi_{1})\in H^{2}\) parametrizing \(\ker_{\mathcal{C}^{<\infty}}(\partial^{2}+b)\) such that \(\phi_{1}^{\prime}\sim(1/2)\sqrt{f}\). Then \(\phi-\phi_{1}\in\mathbb{R}\) by Lemma 7.5.7, so \(\phi^{\prime}\sim(1/2)\sqrt{f}\). Let \(\phi_{2}\in\mathcal{C}_{e}^{1}\) be given by \(\phi_{2}(t)=(1/2)\int_{e}^{t}\sqrt{f(s)}\,ds\). Then \(\phi_{2}^{\prime}=(1/2)\sqrt{f}\), so (the germ of) \(\phi_{2}\) lies in \(H\) and \(\sqrt{f}\succ 1/x\), so \(\phi_{2}>\mathbb{R}\). Hence by [ADH, 9.1.4(ii)] we have \(\phi\sim\phi_{2}\). Now apply Lemma 7.5.30.
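_Remark_.: The Airy-type equation \(Y^{\prime\prime}+xY=0\) gives a quick check: here \(a=0\), \(b=x\), \(f=4x>0\), \(\bigl(1/\sqrt{f}\bigr)^{\prime}\asymp x^{-3/2}\prec 1\), and with \(e=1\) the lemma predicts \(N(t)\sim\frac{1}{2\pi}\int_{1}^{t}2\sqrt{s}\,ds\sim\frac{2}{3\pi}\,t^{3/2}\). A numerical sketch (illustrative only):

```python
# Count the zeros of Ai(-x) on [1, t] and compare with (2/(3*pi)) * t^(3/2).
import numpy as np
from scipy.special import airy

t = 100.0
xs = np.linspace(1, t, 1_000_000)
Ai = airy(-xs)[0]
N = int(np.count_nonzero(np.sign(Ai[1:]) != np.sign(Ai[:-1])))
print(N, 2/(3*np.pi) * t**1.5)   # the two values agree to within O(1)
```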
In view of Lemma 5.2.10 one may ask to what extent the results in this subsection generalize to higher-order linear differential equations over Hardy fields.
**When is the perfect hull \(\omega\)-free?** Here we use the lemmas that made up the proof of Theorem 7.5.1 to characterize \(\omega\)-freeness of the (d-)perfect hull of \(H\):
**Theorem 7.5.32**.: _The following are equivalent:_
* (i) \(H\) _is not_ \(\lambda\)_-free or_ \(\overline{\omega}(H)=H\setminus\sigma\big{(}\Gamma(H)\big{)}^{\uparrow}\)_;_
* (ii) \(\mathrm{D}(H)\) _is_ \(\omega\)_-free;_
* (iii) \(\mathrm{E}(H)\) _is_ \(\omega\)_-free._
In connection with this theorem recall that by Corollary 7.5.9, a d-perfect Hardy field is Schwarz closed iff it is \(\omega\)-free, so in (ii), (iii) we could have also written "Schwarz closed" instead of "\(\omega\)-free". The implication (i) \(\Rightarrow\) (ii) was shown already in Lemma 5.5.37. To show the contrapositive of (iii) \(\Rightarrow\) (i) suppose \(H\) is \(\lambda\)-free and \(\overline{\omega}(H)\neq H\setminus\sigma\big{(}\Gamma(H)\big{)}^{\uparrow}\). Since \(\overline{\omega}(H)\subseteq H\setminus\sigma\big{(}\Gamma(H)\big{)}^{\uparrow}\) this yields \(\omega\in H\) with \(\overline{\omega}(H)<\omega<\sigma\big{(}\Gamma(H)\big{)}\), and so by Lemma 7.5.33 below, \(\mathrm{E}(H)\) is not \(\omega\)-free. The proof of this lemma relies on Corollary 7.5.8, but additionally draws on some results from Sections 5.3 and 5.6.
**Lemma 7.5.33**.: _Suppose \(H\) is \(\lambda\)-free, and \(\omega\in H\), \(\overline{\omega}(H)<\omega<\sigma\big{(}\Gamma(H)\big{)}\). Then_
\[\omega(E)<\omega<\sigma\big{(}\Gamma(E)\big{)}\qquad\text{for $E:=\mathrm{E}(H)$,}\]
_hence \(E\) is not \(\omega\)-free._
Proof.: We may replace \(H\) by any \(\lambda\)-free Hardy subfield \(L\) of \(E\) containing \(H\) such that \(\Gamma^{<}\) is cofinal in \(\Gamma^{<}_{L}\), by [ADH, 11.8.14, 11.8.29]. Using this observation and Proposition 1.4.3 we replace \(H\) by \(H(\mathbb{R})\) to arrange \(H\supseteq\mathbb{R}\). Next we replace \(H\) by \(\operatorname{Li}(H)\subseteq E\) to arrange that \(H\) is Liouville closed, using Proposition 1.4.15. Now \(\omega(E)=\overline{\omega}(E)\) is downward closed and \(\overline{\omega}(E)\cap H=\overline{\omega}(H)\), so \(\omega(E)<\omega\). Towards a contradiction, assume \(\omega\in\sigma\big{(}\Gamma(E)\big{)}^{\uparrow}\). Take \(\gamma\in\Gamma(E)\) with \(\sigma(\gamma)=\omega\). Corollaries 5.6.3 and 5.6.5 also yield a germ \(\widetilde{\gamma}\in(\mathcal{C}^{<\infty})^{\times}\setminus\{\gamma\}\) with \(\widetilde{\gamma}>0\) and \(\sigma(\widetilde{\gamma})=\omega\), and a maximal Hardy field extension \(M\) of \(H\) containing \(\widetilde{\gamma}\). Since \(M\) is \(\omega\)-free (by Theorem 5.6.2) and \(\omega\notin\overline{\omega}(M)\), we have \(\omega\in\sigma\big{(}\Gamma(M)\big{)}^{\uparrow}\) by Corollary 5.5.36 and so \(\widetilde{\gamma}\in\Gamma(M)\) by [ADH, 11.8.31]. Since \(E\subseteq M\) we have \(\Gamma(E)\subseteq\Gamma(M)\). Then from \(\sigma(\gamma)=\omega=\sigma(\widetilde{\gamma})\) we obtain \(\gamma=\widetilde{\gamma}\) by [ADH, 11.8.29], a contradiction.
_Remark 7.5.34_.: Suppose \(H\supseteq\mathbb{R}\) is Liouville closed and \(\omega\in H\) satisfies
\[\overline{\omega}(H)\ <\ \omega\ <\ \sigma\big{(}\Gamma(H)\big{)}.\]
Then the uniqueness in Theorem 7.5.1 fails, by Corollaries 5.6.3 and 5.6.5: any \(\gamma>0\) in any d-maximal Hardy field extension \(M\) of \(H\) with \(\sigma(\gamma)=\omega\) yields a pair \((g,\phi)\) parametrizing \(V:=\ker_{\mathcal{C}^{<\infty}}(4\partial^{2}+\omega)\) where \(g:=1/\sqrt{\gamma}\) and \(\phi\in M\), \(\phi^{\prime}=\frac{1}{2}\gamma\).
To finish the proof of Theorem 7.5.32 it remains to show the implication (ii) \(\Rightarrow\) (iii), which we do in Lemma 7.5.39 below. (This implication holds trivially if \(H\) is bounded, by Theorem 5.4.20.) We precede this lemma with some observations.
If \(\phi\) is active in \(H\), then the pre-\(H\)-field \(H^{\phi}\) has small derivation \(\updelta=\phi^{-1}\partial\); so if \(h\in H\), \(h\prec 1\), then \(\updelta^{n}(h)\prec 1\) for all \(n\). The next lemma yields a variant of this when \(h\) is multiplied by a germ in \(\mathcal{C}^{<\infty}\) with sufficiently small derivatives:
**Lemma 7.5.35**.: _Let \(y=hz\), \(h\in H\), \(h\prec 1\) and \(z\in\mathcal{C}^{<\infty}\), \(z\preccurlyeq 1\) and \(z^{(j)}\preccurlyeq\mathrm{e}^{-x}\) for \(j=1,\ldots,n\). Let also \(\phi\) be active in \(H\) with \(0<\phi\preccurlyeq 1/x\), and \(\updelta=\phi^{-1}\partial\). Then \(\updelta^{j}(y)\prec 1\) for \(j=0,\ldots,n\)._
Proof.: Let \(j\), \(k\) with \(k\leqslant j\) range over \(\{1,\ldots,n\}\). By the Product Rule for the derivation \(\updelta\) and the remark preceding the lemma it is enough to show that \(\updelta^{j}(z)\preccurlyeq 1\). Let \(R^{j}_{k}\in\mathbb{Q}\{X\}\) be as in Lemma 5.3.4. Now \(\lambda:=-\phi^{\dagger}\asymp 1/x\), hence \(R^{j}_{k}(\lambda)\preccurlyeq 1\), and \((\phi^{-j})^{\dagger}=-j\phi^{\dagger}\asymp\lambda\prec 1=(\mathrm{e}^{x})^{\dagger}\), hence also \(\phi^{-j}\preccurlyeq\mathrm{e}^{x}\). This yields
\[\updelta^{j}(z)\ =\ \phi^{-j}\big{(}R^{j}_{j}(\lambda)z^{(j)}+\cdots+R^{j}_{1}( \lambda)z^{\prime}\big{)}\preccurlyeq\phi^{-j}\,\mathrm{e}^{-x}\prec 1,\]
which is more than enough.
We have an ample supply of oscillating germs \(z\) as in Lemma 7.5.35:
**Lemma 7.5.36**.: _Let \(z:=\mathrm{e}^{-x}\sin x\in\mathcal{C}^{\omega}\); then \(|z^{(n)}|\leqslant 2^{n}\,\mathrm{e}^{-x}\) for all \(n\)._ (Indeed, \(z=\operatorname{Im}\mathrm{e}^{(i-1)x}\), so \(z^{(n)}=\operatorname{Im}\big{(}(i-1)^{n}\,\mathrm{e}^{(i-1)x}\big{)}\), and thus \(|z^{(n)}|\leqslant|i-1|^{n}\,\mathrm{e}^{-x}=2^{n/2}\,\mathrm{e}^{-x}\leqslant 2^{n}\,\mathrm{e}^{-x}\).)
In the next lemma and its corollary our Hardy field \(H\) contains \(\mathbb{R}\) and is real closed, and \(\widehat{H}\) is an immediate Hardy field extension of \(H\). We now have the following perturbation result:
**Lemma 7.5.37**.: _Suppose \(H\) is ungrounded, \(\Psi_{H}^{>0}:=\Psi_{H}\cap\Gamma_{H}^{>}\neq\emptyset\). Let \(\widehat{f}\in\widehat{H}\setminus H\) and \(Z(H,\widehat{f})=\emptyset\). Let \(g\in H\), \(vg>v(\widehat{f}-H)\) and \(z\in\mathcal{C}^{<\infty}\), \(z\preccurlyeq 1\), \(z^{(n)}\preccurlyeq\mathrm{e}^{-x}\) for all \(n\geqslant 1\). Then \(f:=\widehat{f}+gz\in\mathcal{C}^{<\infty}\) is hardian over \(H\), and we have an isomorphism \(H\langle f\rangle\to H\langle\widehat{f}\rangle\) of \(H\)-fields over \(H\) sending \(f\) to \(\widehat{f}\)._
Proof.: The hypothesis \(\Psi_{H}^{>0}\neq\emptyset\) and [ADH, 9.2.15] yield an active \(\phi\) in \(H\) with \(\phi^{\dagger}\asymp\phi\). But also \(t^{\dagger}\asymp t\) for \(t:=x^{-1}\) in \(H(x)\), so \(\phi\asymp t\) by the uniqueness in [ADH, 9.2.15]. Below \(\phi\) ranges over the active elements of \(H\) such that \(0<\phi\prec t\), and \(\updelta:=\phi^{-1}\partial\). Let \(h\in H\), \(\mathfrak{m}\in H^{\times}\) be such that \(\widehat{f}-h\preccurlyeq\mathfrak{m}\); by Corollary 6.7.12 it is enough to show that then \(\updelta^{n}\big{(}\frac{f-h}{\mathfrak{m}}\big{)}\preccurlyeq 1\) for all \(n\). Now \(u:=\frac{\widehat{f}-h}{\mathfrak{m}}\in\widehat{H}\), \(u\preccurlyeq 1\), and the valued differential field \(\widehat{H}^{\phi}\) has small derivation, so \(\updelta^{n}(u)\preccurlyeq 1\) for all \(n\). Moreover, \(g/\mathfrak{m}\in H\), \(g/\mathfrak{m}\prec 1\), so \(\updelta^{n}\big{(}\frac{g}{\mathfrak{m}}z\big{)}\prec 1\) for all \(n\), by Lemma 7.5.35. Thus \(\updelta^{n}\big{(}\frac{f-h}{\mathfrak{m}}\big{)}=\updelta^{n}\big{(}\frac{\widehat{f}-h}{\mathfrak{m}}\big{)}+\updelta^{n}\big{(}\frac{g}{\mathfrak{m}}z\big{)}\preccurlyeq 1\) for all \(n\).
Using Lemmas 7.5.36 and 7.5.37, and results of [ADH, 11.4] we now obtain:
**Corollary 7.5.38**.: _Suppose \(H\) is ungrounded with \(\Psi_{H}^{>0}\neq\emptyset\). Let \((f_{\rho})\) be a divergent pc-sequence in \(H\) of \(\mathrm{d}\)-transcendental type over \(H\) and with a pseudolimit in \(\mathrm{E}(H)\). Then \((f_{\rho})\) is a c-sequence._
Proof.: Let \(f_{\rho}\leadsto\widehat{f}\in\mathrm{E}(H)\). Then by [ADH, 11.4.7, 11.4.13] the Hardy field \(H\langle\widehat{f}\rangle\) is an immediate extension of \(H\), and \(Z(H,\widehat{f})=\emptyset\). Suppose \((f_{\rho})\) is not a c-sequence. Then we can take \(g\in H^{\times}\) with \(vg>v(H-\widehat{f})\). By Lemmas 7.5.36 and 7.5.37, the germ \(f:=\widehat{f}+g\operatorname{e}^{-x}\sin x\) generates a Hardy field \(H\langle f\rangle\) over \(H\); however, no maximal Hardy field extension of \(H\) contains both \(\widehat{f}\) and \(f\), since the difference \(f-\widehat{f}=g\operatorname{e}^{-x}\sin x\) oscillates, and this contradicts \(\widehat{f}\in\mathrm{E}(H)\).
We can now supply the proof of the still missing implication (ii) \(\Rightarrow\) (iii) in Theorem 7.5.32:
**Lemma 7.5.39**.: _Suppose \(H\) is \(\omega\)-free. Then \(\mathrm{E}(H)\) is also \(\omega\)-free._
Proof.: Since \(E:=\mathrm{E}(H)\) is Liouville closed and contains \(\mathbb{R}\) we may replace \(H\) by the Hardy subfield \(\mathrm{Li}\big{(}H(\mathbb{R})\big{)}\) of \(E\), which remains \(\omega\)-free by Theorem 1.4.1, and arrange that \(H\supseteq\mathbb{R}\) and \(H\) is Liouville closed (so Corollary 7.5.38 applies). Towards a contradiction, suppose \(\omega\in E\), \(\omega(E)<\omega<\sigma\big{(}\Gamma(E)\big{)}\); then \(\omega(H)<\omega<\sigma\big{(}\Gamma(H)\big{)}\). Choose a logarithmic sequence \((\ell_{\rho})\) for \(H\) and define \(\omega_{\rho}:=\omega(-\ell_{\rho}^{\dagger\dagger})\). Then \((\omega_{\rho})\) is a divergent pc-sequence in \(H\) with \(\omega_{\rho}\leadsto\omega\), by [ADH, 11.8.30]. By [ADH, 13.6.3], \((\omega_{\rho})\) is of d-transcendental type over \(H\). Its width is \(\big{\{}\gamma\in(\Gamma_{H})_{\infty}:\gamma>2\Psi_{H}\big{\}}\) by [ADH, 11.7.2], which contains \(v(1/x^{4})=2v\big{(}(1/x)^{\prime}\big{)}\), so \((\omega_{\rho})\) is not a c-sequence, contradicting Corollary 7.5.38.
Next we describe for \(j=1,2\) a \(\lambda\)-free Hardy field \(H_{(j)}\supseteq\mathbb{R}\) and \(\omega_{(j)}\in H_{(j)}\) such that \(\omega\big{(}\Lambda(H_{(j)})\big{)}<\omega_{(j)}<\sigma\big{(}\Gamma(H_{(j)})\big{)}\) (so \(H_{(j)}\) is not \(\omega\)-free by [ADH, 11.8.30]), and

* (1) \(\omega_{(1)}\in\overline{\omega}(H_{(1)})\);
* (2) \(\omega_{(2)}\notin\overline{\omega}(H_{(2)})\).
It follows that \(\overline{\omega}(H_{(1)})=H_{(1)}\setminus\sigma\big{(}\Gamma(H_{(1)})\big{)}^{\uparrow}\) by Lemma 5.5.35, hence condition (i) in Theorem 7.5.32 is satisfied for \(H=H_{(1)}\), but it is _not_ satisfied for \(H=H_{(2)}\); thus \(\mathrm{E}(H_{(1)})\) is \(\omega\)-free, whereas \(\mathrm{E}(H_{(2)})\) is not.
To construct such \(H_{(j)}\) and \(\omega_{(j)}\in H_{(j)}\) we start with a hardian translogarithmic germ \(\ell_{\omega}\) (see the remarks before Proposition 5.6.6), and set

\[\upgamma\ :=\ \ell_{\omega}^{\dagger},\quad\lambda\ :=\ -\upgamma^{\dagger},\quad\omega_{(1)}\ :=\ \omega(\lambda),\quad\omega_{(2)}\ :=\ \sigma(\upgamma)\ =\ \omega_{(1)}+\upgamma^{2}.\]
Using [ADH, Sections 11.5, 11.7] we see that the Hardy field \(E:=\mathbb{R}(\ell_{0},\ell_{1},\ell_{2},\dots)\) is \(\omega\)-free and that the elements \(\omega_{(1)},\omega_{(2)}\in M:=E\langle\ell_{\omega}\rangle\) are pseudolimits of the pc-sequence \((\omega_{n})\) in \(E\). For \(j=1,2\), we consider the Hardy subfield
\[H_{(j)}\ :=\ E\langle\omega_{(j)}\rangle\]
of \(M\), an immediate \(\lambda\)-free extension of \(E\) by [ADH, 13.6.3, 13.6.4], and therefore \(\omega\bigl{(}\Lambda(H_{(j)})\bigr{)}<\omega_{(j)}<\sigma\bigl{(}\Gamma(H_{(j)})\bigr{)}\) by [ADH, 11.8.30]. Moreover
\[\overline{\omega}\bigl{(}H_{(j)}\bigr{)}\ =\ \overline{\omega}(M)\cap H_{(j)}\ \ \text{for}\ j=1,2,\]
so (1) holds since \(\omega_{(1)}\in\omega(M)\subseteq\overline{\omega}(M)\), whereas \(\omega_{(2)}\in\sigma\bigl{(}\Gamma(M)\bigr{)}\subseteq M\setminus\overline{\omega}(M)\), hence (2) holds.
_Example 7.5.40_.: Set \(H:=\operatorname{E}(H_{(2)})\). Then the Hardy field \(H\) is perfect, so \(H\supseteq\mathbb{R}\) is a Liouville closed Hardy field with \(\operatorname{I}(K)\subseteq K^{\dagger}\), but \(H\) is not \(\omega\)-free. This makes good on a promise made before Lemma 4.4.32.
### Antiderivatives of rational functions as phase functions
In this subsection \(H=\mathbb{R}(x)\), so \(K=H[i]=\mathbb{C}(x)\). If \(f\in H\setminus\overline{\omega}(H)\) and \((g,\phi)\in\operatorname{Li}(H)^{2}\) parametrizes \(\ker_{\mathcal{C}^{<\infty}}(4\partial^{2}+f)\), then \((\phi^{\prime})^{2}\in H\) by Corollaries 7.5.13, 7.5.15, and 7.5.21. In Corollary 7.5.51 below we give a condition on such \(f\), \(g\), \(\phi\) that ensures \(\phi^{\prime}\in H\), to be used in Section 7.6. We precede this with remarks about ramification in quadratic extensions of \(K\). So let \(L\) be a field extension of \(K\) with \([L:K]=2\).
**Lemma 7.5.41**.: _Up to multiplication by \(-1\), there is a unique \(y\in L\) such that \(L=K(y)\) and \(y^{2}=p(x)\) where \(p\in\mathbb{C}[X]\) is monic and separable._
Proof.: By [ADH, 1.3.11], \(A:=\mathbb{C}[x]\) is integrally closed, so [ADH, 1.3.12, 1.3.13] yield a \(y\in L\) with minimum polynomial \(P\in A[Y]\) over \(K\) such that \(L=K(y)\). Take \(a,b\in A\) with \(P=Y^{2}+aY+b\). Replacing \(a\), \(b\), \(y\) by \(0\), \(b-(a/2)^{2}\), \(y+(a/2)\), respectively, we arrange \(a=0\). Thus \(y^{2}=p(x)\) for \(p\in\mathbb{C}[X]\) with \(p(x)=-b\), and replacing \(y\) by \(cy\) for suitable \(c\in\mathbb{C}^{\times}\) we arrange that \(p\) is monic. If \(c\in\mathbb{C}\) and \(p\in(X-c)^{2}\mathbb{C}[X]\), then we may also replace \(p\), \(y\) by \(p/(X-c)^{2}\), \(y/(x-c)\), respectively. In this way we arrange that \(p\) is separable. Suppose \(L=K(z)\) and \(z^{2}=q(x)\) where \(q\in\mathbb{C}[X]\) is monic and separable. Take \(r,s\in K\) with \(z=r+sy\). Then \(s\neq 0\), and \(q(x)=z^{2}=\bigl{(}r^{2}+s^{2}p(x)\bigr{)}+(2rs)y\), hence \(r=0\) and so \(q(x)=s^{2}p(x)\). Since \(p\), \(q\) are monic and separable, this yields \(s^{2}=1\) and thus \(z=-y\) or \(z=y\).
In the following \(y\), \(p\) are as in Lemma 7.5.41. For each \(c\in\mathbb{C}\) we have the valuation \(v_{c}\colon K^{\times}\to\mathbb{Z}\) that is trivial on \(\mathbb{C}\) with \(v_{c}(x-c)=1\), and we also have the valuation \(v_{\infty}\colon K^{\times}\to\mathbb{Z}\) that is trivial on \(\mathbb{C}\) with \(v_{\infty}(x^{-1})=1\) [ADH, 3.1.30]. Given \(f\in K^{\times}\) there are only finitely many \(c\in\mathbb{C}_{\infty}:=\mathbb{C}\cup\{\infty\}\) such that \(v_{c}(f)\neq 0\); moreover, \(\sum_{c\in\mathbb{C}_{\infty}}v_{c}(f)=0\), with \(f\in\mathbb{C}^{\times}\) iff \(v_{c}(f)=0\) for all \(c\in\mathbb{C}\). Let \(c\in\mathbb{C}_{\infty}\), and equip \(K\) with the valuation ring \(\mathcal{O}_{c}\) of \(v_{c}\). By [ADH, 3.1.15, 3.1.21], either exactly one or exactly two valuation rings of \(L\) lie over \(\mathcal{O}_{c}\). The residue morphism \(\mathcal{O}_{c}\to\operatorname{res}(K)\) restricts to an isomorphism \(\mathbb{C}\to\operatorname{res}(K)\), and equipping \(L\) with a valuation ring lying over \(\mathcal{O}_{c}\), composition with the natural inclusion \(\operatorname{res}(K)\to\operatorname{res}(L)\) yields an isomorphism \(\mathbb{C}\to\operatorname{res}(L)\); thus the valued field extension \(L\supseteq K\) is immediate iff \(L\) is unramified over \(K\), that is, \(\Gamma_{L}=\Gamma=\mathbb{Z}\).
**Lemma 7.5.42**.: _Suppose \(c\neq\infty\). If \(p(c)=0\), then only one valuation ring of \(L\) lies over \(\mathcal{O}_{c}\), and equipping \(L\) with this valuation ring we have \([\Gamma_{L}:\Gamma]=2\). If \(p(c)\neq 0\)
_then there are exactly two valuation rings of \(L\) lying over \(\mathcal{O}_{c}\), and equipped with any one of these valuation rings, \(L\) is unramified over \(K\)._
Proof.: If \(p(c)=0\), then \(v_{c}(p)=1\), so by [ADH, 3.1.28] there is a unique valuation ring \(\mathcal{O}_{L}\) of \(L\) lying over \(\mathcal{O}_{c}\), and equipping \(L\) with \(\mathcal{O}_{L}\) we have \([\Gamma_{L}:\Gamma]=2\). Now suppose \(p(c)\neq 0\). We identify \(K\) with its image under the embedding of \(K\) into the valued field \(K^{\mathrm{c}}:=\mathbb{C}(\!(t)\!)\) of Laurent series over \(\mathbb{C}\) which is the identity on \(\mathbb{C}\) and sends \(x-c\) to \(t\). Take \(\alpha\in\mathbb{C}^{\times}\) with \(\alpha^{2}=p(c)\). Hensel's Lemma [ADH, 3.3.5] yields \(z\in K^{\mathrm{c}}\) with \(z^{2}=p(x)\) and \(z\sim\alpha\). Let \(\mathcal{O}_{+}\), \(\mathcal{O}_{-}\) be the preimages of the valuation ring of the valued subfield \(K(z)\) of \(K^{\mathrm{c}}\) under the field isomorphisms \(L=K(y)\to K(z)\) over \(K\) with \(y\mapsto z\) and \(y\mapsto-z\), respectively. Then \(\mathcal{O}_{+}\neq\mathcal{O}_{-}\) lie over \(\mathcal{O}_{c}\), and each turns \(L\) into an immediate extension of \(K\).
In the next lemma we set \(d:=\deg p\), so \(d\geqslant 1\).
**Lemma 7.5.43**.: _If \(d\) is odd, then only one valuation ring of \(L\) lies over \(\mathcal{O}_{\infty}\), and equipping \(L\) with this valuation ring we have \([\Gamma_{L}:\Gamma]=2\). If \(d\) is even, then there are exactly two valuation rings of \(L\) lying over \(\mathcal{O}_{\infty}\), and equipped with any one of these valuation rings, \(L\) is unramified over \(K\)._
Proof.: We have \(v_{\infty}(p)=-d\). Hence if \(d\) is odd, then we can argue using [ADH, 3.1.28] as in the proof of Lemma 7.5.42. Suppose \(d\) is even, so with \(e=d/2\) we have \((y/x^{e})^{2}=p(x)/x^{d}\sim 1\). Identify \(K\) with its image under the embedding of \(K\) into the valued field \(K^{\mathrm{c}}:=\mathbb{C}(\!(t)\!)\) of Laurent series over \(\mathbb{C}\) which is the identity on \(\mathbb{C}\) and sends \(x^{-1}\) to \(t\). Then [ADH, 3.3.5] yields \(z\in K^{\mathrm{c}}\) with \(z\sim 1\) and \(z^{2}=p(x)/x^{d}\). Let \(\mathcal{O}_{+}\), \(\mathcal{O}_{-}\) be the preimages of the valuation ring of the valued subfield \(K(z)\) of \(K^{\mathrm{c}}\) under the field isomorphisms \(L\to K(z)\) over \(K\) with \(y\mapsto x^{e}z\) and \(y\mapsto-x^{e}z\), respectively. Then \(\mathcal{O}_{+}\neq\mathcal{O}_{-}\) are valuation rings of \(L\) lying over \(\mathcal{O}_{\infty}\), each of which turns \(L\) into an immediate extension of \(K\).
**Corollary 7.5.44**.: _There are at least two \(c\in\mathbb{C}_{\infty}\) such that some valuation ring of \(L\) lying over \(\mathcal{O}_{c}\) makes \(L\) ramified over \(K\)._
Next we let \(C\) be any field of characteristic zero and consider the d-valued Hahn field \(C(\!(t^{\mathbb{Q}})\!)\) with its strongly additive \(C\)-linear derivation satisfying \({t^{\prime}=1}\). We let \(q\), \(r\), \(s\) range over \(\mathbb{Q}\), and \(z=\sum_{q}z_{q}t^{q}\in C(\!(t^{\mathbb{Q}})\!)^{\times}\) with all \(z_{q}\in C\). Put
\[q_{0}\ :=vz\ =\ \min\operatorname{supp}z\in\mathbb{Q},\]
so \(z\sim z_{q_{0}}t^{q_{0}}\). If \(z\notin C(\!(t)\!)\), then we also set \(q_{1}:=\min\bigl{(}(\operatorname{supp}z)\setminus\mathbb{Z}\bigr{)}\in \mathbb{Q}\setminus\mathbb{Z}\), so \(q_{1}\geqslant q_{0}\). In Lemmas 7.5.46 and 7.5.47 below we give sufficient conditions for \(z\) to be in \(C(\!(t)\!)\). Set
\[w\ :=\ z^{2}\ =\ \sum_{q}w_{q}t^{q}\quad\text{where}\quad w_{q}\ =\ \sum_{r+s=q}z_{r}z_{s},\]
so \(w\sim z_{q_{0}}^{2}t^{2q_{0}}\), and observe:
**Lemma 7.5.45**.: _If \(w\notin C(\!(t)\!)\) (and so \(z\notin C(\!(t)\!)\)), then_
\[\min\bigl{(}(\operatorname{supp}w)\setminus\mathbb{Z}\bigr{)}=q_{0}+q_{1}, \quad w_{q_{0}+q_{1}}=2z_{q_{0}}z_{q_{1}}.\]
**Lemma 7.5.46**.: _Suppose \(\omega(z)\in t^{-1}C[[t]]\). Then \(z\in C(\!(t)\!)\)._
Proof.: Put \(u:=z^{\prime}=\sum_{q}u_{q}t^{q}\), \(u_{q}=(q+1)z_{q+1}\). If \(q_{0}\neq 0\), then \(u\sim\ q_{0}z_{q_{0}}t^{q_{0}-1}\). Hence \(q_{0}\geqslant-1\): otherwise \(-\omega(z)=2u+w\sim z_{q_{0}}^{2}\,t^{2q_{0}}\), contradicting \(\omega(z)\preccurlyeq t^{-1}\preccurlyeq t^{-2}\). Moreover, if \(q_{0}=-1\), then \((2u+w)-(-2z_{-1}+z_{-1}^{2})t^{-2}\prec t^{-2}\) and so \(z_{-1}=2\). Towards a contradiction, suppose \(z\notin C(\!(t)\!)\). We have \(u\notin C(\!(t)\!)\). Indeed
\[\min\bigl{(}(\operatorname{supp}u)\setminus\mathbb{Z}\bigr{)}=q_{1}-1,\quad u _{q_{1}-1}=q_{1}z_{q_{1}}. \tag{7.5.2}\]
Also \(w=-\omega(z)-2u\notin C(\!(t)\!)\), and by the previous lemma
\[\min\bigl{(}(\operatorname{supp}w)\setminus\mathbb{Z}\bigr{)}=q_{0}+q_{1}, \quad w_{q_{0}+q_{1}}=2z_{q_{0}}z_{q_{1}}. \tag{7.5.3}\]
From (7.5.2), (7.5.3), and \(2u+w\in C(\!(t)\!)\) we get \(q_{1}-1=q_{0}+q_{1}\) and \(2q_{1}z_{q_{1}}=-2z_{q_{0}}z_{q_{1}}\), hence \(q_{0}=-1\), \(q_{1}=-z_{q_{0}}\). Thus \(q_{1}=-2<-1=q_{0}\), a contradiction.
In [ADH, p. 519] we defined \(\omega^{\phi}:E\to E\) for a differential field \(E\) and \(\phi\in E^{\times}\).
**Lemma 7.5.47**.: _Suppose \(\omega^{-1/t^{2}}(z)\in C[[t]]^{\times}\). Then \(z\in C(\!(t)\!)\)._
Proof.: Put \(u:=-t^{2}z^{\prime}=\sum_{q}u_{q}t^{q}\) where \(u_{q}=-(q-1)z_{q-1}\). If \(q_{0}\neq 0\), then \(u\sim-q_{0}z_{q_{0}}t^{q_{0}+1}\). We must have \(q_{0}=0\): otherwise, if \(q_{0}<1\), then \(2q_{0}<q_{0}+1\) and so \(-\omega^{-1/t^{2}}(z)=2u+w\sim z_{q_{0}}^{2}t^{2q_{0}}\), contradicting \(\omega^{-1/t^{2}}(z)\asymp 1\), whereas if \(q_{0}\geqslant 1\) then \(2u+w\preccurlyeq t^{q_{0}+1}\preccurlyeq t^{2}\), again contradicting \(\omega^{-1/t^{2}}(z)\asymp 1\). Now suppose \(z\notin C(\!(t)\!)\). Then
\[\min\bigl{(}(\operatorname{supp}u)\setminus\mathbb{Z}\bigr{)}=q_{1}+1,\quad u _{q_{1}+1}=-q_{1}z_{q_{1}} \tag{7.5.4}\]
and by Lemma 7.5.45:
\[\min\bigl{(}(\operatorname{supp}w)\setminus\mathbb{Z}\bigr{)}=q_{1},\quad w_{ q_{1}}=2z_{0}z_{q_{1}}. \tag{7.5.5}\]
Together with \(2u+w\in C(\!(t)\!)\) this yields a contradiction.
We now apply the above with \(C=\mathbb{C}\) to show:
**Corollary 7.5.48**.: _Let \(z\in L\) be such that \(\omega(z)=f\in K\). If \(v_{c}(f)\geqslant-1\) for all \(c\in\mathbb{C}\), or \(v_{c}(f)\geqslant-1\) for all but one \(c\in\mathbb{C}\) and \(v_{\infty}(f)=0\), then \(z\in K\)._
Proof.: Let \(c\in\mathbb{C}_{\infty}\) and let \(L\) be equipped with a valuation ring lying over \(\mathcal{O}_{c}\). If \(c\in\mathbb{C}\), then we have a valued differential field embedding \(L\to\mathbb{C}(\!(t^{\mathbb{Q}})\!)\) over \(\mathbb{C}\) with \(x-c\mapsto t\), and identifying \(L\) with its image under this embedding, if \(v_{c}(f)\geqslant-1\), then \(f\in t^{-1}\mathbb{C}[[t]]\), hence \(z\in\mathbb{C}(\!(t)\!)\) by Lemma 7.5.46, so \(K(z)\subseteq\mathbb{C}(\!(t)\!)\) is unramified over \(K\). If \(c=\infty\), then we have a valued differential field embedding \(L\to\mathbb{C}(\!(t^{\mathbb{Q}})\!)^{-1/t^{2}}\) over \(\mathbb{C}\) with \(x^{-1}\mapsto t\), and again identifying \(L\) with its image under this embedding, if \(v_{\infty}(f)=0\), then \(f\in\mathbb{C}[[t]]^{\times}\) by Lemma 7.5.47, so \(K(z)\) is unramified over \(K\). Now use Corollary 7.5.44.
In the next two lemmas we fix \(c\in\mathbb{C}_{\infty}\) and equip \(K=\mathbb{C}(x)\) with \(v=v_{c}\). Then the valued differential field \(K\) is d-valued, and for all \(z\in K^{\times}\) with \(vz=k\neq 0\) we have \(z^{\dagger}\sim k(x-c)^{-1}\) if \(c\neq\infty\), and \(z^{\dagger}\sim-kx^{-1}\) if \(c=\infty\). In these two lemmas we let \(z\in K^{\times}\), and set \(k:=vz\), \(f:=\omega(z)\).
**Lemma 7.5.49**.: _Suppose \(z\succ 1\). If \(c=\infty\), then \(f\sim-z^{2}\). If \(c\neq\infty\) and \(f\preccurlyeq 1\), then \(z-2(x-c)^{-1}\preccurlyeq 1\)._
Proof.: If \(c=\infty\), then \(x\succ 1\) and so \(z^{\dagger}\sim-kx^{-1}\prec 1\prec z\), hence \(f=\omega(z)=-z(2z^{\dagger}+z)\sim-z^{2}\). Now suppose \(c\neq\infty\) and \(f\preccurlyeq 1\). Applying the automorphism of the differential field \(K\) over \(\mathbb{C}\) with \(x\mapsto x+c\) we arrange \(c=0\). So \(x\prec 1\) and \(z^{\dagger}\sim kx^{-1}\). We have \(-z(2z^{\dagger}+z)=\omega(z)=f\preccurlyeq 1\), so \(2z^{\dagger}\sim-z\), and thus \(z\sim 2x^{-1}\), that is, \(z-2x^{-1}\preccurlyeq 1\).
**Lemma 7.5.50**.: _Suppose \(c\neq\infty\) and \(d\in\mathbb{C}^{\times}\) is such that \(f-d(x-c)^{-2}\preccurlyeq 1\). Then \(z\succ 1\), and for some \(b\in\mathbb{C}\) with \(b(2-b)=d\) we have \(z-b(x-c)^{-1}\preccurlyeq 1\)._
Proof.: We arrange again \(c=0\), so \(\omega(z)=f\sim dx^{-2}\). If \(z\preccurlyeq 1\), then \(z^{\prime}\prec x^{-1}\) and thus \(dx^{-2}\sim\omega(z)=-(2z^{\prime}+z^{2})\prec x^{-1}\), contradicting \(x\prec 1\). Thus \(z\succ\ 1\), hence \(z^{\dagger}\sim\ kx^{-1}\) with \(k<0\). Together with \(-z(2z^{\dagger}+z)=\omega(z)\sim dx^{-2}\) this yields \(k=-1\) and \(z\sim\ bx^{-1}\) with \(b\in\mathbb{C}^{\times}\), \(b(2-b)=d\), so \(z-bx^{-1}\preccurlyeq 1\).
Let \(f\in H\setminus\overline{\omega}(H)\), and suppose \((g,\phi)\in\operatorname{Li}(H)^{2}\) parametrizes \(\ker_{\mathcal{C}^{<\infty}}(4\partial^{2}+f)\). Here is the promised sufficient condition for \(\phi^{\prime}\in H\):
**Corollary 7.5.51**.: _Suppose \(v_{c}(f)\geqslant-1\) for all \(c\in\mathbb{C}\), or \(v_{c}(f)\geqslant-1\) for all but one \(c\in\mathbb{C}\) and \(v_{\infty}(f)=0\). Then \(\phi^{\prime}\in H\)._
Proof.: Put \(y:=g\operatorname{e}^{\phi_{1}}\in\mathcal{C}^{<\infty}[i]^{\times}\). The proof of Lemma 7.5.5 gives \(4y^{\prime\prime}+fy=0\) and \(\omega(z)=f\) for \(z:=2y^{\dagger}=-\phi^{\prime\dagger}+2\phi^{\prime}i\). We have the differential field extension \(L:=K[z]=K[\phi^{\prime}]\subseteq\operatorname{Li}(H)[i]\) of \(K\). If \(\phi^{\prime}\notin H\), then \([L:K]=2\), and then Corollary 7.5.48 gives \(z\in K\), a contradiction. Thus \(\phi^{\prime}\in H\).
In the next section on the Bessel equation the relevant \(f\) satisfies an even stronger condition, and this gives more information about \(\phi\):
**Corollary 7.5.52**.: _Suppose \(v_{c}(f)\geqslant 0\) for all \(c\in\mathbb{C}^{\times}\), \(v_{\infty}(f)=0\), and \(d\in\mathbb{C}\), \(v_{0}\big{(}f-dx^{-2}\big{)}\geqslant 0\). Then there are \(a,b\in\mathbb{C}\) and distinct \(c_{1},\dots,c_{n}\in\mathbb{C}^{\times}\) such that_
\[-\phi^{\prime\dagger}+2\phi^{\prime}i\ =\ a+bx^{-1}+2\sum_{j=1}^{n}(x-c_{j})^{-1} \quad\text{and}\quad b(2-b)=d.\]
Proof.: Corollary 7.5.51 and its proof gives \(z:=-\phi^{\prime\dagger}+2\phi^{\prime}i\in K^{\times}\) and \(\omega(z)=f\). Consider first the case \(d\neq 0\). Then by Lemma 7.5.49 we have \(v_{\infty}(z)\geqslant 0\) and
\[v_{c}\big{(}z-2(x-c)^{-1}\big{)}\ \geqslant\ 0\ \text{ whenever }c\in\mathbb{C}^{\times}\text{ and }v_{c}(z)<0.\]
Lemma 7.5.50 gives \(v_{0}(z-bx^{-1})\geqslant 0\) with \(b\in\mathbb{C}\) such that \(b(2-b)=d\). Taking \(c_{1},\dots,c_{n}\) as the distinct poles of \(z\) in \(\mathbb{C}^{\times}\), this yields the desired result by considering the partial fraction decomposition of \(z\) with respect to \(\mathbb{C}[x]\). Next, suppose \(d=0\). Then \(v_{c}(f)\geqslant 0\) for all \(c\in\mathbb{C}_{\infty}\), hence \(f\in\mathbb{C}\cap H=\mathbb{R}\). Also \(f>0\), since \(0\in\overline{\omega}(H)\) and \(f\notin\overline{\omega}(H)\). The example preceding Lemma 7.5.2, together with Corollary 7.5.15, gives \(\phi=\frac{\sqrt{f}}{2}x+r\) with \(r\in\mathbb{R}\), so \(z=\sqrt{f}\cdot i\), and this gives the desired result with \(a=\sqrt{f}\cdot i\), \(b=0\), \(n=0\).
### The Example of the Bessel Equation
We are going to use the results from Section 7.5 to obtain information about the solutions of the Bessel equation
(B\({}_{\nu}\)) \[x^{2}Y^{\prime\prime}+xY^{\prime}+(x^{2}-\nu^{2})Y\ =\ 0\]
of order \(\nu\in\mathbb{R}\). For solutions in \(\mathcal{C}_{e}^{2}\)\((e\in\mathbb{R}^{>})\), this is equivalent to the equation (\(\widetilde{\mathrm{L}}\)) in Section 7.5 with \(a=x^{-1}\), \(b=1-\nu^{2}x^{-2}\), so that \(f_{\nu}:=-2a^{\prime}-a^{2}+4b\) gives
\[f_{\nu}\ =\ -2(x^{-1})^{\prime}-(x^{-1})^{2}+4\big{(}1-\nu^{2}x^{-2}\big{)}\ =\ 4+(1-4\nu^{2})x^{-2}\ \sim\ 4.\]
Thus \(f_{\nu}\notin\overline{\omega}\big{(}\mathbb{R}(x)\big{)}\), and we have the isomorphism \(y\mapsto x^{1/2}y\) of the \(\mathbb{R}\)-linear space \(V_{\nu}\subseteq\mathcal{C}^{<\infty}\) of solutions of (B\({}_{\nu}\)) onto the \(\mathbb{R}\)-linear space of solutions in \(\mathcal{C}^{<\infty}\) of
(L\({}_{\nu}\)) \[4Y^{\prime\prime}+f_{\nu}Y\ =\ 0.\]
The nonzero solutions of (B\({}_{\nu}\)) in \(\mathcal{C}^{2}(\mathbb{R}^{>})\) are known as (real) _cylinder functions_; cf. [205, §15.22].
**Proposition 7.6.1**.: _There is a unique hardian germ \(\phi_{\nu}\) such that_
\[\phi_{\nu}-x\ \preccurlyeq\ x^{-1}\text{ and }\ V_{\nu}\ =\ \left\{\frac{c}{\sqrt{x \phi_{\nu}^{\prime}}}\cos(\phi_{\nu}+d):\ c,d\in\mathbb{R}\right\}.\]
_This germ \(\phi_{\nu}\) lies in \(\mathrm{D}(\mathbb{Q})\subseteq\mathcal{C}^{\omega}\)._ (Recall that \(\mathrm{D}(\mathbb{Q})=\mathrm{E}(\mathbb{Q})\).)
If \(\nu^{2}=\frac{1}{4}\), then \(V_{\nu}=\mathbb{R}x^{-1/2}\cos x+\mathbb{R}x^{-1/2}\sin x\), and Proposition 7.6.1 holds with \(\phi_{\nu}=x\). So suppose \(\nu^{2}\neq\frac{1}{4}\). Then Corollary 7.5.10 gives a germ \(\phi\sim x\) in \(\mathrm{D}(\mathbb{Q})\) such that \((g,\phi)\) parametrizes \(V_{\nu}\), where \(g:=(x\phi^{\prime})^{-1/2}\). Using this fact, Proposition 7.6.1 now follows from Corollary 7.5.15, Lemma 7.5.5, and the next lemma about any such pair \((g,\phi)\):
**Lemma 7.6.2**.: _We have \(\phi-x-r-\frac{1}{2}(\nu^{2}-\frac{1}{4})x^{-1}\preccurlyeq x^{-3}\) for some \(r\in\mathbb{R}\)._
Proof.: Set \(z:=2\phi^{\prime}\), so \(z=2+\varepsilon\), \(\varepsilon\prec 1\). From \(\sigma(z)=f_{\nu}\), multiplication by \(z^{2}\) yields
\[2zz^{\prime\prime}-3(z^{\prime})^{2}+z^{2}(z^{2}-f_{\nu})\ =\ 0\]
and thus with \(\mu:=4\nu^{2}-1\in\mathbb{R}^{\times}\), \(u:=-(2zz^{\prime\prime}-3(z^{\prime})^{2})=3(\varepsilon^{\prime})^{2}-2z\varepsilon^{\prime\prime}\):
\[u\ =\ z^{2}(z^{2}-f_{\nu})\sim 4(z^{2}-f_{\nu})=4(4\varepsilon+\varepsilon^{2} +\mu x^{-2}),\text{ and thus}\]
\[u/4\ \sim\ \varepsilon(4+\varepsilon)+\mu x^{-2}. \tag{7.6.1}\]
We claim that \(u\prec x^{-2}\). If \(\varepsilon\preccurlyeq x^{-2}\), then \(\varepsilon^{\prime}\preccurlyeq x^{-3}\), \(\varepsilon^{\prime\prime}\preccurlyeq x^{-4}\), and the claim is valid. If \(\varepsilon\succ x^{-2}\), then \(\varepsilon^{\dagger}\preccurlyeq(x^{-2})^{\dagger}=-2x^{-1}\), so \(\varepsilon^{\prime}\preccurlyeq x^{-1}\varepsilon\prec x^{-1}\prec 1\), hence \(\varepsilon^{\prime\prime}\prec(x^{-1})^{\prime}=-x^{-2}\), which again yields \(u\prec x^{-2}\). The claim and (7.6.1) give \(\varepsilon\sim-\frac{\mu}{4}x^{-2}\) and hence \(\delta:=\varepsilon+\frac{\mu}{4}x^{-2}\prec x^{-2}\). Indeed, we have \(\delta\preccurlyeq x^{-4}\). To see why, note that \(\varepsilon^{\prime}\sim\frac{1}{2}\mu x^{-3}\) and \(\varepsilon^{\prime\prime}\sim-\frac{3}{2}\mu x^{-4}\), so \(u\sim 6\mu x^{-4}\), and
\[\tfrac{3}{2}\mu x^{-4}\ \sim\ u/4\ \sim\ \varepsilon(4+\varepsilon)+\mu x^{-2}\ =\ 4\delta+\varepsilon^{2},\qquad\varepsilon^{2}\ \sim\ \tfrac{\mu^{2}}{16}x^{-4}.\]
Now the lemma follows by integration from
\[\phi^{\prime}-1+\tfrac{1}{2}(\nu^{2}-\tfrac{1}{4})x^{-2}\ =\ \tfrac{1}{2} \varepsilon+\tfrac{1}{8}\mu x^{-2}\ =\ \delta/2\ \preccurlyeq\ x^{-4}.\qed\]
With \(\phi\) and \(r\) as in Lemma 7.6.2, it is \(\phi-r\) that is the germ \(\phi_{\nu}\) in Proposition 7.6.1, and till further notice we set \(\phi:=\phi_{\nu}\), \(f:=f_{\nu}\), and \(V:=V_{\nu}\). Thus \(\sigma(2\phi^{\prime})=f\) and \(\phi_{\nu}=\phi_{-\nu}\). As mentioned before, we do not know if \(\mathrm{E}(\mathbb{Q})^{>\mathbb{R}}\) is closed under compositional inversion. Nevertheless:
**Lemma 7.6.3**.: \(\phi^{\mathrm{inv}}\in\mathrm{E}(\mathbb{Q})\)_._
Proof.: Set \(\alpha:=\frac{1}{2}(\nu^{2}-\frac{1}{4})\). Then \(\phi=x+\alpha x^{-1}+o(x^{-1})\), so
\[\phi^{\mathrm{inv}}\ =\ x-\alpha x^{-1}+o(x^{-1})\]
by Corollary 5.1.12, and \(\phi^{\mathrm{inv}}\) is hardian. Let \(P\in\mathbb{R}(x)\{Y\}\) be as in the remarks before Lemma 7.5.16 with \(H=\mathbb{R}(x)\), so \(P(2\phi^{\prime})=0\). Corollary 5.3.12 then gives \(\widetilde{P}\in\mathbb{R}(x)\{Z\}\) such that for all hardian \(y>\mathbb{R}\),
\[P(2y^{\prime})\ =\ 0\quad\Longleftrightarrow\quad\widetilde{P}(y^{\mathrm{inv} })\ =\ 0,\]
in particular, \(\widetilde{P}(\phi^{\mathrm{inv}})=0\). Let now \(H\) be any maximal Hardy field. Theorem 7.1.3 then yields \(z\in H\) such that \(z=x-\alpha x^{-1}+o(x^{-1})\) and \(\widetilde{P}(z)=0\), so \(y:=z^{\mathrm{inv}}\) is hardian and \(P(2y^{\prime})=0\). Then \(\sigma(2y^{\prime})=f\), so \(\big{(}(xy^{\prime})^{-1/2},y\big{)}\) parametrizes \(V\) by Lemma 7.5.3 and a remark preceding that lemma. Also \(y=x+\alpha x^{-1}+o(x^{-1})\) by Corollary 5.1.12. Thus \(\phi=y\) by Proposition 7.6.1 and so \(\phi^{\mathrm{inv}}=z\in H\).
This quickly yields some facts on the distribution of zeros of solutions: Let \(y\in\mathcal{C}_{e}^{2}\) (\(e\in\mathbb{R}^{>}\)) be a nonzero solution of (B\({}_{\nu}\)) and let \((s_{n})\) be the enumeration of its zero set. From Corollary 7.5.24 and Lemma 7.5.26 we obtain a well-known result, see for example [91, Chapter XI, Exercise 3.2(d)], [203, SS27, XIII]:
**Corollary 7.6.4**.: _We have \(s_{n}\sim\pi n\) and \(s_{n+1}-s_{n}\to\pi\) as \(n\to\infty\)._
**Lemma 7.6.5**.: _There is a strictly increasing \(\zeta\in\mathcal{C}_{n_{0}}\)\((n_{0}\in\mathbb{N})\) whose germ is in \(\mathrm{E}(\mathbb{Q})\) such that \(s_{n}=\zeta(n)\) for all \(n\geqslant n_{0}\)._
Proof.: Take \(e_{0}\geqslant e\), a representative of \(\phi\) in \(\mathcal{C}_{e_{0}}^{1}\) denoted also by \(\phi\), and \(c,d\in\mathbb{R}\), such that \(\phi^{\prime}(t)>0\) and \(y(t)=\big{(}c/\sqrt{t\phi^{\prime}(t)}\big{)}\cdot\cos\big{(}\phi(t)+d\big{)}\) for all \(t\geqslant e_{0}\). So we are in the situation described before Lemma 7.5.22. Next, take \(n_{0}\), \(k_{0}\), \(\zeta\) as in the proof of that lemma. Then \(\zeta\) is strictly increasing with \(s_{n}=\zeta(n)\) for all \(n\geqslant n_{0}\), and the germ of \(\zeta\), denoted by the same symbol, satisfies \(\zeta=\phi^{\mathrm{inv}}\circ\big{(}\pi\cdot(x+k_{0})\big{)}\). Now use Lemma 7.6.3 and \(\mathrm{E}(\mathbb{Q})\circ\mathrm{E}(\mathbb{Q})^{>\mathbb{R}}\subseteq \mathrm{E}(\mathbb{Q})\) (see the remark after Lemma 5.3.7), to conclude \(\zeta\in\mathrm{E}(\mathbb{Q})\).
Lemma 7.6.5 yields an improvement of Corollary 7.5.27 in our (Bessel) case:
**Corollary 7.6.6**.: _For any \(h\in\mathcal{C}_{0}\) with hardian germ, the sequences \((s_{n})\) and \(\big{(}h(n)\big{)}\) are comparable._
Lemma 7.5.26 also has the following corollary, the first part of which was observed by Porter [157] (cf. also [205, §§15.8, 15.82]).
**Corollary 7.6.7**.: _If \(\nu^{2}>\frac{1}{4}\), then the sequence \((s_{n+1}-s_{n})\) is eventually strictly decreasing, and if \(\nu^{2}<\frac{1}{4}\), then \((s_{n+1}-s_{n})\) is eventually strictly increasing._
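_Remark_.: Corollaries 7.6.4 and 7.6.7 are easy to observe numerically. The following sketch (not part of the development) does so for \(J_{0}\) and \(J_{1}\); it assumes SciPy, whose `jn_zeros` returns the first positive zeros of \(J_{n}\) for integer \(n\).

```python
# Zeros s_n of J_0 and J_1: s_n ~ pi*n and s_{n+1} - s_n -> pi
# (Corollary 7.6.4); the gaps increase for nu^2 < 1/4 (J_0) and
# decrease for nu^2 > 1/4 (J_1) (Corollary 7.6.7).
import numpy as np
from scipy.special import jn_zeros

for order in (0, 1):
    s = jn_zeros(order, 200)            # first 200 positive zeros
    gaps = np.diff(s)
    print(f"J_{order}: s_n/(pi n) = {s[-1]/(np.pi*len(s)):.5f}, "
          f"last gap - pi = {gaps[-1]-np.pi:+.2e}, "
          f"gaps monotone: {np.all(np.diff(gaps) > 0) or np.all(np.diff(gaps) < 0)}")
```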
Finally, if \(\underline{\nu}\in\mathbb{R}\) and \(\underline{y}\in\mathcal{C}_{\underline{e}}^{2}\) with \(\underline{e}\in\mathbb{R}^{>}\) is a nonzero solution of the Bessel equation of order \(\underline{\nu}\), then \((s_{n})\) and the enumeration \((\underline{s}_{n})\) of the zero set of \(\underline{y}\) are comparable, by Lemma 7.5.28. This is related to classical results on the "interlacing of zeros" of cylinder functions; cf. [205, §§15.22, 15.24].
In the next lemma, [21, Chapter 10, Theorem 8] has \(-\frac{1}{2}(\nu^{2}-\frac{1}{4})x^{-1}\) instead of our \(\frac{1}{2}(\nu^{2}-\frac{1}{4})x^{-1}\). This sign error originated in an integration on [21, p. 327].
**Lemma 7.6.8**.: _Let \(y\in V^{\neq}\). Then there is a pair \((c,d)\in\mathbb{R}^{\times}\times[0,\pi)\) such that \(y=\frac{c}{\sqrt{x\phi^{\prime}}}\cos(\phi+d)\), and for any such pair we have_
\[y-\frac{c}{\sqrt{x}}\cos\big{(}x+d+\tfrac{1}{2}(\nu^{2}-\tfrac{1}{4})x^{-1} \big{)}\ \preccurlyeq\ x^{-5/2}. \tag{7.6.2}\]
Proof.: Proposition 7.6.1 yields \((c,d)\in\mathbb{R}\times[0,\pi)\) such that \(y=\frac{c}{\sqrt{x\phi^{\prime}}}\cos(\phi+d)\). Then \(c\neq 0\). From \(\phi^{\prime}-1\preccurlyeq x^{-2}\) we get \(\frac{1}{\sqrt{\phi^{\prime}}}-1\preccurlyeq x^{-2}\), and for every \(u\in\mathcal{C}\) we have \(\cos(x+u)-\cos(x)\preccurlyeq u\). Using also Lemma 7.6.2 this yields (7.6.2).
We complement this with some uniqueness properties:
**Lemma 7.6.9**.: _Let \(y\in V^{\neq}\). Then there is a unique \((c,d)\in\mathbb{R}^{\times}\times[0,\pi)\) such that_
\[y-\frac{c}{\sqrt{x}}\cos(x+d)\ \prec\ \frac{1}{\sqrt{x}}, \tag{7.6.3}\]
_and this is also the unique \((c,d)\in\mathbb{R}^{\times}\times[0,\pi)\) such that \(y=\frac{c}{\sqrt{x\phi^{\prime}}}\cos(\phi+d)\)._
Proof.: For \((c,d)\in\mathbb{R}^{\times}\times[0,\pi)\) with \(y=\frac{c}{\sqrt{x\phi^{\prime}}}\cos(\phi+d)\) we have (7.6.2), so
\[y-\frac{c}{\sqrt{x}}\cos(x+d)\ \preccurlyeq\ x^{-3/2}\]
in view of \(\cos(x+u)-\cos(x)\preccurlyeq u\) for \(u\in\mathcal{C}\). This gives (7.6.3). Suppose towards a contradiction that (7.6.3) also holds for a pair \((c^{*},d^{*})\in\mathbb{R}^{\times}\times[0,\pi)\) instead of \((c,d)\), with \((c^{*},d^{*})\neq(c,d)\). Then \(d\neq d^{*}\), say \(d<d^{*}\), so \(0<\theta:=d^{*}-d<\pi\). Then \(c\cos(x+d)-c^{*}\cos(x+d+\theta)\prec 1\), and hence \(c\cos(x)-c^{*}\cos(x+\theta)\prec 1\), which by a trigonometric identity turns into
\[(c-c^{*}\cos\theta)\cos(x)+c^{*}\sin\theta\sin(x)\ =\ \sqrt{(c-c^{*}\cos \theta)^{2}+(c^{*}\sin\theta)^{2}}\cdot\cos(x+s)\prec 1\]
with \(s\in\mathbb{R}\) depending only on \(c\), \(c^{*}\), \(\theta\); see the remarks preceding Lemma 5.5.14. This forces \(c^{*}\sin\theta=0\), but \(\sin\theta>0\), so \(c^{*}=0\), contradicting \(c^{*}\in\mathbb{R}^{\times}\).
**Corollary 7.6.10**.: _For any \((c,d)\in\mathbb{R}^{\times}\times[0,\pi)\) there is a unique \(y\in V^{\neq}\) such that (7.6.3) holds. This \(y\) is given by \(y=\frac{c}{\sqrt{x\phi^{\prime}}}\cos(\phi+d)\)._
_Remark 7.6.11_.: Lemmas 7.6.8, 7.6.9, and Corollary 7.6.10 remain valid when we replace \(\mathbb{R}^{\times}\times[0,\pi)\) everywhere by \(\mathbb{R}^{>}\times[0,2\pi)\). (Use that \(\cos(\theta+\pi)=-\cos(\theta)\) for \(\theta\in\mathbb{R}\).)
Call a germ in \(\mathcal{C}\)_eventually convex_ if it has a convex representative in \(\mathcal{C}_{r}\) for some \(r\in\mathbb{R}\); likewise with "concave" in place of "convex". The two lemmas below comprise a slightly weaker version of [105, Theorem 2]. By Lemma 7.6.2 we have \(\phi=x+\alpha x^{-1}+O(x^{-3})\) where \(\alpha:=\frac{1}{2}(\nu^{2}-\frac{1}{4})\), so with \(\phi\) being hardian we obtain
\[\phi^{\prime}\ =\ 1-\alpha x^{-2}+O(x^{-4}),\qquad\phi^{\prime\prime}\ =\ 2 \alpha x^{-3}+O(x^{-5}).\]
Hence \(\phi^{\prime\prime}>0\) if \(\nu^{2}>\frac{1}{4}\) and \(\phi^{\prime\prime}<0\) if \(\nu^{2}<\frac{1}{4}\), and thus:
**Lemma 7.6.12**.: \(\phi\) _is eventually convex if \(\nu^{2}>\frac{1}{4}\), and eventually concave if \(\nu^{2}<\frac{1}{4}\)._
Lemma 7.5.2 yields \((q,\theta)\in\mathrm{D}(\mathbb{Q})^{2}\) parametrizing \(V^{\prime}:=\{y^{\prime}:y\in V\}\).
**Lemma 7.6.13**.: _We have \(\theta-x-r-\frac{1}{2}(\nu^{2}+\frac{3}{4})x^{-1}\preccurlyeq x^{-3}\) for some \(r\in\mathbb{R}\). Hence \(\theta\) is eventually convex._
Proof.: Set \(g:=(x\phi^{\prime})^{-1/2},q:=\sqrt{(g^{\prime})^{2}+(g\phi^{\prime})^{2}}\), so \(g,q\in\mathrm{D}(\mathbb{Q})\). The proof of Lemma 7.5.29(iv) and Lemmas 7.5.2 and 7.5.7 give \(\theta=\phi+d+u\) with \(d\in\mathbb{R}\) and \(u=\arccos(g^{\prime}/q)\). Now \({\phi^{\prime}}^{\dagger}=2\alpha x^{-3}+O(x^{-5})\) and so \(g^{\dagger}=-\frac{1}{2}x^{-1}+O(x^{-3})\). Since \((\phi^{\prime})^{2}=1+O(x^{-2})\), this yields \(((g^{\dagger})^{2}+(\phi^{\prime})^{2})^{-1/2}=1+O(x^{-2})\) and thus
\[g^{\prime}/q\ =\ g^{\dagger}\big{(}(g^{\dagger})^{2}+(\phi^{\prime})^{2}\big{)}^{-1/2} \ =\ -\tfrac{1}{2}x^{-1}+O(x^{-3}).\]
Hence
\[(g^{\prime}/q)^{\prime}\ =\ \tfrac{1}{2}x^{-2}+O(x^{-4}),\qquad\big{(}1-(g^{\prime}/q) ^{2}\big{)}^{-1/2}\ =\ 1+O(x^{-2}).\]
We obtain
\[u^{\prime}\ =\ -\frac{(g^{\prime}/q)^{\prime}}{\sqrt{1-(g^{\prime}/q)^{2}}}\ =\ - \tfrac{1}{2}x^{-2}+O(x^{-4})\]
and thus \(u=c+\tfrac{1}{2}x^{-1}+O(x^{-3})\) with \(c\in\mathbb{R}\), and so \(\theta-x-r-(\alpha+\tfrac{1}{2})x^{-1}\preccurlyeq x^{-3}\) for \(r:=c+d\), as claimed.
**Asymptotic expansions for \(\phi\) and \(\phi^{\mathrm{inv}}\).** The arguments in this subsection demonstrate the efficiency of our transfer theorems from Section 7.1. They allow us to produce hardian solutions of algebraic differential equations from transseries solutions of these equations. Such transseries solutions may be constructed by purely formal computations in \(\mathbb{T}\) (without convergence considerations). Our first goal is to improve on the relation \(\phi\sim x+\tfrac{\mu-1}{8}x^{-1}\) from Lemma 7.6.2, where \(\mu:=4\nu^{2}\):
**Theorem 7.6.14**.: _The germ \(\phi=\phi_{\nu}\) has an asymptotic expansion_
\[\phi\ \sim\ x+\frac{\mu-1}{8}x^{-1}+\frac{\mu^{2}-26\mu+25}{384}x^{-3}+\frac{\mu ^{3}-115\mu^{2}+1187\mu-1073}{5120}x^{-5}+\cdots\]
Here we use the sign \(\sim\) not in the sense of comparing germs, but to indicate an _asymptotic expansion_: for a sequence \((g_{n})\) in \(\mathcal{C}^{<\infty}[\mathrm{i}]\) with \(g_{0}\succ g_{1}\succ g_{2}\succ\cdots\) we say that \(g\in\mathcal{C}^{<\infty}[\mathrm{i}]\) has the asymptotic expansion
\[g\ \sim\ c_{0}g_{0}+c_{1}g_{1}+c_{2}g_{2}+\cdots\qquad(c_{0},c_{1},c_{2}, \cdots\in\mathbb{C})\]
if \(g-(c_{0}g_{0}+\cdots+c_{n}g_{n})\prec g_{n}\) for all \(n\) (and then the sequence \(c_{0},c_{1},c_{2},\dots\) of coefficients is uniquely determined by \(g,g_{0},g_{1},g_{2},\dots\)).
In the course of the proof of Theorem 7.6.14 we also obtain an explicit formula for the coefficient of \(x^{-2n+1}\) in the asymptotic expansion of the theorem. Towards the proof, set
\[(\nu,n)\ :=\ \frac{(\mu-1^{2})(\mu-3^{2})\cdots(\mu-(2n-1)^{2})}{n!\,2^{2n}} \qquad\text{(Hankel's symbol)},\]
so \((\nu,0)=1\), \((\nu,1)=\tfrac{\mu-1}{4}\), and \((\nu,n)=(-\nu,n)\). Also, if \((\nu,n)=0\) for some \(n\), then \(\nu\in\tfrac{1}{2}+\mathbb{Z}\); and if \(\nu=\tfrac{1}{2}+m\), then \((\nu,n)=0\) for \(n\geqslant m+1\). Moreover, if \(\nu\notin\tfrac{1}{2}+\mathbb{Z}\), then in terms of Euler's Gamma function (cf. [123, XV, §2, \(\Gamma 3\), \(\Gamma 5\)]),
\[(\nu,n)\ =\ \frac{(-1)^{n}}{\pi n!}\cos(\pi\nu)\,\Gamma(\tfrac{1}{2}+n -\nu)\,\Gamma(\tfrac{1}{2}+n+\nu),\text{ and so}\] \[(m,n)\ =\ \frac{(-1)^{m+n}}{\pi n!}\Gamma(\tfrac{1}{2}+n-m)\, \Gamma(\tfrac{1}{2}+n+m).\]
(To prove the first identity, use \(\Gamma(z+1)=z\Gamma(z)\)\(n\) times to give
\[\Gamma(\tfrac{1}{2}+n-\nu)\ =\ \Gamma(\tfrac{1}{2}-\nu+n)\ =\ \Big{(}\prod_{j=0}^{n-1}(\tfrac{1}{2}-\nu+j)\Big{)}\cdot \Gamma(\tfrac{1}{2}-\nu);\text{ likewise}\] \[\Gamma(\tfrac{1}{2}+n+\nu)\ =\ \Gamma(\tfrac{1}{2}+\nu+n)\ =\ \Big{(}\prod_{j=0}^{n-1}(\tfrac{1}{2}+\nu+j)\Big{)}\cdot \Gamma(\tfrac{1}{2}+\nu),\]
and then use \(\Gamma(z)\Gamma(1-z)=\tfrac{\pi}{\sin(\pi z)}\) to get \(\Gamma(\tfrac{1}{2}-\nu)\Gamma(\tfrac{1}{2}+\nu)=\tfrac{\pi}{\cos(\pi\nu)}\).)
Below we consider the \(H\)-subfield \(\mathbb{R}(\!(x^{-1})\!)\) of \(\mathbb{T}\), and set
\[y\ :=\ \sum_{n=0}^{\infty}y_{n}x^{-2n}\in\mathbb{R}(\!(x^{-1})\!)\ \mbox{ where }y_{n}\ :=\ (2n-1)!!\frac{(\nu,n)}{2^{n}}.\]
Here \((2n-1)!!:=1\cdot 3\cdot 5\cdots(2n-1)=\frac{(2n)!}{2^{n}\,n!}\), so \((-1)!!=1\). Thus
\[y\ =\ 1+\left(\frac{\mu-1}{8}\right)x^{-2}+\frac{3!!}{2!}\left( \frac{\mu-1}{8}\right)\left(\frac{\mu-9}{8}\right)x^{-4}+\] \[\frac{5!!}{3!}\left(\frac{\mu-1}{8}\right)\left(\frac{\mu-9}{8} \right)\left(\frac{\mu-25}{8}\right)x^{-6}+\cdots.\]
The definition of the \(y_{n}\) yields the recursion
\[y_{0}\ =\ 1\qquad\mbox{and}\qquad y_{n+1}\ =\ \left(\frac{2n+1}{n+1}\right) \left(\frac{\mu-(2n+1)^{2}}{8}\right)y_{n}. \tag{7.6.4}\]
Using this recursion and \(\Gamma(1/2)=\sqrt{\pi}\) for \(n=0\), induction on \(n\) yields for \(\nu\notin\frac{1}{2}+\mathbb{Z}\):
\[y_{n}\ =\ \frac{\Gamma(n+\frac{1}{2})\,\Gamma(\nu+\frac{1}{2}+n)}{n!\,\sqrt{ \pi}\,\Gamma(\nu+\frac{1}{2}-n)}.\]
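_Remark_.: The recursion (7.6.4) and this closed form can be checked against each other mechanically. A minimal SymPy sketch, offered only as a consistency check (with symbolic \(\nu\)):

```python
# Check that y_n = Gamma(n+1/2) Gamma(nu+1/2+n) / (n! sqrt(pi) Gamma(nu+1/2-n))
# satisfies the recursion (7.6.4) for n = 0..4.
import sympy as sp

nu = sp.symbols('nu')
mu = 4*nu**2

def y(k):
    return (sp.gamma(k + sp.Rational(1, 2)) * sp.gamma(nu + sp.Rational(1, 2) + k)
            / (sp.factorial(k) * sp.sqrt(sp.pi) * sp.gamma(nu + sp.Rational(1, 2) - k)))

for k in range(5):
    ratio = y(k + 1) / (sp.Rational(2*k + 1, k + 1) * (mu - (2*k + 1)**2) / 8 * y(k))
    assert sp.simplify(sp.gammasimp(ratio) - 1) == 0
print("recursion (7.6.4) agrees with the Gamma closed form for n = 0..4")
```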
We now verify that \(y\) satisfies the linear differential equation
\[Y^{\prime\prime\prime}+fY^{\prime}+(f^{\prime}/2)Y\ =\ 0.\]
Here \(f=4+(1-\mu)x^{-2}\), so \(f^{\prime}/2=(\mu-1)x^{-3}\). Thus
\[(f^{\prime}/2)y\ =\ \sum_{n}(\mu-1)y_{n}x^{-2n-3}\ =\ (\mu-1)x^{-3}+\sum_{n \geqslant 1}(\mu-1)y_{n}x^{-2n-3}.\]
We also have \(y^{\prime}=\sum_{n\geqslant 1}-2ny_{n}x^{-2n-1}\) and so
\[fy^{\prime} \ =\ \big{(}4+(1-\mu)x^{-2}\big{)}\sum_{n\geqslant 1}-2ny_{n}x^{- 2n-1}\] \[\ =\ \sum_{n\geqslant 1}2n(\mu-1)y_{n}x^{-2n-3}-\sum_{n\geqslant 1 }8ny_{n}x^{-2n-1}\] \[\ =\ \sum_{n\geqslant 1}2(\mu-1)ny_{n}x^{-2n-3}-\sum_{m\geqslant 0 }8(m+1)y_{m+1}x^{-2m-3}\] \[\ =\ -8y_{1}x^{-3}+\sum_{n\geqslant 1}\big{(}2(\mu-1)ny_{n}-8(n+1) y_{n+1}\big{)}x^{-2n-3}\]
and hence, using \(\mu-1=8y_{1}\):
\[fy^{\prime}+(f^{\prime}/2)y\ =\ \sum_{n\geqslant 1}\big{(}(2n+1)(\mu-1)y_{n}-8(n+ 1)y_{n+1}\big{)}x^{-2n-3}.\]
Moreover
\[y^{\prime\prime}\ =\ \sum_{n\geqslant 1}2n(2n+1)y_{n}x^{-2n-2},\quad y^{ \prime\prime\prime}\ =\ \sum_{n\geqslant 1}-4n(2n+1)(n+1)y_{n}x^{-2n-3}.\]
This yields the claim by (7.6.4). We now identify the Hardy field \(\mathbb{R}(x)\) with an \(H\)-subfield of \(\mathbb{T}\) in the obvious way. Then \(\mathbb{R}(x)\subseteq\mathbb{R}(\!(x^{-1})\!)\), and the above yields:
**Lemma 7.6.15**.: _Let \(H\) be an \(H\)-closed field extending the \(H\)-field \(\mathbb{R}(x)\) and set \(B:=\partial^{3}+f\partial+(f^{\prime}/2)\in\mathbb{R}(x)[\partial]\). Then \(\dim_{C_{H}}\ker_{H}B=1\). If \(H\) extends the \(H\)-field \(\mathbb{R}\langle x,y\rangle\subseteq\mathbb{T}\), then \(\ker_{H}B=\ C_{H}y\)._
Proof.: We have \(\sigma(1/x)=2/x^{2}\prec 4\sim f\), so \(f\in\sigma\big{(}\Gamma(H)\big{)}^{\uparrow}\), hence \(f\in\sigma(H^{\times})\setminus\omega(H)\). Thus \(\dim_{C_{H}}\ker_{H}B=1\) by Lemma 2.5.25. Hence if \(H\) extends the \(H\)-field \(\mathbb{R}\langle x,y\rangle\), then \(\ker_{H}B=C_{H}y\) since \(B(y)=0\) by the argument preceding the lemma.
**Proposition 7.6.16**.: _There is a unique hardian germ \(\psi=\psi_{\nu}\) such that_
\[\psi\ \sim\ 1\quad\text{ and }\quad\psi^{\prime\prime\prime}+f\psi^{\prime}+(f^{ \prime}/2)\psi\ =\ 0. \tag{7.6.5}\]
_This \(\psi\) satisfies \(\psi=1/\phi^{\prime}\in\mathrm{D}(\mathbb{Q})\) and has the asymptotic expansion_
\[\psi\ \sim\ 1+\frac{\mu-1}{8}x^{-2}+\cdots+(2n-1)!!\frac{(\nu,n)}{2^{n}}x^{-2n}+\cdots. \tag{7.6.6}\]
_Moreover \(\psi_{-\nu}=\psi_{\nu}\), and if \(\nu=\frac{1}{2}+m\), then_
\[\psi\ =\ 1+\frac{\mu-1}{8}x^{-2}+\cdots+(2m-1)!!\frac{(\nu,m)}{2^{m}}x^{-2m}.\]
Proof.: For any \(H\)-closed field \(H\supseteq\mathbb{R}(x)\), consider the statement
Proof.: Let \(H\) be any \(H\)-closed field extending the \(H\)-field \(\mathbb{R}(x)\) and let \(B\) be as in Lemma 7.6.15. If \(\psi_{1},\psi_{2}\in H\) satisfy \(\psi_{1}\sim 1\sim\psi_{2}\) and \(B(\psi_{1})=B(\psi_{2})=0\), then \(\psi_{2}=c\psi_{1}\) with \(c\in C_{H}\) by Lemma 7.6.15, and \(\psi_{1}\sim\psi_{2}\) forces \(c=1\); so \(H\) contains at most one \(\psi\) with \(\psi\sim 1\) and \(B(\psi)=0\). Next, recall that if \(u\), \(v\) are solutions of \(4Y^{\prime\prime}+fY=0\), then \(uv\) is a solution of \(Y^{\prime\prime\prime}+fY^{\prime}+(f^{\prime}/2)Y=0\). By Lemma 7.5.3 the pair \(\big{(}1/\sqrt{\phi^{\prime}},\phi\big{)}\) parametrizes \(\ker_{\mathcal{C}^{<\infty}}(4\partial^{2}+f)\), so
\[\frac{1}{\phi^{\prime}}\ =\ \left(\frac{\cos\phi}{\sqrt{\phi^{\prime}}}\right)^{2}+\left(\frac{\sin\phi}{\sqrt{\phi^{\prime}}}\right)^{2}\]
gives \(B(1/\phi^{\prime})=0\); moreover \(1/\phi^{\prime}\sim 1\) and \(1/\phi^{\prime}\in\mathrm{D}(\mathbb{Q})\). Any hardian germ \(\psi\) satisfying (7.6.5) lies in some maximal Hardy field \(H\); such \(H\) is \(H\)-closed and contains \(1/\phi^{\prime}\), so \(\psi=1/\phi^{\prime}\) by the uniqueness above. For (7.6.6), Corollary 7.3.2 yields an \(H\)-field embedding \(e\colon\mathrm{D}(\mathbb{Q})\to\mathbb{T}\) over \(\mathbb{R}(x)\); then \(e(\psi)\sim 1\) and \(B\big{(}e(\psi)\big{)}=0\), so \(e(\psi)=y\) by Lemma 7.6.15, and since \(e\) preserves the relevant asymptotic relations, (7.6.6) follows. Finally, \(\psi_{-\nu}=\psi_{\nu}\) because \(f_{-\nu}=f_{\nu}\), and if \(\nu=\frac{1}{2}+m\), then \((\nu,n)=0\) for \(n\geqslant m+1\), so \(y\) is the indicated element of \(\mathbb{R}(x)\subseteq\mathrm{D}(\mathbb{Q})\) and \(\psi=y\), since \(e\) is the identity on \(\mathbb{R}(x)\).
We now prove Theorem 7.6.14, using also results and notations from the Appendix to this section. Corollary 7.3.2 yields an \(H\)-field embedding \(e\colon\,\mathrm{D}(\mathbb{Q})\to\mathbb{T}\) over \(\mathbb{R}(x)\). Let \(\psi\) be as in Proposition 7.6.16. Then \(e(\psi)=y\) and so
\[e(\phi^{\prime})\ =\ e(1/\psi)\ =\ y^{-1}\ =\ z_{0}+z_{1}\frac{x^{-2}}{1!}+z_{2} \frac{x^{-4}}{2!}+\cdots+z_{n}\frac{x^{-2n}}{n!}+\cdots\]
where
\[z_{n}\ :=\ B_{n}(-y_{1}1!,\ldots,-y_{n}n!)\in\mathbb{Q}[y_{1},\ldots,y_{n}]\subseteq \mathbb{Q}[\mu]\]
by Lemma 7.6.53 at the end of this section; here the \(B_{n}\) are as defined in (7.6.23). Using Lemma 7.6.18 and \(\phi\sim x\) we obtain the asymptotic expansion
\[\phi\ \sim\ u_{0}x+u_{1}\frac{x^{-1}}{1!}+u_{2}\frac{x^{-3}}{2!}+\cdots+u_{n} \frac{x^{-2n+1}}{n!}+\cdots\quad\text{where }u_{n}:=\frac{z_{n}}{-2n+1}.\]
The first few terms of the sequence \((z_{n})\) are
\[z_{0} =\ 1,\] \[z_{1} =\ -y_{1}=\frac{-(\mu-1)}{8},\] \[z_{2} =\ -2y_{2}+2y_{1}^{2}=\frac{-3(\mu-1)(\mu-9)+2(\mu-1)^{2}}{64},\] \[z_{3} =\ -6y_{3}+12y_{1}y_{2}-6y_{1}^{3}\] \[=\ \frac{-15(\mu-1)(\mu-9)(\mu-25)+18(\mu-1)^{2}(\mu-9)-6(\mu-1)^ {3}}{512}\]
and so
\[u_{0} =\ 1,\] \[u_{1} =\ \frac{\mu-1}{8},\] \[u_{2} =\ \frac{3(\mu-1)(\mu-9)-2(\mu-1)^{2}}{192}=\frac{\mu^{2}-26\mu+25}{192}\] \[u_{3} =\ \frac{15(\mu-1)(\mu-9)(\mu-25)-18(\mu-1)^{2}(\mu-9)+6(\mu-1)^{3}}{2560}\] \[=\ \frac{3(\mu^{3}-115\mu^{2}+1187\mu-1073)}{2560}.\]
This finishes the proof of Theorem 7.6.14.
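_Remark_.: The coefficients \(u_{n}\) can also be generated mechanically: invert the series \(y\) and integrate termwise. The following SymPy sketch (a check on the algebra above, not a replacement for it) reproduces \(u_{1}\), \(u_{2}\), \(u_{3}\).

```python
# phi' = 1/y with y = sum y_n t^n, t = x^{-2}; the coefficient of t^n in 1/y
# is z_n/n!, and u_n = z_n/(1-2n) after integrating x^{-2n}.
import sympy as sp

mu, t = sp.symbols('mu t')
N = 4
y = [sp.Integer(1)]
for n in range(N):                      # y_n via the recursion (7.6.4)
    y.append(sp.Rational(2*n + 1, n + 1) * (mu - (2*n + 1)**2) / 8 * y[n])
yser = sum(y[n] * t**n for n in range(N + 1))
phip = sp.expand(sp.series(1/yser, t, 0, N + 1).removeO())
for n in range(1, 4):
    zn = sp.expand(phip.coeff(t, n) * sp.factorial(n))
    print(f"u_{n} =", sp.factor(zn / (1 - 2*n)))
# u_1 = (mu-1)/8, u_2 = (mu-1)(mu-25)/192, u_3 = 3(mu-1)(mu^2-114mu+1073)/2560,
# in agreement with the expansion in Theorem 7.6.14.
```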
We turn to the compositional inverse \(\phi^{\mathrm{inv}}\) of \(\phi\). Recall: \(\phi^{\mathrm{inv}}\in\mathrm{D}(\mathbb{Q})\) by Lemma 7.6.3. To prove the next result we use Corollary 7.6.67 in the Appendix to this section.
**Corollary 7.6.19**.: _We have an asymptotic expansion_
\[\phi^{\mathrm{inv}}\ \sim\ x-\frac{\mu-1}{8}x^{-1}-\frac{(\mu-1)(7\mu-31)}{192} \frac{x^{-3}}{2!}+\cdots\]
Proof.: Let \(e\colon\,\mathrm{D}(\mathbb{Q})\to\mathbb{T}\) and \((u_{n})\) be as above. Set
\[u:=\sum_{n}u_{n}\frac{x^{-2n+1}}{n!}=x+\frac{\mu-1}{8}x^{-1}+\frac{\mu^{2}-26 \mu+25}{384}x^{-3}+\cdots\in\mathbb{R}(\!(x^{-1})\!)\subseteq\mathbb{T},\]
so \(e(\phi)=u\). Let \(P\in\mathbb{R}(x)\{Y\}\) be as in the proof of Lemma 7.6.3, so \(P(2u^{\prime})=e(P(2\phi^{\prime}))=0\). Corollary 5.3.12 and the remark following it yield a \(\widetilde{P}\in\mathbb{R}(x)\{Z\}\) such that for all hardian \(y>\mathbb{R}\),
\[P(2y^{\prime})\ =\ 0\quad\Longleftrightarrow\quad\widetilde{P}(y^{\rm inv})\ =\ 0\]
and such that this equivalence also holds for \(y\in\mathbb{T}^{>\mathbb{R}}\) and \(y^{\rm inv}\) the compositional inverse of \(y\) in \(\mathbb{T}\). Hence \(\widetilde{P}(e(\phi^{\rm inv}))=e(\widetilde{P}(\phi^{\rm inv}))=0\) and \(\widetilde{P}(u^{\rm inv})=0\). The proof of Lemma 7.6.3 shows that each maximal Hardy field \(H\) contains a unique zero \(z\) of \(\widetilde{P}\) such that \(z=x-\frac{1}{8}(\mu-1)x^{-1}+o(x^{-1})\). By Corollary 7.1.17 this remains true with \(\mathbb{T}\) in place of \(H\). Now \(e(\phi^{\rm inv})=x-\frac{1}{8}(\mu-1)x^{-1}+o(x^{-1})\), and by the remarks following Corollary 7.6.67 we have \(u^{\rm inv}=u^{[-1]}=x-\frac{1}{8}(\mu-1)x^{-1}+o(x^{-1})\). Hence \(e(\phi^{\rm inv})=u^{\rm inv}\) and thus \(\phi^{\rm inv}\) has an asymptotic expansion as claimed.
_Remark_.: Corollary 7.6.67 yields the more detailed asymptotic expansion
\[\phi^{\rm inv}\ \sim\ x-\sum_{j=1}^{\infty}g_{j}\frac{x^{-2j+1}}{j!},\quad\text{ where }g_{j}=\sum_{i=1}^{j}\frac{(2(j-1))!}{(2j-1-i)!}B_{ij}(u_{1},\ldots,u_{j-i+1}).\]
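The coefficients displayed in Corollary 7.6.19 can be double-checked by composition: substituting the claimed inverse into the truncated expansion of \(\phi\) must cancel the \(x^{-1}\) and \(x^{-3}\) terms. A short SymPy sketch, purely as a consistency check:

```python
# Compose phi (through x^{-3}) with the claimed inverse and verify that
# the error is O(x^{-5}).
import sympy as sp

mu = sp.symbols('mu')
x = sp.symbols('x', positive=True)
c1 = (mu - 1)/8
c3 = (mu**2 - 26*mu + 25)/384            # coefficients of phi (Theorem 7.6.14)
h = x - c1/x - (mu - 1)*(7*mu - 31)/384 * x**(-3)   # claimed phi^inv
g = h + c1/h + c3/h**3                   # phi evaluated at h, mod O(x^{-5})
err = sp.series(g - x, x, sp.oo, 5).removeO()
print(sp.simplify(err))                  # prints 0
```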
### Liouvillian phase functions
The next proposition adds to Corollary 7.5.13 for the differential equation (\(\mathrm{L}_{\nu}\)). This subsection is not used in the rest of the section.
**Proposition 7.6.20**.: _With \(\phi=\phi_{\nu}\), the following are equivalent:_
* \(\nu\in\frac{1}{2}+\mathbb{Z}\)_;_
* \(1/\phi^{\prime}\in\mathbb{R}[x^{-1}]\)_;_
* \(f\in\sigma\big{(}\mathbb{R}(x)^{>}\big{)}\)_; recall:_ \(f=4+(1-\mu)x^{-2}\)_;_
* \(\phi\in\mathrm{Li}\big{(}\mathbb{R}(x)\big{)}\)_;_
* \(x^{2}y^{\prime\prime}+xy^{\prime}+(x^{2}-\nu^{2})y=0\) _for some_ \(y\neq 0\) _in a Liouville extension of_ \(\mathbb{C}(x)\)_;_
* _there are_ \(a,b\in\mathbb{C}\) _and distinct_ \(c_{1},\ldots,c_{n}\in\mathbb{C}^{\times}\) _such that_ \[-\phi^{\prime}{}^{\dagger}+2\phi^{\prime}i=a+bx^{-1}+2\sum_{i=1}^{n}(x-c_{i})^ {-1}\quad\text{and}\quad b=1+2\nu\text{ or }b=1-2\nu.\]
Proof.: The implication (i) \(\Rightarrow\) (ii) follows from Proposition 7.6.16, and (ii) \(\Rightarrow\) (iii) from \(f=\sigma(2\phi^{\prime})\). If \(f\in\sigma\big{(}\mathbb{R}(x)^{\times}\big{)}\), then \(\phi\in\mathrm{Li}\big{(}\mathbb{R}(x)\big{)}\) by Corollary 7.5.14 with \(H:=\mathrm{Li}\big{(}\mathbb{R}(x)\big{)}\); thus (iii) \(\Rightarrow\) (iv). For the rest of the proof, recall that by Lemma 7.5.3 the pair \(\big{(}1/\sqrt{\phi^{\prime}},\phi\big{)}\) parametrizes \(\ker_{\mathcal{C}^{<\infty}}(4\partial^{2}+f)\), so \(\big{(}1/\sqrt{x\phi^{\prime}},\phi\big{)}\) parametrizes \(V_{\nu}\). Thus (iv) \(\Leftrightarrow\) (v) by Corollary 7.5.13. Moreover, if (iv) holds, then \(\big{(}1/\sqrt{\phi^{\prime}},\phi\big{)}\in\mathrm{Li}\big{(}\mathbb{R}(x)\big{)}^{2}\), so (vi) then follows from Corollary 7.5.52.
Suppose \(a,b,c_{1},\ldots,c_{n}\) are as in (vi) and set
\[y\ :=\ (\phi^{\prime})^{-1/2}\,\mathrm{e}^{\phi i},\qquad z\ :=\ 2y^{\dagger}\ =\ -\phi^{\prime}{}^{ \dagger}+2\phi^{\prime}i\in\mathbb{C}(x),\]
so \(4y^{\prime\prime}+fy=0\), hence \(\omega(z)=f\). Then, as germs at \(+\infty\),
\[z\ =\ a+(b{+}2n)x^{-1}{+}O(x^{-2}),\ \text{so}\ z^{\prime}\ =\ O(x^{-2}),\quad z^{2}\ =\ a^{2}{+}2a(b{+}2n)x^{-1}{+}O(x^{-2})\]
and hence
\[f\ =\ 4+(1-\mu)x^{-2}\ =\ \omega(z)\ =\ -(2z^{\prime}+z^{2})\ =\ -a^{2}-2a(b+2n)x^{-1}+O(x^{-2}),\]
so \(b+2n=0\), hence \(\nu=-n-\frac{1}{2}\) or \(\nu=n+\frac{1}{2}\), and thus \(\nu\in\frac{1}{2}+\mathbb{Z}\).
_Remark_.: In the setting of analytic functions, the above equivalence (i) \(\Leftrightarrow\) (v) goes back to Liouville [132]. For more on this, see [113, appendix], [116, §4.2], [164, Chapter VI], and [205, §4.74].
For the next result, note that \(\arctan(g)\in\operatorname{Li}\bigl{(}\mathbb{R}(x)\bigr{)}\) for \(g\in\mathbb{R}(x)\).
**Corollary 7.6.21**.: _Suppose \(\nu\in\frac{1}{2}+\mathbb{Z}\). Then there are distinct \((a_{1},b_{1}),\dots,(a_{m},b_{m})\) in \(\mathbb{R}^{\times}\times\mathbb{R}\) such that_
\[\phi\ =\ x+\sum_{i=1}^{m}\arctan\left(\frac{a_{i}}{x-b_{i}}\right).\]
Proof.: Take imaginary parts in the equality of Proposition 7.6.20(vi), integrate, and appeal to the defining property of \(\phi\) in Proposition 7.6.1 in combination with the fact that for \(a,b\in\mathbb{R}\) we have \(\arctan\left(\frac{a}{x-b}\right)\preccurlyeq x^{-1}\). Here we also use that the derivative of \(\arctan\left(\frac{a}{x-b}\right)\) is \(\frac{-a}{(x-b)^{2}+a^{2}}=\operatorname{Im}\left(\frac{1}{x-c}\right)\) for \(a,b\in\mathbb{R}\), \(c=b-ai\).
Is \(\phi,\phi^{\operatorname{inv}}\in\operatorname{Li}\bigl{(}\mathbb{R}(x) \bigr{)}\) possible? The answer is "no" except for \(\phi=x\):
**Corollary 7.6.22**.: _Suppose \(\phi\in\operatorname{Li}\bigl{(}\mathbb{R}(x)\bigr{)}\), \(\phi\neq x\), and \(\nu\geqslant 0\), so \(\nu=\frac{1}{2}+m\) where \(m\geqslant 1\). Then \(\theta:=1/\phi^{\operatorname{inv}}\) satisfies_
\[\theta^{\prime}=-\theta^{2}(1+y_{1}\theta^{2}+\dots+y_{m}\theta^{2m})\quad \text{where $y_{i}=(2i-1)!!\frac{(\nu,i)}{2^{i}}$ for $i=1,\dots,m$}\]
_and \(\theta\notin\operatorname{Li}\bigl{(}\mathbb{R}(x)\bigr{)}\)._
Proof.: By the Chain Rule and Proposition 7.6.16 we have
\[\theta^{\prime}=-\theta^{2}(\phi^{\operatorname{inv}})^{\prime}=-\theta^{2}( \psi\circ\phi^{\operatorname{inv}})\quad\text{where $\psi=1+y_{1}x^{-2}+\dots+y_{m}x^{-2m}$},\]
and this yields the first claim. Towards a contradiction, assume \(\theta\in\operatorname{Li}\bigl{(}\mathbb{R}(x)\bigr{)}\). Then by [ADH, 10.6.6], \(\theta\) lies in the Liouville extension \(\operatorname{Li}\bigl{(}\mathbb{R}(x)\bigr{)}[i]\) of \(\mathbb{C}(x)=\mathbb{R}(x)[i]\). Hence by Corollary 1.1.35, \(\theta\) is algebraic over \(\mathbb{C}(x)\). Also \(\theta\notin\mathbb{C}\), so Lemma 1.1.36 yields \(Q\in\mathbb{C}(Y)\) with \(Q^{\prime}=1/P\) where \(P:=-Y^{2}(1+y_{1}Y^{2}+\dots+y_{m}Y^{2m})\). Thus
\[\phi^{\prime}\ =\ \frac{1}{\psi}\ =\ -\frac{x^{-2}}{P(x^{-1})}\ =\ -x^{-2}Q^{\prime}(x^{-1})\ =\ Q (x^{-1})^{\prime}\]
and so \(\phi\in\mathbb{R}(x)\). This is impossible by the lemma below.
**Lemma 7.6.23**.: \(\mu=1\Longleftrightarrow\phi=x\Longleftrightarrow\phi\in\mathbb{R}(x)\)_._
Proof.: The first equivalence is clear from the remarks following Proposition 7.6.1. Assume \(\phi\in\mathbb{R}(x)\). Then by Proposition 7.6.20,
\[-\phi^{\prime}{}^{\dagger}+2\phi^{\prime}i\ =\ a+bx^{-1}+2\sum_{i=1}^{n}(x-c_{i} )^{-1}\quad(a,b\in\mathbb{C},\ \text{distinct $c_{1},\dots,c_{n}\in\mathbb{C}^{ \times}$}),\]
so \(2\phi^{\prime}i-a\in\partial F\cap\mathbb{C}F^{\dagger}\) for \(F:=\mathbb{C}(x)\), hence \(2\phi^{\prime}i=a\) by Corollary 1.2.14. Thus \(\phi\in\mathbb{R}x+\mathbb{R}\). Since \(\phi\sim x\) and \(\phi-x\preccurlyeq x^{-1}\), this gives \(\phi=x\).
_Question_.: Does there exist a \(\nu\notin\frac{1}{2}+\mathbb{Z}\) for which \(\phi^{\operatorname{inv}}\in\operatorname{Li}\bigl{(}\mathbb{R}(x)\bigr{)}\)?
**The Bessel functions.** We can now establish some classical facts about distinguished solutions to the Bessel differential equation (B\({}_{\nu}\)): Corollaries 7.6.40, 7.6.41, 7.6.42 below. Our proofs use less complex analysis than those in the literature: we need just one contour integration, for Proposition 7.6.29 below. We assume some basic facts about Euler's \(\Gamma\)-function and recall that \(1/\Gamma\) is an entire function with \(-\mathbb{N}\) as its set of zeros (all simple), so \(\Gamma\) is meromorphic on the complex plane without any zeros and has \(-\mathbb{N}\) as its set of poles. Our main reference for these and other properties of \(\Gamma\) used below is [123]. Let also \(z\mapsto\log z\colon\mathbb{C}\setminus\mathbb{R}^{\leqslant}\to\mathbb{C}\) be the holomorphic extension of the real logarithm function, and for \(z\in\mathbb{C}\setminus\mathbb{R}^{\leqslant}\), set \(z^{\nu}:=\exp(\nu\log z)\). Let \(\nu\in\mathbb{C}\) until further notice, and note that \((\nu,z)\mapsto z^{\nu}\) is analytic on \(\mathbb{C}\times(\mathbb{C}\setminus\mathbb{R}^{\leqslant})\), and, keeping \(\nu\) fixed, has derivative \(z\mapsto\nu z^{\nu-1}\) on \(\mathbb{C}\setminus\mathbb{R}^{\leqslant}\). Moreover, for \(z\in\mathbb{C}\setminus\mathbb{R}\), \(\nu,\nu_{1},\nu_{2}\in\mathbb{C}\), \(t\in\mathbb{R}^{>}\) we have
\[z^{\nu_{1}+\nu_{2}}\ =\ z^{\nu_{1}}z^{\nu_{2}},\quad(tz)^{\nu}\ =\ t^{\nu}z^{\nu},\quad|z^{\nu}|\ =\ |z|^{\operatorname{Re}\nu}\,\mathrm{e}^{-\operatorname{Im}(\nu)\operatorname{Im}(\log z)},\quad\overline{z^{\nu}}\ =\ \overline{z}^{\,\overline{\nu}},\]
and for \(z_{1},z_{2}\in\mathbb{C}^{\times}\) with \(\operatorname{Re}z_{1}\geqslant 0\), \(\operatorname{Re}z_{2}>0\): \(z_{1}z_{2}\in\mathbb{C}\setminus\mathbb{R}^{\leqslant}\), \((z_{1}z_{2})^{\nu}=z_{1}^{\nu}z_{2}^{\nu}\).
**Lemma 7.6.24**.: _Let \(A,B\subseteq\mathbb{C}\) be nonempty and compact. Then_
\[\sum_{n}\max_{(\nu,z)\in A\times B}\left|\frac{(-1)^{n}}{n!\, \Gamma(\nu+n+1)}\left(\frac{z}{2}\right)^{2n}\right|<\infty,\text{ so the series}\] \[\sum_{n}\frac{(-1)^{n}}{n!\,\Gamma(\nu+n+1)}\left(\frac{z}{2} \right)^{2n}\]
_converges absolutely and uniformly on \(A\times B\)._
Proof.: Take \(R\in\mathbb{R}^{>}\) such that \(|z|\leqslant 2R\) for all \(z\in B\). Set \(M_{n}:=\max_{\nu\in A}\left|\frac{1}{\Gamma(\nu+n+1)}\right|\) and take \(n_{0}\in\mathbb{N}\) such that \(|\nu+n+1|\geqslant 1\) for all \(n\geqslant n_{0}\) and \(\nu\in A\). Then \(M_{n+1}\leqslant M_{n}\) for \(n\geqslant n_{0}\), by the functional equation for \(\Gamma\), so the sequence \((M_{n})\) is bounded. Hence \(\sum_{n}\max_{(\nu,z)\in A\times B}\left|\frac{(-1)^{n}}{n!\,\Gamma(\nu+n+1)}\left(\frac{z}{2}\right)^{2n}\right|\leqslant\sum_{n}\frac{M_{n}R^{2n}}{n!}<\infty\).
By Lemma 7.6.24 and [123, V, §1, Theorem 1.1] we obtain a holomorphic function
\[z\mapsto J_{\nu}(z)\ :=\ \sum_{n}\frac{(-1)^{n}}{n!\,\Gamma(n+\nu+1)}\left( \frac{z}{2}\right)^{2n+\nu}:\mathbb{C}\setminus\mathbb{R}^{\leqslant}\to \mathbb{C}.\]
For example, for \(z\in\mathbb{C}\setminus\mathbb{R}^{\leqslant}\), we have
\[J_{\frac{1}{2}}(z)\ =\ \sum_{n}\frac{(-1)^{n}}{n!\,\Gamma(n+\frac{3}{2})}\left( \frac{z}{2}\right)^{2n+\frac{1}{2}}.\]
Note also that
\[J_{-m}(z)\ =\ \sum_{n\geqslant m}\frac{(-1)^{n}}{n!\,(n-m)!}\left(\frac{z}{2} \right)^{2n-m}\ =\ (-1)^{m}J_{m}(z) \tag{7.6.7}\]
and thus
\[J_{-m}(z)\sim\frac{(-1)^{m}}{m!}\left(\frac{z}{2}\right)^{m},\quad J_{m}(z) \sim\frac{1}{m!}\left(\frac{z}{2}\right)^{m}\qquad\text{as }z\to 0.\]
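For integer orders, (7.6.7) is easy to spot-check numerically (SciPy's `jv` accepts negative orders):

```python
# Spot-check J_{-m} = (-1)^m J_m for m = 0..4 on the positive real axis.
import numpy as np
from scipy.special import jv

t = np.linspace(0.5, 10.0, 5)
for m in range(5):
    assert np.allclose(jv(-m, t), (-1)**m * jv(m, t))
print("(7.6.7) verified for m = 0..4")
```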
Termwise differentiation shows that \(J_{\nu}\) satisfies the differential equation (B\({}_{\nu}\)) on \(\mathbb{C}\setminus\mathbb{R}^{\leqslant}\). The function \(J_{\nu}\) is known as the _Bessel function of the first kind_ of order \(\nu\).
Note that (B\({}_{\nu}\)) doesn't change when replacing \(\nu\) by \(-\nu\), so \(J_{-\nu}\) is also a solution of (B\({}_{\nu}\)). Lemma 7.6.24 shows that the function
\[(\nu,z)\mapsto J_{\nu}(z)\ :\ \mathbb{C}\times(\mathbb{C}\setminus\mathbb{R}^{ \leqslant})\to\mathbb{C}\]
is analytic, and that for fixed \(\nu\) the function \(z\mapsto z^{-\nu}J_{\nu}(z)\) on \(\mathbb{C}\setminus\mathbb{R}^{\leqslant}\) extends to an entire function.
Termwise differentiation gives \((z^{\nu}J_{\nu})^{\prime}=z^{\nu}J_{\nu-1}\) and \((z^{-\nu}J_{\nu})^{\prime}=-z^{-\nu}J_{\nu+1}\), so
\[J_{\nu-1}\ =\ \frac{\nu}{z}J_{\nu}+J_{\nu}^{\prime},\qquad J_{\nu+1}\ =\ \frac{\nu}{z}J_{\nu}-J_{\nu}^{\prime}, \tag{7.6.8}\]
by the Product Rule, and thus
\[J_{\nu-1}+J_{\nu+1}\ =\ \frac{2\nu}{z}J_{\nu}\qquad J_{\nu-1}-J_{\nu+1}\ =\ 2J_{\nu}^{\prime}. \tag{7.6.9}\]
Note: if \(\nu+1\notin-\mathbb{N}\), then for \(z\in\mathbb{C}\setminus\mathbb{R}^{\leqslant}\) and \(z\to 0\) we have
\[J_{\nu}(z)\sim\frac{1}{\Gamma(\nu+1)}\left(\frac{z}{2}\right)^{\nu},\ \ \text{and for}\ \nu\neq 0:\ J_{\nu}^{\prime}(z)\sim\frac{\nu}{2\Gamma(\nu+1)}\left(\frac{z}{2} \right)^{\nu-1}. \tag{7.6.10}\]
If \(\nu-1\notin\mathbb{N}\), then (7.6.10) holds with \(-\nu\) in place of \(\nu\). It follows that for \(\nu\notin\mathbb{Z}\) the solutions \(J_{\nu}\), \(J_{-\nu}\) of (B\({}_{\nu}\)) are \(\mathbb{C}\)-linearly independent. Set
\[w\ :=\ \text{wr}(J_{\nu},J_{-\nu})\ =\ J_{\nu}J_{-\nu}^{\prime}-J_{\nu}^{ \prime}J_{-\nu}.\]
Then \(w^{\prime}(z)=-w(z)/z\) on \(\mathbb{C}\setminus\mathbb{R}^{\leqslant}\) (cf. remarks following Lemma 5.2.4). This gives \(c\in\mathbb{C}\) such that \(w(z)=c/z\) on \(\mathbb{C}\setminus\mathbb{R}^{\leqslant}\). If \(\nu\notin\mathbb{Z}\), then \(c\neq 0\), and by (7.6.10) and the remark following it we obtain \(c=-\frac{2\nu}{\Gamma(\nu+1)\Gamma(-\nu+1)}=-\frac{2}{\Gamma(\nu)\Gamma(-\nu+ 1)}\). Hence using \(\Gamma(\nu)\Gamma(1-\nu)=\pi/\sin(\pi\nu)\):
\[\text{wr}(J_{\nu},J_{-\nu})(z)\ =\ -\frac{2\sin(\pi\nu)}{\pi z}\ \ \ \text{ for}\ \nu\notin\mathbb{Z}\ \text{and}\ z\in\mathbb{C}\setminus\mathbb{R}^{\leqslant}. \tag{7.6.11}\]
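A quick numerical check of (7.6.11), assuming SciPy's `jv` and its derivative helper `jvp`:

```python
# Verify wr(J_nu, J_{-nu})(t) = -2 sin(pi nu)/(pi t) for nu = 1/3.
import numpy as np
from scipy.special import jv, jvp

nu = 1/3
t = np.linspace(1.0, 20.0, 5)
w = jv(nu, t)*jvp(-nu, t) - jvp(nu, t)*jv(-nu, t)
assert np.allclose(w, -2*np.sin(np.pi*nu)/(np.pi*t))
print("(7.6.11) verified numerically")
```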
Next we express \(J_{1/2}\) and \(J_{-1/2}\) in terms of \(\sin z\) and \(\cos z\):
**Lemma 7.6.25**.: _On \(\mathbb{C}\setminus\mathbb{R}^{\leqslant}\) we have_
\[J_{1/2}(z)\ =\ \sqrt{\frac{2}{\pi z}}\ \sin z,\qquad J_{-1/2}(z)\ =\ \sqrt{\frac{2}{\pi z}}\ \cos z.\]
Proof.: For \(\nu=1/2\), a fundamental system of solutions of (B\({}_{\nu}\)) on \(\mathbb{R}^{>}\) is given by \(x^{-1/2}\cos x\), \(x^{-1/2}\sin x\). This yields \(a,b\in\mathbb{R}\) such that on \(\mathbb{R}^{>}\),
\[J_{1/2}(t)\ =\ at^{-1/2}\cos t+bt^{-1/2}\sin t.\]
As \(t\to 0^{+}\) we have:
\[t^{-1/2}\cos t\ =\ t^{-1/2}+O(t^{3/2}),\quad t^{-1/2}\sin t\ =\ t^{1/2}+O(t^{5/2}),\ \text{and}\] \[J_{1/2}(t)\sim\frac{1}{\Gamma(3/2)}\left(\frac{t}{2}\right)^{1/2 }\ =\ \sqrt{\frac{2}{\pi}}\ t^{1/2}\ \ (\text{using (7.6.10)}),\]
so \(a=0\), \(b=\sqrt{\frac{2}{\pi}}\), giving the identity claimed for \(J_{1/2}\). For \(\nu=-1/2\) one can use the left identity (7.6.8) for \(\nu=1/2\).
From Lemma 7.6.25 and (7.6.8) we obtain by induction on \(n\):
**Corollary 7.6.26**.: _For each \(n\) there are \(P_{n},Q_{n}\in\mathbb{Q}[Z]\) with \(\deg P_{n}=\deg Q_{n}=n\), both with positive leading coefficient, such that, with \(Q_{-1}:=0\), we have on \(\mathbb{C}\setminus\mathbb{R}^{\leqslant}\):_
\[J_{n+\frac{1}{2}}(z)\ =\ \sqrt{\frac{2}{\pi z}}\,\big{(}P_{n}(z^{-1})\sin z-Q_{n -1}(z^{-1})\cos z\big{)},\] \[J_{-n-\frac{1}{2}}(z)\ =\ (-1)^{n}\sqrt{\frac{2}{\pi z}}\,\big{(}P_{n} (z^{-1})\cos z+Q_{n-1}(z^{-1})\sin z\big{)}.\]
For example,
\[J_{3/2}(z)\ =\ \sqrt{\frac{2}{\pi z}}\,\bigg{(}\frac{\sin z}{z}-\cos z\bigg{)}\,.\]
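These closed forms are easily confirmed on the positive real axis (again assuming SciPy's `jv`):

```python
# Check J_{1/2}, J_{-1/2} (Lemma 7.6.25) and J_{3/2} against their
# elementary closed forms.
import numpy as np
from scipy.special import jv

t = np.linspace(0.5, 30.0, 9)
amp = np.sqrt(2/(np.pi*t))
assert np.allclose(jv(0.5, t),  amp * np.sin(t))
assert np.allclose(jv(-0.5, t), amp * np.cos(t))
assert np.allclose(jv(1.5, t),  amp * (np.sin(t)/t - np.cos(t)))
print("half-integer Bessel identities verified")
```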
For \(\nu\notin\mathbb{Z}\) we have the solution
\[Y_{\nu}\ :=\ \frac{\cos(\pi\nu)J_{\nu}-J_{-\nu}}{\sin(\pi\nu)}\]
of \((\mathrm{B}_{\nu})\) on \(\mathbb{C}\setminus\mathbb{R}^{\leqslant}\). For fixed \(z\in\mathbb{C}\setminus\mathbb{R}^{\leqslant}\), the entire function
\[\nu\mapsto\cos(\pi\nu)J_{\nu}(z)-J_{-\nu}(z)\]
has a zero at each \(\nu\in\mathbb{Z}\), by (7.6.7), so the holomorphic function
\[\nu\mapsto Y_{\nu}(z)\colon\mathbb{C}\setminus\mathbb{Z}\to\mathbb{C}\]
has a removable singularity at each \(\nu\in\mathbb{Z}\), and thus extends to an entire function whose value at \(k\in\mathbb{Z}\) is given by
\[Y_{k}(z):=\lim_{\nu\in\mathbb{C}\setminus\mathbb{Z},\ \nu\to k}Y_{\nu}(z),\ \ \ \ \text{for}\ z\in\mathbb{C}\setminus\mathbb{R}^{\leqslant}.\]
In this way we obtain a two-variable analytic function
\[(\nu,z)\mapsto Y_{\nu}(z)\ :\ \mathbb{C}\times(\mathbb{C}\setminus\mathbb{R}^{ \leqslant})\to\mathbb{C},\]
and thus for each \(\nu\in\mathbb{C}\) a solution \(Y_{\nu}\) of \((\mathrm{B}_{\nu})\) on \(\mathbb{C}\setminus\mathbb{R}^{\leqslant}\), called the _Bessel function of the second kind_ of order \(\nu\). Using (7.6.11) we determine the Wronskian of \(J_{\nu}\), \(Y_{\nu}\) (first for \(\nu\notin\mathbb{Z}\), and then by continuity for all \(\nu\)):
\[\mathrm{wr}(J_{\nu},Y_{\nu})(z)\ =\ -\frac{\mathrm{wr}(J_{\nu},J_{-\nu})(z)}{ \sin(\pi\nu)}\ =\ \frac{2}{\pi z}\ \ \ (z\in\mathbb{C}\setminus\mathbb{R}^{ \leqslant}), \tag{7.6.12}\]
hence \(J_{\nu}\), \(Y_{\nu}\) are \(\mathbb{C}\)-linearly independent. The recurrence formulas (7.6.9) yield analogous formulas for the Bessel functions of the second kind:
\[Y_{\nu-1}+Y_{\nu+1}\ =\ \frac{2\nu}{z}Y_{\nu}\ \ \ \ \ \ \ Y_{\nu-1}-Y_{\nu+1}\ =\ 2Y_{\nu}^{\prime}. \tag{7.6.13}\]
Adding and subtracting these identities gives the analogue of (7.6.8)
\[Y_{\nu-1}\ =\ \frac{\nu}{z}Y_{\nu}+Y_{\nu}^{\prime},\ \ \ \ \ \ Y_{\nu+1}\ =\ \frac{\nu}{z}Y_{\nu}-Y_{\nu}^{\prime}. \tag{7.6.14}\]
For \(\nu\in\mathbb{R}\) we have \(J_{\nu}(\mathbb{R}^{>}),Y_{\nu}(\mathbb{R}^{>})\subseteq\mathbb{R}\), and for such \(\nu\) we let \(J_{\nu}\), \(Y_{\nu}\) denote also the germs (at \(+\infty\)) of their restrictions to \(\mathbb{R}^{>}\). We can now state the main result of the rest of this section. It gives rather detailed information about the behavior of \(J_{\nu}(t)\) and \(Y_{\nu}(t)\) for \(\nu\in\mathbb{R}\) and large \(t\in\mathbb{R}^{>}\):
**Theorem 7.6.27**.: _Let \(\nu\in\mathbb{R}\). Then for the germs \(J_{\nu}\) and \(Y_{\nu}\) we have_
\[J_{\nu}\ =\ \sqrt{\frac{2}{\pi x\phi_{\nu}^{\prime}}}\ \cos\Big{(}\phi_{\nu}- \frac{\pi\nu}{2}-\frac{\pi}{4}\Big{)}\,,\ \ \ \ \ \ Y_{\nu}\ =\ \sqrt{\frac{2}{\pi x\phi_{\nu}^{\prime}}}\ \sin\Big{(}\phi_{\nu}-\frac{\pi\nu}{2}- \frac{\pi}{4}\Big{)}\,.\]
The proof will take some effort, especially for \(Y_{\nu}\) with \(\nu\in\mathbb{Z}\).
Below \(J\) denotes the analytic function \((\nu,z)\mapsto J_{\nu}(z)\colon\mathbb{C}\times(\mathbb{C}\setminus\mathbb{R}^{ \leqslant})\to\mathbb{C}\).
**Lemma 7.6.28**.: _Let \(k\in\mathbb{Z}\) and \(z\in\mathbb{C}\setminus\mathbb{R}^{\leqslant}\). Then_
\[Y_{k}(z)\ =\ \frac{1}{\pi}\left(\left(\frac{\partial J}{\partial\nu}\right)( k,z)+(-1)^{k}\left(\frac{\partial J}{\partial\nu}\right)(-k,z)\right).\]
_In particular, \(Y_{-k}=(-1)^{k}Y_{k}\) and \(Y_{0}(z)=\frac{2}{\pi}\left(\frac{\partial J}{\partial\nu}\right)(0,z)\)._
Proof.: By l'Hôpital's Rule for germs of holomorphic functions at \(k\),
\[\lim_{\nu\to k}\frac{\cos(\pi\nu)J(\nu,z)-J(-\nu,z)}{\sin(\pi\nu)}\] \[= \lim_{\nu\to k}\frac{-\pi\sin(\pi\nu)J(\nu,z)+\cos(\pi\nu)( \partial J/\partial\nu)(\nu,z)+(\partial J/\partial\nu)(-\nu,z)}{\pi\cos(\pi \nu)}\] \[= \frac{1}{\pi}\lim_{\nu\to k}\left((\partial J/\partial\nu)( \nu,z)+\frac{(\partial J/\partial\nu)(-\nu,z)}{\cos(\pi\nu)}\right),\]
and this yields the claims.
The following asymptotic relation is crucial for establishing Theorem 7.6.27. It is due to Hankel [82], with earlier special cases provided by Poisson [155] (\(\nu=0\)), Hansen [83] (\(\nu=1\)), and Jacobi [108] (\(\nu\in\mathbb{Z}\)).
**Proposition 7.6.29** (Hankel).: _Let \(\nu\in\mathbb{R}\). Then for the germ \(J_{\nu}\) we have:_
\[J_{\nu}-\sqrt{\frac{2}{\pi x}}\cos\left(x-\frac{\pi\nu}{2}-\frac{\pi}{4} \right)\ \preccurlyeq\ x^{-3/2}.\]
Proposition 7.6.29 with Lemma 7.6.9 and Remark 7.6.11 yield the identity for the germ \(J_{\nu}\) in Theorem 7.6.27. As to \(Y_{\nu}\), let us simplify notation by setting \(S:=\sqrt{\frac{2}{\pi x\phi_{\nu}^{\prime}}}\), \(\alpha:=\frac{\pi\nu}{2}\), \(\theta:=\phi_{\nu}-\frac{\pi}{4}\). Using the identity for \(J_{\nu}\) in Theorem 7.6.27, the numerator in the definition of \(Y_{\nu}\) turns into
\[S\cdot\left[\cos(2\alpha)\cos(\theta-\alpha)-\cos(\theta+\alpha)\right]\]
and the denominator into \(\sin(2\alpha)\). Trigonometric addition formulas yield
\[\cos(2\alpha)\cos(\theta-\alpha)-\cos(\theta+\alpha)\ =\ \sin(2\alpha)\sin( \theta-\alpha).\]
For \(\nu\notin\mathbb{Z}\), we have \(\sin(2\alpha)\neq 0\), so this gives the identity for the germ \(Y_{\nu}\) in Theorem 7.6.27. The identity for \(Y_{\nu}\) with \(\nu\in\mathbb{Z}\) will be dealt with after the proof of Proposition 7.6.29. First a useful reduction step:
**Lemma 7.6.30**.: _Let \(\nu\in\mathbb{R}\), and let \(J_{\nu}\), \(J_{\nu+1}\) denote also the germs \((\)at \(+\infty)\) of their restrictions to \(\mathbb{R}^{>}\). Then the following are equivalent:_
* \(J_{\nu}-\sqrt{\frac{2}{\pi x}}\cos\left(x-\frac{\pi\nu}{2}-\frac{\pi}{4} \right)\ \preccurlyeq\ x^{-3/2}\)_;_
* \(J_{\nu+1}-\sqrt{\frac{2}{\pi x}}\cos\left(x-\frac{\pi(\nu+1)}{2}- \frac{\pi}{4}\right)\ \preccurlyeq\ x^{-3/2}\)_._
Proof.: Put \(\alpha_{\nu}:=\frac{\pi\nu}{2}+\frac{\pi}{4}\) and \(g_{\nu}:=\sqrt{\frac{2}{\pi x\phi_{\nu}^{\prime}}}\in\mathrm{D}(\mathbb{Q})\). The proof of Lemma 7.6.8 gives \(\frac{1}{\sqrt{\phi_{\nu}^{\prime}}}-1\preccurlyeq x^{-2}\), so \(g_{\nu}-\sqrt{\frac{2}{\pi x}}\preccurlyeq x^{-5/2}\), hence \(g_{\nu}\preccurlyeq x^{-1/2}\) and thus \(g_{\nu}^{\prime}\preccurlyeq x^{-3/2}\). Assume now (i). By (7.6.8) we have
\[J_{\nu+1}\ =\ \frac{\nu}{x}J_{\nu}-J_{\nu}^{\prime} \tag{7.6.15}\]
and by (i)
\[\frac{1}{x}J_{\nu}\ =\ \sqrt{\frac{2}{\pi}}\ x^{-3/2}\cos(x-\alpha_{\nu})+O(x^{- 5/2})\ =\ O(x^{-3/2}). \tag{7.6.16}\]
We have \(J_{\nu}\in V_{\nu}\), so by Lemma 7.6.9 and (i),
\[J_{\nu}\ =\ g_{\nu}\cos(\phi_{\nu}-\alpha_{\nu}),\]
and \(\alpha_{\nu+1}=\alpha_{\nu}+\frac{\pi}{2}\) gives \(\sin(t-\alpha_{\nu})=\cos(t-\alpha_{\nu+1})\). Thus
\[-J_{\nu}^{\prime} =\ -g_{\nu}^{\prime}\cos(\phi_{\nu}-\alpha_{\nu})+g_{\nu}\phi_{ \nu}^{\prime}\sin(\phi_{\nu}-\alpha_{\nu})\] \[=\ g_{\nu}\phi_{\nu}^{\prime}\cos(\phi_{\nu}-\alpha_{\nu+1})+O(x^{ -3/2}).\]
Also \(\cos(x+u)-\cos x\preccurlyeq u\) for \(u\in\mathcal{C}\) and \(\phi_{\nu}-x\preccurlyeq x^{-1}\), so \(\cos(\phi_{\nu}-\alpha_{\nu+1})=\cos(x-\alpha_{\nu+1})+O(x^{-1})\). Using \(\phi_{\nu}^{\prime}-1\preccurlyeq x^{-2}\) and \(g_{\nu}-\sqrt{\frac{2}{\pi x}}\preccurlyeq x^{-5/2}\) this yields
\[g_{\nu}\phi_{\nu}^{\prime}\cos(\phi_{\nu}-\alpha_{\nu+1})\ =\ \sqrt{\frac{2}{\pi x}} \cos(x-\alpha_{\nu+1})+O(x^{-3/2})\]
and so
\[-J_{\nu}^{\prime}\ =\ \sqrt{\frac{2}{\pi x}}\cos(x-\alpha_{\nu+1})+O(x^{-3/2}). \tag{7.6.17}\]
Combining (7.6.15), (7.6.16), (7.6.17) yields (ii). Likewise one proves (ii) \(\Rightarrow\) (i), using
\[J_{\nu}\ =\ \frac{\nu+1}{x}J_{\nu+1}-J_{\nu+1}^{\prime}\]
instead of (7.6.15).
_Remark_.: Using the identities (7.6.14) instead of (7.6.8) shows that Lemma 7.6.30 also holds with \(\sin\), \(Y_{\nu}\), \(Y_{\nu+1}\) in place of \(\cos\), \(J_{\nu}\), \(J_{\nu+1}\).
Lemma 7.6.30 gives a reduction of Proposition 7.6.29 to the case \(\nu>-1/2\). (We could also reduce to the case \(\nu>1\), say, but the choice of \(-1/2\) is useful later.)
**Lemma 7.6.31** (Poisson representation).: _Let \(\mathrm{Re}\,\nu>-\frac{1}{2}\) and \(z\in\mathbb{C}\setminus\mathbb{R}^{\leqslant}\). Then_
\[J_{\nu}(z)\ =\ \frac{(\frac{z}{2})^{\nu}}{\Gamma(\nu+\frac{1}{2})\sqrt{\pi}} \int_{-1}^{1}\mathrm{e}^{tzi}(1-t^{2})^{\nu-\frac{1}{2}}\,dt.\]
Proof.: For \(p,q\in\mathbb{C}\) with \(\mathrm{Re}\,p,\mathrm{Re}\,q>0\) and \(B(p,q):=\int_{0}^{1}t^{p-1}(1-t)^{q-1}\,dt\) we have
\[B(p,q)\ =\ \frac{\Gamma(p)\Gamma(q)}{\Gamma(p+q)},\]
see for example [144, Chapter 2, §1.6]. From the definition of \(B(p,q)\) we obtain
\[\int_{-1}^{1}t^{2n}(1-t^{2})^{\nu-\frac{1}{2}}\,dt\ =\ \int_{0}^{1}s^{n-\frac{1}{2}}(1-s)^{\nu- \frac{1}{2}}\,ds\ =\ B\big{(}n+\tfrac{1}{2},\nu+\tfrac{1}{2}\big{)}.\]
In the equalities below we use this for the fourth equality (and one can appeal to a Dominated Convergence Theorem for the second):
\[\int_{-1}^{1}\mathrm{e}^{tzi}(1-t^{2})^{\nu-\frac{1}{2}}\,dt = \int_{-1}^{1}\sum_{m}(1-t^{2})^{\nu-\frac{1}{2}}t^{m}\frac{(iz)^{m} }{m!}\,dt\] \[= \sum_{m}\left(\int_{-1}^{1}t^{m}(1-t^{2})^{\nu-\frac{1}{2}}\,dt \right)\frac{(iz)^{m}}{m!}\] \[= \sum_{n}\left(\int_{-1}^{1}t^{2n}(1-t^{2})^{\nu-\frac{1}{2}}\,dt \right)\frac{(-1)^{n}z^{2n}}{(2n)!}\] \[= \sum_{n}B\big{(}n+\tfrac{1}{2},\nu+\tfrac{1}{2}\big{)}\frac{(-1)^ {n}z^{2n}}{(2n)!}\] \[= \Gamma\big{(}\nu+\tfrac{1}{2}\big{)}\sum_{n}\frac{(-1)^{n}2^{2n} \,\Gamma\big{(}n+\tfrac{1}{2}\big{)}}{\Gamma(n+\nu+1)(2n)!}\,\Big{(}\frac{z}{2 }\Big{)}^{2n}\] \[= \Gamma\big{(}\nu+\tfrac{1}{2}\big{)}\sqrt{\pi}\sum_{n}\frac{(-1) ^{n}}{n!\,\Gamma(n+\nu+1)}\,\Big{(}\frac{z}{2}\Big{)}^{2n}\,,\]
where for the last equality we used \(\Gamma\big{(}n+\tfrac{1}{2}\big{)}=\sqrt{\pi}\ (2n)!/(n!2^{2n})\), a consequence of the Gauss-Legendre duplication formula for the Gamma function (see [123, XV, §2, \(\Gamma\)8]).
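As another informal aside, the Poisson representation can be checked numerically; below is a minimal sketch using mpmath's quadrature, for arbitrary test values of \(\nu\) and \(z\) with \(\operatorname{Re}\nu>-\frac{1}{2}\):

```python
# Numerical check of Lemma 7.6.31 (Poisson representation).
from mpmath import mp, besselj, quad, gamma, sqrt, pi, exp, mpc

mp.dps = 25
nu, z = mp.mpf("0.7"), mp.mpf("1.9")
i = mpc(0, 1)

integral = quad(lambda t: exp(i*t*z)*(1 - t**2)**(nu - mp.mpf("0.5")), [-1, 1])
lhs = (z/2)**nu/(gamma(nu + mp.mpf("0.5"))*sqrt(pi))*integral
print(lhs - besselj(nu, z))   # ~ 0, up to a tiny quadrature error
```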
We also need the following estimate:
**Lemma 7.6.32**.: _Let \(\lambda,t\in\mathbb{R}\) with \(\lambda>-1\) and \(t\geqslant 1\). Then_
\[t^{-(\lambda+1)}\Gamma(\lambda+1)\ =\ \int_{0}^{\infty}\mathrm{e}^{-st}\,s^{ \lambda}\,ds\ \leqslant\ \int_{0}^{1}\mathrm{e}^{-st}\,s^{\lambda}\,ds+\Gamma(\lambda+1)\,\mathrm{e}^{1-t},\]
_and thus \(\int_{1}^{\infty}\mathrm{e}^{-st}\,s^{\lambda}\,ds\ \leqslant\ \Gamma(\lambda+1)\,\mathrm{e}^{1-t}\)._
Proof.: We have
\[\int_{0}^{\infty}\mathrm{e}^{-st}\,s^{\lambda}\,ds\ =\ \int_{0}^{\infty} \mathrm{e}^{-u}\,\Big{(}\frac{u}{t}\Big{)}^{\lambda}\,\frac{du}{t}\ =\ t^{-(\lambda+1)}\Gamma(\lambda+1)\]
and
\[\int_{1}^{\infty}\mathrm{e}^{-st}\,s^{\lambda}\,ds\ =\ \mathrm{e}^{-t}\int_{1}^{ \infty}\mathrm{e}^{-t(s-1)}\,s^{\lambda}\,ds\ \leqslant\ \mathrm{e}^{-t}\int_{1}^{\infty}\mathrm{e}^{-(s-1)}\,s^{\lambda}\,ds.\]
Now use \(\int_{1}^{\infty}\mathrm{e}^{-(s-1)}\,s^{\lambda}\,ds=\mathrm{e} \int_{1}^{\infty}\mathrm{e}^{-s}\,s^{\lambda}\,ds\leqslant\mathrm{e}\int_{0}^ {\infty}\mathrm{e}^{-s}\,s^{\lambda}\,ds=\mathrm{e}\,\Gamma(\lambda+1)\).
**Corollary 7.6.33**.: _Let \(\lambda\in\mathbb{C}\) with \(\mathrm{Re}\,\lambda>-1\) and \(t\in\mathbb{R}^{\geqslant 1}\). Then_
\[\left|\int_{0}^{1}\mathrm{e}^{-st}\,s^{\lambda}\,ds\ -\ t^{-(\lambda+1)} \Gamma(\lambda+1)\right|\ \leqslant\ \Gamma\big{(}\,\mathrm{Re}(\lambda)+1\big{)}\, \mathrm{e}^{1-t}\,.\]
Proof.: The identities in the beginning of the proof of Lemma 7.6.32 generalize to
\[\int_{0}^{\infty}\mathrm{e}^{-st}\,s^{\lambda}\,ds\ =\ t^{-(\lambda+1)}\Gamma( \lambda+1).\]
Hence
\[\left|\int_{0}^{1}\mathrm{e}^{-st}\,s^{\lambda}\,ds\ -\ t^{-(\lambda+1)}\Gamma( \lambda+1)\right|\ =\ \left|\int_{1}^{\infty}\mathrm{e}^{-st}\,s^{\lambda}\,ds\right|\ \leqslant\ \int_{1}^{\infty}\mathrm{e}^{-st}\,s^{\mathrm{Re}\,\lambda}\,ds,\]
and now use Lemma 7.6.32.
By Lemma 7.6.30 the next result is more than enough to give Proposition 7.6.29. The proof is classical and uses Laplace's method, cf. [144, Chapter 3, §7].
**Lemma 7.6.34**.: _Suppose \(\mathrm{Re}\,\nu>-\frac{1}{2}\), and let \(t\) range over \(\mathbb{R}^{\geqslant 1}\). Then_
\[J_{\nu}(t)\ =\ \sqrt{\frac{2}{\pi t}}\ \cos\left(t-\frac{\pi\nu}{2}-\frac{ \pi}{4}\right)+O(t^{-\frac{3}{2}})\quad\text{as $t\to+\infty$}.\]
Proof.: We consider the holomorphic function
\[z\mapsto f_{\nu,t}(z):=\mathrm{e}^{tzi}(1-z^{2})^{\nu-\frac{1}{2}}\colon \mathbb{C}\setminus(\mathbb{R}^{\leqslant-1}\cup\mathbb{R}^{\geqslant 1}) \to\mathbb{C},\]
and set \(I_{\nu}(t):=\int_{-1}^{1}f_{\nu,t}(s)\,ds\). By Lemma 7.6.31 we have \(J_{\nu}(t)=\frac{(\frac{t}{2})^{\nu}}{\Gamma(\nu+\frac{1}{2})\sqrt{\pi}}I_{ \nu}(t)\).
To determine the asymptotic behavior of \(I_{\nu}(t)\) as \(t\to+\infty\) we integrate along the contour \(\gamma_{R}\), where \(R\) is a real number \(>1\): the positively oriented boundary of the rectangle with vertices \(-1\), \(1\), \(1+R\mathrm{i}\), \(-1+R\mathrm{i}\).
By Cauchy, \(\int_{\gamma_{R}}f_{\nu,t}(z)\,dz=0\) (see [123, III, §5]), and letting \(R\to+\infty\) we obtain \(I_{\nu}(t)=I_{\nu}^{-}(t)-I_{\nu}^{+}(t)\) where
\[I_{\nu}^{-}(t)\ :=\ i\int_{0}^{\infty}f_{\nu,t}(-1+\mathrm{i}s)\,ds,\qquad I_{ \nu}^{+}(t)\ :=\ i\int_{0}^{\infty}f_{\nu,t}(1+\mathrm{i}s)\,ds.\]
Now
\[I_{\nu}^{+}(t)\ =\ \mathrm{i}\int_{0}^{\infty}\mathrm{e}^{t(1+\mathrm{i}s)\mathrm{i}}\left(1-(1+\mathrm{i}s)^{2}\right)^{\nu-\frac{1}{2}}ds\ =\ \mathrm{i}\,\mathrm{e}^{t\mathrm{i}}\int_{0}^{\infty}\mathrm{e}^{-st}(s^{2}-2s\mathrm{i})^{\nu-\frac{1}{2}}\,ds.\]
The complex analytic function \((\kappa,z)\mapsto\left(1+\frac{\mathrm{i}}{2}z\right)^{\kappa}-1\) on \(\mathbb{C}\times\{z\in\mathbb{C}:\ |z|<2\}\) vanishes on the locus \(z=0\), so we have a complex analytic function \((\kappa,z)\mapsto r_{\kappa}(z)\) on the same region such that \(\left(1+\frac{\mathrm{i}}{2}z\right)^{\kappa}-1=zr_{\kappa}(z)\) for all \((\kappa,z)\) in this region. For \(\kappa=\nu-\frac{1}{2}\) this yields a continuous function \(r\colon[0,1]\to\mathbb{C}\) such that
\(\big{(}1+\frac{\mathrm{i}}{2}s\big{)}^{\nu-\frac{1}{2}}=1+r(s)s\) for all \(s\in[0,1]\), and \(r(0)=(\nu-\frac{1}{2})\frac{\mathrm{i}}{2}\). In view of an identity stated just before Lemma 7.6.24 this yields for \(s\in(0,1]\):
\[(s^{2}-2s\mathrm{i})^{\nu-\frac{1}{2}}\ =\ (-2s\mathrm{i})^{\nu-\frac{1}{2}} \big{(}1+\tfrac{\mathrm{i}}{2}s\big{)}^{\nu-\frac{1}{2}}\ =\ (-2s\mathrm{i})^{\nu-\frac{1}{2}}+r(s)s(-2s\mathrm{i})^{\nu- \frac{1}{2}},\]
so
\[\int_{0}^{1}\mathrm{e}^{-st}(s^{2}-2s\mathrm{i})^{\nu-\frac{1}{2}}\,ds\ =\ \int_{0}^{1}\mathrm{e}^{-st}(-2s\mathrm{i})^{\nu-\frac{1}{2}}\,ds+\int_{0}^{1} \mathrm{e}^{-st}\,r(s)s(-2s\mathrm{i})^{\nu-\frac{1}{2}}\,ds.\]
By Corollary 7.6.33, as \(t\to+\infty\),
\[\int_{0}^{1}\mathrm{e}^{-st}(-2s\mathrm{i})^{\nu-\frac{1}{2}}\,ds\ =\ (-2\mathrm{i})^{\nu-\frac{1}{2}}t^{-(\nu+\frac{1}{2})}\Gamma\big{(}\nu+\frac{ 1}{2}\big{)}+O(\mathrm{e}^{-t}).\]
Take \(C\in\mathbb{R}^{>}\) such that \(|r(s)|\leqslant C\) for all \(s\in[0,1]\), and set \(\lambda:=\mathrm{Re}(\nu)-\frac{1}{2}\), so \(\lambda>-1\). Then by Lemma 7.6.32,
\[\left|\int_{0}^{1}\mathrm{e}^{-st}\,r(s)s(-2s\mathrm{i})^{\nu- \frac{1}{2}}\,ds\right| \leqslant\ 2^{\lambda}C\int_{0}^{1}\mathrm{e}^{-st}\,s^{\lambda+1}\,ds\] \[\leqslant\ 2^{\lambda}C\int_{0}^{\infty}\mathrm{e}^{-st}\,s^{ \lambda+1}\,ds\] \[=\ 2^{\lambda}C\,\Gamma(\lambda+2)t^{-\lambda-2}.\]
Hence, as \(t\to+\infty\),
\[\int_{0}^{1}\mathrm{e}^{-st}(s^{2}-2s\mathrm{i})^{\nu-\frac{1}{2}}\,ds\ =\ (-2\mathrm{i})^{\nu-\frac{1}{2}}t^{-\nu-\frac{1}{2}}\Gamma\big{(}\nu+\tfrac{1}{2}\big{)}+O(t^{-\lambda-2}). \tag{7.6.18}\]
Next, take \(D\in\mathbb{R}^{>}\) such that \(|(1-2\frac{\mathrm{i}}{s})^{\nu-\frac{1}{2}}|\leqslant D\) for all \(s\in[1,\infty)\). For such \(s\) we have \(|(s^{2}-2s\mathrm{i})^{\nu-\frac{1}{2}}|\leqslant Ds^{2\lambda}\leqslant Ds^{2 \,\mathrm{Re}\,\nu}\) and thus
\[\left|\int_{1}^{\infty}\mathrm{e}^{-st}(s^{2}-2s\mathrm{i})^{\nu-\frac{1}{2}} \,ds\right|\ \leqslant\ D\int_{1}^{\infty}\mathrm{e}^{-st}\,s^{2\,\mathrm{Re}\,\nu}\,ds,\]
hence by Lemma 7.6.32:
\[\int_{1}^{\infty}\mathrm{e}^{-st}(s^{2}-2s\mathrm{i})^{\nu-\frac{1}{2}}\,ds\ =\ O(\mathrm{e}^{-t})\quad\text{as $t\to+\infty$}. \tag{7.6.19}\]
Combining (7.6.18) and (7.6.19) yields
\[I_{\nu}^{+}(t)\ =\ i(-2\mathrm{i})^{\nu-\frac{1}{2}}\,\mathrm{e}^{\mathrm{i}t}\,t^{- \nu-\frac{1}{2}}\Gamma\big{(}\nu+\tfrac{1}{2}\big{)}+O(t^{-\lambda-2})\quad \text{as $t\to+\infty$}.\]
In the same way we obtain
\[I_{\nu}^{-}(t)\ =\ i(2\mathrm{i})^{\nu-\frac{1}{2}}\,\mathrm{e}^{-\mathrm{i}t}\,t^{- \nu-\frac{1}{2}}\Gamma\big{(}\nu+\tfrac{1}{2}\big{)}+O(t^{-\lambda-2})\quad \text{as $t\to+\infty$}.\]
Thus as \(t\to+\infty\):
\[I_{\nu}(t) \ =\ I_{\nu}^{-}(t)-I_{\nu}^{+}(t)\] \[=\ 2^{\nu-\frac{1}{2}}\mathrm{i}\big{[}\mathrm{i}^{\nu-\frac{1}{2}} \,\mathrm{e}^{-\mathrm{i}t}\,-(-\mathrm{i})^{\nu-\frac{1}{2}}\,\mathrm{e}^{ \mathrm{i}t}\,\big{]}t^{-\nu-\frac{1}{2}}\Gamma\big{(}\nu+\tfrac{1}{2}\big{)} +O(t^{-\lambda-2}).\]
Using \(\mathrm{i}^{\nu-\frac{1}{2}}=\mathrm{e}^{\frac{1}{2}(\nu-\frac{1}{2})\pi \mathrm{i}}\) and the like we have
\[\mathrm{i}\big{[}\mathrm{i}^{\nu-\frac{1}{2}}\,\mathrm{e}^{-\mathrm{i}t}\,-( -\mathrm{i})^{\nu-\frac{1}{2}}\,\mathrm{e}^{\mathrm{i}t}\,\big{]}\ =\ 2\cos\Big{(}t-\frac{\pi\nu}{2}-\frac{\pi}{4}\Big{)}\,,\ \text{and thus}\]
\[J_{\nu}(t)\ =\ \frac{(\frac{t}{2})^{\nu}}{\Gamma(\nu+\frac{1}{2})\sqrt{\pi}}\,I_{ \nu}(t)\ =\ \sqrt{\frac{2}{\pi t}}\ \cos\Big{(}t-\frac{\pi\nu}{2}-\frac{\pi}{4}\Big{)}+O(t^{-\frac{3}{2}})\]
as \(t\to+\infty\).
Here is a consequence of Hankel's result, cf. Lommel [133, p. 67]:
**Corollary 7.6.35**.: _Let \(\nu\in\mathbb{R}\). Then \(J_{\nu}^{2}+J_{\nu+1}^{2}=\ \frac{2}{\pi x}+O(x^{-2})\)._
Proof.: With \(\alpha_{\nu}:=\frac{\pi\nu}{2}+\frac{\pi}{4}\) we have by Proposition 7.6.29,
\[J_{\nu}\ =\ \sqrt{\frac{2}{\pi x}}\,\cos(x-\alpha_{\nu})+O(x^{-3/2}),\qquad J_{ \nu+1}\ =\ \sqrt{\frac{2}{\pi x}}\,\cos(x-\alpha_{\nu+1})+O(x^{-3/2}).\]
Now use \(\sin(x-\alpha_{\nu})=\cos(x-\alpha_{\nu+1})\).
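Informally, the error terms in Proposition 7.6.29 and Corollary 7.6.35 can also be watched numerically: after the indicated rescalings they stay bounded as \(t\) grows. A sketch with mpmath:

```python
# Rescaled errors in Proposition 7.6.29 and Corollary 7.6.35.
from mpmath import mp, besselj, sqrt, cos, pi

mp.dps = 30
nu = mp.mpf("1.3")
for t in (mp.mpf(10), mp.mpf(100), mp.mpf(1000)):
    err1 = besselj(nu, t) - sqrt(2/(pi*t))*cos(t - pi*nu/2 - pi/4)
    err2 = besselj(nu, t)**2 + besselj(nu + 1, t)**2 - 2/(pi*t)
    print(t, abs(err1)*t**mp.mpf("1.5"), abs(err2)*t**2)   # both stay bounded
```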
Since Proposition 7.6.29 is now established, so is Theorem 7.6.27, except for the \(Y_{\nu}\)-identity when \(\nu\in\mathbb{Z}\). To treat that case, and also for use in the next subsection, we now prove a uniform version of Lemma 7.6.34:
**Lemma 7.6.36**.: _Let \(\nu_{0}\in\mathbb{C}\) and \(\operatorname{Re}\nu_{0}>-\frac{1}{2}\). Then there are reals \(\varepsilon>0\), \(t_{0}\geqslant 1\), and \(C_{0}>0\), such that for all \(\nu\in\mathbb{C}\) with \(|\nu-\nu_{0}|<\varepsilon\) and all \(t\geqslant t_{0}\):_
\[\operatorname{Re}\nu\ >\ -\frac{1}{2},\qquad\left|J_{\nu}(t)-\sqrt{\frac{2}{\pi t }}\cos\left(t-\frac{\pi\nu}{2}-\frac{\pi}{4}\right)\right|\ \leqslant\ C_{0}t^{-\frac{3}{2}}.\]
Proof.: We follow the proof of Lemma 7.6.34, where in the beginning we introduced the complex analytic function \((\nu,z)\mapsto r_{\nu}(z)\) on \(\mathbb{C}\times\left\{z\in\mathbb{C}:\,|z|<2\right\}\). Take \(\varepsilon\in\mathbb{R}^{>}\) and \(C\in\mathbb{R}^{\geqslant 1}\) such that \(0<\varepsilon<\operatorname{Re}(\nu_{0})+\frac{1}{2}\) and \(|r_{\nu-\frac{1}{2}}(s)|\leqslant C\) for all \((\nu,s)\in B_{0}\times[-1,1]\) where \(B_{0}:=\{\nu\in\mathbb{C}:\,|\nu-\nu_{0}|<\varepsilon\}\). (To handle \(I_{\nu}^{+}\) we use this for \(s\in[0,1]\), and to deal with \(I_{\nu}^{-}\) we use \(s\in[-1,0]\).) Also take \(D\in\mathbb{R}^{>}\) such that \(|(1-2\frac{\mathrm{i}}{s})^{\nu-\frac{1}{2}}|\leqslant D\) for all \(\nu\in B_{0}\) and \(s\geqslant 1\) (and thus also for \(\nu\in B_{0}\) and \(s\leqslant-1\)). Next, set \(\lambda_{0}:=\operatorname{Re}(\nu_{0})-\frac{1}{2}\), and take \(t_{0}\geqslant 1\) such that \(\mathrm{e}^{t-1}\geqslant t^{\lambda_{0}+\varepsilon+2}\) for all \(t\geqslant t_{0}\). Below \(\nu\) ranges over \(B_{0}\) and \(t\) over \(\mathbb{R}^{\geqslant t_{0}}\), and \(\lambda:=\operatorname{Re}(\nu)-\frac{1}{2}\), so \(\lambda>-1\). Then, as in the proof of Lemma 7.6.34:
\[\left|\int_{0}^{1}\mathrm{e}^{-st}\,r_{\nu-\frac{1}{2}}(s)s(-2s\mathrm{i})^{ \nu-\frac{1}{2}}\,ds\right|\ \leqslant\ 2^{\lambda}C\,\Gamma(\lambda+2)t^{-\lambda-2}.\]
Take \(C_{\Gamma}\in\mathbb{R}^{>}\) such that \(2^{\lambda}C\,\Gamma(\lambda+2)\leqslant C_{\Gamma}\) for all \(\nu\). Then
\[\left|\int_{0}^{1}\mathrm{e}^{-st}\,r_{\nu-\frac{1}{2}}(s)s(-2s\mathrm{i})^{ \nu-\frac{1}{2}}\,ds\right|\ \leqslant\ C_{\Gamma}\,t^{-\lambda-2}.\]
By increasing \(C_{\Gamma}\) we arrange that \(C_{\Gamma}\geqslant C\) and that for all \(\nu\),
\[2^{\lambda}\,\Gamma(\lambda+1),D\,\Gamma(2\lambda+2)\ \leqslant\ C_{\Gamma}.\]
We have \(\mathrm{e}^{1-t}\leqslant t^{-\lambda-2}\), so by Corollary 7.6.33:
\[\left|\int_{0}^{1}\mathrm{e}^{-st}(-2s\mathrm{i})^{\nu-\frac{1}{2}}\,ds-(-2 \mathrm{i})^{\nu-\frac{1}{2}}t^{-(\nu+\frac{1}{2})}\Gamma(\nu+\tfrac{1}{2}) \right|\ \leqslant\ C_{\Gamma}\,t^{-\lambda-2}.\]
Combining this with an earlier displayed inequality yields:
\[\left|\int_{0}^{1}\mathrm{e}^{-st}(s^{2}-2s\mathrm{i})^{\nu-\frac{1}{2}}\,ds-(- 2\mathrm{i})^{\nu-\frac{1}{2}}t^{-\nu-\frac{1}{2}}\Gamma(\nu+\tfrac{1}{2}) \right|\ \leqslant\ 2C_{\Gamma}\,t^{-\lambda-2}. \tag{7.6.20}\]
As in the proof of Proposition 7.6.29 and using Lemma 7.6.32 we also have
\[\left|\int_{1}^{\infty}\mathrm{e}^{-st}(s^{2}-2s\mathrm{i})^{\nu -\frac{1}{2}}\,ds\right| \leqslant\ D\int_{1}^{\infty}\mathrm{e}^{-st}\,s^{2\operatorname{ Re}(\nu)}\,ds\] \[\leqslant\ D\Gamma(2\lambda+2)\,\mathrm{e}^{1-t}\ \leqslant\ C_{\Gamma}\,t^{-\lambda-2}.\]
Combining this with (7.6.20) we obtain:
\[\big{|}I_{\nu}^{+}(t)-\mathrm{i}(-2\mathrm{i})^{\nu-\frac{1}{2}}\,\mathrm{e}^{t\mathrm{i}}\,t^{-\nu-\frac{1}{2}}\Gamma\big{(}\nu+\tfrac{1}{2}\big{)}\big{|}\ \leqslant\ 3C_{\Gamma}\,t^{-\lambda-2}.\]
In the same way,
\[\big{|}I_{\nu}^{-}(t)-\mathrm{i}(2\mathrm{i})^{\nu-\frac{1}{2}}\,\mathrm{e}^{-t\mathrm{i}}\,t^{-\nu-\frac{1}{2}}\Gamma\big{(}\nu+\tfrac{1}{2}\big{)}\big{|}\ \leqslant\ 3C_{\Gamma}\,t^{-\lambda-2}\]
and so as in the proof of Proposition 7.6.29:
\[\bigg{|}I_{\nu}(t)-2^{\nu+\frac{1}{2}}t^{-\nu-\frac{1}{2}}\Gamma\big{(}\nu+ \tfrac{1}{2}\big{)}\cos\big{(}t-\tfrac{\pi\nu}{2}-\tfrac{\pi}{4}\big{)}\bigg{|} \ \leqslant\ 6C_{\Gamma}\,t^{-\lambda-2}.\]
Hence
\[\bigg{|}J_{\nu}(t)-\sqrt{\frac{2}{\pi t}}\ \cos\Big{(}t-\frac{\pi\nu}{2}- \frac{\pi}{4}\Big{)}\bigg{|}\ \leqslant\ \frac{6C_{\Gamma}}{\sqrt{\pi}\,2^{\mathrm{Re}(\nu)}|\Gamma(\nu+\tfrac{1}{2}) |}t^{-\frac{3}{2}}.\]
Thus \(\varepsilon\), \(t_{0}\) as chosen, and a suitable \(C_{0}\) have the required properties.
To finish the proof of Theorem 7.6.27, it suffices by Lemma 7.6.9 and Remark 7.6.11 to show the following:
**Lemma 7.6.37**.: _Let \(k\in\mathbb{Z}\). Then for the germ \(Y_{k}\) we have:_
\[Y_{k}-\sqrt{\frac{2}{\pi x}}\ \sin\left(x-\frac{\pi k}{2}-\frac{\pi}{4} \right)\ \preccurlyeq\ x^{-\frac{3}{2}}.\]
Proof.: By the remark after the proof of Lemma 7.6.30 it is enough to treat the case \(k=0\). Lemma 7.6.36 with \(\nu_{0}=0\) yields reals \(t_{0}\geqslant 1\), \(C_{0}>0\), and \(\varepsilon\) with \(0<\varepsilon<\frac{1}{2}\) such that for all \(\nu\in\mathbb{C}\) with \(|\nu|<\varepsilon\) and all \(t\geqslant t_{0}\):
\[\bigg{|}J_{\nu}(t)-\sqrt{\frac{2}{\pi t}}\cos\Big{(}t-\frac{\pi\nu}{2}-\frac{ \pi}{4}\Big{)}\bigg{|}\ \leqslant\ C_{0}t^{-\frac{3}{2}}.\]
Let \(t\geqslant t_{0}\) be fixed and consider the entire function \(d\) given by
\[d(\nu)\ :=\ J(\nu,t)-\sqrt{\frac{2}{\pi t}}\cos\Big{(}t-\frac{\pi \nu}{2}-\frac{\pi}{4}\Big{)}\,,\ \mathrm{so}\] \[d^{\prime}(\nu)\ =\ \left(\frac{\partial J}{\partial\nu}\right)(\nu,t)- \sqrt{\frac{\pi}{2t}}\sin\Big{(}t-\frac{\pi\nu}{2}-\frac{\pi}{4}\Big{)}\]
and hence by Lemma 7.6.28:
\[d^{\prime}(0)\ =\ \left(\frac{\partial J}{\partial\nu}\right)(0,t)-\sqrt{\frac{ \pi}{2t}}\sin\Big{(}t-\frac{\pi}{4}\Big{)}=\frac{\pi}{2}\left[Y_{0}(t)-\sqrt {\frac{2}{\pi t}}\sin\Big{(}t-\frac{\pi}{4}\Big{)}\right].\]
Also \(|d^{\prime}(0)|\leqslant\frac{1}{\varepsilon}\max_{|\nu|=\varepsilon}|d(\nu)|\), a Cauchy inequality, and thus
\[\bigg{|}Y_{0}(t)-\sqrt{\frac{2}{\pi t}}\sin\Big{(}t-\frac{\pi}{4}\Big{)} \bigg{|}\ =\ \frac{2}{\pi}\,|d^{\prime}(0)|\ \leqslant\ \frac{2C_{0}}{\pi \varepsilon}t^{-\frac{3}{2}},\]
which gives the desired result for \(k=0\) and thus for all \(k\in\mathbb{Z}\).
In the rest of this subsection we let \(\nu\) range over \(\mathbb{R}\) and derive some consequences of Theorem 7.6.27. Toward showing that the germ \(\psi_{\nu}\in\mathrm{E}(\mathbb{Q})\) depends analytically on \(\nu\) we introduce the real analytic function \(\Psi\colon\mathbb{R}\times\mathbb{R}^{>}\to\mathbb{R}^{>}\) by
\[\Psi(\nu,t)\ :=\ \frac{\pi t}{2}\left[J(\nu,t)^{2}+Y(\nu,t)^{2}\right],\]
and let \(\Psi(\nu,-)\) be the function \(t\mapsto\Psi(\nu,t)\colon\mathbb{R}^{>}\to\mathbb{R}^{>}\). Then by Theorem 7.6.27:
**Corollary 7.6.38**.: \(\psi_{\nu}\) _is the germ at \(+\infty\) of \(\Psi(\nu,-)\)._
For \(\phi_{\nu}\) we consider the real analytic function
\[\widetilde{\Phi}\ :\ \mathbb{R}\times\mathbb{R}^{>}\to\mathbb{R},\quad(\nu,t) \mapsto\int_{1}^{t}\frac{1}{\Psi(\nu,s)}\,ds.\]
Let \(\widetilde{\Phi}_{\nu}\) be the germ (at \(+\infty\)) of \(t\mapsto\widetilde{\Phi}(\nu,t)\). Then \(\widetilde{\Phi}_{\nu}^{\prime}=\frac{1}{\psi_{\nu}}=\phi_{\nu}^{\prime}\), so \(\phi_{\nu}=\widetilde{\Phi}_{\nu}+c_{\nu}\) where \(c_{\nu}\) is a real constant. To determine this constant we note that by Proposition 7.6.16 we have \(1-\frac{1}{\psi_{\nu}}\preccurlyeq x^{-2}\), which gives the real number
\[\widetilde{c}_{\nu}\ :=\ \int_{1}^{\infty}\left(1-\frac{1}{\Psi(\nu,s)}\right)\,ds.\]
We also set \(\widetilde{c}(\nu,t)=\int_{1}^{t}\left(1-\frac{1}{\Psi(\nu,s)}\right)\,ds\), so \(\widetilde{c}(\nu,t)\to\widetilde{c}_{\nu}\) as \(t\to+\infty\), and for \(t>0\) we have \(\widetilde{\Phi}(\nu,t)+\widetilde{c}(\nu,t)=t-1\). Taking germs we obtain \(\widetilde{\Phi}_{\nu}+\widetilde{c}_{\nu}+1-x\prec 1\). Also \(\phi_{\nu}-x\prec 1\), so \(\widetilde{\Phi}_{\nu}+c_{\nu}-x\prec 1\), and thus \(c_{\nu}=\widetilde{c}_{\nu}+1\). This suggests the function
\[\Phi:\mathbb{R}\times\mathbb{R}^{>}\to\mathbb{R},\quad(\nu,t)\mapsto \widetilde{\Phi}(\nu,t)+\widetilde{c}_{\nu}+1.\]
The above arguments yield:
**Corollary 7.6.39**.: _For each \(\nu\) the germ of \(\Phi(\nu,-)\) is \(\phi_{\nu}\)._
Thus as for \(\psi_{\nu}\) the germ \(\phi_{\nu}\) has a unique real analytic representative on \(\mathbb{R}^{>}\), namely \(\Phi(\nu,-)\). Note that \(\Phi\) is real analytic iff the function \(\nu\mapsto\widetilde{c}_{\nu}\colon\mathbb{R}\to\mathbb{R}\) is real analytic, but we don't even know if this last function is continuous.
For the next result, cf. [205, §13.74]:
**Corollary 7.6.40**.: \(J_{\nu}^{2}+Y_{\nu}^{2}\) _is eventually strictly decreasing, \(x(J_{\nu}^{2}+Y_{\nu}^{2})\) is eventually strictly increasing if \(|\nu|<1/2\) and eventually strictly decreasing if \(|\nu|>1/2\)._
Proof.: We have \(\psi_{\nu}\sim 1+\frac{\mu-1}{8}x^{-2}\) by Proposition 7.6.16, and \(\psi_{\nu}\) is hardian, thus \(\psi_{\nu}^{\prime}\preccurlyeq x^{-3}\), and so \((x^{-1}\psi_{\nu})^{\prime}=-x^{-2}\psi_{\nu}+x^{-1}\psi_{\nu}^{\prime}\sim-x ^{-2}\). This yields the claims.
**Corollary 7.6.41** (Schafheitlin [179, p. 86]).: _If \(\nu>1/2\), then, as elements of \(\mathrm{D}(\mathbb{Q})\),_
\[\frac{2/\pi}{x}\ <\ J_{\nu}^{2}+Y_{\nu}^{2}\ <\ \frac{2/\pi}{(x^{2}-\nu^{2})^{1/2}}.\]
Proposition 7.6.16 also yields (cf. [205, §13.75] or [144, Chapter 9, §9]):
**Corollary 7.6.42**.: _The germ \(J_{\nu}^{2}+Y_{\nu}^{2}\) has the asymptotic expansion_
\[J_{\nu}^{2}+Y_{\nu}^{2}\ \sim\ \frac{2}{\pi x}\sum_{n}(2n-1)!!\frac{(\nu,n)}{2 ^{n}}x^{-2n}.\]
_Remark_.: Nicholson [142, 141] (see [205, §§13.73-13.75]) established Corollary 7.6.42, but "the analysis is difficult" [144, p. 340]. A simpler deduction of the integral representation of \(J_{\nu}^{2}+Y_{\nu}^{2}\) used by Nicholson was given by Wilkins [208], see also [144, Chapter 9, §7.2]. For more on the history of this result, see [109, §1].
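Informally, the first two terms of the expansion in Corollary 7.6.42 are easy to observe numerically. The sketch below uses mpmath and assumes the text's earlier convention \(\mu=4\nu^{2}\) (an assumption here, consistent with the expansion in Corollary 7.6.51 below):

```python
# First two terms of the expansion of J_nu^2 + Y_nu^2, with mu = 4 nu^2.
from mpmath import mp, besselj, bessely, pi

mp.dps = 30
nu = mp.mpf("0.8"); mu = 4*nu**2
for x in (mp.mpf(20), mp.mpf(200)):
    lhs = besselj(nu, x)**2 + bessely(nu, x)**2
    rhs = (2/(pi*x))*(1 + (mu - 1)/(8*x**2))
    print(x, abs(lhs - rhs)*x**5)   # bounded: the next term is O(x^{-5})
```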
Theorem 7.6.27 and the recurrence relations (7.6.9) and (7.6.13) yield remarkable identities among the germs \(\phi_{\nu},\phi_{\nu-1},\phi_{\nu+1}\in\mathrm{D}(\mathbb{Q})\). For example:
**Corollary 7.6.43**.: _Recalling that \(\psi_{\nu}=1/\phi^{\prime}_{\nu}\), we have_
\[-\sqrt{\psi_{\nu-1}}\sin(\phi_{\nu-1}-\phi_{\nu})+\sqrt{\psi_{\nu+1}} \sin(\phi_{\nu+1}-\phi_{\nu}) =\ \frac{2\nu}{x}\sqrt{\psi_{\nu}},\] \[\sqrt{\psi_{\nu-1}}\cos(\phi_{\nu-1}-\phi_{\nu})-\sqrt{\psi_{\nu+ 1}}\cos(\phi_{\nu+1}-\phi_{\nu}) =\ 0.\]
Proof.: Put \(H_{\nu}:=J_{\nu}+Y_{\nu}\mathrm{i}\in\mathcal{C}^{<\infty}[\mathrm{i}]\). Then
\[H_{\nu}\ =\ \sqrt{\psi_{\nu}}\cdot\sqrt{\frac{2}{\pi x}}\ \mathrm{e}^{(\phi_{\nu}- \frac{\pi\nu}{2}-\frac{\pi}{4})\mathrm{i}},\qquad H_{\nu-1}+H_{\nu+1}\ =\ \frac{2\nu}{x}H_{\nu},\]
and dividing both sides of the equality on the right by \(\sqrt{\frac{2}{\pi x}}\ \mathrm{e}^{(\phi_{\nu}-\frac{\pi\nu}{2}-\frac{\pi}{4}) \mathrm{i}}\) gives
\[\sqrt{\psi_{\nu-1}}\,\mathrm{e}^{(\phi_{\nu-1}-\phi_{\nu}+\pi/2) \mathrm{i}}\,+\sqrt{\psi_{\nu+1}}\,\mathrm{e}^{(\phi_{\nu+1}-\phi_{\nu}-\pi/2) \mathrm{i}}\] \[=\ \sqrt{\psi_{\nu-1}}\ i\,\mathrm{e}^{(\phi_{\nu-1}-\phi_{\nu}) \mathrm{i}}\,-\sqrt{\psi_{\nu+1}}\ i\,\mathrm{e}^{(\phi_{\nu+1}-\phi_{\nu}) \mathrm{i}}\ =\ \frac{2\nu}{x}\sqrt{\psi_{\nu}}.\]
Now take real and imaginary parts in the last identity.
### An asymptotic expansion for the zeros of Bessel functions
We are going to use Corollary 7.6.19 to strengthen a result of McMahon on parametrizing the zeros of Bessel functions: Corollary 7.6.51 and the remark following it. Lemma 7.6.46 below, due to Fourier [71] for \(\nu=0\) and to Lommel [133, p. 69] in general, is only included for completeness; its proof is based on the following useful identity also due to Lommel [134]:
**Lemma 7.6.44**.: _Let \(\alpha,\beta\in\mathbb{C}^{\times}\), \(\mu,\nu\in\mathbb{C}\), and let \(y_{\mu},y_{\nu}\colon\mathbb{C}\setminus\mathbb{R}^{\leqslant}\to\mathbb{C}\) be holomorphic solutions of (B\({}_{\mu}\)) and (B\({}_{\nu}\)), respectively. Then on \(\mathbb{C}\setminus\left(\alpha^{-1}\mathbb{R}^{\leqslant}\cup\beta^{-1} \mathbb{R}^{\leqslant}\right)\):_
\[\frac{d}{dz}\left[z\left(\beta y_{\mu}(\alpha z)y^{\prime}_{\nu} (\beta z)-\alpha y_{\nu}(\beta z)y^{\prime}_{\mu}(\alpha z)\right)\right]\ =\] \[\left((\alpha^{2}-\beta^{2})z-\frac{\mu^{2}-\nu^{2}}{z}\right)y_{ \mu}(\alpha z)y_{\nu}(\beta z).\]
Proof.: Let \(U\subseteq\mathbb{C}\) be open, let \(g,\widetilde{g}\colon U\to\mathbb{C}\) be continuous, and let \(y,\widetilde{y}\colon U\to\mathbb{C}\) be holomorphic such that \(4y^{\prime\prime}+gy=4\widetilde{y}^{\prime\prime}+\widetilde{g}\widetilde{y}=0\). An easy computation gives
\[\mathrm{wr}(y,\widetilde{y})^{\prime}\ =\ \tfrac{1}{4}(g-\widetilde{g})y \widetilde{y}.\]
Assume for now that \(\alpha,\beta\in\mathbb{R}^{>}\) and apply this remark to \(U:=\mathbb{C}\setminus\mathbb{R}^{\leqslant}\) and
\[y(z)\ :=\ (\alpha z)^{1/2}y_{\mu}(\alpha z),\qquad g(z)\ :=4 \alpha^{2}+(1-4\mu^{2})z^{-2}\] \[\widetilde{y}(z)\ :=\ (\beta z)^{1/2}y_{\nu}(\beta z),\qquad\widetilde{g}(z)\ :=\ 4 \beta^{2}+(1-4\nu^{2})z^{-2}.\]
Then \(4y^{\prime\prime}+gy=0\), since \(z\mapsto z^{1/2}y_{\mu}(z)\colon\mathbb{C}\setminus\mathbb{R}^{\leqslant}\to \mathbb{C}\) satisfies (L\({}_{\mu}\)). Likewise, \(4\widetilde{y}^{\prime\prime}+\widetilde{g}\widetilde{y}=0\). This yields the claimed identity for \(\alpha,\beta\in\mathbb{R}^{>}\) by a straightforward computation using that \((\alpha z)^{\frac{1}{2}}=\alpha^{\frac{1}{2}}z^{\frac{1}{2}}\) for such \(\alpha\) and for \(z\in\mathbb{C}\setminus\mathbb{R}^{\leqslant}\).
Next, for general \(\alpha\), \(\beta\), \(z\) we note that \(U_{3}:=\left\{(\alpha,\beta,z)\in(\mathbb{C}^{\times})^{3}:\,\alpha z,\beta z\notin\mathbb{R}^{\leqslant}\right\}\) is open in \(\mathbb{C}^{3}\). Moreover, \(U_{3}\) is connected. (Proof sketch: suppose \((\alpha,\beta,z)\in U_{3}\); then so is \((\mathrm{e}^{\theta\mathrm{i}}\alpha,\mathrm{e}^{\theta\mathrm{i}}\beta,\mathrm{e}^{-\theta\mathrm{i}}z)\) for \(\theta\in\mathbb{R}\), so we can "connect to" \(\alpha\in\mathbb{R}^{>}\); next keep \(\alpha\), \(z\) fixed, and rotate \(\beta\) to a point in \(\mathbb{R}^{>}\) while preserving \(\beta\notin z^{-1}\mathbb{R}^{\leqslant}\).) Both sides in the claimed identity define a complex analytic function on \(U_{3}\). Now use analytic continuation as in [57, (9.4.4)].
We now define certain improper complex integrals and state some basic facts about them. Let \(U\) be an open subset of \(\mathbb{C}\) with \(0\in\partial U\) and let \(f\colon U\to\mathbb{C}\) be holomorphic such that for some \(\varepsilon\in\mathbb{R}^{>}\) we have \(f(u)=O\big{(}|u|^{-1+\varepsilon}\big{)}\) as \(u\to 0\). For \(z\in U\) such that \((0,z]:=\big{\{}tz:\ t\in(0,1]\big{\}}\subseteq U\), we set
\[\int_{0}^{z}f(u)\,du\ :=\ \lim_{\delta\downarrow 0}\int_{\delta}^{1}zf(tz)\, dt\quad\text{(the limit exists in $\mathbb{C}$)}.\]
Suppose \((0,z]\subseteq U\) for all \(z\in U\). Then the function \(z\mapsto\int_{0}^{z}f(u)\,du\) on \(U\) is holomorphic with derivative \(f\), and \(\lim_{z\to 0}\int_{0}^{z}f(u)\,du=0\); to see this, first show that for any \(z_{0}\in U\), open ball \(B\subseteq U\) centered at \(z_{0}\), and \(z\in U\), we have \(\int_{0}^{z}f(u)\,du=\int_{0}^{z_{0}}f(u)\,du+\int_{z_{0}}^{z}f(u)\,du\), where the last integral is by definition \(\int_{0}^{1}(z-z_{0})f\big{(}z_{0}+t(z-z_{0})\big{)}\,dt\). Thus the integral below makes sense:
**Corollary 7.6.45**.: _Let \(\alpha,\beta,z\in\mathbb{C}^{\times}\) satisfy \(\alpha z,\beta z\notin\mathbb{R}^{\leqslant}\), and let \(\nu\in\mathbb{R}^{\geqslant-1}\). Then_
\[(\alpha^{2}-\beta^{2})\int_{0}^{z}uJ_{\nu}(\alpha u)J_{\nu}(\beta u)\,du\ =\ z\big{(}\beta J_{\nu}(\alpha z)J_{\nu}^{\prime}(\beta z)-\alpha J_{\nu}( \beta z)J_{\nu}^{\prime}(\alpha z)\big{)}. \tag{7.6.21}\]
Proof.: Fixing \(\alpha,\beta\in\mathbb{C}^{\times}\) and \(\nu\in\mathbb{R}^{\geqslant-1}\), both sides in (7.6.21) are holomorphic functions of \(z\) on the open subset \(\mathbb{C}\setminus\big{(}\alpha^{-1}\mathbb{R}^{\leqslant}\cup\beta^{-1} \mathbb{R}^{\leqslant}\big{)}\) of \(\mathbb{C}\), with equal derivatives by Lemma 7.6.44. Moreover, both sides tend to \(0\) as \(z\to 0\) in \(\mathbb{C}\setminus\big{(}\alpha^{-1}\mathbb{R}^{\leqslant}\cup\beta^{-1} \mathbb{R}^{\leqslant}\big{)}\), using for the right hand side the first two terms in the power series for \(J_{\nu}\) and \(J_{\nu}^{\prime}\).
**Lemma 7.6.46**.: _Let \(\nu\in\mathbb{R}^{\geqslant-1}\). Then all zeros of \(J_{\nu}\) are contained in \(\mathbb{R}^{>}\)._
Proof.: Let \(\alpha\in\mathbb{C}\setminus\mathbb{R}^{\leqslant}\) be a zero of \(J_{\nu}\). From the power series for \(J_{\nu}\) we see that then \(\overline{\alpha}\) is also a zero of \(J_{\nu}\), and \(\alpha\notin\mathrm{i}\mathbb{R}\). Putting \(\beta=\overline{\alpha}\) and \(z=1\) in (7.6.21) yields
\[(\alpha^{2}-\overline{\alpha}^{2})\int_{0}^{1}tJ_{\nu}(\alpha t)J_{\nu}( \overline{\alpha}t)\,dt\ =\ \overline{\alpha}J_{\nu}(\alpha)J_{\nu}^{\prime}(\overline{\alpha})-\alpha J_ {\nu}(\overline{\alpha})J_{\nu}^{\prime}(\alpha)\ =\ 0.\]
If \(\alpha\notin\mathbb{R}\), then this yields \(\int_{0}^{1}tJ_{\nu}(\alpha t)J_{\nu}(\overline{\alpha}t)\,dt=0\), but \(J_{\nu}(\alpha t)J_{\nu}(\overline{\alpha}t)\in\mathbb{R}^{\geqslant}\) for all \(t\in(0,1]\) and \(J_{\nu}(\alpha t)\neq 0\) for some \(t\in(0,1]\), a contradiction. Thus \(\alpha\in\mathbb{R}\).
Taking \(\alpha=\beta=1\) in Lemma 7.6.44 yields (for all \(\mu,\nu\in\mathbb{C}\)):
\[\frac{d}{dz}\left[z\big{(}J_{\mu}^{\prime}(z)J_{\nu}(z)-J_{\mu}(z)J_{\nu}^{ \prime}(z)\big{)}\right]\ =\ (\mu^{2}-\nu^{2})\frac{J_{\mu}(z)J_{\nu}(z)}{z}\ \ \text{on $\mathbb{C}\setminus \mathbb{R}^{\leqslant}$}. \tag{7.6.22}\]
In the next result it is convenient to let \(J\) denote the analytic function \((\nu,z)\mapsto J_{\nu}(z)\) on \(\mathbb{C}\times(\mathbb{C}\setminus\mathbb{R}^{\leqslant})\), so for \((\nu,z)\in\mathbb{C}\times(\mathbb{C}\setminus\mathbb{R}^{\leqslant})\) we have \(J_{\nu}^{\prime}(z)=\frac{\partial J}{\partial z}(\nu,z)\). In its proof we also use that for any complex analytic functions \(A\), \(B\) on an open set \(U\subseteq\mathbb{C}^{2}\) and \((\mu,z)\) and \((\nu,z)\) ranging over \(U\):
\[\lim_{\mu\to\nu}\frac{\partial}{\partial z}\left(A(\mu,z)\frac{B(\mu,z)-B(\nu,z) }{\mu-\nu}\right)\ =\ \frac{\partial}{\partial z}\left(A(\nu,z)\frac{\partial B}{\partial\nu}(\nu,z) \right),\]
an easy consequence of \(\frac{\partial^{2}B}{\partial\nu\partial z}=\frac{\partial^{2}B}{\partial z \partial\nu}\).
**Corollary 7.6.47**.: _On \(\mathbb{C}\times(\mathbb{C}\setminus\mathbb{R}^{\leqslant})\) we have_
\[\frac{d}{dz}\left[z\left(J_{\nu}(z)\cdot\frac{\partial^{2}J}{\partial\nu\partial z}(\nu,z)-J_{\nu}^{\prime}(z)\cdot\frac{\partial J}{\partial\nu}(\nu,z)\right)\right]\ =\ 2\nu\frac{J_{\nu}^{2}(z)}{z}.\]
Proof.: For \(\mu,\nu\in\mathbb{C}\) with \(\mu\neq\nu\) we obtain from (7.6.22) that on \(\mathbb{C}\setminus\mathbb{R}^{\leqslant}\),
\[\frac{d}{dz}\left[z\left(J_{\mu}(z)\frac{J_{\mu}^{\prime}(z)-J_{\nu}^{\prime}(z) }{\mu-\nu}-J_{\mu}^{\prime}(z)\frac{J_{\mu}(z)-J_{\nu}(z)}{\mu-\nu}\right) \right]\ =\ (\mu+\nu)\frac{J_{\mu}(z)J_{\nu}(z)}{z}.\]
Now let \(\mu\) tend to \(\nu\) and use the identity preceding the corollary.
Below \(\nu\in\mathbb{R}\), so the set \(Z_{\nu}:=\mathbb{R}^{>}\cap J_{\nu}^{-1}(0)\) of positive real zeros of \(J_{\nu}\) is infinite and has no limit point. Let \((j_{\nu,n})\) be the enumeration of \(Z_{\nu}\). Note that if \(t\in Z_{\nu}\), then \(J_{\nu+1}(t)=-J_{\nu}^{\prime}(t)\) by (7.6.9), so \(J_{\nu+1}(t)\neq 0\).
**Proposition 7.6.48** (Schläfli [180]).: _Let \(n\) be given. Then the function_
\[\nu\mapsto j(\nu):=j_{\nu,n}\colon\mathbb{R}^{>-1}\to\mathbb{R}^{>}\]
_is analytic, and its derivative at \(\nu>0\) is given by_
\[j^{\prime}(\nu)\ =\ \frac{2\nu}{j(\nu)\,J_{\nu+1}^{2}\big{(}j(\nu)\big{)}}\int_{0 }^{j(\nu)}J_{\nu}^{2}(s)\frac{ds}{s}.\]
_In particular, the restriction of \(j\) to \(\mathbb{R}^{>}\) is strictly increasing._
In the proof of Proposition 7.6.48 we use:
**Lemma 7.6.49**.: _Let \(\varepsilon\in\mathbb{R}^{>}\). Then there exists \(\delta\in\mathbb{R}^{>}\) such that \(J_{\nu}(t)\neq 0\) for all \(\nu\geqslant-1+\varepsilon\) and \(t\in(0,\delta]\)._
Proof.: Take \(\delta\in\mathbb{R}^{>}\) such that \(\delta^{2}<4\log(1+\varepsilon)\). Then for \(\nu\geqslant-1+\varepsilon\) and \(0<t\leqslant\delta\):
\[\left|\frac{\Gamma(\nu+1)J_{\nu}(t)}{(\frac{1}{2}t)^{\nu}}-1\right|\ =\ \left|\sum_{n \geqslant 1}\frac{(-1)^{n}(\frac{1}{4}t^{2})^{n}}{n!(\nu+n)\cdots(\nu+1)} \right|\ \leqslant\ \frac{\exp(\frac{1}{4}\delta^{2})-1}{\varepsilon}\ <1\,\]
hence \(J_{\nu}(t)\neq 0\).
Proof of Proposition 7.6.48.: Let \(\nu_{0}\in\mathbb{R}^{>-1}\). For each \(m\) we have \(J_{\nu_{0}}(j_{\nu_{0},m})=0\) and \(J_{\nu_{0}}^{\prime}(j_{\nu_{0},m})\neq 0\). So IFT (the Implicit Function Theorem [57, (10.2.2), (10.2.4)]) yields an interval \(I=(\nu_{0}-\varepsilon,\nu_{0}+\varepsilon)\) with \(\varepsilon\in\mathbb{R}^{>}\), \(-1<\nu_{0}-\varepsilon\), and for each \(m\leqslant n\) an analytic function \(j_{m}\colon I\to\mathbb{R}^{>}\) with \(J_{\nu}\big{(}j_{m}(\nu)\big{)}=0\) for \(\nu\in I\) and \(j_{m}(\nu_{0})=j_{\nu_{0},m}\). Shrinking \(\varepsilon\) if necessary we also arrange to have \(\delta\in\mathbb{R}^{>}\) such that \(j_{\nu_{0},m}>\delta\) and \(|j_{m}(\nu)-j_{\nu_{0},m}|<\delta\) for all \(\nu\in I\) and \(m\leqslant n\), and \(J_{\nu}^{\prime}(t)\neq 0\) for all \(\nu\in I\), \(m\leqslant n\) and \(t\in\mathbb{R}\) with \(|t-j_{\nu_{0},m}|<\delta\). Hence for \(\nu\in I\) and \(m\leqslant n\) (using IFT at all \(\nu\in I\)):
\[Z_{\nu}\cap(j_{\nu_{0},m}-\delta,j_{\nu_{0},m}+\delta)\ =\ \big{\{}j_{m}(\nu) \big{\}},\qquad j_{0}(\nu)<j_{1}(\nu)<\cdots<j_{n}(\nu).\]
We claim that for \(\nu\in I\) we have
\[\big{\{}t\in Z_{\nu}:\,t\leqslant j_{n}(\nu)\big{\}}=\big{\{}j_{0}(\nu),j_{1} (\nu),\ldots,j_{n}(\nu)\big{\}}.\]
Suppose for example that \(\nu_{1}\in I\), \(t_{1}\in Z_{\nu_{1}}\), \(t_{1}<j_{0}(\nu_{1})\); we shall derive a contradiction. (The assumption \(\nu_{1}\in I\), \(t_{1}\in Z_{\nu_{1}}\), \(j_{m}(\nu_{1})<t_{1}<j_{m+1}(\nu_{1})\), \(m<n\), leads to a contradiction in the same way.) Using IFT again we see that
\[U:=\big{\{}\nu\in I:\ \text{there is a $t\in Z_{\nu}$ with $t<j_{0}(\nu)$}\big{\}}\]
is open in \(I\), and by an easy extra argument using Lemma 7.6.49 also closed in \(I\). As \(\nu_{1}\in U\), this gives \(U=I\), so \(\nu_{0}\in U\), a contradiction. The claim gives \(j_{m}(\nu)=j_{\nu,m}\) for \(\nu\in I\) and \(m\leqslant n\). Taking \(m=n\) it follows that \(j\) is analytic.
Next, let \(\nu\) range over \(\mathbb{R}^{>}\). Differentiating \(J\big{(}\nu,j(\nu)\big{)}=0\) yields
\[\frac{\partial J}{\partial\nu}\big{(}\nu,j(\nu)\big{)}+J^{\prime}_{\nu}\big{(}j( \nu)\big{)}j^{\prime}(\nu)\ =\ 0.\]
Using the primitive of \(s\mapsto\frac{J^{2}_{\nu}(s)}{s}\) provided by Corollary 7.6.47 gives
\[\int_{0}^{j(\nu)}\frac{J^{2}_{\nu}(s)}{s}\,ds\ =\ -\frac{j(\nu)}{2\nu}J^{\prime}_{ \nu}\big{(}j(\nu)\big{)}\frac{\partial J}{\partial\nu}\big{(}\nu,j(\nu)\big{)}.\]
Now combine the two displayed identities with \(J^{\prime}_{\nu}\big{(}j(\nu)\big{)}=-J_{\nu+1}\big{(}j(\nu)\big{)}\).
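As an informal aside, Schläfli's formula can be tested numerically; in the mpmath sketch below note that besseljzero counts zeros from \(m=1\), whereas the text enumerates them from \(n=0\):

```python
# Numerical check of the derivative formula in Proposition 7.6.48.
from mpmath import mp, besselj, besseljzero, quad, diff

mp.dps = 25
nu = mp.mpf("1.5")
j = besseljzero(nu, 1)                     # j_{nu,0} in the text's indexing

lhs = diff(lambda v: besseljzero(v, 1), nu)
rhs = (2*nu/(j*besselj(nu + 1, j)**2)) \
      * quad(lambda s: besselj(nu, s)**2/s, [0, j])
print(lhs - rhs)   # ~ 0
```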
We now bring in Lemma 7.6.36 to bound \(j_{\nu,n}\) for \(\nu>0\) and sufficiently large \(n\):
**Proposition 7.6.50**.: _Let \(\nu_{0}\in\mathbb{R}^{>}\). Then there is an \(n_{0}\in\mathbb{N}\) such that for \(n\geqslant n_{0}\):_
\[(n+\tfrac{1}{2}\nu_{0}+\tfrac{1}{2})\pi\ \leqslant\ j_{\nu_{0},n}\ \leqslant\ (n+\tfrac{1}{2}\nu_{0}+1)\pi.\]
Proof.: Compactness and Lemma 7.6.36 yield \(C_{0},t_{0}\in\mathbb{R}^{>}\) such that for all \(\nu\) in the smallest closed interval \(I\) containing both \(\nu_{0}\) and \(1/2\), and for all \(t\geqslant t_{0}\):
\[\left|J_{\nu}(t)-\sqrt{\frac{2}{\pi t}}\cos\Big{(}t-\frac{\pi\nu}{2}-\frac{\pi }{4}\Big{)}\right|\ \leqslant\ C_{0}t^{-3/2}.\]
We arrange that \(t_{0}\geqslant C_{0}\sqrt{\pi}\). Hence if \(\nu\in I\) and \(j_{\nu,n}\geqslant t_{0}\), then
\[\left|\cos\Big{(}j_{\nu,n}-\frac{\pi}{2}\nu-\frac{\pi}{4}\Big{)}\right|\ \leqslant\ \frac{1}{\sqrt{2}},\]
and so we have a unique \(k_{\nu,n}\in\mathbb{Z}\) with
\[\tfrac{1}{4}\pi\ \leqslant\ j_{\nu,n}-(\tfrac{1}{2}\nu+\tfrac{1}{4}+k_{\nu,n}) \pi\ \leqslant\ \tfrac{3}{4}\pi.\]
Let \(\nu_{1}\) be the left endpoint of \(I\) (so \(\nu_{1}=1/2\) or \(\nu_{1}=\nu_{0}\)) and take \(n_{0}\in\mathbb{N}\) such that \(j_{\nu_{1},n_{0}}\geqslant t_{0}\). Then \(j_{\nu,n}\geqslant t_{0}\) for \(\nu\in I\) and \(n\geqslant n_{0}\) by Proposition 7.6.48. Let \(n\geqslant n_{0}\); we claim that \(\nu\mapsto k_{\nu,n}\colon I\to\mathbb{Z}\) is constant. To see this, note that Proposition 7.6.48 yields \(\delta\in(0,1/4)\) such that for all \(\nu,\widetilde{\nu}\in I\) with \(|\nu-\widetilde{\nu}|<\delta\) we have \(|j_{\nu,n}-j_{\widetilde{\nu},n}|<\pi/4\), which in view of
\[-\tfrac{1}{2}\pi\ \leqslant\ (j_{\nu,n}-j_{\widetilde{\nu},n})-(\nu- \widetilde{\nu})\tfrac{\pi}{2}-(k_{\nu,n}-k_{\widetilde{\nu},n})\pi\ \leqslant\ \tfrac{1}{2}\pi,\]
gives \(k_{\nu,n}=k_{\widetilde{\nu},n}\). Thus \(\nu\mapsto k_{\nu,n}\colon I\to\mathbb{Z}\) is locally constant, and hence constant. Let \(k_{n}\) be the common value of \(k_{\nu,n}\) for \(\nu\in I\). Now \(Z_{1/2}=\{m\pi:m\geqslant 1\}\) by Lemma 7.6.25, hence \(j_{1/2,n}=(n+1)\pi\) and so
\[\tfrac{1}{4}\pi\ \leqslant\ (n+1)\pi-(\tfrac{1}{2}+k_{n})\pi\ \leqslant\ \tfrac{3}{4}\pi.\]
This yields \(k_{n}=n\).
**Corollary 7.6.51**.: _Suppose \(\nu>0\). There is a strictly increasing \(\zeta\in\mathcal{C}_{n_{0}}\)\((n_{0}\in\mathbb{N})\) whose germ is in \(\mathrm{E}(\mathbb{Q})\) such that \(j_{\nu,n}=\zeta(n)\) for all \(n\geqslant n_{0}\) and which has for \(s:=\big{(}x+\tfrac{1}{2}\nu+\tfrac{3}{4}\big{)}\pi\) the asymptotic expansion_
\[\zeta\ \sim\ s-\left(\frac{\mu-1}{8}\right)s^{-1}-\left(\frac{(\mu-1)(7\mu-31)}{19 2}\right)\frac{s^{-3}}{2!}+\cdots\.\]
Proof.: Take \(n_{0}\) as in Proposition 7.6.50 with \(\nu_{0}=\nu\). Theorem 7.6.27 yields \(t_{0}\in\mathbb{R}^{>}\) and a representative of \(\phi=\phi_{\nu}\) in \(\mathcal{C}^{1}_{t_{0}}\), also denoted by \(\phi\), such that for all \(t\geqslant t_{0}\),
\[\phi^{\prime}(t)>0,\qquad J_{\nu}(t)=\sqrt{\frac{2}{\pi t\phi^{\prime}(t)}}\ \cos\left(\phi(t)-\frac{\pi}{2}\nu-\frac{\pi}{4}\right).\]
Increasing \(n_{0}\) if necessary we arrange that \(j_{\nu,n_{0}}\geqslant t_{0}+\frac{\pi}{2}\). Then for \(n\geqslant n_{0}\),
\[\phi(j_{\nu,n})-\left(\tfrac{1}{2}\nu+\tfrac{3}{4}\right)\pi\in\mathbb{Z}\pi.\]
Take \(k\in\mathbb{Z}\) with \(\phi(j_{\nu,n_{0}})=\left(\tfrac{1}{2}\nu+\tfrac{3}{4}+k\right)\pi\). Then for \(n\geqslant n_{0}\) we have \(\phi(j_{\nu,n})=\left(n-n_{0}+\tfrac{1}{2}\nu+\tfrac{3}{4}+k\right)\pi\). By Proposition 7.6.50 we have for \(n\geqslant n_{0}\),
\[(n+\tfrac{1}{2}\nu+\tfrac{1}{2})\pi\ \leqslant\ j_{\nu,n}\ \leqslant\ (n+\tfrac{1}{2}\nu+1)\pi,\]
and thus for all \(n\geqslant n_{0}\),
\[\phi\big{(}(n+\tfrac{1}{2}\nu+\tfrac{1}{2})\pi\big{)}\ \leqslant\ \phi(j_{\nu,n})\ =\ \big{(}n+\tfrac{1}{2}\nu+\tfrac{3}{4}+k-n_{0}\big{)}\,\pi\ \leqslant\ \phi\big{(}(n+\tfrac{1}{2}\nu+1)\pi\big{)}.\]
Since \(\phi-x\preccurlyeq x^{-1}\), this yields \(k=n_{0}\), therefore \(\phi(j_{\nu,n})=\big{(}n+\tfrac{1}{2}\nu+\tfrac{3}{4}\big{)}\,\pi\) for \(n\geqslant n_{0}\). Let \(\phi^{\mathrm{inv}}\in\mathcal{C}_{t_{1}}\) be the compositional inverse of \(\phi\), where \(t_{1}:=\phi(t_{0})\), and let \(\zeta\in\mathcal{C}_{n_{0}}\) be given by \(\zeta(t):=\phi^{\mathrm{inv}}\left(\big{(}t+\tfrac{1}{2}\nu+\tfrac{3}{4}\big{)}\pi\right)\) for \(t\geqslant n_{0}\). Then \(\zeta\) is strictly increasing with \(j_{\nu,n}=\zeta(n)\) for \(n\geqslant n_{0}\). Taking \(\zeta\) and \(\phi^{\mathrm{inv}}\) as germs we have \(\zeta=\phi^{\mathrm{inv}}\circ s\). Now \(\phi^{\mathrm{inv}}\in\mathrm{E}(\mathbb{Q})\) by Lemma 7.6.3, and \(\mathrm{E}(\mathbb{Q})\circ\mathrm{E}(\mathbb{Q})^{>\mathbb{R}}\subseteq\mathrm{ E}(\mathbb{Q})\), so \(\zeta\in\mathrm{E}(\mathbb{Q})\). The claimed asymptotic expansion for \(\zeta\) follows from Corollary 7.6.19.
_Remark_.: The asymptotic expansion for \(j_{\nu,n}\) as \(n\to\infty\) in Corollary 7.6.51 was obtained by McMahon [138]. (For \(\nu=1\), apparently Gauss was aware of it as early as 1797, cf. [205, p. 506].) What is new here is that we specified a function \(\zeta\) with germ in \(\mathrm{E}(\mathbb{Q})\) such that \(j_{\nu,n}=\zeta(n)\) for all sufficiently large \(n\).
In [144, p. 247], Olver states: "No explicit formula is available for the general term" of the asymptotic expansion for \(j_{\nu,n}\) as \(n\to\infty\) in Corollary 7.6.51. The remark after the proof of Corollary 7.6.19 yields the asymptotic expansion
\[\zeta\ \sim\ s-\sum_{j=1}^{\infty}\left(\sum_{i=1}^{j}\frac{(2(j-1))!}{(2j-1-i)! }B_{ij}(u_{1},\dots,u_{j-i+1})\right)\frac{s^{-2j+1}}{j!},\]
which is perhaps as explicit as possible. The values of \(u_{1}\), \(u_{2}\), \(u_{3}\) given before Corollary 7.6.19 yield the first few terms of this expansion:
\[\zeta\ \sim\ s\ -\ \frac{\mu-1}{8}s^{-1}\ -\ \frac{(\mu-1)(7\mu-3 1)}{192}\frac{s^{-3}}{2!}\ -\\ \frac{(\mu-1)(83\mu^{2}-982\mu+3779)}{2560}\frac{s^{-5}}{3!}\ -\ \cdots.\]
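Informally, the three-term truncation of this expansion can be compared with mpmath's besseljzero; the rescaled error below stays bounded, the next term being \(O(s^{-5})\):

```python
# McMahon's expansion, truncated after three terms, versus actual zeros.
from mpmath import mp, besseljzero, pi

mp.dps = 30
nu = mp.mpf("1.3"); mu = 4*nu**2
for n in (10, 100, 1000):
    s = (n + nu/2 + mp.mpf(3)/4)*pi
    zeta3 = s - (mu - 1)/(8*s) - (mu - 1)*(7*mu - 31)/(384*s**3)
    exact = besseljzero(nu, n + 1)         # the text's j_{nu,n}
    print(n, abs(exact - zeta3)*s**5)      # roughly bounded in n
```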
### Appendix: inversion of formal power series
In this appendix we discuss multiplicative and compositional inversion of power series. We use [ADH, 12.5] and its notations. Thus \(x,y_{1},y_{2},\dots,z\) are distinct indeterminates, and
\[R\ :=\ \mathbb{Q}[x,y_{1},y_{2},\dots],\qquad A\ :=\ R[[z]].\]
We also let \(K\) be a field of characteristic zero. Recall from [ADH, 12.5.1] the definition of the Bell polynomials \(B_{ij}\in\mathbb{Q}[y_{1},\ldots,y_{d}]\), where \(i\leqslant j\) and \(d=j-i+1\):
\[B_{ij}:=\sum_{\begin{subarray}{c}\boldsymbol{k}=(k_{1},\ldots,k_{d})\in \mathbb{N}^{d}\\ |\boldsymbol{k}|=i,\ \|\boldsymbol{k}\|=j\end{subarray}}\frac{j!}{k_{1}!k_{2}! \cdots k_{d}!}\,\left(\frac{y_{1}}{1!}\right)^{k_{1}}\left(\frac{y_{2}}{2!} \right)^{k_{2}}\cdots\left(\frac{y_{d}}{d!}\right)^{k_{d}}.\]
(Also \(B_{ij}:=0\in\mathbb{Q}[y_{1},y_{2},\ldots]\) for \(i>j\).) Let
\[y\ :=\ \sum_{n\geqslant 1}y_{n}\frac{z^{n}}{n!}\in zR[[z]].\]
By [ADH, (12.5.2)] we have in \(R[[z]]\):
\[\frac{y^{i}}{i!}\ =\ \sum_{j\geqslant 0}B_{ij}\frac{z^{j}}{j!}\ =\ \sum_{j\geqslant i}B_{ij}\frac{z^{j}}{j!}.\]
**Lemma 7.6.52**.: _Let \(i\leqslant j\) and \(d=j-i+1\); then_
\[B_{ij}\left(\frac{y_{2}}{2},\frac{y_{3}}{3},\ldots,\frac{y_{d+1}}{d+1}\right) \ =\ \frac{j!}{(i+j)!}B_{i,i+j}(0,y_{2},y_{3},\ldots,y_{j+1}).\]
Proof.: We have
\[y-y_{1}z\ =\ z\sum_{n\geqslant 1}\left(\frac{y_{n+1}}{n+1}\right)\frac{z^{n}}{ n!}\ =\ \sum_{n\geqslant 2}y_{n}\frac{z^{n}}{n!},\]
hence
\[\frac{(y-y_{1}z)^{i}}{i!}\ =\ \sum_{j\geqslant 0}B_{ij}\left(\frac{y_{2}}{2}, \frac{y_{3}}{3},\ldots\right)\frac{z^{i+j}}{j!}\ =\ \sum_{k\geqslant 0}B_{ik}(0,y_{2},y_{3},\ldots)\frac{z^{k}}{k!}.\qed\]
For \(j\in\mathbb{N}\) we set
\[B_{j}\ :=\ \sum_{i=0}^{j}i!\,B_{ij}\in\mathbb{Q}[y_{1},\ldots,y_{j+1}]. \tag{7.6.23}\]
Note that
\[\frac{B_{j}}{j!}\ =\ \sum_{i=0}^{j}\sum_{\begin{subarray}{c}\boldsymbol{k}=(k_{1},\ldots,k_{d})\in\mathbb{N}^{d}\\ |\boldsymbol{k}|=i,\ \|\boldsymbol{k}\|=j\end{subarray}}\frac{i!}{k_{1}! \cdots k_{d}!}\,\left(\frac{y_{1}}{1!}\right)^{k_{1}}\left(\frac{y_{2}}{2!} \right)^{k_{2}}\cdots\left(\frac{y_{d}}{d!}\right)^{k_{d}}.\]
We have \(B_{0}=B_{00}=1\) and \( B_{j}=\sum_{i=1}^{j}i!\,B_{ij}\in\mathbb{Q}[y_{1},\ldots,y_{j}]\) for \(j\geqslant 1\). Using the examples following [ADH, 12.5.4] we obtain
\[B_{1} =\ y_{1},\] \[B_{2} =\ y_{2}+2y_{1}^{2},\] \[B_{3} =\ y_{3}+6y_{1}y_{2}+6y_{1}^{3},\] \[B_{4} =\ y_{4}+8y_{1}y_{3}+6y_{2}^{2}+36y_{1}^{2}y_{2}+24y_{1}^{4},\] \[B_{5} =\ y_{5}+10y_{1}y_{4}+20y_{2}y_{3}+60y_{1}^{2}y_{3}+90y_{1}y_{2}^{ 2}+240y_{1}^{3}y_{2}+120y_{1}^{5}.\]
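As an informal cross-check, this table can be reproduced with sympy, whose partial Bell polynomial bell(j, i, ...) is, in the text's indexing, exactly \(B_{ij}\):

```python
# Reproduce B_1, ..., B_5 from the definition B_j = sum_i i! B_{ij}.
import sympy
from sympy import bell, expand, factorial, symbols

y = symbols("y1:7")   # y1, ..., y6

def B(j):
    if j == 0:
        return sympy.Integer(1)
    return expand(sum(factorial(i)*bell(j, i, y[:j - i + 1])
                      for i in range(1, j + 1)))

for j in range(1, 6):
    print(f"B_{j} =", B(j))
# e.g. B_2 = 2*y1**2 + y2, and B_5 matches the last line of the display.
```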
We have \(1-y\in 1+zR[[z]]\subseteq R[[z]]^{\times}\) with inverse \((1-y)^{-1}=\sum_{i\geqslant 0}y^{i}\), so for \(m\geqslant 1\):
\[(1-y)^{-m}\ =\ \sum_{i\geqslant 0}{i+m-1\choose m-1}y^{i}=\sum_{j\geqslant 0} \left(\sum_{i=0}^{j}m^{\overline{i}}B_{ij}\right)\frac{z^{j}}{j!} \tag{7.6.24}\]
where \(m^{\overline{i}}:=m(m+1)\cdots(m+i-1)\) (so \(m^{\overline{0}}=1\), \(1^{\overline{i}}=i!\)). In particular,
\[(1-y)^{-1}\ =\ \sum_{j\geqslant 0}B_{j}\frac{z^{j}}{j!}.\]
**Corollary 7.6.53**.: _Let \(f=\sum_{n\geqslant 1}f_{n}\frac{z^{n}}{n!}\in K[[z]]\)\((f_{n}\in K)\). Then_
\[(1-f)^{-1} =\ \sum_{j\geqslant 0}B_{j}(f_{1},\ldots,f_{j})\frac{z^{j}}{j!}\] \[=\ 1+f_{1}z+(f_{2}+2f_{1}^{2})\frac{z^{2}}{2!}+(f_{3}+6f_{1}f_{2 }+6f_{1}^{3})\frac{z^{3}}{3!}+\cdots.\]
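An informal sympy check of this identity through order 3:

```python
# (1-f)^{-1} = sum_j B_j(f_1,...,f_j) z^j / j!  through order 3.
from sympy import expand, series, symbols

z = symbols("z")
f1, f2, f3 = symbols("f1:4")
f = f1*z + f2*z**2/2 + f3*z**3/6
lhs = series(1/(1 - f), z, 0, 4).removeO()
rhs = 1 + f1*z + (f2 + 2*f1**2)*z**2/2 + (f3 + 6*f1*f2 + 6*f1**3)*z**3/6
print(expand(lhs - rhs))   # 0
```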
Next we discuss the compositional inversion of formal power series. From [ADH, 12.5] recall that \(zK^{\times}+z^{2}K[[z]]\) is a group under formal composition with \(z\) as its identity element. We denote the compositional inverse of any \(f\in zK^{\times}+z^{2}K[[z]]\) by \(f^{[-1]}\). We equip the field \(K(\!(z)\!)\) of Laurent series with the strongly additive and \(K\)-linear derivation \(d/dz\) (so \(z^{\prime}=1\)).
**Definition 7.6.54**.: Let \(f=\sum_{k\in\mathbb{Z}}f_{k}z^{k}\in K(\!(z)\!)\) where \(f_{k}\in K\) for \(k\in\mathbb{Z}\). Then \(\operatorname{res}(f):=f_{-1}\in K\) is the **residue** of \(f\). (We also have the residue morphism \(f\mapsto f(0)\colon K[[z]]\to K\) of the valuation ring \(K[[z]]\) of \(K(\!(z)\!)\).)
The map \(f\mapsto\operatorname{res}(f)\colon K(\!(z)\!)\to K\) is strongly additive and \(K\)-linear.
**Lemma 7.6.55**.: _Let \(f\in K(\!(z)\!)\). Then \(\operatorname{res}(f^{\prime})=0\), and if \(f\neq 0\), then \(\operatorname{res}(f^{\dagger})=vf\)._
Proof.: The first claim is clearly true. For the second, let \(f=z^{k}g\) where \(k=vf\), \(g\in K[[z]]^{\times}\). Then \(f^{\dagger}=kz^{-1}+g^{\dagger}\in kz^{-1}+K[[z]]\), so \(\operatorname{res}(f^{\dagger})=k=vf\).
**Corollary 7.6.56**.: _Let \(f,g\in K(\!(z)\!)\). Then \(\operatorname{res}(f^{\prime}g)=-\operatorname{res}(fg^{\prime})\) and thus, if \(g\neq 0\) and \(k\in\mathbb{Z}\), then \(\operatorname{res}(f^{\prime}g^{k})=-k\operatorname{res}(fg^{k-1}g^{\prime})\)._
Proof.: For the first claim, use the Product Rule and the first part of Lemma 7.6.55. The second claim follows from the first.
**Corollary 7.6.57** (Jacobi [107]).: _Let \(f\in zK[[z]]^{\neq}\) and \(g\in K(\!(z)\!)\). Then_
\[\operatorname{res}\bigl{(}(g\circ f)f^{\prime}\bigr{)}=vf\ \operatorname{res}(g).\]
Proof.: By strong additivity and \(K\)-linearity it is enough to show this for \(g=z^{k}\) (\(k\in\mathbb{Z}\)). If \(k\neq-1\), then
\[\operatorname{res}\bigl{(}(g\circ f)f^{\prime}\bigr{)}=\operatorname{res}(f^{k }f^{\prime})=\operatorname{res}\Bigl{(}\bigl{(}f^{k+1}/(k+1)\bigr{)}^{\prime} \Bigr{)}=0=vf\ \operatorname{res}(g),\]
and if \(k=-1\), then
\[\operatorname{res}\bigl{(}(g\circ f)f^{\prime}\bigr{)}=\operatorname{res}(f^{ \dagger})=vf=vf\ \operatorname{res}(g),\]
using Lemma 7.6.55.
We now obtain the Lagrange Inversion Formula, following [76]:
**Theorem 7.6.58**.: _Let \(f\in zK^{\times}+z^{2}K[[z]]\), \(g\in K[[z]]\). Then_
\[g\circ f^{[-1]}\ =\ g(0)+\sum_{n\geqslant 1}\frac{1}{n}\operatorname{res}(g^{ \prime}f^{-n})z^{n}.\]
Proof.: Let \(h:=g\circ f^{[-1]}=\sum_{n}h_{n}z^{n}\) (\(h_{n}\in K\)). Then for \(n\geqslant 1\) we have
\[\frac{1}{n}\operatorname{res}(g^{\prime}f^{-n}) =\ \operatorname{res}\bigl{(}gf^{-n-1}f^{\prime}\bigr{)}\ =\ \operatorname{res}\bigl{(}(h\circ f)f^{-n-1}f^{\prime}\bigr{)}\] \[=\ \operatorname{res}\left(\bigl{(}hz^{-n-1}\circ f\bigr{)}\cdot f ^{\prime}\bigr{)}\ =\ \operatorname{res}\bigl{(}hz^{-n-1}\bigr{)}\ =\ h_{n},\]
using Corollary 7.6.56 for the first equality, and 7.6.57 for the next to last.
Taking \(g=z^{m}\) in the above yields:
**Corollary 7.6.59**.: _If \(f\in zK^{\times}+z^{2}K[[z]]\), then for \(m\geqslant 1\):_
\[(f^{[-1]})^{m}\ =\ \sum_{n\geqslant m}\frac{m}{n}\operatorname{res}(z^{m-1}f^{- n})z^{n}.\]
_Remark_.: Theorem 7.6.58 stems from Lagrange [120] and Bürmann (cf. [98]). The identity in Corollary 7.6.59 is from Jabotinsky [106, Theorem II] and Schur [182].
We now use Corollary 7.6.59 to express the coefficients of \(f^{[-1]}\) in terms of those of \(f\); cf. [50, SS3.8, Theorem E]. Here \(f=z+\sum_{n\geqslant 2}f_{n}\frac{z^{n}}{n!}\) with \(f_{n}\in K\) for \(n\geqslant 2\). Then
\[g\ :=\ f^{[-1]}\ =\ z+\sum_{n\geqslant 2}g_{n}\frac{z^{n}}{n!}\qquad(g_{n} \in K\text{ for }n\geqslant 2).\]
For \(h\in zK[[z]]\), let \(\llbracket h\rrbracket\) denote the (upper triangular) iteration matrix of \(h\) as in [ADH, 12.5], so \(\llbracket h\rrbracket_{m,n}=0\) for \(m>n\), \(\llbracket h\rrbracket_{0,0}=1\), \(\llbracket h\rrbracket_{0,n}=0\) for \(n\geqslant 1\).
**Proposition 7.6.60**.: _For \(1\leqslant m\leqslant n\) we have_
\[\llbracket g\rrbracket_{m,n}\ =\ \sum_{i=0}^{n-m}\frac{(-1)^{i}(i+n-1)!}{(i+n-m)!( m-1)!}B_{i,i+n-m}(0,f_{2},\ldots,f_{n-m+1}).\]
Proof.: By [ADH, remarks before 12.5.5] we have \(g^{m}/m!=\sum_{n}\llbracket g\rrbracket_{m,n}z^{n}/n!\) and so by Corollary 7.6.59, \(\llbracket g\rrbracket_{m,n}=(n-1)!/(m-1)!\operatorname{res}(z^{m-1}f^{-n})\) if \(1\leqslant m\leqslant n\). Set
\[h\ :=\ 1-\frac{f}{z}\ =\ \sum_{n\geqslant 1}h_{n}\frac{z^{n}}{n!}\qquad\text{ where }h_{n}=-f_{n+1}/(n+1)\text{ for }n\geqslant 1.\]
Then \(\operatorname{res}(z^{m-1}f^{-n})\) is the constant term of \(z^{m}f^{-n}\) and so equals the coefficient of \(z^{n-m}\) in \(z^{n-m}z^{m}f^{-n}=(z/f)^{n}=\bigl{(}\frac{1}{1-h}\bigr{)}^{n}\). Hence by (7.6.24) for \(n\geqslant m\geqslant 1\):
\[\operatorname{res}(z^{m-1}f^{-n})\ =\ \sum_{i=0}^{n-m}\frac{n^{\overline{i}}}{(n-m)!}B_ {i,n-m}(h_{1},\ldots,h_{n-m-i+1}).\]
Lemma 7.6.52 gives for \(n\geqslant m\) and \(i\leqslant n-m\):
\[B_{i,n-m}\bigl{(}h_{1},\ldots,h_{n-m-i+1}\bigr{)} =\ B_{i,n-m}\bigl{(}\frac{-f_{2}}{2},\ldots,\frac{-f_{n-m-i+2}}{ n-m-i+2}\bigr{)}\] \[=\ \frac{(n-m)!}{(i+n-m)!}B_{i,i+n-m}\bigl{(}0,-f_{2},\ldots,-f_{n-m+ 1}\bigr{)}\] \[=\ \frac{(-1)^{i}(n-m)!}{(i+n-m)!}B_{i,i+n-m}\bigl{(}0,f_{2}, \ldots,f_{n-m+1}\bigr{)}.\]
For \(1\leqslant m\leqslant n\) this yields the identity claimed for \(\llbracket g\rrbracket_{m,n}\).
**Corollary 7.6.61** (Ostrowski [146]).: _For \(n\geqslant 2\) we have_
\[g_{n}=\sum_{\begin{subarray}{c}\boldsymbol{k}=(k_{1},\ldots,k_{n})\in\mathbb{N }^{n}\\ |\boldsymbol{k}|=n-1,\ \|\boldsymbol{k}\|=2n-2\end{subarray}}(-1)^{n-k_{1}-1} \frac{(2n-k_{1}-2)!}{k_{2}!k_{3}!\cdots k_{n}!}\left(\frac{f_{2}}{2!}\right)^{ k_{2}}\left(\frac{f_{3}}{3!}\right)^{k_{3}}\cdots\left(\frac{f_{n}}{n!} \right)^{k_{n}}.\]
Proof.: Let \(n\geqslant 2\). We have \(g_{n}=\llbracket g\rrbracket_{1,n}\), so by Proposition 7.6.60:
\[g_{n}=\sum_{i=1}^{n-1}(-1)^{i}B_{i,i+n-1}(0,f_{2},f_{3},\ldots,f_{n})\qquad(n \geqslant 2).\]
Now use the definition of the Bell polynomials and reindex.
One now easily determines the first few \(g_{n}\):
\[g_{2} =-f_{2}\] \[g_{3} =-f_{3}+3f_{2}^{2}\] \[g_{4} =-f_{4}+10f_{3}f_{2}-15f_{2}^{3}\] \[g_{5} =-f_{5}+15f_{4}f_{2}+10f_{3}^{2}-105f_{3}f_{2}^{2}+105f_{2}^{4}\]
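As an informal check, these \(g_{n}\) do invert \(f\) through order 5: in the sympy sketch below, \(f(g(z))=z+O(z^{6})\).

```python
# Verify that g agrees with f^{[-1]} modulo z^6 for the coefficients above.
from sympy import expand, factorial, series, symbols

z = symbols("z")
f2, f3, f4, f5 = symbols("f2:6")
g2 = -f2
g3 = -f3 + 3*f2**2
g4 = -f4 + 10*f3*f2 - 15*f2**3
g5 = -f5 + 15*f4*f2 + 10*f3**2 - 105*f3*f2**2 + 105*f2**4

f = z + sum(c*z**n/factorial(n) for n, c in ((2, f2), (3, f3), (4, f4), (5, f5)))
g = z + sum(c*z**n/factorial(n) for n, c in ((2, g2), (3, g3), (4, g4), (5, g5)))

comp = expand(f.subs(z, g))                          # f(g(z))
print(expand(series(comp, z, 0, 6).removeO() - z))   # 0
```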
We now establish analogues of some of these results for compositional inversion in the field \(K(\!(x^{-1})\!)\) of Laurent series in \(x^{-1}\) over \(K\). We have the usual valuation \(v\colon K(\!(x^{-1})\!)^{\times}\to\mathbb{Z}\) on \(K(\!(x^{-1})\!)\) with valuation ring \(K[[x^{-1}]]\); so \(v(x)=-1\). We also have the unique continuous derivation \(f\mapsto f^{\prime}\) on \(K(\!(x^{-1})\!)\) such that \(a^{\prime}=0\) for \(a\in K\) and \(x^{\prime}=1\). This valuation and derivation make \(K(\!(x^{-1})\!)\) a d-valued field with small derivation.
As in \(K(\!(z)\!)\), we have a well-behaved notion of composition in \(K(\!(x^{-1})\!)\): for \(f,g\) in \(K(\!(x^{-1})\!)\) with \(f\succ 1\) and \(g=\sum_{k}g_{k}x^{k}\) (\(g_{k}\in K\) for \(k\in\mathbb{Z}\)), the family \((g_{k}f^{k})\) is summable in \(K(\!(x^{-1})\!)\), and we denote its sum by \(g\circ f\). For \(f\in K(\!(x^{-1})\!)\) with \(f\succ 1\), the map \(g\mapsto g\circ f\) is a strongly additive and \(K\)-linear field embedding, which is bijective if \(f\asymp x\). This can be seen, for example, by relating composition in \(K(\!(x^{-1})\!)\) to composition in \(K(\!(z)\!)\): The strongly additive, \(K\)-linear map
\[\tau\colon K(\!(x^{-1})\!)\to K(\!(z)\!)\quad\text{with }\tau(x^{-k})=z^{k} \text{ for all }k\in\mathbb{Z},\]
is an isomorphism of valued fields. Let \(f\in K(\!(x^{-1})\!)\), \(f\succ 1\). Then \(\tau(1/f)\in zK[[z]]\), and we have a commutative diagram

\[\begin{array}{ccc}
K(\!(x^{-1})\!) & \xrightarrow{\ g\,\mapsto\,g\circ f\ } & K(\!(x^{-1})\!)\\
\tau\big\downarrow\ \ & & \ \ \big\downarrow\tau\\
K(\!(z)\!) & \xrightarrow{\ h\,\mapsto\,h\circ\tau(1/f)\ } & K(\!(z)\!)
\end{array}\]

of strongly additive, \(K\)-linear maps. Also \(\tau(1/f)\in zK^{\times}+z^{2}K[[z]]\) if \(f\asymp x\).
As in Definition 7.6.54, we define:
**Definition 7.6.62**.: Let \(f=\sum_{k\in\mathbb{Z}}f_{k}x^{k}\in K(\!(x^{-1})\!)\) where \(f_{k}\in K\) for \(k\in\mathbb{Z}\). Then \(\operatorname{res}(f):=f_{-1}\in K\) is the **residue** of \(f\).
The map \(f\mapsto\operatorname{res}(f)\colon K(\!(x^{-1})\!)\to K\) is strongly additive and \(K\)-linear.
**Lemma 7.6.63**.: _Let \(f\in K(\!(x^{-1})\!)\). Then \(\operatorname{res}(f^{\prime})=0\); if \(f\neq 0\), then \(\operatorname{res}(f^{\dagger})=-vf\)._
Proof.: The first claim is clearly true. For the second, let \(f=x^{k}g\) where \(k=-vf\), \(g\in K(\!(x^{-1})\!)\), \(g\asymp 1\). Then \(f^{\dagger}=kx^{-1}+g^{\dagger}\) with \(g^{\dagger}\preccurlyeq x^{-2}\), so \(\operatorname{res}(f^{\dagger})=k=-vf\).
Just like Lemma 7.6.55 led to Corollaries 7.6.56 and 7.6.57, Lemma 7.6.63 gives:
**Corollary 7.6.64**.: _If \(f,g\in K(\!(x^{-1})\!)\), then \(\operatorname{res}(f^{\prime}g)=-\operatorname{res}(fg^{\prime})\) and thus, if also \(g\neq 0\) and \(k\in\mathbb{Z}\), then \(\operatorname{res}(f^{\prime}g^{k})=-k\operatorname{res}(fg^{k-1}g^{\prime})\)._
**Corollary 7.6.65**.: _If \(f,g\in K(\!(x^{-1})\!)\), \(f\succ 1\), then \(\operatorname{res}\bigl{(}(g\circ f)f^{\prime}\bigr{)}=-vf\,\operatorname{res }(g)\)._
For \(f\in K(\!(x^{-1})\!)\), \(f\asymp x\), let \(f^{[-1]}\) be the compositional inverse of \(f\). One verifies easily that if \(f\in x+x^{-1}K[[x^{-1}]]\), then \(f^{[-1]}\in x+x^{-1}K[[x^{-1}]]\). More generally:
**Theorem 7.6.66**.: _Let \(f,g\in x+x^{-1}K[[x^{-1}]]\). Then_
\[g\circ f^{[-1]}\ =\ x-\sum_{n\geqslant 1}\frac{1}{n}\operatorname{res}(g^{ \prime}f^{n})x^{-n}.\]
Proof.: Let \(h:=g\circ f^{[-1]}=\sum_{k}h_{k}x^{k}\) (\(h_{k}\in K\)). Then for \(k\in\mathbb{Z}^{\neq}\) we have
\[\frac{1}{k}\operatorname{res}(g^{\prime}f^{-k}) =\ \operatorname{res}\bigl{(}gf^{-k-1}f^{\prime}\bigr{)}\ =\ \operatorname{res}\bigl{(}(h\circ f)f^{-k-1}f^{\prime}\bigr{)}\] \[=\ \operatorname{res}\bigl{(}(hx^{-k-1}\circ f)f^{\prime}\bigr{)} \ =\ \operatorname{res}\bigl{(}hx^{-k-1}\bigr{)}\ =\ h_{k},\]
using Corollary 7.6.64 for the first equality and Corollary 7.6.65 for the next to last one. This computation goes through for any \(f,g\in K(\!(x^{-1})\!)\) with \(f\asymp x\); under the assumptions of the theorem we have moreover \(h\in x+x^{-1}K[[x^{-1}]]\), so \(h_{1}=1\), \(h_{0}=0\), and \(h_{k}=0\) for \(k\geqslant 2\), which gives the desired result.
**Corollary 7.6.67**.: _Let \(f=x+\sum_{n\geqslant 1}f_{n}\frac{x^{-2n+1}}{n!}\), all \(f_{n}\in K\), and \(g=f^{[-1]}\). Then_
\[g\ =\ x-\sum_{j\geqslant 1}g_{j}\frac{x^{-2j+1}}{j!}\quad\text{where $g_{j}=\sum_{i=1}^{j}\frac{(2(j-1))!}{(2j-1-i)!}B_{ij}(f_{1},\ldots,f_{j-i+1}) $.}\]
Proof.: Put \(F:=\sum_{n\geqslant 1}f_{n}\frac{z^{n}}{n!}\in zK[[z]]\), \(h:=\sum_{n\geqslant 1}f_{n}\frac{x^{-2n}}{n!}\). Then \(f=x(1+h)\), and \(\operatorname{res}(f^{n})\) is the coefficient of \(x^{-n-1}\) in \(f^{n}/x^{n}=(1+h)^{n}\). Thus \(\operatorname{res}(f^{n})=0\) if \(n\) is even. Now suppose \(n\) is odd, \(n=2j-1\) (\(j\geqslant 1\)). Then the coefficient of \(x^{-n-1}=x^{-2j}\) in \((1+h)^{n}\) equals the coefficient of \(z^{j}\) in the power series \((1+F)^{n}=\sum_{i=0}^{n}\frac{n!}{(n-i)!}\frac{F^{i}}{i!}\), and this coefficient in turn is given by
\[\frac{1}{j!}\sum_{i=1}^{j}\frac{n!}{(n-i)!}B_{ij}(f_{1},\ldots,f_{j-i+1}).\]
Now use Theorem 7.6.66 with \(x\) in the role of \(g\) there.
Using the formulas for \(B_{ij}\) for small values of \(i\), \(j\) given on [ADH, p. 554] we readily compute:
\[g_{1} =f_{1},\] \[g_{2} =f_{2}+2f_{1}^{2},\] \[g_{3} =f_{3}+12f_{1}f_{2}+12f_{1}^{3},\] \[g_{4} =f_{4}+24f_{1}f_{3}+18f_{2}^{2}+180f_{1}^{2}f_{2}+120f_{1}^{4},\] \[g_{5} =f_{5}+40f_{1}f_{4}+80f_{2}f_{3}+560f_{1}^{2}f_{3}+840f_{1}f_{2}^{2}+3360f_{1}^{3}f_{2}+1680f_{1}^{5}.\]
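As a check on Corollary 7.6.67, take \(f=x+cx^{-1}\) with \(c\in K\), so \(f_{1}=c\) and \(f_{n}=0\) for \(n\geqslant 2\). Then \(B_{ij}(c,0,\ldots,0)=0\) for \(i<j\) and \(B_{jj}(c)=c^{j}\), so the corollary gives \(g_{j}=\frac{(2j-2)!}{(j-1)!}\,c^{j}\), that is, \(g_{j}/j!=C_{j-1}c^{j}\) with \(C_{0},C_{1},C_{2},C_{3},\ldots=1,1,2,5,\ldots\) the Catalan numbers. Indeed, solving \(y+cy^{-1}=x\) for \(y\asymp x\) yields

\[f^{[-1]}\ =\ \tfrac{1}{2}\bigl(x+x\sqrt{1-4cx^{-2}}\,\bigr)\ =\ x-cx^{-1}-c^{2}x^{-3}-2c^{3}x^{-5}-5c^{4}x^{-7}-\cdots,\]

where \(\sqrt{1-4cx^{-2}}\) is given by the binomial series.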
_Remark_.: Suppose \(K=\mathbb{R}\). Then \(\mathbb{R}(\!(x^{-1})\!)\) is a subfield of \(\mathbb{T}\), and the composition \((g,f)\mapsto g\circ f\colon\mathbb{T}\times\mathbb{T}^{>\mathbb{R}}\to\mathbb{T}\) in \(\mathbb{T}\) (see the remarks after Corollary 5.3.12) extends the composition in \(\mathbb{R}(\!(x^{-1})\!)\) defined above. All \(f\in\mathbb{T}^{>\mathbb{R}}\) have a compositional inverse \(f^{\rm inv}\) in \(\mathbb{T}\), with \(f^{\rm inv}=f^{[-1]}\) if \(f\in\mathbb{R}(\!(x^{-1})\!)\) and \(f\asymp x\). For \(f>\mathbb{R}\) in the subfield \(\mathbb{T}_{\rm g}\) of \(\mathbb{T}\) consisting of the grid-based series we have \(f^{\rm inv}\in\mathbb{T}_{\rm g}\), and [103, Section 5.4.2] has a formula for the coefficients of \(f^{\rm inv}\) in that case.
### Holes and Slots in Perfect Hardy Fields
In this section \(H\supseteq\mathbb{R}\) is a real closed Hardy field with asymptotic integration. We set \(K:=H[i]\subseteq\mathcal{C}^{<\infty}[i]\), an algebraically closed \(\mathrm{d}\)-valued extension of \(H\). Moreover, \(\widehat{H}\) is an immediate \(H\)-field extension of \(H\) and \(\widehat{K}:=\widehat{H}[i]\) is the corresponding immediate \(\mathrm{d}\)-valued extension of \(K\) as in Section 6.7. We also fix a \(\mathrm{d}\)-maximal Hardy field extension \(H_{*}\) of \(H\). The \(H\)-field \(H_{*}\) is newtonian, and the \(\mathrm{d}\)-valued field extension \(K_{*}:=H_{*}[i]\subseteq\mathcal{C}^{<\infty}[i]\) of \(K\) is newtonian and linearly closed.
Recall that if \(\mathrm{I}(K)\subseteq K^{\dagger}\) and \(A\in K[\partial]^{\neq}\) splits over \(K\), then \(A\) is terminal. In this section we show:
**Theorem 7.7.1**.: _Suppose \(H\) is \(\mathrm{d}\)-perfect and \(\omega\)-free. Then every minimal hole in \(K\) of positive order is flabby. Moreover, \(H\) has no hole of order \(1\), every minimal hole in \(H\) of order \(2\) is flabby, and if all \(A\in H[\partial]^{\neq}\) are terminal, then every minimal hole in \(H\) of positive order is flabby._
In Corollary 7.7.50 below we also show that if \(H\) is \(\mathrm{d}\)-perfect (but not necessarily \(\omega\)-free), then every linear minimal hole \((P,\mathfrak{m},\widehat{f})\) in \(K\) of order \(1\) with \(\widehat{f}\in\widehat{K}\) is flabby. (See the discussion after the proof of Lemma 7.5.39 for an example of a \(\mathrm{d}\)-perfect Hardy field that is not \(\omega\)-free.)
The theorem above originated in an attempt to characterize \(\omega\)-free \(\mathrm{d}\)-perfect Hardy fields among Hardy fields containing \(\mathbb{R}\) purely in terms of asymptotic differential algebra. We hope to return to this topic on another occasion.
With the proof of Corollary 7.7.50 we also finish the proof of Theorem 7.7.1.
**Asymptotic similarity and equivalence of slots.**
Let \((P,\mathfrak{m},\widehat{f})\) be a slot in \(H\) of order \(\geqslant 1\) where \(\widehat{f}\in\widehat{H}\). If \(f\in\mathcal{C}^{<\infty}\) is \(H\)-hardian and \((P,\mathfrak{m},f)\) is a slot in \(H\) (we regard this as including the requirement that \(f\notin H\) and the Hardy field \(H\langle f\rangle\) is an immediate extension of \(H\)), then
\[f\approx_{H}\widehat{f}\quad\Longleftrightarrow\quad(P,\mathfrak{m},f)\text{ and }(P,\mathfrak{m},\widehat{f})\text{ are equivalent.}\]
Note that if \(f\in\mathcal{C}\), \(f\approx_{H}\widehat{f}\), and \(g,h\in H\), \(g\neq 0\), then \(fg-h\approx_{H}\widehat{f}g-h\). From Corollary 3.2.29 and newtonianity of \(H_{*}\) we get a useful result about filling slots in \(H\) by elements of \(\mathrm{d}\)-maximal Hardy field extensions of \(H\):
**Lemma 7.7.2**.: _If \(H\) is \(\omega\)-free and \((P,\mathfrak{m},\widehat{f})\) is \(Z\)-minimal, then there exists \(f\in H_{*}\) such that \((P,\mathfrak{m},f)\) is a hole in \(H\) equivalent to \((P,\mathfrak{m},\widehat{f})\), in particular, \(P(f)=0\), \(f\prec\mathfrak{m}\), and \(f\approx_{H}\widehat{f}\)._
In Lemma 7.7.2 we cannot drop the assumption that \(H\) is \(\omega\)-free. To see why, suppose \(H\) is d-perfect and not \(\omega\)-free (such \(H\) exists by Example 7.5.40), and take \(\omega\in H\) and \((P,\mathfrak{m},\lambda)\) as in Lemma 3.2.10 for \(H\) in the role of \(K\) there, so \(P=2Y^{\prime}+Y^{2}+\omega\) and \((P,\mathfrak{m},\lambda)\) is a minimal hole in \(H\) by Corollary 3.2.11. Since \(\omega\notin\omega(H)\) and \(H\) is 1-d-closed in all its Hardy field extensions, no \(H\)-hardian germ \(f\) satisfies \(P(f)=0\). Thus the conclusion of Lemma 7.7.2 fails for \(\widehat{f}=\lambda\).
Corollary 3.2.30 yields a variant for \(P\) of order \(1\):
**Lemma 7.7.3**.: _If \(H\) is \(\lambda\)-free and \((P,\mathfrak{m},\widehat{f})\) is \(Z\)-minimal of order \(1\) with a quasilinear refinement, then there exists \(f\in H_{*}\) such that \(H\langle f\rangle\) is an immediate extension of \(H\) and \((P,\mathfrak{m},f)\) is a hole in \(H\) equivalent to \((P,\mathfrak{m},\widehat{f})\)._
Here are complex versions of some of the above: Let \((P,\mathfrak{m},\widehat{f})\) be a slot in \(K\) of order \(\geqslant 1\) where \(\widehat{f}\in\widehat{K}\). If \(f\in K_{*}\) and \((P,\mathfrak{m},f)\) is a slot in \(K\) (so \(f\notin K\) and \(K\langle f\rangle\subseteq K_{*}\) is an immediate extension of \(K\)), then
\[f\approx_{K}\widehat{f}\quad\Longleftrightarrow\quad(P,\mathfrak{m},f)\text{ and }(P,\mathfrak{m},\widehat{f})\text{ are equivalent.}\]
If \(f\in\mathcal{C}[\text{i}]\), \(f\approx_{K}\widehat{f}\), and \(g,h\in K\), \(g\neq 0\), then \(fg-h\approx_{K}\widehat{f}g-h\). Recall that \(H\) is \(\omega\)-free iff \(K\) is, by [ADH, 11.7.23]. Again by Corollaries 3.2.29 and 3.2.30:
**Lemma 7.7.4**.: _If \(H\) is \(\omega\)-free and \((P,\mathfrak{m},\widehat{f})\) is \(Z\)-minimal as a slot in \(K\), then there exists \(f\in K_{*}\) such that \(K\langle f\rangle\) is an immediate extension of \(K\) and \((P,\mathfrak{m},f)\) is a hole in \(K\) equivalent to \((P,\mathfrak{m},\widehat{f})\) (and thus \(P(f)=0\), \(f\prec\mathfrak{m}\), and \(f\approx_{K}\widehat{f}\))._
**Lemma 7.7.5**.: _If \(H\) is \(\lambda\)-free and, as a slot over \(K\), \((P,\mathfrak{m},\widehat{f})\) is \(Z\)-minimal of order \(1\) with a quasilinear refinement, then there exists \(f\in K_{*}\) such that \(K\langle f\rangle\) is an immediate extension of \(K\) and \((P,\mathfrak{m},f)\) is a hole in \(K\) equivalent to \((P,\mathfrak{m},\widehat{f})\)._
_In the rest of this section \(H\) is Liouville closed and \(\operatorname{I}(K)\subseteq K^{\dagger}\). (These conditions are satisfied if \(H\) is d-perfect.) We take an \(\mathbb{R}\)-linear complement \(\Lambda_{H}\) of \(\operatorname{I}(H)\) in \(H\), so \(\Lambda:=\Lambda_{H}\mathrm{i}\) is a complement of \(K^{\dagger}\) in \(K\). Next we take an \(\mathbb{R}\)-linear complement \(\Lambda_{H_{*}}\) of \(\operatorname{I}(H_{*})\) in \(H_{*}\), so \(\Lambda_{*}:=\Lambda_{H_{*}}\mathrm{i}\) is a complement of \(K^{\dagger}_{*}\) in \(K_{*}\). Accordingly we identify in the usual way \(\operatorname{U}:=\operatorname{U}_{K}:=K\bigl{[}\operatorname{e}(\Lambda)\bigr{]}\) with \(K[\operatorname{e}^{H\mathrm{i}}]\) and likewise \(\operatorname{U}_{*}:=\operatorname{U}_{K_{*}}:=K_{*}\bigl{[}\operatorname{e}(\Lambda_{*})\bigr{]}\) with \(K_{*}[\operatorname{e}^{H_{*}\mathrm{i}}]\)._
**Zeros of linear differential operators close to the linear part of a slot.**
_In this subsection \((P,1,\widehat{h})\) with \(\widehat{h}\in\widehat{H}\) is a normal or linear slot in \(H\) of order \(r\geqslant 1\)._ Then \(\operatorname{order}(L_{P})=r\), so \(\dim_{\mathbb{C}}\ker_{\operatorname{U}_{*}}L_{P}=r\) by Theorem 7.4.1. Lemma 4.4.4(ii) then gives \(\mathscr{E}_{K_{*}}^{\mathrm{u}}(L_{P})=v_{\mathrm{g}}(\ker_{\operatorname{U}_{*}}^{\neq}L_{P})\). If \(L_{P}\) is terminal, then \(\mathscr{E}^{\mathrm{u}}(L_{P})=\mathscr{E}_{K_{*}}^{\mathrm{u}}(L_{P})\), by Corollary 2.6.23. We use these remarks to deal with firm and flabby cases:
**Lemma 7.7.6**.: _Suppose \((P,1,\widehat{h})\) is firm and ultimate and \(L_{P}\) is terminal. Then there is no \(y\in\mathcal{C}^{r}[\text{i}]^{\neq}\) such that \(L_{P}(y)=0\) and \(y\prec 1\)._
Proof.: Suppose \(y\in\mathcal{C}^{r}[\text{i}]^{\neq}\), \(L_{P}(y)=0\), and \(y\prec 1\). Then \(y\in\operatorname{U}_{*}\), so \(y\prec_{\text{\rm g}}1\) by Lemma 5.10.8. The remarks above give \(v_{\text{\rm g}}y\in\mathscr{E}^{\text{\rm u}}(L_{P})\). Then \(v_{\text{\rm g}}y\leqslant 0\) by Remark 4.4.29, contradicting \(y\prec_{\text{\rm g}}1\).
**Lemma 7.7.7**.: _Suppose \((P,1,\widehat{h})\) is flabby. Then there exists \(y\in\mathcal{C}^{<\infty}[\mathrm{i}]^{\neq}\) such that \(L_{P}(y)=0\) and \(y\prec\mathfrak{m}\) for all \(\mathfrak{m}\in H^{\times}\) with \(v\mathfrak{m}\in v(\widehat{h}-H)\). If in addition \((P,1,\widehat{h})\) is \(Z\)-minimal, deep, and special, then \(y^{\prime},\ldots,y^{(r)}\prec\mathfrak{m}\) for all such \(y\) and \(\mathfrak{m}\)._
Proof.: Flabbiness of \((P,1,\widehat{h})\) and Lemmas 4.4.27 and 4.4.28 yield a \(\gamma\in\mathscr{E}^{\mathrm{u}}(L_{P})\) with \(\gamma>v(\widehat{h}-H)\). Then \(\gamma\in\mathscr{E}^{\mathrm{u}}_{K_{*}}(L_{P})\) by Corollary 4.4.3, so a remark above gives \(y\in\ker_{\mathrm{U}_{*}}^{\neq}L_{P}\) such that \(v_{\mathrm{g}}y=\gamma\). Then \(y\prec_{\mathrm{g}}\mathfrak{m}\) and thus \(y\prec\mathfrak{m}\), for all \(\mathfrak{m}\in H^{\times}\) with \(v\mathfrak{m}\in v(\widehat{h}-H)\). For the remainder, use Lemma 5.10.12.
Next we consider a suitable perturbation \(A\) of \(L_{P}\): _In the rest of this subsection we assume \(L_{P}=A+B\) with \(A,B\in K[\partial]\) satisfying_
\[\mathrm{order}(A)\ =\ r,\qquad\mathfrak{v}\ :=\ \mathfrak{v}(A)\prec^{\flat}1, \qquad B\prec_{\Delta(\mathfrak{v})}\mathfrak{v}^{r+1}A.\]
Then Lemma 3.1.1 gives \(\mathfrak{v}(L_{P})\sim\mathfrak{v}\). By Lemma 4.4.4(ii),(iii),
\[v_{\mathrm{g}}(\ker_{\mathrm{U}_{*}}^{\neq}A)\ =\ \mathscr{E}^{\mathrm{u}}_{K_{*}}(A)\ =\ \mathscr{E}^{\mathrm{u}}_{K_{*}}(L_{P}),\qquad\mathscr{E}^{\mathrm{u}}(A)\ =\ \mathscr{E}^{\mathrm{u}}(L_{P}).\]
If \(A\) is terminal, then all five displayed sets are equal by Corollary 2.6.23. Recall also from Corollary 2.6.21 that if \(A\) splits over \(K\), then \(A\) is terminal, and from Proposition 2.6.26 that if \(\dim_{\mathbb{C}}\ker_{\mathrm{U}}A=r\), then \(A\) is terminal.
We can now generalize Proposition 5.10.15:
**Proposition 7.7.8**.: _Suppose \((P,1,\widehat{h})\) is ultimate, \(A\) is terminal, and \(y\in\mathcal{C}^{r}[\mathrm{i}]\) satisfies \(A(y)=0\), \(y\prec 1\). Then \(y\prec\mathfrak{m}\) for all \(\mathfrak{m}\in H^{\times}\) with \(v\mathfrak{m}\in v(\widehat{h}-H)\)._
Proof.: We have \(y\in\mathrm{U}_{*}\), so \(y\prec_{\mathrm{g}}1\) by Lemma 5.10.8. If \(y=0\), then we are done, so suppose \(y\neq 0\). Then \(0<v_{\mathrm{g}}y\in\mathscr{E}^{\mathrm{u}}(L_{P})\) by remarks before Proposition 7.7.8. Hence \(v_{\mathrm{g}}y>v(\widehat{h}-H)\) by Lemma 4.4.12 if \((P,1,\widehat{h})\) is normal, and by Lemma 4.4.13 if \((P,1,\widehat{h})\) is linear, so \(y\prec_{\mathrm{g}}\mathfrak{m}\) for all \(\mathfrak{m}\in H^{\times}\) with \(v\mathfrak{m}\in v(\widehat{h}-H)\), and thus \(y\prec\mathfrak{m}\) for all such \(\mathfrak{m}\) by Corollary 5.10.9.
**Corollary 7.7.9**.: _Suppose \(A\) is terminal and \((P,1,\widehat{h})\) is \(Z\)-minimal, deep, ultimate, and special. If \(y\in\mathcal{C}^{r}[\mathrm{i}]^{\neq}\) satisfies \(A(y)=0\) and \(y\prec 1\), then \(y,y^{\prime},\ldots,y^{(r)}\prec\mathfrak{m}\) for all \(\mathfrak{m}\in H^{\times}\) with \(v\mathfrak{m}\in v(\widehat{h}-H)\)._
Proof.: First use Proposition 7.7.8 and then Lemma 5.10.12.
Next we turn to firm and flabby cases.
**Lemma 7.7.10**.: _Suppose \(A\) is terminal and \((P,1,\widehat{h})\) is firm and ultimate. Then there is no \(y\in\mathcal{C}^{r}[\mathrm{i}]^{\neq}\) such that \(A(y)=0\) and \(y\prec 1\)._
Proof.: Suppose \(y\in\mathcal{C}^{r}[\mathrm{i}]^{\neq}\), \(A(y)=0\), and \(y\prec 1\). Then \(y\in\mathrm{U}_{*}\), so \(y\prec_{\mathrm{g}}1\) by Lemma 5.10.8. The remarks before Proposition 7.7.8 give \(v_{\mathrm{g}}y\in\mathscr{E}^{\mathrm{u}}(L_{P})\). Hence \(v_{\mathrm{g}}y\leqslant 0\) by Remark 4.4.29, contradicting \(y\prec_{\mathrm{g}}1\).
**Lemma 7.7.11**.: _Suppose \((P,1,\widehat{h})\) is flabby. Then there exists \(y\in\mathcal{C}^{<\infty}[\mathrm{i}]^{\neq}\) such that \(A(y)=0\) and \(y\prec\mathfrak{m}\) for all \(\mathfrak{m}\in H^{\times}\) with \(v\mathfrak{m}\in v(\widehat{h}-H)\). If in addition \((P,1,\widehat{h})\) is \(Z\)-minimal, deep, and special, then \(y^{\prime},\ldots,y^{(r)}\prec\mathfrak{m}\) for all such \(y\) and \(\mathfrak{m}\)._
Proof.: Flabbiness of \((P,1,\widehat{h})\) and Lemmas 4.4.27 and 4.4.28 yield a \(\gamma\in\mathscr{E}^{\mathrm{u}}(L_{P})=\mathscr{E}^{\mathrm{u}}(A)\) with \(\gamma>v(\widehat{h}-H)\). The rest of the proof is the same as that of Lemma 7.7.7 with \(A\) instead of \(L_{P}\)
_Remark_.: The material above in this subsection goes through if instead of \((P,1,\widehat{h})\) with \(\widehat{h}\in\widehat{H}\) being a normal or linear slot in \(H\) of order \(r\geqslant 1\) we assume \((P,1,\widehat{h})\) with \(\widehat{h}\in\widehat{K}\) is a normal or linear slot in \(K\) of order \(r\geqslant 1\), and "\(\mathfrak{m}\in H^{\times}\) with \(v\mathfrak{m}\in v(\widehat{h}-H)\)" is replaced everywhere by "\(\mathfrak{m}\in K^{\times}\) with \(v\mathfrak{m}\in v(\widehat{h}-K)\)".
To see this, use the \(K\)-versions of Lemmas 4.4.12, 4.4.13, 4.4.27, of Lemma 4.4.28 and Remark 4.4.29, and of Lemma 5.10.12; cf. the discussion at the end of the subsection _An application to slots in \(H\)_ of Section 5.10.
**Application to linear slots.**
In this subsection we apply the material in the last subsection to the study of linear slots (in \(H\) and in \(K\)). _Until further notice \((P,\mathfrak{m},\widehat{h})\) with \(\widehat{h}\in\widehat{H}\) is a \(Z\)-minimal linear slot in \(H\) of order \(r\geqslant 1\)._
**Lemma 7.7.12**.: _There exists \(f\in H_{*}\) such that \(P(f)=0\) and \(f\prec\mathfrak{m}\)._
Proof.: We may replace \((P,\mathfrak{m},\widehat{h})\) by a refinement whenever convenient. Hence by Remark 3.4.7 we may arrange that \((P,\mathfrak{m},\widehat{h})\) is isolated. Then \(P(0)\neq 0\), and \(\gamma:=v\widehat{h}\) is the unique element of \(\Gamma\setminus\mathscr{E}^{\mathrm{e}}(L_{P})\) such that \(v^{\mathrm{e}}_{L_{P}}(\gamma)=v(P(0))\), by Lemmas 3.2.14 and 3.4.15. Now \(H_{*}\) is linearly newtonian, so Corollary 1.5.7 yields \(f\in H_{*}^{\times}\) with \(P(f)=0\), \(vf\notin\mathscr{E}^{\mathrm{e}}_{H_{*}}(L_{P})\), and \(v^{\mathrm{e}}_{L_{P}}(vf)=v(P(0))\). By Corollary 1.8.10, \(v^{\mathrm{e}}_{L_{P}}(\gamma)\) does not change when passing from \(H\) to \(H_{*}\), and \(\gamma\notin\mathscr{E}^{\mathrm{e}}_{H_{*}}(L_{P})\). Thus \(vf=\gamma\) by Lemma 1.5.6; in particular, \(f\prec\mathfrak{m}\).
**Corollary 7.7.13**.: _Suppose \((P,\mathfrak{m},\widehat{h})\) is flabby, and \(f\in\mathcal{C}^{r}\) is such that \(P(f)=0\) and \(f\prec\mathfrak{m}\). Then \(f\in\mathcal{C}^{<\infty}\) and there exists \(g\in\mathcal{C}^{<\infty}\) such that \(P(g)=0\), \(g\prec\mathfrak{m}\), and \(0\neq f-g\prec\mathfrak{n}\) for all \(\mathfrak{n}\in H^{\times}\) with \(v\mathfrak{n}\in v(\widehat{h}-H)\). For any such \(g\) we have_
\[f\approx_{H}\widehat{h}\ \Rightarrow\ g\approx_{H}\widehat{h},\quad H\subseteq \mathcal{C}^{\infty}\ \Rightarrow\ f,g\in\mathcal{C}^{\infty},\quad H\subseteq\mathcal{C}^{\omega}\ \Rightarrow\ f,g\in\mathcal{C}^{\omega}.\]
_If \((P,\mathfrak{m},\widehat{h})\) is also deep and special, then \(f-g\in\mathfrak{m}(\mathcal{C}^{r})^{\prec}\) for any such \(g\)._
Proof.: Lemma 6.3.4 gives \(f\in\mathcal{C}^{<\infty}\). Replace \((P,\mathfrak{m},\widehat{h})\), \(f\) by \((P_{\times\mathfrak{m}},1,\widehat{h}/\mathfrak{m})\), \(f/\mathfrak{m}\) to arrange \(\mathfrak{m}=1\). Lemma 7.7.7 then yields \(y\in\mathcal{C}^{<\infty}[i]^{\neq}\) such that \(L_{P}(y)=0\) and \(y\prec\mathfrak{n}\) for all \(\mathfrak{n}\in H^{\times}\) with \(v\mathfrak{n}\in v(\widehat{h}-H)\). Replacing \(y\) by \(\operatorname{Re}y\) or \(\operatorname{Im}y\) we arrange \(y\in\mathcal{C}^{<\infty}\). Then \(g:=f+y\in\mathcal{C}^{<\infty}\) satisfies \(f\neq g\), \(P(g)=0\), and \(g\prec 1\). The rest follows from remarks after Corollary 5.2.3 and from Lemma 7.7.7.
In the proof of the next corollary we use that if \(H\) is \(\omega\)-free, then Lemma 7.7.2 yields an \(f\in H_{*}\) such that \(P(f)=0\) and \(f\approx_{H}\widehat{h}\).
**Corollary 7.7.14**.: _Suppose \((P,\mathfrak{m},\widehat{h})\) is ultimate and \(L_{P}\) is terminal. Then_
\[(P,\mathfrak{m},\widehat{h})\text{ is firm}\quad\Longleftrightarrow\quad\text{ there is a unique $f\in\mathcal{C}^{r}$ with $P(f)=0$ and $f\prec\mathfrak{m}$.}\]
_If \((P,\mathfrak{m},\widehat{h})\) is firm, \(f\in\mathcal{C}^{r}\), \(P(f)=0\), \(f\prec\mathfrak{m}\), then \(f\in\mathrm{D}(H)\), there is no \(g\neq f\) in \(\mathcal{C}^{r}[i]\) with \(P(g)=0\), \(g\prec\mathfrak{m}\), and if in addition \(H\) is \(\omega\)-free, then \(f\approx_{H}\widehat{h}\)._
Proof.: We arrange \(\mathfrak{m}=1\) as before. Then Lemmas 7.7.6 and 7.7.12 yield "\(\Rightarrow\)". For "\(\Leftarrow\)" and the rest, use Corollary 7.7.13 and the remark after its proof, and observe that our d-maximal Hardy field extension \(H_{*}\) of \(H\) was arbitrary.
**Corollary 7.7.15**.: _Suppose \(H\) is \(\omega\)-free and \(\mathrm{d}\)-perfect, and all \(A\in H[\partial]\subseteq K[\partial]\) of order \(r\) are terminal. Then every \(Z\)-minimal linear slot in \(H\) of order \(r\) is flabby._
Proof.: Given a firm \(Z\)-minimal linear slot in \(H\) of order \(r\) we use Remark 4.4.15 and Lemma 4.4.25 to refine it to be ultimate. So we arrive at an ultimate firm \(Z\)-minimal linear slot in \(H\) of order \(r\) with terminal linear part. This contradicts \(H\) being d-perfect by Corollary 7.7.14.
Next the \(K\)-versions of Lemma 7.7.12 and its corollaries: Let \((P,\mathfrak{m},\widehat{f})\) with \(\widehat{f}\in\widehat{K}\) be a \(Z\)-minimal linear slot in \(K\) of order \(r\geqslant 1\). Now \(K\) is \(\lambda\)-free [ADH, 11.6.8], so we can mimic the proof of Lemma 7.7.12 to obtain:
**Lemma 7.7.16**.: _There exists \(f\in K_{*}\) such that \(P(f)=0\) and \(f\prec\mathfrak{m}\)._
The \(K\)-version of Lemma 7.7.7 leads to the \(K\)-version of Corollary 7.7.13:
**Corollary 7.7.17**.: _Suppose \((P,\mathfrak{m},\widehat{f})\) is flabby, and \(f\in\mathcal{C}^{r}[i]\), \(P(f)=0\), and \(f\prec\mathfrak{m}\). Then \(f\in\mathcal{C}^{<\infty}[i]\) and there exists \(g\in\mathcal{C}^{<\infty}[i]\) such that \(P(g)=0\), \(g\prec\mathfrak{m}\), and \(0\neq f-g\prec\mathfrak{n}\) for all \(\mathfrak{n}\in K^{\times}\) with \(v\mathfrak{n}\in v(\widehat{f}-K)\). For any such \(g\) we have_
\[f\approx_{K}\widehat{f}\ \Rightarrow\ g\approx_{K}\widehat{f},\quad H\subseteq \mathcal{C}^{\infty}\ \Rightarrow\ f,g\in\mathcal{C}^{\infty}[i],\quad H\subseteq\mathcal{C}^{ \omega}\ \Rightarrow\ f,g\in\mathcal{C}^{\omega}[i].\]
_If \((P,\mathfrak{m},\widehat{f})\) is also deep and special, then \(f-g\in\mathfrak{m}\mathcal{C}^{r}[i]^{\prec}\) for any such \(g\)._
If \(H\) is \(\omega\)-free, then Lemma 7.7.4 yields \(f\in K_{*}\) with \(P(f)=0\) and \(f\approx_{K}\widehat{f}\). This remark and \(K\)-versions of various results yield the \(K\)-version of Corollary 7.7.14:
**Corollary 7.7.18**.: _Suppose \((P,\mathfrak{m},\widehat{f})\) is ultimate and \(L_{P}\) is terminal. Then:_
\[(P,\mathfrak{m},\widehat{f})\ \text{is firm}\quad\Longleftrightarrow\quad\text{ there is a unique $f\in\mathcal{C}^{r}[i]$ with $P(f)=0$ and $f\prec\mathfrak{m}$.}\]
_If \((P,\mathfrak{m},\widehat{f})\) is firm, \(f\in\mathcal{C}^{r}[i]\), \(P(f)=0\), \(f\prec\mathfrak{m}\), then \(f\in\mathrm{D}(H)[i]\), and also \(f\approx_{K}\widehat{f}\) in case \(H\) is \(\omega\)-free._
Using \(K\)-versions of various results (like Remark 4.4.19 instead of Remark 4.4.15), then yields the \(K\)-version of Corollary 7.7.15:
**Corollary 7.7.19**.: _If \(H\) is \(\omega\)-free and \(\mathrm{d}\)-perfect, and all \(A\in K[\partial]\) of order \(r\) are terminal, then every \(Z\)-minimal linear slot in \(K\) of order \(r\) is flabby._
Linear slots in \(K\) of order \(1\) are \(Z\)-minimal, and we can say more in this case:
**Corollary 7.7.20**.: _Suppose \(r=1\). If \((P,\mathfrak{m},\widehat{f})\) is flabby, then it is ultimate. Moreover, \(L_{P}\) is terminal, so if \((P,\mathfrak{m},\widehat{f})\) is firm and ultimate, then there is a unique \(f\in\mathcal{C}^{r}[i]\) with \(P(f)=0\) and \(f\prec\mathfrak{m}\), and for this \(f\) we have: \(f\in\mathrm{D}(H)[i]\), with \(f\approx_{K}\widehat{f}\) in case \(H\) is \(\omega\)-free or \((P,\mathfrak{m},\widehat{f})\) is a hole in \(K\)._
Proof.: Corollary 4.4.31(i) gives "flabby \(\Rightarrow\) ultimate". Since \(L_{P}\) has order \(1\), it is terminal. Now use Corollary 7.7.18, and Lemma 7.7.5, noting in connection with that lemma that the linear slot \((P,\mathfrak{m},\widehat{f})\) is quasilinear.
Our next goal is to establish refinements of Proposition 6.5.14 for the case of firm and flabby slots in \(H\): Lemmas 7.7.33, 7.7.34, 7.7.36, and 7.7.42 below. Towards this goal, we introduce yet another useful concept of normality for slots.
**Absolutely normal slots in \(H\).** _In this subsection \((P,\mathfrak{m},\widehat{h})\) is a slot in \(H\) of order \(r\geqslant 1\) with \(\widehat{h}\in\widehat{H}\)._ Given active \(\phi>0\) in \(H\) we take \(\ell\in H\) with \(\ell^{\prime}=\phi\), and set \(f^{\circ}:=f\circ\ell^{\mathrm{inv}}\) for \(f\in\mathcal{C}[\mathrm{i}]\), as usual; see Section 6.4. Recall from Section 5.3 that \(H^{\circ}\) is Liouville closed with \(K^{\circ}=H^{\circ}[\mathrm{i}]\) and \(\mathrm{I}(K^{\circ})\subseteq(K^{\circ})^{\dagger}\), and that \(H^{\circ}_{*}\) is a d-maximal Hardy field extension of \(H^{\circ}\), with \(K^{\circ}_{*}=H^{\circ}_{*}[\mathrm{i}]\).
Since \(K_{*}\) is linearly closed, each linear differential operator \(A\in K[\partial]^{\neq}\) splits over \(K_{*}\). If \(A\in K[\partial]^{\neq}\) splits strongly over \(K_{*}\), then by Theorem 7.1.3 this remains true when \(K_{*}\) is replaced by \(K_{**}:=H_{**}[\mathrm{i}]\subseteq\mathcal{C}^{<\infty}[\mathrm{i}]\) for any d-maximal Hardy field extension \(H_{**}\) of \(H\). We say that \((P,\mathfrak{m},\widehat{h})\) is **absolutely normal** if it is strictly normal and its linear part splits strongly over \(K_{*}\). If \((P,\mathfrak{m},\widehat{h})\) is absolutely normal, then so is \((P_{\times\mathfrak{m}},1,\widehat{h}/\mathfrak{m})\). Moreover:
**Lemma 7.7.21**.: _Suppose \((P,\mathfrak{m},\widehat{h})\) is absolutely normal, and \(\phi\) is active in \(H\) with \(0<\phi\preccurlyeq 1\). Then the slot \((P^{\phi\circ},\mathfrak{m}^{\circ},\widehat{h}^{\circ})\) in \(H^{\circ}\) is absolutely normal._
Proof.: By Lemma 3.3.40, the slot \((P^{\phi},\mathfrak{m},\widehat{h})\) in \(H^{\phi}\) is strictly normal, hence so is the slot \((P^{\phi\circ},\mathfrak{m}^{\circ},\widehat{h}^{\circ})\) in \(H^{\circ}\). By Lemma 4.2.12 the linear part \(L_{P^{\phi}_{\times\mathfrak{m}}}=(L_{P_{\times\mathfrak{m}}})^{\phi}\) of \((P^{\phi},\mathfrak{m},\widehat{h})\) splits strongly over \(K_{*}^{\phi}=H^{\phi}_{*}[\mathrm{i}]\), hence the linear part of \((P^{\phi\circ},\mathfrak{m}^{\circ},\widehat{h}^{\circ})\) splits strongly over \(K_{*}^{\circ}=H^{\circ}_{*}[\mathrm{i}]\).
Next we show how to achieve absolute normality:
**Proposition 7.7.22**.: _Suppose \((P,\mathfrak{m},\widehat{h})\) is \(Z\)-minimal, deep, and strictly normal, and \(\widehat{h}\prec_{\Delta(\mathfrak{v})}\mathfrak{m}\) with \(\mathfrak{v}:=\mathfrak{v}(L_{P_{\times\mathfrak{m}}})\). Then for all sufficiently small \(q\in\mathbb{Q}^{>}\), \((P,\mathfrak{n},\widehat{h})\), for any \(\mathfrak{n}\asymp\mathfrak{m}|\mathfrak{v}|^{q}\) in \(H^{\times}\), is a deep, absolutely normal refinement of \((P,\mathfrak{m},\widehat{h})\)._
Proof.: Recall that \(L_{P_{\times\mathfrak{m}}}\) splits over the \(H\)-asymptotic extension \(K_{*}\) of \(K\). The argument in the proof of Corollary 4.2.14 shows that for all sufficiently small \(q\in\mathbb{Q}^{>}\), \((P,\mathfrak{n},\widehat{h})\), for any \(\mathfrak{n}\asymp\mathfrak{m}|\mathfrak{v}|^{q}\) in \(H^{\times}\), is a steep refinement of \((P,\mathfrak{m},\widehat{h})\) whose linear part splits strongly over \(K_{*}\). By Corollary 3.3.6 any such refinement \((P,\mathfrak{n},\widehat{h})\) of \((P,\mathfrak{m},\widehat{h})\) is deep, and for all sufficiently small \(q\in\mathbb{Q}^{>}\), any such refinement of \((P,\mathfrak{m},\widehat{h})\) also remains strictly normal, by Lemma 3.3.44 and Remark 3.3.45.
**Corollary 7.7.23**.: _If \((P,\mathfrak{m},\widehat{h})\) is \(Z\)-minimal, deep, normal, and special, then \((P,\mathfrak{m},\widehat{h})\) has a deep, absolutely normal refinement._
This follows from Corollary 3.3.47 and Proposition 7.7.22.
**Lemma 7.7.24**.: _Suppose \(H\) is \(\omega\)-free and \((P,\mathfrak{m},\widehat{h})\) is \(Z\)-minimal and special. Then there are a refinement \((P_{+h},\mathfrak{n},\widehat{h}-h)\) of \((P,\mathfrak{m},\widehat{h})\) and an active \(\phi>0\) in \(H\) such that the slot \((P^{\phi\circ}_{+h^{\circ}},\mathfrak{n}^{\circ},\widehat{h}^{\circ}-h^{ \circ})\) in \(H^{\circ}\) is deep, absolutely normal, and ultimate._
Proof.: For any active \(\phi>0\) in \(H\) we may replace \(H\), \((P,\mathfrak{m},\widehat{h})\) by \(H^{\circ}\), \((P^{\phi\circ},\mathfrak{m}^{\circ},\widehat{h}^{\circ})\), and we may also replace \((P,\mathfrak{m},\widehat{h})\) by any of its refinements. Since \(H\) is \(\omega\)-free, Proposition 3.3.36 yields a refinement \((P_{+h},\mathfrak{n},\widehat{h}-h)\) of \((P,\mathfrak{m},\widehat{h})\) and an active \(\phi>0\) in \(H\) such that the slot \((P^{\phi\circ}_{+h^{\circ}},\mathfrak{n}^{\circ},\widehat{h}^{\circ}-h^{ \circ})\) in \(H^{\circ}\) is normal. Replacing \(H\), \((P,\mathfrak{m},\widehat{h})\) by \(H^{\circ}\), \((P^{\phi\circ}_{+h^{\circ}},\mathfrak{n}^{\circ},\widehat{h}^{\circ}-h^{ \circ})\) we arrange that \((P,\mathfrak{m},\widehat{h})\) is normal. Proposition 4.4.14 now yields an ultimate refinement of \((P,\mathfrak{m},\widehat{h})\). Applying Proposition 3.3.36 to this refinement and using Lemma 4.4.10, we obtain an ultimate refinement \((P_{+h},\mathfrak{n},\widehat{h}-h)\)
of \((P,\mathfrak{m},\widehat{h})\) and an active \(\phi>0\) in \(H\) such that \((P^{\phi\circ}_{+h^{\circ}},\mathfrak{n}^{\circ},\widehat{h}^{\circ}-h^{\circ})\) is deep, normal, and ultimate. Again replacing \(H\), \((P,\mathfrak{m},\widehat{h})\) by \(H^{\circ}\), \((P^{\phi\circ}_{+h^{\circ}},\mathfrak{n}^{\circ},\widehat{h}^{\circ}-h^{\circ})\), we arrange that \((P,\mathfrak{m},\widehat{h})\) is deep, normal, and ultimate. Now apply Corollary 7.7.23 to \((P,\mathfrak{m},\widehat{h})\) and use Lemma 4.4.10.
**Corollary 7.7.25**.: _Suppose \(H\) is \(\omega\)-free and \(r\)-linearly newtonian, and \((P,\mathfrak{m},\widehat{h})\) is \(Z\)-minimal. Then the conclusion of Lemma 7.7.24 holds._
Proof.: As in the beginning of the proof of Lemma 7.7.24, use Theorem 3.3.33 to arrange that \((P,\mathfrak{m},\widehat{h})\) is normal. Then \((P,\mathfrak{m},\widehat{h})\) is quasilinear by Corollary 3.3.21 and hence special by Lemma 3.2.36, so Lemma 7.7.24 applies to \((P,\mathfrak{m},\widehat{h})\).
_Remark 7.7.26_.: By Corollary 3.2.6 the hypotheses of Corollary 7.7.25 are satisfied if \(H\) is \(\omega\)-free and \((P,\mathfrak{m},\widehat{h})\) is a nonlinear minimal hole in \(H\).
**Absolutely normal slots in \(K\).** _Let \((P,\mathfrak{m},\widehat{f})\) be a slot in \(K\) of order \(r\geqslant 1\), with \(\widehat{f}\in\widehat{K}\)._ Call \((P,\mathfrak{m},\widehat{f})\) **absolutely normal** if it is strictly normal and \(L_{P_{\times\mathfrak{m}}}\) splits strongly over \(K_{*}\). If \((P,\mathfrak{m},\widehat{f})\) is absolutely normal, then so is \((P_{\times\mathfrak{m}},1,\widehat{f}/\mathfrak{m})\). If \((Q,\mathfrak{n},\widehat{h})\) is a slot in \(H\) of order \(\geqslant 1\) with \(\widehat{h}\in\widehat{H}\subseteq\widehat{K}\), then it is a slot in \(K\) (Corollary 4.3.2), and \((Q,\mathfrak{n},\widehat{h})\) is absolutely normal as a slot in \(H\) iff it is absolutely normal as a slot in \(K\).
**Lemma 7.7.27**.: _Suppose \((P,\mathfrak{m},\widehat{f})\) is absolutely normal. Then there exists \(y\) in \(\mathcal{C}^{<\infty}[i]\cap\mathfrak{m}\,\mathcal{C}^{r}[i]^{\prec}\) such that \(P(y)=0\) and \(y\prec\mathfrak{m}\). If \(H\subseteq\mathcal{C}^{\infty}\), then any such \(y\) lies in \(\mathcal{C}^{\infty}[i]\), and likewise with \(\mathcal{C}^{\omega}\) in place of \(\mathcal{C}^{\infty}\)._
Proof.: Use Lemma 6.4.5 with \(Q=(P_{\times\mathfrak{m}})_{1}\) and \(H_{*}\), \(K_{*}\), \(P_{\times\mathfrak{m}}\) in place of \(H\), \(K\), \(P\). For the last part, use Corollary 6.3.5 as in the proof of that lemma.
The \(K\)-versions of Lemma 7.7.21, Proposition 7.7.22, and Corollary 7.7.23 (with the same proofs) are as follows:
**Lemma 7.7.28**.: _Suppose \((P,\mathfrak{m},\widehat{f})\) is absolutely normal, and \(\phi\) is active in \(H\) with \(0<\phi\preccurlyeq 1\). Then the slot \((P^{\phi\circ},\mathfrak{m}^{\circ},\widehat{f}^{\circ})\) in \(K^{\circ}\) is absolutely normal._
**Proposition 7.7.29**.: _Suppose \((P,\mathfrak{m},\widehat{f})\) is \(Z\)-minimal, deep, and strictly normal, and \(\widehat{f}\prec_{\Delta(\mathfrak{v})}\mathfrak{m}\) with \(\mathfrak{v}:=\mathfrak{v}(L_{P_{\times\mathfrak{m}}})\). Then for all sufficiently small \(q\in\mathbb{Q}^{>}\), \((P,\mathfrak{n},\widehat{f})\), for any \(\mathfrak{n}\asymp\mathfrak{m}|\mathfrak{v}|^{q}\) in \(K^{\times}\), is a deep, absolutely normal refinement of \((P,\mathfrak{m},\widehat{f})\)._
**Corollary 7.7.30**.: _If \((P,\mathfrak{m},\widehat{f})\) is \(Z\)-minimal, deep, normal, and special, then \((P,\mathfrak{m},\widehat{f})\) has a deep, absolutely normal refinement._
The \(K\)-version of Lemma 7.7.24 is as follows (its proof uses Proposition 4.4.18 instead of Proposition 4.4.14, and the \(K\)-version of Lemma 4.4.10):
**Lemma 7.7.31**.: _Suppose \(H\) is \(\omega\)-free and \((P,\mathfrak{m},\widehat{f})\) is \(Z\)-minimal and special. Then there are a refinement \((P_{+f},\mathfrak{n},\widehat{f}-f)\) of \((P,\mathfrak{m},\widehat{f})\) and an active \(\phi>0\) in \(H\) such that the slot \((P^{\phi\circ}_{+f^{\circ}},\mathfrak{n}^{\circ},\widehat{f}^{\circ}-f^{\circ})\) in \(K^{\circ}\) is deep, absolutely normal, and ultimate._
The \(K\)-version of Corollary 7.7.25 now follows in the same way:
**Corollary 7.7.32**.: _Suppose \(H\) is \(\omega\)-free, \(K\) is \(r\)-linearly newtonian, and \((P,\mathfrak{m},\widehat{f})\) is \(Z\)-minimal. Then the conclusion of Lemma 7.7.31 holds._
**Firm slots in \(H\).** _In this subsection \((P,\mathfrak{m},\widehat{h})\) is a slot in \(H\) of order \(r\geqslant 1\), with \(\widehat{h}\in\widehat{H}\). We set \(d:=\deg(P)\), \(w:=\operatorname{wt}(P)\), and begin with a significant strengthening of Proposition 6.5.14 for firm slots in \(H\):_
**Lemma 7.7.33**.: _Suppose \((P,\mathfrak{m},\widehat{h})\) is firm, ultimate, and strongly split-normal, and let \(f,g\in\mathcal{C}^{r}[\mathrm{i}]\) satisfy \(P(f)=P(g)=0\) and \(f,g\prec\mathfrak{m}\). Then \(f=g\)._
Proof.: The proof is similar to that of Proposition 6.5.14. We first replace \((P,\mathfrak{m},\widehat{h})\), \(f\), \(g\) by \((P_{\times\mathfrak{m}},1,\widehat{h}/\mathfrak{m})\), \(f/\mathfrak{m}\), \(g/\mathfrak{m}\), to arrange \(\mathfrak{m}=1\). We set \(\mathfrak{v}:=|\mathfrak{v}(L_{P})|\prec^{\flat}1\) and \(\Delta:=\Delta(\mathfrak{v})\), and take \(Q,R\in H\{Y\}\) where \(Q\) is homogeneous of degree \(1\) and order \(r\), \(A:=L_{Q}\in H[\partial]\) splits strongly over \(K\), \(P=Q-R\), and \(R\prec_{\Delta}\mathfrak{v}^{w+1}P_{1}\), so \(\mathfrak{v}(A)\sim\mathfrak{v}(L_{P})\). Multiplying \(P\), \(Q\), \(R\) by some \(b\in H^{\times}\) we arrange that \(A=\partial^{r}+f_{1}\partial^{r-1}+\cdots+f_{r}\) with \(f_{1},\ldots,f_{r}\in H\) and \(R\prec_{\Delta}\mathfrak{v}^{w}\). We have
\[A\;=\;(\partial\!-\!\phi_{1})\cdots(\partial\!-\!\phi_{r}),\quad\phi_{1}, \ldots,\phi_{r}\in K,\quad\operatorname{Re}\phi_{1},\ldots,\operatorname{Re} \phi_{r}\;\succcurlyeq\;\mathfrak{v}^{\dagger}\;\succcurlyeq\;1. \tag{7.7.1}\]
Corollary 3.1.6 yields \(\phi_{1},\ldots,\phi_{r}\preccurlyeq\mathfrak{v}^{-1}\). Take \(a_{0}\in\mathbb{R}\) and functions on \([a_{0},\infty)\) representing the germs \(\phi_{1},\ldots,\phi_{r}\), \(f_{1},\ldots,f_{r}\), \(f\), \(g\) and the \(R_{\boldsymbol{j}}\) with \(\boldsymbol{j}\in\mathbb{N}^{1+r}\), \(|\boldsymbol{j}|\leqslant d\), \(\|\boldsymbol{j}\|\leqslant w\) (using the same symbols for the germs mentioned as for their chosen representatives) so as to be in the situation described in the beginning of Section 6.2, with \(f\) and \(g\) solutions on \([a_{0},\infty)\) of the differential equation \((*)\) there. As there, we take \(\nu\in\mathbb{Q}\) with \(\nu>w\) so that \(R\prec_{\Delta}\mathfrak{v}^{\nu}\) and \(\nu\mathfrak{v}^{\dagger}\not\sim\operatorname{Re}\phi_{j}\) for \(j=1,\ldots,r\), and then increase \(a_{0}\) to satisfy all assumptions for Lemma 6.2.1.
With \(a\geqslant a_{0}\) and \(h_{a}\in\mathcal{C}_{a}^{r}[\mathrm{i}]\) as in Lemma 6.2.5 we have \(A_{a}(h_{a})=0\) and \(h_{a}\prec 1\). Now \(A\) splits over \(K\), so \(A\) is terminal as an element of \(K[\partial]\), by Corollary 2.6.21. As \((P,1,\widehat{h})\) is firm and ultimate, this yields \(h_{a}=0\) (for all \(a\geqslant a_{0}\)) by Lemma 7.7.10 and Corollary 5.2.2, and thus \(f=g\) by Corollary 6.2.15.
Next we prove variants of Lemma 7.7.33 by modifying the restrictive hypothesis of strong split-normality.
**Lemma 7.7.34**.: _Suppose \((P,\mathfrak{m},\widehat{h})\) is firm, ultimate, and absolutely normal, and its linear part \(L_{P\times_{\mathfrak{m}}}\in H[\partial]\subseteq K[\partial]\) is terminal. Then for all \(f,g\in\mathcal{C}^{r}[\mathrm{i}]\) such that \(P(f)=P(g)=0\) and \(f,g\prec\mathfrak{m}\) we have \(f=g\)._
Proof.: Replacing \((P,\mathfrak{m},\widehat{h})\) by \((P_{\times\mathfrak{m}},1,\widehat{h}/\mathfrak{m})\) we arrange \(\mathfrak{m}=1\). Put \(A:=L_{P}\in H[\partial]\) and \(R:=P_{1}-P\in H\{Y\}\), so \(R\prec_{\Delta}\mathfrak{v}^{w+1}P_{1}\) where \(\Delta:=\Delta(\mathfrak{v})\), \(\mathfrak{v}:=\mathfrak{v}(A)\prec^{\flat}1\). Multiplying \(A\), \(P\), \(R\) on the left by some \(b\in H^{\times}\) we arrange
\[A\;=\;\partial^{r}+f_{1}\partial^{r-1}+\cdots+f_{r},\quad f_{1},\ldots,f_{r} \in H,\quad R\prec_{\Delta}\mathfrak{v}^{w}.\]
Then (7.7.1) holds with \(\phi_{1},\ldots,\phi_{r}\in K_{*}\) instead of \(\phi_{1},\ldots,\phi_{r}\in K\), and \(\phi_{1},\ldots,\phi_{r}\preccurlyeq\mathfrak{v}^{-1}\) by Corollary 3.1.6. Now argue as at the end of the proof of Lemma 7.7.33 to get \(f=g\), using that \(A\in K[\partial]\) is terminal by assumption.
From Corollary 2.6.21 and Lemma 7.7.34 we obtain:
**Corollary 7.7.35**.: _If \((P,\mathfrak{m},\widehat{h})\) is firm, ultimate, and strictly normal, and its linear part splits strongly over \(K\), then the conclusion of Lemma 7.7.34 holds._
_In the rest of this subsection we assume that all \(A\in H[\partial]\subseteq K[\partial]\) of order \(r\) are terminal._ Recall that by Lemmas 4.4.10 and 4.4.25, if \((P,\mathfrak{m},\widehat{h})\) is ultimate, then so is any refinement of it, and likewise with "firm" in place of "ultimate".
**Lemma 7.7.36**.: _Suppose \((P,\mathfrak{m},\widehat{h})\) is \(Z\)-minimal, deep, normal, special, ultimate, firm, and \(f,g\in\mathcal{C}^{r}[\mathrm{i}]\), \(P(f)=P(g)=0\), \(f\approx_{K}\widehat{h}\), \(g\approx_{K}\widehat{h}\). Then \(f=g\)._
Proof.: Corollary 7.7.23 gives a deep absolutely normal refinement \((P_{+h},\mathfrak{n},\widehat{h}-h)\) of \((P,\mathfrak{m},\widehat{h})\). Now apply Lemma 7.7.34 to \((P_{+h},\mathfrak{n},\widehat{h}-h)\), \(f-h\), \(g-h\) in the role of \((P,\mathfrak{m},\widehat{h})\), \(f\), \(g\).
**Corollary 7.7.37**.: _Suppose \(H\) is \(\omega\)-free and \(r\)-linearly newtonian, and \((P,\mathfrak{m},\widehat{h})\) is firm and \(Z\)-minimal. Then there is a unique \(f\in\mathcal{C}^{r}[\mathrm{i}]\) with \(P(f)=0\) and \(f\approx_{K}\widehat{h}\). For this \(f\) we have \(f\in\mathrm{D}(H)\) and \(f\approx_{H}\widehat{h}\)._
Proof.: Suppose \(f,g\in\mathcal{C}^{r}[\mathrm{i}]\) satisfy \(P(f)=P(g)=0\), \(f\approx_{K}\widehat{h}\), and \(g\approx_{K}\widehat{h}\); we claim that \(f=g\). If \(\phi>0\) is active in \(H\), then by the remarks before Lemma 4.4.25, Lemma 6.4.3, and the remarks at the end of Section 6.6, and with the superscript \(\circ\) having the usual meaning, we may replace \(H\), \(K\), \(\widehat{H}\), \(\widehat{K}\), \((P,\mathfrak{m},\widehat{h})\), \(f\), \(g\) by \(H^{\circ}\), \(K^{\circ}\), \(\widehat{H}^{\circ}\), \(\widehat{K}^{\circ}\), \((P^{\phi\circ},\mathfrak{m}^{\circ},\widehat{h}^{\circ})\), \(f^{\circ}\), \(g^{\circ}\). Using this observation, Corollary 7.7.25 and the remarks before Lemma 7.7.36, we arrange that \((P,\mathfrak{m},\widehat{h})\) is ultimate and absolutely normal. The claim now follows from Lemma 7.7.34. From Lemma 7.7.2 we obtain \(f\in H_{*}\) with \(P(f)=0\) and \(f\approx_{H}\widehat{h}\); then \(f\approx_{K}\widehat{h}\) by Corollary 6.6.13. Our d-maximal Hardy field extension \(H_{*}\) of \(H\) was arbitrary, so the uniqueness statement just proved gives \(f\in\mathrm{D}(H)\).
**Corollary 7.7.38**.: _Suppose \(H\) is \(\omega\)-free and \((P,\mathfrak{m},\widehat{h})\) is a firm minimal hole in \(H\). Then the conclusion of Corollary 7.7.37 holds._
Proof.: If \((P,\mathfrak{m},\widehat{h})\) is nonlinear, then \(H\) is \(r\)-linearly newtonian by Corollary 3.2.6, so the hypotheses of Corollary 7.7.37 are satisfied, and so is its conclusion.
Suppose \((P,\mathfrak{m},\widehat{h})\) is linear. By Remark 4.4.15 we can refine \((P,\mathfrak{m},\widehat{h})\) to arrange it to be ultimate. By our standing assumption \(L_{P}\) is terminal, so we can appeal to Corollary 7.7.14.
**\(Z\)-minimal slots in \(\mathrm{d}\)-perfect Hardy fields.** _In this subsection \(H\) is \(\mathrm{d}\)-perfect._ By Corollary 7.2.15, \(H\) is \(1\)-newtonian and so has no quasilinear \(Z\)-minimal slot of order \(1\), by Corollary 3.4.14. This allows us to add to the characterization of \(\omega\)-freeness for \(\mathrm{d}\)-perfect Hardy fields given in Corollary 7.5.9:
**Corollary 7.7.39**.: \(H\) _is \(\omega\)-free \(\iff\) \(H\) has no hole of order \(1\) \(\iff\) \(H\) has no slot of order \(1\)._
Proof.: The first equivalence follows from Lemma 3.2.1 and Corollary 7.2.15. For the rest we observe that if \(H\) has a slot of order \(1\), then it also has a hole of order \(1\): Given a slot \((P,\mathfrak{m},\widehat{h})\) in \(H\) of order \(1\), take \(Q\in Z(H,\widehat{h})\) of minimal complexity. Then \((Q,\mathfrak{m},\widehat{h})\) is a \(Z\)-minimal slot of order \(\leqslant 1\) in \(H\), hence is equivalent to a \(Z\)-minimal hole \((Q,\mathfrak{m},\widehat{b})\) in \(H\), by Lemma 3.2.14, so \(\operatorname{order}Q=1\) by a remark after Lemma 3.2.1.
Next, an immediate consequence of Corollaries 7.7.15 and 7.7.38:
**Corollary 7.7.40**.: _Let \(r\in\mathbb{N}^{\geqslant 1}\). If \(H\) is \(\omega\)-free and all \(A\in H[\partial]\) of order \(r\) are terminal, then every minimal hole in \(H\) of order \(r\) is flabby._
From this we deduce:
**Corollary 7.7.41**.: _Suppose \(H\) is \(\omega\)-free. Then every minimal hole in \(H\) of order \(2\) and every linear slot in \(H\) of order \(2\) is flabby._
Proof.: By Corollaries 2.6.21 and 7.5.9, all \(A\in H[\partial]\) of order \(2\) are terminal. Hence every minimal hole in \(H\) of order \(2\) is flabby, by Corollary 7.7.40. Every linear slot in \(H\) of order \(2\) is \(Z\)-minimal, by Corollary 7.7.39, and hence is flabby by Corollary 7.7.15.
In the next subsection we study flabby slots in \(H\) in more detail.
**Flabby slots in \(H\).**
_In this subsection \((P,\mathfrak{m},\widehat{h})\) is a slot in \(H\) of order \(r\geqslant 1\)._ Note that if \((P,\mathfrak{m},\widehat{h})\) is normal and \(f\in\mathfrak{m}\mathcal{C}^{r}[\mathrm{i}]^{\preccurlyeq}\), \(P(f)=0\), then by Corollary 6.3.6 we have \(f\in\mathcal{C}^{<\infty}[\mathrm{i}]\), and \(f\in\mathcal{C}^{\infty}[\mathrm{i}]\) if \(H\subseteq\mathcal{C}^{\infty}\), \(f\in\mathcal{C}^{\omega}[\mathrm{i}]\) if \(H\subseteq\mathcal{C}^{\omega}\).
Next some observations tacitly used in the proof of Lemma 7.7.42 below. For this, suppose \((P,\mathfrak{m},\widehat{h})\) is flabby. Then \((P_{\times\mathfrak{m}},1,\widehat{h}/\mathfrak{m})\) is flabby by Lemma 4.4.26, and if \(g\in(\mathcal{C}^{r})^{\preccurlyeq}\) and \(P_{\times\mathfrak{m}}(g)=0\), \(g\prec 1\), then \(f:=\mathfrak{m}g\in\mathfrak{m}\,(\mathcal{C}^{r})^{\preccurlyeq}\) satisfies \(P(f)=0\) and \(f\prec\mathfrak{m}\). Likewise, let \((P_{+h},\mathfrak{n},\widehat{h}-h)\) be a refinement of \((P,\mathfrak{m},\widehat{h})\), and suppose \((P,\mathfrak{m},\widehat{h})\) is also linear or normal. Then the slot \((P_{+h},\mathfrak{n},\widehat{h}-h)\) in \(H\) is flabby by Corollary 4.4.30, and if \(g\in\mathfrak{n}\,(\mathcal{C}^{r})^{\preccurlyeq}\) and \(P_{+h}(g)=0\), \(g\prec\mathfrak{n}\), then \(f:=h+g\in\mathfrak{m}\,(\mathcal{C}^{r})^{\preccurlyeq}\) satisfies \(P(f)=0\) and \(f\prec\mathfrak{m}\). Finally, let \(\phi\) be active in \(H\), \(0<\phi\preccurlyeq 1\), and let the superscript \(\circ\) have the usual meaning. Then the slot \((P^{\phi\circ},\mathfrak{m}^{\circ},\widehat{h}^{\circ})\) in \(H^{\circ}\) is flabby, and if \(g\in\mathfrak{m}^{\circ}\,(\mathcal{C}^{r})^{\preccurlyeq}\) and \(P^{\phi\circ}(g)=0\), \(g\prec\mathfrak{m}^{\circ}\), then taking \(f\in\mathcal{C}^{r}\) with \(f^{\circ}=g\) we have \(f\in\mathfrak{m}\,(\mathcal{C}^{r})^{\preccurlyeq}\), \(P(f)=0\), and \(f\prec\mathfrak{m}\), using the remark after Lemma 6.4.2.
**Lemma 7.7.42**.: _Suppose \((P,\mathfrak{m},\widehat{h})\) is \(Z\)-minimal, deep, normal, special, and flabby. Then there are \(f\neq g\) in \(\mathfrak{m}\,(\mathcal{C}^{r})^{\preccurlyeq}\) such that \(P(f)=P(g)=0\), and \(f,g\prec\mathfrak{m}\)._
Proof.: Corollary 7.7.23 yields a deep absolutely normal refinement \((P_{+h},\mathfrak{n},\widehat{h}-h)\) of \((P,\mathfrak{m},\widehat{h})\). Using the remarks preceding the lemma, we can replace \((P,\mathfrak{m},\widehat{h})\) by \((P_{+h},\mathfrak{n},\widehat{h}-h)\) to arrange that \((P,\mathfrak{m},\widehat{h})\) is absolutely normal, and then replacing \((P,\mathfrak{m},\widehat{h})\) by \((P_{\times\mathfrak{m}},1,\widehat{h}/\mathfrak{m})\) we also arrange \(\mathfrak{m}=1\). Set \(d:=\deg P\) and \(w:=\mathrm{wt}(P)\). Let \(\mathfrak{v}\), \(\Delta\), \(A\), \(R\) be as in the proof of Lemma 7.7.34. Multiplying \(A\), \(P\), \(R\) on the left by some \(b\in H^{\times}\) we arrange \(A=\partial^{r}+f_{1}\partial^{r-1}+\cdots+f_{r}\) with \(f_{1},\ldots,f_{r}\in H\) and \(R\prec_{\Delta}\mathfrak{v}^{w}\). Then (7.7.1) holds with \(\phi_{1},\ldots,\phi_{r}\in K_{*}\) instead of \(\phi_{1},\ldots,\phi_{r}\in K\), and \(\phi_{1},\ldots,\phi_{r}\preccurlyeq\mathfrak{v}^{-1}\) by Corollary 3.1.6. Take \(a_{0}\in\mathbb{R}\) and functions on \([a_{0},\infty)\) representing the germs \(\phi_{1},\ldots,\phi_{r}\), \(f_{1},\ldots,f_{r}\), and the \(R_{\boldsymbol{j}}\) with \(\boldsymbol{j}\in\mathbb{N}^{1+r}\), \(|\boldsymbol{j}|\leqslant d\), \(\|\boldsymbol{j}\|\leqslant w\) (using the same symbols for the germs mentioned as for their chosen representatives) so as to be in the situation described in the beginning of Section 6.2. Increasing \(a_{0}\) if necessary and choosing \(\nu\) as in the proof of Lemma 7.7.33 we arrange that \(f_{1},\ldots,f_{r}\), and the \(R_{\boldsymbol{j}}\) are in \(\mathcal{C}^{1}_{a_{0}}\) and \(\|R\|_{a_{0}}\leqslant 1/E\), with \(E=E(d,r)\in\mathbb{N}^{\geqslant 1}\) as in Corollary 6.3.13, and the hypotheses of Lemma 6.2.1 are satisfied. Lemma 7.7.7 yields \(h\in\mathcal{C}^{<\infty}[\mathrm{i}]\) such that \(A(h)=0\), \(h\neq 0\), and \(h,h^{\prime},\ldots,h^{(r)}\prec 1\). Replacing \(h\) by \(\operatorname{Re}h\) or \(\operatorname{Im}h\) we arrange \(h\in\mathcal{C}^{<\infty}\). Increasing \(a_{0}\) again we arrange that \(h\) is represented by a function in \(\mathcal{C}^{r}_{a_{0}}\), denoted by the same symbol, such that
\[A_{a_{0}}(h)\ =\ 0,\qquad\|h\|_{a_{0};r}\ \leqslant 1/8,\]
and such that we are in the situation of Lemma 6.2.6, with \(a\) ranging over \([a_{0},+\infty)\). Then Corollaries 6.2.8 and 6.2.9 yield for sufficiently large \(a\geqslant a_{0}\) functions
with \(\|f\|_{a;r},\|g\|_{a;r}\leqslant 1\) and \((\operatorname{Re}\Xi_{a})(f)=f\), \((\operatorname{Re}\Xi_{a})(g)=g+h\). Fix such \(a\), \(f\), \(g\). Then \(A_{a}(f)=R(f)\) and
\[A_{a}(g)\ =\ A_{a}(g+h)\ =\ A_{a}\big{(}(\operatorname{Re}\Xi_{a})(g)\big{)}\ = \ \operatorname{Re}A_{a}\big{(}\Xi_{a}(g)\big{)}\ =\ \operatorname{Re}R(g)\ =\ R(g),\]
and \(f\prec 1\) and \(g+h\prec 1\) by Lemma 6.2.6, so \(g\prec 1\), hence \(f\), \(g\) are solutions of \((*)\) on \([a,\infty)\). Denoting the germs of \(f\), \(g\) also by \(f\), \(g\) we have \(P(f)=P(g)=0\). Moreover, \(f\neq g\) as germs: otherwise \(f=g\) in \(\mathcal{C}^{r}_{a}\) by the remark after the proof of Corollary 6.3.13, and thus \(h=(\operatorname{Re}\Xi_{a})(g)-g=(\operatorname{Re}\Xi_{a})(f)-f=0\) in \(\mathcal{C}^{r}_{a}\), a contradiction.
**Corollary 7.7.43**.: _Suppose \(H\) is \(\omega\)-free and \(r\)-linearly newtonian, and \((P,\mathfrak{m},\widehat{h})\) is \(Z\)-minimal and flabby. Assume also that \((P,\mathfrak{m},\widehat{h})\) is linear or normal. Then the conclusion of Lemma 7.7.42 holds._
Proof.: Use Theorem 3.3.33 and the remarks preceding Lemma 7.7.42 to arrange that \((P,\mathfrak{m},\widehat{h})\) is deep and normal. Then \((P,\mathfrak{m},\widehat{h})\) is quasilinear by Corollary 3.3.21, and hence special by Lemma 3.2.36. Now Lemma 7.7.42 applies.
Note that the hypotheses of Corollary 7.7.43 hold if \(H\) is \(\omega\)-free and \((P,\mathfrak{m},\widehat{h})\) is a flabby normal nonlinear minimal hole in \(H\), by Corollary 3.2.6.
Suppose \(H\) is \(\omega\)-free, all \(A\in H[\partial]\subseteq K[\partial]\) of order \(r\) are terminal, and \((P,\mathfrak{m},\widehat{h})\) is a minimal hole in \(H\). Then by Corollary 7.7.38 we have:
\[(P,\mathfrak{m},\widehat{h})\ \text{is firm}\quad\Longrightarrow\quad\text{ there is a unique $f\in\mathcal{C}^{r}$ with $P(f)=0$ and $f\approx_{H}\widehat{h}$.}\]
Thanks to Corollary 7.7.13, the converse of this implication also holds if \(\deg P=1\), but we do not know whether this is still the case when \(\deg P>1\). We now prove a partial generalization of Corollary 7.7.14:
**Corollary 7.7.44**.: _Suppose \(H\) is \(\omega\)-free, \((P,\mathfrak{m},\widehat{h})\) is an ultimate minimal hole in \(H\) with terminal linear part, and \((P,\mathfrak{m},\widehat{h})\) is linear or absolutely normal. Then_
\[(P,\mathfrak{m},\widehat{h})\ \text{is firm}\quad\Longleftrightarrow\quad\text{ there is a unique $f\in\mathcal{C}^{r}$ with $P(f)=0$ and $f\prec\mathfrak{m}$.}\]
_If \((P,\mathfrak{m},\widehat{h})\) is firm and \(f\in\mathcal{C}^{r}\), \(P(f)=0\), and \(f\prec\mathfrak{m}\), then \(f\in\operatorname{D}(H)\) and \(f\approx_{H}\widehat{h}\), and there is no \(g\neq f\) in \(\mathcal{C}^{r}[i]\) with \(P(g)=0\) and \(g\prec\mathfrak{m}\)._
Proof.: If \(\deg P=1\), then this follows from Corollary 7.7.14. Suppose \(\deg P>1\). Lemma 7.7.2 yields \(f\in H_{*}\) with \(P(f)=0\) and \(f\approx_{H}\widehat{h}\). This and Lemma 7.7.34 yield the forward direction of the displayed equivalence, as well as the rest in view of \(H_{*}\) being arbitrary. The converse holds by the remark after Corollary 7.7.43.
_Remark_.: Suppose \(H\) is \(\omega\)-free, all \(A\in H[\partial]\subseteq K[\partial]\) of order \(r\) are terminal, and \((P,\mathfrak{m},\widehat{h})\) is a minimal hole in \(H\). Then Remarks 4.4.15 and 7.7.26 give a refinement \((P_{+h},\mathfrak{n},\widehat{h}-h)\) of \((P,\mathfrak{m},\widehat{h})\) and an active \(\phi>0\) in \(H\) such that the minimal hole \((P_{+h^{\circ}}^{\phi\circ},\mathfrak{n}^{\circ},\widehat{h}^{\circ}-h^{\circ})\) in \(H^{\circ}\) is ultimate, and linear or absolutely normal. Therefore the hypotheses of Corollary 7.7.44 are satisfied by \(H^{\circ}\) and \((P_{+h^{\circ}}^{\phi\circ},\mathfrak{n}^{\circ},\widehat{h}^{\circ}-h^{\circ})\) in place of \(H\) and \((P,\mathfrak{m},\widehat{h})\).
The next two subsections contain analogues of Lemmas 7.7.34, 7.7.36, and 7.7.42 for slots in \(K\).
**Firm slots in \(K\).** _In this subsection \((P,\mathfrak{m},\widehat{f})\) is a slot in \(K\) of order \(r\geqslant 1\), with \(\widehat{f}\in\widehat{K}\)._ Here are \(K\)-versions of Lemma 7.7.34 and its Corollary 7.7.35 with similar proofs:
**Lemma 7.7.45**.: _If \((P,\mathfrak{m},\widehat{f})\) is firm, ultimate, and absolutely normal, with terminal linear part, then for any \(f,g\in\mathcal{C}^{r}[i]\) with \(P(f)=P(g)=0\), \(f,g\prec\mathfrak{m}\) we have \(f=g\)._
**Corollary 7.7.46**.: _If \((P,\mathfrak{m},\widehat{f})\) is firm, ultimate, and strictly normal, and its linear part splits strongly over \(K\), then the conclusion of Lemma 7.7.45 holds._
_In the rest of this subsection we assume that all \(A\in K[\partial]\) of order \(r\) are terminal._ (This holds if \(r=1\) or \(K\) is \(\omega\)-free with a minimal hole of order \(r\) in \(K\), because then \(K\) is \(r\)-linearly closed by Corollary 3.2.4.)
By the \(K\)-versions of Lemmas 4.4.10 and 4.4.25, if \((P,\mathfrak{m},\widehat{f})\) is ultimate, then so is each of its refinements, and likewise with "firm" in place of "ultimate". The \(K\)-version of Corollary 7.7.23, and Lemma 7.7.45 in place of Lemma 7.7.34 then yields the \(K\)-version of Lemma 7.7.36:
**Lemma 7.7.47**.: _If \((P,\mathfrak{m},\widehat{f})\) is \(Z\)-minimal, deep, normal, special, ultimate, and firm, then there is at most one \(f\in\mathcal{C}^{r}[i]\) with \(P(f)=0\) and \(f\approx_{K}\widehat{f}\)._
Using the \(K\)-version of Corollary 7.7.25, and Lemmas 7.7.4 and 7.7.45 instead of Lemmas 7.7.2 and 7.7.34 we obtain the \(K\)-version of Corollary 7.7.37:
**Corollary 7.7.48**.: _If \(H\) is \(\omega\)-free, \(K\) is \(r\)-linearly newtonian, and \((P,\mathfrak{m},\widehat{f})\) is firm and \(Z\)-minimal, then there is a unique \(f\in\mathcal{C}^{r}[i]\) with \(P(f)=0\) and \(f\approx_{K}\widehat{f}\), and this \(f\) is in \(\mathrm{D}(H)[i]\)._
Here is a \(K\)-analogue of Corollary 7.7.38:
**Corollary 7.7.49**.: _Suppose \((P,\mathfrak{m},\widehat{f})\) is a firm minimal hole in \(K\), and \(r=\deg P=1\) or \(H\) is \(\omega\)-free. Then the conclusion of Corollary 7.7.48 holds._
Proof.: If \((P,\mathfrak{m},\widehat{f})\) has complexity \((1,1,1)\), then by Remark 4.4.19 and the \(K\)-version of Lemma 4.4.25 we arrange that \((P,\mathfrak{m},\widehat{f})\) is ultimate, so that the desired conclusion follows from Corollary 7.7.20. If \(K\) is \(\omega\)-free and \((P,\mathfrak{m},\widehat{f})\) has complexity \(>(1,1,1)\), then \(\deg P>1\) by Corollary 3.2.8, so \(K\) is \(r\)-linearly newtonian by Corollary 3.2.6, and the desired conclusion follows from Corollary 7.7.48.
**Corollary 7.7.50**.: _Suppose \(H\) is \(\mathrm{d}\)-perfect. If \((P,\mathfrak{m},\widehat{f})\) is a minimal hole in \(K\) of complexity \((1,1,1)\), then it is flabby. If \(H\) is \(\omega\)-free, then every minimal hole in \(K\) of positive order is flabby._
Proof.: The first part is immediate from Corollary 7.7.49. For the second part, suppose \(H\) is \(\omega\)-free and we are given a minimal hole in \(K\) of positive order. By Lemma 4.2.15 we can pass to an equivalent hole \((Q,\mathfrak{n},\widetilde{a})\) in \(K\) with \(\widetilde{a}\in\widetilde{H}[i]\) for some immediate \(H\)-field extension \(\widetilde{H}\) of \(H\), so Corollary 7.7.49 applies to it.
Theorem 7.7.1 now follows from Corollaries 7.7.39, 7.7.40, 7.7.41, and 7.7.50.
**Flabby slots in \(K\).** _Let \((P,\mathfrak{m},\widehat{f})\) be a slot in \(K\) of order \(r\geqslant 1\), \(\widehat{f}\in\widehat{K}\)._ Note that if \((P,\mathfrak{m},\widehat{f})\) is normal and \(f\in\mathfrak{m}\mathcal{C}^{r}[\mathrm{i}]^{\preccurlyeq}\), \(P(f)=0\), then by Corollary 6.3.6 we have \(f\in\mathcal{C}^{<\infty}[\mathrm{i}]\), and \(f\in\mathcal{C}^{\infty}[\mathrm{i}]\) if \(H\subseteq\mathcal{C}^{\infty}\), \(f\in\mathcal{C}^{\omega}[\mathrm{i}]\) if \(H\subseteq\mathcal{C}^{\omega}\).
Suppose \((P,\mathfrak{m},\widehat{f})\) is flabby. The remarks about multiplicative conjugates, refinements, and compositional conjugates preceding Lemma 7.7.42 then go through for the slot \((P,\mathfrak{m},\widehat{f})\) in \(K\) instead of the slot \((P,\mathfrak{m},\widehat{h})\) in \(H\) with \(\mathcal{C}^{r}[\mathrm{i}]\) replacing \(\mathcal{C}^{r}\) and \(K^{\circ}\) instead of \(H^{\circ}\); this uses the \(K\)-versions of Lemma 4.4.26 and Corollary 4.4.30. It helps in proving a complex version of Lemma 7.7.42:
**Lemma 7.7.51**.: _Suppose \((P,\mathfrak{m},\widehat{f})\) is flabby, special, \(Z\)-minimal, deep, and strictly normal. Then there are \(f\neq g\) in \(\mathfrak{m}\mathcal{C}^{r}[\mathrm{i}]^{\preccurlyeq}\) such that \(P(f)=P(g)=0\), \(f,g\prec\mathfrak{m}\)._
Proof.: Using Corollary 7.7.30 and the remarks preceding the lemma, we arrange that \(\mathfrak{m}=1\) and \((P,1,\widehat{f})\) is absolutely normal. Now argue as in the proof of Lemma 7.7.42, using instead of Lemma 7.7.7 its \(K\)-version. We also appeal to Lemma 6.2.1, Theorem 6.2.3, and Lemma 6.2.4 instead of to Lemma 6.2.6 and Corollaries 6.2.8 and 6.2.9. Naturally, we don't need to take real or imaginary parts, and use \(\Xi_{a}\) instead of \(\operatorname{Re}\Xi_{a}\).
**Corollary 7.7.52**.: _Suppose \(H\) is \(\omega\)-free, \(K\) is \(r\)-linearly newtonian, and \((P,\mathfrak{m},\widehat{f})\) is \(Z\)-minimal and flabby. Assume also that \((P,\mathfrak{m},\widehat{f})\) is linear or normal. Then the conclusion of Lemma 7.7.51 holds._
Proof.: Like that of Corollary 7.7.43, but using Corollary 3.3.48 instead of Theorem 3.3.33, and Lemma 7.7.51 instead of Lemma 7.7.42.
In particular, the conclusion of Lemma 7.7.51 holds if \(H\) is \(\omega\)-free and \((P,\mathfrak{m},\widehat{f})\) is a flabby normal nonlinear minimal hole in \(K\).
**Corollary 7.7.53**.: _Suppose that \((P,\mathfrak{m},\widehat{f})\) is an ultimate minimal hole in \(K\) and, in case the complexity of \((P,\mathfrak{m},\widehat{f})\) is \(>(1,1,1)\), that \(H\) is \(\omega\)-free and \((P,\mathfrak{m},\widehat{f})\) is absolutely normal. Then_
\((P,\mathfrak{m},\widehat{f})\) _is firm_ \(\Longleftrightarrow\) _there is a unique \(f\in\mathcal{C}^{r}[\mathrm{i}]\) with \(P(f)=0\) and \(f\prec\mathfrak{m}\)._
_If \((P,\mathfrak{m},\widehat{f})\) is firm, \(f\in\mathcal{C}^{r}[\mathrm{i}]\), \(P(f)=0\), and \(f\prec\mathfrak{m}\), then \(f\in\mathrm{D}(H)[\mathrm{i}]\) and \(f\approx_{K}\widehat{f}\)._
Proof.: If \((P,\mathfrak{m},\widehat{f})\) has complexity \((1,1,1)\), use Corollaries 7.7.18 and 7.7.20. Now suppose \(H\) is \(\omega\)-free and \((P,\mathfrak{m},\widehat{f})\) is absolutely normal of complexity \(>(1,1,1)\). Then \(\deg P>1\) by Corollary 3.2.8, and \(L_{P_{\times\mathfrak{m}}}\) is terminal by Corollaries 3.2.4 and 2.6.21. Thus the forward direction of the displayed equivalence follows from Lemmas 7.7.45 and 7.7.4, and the backward direction from the remark after Corollary 7.7.52. The rest follows by applying Lemma 7.7.4 to all choices of \(H_{*}\).
_Remark_.: Suppose \((P,\mathfrak{m},\widehat{f})\) is a minimal hole in \(K\). If \(\deg P=1\), then \((P,\mathfrak{m},\widehat{f})\) refines to an ultimate hole in \(K\) by Remark 4.4.19. If \(H\) is \(\omega\)-free and \(\deg P>1\), then Corollary 7.7.32 gives a refinement \((P_{+f},\mathfrak{n},\widehat{f}-f)\) of \((P,\mathfrak{m},\widehat{f})\) and an active \(\phi>0\) in \(H\) such that the minimal hole \((P_{+f^{\circ}}^{\phi_{\circ}},\mathfrak{n}^{\circ},\widehat{f}^{\circ}-f^{ \circ})\) in \(K^{\circ}\) is ultimate and absolutely normal.
**Index**
absolutely normal, 448, 449
adjoint
conjugate, 92
linear differential operator, 86
matrix differential equation, 111
alike, 341
almost strongly
repulsive-normal, 213
split-normal, 193
anti-self-adjoint, 114
anti-self-dual
differential module, 114
linear differential operator, 114
matrix differential equation, 114
apart, 300
asymptotic couple
of Hardy type, 37
asymptotically surjective, 50
basis
Hahn, 300
Lyapunov, 379
Bessel function
first kind, 422
second kind, 424
bilinear form
\((-1)^{n}\)-symmetric, 102
\(\partial\)-compatible, 100
concomitant, 104
bounded
differential module, 98
matrix, 243
characteristic polynomial, 97
Chebyshev system, 235
closed
\(H\)-closed, 5
trigonometrically, 35
under powers, 37
closure
trigonometric, 36
trigonometric-Liouville, 43
completion, 59
complexity
differential polynomial, 23
hole, 146
slot, 149
concomitant, 104
conjugate
adjoint, 92
complex, 92
compositional, 149
dual, 93
multiplicative, 151
\(\mathcal{C}^{r}\)-Hardy field, 244
\(\mathcal{C}^{r}\)-maximal, 246
\(\mathcal{C}^{r}\)-perfect, 246
\(\mathcal{C}^{r}\)-perfect hull, 246
deep, 156
density, 284
natural, 285
differential algebra, 78
exponential extension, 78
universal exponential extension, 81
differential closure, 366
differential module
anti-self-dual, 114
bounded, 98
complex conjugate, 92
compositional conjugate, 87
conjugate dual, 93
cyclic, 91
dual, 89
eigenvalue, 87
generates oscillations, 394
lattice, 94
multiplicity, 87, 88
self-dual, 100
spectrum, 87
splitting, 90
differential polynomial
complexity, 23
\(K\)-external zero, 68
linear part, 145
newton position, 64, 65
proper, 66
separant, 23
differentially closed, 366
weakly, 366
disconjugate, 237, 239
eventually, 239
eigenring
linear differential operator, 100
matrix differential equation, 110
eigenvalue, 115
differential module, 87
linear differential operator, 85
matrix differential equation, 111
element
\(\widehat{a}\)-repulsive, 207
almost special, 51
attractive, 205
exponential over, 78
\(\gamma\)-repulsive, 205
repulsive, 205
special, 51
spectral decomposition, 80
enumeration, 403
exceptional values
eventual, 45
ultimate, 133
exponential
element, 78
extension, 78
extension
exponential, 78
gaussian, 76
Liouville, 27
spectral, 130
strict, 57
trigonometric, 36
universal exponential, 80
function
1-periodic, 278
alike, 341
almost periodic, 279
attractive, 316
Bessel, 422, 424
cylinder, 413
integrable at \(\infty\), 313
mean value, 281
normal, 279
repulsive, 316
slowly varying, 47
trigonometric polynomial, 279
uniformly distributed mod 1, 285, 289
gaussian extension, 76
generates oscillations
differential module, 394
germ, 233
linear differential operator, 392
matrix differential equation, 394
germ, 218
alike, 341
analytic, 228
apart, 300
asymptotically similar, 348, 350
attractive, 316
continuous, 219
differentiable, 227
generates oscillations, 233
hardian, 244
oscillating, 225
repulsive, 316
smooth, 228
transexponential, 251
translogarithmic, 272
uniformly distributed mod 1, 285
group of logarithmic derivatives, 30
complement, 79
group ring, 74
gaussian extension of a valuation, 76
norms, 77
trace, 75
\(H\)-asymptotic field
\(\omega\)-free, 268
strongly \(r\)-newtonian, 68
\(H\)-closed, 5
Hahn basis, 300
hardian, 244
Hardy field, 244
analytic, 244
asymptotic couple, 37
bounded, 255
canonical \(\Lambda\Omega\)-expansion, 360
d-maximal, 245
d-perfect, 245
d-perfect hull, 245
differentially maximal, 245
\(H\)-closed, 5
Hardy-Liouville closure, 246
maximal, 245
\(\omega\)-free, 268
perfect, 245
perfect hull, 245
Schwarz closed, 399
smooth, 244
unbounded, 255
universal exponential extension, 295
Hardy type, 37
Hausdorff field, 220
hole, 146
complexity, 146
minimal, 146
\(Z\)-minimal, 148
\(K\)-external zero, 68
\(\Lambda\Omega\)-cut, 266
\(\Lambda\Omega\)-field, 360
lattice, 94
differential module, 94
least common left multiple, 22
limit point, 231
linear differential operator
\(\widehat{a}\)-repulsive splitting, 207
adjoint, 86
anti-self-dual, 114
asymptotically surjective, 50
characteristic polynomial, 97
concomitant, 104
conjugate adjoint, 92
disconjugate, 237, 239
eigenring, 100
eigenvalue, 85
eventual exceptional values, 45
eventually disconjugate, 239
generates oscillations, 392
Hahn basis, 300
least common left multiple, 22
Lyapunov basis, 379
multiplicity, 85
real splitting, 22
\(S\)-repulsive splitting, 206
same type, 87
scrambled, 125
self-adjoint, 103
self-dual, 101
skew-adjoint, 103
span, 139
spectral decomposition, 116
spectrum, 85
splitting, 20
steep, 48
strong splitting, 179
terminal, 136
twist, 84
ultimate exceptional values, 133
unscrambled, 125
linear part
differential polynomial, 145
slot, 153
Liouville extension, 27
Lyapunov
basis, 379
exponent, 241
fundamental matrix, 244
fundamental system of solutions, 244
spectrum, 236
matrix differential equation
\(H\)-regular, 395
adjoint, 111
anti-self-adjoint, 114
eigenring, 110
eigenvalue, 111
gauge transform, 110
generates oscillations, 394
hamiltonian, 113
Lyapunov fundamental matrix, 244
Lyapunov fundamental system of solutions, 244
Lyapunov spectrum, 243
self-adjoint, 112
self-dual, 111
spectrum, 111
mean value, 281
minimal hole, 146
multiplicity
differential module, 87, 88
function, 230
linear differential operator, 85
total, 230
newton position, 64, 65
normal
absolutely, 448, 449
function, 279
slot, 159
strictly, 164
principal
solution, 235
system, 235
proper, 66
differential polynomial, 66
slot, 175
refinement, 150
repulsive
element, 205, 316
function, 316
germ, 316
repulsive-normal, 209
almost strongly, 213
residue, 439, 441
\(S\)-repulsive
element, 205
splitting, 206
scrambled, 125
self-adjoint
linear differential operator, 103
matrix differential equation, 112
self-dual
differential module, 100
linear differential operator, 101
matrix differential equation, 111
semialgebraic, 7
separant, 23, 325
skew-adjoint, 103
slot, 149
absolutely normal, 448, 449
almost strongly repulsive-normal, 213
almost strongly split-normal, 193
balanced, 174
complexity, 149
compositional conjugate, 149
deep, 156
equivalence, 149
firm, 203, 204
flabby, 203, 204
isolated, 167
linear, 149
linear part, 153
multiplicative conjugate, 151
normal, 159
proper, 175
quasilinear, 151
refinement, 150
repulsive-normal, 209
special, 155
split-normal, 185
steep, 156
strictly normal, 164
strongly repulsive-normal, 213
ultimate, 201, 202
\(Z\)-minimal, 149
solution
principal, 235, 258
principal system, 235
split-normal equation, 320
system, 5, 9
span, 139
special, 51
almost, 51
slot, 155
spectral decomposition
of a differential operator, 116
of an element, 80
spectral extension, 130
spectrum
differential module, 87
linear differential operator, 85
Lyapunov, 243
matrix differential equation, 111
split-normal, 185
almost strongly, 193
splitting, 20
\(\widehat{a}\)-repulsive, 207
differential module, 90
induced by a factorization, 20
real, 22
\(S\)-repulsive, 206
strong, 179
steep
linear differential operator, 48
slot, 156
strictly normal, 164
strongly
\(r\)-newtonian, 68
repulsive-normal, 213
split-normal, 193
system
Chebyshev, 235
Markov, 236
principal, 235
terminal, 136
trace, 75
transexponential, 251
translation vector, 280
translogarithmic, 272
transseries, 8
grid-based, 5
oscillating, 127
trigonometric
closed, 35
closure, 36
extension, 36
Liouville closure, 43
twist, 84
type, 87
ultimate
exceptional values, 133
slot, 201, 202
uniformly distributed mod 1, 285
universal exponential extension, 80, 81
of a Hardy field, 295
spectral decomposition, 80
unscrambled, 125
valuation
gaussian extension, 76
spectral extension, 130
valued differential field
completion, 59
very small derivation, 26
\(Z\)-minimal
hole, 148
slot, 149
zero
consecutive, 231
\(K\)-external, 68
multiplicity, 230
**List of Symbols**
\[\begin{array}{ll}\mbox{mult}_{a}(A)&\mbox{multiplicity of $A$ at $a\in K$}\end{array}\]
2308.06329 | Correlation harvesting between particle detectors in uniform motion | We investigate the correlation harvesting protocol using two Unruh-DeWitt
particle detectors moving along four classes of uniformly accelerated
trajectories categorized by Letaw: linear, catenary, cusped, and circular
motions. For each trajectory, two types of configurations are carried out: one
possesses a stationary (time-translation invariant) Wightman function and the
other is nonstationary. We find that detectors undergoing linear, catenary, and
cusped motions gain fewer correlations in the nonstationary configurations
compared to those in stationary configurations. Detectors in circular motion
have similar behavior in both configurations. We discuss the relative
suppression of correlation harvesting due to high acceleration for each case.
Remarkably we find that under certain circumstances detectors in both linear
and circular states of motion can harvest genuine (non-communication assisted)
entanglement even though they are in causal contact. | Lana Bozanic, Manar Naeem, Kensuke Gallock-Yoshimura, Robert B. Mann | 2023-08-11T18:08:53Z | http://arxiv.org/abs/2308.06329v1 | # Correlation harvesting between particle detectors in uniform motion
###### Abstract
We investigate the correlation harvesting protocol using two Unruh-DeWitt particle detectors moving along four classes of uniformly accelerated trajectories categorized by Letaw: linear, catenary, cusped, and circular motions. For each trajectory, two types of configurations are carried out: one possesses a stationary (time-translation invariant) Wightman function and the other is nonstationary. We find that detectors undergoing linear, catenary, and cusped motions gain fewer correlations in the nonstationary configurations compared to those in stationary configurations. Detectors in circular motion have similar behavior in both configurations. We discuss the relative suppression of correlation harvesting due to high acceleration for each case. Remarkably we find that under certain circumstances detectors in both linear and circular states of motion can harvest genuine (non-communication assisted) entanglement even though they are in causal contact.
## I Introduction
The field of relativistic quantum information (RQI) has undergone rapid development in recent years, resulting in the emergence of groundbreaking concepts such as entanglement degradation due to non-inertial motion [1; 2; 3], entanglement harvesting [4; 5; 6; 7; 8], and the long-established Unruh effect [9; 10; 11]. The Unruh effect states that a linearly accelerating observer in Minkowski spacetime will experience a thermal bath, and this experience is indistinguishable from that of an inertial observer sitting in a thermal bath. More precisely, the temperature detected by a linearly accelerating two-level quantum system, known as the Unruh-DeWitt (UDW) particle detector [12; 11], is proportional to its acceleration \(a\) and it reads
\[T_{\rm U}=\frac{\hbar a}{2\pi ck_{\rm B}}\,, \tag{1}\]
where \(\hbar\) is the reduced Planck constant, \(c\) is the speed of light, and \(k_{\rm B}\) is Boltzmann's constant.
Despite its theoretical significance, the Unruh effect has yet to be verified experimentally. The main challenge hindering its experimental verification is the enormous acceleration required to produce measurable temperatures: an acceleration on the order of \(a\approx 10^{20}\) m/s\({}^{2}\) is needed to achieve a temperature of \(T_{\rm U}\sim 1\) Kelvin.
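A quick order-of-magnitude check of eq. (1) in SI units (a minimal Python sketch of ours; the constant values are standard CODATA figures, not taken from this paper):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m/s
k_B = 1.380649e-23       # Boltzmann constant, J/K

def unruh_temperature(a):
    """Unruh temperature in kelvin for a proper acceleration a in m/s^2, eq. (1)."""
    return hbar * a / (2.0 * math.pi * c * k_B)

print(unruh_temperature(1e20))           # ~0.41 K
print(2.0 * math.pi * c * k_B / hbar)    # ~2.5e20 m/s^2 gives T_U = 1 K
```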
Given this situation, researchers have explored other detector trajectories that could induce a phenomenon similar to the Unruh effect. In 1981, Letaw found five classes of stationary trajectories with nonzero constant acceleration in flat spacetime [13]: linear, circular, cusped, catenary, and helix trajectories. In \((3+1)\)-dimensional Minkowski spacetime, these trajectories are characterized by three parameters: two torsions and the magnitude of the proper acceleration. The effective temperatures observed by a detector undergoing these motions have been studied over subsequent years in various contexts [14; 15; 16; 17; 18; 19; 20]. Among these, the circular trajectory has attracted considerable attention for potential experimental realizations of the Unruh effect [14; 15; 19; 21; 22; 23; 24].
Less studied is the _entanglement harvesting_ protocol for these various classes of motion, apart from detectors undergoing linearly accelerated motion, which has been extensively analyzed [25; 26; 27; 28]. In this paper, we consider this problem and investigate how _two_ UDW detectors can extract entanglement from the vacuum whilst undergoing these various types of non-inertial motion.
In the entanglement harvesting protocol [4; 5; 6; 7], two initially uncorrelated detectors interact locally with a quantum field in some state (typically the vacuum state) to extract preexisting entanglement [29; 30]. More generally, detectors can harvest classical and quantum correlations in what is called the _correlation harvesting protocol_. The amount of harvested correlations is sensitive to the background spacetime [31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46] and the motion of the detectors [25; 26; 27; 28; 47; 48; 49; 50]. Moreover, this protocol is close to experimental implementation, as recent experiments detecting correlations of the electromagnetic ground state in a ZnTe crystal have demonstrated [51; 52; 53].
In this paper we investigate the correlation harvesting protocol with detectors in four classes of uniform acceleration motion: linear, catenary, cusped, and circular trajectories. Unlike the helical case, these motions can all be realized in 2 spatial dimensions, and so are more amenable to experimental testing [19; 20; 21; 22; 23; 24]. We categorize the detector configurations into two classes: stationary, in which the Wightman function is time-translation invariant, and nonstationary, in which it is not. The Wightman functions for these two scenarios are similar, except that in nonstationary configurations they possess an additional term that breaks time-translation invariance.
After introducing the UDW detector model in section
II and the four uniform acceleration trajectories in section III, we then focus in section IV.1 on a single detector following the four trajectories to examine its transition probability (or response function). We study its dependence on the magnitudes of the acceleration and torsions, and numerically evaluate the effective temperature of the detector.
We then consider the correlation harvesting protocol in section IV.2. Specifically, concurrence of entanglement and quantum mutual information - which measures the harvested total correlations - are numerically evaluated. We find that the stationary and nonstationary configurations behave in a similar manner since their Wightman functions have terms in common. However, the amount of correlations extracted by the detectors in the nonstationary configurations differs from those of the stationary ones due to an additional term in the Wightman function. We also look into the acceleration dependence of the harvested correlations and conclude that sufficiently high accelerations prevent _any_ uniformly accelerating detectors from extracting correlations. This point is consistent with previous papers that focused on linear and circular motions [8; 25; 26; 27; 47; 48; 28]. Finally, we show that constant acceleration makes it challenging to extract 'genuine entanglement' (entanglement preexistent in a quantum field that has no possible assistance from detector communication) in section IV.3. In general, genuine entanglement can be harvested from causally disconnected spacetime regions due to microcausality. For inertial detectors with Gaussian switching in Minkowski spacetime, it is shown that a sufficiently large energy gap allows the detectors to extract genuine entanglement from such regions [54; 7]. While we find that this is generally not the case for uniformly accelerated detectors, remarkably we find small but non-negligible regions of parameter space where detectors in causal contact can harvest genuine entanglement.
Throughout this manuscript, we use the mostly-plus metric convention, \((-,+,+,+)\) and the natural units \(\hbar=k_{\mathrm{B}}=c=1\). A point in spacetime is denoted by \(\mathsf{x}\).
## II Unruh-DeWitt detectors
### Density matrix of detectors
Let us first review the correlation harvesting protocol. Consider two pointlike UDW detectors A and B with an energy gap \(\Omega_{j}\), \(j\in\{\mathrm{A},\mathrm{B}\}\) between ground \(\ket{g_{j}}\) and excited states \(\ket{e_{j}}\). These detectors interact with the quantum Klein-Gordon field \(\hat{\phi}\) along their trajectories \(\mathsf{x}_{j}(\tau_{j})=(t(\tau_{j}),\mathbf{x}(\tau_{j}))\), where \(\tau_{j}\) is the proper time of detector-\(j\).
In the interaction picture, the interaction Hamiltonian (as a generator of time-translation with respect to \(\tau_{j}\)) describing the coupling between detector-\(j\) and \(\hat{\phi}\) is given by
\[\hat{H}_{j}^{\tau_{j}}(\tau_{j})=\lambda_{j}\chi_{j}(\tau_{j})\hat{\mu}_{j}( \tau_{j})\otimes\hat{\phi}(\mathsf{x}_{j}(\tau_{j}))\,,\ j\in\{\mathrm{A}, \mathrm{B}\} \tag{2}\]
where \(\lambda_{j}\) is a coupling constant and \(\chi_{j}(\tau_{j})\) is the switching function that governs the time-dependence of the coupling. Here, \(\hat{\mu}_{j}(\tau_{j})\) is the monopole moment given by
\[\hat{\mu}_{j}(\tau_{j})=\ket{e_{j}}\bra{g_{j}}e^{\mathrm{i}\Omega_{j}\tau_{j}}+\ket{g_{j}}\bra{e_{j}}e^{-\mathrm{i}\Omega_{j}\tau_{j}}\,, \tag{3}\]
which describes each detector's internal dynamics, and the field operator \(\hat{\phi}(\mathsf{x}_{j}(\tau_{j}))\) is pulled back along the trajectory of detector-\(j\). The superscript on \(\hat{H}_{j}^{\tau_{j}}(\tau_{j})\) indicates the time-translation that the Hamiltonian is generating.
The total interaction Hamiltonian, \(\hat{H}_{\mathrm{I}}^{t}(t)\), can be written as a generator of time-translation with respect to the time \(t\) that is common to both detectors:
\[\hat{H}_{\mathrm{I}}^{t}(t)=\frac{\mathrm{d}\tau_{\mathrm{A}}}{\mathrm{d}t} \hat{H}_{\mathrm{A}}^{\tau_{\mathrm{A}}}\big{(}\tau_{\mathrm{A}}(t)\big{)}+ \frac{\mathrm{d}\tau_{\mathrm{B}}}{\mathrm{d}t}\hat{H}_{\mathrm{B}}^{\tau_{ \mathrm{B}}}\big{(}\tau_{\mathrm{B}}(t)\big{)}\,, \tag{4}\]
Note that the proper times \(\tau_{\mathrm{A}}\) and \(\tau_{\mathrm{B}}\) are each now functions of \(t\). From this Hamiltonian, one obtains the time-evolution operator \(\hat{U}_{\mathrm{I}}\)[55; 56]:
\[\hat{U}_{\mathrm{I}}=\mathcal{T}_{t}\exp\left(-\mathrm{i}\int_{\mathbb{R}} \mathrm{d}t\,\hat{H}_{\mathrm{I}}^{t}(t)\right)\,, \tag{5}\]
where \(\mathcal{T}_{t}\) is a time-ordering symbol with respect to the common time \(t\).
One can then use a perturbative analysis and obtain the final density matrix of a joint system \(\mathcal{H}_{\mathrm{A}}\otimes\mathcal{H}_{\mathrm{B}}\), where \(\mathcal{H}_{j}\) is a Hilbert space for detector-\(j\). Assuming a small coupling strength, \(\lambda\ll 1\), the Dyson series expansion of \(\hat{U}_{\mathrm{I}}\) reads
\[\hat{U}_{\mathrm{I}} =\mathds{1}+\hat{U}_{\mathrm{I}}^{(1)}+\hat{U}_{\mathrm{I}}^{(2)} +\mathcal{O}(\lambda^{3})\,, \tag{6a}\] \[\hat{U}_{\mathrm{I}}^{(1)} =-\mathrm{i}\int_{-\infty}^{\infty}\mathrm{d}t\,\hat{H}_{ \mathrm{I}}^{t}(t)\,,\] (6b) \[\hat{U}_{\mathrm{I}}^{(2)} =-\int_{-\infty}^{\infty}\mathrm{d}t_{1}\int_{-\infty}^{t_{1}} \mathrm{d}t_{2}\,\hat{H}_{\mathrm{I}}^{t}(t_{1})\hat{H}_{\mathrm{I}}^{t}(t_{2 })\,. \tag{6c}\]
By assuming that the initial state, \(\rho_{0}\), of the detectors-field system is
\[\rho_{0}=\ket{g_{\mathrm{A}}}\bra{g_{\mathrm{A}}}\otimes\ket{g_{\mathrm{B}}} \bra{g_{\mathrm{B}}}\otimes\ket{0}\bra{0}\,, \tag{7}\]
where \(\ket{0}\) is the vacuum state of the field, one finds the final total density matrix \(\rho_{\mathrm{tot}}\) after the interaction to be
\[\rho_{\mathrm{tot}} =\hat{U}_{\mathrm{I}}\rho_{0}\hat{U}_{\mathrm{I}}^{\dagger}\] \[=\rho_{0}+\rho^{(1,1)}+\rho^{(2,0)}+\rho^{(0,2)}+\mathcal{O}( \lambda^{4})\,, \tag{8}\]
where \(\rho^{(i,j)}=\hat{U}^{(i)}\rho_{0}\hat{U}^{(j)\dagger}\) and all the odd-power terms of \(\lambda\) vanish [7]. Then the final density matrix of the detectors, \(\rho_{\mathrm{AB}}\), is obtained by tracing out the field part: \(\rho_{\mathrm{AB}}=\mathrm{Tr}_{\hat{\phi}}\big{[}\rho_{\mathrm{tot}}\big{]}\). By employing the basis \(\ket{g_{\mathrm{A}}g_{\mathrm{B}}}=[1,0,0,0]^{\top}\), \(\ket{g_{\mathrm{A}}e_{\mathrm{B}}}=[0,1,0,0]^{\top}\), \(\ket{e_{\mathrm{A}}g_{\mathrm{B}}}=[0,0,1,0]^{\top}\), \(\ket{e_{\mathrm{A}}e_{\mathrm{B}}}=[0,0,0,1]^{\top}\), the density matrix \(\rho_{\mathrm{AB}}\) reads
\[\rho_{\rm AB}=\left[\begin{array}{cccc}1-\mathcal{L}_{\rm AA}- \mathcal{L}_{\rm BB}&0&0&\mathcal{M}^{*}\\ 0&\mathcal{L}_{\rm BB}&\mathcal{L}_{\rm AB}^{*}&0\\ 0&\mathcal{L}_{\rm AB}&\mathcal{L}_{\rm AA}&0\\ \mathcal{M}&0&0&0\end{array}\right]+\mathcal{O}(\lambda^{4})\,, \tag{9}\]
where
\[\mathcal{L}_{ij} =\lambda^{2}\int_{\mathbb{R}}\mathrm{d}\tau_{i}\int_{\mathbb{R}} \mathrm{d}\tau_{j}^{\prime}\,\chi_{i}(\tau_{i})\chi_{j}(\tau_{j}^{\prime})e^{ -i\Omega(\tau_{i}-\tau_{j}^{\prime})}\] \[\qquad\qquad\qquad\qquad\qquad\times W\big{(}\mathsf{x}_{i}( \tau_{i}),\mathsf{x}_{j}(\tau_{j}^{\prime})\big{)}\,, \tag{10a}\] \[\mathcal{M} =-\lambda^{2}\int_{\mathbb{R}}\mathrm{d}\tau_{\rm A}\int_{ \mathbb{R}}\mathrm{d}\tau_{\rm B}\,\chi_{\rm A}(\tau_{\rm A})\chi_{\rm B}( \tau_{\rm B})e^{-i\Omega(\tau_{\rm A}+\tau_{\rm B})}\] \[\qquad\times\big{[}\Theta\big{(}t(\tau_{\rm A})-t(\tau_{\rm B}) \big{)}W\big{(}\mathsf{x}_{\rm A}(\tau_{\rm A}),\mathsf{x}_{\rm B}(\tau_{\rm B })\big{)}\] \[\qquad\quad+\Theta\big{(}t(\tau_{\rm B})-t(\tau_{\rm A})\big{)}W \big{(}\mathsf{x}_{\rm B}(\tau_{\rm B}),\mathsf{x}_{\rm A}(\tau_{\rm A}) \big{)}\big{]}\,, \tag{10b}\]
where \(\Theta(t)\) is the Heaviside step function and \(W(\mathsf{x},\mathsf{x}^{\prime})\coloneqq\bra{0}\hat{\phi}(\mathsf{x})\hat{ \phi}(\mathsf{x}^{\prime})\ket{0}\) is the vacuum Wightman function. In \((3+1)\)-dimensional Minkowski spacetime, the Wightman function reads
\[W(\mathsf{x},\mathsf{x}^{\prime})=-\frac{1}{4\pi^{2}}\frac{1}{(t-t^{\prime}- \mathrm{i}\epsilon)^{2}-(\mathbf{x}-\mathbf{x}^{\prime})^{2}}\,, \tag{11}\]
where \(\epsilon\) is the UV cutoff. The elements \(\mathcal{L}_{jj}\), \(j\in\{\rm A,B\}\) are the so-called transition probabilities (or response functions), which describe the probability of a detector transitioning from the ground to excited states, \(|g_{j}\rangle\to|e_{j}\rangle\). The off-diagonal elements \(\mathcal{M}\) and \(\mathcal{L}_{\rm AB}\) are responsible for harvesting entanglement and quantum mutual information, respectively, as we shall see in the next subsection.
Throughout this paper, we use a Gaussian switching function
\[\chi_{j}(\tau_{j})=e^{-\tau_{j}^{2}/2\sigma^{2}}\,, \tag{12}\]
where \(\sigma>0\) is the characteristic Gaussian width, which has the units of time. We will use \(\sigma\) to make all quantities unitless (such as \(\Omega\sigma\)).
### Correlation measure
Let us introduce two measures for correlation: concurrence \(\mathcal{C}_{\rm AB}\) and quantum mutual information \(I_{\rm AB}\).
Concurrence is a measure of entanglement [57; 58]. Let \(\rho_{\rm AB}\) be the density matrix of a two-qubit system. We first define a matrix \(\tilde{\rho}_{\rm AB}\) as
\[\tilde{\rho}_{\rm AB}\coloneqq(\hat{\sigma}_{y}\otimes\hat{\sigma}_{y})\rho_ {\rm AB}^{*}(\hat{\sigma}_{y}\otimes\hat{\sigma}_{y})\,, \tag{13}\]
where \(\hat{\sigma}_{y}\) is the Pauli-\(y\) operator and \(\rho_{\rm AB}^{*}\) is the complex conjugate of \(\rho_{\rm AB}\). Then by denoting \(w_{i}\in\mathbb{R}\), (\(i=1,2,3,4\)) as eigenvalues of a Hermitian operator \(\sqrt{\sqrt{\rho_{\rm AB}}\hat{\rho}_{\rm AB}\sqrt{\rho_{\rm AB}}}\), the concurrence is defined as follows.
\[\mathcal{C}_{\rm AB}\coloneqq\max\{0, \,w_{1}-w_{2}-w_{3}-w_{4}\}\,, \tag{14}\] \[(w_{1}\geq w_{2}\geq w_{3}\geq w_{4})\,.\]
The concurrence is zero if and only if the state \(\rho_{\rm AB}\) is separable. In the case of our density matrix (9), the concurrence is known to be
\[\mathcal{C}_{\rm AB}=2\max\{0,\,|\mathcal{M}|-\sqrt{\mathcal{L}_{\rm AA} \mathcal{L}_{\rm BB}}\}+\mathcal{O}(\lambda^{4})\,. \tag{15}\]
Quantum mutual information [59], on the other hand, quantifies the amount of total correlation, both classical and quantum. Quantum mutual information \(I_{\rm AB}\) between two qubits A and B up to second order in \(\lambda\) is [7]
\[I_{\rm AB} =\mathcal{L}_{+}\ln\mathcal{L}_{+}+\mathcal{L}_{-}\ln\mathcal{L}_{-}\] \[\qquad-\mathcal{L}_{\rm AA}\ln\mathcal{L}_{\rm AA}-\mathcal{L}_{ \rm BB}\ln\mathcal{L}_{\rm BB}+\mathcal{O}(\lambda^{4})\,, \tag{16}\]
where
\[\mathcal{L}_{\pm}\coloneqq\frac{1}{2}\left(\mathcal{L}_{\rm AA}+ \mathcal{L}_{\rm BB}\pm\sqrt{(\mathcal{L}_{\rm AA}-\mathcal{L}_{\rm BB})^{2}+4 |\mathcal{L}_{\rm AB}|^{2}}\right). \tag{17}\]
Note that, while concurrence (15) vanishes when the "noise term" \(\sqrt{\mathcal{L}_{\rm AA}\mathcal{L}_{\rm BB}}\) exceeds the nonlocal element \(|\mathcal{M}|\), the mutual information becomes zero when \(|\mathcal{L}_{\rm AB}|=0\). In addition, if \(\mathcal{C}_{\rm AB}=0\) but the mutual information is nonvanishing, then the extracted correlation by the detectors is either classical correlation or nondistillable entanglement.
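Since the perturbative measures (15)-(17) depend only on the matrix elements of (9), they are straightforward to evaluate numerically. A minimal sketch (the function names and sample values are illustrative, not from the paper):

```python
import numpy as np

def concurrence(L_AA, L_BB, M):
    """Perturbative concurrence of the X-type state (9), eq. (15)."""
    return 2.0 * max(0.0, abs(M) - np.sqrt(L_AA * L_BB))

def mutual_information(L_AA, L_BB, L_AB):
    """Quantum mutual information to O(lambda^2), eqs. (16)-(17)."""
    disc = np.sqrt((L_AA - L_BB) ** 2 + 4.0 * abs(L_AB) ** 2)
    L_plus = 0.5 * (L_AA + L_BB + disc)
    L_minus = 0.5 * (L_AA + L_BB - disc)
    xlogx = lambda x: x * np.log(x) if x > 0.0 else 0.0
    return xlogx(L_plus) + xlogx(L_minus) - xlogx(L_AA) - xlogx(L_BB)

# example: identical detectors, nonlocal term exceeding the noise term
print(concurrence(0.010, 0.010, 0.012 + 0.003j))    # > 0: entangled
print(mutual_information(0.010, 0.010, 0.004))      # > 0: correlated
```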
## III Uniform acceleration trajectories
### Single detector trajectory classification
The most well-known trajectory for a uniformly accelerating (i.e., \(a=const.\)) pointlike particle is linearly accelerated motion. However, Letaw pointed out that there are, in fact, five classes of uniformly accelerated trajectories, excluding the case where \(a=0\). Along with the linear case, the other classes are circular, catenary, cusped, and helix [13]. Consider a trajectory in \((3+1)\)-dimensional Minkowski spacetime. Such a trajectory can be characterized by three geometric invariants: the curvature \(a(\tau)\), which represents the magnitude of proper acceleration, the first torsion \(b(\tau)\), and the second torsion (also known as hypertorsion) \(\nu(\tau)\) of the worldline. The torsions \(b(\tau)\) and \(\nu(\tau)\) correspond to the proper angular velocities in a given tetrad frame [13]. Assuming that these invariants are constants, the trajectory becomes stationary. In a nutshell, these motions are characterized by the following:
1. linear: \(a\neq 0,b=\nu=0\)
2. catenary: \(a>b\), \(\nu=0\)
3. cusped: \(a=b,\nu=0\)
4. circular: \(a<b,\nu=0\)
5. helix: \(\nu\neq 0\)
In this subsection, we review these trajectories and consider the corresponding vacuum Wightman functions. We will suppress the UV cutoff \(\epsilon\) for readability.
#### ii.1.1 Linear motion
The linear acceleration motion of a detector is defined solely by the constant acceleration \(a\), with all other parameters set to zero. The trajectory reads
\[\mathsf{x}(\tau)=\left(\frac{1}{a}\sinh(a\tau),\frac{1}{a}\cosh(a\tau),0,0 \right)\,, \tag{18}\]
and the Wightman function along this trajectory is given by
\[W_{\rm lin}(\Delta\tau)=-\frac{1}{4\pi^{2}}\frac{1}{\frac{4}{a^{2}}\sinh^{2} \left(\frac{a\Delta\tau}{2}\right)}\,, \tag{19}\]
where \(\Delta\tau\coloneqq\tau-\tau^{\prime}\).
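For completeness, eq. (19) follows by inserting (18) into (11) and using the hyperbolic difference formulas:

\[(\Delta t)^{2}-(\Delta x)^{2}=\frac{1}{a^{2}}\Big{[}\big{(}\sinh(a\tau)-\sinh(a\tau^{\prime})\big{)}^{2}-\big{(}\cosh(a\tau)-\cosh(a\tau^{\prime})\big{)}^{2}\Big{]}=\frac{2}{a^{2}}\big{(}\cosh(a\Delta\tau)-1\big{)}=\frac{4}{a^{2}}\sinh^{2}\left(\frac{a\Delta\tau}{2}\right)\,.\]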
#### ii.1.2 Circular motion
The circular trajectory is defined by \(a\) and \(b\) satisfying \(a<b\). Let us begin with a commonly used trajectory
\[\mathsf{x}(\tau)=(\gamma\tau,R\cos(\omega\gamma\tau),R\sin(\omega\gamma\tau), 0)\, \tag{20}\]
where \(R,\omega\), and \(\gamma\) are the radius of the circular motion, angular velocity, and the Lorentz factor defined as \(\gamma\coloneqq 1/\sqrt{1-v^{2}}\). Here, \(v\coloneqq R\omega(\leq 1)\) is the speed of the detector. Introducing the acceleration of the detector \(a=R\omega^{2}\gamma^{2}\), these parameters can be related by
\[\omega =\sqrt{\frac{a}{(1+aR)R}}\,, \tag{21a}\] \[\gamma =\sqrt{1+aR}\,,\] (21b) \[v =\sqrt{\frac{aR}{1+aR}}\,. \tag{21c}\]
In terms of the acceleration \(a\) and the torsion \(b\), we can further express \(\omega\) and \(v\) as
\[\omega=b(1-a^{2}/b^{2})\qquad v=a/b\]
respectively. The Wightman function is then
\[W_{\rm cir}(\Delta\tau)=-\frac{1}{4\pi^{2}}\frac{1}{\gamma^{2}\Delta\tau^{2}- 4R^{2}\sin^{2}(\omega\gamma\Delta\tau/2)}\,. \tag{22}\]
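The relations (21) can be inverted to construct the orbit from the invariants \((a,b)\). A small Python sketch (the function name is ours) that cross-checks the expressions above:

```python
import numpy as np

def circular_orbit(a, b):
    """Orbit data (R, omega, gamma, v) for circular motion with invariants a < b."""
    assert 0.0 < a < b
    v = a / b                           # speed, v = a/b
    omega = b * (1.0 - (a / b) ** 2)    # angular velocity, omega = b(1 - a^2/b^2)
    R = v / omega                       # radius, since v = R * omega
    gamma = 1.0 / np.sqrt(1.0 - v ** 2)
    # consistency checks against eq. (21)
    assert np.isclose(a, R * omega ** 2 * gamma ** 2)
    assert np.isclose(gamma, np.sqrt(1.0 + a * R))
    assert np.isclose(v, np.sqrt(a * R / (1.0 + a * R)))
    return R, omega, gamma, v

print(circular_orbit(1.0, 2.0))   # the bbar = b/a = 2 case used below
```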
#### ii.1.3 Cusped motion
Cusped motion is described by the acceleration and torsion with \(a=b\). The trajectory reads
\[\mathsf{x}(\tau)=\left(\tau+\frac{1}{6}a^{2}\tau^{3},\,\frac{1}{2}a\tau^{2}, \,\frac{1}{6}a^{2}\tau^{3},\,0\right)\,, \tag{23}\]
and the corresponding Wightman function is
\[W_{\rm cus}(\Delta\tau)=-\frac{1}{4\pi^{2}}\frac{1}{\Delta\tau^{2}+\frac{a^{2 }}{12}\Delta\tau^{4}}\,. \tag{24}\]
#### iii.1.4 Catenary motion
Catenary motion can be characterized by \(a\) and \(b\) with \(a>b\). The trajectory is given by
\[\mathsf{x}(\tau)=\left(\frac{a}{a^{2}-b^{2}}\sinh\big{(}\sqrt{a^{2}- b^{2}}\,\tau\big{)},\right.\\ \left.\frac{a}{a^{2}-b^{2}}\cosh\big{(}\sqrt{a^{2}-b^{2}}\,\tau \big{)},\frac{b\tau}{\sqrt{a^{2}-b^{2}}},0\right), \tag{25}\]
and the Wightman function reads
\[W_{\rm cat}(\Delta\tau)=-\frac{1}{4\pi^{2}}\,\frac{1}{\frac{4a^{2}}{(a^{2}-b^{2})^{2}}\sinh^{2}\left(\frac{\sqrt{a^{2}-b^{2}}\,\Delta\tau}{2}\right)-\frac{b^{2}\Delta\tau^{2}}{a^{2}-b^{2}}}\,. \tag{26}\]
We immediately see that catenary motion reduces to linear motion as \(b\to 0\). Catenary motion also reduces to cusped motion as \(b\to a\) after a coordinate transformation consisting of a Lorentz boost and a translation [18].
#### iii.1.5 Helix motion
Finally, helix motion is a combination of circular and linear acceleration motions characterized by three parameters, \(a,b\), and \(\nu\):
\[\mathsf{x}(\tau)=\left(\frac{\mathcal{P}}{\Gamma_{+}}\sinh(\Gamma_{+}\tau), \frac{\mathcal{P}}{\Gamma_{+}}\cosh(\Gamma_{+}\tau),\right.\\ \left.\frac{\mathcal{Q}}{\Gamma_{-}}\cos(\Gamma_{-}\tau),\frac{ \mathcal{Q}}{\Gamma_{-}}\sin(\Gamma_{-}\tau)\right), \tag{27}\]
where \(\mathcal{P}\coloneqq\Xi/\Gamma,\;\mathcal{Q}\coloneqq ab/\Xi\Gamma\), and
\[\Xi^{2} \coloneqq\frac{1}{2}(\Gamma^{2}+a^{2}+b^{2}+\nu^{2})\,, \tag{28a}\] \[\Gamma^{2} \coloneqq\Gamma_{+}^{2}+\Gamma_{-}^{2}\,,\quad\Gamma_{\pm}^{2} \coloneqq\sqrt{A^{2}+B^{2}}\pm A\,,\] (28b) \[A \coloneqq\frac{1}{2}(a^{2}-b^{2}-\nu^{2})\,,\quad B\coloneqq a \nu\,. \tag{28c}\]
The Wightman function reads
\[W_{\rm hel}(\Delta\tau)=\\ -\frac{1}{4\pi^{2}}\frac{1}{\frac{4\mathcal{P}^{2}}{\Gamma_{+}^{2 }}\sinh^{2}\left(\frac{\Gamma_{+}\Delta\tau}{2}\right)-\frac{4\mathcal{Q}^{2} }{\Gamma_{-}^{2}}\sin^{2}\left(\frac{\Gamma_{-}\Delta\tau}{2}\right)}\,. \tag{29}\]
Note that the trajectory and the corresponding Wightman function reduce to the aforementioned trajectories when \(\nu\to 0\). In this sense, the helix is the general motion that contains other motions.
#### iii.1.6 Wightman function at \(\nu=0\)
We now turn our attention to the special case where \(\nu=0\). Although the Wightman functions for linear, circular, catenary, and cusped motions may initially appear to take different forms, they can actually be expressed in a unified manner. Let \(\bar{b}\equiv b/a\) with the condition that \(a\neq 0\). The Wightman functions for all trajectories with \(\nu=0\) can be written in the following compact form:
\[W_{\nu=0}(\Delta\tau)=-\frac{1}{4\pi^{2}}\frac{1}{-\frac{\bar{b}^{2}}{1-\bar{ b}^{2}}\Delta\tau^{2}+\frac{4}{(1-\bar{b}^{2})^{2}a^{2}}\sinh^{2}\left(\frac{ \sqrt{1-\bar{b}^{2}}a\Delta\tau}{2}\right)}\,. \tag{30}\]
The parameter \(\bar{b}\) serves to specify the particular trajectory, as illustrated in figure 1: linear (\(\bar{b}=0\)), catenary (\(0<\bar{b}<1\)), cusped (\(\bar{b}=1\)), and circular (\(\bar{b}>1\)). For circular motion, we employ the identity \(\sin(\mathrm{i}x)=\mathrm{i}\sinh(x)\). Note that one obtains the Wightman function for the cusped motion, as given in (24), by performing a series expansion around \(\bar{b}=1\).
The corresponding transition probability \(\mathcal{L}_{jj}\), \(j\in\{\mathrm{A},\mathrm{B}\}\), in (9) reads
\[\mathcal{L}_{jj}=\lambda^{2}\sigma\sqrt{\pi}\int_{\mathbb{R}}\mathrm{d}u\,e^{- u^{2}/4\sigma^{2}}e^{-\mathrm{i}\Omega u}W_{\nu=0}(u)\,. \tag{31}\]
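A rough numerical sketch of eq. (31) (ours, not from the paper): the Wightman function (30) is evaluated with the shift \(u\to u-\mathrm{i}\epsilon\) and integrated by adaptive quadrature. Convergence in \(\epsilon\) and in the integration window should be checked, and \(\bar{b}=1\) requires the cusped form (24) instead of (30):

```python
import numpy as np
from scipy.integrate import quad

def W_nu0(u, a, bbar, eps):
    """Pulled-back vacuum Wightman function, eq. (30), with u -> u - i*eps."""
    uc = u - 1j * eps
    k2 = 1.0 - bbar ** 2             # negative for circular motion (bbar > 1)
    k = np.sqrt(k2 + 0j)             # imaginary k turns sinh into sin
    denom = (-(bbar ** 2 / k2) * uc ** 2
             + (4.0 / (k2 ** 2 * a ** 2)) * np.sinh(k * a * uc / 2.0) ** 2)
    return -1.0 / (4.0 * np.pi ** 2) / denom

def transition_probability(a, bbar, Omega, sigma=1.0, lam=1.0, eps=1e-3):
    """Eq. (31); only the real part is integrated (the imaginary part is odd)."""
    f = lambda u: np.real(np.exp(-u ** 2 / (4 * sigma ** 2) - 1j * Omega * u)
                          * W_nu0(u, a, bbar, eps))
    val, _ = quad(f, -15.0 * sigma, 15.0 * sigma, points=[0.0], limit=500)
    return lam ** 2 * sigma * np.sqrt(np.pi) * val

print(transition_probability(a=1.0, bbar=0.0, Omega=2.0))   # linear motion
```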
### Two detectors in uniform acceleration
We now consider two UDW detectors A and B, both undergoing uniform acceleration motion. In particular, we categorize the detector configurations into two classes: stationary (time-translation invariant) and nonstationary scenarios.
#### iii.2.1 Stationary scenario
Consider two detectors undergoing the same uniform acceleration (e.g., both linearly accelerated). The Wightman function can be made time-translation invariant, meaning it depends only on the time difference \(\Delta\tau\coloneqq\tau_{\mathrm{A}}-\tau_{\mathrm{B}}\), by imposing that the angle between the velocity
vector of a detector and the spatial displacement vector from one detector to the other is time-independent. For example, two linearly accelerating detectors along the trajectories
\[\mathsf{x}_{\mathrm{A}}(\tau_{\mathrm{A}}) =\left(\frac{1}{a}\sinh(a\tau_{\mathrm{A}}),\frac{1}{a}\cosh(a\tau _{\mathrm{A}}),0,0\right)\,, \tag{32a}\] \[\mathsf{x}_{\mathrm{B}}(\tau_{\mathrm{B}}) =\left(\frac{1}{a}\sinh(a\tau_{\mathrm{B}}),\frac{1}{a}\cosh(a \tau_{\mathrm{B}}),L,0\right) \tag{32b}\]
give the following stationary Wightman function:
\[W_{\mathrm{lin}}(\tau_{\mathrm{A}},\tau_{\mathrm{B}})=-\frac{1}{4\pi^{2}}\frac {1}{\frac{4}{a^{2}}\sinh^{2}\left(\frac{a\Delta\tau}{2}\right)-L^{2}} \tag{33}\]
where \(L\coloneqq|\mathbf{x}_{\mathrm{A}}-\mathbf{x}_{\mathrm{B}}|\) is the spatial separation between the two detectors. As depicted in figure 2 top-left, each velocity vector of the detector is always perpendicular to the displacement vector \(\mathbf{x}_{\mathrm{AB}}\coloneqq\mathbf{x}_{\mathrm{A}}-\mathbf{x}_{\mathrm{B}}\) throughout the interaction.

Figure 2: Stationary and nonstationary configurations for four classes of uniformly accelerating detectors. Red and blue strips represent detectors A and B, respectively.
We can also construct stationary Wightman functions for other motions:
**circular**: \[\mathrm{x_{A}} =(\gamma\tau_{\mathrm{A}},R\cos(\omega\gamma\tau_{\mathrm{A}}),R \sin(\omega\gamma\tau_{\mathrm{A}}),0)\,\] (34a) \[\mathrm{x_{B}} =(\gamma\tau_{\mathrm{B}},R\cos(\omega\gamma\tau_{\mathrm{B}}),R \sin(\omega\gamma\tau_{\mathrm{B}}),L)\.\] (34b)
**cusped**: \[\mathrm{x_{A}} =\left(\tau_{\mathrm{A}}+\frac{1}{6}a^{2}\tau_{\mathrm{A}}^{3}, \,\frac{1}{2}a\tau_{\mathrm{A}}^{2},\,\frac{1}{6}a^{2}\tau_{\mathrm{A}}^{3},\, 0\right)\,,\] (35a) \[\mathrm{x_{B}} =\left(\tau_{\mathrm{B}}+\frac{1}{6}a^{2}\tau_{\mathrm{B}}^{3}, \,\frac{1}{2}a\tau_{\mathrm{B}}^{2},\,\frac{1}{6}a^{2}\tau_{\mathrm{B}}^{3},\, L\right)\,,\] (35b)
**catenary**: \[\mathrm{x_{A}} =\left(\frac{a}{a^{2}-b^{2}}\sinh\left(\sqrt{a^{2}-b^{2}}\,\tau_ {\mathrm{A}}\right)\!,\right.\] \[\left.\frac{a}{a^{2}-b^{2}}\cosh\left(\sqrt{a^{2}-b^{2}}\,\tau_ {\mathrm{A}}\right)\!,\frac{b\tau_{\mathrm{A}}}{\sqrt{a^{2}-b^{2}}},0\right),\] (36a) \[\mathrm{x_{B}} =\left(\frac{a}{a^{2}-b^{2}}\sinh\left(\sqrt{a^{2}-b^{2}}\,\tau_ {\mathrm{B}}\right)\!,\right.\] \[\left.\frac{a}{a^{2}-b^{2}}\cosh\left(\sqrt{a^{2}-b^{2}}\,\tau_ {\mathrm{B}}\right)\!,\frac{b\tau_{\mathrm{B}}}{\sqrt{a^{2}-b^{2}}},L\right),\] (36b)
As for a single detector, the Wightman functions along the trajectories given above take the following compact form:
\[W_{\mathrm{s}}(\tau_{\mathrm{A}},\tau_{\mathrm{B}})\equiv W_{\mathrm{s}}( \Delta\tau)=-\frac{1}{4\pi^{2}}\frac{1}{-\frac{\bar{b}^{2}}{1-\bar{b}^{2}} \Delta\tau^{2}+\frac{4}{(1-\bar{b}^{2})^{2}a^{2}}\sinh^{2}\left(\frac{\sqrt{1 -\bar{b}^{2}}a\Delta\tau}{2}\right)-L^{2}}\,, \tag{37}\]
where \(\Delta\tau\coloneqq\tau_{\mathrm{A}}-\tau_{\mathrm{B}},\bar{b}\equiv b/a\), and the subscript's' stands for stationary. Since the Wightman function depends only on \(\Delta\tau\), the elements in the density matrix (9), \(\mathcal{M}\) and \(\mathcal{L}_{\mathrm{AB}}\), can be simplified to single integrals when the Gaussian switching function (12) is used:
\[\mathcal{M} =-2\lambda^{2}\sigma\sqrt{\pi}e^{-\Omega^{2}\sigma^{2}}\int_{0}^{ \infty}\mathrm{d}u\,e^{-u^{2}/4\sigma^{2}}W_{\mathrm{s}}(u)\,, \tag{38a}\] \[\mathcal{L}_{\mathrm{AB}} =\lambda^{2}\sigma\sqrt{\pi}\int_{\mathbb{R}}\mathrm{d}u\,e^{-u^ {2}/4\sigma^{2}}e^{-\mathrm{i}\Omega u}W_{\mathrm{s}}(u)\,. \tag{38b}\]
Here, we used the fact that the Heaviside step function in (10b) can be written as \(\Theta(t(\tau_{\mathrm{A}})-t(\tau_{\mathrm{B}}))=\Theta(\tau_{\mathrm{A}}- \tau_{\mathrm{B}})\) for any of the uniform acceleration scenarios mentioned earlier.
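The single-integral forms (38) can be evaluated in the same way; note that setting \(L=0\) in \(W_{\mathrm{s}}\) recovers the single-detector Wightman function (30). A sketch with our function names (the light-cone crossing of \(W_{\mathrm{s}}\) produces a sharp peak on the real axis, so tolerances or breakpoints may need tuning):

```python
import numpy as np
from scipy.integrate import quad

def W_s(u, a, bbar, L, eps):
    """Stationary two-detector Wightman function, eq. (37), with u -> u - i*eps."""
    uc = u - 1j * eps
    k2 = 1.0 - bbar ** 2
    k = np.sqrt(k2 + 0j)
    denom = (-(bbar ** 2 / k2) * uc ** 2
             + (4.0 / (k2 ** 2 * a ** 2)) * np.sinh(k * a * uc / 2.0) ** 2
             - L ** 2)
    return -1.0 / (4.0 * np.pi ** 2) / denom

def cquad(f, lo, hi):
    """Complex-valued adaptive quadrature."""
    re, _ = quad(lambda u: np.real(f(u)), lo, hi, limit=500)
    im, _ = quad(lambda u: np.imag(f(u)), lo, hi, limit=500)
    return re + 1j * im

def M_and_LAB(a, bbar, L, Omega, sigma=1.0, lam=1.0, eps=1e-3):
    """Eqs. (38a) and (38b) by direct quadrature."""
    pref = lam ** 2 * sigma * np.sqrt(np.pi)
    g = lambda u: np.exp(-u ** 2 / (4 * sigma ** 2))
    M = -2.0 * pref * np.exp(-Omega ** 2 * sigma ** 2) * cquad(
        lambda u: g(u) * W_s(u, a, bbar, L, eps), 0.0, 15.0 * sigma)
    LAB = pref * cquad(
        lambda u: g(u) * np.exp(-1j * Omega * u) * W_s(u, a, bbar, L, eps),
        -15.0 * sigma, 15.0 * sigma)
    return M, LAB
```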
We note that all stationary configurations can only be realized in \((3+1)\) dimensions, with the exception of the linear configuration.
#### iii.2.2 Nonstationary scenario
One can also consider configurations similar to those in section III.2.1, where the Wightman function depends not only on \(\Delta\tau\) but also on \(\Delta_{+}\tau\coloneqq\tau_{\mathrm{A}}+\tau_{\mathrm{B}}\). In this case, the Wightman function is no longer time-translation invariant (hence, nonstationary).
In particular, consider two linearly accelerating UDW detectors whose trajectories are given by
\[\mathrm{x_{A}}(\tau_{\mathrm{A}}) =\left(\frac{1}{a}\sinh(a\tau_{\mathrm{A}}),\frac{1}{a}\cosh(a \tau_{\mathrm{A}})+L,0,0\right)\,, \tag{39a}\] \[\mathrm{x_{B}}(\tau_{\mathrm{B}}) =\left(\frac{1}{a}\sinh(a\tau_{\mathrm{B}}),\frac{1}{a}\cosh(a \tau_{\mathrm{B}}),0,0\right)\,. \tag{39b}\]
The correlation harvesting protocol along these trajectories was examined in [26; 28]. The corresponding Wightman function reads
\[W_{\mathrm{lin}}(\tau_{\mathrm{A}},\tau_{\mathrm{B}})=-\frac{1}{4\pi^{2}}\, \frac{1}{\frac{4}{a^{2}}\sinh^{2}\left(\frac{a\Delta\tau}{2}\right)-L^{2}- \frac{4L}{a}\sinh\left(\frac{a\Delta\tau}{2}\right)\sinh\left(\frac{a\Delta_{ +}\tau}{2}\right)}\,. \tag{40}\]
The term \(\Delta_{+}\tau\) comes from the fact that the angle between the velocity vector and the displacement vector is
time-dependent (\(0^{\circ}\) or \(180^{\circ}\)).
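Explicitly, the cross term in (40) originates from the product formula for hyperbolic cosines applied to the \(x\)-components of (39),

\[\cosh(a\tau_{\mathrm{A}})-\cosh(a\tau_{\mathrm{B}})=2\sinh\left(\frac{a\Delta_{+}\tau}{2}\right)\sinh\left(\frac{a\Delta\tau}{2}\right)\,,\]

so the separation \(L\) couples \(\Delta\tau\) to \(\Delta_{+}\tau\) and the time-translation invariance of (33) is lost.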
Similarly, other uniformly accelerating trajectories that yield a nonstationary Wightman function are
**circular**:
\[\mathrm{x}_{\mathrm{A}} =(\gamma\tau_{\mathrm{A}},R\cos(\omega\gamma\tau_{\mathrm{A}})+L,R \sin(\omega\gamma\tau_{\mathrm{A}}),0)\, \tag{41a}\] \[\mathrm{x}_{\mathrm{B}} =(\gamma\tau_{\mathrm{B}},R\cos(\omega\gamma\tau_{\mathrm{B}}),R \sin(\omega\gamma\tau_{\mathrm{B}}),0). \tag{41b}\]
**cusped**:
\[\mathrm{x}_{\mathrm{A}} =\left(\tau_{\mathrm{A}}+\frac{1}{6}a^{2}\tau_{\mathrm{A}}^{3}, \,\frac{1}{2}a\tau_{\mathrm{A}}^{2}+L,\,\frac{1}{6}a^{2}\tau_{\mathrm{A}}^{3}, \,0\right)\,, \tag{42a}\] \[\mathrm{x}_{\mathrm{B}} =\left(\tau_{\mathrm{B}}+\frac{1}{6}a^{2}\tau_{\mathrm{B}}^{3}, \,\frac{1}{2}a\tau_{\mathrm{B}}^{2},\,\frac{1}{6}a^{2}\tau_{\mathrm{B}}^{3}, \,0\right)\,, \tag{42b}\]
**catenary**:
\[\mathrm{x}_{\mathrm{A}}=\left(\frac{a}{a^{2}-b^{2}}\sinh\left(\sqrt{a^{2}-b^{2}}\,\tau_{\mathrm{A}}\right),\,\frac{a}{a^{2}-b^{2}}\cosh\left(\sqrt{a^{2}-b^{2}}\,\tau_{\mathrm{A}}\right)+L,\,\frac{b\tau_{\mathrm{A}}}{\sqrt{a^{2}-b^{2}}},\,0\right), \tag{43a}\] \[\mathrm{x}_{\mathrm{B}}=\left(\frac{a}{a^{2}-b^{2}}\sinh\left(\sqrt{a^{2}-b^{2}}\,\tau_{\mathrm{B}}\right),\,\frac{a}{a^{2}-b^{2}}\cosh\left(\sqrt{a^{2}-b^{2}}\,\tau_{\mathrm{B}}\right),\,\frac{b\tau_{\mathrm{B}}}{\sqrt{a^{2}-b^{2}}},\,0\right), \tag{43b}\]
The Wightman function for these nonstationary motions can be compactly expressed as
\[W_{\mathrm{ns}}(\tau_{\mathrm{A}},\tau_{\mathrm{B}})=-\frac{1}{4\pi^{2}}\,\frac{1}{-\frac{\bar{b}^{2}}{1-\bar{b}^{2}}\Delta\tau^{2}+\frac{4}{(1-\bar{b}^{2})^{2}a^{2}}\sinh^{2}\left(\frac{\sqrt{1-\bar{b}^{2}}\,a\Delta\tau}{2}\right)-L^{2}-\frac{4L}{(1-\bar{b}^{2})a}\sinh\left(\frac{\sqrt{1-\bar{b}^{2}}\,a\Delta\tau}{2}\right)\sinh\left(\frac{\sqrt{1-\bar{b}^{2}}\,a\Delta_{+}\tau}{2}\right)}\,, \tag{44}\]
and possesses an additional term in the denominator compared to the stationary Wightman function (37). Here, the subscript 'ns' designates nonstationary. Since the remaining terms coincide with those of (37), the correlations harvested by nonstationary detectors will behave similarly to those of stationary detectors, up to deviations caused by this additional term.
Note that the presence of \(\Delta_{+}\tau\) prevents us from reducing the double integrals in (10) into single integrals. Furthermore, all nonstationary configurations can be realized in \((2+1)\) dimensions, except for the helix case, which we are not considering.
## IV Numerical results
Here, we numerically compute the concurrence (15) and quantum mutual information (16) harvested by two uniformly accelerating detectors by inserting the Wightman functions (30), (37) and (44) into \(\mathcal{L}_{ij}\) and \(\mathcal{M}\) given in (10). For stationary detectors in III.2.1, we utilize the expressions given by (38).
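For the nonstationary configurations the double integrals in (10) must be carried out directly. A crude grid-based sketch of ours for \(\mathcal{L}_{\mathrm{AB}}\) with the Wightman function (44), using a plain Riemann sum; \(\epsilon\), the grid spacing, and the integration window all require convergence checks:

```python
import numpy as np

def W_ns(tauA, tauB, a, bbar, L, eps=1e-2):
    """Nonstationary Wightman function, eq. (44), regulated on Delta tau."""
    dm = (tauA - tauB) - 1j * eps
    dp = tauA + tauB
    k2 = 1.0 - bbar ** 2
    k = np.sqrt(k2 + 0j)
    s = lambda x: np.sinh(k * a * x / 2.0)
    denom = (-(bbar ** 2 / k2) * dm ** 2
             + (4.0 / (k2 ** 2 * a ** 2)) * s(dm) ** 2
             - L ** 2 - (4.0 * L / (k2 * a)) * s(dm) * s(dp))
    return -1.0 / (4.0 * np.pi ** 2) / denom

def L_AB_ns(a, bbar, L, Omega, sigma=1.0, lam=1.0, N=1201, span=8.0):
    """Eq. (10a) with i = A, j = B for Gaussian switching (Riemann sum)."""
    t = np.linspace(-span * sigma, span * sigma, N)
    dt = t[1] - t[0]
    TA, TB = np.meshgrid(t, t, indexing="ij")
    integrand = (np.exp(-(TA ** 2 + TB ** 2) / (2.0 * sigma ** 2))
                 * np.exp(-1j * Omega * (TA - TB))
                 * W_ns(TA, TB, a, bbar, L))
    return lam ** 2 * integrand.sum() * dt * dt
```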
### Transition probability of uniformly accelerating detectors
Let us begin by considering the transition probability \(\mathcal{L}_{jj}\) for a uniformly accelerating detector. We are particularly interested in the cases of linear (\(\bar{b}=0\)), catenary (\(0<\bar{b}<1\)), cusped (\(\bar{b}=1\)), and circular (\(\bar{b}>1\)) motions, and their respective transition probabilities are given by (31). We consider \(\mathcal{L}_{jj}/\lambda^{2}\) and write the parameters in units of \(\sigma\), which makes the transition probability a function of three variables: \(a\sigma,\bar{b}\), and \(\Omega\sigma\). It is important to note that \(\mathcal{L}_{\mathrm{AA}}=\mathcal{L}_{\mathrm{BB}}\), as we are assuming both detectors are identical.
Figure 3 depicts the transition probability \(\mathcal{L}_{jj}/\lambda^{2}\) as a function of the magnitude of acceleration \(a\sigma\) for fixed \(\Omega\) (panel (a)) and \(\log_{10}\bar{b}\) for different values of the acceleration (panels (b), (c)). In figure 3(a), the transition probabilities for a detector with \(\Omega\sigma=2\) in linear (\(\bar{b}=0\)), catenary (\(\bar{b}=0.5\)), cusped (\(\bar{b}=1\)), and circular (\(\bar{b}=2\)) motions are shown. We find that in all these cases, \(\mathcal{L}_{jj}/\lambda^{2}\) increases with the acceleration \(a\sigma\).1
Footnote 1: \(\mathcal{L}_{jj}/\lambda^{2}\) is not guaranteed to _monotonically_ increase with \(a\sigma\) for a finite interaction duration. For a detector in the linearly accelerated motion, such phenomenon is known as the (weak) anti-Unruh effect [60; 61].
However, the relationship between the transition probabilities of detectors in different uniform motions is highly nontrivial. For instance, when a detector has \(\Omega\sigma=2\) and \(a\sigma\lesssim 5\), as depicted in figure 3(a), a detector in circular motion with \(\bar{b}=2\) shows the largest value of \(\mathcal{L}_{jj}/\lambda^{2}\), whereas a detector in linear motion (\(\bar{b}=0\)) shows the smallest. This relation, however, flips for \(a\sigma\gtrsim 5\). We will numerically demonstrate that such relationships depend on the interplay between \(a\sigma\) and \(\Omega\sigma\).
In figure 3(b), the magnitude of the acceleration is fixed at \(a\sigma=1\), and \(\mathcal{L}_{jj}/\lambda^{2}\) is plotted as a function of \(\log_{10}\bar{b}\). Each curve in this figure corresponds to a
different value of \(\Omega\sigma\), with the curve for \(\Omega\sigma=2\) corresponding to figure 3(a) at \(a\sigma=1\). For each value of \(\Omega\sigma\) in 3(b), the transition probability has a peak for \(\log_{10}\bar{b}>0\) (i.e., \(\bar{b}>1\)), and then decreases with increasing \(\log_{10}\bar{b}\), becoming smaller than the value for the linear case (\(\log_{10}\bar{b}\rightarrow-\infty\)). This means that \(\mathcal{L}_{jj}/\lambda^{2}\) at \(a\sigma=1\) in figure 3(a) increases with \(\bar{b}\) until it reaches a maximum and then decreases. We note that the presence of the peak is contingent on larger values of \(\Omega\sigma\) relative to \(a\sigma\); in fact, the peak does not appear for smaller energy gaps, in which case the transition probability monotonically decreases with \(\bar{b}\), as shown in figure 3(b). This trend is further illustrated in figure 3(c), where \(a\sigma=6\) is chosen. In this scenario, the peak is nonexistent for \(\Omega\sigma=1\) and \(1.5\) (as well as for \(\Omega\sigma<1\)), but becomes manifest when \(\Omega\sigma\gtrsim 2\). Thus we infer that detectors with smaller energy gaps \(\Omega\sigma\) compared to \(a\sigma\) do not have a peak in \(\mathcal{L}_{jj}(\bar{b})/\lambda^{2}\).
The behavior of \(\mathcal{L}_{jj}/\lambda^{2}\) is related to the concept of the "effective temperature" perceived by a detector. For now, let us denote the transition probability as \(\mathcal{L}_{jj}(\Omega,\sigma)\). The effective temperature, \(T_{\text{eff}}\), is defined as
\[T_{\text{eff}}^{-1}\coloneqq\frac{1}{\Omega}\ln\frac{\mathcal{L}_{jj}(-\Omega,\sigma)/\lambda^{2}\sigma}{\mathcal{L}_{jj}(\Omega,\sigma)/\lambda^{2}\sigma} \tag{45}\]
where this formula is derived in the Appendix. We divide \(\mathcal{L}_{jj}(\Omega,\sigma)\) by \(\sigma\) so that it is well defined in the long interaction limit, \(\sigma\rightarrow\infty\)[61, 62]. Note that if the Wightman function obeys the Kubo-Martin-Schwinger (KMS) condition [63, 64], then the effective temperature converges to the KMS temperature (which is the temperature of the field formally defined in quantum field theory) in the limit \(\sigma\rightarrow\infty\). However, in the case of finite interaction duration, the effective temperature is an estimator for the actual field temperature. For a detector in a uniform acceleration motion, the effective temperature for each scenario has been examined in, e.g., [14, 15, 16, 17, 18, 19, 20].
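Given the transition probabilities at gaps \(\pm\Omega\) (e.g., from the quadrature sketch after eq. (31)), the estimator (45) is a one-liner; the common \(\lambda^{2}\sigma\) normalization cancels in the ratio:

```python
import math

def effective_temperature(Omega, P_deexcitation, P_excitation):
    """Eq. (45): T_eff from L_jj(-Omega, sigma) and L_jj(Omega, sigma)."""
    return Omega / math.log(P_deexcitation / P_excitation)

# e.g., with the earlier sketch:
# T = effective_temperature(2.0, transition_probability(1.0, 0.0, -2.0),
#                                transition_probability(1.0, 0.0, 2.0))
```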
We plot the effective temperature \(T_{\text{eff}}\) as a function of \(\log_{10}\tilde{b}\) when \(\sigma=1\) and \(a\sigma=1\) in figure 3(d), which corresponds to figure 3(b). We see that the locations of the peaks in \(T_{\text{eff}}\) align with those of \(\mathcal{L}_{jj}(\Omega)\) in 3(b). This suggests that, for a given acceleration and energy gap, a detector in circular motion within a certain range
of \(\log_{10}\bar{b}\) can register higher effective temperatures than those in other types of motion. However, as \(\bar{b}\to\infty\), which corresponds to the speed of the circular motion, \(v_{\rm circ}=\bar{b}^{-1}\), tending to zero, the effective temperature decreases.
### Concurrence and quantum mutual information between uniformly accelerating detectors
We now move on to the correlation harvesting protocol using two uniformly accelerating detectors, exploring both stationary and nonstationary configurations as described in section III.2.
We first examine the difference between the stationary and nonstationary configurations by plotting concurrence \(\mathcal{C}_{\rm AB}/\lambda^{2}\) and quantum mutual information \(I_{\rm AB}/\lambda^{2}\) as a function of \(\bar{b}\) in figure 4. In these plots, we fix \(\Omega\sigma=0.1\) and \(L/\sigma=1\), and consider \(a\sigma=1\) and \(a\sigma=2\). We notice two characteristics: (i) In the vicinity of \(\bar{b}\approx 0\), stationary detectors consistently harvest greater correlations than nonstationary detectors, for both concurrence and mutual information. (ii) As \(\bar{b}\) becomes larger, both plots begin to oscillate with \(\bar{b}\), and the curve representing correlations harvested by nonstationary detectors oscillates around the curve for the stationary case. The frequency of the oscillation increases as \(a\sigma\) grows.
These observations can be traced back to the form of the Wightman functions (37) and (44). Let us recall that the denominators of these expressions contain \(\sinh(x)\) when \(\bar{b}\in[0,1)\) and transform into \(\sin(x)\) when \(\bar{b}>1\). Therefore, within the range \(\bar{b}\in[0,1)\), the correlations are characterized by an exponential pattern, while for \(\bar{b}>1\), an oscillatory behavior emerges. These traits explain the observation above. In particular, the suppression of correlations near \(\bar{b}\approx 0\) for nonstationary detectors can be attributed to an additional term in the denominator of (44), which is absent in the stationary Wightman function (37). This extra term diminishes the amount of harvested correlations relative to the stationary scenario, and simultaneously gives rise to the oscillations noticed in the nonstationary case around the stationary one.
We next examine the acceleration dependence of concurrence \(\mathcal{C}_{\rm AB}\) and quantum mutual information \(I_{\rm AB}\) as
illustrated in figures 5(a) and (b), respectively. The stationary (figure 5(a-i) and (b-i)) and nonstationary (figure 5(a-ii) and (b-ii)) configurations are depicted, and all four uniformly accelerated motions, linear (\(\bar{b}=0\)), catenary (\(\bar{b}=0.5\)), cusped (\(\bar{b}=1\)), and circular (\(\bar{b}=2\)) are shown in each figure.
As we pointed out earlier, the correlations harvested by nonstationary detectors for \(\bar{b}\in[0,1)\) (figure 5(a-ii) and (b-ii)) decay with increasing \(a\sigma\) faster than those extracted by the stationary detectors (figure 5(a-i) and (b-i)). Meanwhile, the correlations extracted by nonstationary detectors in circular motion (\(\bar{b}>1\)) (figure 5(a-ii) and (b-ii)) exhibit oscillatory behavior around the corresponding stationary curves (figure 5(a-i) and (b-i)).
Another observation we make is that, for both stationary and nonstationary configurations and for any value of \(\bar{b}\), \(\mathcal{C}_{\rm AB}/\lambda^{2}\) becomes \(0\) at sufficiently high \(a\sigma\). This can be attributed to the high transition probability at large \(a\sigma\) as shown in figure 3(a), leading to \(|\mathcal{M}|<\sqrt{\mathcal{L}_{\rm AA}\mathcal{L}_{\rm BB}}\) in (15). Furthermore, the high accelerations prevent the detectors from extracting quantum mutual information, as \(I_{\rm AB}/\lambda^{2}\to 0\) at \(a\sigma\to\infty\) in figure 5(b). This indicates that no correlations can be harvested as \(a\sigma\to\infty\) if the detectors are uniformly accelerated. These findings are consistent with previous results [8; 25; 26; 27; 28; 47; 48], where linearly and circularly accelerated detectors are considered. Our paper extends these insights, providing a more general understanding that encompasses arbitrary uniformly accelerated motion.
### Genuine entanglement
We finally consider how much of entanglement is coming from the quantum field. It is known that the Wightman function can be decomposed into two parts: the anticommutator and the commutator of the field operator. The anticommutator part \(\left\langle\{\hat{\phi}(\mathsf{x}),\hat{\phi}(\mathsf{x}^{\prime})\}\right\rangle _{\rho_{\phi}}\) (also known as the Hadamard function), where \(\left\langle\cdot\right\rangle_{\rho_{\phi}}\) is the expectation value with respect to the field state \(\rho_{\phi}\), depends on the state of the field \(\rho_{\phi}\). Conversely, the commutator part \(\left\langle[\hat{\phi}(\mathsf{x}),\hat{\phi}(\mathsf{x}^{\prime})]\right\rangle _{\rho_{\phi}}=[\hat{\phi}(\mathsf{x}),\hat{\phi}(\mathsf{x}^{\prime})]\in \mathbb{C}\) (also known as the Pauli-Jordan function) is state-independent. This means
that even if the field state is not entangled, the commutator part in the Wightman function allows detectors to be entangled with each other. Such entanglement does not come from preexisting entanglement in the field; rather it is associated with communication between the detectors, and thus we cannot say (for an unentangled field state) that entanglement is 'extracted' from the field if the commutator part is the only contribution [54]. We say that entanglement is harvested if the anticommutator contribution in the element \(\mathcal{M}\) is nonzero, and in particular we qualify the harvested entanglement as being _genuine_ if the commutator part in \(\mathcal{M}\) is zero. Microcausality tells us that the two detectors can harvest genuine entanglement if they are causally disconnected. Here, we explore the circumstances under which two uniformly accelerating detectors can extract genuine entanglement from the field. Remarkably we find that this can be possible even if the detectors are in causal contact.
We begin by plotting the concurrence \(\mathcal{C}_{\mathrm{AB}}/\lambda^{2}\) as a function of the proper separation \(L/\sigma\) between the detectors and the energy gap \(\Omega\sigma\) in figure 6. The respective curves correspond to linear (\(\bar{b}=0\)), catenary (\(\bar{b}=0.5\)), cusped (\(\bar{b}=1\)), and circular motions (\(\bar{b}=2\)) in the stationary configurations depicted in figure 2. The left region of each curve represents the parameters \((L/\sigma,\Omega\sigma)\) that enable the detectors to become entangled, manifest as \(\mathcal{C}_{\mathrm{AB}}/\lambda^{2}>0\). Conversely, the right region corresponds to \(\mathcal{C}_{\mathrm{AB}}/\lambda^{2}=0\). Therefore, the stationary linear configuration (\(\bar{b}=0\)) has the broadest parameter space that leads to \(\mathcal{C}_{\mathrm{AB}}/\lambda^{2}>0\) compared to any other stationary configurations.
It has been shown [7] that two detectors at rest in Minkowski spacetime with a Gaussian switching function can be entangled with an arbitrary detector separation \(L/\sigma\) if the energy gap \(\Omega\sigma\) is large enough. However, we see that this is not the case for uniformly accelerating detectors - they can be entangled only when they are close to each other, no matter how large \(\Omega\sigma\) is.
We further ask how much entanglement stems from the anticommutator and commutator parts in the Wightman function. To see this, let us decompose the Wightman function as
\[W(\mathsf{x},\mathsf{x}^{\prime})=\mathrm{Re}[W(\mathsf{x},\mathsf{x}^{\prime })]+\mathrm{i}\,\mathrm{Im}[W(\mathsf{x},\mathsf{x}^{\prime})]\,, \tag{46}\]
where
\[2\mathrm{Re}[W(\mathsf{x},\mathsf{x}^{\prime})] =\langle 0|\{\hat{\phi}(\mathsf{x}),\hat{\phi}(\mathsf{x}^{ \prime})\}|0\rangle\, \tag{47a}\] \[2\mathrm{Im}[W(\mathsf{x},\mathsf{x}^{\prime})] =-\mathrm{i}\,[\hat{\phi}(\mathsf{x}),\hat{\phi}(\mathsf{x}^{ \prime})]\,. \tag{47b}\]
Then the matrix element \(\mathcal{M}\) can be decomposed into
\[\mathcal{M}=\mathcal{M}_{+}+\mathrm{i}\mathcal{M}_{-}\,, \tag{48}\]
where \(\mathcal{M}_{+}\) and \(\mathcal{M}_{-}\) are (10b) with the Wightman function being replaced by \(\mathrm{Re}[W(\mathsf{x},\mathsf{x}^{\prime})]\) and \(\mathrm{Im}[W(\mathsf{x},\mathsf{x}^{\prime})]\), respectively. \(\mathcal{M}_{+}\) contains the information about the genuine entanglement, whereas \(\mathcal{M}_{-}\) is state-independent and does not necessarily reflect preexisting entanglement in the field. For stationary detectors, these expressions can be simplified to single-integral forms:
\[\mathcal{M}_{+} =-\lambda^{2}\sigma\sqrt{\pi}e^{-\Omega^{2}\sigma^{2}}\int_{0}^{ \infty}\mathrm{d}u\,e^{-u^{2}/4\sigma^{2}}W_{\mathrm{s}}(u)\] \[\quad-\lambda^{2}\sigma\sqrt{\pi}e^{-\Omega^{2}\sigma^{2}}\int_{0 }^{\infty}\mathrm{d}u\,e^{-u^{2}/4\sigma^{2}}W_{\mathrm{s}}^{*}(u)\,, \tag{49a}\] \[\mathcal{M}_{-} =\mathrm{i}\lambda^{2}\sigma\sqrt{\pi}e^{-\Omega^{2}\sigma^{2}} \int_{0}^{\infty}\mathrm{d}u\,e^{-u^{2}/4\sigma^{2}}W_{\mathrm{s}}(u)\] \[\quad-\mathrm{i}\lambda^{2}\sigma\sqrt{\pi}e^{-\Omega^{2}\sigma^{2 }}\int_{0}^{\infty}\mathrm{d}u\,e^{-u^{2}/4\sigma^{2}}W_{\mathrm{s}}^{*}(u)\,. \tag{49b}\]
We then define harvested concurrence \(\mathcal{C}_{\mathrm{AB}}^{+}\) and communication-assisted concurrence \(\mathcal{C}_{\mathrm{AB}}^{-}\) as [54]
\[\mathcal{C}_{\mathrm{AB}}^{\pm}\coloneqq 2\max\{0,\,|\mathcal{M}_{\pm}|- \sqrt{\mathcal{L}_{\mathrm{AA}}\mathcal{L}_{\mathrm{BB}}}\}+\mathcal{O}( \lambda^{4})\,. \tag{50}\]
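As an illustration of how these quantities can be evaluated in practice, the following Python sketch implements (49) and (50) numerically. Since the trajectory-dependent \(W_{\mathrm{s}}(u)\) of section IV is lengthy, we substitute (as a stand-in) the vacuum Wightman function along two detectors at rest at proper separation \(L\), and we quote the standard closed-form Gaussian-switching transition probability for \(\mathcal{L}_{jj}\); both choices are assumptions made for the demonstration, and the accelerated-trajectory expressions can be dropped in for \(W_{\mathrm{s}}\).

```python
import numpy as np
from scipy.special import erfc

SIGMA = 1.0        # Gaussian switching width (natural units, lambda := 1)

# Stand-in Wightman function: the Minkowski vacuum evaluated along two
# *static* detectors at proper separation L.  The stationary accelerated
# W_s(u) from Sec. IV can be substituted here directly.
def W_s(u, L, eps=5e-2):
    return 1.0 / (4.0 * np.pi**2 * (L**2 - (u - 1j * eps)**2))

def M_plus_minus(Omega, L, sigma=SIGMA):
    """Numerically evaluate eqs. (49a)-(49b)."""
    u = np.linspace(0.0, 15.0 * sigma, 60001)           # Gaussian kills the tail
    I = np.trapz(np.exp(-u**2 / (4 * sigma**2)) * W_s(u, L), u)
    pref = sigma * np.sqrt(np.pi) * np.exp(-(Omega * sigma)**2)
    return -pref * (I + np.conj(I)), 1j * pref * (I - np.conj(I))

def L_jj(Omega, sigma=SIGMA):
    # Closed-form vacuum transition probability for Gaussian switching
    # (a standard result in the harvesting literature; quoted as an assumption).
    x = Omega * sigma
    return (np.exp(-x**2) - np.sqrt(np.pi) * x * erfc(x)) / (4.0 * np.pi)

def concurrences(Omega, L):
    Mp, Mm = M_plus_minus(Omega, L)
    noise = L_jj(Omega)                        # sqrt(L_AA L_BB) with L_AA = L_BB
    return (2 * max(0.0, abs(Mp) - noise),     # harvested C+,  eq. (50)
            2 * max(0.0, abs(Mm) - noise))     # communication-assisted C-

for L in (1.0, 2.0, 4.0):
    Cp, Cm = concurrences(1.0, L)
    print(f"L/sigma = {L}:  C+ = {Cp:.3e},  C- = {Cm:.3e}")
```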
We plot \(\mathcal{C}_{\mathrm{AB}}^{\pm}/\lambda^{2}\) as a function of \(L/\sigma\) in figure 7. Here, we specifically choose the stationary linear (\(\bar{b}=0\)) and circular (\(\bar{b}=2\)) cases as a demonstration. We find that for \(\Omega\sigma=1\) (figure 7(a)), the detectors can harvest entanglement since \(\mathcal{C}_{\mathrm{AB}}^{+}/\lambda^{2}>0\). Most strikingly, it is possible to extract genuine entanglement for \(L/\sigma\in(1.5,2.2)\) since \(\mathcal{C}_{\mathrm{AB}}^{+}>0\) while \(\mathcal{C}_{\mathrm{AB}}^{-}=0\) in this region. However, this is not always true, as one can see from figure 7(b) when \(\Omega\sigma=2\). Here, detectors in circular motion can encounter the case where \(\mathcal{C}_{\mathrm{AB}}^{+}=0\) while \(\mathcal{C}_{\mathrm{AB}}^{-}>0\), which indicates that the entanglement generated after the interaction comes purely from communication and not from the field. However, genuine entanglement can still be extracted in the linear configuration.
Figure 6: The boundaries between \(\mathcal{C}_{\mathrm{AB}}>0\) and \(\mathcal{C}_{\mathrm{AB}}=0\) for the four stationary trajectories as a function of the proper separation \(L/\sigma\) and the energy gap \(\Omega\sigma\). Here, linear (\(\bar{b}=0\)), catenary (\(\bar{b}=0.5\)), cusped (\(\bar{b}=1\)), and circular (\(\bar{b}=2\)) with \(a\sigma=1\) are depicted. Concurrence is nonzero in the left region of each curve.
## V Conclusion
We carried out the correlation harvesting protocol using two uniformly accelerating Unruh-DeWitt (UDW) detectors in \((3+1)\)-dimensional Minkowski spacetime. According to Letaw's classification of stationary worldlines, trajectories with constant (nonzero) acceleration can be characterized by the magnitude of the acceleration and two torsion parameters, resulting in five classes: linear, catenary, cusped, circular, and helix motions. The first four of these classes of motion are confined to a two-dimensional spatial surface and can be regarded as special cases of the helix motion. Since two-dimensional configurations are more amenable to experimental setups, we employed these four simpler motions for our analysis.
We first examined the transition probability of a single detector following the four trajectories in section IV.1. Utilizing a unified expression for the Wightman functions along these trajectories, we were able to explore the general characteristics that are common to all these motions. We found that the transition probabilities of these motions monotonically increase with the magnitude of acceleration. Moreover, we also evaluated the effective temperature--an estimator for the temperature as observed by a detector.
We then introduced another UDW detector to consider the correlation harvesting protocol in section IV.2. Two configurations were explored: stationary and nonstationary configurations. In the stationary configuration, detectors are separated in the direction perpendicular to their two-dimensional spatial planes of motion. Specifically, the displacement vector pointing from one detector to the other remains orthogonal to the velocity vectors of the detectors. In such a case, the Wightman function along the stationary configuration is time-translation invariant. On the other hand, in the nonstationary configuration, the displacement vector aligns parallel to the planes of motion. This makes the Wightman function nonstationary (i.e., not time-translation invariant). Moreover, while this Wightman function shares a common term with the stationary configuration, an additional term appears that specifically characterizes the nonstationary nature of this configuration.
We found that the harvested correlations--entanglement and total correlations--behave in a distinct manner depending on the motion of the detectors. Specifically, detectors in linear, catenary, and cusped motions within the nonstationary configuration gain fewer correlations compared to those in the stationary configuration. On the other hand, in the circular motion case, both configurations exhibit similar behavior. This difference can be attributed to the Wightman functions. For linear, catenary, and cusped motions, the Wightman function contains hyperbolic functions, leading to an exponential alteration of the results. In contrast, the Wightman function for circular motion is governed by trigonometric functions.
Figure 7: Harvested and communication-assisted concurrences \(\mathcal{C}^{\pm}_{\mathrm{AB}}/\lambda^{2}\) for the stationary linear and circular configurations as a function of the detector separation \(L/\sigma\). (a) When \(\Omega\sigma=1\), the linear and circular cases are very similar, and the detectors can harvest genuine entanglement near \(L/\sigma=2\). (b) When \(\Omega\sigma=2\), the linear case does not change much compared to the \(\Omega\sigma=1\) case, but the circular case can no longer harvest genuine entanglement: the generated entanglement is a mixture of the anticommutator and commutator contributions, or in the worst case \(\mathcal{C}^{+}_{\mathrm{AB}}/\lambda^{2}=0\) around \(L/\sigma=1.3\).
We also looked into the acceleration dependence of the harvested correlations and concluded (not surprisingly) that high accelerations prevent the detectors from acquiring correlations from the field. This point is consistent with previous papers [8, 25, 26, 27, 28, 47, 48], in which linearly and circularly accelerated detectors are considered. Our paper generalized these results to any uniformly accelerating detectors on two-dimensional spatial surfaces.
Finally, we focused on entanglement harvested by the detectors in the stationary configuration and asked how much of the entanglement comes from correlations preexisting in the field. To be precise, the entanglement coming from the commutator part of the Wightman function is state-independent, which suggests that the detectors can become correlated even if the field is not entangled [54]. Thus, it is important to examine how the anticommutator part of the Wightman function (which is state-dependent) contributes to the extracted correlations. One way to eliminate the commutator contribution is to use causally disconnected detectors. However, we found that the presence of acceleration prevents us from extracting correlations with detectors separated far apart, no matter how large the energy gap is. Nevertheless, we also found the striking result that detectors in causal contact can harvest genuine entanglement in certain parameter regimes.
Our results have important implications for experiment. Attempts to realize the Unruh effect and correlation harvesting generally rely on using laser pulses to probe what are effectively two-dimensional surfaces. Probing the effects of non-inertial motion on mutual information and entanglement will therefore involve two detectors (two pulses) in nonstationary configurations, since only these can be realized in a two-dimensional setting. Experimental verification of the harvesting of genuine entanglement would be an exciting confirmation of our understanding of relativistic quantum information.
###### Acknowledgements.
This work was supported in part by the Natural Sciences and Engineering Research Council of Canada. KGY is thankful to Dr. Jorma Louko for elaborating on the relationship among the uniformly accelerating trajectories.
## Appendix A Effective temperature
Here, we review the concept of effective temperature \(T_{\rm eff}\) and clarify its relation to the KMS temperature.
Let us first review the KMS temperature. In quantum theory with separable Hilbert spaces, the trace of an operator, \({\rm Tr}[\cdot]\), is well defined. This enables us to consider the Gibbs state at the inverse temperature \(\beta\), \(\rho=e^{-\beta\hat{H}}/Z\), where \(Z\coloneqq{\rm Tr}[e^{-\beta\hat{H}}]\) is the partition function. This is what we consider a thermal state of a system.

However, in QFT, a trace is generally not well defined. Instead, we identify the Kubo-Martin-Schwinger (KMS) state [63, 64] as a thermal state in QFT. Specifically, if the field is in the KMS thermal state with respect to time \(\tau\) at the inverse KMS temperature \(\beta_{\rm KMS}\), the Wightman function satisfies
\[W(\Delta\tau-{\rm i}\beta_{\rm KMS})=W(-\Delta\tau)\,, \tag{10}\]
where \(\Delta\tau\coloneqq\tau-\tau^{\prime}\). The Fourier transform of this equality with respect to \(\Delta\tau\) reads
\[\tilde{W}(-\omega)=e^{\beta_{\rm KMS}\omega}\tilde{W}(\omega)\,. \tag{11}\]
This equality in the Fourier domain is known as the detailed balance condition. Thus, the thermality of a quantum field is imprinted in these equalities.
The thermality can also be implemented in the transition probability of a UDW detector. Recall that the transition probability is written as
\[\mathcal{L}=\lambda^{2}\int_{\mathbb{R}}\mathrm{d}\tau\int_{ \mathbb{R}}\mathrm{d}\tau^{\prime}\,\chi(\tau)\chi(\tau^{\prime})e^{-{\rm i} \Omega(\tau-\tau^{\prime})}W\big{(}\mathsf{x}(\tau),\mathsf{x}(\tau^{\prime}) \big{)}\,, \tag{12}\]
where the subscript in \(\mathcal{L}_{jj},\,j\in\{\rm A,B\}\) is omitted for simplicity. Let us assume that the switching function, \(\chi(\tau)\), has a characteristic time length \(\sigma\). In our paper, this is the Gaussian width in \(\chi(\tau)=e^{-\tau^{2}/2\sigma^{2}}\). It is convenient to introduce a quantity related to the transition probability known as the _response function_ (divided by the characteristic time length) \(\mathcal{F}(\Omega,\sigma)\):
\[\mathcal{L}=\lambda^{2}\sigma\mathcal{F}(\Omega,\sigma)\,,\] \[\mathcal{F}(\Omega,\sigma):=\] \[\quad\frac{1}{\sigma}\int_{\mathbb{R}}\mathrm{d}\tau\int_{ \mathbb{R}}\mathrm{d}\tau^{\prime}\,\chi(\tau)\chi(\tau^{\prime})e^{-{\rm i} \Omega(\tau-\tau^{\prime})}W\big{(}\mathsf{x}(\tau),\mathsf{x}(\tau^{\prime}) \big{)}\,. \tag{13}\]
If the field is in the KMS state and the switching function is a rapidly decreasing function such as a Gaussian function, then the response function in the long interaction limit obeys the detailed balance relation [62]:
\[\lim_{\sigma\to\infty}\frac{\mathcal{F}(-\Omega,\sigma)}{\mathcal{F}(\Omega, \sigma)}=e^{\beta_{\rm KMS}\Omega}\,. \tag{14}\]
Note that this relation holds when the long interaction limit is taken. On the other hand, if \(\sigma\) is not sufficiently long, the ratio of the response function (sometimes known as the excited-to-deexcited ratio) does not satisfy the detailed balance condition.
From this relation, one can define the _effective temperature_ as
\[T_{\rm eff}^{-1}\coloneqq\frac{1}{\Omega}\ln\frac{\mathcal{F}(- \Omega,\sigma)}{\mathcal{F}(\Omega,\sigma)}\,. \tag{15}\]
Note that the effective temperature is not necessarily the KMS temperature. If the field is in the KMS state and the long interaction limit is taken, then the effective temperature becomes the KMS temperature. In this sense, the effective temperature is an estimator for the field's temperature.
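For concreteness, the sketch below estimates \(T_{\rm eff}\) for linear uniform acceleration, where the pulled-back Wightman function is \(W(u)=-a^{2}/\big(16\pi^{2}\sinh^{2}(a(u-{\rm i}\epsilon)/2)\big)\). To avoid handling the distributional singularity numerically, we subtract and re-add the inertial vacuum contribution: the smooth difference is integrated on a grid, while the inertial piece uses the closed-form Gaussian-switching response quoted in the harvesting literature (an assumption of this sketch). As \(\sigma\) grows, \(T_{\rm eff}\) should approach the Unruh temperature \(a/2\pi\).

```python
import numpy as np
from scipy.special import erfc

def F_inertial(Omega, sigma):
    # Closed-form Gaussian-switching response of an inertial detector in the
    # Minkowski vacuum (divided by sigma, per eq. (13)); quoted as an assumption.
    x = Omega * sigma
    return (np.exp(-x**2) - np.sqrt(np.pi) * x * erfc(x)) / (4 * np.pi * sigma)

def dW(u, a):
    # Smooth difference between the accelerated and inertial vacuum Wightman
    # functions; it is finite at u -> 0, where it tends to a^2/(48 pi^2).
    return (-a**2 / (16 * np.pi**2 * np.sinh(a * u / 2)**2)
            + 1.0 / (4 * np.pi**2 * u**2))

def F_accel(Omega, sigma, a):
    # Response function (13) for linear uniform acceleration: exact inertial
    # piece plus the numerically integrated smooth correction (even in Omega).
    u = np.linspace(1e-4, 40 * sigma, 400001)
    corr = 2 * np.sqrt(np.pi) * np.trapz(
        np.exp(-u**2 / (4 * sigma**2)) * np.cos(Omega * u) * dW(u, a), u)
    return F_inertial(Omega, sigma) + corr

def T_eff(Omega, sigma, a):
    # Effective temperature, eq. (15).
    return Omega / np.log(F_accel(-Omega, sigma, a) / F_accel(Omega, sigma, a))

for sigma in (1.0, 2.0, 5.0, 10.0):
    print(f"sigma = {sigma:4.1f}:  T_eff = {T_eff(1.0, sigma, 1.0):.4f}"
          f"  (Unruh: {1 / (2 * np.pi):.4f})")
```

In the long-interaction limit the printed values settle onto the Unruh result, while short switching times overestimate the temperature, consistent with the remark above that the detailed balance condition holds only as \(\sigma\to\infty\).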
|
2308.00980 | Grasp Stability Assessment Through Attention-Guided Cross-Modality
Fusion and Transfer Learning | Extensive research has been conducted on assessing grasp stability, a crucial
prerequisite for achieving optimal grasping strategies, including the minimum
force grasping policy. However, existing works employ basic feature-level
fusion techniques to combine visual and tactile modalities, resulting in the
inadequate utilization of complementary information and the inability to model
interactions between unimodal features. This work proposes an attention-guided
cross-modality fusion architecture to comprehensively integrate visual and
tactile features. This model mainly comprises convolutional neural networks
(CNNs), self-attention, and cross-attention mechanisms. In addition, most
existing methods collect datasets from real-world systems, which is
time-consuming and high-cost, and the datasets collected are comparatively
limited in size. This work establishes a robotic grasping system through
physics simulation to collect a multimodal dataset. To address the sim-to-real
transfer gap, we propose a migration strategy encompassing domain randomization
and domain adaptation techniques. The experimental results demonstrate that the
proposed fusion framework achieves markedly enhanced prediction performance
(approximately 10%) compared to other baselines. Moreover, our findings suggest
that the trained model can be reliably transferred to real robotic systems,
indicating its potential to address real-world challenges. | Zhuangzhuang Zhang, Zhenning Zhou, Haili Wang, Zhinan Zhang, Huang Huang, Qixin Cao | 2023-08-02T07:26:35Z | http://arxiv.org/abs/2308.00980v1 | # Grasp Stability Assessment Through Attention-Guided Cross-Modality Fusion and Transfer Learning
###### Abstract
Extensive research has been conducted on assessing grasp stability, a crucial prerequisite for achieving optimal grasping strategies, including the minimum force grasping policy. However, existing works employ basic feature-level fusion techniques to combine visual and tactile modalities, resulting in the inadequate utilization of complementary information and the inability to model interactions between unimodal features. This work proposes an attention-guided cross-modality fusion architecture to comprehensively integrate visual and tactile features. This model mainly comprises convolutional neural networks (CNNs), self-attention, and cross-attention mechanisms. In addition, most existing methods collect datasets from real-world systems, which is time-consuming and high-cost, and the datasets collected are comparatively limited in size. This work establishes a robotic grasping system through physics simulation to collect a multimodal dataset. To address the sim-to-real transfer gap, we propose a migration strategy encompassing domain randomization and domain adaptation techniques. The experimental results demonstrate that the proposed fusion framework achieves markedly enhanced prediction performance (approximately 10%) compared to other baselines. Moreover, our findings suggest that the trained model can be reliably transferred to real robotic systems, indicating its potential to address real-world challenges.
## I Introduction
Before grasping an object, humans effortlessly integrate the senses of vision and touch to assess the stability of the grasp. Visual feedback provides information regarding the geometric properties of the object's surface, while tactile feedback establishes precise and intuitive contact conditions between the hand and the object. Thus, these two modalities are concurrent and complementary. However, the existing robotic grasping methodologies typically use a fixed gripping force. As a result, the robot primarily relies on open-loop grasping and cannot actively modify its pose and gripping force, thereby limiting the stability and security of the grasp. Additionally, it is critical to equip robots with the capability to delicately and minimally grasp objects, much like humans, as this can substantially enhance robots' intelligence in handling the complexities encountered in unstructured environments [1]. The grasp stability assessment, which serves as a critical prerequisite for enabling intelligent grasping, remains an open and challenging research issue [2, 3].
To date, several typical studies on the assessment of grasp stability have focused on multimodal fusion networks (MMFNs) that utilize both visual and tactile modalities. Calandra _et al_. [2] proposed a multimodal sensing framework for predicting grasp stability using tactile and visual inputs. Their experimental results indicate that the visual-tactile model significantly enhances grasping performance. They further introduced a regrasping policy based on grasp stability evaluation using raw visual-tactile data. The learned model enables the robot to grasp objects with minimal gripping force, reducing the chance of object damage [3]. Li _et al_. [4] introduced an architecture constructed from CNN and Recurrent Neural Network (RNN) to classify a grasp as stable or not. Cui _et al_. [5] proposed a 3D CNN-based fusion perception network to evaluate the grasp stability of deformable objects. They also introduced an MMFN that utilizes the self-attention mechanism [6]. In a recent study, Kanitkar _et al_. [7] introduced a multimodal dataset that includes tactile and visual data to explore grasp outcomes at specific holding poses. However, the visual-tactile fusion-based grasp stability evaluation methods discussed above still exhibit certain limitations.
Firstly, these studies employed simple feature-level fusion techniques (e.g., concatenation of the unimodal features from the final layer) to train multimodal prediction networks. Even in [6], only a single-layer self-attention module is utilized. This has led to insufficient utilization of complementary information and a failure to model interactions between unimodal features. In recent years, transformers have been demonstrated to perform well across various tasks, such as natural language processing (NLP) [8], vision-tactile manipulation [35], motion forecasting [36], as well as processing multimodal data, such as images, audio, and video [9]. Inspired by these observations, we propose an attention-guided visual-tactile cross-modality fusion method for delicate robotic grasping tasks. Specifically, this architecture utilizes a self-attention-based module to enhance unimodal information, a cross-attention-based module to model the interactions between unimodal features, and a co-attention module to aggregate and enhance visual-tactile features.
Fig. 1: The framework diagram of the proposed grasp stability assessment method: the prediction network is trained with synthetic visual and tactile images and then successfully deployed on a real robot using the proposed migration strategy.
Secondly, these methods involve the collection of datasets from real-world systems, which is a time-consuming and expensive process, and the resulting datasets are often limited in size. A large-scale and reasonable dataset is a primary prerequisite for data-driven methods. To accelerate the dataset generation process, applying physics simulation provides an appealing avenue. Therefore, in this paper, we set up a robotic grasping system in the physics simulator PyBullet and implement a visual-tactile multimodal dataset collection policy. We then propose a migration strategy that consists of domain randomization and domain adaptation techniques to bridge the sim-to-real transfer gap. The framework diagram of the proposed grasp stability assessment method is shown in Fig. 1. The contributions of this paper are summarized as follows:
(1) An end-to-end attention-guided cross-modality fusion architecture is proposed to assess the grasp stability.
(2) A migration strategy that consists of domain randomization and domain adaptation techniques is proposed to bridge the sim-to-real transfer gap.
(3) Extensive validation experiments are conducted in both real and simulation systems, and the results prove that the proposed model outperforms other baselines and can be reliably transferred to the real robotic system.
The remaining part of this paper is structured as follows. Section II reviews the related work of grasp stability evaluation and sim-to-real transfer. In Section III, the cross-modality fusion architecture is described. In Section IV, the dataset generation and migration strategies are presented. In Section V, extensive validation experiments are conducted in simulated and real systems, and the experimental results are discussed. Finally, Section VI is the conclusion of this paper and future work.
## II Related Work
### _Grasp Stability Evaluation_
Grasp stability evaluation has been extensively researched as a crucial prerequisite for optimizing grasping strategies. Bekiroglu _et al._[10] introduced a probabilistic learning framework that utilizes machine learning techniques and tactile data acquired from pressure-sensitive tactile sensors to evaluate grasp stability. Kwiatkowski _et al._[11] utilized CNNs to evaluate grasp stability by combining tactile signals and proprioceptive information. Veiga _et al._[12] employed tactile data to predict slip events and modulate contact forces accordingly in anticipation of slip occurrences. Nevertheless, these techniques typically use electronic tactile sensors (ETSs) that offer limited tactile information, impeding robotic tactile sensing performance advancement.
Compared to ETSs, vision-based tactile sensors (VBTSs), such as GelSight-style sensors, offer notable benefits in high resolution, robustness, and integration of visual-tactile data. Kolamuri _et al._[13] employed GelSight sensors to detect the rotational failure of grasps and presented a regrasping strategy to enhance grasp stability. Si _et al._[14] developed a CNN-LSTM model that uses a sequence of tactile images to predict grasp outcomes. However, these research frameworks do not integrate the visual modality, even though the concurrent and synergistic integration of visual-tactile data during the initial grasping stage is critical for achieving optimal grasping results. Calandra _et al._[2] showed that including tactile signals in a multimodal perception framework significantly improves grasping performance. Cui _et al._[5] utilized a 3D CNN-based visual-tactile fusion network to evaluate the grasp state of deformable objects. Kanitkar _et al._[7] presented a multimodal dataset consisting of visual-tactile information to investigate the impact of varied holding poses on grasp stability. Nevertheless, the adoption of simplistic feature-level fusion approaches in these studies resulted in a restricted exploitation of complementary information and an inability to effectively capture the interplay among unimodal features. In contrast to these previous studies, we propose an attention-guided cross-modality fusion architecture to enhance unimodal information and model the interactions between unimodal features.
### _Sim-to-Real Transfer_
While generating datasets in simulation is highly efficient, the distributional shift between real and simulation data may lead to migration failure, also known as the sim-to-real gap. By bridging the distribution gap between simulation and the real world, transfer learning can enable the control strategies learned in the simulation to be effectively applied to a real robot. Some works investigated the effectiveness of domain randomization techniques in transferring a model trained on simulated RGB images to real-world images [15, 16]. By introducing randomization in the rendering process within the simulator, these studies successfully reduced the distribution gap between simulated and real-world data and enabled the successful deployment of the trained model on real hardware. Simulating the VBTS is challenging compared to the visual modality because an ideal high-resolution tactile simulator needs to model not only realistic optical properties but also accurate contact dynamics.
Gomes _et al._[17] utilized the Gazebo built-in camera to capture the depth map of the contact area and generated the RGB image using Phong's model. Agarwal _et al._[18] employed the bidirectional path-tracing algorithm to generate more realistic synthetic images, but this method requires a significant amount of computation. Si _et al._[19] proposed Taxim, an example-based method for simulating GelSight sensors that involves optical and marker motion field simulation. Wang _et al._[20] proposed TACTO, a simulation framework for simulating VBTSs such as DIGIT [21] and OmniTact [22]. Although the studies mentioned above have shown impressive results, the gap between synthetic tactile images and real images remains due to the challenges in modeling optical properties and contact dynamics. Chen _et al._[23] utilized CycleGAN [24] to train unpaired data. Nevertheless, the physical properties of the tactile sensor are neglected. Lin _et al._[25] employed an image-to-image
translation GAN [26] to accomplish the sim-to-real transfer. Nonetheless, they solely evaluated the zero-shot performance in basic scenarios such as edge-following and surface-following. In this paper, DIGIT [21] and TACTO [20] are employed as tools for capturing real and simulated tactile images, and a migration strategy consisting of domain randomization and domain adaptation techniques is proposed to bridge the sim-to-real transfer gap for delicate grasping tasks.
## III Cross-Modality Fusion Architecture
### _Network Structure_
The proposed dual-stream visual-tactile fusion network in this paper consists of feature extraction, cross-modality fusion transformer, and prediction head, as shown in Fig. 2.
**Feature extraction** The visual modality is an RGB image \(\mathcal{I}_{v}\in\mathbb{R}^{3\times 480\times 640}\) captured by a camera mounted on the robot's hand, whereas the tactile modality corresponds to a spliced image \(\mathcal{I}_{h}\in\mathbb{R}^{3\times 320\times 480}\) generated from two high-resolution tactile sensors mounted on the gripper. It is noteworthy that this paper investigates the grasp stability assessment. For this purpose, the images of both modalities are captured simultaneously after the gripper closure and prior to object lifting. In contrast, for slip detection purposes, sequential images during the lifting process are required.
We first take ResNet-50 [27] as the backbone for each stream to extract deep features. Specifically, the final outputs of ResNet-50 are obtained by discarding the last stage and using the outputs of the fourth stage instead. Following that, the channel dimension is reduced by a 1\(\times\)1 convolution to obtain two feature maps with lower dimensions \((\mathbf{X}_{v},\mathbf{X}_{h})\in\mathbb{R}^{H\times W\times d}\). The value of \(d\) is set to 512 in our implementation. Finally, the image features are flattened along their spatial dimension into one-dimensional features \((\mathbf{X}_{v},\mathbf{X}_{h})\in\mathbb{R}^{HW\times d}\), which are then utilized as input to the cross-modality fusion transformer.
**Transformer module** First, we employ two multi-head self-attention (MSA) modules to integrate the global interactions and enhance feature representation in the same domain. Then, two multi-head cross-attention (MCA) modules are devised to further integrate global interactions between the visual and tactile domains. In this way, a fusion layer is created by combining two MSA modules and two MCA modules. This fusion layer is repeated four times in the experiment. The features of the two modalities are then concatenated and fed into an MSA module (also known as the co-attention mechanism) to aggregate the global context, and the final output is a 512-length feature vector.
Given an input sequence \(\mathbf{X}\in\mathbb{R}^{HW\times d}\), it will pass through three projection matrices \(\mathbf{W}^{Q}\in\mathbb{R}^{d\times d_{k}}\), \(\mathbf{W}^{K}\in\mathbb{R}^{d\times d_{k}}\), and \(\mathbf{W}^{V}\in\mathbb{R}^{d\times d_{v}}\) to produce three embeddings \(\mathbf{Q}\) (Query), \(\mathbf{K}\) (Key), and \(\mathbf{V}\) (Value):
\[\{\mathbf{Q},\mathbf{K},\mathbf{V}\}=\{\mathbf{X}\mathbf{W}^{Q},\mathbf{X} \mathbf{W}^{K},\mathbf{X}\mathbf{W}^{V}\} \tag{1}\]
Then the self-attention mechanism is defined as:
\[\mathbf{X}\leftarrow\text{SA}(\mathbf{Q},\mathbf{K},\mathbf{V})=\text{softmax} \left(\frac{\mathbf{Q}\mathbf{K}^{\text{T}}}{\sqrt{d_{k}}}\right)\mathbf{V} \tag{2}\]
Multiple self-attention sub-layers can be stacked in parallel to consider diverse attention distributions. Thus the structure of multi-head self-attention (MHSA) is defined as:
\[\text{MHSA}(\mathbf{Q},\mathbf{K},\mathbf{V})=\text{Concat}\big{(}\mathbf{X} _{1},\cdots,\mathbf{X}_{n_{h}}\big{)}\mathbf{W}^{O} \tag{3}\]
where \(\mathbf{W}^{O}\in\mathbb{R}^{n_{h}d_{v}\times d}\) is a parameter matrix. In the experiment, we set \(n_{h}=8\), \(d=512\), and \(d_{k}=d_{v}=d/n_{h}=64\).
In addition, each module is followed by a two-layer feed-forward network (FFN) to enhance the fitting ability of the network,
\[\text{FFN}(\mathbf{X})=\text{max}(\mathbf{0},\mathbf{X}\mathbf{W}_{1}+\mathbf{ b}_{1})\mathbf{W}_{2}+\mathbf{b}_{2} \tag{4}\]
Both FFNs and attention modules employ residual connection. We also apply the positional embedding to both MSA and MCA modules because the attention mechanism cannot distinguish positional information of the input feature sequence. Following [8], we adopt sin and cos functions to encode the positional information \(\mathbf{P}\) of the input sequence. Thus, the MSA can be formulated as
\[\mathbf{X}\leftarrow\mathbf{X}+\text{MHSA}(\mathbf{X}+\mathbf{P},\mathbf{X} +\mathbf{P},\mathbf{X}) \tag{5}\]
And the MCA can be expressed as
\[\mathbf{X}_{v}\leftarrow\mathbf{X}_{v}+\text{MHSA}(\mathbf{X}_{v}+\mathbf{P}_{v},\mathbf{X}_{h}+\mathbf{P}_{h},\mathbf{X}_{h}) \tag{6}\]
\[\mathbf{X}_{h}\leftarrow\mathbf{X}_{h}+\text{MHSA}(\mathbf{X}_{h}+\mathbf{P}_{ h},\mathbf{X}_{v}+\mathbf{P}_{v},\mathbf{X}_{v}) \tag{7}\]
where \(\mathbf{X}_{v}\) and \(\mathbf{X}_{h}\) denote the feature sequences of the visual and tactile channels, respectively.

Fig. 2: Dual-stream fusion network architecture for grasp stability prediction.
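To make the layer structure concrete, the following PyTorch sketch assembles one fusion layer from eqs. (5)-(7). The class name, the FFN placement, and the use of `nn.MultiheadAttention` are our assumptions for illustration; normalization details of the original implementation may differ.

```python
import torch
import torch.nn as nn

class FusionLayer(nn.Module):
    """One fusion layer: per-modality MSA (eq. 5) + bidirectional MCA (eqs. 6-7)."""
    def __init__(self, d=512, n_heads=8):
        super().__init__()
        self.sa_v = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.sa_h = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.ca_v = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.ca_h = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.ffn_v = nn.Sequential(nn.Linear(d, 4 * d), nn.ReLU(), nn.Linear(4 * d, d))
        self.ffn_h = nn.Sequential(nn.Linear(d, 4 * d), nn.ReLU(), nn.Linear(4 * d, d))

    def forward(self, xv, xh, pv, ph):
        # Eq. (5): self-attention per modality; queries/keys carry the
        # positional embedding, values do not.
        xv = xv + self.sa_v(xv + pv, xv + pv, xv, need_weights=False)[0]
        xh = xh + self.sa_h(xh + ph, xh + ph, xh, need_weights=False)[0]
        # Eqs. (6)-(7): bidirectional cross-attention between modalities.
        xv2 = xv + self.ca_v(xv + pv, xh + ph, xh, need_weights=False)[0]
        xh2 = xh + self.ca_h(xh + ph, xv + pv, xv, need_weights=False)[0]
        # Residual two-layer FFNs (eq. 4); exact placement is assumed.
        return xv2 + self.ffn_v(xv2), xh2 + self.ffn_h(xh2)

# Four stacked layers on flattened HW x d feature sequences, as in the text.
B, HW, d = 2, 7 * 7, 512
xv, xh = torch.randn(B, HW, d), torch.randn(B, HW, d)
pv, ph = torch.randn(1, HW, d), torch.randn(1, HW, d)   # sinusoidal in practice
for layer in [FusionLayer() for _ in range(4)]:
    xv, xh = layer(xv, xh, pv, ph)
```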
**Prediction head** The prediction head is a classification module consisting of a two-layer FFN structure and a loss function. This module takes the 512-length feature vector as input and outputs binary classification results. In addition, a ReLU activation function is employed between the two FFN layers.
### _Training_
In this work, we adopt the standard binary cross-entropy loss to measure the loss of classification,
\[\mathcal{L}_{BCE}=-\sum_{i}[y_{i}\log(p_{i})+(1-y_{i})\log(1-p_{i})] \tag{8}\]

where \(y_{i}\) represents the ground-truth label, \(y_{i}=1\) denotes a successful grasp, and \(p_{i}\) is the probability of a successful grasp predicted by the learned model.
We resize tactile and visual images to \(256\times 256\times 3\) and randomly sample \(224\times 224\times 3\) crops for data augmentation. The ResNet-50 pretrained on ImageNet [32] is employed. We train the full network for 20 epochs using the Adam optimizer with a learning rate of \(1\times 10^{-4}\) and a batch size of 32. The experiments are implemented on Ubuntu 18.04 with one NVIDIA GTX1080Ti GPU and a 2.10 GHz Intel Xeon E5-2620 CPU.
## IV Dataset Generation and Migration Strategies
Collecting large-scale datasets in the real world is time-consuming and laborious. Therefore, robot simulation plays a crucial role in data-driven manipulation tasks. In this paper, we set up a simulation environment for robotic grasping to generate a large-scale visual-tactile dataset and successfully transfer the learned policy to the real world with the proposed migration strategy.
### _Experimental Conditions_
To collect a large and reasonable multimodal dataset, a simulation system for robotic grasping is built. The physical entities in the real-world setting include a UR10 robot, a Robotiq gripper, a RealSense SR305 camera, two high-resolution tactile sensors, and an object that is intended to be grasped. The DIGIT [21] is selected as the tactile sensing hardware for real-world implementation due to its seamless integration with the gripper and user-friendly operation. Simultaneously, the TACTO [20] replicates DIGIT in the simulation environment. Moreover, OpenGL integrated with PyBullet is utilized to render RGB images. The simulated hardware maintains identical CAD dimensions to those of the physical environment and is loaded via URDF. The communication between the two environments is established through the Robot Operating System (ROS).
### _Visual-Tactile Dataset_
This multimodal dataset \(\mathcal{D}\) contains tactile images and rendered RGB images. The object set includes 50 objects from the YCB dataset [29]. These objects are divided into a training set of 45 objects and a test set of 5. The overall process framework for visual-tactile dataset generation is shown in Fig. 3. We first sample 400 force closure grasp candidates for each object using the antipodal grasp sampling method [30]. The sampled grasp configurations are represented in the object's reference frame. Then the MeshPy library calculates the stable poses for each object on a table surface and makes corrections in the PyBullet gravity environment, discarding unstable entries. Twenty collision-free grasps are subsequently computed for each stable pose on a planar work surface. Finally, the resulting grasps corresponding to each stable pose are stored in the HDF5 file.
To simplify the problem, we assume that the grasping is a quasi-static process with a Coulomb friction model, and the friction coefficient is set to \(\mu=0.6\) for objects and gripper in grasping trials. In addition, the mass of the object is taken from the dataset. When performing grasping trials, each object will be loaded with all stored stable poses in order, and each stable pose comes with 20 grasp configurations. The robot will perform 30 grasping trials for each grasp configuration, starting with a gripping force of 5 N and rising at an interval of 1 N. After closing the gripper, images from two tactile sensors are recorded. To initially match the distribution of synthetic images to real readings, a 2D Gaussian filter \(G(x,y)\) is first applied,
\[\mathcal{I}\leftarrow\mathcal{I}\ast\frac{1}{2\pi\sigma^{2}}e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}} \tag{9}\]
where \(\ast\) stands for the convolution operation. Then the synthetic difference image is added to the real background image to form the training input,
\[\mathcal{I}\gets\mathcal{I}-\mathcal{I}_{\text{sim,bg}}+\mathcal{I}_{ \text{real,bg}} \tag{10}\]
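A minimal NumPy/SciPy sketch of this two-step matching, eqs. (9)-(10), is given below; the filter width `sigma_px` and the image conventions (uint8 RGB arrays) are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sim_to_real_style(tactile_sim, sim_bg, real_bg, sigma_px=2.0):
    """Eqs. (9)-(10): blur the synthetic tactile image, then graft its
    difference signal onto a real background frame."""
    img = tactile_sim.astype(np.float32)
    # Eq. (9): 2D Gaussian smoothing, applied per color channel.
    img = np.stack([gaussian_filter(img[..., c], sigma_px) for c in range(3)],
                   axis=-1)
    # Eq. (10): swap the simulated background for the real one.
    out = img - sim_bg.astype(np.float32) + real_bg.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)
```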
RGB images are synchronously acquired with tactile images. To address the reality gap of the visual modality, the following aspects are considered. 1) The real texture of the object is loaded. 2) Textures of the gripper, tactile sensors, and table are randomized within a specified range. 3) The camera's pose and resolution align with the physical parameters. In each trial, 50 images are rendered by randomly perturbing the camera pose within ranges of 2cm in translation and 5\({}^{\circ}\) in rotation. 4) Gaussian noise is added to the images.
After lifting the object, the robot performs a shaking action to verify its stability, thus improving the robustness of the grasp outcomes. Finally, if the object is still in hand (i.e., the object's height along the z-axis satisfies \(z>h\)), the label \(\ell=1\). Otherwise, \(\ell=0\). The dataset is collected using 16 processes to obtain a total of \(2.5\times 10^{6}\) datapoints. The distribution of positive-to-negative labels in the dataset is around 6:4.
Fig. 3: The data collection procedure for visual-tactile and paired tactile datasets.
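The trial loop described above can be condensed as follows; the simulator bindings (`load_pose`, `execute_grasp`, `shake_and_check`) and the HDF5 layout are hypothetical placeholders standing in for the actual PyBullet/ROS calls.

```python
import h5py
import numpy as np

H_LIFT = 0.10                # success threshold on lifted height (m); assumed
FORCES = range(5, 35)        # 30 trials: 5 N start, 1 N increments

# Hypothetical simulator bindings -- replace with the real PyBullet/ROS calls.
def load_pose(obj, pose): ...
def execute_grasp(grasp, gripping_force):
    return np.zeros((320, 480, 3), np.uint8), np.zeros((480, 640, 3), np.uint8)
def shake_and_check(min_height): return True

def run_trials(obj, grasp_db, out_file):
    """Label every (stable pose, grasp, force) combination for one object."""
    with h5py.File(out_file, "a") as f:
        for pose_id, pose in enumerate(grasp_db[obj]["stable_poses"]):
            load_pose(obj, pose)
            for g_id, grasp in enumerate(grasp_db[obj]["grasps"][pose_id][:20]):
                for force in FORCES:
                    tactile, rgb = execute_grasp(grasp, gripping_force=force)
                    label = int(shake_and_check(min_height=H_LIFT))  # 1 = stable
                    grp = f.create_group(f"{obj}/{pose_id}/{g_id}/{force}")
                    grp["tactile"], grp["rgb"] = tactile, rgb
                    grp.attrs["label"] = label
```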
### _Paired Tactile Dataset_
By using strictly paired simulation and real tactile images, one can achieve realistic optical properties and accurate contact dynamics mapping, thus enabling the effective transfer of tactile modality. However, the data collection policy needs to be carefully designed. To the best of our knowledge, no related work has been conducted on robotic grasping tasks. This work proposes a paired dataset collection policy for robotic grasping tasks, followed by image transfer using a modified conditional GAN.
The paired data collection procedure is shown in Fig. 3. Firstly, we select ten objects from the training object set. For each real object, we manually place a few stable poses. A real-time surface-based matching method implemented using the HALCON library [34] is used to estimate the object's 6D pose in each case. Secondly, the estimated pose is synchronized to the simulation environment via ROS service. In the simulation, 20 collision-free grasps corresponding to the synchronized pose are filtered from the grasp set containing 400 grasps. Finally, these grasps are synchronized sequentially to the real scene. Each synchronized grasp configuration involves conducting 20 grasping trials with the robot in each environment, starting with a gripping force of 10 N (the minimum gripping force of the gripper) and rising at an interval of 1 N. When closing the gripper, the paired images \((J_{r},J_{s})\) from real and simulation environments are recorded. The final dataset comprises 6800 paired data, which are partitioned into training (80%) and test (20%) sets.
### _Training a Conditional GAN_
The migration network is built on the pix2pix GAN [26], as shown in Fig. 4. It aims to learn a mapping \(f=G(x)\) from real tactile images to simulated images. Due to the utilization of simulated data for training the grasp stability prediction network, mapping real tactile images to simulated counterparts facilitates the direct deployment of the trained model on real hardware. The pix2pix architecture consists of a U-Net architecture as the generator \(G\) and a patch-based fully convolutional network as the discriminator \(D\). In our task, the objective of the discriminator \(D\) is to reveal the differences between simulated and generated images, while the generator \(G\) is to translate real images to simulated-like images to fool the discriminator \(D\). We employ the LSGAN loss [31] instead of the original conditional GAN loss to promote training stability and generate higher-quality images. The discriminative loss and the generative loss are defined as follows,
\[\mathcal{L}_{LSGAN}(D)=\frac{1}{2}\mathbb{E}_{x,y}[(D(x,y)-1)^{2}]\] \[+\frac{1}{2}\mathbb{E}_{x}\left[D\big{(}x,G(x)\big{)}^{2}\right] \tag{11}\] \[\mathcal{L}_{LSGAN}(G)=\frac{1}{2}\mathbb{E}_{x}\left[\big{(}D \big{(}x,G(x)\big{)}-1\big{)}^{2}\right] \tag{12}\]
Furthermore, we employ binary cross-entropy loss in Equation (8) to make the output of \(G\) approach the simulated image as much as possible. Thus the final objective functions can be expressed as
\[\min\mathcal{L}(D)=\mathcal{L}_{LSGAN}(D) \tag{13}\]
\[\min\mathcal{L}(G)=\mathcal{L}_{LSGAN}(G)+\lambda\mathcal{L}_{BCE} \tag{14}\]
To optimize networks, we set \(\lambda=10\) and use the Adam optimizer with 2e-4 learning rate. We train the model with 10 batch sizes and 20 epochs.
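As a sketch, the two objectives (13)-(14) can be written in PyTorch as below, where `real_img` is the conditioning real tactile image, `sim_img` the simulated target, and `fake_img = G(real_img)`; the `[0, 1]` image scaling for the pixelwise BCE term is an assumption.

```python
import torch
import torch.nn.functional as F

def d_loss(D, real_img, sim_img, fake_img):
    # Eq. (11): least-squares discriminator loss over target/fake pairs
    # (D outputs a patch map; labels are 1 for simulated targets, 0 for fakes).
    return 0.5 * ((D(real_img, sim_img) - 1).pow(2).mean()
                  + D(real_img, fake_img.detach()).pow(2).mean())

def g_loss(D, real_img, sim_img, fake_img, lam=10.0):
    # Eqs. (12) and (14): LSGAN generator term plus a pixelwise BCE term
    # pulling the generated image toward the simulated target.
    adv = 0.5 * (D(real_img, fake_img) - 1).pow(2).mean()
    bce = F.binary_cross_entropy(fake_img.clamp(1e-6, 1 - 1e-6), sim_img)
    return adv + lam * bce
```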
## V Experiments and Discussion
Extensive experiments are conducted in simulation and real hardware to evaluate the proposed method. The experiments mainly include the following four aspects.
Firstly, comparative studies are conducted on a publicly available dataset and a self-collected dataset to compare the predictive performance of different models with different inputs. Secondly, we test the prediction accuracy of the multiple models in grasping trials. Thirdly, a delicate grasp experiment is designed to further verify the effectiveness of the proposed method. Finally, we use the SSIM metric to measure the effect of tactile image migration.
### _Predictive Performance_
To comprehensively evaluate the proposed model, we conduct experiments on a public dataset from Calandra _et al._[2] and the dataset collected in Section IV, comparing against four baselines. The public dataset is collected by two GelSight sensors [33] and a Kinect camera mounted in front of the robot. This multimodal dataset contains 9269 grasping trials from 106 unique objects, and we use the data captured between closing the gripper and lifting the object as input to the networks. Ours refers to the proposed attention-guided cross-modality fusion model. Visual only and Tactile only mask out the tactile input and visual input, respectively, and the pre-trained ResNet-50, along with the mentioned prediction head (Section III), is utilized for feature extraction and classification. The dual-stream network proposed by Calandra _et al._[2] is adopted as the network architecture for direct visual-tactile fusion. Moreover, we denote Ours-m as a network structure in which only the last MSA is kept in the cross-modality fusion transformer; that is, the features from both channels are concatenated and passed through a self-attention module, as shown in Fig. 2. Finally, we employ the 3-fold cross-validation method to train the networks. Accuracy (A), Precision (P), and Recall (R) are chosen as the evaluation metrics.
\begin{table}
\begin{tabular}{c c c c} \hline & A(\%) & P(\%) & R(\%) \\ \hline Visual only & \(62.2\pm 1.3\) & \(63.5\pm 0.6\) & \(61.1\pm 0.7\) \\ Tactile only & \(67.1\pm 0.4\) & \(66.2\pm 0.3\) & \(64.8\pm 0.5\) \\ Calandra et al. [2] & \(72.2\pm 0.6\) & \(72.8\pm 0.8\) & \(71.2\pm 0.6\) \\ Ours-m & \(76.6\pm 1.6\) & \(75.3\pm 1.1\) & \(75.1\pm 0.9\) \\ Ours & \(\mathbf{84.4\pm 0.7}\) & \(\mathbf{85.2\pm 0.8}\) & \(\mathbf{83.1\pm 0.5}\) \\ \hline \end{tabular}
\end{table} TABLE I: Cross-Validation Results of the Different Models on Public Dataset.
Fig. 4: Tactile migration network that maps real image to simulated image.
The cross-validation results are reported in Table I and Table II. Overall, we see that the Visual only method exhibits the lowest performance, whereas the performance of Tactile only is second only to Calandra _et al._[2]. This suggests that tactile feedback plays a more critical role than vision in predicting grasp outcomes. Furthermore, the predictive power of the multimodal architectures is substantially improved compared to the unimodal ones. That is, integrating vision and touch is indispensable for executing stable and gentle grasp operations. Additionally, the proposed full model Ours performs best, and its prediction performance is dramatically improved compared to the other baselines.
More specifically, we can see from Table I that the proposed method Ours exhibits an average performance improvement of around 12% and 8% over Calandra _et al._[2] and Ours-m, respectively. And there is also a boost of around 8% and 5% in Table II. These findings demonstrate that our approach can effectively integrate visual and tactile modalities, leveraging their complementary strengths to improve the network's predictive power in a limited dataset. Furthermore, the comparison of Table I and Table II reveals that a large-scale dataset enables the network to undergo adequate training, significantly improving prediction performance.
### _Grasping Performance_
While testing on the dataset offers a preliminary assessment of the predictive power of different models, the goal is to evaluate the performance of the trained model in actual grasping trials. Therefore, we perform grasping tests on ten objects using both a simulation and an actual robot. Fig. 5 displays the ten objects used in our evaluation, comprising the first five from the training set and the remaining five from the test set. Fig. 6 shows the test scenarios for some objects in both simulation and real world. We conduct 50 grasping trials for each object with a randomized gripping force and employ prediction accuracy as a performance metric. This study uses GPG [28] to generate 6-DoF grasps from the single-view point cloud. GPG is a rapid solution that enables the sampling of parallel grasps from the 3D unknown point cloud. First, we evaluate four trained models in the simulation: Visual only, Tactile only, Calandra _et al._[2], and Ours. Subsequently, we deploy our proposed model with and without the migration strategy on a real robot, which are denoted by Ours (W-GAN) and Ours (WO-GAN) in Table III, respectively. Before lifting, the grasp result is evaluated by the corresponding model.
The grasping results are shown in Table III. Upon analyzing the simulation results, we observe that Visual only performs rather poorly. The main reason is that the gripping force changes during grasping trials, but the visual modality cannot perceive such slight variations. Comparatively, Tactile only shows relatively good prediction accuracy and generalization ability, which again demonstrates the importance of the tactile sense in delicate grasping tasks. The proposed model Ours achieves the highest accuracy and has strong generalization capability to unseen poses and objects, with a mean prediction accuracy of 95.6%. In actual tests, the mean accuracy of the proposed model employing the migration strategy is 83.2%, which is superior to the 56.2% accuracy attained via direct deployment. These findings further establish the efficacy of the migration strategy.
Fig. 5: In grasping trials, ten objects are utilized, with the first five selected from the training set and the final five from the test set.
Fig. 6: A subset of the objects tested in simulated and real-world environments.
\begin{table}
\begin{tabular}{c|c c c c c c c c c c c} \hline \hline & obj. 1 & obj. 2 & obj. 3 & obj. 4 & obj. 5 & obj. 6 & obj. 7 & obj. 8 & obj. 9 & obj. 10 & Mean \\ \hline \multirow{4}{*}{Simulation} & Visual only & 58 & 52 & 42 & 64 & 60 & 34 & 28 & 26 & 32 & 30 & 42.6 \\ & Tactile only & 78 & 72 & 60 & 82 & 86 & 68 & 72 & 68 & 62 & 74 & 72.2 \\ & Calandra et al. [2] & 90 & 84 & 80 & 96 & 100 & 90 & 82 & 86 & 70 & 78 & 85.6 \\ & Ours & **100** & **92** & **96** & **100** & **100** & **100** & **90** & **94** & **84** & **100** & **95.6** \\ \hline \multirow{2}{*}{Real} & Ours (WO-GAN) & 64 & 52 & 48 & 62 & 68 & 44 & 56 & 66 & 48 & 54 & 56.2 \\ & Ours (W-GAN) & **86** & **78** & **70** & **92** & **96** & **80** & **82** & **84** & **70** & **94** & **83.2** \\ \hline \hline \end{tabular}
\end{table} TABLE III: The Prediction Accuracy of the Grasping Experiment on Ten Objects, Presented in Percentage Form.
Fig. 7: Delicate grasping experimental results. (a) Average gripping force. (b) Grasp success rate is defined as the percentage of successful grasps to the total number of grasps.
### _Delicate Grasping_
The ultimate objective of evaluating grasping outcomes is to facilitate the generation of optimal grasping strategies, including minimum force grasping. This paper designs a simplified rule to demonstrate the viability of the proposed model in delicate grasping experiments. We choose object 6 in Fig. 5 for our experiments on a real robot. Specifically, for a given grasp, the robot starts with a minimum force of 10N and increases it by 1N each time, up to a maximum force of 30N. During the process, the robot maintains a constant grasp pose and performs a lifting action when the model predicts a successful grasping result or the maximum gripping force is reached. Meanwhile, grasping with a fixed gripping force of 10N or 30N serves as the control group. We sample 10 grasp configurations, and 50 grasping trials are carried out for each configuration. The average value of the final gripping force and the grasp success rate are recorded.
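This incremental-force rule amounts to the small control loop sketched below; the robot and predictor interfaces are passed in as callables, and all names are illustrative rather than part of the actual implementation.

```python
F_MIN, F_MAX, STEP = 10.0, 30.0, 1.0          # newtons

def delicate_grasp(close_gripper, capture, predict_success, lift):
    """Tighten 1 N at a time until the model predicts a stable grasp
    or the 30 N budget is reached, then lift."""
    force = F_MIN
    while True:
        close_gripper(force)
        rgb, tactile = capture()
        if predict_success(rgb, tactile) or force >= F_MAX:
            break
        force += STEP
    return lift(), force

# Mock demo: pretend the grasp becomes stable once the force reaches 17 N.
state = {"force": 0.0}
def close(f): state["force"] = f
ok, f = delicate_grasp(close, lambda: (None, None),
                       lambda rgb, tactile: state["force"] >= 17.0,
                       lambda: True)
print(ok, f)   # -> True 17.0
```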
We show the results in Fig. 7. For most grasping trials, the fixed policy with a 30N gripping force consistently yields the highest success rate, indicating that a higher gripping force generally leads to more stable grasps.
In contrast, our proposed model enables the robot to grasp objects with significantly less force, while still maintaining a similar success rate. Additionally, for unstable grasp configurations like grasp 5 in Fig. 7, the fixed policy using 30N results in a low success rate, whereas the regrasping policy based on grasp-stability evaluation achieves a relatively higher success rate.
### _Tactile Modal Migration_
To evaluate the performance of the tactile transfer strategy both qualitatively and quantitatively, we utilize the trained generation model to produce images on the evaluation set and employ the SSIM metric to quantify the similarity between generated and simulated images.
We compute SSIM scores between the generated and simulated tactile images on the validation set, resulting in an average score of 0.991, indicating a high degree of similarity. Additionally, we present five sets of paired tactile images in Fig. 8, with the corresponding SSIM scores displayed in Fig. 9.
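This SSIM evaluation can be reproduced with a few lines using `scikit-image`; the uint8 RGB image convention is assumed.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def mean_ssim(generated, simulated):
    """Average SSIM over paired tactile images (uint8 HxWx3 arrays)."""
    scores = [ssim(g, s, channel_axis=-1, data_range=255)
              for g, s in zip(generated, simulated)]
    return float(np.mean(scores))

# e.g. mean_ssim(G(real_val_images), sim_val_images) reproduces the ~0.991 above
```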
## VI Conclusion
This paper proposes an attention-guided cross-modality fusion network to assess grasp stability. This model is trained with synthetic visual and tactile images and then effectively deployed on a real robot using domain randomization and domain adaptation techniques. The experimental results show that our suggested model outperforms the direct and co-attention fusion methods by approximately 12% and 8%, respectively, on a publicly available small-scale dataset. Furthermore, the simulation and real-world grasping trials yield average prediction accuracies of 95.6% and 83.2%, respectively. The experimental findings demonstrate the effectiveness and efficiency of the proposed fusion method and transfer strategy in grasp stability evaluation tasks.
In future work, we will further investigate more effective transfer strategies for visual and tactile modalities. We also plan to introduce reinforcement learning to achieve minimum force grasping of unknown objects based on current work.
## Acknowledgment
The authors would like to express our most sincere gratitude to Zilin Si from Carnegie Mellon University and Shaoxiong Wang from Massachusetts Institute of Technology, who provided many valuable suggestions regarding tactile simulation.
|
2306.16496 | Exploring chemical compound space with a graph-based recommender system | With the availability of extensive databases of inorganic materials,
data-driven approaches leveraging machine learning have gained prominence in
materials science research. In this study, we propose an innovative adaptation
of data-driven concepts to the mapping and exploration of chemical compound
space. Recommender systems, widely utilized for suggesting items to users,
employ techniques such as collaborative filtering, which rely on bipartite
graphs composed of users, items, and their interactions. Building upon the Open
Quantum Materials Database (OQMD), we constructed a bipartite graph where
elements from the periodic table and sites within crystal structures are
treated as separate entities. The relationships between them, defined by the
presence of ions at specific sites and weighted according to the thermodynamic
stability of the respective compounds, allowed us to generate an embedding
space that contains vector representations for each ion and each site. Through
the correlation of ion-site occupancy with their respective distances within
the embedding space, we explored new ion-site occupancies, facilitating the
discovery of novel stable compounds. Moreover, the graph's embedding space
enabled a comprehensive examination of chemical similarities among elements,
and a detailed analysis of local geometries of sites. To demonstrate the
effectiveness and robustness of our method, we conducted a historical
evaluation using different versions of the OQMD and recommended new compounds
with Kagome lattices, showcasing the applicability of our approach to practical
materials design. | Elton Ogoshi, Henrique Ferreira, João N. B. Rodrigues, Gustavo M. Dalpian | 2023-06-28T18:40:38Z | http://arxiv.org/abs/2306.16496v1 | # Exploring chemical compound space with a graph-based recommender system
###### Abstract
With the availability of extensive databases of inorganic materials, data-driven approaches leveraging machine learning have gained prominence in materials science research. In this study, we propose an innovative adaptation of data-driven concepts to the mapping and exploration of chemical compound space. Recommender systems, widely utilized for suggesting items to users, employ techniques such as collaborative filtering, which rely on bipartite graphs composed of users, items, and their interactions. Building upon the Open Quantum Materials Database (OQMD), we constructed a bipartite graph where elements from the periodic table and sites within crystal structures are treated as separate entities. The relationships between them, defined by the presence of ions at specific sites and weighted according to the thermodynamic stability of the respective compounds, allowed us to generate an embedding space that contains
vector representations for each ion and each site. Through the correlation of ion-site occupancy with their respective distances within the embedding space, we explored new ion-site occupancies, facilitating the discovery of novel stable compounds. Moreover, the graph's embedding space enabled a comprehensive examination of chemical similarities among elements, and a detailed analysis of local geometries of sites. To demonstrate the effectiveness and robustness of our method, we conducted a historical evaluation using different versions of the OQMD and recommended new compounds with Kagome lattices, showcasing the applicability of our approach to practical materials design.
## 1 Introduction
The set of all admissible compounds results from all possible combinations of atoms, compositions[1] and crystal structures[2] (ACS[3]). The combinatorially large nature of this space makes it extremely difficult to explore thoroughly. A naive approach to expanding the set of known materials would be a trial-and-error methodology across all possibilities, but this is rendered impractical by resource limitations, a constraint that holds true even for _in silico_ exploration. Despite the significant recent advancements in high-throughput infrastructure and computational power, the most effective strategy for researchers remains the identification of patterns within known compounds to effectively restrict this expansive search space.
In the past, there were some attempts to formalize such patterns in the form of empirical geometrical rules, such as the Goldschmidt tolerance factor[4] for perovskites, and the Hume-Rothery rules for the formation of substitutional solid solutions in metallic systems[5]. However, when these empirical rules were formulated, the available dataset of materials was considerably limited compared to the vast repositories we have today. The advent of comprehensive materials databases, such as the Open Quantum Materials Database (OQMD)[6], Aflowlib[7], Materials Project[8], and NOMAD[9] has significantly expanded the compounds'
knowledge base. This broadens the prospects for both finding new empirical rules and improving the existing ones. For instance, a larger dataset of compounds allowed Bartel et al [10] to propose a new and more accurate tolerance factor for perovskites, while Pei et al [11] found new descriptors that predicted the stability of high-entropy alloys beyond the Hume-Rothery rules. Yet, these approaches are typically limited to a specific subset of compounds constrained by a crystal structure.
In recent years, inspired by data-driven methodologies, researchers have proposed innovative strategies to narrow the search space across a broader set of compounds. These strategies can be broadly categorized into iterative and non-iterative methods. Iterative methods employ a cyclical process of exploration and exploitation, with examples including active learning [12], genetic algorithms [13], and iterative ion substitution [14]. Non-iterative methods, on the other hand, build models from the entirety of the available data. Notable examples include the tensor-based recommender system proposed by Seko et al [15, 16], and state-of-the-art graph-based machine learning models that predict the formation energy of unexplored compounds, thereby helping to restrict the search space [17, 18, 19].
Another context where one has to cope with an enormous informational space is the realm of digital life. There users are routinely presented with a vast array of options, be it media to consume or products to purchase. They have to parse through all that information and make decisions that are most aligned with their interests. Some of the most popular information filtering strategies are known as recommender systems [20] (RS). These attempt to simplify the decision-making process by first trying to identify items that are perceived to most closely align with the user's interests and then suggesting these to the user.
Collaborative filtering, a method used by RSs [21], makes predictions about the interests of a user by collecting preferences from many users. This process can be visualized and modeled using a bipartite graph, where one set of nodes represents the users, and the other set represents the items. The task of the RS then becomes a link prediction problem: based on the existing structure of the graph and the preferences of similar users, which new user-item links are most likely to form?
In this study, inspired by information filtering ideas, we propose a new way of looking at chemical compound space and apply recommender system (RS) principles to the search for new materials. We construct a bipartite graph to depict the relations between periodic table elements and local crystal environment geometries (ignorant of chemistry). By then generating an embedding space from this graph, we are able to explore the affinities between ions and local environments, and eventually spot new stable materials.
In the remainder of this text we will use the expression _anonymous motif sites_ (AM sites) to refer to the local crystal environment geometries mentioned in the previous paragraph. We discuss AM sites in detail in the Methods section. Within our model, an ion-AM site link means that the ion occupies that AM site in one or more materials of the set of compounds used to construct the graph. The weight assigned to each link corresponds to the respective compound's thermodynamic stability, as derived from Density Functional Theory (DFT). By embedding this representation into a vector space, we were able to discern chemical and structural patterns and develop a RS that recommends new ion-site occupations. These recommendations may result in the formation of stable compounds within a desired prototype crystal structure, as we demonstrate in the Results and Discussion section for the case of CsV\({}_{3}\)Sb\({}_{5}\) [22, 23].
Our methodology notably differs from previous machine learning and data-driven methods that have generated element embeddings from the periodic table.[24, 25, 26, 27] Unlike those approaches, our embedding for ions is constructed exclusively from the data of ion-site occupancies in known compounds, without considering any compositional descriptor or chemical features from the elements themselves. This unique approach emphasizes the incorporation of site information within the embedding and, as a consequence, we can not only delve into ion-ion relationships within the embedding, but also exploit and analyse the ion-site relationships.
## Methods
### The Link Prediction Task in Bipartite Graphs
Link prediction can be formally described as follows: consider a graph \(G=(V,E^{+})\), where \(V\) is the set of nodes and \(E^{+}\) the set of edges. In the context of a recommender system, \(G\) is a bipartite graph with \(V=U\cup P\) and \(U\cap P=\emptyset\). Here, \(U\) represents the set of users and \(P\) the set of products, for example. Each edge connects a vertex in \(U\) to a vertex in \(P\), with edges \((u,p)\in E^{+}\) representing user-product interactions.
The set of all possible but non-existing edges is denoted by \(E^{-}\), such that \(E^{+}\cap E^{-}=\emptyset\). The link prediction task can then be formulated as a binary classification problem, where all allowed edges or pairs of nodes \((u,p)\in U\times P\) are associated with a label \(y_{up}\in\{0,1\}\), indicating the absence or presence of an edge between \(u\) and \(p\), respectively.
The objective of the prediction task is to learn a function \(f:U\times P\rightarrow[0,1]\) that approximates the probability \(P(y_{up}=1\mid u,p)\) of an edge existing between nodes \(u\) and \(p\).
\[P(y_{up}=1|u,p)\approx f(u,p) \tag{1}\]
A common approach for link prediction employs node embeddings, that are vector representations of nodes that capture their local graph structural information. Let \(\phi:V\rightarrow\mathbb{R}^{d}\) be a function mapping each node \(v\in V\) to a \(d\)-dimensional vector \(\phi(v)\). This function outputs vector representations for both \(U\) and \(P\) (\(\mathbf{v}_{u}\) and \(\mathbf{v}_{p}\), respectively). The similarity between the embeddings of two distinct nodes can be computed using a similarity function \(s:\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}\), such as the dot product or the cosine similarity. This can then be used as a proxy for the likelihood of a link existing between them, or used as an input to a second trained model \(m\), such as a decision tree or a logistic regression, responsible for defining the similarity threshold that delineates the existing edges from the non-existing edges. The sequence of transformations from the node representations to the final link prediction function \(f\) is represented in Figure 1.
To train a link prediction model \(f\), a training dataset \(\mathcal{D}=\left\{(u_{i},p_{i},y_{u_{i}p_{i}})\right\}_{i=1}^{N}\) of \(N\) instances is constructed from positive examples \((u,p,y_{up}=1)\) drawn from the true edges \(E^{+}\) and negative examples \((u,v,y_{up}=0)\) drawn from the non-existing edges \(E^{-}\). The model parameters are optimized by minimizing a global loss function \(L\) that measures the discrepancy between the predicted probabilities \(f(u,p)\) and the true labels \(y_{up}\). The edges from the set of positive examples are removed from \(G\) to avoid data leakage during the optimization and training of \(m\). The dataset \(\mathcal{D}\) is partitioned into training, validation, and test sets, ensuring a balanced distribution of positive and negative samples. The global loss function can be written as
\[L=\sum_{(u,p)\in\mathcal{D}}\mathcal{L}(f(u,p),y_{up}) \tag{2}\]
Figure 1: Visualization of the link prediction task for recommender systems (\(RS\)), encapsulated by function \(f\). The goal is to suggest new candidate connections between a pair of nodes, represented in deep red and deep yellow. The function \(f\) is composed of two components. Firstly, the nodes are processed by an embedding function \(\phi\) which converts each node into a vector representation. Then, these vectors are used as input for a similarity function \(s\). The output from the similarity function is then processed by a model \(m\), which finally produces the recommendation. Through this process, the recommender system is capable of suggesting new links between nodes.
where \(f(u,p)\) is the predicted probability of edge existence, \(y_{up}\) is the true label, and \(\mathcal{L}\) is the loss function defined by the model \(m\).
The learned function \(f\) is evaluated on the test set to estimate its generalization ability to unseen data. Ultimately, \(f\) is used to estimate the probability of all edges \((u,p)\in E^{-}\) for new recommendations.
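To make this pipeline concrete, the following minimal Python sketch illustrates the composition of Figure 1. It is illustrative rather than the exact implementation: the embedding lookup `phi`, the training pairs, and the choice of a decision tree for \(m\) are stand-ins for the components defined above.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def train_link_predictor(phi, pairs, labels):
    """phi: dict mapping each node to its d-dimensional vector;
    pairs: (u, p) node pairs drawn from E+ and E-; labels: y_up in {0, 1}."""
    X = np.array([[cosine_similarity(phi[u], phi[p])] for u, p in pairs])
    m = DecisionTreeClassifier().fit(X, np.array(labels))

    def f(u, p):
        # f(u, p) approximates P(y_up = 1 | u, p)
        x = np.array([[cosine_similarity(phi[u], phi[p])]])
        return m.predict_proba(x)[0, 1]

    return f
```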
### Building the Recommender System
#### The Data
We utilized two versions of the OQMD database: version 1.4 from 2020 and version 1.5 from 2021, denoted as OQMD\({}^{\text{past}}\) and OQMD\({}^{\text{current}}\), respectively. Each version was subjected to the following filters:
1. Non-unary materials with negative formation energy (\(E_{f}<0.0\) eV) were selected. For unary materials, no restrictions were applied as the energy above hull \(E_{\text{hull}}\) is equivalent to \(E_{f}\).
2. Only unary, binary, ternary, and quaternary structures with 24 or fewer sites in the unit cell were considered. This step was applied to try to eliminate structures with a high number of low-symmetry non-equivalent occupied sites, such as in amorphous phases.
3. Duplicate entries were removed, i.e., data entries reported by OQMD as repeated for the same compound (indicated by the column _duplicate_of_id_).
After applying these filters, OQMD\({}^{\text{past}}\) contained 307,912 entries and OQMD\({}^{\text{current}}\) contained 451,522 entries. The energy above hull \(E_{\text{hull}}\) and the crystal structure were extracted for each compound in both databases.
Our first objective is to build a recommender system using OQMD\({}^{\text{past}}\) and then validate it by verifying two different things: (i) how much the recommendations for ion-site occupations restrict the search space; (ii) how many of the thermodynamically stable compounds from \((\mathrm{OQMD}^{\mathrm{current}}-\mathrm{OQMD}^{\mathrm{past}})\) can be found in this restricted search space. For simplicity, we carried out the latter validation experiment on a subset of cubic halide perovskites, rather than using the entire set of compounds \(\mathcal{C}\). This process, often referred to as "historical validation", also helped identify the optimal set of hyperparameters for the model and the graph \(G\) construction.
Following this validation step, we built a recommender system using \(\mathrm{OQMD}^{\mathrm{current}}\) data and generated recommendations for ion-site occupations. These recommendations resulted in the proposal of new compounds in a crystal structure prototype. Subsequently, we conducted Density Functional Theory (DFT) calculations, using the same parameters as OQMD, and utilized OQMD's convex hull constructions to calculate the recommended compounds' \(E_{\mathrm{hull}}\). This allowed us to infer the thermodynamic stability of the new compounds and thereby validate the RS recommendations.
#### Anonymous Motifs (AMs)
Here we present the definition of an Anonymous Motif (AM) since, to the best of our knowledge, a formal mathematical definition for AMs does not exist in the crystallography literature.
An AM is defined solely by the geometric arrangement and connectivity of atomic positions in space, disregarding the chemical identity of the atoms and, consequently, the compound's stoichiometry. Given a compound's crystal structure, we first disregard its composition and reduce it, when possible, to a primitive cell \(p\). Then, we use spglib[28] to analyze the symmetry of the resulting structure and of its sites. With the symmetry analysis data, we then define the AM as the following tuple:
\[AM=(sg,n_{s},Wy,l) \tag{3}\]
where:
* \(sg\) is the space group number (integer) of \(p\)
* \(n_{s}\) is the number of sites in \(p\) (integer)
* \(Wy\) is a list of non-equivalent sites in \(p\), given by their respective Wyckoff positions, represented as tuples of the form \((n_{i},sym_{i})\), where \(n_{i}\) is the occupancy count of site \(i\) (integer) and \(sym_{i}\) is the Hermann-Mauguin symbol for the point group symmetry of site \(i\) (string) in \(p\).
* \(l\) is a subgroup number (integer), assigned after using pymatgen's StructureMatcher[29] to further differentiate the \(p\)'s with the same \((sg,N,Wy)\) but with different coordination environments. This is a "dummy index", as it is used purely to differentiate otherwise identical AM labels that represent different coordination environments.
As an illustrative example, we can consider the representation for cubic perovskites: \((221,5,[(1,\text{m-3m}),(1,\text{m-3m}),(3,4/\text{mmm})],0)\). The three labels in \(Wy\) represent the cuboctahedral A-site, the octahedral B-site, and the linear X-site, respectively (as shown in Figure 2). As the chemical identities are disregarded, both simple cubic perovskites and double perovskites would be grouped together into the same AM. By using AMs, one can reduce the complexity of the materials space by identifying common coordination patterns and geometrical motifs shared among different compounds. They provide a standardized way of categorizing materials based on their crystal structures and occupied sites.
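To make the definition concrete, the sketch below outlines how the \((sg,n_{s},Wy)\) part of an AM label could be computed with spglib. It is illustrative only: it omits the subgroup index \(l\), which in our workflow requires an additional comparison with pymatgen's StructureMatcher, and the symmetry-analysis settings shown are assumptions.

```python
import spglib

def am_label(lattice, frac_positions, numbers, symprec=1e-5):
    """Compute the (sg, n_s, Wy) part of an AM label; the subgroup
    index l (a StructureMatcher pass) is omitted here."""
    # Disregard chemistry: map every atom to a single dummy species,
    # then reduce to a primitive cell where possible.
    anonymous = (lattice, frac_positions, [1] * len(numbers))
    primitive = spglib.find_primitive(anonymous, symprec=symprec) or anonymous
    dataset = spglib.get_symmetry_dataset(primitive, symprec=symprec)
    n_s = len(primitive[2])
    # One (occupancy count, site point-group symbol) tuple per
    # crystallographically non-equivalent site.
    wyckoff = []
    for rep in sorted(set(dataset["equivalent_atoms"])):
        count = sum(1 for e in dataset["equivalent_atoms"] if e == rep)
        wyckoff.append((count, dataset["site_symmetry_symbols"][rep]))
    return (dataset["number"], n_s, sorted(wyckoff))
```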
#### AM sites
We define \(\mathcal{C}\) as the set of all unique compounds \(c\) from the OQMD data. Let's denote the set of all AMs found for all compounds in \(\mathcal{C}\) as \(AM^{\mathcal{C}}\). Now, we can define the set \(S^{\mathcal{C}}\) of all sites of all AMs in \(AM^{\mathcal{C}}\).
\[S^{\mathcal{C}}=\{(sg,n_{s},n_{i},sym_{i},l)\mid\forall AM\in AM^{\mathcal{C}},(n_{i},sym_{i})\in Wy\} \tag{4}\]
As an example, the AM site \((227,2,2,-43m,0)\) groups together all sites of silicon, zincblende, chalcopyrite, stannite and kesterite structures, as shown in detail in Figure 3. For statistical reasons, we only considered AMs with more than 5 compounds.
#### Defining the Bipartite Graph \(G^{T}\)
We denote the set of all ions in the chemical compositions of all compounds in \(\mathcal{C}\) as \(I\). The bipartite graph is constructed with vertices divided into two disjoint sets, \(I\) and \(S^{\mathcal{C}}\), where each edge connects a vertex in \(I\) to one in \(S^{\mathcal{C}}\). The graph is represented as \(G^{T}=(I,S^{\mathcal{C}},E^{+},W^{T})\), where \(E^{+}\) is the set of existing edges, and \(W^{T}:E^{+}\rightarrow\mathbb{R}\) is a Boltzmann-like weight function defining the edge weights.
The link \((i,s)\) is added to \(E^{+}\) if the occupation of an AM site \(s\) by an ion \(i\) occurs in a compound \(c\). The set of all links is thus defined as:
Figure 2: Illustrative example of the Anonymous Motif label of cubic perovskites. The elements of the \(Wy\) list represent the A, B, and X sites.
\[E^{+}=\{(i,s)\mid i\text{ occupies }s\text{ in compound }c,\forall c\in\mathcal{C}\} \tag{5}\]
A weight \(w^{c,T}(i,s)\) is defined as a function of the energy above the hull of the compound \(c\), \(E^{c}_{\text{hull}}\), using a Boltzmann factor with a parametric temperature \(T\):
\[w^{c,T}(i,s)=\exp(-E^{c}_{\text{hull}}/kT) \tag{6}\]
We created three graphs \(G^{T}\) with three parametric temperatures: 0 K, 100 K, and 300 K. The objective was to define a threshold for compounds \(c\in\mathcal{C}\): a 0 K parametric temperature includes only compounds on the convex hull (here defined as the compounds with \(E_{\text{hull}}\leq 30\) meV/atom), while \(T>0\) K parametric temperatures attribute a small weight even to compounds with large \(E_{\text{hull}}\). We wanted to investigate whether adding these compounds to the model would improve the recommendations. Larger temperatures penalize less the weights of the graph links associated with thermodynamically unstable compounds. Figure 4 illustrates this concept on a hypothetical binary convex hull.
Figure 3: AM site grouping example. As only the lattice geometry is taken into account by ignoring compositions and stoichiometries, the occupied sites of all the presented compounds get mapped to the same (227, 2, 2, -43m, 0) AM site.
Each edge's total weight \(W^{T}(i,s)\) is computed iteratively over all compounds \(c^{\prime}\) of the set of compounds \(\mathcal{C}^{\prime}\), where \(\mathcal{C}^{\prime}\) is the subset of all compounds that are formed by the occupation of \(s\) by \(i\):
\[W^{T}(i,s)=\sum_{c^{\prime}\in\mathcal{C}^{\prime}}w^{c^{\prime},T}(i,s) \tag{7}\]
We truncated the values of \(w^{c^{\prime},T}\): if \(w^{c^{\prime},T}(i,s)<0.001\), then we considered that \((i,s)\) is non-existent.
This workflow assigns larger weights \(W^{T}\) to common occupations, such as the X-site in cubic perovskites (O, \((221,5,3,(3,4/\text{mmm}),0)\)) that occur in numerous oxide cubic perovskites, per comparison with the less common occupations like (Ca, \((221,5,3,(3,4/\text{mmm}),0)\)), found in the anti-perovskite Ca\({}_{3}\)SnO. The weight \(W^{T}(i,s)\) quantifies the frequency of the \((i,s)\) occupation in \(\mathcal{C}\), weighted by the thermodynamic stability of the compounds featuring \((i,s)\).
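This weighting scheme translates directly into code. Below is a minimal sketch, assuming \(k\) is the Boltzmann constant in eV/K so that \(E_{\text{hull}}\) can be given in eV/atom; the `occupations` input, one \((i,s,E^{c}_{\text{hull}})\) triple per compound, is a placeholder for the data extracted from OQMD.

```python
import math
from collections import defaultdict

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def accumulate_weights(occupations, T, stable_cutoff=0.030, w_min=0.001):
    """occupations: iterable of (ion, am_site, e_hull) triples, with
    e_hull in eV/atom. Returns the accumulated weights W^T(i, s)."""
    W = defaultdict(float)
    for ion, site, e_hull in occupations:
        if T == 0:
            # T = 0 K keeps only compounds on (or near) the convex hull
            w = 1.0 if e_hull <= stable_cutoff else 0.0
        else:
            w = math.exp(-e_hull / (K_B * T))
        if w >= w_min:  # truncation: tiny weights are treated as no edge
            W[(ion, site)] += w
    return W
```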
Figure 5 pictorially illustrates the structure of the bipartite graph \(G^{T}\) built from the set of compounds \(\mathcal{C}\).
Figure 4: Illustration of the effect of the temperature \(T\) on the selection of the compounds \(\mathcal{C}\) used for building \(G^{T}\), and on the weights \(W^{T}(i,s)\) between the ion \(i\) and the site \(s\).
### The Embedding for Ions and AM sites
In order to generate embeddings, i.e., vector representations for each type of node (ions and AM sites), we conduct random walks across the graph. DeepWalk [30], a widely used algorithm, generates these random walks by starting at a randomly selected node and choosing the next node uniformly at random from its neighbors. This process is repeated for a fixed number of steps to generate a single random walk, and is performed multiple times to generate a set of random walks for the entire graph. The premise of DeepWalk is that nodes frequently co-occurring on these walks likely have similar roles or positions in the graph, hence the learned embeddings will capture these similarities. Once the random walks are generated, they are input into the Word2Vec [31] algorithm to learn the embeddings. These embeddings, which capture the local and global structure of the graph, can then be used for downstream tasks such as link prediction or node classification.
This methodology was adapted for a weighted graph. A random walk \(rw\) in the weighted bipartite graph \(G\) can be defined as a sequence of alternating vertices from the sets \(I\) and \(S^{\mathcal{C}}\), where the probability of transitioning from one vertex to another depends on the normalized
weights of the edges \(W^{T}(i,s)\) linking to the current graph node.
First, we calculate the transition probabilities for each ion and AM site at parametric temperature \(T\). For each ion \(i\in I\) and each AM site \(s\in S^{\mathcal{C}}\), the sum of weights \(W^{T}(i,s)\) for all edges connected to it is computed. Then, the transition probability \(P(i,s)\) is given as follows:
\[P(i,s)=\frac{W^{T}(i,s)}{\sum_{s\in S^{\mathcal{C}}}W^{T}(i,s)} \tag{8}\]
A single \(rw\) in \(G\) can be expressed as a sequence of alternating vertices from \(I\) and \(S\), \(rw=(i_{1},s_{1},i_{2},s_{2},\ldots,i_{n},s_{n})\), where \(i_{k}\in I\), \(s_{k}\in S^{\mathcal{C}}\), and the choice of \(s_{k}\) given \(i_{k}\) is determined by the transition probability \(P(i_{k},s_{k})\), and the choice of \(i_{k+1}\) given \(s_{k}\) is determined by the transition probability \(P(s_{k},i_{k+1})\). A total of 150 random walks per node, each with a walk length of 50 nodes, was used.
The set of all \(rw\) in the graph is defined as \(RW\). The Word2Vec function takes \(RW\) as input and a set of hyperparameters \(\theta\) and outputs an embedding function \(\phi_{\theta}=\text{Word2Vec}(RW,\theta)\).
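A possible implementation of this step is sketched below using gensim's Word2Vec. Here, `neighbors(node)` is assumed to return the adjacent nodes together with their transition probabilities from Eq. (8), and the Word2Vec hyperparameters shown are placeholders for the tuned values \(\theta\).

```python
import random
from gensim.models import Word2Vec

def weighted_walk(start, neighbors, length=50):
    walk = [start]
    for _ in range(length - 1):
        nodes, probs = neighbors(walk[-1])  # transition probabilities P
        walk.append(random.choices(nodes, weights=probs, k=1)[0])
    return walk

def build_embedding(all_nodes, neighbors, walks_per_node=150, length=50, dim=100):
    corpus = [
        [str(node) for node in weighted_walk(n, neighbors, length)]
        for n in all_nodes
        for _ in range(walks_per_node)
    ]
    # skip-gram Word2Vec over the walk corpus, as in DeepWalk
    model = Word2Vec(corpus, vector_size=dim, window=5, min_count=1, sg=1)
    return model.wv  # the embedding function phi: node -> R^d
```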
Figure 6a illustrates the embedding creation process.
#### Defining the function \(f\)
Our RS is encoded in the function \(f(i,s)\). The embedding function \(\phi_{\theta}\) maps a node (either an ion \(i\in I\) or an AM site \(s\in S^{\mathcal{C}}\)) to a \(d\)-dimensional vector representation. These representations are then processed by a similarity function, with the output fed into a binary classification model. We employ cosine similarity as the similarity metric and a decision tree (DT) as the model.
For training, we adapted the dataset \(\mathcal{D}\) creation methodology for weighted graphs. Edges \(E^{+}\) were sampled by their weight, with higher weight edges having a greater selection chance. Once selected, the edge was removed from the original \(G^{T}\) graph to prevent data leakage during embedding. The edge's weight was then used as a multiplier for instance repetition in the dataset. As an example, a positive edge with a weight of 10 would be repeated 10
times. Negative examples were uniformly sampled from \(E^{-}\) until an approximate 50% ratio of positive and negative examples was achieved.
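A simplified sketch of this construction is shown below. Unlike the actual procedure, it does not sample positive edges by weight or remove them from \(G^{T}\); it simply uses each positive edge's weight as a repetition multiplier and balances the classes with uniformly drawn negatives.

```python
import random

def build_dataset(W, negative_pool, seed=0):
    """W: dict {(ion, site): weight} over the positive edges E+;
    negative_pool: list of (ion, site) pairs from E-."""
    rng = random.Random(seed)
    data = []
    for edge, weight in W.items():
        # the edge weight acts as a repetition multiplier for positives
        data.extend([(edge, 1)] * max(1, round(weight)))
    n_neg = min(len(data), len(negative_pool))
    data.extend((edge, 0) for edge in rng.sample(negative_pool, n_neg))
    rng.shuffle(data)
    return data
```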
The dataset was divided into 80% for training, 10% for validation, and 10% for testing. Optimal hyperparameters for each temperature were determined using the Optuna python library [32]. We optimized the hyperparameters \(\theta\) of the embedding function \(\phi\), as these have been shown to significantly impact recommender systems [33]. The accuracy results for the optimal hyperparameters are presented in the Supplementary Information.
After training, the learned function \(f\) was used to predict the likelihood of the edges \((i,s)\in E^{-}\) between node pairs in the graph, and these newly recommended occupations were then used to build new compounds.
Figure 6: a) Graphical representation of the embedding creation workflow: given a graph \(G^{T}\), the set of weighted random walks is made and then used as input for Word2Vec. The result is an embedding function \(\phi_{\theta}\). b) Results obtained from the embedding’s vector representations of ions and AM sites.
## Results and Discussion
Establishing an embedding that encapsulates the structure of bipartite graphs \(G^{T}\) facilitated two distinct analyses concerning the relations among the graph entities. The first one pertains to the relationships between ions, while the second one pertains to the relationship between the ions and the AM sites.
The first was done by correlating the ion-ion embedding cosine similarities to ionic substitution in crystal structures. This approach enabled the validation of chemical trends within the continuous embedding space created solely with ion-site occupation data.
The second analysis involved the study of proximate AM sites surrounding the ions within the embedding space. This approach produced two distinct types of outcomes. Firstly, it enabled the verification of the overarching local geometries of the AM sites in association with the corresponding ion. Secondly, it allowed the exploration of novel ion-site occupancies informed by the trained RS function \(f\).
Figure 6b illustrates the results obtained from the embedding and further discussed below.
### Embedding Chemical Trends
After optimizing the hyperparameters of Word2Vec, an embedding was obtained from \(G^{0}\), exclusively considering compounds on the hull and built with OQMD\({}^{\text{current}}\). We applied the uniform manifold approximation and projection (UMAP) technique to reduce the dimensionality of the embedding [34] for a two-dimensional visualization (Figure 7).
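For reference, the projection can be reproduced with the umap-learn package using the parameters listed in the caption of Figure 7; in this sketch, `X` stands for the matrix stacking the embedding vectors of all ions and AM sites.

```python
import umap

# X: (n_nodes, d) matrix of ion and AM-site embedding vectors
reducer = umap.UMAP(metric="cosine", n_neighbors=250, min_dist=0.5,
                    densmap=True, random_state=0)
X_2d = reducer.fit_transform(X)  # 2D coordinates used in Figure 7
```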
The embedding shows that, with a few distinct exceptions, ions from the same family in the periodic table usually occupy the same group of sites, as they are close together in the embedding space projection of Figure 7. Within the same family cluster, there is also a larger proximity between ions with similar atomic radii. For instance, within the halogen family cluster, Chlorine (Cl) is more closely related to Bromine (Br) than to Iodine (I).
A similar pattern is also observed within transition metal clusters. Ions from the 5d
Figure 7: Embedding of \(G^{0}\) built with OQMD\({}^{\rm current}\). In light gray are the AM sites. The colored points represent the set of 89 ions, each color representing its corresponding family in the periodic table. UMAP parameters: metric: cosine, n_neighbours: 250, min_dist: 0.5, densmap: True, random_state: 0.
period display a greater likeness to 4d period ions than to those from the 3d period. This can be exemplified by 5d transition metals (Hf, Ta, W, Re, Os, Ir, Pt), which showcase similar atomic radii to 4d metals within the same group (Zr, Nb, Mo, Tc, Ru, Rh, Pd). This phenomenon can be explained by the lanthanide contraction effect [35]. As one moves through the lanthanide series (f-block elements) on the periodic table, the atomic radii tend to decrease due to the poor shielding of the increasing nuclear charge by the 4f electrons, resulting in a higher effective nuclear charge and a smaller atomic radius. This effect continues into the subsequent d-block elements, causing the 5d elements to have atomic radii similar to their 4d counterparts, despite being in a new period.
The embedding projection also illuminates trends related to the ionic character of ions. Anions, such as oxygen, nitrogen, carbon, hydrogen, halogens, and chalcogens, largely congregate on the left side, while cations typically occupy the right side. This pattern supports the notion that cations and anions tend to inhabit distinct sites in crystal structures.
The visualization also underscores a sizable and dense cluster surrounding oxygen (O). This clustering likely stems from the original ICSD [36] database that serves as the foundation for OQMD. ICSD primarily focuses on published crystallographic research, which heavily features oxides due to their prevalence and to their wide-ranging applications.
A unique trend is apparent with ions in the second period, or the first row of p-block elements (B, C, N, O, F), as these elements behave differently from their same-family heavier counterparts. This divergence arises from their increased electronegativity and smaller atomic radii, affecting bond polarity and lengths.
Certain ion-ion similarities spanning across different periodic families can be observed, often due to shared oxidation states. For instance, Ytterbium (Yb) and Europium (Eu) from the lanthanide series show a striking resemblance to alkaline earth metals, likely attributable to their shared +2 oxidation state. Thallium (Tl), despite belonging to a different group, demonstrates notable proximity to Sodium (Na) in the embedding, an occurrence that can be explained by Thallium's ability to adopt a +1 oxidation state. In addition, Scandium
(Sc) and Yttrium (Y), which typically present a +3 oxidation state, are found to be closely aligned with the lanthanides, further emphasizing the influence of overlapping oxidation states on ion-ion relationships.
Supplementary to the UMAP visualization, a dendrogram depicting the hierarchical clustering of the 89 ions within the embedding space is provided in the Supplementary Information.
### Embedding Local Geometry Trends
In further analysis, we generated heatmaps to provide insights into the local geometric configurations of the sites. Utilizing the CrystalNN method [37, 38], we computed scores representing the likelihood of each type of local geometry for every site. Subsequently, we divided the 2D UMAP space and computed the average score for each segment. To smooth out the data distribution, we applied a Gaussian fitting across the 2D space, focusing on the dimension representing the geometry of interest.
The results for two distinct types of local geometries--linear and square coplanar--are presented in Figure 8. The color variations surrounding the ions in these maps correspond to the degree to which ions occupy stable compound's sites with the given geometry.
It is important to note that, since the color encodes a site average over the area, a light color does not necessarily imply that the ion is incapable of occupying a site with the given local geometry. It implies instead that, statistically within the data set, that geometry is less common than other geometries.
The heatmap for linear local geometry shows that sites with this geometry tend to concentrate around anions such as halogens, H, and O. Cu and Ag also present a certain concentration around them, which could be attributed to their capacity to adopt the less common 1+ oxidation state. An example of such occupation is found in the delafossite class of compounds [39].
The square planar coordination is not as common as other geometries but is typically
found with certain transition metals that present a 2+ oxidation state. For instance, in certain crystal structures, Pd, Pt, and Cu can adopt square planar geometry. The heatmap confirms this and suggests that Ir and Zn, which can also present a 2+ oxidation state, can also adopt this geometry. One prominent such family is that of the copper oxide high-temperature superconductors [40, 41], where the square planar geometry and the Cu 2+ oxidation state are believed to be two crucial ingredients for the distinctive properties of these materials.
The heatmaps for other local geometries--cuboctahedral, octahedral, tetrahedral, trigonal planar, and single bound--are presented in the Supplementary Information.
### Ion-site Recommendations
#### Historical Validation on Cubic Halide Perovskites
Utilizing the methodology outlined in the previous section, we trained an RS using \(\mathcal{C}=\text{OQMD}^{\text{past}}\). We then validated the methodology using the set of new cubic halide perovskites in \((\text{OQMD}^{\text{current}}-\text{OQMD}^{\text{past}})\). Our objective was to assess the impact of the parametric temperature \(T\) on two key aspects of RSs: the degree to which it restricts the search space;
Figure 8: Heatmaps indicating local geometry coordination for linear and square coplanar configurations. The color scale corresponds to the density of sites within the given area of the UMAP 2D-transformed embedding space that conform to the respective geometry.
and the number of thermodynamically stable compounds from \((\mathrm{OQMD}^{\mathrm{current}}-\mathrm{OQMD}^{\mathrm{past}})\) that can be identified within this restricted search space by our framework.
Halide perovskites are important since they are a promising class of materials for optoelectronics and photovoltaics [42]. They possess unique crystal structures and exceptional optoelectronic properties, including high absorption coefficients and long charge carrier diffusion lengths. Halide perovskite solar cells have achieved impressive power conversion efficiencies, surpassing traditional thin-film technologies [43]. These materials can be fabricated using low-cost techniques and integrated into various devices such as LEDs and photodetectors. However, challenges remain in their stability, and ongoing research aims to improve their performance and understand their fundamental properties.
A comparison to a brute-force approach needs to be made when evaluating an RS. A brute-force search over cubic halide perovskites has a space size of 11,200 possibilities, provided we consider four halogens {F, Cl, Br, I}, only non-metallic ions, and ions from the 1st and 2nd rows for the A and B sites, while excluding noble gases and radioactive ions. Additionally, for the A site, we eliminated the option of p-orbital ions.
OQMD\({}^{\mathrm{past}}\) has a total of 135 cubic halide perovskites, 63 of which are thermodynamically stable (\(E_{\mathrm{hull}}\leq 30\mathrm{meV}/\mathrm{atom}\)). OQMD\({}^{\mathrm{current}}\) has 432 additional cubic halide perovskites, 50 of which are thermodynamically stable. In the context of a binary classification problem, these 50 compounds constitute the set of positive instances, while the remaining 382 instances are defined as the negative class of new thermodynamically unstable compounds. Since the positive and negative sets are unbalanced and false positives are undesired in the context of a RS, the precision was the chosen metric to evaluate this task. Precision is a measure of relevancy -- it quantifies the proportion of recommended items that are actually relevant to the user. The results for parametric temperatures of \(T=\{0K,100K,300K\}\) are presented in Table 1.
The results indicate that \(G^{100}\) provides the optimal balance between restricting the search space and offering the largest proportion of thermodynamically stable recommendations.
These findings suggest that including all compounds and weighting them by their \(E_{\rm hull}\) is the most effective strategy. The comparatively poorer performance of \(T=300K\) suggests that a higher temperature underpenalizes non-stable compounds.
Another trend that can be attributed to the parametric temperature is the \(\rm{ABX}_{3}\) search-space reduction: a higher \(T\) resulted in a higher number of possibilities. This result can be attributed to the higher connectivity of ions to AM sites in the resulting \(G^{T}\) graphs, which follows from \(W^{T}(i,s)\).
It is worth noting that some of the recommended compounds are not present in \(\rm{(OQMD^{\rm current}-OQMD^{\rm past})}\) and thus remain to be verified with DFT calculations. The lists of recommendations for the A and B sites are given in the Supplementary Information.
#### Recommendations for Kagome Lattice Prototype Structure
Using the validated methodology, optimal hyperparameters \(\theta\), and the best parametric temperature \(T=100K\), we trained a recommender system with \(\rm{OQMD^{\rm current}}\). This was then used to recommend ion-site occupations for the sites of the class of compounds \(\rm{AB}_{3}C_{5}\).
Their crystal structure presents a layered structure of Kagome lattices separated by A-ions in a honeycomb lattice. The Kagome lattice is made of three B-ions coordinated with one C-ion, sandwiched between two honeycomb lattices of C. The structure presents a symmetry
| | \(G^{0}\) | \(G^{100}\) | \(G^{300}\) | Brute-force |
| --- | --- | --- | --- | --- |
| A search space | 9 | 9 | 12 | 50 |
| B search space | 11 | 12 | 15 | 57 |
| ABX\({}_{3}\) search space | 392 (3.5%) | 428 (3.8%) | 708 (6.3%) | 11,200 (100%) |
| In \(\rm{(OQMD^{\rm current}-OQMD^{\rm past})}\) | 49 | 73 | 109 | 432 |
| \(E_{\rm hull}\leq\) 30 meV/atom | 13 | 23 | 28 | 50 |
| Precision | 26.5% | **31.5%** | 25.7% | 11.6% |

Table 1: Comparison of different models with different temperatures to the brute-force approach. The search space is the number of possible combinations given the ion-site suggestions made by the RS. The precision is given within \(\rm{(OQMD^{\rm current}-OQMD^{\rm past})}\) by the ratio of the thermodynamically stable compounds (true positives) to the total number of compounds suggested by the recommender (true positives + false positives).
given by the space group \(P6/mmm\).
Ortiz et al. first synthesized this prototype structure in 2019 with the compositions KV\({}_{3}\)Sb\({}_{5}\), RbV\({}_{3}\)Sb\({}_{5}\), and CsV\({}_{3}\)Sb\({}_{5}\) [23]. Since then, many characterizations of these compounds have been made, revealing properties such as chiral charge ordering [44], a large anomalous Hall effect [45], non-trivial topological band structures, and superconducting ground states [22, 46].
The prototype crystal structure maps to the four non-equivalent sites \((191,9,[(1,6/mmm),(1,6/mmm),(3,mmm),(4,3m)],0)\) AM label, as shown in Figure 9. In this prototype, A occupies the first site in the \(Wy\) list \((1,6/mmm)\), B occupies the third site \((3,mmm)\), and C occupies both the second and the fourth sites, \((1,6/mmm)\) and \((4,3m)\). For simplicity, we truncated the recommendations to an AB\({}_{3}\)C\({}_{5}\) stoichiometry, using the intersection of the recommendations of C for both sites (represented by yellow and green atoms in Figure 9).
We input these sites into the RS, which recommended ions to occupy them. Known compositions for this structure in OQMD\({}^{\rm current}\) were filtered out, and the remaining recommendations were ranked according to the sum of ion-site distances in the embedding space. The top 10 recommendations were then verified using DFT calculations, and the results are presented in Table 2. All ion-site recommendations are given in the Supplementary Information.
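For concreteness, the ranking step can be sketched as follows, assuming cosine distance in the embedding space and writing the four AM sites of the prototype as `site_A`, `site_B`, `site_C1`, and `site_C2` (these variable names are ours; `phi` is the embedding mapping a node to its vector).

```python
import numpy as np

def cos_dist(u, v):
    return 1.0 - float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def rank_ab3c5(candidates, phi, site_A, site_B, site_C1, site_C2):
    """candidates: (A, B, C) ion triples recommended by f. Returns the
    triples sorted by the sum of ion-site distances (smaller is better)."""
    def score(triple):
        a, b, c = triple
        return (cos_dist(phi[a], phi[site_A]) + cos_dist(phi[b], phi[site_B])
                + cos_dist(phi[c], phi[site_C1]) + cos_dist(phi[c], phi[site_C2]))
    return sorted(candidates, key=score)
```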
Figure 9: Prototype Crystal structure of compound CsV\({}_{3}\)Sb\({}_{5}\), with stoichiometry AB\({}_{3}\)C\({}_{5}\). The ions A and B are represented in blue and red, respectively. The C ion occupies two non-equivalent types of sites, represented in green and in yellow.
The results show that 6 out of the 10 recommendations are indeed stable (with \(E_{\rm hull}<30\) meV/atom). These 6 compounds were also reported by Jiang et al [47] as having \(E_{\rm hull}<30\) meV/atom, but at the cost of a total of 1386 high-throughput DFT calculations. Notably, these materials were entirely new to OQMD\({}^{\rm current}\).
Another significant finding is the novel recommendation of the ion family Ti, Hf, and Zr for the B site occupancy. This type of occupation was not previously present in OQMD\({}^{\rm current}\), making it a non-trivial recommendation that could not have been made simply by visually or intuitively filling the gaps in the periodic table.
## Conclusion
Leveraging data from OQMD, we successfully constructed a graph-based recommender system that incorporates information on the thermodynamic stability of compounds along with symmetry and local geometry of sites. The developed embedding enabled us to discern chemical and local geometrical trends through ion-ion similarities, and to extract ion-site recommendations from the ion-site relationships.
Our experiments with a parametric temperature that weights the \(E_{\rm hull}\) of compounds led
| A | B | C | Composition | \(\Sigma\)(ion-site distances) | \(E_{\rm hull}\) (meV/atom) |
| --- | --- | --- | --- | --- | --- |
| Rb | Ti | Sb | **RbTi\({}_{3}\)Sb\({}_{5}\)** | 0.125 | 0 |
| K | Ti | Sb | **KTi\({}_{3}\)Sb\({}_{5}\)** | 0.128 | 13 |
| Cs | Ti | Sb | **CsTi\({}_{3}\)Sb\({}_{5}\)** | 0.134 | 0 |
| Rb | Ti | Bi | **RbTi\({}_{3}\)Bi\({}_{5}\)** | 0.173 | 5 |
| K | Ti | Bi | **KTi\({}_{3}\)Bi\({}_{5}\)** | 0.176 | 23 |
| Cs | Ti | Bi | **CsTi\({}_{3}\)Bi\({}_{5}\)** | 0.182 | 0 |
| Rb | Hf | Sb | RbHf\({}_{3}\)Sb\({}_{5}\) | 0.219 | 182 |
| K | Hf | Sb | KHf\({}_{3}\)Sb\({}_{5}\) | 0.223 | 165 |
| Tl | Ti | Sb | TlTi\({}_{3}\)Sb\({}_{5}\) | 0.225 | 102 |
| Cs | Hf | Sb | CsHf\({}_{3}\)Sb\({}_{5}\) | 0.228 | 139 |

Table 2: Thermodynamic stability results for the top 10 compounds built with the ion-site suggestions of the RS built with \(T=100\) K and OQMD\({}^{\rm current}\). Thermodynamically stable compounds (\(E_{\rm hull}<30\) meV/atom) are shown in bold.
us to conclude that considering not only the compounds on the hull, but also those above the hull, can be beneficial to the identification of new compounds.
By employing a recommender system with optimally tuned hyperparameters and parametric temperature, we managed to propose novel compounds that are not present in the current version of OQMD. This demonstrates the potential of our approach to facilitate the discovery of new materials.
## Data Availability
The RS trained with OQMD\({}^{\rm{current}}\), along with the graphs \(G^{100}\) and \(G^{0}\) (as well as the data used to build them) and the ion and site embeddings, can be downloaded from our GitHub repository at [https://github.com/simcomat/ionic-sub-RS](https://github.com/simcomat/ionic-sub-RS). A detailed explanation of the process for obtaining recommendations for ion-site occupations and new compounds is provided in a Jupyter notebook available on the same page.
## Competing Interests
The Authors declare no Competing Financial or Non-Financial Interests.
## Author Contributions
E.O. conceived the work, built the model, ran the DFT calculations, and wrote the first version of the paper. H.F., J.N.B.R., and G.M.D. helped with refinements of the model. G.M.D. supervised the work. All authors contributed to the final version of the paper.
## Acknowledgement
The authors acknowledge the National Laboratory for Scientific Computing (LNCC/MCTI, Brazil) for providing HPC resources of the SDumont supercomputer. The authors also thank the Brazilian funding agencies FAPESP (grants 17/02317-2 and 18/11641-0), CNPq (309384/2020-6) and CAPES for financial support. This research has been conducted within the scope of the Materials Informatics INCT (CNPq).
|
2301.02444 | High-Performance Deterministic Concurrency using Lingua Franca | Actor frameworks and similar reactive programming techniques are widely used
for building concurrent systems. They promise to be efficient and scale well to
a large number of cores or nodes in a distributed system. However, they also
expose programmers to nondeterminism, which often makes implementations hard to
understand, debug, and test. The recently proposed reactor model is a promising
alternative that enables efficient deterministic concurrency. In this paper, we
show that determinacy does neither imply a loss in expressivity nor in
performance. To show this, we evaluate Lingua Franca (LF), a reactor-oriented
coordination language that equips mainstream programming languages with a
concurrency model that automatically takes advantage of opportunities to
exploit parallelism that do not introduce nondeterminism. Our implementation of
the Savina benchmark suite demonstrates that, in terms of execution time, the
runtime performance of LF programs even exceeds popular and highly optimized
actor frameworks. We compare against Akka and CAF, which LF outperforms by
1.86x and 1.42x, respectively. | Christian Menard, Marten Lohstroh, Soroush Bateni, Matthew Chorlian, Arthur Deng, Peter Donovan, Clément Fournier, Shaokai Lin, Felix Suchert, Tassilo Tanneberger, Hokeun Kim, Jeronimo Castrillon, Edward A. Lee | 2023-01-06T10:08:08Z | http://arxiv.org/abs/2301.02444v2 | # High-Performance Deterministic Concurrency using Lingua Franca
###### Abstract.
Actor frameworks and similar reactive programming techniques are widely used for building concurrent systems. They promise to be efficient and scale well to a large number of cores or nodes in a distributed system. However, they also expose programmers to nondeterminism, which often makes implementations hard to understand, debug, and test. The recently proposed reactor model is a promising alternative that enables efficient deterministic concurrency. In this paper, we show that determinacy implies neither a loss in expressivity nor a loss in performance. To show this, we evaluate Lingua Franca (LF), a reactor-oriented coordination language that equips mainstream programming languages with a concurrency model that automatically takes advantage of opportunities to exploit parallelism that do not introduce nondeterminism. Our implementation of the Savina benchmark suite demonstrates that, in terms of execution time, the runtime performance of LF programs even exceeds popular and highly optimized actor frameworks. We compare against Akka and CAF, which LF outperforms by \(1.86x\) and \(1.42x\), respectively.
_Keywords:_ coordination, concurrency, determinism, performance
## 1. Introduction
Theoreticians working on programming language semantics have long understood the value of determinism as well as the expressive power of nondeterminism in programming languages. In practice, however, nondeterminism today creeps into programming languages and frameworks not to benefit from its expressiveness, but rather because of a widespread perception that it is needed to get good performance on parallel hardware. We show in this paper that it is not necessary to sacrifice determinism to achieve performance. We do this by focusing on actor frameworks, which have proved popular and successful in many very demanding applications, but admit nondeterminism that is often not actually needed by their applications.
Exploiting parallel hardware such as multicore machines to improve performance is only possible when programs expose concurrency. Common abstractions for concurrency include threads (Hold et al., 2015), remote procedure calls (Sararaman et al., 2016), publish-subscribe (Saraman et al., 2016), service-oriented architectures (Saraman et al., 2016), and actors (Beng et al., 2016; Goyal et al., 2016). Each of these models has its own merits, but they all introduce nondeterminism: situations where, for a given state and input, the behavior of a program is not uniquely defined. While nondeterminism can be useful in some applications, most programming tasks benefit from more repeatable behavior. Deterministic programs are easier to understand, debug, and test (for each test vector, there is one known-good response). For nondeterministic programs, problematic behaviors might be harder to discover because they may only occupy a small fraction of the state space (Saraman et al., 2016). And reproducing failures can be extremely hard (Saraman et al., 2016; Sinclair et al., 2016; Sinclair et al., 2016)
because they might occur only when the system is under a specific amount of load (Kumar et al., 2017).
Determinism is a subtle concept (Kumar et al., 2017). Here, we focus on a particular form of determinism for programs, where a program is deterministic if, given the same inputs, it always produces the same outputs. This definition does not require that operations be performed in a particular order, and therefore is not at odds with concurrency and parallel execution. It is possible, but often not easy, to achieve this form of determinism even when using nondeterministic abstractions such as threads, actors, and asynchronous remote procedure calls. For simple enough programs, such as a chain of actors, if communication is reliable, then execution will be deterministic. Some of the benchmarks we compare against in this paper are deterministic in this way. As we will show, however, even slightly more complex communication structures result in nondeterminism that can be difficult to correct.
In this paper, we evaluate a language-based coordination approach to specifying concurrent software that preserves determinism by default and only admits nondeterminism when explicitly introduced by the programmer. The coordination language Lingua Franca (LF) (Lingua Franca, 2010), which is based on a concurrent model of computation called reactors (Lingua Franca, 2010; Lingua Franca, 2010), achieves this by analyzing program structure and ensuring that data dependencies are observed correctly at runtime. An LF program defines reactive software components called "reactors" and provides operators to compose them hierarchically (through containment) and bilaterally (via connections). Because the language supports both deterministic and nondeterministic concurrency, it provides a fertile ground for exploring the impact of determinism on performance.
The semantics of the deterministic subset of LF can be thought of as a deterministic variant of actors (Bauer et al., 2016; Lingua Franca, 2010; Lingua Franca, 2010). We show in this paper that it delivers performance comparable to popular nondeterministic realizations of actors on parallel hardware. Like Akka (Akka, 2017) and CAF (Akka, 2017)--the frameworks we compare against--LF orchestrates the execution of chunks of code written in conventional programming languages, allowing programmers to rely on the languages, libraries, and tools that they are comfortable with. Unlike the frameworks we compare against, LF is polyglot. It currently supports C, C++, Python, TypeScript, and Rust. This paper focuses on the runtime performance of the C++ target, which, as a core contribution of this paper, has been optimized to efficiently exploit concurrency on parallel hardware. Earlier work (Lingua Franca, 2010) has only reported preliminary performance indications of LF based on its C target, which is predominantly aimed at running on embedded systems.
At the core of LF's concurrency model is a logical model of time that gives a clear notion of simultaneity and avoids deadlocks using dependency analysis based on causality interfaces (Lobato et al., 2017). It is this timed semantics that enables efficient deterministic concurrency in LF. However, the benchmarks we compare against were created to evaluate actor frameworks, which have no temporal semantics. None of the benchmarks take advantage of the time-related features of LF; the temporal semantics is only used to deliver determinism.
_Contributions._ We show that the reactor-oriented paradigm as implemented in Lingua Franca enables efficient exploitation of parallel hardware without relinquishing determinism. For this, we explain the mechanisms through which LF programs expose concurrency; we present a language extension that allows for the definition of scalable programs; and we introduce an optimized C++ runtime for LF that enables efficient parallel execution. We further present an extensive evaluation based on the Savina benchmark suite (Lingua Franca, 2010), showing that our LF runtime outperforms Akka and CAF by \(1.86x\) and \(1.42x\), respectively.
_Outline._ We first motivate our work (Section 2) and then introduce LF (Section 3). We then go into detail about the concurrency in LF and introduce our optimized C++ runtime (Section 4). Next, we report benchmark results (Section 5), discuss related work (Section 6), and conclude (Section 7).
## 2. Motivation
The actor model is widely accepted and deployed in production for its promise to allow programmers to easily express concurrency, provide high execution performance, and scale well to large datasets and complex applications. Moreover, in contrast to thread-based programs, actor semantics prevents _low-level_ data races. However, like most message-passing paradigms, actors expose the programmer to nondeterminism in the form of _high-level_ data races (Kumar et al., 2017), a problem that becomes increasingly challenging to manage as the complexity of a program grows.
Consider the simple example in Figure 0(a). The Account actor manages the balance of a bank account that two users interact with. User A sends a deposit message increasing the account's balance and User B sends a withdrawal message decreasing the account's balance. If we assume that the
Figure 1. Example actor programs that may expose nondeterministic behavior.
balance is initialized to 0 and the account only grants a withdrawal if the resulting balance is not negative, then there are two possible behaviors. If A's message is processed first, then the withdrawal is granted to B. If B's message is processed first, then the withdrawal is denied. The actor model assigns no meaning to the ordering of messages. Therefore, there is no well-defined correct behavior for this example.
The reader may object that for an application like that of Figure 0(a), the order of transactions is intrinsically nondeterministic, and any additional nondeterminism introduced by the software framework is inconsequential. However, if we focus on testability, we see that even identical inputs can yield different results, making testing more difficult. If we focus on consistency, the problem that different observers of the same events may see different behaviors becomes problematic. In databases, it is common to assign time stamps to external inputs and to then treat those timestamps as a semantic property of the inputs and define the behavior of the database relative to those time stamps. We adopt this perspective in this paper, and rely on the definition of determinism given by Lee [41]: "determinism is a property of models, not of physical realizations," and "A model is deterministic if given all the _inputs_ that are provided to the model, the model defines exactly one possible _behavior_." If we define "inputs" in Figure 0(a) to be time-stamped user queries and "behavior" to be the sequence of actions taken by the Account, then it is reasonable to demand determinism.
Consider Figure 0(b), which has only one user. Even if this one user first sends a deposit and then a withdrawal message, the actor model does not guarantee that the receiving actor sees and processes the incoming messages in this order. While some actor frameworks, e.g., Akka and Erlang, guarantee in-order message delivery, others, e.g., AmbientTalk [77], expressly do not. Yet, even if the framework guarantees point-to-point in-order message delivery, this property is not transitive. If we add a Proxy, as shown in Figure 0(c), then we cannot make any assumptions about the order in which Account receives messages. This example further illustrates that composing actors can have unexpected side effects.
Consequently, implementing solutions to practical concurrency problems with actors can be challenging. Even seemingly simple concurrency problems like the one discussed above require high programming discipline, and solutions are typically difficult to maintain and tend to lack modularity. In addition, the inherent nondeterminism of actor frameworks makes it hard to verify such solutions. Erroneous behavior might only occur in a fraction of executions, and thus integration tests cannot reliably detect such "Heisenbugs" [61].
In a recent study, Bagherzadeh et al. [4] analyzed bugs in Akka programs that were discussed on StackOverflow or GitHub and determined that 14.6% of the bugs are caused by races. This makes high-level races the second most common cause of bugs in Akka programs after errors in the program logic. In a similar study of 12 actor-based production systems, Hedden and Zhao [32] determined that 3.2% of the reported bugs were caused by bad message ordering, 4.8% of bugs were caused by incorrect coordination mechanisms, 4.8% were caused by erroneous coordination at shutdown, and 2.4% of bugs were caused by erroneous coordination at startup. Note that these numbers only cover _known_ bugs in their studied projects and, as noted by the authors, the majority of the reported message ordering bugs belonged to the Gatling project because it already incorporated a debugging tool called Bita [73] that is designed to detect such bugs. We suspect that there are more undetected bugs in projects that do not use specialized debugging tools.
The actor community has addressed the inherent nondeterminism of actors and the resulting bugs by introducing better tools for analyzing and debugging actor programs. This includes TransPDOR [72], Bita [73], Actoverse [69], iDeA [55], CauDEr [39], and Multiverse debugging [75]. While these are valuable solutions, we argue that a programming model for expressing concurrent programs should provide deterministic semantics by default and allow the programmer to introduce nondeterminism only where it is desired and understood to do no harm. In such cases, the aforementioned tools for nondeterministic behavior can still be utilized to debug the implementation.
There are a number of ways to achieve deterministic concurrency, including Kahn process networks [35, 36], many flavors of dataflow models [21, 43, 62], physically asynchronous, logically synchronous models [68], synchronous-reactive languages [8, 24], and discrete-event systems [15, 23, 46, 79]. Lohstroh et al. [53] compare the reactor model to each of these, showing that it has many of their best features and
Figure 2: Example LF implementation of the actor program shown in Figure 0(a) and variations thereof.
fewer of their pitfalls. Lingua Franca builds on this reactor model because it is more expressive than some of the alternatives (e.g., Kahn networks) and is stylistically close to actors, which have proven effective in practice. In this paper, we show that the resulting determinism does not incur a performance penalty, but on the contrary, helps to achieve improved performance in most cases.
## 3. Introduction to Lingua Franca
Lingua Franca (LF) builds on the relatively new reactor-oriented programming paradigm. Intuitively, we can describe reactors as deterministic actors with a discrete event execution semantics and explicitly declared ports and connections. A logical timeline is used to order events and ensure a deterministic execution. As a polyglot language, LF incorporates code in a target programming language to implement the logic of each component. LF itself is only concerned with the coordination aspect of a program.
In this section, we introduce the core concepts of reactors and LF. Note, however, that a full discussion of LF, including its syntax and tooling, is beyond the scope of this paper. Instead, we base our discussion on LF's diagrammatic representation of programs, which gets synthesized from LF source code automatically (Lingua et al., 2017). A complete introduction to LF's textual syntax is given by Lohstroh et al. (Lohstroh et al., 2018).
### LF by Example
Figure 2a shows an LF implementation of the deposit/withdrawal example in Figure 1a. The program is assembled from three **reactor instances** userA, userB and account, shown as light gray boxes in the diagram. Note that userA and userB are instances of the same **reactor class** User and hence share the same structure and functionality. In the diagram, black triangles denote **ports**. In this example, both users have an output port which is connected to a respective input port at the account. These ports and connections allow the users to send requests to the account.
In LF, all computation is performed in reactive code segments called **reactions** that are implemented in an arbitrary target language. In the diagram, reactions are represented by dark gray chevrons. All reactions must explicitly declare their triggers, other dependencies and potential effects. In the example in Figure 2a, both users define a reaction that is triggered by a **timer**. Timers are an LF construct used to produce events at regular intervals or once at a specific time. The timer of userA is configured to trigger an event one second after program startup; the timer of userB is configured to trigger an event two seconds after program startup. The corresponding reactions simply send a deposit or withdrawal request by setting the user's output port.
The Account reactor defines two reactions, one for each of its inputs. Both reactions will simply try to apply the requested change to the balance, which is stored in a state variable local to the reactor instance. Note that reactors may define arbitrary state variables which are accessible by all their own reactions (which does not include reactions of contained reactors). In addition to state variables, reactors may also define parameters which can be set at instantiation. This mechanism allows the User reactor to be reusable, as the precise time at which the timer triggers (offset) and the amount to withdraw or deposit (value) are configurable at instantiation time.
The reader might notice that the separated reactions in account duplicate logic and are not a practical solution, in particular if there are many users. We choose this representation to keep our exposition simple and avoid a detailed discussion of LF syntax. Indeed, in LF a single reaction can bind to an arbitrary number of upstream ports.
When executed, the program will wait for 1 second before triggering the timer of the userA reactor and invoking its reaction. The event produced by this reaction will trigger reaction 1 in account, which is invoked immediately after the first reaction completes. Two seconds after program startup, userB will react and subsequently trigger reaction 2 in account. In this example, the deposit event (+20.0) occurs earlier than the withdrawal event (-10.0), and hence our execution semantics ensures that the account processes the deposit event before the withdrawal event, meaning the balance will not become negative. In a more realistic implementation, the two users would generate events sporadically and have their reactions triggered not by a timer but a physical action (see Section 3.3). However, using a timer greatly simplifies our exposition as we only have to consider a single _logical timeline_ along which events are ordered. Moreover, such timers can be used to create regression tests that validate program execution with specific input timings.
Note that even when the two events occur logically simultaneously, meaning that both reactions in the account reactor are triggered at the same logical time, the resulting program will be deterministic. All reactions within the same tag are executed according to a well-defined precedence relation. In particular, any reactions within the same reactor are mutually exclusive and executed following the lexical declaration order of the reactions in LF code. This order is also reflected by the numbers displayed on the reactions in the diagrams in Figure 2. More details on the precedence relation of reactions are given in Section 4.1.
To deliberately change the order in which events occur, a logical delay can be introduced in the program using a **logical action**, as shown in Figure 2c. In the diagram, actions are denoted by small white triangles. In contrast to ports, which allow relaying events logically instantaneously from one reactor to another, logical actions provide a mechanism for scheduling new events at a later (logical) time. Upon receiving an input, reaction 2 of the ProxyDelay reactor is triggered, which schedules its logical action with a configurable
delay. This creates a new event which, when processed, triggers reaction 1 of the ProxyDelay reactor, which retrieves the original value and forwards it to its output port.
If we assume that a delay of 2 seconds is used for scheduling the logical action, then the deposit message from userA will only arrive at the account 3 seconds after startup. Hence the deposit message will be processed _after_ the withdrawal message from userB, causing B's request to be denied.
It is important to note that all of the discussed examples are deterministic, regardless of the physical execution times of reactions, as all events are unambiguously ordered along a single logical timeline. The physical timing of the events, on the other hand, will be approximate. The contribution of this paper is to show that such determinism does not necessarily reduce performance and is also useful for applications that have no need for explicit timing.
### Logical and physical time
All events have an associated **tag**, which is used to order events on a logical timeline at runtime. In time-sensitive applications, tags are not purely used for logical ordering but also relate to physical time. By default, the runtime only processes the events associated with a certain tag once the current physical time \(T\) is greater than the time value of the tag \(t\) (\(T>t\)). We say that logical time "chases" physical time. The relationship between physical and logical time in the reactor model gives logical delays a useful semantics and also permits the formulation of deadlines. This timed semantics is particularly useful for software that operates in cyber-physical systems. For a more in-depth discussion of LF's timed semantics, the interested reader may refer to (Zhang et al., 2019).
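To make this concrete, the following minimal C++ sketch (ours; the type and function names are illustrative and not the runtime's actual API) captures this default rule: before the events at a tag are processed, execution waits until the physical clock has passed the tag's time value.

```
#include <chrono>
#include <thread>

// A tag carries the logical time value t of an event. (LF tags also
// include a microstep component, which we omit here for simplicity.)
struct Tag {
  std::chrono::system_clock::time_point time;
  bool operator<(const Tag& other) const { return time < other.time; }
};

// "Logical time chases physical time": wait until physical time T
// exceeds the tag's time value t; returns immediately if T > t already.
void wait_for_tag(const Tag& tag) {
  std::this_thread::sleep_until(tag.time);
}
```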
If an application has no need for any physical time properties, the coupling between physical and logical time can be turned off; in this case, the tags are used only to preserve determinism, not to control timing. Moreover, LF programmers are not required to explicitly control timing aspects of their programs. Delays can simply be omitted, for instance when scheduling an action, in which case the runtime will use the next available tag. As a consequence, untimed general-purpose programs can also benefit from the deterministic concurrency enabled by LF's timed semantics.
### Asynchrony and deliberate nondeterminism
The reactor model distinguishes logical actions and _physical_ ones. A logical action is always scheduled synchronously with a delay relative to the current tag. A **physical action** may be scheduled from asynchronous contexts; its event is assigned a logical time based on the current reading of physical time. Physical actions are the key mechanism for handling sporadic inputs originating from physical processes (such as users initiating withdrawal or deposit requests).
The assignment of tags to physical actions is nondeterministic in the sense that it is not defined by the program. However, once those tags are assigned, for example, to deposit or withdrawal requests by a user, the processing of the events is deterministic and occurs in tag order. Hence, the tags assigned to externally initiated events are considered as part of the _input_, and given this input, the program remains deterministic. This approach draws a clear perimeter around the deterministic and therefore testable program logic while allowing it to interact with sporadic external inputs.
Physical actions can also be used within the program itself, for example, to nondeterministically assign a new tag to a message received from another reactor. In this usage, physical actions provide a means for deliberately introducing actor-like nondeterminism into a program.
## 4. Efficient Deterministic Concurrency
LF programs are deterministic by default. This property is inherited from the reactor model that LF implements. Lohstroh et al. (Lohstroh et al., 2017) explain why reactors behave deterministically. Their argument can be adapted to the concrete context of the Lingua Franca language, but this is beyond the scope of this paper. Reactors are also concurrent, and, as we show in this paper, the exposed concurrency is sufficient for the runtime system to exploit multi-core hardware so effectively that it matches or exceeds the performance of fundamentally asynchronous and nondeterministic actor frameworks. In this section, we first show exactly how concurrency is exposed and then describe in more depth how our C++ runtime is implemented and how it utilizes parallel hardware.
### Parallelism
The use of statically declared ports and connections as the interfaces between reactors, as well as the declarations of reaction dependencies, distinguish reactors from more dynamic models like actors or other asynchronous message passing frameworks where communication is purely based on addresses. While the fixed topology of reactor programs is less flexible and limits runtime adaptation, it also provides two key advantages. First, it achieves a separation of concerns between the functionality of components and their composition. Second, it makes explicit at the interface level which dependencies exist between components. As a consequence, a dependency graph can be derived for any composition of reactors. The dependency graph is an **acyclic precedence graph (APG)** that organizes all reactions into a partial order that captures all scheduling constraints that must be observed to ensure that the execution of a reactor program yields deterministic results. Because this graph is valid irrespective of the contents of the code that executes when reactions are triggered, reactions can be treated as a black box. It is this property that enables the polyglot nature of LF and exposes the concurrency in the application.
Figure 3 shows the dependency graph for the program given in Figure 2c. The solid arrows represent dependencies
that arise because one reaction (possibly) sends data to the other. The dashed arrows represent dependencies that arise because the two reactions belong to the same reactor. Analogous to the behaviors of actors, reactions of the same reactor are mutually exclusive. The execution order is well-defined and given by the lexical declaration order of the reactions in LF code. This order is also indicated by the numbers in the reaction labels in Figure 2.
The dependency graph precisely defines in which order reactions need to be executed. Independent reactions may be executed in parallel without breaking determinism. For instance, the APG in Figure 3 tells us that reaction 1 of ProxyDelay and the reactions of userA and userB can all execute in parallel. Note that the dependency graph is required to be acyclic as any cycle would violate causality. The LF compiler ensures that a valid program has an acyclic dependency graph. Any dependency cycles in LF programs can be resolved by introducing a logical action and using it to schedule a new event at a future tag.
Since in LF all dependencies are statically declared, there is a lack of runtime agility compared to actors and similar models. The reactor model compensates for this with **mutations** that support runtime adaptations of the reactor topology and the implied dependency graph. However, LF does not fully implement mutations yet and a discussion of mutations is beyond the scope of this paper.
### Scalable Connection Patterns
Creating individual reactor instances, ports and connections becomes tedious for larger programs. To address this problem, we introduce an extension to the LF syntax that allows creating multiple ports or reactor instances at once. Further, we introduce an overloading of LF's connection operator to create multiple connections over multiports and banks at once. This mechanism allows realizing various complex connection patterns in a single line of code and, as it is fully parameterizable, allows LF programs to transparently scale to a given problem size without recompiling.
Consider a simple fork-join program in LF:
Footnote 1: Implementation details are omitted for brevity. Please refer to (Song et al., 2018) for an introduction to LF syntax.
```
 1 reactor Src(w: int(3)) {
 2   output[w] out: int
 3 }
 4 reactor Worker {
 5   input in: int
 6   output out: int
 7 }
 8 reactor Sink(w: int(3)) {
 9   input[w] in: int
10 }
11 main reactor(w: int(3)) {
12   src = new Src(w=w)
13   snk = new Sink(w=w)
14   wrk = new[w] Worker()
15   src.out -> wrk.in
16   wrk.out -> snk.in
17 }
```
The program defines a Src, a Worker, and a Sink reactor and an unnamed main reactor that assembles the program. Worker defines two individual ports of type int called in and out. Src and Sink use our syntax extension to each define a **multiport** of width \(w\), where \(w\) is a parameter and defaults to 3. The main reactor creates all reactor instances and connections. Concretely, it creates two individual instances of Src and Sink and uses our syntax extension to create a **bank** of worker reactors of width \(w\) (line 14). The two connection statements (lines 15, 16) establish \(w\) connections each, one for each pair of multiport and bank instance. The resulting connection pattern is illustrated in Figure 4a. Note that the precise number of workers is configurable via the \(w\) parameter of the main reactor, which can be specified when executing the program without recompiling. Hence the program can be configured to an arbitrary number of workers.
In this example, the source reactor produces three separate values to be sent to the workers. Instead, if we want to broadcast a single value to all workers, then we can use the broadcast syntax (...)+. Configuring the source reactor to use a single output (by setting \(w\)=1 in line 12) and changing line 15 to (src.out)+ -> wrk.in creates the pattern in Figure 4b.
Another common pattern that can be conveniently expressed using LF syntax is a cascade composition, illustrated by the following program:
```
1 main reactor(n: int(2)) {
2   src = new Src(w=1)
3   dst = new Sink(w=1)
4   wrk = new[n] Worker()
5   src.out, wrk.out -> wrk.in, dst.in
6 }
```
The connection operator sequences all ports listed on the left- and right-hand side, and connects the \(n\)th port on the left-hand side to the \(n\)th port on the right-hand side. By offsetting the left-hand side of the connection statement in line 5 with a single source port and appending the sink port to the right-hand side, we can effectively arrange the connections to form the cascade shown in Figure 4c.
The connection operator also connects multiports within banks. In this case, the operator will implicitly unfold all port instances on both sides of the connection to form a flat list of ports. The unfolding happens such that we first list all ports of the first bank instance, then all ports of the second instance, and so on. Consider the following program:
```
1 reactor Node(w: int(3)) {
2   input[w] in: int
3   output[w] out: int
4 }
5 main reactor(w: int(3)) {
6   nodes = new[w] Node()
7   nodes.out -> nodes.in
8 }
```
This will create the pattern shown in Figure 4d, which is not very useful. Using the interleaved modifier on either side of the connection in line 7, we can modify the unfolding
Figure 3. Reaction graph for the program in Figure 2c.
strategy to first list all first port instances within all bank instances, then the second port instances within all bank instances, and so on. This creates the fully connected pattern shown in Figure 4e.
All of the patterns discussed in this section are used extensively in our benchmark implementations in Section 5.
### Runtime Implementation
The execution of each LF program is governed by a runtime. Most importantly, the runtime includes a scheduler which keeps track of all scheduled future events, controls the advancement of logical time, and invokes any triggered reactions in the order specified by the dependency graph while aiming to exploit as much parallelism as possible. Lohstroh et al. have already sketched a simple scheduling algorithm for reactor programs (Lohstroh et al., 2017). In this section, we present a C++ implementation of this scheduling algorithm that aims at exploiting parallelism while keeping synchronization overhead to a minimum and avoiding contention on shared resources.
Figure 5 gives an overview of the scheduling mechanism used in our runtime. The scheduler keeps track of future events in the _event queue_ and processes them strictly in tag order. When processing an event, the scheduler first determines all reactions that are triggered by the event and stores them in the _reaction queue_. Any reactions in the reaction queue for which all dependencies are met (as indicated by the APG) are forwarded to the ready queue and then picked up for execution by the worker threads. If the executed reactions trigger any further reactions by setting ports, those reactions are inserted in the reaction queue. If a reaction schedules future events via an action, these new events are inserted into the event queue. Note that the scheduler always waits until all reactions at the current tag are processed before advancing to the next tag and triggering new reactions.
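The following toy C++ program (our illustration with made-up names, not the actual reactor-cpp code) mimics this three-queue mechanism; for brevity it executes same-level reactions sequentially and only models reactions that are already known to be triggered when an event is popped.

```
#include <functional>
#include <map>
#include <utility>
#include <vector>

using Tag = long;
using Reaction = std::function<void()>;

// Event queue: tag -> reactions triggered at that tag, each annotated
// with its level in the acyclic precedence graph (APG).
std::map<Tag, std::vector<std::pair<int, Reaction>>> event_queue;

void process_all_events() {
  while (!event_queue.empty()) {
    // events are processed strictly in tag order
    auto triggered = event_queue.begin()->second;
    event_queue.erase(event_queue.begin());
    // the reaction queue, here simply grouped by APG level
    std::map<int, std::vector<Reaction>> reaction_queue;
    for (auto& [level, body] : triggered) reaction_queue[level].push_back(body);
    for (auto& [level, ready] : reaction_queue) {
      // reactions of one level are independent; a real runtime hands them
      // to worker threads, this sketch simply runs them one after another
      for (auto& reaction : ready) reaction();  // may insert future events
    }
    // only now does the scheduler advance to the next tag
  }
}
```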
The most important task of the scheduler is to decide when any given reaction should be moved from the reaction queue to the ready queue. As the APG precisely defines the ordering constraints of reactions, reaction scheduling is closely related to DAG-based scheduling strategies (Bateni et al., 2015; Bateni et al., 2015). However, the APG is not equivalent to a task graph, as it may contain reactions that do not need to be executed. Most often, only a fraction of the reactions is triggered at a particular tag. Moreover, we do not know in advance precisely which reactions will be triggered for a given tag, as reactions may or may not send messages via their declared ports. As a consequence, an optimal schedule cannot be computed in advance.
To decide whether a given triggered reaction is ready for execution, we need to check if it has a dependency on any other reaction that is triggered or currently executing. To avoid traversing the APG at runtime, we utilize a simple heuristic. Concretely, we assign a _level_ (top level as defined in (Bateni et al., 2015)) to each reaction. Any reactions with the same level do not depend on each other and hence can be executed in parallel. Our scheduler then processes reactions going from one level to the next. Once all reactions within a level are processed, all triggered reactions in the next level are moved to the ready queue. This approach avoids the need for analyzing the APG during execution, but also falls short on exploiting all opportunities for parallel execution. For instance, this approach does not execute reaction 2 of ProxyDelay in parallel with reaction 2 of Account. Nonetheless, our evaluation shows that this strategy is sufficient to efficiently exploit parallelism in most cases. Given the extensive research on DAG-scheduling, we are confident that we can apply more complex strategies in future work to also account for the missed opportunities for exploiting parallelism.
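A minimal sketch of such a level assignment, computed as the longest distance from the sources of the APG (our illustration; the actual runtime derives the levels before execution starts):

```
#include <algorithm>
#include <cstddef>
#include <functional>
#include <vector>

// deps[i] lists the reactions that reaction i depends on. Reactions that
// receive the same level cannot depend on one another, so they can be
// executed in parallel.
std::vector<int> compute_levels(const std::vector<std::vector<int>>& deps) {
  std::vector<int> level(deps.size(), -1);
  // recursive longest-path computation; terminates because the APG is acyclic
  std::function<int(int)> visit = [&](int i) {
    if (level[i] >= 0) return level[i];
    int l = 0;
    for (int d : deps[i]) l = std::max(l, visit(d) + 1);
    return level[i] = l;
  };
  for (std::size_t i = 0; i < deps.size(); ++i) visit(static_cast<int>(i));
  return level;
}
```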
Another limitation of our scheduling approach is that the scheduler only executes reactions that are triggered at the same tag. In particular, this may hinder exploiting pipeline parallelism in programs that do not use logical delays to create separate pipeline stages. However, this limitation can be overcome by using a federated execution strategy as discussed by Bateni et al. (Bateni et al., 2015).
While the scheduling algorithm sketched in (Lohstroh et al., 2017) and refined in this section is relatively straightforward to implement, further optimizations were needed to achieve competitive performance. In the following, we detail the most important optimizations that we use in our C++ runtime.
Figure 4. Various connection patterns in Lingua Franca
Figure 5. Scheduling mechanism in the LF runtime.
_Coordinating worker threads._ In the above discussion we conceptually distinguished the scheduler from the workers. In an actual implementation, however, using a central scheduler and separate worker threads imposes a significant synchronization overhead. Instead, in our implementation, any of the worker threads can become the scheduler and move ready reactions to the ready queue or advance logical time to the next tag if all reactions have been processed. Furthermore, we exploit the fact that at any time we know the number of reactions to execute in parallel and use a counting semaphore to control the number of active workers.
_Lock-free data structures._ The three queues and other data structures that are required for bookkeeping (e.g., a list of all set ports) are shared across workers. Using mutexes for synchronization proved to be inefficient due to high contention on the shared resources, especially when many parallel reactions set ports or schedule actions. Instead, we utilize lock-free data structures where possible. For instance, the ready queue is implemented as a fixed size buffer paired with an atomic counter. Since we know precisely how many reactions can at most run in parallel (i.e. the maximum number of reactions in the APG that have the same level), we can fix the size of the queue. Every time new reactions are moved to the reaction queue, the atomic counter is set to the number of reactions in the queue. Each time a worker tries to execute a reaction it atomically decrements the counter. If the result is negative, then the queue is empty. Otherwise the result provides the index within the buffer to read from.
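A condensed C++ sketch of this buffer-plus-counter design (ours, simplified; the real implementation additionally coordinates with the counting semaphore described above):

```
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <vector>

struct Reaction;  // opaque here; workers only handle pointers

class ReadyQueue {
  // sized to the maximum number of same-level reactions in the APG, i.e.,
  // the maximum number of reactions that can ever be ready at once
  std::vector<Reaction*> buffer_;
  std::atomic<std::int64_t> count_{0};

 public:
  explicit ReadyQueue(std::size_t max_parallel) : buffer_(max_parallel) {}

  // Called while all workers are waiting, so there are no races on the
  // buffer: publish the ready reactions, then set the counter.
  void fill(const std::vector<Reaction*>& ready) {
    for (std::size_t i = 0; i < ready.size(); ++i) buffer_[i] = ready[i];
    count_.store(static_cast<std::int64_t>(ready.size()));
  }

  // Called concurrently by workers: atomically claim one slot. A negative
  // result signals that the queue is empty.
  Reaction* pop() {
    std::int64_t i = count_.fetch_sub(1) - 1;
    return i < 0 ? nullptr : buffer_[static_cast<std::size_t>(i)];
  }
};
```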
We further exploit knowledge about the execution of reactor programs. For instance, the scheduler advances logical time only once all reactions have been processed. This operation is safe without additional synchronization, as all of the workers are waiting for new reactions.
_Sparse multiports._ In programs where reactions in multiple reactors may trigger the same reaction (such as an account with an arbitrary number of users), the triggered reaction often needs to know which port(s) actually are set (contain data). If there are many upstream reactors and communication is sparse, simply checking all ports for presence can be inefficient. Instead, we expose an API for obtaining an iterator over only those ports that are set. Note that this problem does not arise in actors, as no ports exist and messages are processed one by one, only considering those that are actually sent.
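One straightforward way to support such an iterator (a hypothetical sketch, not the actual API) is to record the index of every port as it is set, so that a reaction can visit present values without scanning the full width:

```
#include <cstddef>
#include <optional>
#include <utility>
#include <vector>

template <typename T>
class SparseMultiport {
  std::vector<std::optional<T>> ports_;   // a value is present iff the port is set
  std::vector<std::size_t> set_indices_;  // filled as ports are set

 public:
  explicit SparseMultiport(std::size_t width) : ports_(width) {}

  void set(std::size_t i, T value) {
    if (!ports_[i]) set_indices_.push_back(i);
    ports_[i] = std::move(value);
  }

  // Iterate over set ports only, instead of checking all `width` ports.
  const std::vector<std::size_t>& present_indices() const { return set_indices_; }
  const T& get(std::size_t i) const { return *ports_[i]; }
};
```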
## 5. Performance evaluation
The actor model is widely accepted for programming large concurrent applications, and implementations such as the C++ Actor Framework (CAF) (Kalton et al., 2010) and Akka (Kalton et al., 2011) are known to be fast and efficient in utilizing a larger number of threads. Compared to actors, LF imposes various restrictions that amount to a model of computation in which fewer behaviors are allowed. In this section, we show that these restrictions do not necessarily introduce overhead or higher execution times. In fact, LF is considerably faster for many benchmarks.
### Methodology
Our evaluation is based on the Savina benchmark suite (Savina et al., 2016) for actor languages and frameworks. While this suite has several issues, as Blessing et al. discuss in (Blessing et al., 2017), Savina covers a wide range of patterns and, to the best of our knowledge, is the most comprehensive benchmark suite for actor frameworks that has been published. The Savina suite includes Akka implementations of all benchmarks. CAF implementations of most Savina benchmarks are also available.
Footnote 2: [https://github.com/woelke/saving](https://github.com/woelke/saving)
We ported 22 of the 30 Savina benchmarks to the C++ target of LF. Due to the fundamental differences between the actor and reactor model, the process of porting benchmarks is not always straightforward. We aimed at closely resembling the original workloads and considered the intention behind the individual benchmarks. We did not implement the benchmarks Fork Join (actor creation), Fibonacci, Quicksort, Bitonic Sort, Sieve of Eratosthenes, Unbalanced Cobwebbed Tree, Online Facility Location, and Successive Over-Relaxation as they require the capability to dynamically create actors. In the reactor model, this can be achieved with mutations that may modify the reactor topology (Savina et al., 2016; Savina et al., 2016). However, mutations are not yet fully implemented in LF, and a discussion of language-level constructs for supporting mutations is beyond the scope of this paper.
Footnote 3: Source code available at [https://github.com/lf-lang/benchmarks-linguafranca](https://github.com/lf-lang/benchmarks-linguafranca)
We further omit the A\({}^{*}\)-Search and Logistic Map Series benchmarks from our presentation. A\({}^{*}\)-Search suffers from a severe race condition that results in wildly varying execution times (Blessing et al., 2017). Logistic Map Series is omitted as it violates actor semantics and requires explicit synchronization (Blessing et al., 2017). For this reason, the CAF implementation needs to use a blocking call, which makes it slower than the other implementations by at least two orders of magnitude. Since this is not a problem of CAF, but rather a problem in the benchmark design, we omit Logistic Map Series to avoid skewing the analysis.
Figure 6 reports measured results for all supported benchmarks obtained with Akka, CAF, and the C++ target of LF. The plots show the mean execution times (including 99% confidence intervals) for a varying number of worker threads for each of the benchmarks. Not all benchmarks are implemented in CAF and hence it is missing in some plots.
All measurements were performed on a workstation with an Intel Core i9-10900K processor with 32 GiB DDR4-2933 RAM running Ubuntu 22.04 and using CAF version 0.17.6 and Akka version 2.6.17. Following the methodology of Savina, measurements exclude initialization and cleanup. Each measurement comprises 32 iterations. The first two iterations are
excluded from our analysis and are used to warm up caches and the JVM (in the case of Akka).
### Discussion
The first six plots in Figure 6 belong to the group of micro benchmarks in the Savina suite. These are designed to expose overhead in the protocol used for exchanging messages and for scheduling. Overall, our C++ runtime shows comparable performance to Akka and CAF. In Ping Pong and Thread Ring, our implementation is considerably faster than Akka but is still outperformed by CAF. For Counting Actor and Big, Akka performs better and the LF performance is slightly behind CAF. In Fork Join and Chameneos, the LF implementation outperforms both Akka and CAF, especially when using a larger number of worker threads.
The next six plots (Concurrent Dictionary to Bank Transaction) belong to the group of concurrency benchmarks. LF significantly outperforms CAF and Akka in all the concurrency benchmarks (especially for a high number of worker threads). This highlights how concurrent behavior is expressed naturally in LF and can be executed efficiently. Actor implementations of those benchmarks, on the other hand, have to synchronize explicitly and resort to potentially expensive protocols (e.g., by sending acknowledge messages), or implement some other (blocking) synchronization strategy that violates actor semantics (Brock et al., 2018).
The remaining plots belong to the group of parallelism benchmarks in the Savina suite. Radix Sort and Filter Bank suffer somewhat from an inefficiency in our scheduler, as discussed in Section 4.3. In these particular benchmarks, our simple algorithm leads to a non-optimal execution as some reactions are executed later than they could be. We will revise this algorithm in future work. However, the remaining parallelism benchmarks highlight that LF can efficiently implement parallel algorithms. Our LF implementations are on par with Akka and CAF and scale well with more threads.
Footnote 4: Producer Consumer is actually listed as a concurrency benchmark, but we find it fits better to the group of parallelism benchmarks.
On average, LF outperforms both Akka and CAF. For 20 threads, the C++ runtime achieves a speedup of \(1.85x\) over Akka and a \(1.42x\) speedup over CAF. These speedups were calculated using the geometric mean over the speedups of individual benchmarks. We conclude that LF can compete with and even outperform modern
Figure 6. Mean execution times and 99% confidence intervals for various Savina benchmarks implemented in LF, CAF, and Akka, measured for a varying number of worker threads. The numbers prefixed with # are benchmark IDs as listed in (Kumar et al., 2019).
actor frameworks such as Akka and CAF. Particularly with workloads that require synchronization, LF significantly outperforms actor implementations. LF is as efficient as the actor frameworks in exploiting parallelism and scales well to a larger thread count. In summary, the deterministic concurrency provided by LF does not hinder performance but enables more efficient implementations. This is possible in part because the scheduler has insights into the program structure, and explicit synchronization can be avoided in LF, as opposed to many of Savina actor benchmarks.
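For reference, the aggregate numbers above are geometric means over the per-benchmark speedups; a minimal helper (ours, for illustration only) is:

```
#include <cmath>
#include <vector>

// Geometric mean of per-benchmark speedups, computed in log space.
double geometric_mean(const std::vector<double>& speedups) {
  double log_sum = 0.0;
  for (double s : speedups) log_sum += std::log(s);
  return std::exp(log_sum / static_cast<double>(speedups.size()));
}
```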
The performance comparison between C++ and Scala (Akka) needs to be taken with care, as other factors such as different library implementations and the behavior of the JVM may influence performance. For instance, the large discrepancy between Akka and our implementation in the Pi Precision benchmark is explained by a less efficient representation of large numbers in Scala/Java. However, the other benchmarks of the Savina suite do not depend on external libraries and are designed to be more portable between languages. Also note that over all benchmarks CAF only achieves an average speedup of \(1.09x\) over Akka for 20 threads and is outperformed in 9 out of 16 benchmarks. For single threaded execution, Akka outperforms CAF in 10 benchmarks and achieves an average speedup of 1.33x. This indicates that the implemented Scala workloads are comparable to the C++ implementations. Even considering a potential skew due to the JVM, our results clearly show that LF can compete with state-of-the-art actor frameworks.
To better understand the impact of the optimizations discussed in Section 4.3, Figure 7 also shows the speedup of our runtime for 20 worker threads compared to a less optimized runtime. This baseline is an older version of our runtime that is optimized in the sense that we used code profiling to identify obvious bottlenecks and eliminated them using common code optimization techniques, but that does not include the optimizations discussed in this paper. The average overall speedup (geometric mean) achieved by our optimizations is \(2.18x\). In particular, Big and Bank Transaction significantly benefit from our optimization for sparse communication patterns. The concurrency benchmarks (e.g., Concurrent Dictionary and Dining Philosophers), are mostly improved by reducing the contention on shared resources. However, not all benchmarks benefit from our optimizations. The reduced performance in Ping Pong and Counting Actor shows that optimizing for efficient parallel execution also comes at a cost for simple sequential programs.
## 6. Related Work
LF is closely related to the languages and frameworks that evolved around Hewitt's actor model (Blees and Goyal, 2016; Goyal et al., 2017), including Akka (Kal
These models enable improved static analysis and optimization (Steiner et al., 2011), but they limit the application's flexibility and capability to react to external events. Ohua (2018) is another language providing parallelism through dataflow and is similar to LF in that it integrates with existing high-level languages. However, it falls short on exposing coordination facilities for individual nodes and does not provide a timed semantics.
Deterministic concurrency is also found in synchronous languages such as Esterel (2007), Lustre (2007), and SIGNAL (2007) as well as in Functional Reactive Programming (FRP) languages, like Fran (2007), FrTime (2007), and Elm (2007). However, these languages are challenging to use for general purpose programming as they require pure functions and there is a lack of widely-applicable libraries. Only recently, side effects have been considered in a formal semantics for Esterel (2007) and distributed dataflow (Steiner et al., 2011). In Lingua Franca, arbitrary code can be embedded in reactions and we can benefit from the available libraries for popular general purpose languages.
Another interesting approach is taken by deterministic multithreading libraries such as DThreads (2007) or Consequence (2007), which enforce a total order for concurrent store operations. Recent work has made considerable progress in avoiding the bottlenecks of conventional deterministic multithreading techniques (Steiner et al., 2011). However, we argue that threads are not a convenient concurrency model for the reasons outlined in (Steiner et al., 2011). Moreover, threads do not allow for transparent distributed execution as is possible with (re)actors.
## 7. Conclusion
Unlike actors and related models for asynchronous concurrency, LF enforces determinism by default, and features asynchronous behavior only when introduced deliberately. Our evaluation, based on LF's C++ target, shows that this deterministic model does not impede performance. On the contrary, we achieve an average speedup of \(1.85x\) over Akka and \(1.42x\) over CAF. With LF, we manage to combine reproducible (and testable) behavior with good performance. Yet, our relatively simple scheduling strategy likely still leaves room for significant improvement. We leave it as future work to explore more advanced scheduling algorithms capable of exploiting more parallelism at runtime. We also aim to furnish full runtime support for mutations and implement the remaining Savina benchmarks that require them. Finally, we note that our implementation of the Savina benchmark suite is not only useful for comparing LF to actor-oriented frameworks; it also demonstrates that LF, which is still in its infancy, is already suitable for solving practical problems.
## Acknowledgments
The work in this paper was supported in part by the German Federal Ministry of Education and Research (BMBF) as part of the Software Campus (01IS12051) and the program "Souverän. Digital. Vernetzt.", joint project 6G-life (16KISK001K). This work was also supported in part by the National Science Foundation (NSF), award #CNS-1836601 (Reconciling Safety with the Internet), the iCyPhy Research Center (Industrial Cyber-Physical Systems), supported by Denso, Ford, Siemens, and Toyota, and the National Research Foundation (NRF) of Korea (No. NRF-2022R1F1A1065201).
|
2306.06685 | An extension of the weighted geometric mean in unital $JB$-algebras | Let $\mathcal{A}$ be a unital $JB$-algebra and $A,~B\in\mathcal{A}$, we
extend the weighted geometric mean $A\sharp_r B$, from $r\in [0,1]$ to $r\in
(-1, 0)\cup(1, 2)$. We note that many results are reversed when the
domain of $r$ changes from $[0,1]$ to $(-1,0)$ or $(1,2)$. We investigate some
properties of $A\sharp_r B$ for such values of $r$; for example,
we show that $A\sharp_r B$ is separately operator convex with respect to $A,
B$. We also introduce the Heinz and Heron means for unital $JB$-algebras and
give some famous inequalities involving them. | Amir Ghasem Ghazanfari, Somayeh Malekinejad | 2023-06-11T13:59:47Z | http://arxiv.org/abs/2306.06685v1 | # An extension of the weighted geometric mean in unital \(JB\)-algebras
###### Abstract.
Let \(\mathcal{A}\) be a unital \(JB\)-algebra and \(A,\ B\in\mathcal{A}\); we extend the weighted geometric mean \(A\sharp_{r}B\) from \(r\in[0,1]\) to \(r\in(-1,0)\cup(1,2)\). We note that many results are reversed when the domain of \(r\) changes from \([0,1]\) to \((-1,0)\) or \((1,2)\). We investigate some properties of \(A\sharp_{r}B\) for such values of \(r\); for example, we show that \(A\sharp_{r}B\) is separately operator convex with respect to \(A,B\). We also introduce the Heinz and Heron means for unital \(JB\)-algebras and give some famous inequalities involving them.
Key words and phrases: \(JB\)-algebra, Heinz mean, Heron mean.
2020 Mathematics Subject Classification: 46H70, 47A63, 47A64.
*Corresponding author
## 1. introduction and preliminary
Jordan algebras are considered a model to formalize the concept of an algebra of observables in quantum mechanics. In the mathematical foundation of quantum physics, one of the natural axioms is that the observables form a Jordan algebra. One reason for this is that, typically, observables are supposed to be self-adjoint operators on a Hilbert space, and the space of such operators is closed under the Jordan product but not the associative product. We refer the reader to [6] for further details.
A Jordan algebra over \(\mathbb{R}\) is a vector space \(\mathcal{A}\) over \(\mathbb{R}\) equipped with a commutative bilinear product \(\circ\) that satisfies the identity
\[a\circ(b\circ a^{2})=(a\circ b)\circ a^{2},\]
for all \(a,b\in\mathcal{A}\).
Let \(\mathcal{A}\) be an algebra and \(x,y\in\mathcal{A}\). Let
\[x\circ y=\frac{xy+yx}{2}. \tag{1.1}\]
Then \(\circ\) defines a bilinear, commutative product on \(\mathcal{A}\), which is called the Jordan product. If \(\mathcal{A}\) is associative, then \(\mathcal{A}\) becomes a Jordan algebra when equipped with the product (1.1), as does any subspace closed under \(\circ\). Such Jordan algebras are called special Jordan algebras, all others are called exceptional. The following algebras are examples of special Jordan algebras with product (1.1).
**Example 1.1**.:
* The Jordan algebra of \(n\times n\) self-adjoint real matrices \(H_{n}(\mathbb{R})\).
* The Jordan algebra of \(n\times n\) self-adjoint complex matrices \(H_{n}(\mathbb{C})\).
* The Jordan algebra of \(n\times n\) self-adjoint quaternionic matrices \(H_{n}(\mathbb{H})\).
* The Jordan algebra of \(n\times n\) self-adjoint octonionic matrices \(H_{n}(\mathbb{O})\), where \(n\leq 3\).
**Definition 1.2**.: A Jordan Banach algebra is a real Jordan algebra \(\mathcal{A}\) equipped with a complete norm satisfying
\[\|A\circ B\|\leq\|A\|\|B\|,\ A,B\in\mathcal{A}.\]
Jordan operator algebras are norm-closed spaces of operators on a Hilbert space which are closed under the Jordan product.
Basic examples are real symmetric and complex hermitian matrices with the Jordan product. A \(JB\)-algebra is a Jordan Banach algebra \(\mathcal{A}\) in which the norm satisfies the following two additional conditions for \(A,B\in\mathcal{A}\):
\[(i)\ \|A^{2}\|=\|A\|^{2}\] \[(ii)\ \|A^{2}\|\leq\|A^{2}+B^{2}\|.\]
We say \(A\in\mathcal{A}\) is invertible if there exists \(B\in\mathcal{A}\), which is called Jordan inverse of \(A\), such that
\[A\circ B=I\ \text{and}\ A^{2}\circ B=A.\]
The spectrum of \(A\), denoted by \(Sp(A)\), is the set of \(\lambda\in\mathbb{R}\) such that \(A-\lambda\) does not have inverse in \(A\). Furthermore, if \(Sp(A)\subset[0,\infty)\), we say \(A\) is positive, denoted \(A\geq 0\).
In a \(JB\)-algebra we define
\[U_{A}B=\{ABA\}:=2(A\circ B)\circ A-A^{2}\circ B.\]
Note that \(ABA\) is meaningless unless \(\mathcal{A}\) is special, in which case \(\{ABA\}=ABA\). Moreover, if \(B\geq 0\), then \(U_{A}B=\{ABA\}\geq 0\).
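As a quick sanity check (our addition), in a special Jordan algebra, where \(x\circ y=(xy+yx)/2\), the definition of \(U_{A}\) indeed reduces to the associative triple product:

\[2(A\circ B)\circ A-A^{2}\circ B=\frac{(AB+BA)A+A(AB+BA)}{2}-\frac{A^{2}B+BA^{2}}{2}=\frac{2ABA}{2}=ABA.\]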
We mention some of the properties of \(U_{A}\) that we will use frequently in the sequel: \(U_{A}\) is a linear mapping and
\[U_{\{ABA\}}=U_{A}U_{B}U_{A}. \tag{1.2}\]
The following two lemmas also hold:
**Lemma 1.3**.: _[_1_, Lemma 1.23]_ _Let \(\mathcal{A}\) be a \(JB\)-Banach algebra and \(A\in\mathcal{A}\). Then \(A\) is an invertible element iff \(U_{A}\) has a bounded inverse, and in this case the inverse map is \(U_{A^{-1}}\), i.e., \(U_{A}^{-1}=U_{A^{-1}}\)._
**Lemma 1.4**.: _[_1_, Lemma 1.24]_ _If \(A\) and \(B\) are invertible elements of a \(JB\)-algebra, Then \(\{ABA\}\) is invertible with inverse \(\{A^{-1}B^{-1}A^{-1}\}\)._
For more details, we refer the reader to [1, 7].
A real-valued function \(f\) on \(\mathbb{R}\) is said to be operator monotone (increasing) on a \(JB\)-algebra \(\mathcal{A}\) if \(A\leq B\) implies \(f(A)\leq f(B)\). We say \(f\) is operator convex if for any \(\lambda\in[0,1]\),
\[f((1-\lambda)A+\lambda B)\leq(1-\lambda)f(A)+\lambda f(B).\]
Wang et al. [8] introduced some operator means for two positive invertible elements \(A\), \(B\) in a unital \(JB\)-algebra \(\mathcal{A}\) and \(\nu\in[0,1]\), such as
* \(\nu\)-weighted harmonic mean: \(A!_{\nu}B=\big{(}(1-\nu)A^{-1}+\nu B^{-1}\big{)}^{-1}\);
* \(\nu\)-weighted geometric mean: \(A\sharp_{\nu}B=\{A^{1/2}\{A^{-1/2}BA^{-1/2}\}^{\nu}A^{1/2}\}\);
* \(\nu\)-weighted arithmetic mean: \(A\nabla_{\nu}B=(1-\nu)A+\nu B\).
**Lemma 2.1**.: _Let \(\mathcal{A}\) be a \(JB\)-algebra and \(A\in\mathcal{A}\) and \(r,s\in\mathbb{R}\). Then_
(i): \(A^{r}\circ A^{s}=A^{r+s}\)__\((A\geq 0)\)_,_
(ii): \(U_{A}U_{A}=U_{A^{2}}\)_._
Proof.: (i) Consider continuous real functions \(f(t)=t^{r},\ g(t)=t^{s}\) on \((0,\infty)\). By the continuous functional calculus at \(A\), we have
\[A^{r+s}=(fg)(A)=f(A)\circ g(A)=A^{r}\circ A^{s}.\]
(ii) For every \(B\in\mathcal{A}\), the identity
\[U_{A}U_{A}(B)=\{A\{ABA\}A\}=\{A^{2}BA^{2}\}=U_{A^{2}}(B)\]
follows from Macdonald's theorem.
It is well known that the following integral representation is valid for \(x>0\) and \(0<r<1\). (See relation (V.4) in [2].)
\[x^{r}=\frac{\sin r\pi}{\pi}\int_{0}^{\infty}\frac{x}{\lambda+x}\lambda^{r-1}d\lambda. \tag{2.1}\]
For \(x\in(0,\infty)\), from (2.1), we obtain
\[x^{r}=\frac{\sin(r-1)\pi}{\pi}\int_{0}^{\infty}\frac{x^{2}}{\lambda+x}\lambda^{r-2}d\lambda,\qquad r\in(1,2), \tag{2.2}\] \[x^{r}=\frac{\sin(r+1)\pi}{\pi}\int_{0}^{\infty}\frac{1}{\lambda+x}\lambda^{r}d\lambda,\qquad r\in(-1,0). \tag{2.3}\]
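For clarity we note (our addition) that (2.2) and (2.3) follow from (2.1) applied to the exponents \(r-1\in(0,1)\) and \(r+1\in(0,1)\), respectively, since

\[x^{r}=x\cdot x^{r-1}\quad\text{for }r\in(1,2),\qquad x^{r}=x^{-1}\cdot x^{r+1}\quad\text{for }r\in(-1,0).\]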
By the change of variable \(\lambda=\frac{1}{\alpha}\) in (2.1), we have
\[x^{r}=\frac{\sin r\pi}{\pi}\int_{0}^{\infty}\frac{x}{1+\alpha x}\alpha^{-r}d\alpha. \tag{2.4}\]
Thus for \(x\in(0,\infty)\), from (2.4), we get
\[x^{r}=\frac{\sin(r-1)\pi}{\pi}\int_{0}^{\infty}\frac{x^{2}}{1+\alpha x}\alpha^{-r+1}d\alpha,\qquad r\in(1,2), \tag{2.5}\] \[x^{r}=\frac{\sin(r+1)\pi}{\pi}\int_{0}^{\infty}\frac{1}{1+\alpha x}\alpha^{-r-1}d\alpha,\qquad r\in(-1,0). \tag{2.6}\]
**Proposition 2.2**.: _Let \(r\in[-1,0]\cup[1,2]\) and \(x\geq 0\). Then the real-valued function \(x\to x^{r}\) is operator convex on unital JB-algebras._
Proof.: Let \(\mathcal{A}\) be a unital \(JB\)-algebra, \(r\in(-1,0)\) and \(x\geq 0\). From (2.6), we have
\[\left|\int_{\delta_{1}}^{\delta_{2}}\frac{1}{1+\alpha x}\alpha^{-r-1}d\alpha \right|\leq\left|\int_{\delta_{1}}^{\delta_{2}}\alpha^{-r-1}d\alpha\right| \to 0\ \mbox{uniformly}\,\ \mbox{as}\ \delta_{1},\delta_{2}\to 0.\]
From (2.3), we have
\[\left|\int_{N_{1}}^{N_{2}}\frac{\alpha}{\alpha+x}\alpha^{r-1}d\alpha\right| \leq\left|\int_{N_{1}}^{N_{2}}\alpha^{r-1}d\alpha\right|\to 0\ \mbox{uniformly}\,\ \mbox{as}\ N_{1},N_{2}\to\infty\]
If \([a,b]\) is an arbitrary interval of \((0,\infty)\) and \(\alpha\in[a,b]\), then
\[\sup_{x\in[0,M]}\left|\frac{1}{(1+\alpha x)\alpha^{r+1}}-\frac{1}{(1+ \alpha_{0}x)\alpha_{0}^{r+1}}\right|\\ \leq\frac{1}{a^{2(r+1)}}|(1+\alpha x)\alpha^{r+1}-(1+\alpha_{0}x) \alpha_{0}^{r+1}|\to 0,\text{ as }\alpha\to\alpha_{0}.\]
Hence \(f_{\alpha}(x)=\frac{1}{1+\alpha x}\alpha^{-r-1}\) is uniformly Riemann integrable on \(\alpha\in(0,\infty)\). In this case, operator convexity of the function \(x\to x^{r}\) on \(\mathcal{A}\) follows from Lemma 1.6, which is also valid for operator convex functions.
Now let \(r\in(1,2)\) and \(x\geq 0\). From (2.5), we have
\[\left|\int_{\delta_{1}}^{\delta_{2}}\frac{x^{2}}{1+\alpha x}\alpha^{-r+1}d \alpha\right|\leq M^{2}\left|\int_{\delta_{1}}^{\delta_{2}}\alpha^{-r+1}d \alpha\right|\to 0\text{ uniformly },\text{ as }\delta_{1},\delta_{2}\to 0.\]
Also
\[\left|\int_{N_{1}}^{N_{2}}\frac{\alpha x}{1+\alpha x}x\alpha^{-r}d\alpha \right|\leq M\left|\int_{N_{1}}^{N_{2}}\alpha^{-r}d\alpha\right|\to 0\text{ uniformly },\text{ as }N_{1},N_{2}\to\infty\]
If \([a,b]\) is an arbitrary interval of \((0,\infty)\) and \(\alpha\in[a,b]\), then
\[\sup_{x\in[0,M]}\left|\frac{x^{2}}{(1+\alpha x)\alpha^{1-r}}- \frac{x^{2}}{(1+\alpha_{0}x)\alpha_{0}^{1-r}}\right|\\ \leq\frac{M^{2}}{a^{2(1-r)}}|(1+\alpha x)\alpha^{1-r}-(1+\alpha_{ 0}x)\alpha_{0}^{1-r}|\to 0,\text{ as }\alpha\to\alpha_{0}.\]
Hence \(g_{\alpha}(x)=\frac{x^{2}}{1+\alpha x}\alpha^{-r+1}\) is uniformly Riemann integrable on \(\alpha\in(0,\infty)\). In this case, operator convexity of the function \(x\to x^{r}\) on \(\mathcal{A}\) follows from Lemma 1.6, which is also valid for operator convex functions.
In the next theorems, for \(r\in(-1,0)\) or \(r\in(1,2)\), we give an integral representation of \(A\sharp_{r}B\) in unital \(JB\)-algebras.
**Theorem 2.3**.: _Let \(A\), \(B\) be positive invertible elements in a unital JB-algebra \(\mathcal{A}\). Then for any \(r\in(1,2)\)_
\[A\sharp_{r}B=\int_{0}^{1}\left((1-s)B^{-1}+s\{B^{-1}AB^{-1}\} \right)^{-1}d\mu(s),\] \[\text{ where }d\mu(s)=\frac{\sin(r-1)\pi}{\pi}\frac{s^{r-2}}{(1-s)^ {r-1}}ds.\]
Proof.: By a change of variables in (2.1), we obtain
\[x^{r}=\frac{\sin r\pi}{\pi}\int_{0}^{1}\frac{x}{s+(1-s)x}\frac{s^{r-1}}{(1-s) ^{r}}ds, \tag{2.7}\]
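Here the change of variables is \(\lambda=\frac{s}{1-s}\) (our explicit note), for which

\[d\lambda=\frac{ds}{(1-s)^{2}},\qquad\frac{x}{\lambda+x}=\frac{(1-s)x}{s+(1-s)x},\qquad\lambda^{r-1}=\frac{s^{r-1}}{(1-s)^{r-1}}.\]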
Therefore for \(r\in(1,2)\) and \(x\in(0,\infty)\),
\[x^{r}=\int_{0}^{1}x^{2}\left(s+(1-s)x\right)^{-1}d\mu(s), \tag{2.8}\] \[\text{where }d\mu(s)=\frac{\sin(r-1)\pi}{\pi}\frac{s^{r-2}}{(1-s)^{r- 1}}ds.\]
Applying the functional calculus in \(JB\)-algebras [1, Proposition 1.21] to (2.8), we obtain
\[\{A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\}^{r}\] \[=\int_{0}^{1}U_{\{A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\}}\left(sI+(1 -s)\{A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\}\right)^{-1}\{A^{-\frac{1}{2}}BA^{- \frac{1}{2}}\}d\mu(s)\] \[=\int_{0}^{1}U_{\{A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\}}\left(sI+(1 -s)\,U_{A^{-\frac{1}{2}}}B\right)^{-1}d\mu(s).\]
Therefore by (1.2), (1.8), Lemma 1.3 and Lemma 1.4, we get
\[A\sharp_{r}B=\{A^{\frac{1}{2}}\{A^{-\frac{1}{2}}BA^{-\frac{1}{2} }\}^{r}A^{\frac{1}{2}}\}\] \[=\left\{A^{\frac{1}{2}}\int_{0}^{1}U_{\{A^{-\frac{1}{2}}BA^{- \frac{1}{2}}\}}\left(sI+(1-s)U_{A^{-\frac{1}{2}}}(B)\right)^{-1}d\mu(s)A^{ \frac{1}{2}}\right\}\] \[=\int_{0}^{1}U_{A^{\frac{1}{2}}}U_{A^{-\frac{1}{2}}}U_{B}U_{A^{- \frac{1}{2}}}\left(sI+(1-s)U_{A^{-\frac{1}{2}}}(B)\right)^{-1}d\mu(s)\] \[=\int_{0}^{1}U_{B}U_{A^{-\frac{1}{2}}}\left(sI+(1-s)U_{A^{-\frac{ 1}{2}}}(B)\right)^{-1}d\mu(s)\] \[=\int_{0}^{1}\left((1-s)B^{-1}+s\{B^{-1}AB^{-1}\}\right)^{-1}d\mu (s).\]
**Theorem 2.4**.: _Let \(A\), \(B\) be positive invertible elements in a unital JB-algebra \(\mathcal{A}\). Then for any \(r\in(-1,0)\)_
\[A\sharp_{r}B=\int_{0}^{1}\left((1-s)\{A^{-1}BA^{-1}\}+sA^{-1}\right)^{-1}d\nu (s),\]
_where \(d\nu(s)=\frac{\sin(r+1)\pi}{\pi}\frac{s^{r}}{(1-s)^{r+1}}ds\)._
Proof.: By (2.7) for \(r\in(-1,0)\), we have
\[x^{r}=\int_{0}^{1}\left(s+(1-s)x\right)^{-1}d\mu(s), \tag{2.9}\] \[\text{where }d\mu(s)=\frac{\sin(r+1)\pi}{\pi}\frac{s^{r}}{(1-s)^{r +1}}ds.\]
Applying the functional calculus in \(JB\)-algebras [1, Proposition 1.21] to (2.9), we obtain
\[A\sharp_{r}B=\int_{0}^{1}\left((1-s)\{A^{-\frac{1}{2}}\{A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\}A^{-\frac{1}{2}}\}+sA^{-1}\right)^{-1}d\mu(s)\] \[=\int_{0}^{1}\left((1-s)U_{A^{-1}}(B)+sA^{-1}\right)^{-1}d\nu(s)\qquad\text{(by Lemma 2.1).}\]
**Proposition 2.5**.: _Let \(A\), \(B\) be positive invertible elements in a unital JB-algebra \(\mathcal{A}\) and \(\alpha\) and \(\beta\) two nonnegative real numbers. Then_
_i) \(A\sharp_{r}B=\{B(A\sharp_{2-r}B)^{-1}B\}\) for any \(r\in(1,2)\);_
_ii) \(A\sharp_{r}B=\{A(A\sharp_{-r}B)^{-1}A\}\) for any \(r\in(-1,0)\);_
_iii) \(A\sharp_{r}B=B\sharp_{1-r}A\) for any \(r\in(-1,2)\);_
_iv) \((A\sharp_{r}B)^{-1}=A^{-1}\sharp_{r}B^{-1}\) for any \(r\in(-1,2)\);_
_v) \(\alpha A\sharp_{r}\beta B=(\alpha\sharp_{r}\beta)(A\sharp_{r}B)\) for any \(r\in(-1,2)\);_
_vi) \(A\sharp_{r}B\) is separately operator convex with respect to \(A\), \(B\), for any \(r\in(-1,2)\);_
_vii) \(\{C(A\sharp_{r}B)C\}=\{CAC\}\sharp_{r}\{CBC\}\) for any invertible \(C\) in \(\mathcal{A}\)._
Proof.: i) If \(r\in(1,2)\), then \(2-r\in(0,1)\) and
\[\{B(A\sharp_{2-r}B)^{-1}B\}=U_{B}(A\sharp_{2-r}B)^{-1}\] \[=U_{B}(A^{-1}\sharp_{2-r}B^{-1})\qquad\text{(by (iv))}\] \[=U_{B}U_{A^{-\frac{1}{2}}}\{A^{\frac{1}{2}}B^{-1}A^{\frac{1}{2}}\}^{2-r}\qquad\text{(by the definition of }\sharp)\] \[=U_{B}U_{A^{-\frac{1}{2}}}\{A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\}^{r-2}\qquad\text{(by Lemma 1.4)}\] \[=U_{B}U_{A^{-\frac{1}{2}}}\left(\{A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\}^{r}\circ\{A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\}^{-2}\right)\qquad\text{(by Lemma 2.1)}\] \[=U_{B}U_{A^{-\frac{1}{2}}}U_{\{A^{\frac{1}{2}}B^{-1}A^{\frac{1}{2}}\}}\{A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\}^{r}\qquad(\{a^{-1}a^{r}a^{-1}\}=a^{r-2})\] \[=U_{B}U_{A^{-\frac{1}{2}}}U_{A^{\frac{1}{2}}}U_{B^{-1}}U_{A^{\frac{1}{2}}}\{A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\}^{r}\qquad(U_{\{aba\}}=U_{a}U_{b}U_{a})\] \[=U_{A^{\frac{1}{2}}}\{A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\}^{r}\qquad\text{(by Lemma 1.3)}\] \[=A\sharp_{r}B.\qquad\text{(by the definition of }\sharp)\]
ii) If \(r\in(-1,0)\), then \(-r\in(0,1)\) and
\[\{A(A\sharp_{-r}B)^{-1}A\}=U_{A}(A\sharp_{-r}B)^{-1}\] \[=U_{A}(A^{-1}\sharp_{-r}B^{-1})\qquad\text{(by (iv))}\] \[=U_{A}\{A^{-\frac{1}{2}}\{A^{\frac{1}{2}}B^{-1}A^{\frac{1}{2}}\}^{-r}A^{-\frac{1}{2}}\}\qquad\text{(by the definition of }\sharp)\] \[=U_{A}U_{A^{-\frac{1}{2}}}\{A^{\frac{1}{2}}B^{-1}A^{\frac{1}{2}}\}^{-r}\] \[=U_{A^{\frac{1}{2}}}\{A^{\frac{1}{2}}B^{-1}A^{\frac{1}{2}}\}^{-r}\qquad\text{(by Lemmas 1.3 and 2.1)}\] \[=U_{A^{\frac{1}{2}}}\{A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\}^{r}\qquad\text{(by Lemma 1.4)}\] \[=A\sharp_{r}B.\qquad\text{(by the definition of }\sharp)\]
iii) If \(0\leq r\leq 1\), then (1.3) shows that \(A\sharp_{r}B=B\sharp_{1-r}A\). Now, suppose that \(r\in(1,2)\), then
\[A\sharp_{r}B=\{B(A\sharp_{2-r}B)^{-1}B\}\qquad\text{(by part (i))}\] \[=\{B(B\sharp_{r-1}A)^{-1}B\}\qquad\text{(by (1.3))}\] \[=B\sharp_{1-r}A.\qquad\text{(by part (ii))}\]
If \(r\in(-1,0)\), then
\[A\sharp_{r}B=\{A(A\sharp_{-r}B)^{-1}A\}\qquad\text{(by part (ii))}\] \[=\{A(B\sharp_{1+r}A)^{-1}A\}\qquad\text{(by (1.3))}\] \[=B\sharp_{1-r}A.\qquad\text{(by part (i))}\]
iv)
\[(A\sharp_{r}B)^{-1}=\{A^{\frac{1}{2}}\{A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\}^{r}A^{\frac{1}{2}}\}^{-1}\] \[=\{A^{-\frac{1}{2}}\{A^{\frac{1}{2}}B^{-1}A^{\frac{1}{2}}\}^{r}A^{-\frac{1}{2}}\}\] \[=A^{-1}\sharp_{r}B^{-1}.\]
v) It follows directly from the definition.
vi) For any \(0\leq\lambda\leq 1\), operator convexity of the function \(x\to x^{r}\) on \(\mathcal{A}\) for \(r\in[-1,0]\cup[1,2]\) implies that
\[A\sharp_{r}[(1-\lambda)B_{1}+\lambda B_{2}]=U_{A^{\frac{1}{2}}}\left((1-\lambda)\{A^{-\frac{1}{2}}B_{1}A^{-\frac{1}{2}}\}+\lambda\{A^{-\frac{1}{2}}B_{2}A^{-\frac{1}{2}}\}\right)^{r}\] \[\leq U_{A^{\frac{1}{2}}}\left((1-\lambda)\{A^{-\frac{1}{2}}B_{1}A^{-\frac{1}{2}}\}^{r}+\lambda\{A^{-\frac{1}{2}}B_{2}A^{-\frac{1}{2}}\}^{r}\right)\] \[=(1-\lambda)U_{A^{\frac{1}{2}}}\{A^{-\frac{1}{2}}B_{1}A^{-\frac{1}{2}}\}^{r}+\lambda U_{A^{\frac{1}{2}}}\{A^{-\frac{1}{2}}B_{2}A^{-\frac{1}{2}}\}^{r}\] \[=(1-\lambda)(A\sharp_{r}B_{1})+\lambda(A\sharp_{r}B_{2}).\]
Similarly, \(B\sharp_{1-r}A\) is operator convex with respect to \(A\). By part (iii) we have \(A\sharp_{r}B=B\sharp_{1-r}A\); hence \(A\sharp_{r}B\) is also operator convex with respect to \(A\).
vii) If \(0\leq r\leq 1\), then (1.7) shows that \(\{CAC\}\sharp_{r}\{CBC\}=\{C(A\sharp_{r}B)C\}\).
Now, suppose that \(r\in(1,2)\). By Theorem 2.3, we have
\[\{CAC\}\sharp_{r}\{CBC\}\] \[=\int_{0}^{1}\left((1-s)\{CBC\}^{-1}+s\left\{\{CBC\}^{-1}\{CAC\}\{CBC\}^{-1}\right\}\right)^{-1}d\mu(s)\] \[=\int_{0}^{1}\left((1-s)\{CBC\}^{-1}+sU_{\{CBC\}^{-1}}U_{C}(A)\right)^{-1}d\mu(s)\] \[=\int_{0}^{1}\left((1-s)\{CBC\}^{-1}+sU_{C^{-1}}U_{B^{-1}}U_{C^{-1}}U_{C}(A)\right)^{-1}d\mu(s)\] \[=\int_{0}^{1}\left((1-s)\{CBC\}^{-1}+s\{C^{-1}\{B^{-1}AB^{-1}\}C^{-1}\}\right)^{-1}d\mu(s)\] \[=\int_{0}^{1}\left((1-s)\{C^{-1}B^{-1}C^{-1}\}+s\{C^{-1}\{B^{-1}AB^{-1}\}C^{-1}\}\right)^{-1}d\mu(s)\] \[=\int_{0}^{1}\{C\left((1-s)B^{-1}+s\{B^{-1}AB^{-1}\}\right)^{-1}C\}d\mu(s)\] \[=\left\{\int_{0}^{1}C\left((1-s)B^{-1}+s\{B^{-1}AB^{-1}\}\right)^{-1}d\mu(s)C\right\}\] \[=\{C(A\sharp_{r}B)C\}.\]
If \(r\in(-1,0)\), then \(\{C(A\sharp_{r}B)C\}=\{CAC\}\sharp_{r}\{CBC\}\) by Theorem 2.4.
In the next theorem, we present a converse of Theorem 3 in [8] for \(JB\)-algebras.
**Theorem 2.6**.: _(Reverse Young inequality for \(JB\)-algebras) Let \(A\), \(B\) be positive invertible elements in a unital \(JB\)-algebra \(\mathcal{A}\) and \(r\in(-1,0)\cup(1,2)\). Then_
\[A\nabla_{r}B\leq A\sharp_{r}B\leq A!_{r}B. \tag{2.10}\]
Proof.: According to Lemma 2.1 in [4], if \(r\notin(0,1)\),
\[(1-r)1+rx\leq x^{r}\leq((1-r)1+rx^{-1})^{-1} \tag{2.11}\]
for all \(x>0\). By functional calculus at \(\{A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\}\) for \(JB\)-algebras [1, Proposition 1.21], from (2.11) we get
\[(1-r)1+r\{A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\}\leq\{A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\}^{r}\leq\left((1-r)1+r\{A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\}^{-1}\right)^{-1}.\]
Since \(U_{A}\) is a positive linear mapping, we obtain
\[U_{A^{\frac{1}{2}}}\left((1-r)1+r\{A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\}\right)\leq U_{A^{\frac{1}{2}}}\left(\{A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\}^{r}\right)\] \[\leq U_{A^{\frac{1}{2}}}\left(\left((1-r)1+r\{A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\}^{-1}\right)^{-1}\right). \tag{2.12}\]
Inequality (2.10) follows from (2.12).
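For completeness (our addition): the scalar bounds (2.11) are tangent-line inequalities. Since \(t\mapsto t^{r}\) is convex for \(r\notin(0,1)\), its graph lies above its tangent at \(t=1\),

\[t^{r}\geq 1+r(t-1)=(1-r)+rt,\qquad t>0.\]

Taking \(t=x\) gives the first inequality in (2.11), while taking \(t=x^{-1}\) and inverting (whenever \((1-r)+rx^{-1}>0\)) gives the second.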
## 3. Heinz and Heron means in JB-algebras
For two positive invertible elements \(A\), \(B\) in a unital \(JB\)-algebra \(\mathcal{A}\) and \(\nu\in[0,1]\), we introduce Heinz, Heron and logarithmic means, as follows
\[H_{\nu}(A,B)=\frac{A\sharp_{\nu}B+A\sharp_{1-\nu}B}{2}\] \[F_{\nu}(A,B)=(1-\nu)A\sharp_{\nu}B+\nu A\nabla B\] \[L(A,B)=\int_{0}^{1}A\sharp_{\nu}Bd\nu\]
where \(A\sharp_{\nu}B=\{A^{1/2}\{A^{-1/2}BA^{-1/2}\}^{\nu}A^{1/2}\}\). In [3, Lemma 1], Bhatia gave the following inequality between the Heinz and Heron means,
\[H_{\nu}(a,b)\leq F_{\alpha(\nu)}(a,b), \tag{3.1}\]
for \(0\leq\nu\leq 1\), where \(\alpha(\nu)=(1-2\nu)^{2}\).
By the proof of Lemma 1 of [3], we obtain
\[H_{\nu}(a,b)\geq F_{\alpha(\nu)}(a,b), \tag{3.2}\]
where \(\nu\notin(0,1)\).
**Theorem 3.1**.: _Let \(A\), \(B\) be positive invertible elements in a unital JB-algebra \(\mathcal{A}\). Then_
\[H_{\nu}(A,B)\leq F_{\alpha(\nu)}(A,B),\qquad\nu\in[0,1], \tag{3.3}\] \[H_{\nu}(A,B)\geq F_{\alpha(\nu)}(A,B),\qquad\nu\notin(0,1), \tag{3.4}\]
_where \(\alpha(\nu)=(1-2\nu)^{2}\)._
Proof.: Let \(\nu\in[0,1]\); from (3.1), for \(x>0\), we get
\[\frac{x^{\nu}+x^{1-\nu}}{2}\leq(1-\alpha(\nu))x^{\frac{1}{2}}+\alpha(\nu)\frac {x+1}{2}. \tag{3.5}\]
By functional calculus at \(\{A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\}\) for \(JB\)-algebras [1, Proposition 1.21], from (3.5) we get
\[\frac{\{A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\}^{\nu}+\{A^{-\frac{1}{2}}BA^{- \frac{1}{2}}\}^{1-\nu}}{2}\] \[\leq(1-\alpha(\nu))\{A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\}^{\frac{1 }{2}}+\alpha(\nu)\frac{\{A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\}+I}{2}. \tag{3.6}\]
Since \(U_{A}\) is a positive linear mapping, we obtain
\[\frac{U_{A^{\frac{1}{2}}}(\{A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\}^{\nu})+U_{A^{ \frac{1}{2}}}(\{A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\}^{1-\nu})}{2}\]
\[\leq(1-\alpha(\nu))U_{A^{\frac{1}{2}}}(\{A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\}^ {\frac{1}{2}})+\alpha(\nu)\frac{U_{A^{\frac{1}{2}}}(\{A^{-\frac{1}{2}}BA^{- \frac{1}{2}}\})+A}{2}. \tag{3.7}\]
In this case, we get \(H_{\nu}(A,B)\leq F_{\alpha(\nu)}(A,B)\). Similarly, for \(\nu\notin(0,1)\), inequality (3.4) follows from (3.2).
From (3.3), we deduce
\[L(A,B) =\int_{0}^{1}\{A^{\frac{1}{2}}\{A^{-\frac{1}{2}}BA^{-\frac{1}{2}}\}^{t}A^{\frac{1}{2}}\}dt\] \[=\int_{0}^{1}H_{t}(A,B)dt\leq\int_{0}^{1}F_{\alpha(t)}(A,B)dt\] \[=\int_{0}^{1}\left[4(t-t^{2})(A\sharp B)+(1-4(t-t^{2}))\left(\frac{A+B}{2}\right)\right]dt\] \[=\frac{2}{3}(A\sharp B)+\frac{1}{3}\left(\frac{A+B}{2}\right)\] \[=F_{\frac{1}{3}}(A,B).\]
It is clear that if \(\nu_{1}\leq\nu_{2}\) then \(F_{\nu_{1}}(A,B)\leq F_{\nu_{2}}(A,B)\), since \(A\sharp B\leq A\nabla B\). Therefore,
\[L(A,B)\leq F_{\beta}(A,B)\leq H_{\gamma}(A,B), \tag{3.8}\]
where \(\frac{1}{3}\leq\beta\leq 1\leq\gamma\).
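In the scalar case \(L(a,b)=\int_{0}^{1}a^{1-\nu}b^{\nu}\,d\nu=\frac{b-a}{\log b-\log a}\) for \(a\neq b\), so (3.8) can be checked directly. For instance, with \(a=1\) and \(b=4\),
\[L(1,4)=\frac{3}{\log 4}\approx 2.164\leq F_{1/3}(1,4)=\frac{2}{3}\cdot 2+\frac{1}{3}\cdot\frac{5}{2}=\frac{13}{6}\approx 2.167,\]
and, taking for example \(\beta=1\) and \(\gamma=\frac{3}{2}\),
\[F_{1}(1,4)=\frac{5}{2}\leq H_{3/2}(1,4)=\frac{17}{4},\]
consistent with (3.8).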
|
2301.03969 | Exploring bulk viscous unified scenarios with Gravitational Waves
Standard Sirens | We consider the unified bulk viscous scenarios and constrain them using the
Cosmic Microwave Background observations from Planck 2018 and the Pantheon
sample from Type Ia Supernovae. Then we generate the luminosity distance
measurements from ${\cal O}(10^3)$ mock Gravitational Wave Standard Sirens
(GWSS) events for the proposed Einstein Telescope. We then combine these mock
luminosity distance measurements from the GWSS with the current cosmological
probes in order to forecast how the mock GWSS data could be effective in
constraining these bulk viscous scenarios. Our results show that a non-zero
time dependent bulk viscosity in the universe sector is strongly preferred by
the current cosmological probes and will possibly be confirmed at many standard
deviations by the future GWSS measurements. We further mention that the
addition of GWSS data can significantly reduce the uncertainties of the key
cosmological parameters obtained from the usual cosmological probes employed in
this work. | Weiqiang Yang, Supriya Pan, Eleonora Di Valentino, Celia Escamilla-Rivera, Andronikos Paliathanasis | 2023-01-10T14:05:34Z | http://arxiv.org/abs/2301.03969v2 | # Exploring bulk viscous unified scenarios with Gravitational Waves Standard Sirens
###### Abstract
We consider the unified bulk viscous scenarios and constrain them using the Cosmic Microwave Background observations from Planck 2018 and the Pantheon sample from Type Ia Supernovae. Then we generate the luminosity distance measurements from \(\mathcal{O}(10^{3})\) mock Gravitational Wave Standard Sirens (GWSS) events for the proposed Einstein Telescope. We then combine these mock luminosity distance measurements from the GWSS with the current cosmological probes in order to forecast how the mock GWSS data could be effective in constraining these bulk viscous scenarios. Our results show that a non-zero time dependent bulk viscosity in the universe sector is strongly preferred by the current cosmological probes and will possibly be confirmed at many standard deviations by the future GWSS measurements. We further mention that the addition of GWSS data can significantly reduce the uncertainties of the key cosmological parameters obtained from the usual cosmological probes employed in this work.
## I Introduction
Understanding the nature of dark matter and dark energy has been a challenge for cosmologists. The standard cosmological model, namely, the so-called \(\Lambda\)-Cold Dark Matter (\(\Lambda\)CDM) model, representing a mixture of two non-interacting fluids \(-\) a positive cosmological constant (\(\Lambda>0\)) and a cold dark matter component, has undoubtedly proved remarkably successful in explaining a large span of astronomical data. However, this simplest cosmological scenario has some limitations. For example, the cosmological constant problem [1] and the coincidence problem [2] have already questioned the existing assumptions in the \(\Lambda\)CDM model, e.g. the constant energy density of the vacuum and the non-interacting nature between \(\Lambda\) and CDM. These limitations motivated cosmologists to find alternative cosmological scenarios beyond \(\Lambda\)CDM by relaxing the above assumptions, and as a consequence, several new cosmological models were introduced; see [3; 4; 5; 6; 7; 8; 9; 10; 11; 12] for a review of various dark energy and modified gravity models. Additionally, the appearance of cosmological tensions at many standard deviations between Planck [13] (assuming \(\Lambda\)CDM in the background) and other cosmological probes, such as distance ladders [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24] or weak lensing [25; 26; 27; 28; 29] and galaxy cluster data [30; 31; 32], has further weakened the confidence in the \(\Lambda\)CDM cosmological model [33; 34; 35; 36; 37]. Thus, the list of cosmological models aiming to address the cosmological tensions keeps growing; see the review articles [38; 39; 40; 41; 42; 43; 44] and references therein. Given that the origin of dark matter and dark energy is not yet clearly understood, there is no reason to favor any particular cosmological theory over others. As a result, various ways have been proposed to interpret the dynamics of the dark sector in terms of dark matter and dark energy. The simplest assumption is the independent evolution of these dark fluids. A generalization of this assumption is a non-gravitational interaction between the dark sectors. On the other hand, a heuristic approach is to consider a unified dark fluid that can explain the dynamics of dark energy and dark matter at cosmological scales. The attempt to unify the dark sector of the Universe began long ago. The simplest unified dark sector models can be constructed in the context of Einstein gravity with the introduction of a generalized equation of state \(p=\mathcal{F}(\rho)\), where \(p\) and \(\rho\) are respectively the pressure and energy density of the unified dark sector and \(\mathcal{F}\) is an analytic function of the energy density, \(\rho\). The well-known unified cosmological models, such as the Chaplygin gas model [45] and its successive generalizations, namely, the generalized Chaplygin gas and the modified Chaplygin gas, see Refs. [46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57], and some other unified cosmological scenar
ios as well [58; 59; 60], belong to this classification. It is essential to mention that a subset of the unified models has been diagnosed with an exponential blowup in the matter power spectrum, which is not consistent with the observations [61]; however, this does not rule out the possibility of unified models aiming to cover a wide region of the universe's evolution, because a new kind of unified fluid may avoid such unphysical behaviour. The unified cosmological models can also be developed by considering a relation like \(p=\mathcal{G}(H)\), where \(\mathcal{G}\) is an analytic function of \(H\), the Hubble function of the Friedmann-Lemaitre-Robertson-Walker (FLRW) line element. At first sight, theories with \(p=\mathcal{F}(\rho)\) and \(p=\mathcal{G}(H)\) seem identical; however, this is only true in a spatially flat FLRW universe. For a curved universe, the two approaches are not the same.
In the present work we are interested in studying a particular class of unified models endowed with bulk viscosity. Cosmological fluids allowing bulk viscosity as an extra ingredient can explain the accelerating expansion of the universe, and hence they are also listed as possible alternatives to the standard \(\Lambda\)CDM cosmology in the literature [62; 63]. Following an earlier work, Ref. [64], where evidence of a non-zero bulk viscosity was found with the current cosmological probes, in the present article we use the simulated Gravitational Waves Standard Sirens (GWSS) measurements from the Einstein Telescope [65]1 in order to quantify the improvements of the cosmological parameters, if any, from the future GWSS measurements. Since gravitational waves (GW) have opened a new window for astrophysics and cosmology, it is interesting to investigate the contribution from the simulated GWSS data once combined with the current cosmological probes. This motivated many investigators to use mock GWSS data matching the expected sensitivity of the Einstein Telescope to constrain a class of cosmological models; see for instance [66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77]. In particular, the combined analysis of simulated GWSS measurements from the Einstein Telescope and the standard cosmological probes has proven to be very effective for a class of cosmological models, in the sense that the error bars on the key cosmological parameters of these models are significantly reduced thanks to the mock GWSS dataset [78; 79; 70; 71; 74; 76; 77]. However, in some specific \(f(R)\) theories of gravity, the mock GWSS generated for the Einstein Telescope may not be very helpful in giving stringent constraints during its first phase of running [82]. Thus, one may expect that the constraining power of the Einstein Telescope may depend on the underlying cosmological model. Aside from the future GWSS measurements from the Einstein Telescope, one can also use the simulated GWSS measurements from other GW observatories, such as the Laser Interferometer Space Antenna (LISA) [83; 84; 85; 86], the DECi-hertz Interferometer Gravitational wave Observatory (DECIGO) [87; 88], and TianQin [89]. In this article, we focus only on the simulated GWSS data from the Einstein Telescope to constrain the bulk viscous unified scenario.
Footnote 1: [https://www.einsteintelescope.nl/en/](https://www.einsteintelescope.nl/en/)
The paper is organized as follows: in Sec. II we discuss the gravitational equations for the bulk viscous scenario. Sec. III describes the observational data that we have considered for the analysis in this work. Sec. IV presents the observational constraints on the bulk viscous models, where we mainly discuss how the inclusion of gravitational wave data from the Einstein Telescope improves the constraints. Finally, in Sec. V we present the conclusions.
## II Revisiting the bulk viscous scenarios: background and perturbations
As usual, we consider the homogeneous and isotropic spacetime described by the Friedmann-Lemaitre-Robertson-Walker (FLRW) line element
\[ds^{2}=-dt^{2}+a^{2}(t)\left[\frac{dr^{2}}{1-kr^{2}}+r^{2}\left(d\theta^{2}+ \sin^{2}\theta d\phi^{2}\right)\right], \tag{1}\]
where \(a(t)\) is the expansion scale factor and \(k\) denotes the spatial curvature of the universe. For \(k=0,-1,+1\), we have three different geometries of the universe, namely, spatially flat, open and closed, respectively. In this paper we restrict ourselves to the spatially flat scenario where we assume that (i) the gravitational sector is described by the Einstein's gravity, (ii) the matter sector of the universe consists of the relativistic radiation, non-relativistic baryons and a unified bulk viscous fluid which combines the effects of dark matter and dark energy, (iii) all the fluids are non-interacting with each other. Within this framework, we can write down the gravitational field equations as follows (in the units where \(8\pi G=1\))
\[H^{2}=\frac{1}{3}\rho_{\rm tot}, \tag{2}\] \[2\dot{H}+3H^{2}=-\ p_{\rm tot}, \tag{3}\]
where an overhead dot indicates the derivative with respect to the cosmic time \(t\); \(H\equiv\dot{a}/a\) is the Hubble expansion rate; \((\rho_{\rm tot},p_{\rm tot})=(\rho_{r}+\rho_{b}+\rho_{u},p_{r}+p_{b}+p_{u})\) are the total energy density and total pressure of the cosmic components in which \((\rho_{r},p_{r})\), \((\rho_{b},p_{b})\), \((\rho_{u},p_{u})\) are the energy density and pressure of radiation, baryons and the unified fluid, respectively. The conservation equation for each fluid follows the usual law \(\dot{\rho}_{i}+3H(1+w_{i})\rho_{i}=0\), where the subscript \(i\) refers to radiation (\(i=r\)), baryons (\(i=b\)) and the unified fluid (\(i=u\)) and \(\omega_{i}\) are the standard barotropic state parameters: \(w_{r}=p_{r}/\rho_{r}=1/3\), \(w_{b}=p_{b}/\rho_{b}=0\) and \(w_{u}=p_{u}/\rho_{u}=(\gamma-1)\), where \(\gamma\) is a constant parameter. In general for different values of \(\gamma\), say for instance, \(\gamma=0\), we realize a cosmo
logical constant-like fluid endowed with the bulk viscosity, while \(\gamma=1\) results in a dust-like fluid endowed with the bulk viscosity. As the nature of the fluid is not clearly understood, and observational data play an effective role in probing it, we consider \(\gamma\) lying in the interval \([-3,3]\), which includes both exotic \((p_{u}/\rho_{u}=(\gamma-1)<-1/3)\) and non-exotic \((p_{u}/\rho_{u}=(\gamma-1)>-1/3)\) fluids. As already mentioned, since the unified fluid has a bulk viscosity, it acquires an effective pressure [90]: \(p_{\rm eff}=p_{u}-\eta(\rho_{u})\nabla_{\sigma}u^{\sigma}\), where \(\nabla_{\sigma}u^{\sigma}\) is the expansion scalar of this fluid and \(\eta(\rho_{u})>0\) is the coefficient of the bulk viscosity. Thus, in the FLRW background, the effective pressure of the bulk viscous fluid reduces to
\[p_{\rm eff}=p_{u}-3H\eta(\rho_{u}). \tag{4}\]
Since there is no unique choice for the bulk viscous coefficient \(\eta(\rho_{u})\), we consider a well-known ansatz in which the bulk viscous coefficient has a power-law evolution of the form [91; 90; 92]:
\[\eta(\rho_{u})=\alpha\rho_{u}^{m}, \tag{5}\]
where \(\alpha\) is a positive constant and \(m\) is any real number. Notice that for the case \(m=0\) we recover the scenario with a constant bulk viscous coefficient. Now, with the consideration of the bulk viscous coefficient in (5), the effective pressure of the unified fluid can be expressed as
\[p_{\rm eff}=(\gamma-1)\rho_{u}-\sqrt{3}\alpha\rho_{\rm tot}^{1/2}\rho_{u}^{m}, \tag{6}\]
and consequently, one can define the effective equation of state of the viscous dark fluid as
\[w_{\rm eff}=\frac{p_{\rm eff}}{\rho_{u}}=(\gamma-1)-\sqrt{3}\alpha\rho_{\rm tot }^{1/2}\rho_{u}^{m-1}. \tag{7}\]
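To make the role of Eq. (7) concrete, the following minimal sketch integrates the background evolution of the unified fluid, assuming that \(\rho_{u}\) is conserved with its effective pressure, i.e. \(\dot{\rho}_{u}+3H(1+w_{\rm eff})\rho_{u}=0\). The parameter values and present-day densities below are purely illustrative (they are not the best-fit values obtained later):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (hypothetical) parameter values; units 8*pi*G = c = 1, so H^2 = rho_tot/3
alpha, m, gamma = 1.0, -0.5, 1.0                 # bulk viscous parameters of Eqs. (5)-(7)
rho_r0, rho_b0, rho_u0 = 2.5e-4, 0.15, 2.85      # toy present-day densities (rho_tot,0 ~ 3)

def w_eff(rho_u, rho_tot):
    """Effective equation of state of the viscous fluid, Eq. (7)."""
    return (gamma - 1.0) - np.sqrt(3.0) * alpha * np.sqrt(rho_tot) * rho_u**(m - 1)

def rhs(N, y):
    """d(rho_u)/dN with N = ln a; radiation and baryons scale analytically."""
    rho_u = y[0]
    rho_tot = rho_r0 * np.exp(-4 * N) + rho_b0 * np.exp(-3 * N) + rho_u
    return [-3.0 * (1.0 + w_eff(rho_u, rho_tot)) * rho_u]

# Integrate backwards from today (N = 0) to z = 1100
sol = solve_ivp(rhs, (0.0, -np.log(1101.0)), [rho_u0], dense_output=True, rtol=1e-8)
for z in [0.0, 0.5, 2.0, 10.0, 1100.0]:
    rho_u = sol.sol(-np.log(1.0 + z))[0]
    rho_tot = rho_r0 * (1 + z)**4 + rho_b0 * (1 + z)**3 + rho_u
    print(f"z = {z:7.1f}:  w_eff = {w_eff(rho_u, rho_tot):+.3f}")
```

With these toy values the fluid is dust-like (\(w_{\rm eff}\to\gamma-1=0\)) at high redshift and develops a negative effective pressure at late times, which is precisely the unified behaviour described above.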
The adiabatic sound speed for the viscous fluid is given by
\[c_{a,{\rm eff}}^{2}=\frac{p^{\prime}_{\rm eff}}{\rho^{\prime}_{u}}=w_{\rm eff}-\frac{w^{\prime}_{\rm eff}}{3\mathcal{H}(1+w_{\rm eff})}. \tag{8}\]
where the prime denotes the derivative with respect to the conformal time \(\tau\) and \(\mathcal{H}\) is the conformal Hubble parameter, \(\mathcal{H}=aH\). Note that depending on the nature of \(w_{\rm eff}\), \(c_{a,{\rm eff}}^{2}\) could be negative, and hence \(c_{a,{\rm eff}}\) could be an imaginary quantity. This may invite instabilities in the perturbations. Thus, in order to avoid this possible unphysical situation, we consider the entropy perturbations (non-adiabatic perturbations) in the unified dark fluid following the analysis of generalized dark matter [93].
Now we focus on the evolution of the unified bulk viscous fluid at the level of perturbations. In the entropy perturbation mode, the true pressure perturbation comes from the effective pressure given by
\[\delta p_{\rm eff} =\delta p_{u}-\delta\eta(\nabla_{\sigma}u^{\sigma})-\eta(\delta \nabla_{\sigma}u^{\sigma})\] \[=\delta p_{u}-3H\delta\eta-\frac{\eta}{a}\left(\theta+\frac{h^{ \prime}}{2}\right). \tag{9}\]
The effective sound speed of viscous dark fluid for the bulk viscous coefficient (5) can be defined as
\[c_{s,{\rm eff}}^{2}\equiv\left(\frac{\delta p_{\rm eff}}{\delta \rho_{u}}\right)_{rf}\] \[=c_{s}^{2}-\sqrt{3}\alpha m\rho_{\rm tot}^{1/2}\rho_{u}^{m-1}- \frac{\alpha\rho_{u}^{m-1}}{a\delta_{u}}\left(\theta+\frac{h^{\prime}}{2} \right), \tag{10}\]
where '\(|_{rf}\)' denotes the rest frame. Following the analysis in [93], the sound speed in the rest frame is assumed to be zero, i.e. \(c_{s}^{2}=0\).
The density perturbation and the velocity perturbation can also be written as [93]
\[\delta^{\prime}_{u} =-(1+w_{\rm eff})\left(\theta_{u}+\frac{h^{\prime}}{2}\right)+\frac{w^{\prime}_{\rm eff}}{1+w_{\rm eff}}\delta_{u}\] \[-3\mathcal{H}(c_{s,{\rm eff}}^{2}-c_{a,{\rm eff}}^{2})\left[\delta_{u}+3\mathcal{H}(1+w_{\rm eff})\frac{\theta_{u}}{k^{2}}\right], \tag{11}\] \[\theta^{\prime}_{u} =-\mathcal{H}(1-3c_{s,{\rm eff}}^{2})\theta_{u}+\frac{c_{s,{\rm eff}}^{2}}{1+w_{\rm eff}}k^{2}\delta_{u}. \tag{12}\]
Thus, following the evolution at the background and perturbation levels prescribed above, one can understand the dynamics of the bulk viscous fluid. In this work we consider two different bulk viscous scenarios, characterized as follows: the bulk viscous model 1 (labeled BVF1), where we fix \(\gamma=1\), and the bulk viscous model 2 (labeled BVF2), where we keep \(\gamma\) as a free parameter. The common parameters in both BVF1 and BVF2 are \(\alpha\) and \(m\).
## III Standard Cosmological Probes, Simulated GWSS Data, and Methodology
In this section we describe the cosmological data sets employed to perform the statistical analyses of the present bulk viscous scenarios.
* **Cosmic Microwave Background (CMB):** We use the CMB data from the Planck 2018 data release. Precisely, we use the CMB temperature and polarization angular power spectra _plikTT-TEEE+lowl+lowE_[94; 95].
* **Pantheon sample from Type Ia Supernovae (SNIa) data:** Type Ia Supernovae were the first astronomical data to probe the accelerating expansion of the universe and hence indicated the existence of an exotic fluid with negative pressure (dark energy). Here we use the Pantheon compilation of the SNIa data, comprising 1048 data points spanning the redshift interval \([0.01,2.3]\) [96].
* **Gravitational Waves Standard Sirens (GWSS):** We take the mock Gravitational Waves Standard Sirens (GWSS) data generated by matching the expected sensitivity of Einstein
Telescope in order to understand the constraining power of the future GWSS data from the Einstein Telescope. The Einstein Telescope is a proposed ground-based third-generation (3G) GW detector. The telescope will have a triangular shape, with each arm extended to 10 km, compared to the 3 km arms of VIRGO and the 4 km arms of LIGO [65, 97]. Thanks to this increased arm length, the Einstein Telescope will reduce all displacement noises, making it a powerful GW detector [65, 97]. It is expected that after 10 years of operation, the Einstein Telescope will detect \(\mathcal{O}(10^{3})\) GWSS events. The detection of \(\mathcal{O}(10^{3})\) GWSS events is, however, an optimistic assumption, and the actual number of detections could be lower [65]. As argued in [65], the Einstein Telescope is likely to detect 20 \(-\) 50 events per year, i.e. 200 \(-\) 500 events in 10 years. Nevertheless, following the earlier works [66, 67, 70, 71, 74, 77, 79, 81, 82], in this article we restrict ourselves to the detection of \(\mathcal{O}(10^{3})\) GWSS events by the Einstein Telescope to constrain the bulk viscous scenarios. For more features of the Einstein Telescope we refer the readers to [65].
We generate mock GWSS luminosity distance measurements matching the expected sensitivity of the Einstein Telescope after 10 years of full operation. Specifically, we create 1000 triples (\(z_{i}\), \(d_{L}(z_{i})\), \(\sigma_{i}\)) where \(z_{i}\) is the redshift of a GW source, \(d_{L}(z_{i})\) is the measured luminosity distance at redshift \(z_{i}\) and \(\sigma_{i}\) is the uncertainty associated with the luminosity distance \(d_{L}(z_{i})\). Let us briefly summarize the procedure for generating the mock GWSS dataset; we refer to Refs. [70, 71, 81] for more technical details. The initial step for generating the mock GWSS dataset is to identify the expected GW sources. We consider the GW events originating from two distinct binary systems, namely, (i) a combination of a Black Hole (BH) and a Neutron Star (NS) merger, identified as BHNS, and (ii) the binary neutron star (BNS) merger. Following the mass distributions as described in Ref. [81], the ratio of the number of GW events for the BHNS merger versus the BNS merger is taken to be 0.03, as predicted for the Advanced LIGO-VIRGO network [98]. We then determine the merger rate \(R(z)\) of the sources, and from it the redshift distribution of the sources, \(P(z)\), given by [66, 67, 70, 79, 81, 99]
\[P(z)\propto\frac{4\pi d_{C}^{2}(z)R(z)}{H(z)(1+z)}, \tag{13}\]
where \(d_{C}(z)\equiv\int_{0}^{z}H^{-1}(z^{\prime})dz^{\prime}\) is the co-moving distance and for \(R(z)\) we take the following piecewise linear function estimated in [100] (also see [66, 67, 70, 79, 81, 101]): \(R(z)=1+2z\) for \(z\leq 1\), \(R(z)=\frac{3}{4}(5-z)\), for \(1<z<5\) and \(R(z)=0\) for \(z>5\). After having \(P(z)\), we sample 1000 values of redshifts from this distribution which represent the redshifts \(z_{i}\) of our 1000 mock GWSS data.
The next step is to choose a fiducial model: going from the merger rate to the redshift distribution requires a fiducial cosmology, since the expression for \(P(z)\) includes both the co-moving distance and the expansion rate at redshift \(z\), i.e. \(d_{C}(z)\) and \(H(z)\), respectively. This \(H(z)\) corresponds to the fiducial model. As in this article we are interested in investigating how the inclusion of GWSS data improves the constraints on the BVF models, we generate two different mock GWSS datasets choosing BVF1 and BVF2 as the fiducial models. We take the fiducial values of the cosmological parameters to be the best-fit values of the same parameters of the BVF1 and BVF2 models obtained from the CMB+Pantheon data analysis. For the chosen fiducial model(s), one can then estimate the luminosity distance at the redshift \(z_{i}\) using the relation
\[d_{L}(z_{i})=(1+z_{i})\int_{0}^{z_{i}}\frac{dz^{\prime}}{H(z^{ \prime})}\,. \tag{14}\]
Thus, after having the luminosity distance \(d_{L}(z_{i})\) of the GW source, our last job is to determine the uncertainty \(\sigma_{i}\) associated with this luminosity distance. The determination of \(\sigma_{i}\) directly connects to the GW waveform, because the GW amplitude depends on the luminosity distance (as well as on the so-called chirp mass [66, 67, 79]), and hence one can extract the information about \(d_{L}(z_{i})\) and \(\sigma_{i}\). We refer to Refs. [66, 67, 70, 79, 81] for the technical details of calculating the uncertainties on the luminosity distance measurements. The luminosity distance measurement \(d_{L}(z_{i})\) has two kinds of uncertainties: the instrumental uncertainty \(\sigma_{i}^{\rm inst}\) and the weak lensing uncertainty \(\sigma_{i}^{\rm lens}\). The instrumental error can be derived to be \(\sigma_{i}^{\rm inst}\simeq 2d_{L}(z_{i})/\mathcal{S}\), where \(\mathcal{S}\) is the combined signal-to-noise ratio of the Einstein Telescope, using the Fisher matrix approach and assuming that the uncertainty on \(d_{L}(z_{i})\) is not correlated with the uncertainties on the remaining GW parameters (see [66, 67, 70, 79, 81]), while the lensing error is \(\sigma_{i}^{\rm lens}\simeq 0.05z_{i}d_{L}(z_{i})\) [66]. Thus, the total uncertainty due to the instrumental and the weak lensing uncertainties on \(d_{L}(z_{i})\) is \(\sigma_{i}=\sqrt{(\sigma_{i}^{\rm inst})^{2}+(\sigma_{i}^{\rm lens})^{2}}\). Finally, let us note that the combined signal-to-noise ratio of the GW detector is a crucial quantity in this context since, for the Einstein Telescope, the combined signal-to-noise ratio should be
at least 8 for a GW detection [99]. Thus, in summary, we generate 1000 GW sources up to redshift \(z=2\) with \(\mathcal{S}>8\); a schematic sketch of this mock-data pipeline is given below. For more technical details we refer the readers to Refs. [66; 67; 70; 79; 81; 99].
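As a rough, self-contained illustration of this pipeline (and only that: the fiducial \(H(z)\) below is a stand-in flat \(\Lambda\)CDM with hypothetical \(H_{0}\) and \(\Omega_{m}\) rather than the BVF best fits actually used, and the signal-to-noise ratios are toy draws rather than Fisher-matrix outputs from the detector noise curve), one may sample the source redshifts from \(P(z)\), compute \(d_{L}(z_{i})\) via Eq. (14), and attach the instrumental-plus-lensing uncertainty:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

rng = np.random.default_rng(42)

# Stand-in fiducial expansion history (flat LCDM; the paper uses the BVF best fits)
H0, Om = 67.0, 0.31                    # H0 in km/s/Mpc (hypothetical values)
c = 299792.458                         # speed of light in km/s
def H(z):
    return H0 * np.sqrt(Om * (1 + z)**3 + 1 - Om)

def R(z):
    """Piecewise-linear merger rate quoted in the text."""
    return np.where(z <= 1, 1 + 2 * z, np.where(z < 5, 0.75 * (5 - z), 0.0))

# Comoving distance d_C(z) and redshift distribution P(z) of Eq. (13)
z_grid = np.linspace(1e-4, 2.0, 2000)  # sources generated up to z = 2
dC = c * cumulative_trapezoid(1.0 / H(z_grid), z_grid, initial=0.0)   # in Mpc
P = 4 * np.pi * dC**2 * R(z_grid) / (H(z_grid) * (1 + z_grid))
cdf = cumulative_trapezoid(P, z_grid, initial=0.0)
cdf /= cdf[-1]

# Sample 1000 redshifts by inverse-CDF, then build the (z_i, d_L(z_i), sigma_i) triples
z_i = np.interp(rng.uniform(size=1000), cdf, z_grid)
dL = (1 + z_i) * np.interp(z_i, z_grid, dC)                    # Eq. (14)
snr = rng.uniform(8.0, 100.0, size=1000)                       # toy SNR draws with S > 8
sigma = np.sqrt((2 * dL / snr)**2 + (0.05 * z_i * dL)**2)      # instrumental (+) lensing
d_obs = rng.normal(dL, sigma)                                  # scatter around the fiducial
print(f"mean z = {z_i.mean():.2f},  mean sigma/d_L = {np.mean(sigma / dL):.3f}")
```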
To constrain the BVF scenarios we modify the publicly available CosmoMC package [102], an excellent cosmological MCMC code that supports the Planck 2018 likelihood [95] and features a convergence diagnostic based on the Gelman-Rubin statistic \(R-1\) [103]. It is essential to mention that for both BVF1 and BVF2 scenarios, we have used the dimensionless quantity \(\beta=\alpha H_{0}\rho_{\text{tot},0}^{m-1}\), where \(\rho_{\text{tot},0}\) is the present value of \(\rho_{\text{tot}}\). We further mention that \(\beta=0\) (equivalently, \(\alpha=0\)) implies no viscosity, in which case the overall picture is that of a unified cosmic fluid without bulk viscosity. Thus, in summary, the parameter spaces of BVF1 and BVF2 are as follows:
\[\mathcal{P}_{\text{BVF1}} \equiv\{\Omega_{b}h^{2},100\theta_{\text{MC}},\tau,n_{s},\ln(10 ^{10}A_{s}),\beta,m\}\] \[\mathcal{P}_{\text{BVF2}} =\{\Omega_{b}h^{2},100\theta_{\text{MC}},\tau,n_{s},\ln(10^{10}A _{s}),\beta,m,\gamma\}\]
where the description of the free parameters is as follows: \(\Omega_{b}h^{2}\) is the baryon density; \(100\theta_{MC}\) is 100 times the ratio of the sound horizon to the angular diameter distance; \(\tau\) is the optical depth; \(n_{s}\) is the scalar spectral index; and \(A_{s}\) is the amplitude of the initial power spectrum. The flat priors on both cosmological scenarios are shown in Table 1.
## IV Observational constraints: results and analysis
In this section we present the constraints on the bulk viscous scenarios considering CMB+Pantheon and CMB+Pantheon+GWSS. Since we are interested in estimating the improvement of the cosmological parameter constraints in the presence of the GWSS measurements, and since the combined standard cosmological probes offer the most stringent constraints on the cosmological parameters, the addition of GWSS to the combined standard cosmological probes is the natural choice. As mentioned earlier, the key common free parameters of BVF1 and BVF2 are \(\beta\) and \(m\): \(\beta\neq 0\) indicates a preference for a non-zero bulk viscosity, and \(m\neq 0\) indicates that the coefficient of the bulk viscosity is not constant in the redshift range considered. In the following subsections we discuss the constraints on these two scenarios in detail.
### Constraints on the BVF1 scenario
In Table 2 we have presented the constraints on the BVF1 scenario for CMB+Pantheon and CMB+Pantheon+GWSS. The latter dataset is aimed at understanding the improvement expected from GWSS on the constraints from CMB+Pantheon. In Fig. 1 we have compared these datasets graphically by showing the one-dimensional marginalized distributions of some model parameters and the two-dimensional joint contours. As discussed, this scenario has two main key parameters, namely, \(\beta\), quantifying the existence of bulk viscosity in the cosmic sector, and \(m\), which tells us whether the bulk viscosity has a dynamical nature (corresponding to \(m\neq 0\)) or not.
For CMB+Pantheon, we find evidence for a non-zero bulk viscosity in the cosmic sector at many standard deviations, i.e. \(\beta=0.430^{+0.017}_{-0.016}\) at 68% CL, and this is further strengthened for CMB+Pantheon+GWSS, where \(\beta=0.4262^{+0.0079}_{-0.0078}\) at 68% CL.2 One can clearly see that the inclusion of GWSS to CMB+Pantheon improves the error bars on \(\beta\) by a factor of at least 2. This reflects the constraining power of GWSS. On the other hand, focusing on the parameter \(m\), which quantifies the time evolution of the bulk viscosity, we see that \(m\) remains non-zero at several standard deviations for CMB+Pantheon, where the 68% CL constraint on \(m\) is \(m=-0.557^{+0.068}_{-0.059}\), and becomes \(m=-0.519^{+0.038}_{-0.035}\) for CMB+Pantheon+GWSS. From the constraints on \(m\), one can clearly see that the uncertainty in \(m\) is reduced by a factor of \(\sim 1.7-1.8\) when the GWSS data are included with the combined dataset CMB+Pantheon. Concerning the Hubble constant, we find that \(H_{0}\) assumes slightly higher values compared to the \(\Lambda\)CDM-based Planck estimate. Actually, we have \(H_{0}=68.1^{+1.2}_{-1.1}\) at 68% CL for CMB+Pantheon, while \(H_{0}=68.30^{+0.46}_{-0.45}\) at 68% CL for CMB+Pantheon+GWSS, again improving the uncertainty in \(H_{0}\) by a factor of 2.5. This shows that the effects of GWSS are clearly visible through these parameters. In Fig. 1, one can compare the constraints on the model parameters obtained from CMB+Pantheon and CMB+Pantheon+GWSS.
\begin{table}
\begin{tabular}{c c c} \hline Parameter & Priors (BVF1) & Priors (BVF2) \\ \hline \(\Omega_{b}h^{2}\) & \([0.005,0.1]\) & \([0.005,0.1]\) \\ \(\tau\) & \([0.01,0.8]\) & \([0.01,0.8]\) \\ \(n_{s}\) & \([0.5,1.5]\) & \([0.5,1.5]\) \\ \(\ln(10^{10}A_{s})\) & \([2.4,4]\) & \([2.4,4]\) \\ \(100\theta_{\text{MC}}\) & \([0.5,10]\) & \([0.5,10]\) \\ \(\beta\) & \([0,1]\) & \([0,1]\) \\ \(m\) & \([-2,0.5]\) & \([-2,0.5]\) \\ \(\gamma\) & \(-\) & \([-3,3]\) \\ \hline \end{tabular}
\end{table}
Table 1: We show the flat priors on all the free parameters associated with the bulk viscous models.
Finally, through Fig. 2 we examine how the model affects the CMB TT power spectrum for different values of the model parameters \(\beta\) and \(m\) with respect to the standard \(\Lambda\)CDM scenario. In the upper panel of Fig. 2 we depict the evolution of the CMB TT power spectrum for different values of \(\beta\), while in the lower panel we do so for different values of \(m\). From both graphs, we notice that as \(\beta\) or \(m\) increases, the model exhibits significant differences at the lower multipoles (\(\ell\leq 10\)). For \(\ell\geq 10\), we observe that with increasing values of \(\beta\) and \(m\), the peaks of the CMB TT power spectrum increase significantly, particularly changing their mutual ratio.
\begin{table}
\begin{tabular}{c c c} \hline \hline Parameters & CMB+Pantheon & CMB+Pantheon+GWSS \\ \hline
\(\Omega_{b}h^{2}\) & \(0.02232^{+0.00015+0.00029}_{-0.00015-0.00028}\) & \(0.02253^{+0.00014+0.00028}_{-0.00014-0.00026}\) \\
\(100\theta_{\rm MC}\) & \(1.02780^{+0.00055+0.0011}_{-0.00055-0.0011}\) & \(1.02808^{+0.00037+0.00073}_{-0.00038-0.00073}\) \\
\(\tau\) & \(0.0537^{+0.0074+0.016}_{-0.0077-0.015}\) & \(0.0567^{+0.0078+0.016}_{-0.0078-0.015}\) \\
\(n_{s}\) & \(0.9641^{+0.0043+0.0086}_{-0.0043-0.0084}\) & \(0.9686^{+0.0041+0.0089}_{-0.0040-0.0080}\) \\
\(\ln(10^{10}A_{s})\) & \(3.046^{+0.016+0.031}_{-0.015-0.031}\) & \(3.048^{+0.016+0.033}_{-0.016-0.033}\) \\
\(\beta\) & \(0.430^{+0.017+0.033}_{-0.016-0.034}\) & \(0.4262^{+0.0079+0.016}_{-0.0078-0.015}\) \\
\(m\) & \(-0.557^{+0.068+0.12}_{-0.059-0.13}\) & \(-0.519^{+0.038+0.074}_{-0.035-0.075}\) \\
\(H_{0}\) & \(68.1^{+1.2+2.2}_{-1.1-2.3}\) & \(68.30^{+0.46+0.91}_{-0.45-0.85}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: We report the observational constraints on the BVF1 scenario at 68% and 95% CL for CMB+Pantheon and CMB+Pantheon+GWSS datasets.
Figure 1: For the BVF1 scenario we show the 1-dimensional posterior distribution of some model parameters and the 2-dimensional joint contours of the model parameters at 68% and 95% CL for CMB+Pantheon and CMB+Pantheon+GWSS.
### Constraints on the BVF2 scenario
In Table 3 we present the constraints on the BVF2 scenario for both CMB+Pantheon and CMB+Pantheon+GWSS, and in Fig. 3 we compare the constraints from these datasets explicitly, showing the one-dimensional marginalized distributions of some model parameters and the two-dimensional joint contours. As already discussed, this scenario has three main key parameters, namely, \(\beta\), which quantifies the existence of bulk viscosity in the cosmic sector, \(m\), which tells us whether the bulk viscosity enjoys a dynamical nature (corresponding to \(m\neq 0\)) or not, and finally, the parameter \(\gamma\), which characterizes the fluid endowed with the bulk viscosity. We note that \(\gamma=1\) refers to the dust fluid endowed with the bulk viscosity in which we are interested, for which we recover the previous scenario BVF1.
For CMB+Pantheon, we find that \(\beta\neq 0\) at several standard deviations, yielding \(\beta=0.447\pm 0.022\) at 68% CL, which gives a clear indication of a non-zero bulk viscosity in the cosmic sector. When the GWSS are added to this combination, i.e. CMB+Pantheon+GWSS, the conclusion about \(\beta\) does not change significantly (\(\beta=0.425^{+0.018}_{-0.016}\) at 68% CL), indicating that for this scenario GWSS do not provide any additional constraining power on \(\beta\). Looking at the dynamical nature of the bulk viscosity, we see that for CMB+Pantheon, \(m\) remains nonzero at more than 2 standard deviations, leading to \(m=-0.85^{+0.30}_{-0.19}\) at 68% CL. However, this evidence could be further strengthened by the inclusion of the GWSS data, which we forecast to give \(m=-0.683^{+0.099}_{-0.089}\) at 68% CL for CMB+Pantheon+GWSS, improving the error bars up to a factor of 3. Finally, focusing on the parameter \(\gamma\), which directly connects with the nature of the cosmic fluid endowed with the bulk viscosity, we can see that it is consistent with 1, which corresponds to a dust-like fluid, within 2 standard deviations for CMB+Pantheon (\(\gamma=0.9970^{+0.0015}_{-0.0024}\) at 68% CL). Also for this parameter, the addition of the GWSS improves the constraining power by a factor of 3 to 4, with the forecast \(\gamma=0.99757^{+0.00049}_{-0.00058}\) at 68% CL for CMB+Pantheon+GWSS. Therefore, with respect to the BVF1 case, where the inclusion of the forecasted GWSS was able to improve both \(\beta\) and \(m\), in this BVF2 scenario the improvement of the constraining power appears only in \(m\) and \(\gamma\) and no longer affects \(\beta\) signifi
Figure 2: The CMB \(C_{l}^{TT}\) power spectrum versus multipole moment \(l\), computed using the best-fit values obtained for the BVF1 model from the joint datasets described in the text, for three arbitrary values of \(\beta\) and \(m\).
\begin{table}
\begin{tabular}{c c c} \hline \hline Parameters & CMB+Pantheon & CMB+Pantheon+GWSS \\ \hline
\(\Omega_{b}h^{2}\) & \(0.02241^{+0.00016+0.00032}_{-0.00016-0.00032}\) & \(0.02238^{+0.00015+0.00030}_{-0.00016-0.00031}\) \\
\(100\theta_{\rm MC}\) & \(1.02907^{+0.00110+0.00180}_{-0.00082-0.00198}\) & \(1.02921^{+0.00040+0.00079}_{-0.00040-0.00079}\) \\
\(\tau\) & \(0.0516^{+0.0047+0.015}_{-0.0072-0.015}\) & \(0.0521^{+0.0071+0.015}_{-0.0078-0.014}\) \\
\(n_{s}\) & \(0.9575^{+0.0050+0.012}_{-0.0060-0.012}\) & \(0.9583^{+0.0038+0.0075}_{-0.0038-0.0077}\) \\
\(\ln(10^{10}A_{s})\) & \(3.038^{+0.016+0.032}_{-0.017-0.032}\) & \(3.040^{+0.015+0.032}_{-0.015-0.031}\) \\
\(\beta\) & \(0.447^{+0.022+0.042}_{-0.022-0.042}\) & \(0.425^{+0.018+0.032}_{-0.016-0.034}\) \\
\(m\) & \(-0.85^{+0.30+0.46}_{-0.19-0.50}\) & \(-0.683^{+0.099+0.18}_{-0.089-0.19}\) \\
\(\gamma\) & \(0.9970^{+0.0015+0.0042}_{-0.0024-0.0036}\) & \(0.9975^{+0.00049+0.0011}_{-0.00058-0.0011}\) \\
\(H_{0}\) & \(65.2^{+1.7+4.4}_{-2.6-3.9}\) & \(64.91^{+0.59+1.1}_{-0.60-1.2}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: We report the observational constraints on the BVF2 scenario at 68% and 95% CL for CMB+Pantheon and CMB+Pantheon+GWSS.
cantly.
Furthermore, we find that for this scenario, the Hubble constant attains a very low value for CMB+Pantheon compared to Planck's estimation within the \(\Lambda\)CDM paradigm. We also note that \(H_{0}\) is correlated to all three free parameters of this scenario, namely, \(\beta\), \(m\) and \(\gamma\). With \(\beta\) and \(\gamma\), \(H_{0}\) is positively correlated while with \(m\), it has a strong anti-correlation. For CMB+Pantheon, \(H_{0}=65.2^{+1.7}_{-2.6}\) km/s/Mpc at 68% CL and after the inclusion of GWSS it becomes \(H_{0}=64.91^{+0.59}_{-0.60}\) km/s/Mpc at 68% CL, reducing the uncertainty in \(H_{0}\) by a factor of \(\sim 4\).
Finally, in Fig. 4, we examine the CMB TT power spectrum for the bulk viscous scenario BVF2, considering different values of the free parameter \(\gamma\) with respect to the standard \(\Lambda\)CDM scenario. As \(\gamma\) lies in the region \([-3,3]\) and the nature of the cosmic fluid characterized by its equation of state \(p_{u}=(\gamma-1)\rho_{u}\) depends on the sign of \(\gamma\), we have considered two separate plots: one where \(\gamma\) takes non-negative values (i.e. \(\gamma\geq 0\)) and another where negative values of \(\gamma\) are also allowed. From both panels of Fig. 4, we clearly see that any deviation from \(\gamma=1\) makes significant changes in the amplitude of the CMB TT power spectrum. In particular, we see that the peaks of the CMB TT spectrum significantly increase and shift towards higher multipoles for any value different from \(\gamma=1\) at small scales, as does the Integrated Sachs Wolfe (ISW) plateau at large scales. Since \(\gamma=1\) corresponds to a dust-like fluid endowed with the bulk viscosity, acting as a unified dark component, for \(\gamma=1\) we recover a behaviour equivalent to that of the \(\Lambda\)CDM scenario.
## V Conclusions
Although the \(\Lambda\)CDM cosmological model is extremely successful in describing a large span of astronomical observations, it cannot explain several theoretical and observational issues. This motivated the scientific community to construct a variety of cosmological proposals and to test them with the available astronomical data. Among these cosmological models, in this article, we focus on the unified cosmological models allowing bulk vis
Figure 3: For the BVF2 scenario we show the 1-dimensional posterior distributions of some model parameters and the 2-dimensional joint contours of the model parameters at 68% and 95% CL for CMB+Pantheon and CMB+Pantheon+GWSS.
cosity in the background. However, since these models do not recover the \(\Lambda\)CDM scenario as a special case, once the GWSS data become available, the only way to distinguish them will be a Bayesian model comparison assessing which better fits the cosmological observations, as done in Ref. [64]. The unified cosmological scenarios endowed with bulk viscosity are appealing from two different perspectives: the first one is the concept of a unified picture of dark matter and dark energy, and the second is the inclusion of bulk viscosity into that unified picture. Effectively, the unified bulk viscous scenario is a generalized cosmic picture combining two distinct cosmological directions. According to a recent paper on the unified bulk viscous scenarios [64], current cosmological probes prefer a non-zero dynamical bulk viscosity in the dark sector at many standard deviations. So, in light of the current cosmological probes, unified bulk viscous cosmological scenarios are attractive. A natural question is then what the future holds for such unified bulk viscous scenarios, and in this article we have addressed precisely this question.
Following Ref. [64], in this work we have explored these scenarios with the GWSS, aiming to understand how the future distance measurements from GWSS may improve the constraints on such scenarios. In order to proceed toward this confrontation, we have generated \(\mathcal{O}(10^{3})\) mock GWSS luminosity distance measurements matching the expected sensitivity of the Einstein Telescope and added these mock data to the current cosmological probes, namely CMB from the Planck 2018 release3 and the SNIa Pantheon sample. We find that the inclusion of the GWSS luminosity distance measurements together with the current cosmological probes makes the possible future evidence for new physics stronger, by significantly reducing the uncertainties in the parameters. This demonstrates the constraining power of the GWSS luminosity distance measurements, which make the parameter estimates much more precise. Overall, for both BVF1 and BVF2 scenarios, we find a very strong preference for a non-zero time-dependent bulk viscous coefficient (equivalently, for the viscous nature of the unified dark fluid) at many standard deviations.
Footnote 3: We mention that in the earlier work [64], CMB data from Planck 2015 were used to constrain the bulk viscous scenarios.
In conclusion, in the present paper we demonstrate that future GWSS distance measurements from the Einstein Telescope might be powerful in extracting more information about the physics of the dark sector. Therefore, based on the present results, we feel that it might just be a matter of time before we convincingly detect the viscosity in the dark sector, if any.
## VI Acknowledgments
The authors thank the referee for some important comments which helped us to improve the article considerably. WY was supported by the National Natural Science Foundation of China under Grants No. 12175096 and No. 11705079, and Liaoning Revitalization Talents Program under Grant no. XLYC1907098. SP acknowledges the financial support from the Department of Science and Technology (DST), Govt. of India, under the Scheme "Fund for Improvement of S&T Infrastructure (FIST)" [File No. SR/FST/MS-I/2019/41]. EDV is supported by a Royal Society Dorothy Hodgkin Research Fellowship. CE-R is supported by the Royal Astronomical Society as FRAS 10147 and by PAPIIT UNAM Project TA100122. This article/publication is based upon work from COST Action CA21136 Addressing observational tensions in cosmology with systematics and fundamental physics (CosmoVerse) supported by COST (European Cooperation in Science and Technology). AP was supported in part by the National Research Foundation of South Africa (Grant Number 131604). Also AP thanks the support of Vicerrectoria de Investigacion y Desarrollo Tecnologico (Vridt) at Universidad Catolica del Norte
Figure 4: The CMB \(C_{l}^{TT}\) power spectrum versus multipole moment \(l\), computed using the best-fit values obtained for the BVF2 model from the joint datasets described in the text, for three values of \(\gamma\) in each panel.
through Nucleo de Investigacion Geometria Diferencial y Aplicaciones, Resolucion Vridt No - 096/2022.
|
2305.18100 | Thermal Leptogenesis in the Minimal Gauged $U(1)_{L_μ-L_τ}$ Model | We discuss the thermal leptogenesis mechanism within the minimal gauged
U(1)$_{L_\mu-L_\tau}$ model to explain the observed baryon asymmetry of the
Universe (BAU). In such framework, the phases of the
Pontecorvo-Maki-Nakagawa-Sakata neutrino mixing matrix and the sum of the
Standard Model neutrino masses are predictable because of a restricted neutrino
mass matrix structure. Additionally, in the context of thermal leptogenesis,
the BAU can be computed in terms of the three remaining free variables that
parameterise the right-handed neutrino masses and their Yukawa couplings to the
Higgs and lepton doublets. We identify the ranges of such parameters for which
the correct BAU can be reproduced. We adopt the formalism of the density matrix
equations to fully account for flavour effects and consider the decays of all
the three right-handed neutrinos. Our analysis reveals that thermal
leptogenesis is feasible within a wide parameter space, specifically for Yukawa
couplings ranging from approximate unity to $\mathcal{O}(0.03-0.05)$ and mass
of the lightest right-handed neutrino $M_1\gtrsim 10^{11-12}\,\text{GeV}$,
setting a leptogenesis scale in the considered model which is higher than that
of the non-thermal scenario. | Alessandro Granelli, Koichi Hamaguchi, Natsumi Nagata, Maura E. Ramirez-Quezada, Juntaro Wada | 2023-05-29T14:11:41Z | http://arxiv.org/abs/2305.18100v2 | # Thermal Leptogenesis in the Minimal Gauged \(U(1)_{L_{\mu}-L_{\tau}}\) Model
###### Abstract
We discuss the thermal leptogenesis mechanism within the minimal gauged \(\mathrm{U}(1)_{L_{\mu}-L_{\tau}}\) model to explain the observed baryon asymmetry of the Universe (BAU). In such framework, the phases of the Pontecorvo-Maki-Nakagawa-Sakata neutrino mixing matrix and the sum of the Standard Model neutrino masses are predictable because of a restricted neutrino mass matrix structure. Additionally, in the context of thermal leptogenesis, the BAU can be computed in terms of the three remaining free variables that parameterise the right-handed neutrino masses and their Yukawa couplings to the Higgs and lepton doublets. We identify the ranges of such parameters for which the correct BAU can be reproduced. We adopt the formalism of the density matrix equations to fully account for flavour effects and consider the decays of all the three right-handed neutrinos. Our analysis reveals that thermal leptogenesis is feasible within a wide parameter space, specifically for Yukawa couplings ranging from approximate unity to \(\mathcal{O}(0.03-0.05)\) and mass of the lightest right-handed neutrino \(M_{1}\gtrsim 10^{11-12}\,\)GeV, setting a leptogenesis scale in the considered model which is higher than that of the non-thermal scenario.
## 1 Introduction
There is astrophysical and cosmological evidence for the existence of a matter-antimatter asymmetry in the present Universe. The baryon-to-photon ratio parameterises the baryon asymmetry of the Universe (BAU),
\[\eta_{B}=\frac{n_{B}-n_{\bar{B}}}{n_{\gamma}}, \tag{1}\]
where \(n_{B}\), \(n_{\bar{B}}\), and \(n_{\gamma}\) are the number densities of baryons, anti-baryons, and photons, respectively. The baryon-to-photon ratio has been determined independently from observations of the Cosmic Microwave Background (CMB) anisotropies and Big Bang Nucleosynthesis (BBN) estimates. Both estimations are consistent with the best-fit value \(\eta_{B}\simeq 6.1\times 10^{-10}\)[1, 2]. One compelling mechanism that explains the observed BAU is leptogenesis [3] based on the existence of right-handed neutrinos and their out-of-equilibrium decays in the early Universe. In the simplest thermal leptogenesis scenario, the CP-violating, out-of-equilibrium decays of the right-handed neutrinos generate a lepton asymmetry, which is converted into a baryon asymmetry by the sphaleron processes predicted by the SM [4]. The leptogenesis mechanism for the BAU generation can be studied in a wide range of models beyond the SM, providing a valuable window into the study of new physics (see, e.g., Ref. [5] for a recent review on the topic and references therein).
In this paper, we focus on a model that exhibits a novel anomaly-free U(1) gauge symmetry denoted by U(1)\({}_{L_{\mu}-L_{\tau}}\)[6, 7, 8, 9], where \(L_{e}\), \(L_{\mu}\) and \(L_{\tau}\) stand for the lepton number (flavour) associated to the electron (\(e\)), muon (\(\mu\)) and tauon (\(\tau\)), respectively. The model is implemented within the framework of the type-I seesaw mechanism [10, 11, 12, 13, 14], hence providing an explanation for the smallness of SM neutrino masses through the introduction of three heavy right-handed neutrinos \(N_{1,2,3}\) with masses \(M_{1,\,2,\,3}\). The symmetry of the model imposes constraints on the neutrino mass structure, since the second and third-generation leptons, \(\mu\) and \(\tau\), are charged under U(1)\({}_{L_{\mu}-L_{\tau}}\) while the electron is not. Consequently, the Dirac mass matrix has a simple diagonal structure, and only certain components of the Majorana matrix are non-zero. This simple mass structure is insufficient to explain neutrino oscillation data [15, 16], and hence, the U(1)\({}_{L_{\mu}-L_{\tau}}\) must be broken. This is typically achieved by introducing a scalar field that has non-zero U(1)\({}_{L_{\mu}-L_{\tau}}\) charge and a vacuum expectation value (VEV), which breaks the gauge symmetry giving mass to the U(1)\({}_{L_{\mu}-L_{\tau}}\) gauge bosons. We refer to this model as the "minimal gauged U(1)\({}_{L_{\mu}-L_{\tau}}\) model". The model has a strong predictive power [15, 16, 17, 18, 19, 20, 21, 22, 23] because of the so-called two-zero minor structure [17, 18, 19, 21, 22, 23] of the neutrino mass matrix.
The aim of the present work is to study thermal leptogenesis in the minimal gauged U(1)\({}_{L_{\mu}-L_{\tau}}\) model.1 Since the neutrino mass structure is highly restricted in the model, it is not obvious whether the observed BAU is obtainable through thermal leptogenesis. In a widely-considered setup for thermal leptogenesis [28], the right-handed neutrino mass spectrum is strongly hierarchical, \(M_{1}\ll M_{2}\ll M_{3}\), and the lepton asymmetries in the three flavours evolve equally. This scenario has been extensively studied solving the unflavoured Boltzmann Equations (BEs) and is subject to a model-independent bound
on the mass scale of leptogenesis, reading \(M_{1}\gtrsim 10^{9}\,\text{GeV}\), below which the requisite CP-asymmetry is too small to get the observed value of the BAU [29, 30, 31, 32, 33]. However, in our model, the condition of a strong mass hierarchy is not satisfied in many parts of the parameter space, and we expect the decay of each of the three right-handed neutrinos to contribute to the generation of the BAU.
Charged lepton flavour effects can also play a crucial role in the generation of the BAU [34, 35, 36, 37, 38, 39, 40, 41, 42, 43]. Specifically, the unflavoured scenario is valid only in the single-flavour regime for temperatures above \(\sim 10^{12}\,\text{GeV}\), when the processes mediated by the charged lepton SM Yukawa couplings are out-of-equilibrium. In the two-flavour regime \(10^{9}\ll T/\text{GeV}\ll 10^{12}\), processes induced by the \(\tau\)-Yukawa coupling occur at a rate \(\Gamma_{\tau}\) much larger than the Hubble expansion rate \(H\), which indicates that these processes are in thermal equilibrium. Consequently, the asymmetries in the lepton charge \(L_{\tau}\) and \(L_{\mu}+L_{e}\) evolve differently in this regime. Similarly, in the three-flavour regime for \(T\ll 10^{9}\,\text{GeV}\), processes mediated by the \(\mu\)-Yukawa coupling with rate \(\Gamma_{\mu}\) are also in thermal equilibrium (\(\Gamma_{\mu}\gg H\)), leading to the individual evolution of \(L_{e}\), \(L_{\mu}\) and \(L_{\tau}\). Given that the mass scales of interest cover different flavour regimes, the simplest unflavoured scenario is not applicable to our study, and we have to consider the impact of charged lepton flavour effects on the BAU generation.
The study of thermal leptogenesis in the (three-) two-flavour regime can be conducted using the formalism of (three-) two-flavoured BEs, provided that the processes mediated by the (\(\mu\)- and \(\tau\)-) \(\tau\)-Yukawa couplings are sufficiently fast. In this formalism, the equations for the asymmetries in (\(L_{\tau}\), \(L_{\mu}\) and \(L_{e}\)) \(L_{\tau}\) and \(L_{\mu}+L_{e}\) are different and solved separately. However, to accurately account for flavour effects, it is more precise to trace the evolution of the elements of the density matrix of the lepton flavour system with the Density Matrix Equations (DMEs) [40, 41, 42, 43]. This approach is particularly useful when the processes mediated by the charged lepton Yukawa couplings are neither infinitely fast nor their effects negligible, such as at the transitions between different flavour regimes, and has been shown to lead to different predictions with respect to the BEs [40, 41, 42, 43, 44, 45].
The presence of multiple decaying right-handed neutrinos also has implications for the generation of the BAU, specifically because of the effects of heavy neutrino flavours. The right-handed neutrinos couple to different superpositions of flavour states, whose interactions can induce additional decoherence effects in the context of the DMEs [42] (see also Ref. [43]). These effects can be particularly relevant when the right-handed neutrinos do not have a strongly hierarchical mass spectrum, which is the case in certain parts of the parameter space of the considered model where \(M_{2}\lesssim 3M_{1}\) and \(M_{3}\lesssim 3M_{2}\). In this regime, the different superposition of flavour states, associated with the different right-handed neutrinos, are simultaneously present in the Universe.
In this work, we consider the formalism of the DMEs with three decaying right-handed neutrinos to fully account for all the relevant effects mentioned above. By performing a numerical scan of the parameter space, we investigate for which values of the parameters of the minimal gauged \(\text{U}(1)_{L_{\mu}-L_{\tau}}\) model the observed BAU is successfully reproduced. We solve the DMEs for thermal leptogenesis numerically using the Python package ULYSSES [46, 47], a freely accessible code for the numerical evaluation of the BAU in the context of leptogenesis. Its major features are the variety of equations available, allowing for comparisons between the BEs and DMEs in the different regimes, and a rapid evaluation, making scans of large parameter spaces feasible over relatively short periods
of time.
The paper is structured as follows. In Sec. 2, we present the minimal gauged U(1)\({}_{L_{\mu}-L_{\tau}}\) model and analyse the corresponding neutrino mass structure. In Sec. 3, we discuss in more details the mechanism of thermal leptogenesis and the formalism of DMEs. We introduce the DMEs with three decaying right-handed neutrinos and the corresponding CP-asymmetry parameters, which are crucial for understanding the generation of the lepton and baryon asymmetries in the early Universe. The results of the scan of the parameter space for viable leptogenesis is presented in Sec. 4, and we finally conclude in Sec. 5.
## 2 The Minimal Gauged U(1)\({}_{L_{\mu}-L_{\tau}}\) Model
In the minimal gauged U(1)\({}_{L_{\mu}-L_{\tau}}\) model, the second and third-generation leptons, namely those of lepton flavour \(\mu\) and \(\tau\), carry charges of \(+1\) and \(-1\), respectively. Notably, the first-generation leptons (of lepton flavour \(e\)), quarks, and the Higgs field in the Standard Model (SM) are not charged under this particular gauge symmetry. Moreover, we introduce three right-handed sterile neutrinos, \(N_{e}\), \(N_{\mu}\), and \(N_{\tau}\), each described by a Weyl spinor that transforms under the (0, \(\frac{1}{2}\)) representation of the Lorentz group (thus right-handed), which are singlets under the SM gauge group (thus sterile) and carry the U(1)\({}_{L_{\mu}-L_{\tau}}\) charges 0, \(+1\), and \(-1\), respectively. Additionally, we introduce a scalar boson \(\sigma\), which is a singlet under the SM gauge group and carries the U(1)\({}_{L_{\mu}-L_{\tau}}\) charge \(+1\). This scalar field develops a VEV that spontaneously breaks the U(1)\({}_{L_{\mu}-L_{\tau}}\) gauge symmetry. We summarise the charges of the field content of this model in Table 1, denoting the left-handed SU(2) lepton doublets of flavour \(\alpha\) with \(\ell_{\alpha}\) and the right-handed charged leptons with \(\alpha_{R}\), \(\alpha=e\), \(\mu\), \(\tau\). We adopt a two-component spinor notation (see, e.g., Ref. [48]) as in Ref. [21].
For the purposes of our discussion, we focus on the new leptonic interactions involving the right-handed neutrinos and their Majorana mass terms, reading
\[\Delta\mathcal{L}= -\lambda_{e}N_{e}^{c}(\ell_{e}\cdot\Phi)-\lambda_{\mu}N_{\mu}^{c }(\ell_{\mu}\cdot\Phi)-\lambda_{\tau}N_{\tau}^{c}(\ell_{\tau}\cdot\Phi)\] \[-\frac{1}{2}M_{ee}N_{e}^{c}N_{e}^{c}-M_{\mu\tau}N_{\mu}^{c}N_{ \tau}^{c}-\lambda_{e\mu}\sigma N_{e}^{c}N_{\mu}^{c}-\lambda_{e\tau}\sigma^{*} N_{e}^{c}N_{\tau}^{c}+\text{h.c.}\, \tag{2}\]
where the dots indicate the contraction of the SU(2) indices between the lepton doublets and the Higgs doublet \(\Phi\). Additionally, \((N_{\alpha}^{c})_{a}\equiv\varepsilon_{ab}(N_{\alpha}^{*})_{b}\), where \(\alpha=e,\,\mu,\,\tau\) and \(\varepsilon_{ab}\) is the antisymmetric tensor of the spinor indices \(a\), \(b\). The interaction terms in Eq. (2) lead to neutrino mass terms after the Higgs field \(\Phi\) and singlet scalar \(\sigma\) acquire their VEVs,
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \multicolumn{5}{c}{\(L_{\mu}-L_{\tau}\)**Charges of the Field Content**} \\ \hline \hline & \(\ell_{e},e_{R},N_{e}\) & \(\ell_{\mu},\mu_{R},N_{\mu}\) & \(\ell_{\tau},\tau_{R},N_{\tau}\) & \(\sigma\) & Others \\ \hline \(L_{\mu}-L_{\tau}\) & 0 & \(+1\) & \(-1\) & \(+1\) & 0 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The \(L_{\mu}-L_{\tau}\) charges of the field content in the considered minimal gauged U(1)\({}_{L_{\mu}-L_{\tau}}\) model.
denoted as \(\langle\Phi\rangle=v/\sqrt{2}\) and \(\langle\sigma\rangle\), respectively. These mass terms can be expressed as follows:
\[\mathcal{L}_{\text{mass}}=-(\nu_{e},\nu_{\mu},\nu_{\tau})\mathcal{M}_{D}\begin{pmatrix} N_{e}^{c}\\ N_{\mu}^{c}\\ N_{\tau}^{c}\end{pmatrix}-\frac{1}{2}(N_{e}^{c},N_{\mu}^{c},N_{\tau}^{c}) \mathcal{M}_{R}\begin{pmatrix}N_{e}^{c}\\ N_{\mu}^{c}\\ N_{\tau}^{c}\end{pmatrix}+\text{h.c.}\, \tag{3}\]
where \(\nu_{\alpha}\), are the left-handed Weyl spinors describing the SM neutrinos of lepton flavour \(\alpha\), \(\mathcal{M}_{D}\) is the Dirac mass matrix and \(\mathcal{M}_{R}\) is the Majorana mass matrix given by,
\[\mathcal{M}_{D}=\frac{v}{\sqrt{2}}\begin{pmatrix}\lambda_{e}&0&0\\ 0&\lambda_{\mu}&0\\ 0&0&\lambda_{\tau}\end{pmatrix}\,\qquad\mathcal{M}_{R}=\begin{pmatrix}M_{ee}& \lambda_{e\mu}\langle\sigma\rangle&\lambda_{e\tau}\langle\sigma\rangle\\ \lambda_{e\mu}\langle\sigma\rangle&0&M_{\mu\tau}\\ \lambda_{e\tau}\langle\sigma\rangle&M_{\mu\tau}&0\end{pmatrix}\, \tag{4}\]
respectively. Notably, the Dirac mass matrix \(\mathcal{M}_{D}\) is diagonal, while the \((\mu,\mu)\) and \((\tau,\tau)\) components of the Majorana mass matrix \(\mathcal{M}_{R}\) vanish due to the \(\text{U}(1)_{L_{\mu}-L_{\tau}}\) gauge symmetry. This particular structure leads to interesting predictions for neutrino observables [19, 21, 22, 49, 50], as discussed below. For the remainder of this discussion, we assume that the Dirac Yukawa couplings \(\lambda_{e}\), \(\lambda_{\mu}\), and \(\lambda_{\tau}\), as well as the VEVs \(v\) and \(\langle\sigma\rangle\), are real without loss of generality, with \(v=246\,\text{GeV}\); this can always be realised via field redefinition.
The seesaw mechanism naturally explains the smallness of the SM neutrino masses, provided that the non-zero components of the Majorana mass matrix are much larger than those of the Dirac matrix. The seesaw master formula relates the light neutrino masses to the masses of the heavy right-handed neutrinos and their couplings to the SM particles [10, 11, 12, 13, 14],
\[\mathcal{M}_{\nu_{L}}\simeq-\mathcal{M}_{D}\mathcal{M}_{R}^{-1}\mathcal{M}_{D }^{T}. \tag{5}\]
The light neutrino mass matrix \(\mathcal{M}_{\nu_{L}}\) can be diagonalised, allowing us to express the flavour eigenstates of the neutrinos as linear combinations of the mass eigenstates,
\[U^{T}\mathcal{M}_{\nu_{L}}U=\text{diag}(m_{1},m_{2},m_{3}). \tag{6}\]
Here, \(U\) is the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) mixing matrix [51, 52, 53, 54], which can be parameterised as [55]
\[U=\begin{pmatrix}c_{12}c_{13}&s_{12}c_{13}&s_{13}e^{-i\delta}\\ -s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta}&c_{12}c_{23}-s_{12}s_{23}s_{13}e^{ i\delta}&s_{23}c_{13}\\ s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta}&-c_{12}s_{23}-s_{12}c_{23}s_{13}e^{ i\delta}&c_{23}c_{13}\end{pmatrix}\times\begin{pmatrix}1&0&0\\ 0&e^{i\frac{\alpha_{2}}{2}}&0\\ 0&0&e^{i\frac{\alpha_{3}}{2}}\end{pmatrix}, \tag{7}\]
where \(c_{ij}\equiv\cos\theta_{ij}\), \(s_{ij}\equiv\sin\theta_{ij}\), and \(\delta\), \(\alpha_{2}\), \(\alpha_{3}\in[0,2\pi]\). The Normal Ordering (NO) of neutrino masses, where \(m_{1}<m_{2}<m_{3}\), is the only mass hierarchy consistent with neutrino oscillation data in this model, as demonstrated in Ref. [21].
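The parameterisation in Eq. (7) is straightforward to code up. The following is a minimal NumPy sketch, not the paper's own numerical setup; the usage example anticipates the Set I best-fit angles and the phases predicted by the model, both of which are quoted further below in the text.

```python
import numpy as np

def pmns_matrix(th12, th13, th23, delta, a2, a3):
    """PMNS matrix of Eq. (7): the CKM-like rotation times the Majorana
    phase matrix diag(1, e^{i a2/2}, e^{i a3/2}).  Angles in radians."""
    c12, s12 = np.cos(th12), np.sin(th12)
    c13, s13 = np.cos(th13), np.sin(th13)
    c23, s23 = np.cos(th23), np.sin(th23)
    ed = np.exp(1j * delta)
    V = np.array([
        [c12 * c13, s12 * c13, s13 / ed],
        [-s12 * c23 - c12 * s23 * s13 * ed,
          c12 * c23 - s12 * s23 * s13 * ed, s23 * c13],
        [ s12 * s23 - c12 * c23 * s13 * ed,
         -c12 * s23 - s12 * c23 * s13 * ed, c23 * c13]])
    return V @ np.diag([1.0, np.exp(0.5j * a2), np.exp(0.5j * a3)])

deg = np.pi / 180.0
U = pmns_matrix(33.41 * deg, 8.58 * deg, 39.7 * deg,   # Set I angles
                301 * deg, 116 * deg, 269 * deg)       # predicted phases
assert np.allclose(U @ U.conj().T, np.eye(3))          # unitarity check
```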
Using Eqs. (5) and (6), it is possible to express the Majorana mass matrix \(\mathcal{M}_{R}\) as
\[\mathcal{M}_{R}=-\mathcal{M}_{D}^{T}\,U\,\text{diag}(m_{1}^{-1},m_{2}^{-1},m_ {3}^{-1})\,U^{T}\mathcal{M}_{D}. \tag{8}\]
The \((\mu,\mu)\) and \((\tau,\tau)\) components of the right-hand side of the above equation must vanish, since the corresponding entries of \(\mathcal{M}_{R}\) are zero in this model; this is a direct consequence of the two-zero-minor structure [49, 50] of \(\mathcal{M}_{\nu_{L}}\).2 Since the two vanishing conditions are imposed on complex quantities, they amount to four real equations. These equations allow us to predict the values of the lightest neutrino mass \(m_{1}\), as well as the Dirac and Majorana CP phases \(\delta\), \(\alpha_{2}\), and \(\alpha_{3}\), as functions of the other neutrino oscillation parameters, namely, the neutrino mixing angles \(\theta_{12}\), \(\theta_{23}\), \(\theta_{13}\) and the squared mass differences \(\Delta m^{2}_{21}\equiv m^{2}_{2}-m^{2}_{1}\) and \(\Delta m^{2}_{31}\equiv m^{2}_{3}-m^{2}_{1}\). For each set of input parameters, there exist two possible sets of predictions, as shown in Ref. [21]. Specifically, if the set \((m_{1},\,\delta,\,\alpha_{2},\,\alpha_{3})\) satisfies the two vanishing conditions, then the set \((m_{1},\,2\pi-\delta,\,2\pi-\alpha_{2},\,2\pi-\alpha_{3})\) also satisfies them.
Footnote 2: Note that this structure is stable against the renormalisation-group effects [21].
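Numerically, the prediction can be set up as a four-dimensional root find. Since \(\mathcal{M}_{D}\) is diagonal with non-vanishing entries, the conditions \((\mathcal{M}_{R})_{\mu\mu}=(\mathcal{M}_{R})_{\tau\tau}=0\) in Eq. (8) are equivalent to the vanishing of the \((\mu,\mu)\) and \((\tau,\tau)\) entries of \(U\,\mathrm{diag}(m_{1}^{-1},m_{2}^{-1},m_{3}^{-1})\,U^{T}\). The SciPy sketch below is one possible implementation, using the Set I inputs defined further below and a starting guess chosen near the corresponding solution; it is illustrative only, and convergence depends on that guess.

```python
import numpy as np
from scipy.optimize import fsolve

deg = np.pi / 180.0
th12, th13, th23 = 33.41 * deg, 8.58 * deg, 39.7 * deg   # Set I angles
dm21, dm31 = 7.41e-5, 2.507e-3                           # eV^2, NO spectrum

def pmns(delta, a2, a3):
    c12, s12 = np.cos(th12), np.sin(th12)
    c13, s13 = np.cos(th13), np.sin(th13)
    c23, s23 = np.cos(th23), np.sin(th23)
    ed = np.exp(1j * delta)
    V = np.array([
        [c12*c13, s12*c13, s13/ed],
        [-s12*c23 - c12*s23*s13*ed, c12*c23 - s12*s23*s13*ed, s23*c13],
        [s12*s23 - c12*c23*s13*ed, -c12*s23 - s12*c23*s13*ed, c23*c13]])
    return V @ np.diag([1.0, np.exp(0.5j*a2), np.exp(0.5j*a3)])

def conditions(x):
    m1, delta, a2, a3 = x
    m = np.array([m1, np.sqrt(m1**2 + dm21), np.sqrt(m1**2 + dm31)])
    U = pmns(delta, a2, a3)
    W = U @ np.diag(1.0 / m) @ U.T
    # real and imaginary parts of the (mu,mu) and (tau,tau) conditions
    return [W[1, 1].real, W[1, 1].imag, W[2, 2].real, W[2, 2].imag]

m1, delta, a2, a3 = fsolve(conditions, [0.03, 300*deg, 120*deg, 270*deg])
print(m1, delta/deg % 360, a2/deg % 360, a3/deg % 360)
```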
For our numerical study, we fix the neutrino mixing angles and the squared mass differences following the NuFit analyses [56], which are comprehensive global fits that include data from all the relevant neutrino oscillation experiments. The most recent NuFit analyses are conducted both with and without incorporating data from the Super-Kamiokande (SK) experiment. These two approaches yield somewhat different values for the neutrino mixing angles and squared mass differences. We summarise the results obtained in the NuFit 5.2 analysis [56, 57] for the best-fit values, \(1\sigma\) deviations and \(3\sigma\) ranges of the neutrino mixing angles and mass squared differences in Table 2.
We observe that, while the inclusion of the SK data substantially affects the best-fit \(\pm 1\sigma\) values of \(\theta_{23}\), it does not change dramatically its \(3\sigma\) allowed range (the maximal value without SK data is larger by \(0.9^{\circ}\)) and the results for \(\theta_{12}\), \(\theta_{13}\), \(\Delta m^{2}_{21}\), and \(\Delta m^{2}_{31}\). Specifically, the best fit \(\pm 1\sigma\) values for \(\theta_{23}\) lie below (above) \(\pi/4\) with (without) the inclusion of SK data. Since the minimal gauged U(1)\({}_{L_{\mu}-L_{\tau}}\) model predictions are highly dependent on the value of \(\theta_{23}\), as the sum of neutrino masses diverges for \(\theta_{23}=\pi/4\)[21], we examine the cases with and without the inclusion of SK data separately.3 In both cases, we set \(\theta_{12}\), \(\theta_{13}\), \(\Delta m^{2}_{21}\), and \(\Delta m^{2}_{31}\) to their best-fit values, while we treat \(\theta_{23}\)
\begin{table}
\begin{tabular}{c|c c c c c} \hline \multicolumn{6}{c}{**Neutrino Masses and Mixing Parameters**} \\ \hline \hline
**Parameters** & \(\theta_{12}\) & \(\theta_{13}\) & \(\theta_{23}\) & \(\Delta m^{2}_{21}\) & \(\Delta m^{2}_{31}\) \\
**(units)** & (\({}^{\circ}\)) & (\({}^{\circ}\)) & (\({}^{\circ}\)) & (\(10^{-5}\) eV\({}^{2}\)) & (\(10^{-3}\) eV\({}^{2}\)) \\ \hline
**With SK** & \(33.41^{+0.75}_{-0.72}\) & \(8.58^{+0.11}_{-0.11}\) & \(42.2^{+1.1}_{-0.9}\) & \(7.41^{+0.21}_{-0.20}\) & \(2.507^{+0.026}_{-0.027}\) \\ \(3\sigma\) **range** & \([31.31,35.74]\) & \([8.23,8.91]\) & \([39.7,51.0]\) & \([6.82,8.03]\) & \([2.427,2.590]\) \\ \hline
**Without SK** & \(33.41^{+0.75}_{-0.72}\) & \(8.54^{+0.11}_{-0.12}\) & \(49.1^{+1.0}_{-1.3}\) & \(7.41^{+0.21}_{-0.20}\) & \(2.511^{+0.028}_{-0.027}\) \\ \(3\sigma\) **range** & \([31.31,35.74]\) & \([8.19,8.89]\) & \([39.6,51.9]\) & \([6.82,8.03]\) & \([2.427,2.590]\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Best-fit values, \(1\sigma\) deviations and \(3\sigma\) allowed ranges of the neutrino mixing angles \(\theta_{12}\), \(\theta_{13}\), \(\theta_{23}\), and of the squared mass differences \(\Delta m^{2}_{21}\) and \(\Delta m^{2}_{31}\) in the case of NO light neutrino mass spectrum, from the latest NuFit 5.2 analysis [56, 57]. The two lines of values correspond to the fit with (top) and without (bottom) the inclusion of SK data.
differently, given the implications it has for the sum of neutrino masses predicted by the considered model.
The sum of the neutrino masses is constrained by various cosmological and astrophysical measurements, yielding an upper bound of \(\sum_{i}m_{i}<(0.12-0.69)\,\)eV (95% C.L.), which depends on the adopted model and level of statistical complexity [58, 59] (see also Refs. [60, 61, 62, 63]). By adopting the best-fit values for \(\theta_{23}\) in the two cases with and without SK data, we obtain \(\sum_{i}m_{i}\simeq 0.241\,\)eV and \(0.173\,\)eV, respectively. Such values evade only the weaker of the aforementioned bounds, i.e., those obtained in cosmological fits with an increased level of complexity and additional assumptions. We instead set \(\theta_{23}\) to its minimal (maximal) \(3\sigma\) limit for the case with (without) SK data; specifically, \(\theta_{23}=39.7^{\circ}\,(51.9^{\circ})\), with the other input parameters kept at their best-fit values. This yields \(\sum_{i}m_{i}=0.142\,(0.117)\) eV, which minimises the sum of neutrino masses and reduces the tension with the cosmological bounds. To summarise, in our numerical analysis, we consider the following two sets of input parameters:
\[\begin{array}{ll}\mbox{\bf Set I}&\mbox{\bf Set II}\\ \theta_{12}=33.41^{\circ}&\theta_{12}=33.41^{\circ}\\ \theta_{13}=8.58^{\circ}&\theta_{13}=8.54^{\circ}\\ \theta_{23}=39.7^{\circ}&\theta_{23}=51.9^{\circ}\\ \Delta m_{21}^{2}=7.41\times 10^{-5}\,\mbox{eV}^{2}&\Delta m_{21}^{2}=7.41\times 10^{-5}\,\mbox{eV}^{2}\\ \Delta m_{31}^{2}=2.507\times 10^{-3}\,\mbox{eV}^{2}&\Delta m_{31}^{2}=2.511\times 10^{-3}\,\mbox{eV}^{2}.\end{array}\]
The minimal gauged U(1)\({}_{L_{\mu}-L_{\tau}}\) model, when combined with the neutrino mixing angles and squared mass differences in Set I (II), predicts a value of \(m_{1}=0.039\,(0.029)\,\)eV.4 In addition, the PMNS phases are determined to be \(\delta\simeq 301^{\circ}\,(228^{\circ})\), \(\alpha_{2}=116^{\circ}\,(225^{\circ})\) and \(\alpha_{3}=269^{\circ}\,(70^{\circ})\), or equivalently, \(\delta=59^{\circ}\,(132^{\circ})\), \(\alpha_{2}=244^{\circ}\,(135^{\circ})\) and \(\alpha_{3}=91^{\circ}\,(290^{\circ})\). The Dirac phase \(\delta\) has also been estimated by the NuFit analysis, which reports a \(3\sigma\) range of \([144^{\circ},350^{\circ}]\) when including SK data, and \([0,44^{\circ}]\cup[108^{\circ},360^{\circ}]\) when not including it. Consequently, the set of parameters of Set I with \(\delta=59^{\circ}\) is disfavoured at more than \(3\sigma\).
Footnote 4: The model also makes predictions about the effective Majorana mass \(\langle m_{\beta\beta}\rangle\) which determines the rate of the neutrinoless double-beta decay. The parameter set I (II) predicts \(\langle m_{\beta\beta}\rangle\simeq 0.025\) eV (0.016 eV), which is below the current constraint given by the KamLAND-Zen experiment, \(\langle m_{\beta\beta}\rangle<0.036\)-0.156 eV [64], and may be probed by future experiments with sensitivities of \(\langle m_{\beta\beta}\rangle\simeq\mathcal{O}(0.01)\) eV [65, 66].
By specifying the three additional input parameters in the Dirac mass matrix, \(\mathcal{M}_{D}=(v/\sqrt{2})\mbox{diag}(\lambda_{\rm e},\lambda_{\mu},\lambda _{\tau})\), it is possible to obtain the right-handed neutrino mass matrix \(\mathcal{M}_{R}\). These Yukawa couplings are parameterised, according to Ref. [21], as
\[(\lambda_{e},\lambda_{\mu},\lambda_{\tau})=\lambda(\cos\theta,\sin\theta\cos \phi,\sin\theta\sin\phi), \tag{9}\]
where we consider the ranges \(0\leq\theta,\,\phi\leq\pi/2\). We also limit the value of \(\lambda\) to \(\lambda\lesssim 1\) to ensure that the Yukawa couplings remain perturbative. The masses of the right-handed neutrinos are obtained by performing the Takagi diagonalisation of the complex symmetric matrix \(\mathcal{M}_{R}\),
\[\mathcal{M}_{R}=\Omega^{*}\mbox{diag}(M_{1},M_{2},M_{3})\Omega^{\dagger}, \tag{10}\]
where \(\Omega\) is a unitary matrix and \(M_{1,\,2,\,3}\geq 0\). Once the mass matrix of the right-handed neutrinos \(\mathcal{M}_{R}\) is diagonalised, the terms in Eq. (2) lead to
\[\Delta\mathcal{L}= -\hat{\lambda}_{j\alpha}\hat{N}_{j}^{c}(\ell_{\alpha}\cdot\Phi)- \frac{1}{2}M_{j}\hat{N}_{j}^{c}\hat{N}_{j}^{c}+\mbox{h.c.}\, \tag{11}\]
where the sum over equal indices is implicit and
\[\hat{N}_{j}^{c} = \sum_{\alpha}\Omega_{\alpha j}^{*}N_{\alpha}^{c}\;, \tag{12}\] \[\hat{\lambda}_{j\alpha} = \Omega_{\alpha j}\lambda_{\alpha}\ (\text{not summed}). \tag{13}\]
The Weyl spinor \(\hat{N}_{j}\), \(j=1,\,2,\,3\), with \((\hat{N}_{j}^{c})_{a}=\varepsilon_{ab}(\hat{N}_{j}^{*})_{b}\), describes a right-handed neutrino \(N_{j}\) with mass \(M_{j}\).
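Eq. (10) is a Takagi factorisation of the complex symmetric matrix \(\mathcal{M}_{R}\). Assuming non-degenerate masses (the nearly-degenerate regions discussed below would need extra care), it can be obtained from a standard singular value decomposition, as in the illustrative sketch here; note that NumPy returns singular values in descending order, so the states may need relabelling if \(N_{1}\) is to be the lightest.

```python
import numpy as np

def takagi(MR):
    """Takagi factorisation MR = Omega* diag(M1, M2, M3) Omega^dagger,
    as in Eq. (10), assuming non-degenerate singular values."""
    u, s, vh = np.linalg.svd(MR)
    v = vh.conj().T
    # For a symmetric MR with distinct singular values, the left and right
    # singular vectors differ only by phases: v* = u diag(d) with |d_i| = 1.
    d = np.diag(u.conj().T @ v.conj())
    W = u * np.sqrt(d)        # unitary, with W diag(s) W^T = MR
    return W.conj(), s        # Omega = W*, masses in descending order

# Quick self-test on a random complex symmetric matrix:
A = np.random.randn(3, 3) + 1j * np.random.randn(3, 3)
MR = A + A.T
Omega, M = takagi(MR)
assert np.allclose(Omega.conj() @ np.diag(M) @ Omega.conj().T, MR)
```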
Upon examining Eq. (8) and using the Yukawa coupling parameterisation provided in Eq. (9), it becomes transparent that the masses of the right-handed neutrinos are proportional to \(\lambda^{2}\). This dependence can be expressed more precisely as follows,
\[M_{1,\,2,\,3}=\frac{v^{2}\lambda^{2}}{2m_{1}}\,\beta_{1,\,2,\,3}(\theta,\phi) \simeq 6\times 10^{14}\,\text{GeV}\left(\frac{0.05\,\text{eV}}{m_{1}} \right)\lambda^{2}\beta_{1,\,2,\,3}(\theta,\phi), \tag{14}\]
where \(\beta_{1,\,2,\,3}(\theta,\phi)\), satisfying \(\beta_{1,\,2,\,3}(\theta,\phi)\lesssim\mathcal{O}(1)\), are some real numbers that only depend on the ratios \(m_{1}/m_{2}<1\), \(m_{1}/m_{3}<1\), the PMNS parameters and on trigonometric functions of \(\theta\), \(\phi\).5
Footnote 5: More precisely, \(\beta_{1,\,2,\,3}(\theta,\phi)\) are defined by \(\text{diag}(\beta_{1},\beta_{2},\beta_{3})=-\Omega^{T}D_{\theta,\phi}UD_{m}U^ {T}D_{\theta,\phi}\Omega\), where \(D_{\theta,\phi}\equiv\text{diag}(\cos\theta,\sin\theta\cos\phi,\sin\theta \sin\phi)\) and \(D_{m}\equiv\text{diag}(1,m_{1}/m_{2},m_{1}/m_{3})\).
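The numerical prefactor in Eq. (14) is a one-line check:

```python
v, m1 = 246e9, 0.05            # Higgs VEV and reference mass, both in eV
print(v**2 / (2 * m1) / 1e9)   # ~6.05e14 GeV, the prefactor of Eq. (14)
```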
Fig. 1 shows the contours for three parameters: \(\beta_{1}=2m_{1}v^{-2}\lambda^{-2}M_{1}\) (top), \(\beta_{2}/\beta_{1}-1=\Delta M_{21}/M_{1}\) (bottom-left), and \(\beta_{3}/\beta_{2}-1=\Delta M_{32}/M_{2}\) (bottom-right); here, \(\Delta M_{21(32)}\equiv M_{2(3)}-M_{1(2)}\). These contours are obtained using the neutrino mixing angles and squared mass differences as in Set I, with \(\phi\) and \(\theta\) varied over \(0\leq\theta,\,\phi\leq\pi/2\). We observe that the parameter \(\beta_{1}\) exhibits a maximum value of \(\beta_{1}^{\text{max}}\sim 0.24\) at the point \(\phi=\tilde{\phi}\sim 50^{\circ}\) and \(\theta=\tilde{\theta}\sim 62^{\circ}\), which is marked with a red star. At the same coordinates, we find numerically that \(\Delta M_{21}/M_{1}\) is minimised, while \(\Delta M_{32}/M_{2}\) is locally maximised. Moreover, we find that the overall trend of the parameter space remains consistent if we choose to work with the values for \(\theta_{12}\), \(\theta_{13}\), \(\Delta m_{21}^{2}\) and \(\Delta m_{31}^{2}\) as in Set II. However, the maximum of \(\beta_{1}\) is then found at different values of \(\phi\) and \(\theta\), namely \(\phi=\tilde{\phi}\sim 38^{\circ}\) and \(\theta=\tilde{\theta}\sim 65^{\circ}\), where it takes a value of \(\sim 0.21\). Upon examining the bottom panels of Fig. 1, it is evident that our model predicts a nearly degenerate spectrum for the right-handed neutrinos, with \(M_{2}\lesssim 3M_{1}\) and \(M_{3}\lesssim 3M_{2}\), over a large part of the parameter space. However, we do not observe all three right-handed neutrinos to be simultaneously nearly-degenerate in mass in any region of the parameter space. There exists a tiny region where \(N_{1}\) and \(N_{2}\) are nearly-degenerate with \(M_{2}\lesssim 1.05M_{1}\). For \(N_{2}\) and \(N_{3}\), there exists a larger nearly-degenerate region, with \(M_{3}\lesssim 1.05M_{2}\). These regions are shown in yellowish colour in the bottom panels of Fig. 1.6
Footnote 6: We suspect that a specific point for which \(M_{1}=M_{2}\) exists around \((\tilde{\phi},\tilde{\theta})\) (although finding such precise point would require infinite numerical resolution). It should be possible to consider \(M_{1}\simeq M_{2}\) with arbitrary precision around that point, but this would require a certain amount of fine-tuning in the choice of the parameters \(\theta\) and \(\phi\), much more severe than, e.g., the uncertainties in the neutrino parameters we have considered.
## 3 Thermal leptogenesis
In this section, we present the equations necessary to investigate the thermal leptogenesis mechanism in the minimal gauged U(1)\({}_{L_{\mu}-L_{\tau}}\) model. To simplify our analysis, we make
the following assumptions: i) the \(\mathrm{U}(1)_{L_{\mu}-L_{\tau}}\) gauge symmetry is never restored after the reheating; ii) the masses of the \(\mathrm{U}(1)_{L_{\mu}-L_{\tau}}\) gauge boson and the singlet scalar field associated with \(\sigma\) are larger than the reheating temperature \(T_{R}\) so that these fields are always absent from the thermal bath; iii) the masses of all three right-handed neutrinos are smaller than the reheating temperature. The first two can be realised by taking the \(\mathrm{U}(1)_{L_{\mu}-L_{\tau}}\) gauge coupling and the self-coupling of the \(\sigma\) field sufficiently large and \(\langle\sigma\rangle\gg T_{R}\). The third assumption requires \(|M_{ee,\mu\tau}|,|\lambda_{e\mu,e\tau}\langle\sigma\rangle|<T_{R}\).
To take the impact of charged-lepton and heavy-neutrino flavours into account, we employ the formalism of density matrix equations (DMEs) instead of the simpler Boltzmann equations (BEs).7 Specifically, in the case of the three decaying right-handed neutrinos, we can express the DMEs in the following
Figure 1: The contour plots for the quantities \(\beta_{1}\) (top panel), \(\Delta M_{21}/M_{1}\) (bottom-left panel) and \(\Delta M_{32}/M_{2}\) (bottom-right panel) in the \(\phi-\theta\) plane. The red stars in the plots mark the point of coordinates \(\phi\sim 50^{\circ}\) and \(\theta\sim 62^{\circ}\) for which \(\beta_{1}\) is maximised (\(\Delta M_{21}/M_{1}\) is minimised). The plot is obtained for the neutrino mixing angles, and squared mass differences as in Set I. Choosing the input parameters as in Set II would lead to a qualitatively similar figure.
compact form [40, 41, 42, 44, 45, 67]:
\[\frac{dN_{N_{j}}}{dz} = -D_{j}(N_{N_{j}}-N_{N_{j}}^{\rm eq})\,, \tag{15}\] \[\frac{dN_{\alpha\beta}}{dz} = \sum_{j=1}^{3}\left[\epsilon_{\alpha\beta}^{(j)}D_{j}(N_{N_{j}}-N_{N_{j}}^{\rm eq})-\frac{1}{2}W_{j}\left\{P^{0(j)},N\right\}_{\alpha\beta}\right]-\frac{\Gamma_{\tau}}{Hz}\left[I_{\tau},\left[I_{\tau},N\right]\right]_{\alpha\beta}-\frac{\Gamma_{\mu}}{Hz}\left[I_{\mu},\left[I_{\mu},N\right]\right]_{\alpha\beta}\,, \tag{16}\]
where the indices \(j=1,\,2,\,3\) refer to the three decaying right-handed neutrinos, while \(\alpha,\,\beta=e,\,\mu,\,\tau\) refer to the charged-lepton flavours. The variable \(z\) is defined as \(z\equiv M_{1}/T\). The quantity \(N_{N_{j}}\) represents the number of heavy neutrinos \(N_{j}\) in a comoving volume, which we normalise, following Refs. [44, 45, 67], such that the comoving volume contains a single photon when \(z\ll 1\), i.e., \(N_{N_{j}}^{\rm eq}(0)=3/4\). We approximate the equilibrium number density of heavy neutrinos as \(N_{N_{j}}^{\rm eq}(z)=(3/8)x_{j}z^{2}K_{2}(\sqrt{x_{j}}z)\), where \(x_{j}\equiv(M_{j}/M_{1})^{2}\) and \(K_{n}(z)\), \(n=1,\,2,\,...\), is the modified Bessel function of the second kind of order \(n\) [33, 69]. The diagonal entries \(N_{\alpha\alpha}\) of the density matrix \(N\) correspond to the comoving number densities of the \((1/3)B-L_{\alpha}\) asymmetries, such that \(N_{B-L}=\sum_{\alpha=e,\,\mu,\,\tau}N_{\alpha\alpha}\). The off-diagonal elements \(N_{\alpha\beta}\), where \(\alpha\neq\beta\), represent the degree of coherence between the flavour states.
The projection matrices in the anti-commutator term in the first line of Eq. (16) are defined as
\[P^{0(j)}_{\alpha\beta}\equiv\frac{\hat{\lambda}^{*}_{j\alpha}\hat{\lambda}_{j \beta}}{(\hat{\lambda}\hat{\lambda}^{\dagger})_{jj}}\,, \tag{17}\]
generalising the notion of the projection probability. Furthermore, the double commutator structure in the second line of Eq. (16) involves \(3\times 3\) matrices \(I_{\tau}\) and \(I_{\mu}\) defined such that \((I_{\tau})_{\alpha\beta}=\delta_{\alpha\tau}\delta_{\beta\tau}\) and \((I_{\mu})_{\alpha\beta}=\delta_{\alpha\mu}\delta_{\beta\mu}\). Additionally, we use the following analytical expressions for \(D_{j}\) and \(W_{j}\)[33, 44],
\[D_{j}(z) = \kappa_{j}x_{j}z\frac{K_{1}(\sqrt{x_{j}}z)}{K_{2}(\sqrt{x_{j}}z) }\,, \tag{18}\] \[W_{j}(z) = \frac{1}{4}\kappa_{j}x_{j}^{2}z^{3}K_{1}(\sqrt{x_{j}}z)\,, \tag{19}\]
where the decay parameter \(\kappa_{j}\) quantifies the strength of the wash-out processes in erasing the asymmetry and is defined as the ratio between the total decay rate of \(N_{j}\) at zero temperature, \(\Gamma_{j}=(\hat{\lambda}\hat{\lambda}^{\dagger})_{jj}M_{j}/8\pi\), and the Hubble expansion rate \(H\) at \(T=M_{j}\).
Note that the decay rate \(\Gamma_{j}\) and the Hubble expansion rate \(H(T=M_{j})\propto M_{j}^{2}\) scale as \(\lambda^{4}\). As a result, the decay parameter \(\kappa_{j}\) depends only on the angles \(\theta\) and \(\phi\). Numerical calculations show that, for any \(0\leq\theta,\,\phi\leq\pi/2\) and \(j=1,\,2,\,3\), \(\kappa_{j}\gg 1\) and leptogenesis occurs in the strong wash-out regime. In such a regime, the initial abundance of right-handed neutrinos typically does not appreciably affect leptogenesis [70]. We thus focus on the case for which the initial abundances of the right-handed neutrinos are zero. It has also been estimated that, in the strong wash-out regime and in the case of a hierarchical right-handed neutrino mass spectrum, the \(\Delta L=1\) scattering processes and related wash-outs would contribute to the final BAU by at most \(\mathcal{O}(10\%)\)[70, 71]. We find that this is still true in most of the parameter space of our model, except for certain fine-tuned
choices of the parameters. Therefore, we do not consider the effects of scatterings in the main analysis.
The CP-asymmetry \(\epsilon^{(j)}_{\alpha\beta}\) appearing in the DMEs can be separated into contributions from vertex and self-energy diagrams, namely \(\epsilon^{(j)}_{\alpha\beta}\equiv\epsilon^{\rm V}_{\alpha\beta}+\epsilon^{\rm S}_{\alpha\beta}\). The two contributions are given by [38, 40, 42, 72, 73, 74, 75, 76],
\[\epsilon^{\rm V}_{\alpha\beta} = \frac{1}{16\pi\left(\hat{\lambda}\hat{\lambda}^{\dagger}\right)_{jj}}\sum_{k\neq j}\left\{i\left[\hat{\lambda}^{*}_{j\alpha}\hat{\lambda}_{k\beta}(\hat{\lambda}\hat{\lambda}^{\dagger})_{kj}-\hat{\lambda}_{j\beta}\hat{\lambda}^{*}_{k\alpha}(\hat{\lambda}\hat{\lambda}^{\dagger})_{jk}\right]\xi\left(x_{k}/x_{j}\right)\right\}\,, \tag{20}\] \[\epsilon^{\rm S}_{\alpha\beta} = \frac{1}{16\pi\left(\hat{\lambda}\hat{\lambda}^{\dagger}\right)_{jj}}\sum_{k\neq j}\left\{i\left[\hat{\lambda}^{*}_{j\alpha}\hat{\lambda}_{k\beta}(\hat{\lambda}\hat{\lambda}^{\dagger})_{kj}-\hat{\lambda}_{j\beta}\hat{\lambda}^{*}_{k\alpha}(\hat{\lambda}\hat{\lambda}^{\dagger})_{jk}\right]\frac{\sqrt{x_{k}/x_{j}}}{x_{k}/x_{j}-1}\right.\left.+i\left[\hat{\lambda}^{*}_{j\alpha}\hat{\lambda}_{k\beta}(\hat{\lambda}\hat{\lambda}^{\dagger})_{jk}-\hat{\lambda}_{j\beta}\hat{\lambda}^{*}_{k\alpha}(\hat{\lambda}\hat{\lambda}^{\dagger})_{kj}\right]\frac{1}{x_{k}/x_{j}-1}\right\}, \tag{21}\]
where \(\xi(x)\equiv\sqrt{x}\left[\left(1+x\right)\log\left(1+1/x\right)-1\right]\). In the case of a degeneracy in the right-handed neutrino mass spectrum, the self-energy contribution to the CP-asymmetry is resonantly enhanced and can dominate over the vertex contribution. However, the self-energy contribution in Eq. (21) becomes ill-defined when two neutrinos, say \(N_{j}\) and \(N_{k}\), are quasi-degenerate in mass, as \(1/(x_{k}/x_{j}-1)\to\infty\) when \(x_{k}/x_{j}\to 1\). To address this non-physical behaviour, the full-resummed Yukawa couplings must be considered in the calculations, and the self-energy contribution to the CP-asymmetry can be regularised by performing the following substitution [77, 78, 79, 80],
\[\frac{1}{x_{k}/x_{j}-1}\to\frac{(M_{k}^{2}-M_{j}^{2})M_{j}^{2}}{(M_{k}^{2}-M_{ j}^{2})^{2}+M_{j}^{4}\Gamma_{k}^{2}/M_{k}^{2}}\,. \tag{22}\]
The regularised CP-asymmetry, obtained by using the prescription in Eq. (22), does not suffer from divergences and vanishes for equal right-handed neutrino masses. However, the self-energy contribution to the CP-asymmetry, even after regularisation, still exhibits a resonance. In particular, the self-energy contribution is maximised when \(|\Delta M_{jk}|\simeq 0.5\,\Gamma_{j}\), with \(\Delta M_{jk}\equiv M_{j}-M_{k}\). This resonant behaviour has been extensively studied in the context of resonant leptogenesis, especially in the efforts to avoid the Davidson-Ibarra bound [29] and extend the scenario of thermal leptogenesis down to the electroweak scale [78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89]. A comprehensive review can be found in Ref. [90]. The ratios \(|\Delta M_{jk}|/\Gamma_{j}\), \(j\neq k\), quantify the importance of resonance effects. Far away from the resonance, when \(|M_{j}-M_{k}|\gg\Gamma_{j}\), the effects of the enhancement are sub-leading, and the regularised CP-asymmetry obtained with the prescription in Eq. (22) resembles the form given in Eq. (21). We perform a scan of \(|\Delta M_{21}|/\Gamma_{1}\) and \(|\Delta M_{32}|/\Gamma_{2}\) over the \(\phi-\theta\) plane for various choices of \(\lambda\). We found that resonance effects are significant only in certain small regions of the parameter space. Furthermore, these regions become even smaller with decreasing values of \(\lambda\) and are effectively negligible when \(\lambda\lesssim 0.5\). The details of this analysis can be found in Appendix A. Nevertheless, we have included the effects of resonance as in Eq. (22) in our calculations.8
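The location of the resonance quoted above, \(|\Delta M_{jk}|\simeq 0.5\,\Gamma\), is easy to verify numerically in the quasi-degenerate limit; the mass and width values in the sketch below are purely illustrative numbers.

```python
import numpy as np

def regulator(Mj, Mk, Gk):
    """Regularised factor replacing 1/(x_k/x_j - 1), Eq. (22)."""
    num = (Mk**2 - Mj**2) * Mj**2
    den = (Mk**2 - Mj**2)**2 + Mj**4 * Gk**2 / Mk**2
    return num / den

Mj, Gk = 1.0e12, 1.0e8               # GeV; illustrative values only
dM = np.linspace(1e-3, 10.0, 2001) * Gk
R = regulator(Mj, Mj + dM, Gk)
print(dM[np.argmax(R)] / Gk)         # -> ~0.5, i.e. peak at |dM| ~ Gamma/2
```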
Finally, we numerically solve the DMEs in Eqs. (15) and (16) with the Python package ULYSSES[46, 47]. The code computes \(N_{B-L}=N_{ee}+N_{\mu\mu}+N_{\tau\tau}\) and relates it to the present baryon-to-photon ratio via
\[\eta_{B}=\frac{c_{s}}{f}N_{B-L}\approx 0.013N_{B-L}\,, \tag{23}\]
where \(c_{s}\) is the SM sphaleron conversion coefficient and the \(f\) factor comes from the dilution of the baryon asymmetry due to the change in the photon density between leptogenesis and recombination [33].
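The production runs rely on ULYSSES, but the structure of the system can be illustrated in a stripped-down, unflavoured Boltzmann limit of Eqs. (15) and (16): a single decaying \(N_{1}\), decays and inverse decays only, and the sign conventions of Eq. (16). The sketch below is for orientation only and does not reproduce the flavoured DME results; \(\kappa_{1}\) and the total CP asymmetry \(\epsilon_{1}\) are illustrative numbers.

```python
import numpy as np
from scipy.special import kv
from scipy.integrate import solve_ivp

kappa1, eps1 = 50.0, 1.0e-6     # illustrative: strong wash-out, CP asymmetry

def Neq(z):                     # equilibrium abundance, Neq -> 3/4 for z << 1
    return 0.375 * z**2 * kv(2, z)

def D1(z):                      # decay term, Eq. (18) with x_1 = 1
    return kappa1 * z * kv(1, z) / kv(2, z)

def W1(z):                      # inverse-decay wash-out, Eq. (19) with x_1 = 1
    return 0.25 * kappa1 * z**3 * kv(1, z)

def rhs(z, y):                  # unflavoured limit of Eqs. (15)-(16)
    NN, NBL = y
    dNN = -D1(z) * (NN - Neq(z))
    dNBL = eps1 * D1(z) * (NN - Neq(z)) - W1(z) * NBL
    return [dNN, dNBL]

# vanishing initial abundance and asymmetry, z = M1/T from 0.01 to 50
sol = solve_ivp(rhs, (1e-2, 50.0), [0.0, 0.0], method="Radau",
                rtol=1e-8, atol=1e-12)
print(0.013 * sol.y[1, -1])     # baryon-to-photon ratio via Eq. (23)
```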
## 4 Results of the Parameter Scan of Viable Leptogenesis
In this section, we discuss the results of our parameter scan aimed at identifying the viable space for thermal leptogenesis by solving the set of DMEs introduced in Sec. 3. Fig. 2 shows the results of the scan in the \(\phi-\theta\) plane for three benchmark values of \(\lambda\), namely \(\lambda=0.5\) (top panels), \(\lambda=0.3\) (central panels) and \(\lambda=0.1\) (bottom panels). The plots in the left and right panels are obtained with the neutrino mixing angles and squared mass differences as in Set I with (\(\delta\), \(\alpha_{2}\), \(\alpha_{3}\)) = (\(301^{\circ}\), \(116^{\circ}\), \(269^{\circ}\)) and Set II with (\(\delta\), \(\alpha_{2}\), \(\alpha_{3}\)) = (\(228^{\circ}\), \(225^{\circ}\), \(70^{\circ}\)), respectively. The regions of viable leptogenesis are those surrounded by the solid red contours, where the predicted BAU matches the observed value \(\eta_{B}=6.1\times 10^{-10}\). The points corresponding to the dotted red contours result in a predicted BAU of \(-6.1\times 10^{-10}\), indicating the correct magnitude but the wrong sign. Darker (lighter) regions correspond to larger (smaller) values of the predicted BAU, in modulus.
In Fig. 2, it is clear that there are numerous regions in the parameter space where leptogenesis within the minimal gauged U(1)\({}_{L_{\mu}-L_{\tau}}\) model predicts a BAU equal to or greater than the observed value, in modulus. However, the sign of the BAU alternates between positive and negative values in different regions, and only those regions where the BAU is positive correspond to successful leptogenesis. It is worth noting that the sign of the BAU can always be changed without changing its magnitude by switching to the equivalent solution with (\(m_{1}\), \(2\pi-\delta\), \(2\pi-\alpha_{2}\), \(2\pi-\alpha_{3}\)) [21]. This can be shown as follows. First, note that switching from a set of solutions (\(m_{1}\), \(\delta\), \(\alpha_{2}\), \(\alpha_{3}\)) to the other one (\(m_{1}\), \(2\pi-\delta\), \(2\pi-\alpha_{2}\), \(2\pi-\alpha_{3}\)) yields \(U\to U^{*}\), and thus \(\mathcal{M}_{\nu_{L}}\rightarrow\mathcal{M}_{\nu_{L}}^{*}\), \(\mathcal{M}_{R}\rightarrow\mathcal{M}_{R}^{*}\), \(\Omega\rightarrow\Omega^{*}\), and \(\hat{\lambda}\rightarrow\hat{\lambda}^{*}\). Then, \(\epsilon_{\alpha\beta}^{(j)}\) and \(P_{\alpha\beta}^{0(j)}\) in Eq. (16) transform as \(\epsilon_{\alpha\beta}^{(j)}\rightarrow-\epsilon_{\beta\alpha}^{(j)}\) and \(P_{\alpha\beta}^{0(j)}\to P_{\beta\alpha}^{0(j)}\), respectively. This indicates that if \(N_{N_{j}}\) and \(N_{\alpha\beta}\) are solutions of Eqs. (15) and (16) for a given set of (\(m_{1}\), \(\delta\), \(\alpha_{2}\), \(\alpha_{3}\)), then the solutions for (\(m_{1}\), \(2\pi-\delta\), \(2\pi-\alpha_{2}\), \(2\pi-\alpha_{3}\)) are given by \(N_{N_{j}}\) and \(-N_{\beta\alpha}\). By noting that \(\eta_{B}\propto(N_{ee}+N_{\mu\mu}+N_{\tau\tau})\), we conclude that if the baryon-to-photon ratio for (\(m_{1}\), \(\delta\), \(\alpha_{2}\), \(\alpha_{3}\)) is given by \(\eta_{B}\), that for (\(m_{1}\), \(2\pi-\delta\), \(2\pi-\alpha_{2}\), \(2\pi-\alpha_{3}\)) is
Figure 2: The contour plots in the \(\phi-\theta\) plane of the BAU (in modulus) predicted by the DMEs with three decaying right-handed neutrinos. The plots are obtained for \(\lambda=0.5\) (top panels), \(0.3\) (central panels) and \(0.1\) (bottom panels), with the input parameters \(\theta_{12}\), \(\theta_{13}\), \(\theta_{23}\), \(\Delta m^{2}_{21}\) and \(\Delta m^{2}_{31}\) as in Set I with (\(\delta\), \(\alpha_{2}\), \(\alpha_{3}\)) = (\(301^{\circ}\), \(116^{\circ}\), \(269^{\circ}\)) (left panels) and Set II with (\(\delta\), \(\alpha_{2}\), \(\alpha_{3}\)) = (\(228^{\circ}\), \(225^{\circ}\), \(70^{\circ}\)) (right panels). As indicated in the bar legends, darker (lighter) regions correspond to larger (smaller) values of the BAU, with the red solid contours representing the points for which the predicted BAU equals the observed value \(\eta_{B}\simeq 6.1\times 10^{-10}\). For the red dotted contours, the BAU equals the observed value in modulus, but the sign is negative. The dotted vertical and horizontal lines mark the benchmark points BMPa (top panels), BMPb (central panels), BMPcI and BMPcII (bottom panels).
given by \(-\eta_{B}\). It is also important to note that there is an overall sign difference between the parameter scans obtained for the two sets of input parameters, Set I and II. This is because \(\theta_{23}\) varies from values below \(45^{\circ}\) in Set I to values above \(45^{\circ}\) in Set II, resulting in a shift in the PMNS phases and an opposite sign of the BAU.9 Consequently, only precise combinations of the phase \(\delta\) and mixing angle \(\theta_{23}\) can lead to the correct sign of the BAU at a particular point of the \(\phi-\theta\) plane. Determining whether \(\theta_{23}<\) or \(>\pi/4\) and/or \(\delta<\) or \(>\pi\) in future experiments would rule out certain regions of the parameter space of viable leptogenesis in our model based only on the sign of the predicted baryon-to-photon ratio.
Footnote 9: The behaviour is not exactly symmetric in \(\theta_{23}\), see Fig. 1(b) of Ref. [24].
In addition, the size of the allowed regions for successful leptogenesis in the \(\phi-\theta\) plane is dependent on the mass scale of leptogenesis, which goes as \(\lambda^{2}\). As \(\lambda\) decreases (increases), the allowed ranges of \(\phi\) and \(\theta\) for successful leptogenesis become smaller (larger). To determine the minimal values of \(\lambda\) at which the viable regions shrink to points, we search for the local maxima of the predicted BAU. We identify four benchmark points (BMPs) in the \(\phi-\theta\) plane, around which the asymmetry is locally maximised. The BMPs are located at the coordinates BMPa) \(\theta=29.06^{\circ}\) and \(\phi=47.28^{\circ}\); BMPb) \(\theta=29.06^{\circ}\) and \(\phi=19.94^{\circ}\); BMPcI) \(\theta=56.39^{\circ}\) and \(\phi=56.39^{\circ}\) for Set I; and BMPcII) \(\theta=56.39^{\circ}\) and \(\phi=48.19^{\circ}\) for Set II. These four BMPs are indicated in Fig. 2 by horizontal and vertical dotted grid lines.
Finally, Fig. 3 illustrates the dependence of the BAU on the scale \(\lambda\) for the four different BMPs. The left and right panels correspond to input parameters as in Set I with \((\delta,\,\alpha_{2},\,\alpha_{3})=(301^{\circ},\,116^{\circ},\,269^{\circ})\) and Set II with \((\delta,\,\alpha_{2},\,\alpha_{3})=(228^{\circ},\,225^{\circ},\,70^{\circ})\), respectively. Our numerical analysis yields the following minimal values of \(\lambda\) for which leptogenesis is viable: for Set I, \(\lambda\simeq 0.35\) (\(M_{1}\simeq 10^{12.8}\,\)GeV) for BMPa and \((\delta,\,\alpha_{2},\,\alpha_{3})=(301^{\circ},\,116^{\circ},\,269^{\circ})\); \(\lambda\simeq 0.25\) (\(M_{1}\!\simeq\!10^{12.2}\,\)GeV) for BMPb and \((\delta,\alpha_{2},\alpha_{3})\!=\!(59^{\circ},\,244^{\circ},\,91^{\circ})\);
Figure 3: The BAU as a function of \(\lambda\) predicted by the DMEs with three decaying right-handed neutrinos. The input parameters in the left and right panels are as in Set I with \((\delta,\,\alpha_{2},\,\alpha_{3})=(301^{\circ},\,116^{\circ},\,269^{\circ})\) and Set II with \((\delta,\,\alpha_{2},\,\alpha_{3})=(228^{\circ},\,225^{\circ},\,70^{\circ})\), respectively. The different curves are obtained for BMPa (blue), BMPb (orange), BMPcI and BMPcII (green), with the solid (dashed) style indicating the positive (negative) sign of the predicted baryon-to-photon ratio. Note the overall sign flip between the curves obtained in the two different sets of input parameters.
\(\lambda\simeq 0.05\) (\(M_{1}\simeq 10^{11.6}\,\)GeV) for BMPcI and (\(\delta,\,\alpha_{2},\,\alpha_{3}\)) = (\(301^{\circ},\,116^{\circ},\,269^{\circ}\)). For Set II, we find: \(\lambda\simeq 0.305\) (\(M_{1}\simeq 10^{12.8}\,\)GeV) for BMPa and (\(\delta,\,\alpha_{2},\,\alpha_{3}\)) = (\(132^{\circ},\,135^{\circ},\,290^{\circ}\)); \(\lambda\simeq 0.25\) (\(M_{1}\simeq 10^{12.3}\,\)GeV) for BMPb and (\(\delta,\,\alpha_{2},\,\alpha_{3}\)) = (\(228^{\circ},\,225^{\circ},\,70^{\circ}\)); \(\lambda\simeq 0.03\) (\(M_{1}\simeq 10^{11.2}\,\)GeV) for BMPcII and (\(\delta,\,\alpha_{2},\,\alpha_{3}\)) = (\(228^{\circ},\,225^{\circ},\,70^{\circ}\)). Overall, we find that leptogenesis is viable for \(M_{1}\gtrsim 10^{11}\,\)GeV across the entire parameter space.
## 5 Conclusions
In this work, we have studied thermal leptogenesis in the context of the minimal gauged U(1)\({}_{L_{\mu}-L_{\tau}}\) model. Given the light neutrino squared mass differences and the PMNS mixing angles as input, the model predicts the values of the lightest neutrino mass \(m_{1}\), as well as the PMNS phases \(\delta\), \(\alpha_{2}\), and \(\alpha_{3}\). The model is then left with three free parameters, which we have identified as \(\lambda\), \(\theta\), and \(\phi\) according to the parameterisation of the Yukawa couplings in Eq. (9). We have then performed a numerical scan of the parameter space and searched for the allowed ranges of \(\lambda\), \(\theta\) and \(\phi\) for which leptogenesis is viable in reproducing the observed value of the baryon asymmetry of the Universe. To fully account for the effects of the charged lepton and heavy neutrino flavours, we have solved numerically the sets of density matrix equations instead of the simpler Boltzmann equations, and have taken the decays of all three right-handed neutrinos into account. Noting that thermal leptogenesis proceeds in the strong wash-out regime in the entire parameter space of the model, so that the effects of scatterings and different initial conditions are typically sub-leading, we have focused only on direct and inverse decays and on the case of vanishing initial right-handed neutrino abundances. To avoid the non-physical behaviour in the CP-asymmetry due to degeneracy in the right-handed neutrino mass spectrum, we have included the full-resummed Yukawa couplings and resonant effects in the calculation, even though we found those to be relevant only in small, isolated regions of the parameter space.
We have found numerous regions in the parameter space where thermal leptogenesis within the minimal gauged U(1)\({}_{L_{\mu}-L_{\tau}}\) model predicts the correct baryon asymmetry of the Universe. This result was not obvious given the restrictions in the neutrino mass structure imposed by the model. The size of the allowed regions for successful leptogenesis in the \(\phi-\theta\) plane decreases with decreasing mass scale of leptogenesis, which scales as \(\lambda^{2}\), implying minimal values of the mass scale and of \(\lambda\) for which leptogenesis can be successful. We found that thermal leptogenesis is viable for \(M_{1}\gtrsim 10^{11-12}\,\)GeV across the entire parameter space, with \(\lambda\) taking values from order unity down to \({\cal O}(0.03-0.05)\). These values are larger than those obtained in the context of non-thermal leptogenesis within the minimal gauged U(1)\({}_{L_{\mu}-L_{\tau}}\) model [24]. The difference is mostly due to the wash-out effects that are present in the thermal leptogenesis mechanism but not in the non-thermal scenario, requiring relatively heavier right-handed neutrinos to satisfy the out-of-equilibrium condition.
The sign of the baryon asymmetry can be either positive or negative in the various regions, but leptogenesis is successful only where the baryon asymmetry is positive. However, after specifying the neutrino squared mass differences and the PMNS mixing angles, the minimal gauged U(1)\({}_{L_{\mu}-L_{\tau}}\) model predicts two distinct sets of PMNS phases, (\(\delta,\,\alpha_{2},\,\alpha_{3}\)) and (\(2\pi-\delta,\,2\pi-\alpha_{2},\,2\pi-\alpha_{3}\)), each corresponding to opposite signs of the predicted baryon asymmetry. At present, the uncertainty on the estimate of \(\delta\) obtained in the global analysis [57] is relatively large so that, in general, both predictions are plausible (to a certain level of confidence). We are then allowed to change the sign of
the predicted asymmetry by switching from one set of PMNS phases to the other. The baryon asymmetry also has an opposite sign depending on whether \(\theta_{23}\) lies above or below \(\pi/4\). According to the \(3\sigma\) ranges for \(\theta_{23}\) obtained in the global analysis [57], see Table 2, both cases are still valid, and it is, therefore, impossible to discriminate the sign of the baryon asymmetry in a given region of the parameter space. Of course, with future more accurate measurements of the Dirac phase \(\delta\) and of the mixing angle \(\theta_{23}\) from, e.g., T2K [94], NO\(\nu\)A [95], JUNO [96], DUNE [97], and Hyper-Kamiokande [98] (see also Ref. [99]), we would be able to rule out certain regions of the parameter space of leptogenesis in the considered model on the sole basis of the sign of the baryon asymmetry. We stress that the results of this work are obtained for two different extreme choices of \(\theta_{23}\) corresponding to the \(\pm 3\sigma\) limit values of the global analysis, so as to minimise the sum of neutrino masses, \(\sum_{i}m_{i}\), and the tension with the cosmological bounds. In the future, more precise measurements of \(\theta_{23}\) and/or \(\sum_{i}m_{i}\) will likely impose even more stringent constraints on the model presented here.
## Acknowledgements
A.G. wishes to thank the Kavli IPMU and the Department of Physics of the University of Tokyo at Hongo Campus for the kind hospitality offered during the first part of this project. A.G. acknowledges the use of computational resources from the parallel computing cluster of the Open Physics Hub ([https://site.unibo.it/openphysichub/en](https://site.unibo.it/openphysichub/en)) at the Physics and Astronomy Department in Bologna. The work of A.G. has received funding from the European Union's Horizon Europe research and innovation programme under the Marie Sklodowska-Curie Staff Exchange grant agreement No. 101086085 - ASYMMETRY. The work of K.H., N.N., and M.R.Q. was supported in part by the Grant-in-Aid for Innovative Areas (No. 19H05810 [K.H.], No. 19H05802 [K.H.], No. 18H05542 [N.N.]), Scientific Research B (No. 20H01897 [K.H., N.N., and M.R.Q.]), and Young Scientists (No. 21K13916 [N.N.]). The work of J.W. is supported by the JSPS KAKENHI Grant (No. 22J21260).
## Appendix A The Impact of Resonance Effects
In this appendix, we estimate the relevance of the resonance effects of the CP-asymmetry in our numerical analysis. We first note that \(\Delta M_{kj}/\Gamma_{j}\propto\lambda^{-2}\), meaning that, for a given point in the \(\phi-\theta\) plane, the resonance effects are suppressed if \(\lambda\) is sufficiently small. We show in Fig. 4 the points of our scan for which we find that \(\Delta M_{21}/\Gamma_{1}\lesssim 10\) (left panels) and \(\Delta M_{32}/\Gamma_{2}\lesssim 10\) (right panels) for \(\lambda=1\) (top panels) and \(0.5\) (bottom panels). The figure is obtained for the neutrino mixing angles and squared mass differences as in Set I. The results remain qualitatively similar for the input parameters in Set II and are thus not shown here. The plots in the top panels show that, when \(\lambda=1\), we have \(\Delta M_{21}/\Gamma_{1}\lesssim 10\) and \(\Delta M_{32}/\Gamma_{2}\lesssim 10\) in the regions of the parameter space corresponding to \(M_{2}\lesssim 1.5M_{1}\) and \(M_{3}\lesssim 1.5M_{2}\), respectively. When \(\lambda=0.5\), these regions are reduced, and it becomes difficult to find points satisfying the two inequalities for \(\lambda<0.5\). This indicates that resonance effects are negligible in most of the parameter space when \(\lambda\lesssim 0.5\). By comparing the plots in the left and right panels, it follows that there is no region where the three right-handed neutrinos are simultaneously in the resonant regime, thus justifying the use of the regularisation in Eq. (22) in the main analysis.
2305.07020 | Using Pressure to Unravel the Structure-Dynamic-Disorder Relationship in
Metal Halide Perovskites | The exceptional optoelectronic properties of metal halide perovskites (MHPs)
are presumed to arise, at least in part, from the peculiar interplay between
the inorganic metal-halide sublattice and the atomic or molecular cations
enclosed in the cage voids. The latter can exhibit a roto-translative dynamics,
which is shown here to be at the origin of the structural behavior of MHPs as a
function of temperature, pressure and composition. The application of high
hydrostatic pressure allows for unraveling the nature of the interaction
between both sublattices, characterized by the simultaneous action of hydrogen
bonding and steric hindrance. In particular, we find that under the conditions
of unleashed cation dynamics, the key factor that determines the structural
stability of MHPs is the repulsive steric interaction rather than hydrogen
bonding. Taking as example the results from pressure and temperature-dependent
photoluminescence and Raman experiments on MAPbBr$_3$ but also considering the
pertinent MHP literature, we provide a general picture about the relationship
between the crystal structure and the presence or absence of cationic dynamic
disorder. The reason for the structural sequences observed in MHPs with
increasing temperature, pressure, A-site cation size or decreasing halide ionic
radius is found principally in the strengthening of the dynamic steric
interaction with the increase of the dynamic disorder. In this way, we have
deepened our fundamental understanding of MHPs; knowledge that could be coined
to improve performance in future optoelectronic devices based on this promising
class of semiconductors. | Kai Xu, Luis Pérez-Fidalgo, Bethan L. Charles, Mark T. Weller, M. Isabel Alonso, Alejandro R. Goñi | 2023-05-11T17:57:58Z | http://arxiv.org/abs/2305.07020v1 | # Using Pressure to Unravel the
###### Abstract
The exceptional optoelectronic properties of metal halide perovskites (MHPs) are presumed to arise, at least in part, from the peculiar interplay between the inorganic metal-halide sublattice and the atomic or molecular cations enclosed in the cage voids. The latter can exhibit a roto-translative dynamics, which is shown here to be at the origin of the structural behavior of MHPs as a function of temperature, pressure and composition. The application of high hydrostatic pressure allows for unraveling the nature of the interaction between both sublattices, characterized by the simultaneous action of hydrogen bonding and steric hindrance. In particular, we find that under the conditions of unleashed cation dynamics, the key factor that determines the structural stability of MHPs is the repulsive steric interaction rather than hydrogen bonding. Taking as example the results from pressure and
temperature-dependent photoluminescence and Raman experiments on MAPbBr\({}_{3}\) but also considering the pertinent MHP literature, we provide a general picture about the relationship between the crystal structure and the presence or absence of cationic dynamic disorder. The reason for the structural sequences observed in MHPs with increasing temperature, pressure, A-site cation size or decreasing halide ionic radius is found principally in the strengthening of the dynamic steric interaction with the increase of the dynamic disorder. In this way, we have deepened our fundamental understanding of MHPs; knowledge that could be coined to improve performance in future optoelectronic devices based on this promising class of semiconductors.
## 1 Introduction
Metal halide perovskites (MHPs) are nowadays the focus of intense fundamental as well as applied research mainly for their exceptional photovoltaic properties that have catapulted solar cell efficiencies to values in excess of 25%[1] but using low-cost, solution-processing methods. MHPs with general formula ABX\({}_{3}\), being B a metal (Pb or Sn) and X a halogen atom (Cl, Br, I), are characterized by a labile inorganic cage of corner-sharing BX\({}_{6}\) octahedrons, enclosing the loosely bound atomic or molecular A-site cations in their voids. According to Goldschmidt's tolerance-factor criterium,[2] the A-site cations fitting in the inorganic cage voids are Cs and organic molecules such as methylammonium (MA) or formamidinium (FA). Because the A-site cations are only loosely bound to the inorganic cage by electrostatic forces, they are able to freely move (translate, rotate and librate) inside the cage voids. It is an experimentally and theoretically well-established fact that in cubic and tetragonal phases of MHPs, such dynamics is fully or partially (in-plane) unfolded, respectively, whereas in less symmetric orthorhombic phases the A-site cations are locked in certain positions and orientations inside the voids.[3] For example, experimentally the MA and/or FA dynamics has been directly assessed by ultra-fast vibrational spectroscopy[4, 5] or indirectly inferred from the analysis of the atomic displacement parameter in neutron scattering[6, 7] and X-ray diffraction experiments.[8] In the case of the MA\({}^{+}\) ions in pure lead halide perovskites, the dynamics consists essentially of a fast (ca. 0.3 ps) wobbling-in-a-cone motion and much slower, jump-like reorientation rotations of the molecules by 90\({}^{\circ}\).[5] The latter, which are the main cause of dynamic disorder, exhibit characteristic jump times ranging from 1 to 3 ps, depending on the halide atom. However, in mixed-halide compounds, these times can be as long as 15 ps.[5] Theoretically, the A-site cation dynamics has been well accounted for within molecular-dynamics calculations.[9, 10, 11] Using a diffusive model,[12] ab-initio molecular dynamics simulations yield for MAPbBr\({}_{3}\) at 300 K[11] a relaxation time of ca. 340 ps for the fast motion and about 2 ps for the jump-like rotations, in excellent agreement with the experiment. This dynamics has direct impact on one of the distinctive features of MHPs, namely the interplay between the inorganic network and the atomic or molecular cations enclosed in the cage voids, determining, at least in part, the outstanding optoelectronic properties of these semiconductor materials.
The interplay between the inorganic metal-halide sublattice and the network of
A-site cations picks up contributions from two interactions with different origin and acting at different length scales: Hydrogen bonding and steric effects. Hydrogen bonding results from the electrostatic interaction between the hydrogen atoms of the organic cations and the negatively charged halide anions. In the case of the Cs\({}^{+}\) cations, H bonding is replaced by the bare electrostatic anion-cation attraction. In contrast, steric effects corresponds to non-bonding dipole-dipole interactions between molecules and/or atoms, which are well described by a Lennard-Jones potential. At large distances steric effects correspond to the weak van der Waals attraction that is much weaker than electrostatic interactions, being thus negligible against H bonding. However, at short distances the repulsion between the electronic clouds of neighboring atoms or molecules comes into play and the steric interaction becomes strongly repulsive. In the case of MHPs, steric effects are intimately related to the movement of the A-site cations inside the cage voids, which provides the necessary kinetic energy to bring cations and anions sufficiently close together. Hence, at the risk of being redundant, the steric repulsion will be hereafter called dynamic steric interaction (DSI). H bonding is ubiquitous in hybrid halide perovskites and has been repeatedly invoked to explain the structural phase behavior of MA lead halides as a function of temperature[13, 11] and pressure.[14, 15] Apart from contributing to the structural stability of the low-temperature orthorhombic phases of MHPs, first-principle calculations have shown that H bonding is instrumental for the tilting of the PbX\({}_{6}\) octahedrons.[16, 17] Furthermore, molecular dynamics simulations have highlighted the role that H bonding plays at the one-to-one connection between octahedral tilting and local structural deformations with the roto-translational dynamics of the molecular cations.[9, 10, 11] This is the origin of the dynamic disorder caused by unleashed A-site cation dynamics. Curiously, besides for its consideration to explain phase stability in inorganic MHPs,[16] the dynamic steric interaction has been widely ignored in the literature. Yet, here we will show that DSI is crucial for a final understanding of the structural phase sequences observed in MHPs as a function of pressure and halide composition.
For MHPs Raman scattering turns out to be a very powerful technique, since it grants easy access and without experimental complications to the degree of dynamic disorder present in the sample for given temperature and pressure conditions. In previous temperature-dependent experiments on the three MA lead halides, we found evidence of the coupling mentioned before between the vibrations of the anionic network PbX\({}_{3}\) (X=I, Br, Cl) and the MA cations in the Raman scattering signature.[18, 19] As a consequence of the steric interaction between the MA molecules and the halogen atoms of the inorganic cage and due to dynamic disorder, the vibrational modes of the cage exhibit a wide statistical distribution of frequencies, which in turn leads to a strong _inhomogeneous_ broadening of the Raman peaks. In contrast, in the low-temperature orthorhombic phase, when the organic cations are locked and ordered inside the cage voids, becoming well oriented along high-symmetry directions of the perovskite crystal, dynamic disorder just disappears. The result is a pronounced reduction of the linewidths of the Raman peaks, which is readily observed in low-temperature Raman spectra.[13, 18, 19, 20] Interestingly, a similar locking effect of the MA cations and the concomitant reduction in linewidth of the inorganic cage phonons can be induced at room temperature through the application of high hydrostatic pressure.[10, 14, 21]
Here we make explicit use of this spectroscopic tool to monitor the appearance or disappearance of structural disorder as a function of pressure and temperature in relation to the A-site cation dynamics.
In this work, we present a systematic study of the structural phase behavior of high-quality MAPbBr\({}_{3}\) single crystals as a function of temperature in the range 80 to 320 K at ambient pressure, as well as a function of pressure up to ca. 7 GPa at room temperature. This has been accomplished by monitoring the temperature- and pressure-induced changes in the fundamental band gap and vibrational spectrum of MAPbBr\({}_{3}\), as observed in PL and Raman experiments, respectively, following the procedure reported elsewhere.[21, 22] By combining the results obtained here for MAPbBr\({}_{3}\) with data from the available literature on temperature and/or pressure-dependent studies for MAPbI\({}_{3}\),[6, 8, 19, 21, 23, 24, 25, 26, 27, 28, 29, 30, 31] MAPbBr\({}_{3}\),[11, 13, 14, 15, 19, 23, 24, 25, 28, 31, 32, 33, 34]
Figure 1: (a) PL spectra of a MAPbBr\({}_{3}\) single crystal recorded at different temperatures using the green laser line (514.5 nm) for excitation. The spectra were normalized to their maximum intensity and plotted with a vertical shift for clarity. The temperature range is indicated (temperature step ca. 5 K). The different colors represent the different structural phases adopted as a function of temperature (Cubic: Pm\(\overline{3}\)m, Tetra.: I4/mcm, Ortho.: Pnma). (b) The maximum PL peak energy position plotted as a function of temperature, obtained from the PL lineshape fits to the spectra shown in (a), using a cross-product function (Eq. (1) of Supporting Information). Dashed lines indicate the phase transition temperatures.
MAPbCl\({}_{3}\),[19, 23, 24, 25, 35] FAPbBr\({}_{3}\),[36, 37] FA\({}_{x}\)MA\({}_{1-x}\)PbI\({}_{3}\),[22, 38, 39] CsPbI\({}_{3}\),[40] CsPbBr\({}_{3}\),[33] and the data from two recent reviews,[41, 42] we were able to develop a general picture of the relationship between crystal structure and dynamic disorder in MHPs. One particularly important finding is that, at temperatures where the A-site cation dynamics is fully unfolded, the structural stability of the crystalline phases observed for increasing pressure, A-site cation size or decreasing halogen atomic radius can only be understood in terms of a strengthening of the DSI, rather than due to H-bonding effects. Moreover, we offer an explanation for the onset of the pressure-induced amorphization, i.e., static disorder, based on the amount of vacancies present in each particular sample. We note that we have intentionally excluded the results on thin films and nanocrystals from the discussion to avoid complications due to the effects of grain boundaries, interfaces, surfaces, and/or confinement on the structural behavior, which make the underlying physics difficult to understand.
## 2 Results
### Temperature and Pressure-Dependent Photoluminescence (PL) Spectra
Figure 1a shows the evolution with temperature of the PL spectra of a MAPbBr\({}_{3}\) high-quality single crystal in the range from 310 to 80 K. All spectra were normalized to their absolute maximum intensity and vertically offset to ease their comparison. The different colors correspond to the temperature ranges of stability of the different crystalline phases of MAPbBr\({}_{3}\), as indicated. According to X-ray diffraction results,[23, 32, 13] starting at ambient, the phase sequence for decreasing temperature is \(\alpha\)-cubic \(\rightarrow\)\(\beta\)-tetragonal-I \(\rightarrow\)\(\gamma\)-tetragonal-II \(\rightarrow\)\(\delta\)-orthorhombic. The \(\gamma\) phase is not indicated in Fig. 1a because we missed it in our experiments, since it has a very narrow stability range of 5 K. At all temperatures a single peak dominates the PL spectra of MAPbBr\({}_{3}\), corresponding to the free-exciton emission.[22, 43] With decreasing temperature the exciton peak exhibits a monotonic redshift of its energy, except for the sudden jumps at the phase transitions, and a clear decrease in linewidth. In view of the relatively small binding energy of ca. 15 meV,[44] the redshift can be taken as representative of the temperature dependence of the fundamental band gap. The linewidth reduction, in turn, is indicative of a homogeneously broadened emission peak, which means it is lifetime limited. In high-quality crystals, non-radiative exciton decay is mainly associated with scattering by phonons and is thus strongly temperature dependent. At low temperatures below ca. 110 K, several peaks become apparent at the low-energy side of the main exciton peak (see Fig. 1a), which are ascribed to emission from bound (acceptor/donor) exciton complexes, as reported elsewhere.[45]
To analyze the PL spectra of the hybrid perovskites we used a Gaussian-Lorentzian cross-product function for describing the main emission peak, as successfully employed for the analysis of the PL spectra of MAPbI\({}_{3}\)[21] and MA/FA mixed crystals.[22] The expression for the cross-product function is given in the Supporting
Information. It contains three adjustable parameters: The amplitude prefactor \(A\), the peak energy position \(E_{0}\), and the full width at half maximum (FWHM) \(\Gamma\). This function is a useful simplification of a Voigt function, which corresponds to the mathematical convolution of a Lorentzian and a Gaussian. There is an additional lineshape parameter which takes the values \(s=0\) for pure Gaussian and \(s=1\) for pure Lorentzian. For MAPbBr\({}_{3}\) the exciton emission lineshape turned out to be mainly Gaussian with little Lorentzian admixture. The values of the peak energy \(E_{0}\) are plotted as a function of temperature in Fig. 1b (the PL linewidths and intensities are shown in Fig. S1 of the Supporting Information). As mentioned above, we consider the shift of the PL peak energy \(E_{0}\) with temperature representative of the temperature change of the gap. The linear increase of the gap with increasing temperature observed for the cubic and tetragonal phases is a common trend of MHPs, which has been explained as due to two equally contributing effects, namely thermal expansion and enhanced electron-phonon interaction.[46] Furthermore, the temperatures at which the jumps in the gap energy occur are in excellent agreement with the phase transition temperatures from X-ray data (dashed lines in Fig. 1b). At the phase transitions, the gap always increases for the phase with lower symmetry. This is due to the sudden increase in the overall octahedral tilting of the PbBr\({}_{6}\) octahedrons, which leads to a strong reduction of the Pb-Br-Pb bond angle, reducing the overlap between valence Pb and Br orbitals and increasing the bandgap.[28, 31]
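The exact cross-product expression is Eq. (1) of the Supporting Information and is not reproduced in the main text; the sketch below therefore uses one common Gaussian-Lorentzian cross-product form with the stated limits (\(s=0\) pure Gaussian, \(s=1\) pure Lorentzian) and synthetic data, simply to illustrate how \(A\), \(E_{0}\), \(\Gamma\) and \(s\) can be extracted in practice.

```python
import numpy as np
from scipy.optimize import curve_fit

def cross_product(E, A, E0, G, s):
    """A common Gaussian-Lorentzian cross-product lineshape (pseudo-Voigt
    substitute): pure Gaussian for s = 0, pure Lorentzian for s = 1; the
    exact Eq. (1) of the Supporting Information may differ in detail."""
    u = 2.0 * (E - E0) / G          # G is the FWHM in both limits
    return A * np.exp(-(1.0 - s) * np.log(2.0) * u**2) / (1.0 + s * u**2)

# synthetic spectrum standing in for a measured one (energy in eV)
E = np.linspace(2.15, 2.35, 400)
I = cross_product(E, 1.0, 2.25, 0.04, 0.2) + 0.01 * np.random.randn(E.size)

popt, pcov = curve_fit(cross_product, E, I, p0=[1.0, 2.25, 0.05, 0.5],
                       bounds=([0, 2.0, 1e-3, 0], [np.inf, 2.5, 1.0, 1]))
A, E0, G, s = popt                  # E0 and Gamma as plotted in Fig. 1b
```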
Figure 2a shows representative PL spectra of MAPbBr\({}_{3}\) measured at different pressures up to about 6.5 GPa. Spectra were again normalized to their absolute maximum intensity and vertically offset to ease their comparison. The main PL peak exhibits abrupt changes in the position of its maximum, which are indicative of the occurrence of three phase transitions in the pressure range of the experiment. This can be better appreciated in Fig. 2b, where the values of the peak energy \(E_{0}\) are plotted as a function of pressure. Different colors correspond to the four observed phases and vertical dashed lines (except for the first one) mark the corresponding phase transition pressures. We note that, solely at the very beginning of the first pressure upstroke, a sudden redshift of the PL emission, i.e. of \(E_{0}\), is observed in the range from 0 to \(\sim 0.25\) GPa. As shown below, this effect is accompanied by changes in the linewidth of the Raman peaks. Since this happens only once, we believe it is not related to a phase transition. We speculate that this behavior might arise from an initial strain relaxation the first time the sample is pressurized in the diamond anvil cell (DAC). Such a strain could have been introduced by the way the small chips to be loaded into the DAC are produced (see Methods section).
Within the cubic-I (Pm\(\overline{3}\)m) phase, stable from ambient conditions, the PL spectra exhibit a clear redshift and the gap energy of MAPbBr\({}_{3}\) displays a negative linear dependence on pressure. A linear regression to the data points yields a pressure coefficient of \((-54\pm 5)\) meV/GPa, which is very similar to that of other counterparts like MAPbI\({}_{3}\)[46] and MAPbCl\({}_{3}\)[35] (for comparison see the survey of pressure coefficients of MHPs published in Ref.[46]). As previously argued for MAPbI\({}_{3}\)[21], such a negative pressure dependence of the gap can be readily explained using the well-established systematics of the pressure coefficients of conventional semiconductors[47] and accounting for the bonding/antibonding and atomic orbital character of the valence and conduction-band states. Relativistic band-structure calculations[48, 49] for a pseudo-cubic phase of MAPbI\({}_{3}\) predict that, due to the huge spin-orbit interaction present in heavy atoms like Pb, there is a so-called band inversion. For MHPs this means that the top of the valence band is predominantly composed of antibonding Pb 6\(s\) orbitals, which shift up in energy with pressure, whereas the bottom of the conduction band is formed by the antibonding split-off Pb 6\(p\) orbitals, which are fairly pressure insensitive. An entirely analogous result is expected for MAPbBr\({}_{3}\), which explains the negative sign and magnitude of the gap pressure coefficient.
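The quoted pressure coefficient follows from a simple linear regression of \(E_{0}\) versus pressure within the cubic-I phase; a sketch with synthetic (not the measured) data points, where the uncertainty is read off the fit covariance:

```python
import numpy as np

# Illustrative E0(P) points within the cubic-I phase (synthetic data only)
P = np.array([0.05, 0.15, 0.30, 0.45, 0.60, 0.72])            # pressure (GPa)
E0 = np.array([2.301, 2.295, 2.287, 2.278, 2.271, 2.264])     # peak energy (eV)

coef, cov = np.polyfit(P, E0, 1, cov=True)       # linear fit with covariance
slope, err = 1e3 * coef[0], 1e3 * np.sqrt(cov[0, 0])
print(f"dE0/dP = {slope:.0f} +/- {err:.0f} meV/GPa")
```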
In MAPbBr\({}_{3}\), the first phase transition occurs at a low pressure of ca. 0.75 GPa, as signalled by a turnover in the pressure dependence of the PL peak energy \(E_{0}\). This is an isostructural transition because the new high-pressure phase, which is stable
Figure 2: (a) PL spectra of MAPbBr\({}_{3}\) obtained for incremental steps of pressure up to about 6.5 GPa using the 405-nm laser line for excitation. The spectra were normalized to their maximum intensity and plotted with a vertical shift for increasing pressure. The different colors indicate the subsequent phases adopted by the material during the pressure upstroke (Cubic I: Pm\(\overline{3}\)m, Cubic II: Im\(\overline{3}\), Ortho. I: Pnma, Ortho. II: unknown). (b) The PL peak energy \(E_{0}\) plotted as a function of pressure, obtained from the PL lineshape fits using Eq. (1) of the Supporting Information. The pressures at which the phase transitions occur are indicated by vertical dashed lines. See text for details.
up to 2.2 GPa, corresponds to the cubic-II (Im\(\overline{3}\)) phase, as reported elsewhere.[14, 15, 28] This phase is characterized by a piecewise linear increase of \(E_{0}\) with increasing pressure, exhibiting a kink in the pressure dependence at about 1.2 GPa, also observed by Yesudhas et al.[15] However, there is no hint of a phase transition in the Raman data at this pressure, so the origin of the kink remains elusive to us. In contrast, the fact that in the cubic-II phase the gap energy gradually but steadily increases with pressure can be understood as arising from a pressure-induced increase in octahedral tilting. The cubic-II phase (Im\(\overline{3}\)) is obtained from the cubic-I (Pm\(\overline{3}\)m) by an alternate tilting of the PbBr\({}_{6}\) octahedrons in the direction of the cube diagonals. This doubles the unit cell along all three directions, while the lattice remains cubic. Once the tilting starts, it increases gradually with pressure, causing the observed incremental opening of the gap.
The second phase transformation occurs at 2.2 GPa and is characterized by an abrupt increase of the PL peak energy. As explained in the discussion of the Raman results, this new phase is perfectly crystalline and probably orthorhombic in nature, in agreement with previous reports.[14, 15] The PL spectra clearly indicate the occurrence of a third phase transition at about 3.75 GPa, characterized by a dramatic change in PL lineshape (gray spectra in Fig. 2a). Two broad peaks appear at much lower energies and there is an overall loss of intensity together with a pronounced broadening of the main peak (see Fig. S2 of the Supporting Information). This points to a large heterogeneity in the sample, as far as the electronic states involved in the optical transitions are concerned. In fact, this is the pressure range for which a pressure-induced amorphization is reported for MAPbBr\({}_{3}\).[14, 15, 28, 34] However, we anticipate that the Raman data indicate again that this phase is crystalline and probably orthorhombic up to the highest pressure of this experiment, close to 6.5 GPa. We will return to the amorphization and how it might be generated under pressure in the discussion of the Raman results. Finally, we remark that the changes in the PL emission (and in the Raman spectra, too) are fully reversible only provided the pressure is kept below that of the transition into the ortho-I phase. Otherwise, a certain degree of hysteresis in the PL peak energy is observed upon releasing the pressure.
### Temperature and Pressure-Dependent Raman Spectra
Figure 3a summarizes the Raman results obtained on single-crystalline MAPbBr\({}_{3}\) as a function of temperature in a similar range as for the PL measurements (80 - 320 K) using the 785-nm line for excitation. The Raman spectra shown here correspond to the spectral region of the inorganic cage phonon modes below 300 cm\({}^{-1}\).[18] As reported before,[19] in the high-temperature cubic phase (red spectra), the MA dynamics is fully unfolded, resulting in a strong inhomogeneous broadening of the inorganic cage phonons due to the strong coupling to the molecular cations. In fact, the Raman spectra are quite featureless, exhibiting essentially a broad peak centered at around 70 cm\({}^{-1}\). The width of this Raman band decreases slightly when the sample transforms into the tetragonal phase (blue spectra), for which the MA cations are free to move only in the tetragonal plane. The partial reduction of the dynamic disorder in the tetragonal phase leads to a slight decrease of the inhomogeneous broadening. In stark contrast, several well-defined peaks are apparent in the Raman spectra of the orthorhombic phase (black curves in Fig. 3a), a phase in which the MA cations are locked inside the cage voids in a state of static order. Concomitant with the disappearance of dynamic disorder, the inhomogeneous broadening vanishes, such that the Raman peaks just display their lifetime-limited homogeneous linewidth. An instructive digression on the relative importance of homogeneous versus inhomogeneous broadening in the Raman spectra of the three halide compounds MAPbX\({}_{3}\) with X=Cl, Br and I in relation to dynamic disorder is given in the Supporting Information.
The raw Raman spectra recorded for MAPbBr\({}_{3}\) under pressure are shown in Fig. S3
Figure 3: (a) Raman spectra of MAPbBr\({}_{3}\) measured at different temperatures from 320 K to 80 K (steps of 10 K) in the spectral range of the inorganic cage phonon modes using the 785-nm line for excitation. The spectra were normalized to their maximum intensity and shifted vertically for clarity. The different colors of the spectra indicate the changes in phase after every transition. (b) Examples of the performed lineshape fits (green solid curves) to the Raman spectra (black closed symbols) using Gaussian functions for different pressures, as indicated, each one representing a different high-pressure phase of the material. The solid curves in the color of the corresponding phase represent the different phonon components. The shown spectra were obtained by subtracting a special function used for describing the combined effect of the dichroic filter and the broad central peak (see text for details).
of the Supporting Information. For a quantitative assessment of the effect of pressure on the vibrational spectrum of MAPbBr\({}_{3}\), we have decomposed each Raman spectrum into its different mode components by a lineshape analysis, as illustrated in Fig. 3b, where a representative example of the fits for each of the observed phases is displayed. We note that at room temperature, and mainly for the first two phases (P\(<\) 2.2 GPa), all Raman lineshapes are affected by the presence of a very broad and intense peak at very small Raman shifts and by the edge-like attenuation caused by the dichroic filter used to screen the laser. The former is interpreted as a broad central Raman peak originating from local polar fluctuations in the perovskite structure caused by dynamic disorder [50]. A special function was constructed to describe such a background,[22] which has been subtracted from the Raman spectra for a better visualization of the phonon modes. This simplifies the fitting of the Raman spectra using Gaussian functions, as illustrated in Fig. 3b. The number of Gaussian peaks and their approximate frequency positions are consistent with previous full assignments of Raman [18, 19] and far-infrared spectra [51], as well as with those observed at low temperature for the orthorhombic phase (Fig. 3a).
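As a sketch of this decomposition (after the background has been subtracted), one can fit a sum of Gaussians to the spectrum; the peak positions, widths, and the synthetic "signal" below are placeholders, not the measured modes:

```python
import numpy as np
from scipy.optimize import curve_fit

def multi_gauss(w, *p):
    # Sum of Gaussians; p packs (amplitude, center, sigma) for each mode
    y = np.zeros_like(w)
    for A, w0, sig in zip(p[0::3], p[1::3], p[2::3]):
        y += A * np.exp(-0.5 * ((w - w0) / sig) ** 2)
    return y

w = np.linspace(20, 300, 600)                      # Raman shift (cm^-1)
# In practice: signal = measured_counts - background(w); here a placeholder
signal = multi_gauss(w, 1.0, 70, 10, 0.5, 140, 12, 0.3, 250, 15)

p0 = [1.0, 72, 8, 0.5, 138, 10, 0.3, 248, 12]      # initial guesses per mode
popt, _ = curve_fit(multi_gauss, w, signal, p0=p0)
fwhm = 2 * np.sqrt(2 * np.log(2)) * np.asarray(popt[2::3])
print("centers (cm^-1):", popt[1::3])
print("FWHM    (cm^-1):", fwhm)
```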
The results of the Raman lineshape fits for the frequency and FWHM of the main peaks apparent in the Raman spectra of MAPbBr\({}_{3}\) (see Figs. S3 and 3b) are depicted in Fig. 4 as a function of pressure. In contrast to the case of MAPbI\({}_{3}\),[21] the changes in the Raman frequencies and, especially, in the linewidths are much less pronounced in MAPbBr\({}_{3}\). The latter is a consequence of the greater _homogeneous broadening_ of the Raman peaks due to a stronger coupling with the MA molecules within the narrower
Figure 4: (a) The frequency and (b) full width at half maximum (FWHM) of the Raman peaks in the spectral region of the inorganic cage phonons below 300 cm\({}^{-1}\), as obtained from the lineshape fits as a function of pressure up to ca. 6.5 GPa. The pressures at which the phase transitions occur are marked with vertical dashed lines. The different phases are indicated (C: cubic, O: orthorhombic).
voids of the lead bromide cage (see the digression in the Supporting Information), which hampers the observation of the variations of the inhomogeneous part of the linewidth with the amount of disorder. However, the clear changes in Raman lineshape observed with increasing pressure, such as the reduction of the linewidths and/or the appearance of additional well-resolved peaks, allowed us to corroborate the occurrence of the phase transitions previously ascertained from the PL experiments. Therefore, the dashed lines in Fig. 4a also denote the phase transition pressures. An exception is the first abrupt and irreversible decrease in linewidth, which occurs only once at the start of the pressure experiments.
An important result of this work concerns the observation of a sharpening of the Raman modes also for the orthorhombic-II phase (see Figs. 3b and 4b), a phase that, in our experiments, remains stable above the onset of amorphization reported by most of the high-pressure work on MAPbBr\({}_{3}\)[14, 15, 28, 34] and other halide perovskites. [8, 26, 29, 30, 31, 35] In contrast, our Raman linewidths remain narrow up to ca. 6.5 GPa, the highest pressure of these experiments, exactly as we previously reported for MAPbI\({}_{3}\). [21] In this respect, it is interesting to compare our Raman results with those of Capitani _et al._, [14] where the low-frequency Raman spectra exhibit a broad, featureless band for the cubic low-pressure phases and well-defined, sharp peaks for the orthorhombic-I phase, exactly as in our case. The key difference lies in the strong broadening that Capitani _et al._ observe in MAPbBr\({}_{3}\) above ca. 4 GPa, which was ascribed to an amorphous-like state of _static_ disorder. [14, 42] Considering that the pressure-induced changes in the Raman linewidth are due to variations of its inhomogeneous part, this provides a tool to monitor the degree of disorder of the crystal lattice. We note, however, that the inhomogeneous broadening is unable to distinguish between static and dynamic disorder. At least in this particular case, this is so because the typical duration times of the Raman scattering processes are in the sub-picosecond regime, [52] i.e. much faster than the times required for the jump-like reorientations of the MA molecules causing dynamic disorder. Hence, a Raman measurement just corresponds to a sampling of \(10^{12}\) to \(10^{13}\) different but static MA orientational motifs per second, occurring in the sample throughout the molecular-cation dynamics. In fact, a marked increase in inhomogeneous broadening has also been observed in the Raman spectra of the low-temperature phases of FAPbBr\({}_{3}\)[53] and FA\({}_{x}\)MA\({}_{1-x}\)PbI\({}_{3}\)[22] with \(x>0.4\), which exhibit static structural disorder.
Hence, according to the Raman data, with increasing pressure MAPbBr\({}_{3}\) transforms from a state of dynamic disorder in the cubic phases, due to an unleashed MA-cation dynamics, to a state of static order with all MA molecules locked inside the cage voids and orderly oriented in the repeated unit cell of the orthorhombic-I phase. A further increase of pressure above ca. 4 GPa can either induce a transformation into a statically disordered phase [14] or into a presumably orthorhombic phase, as reported here, where the short-range order is still preserved (sharp Raman peaks) but the optical emission shows clear signs of carrier-localization effects compatible with an incipient amorphization (see Fig. 2a). The first question is: what triggers the amorphization? Obviously, the MA cations cannot be the trigger, since they are locked and ordered throughout the crystal structure. By combining density functional theory and ab-initio molecular dynamics calculations,
[40] an answer to this question has recently been provided for CsPbI\({}_{3}\), although it is valid in general for MHPs. Essentially, high pressure induces a phase instability driven by the softening of lattice vibrational modes associated with the tilting of the PbI\({}_{6}\) octahedrons, i.e. with a strong reduction of the Pb-I-Pb bond angle. The deformation caused by the stark octahedral tilting starts at different seeding points across the sample and extends gradually, leading to a loss of the long- and short-range crystalline order. If so, the next question that arises is why the onset of amorphization is so sample dependent (reported values range from 2 to 7 GPa). One possibility might be the different experimental conditions, for the phase behavior of MHPs under compression can be very sensitive to the degree of hydrostaticity of the pressure-transmitting medium used. [54] However, we propose an alternative explanation based on the amount of vacancies (mainly Pb vacancies) present in the sample. In a recent study on the optical emission of FA\({}_{x}\)MA\({}_{1-x}\)PbI\({}_{3}\) mixed crystals [45], we have shown that the most common shallow defects in single-crystalline MHPs are vacancies, mainly of Pb but also of the halogen and the A-site cation. In view of the fact that the crystal structure is already deformed around a vacancy, we can foresee vacancies acting as seeding points that trigger the proposed pressure-induced lattice instability, leading to static disorder. [40]
We finally turn to a key result that concerns the temperature and pressure dependence of the N-H symmetric stretching vibration [\(\nu_{s}\)(NH\({}_{3}^{+}\))] of the MA cations, as determined by Raman scattering. This vibrational mode corresponds to the strongest peak in the Raman spectrum of MAPbBr\({}_{3}\) in the spectral range of the N-H stretching vibrations around 3000 cm\({}^{-1}\), as shown in the inset to Fig. 5. It provides direct information about the coupling between the inorganic cage and the A
Figure 5: The frequency of the N-H symmetric stretching vibration [\(\nu_{s}\)(NH\({}_{3}^{+}\))] of the methylammonium (a) as a function of temperature at ambient pressure and (b) as a function of pressure at room temperature. The pressures at which the phase transitions occur are marked with vertical dashed lines and the different phases are indicated. The inset shows a representative Raman spectrum in the range of the N-H stretching vibrations around 3000 cm\({}^{-1}\).
site cations, in particular, allowing one to unravel the relative weight of H bonding and steric hindrance. This is so because the frequency of this vibrational mode shifts up or down upon changes in temperature, pressure or composition, depending on whether the coupling between both sublattices is dominated by steric or H-bonding effects, respectively. The frequency of the \(\nu_{s}(\text{NH}_{3}^{+})\) vibration is determined by the strength of the covalent bond between nitrogen and hydrogen. In the H-bonding case, the electrostatic attraction between the H\({}^{+}\) and the negative halide ion of the inorganic cage weakens the N-H bond by elongating it, thus causing a redshift. [55] On the contrary, the DSI is repulsive and becomes stronger the closer the H and the halide atom come to each other, which in turn shortens the N-H bond, causing a blueshift. As shown in Fig. 5a, the frequency of the N-H stretching vibration decreases slightly with decreasing temperature (by about 2 cm\({}^{-1}\) from room temperature down to 80 K). This is a clear indication that H-bonding increases in importance with decreasing temperature, while the gradual cooling of the MA dynamics diminishes the steric effects. In fact, in the low-temperature orthorhombic phase, H-bonding is crucial in determining the arrangement (position and orientation) of the MA molecules within the cage voids. [40] However, at room temperature the application of moderate pressure causes a strong increase in the frequency of the N-H stretching mode (ca. 8 cm\({}^{-1}\) up to 1.2 GPa), as displayed in Fig. 5b. This is compelling evidence that, when the MA dynamics is fully unfolded, the DSI dominates the inter-sublattice coupling. We point out that the prominent role of the DSI at ambient conditions is also demonstrated by the theoretically predicted and experimentally assessed blueshift of the \(\nu_{s}(\text{NH}_{3}^{+})\) vibration upon a reduction of the lattice parameter by substitution of the halide atom from I to Br to Cl. [19] This idea gathers additional support from a recent study that combines Raman scattering and density functional theory, where the entire vibrational spectrum of _isolated_ MA\({}^{+}\) and FA\({}^{+}\) molecules is compared with that of MAPbX\({}_{3}\) and FAPbX\({}_{3}\) (X=I and Br), respectively. [56] This comparison clearly shows that there are no hydrogen bonds in MHPs at room temperature.
## 3 Discussion
At this point, it is worth offering a general discussion on the relationship between the crystal structures adopted by MHPs and the magnitude of dynamic disorder, as a function of different important parameters such as temperature, pressure and composition. For this purpose we use the sketch of Fig. 6, which serves both as a guide for the discussion and as a graphical summary of the "take-home" message of this work. The main hypothesis is that the strength of the dynamic steric interaction (DSI) and, hence, its leading role in the coupling between the inorganic metal-halide network and the A-site cation sublattice increase in direct proportion to the amount of dynamic disorder caused by the unfolded A-site cation dynamics. In this sense, the arrows in Fig. 6 indicate the direction of increase of the DSI linked to the variation of the corresponding parameter.
We first discuss the effect of temperature, represented by the blue circle in Fig. 6. With increasing temperature, the structural sequence exhibited by MHPs is typically: orthorhombic (O)\(\rightarrow\)tetragonal (T)\(\rightarrow\)cubic (C). Concomitantly with the thermal activation of vibrations, rotations and translations of the A-site cations within the cage voids, there is an increase in dynamic disorder and, thus, of the DSI. The increase in entropy from dynamic disorder overcompensates both the decrease in structural entropy for the more symmetric structures and the detrimental effect of the lattice thermal expansion on the DSI.
We next consider the impact of replacing the A-site cation on the structural behavior of MHPs (orange circle in Fig. 6). The same O\(\rightarrow\)T\(\rightarrow\)C sequence is observed for increasing cation size, when going from an atom like Cs to a molecule like MA and a larger one such as FA. Even though the lattice parameter of the perovskite increases slightly with A-site cation size, the effective volume filled by the A-site cation (\(V_{A}\)), the so-called steric bulk, increases faster than the void volume (\(V_{v}\)) itself. The volume \(V_{A}\) can be inferred, for example, from the atomic displacement parameter plots at 50% probability from neutron scattering [6, 7] or from molecular dynamics calculations. [9, 10] The key point is that the strength of the DSI is proportional to the ratio \(\frac{V_{A}}{V_{v}}\) and, being repulsive in nature, the fast movement of the A-site cations produces the same effect as an _internal_ pressure acting outwards on the imaginary walls of the cage voids. As clearly shown by molecular dynamics simulations, [9, 10] the spherical
Figure 6: Schematic representation of the relationship between crystal structure and dynamic disorder in lead halide perovskites with formula APbX\({}_{3}\) as a function of temperature, pressure and composition (A-site cation and halogen anion types). The arrow pointing counterclockwise represents the direction of increase of the DSI. Numbers in parentheses correspond to the cation/anion ionic radii from the literature (in pm). [57, 58]
atomic-density cloud generated by the movement of the A-site cations in the three spatial directions favors a cubic void environment, thus stabilizing the cubic phase. In contrast, a free movement solely in a plane favors the stabilization of the tetragonal phase, whereas the orthorhombic phase is only compatible with the locking of the A-site cations inside the cage voids. A nice example of the effect of the A-site cation size can be appreciated for the series of lead bromide compounds. [33, 59]
We now turn to the discussion of the effects of the halogen atom substitution, which are illustrated by the green circle in Fig. 6. In this case, the effects on the structural behavior are more subtle than for the preceding parameters. However, one can recognize a certain correlation between the structural sequence O\(\rightarrow\)T\(\rightarrow\)C and the reduction of the ionic radius of the halogen atoms. The heavier the halogen atom, the larger its ionic radius, which leads to an increase of the lattice parameter, i.e. of \(V_{\rm v}\). This means that the DSI decreases with increasing ionic radius of the halogens, which explains why chlorine compounds are more prone to stabilize in the cubic phase than their bromide and iodide counterparts. At least, the T\(\rightarrow\)C transition temperature, for example for the MAPbX\({}_{3}\) family, shifts to higher temperatures as the halogen ionic radius increases. [19] As shown below for the case of varying pressure, halogen substitution works in a similar way to what is known as _chemical pressure_.
Finally, we discuss the structural effect of an external hydrostatic pressure. The corresponding grey circle in Fig. 6 appears out of phase with respect to the others, because the observed structural sequence under compression is T\(\rightarrow\)C\(\rightarrow\)O, as for the emblematic case of MAPbI\({}_{3}\). [8, 21, 26, 28] This seems a priori counterintuitive. In fact, since the effect of thermal expansion on the void volume \(V_{v}\) is opposite to that of compression, one would expect the pressure effect to be represented in the sketch of Fig. 6 by a circle similar to that for temperature but going clockwise instead of counterclockwise. The only way to understand such behavior is by considering the DSI as the interaction that dominates over H bonding. As mentioned before, when the A-site cation dynamics is fully unfolded and due to the repulsive character of the DSI, the moving A-site cations exert an outward force on the surrounding octahedrons, which partly counteracts the effect of the applied pressure. In the tetragonal phase, stable at ambient conditions for MAPbI\({}_{3}\), where the dynamics of the MA cations is restricted to the tetragonal plane, such a reaction of the MA molecules to compression is only expected along the (a,b) tetragonal axes. The contraction of the tetragonal axes under pressure is thus followed by a DSI-mediated reaction of the moving MA cations, which repels the tilted octahedrons and slows down further pressure-induced tilting. This leads to an effective asymmetry in the compressibility of the inorganic cage, in view of the fact that the longer (unperturbed) c axis would be more compressible than the (distorted) tetragonal ones. Thus, with increasing pressure the tetragonal distortion diminishes up to the point where the compressed crystal structure is almost cubic. At that point the MA dynamics becomes unleashed in all three directions in space, which in turn stabilizes the cubic phase at finite pressure. [9, 10] Further compression will eventually induce a transformation into an orthorhombic phase, which is thermodynamically more stable at reduced volumes and after the MA dynamics has collapsed. This phenomenology is a unique signature of the DSI present in MHPs. We point out
that from a purely structural point of view, ferroelectric perovskites like CsGeX\({}_{3}\) with X=I, Br, and Cl also exhibit a similar behavior under pressure. [60, 61] However, the reason for it is the pressure-induced reduction, up to a full collapse, of the Jahn-Teller distortion giving rise to the ferroelectric polarization. Lead halide perovskites, in contrast, are not ferroelectric but ferroelastic [62], and the transformation from the tetragonal to the cubic structure under compression is the consequence of a gradual pressure-induced, DSI-aided reduction of the tetragonal distortion.
## 4 Conclusion
In summary, we have performed a systematic study of the optical emission and vibrational properties of single-crystalline MAPbBr\({}_{3}\) as a function of temperature and hydrostatic pressure using photoluminescence and Raman scattering spectroscopy. These results, combined with the available literature data on other closely related MHPs, allowed us to unravel the underlying physics relating the crystal structure stability, as a function of composition as well as temperature and pressure conditions, to the dynamic disorder caused by the fast A-site cation dynamics. The main finding is that a full understanding of the relationship between structure and dynamic disorder in MHPs can only be achieved if dynamic steric effects are taken into account; H-bonding alone is insufficient. The leitmotif for the observed trends regarding the crystal phase sequences obtained with increasing temperature, pressure and A-site cation size, or with decreasing halogen ionic radius, is a strengthening of the DSI, which is directly linked to the magnitude of the dynamic disorder induced by the unfolded A-site cation roto-translational dynamics. Furthermore, we offer an explanation for the large spread in the reported values of the onset of the pressure-induced amorphization or static-disordered state, ubiquitous in MHPs. Here we suggest that vacancies (mainly of lead) act as seeding points for the pressure-induced lattice instability due to the softening of phonon modes related to octahedral tilting, which has been proposed to trigger amorphization. [40] Since the lattice is already deformed at a vacancy, the number of vacancies would then determine the onset of amorphization, making its observation fairly sample dependent. In this way, we believe we have deepened the understanding of a very fundamental issue for MHPs, namely the crystal-structure/dynamic-disorder relationship, thus contributing to advance the development of optoelectronic applications of this exceptional class of materials.
## 5 Methods
### Growth of the MAPbBr\({}_{3}\) single crystals
Single crystals of MAPbBr\({}_{3}\) were produced following the inverse solubility method of Saidaminov _et al._[63] Stoichiometric quantities of MABr (GreatCell Solar) and PbBr\({}_{2}\) (Merck, 99%) were dissolved at 20 \({}^{\circ}\)C in dry dimethylformamide (Alfa Aesar). Once fully dissolved, the solution was heated to 80 \({}^{\circ}\)C and left undisturbed for 3 hours to
allow crystallisation. The remaining solution was filtered off and large single crystals were oven dried at 100 \({}^{\circ}\)C overnight.
### High-pressure experiments
The high-pressure photoluminescence and micro-Raman scattering measurements were performed at room temperature employing a gasketed diamond anvil cell (DAC). Anhydrous propanol was used as the pressure-transmitting medium, which ensures good hydrostatic conditions in the pressure range of the present experiments (perfectly hydrostatic up to 4.2 GPa [64]) and proved chemically inert to MAPbBr\({}_{3}\). For loading the DAC, small chips with a thickness below ca. 30 \(\mu\)m were produced by crushing a big MAPbBr\({}_{3}\) single crystal between two glass slides. By close inspection of the debris we were able to pick out small enough, good-quality single crystals, recognized by their flat and shiny surfaces under the microscope. This simple but effective procedure allowed us to avoid thinning of the sample by either mechanical polishing or chemical etching, which are known to spoil the quality of such soft crystals. Small pieces of about 100\(\times\)100 \(\mu\)m\({}^{2}\) in size were placed into the DAC together with a ruby sphere for pressure calibration [65]. Here we point out that the Inconel gasket was intentionally pre-indented to a fairly large thickness of 120 \(\mu\)m, before drilling a hole of ca. 250 \(\mu\)m with a spark-gap machine from EasyLab. The reason was to be able to adjust the pressure with the DAC in steps of less than 0.05 GPa, mainly at very low pressures (below 1 GPa). For this purpose an electric motor drive was used to change the pressure in a continuous manner and at low speed (ca. 0.05 GPa/min). As a trade-off, the maximum pressure reached in our experiments was about 7 GPa. Regarding the high accuracy claimed in the measurement of the pressure, we point out that we always loaded more than one ruby sphere into the DAC for a multi-point determination of the pressure. The excitation of the ruby fluorescence was performed using extremely low laser powers in the range of a few tens of nW, in order to avoid any heating-induced shift of the ruby emission. Furthermore, the pressure was determined immediately before and after each PL or Raman measurement, to account for effects of mechanical relaxation of the DAC upon changes in pressure. The temperature of the room was also frequently monitored, to correct for any temperature increase, for example, from the morning to the evening or if other heat-generating equipment (laser, vacuum pump, etc.) was switched on nearby. Finally, the backlash of the spectrometer was also taken into account: to minimize its effect, the ruby fluorescence spectrum was always acquired by forcing the spectrometer to approach its final position from the same side.
### PL & Raman measurements
For the high-pressure experiments, the PL spectra were excited with the 405 nm line of a diode laser, whereas for the PL measurements at low temperatures the 514.5 nm line of an Ar\({}^{+}\)-ion laser was employed, using a very low incident light power below 2 \(\mu\)W. The 514.5 nm line was selected as the closest available laser line to the MAPbBr\({}_{3}\) gap. This turned out to be very important for attaining long-term stability and reproducibility of the
PL emission by reducing as much as possible laser-heating effects due to the thermalization of photo-generated hot carriers. For the Raman measurements, either an infrared diode laser emitting at 785 nm or the 633 nm line of a He-Ne laser was employed for excitation of the low-frequency spectra (below 500 cm\({}^{-1}\)) and the high-frequency ones (around 3000 cm\({}^{-1}\)), respectively. The former turned out to be the most suitable to excite the vibrational modes of the inorganic cage, providing also the highest spectral resolution and stray-light rejection. In all cases, a very low incident light power density below 15 W/cm\({}^{2}\) was used to avoid any photo-degradation of the samples, such that thermal damage by the laser can be safely ruled out. Spectra were collected using a 20\(\times\) long-working-distance objective with NA=0.35 and dispersed with a high-resolution LabRam HR800 grating spectrometer equipped with a charge-coupled-device detector. PL spectra were corrected for the spectral response of the spectrometer by normalizing each spectrum using the detector and 600-grooves/mm grating characteristics. Temperature-dependent measurements on large single crystals exhibiting flat surfaces were carried out between 80 and 320 K using a gas-flow cryostat from CryoVac with optical access that fits under the microscope of the LabRam setup.
**Supporting Information**
The Supporting Information contains a set of PL spectra recorded at five different temperatures in the range of approx. 10 to 50 K for each of the ten compositions of the FA\({}_{x}\)MA\({}_{1-x}\)PbI\({}_{3}\) system studied here, showing details of the lineshape fits performed to the PL spectra using multiple Gaussian-Lorentzian cross-product functions. The results of the line-shape fits concerning the peak energy, line width and intensity for the main emission features, plotted as a function of temperature, are also included.
**Acknowledgements**
The Spanish "Ministerio de Ciencia e Innovacion (MICINN)" is gratefully acknowledged for its support through grant CEX2019-000917-S (FUNFUTURE) in the framework of the Spanish Severo Ochoa Centre of Excellence program and the AEI/FEDER(UE) grants PGC2018-095411-B-100 (RAINBOW) and PID2021-1289240B-I00 (ISOSCELLES). The authors also thank the Catalan agency AGAUR for grant 2017-SGR-00488 and the National Network "Red Perovskitas" (MICINN funded). K.X. acknowledges a fellowship (CSC201806950006) from China Scholarship Council and the PhD programme in Materials Science from Universitat Autonoma de Barcelona in which he was enrolled. B.C. thanks the EPSRC for PhD studentship funding via the University of Bath, CSCT CDT (EP/G03768X/1).
**Author contributions** Conceptualization: A.R.G.; experiments-data generation: K.X. and L.P.-F.; materials synthesis: B.C.; analysis: K.X., L.P.-F. and A.R.G.; supervision: M.I.A., M.T.W. and A.R.G.; writing-original draft preparation: A.R.G. All authors reviewed the manuscript.
**Data availability** All data generated or analysed during this study are either included in this published article and its supplementary information files or are available from the corresponding author on reasonable request.
**References** |
2305.00997 | The Expressivity of Classical and Quantum Neural Networks on
Entanglement Entropy | Analytically continuing the von Neumann entropy from R\'enyi entropies is a
challenging task in quantum field theory. While the $n$-th R\'enyi entropy can
be computed using the replica method in the path integral representation of
quantum field theory, the analytic continuation can only be achieved for some
simple systems on a case-by-case basis. In this work, we propose a general
framework to tackle this problem using classical and quantum neural networks
with supervised learning. We begin by studying several examples with known von
Neumann entropy, where the input data is generated by representing $\text{Tr}
\rho_A^n$ with a generating function. We adopt KerasTuner to determine the
optimal network architecture and hyperparameters with limited data. In
addition, we frame a similar problem in terms of quantum machine learning
models, where the expressivity of the quantum models for the entanglement
entropy as a partial Fourier series is established. Our proposed methods can
accurately predict the von Neumann and R\'enyi entropies numerically,
highlighting the potential of deep learning techniques for solving problems in
quantum information theory. | Chih-Hung Wu, Ching-Che Yen | 2023-05-01T18:00:01Z | http://arxiv.org/abs/2305.00997v1 | # The Expressivity of Classical and Quantum Neural Networks on Entanglement Entropy
###### Abstract
Analytically continuing the von Neumann entropy from Renyi entropies is a challenging task in quantum field theory. While the \(n\)-th Renyi entropy can be computed using the replica method in the path integral representation of quantum field theory, the analytic continuation can only be achieved for some simple systems on a case-by-case basis. In this work, we propose a general framework to tackle this problem using classical and quantum neural networks with supervised learning. We begin by studying several examples with known von Neumann entropy, where the input data is generated by representing \(\operatorname{Tr}\rho_{A}^{n}\) with a generating function. We adopt KerasTuner to determine the optimal network architecture and hyperparameters with limited data. In addition, we frame a similar problem in terms of quantum machine learning models, where the expressivity of the quantum models for the entanglement entropy as a partial Fourier series is established. Our proposed methods can accurately predict the von Neumann and Renyi entropies numerically, highlighting the potential of deep learning techniques for solving problems in quantum information theory.
###### Contents
* 1 Introduction
* 2 Analytic continuation of von Neumann entropy from Renyi entropies
* 3 Deep learning von Neumann entropy
* 3.1 Model architectures and training strategies
* 3.2 Entanglement entropy of a single interval
* 3.3 Entanglement entropy of two disjoint intervals
* 4 Renyi entropies as sequential deep learning
* 4.1 Model architectures and training strategies
* 4.2 Examples of the sequential models
* 5 Quantum neural networks and von Neumann entropy
* 5.1 Fourier series from variational quantum machine learning models
* 5.2 The generating function as a Fourier series
* 5.3 The expressivity of the quantum models on the entanglement entropy
* 5.4 Recovering the von Neumann entropy
* 6 Discussion
* A Fourier series representation of the generating function
* B The Gegenbauer polynomials and the Gibbs phenomenon
## 1 Introduction
The _von Neumann entropy_ is widely regarded as an effective measure of quantum entanglement, and is often referred to as _entanglement entropy_. The study of entanglement entropy has yielded valuable applications, particularly in the context of quantum information and quantum gravity (see [1; 2] for a review). However, the analytic continuation from the _Renyi entropies_ to von Neumann entropy remains a challenge in quantum field theory for general systems. We tackle this problem using both classical
and quantum neural networks to examine their expressive power on entanglement entropy and the potential for simpler reconstruction of the von Neumann entropy from Renyi entropies.
Quantum field theory (QFT) provides an efficient method to compute the \(n\)-th Renyi entropy with integer \(n>1\), which is defined as [3]
\[S_{n}(\rho_{A})\equiv\frac{1}{1-n}\ln\mathrm{Tr}(\rho_{A}^{n}). \tag{1}\]
The computation is done by replicating the path integral representation of the reduced density matrix \(\rho_{A}\)\(n\) times. This step is non-trivial; however, we will be mainly looking at examples where explicit analytic expressions of the Renyi entropies are available, especially in two-dimensional conformal field theories (CFT\({}_{2}\)) [4; 5; 6; 7]. Then, upon analytic continuation to \(n\to 1\), we have the von Neumann entropy
\[S(\rho_{A})=\lim_{n\to 1}S_{n}(\rho_{A}). \tag{2}\]
The continuation can be viewed as an independent problem from computing the \(n\)-th Renyi entropy. Although the uniqueness of \(S(\rho_{A})\) from the continuation is guaranteed by Carlson's theorem, analytic expressions in closed forms are currently unknown for most cases.
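For intuition, the approach of \(S_{n}\) to \(S\) as \(n\to 1\) is easy to check numerically once the spectrum of \(\rho_{A}\) is known; a toy example with a random 8-dimensional spectrum (our illustration, not a field-theory computation):

```python
import numpy as np

rng = np.random.default_rng(42)
p = rng.random(8)
p /= p.sum()                      # toy spectrum of rho_A, so Tr rho_A = 1

def renyi(n):
    # S_n = ln(Tr rho_A^n) / (1 - n), evaluated on the spectrum
    return np.log(np.sum(p ** n)) / (1 - n)

S_vN = -np.sum(p * np.log(p))     # von Neumann entropy from the spectrum
for n in (2.0, 1.5, 1.1, 1.01, 1.001):
    print(f"S_{n:<5} = {renyi(n):.6f}")
print(f"S_vN    = {S_vN:.6f}")    # the n -> 1 limit of S_n
```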
Furthermore, while the \(S_{n}(\rho_{A})\) are well defined for both integer and non-integer \(n\), determining them for a set of integer values \(n>1\) is not sufficient. To obtain the von Neumann entropy, we must also take the limit \(n\to 1\) through real values of \(n>1\). The relationship between the Renyi entropies and the von Neumann entropy is therefore complex, and the required value of \(n\) for a precise numerical approximation of \(S(\rho_{A})\) is not clear.
Along this line, we are motivated to adopt an alternative method proposed in [8], which would allow us to study the connection between higher Renyi entropies and von Neumann entropy "accumulatively." This method relies on defining a generating function that manifests as a Taylor series
\[G(w;\rho_{A})=\sum_{k=1}^{\infty}\frac{\tilde{f}(k)}{k}w^{k},\quad\tilde{f}(k )=\mathrm{Tr}[\rho_{A}(1-\rho_{A})^{k}]. \tag{3}\]
Summing over \(k\) explicitly yields an absolutely convergent series that approximates the von Neumann entropy with increasing accuracy as \(w\to 1\). This method has both numerical and analytical advantages, where we refer to [8] for explicit examples. Note that the accuracy we can achieve in approximating the von Neumann entropy depends on the truncation of the partial sum in \(k\), which is case-dependent and can be
difficult to evaluate. It becomes particularly challenging when evaluating the higher-order Riemann-Siegel theta function in the general two-interval case of CFT\({}_{2}\)[8], which remains an open problem.
On the other hand, deep learning techniques have emerged as powerful tools for tackling the analytic continuation problem [9; 10; 11; 12; 13; 14], thanks to their universal approximation property. The universal approximation theorem states that artificial neural networks can approximate any continuous function under mild assumptions [15], where the von Neumann entropy is no exception. A neural network is trained on a dataset of known function values, with the objective of learning a latent manifold that can approximate the original function within the known parameter space. Once trained, the model can be used to make predictions outside the space by extrapolating the trained network. The goal is to minimize the prediction errors between the model's outputs and the actual function values. In our study, we frame the supervised learning task in two distinct ways: the first approach involves using densely connected neural networks to predict von Neumann entropy, while the second utilizes sequential learning models to extract higher Renyi entropies.
Instead of using a static "define-and-run" scheme, where the model structure is defined beforehand and remains fixed throughout training, we have opted for a dynamic "define-by-run" approach. Our goal is to determine the optimal model complexity and hyperparameters based on the input validation data automatically. To achieve this, we employ KerasTuner [16] with Bayesian optimization, which efficiently explores the hyperparameter space by training and evaluating different neural network configurations using cross-validation. KerasTuner uses the results to update a probabilistic model of the hyperparameter space, which is then used to suggest the next set of hyperparameters to evaluate, aiming to maximize expected performance improvement.
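A minimal sketch of this workflow looks as follows (the layer counts, unit ranges, and learning-rate bounds below are illustrative choices, not necessarily the search space used in this work):

```python
import keras_tuner as kt
from tensorflow import keras

def build_model(hp):
    # "Define-by-run": the architecture is built from the hyperparameters hp
    model = keras.Sequential()
    for i in range(hp.Int("num_layers", 1, 4)):
        model.add(keras.layers.Dense(hp.Int(f"units_{i}", 8, 128, step=8),
                                     activation="relu"))
    model.add(keras.layers.Dense(1))          # single output: the entropy
    lr = hp.Float("lr", 1e-4, 1e-2, sampling="log")
    model.compile(optimizer=keras.optimizers.Adam(lr), loss="mse")
    return model

tuner = kt.BayesianOptimization(build_model, objective="val_loss",
                                max_trials=50, overwrite=True)
# x_train, y_train, x_val, y_val: generating-function inputs and entropies
# tuner.search(x_train, y_train, epochs=100, validation_data=(x_val, y_val))
# best_model = tuner.get_best_models(num_models=1)[0]
```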
A similar question can be explicitly framed in terms of quantum machine learning, where a trainable quantum circuit can be used to emulate neural networks by encoding both the data inputs and the trainable weights using quantum gates. This approach bears many different names [17; 18; 19; 20; 21; 22], but we will call it a _quantum neural network_. Unlike classical neural networks, quantum neural networks are defined through a series of well-defined unitary operations, rather than by numerically optimizing the weights for the non-linear mapping between targets and data. This raises a fundamental question for quantum computing practitioners: can _any unitary operation_ be realized, or is there a particular characterization for the _learnable function class_? In other words, is the quantum model universal in its ability to express any function with the given data input? Answering these questions will not only aid in designing future algorithms, but also provide deeper insights into how quantum models achieve universal approximation [23; 24].
Recent progress in quantum neural networks has shown that data-encoding strategies play a crucial role in their expressive power. The problem of data encoding has been the subject of extensive theoretical and numerical studies [25; 26; 27; 28]. In this work, we build on the idea introduced in [29; 30], which demonstrated the expressivity of quantum models as partial Fourier series. By rewriting the generating function for the von Neumann entropy in terms of a Fourier series, we can similarly establish the expressivity using quantum neural networks. However, the Gibbs phenomenon in the Fourier series poses a challenge in recovering the von Neumann entropy. To overcome this, we reconstruct the entropy by expanding the Fourier series into a basis of Gegenbauer polynomials.
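In the simplest incarnation of this idea, a single-qubit model in which the input \(x\) is re-uploaded \(L\) times through Pauli-rotation encodings outputs a real-valued partial Fourier series with integer frequencies up to \(L\); a PennyLane sketch (our illustrative circuit, not necessarily the exact ansatz used later in the paper):

```python
import pennylane as qml
from pennylane import numpy as np

L = 3                              # data re-uploadings -> max frequency L
dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def quantum_model(x, weights):
    for l in range(L):
        qml.Rot(*weights[l], wires=0)    # trainable SU(2) block
        qml.RX(x, wires=0)               # data-encoding rotation
    qml.Rot(*weights[L], wires=0)
    return qml.expval(qml.PauliZ(0))     # f(x) = sum_{k=-L}^{L} c_k e^{i k x}

weights = np.random.uniform(0, 2 * np.pi, size=(L + 1, 3), requires_grad=True)
print(quantum_model(0.5, weights))
```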
The structure of this paper is as follows. In Sec. 2, we provide a brief overview of the analytic continuation of the von Neumann entropy from Renyi entropies within the framework of QFT. In addition, we introduce the generating function method that we use throughout the paper. In Sec. 3, we use densely connected neural networks with KerasTuner to extract the von Neumann entropy for several examples where analytic expressions are known. In Sec. 4, we employ sequential learning models for extracting higher Renyi entropies. Sec. 5 is dedicated to studying the expressive power of quantum neural networks in approximating the von Neumann entropy. In Sec. 6, we summarize our findings and discuss possible applications of our approach. Appendix A is devoted to the details of rewriting the generating function as a partial Fourier series, while Appendix B addresses the Gibbs phenomenon using Gegenbauer polynomials.
## 2 Analytic continuation of von Neumann entropy from Renyi entropies
Let us discuss how to calculate the von Neumann entropy in QFTs [31; 32; 33; 34]. Suppose we start with a QFT on a \(d\)-dimensional Minkowski spacetime with its Hilbert space specified on a Cauchy slice \(\Sigma\) of the spacetime. Without loss of generality, we can divide \(\Sigma\) into two disjoint sub-regions \(\Sigma=A\cup A^{c}\). Here \(A^{c}\) denotes the complement sub-region of \(A\). Therefore, the Hilbert space also factorizes into the tensor product \(\mathcal{H}_{\Sigma}=\mathcal{H}_{A}\otimes\mathcal{H}_{A^{c}}\). We then define a reduced density matrix \(\rho_{A}\) from a pure state on \(\Sigma\), which is therefore mixed, to capture the entanglement between the two regions. The von Neumann entropy \(S(\rho_{A})\) allows us to quantify this entanglement
\[S(\rho_{A})\equiv-\operatorname{Tr}(\rho_{A}\ln\rho_{A})=\frac{\operatorname{ Area}(\partial A)}{\epsilon^{d-2}}+\cdots. \tag{1}\]
Along with several nice properties, such as the invariance under unitary operations, complementarity for pure states, and a smooth interpolation between pure and maximally mixed states, it is therefore a fine-grained measure for the amount of entanglement between \(A\) and \(A^{c}\). The second equality holds for field theory, where we require a length scale \(\epsilon\) to regulate the UV divergence encoded in the short-distance correlations. The leading-order divergence is captured by the area of the entangling surface \(\partial A\), a universal feature of QFTs [35].1
Footnote 1: In CFT\({}_{2}\), the leading divergence for a single interval \(A\) of length \(\ell\) in the vacuum state on an infinite line is instead a logarithmic function of the length; this is the simplest example we will consider later.
There have been efforts to better understand the structure of the entanglement in QFTs, including free theory [36], heat kernels [37; 38], CFT techniques [39] and holographic methods based on AdS/CFT [40; 41]. But operationally, computing the von Neumann entropy analytically or numerically is still a daunting challenge for generic interacting QFTs. For a review, see [1].
Path integral provides a general method to access \(S(\rho_{A})\). The method starts with the Renyi entropies [3]
\[S_{n}(\rho_{A})=\frac{1}{1-n}\ln\operatorname{Tr}\rho_{A}^{n}, \tag{2}\]
for real \(n>1\). As previously mentioned, obtaining the von Neumann entropy via analytic continuation in \(n\) with \(n\to 1\) requires two crucial steps. An analytic form for the \(n\)-th Renyi entropy must be derived from the underlying field theory in the first place, and then we need to perform analytic continuation toward \(n\to 1\). These two steps are independent problems and often require different techniques. We will briefly comment on the two steps below.
Computing \(\operatorname{Tr}\rho_{A}^{n}\) is not easy; this is where the replica method comes in. The early form of the replica method was developed in [34], and it was later used to compute various examples in CFT\({}_{2}\)[4; 5; 6; 7], which can be compared with holographic ones [42]. The idea behind the replica method is to consider an orbifold of \(n\) copies of the field theory to compute \(\operatorname{Tr}\rho_{A}^{n}\) for positive integers \(n\). The computation reduces to evaluating the partition function on an \(n\)-sheeted Riemann surface, which can alternatively be computed from correlation functions of twist operators in the \(n\) copies. For more details on the construction in CFTs, see [4; 5; 6; 7]. If we are able to compute \(\operatorname{Tr}\rho_{A}^{n}\) for any positive integer \(n\geq 1\), we have
\[S(\rho_{A})=\lim_{n\to 1}S_{n}(\rho_{A})=-\lim_{n\to 1}\frac{\partial}{ \partial n}\operatorname{Tr}\rho_{A}^{n}. \tag{3}\]
This is computable for special states and regions, such as ball-shaped regions for the vacuum of the CFT\({}_{d}\). However, in CFT\({}_{2}\), since its infinite-dimensional symmetry is sufficient to fix lower-point correlation functions, we are able to compute \(\operatorname{Tr}\rho_{A}^{n}\) in several instances.
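For instance, for a single interval of length \(\ell\) in the vacuum of a CFT\({}_{2}\), \(\operatorname{Tr}\rho_{A}^{n}\propto(\ell/\epsilon)^{-\frac{c}{6}(n-1/n)}\), and the \(n\to 1\) limit of (3) can be taken symbolically; a quick check with sympy, dropping the non-universal prefactor:

```python
import sympy as sp

n, c, x = sp.symbols('n c x', positive=True)   # x stands for l/epsilon
tr_rho_n = x ** (-(c / 6) * (n - 1 / n))       # single-interval CFT2 result

S = -sp.diff(tr_rho_n, n).subs(n, 1)           # Eq. (3): S = -d/dn Tr rho^n at n=1
print(sp.simplify(S))                          # -> c*log(x)/3, i.e. (c/3) ln(l/eps)
```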
The analytic continuation in \(n\to 1\) is more subtle. Ensuring the existence of a unique analytic extension away from integer \(n\) typically requires the application of Carlson's theorem. This theorem guarantees the uniqueness of the analytic continuation from the Renyi entropies to the von Neumann entropy, provided that we can find some locally holomorphic function \(\mathcal{S}_{\nu}\) with \(\nu\in\mathbb{C}\) such that \(\mathcal{S}_{n}=S_{n}(\rho)\) for all integers \(n>1\), with appropriate asymptotic behavior as \(\nu\to\infty\). Then we have a unique \(S_{\nu}(\rho)=\mathcal{S}_{\nu}\)[43; 44]. Carlson's theorem thus addresses not only the problem of unique analytic continuation but also the issue of continuing across non-integer values of the Renyi entropies.
There are other methods to evaluate \(S(\rho_{A})\) in the context of string theory and AdS/CFT; see for examples [45; 46; 47; 48; 49; 50]. In this work, we would like to focus on an effective method outlined in [8] that is suitable for numerical considerations. In [8], the following generating function is used for the analytic continuation in \(n\) with a variable \(z\)
\[G(z;\rho_{A})\equiv-\operatorname{Tr}\bigg{(}\rho_{A}\ln\frac{1-z\rho_{A}}{1-z }\bigg{)}=\sum_{k=1}^{\infty}\frac{z^{k}}{k}\bigg{(}\operatorname{Tr}(\rho_{A} ^{k+1})-1\bigg{)}. \tag{4}\]
This manifest Taylor series is absolutely convergent in the unit disc with \(|z|<1\). We can analytically continue the function from the unit disc to a holomorphic function in \(\mathbb{C}\setminus[1,\infty)\) by choosing the branch cut of the logarithm to be along the positive real axis. The limit \(z\to-\infty\) is within the domain of holomorphicity and is exactly where we obtain the von Neumann entropy
\[S(\rho_{A})=\lim_{z\to-\infty}G(z;\rho_{A}). \tag{5}\]
However, a more useful form can be obtained by performing a Mobius transformation to a new variable \(w\)
\[G(w;\rho_{A})=-\operatorname{Tr}\bigg{(}\rho_{A}\ln\left\{1-w(1-\rho_{A}) \right\}\bigg{)},\quad w=\frac{z}{z-1}. \tag{6}\]
It again manifests as a Taylor series
\[G(w;\rho_{A})=\sum_{k=1}^{\infty}\frac{\tilde{f}(k)}{k}w^{k}, \tag{7}\]
where
\[\tilde{f}(k)=\operatorname{Tr}[\rho_{A}(1-\rho_{A})^{k}]=\sum_{m=0}^{k}\frac{ (-1)^{m}k!}{m!(k-m)!}\operatorname{Tr}\big{(}\rho_{A}^{m+1}\big{)}. \tag{8}\]
We again have a series written in terms of \(\operatorname{Tr}\rho_{A}^{n}\), and it is absolutely convergent in the unit disc \(|w|<1\). The convenience of using \(w\) is that by taking \(w\to 1\), we have the von Neumann entropy
\[S(\rho_{A})=\lim_{w\to 1}G(w;\rho_{A})=\sum_{k=1}^{\infty}\frac{\tilde{f}(k)}{k}. \tag{9}\]
This provides an exact expression of \(S(\rho_{A})\) starting from a known expression of \(\operatorname{Tr}\rho_{A}^{n}\). Numerically, we can obtain an accurate value of \(S(\rho_{A})\) by computing a partial sum in \(k\). The method guarantees that by summing to sufficiently large \(k\), we approach the von Neumann entropy with increasing accuracy.
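This convergence is easy to illustrate on a toy spectrum, where \(\tilde{f}(k)=\sum_{i}p_{i}(1-p_{i})^{k}\) (equal to the binomial sum over \(\operatorname{Tr}\rho_{A}^{m+1}\) in (8), but numerically stable to evaluate directly):

```python
import numpy as np

rng = np.random.default_rng(7)
p = rng.random(6)
p /= p.sum()                                   # toy spectrum of rho_A

def f_tilde(k):
    # Tr[rho_A (1 - rho_A)^k] on the spectrum; equivalent to Eq. (8)
    return np.sum(p * (1 - p) ** k)

S_exact = -np.sum(p * np.log(p))
partial = np.cumsum([f_tilde(k) / k for k in range(1, 1001)])
for K in (10, 100, 1000):
    print(f"k_max = {K:4d}: {partial[K - 1]:.6f}   (exact: {S_exact:.6f})")
```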
However, a difficulty is that, in general, we need to sum up \(k\sim 10^{3}\) terms to achieve a precision within \(10^{-3}\)[8]. This becomes computationally costly for cases with a complicated \(\operatorname{Tr}\rho_{A}^{n}\). Therefore, one advantage the neural network framework offers is the ability to give accurate predictions with only a limited amount of data, making it a more efficient method.
In this paper, we focus on various examples from CFT\({}_{2}\) with known analytic expressions of \(\operatorname{Tr}\rho_{A}^{n}\)[6], and we use the generating function \(G(w;\rho_{A})\) to generate the required training datasets for the neural networks.
## 3 Deep learning von Neumann entropy
This section aims to utilize deep neural networks to predict the von Neumann entropy via a supervised learning approach. By leveraging the gradient-based learning principle of the networks, we expect to find a non-linear mapping between the input data and the output targets. In the analytic continuation problem from the \(n\)-th Renyi entropy to the von Neumann entropy, such a non-linear mapping naturally arises. Accordingly, we consider \(S_{n}(\rho_{A})\) (equivalently \(\operatorname{Tr}\rho_{A}^{n}\) and the generating function) as our input data and \(S(\rho_{A})\) as the target function for the training process. As supervised learning, we will consider examples where analytic expressions of both sides are available. Ultimately, we will employ the trained models to predict the von Neumann entropy across various physical parameter regimes, demonstrating the efficacy and robustness of the approach.
The major advantage of using deep neural networks is that they improve the accuracy of the generating function for computing the von Neumann entropy. As we mentioned, the accuracy of this method depends on where we truncate the partial sum, and it often requires summing up to a large \(k\) in (9), which is numerically difficult. In a sense, it requires knowing much more information, such as that carried by the higher Renyi entropies, indicated by \(\operatorname{Tr}\rho_{A}^{n}\) in the series. Trained neural networks are able to predict
the von Neumann entropy more accurately given much fewer terms in the input data. We can even predict the von Neumann entropy for other parameter spaces without resorting to any data from the generating function.
Furthermore, the non-linear mappings the deep neural networks uncover can be useful for investigating the expressive power of neural networks on the von Neumann entropy. Additionally, they can be applied to study cases where analytic continuations are unknown and other entanglement measures that require analytic continuations.
In the following subsections, we will give more details on our data preparation and training strategies, then we turn to explicit examples as demonstrations.
### Model architectures and training strategies
Generating suitable training datasets and designing flexible deep learning models are both empirically driven tasks. In this subsection, we outline our strategies for both aspects.
**Data preparation**
To prepare the training datasets, we consider several examples with known \(S(\rho_{A})\). We use the generating function \(G(w;\rho)\), which can be computed from \(\operatorname{Tr}\rho_{A}^{n}\) for each example. This is equivalent to computing the higher Renyi entropies with different choices of physical parameters, since the "information" available is always \(\operatorname{Tr}\rho_{A}^{n}\); note, however, that the higher Renyi entropies each carry distinct information. Adopting the generating function is therefore preferable to using \(S_{n}(\rho_{A})\) itself, as it approaches the von Neumann entropy with increasing accuracy, making the comparison more transparent.
We generate \(N=10000\) input datasets for a fixed range of physical parameters, where each set contains \(k_{\text{max}}=50\) terms in (2.9); their corresponding von Neumann entropies will be the targets. We limit the amount of data to mimic the computational cost of using the generating function. We shuffle the input datasets randomly and then split the data into \(80\%\) for training, \(10\%\) for validation, and \(10\%\) for testing. Additionally, we use the trained neural networks to make predictions on another set of \(10000\) test datasets with a different physical parameter regime and compare them with the correct values as a non-trivial test for each example.
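A minimal sketch of this preparation step is given below; the helpers series_fn and entropy_fn are placeholders for routines returning the first \(k_{\text{max}}\) terms of (2.9) and the analytic entropy target for a given physical parameter.

```python
import numpy as np

def build_datasets(params, series_fn, entropy_fn, k_max=50, seed=0):
    # series_fn(p): the first k_max terms f~(k)/k of Eq. (2.9) at parameter p
    # entropy_fn(p): the analytic von Neumann entropy, used as the target
    rng = np.random.default_rng(seed)
    X = np.stack([series_fn(p)[:k_max] for p in params])   # shape (N, k_max)
    y = np.array([entropy_fn(p) for p in params])          # shape (N,)
    idx = rng.permutation(len(X))                          # random shuffle
    X, y = X[idx], y[idx]
    i80, i90 = int(0.8 * len(X)), int(0.9 * len(X))
    return ((X[:i80], y[:i80]),        # 80% training
            (X[i80:i90], y[i80:i90]),  # 10% validation
            (X[i90:], y[i90:]))        # 10% test
```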
**Model design**
To prevent overfitting and enhance the generalizability of our model, we have employed a combination of techniques in the design of the neural networks. The ReLU activation function is used throughout this section. We adopt the Adam optimizer [51] in the training process with the mean square error (MSE) as the loss function.
We consider a neural network consisting of a few hidden Dense layers with varying numbers of units in TensorFlow-Keras [52; 53]. In this case, each neuron in a layer
receives input from all the neurons in the previous layer. The Dense connection allows the model to find non-linear relations between the input and output, which is the case for analytic continuation. The final layer is a Dense layer with a single unit that outputs a unique value for each training dataset, which is expected to correspond to the von Neumann entropy. As an example, we show a neural network with 3 hidden Dense layers, each with 8 units, in Figure 1.
Figure 1: An architecture of 3 densely connected layers, where each layer has 8 units. The final output layer is a single Dense unit with a unique output corresponding to the von Neumann entropy.
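To make the setup concrete, a minimal TensorFlow-Keras sketch of the Figure 1 architecture is shown below; the input dimension of 50 (one feature per generating-function term) and the unit counts are illustrative, not the tuned architecture found later by KerasTuner.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Three densely connected hidden layers of 8 units each, as in Figure 1,
# mapping the 50 generating-function terms to a single entropy prediction.
model = keras.Sequential([
    keras.Input(shape=(50,)),
    layers.Dense(8, activation="relu"),
    layers.Dense(8, activation="relu"),
    layers.Dense(8, activation="relu"),
    layers.Dense(1),   # predicted von Neumann entropy
])
model.compile(optimizer=keras.optimizers.Adam(), loss="mse")
```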
To determine the optimal setting of our neural networks, we employ KerasTuner [16], a powerful tool that allows us to explore different combinations of model complexity, depth, and hyperparameters for a given task. An illustration of the KerasTuner process can be found in Figure 2. We use Bayesian optimization, and adjust the following designs and hyperparameters:
* We allow a maximum of 4 Dense layers. For each layer, we allow variable units in the range of 16 to 128 with a step size of 16. The number of units for each layer will be independent of each other.
* We allow BatchNormalization layers after the Dense layers as a Boolean choice to improve generalization and act as a regularization.
* A final dropout with log sampling of a dropout rate in the range of 0.1 to 0.5 is added as a Boolean choice.
* In the Adam optimizer, we only adjust the learning rate with log sampling from the range of \(3\times 10^{-3}\) to \(9\times 10^{-3}\). All other parameters are taken as default values in TensorFlow-Keras. We also use the AMSGrad [54] variant of this algorithm as a Boolean choice.
We deploy the KerasTuner for 100 trials with 2 executions per trial and monitor the validation loss with EarlyStopping of patience 8. Once the training is complete, since we will not be making any further hyperparameter changes, we no longer evaluate performance on the validation data. A common practice is to initialize new models using the best model designs found by KerasTuner while also including the validation data as part of the training data. Indeed, we select the top 5 best designs and train each one 20 times with EarlyStopping of patience 8. We pick the one with the smallest relative errors from the targets among the \(5\times 20\) models as our final model. We set the batch size in both the KerasTuner and the final training to be 512.

Figure 2: Flowchart illustrating the steps of KerasTuner with Bayesian optimization. Bayesian optimization is a method for finding the optimal set of designs and hyperparameters for a given dataset, by iteratively constructing a probabilistic model from a prior distribution for the objective function and using it to guide the search. Once the tuner search loop is complete, we extract the best model in the final training phase by including both the training and validation data.
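As an illustration of this search, a minimal KerasTuner sketch implementing the search space listed above might look as follows; the dataset variables (x_train, y_train, x_val, y_val) are placeholders, and this is a schematic of the procedure rather than the exact script.

```python
import keras_tuner as kt
from tensorflow import keras
from tensorflow.keras import layers

def build_model(hp):
    model = keras.Sequential([keras.Input(shape=(50,))])
    for i in range(hp.Int("num_layers", 1, 4)):            # up to 4 Dense layers
        model.add(layers.Dense(hp.Int(f"units_{i}", 16, 128, step=16),
                               activation="relu"))
        if hp.Boolean(f"batchnorm_{i}"):                   # optional BatchNormalization
            model.add(layers.BatchNormalization())
    if hp.Boolean("final_dropout"):                        # optional final dropout
        model.add(layers.Dropout(hp.Float("rate", 0.1, 0.5, sampling="log")))
    model.add(layers.Dense(1))
    model.compile(loss="mse", optimizer=keras.optimizers.Adam(
        learning_rate=hp.Float("lr", 3e-3, 9e-3, sampling="log"),
        amsgrad=hp.Boolean("amsgrad")))
    return model

tuner = kt.BayesianOptimization(build_model, objective="val_loss",
                                max_trials=100, executions_per_trial=2)
tuner.search(x_train, y_train, validation_data=(x_val, y_val), batch_size=512,
             callbacks=[keras.callbacks.EarlyStopping(patience=8)], epochs=1000)
```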
In the following two subsections, we will examine examples from \(\mathrm{CFT}_{2}\) with \(\mathrm{Tr}\,\rho_{A}^{n}\) and their corresponding von Neumann entropies \(S(\rho_{A})\)[4; 5; 6; 7; 8]. These instances are distinct and worth studying for several reasons. They have different mathematical structures and lack common patterns in their derivation from the field theory side, despite involving the evaluation of certain partition functions. Moreover, the analytic continuation for each case is intricate, providing strong evidence for the necessity of independent model designs.
### Entanglement entropy of a single interval
Throughout the following, we will only present the analytic expression of \(\mathrm{Tr}\,\rho_{A}^{n}\) since it is the only input of the generating function. We will also keep the UV cut-off \(\epsilon\) explicit in the formula.
#### Single interval
The simplest example corresponds to a single interval \(A\) of length \(\ell\) in the vacuum state of a \(\mathrm{CFT}_{2}\) on an infinite line. In this case, both the analytic forms of \(\mathrm{Tr}\,\rho_{A}^{n}\) and \(S(\rho_{A})\) are known [4], where \(S(\rho_{A})\) reduces to a simple logarithmic function that depends on \(\ell\). We have the following analytic form with a central charge \(c\)
\[\mathrm{Tr}\,\rho_{A}^{n}=\left(\frac{\ell}{\epsilon}\right)^{\frac{c}{6}( \frac{1}{n}-n)}, \tag{3.1}\]
that defines \(G(w;\rho_{A})\). The corresponding von Neumann entropy is given by
\[S(\rho_{A})=\frac{c}{3}\ln\frac{\ell}{\epsilon}. \tag{3.2}\]
We fixed the central charge \(c=1\) and the UV cutoff \(\epsilon=0.1\) when preparing the datasets. We generated 10000 sets of data for the train-validation-test split from \(\ell=1\) to 50, with an increment of \(\Delta\ell=5\times 10^{-3}\) between each step up to \(k=50\) in \(G(w;\rho_{A})\). To further validate our model, we generated an additional 10000 test datasets for the following physical parameters: \(\ell=51\) to 100 with \(\Delta\ell=5\times 10^{-3}\). For a density plot of the data distribution with respect to the target von Neumann entropy, see Figure 3.
Figure 4: Left: The MSE loss function as a function of epochs. We monitor the loss function with EarlyStopping, where the minimum loss is achieved at epoch 410 with loss \(\approx 10^{-7}\) for this instance. Right: The density plot of relative errors between the model predictions and targets. Note that the blue color corresponds to the test datasets from the initial train-validation-test split, while the green color is for the additional test datasets. We can see clearly that for both datasets, we have achieved high accuracy with relative errors \(\lesssim 0.30\%\).
Figure 3: The distribution of the data for the case of a single interval, where we plot density as a function of the von Neumann entropy computed by (3.2) with varying \(\ell\). The left plot represents the 10000 datasets for the train-validation-test split, while the right plot corresponds to the additional 10000 test datasets with a different physical parameter regime.
Figure 4 illustrates that the process outlined in the previous subsection drives the relative errors in predicting the test data down to a very small level. Moreover, the model's effectiveness is further confirmed by its ability to achieve similarly small relative errors when predicting the additional test datasets. The accuracy of the model's predictions for the two test datasets significantly surpasses the approximate entropy obtained by summing the first 50 terms of the generating function, as can be seen in Figure 5. We emphasize that in order for the generating function to achieve the same accuracy as the deep neural networks, we generally need to sum to \(k\geq 400\) in (2.9) [8]. This applies to all the following examples.
In this example, the von Neumann entropy is a simple logarithmic function, making it relatively straightforward for the deep learning models to decipher. However, we will now move on to a more challenging example.
Figure 5: We plot the predictions from the model with the analytic von Neumann entropy computed by (3.2) for the 1000 test datasets (left) from the training-validation-test split and the additional 10000 test datasets (right), with the same scale on both figures. The correct von Neumann entropy overlaps with the model's predictions precisely. We have also included the approximate entropy by summing over \(k=50\) terms in the generating function.

#### Single interval at finite temperature and length

We extend the single interval case to finite temperature and length, where \(\operatorname{Tr}\rho_{A}^{n}\) becomes a complicated function of the inverse temperature \(\beta=T^{-1}\) and the length \(\ell\). The analytic expression of the Renyi entropies was first derived in [55] for a two-dimensional free Dirac fermion on a circle from bosonization. We can impose periodic boundary conditions that correspond to finite size and finite temperature. For simplicity, we set the total spatial size \(L\) to \(1\), and use \(\ell\) to denote the interval length. In this case we have [55]
\[\operatorname{Tr}\rho_{A}^{n}=\prod_{k=-\frac{n-1}{2}}^{\frac{n-1}{2}}\left| \frac{2\pi\epsilon\eta(\tau)^{3}}{\theta_{1}(\ell|\tau)}\right|^{\frac{2k^{2}} {n^{2}}}\frac{|\theta_{\nu}(\frac{k\ell}{n}|\tau)|^{2}}{|\theta_{\nu}(0| \tau)|^{2}}, \tag{3.3}\]
where \(\epsilon\) is a UV cutoff. We study the case of \(\nu=3\), which is the Neveu-Schwarz (NS-NS) sector. We then have the following Dedekind eta function \(\eta(\tau)\) and the Jacobi theta functions \(\theta_{1}(z|\tau)\) and \(\theta_{3}(z|\tau)\)
\[\eta(\tau)\equiv q^{\frac{1}{24}}\prod_{n=1}^{\infty}(1-q^{n}), \tag{3.4}\]
\[\theta_{1}(z|\tau)\equiv\sum_{n=-\infty}^{n=\infty}(-1)^{n-\frac{1}{2}}e^{(n +\frac{1}{2})^{2}i\pi\tau}e^{(2n+1)\pi iz}\,,\qquad\theta_{3}(z|\tau)\equiv \sum_{n=-\infty}^{n=\infty}e^{n^{2}i\pi\tau}e^{2n\pi iz}\,. \tag{3.5}\]
Previously, the von Neumann entropy after analytically continuing (3.3) was only known in the high- and low-temperature regimes [55]. In fact, only the infinite-length or zero-temperature pieces are universal. However, the analytic von Neumann entropy for all temperatures was recently worked out in [56; 57], which we present below
\[S(\rho_{A})=\frac{1}{3}\log\frac{\sigma(\ell)}{\epsilon}+4i\ell\int_{0}^{ \infty}dq\frac{\zeta(iq\ell+1/2+i\beta/2)-\zeta(1/2)-\zeta(i\beta/2)}{e^{2\pi q }-1}. \tag{3.6}\]
Here \(\sigma\) and \(\zeta\) are the Weierstrass sigma function and zeta function with periods \(1\) and \(i\beta\), respectively. We can see clearly that the analytic expressions for both \(\operatorname{Tr}\rho_{A}^{n}\) and \(S(\rho_{A})\) are rather different compared to the previous example.
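For concreteness, a sketch of how (3.3) can be evaluated numerically with mpmath is given below. The mapping between the conventions of (3.4)-(3.5) and mpmath's jtheta (nome \(q=e^{i\pi\tau}\), argument \(\pi z\) relative to the paper's \(z\)) is our assumption and should be checked against a known limit; only absolute values enter (3.3), so overall phase conventions drop out.

```python
import mpmath as mp

def dedekind_eta(tau, nmax=200):
    # eta(tau) = q^{1/24} prod_{n>=1} (1 - q^n), Eq. (3.4), with q = e^{2 pi i tau};
    # the product is truncated at nmax, which is ample for tau = i*beta, beta ~ 1
    q = mp.e ** (2j * mp.pi * tau)
    return q ** (mp.mpf(1) / 24) * mp.fprod(1 - q ** n for n in range(1, nmax))

def trace_rho_n_thermal(n, ell=0.5, beta=1.0, eps=0.1):
    # Eq. (3.3) with nu = 3 (NS-NS sector) and total size L = 1, so tau = i*beta
    tau = 1j * mp.mpf(beta)
    q = mp.e ** (1j * mp.pi * tau)                     # nome used by mp.jtheta
    theta1 = abs(mp.jtheta(1, mp.pi * ell, q))
    prefac = abs(2 * mp.pi * eps * dedekind_eta(tau) ** 3) / theta1
    ks = [mp.mpf(2 * j - (n - 1)) / 2 for j in range(n)]   # k = -(n-1)/2, ..., (n-1)/2
    result = mp.mpf(1)
    for k in ks:
        result *= prefac ** (2 * k ** 2 / n ** 2) \
                  * abs(mp.jtheta(3, mp.pi * k * ell / n, q)) ** 2 \
                  / abs(mp.jtheta(3, 0, q)) ** 2
    return result
```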
In preparing the datasets, we fixed the interval length \(\ell=0.5\) and the UV cutoff \(\epsilon=0.1\). We generated \(10000\) sets of data for train-validation-test split from \(\beta=0.5\) to \(1.0\), with an increment of \(\Delta\beta=5\times 10^{-5}\) between each step up to \(k=50\) in \(G(w;\rho_{A})\). Since \(\beta\) corresponds to the inverse temperature, this is a natural parameter to vary as the formula (3.6) is valid for all temperatures. To further validate our model, we generated \(10000\) additional test datasets for the following physical parameters: \(\beta=1.0\) to \(1.5\) with \(\Delta\beta=5\times 10^{-5}\). A density plot of the data with respect to the von Neumann entropy is shown in Figure 6. As shown in Figure 7 and Figure 8, our model demonstrates its effectiveness in predicting both test datasets, providing accurate results for this highly non-trivial example.
Figure 6: The distribution of the two test datasets for the case of a single interval at finite temperature and length, where we plot density as a function of the von Neumann entropy computed by (3.6) with varying \(\beta\).
Figure 7: Left: The MSE loss function as a function of epochs. The minimum loss close to \(10^{-8}\) is achieved at epoch 86 for this instance. Right: The relative errors between the model predictions and targets for the two test datasets, where we have achieved high accuracy with relative errors \(\lesssim 0.6\%\).
### Entanglement entropy of two disjoint intervals
We now turn to the von Neumann entropy for the union of two intervals on an infinite line. In this case, several analytic expressions can be derived for both the Renyi and von Neumann entropies. The theory we consider is a CFT\({}_{2}\) of a free boson with central charge \(c=1\), and the von Neumann entropy will be characterized by two parameters, a cross-ratio \(x\) and a universal critical exponent \(\eta\). The latter is proportional to the square of the compactification radius.
To set up the system, we define the union of the two intervals as \(A\cup B\) with \(A=[x_{1},x_{2}]\) and \(B=[x_{3},x_{4}]\). The cross-ratio is defined to be
\[x=\frac{x_{12}x_{34}}{x_{13}x_{24}},\quad x_{ij}=x_{i}-x_{j}. \tag{3.7}\]
With this definition, we can write down \(\operatorname{Tr}\rho^{n}\) for two intervals in a free boson CFT with finite \(x\) and \(\eta\)[5]
\[\text{Tr}(\rho^{n})=c_{n}\bigg{(}\frac{\epsilon^{2}x_{13}x_{24}}{x_{12}x_{34} x_{14}x_{23}}\bigg{)}^{\frac{1}{6}(n-\frac{1}{n})}\mathcal{F}_{n}(x,\eta), \tag{3.8}\]
where \(\epsilon\) is a UV cutoff and \(c_{n}\) is a model-dependent coefficient [6] that we set to \(c_{n}=1\) for simplicity. An exact expression for \(\mathcal{F}_{n}(x,\eta)\) is given by
\[\mathcal{F}_{n}(x,\eta)=\frac{\Theta(0|\eta\Gamma)\Theta(0|\Gamma/\eta)}{[ \Theta(0|\Gamma)]^{2}}, \tag{3.9}\]
Figure 8: We plot the predictions from the model with the analytic von Neumann entropy computed by (3.6) for the two test datasets. Again, the approximate entropy by summing over \(k=50\) terms in the generating function is included.
for integers \(n\geq 1\). Here \(\Theta(z|\Gamma)\) is the Riemann-Siegel theta function defined as
\[\Theta(z|\Gamma)\equiv\sum_{m\in\mathbb{Z}^{n-1}}\exp[i\pi m^{t}\cdot\Gamma\cdot m +2\pi im^{t}\cdot z], \tag{3.10}\]
where \(\Gamma\) is a \((n-1)\times(n-1)\) matrix with elements
\[\Gamma_{rs}=\frac{2i}{n}\sum_{k=1}^{n-1}\sin\left(\frac{\pi k}{n}\right)\beta_ {k/n}\cos\bigg{[}\frac{2\pi k}{n}(r-s)\bigg{]}, \tag{3.11}\]
and
\[\beta_{y}=\frac{F_{y}(1-x)}{F_{y}(x)},\qquad F_{y}(x)\equiv{}_{2}F_{1}(y,1-y;1 ;x), \tag{3.12}\]
where \({}_{2}F_{1}\) is the hypergeometric function. A property of this example is that (3.9) is manifestly invariant under \(\eta\leftrightarrow 1/\eta\).
The analytic continuation towards the von Neumann entropy is not known, making it impossible to study this example directly with supervised learning. Although the Taylor series of the generating function guarantees convergence towards the true von Neumann entropy for sufficiently large values of \(k\) in the partial sum, evaluating the higher-dimensional Riemann-Siegel theta function becomes increasingly difficult. For efforts in this direction, see [58; 59]. However, we will revisit this example in the next section when discussing the sequence model.
However, there are two limiting cases where analytic perturbative expansions are available, and approximate analytic continuations of the von Neumann entropies can be obtained. The first limit corresponds to small values of the cross-ratio \(x\), where the von Neumann entropy has been computed analytically up to second order in \(x\). The second limit is the decompactification limit, where we take \(\eta\to\infty\). In this limit, there is an approximate expression for the von Neumann entropy.
#### Two intervals at small cross-ratio
Let us consider the following expansion of \(\mathcal{F}_{n}(x,\eta)\) at small \(x\) for some \(\eta\neq 1\)
\[\mathcal{F}_{n}(x,\eta)=1+\left(\frac{x}{4n^{2}}\right)^{\alpha}s_{2}(n)+ \left(\frac{x}{4n^{2}}\right)^{2\alpha}s_{4}(n)+\cdots, \tag{3.13}\]
where we can look at the first order contribution with
\[s_{2}(n)\equiv\mathcal{N}\frac{n}{2}\sum_{j=1}^{n-1}\frac{1}{\left[\sin(\pi j /n)\right]^{2\alpha}}. \tag{3.14}\]
The coefficient \(\alpha\) for a free boson is given by \(\alpha=\min[\eta,1/\eta]\). \(\mathcal{N}\) is the multiplicity of the lowest dimension operators, where for a free boson we have \(\mathcal{N}=2\). Up to this order, the analytic von Neumann entropy is given by
\[S(\rho_{AB})=\frac{1}{3}\ln\left(\frac{x_{12}x_{34}x_{14}x_{23}}{\epsilon^{2}x _{13}x_{24}}\right)-\mathcal{N}\bigg{(}\frac{x}{4}\bigg{)}^{\alpha}\frac{ \sqrt{\pi}\Gamma(\alpha+1)}{4\Gamma\left(\alpha+\frac{3}{2}\right)}-\cdots. \tag{3.15}\]
We can set up the numerics by taking \(|x_{12}|=|x_{34}|=r\), and the distance between the centers of \(A\) and \(B\) to be \(L\), then the cross-ratio is simply
\[x=\frac{x_{12}x_{34}}{x_{13}x_{24}}=\frac{r^{2}}{L^{2}}. \tag{3.16}\]
Similarly we can express \(|x_{14}|=L+r=L(1+\sqrt{x})\) and \(|x_{23}|=L-r=L(1-\sqrt{x})\). This would allow us to express everything in terms of \(x\) and \(L\).
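As a sketch, the first-order result (3.15) in this parametrization can be evaluated as below; the helper name and its defaults (matching the parameter choices quoted next) are our own illustrative choices.

```python
import numpy as np
from scipy.special import gamma

def entropy_two_intervals_small_x(x, L=14.0, alpha=0.5, eps2=0.1, N=2):
    # Eq. (3.15) to first order in x, using |x12| = |x34| = r = L*sqrt(x),
    # |x13| = |x24| = L, |x14| = L(1 + sqrt(x)), |x23| = L(1 - sqrt(x));
    # eps2 denotes the UV cutoff squared, N = 2 for the free boson
    r = L * np.sqrt(x)
    x14, x23 = L * (1.0 + np.sqrt(x)), L * (1.0 - np.sqrt(x))
    leading = np.log(r * r * x14 * x23 / (eps2 * L * L)) / 3.0
    first_order = N * (x / 4.0) ** alpha * np.sqrt(np.pi) * gamma(alpha + 1.0) \
                  / (4.0 * gamma(alpha + 1.5))
    return leading - first_order
```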
For the datasets, we fixed \(L=14\), \(\alpha=0.5\), and \(\epsilon^{2}=0.1\). We generated 10000 sets of data for train-validation-test split from \(x=0.05\) to \(0.1\), with an increment of \(\Delta x=5\times 10^{-6}\) between each step up to \(k=50\) in \(G(w;\rho_{A})\). To further validate our model, we generated 10000 additional test datasets for the following physical parameters: \(x=0.1\) to \(0.15\) with \(\Delta x=5\times 10^{-6}\). A density plot of the data with respect to the von Neumann entropy is shown in Figure 9. We refer to Figure 10 and Figure 11 for a clear demonstration of the learning outcomes.
The study up to second order in \(x\) using the generating function method is available in [8], as well as through the use of holographic methods [60]. Additionally, an analytic continuation toward the von Neumann entropy up to second order in \(x\) for general CFT\({}_{2}\) can be found in [61]. Although this is a subleading correction, it can also be approached using our method.
Figure 10: Left: The MSE loss function as a function of epochs. The minimum loss close to \(10^{-8}\) is achieved at epoch 696 for this instance. Right: The relative errors between the model predictions and targets for the two test datasets, where we have achieved high accuracy with relative errors \(\lesssim 0.03\%\).
Figure 9: The distribution of the two test datasets for the case of two intervals at small cross-ratio, where we plot density as a function of the von Neumann entropy computed by (3.15) with varying \(x\).
#### Two intervals in the decompactification limit
A different limit can be taken other than the small cross-ratio, in which approximate analytic Renyi entropies can be obtained. This is called the decompactification limit, where we take \(\eta\to\infty\); then for each fixed value of \(x\) we have \(\mathcal{F}_{n}(x,\eta)\) as
\[\mathcal{F}_{n}(x,\eta)=\bigg{[}\frac{\eta^{n-1}}{\prod_{k=1}^{n-1}F_{k/n}(x)F_ {k/n}(1-x)}\bigg{]}^{\frac{1}{2}}, \tag{3.17}\]
where \({}_{2}F_{1}\) is the hypergeometric function. Equation (3.17) is invariant under \(\eta\leftrightarrow 1/\eta\), so we will instead use the result with \(\eta\ll 1\)
\[\mathcal{F}_{n}(x,\eta)=\bigg{[}\frac{\eta^{-(n-1)}}{\prod_{k=1}^{n-1}F_{k/n}( x)F_{k/n}(1-x)}\bigg{]}^{\frac{1}{2}}. \tag{3.18}\]
In this case, the exact analytic continuation of the von Neumann entropy is not known, but there is an approximate result following the expansion
\[S(\rho_{AB})\simeq S^{W}(\rho_{AB})+\frac{1}{2}\ln\eta-\frac{D_{1}^{\prime}(x) +D_{1}^{\prime}(1-x)}{2}+\cdots,\quad(\eta\ll 1) \tag{3.19}\]
with \(S^{W}(\rho_{AB})\) being the von Neumann entropy computed from the Renyi entropies without the special function \(\mathcal{F}_{n}(x,\eta)\) in (3.8). Note that
\[D_{1}^{\prime}(x)=-\int_{-i\infty}^{i\infty}\frac{dz}{i}\frac{\pi z}{\sin^{2 }(\pi z)}\ln F_{z}(x). \tag{3.20}\]
Figure 11: We plot the predictions from the model with the analytic von Neumann entropy computed by (3.15) for the two test datasets. We also include the approximate entropy by summing over \(k=50\) terms in the generating function.
This approximate von Neumann entropy has been well tested in previous studies [5; 8], and we will adopt it as the target values in our deep learning models.
For the datasets, we fixed \(L=14\), \(x=0.5\) and \(\epsilon^{2}=0.1\). We generated 10000 sets of data for train-validation-test split from \(\eta=0.1\) to \(0.2\), with an increment of \(\Delta\eta=10^{-5}\) between each step up to \(k=50\). To further validate our model, we generated 10000 additional test datasets for the following physical parameters: \(\eta=0.2\) to \(0.3\) with \(\Delta\eta=10^{-5}\). A density plot of the data with respect to the von Neumann entropy is shown in Figure 12. We again refer to Figure 13 and Figure 14 for a clear demonstration of the learning outcomes.
We have seen that deep neural networks, when treated as supervised learning, can achieve accurate predictions for the von Neumann entropy that extends outside the parameter regime in the training phase. However, the potential for deep neural networks may go beyond this.
As we know, the analytic continuation must be worked out on a case-by-case basis (see the examples in [4; 5; 6; 7]) and may even depend on the method we use [8]. Finding general patterns in the analytic continuation is still an open question. Although it remains ambitious, the non-linear mapping that the neural networks uncover would allow us to investigate the expressive power of deep neural networks for the analytic continuation problem of the von Neumann entropy.

Figure 13: Left: The MSE loss function as a function of epochs. The minimum loss at around \(10^{-7}\) is achieved at epoch 132 for this instance. Right: The relative errors between the model predictions and targets for the two test datasets, where we have achieved high accuracy with relative errors \(\lesssim 0.4\%\).

Figure 14: We plot the predictions from the model with the analytic von Neumann entropy computed by (3.19) for the two test datasets. We also include the approximate entropy by summing over \(k=50\) terms in the generating function.
Our approach also opens up the possibility of using deep neural networks to study cases where analytic continuations are unknown, such as the general two-interval case. Furthermore, it may enable us to investigate other entanglement measures that follow similar patterns or require analytic continuations. We leave these questions as future tasks.
## 4 Renyi entropies as sequential deep learning
In this section, we focus on higher Renyi entropies using sequential learning models. Studying higher Renyi entropies that depend on \(\operatorname{Tr}\rho_{A}^{n}\) is equivalent to studying the higher-order terms in the Taylor series representation of the generating function (2.9). There are a few major motivations. Firstly, although the generating function can be used to compute higher-order terms, it becomes inefficient for more complex examples. Additionally, evaluating \(\operatorname{Tr}\rho_{A}^{n}\) in (3.8) for the general two-interval case involves the Riemann-Siegel theta function, which poses a challenge in computing higher Renyi entropies [58; 59; 8]. On the other hand, all higher Renyi entropies should be considered independent and cannot be obtained in a linear fashion. They can all be used to predict the von Neumann entropy, but in the Taylor series expansion (2.9), knowing higher Renyi entropies is equivalent to knowing a more accurate von Neumann entropy. As we cannot simply extrapolate the series, using a sequential learning approach is a statistically robust way to identify underlying patterns.
_Recurrent neural networks_ (RNNs) are a powerful type of neural network for processing sequences due to their "memory" property [62]. RNNs use internal loops to iterate through sequence elements while keeping a state that contains information about what has been observed so far. This property allows RNNs to identify patterns in a sequence regardless of their position in the sequence. To train an RNN, we initialize an arbitrary state and encode a rank-2 tensor of size (steps, input features), looping over multiple steps. At each step, the networks consider the current state at \(k\) with the input, and combine them to obtain the output at \(k+1\), which becomes the state for the next iteration.
RNNs incorporate both feedforward networks and _back-propagation through time_ (BPTT) [63; 64], with "time" representing the steps \(k\) in our case. The networks connect the outputs from a fully connected layer to the inputs of the same layer, referred to as the hidden states. These inputs receive the output values from the previous step, with the number of inputs to a neuron determined by both the number of inputs to the layer and the number of neurons in the layer itself, known as _recurrent connections_. Computing the output involves iteratively feeding the input vector from one step, computing the hidden states, and presenting the input vector for the next step to compute the new hidden states.
RNNs are useful for making predictions based on sequential data, or "sequential regression," as they learn patterns from past steps to predict the most probable values for the next step.
### Model architectures and training strategies
In this subsection, we discuss the methodology of treating the Renyi entropies (the Taylor series of the generating function) as sequence models.
**Data preparation**
To simulate the scenario where \(k_{\rm max}\) in the series cannot be efficiently computed, we generate \(N=10000\) datasets for different physical parameters, with each dataset having a maximum of \(k_{\rm max}=50\) steps in the series. We also shuffle the \(N\) datasets, since samples with close physical parameters will have most of their values in common. Among the \(N\) datasets, we only take a subset of \(p<N\) for the train-validation-test split. The remaining \(q=N-p\) datasets will all be used as test data for the trained model. This serves as a critical examination of the sequence models we find. The ideal scenario is that we only need a small number \(p\) of datasets while achieving accurate performance on the \(q\) datasets.
Due to the rather small number of steps available, we opt for the SimpleRNN structure in TensorFlow-Keras2 instead of more complicated architectures such as LSTM or GRU networks [66; 67].
Footnote 2: SimpleRNN suffers from the vanishing gradient problem when learning long dependencies [65]. Even using ReLU, which does not cause a vanishing gradient, back-propagation through time with weight sharing can still lead to a vanishing gradient across different steps. However, since the length of the sequence is small due to the limited maximum steps available in our case, we have found that SimpleRNN generally performs better than its variants.
We also need to be careful about the train-validation-test splitting process. In this type of problem, it is important to use validation and test data that is more recent than the training data. This is because the objective is to predict the next value given the past steps, and the data splitting should reflect this fact. Furthermore, by giving more weight to recent data, it is possible to mitigate the vanishing gradient (memory loss) problem that can occur early in the BPTT. In this work, the first 60% of the steps (\(k=1\sim 30\)) are used for training, the middle 20% (\(k=31\sim 40\)) for validation, and the last 20% (\(k=41\sim 50\)) for testing.
We split the datasets in the following way: for a single dataset from each step, we use a fixed number of past steps3, specified by \(\ell\), to predict the next value. This will create \((\text{steps}-\ell)\) sequences from each dataset, resulting in a total of \((\text{steps}-\ell)\times p\) sequences for the \(p\) datasets in the train-validation-test splitting. Using a fixed sequence length \(\ell\) allows the network to focus on the most relevant and recent information for predicting the next value, while also simplifying the input size and making it more compatible with our network architectures. We take \(p=1000\), \(q=9000\), and \(\ell=5\). An illustration of our data preparation strategy is shown in Figure 15.
Footnote 3: We could also include as many past steps as possible, but we have found it less effective. This can be attributed to our choice of network architectures and the fact that we have rather short maximum steps available.
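A minimal sketch of this windowing step is shown below; series_batch is a placeholder array holding, for each dataset, the \(k_{\rm max}=50\) series terms from which the training sequences are built.

```python
import numpy as np

def make_sequences(series_batch, window=5):
    # series_batch: shape (num_datasets, steps); each length-`window` slice of
    # past terms is paired with the next term of the series as its target,
    # yielding (steps - window) sequences per dataset
    X, y = [], []
    for series in series_batch:
        for i in range(len(series) - window):
            X.append(series[i:i + window])
            y.append(series[i + window])
    # shape (num_sequences, window, 1): one input feature per step
    return np.asarray(X)[..., None], np.asarray(y)
```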
**Model design**
After the pre-processing of data, we turn to the model design. Throughout the section, we use the ReLU activation function and Adam optimizer with MSE as the loss function.
Figure 15: Data preparation process for the sequential models. A total of \(N\) datasets are separated into two parts: the \(p\) datasets are for the initial train-validation-test split, while the \(q\) datasets are treated purely as test datasets. The zoomed-in figure on the right hand side illustrates how a single example sequence is generated, where we have used a fixed number of past steps \(\ell=5\). Note that for the additional \(q\) test datasets, a total of \((\text{steps}-\ell)\times q=405000\) sequences are generated.
In KerasTuner, we employ Bayesian optimization by adjusting a few crucial hyperparameters and designs. We summarize them in the following list:
* We introduce one or two SimpleRNN layers, with or without recurrent dropouts. The units of the first layer range from 64 to 256 with a step size of 16. If a second layer is used, the units range from 32 to 128 with a step size of 8. Recurrent dropout is applied with a dropout rate in the range of 0.1 to 0.3 using log sampling.
* We take LayerNormalization as a Boolean choice to enhance the training stability, even with shallow networks. The LayerNormalization is added after the SimpleRNN layer if there is only one layer; in between the two layers if there are two SimpleRNN layers.
* We allow a Dense layer with units ranging from 16 to 32 and a step size of 8 as an optional regressor after the recurrent layers.
* A final dropout with log sampling of a dropout rate in the range of 0.2 to 0.5 is added as a Boolean choice.
* In the Adam optimizer, we only adjust the learning rate with log sampling from the range of \(10^{-5}\) to \(10^{-4}\). All other parameters are taken as the default values in TensorFlow-Keras. We take the AMSGrad [54] variant of this algorithm as a Boolean choice.
The KerasTuner is deployed for 300 trials with 2 executions per trial. During the process, we monitor the validation loss using EarlyStopping of patience 8. Once the best set of hyperparameters and model architecture are identified based on the validation data, we initialize a new model with the same design, but with both the training and validation data. This new model is trained 30 times while monitoring the training loss using EarlyStopping of patience 10. The final predictions are obtained by averaging the results of the few cases with close yet overall smallest relative errors from the targets. The purpose of taking the average instead of picking the case with minimum loss is to smooth out possible outliers. We set the batch size in both the KerasTuner and the final training to be 2048.
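For orientation, one representative configuration from this search space is written out explicitly below; the unit counts, dropout rates, and the AMSGrad choice are illustrative picks, not the tuned values returned by KerasTuner.

```python
from tensorflow import keras
from tensorflow.keras import layers

# A candidate sequence model: two SimpleRNN layers with LayerNormalization in
# between, an optional Dense regressor, and a final dropout, trained to predict
# the next term of the series from ell = 5 past steps.
rnn = keras.Sequential([
    keras.Input(shape=(5, 1)),
    layers.SimpleRNN(128, recurrent_dropout=0.2, return_sequences=True),
    layers.LayerNormalization(),
    layers.SimpleRNN(64),
    layers.Dense(16, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(1),
])
rnn.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4, amsgrad=True),
            loss="mse")
```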
We will also use the trained model to make predictions on the \(q\) test data and compare them with the correct values as validation for hitting the benchmark.
### Examples of the sequential models
The proposed approach will be demonstrated using two examples. The first example is the simple representative case of a single interval (3.1), while the second is the more challenging case of two intervals in the decompactification limit (3.19), where the higher-order terms in the generating function cannot be efficiently computed. Additionally, we will briefly comment on the most non-trivial example of the general two-interval case.
**Single interval**
In this example, we have used the same \(N\) datasets for the single interval as in Sec. 3.2. Following the data splitting strategy we just outlined, it is worth noting that the ratio of training data to the overall dataset is relatively small. We have plotted the losses of the three best-performing models, as well as the density plot of relative errors for the two test datasets in Figure 16. Surprisingly, even with a small ratio of training data, we were able to achieve small relative errors on the additional test datasets.
Figure 16: Top: The loss function for the best 3 models as a function of epochs. We monitor the loss function with EarlyStopping, where the epochs of minimum losses at around \(10^{-8}\) for different models are specified in the parentheses of the legend. Bottom: The density plots as a function of relative errors for the two test datasets. The relative errors for the \(p\) test datasets are concentrated at around \(1\%\); while for the additional \(q\) test datasets, they are concentrated at around \(2.5\%\) with a very small ratio of outliers.
**Two intervals in the decompactification limit**
Again, we have used the same \(N\) datasets for the two intervals in the \(\eta\to\infty\) limit as in Sec. 3.3. In Figure 17, we have plotted the losses of the four best-performing models and the density plot of relative errors for the two test datasets. In this example, the KerasTuner identified a relatively small learning rate, which led us to truncate the training at a maximum of 1500 epochs since we had achieved the required accuracy. In this case, the predictions are of high accuracy, essentially without outliers.
Let us briefly address the most challenging example discussed in this paper, which is the general two-interval case (3.8) where the analytic expression for the von Neumann entropy is not available. In this example, only \(\operatorname{Tr}\rho_{A}^{n}\) is known, and since it involves the Riemann-Siegel theta function, computing the generating function for large \(k\) in the partial sum becomes almost infeasible. Therefore, the sequential learning models we have introduced represent the most viable approach for extracting useful information in this case.
Figure 17: Top: The loss function for the best 4 models as functions of epochs. We monitor the loss function with EarlyStopping. Bottom: The density plot as a function of relative errors for the two test datasets. The relative errors for the \(p\) test datasets are well within \(\lesssim 1.5\%\); while for the additional \(q\) test datasets, they are well within \(\lesssim 2\%\).
Since only \(k_{\rm max}\approx 10\) can be efficiently computed from the generating function in this case, we have much shorter steps for the sequential learning models. We have tested the above procedure with \(N=10000\) datasets and \(k_{\rm max}=10\), however, we could only achieve an average of \(5\%\) relative errors. Improvements may come from a larger dataset with a longer training time, which we leave as a future task.
In general, sequential learning models offer a potential solution for efficiently computing higher-order terms in the generating function. To extend our approach to longer sequences beyond the \(k_{\rm max}\) steps, we can treat the problem as self-supervised learning. However, this may require a more delicate model design to prevent error propagation. Nonetheless, exploring longer sequences can provide a more comprehensive understanding of the behavior of von Neumann entropy and its relation to Renyi entropies.
## 5 Quantum neural networks and von Neumann entropy
In this section, we explore a similar supervised learning task by treating quantum circuits as models that map data inputs to predictions, which probes the expressive power of quantum circuits as function approximators.
### Fourier series from variational quantum machine learning models
We will focus on a specific function class that a quantum neural network can explicitly realize, namely a simple Fourier-type sum [29; 30]. Before linking it to the von Neumann entropy, we shall first give an overview of the seminal works in [30].
Consider a general Fourier-type sum in the following form
\[f_{\theta_{i}}(\vec{x})=\sum_{\vec{\omega}\in\Omega}c_{\vec{\omega}}(\theta_{ i})e^{i\vec{\omega}\cdot\vec{x}}, \tag{5.1}\]
with the frequency spectrum specified by \(\Omega\subset\mathbb{R}^{N}\). Note that \(c_{\vec{\omega}}(\theta_{i})\) are the (complex) Fourier coefficients. We need to come up with a quantum model that can learn the characteristics of the sum by the model's control over the frequency spectrum and the Fourier coefficients.
Now we define the quantum machine learning model as the following expectation value
\[f_{\theta_{i}}(x)=\langle 0|U^{\dagger}(x,\theta_{i})MU(x,\theta_{i})|0\rangle, \tag{5.2}\]
where \(|0\rangle\) is taken to be some initial state of the quantum computer and \(M\) is the physical observable. Note that we have omitted writing the vector symbol and the hat on the operator, which should be clear from the context. The crucial component is \(U(x,\theta_{i})\), which is a quantum circuit that depends on the data input \(x\) and the trainable
parameters \(\theta_{i}\) with \(L\) layers. Each layer has a data-encoding circuit block \(S(x)\), and the trainable circuit block \(W(\theta_{i})\). Schematically, it has the form
\[U(x,\theta_{i})=W^{(L+1)}(\theta_{i})S(x)W^{(L)}(\theta_{i})\cdots W^{(2)}( \theta_{i})S(x)W^{(1)}(\theta_{i}), \tag{5.3}\]
where we refer to Figure 18 for a clear illustration.
Let us discuss the three major components of the quantum circuit in the following:
* The repeated data-encoding circuit block \(S(x)\) prepares an initial state that encodes the (one-dimensional) input data \(x\) and is not trainable due to the absence of free parameters. It is represented by certain gates that embed classical data into quantum states, with gates of the form \(g(x)=e^{-ixH}\), where the encoding Hamiltonian \(H\) is a Hermitian operator that generates the unitary evolution. In this work, we use the Pauli X-rotation gate, and the encoding Hamiltonians in \(S(x)\) will determine the available frequency spectrum \(\Omega\).
* The trainable circuit block \(W(\theta_{i})\) is parametrized by a set of free parameters \(\theta_{i}=(\theta_{1},\theta_{2},...)\). There is no special assumption made here and we can take these trainable blocks as arbitrary unitary operations. The trainable parameters will contribute to the coefficients \(c_{\omega}\).
Figure 18: Quantum neural networks with repeated data-encoding circuit blocks \(S(x)\) (whose gates are of the form \(g(x)=e^{-ixH}\)) and trainable circuit blocks \(W^{(i)}\). The data-encoding circuit blocks determine the available frequency spectrum for \(\vec{\omega}\), while the remainder determines the Fourier coefficients \(c_{\vec{\omega}}\).
* The final piece is the measurement of a physical observable \(M\) at the output. This observable is general; it could act locally on each wire or on a subset of wires in the circuit.
Our goal is to establish that \(f(x)\) can be written as a partial Fourier series [29; 30]
\[f_{\theta_{i}}(x)=\langle 0|U^{\dagger}(x,\theta_{i})MU(x,\theta_{i})|0\rangle =\sum_{n\in\Omega}c_{n}e^{inx}. \tag{5.4}\]
Note that for simplicity, we have taken the frequencies to be integers, \(\Omega\subset\mathbb{Z}^{N}\). The training process goes as follows: we sample a quantum model with \(U(x,\theta_{i})\), and then define the mean square error as the loss function. To optimize the loss function, we need to tune the free parameters \(\theta=(\theta_{1},\theta_{2},...)\). The optimization is performed by a classical optimization algorithm that queries the quantum device, where we can treat the quantum process as a black box and only examine the classical data input and the measurement output. The output of the quantum model is the expectation value of a Pauli-Z measurement.
We use the single-qubit Pauli rotation gate as the encoding \(g(x)\)[30]. The frequency spectrum \(\Omega\) is determined by the encoding Hamiltonians. Two scenarios can be considered to determine the available frequencies: the _data reuploading_[68] and the _parallel encodings_[69] models. In the former, we repeat a Pauli rotation gate \(r\) times in sequence, which means we act on the same qubit, but with multiple layers \(r=L\); whereas in the latter, we perform similar operations in parallel on \(r\) different qubits, but with a single layer \(L=1\). These models allow quantum circuits to access increasingly rich frequencies, where \(\Omega=\{-r,...,-1,0,1,...,r\}\) with a spectrum of integer-valued frequencies up to degree \(r\). This will correspond to the maximum degree of the partial Fourier series we want to compute.
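A minimal PennyLane sketch of the serial (data reuploading) model is given below; the choice \(r=6\) is illustrative, and the use of general Rot gates for the trainable blocks follows the single-qubit demonstrations of [30].

```python
import pennylane as qml

r = 6  # encoding repetitions; integer frequencies up to degree r become accessible
dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def serial_model(x, weights):            # weights has shape (r + 1, 3)
    for i in range(r):
        qml.Rot(*weights[i], wires=0)    # trainable block W^(i)(theta_i)
        qml.RX(x, wires=0)               # data-encoding block S(x), g(x) = e^{-i x X/2}
    qml.Rot(*weights[r], wires=0)        # final trainable block W^(r+1)(theta_i)
    return qml.expval(qml.PauliZ(0))     # Pauli-Z measurement as the model output
```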
From the discussion above, one can immediately derive the maximum accessible frequencies of such quantum models [30]. But in practice, if the degree of the target function is greater than the number of layers (for example, in the single qubit case), the fit will be much less accurate.4 Increasing the value of \(L\) typically requires more training epochs to converge at the same learning rate.
Footnote 4: Certain initial weight samplings may not even converge to a satisfactory solution. This is relevant to the barren plateau problem [70] generically present in variational quantum circuits with a random initialization, similar to the classical vanishing gradient problem.
This is relevant to a more difficult question of how to control the Fourier coefficients in the training process, given that all the blocks \(W^{(i)}(\theta_{i})\) and the measurement observable contribute to "every" Fourier coefficient. However, these coefficients are functions of the quantum circuit with limited degrees of freedom. This means that a
quantum circuit with a certain structure can only realize a subset of all possible Fourier coefficients, even with enough degrees of freedom. While a systemic understanding is not yet available, a simulation exploring which Fourier coefficients can be realized can be found in [30]. In fact, it remains an open question whether, for asymptotically large \(L\), a single qubit model can approximate any function by constructing arbitrary Fourier coefficients.
### The generating function as a Fourier series
Given the framework of the quantum model and its relation to a partial Fourier series, a natural question arises as to whether the entanglement entropy can be realized within this setup. To approach this question, it is meaningful to revisit the generating function for the von Neumann entropy
\[G(z;\rho_{A})\equiv-\operatorname{Tr}\left(\rho_{A}\ln\frac{1-z\rho_{A}}{1-z} \right)=\sum_{k=1}^{\infty}\frac{f(k)}{k}z^{k}, \tag{5.5}\]
as a manifest Taylor series. The goal is to rewrite the generating function in terms of a partial Fourier series. Therefore, we would be able to determine whether the von Neumann and Renyi entropies are the function classes that the quantum neural network can describe. Note that we will only focus on small-scale tests with a low depth or width of the circuit, as the depth or width of the circuit will correspond exactly to the orders that can be approximated in the Fourier series.
But we cannot simply convert either the original generating function or its Taylor series form to a Fourier series. Doing so would generally involve special functions of \(\rho_{A}\), which we would be unable to specify in terms of \(\operatorname{Tr}\rho_{A}^{n}\). Therefore, it is essential to have an expression of the Fourier series that allows us to compute the corresponding Fourier coefficients at different orders using \(\operatorname{Tr}\rho_{A}^{n}\), for which we know the analytic form from CFTs.
This can indeed be achieved; see Appendix A for a detailed derivation. The Fourier series representation of the generating function on an interval \([w_{1},w_{2}]\) with period \(T=w_{2}-w_{1}\) is given by
\[G(w;\rho)=\frac{a_{0}}{2}+\sum_{n=1}^{\infty}\bigg{\{}\sum_{m=1}^{\infty}\frac{\tilde{f}(m)}{m}C_{cos}(n,m)\cos\left(\frac{2\pi nw}{T}\right)+\sum_{m=1}^{\infty}\frac{\tilde{f}(m)}{m}C_{sin}(n,m)\sin\left(\frac{2\pi nw}{T}\right)\bigg{\}}, \tag{5.6}\]
where \(C_{cos}\) and \(C_{sin}\) are some special functions defined as
\[C_{cos}(n,m)=\frac{2}{(m+1)T}\bigg{[}{}_{p}F_{q}\bigg{(}\frac{m+1}{2};\frac{1}{2},\frac{m+3}{2};-\frac{n^{2}\pi^{2}t_{2}^{2}}{T^{2}}\bigg{)}t_{2}^{m+1}-{}_{p}F_{q}\bigg{(}\frac{m+1}{2};\frac{1}{2},\frac{m+3}{2};-\frac{n^{2}\pi^{2}t_{1}^{2}}{T^{2}}\bigg{)}t_{1}^{m+1}\bigg{]}, \tag{5.7}\]
\[C_{sin}(n,m)=\frac{4n\pi}{(m+2)T^{2}}\bigg{[}{}_{p}F_{q}\bigg{(}\frac{m+2}{2};\frac{3}{2},\frac{m+4}{2};-\frac{n^{2}\pi^{2}t_{2}^{2}}{T^{2}}\bigg{)}t_{2}^{m+2}-{}_{p}F_{q}\bigg{(}\frac{m+2}{2};\frac{3}{2},\frac{m+4}{2};-\frac{n^{2}\pi^{2}t_{1}^{2}}{T^{2}}\bigg{)}t_{1}^{m+2}\bigg{]}, \tag{5.8}\]
with \({}_{p}F_{q}\) being the generalized hypergeometric function. Note also that
\[\tilde{f}(m)\equiv\sum_{k=0}^{m}\frac{(-1)^{2m-k+1}m!}{k!(m-k)!}\,{\rm Tr}\,( \rho_{A}^{k+1}). \tag{5.9}\]
Similarly, the zeroth order Fourier coefficient is given by
\[a_{0}=\sum_{m=1}^{\infty}\frac{\tilde{f}(m)}{m}C_{cos}(0,m)=\sum_{m=1}^{\infty}\frac{\tilde{f}(m)}{m}\frac{2(w_{2}^{m+1}-w_{1}^{m+1})}{(m+1)T}. \tag{5.10}\]
Note that summing to \(m=10\) suffices for our purpose, while the summation in \(n\) corresponds to the degree of the Fourier series. Note that the complex-valued Fourier coefficients \(c_{n}\) to be used in our simulation can be easily reconstructed from the expression. Therefore, the only required input for evaluating the Fourier series is \(\tilde{f}(m)\), with \({\rm Tr}\,\rho_{A}^{k+1}\) explicitly given. This is exactly what we anticipated and allows for a straightforward comparison with the Taylor series form.
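As a sketch, the coefficients entering (5.6) can be evaluated with mpmath's generalized hypergeometric function; here f_tilde is a placeholder for a routine implementing (5.9), and the endpoints default to the interval \([-1,1]\) used below.

```python
import mpmath as mp

def C_cos(n, m, t1=-1, t2=1):
    # Eq. (5.7); mp.hyper([a], [b1, b2], z) is the generalized 1F2 function
    T = mp.mpf(t2 - t1)
    F = lambda t: mp.hyper([(m + 1) / 2], [0.5, (m + 3) / 2],
                           -(n * mp.pi * t / T) ** 2)
    return 2 / ((m + 1) * T) * (F(t2) * t2 ** (m + 1) - F(t1) * t1 ** (m + 1))

def C_sin(n, m, t1=-1, t2=1):
    # Eq. (5.8)
    T = mp.mpf(t2 - t1)
    F = lambda t: mp.hyper([(m + 2) / 2], [1.5, (m + 4) / 2],
                           -(n * mp.pi * t / T) ** 2)
    return 4 * n * mp.pi / ((m + 2) * T ** 2) * (F(t2) * t2 ** (m + 2)
                                                 - F(t1) * t1 ** (m + 2))

def fourier_coefficients(n, f_tilde, m_max=10):
    # a_n, b_n of Eq. (5.6); summing to m_max = 10 suffices for our purposes
    a_n = mp.fsum(f_tilde(m) / m * C_cos(n, m) for m in range(1, m_max + 1))
    b_n = mp.fsum(f_tilde(m) / m * C_sin(n, m) for m in range(1, m_max + 1))
    return a_n, b_n   # complex c_n = (a_n - i b_n)/2, with c_{-n} = conj(c_n)
```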
Note that the interval for the Fourier series is not arbitrary. We will take the interval \([w_{1},w_{2}]\) to be \([-1,1]\), which is the maximum interval where the Fourier series (5.6) is convergent. Furthermore, we expect that as \(w\to 1\) in (5.6), we arrive at the von Neumann entropy, that is
\[S(\rho_{A})=\lim_{w\to 1}G(w;\rho_{A}). \tag{5.11}\]
However, as we can see in Figure 19, there is a rapid oscillation near the end points of the interval for the Fourier series. Such oscillations near a jump discontinuity are a generic feature of Fourier series approximations of discontinuous or non-periodic functions, known as the _Gibbs phenomenon_. This phenomenon poses a serious problem in recovering accurate values of the von Neumann entropy because we are taking the limit to the boundary point \(w\to 1\). We will return to this issue in Section 5.4.
### The expressivity of the quantum models on the entanglement entropy
In this subsection, we will demonstrate the expressivity of the quantum models of the partial Fourier series with examples from CFTs. We will focus on two specific examples: a single interval and two intervals at small cross-ratio \(x\). While these examples suffice for our purpose, it is worth noting that once the Fourier series representation is derived using the expression in (5.6), all examples with a known analytic form of \(\operatorname{Tr}\rho_{A}^{n}\) can be studied.
The demonstration is performed using PennyLane [71]. We have adopted the Adam optimizer with a learning rate of \(0.005\) and a batch size of \(100\), with the MSE as the loss function. Note that we have chosen a smaller learning rate compared to [30] and monitor the training with EarlyStopping. For the two examples we study, we have considered both the serial (data reuploading) and parallel (parallel encodings) models for the training. Note that in the parallel model, we have used the StronglyEntanglingLayers in PennyLane with 3 user-defined layers. In each case, we start by randomly initializing a quantum model with 300 sample points to fit the target function
\[f(x)=\sum_{n=-k}^{n=k}c_{n}e^{-inx}. \tag{5.12}\]
where the complex-valued Fourier coefficients are calculated from the real coefficients in (5.6). We have chosen \(k=4\) with prescribed physical parameters in the single- and two-interval examples. Therefore, we will need \(r\) in the serial and parallel models to be larger than \(k=4\). We have executed multiple trials for each case, and we include the most successful results, with maximum relative errors controlled within \(\lesssim 3\%\), in Figures 20\(\sim\)23.

Figure 19: Gibbs phenomenon for the Fourier series near the end point for \(w\to 1\). We take the single interval example, where the yellow curve represents the generating function as a Taylor series, and the blue curve is the Fourier series approximation of the generating function.
Figure 20: A random serial quantum model trained with data samples to fit the target function of the single interval case. Top: the MSE loss function as a function of epochs, where the minimum loss is achieved at epoch 982. Bottom left: a random initialization of the serial quantum model with \(r=6\) sequential repetitions of Pauli encoding gates. Bottom right: the circles represent the 300 data samples of the single interval Fourier series with \(\ell=2\) and \(\epsilon=0.1\) for (3.2). The red curve represents the quantum model after training.
Figure 21: A random parallel quantum model for the single interval case. Top: the loss function achieves minimum loss at epoch 917. Bottom: a random initialization of the quantum model with \(r=5\) parallel repetitions of Pauli encoding gates that has achieved a good fit.
Figure 22: A random serial quantum model trained with data samples to fit the target function of the two-interval system with a small cross-ratio. Top: the loss function achieves minimum loss at epoch 968. Bottom left: a random initialization of the serial quantum model of \(r=6\) sequential repetitions of Pauli encoding gates. Bottom right: the circles represent the 300 data samples of the two-interval Fourier series with \(x=0.05\), \(\alpha=0.1\), and \(\epsilon=0.1\) for (3.15). The red curve represents the quantum model after training.
As observed from Figures 20\(\sim\)23, a rescaling of the data is necessary to achieve precise matching between the quantum models and the Fourier spectrum of our examples. This rescaling is possible because the global phase is unobservable [30], which introduces an ambiguity in the data-encoding. Consider our quantum model
\[f_{\theta}(x)=\langle 0|U^{\dagger}(x,\theta)MU(x,\theta)|0\rangle=\sum_{\omega \in\Omega}c_{\omega}(\theta)e^{i\omega x}, \tag{5.13}\]
Figure 23: A random parallel quantum model for the two-interval case. Top: the loss function achieves minimum loss at epoch 818. Bottom: a random initialization of the quantum model with \(r=5\) parallel repetitions of Pauli encoding gates that has achieved a good fit.
where we consider the case of a single qubit \(L=1\), then
\[U(x)=W^{(2)}g(x)W^{(1)}. \tag{5.14}\]
Note that the frequency spectrum \(\Omega\) is determined by the eigenvalues of the data-encoding Hamiltonians, which is given by the operator
\[g(x)=e^{-ixH}. \tag{5.15}\]
\(H\) has two eigenvalues \((\lambda_{1},\lambda_{2})\), but we can rescale the energy spectrum to \((-\gamma,\gamma)\) as the global phase is unobservable (e.g. for Pauli rotations, we have \(\gamma=\frac{1}{2}\)). We can absorb \(\gamma\) from the eigenvalues of \(H\) into the data input by re-scaling with
\[\tilde{x}=\gamma x. \tag{5.16}\]
Therefore, we can assume the eigenvalues of \(H\) to be some other values. Specifically, we have chosen \(\gamma=6\) in the training, where the interval in \(x\) is stretched from \([0,1]\) to \([0,6]\), as can be seen in Figures 20\(\sim\)23.
We should emphasize that we are not re-scaling the original target data, but instead, we are re-scaling how the data is encoded. Effectively, we are re-scaling the frequency of the quantum model itself. The intriguing part is that the global phase shift of the operator acting on a quantum state cannot be observed, yet it affects the expressive power of the quantum model. This can be understood as a pre-processing of the data, which is argued to extend the function classes of the quantum model that can represent [30].
This suggests that one may consider treating the re-scaling parameter \(\gamma\) as a trainable parameter [68]. This would turn the scaling into an adaptive "frequency matching" process, potentially increasing the expressivity of the quantum model. Here we only treat \(\gamma\) as a tunable hyperparameter. The scaling does not need to match with the data, but finding an appropriate scaling parameter is crucial for model training.
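To make the training loop concrete, a schematic PennyLane sketch is given below; serial_model and r refer to the circuit sketched in Section 5.1, x_data and y_data stand for the 300 samples of the target Fourier series, and the number of steps is illustrative.

```python
import pennylane as qml
from pennylane import numpy as pnp

opt = qml.AdamOptimizer(stepsize=0.005)      # the learning rate quoted above
gamma = 6.0                                  # encoding re-scaling, Eq. (5.16)

def cost(weights, xb, yb):
    # MSE between the (re-scaled) quantum model and the target Fourier series
    preds = pnp.stack([serial_model(gamma * x, weights) for x in xb])
    return pnp.mean((preds - yb) ** 2)

weights = 2 * pnp.pi * pnp.random.random((r + 1, 3), requires_grad=True)
for step in range(1000):
    batch = pnp.random.randint(0, len(x_data), (100,))   # batch size 100
    weights = opt.step(lambda w: cost(w, x_data[batch], y_data[batch]), weights)
```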
### Recovering the von Neumann entropy
So far, we have managed to rewrite the generating function into a partial Fourier series \(f_{N}(w)\) of degree \(N\), defined on the interval \(w\in[-1,1]\). By leveraging variational quantum circuits, we have been able to reproduce the Fourier coefficients of the series accurately. In principle, with appropriate data-encoding and re-scaling strategies, increasing the depth or width of the quantum models would enable us to capture the series to any arbitrary degree \(N\). Thus, the expressivity of the Renyi entropies can be
established in terms of quantum models. However, a crucial problem remains: we need to recover the von Neumann entropy in the limit \(w\to 1\)
\[\lim_{w\to 1}G(w;\rho_{A})=S(\rho_{A}), \tag{5.17}\]
where the limiting point is exactly at the boundary of the interval that we are approximating. However, as we can see clearly from Figure 24, taking such a limit naively gives a very inaccurate value compared to the true von Neumann entropy. This effect does not diminish even when increasing \(N\) to achieve a better approximation of the series relative to its Taylor series form, as shown in Figure 24. This is because the Fourier series approximation is always oscillatory at the endpoints, a general feature known as the _Gibbs phenomenon_ for Fourier series approximating discontinuous or non-periodic functions.
_A priori_, a partial Fourier series of a function \(f(x)\) is a very accurate way to reconstruct the point values of \(f(x)\), as long as \(f(x)\) is smooth and periodic. Furthermore, if \(f(x)\) is analytic and periodic, then the partial Fourier series \(f_{N}\) would converge to \(f(x)\) exponentially fast with increasing \(N\). However, \(f_{N}(x)\) is in general not an accurate approximation of \(f(x)\) if \(f(x)\) is either discontinuous or non-periodic. Not only is the convergence slow, but there is also an overshoot near the boundary of the interval. There are many different ways to understand this phenomenon. Broadly speaking, the difficulty lies in the fact that we are trying to obtain accurate local information from the global properties of the Fourier coefficients defined via an integral over the interval, which seems to be inherently impossible.
Figure 24: We have plotted the single interval example with \(\ell=2\) and \(\epsilon=0.1\) for (3.2). Here the legends \(G_{N}\) refer to the Fourier series of the generating function to degree \(N\), by summing up to \(m=10\) in (5.6). \(G_{\text{Taylor}}\) refers to the Taylor series form (2.9) of the generating function by summing up to \(k=100\).
Mathematically, the occurrence of the Gibbs phenomenon can be easily understood in terms of the oscillatory nature of the Dirichlet kernel, which arises when the Fourier series is written as a convolution. Explicitly, the Fourier partial sum can be written as
\[s_{n}(x)=\frac{1}{\pi}\int_{-\pi}^{\pi}f(\xi)D_{n}(\xi-x)d\xi, \tag{5.18}\]
where the Dirichlet kernel \(D_{n}(x)\) is given by
\[D_{n}(x)=\frac{\sin{(n+\frac{1}{2})x}}{2\sin{\frac{x}{2}}}. \tag{5.19}\]
This function oscillates between positive and negative values. This oscillatory behavior is responsible for the appearance of the Gibbs phenomenon near the jump discontinuities of the Fourier series at the boundary.
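As a small illustration (our own sketch), the kernel can be evaluated directly; the alternating signs across the interval are the oscillations responsible for the overshoot.

```python
import numpy as np

def dirichlet_kernel(n, x):
    # D_n(x) = sin((n + 1/2) x) / (2 sin(x/2)); the x -> 0 limit equals n + 1/2
    x = np.asarray(x, dtype=float)
    denom = 2.0 * np.sin(x / 2.0)
    safe = np.where(np.isclose(denom, 0.0), 1.0, denom)
    return np.where(np.isclose(denom, 0.0), n + 0.5, np.sin((n + 0.5) * x) / safe)

print(dirichlet_kernel(10, np.linspace(-np.pi, np.pi, 9)))  # alternating signs
```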
Therefore, our problem can be accurately framed as follows: given the \(2N+1\) Fourier coefficients \(\hat{f}_{k}\), \(-N\leq k\leq N\), of our generating function (5.6), with the generating function defined on the interval \(w\in[-1,1]\), we need to reconstruct the point value of the function in the limit \(w\to 1\). The point value of the generating function in this limit exactly corresponds to the von Neumann entropy. In particular, we need the reconstruction to converge exponentially fast with \(N\) to the correct point value of the generating function, that is
\[\lim_{w\to 1}|G(w;\rho_{A})-f_{N}(w)|\leq e^{-\alpha N},\quad\alpha>0. \tag{5.20}\]
This requirement is motivated by realistic applications of the quantum model, where the degree \(N\) of the partial Fourier series that we can approximate is currently limited by the depth or the width of the quantum circuits.
We are in need of an operation that can diminish the oscillations, or better yet, remove them completely. Several filtering methods have been developed to ameliorate the oscillations, including the non-negative and decaying Fejér kernel, which smooths out the Fourier series over the entire interval, and the Lanczos \(\sigma\) factor, which locally reduces the oscillations near the boundary. For a comprehensive discussion of the Gibbs phenomenon and these filtering methods, see [72]. However, we emphasize that none of these methods is satisfactory, as they still cannot recover accurate point values of the function \(f(x)\) near the boundary.
Therefore, we need a more effective method to remove the Gibbs phenomenon completely. Here we will adopt a powerful method by re-expanding the partial Fourier
series into a basis of Gegenbauer polynomials.5
Footnote 5: Note that other methods exist based on periodically extending the function to give an accurate representation within the domain of interest, which involves reconstructing the function based on Chebyshev polynomials [73]. However, we do not explore this method in this work.
This method was developed in the 1990s in a series of seminal works [74; 75; 76; 77; 78; 79]; we also refer to [80; 81] for more recent reviews.
The Gegenbauer expansion method allows for an accurate representation, within exponential accuracy, by summing only a few terms built from the Fourier coefficients. Consider an analytic and non-periodic function \(f(x)\) on the interval \([-1,1]\) (or a sub-interval \([a,b]\subset[-1,1]\)) with the Fourier coefficients
\[\hat{f}_{k}=\frac{1}{2}\int_{-1}^{1}f(x)e^{-ik\pi x}dx, \tag{5.21}\]
and the partial Fourier series
\[f_{N}(x)=\sum_{k=-N}^{N}\hat{f}_{k}e^{ik\pi x}. \tag{5.22}\]
The following Gegenbauer expansion then approximates the original function using only this Fourier information:
\[S_{N,M}(x)=\sum_{n=0}^{M}g_{n,N}^{\lambda}C_{n}^{\lambda}(x), \tag{5.23}\]
where the \(g_{n,N}^{\lambda}\) are the Gegenbauer expansion coefficients and the \(C_{n}^{\lambda}(x)\) are the Gegenbauer polynomials.6
Footnote 6: The Gegenbauer expansion coefficients \(g_{n,N}^{\lambda}\) are defined with the partial Fourier series \(f_{N}(x)\) as
\[g_{n,N}^{\lambda}=\frac{1}{h_{n}^{\lambda}}\int_{-1}^{1}(1-x^{2})^{\lambda- \frac{1}{2}}f_{N}(x)C_{n}^{\lambda}(x)dx,\quad 0\leq n\leq M. \tag{5.24}\]
For \(\lambda\geq 0\), the Gegenbauer polynomial of degree \(n\) is defined to satisfy \[\int_{-1}^{1}(1-x^{2})^{\lambda-\frac{1}{2}}C_{k}^{\lambda}(x)C_{n}^{\lambda} (x)dx=0,\quad k\neq n.\] (5.25)
We refer to Appendix B for a more detailed account of the properties of the Gegenbauer expansion. Note that we have the following integral formula for computing \(g_{n,N}^{\lambda}\)
\[\frac{1}{h_{n}^{\lambda}}\int_{-1}^{1}(1-x^{2})^{\lambda-\frac{1}{2}}e^{ik\pi x}C_{n}^{\lambda}(x)dx=\Gamma(\lambda)\bigg{(}\frac{2}{\pi k}\bigg{)}^{\lambda}i^{n}(n+\lambda)J_{n+\lambda}(\pi k), \tag{5.26}\]
then
\[g_{n,N}^{\lambda}=\delta_{0,n}\hat{f}(0)+\Gamma(\lambda)i^{n}(n+\lambda)\sum_ {k=-N,k\neq 0}^{N}J_{n+\lambda}(\pi k)\bigg{(}\frac{2}{\pi k}\bigg{)}^{ \lambda}\hat{f}_{k}, \tag{5.27}\]
where we only need the Fourier coefficients \(\hat{f}_{k}\).
In fact, the Gegenbauer expansion is a two-parameter family of functions, characterized by \(\lambda\) and \(M\). It has been shown that by setting \(\lambda=M=\beta\epsilon N\) where \(\epsilon=(b-a)/2\) and \(\beta<\frac{2\pi e}{27}\) for the Fourier case, the expansion can achieve exponential accuracy with \(N\). Note that \(M\) will determine the degrees of the Gegenbauer polynomials, and as such, we should allow the degrees of the original Fourier series to grow with \(M\). For a clear demonstration of how the Gegenbauer expansion approaches the generating function from the Fourier data, see Figure 25. We will eventually be able to reconstruct the point value of the von Neumann entropy near \(w\to 1\) with increasing order in the expansion. A more precise statement regarding the exponential accuracy can be found in Appendix B. This method is indeed a process of reconstructing local information from global information with exponential accuracy, thereby effectively removing the Gibbs phenomenon.
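For reference, the sketch below is a minimal NumPy/SciPy implementation of this re-expansion. For transparency we compute the coefficients \(g_{n,N}^{\lambda}\) by direct quadrature of (5.24) rather than through the Bessel-function formula (5.27), and we use \(f(x)=x\) with \(\beta\epsilon=0.25\) as an illustrative test case; near \(x=1\) the reconstruction recovers the point value far more accurately than the raw partial Fourier series.

```python
import numpy as np
from scipy.special import eval_gegenbauer, gamma as Gamma

def gegenbauer_reconstruct(fN, lam, M, x_eval, Q=4000):
    # re-expand the partial Fourier series fN (a callable on [-1, 1]) in
    # Gegenbauer polynomials C_n^lam, n = 0..M, with coefficients from (5.24)
    x = np.linspace(-1 + 1e-9, 1 - 1e-9, Q)   # open grid; the endpoint weight is integrable
    wgt = (1.0 - x**2) ** (lam - 0.5)
    fx = fN(x)
    total = 0.0
    for n in range(M + 1):
        Cn1 = Gamma(n + 2 * lam) / (Gamma(n + 1) * Gamma(2 * lam))      # C_n^lam(1)
        hn = np.sqrt(np.pi) * Cn1 * Gamma(lam + 0.5) / (Gamma(lam) * (n + lam))
        g = np.trapz(wgt * fx * eval_gegenbauer(n, lam, x), x) / hn
        total += g * eval_gegenbauer(n, lam, x_eval)
    return total

# partial Fourier series of f(x) = x with N = 32 modes
N = 32
t = np.linspace(-1.0, 1.0, 20001)
fhat = {k: 0.5 * np.trapz(t * np.exp(-1j * k * np.pi * t), t) for k in range(-N, N + 1)}
fN = lambda x: np.real(sum(c * np.exp(1j * k * np.pi * np.asarray(x))
                           for k, c in fhat.items()))

lam = M = int(0.25 * N)   # lambda = M = beta * eps * N with beta * eps = 0.25
print(fN(0.999), gegenbauer_reconstruct(fN, lam, M, 0.999))   # exact value: 0.999
```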
## 6 Discussion
In this paper, we have considered a novel approach of using classical and quantum neural networks to study the analytic continuation of von Neumann entropy from Renyi entropies. We approach the analytic continuation problem in a way suitable to deep learning techniques by rewriting \(\operatorname{Tr}\rho_{A}^{n}\) in the Renyi entropies in terms of a generating function that manifests as a Taylor series (2.9). We show that our deep learning models achieve this goal with a limited number of Renyi entropies.
Figure 25: Gegenbauer expansion constructed from the Fourier information. Here \(S_{M}\) refers to the Gegenbauer polynomials of order \(M\). Note that we set \(\beta\epsilon=0.25\), then \(\lambda=M=0.25N\). Therefore, in order to construct the polynomials of order \(M\), we need the information of the Fourier coefficients to order \(N=4M\).
Instead of using a static model design for the classical neural networks, we adopt KerasTuner to find the optimal model architecture and hyperparameters. There are two supervised learning scenarios: predicting the von Neumann entropy given the knowledge of Renyi entropies using densely connected neural networks, and treating the prediction of higher Renyi entropies as a sequence learning problem using RNNs. In both cases, we have achieved high accuracy in predicting the corresponding targets.
For the quantum neural networks, we frame a similar supervised learning problem as a mapping from inputs to predictions. This allows us to investigate the expressive power of quantum neural networks as function approximators, particularly for the von Neumann entropy. We study quantum models that can explicitly realize the generating function as a partial Fourier series. However, the Gibbs overshooting hinders the recovery of an accurate point value for the von Neumann entropy. To resolve this issue, we re-expand the series in terms of Gegenbauer polynomials, which leads to exponential convergence and improved accuracy.
Several relevant issues and potential improvements arise from our approach:
* It is crucial to choose appropriate architectures before employing KerasTuner (for instance, densely connected layers in Sec. 3 and RNNs in Sec. 4), because these architectures are built for certain tasks _a priori_. KerasTuner only serves as an effective method to determine the optimal complexity and hyperparameters for model training. However, since the examples from CFT\({}_{2}\) have different analytic structures for both the von Neumann and Renyi entropies, it would be interesting to explore how the different hyperparameters correlate with each example.
* Despite being efficient, the parameter spaces we sketched in Sec. 3.1 and Sec. 4.1, which KerasTuner searches, are not guaranteed to contain the optimal setting, and there could be better approaches.
* We can generate datasets by fixing different physical parameters, such as temperature for (3.6) or cross-ratio \(x\) for (3.15). While we have considered the natural parameters to vary, exploring different parameters may offer more representational power. It is possible to find a Dense model that provides feasible predictions in all parameter ranges, but may require an ensemble of models.
* Regularization methods, such as K-fold cross-validation, can potentially reduce the model size or datasets while maintaining the same performance. It would be valuable to determine the minimum datasets required or whether models with low complexity still have the same representational power for learning entanglement entropy.
* On the other hand, training the model with more data and resources is the most effective approach to improve the model's performance. One can also scale up the search process in the KerasTuner or use ensemble methods to combine the models found by it.
* For the quantum neural networks, note that our approach does not guarantee convergence to the correct Fourier coefficients, as we outlined in Sec. 5.1. It may be beneficial to investigate various pre-processing or data-encoding strategies to improve the approximation of the partial Fourier series with a high degree \(r\).
There are also future directions that are worth exploring that we shall comment on briefly:
* **Mutual information:** We can extend our study to mutual information for two disjoint intervals \(A\) and \(B\), which is an entanglement measure related to the von Neumann entropy, defined as \[I(A:B)\equiv S(\rho_{A})+S(\rho_{B})-S(\rho_{A\cup B}).\] (6.1) In particular, there is a conjectured form of the generating function in [8], with \(\operatorname{Tr}\rho_{A}^{n}\) being replaced by \(\operatorname{Tr}\rho_{A}^{n}\operatorname{Tr}\rho_{B}^{n}/\operatorname{Tr}\rho_{A\cup B}^{n}\). It is worth exploring the expressivity of classical and quantum neural networks using this generating function, particularly as mutual information allows one to eliminate the UV divergence and can be compared with realistic simulations, such as spin-chain models [82].
* **Self-supervised learning for higher Renyi entropies:** Although we have shown that RNN architecture is effective in the sequence learning problem in Sec. 4, it is worth considering other architectures that could potentially offer better performance. For instance, a time-delay neural network, depthwise separable convolutional neural network, or a Transformer may be appropriate for certain types of data. These architectures may be worth exploring in extending the task of extracting higher Renyi entropies as self-supervised learning, particularly for examples where analytic continuation is not available.
* **Other entanglement measures from analytic continuation:** There are other important entanglement measures, say, relative entropy or entanglement negativity that may require analytic continuation and can be studied numerically based on neural networks. We may also consider entanglement entropy or entanglement spectrum that can be simulated in specific models stemming from condensed matter or holographic systems.
* **Expressivity of classical and quantum neural networks:** We have studied the expressivity of classical and quantum neural networks for the von Neumann and Renyi entropies, with the generating function as the medium. This may help us in designing good generating functions for other entanglement measures suitable for neural networks. It is also worth understanding whether other entanglement measures are also in the function classes that the quantum neural networks can realize.
## Acknowledgments
We thank Xi Dong for his encouragement of this work. C-H.W. was supported in part by the U.S. Department of Energy under Grant No. DE-SC0023275, and the Ministry of Education, Taiwan. This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-19-1-0360.
## Appendix A Fourier series representation of the generating function
Suppose there is a Fourier series representation of the generating function from (2.4)
\[G(z;\rho_{A})=\sum_{n=-\infty}^{\infty}c_{n}e^{inz}.\] (A.1)
The idea is that we want to compute the Fourier coefficients given only the information about \(G(z;\rho)\) or \(\operatorname{Tr}\rho_{A}^{n}\). We can compute the complex-valued Fourier coefficients \(c_{n}\) using real-valued coefficients \(a_{n}\) and \(b_{n}\) for a general period \(T\) where
\[G(z;\rho_{A})=\frac{a_{0}}{2}+\sum_{n=1}^{\infty}a_{n}\cos\bigg{(}\frac{2\pi nz }{T}\bigg{)}+b_{n}\sin\bigg{(}\frac{2\pi nz}{T}\bigg{)}.\] (A.2)
Note that
\[a_{n} =\frac{2}{T}\int_{z_{1}}^{z_{2}}G(z;\rho_{A})\cos\bigg{(}\frac{2 \pi nz}{T}\bigg{)}dz,\] (A.3) \[b_{n} =\frac{2}{T}\int_{z_{1}}^{z_{2}}G(z;\rho_{A})\sin\bigg{(}\frac{2 \pi nz}{T}\bigg{)}dz,\] (A.4)
where we only need to compute the two Fourier coefficients using the generating function of \(\operatorname{Tr}\rho_{A}^{n}\). However, the above integrals are hard to evaluate in general. Instead, we will show that both \(a_{n}\) and \(b_{n}\) can be written as the following series
\[a_{n}=\sum_{m=0}^{\infty}\frac{G(0;\rho)^{(m)}}{m!}C_{cos}(n,m),\] (A.5)
\[b_{n}=\sum_{m=0}^{\infty}\frac{G(0;\rho)^{(m)}}{m!}C_{sin}(n,m). \tag{A.6}\]
where \(C_{cos}(n,m)\) and \(C_{sin}(n,m)\) involve certain special functions. The definition of \(G(0;\rho_{A})^{(m)}\) starts from the following form of the generating function in terms of \(w\) from (2.6)
\[G(w;\rho_{A})=-\operatorname{Tr}\left(\rho_{A}\ln\left[1-w(1-\rho_{A})\right]\right), \tag{A.7}\]
where the \(m\)-th derivative evaluated at \(w\to 0\) is
\[G(0;\rho_{A})^{(m)} = -\operatorname{Tr}[(-1)^{m+1}(m-1)!\rho_{A}(\rho_{A}-1)^{m}] \tag{A.8}\] \[= -(m-1)!\sum_{k=0}^{m}\frac{(-1)^{2m-k+1}m!}{k!(m-k)!}\operatorname {Tr}\left(\rho_{A}^{k+1}\right).\]
Note that we must separately define the \(m=0\) term such that
\[G(0;\rho_{A})^{(0)}=-\operatorname{Tr}(\rho_{A}\ln 1)=0. \tag{A.9}\]
Then we have the Fourier series representation of the generating function on an interval \([w_{1},w_{2}]\) with period \(T=w_{2}-w_{1}\) given by
\[G(w;\rho_{A})=\frac{a_{0}}{2} + \sum_{n=1}^{\infty}\bigg{\{}\sum_{m=0}^{\infty}\frac{\tilde{f}(m )}{m}C_{cos}(n,m)\cos\left(\frac{2\pi nw}{T}\right) \tag{A.10}\] \[+ \sum_{m=0}^{\infty}\frac{\tilde{f}(m)}{m}C_{sin}(n,m)\sin\left( \frac{2\pi nw}{T}\right)\bigg{\}},\]
where we have defined
\[\tilde{f}(m)\equiv-\sum_{k=0}^{m}\frac{(-1)^{2m-k+1}m!}{k!(m-k)!}\operatorname {Tr}\left(\rho_{A}^{k+1}\right), \tag{A.11}\]
where \(\operatorname{Tr}\rho_{A}^{k+1}\) now appears explicitly in the expression.
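As a quick sanity check of these formulas, the following sketch (our own, for a toy three-dimensional spectrum of \(\rho_{A}\)) builds \(\tilde{f}(m)\) from the moments \(\operatorname{Tr}\rho_{A}^{k+1}\) and verifies that the resulting Taylor series \(\sum_{m\geq 1}\tilde{f}(m)w^{m}/m\) reproduces \(G(w;\rho_{A})\) from (A.7).

```python
import numpy as np
from math import comb

lam = np.array([0.6, 0.3, 0.1])                              # toy spectrum of rho_A, trace one
moments = {k: float(np.sum(lam**k)) for k in range(1, 80)}   # Tr rho_A^k

def f_tilde(m):
    # (A.11): tilde f(m) = -sum_k (-1)^(2m-k+1) * m! / (k! (m-k)!) * Tr rho_A^(k+1)
    return -sum((-1) ** (2 * m - k + 1) * comb(m, k) * moments[k + 1]
                for k in range(m + 1))

def G_series(w, mmax=60):
    # G(w) = sum_{m>=1} [G^(m)(0) / m!] w^m, with G^(m)(0) = (m-1)! * tilde f(m)
    return sum(f_tilde(m) / m * w**m for m in range(1, mmax + 1))

w = 0.9
exact = -np.sum(lam * np.log(1.0 - w * (1.0 - lam)))  # (A.7) evaluated directly
print(G_series(w), exact)                             # the two values agree
print(-np.sum(lam * np.log(lam)))                     # von Neumann entropy, the w -> 1 limit
```

Consistent with the discussion in Section 5, the series converges increasingly slowly as \(w\to 1\).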
Now we need to work out \(C_{cos}(n,m)\) and \(C_{sin}(n,m)\). First, let us consider in general
\[a_{n}=\frac{2}{T}\int_{t_{1}}^{t_{2}}f(t)\cos\bigg{(}\frac{2\pi nt}{T}\bigg{)}dt, \tag{A.12}\]
where we have written \(G(w;\rho_{A})\) as \(f(t)\) for simplicity. We can write down the Taylor series of both pieces
\[f(t)=\sum_{j=0}^{\infty}\frac{f^{(j)}(0)}{j!}t^{j},\quad\cos\bigg{(}\frac{2 \pi nt}{T}\bigg{)}=\sum_{k=0}^{\infty}\frac{(-1)^{k}}{(2k)!}\bigg{(}\frac{2 \pi nt}{T}\bigg{)}^{2k}. \tag{A.13}\]
Consider the following function
\[T_{cos}(t)\equiv f(t)\cos\left(\frac{2\pi nt}{T}\right)=\bigg{[}\sum_{j=0}^{ \infty}\frac{f^{(j)}(0)}{j!}t^{j}\bigg{]}\bigg{[}\sum_{k=0}^{\infty}\frac{(-1)^ {k}}{(2k)!}\bigg{(}\frac{2\pi nt}{T}\bigg{)}^{2k}\bigg{]},\] (A.14)
then let us collect the terms in powers of \(t\)
\[T_{cos}(t)=f(0)+f^{(1)}(0)t + \bigg{(}\frac{1}{2}f^{(2)}(0)-2f(0)\bigg{(}\frac{\pi n}{T}\bigg{)} ^{2}\bigg{)}t^{2}\] (A.15) \[+ \bigg{(}\frac{1}{6}f^{(3)}(0)-2f^{(1)}(0)\bigg{(}\frac{\pi n}{T} \bigg{)}^{2}\bigg{)}t^{3}\] \[+ \bigg{(}\frac{1}{24}f^{(4)}(0)-f^{(2)}(0)\bigg{(}\frac{\pi n}{T} \bigg{)}^{2}+\frac{2}{3}f(0)\bigg{(}\frac{\pi n}{T}\bigg{)}^{4}\bigg{)}t^{4}\] \[+ \cdots,\]
then the integral becomes
\[\int_{t_{1}}^{t_{2}}T_{cos}(t)dt=f(0)(t_{2}-t_{1}) + \frac{1}{2}f^{(1)}(0)(t_{2}^{2}-t_{1}^{2})\] (A.16) \[+ \frac{1}{3}\bigg{(}\frac{1}{2}f^{(2)}(0)-2f(0)\bigg{(}\frac{\pi n }{T}\bigg{)}^{2}\bigg{)}(t_{2}^{3}-t_{1}^{3})\] \[+ \frac{1}{4}\bigg{(}\frac{1}{6}f^{(3)}(0)-2f^{(1)}(0)\bigg{(} \frac{\pi n}{T}\bigg{)}^{2}\bigg{)}(t_{2}^{4}-t_{1}^{4})\] \[+ \frac{1}{5}\bigg{(}\frac{1}{24}f^{(4)}(0)-f^{(2)}(0)\bigg{(}\frac{ \pi n}{T}\bigg{)}^{2}+\frac{2}{3}f(0)\bigg{(}\frac{\pi n}{T}\bigg{)}^{4}\bigg{)} (t_{2}^{5}-t_{1}^{5})\] \[+ \cdots.\]
Now we want to re-order this expression, collecting terms by \(f^{(m)}(0)\)
\[\int_{t_{1}}^{t_{2}}T_{cos}(t)dt = f(0)\bigg{(}(t_{2}-t_{1})-\frac{2}{3}\bigg{(}\frac{\pi n}{T} \bigg{)}^{2}(t_{2}^{3}-t_{1}^{3})+\frac{2}{15}\bigg{(}\frac{\pi n}{T}\bigg{)} ^{4}(t_{2}^{5}-t_{1}^{5})+\cdots\bigg{)}\] (A.17) \[+ f^{(1)}(0)\bigg{(}\frac{1}{2}(t_{2}^{2}-t_{1}^{2})-\frac{1}{2} \bigg{(}\frac{\pi n}{T}\bigg{)}^{2}(t_{2}^{4}-t_{1}^{4})+\cdots\bigg{)}\] \[+ f^{(2)}(0)\bigg{(}\frac{1}{6}(t_{2}^{3}-t_{1}^{3})+\cdots\bigg{)} +\cdots.\]
After multiplying by a factor of \(2/T\), this can be written as
\[a_{n}=\frac{2}{T}\int_{t_{1}}^{t_{2}}T_{cos}(t)dt=\sum_{m=0}^{\infty}\frac{f^{( m)}(0)}{m!}C_{cos}(n,m),\] (A.18)
where
\[C_{cos}(n,m) = \sum_{p=0}^{\infty}\bigg{[}\frac{(-1)^{p}2^{(2p+1)}n^{2p}\pi^{2p}(t _{2}^{(2p+m+1)}-t_{1}^{(2p+m+1)})}{(2p+m+1)(2p)!T^{2p+1}}\bigg{]}\] (A.19) \[= \frac{2}{(m+1)T}\bigg{[}{}_{p}F_{q}\bigg{(}\frac{m+1}{2};\frac{1} {2},\frac{m+3}{2};-\frac{n^{2}\pi^{2}t_{2}^{2}}{T^{2}}\bigg{)}t_{2}^{m+1}\] \[-{}_{p}F_{q}\bigg{(}\frac{m+1}{2};\frac{1}{2},\frac{m+3}{2};- \frac{n^{2}\pi^{2}t_{1}^{2}}{T^{2}}\bigg{)}t_{1}^{m+1}\bigg{]}.\]
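Since the series (A.19) is just the expansion of \((2/T)\int_{t_{1}}^{t_{2}}t^{m}\cos(2\pi nt/T)\,dt\), it is straightforward to check (A.18) numerically. In the sketch below (our own, with the illustrative choice \(f(t)=e^{t}\) on \([0,1]\), so that \(f^{(m)}(0)=1\)) we evaluate \(C_{cos}(n,m)\) by quadrature and compare the resummed series with the directly computed Fourier coefficient \(a_{n}\).

```python
import numpy as np
from math import factorial

def C_cos(n, m, t1, t2, T, Q=20001):
    # C_cos(n, m) = (2/T) * int_{t1}^{t2} t^m cos(2 pi n t / T) dt,
    # evaluated by quadrature instead of the hypergeometric closed form (A.19)
    t = np.linspace(t1, t2, Q)
    return (2.0 / T) * np.trapz(t**m * np.cos(2.0 * np.pi * n * t / T), t)

t1, t2, T, n = 0.0, 1.0, 1.0, 3
t = np.linspace(t1, t2, 20001)
a_n_direct = (2.0 / T) * np.trapz(np.exp(t) * np.cos(2.0 * np.pi * n * t / T), t)
a_n_series = sum(C_cos(n, m, t1, t2, T) / factorial(m) for m in range(30))  # f^(m)(0) = 1
print(a_n_direct, a_n_series)   # the two values agree
```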
Next, we consider the case for \(C_{sin}(n,m)\), where we need to work out
\[b_{n}=\frac{2}{T}\int_{t_{1}}^{t_{2}}f(t)\sin\bigg{(}\frac{2\pi nt}{T}\bigg{)}dt,\] (A.20)
again, we know
\[\sin\bigg{(}\frac{2\pi nt}{T}\bigg{)}=\sum_{k=0}^{\infty}\frac{(-1)^{k}}{(2k+ 1)!}\bigg{(}\frac{2\pi nt}{T}\bigg{)}^{(2k+1)},\] (A.21)
then we define
\[T_{sin}(t)\equiv f(t)\sin\bigg{(}\frac{2\pi nt}{T}\bigg{)}=\bigg{[}\sum_{j=0}^ {\infty}\frac{f^{(j)}(0)}{j!}t^{j}\bigg{]}\bigg{[}\sum_{k=0}^{\infty}\frac{(-1 )^{k}}{(2k+1)!}\bigg{(}\frac{2\pi nt}{T}\bigg{)}^{2k+1}\bigg{]},\] (A.22)
with the only differences being that the denominator \((2k)!\) becomes \((2k+1)!\) and that the power of \(\frac{2\pi nt}{T}\) becomes \(2k+1\). Then
\[C_{sin}(n,m) = \sum_{p=0}^{\infty}\bigg{[}\frac{(-1)^{p}2^{(2p+2)}n^{2p+1}\pi^{ 2p+1}(t_{2}^{(2p+m+2)}-t_{1}^{(2p+m+2)})}{(2p+m+2)(2p+1)!T^{2p+2}}\bigg{]}\] (A.23) \[= \frac{4n\pi}{(m+2)T^{2}}\bigg{[}{}_{p}F_{q}\bigg{(}\frac{m+2}{2} ;\frac{3}{2},\frac{m+4}{2};-\frac{n^{2}\pi^{2}t_{2}^{2}}{T^{2}}\bigg{)}t_{2}^ {m+2}\] \[-{}_{p}F_{q}\bigg{(}\frac{m+2}{2};\frac{3}{2},\frac{m+4}{2};- \frac{n^{2}\pi^{2}t_{1}^{2}}{T^{2}}\bigg{)}t_{1}^{m+2}\bigg{]}.\]
## Appendix B The Gegenbauer polynomials and the Gibbs phenomenon
In this appendix, we briefly discuss the definition and properties of the Gegenbauer polynomials used to remove the Gibbs phenomenon in Section 5.4.
The Gegenbauer polynomials \(C_{n}^{\lambda}(x)\) of degree \(n\), for \(\lambda\geq 0\), are defined by the orthogonality relation
\[\int_{-1}^{1}(1-x^{2})^{\lambda-\frac{1}{2}}C_{k}^{\lambda}(x)C_{n}^{\lambda} (x)dx=0,\quad k\neq n.\] (B.1)
with the following normalization
\[C_{n}^{\lambda}(1)=\frac{\Gamma(n+2\lambda)}{n!\Gamma(2\lambda)}. \tag{B.2}\]
Note that the polynomials are not orthonormal; the norm of \(C_{n}^{\lambda}(x)\) is
\[\int_{-1}^{1}(1-x^{2})^{\lambda-\frac{1}{2}}(C_{n}^{\lambda}(x))^{2}dx=h_{n}^{ \lambda}, \tag{B.3}\]
where
\[h_{n}^{\lambda}=\pi^{\frac{1}{2}}C_{n}^{\lambda}(1)\frac{\Gamma(\lambda+\frac {1}{2})}{\Gamma(\lambda)(n+\lambda)}. \tag{B.4}\]
Given a function \(f(x)\) defined on the interval \([-1,1]\) (or a sub-interval \([a,b]\subset[-1,1]\)), the corresponding Gegenbauer coefficients \(\hat{f}^{\lambda}(l)\) are given by
\[\hat{f}^{\lambda}(l)=\frac{1}{h_{l}^{\lambda}}\int_{-1}^{1}(1-x^{2})^{\lambda -\frac{1}{2}}f(x)C_{l}^{\lambda}(x)dx, \tag{B.5}\]
then the truncated Gegenbauer expansion up to the first \(m+1\) terms is
\[f_{m}^{\lambda}(x)=\sum_{l=0}^{m}\hat{f}^{\lambda}(l)C_{l}^{\lambda}(x). \tag{B.6}\]
Here we briefly sketch how the Gegenbauer expansion leads to a resolution of the Gibbs phenomenon, as discussed in Section 5.4. In fact, one can prove that there is exponential convergence between the function \(f(x)\) we want to approximate and its \(m\)-th degree Gegenbauer expansion. We only sketch the idea behind the proof, and we refer the readers to the review in [79] for the details.
One can establish exponential convergence by demonstrating that the error incurred when re-expanding the \(N\)-term Fourier data into Gegenbauer polynomials can be made exponentially small. Let us call \(f_{N}^{m}(x)\) the expansion of \(f_{N}(x)\) into \(m\)-th degree Gegenbauer polynomials and \(f^{m}(x)\) the expansion of \(f(x)\) into \(m\)-th degree Gegenbauer polynomials. Then we have the following relation, where the error in approximating \(f(x)\) by \(f_{N}^{m}(x)\) is bounded by the error between \(f(x)\) and \(f^{m}(x)\) plus the error between \(f^{m}(x)\) and \(f_{N}^{m}(x)\)
\[||f(x)-f_{N}^{m}(x)||\leq||f(x)-f^{m}(x)||+||f^{m}(x)-f_{N}^{m}(x)||. \tag{B.7}\]
On the right-hand side of the inequality, we call the first norm the _regularization error_ and the second norm the _truncation error_. Note that we take the norm to be the maximum norm over the interval \([-1,1]\). To be more precise, we can write the truncation error as
\[||f^{m}-f^{m}_{N}||=\max_{-1\leq x\leq 1}\bigg{|}\sum_{k=0}^{m}(\hat{f}^{\lambda}_ {k}-\hat{g}^{\lambda}_{k})C^{\lambda}_{k}(x)\bigg{|}, \tag{B.8}\]
where we take \(\hat{f}^{\lambda}_{k}\) to be the unknown Gegenbauer coefficients of the function \(f(x)\). If both \(\lambda\) and \(m\) grow linearly with \(N\), this error is shown to be exponentially small. On the other hand, the regularization error can be written as
\[||f-f^{m}||=\max_{-1\leq x\leq 1}\bigg{|}f(x)-\sum_{k=0}^{m}\hat{f}^{\lambda}_{k}C^{ \lambda}_{k}(x)\bigg{|}. \tag{B.9}\]
It can also be shown that this error is exponentially small for \(\lambda=\gamma m\) with a positive constant \(\gamma\). Since both the regularization and truncation errors can be made exponentially small with the prescribed conditions, the Gegenbauer expansion achieves uniform exponential accuracy and removes the Gibbs phenomenon from the Fourier data.
|
2310.05481 | Cabbage Sweeter than Cake? Analysing the Potential of Large Language
Models for Learning Conceptual Spaces | The theory of Conceptual Spaces is an influential cognitive-linguistic
framework for representing the meaning of concepts. Conceptual spaces are
constructed from a set of quality dimensions, which essentially correspond to
primitive perceptual features (e.g. hue or size). These quality dimensions are
usually learned from human judgements, which means that applications of
conceptual spaces tend to be limited to narrow domains (e.g. modelling colour
or taste). Encouraged by recent findings about the ability of Large Language
Models (LLMs) to learn perceptually grounded representations, we explore the
potential of such models for learning conceptual spaces. Our experiments show
that LLMs can indeed be used for learning meaningful representations to some
extent. However, we also find that fine-tuned models of the BERT family are
able to match or even outperform the largest GPT-3 model, despite being 2 to 3
orders of magnitude smaller. | Usashi Chatterjee, Amit Gajbhiye, Steven Schockaert | 2023-10-09T07:41:19Z | http://arxiv.org/abs/2310.05481v1 | Cabbage Sweeter than Cake? Analysing the Potential of Large Language Models for Learning Conceptual Spaces
###### Abstract
The theory of Conceptual Spaces is an influential cognitive-linguistic framework for representing the meaning of concepts. Conceptual spaces are constructed from a set of quality dimensions, which essentially correspond to primitive perceptual features (e.g. hue or size). These quality dimensions are usually learned from human judgements, which means that applications of conceptual spaces tend to be limited to narrow domains (e.g. modelling colour or taste). Encouraged by recent findings about the ability of Large Language Models (LLMs) to learn perceptually grounded representations, we explore the potential of such models for learning conceptual spaces. Our experiments show that LLMs can indeed be used for learning meaningful representations to some extent. However, we also find that fine-tuned models of the BERT family are able to match or even outperform the largest GPT-3 model, despite being 2 to 3 orders of magnitude smaller.1
Footnote 1: Our datasets and evaluation scripts are available at [https://github.com/ExperimentSLLM/EMNLP2023_PotentialOfLLM_LearningConceptualSpace.git](https://github.com/ExperimentSLLM/EMNLP2023_PotentialOfLLM_LearningConceptualSpace.git).
## 1 Introduction
Conceptual spaces (Gardenfors, 2000) represent concepts in terms of cognitively meaningful features, called quality dimensions. For example, a conceptual space of colour is composed of three quality dimensions, representing hue, saturation and intensity. Conceptual spaces provide an elegant framework for explaining various cognitive and linguistic phenomena (Gardenfors, 2014). Within Artificial Intelligence (AI), the role of conceptual spaces is essentially to act as an intermediate representation layer, in between neural and symbolic representations (Gardenfors, 2004). As such, conceptual spaces could play a central role in the development of explainable AI systems. Unfortunately, such representations are difficult to learn from data. Most applications of conceptual spaces are thus limited to narrow domains, where meaningful representations can be learned from ratings provided by human participants (Paradis, 2015; Zwarts, 2015; Chella, 2015).
In this paper, we explore whether Large Language Models (LLMs) could be used for learning conceptual spaces. This research question is closely related to the ongoing debate about the extent to which Language Models (LMs) can learn perceptually grounded representations (Bender and Koller, 2020; Abdou et al., 2021; Patel and Pavlick, 2022; Sogaard, 2023). Recent work seems to suggest this might indeed be possible, at least for the colour domain. For instance, Abdou et al. (2021) found that LMs are able to learn representations of colour terms which are isomorphic to perceptual colour spaces. When it comes to predicting the typical colour of objects, Paik et al. (2021) found that the predictions of LMs are heavily skewed by surface co-occurrence statistics, which are unreliable for colours due to reporting bias (Gordon and Durme, 2013), i.e. the fact that obvious colours are rarely mentioned in text. However, Liu et al. (2022) found the effects of reporting bias to largely disappear in recent LLMs. These findings suggest that it may now be possible to distill meaningful conceptual space representations from LLMs, as long as sufficiently large models are used. However, existing analyses are limited in two ways:
* Several works have explored the colour domain, and visual domains more generally (Li et al., 2023), but little is known about the abilities of LLMs in other perceptual domains.
* Most analyses focus on classifying concepts, e.g. predicting colour terms or the materials from which objects are made, rather than on evaluating the underlying quality dimensions.
We address the first limitation by including an evaluation in the taste domain. To address the second
limitation, rather than considering discrete labels (e.g. _sweet_), we use LLMs to rank concepts according to the degree to which they have a particular feature (e.g. _sweetness_).
## 2 Datasets
The primary focus of our experiments is on the taste domain, which has not yet been considered in this context, despite having a number of important advantages. For instance, the relevant quality dimensions are well-established and have a linear structure (unlike hue in the colour domain). This domain also seems particularly challenging, as the typical terms which are used to describe taste only apply to extreme cases. For instance, we can assert that grapefruit is bitter and bananas are sweet, but it is less clear how a language model would learn whether chicken is sweeter than cheese. As ground truth, we rely on the ratings that were collected by Martin et al. (2014). They rated a total of 590 food items along six dimensions: sweet, salty, sour, bitter, umami and fat. The ratings were obtained by a panel of twelve assessors who were experienced in sensory profiling. They scored the food items during an eight month measurement phase, after having received 55 hours of training in a laboratory. We manually rephrased some of the names of the items in this dataset, to make them more natural. For instance, _cherry_ (_fresh fruit_) was changed to _cherry_ and _hake_ (_grilled with lemon juice_) was changed to _grilled hake with lemon juice_.
We complement our analysis in the taste domain with experiments on three basic physical domains: mass, size and height. These were found to be particularly challenging by Li et al. (2023), with LLMs often failing to outperform random guessing. As the ground truth for mass, we use the household dataset from Standley et al. (2017), which specifies the mass of 56 household objects. The original dataset includes images of each object. We removed 7 items which were not meaningful without the image, namely _big elephant_, _small elephant_, _Ivan's phone_, _Ollie the monkey_, _Marshy the elephant_, _boy doll_ and _Dali Clock_, resulting in a dataset of 49 objects. We treat this problem as a ranking problem. Li et al. (2023) also created a binary classification version of this dataset, which involves judging pairwise comparisons (e.g. is a red lego brick heavier than a hammer?). For size and height, we use the datasets created by Liu et al. (2022). These size and height datasets each consist of 500 pairwise judgements (e.g. an ant is larger than a bird). Note that unlike for the other datasets, no complete ranking is provided.
## 3 Methods
We experiment with a number of different models.
**Ranking with GPT-3.** We use GPT-3 models of four different sizes2: _ada_, _babbage_, _curie_ and _davinci_. To rank items according to a given dimension, we use a prompt that contains the name of that dimension as the final word, e.g. for sweetness we could use "_It is known that [food item] tastes sweet_". We then use the probability of this final word, conditioned on the rest of the prompt, to rank the item: the higher the probability of _sweet_, the more we assume the item to be sweet.
Footnote 2: The exact model sizes have not been made public, but were estimated to be 350M parameters for _ada_, 1.3B parameters for _babbage_, 6.7B parameters for _curie_ and 175B parameters for _davinci_: [https://blog.eleuther.ai/gpt3-model-sizes/](https://blog.eleuther.ai/gpt3-model-sizes/).
**Pairwise Comparisons with GPT-3.** To predict pairwise judgements, we consider two approaches. First, we again use conditional probabilities. For instance, to predict whether _an ant is larger than a bird_, we would get the conditional probability of _large_ in the sentences _an ant is large_ and _a bird is large_. If the conditional probability we get from the first sentence is lower than the probability from the second sentence, we would predict that the claim that _an ant is larger than a bird_ is false. Second, we use a prompt that asserts the statement to be true (e.g. "An ant is larger than a bird") and a prompt that asserts the opposite (e.g. "A bird is larger than an ant"). We compute the perplexity of both statements and predict the version with the lowest perplexity to be the correct one.
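For illustration, the sketch below implements both scoring strategies with the HuggingFace transformers library, using GPT-2 as a freely available stand-in for the GPT-3 models that we query through the OpenAI API; the prompts are those described above, while the helper functions and the choice of stand-in model are our own assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def mean_logprob(text):
    # mean token log-likelihood; exp(-mean_logprob) is the perplexity
    ids = tok(text, return_tensors="pt").input_ids
    return -lm(ids, labels=ids).loss.item()

@torch.no_grad()
def final_word_logprob(prompt, word):
    # log-probability of `word` conditioned on `prompt`
    ids = tok(prompt + " " + word, return_tensors="pt").input_ids
    n_word = len(tok(" " + word).input_ids)
    logprobs = lm(ids).logits[0, :-1].log_softmax(-1)
    targets = ids[0, 1:]
    return logprobs[torch.arange(len(targets)), targets][-n_word:].sum().item()

# ranking: a higher score for the final word "sweet" ranks the item as sweeter
for item in ("chocolate cake", "grilled hake with lemon juice"):
    print(item, final_word_logprob(f"It is known that {item} tastes", "sweet"))

# pairwise judgement: the statement with the lower perplexity is predicted to be true
print(mean_logprob("An ant is larger than a bird"),
      mean_logprob("A bird is larger than an ant"))
```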
**Ranking with ChatGPT and GPT-4.** ChatGPT and GPT-4 are more difficult to use than GPT-3 because the OpenAI API does not allow us to compute conditional probabilities for these models. Instead, to use these conversational models, we directly ask them to rank a set of items, using a prompt such as: _Rank the following items according to their size, from the largest to the smallest_, followed by a list of items to be ranked.
**Baseline: DeBERTa.** We consider two baselines. First, we use a DeBERTa-v3-large model (He et al., 2021), which we fine-tuned to predict the commonsense properties of concepts. To this end, we
used the extended McRae dataset (McRae et al., 2005) introduced by Forbes et al. (2019) and the augmented version of CSLB3 introduced by Misra et al. (2022). Together, these two datasets contain 19,410 positive and 31,901 negative examples of (concept,property) pairs. We fine-tune the model on these examples using the following prompt: _can <concept> be described as <property>? <MASK>_. The probability that the concept has the property is then predicted using a linear classifier that takes the final-layer embedding of the <MASK> token as input. We use the resulting model for the different evaluations, without any further fine-tuning.
Footnote 3: [https://cslb.psychol.cam.ac.uk/propnorms](https://cslb.psychol.cam.ac.uk/propnorms)
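A minimal sketch of this probe is shown below, again using the transformers library. The prompt format and the single-logit linear head on the final-layer [MASK] embedding follow the description above, while details such as variable names and the omitted training loop are illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModel

name = "microsoft/deberta-v3-large"
tok = AutoTokenizer.from_pretrained(name)
encoder = AutoModel.from_pretrained(name)
clf = torch.nn.Linear(encoder.config.hidden_size, 1)   # trained jointly with the encoder

def property_logit(concept, prop):
    text = f"can {concept} be described as {prop}? {tok.mask_token}"
    enc = tok(text, return_tensors="pt")
    mask_pos = (enc.input_ids == tok.mask_token_id).nonzero()[0, 1]
    hidden = encoder(**enc).last_hidden_state[0, mask_pos]  # final-layer [MASK] embedding
    return clf(hidden)   # sigmoid of this logit = probability the concept has the property

# fine-tune encoder + clf on (concept, property, label) pairs with BCEWithLogitsLoss;
# afterwards, concepts can be ranked along a dimension via property_logit(c, "sweet")
```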
**Baseline: Bi-encoder.** As the second baseline, we use two variants of the bi-encoder model from Gajbhiye et al. (2022). First, we use the original BERT-large model from Gajbhiye et al. (2022) that was trained on data from Microsoft Concept Graph (Ji et al., 2019) and GenericsKB (Bhakthavatsalam et al., 2020). However, as these training sets are not specifically focused on commonsense knowledge, we used ChatGPT to construct a dataset of 109K (concept, property) pairs, since no existing dataset of sufficient size and quality was available. The key to obtaining high-quality examples was to ask the model to suggest properties that are shared by several concepts, and to vary the examples that were provided as part of a few-shot prompting strategy. More details on how we collected this dataset using ChatGPT are provided in Appendix A. We then trained the BERT-large bi-encoder on this dataset.
with the model trained on ChatGPT examples outperforming _curie_, and even _davinci_ in two cases.
Table 2 presents some examples of the predictions that were made by _davinci_ for sweetness (using prompt 1), comparing the ranks according to the ground truth (_gold_) with the ranks according to the _davinci_ predictions. The table focuses on some of the most egregious mistakes. As can be seen, _davinci_ fails to identify the sweetness of common foods such as chocolate, fruit cake and jam. Conversely, the model significantly overestimates the sweetness of different cheeses and vegetables.
**Physical Properties.** Table 3 summarises the results for the physical properties. For mass, we consider both the problem of ranking all objects, evaluated using Spearman \(\rho\%\), and the problem of evaluating pairwise judgments, evaluated using accuracy. Height and size can only be evaluated in terms of pairwise judgments. To obtain conditional probabilities from the GPT-3 models, we used a prompt of the form "In terms of [mass/height/size], it is known that a typical [concept] is [heavy/tall/large]". We also tried a few variants, which performed worse. To compute perplexity scores, for evaluating pairwise judgements, we used a prompt of the form "[concept 1] is heavier/taller/larger than [concept 2]". For the baselines, we obtained scores for the properties _heavy_, _tall_ and _large_.
The correlation between model size and performance is far from obvious here, except that _ada_ clearly underperforms the three larger models. However, among the GPT-3 models, _babbage_ actually achieves the best results in several cases. The results based on conditional probabilities are consistently better than those based on perplexity. ChatGPT and GPT-4 were difficult to use with the ranking prompt, as some items were missing, some were duplicated, and many items were paraphrased in the ranking. The results in Table 3 were obtained after manually correcting these issues. With this caveat in mind, it is nonetheless clear that GPT-4 performs exceptionally well in this experiment. In accordance with our findings in the taste domain, DeBERTa performs very well on the height and size properties, outperforming all GPT-3 models by a clear margin. For mass, however, DeBERTa failed completely, even achieving a negative correlation. The bi-encoder models perform well on height and size, although generally underperforming the largest GPT-3 models. For mass, the bi-encoder trained on ChatGPT examples performs poorly, while the model trained on Microsoft Concept Graph and GenericsKB was more robust. It is notable that the results in Table 3 are considerably higher than those obtained by Li et al. (2023) using OPT (Zhang et al., 2022). For mass, for instance, even the largest OPT model (175B) was not able to do better than random guessing.
In Table 3, the pairwise judgments about mass were assessed by predicting the probability of the word _heavy_ (for the GPT-3 models) or by predicting the probability that the property _heavy_ was satisfied (for the baselines). Another possibility is to use the word/property _light_ instead, or to combine the two probabilities. Let us write \(p_{\textit{heavy}}\) to denote the probability obtained for _heavy_ (i.e. the conditional probability of the word, as predicted by the language model, or the probability that the property is satisfied, as predicted by a baseline model), and similarly for \(p_{\textit{light}}\). Then we can also predict the relative mass of items based on the value \(p_{\textit{heavy}}\cdot(1-p_{\textit{light}})\) or based on the value \(p_{\textit{heavy}}/p_{\textit{light}}\). These different possibilities are evaluated in Table 4. As can be seen, there is no variant that consistently outperforms the others.
**Analysis of Training Data Overlap.** For the baselines, we may wonder to what extent their knowledge comes from the pre-trained language model, and to what extent it has been injected during the fine-tuning step. For this analysis, we focus in particular on the DeBERTa model, which was fine-tuned on the McRae and CSLB datasets.
| Model | Mass \(\rho\) | Mass _Acc_ | Height _Acc_ | Size _Acc_ |
| --- | --- | --- | --- | --- |
| GPT-3 Ada (cond. prob.) | 23.0 | 47.8 | 68.7 | 59.1 |
| GPT-3 Babbage (cond. prob.) | 48.9 | 80.9 | 67.9 | 76.4 |
| GPT-3 Curie (cond. prob.) | 30.6 | 65.1 | 77.6 | 86.4 |
| GPT-3 Davinci (cond. prob.) | 36.2 | 76.8 | 76.4 | 80.4 |
| GPT-3 Ada (perplexity) | - | 49.0 | 49.7 | 36.5 |
| GPT-3 Babbage (perplexity) | - | 55.0 | 59.1 | 66.7 |
| GPT-3 Curie (perplexity) | - | 56.6 | 43.3 | 45.5 |
| GPT-3 Davinci (perplexity) | - | 70.8 | 54.7 | 51.1 |
| ChatGPT\({}^{\dagger}\) | 28.6 | 68.3 | 89.9 | 84.3 |
| GPT-4\({}^{\dagger}\) | **58.6** | **84.9** | **99.1** | **99.1** |
| DeBERTa | -8.9 | 42.8 | 86.6 | 93.9 |
| Bi-enc (MSCG+GKB) | 31.1 | 69.2 | 69.7 | 71.9 |
| Bi-enc (ChatGPT) | 11.8 | 67.6 | 77.2 | 60.3 |

Table 3: Results for physical properties, viewed as a ranking problem (mass, Spearman \(\rho\%\)) and as a pairwise judgment problem (mass, height and size, accuracy). Prompt: “In terms of [mass/height/size], it is known that a typical [concept] is [heavy/tall/large]”. Results with \({}^{\dagger}\) required manual post-processing of predictions.
These datasets indeed cover a number of physical properties, as well as some properties from the taste domain. Table 5 summarises how the performance of the DeBERTa model is affected when removing the most relevant properties from the McRae and CSLB training sets, which we refer to as _filtered training_ in the table. For instance, for the property _bitter_, in the filtered setting we omit all training examples involving the properties "bitter" and "can be bitter in taste"; for _sour_ we remove the properties "sour" and "can be sour in taste"; for _mass_ we remove the properties "heavy", "light", "light weight" and "can be lightweight"; for _height_ we remove the properties "short", "can be short", "tall" and "can be tall"; and for _size_ we remove the properties "large" and "small". Note that the McRae and CSLB datasets do not cover any properties that are related to sweetness, saltiness, umami and fatiness. The results in Table 5 show that filtering the training data indeed has an effect on results, although the performance of the model overall remains strong. Interestingly, in the case of _mass_, the filtered setting leads to clearly improved results.
## 5 Conclusions
We proposed the use of a dataset from the taste domain for evaluating the ability of LLMs to learn perceptually grounded representations. We found that LLMs can indeed make meaningful predictions about taste, but also showed that a fine-tuned DeBERTa model, and in some cases even a fine-tuned BERT-large bi-encoder, can outperform GPT-3. The performance of these smaller models crucially depends on the quality of the available training data. For this reason, we explored the idea of collecting training data from ChatGPT, using a new prompting strategy. We complemented our experiments in the taste domain with an evaluation of physical properties, where we achieved considerably better results than those reported in the literature (Li et al., 2023). Whereas previous work was essentially aimed at understanding the limitations of language models, our focus was more practical, asking the question: _can high-quality conceptual space representations be distilled from LLMs_? Our experiments suggest that the answer is essentially positive, but that new approaches may be needed to optimally take advantage of the knowledge that can be extracted from such models.
## Limitations
It is difficult to draw definitive conclusions about the extent to which cognitively meaningful representations can be obtained by querying LLMs. Among others, previous work has found that performance may dramatically differ depending on the prompt which is used; see e.g. (Liu et al., 2022). We have attempted to make reasonable choices when deciding on the considered prompts, through initial experiments with a few variations, but clearly this is not a guarantee that our prompts are close to being optimal. However, this also reinforces the conclusion that LLMs are difficult to use directly for learning conceptual spaces. While we believe that taste represents an interesting and under-explored domain, it remains to be verified to what extent LLMs are able to capture perceptual features in other domains.
## Acknowledgments
This work was supported by EPSRC grant EP/V025961/1.
| Strategy | ada | babbage | curie | davinci | DeBERTa | Bi-enc (MSCG+GKB) | Bi-enc (ChatGPT) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| \(p_{\textit{heavy}}\) | 46.6 | **81.7** | **64.7** | **76.8** | **67.7** | **69.2** | 42.9 |
| \(1-p_{\textit{light}}\) | 49.8 | 51.3 | 41.5 | 53.4 | 43.4 | 61.9 | **53.4** |
| \(p_{\textit{heavy}}\cdot(1-p_{\textit{light}})\) | 46.6 | **81.7** | **64.7** | **76.8** | 47.8 | **69.2** | 48.1 |
| \(p_{\textit{heavy}}/p_{\textit{light}}\) | **61.4** | 68.3 | 52.1 | 65.9 | 65.2 | 75.7 | 46.5 |

Table 4: Analysis of alternative strategies for predicting pairwise judgements about mass (accuracy).
| | Bitter \(\rho\) | Sour \(\rho\) | Mass \(\rho\) | Height _Acc_ | Size _Acc_ |
| --- | --- | --- | --- | --- | --- |
| Full training | 24.7 | 43.9 | -8.9 | 86.6 | 93.9 |
| Filtered training | 24.8 | 35.0 | 30.7 | 82.0 | 90.8 |

Table 5: Comparison of the DeBERTa model in two settings: the full training setting, where the McRae and CSLB datasets are used for fine-tuning, and a filtered setting, where relevant properties are omitted. |
2308.05813 | Physical Layer Security for NOMA Systems: Requirements, Issues, and
Recommendations | Non-orthogonal multiple access (NOMA) has been viewed as a potential
candidate for the upcoming generation of wireless communication systems.
Comparing to traditional orthogonal multiple access (OMA), multiplexing users
in the same time-frequency resource block can increase the number of served
users and improve the efficiency of the systems in terms of spectral
efficiency. Nevertheless, from a security view-point, when multiple users are
utilizing the same time-frequency resource, there may be concerns regarding
keeping information confidential. In this context, physical layer security
(PLS) has been introduced as a supplement of protection to conventional
encryption techniques by making use of the random nature of wireless
transmission media for ensuring communication secrecy. The recent years have
seen significant interests in PLS being applied to NOMA networks. Numerous
scenarios have been investigated to assess the security of NOMA systems,
including when active and passive eavesdroppers are present, as well as when
these systems are combined with relay and reconfigurable intelligent surfaces
(RIS). Additionally, the security of the ambient backscatter (AmB)-NOMA systems
are other issues that have lately drawn a lot of attention. In this paper, a
thorough analysis of the PLS-assisted NOMA systems research state-of-the-art is
presented. In this regard, we begin by outlining the foundations of NOMA and
PLS, respectively. Following that, we discuss the PLS performances for NOMA
systems in four categories depending on the type of the eavesdropper, the
existence of relay, RIS, and AmB systems in different conditions. Finally, a
thorough explanation of the most recent PLS-assisted NOMA systems is given. | Saeid Pakravan, Jean-Yves Chouinard, Xingwang Li, Ming Zeng, Wanming Hao, Quoc-Viet Pham, Octavia A. Dobre | 2023-08-10T18:18:51Z | http://arxiv.org/abs/2308.05813v1 | # Physical Layer Security for NOMA Systems: Requirements, Issues, and Recommendations
###### Abstract
Non-orthogonal multiple access (NOMA) has been viewed as a potential candidate for the upcoming generation of wireless communication systems. Compared to traditional orthogonal multiple access (OMA), multiplexing users in the same time-frequency resource block can increase the number of served users and improve the spectral efficiency of the systems. Nevertheless, from a security viewpoint, when multiple users utilize the same time-frequency resource, there may be concerns about keeping information confidential. In this context, physical layer security (PLS) has been introduced as a supplementary layer of protection to conventional encryption techniques, making use of the random nature of wireless transmission media to ensure communication secrecy. Recent years have seen significant interest in PLS being applied to NOMA networks. Numerous scenarios have been investigated to assess the security of NOMA systems, including when active and passive eavesdroppers are present, as well as when these systems are combined with relays and reconfigurable intelligent surfaces (RIS). Additionally, the security of ambient backscatter (AmB)-NOMA systems is another issue that has lately drawn a lot of attention. In this paper, a thorough analysis of the state-of-the-art in PLS-assisted NOMA systems research is presented. In this regard, we begin by outlining the foundations of NOMA and PLS, respectively. Following that, we discuss the PLS performance of NOMA systems in four categories, depending on the type of eavesdropper and the presence of relays, RIS, and AmB systems under different conditions. Finally, a thorough explanation of the most recent PLS-assisted NOMA systems is given.
Ambient backscatter systems, Non-orthogonal multiple access, Physical layer security, Reconfigurable intelligent surfaces, Relay, Security properties.
## I Introduction
Non-orthogonal multiple access (NOMA) has been widely regarded as a promising candidate for the upcoming generation of wireless communication networks [1]-[3]. The primary purpose of NOMA is to support the simultaneous information transfer of multiple users over the same radio resources [4]. However, this introduces severe inter-user interference, which is mitigated with the aid of the successive interference cancellation (SIC) technique at the receiver [5]. Owing to the channel disparity among different users, it has been theoretically demonstrated that NOMA for single-input single-output (SISO) systems achieves a larger capacity region than traditional orthogonal multiple access (OMA) [6]. In [7, 8, 9], the effectiveness of these systems in the multiple-input multiple-output (MIMO) scenario has been studied. Results from both theoretical and numerical analyses have shown that MIMO-NOMA can achieve a higher sum rate than MIMO-OMA. Besides spectral efficiency, it has been shown that NOMA can lead to higher energy efficiency, lower delay, an improved coverage region, and massive connectivity compared to OMA under different system settings [10, 11, 12]. However, from a security point of view, when multiple users are utilizing the same time-frequency resource, there may be concerns regarding keeping information confidential.
In order to address these security challenges, physical layer security (PLS) strategies have been identified as a reliable and effective approach that can complement cryptography-based techniques [13]. By exploiting the dynamic aspects of wireless communication, such as channel randomness, fading, noise, and interference, PLS can protect the information from being decoded by an eavesdropper while guaranteeing that the legitimate user can decode the data without issue. The future generation of wireless communication systems can also benefit from flexible and scenario-specific security, owing to PLS's ability to support channel-dependent resource allocation and link adaptation [13]. Considering the potential application of PLS in future networks, designing PLS techniques for NOMA and investigating the related security issues represent an interesting subject of study. While the primary objective of NOMA is to enable simultaneous information transfer of multiple users over the same radio resources, its unique characteristics can also be harnessed to mitigate security threats and bolster the confidentiality and integrity of wireless communications. By leveraging NOMA's capabilities, such as user grouping, power allocation (PA), and SIC, we can devise novel security mechanisms to counteract eavesdropping and strengthen the overall security performance of wireless networks [14]. The inherent inter-user interference in NOMA transmissions can serve as a deterrent to eavesdroppers, as the non-orthogonal signals make it challenging for them to distinguish the intended user's signal from others. The deliberate introduction of this interference
can confuse potential eavesdroppers, thereby enhancing the physical layer security [14-18]. Additionally, the unequal PA in NOMA can deliberately weaken the signals received by unintended users, reducing the likelihood of successful eavesdropping. By allocating higher power levels to users with sensitive information and lower power levels to other users, the security of confidential data can be enhanced. Furthermore, the SIC technique employed at NOMA receivers not only enhances system capacity but also contributes to PLS [14-18]. By successively decoding and subtracting the signals of stronger users, the NOMA receiver can extract the information intended for each user, while simultaneously rejecting interference. This inherent interference management technique strengthens the security of individual user transmissions, as it becomes more challenging for eavesdroppers to decode the desired information.
The most recent research studies mainly focus on analyzing the impact of relays and reconfigurable intelligent surfaces (RIS) on the PLS metrics of NOMA systems [19-40]. The secrecy performance of these systems is often analyzed under passive eavesdropping, with less focus on active eavesdroppers. In recent years, cases with active eavesdropping have attracted increased attention, and related studies can be found in [41-43]. Reliability and security analysis of ambient backscatter (AmB)-NOMA systems is another hot topic [44-50]. Additionally, PLS has been considered for systems in which NOMA is integrated with other cutting-edge transmission techniques, like visible light communications [51-53] and mmWave networks [54-56].
The security design specifications for NOMA in the downlink domain and the PLS solutions satisfying these specifications have been described in [57]. Several advantages of using NOMA in comparison to OMA in specific situations with regard to PLS have been addressed in that study. In [58], the authors categorized the existing PLS-aided NOMA frameworks into three distinct groups depending on the number of antennas at the base station (BS): SISO, MIMO, and massive MIMO systems. Then, for each category, an overview of the research developments was provided. Most of the studies in this field focus mostly on data privacy and confidentiality, and not on other security features such as message integrity, key generation, and device and source authentication. Various approaches to deal with the remaining security features have been suggested in [59], which also explored the data confidentiality of PLS for NOMA systems, along with its limits, difficulties, and solutions. In Table I, a summary of the recent reviews on PLS for NOMA systems is provided.
The three previous reviews tackle important problems in NOMA systems with PLS, but they also carry certain limitations. Indeed, none of them has systematically looked into works in which NOMA is combined with other state-of-the-art transmission technologies, e.g., RIS, full-duplex, and AmB communication. Given that research on PLS-NOMA with these novel technologies has gained popularity in recent years, there is a need for a comprehensive survey discussing the recent development of PLS-NOMA in the context of other advanced transmission technologies. In this regard, we carry out this study and attempt to shed light on this topic. Hence, we thoroughly discuss and describe PLS for NOMA systems in four scenarios: active and passive eavesdroppers, and the presence of a relay, RIS, and AmB communication systems. The corresponding security levels of these scenarios are evaluated and analysed. A summary of the different aspects of each scheme is provided as well. Furthermore, a thorough overview of the most current research problems and unsolved PLS concerns in NOMA is presented. Suggestions for future research in this area are made as well.
The organization of this survey is illustrated in Fig. 1. In the beginning, we provide a detailed explanation of the concepts and principles of NOMA in Section II, and of PLS in Section III. Following that, in Section IV, we thoroughly describe the current PLS-aided NOMA systems that have been proposed in the literature. The challenges of PLS in NOMA, potential solutions, and directions for future work are discussed in Section V. Finally, in Section VI, the conclusions of this survey are presented.
**Notation:**\([x]^{+}=\max(x,0)\). \(f(X,Y)\) represents the joint probability density function (PDF) of two random variables \(X\) and \(Y\). Additionally, the abbreviations are listed in Table II. The summary Tables V and VII employ a standardized notation, whereby a '+' symbol signifies that a particular factor was incorporated in the analysis, whereas a '-' symbol signifies its exclusion from the analytical framework.
## II Fundamentals of NOMA
The concept of NOMA and its various types are described in this section, representing the foundation for understanding the security strategies presented in the paper.
The existing NOMA structures can be classified into two main categories: power-domain NOMA (PD-NOMA) and code-domain NOMA (CD-NOMA). By employing different power
Fig. 1: Organization of this paper (RIS is a technology that can control the transmission of electromagnetic waves by modifying its electrical and magnetic attributes.).
levels, PD-NOMA provides services to multiple users simultaneously in the same time-frequency resource [1-4]. In order to reduce the interference between users, the receiver employs the SIC technique. CD-NOMA assigns non-orthogonal resources to the users, such as codebooks, scrambling patterns, spreading sequences, and interleaving sequences [5]. Besides these two main types, there exist several less widely known NOMA approaches, such as bit division multiplexing and pattern division multiple access [60]. Generally speaking, PD-NOMA has received more attention than CD-NOMA because of its convenience, effectiveness, and compatibility with existing systems. Indeed, most PLS solutions largely concentrate on PD-NOMA rather than CD-NOMA. As a result, the emphasis of this study is on PD-NOMA. For simplicity of notation, it will be referred to as NOMA in the rest of the paper. Interested readers can refer to [61], as well as the references therein, for more information on CD-NOMA.
### _Downlink NOMA_
On the transmitter side, the BS superimposes the user signals into a single waveform using different power coefficients. The amount of power assigned to each user is based on the relative quality of its channel. Generally, users with worse channel conditions are allocated higher power, and vice versa. In other words, the user equipment farthest from the BS receives the largest power allocation (PA), while the user equipment closest to the BS receives the smallest. We will refer to them as the farthest user (FU) and the nearest user (NU), respectively. The downlink signal to be transmitted can be represented as follows:
\[x_{D}(t)=\sum_{k=1}^{K}\sqrt{\alpha_{k}P}x_{D,k}(t), \tag{1}\]
where \(K\) is the number of users in the network; \(x_{D,k}(t)\) is the individual information of user equipment \(k\); \(t\) denotes time; \(P\) denotes the total power transmitted by the BS; and \(\alpha_{k}\), \(k=\{1,...,K\}\), represents the power coefficient for the signal of user \(k\), where \(\sum_{k=1}^{K}\alpha_{k}=1\). Then, the signal received by user \(k\) is given as follows:
\[y_{k}(t)=h_{k}x_{D}(t)+n_{k}(t), \tag{2}\]
where \(h_{k}\) represents the channel gain between user \(k\) and the BS, and \(n_{k}(t)\) is the additive white Gaussian noise (AWGN) at user equipment \(k\), with zero mean and variance \(\sigma^{2}\). It is assumed that users are indexed in decreasing order of their channel gains for decoding, i.e., \(\left|h_{1}\right|^{2}\geq...\geq\left|h_{K}\right|^{2}\).
Each user equipment executes SIC to subtract the interference signals with higher power levels. This procedure is carried out sequentially until the user equipment reaches its own signal. Note that the user with the worst channel condition,
namely, the one with the maximum power coefficient, can recover the desired signal without performing SIC, treating all other signals as noise [5]. As previously stated, SIC is carried out at user \(k\), \(k=\left\{1,...,K-1\right\}\), to eliminate the interference from the users with weaker channel gains at the receiver. Therefore, the downlink rate achievable by user \(k\) is determined as follows:
\[R_{k}^{\mathrm{DL}}=\log_{2}{\left(1+\frac{{{\alpha_{k}}P\left|{{h_{k}}}\right| ^{2}}}{{\sum\nolimits_{i=1}^{k-1}{{\alpha_{i}}P\left|{{h_{k}}}\right|^{2}}+ \sigma^{2}}}\right)}, \tag{3}\]
where \(\sum\nolimits_{i=1}^{k-1}{{\alpha_{i}}}\) is assumed to be zero for \(k=1\). Fig. 2 (a) illustrates the detailed downlink network for a scenario with two users.
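To make the rate expressions above concrete, the following minimal Python sketch evaluates Eq. (3) for an arbitrary number of downlink users; the power split, channel gains, and noise level in the example call are illustrative assumptions, not values from any cited work.

```python
import numpy as np

def downlink_noma_rates(alpha, P, h, sigma2=1.0):
    """Per-user downlink rates of Eq. (3); users are indexed in decreasing
    order of channel gain, |h_1|^2 >= ... >= |h_K|^2."""
    alpha = np.asarray(alpha, dtype=float)
    gains = np.abs(np.asarray(h, dtype=complex)) ** 2
    rates = []
    for k in range(len(alpha)):
        signal = alpha[k] * P * gains[k]
        # Residual interference from stronger-channel users i < k, whose
        # lower-power signals cannot be removed by SIC at user k.
        interference = np.sum(alpha[:k]) * P * gains[k]
        rates.append(np.log2(1.0 + signal / (interference + sigma2)))
    return np.array(rates)

# Example: NU (strong channel, small power share) and FU (weak channel,
# large power share).
print(downlink_noma_rates(alpha=[0.2, 0.8], P=10.0, h=[1.0, 0.3]))
```

With these example values, the NU attains the higher rate despite its smaller power share, reflecting its stronger channel.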
Theoretically, there is no limit to the number of users that can be served via downlink NOMA. Nevertheless, in practice, downlink NOMA is normally applied to only a few users, usually \(K=2\) or \(3\). The reason is that as \(K\) increases, the bit error rate degrades considerably as a result of error propagation from imperfect SIC. Meanwhile, decoding the signals of other users requires more computing power and energy, making NOMA less appealing for user devices with limited resources. User scheduling is required when the system consists of many users: users are first separated into clusters, NOMA is then used to serve users within the same cluster, whereas users in different clusters are served orthogonally, e.g., via time division multiple access or frequency division multiple access.
### _Uplink NOMA_
In the NOMA network for uplink transmissions, each user sends its signal to the BS, where these signals are superimposed into one received waveform. To separate and decode each user's signal at the BS, SIC is employed [5]. The received signal at the BS, which consists of the signals from all users, can be written as:
\[y(t)=\sum\limits_{k=1}^{K}\sqrt{P_{k}}\,x_{U,k}(t)h_{k}+n(t), \tag{4}\]
where \(x_{U,k}(t)\) represents the individual uplink information of user \(k\); \(P_{k}\) indicates the transmitted power of user \(k\); \(h_{k}\) indicates the channel gain between user \(k\) and the BS; and \(n(t)\) is the AWGN with zero mean and variance \(\sigma^{2}\). It is assumed that the received signal powers are indexed in decreasing order, i.e., \(P_{1}\left|{{h_{1}}}\right|^{2}\geq...\geq{P_{K}\left|{{h_{K}}}\right|^{2}}\). Consequently, the uplink rate achievable by user \(k\) is determined as follows:
\[R_{k}^{\mathrm{UL}}=\log_{2}{\left(1+\frac{{P_{k}\left|{{h_{k}}}\right|^{2}}} {{\sum\nolimits_{i=k+1}^{K}{{P_{i}}\left|{{h_{i}}}\right|^{2}}+\sigma^{2}}} \right)}. \tag{5}\]
The sum \(\sum\nolimits_{i=k+1}^{K}{{P_{i}}\left|{{h_{i}}}\right|^{2}}\) is taken to be zero for \(k=K\). Fig. 2 (b) illustrates the uplink network for a scenario with two users.
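Analogously, the sketch below evaluates the uplink rate of Eq. (5), with the BS decoding the strongest received signal first and subtracting it before decoding the next one; the transmit powers and channel gains are illustrative assumptions.

```python
import numpy as np

def uplink_noma_rates(P, h, sigma2=1.0):
    """Per-user uplink rates of Eq. (5); users are indexed in decreasing
    order of received power, P_1|h_1|^2 >= ... >= P_K|h_K|^2."""
    P = np.asarray(P, dtype=float)
    rx = P * np.abs(np.asarray(h, dtype=complex)) ** 2  # received powers
    rates = []
    for k in range(len(P)):
        # The BS decodes user k after removing users 1..k-1 via SIC, so
        # only the weaker signals i > k remain as interference.
        interference = np.sum(rx[k + 1:])
        rates.append(np.log2(1.0 + rx[k] / (interference + sigma2)))
    return np.array(rates)

print(uplink_noma_rates(P=[10.0, 10.0], h=[1.0, 0.3]))
```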
The system models above are presented for SISO systems, where scalars are used to represent the channels; analogous MIMO-NOMA formulations can be found in [7-9].
## III Fundamentals of PLS
Wireless networks are subject to eavesdropping attacks because of the broadcast nature of wireless channels. Ensuring that private messages are not accessed by eavesdroppers is a fundamental security requirement. Traditionally, cryptographic algorithms have been used to achieve this [13]. PLS is an alternative strategy that exploits the randomness of wireless channels at the physical layer. The idea of using the random characteristics of fading wireless channels to prevent eavesdropping was first proposed from an information-theoretic perspective [61]. In terms of confidentiality, perfect secrecy is obtained if the eavesdropper entirely disregards the transmitted information and can only randomly guess the original information bit by bit. Early PLS research was directly influenced by the entropy and equivocation concepts developed for communication problems, since this definition of security is closely related to communication in the presence of noise. The most efficient way to send a confidential message is to use wiretap channel coding to reach the highest possible secure transmission rate, which Wyner referred to as the secrecy capacity (SC) in [62]. Wyner, however, only demonstrated that secure communications are feasible in degraded broadcast channels. The ideas of PLS gained further popularity as a result of developments on non-degraded channels [63], Gaussian channels [64], fading channels [65], [66], multi-antenna channels [67], and relay channels [68].

Fig. 2: Two scenarios for NOMA system models: a) Downlink NOMA system models with two users, b) Uplink NOMA system models with two users.
Researchers in this field have introduced several metrics for evaluating and assessing PLS systems, and we outline some of them below.
**Ergodic Secrecy Capacity (ESC):** The ESC defines a limit on the capacity, based on the principles of information-theoretic secrecy, for a system where the coded message is sent over a large enough number of channel realizations to take advantage of the ergodic properties of the fading channel. The maximum achievable transmission rate, subject to the limits of reliability and information-theoretic secrecy, can be assessed through the SC [63]. This value can be used to measure the secrecy performance of an AWGN wiretap channel [61]. The SC of a single wiretap fading channel is determined as follows
\[C_{s}=\left[C_{m}-C_{e}\right]^{{}^{+}}, \tag{6}\]
where \(C_{m}\) and \(C_{e}\) respectively represent the instantaneous capacity of the legitimate receiver and eavesdropper, and are given by
\[C_{m}=\log_{2}(1+\gamma_{m}), \tag{7}\]
and
\[C_{e}=\log_{2}(1+\gamma_{e}), \tag{8}\]
where \(\gamma_{m}=\frac{P\left|h_{m}\right|^{2}}{\sigma_{m}^{2}}\) and \(\gamma_{e}=\frac{P\left|h_{e}\right|^{2}}{\sigma_{e}^{2}}\) are the instantaneous received signal-to-noise ratios (SNRs) at the legitimate receiver and the eavesdropper, respectively. Here, \(P\) is the average power of the transmitted signal. The fading coefficients of the channels from the transmitter to the legitimate receiver and from the transmitter to the eavesdropper are denoted by \(h_{m}\) and \(h_{e}\), respectively, and \(\sigma_{m}^{2}\) and \(\sigma_{e}^{2}\) are the corresponding receiver noise variances.
Two cases of channel state information (CSI) availability at the transmitter, namely full CSI and legitimate CSI, are considered in order to determine the ESC. In the first case, the transmitter knows the CSI of both the legitimate and the eavesdropper channels. As a result, it only transmits when the SNR of the legitimate channel is higher than that of the eavesdropper channel, i.e., \(\gamma_{m}>\gamma_{e}\). The ESC is then the average SC over all fading realizations, which is expressed as follows:
\[C_{s-f}^{avg}=\int_{0}^{\infty}\int_{\gamma_{e}}^{\infty}\log_{2}\bigg{(}\frac {1+\gamma_{m}}{1+\gamma_{e}}\bigg{)}f(\gamma_{m},\gamma_{e})d\gamma_{m}d\gamma _{e}. \tag{9}\]
Likewise, the ESC when only legitimate CSI is available at the transmitter is given as follows:
\[C_{s-l}^{avg}=\int_{0}^{\infty}\int_{0}^{\infty}\bigg{(}\log_{2}\bigg{(}\frac {1+\gamma_{m}}{1+\gamma_{e}}\bigg{)}\bigg{)}^{+}f(\gamma_{m},\gamma_{e})d \gamma_{m}d\gamma_{e}. \tag{10}\]
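The double integrals in Eqs. (9) and (10) rarely admit simple closed forms, but they are straightforward to estimate by Monte Carlo sampling. The minimal sketch below does so for independent Rayleigh fading; the average SNRs are illustrative assumptions. Note that, with the definitions above, the two estimates coincide numerically, since the integrand of Eq. (10) vanishes whenever \(\gamma_{m}\leq\gamma_{e}\).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
gamma_m = 10.0 * rng.exponential(size=n)  # legitimate SNR (Rayleigh fading)
gamma_e = 2.0 * rng.exponential(size=n)   # eavesdropper SNR

rate = np.log2((1.0 + gamma_m) / (1.0 + gamma_e))

# Full CSI, Eq. (9): transmit only on realizations with gamma_m > gamma_e.
esc_full = np.mean(np.where(gamma_m > gamma_e, rate, 0.0))
# Legitimate CSI only, Eq. (10): average of [rate]^+ over all realizations.
esc_leg = np.mean(np.maximum(rate, 0.0))

print(f"ESC (full CSI)       ~ {esc_full:.3f} bits/s/Hz")
print(f"ESC (legitimate CSI) ~ {esc_leg:.3f} bits/s/Hz")
```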
**Secrecy Outage Probability (SOP):** This metric is defined as the probability that the SC falls below a predefined secure transmission rate \(R_{s}>0\). Based on this definition, the SOP is characterized as follows
\[P_{SOP}=\text{Pr}(C_{s}\leq R_{s}). \tag{11}\]
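The SOP of Eq. (11) can be estimated in the same Monte Carlo fashion, as in the short sketch below; the average SNRs and the target rate \(R_{s}\) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
gamma_m = 10.0 * rng.exponential(size=n)  # legitimate SNR (Rayleigh fading)
gamma_e = 2.0 * rng.exponential(size=n)   # eavesdropper SNR

Rs = 0.5  # target secrecy rate in bits/s/Hz
Cs = np.maximum(np.log2((1.0 + gamma_m) / (1.0 + gamma_e)), 0.0)  # Eq. (6)
print(f"SOP at R_s = {Rs}: {np.mean(Cs <= Rs):.4f}")  # Eq. (11)
```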
**Secrecy Coverage Region (SCR):** In information theory, achieving the maximum rate is one of the central concerns. An equally important question, however, is how to enlarge the coverage region at a specified target rate. For this purpose, [69] and [70] provide in-depth analyses of the concept of coverage region. This concept, often thought of as the area outside of which the target transmission rate cannot be achieved, is closely associated with the definition of outage capacity [71]. The path-loss impact is incorporated into the SNRs by taking into consideration the fixed distances between the transmitter and the receiver (as well as the eavesdropper). Thus, the coverage region is obtained by calculating the distance-dependent ergodic rates. As a result, the geographic region that guarantees a secrecy rate of at least \(R_{s}>0\) is considered the coverage region for the fading wiretap channel, i.e.,
\[\zeta(d_{m},d_{e}):=\left\{d_{m},d_{e}:C_{s}(d_{m},d_{e})>R_{s}\right\}, \tag{12}\]
where \(C_{s}(d_{m},d_{e})\) indicates the average secrecy capacity when the legitimate receiver and the eavesdropper are located at distances \(d_{m}\) and \(d_{e}\), respectively. For a given \(d_{m}\), the SCR can be characterized by the smallest eavesdropper distance \(d_{e}\) for which secure communication is still ensured.
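A minimal numerical illustration of Eq. (12) follows: for each legitimate-receiver distance on a grid, it finds the smallest eavesdropper distance at which the ergodic secrecy rate, computed in the \([\cdot]^{+}\)-averaged form of Eq. (10) with distance-dependent average SNRs, still exceeds \(R_{s}\). The reference SNR, path-loss exponent, and grid are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
snr0, eta, Rs, n = 100.0, 3.0, 0.5, 20_000  # reference SNR at unit distance,
                                            # path-loss exponent, rate, samples

def ergodic_secrecy_rate(d_m, d_e):
    # Rayleigh fading with power-law path loss on both links.
    g_m = snr0 * d_m ** (-eta) * rng.exponential(size=n)
    g_e = snr0 * d_e ** (-eta) * rng.exponential(size=n)
    return np.mean(np.maximum(np.log2((1.0 + g_m) / (1.0 + g_e)), 0.0))

d_grid = np.arange(0.5, 10.0, 0.5)
for d_m in (1.0, 2.0, 4.0):
    # Smallest eavesdropper distance on the grid that keeps the rate > R_s.
    feasible = [d_e for d_e in d_grid if ergodic_secrecy_rate(d_m, d_e) > Rs]
    print(f"d_m = {d_m}: minimum secure d_e =", min(feasible) if feasible else None)
```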
**Effective secrecy throughput:** This metric has recently been introduced to jointly evaluate reliability and security [59, 72]. The channel throughput subject to both the reliability and the security requirements is known as the effective secrecy throughput; it ensures that the average network latency remains within acceptable limits and that the transmitted data remain secure [59, 72].
**Low probability of detection:** Low-probability-of-detection (covert) communication is a cutting-edge transmission paradigm that concentrates on the confidentiality and security of wireless networks. In recent years, several studies have investigated the fundamental limits of low-probability-of-detection communication by assessing the number of information bits that can be transmitted between two users while constraining the probability that a warden makes a detection error [73].
## IV PLS for NOMA Systems
Recently, PLS for NOMA systems has received a great deal of interest from the community. In order to comprehensively address the topic of PLS in NOMA, we first discuss the technical differences between PLS in uplink NOMA and PLS in downlink NOMA. Then, in the subsequent sub-sections, we classify and evaluate the existing works into four scenarios: the type of eavesdropper, the presence of a relay, the use of RIS, and the PLS of AmB-NOMA systems.
### _Technical Difference between PLS in Uplink NOMA and Downlink NOMA_
The PLS in uplink NOMA and downlink NOMA transmissions exhibits technical differences that stem from the distinct characteristics of each communication direction. Understanding these differences is crucial for designing robust
and efficient PLS mechanisms in NOMA systems. In this sub-section, we analyze and compare the technical disparities between uplink NOMA and downlink NOMA in terms of PLS.
In uplink NOMA, the primary focus of PLS lies in safeguarding user privacy against potential eavesdroppers and ensuring the secure transmission of sensitive information. Various techniques have been proposed to enhance PLS in uplink NOMA, such as PA schemes that distribute power levels among users based on their channel conditions [74]. Cooperative jamming techniques have also been suggested, whereby selected users deliberately generate interference to confuse eavesdroppers [75]. On the other hand, the focus of PLS in downlink NOMA revolves around ensuring confidential and reliable transmission to the intended users while minimizing the impact of potential eavesdroppers. Researchers have explored different approaches to enhance PLS in downlink NOMA, including transmit beamforming techniques that direct the signal towards the intended user while minimizing leakage to unintended recipients [76]. Moreover, artificial noise (AN) injection techniques have been proposed, involving the intentional introduction of additional noise by the BS to confuse eavesdroppers [77].
A comparative analysis of the technical disparities between PLS in uplink NOMA and downlink NOMA unveils unique characteristics and design considerations for each transmission direction. Uplink NOMA places a strong emphasis on protecting user privacy against eavesdropping attacks, while downlink NOMA focuses on ensuring secure transmission to intended users while mitigating the impact of potential eavesdroppers. These dissimilarities arise due to fundamental variances in signal propagation, receiver architectures, and user roles within NOMA systems. By comprehending and addressing these technical distinctions, it becomes feasible to develop tailored solutions for PLS that effectively tackle the specific challenges and requirements of uplink NOMA and downlink NOMA transmissions.
Table III provides a comprehensive analysis of the technical differences between uplink and downlink NOMA.
### _Types of Eavesdroppers in NOMA Systems_
Generally, the different reliability, throughput, security, and other requirements of NOMA users should be considered when designing PLS approaches. The basic system model of NOMA-based networks for PLS is shown in Fig. 3. There are two types of eavesdroppers: internal and external. An internal eavesdropper is a legitimate user who also eavesdrops on other users' information, whereas an external eavesdropper does not correspond to any legitimate user. Another factor to consider when designing PLS methods is whether the eavesdropper is active (disrupting wireless communication by initiating jamming or channel-estimation attacks) or passive (merely observing the communication without interfering) [78-80].
This study focuses on internal eavesdropping and passive external eavesdropping. We first classify eavesdroppers into the two categories described below:
**Internal eavesdroppers:** Internal eavesdroppers are legitimate users who share the same time-frequency resources as other network users and try to intercept other users' information.
**External eavesdroppers:** In contrast, external eavesdroppers are illegitimate users operating in the same bandwidth.
The objectives of security design in NOMA can be classified into three main groups, as illustrated in Fig. 4, depending on the security requirements, which we discuss below.
#### IV-B1 Security Designs against Internal Eavesdroppers
In this scenario, the internal users (NUs and FUs) are presumed untrusted when creating security protocols against internal eavesdroppers. Here, the design objective is to secure each user's information from the others while ensuring that the SIC operates normally. The two types of internal eavesdropping are covered below.

Fig. 3: The PLS scenario for NOMA.
**Eavesdropping of FU by NU:** The primary security concern for the FU under the NOMA principle arises because the NU must decode the FU's signal in order to perform SIC. The additional power allocated to the FU's signal is another important factor facilitating the NU's ability to detect it. Here, the design objective is to maintain SIC functionality while preventing the NU from obtaining the FU's information. To clarify this matter, note that there exist two different types of SIC receivers: the first is a symbol-level SIC receiver, which only demodulates the FU signal (without decoding it) in order to perform SIC, and the second is a codeword-level SIC receiver, which both demodulates and decodes the FU signal in order to perform SIC. In the codeword-level SIC case, the messages can only be secured by encryption methods. In the symbol-level SIC case, however, PLS approaches can be used: by transforming the FU's messages into a different domain using a dedicated sequence, the FU's messages can be secured so that the NU can still perform SIC but cannot decode them [78]. The use of channel-dependent characteristics is another way to realize this transformation.
**Eavesdropping of NU by FU:** According to the fundamentals of NOMA, the FU directly decodes its own signal while treating the NU's information as noise. Nevertheless, it may attempt to detect the NU's signal after recovering its own. Here, the design objective is to secure the NU's messages from the FU while ensuring that SIC functions normally. Compared with securing the FU's messages, designing security techniques in this case is easier. In order to meet the NU's security requirements while ensuring that the FU's basic data-rate requirement is satisfied, the BS can use PLS approaches based on PA, beamforming, or any other adaptive method. For instance, in beamforming design, a trade-off between security at the NU and reliability at the FU must be considered [78].
#### IV-B2 Security Designs against External Eavesdroppers
In this case, the NU and FU are trusted. Therefore, the design objective is to secure the NU's and FU's messages from an external eavesdropper. It is essential to consider the eavesdropper's location when designing security algorithms, since it can impact the NOMA system's security performance. The legitimate users are allocated different power levels, which results in unequal levels of security among them relative to the eavesdropper's (Eve's) location; accordingly, Eve's ability to intercept their signals varies along several dimensions. Two main conditions must be considered when designing algorithms for this situation: first, the basic SIC process should proceed normally alongside the security algorithms, meaning that the proposed algorithms should not interfere with regular NOMA operation; second, the methods are expected to function even when there is significant spatial similarity between the channels of the legitimate and illegitimate parties. A number of PLS techniques have been proposed to combat external eavesdropping, such as channel-based PA optimization for each user, subcarrier assignment, joint design of the NOMA channel ordering and decoding order, injection of interfering signals, optimized beamforming policies, transmit antenna selection, phase manipulation, key generation, and the exploitation of user interference [79]. A simple illustration of the first of these ideas is sketched below.
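The sketch below, which is not a specific scheme from [79], illustrates channel-based PA optimization for a two-user downlink NOMA link with an external eavesdropper wiretapping the FU's message: a grid search over the FU power share maximizes the FU secrecy rate subject to an NU QoS constraint. All channel gains, powers, and thresholds are illustrative assumptions.

```python
import numpy as np

P, s2 = 10.0, 1.0
g_nu, g_fu, g_e = 1.0, 0.2, 0.1   # channel gains of NU, FU, and eavesdropper
R_nu_min = 1.0                    # NU QoS target in bits/s/Hz

best = None
for a in np.linspace(0.5, 0.99, 200):  # a = FU power share (a > 0.5)
    r_fu = np.log2(1 + a * P * g_fu / ((1 - a) * P * g_fu + s2))
    r_nu = np.log2(1 + (1 - a) * P * g_nu / s2)  # NU rate after SIC
    # Eavesdropper decoding the FU message, treating the NU signal as noise.
    c_e = np.log2(1 + a * P * g_e / ((1 - a) * P * g_e + s2))
    secrecy = max(r_fu - c_e, 0.0)
    if r_nu >= R_nu_min and (best is None or secrecy > best[1]):
        best = (a, secrecy)
print(f"best alpha = {best[0]:.3f}, FU secrecy rate = {best[1]:.3f} bits/s/Hz")
```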
#### IV-B3 Security Designs against both Internal and External Eavesdroppers
In this case, there are both internal and external eavesdroppers, and the network users are not trusted. The design objective is to secure the signals intended for the NU and FU from external eavesdroppers as well as from one another. In terms of security design, this is the most challenging situation. To achieve the aforementioned objectives, the design methods must still guarantee that SIC performs properly. One potential security measure in this situation is to transform the signals of the near and far users into a different domain using different randomization sequences [80].
The objectives of the security designs for the scenarios explained above are summarized in Table IV.
### _PLS for NOMA Systems with Relay_
First introduced in [81], the relay can improve the transmission rate as well as facilitate communication between the users and the BS. Since then, relays have been considered in various communication systems, including studies of the PLS of such systems. Indeed, the relay was first introduced for the wiretap channel in [62], where the author studied the coding problem for the relay channel when some of the transmitted messages are to be kept private from the relay. Subsequently, in [82], Lifeng Lai et al. proposed several cooperation strategies for the wiretap channel with a relay and obtained the corresponding achievable performance bounds. More recently, several studies have investigated how the relay affects the PLS performance of NOMA networks [14-25]. Numerous works address cooperative NOMA, namely the combination of NOMA and relaying. Two relay models, trusted and untrusted, are considered when studying the secrecy performance of such channels. In the existing works, the relay is referred to as untrusted in the sense that the messages transmitted to the users should remain secret from it. Nevertheless, the relay is presumed to be non-malicious, in that it would not deviate from its transmission duties or try to attack the users; it may only be curious enough to try to decode the users' messages. Such a communication scenario has practical applications. For instance, users of a data-providing network could have different access rights to information contents depending on their subscription plans, or different hierarchical security clearances for various types of data. Users must cooperate with one another and follow the network protocols, since they are legitimate members of the same network. In contrast, an untrusted relay may be sufficiently curious to decode the signals' contents before forwarding them to the users.

Fig. 4: A classification of NOMA systems with \(K\) legitimate users under eavesdropping conditions: a) Internal eavesdroppers, b) External eavesdroppers, c) Both internal and external eavesdroppers.
In the following, we go into detail on how the relay affects the secrecy performance of NOMA networks.
**Cooperative NOMA:** In cooperative NOMA, users near the BS with better channel conditions decode the information of others and serve as relays for farther users with poor channels to the BS, thereby improving the latter's reception reliability. A cooperative NOMA system with two users, a BS, an eavesdropper, and a relay was explored in [14], assuming that the BS had no direct link with either the users or the eavesdropper. Both decode-and-forward (DF) and amplify-and-forward (AF) relaying protocols were considered, and the SOP analyses showed that DF and AF achieve almost identical secrecy performance. [15] considered a similar system model in the presence of both a direct link and a relay link, and studied the secrecy performance under different PAs and target rates for both trusted and untrusted relays. To assess the reliability and security of the underlying NOMA system with arbitrary system parameters, the SOP and the strictly positive SC were derived; the findings showed that the secrecy performance can be remarkably improved by judicious selection of these parameters. The authors of [16] studied a NOMA system with a two-way relay and multiple preassigned user pairs, and proposed a cooperative PA and subcarrier-assignment scheme for the secrecy energy efficiency optimization problem. Many-to-many matching was used to handle the subcarrier assignment, and on this basis the PA problem was solved using geometric programming. The authors of [19] considered a DF-based cooperative NOMA system in which an eavesdropper could wiretap the information transmitted from the relay; it was shown that the use of the relay could reduce the SOP of the opportunistic user. In contrast to this study, an AF-based cooperative NOMA system comprising a multi-antenna source, a single-antenna relay, and a destination was investigated in [20]. The relay was considered untrusted and acted as an eavesdropper. When CSI was available, the antenna maximizing the secrecy rate was selected at the BS; to reduce the signaling overhead, a straightforward strategy based on choosing the antenna that maximizes the quality of the source-destination link was also presented. In [21], the authors suggested a two-stage secure relay selection strategy with NOMA to improve PLS in a cooperative network with multiple source-destination pairs and multiple relays. In the suggested approach, two sources simultaneously communicate with their respective destinations over a single selected relay in the presence of multiple eavesdroppers. The selected relay ensures successful, secure transmission of one source-destination pair at a predefined rate while maximizing the channel capacity of the other. According to the SOP analysis, the suggested technique greatly outperforms its OMA counterpart when the transmit power of the sources and the chosen relay lies in the low and medium regimes.
**Full-Duplex Relay:** The secrecy performance of NOMA systems with half-duplex and full-duplex relays (HDR and FDR) was studied in [22]. The considered system consists of two legitimate NOMA users and one eavesdropper, where a dedicated FDR (or HDR) assists the far user. Exact SOP expressions were obtained for the NOMA users in these networks. It was demonstrated that the FDR system always has a lower SOP than the HDR system. Additionally, owing to the relay that decodes and forwards the far user's messages in both the FDR and HDR systems, the NU has a much lower SOP than the FU. A cooperative NOMA system with two source-destination pairs sharing a single FD-DF relay was investigated in [23]. It is presumed that there are no direct links between the sources, the destinations, and the eavesdroppers. Therefore, the sources use
uplink NOMA to transmit information to the relay, while the relay employs downlink NOMA to forward the information to the destinations. In this scheme, the relay generates AN during information transmission to prevent eavesdropping. To maximize the system capacity, the optimal PA between the AN signal and the information signal is derived. Furthermore, the analytical and numerical SOP results show that the suggested method greatly outperforms the joint NOMA-and-AN HDR scheme. The authors of [24] designed a NOMA-based cooperative network that is secure against eavesdropping, in which the source transmits confidential NOMA symbols to the destination using a rate-splitting technique with the assistance of an FDR. Moreover, the source splits its symbol into two sub-symbols for superposition coding through PA and exploits the channel-gain differences created by the relay to make more efficient use of spectral resources. The relay operates in FD mode to simultaneously receive NOMA symbols and send jamming signals in the same frequency band, in order to increase the capacity of the legitimate channel and confuse any potential eavesdroppers.
Another secure NOMA-based two-way relay network, considering different eavesdropping scenarios, was introduced in [25]. It was demonstrated that the relay's ability to prevent eavesdropping, without degrading the legitimate users' performance, can be improved through the joint use of FD and AN techniques. It was also shown that applying FD mode at the user nodes in the first phase increases the data-transmission efficiency without requiring additional bandwidth resources. Finally, by deriving closed-form expressions for the ESC, it was established that the relay performs two crucial functions in these networks: it not only forwards the confidential information to the two users, but also jams any potential eavesdropper.
The existing works show that in high-SNR regimes, the SOP of NOMA systems for both the DF and AF protocols tends to a constant. Moreover, to provide reliable communication in such NOMA systems, suitable values for the target rate, the PA coefficients, and the power level of the eavesdropper link under the influence of jamming signals should be selected. Although the secrecy target rates in the studied scenarios have no impact on the SOP, the best SOP performance is achieved with the optimal NOMA PA parameters. Additionally, by carefully choosing the PA for the NOMA users and raising the power level allotted to the untrusted relay, the strictly positive SC can be increased.
Table V summarizes, for secure NOMA networks in the presence of a relay, the relaying strategy, the link structure, the duplex type, the PLS metrics derived, and the main contributions.
### _PLS for NOMA Systems with RIS_
RIS is a novel technology that has recently been proposed to address the randomness and uncontrollability of wireless signal propagation [83]. By managing the scattering, reflection, and refraction properties of radio waves, RIS can mitigate the detrimental effects of natural wireless propagation. Furthermore, RIS offers a new approach to the planning and optimization of wireless communication networks [84]. RIS can also enhance the received signals [85] or suppress undesirable signals such as co-channel interference [86] by adjusting the reflection amplitudes and phase coefficients. As already noted, power-domain NOMA can serve numerous users simultaneously within the same physical resource block, thereby enhancing spectral efficiency and connection density [87]. Since NOMA is more effective when the differences between the users' channel gains are larger, and RIS can proactively adjust the user channels to this end [87], combining RIS with NOMA is anticipated to further improve network performance. Private information is susceptible to eavesdropping due to the broadcast nature of wireless channels; PLS, which exploits the properties of wireless channels, is therefore employed to achieve secure communications. Since RIS can shape the wireless propagation environment, it can be used to improve PLS by intelligently adjusting the reflection coefficients to enhance the signal at the receiver and mitigate or cancel it at the eavesdropper [88]. Indeed, the emergence of RIS technology has provided a new means of addressing PLS problems. In the following, we go into detail on how RIS affects the secrecy performance of NOMA networks.
**Design Strategies for Ensuring PLS of RIS-Aided NOMA Networks:** In [24], a new RIS design to improve PLS in RIS-assisted NOMA networks was proposed, resolving the issue of how increasing the number of RIS elements affects the secrecy performance; it was also shown that the network can employ traditional channel-coding techniques to achieve confidentiality. For two common security scenarios in these networks, a joint precoding and reflecting beamforming approach against an internal untrusted user, and a joint beamforming strategy assisted by artificial jamming against external eavesdropping, were presented in [27], and the effectiveness and feasibility of both methods were demonstrated through simulations. For the first scenario, it was shown that a higher artificial-jamming SNR threshold results in a lower sum eavesdropping rate; for the second, better security performance is obtained with the same number of RIS elements when the RIS is close to the legitimate user. The introduction of RIS into NOMA networks enables secure communication using artificial jamming, where the multi-antenna BS transmits the superposition of the NOMA and jamming signals to the legitimate users with the help of the RIS, in the presence of a single-antenna passive eavesdropper [28]. In this investigation, the jamming vector, the transmit beamforming, and the RIS reflecting vector are jointly optimized to maximize the sum rate of the legitimate users while fulfilling the SIC decoding condition, the RIS reflecting constraint, and the quality-of-service (QoS) requirement. Furthermore, to ensure successful cancellation through SIC, the received jamming power is kept at the highest level for all legitimate users. Owing to the non-convexity of the formulated optimization problem, it is split into two subproblems, i.e., the beamforming optimization and the RIS reflecting optimization.
The successive convex approximation (SCA) approach was adopted to convexify each subproblem, after which an effective alternating procedure was designed to solve the two subproblems iteratively; a simplified illustration of such an alternating strategy is sketched below. The simultaneously transmitting and reflecting RIS in conjunction with NOMA is a very promising method that can considerably enhance coverage performance. On the other hand, eavesdroppers could obtain performance advantages comparable to those of the legitimate user. In order to maximize the secrecy rate, a secure communication approach supported by AN is offered in [29] as a solution to this issue. A secrecy-rate maximization problem was formulated by optimizing the passive beamforming, the NOMA parameters, and the AN signal model, under constraints on the individual secrecy requirements and the total transmit power. The results obtained from solving this non-convex problem demonstrated that the simultaneously transmitting and reflecting RIS combined with AN can significantly enhance the secrecy performance compared with benchmark schemes, and that increasing the number of RIS elements can decrease the required AN power. The ideas presented in this work provide useful guidance for the design of networks in which a RIS simultaneously aids the transmission and reflection of signals.
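The following highly simplified, self-contained sketch illustrates the alternating structure of such designs; it is not the SCA algorithm of [28] or [29]. For a single legitimate user, it alternates between (i) maximum-ratio transmit beamforming for a fixed RIS phase profile and (ii) closed-form per-element phase alignment for the fixed beam, each step being the exact maximizer of its subproblem. The channel statistics and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
M, N = 4, 32  # BS antennas, RIS elements
hd = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)        # direct link
G = (rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))) / np.sqrt(2)  # BS-RIS
hr = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)        # RIS-user

theta = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, N))  # unit-modulus reflections
for _ in range(10):
    c = hd + (hr * theta) @ G            # composite BS-to-user channel (length M)
    w = c.conj() / np.linalg.norm(c)     # (i) maximum-ratio transmit beam
    t = hr * (G @ w)                     # (ii) cascaded BS-RIS-user terms
    theta = np.exp(1j * (np.angle(hd @ w) - np.angle(t)))  # align all phases

snr_gain = np.abs((hd + (hr * theta) @ G) @ w) ** 2
print(f"received power gain after alternation: {snr_gain:.2f}")
```

Since each step exactly solves its own subproblem, the received power is non-decreasing across iterations, mirroring the convergence behaviour of the alternating schemes discussed above.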
PLS is studied for a RIS-assisted NOMA 6G network in [30], where a RIS is deployed to assist two NOMA users in a "dead zone", and both internal and external eavesdropping are considered. For the scheme with only internal eavesdropping, the worst-case situation, in which the NU is untrusted and tries to intercept the FU's information, is taken into consideration, and a combined sub-optimal beamforming and PA strategy is presented to enhance the PLS. The scope of the study is then expanded to a scenario with both internal and external eavesdropping, with two sub-scenarios: one in which the eavesdroppers do not have access to CSI, and one in which they do. A noise beamforming scheme is introduced for both sub-scenarios to protect against the external eavesdroppers, and an optimal PA strategy is proposed to further increase the PLS in the second sub-scenario. Finally, it is demonstrated that increasing the number of reflecting elements improves the secrecy performance, and brings more gain than increasing the number of transmit antennas. The secure beamforming of a two-user uplink NOMA system assisted by a RIS, in which both users transmit AN to confuse the eavesdropper while simultaneously sending messages to the BS, was studied in [31]. This paper formulates the joint beamforming optimization of the users and the RIS as a quadratic-fractional problem in order to increase security while ensuring max-min fairness. An uplink transmission framework in which a simultaneously transmitting and reflecting RIS relays the superimposed signals of indoor and outdoor users to the BS while preventing malicious eavesdropping was discussed in [32]. In this study, two joint beamforming optimization problems, maximizing the minimum SC and minimizing the maximum SOP, were formulated under different assumptions on the eavesdropping CSI. It
was observed that under the adaptive-rate wiretap code setting, it is better to deploy the simultaneously transmitting and reflecting RIS close to the users or the BS, whereas under the constant-rate wiretap code setting, it is better to deploy it far from them.
An effective beamforming technique with AN to provide secure NOMA transmission with the RIS was presented in [33], considering a practical eavesdropping scenario with imperfect eavesdropper CSI; the transmit power was minimized by solving a joint transmit beamforming and RIS phase-shift optimization problem. A secure NOMA network with distributed RISs helping a BS transmit private information to NOMA users in the presence of passive eavesdroppers was studied in [34]. The objective is to maximize the minimum secrecy rate of the legitimate users by jointly designing the reflection coefficients, transmit power, and beamforming, subject to the transmit power constraint at the BS, the phase-shift constraints of the RISs, the SIC decoding constraints, and the SOP constraints. The exact SOP was derived in closed form for the case of a single-antenna BS, and a ring-penalty-based SCA method was used to jointly optimize the power allocation and phase shifts. The general multi-antenna BS scenario was then considered, and a Bernstein-type-inequality-approximation-based alternating optimization method was developed to design the transmit beamforming matrix at the BS and to alternately optimize the reflection coefficients of the RISs. The results clearly showed that the maximum secrecy rate is attained when the distributed RISs share the reflecting elements equally. The use of inter-user interference as a secure transmission mechanism for an RIS-assisted NOMA system without the eavesdropper's CSI was suggested in [35]. By maximizing the transmit power of the weaker users and changing the SIC order, the authors attempted to minimize the SNR of the passive eavesdropper and thereby degrade its interception capability. Two transmit-power maximization problems were formulated by jointly optimizing the active beamforming vector and the passive RIS reflecting matrix, each under a distinct set of assumptions on the secure users. After decomposing each non-convex problem into two convex subproblems using semi-definite relaxation and SCA, an alternating optimization framework was adopted to solve them effectively. It was demonstrated that the suggested scheme outperforms the baseline schemes in terms of security performance under various QoS conditions.
Finally, to improve the internal secrecy of NOMA users with heterogeneous secrecy needs, RIS was employed in [36]. The goal is to jointly optimize the beamforming vectors and RIS reflection coefficients so as to minimize the overall transmission power under heterogeneous secrecy constraints. Utilizing the SCA and semi-definite relaxation techniques, an iterative solution based on alternating optimization was proposed to address this problem effectively. It was observed that, under the users' individual secrecy requirements, the suggested RIS-assisted NOMA algorithm can greatly reduce the power consumption.
**PLS of RIS-assisted NOMA Networks over Various Fading Channels:** PLS can be applied efficiently over fading channels to ensure security in the presence of attacks from illegitimate eavesdroppers [13]. To provide secure communications, the randomness of wireless fading channels is specifically exploited, and the wiretap coding method suggested in [62] has been widely adopted as a secure channel-coding approach in PLS. Recent studies have examined the critical secrecy performance metrics over a variety of fading channels, such as Rayleigh [66], [70], Rician [89], [90], Nakagami-_m_ [91], Fisher-Snedecor \(F\) [92], etc. Nevertheless, PLS metrics have rarely been employed to measure performance in RIS-assisted NOMA; some such studies are reviewed below. The SOP of a RIS-assisted NOMA network in a multi-user setting was studied in [37]; notably, the authors of [37] focused on Rayleigh fading when studying the PLS of RIS-assisted NOMA networks. The secrecy performance of RIS-aided NOMA networks with Nakagami-_m_ fading on the reflected links was then investigated in [38], where the closed-form expression of the SOP was obtained using the channel statistics. The analytical findings show that the number of RISs and the Nakagami-_m_ fading parameters affect the secrecy diversity orders and the expected channel gain of the reflected links, respectively. As an extension of this study, the effectiveness of the suggested network was assessed in terms of the average SC in [39]. [40] analysed the PLS of a RIS-assisted NOMA system over Fisher-Snedecor \(F\) composite fading channels. In more detail, closed-form expressions in terms of Meijer's G-function were derived for the outage probability and the SOP. The system is built on a RIS access point, and the RIS is employed to enhance the secrecy performance of two legitimate users. The analytical findings show the impact of the number of reflecting elements, the fading severity, and the shadowing on the PLS metrics of the system.
Tables VI and VII compare the above realizations for secure RIS-NOMA networks from two perspectives.
### _PLS for AmB-NOMA Systems_
It should be noted that in NOMA communication systems, a larger number of access users may cause a significant rise in power consumption. AmB communication has been suggested as a promising solution to this issue, since it can accomplish data transmission by exploiting the surrounding radio signals from ambient radio-frequency (RF) sources, without needing a dedicated energy source [93]. The backscatter device in AmB communication uses incident RF signals, such as Wi-Fi, cellular, or TV transmissions, to modulate and reflect its own signal to the readers [96]. Due to its applicability in low-power Internet-of-Things (IoT) networks, the integration of NOMA into backscatter communication has attracted significant academic attention. Nevertheless, because of their simple coding and modulation techniques, such systems are susceptible to a number of security risks, including interference and eavesdropping. In the following, the security aspects of these networks are reviewed.
**Impact of in-phase and quadrature-phase imbalance (IQI), cognition, and hardware impairments:** Investigations of the reliability and security of NOMA systems under the assumption that all nodes and the backscattering device experience IQI can be found in [44]. The results demonstrated that while IQI has a negative impact on reliability, it may improve the secrecy performance of the target system; additionally, the suggested AmB-NOMA system can offer robust secure communication for backscatter devices. By deriving various security metrics, [45] explored the secrecy performance of an overlay cognitive AmB-NOMA-based intelligent transportation system in the presence of an eavesdropping vehicle. For applications such as internet-of-vehicles (IoV)-enabled maritime transportation systems (MTS), the security and reliability of the cognitive AmB-NOMA network in the presence of IQI were examined in [46]. Such applications demand large-scale connectivity, heterogeneous QoS, and highly reliable, low-latency links, which underlines the significance of this subject. The PLS of AmB-NOMA systems, with a focus on security and reliability, was explored in [47] under the realistic assumptions of residual hardware impairments, imperfect SIC, and imperfect channel estimation. The RF source in this study also serves as a jammer, sending interference signals to the legitimate receivers and the eavesdropper in order to further increase the security of the considered system. According to the results, these impairments have a negative impact on the outage probability, but a positive impact on the intercept probability.
**Optimization perspective:** In [48], the simultaneous operation of NOMA IoT users with a backscatter device in the presence of several eavesdroppers is discussed. The main purpose of this work was to propose an optimization method for maximizing the secrecy rate of AmB-NOMA in the presence of several non-colluding eavesdroppers; benchmark schemes of conventional OMA and suboptimal NOMA were presented to evaluate the effectiveness of the suggested scheme. In [49], a secure beamforming technique for multiple-input single-output (MISO) NOMA backscatter symbiotic radio networks was devised to maximize the outage secrecy rate between the backscatter devices and the central user under achievable secrecy-rate constraints. The results show that the proposed network achieves a substantially larger secrecy-rate region than the OMA network.
A summary of the important aspects of securing AmB-NOMA networks is provided in Table VIII.
## V Challenges, Possible Solutions, and Future Directions
We point out several open problems and challenges related to PLS in NOMA networks and discuss the corresponding research directions and opportunities in this section.
**Simultaneous Transmitting and Reflecting RIS (STAR-RIS) Communications in Secure NOMA Networks:** The STAR-RIS has been developed to increase the effectiveness of RIS communications [95]. This technique enables the RIS to transmit and reflect incident signals simultaneously, increasing coverage and transmission efficiency. However, the fundamental issue of protecting privacy has not yet been well addressed, despite the considerable effort devoted to STAR-RIS-enabled NOMA communications. In fact, since a STAR-RIS provides 360-degree service, it unavoidably results in 360-degree eavesdropping exposure, which poses more significant security issues for the transfer of private information than a traditional reflecting-only RIS. Additionally, a number of other issues should be investigated for secure NOMA networks with RIS, including the effect of beam split on the transmit beamforming created at the RIS and its performance consequences, how to jointly optimize the transmission and reflection coefficients, and how the RIS should be deployed.
**Various Transmission Disturbances:** Current studies on 6G wireless networks have not adequately addressed the influence of diverse transmission disturbances, including partial CSI, discrete phase shifts, and random phase noise, on the performance of RIS-assisted transmissions. It is imperative to thoroughly investigate the modeling of these impairments and their impact on the SNR. Additionally, it is crucial to assess how these disturbances affect the performance of RIS-assisted transmissions in terms of the average secrecy capacity (ASC) and the SOP. By considering these factors, the evaluation of secure NOMA in Terahertz bands and non-terrestrial networks can provide valuable insights for optimizing 6G wireless systems.
**PLS Metrics under Correlated Fading Channels:** The development of secure communication techniques for RIS-aided NOMA networks under different fading channels has received attention in the literature [37, 38, 39, 40]. However, most of these studies assume independence between the channel coefficients when deriving PLS metrics such as the SOP and ASC, whereas in practice the coefficients are correlated. The strength of this correlation is influenced by the placement of the antennas, the proximity of the legitimate receiver and the eavesdropper, and the presence of scatterers in the area [96]. Therefore, it is essential to investigate the correlated fading wiretap channel in order to understand how fading correlation affects PLS in practical wireless communication networks. Appropriate mathematical tools are highly beneficial for assessing correlated fading channels and the associated issues; among the various ways of expressing statistical dependence between random variables, one of the most adaptable and effective is copula theory [97].
**PLS for AmB-NOMA systems with delayed-QoS considerations:** Current research on AmB-NOMA systems focuses exclusively on delay-insensitive performance analysis, which is insufficient to describe real-time services. A crucial direction for future work is a security analysis based on an analytical framework in which delay-constrained QoS is considered for AmB-NOMA systems.
**Deep Reinforcement Learning (DRL) for PLS in NOMA:** DRL has emerged as a powerful tool for solving complex decision-making problems in wireless networks [98]. By combining deep neural networks with reinforcement learning, DRL can learn optimal policies through interactions with the environment. In the context of PLS in NOMA networks, DRL can be utilized to design intelligent resource allocation strategies, beamforming techniques, and power control policies that maximize security while maintaining high-quality communication [99]. One potential application of DRL is in the optimization of transmit beamforming in NOMA systems with RIS. DRL algorithms can learn to adaptively adjust the transmit beamforming weights based on the channel conditions, interference levels, and security requirements. By considering the eavesdropping channel characteristics and the presence of RIS, DRL agents can optimize the beamforming vectors to enhance the security performance of NOMA transmissions. Moreover, DRL can also be employed to address the challenges associated with secure transmission under correlated fading channels. Traditional approaches often assume independent fading channels, which may not accurately represent real-world scenarios. DRL algorithms can learn to model and exploit the correlation between fading coefficients, enabling more accurate assessment of the secrecy performance and the
optimization of PA and beamforming strategies [100].
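As a toy stand-in for the DRL ideas above, the following sketch uses tabular Q-learning (a single-state, bandit-style update rather than a deep network) to learn which NOMA power split yields the largest average secrecy-rate reward under random fading; all channel statistics, learning parameters, and candidate actions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
actions = np.array([0.6, 0.7, 0.8, 0.9])  # candidate FU power shares
Q = np.zeros(len(actions))                # action-value estimates
lr, eps, P, s2 = 0.05, 0.2, 10.0, 1.0

def secrecy_reward(a):
    g_fu = 0.2 * rng.exponential()  # FU and eavesdropper fading gains,
    g_e = 0.1 * rng.exponential()   # drawn independently at every step
    r_fu = np.log2(1 + a * P * g_fu / ((1 - a) * P * g_fu + s2))
    c_e = np.log2(1 + a * P * g_e / ((1 - a) * P * g_e + s2))
    return max(r_fu - c_e, 0.0)

for step in range(20_000):
    # Epsilon-greedy action selection, then a bandit-style value update.
    k = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(Q))
    Q[k] += lr * (secrecy_reward(actions[k]) - Q[k])

print("learned values:", np.round(Q, 3),
      "-> best alpha:", actions[int(np.argmax(Q))])
```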
**Federated Learning (FL) for PLS in NOMA:** FL is a distributed learning paradigm that allows multiple entities, such as BSs or users in a NOMA network, to collaboratively train a shared model without sharing their local data [101]. This property makes FL an attractive approach for addressing security and privacy concerns in PLS. In the context of NOMA, FL can be employed to jointly optimize PLS-related tasks across multiple BSs or users. For instance, FL can be used to collaboratively learn interference mitigation strategies that enhance the security of NOMA transmissions. Each base station or user can train its local model using its own data while periodically exchanging model updates with the other entities. By aggregating the knowledge from diverse sources, FL can enable the extraction of global insights and facilitate the design of robust and secure NOMA communication systems [102]. Furthermore, FL can be applied to address the challenge of secure communication in NOMA systems with delayed QoS considerations. By collectively learning and adapting to the dynamic network conditions and delay requirements, FL algorithms can optimize resource allocation and power control policies that simultaneously meet the QoS constraints and ensure secure communications in AmB-NOMA systems [103-105].
## VI Conclusion
NOMA provides low latency, high spectral efficiency, and massive connectivity, whilst PLS offers simple and efficient security measures. These two technologies can operate together to meet the needs of 5G and beyond networks by enabling both high spectral efficiency and security. The effectiveness of PLS as a solution to these objectives has been extensively presented, analysed, and discussed in this study, along with the various PLS schemes for NOMA systems. Spectrally efficient, adaptable, and secure systems can be achieved by applying PLS to NOMA. Additionally, the reviewed schemes have been tabulated, discussed, contrasted, and summarised. Finally, to assist researchers interested in investigating this subject, some key challenges, possible solutions, and future directions have been outlined.
|
2310.17085 | Testing the Surface Brightness Fluctuation Method on Dwarf Galaxies in
the COSMOS Field | Dwarf galaxies are important tracers of small-scale cosmological structure,
yet much of our knowledge about these systems comes from the limited sample of
dwarf galaxies within the Local Group. To make a comprehensive inventory of
dwarf populations in the local Universe, we require effective methods for
deriving distance estimates for large numbers of faint, low surface brightness
objects. Here we test the surface brightness fluctuation (SBF) method,
traditionally applied to brighter early-type galaxies, on a sample of 20 nearby
dwarf galaxies detected in the COSMOS field. These objects are partially
resolved in HST ACS images, and have confirmed redshift distances in the range
17-130 Mpc. We discuss the many model choices required in applying the SBF
method, and explore how these affect the final distance estimates. Amongst
other variations on the method, when applying the SBF method, we alter the
standard equation to include a term accounting for the power spectrum of the
background, greatly improving our results. For the most robust modelling
choices, we find a roughly Gaussian SBF signal that correlates linearly with
distance out to distances of 50-100 Mpc, but with only a fraction of the power
expected. At larger distances, there is excess power relative to that
predicted, probably from undetected point sources. Overall, obtaining accurate
SBF distances to faint, irregular galaxies remains challenging, but may yet
prove possible with the inclusion of more information about galaxy properties
and point source populations, and the use of more advanced techniques. | Lauren M. Foster, James E. Taylor, John P. Blakeslee | 2023-10-26T01:07:09Z | http://arxiv.org/abs/2310.17085v1 | # Testing the Surface Brightness Fluctuation Method on Dwarf Galaxies in the COSMOS Field
###### Abstract
Dwarf galaxies are important tracers of small-scale cosmological structure, yet much of our knowledge about these systems comes from the limited sample of dwarf galaxies within the Local Group. To make a comprehensive inventory of dwarf populations in the local Universe, we require effective methods for deriving distance estimates for large numbers of faint, low surface brightness objects. Here we test the surface brightness fluctuation (SBF) method, traditionally applied to brighter early-type galaxies, on a sample of 20 nearby dwarf galaxies detected in the COSMOS field. These objects are partially resolved in HST ACS images, and have confirmed redshift distances in the range 17-130 Mpc. We discuss the many model choices required in applying the SBF method, and explore how these affect the final distance estimates. Amongst other variations on the method, when applying the SBF method, we alter the standard equation to include a term accounting for the power spectrum of the background, greatly improving our results. For the most robust modelling choices, we find a roughly Gaussian SBF signal that correlates linearly with distance out to distances of 50-100 Mpc, but with only a fraction of the power expected. At larger distances, there is excess power relative to that predicted, probably from undetected point sources. Overall, obtaining accurate SBF distances to faint, irregular galaxies remains challenging, but may yet prove possible with the inclusion of more information about galaxy properties and point source populations, and the use of more advanced techniques.
keywords: galaxies: distances and redshifts - galaxies: dwarf - methods: observational
## 1 Introduction
The Lambda-Cold Dark Matter (\(\Lambda\)CDM) model underpins modern cosmology, and has proven a great success in predicting the observed abundance and distribution of galaxies on large scales (Mo et al., 2010). On small scales, galaxy formation is strongly modulated by baryonic effects, which clearly suppress the abundance of dwarf galaxies through internal and external feedback processes, although the exact details remain unclear (Bullock and Boylan-Kolchin, 2017). These feedback processes are multiple and often depend strongly on local environment, so large statistical samples of dwarfs from a range of different environments are needed to tease them apart.
Unfortunately, a significant part of our observational understanding of dwarf galaxies is limited to the Local Group and its immediate surroundings, where dwarf galaxies can be detected to faint magnitudes and low surface brightness limits, using a variety of techniques (McConnachie, 2012). Within the local volume out to 10 Mpc, ongoing efforts are building up extensive samples of satellites (see e.g. Carlsten et al., 2022; Nashimoto et al., 2022, for full lists of references), but most are in systems similar to the Local Group in mass. Our census of dwarfs in more and in less massive systems is much less complete, despite the important information these environments provide about the underlying efficiency of galaxy formation (e.g. Sales et al., 2013; Grossauer et al., 2015; Muller and Jerjen, 2020; Garling et al., 2021; Roberts et al., 2021, 2022).
cal galaxies, whereas they also found the galaxy to have been recently star forming. A natural next step is to test whether this method can produce accurate distance estimates for the larger sample of partially resolved objects compiled by Xi et al. (2018).
In this paper, we apply the SBF method to a sample of 20 objects from the Xi et al. (2018) serendipitous catalogue, selected to have spectroscopic redshifts, and redshift-based distances of less than \(\sim\)130 Mpc. We compare the objects' SBF and proper distances, to explore the limits of the reliability of SBF distance estimates to galaxies of this type at these distances.
The outline of the paper is as follows. In Section 2, we describe our sample and the observational data used. In Section 3, we summarize each of the steps in the SBF method. In Section 4, we present a fiducial SBF distance estimate for each of the galaxies, following the SBF procedure outlined in Section 3. In Section 5, we test variations on the fiducial method, and determine their impact on the final results. In Section 6, we conclude by discussing the limitations of using the SBF method on dwarf galaxies at distances of 20-130 Mpc, and consider how the method might be improved in future work.
## 2 Data
### Galaxy Sample
The effective range of the SBF technique depends on the depth of the imaging used, but for bright galaxies imaged with HST, reaches 100-130 Mpc (Blakeslee et al., 2021; Moresco et al., 2022). For faint, low surface brightness galaxies, the range is unclear, but Xi et al. (2018) found a reasonable correspondence between general optical morphology (i.e. visual indications that systems were partially resolved) and distance for dwarf galaxies with \(i^{+}<20\) mag, out to distances of \(\sim 200\) Mpc. To test the effective range of the technique for fainter, less regular galaxies, we considered the 34 objects in their serendipitous catalogue with spectroscopic redshifts that put them within \(\sim\)130 Mpc.
Of these, 14 galaxies were excluded from our final sample; 5 because they were clearly intrinsically bright and/or star-forming galaxies, 2 because they had an extremely irregular morphology that would make model fitting difficult (see Section 3.1.2), and 7 because they were too small, faint, or low surface brightness. The COSMOS2015 IDs, heliocentric redshift distances (d = \(cz/H_{0}\), assuming \(H_{0}=70\,\rm km\,s^{-1}Mpc^{-1}\)), and coordinates of the remaining 20 objects are listed in Table 1, along with the cutout image sizes used in our subsequent analysis.
### Imaging and Spectroscopy
Each of the galaxies in our final sample has multi-band photometry from the COSMOS 2015 catalogue (Laigle et al., 2016), as well as a spectroscopic redshift given in Xi et al. (2018). We used these catalogue redshifts in all cases except for 549719, where Polzin et al. (2021) measured a redshift of 1222 \(\pm\) 64 km s\({}^{-1}\), and 213165, whose catalogue redshift places the galaxy too close. For this galaxy, a corrected redshift was obtained from the NASA Extragalactic Database (NED)1.
Footnote 1: [https://ned.ipac.caltech.edu](https://ned.ipac.caltech.edu)
The HST images used in the SBF analysis were HST/ACS F814W-band mosaics (Koekemoer et al., 2007; Massey et al., 2010) obtained from the NASA/IPAC IRSA COSMOS Cutouts Service, while Subaru \(i^{+}\), IA464, and IA484-band images (Taniguchi et al., 2015) from the same source were used to estimate the \(g-i\) colour of the galaxies.
## 3 Method
If the light emitted by a galaxy were produced entirely by stellar point sources of fixed luminosity, we would expect the number \(N\) of these sources falling within a single pixel of the image to scale with the physical area subtended by the pixel, and thus to vary as the square of the geometric distance to the galaxy. Poisson fluctuations in this number from pixel to pixel, across a region of uniform brightness, should vary as \(N^{1/2}\), that is as \(D\). For more realistic stellar populations with a range of intrinsic luminosity, the amplitude of fluctuations should depend on a weighted moment of the luminosity function, and thus on the age and metallicity of the population, and the resulting distribution would no longer be purely Poissonian (Cervino & Luridiana, 2006; Cervino et al., 2008). Given an absolute calibration for a similar stellar population, however, one should still be able to infer the distance from the amplitude of pixel-to-pixel variations, relative to the mean surface brightness. Based on this idea, Tonry & Schneider (1988) first proposed the Surface Brightness Fluctuation (SBF) method for determining absolute distances to nearby galaxies.
In practice, the SBF method requires a number of distinct steps: masking individual point sources and foreground/background objects; modelling the average light distribution across the galaxy, and subtracting and/or dividing by it; measuring the spatial power spectra of the residual map, of the background sky around the object, and of the point spread function (PSF) of the image; fitting the overall power spectrum to these components; and adjusting the final result, cast as an 'SBF magnitude', for an age- and metallicity-dependent zero-point. We discuss these steps, and the possible choices to make at each, below.
\begin{table}
\begin{tabular}{c c c c c}
COSMOS2015 ID & Redshift Distance (Mpc) & Right Ascension (deg) & Declination (deg) & Image Size (′′) \\
\hline
[MISSING_PAGE_POST]
\end{tabular}
\end{table}
Table 1: COSMOS galaxy ID, redshift distance, coordinates, and image size
### Masking and Modelling
An important first step in the SBF method is to mask any potential contaminants in the image that could give rise to extra fluctuations unrelated to discreteness noise in the underlying stellar population. Examples include foreground stars, background galaxies, globular clusters, and cosmetic artifacts in the image. Once the image is masked, we can create a smooth model of the galaxy's light distribution, to be used in subsequent steps. Masking and modelling are interconnected and happen simultaneously, so we will discuss them both in this section.
We use three methods for masking the image to identify bad regions and exclude them from subsequent analysis: manual masking, masking with automatic point source detection, and masking using the galaxy image pixel histogram. Each of the three methods builds on and incorporates the previous one(s).
#### 3.1.1 Initial Manual Masking
We first masked the images manually, by inspecting them visually, noting which areas of the image contained bad pixels or contaminating objects, and masking them as simple circular or rectangular regions. The goal of this process was to remove the most obvious contaminating sources from each image, so that the subsequent steps were more efficient.
#### 3.1.2 Sersic Model
Given our initial masking, the next step was to model the smooth light distribution of the galaxy. To do this, we initially created elliptical Sersic models of each galaxy, using **imfit** (Erwin, 2015), with our manual mask indicating regions to ignore, and the absolute value of the most negative pixel in the image taken to be the value of the previously subtracted constant sky background. We used reasonable guesses for the initial parameters of the fit in each case, including \(n=0.5\), \(I(r_{e})=0.01\) counts/s, and \(r_{e}=100\) pixels, but found these had little impact on the final fitted parameters (see Table 1), as did the constraints we put on parameters. To determine errors on the fit, we used **imfit**'s bootstrap resampling command, with 200 iterations. Given the Sersic Model fit, we restricted our subsequent SBF analysis to an elliptical region centred on the galaxy with a semi-major axis of \(2r_{e}\), and the same axis ratio as the Sersic Model.
#### 3.1.3 Bicubic Spline Model
We also created a second, non-parametric model for the smooth light distribution of the galaxy using bicubic spline interpolation (Jordan et al., 2004). First, we made a lower resolution version of the masked image, with each 40x40 pixel area on the original image corresponding to a single pixel on the lower resolution image. Then, we fit a bicubic spline function to the logarithm of the lower resolution version of the image using scipy.interpolate.griddata. Finally, we created the model by calling the bicubic spline function at the original resolution grid spacing, and taking the reverse logarithm of the resulting image.
To account for the mask affecting adjacent pixels during the process of lowering the resolution and interpolating between pixels, we also applied the spline fitting process to the manual mask. We masked all pixels in the resulting 'mask model' that had values less than 0.9, in effect expanding the edges of the manual mask.
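For concreteness, the block-averaging and interpolation steps might be sketched as follows. This is a minimal sketch, not the authors' code: the 40-pixel block size and the use of scipy.interpolate.griddata follow the text, while the function name, the handling of empty or non-positive blocks, and the edge treatment are our assumptions (scipy's cubic griddata is a piecewise-cubic interpolant standing in for a true bicubic spline).

```python
import numpy as np
from scipy.interpolate import griddata

def spline_model(image, mask, block=40):
    """Non-parametric smooth model (Section 3.1.3): block-average the masked
    image, interpolate in log space, and return to full resolution.
    Empty-block and edge handling are simplified here."""
    ny, nx = image.shape
    yc, xc, vals = [], [], []
    for j in range(0, ny - block + 1, block):
        for i in range(0, nx - block + 1, block):
            good = mask[j:j + block, i:i + block]
            if good.any():
                m = image[j:j + block, i:i + block][good].mean()
                if m > 0:  # log10 needs positive block means
                    yc.append(j + 0.5 * block)
                    xc.append(i + 0.5 * block)
                    vals.append(np.log10(m))
    yy, xx = np.mgrid[0:ny, 0:nx]
    # Piecewise-cubic interpolation back onto the full-resolution grid;
    # pixels outside the convex hull of the block centres come back as NaN.
    logmodel = griddata((np.asarray(yc), np.asarray(xc)), np.asarray(vals),
                        (yy, xx), method='cubic')
    return 10.0 ** logmodel
```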
#### 3.1.4 Automatic Point Source Mask
Given our manual mask and smooth model for the light distribution in each galaxy, we then used the **photutils.detection.IRAFStarFinder** method from the **photutils** package (Bradley et al., 2022) to automatically detect any point sources in the images that had been missed when masking manually, choosing a threshold of 0.018 counts/s, a background of 0, and a PSF FWHM of 3 pixels. For each galaxy, we masked each detected source using a circular area with a radius of 6 pixels. For efficiency, we only searched for point sources within the \(2r_{e}\) region of interest defined above. Point source properties are described further in Appendix B.
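A minimal sketch of this masking step is given below. The detection threshold (0.018 counts/s), PSF FWHM (3 pixels), and mask radius (6 pixels) come from the text; running the finder on the model-subtracted image and the helper's name are our assumptions.

```python
import numpy as np
from photutils.detection import IRAFStarFinder

def point_source_mask(image, model, fwhm=3.0, threshold=0.018, rmask=6):
    """Return a boolean mask (True = good pixel) with detected point sources
    removed (Section 3.1.4)."""
    finder = IRAFStarFinder(threshold=threshold, fwhm=fwhm)
    sources = finder(image - model)  # astropy Table of detections, or None
    good = np.ones(image.shape, dtype=bool)
    if sources is None:
        return good
    yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    for x0, y0 in zip(sources['xcentroid'], sources['ycentroid']):
        # Mask a circular region of radius `rmask` pixels around each source.
        good &= (xx - x0) ** 2 + (yy - y0) ** 2 > rmask ** 2
    return good
```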
#### 3.1.5 Smoothed Image Model
We then created a third, non-parametric model for the smooth light distribution in each galaxy, by simply convolving the masked image with a Gaussian kernel with a 5 pixel radius. This empirical model has the benefit of more accurately capturing any irregular structure in the galaxies, and being more customized to each individual galaxy.
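This model can be sketched in a few lines with astropy. Interpreting the '5 pixel radius' as the Gaussian kernel width is our assumption; conveniently, astropy's convolve interpolates over masked (NaN) pixels.

```python
import numpy as np
from astropy.convolution import Gaussian2DKernel, convolve

def smoothed_model(image, mask, width=5.0):
    """Empirical smooth model (Section 3.1.5): Gaussian-convolve the masked
    image. Masked pixels are set to NaN, which astropy's convolve
    interpolates over."""
    data = np.where(mask, image, np.nan)
    kernel = Gaussian2DKernel(x_stddev=width)
    return convolve(data, kernel, nan_treatment='interpolate',
                    boundary='extend')
```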
#### 3.1.6 Histogram Mask
Finally, we constructed a third, more sophisticated mask based on a pixel histogram of the galaxy's normalized residual image (NRI; see Section 3.2 for details on the construction of this image), masked with the automatic detection mask. The pixel histogram of the residual image (Fig. 1) includes a Gaussian component at low counts/s, but in some cases also an excess tail at high counts/s. We expect intrinsic surface brightness fluctuations to be Gaussian in the limit where large numbers of stars contribute significantly to the light in each pixel. Since a Gaussian component appears in the NRI distributions for all the objects in the sample, whereas only a few show a significant high-residual tail, for this masking method we assume the Gaussian component corresponds to intrinsic surface brightness fluctuations, whereas the rest of the distribution is produced by other effects. We mask any pixels with residuals in the range where the NRI histogram greatly exceeds the Gaussian component, as follows.
We create the histogram mask by estimating the peak of the distribution to be the bin with the most pixels, \(x_{max}\), and then fitting a normal distribution to a region around and below the peak, below a threshold value:
\[x_{\rm threshold}=x_{\rm max}+|0.2x_{\rm max}|. \tag{1}\]
This ensures that the fit is not affected by the non-Gaussian tail. If, due to noise, the fitted value of the peak is above \(x_{\rm threshold}\), we iterate until it drops below this value.
Given our fit to the Gaussian component of the pixel distribution, we then mask any pixels beyond the point where the fit intercepts the line \(y=1\), i.e., where we would only expect one pixel in the bin with this pixel value. This produces the most aggressive of our three masks.
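A sketch of this histogram-masking step follows, under some simplifying assumptions (a fixed number of bins, and a single fit rather than the iteration described above):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def histogram_mask(nri, mask, nbins=200):
    """Histogram mask (Section 3.1.6): fit a Gaussian to the low-residual side
    of the NRI pixel distribution, then mask pixels beyond the point where
    the fit drops below one count per bin."""
    vals = nri[mask]
    counts, edges = np.histogram(vals, bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    x_max = centers[np.argmax(counts)]        # peak of the distribution
    x_thresh = x_max + abs(0.2 * x_max)       # equation (1)
    region = centers <= x_thresh              # exclude the non-Gaussian tail
    popt, _ = curve_fit(gaussian, centers[region], counts[region],
                        p0=[counts.max(), x_max, np.std(vals)])
    amp, mu, sigma = popt
    if amp <= 1.0:
        return mask                           # fit never exceeds y = 1
    # High-side intercept of the fitted Gaussian with the line y = 1.
    x_cut = mu + abs(sigma) * np.sqrt(2.0 * np.log(amp))
    return mask & (nri <= x_cut)
```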
We note that the galaxies in our sample have on the order of \(10^{3}M_{\odot}\) per pixel. Judging from Tonry & Schneider (1988) and the results of Cervino & Luridiana (2006) and Cervino et al. (2008), this seems too low to expect fully Gaussian fluctuations, so it may be that the high-residual tails seen in the NRI for 6-7 of our galaxies are genuinely due to stellar discreteness noise, and the corresponding pixels should be included in the SBF calculation. On the other hand, the fact that they generally appear in objects with _larger_ stellar surface mass density makes this seem less likely. As discussed in appendix B, these galaxies also have the largest number of bright point sources, so residuals in the point source subtraction may explain the presence of the tails.
### Computing the Power Spectrum
The next step in the SBF method is to calculate the normalized residual image, and determine what fraction of the pixel-to-pixel variation in this image is due to SBFs. In practice, this is usually done in Fourier space, since for a raw CCD image, the power spectrum of the image should include an 'on-sky' component that is convolved with the PSF, and a white noise component from the readout electronics (Blakeslee et al., 1999).
After masking and modelling the galaxy, we create a normalized residual image (NRI). The NRI is calculated as:
\[NRI=\frac{image-model}{\sqrt{model}}\,, \tag{2}\]
where we first subtract the model from the image, so that only the fluctuations remain, and then divide by the square root of the model, to normalize the scale of the fluctuations to the expected value.
To compute the power spectrum of the NRI, we create an elliptical aperture around the galaxy and combine this with the mask, to ensure that any measured SBF variance is coming from the galaxy rather than from other sources.
Since SBFs occur in the real image of the galaxy on the sky, they will be convolved with the telescope's PSF. Taking the Fourier transform of the image converts this convolution to a multiplication, so we can simply divide the power spectrum of the NRI, minus any white noise component due to readout noise, by the power spectrum of the PSF, to estimate the amplitude of the SBFs.
Two complications arise. First, COSMOS mosaic images have been combined from multiple raw exposures, using drizzling with an interpolation kernel for geometric corrections, so the white noise component is not actually 'white', but has a slight dependence on wavenumber (as discussed further in section 5.3; see also Mei et al., 2005). To correct for this, we estimated the power spectrum of the background from blank regions in the image around the galaxy, and subtracted this empirical form when fitting for the SBFs.
A second complication is masking. This has a multiplicative effect on the image in real space, and thus corresponds to a convolution in frequency space. To account for this, we multiply the SBF component of the power spectrum by the power spectrum of the PSF convolved with the power spectrum of the full mask (taken to be the product of the manual mask, the automatic point source mask, the histogram-based mask, and the elliptical aperture described above). All Fourier transforms were computed in 2D using the Python function numpy.fft.fft2; azimuthal averages were then taken for each bin in wavenumber.
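The NRI construction (equation (2)) and the azimuthally averaged power spectrum might look as follows; the number of radial bins is our choice, and masked pixels are simply zeroed before the transform, consistent with masking acting multiplicatively in real space.

```python
import numpy as np

def azimuthal_power_spectrum(image, model, mask, nbins=50):
    """NRI (equation (2)) and its azimuthally averaged power spectrum,
    with k in inverse pixels (Nyquist frequency at k = 0.5)."""
    nri = (image - model) / np.sqrt(model)
    nri = np.where(mask, nri, 0.0)          # masking acts multiplicatively
    ps2d = np.abs(np.fft.fft2(nri)) ** 2    # 2D power spectrum
    ky = np.fft.fftfreq(nri.shape[0])
    kx = np.fft.fftfreq(nri.shape[1])
    kr = np.sqrt(ky[:, None] ** 2 + kx[None, :] ** 2)
    bins = np.linspace(0.0, 0.5, nbins + 1)
    idx = np.digitize(kr.ravel(), bins)
    pk = np.array([ps2d.ravel()[idx == i].mean() if np.any(idx == i)
                   else np.nan for i in range(1, nbins + 1)])
    return 0.5 * (bins[:-1] + bins[1:]), pk
```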
To compute the PSF, we used eight non-saturated stars from across the COSMOS field. We took circular areas of radius 25 pixels around each star and masked the rest of the image. We then computed the 2D power spectrum of each star and took the average of all eight power spectra as the mean power spectrum of the PSF. Fig. 2 shows the resulting 1D power spectra of the eight individual stars, as well as the mean PSF.
To compute the power spectrum of the background, we used four nearly empty areas (i.e. with only minimal masking required) in the COSMOS field, ranging in size from 15-20'' (500-667 pixels) on a side. We modelled each area using **imfit**'s FlatSky function, which has only one parameter (the surface brightness of the sky), and calculated a normalized residual image for each. Calculating the power spectrum of each and normalizing it to have an integral of 1, we then interpolated the resulting background spectrum to have the same sampling in \(k\) as each individual galaxy image. Finally, we took the mean of the four normalized and interpolated spectra to get the power spectrum of the background for each galaxy. Fig. 3 shows the resulting normalized 1D power spectrum of the background.
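Given per-region spectra from a routine like the one above, the averaging and normalization of the background spectrum reduces to a few lines; the trapezoid-rule normalization and the assumption of finite (NaN-free) input spectra are our choices.

```python
import numpy as np

def mean_background_spectrum(spectra, k_grid):
    """Average several empty-region spectra onto a common k grid, after
    normalizing each to unit integral (Section 3.2). `spectra` is a list of
    (k, P) pairs, e.g. from azimuthal_power_spectrum above."""
    resampled = []
    for k, p in spectra:
        integral = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(k))  # trapezoid
        resampled.append(np.interp(k_grid, k, p / integral))
    return np.mean(resampled, axis=0)
```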
#### 3.2.1 Noise Estimate
We have found experimentally that the shape and amplitude of the NRI power spectrum change slightly with the choice of aperture size and position, so to avoid creating any bias from a particular choice of these parameters, we created 50 different power spectra for each galaxy, randomly selecting the size of the semi-major axis \(a\) and the position of the centre (\(x_{0},y_{0}\)) from uniform distributions: \(r_{e}\leq a<2r_{e}\), and \(x_{0}-5\leq x<x_{0}+5\), \(y_{0}-5\leq y<y_{0}+5\), where \(r_{e}\) is the effective radius from **imfit** and \(x_{0}\) and \(y_{0}\) are the coordinates of the aperture centre in pixels. We held the ellipticity and angle constant, using the best-fit **imfit** values for these parameters.
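One randomized elliptical aperture can be sketched as below; the hand-rolled ellipse test stands in for a package routine, and the position angle convention is our assumption. Calling this 50 times and recomputing the power spectrum each time reproduces the procedure above.

```python
import numpy as np

rng = np.random.default_rng()

def random_aperture_mask(shape, r_e, x0, y0, ellip, theta):
    """One randomized elliptical aperture (Section 3.2.1): semi-major axis
    drawn from [r_e, 2 r_e), centre jittered by up to 5 pixels. `theta` is
    the position angle in radians, measured from the x axis."""
    a = rng.uniform(r_e, 2.0 * r_e)
    xc = rng.uniform(x0 - 5, x0 + 5)
    yc = rng.uniform(y0 - 5, y0 + 5)
    b = a * (1.0 - ellip)                    # semi-minor axis
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    dx, dy = xx - xc, yy - yc
    u = dx * np.cos(theta) + dy * np.sin(theta)    # ellipse-frame coords
    v = -dx * np.sin(theta) + dy * np.cos(theta)
    return (u / a) ** 2 + (v / b) ** 2 <= 1.0
```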
### Estimating the SBF Variance
The SBF variance is usually calculated by fitting the azimuthally-averaged power spectrum of the masked NRI, \(P(k)\), to the equation (Tonry et al., 1990):
\[P(k)=P_{0}\times E(k)+P_{1} \tag{3}\]
where \(P_{0}\), the quantity of interest, is the SBF variance, \(P_{1}\) is the white noise variance, and \(k\) is the spatial frequency in units of inverse pixels (e.g. Greco et al., 2021). \(E(k)\) is known as the (azimuthally-averaged) expectation power spectrum, and is a 1D average of the two-dimensional convolution of the power spectra of the PSF and the mask (Greco et al., 2021):
\[E(k_{x},k_{y})=|\mathrm{PSF}(k_{x},k_{y})|^{2}*|M(k_{x},k_{y})|^{2}\,. \tag{4}\]
We used **astropy.convolution.convolve_fft** to compute the expectation power spectrum, given the power spectra of the PSF and the mask, and normalized it to have an integral of 1 before fitting.
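A sketch of this computation is shown below. We substitute scipy.signal.fftconvolve for astropy's convolve_fft; the two are interchangeable here, and both inputs are assumed to be fftshift-ed 2D power spectra with the zero frequency at the array centre.

```python
import numpy as np
from scipy.signal import fftconvolve

def expectation_spectrum_2d(psf_power_2d, mask_power_2d):
    """E(k_x, k_y) = |PSF(k)|^2 convolved with |M(k)|^2 (equation (4)),
    normalized to unit integral. Azimuthally averaging the result (as in
    the earlier sketch) yields E(k)."""
    e2d = fftconvolve(psf_power_2d, mask_power_2d, mode='same')
    return e2d / e2d.sum()
```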
As mentioned above, however, we found that the white-noise component of the spectrum actually has a dependence on \(k\) (see Section 5.3). To account for this, we modified equation (3) to include the power spectrum of the background variance:
\[P(k)=P_{0}\times E(k)+P_{1}\times B(k) \tag{5}\]
where \(E(k)\) and \(B(k)\) are the expectation power spectrum and the power spectrum of the background respectively, computed as described above.
We fit to equation (5) using scipy.optimize.curve_fit, which uses a non-linear least squares algorithm. Given the model from equation (5), it returns the best fitting \(P_{0}\) and \(P_{1}\) values and the covariance for these. Note that when fitting, we use units of inverse pixels for \(k\) for convenience, i.e. we divide the standard wavenumber \(k\) by the number of pixels on one side of the image.
In fitting the expectation power spectrum, we limit the range of wavenumbers considered. Very low wavenumbers (large spatial scales) are affected by the smoothing used to make the empirical model of the smooth light distribution, while high wavenumbers are compromised by correlated noise between pixels (Greco et al., 2021) and/or errors in our background variance estimation. As discussed in Section 5.5, we tested many wavenumber ranges for the entire sample, finding the range 0.1-0.3 pix\({}^{-1}\) to be optimal.
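Putting the pieces together, the fit of equation (5) over the chosen wavenumber range might look like this; the initial guesses and the helper's name are our assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_sbf(k, pk, ek, bk, kmin=0.1, kmax=0.3):
    """Fit P(k) = P0 E(k) + P1 B(k) (equation (5)) over the chosen range of
    k (inverse pixels). E(k) and B(k) must share the sampling of P(k)."""
    sel = (k >= kmin) & (k <= kmax)

    def model(_, p0, p1):
        return p0 * ek[sel] + p1 * bk[sel]

    popt, pcov = curve_fit(model, k[sel], pk[sel],
                           p0=[pk[sel].max(), 1.0])
    return popt, pcov   # popt = (P0, P1)
```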
### Computing the Fluctuation Magnitude and the SBF Distance
Each of the 50 SBF runs for a single galaxy returns a value of \(P_{0}\), which needs to be divided by the number of unmasked pixels, \(N\), as \(P_{0}\) is the sum of pixel variances (Tonry and Schneider, 1988; Greco et al., 2021). To estimate the uncertainty, we computed the mean and standard deviation of the 50 \(P_{0}/N\) values, rather than of the 50 SBF distances, because some individual \(P_{0}/N\) values are negative, mapping to an infinite distance. This can happen when the data are noisy and the background term dominates equation (5). We calculated the apparent fluctuation magnitudes from:
\[m_{\rm SBF}=m_{\rm zpt}-2.5\log_{10}\frac{P_{0}}{N}\,, \tag{6}\]
where \(m_{\rm zpt}\) is the zero point magnitude of the original image.
To convert the apparent fluctuation magnitude to a distance, we need an independent estimate of the calibration magnitude \(M_{\rm SBF}\). We used the calibration developed by Carlsten et al. (2019), which was derived using tip of the red-giant branch distances for low surface-brightness dwarf galaxies. \(M_{\rm SBF}\) is calculated using the colour of the galaxy, because the amplitude of SBF is dependent on the stellar luminosity function of the galaxy, and thus on its stellar population.
Figure 1: Pixel value distributions in the Normalized Residual Images (NRIs – coloured histograms) created using the smoothed image model. Note that in this and most subsequent plots, the sample is ordered by distance from top left to bottom right. The thick points show the bins used to fit the Gaussian component, while the black curves and vertical lines show the final fit and fitted peak value respectively. The histogram mask excludes pixels with values in excess of the Gaussian expectation (i.e. the tail to the right of the Gaussian component). Galaxies are ordered from left to right as follows: 424575, 549719, 677414, 260583, 561851, 401988, 733922, 686606, 213165, 259971, 709026, 918161, 458976, 331749, 880547, 380820, 589205, 279307, 377112, 824852.
The age and metallicity of the population can be accounted for using an optical colour; hence the net dependence of \(M_{\rm SBF}\) on colour (Worthey, 1994; Carlsten et al., 2019; Polzin et al., 2021).
We tested two separate methods of computing the \(g-i\) colour needed for the calibration, calculating this either from the COSMOS photometry, or from the images directly. See Section 5.6 for a comparison of the two methods. For the first method, we estimated the \(g-i\) value using the COSMOS 2015 catalogue's 3'' (corresponding to 100 pixels in the F814W imaging) aperture magnitudes in the Subaru bands \(i^{+}\), IA464, and IA484. This aperture was selected, as the automatic aperture magnitudes give noisier colours (Laigle et al., 2016). The g-band magnitude is not available, so we estimated it as:
\[m_{g}\approx\frac{m_{IA484}+m_{IA464}}{2}. \tag{7}\]
\[g-i\approx m_{g}(3\arcsec)-m_{i}(3\arcsec) \tag{8}\]
The second method involves the same \(g-i\) estimation but using the Subaru \(i^{+}\), IA464, and IA484 images directly. For each galaxy, we created an approximate g-band image using equation (7), and used this to make a \(g-i\) image. We then applied an elliptical aperture, identical to that used on the galaxy, to this colour image. We took the average of all the values within the aperture as the estimated \(g-i\) colour, and the standard deviation as its error.
To use the Carlsten et al. (2019) calibration, we converted the galaxy's \(g-i\) colour to a F475W-F814W colour by solving equation (3) in Carlsten et al. (2019):
\[g-i=x-0.061x^{2}-0.040x-0.003 \tag{9}\]
where \(x\) is the F475W-F814W colour. Next, we used equation (4) in Carlsten et al. (2019):
\[M_{SBF,i}=(-3.17\pm 0.19)+(2.15\pm 0.35)\times(g-i) \tag{10}\]
to calculate the SBF absolute magnitude in the \(i\)-band. Since our imaging is in the F814W band, however, this was converted to this band using equation (2) in Carlsten et al. (2019):
\[M_{SBF,814}=M_{SBF,i}-0.540x^{2}+0.606x-0.253. \tag{11}\]
Finally, the SBF distance was calculated using:
\[m_{\rm{SBF}}-M_{\rm{SBF}}=5\log_{10}\frac{d}{10\rm{pc}} \tag{12}\]
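The full chain from fitted SBF power to distance (equations (6)-(12)) can be collected into one routine. This sketch omits uncertainty propagation, and the choice of the physical root of equation (9) (the one near typical dwarf colours, \(x\sim 1\)) is our assumption.

```python
import numpy as np

def sbf_distance(p0_over_n, m_zpt, g_i):
    """Fitted SBF power (P0/N > 0) to distance in Mpc via equations (6)-(12).
    Uncertainty propagation is omitted."""
    m_sbf = m_zpt - 2.5 * np.log10(p0_over_n)                # equation (6)
    # Invert equation (9): g-i = -0.061 x^2 + 0.960 x - 0.003,
    # where x is the F475W-F814W colour; keep the root near x ~ 1.
    roots = np.roots([-0.061, 0.960, -0.003 - g_i])
    x = float(np.real(roots[np.argmin(np.abs(roots - 1.0))]))
    M_i = -3.17 + 2.15 * g_i                                 # equation (10)
    M_814 = M_i - 0.540 * x ** 2 + 0.606 * x - 0.253         # equation (11)
    return 10.0 ** ((m_sbf - M_814) / 5.0 + 1.0) / 1.0e6     # equation (12)
```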
## 4 Results
The derived SBF distances were compared to distances based on the redshifts described in Section 2. To correct the latter for the effects of nearby structure, including the Virgo Cluster, the Great Attractor, and the Shapley Supercluster, we also derived flow-corrected proper distances based on the model of Carrick et al. (2015), by looking up the CMB-frame velocity in NED and then using the on-line tool at [http://cosmicflows.iap.fr](http://cosmicflows.iap.fr), with the cosmological parameters \((\Omega_{\rm m},H_{0})=(0.315,67.5\,{\rm km}\,{\rm s}^{-1}{\rm Mpc}^{-1})\). (Note that CMB-frame velocities are typically \(\sim 300\) km s\({}^{-1}\) larger than heliocentric velocities in the COSMOS field.) We assigned all flow-corrected distances an error of 4.3 Mpc, corresponding to \(300\) km s\({}^{-1}\) for an \(H_{0}\) value of \(70\,{\rm km}\,{\rm s}^{-1}{\rm Mpc}^{-1}\), to account for peculiar velocities. We use the proper distances given in Table 2 for all subsequent comparisons.
Our fiducial SBF distances assume the following choices or parameters:
* the smoothed image model to compute the NRI
* the histogram mask (incorporating the preceding manual and automatic masks) to remove contaminating sources and bad regions
* the realistic background SBF equation (equation (5)) for background correction
* a \(g-i\) colour derived directly from the image
* a wavenumber range of 0.1 to 0.3 pix\({}^{-1}\) when fitting the power spectrum.
Also, we run the SBF method 50 times on each object, selecting a random aperture each time (see Section 3.2), to estimate the uncertainties in \(P_{0}\) and \(P_{1}\), and propagate the former through the distance calculations.
The results are shown in Fig. 4, where we plot SBF distances versus proper distances, with the dashed grey line representing a 1:1 correspondence. Error bars are derived from the set of 50 variants on the fiducial aperture described above (\(\pm 1\sigma\) in \(P_{0}/N\)), and probably underestimate the true errors; these are hard to define precisely, as they depend on the range of choices we are willing to make in the method.
Figure 3: Power spectra of the four empty regions used to estimate the power spectrum of the background (thin coloured curves), as well as the mean power spectrum of the background (thick black curve).
Figure 2: Power spectra of the eight non-saturated stars used to estimate the PSF (thin coloured curves), as well as the power spectrum of the mean PSF (thick black curve).
For galaxies at flow-corrected distances of less than 100 Mpc, there is a clear correspondence between the SBF fluctuation amplitude and actual distance. Fig. 5 shows the mean power spectra for galaxies grouped into four bins by proper distance: 21.3-49.3 Mpc (8 objects), 49.3-77.4 Mpc (5 objects), 77.4-105.5 Mpc (1 object), and 105.5-133.6 Mpc (6 objects). Individual power spectra (within a \(1r_{e}\) aperture) were normalized by the number of unmasked pixels and averaged in each bin. We see a clear trend of decreasing power with distance, except in the fourth bin, which has a mean power spectrum similar to the third. Since we expect \(P_{0}/N\) to scale linearly with distance, we can normalize the results by distance by multiplying the power spectrum for each object by its distance before averaging; once again, Fig. 6 shows that the first three bins have comparable intrinsic power, while the last bin does not.
Finally, Fig. 7 shows residual NRI power spectra, after subtracting the fitted background component and dividing by the fitted SBF/Mask power spectrum:
\[\frac{P(k)-P_{1}B(k)}{P_{0}E(k)}. \tag{13}\]
As before, the results for each galaxy have been averaged in four distance bins. If our modelling and fit are correct, the residuals should be constant with \(y=1\) (black dashed line). This appears to be approximately true in the range \(k=0.13-0.33\) pix\({}^{-1}\), but outside this range we see significant positive and negative variations. (We also note that the third bin is very noisy because it contains only one object.)
From all this, we conclude that SBF power in our sample scales as expected out to \(\sim\)60-100 Mpc, but that beyond that range, some other component that is independent of distance dominates the power spectrum. At smaller distances, there is an additional problem with the zero-point of the relation. All of our sample points lie above the 1:1 line in Fig. 4, indicating that we are systematically underestimating the SBF power in all of them.
Fig. 8 illustrates these two problems for individual galaxies; distant galaxies generally have more power than expected, while nearby
\begin{table}
\begin{tabular}{c c c}
COSMOS ID & Redshift Distance (Mpc) & Proper Distance (Mpc) \\
\hline
[MISSING_PAGE_POST]
\end{tabular}
\end{table}
Table 2: COSMOS galaxy ID, redshift distance, and estimated proper distance
Figure 4: SBF distance \(d_{\rm SBF}\) versus proper distance \(d_{e}\). Objects with upper and lower limits are plotted with vertical error bars corresponding to \(\pm 1\sigma\) in \(P_{0}/N\). Objects with a finite lower limit but an infinite upper limit are plotted as upward-pointing arrows. Note the labels in the legend are sorted by increasing distance, as in Fig. 1.
Figure 5: Normalized NRI power spectra \(P(k)/N\), grouped by proper distance and averaged.
Figure 6: Average NRI power spectra for each of the four bins in proper distance, with the individual power spectra multiplied by their distance before averaging.
galaxies have only a third the power expected. We will explore these problems further in Section 5.
## 5 Testing Variations on the SBF Method
There are a number of choices to make when proceeding through the steps in the SBF method. They include the type of smooth model used, the type of mask used, how the background contribution is subtracted, the size of the aperture applied, the range of wavenumber values to fit, and the optical colour to use in the calibration. In this section, we discuss how we made each of these choices, and show how variations on our fiducial choices affect the results.
### Modelling the Smooth Light Distribution
In Section 3.1, we described three different models of the smooth light distribution in the galaxy: a Sersic profile created using imfit, a bicubic spline model, and a smoothed-image model created by convolving the image with a Gaussian kernel. The model of the light distribution is used when creating the NRI, using equation (2).
Using the parametric imfit model, we found that the NRI contains a lot of large-scale residual structure (Fig. 9). This is because the surface brightness profiles of the dwarf galaxies in our sample are generally too complicated to model with a simple Sersic profile.
Like the imfit model, the bicubic spline model also leaves behind a lot of residual structure in the NRI (Fig. 10). We also found that many of the NRI pixel histograms created using the bicubic spline models (Fig. 11) do not show a clear Gaussian component, and do not centre around 0 as expected. Furthermore, fitting the distribution for the furthest galaxy in the sample, 824852, did not produce a convergent result, and thus we could not obtain an SBF distance for this object.
By comparison, the smoothed-image model leaves much less residual structure in the NRI, as it is more closely customized to the shape and gross features of each galaxy (Fig. 12). In addition, the pixel value distributions of the NRIs created using the smoothed-image model (Fig. 1) show a Gaussian component and centre around 0 as expected, assuming fluctuations are in the large-\(N\) limit (see Section 3.1.6). Thus, we have adopted the smoothed-image model for our fiducial analysis, although we still use the Sersic parameters given by imfit to position and scale the apertures around each galaxy.
### Masking
As discussed in Section 3.1, there are several ways to create a mask. Each represents a compromise between removing all contaminating sources and bad regions, and preserving genuine SBFs. Of our three masking options, the histogram mask is the most effective because it captures the most contaminating sources.
The initial manual mask removes only the most obvious contaminants, while the automatic point source detection mask removes more point sources, but leaves noticeable oscillations in the power spectrum of the masked NRI (Fig. 13). These appear to be particularly strong in objects with large numbers of bright central point sources.
Figure 8: Measured SBF power \(P_{0}/N\) versus expected power, as a function of expected power.
Figure 7: Residuals of the average NRI power spectra for each of the four bins in proper distance, after subtracting the fitted background component and dividing by the fitted SBF component. The dashed line indicates the theoretical expectation, y = 1.
Figure 9: Normalized residual images created using the imfit models. Each image has a \(2r_{e}\) aperture applied around it, to focus on the residuals left behind after fitting. Galaxies are ordered as in Fig. 1.
We have determined experimentally that the oscillations originate from the point-source masking itself. Using the same aperture size for all the point sources detected introduces structure on this spatial scale, and thus produces periodic oscillations in the power spectrum, which will degrade the quality of the SBF distance estimate, as discussed further in appendix C.
To remove these features, an even more aggressive approach is required, so we introduce the histogram mask. This mask removes the oscillations seen in the automatic point-source detection mask spectra, and in some cases covers the entire region of the automatic mask, meaning a histogram masking approach could be effective on its own, without further point source removal. Thus, we adopt the histogram mask (which includes our previous two masks) in our fiducial method.
Figure 11: Pixel value distributions in the NRIs created using the bicubic spline model. Galaxies are ordered as in Fig. 1.
Figure 12: Normalized residual images created using the smoothed image models. Each image has a \(2r_{e}\) aperture applied around it. Galaxies are ordered as in Fig. 1.
Figure 10: Normalized residual images created using the bicubic spline models. Each image has a \(2r_{e}\) aperture applied around it. Galaxies are ordered as in Fig. 1.
Figure 13: Power spectra of the masked normalized residual images using different masks. These are created using the smoothed image models, and a \(1r_{e}\) aperture around the galaxy. Each subsequent masking method includes the previous method(s). Galaxies are ordered as in Fig. 1.
### Background Subtraction
Examining our NRI power spectra, we found that the power does not become constant at large wavenumbers, but continues to decrease down to the Nyquist frequency. This is an artifact of our images, which have been combined from multiple pointings and exposures, correlating and smoothing pixel noise over a range of scales (Mei et al., 2005; Mitzkus et al., 2018). Thus, rather than fitting the NRI power spectrum using the original SBF equation (equation (3)), we estimated the background from blank regions in the COSMOS field, as described above in Section 3.2, and fit using equation (5).
To illustrate the differences between the two approaches, we created an average power spectrum by combining the individual NRI spectra of the 5 nearest galaxies, normalized to the same integrated power, and fit to the average using both background subtraction methods. In place of the expectation power spectrum, we used the power spectrum of the PSF alone: the expectation power spectrum differs only slightly from the power spectrum of the PSF, and varies from object to object because each has a different mask. The fit was performed over the wavenumber range 0.1-0.3 pix\({}^{-1}\).
Fig. 14 shows the averaged masked NRI power spectrum (solid black curve), along with the two fits. It is clear that the fit using equation (3) (red dashed curve) does not accurately capture the shape of the power spectrum. Accounting for a realistic background contribution using equation (5), however (blue dashed curve), gives a much better fit to the spectrum than the white-noise assumption, justifying its use in our fiducial method.
### Dependence on Aperture Size
The choice of aperture size to use when measuring SBFs is unclear a priori. Too large an aperture will dilute the signal with a larger background contribution, while too small an aperture may leave too little area to measure the SBFs.
As explained previously, to account for some of the uncertainty introduced by this choice, we run the SBF measurement process 50 times, choosing a random aperture size each time, taking the final value of \(P_{0}/N\) to be the mean of the 50 resulting values, and its error to be the standard deviation of the distribution. Fig. 15 shows the value of \(P_{0}/N\) as a function of aperture size for each galaxy. In most cases, the variations seem small and/or random, but in some (e.g. 733922), we see indications of a systematic decrease in \(P_{0}/N\) with aperture size. This could indicate excess power from point sources or smooth model errors in the centre of the image, and we have used these plots iteratively to test the effectiveness of our masking choices.
### Wavenumber Range
As discussed in Section 3.3, both the low and high wavenumber ends of the NRI power spectra are compromised by various factors, and thus need to be excluded from the fitting. We have tested many different wavenumber ranges, with lower bounds ranging from 0.08 pix\({}^{-1}\) to 0.20 pix\({}^{-1}\) and upper bounds ranging from 0.20 pix\({}^{-1}\) to 0.50 pix\({}^{-1}\). We note that using the smoothed image model removes some large-scale power from the NRI, introducing a dip in the power spectrum at values below \(\sim\) 0.10 pix\({}^{-1}\), so we need to set the lower end of the range to at least this value.
To study the effect of wavenumber range, we plotted the values for \(P_{0}/N\) against the lower bound, using a fixed upper bound of 0.30 pix\({}^{-1}\) in Fig. 16, and against the upper bound, using a fixed lower bound of 0.10 pix\({}^{-1}\) in Fig. 17. For each plot, for efficiency, one point represents 25 rather than 50 SBF runs. Otherwise the process is the same as described in Section 3. A flat curve in these plots means the SBF signal is stable against small changes in the range, which suggests it is a good region in which to select a bound.
From Fig. 16, it is clear that the lower bound must be selected carefully as the SBF signal can vary significantly with the lower bound. We estimate that a lower bound in the region of 0.10 to 0.13 pix\({}^{-1}\) is the best choice overall, as this region is stable across most of the galaxies. Fig. 17 shows that the upper bound is much less important than the lower bound, as the plot is generally flat past about 0.30 pix\({}^{-1}\) for most galaxies. From this, we infer that the
Figure 14: Average power spectrum of the masked normalized residual image for the 5 nearest objects (black curve), along with fits assuming a flat white-noise background (equation (3) – red dashed curve), and a background determined directly from blank areas of the field (cf. equation (5) – blue dashed curve).
Figure 15: Dependence of \(P_{0}/N\) on aperture size. The values of \(P_{0}/N\) were computed using the smoothed-image model, with the histogram mask, and a fitting range \(k=0.1\) to 0.3 pix\({}^{-1}\). Galaxies are ordered as in Fig. 1.
SBF signal dominates at low values of \(k\), and that focusing on lower wavenumbers will give the best SBF fit. For the fiducial method, we choose to fit the wavenumber range 0.1 to 0.3 pix\({}^{-1}\) to capture as much of the SBF signal as possible while excluding the dip in the power spectrum below 0.1 pix\({}^{-1}\).
### Colour Variation
In Section 3.4, we outlined two methods for estimating galaxy colour. Both assume the same filter conversions, but one uses the catalogue magnitudes, while the other uses the images directly. For the second method, Fig. 18 shows pixel histograms of the \(g-i\) images within 1\(r_{e}\). It is clear from these histograms that in some cases, there is a large spread in the individual pixel colours relative to the mean catalogue colour, and that the latter may not correspond to the mean of the individual pixel colours. For these reasons, we choose to use the pixel-based approach in our fiducial method. The spread in some of the colour distributions (e.g. 824852) also suggests we should treat SBF distances derived assuming a single mean colour with caution, as the colour can vary widely across the extent of the object. An improvement on the method might be to account for the changes in colour across the object, which could be explored in future work.
## 6 Discussion
Dwarf galaxy populations in the local Universe represent a challenge and an opportunity to improve our understanding of galaxy formation, feedback, and dark matter physics. To identify local dwarfs and place them in their environmental context, distance estimates are essential, yet obtaining redshift-based distances for large numbers of faint, low-surface brightness dwarfs remains challenging. With a new generation of imaging surveys forthcoming, including space-based surveys with Euclid and Roman, it is worth reconsidering the potential for other methods such as SBF to tackle this problem.
The SBF method has traditionally been applied to relatively bright, high surface-brightness early-type galaxies with relatively simple stellar populations (Moresco et al., 2022), but in recent work, several groups have pushed to apply the technique to fainter systems within \(\sim 25\) Mpc (Carlsten et al., 2019; Polzin et al., 2021; Kim & Lee, 2021; Greco et al., 2021; Carlsten et al., 2022). In this paper, we have tested the possibility of applying the SBF method to a set of 20 dwarf galaxies at distances of 20-150 Mpc, drawn from the HST/ACS imaging of the COSMOS survey. There are several important challenges in getting the method to work for this sample, including the overall faintness and low surface brightness of the galaxies, internal gradients or scatter in stellar populations and/or extinction, irregular morphology, and a correlated background component introduced by mosaicing.
Figure 16: The normalized SBF signal plotted against the lower \(k\) bound, given a fixed upper bound of 0.3 pix\({}^{-1}\). The error bars represent the standard deviation on the values obtained using 25 randomly selected apertures. Galaxies are ordered as in Fig. 1.
Figure 17: The normalized SBF signal plotted against the upper \(k\) bound, given a fixed lower bound of 0.1 pix\({}^{-1}\). The error bars are as in Fig. 16. Galaxies are ordered as in Fig. 1.
Figure 18: Pixel histograms for the \(g-i\) images. The dotted vertical line represents the catalogue value and the dashed line represents the mean of the histogram values. The mean value and its corresponding standard deviation are what we choose to represent the \(g-i\) value of the galaxy. Galaxies are ordered as in Fig. 1.
After making careful choices in how the SBF method is applied to the images, we find a Gaussian SBF signal that does correlate linearly with distance, out to distances of 50-100 Mpc. On the other hand, the recovered SBF power is only \(\sim 1/3\) of that expected, leading to a corresponding overestimate in distance.
Beyond 100 Mpc, the measured SBF power is generally too high, leading to underestimates in distance (e.g., Blakeslee 1997). Examining the point source luminosity functions of our most distant galaxies (Appendix B), we infer that this excess power comes from faint point sources undetected in the relatively shallow COSMOS imaging. In principle, deeper observations that better resolve the point source luminosity function would overcome this limitation.
Estimating SBF distances also requires a number of (only partly constrained) choices in the detailed method, including how to model the galaxy surface brightness profile, how to identify and mask contaminating sources, how to estimate and subtract the background contribution to the NRI power spectrum, the colour used in computing the SBF calibration magnitude, and the wavenumber range used in fitting the power spectrum. For the galaxies in our sample, many possible choices in these details lead to systematic variations significantly larger than the simplest errors we have estimated by varying the aperture around each galaxy slightly. Several choices increase the average SBF power for nearby galaxies until it is closer to the expected level, but in doing so produce oscillations in the NRI power spectrum, non-Gaussian features in the NRI pixel distribution, larger uncertainties, and/or reduce the correlation between detected power and actual distance.
Recent work by Polzin et al. (2021) estimated a SBF distance to the nearest galaxy in our sample, 549719. They obtained a result in agreement with the proper distance (\(d_{\rm c}=21.3\)Mpc), \(d_{\rm SBF}=24\pm 3\)Mpc (their full galaxy measurement, which averages over two regions with slightly different colours). By comparison, with our fiducial method, we obtain a SBF distance of \(d_{\rm SBF}=56\)Mpc with lower and upper limits of 50Mpc and 67Mpc respectively. Thus, our fiducial estimate is incorrect by a factor of almost 3, and inconsistent with the Polzin et al. (2021) result. We can recover their result by modifying our methodology to make it more similar to theirs. If we use a single Sersic profile to model the smooth light distribution, flat background subtraction assuming the standard SBF equation (Eq. 3), a less aggressive mask (using only a manual mask modelled after their figure 2), and a wavenumber range of 0.15 to 0.4 pix\({}^{-1}\), we obtain a SBF distance of \(23\pm 5\) Mpc, in excellent agreement with their result. These are not optimal choices for the rest of our sample, however; if we apply this modified method to the whole sample, we find all our galaxies have estimated SBF distances in the range of \(\sim\)10-30 Mpc, and there is little or no correlation between \(d_{\rm SBF}\) and \(d_{\rm c}\).
There are a few ways we may imagine making progress on the problem of generalizing the SBF method to more diverse, distant, low surface-brightness populations of galaxies. With larger samples of partially resolved objects, we might be able to develop a new calibration for our current fiducial method, correcting for the zero-point offset seen in Fig. 4. We note that with a different calibration, most objects in this figure with true distances of less than 100 Mpc might have SBF distances consistent to within the (10-15%) statistical errors shown in this plot.
We also have several important pieces of additional information about the galaxies in our sample, including their associated point source luminosity functions (see Appendix B), their SEDs (see Appendix A), and the detailed shape of the pixel histogram (Fig. 1). These diagnostic features may allow corrections to the SBF method for age, metallicity, galaxy type and/or star formation history. More general statistical methods that incorporate the additional information, such as those used in machine learning (ML), might be applied to this problem, in order to determine the best choices to make in applying the SBF method, and which factors are most important in the analysis (see Tanoglidis et al., 2020; Muller and Schnider, 2021, for recent examples of the application of ML to the morphological classification of low surface brightness dwarf galaxies). Such methods might also be used to identify and correct the NRI in distant galaxies where undetected point sources contribute significant power, as discussed in Appendix B. We will explore these possibilities in future work.
Given the imaging data expected from forthcoming surveys (Moresco et al., 2022), extending the SBF method to a broader class of galaxies is a priority. The Euclid Wide Survey (Euclid Collaboration et al., 2022), for example, will cover 15,000 square degrees on the sky, almost \(10^{4}\) times the effective area of the COSMOS ACS imaging, albeit with lower resolution (\(\sim 0.2''\)) and less collecting area. If the 20-30 galaxies detected in the COSMOS field at distances of less than 200 Mpc are indicative, Euclid may see thousands of partially-resolved dwarf galaxies out to \(\sim\)50 Mpc. The Roman Space Telescope (Spergel et al., 2015), with 100 times the field of view of HST and better infrared sensitivity, could produce samples of thousands of objects out to distances of 200 Mpc or beyond. From the ground, the Vera Rubin Observatory (LSST Science Collaboration et al., 2009) will similarly produce large samples of partially resolved galaxies over the distance range of the COSMOS sample (Greco et al., 2021). These samples could provide a trove of information about galaxy formation on the smallest scales, if we can identify these nearby objects and correctly place them in their three-dimensional context.
## Acknowledgements
We thank M. Hudson, L. Parker, M. Bravo, D. Lazarus, and M. Oxland for useful discussions. We also thank the anonymous referee for comments which significantly improved this work. JET acknowledges support from the Natural Sciences and Engineering Research Council (NSERC) of Canada through a Discovery Grant. The COSMOS 2015 catalogue is based on data products from observations made with European Southern Observatory (ESO) Telescopes at the La Silla Paranal Observatory under ESO programme ID 179.A-2005, and on data products produced by TERAPIX and the Cambridge Astronomy Survey Unit on behalf of the UltraVISTA consortium.
This research made use of the NASA/IPAC Infrared Science Archive and the NASA/IPAC Extragalactic Database (NED), which are funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology, as well as the following software: astropy (Astropy Collaboration et al., 2013, 2018), DS9 (Joye and Mandel, 2003), imfit (Erwin, 2015), matplotlib (Hunter, 2007), numpy (Harris et al., 2020), photutils (Bradley et al., 2022), and scipy (Virtanen et al., 2020).
## Data Availability
The data used in this article are publicly available. The COSMOS 2015 catalogue (Laigle et al., 2016) can be accessed from the COSMOS website, at [http://cosmos.astro.caltech.edu/page/photom](http://cosmos.astro.caltech.edu/page/photom). Spectroscopic redshifts for our sample were taken from Xi et al. (2018), and are generally also available in NED. The derived data generated in this work will also be shared on reasonable request to the corresponding author. |
2307.12532 | On the Connection between Pre-training Data Diversity and Fine-tuning
Robustness | Pre-training has been widely adopted in deep learning to improve model
performance, especially when the training data for a target task is limited. In
our work, we seek to understand the implications of this training strategy on
the generalization properties of downstream models. More specifically, we ask
the following question: how do properties of the pre-training distribution
affect the robustness of a fine-tuned model? The properties we explore include
the label space, label semantics, image diversity, data domains, and data
quantity of the pre-training distribution. We find that the primary factor
influencing downstream effective robustness (Taori et al., 2020) is data
quantity, while other factors have limited significance. For example, reducing
the number of ImageNet pre-training classes by 4x while increasing the number
of images per class by 4x (that is, keeping total data quantity fixed) does not
impact the robustness of fine-tuned models. We demonstrate our findings on
pre-training distributions drawn from various natural and synthetic data
sources, primarily using the iWildCam-WILDS distribution shift as a test for
downstream robustness. | Vivek Ramanujan, Thao Nguyen, Sewoong Oh, Ludwig Schmidt, Ali Farhadi | 2023-07-24T05:36:19Z | http://arxiv.org/abs/2307.12532v1 | # On the Connection between Pre-training Data Diversity and Fine-tuning Robustness
###### Abstract
Pre-training has been widely adopted in deep learning to improve model performance, especially when the training data for a target task is limited. In our work, we seek to understand the implications of this training strategy on the generalization properties of downstream models. More specifically, we ask the following question: how do properties of the pre-training distribution affect the robustness of a fine-tuned model? The properties we explore include the label space, label semantics, image diversity, data domains, and data quantity of the pre-training distribution. We find that the primary factor influencing downstream effective robustness [43] is data quantity, while other factors have limited significance. For example, reducing the number of ImageNet pre-training classes by \(4\times\) while increasing the number of images per class by \(4\times\) (that is, keeping total data quantity fixed) does not impact the robustness of fine-tuned models. We demonstrate our findings on pre-training distributions drawn from various natural and synthetic data sources, primarily using the iWildCam-WILDS distribution shift as a test for downstream robustness.
## 1 Introduction
Transfer learning is a popular technique to deal with data scarcity, improve training speed, or transfer useful inductive biases that can benefit downstream tasks [7, 9, 27]. In the domain of computer vision, pre-training on ImageNet in particular has been the de-facto standard for obtaining features to solve a wide range of vision tasks, such as object detection [8, 17, 33], segmentation [5, 15], and action recognition [41]. While there exists previous work that seeks to pinpoint specific properties of ImageNet-trained features that benefit downstream performance [18, 22, 23, 37], the analysis is often done with respect to model accuracy. Our work instead examines the robustness of fine-tuned models to natural distribution shifts. Instead of looking at architecture variations and pre-training algorithms as done in prior work [13, 37, 48], we focus on the role of the pre-training data. This data-centric approach has been validated by past work [12, 26, 28], which show that the training data distribution plays a larger role than training method or architecture in influencing model robustness.
Robustness under distribution shifts is a fundamental concern for producing reliable machine learning systems: a model can perform in unexpected and undesirable ways when there is a mismatch between
the data distribution encountered in deployment and the one on which the model is trained [21, 35]. For example, a self-driving car should be able to generalize to a wide variety of weather scenarios to be considered safe, some of which it may not have seen during training. In our work, we focus on these forms of _natural_ distribution shifts [21, 32]--named so because they are induced by real-world processes--and study what aspects of the source dataset could help fine-tuned models become more robust to these shifts. We tackle this question along five different ablation axes: **(i)** Data quantity, **(ii)** Label granularity, **(iii)** Label semantics, **(iv)** Image diversity, and **(v)** Data sources. Most of our experiments revolve around the supervised learning setting, which makes controlled experiments on the pre-training distribution more tractable and systematic. Through a better understanding of the interplay between various properties of the pre-training distribution and downstream robustness, we seek to establish guidelines for constructing better pre-training datasets for fine-tuning.
Previous work by Miller et al. [26] experimented with a wide range of natural distribution shifts and found that pre-training on ImageNet yields the biggest improvement in robustness for the task of iWildCam-WILDS classification [2, 21]. Consequently, we use iWildCam-WILDS as a probe to evaluate how our interventions with the ImageNet pre-training distribution would alter the robustness trend uncovered in this previous work. We also analyze the use of other pre-training data sources that may differ significantly from ImageNet in both semantic content and data collection methodology. Our main findings can be summarized as follows:
(i) **Data quantity.** Pre-training with more data helps boost robustness. However, we do not need a lot of pre-training data to see significant robustness gains: using 25K images subsampled from either ImageNet or iNaturalist, which is 6\(\times\) smaller than the size of the fine-tuning dataset, already offers noticeable robustness improvements.
(ii) **Label granularity.** Making pre-training labels more coarse-grained lowers transfer robustness. The effect is less significant than altering data quantity: extreme reduction in label granularity (e.g., using 5 coarse classes instead of 1000 fine-grained classes) still preserves some of the robustness gains compared to training from scratch.

Figure 1: A summary of our experimental pipeline. We pre-train a model on a variety of different data distributions and evaluate its effective robustness after fine-tuning on a downstream task (i.e., iWildCam). By examining many models in this manner, we can determine empirical properties of the pre-training distribution that are important for fine-tuning robustness.
(iii) **Label semantics.** Given enough data and labels, pre-training on more semantically similar classes does not have a notable impact on the robustness of fine-tuned models. In particular, we find that pre-training on the 600 inanimate object categories in ImageNet yields the same effective robustness as pre-training on the 400 animal categories, despite the fact that the downstream task consists of only animal categories.
(iv) **Image diversity.** Given the same pre-training label set and data quantity, increasing per-class diversity (e.g., by including more subclasses) has no effect on transfer robustness. In addition, the trade-off between having more classes and more images per class is negligible if the total number of samples is kept constant.
(v) **Data sources.** We find that natural data sources (i.e., ImageNet, iNaturalist) yield similar downstream robustness when controlling for data quantity. Pre-training with synthetic fractal data is less effective at the same data quantity regime but still has some robustness gain to offer compared to training from scratch. Synthetic natural-looking data (e.g., generated by Stable Diffusion [34]) can help close this gap between using natural data and synthetic fractal data.
Overall we find that increasing pre-training data quantity and label granularity makes fine-tuned models more robust to distribution shifts. However, not all additional data is equally helpful. For instance, in the context of iWildCam-WILDS task, pre-training with natural-looking data offers much more robustness than using \(10\times\) more synthetic fractal data.
## 2 Background
The main motivation for our paper comes from the work by Huh et al. [18], which investigates various factors in ImageNet training that affect the quality of the features used subsequently for transfer learning. For our investigation, we shift the focus from accuracy to robustness against distribution shift, which has been a long-standing issue in machine learning [3, 4, 30, 42]. In particular, we analyze the robustness of pre-trained features to natural distribution shifts observed in the real world through the iWildCam-WILDS benchmark [21]. Furthermore, in contrast to Huh et al. [18], we experiment with a greater variety of more recent neural network architectures, in addition to exploring the use of synthetic pre-training data.
A key goal in robustness is to reduce the impact of distribution shifts on the performance of a model. If model performances on in- and out-of-distribution test sets are plotted along the \(x\) and \(y\)-axes of a scatter plot respectively, then a more robust model would lie closer to the diagonal \(y=x\) line. This notion of robustness was captured by Taori et al. [43] under the term _effective robustness_, which measures the difference between a model's actual OOD performance and what could be predicted from its ID performance (Figure 2). Miller et al. [26] adopted this effective robustness framework and evaluated hundreds of models on various distribution shift settings. The authors observed that ID performance is highly correlated with OOD performance. This linear trend mapping ID to OOD performance, and how close it is to the \(y=x\) line, is what we use in our work to compare the quality of the pre-trained features.
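To make this concrete, the sketch below shows one way to compute a baseline trend and a model's effective robustness. Fitting in logit-transformed score space follows the convention of [26, 43]; the function names, and the assumption that scores lie strictly in (0, 1), are ours.

```python
import numpy as np
from scipy.special import logit, expit

def fit_baseline_trend(id_scores, ood_scores):
    """Fit a linear ID -> OOD trend on logit-transformed scores.

    `id_scores` and `ood_scores` are arrays of test performances
    (fractions strictly between 0 and 1) for the baseline models.
    """
    slope, intercept = np.polyfit(logit(id_scores), logit(ood_scores), deg=1)
    return slope, intercept

def effective_robustness(id_score, ood_score, slope, intercept):
    """Gap between actual OOD performance and the baseline prediction."""
    predicted_ood = expit(slope * logit(id_score) + intercept)
    return ood_score - predicted_ood
```

Fitting the trend on models trained from scratch and then evaluating `effective_robustness` for a pre-trained model measures its vertical distance from the baseline line in Figure 2.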
More notably, Miller et al. [26] discovered that on the iWildCam dataset, models trained from
scratch and those that have been pre-trained on ImageNet lie on distinct linear trends, with the latter exhibiting much more robustness. We replicate these reported trends in Figure 2. Motivated by this result, our work seeks to better understand what aspects of ImageNet pre-training contribute to the improved robustness on iWildCam, and how these aspects translate to other pre-training data sources.
Previous work by Andreassen et al. [1] has looked at effective robustness over the course of fine-tuning and found that pre-trained models exhibit high effective robustness in the middle of fine-tuning, which eventually decreases as the training proceeds. The paper also experimented with ImageNet as one of the pre-training data sources. In our investigation, as a sanity check to remove the number of training epochs as a potential source of bias for the linear fit, we adopt the linear trend of models pre-trained on ImageNet and fine-tuned on iWildCam computed previously by Miller et al. [26] as the baseline. We then report the residuals from comparing actual OOD performance at different epochs to what could be predicted from the corresponding ID performance using this baseline. Refer to Figure 3 for more details. We find that in the context of iWildCam fine-tuning, at each epoch, the residuals from our architectures of choice concentrate around the \(y=0\) line and exhibit no particular trend. This in turn allows us to vary the number of fine-tuning epochs as a hyperparameter and obtain models covering a wide range of test performances for the scatter plots.
## 3 Experimental Setup
As mentioned earlier, the downstream task of interest is wildlife classification with the iWildCam-WILDS dataset [21]: the input is a photo taken by a camera trap, and the output is one of 182 different animal species. There are two test sets for evaluation: ID test data consists of images taken by the same camera traps as the training set, but on different days from the training and validation (ID) images. In contrast, OOD test data contains images taken by a disjoint set of camera traps from training and validation (ID) images. We include some examples of the geodiversity represented in each test split in Appendix Figure 13. Following [21], we report the macro F1 scores of the trained networks because this metric emphasizes performance on rare species, which is critical to the biodiversity monitoring application that the dataset was designed for.

Figure 2: Effective robustness is defined as movement towards a classifier which is robust to distribution shift (i.e., line \(y=x\)). Using this metric, Miller et al. [26] observes that for the iWildCam-WILDS task, models pre-trained on ImageNet are much more robust than models trained from scratch. We reproduce these two trends and use them as points of reference for our subsequent experiments, in which we modify the pre-training distribution and observe how our interventions alter the robustness trend lines.

Figure 3: We visualize the residuals of various architectures after fitting a linear trend that predicts OOD accuracy from ID accuracy. All models are pre-trained on the full ImageNet dataset and fine-tuned on iWildCam for 12 epochs. We observe that overall the residuals fluctuate around the \(y=0\) line and vary throughout the course of fine-tuning for most architectures.
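As a minimal illustration of why the macro F1 metric mentioned above emphasizes rare species, consider the toy example below (the labels are invented for illustration): macro F1 averages per-class F1 scores with equal weight, so missing a rare class lowers the score much more than under micro averaging.

```python
from sklearn.metrics import f1_score

# Class 2 is rare (a single sample) and entirely missed by the predictions.
y_true = [0, 0, 0, 0, 1, 1, 1, 2]
y_pred = [0, 0, 0, 1, 1, 1, 1, 0]

macro = f1_score(y_true, y_pred, average="macro")  # unweighted mean of per-class F1
micro = f1_score(y_true, y_pred, average="micro")  # dominated by frequent classes
print(f"macro F1 = {macro:.3f}, micro F1 = {micro:.3f}")  # macro ~0.54 < micro 0.75
```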
**Pre-training datasets.** We use ImageNet [10] and iNaturalist [45] as the primary pre-training distributions of interest, given their hierarchical structures, complexity, and relevance to the downstream task. The two data sources also differ in many ways (Table 1), hence their pre-trained features make for an informative comparison. We also include experiments with synthetic pre-training data by using Stable Diffusion [34] and the FractalDB-1k dataset [20]. We will elaborate on this in Section 4.5.
**Network architectures.** To obtain data for plotting linear trends, we train a range of standard neural network architectures including ResNet [14], ResNext [47], DenseNet [19], AlexNet [24] and MobileNet-V3 [16]. In our scatter plots, besides varying the architectures, we also vary the number of fine-tuning epochs to obtain models with varying F1 scores. Appendix A contains further training details. While our focus is on supervised pre-training, we also report some additional results with CLIP [31] architecture in Section 5.
In the subsequent sections, we detail different interventions made to the pre-training distribution to disentangle key properties of interest. We show the resulting linear trends in relation to the trends replicated from previous work [26], which include models trained from scratch on iWildCam (solid blue line) as well as models pre-trained on ImageNet (solid cyan line). For each trend line, we show 95% bootstrap confidence intervals for the linear fit.
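For reference, a pointwise bootstrap band for such a linear fit can be computed roughly as follows; resampling (ID, OOD) pairs with replacement, as well as the function and variable names, are our assumptions about the procedure.

```python
import numpy as np

def bootstrap_trend_band(id_scores, ood_scores, grid, n_boot=1000, alpha=0.05, seed=0):
    """Pointwise 95% bootstrap confidence band for a linear ID -> OOD trend."""
    rng = np.random.default_rng(seed)
    n = len(id_scores)
    fits = np.empty((n_boot, len(grid)))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample the model points
        slope, intercept = np.polyfit(id_scores[idx], ood_scores[idx], deg=1)
        fits[b] = slope * grid + intercept
    return (np.quantile(fits, alpha / 2, axis=0),
            np.quantile(fits, 1 - alpha / 2, axis=0))
```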
## 4 Experiment Results
### Effect of Data Quantity
First, we experiment with reducing the pre-training set size. To remove potential confounding effects from a long-tailed data distribution, we ensure that the class distribution of our pre-training datasets is uniform. ImageNet is already class-balanced, but this is not the case for iNaturalist [45]. We experiment with a 1000-class subset of iNaturalist using its most frequent classes. We further select images within each class uniformly at random so that the number of samples is the same across all classes. This results in a class-balanced training set of size 150K from iNaturalist. We repeat the same procedure to obtain subsets of size 100K, 50K, 25K and 5K. A similar subsampling process is done on ImageNet, using the full 1000-class dataset which already has a uniform label distribution.
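A sketch of this subsampling procedure is shown below; the data structures and names are ours.

```python
import random
from collections import defaultdict

def class_balanced_subset(samples, num_classes, total_size, seed=0):
    """Draw a class-balanced subset, e.g., 150K/100K/50K/25K/5K images.

    `samples` is a list of (image_path, label) pairs. The `num_classes`
    most frequent classes are kept, and the same number of images is drawn
    uniformly at random from each of them.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for path, label in samples:
        by_class[label].append(path)

    # Keep the most frequent classes; this matters for long-tailed iNaturalist.
    kept = sorted(by_class, key=lambda c: len(by_class[c]), reverse=True)[:num_classes]
    per_class = total_size // num_classes
    return [(p, c) for c in kept for p in rng.sample(by_class[c], per_class)]
```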
In Figure 4, we observe that reducing the data quantity during pre-training lowers the effective robustness of fine-tuned models. However, it is worth noting that at 25K images, pre-training with subsampled ImageNet and iNaturalist data still produces much more robust models compared to training from scratch. This is \(6\times\) less data than what is used for fine-tuning. As a sanity check, we find that using only 5K samples (i.e., 5 examples per class) during pre-training yields roughly the same level of robustness as training from scratch on iWildCam.

Table 1: Differences between the ImageNet and iNaturalist datasets.

| | Training set size | Number of classes | Class distribution | Class hierarchy | Expert-labeled |
|---|---|---|---|---|---|
| ImageNet | 1,281,167 | 1,000 | Class-balanced | WordNet | No |
| iNaturalist | 579,184 | 5,089 | Long-tailed | Tree of life | Yes |
### Effect of Label Granularity
Next, we adapt a question raised previously by Huh et al. [18] to our investigation: how does varying the number of pre-training classes affect downstream robustness? Following [18], we construct _supersets_ of classes in ImageNet using the WordNet hierarchy. We use the maximum of the shortest path distance from the root of WordNet to a label to compute the depth of the current label set. We then contract ImageNet label nodes along the shortest path to construct superclasses. Specifically, we investigate depths 2, 4, 5, 6, and 7, which result in class counts of 5, 17, 37, 85, and 232 respectively, in order to provide good coverage across a range of label granularities. Similarly, on iNaturalist, we use the superclass information that comes with the dataset to collapse the label space from 5,089 fine-grained classes to 13 coarse classes.
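The sketch below illustrates this label-contraction idea using NLTK's WordNet interface; ImageNet classes are identified by wnids such as 'n02084071', and the exact depth and truncation convention used in the paper may differ from this simplification.

```python
from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

def superclass_at_depth(wnid, depth):
    """Map an ImageNet wnid to its WordNet ancestor at a given depth.

    The shortest hypernym path from the root to the synset is truncated
    at `depth`; synsets shallower than `depth` map to themselves.
    """
    synset = wn.synset_from_pos_and_offset(wnid[0], int(wnid[1:]))
    path = min(synset.hypernym_paths(), key=len)  # root ... synset
    return path[min(depth, len(path) - 1)]

# E.g., contracting all 1000 wnids at depth 4 yields ~17 superclasses:
# coarse_label = {w: superclass_at_depth(w, 4).name() for w in imagenet_wnids}
```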
For ImageNet pre-training, we find that using the full 1000 classes provides the most robustness. However, when the label set size is reduced by four times (i.e., taking 232 superclasses at depth 7), model robustness only decreases slightly. From then on, reducing the label set further to 85 classes (depth 6), and then 37 classes (depth 5), does not deteriorate the linear trend further. Only when we experiment with 17 classes (depth 4) do we find another noticeable reduction in effective robustness. With 5 superclasses as the only pre-training labels (depth 2), pre-trained models are still significantly more robust than models trained from scratch.
On iNaturalist, we also observe a similar downward shift in linear trend when we reduce the initial label space to its phylum. Refer to Figure 5 for more details. Overall these findings suggest that using fine-grained labels during pre-training is better for learning representations that are robust to distribution shifts in downstream tasks. But even if only coarse labels are available, pre-training with enough data still has significant robustness benefits to offer. We note that when it comes to absolute F1 scores, pre-training on more fine-grained labels also tends to lead to higher performance.

Figure 4: Reducing the number of pre-training images randomly sampled from **(left)** ImageNet and **(right)** iNaturalist lowers the robustness linear trends of the fine-tuned models. However, using only 25K pre-training samples (green line) still yields significant robustness improvements compared to training from scratch on 129K iWildCam images (dark blue line). We subsample iNaturalist and ensure class balance, only including the 1000 classes with the most samples.
### Effect of Label Semantics
The number of pre-training classes seems to have an impact on downstream robustness, but does it matter what kind of classes models are pre-trained on? We next investigate whether using classes whose semantics are more aligned with the downstream task would improve robustness.
To do so, we separately pre-train models on ImageNet classes that are subsets of the "object" and "animal" WordNet synsets. This yields 2 broad categories that are similar in total sample size, each having around 600K images. In Figure 6, we find that models trained on "animal" classes (yellow line) exhibit slightly higher F1 scores, but roughly the same effective robustness as models trained on "object" classes (green line). This is surprising given that the fine-tuning distribution, iWildCam, contains only images of animals in the wild, which are semantically more similar to the animal classes of ImageNet. It is also worth noting that models pre-trained on "object" classes are also much more robust than models trained from scratch (blue line).
One potential confounder to this experimental setup is that some images from "object" classes also contain animals (i.e., co-occurrences that are not accounted for by ImageNet labels). To estimate the extent of this problem, we use TensorFlow's ImageNet2012 multilabel set [40], containing 20K ImageNet validation images with multi-class labels reviewed by experts. We find that 1.1% of the data have labels from both "animal" and "object" classes present in the same image, suggesting that the label co-occurrence issue only impacts a small fraction of training data. Consequently, to explain the result of Figure 6, we posit that training on a diverse set of classes in general helps the model pick up on useful invariances that in turn lead to similar downstream robustness. We explore this hypothesis further in Section 4.5 with synthetic training data.
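The co-occurrence estimate reduces to a set intersection over the expert-reviewed label sets; the data structures in the sketch below are hypothetical.

```python
def co_occurrence_fraction(multilabels, animal_wnids, object_wnids):
    """Fraction of images labeled with both an 'animal' and an 'object' class.

    `multilabels` maps image id -> set of wnids; the two wnid sets contain
    the ImageNet classes under the respective WordNet synsets.
    """
    both = sum(
        1 for labels in multilabels.values()
        if labels & animal_wnids and labels & object_wnids
    )
    return both / len(multilabels)  # ~0.011 on the multilabel validation set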
Figure 5: Results of changing the label granularity of the pre-training task by combining classes according to some semantics hierarchy to form supersets, for **(left)** ImageNet and **(right)** iNaturalist. In general, this intervention lowers model robustness on downstream task. However, extreme reduction of the pre-training label space, e.g. by \(200\times\) in the case of ImageNet, still offers robustness gains compared to training from scratch.
### Effect of Image Diversity
Besides labels, another source of diversity comes from the training images themselves. We experiment with two different notions of image diversity: **(i)** Number of categories collected, and **(ii)** Per-class image diversity.
#### 4.4.1 More Data Per Class vs. More Classes of Data
A natural question arises when designing a dataset with a fixed data budget (or labeling cost): _should we collect more data from existing categories or more categories of data?_ To address this question, we keep the total number of images fixed while varying the number of classes of ImageNet used for pre-training. For example, if we have a budget of 60K images and 100 randomly selected ImageNet classes, we sample 600 images uniformly at random from each of these classes (Figure 7). Here, we find that in the context of the iWildCam distribution shift, there is no difference on downstream robustness between having more data per class or having more classes, as long as the total number of images is constant. This observation also holds at a larger data quantity scale (300K images, see Appendix Figure 18). This result demonstrates the dominant effect of data quantity over other aspects of the pre-training distribution (e.g., label set size).
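A sketch of how such fixed-budget configurations can be generated is given below; the function and variable names are ours.

```python
import random

def fixed_budget_subset(by_class, num_classes, budget=60_000, seed=0):
    """Trade classes for images per class at a fixed data budget.

    E.g., budget=60K with 100 classes gives 600 images/class, while
    400 classes give 150 images/class. `by_class` maps each label to
    its list of image paths.
    """
    rng = random.Random(seed)
    chosen = rng.sample(sorted(by_class), num_classes)  # random class subset
    per_class = budget // num_classes
    return {c: rng.sample(by_class[c], per_class) for c in chosen}
```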
Figure 6: Varying the semantic category of classes included in the pre-training data yields similar robustness linear trends, with pre-training on only “animal” classes exhibiting slightly higher F1 scores than pre-training on only “object” classes. Even though the downstream task is animal classification, models pre-trained only on “object” classes are still much more robust than models that do not undergo any pre-training.
Figure 7: We vary the number of classes randomly selected from the original 1000 ImageNet classes and adjust the number of images per class correspondingly, such that total image quantity is fixed at 60K. We observe that having \(4\times\) more classes, or \(4\times\) more images per class, induces the same level of robustness in fine-tuned models. Experiments with 300K data regime can be found in Appendix Figure 18.
#### 4.4.2 Image Diversity Within Each Class
Another way of modulating dataset diversity is by changing _per-class_ image diversity. For example, given a "dog" class, a dataset which only contains images of one dog breed could be seen as less diverse than a dataset which has examples of several breeds. In order to construct a controlled experiment, we use a quantitative heuristic for the diversity of each class: we fix certain superclasses (using the WordNet hierarchy) as the training labels and vary the number of corresponding subclasses where the images are taken from. For iNaturalist we can do the same with the tree of life structure. More diversity means more subclasses chosen per superclass.
For the ImageNet distribution that is built on the WordNet hierarchy, we construct a subset following BREEDS [38]. The two main ways that BREEDS recalibrates the WordNet hierarchy fit our goals for image diversity: (i) nodes are only included in the hierarchy if they convey some visual information, and (ii) nodes of similar specificity share the same distance from the root (e.g., "dog" and "cat" are now both at the same depth even if the "dog" subtree is much larger). With this new hierarchy, we obtain 16 superclasses, each encompassing 12 subclasses (i.e., original ImageNet classes). The full list can be found in Appendix C.1. We vary image diversity by changing the number of subclasses per superclass: 4, 8 or 12 subclasses--corresponding to the diversity ratio \(p=0.33,p=0.67,p=1.0\) in the left panel of Figure 8. To prevent data quantity from being a confounding variable, we subsample images from each chosen subclass accordingly (e.g., if we reduce number of subclasses per superclass by \(3\times\) then we sample \(3\times\) more images from each subclass).
For iNaturalist, we fix the total number of images at 80K and apply the same procedure described above to select a fraction of subclasses (see diversity ratios in the right panel of Figure 8), for each of the following superclasses: "Plantae", "Insecta", "Mammalia", "Fungi", "Aves", "Reptilia", and "Amphibia." We choose this set of superclasses so we could have a uniform distribution of images per class while maintaining the same number of images as our ImageNet experiment. For more details, see Appendix C.1. As seen in Figure 8, for both ImageNet and iNaturalist, the resulting linear trends are highly similar regardless of the diversity ratio \(p\), or the number of subclasses per superclass. We conclude that in this case, per-class image diversity does not have a significant impact on downstream robustness. Note that this does not hold in the extreme setting, e.g. repeating the same image to produce a dataset.
### Pre-training with Different Data Sources
Moving beyond interventions _within_ each data distribution, we now compare fine-tuning robustness behaviors _across_ different pre-training data sources.
Compared to ImageNet, iNaturalist exhibits different characteristics on multiple axes (see Table 1). We expect that pre-training on the diverse, domain-specific species in iNaturalist, which have been verified by nature enthusiasts, will provide a boost in robustness for the downstream animal-in-the-wild classification task, compared to training on general web-curated classes in ImageNet. However, in Figure 10, we find that iNaturalist behaves similarly to ImageNet as a pre-training data source. Even when we subsample iNaturalist to follow the ImageNet class distribution (refer to Section 4.1 for its construction), we observe a similar level of effective robustness compared to the equal-sized 150K ImageNet subset (Figure 11). We hypothesize that once a certain level of "diversity" is reached with the training images and labels, there is negligible robustness gain to be made even if we increase the alignment between the pre-training and fine-tuning data domains.
#### 4.5.1 Synthetic Data Sources
To push our diversity hypothesis to the limit, we also pre-train the same set of architectures on FractalDB-1k dataset [20], which has similar class distribution to ImageNet but only contains synthetic fractal images. Pre-training on FractalDB-1k has been shown to surpass the accuracy of pre-training on ImageNet/Places [20]. For the task of iWildCam-WILDS, however, it is noticeably less effective at improving downstream robustness compared to natural image data (Figure 10). However, pre-training with fractal images still offers more robustness than training from scratch.
Can we generate better synthetic data for pre-training than FractalDB-1k? We experiment with Stable Diffusion [34], a popular diffusion model which generates high-quality images from natural language prompts, to generate natural-looking images following the ImageNet class distribution. We use 80 diverse prompts per ImageNet class from [31] to generate a 150K ImageNet-like dataset. Examples from this synthetic dataset can be seen in Figure 9. We find that pre-training on this dataset yields similar robust generalization behaviors as pre-training on the same quantity of ImageNet and iNaturalist images (Figure 11). However, at a larger scale of 1M images, the robustness benefits of pre-training with synthetic data begin to saturate and slightly lag behind iNaturalist and ImageNet. See Appendix Figure 19 for more details.

Figure 8: We fix the total amount of pre-training data and the label space, while reducing the number of subclasses that constitute each superclass label in **(left)** ImageNet and **(right)** iNaturalist. Smaller \(p\) (diversity ratio) means proportionally fewer subclasses per superclass. We find that reducing per-class diversity by up to \(3\times\) during pre-training has no effect on downstream robustness.

Figure 9: Each grid in order shows random examples from the ImageNet ILSVRC 2012 challenge train set [10, 36], the iNaturalist 2017 challenge train set [45], the FractalDB-1k synthetic train set [20], and a 1000-class ImageNet-style synthetic dataset generated using Stable Diffusion [34].
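A minimal sketch of such a generation loop with the `diffusers` library is shown below. The checkpoint id is illustrative (not necessarily the one used for these experiments), and `imagenet_classes` plus `templates` (the 80 prompt templates from [31], e.g., "a photo of a {}.") are assumed to be given.

```python
import torch
from pathlib import Path
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative checkpoint
    torch_dtype=torch.float16,
).to("cuda")

images_per_class = 150  # 1000 classes x 150 images -> a 150K dataset
for cls in imagenet_classes:
    out_dir = Path("synthetic") / cls
    out_dir.mkdir(parents=True, exist_ok=True)
    for i in range(images_per_class):
        prompt = templates[i % len(templates)].format(cls)  # cycle the 80 prompts
        image = pipe(prompt).images[0]
        image.save(out_dir / f"{i:04d}.png")
```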
Overall our findings demonstrate that while nuances in image semantics during pre-training are not important for fine-tuning robustness (e.g., "animal" versus "object" classes, or iNaturalist versus ImageNet), it is still beneficial to match the general characteristics of the downstream data (e.g., "natural-looking" images).
## 5 Self-supervised Pre-training
Previous experiments revolve around the supervised learning setting. However, it is increasingly common to pre-train on _self-supervised_ tasks using web-crawled corpora, which has been shown to significantly improve robustness to distribution shifts [31]. Our preliminary experiments with pre-trained CLIP models [31] on iWildCam show that the resulting ID and OOD performances still lie on the ImageNet pre-training linear trend (i.e., cyan line), despite the self-supervised training mechanism and the much larger training dataset of CLIP. Varying CLIP's data sources only moves the F1 scores along the same line (Figure 12).
We also repeat the experiments with varying data quantity (Section 4.1) in the context of CLIP's image-text pre-training data. Refer to Appendix D for more details. We leave an extensive evaluation of how "diversity" of the pre-training distribution should be defined and measured differently on open-vocabulary web datasets to future work.
Figure 11: Similar to the results in Figure 10, with a budget of 150K images, pre-training on FractalDB-1k is significantly less effective than iNaturalist or ImageNet. However, we find that generating natural-looking data with Stable Diffusion can close this gap. All datasets are subsampled to be class-balanced, each having 1000 classes. We note that at a larger pre-training data regime (e.g., 1M samples, see Figure 19), the effectiveness of Stable Diffusion data begins to lag behind natural data from ImageNet and iNaturalist.
Figure 10: Pre-training on a noisy, long-tailed distribution of natural images like iNaturalist (red line) does not change the robustness on downstream task, compared to pre-training on a clean, class-balanced dataset like ImageNet (cyan line). Pre-training on the same quantity of synthetic fractal data (FractalDB-1k) yields much lower robustness (green line), but still has some benefits compared to training from scratch (dark blue line).
## 6 Conclusion & Discussion
In this work, we find that many factors during pre-training such as label semantics and image diversity, do not significantly alter the effective robustness of models fine-tuned on iWildCam-WILDS. The more influential factors for downstream robust generalization are the _quantity_ of the pre-training data and the _granularity_ of the pre-training label set. Through experiments with Stable Diffusion, we also demonstrate the potential of synthetic natural-looking images as a way to increase the effectiveness of the pre-training distribution along these two ablation axes.
We can think about pre-training dataset construction in terms of an explore vs. exploit trade-off. Exploration, such as finding new data sources, is often time-consuming, while exploiting, or collecting as much data as possible from an existing source, can sometimes be significantly easier. Our experiments suggest that a good approach to building a pre-training dataset for robust generalization is to find a few data sources that exhibit robustness characteristics on a downstream task (e.g., Stable Diffusion data), and then collect as many samples from these sources as possible.
It is important to note that we are studying a different model behavior from Huh et al. [18]. Some interventions can reduce average performance while maintaining effective robustness (e.g., label granularity in Figure 5) and vice-versa (e.g., architecture modifications). Thus, certain choices during pre-training dataset design depend fundamentally on the goals of the dataset. For example, whether we want to achieve consistent performance across many settings (i.e., robustness) or optimize for very good performance on one specific application changes the applicability of our results.
An important open question is determining what characteristic of the iWildCam-WILDS task leads to the difference in linear trend between pre-training and training from scratch. Some other datasets (e.g., fMoW-WILDS, see [6, 21]) do not exhibit this behavior after fine-tuning. Therefore, it is important to pinpoint distribution shifts where pre-training can provide a significant boost in effective robustness. Finding a unifying property among such datasets would allow for better interpretation of our current results. As a first step in this direction, in Appendix E, we look into distribution shift settings constructed from the DomainNet benchmark [29], and observe that pre-training and training from scratch only produce different linear trends for certain pairs of domains and not others. Refer to this Appendix for further discussion.
Figure 12: Results from fine-tuning pre-trained CLIP models on iWildCam-WILDS. The models we use include CLIP with ResNet-50 image encoder trained on YFCC-15M [44] and LAION-15M [39] separately, as well as the original CLIP models released by OpenAI [31], including 3 with ViT [11] backbone trained on a dataset of size 400M. All of these models lie on the same linear trend as that of ImageNet pre-training, demonstrating the consistency of this trend across many pre-training dataset size scales and training algorithms.
## Acknowledgements
This work is supported in part by Open Philanthropy, the Allen Institute for AI, and NSF grants DMS-2134012 and CCF-2019844 as a part of NSF Institute for Foundations of Machine Learning (IFML). Ali Farhadi acknowledges funding from the NSF awards IIS 1652052, IIS 17303166, DARPA N66001-19-2-4031, DARPA W911NF-15-1-0543, and gifts from Allen Institute for Artificial Intelligence, Google, and Apple.
|
2304.14413 | Laboratory-Based Correlative Soft X-ray and Fluorescence Microscopy in
an Integrated Setup | Correlative microscopy is a powerful technique that combines the advantages
of multiple imaging modalities to achieve a comprehensive understanding of
investigated samples. For example, fluorescence microscopy provides unique
functional contrast by imaging only specifically labeled components, especially
in biological samples. However, the achievable structural information on the
sample in its full complexity is limited. Here, the intrinsic label-free carbon
contrast of water window soft X-ray microscopy can complement fluorescence
images in a correlative approach ultimately combining nanoscale structural
resolution with functional contrast. However, soft X-ray microscopes are
complex and elaborate, and typically require a large-scale synchrotron
radiation source due to the demanding photon flux requirements. Yet, with
modern high-power lasers it has become possible to generate sufficient photon
flux from laser-produced plasmas, thus enabling laboratory-based setups. Here,
we present a compact table-top soft X-ray microscope with an integrated
epifluorescence modality for 'in-situ' correlative imaging. Samples remain in
place when switching between modalities, ensuring identical measurement
conditions and avoiding sample alteration or destruction. We demonstrate our
new method by multimodal images of several exemplary samples ranging from
nanoparticles to various multicolor labeled cell types. A structural resolution
of down to 50 nm was reached. | Julius Reinhard, Sophia Kaleta, Johann Jakob Abel, Felix Wiesner, Martin Wünsche, Eric Seemann, Martin Westermann, Thomas Weber, Jan Nathanael, Alexander Iliou, Henryk Fiedorowicz, Falk Hillmann, Christian Eggeling, Gerhard G. Paulus, Silvio Fuchs | 2023-04-19T08:11:56Z | http://arxiv.org/abs/2304.14413v2 | # Laboratory-Based Correlative Soft X-ray and Fluorescence Microscopy in an Integrated Setup
###### Abstract
Correlative microscopy is a powerful technique that combines the advantages of multiple imaging modalities to achieve a comprehensive understanding of investigated samples. For example, fluorescence microscopy provides unique functional contrast by imaging only specifically labeled components, especially in biological samples. However, the achievable structural information on the sample in its full complexity is limited. Here, the intrinsic label-free carbon contrast of water window soft X-ray microscopy can complement fluorescence images in a correlative approach ultimately combining nanoscale structural resolution with functional contrast. However, soft X-ray microscopes are complex and elaborate, and typically require a large-scale synchrotron radiation source due to the demanding photon flux requirements. Yet, with modern high-power lasers it has become possible to generate sufficient photon flux from laser-produced plasmas, thus enabling laboratory-based setups. Here, we present a compact table-top soft X-ray microscope with an integrated epifluorescence modality for 'in-situ' correlative imaging. Samples remain in place when switching between modalities, ensuring identical measurement conditions and avoiding sample alteration or destruction. We demonstrate our new method by multimodal images of several exemplary samples ranging from nanoparticles to various multicolor labeled cell types. A structural resolution of down to 50 nm was reached.
correlative microscopy water-window X-ray microscopy fluorescence microscopy laser-produced plasma laboratory-based zone plates cell imaging
## 1 Introduction
Utilization of different imaging techniques to collect data from a sample allows to obtain more comprehensive information about its properties. This so-called correlative imaging is especially useful if complementary imaging techniques with different contrast mechanisms are combined. A particularly promising example is the combination of fluorescence and soft X-ray (SXR) microscopy in the water-window (Fonta and Humbel, 2015). Fluorescence microscopy (FLM) is a powerful tool in its own right for examining a variety of biological samples; it is in fact one of the most popular techniques in the life sciences (Lichtman and Conchello, 2005) especially with the rise of super-resolution techniques (Schermelleh et al., 2019). All FLM techniques are based on labeling specific components of the sample with fluorescent markers. This provides excellent functional contrast. But the greatest advantage of the method is also its greatest disadvantage: The sample typically cannot be imaged in its entirety as only labeled components are visible. This gap can adequately be filled by using soft X-ray (SXR) microscopy in the so called water window as a complementary correlative method. In the water window (WW) spectral range, which is defined by the absorption edges of carbon (282 eV/4.4 nm) and oxygen (533 eV/2.3 nm), a strong and label-free structural contrast can be achieved for biological samples, while still offering a relatively high penetration depth of several micrometers into water. A wide variety of imaging techniques have been established in this energy range, examples being coherent diffraction imaging (CDI), ptychography (Chapman and Nugent, 2010; Rose et al., 2018), X-ray holography (Mancuso et al., 2010) and Fresnel zone plate (ZP)-based methods (Jacobsen et al., 2019) such as scanning transmission X-ray microscopy (STXM) (Chao et al., 2012) or wide field imaging (Legall et al., 2012). Due to the relatively high penetration in water even tomography is possible (Schneider et al., 2010). At synchrotron facilities also correlative FLM-SXR microscopy has been demonstrated (Bernhardt et al., 2018; Duke et al., 2014; Smith et al., 2014; Hagen et al., 2012; Varsano et al., 2016). In all of these examples, fluorescence microscopy was used to identify and/or image the cellular components relevant to the research question. The structural and label-free contrast of SXR microscopy then allowed these components to be viewed in the context of their environment, providing additional information.
Many of the above-mentioned SXR methods are actually limited to synchrotron radiation sources, as they require a high flux of coherent photons, which cannot yet be generated in the laboratory. However, since access to these large-scale facilities is limited, it is of great interest to endow laboratory-scale SXR microscopy with the FLM modality to maximize the impact of both methods.
Generating sufficient photon flux in the WW spectral region is a challenge for laboratory-based setups. While WW coherent sources based on high harmonic generation exist (Gebhardt et al., 2021) their flux is still way too low for studies of biological samples. For this reason, most laboratory SXR microscopes are driven by incoherent plasma sources, where laser-produced plasmas have proved being the most powerful. Various target materials can be used ranging from solids (Fahy et al., 2021) and (cryogenic) liquids (Berglund et al., 1998, 2000) to gas targets (Muller et al., 2014; Wachulak et al., 2015). A detailed overview on the development of laboratory water-window microscopy is given in (Kordel et al., 2020). Although a number of laboratory-based SXR microscopes exist, to our knowledge a correlative instrument consisting of a compact SXR and FLM has not been demonstrated. What has been demonstrated is the combination of SXR with conventional light microscopy to accelerate the tomography measurement routine (Dehlinger et al., 2020).
In this work, we present a compact table-top SXR microscope with an integrated epifluorescence modality for "in-situ" correlative imaging. A sketch of our setup is shown in Figure 1. A double-stream gas puff target with nitrogen gas is utilized for the generation of monochromatic SXR line emission at a wavelength of 2.88 nm. Together with an ellipsoidal condenser mirror a wide-field zone plate microscope was built allowing for a structural resolution of 50 nm. Conventionally, correlative imaging is achieved by moving the sample from one microscope to another, oftentimes with additional preparation steps in between (Fonta and Humbel, 2015). Here, we directly integrated an epifluorescence microscope with different filter sets into the SXR microscope. This allows 'in-situ' correlation, i.e. there is no need to move or remove the sample from the SXR microscope's sample holder when switching between the different modalities.
Besides the obvious benefit of combining the functional fluorescence contrast with the natural structural contrast of WW microscopy, this offers several additional advantages: The field of view (FOV) of the SXR microscope is relatively small (\(<\)60 um), which complicates the identification of regions of interest, which is particularly relevant in consideration of the long exposure times in the SXR. By integrating the FLM into the SXR setup, it is possible to scan large areas of the sample quickly with the FLM to easily find regions of interest. In addition, the two images can be taken immediately after each other, so changes of the sample can be avoided. We demonstrate this with exemplary samples ranging from fluorescent nanoparticles and cyanobacteria to labeled 3T3 and COS-7 cells. Furthermore, our setup also allows for the investigation of the fluorescence response under SXR illumination to be studied, which is of particular interest for future applications. Our synchrotron-independent compact device, with a footprint of only 1.5 m \(\times\) 4 m, has the potential of becoming a stand-alone tool in biological research labs adding the unique label-free SXR structural contrast to the manifold of imaging methods.
## 2 Materials and Methods
### SXR Microscope
The SXR microscope has the same basic design as a wide-field optical microscope. A laser-produced plasma is used as the light source, and an ellipsoidal mirror serves as the condenser that illuminates the specimen. The objective of the microscope is a Fresnel zone plate, which images the sample onto a CCD camera. The full setup including the fluorescence microscope is shown in Figure 1. Due to the strong absorption of soft X-rays in air, the microscope is operated in a vacuum. In the following sections, the individual components are described in detail.
#### 2.1.1 Plasma Source
The SXR radiation is generated in a laser-produced plasma utilizing nitrogen as target gas. While it is possible to use solid or liquid targets for higher plasma densities, a gas target has the advantage of producing very little debris. In addition, it is technically easy to implement. The gas nozzle used is a so-called double-stream gas puff target (GPT) (Wachulak et al., 2010), which has been employed for various different applications requiring extreme ultraviolet or SXR radiation ranging from microscopy (Wachulak et al., 2015) to XUV coherence tomography (Fuchs et al., 2017; Wachulak et al., 2018; Skruszewicz et al., 2021). It consists of two circular concentric nozzles. The actual target gas streams out of the inner nozzle, while a low absorption, low Z-number gas (typically helium) is emitted from the outer nozzle. The latter limits the target gas expansion, thus allowing higher densities even at larger distances from the nozzle, which leads to higher photon flux. Nitrogen is used as the working gas because it provides isolated emission lines in the water window region at a favorable plasma temperature of about 150 eV [Kramida et al., 2022]. In order to reach the required plasma conditions, a commercial Nd:YAG laser system (Spectra-Physics, Quanta Ray Pro-350) is tightly focused with a 50-mm aspheric lens into the gas stream above the nozzle. The laser pulses have an energy of up to 2.5 J and a pulse duration of 10 ns at 1064 nm wavelength and 10 Hz repetition rate.

Figure 1: Sketch of the full setup. **A)** The top figure shows the running SXR microscope with the ZP imaging the sample structure on the detector. The fluorescence microscope is not active in this mode. **B)** The zone plate, objective and the first mirror behind it are moved sideways to allow the fluorescence microscope to image the sample, which remains in place. The laser and plasma source are not active in this mode.
#### 2.1.2 Condenser Optics
The emitted SXR-radiation is collected and focused by a nickel-coated ellipsoidal mirror (Rigaku) with a distance of 400 mm between the two focal spots. The ellipsoid has a length of 105 mm, an input NA of 0.05 to 0.1 and a focusing NA of 0.03 to 0.05. As a consequence, the reflection angles are 3\({}^{\circ}\), resulting in a reflectivity as high as 76%.
In the setup the optical axis is oriented perpendicular to the driving laser, as shown in Figure 1. For proper alignment of the mirror, the adjustment of tip, tilt, and translation is possible in all directions. In the respective procedure, the unfocused annular beam is viewed by the in-vacuum camera near, but not at, the focal point and adjusted for maximum symmetry. The same camera was also used to characterize the resulting focal spot. To prevent the camera from becoming oversaturated, an additional, approximately 4 µm thick aluminum foil was placed behind the condenser to attenuate the light. The focal spot is presented in Figure 2.C. The image is slightly smoothed to filter out the effects of the uneven filter foil. The focus is approximately Gaussian with a FWHM width and height of 750 µm \(\times\) 675 µm, which corresponds well to the previously determined size of the plasma itself. Since an ellipsoid images stigmatically only between its two focal points, there is no enlarged or shrunken image of the plasma. Rather, rays from points other than the source-side focus do not converge in the focal point of the ellipse and instead form a ring in the 'image' plane of the condenser. Therefore, the focal spot is relatively round and symmetrical even though the source is oval with an aspect ratio of 2:1. Nevertheless, the FWHM of the focus is slightly increased as compared to the source size due to the mirror being closer to the source than to the sample. This behavior was expected and confirmed by simulations with the ray-tracing software OpticStudio.
#### 2.1.3 Objective and Detector
Because of the high absorption of almost all materials in the WW spectral region, it is not possible to use refractive optics such as glass lenses or objectives for imaging. Instead, a Fresnel zone plate is employed, which uses diffraction instead of refraction for image formation. At a certain distance, a focus is created by constructive interference. The theoretical resolution is determined by the outermost zone width \(\Delta r_{N}\): \(d_{\mathrm{Rayl}}=1.22\,\Delta r_{N}\) [Attwood and Sakdinawat, 2016].
We use a ZP with a diameter of 180 µm and an outermost zone width of 33 nm. This results in a focal length of 2.06 mm at 431 eV photon energy and offers a theoretical resolution of 40 nm. The NA of 0.044 is matched to the condenser NA, providing incoherent illumination for maximum contrast [Heck et al., 1998]. The ZP was manufactured on a silicon nitride (Si\({}_{3}\)N\({}_{4}\), short SiN) membrane in 150 nm thick tungsten by Zoneplates Ltd. The image is detected with a back-illuminated CCD camera (Andor iKon-L, 2048\(\times\)2048 pixels, 13.5 µm pixel size), which can be mounted at distances varying from 500 mm to 1000 mm, depending on the desired magnification (250 to 500).
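The stated parameters follow from the standard thin zone plate relations, as the short calculation below verifies (using hc = 1239.842 eV nm for the energy-to-wavelength conversion):

```python
# f = D * dr / lambda, NA = lambda / (2 * dr), d_Rayl = 1.22 * dr
D = 180e-6                         # zone plate diameter [m]
dr = 33e-9                         # outermost zone width [m]
wavelength = 1239.842e-9 / 431.0   # 431 eV -> ~2.88 nm, in meters

f = D * dr / wavelength
NA = wavelength / (2 * dr)
d_rayl = 1.22 * dr
print(f"f = {f*1e3:.2f} mm, NA = {NA:.3f}, d = {d_rayl*1e9:.0f} nm")
# -> f = 2.06 mm, NA = 0.044, d = 40 nm
```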
#### 2.1.4 Pumping
A differential pumping scheme is employed to reduce the pressure in the main experimental chamber, where all measurements take place. At the position of the skimmer and filter, a partition was installed between the source and measurement chambers, such that the GPT resides in its own small vacuum chamber. The latter is pumped by a roots pump (Edwards iXL 600) to keep the pressure at roughly \(10^{-1}\) mbar during measurements. The relatively high pressure is almost exclusively due to the difficult-to-pump helium from the larger outer nozzle of the GPT. Since helium absorbs the SXR radiation only weakly, reabsorption of the generated radiation is nevertheless low. The main chamber is pumped by two turbomolecular pumps (Pfeiffer Vacuum HiPace), resulting in a pressure of about \(10^{-4}\) mbar when the gas nozzle is running. Both turbomolecular pumps are backed by a scroll pump (Pfeiffer Vacuum HiScroll 18).
### Fluorescence Microscope
For correlative imaging, a fluorescence microscope (FLM) was integrated into the SXR microscope in the form of a bright-field epi-setup. The scheme is shown in Figure 1.B. It has been realized in such a way that only the objective and two mirrors are located inside the vacuum chamber. All other parts of the FLM are outside.
The excitation light from a fiber-coupled LED is collected by a 30 mm achromatic lens and guided through a spectral filter to a dichroic mirror, which reflects the light towards the vacuum chamber. Inside the chamber the light passes through the microscope objective (Olympus UPlanSapo 40x2) and illuminates the sample. The fluorescent light emitted
by the sample is collected by the same objective and directed back as a collimated beam. Due to the Stokes shift, it has a longer wavelength than the excitation light and thus can be transmitted by the dichroic mirror. Additional filtering is used to mitigate the background signal from excitation light reflected at the sample. A tube lens is then used to create the fluorescence image on the detector (pco edge). Leaving all components except the lens in air allows easy switching between different filter sets and illumination LEDs. This has the decisive advantage of enabling multi-color fluorescence imaging. The objective is motorized for translation in all dimensions such that alignment and refocusing is always possible.
The objective of the FLM is placed next to the ZP on a shared stage. This allows fast switching of the imaging modality without the need to move the sample. The mirror directly behind the objective is moved together with the optics such that the SXR beam behind the ZP can reach the detector unobstructed.
Three different filter sets (Chroma Technology) have been used so far: one for UV/blue fluorescence (excitation window: 375 nm central wavelength, 28 nm bandwidth; cutoff wavelength of the dichroic mirror: 415 nm; emission window: 460 nm central wavelength, 50 nm bandwidth), one for green (480/40, 510, 535/50), and one for the deep red region (615/40, 635, >638). These filters were used in combination with different fiber-coupled LEDs from Pyroistech (LEDM-365, 365 nm), Mightex (FCS-0490, 490 nm) and Thorlabs (M625F2, 625 nm).
#### 2.2.1 Phototoxicity Measurements
It is worth noting that the setup can be used to quantitatively study the behavior of fluorescent dyes under SXR irradiation. These investigations can be of particular interest in view of the increasing use of correlative imaging methods, be it with X-ray or with electron microscopy. For respective investigations, the objective is moved to the "fluorescence mode" in front of the sample and used with the appropriate LED. At the same time, the sample is irradiated from behind using the laser and the plasma source. By taking frequent fluorescence images during this process, the decay of the signal due to the phototoxicity of the X-rays can be measured. Due to the preparation of the samples on SiN-membranes, this measurement can even be carried out with a reference. To this end, particles on the free-standing SiN-membrane and the silicon substrate next to the membrane are imaged simultaneously with the fluorescence microscope. Since the SXR radiation does not penetrate the 200 µm thick silicon, these particles are not affected.
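Assuming, for illustration, a single-exponential bleaching model, the dose-dependent decay could be quantified as follows; the input arrays and the reference-normalization scheme are our assumptions, not a prescribed analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(n, amp, k, offset):
    """Fluorescence signal after n SXR pulses: amp * exp(-k * n) + offset."""
    return amp * np.exp(-k * n) + offset

def fit_bleaching(pulses, signal_membrane, signal_reference):
    """Fit the decay constant per SXR pulse (assumed exponential model).

    `signal_membrane`: mean particle intensity on the SXR-exposed membrane;
    `signal_reference`: intensity of particles on the shielding silicon
    substrate, used to divide out LED-induced bleaching and drift.
    """
    corrected = signal_membrane / signal_reference
    corrected /= corrected[0]
    (amp, k, offset), _ = curve_fit(exp_decay, pulses, corrected, p0=(1.0, 1e-3, 0.0))
    return k
```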
### Samples and Preparation
The samples to be investigated were chosen to demonstrate the capabilities of the microscope with increasing complexity. In order to characterize the resolution of the SXR microscope, a Siemens star with a diameter of 60 µm and structure sizes decreasing to 50 nm half pitch towards the center of the star was investigated. This sample has been manufactured in a similar way to the ZP via electron beam lithography on a SiN membrane in 175 nm tungsten. All the other samples were prepared on SiN membranes (Norcada Inc). These membranes offer several advantages, namely transparency to both X-rays and visible light, mechanical stability, and compatibility with different sample preparation methods. They are manufactured on 5 mm \(\times\) 5 mm silicon wafers and typically have a thickness of around 50 nm. The silicon is partially etched away such that a free-standing SiN window with a size of up to a few mm remains. For our samples we typically use window sizes between 150 µm \(\times\) 150 µm and 500 µm \(\times\) 500 µm, as they provide a good compromise between open aperture and stability.
The first samples for demonstrating the correlation of a strong fluorescence signal and good SXR contrast were three different types of fluorescent nanobeads (FluoSphere Carboxylate-Modified Microspheres, excitation/emission 360/450, 480/520 and 625/645) with sizes of 1 µm and 200 nm. They were investigated with the three different filter sets and LEDs described above. For the preparation on the membranes, the bead suspensions were diluted in water in a ratio of 1:1000 for the 200 nm beads and 1:500 for the 1 µm beads, according to the results of preliminary tests on glass slides. The initial concentration of the bead suspensions before dilution with water is 2 percent. To achieve sufficient dispersion of the beads and to minimize clustering, the diluted suspensions were placed in an ultrasonic bath for five minutes. The membranes were then immersed in poly-L-lysine for 15 minutes before being washed with distilled water in order to ensure high particle adherence while avoiding clustering. After that, the particle suspension was applied and pipetted off after 10 minutes. This leads to a sufficient density of approximately uniformly distributed particles on the membrane with only a few clusters.
The first biological samples investigated were cyanobacteria of the type Synechocystis sp. PCC6803. These bacteria are autofluorescent due to their chlorophyll content. In addition, they are sufficiently robust to maintain their structure under vacuum conditions. The excitation of chlorophyll is possible over a broad spectral range; therefore the red filter set can be used. The bacteria were grown under constant light illumination until an optical density of 1.5 was reached at \(\lambda=\)720 nm. Then, the cells were harvested by centrifugation, washed with H\({}_{2}\)O and resuspended in H\({}_{2}\)O. Different dilutions were dropped on poly-L-lysine coated SiN membranes. Cells were air-dried and stored at 4°C until imaging.
For a demonstration of the capabilities and scientific potential of the correlative microscope, a conventional cell culture with multiple fluorescent labels was investigated. To this end, NIH-3T3 and COS-7 cells (Cell Lines Services GmbH) were cultured as described in [Seemann et al., 2017]. The cells were seeded and grown on SiN membranes. After the cells reached a confluence of 75%, they were incubated for 10 min with Mitotracker Deep Red 633 at 37\({}^{\circ}\)C and then fixed with 4% PFA for another 10 min. Immunofluorescence staining was done according to [Schneider et al., 2014]. After quenching with 25 mM glycine in PBS for 30 min, the cells were permeabilized and blocked with 10% horse serum and 2% BSA in PBS (blocking solution) with 0.2% Triton X-100. Alexa Fluor 488 phalloidin and DAPI incubations were done in blocking solution for 1 h at room temperature with PBS washing steps. The cells were stored in 4% PFA. Prior to critical-point drying in a Leica EM CPD300 automatic critical point dryer, the samples were dehydrated in an ascending ethanol concentration series (30, 50, 70, 90, 100%) for 10 min at each concentration.
#### 2.3.1 Sample Holder
In order to examine new specimens, the vacuum chamber must be vented and evacuated for the transfer, causing the focus positions of the optics to shift slightly. It is therefore useful to be able to place as many samples as possible in the microscope at the same time. As all samples are prepared on SiN membranes of the same size, a sample holder was developed that can hold up to 33 SiN membranes and the Siemens star. It is mounted on a 2D translation system from SmarAct, which allows precise lateral movement perpendicular to the optical axis over a range of 100 mm \(\times\) 100 mm. In addition, the distances between the various membranes on the sample holder are known, which greatly speeds up the search for the exact sample position.
## 3 Results and Discussion
### Plasma Characterization
In order to characterize the emitted radiation, a spectrometer consisting of a 2400-lines/mm VLS grating (Hitachi) and a CCD camera (Andor Newton) was set up. The design of the device is based on that of [Wunsche et al., 2019], but has been optimized for the present application. Respective measurements are shown in Figure 2.A. Dominant line emission from the 1s\({}^{2}\)-1s2p transition in He-like nitrogen at 431 eV is visible with additional lines at higher energies. Due to the large source size, the spectral resolution of the spectrometer is limited. We assume that the spectral line width is in fact much smaller than indicated here. Since monochromatic illumination is required for ZP microscopy, two 300-nm titanium absorption filters were placed between source and condenser. Two filters are necessary because a single filter would still transmit some visible light through micro holes. The transmission curve (red) of 600 nm titanium is shown in Fig. 2.A [Henke et al., 1993], along with the monochromatized spectrum calculated from the measured spectrum and the filter transmission (41% @ 431 eV). Taking into account the aperture and efficiency of the grating as well as the camera efficiency and transmission of the used filters, a photon flux of \(\approx 3\times 10^{11}\) photons/(sr\(\times\)pulse) was calculated. This compares quite well to a similar microscopy setup with a GPT [Wachulak et al., 2015]. The SXR microscope of the Stockholm group, which reported the highest photon flux for a laboratory-based WW microscope to date, reaches \(5.5\times 10^{11}\) photons/(sr\(\times\)pulse) [Martz et al., 2012]. However, it is based on a cryogenic nitrogen source working at 2 kHz instead of 10 Hz.
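The monochromatized spectrum in Figure 2.A follows from multiplying the measured spectrum with the tabulated filter transmission; a sketch of this step is given below, where the file names are placeholders for the spectrometer data and a 600 nm Ti transmission curve tabulated from the Henke data.

```python
import numpy as np

E_t, T = np.loadtxt("ti_600nm_transmission.txt", unpack=True)  # [eV], transmission
E_s, S = np.loadtxt("spectrum.txt", unpack=True)               # [eV], counts

# Interpolate the filter curve onto the spectrum's energy axis and apply it.
S_mono = S * np.interp(E_s, E_t, T)
print(f"Ti transmission at 431 eV: {np.interp(431.0, E_t, T):.2f}")  # ~0.41
```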
The size of the generated plasma has been measured using a pinhole camera setup. For this purpose, a 20-\(\mu\)m pinhole was placed 20 mm behind the source. Another 20 mm behind the pinhole, an in-vacuum CCD camera (greateyes GE-VAC) was placed to record the image as shown in Figure 2.B. In this case, a single titanium filter was placed between source and detector so that only the hot and dense plasma emitting the 431 eV line is imaged. The support mesh of the filter is visible in the recorded image. Moreover, it can be observed that the plasma is neither horizontally nor vertically symmetrical. This can be explained as follows: The gas was hit by the laser on the left side, as indicated by the red arrow. Due to the gradual absorption of the laser during propagation through the plasma, the highest intensity is observed slightly shifted to the left. In addition, self-focusing in the plasma causes a prolonged tail of lower intensity on the right side, resulting in a horizontal extension of 700 \(\mu\)m FWHM. In the vertical direction, the maximum intensity is slightly shifted downward, which is a direct consequence of the higher gas density closer to the nozzle. The vertical FWHM is just 340 \(\mu\)m. The difference in size originates from the focusing geometry.
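The quoted FWHM values can be read off line profiles through the pinhole-camera image. A minimal sketch of such an evaluation is given below; the synthetic asymmetric profile only stands in for the measured data.

```python
import numpy as np

def fwhm(profile, px_um):
    """FWHM of a 1-D intensity profile: distance between the first and
    last samples above half of the (background-subtracted) maximum."""
    p = profile - profile.min()
    above = np.where(p >= p.max() / 2.0)[0]
    return (above[-1] - above[0]) * px_um

# Illustrative profile (not measured data): Gaussian core plus a
# low-intensity tail on one side, as described for the horizontal cut.
x = np.linspace(-1000, 1000, 2001)                    # um
profile = np.exp(-0.5 * (x / 250.0) ** 2)
profile += 0.3 * np.exp(-np.abs(x) / 400.0) * (x > 0)  # one-sided tail
print(f"FWHM ~ {fwhm(profile, 1.0):.0f} um")
```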
### Siemens Star
For characterization of the X-ray microscope, a Siemens star resolution test target was examined as a first step. The results of this investigation are shown in Figure 3. In 3.A, an 80-s exposure showing the entire sample is presented. The field of view (FOV) is about 50 \(\mu\)m in each direction. The red inset indicates the area, which is enlarged in subfigures B
to E. In the 80-s exposure, the second ring from the center, with 100 nm half-pitch features, can be clearly resolved, and even the 50 nm structures of the innermost ring can already be discerned (Fig. 3.B).
For further improvement of the resolution and signal-to-noise ratio (SNR), the exposure time needs to be increased. This is done by stacking several successively taken images. However, simply superimposing five exposures of 80 s each actually leads to a reduction of resolution as shown in Figure 3.C. The innermost ring is not resolvable and even the contrast on the top right and bottom left of the second ring is clearly reduced. This is due to changes in position of the image on the camera. The underlying issue is a changing temperature of the instrument caused by the exhaust heat of the laser and the pumps, which causes thermal drift of the sample, the ZP, and/or the camera.
In Figure 3.D, the same five images were added, however the drift was corrected by post-processing. The respective algorithm will be described in the following section. It allows for a significant increase in resolution: Stacking only five exposures, equivalent to an exposure time of 400 s, the structures in the innermost ring can be resolved, indicating a resolution of 50 nm half-pitch (Fig. 3.D). Additional exposures increase the SNR even more, as shown in Figure 3.E, where the innermost ring and its structures are clearly visible.
#### 3.2.1 Drift Correction
The determination of the changes in position of the individual images on the camera is the prerequisite for drift correction. To this end, a cross-correlation (CC) of all exposures with a specific reference image is calculated. From the position of the correlation maximum, the displacement can be determined. For computing the CC, we do not use the entire image, but rather a region of interest (ROI) with distinctive features and high contrast. A welcome side effect is that the computation time for the CC decreases. In addition, the precision of the shift measurement can be increased by interpolating the images to twice their size such that the exposures are stacked with half-pixel accuracy. For a series of 53 images of the Siemens star, each with an exposure time of 80 s (Fig. 3), a drift of 770 nm in the vertical and 580 nm in the horizontal direction was detected and compensated for. The maximum drift speed for this measurement was 35 nm/exposure, i.e. the resolution of a single image was not limited by drift. Shorter exposure times would certainly reduce the drift between subsequent exposures. However, the SNR would degrade at the same time, which would reduce the accuracy of the drift correction and consequently also the resolution. Therefore, there is an optimum for the exposure time. As a final step, the drift curve obtained is smoothed under the assumption that the thermal drift is more or less linear and discontinuities are not expected. Then, the drift curve is used to adjust the positions of all the 80-s exposures.
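A minimal numpy/scipy sketch of this registration scheme is given below. Function names, parameters, and the sign convention are our own illustrative choices, not the authors' actual implementation.

```python
import numpy as np
from scipy.ndimage import fourier_shift, gaussian_filter, zoom

def shift_between(ref_roi, img_roi, upsample=2):
    """Displacement of img_roi w.r.t. ref_roi from the cross-correlation
    peak, computed on 2x-interpolated copies for half-pixel accuracy."""
    a = zoom(ref_roi, upsample, order=1) - ref_roi.mean()
    b = zoom(img_roi, upsample, order=1) - img_roi.mean()
    cc = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    peak = np.array(np.unravel_index(np.argmax(cc), cc.shape), dtype=float)
    shape = np.array(cc.shape, dtype=float)
    peak[peak > shape / 2] -= shape[peak > shape / 2]  # FFT correlation is circular
    return peak / upsample  # back to original pixel units

def stack(images, roi, smooth_sigma=2.0):
    """Drift-corrected sum of a series of exposures; roi is a tuple of
    slices selecting a high-contrast region used for registration."""
    ref = images[0][roi]
    shifts = np.array([shift_between(ref, im[roi]) for im in images])
    shifts = gaussian_filter(shifts, sigma=(smooth_sigma, 0))  # smooth, ~linear drift
    out = np.zeros(images[0].shape)
    for im, s in zip(images, shifts):
        # shift each exposure back onto the reference (sign: correlation convention)
        out += np.fft.ifftn(fourier_shift(np.fft.fftn(im), -s)).real
    return out
```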
The procedure is hampered by residue on the camera chip, which results in small features in the images that do not change position during warm-up. For this reason, the reference image is also displaced, in fact by a relatively large margin (\(\approx\) 2 \(\mu\)m), such that the sample structure dominates the correlation result and not the residue. For biological samples like the COS-7 and 3T3 cells, a Gaussian filter is additionally used to smooth the image before correlation. This reduces not only the effects of the residue but also of noise. The correlation of large structures such as the
Figure 2: Spectral and spatial characterization of the SXR plasma source. **A)** The measured spectrum shows a strong line emission at 431 eV and multiple lines at higher energies. By using a 600 nm titanium foil, the spectrum can be monochromatized. **B)** Direct image of the source recorded by a pinhole camera with titanium filters. The FWHM dimensions of the plasma are 325 \(\mu\)m \(\times\) 675 \(\mu\)m. **C)** Image of the condenser focus. It is almost circular and has FWHM dimensions of 675 \(\mu\)m \(\times\) 750 \(\mu\)m.
cell nuclei nevertheless provides precise information about the displacement, since it is basically a comparison of the centers-of-mass of the images, i.e. sharp edges are not required.
### Fluorescent Nanoparticles
The correlative imaging performance of the microscope was tested using three different types of fluorescent nanoparticles, which had been prepared on a SiN membrane. The red fluorescent beads have a diameter of 1 \(\upmu\)m, the blue and green beads 200 nm. A few typical microscope images are shown in Figure 4. In 4.A, three clusters consisting of 3 to 5 particles each are visible. They also appear with strong contrast in the SXR image, see Fig. 4.D.
An FLM can detect structures that are well below the resolution limit, as long as they are fluorescent. Accordingly, the small green (Fig. 4.B) and blue (Fig. 4.C) beads, which are dispersed more widely across the image, are also easy to recognize. This is different for the SXR image, which resolves only structures above the resolution limit. The blue and green particles are therefore much harder to detect: their size is not much larger than the resolution of the SXR microscope, so the contrast is low. Nevertheless, all of them can be recognized by close inspection of Figure 4.D. It should be mentioned that it would be hard to distinguish the small beads from residue on the camera. Interestingly, thermal drift and its subsequent compensation are helpful in this case, because the drift compensation procedure smears residue into lines, while the particles remain dots.
Figure 3: SXR image of a Siemens star test pattern with 50 nm half pitch features in the center. **A)** Single 80 s exposure, full FOV. **B)** Single image, zoom on red inset, innermost structures. **C)** 5 images added up without drift correction. **D)** 5 images added up with drift correction. **E)** 53 \(\times\) 80 s exposure for maximum contrast.
Because of the mixing of the particles before application, the different colors can cluster together, which is clearly visible in the composite image 4.E. Blue and green are often seen in the same position, especially around the large red beads. This can also be seen by comparing Fig. 4.B and 4.C. However, the blue and green fluorescence signals appear somewhat 'shadowed' around the large red beads (e.g. in the red frame). It is reasonable to assume that the small beads attach to the large beads, causing some of their signal to be blocked. This explanation is supported by the zoomed image 4.F (red frame in 4.A-D), where it can be seen that the large particles have small bumps on their surface, indicated by the red arrows in Fig. 4.F. These bumps are presumably the blue and green beads. Additionally, in the gaps between the large beads multiple small structures are visible, which are probably also smaller particles. These images already show the advantage of a correlative microscope, where, on the one hand, different fluorescent nanoparticles can be identified by their color, but, on the other hand, all are visible at higher resolution in the zone plate image.
### Cyanobacteria Synechocystis sp. PCC6803
The goal of our correlative microscope is its use in biological applications. Therefore, cyanobacteria were chosen as a biological sample due to their chlorophyll-based autofluorescence, which can be made visible with the red filter set. The resulting images are presented in Figure 5. The images of the full FOV (Figs. 5.A-C) show cluster formation of the bacteria, which originates from air-drying. In the SXR images (Figs. 5.A and D), individual bacteria are clearly discernible at high resolution. The exopolysaccharide capsules of the cells are visible as small gaps between them. Different (carbon) densities of the cyanobacteria lead to differences in SXR absorption and therefore to a different contrast of each bacterium in the image. On the left and bottom side of the image, the edge of the sample membrane can be seen, as well as a small dirt fiber in the center of the image.
In comparison, the fluorescence image exhibits a different contrast originating from the uneven intracellular distribution of the chlorophyll in the bacteria. This seems to correlate with the carbon density, which can be seen when comparing Figures 5.D and E. The darkest bacteria in the SXR image match the brightest bacteria in the fluorescence image. In addition, structures with no fluorescence signal can be identified as residue of the SXR camera or edges of the sample membrane. These examples show how the different contrast mechanisms of the two modalities nicely complement each other.
### 3T3 Cells
Next, we studied a conventional 3T3 cell culture. It was prepared as described earlier in the paper. As the signal from the Mitotracker was too weak and inconclusive, it was excluded from the evaluation. Figure 6 shows an exemplary region of the sample that includes four 3T3 cells. Figures 6.A and B show the SXR image at different magnifications, whereas C shows the FLM image of the actin staining in green and D shows the DAPI staining in blue. In E, a composition of all 3 channels is shown, for which, however, the SXR contrast has been inverted to allow better visibility of the fluorescence.
In the SXR image (Fig. 6.A), the four dark oval-shaped components are easily identified as cell nuclei, which give a dark contrast due to their high (carbon) density. This is confirmed in the fluorescence image by DAPI staining (Fig.
Figure 4: Correlative measurement of fluorescent nanobeads on SiN-membrane. **A**) 1 μm red fluorescent beads. **B**) 200 nm green fluorescent beads. **C**) 200 nm blue fluorescent beads. **D**) 90 \(\times\) 45 s SXR image shows 1 μm beads with strong contrast and small beads scattered around. **E**) Composite image of all 4 channels, some crosstalk between different colors is visible. **F**) Enlarged image of the red frame, showing 3 large beads and some small beads attached to them.
6.D). Furthermore, nucleoli are recognizable in the cell nuclei. In the magnified SXR image (Fig. 6.B), which shows the red-framed section from Fig. 6.A, the high resolution of SXR microscopy becomes even clearer. Two nucleoli in the nucleus and, in particular, the cytoskeleton surrounding the nucleus can be seen, revealing the dense fiber network. The labeled actin cytoskeleton is also shown in the green fluorescence image (Fig. 6.C). In the composite image (Fig. 6.E), the interaction of the different contrasts is again particularly clear. Again, the SXR contrast was inverted for this image
to allow better visibility of the fluorescence. In addition to what is displayed here, other components of the cytoskeleton such as microtubules or intermediate filaments could be stained as well. Also cytoskeleton-associated proteins could be labeled.
### COS-7 cells
The same procedure as for the 3T3 cells was performed for the COS-7 cells, except for labeling with DAPI. The recorded images are shown in Figure 7. Panels A and B show the SXR image at different magnifications. FLM images are presented in C and D: C shows the red channel with mitochondrial staining, and D shows the green channel, again with actin staining. In E, a composite image is presented in split view. The upper left half shows the mitotracker and SXR, while the lower right half shows the actin staining and SXR. Again, the SXR contrast is inverted.
Since the COS-7 cells are also fibroblasts, similar components of the cells can be seen as in the 3T3 cells. These are again the nuclei and the cytoskeleton, but additionally the mitochondria, which were also fluorescently labeled. The nuclei seem to be thicker. Therefore, nucleoli are only (faintly) visible in the upper nucleus, see Fig. 7.A. Furthermore, the cytoskeleton is not as dense as in 3T3 cells, so that individual fibers can be detected. This effect is particularly evident in the composite image (Fig. 7.E). The mitochondria are visible as small particles distributed in the cytoskeleton, as shown in the enlarged section 7.B. Identification is enabled by labeling with the mitotracker (Fig. 7.C) and superposition of the SXR and FLM images.
Both of these exemplary investigations of different cell types illustrate the interplay between structural SXR and functional FLM contrast. Especially labeling small organelles such as mitochondria and then being able to study them in the context of the whole cell visualized by SXR microscopy holds great potential. In a similar manner other organelles such as the Golgi apparatus or lysosomes can be stained as well.
### Phototoxicity Measurements
Our setup also enables the investigation of the degradation of the fluorescence signal due to SXR irradiation. To this end, fluorescent particles with a diameter of 200 nm were prepared on a SiN-membrane and a measurement was performed as described in the methods section. In the results presented in Figure 8, the fluorescence signal of 200 nm red fluorescent nanobeads was measured during constant irradiation (on the membrane) and without irradiation (on the chip) over a period of 10 minutes. The fluorescence signal was normalized to the first data value in order to compare its temporal
Figure 7: Results of the correlative measurements of COS-7 cells. **A**) 81 \(\times\) 45 s SXR image. 2 nuclei and the surrounding cytoskeleton and mitochondria can be seen. **B**) Enlargement of the red-framed area of panel A. **C**) Mitochondria detected with the red filter set. **D)** Actin detected with the green filter set. **E)** Composite image of fluorescence images and SXR image. The SXR contrast was inverted for better visibility of the fluorescence image.
evolution. The irradiated part of the sample shows a strong decrease in the detected signal compared to the dark part, where the fluorescence signal stays nearly constant. The behavior of the non-irradiated part can also be used to estimate the accuracy of this measurement and to rule out the possibility that the signal drop is caused by photo-bleaching. The strongest decay is observed during the first minute of irradiation. Presumably, the X-rays ionize the fluorescent molecules, permanently destroying them. This observation is consistent with the results published in [Hagen et al., 2012] and leads to the conclusion that the fluorescence images should be taken before the SXR images. In fact, this is the preferred order anyway, as the entire sample is first scanned with the FLM to quickly identify ROIs. Regardless of this, these results show that the setup enables the characterization of the X-ray resilience of different fluorescent dyes. This is important with respect to the further development of correlative experiments.
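The normalization itself is straightforward; a minimal sketch with placeholder arrays (not the measured data) is:

```python
import numpy as np

# Sketch of the normalization used for Figure 8; both series are divided
# by their first value so only the temporal behavior is compared.
t = np.arange(0, 601, 30)                # seconds, assumed sampling
irradiated = np.exp(-t / 120.0) + 0.25   # assumed: fast initial decay
dark = np.full(t.shape, 1.0)             # assumed: ~constant signal

irr_norm = irradiated / irradiated[0]
dark_norm = dark / dark[0]
print(f"signal after 10 min: irradiated {irr_norm[-1]:.2f}, dark {dark_norm[-1]:.2f}")
```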
## 4 Conclusions
The microscope presented here combines laboratory-based water-window X-ray microscopy and fluorescence microscopy in an integrated setup. The light source for the X-ray microscope is based on a double-stream gas puff target with nitrogen as the target gas. It produces line emission at 431 eV or 2.88 nm. By using an ellipsoidal mirror as a condenser optic and a Fresnel zone plate for image generation, a resolution of 50 nm half-pitch can be achieved, as demonstrated on a Siemens star test target. An algorithm based on cross-correlation has been developed to eliminate the effects of thermal drift on resolution.
The integration of a wide-field epifluorescence microscope into the SXR microscope allows correlation of fluorescence and X-ray images, i.e. correlation of functional and structural information. Direct integration of the FLM into the SXR microscope allows correlated image acquisition without moving the sample. This not only reduces the risk of sample destruction or alteration, but also significantly speeds up the measurement process by allowing relevant sample positions to be quickly identified and targeted. Multiple filter sets can be used for multi-color measurements, further enhancing the capabilities of the integrated setup. The result is a powerful tool for investigating different types of relevant life science samples, which was realized in a compact setup with a footprint of 1.5 m \(\times\) 4 m.
As examples, fluorescent nanoparticles, cyanobacteria, and different types of cells, namely 3T3 and COS-7 cells with multi-color labeling have been presented in this work. All samples are prepared on silicon nitride membranes and, in the case of the cells, are critical-point dried. Furthermore, quantitative measurements of fluorescence behavior under SXR irradiation are possible, which was demonstrated using nanobeads as an example. These studies can be of great benefit to all correlative X-ray microscopes, including those at synchrotrons.
Based on the high resolution and functional contrast demonstrated in this work, new milestones in lab-based correlative microscopy have come into reach. Advanced preparation methods could enable the examination of wet samples and the full exploitation of the possibilities of the water window. Progress in plasma source technology would lead to reduced exposure times and could enable tomography and cryofixation of the sample. Furthermore, future lensless imaging methods in the water window, enabled by coherent lab-based sources, can significantly profit from the achievements shown in this work.
Figure 8: Quantitative measurement of the influence of SXR illumination on 200 nm fluorescent nanobeads. The degradation of the fluorescence signal was normalized to the first data value to characterize the temporal behavior.
## 5 Competing Interests
No competing interest is declared.
## 6 Author Contributions Statement
J.R., T.W. and S.F. conceived the microscope, J.R., S.K., J.A., F.W., Ma.Wu. and J.N. set up the microscope, J.R., S.K., J.A. and F.W. conducted the measurements, S.K., E.S., Ma.We, A.I. and F.H. prepared samples. J.R., S.K., G.G.P., and S.F. wrote the manuscript. All authors gave advice on different aspects of the microscope.
## 7 Acknowledgments
The authors thank Christian Rodel for providing the laser system, Katharina Reglinski and Philipp Kellner for an introduction to nanoparticle preparation, Yashar Rouzbahani for preparing comparison samples, Christian Franke for discussions regarding data analysis, and Slawomir Skruszewicz for support with the gas puff target. We gratefully acknowledge funding by the Deutsche Forschungsgemeinschaft (PA 730/13), by NCN via 2020/39/I/ST7/03194, and by Laserlab Europe via Horizon 2020 (871124).
|
2306.15174 | Examining Lower Latency Routing with Overlay Networks | In today's rapidly expanding digital landscape, where access to timely online
content is paramount to users, the underlying network infrastructure and
latency performance significantly influence the user experience. We present an
empirical study of the current Internet's connectivity and the achievable
latencies to propose better routing paths if available. Understanding the
severity of the non-optimal internet topology with RIPE Atlas stats, we conduct
practical experiments to demonstrate that local traffic from the San Diego area
to the University of California, San Diego reaches up to Los Angeles before
serving responses. We examine the traceroutes and build an experimental overlay
network to constrain the San Diego traffic within the city to get better
round-trip time latencies. | Aakriti Kedia, Akhilan Ganesh, Aman Aggarwal | 2023-06-27T03:17:02Z | http://arxiv.org/abs/2306.15174v1 | # Examining Lower Latency Routing with Overlay Networks
###### Abstract
In today's rapidly expanding digital landscape, where access to timely online content is paramount to users, the underlying network infrastructure and latency performance significantly influence the user experience. We present an empirical study of the current Internet's connectivity and the achievable latencies to propose better routing paths if available. Understanding the severity of the non-optimal internet topology with RIPE Atlas stats, we conduct practical experiments to demonstrate that local traffic from the San Diego area to the University of California, San Diego reaches up to Los Angeles before serving responses. We examine the traceroutes and build an experimental overlay network to constrain the San Diego traffic within the city to get better round-trip time latencies.
## 1 Introduction
The current Internet topology is a web of transit and peering connections between numerous autonomous systems, which facilitate the high interconnectivity of the Internet. However, this interconnectivity has certain limitations. Since each autonomous system links up with the Internet on the basis of minimum cost, many sub-optimal geographical and physical routing paths are prioritized for the routing of general traffic.
The significance of Internet Exchange Points (IXPs) may play a role here. IXPs are physical centers that facilitate peering connectivity between Internet Service Providers (ISPs). Los Angeles (LA) has quite a few IXPs, and these are the ones closest to San Diego [Datacenter Map]. This means that any traffic that must be transferred from one ISP to another in San Diego is likely to be routed all the way through LA and back. The connectivity of the University of California, San Diego (UCSD) network with respect to the San Diego area is a direct consequence of this Internet setup. In reality, even though the UCSD network and off-campus ISPs are geographically proximal, packets traveling between off-campus users and on-campus servers will still be routed through LA, as shown in Figure 1.
This paper aims to (1) quantify the national severity of non-optimal forwarding paths and disconnectivity, to explore potential lower-latency overlays, (2) explore the topology involved when interacting with UCSD resources from off-campus and confirm the presence of LA IXP in the routing path, and (3) experiment with an overlay network to bypass the LA IXP and reduce effective latency of local UCSD traffic.
Figure 1: Network path from outside UCSD within San Diego to UCSD ieng6 through LA IXP
## 2 Background and Related Work
An overlay network is a network built on an already existing underlay network infrastructure. The chain of physical or virtual underlay links between two overlay nodes becomes a virtual overlay link on the overlay network [5]. Our research explores using overlay networks to gain better performance compared to routing in an underlay network, similar in intent to the role of resilient overlay networks which mitigate network faults [1]. However, our research explores potential improvements in latency rather than fault tolerance when using overlay networks to exploit or transform the current topology of the Internet.
For our work, we specifically analyze RIPE Atlas data [RIPE NCC], keeping in mind the findings of Holterbach et al. [2015] and Bajpai et al. [2015]. As suggested in these papers, we remove noise wherever possible and report only insights with at least \(1\%\) improvement. We build on the ideas of the efficiency of geographic routing as stated in Sukhov and Onoprienko [2014], and our results align with the findings of Subramanian et al. [2002], where packets traveling through different ISPs encounter higher latencies. The criteria and metrics used for reporting results follow the ping and traceroute framework explained in Paxson et al. [2019]. We also refer to CAIDA (Center for Applied Internet Data Analysis) datasets about the connectivity and routing of the global Internet.
## 3 Methodology
We quantify the severity of the non-optimal internet topology by rigorously analyzing the publicly available global ping measurement stats provided by RIPE Atlas. We also conduct a UCSD level experiment to examine the layer-3 routing topology and understand the general properties and problems of the underlay network.
### RIPE Atlas
We collect the ping stats from the measurement data provided by RIPE for all measurements started between 1st January, 2023 and 19th May, 2023. Data cleanup is performed by filtering only those measurements which are of _stopped_ status so that ongoing and incomplete data do not bias the analysis.
RIPE has internal probes, which are network devices installed to collect network stats. Our analysis involved 10,000 measurement IDs with 11,844 global probes. We focus primarily on IPv4 ping latencies and keep probes that collect only IPv6 data out of consideration. Under these constraints, we collected 309,959 log lines of data and analyzed them, forming every possible triplet to check whether data could be redirected from a source node to a destination node via a middle node with a lower latency than the actual ping latency between the source and destination obtained from the RIPE measurement data. We identify numerous insights about non-optimal routing and Internet disconnectivity, as detailed in Section 4.1.
To further understand the severity, we drilled down our analysis to the IP level, one level deeper than RIPE probes, as a single RIPE probe collects data for groups of ASes. The filtered measurements include only those with "stopped" status started after 1st March, 2023. Further, only IPv4 addresses specific to the USA are considered, to get a more holistic view of the United States region. This leads to a subset of 10,000 measurement IDs and around 4,212,728 log lines of ping data covering 16,756 unique IPs. The IPinfo online tool [ipinfo.io] was used to map IP addresses to the corresponding city, region, and state for further analysis at these levels.
We account for noise in the data by using the median of the ping latencies available from the measurements. Each measurement sample from RIPE collects ping latency in three test runs; we sort these runs and take their median. Further, data for source-destination pairs collected across different measurements are averaged to minimize the possibility of measurement errors. We found some interesting insights, which are detailed in Section 4.1.
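The following sketch illustrates this pipeline: reducing each measurement's three runs to their median, averaging repeated pairs, and scanning all triplets for one-hop relays with at least 1% improvement. The data layout and function names are our own assumptions, not the actual analysis code.

```python
from collections import defaultdict
from statistics import median

def aggregate(samples):
    """samples: iterable of (src, dst, [rtt1, rtt2, rtt3]) log lines.
    Each measurement's three runs are reduced to their median; medians
    for the same (src, dst) pair are then averaged across measurements."""
    acc = defaultdict(list)
    for src, dst, runs in samples:
        acc[(src, dst)].append(median(sorted(runs)))
    return {pair: sum(v) / len(v) for pair, v in acc.items()}

def relay_improvements(rtt, min_gain_pct=1.0):
    """Scan all (src, mid, dst) triplets and report one-hop relays that
    beat the direct path by at least min_gain_pct percent."""
    nodes = {n for pair in rtt for n in pair}
    hits = []
    for (src, dst), direct in rtt.items():
        if direct <= 0:
            continue
        for mid in nodes - {src, dst}:
            if (src, mid) in rtt and (mid, dst) in rtt:
                via = rtt[(src, mid)] + rtt[(mid, dst)]
                gain = 100.0 * (direct - via) / direct
                if gain >= min_gain_pct:
                    hits.append((src, mid, dst, direct, via, gain))
    return hits
```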
### San Diego-UCSD Internet Analysis
We conduct a preliminary analysis of the Internet topology with respect to the UCSD network. This is done via a hands-on Internet measurement data collection experiment of pings and traceroutes to ucsd.edu and ieng6.ucsd.edu from diverse locations in and around the San Diego area and the UCSD campus (Figure 2). These include the following: UCSD CSE building (wifi), UCSD Geisel library (wifi), San Diego downtown (AT&T wifi, Verizon, and AT&T), La Jolla downtown (Spectrum wifi and AT&T), Miramar (Spectrum wifi and AT&T),
and San Diego airport SAN (AT&T wifi). We exploit both wifi and cellular networks to get more robust results.
We log the ping and traceroute for each pair of source (associated with a specific ISP) and destination. The Time-To-Live field of the ping response indicates the number of hops that a packet takes to arrive at the destination address. We cross-check this value with the number of hops observed from the traceroute analysis. Lastly, we record whether sources had packets routed to LA at some point in the traceroute.
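As a sketch of the TTL-based hop estimate (the default initial TTL values are a common assumption about the probed hosts, not a figure from the paper):

```python
# Hosts commonly initialize TTL to 64, 128, or 255; each router on the
# path decrements it by one, so the remaining TTL reveals the hop count.
DEFAULT_TTLS = (64, 128, 255)

def hops_from_ttl(observed_ttl):
    initial = min(t for t in DEFAULT_TTLS if t >= observed_ttl)
    return initial - observed_ttl

print(hops_from_ttl(52))  # -> 12, e.g. consistent with the 12-hop entries in Table 2
```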
### Designing and Testing a UCSD Bridge
We propose a 3-hop overlay network solution to route network traffic directly to UCSD without being routed to an LA IXP. The overlay network would resemble the design in Figure 3. The network nodes are as follows:
**Node A:** A computer connected to a San Diego ISP. This node represents users who want to retrieve UCSD resources (in this case, the ieng6 server) either via the usual layer-3 routing or via the overlay network.
**Node B:** A forwarding server that is connected on one port to a San Diego ISP and connected on another port to the UCSD network. This node serves as a network bridge that forwards packets through each network, allowing network traffic to potentially bypass the LA IXP.
**Node C:** The ieng6.ucsd.edu server hosted within the UCSD network. In a full implementation of an overlay network, there could be multiple types of node C, each hosting different resources on the UCSD network.
We attempted to set up a network bridge connected to UCSD wifi on one port and to the AT&T cellular network on another, with the cellular link exposed as an Ethernet connection via USB tethering. The plan was to receive traffic on the cellular network port and forward it internally to UCSD wifi. However, on running the experiment, we found that setting up a server reachable through the cellular network requires administrative privileges on the cellular network router: these are needed to override the Network Address Translation (NAT) system in place and forward traffic to the specific port on our experimental server. Lacking these privileges, we leave an actual implementation of the overlay network and a concrete design of node B for future work. Instead, we collect measurements of the RTTs between each pair of nodes to simulate the expected latencies of such a network.
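For illustration, a minimal node-B forwarder could look like the sketch below, had the NAT port-forwarding issue been resolved. The host name, ports, and byte-relay design are our own illustrative assumptions, not a tested implementation.

```python
import socket
import threading

UCSD_TARGET = ("ieng6.ucsd.edu", 22)  # illustrative resource behind the bridge
LISTEN = ("0.0.0.0", 9000)            # cellular-facing port; reaching it would
                                      # require the NAT port forwarding we lacked

def pipe(a, b):
    # copy bytes from socket a to socket b until a closes
    while True:
        data = a.recv(4096)
        if not data:
            break
        b.sendall(data)
    b.close()

def serve():
    srv = socket.create_server(LISTEN)
    while True:
        client, _ = srv.accept()
        upstream = socket.create_connection(UCSD_TARGET)
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

if __name__ == "__main__":
    serve()
```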
For our simulation, we measure ping times between a device in the AT&T cellular network and another device in the same cellular network; we take this as the latency between nodes A and B. We collect another ping measurement between a device connected to UCSD wifi (representing an on-campus resource) and the ieng6 server, another resource within UCSD; this represents the latency between nodes B and C. For our simulation, we assume that the forwarding time between the cellular-network side of node B and its UCSD-network side is negligible, and we do not consider it when reporting
Figure 3: Proposed 3-node overlay network bypassing the LA IXP
Figure 2: Ping and Traceroute sources to UCSD.
results. So in our simulation, A-to-B resides on the AT&T cellular network, while B-to-C resides on the UCSD network. For each ping measurement, 1000 packets were collected and the round trip times were compared for the analysis.
Additionally, we analyze the traceroutes of each of these connections to gain a better understanding of the routes that the proposed overlay network takes, and whether it bypasses the LA IXP.
## 4 Results
The following results globally quantify the problem of non-optimality in Internet topology and suggest alternate overlays for better connectivities and latencies.
### RIPE Atlas data analysis
From the experiments conducted on RIPE probes, we identified that Comcast Business Gateway Router Type in the region of Illinois (Probe ID \(10194\)) has no connectivity to New South Wales region of Australia (Probe ID \(6636\)). We propose an overlay path via France using Bouygues Telecom SA ISP (Probe ID \(1003746\)). The overlay latency is expected to be less than 5 ms as illustrated in Figure 4. Illinois to France (city Paris) takes 0.3 ms and France to Australia 4.6 ms.
Similarly, the Washington region of the United States, via the EMERALD-ONION ISP (Probe ID \(6934\)), is connected to Australia (Probe ID \(6636\)) but lacks connectivity to France (Probe ID \(1003746\)). An overlay path to France via Australia would take 4.4 ms, as shown in Figure 5.
Another interesting insight is the ping latency from the SPACEX-STARLINK ISP in the Washington region (Probe ID \(62498\)) to Australia (Probe ID \(6636\)). The current round-trip time is 6.57 ms, but if forwarded via France using the Bouygues Telecom SA ISP (Probe ID \(1003746\)), an improvement of 0.81 ms can be achieved.
We further drill down to the IP level to discover that out of 108,356 source-destination IP combinations, 105,479 could show a ping latency improvement of at least \(1\%\). Table 1 shows comprehensive quantitative results. Although a significant number of combination pairs allow for \(<10\%\) ping latency improvement, there are also combinations that allow up to \(100\%\) improvement, as depicted in Figure 6.
One of our insights from the IP data is that IPs in Milpitas do not follow an optimal path when directing traffic to IPs in the eastern US. The current ping time between these IPs goes up to 265 ms. Cities like Phoenix, Hilliard, Ashburn, and San Francisco can form a lower-latency overlay network connecting Milpitas to the east. Figure 7 shows the current ping latency as a dashed line (265.49 ms) and an optimal path in green (65 ms), an improvement of about 200 ms. As the data is the median of three ping runs and is averaged over all
Figure 4: Illinois and Australia via France
Figure 5: Seattle and France via Australia
Figure 6: Ping percent improvements vs Source-destination pair counts
measurement samples, it is reflective of the topology between Milpitas and Morrisdale following a high-latency path.
Similarly, Figure 8 shows another example. Traffic from Newark to Las Cruces takes 53.5 ms, but when directed through Kenett Square it takes 6.25 ms from Newark to Kenett Square and 37.6 ms from Kenett Square to Las Cruces.
However, the Internet topology might not necessarily follow shortest geographic paths. Sometimes, distant servers might provide better latencies than those within the same region. For example, Figure 9 shows that an optimal path from Arroy Grande to LA reaches the east of the US, all the way to Appleton. This indicates that some ISPs might have optimal-latency servers deployed across geographies, or the path latencies might depend on network congestion.
### UCSD Network Analysis
During our traceroute analysis, we found that www.ucsd.edu is externally hosted by Amazon AWS Global Accelerator servers located in Washington, U.S. Thus, we consider only ieng6.ucsd.edu as the on-campus resource henceforth. Table 2 shows our experimental locations, the number of hops, and whether the packets are routed through the LA IXP.
Results confirm that almost all external traffic to the ieng6 resource is routed through LA, which the UCSD experimental network bridge aims to address. It further shows that sources within the UCSD network only travel 5 hops to reach the ieng6 server, whereas the outside sources require at least 10 hops.
Experimental results from the simulated overlay network are demonstrated in Figure 11 (latency between node A and B), Figure 12 (latency between node B and C), and Figure 10 (latency between node C and A). We consider ping latencies
\begin{table}
\begin{tabular}{|c|c|} \hline
**Improvement (\%)** & **Source-destination pair count** \\ \hline
1.0 & 5351 \\ \hline
2.0 & 4939 \\ \hline
3.0 & 4560 \\ \hline
4.0 & 4121 \\ \hline
5.0 & 3916 \\ \hline
6.0 & 3533 \\ \hline
7.0 & 3256 \\ \hline
8.0 & 3061 \\ \hline
9.0 & 2792 \\ \hline
10.0 & 2450 \\ \hline \end{tabular}
\end{table}
Table 1: Ping percent improvements
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Experimental source** & **\# Hops** & **LA IXP?** \\ \hline UCSD CSE wifi & 5 & No \\ \hline UCSD Geisel wifi & 5 & No \\ \hline SD Downtown wifi & 12 & Yes \\ \hline SD Downtown Verizon & 18 & Yes \\ \hline SD Downtown AT\&T & 12 & Yes \\ \hline LJ Downtown wifi & 14 & Yes \\ \hline LJ Downtown AT\&T & 16 & Yes \\ \hline Miramar wifi & 14 & Yes \\ \hline Miramar AT\&T & 16 & Yes \\ \hline SAN wifi & 15 & Unknown \\ \hline \end{tabular}
\end{table}
Table 2: Number of hops and routing IXP for San Diego locations
Figure 8: Newark and Las Cruces via Kenett Square
Figure 7: Milpitas and Morrisdale via Las Cruces
Figure 9: Arroy Grande to LA via Appleton
as continuous values and plot the distributions for the measurements between all pairs of nodes.
As illustrated in Figure 10, the ping latencies between A and C follow a unimodal distribution with an average ping latency of \(61.72\) ms and a median of \(62.00\) ms. These high values are consistent with the traceroute results, with traffic being routed from A to the LA IXP and then down to C as depicted in Figure 1. It is interesting to note that the standard deviation in this analysis is moderate (\(16\%\)), which can indicate some congestion in the network.
Figure 11 shows that the distribution of ping times from A to B is also unimodal, but with a lower mean (\(57.45\) ms) and median (\(58.00\) ms) compared to the route from A to C. Traceroute analysis from A to B shows that the packets are served within the San Diego network and hence have a round-trip-time latency about \(4\) ms lower than that from A to C. This \(4\) ms difference is still underwhelming, as the route covers a much smaller geographical distance (within San Diego versus through LA). The standard deviation for this sample is \(12.61\) ms, which indicates an instability in the network that can be caused by routing delays such as queuing delay, propagation delay, etc.
As Figure 12 shows, the distribution of ping times between B and C is bimodal, with the first mode at \(4\) ms and the second at \(12.52\) ms, the second mode being significantly more frequent than the first. As the distribution is bimodal, the mean (\(10.47\) ms) is not a good representation of the central location of the data, as it can be affected by the two modes. Therefore, we rely on the median (\(12.22\) ms), a better estimate of the central tendency of the data. The standard deviation for this distribution is \(3.39\) ms, which is lower than for the other two routes. This result aligns with our expectation that the route from B to C should show better latencies, as it lies within the internal UCSD network and can be traversed with a smaller number of hops. A standard deviation of \(3.39\) ms relative to a median of \(12.22\) ms still conveys network instability.
We combine the statistics of the ping latencies from A to B with those from B to C to estimate the latency from A to C via an overlay through B. For the standard deviation, we add the two variances and take the square root of the sum to obtain the combined deviation. For all other metrics, we add the results from A to B and from B to C.
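A small sketch of this combination step, mirroring the procedure described above (forwarding time at node B assumed negligible):

```python
import numpy as np

def overlay_estimate(rtt_ab, rtt_bc):
    """Combine per-leg RTT samples into an estimate for A->B->C:
    medians/means of the legs are added, and the combined standard
    deviation is the square root of the summed variances."""
    est_median = np.median(rtt_ab) + np.median(rtt_bc)
    est_mean = np.mean(rtt_ab) + np.mean(rtt_bc)
    est_std = np.sqrt(np.var(rtt_ab) + np.var(rtt_bc))
    return est_median, est_mean, est_std

# With the reported medians, 58.00 ms (A-B) + 12.22 ms (B-C) ~ 70.22 ms via B,
# versus 62.00 ms direct, i.e. the simulated overlay is ~8.22 ms slower.
```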
Due to the bimodal distribution of the ping times from B to C, the mean is not a good statistic for comparison (Section 4.2). Therefore, we consider the mode or median to compare the ping times from A to C directly versus through node B. Comparing the medians, we observe that the ping latency from A to C directly would be \(8.22\) ms faster compared to
Figure 11: Unimodal RTT frequency distribution from A to B
Figure 12: Bimodal RTT frequency distribution from B to C
Figure 10: Unimodal RTT frequency distribution from A to C
the latency through B. Similarly, comparing the modes reveals that the latency from A to C directly is \(10.52\) ms faster than through B.
Though the results did not align with the RIPE Atlas findings, where strategically designed overlay networks yield better latencies, future work in this direction can still yield promising results. As our sample data was restricted to only the AT&T and UCSD ASes, the benefits of the overlay network were not prominent. We further discovered that an AT&T router on the path from A to B was located in LA, and therefore traffic was unavoidably being routed through LA, which invalidates our assumption that the traffic between A and B is constrained within San Diego. Additionally, as ASes are financially incentivized to optimize for shortest-path routing, the simulated setup might be able to propose improvements only when multiple ASes are involved and transit relationships influence path forwarding.
## 5 Conclusion
Having rigorously analyzed the network topology using RIPE Atlas measurements, we gained valuable insights into the connectivity and the non-optimal latencies between geographies. We further conducted a hands-on experiment to demonstrate that local San Diego traffic is being routed to LA. Despite simulating the overlay, we could not achieve an improvement in routing the local traffic within San Diego, due to the lack of network administrative privileges and the mobile ISP's routing through LA. The RIPE Atlas results are promising, and we still believe that constraining traffic within San Diego would improve latencies, because UCSD on-campus pings to ieng6 yield much lower latencies than off-campus pings. As future work, we aim to conduct the same experiment on a larger scale, with at least a million data points and a server that is directly reachable within San Diego ISPs and that can also access the UCSD network directly. We further consider wireless networks a possible source of network instability; hence, conducting the experiment on wired systems could also prove fruitful.
|
2302.07087 | Antithesis of Object Orientation: Occurrence-Only Modeling Applied in
Engineering and Medicine | This paper has a dual character, combining a philosophical ontological
exploration with a conceptual modeling approach in systems and software
engineering. Such duality is already practiced in software engineering, in
which the current dominant modeling thesis is object orientation. This work
embraces an anti-thesis that centers solely on the process rather than
emphasizing the object. The approach is called occurrence-only modeling, in
which an occurrence means an event or process where a process is defined as an
orchestrated net of events that form a semantical whole. In contrast to object
orientation, in this occurrence-only modeling objects are nothing more than
long events. We apply this paradigm to (1) a UML/BPMN inventory system in
simulation engineering and (2) an event-based system that represents medical
occurrences that occur on a timeline. The aim of such a venture is to enhance
the field of conceptual modeling by adding yet a new alternative methodology
and clarifying differences among approaches. Conceptual modeling s importance
has been recognized in many research areas. An active research community in
simulation engineering demonstrates the growing interest in conceptual
modeling. In the clinical domains, temporal information elucidates the
occurrence of medical events (e.g., visits, laboratory tests). These
applications give an opportunity to propose a new approach that includes (a) a
Stoic ontology that has two types of being, existence and subsistence; (b)
Thinging machines that limit activities to five generic actions; and (c)
Lupascian logic, which handles negative events. With such a study, we aim to
substantiate the assertion that the occurrence only approach is a genuine
philosophical base for conceptual modeling. The results in this paper seem to
support such a claim. | Sabah Al-Fedaghi | 2023-02-14T14:42:08Z | http://arxiv.org/abs/2302.07087v1 | # Antithesis of Object Orientation: Occurrence-Only Modeling Applied in Engineering and Medicine
###### Abstract
This paper has a dual character, combining a philosophical ontological exploration with a conceptual modeling approach in systems and software engineering. Such duality is already practiced in software engineering, in which the current dominant modeling thesis is object orientation. This work embraces an anti-thesis that centers solely on _the process_ rather than emphasizing _the object_. The approach is called _occurrence-only_ modeling, in which an _occurrence_ means an _event_ or _process_, where a process is defined as an orchestrated net of events that form a semantical whole. In contrast to object orientation, in this occurrence-only modeling objects are nothing more than long events. We apply this paradigm to (1) a UML/BPMN inventory system in simulation engineering and (2) an event-based system that represents medical occurrences that occur on a timeline. The aim of such a venture is to enhance the field of conceptual modeling by adding yet another alternative methodology and clarifying differences among approaches. Conceptual modeling's importance has been recognized in many research areas. An active research community in simulation engineering demonstrates the growing interest in conceptual modeling. In the clinical domains, temporal information elucidates the occurrence of medical events (e.g., visits, laboratory tests). These applications give an opportunity to propose a new approach that includes (a) a Stoic ontology that has two types of being, _existence_ and _subsistence_; (b) Thinging machines that limit activities to five generic _actions;_ and (c) Lupascian logic, which handles _negative events_. With such a study, we aim to substantiate the assertion that the "occurrence only" approach is a genuine philosophical base for conceptual modeling. The results in this paper seem to support such a claim.
_Index Terms - conceptual modeling, Stoic ontology, process philosophy, simulation engineering, medical events_
## I Introduction
The world is a network of events. Of happenings. Of processes. Of something that occurs. The things that are most "thinglike" are nothing more than long events. The hardest stone is in reality a complex vibration of quantum fields, a momentary interaction of forces, a process that for a brief moment manages to keep its shape, to hold itself in equilibrium before disintegrating again into dust. The world is not so much made of stones as of fleeting sounds, or of waves moving through the sea. [1]
A war is not a thing, it's a sequence of events. A storm is not a thing, it's a collection of occurrences. A cloud above a mountain is not a thing, it is the condensation of humidity in the air that the wind blows over the mountain. A wave is not a thing, it is a movement of water, and the water that forms it is always different. [1]
This paper has a dual character, combining a philosophical ontological exploration with a conceptual modeling approach. On the one hand, the philosophical undertaking is an attempt to provide a representation of reality by modeling features of the world in a specific domain. On the other hand, to understand real "systems," i.e., case studies in sections 3 and 4, the developed representation has to be demonstrated in practical, reasonable size fields.
According to Shults [2], computer scientists have typically had little interest in philosophers' arguments about the nature of being(s) and non-being. In recent years, a growing number of scientists in the modeling community have explored various advances in their field that bear on philosophical issues related to ontology. Shults [2] claims that developments in computer modeling have the potential to contribute to what may be "the most significant change in western philosophy since the foundational work of Aristotle's teacher Plato in the 4th century BC."
In this context, we view a conceptual model as a depiction of reality built using diagrammatic constructs oriented toward human communication. This diagrammatic orientation started with earlier examples, which include _states_ in finite-state machines and _activities_ in flowcharts, and led to modeling languages such as SysML, Object Process Methodology, UML, and BPMN [3]. In most such modeling languages, it is claimed that reality conceptualization requires _objects_ as a basic construct to express the system's structure and _processes_ to give the model an understanding of the system's dynamic behavior [4][5]. This requires adopting such notions as classes and associations with attributes and operations, aggregation and generalization, and predefined relationships, claiming applicability to many real-world problems with ease of use.
**This paper contests this approach, which is based wholly or partially on substance ("being" or "a basic entity"), and challenges it as a fundamental paradigm.** Although such a dispute is not new (e.g., Whitehead's process philosophy), this paper provides a more complete framework, called _occurrence-only modeling_, with an ontology and a modeling language, as an antithesis to object-oriented conceptual modeling, in which the thesis is a substance-based ontology (e.g., Mario Bunge's ontology) and a language such as UML is used to model real-world semantics.
The study presents a conceptualization based solely on _occurrences_ (see Fig. 1). An occurrence is an _event_ or _process_, and a process is defined as an orchestrated net of events that form a whole and emerge from these events. An event is a _subsisting region of potentiality_ "activated" by _time_, as we will show in detail later in this paper. The approach name is qualified with "only" instead of "oriented" to highlight that it is not an _alignment toward_; rather, it is a total commitment to a technique that is based solely on occurrences.
The proposed occurrence-only modeling is specified as a high-level diagrammatic language using Stoic ontology [6], thinging machines (TM) [7], and Lupascian logic [8]. See Fig. 2 for important notions that we will discuss in this paper.
### _Motivations_
To demonstrate this occurrence-only conceptual modeling, we apply it to (1) a UML/BPMN inventory system in simulation engineering and (2) an event-based system that represents medical occurrences that arise on a timeline.
An active research community in simulation engineering demonstrates the growing interest in conceptual modeling for simulation [9]. According to Wagner [10], "since a running computer simulation is a particular kind of software system, we may consider simulation engineering as a special case of _software engineering_." Modeling is an important first step in a simulation project; it is also thought to be the least understood part of simulation engineering [11]. There is a lack of standards for procedures, notation, and model qualities, and "often no information or process models are produced, but rather the modeler jumps from her mental model to its implementation in some target technology platform" [10].
In the second case study, according to Li et al. [12], in the clinical domains, temporal information elucidates the occurrence or changing status of medical events (e.g., visits, laboratory tests, procedures). Accurate profiling of clinical timelines could benefit condition trajectory tracking, adverse reaction detection, disease risk prediction, etc. The widespread adoption of electronic health records provides great opportunities for accessing large amounts of clinical data. Due to the implicit nature of temporal expressions, often characterized by a considerable degree of under-specification, automatically constructing a timeline of clinical events is quite challenging.
Modeling of temporal concepts and relationships that could support subsequent temporal reasoning is a crucial prerequisite to overcoming this hurdle [12].
Such problems in the fields of simulation and medical systems motivate a different approach with two aims: proposing a possible solution for workers in both fields based on a new, more "stakeholder-friendly" conceptual modeling language, and simultaneously providing an opportunity to experiment with the features of such a language in a new field of application.
### _Main Thesis_
The adopted general philosophy in the occurrence-only approach is that all things are events [13]. For example, the life of such an "object" as man is "a historic route of events as the same enduring person from birth to death" [13]. Objects and events are things of the same kind [14][13]. Anything that "exhibit[s] permanence and an abiding structure in nature must be explained in terms of events" [13]. According to McHenry [13],
The expansion of the universe is an event, but so is the hurricane off the coast of California, the traffic accident outside my window, and the dance of subatomic particles in my cup of tea. So in addition to galaxies, bodies of land and sea, automobiles and cups of tea, there appear to be activities, happenings or episodes.
Fig. 1: Fundamental ontology in this paper.
Fig. 2: A general framework of occurrence-only modeling.
### _Example_
Entity-like events and process-like events (what Whitehead termed "actual entities" and "actual occasions," respectively) are the existing things of which the world is made up. Consider _Socrates is walking_ [now], which involves the entity-like _Socrates_ and the process-like _walking_. Contrary to the classical Aristotelian interpretation, walking is not "in" Socrates; rather, it is a persistent event. The event _Socrates_ triggers the creation and processing of the walking of the body Socrates. The assumption here is that Socrates is not just a body. For example, Socrates is discerning, caring, regretting, feeling, warming, etc.; these are not "in" his body, but each of them is some type of process in Socrates and is a region of potentiality.
Fig. 3 models _Socrates is walking_. We use the region (see Fig. 3) to represent _where the event occurs_. The actions _create_ and _process_ are two of the _five generic_ actions, as we will discuss in section 2. The upper diagram (dynamic level) is the _Process_ that includes the events of Socrates _existing_ (create) and walking (create walking and process it). The time is assumed to be now.
The lower part of the figure (static level) provides the base for the realization of the events. The potentiality of Socrates _subsisting_ refers to the potential capability of creating and processing walking. _Subsisting_ and _existing_ are Stoic terms that describe a view of dual _being_, as we will discuss in section 2. To simplify the event diagrams, we may replace each event with its region. Fig. 4 (top) shows three generic events.
E\({}_{1}\): There exists Socrates.
E\({}_{2}\): Walk is generated by Socrates.
E\({}_{3}\): Walking is processed (continued).
Fig. 4 shows the behavioral model of _Socrates is walking_.
Events combine with each other to form a unity for a complex of events called _Process_. We will use the capital first letter to distinguish this _Process_ from _process_, which is one of the five TM actions illustrated in this example. Romero [15] called such processes "bundles of events": "The thing 'Socrates', for instance, is a cluster of events sharing their occurrence in Greece, previous to such and such other events, including processes like 'talking with Plato', and so on" [15].
### _Paper Structure_
The next section provides a review and some new details of the proposed occurrence-only modeling. Section 3 presents the first case study that involves modeling an inventory system in simulation engineering. Section 4 concerns the case study modeling of clinical events in a medical information system.
## II Occurrence-only Modeling
Occurrence-only conceptual modeling is founded on three grounds: Stoic ontology, thinging machines (TMs), and Lupascian logic. In the following, we present further details of these foundations.
### _Stoic Ontology_
According to Verdonck et al. [16], conceptual models lacked an adequate specification of the semantics of the terminology of the underlying models, leading to inconsistent interpretations and uses of knowledge. To provide a foundation for modeling, ontologies were introduced. An ontology would express a domain's fundamental elements and would therefore become the theoretical basis of conceptual modeling. For instance, ontological theories, such as Bunge's ontology, have been used to supplement conceptual modeling languages (e.g., UML) [16]. Occurrence-only modeling is based on Stoic ontology, which provides the two levels of _being_ necessary to represent reality: subsistence and existence. Stoic ontology is a materialist or, more precisely, corporealist ontology. According to such an ontology, only _bodies_ exist because only bodies have the capacity to act or be acted on [17]. Stoic ontology includes _bodies_ that _exist_ as well as entities categorized as _incorporeal_ that are said to _subsist_ but not to exist. These entities are nonexistent in that they are not themselves solid bodies, but they have a derivative mode of reality.
Fig. 4: Generic events and behavior model of _Socrates_ is _walking_.
Fig. 3: Subsisting and existing Socrates walking.
In our embracing of this ontology, existence (_what is occurring or the actual reality of being_) includes two kinds of dynamic entities: (a) enduring (extended in time) entity-like existence (e.g., electrons and subatomic particles) and (b) Process-like existence (e.g., hunting and traffic jams).
**Example:** Consider the nature of software as illustrated in Fig. 5. The software is in _subsistence_ while it is stored as a list of instructions. It _exists_ when it is executed. In both cases, it is a thing in reality.
### _Thinging Machines (TM)_
In TM modeling, a _thing_ is a Heideggerian notion [18] that indicates _something_. According to the Stoic doctrine, a something has a greater extension than _being_, which includes within itself the bodies and the incorporeals "entering" into the world [19]. This "entering" into the world marks "the situated-ness of the thing among other things in the world" [20].
The TM thing with this Heideggerian and Stoic underlining is called a thimac (_thing_/_machine_) because it is also conceptualized with the dual nature of a thing and machine. Such a characterization parallels the Stoic notion of a thing's capacity to act or be acted on. However, TM comprises five actions: _create_, _process_, _release_, _transfer_ and _receive_ (see Fig. 6). A thimac as a thing is created, processed, released, transferred, and received. A thimac as a machine creates, processes, releases, transfers, and receives other things.
A thimac's structure is a net of nodes. Each node has the dual structure of things and machines; therefore, these nodes are subthimacs. The thimac and its subthimacs may be connected internally and externally (outside the containing thimac) by links of flow of things. A thimac can accommodate existent things, subsistent things, and other types of things that neither subsist nor exist. A subsistent thing lacks a time subthimac.
The TM machine, at the static level, has the five _potential_ actions: create, process, release, transfer, and receive, described as follows.
#### 1) Receive
A thing enters the machine. For simplification, we assume that arriving things are _accepted_ (see Fig. 6); therefore, we can combine the _arrive_ and _accept_ stages into the _receive_ stage.
#### 2) Release
A thing is ready for transfer outside the machine.
#### 3) Process
A thing is changed, handled, and examined, but no new thing results.
#### 4) Transfer
A thing is input into or output from a machine. The dynamic (not necessarily physical) "movement" (event) is from a previous region to a different region through a third region.
#### 5) Create
A new thing (found/manifested) is realized at the dynamic level. Simultaneously, it also refers to the "existence" (at the dynamic level) of a potential thing (at the static level).
Additionally, the TM model includes a _triggering_ mechanism (denoted by a dashed arrow in this article's figures), which initiates a (non-sequential) flow from one machine to another. Moreover, each action stage may have its own memory storage (denoted by cylinder in the TM diagram) of things. A memory has its own five actions forming a _memory thimac_.
Note that for simplicity, we may omit _create_ in some diagrams because the box representing the thimac implies its "beingness" (in the model). Additionally, note that the five generic actions become generic events at the dynamic level. Therefore, what we call _Process_ emerges as an aggregate comprising lower-level events. The resulting Process is different from the lower-level events that form it (e.g., as in chemical reactions). Structurally, as a thimac, this emergent Process has its own machine and therefore its own behavior (e.g., its weight as a (sub)thimac is the sum of its subthimacs' weights), and it can be created, processed, etc.
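To make the dual thing/machine reading concrete, the following minimal Python sketch renders a thimac with its five generic actions and dashed-arrow triggering. It is our own illustration, not part of the TM notation; every class, method, and variable name is invented for this example.

```python
# A minimal sketch of a thimac (thing/machine). Assumes only what the text
# states: five generic actions and triggering between machines.

class Thimac:
    def __init__(self, name):
        self.name = name
        self.store = []        # per-stage memory (the cylinder in TM diagrams)
        self.triggers = []     # dashed arrows: links to other machines

    def create(self, thing):   # a new thing is realized in this machine
        return thing

    def process(self, thing):  # the thing is changed/examined; nothing new results
        return thing

    def release(self, thing):  # the thing is marked ready to leave the machine
        return thing

    def transfer(self, thing):  # the thing crosses the machine boundary
        for machine in self.triggers:
            machine.receive(thing)

    def receive(self, thing):  # arrive + accept collapsed into one stage
        self.store.append(thing)

# A thing flowing from one machine to another:
customer, shop = Thimac("customer"), Thimac("shop")
customer.triggers.append(shop)
order = customer.create({"quantity": 3})
customer.transfer(customer.release(customer.process(order)))
print(shop.store)   # [{'quantity': 3}]
```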
### _Two Thinging Machine Levels of Specification_
#### 1) Static (Subsistence) Model
This model represents static things and static (potential) actions. A thing's "being" at this level is a certain state of being, _subsistence_, or a potential for "becoming," i.e., "it is there," inert, passive, waiting to _exist_ when it couples with time. _Becoming_ refers to transferring to the dynamic level to trigger the creation of an _event_. The static model is also the "inactive" state (e.g., a dormant volcano). The static level is the _retreating_ "world" of events, e.g., _doing something_ becomes a negative event of _not doing_ (a Lupascian logic term). A static thing could become an actual thing (event); however, _some_ static (non-subsisting) things (e.g., a square circle) could never become actual things. Accordingly, there are things that neither exist nor subsist. Additionally, the static level includes all possibilities, just as a chess board exhibits all possible moves, including contradictory ones.
Fig. 5: Software is in subsistence while it is not executed.
Fig. 6: Thinging machine
#### 2) Dynamic (Existence: Occurrence-Only) Model
Each event or process consists of a static subdiagram (region) that unfolds with _time_, leading to events, i.e., the realization of static things and actions. Therefore, the event is the existing being that was previously a subsisting being as a region at the static level. The Lupascian notion of a negative event refers to reverting to the static level from the dynamic level.
Stoic ontology serves to define the _being_ (subsistence or existence) of things and actions in reality. The Stoics concocted the idea of a broader category of being: reality is made of things that _exist_ and things that _subsist_. This idea retains the commonsensical notion that static and dynamic things are in some sense real. The notion of "modes of being" appears in various forms in classical logic, in which the notions of existence and subsistence appear [21]. Meinong [22] introduced Meinongian metaphysics and distinguished between being and existence. Using Stoic ontology, we view the dynamic model description as an _occurrence-only model of existence_. Therefore, reality includes occurrence-only things.
The static model represents the world of potentialities with atemporal subsistence. It is self-contained and in a state in which time and its related notions lose meaning. This _static universe_ "contains everything there is or ever was or will be" (from [23], ignoring Post's metaphysical implications). Only a portion of this "everything" can become _occurrences_. Therefore, if we consider that the chess board includes all potential and non-potential plays, the subsisting plays are the legal plays and the existing plays are plays of the actual game. The castle that moves _nine_ places (i.e., goes outside the board) is a non-subsisting play and therefore cannot occur.
At the dynamic TM level, events form among themselves an interacting nexus of occurrences that define, inform, and constitute all "actual" thimac beings. Things at the dynamic level may present object-like and Process-like occurrences. Process is another term for events and, more specifically, a net of events that forms a whole notion. For example, _release-transfer_ may be considered the Process of _output_, and _transfer-receive_ the Process of _input_; however, _release-transfer-transfer_ does not seem to be identified with a standalone _notion_.
The _event_, as a generic event or Process, can be provisionally defined as a fundamental happening that forms the basic building blocks of the existing world. Everything in the world, including people and things, can be constructed from events that form the essential and sole ontological elements of existence.
### _The Thing Side of the Thimac_
The thimac is a whole that is more than the sum of its parts (i.e., it has its own machine). Even if its interiority has no subthimacs (e.g., an empty safe), the thimac has some of its actions. A thing's _subsistence_ means that, along with its related actions, it is a potential event. An example of this subsistence is a city on a map. The city on the map can be described in terms of streets, population, connections with other cities, interaction with the environment, windiness, water resources, etc., but it is just a map with no activities. Even though it is connected with other cities, there are no moving cars on the highways and no playing children in the streets. "Relations" between subsisting things are like dry river beds. Even though a dry river (e.g., release, transfer, transfer, receive) looks "permanent" in the static model, it becomes a flash _event_ that may perish at any time, i.e., alternate between static and dynamic levels.
Only thimacs that embed time are realizable (exist) at the dynamic level. Therefore, for example, a "square circle" is a static thimac that cannot be injected with time to exist in the dynamic model; neither does it subsist because it is not mappable to the dynamic level. The universe of such a world is populated by things that may alternate between two levels of being: static and dynamic. This total universe is a Process (an orchestrated net of events) in which events occur and then perish or cease to be.
### _Lupascian Logic_
The event is different from similarly named notions currently used in the literature. Note that this approach takes the side of philosophers who conceive of physical things as extended across time (e.g., Whitehead). Objects and events are things of the same kind [24].
Therefore, instead of _doing_ vs. _stop doing_ (action vs negative action), we have an event, doing, that includes its region in the dynamic level vs. _stop doing_: reverting (the event's region) to static level. This method of eliminating negativity stems from philosopher Stephane Lupasco. According to Brenner [25], every element \(e\) (an event, i.e., a thimac that contains a region plus time) always associates with a _non-e_ (static thimac), such that the actualization of one entails the potentialization of the other and vice versa, alternatively, "without either ever disappearing completely."
With this ontological foundation of the occurrence-only modeling, the next two sections demonstrate that such a modeling approach has the expressive power to represent reasonably sized systems.
## III Modeling an inventory system
Wagner [10] considered a simple case of inventory management: a shop selling one product type. The customers come to the shop and place their orders. If the ordered product quantity is in stock, customers pay their order, and the ordered products are handed out to them. Otherwise, the order may still be partially fulfilled if there are still some items in stock. When the stock quantity falls below the reorder point, a replenishment order is sent to the vendor for restocking the inventory, and the ordered quantity is delivered.
Wagner [10] used a BPMN-based process design modeling approach with UML class diagrams (see Fig. 7) to develop discrete event simulations. Wagner [10] justified the use of BPMN as follows:
Using BPMN as a basis for developing a _process design modeling_ approach is the best choice of a modeling language we can make, considering the alternatives, which are either not well defined or not sufficiently expressive (Italic added).
Although such an object-oriented approach is a valuable effort in applying modeling in simulation, the resultant mixed (dynamic vs. static) representation and ontological ambiguity (event vs. object) seem to produce a heterogeneous notation that distorts the purpose of the conceptual model as "a bridge between the developer and the user" [9] and "the agreement between the simulation developer and the user about what the simulation will do" [26].
Wagner [10] is mainly concerned with _discrete event simulation_, _event process modeling_ notation, and _object event_ graphs. Such an _event-intensive_ approach involves objects and a discrete flow of _events_ that allegedly _change_ the _state_ of affected objects and cause follow-up _events_ and a _state transition system_ where _events_ are transitions and the system state consists of object states and future _events_. Ontologically, this understanding of events is based on Casati and Varzi's [27] description that Wagner [28] described as such: "The world consists of objects and events. Smiles, walks, dances, weddings, explosions, hiccups, hand-waves, arrivals and departures, births and deaths, thunder and lightning: the variety of the world seems to lie not only in the assortment of its ordinary citizens--animals and physical objects, and perhaps minds, sets, abstract particulars--but also in the sort of things that happen to or are performed by them."
Nevertheless, Casati and Varzi [27] stated that "there is significant disagreement concerning the precise nature of such entities. (Their broad characterization as 'things that happen', though commonly found in dictionaries, merely shifts the burden to the task of clarifying the meaning of 'happen'.)" Additionally, such a process-infected approach to modeling does not present or derive a clear definition of the notion of process.
The basic assertion in this paper is that using the so-called _process design_ is better represented with the occurrence-only modeling. Accordingly, the resultant conceptual models settle this issue when put side by side.
### Static Model
Fig. 8 shows the basic static model of the inventory system. Basic, here, means that it is possible to enhance such a model with other details, such as constraints and rules, because the involved modeling language is rich in expressibility. The main stream of actions in Fig. 8 is where the customer (circle 1) creates (2) an order that flows to the shop (3) to be processed (4). Note that the order may include many data items; thus, it is initially processed (the pink process box) to trigger extraction of the order _quantity_.
The darkened boxes in the figure indicate modules in the system. The pink-shaded box in the middle of the figure is the module where the main procedure is performed.
The Process (4) in the pink rectangle involves comparing the current value of the number of items in the inventory (5) with the ordered quantity. This current inventory value flows (6) to be processed (4). The Process (4) involves deciding among the three following cases: Inventory \(=0\) (7), (customer) Quantity \(\leq\) Inventory (8), and (customer) Quantity \(>\) Inventory \(>0\) (9).
1. **Inventory=0 (7)**: A decline notification is created (10) and communicated to the customer (11).
2. **(Customer) Quantity \(\leq\) Inventory (8)**: This result involves two series of actions.
* An invoice is created (12) and sent to the customer (13). The customer processes it (14) to create payment (15) that is sent to the shop (16 and 17).
* The shop triggers (18) the inventory to deliver the product to the customer. Assuming that the above two series of actions are accomplished (19), the inventory sends the ordered product to the customer (20, 21, and 22). Additionally, the inventory is updated as follows.
* The ordered quantity to be delivered (the pink box) is extracted (19) and sent to the inventory (20) to be processed (21) along with the current value of the inventory to update the value (22).
* Also, the new value is processed (23) to determine whether it has reached the reordering level (24), and if it has, a reordering is created and sent to the supplier (25).
* In case a shipment comes from the supplier (26), the current inventory value is retrieved (27) and updated (28).
Fig. 7: UML and BPMN diagrams used to model the inventory system (From [10]).
3. **(Customer) Quantity \(>\) Inventory \(>0\) (9)**: A notification is created (29) and sent to the customer (30). The customer processes (31) the notification and creates a response (32) that flows to the shop (33). Assuming that the partial fulfillment is okay (34), the current value of the inventory is retrieved (35), processed (36), and inserted as a new ordered quantity (37). Hence, the customer order is processed (with its new value) again (38), where the ordered quantity now equals the inventory value. This three-way decision logic is rendered as plain control flow in the sketch below.
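The sketch is a rough rendering under stated assumptions (the customer's partial-fulfillment response and the supplier reorder are stubbed out, and the reorder level is an invented constant); the numbers in comments refer to the circled steps of Fig. 8, and all identifiers are hypothetical.

```python
REORDER_LEVEL = 10   # invented threshold standing in for the reordering level (24)

def send_reorder_to_supplier():          # stub for steps (24, 25)
    print("reorder sent to supplier")

def customer_accepts_partial():          # stub for the customer's response (32, 33)
    return True

def process_order(quantity, inventory):
    """Decide among the three cases of the pink Process box (4)."""
    if inventory == 0:                   # case 1: decline (7, 10, 11)
        return "declined", inventory
    if quantity <= inventory:            # case 2: deliver and update (8, 12-22)
        inventory -= quantity
        if inventory <= REORDER_LEVEL:   # reordering check (23-25)
            send_reorder_to_supplier()
        return "fulfilled", inventory
    # case 3: quantity > inventory > 0 (29-38)
    if customer_accepts_partial():
        return process_order(inventory, inventory)  # re-process with new quantity (37, 38)
    return "cancelled", inventory

print(process_order(12, 8))
```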
Fig. 8 is an engineering diagram that will be realized as a tangible Process. It looks to be a complex diagram; however, complexity is a relative term. When two representations involve the same level of abstraction, we can say that one of them is more complex than the other. UML is known for its complexity because it involves 14 diagram types, each with a different notation.
There are no generally accepted semantics of these concepts as conceptual modeling elements [29]. On the other hand, the apparent complexity of Fig. 8 is the result of repeatedly using the five generic actions _create_, _process_, _release_, _transfer_, and _receive_, which give the model a uniformity that is rarely found in systems.
Fig. 8 can be simplified by assuming that the arrow direction indicates the direction of flow; thus, the transfer, release, and receive actions can be eliminated, resulting in Fig. 9. Note that the original diagram is still the base for the design phase, just as a complex electric circuit may be simplified by combining series and parallel resistors within the context of the larger circuit. Furthermore, this simplified diagram can be simplified even further, e.g., by eliminating _create_ and _process_.
Figure 8: The static model of the inventory system.
### _Dynamic and Behavior Models_
An event is a subdiagram of the static model (called the region of the event) injected with time. Fig. 10 shows the description of the event _Product has been delivered to the customer_.
For simplification's sake, we will represent an event by its region. Accordingly, we identify the following events, which are shown in Fig. 11.
Fig. 11: The dynamic model of the inventory system.
Fig. 10: The event _Product has been delivered to the customer_.
Fig. 9: Simplified static model of the inventory system.
* The customer creates an order that flows to the shop to be processed.
* The ordered quantity is extracted from the order.
* The current inventory value is retrieved.
* The ordered quantity is compared with the inventory.
* The result of the comparison is Quantity \(\leq\) Inventory.
* Invoice is sent and a payment is received.
* Product has been delivered to the customer.
* Inventory Current value has been updated.
* Reordering level has been reached; hence, a supply order has been sent to the supplier.
* Ordered product from the supplier is received and the inventory value is updated.
* The result of comparison is Inventory = 0; hence, a decline notification is sent to the customer.
* The result is Quantity \(>\) Inventory \(>\) 0; hence, a confirmation of partial fulfillment is sent to the customer.
* The customer accepts partial fulfillment.
* The customer does not accept partial fulfillment; hence, the order is cancelled.
Fig. 12 shows the behavior model of the inventory system. Note how the customer order is cancelled in case of the customer's refusal of a partial fulfillment (R\({}_{1}\); this means reverting to region 1, i.e., the order no longer exists). This cancelation is represented by a diamond-tail arrow from E\({}_{14}\) to E\({}_{1}\). This means, according to Lupascian logic, "not E\({}_{1}\)," which means returning to subsistence in Stoic ontology. Semantically, this indicates that the customer order does not exist anymore.
### _Queuing as a Process_
Consider the _Process_ where it is required to install a queue of orders waiting to be processed to extract the order quantity (red process box in Fig. 8). Fig. 13 shows how to install such a queue just before this process. We only show the dynamic model to save space since the static model can be extracted from the dynamic model.
* In the figure, E\({}_{1}\) and E\({}_{2}\) are the two events of receiving the orders, inputting them into the queue Q, and making Q _not empty_ (E\({}_{3}\)). This procedure continues filling the Q without limit (assumption). An empty Q is an initial condition.
* If the Q is _not empty_ (E\({}_{3}\)) and the Process (red box) is _not busy_ (E\({}_{4}\)), then an order is retrieved from Q (E\({}_{5}\)) and sent to the Process (red box). If E\({}_{5}\) leaves the Q empty (E\({}_{6}\)), then the Q indicator is set to _empty_ (E\({}_{7}\)).
Fig. 12: The behavior model of the inventory system.
Fig. 13: The dynamic model of the queue system.
The action process (red box _process_ in Fig. 13) is initially _not busy_. When the Process is activated (E\({}_{8}\)), its indicator is set to _busy_ (E\({}_{9}\)). When the Process (red box) finishes (E\({}_{10}\)), its indicator is set to _not busy_ (E\({}_{4}\)).
Fig. 14 shows the behavioral model of this queuing Process.
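As a sanity check on these event semantics, the queue Process can be simulated directly. The sketch below is an assumption-laden rendering (orders arrive one at a time and processing completes instantaneously), not the TM formalism itself; the E-labels in comments map to the events above, and all names are our own.

```python
from collections import deque

queue = deque()   # Q, initially empty
busy = False      # the red process box is initially not busy (E4)

def extract_quantity(order):            # the red process box of Fig. 8
    print("quantity:", order["quantity"])

def try_dispatch():
    global busy
    while queue and not busy:           # guard of E5: Q not empty (E3), not busy (E4)
        order = queue.popleft()         # E5; E6/E7 follow implicitly if Q empties
        busy = True                     # E8, E9: Process activated, indicator set to busy
        extract_quantity(order)
        busy = False                    # E10 -> E4: finished, indicator set to not busy

def on_order_arrival(order):            # E1, E2: receive the order and enqueue it
    queue.append(order)                 # Q becomes not empty (E3)
    try_dispatch()

for q in (2, 5, 1):
    on_order_arrival({"quantity": q})
```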
## IV Modeling Medical System
According to Li et al. [12], "Time is an important and pervasive concept of the real world." Li et al. [12] developed a time event ontology with "a rich set of classes and properties (object, data, and annotation)" that can formally represent and reason over both structured and unstructured temporal information. They used the following:
* Concept primitives: clinical events ("anything that is relevant to the patient's clinical timeline"), temporal expressions, and "enriched" temporal relations.
* Real electronic health record data that faithfully represent more than 95% of the temporal expressions, according to Li et al. [12].
* There are six types of events: test, problem, treatment, clinical_dept, evidential, and occurrence [30].
The results were applied to a set of frequently asked time-related queries and show a strong capability for reasoning about complex temporal relations.
Li et al. [12] introduced a class _event_ to represent time-oriented medical events, which include any sort of "occurrences, states, procedures or situations that occur on a timeline." Several subclasses are designed to cover the common clinical events (e.g., clinical intervention, diagnosis, test).
As an example, the following events report was initially manually annotated and then loaded into the Reasoner for inference. In the report, the words in red italic are manually annotated as events.
* A 35-year-old man was admitted to hospital with periorbital swelling, redness, and pain on May 24, 2014. Then he was diagnosed with periorbital cellulitis. He was treated with intravenous (IV) clindamycin, and with IV ciprofloxacin, which reduced the orbital redness and swelling. However, on the second day following antibiotic treatment, he developed nausea and right upper quadrant (RUQ) abdominal pain, and his liver function tests (LFTs) began to increase. A diagnosis of idiosyncratic drug-induced liver injury (DILI) was made. [12]
### _Static Model_
Fig. 15 shows the corresponding static SCM model. First, the patient is admitted to the hospital (number 1) to be processed (2) and to record the patient's data (3). Note that to add some structure to the hospital, reception (4) and emergency (5) are added.
For example, this would give justification for executing two consecutive diagnoses at the beginning. The red arrow represents the movement of the patient through different stages of the medical processes. The first process (6) triggers the creation of initial diagnoses (7). Then, a diagnostic process triggers a medical description (8 and 9). In (10), a process creates a prescription (11), thus triggering the delivery (e.g., from the pharmacy) of medicine (12) to the patient (13). Accordingly, the "orbital redness and swelling" is reduced (14). This is followed by another process of diagnosis (15), which discovers that "nausea and right upper quadrant (RUQ) abdominal pain, his liver function tests (LFTs) began to increase" (16). As a treatment (17), a prescription is written (18).
Note that the patient is a thing (red arrow) that goes through all of these processes, and, at certain stages, the relevant data of that point appear. For example, at (13 and 14), the patient is "expanded" to indicate the administration of the medicine prescribed in (12) and the appearance of new patient symptoms.
### _Dynamic Model_
The following events are selected (Fig. 16).
* The _patient_ is admitted in the hospital and necessary data are recorded.
* Initial diagnoses: "Periorbital swelling, redness, and pain"
* Patient is examined and diagnosis is "periorbital cellulitis."
* A prescription is written.
* Medicine is given to the patient.
* Orbital redness and swelling are reduced.
* _"Nausea and right upper quadrant (RUQ) abdominal pain, his liver function tests (LFTs) began to increase"_ began to increase.
* _"Idiosyncratic drug-induced liver injury (DILI)"_ is diagnosed and its treatment prescribed.
Fig. 14: The behavior model of the queue system.
### _Analysis_
It is not difficult to see how this model should be generalized to become the base of a software system for any patient instead of the specific man mentioned in the events report given by Li et al. [12]. For example, diagnoses may be included in one file (e.g., UML class) instead of just three diagnoses marked in green in the model. Similarly, prescriptions are stored together (purple box includes medical prescriptions 1 and 2).
Li et al. [12] also gave sample queries that can be applied for events given in the events report. These and others can be incorporated into the SCM, including the following queries given by Li et al. [12].
_Query 1_: When was the patient admitted to the hospital? (**Answer is in E1**)
_Query 2_: What is the temporal relation between "admitted to hospital" and "liver function tests (LFTs) began to increase"? (**E1 and E7**)
_Query 3_: Does "ciprofloxacin" treatment start before "diagnosis of idiosyncratic drug-induced liver injury (DILI)"? (**E4 and E8**)
_Query 4_: What events happened before "diagnosis of idiosyncratic drug-induced liver injury (DILI)"? (**E1 to E8**)
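Once the report is reduced to the events E1-E8, such queries become simple lookups and order comparisons over a timeline. The fragment below is a toy sketch; the dates after E1 are invented for illustration, since the report only anchors the admission date.

```python
from datetime import date

# (event label, time) pairs for E1-E8; only the E1 date comes from the report.
timeline = [
    ("admitted to hospital",              date(2014, 5, 24)),  # E1
    ("periorbital cellulitis diagnosed",  date(2014, 5, 24)),  # E3
    ("IV clindamycin and ciprofloxacin",  date(2014, 5, 24)),  # E4, E5
    ("LFTs began to increase",            date(2014, 5, 26)),  # E7
    ("DILI diagnosed",                    date(2014, 5, 27)),  # E8
]
when = dict(timeline)

# Query 1: when was the patient admitted to the hospital?
print(when["admitted to hospital"])
# Query 3: does ciprofloxacin treatment start before the DILI diagnosis?
print(when["IV clindamycin and ciprofloxacin"] < when["DILI diagnosed"])
# Query 4: what events happened before the DILI diagnosis?
print([label for label, t in timeline if t < when["DILI diagnosed"]])
```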
Fig. 15: The static model.
## V Conclusion
The occurrence-only paradigm presented in this paper refers to conceptual modeling based solely on events and processes. This model has five generic events, with high-level events formed from these generic events. Some of these high-level events are processes when the complex of events forms a semantic whole. For example, the inventory control system discussed in section 3 can be called a process, whereas an arbitrary subdiagram of it may not form a "whole," and its associated events may not be qualified with a specific name.
Occurrence-only modeling can be categorized as the antithesis of the currently dominant object-oriented conceptual modeling (individual-based modeling with a commitment to message passing, encapsulation, inheritance, etc.).
Although the basic idea of incorporating events and processes in modeling has been utilized by many researchers, the occurrence-only approach is probably the first attempt to build a "top-down" modeling ontology and language based on these two notions as first-class citizens. Hence, no claim of completeness or correctness can be made for such a venture.
Accordingly, detailed scrutiny of some parts may uncover ambiguity and errors in different portions of the modeling technique. Hopefully, pursuing further refinements through modeling applications in different domains will uncover these ambiguities and errors.
In the ontology part, the subsistence notion needs further scrutiny, especially the reasons for its rejection by reputable philosophers. The thing/machine concept requires further refinement, such as for situations that cannot be expressed by the five-action machine.
Fig. 16: Dynamic model.
2304.03609 | Revisiting Automated Prompting: Are We Actually Doing Better? | Current literature demonstrates that Large Language Models (LLMs) are great
few-shot learners, and prompting significantly increases their performance on a
range of downstream tasks in a few-shot learning setting. An attempt to
automate human-led prompting followed, with some progress achieved. In
particular, subsequent work demonstrates automation can outperform fine-tuning
in certain K-shot learning scenarios.
In this paper, we revisit techniques for automated prompting on six different
downstream tasks and a larger range of K-shot learning settings. We find that
automated prompting does not consistently outperform simple manual prompts. Our
work suggests that, in addition to fine-tuning, manual prompts should be used
as a baseline in this line of research. | Yulin Zhou, Yiren Zhao, Ilia Shumailov, Robert Mullins, Yarin Gal | 2023-04-07T12:06:44Z | http://arxiv.org/abs/2304.03609v2 | # Revisiting Automated Prompting: Are We Actually Doing Better?
###### Abstract
Current literature demonstrates that Large Language Models (LLMs) are great few-shot learners, and _prompting_ significantly increases their performance on a range of downstream tasks in a few-shot learning setting. An attempt to automate human-led prompting followed, with some progress achieved. In particular, subsequent work demonstrates that automation can outperform fine-tuning in certain \(K\)-shot learning scenarios Shin et al. (2020); Zhang et al. (2021). In this paper, we revisit techniques for automated prompting on six different downstream tasks and a larger range of \(K\)-shot learning settings. We find that _automated prompting does not consistently outperform simple manual prompting_. Our work suggests that, in addition to fine-tuning, _manual prompting should be used as a baseline_ in this line of research.
## 1 Introduction
Transformer-based Large Language Models (LLMs) are now considered foundation models for downstream tasks (Bommasani et al., 2021). The _pre-train then fine-tune_ approach achieved state-of-the-art performance on a range of Natural Language Processing (NLP) tasks (Liu et al., 2019; Raffel et al., 2020; Brown et al., 2020). Unfortunately, in many NLP applications, the lack of high-quality labelled training data is a barrier to producing a model with good performance in the pre-train and then fine-tune approach. To address this issue, _prompt-based learning_Petroni et al. (2019); Schick and Schutze (2020); Liu et al. (2021) emerged as a new paradigm for tuning a high-quality, pre-trained LLM in a _few-shot learning_ scenario, where only a few samples are available for downstream task learning.
In the prompt-based learning paradigm (Figure 1), an input \(X\) is modified using a template function \(p\), also known as a prompting function, which has one or more placeholders called mask tokens _<mask>_, resulting in a prompted input \(X^{\prime}=p(X)\) (Liu et al., 2021). Additionally, a verbaliser defines an answer domain \(\mathcal{Z}\), so that for an output label domain \(\mathcal{Y}\), there is a many-to-one mapping from an answer \(z\in\mathcal{V}_{y}\subseteq\mathcal{Z}\) to an output label \(y\in\mathcal{Y}\) in accordance with the downstream task. Considering a language model \(f_{o}\) pre-trained on a large corpus of text, such as Wikipedia, the goal of prompt-based learning is to fine-tune it on a small dataset of prompted inputs \(X^{\prime}\) and corresponding outputs \(y\), in order to produce a high-quality language model \(f_{p}\) capable of generating an answer \(z\) for a given input \(X\).
Prompting formulates downstream tasks such as sentiment analysis and text classification as cloze completion (also known as filling in the blanks). Furthermore, using prompts and fine-tuning allows models to gain superior few-shot learning capabilities (Lester et al., 2021; Schick and Schutze, 2020; Shin et al., 2020). Despite the relative success of prompt-based learning, the design of prompts can be a challenging task. As a result, many research studies sought to _automate_ the process of designing suitable prompts for downstream tasks (Liu et al., 2021; Zhang et al., 2021; Shin et al., 2020). The motivation for automating prompt design is usually two-fold: first, manually designing prompts can be
Figure 1: Sentiment analysis with the prompt-based learning paradigm. Input \(X^{\prime}\) is the prompted input, and there is a many-to-one mapping between answers \(z\in\mathcal{Z}\) and labels \(y\in\mathcal{Y}\).
time-consuming; and second, automated ones can often provide better performance. In this work, we question _the second motivation_ and demonstrate that _existing automated prompts do not consistently outperform their manual counterparts_ under various \(K\)-shot learning setups. In this paper, we make the following contributions:
* We thoroughly investigate automated prompts and demonstrate that they do not consistently outperform manual prompts, even when the latter are created using basic heuristics and selected among a small number of options (Section 3.2).
* We show empirically that fine-tuning only serves a strong baseline when \(K\geq 100\) in a \(K\)-shot learning setup (Section 3.2).
* By visualising the prompts generated by autoprompting, we explain why these prompts are not necessarily better than manually designed ones (Section 3.4).
* Supported by our empirical evidence and evaluation, we strongly recommend that _future research should consider manual prompts as a simple yet effective baseline_.
## 2 Related Work
The rise of the _prompting-based learning paradigm_ comes with the development of LLMs Brown et al. (2020), which were demonstrated to be good few-shot learners Liu et al. (2021). To begin with, researchers focused on manually crafted prompts for downstream tasks Petroni et al. (2019); Liu et al. (2021); Scao and Rush (2021); Zhao et al. (2021); Schick and Schutze (2020), yet soon shifted towards automated prompt designs. Schick _et al._ investigated how to automatically identify label words for a prompt Schick and Schutze (2020), while Shin _et al._ proposed AutoPrompt, a framework for automatically generating prompts for various tasks, through a gradient-based search Shin et al. (2020). Gao _et al._ used another LLM, T5 Raffel et al. (2020), to generate both the prompting templates and verbaliser answer domains Gao et al. (2020). Han _et al._ incorporated logic rules into prompt designs, combining several simple sub-prompts according to these rules Han et al. (2022). All of the above mentioned methods are based on the assumption that the prompt design has to rely on discrete tokens.
Liu _et al._ and Lester _et al._ demonstrated that prompts could be trainable continuous embeddings, or soft prompts, instead of discrete tokens. These soft prompts can be learned with a frozen LM on a target task Liu et al. (2021); Lester et al. (2021); Zhang et al. (2021). Liu _et al._ further discovered that Deep Prompts, which are soft prompts used in every layer of the model, allow for scaling to large LMs for complex natural language processing (NLP) tasks Liu et al. (2021). Zhang _et al._ developed Differentiable Prompts, which put the label token design of the prompt into a continuous space and optimised it jointly with soft prompts Zhang et al. (2021). An extensive evaluation was conducted by Zhang _et al._ on various downstream tasks.
Most of the work on automating prompt design mentioned above has two major motivations: to reduce the amount of time it takes to design prompts manually; and to potentially gain better performance, since manual prompt formats can be sub-optimal Zhang et al. (2021). While the first motivation may be valid in some cases, it largely depends on the task complexity and the amount of data available - it is sometimes possible for non-experts to design a prompt sufficient for simple tasks with a large amount of data. The principal focus of this work, however, is on the second motivation: _can automated prompts really outperform manual prompts in a consistent manner?_ A comparison between automated and manual prompts is lacking in current research. To our knowledge, automated prompting methods focus solely on comparing to fine-tuning in a few-shot learning setup, while comparisons to manual prompting methods remain unexplored. In this paper, we consider AutoPrompt (Auto) Shin et al. (2020) and Differential Prompt (Diff) Zhang et al. (2021) as representatives, where one is based on discrete tokens, while the other is based on continuous embeddings. We compare them with manually designed prompts and fine-tuning without prompting on various tasks.
## 3 Evaluation
### Experiment setup
A robust framework was developed to assess prompting model performance under \(K\)-shot learning scenarios where only \(K\) samples per class are available for the training and validation datasets. Three prompting models were re-implemented: LM-BFF (manual) Gao et al. (2020), AutoPrompt
(Auto) Shin et al. (2020), and DART (Diff) Zhang et al. (2021) models. During prompt-based learning, each prompting model is allowed to fine-tune the parameters of the pre-trained language model using the limited training and validation datasets.
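For concreteness, a \(K\)-shot split of the kind used throughout these experiments can be drawn as follows. This is a generic sketch of the sampling scheme described above, not the authors' actual code, and it assumes a list of examples with a "label" key and at least \(2K\) examples per class.

```python
import random
from collections import defaultdict

def k_shot_split(examples, k, seed=0):
    """Sample k training and k validation examples per class."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for ex in examples:
        by_label[ex["label"]].append(ex)
    train, dev = [], []
    for group in by_label.values():
        picked = rng.sample(group, 2 * k)   # k for train + k for validation
        train += picked[:k]
        dev += picked[k:]
    return train, dev
```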
#### 3.1.1 Datasets and Model
We conducted comprehensive experiments on six datasets to compare the performance of prompting models fine-tuned on the pre-trained RoBERTa-large model Liu et al. (2019). As Table 2 in Appendix B shows, we picked three sentiment analysis and three textual entailment tasks.
#### 3.1.2 Prompt Templates and Verbalisers
We design prompts to concatenate the input text and the _<mask>_ token, alongside a verbaliser that maps from the answer domain to the output label domain. Manually designed prompts and verbalisers are adapted from the Public Pool of Prompts Bach et al. (2022) and previous work on prompting Gao et al. (2020); Xu et al. (2022). For each dataset, we selected four to six prompt-and-verbaliser pairs, compared their performance under the same \(K=16\) few-shot scenario, and picked the best-performing pair for further experiments with different \(K\) values. Detailed manually designed prompts and verbalisers, as well as their performance measures, are illustrated in Table 3, and the best-performing pairs are summarised in Table 4 in Appendix C.
An automated discrete prompt replaces the template with trigger tokens <\(T\)>. Following the same settings used in AutoPrompt Shin et al. (2020), we inserted ten trigger tokens between the input text and the _<mask>_ token. Under a \(K\)-shot scenario, the verbaliser mapping is automatically generated from the train and validation dataset, each with \(K\) samples per class. Table 5 in Appendix D shows the automated discrete prompts and verbalisers for each dataset. A differential prompt starts from the manually designed prompt but treats both the template and the verbaliser as a collection of differentiable parameters.
Take the dataset SST2 as an example: a suitable manually designed prompt could be "<sentence>. It was <mask>." with a verbaliser \(\{\texttt{bad}\mapsto 0,\texttt{good}\mapsto 1\}\); An automated discrete prompt could be "<sentence> <\(T\)>... <\(T\)> <mask>." with ten trigger tokens <\(T\)>.
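In code, the SST2 manual prompt and verbaliser reduce to a template function plus a restricted argmax over the LM's prediction at the _<mask>_ position. The sketch below assumes a HuggingFace-style RoBERTa masked LM and shows the scoring only, not the fine-tuning loop used in the experiments.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-large")
lm = AutoModelForMaskedLM.from_pretrained("roberta-large").eval()
verbaliser = {" bad": 0, " good": 1}                 # answer z -> label y

def predict(sentence):
    prompted = f"{sentence} It was {tok.mask_token}."        # X' = p(X)
    inputs = tok(prompted, return_tensors="pt")
    mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = lm(**inputs).logits[0, mask_pos]
    # score only the verbaliser's answer words; map the best one to its label
    scores = {z: logits[tok.convert_tokens_to_ids(tok.tokenize(z))[0]].item()
              for z in verbaliser}
    return verbaliser[max(scores, key=scores.get)]

print(predict("A gripping, beautifully shot film."))         # expected: 1
```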
#### 3.1.3 Hyper-parameters
We conducted a beam search using the AdamW optimiser Loshchilov and Hutter (2017) for the optimal batch size, learning rate and weight decay for each set of experiments with the same dataset and \(K\)-shot value. Each experiment is run with \(100\) epochs and an early stopping value of \(5\), _i.e._, when the validation loss is non-decreasing for \(5\) epochs. The detailed hyper-parameters used in each set of experiments are listed in Table 6, and details on the evaluation metrics are in Appendix E.
### Main Results
Table 1 illustrates the performance of various prompting strategies. We observe that manual prompts exhibit the best performance in 13 out of the 24 setups (6 different datasets and 4 different \(K\)s), and the second-best performance in 8 of them. Automated prompts (both Auto and Diff) only show a clear advantage on TWEETS-HATE-OFFENSIVE when \(K=100\). The baseline in Table 1 is direct fine-tuning on the \(K\) samples.
We also see that automated prompts can be catastrophically ineffective in certain setups. For example, as shown in Table 1, Auto performs much worse than Manual or Baseline on MNLI-MATCHED when \(K=100\). Diff also significantly underperforms Manual on TWEETS-HATE-OFFENSIVE when \(K=16\). In later parts of this section, we provide an analysis of the generated prompts and explore the reasons for this phenomenon. Finally, we demonstrate that Baseline sometimes performs well when \(K\) is large. This is seen on SST2 when \(K=100,1000\) and also on ENRON-SPAM when \(K=100\). In general, we make the following observations:
* Manual prompting outperforms automated prompting (Auto and Diff) with different \(K\)-shot setups on most tasks.
* Automated prompting sometimes cannot even outperform fine-tuning, _e.g._ MNLI-MISMATCHED \(K=100,1000\).
* When \(K\) is small, prompting can greatly improve performance, _e.g._ on SST2 and MNLI.
* Automated prompting can fail catastrophically (_e.g._ MNLI-MISMATCHED \(K=1000\)) and have a high variance in performance (_e.g._\(15.5\) standard deviation on SST2), while manual prompting is more robust.
### More \(K\)-shot Experiments
Figure 2 demonstrates the performance of different prompting styles with more \(K\) values on SST2, QNLI Wang et al. (2018) and ENRON-SPAM Metsis et al. (2006).
We observe that the performance of all methods starts to converge with larger \(K\) values, which is consistent with existing literature Shin et al. (2020). It is also worth mentioning that the automated prompting methods do not consistently outperform manual prompting on this large range of \(K\) values. More results are available in Appendix F.
### Visualizing Auto-prompts
As previously discussed, automated prompting can sometimes fail catastrophically. Table 5 summarises all the automated discrete prompts and verbaliser answer domains. Since the answer domain is generated from the \(K\) samples per class, it may not be general enough or optimal for the entire dataset. On the other hand, manual prompts and verbalisers are designed based on common knowledge that humans possess from countless examples encountered in daily life. One possible improvement to AutoPrompt is to start with a manually designed prompt and update both the prompt and the verbaliser through a gradient-based search in an iterative manner.
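For reference, the gradient-based search at the heart of AutoPrompt can be summarized in one HotFlip-style step: the gradient of the loss with respect to the input embeddings gives a first-order estimate of how the loss would change if a trigger token were swapped. The fragment below is a conceptual sketch of that single scoring step, assuming a HuggingFace-style masked LM (with `labels` set to -100 at non-target positions); it omits the candidate re-evaluation and verbaliser search, and all names are our own.

```python
import torch

def best_replacements(model, input_ids, labels, trigger_pos, k=5):
    """Score every vocabulary token as a replacement for the trigger token at
    `trigger_pos`, using the loss gradient w.r.t. the input embeddings."""
    emb_matrix = model.get_input_embeddings().weight                   # |V| x d
    inputs_embeds = emb_matrix[input_ids].detach().requires_grad_(True)
    loss = model(inputs_embeds=inputs_embeds, labels=labels).loss
    grad = torch.autograd.grad(loss, inputs_embeds)[0][0, trigger_pos]  # d
    current = emb_matrix[input_ids[0, trigger_pos]]
    # first-order approximation of the loss change for each candidate token v
    delta = (emb_matrix - current) @ grad                              # |V|
    return delta.topk(k, largest=False).indices   # largest estimated decrease
```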
### Limitations
All prompting methods are trying to extract knowledge from the Large Language Models (LLMs).
\begin{table}
\begin{tabular}{c|c c c|c c c c} \hline \hline & \multicolumn{4}{c}{SST2} & \multicolumn{4}{c}{QNLI} \\ \(K\) & Baseline & Auto & Diff & Manual & Baseline & Auto & Diff & Manual \\ \hline \(8\) & \(59.8\pm 8.6\) & \(51.7\pm 1.9\) & \(\mathbf{88.0\pm 1.6}\) & \(77.6\pm 4.6\) & \(49.9\pm 1.0\) & \(51.5\pm 0.7\) & \(50.5\pm 2.1\) & \(\mathbf{54.6\pm 2.8}\) \\ \(16\) & \(72.1\pm 15.0\) & \(70.1\pm 3.9\) & \(\mathbf{87.8\pm 0.7}\) & \(86.9\pm 1.6\) & \(49.9\pm 0.2\) & \(53.4\pm 1.3\) & \(59.5\pm 3.6\) & \(\mathbf{74.1\pm 1.2}\) \\ \(100\) & \(\mathbf{89.6\pm 0.5}\) & \(83.5\pm 4.3\) & \(88.6\pm 0.7\) & \(89.4\pm 1.0\) & \(78.9\pm 2.3\) & \(74.0\pm 4.3\) & \(80.2\pm 2.1\) & \(\mathbf{82.7\pm 0.7}\) \\ \(1000\) & \(\mathbf{92.7\pm 0.2}\) & \(92.5\pm 0.2\) & \(90.1\pm 0.7\) & \(92.3\pm 0.2\) & \(87.2\pm 1.0\) & \(83.2\pm 3.8\) & \(85.2\pm 1.1\) & \(\mathbf{88.0\pm 0.3}\) \\ \hline \multicolumn{8}{c}{MNLI-Matched} & \multicolumn{4}{c}{MNLI-Mismatched} \\ \(K\) & Baseline & Auto & Diff & Manual & Baseline & Auto & Diff & Manual \\ \hline \(8\) & \(34.6\pm 2.4\) & \(34.2\pm 1.1\) & \(51.3\pm 1.1\) & \(\mathbf{55.7\pm 3.3}\) & \(33.8\pm 0.8\) & \(33.8\pm 0.5\) & \(47.6\pm 3.0\) & \(\mathbf{56.0\pm 1.4}\) \\ \(16\) & \(33.3\pm 0.2\) & \(34.9\pm 0.7\) & \(\mathbf{61.4\pm 1.5}\) & \(60.2\pm 3.7\) & \(32.8\pm 1.3\) & \(35.6\pm 0.8\) & \(59.4\pm 1.1\) & \(\mathbf{60.2\pm 2.7}\) \\ \(100\) & \(63.1\pm 1.3\) & \(42.3\pm 0.5\) & \(72.1\pm 0.8\) & \(\mathbf{74.1\pm 1.2}\) & \(73.6\pm 2.1\) & \(39.5\pm 1.0\) & \(73.3\pm 1.2\) & \(\mathbf{77.0\pm 1.2}\) \\ \(1000\) & \(82.7\pm 0.5\) & \(72.9\pm 2.3\) & \(80.0\pm 0.8\) & \(\mathbf{83.2\pm 0.3}\) & \(84.3\pm 0.5\) & \(76.6\pm 3.7\) & \(82.0\pm 0.4\) & \(\mathbf{85.0\pm 0.2}\) \\ \hline \multicolumn{8}{c}{ENRON-SPAM} & \multicolumn{4}{c}{TWEETS-HATE-OFFENSIVE} \\ \(K\) & Baseline & Auto & Diff & Manual & Baseline & Auto & Diff & Manual \\ \hline \(8\) & \(49.1\pm 36.6\) & \(73.4\pm 6.0\) & \(\mathbf{80.7\pm 5.7}\) & \(67.9\pm 12.2\) & \(14.5\pm 9.5\) & \(12.1\pm 4.6\) & \(\mathbf{32.5\pm 7.1}\) & \(25.8\pm 16.5\) \\ \(16\) & \(84.2\pm 4.0\) & \(80.5\pm 2.6\) & \(88.0\pm 2.3\) & \(\mathbf{89.4\pm 3.0}\) & \(38.0\pm 4.1\) & \(42.5\pm 2.6\) & \(37.2\pm 7.7\) & \(\mathbf{46.7\pm 2.5}\) \\ \(100\) & \(\mathbf{97.1\pm 0.4}\) & \(90.8\pm 0.4\) & \(96.3\pm 0.8\) & \(96.3\pm 0.5\) & \(44.9\pm 0.9\) & \(51.4\pm 3.4\) & \(\mathbf{59.7\pm 2.8}\) & \(47.0\pm 0.8\) \\ \(1000\) & \(98.0\pm 0.5\) & \(97.0\pm 0.7\) & \(\mathbf{99.0\pm 0.1}\) & \(98.7\pm 0.2\) & \(66.5\pm 1.5\) & \(66.8\pm 1.8\) & \(\mathbf{67.7\pm 3.3}\) & \(67.5\pm 2.1\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: The performance of various prompting methods on RoBERTa-large Liu et al. (2019) was assessed using numbers reported as percentages, with a mean and standard deviation across five independent runs. The best and second-best performing methods are represented in bold and underlined fonts, respectively. The baseline is fine-tuning only without any prompting, while Auto, Diff, and Manual correspond to AutoPrompt Shin et al. (2020), Differential Prompt Zhang et al. (2021), and LM-BFF Gao et al. (2020), respectively.
Figure 2: The performance of prompting models on the datasets SST2, QNLI Wang et al. (2018) and ENRON-SPAM Metsis et al. (2006) is shown for a wider range of \(K\) values. The solid line plots the mean accuracy across five independent runs, and is bounded by one standard deviation on both sides. |
2301.10915 | Parameter-Efficient Low-Resource Dialogue State Tracking by Prompt
Tuning | Dialogue state tracking (DST) is an important step in dialogue management to
keep track of users' beliefs. Existing works fine-tune all language model (LM)
parameters to tackle the DST task, which requires significant data and
computing resources for training and hosting. The cost grows exponentially in
the real-world deployment where dozens of fine-tuned LM are used for different
domains and tasks. To reduce parameter size and better utilize cross-task
shared information, we propose to use soft prompt token embeddings to learn
task properties. Without tuning LM parameters, our method drastically reduces
the number of parameters needed to less than 0.5% of prior works while achieving
better low-resource DST performance. | Mingyu Derek Ma, Jiun-Yu Kao, Shuyang Gao, Arpit Gupta, Di Jin, Tagyoung Chung, Nanyun Peng | 2023-01-26T03:01:59Z | http://arxiv.org/abs/2301.10915v2 | # Parameter-Efficient Low-Resource
###### Abstract
Dialogue state tracking (DST) is an important step in dialogue management to keep track of users' beliefs. Existing works fine-tune all language model (LM) parameters to tackle the DST task, which requires significant data and computing resources for training and hosting. The cost grows exponentially in real-world deployment, where dozens of fine-tuned LMs are used for different domains and tasks. To reduce the parameter size and better utilize cross-task shared information, we propose to use soft prompt token embeddings to learn task properties. Without tuning LM parameters, our method drastically reduces the number of parameters needed to less than 0.5% of prior works while achieving better low-resource DST performance.
## 1 Introduction
Dialogue state tracking (DST), which extracts structured conversation progress as a list of slot-value pairs from unstructured dialogue utterances, is an essential component of a dialogue system (Wang and Lemon, 2013). Unlike classification-based models that pick the slot value from given candidates (Ye et al., 2021; Chen et al., 2020), recent works formulate DST as a conditional generation task (Gao et al., 2019; Lin et al., 2020), where the concatenation of the dialogue history and a slot-specific prompt is fed to generative models and the text generation outputs are decoded into predicted slot values (Ham et al., 2020; Hosseini-Asl et al., 2020). This formulation enjoys the benefit of generalizability to unseen domains and slot types beyond a defined dialogue ontology (Li et al., 2021; Peng et al., 2021).
General prompting methods use a textual prompt to provide task information to the LM (Liu et al., 2021; Gao et al., 2021). Prior works have variations that update different parameter combinations such as both LM and prompt token embeddings (Gao et al., 2021; Li and Liang, 2021), only the token embeddings of the LM (Zhu et al., 2021), or only the prompt token embeddings (Lester et al., 2021; Gu et al., 2022; Vu et al., 2022).
While there are some existing prompt-based approaches for DST with different designs of prompts such as using slot name (Lee and Jha, 2019; Zhao et al., 2021; Lee et al., 2021; Su et al., 2022), slot description (Rastogi et al., 2020), slot type (Lin et al., 2021), possible values (Lin et al., 2021), priming examples (Gupta et al., 2022) and/or slot-specific question (Gao et al., 2019; Zhou and Small, 2019; Gao et al., 2020; Lin et al., 2021; Li et al., 2021) in prompt sentences, they all fine-tune the entire LM along with the prompt tokens for a new domain, which requires a significant amount of training time, system resources, and annotated data (Clarke et al., 2022; Sauer et al., 2022). The computing and data resource-hungry issues are more severe in the real-world deployment where LMs tuned for different domains and tasks need to be trained and hosted, and a typical dialogue system has to serve dozens of such LMs (Maronikolakis and Schutze, 2021; Strubell et al., 2019; Lacoste et al., 2019). This leads to a high cost of the development and service of dialogue systems and constrains offline deployment. In addition, limited data is available for a new domain or task.
We propose a **parameter-efficient** and **data-efficient** DST model for **low-resource** settings, which only needs to update 0.08% of parameters compared with the previous best model, by keeping LM parameters frozen and introducing soft prompt tokens to represent task properties of different slots. Figure 1 gives an overview of our model. The only prior work we are aware of that updates only prompt token embeddings, and is thus parameter-efficient, is Zhu et al. (2022); however, it focuses on continual domain adaptation and requires a significant amount of training data.
Our design introduces three techniques that are
generalizable to other generative-based information extraction models. 1) **Task-specific parameters**: _task prompt tokens_ are introduced to specifically learn domain, slot, and slot type information so that the model behaves according to the task; _word-mapping prompt tokens_ enable us to obtain task knowledge contained in natural language instructions and to optimize human-created prompts in a continuous embedding space. 2) **Task metadata in objective**: we introduce the reiteration technique in the target sequence in order to include explicit task signals in the text generation objective. 3) **Distinguishing segments**: segment embeddings help the model identify the prompt segment, dialogue speakers, and question partition. Our proposed method enables much more efficient dialogue system deployment, as only one LM needs to be hosted and inference for different domains can be realized by feeding domain-specific prompt token embeddings into the transformer stack.
Experiments on MultiWOZ 2.0 show that our method achieves better performance on low-resource DST with orders of magnitude fewer parameters. We further conduct ablation studies and error analysis, and examine the semantic information captured by the prompt tokens. We observe that our model is more specialized in predicting categorical slot values, is more conservative for slots with free output space, and introduces more hallucination errors for categorical slots.
## 2 Method
We introduce task definition (§ 2.1), overall framework (§ 2.2) and soft prompt designs (§ 2.3) in this section.
### Task Definition
The goal is to construct a belief state with \(|S|\) pairs of slot and value at a certain turn in a multi-turn conversation. All the turns up to the query turn are dialogue history, and slot-specific information (_i.e_. name, description, value candidates, question and type of the slot) is provided.1
Footnote 1: We show slot-specific info in § A.1 and § A.2. Value candidates are from dialogue ontology.
### Generative Seq2seq Framework
We use a decoder-only pre-trained language model (PLM) GPT-2 (Radford et al., 2019) as the backbone to provide language and commonsense knowledge, rather than an encoder-decoder model because of its superior performance (Li et al., 2021). To get a belief state at a certain turn, we create \(|S|\) data instances to predict the slot value for each slot. Figure 1 demonstrates the design and a sample query.
Input sequence.We construct the input sequence by concatenating the following segments: 1) _Task prompt tokens for domain, slot and type_, each has \(k\) prompt tokens and they are shared among instances with the same domain, slot or type; 2) _Prefix_, a short sentence containing slot description, names of domain, slot, and type, and all possible candidates if the query slot is categorical; 3) _Dialogue history_, in which [sys] and [usr] tokens are used to indicate the speaker; and 4) _Question_, human-written question about the slot.
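To make the segment layout concrete, the following sketch assembles one query instance. It is a minimal illustration assuming \(k=5\) prompt tokens per task segment; the marker strings, helper names, and slot metadata are our own illustrative assumptions rather than the exact tokens used here.

```python
# Minimal sketch of the input-sequence assembly (marker tokens are assumptions).
K = 5  # prompt tokens per task segment (k in the text)

def build_input(domain, slot, slot_type, description, candidates, history, question):
    # 1) task prompt tokens, shared across instances with the same domain/slot/type
    domain_p = [f"<domain_{domain}_{i}>" for i in range(K)]
    slot_p = [f"<slot_{slot}_{i}>" for i in range(K)]
    type_p = [f"<type_{slot_type}_{i}>" for i in range(K)]
    # 2) prefix with slot metadata; candidates only for categorical slots
    prefix = f"{description}. domain: {domain}, slot: {slot}, type: {slot_type}."
    if candidates:
        prefix += " candidates: " + ", ".join(candidates) + "."
    # 3) dialogue history with speaker markers, then 4) the slot-specific question
    turns = [f"[{spk}] {utt}" for spk, utt in history]
    return " ".join(domain_p + slot_p + type_p + [prefix] + turns + [question])

query = build_input("hotel", "price range", "categorical",
                    "price budget of the hotel", ["cheap", "moderate", "expensive"],
                    [("usr", "I need a cheap hotel in the north.")],
                    "What is the price range of the hotel the user prefers?")
```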
Target sequence and reiteration.We introduce the reiteration technique in the target sequence as shown in Figure 1 and generate task information before the answer phrase. We include the verbalized slot information as a "domain is domain name, slot is slot name, type is type name" phrase in the expected output sequence. By doing so, we require the model to optimize to remember the task information explicitly before generating the answer phrase, while using a consistent text generation cross-entropy loss. This technique allows the model to optimize upon both the answer and the
Figure 1: Model design. The snow icon indicates non-trainable parameters. Absolute positional embeddings are added together with segment embeddings and sequence embeddings; we omit them from the illustration for simplicity.
sentence containing slot metadata, and explicitly learn the task information.
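A minimal sketch of the reiteration template follows; the exact wording of the answer phrase is an assumption on our part, but it shows how the verbalized task metadata precedes the answer in the generation target.

```python
# Sketch: verbalize task metadata before the answer so the generation loss
# also supervises the task information (the answer template is assumed).
def build_target(domain, slot, slot_type, value):
    return f"domain is {domain}, slot is {slot}, type is {slot_type}, answer is {value}"

print(build_target("hotel", "price range", "categorical", "cheap"))
# domain is hotel, slot is price range, type is categorical, answer is cheap
```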
Segment embeddings.The input sequence contains segments with diverse formats and they are quite different from the format used in the pre-training phase of the LM. We divide the input sequence into segments, including five prompt segments, the system turns, the user turns and the answer segment. Tokens within a segment are assigned the same segment ID. Segment embeddings, which have the same length as the input sequence, are added with sequence embeddings and positional embeddings. We randomly initialize the embeddings of segment IDs and update them during training.
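A PyTorch sketch of the embedding combination is given below; the hidden size and the total of eight segments (five prompt segments, system turns, user turns, and the answer segment) follow the text, while everything else is illustrative.

```python
import torch.nn as nn

# Sketch: trainable segment embeddings added to token and positional embeddings
# before the frozen decoder stack (GPT-2 base hidden size assumed).
NUM_SEGMENTS, HIDDEN = 8, 768
segment_emb = nn.Embedding(NUM_SEGMENTS, HIDDEN)  # randomly initialized, trained

def combine(token_emb, pos_emb, segment_ids):
    # token_emb, pos_emb: (batch, seq_len, HIDDEN); segment_ids: (batch, seq_len)
    return token_emb + pos_emb + segment_emb(segment_ids)
```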
Training and inference.We pass the combined embeddings to the decoder stack to calculate the likelihood over the vocabulary. We use the cross-entropy loss with a regularization term to constrain the scale of prompt token embeddings, following \(L=CE+\lambda\|PE^{\prime}-PE\|_{2}^{2}\), where \(\lambda\) is a weighting factor and \(PE^{\prime}\) and \(PE\) are the updated and initialized prompt token embeddings (Muller et al., 2022). Parameters of the PLM are frozen, and only prompt and segment embeddings are updated with the Adam optimizer. During inference, we generate the output autoregressively with greedy decoding, and extract the answer with a rule-based function.
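The objective can be sketched as below; the weighting factor value is illustrative, and only the prompt and segment embedding tensors would be registered with the optimizer.

```python
import torch.nn.functional as F

# Sketch of L = CE + lambda * ||PE' - PE||_2^2 (lambda value is an assumption).
def prompt_tuning_loss(logits, labels, prompt_emb, init_prompt_emb, lam=0.01):
    ce = F.cross_entropy(logits.view(-1, logits.size(-1)), labels.view(-1))
    reg = ((prompt_emb - init_prompt_emb) ** 2).sum()  # squared L2 distance
    return ce + lam * reg
```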
### Soft Prompt Tokens
Prompt segments.We use two kinds of prompt tokens. _Task prompt tokens_ are chosen according to the task's metadata, and used in the domain, slot and type prompt segments. _Word-mapping prompt tokens_ are mapped from existing tokens in the prefix and question parts and used to replace normal tokens. In other words, task and word-mapping prompt tokens are shared across instances with the same task and instances using the same words, respectively. We concatenate embeddings of each prompt segment (obtained by separate embedding matrices) with dialogue history embeddings (obtained by the frozen token embedding matrix) to form sequence embeddings.
Prompt initialization.To boost the performance in the low-resource setting, we use the pre-trained token embeddings to initialize the soft prompt token embeddings. The token embeddings from PLM are used to represent word semantics for language understanding, while the soft prompt tokens are used to represent task information initialized by task-related semantic meanings. We initialize a task prompt token by embedding of a randomly chosen token from its domain, slot or slot type name. Word-mapping prompt tokens are initialized with the embedding of the mapped word.
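A sketch of this initialization with a Hugging Face GPT-2 checkpoint is shown below; using the `transformers` library and the `gpt2` checkpoint name is our assumption for illustration.

```python
import random
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Sketch: initialize task prompt embeddings from tokens of the task name.
tok = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2")
wte = lm.transformer.wte.weight  # frozen PLM token embedding matrix

def init_task_prompts(task_name, k=5):
    ids = tok(task_name)["input_ids"]
    chosen = [random.choice(ids) for _ in range(k)]  # one source token per prompt
    return wte[torch.tensor(chosen)].clone().detach().requires_grad_(True)

hotel_prompts = init_task_prompts("hotel")  # (k, hidden) trainable tensor
```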
## 3 Experimental Setup
Dataset.We experiment on dialogues of five domains (_i.e_. attraction, hotel, restaurant, train, taxi) in MultiWOZ 2.0 Budzianowski et al. (2018).
Settings.We evaluate on the low-resource few-shot DST task. We train with 5, 10, and 20 dialogues or 1%, 5%, and 10% of the training conversations, and evaluate on the full test set of each target domain.2
Footnote 2: § C.3 and C.4 show experimental setting details.
Evaluation metrics.Joint Goal Accuracy (JGA) represents the proportion of _turns_ with _all_ slots predicted correctly, and Slot Accuracy (SA) reflects the proportion of correct _slots_. If a slot is empty at a certain turn (for example, no related information is mentioned), the model needs to predict "none". A slot value is only correct if it matches exactly with the ground-truth value.
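The two metrics can be sketched as follows, assuming each turn's belief state is a dict over the full slot set with "none" for empty slots.

```python
# Sketch of the metrics; pred_turns/gold_turns are lists of {slot: value} dicts
# over the full slot set, with "none" for empty slots.
def joint_goal_accuracy(pred_turns, gold_turns):
    # a turn counts only if every slot matches exactly
    return sum(p == g for p, g in zip(pred_turns, gold_turns)) / len(gold_turns)

def slot_accuracy(pred_turns, gold_turns):
    pairs = [(p.get(s, "none"), v) for p, g in zip(pred_turns, gold_turns)
             for s, v in g.items()]
    return sum(pv == gv for pv, gv in pairs) / len(pairs)
```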
Baseline models.We compare with the following works.3 1) TRADE Wu et al. (2019): GRU-based model with copy mechanism; 2) DSTQA Zhou and Small (2019): QA-style model using ELMo representation; 3) T5DST Lin et al. (2021): T5-based generative model with slot type as prompt; 4) Lee et al. (2021): T5-based generative model with slot description and possible slot values as prompt; 5) Li et al. (2021): GPT-2 based QA-style generative model with manually created questions. The entire language model is updated for T5DST, Lee et al. and Li et al., and they represent the performance of prompt-based DST works. § B.6 shows a comparison with the baselines' frozen-LM variants.
Footnote 3: For a fair comparison, we do not compare with prompt-based DST works that jointly train with other tasks.
## 4 Experimental Results
Overall results.We show the overall few-shot experimental results in Table 1. Although our model uses only 0.08% and 0.45% of the parameters of the baselines, it still achieves higher JGA than all baseline models when using 1% or less training data across all domains. In particular, we observe around 5- and 9-point JGA increases for the attraction and hotel domains compared with the existing best models with 1% training data. In the attraction domain with 3 unique slots, our model trained using 5 dialogues performs on par with the previous best model using 20 dialogues. Our model shows its superiority especially when the number of unique tasks is small. Using 5% and 10% data, our model performs comparably with the existing best models, with small gaps.
We demonstrate the performance of slots with different types across all five domains in Figure 2, compared with two generative baselines Li et al. and T5DST Lin et al. (2021). These slot types are defined in Table 4. We observe the worst performance in open slots, which could be explained by the larger output candidate space.4 Breaking slot types down into more fine-grained types leads to better results (considering day as a separate type rather than categorical, and number and time as separate types rather than open). Compared with baselines, our model performs comparably on open and time slots, but is superior for categorical, number, and day slots.5
Footnote 4: A SA vs ontology size analysis is in § B.2.
Footnote 5: SA for each slot and comparisons are in § B.3.
Ablation study.In Table 2, removing the slot segment (Line 2) leads to the largest performance drop among the three task prompt segments (L1-3), as slot is the most fine-grained task categorization. The prefix (L5), which contains more metadata and parameters, is more important than the question prompt (L4). The model without segment embedding (L6) shows an average 7.8-point JGA drop, indicating the effectiveness of the segment embedding. We also observe an almost 2-point JGA drop (and an even larger drop with fewer training dialogues, shown in § B.1) without reiteration (L7), which shows the helpfulness of including explicit task information in the learning objective. Note that even without reiteration, our model performs better than all baselines using 1% training data.
Error and qualitative analysis.We categorize error cases as: 1) hallucination: predicting value for an empty slot; 2) omission: predicting "none" for a non-empty slot; 3) wrong value: predicting wrong real value for a non-empty slot Gao et al. (2020). Figure 3 shows the error distribution in terms of the proportion of each error category. The general open slots (including time and number) have
\begin{table}
\begin{tabular}{l|c|c c c c c c} \hline \hline
Model & Params & 5 & 10 & 20 & 1\% & 5\% & 10\% \\ \hline \hline
\multicolumn{8}{c}{Attraction (3 slots, 1\% = 27 conv.)} \\ \hline
TRADE & — & — & — & — & — & 52.19 & 58.46 \\
DSTQA & — & — & — & — & — & — & — \\
T5DST & 60M & 4.77 & 21.93 & 30.57 & 40.68 & 52.12 & 60.13 \\
Lee et al. & 60M & 6.33 & 19.12 & 34.53 & 37.56 & 54.34 & 38.75 \\
Li et al. & 335M & 7.90 & 27.09 & 35.63 & 42.18 & 49.13 & 60.85 \\
Ours & 271K & \textbf{33.56} & \textbf{39.41} & \textbf{45.75} & \textbf{47.28} & \textbf{56.99} & \textbf{63.61} \\ \hline
\multicolumn{8}{c}{Hotel (10 slots, 1\% = 33 conv.)} \\ \hline
TRADE & — & — & — & — & — & 31.93 & 41.29 \\
DSTQA & — & — & — & — & — & — & — \\
T5DST & 60M & 8.19 & 13.46 & 17.94 & 18.63 & 38.76 & 46.13 \\
Lee et al. & 60M & 9.31 & 15.76 & 22.07 & 24.41 & \textbf{40.11} & \textbf{41.04} \\
Li et al. & 335M & 12.49 & 15.15 & 19.44 & 24.04 & 37.88 & 46.47 \\
Ours & 271K & \textbf{15.63} & \textbf{18.18} & \textbf{22.50} & \textbf{33.01} & 38.24 & 45.60 \\ \hline
\multicolumn{8}{c}{Restaurant (7 slots, 1\% = 38 conv.)} \\ \hline
TRADE & — & — & — & — & — & 47.31 & 53.65 \\
DSTQA & — & — & — & — & — & — & — \\
T5DST & 60M & 13.80 & 19.51 & 27.99 & 24.79 & \textbf{53.32} & 58.44 \\
Lee et al. & 60M & 18.57 & 19.66 & 22.15 & 30.96 & 48.94 & \textbf{58.59} \\
Li et al. & 335M & 17.27 & 22.30 & 25.68 & 30.70 & 49.75 & 58.50 \\
Ours & 271K & \textbf{19.76} & \textbf{25.72} & \textbf{27.65} & \textbf{34.40} & 50.81 & 55.57 \\ \hline
\multicolumn{8}{c}{Taxi (4 slots)} \\ \hline
TRADE & — & — & — & — & — & 59.03 & 60.51 \\
DSTQA & — & — & — & — & — & — & — \\
T5DST & 60M & 48.22 & 53.74 & 58.27 & 58.19 & 59.23 & 59.03 \\
Lee et al. & 60M & 45.32 & 49.93 & 58.58 & 58.52 & 60.77 & \textbf{71.23} \\
Li et al. & 335M & 50.99 & 57.47 & 58.49 & 58.26 & \textbf{61.68} & 69.23 \\
Ours & 271K & \textbf{51.11} & \textbf{59.63} & \textbf{60.89} & \textbf{60.33} & 61.63 & 63.00 \\ \hline
\multicolumn{8}{c}{Train (6 slots, 1\% = 29 conv.)} \\ \hline
TRADE & — & — & — & — & — & 48.82 & 59.65 \\
DSTQA & — & — & — & — & — & — & — \\
T5DST & 60M & 12.31 & 21.93 & 36.45 & 43.93 & 69.27 & 69.48 \\
Lee et al. & 60M & 13.57 & 25.02 & 38.52 & 50.26 & 69.32 & 69.67 \\
Li et al. & 335M & 17.56 & 27.42 & 39.27 & 45.32 & \textbf{71.69} & 73.45 \\
Ours & 271K & \textbf{18.95} & \textbf{30.95} & \textbf{50.34} & \textbf{52.05} & 69.51 & \textbf{75.00} \\ \hline
\multicolumn{8}{c}{Average} \\ \hline
TRADE & — & — & — & — & — & 47.86 & 54.71 \\
DSTQA & — & — & — & — & — & — & — \\
T5DST & 60M & 17.46 & 26.11 & 33.30 & 38.18 & 54.54 & 60.64 \\
Lee et al. & 60M & 12.80 & 25.90 & 35.17 & 40.34 & 54.70 & 60.25 \\
Li et al. & 335M & 21.24 & 29.89 & 35.70 & 40.10 & 54.03 & \textbf{61.70} \\
Ours & 271K & \textbf{27.80} & \textbf{34.78} & \textbf{41.43} & \textbf{45.41} & \textbf{55.44} & 60.60 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Overall performance (JGA) with 5, 10, and 20 training dialogues and 1%, 5%, and 10% of training data. Detailed parameter counts are in § A.3, variances are in § B.5.
\begin{table}
\end{table}
Table 2: Ablation study of model components (JGA; rows L1-L7).
relatively more omission errors, while the general categorical slots have relatively more hallucination errors. Our model is more conservative for open slots compared with Li et al.6
Footnote 6: Our model produces a _relatively_ larger proportion of omission errors than Li et al., but it generates a reasonable amount of not-none values for non-empty slots, as explained in § B.7.
We then investigate the semantic information contained in the learned prompt tokens by selecting the most-changed prompt tokens and retrieving the closest tokens, i.e., those with the highest cosine similarity between the learned prompt token embedding and the frozen token embeddings of the PLM. We show the result for the attraction domain in Table 3, and for all domains in § B.4. The closest tokens are mostly variations of, or tokens semantically similar to, the expected meanings of the prompt tokens.
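This probing step can be sketched as below, reusing the frozen embedding matrix `wte` and tokenizer `tok` from the initialization sketch (both assumptions).

```python
import torch
import torch.nn.functional as F

# Sketch: frozen-PLM tokens closest (highest cosine similarity) to a learned
# prompt embedding.
def closest_tokens(prompt_vec, wte, tok, topk=5):
    sims = F.cosine_similarity(prompt_vec.unsqueeze(0), wte, dim=-1)  # (vocab,)
    ids = sims.topk(topk).indices.tolist()
    return [tok.decode([i]) for i in ids]
```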
## 5 Related Works
Two lines of work are closely related to our work.
Dialogue State Tracking.DST aims to extract slots and values to capture user intents from dialogues. In terms of model design, _classification-based models_ pick the slot value from candidates Ye et al. (2021); Gao et al. (2020); Chen et al. (2020), which assumes that the dialogue ontology is pre-defined and cannot be generalized Chen et al. (2020). _Generation-based models_ directly generate slot values to handle unseen domains and values Ham et al. (2020); Hosseini-Asl et al. (2020); Gao et al. (2019, 2020); Lin et al. (2020); Peng et al. (2021). In terms of knowledge sources, some methods create synthesized dialogues with human heuristics to do data augmentation for the target domain Campagna et al. (2020); Hou et al. (2020), which requires expensive human effort. Recent works transfer knowledge from data-rich domains with slot descriptions Lee and Jha (2019); Rastogi et al. (2020), slot types and possible values Lin et al. (2021), possible values Lee et al. (2021), task constraints Mi et al. (2022), similarity functions between slots and values Wu et al. (2020), and meta-learning Dingliwal et al. (2021), while the availability of related source domains constrains their generalizability. Some works transfer from other tasks like Reading Comprehension Gao et al. (2019, 2020); Lin et al. (2021). We take inspiration from these works and use a transformer-based generative model with slot metadata, but with far fewer trainable parameters.
Prompting methods.Recent works use a textual prompt as a part of the input sequence to provide task information to the LM Liu et al. (2021). The prompt can be chosen by experts Radford et al. (2019), learned as discrete readable tokens Shin et al. (2020) or continuous embeddings Qin and Eisner (2021). The textual prompt can also contain a few examples, known as "in-context learning", without tuning the LM Brown et al. (2020). Some works fine-tune both LM and prompt parameters Gao et al. (2021); Qin and Eisner (2021); Li and Liang (2021); Ma et al. (2022); Hsu et al. (2022) or only token embeddings Zhu et al. (2021). Works like Lester et al. (2021); Gu et al. (2022); Vu et al. (2022) show that freezing the PLM and tuning only learnable soft prompts, known as "prompt tuning", is competitive with fine-tuning while using far fewer parameters. For the DST task, Lee et al. (2021) use slot information as the prompt and fine-tune the PLM, and Zhu et al. (2022) use prompt tuning for continual domain adaptation; both require a significant amount of training data. For the low-resource DST task, Yang et al. (2022) introduce two-way prompts but need to fine-tune the LM, which is not parameter-efficient.
## 6 Conclusion and Future Work
We propose a parameter-efficient DST model using prompt tuning, and it represents tasks with soft prompt tokens with segment awareness and reiteration. Our model achieves state-of-the-art low-resource DST performance with less than 0.5% parameters compared with fine-tuning LM. We plan to further investigate prompt aggregation.
Figure 3: Error distribution across slot types
\begin{table}
\end{table}
Table 3: Closest frozen-PLM tokens to the most-changed prompt tokens in the attraction domain (e.g., \textless{}domain\_attraction\_4\textgreater{}).
### Limitations
There are several limitations to our work. First, the proposed model is more sensitive to hyperparameters, such as the number of prompt tokens and the learning rate, than existing methods that fine-tune the LM. Therefore, it would require additional hyperparameter search effort to obtain the best performance. Second, our model is designed for and evaluated on English-only conversations, and applying our technique to other languages or code-switching scenarios might lead to performance decay. Finally, our experimental results show that our proposed prompt tuning method works better than fine-tuning the LM when there are fewer unique tasks to be optimized. Therefore, our method might not work well on a more diverse dataset.
## Ethics Statement
We do not see an immediate negative impact of the proposed method and do not see biased predictions made by our model. Our method is based on a pre-trained generative language model and trained on an open DST dataset, thus bias contained in the corpus for pre-training and the DST dataset might propagate to prediction outputs of our model. Human validation of the prediction results and their fairness needs to be conducted before our proposed model is used in production and other real-world applications. Our proposed model does not increase energy and carbon costs but will potentially reduce them due to its data and parameter efficiency.
## Acknowledgments
Many thanks to Sidi Lu, Tanmay Parekh, and Sarik Ghazarian for internal reviews, to members at Amazon Alexa AI, PLUS lab and UCLA-NLP for suggestions, and to the anonymous reviewers for their feedback.
|
2302.01390 | Controlling the Skyrmion Density and Size for Quantized Convolutional
Neural Networks | Skyrmion devices show energy efficient and high integration data storage and
computing capabilities. Herein, we present the results of experimental and
micromagnetic investigations of the creation and stability of magnetic
skyrmions in the Ta/IrMn/CoFeB/MgO thin film system. We investigate the
magnetic-field dependence of the skyrmion density and size using polar
magneto-optical Kerr effect (MOKE) microscopy supported by a micromagnetic study. The
evolution of the topological charge with time under a magnetic field is
investigated, and the transformation dynamics are explained. Furthermore,
considering the voltage control of these skyrmion devices, we evaluate the
dependence of the skyrmion size and density on the Dzyaloshinskii-Moriya
interaction and the magnetic anisotropy. We furthermore propose a
skyrmion-based synaptic device based on the results of the MOKE and micromagnetic
investigations. We demonstrate the spin-orbit torque-controlled discrete
topological resistance states with high linearity and uniformity in the device.
The discrete nature of the topological resistance makes it a good candidate to
realize hardware implementation of weight quantization in a quantized neural
network (QNN). The neural network is trained and tested on the CIFAR10 dataset,
where the devices act as synapses to achieve a recognition accuracy of 87%,
which is comparable to the result of ideal software-based methods. | Aijaz H. Lone, Arnab Ganguly, Hanrui Li, Nazek El- Atab, Gobind Das, H. Fariborzi | 2023-02-02T19:56:14Z | http://arxiv.org/abs/2302.01390v1 | # Controlling the Skyrmion Density and Size for Quantized Convolutional Neural Networks
###### Abstract
_ABSTRACT: Skyrmion devices show energy-efficient and high-integration data storage and computing capabilities. Herein, we present the results of experimental and micromagnetic investigations of the creation and stability of magnetic skyrmions in the Ta/IrMn/CoFeB/MgO thin-film system. We investigate the magnetic-field dependence of the skyrmion density and size using polar magneto-optic Kerr effect (MOKE) microscopy supported by a micromagnetic study. The evolution of the topological charge with time under a magnetic field is investigated, and the transformation dynamics are explained. Furthermore, considering the voltage control of these skyrmion devices, we evaluate the dependence of the skyrmion size and density on the Dzyaloshinskii-Moriya interaction and the magnetic anisotropy. We furthermore propose a skyrmion-based synaptic device based on the results of the MOKE and micromagnetic investigations. We demonstrate the spin-orbit torque-controlled discrete topological resistance states with high linearity and uniformity in the device. The discrete nature of the topological resistance (weights) makes it a candidate to realize hardware implementation of weight quantization in a quantized neural network (QNN). The neural network is trained and tested on the CIFAR-10 dataset, where the devices act as synapses to achieve a recognition accuracy of \(\thicksim\)87%, which is comparable to the result of ideal software-based methods._
_KEYWORDS: Skyrmions, Magnetic Tunnel Junction, Magneto-Optical Kerr Effect, Micromagnetics, Topological Resistance, and Neuromorphic devices_
## I Introduction
Magnetic skyrmions are topologically protected swirling structures obtained using chiral interactions in noncentrosymmetric magnetic compounds or thin films exhibiting broken inversion symmetry [1]. The intrinsic topological protection of the skyrmion makes them stable against external perturbations [2]. Topological protection means that skyrmions have a characteristic topological integer that cannot change by the continuous deformation of the field [3]. The Dzyaloshinskii-Moriya interaction (DMI), as a chiral antisymmetric exchange interaction responsible for the formation of magnetic skyrmions, is based on the strong spin-orbit coupling at
the heavy metal/ferromagnetic (HM/FM) interface with broken-inversion symmetry [4]. Magnetic skyrmions emerge from the interaction between different energy terms. The exchange coupling and anisotropy terms support the parallel alignment of spins, whereas the DMI and dipolar energy terms prefer the noncollinear alignment of spins [5]. In asymmetric FM multilayer systems, such as Pt/Co/Ta [6] and Pt/CoFeB/MgO [7], the DMI is facilitated by the high interfacial spin-orbit coupling induced by symmetry breaking. Because the DMI and anisotropy terms are material property- and geometry-dependent, a combination of different HM/FM structures has been investigated to stabilize skyrmions and define a specific chirality [8]. Spintronic devices based on magnetic skyrmions exhibit increased density and promote energy-efficient data storage because of their nanometric size and topological protection [9]. Magnetic skyrmions can be driven by extremely low depinning current densities [10], and they can be scaled down to 1 nm [11]. These properties indicate that they can be potentially applied in data storage and computing [12]. In particular, skyrmion devices exhibit excellent potential for application in unconventional computing techniques such as neuromorphic computing [13] and reversible computing [14]. Neuromorphic computing is inspired by the performance and energy efficiency of the brain [15]. Furthermore, it employs neuromimetic devices, emulating neurons, for computing tasks. Synapses store information in terms of weight. Spintronic devices, particularly magnetic tunnel junctions (MTJs), have attracted considerable interest in neuromorphic computing [16][17][18][19]. Recently, multiple neuromorphic computing systems coupled with MTJs based on skyrmions, such as skyrmion neurons [20][15][21] and skyrmion synapses [13][22][23], have been proposed. Furthermore, the control of spintronic devices using an electric field has attracted considerable attention for memory and logic applications because it is an efficient approach for improving the data-storage density [24][25] and reducing the switching energy [26]. The important challenges encountered in the application of skyrmions in storage and computing (both conventional and unconventional computing) are the controlled motion and readability of skyrmions [27].
Quantization of neural networks is an effective method for model compression and acceleration.
To compress the network, all model weights stored as high-bit floating-point numbers are quantized into low-bit integers or fixed-point numbers, which significantly reduces the required memory size and computational cost [28]. Emerging memory devices could emulate functions of biological synapses to realize in-memory computing, which paved the way for quantized weight implementation in neuromorphic computing systems [29][30]. In this
study, we conduct experimental and micromagnetic investigations of magnetic skyrmions in the Ta/IrMn/CoFeB/MgO thin-film system for the realization of a neuromorphic device. The dependence of the skyrmion density and diameter on the magnetic field is studied by polar magneto-optic Kerr effect (MOKE) microscopy supported by micromagnetic simulations. The evolution of the topological charge with time under the magnetic field is investigated, and the transformation dynamics from the labyrinth domains to Néel skyrmions are explained. Furthermore, we evaluate the dependence of the skyrmion density and size on the DMI and the magnetic anisotropy to realize voltage-controlled skyrmion devices. Based on these results of the MOKE and micromagnetic investigations, we propose a skyrmion-based synaptic device. We demonstrate the spin-orbit torque (SOT)-controlled skyrmion device and its constituent discrete topological resistance states. Considering the discrete nature of the topological resistance (weights), we demonstrate the neuromorphic implementation based on the quantized convolutional neural network (QCNN) architecture. The NN is trained and tested on the CIFAR-10 dataset and the devices acting as synapses achieve a recognition accuracy of \(\sim\)90%, which is comparable to the accuracy of ideal software-based methods.
## II Methods
As shown in Fig. 1, a multilayer thin film of Ta (5 nm)/IrMn (5 nm)/CoFeB (1.04 nm)/MgO (2 nm)/Ta (2 nm) was deposited on thermally oxidized Si substrates by Singulus DC/RF magnetron sputtering. In this sample, the thickness of CoFeB is a critical parameter that provides suitable anisotropy for creating high-density skyrmions. The sputtering conditions were carefully optimized to achieve perpendicular magnetic anisotropy (PMA). Then, the sample was subjected to a post-deposition annealing treatment at 250\({}^{\circ}\)C for 30 min to enhance the PMA. The investigations were performed using MOKE microscopy in the polar configuration. Differential Kerr imaging was performed to observe the magnetic domains and eliminate the contribution of any nonmagnetic intensity. Square pulses of the magnetic field (MF) were simultaneously applied both in-plane and out-of-plane of the sample using two independent electromagnets. The sample exhibited a labyrinth domain structure without an MF. First, using a sufficiently large out-of-plane field, magnetization was saturated in one perpendicular direction. A reference polar-MOKE image was captured in this state. Next, the out-of-plane MF strength (\(H_{z}\)) was decreased to the desired value
while the in-plane field with strength \(H_{x}\) was applied, as required. Subsequently, a second polar-MOKE image was captured in this state. The magnetic image of the final state was obtained by the differential image relative to the reference image. Fig. 1(b) shows the typical double-loop magnetic hysteresis characteristic of the multidomain state in the system, which supports the MOKE imaging results.
### Micromagnetics
Magnetic skyrmions are described using their topological or skyrmion number (Q), calculated as follows [31]:
\[Q=\tfrac{1}{4\pi}\int\int\mathbf{m}.\left(\tfrac{\partial\mathbf{m}}{ \partial x}\times\tfrac{\partial\mathbf{m}}{\partial y}\right)dxdy. \tag{1}\]
With the spins projected on the xy plane, the normalized magnetization vector (\(\mathbf{m}\)) can be written in terms of the radial function (\(\theta\)), the azimuthal angle \(\varphi\), the vorticity (\(Q_{v}\)), and the helicity (\(Q_{h}\)):
\[m(r)=[\sin(\theta)\cos(Q_{v}\varphi+Q_{h}),\sin(\theta)\sin(Q_{v }\varphi+Q_{h}),\cos\left(\theta\right)]. \tag{2}\]
The vorticity is related to the skyrmion number via the following expression [12]:
\[Q=\tfrac{Q_{v}}{2}\Big{[}\lim_{r\rightarrow\infty}\cos(\theta(r ))-\cos(\theta(0))\Big{]}. \tag{3}\]
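As a numerical sketch, the snippet below builds a Néel skyrmion from Eq. (2) with (\(Q_{v}\), \(Q_{h}\)) = (1, 0) and evaluates Eq. (1) with central differences; the grid size, box size, and radius profile \(\theta(r)=2\arctan(R/r)\) are illustrative assumptions.

```python
import numpy as np

# Sketch: Neel skyrmion texture (Eq. 2) and discretized topological charge (Eq. 1).
N, L, R = 256, 128.0, 15.0          # grid cells, box size (nm), radius (nm) - assumed
x = np.linspace(-L / 2, L / 2, N)
X, Y = np.meshgrid(x, x, indexing="ij")
r, phi = np.hypot(X, Y), np.arctan2(Y, X)
theta = 2.0 * np.arctan2(R, r)      # theta(0) = pi (core down), theta(inf) = 0

m = np.stack([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)])       # shape (3, N, N)

dx = x[1] - x[0]
dm_dx = np.gradient(m, dx, axis=1)
dm_dy = np.gradient(m, dx, axis=2)
q_density = np.einsum("ixy,ixy->xy", m, np.cross(dm_dx, dm_dy, axis=0))
Q = q_density.sum() * dx * dx / (4.0 * np.pi)
print(f"Q = {Q:+.3f}")              # |Q| close to 1 for a single skyrmion
```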
Figure 1: Schematic of the fabricated sample for MOKE and vibrating sample magnetometry (VSM) characterizations
Micromagnetic simulations were performed using MuMax, with the Landau-Lifshitz-Gilbert (LLG) equation as the basic magnetization-dynamics computing unit [32]. The LLG equation describes the magnetization evolution as follows:
\[\frac{d\widehat{\mathbf{m}}}{dt}=\frac{-\gamma}{1+\alpha^{2}}\big{[}\mathbf{m}\times\mathbf{H}_{\mathbf{eff}}+\alpha\,\mathbf{m}\times(\mathbf{m}\times\mathbf{H}_{\mathbf{eff}})\big{]}, \tag{4}\]
where \(\gamma\) is the gyromagnetic ratio, \(\alpha\) is the Gilbert damping coefficient, and \(\mathbf{H}_{\mathbf{eff}}\) is the effective field given in Eq. (7).
The total magnetic energy of the free layer includes the exchange, Zeeman, uniaxial anisotropy, demagnetization, and DMI energies [33][34]:
\[E(\mathbf{m})=\int_{V}[\,A(\nabla\mathbf{m})^{2}-\mu_{0}\mathbf{m}\cdot\mathbf{H}_{\mathbf{ext}}\,-\frac{\mu_{0}}{2}\mathbf{m}\cdot\mathbf{H}_{\mathbf{d}}-K_{u}(\widehat{\mathbf{u}}\cdot\mathbf{m})^{2}+\varepsilon_{DM}\,]d\mathbf{v}, \tag{5}\]
where A is the exchange stiffness, \(\mu_{0}\) is the permeability, \(K_{u}\) is the anisotropy energy density, \(\widehat{\mathbf{u}}\) is the easy-axis direction, \(\mathbf{H}_{\mathbf{d}}\) is the demagnetization field, and \(\mathbf{H}_{\mathbf{ext}}\) is the external field. Moreover, the DMI energy density is computed as follows:
\[\varepsilon_{DM}=D[m_{\text{z}}(\nabla\cdot\mathbf{m})-(\mathbf{m}\cdot\nabla)m_{\text{z}}]. \tag{6}\]
\[\mathbf{H}_{\mathbf{eff}}=\frac{-1}{\mu_{0}M_{\text{S}}}\frac{\delta E}{\delta\mathbf{m}} \tag{7}\]
is the effective magnetic field around which the magnetization vector precesses.
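To make the dynamics concrete, a single explicit integration step of Eq. (4) can be sketched as follows; MuMax uses adaptive higher-order solvers, so the Euler step and the parameter values here are purely illustrative.

```python
import numpy as np

GAMMA = 1.760859e11  # gyromagnetic ratio (rad s^-1 T^-1)

def llg_step(m, h_eff, alpha, dt):
    # one explicit Euler step of Eq. (4); m, h_eff have shape (..., 3)
    mxh = np.cross(m, h_eff)
    dmdt = -GAMMA / (1.0 + alpha**2) * (mxh + alpha * np.cross(m, mxh))
    m_new = m + dt * dmdt
    return m_new / np.linalg.norm(m_new, axis=-1, keepdims=True)  # renormalize

m = np.array([0.0, 0.1, 0.995])
m /= np.linalg.norm(m)
m = llg_step(m, np.array([0.0, 0.0, 0.1]), alpha=0.02, dt=1e-15)  # field in tesla
```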
Then, the SOT is introduced as a custom field term into MuMax [35]:
\[\mathbf{\tau}_{\mathbf{SOT}}=-\frac{\gamma}{1+\alpha^{2}}a_{J}[(1+\xi\alpha)\mathbf{m} \times(\mathbf{m}\times\mathbf{p})+(\xi-\alpha)(\mathbf{m}\times\mathbf{p})], \tag{8}\]
\[a_{J}=\left|\frac{\hbar}{2M_{\text{S}}e\mu_{0}}\frac{\theta_{SH}j}{d}\right| \qquad\text{and}\,\,\,\mathbf{p}=sign(\theta_{SH})\mathbf{j}\times\mathbf{n}\,\,,\]
where \(\theta_{SH}\) is the spin Hall coefficient of the material, \(j\) is the current density, and \(d\) is the free-layer thickness. The synapse resistance and neuron output voltage are computed using the nonequilibrium Green's function (NEGF) formalism. We then consider the magnetization profile of the free layer and feed it to the NEGF model, which computes the resistance of the device, as follows [36][37]:
\[R_{syn}=\frac{V_{syn}}{I_{syn}}. \tag{9}\]
Subsequently, the MTJ read current is computed as follows:
\[I_{syn}=trace\left\{\sum_{k_{t}}C_{\sigma}\frac{i}{h}\left(H_{k,k+1}G^{n}_{k+1,k}-G^{n}_{k,k+1}H_{k+1,k}\right)\right\}, \tag{10}\]
where \(H_{k}\) is the \(k\)th lattice site in the device Hamiltonian, and \(G^{n}_{k}\) is the electron correlation at the \(k\)th site, which yields the electron density.
## III Results and Discussion
In this study, skyrmions were created and annihilated in a multilayer thin film using a combination of in-plane and out-of-plane MFs. The sample and experimental conditions were optimized to achieve a relatively high skyrmion density at a low field amplitude. We experimentally obtained a maximum skyrmion density of \(350\times 10^{3}\)/mm\({}^{2}\) for the resultant MFs with \(H_{z}\) and \(H_{x}\) values of 20 Oe and 35 Oe, respectively. As shown in Fig. 2(a), we experimentally observed the evolution from the labyrinth domains into skyrmions under a constant in-plane MF (\(H_{x}=2\) mT) and an out-of-plane MF (\(H_{z}=0\)-2.4 mT), as reported in the top-right corner of each figure. The white and black contrasts correspond to the \(\uparrow\) and \(\downarrow\) domains, respectively. We then observed that the large labyrinth domains start splitting into small domain structures, and skyrmions start emerging as \(H_{z}\) increases from 0 to 1.2 mT. At \(H_{z}=2\) mT, all the labyrinth domains disappear and the skyrmion density reaches the maximum value. Then, any additional increase in the field strength decreases the skyrmion density because of skyrmion annihilation. The complete annihilation of the skyrmions occurs when \(H_{z}\) is \(\sim\)2.6 mT. The magnetization reversal mechanism is analytically explained using micromagnetic simulations. Here, we employ the finite difference method to solve the LLG equation to examine the spin dynamics in a system similar to that used in the experiments. We consider a 1024 \(\times\) 512 \(\times\) 1-nm\({}^{3}\) free layer discretized as a mesh with 1 \(\times\) 1 \(\times\) 1-nm\({}^{3}\) cells. The magnetic parameters used for the simulations are as follows: anisotropy, K = 1.1 \(\times\) 10\({}^{6}\) J/m\({}^{3}\); exchange constant, A = 1 \(\times\) 10\({}^{-11}\) J/m; and saturation magnetization, M\({}_{\rm s}\) = 800 kA/m. Fig. 2(b) shows that the micromagnetic simulation results are consistent with the experimental results. At \(H_{z}=2\) mT, we observe the splitting of large labyrinth domains into small domain structures, followed by the gradual stabilization of skyrmions; accordingly, an increased skyrmion density is observed at \(H_{z}=10\) mT. An important observation from these simulations is that the magnetic domains split, rotate counterclockwise, and shrink until the skyrmions are stabilized. However, during this
process, the already stabilized skyrmions are robust, and except for small translation motions, we observe no change in their size. Once only skyrmions are present in the magnetic free layer, any additional increase in the field strength will cause the skyrmion size to shrink. However, at this point, the skyrmions are less responsive to the MF, depicting their topological stability. The experimental field dependence of the skyrmion density is summarized in a 3D color plot in Fig. 2(c). The horizontal axes represent \(H_{x}\) and \(H_{z}\), and the height and the color represent the corresponding skyrmion densities. The skyrmion density peaks at \(H_{z}\!=\!2\) mT, independent of \(H_{x}\) (black dashed line). However, the skyrmion density is not symmetric with respect to the \(H_{z}\!=\!2\) mT line and is larger for higher values of \(H_{z}\). The red and blue arrows in Fig. 2(c) show the asymmetry of the skyrmion density line with respect to \(H_{z}\!=\!2\) mT. The degree of asymmetry increases with \(H_{x}\). This result demonstrates that the evolution of skyrmions involves two different phases. On the left side of \(H_{z}\!=\!2\) mT, the skyrmion creation rate is higher than the annihilation rate, indicating the net creation of skyrmions. On the right side, net annihilation occurs. At high \(H_{x}\) values, a larger field range is required for skyrmion annihilation than that for creation. This observation can be explained by the increase in the skyrmion size with \(H_{x}\) and the topological protection of skyrmions. Therefore, for annihilation, large fields are required to decrease the critical diameter of the skyrmions. Thus, at high \(H_{x}\) values, additional energy is required for the annihilation of skyrmions. This result demonstrates that the in-plane field promotes the stability of the skyrmion spin structures; as such, their annihilation occurs at high chiral energies. This can be observed in Figs. 2(b) and 2(e), where the skyrmions are created under fields with \(H_{z}\!=\!40\) mT; however, the annihilation occurs at \(\sim\)100 mT. When compared with the MOKE results in Fig. 2(d), the micromagnetic simulations agree very well qualitatively (Fig. 2(e)). Moreover, the skyrmion density increases with \(H_{x}\), and a relatively broad window of \(H_{z}\) is observed, indicating the presence of skyrmions. When an in-plane field is applied to the labyrinth domains, the domains align along the field direction and their widths decrease. The aligned domains increase the efficiency of the skyrmion creation compared to the case with misaligned labyrinth domains. The experimental results qualitatively agree with the simulation results (Fig. 2(e)). The missing quantitative agreement is explained by the difference in the sample sizes and the time scales over which the magnetic field was applied in the experiment and the simulation. The simulation time considered in MuMax was on the order of nanoseconds, while the experimental magnetic field was applied for 100 ms. In the experiment, the stack area was 100 mm\({}^{2}\)
whereas in the simulation, considering the computational viability, the MuMax stack size was set to \(1.024~{}\mu\mathrm{m}\times 512~{}\mathrm{nm}\).
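For reference, the skyrmion density quoted from the MOKE frames can be estimated by counting compact reversed domains in a thresholded image; the sketch below uses a simple mean threshold and size cuts, which are our assumptions and would need tuning against the real contrast.

```python
import numpy as np
from scipy import ndimage

# Sketch: count small reversed domains in a binarized polar-MOKE frame.
def skyrmion_density(kerr_image, field_of_view_mm2, min_px=4, max_px=400):
    reversed_domains = kerr_image < kerr_image.mean()   # dark contrast = down domains
    labels, n = ndimage.label(reversed_domains)
    sizes = np.asarray(ndimage.sum(reversed_domains, labels, range(1, n + 1)))
    n_skyrmions = int(np.sum((sizes >= min_px) & (sizes <= max_px)))
    return n_skyrmions / field_of_view_mm2
```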
Fig. 3(a) shows the dynamics of the skyrmion stabilization. We observe that the highly asymmetrical domains at \(\mathrm{t}=3~{}\mathrm{ns}\) rotate counterclockwise while gradually shrinking into more symmetrical textures; thereafter, they are transformed into topologically protected skyrmions at \(\mathrm{t}=14.2~{}\mathrm{ns}\). The rotation stops when the domain transforms into a highly symmetrical skyrmion. Thus, the magnetic energy at the beginning is dissipated in three degrees of freedom (DOFs): rotation, translation, and shrinkage. Gradually, the DOFs are restricted; in particular, the rotation DOF completely vanishes with time because of the formation of symmetrical magnetic textures (skyrmions). In Figs. 3(b) and (c), we demonstrate the different torques acting on the domains. For the asymmetrical domains, the unbalanced torques due to the in-plane component (\(H_{x}\)) and the perpendicular component (\(H_{z}\)) induce domain rotation and domain shrinkage, respectively. However, for skyrmions, the torques are rotation torques, and they are balanced
Figure 2: (a) MOKE captures of the sample showing the MF dependence of the skyrmion density. (b) Micromagnetic simulation results corresponding to the experimental results. (c) Skyrmion-density tuning by applying in-plane and perpendicular MFs. (d) Skyrmion density as a function of \(H_{z}\) (MOKE). (e) Skyrmion density versus \(H_{z}\) showing an increasing trend, followed by a saturated plateau (wide window) and, subsequently, annihilation (micromagnetics).
because of the symmetrical magnetic textures; thus, the rotation DOF is eliminated. Consequently, the skyrmion size gradually decreases because of the increase in the shrinking energy.
In Figs. 4 (a1-a3), we demonstrate the evolution of the total topological charge under the MF at different times. The topological charge is initially \(-10\), which indicates that skyrmions with Q = \(-1\) dominate the overall free-layer magnetic texture. However, under the field with \(H_{Z}\) = \(-30\) mT, the topological charge switches to \(+55\), and within a fraction of a nanosecond, Q settles at around 40. On increasing the MF, the Q = \(+1\) skyrmion density increases, as shown in Figs. 4(a2) (d-f). The maximum topological charge reaches 50 when all the Q = \(-1\) skyrmions are annihilated, and we
Fig. 3: (a) Skyrmion-stabilization dynamics showing the counterclockwise rotation of the labyrinth domains and the gradual shrinking to yield skyrmions. (b) The unbalanced torque caused by the field in the asymmetrical domains causes rotation due to \(H_{x}\) and gradual shrinking due to \(H_{z}\). (c) For the skyrmion, the symmetrical texture balances the net torque, and hence no rotation occurs.
observe only Q = +1 skyrmions, as shown in (f). The Q = +1 skyrmions are stable from 40 to 80 mT, as shown in Fig. 2(e), and the corresponding topological charge is fixed at 50 in this range, as shown in Fig. 4(b).
_Fig. 4 (a1-a3) Time evolution of the topological charge (Q) under the MF. (b) Topological charge vs MF. (c) Magnetic texture corresponding to different times and fields._
As we increase the field, the skyrmion size decreases and we start observing the annihilation of Q = +1 skyrmions and the decrease in the topological charge. The topological charge reaches 0 around \(H_{Z}\) = 100 mT.
Figure 5: (a) Anisotropy dependence of the skyrmion density for different DMI magnitudes (2, 2.5, 3, 3.5, and 4 mJ/m\({}^{2}\)); the plots show the behavior of two types of skyrmions with (Q, Q\({}_{v}\), Q\({}_{h}\)) = (\(-\)1, 1, 0) and (1, 1, \(\pi\)). (b) DMI = 4 mJ/m\({}^{2}\). (c) Combined skyrmion density versus anisotropy while increasing the DMI magnitude. (d) DMI = 4 mJ/m\({}^{2}\). (e) The control of the skyrmion density with increasing DMI.
In Fig. 5(a), we show the dependence of the skyrmion density on the DMI and anisotropy for skyrmions with indices (Q, Q\({}_{\rm v}\), Q\({}_{\rm h}\)) = (\(-\)1, 1, 0) and (1, 1, \(\pi\)). The simulations were carried out for DMI magnitudes in the range of 2-4 mJ/m\({}^{2}\) and anisotropies from K = \(0.5\times 10^{6}\) J/m\({}^{3}\) to K = \(1.6\times 10^{6}\) J/m\({}^{3}\). In all simulations, we observe the presence of two types of skyrmions with different winding numbers (charges, Q), vorticities (Q\({}_{\rm v}\)), and helicities (Q\({}_{\rm h}\)) in the same FM thin film. The magnetization texture splits into large domains, and these domains, in turn, split into small labyrinth domains and skyrmions. Depending on the background magnetization of these large domains, the skyrmions have (Q, Q\({}_{\rm v}\), Q\({}_{\rm h}\)) either as (\(-\)1, 1, 0) or (1, 1, \(\pi\)). Here, we independently consider the impact of the DMI and anisotropy on the size and density of these two types of skyrmions. At a DMI magnitude of \(2\times 10^{-3}\) J/m\({}^{2}\), the density of skyrmions with attributes (1, 1, \(\pi\)) decreases smoothly with an increase in the anisotropy and undergoes complete annihilation at K = \(1.3\times 10^{6}\) J/m\({}^{3}\). However, the skyrmions with attributes (\(-\)1, 1, 0) undergo abrupt annihilation at \(\sim\)K = \(1.1\times 10^{6}\) J/m\({}^{3}\), indicating their lower stability compared with the other skyrmions. As the DMI increases to \(2.5\times 10^{-3}\) J/m\({}^{2}\), the skyrmion-density behavior versus anisotropy starts changing: the number of skyrmions with attributes (1, 1, \(\pi\)) decreases from 29 to a local minimum of 20, a peak value of 35 is observed at K = \(1\times 10^{6}\) J/m\({}^{3}\), and a sharp annihilation follows at K = \(1.25\times 10^{6}\) J/m\({}^{3}\). The density of the other type of skyrmion remains almost constant until K = \(1\times 10^{6}\) J/m\({}^{3}\), followed by an abrupt decay at K = \(1.1\times 10^{6}\) J/m\({}^{3}\). At DMI = \(3\times 10^{-3}\) J/m\({}^{2}\), the density of the (1, 1, \(\pi\)) skyrmions gradually decreases with a few oscillations, whereas that of the skyrmions with the (\(-\)1, 1, 0) attribute remains constant until K = \(1.2\times 10^{6}\) J/m\({}^{3}\), after which a sharp annihilation is observed. With an additional increase in the DMI, the skyrmion density demonstrates a more stable behavior with an increase in the anisotropy, followed by an abrupt annihilation. For both types of skyrmions, at DMI = \(4\times 10^{-3}\) J/m\({}^{2}\), the skyrmion density decreases at low anisotropies and increases with the anisotropy to a peak value, as shown in Fig. 5(b). Thus, a normal trend of the skyrmion density decreasing with the anisotropy is observed, as in the previous cases. We plot the combined skyrmion density in Figs. 5(c) and 5(d); these results provide an improved picture of the skyrmion-stability dependence on the DMI and anisotropy. If the DMI of a system is low, skyrmions can exist only at low anisotropies; however, the stability increases with the DMI magnitude, and the skyrmions can exist over a range of anisotropies. At high DMI values, the reverse behavior is observed, and the skyrmion density attains the maximum value at a high anisotropy. The simultaneous dependence of the skyrmion density on the DMI is shown in Fig. 5(e); we observe the maximum
skyrmion density around DMI = \(3\times 10^{-3}\) J/m\({}^{2}\). Because both the DMI and the anisotropy depend on the FM thickness and the spin-orbit interaction, this study provides additional insights into the optimization of material and geometrical properties, particularly the thickness of FM thin films, for the stabilization of skyrmions; for example, a stable racetrack memory and conventional and unconventional logics depend on the SOT-driven motion of skyrmions.
The optimum DMI magnitude and anisotropy ranges are 3.1-3.5 mJ/m\({}^{2}\) and 0.5-1.2 \(\times\) 10\({}^{6}\) J/m\({}^{3}\), respectively, as the skyrmion density remains almost constant in these regions, which is important for realizing a reliable memory/logic operation. However, low DMI values (2-2.5 mJ/m\({}^{2}\)) appear to be ideal for voltage-controlled operations. Considering the behavior in Figs. 5(a) and Fig. 5(c) for D \(<\) 2.5 mJ/m\({}^{2}\), we observe a smooth variation in the skyrmion density with the anisotropy. The dependence of anisotropy on voltage has been demonstrated in [38]:
\[K_{S}(V)=K_{S}(0)-\frac{\xi E}{t_{FL}}\,, \tag{11}\]
where \(K_{S}(V)\) is the anisotropy at voltage \(V\), \(E\) is the electric field across the oxide, \(\xi\) is the voltage-controlled magnetic anisotropy (VCMA) coefficient, and \(t_{FL}\) is the free-layer thickness. In Fig. 6(a), the MOKE results demonstrate that the skyrmion size decreases with an increase in \(H_{x}\) and \(H_{z}\). The skyrmion size is the maximum at \(H_{z}\) = 1 mT and \(H_{x}\) = 0; the size decreases with an increase in both \(H_{x}\) and \(H_{z}\). Fig. 6(b) shows how the skyrmion responds to the magnetic field and that the size of the skyrmion sharply decreases. However, as the size decreases, the intraskyrmion forces increase, decreasing the responsiveness of the skyrmion to the MF. Consequently, the skyrmion
Figure 6: (a) Skyrmion size decreasing with increasing \(H_{x}\) and \(H_{z}\) (MOKE). (b) The micromagnetic simulation captures a similar trend. (c) Magnetization behavior with time shows the gradual decrease in the responsiveness of the skyrmion to the field
size decreases more gradually than the case with the large skyrmion. Fig. 6(c) shows the free-layer magnetization having a single skyrmion under the magnetic field. As the skyrmion size decreases, the magnetization increases; however, with time, saturation is realized because of the increase in the strength of the intraskyrmion forces, and the magnetic field finally overcomes the topological barrier. Thus, the skyrmion is annihilated under a strong field.
Fig. 7(a) shows the change in the magnetization texture of the free layer with an increase in the anisotropy; the skyrmion density and size decrease with an increase in the anisotropy. For our device simulations, we consider \(\xi=130\) fJ/(V m) and \(t_{FL}=1\) nm. Therefore, we used the VCMA to control the skyrmion density, which controls the synaptic conductance and demonstrates its neuromimetic behavior. Fig. 7(b) shows the MuMax simulations of the variations in the skyrmion size with the anisotropy. The micromagnetic simulation exhibits a similar trend to that observed in the experiment: at the beginning, the skyrmion size decreases rapidly; however, the skyrmion becomes more stable and unresponsive to the magnetic field as the size decreases because of the intraskyrmion forces (topological barrier). Fig. 7(c) shows the variation in the skyrmion size with an increase in the anisotropy at a constant DMI value (3 mJ/m\({}^{2}\)). The skyrmion diameter decreases linearly in the anisotropy range of 0.58-0.64 \(\times\) 10\({}^{6}\) J/m\({}^{3}\) and deviates from this linear behavior at both extreme ends of the anisotropy range. The magnetization-versus-time plot shows a gradual decrease in the responsiveness of the skyrmion to the field, as well as the final annihilation under a considerably strong field.
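A numerical sketch of Eq. (11) with these values is given below; treating the electric field as \(E = V/t_{ox}\) with \(t_{ox}\) = 2 nm (the MgO thickness) is our assumption.

```python
# Sketch of Eq. (11): voltage-controlled anisotropy shift.
XI = 130e-15    # VCMA coefficient, 130 fJ/(V m)
T_FL = 1e-9     # free-layer thickness (m)
T_OX = 2e-9     # MgO thickness (m); E = V / T_OX is an assumption

def anisotropy(voltage, k0=1.1e6):
    return k0 - XI * (voltage / T_OX) / T_FL  # J/m^3

for v in (0.0, 0.5, 1.0):
    print(f"V = {v:.1f} V -> K = {anisotropy(v):.4e} J/m^3")
```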
Thus, we can exploit the linear-region response for the synapses that act as linear weights; furthermore, the overall skyrmion behavior has considerable relevance to the sigmoid neuron behavior in artificial neural networks. We express this behavior in terms of a fitting model, as a modified sigmoid function:
\[R\,=\,\frac{\beta D}{1+e^{\,c_{1}(K-c_{2})}+e^{\,c_{1}(K+c_{2})}}\,. \tag{12}\]
From the equation, the critical condition for a skyrmion to exist is derived after expanding the equation to the first order in anisotropy K. We obtained the radius dependence on anisotropy K for a fixed DMI value of 3 mJ/m\({}^{2}\), as follows:
\[R\,=\frac{\beta D}{3+2C_{1}K}. \tag{13}\]
\(\beta\), \(c_{l}\), and \(c_{2}\) are the fitting coefficients (1.03 \(\times\) 10\({}^{5}\) m\({}^{3}\)/J, 5 \(\times\) 10\({}^{-5}\) m\({}^{3}\)/J, and 6.1 \(\times\) 10\({}^{5}\) J/m\({}^{3}\), respectively). The simulation results agree with the fitting model results.
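The fitted model can be evaluated directly; the sketch below implements Eqs. (12) and (13) with the quoted coefficients at D = 3 mJ/m\({}^{2}\), noting that the first-order form only approximates the full expression where \(|c_{1}(K\pm c_{2})|\ll 1\).

```python
import numpy as np

# Sketch: evaluate the fitted radius model (Eq. 12) and its expansion (Eq. 13).
BETA, C1, C2 = 1.03e5, 5e-5, 6.1e5   # fitting coefficients quoted in the text
D = 3e-3                             # DMI (J/m^2)

def radius_full(K):
    return BETA * D / (1 + np.exp(C1 * (K - C2)) + np.exp(C1 * (K + C2)))

def radius_linear(K):
    # valid only where |C1*(K - C2)| and |C1*(K + C2)| are small
    return BETA * D / (3 + 2 * C1 * K)

K = np.linspace(0.58e6, 0.64e6, 4)   # the linear region identified above
print(np.c_[K, radius_full(K), radius_linear(K)])
```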
These results open another possibility for the realization of voltage-controlled neuromorphic skyrmion devices. On application of the voltage, the anisotropy decreases or increases. Thus, a corresponding variation in the skyrmion size can be obtained.
The results based on the skyrmion density and size manipulation using the magnetic field, DMI, and anisotropy (voltage) terms provide in-depth insight into the
Figure 7: (a) The CoFeB (1 nm) layer magnetization texture evolution with anisotropy showing that the skyrmion density remains almost constant at 5–7 \(\times\) 10\({}^{5}\) J/m\({}^{3}\) and decreases with a further increase in K. The skyrmions are annihilated at \(\sim\)K = 1.1 \(\times\) 10\({}^{6}\) J/m\({}^{3}\). (b) The skyrmion size at a constant DMI value decreases with an increase in the anisotropy. (c) The skyrmion-size dependence on the anisotropy at a constant DMI of 3 mJ/m\({}^{2}\) agrees with the fitted analytical model. The model can be used as a neuron thresholding function in spiking and artificial neural networks.
parameters, stack geometry, and switching techniques for tuning the skyrmion density and diameter for memory and logic applications.
## IV Modulating the skyrmion density and size for a quantized convolutional neural network
Motivated by the skyrmion density, topological charge, and skyrmion-size modulation discussed thus far, in Fig. 8(a), we propose a skyrmion-based memristive device, where the skyrmion topological resistance increases/decreases when a skyrmion is moved into/out of the active region. The current is applied across T-1 and T-2, which drives the skyrmions from the pre-synapse to the main synapse. The topological Hall resistance due to the skyrmions is expressed as follows [40]:
\[B_{Z}^{e}=\frac{\Phi_{Z}^{e}}{A}=-\frac{h}{eA}\iint\frac{1}{4\pi}\mathbf{m}.\left( \frac{\partial m}{\partial x}\times\frac{\partial m}{\partial y}\right)dxdy,\]
\[\rho_{xy}^{T}{=}PR_{o}\left|\frac{h}{e}\right|\frac{1}{A}.\]
Here, \(P\) is the spin polarization of the conduction electrons, \(R_{0}\) is the normal Hall coefficient, \(h\) is Planck's constant, \(e\) is the electron charge, \(A\) is the area of the cross-overlap, and \(h/e\) is the flux quantum. The topological resistivity change is measured across T-3 and T-4. Following the results from [2], the resistivity contribution of one skyrmion is 22 \(n\Omega\) cm. Therefore, in the proposed skyrmion synapse, the topological resistance (R\({}_{\mathrm{THE}}\)) across XY is expected to increase by 22 \(n\Omega\) on adding each skyrmion to the synapse region. We create a discrete set of skyrmions in the pre-synapse region, as shown in Fig. 7(b). Thereafter, using SOT current pulses, the skyrmions are driven into the synapse region, and the corresponding topological resistivity is reflected in R\({}_{\mathrm{THE}}\) across XY. The skyrmions move at 80 m/s; thus, each skyrmion roughly takes a time equal to its initial distance from the synapse divided by its velocity to reach the main-synapse region. We observe that the first skyrmion, located at \(-50\) nm from the center, arrives in \(\sim\)0.6 ns, and a step in the topological charge is detected. Likewise, under the constant current pulse, the other skyrmions move into the synapse region, as shown in Fig. 7(b), and we observe discrete steps, as shown in Fig. 7(c). For 8 skyrmions, eight discrete steps are detected. This results in discrete topological resistivity, as shown in Fig. 7(d). For 16 skyrmions, the resistivity increases in 16 discrete steps on the application of current pulses. As shown in Fig. 7(b), the current flows from \(+\)x to \(-\)x, and the skyrmions move from \(-\)x to \(+\)x. Assuming that the background magnetization is not impacted by the current but only by the
skyrmion motion, the resistivity remains constant during the interval before the arrival of the next skyrmion. Thus, the topological resistivity increases in discrete steps, as shown in Fig. 7(d): the topological charge in the synapse region grows from 0 to 15, which is referred to as synaptic potentiation. To induce synaptic depression, the current direction is reversed after 16 pulses; the skyrmions then move from \(+\)x to \(-\)x, out of the active region and back into the pre-synapse region, and the decreasing topological charge in the main synapse reduces the topological resistivity. Note that any number of discrete states, such as 8 (3-bit), 16 (4-bit), 32 (5-bit), or 64 (6-bit), can be realized by creating the corresponding number of skyrmions in the pre-synapse region. However, with an increase in the skyrmion density, the skyrmion-skyrmion repulsion and the skyrmion Hall effect begin to distort the skyrmion size, as shown in Fig. 7(b), although no impact on the topological charge is observed. This indicates that the topological-resistivity-based skyrmion synapse tends to be more stable and noise resilient. In Fig. 7(e), the magneto-tunnel resistance corresponding to the discrete skyrmions in the synapse is shown. If the device resistance is measured using an MTJ, we observe that with each skyrmion moving into or out of the synapse, the vertical tunnel resistance varies by 5.62 \(\Omega\) (73 m\(\Omega\).cm). In addition to the MF and SOT control, the variation in the skyrmion size with MF and VCMA discussed in this work can introduce extra novelty to skyrmion device design. In particular, the tunnel resistance exhibited by the device can be tuned by SOT and voltage control; thus, additional advanced functional devices based on skyrmions can be realized.
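The staircase picture can be sketched in a few lines of Python (a toy model, not the micromagnetic simulation: the uniform 50 nm spacing of the pre-synapse skyrmions is an assumption; the 80 m/s velocity and the 22 \(n\Omega\) cm step are the values quoted above):

```python
# Toy sketch of potentiation: one resistivity step per arriving skyrmion.
import numpy as np

v = 80.0                          # skyrmion velocity (m/s)
x0 = -50e-9 * np.arange(1, 17)    # assumed initial offsets of 16 skyrmions (m)
t_arrive = np.abs(x0) / v         # arrival times at the main synapse (s)
rho_step = 22e-11                 # 22 nOhm.cm per skyrmion, in Ohm.m

def rho_T(t):
    """Topological resistivity after time t: one step per arrived skyrmion."""
    return rho_step * np.sum(t_arrive <= t)

for t in np.linspace(0.0, t_arrive.max(), 9):
    print(f"t = {t * 1e9:6.2f} ns -> rho_T = {rho_T(t) / 1e-11:5.1f} nOhm.cm")
```

The first step appears at 50 nm / 80 m s\({}^{-1}\) \(\approx\) 0.6 ns, matching the arrival time quoted above; depression corresponds to running the same staircase in reverse.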
_Fig. 7 (a) Topological resistivity-based skyrmion synapse; current is applied across T-1 and T-2. (b) Magnetic texture of the synapse showing the skyrmion motion from the pre-synapse region into the main-synapse region and vice-versa; the corresponding topological resistivity change is measured across T-3 and T-4. (c) Evolution of the topological charge in the presence of continuous current (\(8~{}\times 10^{11}\)A/m\({}^{2}\)). (d) Discrete topological resistivity of the device (measured across T-3 and T-4) for 16 skyrmions; by moving the skyrmions into/out of the main synapse, we achieve potentiation and depression. (e) Tunnel magneto-resistance in the MTJ configuration._
#### Synaptic weight and neural network configuration
The synapse value is configured using a differential pair of skyrmion devices. To measure the synaptic weight, we employ two skyrmion devices to subtract the values and obtain the corresponding positive and negative weights [32]. The target synapse values are determined by the following equation:
\[G_{i,j}^{target}=G_{i,j}^{+}-G_{i,j}^{-}=k\big{(}w_{i,j}^{+}-w_{i,j}^{-}\big{)}=kw_{i,j}^{target},\]
where \(G_{i,j}^{target}\) indicates the target conductance, which can be positive or negative depending upon the difference between \(G_{i,j}^{+}\) and \(G_{i,j}^{-}\), the conductances of the devices receiving positive and negative voltage stresses at site \((i,j)\) in the array. Similarly, \(w_{i,j}^{target}\) is the target weight obtained from the devices, and \(k\) is the coefficient that relates the weights to the conductance. Fig. 8(b) shows an example of the synaptic weights, which are directly obtained from eight discrete skyrmion states, and Fig. 8(a) shows the schematic circuit diagram considered for the vector-matrix multiplication operations with differential skyrmion device pairs.
_Fig. 8: (a) Schematic of the circuit diagram comprising skyrmion-based synapses. (b) G-diamond plot describing the method through which two skyrmion devices map to the synaptic weights, where each skyrmion device takes one of the eight conductance values from G1 to G8._
Through Kirchhoff's law, the weighted current sum is simply obtained as the result of a matrix multiplication, which increases the computing speed and decreases the power consumption within the in-memory computing architecture.
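A minimal sketch of the two ideas combined, the differential-pair weight mapping and the Kirchhoff current summation, is given below (the weight matrix, input voltages, and the coefficient \(k\) are placeholders for illustration, not device values):

```python
# Sketch: differential conductance pairs realizing a signed weight matrix.
import numpy as np

W = np.array([[0.5, -1.0],
              [2.0,  0.0]])        # toy signed weights
V = np.array([0.3, 0.7])           # input voltages

k = 1.0                            # assumed weight-to-conductance coefficient
G_plus = k * np.maximum(W, 0.0)    # devices under positive voltage stress
G_minus = k * np.maximum(-W, 0.0)  # devices under negative voltage stress

# Kirchhoff summation: column currents realize the weighted sum k * (W @ V).
I = (G_plus - G_minus) @ V
print(I, W @ V)
```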
To measure the learning capability of our skyrmion device, we conducted an image-recognition task on the CIFAR-10 dataset with a nine-layer CNN, a compact variant of VGG-NET [33]. The network structure is shown in Fig. 9(a): the architecture comprises six convolutional layers for feature extraction and three fully connected layers for image classification, with a max-pooling layer after every two convolutional layers to subsample and aggregate the features. In the simulation, the CNN
Fig. 9: (a) Neural network structure for the CNN model (a variant of VGG-NET) comprising six convolutional layers (CONV), three max-pooling layers (Pooling), and three fully connected layers (FC). (b) System-level performance in terms of the recognition accuracy.
weights are directly used to obtain the synaptic values from the skyrmion device. The topological resistivity of the skyrmion device has the advantages of excellent uniformity and linearity, which are beneficial for the implementation of the QCNN. For example, our skyrmion device exhibits suitable properties for both LTP and LTD with 64 and 32 states, so we can readily implement 6-bit and 5-bit quantized parameter networks, respectively, with our device synapse.
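The mapping of FP32 weights onto a small number of skyrmion states amounts to a uniform quantization; the following sketch (an illustration of the principle, with an assumed symmetric rounding scheme rather than the paper's training pipeline) shows how the discretization error shrinks from 16 to 64 states:

```python
# Sketch: uniform symmetric quantization of weights onto 2**bits states.
import numpy as np

def quantize(w, bits):
    half = 2 ** bits / 2 - 1             # usable levels per sign
    scale = np.abs(w).max()
    return np.round(w / scale * half) * scale / half

rng = np.random.default_rng(1)
w = rng.standard_normal(256)
for b in (4, 5, 6):                      # 16, 32, 64 discrete skyrmion states
    err = np.abs(w - quantize(w, b)).mean()
    print(f"{b}-bit ({2 ** b} states): mean abs quantization error = {err:.4f}")
```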
The simulation results show that the QCNN implemented with the skyrmion-based synaptic device achieves results comparable to those of software-based CNN algorithms. As illustrated in Fig. 9(b), we implement 5- and 6-bit synaptic weights with 32 and 64 skyrmions in the active region, respectively, achieving recognition accuracies of around 87.76% and 86.81%, slightly below the default 32-bit floating-point (FP32) software arithmetic. These results demonstrate the learning capability of the device, its competitive accuracy in image recognition, and its applicability as a synaptic device for neuromorphic computing systems.
## VII Conclusions
In this study, we investigated the creation, stability, and controllability of skyrmions using experimental and simulation techniques, with a view to their applications in data storage and computing. We analyzed multiple aspects of the skyrmion stability, size, and density modulation under the influence of the MF and anisotropy, obtaining detailed insights into the transition from labyrinth domains to skyrmions along with the topological charge evolution. Subsequently, an analytical model was developed to capture the relationship between the skyrmion size and anisotropy, which helps in realizing VCMA-controlled synapses and neurons. We also analyzed the influence of the DMI and anisotropy on the skyrmion size and density for device-parameter optimization across multiple skyrmion applications. Our results contribute, in particular, to the understanding of skyrmion voltage switching for data storage and unconventional computing. Building on these results, we proposed a skyrmion-based synaptic device and demonstrated SOT-controlled operation with discrete topological resistance states. The discrete topological resistance of the skyrmion device shows the inherent advantages of high linearity and uniformity, making it a suitable candidate for the weight implementation of a QCNN. The neural network was trained and tested on the CIFAR-10 dataset, where we adopted the devices as synapses to achieve a competitive recognition accuracy against software-based weights.
2304.05278 | Complementarity between quantum entanglement, geometrical and dynamical
appearances in N spin-$1/2$ system under all-range Ising model | With the growth of geometric science, including the methods of exploring the
world of information by means of modern geometry, there has always been a
mysterious and fascinating ambiguous link between geometric, topological and
dynamical characteristics with quantum entanglement. Since geometry studies the
interrelations between elements such as distance and curvature, it provides the
information sciences with powerful structures that yield practically useful and
understandable descriptions of integrable quantum systems. We explore here
these structures in a physical system of $N$ interaction spin-$1/2$ under
all-range Ising model. By performing the system dynamics, we determine the
Fubini-Study metric defining the relevant quantum state space. Applying
Gaussian curvature within the scope of the Gauss-Bonnet theorem, we proved that
the dynamics happens on a closed two-dimensional manifold having both a
dumbbell-shape structure and a spherical topology. The geometric and
topological phases appearing during the system evolution processes are
sufficiently discussed. Subsequently, we resolve the quantum brachistochrone
problem by achieving the time-optimal evolution. By restricting the whole
system to a two spin-$1/2$ system, we investigate the relevant entanglement
from two viewpoints; The first is of geometric nature and explores how the
entanglement level affects derived geometric structures such as the
Fubini-Study metric, the Gaussian curvature, and the geometric phase. The
second is of dynamic nature and addresses the entanglement effect on the
evolution speed and the related Fubini-Study distance. Further, depending on
the degree of entanglement, we resolve the quantum brachistochrone problem. | Jamal Elfakir, Brahim Amghar, Abdallah Slaoui, Mohammed Daoud | 2023-04-11T15:26:19Z | http://arxiv.org/abs/2304.05278v3 | Complementarity between quantum entanglement, geometrical and dynamical appearances in \(N\) spin-\(1/2\) system under all-range Ising model
###### Abstract
With the growth of geometric science, including the methods of exploring the world of information by means of modern geometry, there has always been a mysterious and fascinating ambiguous link between geometric, topological and dynamical characteristics with quantum entanglement. Since geometry studies the interrelations between elements such as distance and curvature, it provides the information sciences with powerful structures that yield practically useful and understandable descriptions of integrable quantum systems. We explore here these structures in a physical system of \(N\) interaction spin-\(1/2\) under all-range Ising model. By performing the system dynamics, we determine the Fubini-Study metric defining the relevant quantum state space. Applying Gaussian curvature within the scope of the Gauss-Bonnet theorem, we proved that the dynamics happens on a closed two-dimensional manifold having both a dumbbell-shape structure and a spherical topology. The geometric and topological phases appearing during the system evolution processes are sufficiently discussed. Subsequently, we resolve the quantum brachistochrone problem by achieving the time-optimal evolution. By restricting the whole system to a two spin-\(1/2\) system, we investigate the relevant entanglement from two viewpoints; The first is of geometric nature and explores how the entanglement level affects derived geometric structures such as the Fubini-Study metric, the Gaussian curvature, and the geometric phase. The second is of dynamic nature and addresses the entanglement effect on the evolution speed and the related Fubini-Study distance. Further, depending on the degree of entanglement, we resolve the quantum brachistochrone problem.
**Keywords:** Quantum state space, Fubini-Study metric, Gaussian curvature, Geometric phase, Quantum brachistochrone issue, Quantum entanglement.
## I Introduction
Over the past few years, there has been growing excitement about the application of geometric ideas to quantum physics. It is argued that the geometrization of quantum theory provides a significant framework able to describe, to a large extent, the physical characteristics of solvable quantum systems [1; 2; 3; 4]. This geometrical approach has introduced the concept of the quantum phase space, endowed naturally with the Kähler manifold structure, on which the dynamics of quantum systems is well established [5; 6; 7; 8]. Lately, numerous studies have shown the relevance of geometric structures of the physical state space for exploring the physical properties of quantum systems. Indeed, it has been demonstrated that the Fubini-Study distance traveled by a quantum system during evolution along a given curve in the relevant projective Hilbert space is related to the integral of the energy uncertainty, which in turn is proportional to the evolution speed [9]. The quantum speed limit time, which defines the fundamental limit on the rate of evolution of quantum systems, is also determined by means of the Bures length between mixed quantum states [10]. Additionally, geometric methods considerably simplify the resolution of the quantum brachistochrone problem, which is linked to determining the Hamiltonian that generates the time-optimal evolution between two states [11; 12; 13]. Efficient quantum circuits in quantum computation with \(n\) qutrits have also been investigated using Riemannian geometry. Indeed, it has been proven that the optimal quantum circuits are basically equivalent to the shortest path between two points in a specific curved geometry of SU(\(3^{n}\)) [14], analogous to the qubit case wherein the geodesic in SU(\(2^{n}\)) is involved [15]. For further dynamical properties explored on the basis of geometric approaches, we recommend that readers look at the papers [16; 17; 18; 19].
Currently, geometric quantum mechanics, which forms the theoretical framework of the geometric formulation of quantum theory, is the bedrock of information geometric science, in which quantum phenomena are handled geometrically in the space of quantum states. One can cite, for instance, quantum entanglement, an intriguing physical resource in the protocols of quantum information theory [20; 21; 22; 23; 24]. Importantly, entanglement is shown to be closely related to the Mannoury-Fubini-Study distance separating the entangled state and the nearest disentangled state [25]. Moreover, the Euclidean distance of an entangled state to the disentangled states is equal to the maximal violation of a generalized Bell inequality with the tangent functional as entanglement witness [26]. The connection between quantum entanglement and the state space geometry has been thoroughly explored for a spin-s system with long-range Ising interaction [27]. Further to that, the geometrical description of entanglement is also explored within the backdrop of the Hopf fibration, which is a topological map compactifying the related quantum state space to another lower-dimensional space referred to as the Hopf bundle [28; 29; 30]. For additional findings
highlighting the interplay between quantum entanglement and geometrical characteristics, see, e.g., Refs.[31; 32; 33; 34].
Another significant concept that has received much attention in quantum physics is the geometric phase, a remarkable geometric characteristic in quantum evolution processes [35; 36; 37; 38]. It can be viewed as the holonomy acquired by the state vector after a parallel transport along the evolution trajectory [39; 40]. The geometric phase is now intimately related to other geometrical features that define the quantum state spaces. In effect, it has been proven that the geometric phase can be expressed as the line integral of Berry-Simon connection along the Fubini-Study distance separating the two quantum states over the corresponding projective Hilbert space [7; 41]. On the practical side, several recent investigations have shown the most important role of the geometric phase in the advancement of quantum information science. Indeed, it is a valuable feature for generating quantum logic gates that are critical to quantum computation [42; 43; 44]. Furthermore, the conditional phase gate has been experimentally demonstrated for nuclear magnetic resonance [45] and trapped ions [46]. The interplay between quantum entanglement and topological and geometric phases is also extensively studied in the two-qudit systems [47; 48; 49]. Other geometric phase applications have been realized in Refs.[50; 51; 52; 53].
The main purpose of this work is to highlight the geometrical and dynamical characteristics of a many-body system, represented here by \(N\) spin-\(1/2\) system under all-range Ising model, and their connection to quantum entanglement. It is noteworthy that the ideas explored in this paper were primarily inspired by the findings obtained by Krynytskyi and Kuzmak in Ref.[27]. As a matter of fact, by performing the system dynamics, we derive the Fubini-Study metric identifying the associated quantum state space. Moreover, examining the Gaussian curvature (G-curvature) in the framework of the Gauss-Bonnet theorem, we determine the topology and the structure of this space. Afterward, we explore the acquired geometric and topological phases and tackle the quantum brachistochrone issue. Finally, we give a detailed explanation of the geometrical and dynamical characteristics of two interacting spin-\(1/2\) under the Ising model in connection to quantum entanglement.
The rest of this paper is structured as follows. In Sec.II, by carrying out the dynamics of \(N\) interacting spin-\(1/2\) under all-range Ising model, we give the Fubini-Study metric and identify the associated quantum state space. Moreover, by investigating the G-curvature within the scope of the Gauss-Bonnet theorem, we uncover the topology and the structure of this space. The geometric and topological phases emerging from the system evolution processes, over the resulting state space, are thoroughly discussed in Sec.III. The quantum brachistochrone problem is also addressed based on the evolution velocity as well as the related Fubini-Study distance, in Sec.IV. In Sec.V, we study the entanglement between two interacting spin-1/2 under the Ising model from two different appearances: the first is of a geometric nature and investigates how the entanglement degree impacts derived geometric features such as the Fubini-Study metric, the G-curvature, and the geometric phase. The second is of a dynamic type and addresses the entanglement effect on evolution speed and the corresponding Fubini-Study distance. Further to this, we resolve the quantum brachistochrone problem based on quantum entanglement. We supply concluding remarks in Sec.VI.
## II Unitary evolution and the quantum state manifold of \(N\) spin-\(1/2\) system
### Physical model and unitary quantum evolution
To start, the considered system is composed of \(N\) qubits represented by \(N\) interacting spin-\(1/2\) under all-range Ising model described by the following Hamiltonian
\[\mathrm{H}=\mathsf{J}\left(\sum_{i=1}^{N}S_{i}^{z}\right)^{2}, \tag{1}\]
where \(\mathsf{J}\) is the coupling constant characterizing the interaction and \(\mathsf{S}_{i}^{z}\) denotes the \(z\)-component of the spin operator \(\mathbf{S}_{i}=(\mathsf{S}_{i}^{x},\mathsf{S}_{i}^{y},\mathsf{S}_{i}^{z})^{T}\) associated with the \(i\)th spin-\(1/2\) (i.e., the \(i\)th qubit), which fulfills the eigenvalue equation
\[\mathsf{S}_{i}^{z}\left|\mathsf{m}_{i}\right\rangle=\hbar\mathsf{m}_{i}\left| \mathsf{m}_{i}\right\rangle, \tag{2}\]
where \(\mathsf{S}_{i}^{\alpha}=\frac{\hbar}{2}\sigma_{i}^{\alpha}\), with \(\sigma_{i}^{\alpha}\) \((\alpha=x,y,z)\) the Pauli matrices, \(\mathsf{m}_{i}=\pm\hbar/2\) are the possible values of the projection of the \(i\)th spin on the \(z\)-axis, and \(\left|\mathsf{m}_{i}\right\rangle\) denote the associated eigenstates. It is worth noting that the spin-\(1/2\) operator components \(\mathsf{S}_{i}^{x}\), \(\mathsf{S}_{i}^{y}\), and \(\mathsf{S}_{i}^{z}\) satisfy the su(2) Lie algebra:
\[\left[\mathsf{S}_{i}^{\alpha},\mathsf{S}_{j}^{\beta}\right]=i\delta_{ij}\sum_ {\gamma=x,y,z}\epsilon^{\alpha\beta\gamma}\mathsf{S}_{i}^{\gamma}, \tag{3}\]
where \(\delta_{ij}\) and \(\epsilon^{ijk}\) designate the Kronecker and Levi-Civita symbols, respectively. It is straightforward to see that for an even number of spins, the above Hamiltonian has \((N/2+1)\) eigenvalues, whereas for an odd number, it has \((N+1)/2\) eigenvalues. Explicitly, the eigenvalues and related eigenstates are provided as follows
\[\begin{array}{ll}\frac{\mathsf{J}\hbar^{2}}{4}N^{2}:&|\uparrow\uparrow\uparrow\ldots\uparrow\uparrow\rangle,\ |\downarrow\downarrow\downarrow\ldots\downarrow\downarrow\rangle;\\ \\ \frac{\mathsf{J}\hbar^{2}}{4}(N-2)^{2}:&|\downarrow\uparrow\uparrow\ldots\uparrow\uparrow\rangle,\ |\uparrow\downarrow\uparrow\ldots\uparrow\uparrow\rangle,\ldots,\ |\uparrow\uparrow\uparrow\ldots\uparrow\downarrow\rangle,\\ &|\uparrow\downarrow\downarrow\ldots\downarrow\downarrow\rangle,\ |\downarrow\uparrow\downarrow\ldots\downarrow\downarrow\rangle,\ldots,\ |\downarrow\downarrow\downarrow\ldots\downarrow\uparrow\rangle;\\ \\ \frac{\mathsf{J}\hbar^{2}}{4}(N-4)^{2}:&|\downarrow\downarrow\uparrow\ldots\uparrow\uparrow\rangle,\ |\downarrow\uparrow\downarrow\ldots\uparrow\uparrow\rangle,\ldots,\ |\uparrow\uparrow\uparrow\ldots\downarrow\downarrow\rangle;\\ \\ \ldots&\ldots\end{array} \tag{4}\]
Taking into account all possible combinations of spin states (i.e., \(|\uparrow\rangle\) and \(|\downarrow\rangle\)), one finds that each eigenvalue \(\mathsf{J}\hbar^{2}(N-2p)^{2}\Big{/}4\) matches \(2\mathrm{C}_{N}^{p}\) eigenstates, where \(\mathrm{C}_{N}^{p}\) stands for the binomial coefficient, while the index \(p=0,...,N/2\)
for \(N\) even (particle number) and \(p=0,...,(N-1)/2\) for \(N\) odd. We presume that the evolution of the \(N\) spin-\(1/2\) system starts with the initial state
\[|\Psi_{i}\rangle=|+\rangle^{\otimes N}, \tag{5}\]
where
\[|+\rangle=\cos\frac{\theta}{2}|\uparrow\rangle+\sin\frac{\theta}{2}e^{i\varphi }|\downarrow\rangle,\]
corresponds to the eigenstate of the spin-\(1/2\) projection operator on the direction denoted by the unit vector \(\mathbf{n}=(\sin\theta\cos\varphi,\sin\theta\sin\varphi,\cos\theta)\), where \(\theta\) and \(\varphi\) designate the polar and azimuthal angles, respectively. In this respect, the initial state (5) of the system can be rewritten, using the binomial theorem, as
\[|\Psi_{i}\rangle=\sum_{p=0}^{N}\cos^{N-p}\frac{\theta}{2}\sin^{p}\frac{\theta} {2}e^{ip\varphi}\sum_{i_{1}<i_{2}<\ldots<i_{p}=1}^{N}\sigma_{i_{1}}^{x} \sigma_{i_{2}}^{x}\ldots\sigma_{i_{p}}^{x}|\uparrow\rangle^{\otimes N}, \tag{6}\]
where we set \(\hbar=1\), indicating that the energy is measured in the frequency units. To investigate the geometrical, topological, and dynamical features of the system under consideration in the remainder of this paper, we need to evolve the \(N\) spin-1/2 system, maintained initially in the starting state (6), by applying the time evolution propagator \(\mathcal{P}(t)=e^{-i\mathrm{H}t}\). In this perspective, the evolving state of the system is obtained as
\[|\Psi(t)\rangle=\sum_{p=0}^{N}\cos^{N-p}\left(\frac{\theta}{2}\right)\sin^{p} \left(\frac{\theta}{2}\right)\exp\left\{-i\left[\frac{\xi(t)}{4}(N-2p)^{2}-p \varphi\right]\right\}\sum_{i_{1}<i_{2}<\ldots<i_{p}=1}^{N}\sigma_{i _{1}}^{x}\sigma_{i_{2}}^{x}\ldots\sigma_{i_{p}}^{x}|\uparrow\uparrow\ldots \uparrow\rangle, \tag{7}\]
with \(\xi(t)=\mathsf{J}t\). In this way, we arrive at the evolved state of a collection of \(N\) qubits (i.e., \(N\) spin-\(1/2\)), which depends explicitly on three parameters, namely the spherical angles \((\theta,\varphi)\) and the time \(t\). It is also intriguing to note that the state (7) fulfills the periodicity requirement \(|\Psi(\xi)\rangle=|\Psi(\xi+2\pi)\rangle\) in the temporal parameter, implying that \(\xi\in[0,2\pi]\). Accordingly, one can anticipate that the dynamics of the system occurs on a closed three-dimensional manifold.
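Since each eigenvalue sector contributes only a phase factor, the evolved state (7) is easy to handle numerically. The sketch below (a minimal illustration, with arbitrarily chosen parameters) builds the amplitude of each \(p\)-spin-flipped component and checks the normalization, each amplitude multiplying \(\mathrm{C}_{N}^{p}\) basis kets:

```python
# Sketch: amplitudes of the evolved state, Eq. (7), in the flipped-spin basis.
import numpy as np
from math import comb

def amplitudes(N, theta, phi, xi):
    """Amplitude of each p-spin-flipped component of Eq. (7)."""
    p = np.arange(N + 1)
    return (np.cos(theta / 2) ** (N - p) * np.sin(theta / 2) ** p
            * np.exp(-1j * (xi * (N - 2 * p) ** 2 / 4 - p * phi)))

N, theta, phi, xi = 4, np.pi / 3, 0.2, 1.0   # illustrative values
a = amplitudes(N, theta, phi, xi)
norm = sum(comb(N, p) * abs(a[p]) ** 2 for p in range(N + 1))
print(f"norm = {norm:.12f}")                 # -> 1, by the binomial theorem
```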
### Geometry and topology of the resulting state manifold
After evolving the \(N\) spin-\(1/2\) system by means of the time-evolution operator and determining the evolved state (7), we now uncover the geometry and topology of the relevant quantum state space, which includes all possible states that the system may reach during its evolution. For this task, we must compute the Fubini-Study metric, which defines the infinitesimal distance \(d\mathsf{S}\) between two adjoining pure quantum states \(|\Psi(\zeta^{\mu})\rangle\) and \(|\Psi(\zeta^{\mu}+d\zeta^{\mu})\rangle\) and has the following form [54; 55]
\[d\mathsf{S}^{2}=\mathrm{g}_{\mu\nu}d\zeta^{\mu}d\zeta^{\nu}, \tag{8}\]
where \(\zeta^{\mu}\) are the physical parameters \(\theta,\varphi\) and \(\xi\) (i.e., representing the dynamical degrees of freedom of the considered system) specifying the evolved state (7) and \(\mathrm{g}_{\mu\nu}\) denote the components of this metric tensor given by
\[\mathrm{g}_{\mu\nu}=\mathrm{Re}\left(\left\langle\Psi_{\mu}|\Psi_{\nu}\right\rangle -\left\langle\Psi_{\mu}|\Psi\right\rangle\left\langle\Psi|\Psi_{\nu}\right\rangle \right), \tag{9}\]
with \(|\Psi_{\mu,\nu}\rangle=\frac{\partial}{\partial\zeta^{\mu,\nu}}|\Psi\rangle\). Using the definition (8), one finds, after a simple numerical calculation, the explicit version of the Fubini-Study metric as follows
\[d\mathsf{S}^{2}= d\mathsf{S}^{2}_{i}+\frac{1}{4}N(N-1)\sin^{2}\theta\left[N-1- \left(N-\frac{3}{2}\right)\sin^{2}\theta\right]d\xi^{2}\] \[+\frac{1}{4}N(N-1)\cos\theta\sin^{2}\theta d\varphi d\xi, \tag{10}\]
with
\[d\mathsf{S}^{2}_{i}=\frac{N}{4}\left(d\theta^{2}+\sin^{2}\theta d\varphi^{2} \right), \tag{11}\]
corresponds to the squared line element defining the sphere of possible initial states of the system. This can be clearly seen by taking \(\xi=0\) (i.e., no evolution) in the metric (10): the state space then reduces to a sphere of radius \(\sqrt{N}/2\). Furthermore, the space of \(N\) spin-\(1/2\) states resulting from the temporal evolution is effectively a closed three-dimensional manifold. Note that the components of the underlying metric (10) are \(\varphi\)-independent, signifying that the quantum state spaces with a fixed azimuthal angle all have the same geometry. Hence, we draw the conclusion that the appropriate quantum state space (i.e., quantum phase space) of the \(N\) qubits under consideration is a two-parametric curved manifold, identified by the following metric tensor
\[d\mathsf{S}^{2}=\frac{N}{4}d\theta^{2}+\frac{1}{4}N(N-1)\sin^{2}\theta\left[N- 1-\left(N-\frac{3}{2}\right)\sin^{2}\theta\right]d\xi^{2}. \tag{12}\]
To further characterize this state space, we are going to determine its topology. For this aim, we begin by assessing the corresponding G-curvature, which can be defined in terms of the relevant metric tensor (12) in the form [56]
\[\mathrm{K}=\frac{1}{\left(\mathrm{g}_{\theta\theta}\mathrm{g}_{ \xi\xi}\right)^{1/2}}\left[\frac{\partial}{\partial\xi}\left(\left(\frac{ \mathrm{g}_{\xi\xi}}{\mathrm{g}_{\theta\theta}}\right)^{1/2}\Gamma_{\theta \theta}^{\xi}\right)\right.\] \[\left.-\frac{\partial}{\partial\theta}\left(\left(\frac{\mathrm{g}_ {\xi\xi}}{\mathrm{g}_{\theta\theta}}\right)^{1/2}\Gamma_{\theta\xi}^{\xi} \right)\right], \tag{13}\]
where \(\Gamma^{\xi}_{\theta\theta}\) and \(\Gamma^{\xi}_{\theta\xi}\) account for the Christoffel symbols given by
\[\Gamma^{\xi}_{\theta\theta}=-\frac{1}{2\mathrm{g}_{\xi\xi}}\left(\frac{\partial \mathrm{g}_{\theta\theta}}{\partial\xi}\right),\quad\text{and}\quad\Gamma^{ \xi}_{\theta\xi}=\frac{1}{2\mathrm{g}_{\xi\xi}}\left(\frac{\partial\mathrm{g}_ {\xi\xi}}{\partial\theta}\right). \tag{14}\]
It is extremely intriguing to see that the temporal component \(\mathrm{g}_{\xi\xi}\) of the metric vanishes at the points \(\theta=0,\pi\). This implies that the G-curvature is not definable at these positions; hence we conclude that it exhibits a singularity at each of these two points. However, it is well defined at all other positions within the space of \(N\) spin-\(1/2\) states. Inserting the metric components \(\mathrm{g}_{\theta\theta}\) and \(\mathrm{g}_{\xi\xi}\) into equation (13), the explicit expression of the relevant G-curvature reads
\[\mathrm{K}=\frac{8}{N}\left[2-\frac{(2N-3)\cos^{2}\theta+N}{\left((2N-3)\cos^ {2}\theta+1\right)^{2}}\right]. \tag{15}\]
Note that the state space curvature (15) is governed mainly by the initial parameters \(\theta\) and \(N\), while it is independent of \(\xi\), which carries the temporal evolution; the state space curvature is thus independent of the system dynamics. Furthermore, the G-curvature (15) satisfies the periodicity requirement \(\mathrm{K}(\theta)=\mathrm{K}(\theta+\pi)\). This is consistent with our findings, because the resulting quantum phase space (12) is a closed two-dimensional manifold.
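These properties are immediate to check numerically; the sketch below (an illustration, not part of the derivation) evaluates Eq. (15) and confirms the \(\pi\)-periodicity in \(\theta\) together with the minimum at \(\theta=\pi/2\):

```python
# Sketch: G-curvature of Eq. (15) as a function of theta and N.
import numpy as np

def K(theta, N):
    c2 = np.cos(theta) ** 2
    return (8 / N) * (2 - ((2 * N - 3) * c2 + N) / ((2 * N - 3) * c2 + 1) ** 2)

theta = np.linspace(0.01, np.pi - 0.01, 2001)
for N in (2, 3, 5, 10):
    print(f"N={N:2d}: K(pi/2) = {K(np.pi / 2, N):+.3f}, "
          f"min K = {K(theta, N).min():+.3f}, "
          f"pi-periodic: {bool(np.isclose(K(0.3, N), K(0.3 + np.pi, N)))}")
```

In particular, \(\mathrm{K}(\pi/2)=(8/N)(2-N)\), which is negative for every \(N>2\), in line with Fig. 1.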
From the analysis of Fig. 1, showing the G-curvature behavior with respect to the initial parameters \((\theta,N)\), we see that the state space geometry is symmetric with respect to the centerline at \(\theta=\pi/2\), the position of minimal curvature. More specifically, in the region \(\theta\in[0,\pi/2]\) the G-curvature declines and the state space geometry takes a concave shape, whereas in the region \(\theta\in[\pi/2,\pi]\) the G-curvature increases to its maximum value and the geometry takes a convex shape. As a result, we infer that the \(N\) spin-\(1/2\) state space has a dumbbell-shape structure. Additionally, we notice that for \(N>2\) the curvature takes negative values for certain values of \(\theta\). This is congruent with the findings reported in Ref. [27].
Considering this fact, in addition to the existence of two singularities in the G-curvature (15), we come to the conclusion that there are two conical defects within the quantum phase space (12); the first is situated close to the position \(\theta=0\), whereas the second is situated close to the position \(\theta=\pi\). In light of these outcomes, let us now explore the topology associated with the space of \(N\) spin-\(1/2\) states (12). To achieve this, we have to compute the integer Euler characteristic \(\chi(\mathrm{M})\) (\(\mathrm{M}\) stands for the state space (12)) provided by the Gauss-Bonnet theorem as [56]
\[\frac{1}{2\pi}\left[\int_{\mathrm{M}}\mathrm{K}d\mathrm{S}+\oint_{\partial \mathrm{M}}\mathrm{k}_{\mathrm{g}}dl\right]=\chi(\mathrm{M}), \tag{16}\]
where the geometric invariants \(d\mathrm{S}\), \(\mathrm{k}_{\mathrm{g}}\) and \(dl\) denote, respectively, the surface element, geodesic curvature and line element. Additionally, the first and second terms on the left-hand side of the equation (16) represent, respectively, the bulk and border contributions to the Euler characteristic identifying the state space topology. Furthermore, the Gauss-Bonnet theorem (16) can be established in terms of the state space geometry (12) in the form
\[\int_{0}^{\pi}\int_{0}^{2\pi}\mathrm{K}\left(\mathrm{g}_{\theta\theta}\mathrm{ g}_{\xi\xi}\right)^{1/2}d\theta d\xi+\Lambda=2\pi\chi(\mathrm{M}), \tag{17}\]
where \(\Lambda\) stands for the Euler border integral including the contribution of the conical defects. After a simple calculation, the Gauss-Bonnet theorem (17) reads
\[4\pi(N-1)+\Lambda=2\pi\chi(\mathrm{M}). \tag{18}\]
Therefore, to find the Euler characteristic \(\chi\), we must first determine the Euler border integral \(\Lambda\). For this purpose, we presume that the angular defects are situated very close to the singular points \(\theta=0,\pi\). In this view, the underlying metric (12) can be expanded, in the vicinity of these two singular positions, up to second order in \(\theta\). Indeed, we obtain
\[d\mathsf{S}^{2}=\frac{N}{4}d\theta^{2}+\frac{1}{4}N(N-1)^{2}\theta^{2}d\xi^{2}. \tag{19}\]
As a further note, the solid angle of a cone of revolution having apex angle \(2\theta\) is given by \(\Omega=2\pi(1-\cos\theta)\), where the second term of this definition represents the partial solid angle swept out by the system around the cone apex during evolution. Taking into account the closeness of the angular defects to the two singular points, we can easily find
\[2\pi\cos\theta\approx\frac{\mathsf{S}\left(2\pi\right)}{\mathsf{d}}=\frac{2 \pi\sqrt{\mathrm{g}_{\xi\xi}}}{\sqrt{\mathrm{g}_{\theta\theta}}\theta}, \tag{20}\]
where \(\mathsf{S}(2\pi)\) stands for the distance traveled by the system within the period of time \(\mathbf{t}=2\pi/\mathsf{J}\) around one of the two singular positions (\(\theta=0\) or \(\theta=\pi\)), while \(\mathbf{d}\) corresponds to the distance between the evolution path of the system and the relevant singular position. As a result, the angular defects are explicitly given by
\[\Lambda=2\left[2\pi-\frac{2\pi\sqrt{\mathrm{g}_{\xi\xi}}}{\sqrt{\mathrm{g}_{ \theta\theta}}\theta}\right]=4\pi\left(2-N\right), \tag{21}\]
Figure 1: The dependence of the G-curvature (15) on the initial parameter \(\theta\) for some spin-\(1/2\) numbers.
where the factor 2 accounts for the two singular points. Putting equation (21) into (18), one finds the Euler characteristic \(\chi(\mathrm{M})=2\), showing that the quantum phase space associated with the \(N\) spin-\(1/2\) system has a spherical topology. The coming section will be devoted to a thorough analysis of the geometric phase that the \(N\) spin-\(1/2\) state (7) can accumulate when subjected to cyclic and arbitrary evolution processes on the underlying quantum state space (12).
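As a quick numeric cross-check of this bookkeeping before moving on, the sketch below integrates \(\mathrm{K}\sqrt{\mathrm{g}_{\theta\theta}\mathrm{g}_{\xi\xi}}\) over the manifold and adds the defect term (21); the bulk integral indeed equals \(4\pi(N-1)\) and \(\chi=2\) for every \(N\):

```python
# Sketch: Gauss-Bonnet bookkeeping of Eqs. (17)-(21).
import numpy as np

def K(theta, N):
    c2 = np.cos(theta) ** 2
    return (8 / N) * (2 - ((2 * N - 3) * c2 + N) / ((2 * N - 3) * c2 + 1) ** 2)

def sqrt_g(theta, N):
    # sqrt(g_theta_theta * g_xi_xi) for the metric of Eq. (12)
    g_tt = N / 4.0
    g_xx = 0.25 * N * (N - 1) * np.sin(theta) ** 2 * (
        N - 1 - (N - 1.5) * np.sin(theta) ** 2)
    return np.sqrt(g_tt * g_xx)

for N in (3, 5, 8):
    th = np.linspace(1e-7, np.pi - 1e-7, 400001)
    f = K(th, N) * sqrt_g(th, N)
    bulk = 2 * np.pi * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(th))  # xi integral gives 2*pi
    chi = (bulk + 4 * np.pi * (2 - N)) / (2 * np.pi)
    print(f"N={N}: bulk/(4 pi) = {bulk / (4 * np.pi):.4f} (expect {N - 1}), chi = {chi:.4f}")
```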
## III Geometrical phases acquired by the \(N\) spin-\(1/2\) state
After investigating the geometry and topology of the \(N\) spin-\(1/2\) state space identified by the metric tensor (12), let us now focus on the geometric phase that the evolved state (7) can acquire for both arbitrary and cyclic evolution processes.
### Geometrical phase during an arbitrary evolution
In this instance, we presume that the \(N\) spin-\(1/2\) system evolves arbitrarily along any evolution path on the closed two-dimensional manifold (12). In this picture, the geometric phase gained by the evolved state (7) is given by
\[\Phi_{g}(t)=\arg\langle\Psi_{i}|\Psi(t)\rangle-\mathrm{Im}\int_{0}^{t}\langle \Psi(t^{\prime})|\frac{\partial}{\partial t^{\prime}}|\Psi(t^{\prime})\rangle dt ^{\prime}, \tag{22}\]
which is defined as the difference between the total phase and the dynamical phase [47; 57]. To calculate the geometric phase, we must first compute the total phase acquired by the system. The overlap (i.e., the transition-probability amplitude) between the starting state (6) and the evolved state (7) is obtained as
\[\langle\Psi_{i}|\Psi(t)\rangle=\sum_{p=0}^{N}\mathrm{C}_{N}^{p}\cos^{2(N-p)} \left(\frac{\theta}{2}\right)\sin^{2p}\left(\frac{\theta}{2}\right)e^{-\frac{i \xi}{4}(N-2p)^{2}}. \tag{23}\]
Inserting the expression of the overlap (23) into the first term on the right side of the equation (22), the total phase gained by the \(N\) spin-\(1/2\) system writes
\[\Phi_{\mathrm{tot}}=-\arctan\left[\begin{array}{c}\sum\limits_{p=0}^{N} \mathrm{C}_{N}^{p}\cos^{2(N-p)}\frac{\theta}{2}\sin^{2p}\frac{\theta}{2}\sin \left(\frac{\xi(N-2p)^{2}}{4}\right)\\ \sum\limits_{p=0}^{N}\mathrm{C}_{N}^{p}\cos^{2(N-p)}\frac{\theta}{2}\sin^{2p} \frac{\theta}{2}\cos\left(\frac{\xi(N-2p)^{2}}{4}\right)\end{array}\right]. \tag{24}\]
It is interesting to observe that the total phase (24) comprises two distinct components: the first is of geometrical origin (known as the geometric phase), and it is strongly related to the geometrical and topological features that characterize the quantum state space of such systems [56; 47]. This geometric component is reflected in the implicit dependence of the total phase (24) on the G-curvature (15) and the component \(\mathrm{g}_{\xi\xi}\) of the metric (12), as they all depend on the parameters \((N,\theta)\). The second is of dynamical origin (known as the dynamical phase), and it results from the time evolution of the Hamiltonian eigenstates (4). Furthermore, the global phase (24) exhibits a non-linear time dependence and fulfills the following periodicity conditions:
\[\Phi_{\mathrm{tot}}(\xi+4\pi)=\Phi_{\mathrm{tot}}(\xi)\qquad\text{for $N/2$ integer}, \tag{25}\]
and
\[\Phi_{\mathrm{tot}}(\xi+8\pi)=\Phi_{\mathrm{tot}}(\xi)\qquad\text{for $N/2$ half-integer}. \tag{26}\]
The dynamical phase, on the other hand, can be calculated by plugging the evolved state (7) into the second term on the right side in the equation (22). Indeed, one finds
\[\Phi_{\mathrm{dyn}}=-\frac{\xi N}{4}\left(N\cos^{2}\theta+\sin^{2}\theta \right). \tag{27}\]
It is proportional to the evolution time, meaning that the dynamical phase primarily informs us about the time spent by the system during its evolution. On the other hand, the geometric phase that can be accrued by the \(N\) spin-\(1/2\) state (5), during an arbitrary evolution over the quantum phase space (12), is obtained as
\[\Phi_{\mathrm{g}}= -\arctan\left[\begin{array}{c}\sum\limits_{p=0}^{N}\mathrm{C}_ {N}^{p}\cos^{2(N-p)}\frac{\theta}{2}\sin^{2p}\frac{\theta}{2}\sin\left(\frac{ \xi(N-2p)^{2}}{4}\right)\\ \sum\limits_{p=0}^{N}\mathrm{C}_{N}^{p}\cos^{2(N-p)}\frac{\theta}{2}\sin^{2p} \frac{\theta}{2}\cos\left(\frac{\xi(N-2p)^{2}}{4}\right)\end{array}\right]\] \[+\frac{\xi N}{4}\left(N\cos^{2}\theta+\sin^{2}\theta\right). \tag{28}\]
It is clear that the resulting geometric phase (28) varies (i.e., accumulates or decays) non-linearly with time, reflecting its dynamic character. Moreover, it depends on the degrees of freedom \((\theta,\xi)\) specifying the physical states over the quantum phase space (12), which means that the geometric phase depends on the shape of the evolution trajectory followed by the system, while its reliance on the initial parameters \((N,\theta)\) signifies that it is also sensitive to the state space geometry. Accordingly, we conclude that the geometric phase (28) can be exploited to parameterize the possible evolution trajectories of this system. This result can find applications in quantum computation, because such quantum phases can be used to design logic gates that are helpful for building good quantum algorithms [58; 44]. Let us now turn to a special scenario in which we investigate the geometric phase accrued by the \(N\) spin-\(1/2\) state (7) over a very brief period of time. In this framework, by expanding the exponential factor in (23) up to second order in \(\xi\), one obtains
\[\langle\Psi_{i}|\Psi(t)\rangle\simeq 1+\frac{\xi^{2}N(N-1)}{64}\left[4(N-1)(N+2) \cos^{2}\theta-(N-3)(N-2)\sin^{2}2\theta+4(3N-2)\right]-i\frac{\xi N}{4} \left(N\cos^{2}\theta+\sin^{2}\theta\right). \tag{29}\]
In this perspective, the geometric phase (28) can be expressed as
\[\Phi_{\rm g}\simeq-\arctan\left[\frac{\xi N\left(N\cos^{2}\theta+\sin^{2}\theta \right)}{4+\frac{\xi^{2}N(N-1)}{16}\left[4(N-1)(N+2)\cos^{2}\theta-(N-3)(N-2) \sin^{2}2\theta+4(3N-2)\right]}\right]+\frac{\xi N}{4}\left(N\cos^{2}\theta+ \sin^{2}\theta\right). \tag{30}\]
Note that for \(\xi=0\) the system does not gain any quantum phase, because its state remains confined to the starting state (6) (i.e., no evolution). We can also see that the greater the number of particles \(N\), the more dominant the dynamical phase becomes. Further, in the thermodynamic limit (\(N\rightarrow\infty\)), the total phase vanishes, and therefore the geometric and dynamical phases coincide, up to a sign, at any moment of the evolution process. This offers the opportunity of measuring the geometric phase experimentally because, in this situation, it can be obtained from the temporal integral of the average energy of the Ising Hamiltonian (1).
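The decomposition into total, dynamical, and geometric parts is simple to evaluate numerically; the sketch below (an illustration, with arbitrary parameter values) computes the total phase from the overlap (23), the dynamical phase from the mean of \((N-2p)^{2}/4\), and their difference, checking the closed form (27) along the way:

```python
# Sketch: total, dynamical, and geometric phases, Eqs. (24), (27), (28).
import numpy as np
from math import comb

N, theta, xi = 4, 0.9, 0.7                   # illustrative values
p = np.arange(N + 1)
w = np.array([comb(N, q) for q in p]) \
    * np.cos(theta / 2) ** (2 * (N - p)) * np.sin(theta / 2) ** (2 * p)

phi_tot = np.angle(np.sum(w * np.exp(-1j * xi * (N - 2 * p) ** 2 / 4)))   # Eq. (24)
phi_dyn = -xi * np.sum(w * (N - 2 * p) ** 2) / 4                          # mean energy
phi_dyn_cf = -xi * N / 4 * (N * np.cos(theta) ** 2 + np.sin(theta) ** 2)  # Eq. (27)
phi_geo = phi_tot - phi_dyn                                               # Eq. (28)

print(f"dynamical phase: {phi_dyn:.6f} (closed form {phi_dyn_cf:.6f})")
print(f"geometric phase: {phi_geo:.6f}")
```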
### Geometrical phase under a cyclic evolution
Here, we focus on the geometric phase resulting from the cyclic evolution of the \(N\) spin-\(1/2\) system. In this regard, the wave function (7) satisfies the cyclic requirement \(|\Psi(T)\rangle=e^{i\Phi_{\rm tot}}|\Psi(0)\rangle\), wherein \(T\) represents the time span of a cyclic evolution. The AA-geometric phase (often referred to as the Aharonov-Anandan phase) gained by the system after a cyclic evolution (i.e., a closed curve in the relevant parameter space) is given by [36; 59]
\[\Phi_{\rm g}^{\rm AA}=i\int_{0}^{T}\langle\tilde{\Psi}(t)|\frac{\partial}{ \partial t}|\tilde{\Psi}(t)\rangle dt, \tag{31}\]
where \(|\tilde{\Psi}(t)\rangle\) denotes the Aharonov-Anandan section given in Ref. [36]. It is defined through the evolved state (7) as \(|\tilde{\Psi}(t)\rangle=e^{-if(t)}|\Psi(t)\rangle\), where \(f(t)\) is any smooth function satisfying \(f(T)-f(0)=\Phi_{\rm tot}\). In this view, the AA-geometric phase (31) rewrites
\[\Phi_{\rm g}^{\rm AA}=\int_{0}^{T}d\Phi_{\rm tot}+i\int_{0}^{T}\langle\Psi(t) |\frac{\partial}{\partial t}|\Psi(t)\rangle dt. \tag{32}\]
Inserting equations (24) and (27) into (32), one obtains the AA-geometric phase acquired by the \(N\) spin-\(1/2\) system (7) during a cyclic evolution in the form
\[\Phi_{\rm g}^{\rm AA}=-\frac{\pi}{2}N(N-1)\sin^{2}\theta. \tag{33}\]
Thus, the obtained AA-geometric phase (33) is independent of the system dynamics: it depends only on the initial parameters \(\theta\) and \(N\) (i.e., on the starting state), which control the shape of the state space (12). As a result, we conclude that the AA-geometric phase is impacted by the state space geometry and not by the evolution path followed by the system; hence the cyclic evolution paths are not parameterizable by this cyclic phase. Further, using equation (15) in (33), one can relate the AA-geometric phase to the G-curvature as follows
\[\Phi_{\rm g}^{\rm AA}=\frac{\pi N(N-1)}{2}\left[\frac{-56+3N\left(16-(N-1){ \rm K}\right)}{\left(2N-3\right)\left(N{\rm K}-16\right)}\right]. \tag{34}\]
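The cyclic result (33) can also be checked directly against the definition (32). In the sketch below (a numeric verification, restricted to \(N\) even so that the state returns to itself, up to a global phase, after one period \(\xi:0\to 2\pi\)), the difference between the numerically evaluated AA phase and Eq. (33) vanishes modulo \(2\pi\):

```python
# Sketch: AA phase over one cycle from Eq. (32), compared with Eq. (33).
import numpy as np
from math import comb

def phi_AA(N, theta):
    p = np.arange(N + 1)
    w = np.array([comb(N, q) for q in p]) \
        * np.cos(theta / 2) ** (2 * (N - p)) * np.sin(theta / 2) ** (2 * p)
    phi_tot = np.angle(np.sum(w * np.exp(-1j * 2 * np.pi * (N - 2 * p) ** 2 / 4)))
    phi_dyn = -2 * np.pi * np.sum(w * (N - 2 * p) ** 2) / 4
    return phi_tot - phi_dyn

theta = 1.1
for N in (2, 4, 6):
    direct = -0.5 * np.pi * N * (N - 1) * np.sin(theta) ** 2   # Eq. (33)
    diff = (phi_AA(N, theta) - direct) % (2 * np.pi)
    print(f"N={N}: deviation mod 2*pi = {min(diff, 2 * np.pi - diff):.2e}")
```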
In Fig.(2), we depict the behavior of the cyclic geometrical phase (33) with respect to the starting parameters \((N,\theta)\). We find that it behaves similarly to G-curvature (see Figure 1).
Specifically, the AA-geometric phase (33) decreases in the region \(\theta\in[0,\pi/2]\), in which the state space geometry has a concave shape (i.e., the G-curvature decreases), whereas it grows in the region \(\theta\in[\pi/2,\pi]\), in which this geometry has a convex form (i.e., the G-curvature increases). Thus, we conclude that the AA-geometric phase (33) also has a symmetric behavior along the parameter \(\theta\), this is due to the dumbbell-shape structure of the state space. Additionally, we see that it is interesting to discuss the topological phase that can emerge during the cyclic evolution of the wave function (7). In reality, it constitutes the part of the cyclic geometric phase that does not receive any dynamic contribution. Explicitly, it is found as
\[\Phi_{\rm top}^{\rm AA}=-\frac{\pi}{2}N^{2}. \tag{35}\]
The resulting topological phase (35) is proportional to the square of the particle number. In particular, this phase takes fractional multiples of \(\pi\) for \(N\) odd and integer multiples of \(\pi\) for \(N\) even. This shows that the topological phase (resp. the particle number) is relevant for controlling the state space topology given in (18). This provides the possibility of parameterizing the closed evolution paths traversed by the system through the resulting topological phase (35). This result looks very interesting in
Figure 2: The dependence of the AA-geometric phase (33) on the initial parameter \(\theta\) for some spin-\(1/2\) numbers.
quantum computing, particularly in the search for efficient quantum circuits [53; 60]. This issue can be closely connected to the determination of the optimal evolution path of the system under consideration, by evaluating the evolution speed as well as the related Fubini-Study distance. This is the quantum Brachistochrone problem, which will be tackled in the succeeding section.
## IV Speed and quantum Brachistochrone issue for \(N\) spin-\(1/2\) system
Now, we will exploit the Riemannian geometry identifying the quantum state space (12) to investigate some dynamical properties of the system. In particular, we examine the evolution speed and the geodesic distance measured by the Fubini-Study metric (12) in order to solve the related quantum brachistochrone problem [11; 61]. This issue amounts to achieving the time-optimal evolution, characterized by maximal speed along the shortest possible trajectory between the starting state (6) and the ending state (7). In other terms, the solution to this dynamical problem is to find the shortest possible duration of evolution.
### Speed of quantum evolution
In order to evaluate the evolution speed, we presume that the evolution of the \(N\) spin-\(1/2\) system depends only on time while leaving all other parameters unchanged. In this picture, the metric tensor (12) simplifies to
\[d\mathsf{S}^{2}=\frac{1}{4}N(N-1)\sin^{2}\theta\left[N-1-\left(N-\frac{3}{2} \right)\sin^{2}\theta\right]d\xi^{2}. \tag{36}\]
This shows that the dynamics of the system occurs on a circle of radius \(\sqrt{\mathrm{g}_{\xi\xi}}\). Therefore, the evolution speed of the \(N\) spin-\(1/2\) state (7) takes the form [9]
\[\mathrm{V}=\frac{d\mathsf{S}}{dt}=2\Delta\mathrm{E}, \tag{37}\]
where \(\Delta\mathrm{E}\) designates the energy uncertainty of the Hamiltonian (1). From equation (37), we notice that the larger the energy uncertainty, the faster the system evolves, and vice versa. Setting equation (36) into (37), the evolution speed of the wave function (7) is found as
\[\mathrm{V}=\frac{\mathsf{J}}{2}\sqrt{N(N-1)\sin^{2}\theta\left[N-1-\left(N- \frac{3}{2}\right)\sin^{2}\theta\right]}. \tag{38}\]
Thus, the evolution speed is affected by the coupling constant \(\mathsf{J}\) and the initial parameters (\(\theta\), \(N\)), i.e., by the choice of the starting state. From equation (38), we remark that the larger the particle number and the coupling constant, the faster the system evolves, except at \(\theta=0\) or \(\theta=\pi\), where the evolution speed (38) vanishes \((\mathrm{V}=0)\) regardless of these physical parameters. This is justified by the fact that neither the wave function (7) nor the G-curvature (13) is defined at these two singular points.
Given that the speed depends on the parameter \(\theta\), it is also affected both by the G-curvature (15) and by the geometric phase (28), including the AA-geometric phase (33). This can be clearly seen in Fig. 3, displaying its dependence on the initial parameters \((\theta,N)\).
Note that the evolution speed (38) inherits the symmetric behavior of the state space geometry with its dumbbell-shape structure. This makes perfect sense, because the resulting phase space corresponds exactly to the relevant quantum phase space, on which the dynamics of the \(N\) spin-\(1/2\) system is well established.
### Resolution of the quantum brachistochrone problem
To address this dynamical issue, we should determine the shortest duration necessary to carry out the time-optimal evolution of the considered system (7). To accomplish this, one begins by maximizing the evolution speed (38), solving the equation \(d\mathrm{V}/d\theta=0\), which yields
\[N(N-1)(N-1-(2N-3)\sin^{2}\theta)\sin 2\theta=0, \tag{39}\]
which entails that
\[\sin\theta_{\mathrm{max}}=\sqrt{\frac{N-1}{2N-3}}. \tag{40}\]
Therefore, the highest speed that the \(N\) spin-\(1/2\) system can achieve reads as
\[\mathrm{V}_{\mathrm{max}}=\mathsf{J}(N-1)\sqrt{\frac{N(N-1)}{8(2N-3)}}, \tag{41}\]
In this way, we manage to establish the maximum velocity, which depends only on the particle number making up the
Figure 3: The dependence of the evolution speed (38) on the initial parameter \(\theta\) for some spin-\(1/2\) numbers with \(\mathsf{J}=1\).
system. Let us now investigate the geodesic distance that the system traverses between the departure state (6) and the arrival state (7). For this, utilizing the equation (37), one finds
\[\mathsf{S}=\frac{\xi}{2}\sqrt{N(N-1)\sin^{2}\theta\left[N-1-\left(N-\frac{3}{2} \right)\sin^{2}\theta\right]}. \tag{42}\]
As the evolution speed (38) is time-independent, the Fubini-Study distance (42) traveled by the system grows linearly with time. Additionally, we note that, at each instant, the evolution speed and the distance behave similarly with respect to the physical parameters \((\theta,\mathsf{J},N)\) of the system. Specifically, at the singular points \(\theta=0,\pi\), the distance vanishes, which is reasonable because at these points the state of the \(N\) spin-\(1/2\) system (7) is not defined. Thereby, one concludes that the Fubini-Study distance (42) exhibits a local minimum at the point \(\theta=\pi/2\), given by
\[\mathsf{S}_{\min}=\frac{\xi}{2}\sqrt{\frac{N(N-1)}{2}}. \tag{43}\]
Thus, the minimum feasible time required for the system to conduct any quantum evolution writes
\[\mathsf{t}_{\min}=\frac{\mathsf{S}_{\min}}{\mathsf{V}_{\max}}=\frac{t}{(N-1)} \sqrt{2N-3}. \tag{44}\]
This is the shortest duration needed to achieve a time-optimal evolution over the state circle (36). In specific terms, the result (44) defines the optimal evolution condition, typified by the fastest evolution along the shortest path joining the initial and final states. Therefore, the optimal evolution states can be produced through the following unitary transformation
\[\ket{\Psi_{i}}\rightarrow\ket{\Psi(\mathsf{t}_{\min})}=e^{-i\mathrm{Ht}_{ \min}}\ket{\Psi_{i}}. \tag{45}\]
The set of these states makes up an optimal state circle, described by the Fubini-Study metric of the form
\[d\mathsf{S}_{\mathrm{op}}^{2}=\frac{1}{4}N(N-1)\sin^{2}\theta\left[N-1-\left( N-\frac{3}{2}\right)\sin^{2}\theta\right]d\xi_{\min}^{2}. \tag{46}\]
with \(\xi_{\min}=\mathsf{J}\mathsf{t}_{\min}\). On the other hand, we remark that the temporal condition (44) is influenced solely by the particle number; this implies that the state circle topology affects the optimal evolution time as well. It is also proportional to the ordinary time \(t\) (i.e., the time of the evolution over the state circle (36)) with a positive proportionality factor, meaning that the optimal and ordinary times have the same monotonicity. In particular, one finds that for \(N=2\) (i.e., a two spin-\(1/2\) system) these two times coincide \((\mathsf{t}_{\min}=t)\), while for \(N\geq 3\) (i.e., an \(N\) spin-\(1/2\) system) the optimal time (44) is strictly smaller than the ordinary time, and therefore a time-optimal evolution is achievable. Besides, in the thermodynamic limit \((N\rightarrow\infty)\), the optimal time decreases to zero \((\mathsf{t}_{\min}\to 0)\). In this respect, the optimal state circle (46) approaches a straight line, since its radius \(\sqrt{\mathrm{g}_{\xi_{\min}\xi_{\min}}}\) becomes infinite. As a result, we infer that the particle number and the ordinary time are two physical magnitudes exploitable for performing time-optimal evolutions in such integrable systems. At the end of this section, we note that it is intriguing to relate the geometric and dynamic structures explored above to quantum entanglement, a physical resource of great relevance in quantum information tasks. The next section focuses on this subject.
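All the ingredients of this section are easy to verify numerically; the sketch below (an illustration, with \(\mathsf{J}=1\)) maximizes the speed (38) on a grid and compares the result with Eqs. (40), (41), and (44):

```python
# Sketch: maximal speed and optimal time, Eqs. (38), (40), (41), (44).
import numpy as np

J = 1.0                                  # coupling constant (frequency units)

def V(theta, N):
    s2 = np.sin(theta) ** 2
    return 0.5 * J * np.sqrt(N * (N - 1) * s2 * (N - 1 - (N - 1.5) * s2))

theta = np.linspace(1e-4, np.pi / 2, 200001)
for N in (2, 3, 5, 10):
    th_star = theta[np.argmax(V(theta, N))]
    print(f"N={N:2d}: sin(theta*) = {np.sin(th_star):.4f} "
          f"(Eq.40: {np.sqrt((N - 1) / (2 * N - 3)):.4f}), "
          f"V_max = {V(th_star, N):.4f} "
          f"(Eq.41: {J * (N - 1) * np.sqrt(N * (N - 1) / (8 * (2 * N - 3))):.4f}), "
          f"t_min/t = {np.sqrt(2 * N - 3) / (N - 1):.4f}")
```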
## V Geometric and dynamic pictures of the entanglement for two-spin system \((N=2)\)
In this section, we shall study the quantum entanglement exchanged between two interacting spins under Ising model through two different perspectives; the first is geometric in nature and investigates the entanglement impact on derived geometric features such as the Fubini-Study metric, G-curvature, and the geometric phase under arbitrary and cyclic evolutions. The second is dynamic in nature and explores the entanglement effect on the evolution speed as well as the Fubini-Study distance covered by the system. Importantly, we address the quantum Brachistochrone problem depending on the entanglement degree.
### Entanglement degree of the two-spin system
The wave function of the global quantum system (7) is narrowed for a two-spin system to the form
\[\ket{\Psi(t)}= e^{-i\xi(t)}\cos^{2}\frac{\theta}{2}\ket{\uparrow\uparrow}+ \frac{1}{2}e^{i\varphi}\sin\theta(\ket{\uparrow\downarrow}+\ket{\downarrow \uparrow})\] \[+e^{i(2\varphi-\xi(t))}\sin^{2}\frac{\theta}{2}\ket{\downarrow \downarrow}. \tag{47}\]
Hence, the two-spin state space, on which the dynamics of the system takes place, is defined by the following metric tensor
\[d\mathsf{S}^{2}=\frac{1}{2}d\theta^{2}+\frac{1}{4}\sin^{2}\theta\left(2-\sin^{ 2}\theta\right)d\xi^{2}. \tag{48}\]
Using the Wootters concurrence expression given in Ref. [62], one obtains, after a simple calculation, the entanglement amount contained in the two-spin state (47) of the form
\[\mathscr{C}=\sin^{2}\theta|\sin\xi|. \tag{49}\]
It is the same for any two spins of the entire system (7); in other terms, each spin pair is quantum-correlated as much as any other pair. Interestingly, we observe that the two-spin entanglement (49) evolves periodically with time, signifying that it is governed by the dynamics of the system. Moreover, it relies on the initial parameter \(\theta\), showing that the entanglement degree is also set by the choice of the starting state. Notice that for \(\xi=\pi/2\) and \(\theta=\pi/2\), the two-spin state (47) reaches its maximum entanglement \((\mathscr{C}=1)\), whereas for \(\theta=0\) or \(\pi\), the two spins can never be entangled \((\mathscr{C}=0)\), because the corresponding initial states \(\ket{\Psi_{i}}=\ket{\uparrow\uparrow}\) or \(\ket{\downarrow\downarrow}\) are Hamiltonian eigenstates. This can also be justified topologically and geometrically by the existence of a conical defect close to each of these two singular points.
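For a pure two-qubit state with amplitudes \((a,b,c,d)\) in the basis \(\{\ket{\uparrow\uparrow},\ket{\uparrow\downarrow},\ket{\downarrow\uparrow},\ket{\downarrow\downarrow}\}\), the Wootters concurrence reduces to \(\mathscr{C}=2|ad-bc|\); the sketch below (a check with random parameter draws) confirms the closed form (49):

```python
# Sketch: concurrence of the state (47) versus the closed form (49).
import numpy as np

def concurrence(theta, phi, xi):
    a = np.exp(-1j * xi) * np.cos(theta / 2) ** 2             # |uu>
    b = c = 0.5 * np.exp(1j * phi) * np.sin(theta)            # |ud>, |du>
    d = np.exp(1j * (2 * phi - xi)) * np.sin(theta / 2) ** 2  # |dd>
    return 2 * abs(a * d - b * c)

rng = np.random.default_rng(0)
for theta, phi, xi in rng.uniform(0.0, np.pi, (5, 3)):
    closed = np.sin(theta) ** 2 * abs(np.sin(xi))             # Eq. (49)
    print(f"{concurrence(theta, phi, xi):.6f}  vs  {closed:.6f}")
```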
### Geometrical picture of the entanglement
In order to evoke the geometric aspect of the quantum correlations between the two spins under study, we provide a thorough description illustrating the nexus between the entanglement and the geometric structures derived above. Setting equation (49) into (48), we express the Fubini-Study metric identifying the two-spin state space in terms of the concurrence as
\[d\mathbb{S}^{2}=\frac{d\mathscr{C}^{2}}{8\mathscr{C}(\left|\sin\xi\right|- \mathscr{C})}-\frac{d\mathscr{C}d\xi}{4\tan\xi(\left|\sin\xi\right|-\mathscr{ C})}+\frac{\mathscr{C}}{4}\left(\frac{1}{2\tan^{2}\xi(\left|\sin\xi\right|- \mathscr{C})}+\frac{2\left|\sin\xi\right|-\mathscr{C}}{\sin^{2}\xi}\right)d \xi^{2}. \tag{50}\]
which can be brought into its diagonal form
\[d\mathbb{S}^{2}=\frac{1}{8\mathscr{C}_{r}\left(1-\mathscr{C}_{r}\right)}d \mathscr{C}_{r}^{2}+\frac{1}{4}\mathscr{C}_{r}\left(2-\mathscr{C}_{r}\right)d \xi^{2}, \tag{51}\]
with \(\mathscr{C}_{r}=\mathscr{C}/|\sin\xi|\) denoting the reduced concurrence, which varies in the interval \([0,1]\). Thereby, we have managed to reparameterize the relevant phase space (48) in terms of the amount of entanglement shared between the two spins and the evolution time, which are two measurable physical magnitudes. This demonstrates the feasibility of examining experimentally all the geometrical, topological, and dynamical characteristics of this state space, namely the phase space geometry, the quantum phases, the evolution speed, and the geodesic distance covered by the two-spin system (47) during its evolution. Importantly, quantum entanglement serves to shrink the state space dimension. For instance, the states of two spins with the same entanglement level (i.e., \(\mathscr{C}=\text{constant}\)) are located on closed one-dimensional manifolds defined by
\[d\mathbb{S}^{2}=\frac{\mathscr{C}}{4}\left(\frac{1}{2\tan^{2}\xi(\left|\sin \xi\right|-\mathscr{C})}+\frac{2\left|\sin\xi\right|-\mathscr{C}}{\sin^{2}\xi} \right)d\xi^{2}. \tag{52}\]
They are, in fact, closed curves, along the metric component \(g_{\xi\xi}\), on the whole state space (50). On the other hand, the states with the same degree of reduced entanglement (i.e., \(\mathscr{C}_{r}=\text{constant}\)) are located on circles identified by
\[d\mathbb{S}^{2}=\frac{1}{4}\mathscr{C}_{r}\left(2-\mathscr{C}_{r}\right)d\xi^ {2}, \tag{53}\]
whose radii \(\mathbb{R}=\sqrt{\mathscr{C}_{r}\left(2-\mathscr{C}_{r}\right)}/2\) depend on the level of reduced entanglement under consideration. In this way, we demonstrate the relevance of utilizing quantum correlations to reduce the dimensionality of the state space (50). This finding is applicable to all integrable quantum systems.
In the same framework, we can also investigate the influence of entanglement on the G-curvature of the two-spin phase space (50). Indeed, substituting Eq. (49) into Eq. (15), we derive the curvature as a function of the concurrence in the form
\[\mathrm{K}=4\left[2+\frac{\left|\sin\xi\right|\left(\mathscr{C}-3\left|\sin \xi\right|\right)}{\left(\mathscr{C}-2\left|\sin\xi\right|\right)^{2}}\right]. \tag{54}\]
This outcome again demonstrates the explicit dependence of the state space geometry on the amount of entanglement exchanged between the two interacting spins. From Eq. (54), we observe that for \(\xi=0\) (i.e., no evolution), the G-curvature is \(\mathscr{C}\)-independent and takes the constant value \(\mathrm{K}=8\), which corresponds to the curvature of the initial state sphere, while for \(\xi>0\) (i.e., the evolution case) it is \(\mathscr{C}\)-dependent, and its behavior versus the entanglement is shown in Fig. 4. We notice that the G-curvature diminishes as the amount of entanglement exchanged between the two spins increases. This can be explained by the fact that the existence of quantum correlations causes a decrease in the state space curvature. Further, the G-curvature reaches negative values for entanglement degrees satisfying the condition
\[\left|\sin\xi\right|(\mathscr{C}-3\left|\sin\xi\right|)<-2(\mathscr{C}-2\left| \sin\xi\right|)^{2}. \tag{55}\]
This elucidates the impact of the quantum correlations between the two particles on the compactification of the related state space (50) when the requirement (55) is met. It is interesting to note that the separable states \((\mathscr{C}=0)\) are housed in the regions of highest curvature \(\mathrm{K}_{\max}=5\), whereas the states of maximum entanglement \((\mathscr{C}=1)\) are housed in the regions of lowest curvature
\[\mathrm{K}_{\min}=4\left[2-\frac{\left|\sin\xi\right|(3\left|\sin\xi\right|-1) }{\left(2\left|\sin\xi\right|-1\right)^{2}}\right]. \tag{56}\]
Thus, we conclude that information about the entanglement degree of the two-spin system (47) allows one to determine its localization on the corresponding phase space (50). This clearly illustrates the deterministic character of geometric quantum mechanics, which is based on the geometrization of Hilbert space through the concept of a quantum phase space analogous to that of classical mechanics [4; 6].

Figure 4: The G-curvature (54) versus the concurrence (49) for some values of \(\xi\).
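As an independent cross-check (our addition), Eq. (54) can be recovered symbolically from the diagonal metric (51): rewriting (54) with \(\mathscr{C}=\mathscr{C}_{r}|\sin\xi|\) gives \(\mathrm{K}=4\left[2+(\mathscr{C}_{r}-3)/(\mathscr{C}_{r}-2)^{2}\right]\), which should equal the Gaussian curvature of (51). A minimal sympy sketch:

```python
import sympy as sp

u = sp.symbols('C_r', positive=True)      # reduced concurrence C_r
E = 1 / (8 * u * (1 - u))                 # g_{C_r C_r} component of Eq. (51)
G = u * (2 - u) / 4                       # g_{xi xi} component of Eq. (51)

# Gaussian curvature of an orthogonal metric E du^2 + G dv^2 with E, G
# independent of v, written in an equivalent rational form:
# K = -G''/(2 E G) + G' (E G)' / (4 (E G)^2)
EG = E * G
K = -sp.diff(G, u, 2) / (2 * EG) + sp.diff(G, u) * sp.diff(EG, u) / (4 * EG**2)

target = 4 * (2 + (u - 3) / (u - 2)**2)   # Eq. (54) in terms of C_r
print(sp.cancel(sp.together(K - target))) # -> 0
```

The printed difference vanishes identically, confirming that the curvature (54) and the metric (51) are mutually consistent.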
The connection between the geometric phase and the quantum entanglement can also be explored here. Indeed, substituting Eq. (49) into Eq. (28), we obtain the geometric phase gained by the two-spin state (47) in terms of the concurrence as
\[\Phi_{\rm g}=-\arctan\left[\frac{(2\left|\sin\xi\right|-\mathscr{C})\sin\xi} {(2\left|\sin\xi\right|-\mathscr{C})\cos\xi+\mathscr{C}}\right]+\xi\left(1- \frac{\mathscr{C}}{2\left|\sin\xi\right|}\right). \tag{57}\]
Notice that the geometric phase is determined by two new physical degrees of freedom: entanglement and time. This means that it depends on each point (i.e., each physical state of the system) of the underlying phase space (50). Consequently, we can say that the geometric phase depends both on the path followed by the system and on the geometry of the state space. Since the geometric phase is defined in terms of these two measurable physical quantities (i.e., \(\mathscr{C}\) and \(\xi\)), this offers us the ability to measure it experimentally for any arbitrary evolution process of the system. This result is extremely important because it can be exploited to build efficient quantum circuits based on the amount of entanglement exchanged between the two interacting spins. To further highlight the interplay between the geometric phase and entanglement, we have plotted Eq. (57) as a function of the concurrence for some values of \(\xi\) in Fig. 5. We observe that the geometric phase (57) gained by the two-spin system (47) during its evolution from the separable state \((\mathscr{C}=0)\) to the maximally entangled state \((\mathscr{C}=1)\) exhibits an approximately parabolic behavior. From here, we can divide its evolution into two main stages. The first stage involves the decrease of the geometric phase along the concurrence interval \(\mathscr{C}\in[0,\mathscr{C}_{\rm c}]\), where \(\mathscr{C}_{\rm c}\) denotes the critical entanglement degree at which this phase reaches its minimum value (see Fig. 5); it is given explicitly as
\[\mathscr{C}_{\rm c}=\sin\xi-\cot\frac{\xi}{2}\sqrt{\frac{\sin\xi}{\xi}\left(2- \xi\sin\xi-2\cos\xi\right)}. \tag{58}\]
In this stage, the evolving state (47) acquires a negative geometric phase, which can be interpreted as the part of the geometric phase lost by the system. Geometrically, we can say that during parallel transport, the state vector (47) rotates clockwise (i.e., makes an angle of negative sign) with respect to the separable state (i.e., the starting state). Thereby, we discover that, in the region \([0,\mathscr{C}_{\rm c}]\), the quantum correlations favor the loss of the geometric phase. The second stage concerns the increase of the geometric phase along the interval \(\mathscr{C}\in[\mathscr{C}_{\rm c},1]\) (i.e., the reverse behavior): the evolving state (47) accumulates a positive geometric phase, which can be viewed as the part of the geometric phase gained by the system. We can say, geometrically, that during parallel transport, the state vector (47) rotates counter-clockwise (i.e., makes an angle of positive sign) with respect to the separable state. In this way, we find that, in the region \([\mathscr{C}_{\rm c},1]\), the quantum correlations favor the gain of the geometric phase. Accordingly, the behavior of the geometric phase versus the entanglement is approximately symmetric with respect to the critical value \(\mathscr{C}_{\rm c}\), which is mainly due to the dumbbell-shaped structure of the underlying phase space (50). On the practical side, quantum entanglement is then an interesting physical resource that can be exploited experimentally to control the geometric phase resulting from the evolution processes of the two-spin system.

Figure 5: The geometric phase (57) versus the concurrence (49) for some values of \(\xi\).
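The stationary point (58) can also be cross-checked numerically (our illustration, with \(\xi\) chosen so that the arctangent branch in Eq. (57) is unambiguous): a bounded scalar minimization of Eq. (57) over \(\mathscr{C}\in(0,1)\) should land on the closed form (58).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def geometric_phase(C, xi):
    """Geometric phase of Eq. (57) as a function of the concurrence C."""
    s = abs(np.sin(xi))
    num = (2 * s - C) * np.sin(xi)
    den = (2 * s - C) * np.cos(xi) + C
    return -np.arctan(num / den) + xi * (1 - C / (2 * s))

xi = 1.2  # illustrative value in (0, pi/2), so den > 0 for all C in (0, 1)
res = minimize_scalar(lambda C: geometric_phase(C, xi),
                      bounds=(1e-6, 1.0), method='bounded')

C_c = np.sin(xi) - (1 / np.tan(xi / 2)) * np.sqrt(
    (np.sin(xi) / xi) * (2 - xi * np.sin(xi) - 2 * np.cos(xi)))  # Eq. (58)

print(res.x, C_c)  # both are ~0.4218: the minimizer matches Eq. (58)
```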
In the cyclic evolution scenario, the geometric phase can also be investigated in connection with the entanglement. Indeed, inserting Eq. (49) into Eq. (33), we obtain the AA-geometric phase accumulated by the evolved state (47) in relation to the concurrence as
\[\Phi_{\rm g}^{\rm AA}=-\pi\frac{\mathscr{C}}{\left|\sin\xi\right|}. \tag{59}\]
It is proportional to the entanglement level between the two spins with a negative proportionality factor, implying that the AA-geometric phase decreases linearly as entanglement increases. For the sake of clarity, we display this behavior in Fig. 6: we observe that the more the system is entangled, the larger the AA-geometric phase of negative sign it accumulates. This is roughly the same behavior as we observed for the geometric phase (57) in the first stage (i.e., in the region \([0,\mathscr{C}_{\rm c}]\)), and hence the same interpretations can be given for the AA-geometric phase. Regarding the topological phase resulting from the cyclic evolutions of the two-spin system, it is given by \(\Phi_{\rm top}^{\rm AA}=-2\pi\). Thus, it is unaffected by the entanglement, because it constitutes the part of the AA-geometric phase receiving no contribution from the dynamic part.

Figure 6: The AA-geometric phase (59) versus the concurrence (49) for some values of \(\xi\).
### Dynamical picture of the entanglement
To close this section, it is interesting to examine the dynamical aspect of entanglement by highlighting the link between the amount of entanglement exchanged between the two spins and the relevant dynamical properties, such as the evolution speed and the geodesic distance traveled during a given evolution process over the resulting phase space (50). As a result, we address the quantum brachistochrone problem by relying on the entanglement level of the two-spin system. To accomplish this, substituting Eq. (49) into Eq. (38), the related evolution speed can be expressed in terms of the concurrence as follows
\[\mathrm{V}=\frac{\mathrm{J}}{2\left|\sin\xi\right|}\sqrt{\mathscr{C}\left(2 \left|\sin\xi\right|-\mathscr{C}\right)}. \tag{60}\]
In this way, we manage to relate the speed of the two-spin system to its entanglement degree. In other words, the result (60) reflects the explicit relation between the system dynamics and the evolution of the quantum correlations. Consequently, we infer that the dynamical characteristics of such a system can be determined through its entanglement state. The dependence of the evolution speed on the concurrence is shown in Fig. 7.

Figure 7: The evolution speed (60) versus the concurrence (49) for some values of \(\xi\) with \(\mathrm{J}=1\).
Interestingly, the variation of the evolution speed (60) splits into two different parts. The first part shows the increase of the evolution speed of the two-spin state until it attains its highest value \(\mathrm{V}_{\mathrm{max}}=\mathrm{J}/2\), matching the critical entanglement level \(\mathscr{C}=\mathscr{C}_{\mathrm{c}}^{\prime}=\left|\sin\xi\right|\). This proves that, in this part, the existence of quantum correlations speeds up the system evolution over the related phase space (50). The second part concerns the concurrence interval \(\mathscr{C}\in[\mathscr{C}_{\mathrm{c}}^{\prime},1]\), in which the evolution speed reverses its monotonicity: it diminishes continuously until it reaches its local minimum \(\mathrm{V}(\mathscr{C}=1)\). This signifies that, in this second part, the quantum correlations slow down the evolution. As a result, we conclude that the system dynamics is controllable by its entanglement level. This outcome can be usefully exploited in quantum information protocols. Utilizing Eq. (37), we can also establish the Fubini-Study distance traveled by the two-spin state (47) in relation to the concurrence; it is found to be
\[\mathsf{S}=\frac{\xi}{2\left|\sin\xi\right|}\sqrt{\mathscr{C}\left(2\left|\sin \xi\right|-\mathscr{C}\right)}. \tag{61}\]
Thereby, we arrive at an expression for the Fubini-Study distance (i.e., a dynamical observable), relating any two quantum states over the two-spin phase space (50), in terms of the entanglement level and the evolution time. This again proves the feasibility of investigating the dynamical properties experimentally, which will motivate their exploitation in novel applications of quantum technology [63].

Figure 8: The Fubini-Study distance (61) versus the concurrence (49) for some values of \(\xi\).
By scrutinizing Figs. 7 and 8, we find that the Fubini-Study distance (61) exhibits, with respect to the entanglement degree, the same behavior as the evolution speed (60), and thus the same conclusions can be drawn. Let us now solve the quantum brachistochrone problem for the two interacting spins based on their entanglement. To this end, maximizing the evolution speed (60) with respect to the concurrence, the shortest time required to achieve a time-optimal evolution over the relevant phase space (50) reads
\[\boldsymbol{\tau}=\frac{\mathsf{S}}{\mathrm{V}_{\mathrm{max}}}=\frac{\xi}{ \mathrm{J}\left|\sin\xi\right|}\sqrt{\mathscr{C}\left(2\left|\sin\xi\right|- \mathscr{C}\right)}. \tag{62}\]
So, the optimal time (62) depends on the ordinary time, the coupling constant, and the entanglement level of the system. Specifically, we observe that for \(\mathscr{C}=0\) the optimal time vanishes \((\mathbf{\tau}=0)\), because the evolving state (47) coincides with the disentangled starting state \(|\Psi_{i}\rangle=|++\rangle\) (i.e., no evolution). For the critical entanglement level \(\mathscr{C}=\mathscr{C}^{\prime}_{\rm c}\) the optimal time attains its highest value \((\mathbf{\tau}=t)\), signifying that the optimal and ordinary evolutions of the two-spin system coincide, whereas for \(\mathscr{C}\in]0,\mathscr{C}^{\prime}_{\rm c}[\cup]\mathscr{C}^{\prime}_{\rm c},1]\) the optimal time is strictly less than the ordinary time \((\mathbf{\tau}<t)\). In this respect, the optimal evolution states can be generated via the unitary operation given by
\[|\Psi_{i}\rangle\rightarrow|\Psi(\mathbf{\tau})\rangle=e^{-i\mathbf{H}\mathbf{\tau}} \left|\Psi_{i}\right\rangle. \tag{63}\]
In fact, they make up a one-dimensional space of optimal states, over the whole space (50), defined by the metric tensor
\[d\mathsf{S}^{2}_{\rm opt}=\frac{\mathscr{C}}{4\sin^{2}\xi}\left(2\left|\sin \xi\right|-\mathscr{C}\right)d\mathbf{\xi}^{2}, \tag{64}\]
with \(\mathbf{\xi}=\mathfrak{J}\mathbf{\tau}\). The behavior of the optimal time (62) with respect to the entanglement is illustrated in Fig. 9: we discover that the lower the entanglement level (resp. the ordinary time) of the two spins, \(\mathscr{C}\to 0\) (resp. \(t\to 0\)), the shorter the optimal time, \(\mathbf{\tau}\to 0\).

Figure 9: The optimal time (62) versus the concurrence (49) for some values of \(\xi\) with \(\mathfrak{J}=1\).
Accordingly, we conclude that the entanglement and the ordinary time are two physical quantities exploitable for realizing optimal evolutions in such integrable systems. These kinds of evolutions are of paramount importance in quantum computing for designing good quantum algorithms [64; 65].
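A quick numerical illustration (ours) of the bound \(\mathbf{\tau}\leq t\) encoded in Eq. (62), assuming, as the notation \(\mathbf{\xi}=\mathfrak{J}\mathbf{\tau}\) suggests, that \(\xi=\mathrm{J}t\) for the ordinary evolution:

```python
import numpy as np

J, t = 1.0, 0.9              # coupling constant and an arbitrary ordinary time
xi = J * t                   # assumed relation between xi and t (see lead-in)
s = abs(np.sin(xi))

C = np.linspace(1e-4, 1.0, 500)                  # concurrence values
tau = (xi / (J * s)) * np.sqrt(C * (2 * s - C))  # optimal time, Eq. (62)

print(np.all(tau <= t + 1e-12))  # True: tau never exceeds the ordinary time
print(C[np.argmax(tau)], s)      # tau = t is attained at C = C'_c = |sin(xi)|
```

The bound follows from \(\mathscr{C}(2|\sin\xi|-\mathscr{C})=\sin^{2}\xi-(\mathscr{C}-|\sin\xi|)^{2}\leq\sin^{2}\xi\), with equality exactly at the critical level \(\mathscr{C}^{\prime}_{\rm c}=|\sin\xi|\).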
## VI Conclusion and outlook
To summarize, we investigated a physical system consisting of \(N\) interacting spin-\(1/2\) particles under the all-range Ising model. We assumed that the starting state is the tensor product of \(N\) eigenstates of the spin-\(1/2\) projection operator along the positive direction denoted by the unit vector. After applying the time evolution propagator, the obtained evolved state is identified by three dynamical degrees of freedom, which are the spherical angles \((\theta,\varphi)\) and the evolution time \(t\). We established the Fubini-Study metric and found that the related quantum phase space is a closed two-dimensional manifold. Moreover, by examining the G-curvature in the framework of the Gauss-Bonnet theorem, we demonstrated that this state space has both a dumbbell-shaped structure and a spherical topology. The geometric phase acquired by the \(N\) spin-\(1/2\) system is also studied within arbitrary and cyclic evolution processes. This is achieved by evaluating the difference between the total and dynamic phases. We found that the geometric phase, in the arbitrary evolution case, varies nonlinearly with time, reflecting its dynamic character. Geometrically, we showed that it is affected both by the evolution trajectory taken by the system and by the associated state space geometry. In this view, we concluded that the geometric phase can be exploited to parameterize the possible evolution trajectories of the system. This result may be interesting for building robust quantum gates. Further, in the thermodynamic limit (\(N\rightarrow\infty\)), the total phase cancels out and therefore the geometric and dynamic phases coincide, up to a sign, at any moment in the evolution process. This offers the opportunity of measuring the geometric phase experimentally. In the cyclic evolution case, we have calculated the AA-geometric phase and discovered that it is independent of the system dynamics. Rather, it depends only on the initial state choice (i.e., the initial parameters), signifying that it is influenced by the state space geometry and not by the evolution path followed by the system. Hence the cyclic evolution paths are not parameterizable by the AA-geometric phase. We have also derived the topological phase appearing naturally during cyclic evolutions. We found that it is proportional to the square of the particle number \(N^{2}\); in particular, it takes fractional values for \(N\) odd and multiples of \(\pi\) for \(N\) even. This signifies that the number of spins influences the topology of the state space. The evolution speed and the Fubini-Study distance separating the quantum states are also examined in detail. As a result, we resolved the quantum brachistochrone problem for the \(N\) spin-\(1/2\) system by determining the shortest time (i.e., the optimal time) required to perform a time-optimal evolution, and defined the underlying optimal state space. In this perspective, we discovered that for \(N=2\) (i.e., the two spin-\(1/2\) system) the optimal and ordinary times coincide \((\mathrm{t}_{\rm min}=t)\), while for \(N\geq 3\) (i.e., the \(N\) spin-\(1/2\) system) the optimal time (44) is strictly lower than the ordinary time, and thus a time-optimal evolution is achievable. Besides, in the thermodynamic limit (\(N\rightarrow\infty\)), the optimal time decreases to zero \((\mathrm{t}_{\rm min}\to 0)\). In this scheme, the optimal state circle coincides with a straight line since its radius becomes infinite.
Thereby, we inferred that the particle number and the ordinary time are two physical quantities exploitable for performing time-optimal evolutions in such integrable systems.
On the other hand, by restricting the whole system to a two-spin system, i.e., two interacting spins under the Ising model, we have studied the quantum entanglement via two approaches. The first approach is of a geometric nature, in which we express the Fubini-Study metric in connection with the Wootters concurrence as a quantifier of quantum correlations. This outcome may be interesting for the experimental handling of the geodesic distance between entangled states and also for adjusting the state space geometry. We proved that an increase in the entanglement degree between the two spins causes a decrease in the state space curvature until it reaches negative values, showing the effect of quantum correlations on the compactification of the related state space. Additionally, the entanglement can be used to identify the quantum states over the space of states; for example, the states of maximum entanglement are housed in the regions of lowest curvature, whereas the separable states are housed in the regions of highest curvature. The geometric phase acquired by the two-spin system is discussed at length in relation to the entanglement. We found that the geometric phase exhibits two different behaviors with respect to the critical entanglement level \(\mathscr{C}_{\text{c}}\left(\Phi_{\text{g}}(\mathscr{C}_{\text{c}})=\Phi_{\text{g}_{\text{min}}}\right)\): the first one is in the interval \([0,\mathscr{C}_{\text{c}}]\), wherein the quantum correlations favor the loss of the geometric phase, while the second one is in the interval \([\mathscr{C}_{\text{c}},1]\), wherein the quantum correlations favor the gain of the geometric phase. In the cyclic evolution scenario, the AA-geometric phase behaves similarly to the geometric phase in the first interval \([0,\mathscr{C}_{\text{c}}]\). This highlights the significance of quantum entanglement for controlling the evolution of the geometric phase in such spin systems. The second approach is of a dynamic nature: we linked the evolution speed with the concurrence and observed that the speed displays two different behaviors with respect to the critical entanglement level \(\mathscr{C}_{\text{c}}^{\prime}\left(\mathrm{V}(\mathscr{C}_{\text{c}}^{\prime})=\mathrm{V}_{\max}\right)\): the first one is in the interval \([0,\mathscr{C}_{\text{c}}^{\prime}]\), wherein the quantum correlations speed up the system evolution over the relevant phase space, while the second one is in the interval \([\mathscr{C}_{\text{c}}^{\prime},1]\), wherein the quantum correlations slow down this evolution. Accordingly, we concluded that the system dynamics is controllable by its entanglement level. The same behavior is noticed for the Fubini-Study distance separating the entangled states. Finally, we solved the quantum brachistochrone problem based on the amount of entanglement exchanged between the two spins. We inferred that the quantum entanglement and the ordinary time are two physical quantities exploitable to realize the time-optimal evolution in such a spin system. Thus, we were able to illustrate, to a significant extent, the connection between quantum entanglement and the geometric and dynamical characteristics of the considered two-spin phase space.
|
2305.05151 | Interfacial Stresses on Droplet Interface Bilayers Using Two Photon
Fluorescence Lifetime Imaging Microscopy | Response of lipid bilayers to external mechanical stimuli is an active area
of research with implications for fundamental and synthetic cell biology.
However, there is a lack of tools for systematically imposing mechanical
strains and non-invasively mapping out interfacial (membrane) stress
distributions on lipid bilayers. In this article, we report a miniature
platform to manipulate model cell membranes in the form of droplet interface
bilayers (DIBs), and non-invasively measure spatio-temporally resolved
interfacial stresses using two photon fluorescence lifetime imaging of an
interfacially active molecular flipper (Flipper-TR). We established the
effectiveness of the developed framework by investigating interfacial stresses
accompanying three key processes associated with DIBs: thin film drainage
between lipid monolayer coated droplets, bilayer formation, and bilayer
separation. Interestingly, the measurements also revealed fundamental aspects
of DIBs including the existence of a radially decaying interfacial stress
distribution post bilayer formation, and the simultaneous build up and decay of
stress respectively at the bilayer corner and center during bilayer separation.
Finally, utilizing interfacial rheology measurements and MD simulations, we
also reveal that the tested molecular flipper is sensitive to membrane fluidity
that changes with interfacial stress - expanding the scientific understanding
of how molecular motors sense stress. | Yaoqi Huang, Vineeth Chandran Suja, Menghao Yang, Andrey V. Malkovskiy, Arnuv Tandon, Adai Colom, Jian Qin, Gerald G. Fuller | 2023-05-09T03:32:10Z | http://arxiv.org/abs/2305.05151v2 | Interfacial Stresses on Droplet Interface Bilayers Using Two Photon Fluorescence Lifetime Imaging Microscopy
###### Abstract
Response of lipid bilayers to external mechanical stimuli is an active area of research with implications for fundamental and synthetic cell biology. However, there is a lack of tools for systematically imposing mechanical strains and non-invasively mapping out interfacial (membrane) stress distributions on lipid bilayers. In this article, we report a miniature platform to manipulate model cell membranes in the form of droplet interface bilayers (DIBs), and non-invasively measure spatio-temporally resolved interfacial stresses using two photon fluorescence lifetime imaging of an interfacially active molecular flipper (Flipper-TR). We established the effectiveness of the developed framework by investigating interfacial stresses accompanying three key processes associated with DIBs: thin film drainage between lipid monolayer coated droplets, bilayer formation, and bilayer separation. Interestingly, the measurements also revealed fundamental aspects of DIBs including the existence of a radially decaying interfacial stress distribution post bilayer formation, and the simultaneous build up and decay of stress respectively at the bilayer corner and center during bilayer separation. Finally, utilizing interfacial rheology measurements and MD simulations, we also reveal that the tested molecular flipper is sensitive to membrane fluidity that changes with interfacial stress - expanding the scientific understanding of how molecular flippers sense stress.
Bilayers | Molecular flippers | Interfacial Mechanics | FLIM | Two photon microscopy
The phospholipid membrane plays a crucial role in the structure and function of cells. The physical, chemical and biological properties of the cell membrane are actively studied using _in vivo_ and _in vitro_ cell models [(1, 2, 3)]. The droplet interface bilayer (DIB), a novel _in vitro_ cell model, is a bilayer formed between two aqueous droplets coated with lipid monolayers in a non-polar phase. DIBs are attractive due to the ability they provide to intricately control and visualize bilayer composition and dynamics [(4, 5)]. To date, DIBs have been used to evaluate the physical characteristics of lipid bilayers [(5, 6, 7, 8)], electrical characteristics [(9, 10)], trans-membrane transport characteristics [(10, 11, 12, 13, 14)], and cell functionalization under a variety of conditions [(15, 16, 17)].
The interfacial (membrane) stress, defined as the force per unit length acting on the lipid bilayer, is crucial to many biological processes [(18, 19, 20)]. Cell adhesion, a fundamental property that is necessary for cell migration and multicellularity, is influenced by membrane stress [(21, 22)]. During cell division, membrane tension impacts the formation of the daughter cells, with an increased tension delaying abscission - the last step of cell division [(23, 24)]. Membrane stresses also play a critical role in phagocytosis [(25)] and can induce disruptions in the cell membrane - both of which have implications in the development of cell therapies [(26, 27, 28)]. Motivated by these processes, researchers have actively studied the effects of cell membrane stresses, including the work from this laboratory using DIBs to probe the connection between bilayer separation mechanics and membrane tension [(29)]. Despite the progress, it is currently not possible to obtain spatio-temporally resolved stress distributions in DIBs. Although techniques such as AFM allow the measurement of membrane stresses at a fixed location on a bilayer, none of the existing techniques are suitable for dynamically evolving bilayers [(30, 31)].
Recently, a new noninvasive method has been developed where membrane stresses can be measured utilizing a fluorescent lipid tension reporter (Flipper-TR), one of the first molecular flippers to specifically measure interfacial characteristics [(23)]. Flipper-TR consists of two dithienothiophene aromatic rings (also referred to as flippers) that can twist and planarize in response to increasing lipid packing density and membrane tension [(32)]. The fluorescence lifetime of the molecule depends on the dihedral angle between the flippers, with the molecule displaying higher lifetimes as the dihedral angle increases (i.e. as the molecule becomes more planar) [(23, 32, 33)].
## Significance Statement
Understanding the behavior of phospholipid membranes under mechanical stimuli is important for fundamental cell biology and for developing novel cellular therapies. Existing tools such as AFM and optical tweezers are either invasive or lack the spatiotemporal resolution required to study phospholipid mechanics. The reported framework addresses this gap by integrating two-photon lifetime imaging of an interfacially active molecular flipper with droplet interface bilayers (DIBs). The precise mechanical manipulation capabilities of DIBs together with the real time measurement of spatiotemporally resolved membrane stresses opens up numerous possibilities for studying phospholipid membrane behavior under mechanoperturbations.
This promising tool and the associated protocols, however, have yet to be optimized for dynamic stress measurements and made suitable for DIBs.
In this manuscript, we report an experimental framework and platform for incorporating Flipper-TR into DIBs and evaluating spatio-temporally resolved membrane stresses under a variety of dynamic conditions. Initially, we detail the construction and validation of a miniature platform, compatible with inverted two-photon microscopes, for creating and manipulating DIBs. We subsequently use this platform to create DIBs using two parallel droplets, and characterize dynamic stresses with two-photon fluorescence lifetime imaging (FLIM) during three key processes: the approach of lipid monolayer coated droplets, bilayer formation, and bilayer separation. The key findings of this study are validated and supported by mathematical modeling, interfacial rheology measurements and molecular dynamics simulations. Finally, we conclude the manuscript by discussing interesting avenues for future research.
## Results
### Dynamic two photon DIB FLIM experiments
To enable the measurement of spatio-temporally resolved phospholipid membrane stresses, we built a miniature experimental DIB platform compatible with inverted microscopes for creating and manipulating droplet interface bilayers (DIBs). As shown in Fig.1a,c, the platform consists of a glass chamber to hold the non-polar solvent (hexadecane in our case) with lipids, and two 75 degree blunt needles (ID: 0.58 mm, OD: 0.81 mm) to hold the pendant drops. The two needles are placed at opposite sides of the chamber, with one of them connected to a picomotor, and the other to a manually operated micrometer. Each needle has an agarose gel at its tip for anchoring the pendant drops and preventing dripping. The setup is mounted atop a two photon inverted microscope with a pulsed laser (Zeiss LSM780) for performing fluorescence lifetime imaging microscopy (FLIM). Lipid coated droplets containing the fluorescent molecular flipper Flipper-TR are generated on the two needles and brought together to create DIBs. Time correlated single photon fluorescence data is then acquired with the help of an SPC150N photon counting module from Becker & Hickl GmbH. Spatio-temporally resolved fluorescence lifetimes are recovered using FLIMFit [(34)] by first correcting the raw data with a previously obtained instrument response function (IRF) and then fitting a double exponential function (see _Methods_ for more details).
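The lifetime-recovery step just described can be sketched as follows. This is a minimal illustration of the general procedure (a double-exponential fit of an IRF-corrected decay, summarized by an intensity-weighted mean lifetime); it is not the FLIMFit implementation, and the synthetic data and parameter values are placeholders:

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    """Double-exponential fluorescence decay model (post IRF correction)."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic stand-in for one pixel's IRF-corrected TCSPC histogram (times in ns)
rng = np.random.default_rng(1)
t = np.linspace(0.0, 25.0, 256)
decay = biexp(t, 0.7, 5.5, 0.3, 1.5) + rng.normal(0.0, 0.005, t.size)

popt, _ = curve_fit(biexp, t, decay, p0=(0.5, 4.0, 0.5, 1.0))
a1, tau1, a2, tau2 = popt

# Intensity-weighted mean lifetime, a common per-pixel summary in FLIM
tau_mean = (a1 * tau1**2 + a2 * tau2**2) / (a1 * tau1 + a2 * tau2)
print(tau_mean)
```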
We found that two photon FLIM yields better signal-to-noise ratios than single photon FLIM while imaging DIBs (_see SI Fig.(27)_). Unlike the micrometer scale liposomes tested in the literature [(23)], DIBs are composed of millimeter scale droplets, which require the increased imaging depth of two photon microscopes [(35)] for adequate visualization. We confirmed the suitability of the developed two photon microscopy protocols for FLIM by imaging a single pendant drop via previously established single photon microscopy protocols [(23)] and the two photon microscopy protocols. As shown in Fig.1e, nearly identical lifetime distributions are obtained in both cases (see _Methods_ and _SI Fig.(27)_ for more details).
Figure 1: A schematic of the experimental setup and calibrations. **a.** FLIM setup used for the lifetime - interfacial stress calibration. A pendant droplet of KCl solution containing Flipper-TR is created in the hexadecane solvent with DPhPC lipids and a given percentage of cholesterol. Lipid monolayers form on the surface of the droplet, and the lifetime of the Flipper-TR probe embedded inside the lipid monolayer is recorded by the FLIM detector. **b.** FLIM image of a section of the pendant droplet. **c.** FLIM setup for the DIB experiment. Two pendant droplets are pinned by agarose onto needles, and a picomotor is used to position the left droplet. **d.** Monolayer FLIM images obtained by two photon microscopy and one photon microscopy. **e.** Gaussian fits of the lifetimes obtained via single and two-photon microscopy show little difference. **f.** Distribution of fluorescence lifetimes with Gaussian fits for DPhPC monolayers with different cholesterol concentrations. **g.** Average lifetime variation with monolayer surface tension (N=3, where N is the number of trials). Scale bar for subfigures (b) and (d) is 0.05 mm.

For mapping lifetimes to membrane stresses, we measured fluorescence lifetimes of monolayers of lipid coated pendant drops with varying lipid and cholesterol concentrations (Fig.1f). The corresponding monolayer tensions were obtained via pendant drop tensiometry [(36, 37)]. As shown in Fig.1g, we find that the lifetimes decrease with increasing monolayer tension. A linear fit yields a slope of \(-0.08\ ns\ m\ mN^{-1}\), which is comparable to those reported for liposomes [(23)] and expected from MD simulations [(32)]. In other words, the fluorescence lifetime is inversely correlated to the stress acting on the interface (see _Methods_ for more details).
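In practice, the calibration above amounts to a linear map between mean lifetime and interfacial stress that can then be inverted pixel-by-pixel. A hedged sketch - the numbers below are placeholders chosen only to reproduce a slope of roughly \(-0.08\ ns\ m\ mN^{-1}\), not the measured data of Fig. 1g:

```python
import numpy as np

# Monolayer calibration data: surface tension (mN/m) vs mean lifetime (ns).
tension = np.array([1.2, 2.0, 2.9, 3.8])       # hypothetical values
lifetime = np.array([4.95, 4.89, 4.82, 4.74])  # hypothetical values

slope, intercept = np.polyfit(tension, lifetime, 1)  # slope ~ -0.08 ns m/mN
print(slope)

def lifetime_to_stress(tau):
    """Invert the linear calibration to map a measured lifetime to a stress."""
    return (tau - intercept) / slope

print(lifetime_to_stress(4.85))  # stress (mN/m) for a 4.85 ns lifetime
```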
### Dynamic FLIM with Flipper-TR is sensitive to hydrodynamic stresses
The first step in creating DIBs involves pressing two lipid monolayer coated droplets (see Fig.2a) against one another. This process traps a thin liquid film of hexadecane between the drops, which eventually drains before bilayer formation.
As shown qualitatively in Fig.2b and quantitatively in Fig.2c, the fluorescence lifetime of the pre-bilayer (monolayer) interface with DPhPC decreases during thin film drainage. A similar, albeit smaller, decrease in lifetime was observed in the presence of cholesterol (Fig. 2c). We hypothesize that this dynamic change in lifetime is driven by the hydrodynamic lubrication stresses associated with the drainage of the non-polar phase.

Figure 2: Mechanics when two monolayers are in contact. **a**, A schematic of droplet profiles showing two droplets approaching each other. **b**, FLIM images showing the lifetime before, during, and after the two monolayers are in contact. **c**, Average lifetime for different lipid samples (N=3). **d**, Residual surface stress along with the theoretical prediction obtained from Eq. 1 (N=3). Scale bar equals 0.05 mm.
Assuming incompressible, Newtonian, non-polar fluid behavior and approximating the region between drops as that between two parallel disks, the hydrodynamic force per unit length (\(\Pi\)) acting on the monolayer can be calculated from lubrication theory as (see _SI Fig.7_ and text for details),
\[\Pi=-6\mu\pi r^{2}\frac{V_{z}}{h^{2}}. \tag{1}\]
Here \(\mu\) is the bulk viscosity of the non-polar phase, \(r\) is the radial coordinate along the monolayer, \(V_{z}\) is the film thinning rate, and \(h\) is the thickness between the monolayers. An order of magnitude analysis at a radial location of r = 0.1 mm, with typical values of the relevant quantities - \(\mu\) = 3 cP (for hexadecane [(38)]), \(V_z\) = 1 nm/s and h = 20 nm [(39)] - yields \(\Pi\sim 1.5\ mN/m\). The residual stress (surface tension + hydrodynamic interfacial stress \(\Pi\)) on the monolayer during compression agrees with the estimate obtained from lubrication theory (Fig. 2d), supporting the conclusion that dynamic FLIM with Flipper-TR is sensitive to and can be used to study the effect of hydrodynamic stresses on lipid membranes. The agreement between theory and experiments also provides additional validation of the reported FLIM image acquisition, calibration and post-processing protocols.
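The order-of-magnitude estimate quoted above follows directly from Eq. (1); a minimal numerical sketch using the stated representative values (the sign convention, \(V_z<0\) for a thinning film, is our assumption):

```python
import numpy as np

# Order-of-magnitude estimate of the lubrication stress, Eq. (1),
# with the representative values quoted in the text.
mu = 3e-3      # Pa s, hexadecane viscosity (3 cP)
r = 0.1e-3     # m, radial location on the monolayer
V_z = -1e-9    # m/s, film thinning rate (negative: h decreases during drainage)
h = 20e-9      # m, film thickness

Pi = -6 * mu * np.pi * r**2 * V_z / h**2
print(Pi * 1e3, "mN/m")  # ~1.4 mN/m, consistent with the quoted ~1.5 mN/m
```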
### Membrane stress and fluidity of droplet interface bilayers increase radially inwards
Bilayer formation happens when the hydrophobic phospholipid tails zip up to completely exclude the oil film between the two droplets (Fig 3a,b). Supplementing the existing body of literature on the physics of bilayer formation [(40, 5, 15)], we measured the spatio-temporal fluorescence lifetime evolution during DPhPC and DPhPS bilayer formation. As shown in Fig.3c,d, fluorescence lifetimes are inhomogeneously distributed, with a lower lifetime at the bilayer center. A similar trend is observed on the addition of cholesterol (Fig.3e) - suggesting the presence of a membrane stress field that increases inwards. Curiously, the magnitude of the decrease in lifetime differs dramatically across the two lipids (50% lower in DPhPS as compared to DPhPC). This is surprising as both lipid bilayers are known to have a similar packing density (area per lipid) [(41)], and Flipper-TR is expected to mechanically respond to membrane tension via changes in lipid packing.
To investigate this further, we first performed interfacial rheology measurements on lipid monolayers (Fig.3f). The interfacial fluidity (the inverse of the loss modulus) scales with the lifetime. It is worth noting that the addition of cholesterol increases membrane fluidity for the tested lipids, but not for DOPC - consistent with previous studies [(23)] and likely a result of the dual role of cholesterol in regulating membrane fluidity [(42)] (_SI Fig.7_). To confirm that the fluidity trends observed on lipid monolayers translated to lipid bilayers, we turned to MD simulations (Fig.3g). The simulations confirmed the similar packing density of DPhPS and DPhPC lipid bilayers (_SI Fig.7_). We evaluated the lipid trajectory data to compute the self diffusivity - a key property that is easily obtained and correlated to membrane fluidity [(43)]. As seen in Fig.3h, DPhPS lipids have a higher diffusivity than DPhPC lipids, and by extension DPhPS bilayers are more fluid than DPhPC bilayers - consistent with our interfacial rheology measurements. This suggests that Flipper-TR responds to changes in membrane stresses not only via changes in lipid packing, but also via changes in membrane fluidity imparted by membrane stresses. As the Flipper-TR molecule is known to dynamically oscillate about its dihedral in the bilayer [(32)], it is not surprising that membrane fluidity is also an important physical property that dictates its lifetime. Finally, the radial variation in lifetime can be explained by the circular geometry of droplet interface bilayers and the radial propagation of the lipid monolayer tension acting at the edge of the bilayer (see _SI Fig.7_ for details). Taken together, this suggests that droplet interfaces are more fluid as we move radially inwards. This finding has important implications for trans-membrane transport studies routinely performed with bilayers.
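The diffusivity extraction referenced here typically uses the two-dimensional Einstein relation, \(\mathrm{MSD}(t)=4Dt\) at long times, applied to the in-plane lipid trajectories. A hedged sketch (a synthetic random walk stands in for the actual MD trajectory, and the units are illustrative):

```python
import numpy as np

def lateral_msd(xy, dt):
    """MSD of in-plane lipid positions, averaged over lipids and time origins.

    xy: array of shape (n_frames, n_lipids, 2); dt: frame spacing.
    """
    n = xy.shape[0]
    lags = np.arange(1, n // 2)
    msd = np.array([np.mean(np.sum((xy[lag:] - xy[:-lag])**2, axis=-1))
                    for lag in lags])
    return lags * dt, msd

# Synthetic 2D random walk standing in for an unwrapped MD trajectory (nm)
rng = np.random.default_rng(0)
xy = np.cumsum(rng.normal(0.0, 0.05, size=(2000, 64, 2)), axis=0)

t, msd = lateral_msd(xy, dt=0.1)    # dt in ns
D = np.polyfit(t, msd, 1)[0] / 4.0  # Einstein relation: MSD = 4 D t
print(D)                            # nm^2/ns
```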
### Membrane stresses increase at corners and decrease at the center during bilayer separation
DIBs can be manipulated to apply dynamic strains for separating the bilayer. Here, following a previously established protocol [(29)] (see _Methods_), we applied controlled step strains by separating the droplets step-wise to probe the spatio-temporal stresses during bilayer separation (Fig. 4a, _SI Fig._17). Visualizing the fluorescence lifetimes immediately before and after a single step separation (Fig. 4b), we find a decrease in lifetime at the corners of the bilayer. This decrease is most pronounced for DPhPC systems, while the addition of cholesterol dampens this effect (Fig.4d). The center lifetime is observed to mostly increase, albeit weakly, after the first step separation. Interestingly, this indicates that there is a localized increase in membrane stress at the corners and a weak relaxation of in-plane membrane stress at the center during bilayer separation.
This evolution of the membrane stress during step separation has important implications for the evolution of the bilayer contact angle (Fig.4e). For identical step strains, we see that the magnitude of the change in contact angle is positively correlated to the magnitude of the change in membrane stress, explaining previous observations in the literature [(8)]. To physically understand this behavior, let us recall from the previous section that there is an inverse correlation between membrane stress and fluidity (Fig.3f - h). The more fluid the membrane, the more rapid the stress relaxation. This results in a lower change in the membrane stress post step separation, which, in turn, drives a smaller change in the contact angle due to the lower magnitude of the counteracting membrane stress.
In the terminal separation step, there is a significant increase in the center lifetime as in-plane stresses give way to normal stresses that initiate the so-called pulling mode of bilayer separation [(29, 44)] (see Fig.4a). Interestingly, we observe the formation of tethers (inverted bilayer tubes) during the complete separation of the bilayer (Fig.4c). Although tethers were only observed in DPhPC (the most rigid lipid membrane tested) and were not captured in the DPhPC-chol or DPhPS samples, this is an interesting phenomenon that deserves further investigation.
Figure 3: Mechanics during bilayer formation. **a.** A schematic of droplets showing the formation of the bilayer. **b.** A schematic of the bilayer profile between two droplets during bilayer formation. The zipping of the two monolayers changes the lipid packing. **c.** FLIM images showing the lifetime of the pure DPhPC sample when two monolayers are forming a bilayer. **d.** Lifetime distributions over 20% (N=3). **e.** Lifetime distribution for different lipid samples (N=3). **f.** Inverse of the loss modulus of PC and PS mixtures (N=3). Scale bar is equal to 0.05 mm. **g.** Atomic configurations for DPhPC bilayers between water molecules. As shown in the legend, water molecules are shown in solid blue, and for the DPhPC molecule red represents oxygen atoms, white represents hydrogen atoms, cyan represents carbon atoms, and tan represents phosphorus atoms (not visible). **h.** Mean square displacement (MSD) for DPhPC and DPhPS lipid bilayers as a function of MD simulation time.

Figure 4: Bilayer separation mechanics. **a.** A schematic of the droplet profile during the peeling process (i and iii) and complete detachment (ii and iv). **b.** FLIM images before and after the strain displacement. \(\theta\) inside the subfigure denotes the contact angle. **c.** Tether formation and extension during bilayer detachment. **d.** Lifetime during the peeling process for different lipid samples (N=3). **e.** Corner surface stress versus contact angle (\(\theta\)) before and after a step strain (buildup) for different lipid samples (N=3). Scale bar equals 0.05 mm.
## Discussion
The behavior of phospholipid membranes under mechanical stimuli is of fundamental and practical interest, with applications in understanding biophysical processes such as cell division, cell migration and phagocytosis, and in the development of cellular therapies [21, 22, 24, 28]. Existing tools are either invasive or lack the spatio-temporal resolution required to study phospholipid mechanics. To address this gap, we report a droplet interface bilayer (DIB) framework employing two-photon fluorescence lifetime imaging of an interfacially active molecular flipper (Flipper-TR) for evaluating spatio-temporally resolved membrane stresses under a variety of dynamic conditions. Two photon microscopy enhanced the imaging depth, whereas the use of an interfacially active molecular flipper minimized unwanted signal from the bulk. Both of these components are vital for improving the signal-to-noise ratio of fluorescence lifetime images of the bilayer sandwiched between millimeter scale droplets.
Systematic experiments with DIBs established the effectiveness of the framework in resolving interfacial stresses across diverse conditions - during thin film drainage between droplets prior to bilayer formation, during bilayer formation, and in the course of bilayer separation under external stimuli. Interestingly, the experimental framework also enabled us to uncover fundamental aspects of stress distributions in DIBs. Post bilayer formation, a radially decaying membrane stress field is created within DIBs with the highest stress existing at the center. During bilayer separation under step-strain, the existing stress field becomes more uniform, with the stress gradually relaxing at the center and building up at the corners. The change in bilayer contact angle post step strain, a key metric tracked in DIB mechanoperturbation studies [8, 45], is positively correlated to the change in the membrane stress. This finding explains previous reports in the literature [8] where surprisingly dissimilar changes in contact angle were observed during identical step strains in closely related DIB systems.
Interfacially active molecular flippers such as Flipper-TR sense membrane stresses via changes in their fluorescence lifetime. The spectroscopic response and fluorescence lifetime of molecular flippers are tightly correlated to the mean dihedral angle and the twisting dynamics of the flipper about its dihedral, which in turn changes with the molecular environment. Lipid packing density changes in response to membrane stress, and has been identified as the key molecular-environment change responsible for the mechanosensitivity of Flipper-TR. Investigating the lifetimes of Flipper-TR in two phospholipid membranes (DPhPC and DPhPS) with very similar lipid packing densities, we show, supported by interfacial rheology measurements and MD simulations, that membrane fluidity can also influence the lifetime of Flipper-TR. This expands the current scientific understanding of mechanosensation by molecular flippers.
Figure 4: Bilayer separation mechanics. **a.** A schematic of droplet profiles during the peeling process (i and iii) and complete detachment (ii and iv). **b.** FLIM images before and after the strain displacement. \(\theta\) in the subfigure denotes the contact angle. **c.** Tether formation and extension during bilayer detachment. **d.** Lifetime during the peeling process for different lipid samples (N=3). **e.** Corner surface stress versus contact angle (\(\theta\)) before and after a step strain (buildup) for different lipid samples (N=3). Scale bar equals 0.05 mm.
There are a number of avenues for extending the findings of the current work. First, existing interfacially active molecular flippers are sensitive to a number of microenvironment features, such as lipid packing density and membrane fluidity, necessitating time-consuming lifetime-stress calibrations on a case-by-case basis. Establishing general physical principles that can minimize the need for repeated calibrations will make this a more attractive tool. Second, a key limitation of the reported study - the absence of orthogonal membrane stress measurements - can be alleviated by incorporating tools such as optical tweezers for force measurements. Even though the calibrations based on surface tension variations are valid [32, 43], the calibration range is limited by physically attainable interfacial tensions, a limitation that direct force measurements can overcome. Finally, the ability of molecular flippers to sense membrane fluidity opens up possibilities for their use as an interfacial rheology tool. Overall, the reported results open up promising possibilities for non-invasive phospholipid stress measurements and drive advances in fundamental cell biology and in the development of novel cellular therapies.
## Materials and Methods
### Materials
DPhPC (1,2-diphytanoyl-sn-glycero-3-phosphocholine, a neutrally charged lipid), DPhPS (1,2-diphytanoyl-sn-glycero-3-phospho-L-serine, a negatively charged lipid), and DOPC (1,2-dioleoyl-sn-glycero-3-phosphocholine, a neutrally charged lipid) were used as model lipids for generating the phospholipid bilayers reported in this manuscript. Cholesterol from ovine wool (Catalog no: 700000; Avanti Polar Lipids Inc., Alabaster, Alabama) was purchased in powder form and was then transferred and stored in chloroform solution. Prior to the start of the experiments, DPhPC (Catalog no: 850356; Avanti Polar Lipids Inc., Alabaster, Alabama), DPhPS (Catalog no: 850408; Avanti Polar Lipids Inc., Alabaster, Alabama) and cholesterol (if needed) were extracted from the suspending chloroform solution in two steps. Initially, a predetermined amount of lipid in chloroform and a certain amount of cholesterol in chloroform were mixed. Then, the chloroform in the mixed solution was evaporated off by gently blowing a stream of nitrogen for 3 minutes. Subsequently, the residue was vacuum dried for another 30 minutes. The chloroform-free lipids and cholesterol were then dissolved in hexadecane to give a final DPhPC concentration of 10 mM with the desired mole percentage of cholesterol.
The molecular flipper probe, Flipper-TR (Catalog no: CY-SC020, Cytoskeleton Inc.), was prepared as follows. The probe was initially reconstituted in 50 \(\mu\)L DMSO to form a 1 mM solution. 1 \(\mu\)L of the reconstituted probe solution was transferred to a new container, and 49 \(\mu\)L of 1 M KCl solution was added to obtain a 20 \(\mu\)M Flipper-TR solution in KCl.
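As a quick check of the stated concentrations, the 50-fold dilution gives

\[1\,\mathrm{mM}\times\frac{1\,\mu\mathrm{L}}{1\,\mu\mathrm{L}+49\,\mu\mathrm{L}}=20\,\mu\mathrm{M},\]

consistent with the working concentration used in the DIB experiments below.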
Agarose gel, used as a core to support the pendant drop (Fig. 1c), was prepared as follows. 300 mg of agarose powder (Thermo Fisher Scientific, Catalog no: BP164100) was mixed with 10 mL of distilled water at high temperature, and then cooled down into gel form (46, 47). 1 M KCl solution was then used for preparing the aqueous sessile and pendant droplets. The agarose core size was kept much smaller than the pendant drop size to avoid any undesired influence of the agarose core on the reported bilayer dynamics.
### Fluorescence lifetime imaging microscopy
Fluorescence lifetime imaging microscopy (FLIM) measurements were obtained with a Zeiss LSM780, a two-photon microscope with a pulsed laser (80 MHz, 485/850 nm). An SPC150N photon-counting module (Becker & Hickl GmbH) is coupled to the microscope, and FLIMfit was used for analyzing the FLIM images (34). A GFP 525/50 nm filter is installed inside the module to collect the emission signal.
To start the FLIM experiment, a non-descanned two-photon laser beam at 485/850 nm is applied. Then, FLIM imaging was performed under two different protocols: 1) static FLIM and 2) dynamic FLIM. The static FLIM protocol is intended to capture membrane stresses on static monolayers, and it collects 20 million photon events for each image over 20 s. The dynamic FLIM protocol is intended to resolve the membrane stresses under dynamic conditions; here we collect only 5 million photon events, taking approximately 4 s. During the experiment, a 10x objective lens is used, and the brightfield image from the T-PMT detector may be activated to confirm the formation of the bilayer (see _SI Fig.??_).
For data analysis, a dual-exponential model is applied to fit the fluorescence decay data for the region of interest (23). Since similar tendencies were seen for the two lifetimes, the longer lifetime is reported for all experiments. The instrument response function (IRF) used for correcting the raw data was obtained as follows. 1 \(\mu\)L of a gold nanoparticle solution (Sigma Aldrich, 80 nm diameter, OD 1, stabilized dispersion in citrate buffer) was pipetted onto a slide and was imaged for 30 s under the same FLIM parameters as the experiments reported in the manuscript. The relaxation curve of the photons was then saved and imported as an IRF calibration for correction.
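As an illustration of the dual-exponential analysis, a minimal Python sketch is given below. This is our own illustration, not the FLIMfit pipeline: it ignores IRF deconvolution, and all function names and numbers are ours.

```python
# Minimal sketch of a dual-exponential fluorescence-decay fit (illustrative;
# the actual analysis used FLIMfit with IRF correction, which this omits).
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    """Dual-exponential decay: I(t) = a1*exp(-t/tau1) + a2*exp(-t/tau2)."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

def fit_lifetimes(t_ns, counts):
    """Fit a photon-count decay and return the two lifetimes, longest first."""
    p0 = (counts.max(), 1.0, counts.max() / 10, 4.0)  # rough initial guess (ns)
    popt, _ = curve_fit(biexp, t_ns, counts, p0=p0, maxfev=10000)
    return sorted([popt[1], popt[3]], reverse=True)

# Synthetic example: one 80 MHz laser period (12.5 ns) binned into 256 channels.
t = np.linspace(0.0, 12.5, 256)
decay = biexp(t, 1000.0, 5.5, 300.0, 1.2)        # Flipper-TR-like decay (made up)
decay = np.random.poisson(decay).astype(float)   # photon shot noise
print(fit_lifetimes(t, decay))  # the longer lifetime is what is reported here
```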
### Pendant drop experiments
The setup in Fig. 1a is used to measure the lifetime of pendant drops coated with lipid monolayers. On the stage, a glass chamber holds the oil solution, and a 0.25 mL syringe with a 90 degree blunt capillary needle (ID: 0.43 mm, OD: 0.63 mm, fixed by a stand) is used to create the pendant drop. To conduct pendant drop experiments, a predetermined amount of KCl solution with Flipper-TR is collected by a 1 mL syringe with a 90 degree bent needle, and 0.55 mL of the lipid solution (previously sonicated for 1 hour to prevent aggregates) is dispensed onto the glass chamber. Then the syringe is placed inside the oil solution and fixed in place. After that, the syringe is gently pushed until a 1 \(\mu\)L pendant drop is observed in the microscope brightfield image. Subsequently, a sequence of FLIM images of the pendant drop was acquired for calibration.
### DIB experiments
The setup in Fig. 1c is used to create and manipulate DIBs. As shown in the figure, the same glass chamber used for the pendant drop experiments is used to hold the oil phase solution, but with two 75 degree blunt needles (ID: 0.58 mm, OD: 0.81 mm) to hold the pendant drops. The two needles are placed at the two opposite sides of the chamber, with one of them connected to a picomotor (Newport, a stepper motor with a rotary encoder controlled by PC1), and the other one connected to a manually operated micrometer. Each needle has an agarose gel at its tip for anchoring the pendant drops and preventing dripping. The glass chamber can be mounted atop the inverted microscope stage, and the droplet profiles are obtained by FLIM via PC2.
For DIB formation and separation experiments, 0.55 mL of the oil solution is gently added into the chamber. Then, outside the chamber, 1 \(\mu\)L of 1 M KCl solution with Flipper-TR dye (20 \(\mu\)M) is pipetted onto the agarose on each needle to form the pendant drops, and the micrometers on both the left and right droplets are adjusted to make sure that the two droplets can be immersed in the oil phase. Then, both droplets are put into the glass chamber, and the height of the right droplet is adjusted to make sure that both droplets are placed along the same horizontal axis. Finally, the right droplet is moved towards the left droplet so that the edge distance between the two droplets is approximately 0.3 mm. The lens is adjusted as well to ensure the two droplets are in focus.
To form the bilayers, the two droplets are aged for 10 minutes in order to allow the formation of the lipid monolayers at the oil-water interfaces. After aging, the picomotor is used to slowly push the left drop against the right pendant drop for approximately 0.35 mm at 30 \(\mu\)m/s, and then held in place. The thin liquid film between the pendant and sessile droplets drains until the lipid monolayers are close enough to form a bilayer.
To conduct the bilayer separation experiments, we again used the picomotor to pull the left drop away from the pendant drop in a step-wise manner at a velocity of 0.05 mm/s for one second. The step size (\(d\)) has a constant value of 0.05 mm, resulting in a step strain of \(d/R_{a}=0.067\), where \(R_{a}=0.75\) mm is the apex curvature radius of the pendant drop. After each separation step, we allowed the bilayer to relax for 60 seconds. This process is continued until the two droplets separate completely. The entire process of bilayer formation and separation was captured by FLIM, and was subsequently analyzed utilizing FLIMfit and ImageJ to recover membrane stresses. All experiments in this paper were performed at room temperature. A schematic diagram of the above-mentioned bilayer formation and separation under the microscope is shown in Fig. 1c.
### MD simulations
Both DPhPC (PC) and DPhPS (PS) lipid bilayer models were constructed using the Membrane Builder from CHARMM-GUI (48). A total of 72 DPhPC lipids was generated with a hydration of approximately 90 water molecules per lipid, and the DPhPC topology was obtained from Klauda et al. (49). The DPhPS coordinates and topology were modified based on a combination of the existing PS head-group and lipid-chain topologies for the CHARMM36 force field. We chose sodium cations as the counter-ions to neutralize the DPhPS lipid bilayers. Each membrane model consists of 72 lipids with a hydration of 90 water molecules per lipid. The initial membrane models have dimensions of 54 Å \(\times\) 54 Å \(\times\) 102 Å, with a total of 30528 atoms for the DPhPC lipid bilayers and a total of 30096 atoms for the DPhPS lipid bilayers.
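As a consistency check on the reported system sizes (our arithmetic; it assumes the standard molecular formula C\({}_{48}\)H\({}_{96}\)NO\({}_{8}\)P for DPhPC, i.e. 154 atoms per lipid, and three-site water):

\[72\times 154+72\times 90\times 3=11088+19440=30528\ \text{atoms},\]

matching the stated DPhPC total; the DPhPS count is likewise consistent with 147 atoms per (deprotonated) DPhPS lipid plus the 72 sodium counter-ions, \(72\times 147+72+19440=30096\).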
After the membrane systems were constructed, we performed all-atom molecular dynamics (MD) simulations of the DPhPC and DPhPS lipid bilayers using the NAMD software with the CHARMM36 lipid force field (50) and the modified TIP3P water model (51). The initial membrane systems were first minimized to eliminate bad atomic contacts, and then simulated for 600 ns at 298 K. Periodic boundary conditions were applied in the X, Y, and Z directions of the membrane systems. During the MD simulations, a tetragonal unit cell was maintained to keep equal dimensions in the X and Y directions along the membrane plane, while the Z dimension changed independently under the NPT ensemble. Langevin dynamics were applied to maintain the constant temperature of 298 K for each membrane system, and the Langevin-piston algorithm was applied to maintain the constant pressure of 1.01325 bar, i.e., atmospheric pressure at sea level. As an efficient full electrostatics method, the particle-mesh Ewald (PME) method was used to calculate the long-range electrostatic interactions, with grid sizes of 60 \(\times\) 60 \(\times\) 105 for the membrane systems. A time step of 2 fs was used for the MD simulations, and atomic trajectories were collected every 100 ps for the subsequent statistical analysis. When the MD simulations reached an equilibrium state, the lipid molecule trajectories were used to calculate the membrane diffusivity (see _Supplementary Materials_ for more details).
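A minimal sketch of the diffusivity analysis is shown below; this is our illustration rather than the authors' script, and it assumes unwrapped in-plane lipid center-of-mass trajectories have already been extracted.

```python
# Sketch: lateral mean-square displacement (MSD) and diffusion coefficient
# from an unwrapped trajectory. `xy` has shape (n_frames, n_lipids, 2) in nm,
# sampled every dt_ns nanoseconds (assumed inputs, for illustration only).
import numpy as np

def lateral_msd(xy, max_lag):
    """MSD(lag), averaged over lipids and time origins (in-plane x, y only)."""
    lags = np.arange(1, max_lag)
    msd = np.empty(lags.size)
    for i, lag in enumerate(lags):
        disp = xy[lag:] - xy[:-lag]                   # displacements at this lag
        msd[i] = np.mean(np.sum(disp ** 2, axis=-1))  # average over origins/lipids
    return lags, msd

def diffusion_coefficient(lags, msd, dt_ns):
    """Einstein relation in 2D: MSD = 4*D*t; fit the linear regime."""
    slope = np.polyfit(lags * dt_ns, msd, 1)[0]
    return slope / 4.0  # nm^2/ns

# Toy usage with a random-walk trajectory of 72 "lipids":
rng = np.random.default_rng(0)
xy = np.cumsum(rng.normal(0.0, 0.05, size=(6000, 72, 2)), axis=0)
lags, msd = lateral_msd(xy, max_lag=600)
print(diffusion_coefficient(lags, msd, dt_ns=0.1))
```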
### Interfacial rheology
Interfacial shear rheology of lipids at the oil-water (KCl) interface was measured using a Discovery HR-3 rheometer (TA Instruments) with a Du Noüy ring made of platinum/iridium wires (CSC Scientific, Fairfax, VA, catalog no. 70542000) (8). Before each experiment, the Du Noüy ring was rinsed with ethanol and water and flame treated to remove organic contaminants. Before the experiments, 10 mM lipid in oil followed by 1 M KCl solution was loaded into a double-wall Couette flow cell with an internal Teflon cylinder and an external glass beaker. The Du Noüy ring was carefully lowered and positioned at the oil-water interface. A time sweep was performed at a strain of 1% (within the linear regime) and a frequency of 0.05 Hz (sufficiently low that effects due to instrument inertia are not significant). The loss modulus was recorded 5 minutes after the creation of the oil-water interface.
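For reference, the loss modulus quoted here is the standard out-of-phase component of the interfacial stress in small-amplitude oscillatory shear (a textbook relation, not specific to this instrument): for an imposed strain \(\gamma(t)=\gamma_{0}\sin\omega t\),

\[\sigma(t)=\gamma_{0}\left(G^{\prime}\sin\omega t+G^{\prime\prime}\cos\omega t\right),\qquad G^{\prime\prime}=\frac{\sigma_{0}}{\gamma_{0}}\sin\delta,\]

where \(\delta\) is the phase lag between stress and strain. Since \(G^{\prime\prime}\) measures viscous dissipation, its inverse serves as the fluidity proxy plotted in Fig. 3f.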
## Acknowledgments
We thank Youngbin Lim from the Cell Science Image Facility for his help with FLIM and Andrea Merino from the Biofisika Institute for help with the FLIMfit software. AC acknowledges funding from MCIU, PID2019-111096GA-I00; MCIU/AEI/FEDER MINECOG19/P66, RYC2018-024686-I, and Basque Government IT1625-22
|
2306.03070 | Reductive Shafarevich Conjecture | In this paper, we prove the holomorphic convexity of the covering of a
complex projective normal variety $X$, which corresponds to the intersection
of kernels of reductive representations $\rho:\pi_1(X)\to {\rm
GL}_{N}(\mathbb{C})$, therefore answering a question by Eyssidieux, Katzarkov,
Pantev, and Ramachandran in 2012. It is worth noting that Eyssidieux had
previously proven this result in 2004 when $X$ is smooth. While our approach
follows the general strategy employed in Eyssidieux's proof, it introduces
several improvements and simplifications. Notably, it avoids the necessity of
using the reduction mod $p$ method in Eyssidieux's original proof.
Additionally, we construct the Shafarevich morphism for complex reductive
representations of fundamental groups of complex quasi-projective varieties
unconditionally, and prove its algebraic nature at the function field level. | Ya Deng, Katsutoshi Yamanoi, Ludmil Katzarkov | 2023-06-05T17:48:31Z | http://arxiv.org/abs/2306.03070v2 |
###### Abstract
In this paper, we present a more accessible proof of the reductive Shafarevich conjecture, established by Eyssidieux in 2004, along with several generalizations. In a nutshell, we prove the holomorphic convexity of the covering of a projective normal variety \(X\) which corresponds to the intersection of kernels of reductive representations \(\varrho:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\). Our approach avoids the necessity of using the reduction mod \(p\) method employed in Eyssidieux's original proof. Moreover, we extend the theorems to singular normal varieties under a weaker condition of absolutely constructible subsets, thereby answering a question by Eyssidieux, Katzarkov, Pantev, and Ramachandran. Additionally, we construct the Shafarevich morphism for reductive representations over quasi-projective varieties unconditionally, and prove its algebraic nature at the function field level.
Key words and phrases: Reductive Shafarevich conjecture, Harmonic mapping to Bruhat-Tits buildings, period mappings and period domain, variation of Hodge structures, Holomorphic convexity.
3.1 Factorizing through non-rigidity
3.2 Infinite monodromy at infinity
3.3 Construction of Shafarevich morphism (I)
3.4 Construction of Shafarevich morphism (II)
3.5 On the algebraicity of the Shafarevich morphism via \(L^{2}\)-methods
4 Proof of the reductive Shafarevich conjecture
4.1 Reduction map of representation into algebraic tori
4.2 Some criterion for representation into tori
4.3 Eyssidieux-Simpson Lefschetz theorem and its application
4.4 A factorization theorem
4.5 Constructing Kahler classes via representations into non-archimedean fields
4.6 Holomorphic convexity associated with absolutely constructible subsets
4.7 Universal covering is Stein
4.8 Shafarevich conjecture for projective normal varieties
A.1 Absolutely constructible subset (II)
A.2 Reductive Shafarevich conjecture for normal projective varieties
## 0 Introduction
### Shafarevich conjecture
In his famous textbook "Basic Algebraic Geometry" [23, p 407], Shafarevich raised the following tantalizing conjecture.
**Conjecture 0.1** (Shafarevich): _Let \(X\) be a complex projective variety. Then its universal covering is holomorphically convex._
Recall that a complex normal space \(X\) is _holomorphically convex_ if it satisfies the following condition: for each compact \(K\subset X\), its _holomorphic hull_
\[\left\{x\in X\mid|f(x)|\leq\sup_{K}|f|,\forall f\in\mathcal{O}(X)\right\},\]
is compact. \(X\) is _Stein_ if it is holomorphically convex and holomorphically separable, i.e. for distinct \(x\) and \(y\) in \(X\), there exists \(f\in\mathcal{O}(X)\) such that \(f(x)\neq f(y)\). By the Cartan-Remmert theorem, a complex space \(X\) is holomorphically convex if and only if it admits a proper surjective holomorphic mapping \(\phi\) onto some Stein space.
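As a standard illustration of these notions, take \(X=\mathbb{C}\) and let \(K\) be the unit circle. By the maximum principle,

\[\left\{z\in\mathbb{C}\;\middle|\;|f(z)|\leq\sup_{K}|f|\ \text{for all}\ f\in\mathcal{O}(\mathbb{C})\right\}=\overline{\mathbb{D}},\]

which is compact; hence \(\mathbb{C}\), and more generally any Stein manifold, is holomorphically convex. For the universal coverings appearing in Conjecture 0.1, the compactness of such hulls is precisely the point at issue.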
The study of Conjecture 0.1 for smooth projective surfaces has been a subject of extensive research since the mid-1980s. Gurjar-Shastri [10] and Napier [29] initiated this investigation, while Kollar [11] and Campana [14] independently explored the conjecture in the 1990s, employing the tools of Hilbert schemes and Barlet cycle spaces. In 1994, Katzarkov discovered that non-abelian Hodge theories developed by Simpson [29] and Gromov-Schoen [10] can be utilized to prove Conjecture 0.1. His initial work [17] demonstrated Conjecture 0.1 for projective varieties with nilpotent fundamental groups. Shortly thereafter, he and Ramachandran [13] successfully established Conjecture 0.1 for smooth projective surfaces whose fundamental groups admit a faithful Zariski-dense representation in a reductive complex algebraic group. Building upon the ideas presented in [13] and [14], Eyssidieux further developed non-abelian Hodge theoretic arguments in higher dimensions. In [10] he proved that Conjecture 0.1 holds for any _smooth_ projective variety whose fundamental group possesses a faithful representation that is Zariski dense in a reductive complex algebraic group. This result is commonly referred to as the "_Reductive Shafarevich conjecture_". It is worth emphasizing that the work of Eyssidieux [10] is not only ingenious but also highly significant in subsequent research. It serves as a foundational basis for advancements in the linear Shafarevich conjecture [1] and the exploration of compact Kahler cases [18]. More recently,
there have been significant advancements in the quasi-projective setting by Green-Griffiths-Katzarkov [10] and Aguilar-Campana [1], particularly when considering the case of nilpotent fundamental groups.
### Main theorems
The aim of this paper is to present a more comprehensive and complete proof of Eyssidieux's results on the reductive Shafarevich conjecture and its associated problems, as originally discussed in [11]. Additionally, we aim to extend these results to the cases of quasi-projective and singular varieties. Our first main result is the _unconditional_ construction of the _Shafarevich morphism_ for reductive representations. Additionally, we establish the algebraicity of the Shafarevich morphism at the function field level.
**Theorem A** (\(=\)Theorems 3.39 and 3.46).: _Let \(X\) be a quasi-projective normal variety, and let \(\varrho:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\) be a reductive representation. Then_
(i) _there exists a dominant holomorphic map_ \(\operatorname{sh}_{\varrho}:X\to\operatorname{Sh}_{\varrho}(X)\) _to a complex normal space_ \(\operatorname{Sh}_{\varrho}(X)\) _whose general fibers are connected, such that for any closed subvariety_ \(Z\subset X\)_,_ \(\varrho(\operatorname{Im}[\pi_{1}(Z^{\operatorname{norm}})\to\pi_{1}(X)])\) _is finite if and only if_ \(\operatorname{sh}_{\varrho}(Z)\) _is a point._

_Furthermore, when \(X\) is smooth, after we replace \(X\) by some finite etale cover and \(\varrho\) by its pullback over the cover, there exists another smooth quasi-projective variety \(X^{\prime}\) containing \(X\) as a Zariski dense open subset such that:_

(ii) \(\varrho\) _extends to a reductive representation_ \(\varrho_{0}:\pi_{1}(X^{\prime})\to\operatorname{GL}_{N}(\mathbb{C})\)_;_

(iii) _the Shafarevich morphism_ \(\operatorname{sh}_{\varrho_{0}}:X^{\prime}\to\operatorname{Sh}_{\varrho_{0}}(X^{\prime})\) _exists, which is a holomorphic proper fibration;_

(iv) \(\operatorname{sh}_{\varrho}=\operatorname{sh}_{\varrho_{0}}|_{X}\)_; namely, we have the following commutative diagram:_

\[\begin{array}{ccc}X&\hookrightarrow&X^{\prime}\\ {\scriptstyle\operatorname{sh}_{\varrho}}\big\downarrow&&\big\downarrow{\scriptstyle\operatorname{sh}_{\varrho_{0}}}\\ \operatorname{Sh}_{\varrho}(X)&\longrightarrow&\operatorname{Sh}_{\varrho_{0}}(X^{\prime})\end{array}\]

(v) _There exists a bimeromorphic map_ \(h:\operatorname{Sh}_{\varrho}(X)\dasharrow Y\) _to a quasi-projective normal variety_ \(Y\)_._

(vi) _The composition_ \(h\circ\operatorname{sh}_{\varrho}:X\dasharrow Y\) _is a rational map._
The holomorphic map \(\operatorname{sh}_{\varrho}:X\to\operatorname{Sh}_{\varrho}(X)\) that satisfies the properties in Theorem A.(i) will be called the _Shafarevich morphism_ of \(\varrho\).
The proof of Theorem A.(i) relies on a more technical result but with richer information, cf. Theorem 3.20.
**Remark 0.2**.: It is noticeable that Theorems A.(i) to A.(iv) extend the previous theorems of Griffiths [12] when \(\varrho\) underlies a \(\mathbb{Z}\)-variation of Hodge structures. In this case, the representation \(\varrho_{0}\) in Theorem A.(ii) is constructed in [12, Theorem 9.5]. Additionally, Griffiths proved in [12, Theorem 9.6] that the period mapping \(p:X^{\prime}\to\mathscr{D}/\Gamma\) associated with \(\varrho_{0}\) is proper, where \(\mathscr{D}\) represents the period domain of the \(\mathbb{C}\)-VHS, and \(\Gamma\) denotes the monodromy group of \(\varrho_{0}\). It can be easily verified that the Shafarevich morphism \(\operatorname{sh}_{\varrho_{0}}:X^{\prime}\to\operatorname{Sh}_{\varrho_{0}}(X ^{\prime})\) corresponds to the Stein factorization of the period mapping \(p:X^{\prime}\to\mathscr{D}/\Gamma\), and that Theorem A.(iv) holds.
We conjecture that \(\operatorname{Sh}_{\varrho_{0}}(X^{\prime})\) is quasi-projective and \(\operatorname{sh}_{\varrho_{0}}\) is an algebraic morphism (cf. Conjecture 3.44). Our conjecture is motivated by Griffiths' conjecture, which predicted the same result when \(\varrho\) underlies a \(\mathbb{Z}\)-VHS. Consequently, we can interpret the results presented in Theorems A.(v) and A.(vi) as supporting evidence for our conjecture at the function field level. It is worth noting that Sommese proved Theorems A.(v) and A.(vi) when \(\varrho\) underlies a \(\mathbb{Z}\)-VHS in [14], utilizing \(L^{2}\)-methods. We adopt the same approach as in [14] to prove Theorems A.(v) and A.(vi). Griffiths' conjecture was recently proved by Bakker-Brunebarbe-Tsimerman [1] using o-minimal geometry.
Our second main result focuses on the holomorphic convexity of topological Galois coverings associated with reductive representations of fundamental groups within _absolutely constructible subsets_ of character varieties \(M_{\mathrm{B}}(\pi_{1}(X),\mathrm{GL}_{N})\), where \(X\) represents a projective normal variety.
**Theorem B** (=Theorems 4.25 and A.3).: _Let \(X\) be a projective normal variety, and let \(\mathfrak{C}\) be an absolutely constructible subset of \(M_{\mathrm{B}}(\pi_{1}(X),\mathrm{GL}_{N})(\mathbb{C})\) as defined in Definitions 1.17 and A.1. We assume that \(\mathfrak{C}\) is defined over \(\mathbb{Q}\). Set \(H:=\cap_{\varrho}\ker\varrho\), where \(\varrho:\pi_{1}(X)\to\mathrm{GL}_{N}(\mathbb{C})\) ranges over all reductive representations such that \([\varrho]\in\mathfrak{C}(\mathbb{C})\). Let \(\widetilde{X}\) be the universal covering of \(X\), and denote \(\widetilde{X}_{\mathfrak{C}}:=\widetilde{X}/H\). Then the complex space \(\widetilde{X}_{\mathfrak{C}}\) is holomorphically convex. In particular, we have_
1. _the covering of_ \(X\) _corresponding to the intersections of the kernels of all reductive representations of_ \(\pi_{1}(X)\) _in_ \(\mathrm{GL}_{N}(\mathbb{C})\) _is holomorphically convex;_
2. _if_ \(\pi_{1}(X)\) _is a subgroup of_ \(\mathrm{GL}_{N}(\mathbb{C})\) _whose Zariski closure is reductive, then the universal covering_ \(\widetilde{X}\) _of_ \(X\) _is holomorphically convex._
For large representations, we have the following result.
**Theorem C** (=Theorems 4.26 and A.5).: _Let \(X\) and \(\mathfrak{C}\) be as described in Theorem B. If \(\mathfrak{C}\) is large, meaning that for any closed subvariety \(Z\) of \(X\), there exists a reductive representation \(\varrho:\pi_{1}(X)\to\mathrm{GL}_{N}(\mathbb{C})\) such that \([\varrho]\in\mathfrak{C}(\mathbb{C})\) and \(\varrho(\mathrm{Im}[\pi_{1}(Z^{\mathrm{norm}})\to\pi_{1}(X)])\) is infinite, then all intermediate coverings of \(X\) between \(\widetilde{X}\) and \(\widetilde{X}_{\mathfrak{C}}\) are Stein spaces._
In addition to employing new methods, our proof of Theorems B and C yields a stronger result than [10] in two respects:
1. The definition of absolutely constructible subsets (cf. Definitions 1.17 and A.1) in our proof is more general than the one provided in [10]. Our definition allows for a broader range of applications, including the potential extension of Conjecture 0.1 to quasi-projective varieties, which is currently our ongoing project.
2. Our result extends to the case where \(X\) is a singular variety, whereas in [10], the result is limited to smooth varieties. This expansion of our result answers a question raised by Eyssidieux, Katzarkov, Pantev and Ramachandran in their celebrated work on linear Shafarevich conjecture for smooth projective varieties (cf. [1, p. 1549]).
We remark that Theorem C is not a direct consequence of Theorem B. It is important to note that Theorem C holds significant practical value in the context of singular varieties. Indeed, finding a large representation over a _smooth_ projective variety can be quite difficult. In practice, the usual approach involves constructing large representations using the Shafarevich morphism in Theorem A, resulting in large representations of fundamental groups of _singular normal_ varieties. Therefore, the extension of Theorem C to singular varieties allows for more practical applicability.
### Comparison with [10] and Novelty
It is worth noting that Eyssidieux [10] does not explicitly require absolutely constructible subsets \(\mathfrak{C}\) to be defined over \(\mathbb{Q}\), although it may seem to be an essential condition (cf. Remark 3.13). Regarding Theorem A, it represents a new result that significantly builds upon our previous work [13]. While Theorem C is not explicitly stated in [10], it should be possible to derive it for smooth projective varieties \(X\) based on the proof provided therein. However, it is worth noting that the original proof in [10] is known for its notoriously difficult and involved nature, with certain aspects outlined without sufficient detail. One of the main goals of this paper is to provide a relatively accessible proof for Theorem B by incorporating more detailed explanations. We draw inspiration from some of the methods introduced in our recent work [13], which aids in presenting a more comprehensible proof. Our proofs of Theorems B and C require us to apply Eyssidieux's Lefschetz theorem from [10]. We also owe many ideas to Eyssidieux's work in [10] and frequently draw upon them without explicit citation.
Despite this debt, there are some novelties in our approach, including:
* An avoidance of the reduction mod \(p\) method used in [10].
* A new and more canonical construction of the Shafarevich morphism that incorporates both rigid and non-rigid cases, previously treated separately in [10].
* The construction of the Shafarevich morphism for reductive representations over quasi-projective varieties, along with a proof of its algebraic property at the function field level.
* A detailed exposition of the application of Simpson's absolutely constructible subsets to the proof of holomorphic convexity in Theorems B and C (cf. SS 4.1 and Theorem 4.21). This application was briefly outlined in [10, Proof of Proposition 5.4.6], but we present a more comprehensive approach, providing complete details.
The main part of this paper was completed in February 2023 and was subsequently shared with several experts in the field in April for feedback. During the revision process, it came to our attention that Brunebarbe [14] recently announced a result similar to Theorem A.(i). In [14, Theorem B], Brunebarbe claims the existence of the Shafarevich morphism under a stronger assumption of infinite monodromy at infinity and torsion-freeness of the representation, and he does not address the algebraicity of Shafarevich morphisms. It seems that some crucial aspects of the arguments in [14] need to be carefully addressed, particularly those related to non-abelian Hodge theories, which may have been overlooked (cf. Remark 3.36).
## Convention and notation
In this paper, we use the following conventions and notations:
* Quasi-projective varieties and their closed subvarieties are assumed to be positive-dimensional and irreducible unless specifically mentioned otherwise. Zariski closed subsets, however, may be reducible.
* Fundamental groups are always referred to as topological fundamental groups.
* If \(X\) is a complex space, its normalization is denoted by \(X^{\text{norm}}\).
* The bold Greek letter \(\boldsymbol{\varrho}\) (or \(\boldsymbol{\tau}\), \(\boldsymbol{\sigma}\), ...) denotes a finite family of reductive representations \(\{\varrho_{i}:\pi_{1}(X)\to\operatorname{GL}_{N}(K_{i})\}_{i=1,\ldots,k}\), where each \(K_{i}\) is some non-archimedean local field or the complex number field.
* A _proper holomorphic fibration_ between complex spaces \(f:X\to Y\) is _surjective_ and each fiber of \(f\) is _connected_.
* Let \(X\) be a compact normal Kahler space and let \(V\subset H^{0}(X,\Omega^{1}_{X})\) be a \(\mathbb{C}\)-linear subspace. The _generic rank_ of \(V\) is the largest integer \(r\) such that \(\operatorname{Im}[\Lambda^{r}V\to H^{0}(X,\Omega^{r}_{X})]\neq 0\).
* For a quasi-projective normal variety \(X\), we denote by \(M_{\text{B}}(X,N)\) the character variety of the representations of \(\pi_{1}(X)\) into \(\operatorname{GL}_{N}\). For any linear representation \(\varrho:\pi_{1}(X)\to\operatorname{GL}_{N}(K)\), where \(K\) is some extension of \(\mathbb{Q}\), we denote by \([\varrho]\in M_{\text{B}}(X,N)(K)\) the equivalence class of \(\varrho\).
* \(\mathbb{D}\) denotes the unit disk in \(\mathbb{C}\).
## Acknowledgments
We would like to thank Daniel Barlet, Michel Brion, Frederic Campana, Philippe Eyssidieux, Ludmil Katzarkov, Janos Kollar, Mihai Paun, Carlos Simpson, Botong Wang and Mingchen Xia for answering our questions. The impact of Eyssidieux's work [10] on this paper cannot be overstated. This work was completed during YD's visit at the University of Miami in February. He would like to extend special thanks to Ludmil Katzarkov for the warm invitation and fruitful discussions that ultimately led to the collaborative development of the joint appendix.
## 1 Technical preliminary
### Admissible coordinates
The following definition of _admissible coordinates_ introduced in [10] will be used throughout the paper.
**Definition 1.1** (Admissible coordinates): Let \(X\) be a complex manifold and let \(D\) be a simple normal crossing divisor. Let \(x\) be a point of \(D\), and let \(\{D_{j}\}_{j=1,\dots,\ell}\) be the components of \(D\) containing \(x\). An _admissible coordinate_ centered at \(x\) is the tuple \((U;z_{1},\dots,z_{n};\varphi)\) (or simply \((U;z_{1},\dots,z_{n})\) if no confusion arises) where
* \(U\) is an open subset of \(X\) containing \(x\).
* there is a holomorphic isomorphism \(\varphi:U\to\mathbb{D}^{n}\) so that \(\varphi(D_{j}\cap U)=(z_{j}=0)\) for any \(j=1,\dots,\ell\).
### Tame and pure imaginary harmonic bundles
Let \(\overline{X}\) be a compact complex manifold, \(D=\sum_{i=1}^{\ell}D_{i}\) be a simple normal crossing divisor, \(X=\overline{X}\backslash D\) be the complement of \(D\) and \(j:X\to\overline{X}\) be the inclusion.
**Definition 1.2** (Higgs bundle): _A Higgs bundle_ on \(X\) is a pair \((E,\theta)\) where \(E\) is a holomorphic vector bundle and \(\theta:E\to E\otimes\Omega_{X}^{1}\) is an \(\operatorname{End}(E)\)-valued holomorphic one-form, called the Higgs field, satisfying \(\theta\wedge\theta=0\).
Let \((E,\theta)\) be a Higgs bundle over a complex manifold \(X\). Suppose that \(h\) is a smooth hermitian metric of \(E\). Denote by \(\nabla_{h}\) the Chern connection of \((E,h)\), and by \(\theta_{h}^{\dagger}\) the adjoint of \(\theta\) with respect to \(h\). We write \(\theta^{\dagger}\) for \(\theta_{h}^{\dagger}\) for short if no confusion arises. The metric \(h\) is _harmonic_ if the connection \(\nabla_{h}+\theta+\theta^{\dagger}\) is flat.
**Definition 1.3** (Harmonic bundle): _A harmonic bundle on \(X\) is a Higgs bundle \((E,\theta)\) endowed with a harmonic metric \(h\)._
Let \((E,\theta,h)\) be a harmonic bundle on \(X\). Let \(p\) be any point of \(D\), and \((U;z_{1},\dots,z_{n})\) be an admissible coordinate centered at \(p\). On \(U\), we have the description:
\[\theta=\sum_{j=1}^{\ell}f_{j}d\log z_{j}+\sum_{k=\ell+1}^{n}f_{k}dz_{k}. \tag{1.1}\]
**Definition 1.4** (Tameness): _Let \(t\) be a formal variable. For any \(j=1,\dots,\ell\), the characteristic polynomial \(\det(f_{j}-t)\in\mathcal{O}(U\backslash D)[t]\) is a polynomial in \(t\) whose coefficients are holomorphic functions on \(U\backslash D\). If these coefficients extend to holomorphic functions over \(U\) for all \(j\), then the harmonic bundle is tame at \(p\). A harmonic bundle is tame if it is tame at each point._
For a tame harmonic bundle \((E,\theta,h)\) over \(\overline{X}\backslash D\), we prolong \(E\) over \(\overline{X}\) by a sheaf of \(\mathcal{O}_{\overline{X}}\)-module \({}^{\circ}\!E_{h}\) as follows:
\[{}^{\circ}\!E_{h}(U)=\{\sigma\in\Gamma(U\backslash D,E|_{U\backslash D})\mid |\sigma|_{h}\lesssim\prod_{i=1}^{\ell}|z_{i}|^{-\varepsilon}\ \text{ for all }\varepsilon>0\}.\]
In [10] Mochizuki proved that \({}^{\circ}\!E_{h}\) is locally free and that \(\theta\) extends to a morphism
\[{}^{\circ}\!E_{h}\to{}^{\circ}\!E_{h}\otimes\Omega_{\overline{X}}^{1}(\log D),\]
which we still denote by \(\theta\).
**Definition 1.5** (Pure imaginary): _Let \((E,\theta,h)\) be a tame harmonic bundle on \(\overline{X}\backslash D\). For each component \(D_{i}\) of \(D\), the residue \(\operatorname{Res}_{D_{i}}\!\theta\) induces an endomorphism of \({}^{\circ}\!E_{h}|_{D_{i}}\). Its characteristic polynomial has constant coefficients, and thus the eigenvalues are all constant. We say that \((E,\theta,h)\) is pure imaginary if for each component \(D_{i}\) of \(D\), the eigenvalues of \(\operatorname{Res}_{D_{i}}\!\theta\) are all pure imaginary._
One can verify that Definition 1.5 does not depend on the compactification \(\overline{X}\) of \(\overline{X}\backslash D\).
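For instance, in rank one, take \(E=\mathcal{O}\) with Higgs field

\[\theta=\lambda\,\frac{dz_{1}}{z_{1}},\qquad\lambda\in\mathbb{C}\ \text{constant};\]

then \(\operatorname{Res}_{D_{1}}\theta=\lambda\) along \(D_{1}=(z_{1}=0)\), so the harmonic bundle is pure imaginary along \(D_{1}\) exactly when \(\lambda\in i\mathbb{R}\).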
**Theorem 1.6** (Mochizuki [14, Theorem 25.21]).: _Let \(\overline{X}\) be a projective manifold and let \(D\) be a simple normal crossing divisor on \(\overline{X}\). Let \((E,\theta,h)\) be a tame pure imaginary harmonic bundle on \(X:=\overline{X}\backslash D\). Then the flat bundle \((E,\nabla_{h}+\theta+\theta^{\dagger})\) is semi-simple. Conversely, if \((V,\nabla)\) is a semisimple flat bundle on \(X\), then there is a tame pure imaginary harmonic bundle \((E,\theta,h)\) on \(X\) so that \((E,\nabla_{h}+\theta+\theta^{\dagger})\simeq(V,\nabla)\). Moreover, when \(\nabla\) is simple, any such harmonic metric \(h\) is unique up to positive multiplication._
The following important theorem by Mochizuki will be used throughout the paper.
**Theorem 1.7**.: _Let \(f:X\to Y\) be a morphism of complex quasi-projective varieties, where \(Y\) is smooth and \(X\) is normal. For any reductive representation \(\varrho:\pi_{1}(Y)\to\operatorname{GL}_{N}(K)\), where \(K\) is a non-archimedean local field of characteristic zero or a complex number field, the pullback \(f^{*}\varrho:\pi_{1}(X)\to\operatorname{GL}_{N}(K)\) is also reductive._
Proof.: If \(K\) is a non-archimedean local field of characteristic zero, then there is an abstract embedding \(K\hookrightarrow\mathbb{C}\). Therefore, it suffices to prove the theorem for \(K=\mathbb{C}\).
Let \(\mu:X^{\prime}\to X\) be a desingularization of \(X\). By [14, Theorem 25.30], \((f\circ\mu)^{*}\varrho:\pi_{1}(X^{\prime})\to\operatorname{GL}_{N}(\mathbb{C})\) is reductive. Since \(\mu_{*}:\pi_{1}(X^{\prime})\to\pi_{1}(X)\) is surjective, it follows that \((f\circ\mu)^{*}\varrho(\pi_{1}(X^{\prime}))=f^{*}\varrho(\pi_{1}(X))\). Hence, \(f^{*}\varrho\) is also reductive.
### Positive currents on normal complex spaces
For this subsection, we refer to [10] for more details.
**Definition 1.8**.: Let \(Z\) be an irreducible normal complex space. An upper semi-continuous function \(\phi:Z\to\mathbb{R}\cup\{-\infty\}\) is plurisubharmonic if it is not identically \(-\infty\) and every point \(z\in Z\) has a neighborhood \(U\) embeddable as a closed subvariety of the unit ball \(B\) of some \(\mathbb{C}^{M}\) in such a way that \(\left.\phi\right|_{U}\) extends to a psh function on \(B\).
A closed positive current with continuous potentials \(\omega\) on \(Z\) is specified by data \(\{U_{i},\phi_{i}\}_{i}\) consisting of an open covering \(\{U_{i}\}_{i}\) of \(Z\) and continuous psh functions \(\phi_{i}\) defined on \(U_{i}\) such that \(\phi_{i}-\phi_{j}\) is pluriharmonic on \(U_{i}\cap U_{j}\).
A closed positive current with continuous potentials on \(Z\) is a Kahler form iff its local potentials can be chosen smooth and strongly plurisubharmonic.
A psh function \(\phi\) on \(Z\) is said to satisfy \(\operatorname{dd}^{\mathrm{c}}\!\phi\geq\omega\) iff \(\phi-\phi_{i}\) is psh on \(U_{i}\) for every \(i\).
In other words, a closed positive current with continuous potentials is a section of the sheaf \(C^{0}\cap PSH_{Z}/\operatorname{Re}\left(O_{Z}\right)\).
**Definition 1.9**.: Assume \(Z\) to be compact. The class of a closed positive current with continuous potentials is its image in \(H^{1}\left(Z,\operatorname{Re}\left(O_{Z}\right)\right)\).
A class in \(H^{1}\left(Z,\operatorname{Re}\left(O_{Z}\right)\right)\) is said to be Kahler if it is the image of a Kahler form.
To make contact with the usual terminology, observe that if \(Z\) is a compact Kahler manifold then \(H^{1}\left(Z,\operatorname{Re}\left(O_{Z}\right)\right)=H^{1,1}(Z,\mathbb{R})\). Hence, by abuse of notation, we write \(H^{1,1}(Z,\mathbb{R})\) instead of \(H^{1}\left(Z,\operatorname{Re}\left(O_{Z}\right)\right)\) in this paper.
**Lemma 1.10**.: _Let \(f:X\to Y\) be a Galois cover with Galois group \(G\), where \(X\) and \(Y\) are both irreducible normal complex spaces. Let \(T\) be a positive \((1,1)\)-current on \(X\) with continuous potential. Assume that \(T\) is invariant under \(G\). Then there is a closed positive \((1,1)\)-current \(S\) on \(Y\) with continuous potential such that \(T=f^{*}S\)._
Proof.: Since the statement is local, we may assume that \(T=\operatorname{dd}^{\mathrm{c}}\!\varphi\) such that \(\varphi\in C^{0}(X)\). Define a function on \(Y\) by
\[f_{*}\varphi(y):=\sum_{x\in f^{-1}(y)}\varphi(x)\]
where the preimages are counted with multiplicity. By [10, Proposition 1.13.(b)], we know that \(f_{*}\varphi\) is a psh function on \(Y\) and
\[\operatorname{dd}^{\mathrm{c}}\!f_{*}\varphi=f_{*}T.\]
One can see that \(f_{*}\varphi\) is also continuous. Define a current \(S:=\frac{1}{\deg f}f_{*}T\). Since \(T\) is \(G\)-invariant, it follows that \(f^{*}S=T\) outside the branch locus of \(f\). Since \(f^{*}S=\frac{1}{\deg f}\,\mathrm{d}\mathrm{d}^{\mathrm{c}}\big((f_{*}\varphi)\circ f\big)\), the potential of \(f^{*}S\) is continuous. It follows that \(f^{*}S=T\) over the whole of \(X\).
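To illustrate the descent construction in the simplest case, take the degree \(2\) cover \(f:\mathbb{C}\to\mathbb{C}\), \(f(z)=z^{2}\), with \(G=\mathbb{Z}/2\mathbb{Z}\) acting by \(z\mapsto-z\), and let \(T=\mathrm{dd^{c}}\varphi\) with \(\varphi\) a \(G\)-invariant continuous psh function. Then

\[(f_{*}\varphi)(w)=\varphi(\sqrt{w})+\varphi(-\sqrt{w}),\qquad S=\tfrac{1}{2}\,\mathrm{dd^{c}}f_{*}\varphi,\]

and \(f^{*}S=\tfrac{1}{2}\,\mathrm{dd^{c}}\big(\varphi(z)+\varphi(-z)\big)=\mathrm{dd^{c}}\varphi=T\) by the \(G\)-invariance of \(\varphi\).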
### Holomorphic forms on complex normal spaces
There are many ways to define holomorphic forms on complex normal spaces. For our purpose of the current paper, we use the following definition in [10].
**Definition 1.11**.: Let \(X\) be a normal complex space. Let \(\left(A_{i}\right)_{i\in I}\) be an open finite covering of \(X\) such that each subset \(A_{i}\) is an analytic subset of some open subset \(\Omega_{i}\subset\mathbb{C}^{N_{i}}\). The space of holomorphic \(p\)-forms, denoted by \(\Omega_{X}^{p}\), is defined by local restrictions of holomorphic \(p\)-forms on the sets \(\Omega_{i}\) above to \(A_{i}^{\mathrm{reg}}\), where \(A_{i}^{\mathrm{reg}}\) is the smooth locus of \(A_{i}\).
The following fact will be used throughout the paper.
**Lemma 1.12**.: _Let \(f:X\to Y\) be a holomorphic map between normal complex spaces. Then for any holomorphic \(p\)-form \(\omega\) on \(Y\), \(f^{*}\omega\) is a holomorphic \(p\)-form on \(X\)._
Proof.: By Definition 1.11, for any \(x\in X\), there exist
* a neighborhood \(A\) (resp. \(B\)) of \(x\) (resp. \(f(x)\)) such that \(A\) (resp. \(B\)) is an analytic subset of some open \(\Omega\subset\mathbb{C}^{m}\) (resp. \(\Omega^{\prime}\subset\mathbb{C}^{n}\)).
* a holomorphic map \(\tilde{f}:\Omega\to\Omega^{\prime}\) such that \(\tilde{f}|_{A}=f|_{A}\).
* A holomorphic \(p\)-form \(\tilde{\omega}\) on \(\Omega^{\prime}\) such that \(\omega=\tilde{\omega}|_{B}\).
Therefore, we can define \(f^{*}\omega|_{A}:=(\tilde{f}^{*}\tilde{\omega})|_{A}\). One can check that this does not depend on the choice of local embeddings of \(X\) and \(Y\).
### The criterion for Kahler classes
We will need the following extension, due to Das-Hacon-Paun [11], of the celebrated Demailly-Paun theorem [11] on the characterization of Kahler classes to compact normal Kahler spaces.
**Theorem 1.13** ([11, Corollary 2.39]).: _Let \(X\) be a compact normal Kahler space, \(\omega\) a Kahler form on \(X\), and \(\alpha\in H^{1,1}_{\mathrm{BC}}(X)\), then \(\alpha\) is Kahler if and only if \(\int_{V}\alpha^{k}\wedge\omega^{p-k}>0\) for every analytic \(p\)-dimensional subvariety \(V\subset X\) and for all \(0<k\leq p\)._
### Some criterion for Stein space
We require the following criterion for the Stein property of a topological Galois covering of a compact complex normal space.
**Proposition 1.14** ([11, Proposition 4.1.1]).: _Let \(X\) be a compact complex normal space and let \(\pi:\widetilde{X}^{\prime}\to X\) be some topological Galois covering. Let \(T\) be a positive current on \(X\) with continuous potential such that \(\{T\}\) is a Kahler class. Assume that there exists a continuous plurisubharmonic function \(\phi:\widetilde{X}^{\prime}\to\mathbb{R}_{\geq 0}\) such that \(\mathrm{d}\mathrm{d}^{\mathrm{c}}\phi\geq\pi^{*}T\). Then \(\widetilde{X}^{\prime}\) is a Stein space._
### Some facts on moduli spaces of rank 1 local systems
For this subsection we refer the readers to [12] for a systematic treatment. Let \(X\) be a smooth projective variety defined over a field \(K\subset\mathbb{C}\). Let \(M=M(X)\) denote the moduli space of complex local systems of rank one over \(X\). We consider \(M\) as a real analytic group under the operation of tensor product. There are three natural algebraic groups \(M_{\mathrm{B}}\), \(M_{\mathrm{DR}}\) and \(M_{\mathrm{Dol}}\) whose underlying real analytic groups are canonically isomorphic to \(M\). The first is the Betti moduli space \(M_{\mathrm{B}}:=\mathrm{Hom}(\pi_{1}(X),\mathbb{C}^{*})\). The second is the de Rham moduli space \(M_{\mathrm{DR}}\), which consists of pairs \((L,\nabla)\) where \(L\) is a holomorphic line bundle on \(X\) and \(\nabla\) is an integrable algebraic connection on \(L\). The last one, \(M_{\mathrm{Dol}}\), is the moduli space of rank one Higgs bundles on \(X\). Recall that \(\mathrm{Pic}^{\tau}(X)\) denotes the group of line bundles on \(X\) whose first Chern classes are torsion. We have
\[M_{\mathrm{Dol}}\,=\mathrm{Pic}^{\tau}(X)\times H^{0}(X,\Omega_{X}^{1})\]
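For example, if \(X\) is a smooth projective curve of genus \(g\), then

\[M_{\mathrm{B}}=\mathrm{Hom}(\pi_{1}(X),\mathbb{C}^{*})\cong(\mathbb{C}^{*})^{2g},\qquad M_{\mathrm{Dol}}=\mathrm{Pic}^{0}(X)\times H^{0}(X,\Omega_{X}^{1})\cong\mathrm{Jac}(X)\times\mathbb{C}^{g},\]

and all three moduli spaces are real-analytically isomorphic to \((\mathbb{C}^{*})^{2g}\).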
For any subset \(S\subset M\), let \(S_{\mathrm{B}}\), \(S_{\mathrm{DR}}\) and \(S_{\mathrm{Dol}}\) denote the corresponding subsets of \(M_{\mathrm{B}}\), \(M_{\mathrm{DR}}\) and \(M_{\mathrm{Dol}}\).
**Definition 1.15** (Triple torus).: A triple torus is a closed, connected real analytic subgroup \(N\subset M\) such that \(N_{\mathrm{B}},N_{\mathrm{DR}}\), and \(N_{\mathrm{Dol}}\) are algebraic subgroups defined over \(\mathbb{C}\). We say that a closed real analytic subspace \(S\subset M\) is a translate of a triple torus if there exists a triple torus \(N\subset M\) and a point \(v\in M\) such that \(S=\{v\otimes w,w\in N\}\). Note that, in this case, any choice of \(v\in S\) will do.
We say that a point \(v\in M\) is torsion if there exists an integer \(a>0\) such that \(v^{\otimes a}=1\). Let \(M^{\mathrm{tor}}\) denote the set of torsion points. Note that for a given integer \(a\), there are only finitely many solutions of \(v^{\otimes a}=1\). Hence, the points of \(M^{\mathrm{tor}}_{\mathrm{B}}\) are defined over \(\overline{\mathbb{Q}}\), and the points of \(M^{\mathrm{tor}}_{\mathrm{DR}}\) and \(M^{\mathrm{tor}}_{\mathrm{Dol}}\) are defined over \(\overline{K}\).
We say that a closed subspace \(S\) is a torsion translate of a triple torus if \(S\) is a translate of a triple torus \(N\) by an element \(v\in M^{\mathrm{tor}}\). This is equivalent to asking that \(S\) be a translate of a triple torus, and contain a torsion point.
Let \(A\) be the Albanese variety of \(X\) (which can be defined as \(H^{0}(X,\Omega^{1}_{X})^{*}/H_{1}(X,\mathbb{Z})\)). Let \(X\to A\) be the map from \(X\) into \(A\) given by integration (from a basepoint, which will be suppressed in the notation but assumed to be defined over \(\overline{K}\)). Pullback of local systems gives a natural map from \(M(A)\) to \(M(X)\), which is an isomorphism
\[M(A)\cong M^{0}(X),\]
where \(M^{0}(X)\) is the connected component of \(M(X)\) containing the trivial rank one local system. The Albanese variety \(A\) is defined over \(\overline{K}\). We recall the following result in [23, Lemma 2.1].
**Lemma 1.16** (Simpson).: _Let \(N\subset M\) be a closed connected subgroup such that \(N_{\mathrm{B}}\subset M_{\mathrm{B}}\) is complex analytic and \(N_{\mathrm{Dol}}\subset M_{\mathrm{Dol}}\) is an algebraic subgroup. Then there is a connected abelian subvariety \(P\subset A\), defined over \(\overline{K}\), such that \(N\) is the image in \(M\) of \(M(A/P)\). In particular, \(N\) is a triple torus in \(M\)._
### Absolutely constructible subsets
In this section we will recall some facts on _absolutely constructible subsets_ (resp. _absolutely closed subsets_) introduced by Simpson in [23, SS6] and later developed by Budur-Wang [20].
Let \(X\) be a smooth projective variety defined over a subfield \(\ell\) of \(\mathbb{C}\). Let \(G\) be a reductive group defined over \(\bar{\mathbb{Q}}\). The representation scheme of \(\pi_{1}(X)\) is an affine \(\bar{\mathbb{Q}}\)-algebraic scheme described by its functor of points:
\[R(X,G)(\mathrm{Spec}\:A):=\mathrm{Hom}(\pi_{1}(X),G(A))\]
for any \(\bar{\mathbb{Q}}\)-algebra \(A\). The character scheme of \(\pi_{1}(X)\) with values in \(G\) is the finite type affine scheme \(M_{\mathrm{B}}(X,G):=R(X,G)\mathbin{/\!/}G\), where "\(\mathbin{/\!/}\)" denotes the GIT quotient. If \(G=\mathrm{GL}_{N}\), we simply write \(M_{\mathrm{B}}(X,N):=M_{\mathrm{B}}(X,\mathrm{GL}_{N})\). Simpson constructed a quasi-projective scheme \(M_{\mathrm{DR}}(X,G)\), and \(M_{\mathrm{Dol}}(X,G)\) over \(\ell\). The \(\mathbb{C}\)-points of \(M_{\mathrm{DR}}(X,G)\) are in bijection with the equivalence classes of flat \(G\)-connections with reductive monodromy. There are natural isomorphisms
\[\psi:M_{\mathrm{B}}(X,G)(\mathbb{C})\to M_{\mathrm{DR}}(X,G)(\mathbb{C})\]
such that \(\psi\) is an isomorphism of complex analytic spaces. For each automorphism \(\sigma\in\mathrm{Aut}(\mathbb{C}/\mathbb{Q})\), let \(X^{\sigma}:=X\times_{\sigma}\mathbb{C}\) be the conjugate variety of \(X\), which is also smooth projective. There is a natural map
\[p_{\sigma}:M_{\mathrm{DR}}(X,G)\to M_{\mathrm{DR}}\left(X^{\sigma},G^{\sigma} \right).\]
Let us now introduce the following definition of absolutely constructible subsets.
**Definition 1.17** (Absolutely constructible subset).: A subset \(\mathfrak{C}\subset M_{\mathrm{B}}(X,G)(\mathbb{C})\) is an _absolutely constructible subset_ (resp. _absolutely closed subset_) if the following conditions are satisfied.
(i) \(\mathfrak{C}\) is a \(\bar{\mathbb{Q}}\)-constructible (resp. \(\bar{\mathbb{Q}}\)-closed) subset of \(M_{\mathrm{B}}(X,G)\).
(ii) For each \(\sigma\in\operatorname{Aut}(\mathbb{C}/\mathbb{Q})\), there exists a \(\bar{\mathbb{Q}}\)-constructible (resp. \(\bar{\mathbb{Q}}\)-closed) set \(\mathfrak{C}^{\sigma}\subset M_{\mathrm{B}}\left(X^{\sigma},G^{\sigma}\right)(\mathbb{C})\) such that \(\psi^{-1}\circ p_{\sigma}\circ\psi(\mathfrak{C})=\mathfrak{C}^{\sigma}\).
(iii) \(\mathfrak{C}(\mathbb{C})\) is preserved by the action of \(\mathbb{R}^{*}\) defined in SS 2.4.
**Remark 1.18**.: Note that this definition is significantly weaker than the notion of absolutely constructible sets defined in [23, 17], as it does not consider moduli spaces of semistable Higgs bundles with trivial characteristic numbers, and it does not require that \(\psi(\mathfrak{C})\) is \(\bar{\mathbb{Q}}\)-constructible in \(M_{\mathrm{DR}}(X,G)(\mathbb{C})\). This revised definition allows for a broader range of applications, including quasi-projective varieties. In [23, 17], the preservation of \(\mathfrak{C}(\mathbb{C})\) under the action of \(\mathbb{C}^{*}\) is a necessary condition. It is important to emphasize that our definition only requires \(\mathbb{R}^{*}\)-invariance, which is weaker than \(\mathbb{C}^{*}\)-invariance. Our definition corresponds to the _absolutely constructible subset_ as defined in [16, Definition 6.3.1], with the additional condition that \(\mathfrak{C}(\mathbb{C})\) is preserved by the action of \(\mathbb{R}^{*}\).
By [16, Theorem 9.1.2.(2) & Proposition 7.4.4.(2)] we have the following result, which generalizes [23].
**Theorem 1.19** (Budur-Wang, Simpson).: _Let \(X\) be a smooth projective variety over \(\mathbb{C}\). If \(\mathfrak{C}\subset M_{\mathrm{B}}(X,1)(\mathbb{C})\) is an absolutely constructible subset, then \(\mathfrak{C}=\cup_{i=1}^{m}N_{i}^{\circ}\) where each \(N_{i}^{\circ}\) is a Zariski dense open subset of a torsion-translated subtorus \(N_{i}\) of \(M_{\mathrm{B}}(X,1)\). Moreover, let \(A\) be the Albanese variety of \(X\). Then there are abelian subvarieties \(P_{i}\subset A\) such that \(N_{i}\) is the torsion translate of the image in \(M_{\mathrm{B}}^{0}(X,1)\simeq M_{\mathrm{B}}(A,1)\) of \(M_{\mathrm{B}}(A/P_{i},1)\). Here \(M_{\mathrm{B}}^{0}(X,1)\) denotes the connected component of \(M_{\mathrm{B}}(X,1)\) containing the identity. _
Absolutely constructible subsets are preserved by the following operations:
**Theorem 1.20** (Simpson).: _Let \(f:Z\to X\) be a morphism between smooth projective varieties over \(\mathbb{C}\) and let \(g:G\to G^{\prime}\) be a morphism of reductive groups over \(\bar{\mathbb{Q}}\). Consider the natural map \(i:M_{\mathrm{B}}(X,G)\to M_{\mathrm{B}}(X,G^{\prime})\) and \(j:M_{\mathrm{B}}(X,G)\to M_{\mathrm{B}}(Z,G)\). Then for any absolutely constructible subsets \(\mathfrak{C}\subset M_{\mathrm{B}}(X,G)(\mathbb{C})\) and \(\mathfrak{C}^{\prime}\subset M_{\mathrm{B}}(X,G^{\prime})(\mathbb{C})\), we have \(i(\mathfrak{C})\), \(i^{-1}(\mathfrak{C}^{\prime})\) and \(j(\mathfrak{C})\) are all absolutely constructible. _
**Example 1.21**.: \(M_{\mathrm{B}}(X,G)(\mathbb{C})\)_, the isolated points in \(M_{\mathrm{B}}(X,G)(\mathbb{C})\), and the class of the trivial representation in \(M_{\mathrm{B}}(X,G)(\mathbb{C})\) are all absolutely constructible._
In this paper, absolutely constructible subsets are used to prove the holomorphic convexity of some topological Galois covering of \(X\) in Theorems B and C. It will not be used in the proof of Theorem A.
### Katzarkov-Eyssidieux reduction and canonical currents
For this subsection, we refer to the papers [17, SS3.3.2] or [16] for a comprehensive and systematic treatment.
**Theorem 1.22** (Katzarkov, Eyssidieux).: _Let \(X\) be a projective normal variety, and let \(K\) be a non-archimedean local field. Let \(\varrho:\pi_{1}(X)\to\mathrm{GL}_{N}(K)\) be a reductive representation. Then there exists a fibration \(s_{\varrho}:X\to S_{\varrho}\) onto a projective normal variety such that for any subvariety \(Z\) of \(X\), the image \(\varrho(\mathrm{Im}\,[\pi_{1}(Z^{\mathrm{norm}})\to\pi_{1}(X)])\) is bounded if and only if \(s_{\varrho}(Z)\) is a point. Moreover, if \(X\) is smooth, then the above property holds for \(\varrho(\mathrm{Im}\,[\pi_{1}(Z)\to\pi_{1}(X)])\) without requiring the normalization of \(Z\)._
We will call the above \(s_{\varrho}\) the (Katzarkov-Eyssidieux) reduction map for \(\varrho\). When \(X\) is smooth this theorem is proved by Katzarkov [18] and Eyssidieux [17]. It is easier to derive the singular case from their theorem.
Proof of Theorem 1.22.: Let \(\mu:Y\to X\) be a resolution of singularities. Since \(X\) is normal, \(\mu_{*}:\pi_{1}(Y)\to\pi_{1}(X)\) is surjective and thus \(\mu^{*}\varrho:\pi_{1}(Y)\to\mathrm{GL}_{N}(K)\) is reductive. By the original theorem of Katzarkov-Eyssidieux, there exists a surjective proper fibration \(s_{\mu^{*}\varrho}:Y\to S_{\mu^{*}\varrho}\) such that, for any closed subvariety \(Z\subset Y\), \(s_{\mu^{*}\varrho}(Z)\) is a point if and only if \(\mu^{*}\varrho(\mathrm{Im}[\pi_{1}(Z^{\mathrm{norm}})\to\pi_{1}(Y)])\) is bounded.
Let \(Z\) be an irreducible component of a fiber of \(\mu\). Since \(\mu^{*}\varrho(\operatorname{Im}[\pi_{1}(Z)\to\pi_{1}(Y)])=\{1\}\), it follows that \(s_{\mu^{*}\varrho}(Z)\) is a point by the above property of the Katzarkov-Eyssidieux reduction. Since each fiber of \(\mu\) is connected, \(s_{\mu^{*}\varrho}\) contracts each fiber of \(\mu\) to a point, and it thus descends to a morphism \(s_{\varrho}:X\to S_{\mu^{*}\varrho}\) such that \(s_{\mu^{*}\varrho}=s_{\varrho}\circ\mu\).
Let \(W\subset X\) be any closed subvariety. Then there exists a closed subvariety \(Z\subset Y\) such that \(\mu(Z)=W\). Note that \(\operatorname{Im}[\pi_{1}(Z^{\operatorname{norm}})\to\pi_{1}(W^{\operatorname{norm}})]\) is a finite index subgroup of \(\pi_{1}(W^{\operatorname{norm}})\). Therefore, \(s_{\varrho}(W)\) is a point if and only if \(s_{\mu^{*}\varrho}(Z)\) is a point. This condition is equivalent to \(\mu^{*}\varrho(\operatorname{Im}[\pi_{1}(Z^{\operatorname{norm}})\to\pi_{1}(Y)])=\varrho(\operatorname{Im}[\pi_{1}(Z^{\operatorname{norm}})\to\pi_{1}(X)])\) being bounded. In turn, this is equivalent to \(\varrho(\operatorname{Im}[\pi_{1}(W^{\operatorname{norm}})\to\pi_{1}(X)])\) being bounded, since \(\operatorname{Im}[\pi_{1}(Z^{\operatorname{norm}})\to\pi_{1}(W^{\operatorname{norm}})]\) is a finite index subgroup of \(\pi_{1}(W^{\operatorname{norm}})\).
We will outline the construction of certain _canonical_ positive closed \((1,1)\)-currents over \(S_{\varrho}\). As demonstrated in the proof of [2], we can establish the existence of a finite ramified Galois cover denoted by \(\pi:X^{\operatorname{sp}}\to X\) with the Galois group \(H\), commonly known as the _spectral covering of \(X\)_ (cf. [1, Definition 5.14]). This cover possesses holomorphic \(1\)-forms \(\eta_{1},\dots,\eta_{m}\subset H^{0}(X^{\operatorname{sp}},\pi^{*}\Omega^{1}_{ X})\), which can be considered as the \((1,0)\)-part of the complexified differential of the \(\pi^{*}\varrho\)-equivariant harmonic mapping from \(X^{\operatorname{sp}}\) to the Bruhat-Tits building of \(G\). These particular \(1\)-forms, referred to as the _spectral one-forms_ (cf. [1, Definition 5.16]), play a significant role in the proof of Theorems B and C. Consequently, the Stein factorization of the _partial Albanese morphism_\(a:X^{\operatorname{sp}}\to A\) (cf. [1, Definition 5.19]) induced by \(\eta_{1},\dots,\eta_{m}\) leads to the Katzarkov-Eyssidieux reduction map \(s_{\pi^{*}\varrho}:X^{\operatorname{sp}}\to S_{\pi^{*}\varrho}\) for \(\pi^{*}\varrho\). Moreover, we have the following commutative diagram:
\[\begin{array}{ccc}X^{\operatorname{sp}}&\xrightarrow{\ \pi\ }&X\\ \Big\downarrow{\scriptstyle s_{\pi^{*}\varrho}}&&\Big\downarrow{\scriptstyle s_{\varrho}}\\ S_{\pi^{*}\varrho}&\xrightarrow{\ \sigma_{\pi}\ }&S_{\varrho}\end{array}\]

Here \(\sigma_{\pi}\) is also a finite ramified Galois cover with Galois group \(H\). Note that there are one-forms \(\{\eta^{\prime}_{1},\dots,\eta^{\prime}_{m}\}\subset H^{0}(A,\Omega^{1}_{A})\) such that \(a^{*}\eta^{\prime}_{i}=\eta_{i}\). Consider the finite morphism \(b:S_{\pi^{*}\varrho}\to A\). Then we define a positive \((1,1)\)-current \(T_{\pi^{*}\varrho}:=b^{*}\sum_{i=1}^{m}i\eta^{\prime}_{i}\wedge\overline{\eta^{\prime}_{i}}\) on \(S_{\pi^{*}\varrho}\). Note that \(T_{\pi^{*}\varrho}\) is invariant under the Galois action of \(H\). Therefore, by Lemma 1.10 there is a positive closed \((1,1)\)-current \(T_{\varrho}\) defined on \(S_{\varrho}\) with continuous potential such that \(\sigma_{\pi}^{*}T_{\varrho}=T_{\pi^{*}\varrho}\).
**Definition 1.23** (Canonical current).: The closed positive \((1,1)\)-current \(T_{\varrho}\) on \(S_{\varrho}\) is called the _canonical current_ of \(\varrho\).
More generally, let \(\{\varrho_{i}:\pi_{1}(X)\to\operatorname{GL}_{N}(K_{i})\}_{i=1,\dots,k}\) be reductive representations where \(K_{i}\) is a non-archimedean local field. We shall denote such a family of representations by the bold letter \(\boldsymbol{\varrho}:=\{\varrho_{i}\}_{i=1,\dots,k}\). Let \(s_{\boldsymbol{\varrho}}:X\to S_{\boldsymbol{\varrho}}\) be the Stein factorization of \((s_{\varrho_{1}},\dots,s_{\varrho_{k}}):X\to S_{\varrho_{1}}\times\dots\times S_{\varrho_{k}}\), where \(s_{\varrho_{i}}:X\to S_{\varrho_{i}}\) denotes the reduction map associated with \(\varrho_{i}\) and \(p_{i}:S_{\boldsymbol{\varrho}}\to S_{\varrho_{i}}\) is the induced finite morphism. The morphism \(s_{\boldsymbol{\varrho}}:X\to S_{\boldsymbol{\varrho}}\) is called the _reduction map_ for the family \(\boldsymbol{\varrho}\) of representations.
**Definition 1.24** (Canonical current II).: The closed positive \((1,1)\)-current \(T_{\boldsymbol{\varrho}}:=\sum_{i=1}^{k}p_{i}^{*}T_{\varrho_{i}}\) on \(S_{\boldsymbol{\varrho}}\) is called the canonical current of \(\boldsymbol{\varrho}\).
**Lemma 1.25** ([2, Lemme 1.4.9 & 3.3.10]).: _Let \(f:Z\to X\) be a morphism between projective normal varieties and let \(\boldsymbol{\varrho}:=\{\varrho_{i}:\pi_{1}(X)\to\operatorname{GL}_{N}(K_{i})\}_{i=1,\dots,k}\) be a family of reductive representations where \(K_{i}\) is a non-archimedean local field. Then we have a commutative diagram_

\[\begin{array}{ccc}Z&\xrightarrow{\ f\ }&X\\ \Big\downarrow{\scriptstyle s_{f^{*}\boldsymbol{\varrho}}}&&\Big\downarrow{\scriptstyle s_{\boldsymbol{\varrho}}}\\ S_{f^{*}\boldsymbol{\varrho}}&\xrightarrow{\ \sigma_{f}\ }&S_{\boldsymbol{\varrho}}\end{array}\tag{1.2}\]

_where \(\sigma_{f}\) is a finite morphism. Here \(f^{*}\boldsymbol{\varrho}=\{f^{*}\varrho_{i}\}_{i=1,\ldots,k}\) denotes the pull back of the family \(\boldsymbol{\varrho}=\{\varrho_{i}:\pi_{1}(X)\to\operatorname{GL}_{N}(K_{i})\}_{i=1,\ldots,k}\). Moreover, the following properties hold:_
1. _The local potential of_ \(T_{\boldsymbol{\varrho}}\) _is continuous. In particular, for any closed subvariety_ \(W\subset X\)_, we have_ \[\{T_{\boldsymbol{\varrho}}\}^{\dim W}\cdot W=\int_{W}T_{\boldsymbol{\varrho}}^ {\dim W}\geq 0.\]
2. \(T_{f^{*}\boldsymbol{\varrho}}=\sigma_{f}^{*}T_{\boldsymbol{\varrho}}\)_;_
3. _For every closed subvariety_ \(\Xi\subset S_{f^{*}\boldsymbol{\varrho}}\)_, \(\{T_{\boldsymbol{\varrho}}\}^{\dim\Xi}\cdot\sigma_{f}(\Xi)>0\) _if and only if_ \(\{T_{f^{*}\boldsymbol{\varrho}}\}^{\dim\Xi}\cdot\Xi>0\)_._
Note that Lemma 1.25.3 is a consequence of the first two assertions.
The current \(T_{\varrho}\) will serve as a lower bound for the complex hessian of plurisubharmonic functions constructed by the method of harmonic mappings.
**Proposition 1.26** ([12, Proposition 3.3.6, Lemme 3.3.12]): _Let \(X\) be a projective normal variety and let \(\varrho:\pi_{1}(X)\to G(K)\) be a Zariski dense representation where \(K\) is a non-archimedean local field and \(G\) is a reductive group. Let \(x_{0}\in\Delta(G)\) be an arbitrary point. Let \(u:\widetilde{X}\to\Delta(G)\) be the associated harmonic mapping, where \(\widetilde{X}\) is the universal covering of \(X\). The function \(\phi:\widetilde{X}\to\mathbb{R}_{\geq 0}\) defined by_
\[\phi(x)=2d^{2}\left(u(x),u(x_{0})\right)\]
_satisfies the following properties:_
1. \(\phi\) _descends to a function_ \(\phi_{\varrho}\) _on_ \(\widetilde{X}_{\varrho}=\widetilde{X}/\ker\left(\varrho\right)\)_._
2. \(\operatorname{d\mathrm{d}^{c}}\!\phi_{\varrho}\geq(s_{\varrho}\circ\pi)^{*}T _{\varrho}\)_, where we denote by_ \(\pi:\widetilde{X}_{\varrho}\to X\) _the covering map._
3. \(\phi_{\varrho}\) _is locally Lipschitz;_
4. _Let_ \(T\) _be a normal complex space and_ \(r:\widetilde{X}_{\varrho}\to T\) _a proper holomorphic fibration such that_ \(s_{\varrho}\circ\pi:\widetilde{X}_{\varrho}\to S_{\varrho}\) _factorizes via a morphism_ \(\nu:T{\to}S_{\varrho}\)_. The function_ \(\phi_{\varrho}\) _is of the form_ \(\phi_{\varrho}=\phi_{\varrho}^{T}\circ r\) _with_ \(\phi_{\varrho}^{T}\) _being a continuous plurisubharmonic function on_ \(T\)_;_
5. \(\operatorname{d\mathrm{d}^{c}}\!\phi_{\varrho}^{T}\geq\nu^{*}T_{\varrho}\)_._
### The generalization of the Katzarkov-Eyssidieux reduction to quasi-projective varieties

In our work [10] on hyperbolicity of quasi-projective varieties, we extended Theorem 1.22 to quasi-projective varieties. The theorem we established is stated below.
**Theorem 1.27** ([10, Theorem 0.10]): _Let \(X\) be a complex smooth quasi-projective variety, and let \(\varrho:\pi_{1}(X)\to\operatorname{GL}_{N}(K)\) be a reductive representation where \(K\) is a non-archimedean local field. Then there exists a quasi-projective normal variety \(S_{\varrho}\) and a dominant morphism \(s_{\varrho}:X\to S_{\varrho}\) with connected general fibers, such that for any connected Zariski closed subset \(T\) of \(X\), the following properties are equivalent:_
1. _the image_ \(\varrho(\operatorname{Im}[\pi_{1}(T)\to\pi_{1}(X)])\) _is a bounded subgroup of_ \(G(K)\)_._
2. _For every irreducible component_ \(T_{o}\) _of_ \(T\)_, the image_ \(\varrho(\operatorname{Im}[\pi_{1}(T_{o}^{\mathrm{norm}})\to\pi_{1}(X)])\) _is a bounded subgroup of_ \(G(K)\)_._
3. _The image_ \(s_{\varrho}(T)\) _is a point._
This result plays a crucial role in the proof of Theorem A.
### Simultaneous Stein factorization
**Lemma 1.28**.: _Let \(V\) be a smooth quasi-projective variety. For \(i=1,2,\ldots\), let \(W_{i}\) be normal quasi-projective varieties such that_
* _there exist dominant morphisms_ \(p_{i}:V\to W_{i}\)_, and_
* _there exist dominant morphisms_ \(q_{i}:W_{i}\to W_{i-1}\) _such that_ \(q_{i}\circ p_{i}=p_{i-1}\)_._
_Then there exists \(i_{0}\in\mathbb{Z}_{\geq 2}\) such that for every \(i\geq i_{0}\) and every subvariety \(Z\subset V\), if \(p_{i-1}(Z)\) is a point, then \(p_{i}(Z)\) is a point._
Proof.: Let \(E_{i}\subset V\times V\) be defined by
\[E_{i}=\{(x,x^{\prime})\in V\times V;p_{i}(x)=p_{i}(x^{\prime})\}.\]
Then \(E_{i}\subset V\times V\) is a Zariski closed set. Indeed, \(E_{i}=(p_{i},p_{i})^{-1}(\Delta_{i})\), where \((p_{i},p_{i}):V\times V\to W_{i}\times W_{i}\) is the morphism defined by \((p_{i},p_{i})(x,x^{\prime})=(p_{i}(x),p_{i}(x^{\prime}))\) and \(\Delta_{i}\subset W_{i}\times W_{i}\) is the diagonal. Now by \(q_{i}\circ p_{i}=p_{i-1}\), we have \(E_{i}\subset E_{i-1}\). By the Noetherian property, there exists \(i_{0}\) such that \(E_{i}=E_{i-1}\) for all \(i\geq i_{0}\). For such \(i\), if \(p_{i-1}(x)=p_{i-1}(x^{\prime})\), then \((x,x^{\prime})\in E_{i-1}=E_{i}\), so \(p_{i}(x)=p_{i}(x^{\prime})\); that is, the induced map \(p_{i}(V)\to p_{i-1}(V)\) is injective. Hence if \(p_{i-1}(Z)\) is a point, then \(p_{i}(Z)\) is a point.
**Lemma 1.29**.: _Let \(V\) be a quasi-projective normal variety and let \((f_{\lambda}:V\to S_{\lambda})_{\lambda\in\Lambda}\) be a family of morphisms into quasi-projective varieties \(S_{\lambda}\). Then there exist a quasi-projective normal variety \(S_{\infty}\) and a morphism \(f_{\infty}:V\to S_{\infty}\) such that_
* _for every subvariety_ \(Z\subset V\)_, if_ \(f_{\infty}(Z)\) _is a point, then_ \(f_{\lambda}(Z)\) _is a point for every_ \(\lambda\in\Lambda\)_, and_
* _there exist_ \(\lambda_{1},\ldots,\lambda_{n}\in\Lambda\) _such that_ \(f_{\infty}:V\to S_{\infty}\) _is the quasi-Stein factorization of_ \((f_{\lambda_{1}},\ldots,f_{\lambda_{n}}):V\to S_{\lambda_{1}}\times\cdots\times S_{\lambda_{n}}\)_._
Proof.: We take \(\lambda_{1}\in\Lambda\). Let \(p_{1}:V\to W_{1}\) be the quasi-Stein factorization of \(f_{\lambda_{1}}:V\to S_{\lambda_{1}}\).
Next we take (if it exists) \(\lambda_{2}\in\Lambda\) such that for the quasi-Stein factorization \(p_{2}:V\to W_{2}\) of \((f_{\lambda_{1}},f_{\lambda_{2}}):V\to S_{\lambda_{1}}\times S_{\lambda_{2}}\), there exists a subvariety \(Z\subset V\) such that \(p_{1}(Z)\) is a point, but \(p_{2}(Z)\) is not a point.
Similarly, we take (if it exists) \(\lambda_{3}\in\Lambda\) such that for the quasi-Stein factorization \(p_{3}:V\to W_{3}\) of \((f_{\lambda_{1}},f_{\lambda_{2}},f_{\lambda_{3}}):V\to S_{\lambda_{1}}\times S_{\lambda_{2}}\times S_{\lambda_{3}}\), there exists a subvariety \(Z\subset V\) such that \(p_{2}(Z)\) is a point, but \(p_{3}(Z)\) is not a point.
We repeat this process as long as it may continue. By Lemma 1.28, the process terminates after finitely many steps, yielding \(\lambda_{1},\ldots,\lambda_{n}\in\Lambda\). We let \(S_{\infty}=W_{n}\); namely, \(f_{\infty}:V\to S_{\infty}\) is the quasi-Stein factorization of \((f_{\lambda_{1}},\ldots,f_{\lambda_{n}}):V\to S_{\lambda_{1}}\times\cdots\times S_{\lambda_{n}}\).
Now let \(\lambda\in\Lambda\). Then by the construction, if \(f_{\infty}(Z)\) is a point, then \((f_{\lambda_{1}},\ldots,f_{\lambda_{n}},f_{\lambda})(Z)\) is a point. In particular, \(f_{\lambda}(Z)\) is a point.
We also need the following generalized Stein factorization proved by Henri Cartan in [10, Theorem 3].
**Theorem 1.30**.: _Let \(X,S\) be complex spaces and \(f:X\to S\) be a morphism. Suppose a connected component \(F\) of a fibre of \(f\) is compact. Then \(F\) has an open neighborhood \(V\) such that \(f(V)\) is a locally closed analytic subvariety of \(S\) and \(V\to f(V)\) is proper._
_Suppose furthermore that \(X\) is normal and that every connected component \(F\) of a fibre of \(f\) is compact. The set \(Y\) of connected components of fibres of \(f\) can be endowed with the structure of a normal complex space such that \(f\) factors through the natural map \(e:X\to Y\) which is a proper holomorphic fibration. _
## 2 Some non-abelian Hodge theories
In this section, we will build upon the previous work of Simpson [20], Iyer-Simpson [14, 15], and Mochizuki [16, 17] to further develop non-abelian Hodge theories over quasi-projective varieties. We begin by establishing the functoriality of pullback for regular filtered Higgs bundles (cf. Proposition 2.5). Then we clarify the \(\mathbb{C}^{*}\) and \(\mathbb{R}^{*}\)-action
on the character varieties of smooth quasi-projective varieties, following [14]. Lastly, we prove Proposition 2.9, which essentially states that the natural morphisms of character varieties induced by algebraic morphisms commute with the \(\mathbb{C}^{*}\)-action. The significance of this section lies in its essential role in establishing Propositions 3.12 and 3.35, which serve as a critical cornerstone of the whole paper.
### Regular filtered Higgs bundles
In this subsection, we recall the notions of regular filtered Higgs bundles (or parabolic Higgs bundles). For more details refer to [14]. Let \(\overline{X}\) be a complex manifold with a reduced simple normal crossing divisor \(D=\sum_{i=1}^{\ell}D_{i}\), and let \(X=\overline{X}\backslash D\) be the complement of \(D\). We denote the inclusion map of \(X\) into \(\overline{X}\) by \(j\).
**Definition 2.1**.: A _regular filtered Higgs bundle_ \((\mathbf{E}_{*},\theta)\) on \((\overline{X},D)\) is a holomorphic vector bundle \(E\) on \(X\), together with an \(\mathbb{R}^{\ell}\)-indexed filtration \({}_{\mathbf{a}}E\) (the so-called _parabolic structure_) by locally free subsheaves of \(j_{*}E\) such that
1. For every \(\mathbf{a}\in\mathbb{R}^{\ell}\), \({}_{\mathbf{a}}E|_{X}=E\).
2. For \(1\leq i\leq\ell\), \({}_{\mathbf{a}+\mathbf{1}_{i}}E={}_{\mathbf{a}}E\otimes\mathcal{O}_{\overline{X}}(D_{i})\), where \(\mathbf{1}_{i}=(0,\dots,0,1,0,\dots,0)\) with \(1\) in the \(i\)-th component.
3. \({}_{\mathbf{a}+\mathbf{\epsilon}}E={}_{\mathbf{a}}E\) for any vector \(\mathbf{\epsilon}=(\epsilon,\dots,\epsilon)\) with \(0<\epsilon\ll 1\).
4. The set of _weights_\(\{\mathbf{a}\ |\ _{\mathbf{a}}E/{}_{\mathbf{a}-\mathbf{\epsilon}}E\neq 0\) for any vector \(\mathbf{\epsilon}=(\epsilon,\dots,\epsilon)\) with \(0<\epsilon\ll 1\}\) is discrete in \(\mathbb{R}^{\ell}\).
5. There is an \(\mathcal{O}_{\overline{X}}\)-linear map, the so-called _Higgs field_, \[\theta:{}^{\circ}\!E\to\Omega^{1}_{\overline{X}}(\log D)\otimes{}^{\circ}\!E\] such that \(\theta\wedge\theta=0\) and (2.1) \[\theta({}_{\mathbf{a}}E)\subseteq\Omega^{1}_{\overline{X}}(\log D)\otimes{}_{\mathbf{a}}E.\]
Denote \({}_{\mathbf{0}}E\) by \({}^{\circ}\!E\), where \(\mathbf{0}=(0,\dots,0)\). When disregarding the Higgs field, \(\mathbf{E}_{*}\) is referred to as a _parabolic bundle_. By the work of Borne-Vistoli the parabolic structure of a parabolic bundle is _locally abelian_, _i.e._ it admits a local frame compatible with the filtration (see e.g. [13]).
A natural class of regular filtered Higgs bundles comes from prolongations of tame harmonic bundles. We first recall some notions from [14, §2.2.1]. Let \(E\) be a holomorphic vector bundle with a smooth hermitian metric \(h\) over \(X\).
Let \(U\) be an open subset of \(\overline{X}\) with an admissible coordinate \((U;z_{1},\dots,z_{n})\) with respect to \(D\). For any section \(\sigma\in\Gamma(U\backslash D,E|_{U\backslash D})\), let \(|\sigma|_{h}\) denote the norm function of \(\sigma\) with respect to the metric \(h\). We write \(|\sigma|_{h}=\mathcal{O}(\prod_{i=1}^{\ell}|z_{i}|^{-b_{i}})\) if there exists a positive number \(C\) such that \(|\sigma|_{h}\leq C\cdot\prod_{i=1}^{\ell}|z_{i}|^{-b_{i}}\). For any \(\mathbf{b}\in\mathbb{R}^{\ell}\), we say \(-\mathrm{ord}(\sigma)\leq\mathbf{b}\) if the following holds:
\[|\sigma|_{h}=\mathcal{O}(\prod_{i=1}^{\ell}|z_{i}|^{-b_{i}-\varepsilon})\]
for any real number \(\varepsilon>0\) and \(0<|z_{i}|\ll 1\). For any \(\mathbf{b}\), the sheaf \({}_{\mathbf{b}}E\) is defined as follows:
\[\Gamma(U,{}_{\mathbf{b}}E):=\{\sigma\in\Gamma(U\backslash D,E|_{U\backslash D})\mid-\mathrm{ord}(\sigma)\leq\mathbf{b}\}. \tag{2.2}\]
The sheaf \({}_{\mathbf{b}}E\) is called the prolongation of \(E\) with increasing order \(\mathbf{b}\). In particular, we use the notation \({}^{\circ}\!E\) in the case \(\mathbf{b}=(0,\dots,0)\).
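To make the prolongation concrete, here is a minimal rank-one sketch (the weight \(\alpha\in\mathbb{R}\) below is an illustrative assumption). Take \(\overline{X}=\mathbb{D}\), \(D=\{0\}\), and \(E=\mathcal{O}_{\mathbb{D}^{*}}\) with metric \(h=|z|^{2\alpha}\). A Laurent section \(\sigma=z^{-k}\) satisfies \(|\sigma|_{h}=|z|^{\alpha-k}\), so the condition \(|\sigma|_{h}=\mathcal{O}(|z|^{-b-\varepsilon})\) for every \(\varepsilon>0\) amounts to \(k\leq\alpha+b\), i.e. \(k\leq\lfloor\alpha+b\rfloor\). Hence

\[{}_{b}E=\mathcal{O}_{\mathbb{D}}\left(\lfloor\alpha+b\rfloor\,[0]\right),\]

and the filtration jumps exactly at \(b\in-\alpha+\mathbb{Z}\), in accordance with Definition 2.1.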
According to Simpson [15, Theorem 2] and Mochizuki [14, Theorem 8.58], the above prolongation gives a regular filtered Higgs bundle.
**Theorem 2.2** (Simpson, Mochizuki).: _Let \(\overline{X}\) be a complex manifold and \(D\) be a simple normal crossing divisor on \(\overline{X}\). If \((E,\theta,h)\) is a tame harmonic bundle on \(\overline{X}\backslash D\), then the corresponding filtration \({}_{\mathbf{b}}E\) defined above defines a regular filtered Higgs bundle \((\mathbf{E}_{*},\theta)\) on \((\overline{X},D)\)._
### Pullback of parabolic bundles
In this subsection, we introduce the concept of pullback of parabolic bundles. We refer the readers to [18, 19] for a more systematic treatment. We avoid the language of Deligne-Mumford stacks in [18, 19]. This subsection is conceptual, and we shall make precise computations in the next subsection.
A parabolic line bundle is a parabolic sheaf \(F\) such that all the \({}_{\boldsymbol{a}}F\) are line bundles. An important class of examples is obtained as follows: let \(L\) be a line bundle on \(\overline{X}\); if \({\boldsymbol{a}}=(a_{1},\ldots,a_{\ell})\in\mathbb{R}^{\ell}\), then we can define a parabolic line bundle, denoted by \(L_{*}^{\boldsymbol{a}}\), by setting
\[{}_{\boldsymbol{b}}L^{\boldsymbol{a}}:=L\otimes\mathcal{O}_{\overline{X}}\left(\sum_{i=1}^{\ell}\lfloor a_{i}+b_{i}\rfloor D_{i}\right) \tag{2.3}\]
for any \({\boldsymbol{b}}\in\mathbb{R}^{\ell}\).
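As a sanity check on the convention (2.3), take \(\ell=1\) and the illustrative weight \(a=\tfrac{1}{2}\). Then

\[{}_{b}L^{1/2}=L\otimes\mathcal{O}_{\overline{X}}\left(\lfloor\tfrac{1}{2}+b\rfloor D\right)=\begin{cases}L,&b\in[-\tfrac{1}{2},\tfrac{1}{2}),\\ L\otimes\mathcal{O}_{\overline{X}}(D),&b\in[\tfrac{1}{2},\tfrac{3}{2}),\end{cases}\]

so the filtration is constant on the intervals \([\tfrac{1}{2}+m,\tfrac{3}{2}+m)\), \(m\in\mathbb{Z}\), satisfies conditions (2) and (3) of Definition 2.1, and has weight set \(\tfrac{1}{2}+\mathbb{Z}\).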
**Definition 2.3** (Locally abelian parabolic bundle).: A parabolic sheaf \(\boldsymbol{E}_{*}\) is a _locally abelian parabolic bundle_ if, in a neighborhood of any point \(x\in\overline{X}\) there is an isomorphism between \(\boldsymbol{E}_{*}\) and a direct sum of parabolic line bundles.
Let \(f:\overline{Y}\to\overline{X}\) be a holomorphic map of complex manifolds. Let \(D^{\prime}=\sum_{j=1}^{k}D^{\prime}_{j}\) and \(D=\sum_{i=1}^{\ell}D_{i}\) be simple normal crossing divisors on \(\overline{Y}\) and \(\overline{X}\) respectively. Assume that \(f^{-1}(D)\subset D^{\prime}\). Denote by \(n_{ij}=\operatorname{ord}_{D^{\prime}_{j}}f^{*}D_{i}\in\mathbb{Z}_{\geq 0}\). Let \(L\) be a line bundle on \(\overline{X}\) and let \(L_{*}^{\boldsymbol{a}}\) be the parabolic line bundle defined in (2.3). Set
\[f^{*}{\boldsymbol{a}}:=(\sum_{i=1}^{\ell}n_{i1}a_{i},\ldots,\sum_{i=1}^{\ell}n _{ik}a_{i})\in\mathbb{R}^{k}. \tag{2.4}\]
Then \(f^{*}(L_{*}^{\boldsymbol{a}})\) is defined by setting
\[{}_{\boldsymbol{b}}(f^{*}L)^{f^{*}{\boldsymbol{a}}}:=f^{*}L\otimes\mathcal{O}_{\overline{Y}}\left(\sum_{j=1}^{k}\Big\lfloor\sum_{i=1}^{\ell}n_{ij}a_{i}+b_{j}\Big\rfloor D^{\prime}_{j}\right) \tag{2.5}\]
for any \({\boldsymbol{b}}\in\mathbb{R}^{k}\).
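For instance (a minimal sketch with \(\ell=k=1\)): let \(f:\mathbb{D}\to\mathbb{D}\) be the double cover \(f(w)=w^{2}\), so that \(D=\{z=0\}\), \(D^{\prime}=\{w=0\}\) and \(n_{11}=\operatorname{ord}_{D^{\prime}}f^{*}D=2\). By (2.4) the weight doubles, \(f^{*}a=2a\), and (2.5) reads

\[{}_{b}(f^{*}L)^{2a}=f^{*}L\otimes\mathcal{O}_{\mathbb{D}}\left(\lfloor 2a+b\rfloor D^{\prime}\right).\]

This is consistent with the norm-growth picture of § 2.1: a frame of norm \(\sim|z|^{a}\) pulls back to a frame of norm \(\sim|w|^{2a}\), i.e. of weight \(2a\) along \(D^{\prime}\).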
Let \(\overline{X}\) be a compact complex manifold. Consider a locally abelian parabolic bundle \(\boldsymbol{E}_{*}\) defined on \(\overline{X}\). We can cover \(\overline{X}\) with open subsets \(U_{1},\ldots,U_{m}\), such that \(\boldsymbol{E}_{*}|_{U_{i}}\) can be expressed as a direct sum of parabolic line bundles on each \(U_{i}\).
Using this decomposition, we define the pullback \(f^{*}(\boldsymbol{E}_{*}|_{U_{i}})\) as in (2.5). It can be verified that \(f^{*}(\boldsymbol{E}_{*}|_{U_{i}})\) is compatible with \(f^{*}(\boldsymbol{E}_{*}|_{U_{j}})\) whenever \(U_{i}\cap U_{j}\neq\varnothing\). This allows us to extend the local pullback to a global level, resulting in the definition of the pullback of a locally abelian parabolic bundle, denoted by \(f^{*}\boldsymbol{E}_{*}\). In the next subsection, we will see an explicit description of the pullback of regular filtered Higgs bundles induced by tame harmonic bundles.
### Functoriality of pullback of regular filtered Higgs bundle
We recall some notions in [20, §2.2.2]. Let \(X\) be a complex manifold, \(D\) a simple normal crossing divisor on \(X\), and \(E\) a holomorphic vector bundle on \(X\backslash D\) equipped with a hermitian metric \(h\). Let \({\boldsymbol{v}}=(v_{1},\ldots,v_{r})\) be a smooth frame of \(E\). We obtain the \(H(r)\)-valued function \(H(h,{\boldsymbol{v}})\) defined over \(X\backslash D\) (here \(H(r)\) denotes the space of \(r\times r\) positive definite hermitian matrices), whose \((i,j)\)-component is given by \(h(v_{i},v_{j})\).
Let us consider the case \(X=\mathbb{D}^{n}\), and \(D=\sum_{i=1}^{\ell}D_{i}\) with \(D_{i}=(z_{i}=0)\). We have the coordinate \((z_{1},\ldots,z_{n})\). Let \(h\), \(E\) and \({\boldsymbol{v}}\) be as above.
**Definition 2.4**.: A smooth frame \({\boldsymbol{v}}\) on \(X\backslash D\) is called _adapted up to log order_, if the following inequalities hold over \(X\backslash D\):
\[C^{-1}(-\sum_{i=1}^{\ell}\log|z_{i}|)^{-M}\leq H(h,{\boldsymbol{v}})\leq C(- \sum_{i=1}^{\ell}\log|z_{i}|)^{M}\]
for some positive numbers \(M\) and \(C\).
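A minimal rank-one sketch (the weight \(a\) and exponent \(p\) below are illustrative assumptions): work on \(\{0<|z|<\tfrac{1}{2}\}\), so that \(-\log|z|\) is bounded away from \(0\), and take \(E=\mathcal{O}\) with \(h=|z|^{2a}(-\log|z|^{2})^{p}\) and the holomorphic frame \(v=1\). For \(\tilde{v}:=|z|^{-a}v\) we get

\[H(h,\tilde{v})=|z|^{-2a}\cdot|z|^{2a}(-\log|z|^{2})^{p}=(-\log|z|^{2})^{p},\]

which satisfies the two-sided bound of Definition 2.4 with any \(M\geq|p|\) and suitable \(C\); thus \(\tilde{v}\) is adapted up to log order, while \(v\) itself is not unless \(a=0\).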
The goal of this subsection is to establish the following result concerning the functoriality of the pullback of a regular filtered Higgs bundle. This result will play a crucial role in proving Proposition 3.35.
**Proposition 2.5**.: _Consider a morphism \(f:\overline{Y}\to\overline{X}\) of smooth projective varieties \(\overline{X}\) and \(\overline{Y}\). Let \(D\) and \(D^{\prime}\) be simple normal crossing divisors on \(\overline{X}\) and \(\overline{Y}\) respectively. Assume that \(f^{-1}(D)\subset D^{\prime}\). Let \((E,\theta,h)\) be a tame harmonic bundle on \(X:=\overline{X}\backslash D\). Let \((\boldsymbol{E}_{*},\theta)\) be the regular filtered Higgs bundle defined in § 2.1. Consider the pullback \(f^{*}\boldsymbol{E}_{*}\) defined in § 2.2, which is also a parabolic bundle over \((\overline{Y},D^{\prime})\). Then_
1. \(f^{*}\boldsymbol{E}_{*}\) _is the prolongation_ \(\tilde{E}_{*}\) _of_ \(f^{*}E\) _using the norm growth with respect to the metric_ \(f^{*}h\) _as defined in (_2.2_)._
2. \((f^{*}\boldsymbol{E}_{*},f^{*}\theta)\) _is a regular filtered Higgs bundle._
Proof.: Since this is a local result, we assume that \(\overline{X}:=\mathbb{D}^{n}\) and \(D:=\bigcup_{i=1}^{\ell}\left\{z_{i}=0\right\}\). Let \(\overline{Y}:=\mathbb{D}^{m}\) and \(D^{\prime}:=\bigcup_{j=1}^{k}\left\{w_{j}=0\right\}\). Then, \(f^{*}\left(z_{i}\right)=\prod_{j=1}^{k}w_{j}^{n_{ij}}g_{i}\) for some invertible functions \(\left\{g_{i}\right\}_{i=1,\ldots,\ell}\subset\mathcal{O}(\overline{Y})\).
By [10, Proposition 8.70], there exists a holomorphic frame \(\boldsymbol{v}=(v_{1},\ldots,v_{r})\) of \({}^{\circ}\!E|_{\overline{X}}\) and \(\left\{a_{ij}\right\}_{i=1,\ldots,r;j=1,\ldots,\ell}\subset\mathbb{R}\) such that if we put \(\tilde{v}_{i}:=v_{i}\cdot\prod_{j=1}^{\ell}|z_{j}|^{-a_{ij}}\), then for the smooth frame \(\widetilde{\boldsymbol{v}}=(\tilde{v}_{1},\ldots,\tilde{v}_{r})\) over \(X=\overline{X}\backslash D\), \(H(h,\widetilde{\boldsymbol{v}})\) is adapted up to log order in the sense of Definition 2.4.
Define \(L_{i}\) to be the sub-line bundle of \({}^{\circ}\!E\) generated by \(v_{i}\). Write \(\boldsymbol{a}_{i}:=(a_{i1},\ldots,a_{i\ell})\in\mathbb{R}^{\ell}\). Consider the parabolic line bundle \((L_{i})_{*}^{\boldsymbol{a}_{i}}\) over \((\overline{X},D)\) defined in (2.3), namely,
\[{}_{\boldsymbol{b}}(L_{i})^{\boldsymbol{a}_{i}}:=L_{i}\otimes\mathcal{O}_{\overline{X}}\left(\sum_{j=1}^{\ell}\lfloor a_{ij}+b_{j}\rfloor D_{j}\right) \tag{2.6}\]
for any \(\boldsymbol{b}\in\mathbb{R}^{\ell}\).
**Claim 2.6**.: _The parabolic bundles \(\boldsymbol{E}_{*}\) and \(\oplus_{i=1}^{r}(L_{i})_{*}^{\boldsymbol{a}_{i}}\) are the same. In particular, \(\boldsymbol{E}_{*}\) is locally abelian._
Proof.: By (2.2), for any \(\boldsymbol{b}\in\mathbb{R}^{\ell}\), any holomorphic section \(\sigma\in\Gamma(\overline{X},{}_{\boldsymbol{b}}E)\) satisfies
\[|\sigma|_{h}=\mathcal{O}(\prod_{j=1}^{\ell}|z_{j}|^{-b_{j}-\varepsilon}).\]
As \(\boldsymbol{v}\) is a frame for \({}^{\circ}\!E\), one can write \(\sigma=\sum_{i=1}^{r}g_{i}v_{i}\) where \(g_{i}\) is a holomorphic function defined on \(X\). Write \(\boldsymbol{g}:=(g_{1},\ldots,g_{r})\). Since \(H(h,\widetilde{\boldsymbol{v}})\) is adapted up to log order, it follows that

\[C^{-1}(-\sum_{j=1}^{\ell}\log|z_{j}|)^{-M}\cdot\sum_{i=1}^{r}|g_{i}|^{2}\prod_{j=1}^{\ell}|z_{j}|^{2a_{ij}}\leq\overline{\boldsymbol{g}}H(h,\boldsymbol{v})\boldsymbol{g}^{T}=|\sigma|_{h}^{2}=\mathcal{O}(\prod_{j=1}^{\ell}|z_{j}|^{-2b_{j}-\varepsilon})\]
for any \(\varepsilon>0\). Hence for each \(i\) and any \(\varepsilon>0\) we have
\[|g_{i}|^{2}=\mathcal{O}(\prod_{j=1}^{\ell}|z_{j}|^{-2(b_{j}+a_{ij})-\varepsilon }).\]
Therefore, \(\operatorname{ord}_{D_{j}}g_{i}\geq-\lfloor b_{j}+a_{ij}\rfloor\). This proves that
\[{}_{\boldsymbol{b}}E\subset\oplus_{i=1}^{r}{}_{\boldsymbol{b}}(L_{i})^{ \boldsymbol{a}_{i}}.\]
On the other hand, we consider any section \(\sigma\in\Gamma(\overline{X},{}_{\boldsymbol{b}}(L_{i})^{\boldsymbol{a}_{i}})\). Then \(\sigma=gv_{i}\) for some meromorphic function \(g\) defined over \(\overline{X}\) such that \(\operatorname{ord}_{D_{j}}g\geq-\lfloor b_{j}+a_{ij}\rfloor\) by (2.6). Therefore, there exists some positive constant \(C>0\) such that
\[|\sigma|_{h}^{2}=|g|^{2}|v_{i}|_{h}^{2}\leq C\prod_{j=1}^{\ell}|z_{j}|^{-2(b_{j}+a_{ij})}\cdot|\tilde{v}_{i}|_{h}^{2}\cdot\prod_{j=1}^{\ell}|z_{j}|^{2a_{ij}}=C\prod_{j=1}^{\ell}|z_{j}|^{-2b_{j}}\cdot|\tilde{v}_{i}|_{h}^{2}=\mathcal{O}(\prod_{j=1}^{\ell}|z_{j}|^{-2b_{j}-\varepsilon}).\]
as \(|\tilde{v}_{i}|_{h}^{2}\leq C(-\sum_{j=1}^{\ell}\log|z_{j}|)^{M}\) for some \(C,M>0\). This implies that
\[\oplus_{i=1}^{r}\,{}_{\boldsymbol{b}}(L_{i})^{\boldsymbol{a}_{i}}\subset{}_{\boldsymbol{b}}E.\]
The claim is proved.
Consider the pullback \(f^{*}\boldsymbol{v}:=(f^{*}v_{1},\ldots,f^{*}v_{r})\). Then it is a holomorphic frame of \(f^{*}E|_{Y}\) where \(Y:=\overline{Y}\backslash D^{\prime}\). Note that we have
\[f^{*}\tilde{v}_{i}:=f^{*}v_{i}\cdot\prod_{j=1}^{\ell}|f^{*}z_{j}|^{-a_{ij}}=f^ {*}v_{i}\cdot\prod_{j=1}^{\ell}\prod_{q=1}^{k}|w_{q}|^{-n_{jq}a_{ij}}\cdot g_{ i}^{\prime}\]
where \(g_{i}^{\prime}:=\prod_{j=1}^{\ell}|g_{j}|^{-a_{ij}}\) is a smooth positive function, bounded away from \(0\) and \(\infty\) after possibly shrinking the polydisc. Similar to (2.4), we set
\[f^{*}\boldsymbol{a}_{i}:=(\sum_{j=1}^{\ell}n_{j1}a_{ij},\ldots,\sum_{j=1}^{\ell }n_{jk}a_{ij})\in\mathbb{R}^{k}.\]
Then we have
\[f^{*}\tilde{v}_{i}:=f^{*}v_{i}\cdot|\boldsymbol{w}^{-f^{*}\boldsymbol{a}_{i} }|\cdot g_{i}^{\prime}.\]
Since \(H(h,\widetilde{\boldsymbol{v}})\) is adapted up to log order, it is easy to check that \(H(f^{*}h,f^{*}\widetilde{\boldsymbol{v}})\) is also adapted up to log order. Set \(e_{i}:=f^{*}v_{i}\cdot|\boldsymbol{w}^{-f^{*}\boldsymbol{a}_{i}}|\) for \(i=1,\ldots,r\) and \(\boldsymbol{e}:=(e_{1},\ldots,e_{r})\). Then \(\boldsymbol{e}\) is a smooth frame for \(f^{*}E|_{Y}\). Since \(g_{i}^{\prime}\) is bounded away from \(0\) and \(\infty\), it follows that \(H(f^{*}h,\boldsymbol{e})\) is also adapted up to log order. Consider the prolongation \((\tilde{E}_{*},\tilde{\theta})\) of the tame harmonic bundle \((f^{*}E,f^{*}\theta,f^{*}h)\) using the norm growth as defined in (2.2). Applying the result from Claim 2.6 to \((f^{*}E,f^{*}\theta,f^{*}h)\), we can conclude that the parabolic bundle \(\tilde{E}_{*}\) is given by
\[\tilde{E}_{*}=\oplus_{i=1}^{r}(f^{*}L_{i})_{*}^{f^{*}\boldsymbol{a}_{i}}, \tag{2.7}\]
where \((f^{*}L_{i})_{*}^{f^{*}\boldsymbol{a}_{i}}\) are parabolic line bundles defined by
\[{}_{\boldsymbol{b}}(f^{*}L_{i})^{f^{*}\boldsymbol{a}_{i}}:=f^{*}L_{i}\otimes\mathcal{O}_{\overline{Y}}\left(\sum_{j=1}^{k}\Big\lfloor\sum_{q=1}^{\ell}n_{qj}a_{iq}+b_{j}\Big\rfloor D_{j}^{\prime}\right). \tag{2.8}\]
On the other hand, by our definition of pullback of parabolic bundles and Claim 2.6, we have
\[f^{*}\boldsymbol{E}_{*}:=\oplus_{i=1}^{r}f^{*}(L_{i})_{*}^{\boldsymbol{a}_{i}}\]
where \(f^{*}(L_{i})_{*}^{\boldsymbol{a}_{i}}\) is the pullback of the parabolic line bundle \((L_{i})_{*}^{\boldsymbol{a}_{i}}\) defined in (2.5). By performing a straightforward computation, we find that

\[{}_{\boldsymbol{b}}\big(f^{*}(L_{i})^{\boldsymbol{a}_{i}}\big)=f^{*}L_{i}\otimes\mathcal{O}_{\overline{Y}}\left(\sum_{j=1}^{k}\Big\lfloor\sum_{q=1}^{\ell}n_{qj}a_{iq}+b_{j}\Big\rfloor D_{j}^{\prime}\right)\quad\text{for any }\boldsymbol{b}\in\mathbb{R}^{k}.\]
This equality together with (2.7) and (2.8) yields \(\tilde{E}_{*}=f^{*}\boldsymbol{E}_{*}\). This proves the first assertion. The second assertion follows from the first one, combined with Theorem 2.2.
### \(\mathbb{C}^{*}\)-action and \(\mathbb{R}^{*}\)-action on character varieties
Consider a smooth projective variety \(\overline{X}\) equipped with a simple normal crossing divisor \(D\). We define \(X\) as the complement of \(D\) in \(\overline{X}\). Additionally, we fix an ample line bundle \(L\) on \(\overline{X}\). Let \(\varrho:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\) be a reductive representation.
According to Theorem 1.6, there exists a tame pure imaginary harmonic bundle \((E,\theta,h)\) on \(X\) such that \((E,\nabla_{h}+\theta+\theta_{h}^{\dagger})\) is flat, with the monodromy representation being precisely \(\varrho\). Here \(\nabla_{h}\) is the Chern connection of \((E,h)\) and \(\theta_{h}^{\dagger}\) is the adjoint of \(\theta\) with respect to \(h\). Let \((\boldsymbol{E}_{*},\theta)\) be the prolongation of \((E,\theta)\) on \(\overline{X}\) defined in § 2.1. By [10, Theorem 1.4], \((\boldsymbol{E}_{*},\theta)\) is a \(\mu_{L}\)-polystable regular filtered Higgs bundle on \((\overline{X},D)\) with trivial characteristic numbers. Therefore, for any \(t\in\mathbb{C}^{*}\), \((\boldsymbol{E}_{*},t\theta)\) is also a \(\mu_{L}\)-polystable regular filtered Higgs bundle on \((\overline{X},D)\) with trivial characteristic numbers. By [10, Theorem 9.4], there is a pluriharmonic metric \(h_{t}\) for \((E,t\theta)\) adapted to the parabolic structure of \((\boldsymbol{E}_{*},t\theta)\). Then \((E,t\theta,h_{t})\) is a harmonic bundle and thus the connection \(\nabla_{h_{t}}+t\theta+\bar{t}\theta_{h_{t}}^{\dagger}\) is flat. Here \(\nabla_{h_{t}}\) is the Chern connection for \((E,h_{t})\) and \(\theta_{h_{t}}^{\dagger}\) is the adjoint of \(\theta\) with respect to \(h_{t}\). Let us denote by \(\varrho_{t}:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\) the monodromy representation of \(\nabla_{h_{t}}+t\theta+\bar{t}\theta_{h_{t}}^{\dagger}\). It should be noted that the representation \(\varrho_{t}\) is only well-defined up to conjugation. As a result, the \(\mathbb{C}^{*}\)-action is only well-defined over \(M_{\mathrm{B}}(X,N)\) and we shall denote it by
\[t.[\varrho]:=[\varrho_{t}]\quad\text{for any }t\in\mathbb{C}^{*}.\]
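For orientation, recall the classical compact picture (Simpson): there, the fixed points of this \(\mathbb{C}^{*}\)-action on the moduli of reductive representations are exactly the classes underlying \(\mathbb{C}\)-VHS. A standard rank-two illustration, stated here only as a sketch, is the uniformizing Higgs bundle on a compact Riemann surface \(X\) of genus \(\geq 2\):

\[E=K_{X}^{1/2}\oplus K_{X}^{-1/2},\qquad\theta=\begin{pmatrix}0&0\\ 1&0\end{pmatrix},\qquad 1\in H^{0}(X,\mathcal{H}om(K_{X}^{1/2},K_{X}^{-1/2})\otimes K_{X})\cong H^{0}(X,\mathcal{O}_{X}).\]

Conjugating by \(g_{t}=\operatorname{diag}(t^{-1/2},t^{1/2})\) gives \(g_{t}\theta g_{t}^{-1}=t\theta\), so \((E,t\theta)\cong(E,\theta)\) and \(t.[\varrho]=[\varrho]\) for all \(t\in\mathbb{C}^{*}\); this fixed point underlies the uniformizing \(\mathbb{C}\)-VHS.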
It is important to observe that, unlike the compact case, \(\varrho_{t}\) is not necessarily reductive in general, even if the original representation \(\varrho\) is reductive. However, if \(t\in\mathbb{R}^{*}\), then \((E,t\theta)\) is also pure imaginary and, by Theorem 1.6, \(\varrho_{t}\) is reductive. Nonetheless, we obtain a family of (possibly non-semisimple) representations \(\{\varrho_{t}:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\}_{t\in\mathbb{C}^{*}}\). By [13, Proofs of Theorem 10.1 and Lemma 10.2] we have
**Lemma 2.7**.: _The map_
\[\Phi:\mathbb{R}^{*} \to M_{\mathrm{B}}(\pi_{1}(X),N)\] \[t \mapsto[\varrho_{t}]\]
_is continuous. \(\Phi(\{t\in\mathbb{R}^{*}\mid|t|<1\})\) is relatively compact in \(M_{\mathrm{B}}(\pi_{1}(X),N)\). _
Note that Lemma 2.7 cannot be deduced directly from [13, Lemma 10.2], as the character variety is not treated there. Indeed, based on Uhlenbeck's compactness in gauge theory, Mochizuki's proof can be read as follows: for any \(t_{n}\in\mathbb{R}^{*}\) converging to \(0\), after passing to a subsequence, there exist some \(\varrho_{0}:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\) and \(g_{n}\in\operatorname{GL}_{N}(\mathbb{C})\) such that \(\lim\limits_{n\to\infty}g_{n}\varrho_{t_{n}}g_{n}^{-1}=\varrho_{0}\) in the representation variety \(R(\pi_{1}(X),\operatorname{GL}_{N})(\mathbb{C})\). Moreover, one can check that \(\varrho_{0}\) corresponds to some tame pure imaginary harmonic bundle, and thus by Theorem 1.6 it is reductive (cf. [1] for a more detailed study). For this reason, it is more practical to work with the \(\mathbb{R}^{*}\)-action instead of the \(\mathbb{C}^{*}\)-action, as the representations we encounter are then all reductive.
When \(X\) is compact, Simpson proved that \(\lim_{t\to 0}\Phi(t)\) exists and underlies a \(\mathbb{C}\)-VHS. However, this is currently unknown in the quasi-projective setting. Instead, Mochizuki proved that one achieves a \(\mathbb{C}\)-VHS after finitely many steps of deformation. Let us recall this briefly; the reader can refer to [13, §10.1] for more details.
Let \(\varrho:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\) be a semisimple representation. Then there exists a tame and pure imaginary harmonic bundle \((E,\theta,h)\) corresponding to \(\varrho\). Then the induced regular filtered Higgs bundle \((\mathbf{E}_{*},\theta)\) on \((\overline{X},D)\) is \(\mu_{L}\)-polystable with trivial characteristic numbers. Hence we have a decomposition
\[(\mathbf{E}_{*},\theta)=\oplus_{j\in\Lambda}(\mathbf{E}_{j*},\theta_{j})\otimes \mathbb{C}^{m_{j}}\]
where \((\mathbf{E}_{j*},\theta_{j})\) is a \(\mu_{L}\)-stable regular filtered Higgs bundle with trivial characteristic numbers. Put \(r(\varrho):=\sum_{j\in\Lambda}m_{j}\). Then \(r(\varrho)\leq\operatorname{rank}E\). For any \(t\in\mathbb{R}^{*}\), we know that \((E,t\theta)\) is still tame and pure imaginary and thus \(\varrho_{t}\) is also reductive. Since \(\Phi(\{t\in\mathbb{R}^{*}\mid|t|<1\})\) is relatively compact, there exists a sequence \(t_{n}\in\mathbb{R}^{*}\) converging to zero such that \(\lim_{n\to\infty}[\varrho_{t_{n}}]\) exists; denote this limit by \([\varrho_{0}]\). Moreover, \(\varrho_{0}\) corresponds to some tame harmonic bundle. There are two possibilities:
* For each \(j\in\Lambda\), \((\mathbf{E}_{j*},t_{n}\theta_{j})\) converges to some \(\mu_{L}\)-stable regular filtered Higgs sheaf (cf. [13, p. 96] for the definition of convergence). Then by [13, Proposition 10.3], \(\varrho_{0}\) underlies a \(\mathbb{C}\)-VHS.
* For some \(j\in\Lambda\), \((\mathbf{E}_{j*},t_{n}\theta_{j})\) converges to some \(\mu_{L}\)-semistable regular filtered Higgs sheaf which is not \(\mu_{L}\)-stable. Then by [13, Lemma 10.4], \(r(\varrho)<r(\varrho_{0})\). In other words, letting \(\varrho_{j}\) be the representation corresponding to \((\mathbf{E}_{j*},\theta_{j})\) and \(\varrho_{j,t}\) be its deformation under the \(\mathbb{C}^{*}\)-action, the limit \(\lim_{n\to\infty}\varrho_{j,t_{n}}\) exists, denoted by \(\varrho_{j,0}\). Then \(\varrho_{j,0}\) corresponds to some tame harmonic bundle, and thus also to a \(\mu_{L}\)-polystable regular filtered Higgs bundle which is not stable. In this case, we further deform \(\varrho_{0}\) until we achieve the first case.
In summary, Mochizuki's result implies the following, which we shall refer to as _Mochizuki's ubiquity_, analogous to the term _Simpson's ubiquity_ for the compact case (cf. [10]).
**Theorem 2.8**.: _Let \(X\) be a smooth quasi-projective variety. Consider \(\mathfrak{C}\), a Zariski closed subset of \(M_{\mathrm{B}}(X,G)(\mathbb{C})\), where \(G\) denotes a complex reductive group. If \(\mathfrak{C}\) is invariant under the action of \(\mathbb{R}^{*}\) defined above, then each geometrically connected component of \(\mathfrak{C}(\mathbb{C})\) contains a \(\mathbb{C}\)-point \([\varrho]\) such that \(\varrho:\pi_{1}(X)\to\mathrm{GL}_{N}(\mathbb{C})\) is a reductive representation that underlies a \(\mathbb{C}\)-variation of Hodge structure._
### Pullback of reductive representations commutes with \(\mathbb{C}^{*}\)-action
In this subsection, we prove that the \(\mathbb{C}^{*}\)-action on character varieties commutes with pullback.
**Proposition 2.9**.: _Let \(f:Y\to X\) be a morphism of smooth quasi-projective varieties. If \(\varrho:\pi_{1}(X)\to\mathrm{GL}_{N}(\mathbb{C})\) is a reductive representation, then for any \(t\in\mathbb{C}^{*}\), we have_
\[f^{*}(t.[\varrho])=t.[f^{*}\varrho]. \tag{2.9}\]
Proof.: Let \(\overline{X}\) and \(\overline{Y}\) be smooth projective compactifications of \(X\) and \(Y\) such that \(D:=\overline{X}\backslash X\) and \(D^{\prime}:=\overline{Y}\backslash Y\) are simple normal crossing divisors. We may assume that \(f\) extends to a morphism \(f:\overline{Y}\to\overline{X}\).
By Theorem 1.6, there is a tame pure imaginary harmonic bundle \((E,\theta,h)\) on \(X\) such that \(\varrho\) is the monodromy representation of the flat connection \(\nabla_{h}+\theta+\theta_{h}^{\dagger}\). Then \(f^{*}\varrho\) is the monodromy representation of \(f^{*}(\nabla_{h}+\theta+\theta_{h}^{\dagger})\), which is the flat connection corresponding to the harmonic bundle \((f^{*}E,f^{*}\theta,f^{*}h)\).
Let \((\mathbf{E}_{*},\theta)\) be the regular filtered Higgs bundle on \((\overline{X},D)\) induced by \((E,\theta,h)\) as defined in § 2.1. According to §§ 2.2 and 2.3, we can define the pullback \((f^{*}\mathbf{E}_{*},f^{*}\theta)\), which also forms a regular filtered Higgs bundle on \((\overline{Y},D^{\prime})\) with trivial characteristic numbers.
Fix some ample line bundle \(L\) on \(\overline{X}\). It is worth noting that for any \(t\in\mathbb{C}^{*}\), \((\mathbf{E}_{*},t\theta)\) is \(\mu_{L}\)-polystable with trivial characteristic numbers. By [13, Theorem 9.4], there is a pluriharmonic metric \(h_{t}\) for \((E,t\theta)\) adapted to the parabolic structure of \((\mathbf{E}_{*},t\theta)\). Recall that in § 2.4, \(\varrho_{t}\) is defined to be the monodromy representation of the flat connection \(\nabla_{h_{t}}+t\theta+\bar{t}\theta_{h_{t}}^{\dagger}\). It follows that \(f^{*}\varrho_{t}\) is the monodromy representation of the flat connection \(f^{*}(\nabla_{h_{t}}+t\theta+\bar{t}\theta_{h_{t}}^{\dagger})\).
By virtue of Proposition 2.5, the regular filtered Higgs bundle \((f^{*}\mathbf{E}_{*},tf^{*}\theta)\) is the prolongation of the tame harmonic bundle \((f^{*}E,tf^{*}\theta,f^{*}h_{t})\) using the norm growth defined in § 2.1. By the definition of the \(\mathbb{C}^{*}\)-action, \((f^{*}\varrho)_{t}\) is the monodromy representation of the flat connection \(\nabla_{f^{*}h_{t}}+tf^{*}\theta+\bar{t}(f^{*}\theta)_{f^{*}h_{t}}^{\dagger}\), which is equal to \(f^{*}(\nabla_{h_{t}}+t\theta+\bar{t}\theta_{h_{t}}^{\dagger})\). It follows that \((f^{*}\varrho)_{t}=f^{*}\varrho_{t}\). This proves (2.9).
As a direct consequence of Proposition 2.9, we have the following result.
**Corollary 2.10**.: _Let \(f:Y\to X\) be a morphism of smooth quasi-projective varieties. Let \(M\subset M_{\mathrm{B}}(X,N)(\mathbb{C})\) be a subset which is invariant by \(\mathbb{C}^{*}\)-action (or \(\mathbb{R}^{*}\)-action). Then for the morphism \(f^{*}:M_{\mathrm{B}}(X,N)\to M_{\mathrm{B}}(Y,N)\) between character varieties, \(f^{*}M\) is also invariant by \(\mathbb{C}^{*}\)-action (or \(\mathbb{R}^{*}\)-action)._
## 3 Construction of the Shafarevich morphism
The aim of this section is to establish the proof of Theorem A. Additionally, the techniques developed in this section will play a crucial role in § 4, which is dedicated to the proof of the reductive Shafarevich conjecture.
### Factorizing through non-rigidity
In this subsection, \(X\) is assumed to be a smooth quasi-projective variety. Let \(\mathfrak{C}\subset M_{\mathrm{B}}(X,N)(\mathbb{C})\) be a \(\bar{\mathbb{Q}}\)-constructible subset. Since \(M_{\mathrm{B}}(X,N)\) is an affine scheme of finite type defined over \(\mathbb{Q}\), \(\mathfrak{C}\) is defined over some number field \(k\).
Let us utilize Lemma 1.29 and Theorem 1.27 to construct a reduction map \(s_{\mathfrak{C}}:X\to S_{\mathfrak{C}}\) associated with \(\mathfrak{C}\), which allows us to factorize non-rigid representations into those underlying \(\mathbb{C}\)-VHS with discrete monodromy.
**Definition 3.1** (Reduction map).: The _reduction map_ \(s_{\mathfrak{C}}:X\to S_{\mathfrak{C}}\) is obtained through the simultaneous Stein factorization of the reductions \(\{s_{\tau}:X\to S_{\tau}\}_{[\tau]\in\mathfrak{C}(K)}\), employing Lemma 1.29. Here \(\tau:\pi_{1}(X)\to\operatorname{GL}_{N}(K)\) ranges over all reductive representations with \(K\) a non-archimedean local field containing \(k\) such that \([\tau]\in\mathfrak{C}(K)\), and \(s_{\tau}:X\to S_{\tau}\) is the reduction map constructed in Theorem 1.27.
Note that \(s_{\mathfrak{C}}\) is a dominant morphism with connected general fibers and, for each \(\tau\) as above, the reduction \(s_{\tau}\) factors as \(s_{\tau}=e_{\tau}\circ s_{\mathfrak{C}}\) for some morphism \(e_{\tau}:S_{\mathfrak{C}}\to S_{\tau}\).
The reduction map \(s_{\mathfrak{E}}:X\to S_{\mathfrak{E}}\) employs the following crucial property, thanks to Theorem 1.27.
**Lemma 3.2**.: _Let \(F\subset X\) be a connected Zariski closed subset such that \(s_{\mathfrak{C}}(F)\) is a single point in \(S_{\mathfrak{C}}\). Then for any non-archimedean local field \(L\) and any reductive representation \(\tau:\pi_{1}(X)\to\operatorname{GL}_{N}(L)\) with \([\tau]\in\mathfrak{C}(L)\), the image \(\tau(\operatorname{Im}[\pi_{1}(F)\to\pi_{1}(X)])\) is a bounded subgroup of \(\operatorname{GL}_{N}(L)\)._
Proof.: By our construction \(s_{\tau}=e_{\tau}\circ s_{\mathfrak{C}}\), so \(s_{\tau}(F)\) is a single point. Hence by Theorem 1.27, \(\tau(\operatorname{Im}[\pi_{1}(F)\to\pi_{1}(X)])\) is bounded.
Recall the following definition in [10, Definition 2.2.1].
**Definition 3.3** (Bounded set).: Let \(K\) be a non-archimedean local field. Let \(X\) be an affine \(K\)-scheme. A subset \(B\subset X(K)\) is _bounded_ if for every \(f\in K[X]\), the set \(\{v(f(b))\mid b\in B\}\) is bounded below, where \(v:K\to\mathbb{R}\) is the valuation of \(K\).
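Two standard illustrations of this notion (recalled here for convenience): for \(X=\mathbb{A}^{1}\), a subset \(B\subset K\) is bounded if and only if \(\inf_{b\in B}v(b)>-\infty\), i.e.

\[B\subset\varpi^{-m}\mathcal{O}_{K}\quad\text{for some }m\geq 0,\]

where \(\varpi\) is a uniformizer and \(\mathcal{O}_{K}\) the valuation ring. Likewise, a subgroup of \(\operatorname{GL}_{N}(K)\) is bounded if and only if it stabilizes a lattice in \(K^{N}\), i.e. is conjugate into \(\operatorname{GL}_{N}(\mathcal{O}_{K})\); this is the form in which boundedness enters the proof of Lemma 3.5 below.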
We have the following lemma in [10, Fact 2.2.3].
**Lemma 3.4**.: _If \(B\subset X(K)\) is closed, then \(B\) is bounded if and only if \(B\) is compact with respect to the analytic topology of \(X(K)\). If \(f:X\to Y\) is a morphism of affine \(K\)-schemes of finite type, then \(f\) carries bounded subsets of \(X(K)\) to bounded subsets in \(Y(K)\)._
We will establish a lemma that plays a crucial role in the proof of Proposition 3.9 and is also noteworthy in its own regard.
**Lemma 3.5**.: _Let \(\varrho:\pi_{1}(X)\to\operatorname{GL}_{N}(K)\) be a representation. Then \(\varrho\) is bounded if and only if its semisimplification \(\varrho^{ss}:\pi_{1}(X)\to\operatorname{GL}_{N}(\bar{K})\) is bounded._
Proof.: Note that there exists some \(g\in\operatorname{GL}_{N}(\bar{K})\) such that
\[g\varrho g^{-1}=\left[\begin{array}{cccc}\varrho_{1}&a_{12}&\cdots&a_{1n} \\ 0&\varrho_{2}&\cdots&a_{2n}\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&\varrho_{n}\end{array}\right] \tag{3.1}\]
where \(\varrho_{i}:\pi_{1}(X)\to\operatorname{GL}_{N_{i}}(\bar{K})\) is an irreducible representation and \(\sum_{i=1}^{n}N_{i}=N\). Note that \(g\varrho g^{-1}\) is unbounded if and only if \(\varrho\) is unbounded. Hence we may assume from the beginning that \(\varrho\) has the form of (3.1). The semisimplification of \(\varrho\) is defined by
\[\varrho^{ss}=\left[\begin{array}{cccc}\varrho_{1}&0&\cdots&0\\ 0&\varrho_{2}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&\varrho_{n}\end{array}\right]\]
It is obvious that if \(\varrho\) is bounded, then \(\varrho^{ss}\) is bounded.
Assume now \(\varrho^{ss}\) is bounded. Then each \(\varrho_{i}\) is bounded. Let \(L\) be a finite extension of \(K\) such that \(\varrho\) is defined over \(L\). Then \(\varrho_{i}(\pi_{1}(X))\) is contained in some maximal compact
subgroup of \(\operatorname{GL}_{N_{i}}(L)\). Since all maximal compact subgroups of \(\operatorname{GL}_{N_{i}}(L)\) are conjugate to \(\operatorname{GL}_{N_{i}}(\mathcal{O}_{L})\), there exists \(g_{i}\in\operatorname{GL}_{N_{i}}(L)\) such that \(g_{i}\varrho_{i}g_{i}^{-1}:\pi_{1}(X)\to\operatorname{GL}_{N_{i}}(\mathcal{O}_{L})\). Define
\[\tau:=\left[\begin{array}{cccc}g_{1}&0&\cdots&0\\ 0&g_{2}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&g_{n}\end{array}\right]\left[\begin{array}{cccc}\varrho_{1}&a_{ 12}&\cdots&a_{1n}\\ 0&\varrho_{2}&\cdots&a_{2n}\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&\varrho_{n}\end{array}\right]\left[\begin{array}{cccc}g_{1}^{-1}& 0&\cdots&0\\ 0&g_{2}^{-1}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&g_{n}^{-1}\end{array}\right] \tag{3.2}\]
which is conjugate to \(\varrho\). Then \(\tau\) can be written as
\[\tau=\ \left[\begin{array}{cccc}g_{1}\varrho_{1}g_{1}^{-1}&h_{12}& \cdots&h_{1n}\\ 0&g_{2}\varrho_{2}g_{2}^{-1}&\cdots&h_{2n}\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&g_{n}\varrho_{n}g_{n}^{-1}\end{array}\right]\]
such that \(g_{i}\varrho_{i}g_{i}^{-1}:\pi_{1}(X)\to\operatorname{GL}_{N_{i}}(\mathcal{O}_{L})\) is irreducible. Write
\[\tau_{1}:=\left[\begin{array}{cccc}g_{1}\varrho_{1}g_{1}^{-1}&0&\cdots&0\\ 0&g_{2}\varrho_{2}g_{2}^{-1}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&g_{n}\varrho_{n}g_{n}^{-1}\end{array}\right]\]
and
\[\tau_{2}:=\left[\begin{array}{cccc}0&h_{12}&\cdots&h_{1n}\\ 0&0&\cdots&h_{2n}\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&0\end{array}\right]\]
Note that \(\tau_{2}\) is not a group homomorphism but only a map from \(\pi_{1}(X)\) to the space of \(N\times N\) matrices over \(L\).
For any matrix \(B\) with values in \(L\), we shall write \(v(B)\) for the matrix whose entries are the valuations of the corresponding entries of \(B\) under \(v:L\to\mathbb{R}\). Let \(M(B)\) denote the minimum of the entries of \(v(B)\). Then for matrices \(A,B\) with values in \(L\), one has \(M(A+B)\geq\min\{M(A),M(B)\}\) and \(M(AB)\geq M(A)+M(B)\).
Let \(x_{1},\ldots,x_{m}\) be generators of \(\pi_{1}(X)\); we may assume this generating set is closed under taking inverses. Let \(C\) be the lower bound of the entries of \(v(h_{ij}(x_{k}))\). We assume that \(C<0\), or else it is easy to see that \(\tau\) is bounded. Since each \(g_{i}\varrho_{i}g_{i}^{-1}\) takes values in \(\operatorname{GL}_{N_{i}}(\mathcal{O}_{L})\), we have \(M(\tau_{1}(x_{k}))\geq 0\) and \(M(\tau_{2}(x_{k}))\geq C\) for each generator \(x_{k}\). Then for any \(x=x_{i_{1}}\cdots x_{i_{t}}\),
\[M(\tau(x)) =M(\sum_{j_{1},\ldots,j_{t}=1,2}\tau_{j_{1}}(x_{i_{1}})\cdots\tau _{j_{t}}(x_{i_{t}}))\] \[\geq\min_{j_{1},\ldots,j_{t}=1,2}\{M(\tau_{j_{1}}(x_{i_{1}}) \cdots\tau_{j_{t}}(x_{i_{t}}))\}.\]
Note that \(\tau_{j_{1}}(x_{i_{1}})\cdots\tau_{j_{t}}(x_{i_{t}})=0\) if \(\#\{k\mid j_{k}=2\}\geq n\), since each \(\tau_{2}(x_{i})\) is strictly block upper triangular and each \(\tau_{1}(x_{i})\) is block diagonal. Hence
\[M(\tau(x))\geq\min_{j_{1},\ldots,j_{t}=1,2;\#\{k\mid j_{k}=2\}<n}\{M(\tau_{j_{1 }}(x_{i_{1}})\cdots\tau_{j_{t}}(x_{i_{t}}))\}.\]
Since \(M(\tau_{1}(x_{i}))\geq 0\) and \(M(\tau_{2}(x_{i}))\geq C\) for each \(x_{i}\), the inequality \(M(AB)\geq M(A)+M(B)\) gives \(M(\tau_{j_{1}}(x_{i_{1}})\cdots\tau_{j_{t}}(x_{i_{t}}))\geq\#\{k\mid j_{k}=2\}\cdot C\geq(n-1)C\) whenever \(\#\{k\mid j_{k}=2\}<n\) (recall \(C<0\)). Therefore, \(M(\tau(x))\geq(n-1)C\) for any \(x\in\pi_{1}(X)\), and \(\tau\) is thus bounded. Since \(\varrho\) is conjugate to \(\tau\), \(\varrho\) is also bounded. We finish the proof of the lemma.
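A minimal \(2\times 2\) illustration of this estimate (with hypothetical bounded diagonal parts \(u_{1},u_{2}:\pi_{1}(X)\to\mathcal{O}_{L}^{\times}\)): suppose

\[\tau(\gamma)=\begin{pmatrix}u_{1}(\gamma)&h(\gamma)\\ 0&u_{2}(\gamma)\end{pmatrix}.\]

The multiplicativity of \(\tau\) forces the cocycle rule \(h(\gamma\gamma^{\prime})=u_{1}(\gamma)h(\gamma^{\prime})+h(\gamma)u_{2}(\gamma^{\prime})\), whence \(v(h(\gamma\gamma^{\prime}))\geq\min\{v(h(\gamma)),v(h(\gamma^{\prime}))\}\). By induction, \(v(h(x))\geq C\) for every word \(x\) in the generators, which is precisely the bound \(M(\tau(x))\geq(n-1)C\) with \(n=2\).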
We recall the following facts about character varieties.
**Lemma 3.6**.: _Let \(K\) be an algebraically closed field of characteristic zero. Then the \(K\)-points of \(M_{\operatorname{B}}(X,N)\) are in one-to-one correspondence with the conjugacy classes of reductive representations \(\pi_{1}(X)\to\operatorname{GL}_{N}(K)\). More precisely, if \(\{\varrho_{i}:\pi_{1}(X)\to\operatorname{GL}_{N}(K)\}_{i=1,2}\) are two linear representations such that \([\varrho_{1}]=[\varrho_{2}]\in M_{\operatorname{B}}(X,N)(K)\), then the semisimplifications of \(\varrho_{1}\) and \(\varrho_{2}\) are conjugate. _
The following result is thus a consequence of Lemma 3.5.
**Lemma 3.7**.: _Let \(K\) be a non-archimedean local field. Let \(x\in M_{\mathrm{B}}(X,N)(K)\). If \(\{\varrho_{i}:\pi_{1}(X)\to\mathrm{GL}_{N}(\bar{K})\}_{i=1,2}\) are two linear representations such that \([\varrho_{1}]=[\varrho_{2}]=x\in M_{\mathrm{B}}(X,N)(\bar{K})\), then \(\varrho_{1}\) is bounded if and only if \(\varrho_{2}\) is bounded. In other words, for the GIT quotient \(\pi:R(X,N)\to M_{B}(X,N)\) where \(R(X,N)\) is the representation variety of \(\pi_{1}(X)\) into \(\mathrm{GL}_{N}\), for any \(x\in M_{B}(X,N)(\bar{K})\), the representations in \(\pi^{-1}(x)\subset R(X,N)(\bar{K})\) are either all bounded or all unbounded._
Proof.: By the assumption and Lemma 3.6, we know that the semisimplifications \(\varrho_{1}^{ss}:\pi_{1}(X)\to\mathrm{GL}_{N}(\bar{K})\) and \(\varrho_{2}^{ss}:\pi_{1}(X)\to\mathrm{GL}_{N}(\bar{K})\) are conjugate by an element \(g\in\mathrm{GL}_{N}(\bar{K})\). Therefore, there exists a finite extension \(L\) of \(K\) such that \(\varrho_{i}^{ss}\) and \(\varrho_{i}\) are all defined over \(L\) and \(g\in\mathrm{GL}_{N}(L)\). Hence \(\varrho_{1}^{ss}\) is bounded if and only if \(\varrho_{2}^{ss}\) is bounded. By Lemma 3.5, we know that \(\varrho_{i}^{ss}\) is bounded if and only if \(\varrho_{i}\) is bounded. Therefore, the lemma follows.
We thus can make the following definition.
**Definition 3.8** (Class of bounded representations).: Let \(K\) be a non-archimedean local field of characteristic zero. A point \(x\in M_{B}(X,N)(\bar{K})\) is called _a class of bounded representations_ if there exists \(\varrho:\pi_{1}(X)\to\mathrm{GL}_{N}(\bar{K})\) (and then any such \(\varrho\), by Lemma 3.7) such that \([\varrho]=x\) and \(\varrho\) is bounded.
**Proposition 3.9**.: _Let \(X\) be a smooth quasi-projective variety and let \(\mathfrak{C}\) be a \(\bar{\mathbb{Q}}\)-constructible subset of \(M_{\mathrm{B}}(X,N)\). Let \(\iota:F\to X\) be a morphism from a quasi-projective normal variety \(F\) such that \(s_{\mathfrak{C}}\circ\iota(F)\) is a point. Let \(\{\tau_{i}:\pi_{1}(X)\to\mathrm{GL}_{N}(\mathbb{C})\}_{i=1,2}\) be reductive representations such that \([\tau_{1}]\) and \([\tau_{2}]\) are in the same geometric connected component of \(\mathfrak{C}(\mathbb{C})\). Then \(\tau_{1}\circ\iota\) is conjugate to \(\tau_{2}\circ\iota\). In other words, \(j(\mathfrak{C})\) is zero-dimensional, where \(j:M_{\mathrm{B}}(X,N)\to M_{\mathrm{B}}(F,N)\) is the natural morphism of character varieties induced by \(\iota:F\to X\)._
Proof.: Let \(M_{X}\) (resp. \(M\)) be the moduli space of representations of \(\pi_{1}(X)\) (resp. \(\pi_{1}(F^{\mathrm{norm}})\)) in \(\mathrm{GL}_{N}\). Note that \(M_{X}\) and \(M\) are both affine schemes of finite type defined over \(\mathbb{Q}\). Let \(R_{X}\) (resp. \(R\)) be the affine scheme of finite type defined over \(\mathbb{Q}\) such that \(R_{X}(L)=\mathrm{Hom}(\pi_{1}(X),\mathrm{GL}_{N}(L))\) (resp. \(R(L)=\mathrm{Hom}(\pi_{1}(F),\mathrm{GL}_{N}(L))\)) for any field extension \(L/\mathbb{Q}\). Then we have the commutative diagram
\[\begin{array}{ccc}R_{X}&\longrightarrow&R\\ \Big\downarrow{\scriptstyle\pi}&&\Big\downarrow{\scriptstyle p}\\ M_{X}&\xrightarrow{\ j\ }&M\end{array}\tag{3.3}\]
where the horizontal arrows are given by \(\varrho\mapsto\varrho\circ\iota\), and \(\pi:R_{X}\to M_{X}\) and \(p:R\to M\) are the GIT quotients, which are both surjective. For any field extension \(K/\mathbb{Q}\) and any \(\varrho\in R_{X}(K)\), we write \([\varrho]:=\pi(\varrho)\in M_{X}(K)\). Let \(\mathfrak{R}:=\pi^{-1}(\mathfrak{C})\), which is a constructible subset defined over some number field \(k\). Then \(\tau_{i}\in\mathfrak{R}(\mathbb{C})\).
**Claim 3.10**.: _Let \(\mathfrak{R}^{\prime}\) be any geometric irreducible component of \(\mathfrak{R}\). Then \(j\circ\pi(\mathfrak{R}^{\prime})\) is zero dimensional._
Proof.: Assume, for the sake of contradiction, that \(j\circ\pi(\mathfrak{R}^{\prime})\) is positive-dimensional. After replacing \(k\) by a finite extension, we may assume that \(\mathfrak{R}^{\prime}\) is defined over \(k\). Since \(M\) is an affine \(\mathbb{Q}\)-scheme of finite type, there exists a \(k\)-morphism \(\psi:M\to\mathbb{A}^{1}\) such that the image \(\psi\circ j\circ\pi(\mathfrak{R}^{\prime})\) is Zariski dense in \(\mathbb{A}^{1}\). After replacing \(k\) by a further finite extension, we can find a locally closed irreducible curve \(C\subset\mathfrak{R}^{\prime}\) such that the restriction \(\psi\circ j\circ\pi|_{C}:C\to\mathbb{A}^{1}\) is a generically finite \(k\)-morphism. We take a Zariski open subset \(U\subset\mathbb{A}^{1}\) such that \(\psi\circ j\circ\pi\) is finite over \(U\). Let \(\mathfrak{p}\) be a prime ideal of the ring of integers \(\mathcal{O}_{k}\) and let \(K\) be the corresponding non-archimedean completion of \(k\). In the following, we shall work over \(K\).
Let \(x\in U(K)\) be a point, and let \(y\in C(\bar{K})\) be a point over \(x\). Then \(y\) is defined over some extension of \(K\) whose extension degree is bounded by the degree of \(\psi\circ j\circ\pi|_{C}:C\to\mathbb{A}^{1}\). Note that there are only finitely many such field extensions. Hence there exists a finite extension \(L/K\) such that the points over \(U(K)\) are all contained in \(C(L)\). Since \(U(K)\subset\mathbb{A}^{1}(L)\) is unbounded, the image \(\psi\circ j\circ\pi(C(L))\subset\mathbb{A}^{1}(L)\) is unbounded.
Recall that \(p:R\to M\) denotes the GIT quotient. Let \(R_{0}\) be the set of bounded representations in \(R(L)\). Recall that by [10], \(M_{0}:=p(R_{0})\) is compact in \(M(L)\) with respect to the analytic topology; hence \(M_{0}\) is bounded by Lemma 3.4. By Lemma 3.4 once again, \(\psi(M_{0})\) is a bounded subset of \(\mathbb{A}^{1}(L)\). Recall that \(\psi\circ j\circ\pi(C(L))\subset\mathbb{A}^{1}(L)\) is unbounded. Therefore, there exists \(\varrho\in C(L)\) such that \(\psi\circ j([\varrho])\not\in\psi(M_{0})\). Note that \([\varrho\circ\iota]=j([\varrho])\) by (3.3). Hence \([\varrho\circ\iota]\not\in M_{0}\), which implies that \(\varrho\circ\iota\not\in R_{0}\). By the definition of \(R_{0}\), \(\varrho\circ\iota\) is unbounded.
Let \(\varrho^{ss}:\pi_{1}(X)\to\operatorname{GL}_{N}(\bar{L})\) be the semisimplification of \(\varrho\). Then \([\varrho]=[\varrho^{ss}]\in\mathfrak{C}(\bar{L})\) by Lemma 3.6. Therefore, \([\varrho\circ\iota]=[\varrho^{ss}\circ\iota]\in M(\bar{L})\) by (3.3). By Lemma 3.7, \(\varrho^{ss}\circ\iota:\pi_{1}(F)\to\operatorname{GL}_{N}(\bar{L})\) is also unbounded. Note that \(\varrho^{ss}\circ\iota\) is reductive by Theorem 1.7. Since \(\pi_{1}(F)\) is finitely generated, there exist a finite extension \(L^{\prime}\) of \(L\) such that \(\varrho^{ss}\) is defined over \(L^{\prime}\). However, by Lemma 3.2, \(\varrho^{ss}\circ\iota\) is always bounded. We obtain a contradiction and thus \(j\circ\pi(\mathfrak{R}^{\prime})\) is zero dimensional.
Alternatively, one can apply [11, Corollary 4.3] instead of Lemma 3.7. As \(\varrho\in C(L)\), its image \([\varrho]\in M_{X}(L)\). Consider the fiber \(\pi^{-1}([\varrho])\), which is an \(L\)-variety. Its closed orbit is defined over \(L\) by Galois descent. As \(\pi^{-1}([\varrho])\) contains the \(L\)-point \(\varrho\), the closed orbit in \(\pi^{-1}([\varrho])\) has an \(L\)-point \(\varrho^{\prime}:\pi_{1}(X)\to\operatorname{GL}_{N}(L)\) as well by [11, Corollary 4.3]. By Lemma 3.6, \(\varrho^{\prime}\) is reductive and \([\varrho^{\prime}]=[\varrho]\). Hence \([\varrho^{\prime}\circ\iota]=[\varrho\circ\iota]\not\in M_{0}\). Therefore, \(\varrho^{\prime}\circ\iota:\pi_{1}(F)\to\operatorname{GL}_{N}(L)\) is unbounded by our definition of \(M_{0}\). However, by the construction of \(s_{\mathfrak{C}}:X\to S_{\mathfrak{C}}\) in Definition 3.1 and Lemma 3.2, \(\varrho^{\prime}\circ\iota\) is always bounded. We obtain a contradiction, and thus \(j\circ\pi(\mathfrak{R}^{\prime})\) is zero-dimensional.
Let \(\{\tau_{i}:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\}_{i=1,2}\) be reductive representations such that \([\tau_{1}]\) and \([\tau_{2}]\) are contained in the same connected component \(\mathfrak{C}^{\prime}\) of \(\mathfrak{C}(\mathbb{C})\). We aim to prove that \(j(\mathfrak{C}^{\prime})\) is a point in \(M(\mathbb{C})\).
Consider an irreducible component \(\mathfrak{C}^{\prime\prime}\) of \(\mathfrak{C}^{\prime}\). We can choose an irreducible component \(Z\) of \(\pi^{-1}(\mathfrak{C}^{\prime\prime})\) such that \(\pi(Z)\) is dense in \(\mathfrak{C}^{\prime\prime}\). It follows that \(Z\) is an irreducible component of \(\mathfrak{R}(\mathbb{C})\). By Claim 3.10, we know that \(j\circ\pi(Z)\) is a point in \(M(\mathbb{C})\). Thus, \(j(\mathfrak{C}^{\prime\prime})\) is also a point in \(M(\mathbb{C})\).
Consequently, \(j(\mathfrak{C}^{\prime})\) is a point in \(M(\mathbb{C})\). As a result, we have \([\tau_{1}\circ\iota]=j([\tau_{1}])=j([\tau_{2}])=[\tau_{2}\circ\iota]\). By Theorem 1.7, \(\tau_{1}\circ\iota\) and \(\tau_{2}\circ\iota\) are reductive, and according to Lemma 3.6, they are conjugate to each other. We have established the proposition.
We will need the following lemma on the intersection of kernels of representations.
**Lemma 3.11**.: _Let \(X\) be a quasi-projective normal variety and let \(\mathfrak{C}\) be a constructible subset of \(M_{\mathrm{B}}(X,N)(\mathbb{C})\). Then we have_
\[\cap_{[\varrho]\in\mathfrak{C}}\ker\varrho=\cap_{[\varrho]\in\overline{ \mathfrak{C}}}\ker\varrho, \tag{3.4}\]
_where \(\varrho\)'s are reductive representations of \(\pi_{1}(X)\) into \(\operatorname{GL}_{N}(\mathbb{C})\)._
Proof.: Let \(M_{X}\) be the moduli space of representations of \(\pi_{1}(X)\) in \(\operatorname{GL}_{N}\). Let \(R_{X}\) be the affine scheme of finite type such that \(R_{X}(L)=\operatorname{Hom}(\pi_{1}(X),\operatorname{GL}_{N}(L))\) for any field extension \(L/\mathbb{Q}\). We write \(M:=M_{X}(\mathbb{C})\) and \(R:=R_{X}(\mathbb{C})\). Then the GIT quotient \(\pi:R\to M\) is a surjective morphism. It follows that \(\pi^{-1}(\mathfrak{C})\) is a \(\operatorname{GL}_{N}(\mathbb{C})\)-invariant subset, where \(\operatorname{GL}_{N}(\mathbb{C})\) acts on \(R\) by conjugation. Define \(H:=\cap_{[\varrho]\in\mathfrak{C}}\ker\varrho\), where the \(\varrho\)'s are reductive representations of \(\pi_{1}(X)\) into \(\operatorname{GL}_{N}(\mathbb{C})\). Pick any \(\gamma\in H\). Then the set \(Z_{\gamma}:=\{\varrho\in R\mid\varrho(\gamma)=1\}\) is a Zariski closed subset of \(R\). Moreover, \(Z_{\gamma}\) is \(\operatorname{GL}_{N}(\mathbb{C})\)-invariant. Define \(Z:=\cap_{\gamma\in H}Z_{\gamma}\). Then \(Z\) is also \(\operatorname{GL}_{N}(\mathbb{C})\)-invariant. Therefore, \(\pi(Z)\) is a Zariski closed subset of \(M\). Note that \(\mathfrak{C}\subset\pi(Z)\). Therefore, \(\overline{\mathfrak{C}}\subset\pi(Z)\). Note that for any reductive \(\varrho:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\) such that \([\varrho]\in\pi(Z)\), we have \(\varrho(\gamma)=1\) for any \(\gamma\in H\). It follows that (3.4) holds.
Lastly, let us prove the main result of this subsection. This result will serve as a crucial cornerstone in the proofs of Theorems A to C.
**Proposition 3.12**.: _Let \(X\) be a smooth quasi-projective variety. Let \(\mathfrak{C}\) be a constructible subset of \(M_{\mathrm{B}}(X,N)(\mathbb{C})\), defined over \(\mathbb{Q}\), such that \(\mathfrak{C}\) is invariant under \(\mathbb{R}^{*}\)-action. When
\(X\) is non-compact, we further assume that \(\mathfrak{C}\) is closed. Then there exist reductive representations \(\{\sigma^{\text{\tiny{VHS}}}_{i}:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\}_{i=1,\ldots,m}\) such that each \(\sigma^{\text{\tiny{VHS}}}_{i}\) underlies a \(\mathbb{C}\)-VHS, and for a morphism \(\iota:Z\to X\) from any quasi-projective normal variety \(Z\) with \(s_{\mathfrak{C}}\circ\iota(Z)\) being a point, the following properties hold:_
1. _For_ \(\sigma:=\oplus_{i=1}^{m}\sigma^{\text{\tiny{VHS}}}_{i}\)_,_ \(\iota^{*}\sigma(\pi_{1}(Z))\) _is discrete in_ \(\prod_{i=1}^{m}\operatorname{GL}_{N}(\mathbb{C})\)_._
2. _For each reductive representation_ \(\tau:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\) _with_ \([\tau]\in\mathfrak{C}(\mathbb{C})\)_,_ \(\iota^{*}\tau\) _is conjugate to some_ \(\iota^{*}\sigma^{\text{\tiny{VHS}}}_{i}\)_._
3. _For each_ \(\sigma^{\text{\tiny{VHS}}}_{i}\)_, there exists a reductive representation_ \(\tau:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\) _with_ \([\tau]\in\mathfrak{C}(\mathbb{C})\) _such that_ \(\iota^{*}\tau\) _is conjugate to_ \(\iota^{*}\sigma^{\text{\tiny{VHS}}}_{i}\)_._
4. _For every_ \(i=1,\ldots,m\)_, we have_ (3.5) \[\cap_{[\varrho]\in\mathfrak{C}(\mathbb{C})}\ker\varrho\subset\ker\sigma^{\text{\tiny{VHS}}}_{i} \tag{3.5}\] _where_ \(\varrho:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\) _varies among all reductive representations such that_ \([\varrho]\in\mathfrak{C}(\mathbb{C})\)_._
Proof.: Let \(\mathfrak{C}_{1},\ldots,\mathfrak{C}_{\ell}\) be all geometric connected components of \(\mathfrak{C}\) which are defined over \(\bar{\mathbb{Q}}\). We can pick reductive representations \(\{\varrho_{i}:\pi_{1}(X)\to\operatorname{GL}_{N}(\bar{\mathbb{Q}})\}_{i=1, \ldots,\ell}\) such that \([\varrho_{i}]\in\mathfrak{C}_{i}(\bar{\mathbb{Q}})\) for every \(i\). Since \(\pi_{1}(X)\) is finitely generated, there exists a number field \(k\) which is a Galois extension of \(\mathbb{Q}\) such that \(\varrho_{i}:\pi_{1}(X)\to\operatorname{GL}_{N}(k)\) for every \(\varrho_{i}\).
Let \(\operatorname{Ar}(k)\) be all archimedean places of \(k\) with \(w_{1}\) the identity map. Then for any \(w\in\operatorname{Ar}(k)\) there exists \(a\in\operatorname{Gal}(k/\mathbb{Q})\) such that \(w=w_{1}\circ a\). Note that \(\mathfrak{C}\) is defined over \(\mathbb{Q}\). Then \(\mathfrak{C}\) is invariant under the conjugation \(a\). Therefore, for any \(w:k\to\mathbb{C}\) in \(\operatorname{Ar}(k)\), letting \(\varrho_{i,w}:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\) be the composition \(w\circ\varrho_{i}\), we have \([\varrho_{i,w}]\in\mathfrak{C}(\mathbb{C})\).
For any \(t\in\mathbb{R}^{*}\), we consider the twist \(\varrho_{i,w,t}:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\) of \(\varrho_{i,w}\) under the \(\mathbb{R}^{*}\)-action defined in § 2.4. Then \(\varrho_{i,w,t}\) is also reductive by the arguments in § 2.4. Since we assume that \(\mathfrak{C}(\mathbb{C})\) is invariant under the \(\mathbb{R}^{*}\)-action, it follows that \([\varrho_{i,w,t}]\in\mathfrak{C}(\mathbb{C})\). By Lemma 2.7, \([\varrho_{i,w,t}]\) is a continuous deformation of \([\varrho_{i,w}]\). Hence they lie in the same geometric connected component of \(\mathfrak{C}(\mathbb{C})\), and by Proposition 3.9 we conclude that \([\iota^{*}\varrho_{i,w,t}]=[\iota^{*}\varrho_{i,w}]\) for any \(t\in\mathbb{R}^{*}\).
We first assume that \(X\) is compact. According to [28], \(\lim_{t\to 0}[\varrho_{i,w,t}]\) exists, and there exists a reductive \(\varrho^{\text{\tiny{VHS}}}_{i,w}:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\) such that \([\varrho^{\text{\tiny{VHS}}}_{i,w}]=\lim_{t\to 0}[\varrho_{i,w,t}]\). Moreover, \(\varrho^{\text{\tiny{VHS}}}_{i,w}\) underlies a \(\mathbb{C}\)-VHS. Therefore, \([\iota^{*}\varrho_{i,w}]=\lim_{t\to 0}[\iota^{*}\varrho_{i,w,t}]=[\iota^{*}\varrho^{\text{\tiny{VHS}}}_{i,w}]\). Since \([\varrho_{i,w,t}]\in\mathfrak{C}(\mathbb{C})\) for any \(t\in\mathbb{R}^{*}\), it follows that \([\varrho^{\text{\tiny{VHS}}}_{i,w}]\in\overline{\mathfrak{C}}(\mathbb{C})\). By eq. (3.4), we conclude
\[\cap_{[\varrho]\in\mathfrak{C}}\ker\varrho\subset\ker\varrho^{\text{\tiny{VHS}}}_{i,w}. \tag{3.6}\]
Assume now \(X\) is non-compact. As we assume that \(\mathfrak{C}\) is closed and invariant under the \(\mathbb{R}^{*}\)-action, by Theorem 2.8, we can choose a reductive representation \(\varrho^{\text{\tiny{VHS}}}_{i,w}:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\) such that
* it underlies a \(\mathbb{C}\)-VHS;
* \([\varrho^{\text{\tiny{VHS}}}_{i,w}]\) and \([\varrho_{i,w}]\) are in the same geometric connected component of \(\mathfrak{C}(\mathbb{C})\).
Note that (3.6) is satisfied automatically. By Proposition 3.9, we have \([\iota^{*}\varrho_{i,w}]=[\iota^{*}\varrho^{\text{\tiny{VHS}}}_{i,w}]\).
In summary, we have constructed reductive representations \(\{\varrho^{\text{\tiny{VHS}}}_{i,w}:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\}_{i=1,\ldots,\ell;\,w\in\operatorname{Ar}(k)}\) in both the compact and non-compact cases. Each of these representations underlies a \(\mathbb{C}\)-VHS and satisfies \([\iota^{*}\varrho_{i,w}]=[\iota^{*}\varrho^{\text{\tiny{VHS}}}_{i,w}]\) and (3.6).
Let \(v\) be any non-archimedean place of \(k\) and let \(k_{v}\) be the non-archimedean completion of \(k\) with respect to \(v\). Write \(\varrho_{i,v}:\pi_{1}(X)\to\operatorname{GL}_{N}(k_{v})\) for the representation induced by \(\varrho_{i}\). By the construction of \(s_{\mathfrak{C}}\), it follows that \(\iota^{*}\varrho_{i,v}(\pi_{1}(Z))\) is bounded. Therefore, we have a factorization
\[\iota^{*}\varrho_{i}:\pi_{1}(Z)\to\operatorname{GL}_{N}(\mathcal{O}_{k}).\]
Note that \(\operatorname{GL}_{N}(\mathcal{O}_{k})\) embeds into \(\prod_{w\in\operatorname{Ar}(k)}\operatorname{GL}_{N}(\mathbb{C})\) as a discrete subgroup by [22, Proposition 6.1.3]. It follows that for the product representation
\[\prod_{w\in\operatorname{Ar}(k)}\iota^{*}\varrho_{i,w}:\pi_{1}(Z)\to\prod_{w\in \operatorname{Ar}(k)}\operatorname{GL}_{N}(\mathbb{C}),\]
its image is discrete.
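For intuition, here is the standard arithmetic picture behind this discreteness; the choice of field below is our own illustration, not taken from the text. For \(k=\mathbb{Q}(\sqrt{2})\), with the two archimedean embeddings \(w_{1},w_{2}\), the ring of integers \(\mathcal{O}_{k}=\mathbb{Z}[\sqrt{2}]\) embeds in \(\mathbb{C}^{2}\) via
\[a+b\sqrt{2}\ \longmapsto\ \big(a+b\sqrt{2},\;a-b\sqrt{2}\big),\qquad a,b\in\mathbb{Z}.\]
Since \(a\) and \(b\) are linear functions of the two coordinates, any bounded subset of \(\mathbb{C}^{2}\) meets the image in finitely many points, so the image is discrete; applying this entrywise to matrices yields the discreteness of \(\operatorname{GL}_{N}(\mathcal{O}_{k})\) in \(\prod_{w\in\operatorname{Ar}(k)}\operatorname{GL}_{N}(\mathbb{C})\), as asserted by [22, Proposition 6.1.3].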
Since \(Z\) is normal, by Theorem 1.7, both \(\iota^{*}\varrho_{i,w}\) and \(\iota^{*}\varrho_{i,w}^{\text{\tiny{VHS}}}\) are reductive. Since \([\iota^{*}\varrho_{i,w}]=[\iota^{*}\varrho_{i,w}^{\text{\tiny{VHS}}}]\), it follows that \(\iota^{*}\varrho_{i,w}\) is conjugate to \(\iota^{*}\varrho_{i,w}^{\text{\tiny{VHS}}}\) by Lemma 3.6. Consequently, \(\prod_{w\in\operatorname{Ar}(k)}\iota^{*}\varrho_{i,w}^{\text{\tiny{VHS}}}:\pi_{1}(Z)\to\prod_{w\in\operatorname{Ar}(k)}\operatorname{GL}_{N}(\mathbb{C})\) has discrete image. Consider the product representation of the \(\varrho_{i,w}^{\text{\tiny{VHS}}}\)
\[\sigma:=\prod_{i=1}^{\ell}\prod_{w\in\operatorname{Ar}(k)}\varrho_{i,w}^{ \text{\tiny{VHS}}}:\pi_{1}(X)\to\prod_{i=1}^{\ell}\prod_{w\in\operatorname{Ar} (k)}\operatorname{GL}_{N}(\mathbb{C}).\]
Then \(\sigma\) underlies a \(\mathbb{C}\)-VHS and \(\iota^{*}\sigma:\pi_{1}(Z)\to\prod_{i=1}^{\ell}\prod_{w\in\operatorname{Ar}(k )}\operatorname{GL}_{N}(\mathbb{C})\) has discrete image.
Let \(\tau:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\) be any reductive representation such that \([\tau]\in\mathfrak{C}(\mathbb{C})\). Then \([\tau]\in\mathfrak{C}_{i}(\mathbb{C})\) for some \(i\). By Proposition 3.9, it follows that \([\iota^{*}\tau]=[\iota^{*}\varrho_{i,w_{1}}]=[\iota^{*}\varrho_{i,w_{1}}^{ \text{\tiny{VHS}}}].\) By Theorem 1.7 and Lemma 3.6 once again, \(\iota^{*}\tau\) is conjugate to \(\iota^{*}\varrho_{i,w_{1}}^{\text{\tiny{VHS}}}\). The proposition is proved if we let \(\{\sigma_{i}^{\text{\tiny{VHS}}}:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C })\}_{i=1,\ldots,m}\) be \(\{\varrho_{i,w}^{\text{\tiny{VHS}}}:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{ C})\}_{i=1,\ldots,\ell;w\in\operatorname{Ar}(k)}\).
**Remark 3.13**.: In the proof of Proposition 3.12, we take the Galois conjugate of \(\mathfrak{C}\subset M_{\mathrm{B}}(X,N)\) under \(a\in\operatorname{Gal}(k/\mathbb{Q})\). If \(\mathfrak{C}\) is not defined over \(\mathbb{Q}\), it is not known whether \(a(\mathfrak{C})\subset M_{\mathrm{B}}(X,N)\) is \(\mathbb{R}^{*}\)-invariant. This is why we include the assumption that \(\mathfrak{C}\) is defined over \(\mathbb{Q}\) in our proof, whereas Eyssidieux disregarded such a condition in [10]. It seems that this condition should also be necessary in [10].
### Infinite monodromy at infinity
When considering a non-compact quasi-projective variety \(X\), it is important to note that the Shafarevich conjecture fails in simple examples. For instance, take \(X:=A\backslash\{0\}\), where \(A\) is an abelian surface. Its universal covering \(\widetilde{X}\) is \(\mathbb{C}^{2}-\Gamma\), where \(\Gamma\) is a lattice in \(\mathbb{C}^{2}\). Then \(\widetilde{X}\) is not holomorphically convex. Therefore, additional conditions on the fundamental groups at infinity are necessary to address this issue.
**Definition 3.14** (Infinite monodromy at infinity).: Let \(X\) be a quasi-projective normal variety and let \(\overline{X}\) be a projective compactification of \(X\). We say a subset \(M\subset M_{\mathrm{B}}(X,N)(\mathbb{C})\)_has infinite monodromy at infinity_ if for any holomorphic map \(\gamma:\mathbb{D}\to\overline{X}\) with \(\gamma^{-1}(\overline{X}\setminus X)=\{0\}\), there exists a reductive \(\varrho:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\) such that \([\varrho]\in M\) and \(\gamma^{*}\varrho:\pi_{1}(\mathbb{D}^{*})\to\operatorname{GL}_{N}(\mathbb{C})\) has infinite image.
Note that Definition 3.14 does not depend on the projective compactification of \(X\).
**Lemma 3.15**.: _Let \(f:Y\to X\) be a proper morphism between quasi-projective normal varieties. If \(M\subset M_{\mathrm{B}}(X,N)(\mathbb{C})\) has infinite monodromy at infinity, then \(f^{*}M\subset M_{\mathrm{B}}(Y,N)(\mathbb{C})\) also has infinite monodromy at infinity._
Proof.: We take projective compactifications \(\overline{X}\) and \(\overline{Y}\) of \(X\) and \(Y\) respectively such that \(f\) extends to a morphism \(\bar{f}:\overline{Y}\to\overline{X}\). Let \(\gamma:\mathbb{D}\to\overline{Y}\) be any holomorphic map with \(\gamma^{-1}(\overline{Y}\setminus Y)=\{0\}\). Then \(\bar{f}\circ\gamma:\mathbb{D}\to\overline{X}\) satisfies \((\bar{f}\circ\gamma)^{-1}(\overline{X}\setminus X)=\{0\}\), as \(f\) is proper. Then by Definition 3.14 there exists a reductive \(\varrho:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\) such that \([\varrho]\in M\) and \(\gamma^{*}(f^{*}\varrho)=(f\circ\gamma)^{*}\varrho:\pi_{1}(\mathbb{D}^{*})\to\operatorname{GL}_{N}(\mathbb{C})\) has infinite image. The lemma follows.
We have a precise local characterization of a representation with infinite monodromy at infinity.
**Lemma 3.16**.: _Consider a smooth quasi-projective variety \(X\) along with a smooth projective compactification \(\overline{X}\), where \(D:=\overline{X}\backslash X\) is a simple normal crossing divisor. That a set \(M\subset M_{\mathrm{B}}(X,N)(\mathbb{C})\) has infinite monodromy at infinity is equivalent to the following condition:_
_for any \(x\in D\), there exists an admissible coordinate \((U;z_{1},\ldots,z_{n})\) centered at \(x\) with \(U\cap D=(z_{1}\cdots z_{k}=0)\) such that for any \(k\)-tuple \((i_{1},\ldots,i_{k})\in\mathbb{Z}_{>0}^{k}\), there exists a reductive \(\varrho:\pi_{1}(X)\to\mathrm{GL}_{N}(\mathbb{C})\) such that \([\varrho]\in M(\mathbb{C})\) and \(\varrho(\gamma_{1}^{i_{1}}\cdots\gamma_{k}^{i_{k}})\neq 1\), where \(\gamma_{i}\) is the anti-clockwise loop around the origin in the \(i\)-th factor of \(U\setminus D\simeq(\mathbb{D}^{*})^{k}\times\mathbb{D}^{n-k}\). When this condition holds, we say that \(\varrho\) has infinite monodromy at \(x\)._
Proof.: For any holomorphic map \(f:\mathbb{D}\to\overline{X}\) with \(f^{-1}(D)=\{0\}\), let \(x:=f(0)\), which lies on \(D\). We take an admissible coordinate \((U;z_{1},\ldots,z_{n})\) centered at \(x\) as in the lemma. Then \(f(\mathbb{D}_{2\varepsilon})\subset U\) for some small \(\varepsilon>0\). We can write \(f(t)=(f_{1}(t),\ldots,f_{n}(t))\) such that \(f_{1}(0)=\cdots=f_{k}(0)=0\) and \(f_{i}(0)\neq 0\) for \(i=k+1,\ldots,n\). Denote by \(m_{i}:=\mathrm{ord}_{0}f_{i}\) the vanishing order of \(f_{i}(t)\) at \(0\). Consider the anti-clockwise loop \(\gamma\) defined by \(\theta\mapsto\varepsilon e^{i\theta}\), which generates \(\pi_{1}(\mathbb{D}_{2\varepsilon}^{*})\). Then \(f\circ\gamma\) is homotopy equivalent to \(\gamma_{1}^{m_{1}}\cdots\gamma_{k}^{m_{k}}\) in \(\pi_{1}(U\backslash D)\). If \(M\) has infinite monodromy at infinity, by Definition 3.14 there exists a reductive \(\varrho:\pi_{1}(X)\to\mathrm{GL}_{N}(\mathbb{C})\) such that \([\varrho]\in M(\mathbb{C})\) and \(f^{*}\varrho(\gamma^{i})\neq 1\) for every \(i\in\mathbb{Z}_{>0}\); this is equivalent to \(\varrho(\gamma_{1}^{im_{1}}\cdots\gamma_{k}^{im_{k}})\neq 1\) for every \(i\in\mathbb{Z}_{>0}\). Conversely, any \(k\)-tuple \((i_{1},\ldots,i_{k})\in\mathbb{Z}_{>0}^{k}\) arises as the vanishing orders of such a disc, e.g. \(t\mapsto(t^{i_{1}},\ldots,t^{i_{k}},c_{k+1},\ldots,c_{n})\) with \(c_{j}\neq 0\). The lemma is proved.
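To illustrate the correspondence between vanishing orders and boundary loops, consider the following disc of our own choosing (not taken from the text), with \(k=2\) and \(n=3\):
\[f(t)=\big(t^{2},\,t^{3},\,c+t\big),\qquad c\neq 0,\]
so that \((m_{1},m_{2})=(2,3)\) and \(f\circ\gamma\) is homotopic to \(\gamma_{1}^{2}\gamma_{2}^{3}\) in \(\pi_{1}(U\setminus D)\); infinite monodromy at \(x=f(0)\) then requires some reductive \(\varrho\) with \([\varrho]\in M(\mathbb{C})\) and \(\varrho(\gamma_{1}^{2i}\gamma_{2}^{3i})\neq 1\) for all \(i\in\mathbb{Z}_{>0}\).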
Definition 3.14 presents a stringent condition that may not be practically verifiable in many situations. To address this issue, we establish the following result:
**Proposition 3.17**.: _Assume that \(\varrho:\pi_{1}(X)\to\mathrm{GL}_{N}(\mathbb{C})\) is a reductive representation such that \(\varrho(\pi_{1}(X))\) is torsion-free. Then we can find a birational morphism \(\mu:\overline{X}_{0}\to\overline{X}\), obtained as a sequence of blow-ups with smooth centers, such that_
1. \(\mu\) _is an isomorphism over_ \(X\)_;_
2. _there exists a Zariski open set_ \(X^{\prime}\subset\overline{X}_{0}\) _containing_ \(X\) _such that_ \(\varrho\) _extends to a representation_ \(\varrho_{0}\) _over_ \(\pi_{1}(X^{\prime})\)_;_
3. \(\varrho_{0}:\pi_{1}(X^{\prime})\to\mathrm{GL}_{N}(\mathbb{C})\) _has infinite monodromy at infinity._
Proof.: Write \(D=\sum_{i=1}^{m}D_{i}\) as a sum of irreducible components. We first look at smooth points of \(D\). If there exists some irreducible component \(D_{1}\) of \(D\) such that the local monodromy of \(\varrho\) around \(D_{1}\) is finite (and thus trivial, as \(\varrho(\pi_{1}(X))\) is assumed to be torsion-free), then \(\varrho\) extends across \(D_{1}\). It follows that \(\varrho\) extends to a representation \(\pi_{1}(\overline{X}\setminus\cup_{i=2}^{m}D_{i})\to\mathrm{GL}_{N}(\mathbb{C})\). We replace \(X\) by \(\overline{X}\setminus\cup_{i=2}^{m}D_{i}\). To prove the proposition, we will use induction as follows.
We first define an index \(i(x)\) of \(x\in D\) by setting \(i(x):=\#\{j\mid x\in D_{j}\}\). This depends on the compactification of \(X\); if we blow up the boundary \(D\), the index is computed with respect to the new boundary divisor.
_Induction._ Assume that there exists an algorithm of the blowing-ups \(\overline{X}_{0}\to\overline{X}\) required in the proposition such that we can extend \(\varrho\) on some Zariski dense open set \(X^{\prime}\) of \(\overline{X}_{0}\) containing \(X\), and achieve the following: for any point \(x\) in the new boundary \(\overline{X}_{0}\backslash X^{\prime}\), \(\varrho\) has infinite monodromy at \(x\) if \(i(x)\leq k-1\). Here the index of \(x\) is computed with respect to the new boundary divisor \(\overline{X}_{0}\backslash X^{\prime}\).
We know that the case \(k=2\) can be achieved by the above argument without blowing up \(\overline{X}\).
By induction, we may assume that for any \(x\in D\) with \(i(x)\leq k-1\), \(\varrho\) has infinite monodromy at \(x\) in the sense of Lemma 3.16.
We will work on points in \(D\) of index \(k\) at which \(\varrho\) has infinite monodromy. We need to cover \(D_{1}\cup\ldots\cup D_{m}\) by a natural stratification. For any \(J\subset\{1,\ldots,m\}\), define
\[D_{J}:=\{x\in D_{1}\cup\ldots\cup D_{m}\mid x\in D_{j}\Leftrightarrow j\in J \}\,.\]
Note that for any point \(x\in D_{J}\), its index is \(i(x)=\#J\). It is worth noting that for any connected component \(Z\) of \(D_{J}\) and any two points \(x,y\in Z\), \(\varrho\) has infinite monodromy at \(x\) if and only if it has infinite monodromy at \(y\). Therefore, we only have to deal with the finitely many strata whose points have index \(k\).
Without loss of generality, we may assume that \(\varrho\) does not have infinite monodromy at the points of a connected component \(Z\) of the stratum \(D_{\{1,\ldots,k\}}\). Pick any point \(x\in Z\). We choose an admissible coordinate \((U;z_{1},\ldots,z_{n})\) centered at \(x\) with \(D_{i}\cap U=(z_{i}=0)\)
for \(i=1,\ldots,k\) and \(D_{j}\cap U=\varnothing\) for the other irreducible components \(D_{j}\) of \(D\). Let \(\gamma_{i}\) be the anti-clockwise loop around the origin in the \(i\)-th factor of \(U\backslash D\simeq(\mathbb{D}^{*})^{k}\times\mathbb{D}^{n-k}\). By our assumption, there exists \((i_{1},\ldots,i_{k})\in\mathbb{Z}_{>0}^{k}\) such that \(\varrho(\gamma_{1}^{i_{1}}\cdots\gamma_{k}^{i_{k}})=1\).
**Claim 3.18**.: _If some \(k\)-tuple \((j_{1},\ldots,j_{k})\in\mathbb{Z}_{>0}^{k}\) satisfies \(\varrho(\gamma_{1}^{j_{1}}\cdots\gamma_{k}^{j_{k}})=1\), then \((j_{1},\ldots,j_{k})=c\,(i_{1},\ldots,i_{k})\) for some \(c>0\)._
Proof.: Assume that the claim does not hold. After reordering \(1,\ldots,k\), we can assume that
\[\frac{j_{1}}{i_{1}}=\cdots=\frac{j_{\ell-1}}{i_{\ell-1}}<\frac{j_{\ell}}{i_{ \ell}}\leq\cdots\leq\frac{j_{k}}{i_{k}}\]
for some \(\ell\in\{2,\ldots,k\}\). Then \(i_{1}(j_{1},\ldots,j_{k})-j_{1}(i_{1},\ldots,i_{k})=(0,\cdots,0,i_{\ell}^{\prime},\cdots,i_{k}^{\prime})\) with \(i_{\ell}^{\prime},\ldots,i_{k}^{\prime}\in\mathbb{Z}_{>0}\). Since the loops \(\gamma_{1},\ldots,\gamma_{k}\) commute and \(\varrho(\gamma_{1}^{j_{1}}\cdots\gamma_{k}^{j_{k}})=\varrho(\gamma_{1}^{i_{1}}\cdots\gamma_{k}^{i_{k}})=1\), it follows that \(\varrho(\gamma_{\ell}^{i_{\ell}^{\prime}}\cdots\gamma_{k}^{i_{k}^{\prime}})=1\). Let us define a holomorphic map
\[g:\mathbb{D} \to U\] \[t \mapsto(\frac{1}{2},\ldots,\frac{1}{2},t^{i_{\ell}^{\prime}}, \ldots,t^{i_{k}^{\prime}}).\]
Then we have \(1\leq i(g(0))\leq k-1\). The loop \(\theta\mapsto g(\frac{1}{2}e^{i\theta})\) is homotopic to \(\gamma_{\ell}^{i_{\ell}^{\prime}}\cdots\gamma_{k}^{i_{k}^{\prime}}\). Hence \(g^{*}\varrho:\pi_{1}(\mathbb{D}^{*})\to\operatorname{GL}_{N}(\mathbb{C})\) is trivial. As \(\varrho\) has infinite monodromy at \(g(0)\) by the induction hypothesis, we obtain a contradiction. The claim follows.
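For concreteness, here is the arithmetic of this proof on illustrative tuples of our own choosing, with \(k=3\): take \((i_{1},i_{2},i_{3})=(1,2,2)\) and \((j_{1},j_{2},j_{3})=(1,3,4)\), so that \(j_{1}/i_{1}<j_{2}/i_{2}\leq j_{3}/i_{3}\) and \(\ell=2\). Then
\[i_{1}(j_{1},j_{2},j_{3})-j_{1}(i_{1},i_{2},i_{3})=(1,3,4)-(1,2,2)=(0,1,2),\]
so \(\varrho(\gamma_{1}\gamma_{2}^{3}\gamma_{3}^{4})=\varrho(\gamma_{1}\gamma_{2}^{2}\gamma_{3}^{2})=1\) would force \(\varrho(\gamma_{2}\gamma_{3}^{2})=1\), contradicting the infinite monodromy of \(\varrho\) at the index-\(2\) point \(g(0)\).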
After reordering \(1,\ldots,k\), we can assume that \(i_{1}=\cdots=i_{\ell-1}<i_{\ell}\leq\ldots\leq i_{k}\). Then we have \(2\leq\ell\leq k+1\). Here we make the convention that \(i_{1}=\cdots=i_{k}\) if \(\ell=k+1\). Since \(\varrho(\pi_{1}(X))\) is torsion-free, we can replace the tuple \((i_{1},\ldots,i_{k})\) with \(\frac{1}{\gcd(i_{1},\ldots,i_{k})}(i_{1},\ldots,i_{k})\). This allows us to assume that \(\gcd(i_{1},\ldots,i_{k})=1\). Let \(\overline{Z}\) be the closure of \(Z\). Then it is a smooth, connected, closed subvariety of codimension \(k\) contained in \(D_{1}\cap\ldots\cap D_{k}\). We proceed by performing the blow-up of \(\overline{Z}\). Let \(D_{0}^{\prime}\) represent the exceptional divisor resulting from the blow-up, and we denote the strict transform of \(D_{i}\) as \(D_{i}^{\prime}\). It is important to note that \(D_{1}^{\prime}\cap\ldots\cap D_{k}^{\prime}\cap D_{0}^{\prime}=\varnothing\).
For any \(J\subset\{1,\ldots,k\}\), we set
\[D_{J}^{\prime}:=\left\{x\in D_{0}^{\prime}\mid x\in D_{j}^{\prime}\Leftrightarrow j \in J\right\},\]
and
\[D_{\varnothing}^{\prime}:=D_{0}^{\prime}\setminus(D_{1}^{\prime}\cup\ldots \cup D_{k}^{\prime}).\]
Note that for any \(x\in D_{J}^{\prime}\), its index \(i(x)=1+\#J\). We can verify that
\[\mathcal{S}_{k}:=\{x\in D_{0}^{\prime}\mid i(x)\leq k\}=\cup_{J\subset\{1, \ldots,k\}}D_{J}^{\prime}.\]
**Claim 3.19**.: _For any point \(y\) in \(\mathcal{S}_{k}\), \(\varrho\) has infinite monodromy at \(y\) if and only if \(y\notin D_{\{\ell,\ldots,k\}}\). Here we make the convention that \(\{\ell,\ldots,k\}=\varnothing\) if \(\ell=k+1\)._
Proof.: Write \(J=\{j_{2},\ldots,j_{p}\}\), with the convention that \(J=\varnothing\) if \(p=1\). Let \((V;w_{1},\ldots,w_{n})\) be an admissible coordinate centered at \(y\) with \(V\cap D_{0}^{\prime}=(w_{1}=0)\), \(V\cap D_{j_{i}}^{\prime}=(w_{i}=0)\) for \(i=2,\ldots,p\), and \(V\cap D_{q}^{\prime}=\varnothing\) for the other irreducible components \(D_{q}^{\prime}\) of the boundary divisor. Let \(\gamma_{i}^{\prime}\) be the anti-clockwise loop around the origin in the \(i\)-th factor of \((\mathbb{D}^{*})^{p}\times\mathbb{D}^{n-p}\). We can see that \(\gamma_{1}^{\prime}\sim\gamma_{1}\cdots\gamma_{k}\) and \(\gamma_{i}^{\prime}\sim\gamma_{j_{i}}\) for \(i=2,\ldots,p\), where "\(\sim\)" stands for homotopy equivalence. For any \(p\)-tuple \((q_{1},\ldots,q_{p})\in\mathbb{Z}_{>0}^{p}\), write \((\gamma_{1}^{\prime})^{q_{1}}\cdots(\gamma_{p}^{\prime})^{q_{p}}\sim\gamma_{1}^{n_{1}}\cdots\gamma_{k}^{n_{k}}\); explicitly, \(n_{m}=q_{1}+q_{i}\) if \(m=j_{i}\in J\) and \(n_{m}=q_{1}\) otherwise. An easy computation then shows that \((n_{1},\ldots,n_{k})\) is never proportional to \((i_{1},\ldots,i_{k})\) unless \(J=\{\ell,\ldots,k\}\). By Claim 3.18 we conclude that \(\varrho((\gamma_{1}^{\prime})^{q_{1}}\cdots(\gamma_{p}^{\prime})^{q_{p}})\neq 1\) whenever \(J\neq\{\ell,\ldots,k\}\). Conversely, if \(J=\{\ell,\ldots,k\}\), taking \(q_{1}=i_{1}\) and \(q_{i}=i_{j_{i}}-i_{1}\) gives \((n_{1},\ldots,n_{k})=(i_{1},\ldots,i_{k})\), so \(\varrho((\gamma_{1}^{\prime})^{q_{1}}\cdots(\gamma_{p}^{\prime})^{q_{p}})=1\) and \(\varrho\) does not have infinite monodromy at \(y\). The claim is proved.
By the above claim, there are two possibilities for \(J:=\{\ell,\ldots,k\}\):
_Case 1: \(\#J<k-1\)._ In this case, every \(x\in\mathcal{S}_{k}\) at which \(\varrho\) fails to have infinite monodromy satisfies \(i(x)\leq k-1\). By our induction hypothesis, we can perform a further sequence of blow-ups with smooth centers in the boundary to obtain a birational morphism \(\mu:\overline{X}_{0}\to\overline{X}\) such that
1. there exists a Zariski open set \(X^{\prime}\subset\overline{X}_{0}\) containing \(X\) such that \(\varrho\) extends to a representation \(\varrho_{0}\) over \(\pi_{1}(X^{\prime})\);
2. for any point \(x\in\mu^{-1}(\overline{Z})\) with \(i(x)\leq k\), \(\varrho_{0}\) has infinite monodromy at \(x\).
_Case 2: \(\#J=k-1\)._ In this case, \(J=\{2,\ldots,k\}\). Pick any point \(y\in D^{\prime}_{J}\). Let \((V;w_{1},\ldots,w_{n})\) be an admissible coordinate centered at \(y\) with \(V\cap D^{\prime}_{0}=(w_{1}=0)\) and \(V\cap D^{\prime}_{j}=(w_{j}=0)\) for \(j=2,\ldots,k\). Let \(\gamma^{\prime}_{i}\) be the anti-clockwise loop around the origin in the \(i\)-th factor of \((\mathbb{D}^{*})^{k}\times\mathbb{D}^{n-k}\). We can see that \(\gamma^{\prime}_{1}\sim\gamma_{1}\cdots\gamma_{k}\), and \(\gamma^{\prime}_{i}\sim\gamma_{i}\) for \(i=2,\ldots,k\). Then for the \(k\)-tuple \((j_{1},\ldots,j_{k}):=(i_{1},i_{2}-i_{1},\ldots,i_{k}-i_{1})\in\mathbb{Z}_{>0}^{k}\), we have \((\gamma^{\prime}_{1})^{j_{1}}\cdots(\gamma^{\prime}_{k})^{j_{k}}\sim\gamma^{i_{1}}_{1}\cdots\gamma^{i_{k}}_{k}\). Therefore, \(\varrho((\gamma^{\prime}_{1})^{j_{1}}\cdots(\gamma^{\prime}_{k})^{j_{k}})=1\). In this case \(j_{1}+\cdots+j_{k}<i_{1}+\cdots+i_{k}\). Next, we proceed to blow up the closure \(\overline{D^{\prime}_{J}}\) of \(D^{\prime}_{J}\) and iterate the algorithm described above. This iterative process terminates after finitely many steps, since the total exponent strictly decreases, resulting in a birational morphism \(\mu:\overline{X}^{\prime}\to\overline{X}\) that satisfies the properties described in Items 1 and 2. We repeat this algorithm of blow-ups for all other connected components \(Z\) of \(D_{J}\) with \(\#J=k\) at whose points \(\varrho\) does not have infinite monodromy. This establishes the induction and completes the proof of the proposition.
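The descent in Case 2 behaves like a Euclidean algorithm on exponent tuples. Here is an illustrative run with numbers of our own choosing, for \(k=3\):
\[(i_{1},i_{2},i_{3})=(1,2,4)\ \longrightarrow\ (j_{1},j_{2},j_{3})=(1,\,2-1,\,4-1)=(1,1,3),\qquad 1+1+3\,<\,1+2+4.\]
For the new tuple one has \(i_{1}=i_{2}<i_{3}\), hence \(\ell=3\) and \(\#\{\ell,\ldots,k\}=1<k-1\), so Case 1 applies and the induction hypothesis takes over.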
### Construction of Shafarevich morphism (I)
We will construct the Shafarevich morphism for a smooth quasi-projective variety \(X\) associated with a constructible subset of \(M_{\mathrm{B}}(X,N)(\mathbb{C})\) that is defined over \(\mathbb{Q}\) and invariant under the \(\mathbb{R}^{*}\)-action.
**Theorem 3.20**.: _Let \(X\) be a smooth quasi-projective variety. Let \(\mathfrak{C}\) be a constructible subset of \(M_{\mathrm{B}}(X,N)(\mathbb{C})\), defined over \(\mathbb{Q}\), such that \(\mathfrak{C}\) is invariant under \(\mathbb{R}^{*}\)-action. When \(X\) is non-compact, we make two additional assumptions:_
1. \(\mathfrak{C}\) _is closed;_
2. \(\mathfrak{C}\) _has infinite monodromy at infinity in the sense of Definition_ 3.14_._
_Then there is a proper surjective holomorphic fibration \(\operatorname{sh}_{\mathfrak{C}}:X\to\operatorname{Sh}_{\mathfrak{C}}(X)\) onto a normal complex space \(\operatorname{Sh}_{\mathfrak{C}}(X)\) such that for any closed subvariety \(Z\) of \(X\), \(\operatorname{sh}_{\mathfrak{C}}(Z)\) is a point if and only if \(\varrho(\operatorname{Im}[\pi_{1}(Z)\to\pi_{1}(X)])\) is finite for any reductive representation \(\varrho:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\) such that \([\varrho]\in\mathfrak{C}\). When \(X\) is compact, \(\operatorname{Sh}_{\mathfrak{C}}(X)\) is projective._
Proof.: We will divide the proof into two steps. The first step is dedicated to constructing \(\operatorname{sh}_{\mathfrak{C}}:X\to\operatorname{Sh}_{\mathfrak{C}}(X)\). In the second step, we will prove the projectivity of \(\operatorname{Sh}_{\mathfrak{C}}(X)\) when \(X\) is compact.
_Step 1: constructing the Shafarevich morphism._ By Proposition 3.12, there exist reductive representations \(\{\sigma^{\text{\tiny{VHS}}}_{i}:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\}_{i=1,\ldots,m}\) that underlie \(\mathbb{C}\)-VHS such that, for a morphism \(\iota:Z\to X\) from any quasi-projective _normal_ variety \(Z\) with \(s_{\mathfrak{C}}\circ\iota(Z)\) being a point, the following properties hold:
1. For \(\sigma:=\oplus_{i=1}^{m}\sigma^{\text{\tiny{VHS}}}_{i}\), the image \(\iota^{*}\sigma(\pi_{1}(Z))\) is discrete in \(\prod_{i=1}^{m}\operatorname{GL}_{N}(\mathbb{C})\).
2. For each reductive \(\tau:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\) with \([\tau]\in\mathfrak{C}(\mathbb{C})\), \(\iota^{*}\tau\) is conjugate to some \(\iota^{*}\sigma^{\text{\tiny{VHS}}}_{i}\). Moreover, for each \(\sigma^{\text{\tiny{VHS}}}_{i}\), there exists some reductive representation \(\tau:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\) with \([\tau]\in\mathfrak{C}(\mathbb{C})\) such that \(\iota^{*}\tau\) is conjugate to \(\iota^{*}\sigma^{\text{\tiny{VHS}}}_{i}\).
3. We have the following inclusion: \[\cap_{[\varrho]\in\mathfrak{C}(\mathbb{C})}\ker\varrho\subset\ker\sigma^{\text{\tiny{VHS}}}_{i} \tag{3.7}\] where \(\varrho\) varies over all reductive representations such that \([\varrho]\in\mathfrak{C}(\mathbb{C})\).
Define \(H:=\cap_{\varrho}\ker\varrho\cap\ker\sigma\), where \(\varrho:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\) ranges over all reductive representations such that \([\varrho]\in\mathfrak{C}(\mathbb{C})\). By (3.7) we have \(H=\cap_{\varrho}\ker\varrho\), where \(\varrho:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\) ranges over all reductive representations such that \([\varrho]\in\mathfrak{C}(\mathbb{C})\). Denote by \(\widetilde{X}_{H}:=\widetilde{X}/H\) the corresponding covering of \(X\). Let \(\mathscr{D}\) be the period domain associated with the \(\mathbb{C}\)-VHS \(\sigma\) and let \(p:\widetilde{X}_{H}\to\mathscr{D}\) be
the period mapping. We define a holomorphic map
\[\Psi:\widetilde{X}_{H} \to S_{\mathfrak{C}}\times\mathscr{D}, \tag{3.8}\] \[z \mapsto(s_{\mathfrak{C}}\circ\pi_{H}(z),p(z))\]
where \(\pi_{H}:\widetilde{X}_{H}\to X\) denotes the covering map and \(s_{\mathfrak{C}}:X\to S_{\mathfrak{C}}\) is the reduction map defined in Definition 3.1.
**Lemma 3.21**.: _Each connected component of any fiber of \(\Psi\) is compact._
Proof of Lemma 3.21.: It is equivalent to prove that for any \((t,o)\in S_{\mathfrak{C}}\times\mathscr{D}\), any connected component of \(\Psi^{-1}(t,o)\) is compact. We fix any \(t\in S_{\mathfrak{C}}\).
_Step 1: we first assume that each irreducible component of \((s_{\mathfrak{C}})^{-1}(t)\) is normal._ Let \(F\) be an irreducible component of \((s_{\mathfrak{C}})^{-1}(t)\). Then the natural morphism \(\iota:F\to X\) is proper. By Item 1, \(\Gamma:=\sigma(\operatorname{Im}\left[\pi_{1}(F)\to\pi_{1}(X)\right])\) is a discrete subgroup of \(\prod_{i=1}^{m}\operatorname{GL}_{N}(\mathbb{C})\).
**Claim 3.22**.: _The period mapping \(F\to\mathscr{D}/\Gamma\) is proper._
Proof.: Although \(F\) might be singular, we can still define its period mapping since it is normal. The definition is as follows: we begin by taking a resolution of singularities \(\mu:E\to F\). Since \(F\) is normal, each fiber of \(\mu\) is connected, and we have \(\Gamma=\sigma(\operatorname{Im}[\pi_{1}(E)\to\pi_{1}(X)])\). It is worth noting that \(\mathscr{D}/\Gamma\) exists as a complex normal space since \(\Gamma\) is discrete. Now, consider the period mapping \(E\to\mathscr{D}/\Gamma\) for the \(\mathbb{C}\)-VHS induced by \(\mu^{*}\sigma\). This mapping then induces a holomorphic mapping \(F\to\mathscr{D}/\Gamma\) such that the period mapping of \(E\) factors as \(E\xrightarrow{\ \mu\ }F\to\mathscr{D}/\Gamma\).
The resulting holomorphic map \(F\to\mathscr{D}/\Gamma\) is the period mapping for the \(\mathbb{C}\)-VHS on \(F\) induced by \(\sigma|_{\pi_{1}(F)}\). To establish the properness of \(F\to\mathscr{D}/\Gamma\), it suffices to prove that \(E\to\mathscr{D}/\Gamma\) is proper. Let \(\overline{X}\) be a smooth projective compactification such that \(D:=\overline{X}\backslash X\) is a simple normal crossing divisor. Given that \(E\to X\) is a proper morphism, we can take a smooth projective compactification \(\overline{E}\) of \(E\) such that
* the complement \(D_{E}:=\overline{E}\setminus E\) is a simple normal crossing divisor;
* there exists a morphism \(j:\overline{E}\to\overline{X}\) such that \(j^{-1}(D)=D_{E}\).
We aim to prove that \(j^{*}\sigma:\pi_{1}(E)\to\prod_{i=1}^{m}\operatorname{GL}_{N}(\mathbb{C})\) has infinite monodromy at infinity.
Consider any holomorphic map \(\gamma:\mathbb{D}\to\overline{E}\) such that \(\gamma^{-1}(D_{E})=\{0\}\). Then \((j\circ\gamma)^{-1}(D)=\{0\}\). As we assume that \(\mathfrak{C}(\mathbb{C})\) has infinite monodromy at infinity, there exists a reductive representation \(\tau:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\) such that \([\tau]\in\mathfrak{C}(\mathbb{C})\) and \((j\circ\gamma)^{*}\tau(\pi_{1}(\mathbb{D}^{*}))\) is infinite. Using Item 2, it follows that \(j^{*}\tau\) is conjugate to some \(j^{*}\sigma_{i}^{\text{\tiny{VHS}}}\), as \(E\) is smooth quasi-projective. As \(\sigma_{i}^{\text{\tiny{VHS}}}\) is a direct factor of \(\sigma\), it follows that \((j\circ\gamma)^{*}\sigma(\pi_{1}(\mathbb{D}^{*}))\) is also infinite. Hence, we conclude that \(j^{*}\sigma\) has infinite monodromy at infinity.
By a theorem of Griffiths (cf. [12, Corollary 13.7.6]), we conclude that \(E\to\mathscr{D}/\Gamma\) is proper. Therefore, \(F\to\mathscr{D}/\Gamma\) is proper.
Take any point \(o\in\mathscr{D}\). Note that there is a real Lie group \(G_{0}\) which acts holomorphically and transitively on \(\mathscr{D}\). Let \(V\) be the compact subgroup that fixes \(o\). Thus, we have \(\mathscr{D}=G_{0}/V\). Now, let \(Z\) be any connected component of the fiber of \(F\to\mathscr{D}/\Gamma\) over \([o]\). According to Claim 3.22, \(Z\) is guaranteed to be compact. We have that \(\sigma(\operatorname{Im}\left[\pi_{1}(Z)\to\pi_{1}(X)\right])\subset V\cap\Gamma\). Notably, \(V\) is compact, and \(\Gamma\) is discrete. As a result, it follows that \(\sigma(\operatorname{Im}\left[\pi_{1}(Z)\to\pi_{1}(X)\right])\) is finite.
**Claim 3.23**.: \(\operatorname{Im}\left[\pi_{1}(Z)\to\pi_{1}(X)\right]\cap H\) _is a finite index subgroup of \(\operatorname{Im}\left[\pi_{1}(Z)\to\pi_{1}(X)\right]\)._
Proof.: By Item 2 and (3.7), we have
\[\ker\sigma\cap\operatorname{Im}\left[\pi_{1}(F)\to\pi_{1}(X)\right]=H\cap \operatorname{Im}\left[\pi_{1}(F)\to\pi_{1}(X)\right]. \tag{3.9}\]
Since \(\sigma(\operatorname{Im}\left[\pi_{1}(Z)\to\pi_{1}(X)\right])\) is finite, \(\ker\sigma\cap\operatorname{Im}\left[\pi_{1}(Z)\to\pi_{1}(X)\right]\) is a finite index subgroup of \(\operatorname{Im}\left[\pi_{1}(Z)\to\pi_{1}(X)\right]\). The claim follows from (3.9).
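Unwinding this argument (our rephrasing): since \(Z\subset F\), intersecting (3.9) with \(\operatorname{Im}\left[\pi_{1}(Z)\to\pi_{1}(X)\right]\) gives
\[\big[\operatorname{Im}\left[\pi_{1}(Z)\to\pi_{1}(X)\right]:H\cap\operatorname{Im}\left[\pi_{1}(Z)\to\pi_{1}(X)\right]\big]=\big[\operatorname{Im}\left[\pi_{1}(Z)\to\pi_{1}(X)\right]:\ker\sigma\cap\operatorname{Im}\left[\pi_{1}(Z)\to\pi_{1}(X)\right]\big]=\#\,\sigma(\operatorname{Im}\left[\pi_{1}(Z)\to\pi_{1}(X)\right])<\infty.\]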
Pick any connected component \(Z_{0}\) of \(\pi_{H}^{-1}(Z)\). Note that \(\operatorname{Aut}(Z_{0}/Z)=\frac{\operatorname{Im}\left[\pi_{1}(Z)\to\pi_{1}(X)\right]}{\operatorname{Im}\left[\pi_{1}(Z)\to\pi_{1}(X)\right]\cap H}\). According to Claim 3.23, \(\operatorname{Aut}(Z_{0}/Z)\) is finite, implying that \(Z_{0}\) is compact. Hence, \(\pi_{H}^{-1}(Z)\) is a disjoint union of compact subvarieties of \(\widetilde{X}_{H}\), each of which is a finite etale Galois cover of \(Z\) under \(\pi_{H}\), with Galois group \(\frac{\operatorname{Im}\left[\pi_{1}(Z)\to\pi_{1}(X)\right]}{\operatorname{Im}\left[\pi_{1}(Z)\to\pi_{1}(X)\right]\cap H}\). If we denote by \(\widetilde{F}\) a connected component of \(\pi_{H}^{-1}(F)\), then each connected component of any fiber of \(p|_{\widetilde{F}}:\widetilde{F}\to\mathscr{D}\) is a connected component of \(\pi_{H}^{-1}(Z)\), which is compact.
Since we have assumed that each irreducible component of \((s_{\mathfrak{C}})^{-1}(t)\) is normal, it follows that for any \(o\in\mathscr{D}\), each connected component of \(\Psi^{-1}(t,o)\) is compact.
_Step 2: we prove the general case._ In the general case, we consider an embedded resolution of singularities \(\mu:Y\to X\) for the fiber \((s_{\mathfrak{C}})^{-1}(t)\) such that each irreducible component of \((s_{\mathfrak{C}}\circ\mu)^{-1}(t)\) is smooth. It is worth noting that \(s_{\mathfrak{C}}\circ\mu:Y\to S_{\mathfrak{C}}\) coincides with the reduction map \(s_{\mu^{*}\mathfrak{C}}:Y\to S_{\mu^{*}\mathfrak{C}}\) for \(\mu^{*}\mathfrak{C}\). Let \(\widetilde{Y}_{H}:=\widetilde{X}_{H}\times_{X}Y\), which is connected.
Let \(\tilde{\mu}:\widetilde{Y}_{H}\to\widetilde{X}_{H}\) denote the induced map; we observe that \(\tilde{\mu}\) is a proper holomorphic fibration. We define \(H^{\prime}:=\cap_{\varrho}\ker\varrho\cap\ker\mu^{*}\sigma\), where \(\varrho:\pi_{1}(Y)\to\operatorname{GL}_{N}(\mathbb{C})\) ranges over all reductive representations such that \([\varrho]\in\mu^{*}\mathfrak{C}\). Since \(\mu_{*}:\pi_{1}(Y)\to\pi_{1}(X)\) is an isomorphism, we have \((\mu_{*})^{-1}(H)=H^{\prime}\). Consequently, \(\widetilde{Y}_{H}\) is the covering of \(Y\) corresponding to \(H^{\prime}\), and thus \(\operatorname{Aut}(\widetilde{Y}_{H}/Y)=H^{\prime}\simeq H\). It is worth noting that \(\mu^{*}\mathfrak{C}\) satisfies all the conditions required of \(\mathfrak{C}\) in Theorem 3.20, except that the \(\mathbb{R}^{*}\)-invariance is not obvious; however, \(\mu^{*}\mathfrak{C}\) is invariant under the \(\mathbb{R}^{*}\)-action by Corollary 2.10. This enables us to work with \(\mu^{*}\mathfrak{C}\) instead of \(\mathfrak{C}\).
As a result, \(\mu^{*}\sigma=\oplus_{i=1}^{m}\mu^{*}\sigma_{i}^{\text{\tiny{VHS}}}\) satisfies all the properties in Items 1, 2 and 3. Note that \(\mu^{*}\sigma\) underlies a \(\mathbb{C}\)-VHS with the period mapping \(p\circ\tilde{\mu}:\widetilde{Y}_{H}\to\mathscr{D}\). It follows that \(\Psi\circ\tilde{\mu}:\widetilde{Y}_{H}\to S_{\mathfrak{C}}\times\mathscr{D}\) is defined in the same way as (3.8), determined by \(\mu^{*}\mathfrak{C}\) and \(\mu^{*}\sigma\).
Therefore, by Step 1, we can conclude that for any \(o\in\mathscr{D}\), each connected component of \((\Psi\circ\tilde{\mu})^{-1}(t,o)\) is compact. Let \(Z\) be a connected component of \(\Psi^{-1}(t,o)\). Then we claim that \(Z\) is compact. Indeed, \(\tilde{\mu}^{-1}(Z)\) is closed and connected as each fiber of \(\tilde{\mu}\) is connected. Therefore, \(\tilde{\mu}^{-1}(Z)\) is contained in some connected component of \((\Psi\circ\tilde{\mu})^{-1}(t,o)\). So \(\tilde{\mu}^{-1}(Z)\) is compact. As \(\tilde{\mu}\) is proper and surjective, it follows that \(Z=\tilde{\mu}(\tilde{\mu}^{-1}(Z))\) is compact. Lemma 3.21 is proved.
As a result of Lemma 3.21 and Theorem 1.30, the set \(\widetilde{S}_{H}\) of connected components of fibers of \(\Psi\) can be endowed with the structure of a complex normal space such that \(\Psi=g\circ\operatorname{sh}_{H}\), where \(\operatorname{sh}_{H}:\widetilde{X}_{H}\to\widetilde{S}_{H}\) is a proper holomorphic fibration and \(g:\widetilde{S}_{H}\to S_{\mathfrak{C}}\times\mathscr{D}\) is a holomorphic map. In Claim 3.31 below, we will prove that each fiber of \(g\) is discrete.
**Claim 3.24**.: \(\mathrm{sh}_{H}\) _contracts every compact subvariety of \(\widetilde{X}_{H}\)._
Proof.: Let \(Z\subset\widetilde{X}_{H}\) be a compact irreducible subvariety. Then, \(W:=\pi_{H}(Z)\) is also a compact irreducible subvariety in \(X\) with \(\dim Z=\dim W\). Hence \(\mathrm{Im}\left[\pi_{1}(Z^{\mathrm{norm}})\to\pi_{1}(W^{\mathrm{norm}})\right]\) is a finite index subgroup of \(\pi_{1}(W^{\mathrm{norm}})\). Note that \(W\) can be endowed with an algebraic structure induced by \(X\). As the natural map \(Z\to W\) is finite, \(Z\) can be equipped with an algebraic structure such that the natural map \(Z\to X\) is algebraic.
For any reductive representation \(\varrho:\pi_{1}(X)\to\mathrm{GL}_{N}(K)\) with \([\varrho]\in\mathfrak{C}(K)\), where \(K\) is a non-archimedean local field, we have \(\varrho(\mathrm{Im}\left[\pi_{1}(Z)\to\pi_{1}(X)\right])\subset\varrho(\mathrm{Im}\left[\pi_{1}(\widetilde{X}_{H})\to\pi_{1}(X)\right])=\{1\}\). Hence, \(\varrho(\mathrm{Im}\left[\pi_{1}(W^{\mathrm{norm}})\to\pi_{1}(X)\right])\) is finite, and thus bounded. By Lemma 3.2, \(W\) is contained in a fiber of \(s_{\mathfrak{C}}\). Consider a desingularization \(Z^{\prime}\) of \(Z\) and let \(i:Z^{\prime}\to X\) be the natural algebraic morphism. Note that \(i^{*}\sigma(\pi_{1}(Z^{\prime}))=\{1\}\). It follows that the variation of Hodge structure induced by \(i^{*}\sigma\) is trivial. Therefore, \(p(Z)\) is a point. Hence \(Z\) is contracted by \(\Psi\). The claim follows.
**Lemma 3.25**.: _There is an action of \(\mathrm{Aut}(\widetilde{X}_{H}/X)=\pi_{1}(X)/H\) on \(\widetilde{S}_{H}\) that is equivariant for the proper holomorphic fibration \(\mathrm{sh}_{H}:\widetilde{X}_{H}\to\widetilde{S}_{H}\). This action is analytic and properly discontinuous. Namely, for any point \(y\) of \(\widetilde{S}_{H}\), there exists an open neighborhood \(V_{y}\) of \(y\) such that the set_
\[\{\gamma\in\pi_{1}(X)/H\mid\gamma.V_{y}\cap V_{y}\neq\varnothing\}\]
_is finite._
Proof.: Take any \(\gamma\in\pi_{1}(X)/H\). We can consider \(\gamma\) as an analytic automorphism of \(\widetilde{X}_{H}\). According to Claim 3.24, \(\mathrm{sh}_{H}\circ\gamma:\widetilde{X}_{H}\to\widetilde{S}_{H}\) contracts each fiber of the proper holomorphic fibration \(\mathrm{sh}_{H}:\widetilde{X}_{H}\to\widetilde{S}_{H}\). As a result, it induces a holomorphic map \(\tilde{\gamma}:\widetilde{S}_{H}\to\widetilde{S}_{H}\) such that \(\tilde{\gamma}\circ\mathrm{sh}_{H}=\mathrm{sh}_{H}\circ\gamma\).
Let us define the action of \(\gamma\) on \(\widetilde{S}_{H}\) by \(\tilde{\gamma}\). Then \(\gamma\) is an analytic automorphism and \(\mathrm{sh}_{H}\) is \(\pi_{1}(X)/H\)-equivariant. It is evident that \(\tilde{\gamma}:\widetilde{S}_{H}\to\widetilde{S}_{H}\) carries one fiber of \(\mathrm{sh}_{H}\) to another fiber. Thus, we have shown that \(\pi_{1}(X)/H\) acts on \(\widetilde{S}_{H}\) analytically and equivariantly with respect to \(\mathrm{sh}_{H}:\widetilde{X}_{H}\to\widetilde{S}_{H}\). Now, we will prove that this action is properly discontinuous.
Take any \(y\in\widetilde{S}_{H}\) and let \(F:=\mathrm{sh}_{H}^{-1}(y)\). Consider the subgroup \(\mathcal{S}\) of \(\pi_{1}(X)/H\) that fixes \(y\), i.e.
\[\mathcal{S}:=\{\gamma\in\pi_{1}(X)/H\mid\gamma\cdot F=F\}. \tag{3.10}\]
Since \(F\) is compact, \(\mathcal{S}\) is finite.
**Claim 3.26**.: \(F\) _is a connected component of \(\pi_{H}^{-1}(\pi_{H}(F))\)._
Proof of Claim 3.26.: Let \(x\in\pi_{H}^{-1}(\pi_{H}(F))\). Then there exists \(x_{0}\in F\) such that \(\pi_{H}(x)=\pi_{H}(x_{0})\). Therefore, there exists \(\gamma\in\pi_{1}(X)/H\) such that \(\gamma.x_{0}=x\). It follows that \(\pi_{H}^{-1}(\pi_{H}(F))=\cup_{\gamma\in\pi_{1}(X)/H}\gamma.F\). Since each \(\gamma\) carries one fiber of \(\mathrm{sh}_{H}\) to another fiber, and the group \(\pi_{1}(X)/H\) is countable (being finitely generated), \(\cup_{\gamma\in\pi_{1}(X)/H}\gamma.F\) is a countable union of fibers of \(\mathrm{sh}_{H}\). It follows that \(F\) is a connected component of \(\pi_{H}^{-1}(\pi_{H}(F))\).
Claim 3.26 implies that \(\pi_{H}:F\to\pi_{H}(F)\) is a finite etale cover. Denote by \(Z:=\pi_{H}(F)\), which is a connected Zariski closed subset of \(X\). Then the image of \(\pi_{1}(F)\) in \(\pi_{1}(X)/H\) is trivial, so the image of \(\operatorname{Im}[\pi_{1}(Z)\to\pi_{1}(X)]\) in \(\pi_{1}(X)/H\) is finite. As a consequence of [10, Theorem 4.5], there is a connected open neighborhood \(U\) of \(Z\) such that \(\pi_{1}(Z)\to\pi_{1}(U)\) is an isomorphism. Therefore, the image of \(\operatorname{Im}[\pi_{1}(U)\to\pi_{1}(X)]\) in \(\pi_{1}(X)/H\) coincides with that of \(\operatorname{Im}[\pi_{1}(Z)\to\pi_{1}(X)]\), and is also finite. As a result, \(\pi_{H}^{-1}(U)\) is a disjoint union of connected open sets \(\{U_{\alpha}\}_{\alpha\in I}\) such that
1. For each \(U_{\alpha}\), \(\pi_{H}|_{U_{\alpha}}:U_{\alpha}\to U\) is a finite etale covering.
2. Each \(U_{\alpha}\) contains exactly one connected component of \(\pi_{H}^{-1}(Z)\).
We may assume that \(F\subset U_{\alpha_{1}}\) for some \(\alpha_{1}\in I\). By Item 2, for any \(\gamma\in\pi_{1}(X)/H\simeq\operatorname{Aut}(\widetilde{X}_{H}/X)\), we have \(\gamma\cdot U_{\alpha_{1}}\cap U_{\alpha_{1}}=\varnothing\) if and only if \(\gamma\notin\mathcal{S}\).
Since \(\operatorname{sh}_{H}\) is a proper holomorphic fibration, we can take a neighborhood \(V_{y}\) of \(y\) such that \(\operatorname{sh}_{H}^{-1}(V_{y})\subset U_{\alpha_{1}}\). Since \(\gamma\cdot U_{\alpha_{1}}\cap U_{\alpha_{1}}=\varnothing\) if and only if \(\gamma\notin\mathcal{S}\), it follows that \(\gamma\cdot V_{y}\cap V_{y}=\varnothing\) if \(\gamma\not\in\mathcal{S}\). Since \(\mathcal{S}\) is finite and \(y\) was chosen arbitrarily, we have shown that the action of \(\pi_{1}(X)/H\) on \(\widetilde{S}_{H}\) is properly discontinuous. Thus Lemma 3.25 is proven.
Let \(\nu:\pi_{1}(X)/H\to\operatorname{Aut}(\widetilde{S}_{H})\) be the action of \(\pi_{1}(X)/H\) on \(\widetilde{S}_{H}\) and let \(\Gamma_{0}:=\nu(\pi_{1}(X)/H)\). By Lemma 3.25 and [10], we know that the quotient \(\operatorname{Sh}_{\mathfrak{C}}(X):=\widetilde{S}_{H}/\Gamma_{0}\) is a complex normal space, and it is compact if \(X\) is compact. Moreover, since \(\operatorname{sh}_{H}:\widetilde{X}_{H}\to\widetilde{S}_{H}\) is \(\nu\)-equivariant, it induces a proper holomorphic fibration \(\operatorname{sh}_{\mathfrak{C}}:X\to\operatorname{Sh}_{\mathfrak{C}}(X)\) onto the complex normal space \(\operatorname{Sh}_{\mathfrak{C}}(X)\).
\[\Psi=g\circ\operatorname{sh}_{H},\qquad\operatorname{sh}_{\mathfrak{C}}\circ\pi_{H}=\mu\circ\operatorname{sh}_{H}, \tag{3.11}\]
where \(\mu:\widetilde{S}_{H}\to\operatorname{Sh}_{\mathfrak{C}}(X)=\widetilde{S}_{H}/\Gamma_{0}\) denotes the quotient map.
**Claim 3.27**.: _For any closed subvariety \(Z\subset X\), \(\operatorname{sh}_{\mathfrak{C}}(Z)\) is a point if and only if \(\varrho(\operatorname{Im}[\pi_{1}(Z)\to\pi_{1}(X)])\) is finite for any reductive representation \(\varrho:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\) such that \([\varrho]\in\mathfrak{C}\)._
Proof.: _Proof of "\(\Leftarrow\)"_. Let \(f:Y\to Z\) be a desingularization. Then for any non-archimedean local field \(K\) and any reductive representation \(\tau:\pi_{1}(X)\to\operatorname{GL}_{N}(K)\) with \([\tau]\in\mathfrak{C}(K)\), \(f^{*}\tau(\pi_{1}(Y))\) is finite, and therefore bounded. Hence, \(f(Y)\) is contained in some fiber \(F\) of \(s_{\mathfrak{C}}\) by Lemma 3.2. Using Items 1 and 2, we have that \(f^{*}\sigma(\pi_{1}(Y))\) is also finite. Therefore, \(Y\) is mapped to one point by the period mapping \(Y\to\mathscr{D}/\Gamma\) of \(f^{*}\sigma\). As a result, \(\operatorname{sh}_{\mathfrak{C}}(Z)\) is a point by (3.11).
Proof of "\(\Rightarrow\)": Assume that \(Z\subset X\) is a closed subvariety such that \(\operatorname{sh}_{\mathfrak{C}}(Z)\) is a point. We observe from (3.11) that any connected component \(Z^{\prime}\) of \(\pi_{H}^{-1}(Z)\) is contracted by \(\Psi\). By Lemma 3.21, \(Z^{\prime}\) is contained in some compact subvariety of \(\widetilde{X}_{H}\). Since \(Z^{\prime}\) is closed, it is also compact. Therefore, the map \(Z^{\prime}\to Z\) induced by \(\pi_{H}\) is a finite etale cover.
Let \(\varrho:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\) be any reductive representation such that \([\varrho]\in\mathfrak{C}\). Note that \(\varrho(\operatorname{Im}\left[\pi_{1}(Z^{\prime})\to\pi_{1}(X)\right])\) is a finite index subgroup of \(\varrho(\operatorname{Im}\left[\pi_{1}(Z)\to\pi_{1}(X)\right])\). Since \(\varrho(\operatorname{Im}\left[\pi_{1}(Z^{\prime})\to\pi_{1}(X)\right])=\{1\}\), it follows that \(\varrho(\operatorname{Im}\left[\pi_{1}(Z)\to\pi_{1}(X)\right])\) is finite. The claim is proved.
Therefore, we have constructed the desired proper holomorphic fibration \(\operatorname{sh}_{\mathfrak{C}}:X\to\operatorname{Sh}_{\mathfrak{C}}(X)\). For the remaining part of the proof, we assume that \(X\) is compact and focus on proving the projectivity of \(\operatorname{Sh}_{\mathfrak{C}}(X)\).
Step 2: projectivity of \(\operatorname{Sh}_{\mathfrak{C}}(X)\) if \(X\) is compact.
**Lemma 3.28**.: _There exists a finite index normal subgroup \(N\) of \(\pi_{1}(X)/H\) whose action on \(\widetilde{S}_{H}\) has no fixed points._
Proof of Lemma 3.28.: Let \(N_{0}\) be a normal subgroup of \(\pi_{1}(X)/H\). Consider the set of points fixed by \(N_{0}\) in \(\widetilde{S}_{H}\), defined by
\[R_{0}:=\{y\in\widetilde{S}_{H}\mid\exists\gamma\in N_{0}\text{ such that }\gamma\neq 1,\ \gamma.y=y\}.\]
_Claim 3.29_.: \(R_{0}\) _is an analytic subset of \(\widetilde{S}_{H}\), and it is invariant under \(\pi_{1}(X)/H\)._
Proof.: Take any \(\gamma\in\pi_{1}(X)/H\) which is not the identity element. Consider the set of points in \(\widetilde{S}_{H}\) fixed by \(\gamma\) defined by
\[F_{\gamma}:=\{y\in\widetilde{S}_{H}\mid\gamma.y=y\}.\]
We claim that \(F_{\gamma}\) is an analytic subset. Indeed, if we define a holomorphic map
\[i_{\gamma}:\widetilde{S}_{H} \to\widetilde{S}_{H}\times\widetilde{S}_{H}\] \[y \mapsto(y,\gamma.y),\]
then \(F_{\gamma}=i_{\gamma}^{-1}(\Delta)\), where \(\Delta\) is the diagonal of \(\widetilde{S}_{H}\times\widetilde{S}_{H}\). Hence, \(F_{\gamma}\) is an analytic subset of \(\widetilde{S}_{H}\).
Observe that \(R_{0}=\cup_{\gamma\in N_{0};\gamma\neq 1}F_{\gamma}\). Then we claim that \(R_{0}\) is also an analytic subset of \(\widetilde{S}_{H}\). Indeed, for any \(y\in\widetilde{S}_{H}\), since the action of \(\pi_{1}(X)/H\) on \(\widetilde{S}_{H}\) is analytic and properly discontinuous, there exists an open neighborhood \(V_{y}\) of \(y\) such that \(\mathcal{S}_{y}:=\{\gamma\in N_{0}\mid\gamma.V_{y}\cap V_{y}\neq\varnothing\}\) is finite. Therefore,
\[V_{y}\cap R_{0}=V_{y}\cap(\cup_{\gamma\in N_{0};\gamma\neq 1}F_{\gamma})=V_{y} \cap(\cup_{\gamma\in\mathcal{S}_{y};\gamma\neq 1}F_{\gamma}).\]
Hence, locally \(R_{0}\) is a finite union of analytic subsets, which is again an analytic subset. Therefore, \(R_{0}\) is an analytic subset of \(\widetilde{S}_{H}\). The first assertion is proved.
Take an arbitrary \(y\in R_{0}\) and any \(\gamma\in\pi_{1}(X)/H\). Then there exists \(\gamma_{0}\in N_{0}\) such that \(\gamma_{0}\neq 1\) and \(\gamma_{0}.y=y\). It follows that \((\gamma\gamma_{0}\gamma^{-1}).(\gamma.y)=\gamma.y\). Note that \(\gamma\gamma_{0}\gamma^{-1}\neq 1\) and \(\gamma\gamma_{0}\gamma^{-1}\in N_{0}\), as \(N_{0}\) is normal. Hence \(\gamma.y\in R_{0}\). Therefore, \(R_{0}\) is invariant under \(\pi_{1}(X)/H\). The claim is proved.
Since \(\operatorname{Sh}_{\mathfrak{C}}(X)\) is the quotient of \(\widetilde{S}_{H}\) by \(\pi_{1}(X)/H\), there exists an analytic subset \(\mathcal{R}_{0}\) of \(\operatorname{Sh}_{\mathfrak{C}}(X)\) such that \(\mu^{-1}(\mathcal{R}_{0})=R_{0}\), where \(\mu:\widetilde{S}_{H}\to\operatorname{Sh}_{\mathfrak{C}}(X)\) is the quotient map.
_Claim 3.30_.: _For any \(y\in R_{0}\), there exists a finite index normal subgroup \(N_{1}\subset N_{0}\) such that for any \(\gamma\in N_{1}\), \(\gamma.y=y\) if and only if \(\gamma=1\)._
Proof.: Let \(\mathcal{S}\) be the subgroup of \(N_{0}\) that fixes \(y\), as defined in (3.10). It follows that \(\mathcal{S}\) is a finite subgroup. By the definition of \(H\), there exists a finite family of reductive representations \(\{\varrho_{i}:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\}_{i=1,\ldots,\ell}\) with \([\varrho_{i}]\in\mathfrak{C}(\mathbb{C})\) such that \(\cap_{i=1,\ldots,\ell}\ker\varrho_{i}\cap\mathcal{S}=\{1\}\). Considering the representation \(\varrho_{0}=\oplus_{i=1}^{\ell}\varrho_{i}:\pi_{1}(X)/H\to\prod_{i=1}^{\ell}\operatorname{GL}_{N}(\mathbb{C})\), the restriction \(\varrho_{0}|_{\mathcal{S}}:\mathcal{S}\to\prod_{i=1}^{\ell}\operatorname{GL}_{N}(\mathbb{C})\) is injective. Since \(\varrho_{0}(\pi_{1}(X))=\varrho_{0}(\pi_{1}(X)/H)\) is finitely generated and linear, by Malcev's theorem, there exists a finite index subgroup \(\Gamma_{1}\) of \(\varrho_{0}(\pi_{1}(X)/H)\) such that \(\Gamma_{1}\cap\varrho_{0}(\mathcal{S})=\{1\}\). Let \(N_{1}:=\varrho_{0}^{-1}(\Gamma_{1})\), which is a finite index subgroup of \(\pi_{1}(X)/H\). Observe that \(N_{1}\cap\mathcal{S}=\{1\}\). Replacing \(N_{1}\) by the normal core of \(N_{1}\cap N_{0}\) in \(\pi_{1}(X)/H\), which still has finite index, we may assume that \(N_{1}\) is a normal subgroup contained in \(N_{0}\). Hence, the claim is proven.
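Unwinding the last step (our rephrasing): since \(\varrho_{0}|_{\mathcal{S}}\) is injective and \(\Gamma_{1}\cap\varrho_{0}(\mathcal{S})=\{1\}\),
\[\varrho_{0}(N_{1}\cap\mathcal{S})\subset\Gamma_{1}\cap\varrho_{0}(\mathcal{S})=\{1\}\quad\Longrightarrow\quad N_{1}\cap\mathcal{S}\subset\ker(\varrho_{0}|_{\mathcal{S}})=\{1\}.\]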
Let \(N_{0}:=\pi_{1}(X)/H\), and let \(R_{0}\), \(\mathcal{R}_{0}\) be defined as above, induced by \(N_{0}\). Now, let \(y\in R_{0}\) be any point, and consider the finite index subgroup \(N_{1}\) of \(\pi_{1}(X)/H\) as in Claim 3.30. It follows that the set of fixed points
\[R_{1}:=\{z\in\widetilde{S}_{H}\mid\exists\gamma\in N_{1}\text{ such that }\gamma\neq 1,\gamma.z=z\}\]
does not contain \(y\). By Claim 3.29, \(R_{1}\) is invariant under \(\pi_{1}(X)/H\), and thus there exists an analytic subset \(\mathcal{R}_{1}\) of \(\operatorname{Sh}_{\mathfrak{C}}(X)\) such that \(\mu^{-1}(\mathcal{R}_{1})=R_{1}\). It follows that \(\mu(y)\not\in\mathcal{R}_{1}\). Hence \(\mathcal{R}_{1}\subsetneq\mathcal{R}_{0}\).
We can iterate such procedure to find a decreasing sequence of finite index subgroups
\[\pi_{1}(X)/H=N_{0}\supset N_{1}\supset N_{2}\supset\cdots\]
of \(\pi_{1}(X)/H\) such that for the set of fixed points
\[R_{k}:=\{x\in\widetilde{S}_{H}\mid\exists\gamma\in N_{k}\text{ such that }\gamma\neq 1, \gamma.x=x\}\]
it is invariant under \(\pi_{1}(X)/H\) by Claim 3.29, and there exist analytic subsets \(\mathcal{R}_{k}\) of \(\operatorname{Sh}_{\mathfrak{C}}(X)\) such that \(\mu^{-1}(\mathcal{R}_{k})=R_{k}\) and \(\mathcal{R}_{k+1}\subsetneq\mathcal{R}_{k}\). Since \(\operatorname{Sh}_{\mathfrak{C}}(X)\) is compact, by noetherianity the sequence \(\mathcal{R}_{k}\) stabilizes at some finite positive integer \(k_{0}\). It is worth noting that \(R_{k_{0}}=\varnothing\), for otherwise we could still use the above algorithm to find \(\mathcal{R}_{k_{0}+1}\subsetneq\mathcal{R}_{k_{0}}\). Therefore, we conclude that there exists a finite index subgroup \(N:=N_{k_{0}}\) of \(\pi_{1}(X)/H\) which acts on \(\widetilde{S}_{H}\) without fixed points.
Let \(Y:=\widetilde{X}_{H}/N\). Then \(Y\to X\) is a finite Galois etale cover with \(\operatorname{Aut}(\widetilde{X}_{H}/Y)=N\). Recall that we defined \(\nu:\pi_{1}(X)/H\to\operatorname{Aut}(\widetilde{S}_{H})\) to be the action of \(\pi_{1}(X)/H\) on \(\widetilde{S}_{H}\). Since \(\operatorname{sh}_{H}:\widetilde{X}_{H}\to\widetilde{S}_{H}\) is \(\nu\)-equivariant, the group \(N\) gives rise to a proper holomorphic fibration \(Y\to\widetilde{S}_{H}/\nu(N)\) onto a complex normal space \(\widetilde{S}_{H}/\nu(N)\). By Lemmas 3.25 and 3.28, \(\nu(N)\) acts on \(\widetilde{S}_{H}\) properly discontinuously and freely, and thus the covering \(\widetilde{S}_{H}\to\widetilde{S}_{H}/\nu(N)\) is etale.
**Claim 3.31**.: _Each fiber of \(g:\widetilde{S}_{H}\to S_{\mathfrak{C}}\times\mathscr{D}\) is discrete._
Proof.: Let \((t,o)\in S_{\mathfrak{C}}\times\mathscr{D}\) be an arbitrary point and take any point \(y\in g^{-1}((t,o))\). Then \(Z:=\operatorname{sh}_{H}^{-1}(y)\) is a connected component of the fiber \(\Psi^{-1}((t,o))\), which is compact by Lemma 3.21. By Theorem 1.30, \(Z\) has an open neighborhood \(U\) such that \(\Psi(U)\) is a locally closed analytic subvariety of \(S_{\mathfrak{C}}\times\mathscr{D}\) and \(\Psi|_{U}:U\to\Psi(U)\) is proper. Therefore, for the Stein factorization \(U\to V\stackrel{{\pi_{V}}}{{\to}}\Psi(U)\) of \(\Psi|_{U}\), the map \(U\to V\) coincides with \(\operatorname{sh}_{H}|_{U}:U\to\operatorname{sh}_{H}(U)\) and \(\pi_{V}:V\to\Psi(U)\) is finite. Observe that \(V\) is an open neighborhood of \(y\) and \(\pi_{V}:V\to\Psi(U)\) coincides with \(g|_{V}:V\to S_{\mathfrak{C}}\times\mathscr{D}\). Therefore, the set \(V\cap g^{-1}((t,o))=V\cap(\pi_{V})^{-1}(t,o)\) is finite. As a result, \(g^{-1}((t,o))\) is discrete. The claim is proven.
In [10], Griffiths discovered a so-called _canonical bundle_ \(K_{\mathscr{D}}\) on the period domain \(\mathscr{D}\), which is invariant under \(G_{0}\). Here \(G_{0}\) is a real Lie group acting on \(\mathscr{D}\) holomorphically and transitively. It is worth noting that \(K_{\mathscr{D}}\) is endowed with a \(G_{0}\)-invariant smooth metric \(h_{\mathscr{D}}\) whose curvature is positive-definite in the horizontal direction. The period mapping \(p:\widetilde{X}_{H}\to\mathscr{D}\) induces a holomorphic map \(\phi:\widetilde{S}_{H}\to\mathscr{D}\) which is horizontal. We note that \(\phi\) is \(\nu\)-equivariant. As a result, \(\phi^{*}K_{\mathscr{D}}\) descends to a line bundle on the quotient \(W:=\widetilde{S}_{H}/\nu(N)\), denoted by \(L_{\mathrm{G}}\). The smooth metric \(h_{\mathscr{D}}\) induces a smooth metric \(h_{\mathrm{G}}\) on \(L_{\mathrm{G}}\) whose curvature form is denoted by \(T\). Let \(x\in\widetilde{S}_{H}\) be a smooth point of \(\widetilde{S}_{H}\) and let \(v\in T_{\widetilde{S}_{H},x}\). Then \(-iT(v,\bar{v})>0\) if \(d\phi(v)\neq 0\), where \(T\) is pulled back to \(\widetilde{S}_{H}\) via the etale covering.
**Claim 3.32**.: \(\operatorname{Sh}_{\mathfrak{C}}(X)\) _is a projective normal variety._
Proof.: Note that \(S_{\mathfrak{C}}\) is a projective normal variety. We take an ample line bundle \(L\) over \(S_{\mathfrak{C}}\). Recall that there is a line bundle \(L_{\mathrm{G}}\) on \(W\) equipped with a smooth metric \(h_{\mathrm{G}}\) whose curvature form is \(T\). Denote by \(f:W\to S_{\mathfrak{C}}\) the natural morphism induced by \(g:\widetilde{S}_{H}\to S_{\mathfrak{C}}\times\mathscr{D}\). Let \(\mu:W^{\prime}\to W\) be a resolution of singularities of \(W\).
We take a smooth metric \(h\) on \(L\) such that its curvature form \(i\Theta_{h}(L)\) is Kahler. As shown in Claim 3.31, the fibers of \(g:\widetilde{S}_{H}\to S_{\mathfrak{C}}\times\mathscr{D}\) are discrete. Therefore, \(g\) is an immersion at general points of \(\widetilde{S}_{H}\). Thus, for the line bundle \(\mu^{*}(L_{\mathrm{G}}\otimes f^{*}L)\) equipped with the smooth metric \(\mu^{*}(h_{\mathrm{G}}\otimes f^{*}h)\), its curvature form is strictly positive at some points of \(W^{\prime}\). By
Demailly's holomorphic Morse inequalities or Siu's solution of the Grauert-Riemenschneider conjecture, \(\mu^{*}(L_{\mathrm{G}}\otimes f^{*}L)\) is a big line bundle and thus \(W^{\prime}\) is a Moishezon manifold. Hence \(W\) is a Moishezon variety.
Moreover, we can verify that for any irreducible positive-dimensional closed subvariety \(Z\) of \(W\), there exists a smooth point \(x\) in \(Z\) with a neighborhood \(\Omega\) in \(W\) that can be lifted to the etale covering \(\widetilde{S}_{H}\) of \(W\) and on which \(g|_{\Omega}:\Omega\to S_{\mathfrak{C}}\times\mathscr{D}\) is an immersion. It follows that \((if^{*}\Theta_{h}(L)+T)|_{\Omega}\) is strictly positive. Note that
\[(L_{G}\otimes f^{*}L)^{\dim Z}\cdot[Z]=\int_{Z^{\mathrm{reg}}}(if^{*}\Theta_{h }(L)+T)^{\dim Z}>0.\]
By the Nakai-Moishezon criterion for Moishezon varieties (cf. [10, Theorem 3.11]), \(L_{\mathrm{G}}\otimes f^{*}L\) is ample, implying that \(W\) is projective. Recall that the compact complex normal space \(\operatorname{Sh}_{\mathfrak{C}}(X):=\widetilde{S}_{H}/\Gamma_{0}\) is the quotient of \(W=\widetilde{S}_{H}/\nu(N)\) by the finite group \(\Gamma_{0}/\nu(N)\). Therefore, \(\operatorname{Sh}_{\mathfrak{C}}(X)\) is also projective. The claim is proved.
This completes the proof of the theorem.
We remark that Lemma 3.25 is claimed without a proof in [1, p. 524] and [11, Proof of Theorem 10]. It appears to us that the proof of Lemma 3.25 is not straightforward.
It is worth noting that Lemma 3.28 is implicitly used in [1, Proposition 5.3.10]. In that proof, the criterion for Stein spaces (cf. Proposition 1.14) is employed, assuming Lemma 3.28. The proof of Lemma 3.28 is non-trivial, particularly considering that \(\pi_{1}(X)/H\) may not be residually finite. Given its significance in the proofs of Theorems B and C, we provide a complete proof. It is noteworthy that our proof of Lemma 3.28 is valid only for the compact case, and extending it to the quasi-projective case is not straightforward due to the reliance on the compactness of \(\mathrm{Sh}_{\mathfrak{C}}(X)\).
### Construction of Shafarevich morphism (II)
In the previous subsection, we established the existence of the Shafarevich morphism associated with a constructible subset of \(M_{\mathrm{B}}(X,N)(\mathbb{C})\) defined over \(\mathbb{Q}\) that is invariant under the \(\mathbb{R}^{*}\)-action. In this subsection, we focus on proving an existence theorem for the Shafarevich morphism associated with a single reductive representation, based on Theorem 3.20. Initially, we assume that the representation has infinite monodromy at infinity; we will subsequently employ Proposition 3.17 to remove this assumption and establish the more general result.
**Proposition 3.34**.: _Let \(X\) be a quasi-projective normal variety. Let \(\varrho:\pi_{1}(X)\to\mathrm{GL}_{N}(\mathbb{C})\) be a reductive representation. Assume that \(\varrho\) has infinite monodromy at infinity if \(X\) is non-compact. Then there exists a proper surjective holomorphic fibration \(\mathrm{sh}_{\varrho}:X\to\mathrm{Sh}_{\varrho}(X)\) onto a complex normal space \(\mathrm{Sh}_{\varrho}(X)\) such that for any closed subvariety \(Z\subset X\), \(\varrho(\mathrm{Im}[\pi_{1}(Z^{\mathrm{norm}})\to\pi_{1}(X)])\) is finite if and only if \(\mathrm{sh}_{\varrho}(Z)\) is a point. If \(X\) is compact, then \(\mathrm{Sh}_{\varrho}(X)\) is projective._
We first prove the following crucial result.
**Proposition 3.35**.: _Let \(X\) be a smooth quasi-projective variety. Let \(f:Z\to X\) be a proper morphism from a smooth quasi-projective variety \(Z\). Let \(\varrho:\pi_{1}(X)\to\mathrm{GL}_{N}(\mathbb{C})\) be a reductive representation. Define \(M:=j_{Z}^{-1}\{1\}\), where \(1\) stands for the trivial representation, and \(j_{Z}:M_{\mathrm{B}}(X,N)\to M_{\mathrm{B}}(Z,N)\) is the natural morphism of \(\mathbb{Q}\)-schemes. Then \(M\) is a closed subscheme of \(M_{\mathrm{B}}(X,N)\) defined over \(\mathbb{Q}\) such that \(M(\mathbb{C})\) is invariant under the \(\mathbb{C}^{*}\)-action._
Proof.: We take smooth projective compactifications \(\overline{X}\) (resp. \(\overline{Z}\)) of \(X\) (resp. \(Z\)) such that \(D:=\overline{X}\backslash X\) (resp. \(D_{Z}:=\overline{Z}\backslash Z\)) is a simple normal crossing divisor and \(f\) extends to a morphism \(\overline{f}:\overline{Z}\to\overline{X}\). Note that the morphism \(j_{Z}\) is a \(\mathbb{Q}\)-morphism between the affine schemes of finite type \(M_{\mathrm{B}}(X,N)\) and \(M_{\mathrm{B}}(Z,N)\) defined over \(\mathbb{Q}\). \(M\) is thus a closed subscheme of \(M_{\mathrm{B}}(X,N)\) defined over \(\mathbb{Q}\). Let \(\varrho:\pi_{1}(X)\to\mathrm{GL}_{N}(\mathbb{C})\) be a reductive representation such that \([\varrho]\in M(\mathbb{C})\). By Theorem 1.6, there is a tame pure imaginary harmonic bundle \((E,\theta,h)\) on \(X\) such that \(\varrho\) is the monodromy representation of \(\nabla_{h}+\theta+\theta_{h}^{\dagger}\). By definition,
\(f^{*}\varrho\) is a trivial representation. Therefore, \(f^{*}\varrho\) corresponds to a trivial harmonic bundle \((\oplus^{N}\mathcal{O}_{Z},0,h_{0})\) where \(h_{0}\) is the canonical metric for the trivial vector bundle \(\oplus^{N}\mathcal{O}_{Z}\) with zero curvature. By the unicity theorem in [13, Theorem 1.4], \((\oplus^{N}\mathcal{O}_{Z},0,h_{0})\) coincides with \((f^{*}E,f^{*}\theta,f^{*}h)\) with some obvious ambiguity of \(h_{0}\). Therefore, \(f^{*}E=\oplus^{N}\mathcal{O}_{Z}\) and \(f^{*}\theta=0\). In particular, the regular filtered Higgs bundle \((\tilde{E}_{*},\tilde{\theta})\) on \((\overline{Z},D_{Z})\) induced by the prolongation of \((f^{*}E,f^{*}\theta,f^{*}h)\) using norm growth defined in § 2.1 is trivial; namely we have \({}_{\boldsymbol{a}}\tilde{E}=\mathcal{O}_{\overline{Z}}^{N}\otimes\mathcal{O}_{\overline{Z}}(\sum_{i=1}^{\ell}a_{i}D^{\prime}_{i})\) for any \(\boldsymbol{a}=(a_{1},\dots,a_{\ell})\in\mathbb{R}^{\ell}\) and \(\tilde{\theta}=0\). Here we write \(D_{Z}=\sum_{i=1}^{\ell}D^{\prime}_{i}\).
Let \((\boldsymbol{E}_{*},\theta)\) be the regular filtered Higgs bundle on \((\overline{X},D)\) induced by \((E,\theta,h)\) as defined in § 2.1. According to §§ 2.2 and 2.3 we can define the pullback \((f^{*}\boldsymbol{E}_{*},f^{*}\theta)\), which also forms a regular filtered Higgs bundle on \((\overline{Z},D_{Z})\) with trivial characteristic numbers. By virtue of Proposition 2.5, we deduce that \((f^{*}\boldsymbol{E}_{*},f^{*}\theta)=(\tilde{E}_{*},\tilde{\theta})\). Consequently, it follows that \((f^{*}\boldsymbol{E}_{*},f^{*}\theta)\) is trivial. Hence \((f^{*}\boldsymbol{E}_{*},tf^{*}\theta)\) is trivial for any \(t\in\mathbb{C}^{*}\).
Fix some ample line bundle \(L\) on \(\overline{X}\). It is worth noting that for any \(t\in\mathbb{C}^{*}\), \((\boldsymbol{E}_{*},t\theta)\) is \(\mu_{L}\)-polystable with trivial characteristic numbers. By [13, Theorem 9.4], there is a pluriharmonic metric \(h_{t}\) for \((E,t\theta)\) adapted to the parabolic structures of \((\boldsymbol{E}_{*},t\theta)\). By Proposition 2.5 once again, the regular filtered Higgs bundle \((f^{*}\boldsymbol{E}_{*},tf^{*}\theta)\) is the prolongation of the tame harmonic bundle \((f^{*}E,tf^{*}\theta,f^{*}h_{t})\) using norm growth defined in § 2.1. Since \((f^{*}\boldsymbol{E}_{*},tf^{*}\theta)\) is trivial for any \(t\in\mathbb{C}^{*}\), by the unicity theorem in [13, Theorem 1.4] once again, it follows that \((\oplus^{N}\mathcal{O}_{Z},0,h_{0})\) coincides with \((f^{*}E,tf^{*}\theta,f^{*}h_{t})\) with some obvious ambiguity of \(h_{0}\). Recall that in § 2.4, \(\varrho_{t}\) is defined to be the monodromy representation of the flat connection \(\nabla_{h_{t}}+t\theta+\theta^{\dagger}_{h_{t}}\). It follows that \(f^{*}\varrho_{t}\) is the monodromy representation of the flat connection \(f^{*}(\nabla_{h_{t}}+t\theta+\theta^{\dagger}_{h_{t}})\). Therefore, \(f^{*}\varrho_{t}\) is a trivial representation.
However, it is worth noting that \(\varrho_{t}\) might not be reductive, as \((E,t\theta,h_{t})\) might not be pure imaginary. Let \(\varrho_{t}^{ss}\) be the semisimplification of \(\varrho_{t}\). Then \([\varrho_{t}]=[\varrho_{t}^{ss}]\). Since \(f^{*}\varrho_{t}\) is a trivial representation, \(f^{*}\varrho_{t}^{ss}\) is trivial as well. Hence \([\varrho_{t}]=[\varrho_{t}^{ss}]\in M(\mathbb{C})\) for every \(t\in\mathbb{C}^{*}\), which proves that \(M(\mathbb{C})\) is invariant under the \(\mathbb{C}^{*}\)-action. The proposition is proved.
**Remark 3.36**.: It is important to note that, unlike the projective case, the proof of Proposition 3.35 becomes considerably non-trivial when \(X\) is quasi-projective. This complexity arises from the utilization of the functoriality of the pullback of regular filtered Higgs bundles, which is established in Proposition 2.5. Proposition 3.35 plays a crucial role in the proof of Proposition 3.34, as it allows us to remove the condition of \(\mathbb{R}^{*}\)-invariance in Theorem 3.20. However, we remark that Proposition 3.35 is claimed without a proof in the proof of [1, Lemma 9.3].
Proof of Proposition 3.34.: _Step 1: We assume that \(X\) is smooth._ Let \(f:Z\to X\) be a _proper_ morphism from a _smooth_ quasi-projective variety \(Z\). Then \(j_{Z}:M_{\mathrm{B}}(X,N)\to M_{\mathrm{B}}(Z,N)\) is a morphism of \(\mathbb{Q}\)-schemes. Define
\[\mathfrak{C}:=\bigcap_{\{f:Z\to X|f^{*}\varrho=1\}}j_{Z}^{-1}\{1\}, \tag{3.12}\]
where \(1\) stands for the trivial representation, and \(f:Z\to X\) ranges over all proper morphisms from smooth quasi-projective varieties \(Z\) to \(X\). Then \(\mathfrak{C}\) is a Zariski closed subset defined over \(\mathbb{Q}\), and by Proposition 3.35, \(\mathfrak{C}(\mathbb{C})\) is invariant under the \(\mathbb{C}^{*}\)-action. Note that \([\varrho]\in\mathfrak{C}(\mathbb{C})\). As we assume that \(\varrho\) has infinite monodromy at infinity, the conditions in Theorem 3.20 are fulfilled. Therefore, we apply Theorem 3.20 to conclude that the Shafarevich morphism \(\mathrm{sh}_{\mathfrak{C}}:X\to\mathrm{Sh}_{\mathfrak{C}}(X)\) exists. It is a proper holomorphic fibration over a complex normal space.
**Claim 3.37**.: _For any proper morphism \(f:Z\to X\) from a smooth quasi-projective variety \(Z\), \(f^{*}\varrho(\pi_{1}(Z))\) is finite if and only if \(\mathrm{sh}_{\mathfrak{C}}(Z)\) is a point._
Proof.: _Proof of "\(\Leftarrow\)"_: this follows from the fact that \([\varrho]\in\mathfrak{C}(\mathbb{C})\) and Theorem 3.20.
_Proof of "\(\Rightarrow\)"_: we take a finite etale cover \(Y\to Z\) such that \(f^{*}\varrho(\mathrm{Im}[\pi_{1}(Y)\to\pi_{1}(Z)])\) is trivial. Denote by \(g:Y\to X\) the composition of \(f\) with \(Y\to Z\). Then \(g\) is proper and
\(g^{*}\varrho=1\). Let \(\tau:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\) be any reductive representation such that \([\tau]\in\mathfrak{C}(\mathbb{C})\). Then \(g^{*}\tau=1\) by (3.12). It follows that \(f^{*}\tau(\pi_{1}(Z))\) is finite. Since this holds for every such \(\tau\), Theorem 3.20 implies that \(\mathrm{sh}_{\mathfrak{C}}(Z)\) is a point. The claim is proved.
Let \(\operatorname{sh}_{\varrho}:X\to\operatorname{Sh}_{\varrho}(X)\) be \(\operatorname{sh}_{\mathfrak{C}}:X\to\operatorname{Sh}_{\mathfrak{C}}(X)\). The proposition is proved if \(X\) is smooth.
_Step 2: We no longer assume that \(X\) is smooth._ We take a desingularization \(\mu:Y\to X\). Then \(\mu^{*}\varrho:\pi_{1}(Y)\to\operatorname{GL}_{N}(\mathbb{C})\) is also a reductive representation. By Lemma 3.16 it also has infinite monodromy at infinity when \(X\) is non-compact. Based on the first step, the Shafarevich morphism \(\operatorname{sh}_{\mu^{*}\varrho}:Y\to\operatorname{Sh}_{\mu^{*}\varrho}(Y)\) exists, which is a surjective proper holomorphic fibration. Let \(Z\) be an irreducible component of a fiber of \(\mu\). Since \(\mu(Z)\) is a point, the composition \(\pi_{1}(Z^{\mathrm{norm}})\to\pi_{1}(Y)\to\pi_{1}(X)\) is trivial, hence \(\mu^{*}\varrho(\operatorname{Im}[\pi_{1}(Z^{\mathrm{norm}})\to\pi_{1}(Y)])=\{1\}\). It follows that \(\operatorname{sh}_{\mu^{*}\varrho}(Z)\) is a point. Note that each fiber of \(\mu\) is connected as \(X\) is normal. It follows that each fiber of \(\mu\) is contracted to a point by \(\operatorname{sh}_{\mu^{*}\varrho}\). Therefore, there exists a dominant holomorphic map \(\operatorname{sh}_{\varrho}:X\to\operatorname{Sh}_{\mu^{*}\varrho}(Y)\) with connected general fibers such that we have the following commutative diagram:
\[\begin{array}{ccc}
Y & \overset{\operatorname{sh}_{\mu^{*}\varrho}}{\longrightarrow} & \operatorname{Sh}_{\mu^{*}\varrho}(Y)\\
{\scriptstyle\mu}\big\downarrow & & \big\Vert\\
X & \overset{\operatorname{sh}_{\varrho}}{\longrightarrow} & \operatorname{Sh}_{\mu^{*}\varrho}(Y)
\end{array}\tag{3.13}\]
**Claim 3.38**.: _For any closed subvariety \(Z\subset X\), \(\operatorname{sh}_{\varrho}(Z)\) is a point if and only if \(\operatorname{Im}[\pi_{1}(Z^{\operatorname{norm}})\to\pi_{1}(X)]\) is finite._
Proof.: Let us choose an irreducible component \(W\) of \(\mu^{-1}(Z)\) which is surjective onto \(Z\). Since \(\operatorname{Im}[\pi_{1}(W^{\operatorname{norm}})\to\pi_{1}(Z^{\operatorname {norm}})]\) is a finite index subgroup of \(\pi_{1}(Z^{\operatorname{norm}})\), and \(\mu_{*}:\pi_{1}(Y)\to\pi_{1}(X)\) is surjective, it follows that \(\varrho(\operatorname{Im}[\pi_{1}(Z^{\operatorname{norm}})\to\pi_{1}(X)])\) is finite if and only if \(\mu^{*}\varrho(\operatorname{Im}[\pi_{1}(W^{\operatorname{norm}})\to\pi_{1}(Y)])\) is finite.
_Proof of \(\Rightarrow\):_ Note that \(\operatorname{sh}_{\mu^{*}\varrho}(W)\) is a point and thus \(\mu^{*}\varrho(\operatorname{Im}[\pi_{1}(W^{\operatorname{norm}})\to\pi_{1}( Y)])\) is finite. Hence \(\varrho(\operatorname{Im}[\pi_{1}(Z^{\operatorname{norm}})\to\pi_{1}(X)])\) is finite.
_Proof of \(\Leftarrow\):_ Note that \(\mu^{*}\varrho(\operatorname{Im}[\pi_{1}(W^{\operatorname{norm}})\to\pi_{1}( Y)])\) is finite. Therefore, \(\operatorname{sh}_{\mu^{*}\varrho}(W)\) is a point and thus \(\operatorname{sh}_{\varrho}(Z)\) is a point by (3.13).
Let us write \(\operatorname{Sh}_{\varrho}(X):=\operatorname{Sh}_{\mu^{*}\varrho}(Y)\). Then \(\operatorname{sh}_{\varrho}:X\to\operatorname{Sh}_{\varrho}(X)\) is the Shafarevich morphism associated with \(\varrho:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\).
The condition in Proposition 3.34 that \(\varrho\) has infinite monodromy at infinity poses significant practical limitations for further applications. However, we can overcome this drawback by utilizing Proposition 3.17, which allows us to eliminate this requirement.
**Theorem 3.39**.: _Let \(X\) be a non-compact, quasi-projective normal variety, and let \(\varrho:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\) be a reductive representation. Then there exists a dominant holomorphic map \(\operatorname{sh}_{\varrho}:X\to\operatorname{Sh}_{\varrho}(X)\) to a complex normal space \(\operatorname{Sh}_{\varrho}(X)\) whose general fibers are connected such that for any closed subvariety \(Z\subset X\), \(\varrho(\operatorname{Im}[\pi_{1}(Z^{\operatorname{norm}})\to\pi_{1}(X)])\) is finite if and only if \(\operatorname{sh}_{\varrho}(Z)\) is a point._
Proof.: By Step 2 of the proof of Proposition 3.34, it suffices to prove the theorem for \(X\) being a smooth variety. Therefore, we can replace \(X\) by its desingularization and replace \(\varrho\) by its pullback over this smooth model. Since \(\varrho(\pi_{1}(X))\) is residually finite by Malcev's theorem, we can find a finite etale cover \(\nu_{0}:\widehat{X}\to X\) such that \(\nu_{0}^{*}\varrho\) is torsion free.
**Claim 3.40**.: _There are partial compactifications \(X^{\prime}\) (resp. \(\widehat{X}^{\prime}\) ) of \(X\) (resp. \(\widehat{X}\)) such that_
* \(\widehat{X}^{\prime}\) _and_ \(X^{\prime}\) _are quasi-projective normal varieties;_
* \(\nu_{0}:\widehat{X}\to X\) _extends to a finite morphism_ \(\nu^{\prime}:\widehat{X}^{\prime}\to X^{\prime}\)_;_
* \(\nu_{0}^{*}\varrho\) _extends to a reductive representation_ \(\varrho^{\prime}:\pi_{1}(\widehat{X}^{\prime})\to\operatorname{GL}_{N}(\mathbb{C})\) _that has infinite monodromy at infinity._
Proof.: Let \(\overline{X}\) be a smooth projective compactification of \(X\). Then there exists a smooth projective variety \(\overline{\widehat{X}}_{1}\) that compactifies \(\widehat{X}\), and a surjective generically finite morphism \(\nu_{1}:\overline{\widehat{X}}_{1}\to\overline{X}\) that extends \(\nu_{0}\).
By utilizing Proposition 3.17, after replacing \(\overline{\widehat{X}}_{1}\) by some birational modification, there exists a simple normal crossing divisor \(D\subset\overline{\widehat{X}}_{1}\) such that we have \(\widehat{X}_{1}:=\overline{\widehat{X}}_{1}\backslash D\supset\widehat{X}\) and \(\nu_{0}^{*}\varrho\) extends to a representation \(\varrho_{1}:\pi_{1}(\widehat{X}_{1})\to\operatorname{GL}_{N}(\mathbb{C})\) that has infinite monodromy at infinity.
The morphism \(\nu_{1}:\overline{\widehat{X}}_{1}\to\overline{X}\) is not necessarily finite, but the restriction \(\nu_{1}|_{\widehat{X}}:\widehat{X}\to X\) is a finite etale cover. By applying Hironaka-Raynaud-Gruson's flattening theorem, we can find a birational morphism \(\mu_{0}:\overline{X}_{1}\to\overline{X}\) that is isomorphic over \(X\) such that for the base change \(\overline{\widehat{X}}_{1}\times_{\overline{X}}\overline{X}_{1}\to\overline{X}_{1}\), the main component \((\overline{\widehat{X}}_{1}\times_{\overline{X}}\overline{X}_{1})_{\text{main}}\), which dominates \(\overline{X}_{1}\), is flat over \(\overline{X}_{1}\). Let \(\overline{\widehat{X}}\) be the normalization of \((\overline{\widehat{X}}_{1}\times_{\overline{X}}\overline{X}_{1})_{\text{main}}\).
Denote by \(\nu:\overline{\widehat{X}}\to\overline{X}_{1}\) and \(\mu:\overline{\widehat{X}}\to\overline{\widehat{X}}_{1}\) the two natural projections. Since \((\overline{\widehat{X}}_{1}\times_{\overline{X}}\overline{X}_{1})_{\text{main}}\) is flat over \(\overline{X}_{1}\) with finite general fibers, \(\nu\) is a finite morphism. Let's define \(D^{\prime}:=\mu^{-1}(D)\). Now, consider the pullback \(\mu^{\star}\varrho_{1}:\pi_{1}(\overline{\widehat{X}}\backslash D^{\prime})\to\operatorname{GL}_{N}(\mathbb{C})\). By Lemma 3.15, we observe that it has infinite monodromy at infinity. Consequently, \((\mu_{0}\circ\nu)^{\star}\varrho\) has infinite monodromy at each point of \(D^{\prime}\) in the sense of Lemma 3.16. Next, we assert that \(\nu^{-1}(\nu(D^{\prime}))=D^{\prime}\).
To establish the claim, we need to show that \(\mu_{0}^{\star}\varrho\) has infinite monodromy at each point of \(\nu(D^{\prime})\). Assume, for the sake of contradiction, that there exists \(q\in\nu(D^{\prime})\) such that \(\mu_{0}^{\star}\varrho\) does not have infinite monodromy at \(q\). Let \(p^{\prime}\in D^{\prime}\) be such that \(\nu(p^{\prime})=q\). Then there exists a holomorphic map \(f:\mathbb{D}\to\overline{X}_{1}\) such that
* \(f(\mathbb{D}^{\star})\subset X\) and \(f(0)=q\);
* \(f^{\star}(\mu_{0}^{\star}\varrho)(\pi_{1}(\mathbb{D}^{\star}))=\{1\}\);
* \(f_{\star}(\pi_{1}(\mathbb{D}^{\star}))\subset\operatorname{Im}[\pi_{1}( \widehat{X})\to\pi_{1}(X)]\).
Since \(\widehat{X}\to X\) is a finite etale cover, there exists a holomorphic map \(\hat{f}:\mathbb{D}\to\overline{\widehat{X}}\) such that \(\nu\circ\hat{f}=f\), \(\hat{f}(\mathbb{D}^{\star})\subset\widehat{X}\) and \(\hat{f}(0)=p^{\prime}\). Therefore, we have \(\hat{f}^{\star}((\mu_{0}\circ\nu)^{\star}\varrho)(\pi_{1}(\mathbb{D}^{\star}))=\{1\}\). However, this contradicts the fact that \((\mu_{0}\circ\nu)^{\star}\varrho\) has infinite monodromy at each point of \(D^{\prime}\). We conclude that \(\mu_{0}^{\star}\varrho\) has infinite monodromy at each point of \(\nu(D^{\prime})\). Then \(\nu^{\star}(\mu_{0}^{\star}\varrho)\) has infinite monodromy at each point of \(\nu^{-1}(\nu(D^{\prime}))\). We observe that \(\mu^{\star}\varrho_{1}\) is the extension of \(\nu^{\star}(\mu_{0}^{\star}\varrho):\pi_{1}(\widehat{X})\to\operatorname{GL}_{N}(\mathbb{C})\) over \(\overline{\widehat{X}}\backslash D^{\prime}\). Consequently, we have \(\nu^{-1}(\nu(D^{\prime}))=D^{\prime}\). Hence, \(\nu|_{\overline{\widehat{X}}\backslash D^{\prime}}:\overline{\widehat{X}}\backslash D^{\prime}\to\overline{X}_{1}\backslash\nu(D^{\prime})\) is a finite morphism.
The claim follows by denoting \(\widehat{X}^{\prime}:=\overline{\widehat{X}}\backslash D^{\prime}\), \(X^{\prime}:=\overline{X}_{1}\backslash\nu(D^{\prime})\) and \(\nu^{\prime}:=\nu|_{\widehat{X}^{\prime}}\).
We proceed by finding a finite morphism \(h:Y^{\prime}\to\widehat{X}^{\prime}\) from a normal quasi-projective variety \(Y^{\prime}\) such that the composition \(f:Y^{\prime}\to X^{\prime}\) of \(\widehat{X}^{\prime}\to X^{\prime}\) and \(Y^{\prime}\to\widehat{X}^{\prime}\) is a Galois cover with Galois group \(G\). By Claim 3.40 and Lemma 3.15, \(h^{\star}\varrho^{\prime}:\pi_{1}(Y^{\prime})\to\operatorname{GL}_{N}(\mathbb{C})\) also has infinite monodromy at infinity. Consequently, we can apply Proposition 3.34 to deduce the existence of a proper holomorphic fibration \(\operatorname{sh}_{h^{\star}\varrho^{\prime}}:Y^{\prime}\to\operatorname{Sh}_{h^ {\star}\varrho^{\prime}}(Y^{\prime})\) such that for any closed subvariety \(Z\) of \(Y^{\prime}\), \(\operatorname{sh}_{h^{\star}\varrho^{\prime}}(Z)\) is a point if and only if \(h^{\star}\varrho^{\prime}(\operatorname{Im}[\pi_{1}(Z^{\text{norm}})\to\pi_{1} (Y^{\prime})])\) is finite.
**Claim 3.41**.: _The Galois group \(G\) acts analytically on \(\operatorname{Sh}_{h^{\star}\varrho^{\prime}}(Y^{\prime})\) such that \(\operatorname{sh}_{h^{\star}\varrho^{\prime}}\) is \(G\)-equivariant._
Proof.: Take any \(y\in\operatorname{Sh}_{h^{*}\varrho^{\prime}}(Y^{\prime})\) and any \(g\in G\). Since \(\operatorname{sh}_{h^{*}\varrho^{\prime}}\) is surjective and proper, the fiber \(\operatorname{sh}_{h^{*}\varrho^{\prime}}^{-1}(y)\) is non-empty and compact. Let \(Z\) be an irreducible component of the fiber \(\operatorname{sh}_{h^{*}\varrho^{\prime}}^{-1}(y)\). Then \(h^{*}\varrho^{\prime}(\operatorname{Im}[\pi_{1}(Z^{\mathrm{norm}})\to\pi_{1}(Y^{\prime})])\) is finite, implying that \(h^{*}\varrho^{\prime}(\operatorname{Im}[\pi_{1}((g.Z)^{\mathrm{norm}})\to\pi_{1}(Y^{\prime})])\) is also finite. Consequently, there exists a point \(y^{\prime}\in\operatorname{Sh}_{h^{*}\varrho^{\prime}}(Y^{\prime})\) such that \(\operatorname{sh}_{h^{*}\varrho^{\prime}}(g.Z)=y^{\prime}\). Since the fiber \(\operatorname{sh}_{h^{*}\varrho^{\prime}}^{-1}(y)\) is connected, its image \(\operatorname{sh}_{h^{*}\varrho^{\prime}}(g.\operatorname{sh}_{h^{*}\varrho^{\prime}}^{-1}(y))\) is connected and finite, hence equal to the single point \(y^{\prime}\). Consequently, it follows that \(g\) maps each fiber of \(\operatorname{sh}_{h^{*}\varrho^{\prime}}\) into another fiber.
We consider \(g\) as an analytic automorphism of \(Y^{\prime}\). For the holomorphic map \(\operatorname{sh}_{h^{*}\varrho^{\prime}}\circ g:Y^{\prime}\to\operatorname{Sh }_{h^{*}\varrho^{\prime}}(Y^{\prime})\), since it contracts each fiber of \(\operatorname{sh}_{h^{*}\varrho^{\prime}}:Y^{\prime}\to\operatorname{Sh}_{h^{*} \varrho^{\prime}}(Y^{\prime})\), it induces a holomorphic map \(\tilde{g}:\operatorname{Sh}_{h^{*}\varrho^{\prime}}(Y^{\prime})\to\operatorname {Sh}_{h^{*}\varrho^{\prime}}(Y^{\prime})\) such that we have the following commutative diagram:
\[\begin{array}{ccc}
Y^{\prime} & \overset{g}{\longrightarrow} & Y^{\prime}\\
{\scriptstyle\operatorname{sh}_{h^{*}\varrho^{\prime}}}\big\downarrow & & \big\downarrow{\scriptstyle\operatorname{sh}_{h^{*}\varrho^{\prime}}}\\
\operatorname{Sh}_{h^{*}\varrho^{\prime}}(Y^{\prime}) & \overset{\tilde{g}}{\longrightarrow} & \operatorname{Sh}_{h^{*}\varrho^{\prime}}(Y^{\prime})
\end{array}\tag{3.14}\]
Let us define the holomorphic map \(\tilde{g}:\operatorname{Sh}_{h^{*}\varrho^{\prime}}(Y^{\prime})\to\operatorname {Sh}_{h^{*}\varrho^{\prime}}(Y^{\prime})\) to be the action of \(g\in G\) on \(\operatorname{Sh}_{h^{*}\varrho^{\prime}}(Y^{\prime})\). Based on (3.14), it is clear that \(\operatorname{sh}_{h^{*}\varrho^{\prime}}\) is \(G\)-equivariant. Therefore, the claim is proven.
Note that \(X^{\prime}=Y^{\prime}/G\). Taking the quotient of \(\operatorname{Sh}_{h^{*}\varrho^{\prime}}(Y^{\prime})\) by \(G\) results in a complex normal space, denoted by \(Q\) (cf. [10]). Then \(\operatorname{sh}_{h^{*}\varrho^{\prime}}\) induces a proper holomorphic fibration \(c^{\prime}:X^{\prime}\to Q\). Consider the restriction \(c:=c^{\prime}|_{X}\).
**Claim 3.42**.: _For any closed subvariety \(Z\) of \(X\), \(c(Z)\) is a point if and only if \(\varrho(\operatorname{Im}[\pi_{1}(Z^{\mathrm{norm}})\to\pi_{1}(X)])\) is finite._
Proof.: Let \(Y:=f^{-1}(X)\) and \(f_{0}:=f|_{Y}\). Note that \(f_{0}:Y\to X\) is a Galois cover with Galois group \(G\). We have \(h^{*}\varrho^{\prime}|_{\pi_{1}(Y)}=f_{0}^{*}\varrho\). Now, consider any closed subvariety \(Z\) of \(X\). There exists an irreducible closed subvariety \(W\) of \(Y\) such that \(f_{0}(W)=Z\). Let \(\overline{W}\) be the closure of \(W\) in \(Y^{\prime}\), which is an irreducible closed subvariety of \(Y^{\prime}\).
Observe that \(c(Z)\) is a point if and only if \(\operatorname{sh}_{h^{*}\varrho^{\prime}}(\overline{W})\) is a point, which is equivalent to \(h^{*}\varrho^{\prime}(\operatorname{Im}[\pi_{1}(\overline{W}^{\mathrm{norm}}) \to\pi_{1}(Y^{\prime})])\) being finite. Furthermore, this is equivalent to \(f_{0}^{*}\varrho(\operatorname{Im}[\pi_{1}(W^{\mathrm{norm}})\to\pi_{1}(Y)])\) being finite since \(h^{*}\varrho^{\prime}|_{\pi_{1}(Y)}=f_{0}^{*}\varrho\). Since \(\operatorname{Im}[\pi_{1}(W^{\mathrm{norm}})\to\pi_{1}(Z^{\mathrm{norm}})]\) is a finite index subgroup of \(\pi_{1}(Z^{\mathrm{norm}})\), the above condition is equivalent to \(\varrho(\operatorname{Im}[\pi_{1}(Z^{\mathrm{norm}})\to\pi_{1}(X)])\) being finite.
Let \(\operatorname{sh}_{\varrho}:=c\) and \(\operatorname{Sh}_{\varrho}(X):=Q\). This concludes our construction of the Shafarevich morphism of \(\varrho\). Therefore, our theorem is proven.
**Corollary 3.43**.: _Let \(X\) be a quasi-projective normal variety. Let \(\Sigma\) be a (non-empty) set of reductive representations \(\varrho:\pi_{1}(X)\to\operatorname{GL}_{N_{\varrho}}(\mathbb{C})\). If \(X\) is non-compact, we assume additionally that each \(\varrho\) has infinite monodromy at infinity. Then there is a proper surjective holomorphic fibration \(\operatorname{sh}_{\Sigma}:X\to\operatorname{Sh}_{\Sigma}(X)\) onto a complex normal space such that for any closed subvariety \(Z\subset X\), \(\operatorname{sh}_{\Sigma}(Z)\) is a point if and only if \(\varrho(\operatorname{Im}[\pi_{1}(Z^{\mathrm{norm}})\to\pi_{1}(X)])\) is finite for every \(\varrho\in\Sigma\). Moreover, \(\operatorname{Sh}_{\Sigma}(X)\) is a projective normal variety if \(X\) is compact._
Proof.: By Proposition 3.34, for each \(\varrho\in\Sigma\), there exists a surjective proper holomorphic fibration \(\operatorname{sh}_{\varrho}:X\to\operatorname{Sh}_{\varrho}(X)\) onto a complex normal space \(\operatorname{Sh}_{\varrho}(X)\). By [10], there exists a surjective proper holomorphic fibration \(\operatorname{sh}_{\Sigma}:X\to\operatorname{Sh}_{\Sigma}(X)\) onto a complex normal space \(\operatorname{Sh}_{\Sigma}(X)\) and holomorphic maps \(e_{\varrho}:\operatorname{Sh}_{\Sigma}(X)\to\operatorname{Sh}_{\varrho}(X)\) such that
1. \(\operatorname{sh}_{\varrho}=e_{\varrho}\circ\operatorname{sh}_{\Sigma}\);
2. for any \(y\in\operatorname{Sh}_{\Sigma}(X)\), we have \(\operatorname{sh}_{\Sigma}^{-1}(y)=\cap_{\varrho\in\Sigma}\operatorname{sh}_{ \varrho}^{-1}(e_{\varrho}(y))\).
Let \(Z\) be a closed subvariety of \(X\). If \(\operatorname{sh}_{\Sigma}(Z)\) is a point, then \(\operatorname{sh}_{\varrho}(Z)\) is a point for any \(\varrho\in\Sigma\) by Item 1. It follows that \(\varrho(\operatorname{Im}[\pi_{1}(Z^{\operatorname{norm}})\to\pi_{1}(X)])\) is finite for every \(\varrho\in\Sigma\).
Conversely, if \(\varrho(\operatorname{Im}[\pi_{1}(Z^{\operatorname{norm}})\to\pi_{1}(X)])\) is finite for every \(\varrho\in\Sigma\), then \(\operatorname{sh}_{\varrho}(Z)\) is a point for any \(\varrho\in\Sigma\). By Item 2, \(\operatorname{sh}_{\Sigma}(Z)\) is a point. The corollary is proved.
### On the algebraicity of the Shafarevich morphism via \(L^{2}\)-methods
When \(X\) is compact, we proved in Proposition 3.34 that the image \(\operatorname{Sh}_{\varrho}(X)\) is projective. In general, as mentioned in Remark 0.2, we propose the following conjecture.
**Conjecture 3.44** (Algebraicity of Shafarevich morphism): _Let \(X\), \(\varrho\) and \(\operatorname{sh}_{\varrho}:X\to\operatorname{Sh}_{\varrho}(X)\) be as in Theorem 3.39. Then \(\operatorname{Sh}_{\varrho}(X)\) is a quasi-projective normal variety and \(\operatorname{sh}_{\varrho}:X\to\operatorname{Sh}_{\varrho}(X)\) is an algebraic morphism._
This conjecture seems to be a difficult problem; the special case in which \(\varrho\) arises from a \(\mathbb{Z}\)-VHS is a long-standing conjecture of Griffiths. In this paper, we confirm this expectation at the function field level, inspired by the work of Sommese [23, 24].
We first recall the definition of (bi)meromorphic maps of complex spaces \(X\) and \(Y\) (in the sense of Remmert), with a slight deviation from the standard convention. Let \(X^{\circ}\) be an open subset of \(X\) such that \(X\backslash X^{\circ}\) is a nowhere-dense analytic subset, and suppose that a holomorphic mapping \(f:X^{\circ}\to Y\) has been given. Then \(f:X\dashrightarrow Y\) is called a meromorphic mapping if the closure \(\Gamma_{f}\) of the graph of \(f\) in \(X\times Y\) is an analytic subset of \(X\times Y\) and if the projection \(\Gamma_{f}\to X\) is a proper mapping. If, additionally, there is a Zariski dense open subset \(X^{\prime}\subset X\) such that \(f|_{X^{\prime}}:X^{\prime}\to f(X^{\prime})\) is a biholomorphism, then \(f\) is called _bimeromorphic_. It is worth noting that this definition does not require \(\Gamma_{f}\) to be proper over \(Y\), which differs from the standard definition of bimeromorphic maps.
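For instance (an elementary illustration of the difference, not needed in what follows): the inclusion \(f:\mathbb{D}\hookrightarrow\mathbb{C}\) of the unit disc, with \(X^{\circ}=X=\mathbb{D}\) and \(Y=\mathbb{C}\), is bimeromorphic in the above sense, since \(\Gamma_{f}\subset\mathbb{D}\times\mathbb{C}\) is the graph of the inclusion, which is analytic and proper over \(X\), and \(f\) is a biholomorphism onto its image; yet \(\Gamma_{f}\) is not proper over \(Y\), so \(f\) is not bimeromorphic in the standard sense.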
We present a result that is derived from [23, Proposition I], where the proof utilizes an elegant application of the Hormander-Andreotti-Vesentini \(L^{2}\)-estimate.
**Proposition 3.45**.: _Let \(X\) be a smooth quasi-projective variety and let \(f:X\to Y\) be a proper surjective holomorphic map onto a normal complex space \(Y\). Let \(\overline{X}\) be a smooth projective compactification of \(X\) such that \(D:=\overline{X}\backslash X\) is a simple normal crossing divisor. Assume that there exists a holomorphic line bundle \(L\) on \(Y\) equipped with a smooth hermitian metric \(h\) satisfying the following properties:_
(a) \(f^{*}L\) _extends to an algebraic line bundle_ \(\mathcal{L}\) _on_ \(\overline{X}\)_._
(b) \(\mathcal{L}\) _has_ \(L^{2}\)_-poles with respect to_ \(f^{*}h\)_, i.e., for any point_ \(x\) _in the smooth locus of_ \(D\)_, there is an admissible coordinate chart_ \((U;z_{1},\ldots,z_{n})\) _centered at_ \(x\) _with_ \(D\cap U=(z_{1}=0)\) _such that_ \(\mathcal{L}|_{U}\) _is trivialized by a section_ \(s\in\Gamma(U,\mathcal{L})\) _and_ \(\int_{U\backslash D}|z_{1}|^{N}|s|_{f^{*}h}^{2}idz_{1}\wedge d\bar{z}_{1}\wedge\ldots\wedge idz_{n}\wedge d\bar{z}_{n}<\infty\) _for some integer_ \(N\geq 1\)_._
(c) _The curvature_ \(i\Theta_{h}(L)\) _is semipositive everywhere and strictly positive at a general smooth point of_ \(Y\)_._
_Then there exists a bimeromorphic map \(h:Y\dashrightarrow W\) to a quasi-projective variety \(W\) such that \(h\circ f:X\dashrightarrow W\) is rational._
As the paper by Sommese [23] is rather involved, for the readers' convenience, we recall briefly the ideas of the proof in [23].
Sketch of the proof.: After taking successive generic hyperplane sections on \(\overline{X}\), we assume that there exists a proper surjective generically finite holomorphic map \(g:Z\to Y\)
from a complete Kahler manifold \(Z\). By Item (c) we can choose an open set \(U\subset Y^{\operatorname{reg}}\) such that
1. \(g^{-1}(U)=\cup_{i=1}^{m}U_{i}\) such that \(g|_{U_{i}}:U_{i}\to U\) is a biholomorphism;
2. \(i\Theta_{h}(L)\) is strictly positive on \(U\).
We fix a point \(y\in U\) and let \(z_{i}\) be the unique point in \(U_{i}\) such that \(g(z_{i})=y\). By applying the Hormander \(L^{2}\)-estimate, we can prove that there exists an integer \(N_{0}\geq 1\) such that for any \(N\geq N_{0}\), the global \(L^{2}\)-sections \(L^{2}(Z,K_{Z}\otimes g^{*}L^{\otimes N})\) generate 1-jets at the points \(z_{1},\dots,z_{m}\), where \(g^{*}L^{\otimes N}\) is equipped with the metric \(g^{*}h^{\otimes N}\). For any \(e\in L^{2}(Z,K_{Z}\otimes g^{*}L^{\otimes N})\), the trace map induces a section \(\widetilde{e}\in L^{2}(Y,\mathcal{K}_{Y}\otimes L^{\otimes N})\), where \(\mathcal{K}_{Y}\) is the Grauert-Riemenschneider sheaf of \(Y\) (cf. [12, Lemma II-A]). Therefore, when \(N\geq N_{0}\), the sections in \(L^{2}(Y,\mathcal{K}_{Y}\otimes L^{\otimes N})\) generate 1-jets at \(y\). We then choose a finite set of sections in \(L^{2}(Y,\mathcal{K}_{Y}\otimes L^{\otimes N})\) generating 1-jets at \(y\). It thus induces a meromorphic map \(h:Y\dashrightarrow\mathbb{P}^{N}\) such that \(h\) is immersive in a neighborhood of \(y\).
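For orientation, we recall a standard formulation of the \(L^{2}\)-estimate being invoked (one convenient variant; the precise bookkeeping in [23] may differ slightly): let \((Z,\omega)\) be a complete Kahler manifold of dimension \(n\) and let \((L,h)\) be a hermitian line bundle with \(i\Theta_{h}(L)\geq\varepsilon\omega\) for some \(\varepsilon>0\). Then for every \(\bar{\partial}\)-closed \(L\)-valued \((n,1)\)-form \(v\) with finite \(L^{2}\)-norm, there exists an \(L\)-valued \((n,0)\)-form \(u\) with
\[\bar{\partial}u=v\qquad\text{and}\qquad\int_{Z}|u|^{2}_{h,\omega}\,dV_{\omega}\;\leq\;\frac{1}{\varepsilon}\int_{Z}|v|^{2}_{h,\omega}\,dV_{\omega}.\]
Solving \(\bar{\partial}u=v\) for \(v:=\bar{\partial}(\chi s)\), where \(s\) is a local holomorphic section peaked at the \(z_{i}\) and \(\chi\) is a cut-off function (with the metric twisted by a suitable weight forcing \(u\) to vanish to high order at the \(z_{i}\)), produces the global \(L^{2}\)-sections of \(K_{Z}\otimes g^{*}L^{\otimes N}\) generating 1-jets.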
On the other hand, by [12, Lemma I-C], the sections in \(f^{*}L^{2}(Y,\mathcal{K}_{Y}\otimes L^{\otimes N})\) extend to meromorphic sections of \(\Omega^{k}_{\overline{X}}\otimes\mathcal{L}^{\otimes N}\), where \(k:=\dim Y\). Therefore, by [12, Lemma I-E] there is a meromorphic map \(p:\overline{X}\dashrightarrow\mathbb{P}^{N}\) such that \(h\circ f=p|_{X}\).
By the Chow theorem, \(p\) is rational. Let \(W\) be the image of \(p\) which is a projective variety. Then \(\dim W=\dim Y\) and \(h(Y)\subset W\). Since \(h\) is immersive at one point, it follows that there is a Zariski dense open set \(Y^{\circ}\) such that \(h|_{Y^{\circ}}:Y^{\circ}\to h(Y^{\circ})\) is a biholomorphism. Therefore, \(h:Y\dashrightarrow W\) is a bimeromorphic map.
Let us apply Proposition 3.45 to study the algebraicity property of the Shafarevich morphism constructed in Theorem 3.39.
**Theorem 3.46**.: _Let \(X\) be a non-compact smooth quasi-projective variety and \(\varrho:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\) be a reductive representation. Then after we replace \(X\) by some finite etale cover and \(\varrho\) by its pullback over the cover, there exists a bimeromorphic map \(h:\operatorname{Sh}_{\varrho}(X)\dashrightarrow Y\) to a quasi-projective normal variety \(Y\) such that \(h\circ\operatorname{sh}_{\varrho}:X\dashrightarrow Y\) is rational._
Proof.: We replace \(X\) by a finite etale cover such that the pullback of \(\varrho\) over this etale cover is torsion free. Based on Theorem 3.39, we can then extend \(X\) to a partial projective compactification in such a way that the representation \(\varrho\) also extends, and the extended representation has infinite monodromy at infinity. Let \(\mathfrak{C}\subset M_{\operatorname{B}}(X,N)(\mathbb{C})\) be the Zariski closed subset defined in (3.12). Then \(\mathfrak{C}\) is defined over \(\mathbb{Q}\), and by Proposition 3.35, \(\mathfrak{C}(\mathbb{C})\) is invariant under the \(\mathbb{C}^{*}\)-action. Furthermore, since \([\varrho]\in\mathfrak{C}(\mathbb{C})\) and \(\varrho\) has infinite monodromy at infinity, \(\mathfrak{C}\) has infinite monodromy at infinity. Therefore, \(\mathfrak{C}\) satisfies the conditions in Theorem 3.20, and we can apply the claims in the proof of Theorem 3.20. Let \(\sigma:\pi_{1}(X)\to\prod_{i=1}^{m}\operatorname{GL}_{N}(\mathbb{C})\) be the reductive representation underlying a \(\mathbb{C}\)-VHS constructed in Proposition 3.12 with respect to \(\mathfrak{C}\). It satisfies all the properties in Items (a) to (c) in Theorem 3.20. We will use the same notations as in the proof of Theorem 3.20.
By Lemma 1.29, we can establish a family of finitely many reductive representations \(\boldsymbol{\varrho}:=\{\varrho_{i}:\pi_{1}(X)\to\operatorname{GL}_{N}(K_{i})\}_{i=1,\dots,\ell}\) with \(K_{i}\) non-archimedean local fields, which satisfies the conditions \([\varrho_{i}]\in\mathfrak{C}(K_{i})\) for every \(i\), and \(s_{\mathfrak{C}}:X\to S_{\mathfrak{C}}\) coincides with the reduction map \(s_{\boldsymbol{\varrho}}:X\to S_{\boldsymbol{\varrho}}\).
Let \(\boldsymbol{\tau}:=\{\varrho_{i}:\pi_{1}(X)\to\operatorname{GL}_{N}(K_{i})\}_{i=1,\dots,\ell}\cup\{\sigma:\pi_{1}(X)\to\prod_{i=1}^{m}\operatorname{GL}_{N}(\mathbb{C})\}\). Let \(\pi_{\boldsymbol{\tau}}:\widetilde{X}_{\boldsymbol{\tau}}\to X\) be the covering of \(X\) corresponding to the normal subgroup \(\cap_{i=1,\dots,\ell}\ker\varrho_{i}\cap\ker\sigma\) of \(\pi_{1}(X)\).
Define
\[\Phi:\widetilde{X}_{\boldsymbol{\tau}} \to S_{\boldsymbol{\varrho}}\times\mathscr{D}\] \[x \mapsto(s_{\boldsymbol{\varrho}}\circ\pi_{\boldsymbol{\tau}}(x),p(x))\]
where \(p:\widetilde{X}_{\boldsymbol{\tau}}\to\mathscr{D}\) is the period mapping of the \(\mathbb{C}\)-VHS induced by \(\sigma\). Then we have
the commutative diagram
\[\begin{array}{ccc}
\widetilde{X}_{H} & \overset{\tilde{\pi}}{\longrightarrow} & \widetilde{X}_{\boldsymbol{\tau}}\\
& {\scriptstyle\Psi}\searrow & \big\downarrow{\scriptstyle\Phi}\\
& & S_{\boldsymbol{\varrho}}\times\mathscr{D}
\end{array}\]
where \(\tilde{\pi}\) is a topological Galois covering.
**Claim 3.47**.: _Each connected component of the fiber of \(\Phi\) is compact._
Proof.: Let \((t,o)\in S_{\boldsymbol{\varrho}}\times\mathscr{D}\) be arbitrary, and consider a connected component \(F\) of \(\Phi^{-1}(t,o)\). Then any connected component \(F^{\prime}\) of \(\tilde{\pi}^{-1}(F)\) is a connected component of \(\Psi^{-1}(t,o)\), which is compact by virtue of Lemma 3.21. Therefore, \(\tilde{\pi}(F^{\prime})=F\) holds, implying that \(F\) is also compact. Thus, the claim follows.
As a result of Claim 3.47 and Theorem 1.30, the set \(\widetilde{S}_{\boldsymbol{\tau}}\) consisting of connected components of fibers of \(\Phi\) can be equipped with the structure of a complex normal space. Moreover, we have \(\Phi=g_{0}\circ\widetilde{s}_{\boldsymbol{\tau}}\), where \(\widetilde{s}_{\boldsymbol{\tau}}:\widetilde{X}_{\boldsymbol{\tau}}\to\widetilde{S}_{\boldsymbol{\tau}}\) is a proper holomorphic fibration and \(g_{0}:\widetilde{S}_{\boldsymbol{\tau}}\to S_{\boldsymbol{\varrho}}\times\mathscr{D}\) is a holomorphic map. By the proof of Claim 3.47, \(\tilde{\pi}\) maps each fiber of \(\operatorname{sh}_{H}:\widetilde{X}_{H}\to\widetilde{S}_{H}\) to a fiber of \(\widetilde{s}_{\boldsymbol{\tau}}:\widetilde{X}_{\boldsymbol{\tau}}\to\widetilde{S}_{\boldsymbol{\tau}}\). This induces a holomorphic map \(\tilde{\mu}:\widetilde{S}_{H}\to\widetilde{S}_{\boldsymbol{\tau}}\) and we have the following commutative diagram
\[\begin{array}{ccc}
\widetilde{X}_{H} & \overset{\tilde{\pi}}{\longrightarrow} & \widetilde{X}_{\boldsymbol{\tau}}\\
{\scriptstyle\operatorname{sh}_{H}}\big\downarrow & & \big\downarrow{\scriptstyle\widetilde{s}_{\boldsymbol{\tau}}}\\
\widetilde{S}_{H} & \overset{\tilde{\mu}}{\longrightarrow} & \widetilde{S}_{\boldsymbol{\tau}}
\end{array}\]
**Claim 3.48**.: \(\widetilde{s}_{\boldsymbol{\tau}}\) _contracts every compact subvariety of \(\widetilde{X}_{\boldsymbol{\tau}}\)._
Proof.: The proof is exactly the same as Claim 3.24 and we repeat it for the sake of completeness. Let \(Z\subset\widetilde{X}_{\boldsymbol{\tau}}\) be a compact irreducible subvariety. Then \(W:=\pi_{\boldsymbol{\tau}}(Z)\) is also a compact irreducible subvariety in \(X\) with \(\dim Z=\dim W\). Hence \(\operatorname{Im}\left[\pi_{1}(Z^{\text{norm}})\to\pi_{1}(W^{\text{norm}})\right]\) is a finite index subgroup of \(\pi_{1}(W^{\text{norm}})\). Note that \(W\) can be endowed with an algebraic structure induced by \(X\). As the natural map \(Z\to W\) is finite, \(Z\) can be equipped with an algebraic structure such that the natural map \(Z\to X\) is algebraic.
By the definition of \(\widetilde{X}_{\boldsymbol{\tau}}\), we have \(\varrho_{i}(\operatorname{Im}\left[\pi_{1}(Z)\to\pi_{1}(X)\right])\subset\varrho_{i}(\operatorname{Im}\left[\pi_{1}(\widetilde{X}_{\boldsymbol{\tau}})\to\pi_{1}(X)\right])=\{1\}\) for each \(\varrho_{i}\in\boldsymbol{\varrho}\). Hence, \(\varrho_{i}(\operatorname{Im}\left[\pi_{1}(W^{\text{norm}})\to\pi_{1}(X)\right])\) is finite, and thus bounded. By the definition of \(s_{\boldsymbol{\varrho}}\), \(W\) is contained in a fiber of \(s_{\boldsymbol{\varrho}}\). Consider a desingularization \(Z^{\prime}\) of \(Z\) and let \(i:Z^{\prime}\to X\) be the natural algebraic morphism. Note that \(i^{*}\sigma(\pi_{1}(Z^{\prime}))=\{1\}\). It follows that the \(\mathbb{C}\)-VHS induced by \(i^{*}\sigma\) is trivial. Therefore, for the period mapping \(p:\widetilde{X}_{\boldsymbol{\tau}}\to\mathscr{D}\), \(p(Z)\) is a point. Hence \(Z\) is contracted by \(\widetilde{s}_{\boldsymbol{\tau}}\). The claim follows.
By Claim 3.48, we can apply a similar proof as in Lemma 3.25 to \(\widetilde{s}_{\boldsymbol{\tau}}:\widetilde{X}_{\boldsymbol{\tau}}\to \widetilde{S}_{\boldsymbol{\tau}}\). This allows us to conclude that there is an action of \(\operatorname{Aut}(\widetilde{X}_{\boldsymbol{\tau}}/X)\) on \(\widetilde{S}_{\boldsymbol{\tau}}\) that is equivariant for the proper holomorphic fibration \(\widetilde{s}_{\boldsymbol{\tau}}:\widetilde{X}_{\boldsymbol{\tau}}\to \widetilde{S}_{\boldsymbol{\tau}}\). This action is analytic and properly discontinuous. Taking the quotient of \(\widetilde{s}_{\boldsymbol{\tau}}\) by this action, we obtain a proper holomorphic
fibration \(\operatorname{sh}_{\mathfrak{C}}:X\to\operatorname{Sh}_{\mathfrak{C}}(X)\) defined in the proof of Theorem 3.20, as it is also the quotient of \(\operatorname{sh}_{H}:\widetilde{X}_{H}\to\widetilde{S}_{H}\) by \(\pi_{1}(X)/H\).
It is worth noting that \(\operatorname{sh}_{\mathfrak{C}}:X\to\operatorname{Sh}_{\mathfrak{C}}(X)\) coincides with the Shafarevich morphism \(\operatorname{sh}_{\varrho}:X\to\operatorname{Sh}_{\varrho}(X)\), as shown in Step 1 of the proof of Proposition 3.34.
**Claim 3.49**.: _There exists a finite index normal subgroup \(N\) of \(\operatorname{Aut}(\widetilde{X}_{\boldsymbol{\tau}}/X)\) such that its action on \(\widetilde{S}_{\boldsymbol{\tau}}\) does not have any fixed point._
Proof.: Note that \(\operatorname{Aut}(\widetilde{X}_{\boldsymbol{\tau}}/X)\simeq\frac{\pi_{1}(X) }{\operatorname{Im}[\pi_{1}(\widetilde{X}_{\boldsymbol{\tau}})\to\pi_{1}(X)]}\). Since \(\widetilde{X}_{\boldsymbol{\tau}}\) is the covering of \(X\) corresponding to the normal subgroup \(\cap_{i=1,\ldots,\ell}\ker\varrho_{i}\cap\ker\sigma\) of \(\pi_{1}(X)\), it follows that \(\operatorname{Aut}(\widetilde{X}_{\boldsymbol{\tau}}/X)\simeq\frac{\pi_{1}(X )}{\cap_{i=1,\ldots,\ell}\ker\varrho_{i}\cap\ker\sigma}\). Hence \(\operatorname{Aut}(\widetilde{X}_{\boldsymbol{\tau}}/X)\) is finitely generated and linear. By Malcev's theorem, \(\operatorname{Aut}(\widetilde{X}_{\boldsymbol{\tau}}/X)\) has a finite index normal subgroup \(N\) that is torsion free. We will prove that \(N\) acts on \(\widetilde{S}_{\boldsymbol{\tau}}\) without fixed point.
Assume that there exists \(\gamma\in N\) and \(y\in\widetilde{S}_{\boldsymbol{\tau}}\) such that \(\gamma.y=y\). Let \(F:=\widetilde{s}_{\boldsymbol{\tau}}^{-1}(y)\), which is a compact connected analytic subset of \(\widetilde{X}_{\boldsymbol{\tau}}\) by Claim 3.47. We have \(\gamma.F=F\). Since \(F\) is compact and the action of \(N\) on \(\widetilde{X}_{\boldsymbol{\tau}}\) is properly discontinuous, the subgroup \(\mathcal{S}\) of \(N\) that stabilizes \(F\) is finite. Since \(N\) is torsion-free, it follows that \(\mathcal{S}=\{1\}\) and thus \(\gamma=1\). Therefore, the stabilizer in \(N\) of an arbitrary point \(y\in\widetilde{S}_{\boldsymbol{\tau}}\) is trivial. Thus, the claim is proved.
Let \(Y:=\widetilde{X}_{\boldsymbol{\tau}}/N\). Then \(f:Y\to X\) is a finite Galois etale cover. Since \(\widetilde{s}_{\boldsymbol{\tau}}:\widetilde{X}_{\boldsymbol{\tau}}\to\widetilde{S}_{\boldsymbol{\tau}}\) is \(\operatorname{Aut}(\widetilde{X}_{\boldsymbol{\tau}}/X)\)-equivariant, we take its quotient by \(N\) to obtain a proper holomorphic fibration \(\operatorname{sh}_{f^{*}\varrho}:Y\to\operatorname{Sh}_{f^{*}\varrho}(Y)\) over a complex normal space \(\operatorname{Sh}_{f^{*}\varrho}(Y)\). As shown in Claim 3.49, \(N\) acts on \(\widetilde{S}_{\boldsymbol{\tau}}\) properly discontinuously and freely. Hence the covering \(\widetilde{S}_{\boldsymbol{\tau}}\to\operatorname{Sh}_{f^{*}\varrho}(Y)\) is etale.
**Claim 3.50**.: _The proper holomorphic fibration \(\operatorname{sh}_{f^{*}\varrho}:Y\to\operatorname{Sh}_{f^{*}\varrho}(Y)\) is the Shafarevich morphism of \(f^{*}\varrho\)._
Proof.: Let \(Z\) be a closed subvariety of \(Y\). Then \(W:=f(Z)\) is an irreducible closed subvariety in \(X\) with \(\dim Z=\dim W\). Hence \(\operatorname{Im}\left[\pi_{1}(Z^{\operatorname{norm}})\to\pi_{1}(W^{\operatorname{norm}})\right]\) is a finite index subgroup of \(\pi_{1}(W^{\operatorname{norm}})\). Since \(\operatorname{Sh}_{f^{*}\varrho}(Y)\to\operatorname{Sh}_{\varrho}(X)\) is a finite holomorphic map, \(Z\) is contracted by \(\operatorname{sh}_{f^{*}\varrho}\) if and only if \(\operatorname{sh}_{\varrho}(W)\) is a point. This is equivalent to \(\varrho(\operatorname{Im}[\pi_{1}(W^{\operatorname{norm}})\to\pi_{1}(X)])\) being finite, as \(\operatorname{sh}_{\varrho}:X\to\operatorname{Sh}_{\varrho}(X)\) is the Shafarevich morphism of \(\varrho\). This, in turn, is equivalent to \(f^{*}\varrho(\operatorname{Im}[\pi_{1}(Z^{\operatorname{norm}})\to\pi_{1}(Y)])\) being finite. The claim is proved.
**Claim 3.51**.: _Each fiber of \(g_{0}:\widetilde{S}_{\boldsymbol{\tau}}\to S_{\boldsymbol{\varrho}}\times \mathscr{D}\) is discrete._
Proof.: The proof is the same as in Claim 3.31. We provide it for the sake of completeness. Let \((t,o)\in S_{\boldsymbol{\varrho}}\times\mathscr{D}\) be an arbitrary point and take any point \(y\in g_{0}^{-1}((t,o))\). Then \(Z:=\widetilde{s}_{\boldsymbol{\tau}}^{-1}(y)\) is a connected component of the fiber \(\Phi^{-1}((t,o))\), which is compact by Claim 3.47. By Theorem 1.30, \(Z\) has an open neighborhood \(U\) such that \(\Phi(U)\) is a locally
closed analytic subvariety of \(S_{\boldsymbol{\varrho}}\times\mathscr{D}\) and \(\Phi|_{U}:U\to\Phi(U)\) is proper. Therefore, for the Stein factorization \(U\to V\stackrel{{\pi_{V}}}{{\to}}\Phi(U)\) of \(\Phi|_{U}\), \(U\to V\) coincides with \(\widetilde{s}_{\boldsymbol{\tau}}|_{U}:U\to\widetilde{s}_{\boldsymbol{\tau}}(U)\) and \(\pi_{V}:V\to\Phi(U)\) is finite. Note that \(V\) is an open neighborhood of \(y\) and \(\pi_{V}:V\to\Phi(U)\) coincides with \(g_{0}|_{V}:V\to S_{\boldsymbol{\varrho}}\times\mathscr{D}\). Therefore, the set \(V\cap g_{0}^{-1}((t,o))=V\cap(\pi_{V})^{-1}(t,o)\) is finite. As a result, \(g_{0}^{-1}((t,o))\) is discrete. The claim is proven.
For the readers' convenience, we draw a commutative diagram below.
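Concretely, the spaces and maps constructed above fit together as follows, where the unlabeled vertical arrows are the quotient maps by \(N\) and by \(\operatorname{Aut}(\widetilde{X}_{\boldsymbol{\tau}}/X)/N\), respectively (the maps \(g_{0}\), \(\widetilde{s}_{\boldsymbol{\tau}}\), \(\operatorname{sh}_{f^{*}\varrho}\) and \(\operatorname{sh}_{\varrho}\) are those introduced above):
\[\begin{array}{ccccc}
\widetilde{X}_{\boldsymbol{\tau}} & \overset{\widetilde{s}_{\boldsymbol{\tau}}}{\longrightarrow} & \widetilde{S}_{\boldsymbol{\tau}} & \overset{g_{0}}{\longrightarrow} & S_{\boldsymbol{\varrho}}\times\mathscr{D}\\
\big\downarrow & & \big\downarrow & & \\
Y & \overset{\operatorname{sh}_{f^{*}\varrho}}{\longrightarrow} & \operatorname{Sh}_{f^{*}\varrho}(Y) & & \\
{\scriptstyle f}\big\downarrow & & \big\downarrow & & \\
X & \overset{\operatorname{sh}_{\varrho}}{\longrightarrow} & \operatorname{Sh}_{\varrho}(X) & &
\end{array}\]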
Recall that the canonical bundle \(K_{\mathscr{D}}\) of the period domain \(\mathscr{D}\) is equipped with a \(G_{0}\)-invariant smooth metric \(h_{\mathscr{D}}\), which has a positive-definite curvature in the horizontal direction. The period mapping \(p:\widetilde{X}_{\boldsymbol{\tau}}\to\mathscr{D}\) of \(\mathbb{C}\)-VHS associated with \(\sigma\) induces a holomorphic map \(\phi:\widetilde{S}_{\boldsymbol{\tau}}\to\mathscr{D}\) that is horizontal. Observe that \(\phi\) is equivariant for the \(\operatorname{Aut}(\widetilde{X}_{\boldsymbol{\tau}}/X)\)-action. As a result, \(\phi^{*}K_{\mathscr{D}}\) descends to a line bundle on the quotient \(\operatorname{Sh}_{f^{*}\varrho}(Y)\), denoted by \(L_{G}\). Since \(\widetilde{S}_{\boldsymbol{\tau}}\to\operatorname{Sh}_{f^{*}\varrho}(Y)\) is etale, the smooth metric \(h_{\mathscr{D}}\) induces a smooth metric \(h_{G}\) on \(L_{G}\) whose curvature form is smooth and denoted by \(T\). Note that \(T\) is semipositive as \(\phi\) is horizontal.
On the other hand, for the period mapping \(p:\widetilde{X}_{\boldsymbol{\tau}}\to\mathscr{D}\), the pullback \(p^{*}K_{\mathscr{D}}\) descends to a holomorphic line bundle on \(Y\) that is equal to \((\operatorname{sh}_{f^{*}\varrho})^{*}L_{G}\). It is well-known that \((\operatorname{sh}_{f^{*}\varrho})^{*}L_{G}\) extends to an algebraic line bundle \(\mathcal{L}_{1}\) over \(\overline{Y}\), known as the Deligne extension. According to [13, § 2.1], \(\mathcal{L}_{1}\) has \(L^{2}\)-poles with respect to the pullback metric \((\operatorname{sh}_{f^{*}\varrho})^{*}h_{G}\).
Let \(\overline{Y}\) be a smooth projective compactification of \(Y\) such that the boundary \(D_{Y}:=\overline{Y}\backslash Y\) is a simple normal crossing divisor. Consider the reduction map \(s_{\boldsymbol{\varrho}}:X\to S_{\boldsymbol{\varrho}}\) of \(\boldsymbol{\varrho}\). Let \(\overline{S}_{\boldsymbol{\varrho}}\) be a projective compactification of \(S_{\boldsymbol{\varrho}}\). Since \(s_{\boldsymbol{\varrho}}\circ f:Y\to S_{\boldsymbol{\varrho}}\) is an algebraic morphism, we can blow up \(D_{Y}\) such that \(s_{\boldsymbol{\varrho}}\circ f\) extends to a morphism \(j:\overline{Y}\to\overline{S}_{\boldsymbol{\varrho}}\). Let us choose an ample line bundle \(L_{0}\) on \(\overline{S}_{\boldsymbol{\varrho}}\), equipped with a smooth metric \(h_{0}\) of positive-definite curvature. Denote by \(q:\operatorname{Sh}_{f^{*}\varrho}(Y)\to S_{\boldsymbol{\varrho}}\) the natural map induced by \(s_{\boldsymbol{\varrho}}\circ f\) (equivalently, by the \(S_{\boldsymbol{\varrho}}\)-component of \(g_{0}\)). Let \(L:=q^{*}L_{0}\otimes L_{G}\), and equip it with the smooth metric \(h:=q^{*}h_{0}\otimes h_{G}\). It is worth noting that the algebraic line bundle \(\mathcal{L}:=\mathcal{L}_{1}\otimes j^{*}L_{0}\) on \(\overline{Y}\) extends \((\operatorname{sh}_{f^{*}\varrho})^{*}L\), and has \(L^{2}\)-poles with respect to \((\operatorname{sh}_{f^{*}\varrho})^{*}h\).
According to Claim 3.51, the holomorphic map \(g_{0}:\widetilde{S}_{\boldsymbol{\tau}}\to S_{\boldsymbol{\varrho}}\times\mathscr{D}\) has discrete fibers. Therefore, at general points on the regular locus of \(\operatorname{Sh}_{f^{*}\varrho}(Y)\), the curvature \(i\Theta_{h}(L)\) of \((L,h)\) is strictly positive. Note that \(i\Theta_{h}(L)\) is semipositive everywhere. Consequently, the conditions in Proposition 3.45 are satisfied. Thus, we can conclude that there exists a bimeromorphic map \(b:\operatorname{Sh}_{f^{*}\varrho}(Y)\dashrightarrow Q\) to a quasi-projective variety \(Q\) such that \(b\circ\operatorname{sh}_{f^{*}\varrho}:Y\dashrightarrow Q\) is rational. Since \(Y\) is a finite etale cover of \(X\) and \(\operatorname{sh}_{f^{*}\varrho}:Y\to\operatorname{Sh}_{f^{*}\varrho}(Y)\) is the Shafarevich morphism of \(f^{*}\varrho\) as shown in Claim 3.50, we conclude the proof of the theorem.
## 4. Proof of the reductive Shafarevich conjecture
The goal of this section is to provide proofs for Theorems B and C when \(X\) is a _smooth_ projective variety. It is important to note that our method differs from the approach presented in [12], although we do follow the general strategy in that work.
In this section, we will use the notation \(\mathcal{D}G\) to denote the derived group of any given group \(G\). Throughout the section, our focus is on non-archimedean local fields with characteristic zero. More precisely, we consider finite extensions of \(\mathbb{Q}_{p}\) for some prime \(p\).
### Reduction map of representation into algebraic tori
Let \(X\) be a smooth projective variety. Let \(a:X\to A\) be the Albanese morphism of \(X\).
**Lemma 4.1**.: _Let \(P\subset A\) be an abelian subvariety of the Albanese variety \(A\) of \(X\) and \(K\) be a non-archimedean local field. If \(\tau:\pi_{1}(X)\to\operatorname{GL}_{1}(K)\) factors through \(\sigma:\pi_{1}(A/P)\to\operatorname{GL}_{1}(K)\), then the Katzarkov-Eyssidieux reduction map \(s_{\tau}:X\to S_{\tau}\) factors through the Stein factorization of the map \(q:X\to A/P\)._
Proof.: As \(\tau=q^{*}\sigma\), it follows that for each connected component \(F\) of the fiber of \(q:X\to A/P\), \(\tau(\pi_{1}(F))=\{1\}\). Therefore, \(F\) is contracted by \(s_{\tau}\). The lemma follows.
**Lemma 4.2**.: _Let \(P\subset A\) be an abelian subvariety of \(A\). Let \(N\) be a Zariski dense open set of the image of \(j:M_{\operatorname{B}}(A/P,1)\to M_{\operatorname{B}}(A,1)\), where we consider \(M_{\operatorname{B}}(A/P,1)\) and \(M_{\operatorname{B}}(A,1)\) as algebraic tori defined over \(\bar{\mathbb{Q}}\). Then there are non-archimedean local fields \(K_{i}\) and a family of representations \(\boldsymbol{\tau}:=\{\tau_{i}:\pi_{1}(X)\to\operatorname{GL}_{1}(K_{i})\}_{i=1,\ldots,m}\) such that_
* \(\tau_{i}\in N(K_{i})\)_, where we use the natural identification_ \(M_{\operatorname{B}}^{0}(X,1)\simeq M_{\operatorname{B}}(A,1)\)_. Here_ \(M_{\operatorname{B}}^{0}(X,1)\) _denotes the connected component of_ \(M_{\operatorname{B}}(X,1)\) _containing the trivial representation._
* _The reduction map_ \(s_{\boldsymbol{\tau}}:X\to S_{\boldsymbol{\tau}}\) _is the Stein factorization of_ \(X\to A/P\)_._
* _For the canonical current_ \(T_{\boldsymbol{\tau}}\) _defined over_ \(S_{\boldsymbol{\tau}}\)_,_ \(\{T_{\boldsymbol{\tau}}\}\) _is a Kahler class._
Proof.: Let \(e_{1},\ldots,e_{m}\) be a basis of \(\pi_{1}(A/P)\simeq H_{1}(A/P,\mathbb{Z})\). Note that the \(\bar{\mathbb{Q}}\)-scheme \(M_{\operatorname{B}}(A/P,1)\simeq(\bar{\mathbb{Q}}^{\times})^{m}\). Denote by \(S\subset U(1)\cap\bar{\mathbb{Q}}\) the set of roots of unity. Then \(S\) is Zariski dense in \(\bar{\mathbb{Q}}^{\times}\). Since \(j^{-1}(N)\) is a Zariski dense open set of \(M_{\operatorname{B}}(A/P,1)\), it follows that there are \(\{a_{ij}\}_{i,j=1,\ldots,m}\in\bar{\mathbb{Q}}^{\times}\) and representations \(\{\varrho_{i}:\pi_{1}(A/P)\to\bar{\mathbb{Q}}^{\times}\}_{i=1,\ldots,m}\) defined by \(\varrho_{i}(e_{j})=a_{ij}\) such that
* \([\varrho_{i}]\in j^{-1}(N)(\bar{\mathbb{Q}})\);
* If \(i=j\), \(a_{ij}\in\bar{\mathbb{Q}}^{\times}\setminus U(1)\);
* If \(i\neq j\), \(a_{ij}\in S\).
Consider a number field \(k_{i}\) containing \(a_{i1},\ldots,a_{im}\), endowed with a discrete non-archimedean valuation \(v_{i}:k_{i}^{\times}\to\mathbb{R}\) such that \(v_{i}(a_{ii})\neq 0\). Then \(v_{i}(a_{ij})=0\) for every \(j\neq i\). Indeed, for every \(j\neq i\), since \(a_{ij}\) is a root of unity, there exists \(\ell\in\mathbb{Z}_{>0}\) such that \(a_{ij}^{\ell}=1\). It follows that \(0=v_{i}(a_{ij}^{\ell})=\ell v_{i}(a_{ij})\). Let \(K_{i}\) be the non-archimedean local field which is the completion of \(k_{i}\) with respect to \(v_{i}\). It follows that each \(\varrho_{i}:\pi_{1}(A/P)\to K_{i}^{\times}\) is unbounded. Consider \(\nu_{i}:\pi_{1}(A/P)\to\mathbb{R}\) obtained by composing \(\varrho_{i}\) with \(v_{i}:K_{i}^{\times}\to\mathbb{R}\). After rescaling \(v_{i}\), we have \(\nu_{i}(e_{j})=\delta_{ij}\) for any \(i,j\); in particular, \(\{\nu_{1},\ldots,\nu_{m}\}\subset H^{1}(A/P,\mathbb{R})\) is a basis for the \(\mathbb{R}\)-linear space \(H^{1}(A/P,\mathbb{R})\). Let \(\eta_{i}\in H^{0}(A/P,\Omega^{1}_{A/P})\) be the \((1,0)\)-part of the Hodge decomposition of \(\nu_{i}\). Therefore, \(\{\eta_{1},\ldots,\eta_{m}\}\) spans the \(\mathbb{C}\)-linear space \(H^{0}(A/P,\Omega^{1}_{A/P})\). Hence \(\sum_{i=1}^{m}i\eta_{i}\wedge\overline{\eta_{i}}\) is a Kahler form on \(A/P\). Let \(\tau_{i}:\pi_{1}(X)\to K_{i}^{\times}\) be the composition of \(\varrho_{i}\) with \(\pi_{1}(X)\to\pi_{1}(A/P)\).
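For instance (a toy illustration of the choice of the \(a_{ij}\) and the valuations, ignoring the requirement that \([\varrho_{i}]\in j^{-1}(N)(\bar{\mathbb{Q}})\)): for \(m=2\) one may take
\[(a_{ij})=\begin{pmatrix}2&1\\ 1&2\end{pmatrix},\qquad k_{1}=k_{2}=\mathbb{Q},\]
with \(v_{1}=v_{2}\) the \(2\)-adic valuation, so that \(K_{1}=K_{2}=\mathbb{Q}_{2}\); each \(\varrho_{i}\) is unbounded in \(\mathbb{Q}_{2}^{\times}\) since \(v_{i}(a_{ii})=v_{i}(2)=1\neq 0\), and \(\nu_{i}(e_{j})=\delta_{ij}\) holds without any rescaling.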
Let \(q:A\to A/P\) be the quotient map. Let \(P^{\prime}\) be the largest abelian subvariety of \(A\) such that \(q^{*}\eta_{i}|_{P^{\prime}}\equiv 0\) for each \(i\). Since \(\{\eta_{1},\ldots,\eta_{m}\}\) spans \(H^{0}(A/P,\Omega^{1}_{A/P})\), it follows that \(P^{\prime}=P\). Therefore, the reduction map \(s_{\boldsymbol{\tau}}:X\to S_{\boldsymbol{\tau}}\) is the Stein factorization of \(X\to A/P\), where \(g:S_{\boldsymbol{\tau}}\to A/P\) is the finite morphism. According to Definition 1.24, \(T_{\boldsymbol{\tau}}=g^{*}\sum_{i=1}^{m}i\eta_{i}\wedge\overline{\eta_{i}}\). Since \(\sum_{i=1}^{m}i\eta_{i}\wedge\overline{\eta_{i}}\) is a Kahler form on \(A/P\), it follows that \(\{T_{\boldsymbol{\tau}}\}\) is a Kahler class by Theorem 1.13. The lemma is proved.
**Corollary 4.3**.: _Let \(X\) be a smooth projective variety and let \(\mathfrak{C}\subset M_{\operatorname{B}}(X,1)\) be an absolutely constructible subset. Consider the reduction map \(s_{\mathfrak{C}}:X\to S_{\mathfrak{C}}\) defined in Definition 3.1. Then there is a family of representations \(\boldsymbol{\varrho}:=\{\varrho_{i}:\pi_{1}(X)\to\operatorname{GL}_{1}(K_{i})\}_{i=1,\ldots,\ell}\) where \(K_{i}\) are non-archimedean local fields such that_
* _For each_ \(i=1,\ldots,\ell\)_,_ \(\varrho_{i}\in\mathfrak{C}(K_{i})\)_;_
* _The reduction map_ \(s_{\boldsymbol{\varrho}}:X\to S_{\boldsymbol{\varrho}}\) _of_ \(\boldsymbol{\varrho}\) _coincides with_ \(s_{\mathfrak{C}}\)_._
* _For the canonical current_ \(T_{\boldsymbol{\varrho}}\) _defined over_ \(S_{\mathfrak{C}}\)_,_ \(\{T_{\boldsymbol{\varrho}}\}\) _is a Kahler class._
Proof.: Let \(A\) be the Albanese variety of \(X\). Since \(\mathfrak{C}\subset M_{\mathrm{B}}(X,1)\) is an absolutely constructible subset, by Theorem 1.19, there are abelian subvarieties \(P_{i}\subset A\) and torsion points \(v_{i}\in M_{\mathrm{B}}(X,1)(\bar{\mathbb{Q}})\) such that \(\mathfrak{C}=\cup_{i=1}^{m}v_{i}.N_{i}^{\circ}\), where \(N_{i}\) is the image in \(M_{\mathrm{B}}^{0}(X,1)\simeq M_{\mathrm{B}}(A,1)\) of the natural morphism \(M_{\mathrm{B}}(A/P_{i},1)\to M_{\mathrm{B}}(A,1)\) and \(N_{i}^{\circ}\) is a Zariski dense open subset of \(N_{i}\). Let \(k\) be a number field such that \(v_{i}\in M_{\mathrm{B}}(X,1)(k)\) for each \(i\).
**Claim 4.4**.: _Denote by \(P:=\cap_{i=1}^{m}P_{i}\). Then \(s_{\mathfrak{C}}:X\to S_{\mathfrak{C}}\) is the Stein factorization of \(X\to A/P\)._
Proof.: Let \(\tau:\pi_{1}(X)\to\mathrm{GL}_{1}(K)\) be a reductive representation with \(K\) a non-archimedean local field such that \(\tau\in\mathfrak{C}(K)\). Note that the reduction map \(s_{\tau}\) is the same if we replace \(K\) by a finite extension. We thus can assume that \(k\subset K\). Note that there exists some \(i\in\{1,\dots,m\}\) such that \([v_{i}^{-1}.\tau]\in N_{i}(K)\). Write \(\varrho:=v_{i}^{-1}.\tau\). Since \(v_{i}\) is a torsion element, it follows that \(v_{i}(\pi_{1}(X))\) is finite, and thus the reduction map \(s_{\varrho}\) coincides with \(s_{\tau}\). Since \(\varrho\) factors through \(\pi_{1}(A/P_{i})\to\mathrm{GL}_{1}(K)\), by Lemma 4.1, \(s_{\varrho}\) factors through the Stein factorization of \(X\to A/P_{i}\). Hence \(s_{\varrho}\) factors through the Stein factorization of \(X\to A/P\). By Definition 3.1, it follows that \(s_{\mathfrak{C}}:X\to S_{\mathfrak{C}}\) factors through the Stein factorization of \(X\to A/P\).
Fix any \(i\). By Lemma 4.2 there are non-archimedean local fields \(K_{j}\) and a family of reductive representations \(\boldsymbol{\tau}:=\{\tau_{j}:\pi_{1}(X)\to\mathrm{GL}_{1}(K_{j})\}_{j=1,\dots,n}\) such that
* \(\tau_{j}\in N_{i}^{\circ}(K_{j})\).
* The reduction map \(s_{\boldsymbol{\tau}}:X\to S_{\boldsymbol{\tau}}\) is the Stein factorization of \(X\to A/P_{i}\).
* For the canonical current \(T_{\boldsymbol{\tau}}\) over \(S_{\boldsymbol{\tau}}\), \(\{T_{\boldsymbol{\tau}}\}\) is a Kahler class.
We can replace each \(K_{j}\) by a finite extension such that \(k\subset K_{j}\). Then \(v_{i}.\tau_{j}\in\mathfrak{C}(K_{j})\) for every \(j\). Note that the Katzarkov-Eyssidieux reduction map \(s_{v_{i}.\tau_{j}}:X\to S_{v_{i}.\tau_{j}}\) coincides with \(s_{\tau_{j}}:X\to S_{\tau_{j}}\). Therefore, the Stein factorization of \(X\to A/P_{i}\) factors through \(s_{\mathfrak{C}}\). Since this holds for each \(i\), it follows that the Stein factorization of \(X\to A/P_{1}\times\dots\times A/P_{m}\) factors through \(s_{\mathfrak{C}}\). Note that the Stein factorization of \(X\to A/P_{1}\times\dots\times A/P_{m}\) coincides with the Stein factorization of \(X\to A/P\). Therefore, the Stein factorization of \(X\to A/P\) factors through \(s_{\mathfrak{C}}\). The claim is proved.
By the above arguments, for each \(i\), there exists a family of reductive representations into non-archimedean local fields \(\boldsymbol{\varrho}_{i}:=\{\varrho_{ij}:\pi_{1}(X)\to\mathrm{GL}_{1}(K_{ij}) \}_{j=1,\dots,k_{i}}\) such that
* \(\varrho_{ij}\in\mathfrak{C}(K_{ij})\)
* \(s_{\boldsymbol{\varrho}_{i}}:X\to S_{\boldsymbol{\varrho}_{i}}\) is the Stein factorization of \(X\to A/P_{i}\)
* For the canonical current \(T_{\boldsymbol{\varrho}_{i}}\) defined over \(S_{\boldsymbol{\varrho}_{i}}\), \(\{T_{\boldsymbol{\varrho}_{i}}\}\) is a Kahler class.
By the above claim, we know that \(s_{\mathfrak{C}}:X\to S_{\mathfrak{C}}\) is the Stein factorization of \(X\to S_{\boldsymbol{\varrho}_{1}}\times\dots\times S_{\boldsymbol{\varrho}_{m}}\). Then for the family \(\boldsymbol{\varrho}:=\{\varrho_{ij}:\pi_{1}(X)\to\mathrm{GL}_{1}(K_{ij})\}_{i=1,\dots,m;j=1,\dots,k_{i}}\), \(s_{\boldsymbol{\varrho}}:X\to S_{\boldsymbol{\varrho}}\) is the Stein factorization of \(X\to A/P\), hence \(s_{\boldsymbol{\varrho}}\) coincides with \(s_{\mathfrak{C}}\). Moreover, the canonical current \(T_{\boldsymbol{\varrho}}=\sum_{i=1}^{m}g_{i}^{*}T_{\boldsymbol{\varrho}_{i}}\), where \(g_{i}:S_{\mathfrak{C}}\to S_{\boldsymbol{\varrho}_{i}}\) is the natural map. As \(S_{\mathfrak{C}}\to S_{\boldsymbol{\varrho}_{1}}\times\dots\times S_{\boldsymbol{\varrho}_{m}}\) is finite, by Theorem 1.13, \(\{T_{\boldsymbol{\varrho}}\}\) is Kahler.
Let us prove the main result in this subsection.
**Theorem 4.5**.: _Let \(X\) be a smooth projective variety and let \(T\) be an algebraic torus defined over some number field \(k\). Let \(\mathfrak{C}\subset M_{\mathrm{B}}(X,T)(\mathbb{C})\) be an absolutely constructible subset. Consider the reduction map \(s_{\mathfrak{C}}:X\to S_{\mathfrak{C}}\). Then there is a family of reductive representations \(\boldsymbol{\tau}:=\{\tau_{i}:\pi_{1}(X)\to T(K_{i})\}_{i=1,\dots,N}\) where \(K_{i}\) are non-archimedean local fields containing \(k\) such that_
* _For each_ \(i=1,\dots,N\)_,_ \([\tau_{i}]\in\mathfrak{C}(K_{i})\)_;_
* _The reduction map_ \(s_{\boldsymbol{\tau}}:X\to S_{\boldsymbol{\tau}}\) _of_ \(\boldsymbol{\tau}\) _coincides with_ \(s_{\mathfrak{C}}\)_._
* _For the canonical current_ \(T_{\boldsymbol{\tau}}\) _over_ \(S_{\mathfrak{C}}\) _defined in Definition_ 1.24_,_ \(\{T_{\boldsymbol{\tau}}\}\) _is a Kahler class._
Proof.: We replace \(k\) by a finite extension such that \(T\) is split over \(k\). Then we have \(T\simeq\mathbb{G}_{m,k}^{\ell}\). Note that this does not change the reduction map \(s_{\mathfrak{C}}:X\to S_{\mathfrak{C}}\). We take
\(p_{i}:T\to\mathbb{G}_{m,k}\) to be the \(i\)-th projection which is a \(k\)-morphism. It induces a morphism of \(k\)-schemes \(\psi_{i}:M_{\mathrm{B}}(X,T)\to M_{\mathrm{B}}(X,\mathrm{GL}_{1})\). By Theorem 1.20, \(\mathfrak{C}_{i}:=\psi_{i}(\mathfrak{C})\) is also an absolutely constructible subset. Consider the reduction maps \(\{s_{\mathfrak{C}_{i}}:X\to S_{\mathfrak{C}_{i}}\}_{i=1,\ldots,\ell}\) defined by Definition 3.1.
**Claim 4.6**.: \(s_{\mathfrak{C}}:X\to S_{\mathfrak{C}}\) _is the Stein factorization of \(s_{\mathfrak{C}_{1}}\times\cdots\times s_{\mathfrak{C}_{\ell}}:X\to S_{ \mathfrak{C}_{1}}\times\cdots\times S_{\mathfrak{C}_{\ell}}\)._
Proof.: Let \(\varrho:\pi_{1}(X)\to T(K)\) be any reductive representation where \(K\) is a non-archimedean local field containing \(k\) such that \([\varrho]\in\mathfrak{C}(K)\). Write \(\varrho_{i}=p_{i}\circ\varrho:\pi_{1}(X)\to\mathrm{GL}_{1}(K)\). Then \([\varrho_{i}]=\psi_{i}([\varrho])\in\mathfrak{C}_{i}(K)\). Note that for any subgroup \(\Gamma\subset\pi_{1}(X)\), \(\varrho(\Gamma)\) is bounded if and only if \(\varrho_{i}(\Gamma)\) is bounded for any \(i\). Therefore, \(s_{\varrho}:X\to S_{\varrho}\) is the Stein factorization of \(X\to S_{\varrho_{1}}\times\cdots\times S_{\varrho_{\ell}}\). Hence \(s_{\mathfrak{C}}:X\to S_{\mathfrak{C}}\) factors through the Stein factorization of \(X\to S_{\mathfrak{C}_{1}}\times\cdots\times S_{\mathfrak{C}_{\ell}}\).
On the other hand, consider any \(\varrho_{i}\in\mathfrak{C}_{i}(K)\) where \(K\) is a non-archimedean local field containing \(k\). Then there is a finite extension \(L\) of \(K\) such that
* there is a reductive representation \(\varrho:\pi_{1}(X)\to T(L)\) with \([\varrho]\in\mathfrak{C}(L)\);
* \(p_{i}\circ\varrho=\varrho_{i}\).
By the above argument, \(s_{\varrho_{i}}:X\to S_{\varrho_{i}}\) factors through \(s_{\varrho}:X\to S_{\varrho}\). Note that \(s_{\varrho}\) factors through \(s_{\mathfrak{C}}\). It follows that the Stein factorization of \(X\to S_{\mathfrak{C}_{1}}\times\cdots\times S_{\mathfrak{C}_{\ell}}\) factors through \(s_{\mathfrak{C}}\). The claim is proved.
We now apply Corollary 4.3 to conclude that for each \(i\), there exists a family of reductive representations into non-archimedean local fields \(\varrho_{i}:=\{\varrho_{ij}:\pi_{1}(X)\to\mathrm{GL}_{1}(K_{ij})\}_{j=1,\ldots,k_{i}}\) such that
* \(\varrho_{ij}\in\mathfrak{C}_{i}(K_{ij})\);
* The reduction map \(s_{\varrho_{i}}:X\to S_{\varrho_{i}}\) of \(\varrho_{i}\) coincides with \(s_{\mathfrak{C}_{i}}:X\to S_{\mathfrak{C}_{i}}\);
* for the canonical current \(T_{\varrho_{i}}\) defined over \(S_{\varrho_{i}}\), \(\{T_{\varrho_{i}}\}\) is a Kahler class.
Denote by \(\boldsymbol{\varrho}:=\{\varrho_{ij}\}_{i=1,\ldots,\ell;j=1,\ldots,k_{i}}\). Then \(s_{\boldsymbol{\varrho}}:X\to S_{\boldsymbol{\varrho}}\) coincides with \(s_{\mathfrak{C}}:X\to S_{\mathfrak{C}}\) by the above claim, and \(\{T_{\boldsymbol{\varrho}}\}\) is a Kahler class.
By the definition of \(\mathfrak{C}_{i}\), we can find a finite extension \(L_{ij}\) of \(K_{ij}\) such that
* there is a reductive representation \(\tau_{ij}:\pi_{1}(X)\to T(L_{ij})\) with \([\tau_{ij}]\in\mathfrak{C}(L_{ij})\);
* \(p_{i}\circ\tau_{ij}=\varrho_{ij}\).
Therefore, for the family \(\boldsymbol{\tau}:=\{\tau_{ij}\}_{i=1,\ldots,\ell;j=1,\ldots,k_{i}}\), \(s_{\boldsymbol{\tau}}:X\to S_{\boldsymbol{\tau}}\) coincides with \(s_{\mathfrak{C}}\) by the above claim. Note that for any \(i,j\), there exists a morphism \(e_{ij}:S_{\tau_{ij}}\to S_{\varrho_{ij}}\) such that \(s_{\varrho_{ij}}:X\to S_{\varrho_{ij}}\) factors through \(e_{ij}\). We also note that \(e_{ij}^{*}T_{\varrho_{ij}}\leq T_{\tau_{ij}}\) for the canonical currents. It follows that \(T_{\boldsymbol{\varrho}}\leq T_{\boldsymbol{\tau}}\) (note that \(S_{\boldsymbol{\tau}}=S_{\boldsymbol{\varrho}}=S_{\mathfrak{C}}\)). Therefore, \(\{T_{\boldsymbol{\tau}}\}\) is a Kahler class. The theorem is proved.
### Some criterion for representation into tori
We recall a lemma in [13, Lemma 5.3].
**Lemma 4.7**.: _Let \(G\) be an almost simple algebraic group over the non-archimedean local field \(K\). Let \(\Gamma\subset G(K)\) be a finitely generated subgroup so that_
* _it is a Zariski dense subgroup in_ \(G\)_,_
* _it is not contained in any bounded subgroup of_ \(G(K)\)_._
_Let \(\Upsilon\) be a normal subgroup of \(\Gamma\) which is bounded. Then \(\Upsilon\) must be finite._
This lemma enables us to prove the following result.
**Lemma 4.8**.: _Let \(G\) be a reductive algebraic group over a non-archimedean local field \(K\) of characteristic zero. Let \(X\) be a projective manifold and let \(\varrho:\pi_{1}(X)\to G(K)\) be a Zariski dense representation. If \(\varrho(\mathcal{D}\pi_{1}(X))\) is bounded, then after replacing \(K\) by some finite extension, for the reductive representation \(\tau:\pi_{1}(X)\to(G/\mathcal{D}G)(K)\) which is the composition of \(\varrho\) with \(G\to G/\mathcal{D}G\), the reduction map \(s_{\tau}:X\to S_{\tau}\) coincides with \(s_{\varrho}:X\to S_{\varrho}\)._
Proof.: Since \(G\) is reductive, after replacing \(K\) by a finite extension, there is an isogeny \(G\to H_{1}\times\cdots\times H_{k}\times T\), where the \(H_{i}\) are almost simple algebraic groups over \(K\) and \(T=G/\mathcal{D}G\) is an algebraic torus over \(K\). Write \(G^{\prime}:=H_{1}\times\cdots\times H_{k}\times T\). We denote by \(\varrho^{\prime}:\pi_{1}(X)\to G^{\prime}(K)\) the representation induced by the above isogeny.
**Claim 4.9**.: _The Katzarkov-Eyssidieux reduction map \(s_{\varrho}:X\to S_{\varrho}\) coincides with \(s_{\varrho^{\prime}}:X\to S_{\varrho^{\prime}}\)._
Proof.: It suffices to prove that, for any subgroup \(\Gamma\) of \(\pi_{1}(X)\), \(\varrho(\Gamma)\) is bounded if and only if \(\varrho^{\prime}(\Gamma)\) is bounded. Note that we have the following short exact sequence of algebraic groups
\[0\to\mu\to G\to G^{\prime}\to 0\]
where \(\mu\) is finite. Then we have
\[0\to\mu(K)\to G(K)\xrightarrow{f}G^{\prime}(K)\to H^{1}(K,\mu),\]
where \(H^{1}(K,\mu)\) is the Galois cohomology. Note that \(\mu(K)\) is finite. Since \(K\) is a finite extension of some \(\mathbb{Q}_{p}\), \(H^{1}(K,\mu)\) is also finite. Therefore, \(f:G(K)\to G^{\prime}(K)\) has finite kernel and finite cokernel. Hence \(\varrho(\Gamma)\) is bounded if and only if \(\varrho^{\prime}(\Gamma)\) is bounded.
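For the reader's convenience, the boundedness transfer in the last sentence can be spelled out (a routine check, not written out in the source): since \(\ker f=\mu(K)\) is finite and \(f(G(K))\) is a closed finite-index subgroup of \(G^{\prime}(K)\), the map \(f\) is proper, so for any subgroup \(\Gamma\subset\pi_{1}(X)\),

\[\varrho(\Gamma)\ \text{bounded}\ \Longrightarrow\ \varrho^{\prime}(\Gamma)=f(\varrho(\Gamma))\ \text{bounded},\qquad\varrho^{\prime}(\Gamma)\ \text{bounded}\ \Longrightarrow\ \varrho(\Gamma)\subset f^{-1}\big(\varrho^{\prime}(\Gamma)\big)\ \text{bounded}.\]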
Set \(\Gamma:=\varrho^{\prime}(\pi_{1}(X))\) and \(\Upsilon:=\varrho^{\prime}(\mathcal{D}\pi_{1}(X))\). Let \(\Upsilon_{i}\subset H_{i}(K)\) and \(\Gamma_{i}\) be the images of \(\Upsilon\) and \(\Gamma\) under the projection \(G^{\prime}(K)\to H_{i}(K)\). Then \(\Gamma_{i}\) is Zariski dense in \(H_{i}\) and \(\Upsilon_{i}\triangleleft\Gamma_{i}\) is also bounded. Furthermore, \(\mathcal{D}\Gamma_{i}=\Upsilon_{i}\).
**Claim 4.10**.: \(\Gamma_{i}\) _is bounded for every \(i\)._
Proof.: Suppose, for the sake of contradiction, that some \(\Gamma_{i}\) is unbounded. Since \(\Upsilon_{i}\triangleleft\Gamma_{i}\) and \(\Upsilon_{i}\) is bounded, Lemma 4.7 implies that \(\Upsilon_{i}\) is finite. After replacing \(X\) by a finite etale cover, we may assume that \(\Upsilon_{i}\) is trivial. Consequently, \(\Gamma_{i}\) is abelian, which contradicts the fact that \(\Gamma_{i}\) is Zariski dense in the almost simple algebraic group \(H_{i}\).
Based on the previous claim, the induced representations \(\tau_{i}:\pi_{1}(X)\to H_{i}(K)\) are bounded for every \(i\). Consequently, they do not contribute to the reduction map \(s_{\varrho^{\prime}}:X\to S_{\varrho^{\prime}}\). Therefore, the only contribution to \(s_{\varrho^{\prime}}\) comes from \(\tau:\pi_{1}(X)\to T(K)\), where \(\tau\) is the composition of \(\varrho:\pi_{1}(X)\to G(K)\) with \(G(K)\to T(K)\).
According to Claim 4.9, we can conclude that \(s_{\varrho}\) coincides with the reduction map \(s_{\tau}:X\to S_{\tau}\) of \(\tau:\pi_{1}(X)\to T(K)\). This establishes the lemma.
### Eyssidieux-Simpson Lefschetz theorem and its application
Let \(X\) be a compact Kahler manifold and let \(V\subset H^{0}(X,\Omega^{1}_{X})\) be a \(\mathbb{C}\)-subspace. Let \(a:X\to\mathcal{A}_{X}\) be the Albanese morphism of \(X\). Note that \(a^{*}:H^{0}(\mathcal{A}_{X},\Omega^{1}_{\mathcal{A}_{X}})\to H^{0}(X,\Omega^{1 }_{X})\) is an isomorphism. Write \(V^{\prime}:=(a^{*})^{-1}(V)\). Define \(B(V)\subset\mathcal{A}_{X}\) to be the largest abelian subvariety of \(\mathcal{A}_{X}\) such that \(\eta|_{B(V)}=0\) for every \(\eta\in V^{\prime}\). Set \(\mathcal{A}_{X,V}:=\mathcal{A}_{X}/B(V)\). The _partial Albanese morphism associated with \(V\)_ is the composition of \(a\) with the quotient map \(\mathcal{A}_{X}\to\mathcal{A}_{X,V}\), denoted by \(g_{V}:X\to\mathcal{A}_{X,V}\). Note that there exists \(V_{0}\subset H^{0}(\mathcal{A}_{X,V},\Omega^{1}_{\mathcal{A}_{X,V}})\) with \(\dim_{\mathbb{C}}V_{0}=\dim_{\mathbb{C}}V\) such that \(g_{V}^{*}V_{0}=V\). Let \(\widetilde{\mathcal{A}_{X,V}}\to\mathcal{A}_{X,V}\) be the universal covering and let \(X_{V}\) be \(X\times_{\mathcal{A}_{X,V}}\widetilde{\mathcal{A}_{X,V}}\). Note that \(V_{0}\) induces a natural linear map \(\widetilde{\mathcal{A}_{X,V}}\to V_{0}^{*}\). Its composition with \(X_{V}\to\widetilde{\mathcal{A}_{X,V}}\) and \(g_{V}^{*}:V_{0}\to V\) gives rise to a holomorphic map
\[\widetilde{g}_{V}:X_{V}\to V^{*}. \tag{4.1}\]
Let \(f:X\to S\) be the Stein factorization of \(g_{V}:X\to\mathcal{A}_{X,V}\) with \(q:S\to\mathcal{A}_{X,V}\) the finite morphism. Set \(\mathbb{V}:=q^{*}V_{0}\).
**Definition 4.11**.: \(V\) is called _perfect_ if for any closed subvariety \(Z\subset S\) of dimension \(d\geq 1\), one has \(\operatorname{Im}[\Lambda^{d}\mathbb{V}\to H^{0}(Z,\Omega_{Z}^{d})]\neq 0\).
What Definition 4.11 calls a "perfect" subspace \(V\) is called "SSKB factorisable" in [10, Lemme 5.1.6].
Let us recall the following Lefschetz theorem by Eyssidieux, which is a generalization of previous work by Simpson [14]. This theorem plays a crucial role in the proofs of Theorems B and C.
**Theorem 4.12** ([10, Lemme 5.1.22]).: _Let \(X\) be a compact Kahler normal space and let \(V\subset H^{0}(X,\Omega_{X}^{1})\) be a subspace. Assume that_
\[\operatorname{Im}\left[\Lambda^{\dim V}V\to H^{0}(X,\Omega_{X}^{\dim V})\right]=\mathbb{C}\,\eta\neq 0.\]
_Set \((\eta=0)=\cup_{i=1}^{k}Z_{i}\) where the \(Z_{i}\) are proper closed subvarieties of \(X\). For each \(Z_{i}\), denote by \(V_{i}:=\operatorname{Im}[V\to H^{0}(Z_{i},\Omega_{Z_{i}})]\). Assume that \(V_{i}\) is perfect for each \(i\). Then exactly one of the following two possibilities occurs:_
* _either_ \(V\) _is perfect;_
* _or for the holomorphic map_ \(\widetilde{g}_{V}:X_{V}\to V^{*}\) _defined as (_4.1_),_ \((X_{V},\widetilde{g}_{V}^{-1}(t))\) _is_ \(1\)_-connected for any_ \(t\in V^{*}\)_; i.e._ \(\widetilde{g}_{V}^{-1}(t)\) _is connected and_ \(\pi_{1}(\widetilde{g}_{V}^{-1}(t))\to\pi_{1}(X_{V})\) _is surjective._
We need the following version of the Castelnuovo-De Franchis theorem.
**Theorem 4.13** (Castelnuovo-De Franchis).: _Let \(X\) be a compact Kahler normal space and let \(W\subset H^{0}(X,\Omega_{X})\) be a subspace of dimension \(d\geq 2\) such that_
* \(\operatorname{Im}\bigl{(}\Lambda^{d}W\to H^{0}(X,\Omega_{X}^{d})\bigr{)}=0\)_;_
* _for every hyperplane_ \(W^{\prime}\subset W\)_,_ \(\operatorname{Im}\bigl{(}\Lambda^{d-1}W^{\prime}\to H^{0}(X,\Omega_{X}^{d-1 })\bigr{)}\neq 0\)_._
_Then there is a projective normal variety \(S\) of dimension \(d-1\) and a fibration \(f:X\to S\) such that \(W\subset f^{*}H^{0}(S,\Omega_{S})\)._
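For orientation, we recall the classical case \(d=2\) on a smooth \(X\) (a standard argument, included here as an illustration rather than quoted from the source). If \(\omega_{1},\omega_{2}\in W\) are linearly independent with \(\omega_{1}\wedge\omega_{2}=0\), then

\[\omega_{2}=h\,\omega_{1}\ \text{for a non-constant meromorphic function }h,\qquad 0=d\omega_{2}=dh\wedge\omega_{1},\]

so \(\omega_{1}\) (hence also \(\omega_{2}\)) vanishes along the fibers of \(h\), and the Stein factorization of (a resolution of) \(h:X\dashrightarrow\mathbb{P}^{1}\) gives the fibration \(f:X\to S\) onto a curve with \(\omega_{1},\omega_{2}\in f^{*}H^{0}(S,\Omega_{S})\).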
To apply Theorem 4.13, we need to show the existence of a linear subspace \(W\subset H^{0}(X,\Omega_{X})\) as in the theorem.
**Lemma 4.14**.: _Let \(X\) be a projective normal variety and let \(V\subset H^{0}(X,\Omega_{X})\). Let \(r\) be the largest integer such that \(\operatorname{Im}\left[\Lambda^{r}V\to H^{0}(X,\Omega_{X}^{r})\right]\neq 0\). Assume that \(r<\operatorname{dim}_{\mathbb{C}}V\). There exists \(W\subset H^{0}(X,\Omega_{X})\) such that_
* \(2\leq\dim W\leq r+1\)_._
* \(\operatorname{Im}\left[\Lambda^{\dim W}W\to H^{0}(X,\Omega_{X}^{\dim W}) \right]=0\)_;_
* _for every hyperplane_ \(W^{\prime}\subsetneq W\)_, we always have_ \(\operatorname{Im}\left[\Lambda^{\dim W-1}W^{\prime}\to H^{0}(X,\Omega_{X}^{ \dim W-1})\right]\neq 0\)_._
Proof.: By our assumption there exist \(\{\omega_{1},\ldots,\omega_{r}\}\subset V\) such that \(\omega_{1}\wedge\cdots\wedge\omega_{r}\neq 0\). Let \(W_{0}\subset V\) be the subspace generated by \(\{\omega_{1},\ldots,\omega_{r}\}\). Since \(r<\operatorname{dim}_{\mathbb{C}}V\), there exists \(\omega\in V\setminus W_{0}\).
Pick a point \(x\in X\) such that \(\omega_{1}\wedge\cdots\wedge\omega_{r}(x)\neq 0\). Then there exists a coordinate system \((U;z_{1},\dots,z_{n})\) centered at \(x\) such that \(dz_{i}=\omega_{i}\) for \(i=1,\dots,r\). Write \(\omega=\sum_{i=1}^{n}a_{i}(z)dz_{i}\) on \(U\). By our choice of \(r\), we have \(\omega_{1}\wedge\cdots\wedge\omega_{r}\wedge\omega=0\). It follows (see the computation after this list) that
* \(a_{j}(z)=0\) for \(j=r+1,\ldots,n\);
* at least one of \(a_{1}(z),\ldots,a_{r}(z)\) is not constant.
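Both bullet points follow from the computation below in the coordinates above (spelled out here for convenience; the source leaves it implicit):

\[0=\omega_{1}\wedge\cdots\wedge\omega_{r}\wedge\omega=\sum_{j=r+1}^{n}a_{j}(z)\,dz_{1}\wedge\cdots\wedge dz_{r}\wedge dz_{j}\quad\text{on }U,\]

which forces \(a_{j}=0\) for \(j>r\); and if all of \(a_{1},\dots,a_{r}\) were constant, then the holomorphic form \(\omega-\sum_{i=1}^{r}a_{i}\omega_{i}\) would vanish on \(U\), hence on all of \(X\), giving \(\omega\in W_{0}\) and contradicting the choice of \(\omega\).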
Let \(k+1\) be the transcendence degree of \(\{1,a_{1}(z),\dots,a_{r}(z)\}\subset\mathbb{C}(U)\). Then \(k\geq 1\). After reordering, we may assume that \(1,a_{1}(z),\dots,a_{k}(z)\) are linearly independent in the extension \(\mathbb{C}(U)/\mathbb{C}\). One then checks by elementary linear algebra that the subspace \(W\) generated by \(\{\omega_{1},\dots,\omega_{k},\omega\}\) satisfies the three conditions of the lemma. The lemma is proved.
**Lemma 4.15**.: _Let \(X\) be a projective normal variety and let \(V\subset H^{0}(X,\Omega_{X})\). Let \(r\) be the largest integer such that \(\operatorname{Im}\left[\Lambda^{r}V\to H^{0}(X,\Omega_{X}^{r})\right]\neq 0\), which we call the generic rank of \(V\). Consider the partial Albanese morphism \(g_{V}:X\to\mathcal{A}_{X,V}\) induced by \(V\). Let \(V_{0}\subset H^{0}(\mathcal{A}_{X,V},\Omega_{\mathcal{A}_{X,V}}^{1})\) be the linear subspace such that \(g_{V}^{*}V_{0}=V\). Let \(f:X\to S\) be the Stein factorization of \(g_{V}\) with \(q:S\to\mathcal{A}_{X,V}\) the finite morphism. Consider \(\mathbb{V}:=q^{*}V_{0}\). Assume that_
\[\operatorname{Im}[\Lambda^{\dim Z}\mathbb{V}\to H^{0}(Z,\Omega_{Z}^{\dim Z})]\neq 0\]
_for every proper closed subvariety \(Z\subsetneq S\). Then there are two possibilities._
* _either_ \[\operatorname{Im}[\Lambda^{\dim S}\mathbb{V}\to H^{0}(S,\Omega_{S}^{\dim S})] \neq 0;\]
* _or_ \(r=\dim_{\mathbb{C}}V\)_._
Proof.: Assume that both
\[\operatorname{Im}[\Lambda^{\dim S}\mathbb{V}\to H^{0}(S,\Omega_{S}^{\dim S})] =0,\]
and \(r<\dim_{\mathbb{C}}V\). Therefore, \(r<\dim S\leq\dim X\). By Lemma 4.14 there is a subspace \(W\subset V\) with \(\dim_{\mathbb{C}}W=k+1\leq r+1\) such that \(\operatorname{Im}\left[\Lambda^{\dim W}W\to H^{0}(X,\Omega_{X}^{\dim W})\right]=0\), and for any subspace \(W^{\prime}\subsetneq W\), we always have \(\operatorname{Im}\left[\Lambda^{\dim W^{\prime}}W^{\prime}\to H^{0}(X,\Omega_{X}^{\dim W^{\prime}})\right]\neq 0\). By our assumption, we have \(\dim_{\mathbb{C}}W\leq\dim X\). By Theorem 4.13, there is a fibration \(p:X\to B\) with \(B\) a projective normal variety of dimension \(\dim B=\dim W-1\leq\dim X-1\) such that \(W\subset p^{*}H^{0}(B,\Omega_{B}^{1})\). In particular, the generic rank of the forms in \(W\) is \(\dim W-1\). Consider the partial Albanese morphism \(g_{W}:X\to\mathcal{A}_{X,W}\) associated with \(W\). We shall prove that \(p\) can be taken to be the Stein factorization of \(g_{W}\).
Note that each fiber of \(p\) is contracted by \(g_{W}\). Therefore, we have a factorization \(X\overset{p}{\to}B\overset{h}{\to}\mathcal{A}_{X,W}\). Note that there exists a linear subspace \(W_{0}\subset H^{0}(\mathcal{A}_{X,W},\Omega_{\mathcal{A}_{X,W}}^{1})\) such that \(W=g_{W}^{*}W_{0}\). If \(\dim h(B)<\dim B\), then the generic rank of \(W\) is less than or equal to \(\dim h(B)\). This contradicts Theorem 4.13. Therefore, \(\dim h(B)=\dim B\). Let \(X\overset{p^{\prime}}{\to}B^{\prime}\to\mathcal{A}_{X,W}\) be the Stein factorization of \(g_{W}\). Then there exists a birational morphism \(\nu:B\to B^{\prime}\) such that \(p^{\prime}=\nu\circ p\). We can thus replace \(B\) by \(B^{\prime}\), and \(p\) by \(p^{\prime}\).
Recall that \(f:X\to S\) is the Stein factorization of the partial Albanese morphism \(g_{V}:X\to\mathcal{A}_{X,V}\) associated with \(V\). As \(g_{W}\) factors through the natural quotient map \(\mathcal{A}_{X,V}\to\mathcal{A}_{X,W}\), it follows that \(p:X\to B\) factors through \(X\overset{f}{\to}S\overset{\nu}{\to}B\).
Assume that \(\dim S=\dim B\). Then \(\nu\) is birational. Since \(\dim B=\dim W-1\) and the generic rank of \(W\) is \(\dim W-1\), it follows that
\[\operatorname{Im}[\Lambda^{\dim S}\mathbb{V}\to H^{0}(S,\Omega_{S}^{\dim S}) ]\neq 0.\]
This contradicts our assumption at the beginning. Hence \(\dim S>\dim B\).
Let \(Z\) be a general fiber of \(\nu\) which is positive-dimensional. Since \(W\subset p^{*}H^{0}(B,\Omega_{B}^{1})\), and we have assumed that the generic rank of \(\mathbb{V}\) is less than \(\dim S\), it follows that the generic rank of \(\operatorname{Im}\left[\mathbb{V}\to H^{0}(Z,\Omega_{Z}^{1})\right]\) is less than \(\dim Z\). This implies that
\[\operatorname{Im}[\Lambda^{\dim Z}\mathbb{V}\to H^{0}(Z,\Omega_{Z}^{\dim Z}) ]=0,\]
which contradicts our assumption. The lemma is proved.
**Remark 4.16**.: Let \(Y\) be a normal projective variety. Let \(\boldsymbol{\varrho}=\{\varrho_{i}:\pi_{1}(Y)\to\operatorname{GL}_{N}(K_{i})\}_{i=1,\dots,k}\) be a family of reductive representations where the \(K_{i}\) are non-archimedean local fields. Let \(\pi:X\to Y\) be a Galois cover dominating all spectral covers induced by the \(\varrho_{i}\). Let \(V\subset H^{0}(X,\Omega_{X})\) be the set of all spectral forms (cf. § 1.9 for definitions). We use the same notations as in Lemma 4.15. Consider the Katzarkov-Eyssidieux reduction maps \(s_{\boldsymbol{\varrho}}:Y\to S_{\boldsymbol{\varrho}}\) and \(s_{\pi^{*}\boldsymbol{\varrho}}:X\to S_{\pi^{*}\boldsymbol{\varrho}}\). One can check that, for every closed subvariety \(Z\subset S_{\boldsymbol{\varrho}}\), \(\{T_{\boldsymbol{\varrho}}\}^{\dim Z}\cdot Z>0\) if and only if for any closed subvariety \(W\subset S_{\pi^{*}\boldsymbol{\varrho}}\) dominating \(Z\) under \(\sigma_{\pi}:S_{\pi^{*}\boldsymbol{\varrho}}\to S_{\boldsymbol{\varrho}}\) defined in (1.2), one has
\[\operatorname{Im}[\Lambda^{\dim W}\mathbb{V}\to H^{0}(W,\Omega_{W}^{\dim W})] \neq 0.\]
In particular, \(V\) is perfect if and only if \(\{T_{\boldsymbol{\varrho}}\}\) is a Kahler class by Theorem 1.13.
**Theorem 4.17**.: _Let \(X\) be a smooth projective variety and let \(\boldsymbol{\varrho}:=\{\varrho_{i}:\pi_{1}(X)\to\operatorname{GL}_{N}(K_{i})\}_{i=1,\dots,k}\) be a family of reductive representations where the \(K_{i}\) are non-archimedean local fields. Let \(s_{\boldsymbol{\varrho}}:X\to S_{\boldsymbol{\varrho}}\) be the Katzarkov-Eyssidieux reduction map. Let \(T_{\boldsymbol{\varrho}}\) be the canonical \((1,1)\)-current on \(S_{\boldsymbol{\varrho}}\) associated with \(\boldsymbol{\varrho}\) defined in Definition 1.24. Denote by \(H_{i}\) the Zariski closure of \(\varrho_{i}(\pi_{1}(X))\). Assume that for any proper closed subvariety \(\Sigma\subsetneq S_{\boldsymbol{\varrho}}\), one has \(\{T_{\boldsymbol{\varrho}}\}^{\dim\Sigma}\cdot\Sigma>0\). Then_
* _either_ \(\{T_{\boldsymbol{\varrho}}\}^{\dim S_{\boldsymbol{\varrho}}}\cdot S_{ \boldsymbol{\varrho}}>0\)_;_
* _or the reduction map_ \(s_{\sigma_{i}}:X\to S_{\sigma_{i}}\) _coincides with_ \(s_{\varrho_{i}}:X\to S_{\varrho_{i}}\) _for each_ \(i\)_, where_ \(\sigma_{i}:\pi_{1}(X)\to(H_{i}/\mathcal{D}H_{i})(K_{i})\) _is the composition of_ \(\varrho_{i}\) _with the group homomorphism_ \(H_{i}\to H_{i}/\mathcal{D}H_{i}\)_._
Proof.: Assume that \(\{T_{\boldsymbol{\varrho}}\}^{\dim S_{\boldsymbol{\varrho}}}\cdot S_{\boldsymbol{\varrho}}=0\). Let \(\pi:Y\to X\) be a Galois cover which dominates all spectral covers of the \(\varrho_{i}\). We pull back all the spectral one-forms to \(Y\) to obtain a subspace \(V\subset H^{0}(Y,\Omega^{1}_{Y})\). Consider the partial Albanese morphism \(g_{V}:Y\to\mathcal{A}_{Y,V}\) associated to \(V\); then \(s_{\pi^{*}\boldsymbol{\varrho}}:Y\to S_{\pi^{*}\boldsymbol{\varrho}}\) is its Stein factorization with \(q:S_{\pi^{*}\boldsymbol{\varrho}}\to\mathcal{A}_{Y,V}\) the finite morphism. Note that there is a \(\mathbb{C}\)-linear subspace \(\mathbb{V}\subset H^{0}(S_{\pi^{*}\boldsymbol{\varrho}},\Omega^{1}_{S_{\pi^{*}\boldsymbol{\varrho}}})\) such that \(s_{\pi^{*}\boldsymbol{\varrho}}^{*}\mathbb{V}=V\).
Note that \(\sigma_{\pi}\) is a finite surjective morphism. By Lemma 1.25 we have \(T_{\pi^{*}\boldsymbol{\varrho}}=\sigma_{\pi}^{*}T_{\boldsymbol{\varrho}}\). By our assumption, for any proper closed subvariety \(\Xi\subsetneq S_{\boldsymbol{\varrho}}\), one has \(\{T_{\boldsymbol{\varrho}}\}^{\dim\Xi}\cdot\Xi>0\). Hence for any proper closed subvariety \(\Xi\subsetneq S_{\pi^{*}\boldsymbol{\varrho}}\), one has \(\{T_{\pi^{*}\boldsymbol{\varrho}}\}^{\dim\Xi}\cdot\Xi>0\). According to Remark 4.16, this implies that
\[\operatorname{Im}[\Lambda^{\dim\Xi}\mathbb{V}\to H^{0}(\Xi,\Omega^{\dim\Xi}_{\Xi})]\neq 0.\]
Since \(\{T_{\boldsymbol{\varrho}}\}^{\dim S_{\boldsymbol{\varrho}}}\cdot S_{\boldsymbol{\varrho}}=0\), it follows that \(\{T_{\pi^{*}\boldsymbol{\varrho}}\}^{\dim S_{\pi^{*}\boldsymbol{\varrho}}}\cdot S_{\pi^{*}\boldsymbol{\varrho}}=0\). This implies that
\[\operatorname{Im}[\Lambda^{\dim S_{\pi^{*}\boldsymbol{\varrho}}}\mathbb{V}\to H^{0}(S_{\pi^{*}\boldsymbol{\varrho}},\Omega^{\dim S_{\pi^{*}\boldsymbol{\varrho}}}_{S_{\pi^{*}\boldsymbol{\varrho}}})]=0.\]
Let \(r\) be the generic rank of \(V\). According to Remark 4.16, we have \(r=\dim S_{\pi^{*}\boldsymbol{\varrho}}-1\). By Lemma 4.15, we have \(r=\dim_{\mathbb{C}}V\). Therefore, \(\operatorname{Im}\left[\Lambda^{r}V\to H^{0}(Y,\Omega^{r}_{Y})\right]\simeq\mathbb{C}\).
**Claim 4.18**.: _For any non-zero \(\eta\in\operatorname{Im}\left[\operatorname{\Lambda}^{r}V\to H^{0}(Y,\Omega^ {r}_{Y})\right]\), each irreducible component \(Z^{\prime}\) of \((\eta=0)\) satisfies that \(s_{\pi^{*}\boldsymbol{\varrho}}(Z^{\prime})\) is a proper subvariety of \(S_{\pi^{*}\boldsymbol{\varrho}}\)._
Proof.: Assume that this is not the case. Let \(Z\to Z^{\prime}\) be a desingularization. Set \(V^{\prime}:=\operatorname{Im}\left[V\to H^{0}(Z,\Omega^{1}_{Z})\right]\). Denote by \(r^{\prime}\) the generic rank of \(V^{\prime}\). Then \(r^{\prime}<r\) as \(Z^{\prime}\) is an irreducible component of \((\eta=0)\). Write \(\iota:Z\to Y\) and \(g:Z\to X\) for the natural maps. Then the Katzarkov-Eyssidieux reduction \(s_{g^{*}\boldsymbol{\varrho}}:Z\to S_{g^{*}\boldsymbol{\varrho}}\) associated with \(g^{*}\boldsymbol{\varrho}\) is the Stein factorization of the partial Albanese morphism \(g_{V^{\prime}}:Z\to\mathcal{A}_{Z,V^{\prime}}\). These maps fit into a commutative diagram (not reproduced here) in which \(\sigma_{\iota}:S_{g^{*}\boldsymbol{\varrho}}\to S_{\pi^{*}\boldsymbol{\varrho}}\) is a finite _surjective_ morphism, as we assume that \(s_{\pi^{*}\boldsymbol{\varrho}}(Z^{\prime})=S_{\pi^{*}\boldsymbol{\varrho}}\). Let \(\Sigma\subsetneq S_{g^{*}\boldsymbol{\varrho}}\) be a proper closed subvariety and let \(\Sigma^{\prime}:=\sigma_{g}(\Sigma)\). Since \(\{T_{\boldsymbol{\varrho}}\}^{\dim\Sigma^{\prime}}\cdot\Sigma^{\prime}>0\) by our assumption, Lemma 1.25 gives \(\{T_{g^{*}\boldsymbol{\varrho}}\}^{\dim\Sigma}\cdot\Sigma>0\). By Remark 4.16, it follows that the generic rank \(r^{\prime}\) of \(V^{\prime}\) is equal to \(\dim S_{g^{*}\boldsymbol{\varrho}}-1=\dim S_{\pi^{*}\boldsymbol{\varrho}}-1\). This contradicts the fact that \(r^{\prime}<r=\dim S_{\pi^{*}\boldsymbol{\varrho}}-1\). The claim is proved.
By the above claim, \(s_{\pi^{*}\boldsymbol{\varrho}}(Z^{\prime})\) is a proper subvariety of \(S_{\pi^{*}\boldsymbol{\varrho}}\). Therefore, we have \(\{T_{g^{*}\boldsymbol{\varrho}}\}^{\dim S_{g^{*}\boldsymbol{\varrho}}}\cdot S_{g^{*}\boldsymbol{\varrho}}>0\). Hence for each irreducible component \(Z\) of \((\eta=0)\), \(\operatorname{Im}\left[V\to H^{0}(Z,\Omega^{1}_{Z})\right]\) is perfect by Remark 4.16 once again. Since \(\{T_{\pi^{*}\boldsymbol{\varrho}}\}^{\dim S_{\pi^{*}\boldsymbol{\varrho}}}\cdot S_{\pi^{*}\boldsymbol{\varrho}}=0\), \(V\) itself is not perfect, so we can apply Theorem 4.12 to conclude that for the holomorphic map \(\widetilde{g}_{V}:Y_{V}\to V^{*}\) defined as (4.1), \((Y_{V},\widetilde{g}_{V}^{-1}(t))\) is \(1\)-connected for any \(t\in V^{*}\). For the covering \(Y_{V}\to Y\), we know that \(\operatorname{Im}[\pi_{1}(Y_{V})\to\pi_{1}(Y)]\) contains the derived subgroup \(\mathcal{D}\pi_{1}(Y)\) of \(\pi_{1}(Y)\). Since \((Y_{V},\widetilde{g}_{V}^{-1}(t))\) is \(1\)-connected for any \(t\in V^{*}\), it follows that \(\pi^{*}\varrho_{i}(\operatorname{Im}\left[\pi_{1}(\widetilde{g}_{V}^{-1}(t))\to\pi_{1}(Y)\right])\) contains \(\pi^{*}\varrho_{i}(\mathcal{D}\pi_{1}(Y))\). Note that \(V\) consists of all the spectral forms of the \(\pi^{*}\varrho_{i}\) for all \(i\); hence each \(\pi^{*}\varrho_{i}\)-equivariant harmonic mapping \(u_{i}\) is constant on each connected component of \(p^{-1}(\widetilde{g}_{V}^{-1}(t))\), where \(p:\widetilde{Y}\to Y\) is the universal covering. Then \(\pi^{*}\varrho_{i}(\operatorname{Im}\left[\pi_{1}(\widetilde{g}_{V}^{-1}(t))\to\pi_{1}(Y)\right])\) fixes a point \(P\) in the Bruhat-Tits building, which implies that it is bounded. Therefore, \(\pi^{*}\varrho_{i}(\mathcal{D}\pi_{1}(Y))\) is also bounded. Note that the image of \(\pi_{1}(Y)\to\pi_{1}(X)\) is a finite index subgroup of \(\pi_{1}(X)\). Hence \(\varrho_{i}(\mathcal{D}\pi_{1}(X))\) is also bounded for each \(\varrho_{i}\). The theorem then follows from Lemma 4.8.
### A factorization theorem
As an application of Theorem 4.17, we will prove the following factorization theorem, which partially generalizes a previous theorem of Corlette-Simpson [10]. This result is also a warm-up for the proof of Theorem 4.21.
**Theorem 4.19**.: _Let \(X\) be a smooth projective variety and let \(G\) be an almost simple algebraic group defined over \(K\). Assume that \(\varrho:\pi_{1}(X)\to G(K)\) is a Zariski dense representation such that for any morphism \(f:Z\to X\) from any positive dimensional smooth projective variety \(Z\) to \(X\) which is birational onto its image, the Zariski closure of \(f^{*}\varrho(\pi_{1}(Z))\) is a semisimple algebraic group. Then, after we replace \(X\) by a finite etale cover and a birational modification, there is an algebraic fiber space \(f:X\to Y\) and a big and Zariski dense representation \(\tau:\pi_{1}(Y)\to G(K)\) such that \(f^{*}\tau=\varrho\). Moreover, \(\dim Y\leq\operatorname{rank}_{K}G\)._
Proof.: We know that, after replacing \(X\) by a finite etale cover and a birational modification, there exist an algebraic fiber space \(f:X\to Y\) over a smooth projective variety \(Y\) and a big and Zariski dense representation \(\tau:\pi_{1}(Y)\to G(K)\) such that \(f^{*}\tau=\varrho\). We will prove that \(\dim Y\leq\operatorname{rank}_{K}G\).
**Claim 4.20**.: _The \((1,1)\)-class \(\{T_{\tau}\}\) on \(S_{\tau}\) is Kahler, where \(T_{\tau}\) is the canonical current on \(S_{\tau}\) associated to \(\tau\)._
Proof.: By Theorem 1.13, it is equivalent to prove that for any closed subvariety \(\Sigma\subset S_{\tau}\), \(\int_{\Sigma}\{T_{\tau}\}^{\dim\Sigma}>0\). We will prove it by induction on \(\dim\Sigma\).
**Induction**. Assume that for every closed subvariety \(\Sigma\subset S_{\tau}\) of dimension \(\leq r-1\), \(\{T_{\tau}\}^{\dim\Sigma}\cdot\Sigma>0\).
Let \(\Sigma\) be any closed subvariety of \(S_{\tau}\) with \(\dim\Sigma=r\). Let \(Z\) be a desingularization of any irreducible component in \(s_{\tau}^{-1}(\Sigma)\) which is surjective onto \(\Sigma\). Denote by \(f:Z\to Y\) the natural morphism.
By Lemma 1.25, \(\sigma_{f}\) is a finite morphism whose image is \(\Sigma\) and \(T_{f^{*}\tau}=\sigma_{f}^{*}T_{\tau}\).
We first prove the induction for \(\dim\Sigma=1\). In this case \(\dim S_{f^{*}\tau}=1\). Since the spectral forms associated to \(f^{*}\tau\) are not constant, it follows that \(T_{f^{*}\tau}\) is big. By Lemma 1.25, \(\{T_{\tau}|_{\Sigma}\}\) is big. This proves the induction when \(\dim\Sigma=1\).
Assume now that the induction holds for closed subvarieties \(\Sigma\subset S_{\tau}\) with \(\dim\Sigma\leq r-1\). Let us deal with the case \(\dim\Sigma=r\). By Lemma 1.25 and the induction, we know that for any proper closed positive dimensional subvariety \(\Xi\subset S_{f^{*}\tau}\), we have \(\{T_{f^{*}\tau}\}^{\dim\Xi}\cdot\Xi>0\). Note that the conditions in Theorem 4.17 for \(f^{*}\tau\) are fulfilled. Therefore, there are two possibilities:
* either \(\{T_{f^{*}\tau}\}^{r}\cdot S_{f^{*}\tau}>0\);
* or the reduction map \(s_{f^{*}\tau}:Z\to S_{f^{*}\tau}\) coincides with \(s_{\nu}:Z\to S_{\nu}\), where \(\nu:\pi_{1}(Z)\to(H/\mathcal{D}H)(K)\) is the composition of \(f^{*}\tau\) with the group homomorphism \(H\to H/\mathcal{D}H\). Here \(H\) is the Zariski closure of \(f^{*}\tau(\pi_{1}(Z))\).
If the first case happens, by Lemma 1.25 again we have \(\int_{\Sigma}\{T_{\tau}\}^{\dim\Sigma}>0\), which finishes the proof of the induction for \(\Sigma\subset S_{\tau}\) with \(\dim\Sigma=r\). Assume that the second situation occurs. Since \(H\) is assumed to be semisimple, \(H/\mathcal{D}H\) is finite. Therefore, \(\nu\) is bounded and thus \(S_{f^{*}\tau}\) is a point. This contradicts the fact that \(\dim S_{f^{*}\tau}=\dim\Sigma=r>0\). Therefore, the second situation cannot occur. This finishes the proof of the induction. The claim is proved.
This claim in particular implies that the _generic rank_\(r\) of the multivalued holomorphic \(1\)-forms on \(Y\) induced by the differential of harmonic mappings of \(\tau\) is equal to \(\dim S_{\tau}\).
Since \(G\) is almost simple, by [10] we know that the Katzarkov-Eyssidieux reduction map \(s_{\tau}:Y\to S_{\tau}\) is birational. Therefore \(r=\dim Y\). On the other hand, we note that \(r\) is less than or equal to the dimension of the Bruhat-Tits building \(\Delta(G)_{K}\), which is equal to \(\mathrm{rank}_{K}G\). The theorem is proved.
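Schematically, the argument just given can be condensed into the following chain (a summary of the steps above, not an additional claim):

\[\dim Y=\dim S_{\tau}=r\leq\dim\Delta(G)_{K}=\operatorname{rank}_{K}G,\]

where \(r\) is the generic rank of the multivalued \(1\)-forms induced by the \(\tau\)-equivariant harmonic mapping into the building \(\Delta(G)_{K}\).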
### Constructing Kahler classes via representations into non-archimedean fields
Let \(X\) be a smooth projective variety. In this subsection we will prove a more general theorem than Theorem 4.5.
**Theorem 4.21**.: _Let \(\mathfrak{C}\) be an absolutely constructible subset of \(M_{\mathrm{B}}(X,N)(\mathbb{C})\). Then there is a family of representations \(\boldsymbol{\tau}:=\{\tau_{i}:\pi_{1}(X)\to\mathrm{GL}_{N}(K_{i})\}_{i=1,\dots,M}\) where \(K_{i}\) are non-archimedean local fields such that_
* _For each_ \(i=1,\dots,M\)_,_ \([\tau_{i}]\in\mathfrak{C}(K_{i})\)_;_
* _The reduction map_ \(s_{\boldsymbol{\tau}}:X\to S_{\boldsymbol{\tau}}\) _of_ \(\boldsymbol{\tau}\) _coincides with_ \(s_{\mathfrak{C}}:X\to S_{\mathfrak{C}}\) _defined in Definition_ 3.1_._
* _For the canonical current_ \(T_{\boldsymbol{\tau}}\) _defined over_ \(S_{\mathfrak{C}}\)_,_ \(\{T_{\boldsymbol{\tau}}\}\) _is a Kahler class._
Proof.: _Step 1._ By Definition 3.1 and Lemma 1.29 there are non-archimedean local fields \(L_{1},\dots,L_{\ell}\) of characteristic zero and reductive representations \(\tau_{i}:\pi_{1}(X)\to\mathrm{GL}_{N}(L_{i})\) such that \([\tau_{i}]\in\mathfrak{C}(L_{i})\) and \(s_{\mathfrak{C}}:X\to S_{\mathfrak{C}}\) is the Stein factorization of \((s_{\tau_{1}},\dots,s_{\tau_{\ell}}):X\to S_{\tau_{1}}\times\dots\times S_{\tau_{\ell}}\). Write \(\boldsymbol{\tau}:=\{\tau_{i}\}_{i=1,\dots,\ell}\). We shall prove that we can add further reductive representations \(\tau_{\ell+1},\dots,\tau_{M}\) into non-archimedean local fields \(L_{i}\) with \([\tau_{i}]\in\mathfrak{C}(L_{i})\) for each \(i=\ell+1,\dots,M\) such that \(\{T_{\boldsymbol{\tau}^{\prime}}\}\) over \(S_{\mathfrak{C}}\) is Kahler for the new family \(\boldsymbol{\tau}^{\prime}:=\{\tau_{i}:\pi_{1}(X)\to\mathrm{GL}_{N}(L_{i})\}_{i=1,\dots,M}\).
_Step 2._ By Theorem 1.13, it suffices to find extra \(\tau_{\ell+1},\dots,\tau_{M}\) such that \(\{T_{\boldsymbol{\tau}}\}^{\dim\Sigma}\cdot\Sigma>0\) for every closed subvariety \(\Sigma\) of \(S_{\mathfrak{C}}\).
Let \(\dim\Sigma=1\). Let \(Z\) be the desingularization of an irreducible component in \(s_{\mathfrak{C}}^{-1}(\Sigma)\) which is surjective onto \(\Sigma\). Hence, after reordering \(\tau_{1},\dots,\tau_{\ell}\), we may assume that \(\Sigma_{1}:=s_{\tau_{1}}(Z)\) is a curve. This implies that \(\{T_{\tau_{1}}\}\cdot\Sigma_{1}>0\). Note that \(e_{\tau_{1}}:\Sigma\to\Sigma_{1}\) is finite. Hence \(\{e_{\tau_{1}}^{*}T_{\tau_{1}}\}\cdot\Sigma>0\). Note that \(T_{\boldsymbol{\tau}}\geq e_{\tau_{1}}^{*}T_{\tau_{1}}\). Therefore, \(\{T_{\boldsymbol{\tau}}\}\cdot\Sigma>0\). The case of curves is proved. We now perform two inductions on the dimension of closed subvarieties in \(S_{\mathfrak{C}}\) to prove the theorem.
**Induction One**. Assume that for every closed subvariety \(\Sigma\subset S_{\mathfrak{C}}\) of dimension \(\leq r-1\), one can add reductive representations \(\{\tau_{i}:\pi_{1}(X)\to\mathrm{GL}_{N}(L_{i})\}_{i=\ell+1,\dots,k}\) (depending on \(\Sigma\)) with \([\tau_{i}]\in\mathfrak{C}(L_{i})\) for each \(i=\ell+1,\dots,k\) such that \(\{T_{\boldsymbol{\tau}^{\prime}}\}^{\dim\Sigma}\cdot\Sigma>0\) for the new family \(\boldsymbol{\tau}^{\prime}:=\{\tau_{i}:\pi_{1}(X)\to\mathrm{GL}_{N}(L_{i})\}_{i=1,\dots,k}\).
Let \(\Sigma\subset S_{\mathfrak{C}}\) be a closed subvariety of dimension \(r\) such that \(\{T_{\boldsymbol{\tau}}\}^{\dim\Sigma}\cdot\Sigma=0\). Let \(\pi:X^{\mathrm{sp}}\to X\) be a ramified Galois cover which dominates the spectral covers associated to each \(\tau_{1},\dots,\tau_{\ell}\). Pulling back all the spectral forms associated with the \(\tau_{i}\) to \(X^{\mathrm{sp}}\), we obtain a linear subspace \(V\subset H^{0}(X^{\mathrm{sp}},\Omega^{1}_{X^{\mathrm{sp}}})\). We denote by \(\mathcal{A}_{X^{\mathrm{sp}}}\) the Albanese variety of \(X^{\mathrm{sp}}\). Then \(s_{\pi^{*}\boldsymbol{\tau}}:X^{\mathrm{sp}}\to S_{\pi^{*}\boldsymbol{\tau}}\) is the Stein factorization of the partial Albanese morphism \(X^{\mathrm{sp}}\to\mathcal{A}_{X^{\mathrm{sp}},V}\) associated to \(V\), and we have a commutative diagram (not reproduced here) with \(\sigma_{\pi}:S_{\pi^{*}\boldsymbol{\tau}}\to S_{\boldsymbol{\tau}}\) a finite surjective morphism. Take a closed subvariety \(\Sigma^{\prime}\subset S_{\pi^{*}\boldsymbol{\tau}}\) which dominates \(\Sigma\) via \(\sigma_{\pi}\). Let \(Y\) be the desingularization of the normalization of an irreducible component of \(X^{\mathrm{sp}}\times_{S_{\pi^{*}\boldsymbol{\tau}}}\Sigma^{\prime}\) which dominates \(\Sigma^{\prime}\). Consider the pullback representations \(\varphi^{*}\tau_{i}:\pi_{1}(Y)\to\operatorname{GL}_{N}(L_{i})\) and the reduction maps \(s_{\varphi^{*}\tau_{i}}:Y\to S_{\varphi^{*}\tau_{i}}\), where \(\varphi:Y\to X\) denotes the natural morphism. Then \(s_{\varphi^{*}\boldsymbol{\tau}}:Y\to S_{\varphi^{*}\boldsymbol{\tau}}\) is the Stein factorization of the partial Albanese morphism associated to \(\iota^{*}V\subset H^{0}(Y,\Omega^{1}_{Y})\), where \(\iota:Y\to X^{\mathrm{sp}}\) is the natural map.
Note that \(\sigma_{\iota}(S_{\varphi^{*}\boldsymbol{\tau}})=\Sigma^{\prime}\). By taking successive hyperplane sections in \(Y\), we can find a morphism \(Z^{\prime}\to Y\) from a smooth projective variety \(Z^{\prime}\) which is birational onto its image such that the composition \(Z^{\prime}\to S_{\varphi^{*}\boldsymbol{\tau}}\) is a generically finite surjective morphism.
Then \(S_{\phi^{*}\boldsymbol{\tau}}\to S_{\varphi^{*}\boldsymbol{\tau}}\) is a finite surjective morphism by Lemma 1.25, where \(\phi:Z^{\prime}\to X\) denotes the composed morphism. It follows that \(s_{\phi^{*}\boldsymbol{\tau}}:Z^{\prime}\to S_{\phi^{*}\boldsymbol{\tau}}\) is a birational morphism. Note that for any reductive representation \(\varrho:\pi_{1}(X)\to\operatorname{GL}_{N}(K)\), its reduction map \(s_{\varrho}:X\to S_{\varrho}\) factors through \(s_{\mathfrak{C}}:X\to S_{\mathfrak{C}}\). Hence the reduction map \(s_{\phi^{*}\varrho}:Z^{\prime}\to S_{\phi^{*}\varrho}\) factors through \(s_{\phi^{*}\boldsymbol{\tau}}:Z^{\prime}\to S_{\phi^{*}\boldsymbol{\tau}}\) by Theorem 1.22. We assume that, after adding reductive representations \(\tau_{\ell+1},\dots,\tau_{k}\) into non-archimedean local fields with \([\tau_{i}]\in\mathfrak{C}(L_{i})\) for each \(i=\ell+1,\dots,k\), the generic rank of the multivalued holomorphic \(1\)-forms on \(Z^{\prime}\) induced by the differentials of the harmonic mappings of \(\{\phi^{*}\tau_{i}:\pi_{1}(Z^{\prime})\to\operatorname{GL}_{N}(L_{i})\}_{i=1,\dots,k}\) achieves its _maximum_, which we denote by \(m\). We take a Galois cover \(Z\to Z^{\prime}\) which dominates all spectral covers of \(\phi^{*}\tau_{i}\) for \(i=1,\dots,k\). We replace \(Z\) by a desingularization and pull back all the spectral forms of \(\phi^{*}\tau_{i}\) for \(i=1,\dots,k\) to \(Z\) to obtain \(\mathbb{V}\subset H^{0}(Z,\Omega^{1}_{Z})\). We still use the same notation \(\boldsymbol{\tau}\) to denote the enlarged family of representations \(\{\tau_{i}\}_{i=1,\dots,k}\). Note that \(s_{\boldsymbol{\tau}}\) always coincides with \(s_{\mathfrak{C}}\) if we add \(\tau_{\ell+1},\dots,\tau_{\ell^{\prime}}\) with \([\tau_{i}]\in\mathfrak{C}(L_{i})\) for each \(i=\ell+1,\dots,\ell^{\prime}\). Therefore, the diagram (4.2) below stabilizes whenever we add such new reductive representations to \(\{\tau_{1},\dots,\tau_{k}\}\):

(4.2) [commutative diagram of \(Z\to Z^{\prime}\to X\) and the associated reduction maps; not reproduced]
Note that \(s_{\psi^{*}\boldsymbol{\tau}}:Z\to S_{\psi^{*}\boldsymbol{\tau}}\) is the Stein factorization of the partial Albanese morphism \(g_{\mathbb{V}}:Z\to\mathcal{A}_{Z,\mathbb{V}}\) associated with \(\mathbb{V}\), where \(\psi:Z\to X\) denotes the composed morphism. Moreover, \(s_{\psi^{*}\boldsymbol{\tau}}\) is birational since \(s_{\phi^{*}\boldsymbol{\tau}}\) is birational, and the generic rank of \(\mathbb{V}\) is \(m\). Therefore, if \(m=\dim Z\), according to Remark 4.16 the current \(T_{\psi^{*}\boldsymbol{\tau}}\) on \(S_{\psi^{*}\boldsymbol{\tau}}\) is big and, by the functoriality of the canonical currents in Lemma 1.25, \(\{T_{\boldsymbol{\tau}}\}^{r}\cdot\Sigma>0\). The induction for subvarieties in \(S_{\mathfrak{C}}\) of dimension \(r\) is thus proved.
Assume now \(m<\dim Z\), which means that the generic rank of \(\mathbb{V}\) is less than \(\dim Z\). We shall prove that this cannot happen.
_Case (1):_ \(m<\dim_{\mathbb{C}}\mathbb{V}\). The proof is close to that of Lemma 4.15. By Lemma 4.14 there is \(\mathbb{W}\subset\mathbb{V}\) with \(\dim_{\mathbb{C}}\mathbb{W}\leq m+1\) such that \(\operatorname{Im}\left[\Lambda^{\dim\mathbb{W}}\mathbb{W}\to H^{0}(Z,\Omega^{\dim\mathbb{W}}_{Z})\right]=0\), and for any hyperplane \(\mathbb{W}^{\prime}\subsetneq\mathbb{W}\), we always have \(\operatorname{Im}\left[\Lambda^{\dim\mathbb{W}-1}\mathbb{W}^{\prime}\to H^{0}(Z,\Omega^{\dim\mathbb{W}-1}_{Z})\right]\neq 0\).
Since we assume that \(m<\dim Z\), it follows that \(\dim\mathbb{W}\leq\dim Z\). By Theorem 4.13, there is a fibration \(p:Z\to B\) with \(B\) a projective normal variety of dimension \(\dim B=\dim\mathbb{W}-1\leq\dim Z-1\) such that \(\mathbb{W}\subset p^{*}H^{0}(B,\Omega^{1}_{B})\). Let \(F\) be a general fiber of \(p\); it is a proper closed subvariety of \(Z\) which is birational to \(F^{\prime}:=s_{\psi^{*}\boldsymbol{\tau}}(F)\) via \(s_{\psi^{*}\boldsymbol{\tau}}\). Since \(\mathbb{W}\subset p^{*}H^{0}(B,\Omega^{1}_{B})\), the generic rank of \(\mathbb{W}\) is equal to \(\dim B\), and since \(\operatorname{Im}\left[\Lambda^{\dim\mathbb{V}}\mathbb{V}\to H^{0}(Z,\Omega^{\dim\mathbb{V}}_{Z})\right]=0\), it follows that

\[\operatorname{Im}[\Lambda^{\dim F}\mathbb{V}\to H^{0}(F,\Omega^{\dim F}_{F})]=0.\]

Therefore, \(\{T_{\psi^{*}\boldsymbol{\tau}}\}^{\dim F^{\prime}}\cdot F^{\prime}=0\) by Remark 4.16.
By the induction, we can add some new reductive representations \(\tau_{k+1},\dots,\tau_{k^{\prime}}\) into non-archimedean local fields \(L_{i}\) with \([\tau_{i}]\in\mathfrak{C}(L_{i})\) for each \(i=k+1,\dots,k^{\prime}\) such that for the new family \(\boldsymbol{\tau}^{\prime}:=\{\tau_{i}\}_{i=1,\dots,k^{\prime}}\), one has \(\{T_{\boldsymbol{\tau}^{\prime}}\}^{\dim\sigma_{\psi}(F^{\prime})}\cdot\sigma_{\psi}(F^{\prime})>0\). By Lemma 1.25, \(\{T_{\psi^{*}\boldsymbol{\tau}^{\prime}}\}^{\dim F^{\prime}}\cdot F^{\prime}>0\).
Note that \(s_{\boldsymbol{\tau}^{\prime}}:X\to S_{\boldsymbol{\tau}^{\prime}}\) coincides with \(s_{\boldsymbol{\tau}}:X\to S_{\boldsymbol{\tau}}\) by our definition of \(s_{\mathfrak{C}}:X\to S_{\mathfrak{C}}\). Hence by Lemma 1.25, \(s_{\psi^{*}\boldsymbol{\tau}^{\prime}}:Z\to S_{\psi^{*}\boldsymbol{\tau}^{\prime}}\) coincides with \(s_{\psi^{*}\boldsymbol{\tau}}:Z\to S_{\psi^{*}\boldsymbol{\tau}}\). Since \(\{T_{\psi^{*}\boldsymbol{\tau}^{\prime}}\}^{\dim F^{\prime}}\cdot F^{\prime}>0\), we conclude that the multivalued one-forms on \(Z\) induced by \(\psi^{*}\boldsymbol{\tau}^{\prime}\) have generic rank \(\dim Z\). It implies that the multivalued one-forms on \(Z^{\prime}\) induced by \(\phi^{*}\boldsymbol{\tau}^{\prime}\) have generic rank \(\dim Z^{\prime}=\dim Z\). This contradicts our assumption that \(m<\dim Z\). Hence Case (1) cannot happen. In the next step we deal with Case (2), using Theorems 4.5 and 4.17, and show that it cannot happen either.
_Step 3. Case (2):_ \(m=\dim_{\mathbb{C}}\mathbb{V}\).
_Claim 4.22_.: _For any reductive representations \(\{\tau_{i}:\pi_{1}(X)\to\mathrm{GL}_{N}(L^{\prime}_{i})\}_{i=k+1,\dots,k^{\prime}}\) with \(L^{\prime}_{i}\) non-archimedean local fields and \([\tau_{i}]\in\mathfrak{C}(L^{\prime}_{i})\), the new family \(\boldsymbol{\tau}^{\prime}:=\{\boldsymbol{\tau}\}\cup\{\tau_{i}\}_{i=k+1,\dots,k^{\prime}}\) satisfies_
\[CT_{\psi^{*}\boldsymbol{\tau}^{\prime}}\geq T_{\psi^{*}\boldsymbol{\tau}}\geq C ^{-1}T_{\psi^{*}\boldsymbol{\tau}^{\prime}} \tag{4.3}\]
_for some constant \(C>0\)._
Proof.: We may replace \(Z\) by a Galois cover which dominates the spectral covers of \(\psi^{*}\boldsymbol{\tau}^{\prime}\). Note that \(T_{\psi^{*}\boldsymbol{\tau}^{\prime}}\geq T_{\psi^{*}\boldsymbol{\tau}}\), and that the multivalued one-forms on \(Z\) induced by \(\psi^{*}\boldsymbol{\tau}^{\prime}\) always have generic rank \(m\) by our choice of \(m\). Assume by contradiction that (4.3) does not hold. Then the dimension of the space of global spectral forms \(\mathbb{V}^{\prime}\) induced by \(\psi^{*}\boldsymbol{\tau}^{\prime}\) is greater than \(\dim\mathbb{V}\). We are then in the situation of Case (1), which gives the contradiction. The claim is proved.
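In particular, (4.3) says that the two canonical currents dominate each other up to a constant; the following consequence, used in the proof of Claim 4.23 below, is then immediate: for every closed subvariety \(W\subset S_{\psi^{*}\boldsymbol{\tau}}\),

\[\{T_{\psi^{*}\boldsymbol{\tau}^{\prime}}\}^{\dim W}\cdot W>0\iff\{T_{\psi^{*}\boldsymbol{\tau}}\}^{\dim W}\cdot W>0.\]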
**Claim 4.23**.: _For any proper closed subvariety \(V\subsetneq\Sigma\) (resp. \(V\subsetneq S_{\psi^{*}\boldsymbol{\tau}}\)), one has \(\{T_{\boldsymbol{\tau}}\}^{\dim V}\cdot V>0\) (resp. \(\{T_{\psi^{*}\boldsymbol{\tau}}\}^{\dim V}\cdot V>0\))._
Proof.: Indeed, by the induction, for any proper closed subvariety \(V\subsetneq\Sigma\) we can add some new reductive representations \(\tau_{k+1},\ldots,\tau_{k^{\prime}}\) into non-archimedean local fields \(L_{i}\) with \([\tau_{i}]\in\mathfrak{C}(L_{i})\) for each \(i=k+1,\ldots,k^{\prime}\) such that \(\{T_{\boldsymbol{\tau}^{\prime}}\}^{\dim V}\cdot V>0\). By Lemma 1.25 this implies that \(\{T_{\psi^{*}\boldsymbol{\tau}^{\prime}}\}^{\dim V^{\prime}}\cdot V^{\prime}>0\) for any closed subvariety \(V^{\prime}\subset S_{\psi^{*}\boldsymbol{\tau}}\) which dominates \(V\). By Claim 4.22, it follows that \(\{T_{\psi^{*}\boldsymbol{\tau}}\}^{\dim V^{\prime}}\cdot V^{\prime}>0\). The claim follows.
Let us denote by \(H_{i}\) the Zariski closure of \(\psi^{*}\tau_{i}(\pi_{1}(Z))\) for each \(i=1,\ldots,k\), which is a reductive algebraic group over \(L_{i}\). By [uh], there is some number field \(k_{i}\) and some non-archimedean place \(v_{i}\) of \(k_{i}\) such that \(L_{i}=(k_{i})_{v_{i}}\) and \(H_{i}\) is defined over \(k_{i}\). Denote \(T_{i}:=H_{i}/\mathcal{D}H_{i}\). Consider the morphisms of affine \(k_{i}\)-schemes of finite type
(4.4) [diagram of morphisms of affine \(k_{i}\)-schemes; not reproduced]
Then by Theorem 1.20, \(\mathfrak{C}\subset M_{\mathrm{B}}(X,N)(\mathbb{C})\) is transferred via the diagram (4.4) to some absolutely constructible subset \(\mathfrak{C}_{i}\) of \(M_{\mathrm{B}}(Z,T_{i})\). Consider the reduction map \(s_{\mathfrak{C}_{i}}:Z\to S_{\mathfrak{C}_{i}}\) defined in Definition 3.1. Let \(f:Z\to S\) be the Stein factorization of \(s_{\mathfrak{C}_{1}}\times\cdots\times s_{\mathfrak{C}_{k}}:Z\to S_{\mathfrak{C}_{1}}\times\cdots\times S_{\mathfrak{C}_{k}}\).
**Claim 4.24**.: _The reduction map \(s_{\psi^{*}\boldsymbol{\tau}}:Z\to S_{\psi^{*}\boldsymbol{\tau}}\) factors as \(Z\xrightarrow{f}S\xrightarrow{q}S_{\psi^{*}\boldsymbol{\tau}}\)._
Proof.: By Claim 4.23 the conditions in Theorem 4.17 are fulfilled for \(\psi^{*}\boldsymbol{\tau}\). Since we assume that \(m<\dim Z\), the generic rank of \(\mathbb{V}\) is less than \(\dim Z\); this implies that \(\{T_{\psi^{*}\boldsymbol{\tau}}\}^{\dim S_{\psi^{*}\boldsymbol{\tau}}}\cdot S_{\psi^{*}\boldsymbol{\tau}}=0\). Hence the second possibility in Theorem 4.17 occurs, and we conclude that the reduction map \(s_{\sigma_{i}}:Z\to S_{\sigma_{i}}\) coincides with \(s_{\psi^{*}\tau_{i}}:Z\to S_{\psi^{*}\tau_{i}}\), where \(\sigma_{i}:\pi_{1}(Z)\to T_{i}(L_{i})\) is the composition of \(\psi^{*}\tau_{i}:\pi_{1}(Z)\to H_{i}(L_{i})\) with the group homomorphism \(H_{i}\to T_{i}\). By (4.4) and the definition of \(\mathfrak{C}_{i}\), \([\sigma_{i}]\in\mathfrak{C}_{i}(L_{i})\). Therefore, \(s_{\sigma_{i}}\) factors through \(s_{\mathfrak{C}_{i}}\). Since \(s_{\sigma_{i}}:Z\to S_{\sigma_{i}}\) coincides with \(s_{\psi^{*}\tau_{i}}:Z\to S_{\psi^{*}\tau_{i}}\), it follows that \(s_{\psi^{*}\boldsymbol{\tau}}\) factors through \(Z\xrightarrow{f}S\xrightarrow{q}S_{\psi^{*}\boldsymbol{\tau}}\).
Since the \(T_{i}\) are algebraic tori defined over the number fields \(k_{i}\), we apply Theorem 4.5 to conclude that there exists a family of reductive representations \(\boldsymbol{\varrho}_{i}:=\{\varrho_{ij}:\pi_{1}(Z)\to T_{i}(K_{ij})\}_{j=1,\dots,n_{i}}\) with \(K_{ij}\) non-archimedean local fields such that
1. For each \(i=1,\ldots,k;j=1,\ldots,n_{i}\), \([\varrho_{ij}]\in\mathfrak{C}_{i}(K_{ij})\);
2. The reduction map \(s_{\boldsymbol{\varrho}_{i}}:Z\to S_{\boldsymbol{\varrho}_{i}}\) of \(\boldsymbol{\varrho}_{i}\) coincides with \(s_{\mathfrak{C}_{i}}:Z\to S_{\mathfrak{C}_{i}}\);
3. for the canonical current \(T_{\boldsymbol{\varrho}_{i}}\) over \(S_{\mathfrak{C}_{i}}\) associated with \(\boldsymbol{\varrho}_{i}\), \(\{T_{\boldsymbol{\varrho}_{i}}\}\) is a Kahler class.
By the definition of \(\mathfrak{C}_{i}\), there exist a finite extension \(F_{ij}\) of \(K_{ij}\) and reductive representations \(\{\delta_{ij}:\pi_{1}(X)\to\mathrm{GL}_{N}(F_{ij})\}_{j=1,\ldots,n_{i}}\) such that
* (a) For each \(i=1,\dots,k;j=1,\dots,n_{i}\), \([\delta_{ij}]\in\mathfrak{C}(F_{ij})\);
* (b) the Zariski closure of \(\psi^{*}\delta_{ij}:\pi_{1}(Z)\to\mathrm{GL}_{N}(F_{ij})\) is contained in \(H_{i}\);
* (c) \([\eta_{ij}]=[\varrho_{ij}]\in M_{\mathrm{B}}(Z,T_{i})(F_{ij})\), where \(\eta_{ij}:\pi_{1}(Z)\to T_{i}(F_{ij})\) is the composition of \(\psi^{*}\delta_{ij}:\pi_{1}(Z)\to H_{i}(F_{ij})\) with the group homomorphism \(H_{i}\to T_{i}\).
Therefore, \(\eta_{ij}\) is conjugate to \(\varrho_{ij}\) and thus their reduction maps coincide. It follows that the canonical current \(T_{\eta_{ij}}\) coincides with \(T_{\varrho_{ij}}\). Let \(R_{i}\) be the radical of \(H_{i}\). Write \(\eta^{\prime}_{ij}:\pi_{1}(Z)\to(H_{i}/R_{i})(F_{ij})\) for the composition of \(\psi^{*}\delta_{ij}:\pi_{1}(Z)\to H_{i}(F_{ij})\) with the homomorphism \(H_{i}\to H_{i}/R_{i}\). Note that \(H_{i}\to T_{i}\times(H_{i}/R_{i})\) is an isogeny. It follows that the reduction map \(s_{\psi^{*}\delta_{ij}}\) is the Stein factorization of \(s_{\eta_{ij}}\times s_{\eta^{\prime}_{ij}}:Z\to S_{\eta_{ij}}\times S_{\eta^{\prime}_{ij}}\). Therefore, the reduction map \(s_{\eta_{ij}}:Z\to S_{\eta_{ij}}\) factors through the reduction map \(s_{\psi^{*}\delta_{ij}}:Z\to S_{\psi^{*}\delta_{ij}}\) with a finite morphism \(q_{ij}:S_{\psi^{*}\delta_{ij}}\to S_{\eta_{ij}}\). Moreover, by Definition 1.24, one can see that
\[q^{*}_{ij}T_{\varrho_{ij}}=q^{*}_{ij}T_{\eta_{ij}}\leq T_{\psi^{*}\delta_{ij}}. \tag{4.5}\]
Consider the family of representations \(\boldsymbol{\delta}:=\{\delta_{ij}:\pi_{1}(X)\to\operatorname{GL}_{N}(F_{ij})\}_{i=1,\dots,k;j=1,\dots,n_{i}}\). By Items (2) and (c), the Stein factorization \(f:Z\to S\) of \(Z\to S_{\mathfrak{C}_{1}}\times\cdots\times S_{\mathfrak{C}_{k}}\) factors through the reduction map \(s_{\psi^{*}\boldsymbol{\delta}}:Z\to S_{\psi^{*}\boldsymbol{\delta}}\). By Claim 4.24, \(f:Z\to S\) coincides with \(s_{\psi^{*}\boldsymbol{\delta}}:Z\to S_{\psi^{*}\boldsymbol{\delta}}\). Let \(e_{i}:S\to S_{\mathfrak{C}_{i}}=S_{\boldsymbol{\varrho}_{i}}\) be the natural map. Note that \(e_{1}\times\cdots\times e_{k}:S\to S_{\mathfrak{C}_{1}}\times\cdots\times S_{\mathfrak{C}_{k}}\) is finite. By Items (2) and (3), \(\{\sum_{i=1}^{k}e_{i}^{*}T_{\boldsymbol{\varrho}_{i}}\}\) is Kahler on \(S\). By (4.5), we conclude that \(\{T_{\psi^{*}\boldsymbol{\delta}}\}\) is Kahler on \(S_{\psi^{*}\boldsymbol{\delta}}\). According to Remark 4.16, this implies that the generic rank \(m\) of the multivalued holomorphic \(1\)-forms on \(Z^{\prime}\) induced by the differentials of the harmonic mappings associated with \(\{\phi^{*}\delta_{ij}:\pi_{1}(Z^{\prime})\to\operatorname{GL}_{N}(F_{ij})\}_{i=1,\dots,k;j=1,\dots,n_{i}}\) is equal to \(\dim Z^{\prime}=\dim Z\). This contradicts our assumption that \(m<\dim Z\). Hence Case (2) cannot happen either. This proves Induction One.
_Step 4._ We now prove the theorem by another induction.
**Induction Two**. Assume that for every closed subvariety \(\Sigma\subset S_{\mathfrak{C}}\) of dimension \(\leq r-1\), one can add \(\tau_{\ell+1},\dots,\tau_{p}\) (depending on \(\Sigma\)) with \([\tau_{i}]\in\mathfrak{C}(L_{i})\) for each \(i=\ell+1,\dots,p\) such that for every closed subvariety \(\Xi\subset\Sigma\), one has \(\{T_{\boldsymbol{\tau}}\}^{\dim\Xi}\cdot\Xi>0\), where \(\boldsymbol{\tau}:=\{\tau_{i}\}_{i=1,\dots,p}\).
Obviously, this induction is the same as Induction One when \(\dim\Sigma=1\), and thus it holds in this case. Let \(\Sigma\subset S_{\mathfrak{C}}\) be a closed subvariety of dimension \(r\). We shall prove that the induction holds for such \(\Sigma\).
By Induction One, one can add reductive representations \(\{\tau_{i}:\pi_{1}(X)\to\operatorname{GL}_{N}(L_{i})\}_{i=\ell+1,\dots,k}\) (depending on \(\Sigma\)) with \([\tau_{i}]\in\mathfrak{C}(L_{i})\) for each \(i=\ell+1,\dots,k\) such that \(\{T_{\boldsymbol{\tau}^{\prime}}\}^{\dim\Sigma}\cdot\Sigma>0\) for the new family \(\boldsymbol{\tau}^{\prime}:=\{\tau_{i}:\pi_{1}(X)\to\operatorname{GL}_{N}(L_{i})\}_{i=1,\dots,k}\). We construct \(\psi:Z\to X\) and a diagram as in (4.2). Then \(\{T_{\psi^{*}\boldsymbol{\tau}^{\prime}}\}\) is a big class on \(S_{\psi^{*}\boldsymbol{\tau}^{\prime}}\) by Lemma 1.25. We may replace \(Z\) by a Galois cover which dominates the spectral covers of \(\psi^{*}\boldsymbol{\tau}^{\prime}\). Let \(V\subset H^{0}(Z,\Omega^{1}_{Z})\) be the subspace generated by all spectral one-forms induced by \(\psi^{*}\boldsymbol{\tau}^{\prime}\). Note that there is a subspace \(\mathbb{V}\subset H^{0}(S_{\psi^{*}\boldsymbol{\tau}^{\prime}},\Omega^{1}_{S_{\psi^{*}\boldsymbol{\tau}^{\prime}}})\) such that \(s^{*}_{\psi^{*}\boldsymbol{\tau}^{\prime}}\mathbb{V}=V\). By Remark 4.16,
\[\operatorname{Im}\left[\Lambda^{\dim S_{\psi^{*}\boldsymbol{\tau}^{\prime}}}\mathbb{V}\to H^{0}\big(S_{\psi^{*}\boldsymbol{\tau}^{\prime}},\Omega^{\dim S_{\psi^{*}\boldsymbol{\tau}^{\prime}}}_{S_{\psi^{*}\boldsymbol{\tau}^{\prime}}}\big)\right]\neq 0.\]

[The remainder of the proof of Induction Two is garbled beyond recovery in the source and is omitted here.]
### Holomorphic convexity associated with absolutely constructible subsets
In this subsection we will prove Theorem B. We shall use the notations and results proven in § 3.3 and Theorem 3.2 without recalling the details.
**Theorem 4.25**.: _Let \(X\) be a smooth projective variety. Let \(\mathfrak{C}\) be an absolutely constructible subset of \(M_{\mathrm{B}}(X,N)(\mathbb{C})\) defined in Definition 1.17. Assume that \(\mathfrak{C}\) is defined over \(\mathbb{Q}\). Let \(\pi:\widetilde{X}_{\mathfrak{C}}\to X\) be the covering corresponding to the group \(\cap_{\varrho}\ker\varrho\subset\pi_{1}(X)\) where \(\varrho:\pi_{1}(X)\to\mathrm{GL}_{N}(\mathbb{C})\) ranges over all reductive representations such that \([\varrho]\in\mathfrak{C}(\mathbb{C})\). Then \(\widetilde{X}_{\mathfrak{C}}\) is holomorphically convex. In particular, if \(\pi_{1}(X)\) is a subgroup of \(\mathrm{GL}_{N}(\mathbb{C})\) whose Zariski closure is reductive, then \(\widetilde{X}_{\mathfrak{C}}\) is holomorphically convex._
Proof.: Let \(H:=\cap_{\varrho}\ker\varrho\cap\ker\sigma\), where \(\sigma\) is the \(\mathbb{C}\)-VHS defined in Proposition 3.12 and \(\varrho:\pi_{1}(X)\to\mathrm{GL}_{N}(\mathbb{C})\) ranges over all reductive representations such that \([\varrho]\in\mathfrak{C}\). Denote by \(\widetilde{X}_{H}:=\widetilde{X}/H\). Let \(\mathscr{D}\) be the period domain associated to the \(\mathbb{C}\)-VHS \(\sigma\) defined in Proposition 3.12 and let \(p:\widetilde{X}_{H}\to\mathscr{D}\) be the period mapping. By (3.5), \(H=\cap_{\varrho}\ker\varrho\), where \(\varrho:\pi_{1}(X)\to\mathrm{GL}_{N}(\mathbb{C})\) ranges over all reductive representations such that \([\varrho]\in\mathfrak{C}\). Therefore, \(\widetilde{X}_{\mathfrak{C}}=\widetilde{X}_{H}\).
Consider the product
\[\Psi=s_{\mathfrak{C}}\circ\pi_{H}\times p:\widetilde{X}_{H}\to S_{\mathfrak{C}} \times\mathscr{D}\]
where \(p:\widetilde{X}_{H}\to\mathscr{D}\) is the period mapping of \(\sigma\). Recall that \(\Psi\) factors through a proper surjective fibration \(\mathrm{sh}_{H}:\widetilde{X}_{H}\to\widetilde{S}_{H}\). Moreover, there is a properly discontinuous action of \(\pi_{1}(X)/H\) on \(\widetilde{S}_{H}\) such that \(\mathrm{sh}_{H}\) is equivariant with respect to this action. Write \(g:\widetilde{S}_{H}\to S_{\mathfrak{C}}\times\mathscr{D}\) for the induced holomorphic map. Denote by \(\phi:\widetilde{S}_{H}\to\mathscr{D}\) the composition of \(g\) and the projection map \(S_{\mathfrak{C}}\times\mathscr{D}\to\mathscr{D}\). Since the period mapping \(p\) is horizontal, and \(\mathrm{sh}_{H}\) is surjective, it follows that \(\phi\) is also horizontal.
Recall that in Lemma 3.28 we proved that there is a finite index normal subgroup \(N\) of \(\pi_{1}(X)/H\) and a homomorphism \(\nu:N\to\mathrm{Aut}(\widetilde{S}_{H})\) such that \(\mathrm{sh}_{H}:\widetilde{X}_{H}\to\widetilde{S}_{H}\) is \(\nu\)-equivariant and \(\nu(N)\) acts on \(\widetilde{S}_{H}\) properly discontinuously and without fixed points. Let \(Y:=\widetilde{X}_{H}/N\). Moreover, \(c:Y\to X\) is a finite Galois etale cover and \(N\) gives rise to a proper surjective fibration \(Y\to\widetilde{S}_{H}/\nu(N)\) between compact normal complex spaces. Write \(W:=\widetilde{S}_{H}/\nu(N)\). Then \(\widetilde{S}_{H}\to W\) is a topological Galois unramified covering. Recall that the canonical bundle \(K_{\mathscr{D}}\) on the period domain \(\mathscr{D}\) can be endowed with a \(G_{0}\)-invariant smooth metric \(h_{\mathscr{D}}\) whose curvature is strictly positive-definite in the horizontal direction. As \(\phi:\widetilde{S}_{H}\to\mathscr{D}\) is \(\nu(N)\)-equivariant, it follows that \(\phi^{*}K_{\mathscr{D}}\) descends to a line bundle on the quotient \(W:=\widetilde{S}_{H}/\nu(N)\), denoted by \(L_{G}\). The smooth metric \(h_{\mathscr{D}}\) induces a smooth metric on \(L_{G}\) whose curvature form is denoted by \(T\). Let \(x\in W\) be a smooth point of \(W\) and let \(v\in T_{\widetilde{S}_{H},x}\). Then \(|v|_{\omega}^{2}>0\) if \(d\phi(v)\neq 0\).
We fix a reference point \(x_{0}\) on \(\widetilde{S}_{H}\). Define \(\phi_{0}:=2d_{\mathscr{D}}^{2}(\phi(x),\phi(x_{0}))\) where \(d_{\mathscr{D}}:\mathscr{D}\times\mathscr{D}\to\mathbb{R}_{\geq 0}\) is the distance function on the period domain \(\mathscr{D}\). By [1, Theorem 3.3.2], we have
\[\mathrm{dd}^{\mathrm{c}}\phi_{0}\geq\omega=q^{*}T. \tag{4.6}\]
where \(q:\widetilde{S}_{H}\to\widetilde{S}_{H}/\nu(N)\) denotes the quotient map.
We now apply Theorem 4.21 to find a family of representations \(\boldsymbol{\tau}:=\{\tau_{i}:\pi_{1}(X)\to\mathrm{GL}_{N}(K_{i})\}_{i=1,\ldots,m}\) where \(K_{i}\) are non-archimedean local fields such that
* For each \(i=1,\ldots,m\), \([\tau_{i}]\in\mathfrak{C}(K_{i})\);
* The reduction map \(s_{\boldsymbol{\tau}}:X\to S_{\boldsymbol{\tau}}\) of \(\boldsymbol{\tau}\) coincides with \(s_{\mathfrak{C}}\).
* For the canonical current \(T_{\boldsymbol{\tau}}\) defined over \(S_{\mathfrak{C}}\), \(\{T_{\boldsymbol{\tau}}\}\) is a Kahler class.
Consider
Note that \(p\) is a finite surjective morphism.
We fix a reference point \(x_{0}\) on \(\widetilde{X}_{H}\). For each \(i=1,\ldots,m\), let \(u_{i}:\widetilde{X}_{H}\to\Delta(\mathrm{GL}_{N})_{K_{i}}\) be the \(\tau_{i}\)-equivariant harmonic mapping from \(\widetilde{X}_{H}\) to the Bruhat-Tits building of \(\mathrm{GL}_{N}(K_{i})\) whose existence was ensured by a theorem of Gromov-Schoen [10]. Then the function \(\tilde{\phi}_{i}(x):=2d_{i}^{2}(u_{i}(x),u_{i}(x_{0}))\) defined over \(\widetilde{X}_{H}\) is locally Lipschitz, where \(d_{i}:\Delta(\mathrm{GL}_{N})_{K_{i}}\times\Delta(\mathrm{GL}_{N})_{K_{i}}\to \mathbb{R}_{\geq 0}\) is the distance function on the Bruhat-Tits building. By Proposition 1.26, it induces continuous psh functions \(\{\phi_{i}:\widetilde{S}_{H}\to\mathbb{R}_{\geq 0}\}_{i=1,\ldots,m}\) such that \(\mathrm{dd}^{\mathrm{c}}\phi_{i}\geq r_{i}^{*}T_{\tau_{i}}\) for each \(i\). By the definition of \(T_{\mathbf{\tau}}\), we have
\[\mathrm{dd}^{\mathrm{c}}\sum_{i=1}^{m}\phi_{i}\geq r^{*}T_{\mathbf{\tau}}. \tag{4.7}\]
Therefore, putting (4.6) and (4.7) together we obtain
\[\mathrm{dd}^{\mathrm{c}}\sum_{i=0}^{m}\phi_{i}\geq q^{*}(f^{*}T_{\mathbf{\tau}}+T). \tag{4.8}\]
As \(f\) is a finite surjective morphism, \(\{f^{*}T_{\mathbf{\tau}}\}\) is also Kahler by Theorem 1.13.
By Claim 3.31, we know that \(g:\widetilde{S}_{H}\to S_{\mathfrak{C}}\times\mathscr{D}\) has discrete fibers. Since \(T\) is induced by the curvature form of \((K_{\mathscr{D}},h_{\mathscr{D}})\), and \(\phi:\widetilde{S}_{H}\to\mathscr{D}\) is horizontal, we can prove that for every irreducible positive dimensional closed subvariety \(Z\) of \(W\), \(f^{*}T_{\mathbf{\tau}}+T\) is strictly positive at general smooth points of \(Z\). Therefore,
\[\{f^{*}T_{\mathbf{\tau}}+T\}^{\dim Z}\cdot Z=\int_{Z}(f^{*}T_{\mathbf{\tau}}+T)^{\dim Z }>0.\]
Recall that \(W\) is projective by the proof of Claim 3.32. We utilize Theorem 1.13 to conclude that \(\{f^{*}T_{\mathbf{\tau}}+T\}\) is Kahler.
Given that \(\widetilde{S}_{H}\to W\) represents a topological Galois unramified cover, we can apply Proposition 1.14 in conjunction with (4.8) to deduce that \(\widetilde{S}_{H}\) is a Stein manifold. Furthermore, since \(\widetilde{X}_{H}\to\widetilde{S}_{H}\) is a proper surjective holomorphic fibration, the holomorphic convexity of \(\widetilde{X}_{H}\) follows from the Cartan-Remmert theorem. Ultimately, the theorem is established by noting that \(\widetilde{X}_{H}=\widetilde{X}_{\mathfrak{C}}\).
### Universal covering is Stein
We shall use the notations in the proof of Theorem 4.25 without recalling their definitions.
**Theorem 4.26**.: _Let \(X\) be a smooth projective variety. Consider an absolutely constructible subset \(\mathfrak{C}\) of \(M_{\mathrm{B}}(X,\mathrm{GL}_{N}(\mathbb{C}))\) as defined in Definition 1.17. We further assume that \(\mathfrak{C}\) is defined over \(\mathbb{Q}\). If \(\mathfrak{C}\) is considered to be large, meaning that for any closed subvariety \(Z\) of \(X\), there exists a reductive representation \(\varrho:\pi_{1}(X)\to\mathrm{GL}_{N}(\mathbb{C})\) such that \([\varrho]\in\mathfrak{C}\) and \(\varrho(\mathrm{Im}[\pi_{1}(Z^{\mathrm{norm}})\to\pi_{1}(X)])\) is infinite, then all intermediate coverings between \(\widetilde{X}\) and \(\widetilde{X}_{\mathfrak{C}}\) of \(X\) are Stein manifolds._
Proof.: Note that \(\mathrm{sh}_{H}:\widetilde{X}_{H}\to\widetilde{S}_{H}\) is a proper holomorphic surjective fibration.
**Claim 4.27**.: \(\mathrm{sh}_{H}\) _is biholomorphic._
Proof.: Assume that there exists a positive-dimensional compact subvariety \(Z\) of \(\widetilde{X}_{H}\) which is contained in some fiber of \(\operatorname{sh}_{H}\). Consider \(W:=\pi_{H}(Z)\) which is a compact positive-dimensional irreducible subvariety of \(X\). Therefore, \(\operatorname{Im}[\pi_{1}(Z^{\text{norm}})\to\pi_{1}(W^{\text{norm}})]\) is a finite index subgroup of \(\pi_{1}(W^{\text{norm}})\). By the definition of \(\widetilde{X}_{H}\), for any reductive \(\varrho:\pi_{1}(X)\to\operatorname{GL}_{N}(\mathbb{C})\) with \([\varrho]\in\mathfrak{C}(\mathbb{C})\), we have \(\varrho(\operatorname{Im}[\pi_{1}(Z^{\text{norm}})\to\pi_{1}(X)])=\{1\}\). Therefore, \(\varrho(\operatorname{Im}[\pi_{1}(W^{\text{norm}})\to\pi_{1}(X)])\) is finite. This contradicts our assumption that \(\mathfrak{C}\) is large. Hence, \(\operatorname{sh}_{H}\) is a one-to-one proper holomorphic map of complex normal spaces. Consequently, it is biholomorphic.
By the proof of Theorem 4.25, there exist
* a topological Galois unramified covering \(q:\widetilde{X}_{H}=\widetilde{S}_{H}\to W\), where \(W\) is a projective normal variety;
* a positive \((1,1)\)-current with continuous potential \(f^{*}T_{\boldsymbol{\tau}}+T\) over \(W\) such that \(\{f^{*}T_{\boldsymbol{\tau}}+T\}\) is Kahler;
* a continuous semi-positive plurisubharmonic function \(\sum_{i=0}^{m}\phi_{i}\) on \(\widetilde{S}_{H}\) such that we have (4.9) \[\operatorname{dd}^{\text{c}}\sum_{i=0}^{m}\phi_{i}\geq q^{*}(f^{*}T_{ \boldsymbol{\tau}}+T).\]
Let \(p:\widetilde{X}^{\prime}\to\widetilde{X}_{H}\) be an intermediate Galois covering of \(X\) between \(\widetilde{X}\) and \(\widetilde{X}_{H}\). By (4.9) we have
\[\operatorname{dd}^{\text{c}}\sum_{i=0}^{m}p^{*}\phi_{i}\geq(q\circ p)^{*}(f^{* }T_{\boldsymbol{\tau}}+T). \tag{4.10}\]
We apply Proposition 1.14 to conclude that \(\widetilde{X}^{\prime}\) is Stein.
## Appendix A Shafarevich conjecture for projective normal varieties
Ya Deng, Ludmil Katzarkov\({}^{(a)}\) & Katsutoshi Yamanoi
In this appendix, we aim to extend Theorems 4.25 and 4.26 to singular normal varieties, thus completing the proofs of Theorems B and C.
### Absolutely constructible subset (II)
Let \(X\) be a projective normal variety. Following the recent work of Lerer [10], we can also define absolutely constructible subsets in the character variety \(M_{\text{B}}(X,N):=M_{\text{B}}(\pi_{1}(X),\operatorname{GL}_{N})\).
**Definition A.1**.: Let \(X\) be a normal projective variety, \(\mu:Y\to X\) be a resolution of singularities, and \(\iota:M_{\text{B}}(X,N)\hookrightarrow M_{\text{B}}(Y,N)\) be the embedding. A subset \(\mathfrak{C}\subset M_{\text{B}}(X,N)(\mathbb{C})\) is called _absolutely constructible_ if \(\iota(\mathfrak{C})\) is an absolutely constructible subset of \(M_{\text{B}}(Y,N)\) in the sense of Definition 1.17.
Note that the above definition does not depend on the choice of the resolution of singularities (cf. [10, Lemma 2.7]). Moreover, we have the following result.
**Proposition A.2** ([10, Proposition 2.8]).: _Let \(X\) be a normal projective variety. Then \(M_{\text{B}}(X,N)\) is absolutely constructible in the sense of Definition A.1._
This result holds significant importance, as it provides a fundamental example of absolutely constructible subsets for projective normal varieties. It is worth noting that in [10, Proposition 2.8], it is explicitly stated that \(\iota(M_{\text{B}}(X,N))\) is \(U(1)\)-invariant, with \(\iota\) defined in Definition A.1. However, it should be emphasized that the proof can be easily adapted to show \(\mathbb{C}^{*}\)-invariance, similar to the approach used in the proof of Proposition 3.35.
### Reductive Shafarevich conjecture for normal projective varieties
**Theorem A.3**.: _Let \(Y\) be a projective normal variety. Let \(\mathfrak{C}\) be an absolutely constructible subset of \(M_{\mathrm{B}}(Y,N)(\mathbb{C})\), defined over \(\mathbb{Q}\) (e.g. \(\mathfrak{C}=M_{\mathrm{B}}(Y,N)\)). Consider the covering \(\pi:\widetilde{Y}_{\mathfrak{C}}\to Y\) corresponding to the subgroup \(\cap_{\varrho}\ker\varrho\) of \(\pi_{1}(Y)\), where \(\varrho:\pi_{1}(Y)\to\mathrm{GL}_{N}(\mathbb{C})\) ranges over all reductive representations such that \([\varrho]\in\mathfrak{C}\). Then the complex space \(\widetilde{Y}_{\mathfrak{C}}\) is holomorphically convex. In particular,_
* The covering corresponding to the intersection of the kernels of all reductive representations of \(\pi_{1}(Y)\) in \(\mathrm{GL}_{N}(\mathbb{C})\) is holomorphically convex;
* if \(\pi_{1}(Y)\) is a subgroup of \(\mathrm{GL}_{N}(\mathbb{C})\) whose Zariski closure is reductive, then the universal covering of \(Y\) is holomorphically convex.
Proof.: Let \(\mu:X\to Y\) be any desingularization. Let \(j:M_{\mathrm{B}}(Y,N)\hookrightarrow M_{\mathrm{B}}(X,N)\) be the closed immersion induced by \(\mu\), which is a morphism of affine \(\mathbb{Q}\)-schemes of finite type. Then by Definition A.1, \(j(\mathfrak{C})\) is absolutely constructible in the sense of Definition 1.17. Since \(\mathfrak{C}\) is defined over \(\mathbb{Q}\), so is \(j(\mathfrak{C})\). We shall use the notations in Theorem 3.20. Let \(\widetilde{X}_{H}\) be the covering associated with the subgroup \(H:=\cap_{\varrho}\ker\varrho\) of \(\pi_{1}(X)\) where \(\varrho:\pi_{1}(X)\to\mathrm{GL}_{N}(\mathbb{C})\) ranges over all reductive representations such that \([\varrho]\in j(\mathfrak{C})(\mathbb{C})\). In other words, \(H:=\cap_{\tau}\ker\mu^{*}\tau\) where \(\tau:\pi_{1}(Y)\to\mathrm{GL}_{N}(\mathbb{C})\) ranges over all reductive representations such that \([\tau]\in\mathfrak{C}(\mathbb{C})\). Denote by \(H_{0}:=\cap_{\tau}\ker\tau\) where \(\tau:\pi_{1}(Y)\to\mathrm{GL}_{N}(\mathbb{C})\) ranges over all reductive representations such that \([\tau]\in\mathfrak{C}(\mathbb{C})\). Therefore, \(H=(\mu_{*})^{-1}(H_{0})\), where \(\mu_{*}:\pi_{1}(X)\to\pi_{1}(Y)\) is a surjective homomorphism as \(Y\) is normal. Therefore, the natural homomorphism \(\pi_{1}(X)/H\to\pi_{1}(Y)/H_{0}\) is an isomorphism. Then \(\widetilde{X}_{H}=\widetilde{X}/H\) and \(\widetilde{Y}_{\mathfrak{C}}:=\widetilde{Y}/H_{0}\), where \(\widetilde{X}\) (resp. \(\widetilde{Y}\)) is the universal covering of \(X\) (resp. \(Y\)). It induces a lift \(p:\widetilde{X}_{H}\to\widetilde{Y}_{\mathfrak{C}}\) such that
\[\begin{CD}\widetilde{X}_{H}@>{\pi_{H}}>{}>X\\ @V{}V{p}V@V{\mu}V{\mu}V\\ \widetilde{Y}_{\mathfrak{C}}@>{\pi}>{Y}V\end{CD}\]
**Claim A.4**.: \(p:\widetilde{X}_{H}\to\widetilde{Y}_{\mathfrak{C}}\) _is a proper surjective holomorphic fibration with connected fibers._
Proof.: Note that \(\mathrm{Aut}(\widetilde{X}_{H}/X)=\pi_{1}(X)/H\simeq\pi_{1}(Y)/H_{0}=\mathrm{ Aut}(\widetilde{Y}_{\mathfrak{C}}/Y)\). Therefore, \(\widetilde{X}_{H}\) is the base change \(\widetilde{Y}_{\mathfrak{C}}\times_{Y}X\). Note that each fiber of \(\mu\) is connected as \(Y\) is normal. It follows that each fiber of \(p\) is connected. The claim is proved.
By Theorem 3.20, we know that there exists a proper surjective holomorphic fibration \(\mathrm{sh}_{H}:\widetilde{X}_{H}\to\widetilde{S}_{H}\) such that \(\widetilde{S}_{H}\) is a Stein space. Therefore, for each connected compact subvariety \(Z\subset\widetilde{X}_{H}\), \(\mathrm{sh}_{H}(Z)\) is a point. By Claim A.4, it follows that each fiber of \(p\) is compact and connected, and thus is contracted by \(\mathrm{sh}_{H}\). Therefore, \(\mathrm{sh}_{H}\) factors through a proper surjective fibration \(f:\widetilde{Y}_{\mathfrak{C}}\to\widetilde{S}_{H}\):
\[\begin{CD}\widetilde{X}_{H}@>{p}>{}>\widetilde{Y}_{\mathfrak{C}}@>{f}>{}>\widetilde{S}_{H}\end{CD}\]
Therefore, \(f\) is a proper surjective holomorphic fibration over a Stein space. By the Cartan-Remmert theorem, \(\widetilde{Y}_{\mathfrak{C}}\) is holomorphically convex.
If we define \(\mathfrak{C}\) as \(M_{\mathrm{B}}(Y,N)\), then according to Proposition A.2, \(\mathfrak{C}\) is also absolutely constructible. As a result, the last two claims can be deduced. Thus, the theorem is proven.
**Theorem A.5**.: _Let \(Y\) be a projective normal variety. Let \(\mathfrak{C}\) be an absolutely constructible subset of \(M_{\mathrm{B}}(Y,N)(\mathbb{C})\), defined over \(\mathbb{Q}\) (e.g. \(\mathfrak{C}=M_{\mathrm{B}}(Y,N)\)). Let \(\mathfrak{C}(\mathbb{C})\) be large in the sense that for any closed positive dimensional subvariety \(Z\) of \(Y\), there exists a reductive representation \(\varrho:\pi_{1}(Y)\to\mathrm{GL}_{N}(\mathbb{C})\) such that \([\varrho]\in\mathfrak{C}(\mathbb{C})\) and \(\varrho(\mathrm{Im}[\pi_{1}(Z^{\mathrm{norm}})\to\pi_{1}(Y)])\) is infinite. Then all intermediate Galois coverings of \(Y\) between \(\widetilde{Y}\) and \(\widetilde{Y}_{\mathfrak{C}}\) are Stein spaces. Here \(\widetilde{Y}\) denotes the universal covering of \(Y\)._
Proof.: Let \(\mu:X\to Y\) be any desingularization. In the following, we will use the same notations as in the proof of Theorem A.3 without explicitly recalling their definitions. Recall that we have constructed three proper surjective holomorphic fibrations \(p\), \(f\), and \(\mathrm{sh}_{H}\), which fit into a commutative diagram with \(\mathrm{sh}_{H}=f\circ p\).
**Claim A.6**.: \(f:\widetilde{Y}_{\mathfrak{C}}\to\widetilde{S}_{H}\) _is a biholomorphism._
The proof follows a similar argument to that of Claim 4.27. For the sake of completeness, we will provide it here.
Proof of Claim A.6.: As each fiber of \(f\) is compact and connected, it suffices to prove that there are no compact positive dimensional subvarieties \(Z\) of \(\widetilde{Y}_{\mathfrak{C}}\) such that \(f(Z)\) is a point. Let us assume, for the sake of contradiction, that such a \(Z\) exists. Consider \(W:=\pi(Z)\) which is a compact positive-dimensional irreducible subvariety of \(Y\). Therefore, \(\mathrm{Im}[\pi_{1}(Z^{\mathrm{norm}})\to\pi_{1}(W^{\mathrm{norm}})]\) is a finite index subgroup of \(\pi_{1}(W^{\mathrm{norm}})\). By the definition of \(\widetilde{Y}_{\mathfrak{C}}\), for any reductive \(\varrho:\pi_{1}(Y)\to\mathrm{GL}_{N}(\mathbb{C})\) with \([\varrho]\in\mathfrak{C}(\mathbb{C})\), we have \(\varrho(\mathrm{Im}[\pi_{1}(Z^{\mathrm{norm}})\to\pi_{1}(Y)])=\{1\}\). Therefore, \(\varrho(\mathrm{Im}[\pi_{1}(W^{\mathrm{norm}})\to\pi_{1}(Y)])\) is finite. This contradicts our assumption that \(\mathfrak{C}\) is large. Hence, \(f\) is a one-to-one proper holomorphic map of complex normal spaces. Consequently, it is biholomorphic.
The rest of the proof is same as in Theorem 4.26. By the proof of Theorem 4.25, there exist
* a topological Galois unramified covering \(q:\widetilde{S}_{H}\to W\), where \(W\) is a projective normal variety;
* a positive closed \((1,1)\)-current with continuous potential \(T_{0}\) over \(W\) such that \(\{T_{0}\}\) is Kahler;
* a continuous semi-positive plurisubharmonic function \(\phi\) on \(\widetilde{S}_{H}\) such that we have (A.1) \[\mathrm{dd}^{\mathrm{c}}\phi\geq q^{*}T_{0}.\]
By Claim A.6, \(\widetilde{Y}_{\mathfrak{C}}\) can be identified with \(\widetilde{S}_{H}\). Let \(p:\widetilde{Y}^{\prime}\to\widetilde{Y}_{\mathfrak{C}}\) be an intermediate Galois covering of \(Y\) between \(\widetilde{Y}\) and \(\widetilde{Y}_{\mathfrak{C}}\). By (A.1) we have (A.2) \[\mathrm{dd}^{\mathrm{c}}p^{*}\phi\geq(q\circ p)^{*}T_{0}.\]
We apply Proposition 1.14 to conclude that \(\widetilde{Y}^{\prime}\) is Stein.
|
2308.08489 | Metriplectic Heavy Top: An Example of Geometrical Dissipation | Recently, Morrison and Updike showed that many dissipative systems are
naturally described as possessing a Riemann curvature-like bracket, which
similar to the Poisson bracket, generates the dissipative equations of motion
once suitable generators are chosen. In this paper, we use geometry to
construct and explore the dynamics of these new brackets. Specifically, we
consider the dynamics of a heavy top with dissipation imposed by a Euclidean
contravariant curvature. We find that the equations of motion, despite their
rather formal motivation, naturally generalize the energy-conserving
dissipation considered by Matterasi and Morrison. In particular, with suitable
initial conditions, we find that the geometrically motivated equations of
motion cause the top to relax to rotation about a principal axis. | Michael Updike | 2023-06-15T20:08:41Z | http://arxiv.org/abs/2308.08489v1 | # Metriplectic Heavy Top: An Example of Geometrical Dissipation
###### Abstract
Recently, Morrison and Updike [10] showed that many dissipative systems are naturally described as possessing a Riemann curvature-like bracket, which, similar to the Poisson bracket, generates the dissipative equations of motion once suitable generators are chosen. In this paper, we use geometry to construct and explore the dynamics of these new brackets. Specifically, we consider the dynamics of a heavy top with dissipation imposed by a Euclidean contravariant curvature. We find that the equations of motion, despite their rather formal motivation, naturally generalize the energy-conserving dissipation considered by Matterasi and Morrison [4]. In particular, with suitable initial conditions, we find that the geometrically motivated equations of motion cause the top to relax to rotation about a principal axis.
## Definitions and Conventions
All repeated indices are assumed to be summed over unless otherwise specified. We denote the phase space manifold of a system \(\mathcal{Z}\). We use \(z^{i}\) to denote the coordinates of \(\mathcal{Z}\). The symbols \(f,g,h,F,G,K,\) and \(N\) are always used to represent smooth (\(C^{\infty}(\mathcal{Z})\)) functions. The tangent bundle of \(\mathcal{Z}\) is denoted \(T\mathcal{Z}\), and the cotangent bundle \(T^{*}\mathcal{Z}\).
Here, \(\alpha,\beta\), and \(\gamma\) always represent one-forms. The space of one-forms is denoted \(\Omega^{1}(\mathcal{Z})\). The symbol \(\mathbf{d}\) represents the exterior derivative, which acts on a function \(f(z)\) as
\[\mathbf{d}f=\frac{\partial f}{\partial z^{i}}\mathbf{d}z^{i}.\]
The exterior derivative of a coordinate function is strictly formal. In coordinates, a general one-form may be written
\[\alpha=f_{i}(z)\;\mathbf{d}z^{i}.\]
A one-form \(\alpha\) is said to be exact if \(\alpha=\mathbf{d}f\) for some \(f\).
Given a tangent vector \(v\), we use \(v[f]\) to mean \(v\) acting on \(f\). We use \(\left\langle\beta,v\right\rangle\) to denote the canonical pairing between a vector and a one-form.
We use the word bracket to mean a smooth map from some number of smooth functions to a single smooth function. We demand that a bracket is both linear and a derivation in each of its arguments. In particular, we use \(\left\{\cdot,\cdot\right\}\colon\;C^{\infty}(\mathcal{Z})\times C^{\infty}( \mathcal{Z})\to C^{\infty}(\mathcal{Z})\) to denote the Poisson bracket. The Poisson bracket gives a bivector field
\[J^{ij}=J(\mathbf{d}z^{i},\mathbf{d}z^{j})\colon\;=\left\{z^{i},z^{j}\right\}\]
and also an anchor map \(J^{\#}:T^{*}\mathcal{Z}\to T\mathcal{Z}\) defined implicitly by
\[\left\langle\beta,J^{\#}(\alpha)\right\rangle\colon\;=J(\alpha,\beta).\]
We use \(D\) to represent a contravariant connection, also called a contravariant derivative, which gives the "derivative" of a-one form with respect to another one-form. A contravariant derivative is similar to but not the same as a covariant derivative.
## Introduction
Given a degenerate Poisson bracket \(\{\cdot,\cdot\}\), there exist distinguished functions \(S\) called Casimirs that Poisson commute with all functions \(f\)
\[\{f,S\}=0. \tag{1}\]
Casimirs represent, potentially, the entropy of a system. Given that the dynamics of **any** Hamiltonian system with the Poisson bracket \(\{\cdot,\cdot\}\) is constrained to surfaces of constant \(S\), a dissipative structure is required to create dynamics that respect the second law of thermodynamics. First developed by Morrison and others, metriplectic dynamics is a systematic way to add energy-conserving dissipation to an otherwise Hamiltonian system (cf. [7]).
Suppose we have a Hamiltonian system with Hamiltonian \(H\) and a Casimir \(S\). We consider a symmetric bracket \((\cdot,\cdot)\) such that, for all functions \(f\),
\[(f,H)=0. \tag{2}\]
The metriplectic equations of motion that both preserve energy and increase \(S\) are
\[\dot{f}=\{f,H\}+(f,S). \tag{3}\]
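Along the flow of Eq. (3), the two defining features of metriplectic dynamics follow in one line from antisymmetry of the Poisson bracket together with Eqs. (1) and (2): energy is conserved exactly, while \(S\) grows monotonically whenever the symmetric bracket is positive semi-definite,

\[\dot{H}=\{H,H\}+(H,S)=0,\qquad\dot{S}=\{S,H\}+(S,S)=(S,S)\geq 0.\]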
In this paper, we first use the framework of both Riemannian and Poisson geometry to rephrase metriplectic dynamics as a geometrical theory. We then use this new approach to metriplectic dynamics to recreate a bracket first introduced by [6] and used in [4] for control of a rigid body. Afterward, we naturally generalize the construction to the heavy top system, constructing a family of dissipative theories which we collectively call the "metriplectic heavy top." Finally, we simulate the dynamics of a handful of these theories, finding asymptotic relaxation of the heavy top.
## Riemann-Poisson Geometry
Suppose we have a system described on some Poisson manifold \((\mathcal{Z},\{\cdot,\cdot\})\) with coordinates \(z^{i}\). The Poisson bracket on functions is naturally extended to \(T^{*}\mathcal{Z}\) by the Koszul bracket \([\cdot,\cdot]\), which for exact forms reads [2][1]
\[[\mathbf{d}z^{i},\mathbf{d}z^{j}]\colon\,=\mathbf{d}\{z^{i},z^{j}\}. \tag{4}\]
The Koszul bracket gives us a natural Lie bracket on forms. Given a (pseudo-)metric
\[g^{ij}=g(\mathbf{d}z^{i},\mathbf{d}z^{j}) \tag{5}\]
we can define a contravariant Levi-Civita connection \(D\) on \(\mathcal{Z}\) via the formula
\[2g(D_{\alpha}\beta,\gamma)= J^{\#}(\alpha)[g(\beta,\gamma)]-J^{\#}(\gamma)[g(\alpha,\beta)] \tag{6}\] \[+J^{\#}(\beta)[g(\gamma,\alpha)]+g([\alpha,\beta],\gamma)-g([ \beta,\gamma],\alpha)+g([\gamma,\alpha],\beta)\]
where \(\alpha,\beta,\gamma\in\Omega^{1}(M)\) and \(J^{\#}:T^{*}\mathcal{Z}\to T\mathcal{Z}\) is the anchor map defined by
\[\left\langle\beta,J^{\#}(\alpha)\right\rangle=J(\alpha,\beta). \tag{7}\]
In coordinate form
\[J^{\#}(\mathbf{d}f)[g]=\frac{\partial f}{\partial z^{i}}J^{ij}\frac{\partial g }{\partial z^{j}}. \tag{8}\]
The Koszul bracket on general one-forms can be obtained using the formula
\[[\alpha,f(z)\beta]=f(z)[\alpha,\beta]+J^{\#}(\alpha)[f(z)]\beta. \tag{9}\]
It should be noted \(D\) is not a covariant derivative, which is defined with tangent vectors. Even so, it can be shown \(D\) satisfies similar linearity properties
\[D_{\alpha+\beta}\gamma =D_{\alpha}\gamma+D_{\beta}\gamma \tag{10}\] \[D_{f\alpha}\gamma =fD_{\alpha}\gamma\] \[D_{\alpha}(\beta+\gamma) =D_{\alpha}\beta+D_{\alpha}\gamma\] \[D_{\alpha}(f\gamma) =fD_{\alpha}\gamma+J^{\#}(\alpha)[f]\gamma.\]
Letting \(\alpha=\mathbf{d}f\), \(\beta=\mathbf{d}h\), and \(\gamma=\mathbf{d}z^{\delta}\), we can write the contravariant derivative as
\[2g^{\delta a}(D_{\mathbf{d}f}\mathbf{d}h)_{a} =\frac{\partial f}{\partial z^{i}}J^{ij}\frac{\partial}{\partial z^{j}}\left[g^{k\delta}\frac{\partial h}{\partial z^{k}}\right]+J^{i\delta}\frac{\partial}{\partial z^{i}}\left[g^{kl}\frac{\partial f}{\partial z^{k}}\frac{\partial h}{\partial z^{l}}\right] \tag{11}\] \[+\frac{\partial h}{\partial z^{i}}J^{ij}\frac{\partial}{\partial z^{j}}\left[g^{k\delta}\frac{\partial f}{\partial z^{k}}\right]+g^{k\delta}\frac{\partial}{\partial z^{k}}\left[J^{ij}\frac{\partial f}{\partial z^{i}}\frac{\partial h}{\partial z^{j}}\right]\] \[\quad-g^{kl}\frac{\partial}{\partial z^{k}}\left[J^{i\delta}\frac{\partial h}{\partial z^{i}}\right]\frac{\partial f}{\partial z^{l}}-g^{kl}\frac{\partial}{\partial z^{k}}\left[J^{j\delta}\frac{\partial f}{\partial z^{j}}\right]\frac{\partial h}{\partial z^{l}}.\]
It is convenient to introduce
\[\Gamma^{ij}_{l}\colon=(D_{\mathbf{d}z^{i}}\mathbf{d}z^{j})_{l}=\frac{1}{2}g_{kl}\left[J^{ia}\frac{\partial g^{jk}}{\partial z^{a}}-J^{ka}\frac{\partial g^{ij}}{\partial z^{a}}+J^{ja}\frac{\partial g^{ik}}{\partial z^{a}}\right] \tag{12}\] \[+\frac{1}{2}g_{kl}\left[g^{ka}\frac{\partial J^{ij}}{\partial z^{a}}-g^{ai}\frac{\partial J^{jk}}{\partial z^{a}}-g^{aj}\frac{\partial J^{ik}}{\partial z^{a}}\right].\]
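As a quick symbolic sanity check on Eq. (12) (ours, not part of the derivation), the following sketch evaluates it for the \(\mathfrak{so}(3)\) Lie-Poisson tensor \(J^{ij}=-\epsilon^{ijk}L^{k}\) with the Euclidean metric \(g^{ij}=\delta^{ij}\); the metric is constant, so only the \(\partial J\) terms survive, and the result reproduces \(\Gamma^{ij}_{k}=-\epsilon_{ijk}/2\), anticipating Eq. (25) below.

```python
import sympy as sp

L = sp.symbols('L1:4')                      # phase-space coordinates (L^1, L^2, L^3)
eps = lambda i, j, k: sp.LeviCivita(i + 1, j + 1, k + 1)

# Poisson tensor J^{ij} = {L^i, L^j} = -eps_{ijk} L^k
J = sp.Matrix(3, 3, lambda i, j: -sum(eps(i, j, k) * L[k] for k in range(3)))

# Eq. (12) with g^{ij} = delta^{ij}: all dg terms vanish, leaving
# Gamma^{ij}_l = (1/2) [ d_l J^{ij} - d_i J^{jl} - d_j J^{il} ]
Gamma = [[[sp.Rational(1, 2) * (sp.diff(J[i, j], L[l])
                                - sp.diff(J[j, l], L[i])
                                - sp.diff(J[i, l], L[j]))
           for l in range(3)] for j in range(3)] for i in range(3)]

# agrees with Gamma^{ij}_k = C^{ij}_k / 2 = -eps_{ijk} / 2
assert all(Gamma[i][j][k] == -sp.Rational(1, 2) * eps(i, j, k)
           for i in range(3) for j in range(3) for k in range(3))
```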
\(D\) is the unique connection that is both torsion-free
\[D_{\alpha}\beta-D_{\beta}\alpha=[\alpha,\beta] \tag{13}\]
and metric compatible
\[J^{\#}(\alpha)[g(\beta,\gamma)]=g(D_{\alpha}\beta,\gamma)+g(\beta,D_{\alpha} \gamma). \tag{14}\]
In coordinates, these conditions read
\[\Gamma^{ij}_{k}-\Gamma^{ji}_{k}=\frac{\partial J^{ij}}{\partial z^{k}} \tag{15}\]
and
\[J^{ia}\frac{\partial g^{jk}}{\partial z^{a}}=g^{ka}\Gamma^{ij}_{a}+g^{ja}\Gamma^{ik}_{a}. \tag{16}\]
We define the contravariant Riemann curvature tensor in a way formally reminiscent of the usual curvature tensor
\[R(\alpha,\beta)\gamma=D_{\alpha}D_{\beta}\gamma-D_{\beta}D_{\alpha}\gamma-D_{[ \alpha,\beta]}\gamma. \tag{17}\]
This tensor gives us a natural 4-bracket \((\cdot,\cdot;\cdot,\cdot)\colon C^{\infty}(\mathcal{Z})\times C^{\infty}( \mathcal{Z})\times C^{\infty}(\mathcal{Z})\times C^{\infty}(\mathcal{Z}) \to C^{\infty}(\mathcal{Z})\) defined by
\[(F,K;G,N)\colon=g(R(\mathbf{d}F,\mathbf{d}K)\mathbf{d}G,\mathbf{d}N) \tag{18}\]
or in coordinates, noting the raised index,
\[(F,K;G,N)=R^{ijkl}\frac{\partial F}{\partial z^{i}}\frac{\partial K}{\partial z ^{j}}\frac{\partial G}{\partial z^{k}}\frac{\partial N}{\partial z^{l}}. \tag{19}\]
This bracket inherits the following symmetries
\[(F,K;G,N)=-(K,F;G,N)=-(F,K;N,G)=(G,N;F,K) \tag{20}\]
in addition to the symmetries obtained by the first and second Bianchi identity (see e.g.[11]).
Given a Hamiltonian \(H\), we can define a bracket analogous to the operator describing geodesic deviation
\[(F,G)\colon\ =(F,H;G,H). \tag{21}\]
Notice that, by the symmetries of the 4-bracket, \((F,G)\) is both symmetric and has \(H\) in its kernel, precisely the conditions required of a metriplectic bracket.
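Explicitly, both properties follow in one line from Eq. (20):

\[(F,H)=(F,H;H,H)=-(F,H;H,H)=0,\qquad(F,G)=(F,H;G,H)=(G,H;F,H)=(G,F).\]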
## Free Rigid Body Bracket
Before we use the formalism of Riemann-Poisson geometry to add dissipation to the heavy top, we first attempt to understand the geometry of the simpler free rigid body (FRB). Using angular momentum as phase space coordinates (\(z^{i}=L^{i}\), \(\mathcal{Z}=\mathbb{R}^{3}\)), the Poisson bracket for the FRB system realizes the \(\mathfrak{so}(3)\) algebra
\[\{L^{i},L^{j}\}=-\epsilon^{ijk}L^{k}. \tag{22}\]
The Hamiltonian for this system is
\[H=\frac{1}{2}\left(\frac{(L^{1})^{2}}{I_{1}}+\frac{(L^{2})^{2}}{I_{2}}+\frac{(L^{3})^{2}}{I_{3}}\right). \tag{23}\]
To introduce dissipation to this system, we first consider a "dissipative metric" for the system. The simplest possible choice is the Euclidean (or, equivalently, the Cartan-Killing) metric
\[g^{ij}=\delta^{ij} \tag{24}\]
For a general Lie-Poisson bracket corresponding to a semi-simple, compact algebra with structure constants \(C^{ij}_{k}\), the contravariant derivative takes a very simple form when we use the Cartan-Killing form as the metric
\[\Gamma^{ij}_{k}=\frac{1}{2}C^{ij}_{k}. \tag{25}\]
The totally contravariant curvature tensor is also quite simple
\[R^{ijk}_{l}=\frac{1}{4}C^{jk}_{a}C^{ia}_{l}-\frac{1}{4}C^{ik}_{a}C^{ja}_{l}+\frac{1}{2}C^{ij}_{a}C^{ka}_{l}. \tag{26}\]
For the \(\mathfrak{so}(3)\) case at hand, the structure constants are given by \(C^{ij}_{k}=-\epsilon_{ijk}\) and our curvature tensor is given by
\[R^{ijkl}=\frac{1}{4}(\delta^{ik}\delta^{jl}-\delta^{il}\delta^{jk}). \tag{27}\]
The metriplectic bracket is, up to a constant,
\[(F,G)=(\omega^{2}\delta_{ij}-\omega_{i}\omega_{j})\frac{\partial F}{\partial L ^{i}}\frac{\partial G}{\partial L^{j}}. \tag{28}\]
Amazingly, this is the bracket first constructed in [6] and later considered in Matterasi and Morrison [4] as a way to model an energy-conserving torque driving the body to rotate about a principal axis.
## Heavy Top Bracket
Inspired by the free-rigid body, we consider the heavy top system describing a rigid body in a constant gravitational field, which is given by the body frame vector \(\mathbf{\Gamma}\). The Poisson bracket is [13][12]
\[\{F,G\}=-\epsilon_{ijk}L^{i}\frac{\partial F}{\partial L^{j}}\frac{\partial G}{ \partial L^{k}}-\epsilon_{ijk}\Gamma^{i}\left(\frac{\partial F}{\partial L^{j }}\frac{\partial G}{\partial\Gamma^{k}}-\frac{\partial G}{\partial L^{j}} \frac{\partial F}{\partial\Gamma^{k}}\right). \tag{29}\]
In analogy with the free rigid body system from earlier, it is only natural (and as we will see, desirable) to assume the heavy top also has a Euclidean metric
\[g^{ij}=\delta^{ij}. \tag{30}\]
Organizing the phase space coordinates \(\mathbf{z}^{i}=(\mathbf{L},\mathbf{\Gamma})^{i}\), the Lie-Poisson structure of the heavy top bracket allows us to write the bracket as
\[\{z^{i},z^{j}\}=C_{k}^{ij}z^{k} \tag{31}\]
where \(C_{k}^{ij}\) are the structure constants of the semi-direct product algebra \(\mathfrak{so}(3)\ltimes\mathbb{R}^{3}\).
For any Lie-Poisson bracket with the Euclidean metric, the connection coefficients are (a similar formula can be found in [5])
\[\Gamma_{k}^{ij}=\frac{1}{2}(C_{k}^{ij}-C_{i}^{jk}+C_{j}^{ki}) \tag{32}\]
and the curvature tensor is
\[R_{l}^{ijk}= \frac{1}{4}(C_{a}^{jk}-C_{j}^{ka}+C_{k}^{aj})(C_{l}^{ia}-C_{i}^{ al}+C_{a}^{li}) \tag{33}\] \[-\frac{1}{4}(C_{a}^{ik}-C_{i}^{ka}+C_{k}^{ai})(C_{l}^{ja}-C_{j}^{ al}+C_{a}^{lj})-\frac{1}{2}C_{a}^{ij}(C_{l}^{ak}-C_{a}^{kl}+C_{k}^{la}).\]
In general, a six-dimensional system like the heavy top will have 105 independent components in the curvature tensor, and over a thousand dependent terms. Fortunately, calculations like these are easily done computationally. For the heavy top, we find that the curvature tensor is quite sparse with the nonzero terms being
\[\frac{1}{4} =R^{1212}=R^{1313}=R^{2121}=R^{2323}=R^{3131}=R^{3232} \tag{34}\] \[=-R^{2112}=-R^{3113}=-R^{1221}=-R^{3223}=-R^{1331}=-R^{2332}.\]
This expression is even simpler than it appears. Every term involving a \(\mathbf{\Gamma}\) index vanishes. Furthermore, for \(i,j,k,l\in\{1,2,3\}\)
\[R^{ijkl}=\frac{1}{4}(\delta^{ik}\delta^{jl}-\delta^{il}\delta^{jk}). \tag{35}\]
That is, restricted to \(\mathbf{L}\) indices only, the curvature tensor is exactly that of the free rigid body.
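The following minimal numpy sketch (ours, not from a reference) makes the computation concrete: it builds the structure constants of \(\mathfrak{so}(3)\ltimes\mathbb{R}^{3}\), forms the connection coefficients of Eq. (32), assembles the curvature via Eq. (33), and confirms that every component carrying a \(\mathbf{\Gamma}\) index vanishes while the \(\mathbf{L}\) block is proportional to \(\delta^{ik}\delta^{jl}-\delta^{il}\delta^{jk}\) with magnitude \(1/4\); the overall sign of the proportionality depends on one's sign conventions.

```python
import numpy as np

# Levi-Civita symbol
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

# Structure constants C[i, j, k] = C^{ij}_k of so(3) |x R^3, ordering
# z = (L^1, L^2, L^3, Gamma^1, Gamma^2, Gamma^3)
C = np.zeros((6, 6, 6))
C[:3, :3, :3] = -eps                                      # {L^i, L^j} = -eps_{ijk} L^k
C[:3, 3:, 3:] = -eps                                      # {L^i, G^j} = -eps_{ijk} G^k
C[3:, :3, 3:] = -np.transpose(C[:3, 3:, 3:], (1, 0, 2))   # antisymmetry in i <-> j

# Connection coefficients of Eq. (32): G[i, j, k] = (C^{ij}_k - C_i^{jk} + C_j^{ki}) / 2
G = 0.5 * (C - np.transpose(C, (2, 0, 1)) + np.transpose(C, (1, 2, 0)))

# Curvature of Eq. (33); with constant coefficients this is
# R^{ijk}_l = G^{jk}_m G^{im}_l - G^{ik}_m G^{jm}_l - C^{ij}_a G^{ak}_l
R = (np.einsum('jkm,iml->ijkl', G, G)
     - np.einsum('ikm,jml->ijkl', G, G)
     - np.einsum('ija,akl->ijkl', C, G))

# every component with a Gamma index vanishes, as claimed around Eq. (34)
assert all(np.allclose(np.moveaxis(R, ax, 0)[3:], 0.0) for ax in range(4))

# the pure-L block has the form of Eq. (35), up to an overall sign convention
d = np.eye(3)
pattern = np.einsum('ik,jl->ijkl', d, d) - np.einsum('il,jk->ijkl', d, d)
scale = R[1, 0, 1, 0] / pattern[1, 0, 1, 0]
assert np.allclose(R[:3, :3, :3, :3], scale * pattern)
print(abs(scale))   # 0.25
```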
From here on, we assume the top is symmetric (\(I_{1}=I_{2}\)). The Hamiltonian for the symmetric heavy top system is given by
\[H=\frac{1}{2}\left(\frac{(L^{1})^{2}}{I_{1}}+\frac{(L^{2})^{2}}{I_{1}}+\frac{(L^{3})^{2}}{I_{3}}\right)+\xi\Gamma^{3}. \tag{36}\]
Plugging this into the curvature tensor, we find that the heavy top metriplectic bracket is precisely the same bracket considered by Matterasi and Morrison to describe the FRB
\[(F,G)=(\omega^{2}\delta^{ij}-\omega^{i}\omega^{j})\frac{\partial F}{\partial L ^{i}}\frac{\partial G}{\partial L^{j}}. \tag{37}\]
Unlike the free rigid body, the total angular momentum \(L^{2}\) is no longer a Casimir invariant. Rather, there are two new Casimirs up to composition with an analytic function [13]
\[\vec{\Gamma}\cdot\vec{L} \tag{38}\]
and
\[\vec{\Gamma}^{2}. \tag{39}\]
The latter choice of Casimir trivializes the metriplectic dynamics, so the only suitable choice of generating function for the dissipation has the form
\[S=C(\vec{\Gamma}\cdot\vec{L}) \tag{40}\]
where \(C\) is analytic.
The dissipative equations of motion are
\[\dot{f}=\{f,H\}+(f,S)= \tag{41}\] \[-\epsilon^{ijk}L^{i}\frac{\partial f}{\partial L^{j}}\omega^{k}-\Gamma^{i}\left(\epsilon^{ij3}\frac{\partial f}{\partial L^{j}}\xi-\epsilon^{ijk}\omega^{j}\frac{\partial f}{\partial\Gamma^{k}}\right)+C^{\prime}(\vec{\Gamma}\cdot\vec{L})(\omega^{2}\delta^{ij}-\omega^{i}\omega^{j})\frac{\partial f}{\partial L^{j}}\Gamma^{i}.\]
We can expand this equation as a system of 6 ODEs
\[\dot{L}^{1} =-L^{2}L^{3}\left(\frac{1}{I_{1}}-\frac{1}{I_{3}}\right)+\xi \Gamma^{2}+C^{\prime}(\mathbf{\Gamma}\cdot\mathbf{L})\left(\left[\frac{(L^{2} )^{2}}{I_{1}^{2}}+\frac{(L^{3})^{2}}{I_{3}^{2}}\right]\Gamma^{1}-\frac{L^{1}L ^{2}}{I_{1}^{2}}\Gamma^{2}-\frac{L^{1}L^{3}}{I_{1}I_{3}}\Gamma^{3}\right) \tag{42}\] \[\dot{L}^{2} =L^{1}L^{3}\left(\frac{1}{I_{1}}-\frac{1}{I_{3}}\right)-\xi \Gamma^{1}+C^{\prime}(\mathbf{\Gamma}\cdot\mathbf{L})\left(-\frac{L^{1}L^{2}}{ I_{1}^{2}}\Gamma^{1}+\left[\frac{(L^{1})^{2}}{I_{1}^{2}}+\frac{(L^{3})^{2}}{I_{3}^{ 2}}\right]\Gamma^{2}-\frac{L^{2}L^{3}}{I_{1}I_{3}}\Gamma^{3}\right)\] \[\dot{L}^{3} =C^{\prime}(\mathbf{\Gamma}\cdot\mathbf{L})\left(-\frac{L^{1}L^{ 3}}{I_{1}I_{3}}\Gamma^{1}-\frac{L^{2}L^{3}}{I_{1}I_{3}}\Gamma^{2}+\left[\frac {(L^{2})^{2}}{I_{1}^{2}}+\frac{(L^{1})^{2}}{I_{1}^{2}}\right]\Gamma^{3}\right)\] \[\dot{\Gamma}^{1} =\Gamma^{2}\frac{L^{3}}{I_{3}}-\Gamma^{3}\frac{L^{2}}{I_{1}}\] \[\dot{\Gamma}^{2} =\Gamma^{3}\frac{L^{1}}{I_{1}}-\Gamma^{1}\frac{L^{3}}{I_{3}}\] \[\dot{\Gamma}^{3} =\Gamma^{1}\frac{L^{2}}{I_{1}}-\Gamma^{2}\frac{L^{1}}{I_{1}}.\]
Notice that the dissipation does not affect the equation of motion for \(\mathbf{\Gamma}\), as we should expect from any physically realistic system. In fact, had we naively used a bracket projecting out the Hamiltonian, this would no longer be true. It is worthwhile to note that \(S\) is increased by the dynamics since
\[\dot{S}=(C^{\prime}(\vec{\Gamma}\cdot\vec{L}))^{2}\Gamma^{2}\omega^{2}\sin^{2}\theta\geq 0 \tag{43}\]
where \(\theta\) measures the angle between \(\vec{\omega}\) and \(\vec{\Gamma}\).
### Dynamics of The Heavy Top
The first analytic function we try is \(C(x)=\lambda x\). The equations of motion are
\[\dot{L}^{1} =-L^{2}L^{3}\left(\frac{1}{I_{1}}-\frac{1}{I_{3}}\right)+\xi\Gamma ^{2}+\lambda\left(\left[\frac{(L^{2})^{2}}{I_{1}^{2}}+\frac{(L^{3})^{2}}{I_{3} ^{2}}\right]\Gamma^{1}-\frac{L^{1}L^{2}}{I_{1}^{2}}\Gamma^{2}-\frac{L^{1}L^{3} }{I_{1}I_{3}}\Gamma^{3}\right) \tag{44}\] \[\dot{L}^{2} =L^{1}L^{3}\left(\frac{1}{I_{1}}-\frac{1}{I_{3}}\right)-\xi\Gamma ^{1}+\lambda\left(-\frac{L^{1}L^{2}}{I_{1}^{2}}\Gamma^{1}+\left[\frac{(L^{1})^ {2}}{I_{1}^{2}}+\frac{(L^{3})^{2}}{I_{3}^{2}}\right]\Gamma^{2}-\frac{L^{2}L^{ 3}}{I_{1}I_{3}}\Gamma^{3}\right)\] \[\dot{L}^{3} =\lambda\left(-\frac{L^{1}L^{3}}{I_{1}I_{3}}\Gamma^{1}-\frac{L^{ 2}L^{3}}{I_{1}I_{3}}\Gamma^{2}+\left[\frac{(L^{2})^{2}}{I_{1}^{2}}+\frac{(L^{1 })^{2}}{I_{1}^{2}}\right]\Gamma^{3}\right)\] \[\dot{\Gamma}^{1} =\Gamma^{2}\frac{L^{3}}{I_{3}}-\Gamma^{3}\frac{L^{2}}{I_{1}}\] \[\dot{\Gamma}^{2} =\Gamma^{3}\frac{L^{1}}{I_{1}}-\Gamma^{1}\frac{L^{3}}{I_{3}}\] \[\dot{\Gamma}^{3} =\Gamma^{1}\frac{L^{2}}{I_{1}}-\Gamma^{2}\frac{L^{1}}{I_{1}}.\]
It's not hard to see that the metriplectic heavy top has an equilibrium when \(\Gamma^{1,2}=0\) and \(L^{1,2}=0\). Given an initial configuration, we define \(L_{*}^{3}>0\) and \(\Gamma_{*}^{3}>0\) to be the physically realizable equilibrium consistent with \(\Gamma^{1,2}=0\) and \(L^{1,2}=0\). Linearizing the equations of motion around this equilibrium, we get the equations
\[\dot{\delta L}^{1} =-L_{*}^{3}\left(\frac{1}{I_{1}}-\frac{1}{I_{3}}\right)\delta L^{ 2}+\xi\delta\Gamma^{2}+\lambda\left(\frac{(L_{*}^{3})^{2}}{I_{3}^{2}}\delta \Gamma^{1}-\frac{L_{*}^{3}\Gamma_{*}^{3}}{I_{1}I_{3}}\delta L^{1}\right) \tag{45}\] \[\dot{\delta L}^{2} =L_{*}^{3}\left(\frac{1}{I_{1}}-\frac{1}{I_{3}}\right)\delta L^{ 1}-\xi\delta\Gamma^{1}+\lambda\left(\frac{(L_{*}^{3})^{2}}{I_{3}^{2}}\delta \Gamma^{2}-\frac{L_{*}^{3}\Gamma_{*}^{3}}{I_{1}I_{3}}\delta L^{2}\right)\] \[\dot{\delta L}^{3} =0\] \[\dot{\delta\Gamma}^{1} =\frac{L_{*}^{3}}{I_{3}}\delta\Gamma^{2}-\frac{\Gamma_{*}^{3}}{I_ {1}}\delta L^{2}\] \[\dot{\delta\Gamma}^{2} =\frac{\Gamma_{*}^{3}}{I_{1}}\delta L^{1}-\frac{L_{*}^{3}}{I_{3}} \delta\Gamma^{1}\] \[\dot{\delta\Gamma}^{3} =0.\]
We can express these equations compactly as
\[\delta\dot{\mathbf{z}}=M\delta\mathbf{z}. \tag{46}\]
The general solution for the spectrum of \(M\) isn't particularly enlightening beyond the fact that the eigenvalues are always of the form \((0,0,A,A^{*},B,B^{*})\). Depending on the initial conditions, either (a.) \(Re(A)\) and \(Re(B)\) differ in sign or (b.) both \(Re(A)<0\) and \(Re(B)<0\). The latter case is of particular interest since it implies the system is linearly stable. If we impose that \(I_{3}=2I_{1}\), the spectrum of \(M\) is a lot easier to understand with
\[A =\frac{-\Gamma_{*}^{3}L_{*}^{3}\lambda-\sqrt{-4(I_{1})^{2}(L_{*}^{3 })^{2}+(\Gamma_{*}^{3})^{2}(L_{*}^{3})^{2}\lambda^{2}+16\Gamma_{*}^{3}(I_{1})^ {3}\xi}}{4(I_{1})^{2}}\] \[B =\frac{-\Gamma_{*}^{3}L_{*}^{3}\lambda+\sqrt{-4(I_{1})^{2}(L_{*}^{ 3})^{2}+(\Gamma_{*}^{3})^{2}(L_{*}^{3})^{2}\lambda^{2}+16\Gamma_{*}^{3}(I_{1})^ {3}\xi}}{4(I_{1})^{2}}.\]
When the root is real, the system is linearly stable if and only if
\[(L_{*}^{3})^{2}\geq 4\Gamma_{*}^{3}I_{1}\xi.\]
When the root is complex, the system is always linearly stable, with the dynamics oscillating towards the equilibrium \(L^{1,2}=\Gamma^{1,2}=0\).
Computationally modeling the dynamics in Mathematica, it seems the relaxation behavior of this system can be well understood in terms of its linearization, provided the top does not fall. For example, in arbitrary units, we can let \(I_{1}=1=\frac{1}{2}I_{3}\), \(\lambda=0.1\), and \(\xi=1\). If we start with initial conditions such that \(L_{*}^{3}\approx 5\), \(\Gamma_{*}^{3}\approx 3\) then
\[\text{Eigenvalues}(M)\approx(0,0,-.38+1.76i,-.38-1.76i,-.38+1.76i,-.38-1.76i). \tag{47}\]
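As a numerical cross-check (ours), the matrix \(M\) of Eq. (45) can be assembled directly for these approximate equilibrium values, and its spectrum reproduces the eigenvalues quoted above:

```python
import numpy as np

I1, I3, xi, lam = 1.0, 2.0, 1.0, 0.1
Ls, Gs = 5.0, 3.0                       # approximate L^3_* and Gamma^3_*

a = Ls * (1.0 / I1 - 1.0 / I3)
M = np.zeros((6, 6))                    # ordering (dL1, dL2, dL3, dG1, dG2, dG3)
M[0, 0] = M[1, 1] = -lam * Ls * Gs / (I1 * I3)
M[0, 1], M[0, 3], M[0, 4] = -a, lam * Ls**2 / I3**2, xi
M[1, 0], M[1, 3], M[1, 4] = a, -xi, lam * Ls**2 / I3**2
M[3, 1], M[3, 4] = -Gs / I1, Ls / I3
M[4, 0], M[4, 3] = Gs / I1, -Ls / I3

print(np.round(np.linalg.eigvals(M), 2))   # 0, 0, and -0.38 +/- 1.76i (each twice)
```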
The linear dynamics predict that the metriplectic heavy top solutions will decay towards equilibrium while oscillating. Modeling the full nonlinear dynamics with initial conditions \(\boldsymbol{\Gamma}(t=0)=(1,0,2.8)\) and \(\mathbf{L}(t=0)=(1,0,4.2)\), we see that the behavior of the system qualitatively approximates the linear dynamics, even with large perturbations from equilibrium (figure 1). This is an interesting point, since \(S\) is not a valid Lyapunov function. Likely, this stability comes from a sort of constrained optimization on the surfaces of constant energy.
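The simulations here were done in Mathematica; an equivalent minimal Python sketch of this run (our reimplementation, using the vector form of Eq. (44), \(\dot{\mathbf{L}}=\mathbf{L}\times\boldsymbol{\omega}+\xi\,\boldsymbol{\Gamma}\times\hat{e}_{3}+\lambda\left(\omega^{2}\boldsymbol{\Gamma}-(\boldsymbol{\omega}\cdot\boldsymbol{\Gamma})\boldsymbol{\omega}\right)\) and \(\dot{\boldsymbol{\Gamma}}=\boldsymbol{\Gamma}\times\boldsymbol{\omega}\)) also exhibits exact energy conservation together with monotonic growth of \(S\):

```python
import numpy as np
from scipy.integrate import solve_ivp

I1, I3, xi, lam = 1.0, 2.0, 1.0, 0.1
e3 = np.array([0.0, 0.0, 1.0])

def rhs(t, z):
    L, Gam = z[:3], z[3:]
    w = L / np.array([I1, I1, I3])                   # omega for the symmetric top
    dL = (np.cross(L, w) + xi * np.cross(Gam, e3)    # Hamiltonian part of Eq. (44)
          + lam * ((w @ w) * Gam - (w @ Gam) * w))   # dissipative part, C(x) = lam x
    return np.concatenate([dL, np.cross(Gam, w)])    # dissipation never touches Gamma

z0 = np.array([1.0, 0.0, 4.2, 1.0, 0.0, 2.8])        # L(0) and Gamma(0) from the text
sol = solve_ivp(rhs, (0.0, 40.0), z0, rtol=1e-10, atol=1e-12)

L, Gam = sol.y[:3], sol.y[3:]
H = 0.5 * (L[0]**2 / I1 + L[1]**2 / I1 + L[2]**2 / I3) + xi * Gam[2]
print("energy drift:", np.ptp(H))                    # zero up to solver tolerance
print("S start/end:", (L * Gam).sum(0)[[0, -1]])     # S = Gamma . L increases
```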
If we instead fix \(\lambda=1\) then
\[\text{Eigenvalues}(M)\approx(0,0,-7.03,-7.03,-.46,-.46), \tag{48}\]
which is again stable, but this time we see no oscillatory behavior in the linearized dynamics. Changing our initial conditions to \(\boldsymbol{\Gamma}(t=0)=(.3,0,3)^{T}\) and \(\mathbf{L}(t=0)=(.5,0,5.2)\) so the top doesn't fall, we again observe the linear dynamics is qualitatively similar to the relaxation behavior of the full nonlinear dynamics (figure 2).
Provided we change the phase space from \(\mathbb{R}^{6}\) to \(\mathbb{R}^{6}-S\), where \(S\) is the vanishing set of \(\boldsymbol{\Gamma}\cdot\mathbf{L}\), we can also consider the dynamics of the heavy top with an otherwise singular generating function.
In particular, we explore when \(S=\lambda\log(\mathbf{\Gamma}\cdot\mathbf{L})\). The equations of motion are
\[\dot{L}^{1} =-L^{2}L^{3}\left(\frac{1}{I_{1}}-\frac{1}{I_{3}}\right)+\xi\Gamma ^{2}+\frac{\lambda}{\mathbf{\Gamma}\cdot\mathbf{L}}\left(\left[\frac{(L^{2})^{2 }}{I_{1}^{2}}+\frac{(L^{3})^{2}}{I_{3}^{2}}\right]\Gamma^{1}-\frac{L^{1}L^{2}} {I_{1}^{2}}\Gamma^{2}-\frac{L^{1}L^{3}}{I_{1}I_{3}}\Gamma^{3}\right) \tag{49}\] \[\dot{L}^{2} =L^{1}L^{3}\left(\frac{1}{I_{1}}-\frac{1}{I_{3}}\right)-\xi\Gamma ^{1}+\frac{\lambda}{\mathbf{\Gamma}\cdot\mathbf{L}}\left(-\frac{L^{1}L^{2}}{I_ {1}^{2}}\Gamma^{1}+\left[\frac{(L^{1})^{2}}{I_{1}^{2}}+\frac{(L^{3})^{2}}{I_{ 3}^{2}}\right]\Gamma^{2}-\frac{L^{2}L^{3}}{I_{1}I_{3}}\Gamma^{3}\right)\] \[\dot{L}^{3} =\frac{\lambda}{\mathbf{\Gamma}\cdot\mathbf{L}}\left(-\frac{L^{ 1}L^{3}}{I_{1}I_{3}}\Gamma^{1}-\frac{L^{2}L^{3}}{I_{1}I_{3}}\Gamma^{2}+\left[ \frac{(L^{2})^{2}}{I_{1}^{2}}+\frac{(L^{1})^{2}}{I_{1}^{2}}\right]\Gamma^{3}\right)\] \[\dot{\Gamma}^{1} =\Gamma^{2}\frac{L^{3}}{I_{3}}-\Gamma^{3}\frac{L^{2}}{I_{1}}\] \[\dot{\Gamma}^{2} =\Gamma^{3}\frac{L^{1}}{I_{1}}-\Gamma^{1}\frac{L^{3}}{I_{3}}\] \[\dot{\Gamma}^{3} =\Gamma^{1}\frac{L^{2}}{I_{1}}-\Gamma^{2}\frac{L^{1}}{I_{1}}.\]
Again, this system has an equilibrium \(\Gamma^{1,2}=0\) and \(L^{1,2}=0\). Letting \(L_{*}^{3}>0\) and \(\Gamma_{*}^{3}>0\) be the realizable equilibrium values, the linearized equations are
\[\dot{\delta L}^{1} =-L_{*}^{3}\left(\frac{1}{I_{1}}-\frac{1}{I_{3}}\right)\delta L^{2}+\xi\delta\Gamma^{2}+\frac{\lambda}{L_{*}^{3}\Gamma_{*}^{3}}\left(\frac{(L_{*}^{3})^{2}}{I_{3}^{2}}\delta\Gamma^{1}-\frac{L_{*}^{3}\Gamma_{*}^{3}}{I_{1}I_{3}}\delta L^{1}\right) \tag{50}\] \[\dot{\delta L}^{2} =L_{*}^{3}\left(\frac{1}{I_{1}}-\frac{1}{I_{3}}\right)\delta L^{1}-\xi\delta\Gamma^{1}+\frac{\lambda}{L_{*}^{3}\Gamma_{*}^{3}}\left(\frac{(L_{*}^{3})^{2}}{I_{3}^{2}}\delta\Gamma^{2}-\frac{L_{*}^{3}\Gamma_{*}^{3}}{I_{1}I_{3}}\delta L^{2}\right)\] \[\dot{\delta L}^{3} =0\] \[\dot{\delta\Gamma}^{1} =\frac{L_{*}^{3}}{I_{3}}\delta\Gamma^{2}-\frac{\Gamma_{*}^{3}}{I_{1}}\delta L^{2}\] \[\dot{\delta\Gamma}^{2} =\frac{\Gamma_{*}^{3}}{I_{1}}\delta L^{1}-\frac{L_{*}^{3}}{I_{3}}\delta\Gamma^{1}\] \[\dot{\delta\Gamma}^{3} =0.\]
Again, we may write
\[\delta\dot{\mathbf{z}}=M\delta\mathbf{z}.\]
Provided \(2I_{1}=I_{3}\), the eigenvalues of \(M\) are \((0,0,A,A^{*},B,B^{*})\) where
\[A =\frac{-\Gamma_{*}^{3}\lambda-\sqrt{-4(\Gamma_{*}^{3})^{2}(I_{1})^ {2}(L_{*}^{3})^{2}+(\Gamma_{*}^{3})^{2}\lambda^{2}+16\Gamma_{*}^{3}(I_{1})^{3} \xi}}{4(I_{1})^{2}\Gamma_{*}^{3}}\] \[B =\frac{-\Gamma_{*}^{3}\lambda+\sqrt{-4(\Gamma_{*}^{3})^{2}(I_{1})^ {2}(L_{*}^{3})^{2}+(\Gamma_{*}^{3})^{2}\lambda^{2}+16\Gamma_{*}^{3}(I_{1})^{3} \xi}}{4(I_{1})^{2}\Gamma_{*}^{3}}.\]
The system is linearly stable when
\[\Gamma_{*}^{3}(L_{*}^{3})^{2}>4I_{1}\xi\]
For the sake of comparison to the \(C(x)=\lambda x\) case, we use \(I_{1}=1\), \(I_{3}=2\), \(\xi=1\), and \(\lambda=1\) with the initial conditions \(\Gamma=(.3,0,3)\) and \(L=(.5,0,5.2)\). We see the nonlinear system stably relaxes to \(\Gamma_{*}^{3}\approx 3.0\) and \(L_{*}^{3}\approx 5.2\) (figure 3). This is reflected in the matrix for the linear equations of motion, which has eigenvalues
\[\text{Eigenvalues}(M)\approx(0,0,-0.25+2i,-0.25-2i,-0.25+2i,-0.25-2i).\]
For the sake of comparison, we can also choose \(\lambda=.1\), \(\mathbf{L}(t=0)=(1,0,4.2)\), and \(\mathbf{\Gamma}(t=0)=(1,0,2.8)\). We see the system relaxes to equilibrium, but at about a tenth of the previous speed, agreeing with the dynamics predicted by the eigenvalues of \(M\) (figure 4)
\[\text{Eigenvalues}(M)\approx(0,0,-0.025+1.67i,-0.025+1.67i,-0.025-1.67i,-0.025-1.67i).\]
As a final example, we consider the case when \(C(x)=\frac{\lambda}{2}x^{2}\). The equations of motion are
\[\dot{L}^{1} =-L^{2}L^{3}\left(\frac{1}{I_{1}}-\frac{1}{I_{3}}\right)+\xi\Gamma ^{2}+\lambda\mathbf{\Gamma}\cdot\mathbf{L}\left(\left[\frac{(L^{2})^{2}}{I_{1} ^{2}}+\frac{(L^{3})^{2}}{I_{3}^{2}}\right]\Gamma^{1}-\frac{L^{1}L^{2}}{I_{1}^{2 }}\Gamma^{2}-\frac{L^{1}L^{3}}{I_{1}I_{3}}\Gamma^{3}\right) \tag{51}\] \[\dot{L}^{2} =L^{1}L^{3}\left(\frac{1}{I_{1}}-\frac{1}{I_{3}}\right)-\xi\Gamma ^{1}+\lambda\mathbf{\Gamma}\cdot\mathbf{L}\left(-\frac{L^{1}L^{2}}{I_{1}^{2}} \Gamma^{1}+\left[\frac{(L^{1})^{2}}{I_{1}^{2}}+\frac{(L^{3})^{2}}{I_{3}^{2}} \right]\Gamma^{2}-\frac{L^{2}L^{3}}{I_{1}I_{3}}\Gamma^{3}\right)\] \[\dot{L}^{3} =\lambda\mathbf{\Gamma}\cdot\mathbf{L}\left(-\frac{L^{1}L^{3}}{I _{1}I_{3}}\Gamma^{1}-\frac{L^{2}L^{3}}{I_{1}I_{3}}\Gamma^{2}+\left[\frac{(L^ {2})^{2}}{I_{1}^{2}}+\frac{(L^{1})^{2}}{I_{1}^{2}}\right]\Gamma^{3}\right)\] \[\dot{\Gamma}^{1} =\Gamma^{2}\frac{L^{3}}{I_{3}}-\Gamma^{3}\frac{L^{2}}{I_{1}}\] \[\dot{\Gamma}^{2} =\Gamma^{3}\frac{L^{1}}{I_{1}}-\Gamma^{1}\frac{L^{3}}{I_{3}}\] \[\dot{\Gamma}^{3} =\Gamma^{1}\frac{L^{2}}{I_{1}}-\Gamma^{2}\frac{L^{1}}{I_{1}},\]
which we linearize
\[\dot{\delta L}^{1} =-L_{*}^{3}\left(\frac{1}{I_{1}}-\frac{1}{I_{3}}\right)\delta L^{ 2}+\xi\delta\Gamma^{2}+\lambda L_{*}^{3}\Gamma_{*}^{3}\left(\frac{(L_{*}^{3})^ {2}}{I_{3}^{2}}\delta\Gamma^{1}-\frac{L_{*}^{3}\Gamma_{*}^{3}}{I_{1}I_{3}} \delta L^{1}\right) \tag{52}\] \[\dot{\delta L}^{2} =L_{*}^{3}\left(\frac{1}{I_{1}}-\frac{1}{I_{3}}\right)\delta L^{ 1}-\xi\delta\Gamma^{1}+\lambda L_{*}^{3}\Gamma_{*}^{3}\left(\frac{(L_{*}^{3})^ {2}}{I_{3}^{2}}\delta\Gamma^{2}-\frac{L_{*}^{3}\Gamma_{*}^{3}}{I_{1}I_{3}} \delta L^{2}\right)\] \[\dot{\delta L}^{3} =0\] \[\dot{\delta\Gamma}^{1} =\frac{L_{*}^{3}}{I_{3}}\delta\Gamma^{2}-\frac{\Gamma_{*}^{3}}{I_{ 1}}\delta L^{2}\] \[\dot{\delta\Gamma}^{2} =\frac{\Gamma_{*}^{3}}{I_{1}}\delta L^{1}-\frac{L_{*}^{3}}{I_{3}} \delta\Gamma^{1}\] \[\dot{\delta\Gamma}^{3} =0.\]
Even with the usual caveat that \(I_{3}=2I_{1}\), the eigenvalues of \(M\) do not have a particularly nice expression. We again let \(\xi=1,I_{1}=1\), \(I_{3}=2\), and \(\lambda=.1\). With the initial conditions \(\mathbf{\Gamma}(t=0)=(1,0,2.8)\) and \(\mathbf{L}(t=0)=(1,0,4.2)\) the system is linearly stable with
\[\text{Eigenvalues}(M)\approx(-8.31,-8.31,-0.21,-0.21,0,0),\]
as reflected by the nonlinear dynamics (figure 5).
It should always be noted, including in all the prior examples, that this system is not stable for all initial conditions, as letting \(\mathbf{L}(t=0)=(1,0,2.8)^{T}\) and \(\mathbf{\Gamma}(t=0)=(1,0,4.2)\) exemplifies (figure 6).
Figure 5: \(\lambda=0.1\); \(C(x)=\frac{\lambda}{2}x^{2}\); \(\mathbf{\Gamma}(t=0)=(1,0,2.8)\); \(\mathbf{L}(t=0)=(1,0,4.2)\)
Keeping with tradition, we also try \(\lambda=1\) with the initial conditions \(\Gamma=(.3,0,3)\) and \(L=(.5,0,5.2)\). Like all the other systems, this initial condition is linearly stable with the nonlinear dynamics following suit (figure 7)
\[\text{Eigenvalues}(M)\approx(-11.85,-11.85,-0.317,-0.317,0,0).\]
## Conclusion
In this paper, we showed how the formalism of Riemann-Poisson geometry can be used to create dissipative dynamics. In particular, by considering the simplest dissipative metric on the heavy top phase space, we constructed a toy-model system that often relaxed to a stable equilibrium spinning about a principal axis, extending the work by Matterasi and Morrison [4] to the case where gravity cannot be neglected. We then computationally modeled the dynamics of a few systems, which, while possessing the same metriplectic bracket, had their dynamics generated by different functions.
Beyond serving as a useful toy model and proof-of-concept for more sophisticated work, the family of dynamical systems constructed in this paper have an obvious control-theoretic utility. Without dissipating any energy, beyond that required to monitor the orientation and rotation rate of a heavy top, we showed how a torque can be applied in such a way as to align a symmetric spinning body with its third moment of inertia. Perhaps more importantly, this work highlights how the language of geometry can be fitted to metriplectic dissipation, allowing for a wide class of geometric constructions to be carried into the theory of dissipative systems.
This work can be extended in a number of ways. For one, it remains open how different metrics affect the allowed dynamics of this and other systems. Another, perhaps harder, question is applying this geometric formalism to field theories such as the Navier-Stokes equations. In this paper, we also left many questions about the metriplectic heavy top unanswered. Most notably, we did not fully address many interesting questions related to the stability of the system, such as which regions of the phase space are nonlinearly asymptotically stable.
|
2307.11985 | Brownian yet Non-Gaussian Heat Engine | We investigate the performance of a Brownian heat engine working in a
heterogeneous thermal bath where the mobility fluctuates. The Brownian particle is
trapped by the time-dependent harmonic potential, by changing the stiffness
coefficient and the bath temperatures, we perform a Stirling cycle. We
numerically evaluated the average work, power and efficiency. We compare our
results with the Brownian heat engine working in a homogeneous thermal bath. We
find that for the normal diffusive system, the performance of a Gaussian heat
engine serves as an upper bound. We also observe that the non-Gaussian position
distribution decreases the stochastic heat engine performance. | I. Iyyappan, Jetin E. Thomas, Sibasish Ghosh | 2023-07-22T05:33:30Z | http://arxiv.org/abs/2307.11985v2 | # Brownian yet non-Gaussian thermal machines
###### Abstract
We investigate the performance of a Brownian thermal machine working in a heterogeneous heat bath. The mobility of the heat bath fluctuates and is modelled via an Ornstein-Uhlenbeck process. We trap the Brownian particle with a time-dependent harmonic potential and, by changing the stiffness coefficient and bath temperatures, we perform a Stirling cycle. We numerically calculate the average absorbed work, the average ejected heat and the performance of the heat pump. For shorter cycle times, we find that the performance of a Brownian yet non-Gaussian heat pump is significantly higher than that of the normal (Gaussian) heat pump. We numerically find the coefficient of performance at maximum heating power.
_Introduction._--The development of heat engines led to the industrial revolution. Two centuries ago, in 1824, Sadi Carnot showed that the maximum efficiency of a heat engine depends only on the temperatures of the hot (\(T_{h}\)) and cold (\(T_{c}\)) reservoirs. The Carnot efficiency is given by \(\eta_{C}=1-T_{c}/T_{h}\)[1]. To achieve this maximum efficiency, the heat engine needs to operate reversibly, which takes an infinite amount of time. Therefore, the power output becomes zero by definition, which is of no practical use. Finite power output is crucial for any practical application. Efficiency at maximum power (EMP) is another fundamental parameter for studying real heat engines. Novikov (for atomic power plants) and Curzon and Ahlborn (for endo-reversible heat engines) showed that the efficiency at maximum power is \(\eta_{NCA}=1-\sqrt{T_{c}/T_{h}}\)[2; 3]. EMP for heat engines has been actively investigated in the last four decades [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26].
Apart from macroscopic heat engines, micrometre-sized heat engines exhibit unique features, since fluctuations become significant at that scale [27]. Ken Sekimoto identified the thermodynamic quantities, such as heat, work and internal energy, for a Brownian particle at the single-trajectory level [28]. This framework is now called stochastic thermodynamics [29; 30]. Schmiedl and Seifert modelled a stochastic Carnot heat engine and showed that the EMP is \(\eta_{SS}=\eta_{C}/(2-\alpha\eta_{C})\)[31]. Here \(\alpha\) characterizes the dissipation due to a finite-time process. Several studies have been conducted on Brownian heat engines [32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45]. Using advanced experimental techniques, various microscopic heat engines have been realized in the lab [46; 47; 48; 49]. In particular, active heat engines have attracted a lot of attention [50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63].
Recently, Wang _et al._ discovered a new class of diffusion processes in which a Brownian particle shows normal diffusion (\(\langle x^{2}(t)\rangle\propto t\)) together with a Laplace displacement distribution at short times [64; 65]
\[\rho(x,t)\backsimeq\exp\left(-\frac{|x|}{\lambda(t)}\right). \tag{1}\]
Here, the characteristic decay length \(\lambda(t)\) varies as \(\sqrt{t}\). Many physical and biological systems exhibit normal diffusion with a non-Gaussian displacement distribution [66; 67; 68; 69; 70; 71]. This Brownian yet non-Gaussian behaviour has been well studied using the concept of superstatistics [65; 66] and the diffusing diffusivity model [72; 73]. In the diffusing diffusivity model, the diffusion coefficient is treated as a random variable and is given by the square of an Ornstein-Uhlenbeck process [73]. In this work, we perform a Stirling cycle for a Brownian particle diffusing with fluctuating mobility. Our results show that the Brownian yet non-Gaussian thermal machine works as a heat pump and performs significantly better than the usual Brownian heat pump for short cycle times (see Fig. (4)).
_Stochastic thermodynamics._--In this work, we only consider an overdamped regime. We use the following definitions to calculate the average thermodynamic quantities. The average internal energy is given by [31]
\[U\equiv\int V(x,\lambda(t))p(x,t)dx. \tag{2}\]
The average work done _by_ the particle is defined as [29]
\[W=\int_{t_{i}}^{t_{f}}dt\int\frac{\partial V(x,\lambda(t))}{\partial\lambda} \dot{\lambda}p(x,t)dx. \tag{3}\]
Here, \(V(x,\lambda(t))\) is the external potential and \(\lambda(t)\) is the time-dependent control parameter. \(p(x,t)\) is the probability distribution of \(x\) at time \(t\). The justification for calling Eq. (3) work can be found in Refs. [28] and [74]. Using a first-law-like energy balance, we can calculate the heat absorbed from the thermal bath as [33]
\[Q=W-\Delta U, \tag{4}\]
where \(\Delta U\) is the change in internal energy during the isothermal process.
_The Model._--When a Brownian particle diffuses in a heterogeneous medium, it encounters a mobility that varies with time. This can be due to the dynamic evolution of the medium or to different regions having different mobilities [66; 67; 68; 69; 70; 71; 75]. Chechkin _et al._ used a set of Langevin equations to describe the diffusion of a Brownian particle in an environment with fluctuating mobility [73]. Here, we consider a Brownian particle trapped by the time-dependent harmonic potential
\[V(x,\lambda(t))=\frac{\lambda(t)}{2}x^{2}, \tag{5}\]
where \(\lambda(t)\) is the externally controllable stiffness coefficient. The motion of a Brownian particle diffusing in a heterogeneous medium is governed by the following set of Langevin equations
\[\frac{dx}{dt}=-\mu(t)\lambda(t)x+\sqrt{2\mu(t)\kappa_{B}T}\ \xi(t), \tag{6}\]
\[\mu(t)=\chi^{2}(t), \tag{7}\]
\[\frac{d\chi(t)}{dt}=-\frac{\chi(t)}{\tau^{\prime}}+\sigma\zeta(t). \tag{8}\]
Here, \(x(t)\) is the position of the Brownian particle at time \(t\). We restrict our study to one dimension. \(\mu(t)\) is the fluctuating mobility [73]. \(\kappa_{B}\) is the Boltzmann constant and \(T\) is the bath temperature. \(\xi(t)\) is Gaussian white thermal noise with \(\langle\xi(t)\rangle=0\), \(\langle\xi(t)\xi(t^{\prime})\rangle=\delta(t-t^{\prime})\). The fluctuating mobility is defined as the square of the random variable \(\chi\) to ensure that it is positive [73]. The random variable \(\chi(t)\) follows an Ornstein-Uhlenbeck process (Eq. (8)); see Ref. [73] for details. \(\tau^{\prime}\) is the correlation time of the Ornstein-Uhlenbeck process. \(\zeta(t)\) is white Gaussian noise with \(\langle\zeta(t)\rangle=0\), \(\langle\zeta(t)\zeta(t^{\prime})\rangle=\delta(t-t^{\prime})\)[73]. \(\sigma\) is the strength of the fluctuating noise \(\zeta(t)\). For simplicity, we consider a single random variable \(\chi(t)\); in general, \(\chi(t)\) can have \(n\) degrees of freedom [73]. The distribution of the random variable \(\chi(t)\) at time \(t\) is denoted by \(f(\chi,t)\) and evolves according to the Fokker-Planck equation [76]
\[\frac{\partial f(\chi,t)}{\partial t}=-\frac{\partial}{\partial\chi}\left[- \frac{\chi}{\tau^{\prime}}-\frac{\sigma^{2}}{2}\frac{\partial}{\partial\chi} \right]f(\chi,t). \tag{9}\]
For a stationary state, the distribution of \(\chi\) becomes
\[f(\chi)=\frac{1}{\sqrt{\pi\sigma^{2}\tau^{\prime}}}\text{exp}\left(-\frac{ \chi^{2}}{\sigma^{2}\tau^{\prime}}\right). \tag{10}\]
We use the Euler-Maruyama numerical method to integrate the stochastic differential equations (6)-(8). We set the initial value \(x(0)=0\), and \(\chi(0)\) is drawn randomly from the stationary distribution given in Eq. (10).
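As a minimal illustration of this integration scheme (a sketch rather than the authors' code; the function and variable names are ours), Eqs. (6)-(8) can be advanced as follows, with \(\chi(0)\) drawn from the stationary Gaussian of Eq. (10), whose variance is \(\sigma^{2}\tau^{\prime}/2\):

```python
import numpy as np

def simulate_trajectory(lam_of_t, T, t_grid, sigma=1.0, tau_p=1.0,
                        kB=1.38e-5, rng=np.random.default_rng()):
    """Euler-Maruyama integration of Eqs. (6)-(8) along one trajectory.

    lam_of_t : callable returning the stiffness lambda(t)
    T        : bath temperature (fixed during an isothermal branch)
    t_grid   : equally spaced time points
    """
    dt = t_grid[1] - t_grid[0]
    x = 0.0
    # chi(0) from the stationary distribution of Eq. (10)
    chi = rng.normal(0.0, np.sqrt(sigma**2 * tau_p / 2.0))
    xs = np.empty_like(t_grid)
    for i, t in enumerate(t_grid):
        xs[i] = x
        mu = chi**2                       # Eq. (7): fluctuating mobility
        lam = lam_of_t(t)
        # Eq. (6): overdamped Langevin step with multiplicative noise
        x += -mu * lam * x * dt + np.sqrt(2.0 * mu * kB * T * dt) * rng.normal()
        # Eq. (8): Ornstein-Uhlenbeck step for chi
        chi += -chi / tau_p * dt + sigma * np.sqrt(dt) * rng.normal()
    return xs
```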
Now, we apply the framework of stochastic thermodynamics to the fluctuating-mobility system and construct a thermodynamic cycle to study its energetics. Experimentally realizing the Carnot cycle for Brownian particles is very difficult since it contains adiabatic processes. However, Martinez _et al._ realized a microscopic adiabatic process for an underdamped Brownian particle by precisely controlling the phase-space volume [77]. A microscopic adiabatic process for a non-Gaussian system has not yet been realized. Therefore, we study the Stirling cycle, which consists of two isothermal processes at different temperatures and two isochoric processes that connect them. The schematic diagram of the stochastic Stirling cycle is given in Fig. 1.
_Isothermal expansion process._-- The Brownian particle is kept in a hot reservoir at temperature \(T_{h}\) with the initial stiffness coefficient \(\lambda_{h}\). We linearly decrease the stiffness coefficient to its lower value \(\lambda_{l}\) for the period \(t=0\) to \(t=\tau/2\), which is easily realizable in experiments [46]. This is equivalent to the volume expansion of the macroscopic heat engine. We use the following protocol [40]
\[\lambda_{1}(t)=\lambda_{h}-\frac{2}{\tau}\Delta\lambda t. \tag{11}\]
Here, \(\Delta\lambda\equiv\lambda_{h}-\lambda_{l}\). The work done _by_ the particle is
\[w_{1}=-\frac{\Delta\lambda}{\tau}\int_{0}^{\frac{\tau}{2}}x^{2}(t)dt. \tag{12}\]
_Isochoric Cooling._--Keeping the stiffness coefficient constant, we decrease the temperature of the bath from \(T_{h}\) to \(T_{c}\) instantaneously. Therefore, the probability distribution of displacement does not change. In experiments, cooling the bath from \(T_{h}\) to \(T_{c}\) takes a few milliseconds [46], which is negligible compared with the cycle time. The work done by the particle during an isochoric process is zero since there is no change in the external control parameter [46].

Figure 1: The schematic diagram of the Stirling cycle for a Brownian particle diffusing with fluctuating mobility. The solid parabolic curve represents the harmonic potential. The shaded red (blue) region represents the probability distribution of displacement when the Brownian particle is in the hot (cold) bath at temperature \(T_{h}\) (\(T_{c}\)). The probability distribution is plotted on a logarithmic scale. \(\tau\) is the cycle time. Note that the position of the Brownian particle after a cycle time \(\tau\) is random.
_Isothermal compression process._--Now, keeping the bath temperature at \(T_{c}\). The stiffness coefficient is increased linearly for the period \(t=\tau/2\) to \(t=\tau\) as the protocol is given below [40]
\[\lambda_{3}(t)=\lambda_{l}-\Delta\lambda\left(1-\frac{2}{\tau}t\right). \tag{13}\]
The work done _on_ the particle is calculated as
\[w_{3}=\frac{\Delta\lambda}{\tau}\int_{\frac{\tau}{2}}^{\tau}x^{2}(t)dt. \tag{14}\]
_Isochoric Heating._--Finally, the bath temperature is increased from \(T_{c}\) to \(T_{h}\) instantaneously, while keeping the stiffness coefficient constant. In the Blickle and Bechinger experiment, the isochoric heating process was achieved in less than one millisecond [46]. Again, the isochoric heating process does not contribute to the work since there is no change in the external control parameter. Using Eq. (4), we can calculate the heat transferred during the isothermal expansion process as
\[q_{h}=-\frac{\Delta\lambda}{\tau}\int_{0}^{\frac{\tau}{2}}x^{2}(t)dt-\frac{1} {2}\left[\lambda_{l}x\left(\frac{\tau}{2}\right)^{2}-\lambda_{h}x(0)^{2} \right]. \tag{15}\]
Similarly, the heat transferred during the isothermal compression process becomes
\[q_{c}=\frac{\Delta\lambda}{\tau}\int_{\frac{\tau}{2}}^{\tau}x^{2}(t)dt-\frac{ 1}{2}\left[\lambda_{h}x(\tau)^{2}-\lambda_{l}x\left(\frac{\tau}{2}\right)^{2} \right]. \tag{16}\]
The total work consumed by the Brownian particle is calculated using the energy conservation law. Therefore, the consumed work becomes \(w=q_{h}+q_{c}+\Delta u\), which is random in nature; over a full cycle, \(\Delta u\) vanishes only on average. The ensemble average of the total consumed work is given by \(W=Q_{h}+Q_{c}\)[33].
_Numerical Simulation._--When we perform a Stirling cycle for an ensemble of Brownian particles, the final position \(x(\tau)\) is a random variable. To make the performance of the stochastic heat device independent of \(x(\tau)\), we first need to run many cycles until the mean-squared displacements of \(x(m\tau)\) and \(x((m+1)\tau)\) become equal, where \(m\) is the number of cycles performed. The probability distribution then satisfies the condition \(p_{ss}(x,t)=p_{ss}(x,t+\tau)\), which is called the time-periodic steady state (TPSS) [33; 40; 57]. Once the ensemble of Brownian particles reaches a TPSS, it no longer retains the memory of its initial position \(x(0)=0\,\mu m\) (micrometre). A Brownian particle in a homogeneous medium reaches a steady state after a few cycles; our system, however, would take a prohibitively large number of cycles to reach a steady state. Therefore, we keep track of the cumulative square displacement, updated after each cycle, for the ensemble of particles. We subtract the mean-square displacement, averaged over cycles and over the ensemble of particles, from the cumulative mean-square displacement. The average of this quantity defines the precision, which, by the central limit theorem, decreases as the number of cycles grows. The precision determines when the particles have reached a steady state and the thermodynamic quantities have reached the desired accuracy, at which point we start to calculate the thermodynamic quantities for a Stirling cycle.

In our calculations, we consider ensembles of \(10^{5}\) particle trajectories to compute the average quantities. For the integration, we use the step size \(dt=10^{-5}s\). We set the following parameters. The high and low values of the stiffness coefficient are \(\lambda_{h}=5pN/\mu m\) (pN-pico Newton) and \(\lambda_{l}=1pN/\mu m\), respectively; such values of \(\lambda\) are easily accessible in the lab [40]. The temperatures of the hot and cold reservoirs are \(T_{h}=363.15\)K and \(T_{c}=278.15\)K, respectively. The amplitude of fluctuation of \(\chi\) is \(\sigma=1\,\mu m^{1/2}pN^{-1/2}s^{-1}\) and the relaxation time is \(\tau^{\prime}=1s\). The Boltzmann constant is \(\kappa_{B}=1.38\times 10^{-5}pN\mu mK^{-1}\). To compare our results with the constant-mobility case, we consider the following Langevin equation
\[\frac{dx}{dt}=-\mu\lambda(t)x+\sqrt{2\mu\kappa_{B}T}\ \xi(t), \tag{17}\]
where \(\mu\) is the mobility. \(\xi(t)\) is the Gaussian white noise with \(\langle\xi(t)\rangle=0\), \(\langle\xi(t)\xi(t^{\prime})\rangle=\delta(t-t^{\prime})\). We set \(\mu=26.5258\,\mu mpN^{-1}s^{-1}\).
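To illustrate how the cycle averages are assembled, the following minimal sketch (our own, reusing a trajectory generated with the Euler-Maruyama stepper above at the appropriate bath temperatures; the TPSS convergence bookkeeping described above is omitted) implements the protocols of Eqs. (11) and (13) and the trajectory-level heats of Eqs. (15) and (16):

```python
import numpy as np

def lam_protocol(t, tau, lam_h=5.0, lam_l=1.0):
    """Stiffness protocol: Eq. (11) on [0, tau/2], Eq. (13) on [tau/2, tau]."""
    dlam = lam_h - lam_l
    if t < tau / 2:
        return lam_h - 2.0 * dlam * t / tau
    return lam_l - dlam * (1.0 - 2.0 * t / tau)

def stirling_heats(xs, t_grid, tau, lam_h=5.0, lam_l=1.0):
    """Trajectory-level heats q_h and q_c of Eqs. (15)-(16).

    xs is one trajectory sampled on t_grid over a full cycle [0, tau],
    generated with T = T_h on the first half and T = T_c on the second.
    """
    dt = t_grid[1] - t_grid[0]
    dlam = lam_h - lam_l
    half = len(t_grid) // 2
    int_exp = np.sum(xs[:half] ** 2) * dt    # integral of x^2 over [0, tau/2]
    int_comp = np.sum(xs[half:] ** 2) * dt   # integral over [tau/2, tau]
    q_h = -dlam / tau * int_exp - 0.5 * (lam_l * xs[half] ** 2 - lam_h * xs[0] ** 2)
    q_c = dlam / tau * int_comp - 0.5 * (lam_h * xs[-1] ** 2 - lam_l * xs[half] ** 2)
    return q_h, q_c

# Averaging q_h and q_c over ~10^5 trajectories (after reaching the TPSS)
# yields Q_h and Q_c, and the average consumed work W = Q_h + Q_c.
```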
_Results and Discussion._--Next, we present our numerical results. The figures are plotted on semi-logarithmic axes, with the x-axis scaled logarithmically. In Fig. (2), we plot the input heat, absorbed work and ejected heat as functions of the cycle time \(\tau\). The heat absorbed from the cold bath \(Q_{c}\) increases monotonically with time and the heat ejected to the hot bath \(Q_{h}\) decreases monotonically with time. The absorbed work, however, initially increases with cycle time \(\tau\), attains a maximum, and then starts to decrease.
In Fig. (3), we plot the heating power (\(Q_{h}/\tau\)) as a function of cycle time \(\tau\). The solid line represents the heating power of the Brownian yet non-Gaussian heat pump (BNGHP) and the dashed curve represents the Gaussian heat pump (GHP). For the BNGHP, the heating power initially increases with time and reaches its maximum value around \(\tau=0.04s\). It then decreases to a local minimum, after which it increases again with \(\tau\). The heating power of the GHP, in contrast, increases monotonically with the cycle time.
The average coefficient of performance of a stochastic heat pump is given by
\[\epsilon=\frac{Q_{h}}{W}. \tag{18}\]
In Fig. (4), we plot the coefficient of performance as a function of cycle time \(\tau\). Our results show that for cycle time \(\tau=0.01s\), the COP of the GHP is higher than the COP of the BNGHP. For the range \(0.02s<\tau\leq 0.2s\), the COP of the BNGHP is significantly higher than that of the GHP. The reason is that, for a cycle time shorter than the mobility correlation time \(\tau^{\prime}\), the change in the mobility is negligible, so the particles in the ensemble diffuse with different mobilities, which mimics a spatially heterogeneous environment (see Eqs. (7) and (8); the initial value of the mobility is randomly chosen from Eq. (10)); for further discussion see Ref. [73]. From Figs. (3) and (4), we find that for a BNGHP the COP at maximum heating power is \(0.0230587\).
_Conclusion._--In this work, we studied the performance of a Brownian yet non-Gaussian thermal machine that works as a heat pump, taking into account the inhomogeneity of the thermal bath. We used the Euler-Maruyama numerical method to calculate the average thermodynamic quantities, such as the absorbed heat, absorbed work and ejected heat. Our results showed that the Brownian yet non-Gaussian heat pump performs significantly better than the normal (Gaussian) heat pump for short cycle times (see Fig. 4). Further, we found that, for a BNGHP, the heating power is maximal at a cycle time of around \(\tau=0.04s\). We numerically evaluated the coefficient of performance at maximum heating power as \(\epsilon=0.0230587\).
_Future outlook._--The Brownian yet non-Gaussian heat pump can be easily realized in the lab. Chakraborty and Roichman artificially created spatial heterogeneity by using micropillars [71]. With optical tweezers, one can create a harmonic potential to trap the Brownian particle. By varying the laser intensity and bath temperatures, the Stirling cycle can be easily accomplished [46]. Entangled F-actin networks can also be used to realize BNGHP [64]. Some active systems show varying diffusion coefficients due to the time evolution of the medium or the heterogeneity present in the environment. It will be interesting to study the energetics of diffusing diffusivity models for active systems [75]. Further, in our future study, we would like to investigate the Brownian yet non-Gaussian heat devices with the external fluctuating force, which can increase the effective temperature of the Brownian particle [78]. We are also interested in extending the present work in the quantum domain.
S.G. acknowledges the support from Interdisciplinary Cyber Physical Systems (ICPS) program of the Department of Science and Technology (DST), India, Grant No. DST/ICPS/QuEST/Theme-1/2019/13. J.E.T. thanks Ralf Metzler for useful discussion.
Figure 3: The heating powers of the Brownian yet non-Gaussian heat pump and the Brownian heat pump, plotted as functions of cycle time \(\tau\). Here, the cycle time \(\tau\) varies on a logarithmic scale.
Figure 2: The average absorbed heat, consumed work, and ejected heat are plotted as functions of cycle time \(\tau\). Here, the cycle time \(\tau\) is plotted on a logarithmic scale.
Figure 4: The performance of the Brownian yet non-Gaussian heat pump and the Brownian heat pump, plotted as a function of cycle time \(\tau\). The solid line represents the COP of the Brownian yet non-Gaussian heat pump. The dashed curve represents the COP of the Brownian heat pump. The inset shows a magnified view of the coefficient of performance for short cycle times. Note that the cycle time \(\tau\) in both the figure and the inset is plotted on a logarithmic scale.
I.I. is delighted to dedicate this article to S. V. M. Satyanarayana who has been teaching free physics classes on Sundays to research aspirants for over a quarter century. I am one of the beneficiaries of his Sunday class.
I. I. and J. E. T. equally contributed to this work.
|
2304.08984 | Robustness of Visual Explanations to Common Data Augmentation | As the use of deep neural networks continues to grow, understanding their
behaviour has become more crucial than ever. Post-hoc explainability methods
are a potential solution, but their reliability is being called into question.
Our research investigates the response of post-hoc visual explanations to
naturally occurring transformations, often referred to as augmentations. We
anticipate explanations to be invariant under certain transformations, such as
changes to the colour map while responding in an equivariant manner to
transformations like translation, object scaling, and rotation. We have found
remarkable differences in robustness depending on the type of transformation,
with some explainability methods (such as LRP composites and Guided Backprop)
being more stable than others. We also explore the role of training with data
augmentation. We provide evidence that explanations are typically less robust
to augmentation than classification performance, regardless of whether data
augmentation is used in training or not. | Lenka Tětková, Lars Kai Hansen | 2023-04-18T13:31:52Z | http://arxiv.org/abs/2304.08984v1 | # Robustness of Visual Explanations to Common Data Augmentation Methods
###### Abstract
As the use of deep neural networks continues to grow, understanding their behaviour has become more crucial than ever. Post-hoc explainability methods are a potential solution, but their reliability is being called into question. Our research investigates the response of post-hoc visual explanations to naturally occurring transformations, often referred to as augmentations. We anticipate explanations to be invariant under certain transformations, such as changes to the colour map while responding in an equivariant manner to transformations like translation, object scaling, and rotation. We have found remarkable differences in robustness depending on the type of transformation, with some explainability methods (such as LRP composites and Guided Backprop) being more stable than others. We also explore the role of training with data augmentation. We provide evidence that explanations are typically less robust to augmentation than classification performance, regardless of whether data augmentation is used in training or not.
## 1 Introduction
Convolutional neural networks (CNNs) are commonly used in computer vision. However, CNNs are fragile to adversarial attacks [1]. It has been shown that explanation methods are fragile as well and that attackers can manipulate the explanations arbitrarily [2, 3].
To be trusted, explanations need to show common-sense behaviour. In this work, we investigate one such basic behaviour: _If a transformation of an image does not change the target class, the explanation should assign importance to the same part of the object as in the untransformed image1_. If the explainability method does not preserve the explanations of the perturbed images, we lose trust in it. We believe that it is even more concerning than adversarial attacks since perturbations such as _e.g._, object rotation, are omnipresent and happen spontaneously.
Footnote 1: We do not consider cases where the transformation of an image would change the ground-truth label.
In this work, we investigate how perturbations of an image influence visual post-hoc explanations. To understand the role of augmentation during training, we train CNNs from scratch on both augmented and non-augmented data. We examine the robustness of the models and compare the explanations. We pose the questions: Are visual explanations as robust to augmenting the input image as the predictions? Are there differences among various explainability methods and model architectures? Does training with augmented images improve the robustness? Which explainability methods are the best both in robustness to small augmentations and in faithfulness measured by the pixel-flipping test?
Related workThe feasibility of adversarial attacks [4] is well-known. It has been shown [2, 3] that
explanation methods are fragile as well and that attackers can manipulate the explanations arbitrarily. In this paper, we focus on the fragility of the explanations in the case of more naturally occurring (often unintentional) disruptions.
Data augmentation techniques [5, 6] have been used to improve the generalization of image classifiers (_e.g._, [7, 8]). Rebuffi _et al_. [9] found that using data augmentations helps to improve robustness against adversarial attacks. Very recent work by Won _et al_. [10] found that data augmentation used during model training has an impact on model interpretability; however, they do not consider stability under test-time augmentation as in the present work.
Wang and Wang [11] built a model with transformation invariant interpretations. However, this self-interpretable model violates one of the desiderata for explanations [12]: low construction overhead. We explore whether we could get similar robustness with available post-hoc explainability methods. Moreover, we broaden the set of considered transformations.
Although explainability is important for understanding neural networks, the existing methods differ in the quality of produced explanations and many saliency methods have been criticized (_e.g._, [13, 14, 15]). Therefore, metrics to evaluate the quality have been developed (_e.g._, [16, 17, 18]). Quantus [19] is a toolkit that collects many of those metrics. Our experiments shed further light on the stability of explainability methods.
## 2 Methods
Augmentation methodsHere we divide augmentation techniques into two groups: invariant and equivariant methods. For invariant techniques, the explanation of the augmented image should be the same as the explanation of the original image. In the case of the equivariant techniques, the explanation of the augmented image should be the same as the augmented explanation of the original image.
We chose three invariant (change of brightness, hue and saturation) and three equivariant techniques (rotation, translation and scaling). When using equivariant methods, the background of each image was padded with black pixels to match the original image format if necessary. In preliminary experiments, we studied the influence of various background padding methods and differences were negligible.
We used the library ImgAug [20] for augmenting images. For each method, we chose an interval of values so that classification performance was reduced by 10%. A table showing the chosen intervals for each method can be found in Tab. 3 and Fig. 3 displays one image augmented by values within these intervals for changing brightness and rotation. The figures for the rest of the methods are in Appendix B.
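For orientation, the snippet below sketches how such parameterized augmenters can be instantiated with ImgAug. Apart from the brightness interval, which is the one quoted in the caption of Fig. 5, the parameter ranges here are illustrative placeholders rather than the values of Tab. 3.

```python
import imgaug.augmenters as iaa

# Invariant transformations -- the explanation should stay unchanged.
brightness = iaa.AddToBrightness(add=(-95, 95))   # interval reported in Fig. 5
hue = iaa.AddToHue(value=(-50, 50))               # placeholder interval
saturation = iaa.AddToSaturation(value=(-75, 75)) # placeholder interval

# Equivariant transformations -- the explanation should transform with
# the image; Affine pads the background with black pixels by default.
rotation = iaa.Affine(rotate=(-30, 30))                                  # placeholder
scaling = iaa.Affine(scale=(0.7, 1.3))                                   # placeholder
translation = iaa.Affine(translate_percent={"x": (-0.2, 0.2),
                                            "y": (-0.2, 0.2)})           # placeholder

# augmented = rotation.augment_image(image)  # image: HxWx3 uint8 array
```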
The experiments were performed on the ImageNet [21] dataset. For comparing explanations, 500 images across all ImageNet classes were randomly selected. Analyses were done on the correctly classified images. For every augmentation method and for each image, the interval of possible values of the augmentation method parameter was divided into equidistant units and augmented versions of the image were created, one for each of these values. Each image was passed through the networks to get the probability of the target class, and we obtained explanations from the post-hoc explainability methods. We computed the Pearson correlation between the explanations of the augmented images and the explanation of the original image (the augmented explanation in the case of the equivariant methods) and the top-1000 intersection (the intersection of the 1000 most important pixels in the explanation). We compared only the area of the original image - hence, for equivariant methods, we computed the correlation and top-1000 intersection only on the parts that were present in both the original and the augmented image and masked the rest.
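This comparison reduces to a few lines of numpy/scipy. In the sketch below (a minimal version with illustrative names, not the released code), saliency maps are 2-D arrays and the boolean mask marks the pixels present in both images for the equivariant case:

```python
import numpy as np
from scipy.stats import pearsonr

def explanation_similarity(expl_ref, expl_aug, valid_mask=None, k=1000):
    """Pearson correlation and top-k intersection of two saliency maps.

    expl_ref : explanation of the original image (the augmented explanation
               for equivariant transformations)
    expl_aug : explanation of the augmented image
    valid_mask : boolean array of pixels present in both images; all other
                 pixels are ignored (equivariant case)
    """
    a, b = expl_ref.ravel(), expl_aug.ravel()
    if valid_mask is not None:
        keep = valid_mask.ravel()
        a, b = a[keep], b[keep]
    corr, _ = pearsonr(a, b)
    top_a = set(np.argsort(a)[-k:])   # indices of the k most important pixels
    top_b = set(np.argsort(b)[-k:])
    intersection = len(top_a & top_b) / k
    return corr, intersection
```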
MetricsWe can plot the probability of the target class and all its augmented versions with the augmentation parameter on the x-axis and the probability on the y-axis. We call this relation a _probability curve_. In the same way, we plot the correlations between the original and the augmented images (the _correlation curve_) and the top-1000 intersections (the _top-1000 curve_). These curves can be visualised as in Fig. 2. To compare explainability methods in a fair way, we score relative to classification certainty. For a fixed range \([M,N]\), we compute a normalized area under the response curve for \(x\in[M,N]\), or, more precisely, the portion of this area out of a rectangle with corners \([M,0],[N,0],[N,1],[M,1]\). Moreover, to make the scores of different curves comparable and let the score depend only on the shape of the curve, we ensure that the point on the curve corresponding to the unaugmented image takes a value of \(1\) by shifting the response curve. Fig. 1(a) illustrates how the score is computed. For each curve, we get a number between \(0\) and \(1\), and higher values indicate a more stable response. Finally, since we want to compare the robustness of the model's predictions and its explanations, we divide the score for the correlation (or top-1000) curve of the explanations by the score for the probability curve and denote it as _S(correlation, probability)_ (or _S(top-1000, probability)_). If \(\mathrm{S}(\cdot,\) probability) is smaller than \(1\), the predictions are more stable than the explanations, whereas values higher than \(1\) indicate more robust explanations. The intervals for augmentation parameters are chosen such that the probability of the target class drops on average by at least \(10\%\) at one of the endpoints (in comparison to the original image).
Apart from comparing the robustness of the explainability methods, we are interested in the overall quality of explanations. One method for evaluating the quality is pixel flipping [16]. We consider only the original and correctly classified images. We flip the most relevant pixels first and replace them with black pixels. For each perturbed image, we divide its probability of the target class by the original image's probability of the target class and plot these values as a curve by linear interpolation. We compute the normalized area over the curve (up to \(1\)) from zero to the first \(20\%\) of pixels flipped and average these numbers across all images. Fig. 1(b) visualizes how the pixel flipping score is computed. A similar definition has been given by Samek _et al_. [22]. Our definition differs in dividing the probabilities instead of subtracting them. The fractions better capture the relative decline of the probability and can take all values in \([0,1]\).
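A compact numpy sketch of these scores follows (our own helper names; curves are assumed to be sampled on sorted, equidistant parameter grids):

```python
import numpy as np

def curve_score(params, values, M, N):
    """Normalized area under a response curve on [M, N], shifted so that
    the point for the unaugmented image (parameter 0) takes the value 1."""
    shift = 1.0 - np.interp(0.0, params, values)  # params must be increasing
    keep = (params >= M) & (params <= N)
    x, y = params[keep], values[keep] + shift
    return np.trapz(y, x) / (N - M)   # fraction of the [M,N] x [0,1] rectangle

def S(expl_params, expl_curve, prob_params, prob_curve, M, N):
    """S(correlation, probability) or S(top-1000, probability)."""
    return (curve_score(expl_params, expl_curve, M, N)
            / curve_score(prob_params, prob_curve, M, N))

def pixel_flipping_score(probs_flipped, prob_orig, fractions, cutoff=0.2):
    """Normalized area over the curve of p(target|flipped) / p(target|orig),
    computed up to the first `cutoff` fraction of pixels flipped."""
    ratios = np.asarray(probs_flipped) / prob_orig
    keep = fractions <= cutoff
    area_under = np.trapz(ratios[keep], fractions[keep])
    return (cutoff - area_under) / cutoff   # area over the curve, normalized
```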
NetworksWe study three convolutional networks (ResNet50 [23], VGG16 [24], and EfficientNetV2 small [25]). Because of space constraints, we present in this paper only the results for ResNet50. However, the results for VGG16 and EfficientNet V2 small show similar tendencies. Since we wanted to explore the role of augmenting images during training, we trained each model architecture with two different settings. Models trained with fully augmented data (denoted "full aug" in the following) were trained with Trivial Augment wide [26] strategy. Models trained with limited data augmentation (denoted "lim aug") used only random resized cropping, random horizontal flipping and random erasing [27] (only EfficientNet V2 and ResNet50). Details on training can be found in Appendix A.
Explanation methodsWe investigated the following explanation methods: Gradients [28], Input x Gradients [28], Integrated Gradients [29], Guided Backpropagation [30], Deconvolution [31] and three variants of Layer-wise Relevance Propagation [32, 33] composites: EpsilonPlusFlat (LRP-\(\varepsilon\)-rule for dense layers, LRP-\(\alpha,\beta\) (\(\alpha=1,\beta=0\)), also called ZPlus rule, for convolutional layers, and the flat rule for the first linear layer), EpsilonGammaBox (LRP-\(\varepsilon\)-rule for dense layers, the LRP-\(\gamma\)-rule (\(\gamma=0.25\)) for convolutional layers, and the LRP-\(Z^{B}\)-rule (or box-rule) for the first layer) and EpsilonAlpha2Beta1Flat (LRP-\(\varepsilon\)-rule for dense layers, LRP-\(\alpha,\beta\) (\(\alpha=2,\beta=1\)) for convolutional layers and the flat rule for the first linear layer) [34]. We used Zennit [35] to generate LRP explanations and Captum [36] for the rest of the explainability methods.
The code and hyperparameters for reproducing the experiments can be found in the project repository 2.

Footnote 2: [https://github.com/LenkaTetkova/robustness-of-explanations.git](https://github.com/LenkaTetkova/robustness-of-explanations.git)

Figure 1: Visualization of the metrics defined in Sec. 2. In both cases, we compute the portion of the yellow part in the green rectangle.
## 3 Results
Figure 2 shows the probability and correlation curves for rotation and "ResNet50 full aug". It shows that, although the predictions do not change much for increasing magnitudes of augmentation, the drop in correlation is huge. Table 1 shows S(correlation, probability) for all augmentation and explainability methods tested on "ResNet50 full aug". We observe that the explanations are in most cases less stable than the predictions. Moreover, the robustness of explanations depends on the augmentation method - for some of them, the explanations are more robust than for others. Specifically, explanations of images augmented by invariant methods are more stable than those augmented by equivariant methods. The variance in robustness across explainability methods was an unexpected finding. The most stable ones, the LRP composites and Guided Backprop, indicate a certain degree of stability, whereas the least stable ones, Gradients and Input x Gradients, show a steep decrease in the similarity of explanations even for small perturbations.

Figure 5 depicts the comparison of ResNet50 trained with full and limited augmentations, evaluated on changes in brightness. We observe negligible differences between the two networks, which indicates that training with data augmentations does not diminish this problem. Additional plots for other augmentation methods can be found in Appendix C.

However, stability is not the only desired property of explainability methods; we also need to consider their overall quality. In our experiments, we measured faithfulness, specifically the pixel-flipping score. Figure 4 shows S(correlation, probability) against the pixel-flipping scores. We can observe that all the LRP composites lie in the top-right corner. On the other hand, Guided Backprop and Deconvolution attain low pixel-flipping scores in comparison to other methods. This is not surprising, because Nie [15] showed that these two methods do not depend much on the tested model but rather perform a (partial) image recovery.
Figure 3: Examples of the augmented images and their explanations.
Figure 2: Example of curves showing the probabilities and correlations between the original and the rotated images.
We consider the instability of explanations to be a serious problem, relevant for many domains where computer vision tasks are solved using neural networks. Our study contributes additional evidence that current explainability methods cannot be trusted to deliver a reliable justification of the outputs of a model. Many of the tested perturbations may occur unintentionally when taking images under different light conditions or from a different angle, or through domain shift and variability of the data. Unless more stability of explainability methods is ensured, explanations cannot be trusted and used as a foundation for entrusting neural networks with important, high-impact tasks.
\begin{table}
\begin{tabular}{|l||l|l|l||l|l|l|} \hline & Brightness & Hue & Saturation & Rotate & Scale & Translate \\ \hline \hline Gradients & 0.468 & 0.442 & 0.354 & 0.127 & 0.122 & 0.246 \\ \hline Input x Gradients & 0.330 & 0.443 & 0.343 & 0.126 & 0.120 & 0.245 \\ \hline Integrated Gradients & 0.478 & 0.636 & 0.546 & 0.209 & 0.229 & 0.327 \\ \hline Guided Backprop & **1.005** & 1.028 & 0.994 & **0.819** & **0.866** & **0.875** \\ \hline Deconvolution & 0.975 & 1.014 & 0.975 & 0.434 & 0.437 & 0.449 \\ \hline LRP: EpsilonPlusFlat & 0.923 & **1.053** & **1.038** & 0.796 & 0.834 & 0.792 \\ \hline LRP: EpsilonGammaBox & 0.632 & 0.856 & 0.832 & 0.480 & 0.512 & 0.532 \\ \hline LRP: EpsilonAlpha2Beta1Flat & 0.662 & 1.006 & 0.972 & 0.691 & 0.722 & 0.706 \\ \hline \end{tabular}
\end{table}
Table 1: Results of S(correlation, probability) for ”ResNet50 full aug”, computed on 391 (correctly classified) images. All numbers are with uncertainty (standard error of the mean) at most \(\pm\)0.007. Highlighted are the highest values for each augmentation.
Figure 4: Comparison of S(correlation, probability) and pixel flipping score for "ResNet50 full aug". The scores are defined in Sec. 2. The x-axis shows the average of S(correlation, probability) over all six augmentation methods used in this paper. The dashed line corresponds to a baseline pixel-flipping score computed with random sorting of the pixels. The best methods are in the top right corner.
Figure 5: Comparison of "ResNet50 full aug" (391 images) and "ResNet50 lim aug" (385 images) for each explainability method. We plot S(correlation, probability) for changes in brightness (AddToBrightness from -95 to 95). Boxes show the quartiles and medians, and whiskers extend to the most extreme, non-outlier data points.
## 4 Conclusion
We investigated the robustness of post-hoc explainability methods under natural perturbations of the input images. We found that LRP composites and Guided Backprop produce the most stable explanations, while Gradients and Input x Gradients are the least stable ones. When perturbing with the invariant methods (_e.g._, changing brightness, hue and saturation), the explanations are more stable than when perturbing with equivariant methods (_e.g._, rotation, scaling and translation). Training with data augmentation does not reduce this problem.
## 5 Acknowledgements
This work was supported by the DIREC Bridge project Deep Learning and Automation of Imaging-Based Quality of Seeds and Grains, Innovation Fund Denmark grant number 9142-00001B and by the Danish Pioneer Centre for AI, DNRF grant number P1. We acknowledge EuroHPC Joint Undertaking for awarding us access to Karolina at IT4Innovations, Czech Republic.
|
2307.10443 | Integrating a Heterogeneous Graph with Entity-aware Self-attention using
Relative Position Labels for Reading Comprehension Model | Despite the significant progress made by transformer models in machine
reading comprehension tasks, they still fall short in handling complex
reasoning tasks due to the absence of explicit knowledge in the input sequence.
To address this limitation, many recent works have proposed injecting external
knowledge into the model. However, selecting relevant external knowledge,
ensuring its availability, and requiring additional processing steps remain
challenging. In this paper, we introduce a novel attention pattern that
integrates reasoning knowledge derived from a heterogeneous graph into the
transformer architecture without relying on external knowledge. The proposed
attention pattern comprises three key elements: global-local attention for word
tokens, graph attention for entity tokens that exhibit strong attention towards
tokens connected in the graph as opposed to those unconnected, and the
consideration of the type of relationship between each entity token and word
token. This results in optimized attention between the two if a relationship
exists. The pattern is coupled with special relative position labels, allowing
it to integrate with LUKE's entity-aware self-attention mechanism. The
experimental findings corroborate that our model outperforms both the
cutting-edge LUKE-Graph and the baseline LUKE model across two distinct
datasets: ReCoRD, emphasizing commonsense reasoning, and WikiHop, focusing on
multi-hop reasoning challenges. | Shima Foolad, Kourosh Kiani | 2023-07-19T20:17:37Z | http://arxiv.org/abs/2307.10443v3 | Integrating a Heterogeneous Graph with Entity-aware Self-attention using Relative Position Labels for Reading Comprehension Model
###### Abstract
Despite the significant progress made by transformer models in machine reading comprehension tasks, they still fall short in handling complex reasoning tasks due to the absence of explicit knowledge in the input sequence. To address this limitation, many recent works have proposed injecting external knowledge into the model. However, selecting relevant external knowledge, ensuring its availability, and requiring additional processing steps remain challenging. In this paper, we introduce a novel attention pattern that integrates reasoning knowledge derived from a heterogeneous graph into the transformer architecture without relying on external knowledge. The proposed attention pattern comprises three key elements: global-local attention for word tokens, graph attention for entity tokens that exhibit strong attention towards tokens connected in the graph as opposed to those unconnected, and the consideration of the type of relationship between each entity token and word token. This results in optimized attention between the two if a relationship exists. The pattern is coupled with special relative position labels, allowing it to integrate with LUKE's entity-aware self-attention mechanism. The experimental findings corroborate that our model outperforms both the cutting-edge LUKE-Graph and the baseline LUKE model on the ReCoRD dataset that focuses on commonsense reasoning.
global-local attention \(\cdot\) graph-enhanced self-attention \(\cdot\) transformer-based model \(\cdot\) reasoning knowledge \(\cdot\) relative position encoding \(\cdot\) cloze-style machine reading comprehension
## 1 Introduction
Recent successes on a variety of Natural Language Processing (NLP) tasks, including Question Answering (QA) and Machine Reading Comprehension (MRC), have been achieved using transformer-based models, such as BERT [1] and its variants. The main innovation in transformers is the self-attention mechanism, which evaluates all tokens of the input sequence simultaneously. Thanks to this parallelism, transformers can train NLP models on datasets of unprecedented size, allowing contemporary hardware accelerators like GPUs/TPUs to be fully leveraged. This ability enables the models to be pretrained on huge general-purpose corpora, whose knowledge can then be transferred to downstream tasks like QA and MRC.
Despite the advanced outcomes achieved by most transformer-based models, their transformer architecture presents two notable limitations: (1) Inclusion of Reasoning Information: it is challenging to enrich a specific pre-training objective with reasoning information; and (2) Constraints on Attention Allocation between Tokens: it is also challenging to account for the relationship type of each pair of tokens (such as an entity token and its mention, or a query token and another token) in the attention they assign to each other.
The original transformer model [2] has the ability to link tokens (words) to each other via the self-attention mechanism. However, managing these relationships within datasets that necessitate strong commonsense reasoning, such as ReCoRD, proves to be a difficult task, given that the model typically breaks down most entities into multiple tokens. In order to address this challenge, several recent studies [3; 4; 5] have explored the use of external knowledge to enhance the reasoning capabilities of models. On the other hand, some studies [4; 6; 7; 8; 9] have introduced entities as distinct tokens within the model, allowing for direct comprehension of the relationships between entities. In some of them [4; 9], each entity is allocated a fixed embedding vector that stores information about the entity in a knowledge base (KB). Whilst these models are capable of capturing the rich information present in the KB, they are limited in their capacity to represent entities that do not exist within the KB. Moreover, they necessitate the selection of the optimal subgraph and the most pertinent entity in KGs, particularly for ambiguous ones. By contrast, other studies [6; 8] have proposed contextual representations of entities that are trained using unsupervised pre-training tasks based on language modeling. Most of this research overlooks prior knowledge about the links between mentions of different entities in documents and the intuitive relationships between them. These relationships contain abundant reasoning information in documents.
Conversely, the original transformer model simply connects the tokens together, without considering their relationship type in the attention they pay to each other. Various transformer variants [8; 10; 11; 12; 13] have been suggested for scaling up the input length and reducing the memory and computational requirements of the full self-attention mechanism. They adapt the transformer architecture to employ sparse attention, which restricts the ability of tokens to attend to one another, by applying specialized attention patterns. As such, they identify some input tokens as local, attending to nearby tokens, and some important ones as global, attending to all tokens (e.g., the [CLS] token). The approach most closely related to this paper is the ETC model [8]. This model introduces extra global tokens that do not correspond to any of the input tokens. In addition, ETC manages structured inputs by combining global-local attention with relative position labels [14].
In order to address the aforementioned limitations, we propose a Graph-Enhanced Self-Attention (GESA) approach that does not rely on external knowledge. First, we employ LUKE [6] as the foundational pre-trained model, which considers both words and entities in a given document as distinct input tokens. Then, we augment the entity-aware self-attention mechanism of LUKE by dividing the attention matrix into four components: w2w, w2e, e2w, and e2e, depending on the token types - word (w) or entity (e). For the w2w component, we utilize the global-local attention approach used in Longformer [10] by designating certain input tokens as local, to concentrate on nearby tokens, and some key ones ([CLS] and question tokens) as global, to attend to all tokens. For the w2e and e2w components, we efficiently account for the relationship type between each entity token and word token (e.g., an entity token with its word mention in the document, question tokens with the missing entity token) in the attention given to each other. Based on the relationship type of each pair, we assign a unique relative position embedding and add it to the scaled dot product operation. Additionally, we transform all entity candidates in the document and their connections into a heterogeneous graph and integrate the graph information into the e2e part of the attention matrix. Our experimental results show that our method outperforms both the LUKE-Graph and the baseline LUKE model on the ReCoRD dataset [15], which requires commonsense reasoning. In summary, our primary contributions are:
* We introduce a novel attention pattern that includes global-local attention for word tokens, graph attention for entity tokens, and attention between related entity and word tokens. This attention pattern seamlessly integrates commonsense representations into the fine-tuning phase without using any external knowledge.
* We incorporate this attention pattern into the entity-aware self-attention mechanism of LUKE using relative position encoding. This integration allows the self-attention mechanism to effectively emphasize reasoning information, thereby aiding in accurate decision-making.
* By incorporating the reasoning information from a heterogeneous graph into the self-attention mechanism, we allow the transformer layers to give more attention to tokens connected in the graph than those unconnected.
* By considering the relationship type between tokens, we modify the attention mechanism to limit their mutual attention, leading to enhanced attention efficiency when dealing with related tokens.
* We evaluate the effectiveness of our proposed method on the ReCoRD dataset with commonsense reasoning. The results demonstrate a 1% improvement in performance compared to the baseline LUKE model.
## 2 Related Work
In this section, we review methods that improve the original transformer by scaling up the input length, extending the attention span, limiting the connections between tokens in self-attention, and reducing memory consumption and computation time. The attention span of the original transformer is fixed and constrained; therefore, no information can cross fixed-length segments. The Transformer-XL [16] extended the attention span over longer periods and across several segments by incorporating information from earlier hidden states. Moreover, to maintain a consistent flow of positional information across segments, it encoded relative rather than absolute positions. It has been shown that relative position embeddings perform better in tasks requiring the comprehension and generation of natural language [14, 16]. To reduce memory and computational requirements, the Sparse Transformer [12] introduced factorized self-attention using predefined sparsity patterns such as local or strided attention, making it possible to train deeper networks on sequences of unprecedented length on current hardware.
The objective of research lines like the Routing Transformer [17], Reformer [13], and Sinkhorn Transformer [18] is to learn sparsity patterns. In particular, the Routing Transformer [17] learns sparsity patterns based on content similarity, employing k-means clustering so that attention is computed only between queries and keys in the same cluster. The Reformer [13], on the other hand, uses locality-sensitive hashing (LSH) [19] to compute attention only between query and key vectors within the same hash buckets.
Another line of study [8, 10, 11] showed that both attention types (local and global attention) are essential. Local attention is mainly utilized to construct contextual representations, while global attention can build whole-sequence representations for prediction. Longformer [10] introduced a windowed local-context self-attention along with global attention on a few pre-selected input tokens for learning task-specific representations, such as the CLS token for classification and all question tokens for MRC. The extended transformer construction (ETC) model [8] defined some additional global content embeddings that do not correspond to any of the input tokens. Big-Bird [11] further added random sparse attention patterns to the global-local attention of the ETC structure, enabling the quick blending of information from different parts of the input sequence. Since the ETC model can directly encode a graph or hierarchical structure, we adopt its attention pattern for integrating our heterogeneous graph into global-local attention. Moreover, we expand the vocabulary of relative position labels to efficiently account for the relation type of each pair of tokens in the amount of attention they pay to each other. A comparison of our attention pattern with other attention patterns is shown in Fig. 1. Our attention pattern combines three forms of attention: the global-local attention used in Longformer, graph attention, and the attention pattern employed in ETC. The attention matrix is partitioned into four segments, namely w2w, w2e, e2w, and e2e, based on the types of tokens, i.e., word (w) or entity (e), inspired by ETC. The global-local attention mechanism of Longformer is incorporated into the w2w segment, wherein some input tokens are treated as local to attend to nearby tokens, while certain crucial ones ([CLS] and question tokens) are deemed global and attend to all tokens. The relationship type between each entity token and word token is effectively considered in the w2e and e2w segments, enabling them to attend to each other more strongly if they are related. Moreover, all entity candidates in the document and their connections are transformed into a heterogeneous graph, and the graph information is integrated into the e2e portion of the attention matrix.
Figure 1: Comparing different attention patterns with our model. (a) local attention. (b) global-local attention used in Longformer model. (c) graph attention. (d) global-local attention used in ETC. (e) global-local attention used in big-bird. (f) our attention
A different line of work [4; 5; 6; 7; 20; 21; 22] improved transformer-based models by injecting them with external knowledge. Li et al. [5] utilized pre-calculated embeddings from ConceptNet [23] as a form of external knowledge representation, which they incorporated into BERT through three different methods during the fine-tuning process. Additionally, they introduced a mask mechanism that allows for the token-level search of multi-hop relationships to filter external knowledge. Meng et al. [21] proposed a confidence-based knowledge integration (CBKI) module, which determines the amount of knowledge to be integrated into the model based on confidence scores. However, external knowledge sources may not always be readily available, and injecting them into the models requires additional processing steps. This can increase the complexity of the model, making it harder to train and deploy. ERNIE [4] and Know-BERT [20] learned static entity embeddings from a Knowledge Base (KB), while LUKE (Language Understanding with Knowledge-based Embeddings) applied a new pretraining task to learn entity representations. LUKE leverages entity information in its self-attention mechanism, allowing the model to attend to specific entities in the input text, thereby improving its performance on tasks that require an understanding of entity relationships. Besides, LUKE-Graph [7] combined the strengths of LUKE with graph-based reasoning to capture intuitive relationships between entities, achieving a notable improvement by integrating Gated Relational Graph Attention (Gated-RGAT). In this paper, we incorporate the knowledge from the KB into our model using the LUKE pretraining task, and add the entity-aware self-attention mechanism to take the kind of token (word or entity) into account while computing attention scores. Furthermore, we take advantage of the LUKE-Graph to capture the importance of the relationships between entities using a heterogeneous graph. However, we integrate the graph module with the self-attention mechanism instead of using a graph module separate from the transformer. This means a multi-step model is no longer necessary, thereby reducing the model's execution time.
Recently, some Large Language Models (LLMs) [24; 25; 26] with hundreds of billions of parameters have been proposed and have achieved impressive results on various NLP tasks. DeBERTa (Decoding-enhanced BERT with disentangled attention) [25] introduces a disentangled attention mechanism and a decoding-enhanced training procedure to improve the efficiency and effectiveness of the BERT model. The disentangled attention mechanism in DeBERTa helps to reduce the impact of irrelevant information and increase the importance of relevant information in the attention mechanism. Moreover, the decoding-enhanced training procedure trains the model to generate the output sequence in a left-to-right order, rather than a random order. In contrast, T5 [24] is a text-to-text transformer that is pre-trained on a variety of tasks and can perform a wide range of natural language processing tasks by converting them into a text-to-text format, while PaLM (Pathways-augmented Language Model) [26] combines traditional transformer-based models with a novel pathways mechanism that allows for more efficient processing of long sequences. The pathways mechanism works by dividing the input sequence into smaller segments and processing each segment separately. One common disadvantage among T5, DeBERTa, and PaLM is their computational cost: all three models are large and complex, requiring significant computational resources to train and fine-tune. This can make it challenging to deploy these models on resource-constrained devices. In contrast, the smaller number of parameters is one of our model's key advantages, as it allows for more efficient training and inference.
## 3 Methodology
Fig. 2 displays the architecture of our model, which builds upon the pre-trained multi-layer transformer from the LUKE model [6]. Our model incorporates a number of significant modifications, the most notable of which is the construction of a heterogeneous graph that is linked to each token's designated relationship type as an attention pattern. This attention pattern is then integrated into the self-attention mechanism of our transformer using relative position encoding. To provide further context on our model, we present an outline below. Firstly, we partition the inputs into two distinct sequences, namely the word input for question and document tokens, including special characters such as [CLS] and [SEP], and the entity input for candidate answers or extracted entities originating from the document. Then, we compute a representation for each token in an embedding layer. Since our model uses relative position encodings in transformer layers, we exclude absolute position encodings in the embedding layer. Additionally, we build a heterogeneous graph based on relationships stemming from the entity input. The heterogeneous information regarding entities and the input sequence representations are imported to the transformer layers. In the multi-head attention part of the layers, an entity-aware self-attention mechanism of LUKE is modified by segregating the attention matrix into four parts: w2w, w2e, e2w, and e2e, depending on token types - word (w) or entity (e). For the w2w part, we use the global-local attention approach utilized in Longformer [10] by designating certain input tokens as local, to focus on nearby tokens, and some critical ones ([CLS] and question tokens) as global, to attend all tokens. For the w2e and e2w portions, we efficiently account for the relationship type between each entity token with word token, enabling them to attend to each other more strongly if they are related. Based on the relationship type of each pair, we assign a unique relative position embedding and add it to the scaled dot product operation. For example, one relative position label is assigned to link the entity tokens with the word tokens that belong to them, and a different label for those that do not. Additionally, we integrate the heterogeneous graph information into the e2e part of the attention matrix, enabling it to attend more strongly to connected tokens in the graph than those unconnected. Finally, we compute a score for each candidate entity using a linear classifier in a score accumulation part and choose the candidate with the highest score
as the final answer.
### Embedding Layer
The Embedding layer utilizes the LUKE pre-trained language model to encode text into textual representations. For the cloze-style reading comprehension task, it takes the following input sequence: {[CLS], Q, [SEP], [SEP], D, [SEP], E} where Q represents all question tokens with a [PLC] special token in the position of the missing entity (placeholder), D refers to all document tokens along with two [ENT] special tokens for each entity within it as entity separators, and E denotes [MASK] tokens for the missing entity and each candidate entity appearing in the document. We consider E as the entity input and the others as the word input. Further, [CLS] and [SEP] special tokens are defined as a classification token and a separator, respectively.
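As a minimal sketch (not the authors' code), this input construction could look as follows; the `tokenize` callable, the pre-split `document` structure, and the ReCoRD `@placeholder` convention are the only assumptions:

```python
def build_inputs(question, document, candidates, tokenize):
    """Assemble the word and entity inputs (a sketch, not the authors' code).

    `document` is assumed to be pre-split into (segment, is_entity) pairs;
    ReCoRD questions mark the missing entity with the string "@placeholder".
    """
    # Word input: [CLS] question (with [PLC] at the blank) [SEP] [SEP] document [SEP]
    word_tokens = ["[CLS]"] + tokenize(question.replace("@placeholder", "[PLC]"))
    word_tokens += ["[SEP]", "[SEP]"]
    for segment, is_entity in document:
        if is_entity:  # wrap each document entity in [ENT] separator tokens
            word_tokens += ["[ENT]"] + tokenize(segment) + ["[ENT]"]
        else:
            word_tokens += tokenize(segment)
    word_tokens += ["[SEP]"]
    # Entity input: one [MASK] entity for the missing entity, plus one entity
    # token per candidate answer appearing in the document.
    entity_tokens = ["[MASK]"] + list(candidates)
    return word_tokens, entity_tokens
```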
In our modified transformer model, we begin by computing a representation for each token in an embedding layer. The input representation of a token is computed using the following two embeddings: token embedding and entity type embedding. A token embedding is a numerical representation of the corresponding token learned during pretraining the LUKE model. The entity type embedding defines the type of token, whether it is a word or an entity. Since our model uses relative position encodings in transformer layers, we omit absolute position encodings in the embedding layer. Therefore, we allow the transformer layers to learn the relative positions of the tokens, which is more flexible and effective for capturing the contextual relationships between them.
Figure 2: The architecture of our method.
### Graph Building
Similar to the LUKE-Graph, we build a heterogeneous graph based on relationships stemming from the entity input. This graph accurately portrays the natural linkages between entities within a document without relying on any external knowledge graphs. In this way, we denote entity tokens (the missing entity and the entities specified in the document) as the nodes of the graph. We identify three different kinds of undirected edges between them: 1) SENT-BASED edges: for node pairs that appear in the same sentence, 2) MATCH edges: for node pairs that are found in different sentences but share the same entity string, 3) PLC edges: edges between the node that corresponds to the missing entity of the question and all other nodes. An illustration of the graph creation is shown in Fig. 3.
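A minimal sketch of this edge construction, assuming each entity is given as a `(node_id, string, sentence_id)` triple with node 0 denoting the placeholder:

```python
from itertools import combinations

def build_entity_graph(entities):
    """Construct the heterogeneous graph edges (a sketch; input format is ours).

    `entities` is a list of (node_id, string, sentence_id) triples, where
    node 0 is the missing entity (placeholder) of the question.
    """
    edges = []  # (node_i, node_j, edge_type), undirected
    for (i, s_i, sent_i), (j, s_j, sent_j) in combinations(entities, 2):
        if i == 0 or j == 0:
            edges.append((i, j, "PLC"))         # placeholder linked to all nodes
        elif sent_i == sent_j:
            edges.append((i, j, "SENT-BASED"))  # co-occur in the same sentence
        elif s_i.lower() == s_j.lower():
            edges.append((i, j, "MATCH"))       # same string, different sentences
    return edges
```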
### Relative Position Embeddings
We incorporate relative position embeddings into the attention mechanism of each transformer layer instead of absolute position encodings in the embedding layer. This approach assigns more weight to pairs of words that are closer together in the sequence and less weight to those that are farther apart. Relative position embeddings provide information on the position of tokens in the input sequence relative to each other, and they are input-length independent, making them easy to adapt to longer input sequences. To utilize these embeddings in the transformer layer, we divide the self-attention matrix into four parts: word-to-word (w2w), word-to-entity (w2e), entity-to-word (e2w), and entity-to-entity (e2e), based on the token types (word or entity). We assign a unique attention pattern to each part by using relative position labels, which are depicted in Fig. 4 using different colors. These labels are then converted into learnable vectors that modify the attention mechanism.
Figure 3: An illustration of the graph creation
We apply local attention, also known as sliding window attention, for the w2w part. Local attention attends to a subset of positions in the input sequence and focuses on local interactions between nearby tokens, which can be computationally efficient for long sequences. The w2w part can be viewed as an undirected graph with labels on the edges connecting the word tokens. The labels depend on the relative position of the tokens, and 2k+1 labels (k tokens to the left, k tokens to the right, and the current token) are defined for a given maximum distance k. Words that are further than k words to the right of word i are given the (2k+1)-th label, whereas words that are further than k words to the left of word i are given the 0-th label. Fig. 5 illustrates a schematic (Fig. 5a) and a numerical (Fig. 5b) lookup table of relative position labels on an input sequence of 7 words with k=2, which yields a total of \(2\times 2+1=5\) relative position embeddings (Fig. 5c) to be learned. Because local attention is not adaptable enough to develop task-specific representations, similar to Longformer [10], we mark some important tokens of the w2w part ([CLS] token, question tokens) as global, so that they attend to all positions in the input sequence. We make this attention operation symmetric: a global token attends to all other tokens in the sequence, and all tokens attend to it. Therefore, we assign a specific relative position label for each type of global token; one label for attending the [CLS] token with other tokens and another label for attending each question token with others.
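The w2w label matrix can be sketched as follows; the specific indices chosen for the global [CLS] and question labels are our own convention, not the paper's:

```python
import numpy as np

def w2w_relative_labels(seq_len, k, global_positions_cls, global_positions_q):
    """Relative position labels for the word-to-word attention part.

    Labels 0..2k encode clipped relative offsets (2k+1 labels in total);
    two extra labels mark the symmetric global attention of the [CLS]
    token and of the question tokens.
    """
    offsets = np.arange(seq_len)[None, :] - np.arange(seq_len)[:, None]
    labels = np.clip(offsets, -k, k) + k           # clip offsets into [0, 2k]
    CLS_LABEL, Q_LABEL = 2 * k + 1, 2 * k + 2
    for p in global_positions_cls:                 # [CLS] attends to/from all tokens
        labels[p, :] = CLS_LABEL
        labels[:, p] = CLS_LABEL
    for p in global_positions_q:                   # question tokens: same, own label
        labels[p, :] = Q_LABEL
        labels[:, p] = Q_LABEL
    return labels
```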
Similar to the ETC [8], we add other relative position labels based on pairwise relationships between tokens in the w2e and the e2w parts. Therefore, one relative position label is assigned to link the entity tokens in entity input with the word tokens that belong to them in word input (their string-matched mentions), and a different label for those that do not. We also encode the created heterogeneous graph (described in Section 3.2) in e2e part. In this way, the graph is viewed with labels on the edges connecting the entity tokens, and different relative position labels are assigned for each kind of edge (SENT-BASED, MATCH, and PLC) between them.
In summary, all relative position labels used in our self-attention mechanism are listed below:
* 2k+1 labels to attend to the k left and k right neighbors of each token in the w2w part.
* One label for [CLS] token of the w2w part to attend to all positions in the input sequence.
* One label for question tokens of the w2w part to attend to all positions in the input sequence.
* One label for the w2e and e2w parts to link the entity tokens in the entity input with their string-matched mention tokens in the word input, and a different label for those that do not.
* One label for the PLC edges of the e2e part to link the missing entity (placeholder) with other entities.
* One label for the SENT-BASED edges of the e2e part to link the entities in the same sentence.
* One label for the MATCH edges of the e2e part to link the same string-matched entities in different sentences.

Figure 4: Attention pattern used in our model. We use different colors to indicate different relative position labels.

Figure 5: Illustration of a schematic (Fig. 5a) and a numerical (Fig. 5b) lookup table of relative position labels on an input sequence of 7 words with k=2, which has a total of 5 relative position embeddings (Fig. 5c) to be learned.
### Transformer Layers with entity-aware self-attention
The heterogeneous information regarding entities and the input sequence representations are imported to the transformer layers. In the transformer layers, we modify the entity-aware self-attention mechanism of LUKE by adding relative position embeddings, which efficiently account for the influence of individual token pairings on their attention towards one another. Furthermore, we encode a heterogeneous graph alongside each token's designated relation type as an attention pattern, which is then integrated into the entity-aware self-attention mechanism using relative position encoding.
**Self-attention:** The self-attention mechanism is a key component of the transformer architecture. In the context of the transformer, self-attention refers to a mechanism by which the model can weigh the importance of different parts of a sequence when generating its output. Similar to LUKE [6], entity-aware self-attention is achieved by using the token type (word or entity) when computing the attention score. In a self-attention layer, the input sequence (the long and global inputs) is transformed into three vectors: the query vector, the key vector, and the value vector. These vectors are then used to compute an attention vector. In this way, a separate query matrix is used depending on the kinds of tokens involved (word to word, word to entity, entity to word, and entity to entity). Also, the relative position embeddings are added to the query and key vectors before computing the dot product. The relative position embeddings are designed to capture the relative distance between each pair of tokens in the input sequence, allowing the model to attend to nearby tokens more strongly than those further away. To compute the attention weights, the dot product between the modified query and key vectors is divided by the square root of the dimension of the key vector, as in the standard self-attention mechanism. The resulting scores are then passed through a softmax function. Finally, the attention vector is computed as the weighted sum of the value vectors.
Formally, given an input sequence \(x_{1},x_{2},...,x_{p}\), where \(x_{i}\in\mathbb{R}^{L}\) is a token representation, the attention vectors \(y_{1},y_{2},...,y_{p}\), with \(y_{i}\in\mathbb{R}^{H}\), are calculated as follows:

\[y_{i}=\sum_{j=1}^{p}a_{ij}\,(x_{j}W^{V})\]

\[a_{ij}=\text{softmax}\left(\frac{x_{i}W^{Q}(x_{j}W^{K})^{T}+x_{i}W^{Q}(r_{ij})^{T}}{\sqrt{H}}\right) \tag{1}\]

where \(W^{Q},W^{K},W^{V}\in\mathbb{R}^{L\times H}\) are the query, key, and value matrices, \(r_{ij}\in\mathbb{R}^{H}\) is the learnable relative position embedding associated with the label assigned to the token pair \((i,j)\), and \(W^{Q}\) is selected from \(\{W^{Q_{w2w}},W^{Q_{w2e}},W^{Q_{e2w}},W^{Q_{e2e}}\}\) according to the types of tokens \(i\) and \(j\).
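For illustration, a single-head sketch of Equation (1) in PyTorch might look as follows; for brevity it uses one query matrix, whereas the model selects among the four type-specific query matrices:

```python
import torch
import torch.nn.functional as F

def attention_with_relative_positions(x, W_q, W_k, W_v, rel_emb, labels, H):
    """Single-head attention following Equation (1) (a sketch).

    x: (p, L) token representations; labels: (p, p) long tensor of relative
    position label ids; rel_emb: (num_labels, H) learnable embeddings r_ij.
    """
    q, k, v = x @ W_q, x @ W_k, x @ W_v                    # (p, H) each
    r = rel_emb[labels]                                     # (p, p, H): r_ij per pair
    scores = q @ k.T + torch.einsum("ph,pjh->pj", q, r)     # content + position term
    a = F.softmax(scores / H ** 0.5, dim=-1)                # scaled softmax weights
    return a @ v                                            # y_i = sum_j a_ij (x_j W^V)
```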
Finally, in the score accumulation part, a score is computed for each candidate entity, and the candidate with the highest score is chosen as the best answer \(e^{*}\):
\[e^{*}=\underset{e\in E}{\text{argmax}}\ f_{o}\left(\left[y_{PLC};y_{e}\right]\right) \tag{2}\]
where \(f_{o}(\cdot)\) is a fully connected layer followed by a sigmoid function, \(\left[y_{PLC};y_{e}\right]\) is the concatenation of the PLC representation \(y_{PLC}\) and the candidate representation \(y_{e}\), and \(E\) denotes the set of candidate entities in the entity input.
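A minimal sketch of this scoring head, assuming a 1024-dimensional token representation (the sigmoid is monotonic, so the argmax is unchanged by it):

```python
import torch
import torch.nn as nn

class ScoreAccumulation(nn.Module):
    """Candidate scoring head of Equation (2) (a sketch; layer sizes assumed)."""

    def __init__(self, hidden_size=1024):
        super().__init__()
        # f_o: fully connected layer followed by a sigmoid
        self.f_o = nn.Sequential(nn.Linear(2 * hidden_size, 1), nn.Sigmoid())

    def forward(self, y_plc, y_candidates):
        # y_plc: (L,) PLC representation; y_candidates: (num_candidates, L)
        pairs = torch.cat([y_plc.expand_as(y_candidates), y_candidates], dim=-1)
        scores = self.f_o(pairs).squeeze(-1)   # one score per candidate
        return scores.argmax()                 # index of the best answer e*
```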
## 4 Experiments
This section aims to demonstrate the effectiveness of our model on ReCoRD [15], a cloze-style reading comprehension dataset, and compare its performance with other state-of-the-art methods.
### Training Configuration
The architecture of the model is based on that of the LUKE large model [6], which has 24 hidden layers, 1024 hidden dimensions (\(L=1024\)), 64 attention head dimensions (\(H=64\)), and 16 self-attention heads. This means that a word token embedding has 1024 dimensions, while an entity token embedding has 256 dimensions, which is then converted to 1024 dimensions using a dense layer. The input text is tokenized using RoBERTa's tokenizer [27], which has a vocabulary of 50K words, and the entity vocabulary includes 500K common entities, as well as two special entities, [MASK] and [UNK], with [UNK] being used for missing entities. The maximum sequence length \((P)\) and maximum question length are set to 512 and 90, respectively, and the other hyperparameters, shown in Table 1, are similar to those of the LUKE model. The attention pattern is created with k=150, which controls the maximum distance between two tokens in the input sequence that can attend to each other in the w2w part. However, we adjust this value based on the length of the input sequence and the presence of longer sequences in the dataset that require a larger value. The model is trained using an Amazon EC2 p3.8xlarge instance with four GPUs, and a single model trained with a batch size of 2 for 2 epochs takes about 2 hours on the ReCoRD dataset. Additionally, when transferring weights from the pre-trained LUKE model, the original query matrix \(W^{Q}\) is copied to the separate matrices \(W^{Q_{w2w}},W^{Q_{w2e}},W^{Q_{e2w}},W^{Q_{e2e}}\) in the model.
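For concreteness, the optimization settings of Table 1 could be wired up as follows; the base learning rate and the linear-warmup schedule are assumptions, since the paper specifies only the warmup ratio:

```python
import torch

# Settings from Table 1 (batch size and epochs from the training description)
config = dict(max_seq_length=512, max_question_length=90, k=150,
              batch_size=2, epochs=2)

def make_optimizer(model, num_training_steps, lr=1e-5):
    # lr is an assumed value; betas, eps, and weight decay follow Table 1.
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr,
                                  betas=(0.9, 0.98), eps=1e-6,
                                  weight_decay=0.01)
    warmup = int(0.06 * num_training_steps)        # warmup ratio 0.06
    scheduler = torch.optim.lr_scheduler.LambdaLR(
        optimizer,
        lambda step: min(1.0, step / max(1, warmup)))  # linear warmup, then flat
    return optimizer, scheduler
```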
### Dataset
This study utilizes the ReCoRD (Reading Comprehension with Commonsense Reasoning Dataset) [15], which is a cloze-style dataset designed to evaluate the ability of NLP models to understand and reason over complex text. The dataset consists of over 120,000 cloze-style questions, which require selecting a missing entity (placeholder) to fill in the blank in a given document. For each question, the placeholder is filled with a suitable answer from all entities in the associated document. One of the unique features of ReCoRD is that it emphasizes commonsense reasoning and contextual understanding over surface-level factual recall. The dataset includes documents from a variety of domains, including science, history, and literature, and the questions require reasoning about the context to determine the correct answer. For example, a question might ask about the actions of a character in a story, or the implications of a scientific experiment. The dataset is split into 100k training, 10k development, and 10k test sets, with each set containing documents and questions from a range of difficulty levels. ReCoRD has been used as a benchmark for evaluating the performance of NLP models on tasks that require more advanced reasoning and comprehension skills.
The ReCoRD dataset uses two evaluation metrics to assess the performance of NLP models on the task of cloze-style reading comprehension.
| **Name** | **Value** | **Name** | **Value** |
| --- | --- | --- | --- |
| Max Seq length (\(P\)) | 512 | number of transformer layers | 24 |
| Max question length | 90 | hidden size in transformer (\(L\)) | 1024 |
| Warmup ratio | 0.06 | attention head size in transformer (\(H\)) | 64 |
| Weight decay | 0.01 | number of self-attention heads in transformer | 16 |
| Adam \(\beta_{1}\) | 0.9 | word token embedding size | 1024 |
| Adam \(\beta_{2}\) | 0.98 | entity token embedding size | 256 |
| Adam \(\epsilon\) | 1e-6 | maximum relative position distance (\(k\)) | 150 |

Table 1: Hyper-parameters of our model.
The first metric is the exact match (EM) score, which measures the percentage of questions for which the model's predicted answer exactly matches the ground-truth answer. The second metric is the F1 score, which calculates the harmonic mean of precision and recall for the predicted answer compared to the ground-truth answer. The EM metric is a strict measure of accuracy and penalizes models heavily for even minor errors in their predictions. For example, if a model predicts the correct answer but misspells it slightly, it will receive a score of 0 for that question under the EM metric. The F1 score, on the other hand, is a more forgiving measure that takes into account partial matches between the predicted and ground-truth answers.
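A minimal sketch of both metrics (omitting the punctuation and article normalization used by official evaluation scripts):

```python
def exact_match(pred, gold):
    """1.0 if the predicted answer matches the gold answer exactly, else 0.0."""
    return float(pred.strip().lower() == gold.strip().lower())

def f1_score(pred, gold):
    """Token-level F1: harmonic mean of precision and recall over shared tokens."""
    pred_tokens, gold_tokens = pred.lower().split(), gold.lower().split()
    common = sum(min(pred_tokens.count(t), gold_tokens.count(t))
                 for t in set(pred_tokens))
    if common == 0:
        return 0.0
    precision = common / len(pred_tokens)
    recall = common / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```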
### Results and Discussion
Table 2 presents a comparison of our method with other state-of-the-art models on the ReCoRD dataset [15]. The results have been obtained from multiple sources, including the ReCoRD\({}^{1}\) and SuperGLUE\({}^{2}\) [28] leaderboards, as well as relevant literature [6, 29].
Footnote 1: [https://sheng-z.github.io/ReCoRD-explorer/](https://sheng-z.github.io/ReCoRD-explorer/)
Footnote 2: [https://super.gluebenchmark.com/tasks](https://super.gluebenchmark.com/tasks)
Our model (GESA-500M) has been evaluated against three large language models (LLMs), namely T5-11B [24], PaLM-540B [26], and DeBERTa-1.5B [25], which outperform our model by +1.7 EM, +1.6 EM, and +2.4 EM, respectively. However, these LLMs require a significant amount of computational resources for effective training and inference, which can make them expensive and time-consuming to use. Nonetheless, comparing transformer-based models can be challenging due to variations in computing and pretraining resources. Our model differs significantly from the aforementioned LLMs, as it has only around 500 million parameters, while they have hundreds of billions of parameters. When compared to T5-Large, which has 770 million parameters, our model outperforms it by +5.4 F1 and +5.8 EM, despite being 1.5 times smaller. Our encoder-only setup is particularly suitable for tasks that require commonsense reasoning, while the encoder-decoder architecture of the DeBERTa and T5 models is more versatile for various natural language processing tasks. Conversely, PaLM has a decoder-only setup and requires significant computational resources, making it less accessible to researchers or organizations without adequate computational resources, especially when deploying it on resource-constrained devices. In conclusion, the smaller number of parameters in our model provides a significant advantage over LLMs, enabling more efficient training and inference.
In comparison to other transformer-based models, some models such as BERT [1], XLNet [30], and RoBERTa [27] lack explicit knowledge of inference concepts. As a result, their representations, as shown in the BERT-based group of Table 2, fall short in their ability to support reasoning and have lower accuracy. Since entities play a crucial role in such datasets, and there is reasoning about the relationships between them, LUKE [6] outperforms RoBERTa significantly by +0.6 F1/+0.6 EM, with the only difference being the addition of entities to the inputs and the use of an entity-aware self-attention mechanism. Therefore, we have chosen LUKE as our transformer-based model. Furthermore, LUKE-Graph [7] has demonstrated that the creation of a heterogeneous graph for reasoning through the relationships of these entities can significantly enhance the model's capability to select relevant information and make better decisions. To achieve this, we have integrated the heterogeneous information with the transformer layers instead of using a separate graph module, eliminating the need for a multi-step model. As a result, our model exhibits a significant improvement over LUKE-Graph, with an increase of +0.7 F1 and +0.5 EM, while also reducing execution time. This indicates that incorporating graph information with attention is an effective approach, as demonstrated by a gain of +2.1 (F1+EM) over LUKE.
Compared to the graph-based models, namely Graph-BERT [31], SKG-BERT [32], KT-NET [33], and KELM [29], our model achieves a significant improvement of +29.2 F1/+30.9 EM, +19.4 F1/+19.5 EM, +17.4 F1/+18.7 EM, and +15.5 F1/EM, respectively. While SKG-BERT, KT-NET, and KELM employ external knowledge graphs such as WordNet, ConceptNet, and NELL, our model does not rely on any additional knowledge graph. Instead, we convert document entities into a graph to capture their relationships.
### Ablation study
To examine the impact of each component on our model, we conduct multiple ablation experiments on the ReCoRD development dataset.
**Effect of using different attention patterns:** In this study, we examine the impact of different attention patterns on the performance of our model across four components: w2w, w2e, e2w, and e2e. Our findings, presented in Table 3, indicate that attention mechanisms play a crucial role in optimizing model performance. In the w2w component, we experiment by omitting attention between global [CLS] and question tokens, and only using a local attention pattern. We observe a decline in F1/EM of 0.16/0.14, indicating the importance of allowing some crucial tokens to attend to all tokens. Similarly, in the e2e component, we use a local attention pattern and ignore the heterogeneous information from the created graph in the attention mechanism, which results in a performance drop of 1.4%. This highlights the significance of integrating the graph with self-attention.
We also investigate the impact of removing relative position labels from Equation (1) in different attention components. Following LUKE [6], we use Equation (3) to compute the attention score between two tokens.
\[a_{ij}=\text{softmax}\left(\frac{x_{i}W^{Q}(x_{j}W^{K})^{T}}{\sqrt{H}}\right) \tag{3}\]
When we remove all relative position labels in w2e, e2w, and e2e components, and only keep 2k+1 labels in w2w, the model's performance decreases significantly, reaching an F1 score of 90.1 and EM score of 89.65, which is lower than the accuracy of the baseline LUKE (91.4 F1/90.8 EM). The reduced performance is not only due to the removal of graph information and the relationship between related entity and word tokens, but also because the use of relative position embeddings in the w2w component limits the model to encoding only the relative positions of tokens within a certain range of each other (k). Consequently, the model may not be capable of capturing dependencies that span longer distances in the sequence.
To demonstrate the importance of relative position labels in the w2e, e2w, and e2e components, we remove 2k+1 and global labels in w2w but keep the labels in other components. Indeed, we use Equation (3) to compute the attention score between tokens in w2w. The F1/EM results degrade by 0.94/1.21, indicating that relative position labels in the three components are more critical than those in w2w.
| **Group** | **Name** | **Dev F1** | **Dev EM** | **Test F1** | **Test EM** |
| --- | --- | --- | --- | --- | --- |
|  | Human | 91.64 | 91.28 | 91.69 | 91.31 |
| BERT-based | BERT-Base [1] | - | - | 56.1 | 54.0 |
|  | BERT-Large\({}^{+}\) [1] | 72.2 | 70.2 | 72.0 | 71.3 |
|  | XLNet-Verifier\({}^{\pm}\) [30] | 82.1 | 80.6 | 82.7 | 81.5 |
|  | RoBERTa\({}^{+}\) [27] | 89.5 | 89.0 | 90.6 | 90.0 |
| Graph-based | Graph-BERT [31] | - | - | 63.0 | 60.8 |
|  | SKG-BERT\({}^{\pm}\) [32] | 71.6 | 70.9 | 72.8 | 72.2 |
|  | KT-NET\({}^{+}\) [33] | 73.6 | 71.6 | 74.8 | 73.0 |
|  | KELM\({}^{+}\) [29] | 75.6 | 75.1 | 76.7 | 76.2 |
| LLM | T5-Large [24] | - | - | 86.8 | 85.9 |
|  | T5-11B [24] | 93.8 | 93.2 | 94.1 | 93.4 |
|  | PaLM 540B [26] | _94.0_ | _94.6_ | 94.2 | 93.3 |
|  | DeBERTa-1.5B [25] | 91.4 | 91.0 | _94.5_ | _94.1_ |
| LUKE-based | LUKE\({}^{+}\) [6] | 91.4 | 90.8 | 91.2 | 90.6 |
|  | LUKE-Graph [7] | 91.36 | 90.95 | 91.5 | 91.2 |
|  | **GESA-500M** | **92.14** | **91.61** | **92.2** | **91.7** |

Table 2: The results obtained from the ReCoRD dataset. It is noteworthy that all models except DeBERTa (ensemble) are based on a single model. The absence of results is denoted by a hyphen. The results marked with [\(\pm\)] are reported in the KELM paper, while those marked with [\(+\)] are reported in the LUKE paper. The remaining results have been obtained from various sources, including the ReCoRD and SuperGLUE leaderboards, as well as relevant literature. GESA-500M indicates our model with 500 million parameters.
**Impact of relative position labels:** We investigate the effect of relative position labels by removing them individually, as shown in Table 4. In our approach, we incorporate two relative position labels for global tokens in the w2w part, one for the [CLS] token and another for question tokens. To assess the impact of each label, we individually omit them and observe a performance drop of 0.05 and 0.11 on the F1 metric, respectively. Our results reveal that the relationships of question tokens with all input tokens are more crucial than those of the [CLS] token. We also restrict the use of relative position labels between word and entity tokens in the w2e and e2w parts to only represent the missing entity (PLC) relationships with question tokens. This results in the removal of any special attention between candidate entities and their mentions, leading to a decrease of 0.41 in the F1 metric. Conversely, when we remove the connection between PLC and question tokens and instead maintain the relationships between candidate entities and their mentions, the F1 metric decreases by 0.22.
To evaluate the contribution of each edge type in the constructed graph, we eliminate the corresponding labels individually and measure their impact on the model's performance. Specifically, we remove the relative position label of edges between entity pairs of the graph in the same sentences (SENT-BASED edges), the label of connections between matching entities (MATCH edges), or the label of edges between the missing entity (placeholder) and other entities (PLC edges). The performance on F1 drops by 0.43, 0.25, and 0.56, respectively. Our findings suggest that the connections between the PLC and other entities have a more substantial impact than other relationships, as removing the PLC edges leads to the most significant performance drop of 0.56 on the F1 metric. We also investigate the effect of using a single label for all SENT-BASED, MATCH, and PLC edges instead of three labels, which results in a decrease of F1 by 0.27.
## 5 Conclusion
In this paper, we proposed a graph-enhanced self-attention approach that extends upon the pre-trained multi-layer transformer of the LUKE model. Our approach incorporates several modifications to address the limitations of complex reasoning tasks in machine reading comprehension. Specifically, we introduced a unique attention pattern that includes global-local attention for word tokens, graph attention for entity tokens, and attention for related word and entity tokens to enhance performance. The attention pattern is integrated into the self-attention mechanism of the transformer using relative position encoding. Furthermore, a heterogeneous graph is constructed based on relationships from the entity input, which exhibits strong attention towards tokens connected in the graph in the attention mechanism. Our proposed model also considers the relationship type between each entity token and word token, resulting in more efficient attention between them if they are related. Experimental results indicate that our model outperforms both the LUKE-Graph and the baseline LUKE model on the ReCoRD dataset with commonsense reasoning.
| **Attention** | **Model** | **Dev F1** | \(\Delta\) | **Dev EM** | \(\Delta\) |
| --- | --- | --- | --- | --- | --- |
|  | Full Model | 92.14 | - | 91.61 | - |
| w2w | w/o [CLS] global label | 92.09 | 0.05 | 91.57 | 0.04 |
| w2w | w/o global label of question tokens | 92.03 | 0.11 | 91.51 | 0.1 |
| w2e and e2w | w/o label between candidate entities | 91.73 | 0.41 | 91.14 | 0.47 |
| w2e and e2w | w/o label between PLC and question | 91.92 | 0.22 | 91.42 | 0.19 |
| e2e | w/o label of SENT-BASED edges | 91.71 | 0.43 | 91.22 | 0.39 |
| e2e | w/o label of MATCH edges | 91.89 | 0.25 | 91.4 | 0.21 |
| e2e | w/o label of PLC edges | 91.58 | 0.56 | 90.98 | 0.63 |
| e2e | w one label for all edges | 91.87 | 0.27 | 91.32 | 0.29 |

Table 4: Ablation results of relative position labels on the ReCoRD dev set. w/o stands for without.
| **Model** | **Dev F1** | \(\Delta\) | **Dev EM** | \(\Delta\) |
| --- | --- | --- | --- | --- |
| Full Model | 92.14 | - | 91.61 | - |
| Local attention pattern in w2w (w/o global tokens) | 91.98 | 0.16 | 91.47 | 0.14 |
| Local attention pattern in e2e (w/o graph attention) | 90.72 | 1.42 | 90.2 | 1.41 |
| w/o relative position labels in the w2e, e2w and e2e parts | 90.1 | 2.04 | 89.65 | 1.96 |
| w/o relative position labels in w2w (2k+1 and global labels) | 91.2 | 0.94 | 90.4 | 1.21 |

Table 3: Effect of using different attention patterns on the ReCoRD dev set. w/o stands for without.
In conclusion, our proposed model effectively integrates heterogeneous graph information into the entity-aware self-attention mechanism of LUKE using relative position labels, which has the potential to improve the performance of various natural language processing tasks that require the handling of graph structures.
|
2310.02488 | Machine learning for online sea ice bias correction within global
ice-ocean simulations | In this study we perform online sea ice bias correction within a GFDL global
ice-ocean model. For this, we use a convolutional neural network (CNN) which
was developed in a previous study (Gregory et al., 2023) for the purpose of
predicting sea ice concentration (SIC) data assimilation (DA) increments. An
initial implementation of the CNN shows systematic improvements in SIC biases
relative to the free-running model, however large summertime errors remain. We
show that these residual errors can be significantly improved with a data
augmentation approach, in which sequential CNN and DA corrections are applied
to a new simulation over the training period. This then provides a new training
data set with which to refine the weights of the initial network. We propose
that this machine-learned correction scheme could be utilized for generating
improved initial conditions, and also for real-time sea ice bias correction
within seasonal-to-subseasonal sea ice forecasts. | William Gregory, Mitchell Bushuk, Yongfei Zhang, Alistair Adcroft, Laure Zanna | 2023-10-03T23:30:27Z | http://arxiv.org/abs/2310.02488v1 | # Machine learning for online sea ice bias correction within global ice-ocean simulations
###### Key Points
We use a convolutional neural network (CNN) to perform online sea ice bias correction within global ice-ocean simulations.
###### Abstract
In this study we perform online sea ice bias correction within a GFDL global ice-ocean model. For this, we use a convolutional neural network (CNN) which was developed in a previous study (Gregory et al., 2023) for the purpose of predicting sea ice concentration (SIC) data assimilation (DA) increments. An initial implementation of the CNN shows systematic improvements in SIC biases relative to the free-running model, however large summertime errors remain. We show that these residual errors can be significantly improved with a data augmentation approach, in which sequential CNN and DA corrections are applied to a new simulation over the training period. This then provides a new training data set with which to refine the weights of the initial network. We propose that this machine-learned correction scheme could be utilized for generating improved initial conditions, and also for real-time sea ice bias correction within seasonal-to-subseasonal sea ice forecasts.
## Plain Language Summary
Climate models contain errors which often lead to predictions which are consistently out of agreement with what we observe in reality. In some cases we know the origin of these errors, for example predicting too much sea ice as a result of consistently cool ocean temperatures. In reality however, there are typically numerous model errors interacting across the atmosphere, ocean and sea ice, and to manually parse through large volumes of climate model data in an attempt to isolate these errors in time and space is highly impractical. Machine learning on the other hand is a framework which is well-suited to this task. In this work we take a machine learning model which, at any given moment, ingests information about a climate model's atmosphere, ocean and sea ice conditions, and predicts how much error there is in the climate model's representation of sea ice, without seeing any actual sea ice observations. We use this to adjust the sea ice conditions in one particular climate model as it is running forward in time making predictions, and we find that this significantly reduces the model's sea ice errors globally.
## 1 Introduction
Machine learning (ML) algorithms are beginning to cement their position as viable subgrid-scale climate model parameterizations, through their ability to isolate complex non-linear relationships within large volumes of high dimensional data (Brenowitz and Bretherton, 2018; Gentine et al., 2018; O'Gorman and Dwyer, 2018; Yuval and O'Gorman, 2020; Finn et al., 2023; Sane et al., 2023). Typically this is achieved by training an ML model to learn a functional mapping which characterizes the impact of subgrid processes on resolved scales, by training on high resolution simulations or observational data. Significant effort is currently being afforded to the development of these ML parameterizations in the context of e.g., ocean turbulence, with early results (Zanna and Bolton, 2020; Frezat et al., 2022; Ross et al., 2023; Kurz et al., 2023; C. Zhang et al., 2023) highlighting their potential to improve important climate statistics, such as eddy kinetic energy at large scales, over their traditional physics-based counterparts.
Alternatively, combining data assimilation (DA) and ML has been shown to be a promising framework for learning either subgrid parameterizations or systematic model errors across various domains (Bonavita and Laloyaux, 2020; Brajard et al., 2021; Farchi et al., 2021; Mojgani et al., 2022; Chen et al., 2022; Laloyaux et al., 2022; He et al., 2023). In a recent study by Gregory et al. (2023), hereafter G23, the authors presented a DA-based ML framework in which convolutional neural networks (CNNs) were used to predict state-dependent sea ice errors within an ice-ocean configuration of the Geophysical Fluid Dynamics Laboratory (GFDL) Seamless system for Prediction and EArth System Research (SPEAR) model, as a way to highlight the feasibility of a data-driven sea ice model parameterization within SPEAR. They approached this by first showing that the
climatological sea ice concentration analysis increments from an ice-ocean DA experiment map closely onto the systematic bias patterns of the equivalent free-running model. This suggested that an ML model which is able to predict the analysis increments could, in principle, reduce sea ice biases as an online model parameterization or bias correction tool. Their subsequent CNN architecture then used information from local model state variables and their tendencies to make predictions of the corresponding sea ice concentration analysis increment at any grid cell location. These offline predictions were shown to generalize well to both the Arctic and Antarctic domains, and across all seasons. However, offline performance does not always directly translate to online simulations, which can sometimes exhibit instabilities as well as climate drift after implementation (Rasp et al., 2018; Ott et al., 2020; Brenowitz et al., 2020). In such cases, the ML model may require an additional online training step in order to sample a larger model state space than the one on which it was initially trained (Rasp, 2020).
In this present work, we advance the field of ML-based parameterizations by investigating the online performance of the G23 DA-based ML model when used as a tool to correct short-term sea ice error growth. We implement the correction scheme here within a coupled ice-ocean configuration of SPEAR, as the G23 CNN was originally trained on data from an ice-ocean DA system, which therefore allows us to make direct comparisons of model biases and increments produced from both the CNN and DA simulations. If the CNN is able to reduce sea ice biases relative to the free-running model, then this will provide a solid foundation for future work into assessing the generalization to fully coupled systems, and ultimately a physics-based sea ice model parameterization.
## 2 Data and methods
### SPEAR ice-ocean model
SPEAR is a fully coupled ice-ocean-atmosphere-land model (Delworth et al., 2020), which shares the same components as the GFDL CM4 model (Held et al., 2019), however with parameterizations and resolutions geared toward seasonal-to-decadal prediction. The ocean and sea ice components are configured at a 1\({}^{\circ}\) horizontal resolution and correspond to the Modular Ocean Model v6 (MOM6) and the Sea Ice Simulator v2 (SIS2), respectively (Adcroft et al., 2019). In this work, we consider an ice-ocean configuration of SPEAR, in which MOM6 and SIS2 are forced by atmospheric conditions from the Japanese 55-year Reanalysis for driving ocean-sea-ice models (JRA55-do; Tsujino et al. (2018)). Details of the ice-ocean experiments are provided in section 2.3.
### Machine learning model
The CNN model from G23 was trained to predict sea ice concentration (SIC) increments from a SPEAR ice-ocean DA experiment (Y. Zhang et al. (2021); hereafter Z21). The Z21 DA experiment spanned January 1st 1982 - January 1st 2018, where satellite observations of SIC from the National Snow and Ice Data Center (NSIDC; Cavalieri et al. (1996)) NASA Team algorithm were assimilated into SIS2 every 5 days using the Ensemble Adjustment Kalman Filter (EAKF) approach (Anderson, 2001), and sea-surface temperatures were nudged towards observations from version 2 of the Optimum Interpolation Sea-Surface Temperature (OISSTv2) data set (Reynolds et al., 2007; Banzon et al., 2016) at the model timestep. It should be noted that SIS2 has a 5-category ice thickness distribution (Bitz et al., 2001), with lower thickness bounds of 0.0, 0.1, 0.3, 0.7, and 1.1 meters. The (observable) aggregate SIC field is therefore computed in the model as the sum of the sea ice concentration in each category (SICN), hence \(\text{SIC}=\sum_{k=1}^{5}\text{SICN}_{k}\). Similarly, we compute the aggregate SIC increment (\(\Delta\)SIC) as the sum of the analysis increments in each category (\(\Delta\)SICN). The G23 CNN then uses 5-day mean inputs of state variables and tendencies corresponding to: SIC, sea-surface temperature (SST), zonal and meridional components of ice velocities (SIU and SIV, respectively), sea ice
thickness (SIT), net shortwave radiation (SW), ice-surface skin temperature (TS), sea-surface salinity (SSS), and a land-sea mask, in order to predict \(\Delta\)SIC. This prediction of \(\Delta\)SIC is then passed to a second CNN, along with SICN, to predict the category concentration increments \(\Delta\)SICN. For convenience we refer to these two CNNs as a single network hereafter.
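Schematically, this two-stage prediction can be sketched as follows; the tensor shapes and function names are illustrative rather than the actual model code:

```python
import numpy as np

def predict_category_increments(cnn_dsic, cnn_dsicn, state_5day_mean, sicn):
    """Two-stage increment prediction (a sketch; shapes and names are ours).

    state_5day_mean: (n_channels, ny, nx) 5-day means and tendencies of SIC,
    SST, SIU, SIV, SIT, SW, TS and SSS, plus the land-sea mask;
    sicn: (5, ny, nx) instantaneous category concentrations.
    """
    dsic = cnn_dsic(state_5day_mean)                       # aggregate dSIC field
    dsicn = cnn_dsicn(np.concatenate([dsic[None], sicn]))  # per-category dSICN
    assert dsicn.shape[0] == 5                             # 5 thickness categories
    return dsicn
```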
The implementation of the CNN into SIS2 here is performed in an analogous manner to DA. Specifically, we run an ensemble forecast of the model for 5 days (e.g., from 00:00 hours UTC on January 1st to 00:00 UTC on January 6th), where we then generate the corresponding \(\Delta\)SICN predictions for each ensemble member, add the predicted \(\Delta\)SICN fields to the instantaneous SICN state (i.e., the state at 00:00 UTC on January 6th), and restart the model for the next 5-day forecast (schematics of this 5-day forecast plus correction process are shown in Figure 1, although section 2.3 describes this figure in more detail). It is important to note that we also apply a post-processing after each correction. For this we follow the Z21 procedure for updating sea ice variables during DA, which is as follows: first we remove non-physical values from the updated SICN terms by applying a lower bound of 0 to each category, and then scaling each category by 1/SIC if the updated SIC is greater than 1. Secondly, in the case where the correction is removing all sea ice within a given grid cell, we set the corresponding ice and snow thickness, enthalpy, and ice salinity to 0, and subsequently set the ice-surface skin temperature to the freezing point of sea water, \(-1.8^{\circ}\)C. In the case where the correction is adding sea ice to a given category which was previously ice-free, we set the thickness of the ice to the mid-point value within the ice thickness distribution bounds (given as 0.05, 0.2, 0.5, 0.9, 1.3 meters). We then set the salinity, enthalpy and skin temperature of the ice to 5 psu, \(-87576\) J, and \(-0.36^{\circ}\)C, respectively (conditions based on an initial liquid
Figure 1: Schematic of bias correction schemes, shown in each panel for one 5-day assimilation/correction cycle. The dots represent the daily model state integrating forward in time. The vertical lines are then the corrections from either the CNN or DA. (a) Out-of-the-box G23 CNN training and implementation showing the Z21 DA simulation in red and the CNN simulation in light blue. (b) Optimized G23 CNN training and implementation showing simulations with combined CNN and DA corrections in violet, and the optimized CNN simulation in dark blue.
fraction of frazil ice of 0.75). We also ensure that the newly added sea ice contains no overlying snow.
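A sketch of this post-processing on category concentrations stored as a (5, ny, nx) array; the enthalpy, salinity, and skin temperature updates are indicated only in comments:

```python
import numpy as np

def postprocess_correction(sicn, dsicn):
    """Apply the Z21-style bounds after a CNN correction (a sketch; array
    conventions are ours, not the SIS2 implementation)."""
    sicn_new = np.maximum(sicn + dsicn, 0.0)       # lower bound of 0 per category
    sic = sicn_new.sum(axis=0)                     # aggregate concentration
    over = sic > 1.0
    sicn_new[:, over] /= sic[over]                 # rescale so that SIC <= 1
    newly_frozen = (sicn == 0.0) & (sicn_new > 0.0)
    # New ice takes the mid-point thickness of each category's bounds (meters);
    # its salinity, enthalpy, and skin temperature are set to 5 psu, -87576 J,
    # and -0.36 degC, with no overlying snow (those fields not shown here).
    h_mid = np.array([0.05, 0.2, 0.5, 0.9, 1.3])
    return sicn_new, newly_frozen, h_mid
```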
### Ice-ocean experiments
We compare four ice-ocean simulations in this study, where each extends for a 5-year period between January 1st 2018 and January 1st 2023. The initial ice and ocean conditions for all simulations are based on those from the Z21 DA experiment, which ended January 1st 2018. The atmospheric forcing is provided by JRA55-do reanalysis version 1.5, SSTs are nudged towards OISSTv2 observations using a piston velocity of 4 meters per day, and SSS is nudged to a seasonal climatology with a piston velocity of 1/6 meters per day. The experiments are given as follows:
1. The free-running model in ice-ocean mode (FREE).
2. An extension of the ice-ocean DA experiment (DA\({}_{\rm Z21}\)), which serves as the benchmark for this study.
3. An 'out-of-the-box' implementation of the G23 network (CNN\({}_{\rm G23}\)), where the network has been trained offline using all available data from the original DA experiment. This procedure is highlighted in Figure 1a, where, during training, the loss function \(\mathcal{L}\) minimizes the error between the network predictions, CNN\((\mathbf{\bar{X}})\), and the increment from DA, \(\Delta\)SIC(DA\({}_{\rm Z21}\)). Here \(\mathbf{\bar{X}}\) represents the 5-day mean state variables and tendencies described in section 2.2. After training, this produces the network CNN\({}_{\rm G23}\), which is then implemented over the 2018-2022 period. The reader is referred to G23 for more details of the architecture and hyperparameters related to the network training process.
4. An 'optimized' version of the G23 network (CNN\({}_{\rm opt}\)) where the weights of the G23 network are refined to improve online performance. For this we use both DA and CNNs to iteratively augment the training data, and subsequently refine the network weights after each augmentation iteration. For example, in the first iteration we run a new ice-ocean simulation between 1982-2017, and in which we apply a two-step CNN+DA correction every 5 days; first using CNN\({}_{\rm G23}\), and then using DA (see Figure 1b). We then use this 36-year simulation as a new training data set with which to update the weights of CNN\({}_{\rm G23}\), where, during training, the loss function now minimizes the error between the network predictions, CNN\((\mathbf{\bar{X}})\), and the total model error, \(\Delta\)SIC(CNN\({}_{\rm G23}\)+DA). This procedure is performed for a total of \(N=3\) iterations. The network refinement after each augmentation iteration is performed in an identical way to the offline learning procedure outlined in G23, except now we only update the weights for 5 epochs after each iteration.
Note that the 'Online validation' panels in Figure 1 highlight the simulations with CNN implementations relative to a simulation which applies the respective 'perfect' correction (i.e., the correction which either CNN would produce if it had 100% prediction accuracy). These simply correspond to the extended DA experiment for CNN\({}_{\rm G23}\) in Figure 1a, and the two-step CNN correction plus DA for CNN\({}_{\rm opt}\) in Figure 1b. A comparison of these corrections (increments) is made in section 3.2.2 in order to establish how the different implementation configurations manifest within the increments. As a final point to note here, all results presented in this work are based on ensemble mean fields, and all simulations are run with a 'no leap' calendar, which excludes leap-year days.
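Schematically, the optimization procedure of experiment 4 can be summarized as follows, where `run_cnn_plus_da` is a hypothetical stand-in for the 36-year ice-ocean simulation with combined CNN and DA corrections:

```python
def optimize_cnn(cnn, initial_conditions, n_iterations=3, epochs=5):
    """Data augmentation loop used to obtain CNN_opt (a schematic outline).

    `run_cnn_plus_da` is not a real function: it stands in for the 1982-2017
    simulation in which every 5-day forecast is corrected first by the current
    CNN and then by DA (Figure 1b).
    """
    for _ in range(n_iterations):
        # New training set: the input states and the combined CNN+DA increments.
        inputs, total_increments = run_cnn_plus_da(cnn, initial_conditions)
        # Refine the previous weights against the total model error for a
        # small number of epochs (5 per iteration in this study).
        cnn.fit(inputs, total_increments, epochs=epochs)
    return cnn
```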
## 3 Results
### Model bias
Figure 2 shows model biases for each of the ice-ocean experiments outlined in section 2.3. Initially considering the annual-mean spatial bias patterns of the free-running
model, we can see that this simulation is overall positively biased in both hemispheres, with largest Arctic biases occurring in the east Atlantic sector (Greenland, Barents, and Kara seas), and largest Antarctic biases in the Bellingshausen, Amundsen, and Indian Ocean sectors. The daily SIC root-mean-squared error (RMSE) curves then highlight the seasonal variation of the model bias, with largest RMSE values in FREE (black curves) occurring across June-August in the Arctic (22.7%), and December-February in the Antarctic (28.5%). The average RMSE over the entire simulation period corresponds to 17.8% and 20.0% in the Arctic and Antarctic, respectively, with larger errors in summer and smaller errors in winter. As expected, the DA experiment (DA\({}_{\rm Z21}\); red curves) visibly reduces the bias across all seasons, with average Arctic and Antarctic RMSE reductions relative to FREE of 4.2% and 5.5%, respectively.
Figure 2: Comparison of model biases from ice-ocean simulations over the 2018–2022 period. The first and second rows show the mean SIC biases (model minus observations), relative to the NSIDC NASA Team observations (DiGirolamo et al., 2022). The average RMSE of SIC is reported in each panel (RMSE computed each day over sea ice covered grid cells only, and then averaged over all days). The bottom two time series plots show pan-Arctic and pan-Antarctic RMSE of SIC for each simulation.
Turning to the two CNN correction schemes, the out-of-the-box implementation (\(\mathrm{CNN}_{\mathrm{G23}}\); light blue curves) shows systematic improvements relative to FREE, with average RMSE reductions of 1.9% and 3.9% in the Arctic and Antarctic, respectively. The modest improvements in the Arctic make it difficult to identify qualitative differences in the climatology spatial bias plots, however some improvements can be seen in the east Atlantic sector. This is also highlighted in the regional Arctic sea ice extent (SIE) time series (Figure S1), where regions are defined according to Meier and Stewart (2023). On the other hand, \(\mathrm{CNN}_{\mathrm{G23}}\) shows visible improvements across much of the Antarctic domain, with large bias reductions in the Amundsen Sea and Pacific Ocean. The regional Antarctic SIE time series (Figure S2) also more closely track the DA experiment throughout the majority of the simulation period, particularly in the Antarctic growth season. In the melt season however, the simulation shows a tendency to drift back towards the free-running model state. Comparing these results to the optimized CNN implementation (\(\mathrm{CNN}_{\mathrm{opt}}\); dark blue curves), we see marked skill improvements. The average RMSE reductions compared to FREE are 3.9% and 5.9% in the Arctic and Antarctic, respectively, and Figure 2 shows that sizeable improvements have been made in the summer months in both hemispheres. Furthermore, both pan-Antarctic and regional SIE (Figure S2) are also considerably improved in the melt season compared to \(\mathrm{CNN}_{\mathrm{G23}}\), and often show reduced biases relative to the DA experiment. It is worth noting however that many of the regional Antarctic SIE time series for \(\mathrm{CNN}_{\mathrm{opt}}\) show visible imprints of model shock (i.e., large fluctuations in extent, occurring every 5 days). This can occur in DA when there is significant drift between each assimilation cycle, and the fact that we see this here may suggest that there is rapid error growth occurring over the space of 5 days in the Antarctic. We discuss this further in section 3.2.2. In any case, the fact that the \(\mathrm{CNN}_{\mathrm{opt}}\) experiment, which does not assimilate any observations, has similar errors to \(\mathrm{DA}_{\mathrm{Z21}}\) suggests that the DA run is primarily correcting systematic model error and that \(\mathrm{CNN}_{\mathrm{opt}}\) is successfully capturing these errors.
### Understanding online improvements
Between the two CNN models, it is clear that \(\mathrm{CNN}_{\mathrm{opt}}\) is the most desirable scheme for reducing the free-running model bias. Furthermore, it is also clear that, relative to \(\mathrm{CNN}_{\mathrm{G23}}\), the largest gains from \(\mathrm{CNN}_{\mathrm{opt}}\) come in the summer months. In this section we take a closer look at the performance of each CNN correction scheme in order to discern how these improvements manifest in both the spatial error patterns of each simulation, and also in the SIC increments produced from each scheme.
#### 3.2.1 Snapshots
Figure 3 shows example snapshots of summertime model errors in each hemisphere (see also supplementary movie S1 for snapshots over the 5-year simulation period). In both hemispheres we can see that FREE (Figures 3a and 3e) contains large positive errors related to over-estimation of the sea ice edge (indicated by the positive SIC biases equator-ward of the observed ice edge contour). The errors pole-ward of the ice edge contour indicate local SIC errors. While the DA simulation (Figures 3b and 3f) retains some of these local SIC errors, a significant fraction of the ice edge errors are reduced; almost halving the RMSE in the Antarctic relative to FREE. \(\mathrm{CNN}_{\mathrm{G23}}\) (Figures 3c and 3g) shows some improvements relative to FREE (4.1% and 7.8% RMSE improvement in the Arctic and Antarctic, respectively), however there are still considerable ice edge and local SIC errors throughout both the Pacific sector in the Arctic, and the Atlantic and Pacific sectors in the Antarctic.
For \(\mathrm{CNN}_{\mathrm{opt}}\) (Figures 3d and 3h) there are clear RMSE improvements relative to both FREE and \(\mathrm{CNN}_{\mathrm{G23}}\) in both hemispheres, with remarkable improvements in the Antarctic. It could be argued however that, in the Arctic, the simulated ice edge position is not much improved in this example. A useful metric to confirm this is the integrated ice edge
error (IIEE; Goessling et al. (2016)), which computes the total area for which the ice edge is both over- and under-predicted, relative to satellite observations. For panels (a-d) in Figure 3, the IIEEs are given as 1.44, 0.90, 1.32, 1.05 million km\({}^{2}\), respectively, which shows that the sea ice edge from each correction scheme is in better agreement with the observations than FREE, and that CNN\({}_{\rm opt}\) does indeed improve over CNN\({}_{\rm G23}\) in this regard. Similarly in the Antarctic panels (e-h), the IIEEs are given as 3.98, 1.38, 3.11, 1.30 million km\({}^{2}\), respectively. Here we can see that CNN\({}_{\rm opt}\) even shows improved ice edge errors over the DA simulation (see Figure S4 for IIEE metrics computed over the entire 2018-2022 period). We can also take the assessment of ice edge errors further by disaggregating the SIC RMSE metric into grid points that lie pole-ward and equator-ward of the observed ice edge contour on any given day, in order to assess where the largest improvements from each correction scheme are manifesting (i.e., whether improvements are primarily in the ice edge location or SIC within the ice pack). From this decomposition (Figures S5 and S6) we find that, relative to FREE, in both hemispheres the largest RMSE reductions from each correction scheme come from improvements in the ice edge. Furthermore, we find that CNN\({}_{\rm opt}\) is considerably reducing the summer ice edge errors relative to CNN\({}_{\rm G23}\).
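For reference, the IIEE can be computed from concentration fields as in the following sketch (grid conventions are ours):

```python
import numpy as np

def iiee(sic_model, sic_obs, cell_area, threshold=0.15):
    """Integrated ice edge error of Goessling et al. (2016).

    Total area where the model over-predicts plus where it under-predicts
    ice cover at the 15% concentration threshold; inputs are 2-D
    concentration fields and grid-cell areas (e.g., in km^2).
    """
    model_ice = sic_model >= threshold
    obs_ice = sic_obs >= threshold
    overestimate = np.sum(cell_area[model_ice & ~obs_ice])   # spurious ice
    underestimate = np.sum(cell_area[~model_ice & obs_ice])  # missing ice
    return overestimate + underestimate
```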
#### 3.2.2 Analysis increments
Figure 4 shows the mean SIC increments for all DA and CNN correction schemes. The increments here correspond to those which were originally outlined in the validation panels of Figure 1. Namely, the extended DA experiment (DA\({}_{\rm Z21}\)), the out-of-the-box CNN implementation (CNNG\({}_{\rm 23}\)), the optimized version of the G23 network (CNN\({}_{\rm opt}\)), and each of the corrections from the two-step CNN+DA process, using the optimized network (CNN\({}_{\rm CNN+DA}\) and DA\({}_{\rm CNN+DA}\), respectively).
Figure 3: Snapshots of summertime model errors (model minus observations) for the FREE, DA and CNN ice-ocean simulations. Errors are computed relative to the NSIDC NASA Team observations (DiGirolamo et al., 2022). RMSE values are reported in each panel. The black contours mark the observed sea ice edge position (15% SIC).
The mean increments from both \(\text{CNN}_{\text{G23}}\) and \(\text{CNN}_{\text{opt}}\) in Figure 4 show largely similar spatial patterns in both hemispheres, with \(\text{CNN}_{\text{opt}}\) displaying overall larger magnitudes. In the Arctic, while both sets of CNN increments show isolated regions of positive values along the Eurasian coast, they do not reflect the larger area of mean positive increments seen in \(\text{DA}_{\text{Z21}}\) across the East Siberian, Laptev and Kara seas (from the \(\text{DA}_{\text{Z21}}\) Arctic time series panel in Figure 4 we can see that these positive increments originate in summer). On the other hand, the increments from \(\text{CNN}_{\text{CNN+DA}}\) do indeed show mean positive summer values in these regions. This suggests that the combination of input variables to the network which is needed to generate these positive predictions in the Arctic is only being sampled after the additional \(\text{DA}_{\text{CNN+DA}}\) step. Nonetheless, in section 3.2.1 we have seen that \(\text{CNN}_{\text{opt}}\) yields significant improvements in Arctic summer SIC errors over \(\text{CNN}_{\text{G23}}\), which is primarily coming from improvements in ice edge errors. This is consistent with the larger magnitude negative increments from \(\text{CNN}_{\text{opt}}\) in regions such as the Beaufort and Chukchi seas. This may then suggest that the positive summer increments seen in the DA and \(\text{CNN}_{\text{CNN+DA}}\) corrections are needed to target local summer SIC errors. Regarding \(\text{DA}_{\text{CNN+DA}}\), we can see that, in both hemispheres, these corrections are lower in magnitude than \(\text{DA}_{\text{Z21}}\) on average, which highlights how the initial correction from \(\text{CNN}_{\text{CNN+DA}}\) is removing a sizeable component of the model error; leaving less error to correct with DA. This is particularly the case in the Antarctic, where the daily increments from \(\text{DA}_{\text{CNN+DA}}\) are very close to zero, suggesting that the CNN has effectively removed the systematic component of the model error in the
Figure 4: Comparison of SIC increments produced from either DA or CNNs during online simulations. The first two rows show spatial climatologies over 2018–2022. The bottom two panels are then the equivalent time series, computed as mean fields over Arctic and Antarctic domains.
Antarctic. Meanwhile in the Arctic, there are still residual systematic summertime errors associated with under-predicting the positive increments, which \(\text{DA}_{\text{CNN+DA}}\) needs to address.
A natural question then arises as to why \(\text{DA}_{\text{Z21}}\) is as effective, if not more effective, at reducing the model bias than \(\text{CNN}_{\text{opt}}\), even though the increments from \(\text{CNN}_{\text{opt}}\) are considerably larger in magnitude. For this we turn to a comparison of the increments from each of the concentration categories (Figures S6 and S7). For \(\text{CNN}_{\text{opt}}\), we find that the largest magnitude corrections are being made to the thinnest ice category, while on the other hand, \(\text{DA}_{\text{Z21}}\) makes sizeable corrections to some of the thicker categories. This therefore means that \(\text{CNN}_{\text{opt}}\) needs to make larger corrections to achieve the same volume change as \(\text{DA}_{\text{Z21}}\). Furthermore, the fact that \(\text{CNN}_{\text{opt}}\) is largely updating the thinnest ice category also explains the model shock seen in the regional Antarctic SIE time series for \(\text{CNN}_{\text{opt}}\) in Figure S2. In the Pacific sector in summer for example, the CNN is adding large extents of ice, which the model is then consistently removing over each 5 day interval. This now seems conceivable given that the new ice is very thin (5 cm thickness), and hence would be susceptible to completely melting if advected to grid cells with sufficiently warm SSTs. Further evidence to support this claim comes from the fact that this model shock behavior is significantly damped in the regional sea ice volume time series (Figure S8); which makes sense as the thinnest ice category will typically contribute less to the regional volume.
## 4 Discussion and conclusions
There is currently much discourse centered around ML-based parameterizations and/or corrections within climate models, particularly in the context of how to achieve stable and unbiased simulations after implementation. Many studies have illustrated how to achieve stability within idealized models, where parameterizations are learned from high resolution simulations. For example, by swapping out neural networks for random forests (Yuval & O'Gorman, 2020; Watt-Meyer et al., 2021), or using online and/or reinforcement learning (Rasp, 2020; Kurz et al., 2023).
In this study we have shown that a CNN model which has been trained purely offline to predict increments from a sea ice DA system (which assimilates real sea ice observations) can be used 'out-of-the-box' to systematically reduce sea ice biases in a 5-year global ice-ocean simulation, without instabilities or drift. We have also introduced a data augmentation approach to optimize the offline-trained CNN, which significantly improves online generalization in both hemispheres; particularly in terms of reducing sea ice edge errors in summer. This augmentation approach is performed by iteratively generating new simulations in which corrections are applied from both the current iteration of the ML model, as well as DA. Each iteration of the augmentation procedure therefore provides a new training data set with which to refine the CNN weights from the previous iteration. While, in theory, this procedure could be repeated to convergence, we opted for \(N=3\) iterations in this study due to computational expense. It is likely however that continued iterations would yield further improvements, particularly in the Arctic summer, given that sizeable gains were made between each of the three iterations here (see Figure S9). We hypothesise that the improvements from this augmentation procedure are a result of exposing the network to input variables which contain information about how the model trajectory evolves after implementing the CNN (as opposed to training purely offline where the inputs have no feedback with the CNN).
Interestingly, we find that, relative to the original DA experiment, the climatological sea ice biases associated with the simulation which uses this 'optimized' network are actually modestly improved in the Antarctic (Figure 2). This is understandable when we consider that the target variable during each iteration of the network refinement step is no longer the increment from the original DA experiment, but rather the sum of the increments from the two-step CNN+DA experiment (recall Figure 1b). Therefore, the model bias from the original DA experiment should not be seen as the lower limit on what is achievable with the CNN. We can see this in Figure S10, where the bias of the simulation which applies this two-step CNN+DA procedure is indeed systematically lower than both the original DA experiment and the optimized CNN. This leaves exciting avenues for future work relating to improved initial conditions for numerical prediction. For seasonal predictions with the GFDL SPEAR model for example, initial conditions for the ice and ocean (Y. Zhang et al., 2022; Lu et al., 2020) are based on DA via Ensemble Kalman filters. The Ensemble Kalman filter is not formally designed to correct for systematic model error, and so this two-step CNN+DA procedure could be a way to generate more accurate initial conditions (see Figure S10), with the CNN and DA fixing the systematic and random components of the errors, respectively. Indeed, while this is similar to a weak constraint 4-D variational DA approach (Wergen, 1992; Zupanski, 1993; Tremolet, 2007), the CNN has computational advantages (once trained) in that it does not require the construction of an adjoint model (Bonavita and Laloyaux, 2020; Laloyaux et al., 2022).
Further avenues for future work also include the use of the optimized CNN for making bias corrections to real-time seasonal sea ice forecasts, or sea ice projections on climate timescales. For seasonal prediction, the methodology would follow that which has been presented in this study, except the CNN would be applied within the fully coupled SPEAR model. This would however require several considerations. For example, addressing the issue of model shock seen in the Antarctic (which could potentially be reduced by increasing the frequency of the CNN corrections), and also assessing generalization of the CNN to the fully coupled model, which includes new interactive feedbacks with an atmospheric model. Looking also to longer term climate projections, G23 discussed this in the context of implementing the CNN as a sea ice model parameterization. This would then require further considerations of how to appropriately conserve mass, heat and salt when adding/removing sea ice from the ocean.
## 5 Open Research
All data for training each CNN are openly available (Gregory, 2023), along with auxiliary data such as the optimized CNN weights and standardization statistics. Python code to pre-process the input data and train the CNNs is also available at the same location.
### Acknowledgments
William Gregory, Mitchell Bushuk, Alistair Adcroft and Laure Zanna received M\({}^{2}\)LInES research funding by the generosity of Eric and Wendy Schmidt by recommendation of the Schmidt Futures program. This work was also intellectually supported by various other members of the M\({}^{2}\)LInES project, as well as being supported through the provision of computational resources from the National Oceanic and Atmospheric Administration (NOAA) Geophysical Fluid Dynamics Laboratory (GFDL). We also thank Theresa Morrison and Feiyu Lu for their invaluable feedback on this article.
|
2305.06458 | Classical fully-packed loop model with attractive interactions on the
square lattice | We study a classical model of fully-packed loops on the square lattice, which
interact through attractive loop segment interactions between opposite sides of
plaquettes. This study is motivated by effective models of interacting quantum
matter arising in frustrated magnets or Rydberg atom arrays, for which loop
degrees of freedom appear at low energy. Through a combination of Monte Carlo
simulations and an effective height field theory, we find that the critical
point known to occur at infinite temperature gives rise to a high-temperature
critical phase with floating exponents. At lower temperature, the system
transitions via a Kosterlitz-Thouless phase transition to a nematic phase where
lattice rotation symmetry is broken. We discuss consequences for the phase
diagram of the quantum loop model on the same lattice. | Bhupen Dabholkar, Xiaoxue Ran, Junchen Rong, Zheng Yan, G. J. Sreejith, Zi Yang Meng, Fabien Alet | 2023-05-10T20:54:13Z | http://arxiv.org/abs/2305.06458v2 | # Classical fully-packed loop model with attractive interactions on the square lattice
###### Abstract
We study a classical model of fully-packed loops on the square lattice, which interact through attractive loop segment interactions between opposite sides of plaquettes. This study is motivated by effective models of interacting quantum matter arising in frustrated magnets or Rydberg atom arrays, for which loop degrees of freedom appear at low energy. Through a combination of Monte Carlo simulations and an effective height field theory, we find that the critical point known to occur at infinite temperature gives rise to a high-temperature critical phase with floating exponents. At lower temperature, the system transitions via a Kosterlitz-Thouless phase transition to a nematic phase where lattice rotation symmetry is broken. We discuss consequences for the phase diagram of the quantum loop model on the same lattice.
+
Footnote †: These authors contributed equally to this work.
## I Introduction
An important notion in the renormalization group theory is the emergence of effective degrees of freedom at low energies. These new degrees of freedom can have local structures which take the form of a _constraint_. For instance, for degrees of freedom that live on the bonds of a lattice, a gauge-like condition can emerge which requires that every site of the lattice is touched by a fixed number of occupied bonds. Related statistical mechanical models such as dimer or loop models arise as effective theories in many physical situations, such as in frustrated magnetic systems [1; 2], Rydberg atom arrays [3; 4; 5], models of high-T\({}_{c}\) superconductors [6], adsorption physics [7], quantum Hall effects [8; 9], topological order [10], deconfined quantum critical points [11; 12; 13; 14; 15; 16; 17; 18; 19; 20] etc. Loop models also have a long history in statistical physics [11; 21; 22; 23; 24], in relation to Potts models [25], Temperley-Lieb algebras [26], polymers and O(\(N\)) models [27; 28], Schramm-Loewner Evolution [29] or percolation. These models often carry a fixed fugacity per loop [21; 22], but there are few results when the loop segments interact [30], even though loop interactions naturally arise in effective models of quantum condensed matter [6; 31].
In this work, we study a two-dimensional classical statistical mechanical model of fully-packed loops which attract locally. With the help of a directed-loop Monte Carlo algorithm [32; 33; 34; 35; 36; 37] and a Coulomb gas [22] approach formulated in terms of a height-field description of the loop constraint [38; 39], we obtain evidence for the existence of a finite temperature Kosterlitz-Thouless (KT) transition separating a high-temperature critical phase from a low-temperature nematic phase. Our results have similarities with those obtained for the classical dimer model with attractive interactions [40; 33; 41], albeit with specific differences that we highlight.
Besides their interest in two-dimensional statistical mechanics in extending previous works on loop models [21; 22; 23; 24], our results are also relevant for quantum-constrained models. First, the ground-state wave function at a Rokhsar-Kivelson point [6] (or its generalizations [42; 43]) in the phase diagram of quantum loop models (QLM) maps to the partition function of a classical loop model. In addition, the phase diagram of the classical model and the methods we use in its inference can serve to guide us in mapping out the finite-temperature phase diagram [44] and transitions of the quantum loop model [45; 46; 47; 48; 49; 50] (see e.g. the finite-temperature phase diagram of the quantum dimer model [51]). Such quantum-constrained models host a rich set of phases [52; 53; 54; 55; 56; 57; 58] and have recently been shown to be relevant in the context of Rydberg atom arrays [59; 60; 61; 62]. The rest of the paper is organized as follows: Sec. II introduces the model and the Monte Carlo method, Sec. III the physical observables, Sec. IV the field-theoretical framework, and Sec. V the numerical results; Sec. VI concludes. Appendix A details the directed-loop algorithm, and Appendix B presents fits of the correlation functions that we used in performing the analysis of Sec. V.3.
## II Model and Methods
_Configurations --_ Configurations of the fully packed loop model on the square lattice require two loop segments (or "dimers") to touch each site of the lattice; and are in one-to-one correspondence with configurations of the 6-vertex model [63; 64; 65; 66; 67; 68]. The ice-rule constraint of the 6-vertex model associates an arrow on each bond and only allows vertices which have two arrows pointing inwards and two outwards from the lattice site. Under this constraint, there are six possible vertex configurations on the square lattice as shown in Fig. 1 (a). The mapping from the six-vertex model to the loop model on the square lattice is illustrated in Fig. 1 (b). If we place dimers on the two incoming arrows at every site of one sublattice of the square lattice, the dimers collectively form fully packed loops, as every site is then touched by exactly two dimers ('loop segments').
_Energetics --_ Loop or vertex models often associate a fugacity with each closed loop or to each type of vertex respectively, to define the corresponding partition function [21]. The model that we study here associates an interaction energy term between proximate parallel loop segments, similar to the classical interacting dimer models [33; 40]. We consider the following partition function and energy for an interacting fully-packed loop model on the square lattice
\[Z=\sum_{c}e^{-\beta E_{c}},\qquad E_{c}=V\big(N_{=}(c)+N_{\parallel}(c)\big), \tag{1}\]
where the summation in the partition function \(Z\) is over all fully-packed loop configurations on the square lattice and \(\beta=1/T\) is the inverse temperature. We assign an energy \(E_{c}\) to each covering, in which \(N_{=}(c)+N_{\parallel}(c)\) counts the number of plaquettes carrying two parallel horizontal or two parallel vertical loop segments. Note that there is no energy assigned to a plaquette that has more than two loop segments. Here we set \(V=-1\), which corresponds to _attractive_ interactions between loop segments. We assume periodic boundary conditions for square lattices of linear size \(L\) with \(N=L^{2}\) total number of sites.
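As a concrete check of this definition, one can encode a configuration by two occupation arrays and count the energy-carrying plaquettes directly. The sketch below (our own encoding, not part of the model definition) uses \(h[x,y]\) and \(v[x,y]\) for the horizontal and vertical edge occupations on a periodic \(L\times L\) lattice:

```python
import numpy as np

def energy(h, v, V=-1.0):
    """Energy E_c of Eq. (1) for a fully-packed loop configuration.

    h[x, y] = 1 if the horizontal edge from (x, y) to (x+1, y) is occupied;
    v[x, y] = 1 if the vertical edge from (x, y) to (x, y+1) is occupied
    (indices taken modulo L, i.e. periodic boundary conditions).
    """
    bottom, top = h, np.roll(h, -1, axis=1)     # the four edges of the plaquette
    left, right = v, np.roll(v, -1, axis=0)     # with lower-left corner (x, y)
    n_edges = bottom + top + left + right
    parallel = ((bottom == 1) & (top == 1)) | ((left == 1) & (right == 1))
    # plaquettes with more than two loop segments carry no energy (see text)
    return V * int((parallel & (n_edges == 2)).sum())

L = 8
h = np.ones((L, L), dtype=int)    # nematic state: L horizontal winding loops
v = np.zeros((L, L), dtype=int)
assert energy(h, v) == -L**2      # reproduces the ground-state energy E_0 = -L^2
```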
_Limiting cases --_ The model admits two simple limits. At infinite temperature, it is equivalent to the 6-vertex model at the ice-point with equal fugacities for all vertices in Fig. 1 (a) which is _critical_ with power-law correlators (see the precise description below). At \(T=0\), there are two configurations which minimize the energy (\(E_{0}=-L^{2}\)). These are _nematic_ configurations with \(L\) horizontal or vertical loops that wrap around the boundary. The \(\pi/2\) lattice rotation symmetry is broken at \(T=0\), and since this model admits only discrete energies - the first excited states have energies \(E_{1}=-L^{2}+4\) - we expect a finite-temperature transition into a low-temperature nematic phase. As will be shown below, this transition is of Kosterlitz-Thouless (KT) type.
While the two limiting phases (critical and nematic) are easily identified, one cannot exclude other intervening phases. We will explore the finite-temperature phase diagram of the model using directed-loop Monte Carlo simulation [32; 33; 34; 35; 36; 37], which allows for efficient non-local moves. The precise implementation we use is presented in Appendix A. The simulations are supplemented by a field-theoretical analysis in terms of a Coulomb gas description of the system (Sec. IV).
## III Physical observables
In this section, we describe the observables measured during the Monte Carlo simulations to characterize the phases and the transitions.
_Winding number fluctuations --_ Fully-packed loop configurations on the square lattice can be associated with two integer winding numbers \(W_{x}\) and \(W_{y}\). To compute \(W_{y}\) (\(W_{x}\)), draw a horizontal (vertical) line that cuts across \(L\) lattice bonds oriented in the \(y\) (\(x\)) direction. For a given configuration, we denote by \(N_{o}\) and \(N_{e}\) the number of loop segments on the odd and even bonds that cross this line. The winding numbers are defined as \(N_{e}-N_{o}\). Each winding number \(W_{x}\) and \(W_{y}\) varies between \(-L\) and \(L\), and there is at least one fully-packed loop configuration for any pair (\(W_{x},W_{y}\)) in this range. Note that the loop constraint ensures that the winding numbers calculated using different parallel lines are the same.
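In the same edge-array encoding as in the energy sketch above, the winding numbers can be read off a single cut; which bonds are called "even" is a convention that only fixes the overall sign:

```python
import numpy as np

def winding_numbers(h, v):
    """(W_x, W_y) = (N_e - N_o) read off one horizontal and one vertical cut."""
    L = v.shape[0]
    even = np.arange(L) % 2 == 0
    W_y = int(v[even, 0].sum() - v[~even, 0].sum())   # horizontal cut at y = 0
    W_x = int(h[0, even].sum() - h[0, ~even].sum())   # vertical cut at x = 0
    return W_x, W_y
```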
On account of translation symmetry, the equilibrium average values of \(W_{x},W_{y}\) vanish, but not their fluctuations
\[\langle W^{2}\rangle=\frac{1}{2}\langle W_{x}^{2}+W_{y}^{2}\rangle, \tag{2}\]
Figure 1: (a) The allowed vertex types for the 6-vertex model. (b) Correspondence between a 6-vertex configuration and the fully packed loop configuration on the square lattice. The solid and open circles represent sites of the A and B sublattices, respectively. Placing dimers on all incoming arrows of the vertices on the A-sublattice produces a fully-packed loop configuration.
which have useful physical content and can easily be measured in Monte Carlo calculations.
_Low-temperature order parameter --_ We can identify the low-temperature phase through the rotational symmetry breaking, nematic order parameter
\[D=\frac{1}{N}\left|N_{|}-N_{-}\right|, \tag{3}\]

with \(N_{|}=\sum_{\mathbf{r}}n_{|}(\mathbf{r})\) and \(N_{-}=\sum_{\mathbf{r}}n_{-}(\mathbf{r})\), where \(n_{-}(\mathbf{r})\) denotes a horizontal loop segment at the site \(\mathbf{r}\): it is 1 if a loop segment occupies the edge between \(\mathbf{r}\) and \(\mathbf{r}+(1,0)\) and is 0 if the edge is empty. Likewise, \(n_{|}(\mathbf{r})\) denotes a vertical loop segment at the lattice site \(\mathbf{r}\), and is 1 if a loop segment occupies the edge between \(\mathbf{r}\) and \(\mathbf{r}+(0,1)\). The order parameter \(D\) is 1 in the two nematic ground-states, and vanishes on average (\(\langle D\rangle=0\)) at infinite temperature.
We also compute the associated susceptibility:
\[\chi_{D}=N(\langle D^{2}\rangle-\langle D\rangle^{2}) \tag{4}\]
and monitor its temperature dependence. As shown below, the divergence of \(\chi_{D}\) allows us to determine the transition temperature and the form of the divergence can be further used to infer the nature of the transition.
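Both quantities are straightforward to estimate from Monte Carlo samples; a minimal sketch in the same encoding as above:

```python
import numpy as np

def nematic_order(h, v):
    """Order parameter D of Eq. (3) for one configuration (N = L^2 sites)."""
    return abs(v.sum() - h.sum()) / h.size

def susceptibility(D_samples, N):
    """chi_D of Eq. (4) from a sequence of Monte Carlo measurements of D."""
    D = np.asarray(D_samples, dtype=float)
    return N * (np.mean(D**2) - np.mean(D)**2)
```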
_Loop-segment (dimer) correlators --_ We consider the connected correlation function between loop-segments separated by a vector \(\mathbf{r}=(x,y)\): \(C_{\alpha,\beta}(\mathbf{r})=\langle n_{\alpha}(\mathbf{0})n_{\beta}(\mathbf{r})\rangle-1/4\), where \(\alpha,\beta\in\{-,|\}\). The expectation value \(\langle n_{\alpha}\rangle\langle n_{\beta}\rangle=1/4\) has been subtracted to get the connected correlator. In the Monte Carlo simulations, we average over all possible initial positions \(\mathbf{0}\) of the first loop segment, as well as all equivalent pairs \(\alpha,\beta\). For simplicity, we will focus on the lattice direction \(\mathbf{r}=(x=r,0)\) and consider three types of loop-segment correlations, longitudinal, transverse, and crossed, respectively defined as:

\[C^{L}(r)=\langle n_{-}(0)\,n_{-}(r,0)\rangle-1/4, \tag{5}\]
\[C^{T}(r)=\langle n_{|}(0)\,n_{|}(r,0)\rangle-1/4, \tag{6}\]
\[C^{C}(r)=\langle n_{-}(0)\,n_{|}(r,0)\rangle-1/4. \tag{7}\]
We will also consider correlators associated to the rotational symmetry breaking order parameter
\[\langle D_{\mathbf{0}}D_{\mathbf{r}}\rangle=\langle(n_{|}(0)-n_{-}(0))(n_{|}(\mathbf{r})-n_{-}(\mathbf{r}))\rangle=C^{L}(r)+C^{T}(r)-2C^{C}(r), \tag{8}\]
where in the last line we again focus on the direction \(\mathbf{\mathsf{r}}=(x=r,0)\).
_Monomer correlators --_ We also measure the monomer-monomer correlator [33; 34; 40; 41; 51; 69; 70], which requires going beyond the fully-packed loop configuration space by allowing two test monomers - sites touched by only one dimer - while the rest of the sites are all touched by two loop segments. This extended phase space is precisely the one which is explored during the intermediate steps of the Monte Carlo directed loop algorithm (see Appendix A for details). We define the monomer correlation function
\[M(\mathbf{\mathsf{r}})=\langle m(\mathbf{0})m(\mathbf{ \mathsf{r}})\rangle. \tag{9}\]
where the presence of a monomer at site \(\mathbf{r}\) is denoted by \(m(\mathbf{r})=1\), and \(m(\mathbf{r})=0\) otherwise. The monomer correlation function \(M(\mathbf{r})\) is estimated as the fraction of Monte Carlo samples of configurations with two monomers in which the monomers are found to be separated by \(\mathbf{r}\).
## IV Theoretical framework
Before analyzing the results of the numerical simulations, we first describe the finite-temperature phase diagram of the model using a field theoretical analysis. Following its success in two-dimensional models of statistical mechanics [33; 38; 39; 41; 71; 72; 73; 74; 75; 76; 77; 78], we use a Coulomb gas description [22], formulated in terms of a height field \(h(p)\). This field lives on plaquettes \(p\) of the lattice, and is defined (up to an irrelevant constant) in the following way: when turning clockwise around A-sublattice sites of the square lattice, the height increases (decreases) by 1/2 (i.e. \(h\to h\pm 1/2\)) if one crosses a loop segment (an empty edge). At the microscopic level, it can be shown that the value of the height field inside a small patch can be changed by 1 without changing the local loop segment configuration, simply by a change of configuration at far away points (similar argument as for the dimer model [33]). To see this, consider a region encircled by a pair of parallel loops separated by a plaquette. Local changes to these loops that convert them into a single zig-zag loop surrounding the region change the heights by 1 (holding heights on the exterior fixed) everywhere inside the region, irrespective of the distance from these loops.
This indicates that the physical action should be invariant under height shifts \(h\to h\pm 1\). Promoting the height field to the continuum \(h(\mathbf{\mathsf{r}})\), we expect the effective action to be:
\[S=\int d^{2}r\left[g(T)\pi(\nabla h(\mathbf{\mathsf{r}}))^{2}+v\cos( 4\pi h(\mathbf{\mathsf{r}}))\right] \tag{10}\]
which we briefly justify below, and the validity of which will be discussed alongside the numerics in Sec. V.
This action is of the Sine-Gordon type [79; 80]. Here \(g(T)\) is the Coulomb gas coupling constant, which depends on microscopic details and on temperature \(T\). At infinite temperature, we have \(g(T=\infty)=1/3\) from exact results for the 6-vertex model at the ice point [81]. This action displays the competition between the first term (\(g(T)\pi(\nabla h(\mathbf{r}))^{2}\)), which alone describes the critical phase (rough in the height language) to be encountered at high temperature, and the \(v\cos(4\pi h(\mathbf{r}))\) "vertex" term, whose minima correspond to the two nematic configurations for which the average height is constant (flat configurations) and takes values \(\bar{h}=\pm 1/4\). The coupling \(v\) can also depend on temperature, but its exact dependency is not relevant as long as it remains positive, such that the two nematic configurations are always favored.
In the Coulomb gas language and given the periodicity of the height \(h\to h+1\) in the microscopic configurations, the later vertex term can be identified with an electric charge \(e=2\) operator. This term is irrelevant at infinite temperature where \(g=1/3\), but becomes relevant when \(g\geq g_{c}=1\) (a general electric charge \(e\) operator reads \(\exp(i2e\pi h)\), and has scaling dimension \(e^{2}/(2g)\), and thus becomes relevant when \(g\geq e^{2}/4\)). As interactions favor the flat nematic phases, we expect \(g\) to increase (from its \(g(T=\infty)=1/3\) value) as the temperature is lowered.
The Coulomb gas analysis predicts a Kosterlitz-Thouless phase transition [79; 80; 82; 83] from a high-temperature critical phase to the low-temperature nematic phase, and furthermore provides predictions for several observables discussed in Sec. III. First, at the critical Coulomb gas constant \(g_{c}=1\), one can extract the Kosterlitz-Thouless transition temperature \(T_{KT}\) from the relation between the winding fluctuations and the Coulomb gas constant obtained in Ref. [33]:
\[\langle W^{2}\rangle=\sum_{n\in\mathbb{Z}}n^{2}e^{-g\pi n^{2}}/\sum_{n\in \mathbb{Z}}e^{-g\pi n^{2}}, \tag{11}\]
as shown in Fig. 2 below.
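Equation (11) is easy to evaluate numerically, and inverting it converts a measured \(\langle W^{2}\rangle\) into an estimate of \(g\); the sums converge very fast, so a modest cutoff suffices:

```python
import numpy as np
from scipy.optimize import brentq

def W2(g, nmax=50):
    """<W^2> as a function of the Coulomb gas constant g, Eq. (11)."""
    n = np.arange(-nmax, nmax + 1)
    w = np.exp(-g * np.pi * n**2)
    return float((n**2 * w).sum() / w.sum())

print(W2(1.0))        # ~0.07958, the critical value at g_c = 1
print(W2(1.0 / 3.0))  # the infinite-temperature (ice point) value
g_from_W2 = lambda w2: brentq(lambda g: W2(g) - w2, 0.05, 10.0)
print(g_from_W2(0.07958))  # ~1.0
```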
Next, the leading terms for the dimer/loop segment occupation operator _in the continuum_ have been identified in Ref. [39] as:
\[n_{-}(\mathbf{r}=(x,y))=\frac{1}{2}+(-1)^{x+y+1}\nabla_{y}h-\frac{X}{2i}\left(e^{2i\pi h(\mathbf{r})}-e^{-2i\pi h(\mathbf{r})}\right), \tag{12}\]
\[n_{|}(\mathbf{r}=(x,y))=\frac{1}{2}+(-1)^{x+y}\nabla_{x}h+\frac{X}{2i}\left(e^{2i\pi h(\mathbf{r})}-e^{-2i\pi h(\mathbf{r})}\right). \tag{13}\]
The loop segment occupation is thus composed of a gradient part and a vertex part. The vertex part of the loop segment operator can be expressed in harmonics of \(2\pi h\) (as \(h\equiv h+1\)) and microscopic \(\pi/2\) rotations of the model give \(h\rightarrow-h\) and \(h\to h+1/2\). It can be identified with an electric charge \(e=1\) in the Coulomb gas.
Note that the overall sign in front of the gradient depends on the convention for the height (odd or even sublattice). The constant \(X\) cannot be fixed easily and we need an external exact solution (see below) - in fact, we expect it to be re-normalized, that is to change with temperature. This gives the following predictions for the leading terms of the correlators defined in Eqs. (5)-(7)
\[C^{L}(\mathbf{r}=(x,y))=\langle n_{-}(0)n_{-}(\mathbf{r})\rangle-1/4=(-1)^{x+y}A\,\frac{x^{2}-y^{2}}{(x^{2}+y^{2})^{2}}+\frac{B}{(x^{2}+y^{2})^{1/2g}}, \tag{14}\]
\[C^{T}(\mathbf{r}=(x,y))=\langle n_{|}(0)n_{|}(\mathbf{r})\rangle-1/4=(-1)^{x+y}A\,\frac{y^{2}-x^{2}}{(x^{2}+y^{2})^{2}}+\frac{B}{(x^{2}+y^{2})^{1/2g}}, \tag{15}\]
\[C^{C}(\mathbf{r}=(x,y))=\langle n_{-}(0)n_{|}(\mathbf{r})\rangle-1/4=(-1)^{x+y}A\,\frac{2xy}{(x^{2}+y^{2})^{2}}-\frac{B}{(x^{2}+y^{2})^{1/2g}}. \tag{16}\]
The coefficient \(A=\frac{1}{4g\pi^{2}}\) is fixed by the operator product expansion (12) and the two-point correlation function \(\langle h(x)h(y)\rangle\) known exactly for the free compact boson conformal field theory [84]. We also have \(B=X^{2}/2\); however, its dependence on \(g\) is not universal. At \(T=\infty\), exact expressions for the XXZ spin chain [85] give \(B\simeq 0.01795\), see Table 1 in Ref. [85] (see also Ref. [86]).
Finally, we note that on the lattice, a monomer creates a dislocation of \(\pm 1\) in the height field. The prediction of the monomer correlator decaying as \(M(r)\propto r^{-g}\) follows from the identification of the monomer operator with the \(m=\pm 1\) magnetic charge operator (the sign depends on the sublattice) with a scaling dimension \(gm^{2}/2\).
This interpretation parallels the one for the interacting classical dimer model [33; 40] with the following three minor (albeit important for numerics) distinctions: (i) The infinite temperature value of the Coulomb gas constant \(g=1/3\) renders the vertex contribution (scaling as \(r^{-3}\)) _subleading_ with respect to the dipolar contribution (scaling as \(r^{-2}\)), which explains why it is often not reported in the polarization fluctuations for the 6-vertex model [87]. For the dimer problem, we have \(g(T=\infty)=1/2\) and both terms contribute equally to the \(r^{-2}\) decay of the dimer correlators [86]. (ii) The critical value of the Coulomb gas constant at the critical point is \(g_{c}=1\) (instead of \(g_{c}=4\) for the dimer model), consistent with the lower degeneracy of the ground-states (2 nematic ground-states instead of 4 columnar ground-states for the dimer model) and resulting in a larger value of the anomalous dimension of the low-temperature order parameter \(\eta_{D}=1/g_{c}=1\) for the loop model (see below) instead of \(\eta_{D}=1/g_{c}=1/4\) for the dimer model at their respective KT transitions. (iii) Lastly, the critical value for the winding fluctuations \(\langle W^{2}\rangle\) is much _larger_ for the loop model, which allows for a statistically meaningful measurement in the Monte Carlo simulations. The very small value of \(\langle W^{2}\rangle\) for the dimer model does not allow for an accurate Monte Carlo determination of the critical point using the value of the winding number fluctuations.
## V MC simulation results
We present our MC simulation results in this section. It contains results for observables from which we can precisely estimate the critical temperature \(T_{KT}\): the winding number fluctuations (Sec. V.1) and the essential singularity of the dimer susceptibility in the KT transition (Sec. V.2). Sec. V.3 presents results for different correlation functions in the high-temperature critical phase, confirming the field theoretical analysis presented in Sec. IV.
### Winding number fluctuations
The numerical results for the winding number fluctuations \(\langle W^{2}\rangle\) as a function of temperature \(T\) of the classical loop model are shown in Fig. 2 (a). We simulate system sizes up to \(L=128\) for this measurement. These data directly provide the temperature dependence (albeit on finite size) of the Coulomb gas constant, which will later be compared with other estimates of \(g(T)\). At the transition point, the analysis of Sec. IV predicts the critical Coulomb gas constant \(g_{c}\) to be \(1\), corresponding to the critical winding number fluctuations \(\langle W^{2}\rangle_{c}=0.07958\) (from Eq. 11, also see inset of Fig. 2(b)). The predicted
Figure 2: (a) MC results for winding number fluctuations as a function of \(T\). The gray dashed line shows the critical winding number fluctuations \(\langle W^{2}\rangle_{c}=0.07958\), which is obtained from Eq. (11) with \(g_{c}=1\) at the transition point. Inset is a zoom-in for the \(1.33\leqslant T\leqslant 1.41\) region. (b) Finite-size scaling for the estimated transition temperature as a function of system size. The finite-size \(T_{KT}(L)\) data points are obtained from (a). The black curve shows the fit to Eq. (17). The extrapolation to the thermodynamic limit gives \(T_{KT}=1.425(1)\). The inset shows \(\langle W^{2}\rangle\) as a function of \(g\) according to the relation in Eq. (11).
Figure 3: (a) Susceptibility \(\chi_{D}\) of the dimer symmetry breaking order parameter defined in the Eq. (4). Data collapse is performed in (b) the critical phase with \(T>T_{KT}\). Here we use \(\eta_{D}=1\) and \(T_{KT}=1.425\), where all the data point nicely collapse onto a single curve.
critical value \(\langle W^{2}\rangle_{c}\) is shown as the gray dashed horizontal line in Fig. 2 (a) and in its inset.
We estimate the transition temperature \(T_{KT}(L)\) for each system size as the temperature at which the winding number fluctuations cross the critical value, which is in turn estimated from a linear fit of the data points near \(\langle W^{2}\rangle_{c}\) (Fig. 2 (a) inset). This estimate has an obvious finite-size dependence. To determine the transition temperature \(T_{KT}\) in the thermodynamic limit, we use the following finite-size scaling relation for a KT transition [88; 89; 78]
\[\frac{1}{T_{KT}(L)}=\frac{1}{T_{KT}}+\frac{C}{\log(L/L_{0})^{2}}, \tag{17}\]
where \(C\) is a constant. By fitting the estimated \(T_{KT}(L)\) in Fig. 2 (b) with Eq. (17), we obtain \(T_{KT}=1.425(1)\).
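The extrapolation amounts to a three-parameter fit of Eq. (17); a sketch with placeholder finite-size data (the measured \(T_{KT}(L)\) values are read off Fig. 2 (a) and are not tabulated here):

```python
import numpy as np
from scipy.optimize import curve_fit

def inv_Tkt(L, inv_T_inf, C, L0):
    """Eq. (17): 1/T_KT(L) = 1/T_KT + C / log(L/L0)^2."""
    return inv_T_inf + C / np.log(L / L0) ** 2

Ls = np.array([16.0, 32.0, 64.0, 128.0])        # illustrative placeholders
Tkt_L = np.array([1.33, 1.37, 1.39, 1.40])
popt, _ = curve_fit(inv_Tkt, Ls, 1.0 / Tkt_L, p0=[1 / 1.425, 0.5, 1.0],
                    bounds=([0.0, 0.0, 0.1], [2.0, 10.0, 8.0]))
print("extrapolated T_KT =", 1.0 / popt[0])
```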
### Dimer susceptibility
The transition temperature \(T_{KT}\) obtained in the previous section can be cross-validated from the data collapse of the susceptibility \(\chi_{D}\) of the dimer symmetry breaking order parameter (Eq. (4)). In the vicinity of the KT transition, the susceptibility \(\chi_{D}\) obeys the scaling behaviour
\[\chi_{D}\sim L^{2-\eta_{D}}f\left(L\exp(-\frac{K}{\sqrt{T-T_{KT}}})\right) \tag{18}\]
for \(T>T_{KT}\) where \(K\) is a constant and \(\eta_{D}=1/g_{c}=1\) the anomalous dimension [40]. Such data collapse has been used in the literature to determine the Kosterlitz-Thouless transition temperature \(T_{KT}\) in many 2D systems such as the 2D XY model [88], magnetic thin films [89], triangular lattice transverse field Ising model [90] or for the pairing transition in various 2D fermionic lattice models [91, 92, 93, 94].
The susceptibility \(\chi_{D}\) across the transition for system sizes up to \(L=128\) is shown in Fig. 3 (a), where a peak structure close to \(T_{KT}\) clearly emerges. We then use the data in the \(T>T_{KT}\) region to rescale the \(y\)-axis as \(\chi_{D}L^{-(2-\eta_{D})}\) and the \(x\)-axis as \(L\exp(-\frac{K}{\sqrt{T-T_{KT}}})\), as shown in Fig. 3 (b). We find that the values \(T_{KT}=1.425\) and \(\eta_{D}=1\) provide a good data collapse, in good agreement with the \(T_{KT}\) obtained in Fig. 2 (b).
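For reference, the rescaling behind Fig. 3 (b) is simply a change of coordinates; the constant \(K\) in Eq. (18) is non-universal and is tuned until the curves overlap (the value \(K=1\) below is a placeholder):

```python
import numpy as np

def collapse(T, chi, L, T_kt=1.425, eta=1.0, K=1.0):
    """Rescaled coordinates of Eq. (18), valid for T > T_kt."""
    x = L * np.exp(-K / np.sqrt(np.asarray(T) - T_kt))
    y = np.asarray(chi) * L ** -(2.0 - eta)
    return x, y
```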
### Correlation functions
The height description of the loop segment correlations given in Eqs. (14), (15) and (16) suggests that the correlators have two contributions - from the vertex and the dipolar part. We show the correlators calculated from the Monte Carlo methods for the \(L=256\) system and present the fits to their expected forms in Appendix B. Below we present a simpler approach to fitting - by considering combinations of the correlators that separate out the vertex and dipolar terms.
We first consider the sum \(C^{L}+C^{T}=\langle n_{-}(0)n_{-}(\mathbf{r})\rangle+\langle n_{|}(0)n_{|}(\mathbf{r})\rangle-\frac{1}{2}\), which should contain only a vertex contribution \(2B/(x^{2}+y^{2})^{1/2g}\). This combination for the direction \(\mathbf{r}=(r,0)\) is shown in Figs. 4(a) and (b) in linear and log scales respectively. Consistent with the expected
Figure 4: Equal-time loop-segment correlation functions (a) \(C^{L}+C^{T}\) as a function of \(r\), (b) the log-log plot for \(|C^{L}+C^{T}|\) (absolute value is used to correct for very small negative values occurring at large \(r\), large \(T\) due to statistical fluctuations caused by the finite Monte Carlo sampling). (c) The crossed correlations \(C^{C}\) (Eq. (7)), and (d) The log-log plot of the monomer correlations \(\langle M(r)\rangle\) in Eq. (9). The system size is \(L=256\) for the measurements of loop-segment correlators and \(L=400\) for monomer correlations. These data correspond to the high-temperature critical phase, that is temperatures above the estimated \(T_{KT}=1.425\). Gray curves are power-law fits (in their respective fitting range) according to the scaling form \((-1)^{r}A^{\prime}/r^{\alpha_{S}}+B^{\prime}/r^{\alpha_{U}}+C\) for \(C^{C}\), \(B^{\prime}/r^{1/g}+C\) for \(C^{L}+C^{T}\), and \(B^{\prime}/r^{g}+C\) for \(M(r)\). In the two later cases, we add a constant to the power-law fit to account for a small non-vanishing value of correlators at large-distance in our finite-size Monte Carlo simulations.
form, the combination shows a power-law scaling with distance \(r\) with an exponent (slope in the log plot) that increases with temperature. The estimated value of \(g\) from this combination is discussed further below.
We then consider the crossed correlators \(C^{C}=\langle n_{-}(0)n_{|}(\mathbf{r})\rangle-1/4\) along the direction \(\mathbf{r}=(r,0)\), where the correlation is expected to be dominated by the vertex term according to Eq. (16). Monte Carlo estimates of \(C^{C}(r)\) for different temperatures are shown in Fig. 4 (c). The crossed correlators show the expected power law scaling at large distances, but with a possible oscillatory subleading correction that affects the short distance correlations, which is visible at higher temperatures. We tentatively attribute this effect to further subleading terms that do not cancel for \(y=0\) and are not included in Eq. (16).
Next, we study the combination \(C^{L}-C^{T}=\langle n_{-}(0)n_{-}(\mathbf{r})\rangle-\langle n_{|}(0)n_{|}(\mathbf{r})\rangle\) which, based on Eqs. (14) and (15), is expected to have a purely staggered dipolar contribution \((-1)^{r}/r^{2}\). Our results for \((-1)^{r}(C^{L}-C^{T})(\mathbf{r})\) in the \(\mathbf{r}=(r,0)\) direction are presented in Fig. 5 (a). We observe that there is a small but non-vanishing uniform component in the numerical data (which appears as a staggered part in Fig. 5 (a) due to the \((-1)^{r}\) factor). To account for this, we fit \(C^{L}-C^{T}\) to the form \((-1)^{r}A^{\prime}/r^{\alpha_{S}}+B^{\prime}/r^{\alpha_{U}}+C\). The constant \(C\) accounts for a non-zero value of this correlator present only at temperatures close to the phase transition, which we attribute to the finite size used in our Monte Carlo simulations. Here \(\alpha_{U}\) is meant to describe a subleading correction to the vertex part not included in Eqs. (14) and (15). The estimates for \(\alpha_{S},\alpha_{U}\) are presented in Fig. 5 (b), where we find that \(\alpha_{S}\) is very close to the predicted value 2 all along the high-temperature critical phase, and \(\alpha_{U}>\alpha_{S}\), confirming the subleading nature of this uniform correction. If we instead fit the data fixing \(\alpha_{U}\) to be zero, the exponent \(\alpha_{S}\) is always larger than its expected value of 2.
Finally, the monomer-monomer correlator \(M(r)\) in Eq. (9) should decay only with the vertex contribution \(1/r^{g}\)[34; 40; 51; 95]. We present the monomer correlations \(M(r)\) at different temperatures in Fig. 4 (d) for \(L=400\) (within the directed loop algorithm, we can get good statistics for \(M(r)\) for larger systems than for loop-segment correlators). The log-log plot shows a clear power-law decay above the Kosterlitz-Thouless transition temperature.
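Extracting \(g\) from \(M(r)\) then reduces to a power-law fit of the form used in Fig. 4 (d); a sketch with synthetic stand-in data (the measured correlators are not tabulated in the text):

```python
import numpy as np
from scipy.optimize import curve_fit

def monomer_decay(r, B, g, C):
    """Fit form B / r^g + C; C absorbs the small finite-size offset."""
    return B / r**g + C

rng = np.random.default_rng(0)
r = np.arange(4.0, 64.0)
M = 0.05 * r**-0.5 + 1e-4 + 1e-5 * rng.standard_normal(r.size)  # g = 0.5 stand-in
(B, g, C), _ = curve_fit(monomer_decay, r, M, p0=[0.05, 0.5, 0.0])
print("estimated Coulomb gas constant g =", g)
```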
We now collect, in Fig. 6, the estimates of the Coulomb gas constant obtained from the fits to the correlators \(C^{L}+C^{T}\) (from Fig. 4(a) and (b)), \(C^{C}\) (from Fig. 4(c)) and \(M(r)\) (from Fig. 4(d)) as well as from the winding number fluctuations \(\langle W^{2}\rangle\) in Fig. 2. Fig. 6 shows the temperature dependence of \(g\) as a function of inverse temperature \(\beta=1/T\). We find that as \(\beta\) increases (_i.e._ as the temperature decreases), the Coulomb gas constant increases from its infinite temperature value \(g=1/3\), which is consistent with the expectation that attractive interactions tend to _stiffen_ the loops. As the temperature decreases from the \(T=\infty\) point to finite but high temperature, the dipolar part in Eqs. (14), (15) and
Figure 6: The Coulomb gas constant \(g\) obtained from Fig. 4 (b), (c), (d), and from \(\langle W^{2}\rangle\). The vertical gray dashed line indicates the transition point \(\beta_{KT}=1/T_{KT}\simeq 0.7\). The upper horizontal gray dashed line denotes the critical value \(g_{c}=1\); the middle one at \(g=1/2\) corresponds to the case where the vertex and dipolar terms contribute equally to the scaling; the lowest gray dashed line indicates the infinite temperature value \(g=1/3\).
Figure 5: (a) The equal-time loop-segment correlation functions of \((-)^{r}(C^{L}-C^{T})\) for \(L=256\). A weak staggered part can be observed in this representation (particularly visible for the lowest temperatures at short distances), signaling a small uniform component for \((C^{L}-C^{T})\). The gray curves fit \((C^{L}-C^{T})\) to the form \((-1)^{r}A^{\prime}/r^{\alpha_{S}}+B^{\prime}/r^{\alpha_{U}}+C\). (b) The two exponents \(\alpha_{S}\) and \(\alpha_{U}\) obtained from this fit.
(16) dominates down to a temperature \(T\approx 3\), below which \(g>1/2\) and the vertex part takes over, down to \(T_{KT}\simeq 1.425\) (\(\beta_{KT}\simeq 0.7\)) where \(g=1\). The various estimates of \(g\) are overall in good agreement with each other (we note that the \(g\) value obtained from \(C^{C}\) is less accurate due to the subleading oscillations at high temperature), and consistent with the theoretical expectations of Sec. IV.
## VI Discussion and Conclusions
In this work, we investigated the finite-temperature phase diagram of a classical model of fully-packed loops on the square lattice with attractive local interactions between loop segments. With the help of a directed-loop Monte Carlo algorithm and a field theoretical analysis based on a height description of loop configurations, we are able to locate the finite-temperature Kosterlitz-Thouless transition, separating a critical phase at \(T>T_{KT}\) from a nematic phase below \(T_{KT}\). We find that in the loop model the anomalous dimension at the KT transition, \(\eta_{D}=1\), is four times larger than in the classical dimer model [40]. The high-temperature critical phase is fully characterized by the temperature dependence of the Coulomb gas constant presented in Fig. 6, which is obtained using several different concurrent estimates.
An interesting, closely related system to consider would be a similar classical model, but with _repulsive_ interactions (\(V>0\)) between fully packed loops, favoring large-winding sectors. Analogous repulsive interactions in the dimer model result in a continuous phase transition from a critical to staggered phase, which has been argued to be in the two-dimensional Ising universality class [78].
We finally connect our results to the quantum loop model on the square lattice. From our analysis, we expect that the QLM on the square lattice should also host a critical phase at any sufficiently high temperature, parametrized by a Coulomb gas constant \(g(T/t,V/t)\) which depends on temperature and potential energy, similar to the quantum dimer model [51]. At large negative ratio of potential to kinetic energy (\(V/t\ll 0\)), the QLM hosts a nematic ground-state. From our results, we conclude that the finite-temperature phase transition to the nematic phase in the QLM should occur as a Kosterlitz-Thouless phase transition that can be described using the same analysis provided here. The QLM also hosts a plaquette ground-state in a finite range of \(-0.35\lesssim V/t\ll 1\) [49]. We believe that the finite-temperature phase transition to this plaquette phase should be of KT type too, with an effective action described by Eq. (10) but with _negative_ \(v<0\), as the two plaquette ground-states have average height \(\bar{h}=0,\frac{1}{2}\). It would be interesting to find a classical model with a similar phase transition and low-temperature phase. Finally, we note that the directed loop algorithm that we use can be directly implemented as a new move [51] within the sweeping cluster algorithm [96; 97] for the QLM, allowing to study its finite-temperature phase diagram fully taking into account the loop constraints and winding fluctuations.
_Acknowledgments --_ We acknowledge support from the ANR/RGC Joint Research Scheme sponsored by the Research Grants Council of Hong Kong SAR of China (Project No. A_HKU703/22) and the French National Research Agency (grant ANR-22-CE30-0042-01). XXR, ZY and ZYM further acknowledge the support from the Research Grants Council of Hong Kong SAR of China (Project Nos. 17301420, 17301721, AoE/P-701/20, 17309822, HKU C7037-22G), and BD, GJS and FA the support from the joint PhD program between CNRS and IISER Pune, as well as the grant NanoX ANR-17-EURE-0009 in the framework of the French "Programme des Investissements d'Avenir". The research of JR is supported by the Huawei Young Talents Program at IHES. We acknowledge the use of HPC resources from CALMIP (grants 2022-P0677 and 2023-P0677), GENCI (projects A0110500225 and A0130500225), the HPC2021 system under the Information Technology Services and the Blackbody HPC system at the Department of Physics, University of Hong Kong. GJS and BD thank K. Damle for useful discussions and TIFR, Mumbai for hospitality during the completion of this work.
|
2306.07216 | Cyclic objects from surfaces | In this paper, we endow the family of closed oriented genus $g$ surfaces,
starting with torus, with a structure of a (co)cyclic object in the category of
$3$-dimensional cobordisms. As a corollary, any $3$-dimensional TQFT induces a
(co)cyclic module, which we compute algebraically for the Reshetikhin-Turaev
TQFT. | Ivan Bartulović | 2023-06-12T16:21:33Z | http://arxiv.org/abs/2306.07216v2 | # Cyclic objects from surfaces
###### Abstract.
In this paper, we endow the family of closed oriented genus \(g\) surfaces, starting with torus, with a structure of a (co)cyclic object in the category of \(3\)-dimensional cobordisms. As a corollary, any \(3\)-dimensional TQFT induces a (co)cyclic module, which we compute algebraically for the Reshetikhin-Turaev TQFT.
_Keywords_. Cyclic objects, cobordisms, topological quantum field theories (TQFTs).
2020 _Mathematics Subject Classification_. 18N50, 57K16.
###### Contents
* 1 Introduction
* 2 Cyclic objects
* 3 \(3\)-cobordisms
* 4 Cyclic objects from surfaces
* 5 Preliminaries on monoidal categories and Hopf algebras
* 6 Cyclic modules from (co)algebras
* 7 Cyclic modules from TQFTs
* 8 Related work
* 9 Appendix
## 1. Introduction
In this paper, we study cyclic objects and their interplay with topological field theories. A (co)cyclic object in a category is, roughly speaking, a (co)simplicial object with compatible actions of the cyclic groups. Cyclic homology of algebras was independently introduced by Connes [10] and Tsygan [30]. To any algebra is associated a cocyclic vector space, that is, a cocyclic object in the category of vector spaces. This construction was generalized to the braided setting by Akrami and Majid [1], who associate a cocyclic vector space to any ribbon algebra in a braided monoidal category.
On the other hand, compact surfaces are relatively well-understood \(2\)-manifolds which appear in many areas of mathematics. In particular, closed oriented surfaces are objects of a symmetric monoidal category of \(3\)-dimensional cobordisms. A morphism between two surfaces is given by a homeomorphism class of \(3\)-cobordisms between the given surfaces. The category of \(3\)-cobordisms is of great interest in quantum topology. Introduced by Atiyah [3], a \(3\)-dimensional topological quantum field theory (or shortly, TQFT) is a strong symmetric monoidal functor from the category of \(3\)-cobordisms to vector spaces. A fundamental construction of a \(3\)-dimensional TQFT in this sense is the Reshetikhin-Turaev TQFT [28, 31].
The main result of this paper is that the family of closed oriented surfaces has a structure of a (co)cyclic object in the category of 3-dimensional cobordisms (see Theorem 4.1). As a corollary, any 3-dimensional TQFT induces a (co)cyclic vector space. We calculate this (co)cyclic vector space for the Reshetikhin-Turaev TQFT (see Theorem 7.1). In a certain sense, the (co)cyclic objects obtained from closed oriented surfaces universally calculate the cyclic (co)homology of (co)cyclic modules (a la Akrami-Majid [1] and its variants) associated to the coend (which is a braided Hopf algebra) of anomaly free modular categories. Finally, we discuss some potentially related work in the setting of category of connected cobordisms, which first appeared in [17] and is different from the cobordism category used throughout the paper. For instance, it is a non-symmetric braided category. In this setting, we outline a construction of the so called para(co)cyclic objects associated to the one-holed torus (see sections **8B** and **8C**). By composition, the braided monoidal functor \(J_{3}\) from [7] universally induces para(co)cyclic objects associated to the end of unimodular ribbon factorizable categories.
The paper is organized as follows. In Section 2, we recall the notion of a (co)cyclic object in a category. In Section 3, we recall some facts about 3-cobordisms and their presentation via special ribbon graphs. In Section 4, we construct (co)cyclic objects in the category of 3-dimensional cobordisms. In Section 5, we review ribbon categories and their graphical calculus, braided Hopf algebras, and related concepts. Section 6 is dedicated to (co)cyclic modules from categorical (co)algebras. In Section 7 we relate, via the Reshetikhin-Turaev TQFT, the (co)cyclic objects from surfaces with (co)cyclic modules associated to the coend of an anomaly free modular category. In Section 8, we discuss paracyclic objects in the category of connected cobordisms. In Appendix, we recall some known facts about (co)ends of a braided pivotal category, thought as universal (co)modules.
Unless otherwise stated, by \(\Bbbk\) we denote any commutative ring. The class of objects in a category \(\mathcal{C}\) is denoted by \(\operatorname{Ob}(\mathcal{C})\).
### Acknowledgments
Most of the content of this paper is part of my PhD thesis. I would like to thank my supervisor Alexis Virelizier for his comments and advice. I would also like to thank Ulrich Krahmer, Christoph Schweigert, and Kenichi Shimizu for discussions and comments. Finally, I am grateful to the Laboratoire Paul Painleve of the University of Lille and the Institut fur Geometrie of the TU Dresden for their hospitality. This work was supported by the Labex CEMPI (ANR-11-LABX-0007-01), by the Region Hauts-de-France, and by the FNS-ANR OChoTop grant (ANR-18-CE93-0002-01).
## 2. Cyclic objects
In this section we recall the notions of (co)simplicial and (co)cyclic objects in a category.
**2A**. **The simplicial category.** The _simplicial category_ \(\Delta\) is defined as follows. The objects of \(\Delta\) are the non-negative integers. For \(n\in\mathbb{N}\), denote \([n]=\{0,\ldots,n\}\). A morphism \(n\to m\) in \(\Delta\) is an increasing map between sets \([n]\) and \([m]\). For \(n\in\mathbb{N}^{*}\) and \(0\leq i\leq n\), the \(i\)-th _coface_ \(\delta_{i}^{n}\colon n-1\to n\) is the unique increasing injection from \([n-1]\) into \([n]\) which misses \(i\). For \(n\in\mathbb{N}\) and \(0\leq j\leq n\), the \(j\)-th _codegeneracy_ \(\sigma_{j}^{n}\colon n+1\to n\) is the unique increasing surjection from \([n+1]\) onto \([n]\) which sends both \(j\) and \(j+1\) to \(j\).
It is well known (see [24, Lemma 5.1]) that morphisms in \(\Delta\) are generated by cofaces \(\{\delta_{i}^{n}\}_{n\in\mathbb{N}^{*},0\leq i\leq n}\) and codegeneracies \(\{\sigma_{j}^{n}\}_{n\in\mathbb{N},0\leq j\leq n}\) subject to the _simplicial relations_:
\[\delta_{j}^{n+1}\delta_{i}^{n} =\delta_{i}^{n+1}\delta_{j-1}^{n}\quad\text{for all }0\leq i<j\leq n+1, \tag{1}\] \[\sigma_{j}^{n}\sigma_{i}^{n+1} =\sigma_{i}^{n}\sigma_{j+1}^{n+1}\quad\text{for all }0\leq i\leq j \leq n, \tag{2}\]
\[\sigma_{j}^{n}\delta_{i}^{n+1}=\begin{cases}\delta_{i}^{n}\sigma_{j-1}^{n-1}&\text { for all }0\leq i<j\leq n,\\ \text{id}_{n}&\text{ for all }0\leq i=j\leq n\text{ or }1\leq i=j+1\leq n+1,\\ \delta_{i-1}^{n}\sigma_{j}^{n-1}&\text{ for all }1\leq j+1<i\leq n+1.\end{cases} \tag{3}\]
In the opposite category \(\Delta^{\text{op}}\), every coface \(\delta_{i}^{n}\) and every codegeneracy \(\sigma_{j}^{n}\) are respectively denoted by
\[d_{i}^{n}\colon n\to n-1\quad\text{and}\quad s_{j}^{n}\colon n\to n+1.\]
The morphisms \(\{d_{i}^{n}\}_{n\in\mathbb{N}^{*},0\leq i\leq n}\) are called _faces_ and the morphisms \(\{s_{j}^{n}\}_{n\in\mathbb{N},0\leq j\leq n}\) are called _degeneracies_.
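Since cofaces and codegeneracies are plain monotone maps of finite sets, the simplicial relations can be verified pointwise; a small sketch (our own illustration, using Python functions as set maps):

```python
def delta(n, i):
    """i-th coface: the increasing injection [n-1] -> [n] that misses i."""
    return lambda k: k if k < i else k + 1

def sigma(n, j):
    """j-th codegeneracy: the increasing surjection [n+1] -> [n]
    sending both j and j+1 to j."""
    return lambda k: k if k <= j else k - 1

def compose(f, g):
    return lambda k: f(g(k))

def equal(f, g, dom_max):
    """Compare two maps pointwise on [dom_max] = {0, ..., dom_max}."""
    return all(f(k) == g(k) for k in range(dom_max + 1))

n = 4   # check relation (1) on the domain [n-1]
for i in range(n + 1):
    for j in range(i + 1, n + 2):
        assert equal(compose(delta(n + 1, j), delta(n, i)),
                     compose(delta(n + 1, i), delta(n, j - 1)), n - 1)
```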
**2B**. **The cyclic category.** The _cyclic category_\(\Delta C\) is defined as follows. The objects of \(\Delta C\) are the non-negative integers. The morphisms in this category are generated by morphisms \(\{\delta_{i}^{n}\}_{n\in\mathbb{N}^{*},0\leq i\leq n}\), called _cofaces_, morphisms \(\{\sigma_{j}^{n}\}_{n\in\mathbb{N},0\leq j\leq n}\), called _codegeneracies_, and isomorphisms \(\{\tau_{n}\colon n\to n\}_{n\in\mathbb{N}}\), called _cocyclic operators_, satisfying the simplicial relations and additionally:
\[\tau_{n}\delta_{i}^{n} =\delta_{i-1}^{n}\tau_{n-1}\quad\text{for all }1\leq i\leq n, \tag{4}\] \[\tau_{n}\delta_{0}^{n} =\delta_{n}^{n}\quad\text{for all }n\geq 1,\] (5) \[\tau_{n}\sigma_{i}^{n} =\sigma_{i-1}^{n}\tau_{n+1}\quad\text{for all }1\leq i\leq n,\] (6) \[\tau_{n}\sigma_{0}^{n} =\sigma_{n}^{n}\tau_{n+1}^{2}\quad\text{for all }n\geq 0\quad\text{and}\] (7) \[\tau_{n}^{n+1} =\text{id}_{n}\quad\text{for all}\quad n\in\mathbb{N}. \tag{8}\]
Note that \(\tau_{0}=\text{id}_{0}\). In the opposite category \(\Delta C^{\text{op}}\), every coface \(\delta_{i}^{n}\), every codegeneracy \(\sigma_{j}^{n}\), and every cocyclic operator \(\tau_{n}\) are respectively denoted by
\[d_{i}^{n}\colon n\to n-1,\quad s_{j}^{n}\colon n\to n+1,\quad\text{and}\quad t _{n}\colon n\to n.\]
The morphisms \(\{d_{i}^{n}\}_{n\in\mathbb{N}^{*},0\leq i\leq n}\) are called _faces_, the morphisms \(\{s_{j}^{n}\}_{n\in\mathbb{N},0\leq j\leq n}\) are called _degeneracies_, and the morphisms \(\{t_{n}\}_{n\in\mathbb{N}}\) are called _cyclic operators_.
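The cocyclic operators also admit a convenient set-level realization, namely the cyclic shift \(\tau_{n}(k)=k-1 \bmod (n+1)\) on \([n]\); this concrete model is only an illustration of the presentation above, and a short continuation of the previous sketch checks relations (4), (5), (7) and (8) pointwise:

```python
def tau(n):
    """Set-level model of the cocyclic operator: cyclic shift on [n]."""
    return lambda k: (k - 1) % (n + 1)

n = 5
assert equal(compose(tau(n), delta(n, 0)), delta(n, n), n - 1)        # (5)
for i in range(1, n + 1):                                             # (4)
    assert equal(compose(tau(n), delta(n, i)),
                 compose(delta(n, i - 1), tau(n - 1)), n - 1)
assert equal(compose(tau(n), sigma(n, 0)),                            # (7)
             compose(sigma(n, n), compose(tau(n + 1), tau(n + 1))), n + 1)
power = lambda k: k                                                   # (8)
for _ in range(n + 1):
    power = compose(tau(n), power)
assert equal(power, lambda k: k, n)
```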
**2C**. **(Co)simplicial and (co)cyclic objects in a category.** Let \(\mathcal{C}\) be any category. A _simplicial object_ in \(\mathcal{C}\) is a functor \(\Delta^{\text{op}}\to\mathcal{C}\) and a _cyclic object_ in \(\mathcal{C}\) is a functor \(\Delta C^{\text{op}}\to\mathcal{C}\). Dually, a _cosimplicial object_ in \(\mathcal{C}\) is a functor \(\Delta\to\mathcal{C}\) and a _cocyclic object_ in \(\mathcal{C}\) is a functor \(\Delta C\to\mathcal{C}\). A (co)simplicial/(co)cyclic object in the category of \(\Bbbk\)-modules is called a _(co)simplicial/(co)cyclic \(\Bbbk\)-module_. A _morphism_ between two (co)simplicial/(co)cyclic objects is a natural transformation between them. For shortness, one often denotes the image of a morphism \(f\) under a (co)simplicial/(co)cyclic object in \(\mathcal{C}\) by the same letter \(f\).
Since the categories \(\Delta\) and \(\Delta C\) are defined by generators and relations, a (co)simplicial/(co)cyclic object in a category is entirely determined by the images of the generators satisfying the corresponding relations. For example, a cocyclic object \(X\) in \(\mathcal{C}\) may be explicitly described as a family \(X^{\bullet}=\{X^{n}\}_{n\in\mathbb{N}}\) of objects in \(\mathcal{C}\), equipped with morphisms \(\{\delta_{i}^{n}\colon X^{n-1}\to X^{n}\}_{n\in\mathbb{N}^{*},0\leq i\leq n}\), called _cofaces_, morphisms \(\{\sigma_{j}^{n}\colon X^{n+1}\to X^{n}\}_{n\in\mathbb{N},0\leq j\leq n}\), called _codegeneracies_, and isomorphisms \(\{\tau_{n}\colon X^{n}\to X^{n}\}_{n\in\mathbb{N}}\), called _cocyclic operators_, which satisfy (1)-(8). Note that \(\tau_{0}=\text{id}_{X^{0}}\). A morphism \(\alpha^{\bullet}\colon X^{\bullet}\to Y^{\bullet}\) between cocyclic objects \(X^{\bullet}\) and \(Y^{\bullet}\) in \(\mathcal{C}\) is then described by a family \(\alpha^{\bullet}=\{\alpha^{n}\colon X^{n}\to Y^{n}\}_{n\in\mathbb{N}}\) of morphisms in \(\mathcal{C}\) such that
\[\delta_{i}^{n}\alpha^{n-1} =\alpha^{n}\delta_{i}^{n}\quad\text{for all }n\in\mathbb{N}^{*} \text{ and }0\leq i\leq n, \tag{9}\] \[\sigma_{j}^{n}\alpha^{n+1} =\alpha^{n}\sigma_{j}^{n}\quad\text{for all }n\in\mathbb{N}\text{ and }0\leq j\leq n,\] (10) \[\alpha^{n}\tau_{n} =\tau_{n}\alpha^{n}\quad\text{for all }n\in\mathbb{N}. \tag{11}\]
Similarly as above, a cyclic object \(X\) in \(\mathcal{C}\) may be seen as a family \(X_{\bullet}=\{X_{n}\}_{n\in\mathbb{N}}\) of objects in \(\mathcal{C}\) equipped with morphisms \(\{d_{i}^{n}\colon X_{n}\to X_{n-1}\}_{n\in\mathbb{N}^{*},0\leq i\leq n}\), called _faces_, morphisms \(\{s_{j}^{n}\colon X_{n}\to X_{n+1}\}_{n\in\mathbb{N},0\leq j\leq n}\), called _degeneracies_, and isomorphisms \(\{t_{n}\colon X_{n}\to X_{n}\}_{n\in\mathbb{N}}\), called _cyclic operators_, which satisfy the corresponding relations in \(\Delta C^{\mathrm{op}}\). Also, a morphism \(\alpha_{\bullet}\colon X_{\bullet}\to Y_{\bullet}\) between two cyclic objects \(X_{\bullet}\) and \(Y_{\bullet}\) in \(\mathcal{C}\) is described by a family \(\alpha_{\bullet}=\{\alpha_{n}\colon X_{n}\to Y_{n}\}_{n\in\mathbb{N}}\) of morphisms in \(\mathcal{C}\) commuting with faces, degeneracies and cyclic operators of \(X_{\bullet}\) and \(Y_{\bullet}\).
### 2D. Cyclic duality
It is well-known that cyclic category is isomorphic to its opposite category. The isomorphism established by Connes in [9] is called _cyclic duality_. In its version due to Loday [22, Proposition 6.1.11], the cyclic duality \(L\colon\Delta C^{\mathrm{op}}\to\Delta C\) is identity on objects and it is defined on morphisms as follows. For \(n\in\mathbb{N}^{*}\) and \(0\leq i\leq n\),
\[L(d_{i}^{n})=\begin{cases}\sigma_{i}^{n-1}&\text{if $0\leq i\leq n-1$},\\ \sigma_{0}^{n-1}\tau_{n}^{-1}&\text{if $i=n$},\end{cases}\]
and for \(n\in\mathbb{N}\) and \(0\leq j\leq n\),
\[L(s_{j}^{n})=\delta_{j+1}^{n+1}\quad\text{and}\quad L(t_{n})=\tau_{n}^{-1}.\]
Given a category \(\mathcal{C}\), the cyclic duality transforms any cocyclic object \(X\colon\Delta C\to\mathcal{C}\) in \(\mathcal{C}\) into the cyclic object \(XL\colon\Delta C^{\mathrm{op}}\to\mathcal{C}\). Similarly, the opposite functor \(L^{\mathrm{op}}\) turns any cyclic object \(Y\colon\Delta C^{\mathrm{op}}\to\mathcal{C}\) in \(\mathcal{C}\) into the cocyclic object \(YL^{\mathrm{op}}\colon\Delta C\to\mathcal{C}\).
### 2E. Reindexing involution automorphism
Following Loday [22, Section 6.1.14], we recall the _reindexing involution automorphism_\(\Phi\) of the cyclic category. It is identity on objects and it is defined on morphisms as follows. For \(n\in\mathbb{N}^{*}\) and \(0\leq i\leq n\),
\[\Phi(\delta_{i}^{n})=\delta_{n-i}^{n},\]
and for \(n\in\mathbb{N}\) and \(0\leq j\leq n\),
\[\Phi(\sigma_{j}^{n})=\sigma_{n-j}^{n}\quad\text{and}\quad\Phi(\tau_{n})=\tau_ {n}^{-1}.\]
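In particular, \(\Phi\) is indeed an involution: on the generators,

\[\Phi^{2}(\delta_{i}^{n})=\Phi(\delta_{n-i}^{n})=\delta_{n-(n-i)}^{n}=\delta_{i}^{n},\qquad\Phi^{2}(\sigma_{j}^{n})=\sigma_{j}^{n},\qquad\Phi^{2}(\tau_{n})=(\tau_{n}^{-1})^{-1}=\tau_{n}.\]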
## 3. \(3\)-cobordisms
In this section we recall some facts about the category \(\mathbf{Cob}_{3}\) of \(3\)-dimensional cobordisms (or \(3\)-cobordisms, for short) and their surgery presentations via ribbon graphs. We denote by \(D^{n}\) the closed unit ball in \(\mathbb{R}^{n}\). The \(n\)-dimensional sphere is denoted by \(S^{n}\).
**3A**.: **\(3\)-cobordisms.**
A \(3\)_-cobordism_ is a quadruple \((M,h,\Sigma_{0},\Sigma_{1})\), where \(M\) is a compact oriented \(3\)-manifold, \(\Sigma_{0}\) and \(\Sigma_{1}\) are two closed oriented surfaces, and \(h\) is an orientation preserving homeomorphism \(h\colon(-\Sigma_{0})\sqcup\Sigma_{1}\to\partial M\). The surface \(\Sigma_{0}\) is called the _bottom base_ and the surface \(\Sigma_{1}\) the _top base_ of the cobordism \(M\). Two cobordisms \((M,h,\Sigma_{0},\Sigma_{1})\) and \((M^{\prime},h^{\prime},\Sigma_{0},\Sigma_{1})\) are _homeomorphic_ if there is an orientation preserving homeomorphism \(g\colon M\to M^{\prime}\) such that \(h^{\prime}=g_{|\partial M}\,h\). When clear, we denote a cobordism \((M,h,\Sigma_{0},\Sigma_{1})\) simply by \(M\).
The composition of two cobordisms \((M_{0},h_{0},\Sigma_{0},\Sigma_{1})\) and \((M_{1},h_{1},\Sigma_{1},\Sigma_{2})\) is the cobordism \((M,h,\Sigma_{0},\Sigma_{2})\), where \(M\) is obtained by gluing \(M_{0}\) to \(M_{1}\) along \(h_{1}h_{0}{}^{-1}\colon h_{0}(\Sigma_{1})\to h_{1}(\Sigma_{1})\) and the homeomorphism \(h\) is given by
\[h=h_{0}|_{\Sigma_{0}}\sqcup h_{1}|_{\Sigma_{2}}\colon(-\Sigma_{0})\sqcup\Sigma _{2}\to\partial M.\]
We say that the cobordism \(M\) is obtained by gluing the cobordisms \(M_{0}\) and \(M_{1}\) along \(\Sigma_{1}\).
**3B**.: **The category \(\mathbf{Cob}_{3}\).** The category \(\mathbf{Cob}_{3}\) of \(3\)-cobordisms is defined as follows. The objects are closed oriented surfaces. A morphism \(f\colon\Sigma_{0}\to\Sigma_{1}\) in \(\mathbf{Cob}_{3}\) is a homeomorphism class of cobordisms between \(\Sigma_{0}\) and \(\Sigma_{1}\). In \(\mathbf{Cob}_{3}\), the identity of a closed oriented surface \(\Sigma\) is represented by the _identity cobordism_ \((C_{\Sigma},e,\Sigma,\Sigma)\), where \(C_{\Sigma}=\Sigma\times[0,1]\) is the cylinder over \(\Sigma\) with the product orientation, and \(e\colon(-\Sigma)\sqcup\Sigma\to\partial C_{\Sigma}\) is the homeomorphism given by \(e|_{-\Sigma}(x)=(x,0)\) and \(e|_{\Sigma}(x)=(x,1)\). The composition of morphisms \(\Sigma_{0}\to\Sigma_{1}\) and \(\Sigma_{1}\to\Sigma_{2}\) in \(\mathbf{Cob}_{3}\), represented respectively by cobordisms \(M\) and \(N\), is represented by the cobordism obtained by gluing \(M\) and \(N\) along \(\Sigma_{1}\). The category \(\mathbf{Cob}_{3}\) is symmetric monoidal (see Section **5B**). The monoidal product is given by disjoint union and the monoidal unit is the empty surface. For more details, see [32].
**3C**.: **Surgery presentation of closed 3-manifolds.** Let \(L\) be an \(n\)-component framed link in the \(3\)-sphere \(S^{3}\). Pick a closed tubular neighborhood \(N_{L}\) of \(L\). Since \(N_{L}\) is homeomorphic to \(\sqcup_{i=1}^{n}S^{1}\times D^{2}\), the boundary of the \(3\)-manifold \(S^{3}\setminus\operatorname{Int}(N_{L})\) is homeomorphic to a disjoint union of \(n\) tori \(S^{1}\times S^{1}\). The _Dehn surgery on \(S^{3}\) along \(L\)_ is the closed manifold
\[S^{3}_{L}=(S^{3}\setminus\operatorname{Int}(N_{L}))\bigcup_{\phi}\left(\sqcup_{i=1}^{n}D^{2}\times S^{1}\right),\]
where \(\phi\colon\partial(S^{3}\setminus\operatorname{Int}(N_{L}))\to\sqcup_{i=1}^{n}S^{1}\times S^{1}\) is a homeomorphism exchanging meridians and parallels. Any connected, oriented, closed \(3\)-manifold \(M\) is, according to Lickorish's theorem [21, Chapter 12], homeomorphic to \(S^{3}_{L}\) for some framed link \(L\subset S^{3}\).
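For example, if \(U\subset S^{3}\) is an unknot with framing number \(f\), then

\[S^{3}_{U}\cong\begin{cases}S^{3}&\text{if }f=\pm 1,\\ S^{1}\times S^{2}&\text{if }f=0;\end{cases}\]

the first case underlies the Kirby move \(\mathbf{K}1\) recalled in Section **3G** below.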
**3D**.: **Ribbon graphs.** A _circle_ is a \(1\)-manifold homeomorphic to \(S^{1}\). An _arc_ is a \(1\)-manifold homeomorphic to the closed interval \([0,1]\). The boundary points of an arc are called its _endpoints_. A _rectangle_ is a \(2\)-manifold with corners homeomorphic to \([0,1]\times[0,1]\). The four corner points of a rectangle split its boundary into four arcs called the _sides_. A _coupon_ is an oriented rectangle with a distinguished side called the _bottom base_, the opposite side being the _top base_.
A _plexus_ is a topological space obtained from a disjoint union of a finite number of oriented circles, oriented arcs, and coupons by gluing some endpoints of the arcs to the bases of the coupons. It is required that different endpoints of the arcs are never glued to the same point of a (base of a) coupon. The endpoints of the arcs that are not glued to coupons are called _free ends_. The set of free ends of a plexus \(\gamma\) is denoted by \(\partial\gamma\).
Given non-negative integers \(g\) and \(h\), a _ribbon \((g,h)\)-graph_\(\Gamma\) is a plexus \(\Gamma\) embedded in \(\mathbb{R}^{2}\times[0,1]\) and equipped with a framing such that
\[\partial\Gamma=\Gamma\cap\partial(\mathbb{R}^{2}\times[0,1])=\{(1,0,0),\dots,( g,0,0)\}\cup\{(1,0,1),\dots,(h,0,1)\}\]
and such that the arcs of \(\Gamma\) are transverse to \(\partial(\mathbb{R}^{2}\times[0,1])\) at all points of \(\partial\Gamma\). The free end \((i,0,0)\) is called the \(i\)_-th input_ and the free end \((j,0,1)\) is called the \(j\)_-th output_ of \(\Gamma\). For example, ribbon graphs without free ends and without coupons are nothing but framed oriented links in \(\mathbb{R}^{2}\times(0,1)\cong\mathbb{R}^{3}\).
We represent ribbon graphs by plane diagrams with blackboard framing. We require that for each coupon, its orientation is that of the plane, its bases are horizontal, and its bottom base is below its top base. By the Reidemeister theorem, two diagrams represent isotopic ribbon graphs if and only if they are related by a finite sequence of plane isotopies, the ribbon Reidemeister moves \(\mathbf{R}1\)–\(\mathbf{R}3\), and one further move. [Figures of these moves are not reproduced.]
**3E**.: **Standard ribbon graphs and surfaces.** For \(g\geq 0\), consider the ribbon graph
\[G_{g}^{+}=\text{[figure not reproduced]}\]
and embeddings \(\tilde{f}^{-}\colon H_{g}^{-}\to S^{3}\setminus N_{L}\) and \(\tilde{f}^{+}\colon H_{h}^{+}\to S^{3}\setminus N_{L}\) respectively extending \(f^{-}\) and \(f^{+}\). Let \(S_{L}^{3}\) be the Dehn surgery of \(S^{3}\) along \(L\) (see Section **3C**). The manifold
\[S_{\Gamma}^{3}=S_{L}^{3}\setminus\left(\tilde{f}^{-}(\operatorname{Int}(H_{g}^{ -}))\cup\tilde{f}^{+}(\operatorname{Int}(H_{h}^{+}))\right)\]
is a connected oriented compact \(3\)-manifold. By Section **3E**, its boundary \(\tilde{f}^{-}(\partial(H_{g}^{-}))\sqcup\tilde{f}^{+}(\partial(H_{h}^{+}))\) is canonically homeomorphic to \((-S_{g})\sqcup S_{h}\). This gives rise to a connected \(3\)-cobordism \(M_{\Gamma}\colon S_{g}\to S_{h}\).
For example, by [31, Chapter IV, Lemma 2.6.], the identity cobordism of the standard surface \(S_{g}\) (see Section **3B**) is represented by the following special ribbon \((g,g)\)-graph
\[I_{g}=\text{[figure not reproduced]}\tag{12}\]
The following lemma gives a presentation of the composition of \(3\)-cobordisms between standard surfaces.
**Lemma 3.1** ([31, Section IV.2.3]).: _If \(\Gamma\) is a special ribbon \((g,h)\)-graph and \(\Gamma^{\prime}\) is a special ribbon \((h,k)\)-graph, then the composition of \(3\)-cobordisms \(M_{\Gamma}\colon S_{g}\to S_{h}\) and \(M_{\Gamma^{\prime}}\colon S_{h}\to S_{k}\) is the \(3\)-cobordism \(M_{\Gamma^{\prime}\circ\Gamma}\colon S_{g}\to S_{k}\), where \(\Gamma^{\prime}\circ\Gamma\) is the special ribbon \((g,k)\)-graph obtained by stacking \(\Gamma^{\prime}\) over \(\Gamma\)._
Note that the connectedness of the top base of \(M_{\Gamma}\) and of the bottom base of \(M_{\Gamma^{\prime}}\) is important here. For the general case of \(3\)-cobordisms with non-connected bases, which we do not need in what follows, see [31, Section IV.2.8].
**3G**.: **Extended Kirby calculus.** Recall from Section **3C** that any closed oriented \(3\)-manifold can be obtained by surgery of \(S^{3}\) along a framed link. Kirby proved [20] that two framed links represent the same \(3\)-manifold (up to an orientation preserving homeomorphism) if and only if they are related by a finite sequence of isotopies and of the _Kirby moves_ **K**1 and **K**2. The move **K**1 consists in adding an unknot with framing number (which is the self-linking number) \(1\) or \(-1\):
The move **K**2 consists in sliding a component over another component. More precisely, given two distinct components \(L_{i}\) and \(L_{j}\) of a framed link, this move replaces \(L_{i}\) by the connected sum \(L_{i}\#L_{j}^{\prime}\) of \(L_{i}\) with a copy \(L_{j}^{\prime}\) of \(L_{j}\) obtained by slightly pushing \(L_{j}\) along its framing. For example, sliding an unknot with framing number \(0\) over an unknot with framing number \(1\) can be depicted as:
It follows from the Reidemeister and Kirby theorems that the Kirby calculus extends to the representation of \(3\)-cobordisms by special ribbon graphs (see [31, Chapter I, Lemma 3.4]). More explicitly, two special ribbon graphs represent the same \(3\)-cobordism (up to an orientation preserving homeomorphism) if and only if they are related by a finite sequence of isotopies
and of the following moves: the move \(\mathbf{K}1\), the generalized Kirby move \(\mathbf{K}2^{\prime}\), the move \(\mathbf{OR}\), the move \(\mathbf{COUPON}\), and the move \(\mathbf{TWIST}\). The move \(\mathbf{K}2^{\prime}\) consists in sliding an arc or circle component of a special ribbon graph over a distinct circle component. The move \(\mathbf{OR}\) consists in reversing the orientation of a circle component. The \(\mathbf{COUPON}\) move consists in changing the type of a crossing of a component passing over (or under) all its outputs
or all its inputs
The move \(\mathbf{TWIST}\) consists in a simultaneous twist of the output components
or the input components
## 4. Cyclic objects from surfaces
The first main result of this paper is that closed oriented surfaces can be organized in a (co)cyclic object in the category \(\mathbf{Cob}_{3}\) of \(3\)-dimensional cobordisms:
**Theorem 4.1**.: _For \(g\geq 1\), consider a closed oriented surface \(\Sigma_{g}\) of genus \(g\). Then the family \(\{\Sigma_{g}\}_{g\geq 1}\) has a structure of a cocyclic object \(X^{\bullet}\) in \(\mathbf{Cob}_{3}\) and a structure of a cyclic object \(X_{\bullet}\) in \(\mathbf{Cob}_{3}\)._
We construct (co)cyclic objects in \(\mathbf{Cob}_{3}\) by means of the surgery presentation of \(3\)-cobordisms developed in [28, 31] and reviewed in Section 3. We prove Theorem 4.1 in Sections **4A**–**4C**. First, in Section **4A** we construct the functor \(Y^{\bullet}\colon\Delta C\to\mathbf{Cob}_{3}\). Next, in Section **4B**
we construct the functor \(Y_{\bullet}\colon\Delta C^{\mathrm{op}}\to\mathbf{Cob}_{3}\). Cobordisms in both of these constructions have standard surfaces (see Section **3E**) as bases. Finally, in Section **4C** we pass from \(Y^{\bullet}\) and \(Y_{\bullet}\) to arbitrary \(X^{\bullet}\) and \(X_{\bullet}\), as stated in Theorem 4.1.
The category \(\mathbf{Cob}_{3}\) is of great interest in quantum topology. For instance, a \(3\)-dimensional TQFT is a symmetric monoidal functor from \(\mathbf{Cob}_{3}\) to \(\mathrm{Mod}_{\Bbbk}\). By composition, we have the following:
**Corollary 4.2**.: _If \(Z\) is a \(3\)-dimensional TQFT, then \(Z\circ X^{\bullet}\) is a cocyclic \(\Bbbk\)-module and \(Z\circ X_{\bullet}\) is a cyclic \(\Bbbk\)-module._
A fundamental construction of a \(3\)-dimensional TQFT is the Reshetikhin-Turaev TQFT \(\mathrm{RT}_{\mathcal{B}}\colon\mathbf{Cob}_{3}\to\mathrm{Mod}_{\Bbbk}\) associated to an anomaly free modular category \(\mathcal{B}\). We postpone the calculations of \(\mathrm{RT}_{\mathcal{B}}\circ X^{\bullet}\) and \(\mathrm{RT}_{\mathcal{B}}\circ X_{\bullet}\) to Section 7; the necessary algebraic preparations are given in Sections 5 and 6.
**4A**. **The construction \(Y^{\bullet}\).** Recall the standard surface \(S_{g}\) (see Section **3E**) of genus \(g\). For any \(n\in\mathbb{N}\), set \(Y^{n}=S_{n+1}\). For \(n\in\mathbb{N}^{*}\) and \(0\leq i\leq n\), the cofaces \(Y^{\bullet}(\delta_{i}^{n})\colon S_{n}\to S_{n+1}\) are defined as follows. The morphism \(Y^{\bullet}(\delta_{0}^{n})\colon S_{n}\to S_{n+1}\) is the cobordism class presented by the special ribbon graph \(G_{Y^{\bullet}(\delta_{0}^{n})}\):
For \(1\leq i\leq n-1\), the morphism \(Y^{\bullet}(\delta_{i}^{n})\colon S_{n}\to S_{n+1}\) is defined as the cobordism class presented by the special ribbon graph \(G_{Y^{\bullet}(\delta_{i}^{n})}\):
Finally, the morphism \(Y^{\bullet}(\delta_{n}^{n})\colon S_{n}\to S_{n+1}\) is defined as the cobordism class presented by the special ribbon graph \(G_{Y^{\bullet}(\delta_{n}^{n})}\):
For \(n\in\mathbb{N}\) and \(0\leq j\leq n\), the codegeneracy \(Y^{\bullet}(\sigma_{j}^{n})\colon S_{n+2}\to S_{n+1}\) is the cobordism class presented by the special ribbon graph \(G_{Y^{\bullet}(\sigma_{j}^{n})}\):
The morphism \(Y^{\bullet}(\tau_{0})\colon S_{1}\to S_{1}\) is the identity map \(\mathrm{id}_{S_{1}}\), which is represented by the special ribbon graph \(I_{1}\) depicted in (12). For \(n\in\mathbb{N}^{*}\), the cocyclic operator \(Y^{\bullet}(\tau_{n})\colon S_{n+1}\to S_{n+1}\)
is the cobordism class presented by the special ribbon graph \(G_{Y^{\bullet}(\tau_{n})}\):
\[G_{Y^{\bullet}(\tau_{n})}=\text{[figure not reproduced]}.\]
**Lemma 4.3**.: _The family \(Y^{\bullet}=\{S_{n+1}\}_{n\in\mathbb{N}}\), equipped with the cofaces \(\{Y^{\bullet}(\delta_{i}^{n})\}_{n\in\mathbb{N}^{*},0\leq i\leq n}\), the codegeneracies \(\{Y^{\bullet}(\sigma_{j}^{n})\}_{n\in\mathbb{N},0\leq j\leq n}\), and the cocyclic operators \(\{Y^{\bullet}(\tau_{n})\}_{n\in\mathbb{N}}\), is a cocyclic object in \(\mathbf{Cob}_{3}\)._
To prove Lemma 4.3, we make intensive use of some well-known consequences of the Kirby calculus from Section **3G**, which we collect in the following lemma:
**Lemma 4.4**.: _One has the following moves on special ribbon graphs:_
[Figures not reproduced: the moves (1), (2), (3), … of Lemma 4.4 on special ribbon graphs, together with the chain of diagrammatic equalities, labelled \((a)\)–\((f)\), verifying the relation (1) at the start of the proof of Lemma 4.3.]
Here \((a)\) and \((f)\) follow from definitions and Lemma 3.1, \((b)\) and \((e)\) follow from Lemma 3.1 and the fact that the graph from equation (12) represents the identity cobordism, \((c)\) from Lemma 4.4 (3), and \((d)\) by isotopy. The remaining cases are verified in a similar way.
Let us verify the relation (2).
1. Suppose that \(0\leq i<j\leq n\). We have a chain of diagrammatic equalities \[\sigma_{j}^{n}\sigma_{i}^{n+1}\stackrel{{(a)}}{{=}}\cdots\qquad\text{[diagrams not reproduced]},\]
which equals \(\sigma_{i}^{n}\sigma_{j+1}^{n+1}\). Here \((a)\) follows from definitions and Lemma 3.1, \((b)\) and \((d)\) follow from Lemma 3.1 and the fact that the graph from equation (12) represents the identity cobordism, \((c)\) from Lemma 4.4 (4), and \((e)\) follows by applying the parts (2), (4), and (5) of Lemma 4.4 and isotopy.
2. Suppose that \(0\leq i=j\leq n\). By definition and Lemma 3.1, the composition \(\sigma_{i}^{n}\sigma_{i}^{n+1}\) is presented by the special ribbon graph (13) [figure not reproduced]. On the other hand, the composition \(\sigma_{i}^{n}\sigma_{i+1}^{n+1}\) is presented by the special ribbon graph (14) [figure not reproduced]. By Lemma 4.4 (6), both special graphs from (13) and (14) are equivalent to the same special graph [figure not reproduced]. Consequently, the cobordisms \(\sigma_{i}^{n}\sigma_{i}^{n+1}\) and \(\sigma_{i}^{n}\sigma_{i+1}^{n+1}\) are in the same class, that is, they are equal in \(\mathbf{Cob}_{3}\).
Let us now verify the relation (3).
Suppose that \(1\leq i=j\leq n-1\). We have
Here \((a)\) follows from the definition and Lemma 3.1, \((b)\) from Lemma 4.4 (3), and \((c)\) by isotopy, Lemma 3.1, and the presentation of the identity cobordism given in equation (12). The cases \(i=0\) and \(i=n\) are proven similarly.
Suppose now that \(0\leq i<j\leq n\). In the case when \(i\neq 0\), we have
Here \((a)\) and \((f)\) follow from definitions and Lemma 3.1, \((b)\) and \((e)\) follow from Lemma 3.1 and the fact that the graph from equation (12) represents the identity cobordism, \((c)\) follows by applying twice the move from Lemma 4.4 (3), and \((d)\) by isotopy. The case when \(i=0\) is proven similarly.
The case when \(1\leq i=j+1\leq n+1\) is verified in a similar way as the case \(0\leq i=j\leq n\). Also, the case when \(1\leq j+1<i\leq n+1\) is verified in a similar way as the case \(0\leq i<j\leq n\).
It remains to show that relations (4), (6) and (8) hold. Indeed, according to [22, Section 6.1.1], these relations imply relations (5) and (7). Let us verify the relation (4). In the case when \(n\geq 3\) and \(2\leq i\leq n-1\), we have
Here \((a)\) and \((d)\) follow from definitions and Lemma 3.1, \((b)\) by isotopy, \((c)\) follows from Lemma 4.4 (3) and by isotopy. The remaining cases are proven similarly.
We further show the relation (6). Assume that \(1\leq j\leq n\). We have:
\[\tau_{n}\sigma_{j}^{n}=\cdots\qquad\text{[diagrams not reproduced]}\]
Here \((a)\) and \((d)\) follow from the definition and Lemma 3.1, \((b)\) follows from Lemma 3.1 and the fact that the graph from equation (12) represents the identity cobordism, \((c)\) follows by applying the parts \((2)\) and \((5)\) of Lemma 4.4 and isotopy.
Finally, we check the relation \((8)\) in the case \(n=1\). The general case is proven by similar reasoning. We have:
Here \((a)\) follows from the definition and Lemma 3.1, \((b)\) follows by the **TWIST** move, \((c)\) follows by isotopy, and \((d)\) follows by isotopy, Lemma 3.1, and the presentation of the identity cobordism depicted in (12). This finishes the proof of Lemma 4.3.
**4B**. **The construction \(Y_{\bullet}\).** Recall the standard surface \(S_{g}\) (see Section **3E**) of genus \(g\). For \(n\in\mathbb{N}\), set \(Y_{n}=S_{n+1}\). For \(n\in\mathbb{N}^{*}\) and \(0\leq i\leq n\), the face \(Y_{\bullet}(d_{i}^{n})\colon S_{n+1}\to S_{n}\) is the cobordism class presented by the special ribbon graph \(G_{Y_{\bullet}(d_{i}^{n})}\):
\[G_{Y_{\bullet}(d_{i}^{n})}=\text{[figure not reproduced]}.\]
Let \(n\in\mathbb{N}\) and \(0\leq j\leq n\). The morphism \(Y_{\bullet}(s_{j}^{n})\colon S_{n+1}\to S_{n+2}\) is the cobordism class presented by the special ribbon graph \(G_{Y_{\bullet}(s_{j}^{n})}\):
\[G_{Y_{\bullet}(s_{j}^{n})}=\text{[figure not reproduced]}.\]

[The remainder of this subsection, including the definition of the cyclic operators \(Y_{\bullet}(t_{n})\), is not reproduced.]
**4C**. **Passing to \(X^{\bullet}\) and \(X_{\bullet}\).** Let \(\{\Sigma_{n+1}\}_{n\geq 0}\) be any family of closed oriented surfaces. By the classification theorem, for each \(n\) there exists an orientation preserving homeomorphism \(f_{n}\colon\Sigma_{n+1}\to S_{n+1}.\) Denote by \(\operatorname{Cyl}(f_{n})\colon\Sigma_{n+1}\to S_{n+1}\) the associated morphism in \(\mathbf{Cob}_{3}\), given by the quadruple
\[\operatorname{Cyl}(f_{n})=(C_{S_{n+1}}=S_{n+1}\times[0,1]\,,h_{n}\colon(-\Sigma _{n+1})\sqcup S_{n+1}\to\partial(C_{S_{n+1}}),\Sigma_{n+1},S_{n+1}),\]
where \(h_{n}(x)=(f_{n}(x),0)\) if \(x\in\Sigma_{n+1}\) and \(h_{n}(x)=(x,1)\) if \(x\in S_{n+1}\). It follows from [31, Chapter IV, Section 5.1] that the cobordism \(\operatorname{Cyl}(f_{n})\) is determined up to isotopy.
We pass from \(Y^{\bullet}\) to the cocyclic object \(X^{\bullet}\) in \(\mathbf{Cob}_{3}\) as follows. For any \(n\in\mathbb{N}\), define \(X^{n}=\Sigma_{n+1}\). Next, the cofaces \(\{X^{\bullet}(\delta^{n}_{i})\colon\Sigma_{n}\to\Sigma_{n+1}\}_{n\in\mathbb{N} ^{*},0\leq i\leq n}\), the codegeneracies \(\{X^{\bullet}(\sigma^{n}_{j})\colon\Sigma_{n+2}\to\Sigma_{n+1}\}_{n\in \mathbb{N},0\leq j\leq n}\), and the cocyclic operators \(\{X^{\bullet}(\tau_{n})\colon\Sigma_{n+1}\to\Sigma_{n+1}\}_{n\in\mathbb{N}}\) are defined by formulas
\[X^{\bullet}(\delta^{n}_{i}) =(\operatorname{Cyl}(f_{n}))^{-1}Y^{\bullet}(\delta^{n}_{i}) \operatorname{Cyl}(f_{n-1}),\] \[X^{\bullet}(\sigma^{n}_{j}) =(\operatorname{Cyl}(f_{n}))^{-1}Y^{\bullet}(\sigma^{n}_{j}) \operatorname{Cyl}(f_{n+1}),\] \[X^{\bullet}(\tau_{n}) =(\operatorname{Cyl}(f_{n}))^{-1}Y^{\bullet}(\tau_{n}) \operatorname{Cyl}(f_{n}).\]
It follows from definitions that the family of cylinders \(\{\operatorname{Cyl}(f_{n})\colon\Sigma_{n+1}\to S_{n+1}\}_{n\in\mathbb{N}}\) is a natural isomorphism between cocyclic objects \(X^{\bullet}\) and \(Y^{\bullet}\) in \(\mathbf{Cob}_{3}\). One similarly passes from \(Y_{\bullet}\) to a cyclic object \(X_{\bullet}\) in \(\mathbf{Cob}_{3}\).
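Note that the relations for \(X^{\bullet}\) follow from those for \(Y^{\bullet}\) since the intermediate cylinders cancel; for instance,

\[X^{\bullet}(\delta_{j}^{n+1})\,X^{\bullet}(\delta_{i}^{n})=(\operatorname{Cyl}(f_{n+1}))^{-1}\,Y^{\bullet}(\delta_{j}^{n+1})\,Y^{\bullet}(\delta_{i}^{n})\,\operatorname{Cyl}(f_{n-1})\]

because \(\operatorname{Cyl}(f_{n})(\operatorname{Cyl}(f_{n}))^{-1}=\operatorname{id}_{S_{n+1}}\) in \(\mathbf{Cob}_{3}\).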
## 5. Preliminaries on monoidal categories and Hopf algebras
In this section we recall some algebraic preliminaries on ribbon categories and their graphical calculus as well as categorical Hopf algebras and related concepts. For more details, see [32].
**5A**.: **Conventions.**
In what follows, we suppress in our formulas the associativity and unitality constraints of the monoidal category. We denote by \(\otimes\) and \(\mathbb{1}\) the monoidal product and unit object of a monoidal category. For any objects \(X_{1},\ldots,X_{n}\) of a monoidal category with \(n\geq 2\), we set
\[X_{1}\otimes X_{2}\otimes\cdots\otimes X_{n}=(\cdots((X_{1}\otimes X_{2}) \otimes X_{3})\otimes\cdots\otimes X_{n-1})\otimes X_{n}\]
and similarly for morphisms. A monoidal category is _\(\Bbbk\)-linear_ if its Hom sets carry a \(\Bbbk\)-module structure such that the composition and the monoidal product of morphisms are \(\Bbbk\)-bilinear. For brevity, we often use the term monoidal \(\Bbbk\)-category.
**5B**.: **Braided categories.**
A _braiding_ of a monoidal category \((\mathcal{B},\otimes,\mathbb{1})\) is a family \(\tau=\{\tau_{X,Y}\colon X\otimes Y\to Y\otimes X\}_{X,Y\in\operatorname{Ob }(\mathcal{B})}\) of natural isomorphisms such that
\[\tau_{X,Y\otimes Z} =(\operatorname{id}_{Y}\otimes\tau_{X,Z})(\tau_{X,Y}\otimes \operatorname{id}_{Z})\text{ and }\] \[\tau_{X\otimes Y,Z} =(\tau_{X,Z}\otimes\operatorname{id}_{Y})(\operatorname{id}_{X} \otimes\tau_{Y,Z})\]
for all \(X,Y,Z\in\operatorname{Ob}(\mathcal{B})\). Note that the above axioms imply that \(\tau_{X,\mathbb{1}}=\tau_{\mathbb{1},X}=\operatorname{id}_{X}\) for any \(X\in\operatorname{Ob}(\mathcal{B})\). A _braided category_ is a monoidal category endowed with a braiding. A braiding \(\tau\) of \(\mathcal{B}\) is _symmetric_ if for all \(X,Y\in\operatorname{Ob}(\mathcal{B})\),
\[\tau_{Y,X}\tau_{X,Y}=\operatorname{id}_{X\otimes Y}.\]
A _symmetric category_ is a monoidal category endowed with a symmetric braiding. For example, the category of left \(\Bbbk\)-modules is symmetric.
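Concretely, the symmetric braiding of the category of left \(\Bbbk\)-modules is given by the flip maps

\[\tau_{X,Y}\colon X\otimes Y\to Y\otimes X,\qquad x\otimes y\mapsto y\otimes x.\]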
**5C**.: **Braided categories with a twist.** A _twist_ for a braided monoidal category \(\mathcal{B}\) is a natural isomorphism \(\theta=\{\theta_{X}\colon X\to X\}_{X\in\operatorname{Ob}(\mathcal{B})}\) such that for all \(X,Y\in\operatorname{Ob}(\mathcal{B})\),
\[\theta_{X\otimes Y}=\tau_{Y,X}\tau_{X,Y}(\theta_{X}\otimes\theta_{Y}). \tag{15}\]
Note that this implies \(\theta_{\mathbb{1}}=\operatorname{id}_{\mathbb{1}}\). By a _braided category with a twist_, we mean a braided category endowed with a twist. For example, the family \(\{\operatorname{id}_{X}\colon X\to X\}_{X\in\operatorname{Ob}(\mathcal{B})}\) is a twist for \(\mathcal{B}\) if and only if \(\mathcal{B}\) is symmetric. Also, any ribbon category (see Section **5F**) has a canonical twist.
**5D**.: **Graphical calculus.** In this paper, we use the _Penrose graphical calculus_, which allows us to avoid lengthy algebraic computations by using simple topological arguments. The diagrams read from bottom to top. In a monoidal category \(\mathcal{B}\), the diagrams are made of arcs colored by objects of \(\mathcal{B}\) and of boxes, colored by morphisms of \(\mathcal{B}\). Arcs colored by \(\mathbb{1}\) may be omitted in the pictures. The identity morphism of an object \(X\), a morphism \(f\colon X\to Y\) in \(\mathcal{B}\), and its composition with a morphism \(g\colon Y\to Z\) in \(\mathcal{B}\) are represented respectively as
The tensor product of two morphisms \(f\colon X\to Y\) and \(g\colon U\to V\) is represented by placing a picture of \(f\) to the left of the picture of \(g\):
Any diagram represents a morphism. The latter depends only on the isotopy class of the diagram representing it. For example, the _level-exchange property_
reflects the formula
\[f\otimes g=(f\otimes\operatorname{id}_{V})(\operatorname{id}_{X}\otimes g)=( \operatorname{id}_{Y}\otimes g)(f\otimes\operatorname{id}_{U}).\]
When \(\mathcal{B}\) is braided with a braiding \(\tau\), we depict
\[\tau_{X,Y}=\text{[figure not reproduced]}.\]
When \(\mathcal{B}\) is a braided category with a twist \(\theta=\{\theta_{X}\colon X\to X\}_{X\in\operatorname{Ob}(\mathcal{B})}\), we depict
\[\theta_{X}=\text{[figure not reproduced]}.\]
The left and the right twist are natural isomorphisms with inverses
\[(\theta_{X}^{l})^{-1}=\text{[figure not reproduced]}\qquad\text{and}\qquad(\theta_{X}^{r})^{-1}=\text{[figure not reproduced]}.\]
**5K**.: **Dimension of a fusion category.** Let \(\mathcal{C}\) be a pivotal fusion \(\Bbbk\)-category. We identify \(\Bbbk\) and \(\operatorname{End}_{\mathcal{C}}(\mathbb{1})\) via the \(\Bbbk\)-linear isomorphism \(k\mapsto k\mathrm{id}_{\mathbb{1}}\). Pick a representative set \(I\) of simple objects of \(\mathcal{C}\). The _dimension of the category_\(\mathcal{C}\) is an element of \(\operatorname{End}_{\mathcal{C}}(\mathbb{1})\cong\Bbbk\) defined by
\[\dim(\mathcal{C})=\sum_{i\in I}\dim_{l}(i)\dim_{r}(i).\]
The dimension of \(\mathcal{C}\) does not depend on the choice of \(I\) since isomorphic objects of \(\mathcal{C}\) have the same left/right dimensions. Note that if \(\mathcal{C}\) is spherical, then
\[\dim(\mathcal{C})=\sum_{i\in I}(\dim(i))^{2}.\]
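For example, if \(\Bbbk\) is an algebraically closed field of characteristic zero and \(G\) is a finite group, then the category \(\operatorname{Rep}_{\Bbbk}(G)\) of finite-dimensional representations of \(G\) is a spherical fusion \(\Bbbk\)-category, the dimension of a simple object is its usual dimension as a \(\Bbbk\)-vector space, and

\[\dim(\operatorname{Rep}_{\Bbbk}(G))=\sum_{i\in I}(\dim_{\Bbbk}V_{i})^{2}=|G|,\]

where \(\{V_{i}\}_{i\in I}\) is a representative set of irreducible representations of \(G\).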
**5L**.: **Modular categories.** Let \(\mathcal{C}\) be a ribbon fusion \(\Bbbk\)-category. Pick a representative set \(I\) of simple objects of \(\mathcal{C}\). The scalars
\[\Delta_{\pm}=\sum_{i\in I}\dim(i)\operatorname{tr}(\theta_{i}^{\pm 1})\in \operatorname{End}_{\mathcal{C}}(\mathbb{1})\cong\Bbbk,\]
where \(\theta\) is the twist of \(\mathcal{C}\), do not depend on the choice of \(I\). The _\(S\)-matrix_\([S_{i,j}]_{i,j\in I}\) of \(\mathcal{C}\) is defined by
\[S_{i,j}=\operatorname{tr}(\tau_{i,j}\tau_{j,i})\in\operatorname{End}_{\mathcal{ C}}(\mathbb{1})\cong\Bbbk.\]
Note that the invertibility of \(S\) does not depend on the choice of \(I\).
A _modular \(\Bbbk\)-category_ is a ribbon fusion \(\Bbbk\)-category whose \(S\)-matrix is invertible. The scalars \(\Delta_{+}\), \(\Delta_{-}\), and \(\dim(\mathcal{C})\) associated with a modular \(\Bbbk\)-category \(\mathcal{C}\) are invertible in \(\Bbbk\) and are related by \(\dim(\mathcal{C})=\Delta_{-}\Delta_{+}\) (see [31, p. 89]). A modular \(\Bbbk\)-category is _anomaly free_ if \(\Delta_{+}=\Delta_{-}\).
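For example, the category of finite-dimensional vector spaces over a field \(\Bbbk\), with its flip braiding and identity twist, is an anomaly free modular \(\Bbbk\)-category: its only simple object is \(\mathbb{1}=\Bbbk\), its \(S\)-matrix is the \(1\times 1\) identity matrix, and \(\Delta_{+}=\Delta_{-}=\dim(\mathcal{C})=1\).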
**5M**.: **Categorical algebras.** An _algebra_ in a monoidal category \(\mathcal{C}\) is a triple \((A,m,u)\), where \(A\) is an object of \(\mathcal{C}\), \(m\colon A\otimes A\to A\) and \(u\colon\mathbb{1}\to A\) are morphisms in \(\mathcal{C}\), called _multiplication_ and _unit_ respectively, which satisfy
\[m(m\otimes\mathrm{id}_{A})=m(\mathrm{id}_{A}\otimes m)\quad\text{and}\quad m( u\otimes\mathrm{id}_{A})=\mathrm{id}_{A}=m(\mathrm{id}_{A}\otimes u).\]
The multiplication and the unit are depicted by
\[\text{[Figures not reproduced: the diagrams for the multiplication \(m\) and the unit \(u\).]}\]
An _algebra morphism_ between two algebras \((A,m_{A},u_{A})\) and \((B,m_{B},u_{B})\) in a monoidal category \(\mathcal{C}\) is a morphism \(f\colon A\to B\) in \(\mathcal{C}\) such that \(fm_{A}=m_{B}(f\otimes f)\) and \(fu_{A}=u_{B}\).
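For example, an algebra in the symmetric category of left \(\Bbbk\)-modules is exactly an associative unital \(\Bbbk\)-algebra. Concretely, for a monoid \(M\), the monoid algebra \(A=\Bbbk[M]\) is an algebra in this sense, with

\[m(x\otimes y)=xy\quad\text{for }x,y\in M\qquad\text{and}\qquad u(1_{\Bbbk})=e_{M},\]

where \(e_{M}\) denotes the neutral element of \(M\).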
**5N**.: **Categorical coalgebras.** A _coalgebra_ in a monoidal category \(\mathcal{C}\) is a triple \((C,\Delta,\varepsilon)\), where \(C\) is an object of \(\mathcal{C}\), \(\Delta\colon C\to C\otimes C\) and \(\varepsilon\colon C\to\mathbb{1}\) are morphisms in \(\mathcal{C}\), called _co-multiplication_ and _counit_ respectively, which satisfy
\[(\Delta\otimes\mathrm{id}_{C})\Delta=(\mathrm{id}_{C}\otimes\Delta)\Delta\quad \text{and}\quad(\mathrm{id}_{C}\otimes\varepsilon)\Delta=\mathrm{id}_{C}=( \varepsilon\otimes\mathrm{id}_{C})\Delta.\]
The comultiplication and the counit are depicted by
\[\text{[Figures not reproduced: the diagrams for the comultiplication \(\Delta\) and the counit \(\varepsilon\).]}\]
A _coalgebra morphism_ between two coalgebras \((C,\Delta_{C},\varepsilon_{C})\) and \((D,\Delta_{D},\varepsilon_{D})\) in a monoidal category \(\mathcal{C}\) is a morphism \(f\colon C\to D\) in \(\mathcal{C}\) such that \(\Delta_{D}f=(f\otimes f)\Delta_{C}\) and \(\varepsilon_{D}f=\varepsilon_{C}\).
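Dually, for any set \(S\), the free \(\Bbbk\)-module \(C=\Bbbk[S]\) is a coalgebra in the category of left \(\Bbbk\)-modules, with

\[\Delta(s)=s\otimes s\qquad\text{and}\qquad\varepsilon(s)=1_{\Bbbk}\quad\text{for all }s\in S.\]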
**5O. Categorical bialgebras.** A _bialgebra_ in a braided monoidal category \(\mathcal{B}\) is a quintuple \((A,m,u,\Delta,\varepsilon)\) such that \((A,m,u)\) is an algebra in \(\mathcal{B}\), \((A,\Delta,\varepsilon)\) is a coalgebra in \(\mathcal{B}\), and such that the following additional relations hold:
\[\Delta m=(m\otimes m)(\operatorname{id}_{A}\otimes\tau_{A,A}\otimes \operatorname{id}_{A})(\Delta\otimes\Delta),\quad\varepsilon m=\varepsilon \otimes\varepsilon,\quad\Delta u=u\otimes u,\quad\text{and}\quad\varepsilon u =\operatorname{id}_{\mathbb{1}}.\]
A _bialgebra morphism_ between two bialgebras \(A\) and \(B\) in a braided monoidal category \(\mathcal{B}\) is a morphism \(A\to B\) in \(\mathcal{B}\), which is both an algebra and a coalgebra morphism.
**5P. Categorical Hopf algebras.** A _Hopf algebra_ in a braided monoidal category \(\mathcal{B}\) is a sextuple \((A,m,u,\Delta,\varepsilon,S)\), where \((A,m,u,\Delta,\varepsilon)\) is a bialgebra in \(\mathcal{B}\) and \(S\colon A\to A\) is an isomorphism in \(\mathcal{B}\), called the _antipode_, which satisfies the equation
\[m(S\otimes\operatorname{id}_{A})\Delta=u\varepsilon=m(\operatorname{id}_{A}\otimes S)\Delta.\]
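For example, for a group \(G\), the group algebra \(\Bbbk[G]\) is a Hopf algebra in the symmetric category of left \(\Bbbk\)-modules, with

\[\Delta(g)=g\otimes g,\qquad\varepsilon(g)=1_{\Bbbk},\qquad S(g)=g^{-1}\quad\text{for all }g\in G;\]

indeed, \(m(S\otimes\operatorname{id})\Delta(g)=g^{-1}g=u\varepsilon(g)\).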
The antipode and its inverse are depicted by
\[\text{[Figures not reproduced: the diagrams for the antipode \(S\) and its inverse.]}\]

[A span of text is not reproduced here; it introduces the coend \((\mathbb{F},i)\) of a braided pivotal category, with its dinatural transformation \(i=\{i_{X}\colon X^{*}\otimes X\to\mathbb{F}\}_{X\in\operatorname{Ob}(\mathcal{B})}\).] A coend of a braided pivotal
category \(\mathcal{B}\), if it exists, is unique up to a unique isomorphism commuting with the dinatural transformation. We depict the dinatural transformation \(i=\{i_{X}\colon X^{*}\otimes X\to\mathbb{F}\}_{X\in\operatorname{Ob}(\mathcal{B})}\) as
\[i_{X}=\raisebox{-14.226378pt}{\includegraphics[]{fig/C3.pdf}}.\]
An important factorization property is given in the following lemma.
**Lemma 5.1** (Fubini theorem for coends, [25]).: _Let \((\mathbb{F},i)\) be a coend of a braided pivotal category \(\mathcal{B}\). If \(d=\{d_{X_{1},\ldots,X_{n}}\colon X_{1}^{*}\otimes X_{1}\otimes\cdots\otimes X _{n}^{*}\otimes X_{n}\to D\}_{X_{1},\ldots,X_{n}\in\operatorname{Ob}(\mathcal{ B})}\) is a family of morphisms in \(\mathcal{B}\), which is dinatural in each \(X_{i}\) for \(1\leq i\leq n\), then there exists a unique morphism \(\varphi\colon\mathbb{F}^{\otimes n}\to D\) in \(\mathcal{B}\) such that_
\[d_{X_{1},\ldots,X_{n}}=\varphi(i_{X_{1}}\otimes\cdots\otimes i_{X_{n}})\]
_for all \(X_{1},\ldots,X_{n}\in\operatorname{Ob}(\mathcal{B})\)._
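For \(n=1\), Lemma 5.1 is precisely the universal property of the coend: any dinatural family \(d=\{d_{X}\colon X^{*}\otimes X\to D\}_{X\in\operatorname{Ob}(\mathcal{B})}\) factors as

\[d_{X}=\varphi\,i_{X}\qquad\text{for a unique morphism }\varphi\colon\mathbb{F}\to D.\]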
According to [23, 26], the coend of a braided pivotal category \(\mathcal{B}\) is a Hopf algebra in \(\mathcal{B}\) endowed with a canonical Hopf pairing. Its unit is \(u=(\operatorname{id}_{\mathbb{1}}\otimes i_{\mathbb{1}})(\operatorname{coev}_{\mathbb{1}}\otimes\operatorname{id}_{\mathbb{1}})\colon\mathbb{1}\to\mathbb{F}\). The multiplication \(m\colon\mathbb{F}\otimes\mathbb{F}\to\mathbb{F}\) and the canonical pairing \(\omega\colon\mathbb{F}\otimes\mathbb{F}\to\mathbb{1}\) are the unique morphisms such that for all \(X,Y\in\operatorname{Ob}(\mathcal{B})\),
\[\text{(defining diagrams omitted)}.\]
Its comultiplication \(\Delta\colon\mathbb{F}\to\mathbb{F}\otimes\mathbb{F}\), counit \(\varepsilon\colon\mathbb{F}\to\mathbb{1}\), and antipode \(S\colon\mathbb{F}\to\mathbb{F}\) are unique morphisms such that for all \(X\in\operatorname{Ob}(\mathcal{B})\),
\[\raisebox{-14.226378pt}{\includegraphics[]{fig/C3.pdf}}=\raisebox{-14.226378pt}{ \includegraphics[]{fig/C4.pdf}},\qquad\raisebox{-14.226378pt}{ \includegraphics[]{fig/C5.pdf}}=\raisebox{-14.226378pt}{\includegraphics[]{fig/C6.pdf}},\qquad \raisebox{-14.226378pt}{\includegraphics[]{fig/C7.pdf}}=\raisebox{-14.226378pt}{ \includegraphics[]{fig/C8.pdf}}.\]
Two useful properties of the antipode of the coend \(\mathbb{F}\) are
\[S^{2} =\theta_{\mathbb{F}}^{r}\quad\text{and} \tag{16}\] \[\omega(S\otimes\operatorname{id}_{\mathbb{F}}) =\omega(\operatorname{id}_{\mathbb{F}}\otimes S). \tag{17}\]
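Combining (16) and (17) yields, for instance,

\[\omega(S\otimes S)=\omega(\operatorname{id}_{\mathbb{F}}\otimes S^{2})=\omega(\operatorname{id}_{\mathbb{F}}\otimes\theta_{\mathbb{F}}^{r}).\]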
Also, by the definitions of \(\omega\) and \(S\), we have for all \(X,Y\in\operatorname{Ob}(\mathcal{B})\),
\[\raisebox{-14.226378pt}{\includegraphics[]{fig/C7.pdf}}=\raisebox{-14.226378pt}{ \includegraphics[]{fig/C7.pdf}}.\]
## 6. Cyclic modules from (co)algebras
Let \(\mathcal{B}\) be a braided \(\Bbbk\)-linear category with a twist. In this section, we first review some (co)cyclic \(\Bbbk\)-modules arising from coalgebras and algebras in \(\mathcal{B}\). Proofs can be found in [5], where these were merely (co)cyclic sets, since the hypothesis of \(\Bbbk\)-linearity of \(\mathcal{B}\) was dropped there. Next, we outline some basic computations of the cyclic (co)homology of the introduced (co)cyclic \(\Bbbk\)-modules. Finally, in Section 6**F**, we make explicit the (co)faces, (co)degeneracies, and (co)cyclic operators of the (co)cyclic \(\Bbbk\)-vector spaces associated to the coend of the representation category of a finite-dimensional ribbon Hopf algebra. For more details on Hochschild and cyclic (co)homology of (co)cyclic \(\Bbbk\)-modules, see [35, Section 9.6].
### 6A. Cocyclic modules from coalgebras
Any coalgebra \(C\) in \(\mathcal{B}\) gives rise to a cocyclic \(\Bbbk\)-module \(C^{\bullet}\) as follows. For any \(n\in\mathbb{N}\), define \(C^{n}=\operatorname{Hom}_{\mathcal{B}}(C^{\otimes n+1},\mathbb{1}).\) Next, define the cofaces \(\{\delta_{i}^{n}\colon C^{n-1}\to C^{n}\}_{n\in\mathbb{N}^{*},0\leq i\leq n}\), the codegeneracies \(\{\sigma_{j}^{n}\colon C^{n+1}\to C^{n}\}_{n\in\mathbb{N},0\leq j\leq n}\), and the cocyclic operators \(\{\tau_{n}\colon C^{n}\to C^{n}\}_{n\in\mathbb{N}}\) by string diagrams (omitted), in which an integer \(k\) below an arc denotes the \(k\)-th tensorand of a tensor power of \(C\). This construction is functorial in \(C\), that is, a morphism between coalgebras in \(\mathcal{B}\) induces a morphism of the corresponding cocyclic \(\Bbbk\)-modules.
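For orientation, in the symmetric case \(\mathcal{B}=\operatorname{Mod}_{\Bbbk}\) with trivial twist, one consistent normalization of these structure maps — a reconstruction, offered because the diagrams are not reproduced above, and cross-checked against the identities of Section 6**D** and the explicit formulas of Section 6**F** — is

\[\delta_{i}^{n}(f)=f\,(\operatorname{id}^{\otimes i}\otimes\varepsilon\otimes\operatorname{id}^{\otimes n-i}),\qquad\sigma_{j}^{n}(f)=f\,(\operatorname{id}^{\otimes j}\otimes\Delta\otimes\operatorname{id}^{\otimes n-j}),\]

with \(\tau_{n}(f)\) obtained by precomposing \(f\) with the cyclic rotation of the tensorands (implemented by braidings and corrected by the twist in the general braided case).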
### 6B. Cyclic modules from algebras
Any algebra \(A\) in \(\mathcal{B}\) gives rise to a cyclic \(\Bbbk\)-module \(A_{\bullet}\) as follows. For any \(n\in\mathbb{N}\), define \(A_{n}=\operatorname{Hom}_{\mathcal{B}}(A^{\otimes n+1},\mathbb{1})\). Next, define the faces \(\{d_{i}^{n}\colon A_{n}\to A_{n-1}\}_{n\in\mathbb{N}^{*},0\leq i\leq n}\), the degeneracies \(\{s_{j}^{n}\colon A_{n}\to A_{n+1}\}_{n\in\mathbb{N},0\leq j\leq n}\), and the cyclic operators \(\{t_{n}\colon A_{n}\to A_{n}\}_{n\in\mathbb{N}}\) by string diagrams (omitted). This construction is functorial in \(A\), that is, a morphism between algebras in \(\mathcal{B}\) induces, in a functorial way, a morphism of the corresponding cyclic \(\Bbbk\)-modules.
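Dually — again in the symmetric case, as a reconstruction consistent with the explicit formulas of Section 6**F** — the faces insert the unit \(u\) of \(A\) and the degeneracies apply the multiplication \(m\):

\[d_{i}^{n}(f)=f\,(\operatorname{id}^{\otimes i}\otimes u\otimes\operatorname{id}^{\otimes n-i}),\qquad s_{j}^{n}(f)=f\,(\operatorname{id}^{\otimes j}\otimes m\otimes\operatorname{id}^{\otimes n-j}),\]

with \(t_{n}(f)\) given by precomposition with the (twist-corrected) cyclic rotation.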
### 6C. Cyclic duals
The cyclic duality \(L\) from Section 2**D** transforms the cocyclic \(\Bbbk\)-module \(C^{\bullet}\) from Section 6**A** into the cyclic \(\Bbbk\)-module \(C^{\bullet}\circ L\). For any \(n\in\mathbb{N}\), \(C^{\bullet}\circ L(n)=C^{n}=\operatorname{Hom}_{\mathcal{B}}(C^{\otimes n+1},\mathbb{1})\). The faces \(\{\tilde{d}_{i}^{n}\}_{n\in\mathbb{N}^{*},0\leq i\leq n}\), the degeneracies \(\{\tilde{s}_{j}^{n}\}_{n\in\mathbb{N},0\leq j\leq n}\), and the cyclic operators \(\{\tilde{t}_{n}\}_{n\in\mathbb{N}}\) are computed by formulas
Similarly, the functor \(L^{\operatorname{op}}\) transforms the cyclic \(\Bbbk\)-module \(A_{\bullet}\) from Section 6**B** into the cocyclic \(\Bbbk\)-module \(A_{\bullet}\circ L^{\operatorname{op}}\). By definitions, \(A_{\bullet}\circ L^{\operatorname{op}}(n)=A_{n}=\operatorname{Hom}_{\mathcal{B }}(A^{\otimes n+1},\mathbb{1})\) for
all \(n\in\mathbb{N}\). The cofaces \(\{\tilde{\delta}_{i}^{n}\}_{n\in\mathbb{N}^{*},0\leq i\leq n}\), the codegeneracies \(\{\tilde{\sigma}_{j}^{n}\}_{n\in\mathbb{N},0\leq j\leq n}\), and the cocyclic operators \(\{\tilde{\tau}_{n}\}_{n\in\mathbb{N}}\) are computed by the formulas
Note that the construction \(A_{\bullet}\circ L^{\text{op}}\) is a particular case of the work of Akrami and Majid [1] (since any algebra in a braided category with a twist is a ribbon algebra in the sense of [1]).
### 6D. On the (co)homology of \(C^{\bullet}\) and \(C_{\bullet}\)
Let \(\Bbbk\) be a commutative ring, \(\mathcal{B}\) a braided \(\Bbbk\)-category with a twist, and \(C\) any coalgebra in \(\mathcal{B}\). Let \(\alpha\colon\mathbb{1}\to C\) be a morphism such that \(\varepsilon\alpha=\operatorname{id}_{\mathbb{1}}\). For example, if \(C\) is a bialgebra in \(\mathcal{B}\), then we can take \(\alpha\) to be the unit of \(C\). By rewriting [19, Remark 1.] in the categorical setting, we obtain that the Hochschild (co)homology of the underlying (co)simplicial modules of \(C^{\bullet}\) and \(C_{\bullet}\) is concentrated in degree zero. Indeed, the \(n\)-th Hochschild cohomology \(HH^{n}(C^{\bullet})\) of \(C^{\bullet}\) is the \(n\)-th cohomology of the cochain complex
where \(\beta_{n}=\sum_{i=0}^{n}(-1)^{i}\delta_{i}^{n}.\) Then the family \(\{h_{n}\colon\operatorname{Hom}_{\mathcal{B}}(C^{\otimes n+1},\mathbb{1})\to\operatorname{Hom}_{\mathcal{B}}(C^{\otimes n},\mathbb{1})\}_{n\in\mathbb{N}^{*}}\), defined by setting for any \(f\in\operatorname{Hom}_{\mathcal{B}}(C^{\otimes n+1},\mathbb{1})\),
satisfies the equalities
\[\beta_{n}h_{n}+h_{n+1}\beta_{n+1} =\operatorname{id}_{\operatorname{Hom}_{\mathcal{B}}(C^{\otimes n +1},\mathbb{1})}\quad\text{for $n\geq 1$}\quad\text{and}\] \[h_{1}\beta_{1}+\operatorname{Hom}_{\mathcal{B}}(\alpha\varepsilon, \mathbb{1}) =\operatorname{id}_{\operatorname{Hom}_{\mathcal{B}}(C,\mathbb{1})}.\]
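In the symmetric-case normalization sketched in Section 6**A** above, a homotopy satisfying these equalities is \(h_{n}(f)=f\,(\alpha\otimes\operatorname{id}_{C^{\otimes n}})\) (an assumption, since the defining diagram is not reproduced); for instance,

\[h_{1}\beta_{1}(f)=f\,(\varepsilon\alpha\otimes\operatorname{id}_{C})-f\alpha\varepsilon=f-f\alpha\varepsilon,\]

in agreement with the second displayed identity.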
As a corollary, \(HH^{0}(C^{\bullet})=\ker(\beta_{1})\) and \(HH^{n}(C^{\bullet})=0\) for \(n>0\). From the cohomological form of Connes' long exact sequence [35, Proposition 9.6.11] and the fact that Hochschild and cyclic (co)homology always agree in degree \(0\), we easily obtain that \(HC^{n}(C^{\bullet})\cong\ker(\beta_{1})\) for even \(n\) and \(HC^{n}(C^{\bullet})\cong 0\) for odd \(n\). A similar statement can be derived for the Hochschild and cyclic homologies of \(C_{\bullet}\). This calculation shows that, in a sense, \(C^{\bullet}\) and \(C_{\bullet}\) are not interesting from the homological point of view. Therefore, we focus on their cyclic duals in the sections that follow.
**6E**. **Internal characters.** Let \(\mathcal{B}\) be a ribbon \(\Bbbk\)-category with a coend \((\mathbb{F},i)\). For any object \(X\in\operatorname{Ob}(\mathcal{B})\), the morphism \(\chi_{X}=i_{X}\widetilde{\operatorname{coev}}_{X}\colon\mathbb{1}\to\mathbb{F}\), also known as internal character [14, 29], enjoys the following trace-like property:
(18)
Here \((a)\) follows from the definition of the comultiplication of \(\mathbb{F}\), the naturality and definitions of the twists and braidings, and the isotopy invariance of graphical calculus. Note that a pictorial proof of this fact is given in [6, page 22]. The equality \((b)\) is obtained by composing both sides of \((a)\) with \(\tau_{\mathbb{F},\mathbb{F}}^{-1}(\operatorname{id}_{\mathbb{F}}\otimes\theta_{\mathbb{F}}^{-1})\).
Next, the morphism \(\psi_{X}=\omega(\chi_{X}\otimes\operatorname{id}_{\mathbb{F}})\) satisfies
(19)
Indeed, this is verified by a direct graphical computation; the displayed steps \((a)\)-\((d)\) are omitted.

**6F**. **Explicit formulas for representation categories.** Let \(H\) be a finite-dimensional ribbon Hopf algebra over a field \(\Bbbk\), with \(R\)-matrix \(R=\sum_{i}a_{i}\otimes b_{i}\), its inverse \(R^{-1}=\sum_{i}\alpha_{i}\otimes\beta_{i}\), and ribbon element \(\theta\).
The category \(\operatorname{rep}_{H}\) is a ribbon category. Let \(\mathbb{F}\) be the coend of the category \(\operatorname{rep}_{H}\). As a vector space, it is equal to \(H^{*}\) and as a left \(H\)-module, it is given by the coadjoint action, that is, for all \(h\in H\) and \(f\in H^{*}\),
\[(h\triangleright f)(x)=f\bigl(S(h_{(1)})\,x\,h_{(2)}\bigr)\qquad\text{for all }x\in H.\]
For \(n\in\mathbb{N}^{*}\), consider the evaluation \(\operatorname{ev}\colon H^{\otimes n}\to\operatorname{Hom}_{\Bbbk}(H^{* \otimes n},\Bbbk)\) defined by setting for all \(X\in H^{\otimes n}\) and \(f\in H^{*\otimes n}\),
\[\operatorname{ev}(X)(f)=\langle f,X\rangle.\]
Following [34, Lemma 4.5(d)], this evaluation induces an isomorphism between \(\Bbbk\)-vector spaces \(\operatorname{Hom}_{\operatorname{rep}_{H}}(\mathbb{F}^{\otimes n},\Bbbk)\) and
\[V_{n}(H)=\{X\in H^{\otimes n}\ |\ X\triangleleft h=\varepsilon(h)X\text{ for any }h\in H\}.\]
Here the right \(H\)-action \(\triangleleft\) on \(H^{\otimes n}\) is defined by setting for any \(h\in H\) and any elementary tensor \(X=x_{1}\otimes\cdots\otimes x_{n}\in H^{\otimes n}\),
\[X\triangleleft h=S(h_{(1)})x_{1}h_{(2)}\otimes S(h_{(3)})x_{2}h_{(4)}\otimes \cdots\otimes S(h_{(2n-1)})x_{n}h_{(2n)}.\]
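For a concrete illustration — the classical group-algebra case, not taken from the source — let \(H=\Bbbk[G]\) for a finite group \(G\), so that \(\Delta(g)=g\otimes g\) and \(S(g)=g^{-1}\). Then

\[(x_{1}\otimes\cdots\otimes x_{n})\triangleleft g=g^{-1}x_{1}g\otimes\cdots\otimes g^{-1}x_{n}g,\]

so \(V_{n}(\Bbbk[G])\) consists of the tensors invariant under simultaneous conjugation; for \(n=1\), this is the center of \(\Bbbk[G]\), spanned by the conjugacy class sums.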
Remark that the vector space \(V_{n}(H)\) is equal to the \(0\)-th classical Hochschild homology \(HH_{0}(H,H^{\otimes n})\) of \(H\) with coefficients in \(H^{\otimes n}\), where \(H^{\otimes n}\) is regarded as a bimodule over \(H\) (the left action being the trivial one, via the counit).
Under the above isomorphism between \(\operatorname{Hom}_{\operatorname{rep}_{H}}(\mathbb{F}^{\otimes n},\Bbbk)\) and \(V_{n}(H)\), the cyclic \(\Bbbk\)-vector space \(\mathbb{F}^{\bullet}\circ L\) is identified with the cyclic \(\Bbbk\)-vector space \(\mathbf{W}_{\bullet}\) which is defined as follows. For any \(n\in\mathbb{N}\), set \(\mathbf{W}_{n}=V_{n+1}(H)\). The faces \(\{d_{i}^{n}\colon V_{n+1}(H)\to V_{n}(H)\}_{n\in\mathbb{N}^{*},0\leq i\leq n}\) are given by setting for any elementary tensor \(h_{1}\otimes\cdots\otimes h_{n+1}\in V_{n+1}(H)\),
\[d_{i}^{n}(h_{1}\otimes\cdots\otimes h_{n+1}) =h_{1}\otimes h_{2}\otimes\cdots\otimes h_{i+1}h_{i+2}\otimes \cdots\otimes h_{n+1}\quad\text{for }0\leq i\leq n-1\text{, and}\] \[d_{n}^{n}(h_{1}\otimes\cdots\otimes h_{n+1}) =\sum_{i}(h_{n+1}\triangleleft a_{i}\theta)(h_{1}\triangleleft(b_ {i})_{(1)})\otimes h_{2}\triangleleft(b_{i})_{(2)}\otimes\cdots\otimes h_{n} \triangleleft(b_{i})_{(n)}.\]
The degeneracies \(\{s_{j}^{n}\colon V_{n+1}(H)\to V_{n+2}(H)\}_{n\in\mathbb{N},0\leq j\leq n}\) are given by setting for any elementary tensor \(h_{1}\otimes\cdots\otimes h_{n+1}\in V_{n+1}(H)\),
\[s_{j}(h_{1}\otimes\cdots\otimes h_{n+1})=h_{1}\otimes\cdots\otimes h_{j+1} \otimes 1_{H}\otimes h_{j+2}\otimes\cdots\otimes h_{n+1}.\]
The cyclic operators \(\{t_{n}\colon V_{n+1}(H)\to V_{n+1}(H)\}_{n\in\mathbb{N}}\) are given by setting for any elementary tensor \(h_{1}\otimes\cdots\otimes h_{n+1}\in V_{n+1}(H)\),
\[t_{n}(h_{1}\otimes\cdots\otimes h_{n+1})=\sum_{i}h_{n+1}\triangleleft a_{i} \theta\otimes h_{1}\triangleleft(b_{i})_{(1)}\otimes\cdots\otimes h_{n} \triangleleft(b_{i})_{(n)}.\]
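For example, if \(H=\Bbbk[G]\) carries the trivial ribbon structure \(R=1\otimes 1\), \(\theta=1\) (an illustrative special case), then the last face and the cyclic operator simplify to

\[d_{n}^{n}(h_{1}\otimes\cdots\otimes h_{n+1})=h_{n+1}h_{1}\otimes h_{2}\otimes\cdots\otimes h_{n},\qquad t_{n}(h_{1}\otimes\cdots\otimes h_{n+1})=h_{n+1}\otimes h_{1}\otimes\cdots\otimes h_{n},\]

that is, one recovers the structure maps of Connes' cyclic module of the algebra \(\Bbbk[G]\), restricted to the subspaces \(V_{n+1}(\Bbbk[G])\).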
Similarly, the cocyclic \(\Bbbk\)-vector space \(\mathbb{F}_{\bullet}\circ L^{\operatorname{op}}\) is identified with the cocyclic \(\Bbbk\)-vector space \(\mathbf{W}^{\bullet}\) which is defined as follows. For any \(n\in\mathbb{N}\), set \(\mathbf{W}^{n}=V_{n+1}(H)\). The cofaces \(\{\delta_{i}^{n}\colon V_{n}(H)\to V_{n+1}(H)\}_{n\in\mathbb{N}^{*},0\leq i\leq n}\) are given by setting for any elementary tensor \(h_{1}\otimes\cdots\otimes h_{n}\in V_{n}(H)\),
\[\delta_{i}^{n}(h_{1}\otimes\cdots\otimes h_{n}) =h_{1}\otimes h_{2}\otimes\cdots\otimes\Delta^{\operatorname{Bd} }(h_{i+1})\otimes\cdots\otimes h_{n}\quad\text{for }0\leq i\leq n-1\text{, and}\] \[\delta_{n}^{n}(h_{1}\otimes\cdots\otimes h_{n}) =\sum_{i}(h_{1})_{(2)}^{\operatorname{Bd}}\triangleleft(\beta_{i} )_{(1)}\otimes h_{2}\triangleleft(\beta_{i})_{(2)}\otimes\cdots\otimes h_{n} \triangleleft(\beta_{i})_{(n)}\otimes(h_{1})_{(1)}^{\operatorname{Bd}} \triangleleft(\alpha_{i}\theta^{-1}),\]
where \(\Delta^{\operatorname{Bd}}\) is the comultiplication of the braided Hopf algebra \(H^{\operatorname{Bd}}\) associated to \(H\) (see [34, Lemma 4.4]). As algebras, \(H^{\operatorname{Bd}}=H\), but the comultiplication and the antipode in \(H^{\operatorname{Bd}}\) are different. Explicitly, for each \(h\in H\),
\[\Delta^{\operatorname{Bd}}(h)=\sum_{i}h_{(2)}a_{i}\otimes S((b_{i})_{(1)})\,h_{(1)}\,(b_{i})_{(2)}.\]
The codegeneracies \(\{\sigma_{j}^{n}\colon V_{n+2}(H)\to V_{n+1}(H)\}_{n\in\mathbb{N},0\leq j\leq n}\) are given by setting for any elementary tensor \(h_{1}\otimes\cdots\otimes h_{n+2}\in V_{n+2}(H)\),
\[\sigma_{j}(h_{1}\otimes\cdots\otimes h_{n+2})=\varepsilon(h_{j+2})\,h_{1}\otimes\cdots\otimes h_{j+1}\otimes h_{j+3}\otimes\cdots\otimes h_{n+2}.\]
The cocyclic operators \(\{\tau_{n}\colon V_{n+1}(H)\to V_{n+1}(H)\}_{n\in\mathbb{N}}\) are given by setting for any elementary tensor \(h_{1}\otimes\cdots\otimes h_{n+1}\in V_{n+1}(H)\),
\[\tau_{n}(h_{1}\otimes\cdots\otimes h_{n+1})=\sum_{i}h_{2}\triangleleft(\beta_{ i})_{(1)}\otimes\cdots\otimes h_{n+1}\triangleleft(\beta_{i})_{(n)}\otimes h _{1}\triangleleft(\alpha_{i}\theta^{-1}).\]
## 7. Cyclic modules from TQFTs
Theorem 4.1 says that there exist (co)cyclic objects in the category of \(3\)-cobordisms. By composition, any \(3\)-dimensional TQFT induces a (co)cyclic \(\Bbbk\)-module. In this section we compute it for the Reshetikhin-Turaev TQFT \(\mathrm{RT}_{\mathcal{B}}\colon\mathbf{Cob}_{3}\to\mathrm{Mod}_{\Bbbk}\) associated to an anomaly free modular category \(\mathcal{B}\). Note that the coend \(\mathbb{F}\) of \(\mathcal{B}\) exists and is a Hopf algebra in \(\mathcal{B}\). Recall the cocyclic \(\Bbbk\)-module \(\mathbb{F}^{\bullet}\) and the cyclic \(\Bbbk\)-module \(\mathbb{F}_{\bullet}\) (see Section 6) associated to \(\mathbb{F}\), as well as the reindexing involution \(\Phi\colon\Delta C\to\Delta C\) (see Section 2**E**). The second main result of this paper is the following:
**Theorem 7.1**.: _The cocyclic \(\Bbbk\)-modules \(\mathrm{RT}_{\mathcal{B}}\circ X^{\bullet}\) and \(\mathbb{F}^{\bullet}\circ\Phi\) are isomorphic. The cyclic \(\Bbbk\)-modules \(\mathrm{RT}_{\mathcal{B}}\circ X_{\bullet}\) and \(\mathbb{F}_{\bullet}\circ\Phi^{\mathrm{op}}\) are isomorphic._
We note that the \(n\)-th Hochschild cohomologies of \(\mathbb{F}^{\bullet}\circ\Phi\) and \(\mathbb{F}^{\bullet}\) are equal. Indeed, the Hochschild differentials of the associated cochain complexes coincide. The cohomological form of Connes' long exact sequence [35, Proposition 9.6.11] then implies that the same holds for their cyclic cohomologies. The proof of Theorem 7.1 is given in Sections 7**A**-7**E**.
The cyclic duality \(L\colon\Delta C^{\mathrm{op}}\to\Delta C\) and the reindexing involution \(\Phi\) transform the cocyclic object \(X^{\bullet}\) in \(\mathbf{Cob}_{3}\) into a cyclic object \(X^{\bullet}\circ\Phi\circ L\) in \(\mathbf{Cob}_{3}\). Similarly, the functors \(L^{\mathrm{op}}\colon\Delta C\to\Delta C^{\mathrm{op}}\) and \(\Phi^{\mathrm{op}}\) transform the cyclic object \(X_{\bullet}\) in \(\mathbf{Cob}_{3}\) into a cocyclic object \(X_{\bullet}\circ\Phi^{\mathrm{op}}\circ L^{\mathrm{op}}\) in \(\mathbf{Cob}_{3}\). By Theorem 7.1 and the fact that \(\Phi\) is involutive, we obtain the following:
**Corollary 7.2**.: _The cyclic \(\Bbbk\)-modules \(\mathrm{RT}_{\mathcal{B}}\circ X^{\bullet}\circ\Phi\circ L\) and \(\mathbb{F}^{\bullet}\circ L\) are isomorphic. The cocyclic \(\Bbbk\)-modules \(\mathrm{RT}_{\mathcal{B}}\circ X_{\bullet}\circ\Phi^{\mathrm{op}}\circ L^{ \mathrm{op}}\) and \(\mathbb{F}_{\bullet}\circ L^{\mathrm{op}}\) are isomorphic._
Recall that the cyclic \(\Bbbk\)-module \(\mathbb{F}^{\bullet}\circ L\) and the cocyclic \(\Bbbk\)-module \(\mathbb{F}_{\bullet}\circ L^{\mathrm{op}}\) (see Section 6**C**) are the cyclic duals of \(\mathbb{F}^{\bullet}\) and \(\mathbb{F}_{\bullet}\), respectively.
Another fundamental construction of a \(3\)-dimensional TQFT is the Turaev-Viro TQFT \(\mathrm{TV}_{\mathcal{C}}\colon\mathbf{Cob}_{3}\to\mathrm{Mod}_{\Bbbk}\) associated to a spherical fusion \(\Bbbk\)-category \(\mathcal{C}\) with invertible dimension (for details, see [32]). Moreover, in the case when \(\mathcal{C}\) is additive and \(\Bbbk\) is an algebraically closed field, the center \(\mathcal{Z}(\mathcal{C})\) of \(\mathcal{C}\) is an anomaly free modular category (see [32, Theorem 5.3., Theorem 5.4.]). In this case, according to [32, Theorem 17.1.], the TQFTs \(\mathrm{RT}_{\mathcal{Z}(\mathcal{C})}\) and \(\mathrm{TV}_{\mathcal{C}}\) are isomorphic. Denote by \(\mathbb{G}\) the coend of \(\mathcal{Z}(\mathcal{C})\). These results and Theorem 7.1 imply the following corollary:
**Corollary 7.3**.: _The cocyclic \(\Bbbk\)-modules \(\mathrm{TV}_{\mathcal{C}}\circ X^{\bullet}\) and \(\mathbb{G}^{\bullet}\circ\Phi\) are isomorphic. The cyclic \(\Bbbk\)-modules \(\mathrm{TV}_{\mathcal{C}}\circ X_{\bullet}\) and \(\mathbb{G}_{\bullet}\circ\Phi^{\mathrm{op}}\) are isomorphic._
By using cyclic duality \(L\) and reindexing involution \(\Phi\), we obtain:
**Corollary 7.4**.: _The cyclic \(\Bbbk\)-modules \(\mathrm{TV}_{\mathcal{C}}\circ X^{\bullet}\circ\Phi\circ L\) and \(\mathbb{G}^{\bullet}\circ L\) are isomorphic. The cocyclic \(\Bbbk\)-modules \(\mathrm{TV}_{\mathcal{C}}\circ X_{\bullet}\circ\Phi^{\mathrm{op}}\circ L^{ \mathrm{op}}\) and \(\mathbb{G}_{\bullet}\circ L^{\mathrm{op}}\) are isomorphic._
By Section 4**C**, in order to prove Theorem 7.1 it suffices to compute \(\operatorname{RT}_{\mathcal{B}}\circ Y^{\bullet}\) and \(\operatorname{RT}_{\mathcal{B}}\circ Y_{\bullet}\). In Section 7**A**, we give some algebraic preliminaries on the coend of a modular category. Next, in Section 7**B**, we describe the Reshetikhin-Turaev TQFT \(\operatorname{RT}_{\mathcal{B}}\) via the coend \(\mathbb{F}\) of \(\mathcal{B}\). Then, in Section 7**C**, we compute the cocyclic \(\Bbbk\)-module \(\operatorname{RT}_{\mathcal{B}}\circ Y^{\bullet}\). In Section 7**D** we prove that the latter is isomorphic to \(\mathbb{F}^{\bullet}\circ\Phi\), as stated in Theorem 7.1. Finally, in Section 7**E**, we sketch the computation of \(\operatorname{RT}_{\mathcal{B}}\circ Y_{\bullet}\) and the proof of the fact that it is isomorphic to the cyclic \(\Bbbk\)-module \(\mathbb{F}_{\bullet}\circ\Phi^{\operatorname{op}}\).
Recall that \(\mathbb{F}\) is a Hopf algebra in \(\mathcal{B}\) endowed with a Hopf pairing \(\omega\colon\mathbb{F}\otimes\mathbb{F}\to\mathbb{1}\). Here, we denote by \(m\), \(u\), \(\Delta\), \(\varepsilon\), and \(S\) the multiplication, unit, comultiplication, counit, and antipode of \(\mathbb{F}\), respectively. When using graphical calculus, we often drop \(\mathbb{F}\) from the notation.
**7A**. **Modularity and pairing of a coend.** In this section we provide some algebraic preliminaries needed for the computation of the Reshetikhin-Turaev TQFT \(\operatorname{RT}_{\mathcal{B}}\) via the coend of an anomaly free modular category. In the following lemma, we compute the inverse, under some conditions, of the pairing of the coend. Note that the statement and the proof of Lemma 7.5\((b)\) are similar to [32, Lemma 6.2.].
**Lemma 7.5**.: _Let \(\mathcal{B}\) be a ribbon \(\Bbbk\)-category with a coend \(\mathbb{F}\) and suppose that the canonical pairing \(\omega\colon\mathbb{F}\otimes\mathbb{F}\to\mathbb{1}\) associated to the coend is non-degenerate._
1. _If_ \(\Lambda\colon\mathbb{1}\to\mathbb{F}\) _is a right integral of a coend, then_ \(\omega(\Lambda\otimes\operatorname{id}_{\mathbb{F}})\colon\mathbb{F}\to \mathbb{1}\) _is a left cointegral of_ \(\mathbb{F}\)_._
2. _Let_ \(\Lambda\) _be a right integral of the coend_ \(\mathbb{F}\) _of_ \(\mathcal{B}\)_. Suppose that the element_ \(\omega(\Lambda\otimes\Lambda)\) _is invertible. Then the inverse of the pairing_ \(\omega\) _is given by the morphism_ \(\Omega\colon\mathbb{1}\to\mathbb{F}\otimes\mathbb{F}\)_, which is computed by an explicit string diagram (omitted)._
**Corollary 7.6**.: _Let \(\mathcal{B}\) be a ribbon \(\Bbbk\)-category with a coend \(\mathbb{F}\) and \(\Lambda\colon\mathbb{1}\to\mathbb{F}\) a right integral for the coend \(\mathbb{F}\). If the pairing \(\omega\colon\mathbb{F}\otimes\mathbb{F}\to\mathbb{1}\) is non-degenerate, then_
(20)
Proof.: By Lemma 7.5\((b)\) and invertibility of the antipode, we have
Here \((i)\) and \((iii)\) follow from invertibility of the antipode, \((ii)\) follows by Lemma 7.5\((b)\).
If \(\mathcal{B}\) is an additive ribbon fusion \(\Bbbk\)-category with a coend \(\mathbb{F}\), then, according to [32, Theorem 6.6.], the category \(\mathcal{B}\) is modular (in the sense of Section **5L**) if and only if the canonical pairing \(\omega\colon\mathbb{F}\otimes\mathbb{F}\to\mathbb{1}\) associated to \(\mathbb{F}\) is non-degenerate. Moreover, the coend \(\mathbb{F}\) is given by \(\mathbb{F}=\bigoplus_{i\in I}i^{*}\otimes i\), where \(I\) is a representative set of simple objects of \(\mathcal{B}\). For \(i\in I\), we denote the projection associated with the direct sum decomposition by \(p_{i}\colon\mathbb{F}\to i^{*}\otimes i\); however, we drop the inclusions \(i^{*}\otimes i\to\mathbb{F}\) from our notation. By [32, Theorem 6.4.], any integral of \(\mathbb{F}\) is a scalar multiple of the universal integral
\[\Lambda=\sum_{i\in I}\dim(i)\widetilde{\operatorname{coev}}_{i}\colon \mathbb{1}\to\mathbb{F}. \tag{21}\]
Similarly, one can show that any cointegral of \(\mathbb{F}\) is a scalar multiple of the universal cointegral
\[\lambda=\operatorname{ev}_{\mathbb{1}}p_{\mathbb{1}}\colon\mathbb{F}\to \mathbb{1}. \tag{22}\]
The universal (co)integrals \(\lambda\) and \(\Lambda\) satisfy \(\lambda\Lambda=\operatorname{id}_{\mathbb{1}}\).
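Indeed, \(p_{\mathbb{1}}\) annihilates every summand \(i^{*}\otimes i\) with \(i\neq\mathbb{1}\), so

\[\lambda\Lambda=\operatorname{ev}_{\mathbb{1}}p_{\mathbb{1}}\Bigl(\sum_{i\in I}\dim(i)\,\widetilde{\operatorname{coev}}_{i}\Bigr)=\dim(\mathbb{1})\,\operatorname{id}_{\mathbb{1}}=\operatorname{id}_{\mathbb{1}}.\]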
**Remark 7.7**.: Let \(\mathcal{B}\) be an additive ribbon fusion \(\Bbbk\)-category such that the canonical pairing \(\omega\) associated to the coend \(\mathbb{F}\) is non-degenerate. Recall the universal integral \(\Lambda\) and the universal cointegral \(\lambda\) of the coend \(\mathbb{F}\), defined in equations (21) and (22), respectively. Let us calculate \(\omega(\Lambda\otimes\Lambda)\). By Lemma 7.5\((a)\), \(\omega(\Lambda\otimes\mathrm{id}_{\mathbb{F}})\) is a left cointegral of \(\mathbb{F}\). By universality of \(\lambda\), \(\omega(\Lambda\otimes\mathrm{id}_{\mathbb{F}})=k\lambda\), for some \(k\in\Bbbk\cong\mathrm{End}(\mathbb{1}).\) This further implies that
\[\omega(\Lambda\otimes\Lambda)=k\lambda\Lambda=k\mathrm{id}_{\mathbb{1}}.\]
Now, remark also that
\[\omega(\Lambda\otimes u)=\varepsilon\Lambda=\dim(\mathcal{B}).\]
These two properties, together with the fact that \(p_{\mathbb{1}}i_{\mathbb{1}}=\mathrm{id}_{\mathbb{1}^{*}\otimes\mathbb{1}}\), the definition of the unit \(u\) of the coend \(\mathbb{F}\), and the definition of \(\lambda\), give that

\[\omega(\Lambda\otimes\Lambda)=k\mathrm{id}_{\mathbb{1}}=k(\mathrm{id}_{\mathbb{1}}\otimes\mathrm{ev}_{\mathbb{1}})(\mathrm{coev}_{\mathbb{1}}\otimes\mathrm{id}_{\mathbb{1}})=k(\mathrm{id}_{\mathbb{1}}\otimes\mathrm{ev}_{\mathbb{1}}p_{\mathbb{1}})(\mathrm{id}_{\mathbb{1}}\otimes i_{\mathbb{1}})(\mathrm{coev}_{\mathbb{1}}\otimes\mathrm{id}_{\mathbb{1}})=k(\mathrm{id}_{\mathbb{1}}\otimes\lambda)u=k\lambda u=\omega(\Lambda\otimes u)=\dim(\mathcal{B}).\]
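In particular, in the anomaly free modular setting of Section 7 one has

\[\omega(\Lambda\otimes\Lambda)=\dim(\mathcal{B})=\Delta^{2},\]

which is invertible (see Section 5**L**), so the hypothesis of Lemma 7.5\((b)\) is automatically satisfied there.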
### 7B. The Reshetikhin-Turaev TQFT via coends
In [31], Turaev associates to any modular category \(\mathcal{B}\) a \(3\)-dimensional TQFT \(\mathrm{RT}_{\mathcal{B}}\). There, a precise definition of a \(3\)-dimensional TQFT involves Lagrangian spaces in homology of surfaces and \(p_{1}\)-structures in cobordisms. However, if the modular category \(\mathcal{B}\) is anomaly free (see Section 5**L**), then the TQFT \(\mathrm{RT}_{\mathcal{B}}\) does not depend on this additional data and is a genuine symmetric monoidal functor \(\mathrm{RT}_{\mathcal{B}}\colon\mathbf{Cob}_{3}\to\mathrm{Mod}_{\Bbbk}\).
Let \(\mathcal{B}\) be an anomaly free modular \(\Bbbk\)-category. Recall from Section 5**L** that the scalar \(\Delta=\Delta_{+}=\Delta_{-}\) is invertible and satisfies \(\Delta^{2}=\dim(\mathcal{B})\). By Section 3**F**, any special ribbon \((g,h)\)-graph \(\Gamma\) represents a \(3\)-cobordism \(M_{\Gamma}\colon S_{g}\to S_{h}\). Our goal is to compute the \(\Bbbk\)-linear homomorphism
\[\mathrm{RT}_{\mathcal{B}}(M_{\Gamma})\colon\mathrm{RT}_{\mathcal{B}}(S_{g}) \to\mathrm{RT}_{\mathcal{B}}(S_{h})\]
in terms of the coend \(\mathbb{F}\) of \(\mathcal{B}\) (which always exists, see Section 7**A**). First, it follows from the definition of \(\mathrm{RT}_{\mathcal{B}}\) and the computation of the coend \(\mathbb{F}\) in terms of a representative set \(I\) of simple objects of \(\mathcal{B}\) that
\[\mathrm{RT}_{\mathcal{B}}(S_{g})=\mathrm{Hom}_{\mathcal{B}}(\mathbb{1}, \mathbb{F}^{\otimes g})\quad\text{and}\quad\mathrm{RT}_{\mathcal{B}}(S_{h})= \mathrm{Hom}_{\mathcal{B}}(\mathbb{1},\mathbb{F}^{\otimes h}).\]
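As a sanity check — a standard fact recorded here for orientation — for the torus \((g=1)\) this gives

\[\mathrm{RT}_{\mathcal{B}}(S_{1})=\mathrm{Hom}_{\mathcal{B}}(\mathbb{1},\mathbb{F})=\bigoplus_{i\in I}\mathrm{Hom}_{\mathcal{B}}(\mathbb{1},i^{*}\otimes i)\cong\Bbbk^{I},\]

a vector space whose dimension is the number of isomorphism classes of simple objects of \(\mathcal{B}\).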
Next, the formula (2.3)\((a)\) from [31, Section IV.2.3], which computes \(\mathrm{RT}_{\mathcal{B}}(M_{\Gamma})\), rewrites in our setting as
\[\mathrm{RT}_{\mathcal{B}}(M_{\Gamma})=\Delta^{-n-h}\,\mathrm{Hom}_{\mathcal{B }}(\mathbb{1},|\Gamma|), \tag{23}\]
where \(|\Gamma|\colon\mathbb{F}^{\otimes g}\to\mathbb{F}^{\otimes h}\) is a morphism in \(\mathcal{B}\) defined as follows. By pulling down some part of each circle component of \(\Gamma\) and of each arc connecting the outputs of \(\Gamma\), we obtain that the ribbon graph \(\Gamma\) is isotopic to
\[\Gamma\cong\text{(picture of the closure of $\widetilde{\Gamma}$ omitted)},\]
where the cups \(L_{1},\ldots,L_{n}\) correspond to the circle components of \(\Gamma\) and the cups \(C_{1},\ldots,C_{h}\) correspond to the upper arcs of \(\Gamma\). Here, \(\widetilde{\Gamma}\) is a ribbon graph with \((g+n+2h)\) arcs, \((2g+2n+2h)\) inputs, \(2h\) outputs, no coupons, no circle components, and such that:
* for all \(1\leq i\leq g+n\), an arc \(a_{i}\) connects the \((2i-1)\)-th input to the \((2i)\)-th input of \(\widetilde{\Gamma}\),
* for all \(1\leq j\leq h\), an arc \(u_{j}\) connects the \((2g+2n+2j-1)\)-th input to the \((2j-1)\)-th output of \(\widetilde{\Gamma}\), and an arc \(v_{j}\) connects the \((2j)\)-th output to the \((2g+2n+2j)\)-th input of \(\widetilde{\Gamma}\).
The ribbon graph \(\Gamma\) is called the _closure_ of the ribbon graph \(\widetilde{\Gamma}\). Coloring the arc \(a_{i}\) by an object \(X_{i}\) of \(\mathcal{B}\) and coloring both the arcs \(u_{j},v_{j}\) by an object \(Y_{j}\) of \(\mathcal{B}\), we obtain a \(\mathcal{B}\)-colored ribbon graph representing a morphism \(\phi_{X_{1},\ldots,X_{g+n},Y_{1},\ldots,Y_{h}}\). Let \(i=\{i_{X}\colon X^{*}\otimes X\to\mathbb{F}\}_{X\in\operatorname{Ob}(\mathcal{ B})}\) be the universal dinatural transformation associated to the coend \(\mathbb{F}\). Then the family of morphisms
\[(i_{Y_{1}}\otimes\cdots\otimes i_{Y_{h}})\circ\phi_{X_{1},\ldots,X_{g+n},Y_{1},\ldots Y_{h}}\]
from \(X_{1}^{*}\otimes X_{1}\otimes\cdots\otimes X_{g+n}^{*}\otimes X_{g+n}\otimes Y _{1}^{*}\otimes Y_{1}\otimes\cdots\otimes Y_{h}^{*}\otimes Y_{h}\) to \(\mathbb{F}^{\otimes h}\) is dinatural in each variable and so, by Lemma 5.1, it factorizes as
\[\phi_{\Gamma}\circ(i_{X_{1}}\otimes\cdots\otimes i_{X_{g+n}}\otimes i_{Y_{1}} \otimes\cdots\otimes i_{Y_{h}})\]
for a unique morphism \(\phi_{\Gamma}\colon\mathbb{F}^{\otimes g+n+h}\to\mathbb{F}^{\otimes h}\). Then
\[|\Gamma|=\phi_{\Gamma}\circ(\operatorname{id}_{\mathbb{F}^{\otimes g}}\otimes \Lambda^{\otimes(n+h)})\colon\mathbb{F}^{\otimes g}\to\mathbb{F}^{\otimes h},\]
where \(\Lambda\) is the universal integral defined in the equation (21). It follows from the fact that \(\Lambda\) is a right integral for \(\mathbb{F}\) (see Section 7**A**) that the morphism \(|\Gamma|\) is an isotopy invariant of \(\Gamma\). This invariant is multiplicative:
\[|\Gamma\sqcup\Gamma^{\prime}|=|\Gamma|\otimes|\Gamma^{\prime}|, \tag{24}\]
for all special ribbon graphs, where \(\Gamma\sqcup\Gamma^{\prime}\) is obtained by concatenating \(\Gamma^{\prime}\) to the right of \(\Gamma\).
### 7C. Computation of \(\mathbf{RT}_{\mathcal{B}}\circ Y^{\bullet}\)

Recall that \(\mathcal{B}\) denotes an anomaly free modular category. In the following lemma, we calculate the isotopy invariant \(|\cdot|\) (see Section 7**B**) of particular special graphs:
**Lemma 7.8**.: _If \(T_{i}\) for \(i=1,2,3\) are the following special ribbon graphs_
_then_
* \(|T_{1}|=\dim(\mathcal{B})\mathrm{id}_{\mathbb{F}}\)_,_
* \(|T_{2}|=\dim(\mathcal{B})u\)_,_
* \(|T_{3}|=\dim(\mathcal{B})m\)_._
Proof.:
* A ribbon graph whose closure is isotopic to \(T_{1}\) is \(\widetilde{T_{1}}\) (diagram omitted).
For all objects \(X,Y\) in \(\mathcal{B}\), we have:
Hence, by definition of \(|\cdot|\) given in Section **7B**, by Lemma 7.5\((a)\), and Remark 7.7, we have:
\[|T_{2}|=\cdots=\dim(\mathcal{B})\,u,\]
where the omitted steps are string diagram computations using Lemma 7.5\((a)\) and Remark 7.7.
Hence, by definition of \(|\cdot|\) given in Section **7B**, axioms of a Hopf pairing, the equation (17), Corollary 7.6, and Remark 7.7, we have
\[|T_{3}|=\raisebox{-19.916929pt}{\includegraphics[]{figures/7.eps}}=\raisebox{-19.916929 pt}{\includegraphics[]{figures/7.eps}}=\raisebox{-19.916929pt}{\includegraphics[]{figures/7.eps}}= \omega(\Lambda\otimes\Lambda)m=\dim(\mathcal{B})m.\]
Lemma 7.8 implies the following result:
**Lemma 7.9**.: _We have:_
1. \(|G_{Y^{\bullet}(\delta_{i}^{n})}|=\dim(\mathcal{B})^{n+1}\times(\text{a string diagram depending on the value of }i\text{; diagrams omitted}),\)
2. Recall the special ribbon graphs \(T_{1}\) and \(T_{3}\) from Lemma 7.8. Let \(n\in\mathbb{N}\) and \(0\leq j\leq n\). By Lemma 7.8, \[|G_{Y^{\bullet}(\sigma_{j}^{n})}|=\left(\dim(\mathcal{B})\mathrm{id}_{\mathbb{F}}\right)^{\otimes j}\otimes\dim(\mathcal{B})m\otimes\left(\dim(\mathcal{B})\mathrm{id}_{\mathbb{F}}\right)^{\otimes n-j}=\dim(\mathcal{B})^{n+1}\bigl(\mathrm{id}_{\mathbb{F}}^{\otimes j}\otimes m\otimes\mathrm{id}_{\mathbb{F}}^{\otimes n-j}\bigr).\]
Hence, by definition of \(|\cdot|\) given in Section **7B**, the equations (17) and (16), Corollary 7.6 and Remark 7.7, we have:
\[|G_{Y^{\bullet}(\tau_{1})}|=\cdots=\left(\omega(\Lambda\otimes\Lambda)\right)^{2}\times(\text{string diagram omitted}).\]
The computation of \(\mathrm{RT}_{\mathcal{B}}\circ Y^{\bullet}\) follows from Lemma 7.9:
**Lemma 7.10**.: _The cocyclic \(\Bbbk\)-module \(\mathrm{RT}_{\mathcal{B}}\circ Y^{\bullet}\) is equal to the cocyclic \(\Bbbk\)-module given by the family \(\{\mathrm{Hom}_{\mathcal{B}}(\mathbb{1},\mathbb{F}^{\otimes n+1})\}_{n\in\mathbb{N}}\), equipped with the cofaces \(\{\mathrm{RT}_{\mathcal{B}}(Y^{\bullet}(\delta_{i}^{n}))\}_{n\in\mathbb{N}^{ \ast},0\leq i\leq n}\), the codegeneracies \(\{\mathrm{RT}_{\mathcal{B}}(Y^{\bullet}(\sigma_{j}^{n}))\}_{n\in\mathbb{N},0 \leq j\leq n}\), and the cocyclic operators \(\{\mathrm{RT}_{\mathcal{B}}(Y^{\bullet}(\tau_{n}))\}_{n\in\mathbb{N}}\) given by the formulas_
\[\delta_{0}^{n}(f)=\text{(string diagram omitted)}\]
Proof.: By using the formula (23) from Section **7B**, the construction of \(Y^{\bullet}\) from Section **4A**, and Lemma 7.9, we calculate \(\{\mathrm{RT}_{\mathcal{B}}(Y^{\bullet}(\delta_{i}^{n}))\}_{n\in\mathbb{N}^{ \ast},0\leq i\leq n}\), \(\{\mathrm{RT}_{\mathcal{B}}(Y^{\bullet}(\sigma_{j}^{n}))\}_{n\in\mathbb{N},0 \leq j\leq n}\), and
\(\{\mathrm{RT}_{\mathcal{B}}(Y^{\bullet}(\tau_{n}))\}_{n\in\mathbb{N}}\). Let \(n\in\mathbb{N}^{*}\). In the case when \(1\leq i\leq n-1\),
\[\mathrm{RT}_{\mathcal{B}}(Y^{\bullet}(\delta_{i}^{n}))=\Delta^{-(n+1)-(n+1)}\mathrm{Hom}_{\mathcal{B}}\left(\mathbb{1},|G_{Y^{\bullet}(\delta_{i}^{n})}|\right)=\dim(\mathcal{B})^{-(n+1)}\mathrm{Hom}_{\mathcal{B}}\left(\mathbb{1},\dim(\mathcal{B})^{n+1}(\cdots)\right),\]
where \((\cdots)\) stands for the corresponding string diagram from Lemma 7.9.
### 7D. The isomorphism \(\mathrm{RT}_{\mathcal{B}}\circ Y^{\bullet}\cong\mathbb{F}^{\bullet}\circ\Phi\)

Consider the family \(\omega^{\bullet}=\{\omega^{n}\colon\operatorname{Hom}_{\mathcal{B}}(\mathbb{1},\mathbb{F}^{\otimes n+1})\to\operatorname{Hom}_{\mathcal{B}}(\mathbb{F}^{\otimes n+1},\mathbb{1})\}_{n\in\mathbb{N}}\), which is defined by setting for any \(n\in\mathbb{N}\) and \(f\in\operatorname{Hom}_{\mathcal{B}}(\mathbb{1},\mathbb{F}^{\otimes n+1})\),
(25)
The inverse of \(\omega^{\bullet}\) is given by the family \(\Omega^{\bullet}=\{\Omega^{n}\colon\operatorname{Hom}_{\mathcal{B}}(\mathbb{ F}^{\otimes n+1},\mathbb{1})\to\operatorname{Hom}_{\mathcal{B}}(\mathbb{1}, \mathbb{F}^{\otimes n+1})\}_{n\in\mathbb{N}}\), which is defined by setting for any \(n\in\mathbb{N}\) and \(f\in\operatorname{Hom}_{\mathcal{B}}(\mathbb{F}^{\otimes n+1},\mathbb{1})\),
\[\Omega^{n}(f)=\begin{array}{c}\includegraphics[width=142.3678pt]{images/.eps}\\ \includegraphics[width=142.3678pt]{images/.eps}\end{array}.\]
It remains to check that \(\omega^{\bullet}\) is a natural transformation between cocyclic \(\Bbbk\)-modules \(\operatorname{RT}_{\mathcal{B}}\circ Y^{\bullet}\) and \(\mathbb{F}^{\bullet}\circ\Phi\). The equations (9) and (10) follow from axioms of a Hopf pairing \(\omega\) and isotopy invariance of graphical calculus. The equation (11) follows from equations (16) and (17), naturality of braiding, and isotopy invariance of graphical calculus.
### 7E. Sketch of computation of \(\operatorname{RT}_{\mathcal{B}}\circ Y_{\bullet}\)
Let us sketch the computation of \(\operatorname{RT}_{\mathcal{B}}\circ Y_{\bullet}\), which is similar to the computation of \(\operatorname{RT}_{\mathcal{B}}\circ Y^{\bullet}\) given in detail in Sections **7C** and **7D**. The following lemma is proved analogously to Lemma 7.8.
**Lemma 7.11**.: _If \(T_{4}\) and \(T_{5}\) are the following special ribbon graphs_
_then_
1. \(|T_{4}|=\varepsilon\)_,_
2. \(|T_{5}|=\dim(\mathcal{B})^{2}\Delta\)_._
Next, by using Lemma 7.11, one obtains an analogue of Lemma 7.9:
**Lemma 7.12**.: _We have:_
1. \(|G_{Y_{\bullet}(d_{i}^{n})}|=\dim(\mathcal{B})^{n}\left|\begin{array}{c} \includegraphics[width=142.3678pt]{images/.eps}\end{array}\right|\)_,_
2. \(|G_{Y_{\bullet}(s_{j}^{n})}|=\dim(\mathcal{B})^{n+2}\left|\begin{array}{c} \includegraphics[width=142.3678pt]{images/.eps}\end{array}\right|\)_,_
3. \(|G_{Y_{\bullet}(t_{n})}|=\begin{cases}\dim(\mathcal{B})\mathrm{id}_{\mathbb{F}},&\text{if }n=0,\\ \dim(\mathcal{B})^{n+1}\times(\text{string diagram omitted}),&\text{if }n\geq 1.\end{cases}\)
This construction is functorial, that is, any morphism between coalgebras in \(\mathcal{B}\) induces, in a functorial way, a morphism of the corresponding paracyclic objects in \(\mathcal{B}\).
Similarly, one can associate to \(H\) the paracocyclic object \(\mathbf{A}_{\bullet}(H)\colon\Delta C_{\infty}\to\mathcal{B}\), given by setting for any \(n\in\mathbb{N}\), \(\mathbf{A}_{n}(H)=H^{\otimes n+1}\) and
This construction is functorial, that is, any algebra morphism in \(\mathcal{B}\) induces, in a functorial way, the morphism of corresponding paracocyclic objects in \(\mathcal{B}\).
Notice that if \(\mathcal{B}\) is a ribbon category and if \(H\) is an involutive Hopf algebra in \(\mathcal{B}\) (that is, \(S^{2}=\theta_{H}\), where \(\theta\) is the canonical twist of \(\mathcal{B}\)), then the paracocyclic operators \(\{t_{n}\}_{n\in\mathbb{N}}\) of \(\mathbf{C}_{\bullet}(H)\) satisfy the "twisted cyclicity condition", that is, for all \(n\in\mathbb{N}\),
\[t_{n}^{n+1}=(\theta_{H^{\otimes n+1}})^{-1}. \tag{27}\]
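In particular, if the twist of \(\mathcal{B}\) is trivial (for instance, if \(\mathcal{B}\) is symmetric with \(\theta=\operatorname{id}\)), condition (27) becomes

\[t_{n}^{n+1}=\operatorname{id}_{H^{\otimes n+1}},\]

so that \(\mathbf{C}_{\bullet}(H)\) is then an honest cyclic object.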
If \(\mathcal{B}\) is additionally \(\Bbbk\)-linear, then the cocyclic \(\Bbbk\)-module \(H^{\bullet}\) from Section 6**A** can be obtained by composing \(\mathbf{C}_{\bullet}(H)\) with the hom-functor \(\operatorname{Hom}_{\mathcal{B}}(-,\mathbb{1})\). Here, the cocyclicity condition (8) for \(H^{\bullet}\) follows by naturality of twists and the fact that \(\theta_{\mathbb{1}}=\operatorname{id}_{\mathbb{1}}\). In this vein, one derives from \(\mathbf{C}_{\bullet}(H)\) an \(r\)-cocyclic \(\Bbbk\)-module as follows. Namely, if \(i\) is a simple object of \(\mathcal{B}\), then the twist \(\theta_{i}\) is a scalar multiple of the identity morphism, still denoted by \(\theta_{i}\). If \(\theta_{i}\) is of finite order \(r\), then the composition of \(\mathbf{C}_{\bullet}(H)\) with the hom-functor \(\operatorname{Hom}_{\mathcal{B}}(-,i)\) induces an \(r\)-cocyclic (respectively, cyclic) \(\Bbbk\)-module. In a similar way, one obtains \(r\)-cyclic modules from \(\mathbf{A}_{\bullet}(H)\). For instance, according to [4], and built on theorems of Vafa [33] and Müger [27], all the twists on simple objects of a \(\mathbb{C}\)-linear ribbon fusion category are roots of unity.
Finally, we point out that the above construction of \(\mathbf{C}_{\bullet}(H)\) (respectively, \(\mathbf{A}_{\bullet}(H)\)) may be restated for a coalgebra \(H\) (respectively, an algebra) in a braided category with a twist \(\mathcal{B}\), just as in Section 6. Namely, in the above formulas, the square of the antipode should be replaced with the twists. As already remarked in Section 6**C**, by passing to \(\mathbf{C}_{\bullet}(H)\circ L\) (respectively, \(\mathbf{A}_{\bullet}(H)\circ L^{\mathrm{op}}\)), these constructions fit into the more general framework of Akrami and Majid [1], who considered ribbon algebras (or, dually, coribbon coalgebras) in a braided category.
### 8C. Paracyclic and \(r\)-cyclic objects from Crane-Yetter Hopf algebra
Another relevant category in quantum topology is the category of connected \(3\)-cobordisms, further denoted **3Cob**, which first appeared in [17] and has been studied over the last three decades, notably in [18, 16, 2, 8], and most recently in [7]. For each \(g\in\mathbb{N}\), fix a compact connected oriented genus \(g\) surface \(\Sigma_{g,1}\) with one boundary component. The objects of **3Cob** are the surfaces \(\Sigma_{g,1}\) for \(g\in\mathbb{N}\). A morphism \(\Sigma_{g,1}\to\Sigma_{h,1}\) is a connected cobordism from \(\Sigma_{g,1}\) to \(\Sigma_{h,1}\). In contrast to the category \(\mathbf{Cob}_{3}\), which is recalled in Section 3**B** and used throughout the paper, the category **3Cob** is a braided monoidal category: the monoidal product is given on objects by the connected sum of surfaces and on morphisms by the connected sum of cobordisms, and the unit object is \(\Sigma_{0,1}\).
It is a result of Crane and Yetter [11] that the one-holed torus \(\Sigma_{1,1}\) has a structure of a Hopf algebra in **3Cob**. By the general construction from Section 8**B**, one can organize the family of surfaces \(\{\Sigma_{1,1}^{\otimes g}\}_{g\in\mathbb{N}^{*}}=\{\Sigma_{g,1}\}_{g\in \mathbb{N}^{*}}\) into a paracyclic object \(\mathbf{C}_{\bullet}(\Sigma_{1,1})\) (respectively, a paracocyclic object \(\mathbf{A}_{\bullet}(\Sigma_{1,1})\)) in **3Cob**. The existence of \(\mathbf{C}_{\bullet}(\Sigma_{1,1})\) (respectively, \(\mathbf{A}_{\bullet}(\Sigma_{1,1})\)) implies that any braided monoidal functor from **3Cob** to a braided monoidal category \(\mathcal{B}\) induces a para(co)cyclic object in \(\mathcal{B}\). Let \(\mathcal{B}\) be a ribbon unimodular factorizable category
and \(\Bbbk\) an algebraically closed field. Here the unimodularity means that \(\mathcal{B}\) is a finite \(\Bbbk\)-category in the sense of [12] and that the projective cover of \(\mathbb{1}\) is self-dual. By [7, Theorem 1.2.], there is a braided monoidal functor \(J_{3}\colon\mathbf{3Cob}\to\mathcal{B}\) which, in particular, sends the one-holed torus to the end \(\mathbb{A}\) of the category \(\mathcal{B}\), that is, the end of the functor \((X,Y)\mapsto X\otimes Y^{*}\). Furthermore,
\[J_{3}\circ\mathbf{C}_{\bullet}(\Sigma_{1,1})=\mathbf{C}_{\bullet}(\mathbb{A}) \quad\text{and}\quad J_{3}\circ\mathbf{A}_{\bullet}(\Sigma_{1,1})=\mathbf{A}_{ \bullet}(\mathbb{A}).\]
Note that factorizability in the sense of [7] is equivalent to the non-degeneracy of the coend of \(\mathcal{B}\) (see Remark 9.1). In this setting, the end \(\mathbb{A}\) and the coend \(\mathbb{F}\) are isomorphic Hopf algebras (see Section **9B**). By functoriality, the paracyclic objects \(\mathbf{C}_{\bullet}(\mathbb{A})\) and \(\mathbf{C}_{\bullet}(\mathbb{F})\) (respectively, the paracocyclic objects \(\mathbf{A}_{\bullet}(\mathbb{A})\) and \(\mathbf{A}_{\bullet}(\mathbb{F})\)) in \(\mathcal{B}\) are isomorphic. Finally, since \(S^{2}_{\mathbb{A}}=\theta_{\mathbb{A}}\), the paracyclic operators of \(\mathbf{C}_{\bullet}(\mathbb{A})\) (respectively, the paracocyclic operators of \(\mathbf{A}_{\bullet}(\mathbb{A})\)) satisfy the twisted cyclicity (respectively, cocyclicity) condition (27). Hence, by composing \(J_{3}\circ\mathbf{C}_{\bullet}(\Sigma_{1,1})\) with appropriate hom-functors (see Section **8B**), one obtains \(r\)-(co)cyclic \(\Bbbk\)-modules.
**Remark 8.1**.: Cobordisms in **3Cob** admit a surgery presentation by certain tangles (for a review, see [7, Section 4]). On the one hand, there is a resemblance between the tangles which present structural morphisms of the Hopf algebra \(\Sigma_{1,1}\) and the special ribbon graphs which present generating morphisms of (co)cyclic objects in \(\mathbf{Cob}_{3}\) (see Theorem 4.1). Note also that in the construction of the latter objects, we do not use the monoidal structure of \(\mathbf{Cob}_{3}\). On the other hand, in contrast with the Kirby calculus on special ribbon graphs from Section **3G**, the moves **COUPON** and **TWIST** do not appear as moves between tangles presenting morphisms in **3Cob**. It would be interesting to explore relationships between the (co)cyclic objects in \(\mathbf{Cob}_{3}\) constructed in Section 4 and the para(co)cyclic objects in **3Cob** from Section **8C**.
## 9. Appendix
In this section we discuss the (co)end of a braided pivotal category \(\mathcal{B}\) via universal (co)actions, the Drinfeld map as in [14], and factorizability.
**9A**. **(Co)ends via universal (co)actions.** The coend \((\mathbb{F},i)\) of \(\mathcal{B}\) can be thought of as a universal right comodule. Indeed, \(\mathbb{F}\) coacts on each object \(X\) of \(\mathcal{B}\) via the coaction \(\phi_{X}\colon X\to X\otimes\mathbb{F}\), defined by setting
\[\phi_{X}=(\operatorname{id}_{X^{*}}\otimes i_{X})(\operatorname{coev}_{X} \otimes\operatorname{id}_{X}),\quad\text{depicted by}\quad\raisebox{-14.226378pt}{ \includegraphics[]{fig/X}}\.\]
This coaction is universal, that is, for any \(D\in\operatorname{Ob}(\mathcal{B})\) and any natural transformation \(\{\gamma_{X}\colon X\to X\otimes D\}_{X\in\operatorname{Ob}(\mathcal{B})}\), there exists a unique morphism \(f\colon\mathbb{F}\to D\) such that for any \(X\in\operatorname{Ob}(\mathcal{B})\),
\[\gamma_{X}=(\operatorname{id}_{X}\otimes f)\phi_{X}.\]
The structural morphisms of coend \(\mathbb{F}\) from Section **5S** are characterized in terms of the universal coaction as follows. For all \(X,Y\in\operatorname{Ob}(\mathcal{B})\),
\[\raisebox{-14.226378pt}{\includegraphics[]{fig/X}}\,\qquad u_{\mathbb{F}}=\phi_{ \mathbb{1}}=\raisebox{-14.226378pt}{\includegraphics[]{fig/X}}\,\]
Similarly, the end \((\mathbb{A},j)\) of \(\mathcal{B}\), where \(j=\{j_{X}\colon\mathbb{A}\to X\otimes X^{*}\}_{X\in\operatorname{Ob}(\mathcal{B})}\) is the universal dinatural transformation, can be thought of as a universal left module; that is, for each \(X\in\operatorname{Ob}(\mathcal{B})\) there is a left action \(\alpha_{X}\colon\mathbb{A}\otimes X\to X\), defined by setting
\[\alpha_{X}=(\operatorname{id}_{X}\otimes\operatorname{ev}_{X})(j_{X}\otimes \operatorname{id}_{X}),\quad\text{depicted by}\]
As above, this action is universal, that is, for any \(D\in\operatorname{Ob}(\mathcal{B})\) and any natural transformation \(\beta=\{\beta_{X}\colon D\otimes X\to X\}_{X\in\operatorname{Ob}(\mathcal{B})}\), there is a unique morphism \(f\colon D\to\mathbb{A}\) such that for any \(X\in\operatorname{Ob}(\mathcal{B})\),
\[\beta_{X}=\alpha_{X}(f\otimes\operatorname{id}_{X}).\]
The structural morphisms of end \(\mathbb{A}\) (as in [7, Section 7.1.]) are characterized in terms of the universal action as follows: for all \(X,Y\in\operatorname{Ob}(\mathcal{B})\),
\[\text{(string diagrams characterizing the structural morphisms of }\mathbb{A}\text{ omitted)}\]

**9B**. **The Drinfeld map.** Following [14], the Drinfeld map \(D_{\mathbb{F},\mathbb{A}}\colon\mathbb{F}\to\mathbb{A}\) is defined by an explicit string diagram (omitted).
The Drinfeld map is a morphism of Hopf algebras from \(\mathbb{F}\) to \(\mathbb{A}\). Indeed, for all \(X,Y,Z\in\operatorname{Ob}(\mathcal{B})\),
Here \((i)\) follows by definition of \(m_{\mathbb{F}}\) and \(D_{\mathbb{F},\mathbb{A}}\), \((ii)\) by isotopy, \((iii)\) by definition of \(D_{\mathbb{F},\mathbb{A}}\) and isotopy, and \((iv)\) follows by definition of \(m_{\mathbb{A}}\).
Next, for all \(X\in\operatorname{Ob}(\mathcal{B})\), one computes (string diagrams omitted):
Here, \((i)\) (respectively, \((iv)\)) follows by definition of \(u_{\mathbb{F}}\) (respectively, \(u_{\mathbb{A}}\)), \((ii)\) by definition of \(D_{\mathbb{F},\mathbb{A}}\), \((iii)\) by the fact that \(\tau_{\mathbb{1},X}=\operatorname{id}_{X}\) for any \(X\in\operatorname{Ob}(\mathcal{B})\). In conclusion, \(D_{\mathbb{F},\mathbb{A}}\) is an algebra morphism. One verifies similarly that it is also a coalgebra morphism.
**Remark 9.1**.: A unimodular ribbon category \(\mathcal{B}\) is factorizable (see [14]) if the morphism \(D_{\mathbb{F}}=(\omega\otimes\operatorname{id}_{\mathbb{F}^{*}})(\operatorname{id}_{\mathbb{F}}\otimes\operatorname{coev}_{\mathbb{F}})\colon\mathbb{F}\to\mathbb{F}^{*}\),
where \(\omega\) is the Hopf pairing on \(\mathbb{F}\) (see Section 5**S**), is an isomorphism. The latter is equivalent to the non-degeneracy of \(\omega\) in the sense of Section 5**R**. Indeed, if \(D_{\mathbb{F}}\) is an isomorphism, then \(\Omega=D_{\mathbb{F}}^{-1}\mathrm{coev}_{\mathbb{F}}\) is an inverse of \(\omega\). Conversely, if \(\omega\) is non-degenerate with inverse \(\Omega\), then \(D_{\mathbb{F}}\) is an isomorphism with inverse \(D_{\mathbb{F}}^{-1}=(\mathrm{ev}_{\mathbb{F}}\otimes\operatorname{id}_{ \mathbb{F}})(\operatorname{id}_{\mathbb{F}^{*}}\otimes\Omega)\).
According to [7, Appendix A.], the factorizability or modularity of \(\mathcal{B}\) in the sense of [7] is equivalent to the notion of factorizability in the above sense. The latter is also equivalent to the invertibility of the Drinfeld map (see [14, Proposition 4.11.]).
|
2306.08125 | Implicit Compressibility of Overparametrized Neural Networks Trained
with Heavy-Tailed SGD | Neural network compression has been an increasingly important subject, not
only due to its practical relevance, but also due to its theoretical
implications, as there is an explicit connection between compressibility and
generalization error. Recent studies have shown that the choice of the
hyperparameters of stochastic gradient descent (SGD) can have an effect on the
compressibility of the learned parameter vector. These results, however, rely
on unverifiable assumptions and the resulting theory does not provide a
practical guideline due to its implicitness. In this study, we propose a simple
modification for SGD, such that the outputs of the algorithm will be provably
compressible without making any nontrivial assumptions. We consider a
one-hidden-layer neural network trained with SGD, and show that if we inject
additive heavy-tailed noise to the iterates at each iteration, for any
compression rate, there exists a level of overparametrization such that the
output of the algorithm will be compressible with high probability. To achieve
this result, we make two main technical contributions: (i) we prove a
'propagation of chaos' result for a class of heavy-tailed stochastic
differential equations, and (ii) we derive error estimates for their Euler
discretization. Our experiments suggest that the proposed approach not only
achieves increased compressibility with various models and datasets, but also
leads to robust test performance under pruning, even in more realistic
architectures that lie beyond our theoretical setting. | Yijun Wan, Melih Barsbey, Abdellatif Zaidi, Umut Simsekli | 2023-06-13T20:37:02Z | http://arxiv.org/abs/2306.08125v2 | # Implicit Compressibility of Overparametrized Neural Networks Trained with Heavy-Tailed SGD
###### Abstract
Neural network compression has been an increasingly important subject, due to its practical implications in terms of reducing the computational requirements and its theoretical implications, as there is an explicit connection between compressibility and the generalization error. Recent studies have shown that the choice of the hyperparameters of stochastic gradient descent (SGD) can have an effect on the compressibility of the learned parameter vector. Even though these results have shed some light on the role of the training dynamics over compressibility, they relied on unverifiable assumptions and the resulting theory does not provide a practical guideline due to its implicitness. In this study, we propose a simple modification for SGD, such that the outputs of the algorithm will be provably compressible without making any nontrivial assumptions. We consider a one-hidden-layer neural network trained with SGD and we inject additive heavy-tailed noise to the iterates at each iteration. We then show that, for _any_ compression rate, there exists a level of overparametrization (i.e., the number of hidden units), such that the output of the algorithm will be compressible with high probability. To achieve this result, we make two main technical contributions: (i) we build on a recent study on stochastic analysis and prove a 'propagation of chaos' result with improved rates for a class of heavy-tailed stochastic differential equations, and (ii) we derive strong-error estimates for their Euler discretization. We finally illustrate our approach on experiments, where the results suggest that the proposed approach achieves compressibility with a slight compromise from the training and test error.
## 1 Introduction
Obtaining compressible neural networks has become an increasingly important task in the last decade, and it has essential implications from both practical and theoretical perspectives. From a practical point of view, as the modern network architectures might contain an excessive number of parameters, compression has a crucial role in terms of deployment of such networks in resource-limited environments O'Neill (2020); Blalock et al. (2020). On the other hand, from a theoretical perspective, several studies have shown that compressible neural networks should achieve a better generalization performance due to their lower-dimensional structure Arora et al. (2018); Suzuki et al. (2020, 2020); Hsu et al. (2021); Barsbey et al. (2021); Sefidgaran et al. (2022).
Despite their evident benefits, it is still not yet clear how to obtain compressible networks with provable guarantees. In an empirical study Frankle and Carbin (2018), introduced the 'lottery ticket hypothesis', which indicated that a randomly initialized neural network will have a sub-network that can achieve a performance that is comparable to the original network; hence, the original network can be compressed to the smaller sub-network. This empirical study has formed a fertile ground for subsequent theoretical research, which showed that such a sub-network can indeed exist (see e.g., Malach et al. (2020); Burkholz et al. (2021); da Cunha et al. (2022)); yet, it is not clear how to develop an algorithm that can find it in a feasible amount of time.
Another line of research has developed methods to enforce compressibility of neural networks by using sparsity enforcing regularizers, see e.g., Papyan et al. (2018); Aytekin et al. (2019); Chen et al. (2020); Lederer (2023); Kengne and Wade (2023). While they have led to interesting algorithms, the resulting algorithms typically require higher computational needs due to the increased complexity of the problem. On the other hand, due to the nonconvexity of the overall objective, it is also not trivial to provide theoretical guarantees for the compressibility of the resulting network weights.
Recently it has been shown that the training dynamics can have an influence on the compressibility of the algorithm output. In particular, motivated by the empirical and theoretical evidence that heavy-tails might arise in stochastic optimization (see e.g., Martin and Mahoney (2019); Simsekli et al. (2019); Simsekli et al. (2019); Simsekli et al. (2020); Zhou et al. (2020); Zhang et al. (2020); Camuto et al. (2021)), Barsbey et al. (2021); Shin (2021) showed that the network weights learned by stochastic gradient descent (SGD) will be compressible if we assume that they are heavy-tailed and there exists a certain form of statistical independence within the network weights. These studies illustrated that, even _without_ any modification to the optimization algorithm, the learned network weights can be compressible depending on the algorithm hyperparameters (such as the step-size or the batch-size). Even though the tail and independence conditions were recently relaxed in Lee et al. (2022), the resulting theory relies on unverifiable assumptions, and hence does not provide a practical guideline.
In this paper, we focus on single-hidden-layer neural networks with a fixed second layer (i.e., the setting used in previous work De Bortoli et al. (2020)) trained with vanilla SGD, and show that, when the iterates of SGD are simply perturbed by heavy-tailed noise with infinite variance (similar to the settings considered in Simsekli (2017); Nguyen et al. (2019); Simsekli et al. (2020); Huang et al. (2021); Zhang and Zhang (2023)), the assumption made in Barsbey et al. (2021) in effect holds. More precisely, denoting the number of hidden units by \(n\) and the step-size of SGD by \(\eta\), we consider the _mean-field limit_, where \(n\) goes to infinity and \(\eta\) goes to zero. We show that in this limiting case, the columns of the weight matrix will be independent and identically distributed (i.i.d.) with a common _heavy-tailed_ distribution. Then, we focus on the finite \(n\) and \(\eta\) regime and we prove that for _any_ compression ratio (to be made precise in the next section), there exists a number \(N\), such that if \(n\geq N\) and \(\eta\) is sufficiently small, the network weight matrix will be compressible with high probability. Figure 1 illustrates the overall approach and makes our notion of compressibility precise.
To prove our compressibility result, we make two main technical contributions. We first consider the case where the step-size \(\eta\to 0\), for which the SGD recursion perturbed with heavy-tailed noise yields a _system_ of heavy-tailed stochastic differential equations (SDE)
with \(n\) particles. As our first technical contribution, we show that as \(n\to\infty\) this particle system converges to a mean-field limit, which is a McKean-Vlasov-type SDE that is driven by a heavy-tailed process Jourdain et al. (2007); Liang et al. (2021); Cavallazzi (2023). For this convergence, we obtain a rate of \(n^{-1/2}\), which is faster than the best known rates, as recently proven in Cavallazzi (2023). This result indicates that a _propagation of chaos_ phenomenon Sznitman (1991) emerges1: in the mean-field regime, the columns of the weight matrix will be i.i.d. and heavy-tailed due to the injected noise.
Footnote 1: Here, the term chaos refers to statistical independence: when the particles are initialized independently, they stay independent through the whole process even though their common distribution might evolve.
Next, we focus on the Euler discretizations of the particle SDE to be able to obtain a practical, implementable algorithm. As our second main technical contribution, we derive _strong-error_ estimates for the Euler discretization Kloeden et al. (1992) and show that for sufficiently small \(\eta\), the trajectories of the discretized process will be close to the one of the continuous-time SDE, in a precise sense. This result is similar to the ones derived for vanilla SDEs (e.g., Mikulevicius and Xu (2018)) and enables us to incorporate the error induced by using a finite step-size \(\eta\) to the error of the overall procedure.
Equipped with these results, we finally prove a high-probability compression bound by invoking Gribonval et al. (2012); Amini et al. (2011), which essentially shows that an i.i.d. sequence of heavy-tailed random variables will have a small proportion of elements that will dominate the whole sequence in terms of absolute values (to be stated formally in the next section). This establishes our main contribution. Here, we shall note that similar mean-field regimes have already been considered in machine learning (see e.g., Mei et al. (2018); Chizat and Bach (2018); Rotskoff and Vanden-Eijnden (2018); Jabir et al. (2019); Mei et al. (2019); De Bortoli et al. (2020); Sirignano and Spiliopoulos (2022)). However, these studies all focused on particle SDE systems that either converge to deterministic systems or that are driven by Brownian motion. While they have introduced interesting analysis tools, we cannot directly benefit from their analysis in this paper, since the heavy-tails are crucial for obtaining compressibility, and the Brownian-driven SDEs cannot produce heavy-tailed solutions in general. Hence, as we consider heavy-tailed SDEs in this paper, we need to use different techniques to prove mean-field limits, compared to the prior art in machine learning.
Figure 1: The illustration of the overall approach. We consider a one-hidden-layer neural network with \(n\) hidden units, which results in a weight matrix of \(n\) columns (first layer). We show that, when SGD is perturbed with heavy-tailed noise, as \(n\to\infty\), each column will follow a multivariate heavy-tailed distribution in an i.i.d. fashion. This implies that a small number of columns will have significantly larger norms compared to the others; hence, the norm of the overall weight matrix will be determined by such columns Gribonval et al. (2012). As a result, the majority of the columns can be removed (i.e., set to zero), which we refer to as compressibility.
To validate our theory, we conduct experiments on single-hidden-layer neural networks on different datasets. Our results show that, even with a minor modification to SGD (i.e., injecting heavy-tailed noise), the proposed approach can achieve compressibility with a negligible computational overhead and with a slight compromise from the training and test error. For instance, on a classification task with the MNIST dataset, when we set \(n=10\)K, with vanilla SGD, we obtain a test accuracy of 94.69%, whereas with the proposed approach, we can remove 44% of the columns of the weight matrix, while maintaining a test accuracy of 94.04%. We provide all the proofs in the appendix.
## 2 Preliminaries and Technical Background
**Notation.** For a vector \(u\in\mathbb{R}^{d}\), denote by \(\|u\|\) its Euclidean norm, and by \(\|u\|_{p}\) its \(\ell_{p}\) norm. For a function \(f\in C(\mathbb{R}^{d_{1}},\mathbb{R}^{d_{2}})\), denote by \(\|f\|_{\infty}:=\sup_{x\in\mathbb{R}^{d_{1}}}\|f(x)\|\) its \(L^{\infty}\) norm. For a family of \(n\) (or infinitely many) vectors, the indexing \(\cdot^{i,n}\) denotes the \(i\)-th vector in the family. In addition, for random variables, \(\stackrel{{\rm(d)}}{{=}}\) means equality in distribution, and the space of probability measures on \(\mathbb{R}^{d}\) is denoted by \(\mathcal{P}(\mathbb{R}^{d})\). For a matrix \(A\in\mathbb{R}^{d_{1}\times d_{2}}\), its Frobenius norm is denoted by \(\|A\|_{F}=\sqrt{\sum_{i=1}^{d_{1}}\sum_{j=1}^{d_{2}}|a_{i,j}|^{2}}\). Unless otherwise specified, \(\mathbb{E}\) denotes the expectation over all the randomness taken into consideration.
### Alpha-stable processes
A random variable \(X\in\mathbb{R}^{d}\) is called \(\alpha\)_-stable_ with stability parameter \(\alpha\in(0,2]\) if, whenever \(X_{1}\), \(X_{2}\), \(\ldots\) are independent copies of \(X\), we have \(n^{-1/\alpha}\sum_{j=1}^{n}X_{j}\stackrel{{\rm(d)}}{{=}}X\) for all \(n\geq 1\) Samoradnitsky (2017). Stable distributions appear as the limiting distribution in the generalized central limit theorem (CLT) Gnedenko and Kolmogorov (1954). In the one-dimensional case (\(d=1\)), we call the variable \(X\) a symmetric \(\alpha\)-stable random variable if its characteristic function is of the following form: \(\mathbb{E}[\exp(i\omega X)]=\exp(-|\omega|^{\alpha})\) for \(\omega\in\mathbb{R}\).
For symmetric \(\alpha\)-stable distributions, the case \(\alpha=2\) corresponds to the Gaussian distribution, while \(\alpha=1\) corresponds to the Cauchy distribution. An important property of \(\alpha\)-stable distributions is that in the case \(\alpha\in(1,2)\), the \(p\)-th moment of an \(\alpha\)-stable random variable is finite if and only if \(p<\alpha\); hence, the distribution is heavy-tailed. In particular, \(\mathbb{E}[|X|]<\infty\) and \(\mathbb{E}[|X|^{2}]=\infty\), which can be used to model phenomena with heavy-tailed observations.
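This moment dichotomy is easy to check numerically. The following sketch (our own illustration, using SciPy's `levy_stable` with our choice \(\alpha=1.5\)) shows the empirical first absolute moment settling while the empirical second moment keeps growing with the sample size:

```python
import numpy as np
from scipy.stats import levy_stable

for n in (10**3, 10**5, 10**7):
    x = levy_stable.rvs(alpha=1.5, beta=0.0, size=n)  # symmetric, alpha = 1.5
    print(n, np.mean(np.abs(x)), np.mean(x**2))
# E[|X|] is finite since 1 < alpha, so the first statistic stabilizes near a
# constant; E[X^2] is infinite since 2 > alpha, so the empirical second
# moment keeps growing (erratically) as n increases.
```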
There exist different types of \(\alpha\)-stable random vectors in \(\mathbb{R}^{d}\). In this study we will be interested in the following three variants, whose characteristic functions (for \(u\in\mathbb{R}^{d}\)) are given as follows:
* **Type-I.** Let \(Z\in\mathbb{R}\) be a symmetric \(\alpha\)-stable random variable. We then construct the random vector \(X\) such that all the coordinates of \(X\) are equal to \(Z\). In other words, \(X=\mathbf{1}_{d}Z\), where \(\mathbf{1}_{d}\in\mathbb{R}^{d}\) is a vector of ones. With this choice, \(X\) admits the following characteristic function: \(\mathbb{E}\left[\exp(i\langle u,X\rangle)\right]=\exp(-|\langle u,\mathbf{1}_{d}\rangle|^{\alpha})\);
* **Type-II.**\(X\) has i.i.d. coordinates, such that each component of \(X\) is a symmetric \(\alpha\)-stable random variable in \(\mathbb{R}\). This choice yields the following characteristic function: \(\mathbb{E}\left[\exp(i\langle u,X\rangle)\right]=\exp(-\sum_{i=1}^{d}|u_{i}|^{\alpha})\);
* **Type-III.**\(X\) is rotationally invariant \(\alpha\)-stable random vector with the characteristic function \(\mathbb{E}\left[\exp(i\langle u,X\rangle)\right]=\exp(-\|u\|^{\alpha})\). Note that the Type-II and Type-III noises reduce to a Gaussian distribution when \(\alpha=2\) (i.e., the characteristic function becomes \(\exp(-\|u\|^{2})\)). Similar to the fact that stable distributions extend the Gaussian distribution, we can define a more general random process, called the _\(\alpha\)-stable Levy process_, that extends the Brownian motion. Formally, \(\alpha\)-stable processes are stochastic processes \((\mathrm{L}_{t}^{\alpha})_{t\geq 0}\) with independent and stationary \(\alpha\)-stable increments, and have the following definition:
* \(\mathrm{L}_{0}^{\alpha}=0\) almost surely,
* For any \(0\leq t_{0}<t_{1}<\cdots<t_{N}\), the increments \(\mathrm{L}_{t_{n}}^{\alpha}-\mathrm{L}_{t_{n-1}}^{\alpha}\) are independent,
* For any \(0\leq s<t\), the difference \(\mathrm{L}_{t}^{\alpha}-\mathrm{L}_{s}^{\alpha}\) and \((t-s)^{1/\alpha}\mathrm{L}_{1}^{\alpha}\) have the same distribution,
* \(\mathrm{L}_{t}^{\alpha}\) is stochastically continuous, i.e. for any \(\delta>0\) and \(s\geq 0\), \(\mathbb{P}(\|\mathrm{L}_{t}^{\alpha}-\mathrm{L}_{s}^{\alpha}\|>\delta)\to 0\) as \(t\to s\). To fully characterize an \(\alpha\)-stable process, we further need to specify the distribution of \(\mathrm{L}_{1}^{\alpha}\). Along with the above properties, the choice for \(\mathrm{L}_{1}^{\alpha}\) will fully determine the process. For this purpose, we will again consider the previous three types of \(\alpha\)-stable vectors: We will call the process \(\mathrm{L}_{t}^{\alpha}\) a Type-I process if \(\mathrm{L}_{1}^{\alpha}\) is a Type-I \(\alpha\)-stable random vector. We define the Type-II and Type-III processes analogously. Note that, when \(\alpha=2\), Type-II and Type-III processes reduce to the Brownian motion. For notational clarity, occasionally, we will drop the index \(\alpha\) and denote the process by \(\mathrm{L}_{t}\).
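To make the noise types concrete, here is a minimal sampler sketch (our own code, with hypothetical function names): it implements the Chambers-Mallows-Stuck recipe for standard symmetric \(\alpha\)-stable draws and assembles Type-I and Type-II vectors. The rotationally invariant Type-III vector can be obtained from the sub-Gaussian representation \(\sqrt{A}\,G\) with a positive \((\alpha/2)\)-stable \(A\) and a Gaussian \(G\), which we omit to keep the sketch short.

```python
import numpy as np

def sym_alpha_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck sampler for standard symmetric alpha-stable
    draws, i.e. E[exp(i w X)] = exp(-|w|**alpha); valid for alpha != 1."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)  # random angle
    W = rng.exponential(1.0, size)                # unit-mean exponential
    return (np.sin(alpha * U) / np.cos(U) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * U) / W) ** ((1.0 - alpha) / alpha))

def type1_noise(alpha, d, rng):
    """Type-I: a single scalar stable draw copied into all d coordinates."""
    return np.full(d, sym_alpha_stable(alpha, None, rng))

def type2_noise(alpha, d, rng):
    """Type-II: i.i.d. symmetric alpha-stable coordinates."""
    return sym_alpha_stable(alpha, d, rng)

rng = np.random.default_rng(0)
print(type1_noise(1.8, 4, rng))  # four equal coordinates
print(type2_noise(1.8, 4, rng))  # four independent coordinates
```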
### Compressibility of heavy-tailed processes
One interesting property of heavy-tailed distributions in the one-dimensional case is that they exhibit a certain compressibility property. Informally, if we consider a sequence of i.i.d. random variables coming from a heavy-tailed distribution, a small portion of these variables will likely have a very large magnitude due to the heaviness of the tails, and they will dominate all the other variables in terms of magnitudes Nair et al. (2022). Therefore, if we only keep this small number of variables with large magnitude, we can 'compress' (in a lossy way) the whole sequence of random variables by representing it with this small subset. Concurrently, Amini et al. (2011); Gribonval et al. (2012) provided formal proofs for these explanations. Formally, Gribonval et al. (2012) characterized the family of probability distributions whose i.i.d. realizations are compressible. They introduced the notion of \(\ell_{p}\)-compressibility, defined in terms of the error made after pruning a fixed portion of small (in magnitude) elements of an i.i.d. sequence whose common distribution has diverging \(p\)-th order moments. More precisely, let \(X_{n}=(x_{1},\ldots,x_{n})\) be a sequence of i.i.d. random variables such that \(\mathbb{E}\left[|x_{1}|^{\alpha}\right]=\infty\) for some \(\alpha\in\mathbb{R}_{+}\). Then, for all \(p\geq\alpha\) and \(0<\kappa\leq 1\), denoting by \(X_{n}^{(\kappa n)}\) the \(\lfloor\kappa n\rfloor\) largest order statistics2 of \(X_{n}\), the following asymptotic on the relative compression error holds almost surely:
Footnote 2: In other words, \(X_{n}^{(\kappa n)}\) is obtained by keeping only the largest (in magnitude) \(\kappa n\) elements of \(X_{n}\) and setting all the other elements to \(0\).
\[\lim_{n\to\infty}\frac{\|X_{n}^{(\kappa n)}-X_{n}\|_{p}}{\|X_{n}\|_{p}}=0\]
Building on this fact, Barsbey et al. (2021) proposed structural pruning of neural networks (the procedure described in Figure 1) by assuming that the network weights provided by SGD will be asymptotically independent. In this study, instead of making this assumption, we will directly prove that the network weights will be asymptotically independent in the two-layer neural network setting with additive heavy-tailed noise injections to SGD.
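The displayed limit can be illustrated numerically. The sketch below (our own demo, with our choice of \(n\), \(\kappa\), and \(\alpha\)) compares the relative compression error of an i.i.d. \(\alpha\)-stable sequence against a Gaussian one:

```python
import numpy as np

def relative_error(x, kappa, p=2.0):
    """Relative l_p error after zeroing all but the floor(kappa*n)
    largest-magnitude entries of x (the quantity in the displayed limit)."""
    n = x.size
    k = int(np.floor(kappa * n))
    pruned = x.copy()
    pruned[np.argsort(np.abs(x))[: n - k]] = 0.0  # drop the n-k smallest
    return np.linalg.norm(pruned - x, p) / np.linalg.norm(x, p)

rng = np.random.default_rng(1)
n, kappa, alpha = 100_000, 0.05, 1.5
# Chambers-Mallows-Stuck draws of a standard symmetric alpha-stable sample
U = rng.uniform(-np.pi / 2, np.pi / 2, n)
W = rng.exponential(1.0, n)
stable = (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
          * (np.cos((1 - alpha) * U) / W) ** ((1 - alpha) / alpha))
print(relative_error(stable, kappa))              # well below 1: compressible
print(relative_error(rng.normal(size=n), kappa))  # close to 1: light tails resist pruning
```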
## 3 Problem Setting and the Main Result
We consider a single-hidden-layer overparametrized network of \(n\) units and use the setup provided in De Bortoli et al. (2020). Our goal is to minimize the expected loss in a supervised learning regime, where each data point \(z=(x,y)\) is distributed according to \(\pi(\mathrm{d}x,\mathrm{d}y)\);3 the feature \(x\) lies in \(\mathcal{X}\subset\mathbb{R}^{d}\) and the label \(y\) in \(\mathcal{Y}\). We denote by \(\theta^{i,n}\in\mathbb{R}^{p}\) the parameter for the \(i\)-th unit, and the parametrized model is denoted by \(h_{x}:\mathbb{R}^{p}\rightarrow\mathbb{R}^{l}\). The mean-field network is the average over models for \(n\) units:
Footnote 3: Note that for finite datasets, \(\pi\) can be chosen as a measure supported on finitely many points.
\[f_{\Theta^{n}}(x)=\frac{1}{n}\sum_{i=1}^{n}h_{x}(\theta^{i,n}),\]
where \(\Theta^{n}=(\theta^{i,n})_{i=1}^{n}\in\mathbb{R}^{p\times n}\) denotes the collection of parameters in the network and \(x\in\mathcal{X}\) is the feature variable for the data point. In particular, the mean-field network corresponds to a two-layer neural network in which the weights of the second layer are fixed to \(1/n\) and \(\Theta^{n}\) collects the parameters of the first layer. While this model is less realistic than those used in practice, we believe that it is desirable from a theoretical point of view, and this limitation can be circumvented by replacing \(h_{x}(\theta^{i,n})\) with \(h_{x}(c^{i,n},\theta^{i,n})=c^{i,n}h_{x}(\theta^{i,n})\), where \(c^{i,n}\) and \(\theta^{i,n}\) are weights corresponding to the two layers. However, obtaining similar results in that setup would require stronger assumptions and a more involved proof, which we leave for future work.
Given a loss function \(\ell:\mathbb{R}^{l}\times\mathcal{Y}\rightarrow\mathbb{R}^{+}\), the goal (for each \(n\)) is to minimize the expected loss
\[R(\Theta^{n})=\mathbb{E}_{(x,y)\sim\pi}\left[\ell\left(f_{\Theta^{n}}(x),y \right)\right]. \tag{1}\]
One of the most popular approaches to minimize this loss is the stochastic gradient descent (SGD) algorithm. In this study, we consider a simple modification of SGD, where we inject a stable noise vector to the iterates at each iteration. For notational clarity, we will describe the algorithm and develop the theory over gradient descent, where we will assume that the algorithm has access to the true gradient \(\nabla R\) at every iteration. However, since we are already injecting a heavy-tailed noise with _infinite variance_, our techniques can be adapted for handling the stochastic gradient noise (under additional assumptions, e.g., De Bortoli et al. (2020)), which typically has a milder behavior compared to the \(\alpha\)-stable noise4.
Footnote 4: In Simsekli et al. (2019) the authors argued that the stochastic gradient noise in neural networks can be modeled by using stable distributions. Under such an assumption, the effect of the stochastic gradients can be directly incorporated into \(\mathrm{L}_{t}^{\alpha}\).
Let us set the notation for the proposed algorithm. Let \(\hat{\theta}_{0}^{i,n}\), \(i=1,\ldots,n\), be the initial values of the iterates, which are \(n\) random variables in \(\mathbb{R}^{p}\) distributed independently according to a given initial probability distribution \(\mu_{0}\). Then, we consider the gradient descent updates
with stepsize \(\eta n\), which is perturbed by i.i.d. \(\alpha\)-stable noises \(\sigma\cdot\eta^{1/\alpha}X_{k}^{i,n}\) for each unit \(i=1,\ldots,n\) and some \(\sigma>0\):
\[\begin{cases}\hat{\theta}_{k+1}^{i,n}=\hat{\theta}_{k}^{i,n}-\eta n\left[ \partial_{\theta^{i,n}}R(\Theta^{n})\right]+\sigma\cdot\eta^{1/\alpha}X_{k}^{ i,n}\\ \hat{\theta}_{0}^{i,n}\sim\mu_{0}\in\mathcal{P}(\mathbb{R}^{p}),\end{cases} \tag{2}\]
where the scaling factor \(\eta^{1/\alpha}\) in front of the stable noise ensures that the discrete dynamics of the system converge to an SDE as \(\eta\to 0\). At this stage, we do not have to determine which type of stable noise (e.g., Type-I, II, or III) we shall consider, as they will all satisfy the requirements of our theory. However, our empirical findings will illustrate that the choice affects the overall performance.
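For concreteness, here is a minimal sketch of one update of recursion (2); the tanh unit \(h_{x}(\theta)=\tanh(\langle\theta,x\rangle)\), the squared loss, and the Type-II noise are our illustrative choices, not the paper's experimental code.

```python
import numpy as np

def stable_noise(alpha, shape, rng):
    """Standard symmetric alpha-stable entries (Chambers-Mallows-Stuck)."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, shape)
    W = rng.exponential(1.0, shape)
    return (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
            * (np.cos((1 - alpha) * U) / W) ** ((1 - alpha) / alpha))

def noisy_sgd_step(Theta, X, y, eta, sigma, alpha, rng):
    """One step of recursion (2). Theta is (p, n) with columns theta^{i,n};
    X is a (B, p) batch and y a (B,) target vector. The model is
    f_Theta(x) = (1/n) * sum_i tanh(<theta_i, x>), with loss (f - y)^2 / 2."""
    n = Theta.shape[1]
    Z = X @ Theta                              # (B, n) pre-activations
    err = np.tanh(Z).mean(axis=1) - y          # (B,) residuals f - y
    # dR/dtheta_i = batch mean of err * (1/n) * tanh'(z_i) * x
    grad = X.T @ (err[:, None] * (1.0 - np.tanh(Z) ** 2)) / (n * len(y))
    # drift gets step-size eta*n; noise gets the eta**(1/alpha) scaling
    return (Theta - eta * n * grad
            + sigma * eta ** (1 / alpha) * stable_noise(alpha, Theta.shape, rng))
```

A training loop simply iterates this map over minibatches; the injected noise term is the only change relative to vanilla SGD.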
We now state the assumptions that will imply our theoretical results. The following assumptions are similar to (De Bortoli et al., 2020, Assumption A1).
**Assumption 1**.:
* Regularity of the model: for each \(x\in\mathcal{X}\), the function \(h_{x}:\mathbb{R}^{p}\to\mathbb{R}^{l}\) is two-times differentiable, and there exists a function \(\Psi:\mathcal{X}\to\mathbb{R}_{+}\) such that for any \(x\in\mathcal{X}\), \[\|h_{x}(\cdot)\|_{\infty}+\|\nabla h_{x}(\cdot)\|_{\infty}+\|\nabla^{2}h_{x}( \cdot)\|_{\infty}\leq\Psi(x).\]
* Regularity of the loss function: there exists a function \(\Phi:\mathcal{Y}\to\mathbb{R}_{+}\) such that \[\|\partial_{1}\ell(\cdot,y)\|_{\infty}+\|\partial_{1}^{2}\ell(\cdot,y)\|_{ \infty}\leq\Phi(y)\]
* Moment bounds on \(\Phi(\cdot)\) and \(\Psi(\cdot)\): there exists a positive constant \(B\) such that \[\mathbb{E}_{(x,y)\sim\pi}[\Psi^{2}(x)(1+\Phi^{2}(y))]\leq B^{2}.\]
Let us remark that these are rather standard smoothness assumptions that have been made in the mean-field literature Mei et al. (2018, 2019) and are satisfied by several smooth activation functions, including the sigmoid and hyperbolic tangent functions.
We now proceed to our main result. Let \(\hat{\Theta}_{k}^{n}\in\mathbb{R}^{p\times n}\) be the concatenation of all parameters \(\hat{\theta}_{k}^{i,n}\), \(i=1,\ldots,n\), obtained by the recursion (2) after \(k\) iterations. We will now compress \(\hat{\Theta}_{k}^{n}\) by pruning its columns with small norms. More precisely, fix a compression ratio \(\kappa\in(0,1)\) and compute the norms of the columns of \(\hat{\Theta}_{k}^{n}\), i.e., \(\|\hat{\theta}_{k}^{i,n}\|\). Then, keep the \(\lfloor\kappa n\rfloor\) columns with the largest norms, and set all the other columns to zero in their entirety. Finally, denote by \(\hat{\Theta}_{k}^{(\kappa n)}\in\mathbb{R}^{p\times n}\) the pruned version of \(\hat{\Theta}_{k}^{n}\).
**Theorem 3.1**.: _Suppose that Assumption 1 holds. For any fixed \(t>0\), \(\kappa\in(0,1)\) and \(\epsilon>0\) sufficiently small, with probability \(1-\epsilon\), there exists \(N\in\mathbb{N}_{+}\) such that for all \(n\geq N\) and \(\eta\) such that \(\eta\leq n^{-\alpha/2-1}\), the following upper bound on the relative compression error for the parameters holds:_
\[\frac{\left\|\hat{\Theta}_{\lfloor t/\eta\rfloor}^{(\kappa n)}-\hat{\Theta}_{ \lfloor t/\eta\rfloor}^{n}\right\|_{F}}{\left\|\hat{\Theta}_{\lfloor t/\eta \rfloor}^{n}\right\|_{F}}\leq\epsilon.\]
This bound shows that, thanks to the heavy-tailed noise injections, the weight matrices will be compressible at _any_ compression rate, as long as the network is sufficiently over-parametrized and the step-size is sufficiently small. We shall note that this bound also enables us to directly obtain a generalization bound by invoking (Barsbey et al., 2021, Theorem 4).
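The pruning map \(\hat{\Theta}_{k}^{n}\mapsto\hat{\Theta}_{k}^{(\kappa n)}\) and the relative error bounded in Theorem 3.1 are simple to compute; a minimal sketch (the helper name is ours):

```python
import numpy as np

def prune_and_error(Theta, kappa):
    """Zero all but the floor(kappa*n) largest-norm columns of Theta and
    return the pruned matrix with the relative Frobenius error of Thm 3.1."""
    n = Theta.shape[1]
    k = int(np.floor(kappa * n))
    keep = np.argsort(np.linalg.norm(Theta, axis=0))[n - k:]  # largest norms
    pruned = np.zeros_like(Theta)
    pruned[:, keep] = Theta[:, keep]
    rel = np.linalg.norm(pruned - Theta, "fro") / np.linalg.norm(Theta, "fro")
    return pruned, rel
```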
## 4 Proof Strategy and Intermediate Results
In this section, we gather the main technical contributions with the purpose of demonstrating Theorem 3.1. We begin by rewriting (2) in the following form:
\[\begin{cases}\hat{\theta}^{i,n}_{k+1}-\hat{\theta}^{i,n}_{k}=\eta b(\hat{\theta }^{i,n}_{k},\hat{\mu}^{n}_{k})+\sigma\cdot\eta^{1/\alpha}X^{i,n}_{k}\\ \hat{\theta}^{i,n}_{0}\sim\mu_{0}\in\mathcal{P}(\mathbb{R}^{p}),\end{cases} \tag{3}\]
where \(\hat{\mu}^{n}_{k}=\frac{1}{n}\sum_{i=1}^{n}\delta_{\hat{\theta}^{i,n}_{k}}\) is the empirical distribution of the parameters at iteration \(k\) and \(\delta\) is the Dirac measure, and the drift is given by \(b(\theta^{i,n}_{k},\mu^{n}_{k})=-\mathbb{E}[\partial_{1}\ell(\mu^{n}_{k}(h_{x }(\cdot)),y)\nabla h_{x}(\theta^{i,n}_{k})]\), where \(\partial_{1}\) denotes the partial derivative with respect to the first parameter and
\[\mu^{n}_{k}(h_{x}(\cdot)):=\int h_{x}(\theta)\mathrm{d}\mu^{n}_{k}(\theta)= \frac{1}{n}\sum_{i=1}^{n}h_{x}(\theta^{i,n}_{k})=f_{\Theta^{n}_{k}}(x).\]
It is easy to check that \(b(\theta^{i,n}_{k},\mu^{n}_{k})=-n\partial_{\theta^{i,n}}R(\Theta^{n})\). By looking at the dynamics from this perspective, we can treat the evolution of the parameters as a system of evolving probability distributions \(\mu^{n}_{k}\): the empirical distribution of the parameters during the training process will converge to a limit as \(\eta\) goes to \(0\) and \(n\) goes to infinity.
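The identity \(b(\theta^{i,n}_{k},\mu^{n}_{k})=-n\,\partial_{\theta^{i,n}}R(\Theta^{n})\) can also be checked numerically; a small sketch with the same illustrative tanh model as above (sizes and indices chosen arbitrarily):

```python
import numpy as np

def R(Theta, X, y):
    """Empirical risk of the mean-field tanh network with squared loss."""
    preds = np.tanh(X @ Theta).mean(axis=1)
    return 0.5 * np.mean((preds - y) ** 2)

rng = np.random.default_rng(2)
p, n, B = 3, 8, 16
Theta = rng.normal(size=(p, n))
X, y = rng.normal(size=(B, p)), rng.normal(size=B)

i, j, h = 4, 1, 1e-6                      # unit i, coordinate j, step h
E = np.zeros_like(Theta); E[j, i] = h
fd = (R(Theta + E, X, y) - R(Theta - E, X, y)) / (2 * h)  # dR/dTheta[j, i]
Z = X @ Theta
grad = (X.T @ ((np.tanh(Z).mean(axis=1) - y)[:, None]
               * (1 - np.tanh(Z) ** 2)) / (n * B))[j, i]
print(fd, grad)  # central difference matches the analytic dR/dtheta_i entry
```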
We start by linking the recursion (2) to its limiting case where \(\eta\to 0\). The limiting dynamics can be described by the following system of SDEs:
\[\begin{cases}\mathrm{d}\theta^{i,n}_{t}=b(\theta^{i,n}_{t},\mu^{n}_{t}) \mathrm{d}t+\sigma\mathrm{d}\mathrm{L}^{i,n}_{t}\\ \theta^{i,n}_{0}\sim\mu_{0}\in\mathcal{P}(\mathbb{R}^{p}),\end{cases} \tag{4}\]
where \(\mu^{n}_{t}=\frac{1}{n}\sum_{i=1}^{n}\delta_{\theta^{i,n}_{t}}\) and \((\mathrm{L}^{i,n}_{t})_{t\geq 0}\) are independent \(\alpha\)-stable processes such that \(\mathrm{L}^{i,n}_{1}\stackrel{{(\mathrm{d})}}{{=}}X^{i,n}_{1}\). We can now see the original recursion (2) as an Euler discretization of (4), and we have the following strong uniform error estimate for the discretization.
**Theorem 4.1**.: _Let \((\theta^{i,n}_{t})_{t\geq 0}\) be the solutions to SDE (4) and \((\hat{\theta}^{i,n}_{k})_{k\in\mathbb{N}_{+}}\) be given by SGD (2) with the same initial condition \(\xi^{i,n}\) and \(\alpha\)-stable Levy noise \(\mathrm{L}^{i,n}_{\cdot}\), \(i\)=1,...,n. Under Assumption 1, for any \(T>0\), if \(\eta k\leq T\), there exists a constant \(C\) depending on \(B,T,\alpha\) such that the approximation error_
\[\mathbb{E}\left[\sup_{i\leq n}\|\theta^{i,n}_{\eta k}-\hat{\theta}^{i,n}_{k}\| \right]\leq C(\eta n)^{1/\alpha}.\]
In comparison to the standard error estimates in the Euler-Maruyama scheme concerning only the stepsize \(\eta\), the additional \(n\)-dependence is due to the fact that here we consider the supremum of the approximation error over all \(i\leq n\), which involves the expectation of the supremum of the modulus of \(n\) independent \(\alpha\)-stable random variables.
Next, we start from the system (4) and consider the case where \(n\to\infty\). In this limit, we obtain the following McKean-Vlasov-type stochastic differential equation:
\[\begin{cases}\mathrm{d}\theta^{\infty}_{t}=b(\theta^{\infty}_{t},[\theta^{ \infty}_{t}])\mathrm{d}t+\mathrm{d}\mathrm{L}_{t}\\ [\theta^{\infty}_{0}]=\mu\in\mathcal{P}(\mathbb{R}^{p}),\end{cases} \tag{5}\]
where \((\mathrm{L}_{t})_{t\geq 0}\) is an \(\alpha\)-stable process and \([\theta_{t}^{\infty}]\) denotes the distribution of \(\theta_{t}^{\infty}\). The existence and uniqueness of a strong solution to (5) are given in Cavallazzi (2023). Moreover, for any positive \(T\), \(\mathbb{E}\left[\sup_{t\leq T}\|\theta_{t}^{\infty}\|^{\alpha}\right]<+\infty.\) This SDE with measure-dependent coefficients turns out to be a useful mechanism for analyzing the behavior of neural networks and provides insights into the effects of noise on the learning dynamics.
In this step, we will link the system (4) to its limit (5), which is a strong uniform propagation of chaos result for the weights. The next result shows that, when \(n\) is sufficiently large, the trajectories of weights asymptotically behave as i.i.d. solutions to (5).
**Theorem 4.2**.: _Following the existence and uniqueness of strong solutions to (4) and (5), let \((\theta_{t}^{i,\infty})_{t\geq 0}\) be solutions to the McKean-Vlasov equation (5) and \((\theta_{t}^{i,n})_{t\geq 0}\) be solutions to (4) associated with the same realization of \(\alpha\)-stable processes \((\mathrm{L}_{t}^{i})_{t\geq 0}\) for each \(i\). Suppose that \((\mathrm{L}_{t}^{i})_{t\geq 0}\) are independent. Then there exists \(C\) depending on \(T,B\) such that_
\[\mathbb{E}\left[\sup_{t\leq T}\sup_{i\leq n}|\theta_{t}^{i,n}-\theta_{t}^{i, \infty}|\right]\leq\frac{C}{\sqrt{n}}\]
It is worth mentioning that the \(O(1/\sqrt{n})\) rate here is better, if \(\alpha<2\), than the \(O(1/n^{\alpha})\) rate in the literature on the propagation of chaos Cavallazzi (2023) under classical Lipschitz assumptions on the coefficients of the SDEs. The reason is that here, thanks to Assumption 1, we can take into account the specific structure of one-hidden-layer neural networks.
Finally, we are interested in the distributional properties of the McKean-Vlasov equation (5). The following result establishes that the marginal distributions of (5) will have diverging second-order moments, hence, they will be heavy-tailed.
**Theorem 4.3**.: _Let \((\mathrm{L}_{t})_{t\geq 0}\) be an \(\alpha\)-stable process. For any time \(t\), let \(\theta_{t}\) be the solution to (5) with initialization \(\theta_{0}\) which is independent of \((\mathrm{L}_{t})_{t\geq 0}\) such that \(\mathbb{E}\left[\|\theta_{0}\|\right]<\infty\), then the following holds_
\[\mathbb{E}\left[\|\theta_{t}^{\infty}\|^{2}\right]=+\infty.\]
We remark that the result is weak in the sense that the details of the tails of \(\theta_{t}\) with respect to \(\alpha\) and \(t\) are left implicit. However, it is sufficient for our compressibility result in Theorem 3.1.
Now, having proved all the necessary ingredients, Theorem 3.1 is obtained by accumulating the error bounds proven in Theorems 4.1 and 4.2, and applying (Gribonval et al., 2012, Proposition 1) along with Theorem 4.3.
## 5 Empirical Results
In this section, we validate our theory with empirical results. Our goal is to investigate the effects of the heavy-tailed noise injection in SGD in terms of compressibility and the train/test performance. We consider a single-hidden-layer neural network with ReLU activations and the cross entropy loss, applied to classification tasks. We chose the Electrocardiogram (ECG) dataset Yanping and Eamonn, as well as the MNIST and CIFAR10 datasets. By slightly stretching the scope of our theoretical framework, we also train the weights of the second layer instead of fixing them to \(1/n\).
For SGD, we fix the batch-size to one tenth of the number of training data points; the step-size is chosen to be small enough to approximate the continuous dynamics given by the McKean-Vlasov equation in order to stay close to the theory, but not too small, so that SGD converges in a reasonable amount of time. As for the noise level \(\sigma\), we tried a range of values for each dataset and \(n\), and we chose the largest \(\sigma\) such that the perturbed SGD converges. Intuitively, we can expect that smaller \(\alpha\) with heavier tails will lead to a lower relative compression error. However, it does not guarantee better test performance: one has to fine-tune the parameters appropriately to achieve a favorable trade-off between the compression error and the test performance. We repeat all the experiments 5 times and report the average and the standard deviation. For the noiseless case (vanilla SGD), the results of the different runs were almost identical, hence we did not report the standard deviations.
In our first experiment, we consider the ECG5000 dataset and choose the Type-I noise. Our goal is to investigate the effects of \(\alpha\) and \(n\) on the performance. Tables 1-2 illustrate the results. Here, for different cases, we monitor the training and test accuracies (out of 1.00), the pruning ratio, i.e., the percentage of the weight matrix that can be pruned while keeping 90% of the norm of the original matrix5, and the training/test accuracies after pruning (a.p.) the network with the specified pruning ratio.
Footnote 5: The pruning ratio plays the same role as \(\kappa\): we fix the compression error to 0.1 and find the largest \(\kappa\) that satisfies this error threshold.
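In code, the reported pruning ratio can be computed by sorting the squared column norms and dropping as many of the smallest columns as the 0.1 error budget allows; a small sketch (our own helper, not the authors' code):

```python
import numpy as np

def pruning_ratio(Theta, err_budget=0.1):
    """Largest percentage of columns of Theta that can be zeroed while the
    relative Frobenius compression error stays below err_budget."""
    sq = np.sort(np.linalg.norm(Theta, axis=0)) ** 2   # ascending norms^2
    rel = np.sqrt(np.cumsum(sq) / sq.sum())            # error if the m smallest go
    m = np.count_nonzero(rel <= err_budget)            # max droppable columns
    return 100.0 * m / sq.size
```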
The results show that, even for a moderate number of neurons \(n=2\)K, the heavy-tailed noise results in a significant improvement in the compression capability of the neural network. For \(\alpha=1.9\), we can see that the pruning ratio increases to 39%, whereas vanilla SGD is only compressible at a rate of 11%. Besides, the compromise in the test accuracy is almost negligible: the proposed approach achieves 95.3%, whereas vanilla SGD achieves 95.7% accuracy. We also observe that decreasing \(\alpha\) (i.e., increasing the heaviness of the tails) results in a better compression rate; yet, there is a tradeoff between this rate and the test performance. In Table 2, we repeat the same experiment for \(n=10\)K. We observe that the
| \(\alpha\) | Train Acc. | Test Acc. | Pruning Ratio | Train Acc. a.p. | Test Acc. a.p. |
| --- | --- | --- | --- | --- | --- |
| no noise | 0.974 | 0.957 | 11.45 | 0.97 | 0.954 |
| 1.75 | \(0.97\pm 0.007\) | \(0.955\pm 0.003\) | \(48.07\pm 7.036\) | \(0.944\pm 0.03\) | \(0.937\pm 0.022\) |
| 1.8 | \(0.97\pm 0.007\) | \(0.955\pm 0.003\) | \(44.68\pm 5.4\) | \(0.95\pm 0.025\) | \(0.963\pm 0.016\) |
| 1.9 | \(0.966\pm 0.008\) | \(0.959\pm 0.01\) | \(39.37\pm 2.57\) | \(0.962\pm 0.012\) | \(0.953\pm 0.005\) |

Table 1: ECG5000, Type-I noise, \(n=2\)K.
| \(\alpha\) | Train Acc. | Test Acc. | Pruning Ratio | Train Acc. a.p. | Test Acc. a.p. |
| --- | --- | --- | --- | --- | --- |
| no noise | 0.978 | 0.963 | 11.46 | 0.978 | 0.964 |
| 1.75 | \(0.978\pm 0.001\) | \(0.964\pm 0.001\) | \(52.59\pm 6.55\) | \(0.95\pm 0.03\) | \(0.954\pm 0.022\) |
| 1.8 | \(0.978\pm 0.001\) | \(0.964\pm 0.001\) | \(52.59\pm 6.55\) | \(0.95\pm 0.03\) | \(0.954\pm 0.022\) |
| 1.9 | \(0.978\pm 0.001\) | \(0.964\pm 0.001\) | \(40.85\pm 2.89\) | \(0.96\pm 0.021\) | \(0.958\pm 0.013\) |

Table 2: ECG5000, Type-I noise, \(n=10\)K.
previous conclusions become even clearer in this case, as our theory applies to large \(n\). For the case where \(\alpha=1.75\), we obtain a pruning ratio of \(52\%\) with test accuracy \(95.4\%\), whereas for vanilla SGD the ratio is only \(11\%\) and the original test accuracy is \(96.3\%\).
In our second experiment, we investigate the impact of the noise type. We set \(n=10\)K and use the same setting as in Table 2. Tables 3-4 illustrate the results. We observe that the choice of the noise type can make a significant difference in terms of both compressibility and accuracy. While the Type-III noise seems to obtain a similar accuracy when compared to Type-I, it achieves a worse compression rate. On the other hand, the behavior of Type-II noise is perhaps more interesting: for \(\alpha=1.9\) it both increases compressibility and achieves a better accuracy when compared to unpruned, vanilla SGD. However, we see that its behavior is much more volatile: the performance quickly degrades as we decrease \(\alpha\). From these comparisons, Type-I noise seems to achieve a better tradeoff.
In our next experiment, we consider the MNIST dataset, set \(n=10\)K and use Type-I noise. Table 5 illustrates the results. Similar to the previous results, we observe that the injected noise has a visible benefit on compressibility. When \(\alpha=1.9\), our approach doubles the compressibility of vanilla SGD (from 10% to 21%), whereas the training and test accuracies remain almost unchanged. On the other hand, when we decrease \(\alpha\), we observe that the pruning ratio goes up to 44%, while only compromising 1% of test accuracy. To further illustrate this result, we pruned vanilla SGD by using this pruning ratio (44%). In this case, the test accuracy of SGD drops down to 92%, whereas our simple noising scheme achieves 94% test accuracy with the same pruning ratio.
Our last experiment is a negative result that might be useful for illustrating the limitations of our approach. In this case, we consider the CIFAR10 dataset, set \(n=5000\), use Type-I
| \(\alpha\) | Train Acc. | Test Acc. | Pruning Ratio | Train Acc. a.p. | Test Acc. a.p. |
| --- | --- | --- | --- | --- | --- |
| 1.75 | \(0.97\pm 0.007\) | \(0.957\pm 0.005\) | \(33.48\pm 7.33\) | \(0.969\pm 0.008\) | \(0.957\pm 0.011\) |
| 1.8 | \(0.97\pm 0.007\) | \(0.956\pm 0.007\) | \(26.81\pm 4.72\) | \(0.963\pm 0.008\) | \(0.952\pm 0.008\) |
| 1.9 | \(0.97\pm 0.005\) | \(0.955\pm 0.005\) | \(17.59\pm 1.56\) | \(0.968\pm 0.004\) | \(0.954\pm 0.96\) |

Table 4: ECG5000, Type-III noise, \(n=10\)K.
| \(\alpha\) | Train Acc. | Test Acc. | Pruning Ratio | Train Acc. a.p. | Test Acc. a.p. |
| --- | --- | --- | --- | --- | --- |
| 1.75 | \(0.986\pm 0.003\) | \(0.982\pm 0.005\) | \(52.13\pm 27.78\) | \(0.865\pm 0.261\) | \(0.866\pm 0.251\) |
| 1.8 | \(0.985\pm 0.003\) | \(0.980\pm 0.005\) | \(39.9\pm 21.55\) | \(0.971\pm 0.025\) | \(0.972\pm 0.023\) |
| 1.9 | \(0.982\pm 0.003\) | \(0.976\pm 0.006\) | \(20.95\pm 6.137\) | \(0.982\pm 0.004\) | \(0.977\pm 0.006\) |

Table 3: ECG5000, Type-II noise, \(n=10\)K.
| \(\alpha\) | Train Acc. | Test Acc. | Pruning Ratio | Train Acc. a.p. | Test Acc. a.p. |
| --- | --- | --- | --- | --- | --- |
| no noise | 0.95 | 0.9487 | 10.59 | 0.9479 | 0.9476 |
| 1.75 | \(0.95\pm 0.0001\) | \(0.9454\pm 0.0005\) | \(44.42\pm 7.16\) | \(0.944\pm 0.0025\) | \(0.9409\pm 0.0019\) |
| 1.8 | \(0.95\pm 0.0001\) | \(0.9457\pm 0.0007\) | \(34.49\pm 5.07\) | \(0.9453\pm 0.0015\) | \(0.9397\pm 0.0036\) |
| 1.9 | \(0.95\pm 0.0001\) | \(0.9463\pm 0.0004\) | \(21.31\pm 1081\) | \(0.9478\pm 0.0008\) | \(0.9444\pm 0.0009\) |

Table 5: MNIST, Type-I noise, \(n=10\)K until reaching 95% training accuracy.
noise. We compute the pruning ratio and accuracies as before and we illustrate the results in Table 6. We observe that the injected noise does not bring an advantage in this case: vanilla SGD achieves a better pruning ratio when compared to the case where \(\alpha=1.9\). On the other hand, the noise injections result in a significant drop in the training accuracy after pruning, and this effect becomes even more pronounced when we decrease \(\alpha\). This indicates that the injected noise might complicate the training process.
Following the arguments of Barsbey et al. (2021), we suspect that, in this case, vanilla SGD already exhibits some sort of heavy tails and the additional noise might not be as beneficial as it was in the other cases. Although vanilla SGD can achieve similar compressibility, that regime is not easily controllable, and our paper is able to provide a more practical guideline for achieving compressibility along with theoretical guarantees.
## 6 Conclusion
We provided a methodological and theoretical framework for provably obtaining compressibility in mean-field neural networks. Our approach requires only a minimal modification of vanilla SGD and has the same computational complexity. By proving discretization error bounds and propagation of chaos results, we showed that the resulting algorithm is guaranteed to provide compressible parameters. We illustrated our approach on several experiments, where we showed that, in most cases, the proposed approach achieves a high compressibility ratio, while only slightly compromising the accuracy.
The limitations of our approach are as follows: (i) we consider mean-field networks; it would be of interest to generalize our results to more sophisticated architectures; (ii) we focused on compressibility, yet the noise injection also has an effect on the train/test accuracy. Hence, an investigation of the effect of the noise injection on the training loss is needed to understand the bigger picture. Finally, due to the theoretical nature of our paper, it does not have a direct negative social impact.
## Acknowledgements
The authors thank Alain Durmus for fruitful discussions. U.S. is partially supported by the French government under management of Agence Nationale de la Recherche as part of the "Investissements d'avenir" program, reference ANR-19-P3IA0001 (PRAIRIE 3IA Institute) and by the European Research Council Starting Grant DYNASTY - 101039676.
| \(\alpha\) | Train Acc. | Test Acc. | Pruning Ratio | Train Acc. a.p. | Test Acc. a.p. |
| --- | --- | --- | --- | --- | --- |
| no noise | 0.9514 | 0.5636 | 25.1 | 0.8289 | 0.5214 |
| 1.75 | 0.9503 | 0.5626 | 21.74 | 0.8196 | 0.52 |
| 1.9 | 0.95 | 0.5755 | 16.56 | 0.8870 | 0.5641 |

Table 6: CIFAR10, Type-I noise, \(n=5\)K until reaching 95% training accuracy. |
2305.11124 | Thermal light in confined dimensions for "laser" cooling with unfiltered
sunlight | Cooling of systems to sub-kelvin temperatures is usually done using either a
cold bath of particles or spontaneous photon scattering from a laser field; in
either case, cooling is driven by interaction with a well-ordered, cold (i.e.
low entropy) system. However, there have recently been several schemes proposed
for ``cooling by heating,'' in which raising the temperature of some mode
drives the cooling of the desired system faster. We discuss how to cool a
trapped ion to its motional ground state using unfiltered sunlight at
$5800\,\mathrm{K}$ to drive the cooling. We show how to treat the statistics of
thermal light in a single-mode fiber for delivery to the ion, and show
experimentally how the black-body spectrum is strongly modified by being
embedded in quasi-one-dimension. Quantitative estimates for the achievable
cooling rate with our measured fiber-coupled, low-dimensional sunlight show
promise for demonstrating this implementation of cooling by heating. | Amanda Younes, Wesley C. Campbell | 2023-05-18T17:14:51Z | http://arxiv.org/abs/2305.11124v1 | # Thermal light in confined dimensions for "laser" cooling with unfiltered sunlight
###### Abstract
Cooling of systems to sub-kelvin temperatures is usually done using either a cold bath of particles or spontaneous photon scattering from a laser field; in either case, cooling is driven by interaction with a well-ordered, cold (_i.e._ low entropy) system. However, there have recently been several schemes proposed for "cooling by heating," in which raising the temperature of some mode drives the cooling of the desired system faster. We discuss how to cool a trapped ion to its motional ground state using unfiltered sunlight at 5800 K to drive the cooling. We show how to treat the statistics of thermal light in a single-mode fiber for delivery to the ion, and show experimentally how the black-body spectrum is strongly modified by being embedded in quasi-one-dimension. Quantitative estimates for the achievable cooling rate with our measured fiber-coupled, low-dimensional sunlight show promise for demonstrating this implementation of cooling by heating.
## I Introduction
Cooling is commonly accomplished by the coupling between a system of interest and a cold bath, with no further interactions. However, in some cases the coupling between the two is controlled by the occupation of some other mode that connects them. A good example of this is laser cooling, where the electromagnetic modes populated by laser photons allow the system of interest to repeatedly spontaneously emit into cold vacuum modes [1]. The cooling is thereby driven by the highly-occupied modes of the laser, without which the cooling rate will fall to essentially zero.
However, since an ideal laser field is often approximated as in a coherent state [2], it is a displaced vacuum and has no entropy; the laser field can be thought of as being highly-ordered and in that sense also extremely cold [3]. The lasers used for laser cooling tend to have very narrow linewidths (typically \(\Delta\nu/\nu<10^{-8}\) for laser cooling atoms and molecules), as that feature allows the absorption of laser photons to be velocity-dependent. One can therefore ask, is it _necessary_ for laser cooling that the field that drives the cooling step (_i.e._ that couples the system to the cold bath) also be in a low-entropy state? And if so, to what extent can we identify the highly-ordered nature of _that_ laser field, as opposed to the highly-ordered nature of the vacuum field, as being responsible for the cooling?
Here, we propose how the phenomenon known as "cooling by heating" [4; 5] can be used to illustrate the answer to these questions. Cooling by heating refers to cases where the coupling between the system of interest and the cold bath can be increased by increasing the thermal occupation of a mode that couples the two, and therefore the system can be cooled by heating that mode. This paradigm has been used to study some counter-intuitive scenarios exhibiting cooling by heating [6; 7], and open questions persist about the interplay between this phenomenon and quantum correlations [8] and dissipative generation of entangled states [4].
We begin by introducing an experimentally-accessible scenario from atomic physics where the motion of a single, trapped atomic ion is to be cooled to its quantum ground state via a repeated cycle that uses sunlight to drive the coupling to the cold vacuum. Unitary (and therefore reversible, entropy-conserving, and non-cooling) operations on the atom's state are driven by a laser, followed by a separate step where only sunlight is applied to cool the ion's temperature. We analyze the achievable temperature in the presence of multiple baths at different temperatures using a model of virtual qubits [9; 10]. We then discuss the statistical physics of black-body radiation confined in a single-mode optical fiber for delivery to the ion, and observe how dimensionality affects the spectrum of a black body by analyzing fiber-coupled sunlight with a spectrometer. We conclude with an estimate of the achievable experimental cooling rate in this system.
## II Cooling an ion with thermal light
The form of laser cooling that we will consider for this demonstration is known as resolved-sideband cooling [11; 12], and has been implemented with lasers to cool ions [13], atoms [14], and micromechanical oscillators [15] to their quantum ground states of motion.
For a harmonically trapped atomic ion, the ion's motion in the trap is periodic at frequency \(\omega_{\rm motion}\) and this gives rise to the appearance of phase-modulated sidebands on the spectrum of applied laser light as observed in the rest frame of the ion. In the lab frame, this means that the ion's optical absorption spectrum consists not only of a "carrier" peak at the rest-frame resonant frequency of some optical transition (call it \(\omega_{1}\)), but also sidebands at \(\pm\omega_{\rm motion}\) from the carrier (as well as at \(\pm 2\omega_{\rm motion}\), and so on). If the laser frequency (\(\omega_{\ell}\)) is set to be resonant with the feature at \(\omega_{\ell}=\omega_{1}-\omega_{\rm motion}\) (the "red sideband"), the ion can absorb photons with energy \(\hbar\omega_{\ell}\) but it will emit them with an average energy closer to \(\hbar\omega_{1}\). Each cycle, then, removes approximately \(\hbar\omega_{\rm motion}\) of thermal energy on average, cooling the ion. Since the strength of the red sideband goes to zero as the ion's motion approaches the ground state, resolved
sideband cooling is capable of cooling the ion to its quantum ground state and then ceases to have any effect (in the absence of other sources of heating).
Since the spatial extent of the ion's harmonic motion (\(x_{0}=\sqrt{\hbar/(2m\omega_{\mathrm{motion}})}\), typically \(\approx 5\,\mathrm{nm}\)) is much smaller than the wavelength of radiation at that frequency (\(\lambda=2\pi c/\omega_{\mathrm{motion}}\), typically \(\approx 1\,\mathrm{km}\)), it is a very poor antenna, and the ion's motion is substantially impedance mismatched to electromagnetic radiation. In the absence of technical electric field noise, the motion of ions trapped inside room-temperature vacuum chambers remain out-of-equilibrium with the chamber for all relevant experimental timescales, and we will ignore any direct coupling between light and motion.
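These scales are easy to verify; a quick sketch with illustrative numbers of our own choosing (a \({}^{171}\)Yb\({}^{+}\)-like mass and a \(2\pi\times 1\,\mathrm{MHz}\) trap frequency are assumptions, not values from the paper):

```python
import numpy as np

hbar = 1.054571817e-34       # J s
c = 2.99792458e8             # m / s
m = 171 * 1.66053906660e-27  # kg, mass of a 171Yb+ ion (assumed)
omega = 2 * np.pi * 1e6      # rad/s, assumed trap frequency

x0 = np.sqrt(hbar / (2 * m * omega))  # zero-point extent of the motion
lam = 2 * np.pi * c / omega           # wavelength at the motional frequency
print(x0)   # ~5e-9 m: a few nanometers
print(lam)  # ~3e2 m; a few-hundred-kHz trap pushes this toward ~1 km
```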
The scheme we consider for sideband cooling of a trapped ion is shown in figure 1, and consists of a repeated, two-step cycle as follows. In step I (Fig. 1(b)), the ion is illuminated by a narrow-linewidth laser on the red sideband of the \(\mathrm{S}\rightarrow\mathrm{D}\) transition. The intensity and illumination time are chosen to fully transfer population to the (long-lived) D state for ions with energy near their thermal average energy, and then the laser is turned off. On average, this step can reduce the motional energy of the ion, but it also adds a much larger amount of total energy in the form of internal excitation. Entropy from the motional state has been partially transferred to the internal state of the ion in this unitary process. Since step I is reversible, the total entropy of the ion has not changed and the laser has accomplished no cooling.
In step II (Fig. 1(c)), the ion is illuminated with light capable of driving any population in the long-lived D state to a higher-lying P state (with resonant frequency \(\omega_{2}\)) that can quickly decay to the ground S state by spontaneously emitting a photon (\(\omega_{3}\)) into an approximately unoccupied mode (thermal states of optical-frequency modes at room temperature are close to the vacuum state). This step does not change the motional energy of the ion on average, but does reduce the total energy since the ion returns to its internal ground state. However, unlike step I, this step is not reversible and the ion's entropy (and therefore temperature) has been reduced. The optical modes at frequency \(\omega_{3}\) contain information about the ion's motional state, which is to say that the spontaneously emitted photon carries away entropy. This is the cooling step in the process.
In a typical implementation of sideband cooling for applications in precision measurement or quantum information processing, step II is driven by a laser at \(\omega_{2}\). However, this light need not be coherent nor narrow in linewidth, as its only job is to couple the ion to the vacuum modes at \(\omega_{3}\). For this, we propose to use thermal light in the form of fiber-coupled black-body radiation at \(T_{\odot}\approx 5800\,\mathrm{K}\) from the sun. Even if this light has spectral density near \(\omega_{1}\) and \(\omega_{3}\), the driving of those transitions will not directly change the motional energy of the atom on average, and this does not inhibit cooling to the ground state (see Appendix B).
### Minimum Achievable Temperature
To estimate the minimum temperature that can be achieved with this scheme, we start with a steady-state version of the sequence shown in Fig. 1(b) and (c) to illustrate thermalization, for which we assume band-limited sunlight and utilize the concept of virtual qubits [9]. Following that, we analyze the time-dependent scheme with unfiltered sunlight and argue that it achieves the same limiting temperature insofar as both scenarios yield a predicted minimum temperature that is below what the limiting temperature will likely be in practice due to other heating effects (such as momentum diffusion from absorption and emission, or heating from electrical noise).
If we wish to apply principles of thermodynamic equilibrium to the trapped ion example in Fig. 1, we need to identify a bath that is brought into contact with the ion's motion to cool it. For this, we can simplify the time-dependent scheme into a steady-state scheme by assuming that both the narrow-band laser light (Fig. 1(b)) and the sunlight (\(T_{2}=T_{\odot}\)) that connects levels D and P (Fig. 1(c)) are applied simultaneously and continuously, and that the sunlight is band-limited such that it does not connect either upper state to S. In this case, the \(\mathrm{S}\leftrightarrow\mathrm{P}\) transition is driven by room-temperature (\(T_{3}=T_{\mathrm{room}}\)) black-body radiation, and we can model the effect of the laser on \(\mathrm{S}\leftrightarrow\mathrm{D}\) as a thermal field with temperature \(T_{\ell}\rightarrow\infty\).
Figure 1: Atomic level structure for sideband cooling. (a) A three-level atom with an \(\mathrm{S}\leftrightarrow\mathrm{D}\) transition at \(\omega_{1}\), a \(\mathrm{D}\leftrightarrow\mathrm{P}\) transition at \(\omega_{2}\), and an \(\mathrm{S}\leftrightarrow\mathrm{P}\) transition at \(\omega_{3}\). (b) Step I of the cooling cycle, in which a red sideband of the \(\mathrm{S}\leftrightarrow\mathrm{D}\) transition is driven by a laser at frequency \(\omega_{\ell}=\omega_{1}-\omega_{\mathrm{motion}}\). (c) Step II of the cooling cycle, in which any population in D is returned to S via excitation to P by absorption of a photon at \(\omega_{2}\), followed by spontaneous emission of a photon at \(\omega_{3}\). We propose that the light at \(\omega_{2}\) can be provided by black-body radiation from the sun.
The continuous interaction of these three subsystems, each with a unique temperature, with the ion's motion can be aggregated into an effective interaction of the motion with a single virtual qubit with splitting \(\omega_{\mathrm{V}}=\omega_{\mathrm{motion}}\), held at a virtual temperature \(T_{\mathrm{V}}\), which is given by [9] (also, Appendix A):
\[T_{\rm V}=\frac{\omega_{\rm V}}{\frac{\omega_{3}}{T_{3}}-\frac{\omega_{2}}{T_{2}}-\frac{\omega_{\ell}}{T_{\ell}}}=\frac{\omega_{\rm motion}}{\frac{\omega_{3}}{T_{\rm room}}-\frac{\omega_{2}}{T_{\odot}}}\,. \tag{1}\]
In the limit that \(\omega_{3}/T_{\rm room}\gg\omega_{2}/T_{\odot}\), we conclude that the minimum achievable temperature is given by the virtual temperature
\[T_{\rm V}\approx\frac{\omega_{\rm motion}}{\omega_{3}}T_{\rm room}. \tag{2}\]
A thermal state of motion at \(T_{\rm V}\) has an average motional excitation of \(\left\langle n_{\rm motion}\right\rangle=\left(\exp\left(\beta_{\rm V}\hbar\omega_{\rm motion}\right)-1\right)^{-1}\approx\exp\left(-\beta_{\rm V}\hbar\omega_{\rm motion}\right)\), where \(\beta_{i}^{-1}\equiv k_{\rm B}T_{i}\). Since we expect \(k_{\rm B}T_{\rm room}\ll\hbar\omega_{3}\), this corresponds to an ion in its ground state of motion (\(T_{\rm V}\approx 1\,\upmu\)K and \(\bar{n}_{\rm motion}\approx 10^{-46}\)). Even in the limit where \(T_{\rm room}\) is replaced with \(T_{\odot}\), the virtual temperature is cold enough to cool the ion to its ground state (see Appendix B). We stress here that there are many potential sources of heating in experiments that we have not attempted to capture with this analysis, which focuses only on the steady state solution with thermal radiation fields (the "black-body limit"). Our conclusion is that ground-state cooling is possible with this scheme, but we do not claim that it is necessarily practical to achieve cooling all the way to \(T_{\rm V}\).
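A minimal numerical check of Eq. (2), assuming a trap frequency of \(2\pi\times 1\,\mathrm{MHz}\) and an \(\mathrm{S}\leftrightarrow\mathrm{P}\) wavelength of 455 nm (roughly the Ba\({}^{+}\) case); these inputs are illustrative assumptions rather than values fixed by the text:

```python
# Evaluate T_V from Eq. (2) and the corresponding thermal phonon occupation.
import numpy as np

hbar = 1.054571817e-34
kB = 1.380649e-23
c = 2.99792458e8

T_room = 300.0
omega_motion = 2 * np.pi * 1e6            # assumed trap frequency
omega_3 = 2 * np.pi * c / 455e-9          # assumed S<->P transition frequency

T_V = (omega_motion / omega_3) * T_room   # Eq. (2)
n_bar = 1 / (np.exp(hbar * omega_motion / (kB * T_V)) - 1)

print(f"T_V ~ {T_V*1e6:.2f} uK")          # sub-microkelvin, ~1 uK scale
print(f"n_bar ~ {n_bar:.1e}")             # astronomically small, ~1e-46
```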
For the time-dependent scheme of Fig. 1(b) and (c) with unfiltered sunlight, if the sunlight is left on long enough for the atom's internal states to equilibrate, the atomic energy distribution will be held at \(T_{\odot}\). However, the light is periodically extinguished in Step I, and in particular if we assume that the light is turned off at the end of the protocol and the ion's internal states are allowed to relax, the atomic internal temperature will equilibrate to \(T_{\rm room}\). As such, the effect of the laser is essentially to translate the motional thermal state to an energy scale set by \(\omega_{1}\) and allow that system to thermalize to \(T_{\rm room}\), followed by relaxation of that atomic excitation back to the energy scale of \(\omega_{\rm motion}\). We therefore expect the motional temperature to be limited by these considerations to a scaled version of \(T_{\rm room}\), which is precisely the temperature in Eq. (2). This is also the black-body limit for standard sideband cooling with laser light in a room-temperature vacuum chamber.
### Excitation Rate from Fiber-Coupled Black-Body Radiation
An atom illuminated by a focused beam of thermal light will not experience the same field as if it were inside a black body. Many modes will be in vacuum (or at least different thermal) states due to the anisotropy of the illumination, rather than in thermal states at the temperature of the black body. To calculate the excitation rate, \(\Gamma\), of a two-level atom initially in its ground state (state manifolds \(\{|{\rm e}_{i}\rangle\}\) and \(\{|{\rm g}_{i}\rangle\}\) with degeneracies \(g_{\rm e}\) and \(g_{\rm g}\), split by \(\omega_{\rm eg}\)) illuminated by focused incoherent light, we adopt an Einstein rate equation approach,
\[\Gamma = B_{\rm ge}\rho(\omega_{\rm eg}) \tag{3}\] \[= \frac{\pi^{2}c^{3}}{\hbar\omega_{\rm eg}^{3}}\frac{g_{\rm e}}{g_ {\rm g}}A_{\rm eg}\,\rho(\omega_{\rm eg}) \tag{4}\]
where \(A_{\rm eg}\) and \(B_{\rm ge}\) are the Einstein A and B coefficients for the transition and \(\rho(\omega_{\rm eg})\) is the spectral energy density (energy per unit volume per unit angular frequency) at the ion's position at the transition frequency.
We consider that light from an ideal black body is coupled into an optical fiber with a single, gaussian transverse mode and this fiber's output is being imaged onto the atom with an imaging system having half-cone convergence angle \(\vartheta\). We will further suppose, to keep our analysis consistent with typical experimental hardware, that \(\vartheta\ll 1\), which allows us to treat the optical system with the paraxial approximation. To calculate the rate from the fiber-coupled contribution alone, we will assume that this thermal light is the only significant source of illumination.
In the typical, textbook treatment of black body radiation, the light inside a black body in equilibrium at temperature \(T\) is characterized by a constant, isotropic spectral radiance \(B(\omega)\) (power per unit solid angle, per unit area, per unit angular frequency) given by Planck's law of black-body radiation,
\[B_{\rm P}(\omega)=\frac{\frac{\hbar\omega^{3}}{4\pi^{3}c^{2}}}{\exp\left( \beta\hbar\omega\right)-1}. \tag{5}\]
However, as we show in the next section, thermal light emerging from a single-mode fiber differs somewhat from a true black body, and is more conveniently characterized instead by a power spectral density \(S(\omega)\) (power per unit angular frequency of a single transverse spatial mode). If the imaging system is capable of focusing the fiber mode onto a (potentially frequency-dependent) effective mode area \(A(\omega)\), the spectral energy density at the atom is given by
\[\rho(\omega)=\frac{S(\omega)}{cA(\omega)}, \tag{6}\]
which can be used with Eq. (4) to calculate the excitation rate.
Given an instantaneous excitation rate, \(\Gamma p_{\rm D}\), from state D to state P (see Fig. 1) for an atom with probability \(p_{\rm D}\) of being in state D, the rate at which this results in a spontaneous emission back to the ground state, which completes the step of removing one quantum of motion on average, will be
\[\dot{n}_{\rm motion}=-\Gamma\,p_{\rm D}\,\eta_{\rm SP} \tag{7}\]
where \(\eta_{\rm SP}\equiv A_{\rm PS}/(A_{\rm PS}+A_{\rm PD})\) is the branching fraction of spontaneous emission from P to go back to S.
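The chain of relations in Eqs. (4), (6), and (7) can be collected into a short sketch; the function names are ours, and numerical inputs are deferred to Sec. V:

```python
# Code sketch of Eqs. (4), (6), and (7): the excitation rate of a transition
# illuminated with spectral energy density rho, and the resulting average
# phonon removal rate. Symbols mirror the text.
import numpy as np

hbar, c = 1.054571817e-34, 2.99792458e8

def rho_from_psd(S, A):
    """Spectral energy density at the focus, Eq. (6)."""
    return S / (c * A)

def excitation_rate(A_eg, omega_eg, g_e, g_g, rho):
    """Einstein rate-equation excitation rate, Eq. (4)."""
    return (np.pi**2 * c**3 / (hbar * omega_eg**3)) * (g_e / g_g) * A_eg * rho

def phonon_rate(Gamma, p_D, A_PS, A_PD):
    """Average motional cooling rate, Eq. (7), with branching eta_SP."""
    eta_SP = A_PS / (A_PS + A_PD)
    return -Gamma * p_D * eta_SP
```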
## III Thermal light in a single-mode fiber
The statistics of thermal light confined to quasi-one-dimension (q1D [16]) has been discussed in various contexts, including Johnson-Nyquist noise [17; 18], photonics [19; 20], photovoltaic energy conversion [21], and extra spatial dimensions [22; 23]. We do not, therefore, present a new theoretical result by deriving the power spectral density of black-body radiation embedded in q1D. Here, we present an optics-oriented derivation to illustrate the origin of the spectrum we will use to compare to experimental observations, and a brief discussion of how to reconcile the modified spectrum in the fiber with Planck's law.
We can consider a single-mode optical fiber (which is to say, some waveguide that only supports one transverse mode of the electromagnetic field at each frequency \(\omega\)) of length \(L\) (later, we will take \(L\rightarrow\infty\)) with periodic boundary conditions and light allowed to propagate in only one of the two possible directions. Assuming, for simplicity, that the effective index of refraction in the fiber is \(n=1\), the allowed frequencies will be \(\omega_{i}=i\times 2\pi c/L\), and the density of states per polarization will therefore be \(\mathrm{d}i/\mathrm{d}\omega=L/(2\pi c)\).
The average rate of photons in mode \(i\) passing through a fixed reference plane in the fiber will be \(\langle n_{i}\rangle c/L\), so the time-averaged power from mode \(i\) is \(P_{i}=\hbar\omega_{i}\langle n_{i}\rangle c/L\), where \(\langle n_{i}\rangle\) is the average number of photons in mode \(i\). Summing over the two available polarizations, using \(\langle n_{i}\rangle=\left(\exp\left(\beta\hbar\omega_{i}\right)-1\right)^{-1}\) for the expected thermal population for a mode with splitting \(\hbar\omega_{i}\) and temperature \(T=1/(k_{\mathrm{B}}\beta)\), and taking the \(L\rightarrow\infty\) limit, we have the total, time-averaged power
\[P_{\mathrm{total}}=\int_{0}^{\infty}\mathrm{d}\omega\frac{\frac{\hbar\omega}{ \pi}}{\exp\left(\beta\hbar\omega\right)-1}=\frac{\pi}{6\hbar}\frac{1}{\beta^{ 2}}. \tag{8}\]
From the integrand, we identify the power spectral density for thermal light in a single-mode fiber,
\[S(\omega)=\frac{\frac{\hbar\omega}{\pi}}{\exp\left(\beta\hbar\omega\right)-1}. \tag{9}\]
This expression was used by Nyquist in 1928 to explain thermal noise in electrical circuits [17], but it is also the spectrum of power for thermal light coupled into a single-mode fiber.
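As a sanity check on Eqs. (8) and (9), substituting \(x=\beta\hbar\omega\) reduces the integral to \(\int_{0}^{\infty}x/(e^{x}-1)\,\mathrm{d}x=\pi^{2}/6\), which the following sketch verifies numerically (assuming scipy is available):

```python
# Verify Eq. (8): total q1D thermal power equals pi*(kB*T)^2/(6*hbar).
import numpy as np
from scipy.integrate import quad

hbar, kB = 1.054571817e-34, 1.380649e-23
T_sun = 5800.0

val, _ = quad(lambda x: x / np.expm1(x), 1e-12, 50)   # ~ pi^2/6
P_numeric = (kB * T_sun)**2 / (np.pi * hbar) * val
P_analytic = np.pi * (kB * T_sun)**2 / (6 * hbar)

print(f"numeric:  {P_numeric*1e6:.1f} uW")   # ~32 uW in a single spatial mode
print(f"analytic: {P_analytic*1e6:.1f} uW")
```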
The spectrum of \(S(\omega)\) (\(\propto\omega\langle n\rangle\)) differs in shape from the spectral radiance given by Planck (\(B_{\mathrm{P}}(\omega)\propto\omega^{3}\langle n\rangle\)), and the total power is proportional to \(T^{2}\), as opposed to the more-familiar \(T^{4}\) of the Stefan-Boltzmann law in three dimensions. Thermodynamics, however, requires that the two ends of the fiber, if brought into optical contact with two isolated black bodies, will allow them to equilibrate through the fiber. How this is possible if the spectrum in the fiber has a different shape and peak position than the three-dimensional case can be resolved as follows. We consider two extremes for the transverse mode confinement in the fiber: (i) the _divergence angle_ of light from the fiber end is independent of \(\omega\), which is approximately true for a total-internal-reflection interpretation of step-index fiber; and (ii) the _mode area_ of the fiber is independent of \(\omega\), which is approximately true for a photonic crystal fiber. Cases intermediate between these two extremes are handled in the same way, as we now show.
For case (i), optical considerations dictate that the effective mode area, \(A(\omega)\), must be frequency-dependent to maintain a frequency-independent solid angle \(\Omega\). For case (ii), \(\Omega(\omega)\) must be frequency-dependent to ensure that \(A\) is independent of frequency. For cases between these two extremes, the area of a diffraction-limited mode and its solid angle are related by a well-known phase-space-volume theorem in antenna theory, namely that their product must be equal to the square of the wavelength:
\[A(\omega)\Omega(\omega)=\lambda^{2}=\left(\frac{2\pi c}{\omega}\right)^{2}. \tag{10}\]
As pointed out by Dicke in 1946 in the context of thermal noise in microwave systems [24], this connects the 1D power spectral density to the radiance,
\[B(\omega)=\frac{S(\omega)}{A(\omega)\Omega(\omega)}. \tag{11}\]
Since thermodynamics requires that this be equal to Eq. (5) in thermal equilibrium, this argument highlights that Eq. (10) is a basic consequence of Planck's law. The assignment of spectral radiance for single spatial modes is discussed in Appendix C.
Earlier, we argued that the spectral energy density (\(\rho(\omega)\)) at the center of the focus of an optical system imaging a single mode of thermal radiation onto a spot size \(A(\omega)\) was given by Eq. (6). Since the spectral radiance of that light will have the same frequency dependence (spectrum) as Planck's law (5) but lower power, the thermal light can be called gray-body radiation, and we can use the ratio of the energy spectral density to that inside an ideal black body to define an efficiency (or geometric grayness) factor for the thermal light delivery system,
\[G\equiv\frac{\rho(\omega)}{\rho_{\mathrm{P}}(\omega)} \tag{12}\]
where
\[\rho_{\mathrm{P}}(\omega)=\frac{\omega^{2}}{\pi c^{3}}S(\omega)=\frac{\frac{ \hbar\omega^{3}}{\pi^{2}c^{3}}}{\exp\left(\beta\hbar\omega\right)-1} \tag{13}\]
is the spectral energy density inside an ideal black body.
Combining (6) with (10) and (13) allows us to write the geometric grayness as
\[G=\frac{\lambda^{2}/4\pi}{A(\omega)}=\frac{\Omega(\omega)}{4\pi}, \tag{14}\]
where \(\Omega(\omega)\) is the solid angle of the mode of the imaging system. In the limit that the mode solid angle covers all of the available solid angle, we recover the ideal black-body energy density and \(G\to 1\).
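For a concrete number, the sketch below evaluates Eq. (14) for a focused gaussian mode; the 614 nm wavelength and 20 µm waist anticipate the estimate in Sec. V, and the effective mode area \(A=\pi w_{0}^{2}/2\) is our assumed gaussian-mode convention:

```python
# Geometric grayness of a focused single mode, Eq. (14).
import numpy as np

lam = 614e-9           # wavelength (m), anticipating the Ba+ example of Sec. V
w0 = 20e-6             # focused gaussian waist (m)
A = np.pi * w0**2 / 2  # effective mode area (assumed gaussian-mode convention)

G = (lam**2 / (4 * np.pi)) / A
print(f"G = {G:.1e}")  # ~5e-5, consistent with the value quoted in Sec. V
```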
## IV Measured power spectrum of fiber-coupled sunlight
To observe the predicted spectrum and power spectral density of Eq. (9) and benchmark the optical power that can be coupled onto a trapped ion, three fibers with distinct guiding regimes were employed: a single-mode step-index fiber, a single-mode photonic crystal fiber, and a step-index multi-mode fiber. The first two were discussed above, and the multi-mode fiber was used to compare to the 3D spectrum.
For each fiber, sunlight was coupled in using a roof-mounted, home-built sun tracker and a commercial, aspheric fiber collimator lens as the collection optic. To keep sunlight maximally coupled, the collimator lens is much larger than the minimum diameter necessary to resolve the sun from a point source (which would be a diameter of \(D_{\mathrm{min}}\approx 100\,\upmu\mathrm{m}\)), allowing for steady coupling efficiency even with pointing instability. This system is able to maintain maximal coupling for many hours.
To estimate the effect of atmospheric absorption and scattering, as well as non-ideal emission, we use a standard reference spectrum [25] for sunlight on the surface of the earth for the case of a collecting lens oriented toward the sun. We use the ratio of the ideal 3D Planck spectrum to the standard spectrum to create the expected standard spectrum in q1D. This spectrum is an average correction for the sun at a specific elevation in specific atmospheric conditions and does not apply perfectly to our conditions at each measurement; the true correction varies somewhat depending on the elevation of the sun, weather conditions, and pollution levels.
To measure the power spectrum of the fiber-coupled light, we measured the output with a fiber-coupled spectrometer [26]. We correct the measured output with the response function provided by the manufacturer. For the single mode fibers, we also correct for the wavelength dependence of light entering the spectrometer through a slit using the mode properties in the fiber specifications. This correction is done by treating the fiber mode as a gaussian beam between the fiber tip and the slit, then cutting off parts of the beam that are blocked by the slit.
For the multi-mode fiber, Figure 2 shows that we observe a frequency dependence similar to the standard 3D Planck spectrum, Eq. (13), since the fiber-coupled light can occupy many transverse modes. The vertical scale in this case is arbitrary, and we have roughly matched the height of the measured and predicted spectra to allow comparison of their shapes.
For the two single-mode fibers, the measured spectra are shown in black in figures 3 and 4. The shapes of these spectra are clearly modified from Planck's law in 3D (compare to Fig. 2). The predicted spectrum of Eq. (9) is shown in dark blue, along with a prediction that takes into account the empirical solar spectrum at the Earth's surface (shaded blue).
To calibrate the vertical scale, a short-pass filter is inserted in front of the input collimator to remove light with wavelengths longer than \(\lambda=900\,\mathrm{nm}\), and the power delivered by the fiber is measured with a calibrated photodiode power meter. By matching this power to a numerical integration of the measured spectrum, we obtain the power spectral density.
By comparing this to Eq. (9), we obtain the delivery efficiency, \(\eta(\omega)\), as the ratio of our measured power spectral density to the ideal q1D spectrum at \(T_{\odot}\). In the visible and near-infrared, we find efficiencies of \(\eta=0.6-0.9\) under good seeing conditions with both types of single-mode fiber.
## V Estimate of cooling rate from measured fiber output
We can now estimate the achievable cooling rate using fiber-coupled sunlight for our experimental parameters as follows. We consider a demonstration with \(\mathrm{Ba}^{+}\), for which the cooling light (\(\omega_{2}\) in figure 1) is near a wavelength of \(614\,\mathrm{nm}\).
Loss in \(200\,\mathrm{m}\) of optical fiber from the roof to the lab is measured to be \(35\%-40\%\) at this wavelength, giving an overall delivery efficiency of \(\eta\approx 0.35-0.60\). For estimates of cooling rates, we will use \(\eta\approx 0.50\), which we have measured in the lab.
Figure 2: Measured spectrum of sunlight coupled into a step-index, multi-mode optical fiber (black). The green trace shows the theoretical frequency dependence of light emitted by an ideal 3D black body, normalized to a peak height near the measured value to allow comparison. The shaded green-line trace shows the theoretical spectrum with the atmospheric correction.
The imaging system is capable of focusing the fiber output to a spot size of \(w_{0}=20\,\upmu\mathrm{m}\), which corresponds to a geometric grayness of \(G=5\times 10^{-5}\). The power spectral density in this mode can be related to Eq. (9) via
\[S_{\mathrm{exp}}(\omega_{2})=\eta\,G\,S(\omega_{2})\approx 2.5\times 10^{-5}\,S( \omega_{2}). \tag{15}\]
Combining this power spectral density with the atomic parameters for Ba\({}^{+}\) yields an expected cooling rate of
\[\dot{n}_{\mathrm{motion}}=-8.2\ \mathrm{phonon/s} \tag{16}\]
for initial motional states well above the ground state. Heating rates comparable in magnitude to (or smaller than) this cooling rate estimate (\(\dot{n}_{\mathrm{motion}}<10\,\mathrm{phonon/s}\)) have been measured for ions in Paul traps [27; 28; 29; 30; 31; 32], and lower rates (\(\dot{n}_{\mathrm{motion}}<1\,\mathrm{phonon/s}\)) have been measured in Penning traps [33; 34]. It therefore appears feasible that fiber-coupled sunlight may be capable of cooling a trapped ion to its ground state of motion.
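The arithmetic behind this estimate can be reproduced with the sketch below, which combines the thermal occupation at \(T_{\odot}\) with Eq. (15). The Ba\({}^{+}\) atomic inputs (the \(\mathrm{D}_{5/2}\leftrightarrow\mathrm{P}_{3/2}\) decay rate, degeneracies, and \(\mathrm{P}\rightarrow\mathrm{S}\) branching) are approximate literature values inserted for illustration, not numbers quoted in this paper:

```python
# Rough reconstruction of the Sec. V estimate. Using Eq. (4) with a thermal
# occupation n_bar(omega_2) at T_sun, the excitation rate reduces to
# Gamma = eta * G * (g_e/g_g) * A_eg * n_bar.
import numpy as np

hbar, kB, c = 1.054571817e-34, 1.380649e-23, 2.99792458e8
T_sun = 5800.0
omega2 = 2 * np.pi * c / 614e-9                      # D5/2 <-> P3/2 at 614 nm

n_bar = 1 / np.expm1(hbar * omega2 / (kB * T_sun))   # thermal occupation ~0.018

eta, G = 0.50, 5e-5           # delivery efficiency and geometric grayness (Sec. V)
A_eg, g_e, g_g = 3.7e7, 4, 6  # assumed decay rate (1/s) and level degeneracies
eta_SP, p_D = 0.74, 1.0       # assumed P3/2 -> S branching; ion parked in D

Gamma = eta * G * (g_e / g_g) * A_eg * n_bar
print(f"n_dot ~ {-Gamma * p_D * eta_SP:.1f} phonon/s")  # ~ -8 phonon/s
```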
###### Acknowledgements.
The authors acknowledge Kristian Barajas for discussions.
|
2306.06108 | Demystifying Fraudulent Transactions and Illicit Nodes in the Bitcoin
Network for Financial Forensics | Blockchain provides the unique and accountable channel for financial
forensics by mining its open and immutable transaction data. A recent surge has
been witnessed by training machine learning models with cryptocurrency
transaction data for anomaly detection, such as money laundering and other
fraudulent activities. This paper presents a holistic applied data science
approach to fraud detection in the Bitcoin network with two original
contributions. First, we contribute the Elliptic++ dataset, which extends the
Elliptic transaction dataset to include over 822k Bitcoin wallet addresses
(nodes), each with 56 features, and 1.27M temporal interactions. This enables
both the detection of fraudulent transactions and the detection of illicit
addresses (actors) in the Bitcoin network by leveraging four types of graph
data: (i) the transaction-to-transaction graph, representing the money flow in
the Bitcoin network, (ii) the address-to-address interaction graph, capturing
the types of transaction flows between Bitcoin addresses, (iii) the
address-transaction graph, representing the bi-directional money flow between
addresses and transactions (BTC flow from input address to one or more
transactions and BTC flow from a transaction to one or more output addresses),
and (iv) the user entity graph, capturing clusters of Bitcoin addresses
representing unique Bitcoin users. Second, we perform fraud detection tasks on
all four graphs by using diverse machine learning algorithms. We show that
adding enhanced features from the address-to-address and the
address-transaction graphs not only assists in effectively detecting both
illicit transactions and illicit addresses, but also assists in gaining
in-depth understanding of the root cause of money laundering vulnerabilities in
cryptocurrency transactions and the strategies for fraud detection and
prevention. Released at github.com/git-disl/EllipticPlusPlus. | Youssef Elmougy, Ling Liu | 2023-05-25T18:36:54Z | http://arxiv.org/abs/2306.06108v1 | # Demystifying Fraudulent Transactions and Illicit Nodes
###### Abstract.
Blockchain provides the unique and accountable channel for financial forensics by mining its open and immutable transaction data. A recent surge has been witnessed by training machine learning models with cryptocurrency transaction data for anomaly detection, such as money laundering and other fraudulent activities. This paper presents a holistic applied data science approach to fraud detection in the Bitcoin network with two original contributions. First, we contribute the _Elliptic++ dataset_, which extends the Elliptic transaction dataset to include over 822k Bitcoin wallet addresses (nodes), each with 56 features, and \(1.27M\) temporal interactions. This enables both the detection of fraudulent transactions and the detection of illicit addresses (actors) in the Bitcoin network by leveraging four types of graph data: (i) the transaction-to-transaction graph, representing the money flow in the Bitcoin network, (ii) the address-to-address interaction graph, capturing the types of transaction flows between Bitcoin addresses, (iii) the address-transaction graph, representing the bi-directional money flow between addresses and transactions (BTC flow from input address to one or more transactions and BTC flow from a transaction to one or more output addresses), and (iv) the user entity graph, capturing clusters of Bitcoin addresses representing unique Bitcoin users. Second, we perform fraud detection tasks on all four graphs by using diverse machine learning algorithms. We show that adding enhanced features from the address-to-address and the address-transaction graphs not only assists in effectively detecting both illicit transactions and illicit addresses, but also assists in gaining in-depth understanding of the root cause of money laundering vulnerabilities in cryptocurrency transactions and the strategies for fraud detection and prevention. The Elliptic++ dataset is released at [https://github.com/git-disl/EllipticPlusPlus](https://github.com/git-disl/EllipticPlusPlus).
Blockchain, Anomaly Detection, Financial Forensics
17 additional features, then we crawl the Blockchain to create the Elliptic++ dataset, consisting of the _feature-enhanced transactions dataset_ and the _actors (wallet addresses) dataset_. The actors dataset includes over \(822k\) labelled wallet addresses, each described with \(56\) features, and over \(1.27M\) temporal occurrences (interactions) across the same time steps (as those recorded in the Elliptic dataset). With our Elliptic++ dataset, one can perform anomaly detection of fraudulent activities, such as illicit transactions and fraudulent actors. We also include four types of graphs: the _Money Flow Transaction Graph_, the _Actor Interaction Graph_, the _Address-Transaction Graph_, and the _User Entity Graph_. These graphs contribute to both mining and visualization of the connections of transactions and the interactions among wallet addresses through their transactions.
The second contribution of the paper is performing fraud detection using the Elliptic++ dataset and the four graph representations by combining diverse ML algorithms and feature optimizations. We observe that Random Forest (RF) with feature refinement offers the best-performing model: on the transactions dataset, it achieves \(98.6\%\) precision and \(72.7\%\) recall compared to \(97.5\%\) precision and \(71.9\%\) recall when using RF without feature refinement; on the actors dataset, RF with feature refinement achieves \(92.1\%\) precision and \(80.2\%\) recall, compared to \(91.1\%\) precision and \(78.9\%\) recall when using RF without feature refinement. Furthermore, the fraud detection using the Elliptic++ dataset allows for in-depth understanding of the root cause of fraudulent activities in cryptocurrency transactions through semantic and statistical explainability, shining light on the strategies for fraud detection and prevention.
## 2. Related Work
Since the introduction of the Elliptic dataset in 2019 by Weber et al. (2019), there have been numerous efforts on data labelling and anomaly detection using ML models. Lorenz et al. (2019) assumed minimal access to labels and proposed an active learning solution by leveraging a smaller subset of the available labels. Oliveira et al. (2019) proposed GuiltyWalker, a detection method that computes new features based on the structure of a transaction-to-transaction graph and the distance to known illicit transactions. Alarab et al. (2019) presented a comparative analysis of the Elliptic dataset using different supervised learning methods. Lo et al. (2019) proposed Inspection-L, a graph neural network framework for anomaly detection, and similarly, Alarab et al. (2019) proposed a graph-based LSTM with a graph convolutional network to detect illicit transactions.
More generally, there is increasing interest in AML in the context of financial transactions and cryptocurrency networks, such as Bitcoin or Ethereum. Combinatorial optimization methods for identifying transactions from graphs, including statistical deviation and dense subgraph discovery methods, have been explored in (Kang et al., 2018; Li et al., 2019). The lack of labelled data and the imbalanced data are two major challenges for fraud detection when using supervised learning (Kang et al., 2018; Li et al., 2019; Li et al., 2019) or unsupervised learning (Kang et al., 2018; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019). Some efforts have also focused on de-anonymizing mixed transactions by Bitcoin-dedicated graph-based approaches (Li et al., 2019; Li et al., 2019; Li et al., 2019). Regarding Bitcoin account profiling, Michalski et al. (2019) built a small dataset of \(9,000\) addresses and applied supervised learning to characterize nodes in the Blockchain as miner or exchange, indicating the need of countermeasures for preserving the desired level of anonymity. To the best of our knowledge, our Elliptic++ dataset is the largest public Bitcoin account dataset with \(822,942\) addresses.
Existing works on Ethereum account profiling and phishing account detection exclusively focus on graph representation learning methods (Zhou et al., 2017; Li et al., 2019). For Ethereum account profiling, (Zhou et al., 2017) and (Zhou et al., 2017) applied either a Graph Convolution Network (GCN) (Li et al., 2019) or a hierarchical graph attention encoder (HGATE) to infer the identity of Ethereum accounts by learning latent features using node-embeddings and subgraph-embeddings. TTAGNN (Li et al., 2019) generates node embeddings by combining an LSTM encoder and a Graph Attention Network (GAT) (Zhou et al., 2017). For detecting phishing accounts, Trans2Vec (Zhou et al., 2017) and its variants (Li et al., 2019; Li et al., 2019) utilize a graph-based random walk algorithm on transaction time and amount information. The cascade method (Michalski et al., 2019) leverages statistical feature extraction and a lightGBM-based ensemble algorithm. It is worth noting that, given the difference in fundamental settings (UTXO and account models), it is not straightforward to apply the existing graph representation learning models developed for profiling Ethereum accounts to profiling Bitcoin accounts on the Bitcoin network.
Compared to the literature, this paper presents novel contributions from two perspectives. First, it describes and makes publicly available a labelled dataset with Bitcoin blockchain transactions and wallet addresses. The main benefit of this dataset is that it allows more accurate fraud detection models and algorithms to be developed. Second, it uses the dataset to showcase the detection of illicit blockchain transactions, illicit actors, and the risks of de-anonymizing users based on address clustering. For this, it utilizes a number of baseline machine learning models.
## 3. The Elliptic++ Dataset
The Elliptic++ dataset consists of \(203k\) Bitcoin transactions and \(822k\) wallet addresses. It leverages elements from the Elliptic dataset1(Zhou et al., 2017), a published dataset deanonymizing \(99.5\%\) of Elliptic transaction data2, and the Bitcoin addresses dataset obtained by using our Bitcoin Blockchain3 scraping pipeline. A detailed description of the data collection pipeline is included in the Appendix.
Footnote 1: www.kaggle.com/datasets/ellipticco/elliptic-data-set
Footnote 2: www.kaggle.com/datasets/alexbenzik/deanonymized-995-pct-of-elliptic-transactions
Footnote 3: www.blockchain.com
### Transactions Dataset
The _transactions dataset_ consists of a time-series graph with \(49\) distinct time steps, \(203,769\) transactions (nodes), and \(234,355\) directed edges representing the payment flows. Each transaction node is labelled as illicit, licit, or unknown; with \(2\%\) (\(4,545\)) labelled as class-1 (illicit), \(21\%\) (\(42,019\)) as class-2 (licit), and the remaining transactions are unknown with regard to licit/illicit, hence labelled as class-3. Three csv files are used, as shown in Table 1. Each transaction node has an entry in txs_features.csv, with numerical data for \(183\) transaction features, and an entry in txs_classes.csv, representing its class label (1: _illicit_, 2: _licit_, 3: _unknown_). Each edge has an entry in txs_edgelist.csv (indexed by two transaction IDs), representing money flow from one transaction to another. Among the \(183\) node features, \(166\) features are inherited from the Elliptic dataset, i.e., the time step, \(93\)_local features_, representing local information about the transaction, and \(72\)_aggregate features_, obtained by
aggregating transaction information one-hop forward/backward. The remaining 17 node features are gathered by our Elliptic++ data collection pipeline (with the exception of the 0.5% of transactions that were not deanonymized) as _augmented features_ and are shown in Table 2. Figure 1 shows the distribution of transactions across the three classes in each of the 49 time steps.
### Actors (Wallet Addresses) Dataset
The _actors_ (_wallet addresses_) _dataset_ is a graph network of \(822,942\) wallet addresses, each with \(56\) features as shown in Table 3. Five csv files are used, as shown in Table 4.
Table 1. Data structure for an example transaction node in the transactions dataset: (1) _features_, row of all 183 feature values for txId, (2) _edgelist_, all incoming and outgoing edges involving txId, and (3) _classes_, the class label for txId.
\begin{table}
\begin{tabular}{l l} \hline \hline Feature & Description \\ \hline \(BTC_{in}\) & Total BTC incoming \\ \(BTC_{out}\) & Total BTC outgoing \\ \(\cdots\) & \(\cdots\) \\ \hline \hline \end{tabular}
\end{table}
Table 2. Transactions dataset augmented features.
Figure 1. Number of transactions by time step.
Table 3. Actors (wallet addresses) dataset features.
Each address has an entry in wallets_features.csv, with numerical data for the time step and \(56\) address features listed in Table 3 for each transaction it was involved in, and an entry in wallets_classes.csv, representing its class label (1: _illicit_, 2: _licit_, 3: _unknown_). A wallet address is labelled _illicit_ if it has at least \(1\) edge with an illicit transaction; otherwise it is labelled _licit_ if the ratio of its total unknown transactions (edges) to its total licit transactions (edges) is \(>3.7\), or _unknown_ if \(\leq 3.7\). The ratio \(3.7\) is calculated using the total ratio of unknown transactions to licit transactions in the transactions dataset. We also create AddrAddr_edgelist.csv to record the pairwise interactions of input and output addresses through Bitcoin transactions. Each entry represents the input address and output address relationship of one transaction. If there are multiple transactions between a pair of addresses, then there are multiple entries in this table. Additionally, we create AddrTx_edgelist.csv, where each entry represents the relationship between an input address and a transaction, and TxAddr_edgelist.csv, with each entry representing a directed connection between a transaction and an output address. With these data structures, the Elliptic++ dataset can be used to build the address-to-address graph (Section 3.3.2), and the address-transaction graph (Section 3.3.3), in addition to the transaction-to-transaction money flow graph (Section 3.3.1).
The distribution of address classes is \(2\%\) (\(14,266\)) class-1 (illicit), \(31\%\) (\(251,088\)) class-2 (licit), and the remaining are unknown (class-3). When populated using temporal information provided by the time steps in the transactions dataset, we obtain \(1,268,260\) wallet address occurrences across all time steps, including \(2\%\) (\(28,601\)) illicit addresses, \(27\%\) (\(338,871\)) licit addresses, and \(71\%\) (\(900,788\)) unknown addresses, as shown in Figure 2.
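To make the labelling rule above concrete, the following is a minimal sketch in which each address is summarized by the list of class labels of its incident transactions; the function name and the handling of addresses with no licit edges are our own conventions:

```python
def label_address(tx_classes, ratio_threshold=3.7):
    """Label one address from the classes (1/2/3) of its incident
    transactions, following the rule stated above."""
    if 1 in tx_classes:                          # touches an illicit transaction
        return 1                                 # illicit
    n_unknown = tx_classes.count(3)
    n_licit = tx_classes.count(2)
    # our convention for addresses with no licit edges: treat the ratio as infinite
    ratio = n_unknown / n_licit if n_licit else float("inf")
    return 2 if ratio > ratio_threshold else 3   # licit vs unknown

print(label_address([2, 3, 3, 3, 3]))  # -> 2 (unknown/licit = 4 > 3.7)
```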
### Graph Visualization
In this section, we provide a detailed explanation with visualization of the Elliptic++ dataset using four types of graphs: Money Flow Transaction Graph, Actor Interaction Graph, Address-Transaction Graph, and User Entity Graph. Each graph has its own unique importance.
* The _Money Flow Transaction Graph_ shows BTC flow from one transaction to the next, allowing exploration of the spatial and temporal patterns surrounding a given transaction.
* The _Actor Interaction Graph_ shows the pairwise interactions among input and output addresses of transactions, showing the density of the k-hop neighborhoods of wallet addresses.
* The _Address-Transaction Graph_ is a heterogeneous graph showing the flow of BTC across transactions and addresses, allowing an evaluation of the purposes of a transaction and the relationships among addresses of the same transaction.
* The _User Entity Graph_ is an address cluster graph that allows for potential linking of addresses controlled by a specific user, which further de-anonymizes their identity and purpose.
#### 3.3.1. Money Flow Transaction Graph
This is a directed graph where nodes represent transactions and edges represent directed BTC flows from one transaction to the next. Figure 3 visualizes the distribution of all transactions in time step 32. We choose three transactions (one per class) representing the unknown (yellow), illicit (blue), and licit (red) classes, and display sub-graphs of their four-hop neighbourhoods as shown in the left, middle, and right of Figure 3. All neighbour nodes are shown, though for visual clarity, some edges are abbreviated with dotted arrows signifying that a particular node has two (or more) outgoing edges of the same pattern as the graph is traversed further. This displays the potential utility of exploring the spatial and temporal patterns surrounding a given transaction.
It is important to note that some time steps produce sparse transaction graphs, e.g. time step 14, while others produce dense transaction graphs, e.g. time step 32. We provide a comparison in the Appendix (see Figure 14). This difference in graph density drastically affects the node neighborhood patterns, with some time steps creating shallow, wide money flow transaction graphs and
Table 4. Data structure for an example address in the actors dataset: (1) _features_, row of \(56\) feature values for address, (2) _classes_, the class label for address, (3) _AddrAddr_, all edges involving address in the actor interaction graph, (4) _AddrTx_, all edges involving address as the input address in the address-transaction graph, and similarly (5) _TxAddr_, as the output address.
Figure 2. Number of wallet addresses by time step.
other time steps creating deep, narrow money flow transaction graphs. Moreover, although there are only \(2\%\) illicit transactions within the dataset, they are evidently spread out across each time step, and hence an ML model trained for anomaly detection over the earlier sequence of time steps can be leveraged to perform forensics on the later part of the sequence of time steps.
#### 3.3.2. Actor Interaction Graph
This is a directed graph where nodes represent addresses and edges represent pairwise interactions among input and output addresses of transactions. Figure 4 visualizes the distribution of all addresses in time step 32. Unlike Figure 3, where transaction interactions are captured in terms of money flows, Figure 4 displays the interactions at the address level as opposed to the transaction level. The grey area is due to the density of edges. Here, the density of the address graph at a given time step impacts the density of the \(k\)-hop neighborhood of a wallet address node. We provide a comparison of actor interaction graphs at time steps 14 and 32 in the Appendix (see Figure 15).
#### 3.3.3. Address-Transaction Graph
This is a heterogeneous directed graph supported by our Elliptic++ dataset. It has two types of nodes, the transaction node (circle) and the address node (square), and two types of edges, the sender-to-transaction edges, each of which represents the BTC flow from an input address (sender) to a transaction, and the transaction-to-receiver edges, each of which represents the BTC flow from a transaction to an output address. Figure 5 shows an address-transaction graph of selected transactions and addresses anchored at a given actor (i.e., the Illicit Actor 13) in time step 32. The flow and quantity of input and output addresses provide information regarding the purposes of a transaction and the relationships among addresses connected by the same transaction. We provide a visual comparison of the distributions of all transactions and addresses at both time steps in the Appendix (see Figure 16).
#### 3.3.4. User Entity Graph
This graph is generated by address clustering analysis rather than direct construction using the Elliptic++ dataset. Clustering addresses involves taking the set of all addresses \(\mathcal{A}\) as training data to build a clustering model which creates a set of disjoint subsets \(\mathcal{U}=\{U_{1},U_{2},...,U_{n}\}\) (where \(U_{1},U_{2},...,U_{n}\) represent the \(n\) clusters of addresses in \(\mathcal{A}\)) such that \(\bigcup_{i=1}^{n}U_{i}=\mathcal{A}\). By treating each cluster (sub-graph) as one user, we construct a user entity graph of \(n\) nodes, where each node represents a user in the Bitcoin network and the edges represent interactions between unique users through Blockchain transactions.
Figure 4. Distribution of wallet addresses in time step 32.
Figure 5. An address-transaction graph for selected nodes in time step 32 featuring Illicit Actor \(13\) at the root.
Figure 3. Money flow transaction graphs for selected unknown (left), illicit (middle), and licit (right) transactions in time step \(32\) (top shows distribution of TS \(32\) transactions).
Figure 6. A user entity graph created by grouping transactions with \(\geq 1\) common element in each set into unique users and adding edges across users if a path existed between groups of transactions in the original graph.
Address clustering is performed in four steps. First, for each transaction, the set of input addresses associated with the transaction is collected. Second, transactions whose address sets overlap in one or more addresses are grouped into a unique user. Third, the address-transaction graph is searched to highlight all transactions corresponding to each user. Finally, the highlighted address-transaction graph is converted into a user entity graph. Figure 6 shows the resulting user graph from the same subset of addresses and transactions in Figure 5. Refer to Section A.4 in the Appendix (Figure 17) for a detailed explanation and visualization of the clustering process. Bitcoin address clustering is a hot research topic (Bahdan et al., 2017; Wang et al., 2018; Wang et al., 2019) due to its potential for linking addresses controlled by a specific user, effectively deanonymizing the identity of those users.
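Steps one and two of this procedure amount to the classic common-input-address heuristic, which can be sketched with a small union-find structure; the container names (e.g., tx_inputs) are ours:

```python
# Minimal sketch of address clustering: all input addresses of one
# transaction are merged into one user entity via union-find.
class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def cluster_users(tx_inputs):
    """tx_inputs: dict mapping txId -> set of input addresses."""
    uf = UnionFind()
    for addrs in tx_inputs.values():
        addrs = list(addrs)
        for a in addrs[1:]:          # inputs of one tx belong to one user
            uf.union(addrs[0], a)
    users = {}
    for addrs in tx_inputs.values():
        for a in addrs:
            users.setdefault(uf.find(a), set()).add(a)
    return list(users.values())      # disjoint address clusters (users)

# Example: two transactions sharing address "B" collapse into one user.
print(cluster_users({"tx1": {"A", "B"}, "tx2": {"B", "C"}, "tx3": {"D"}}))
```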
## 4. Fraud Detection Methodology
### Dataset Preprocessing
We used an effective 70/30 train-test split with respect to time steps for both the transactions and actors datasets, with time steps 1 to 34 for training and time steps 35 to 49 for testing. Figure 7 graphically shows the distribution of data points (top for transactions, bottom for actors) of all three classes by time step for both the training and testing sets. We provide a detailed distribution of the number of transactions and addresses for all three classes in each of the 49 steps in the Appendix (Tables 12 and 13 respectively). Due to the underlying class imbalance across illicit and licit classes, normalization and standardization transformations are applied. The augmented features in the transactions dataset and all features in the actors dataset are transformed by scaling each feature using the MinMaxScaler to the range \((0,1)\), reducing imbalance and assisting with model convergence (Krishnan et al., 2017).
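A minimal sketch of this preprocessing, assuming the dataset is loaded as a pandas DataFrame with a time-step column and feature columns (the toy data below stands in for the real csv files):

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# toy stand-in for the real features csv files
df = pd.DataFrame({"time_step": [1, 20, 34, 35, 49],
                   "f1": [0.1, 5.0, 2.2, 7.3, 1.1],
                   "class": [2, 1, 3, 2, 1]})
feat_cols = ["f1"]

train = df[df["time_step"] <= 34]     # 70%: time steps 1-34
test = df[df["time_step"] >= 35]      # 30%: time steps 35-49

scaler = MinMaxScaler(feature_range=(0, 1)).fit(train[feat_cols])
X_train = scaler.transform(train[feat_cols])
X_test = scaler.transform(test[feat_cols])   # test scaled with train statistics
```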
### Machine Learning Models
Previous studies (Krishnan et al., 2017; Wang et al., 2018) indicate that ensemble methods perform better than graph neural networks, hence the ML models used for evaluation included Random Forest (RF) (Krishnan et al., 2017), Multilayer Perceptrons (MLP) (Krishnan et al., 2017), Long Short-Term Memory (LSTM) (Krishnan et al., 2017), and Extreme Gradient Boosting (XGB) (Krishnan et al., 2017). We also include Logistic Regression (LR) (Krishnan et al., 2017) as the baseline. LR is a single layer neural network which estimates the probability of an event, such as licit or illicit, based on independent variables. RF is an ensemble of classification trees trained on samples with random subsets of features for each decision, evaluating a final decision from averaging all decision trees. MLP is an artificial neural network with at least three layers where data features are fed into input neurons that assign probability vectors for classes as outputs. The Scikit-learn python library was used for LR (default parameters with 1000 max iterations), RF (default parameters with 50 estimators), and MLP (parameters: 1 hidden layer with 50 neurons, 500 epochs, Adam optimizer, 0.001 learning rate). LSTM is a recurrent neural network that has feedback connections and is capable of learning long-term dependencies (sequences of data as compared to single data points). The TensorFlow python library was used for LSTM (hyper-parameters: sigmoid activation, Adam optimizer, 30 epochs, binary cross-entropy loss function, 15 embedding output dims).
\begin{table}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline
**Time step** & **1** & **2** & **3** & **4** & **5** & **6** & **7** & **8** & **9** & **10** & **11** & **12** & **13** & **14** & **15** & **16** & **17** \\ \hline \hline illicit actor appears in \(1\) time step & 68 & 69 & 42 & 186 & 36 & 8 & 233 & 116 & 384 & 26 & 185 & 47 & 342 & 74 & 193 & 174 & 118 \\ \hline illicit actor appears in \(2-4\) time steps & 5 & 1 & 3 & 15 & 2 & 1 & 28 & 7 & 21 & 11 & 7 & 2 & 4 & 0 & 6 & 8 & 8 \\ \hline illicit actor appears in \(\geq 5\) time steps & 0 & 0 & 0 & 0 & 2 & 2 & 5 & 4 & 3 & 3 & 2 & 1 & 2 & 0 & 2 & 1 & 6 \\ \hline \hline
**Time step** & **18** & **19** & **20** & **21** & **22** & **23** & **24** & **25** & **26** & **27** & **28** & **29** & **30** & **31** & **32** & **33** & **34** \\ \hline \hline illicit actor appears in \(1\) time step & 77 & 122 & 405 & 612 & 692 & 376 & 532 & 1096 & 704 & 43 & 97 & 372 & 130 & 131 & 759 & 270 & 447 \\ \hline illicit actor appears in \(2-4\) time steps & 4 & 5 & 20 & 16 & 18 & 17 & 40 & 9 & 21 & 1 & 7 & 52 & 11 & 24 & 23 & 7 & 11 \\ \hline illicit actor appears in \(\geq 5\) time steps & 2 & 4 & 5 & 4 & 5 & 3 & 12 & 1 & 9 & 2 & 1 & 9 & 4 & 6 & 8 & 5 & 2 \\ \hline \hline
**Time step** & **35** & **36** & **37** & **38** & **39** & **40** & **41** & **42** & **43** & **44** & **45** & **46** & **47** & **48** & **49** \\ \hline \hline illicit actor appears in \(1\) time step & 427 & 588 & 428 & 180 & 127 & 436 & 168 & 395 & 93 & 277 & 23 & 505 & 193 & 370 & 631 \\ \hline illicit actor appears in \(2-4\) time steps & 9 & 5 & 5 & 9 & 0 & 14 & 8 & 12 & 11 & 6 & 9 & 4 & 11 & 17 & 20 \\ \hline illicit actor appears in \(\geq 5\) time steps & 6 & 4 & 7 & 5 & 2 & 3 & 6 & 3 & 1 & 1 & 1 & 0 & 0 & 1 & 1 \\ \hline \end{tabular}
\end{table}
Table 5. The distribution of the number of illicit actors (that appear in \(1\), \(2-4\), and \(\geq 5\) time steps) by time step.
Figure 7. The distribution of the number of transactions and addresses by time step for training and testing sets.
XGB is a supervised learning algorithm which predicts variables by combining the estimates of prior models. The XGBoost python library was used for XGB (default parameters with "multi:softmax" objective and 2 classes).
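The stated configurations map directly onto scikit-learn and XGBoost objects, as in the sketch below; we map the MLP's 500 epochs onto max_iter as an assumption, and omit the LSTM for brevity:

```python
# Classifier configurations as stated in the text (a sketch).
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier

models = {
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=50),
    "MLP": MLPClassifier(hidden_layer_sizes=(50,), max_iter=500,
                         solver="adam", learning_rate_init=0.001),
    "XGB": XGBClassifier(objective="multi:softmax", num_class=2),
}
```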
### Fraud Detection Evaluation Metrics
The metrics used to verify the models were Precision (the fraction of predicted positives that are truly positive), Recall (the proportion of actual positive labels correctly classified), F1 Score (harmonic mean of precision and recall), and Micro-Avg F1 (Micro-F1) Score (ratio of correct classifications to total classifications). In some cases, to distinguish between classifiers with close performance, the Matthews Correlation Coefficient (MCC) is used due to its suitability for unbalanced datasets and its prior use in financial fraud and cryptocurrency-related studies (Bahdan et al., 2017; Chen et al., 2017; Chen et al., 2018; Chen et al., 2018). We provide the formal definition of these metrics in Section A.2 of the Appendix. In addition, to gain deeper understanding of the different ML models and their classification performance for illicit transactions and illicit addresses, we conduct the following three case studies: (i) EASY cases: _all models classify an illicit transaction correctly_; (ii) HARD cases: _all models classify an illicit transaction incorrectly_; and (iii) AVERAGE cases: _some models fail to classify an illicit transaction but \(\geq 1\) models classify it correctly_.
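All of these metrics are available in scikit-learn; the following sketch computes them on toy labels, treating illicit as the positive class:

```python
from sklearn.metrics import (precision_score, recall_score, f1_score,
                             matthews_corrcoef)

y_true = [1, 1, 0, 0, 1, 0]   # toy labels: 1 = illicit, 0 = licit
y_pred = [1, 0, 0, 0, 1, 1]

print(precision_score(y_true, y_pred),               # precision
      recall_score(y_true, y_pred),                  # recall
      f1_score(y_true, y_pred),                      # F1 score
      f1_score(y_true, y_pred, average="micro"),     # micro-averaged F1
      matthews_corrcoef(y_true, y_pred))             # MCC
```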
## 5. Results and Analysis
### Statistical Analysis of the Dataset
The building blocks of the Elliptic++ dataset are the transaction features and address features, which directly impact the quality of information gained by the models and the quality of classification explainability into the root cause of fraudulent activities. Figure 8 shows three chosen features for the transactions (left) and actors (right) datasets, with feature values on the y-axis and time steps on the x-axis. We conjecture that important dataset features will show a reflective trend that clearly distinguishes between the illicit and licit classes. Such features will provide a level of interpretation into the risk assessment of a transaction or an actor. In Figure 8, the green curves of the features in the top two rows show the trend of licit transactions (left) and licit actors (right), which are distinctly different from the red curves of illicit transactions (left) and illicit actors (right). Conversely, some features may not contribute to the detection of illicit transactions and actors, such as the two features in the bottom row. For example, the curves for the illicit and licit transactions in Local Feature 53 can be clearly differentiated through visual analysis, while Local Feature 15 does not display any visual clue on illicit vs. licit transactions over all time steps. Similar observations are found in the actors dataset.
We can further expand the statistical analysis on the behavioral trends of actors by their features captured in our Elliptic++ dataset, e.g., life span, number of transactions involved, and distribution of actors, which provide an additional level of explainability to both the dataset and the classification/detection model performance for illicit/licit/unknown actors. Figure 9 shows the timeline of 15 illicit actors that transact in \(\geq 5\) time steps. For instance, Illicit Actor 1 transacts in 15 time steps, while Illicit Actor 11 transacts in only 6 time steps. A similar figure for illicit actors existing in only 1 time step (\(14,007\) illicit actors) is included in the Appendix (see Figure 19). Moreover, the number of involved transactions within each time step varies across illicit actors, as shown in Figure 10 for the five selected actors. Table 5 (shown on the previous page) provides the distribution of illicit actors across time steps categorized into 3 sets.
Regarding the Bitcoin users, clustering addresses using the previously discussed four steps in Section 3.3.4 created \(146,783\) user entities. Table 6 shows some relevant statistics. Each user controlled a varying number of addresses, with 98.72% of users controlling \(\leq 10\) addresses, while only 0.02% of users controlled \(\geq 1\)K addresses.
Figure 8. Trend comparison for selected features from the transactions (left) and actors (right) datasets. Green highlight shows good correlation, red shows unclear correlation.
Figure 10. Number of transactions per time step for Illicit Actor 13 (purple), 1 (green), 5 (orange), 14 (light blue), and 8 (dark blue).
Figure 9. Timeline of illicit actors appearing in \(\geq 5\) time steps; each point represents at least one illicit transaction the actor was involved in.
### Model Evaluation and Analysis
Table 7 and Table 8 show the results of all models trained on the transactions dataset and the actors dataset respectively. From Table 7, the results for the transactions dataset in Elliptic++ (labelled TX) show an increase in performance in most of the metrics for all of the models, compared to the Elliptic dataset (labelled EC). This can be attributed to our addition of 17 augmented features to each transaction, which in turn improves the generalization performance and the explainability of both the transaction dataset and the fraudulent transaction detection models. It is observed that \(RF^{TX}\) is the best-performing model (followed by \(XGB^{TX}\) and \(MLP^{TX}\)) with a precision of 97.5% and recall of 71.9%. Although \(RF^{TX}\) and \(RF^{EC}\) have comparable precision and recall performance, \(RF^{TX}\) is better as the MCC values are 0.83 vs 0.81. Table 7 also shows the results of \(2-\) and \(3-\)classifier ensembles by selecting the top 3 models (RF, XGB, MLP). The best performing \(2-\) and \(3-\)classifier ensembles are those using Elliptic++. In comparison, the \(LR\), \(MLP^{EC}\), and \(LSTM\) models are ineffective with \(<50\%\) precision/recall (highlighted in red). Table 8 shows the results for the actors dataset. It is observed that \(RF^{AR}\) is the best-performing model (followed by \(XGB^{AR}\) and \(MLP^{AR}\)) with a precision of 91.1% and recall of 78.9%. Also, the best \(2-\) and \(3-\)classifier ensembles (\(RF\)+\(XGB^{AR}\) and \(RF\)+\(MLP\)+\(XGB^{AR}\)) show increases in precision (95.9%, 93.3% vs 91.1%), but decreases in recall (53.0%, 57.2% vs 78.9%), indicating the member models of \(2-\) and \(3-\)classifier ensembles are not complementary and have high negative correlation (Zhu et al., 2018). The \(LR^{AR}\) and \(LSTM^{AR}\) models are ineffective with extremely low recall (highlighted in red).
### EASY, HARD, and AVERAGE Cases Analysis
To further understand the performance of RF models against other models, and the performance results of their best performing \(2-\) and \(3-\)classifier ensembles, we analyze their performance in terms of the EASY, HARD, and AVERAGE cases as defined in Section 4.3. Table 9 (shown on the next page) provides a temporal split of the classification results across the test time steps of 35 to 49 for the EASY case, HARD case, and AVERAGE case respectively. It is important to note that AVERAGE cases present the opportunities for further optimizations. For the AVERAGE cases (correct classification by \(1\leq x\leq 4\) models), all combinations are shown for each individual model; for the \(2/3\) models scenario we show the top 2 combinations, and for the \(4\) models scenario we show the top 1 combination. First, the case where the 4 models RF, MLP, XGB, and LR correctly classify the transaction accounts for 71% of the AVERAGE cases. Second, the cases where RF classifies incorrectly make up only \(<1\%\) of the AVERAGE cases. This motivates us to focus on optimization of the RF model with feature refinement.
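The EASY/HARD/AVERAGE bucketing can be expressed compactly, assuming each model contributes a boolean array marking which illicit test samples it classified correctly (the naming is ours):

```python
import numpy as np

def bucket_cases(correct_by_model):
    """correct_by_model: dict model_name -> bool array of shape (n_illicit,)."""
    hits = np.sum(list(correct_by_model.values()), axis=0)
    n_models = len(correct_by_model)
    return {"EASY": hits == n_models,              # all models correct
            "HARD": hits == 0,                     # all models wrong
            "AVERAGE": (hits > 0) & (hits < n_models)}

cases = bucket_cases({"RF": np.array([1, 1, 0], bool),
                      "XGB": np.array([1, 0, 0], bool)})
print(cases["EASY"], cases["HARD"], cases["AVERAGE"])
```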
\begin{table}
\begin{tabular}{|l||c|} \hline
**\# Users** & \(146,783\) \\ \hline
**\# Addresses per User: Min** & 1 \\ \hline
**\# Addresses per User: Median** & 1 \\ \hline
**\# Addresses per User: Mean** & 2.73 \\ \hline
**\# Addresses per User: Max** & \(14,885\) \\ \hline
**\% Users w/ \(1-10\) Addresses** & 98.72\% \\ \hline
**\% Users w/ \(11-1K\) Addresses** & 1.26\% \\ \hline
**\% Users w/ \(1K-\max\) Addresses** & 0.02\% \\ \hline \end{tabular}
\end{table}
Table 6. Statistics for Bitcoin users in the Elliptic++ Dataset.
\begin{table}
\begin{tabular}{|c||c|c|c|c|} \hline
**Model** & **Precision** & **Recall** & **F1 Score** & **Micro-F1** \\ \hline \hline \(\text{LR}^{\text{EC}}\) & 0.326 & 0.707 & 0.446 & 0.886 \\ \hline \(\text{LR}^{\text{TX}}\) & 0.328 & 0.707 & 0.448 & 0.884 \\ \hline \(\text{RF}^{\text{EC}}\) & 0.940 & 0.724 & 0.818 & 0.979 \\ \hline \(\text{RF}^{\text{TX}}\) & **0.975** & **0.719** & **0.828** & **0.980** \\ \hline \(\text{MLP}^{\text{EC}}\) & 0.476 & 0.673 & 0.558 & 0.931 \\ \hline \(\text{MLP}^{\text{TX}}\) & 0.611 & 0.613 & 0.612 & 0.949 \\ \hline \(\text{LSTM}^{\text{EC}}\) & 0.665 & 0.350 & 0.459 & 0.946 \\ \hline \(\text{LSTM}^{\text{TX}}\) & 0.709 & 0.223 & 0.339 & 0.942 \\ \hline \(\text{XGB}^{\text{EC}}\) & 0.812 & 0.717 & 0.761 & 0.971 \\ \hline \(\text{XGB}^{\text{TX}}\) & 0.793 & 0.718 & 0.754 & 0.969 \\ \hline \hline \multicolumn{4}{|c|}{\(2\)_ classifiers ensemble_, selecting top 3 classifiers_} \\ \hline \(\text{RF+MLP}^{\text{EC}}\) & 0.987 & 0.624 & 0.765 & 0.975 \\ \hline \(\text{RF+MLP}^{\text{TX}}\) & 0.989 & 0.635 & 0.773 & 0.975 \\ \hline \(\text{RF+XGB}^{\text{EC}}\) & 0.960 & 0.704 & 0.812 & 0.979 \\ \hline \(\text{RF+XGB}^{\text{TX}}\) & **0.977** & **0.706** & **0.820** & **0.979** \\ \hline \(\text{MLP+XGB}^{\text{EC}}\) & 0.457 & 0.737 & 0.564 & 0.926 \\ \hline \(\text{MLP+XGB}^{\text{TX}}\) & 0.974 & 0.596 & 0.739 & 0.972 \\ \hline \hline \multicolumn{4}{|c|}{\(3\)_ classifiers ensemble_, selecting top 3 classifiers_} \\ \hline \(\text{RF+MLP+XGB}^{\text{EC}}\) & 0.947 & 0.719 & 0.817 & 0.979 \\ \hline \(\text{RF+MLP+XGB}^{\text{TX}}\) & **0.962** & **0.723** & **0.826** & **0.980** \\ \hline \end{tabular}
\end{table}
Table 7. Illicit transactions results using individual/ensemble of classifiers. EC refers to classification on Elliptic dataset (Zhu et al., 2018), TX is on our Elliptic++ transactions dataset.
\begin{table}
\begin{tabular}{|c||c|c|c|c|} \hline
**Model** & **Precision** & **Recall** & **F1 Score** & **Micro-F1** \\ \hline \hline \(\text{LR}^{\text{AR}}\) & 0.477 & 0.046 & 0.083 & 0.964 \\ \hline \(\text{RF}^{\text{AR}}\) & **0.911** & **0.789** & **0.845** & **0.990** \\ \hline \(\text{MLP}^{\text{AR}}\) & 0.708 & 0.502 & 0.587 & 0.974 \\ \hline \(\text{LSTM}^{\text{AR}}\) & 0.922 & 0.033 & 0.064 & 0.965 \\ \hline \(\text{XGB}^{\text{AR}}\) & 0.869 & 0.534 & 0.662 & 0.980 \\ \hline \hline \multicolumn{4}{|c|}{\(2\)_ classifiers ensemble_, selecting top 3 classifiers_} \\ \hline \(\text{RF+MLP}^{\text{AR}}\) & 0.967 & 0.403 & 0.568 & 0.978 \\ \hline \(\text{RF+XGB}^{\text{AR}}\) & **0.959** & **0.530** & **0.682** & **0.982** \\ \hline \(\text{MLP+XGB}^{\text{AR}}\) & 0.929 & 0.324 & 0.481 & 0.975 \\ \hline \hline \multicolumn{4}{|c|}{\(3\)_classifiers ensemble_, _selecting top 3 classifiers_} \\ \hline \(\text{RF+MLP+XGB}^{\text{AR}}\) & **0.933** & **0.572** & **0.709** & **0.983** \\ \hline \end{tabular}
\end{table}
Table 8. Illicit actors results using individual/ensemble of classifiers. AR is classification on our Elliptic++ actors dataset.
### Model Optimization by Feature Refinement
Given that RF is the best-performing model, we explore feature refinement to further optimize it. By examining the results from the decision trees, we combine feature importance, permutation feature importance, and drop-column feature importance to obtain the top 10 and bottom 10 features by importance on both the transactions and actors datasets, as shown in Figure 11(a) for TX and Figure 11(b) for AR. For the transactions dataset, the 17 augmented features produced a collective 12% importance, with the transaction size feature alone responsible for 4.1%. For the actors dataset, 35% of the features were responsible for 80% of the importance, with the total address interaction as the most important feature. Interestingly, a connection can be made between the set of top and bottom 10 features and the features highlighted in Figure 8: the green-highlighted features map to the top 10 features, and likewise the red-highlighted features map to the bottom 10. This confirms that features providing visual evidence of risk carry larger classification importance, and vice versa. The top and bottom 2 of each feature type in both datasets are included in the Appendix Section A.5 for reference. Using this analysis, we run the best \(1-\)/\(2-\)/\(3-\)classifier ensembles with the selected features instead of the full feature set. Tables 10 and 11 show the model performance results for transactions and wallet addresses (actors), respectively. Interestingly, the feature-refined models show an average improvement of 0.92%, 1.17%, and 0.89% in precision, recall, and F1 score, respectively, on the transactions dataset, and similarly 1.07%, 3.1%, and 1.13% on the actors dataset.
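The three importance notions named above can be sketched for a fitted scikit-learn forest as follows; the exact combination rule behind the Figure 11 rankings is not restated here, so treat this as an assumed recipe (averaging or rank-aggregating the three outputs is one plausible choice). `rf` and the train/validation splits are placeholders:

```python
import numpy as np
from sklearn.base import clone
from sklearn.inspection import permutation_importance

def three_importances(rf, X_train, y_train, X_val, y_val):
    # (1) Impurity-based importance built into the fitted forest.
    imp_builtin = rf.feature_importances_
    # (2) Permutation importance, measured on held-out data.
    imp_perm = permutation_importance(rf, X_val, y_val, n_repeats=5).importances_mean
    # (3) Drop-column importance: accuracy lost when a feature is removed
    #     and the forest is refit (X is assumed to be a NumPy array).
    base = clone(rf).fit(X_train, y_train).score(X_val, y_val)
    imp_drop = np.array([
        base - clone(rf).fit(np.delete(X_train, j, axis=1), y_train)
                        .score(np.delete(X_val, j, axis=1), y_val)
        for j in range(X_train.shape[1])
    ])
    return imp_builtin, imp_perm, imp_drop
```

Drop-column importance requires one refit per feature, so it is the most expensive of the three but the most direct measure of a feature's marginal value.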
\begin{table}
\begin{tabular}{|c||c|c|c|c|} \hline
**Model** & **Precision** & **Recall** & **F1 Score** & **Micro-F1** \\ \hline \hline \(\text{RF}^{\text{TX}}\) & 0.975 & 0.719 & 0.828 & 0.980 \\ \hline \(\text{RF}^{\text{TX}^{\text{P}}}\) & **0.986** & **0.727** & **0.836** & **0.981** \\ \hline \hline \(\text{RF}+\text{XGB}^{\text{TX}}\) & 0.977 & 0.706 & 0.820 & 0.979 \\ \hline \(\text{RF}+\text{XGB}^{\text{TX}^{\text{P}}}\) & **0.987** & **0.717** & **0.826** & **0.980** \\ \hline \hline \(\text{RF}+\text{MLP}+\text{XGB}^{\text{TX}}\) & 0.962 & 0.723 & 0.826 & 0.980 \\ \hline \(\text{RF}+\text{MLP}+\text{XGB}^{\text{TX}^{\text{P}}}\) & **0.968** & **0.729** & **0.834** & **0.980** \\ \hline \end{tabular}
\end{table}
Table 10. Illicit transactions classification results using selected features, labelled as \(\psi\), Micro-F1 denotes Micro-Avg F1.
Figure 11. Top 10 and bottom 10 features for the transactions dataset (left) and actors dataset (right).
\begin{table}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline
**Time Step** & **35** & **36** & **37** & **38** & **39** & **40** & **41** & **42** & **43** & **44** & **45** & **46** & **47** & **48** & **49** & **TOTAL** \\ \hline \hline \(\text{EASV}\) & 32 & 0 & 2 & 5 & 5 & 1 & 1 & 3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 49 \\ \hline \(\text{HARD}\) & 4 & 0 & 10 & 7 & 4 & 28 & 6 & 36 & 22 & 20 & 4 & 1 & 21 & 27 & 53 & 243 \\ \hline \(\text{LR}\) & 0 & 0 & 3 & 0 & 2 & 3 & 0 & 6 & 2 & 3 & 1 & 0 & 1 & 9 & 2 & \\ \hline \(\text{RF}\) & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline \(\text{MLP}\) & 0 & 0 & 1 & 1 & 0 & 2 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline \(\text{LSTM}\) & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline \(\text{XGB}\) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline \(\text{RF}\),XGB & 4 & 0 & 0 & 1 & 2 & 1 & 17 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline \(\text{LR}\),MLP & 1 & 0 & 0 & 1 & 0 & 2 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline \(\text{RF}\),MLP,XGB & 5 & 6 & 0 & 8 & 3 & 4 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline \(\text{LR}\),RF,XGB & 6 & 1 & 10 & 27 & 18 & 10 & 5 & 21 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline \(\text{RF}\),MLP,XGB,LR & 124 & 24 & 12 & 57 & 45 & 55 & 81 & 159 & 0 & 1 & 0 & 1 & 0 & 0 & 0 \\ \hline \end{tabular}
\end{table}
Table 9. Classification results on the testing dataset showing distributions of EASY, HARD, AVERAGE cases among time steps 35 to 49. Total of each case is 49, 243, and 791 respectively. For the AVERAGE case, distributions are shown for cases where only \(1\) model (all shown), only \(2\) models (top 2 shown), only \(3\) models (top 2 shown), and only \(4\) models (top 1 shown) classified correctly.
The increase in recall is also evident in the RF tree voting. Figures 12(a) and 12(b) show the cumulative votes of 50 RF trees for 9 chosen transactions/addresses before (left) and after (right) selecting important features, for the transactions and actors datasets respectively. This demonstrates a trend: when non-contributing features are dropped, transactions/addresses that are close to the correct manifold in classification are classified correctly by more trees, increasing the number of correct votes.
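The cumulative vote curves of Figure 12 can be reproduced in a few lines for a fitted forest; the sketch assumes a scikit-learn RandomForestClassifier with 50 trees and binary 0/1 labels (matching the forest's internal class encoding):

```python
import numpy as np

def cumulative_tree_votes(rf, X, y_true):
    """Running count of correct votes, tree by tree, for a fitted
    sklearn RandomForestClassifier (assumes binary 0/1 labels)."""
    per_tree = np.stack([tree.predict(X) for tree in rf.estimators_])
    correct = per_tree == np.asarray(y_true)        # (n_trees, n_samples)
    return np.cumsum(correct, axis=0)               # cumulative votes per sample
```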
## 6. Concluding Remarks
With the rapid growth of the cryptocurrency ecosystem, there is a growing demand for robust financial forensics on the blockchain networks. This paper makes three original contributions.
_First_, we collect and contribute the Elliptic++ dataset, combining over 203k transactions and 822k addresses, and providing four graph representations (Money Flow Transaction Graph, Actor Interaction Graph, Address-Transaction Graph, User Entity Graph) that allow for mining and visualizations of connections for anomaly detection. This enables the detection of both illicit transactions and illicit accounts in Bitcoin networks.
_Second_, we leverage the four unique graph representations to showcase the fraud detection of illicit transactions, illicit actors (wallet addresses), and the risks of de-anonymization of users using clustering of addresses. We demonstrate the utility of the Elliptic++ dataset for detecting fraudulent transactions and illicit accounts using representative machine learning approaches, including Random Forest, Logistic Regression, Multilayer Perceptrons, LSTM, and Extreme Gradient Boosting. We analyze why Random Forest (RF) is the best-performing model for fraud detection, achieving 97.5% precision and 71.9% recall on the transactions dataset, and 91.1% precision and 78.9% recall on the actors dataset. We also provide comparative analysis of their performance and provide the explainability of the ML algorithms through visualization of all four types of graphs. In addition, we also provide explainability of the performance comparison through detailed analysis on the time series of selected transaction features, and the time series of account features over the time steps of the Elliptic++ dataset.
_Third_, to further improve the accuracy of the ML algorithms for detecting fraudulent transactions and illicit accounts, we employ ensemble methods to improve the generalization performance of individual ML algorithms. We study ensemble learning with two and three member models, with detailed analysis on the effectiveness of ensemble methods through EASY, HARD and AVERAGE cases. Motivated by our ensemble learning analysis, we show that model training using selective features instead of all extracted features of transactions and of addresses can further improve RF precision and recall performance by 0.92% and 1.17% respectively for the transactions dataset, and 1.07% and 3.1% respectively for the actors dataset. The combination of ensembles and importance-based feature pruning not only demonstrates the utility of the Elliptic++ dataset for fraud detection in Bitcoin network, but also showcases the importance of root cause analysis of fraudulent activities through semantic and statistical explainability of ML algorithms.
The Bitcoin blockchain is the first and the largest cryptocurrency network, and Bitcoin remains the most widely used cryptocurrency. We believe that the methodology used for collecting the Elliptic++ dataset and the approach developed to exhibit the utility of this dataset for fraud detection in the Bitcoin network can be leveraged to analyze other currencies and blockchain networks, including Monero, ZCash, and Ethereum, to name a few. Moreover, the graph representations and the use of time series representations of transaction/account features over the time steps provide guidelines for developing financial forensics models, with high performance and explainability, that can benefit other blockchains.
The Elliptic++ dataset and its tutorials are made publicly available at **[https://www.github.com/git-disl/EllipticPlusPlus](https://www.github.com/git-disl/EllipticPlusPlus).** We conjecture that making this dataset publicly available will enable applied data science researchers to conduct financial forensics on the Bitcoin cryptocurrency network and develop more effective fraud detection models and algorithms.
_Acknowledgement_. This research is partially sponsored by the NSF CISE grants 2302720 and 2038029, an IBM faculty award (002114), and a CISCO edge AI grant (001927). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation or other funding agencies and companies mentioned above.
\begin{table}
\begin{tabular}{|c||c|c|c|c|} \hline
**Model** & **Precision** & **Recall** & **F1 Score** & **Micro-F1** \\ \hline \hline RF\({}^{\text{AR}}\) & 0.911 & 0.789 & 0.845 & 0.990 \\ \hline RF\({}^{\text{AR}^{\text{P}}}\) & **0.921** & **0.802** & **0.858** & **0.990** \\ \hline \hline RF+XGB\({}^{\text{AR}}\) & 0.959 & 0.530 & 0.682 & 0.982 \\ \hline RF+XGB\({}^{\text{AR}^{\text{P}}}\) & **0.967** & **0.543** & **0.686** & **0.982** \\ \hline \hline RF+MLP+XGB\({}^{\text{AR}}\) & 0.933 & 0.572 & 0.709 & 0.983 \\ \hline RF+MLP+XGB\({}^{\text{AR}^{\text{P}}}\) & **0.945** & **0.601** & **0.718** & **0.984** \\ \hline \end{tabular}
\end{table}
Table 11. **Illicit actors classification results using selected features, labelled as \(\psi\) and compared with AR. Micro-F1 denotes Micro-Avg F1.**
Figure 12. **Cumulative votes for 50 RF trees for (a) chosen transactions and (b) addresses before (left) and after (right) feature selection and refinement.** |
2305.07746 | Microscopic Examination of SRF-quality Nb Films through Local Nonlinear
Microwave Response | The performance of superconducting radio-frequency (SRF) cavities is
sometimes limited by local defects. To investigate the RF properties of these
local defects, especially those that nucleate RF magnetic vortices, a
near-field magnetic microwave microscope is employed. Local third harmonic
response (P3f) and its temperature-dependence and RF power-dependence are
measured for one Nb/Cu film grown by Direct Current Magnetron Sputtering (DCMS)
and six Nb/Cu films grown by High Power Impulse Magnetron Sputtering (HiPIMS)
with systematic variation of deposition conditions. Five out of the six HiPIMS
Nb/Cu films show a strong third harmonic response that is likely coming from RF
vortex nucleation due to a low-Tc surface defect with a transition temperature
between 6.3 K and 6.8 K, suggesting that this defect is a generic feature of
air-exposed HiPIMS Nb/Cu films. A phenomenological model of surface defect
grain boundaries hosting a low-Tc impurity phase is introduced and studied with
Time-Dependent Ginzburg-Landau (TDGL) simulations of probe/sample interaction
to better understand the measured third harmonic response. The simulation
results show that the third harmonic response of RF vortex nucleation caused by
surface defects exhibits the same general features as the data, including peaks
in third harmonic response with temperature, and their shift and broadening
with higher microwave amplitude. We find that the parameters of the
phenomenological model (the density of surface defects that nucleate RF
vortices and the depth an RF vortex travels through these surface defects) vary
systematically with film deposition conditions. From the point of view of these
two properties, the Nb/Cu film that is most effective at reducing the
nucleation of RF vortices associated with surface defects can be identified. | Chung-Yang Wang, Carlota Pereira, Stewart Leith, Guillaume Rosaz, Steven M. Anlage | 2023-05-12T20:09:13Z | http://arxiv.org/abs/2305.07746v2 | # Microscopic Examination of SRF-quality Nb Films through Local Nonlinear Microwave Response
###### Abstract
The performance of superconducting radio-frequency (SRF) cavities is sometimes limited by local defects. To investigate the RF properties of these local defects, a near-field magnetic microwave microscope is employed. Local third harmonic response (\(P_{3f}\)) and its temperature-dependence and RF power-dependence are measured for one Nb/Cu film grown by Direct Current Magnetron Sputtering (DCMS) and six Nb/Cu films grown by High Power Impulse Magnetron Sputtering (HiPIMS) with systematic variation of deposition conditions. Five out of the six HiPIMS Nb/Cu films show a strong third harmonic response that is likely coming from a low-\(T_{c}\) surface defect with a transition temperature between 6.3 K and 6.8 K, suggesting that this defect is a generic feature of air-exposed HiPIMS Nb/Cu films. One possible origin of such a defect is grain boundaries hosting a low-\(T_{c}\) impurity such as oxidized Nb. Time-Dependent Ginzburg-Landau (TDGL) simulations are performed to better understand the measured third harmonic response. The simulation results show that the third harmonic response of RF vortex nucleation caused by surface defects can qualitatively explain the experimental data. Moreover, the density of surface defects that nucleate RF vortices, and how deep an RF vortex travels through these surface defects, can be extracted qualitatively from third harmonic response measurements. From the point of view of these two properties, the best Nb/Cu film for SRF applications can be identified.
## I Introduction
In high-energy physics, there is continued interest in building next-generation particle accelerators (for example, the International Linear Collider, ILC) using bulk Nb superconducting radio-frequency (SRF) cavities [1; 2]. For the ILC, around 10000 SRF cavities will be built.
The quality of an SRF cavity is typically quantified by its quality factor (Q-factor) as a function of the accelerating gradient for the particle beam. Real-world materials are not perfect. The Q-factors of SRF cavities are usually below their theoretical predictions. In particular, as the accelerating gradient, and hence the RF magnetic field on the Nb surfaces, becomes strong, the Q-factor drops significantly (this is called the Q-slope) [3; 4; 5; 6]. Such a Q-slope phenomenon limits the RF field supported by the SRF cavities, which then limits the performance of the particle accelerator. Besides the Q-slope phenomenon, quenches are also frequently observed in many SRF cavities [7; 8; 9]. One reason for a quench is that a superconductor is locally heated up to exceed its critical temperature and loses superconductivity. Both the Q-slope and defect-nucleated quenches indicate that the performance of SRF cavities is limited by breakdown events below the theoretically predicted intrinsic critical field of the superconductor [1; 4; 5]. These breakdowns are sometimes caused by uncontrolled local defects [10; 11; 12]. Candidates of defects in SRF cavities include oxides [13; 14; 15; 16; 17], impurities [18; 19], grain boundaries [20; 21; 22; 23; 24; 25; 26], dislocations [27; 28; 25], surface roughness [29; 30], etc. To make high Q-factor SRF cavities that operate to high accelerating gradients, it is necessary to understand these defects, in particular their influence on the RF properties of SRF cavities. Therefore, there is a need to understand in detail the RF properties of these local defects.
In SRF material science, various kinds of techniques have been developed to characterize SRF cavities and SRF materials. For example, researchers routinely measure the Q-factor [31] and residual resistance [32] of SRF cavities. However, it is costly and time-consuming to fabricate and measure an entire cavity. As a result, many measurements are performed on coupon samples of SRF materials, including measurements of RF quench field [33; 9; 34] and surface resistance [35; 36; 33].
Another quantity of interest is the vortex penetration field because SRF cavities are expected to operate best in the Meissner state (vortex-free) to avoid dissipation due to vortex motion [37; 38; 9; 20; 30]. Superconductors show strong nonlinearity in the presence of vortices and show relatively weak nonlinearity in the vortex-free Meissner state. The nonlinear electrodynamic response arises when properties of the superconductor (such as the superfluid density) become time-dependent during the RF cycle. One manifestation of nonlinearity is that the superconductor creates response currents to the stimulation at frequencies other than the driving frequency. Utilizing the connection between vortices and nonlinearity, the vortex penetration field can be determined by measuring the third harmonic response of a superconductor subjected to a time-harmonic magnetic field [39]. In particular, the vortex penetration field of thin films and multilayer structures have been studied with such alternating current (AC) (kHz regime) third harmonic response magnetometry [38; 40; 41; 42; 43; 44; 45; 46].
The techniques described above (Q-factor, residual resistance, RF quench field, surface resistance, vortex penetration field, etc.) help physicists to characterize the global properties of SRF materials. However, none of
them can directly study the local RF properties of SRF materials.
Motivated by the need to study RF properties of local defects, we successfully built and operated a near-field magnetic microwave microscope using a scanned loop (the original version) [47; 48; 49; 50; 51] as well as a magnetic writer from a magnetic recording hard-disk drive (the microwave microscope adopted in this work) [52; 53; 54; 55; 56; 57]. The spatial resolution of the local probe of the microwave microscope adopted in this work is on the sub-micron scale, and the operating frequency is in the range of several GHz. Since the presence of a vortex is closely related to the third harmonic response, we measure the third harmonic response and its dependence on temperature and RF field amplitude. With this microwave microscope, the local RF properties of SRF materials are explored by measuring locally-generated third harmonic response.
Bulk Nb is the standard choice for fabricating SRF cavities. The main reason is that Nb has the highest critical temperature (\(T_{c}=9.3\) K) and the highest first critical field (\(B_{c1}=180\) mT) of all the pure metals at ambient pressure. Besides bulk Nb, there are some candidate alternative materials for SRF applications [58], including Nb film on Cu [37; 30; 59; 50; 51; 52; 53; 56; 57], Nb\({}_{3}\)Sn on bulk Nb substrate [63; 64; 65; 66; 9; 20], multilayer structure (superconductor-insulator-superconductor structures, for instance) [6; 29; 38; 40; 67; 68; 69], etc. The potential benefits of using materials other than bulk Nb would be a higher \(T_{c}\) and a potentially higher critical field \(B_{c}\). Here we focus on Nb films on Cu.
The development of the deposition of Nb films onto Cu cavities has a long history [70]. In particular, the first Nb/Cu cavities were produced at CERN in the early 1980s [71]. Motivations for Nb thin film technology for SRF applications include better thermal stability (Nb/Cu cavities allow operation at 4 K, rather than 2 K, because of the superior thermal conductivity of Cu) and reducing material cost (high purity Nb costs around 40 times more than Cu). The performance of bulk Nb cavities is approaching the intrinsic limit of the material. In contrast, Nb/Cu cavities typically suffer from serious Q-slope problems [72; 73], which limits their use in high accelerating fields. Solving the Q-slope problem in Nb/Cu cavities is essential for making them competitive for use in high-field accelerators.
In this work, we use our near-field magnetic microwave microscope to study local RF properties (on sub-micron scales, at several GHz) of SRF-quality Nb/Cu films produced at CERN. In particular, surface defects of these Nb/Cu films are the main focus of this work.
The outline of this paper is as follows: In Sec. II, we describe the experimental setup. In Sec. III, we show the experimental results of these Nb/Cu films and focus on surface defect signals. In Sec. IV, we perform Time-Dependent Ginzburg-Landau (TDGL) simulations to better understand the experimental results, and compare these Nb/Cu films. In Sec. V, we summarize the experimental results of these Nb/Cu films and identify the best one for SRF application.
## II Experimental setup
The setup of our near-field magnetic microwave microscope (identical to that described in Ref. [57]) is described in this section. A schematic of the setup is shown in Fig. 1.
The heart of our microwave microscope is the magnetic writer head (provided by Seagate Technology) that is used in conventional hard-disk drives. The central part of the magnetic writer head is essentially a solenoid that generates a localized RF magnetic field. The solenoid is on the sub-micron scale, which sets the spatial resolution of our microscope. In the setup, a Seagate magnetic writer head is attached to a cryogenic XYZ positioner (sub-micron spatial resolution) and used in a scanning-probe-microscope fashion. The probe is in contact with the sample during the third harmonic measurement. However, the surfaces of the probe and sample are not perfectly flat, resulting in a finite probe-sample separation \(h\), estimated to be less than 1 micron.
The microwave source signal \(P_{RF}sin^{2}(\omega t)\) is sent to the probe (magnetic writer head) by its built-in and highly-engineered transmission line. The probe then produces a local (sub-micron scale) RF magnetic field \(B_{RF}sin(\omega t)\) acting on the sample surface. The superconducting sample then generates a screening current on the surface in an effort to maintain the Meissner state. This screening current generates a response magnetic field that is coupled back to the same probe, creating a propagating signal whose third harmonic component \(P_{3f}sin^{2}(3\omega t)\) is measured by a spectrum analyzer at room temperature. \(P_{3f}\) is measured because it arises from both the nonlinear Meissner effect [74; 75], and when a vortex penetrates
Figure 1: Schematic of experiment setup. The microwave source (MW source) sends a signal of \(P_{RF}sin^{2}(\omega t)\) to the magnetic writer (Probe), which then generates \(B_{RF}sin(\omega t)\) acting on the sample surface locally. The sample response is collected by the same magnetic writer, and the third Fourier component \(P_{3f}sin^{2}(3\omega t)\) is studied. The dashed line box represents the cryostat.
the sample surface and forms a vortex semi-loop [76] (as discussed in Sec. I), either due to an intrinsic mechanism or local defects (weak spot for a vortex to penetrate).
\(P_{3f}\) measurements of superconductor nonlinear response show tremendous dynamic range, often more than 50 dB [52; 53; 55; 56; 57]. The excellent instrumental nonlinear background of our measurements (\(\sim\) -155 dBm) allows for very sensitive measurements of superconductor nonlinearity and its variation with temperature, driving RF power, location, and probe-sample separation. Note that measurements are recorded in dBm and later converted to linear power for further study.
To improve the signal-to-noise ratio, microwave filters are installed as follows. Low pass filters are installed between the microwave source and the probe to block the unwanted harmonic signals generated by the microwave source. High pass filters are installed between the probe and the spectrum analyzer to block the fundamental input frequency signal from reaching the spectrum analyzer and producing unwanted nonlinear signals.
Measurements are performed with a variety of fixed input frequencies between 1.1 GHz and 2.2 GHz, while varying temperature and applied RF field amplitude. No external DC magnetic field is applied. The measured residual DC field near the sample at low temperatures is around 35 \(\mu\)T, as measured by a cryogenic 3-axis magnetometer.
The base temperature for a sample in the cryostat is around 3.5 K. The sample and the thermometer are both directly mounted on the cold plate to ensure good thermalization.
## III Experimental results
In this work, we study seven Nb films deposited on one common Cu substrate, as shown in Fig. 2. One of the samples is prepared by Direct Current Magnetron Sputtering (DCMS) with zero bias, and the sample thickness is around 3.5 \(\mu\)m. The other six samples are prepared by High Power Impulse Magnetron Sputtering (HiPIMS), with biases from 0 V to 125 V, and the sample thickness ranges from 1.15 \(\mu\)m to 1.5 \(\mu\)m. The preparation of the seven Nb/Cu films is discussed in Appendix A. In the following, the HiPIMS 25 V bias Nb/Cu sample is discussed in detail, and the results of all seven Nb/Cu samples are summarized and compared in Table 2 and Table 3.
Fig. 3 shows the representative data for the third harmonic response power \(P_{3f}\) as a function of temperature at a fixed location on the HiPIMS 25 V bias Nb/Cu sample. A representative measurement protocol is as follows. The sample is warmed up to 10 K (above \(T_{c}\)), and then the microwave source is turned on with fixed input frequency (\(\omega/2\pi\)=1.86 GHz) and input power (\(P_{RF}\)=+2 dBm), and then \(P_{3f}\) is measured as the sample is gradually cooled down to 3.5 K. In other words, the surface of the sample experiences a fixed RF field \(B_{RF}sin(\omega t)\) in a sub-micron scale area during the cooldown process from 10 K to 3.5 K.
The measured \(P_{3f}(T)\) in Fig. 3 can be decomposed into three segments: the region above 9.1 K, the region below 6.7 K, and the region in between. The magnetic writer head (the probe of our microwave microscope) itself has temperature-independent nonlinearity. \(P_{3f}\) above 9.1 K comes from this probe background and is indeed temperature-independent. A transition around 9.1 K can be seen in the inset of Fig. 3. This transition around 9.1 K comes from the intrinsic nonlinear response of the Nb film. The strongest \(P_{3f}\) signal here shows up below 6.7 K. In the following, such a \(P_{3f}\) onset temperature is called the \(P_{3f}\) transition temperature and is denoted as \(T_{c}^{P_{3f}}\). Compared to the signal around 9.1 K, the onset around 6.7 K is dramatic. Such a signal suggests that some mechanism shows up at and below 6.7 K that produces strong nonlinearity. In summary, \(P_{3f}(T)\) in Fig. 3 can be understood as the combination of three different sources of nonlinearity: the temperature-independent probe background, the intrinsic Nb response, and mechanisms that appear at and below 6.7 K. The mechanisms leading to strong \(P_{3f}\) below 6.7 K are extrinsic and are likely due to surface defects. Note that \(P_{3f}\) below 6.7 K is much stronger than the intrinsic Nb signal around 9.1 K, suggesting that our local \(P_{3f}\) measurement is sensitive to surface defects.
Figure 2: Photo of the seven Nb films on one common Cu substrate. The samples are prepared at CERN.
Figure 3: Representative data for \(P_{3f}\) as a function of temperature for the HiPIMS 25 V bias Nb/Cu sample. The input frequency is 1.86 GHz and the input power is 2 dBm. The blue dots are the raw data, and the red curve is the \(P_{3f}\) averaged over 0.05 K range bins. Inset: enlargement of the figure above 7 K.
Since the main objective of this work is investigating RF properties of surface defects, the strong \(P_{3f}\) below 6.7 K is the main focus in the following.
To further study the nature of \(P_{3f}\) below 6.7 K, \(P_{3f}(T)\) is measured for various input powers \(P_{RF}\) (and hence various applied RF field amplitudes \(B_{RF}\)). For each measurement, the sample is warmed up to 10 K and then cooled down to 3.5 K while experiencing an applied RF field with fixed input frequency and input power. Since the measured \(P_{3f}\) is the combination of probe background and sample contribution, the probe background (obtained by averaging the magnitude of \(P_{3f}(T)\) between 9.5 K and 10 K) is subtracted from the total signal to isolate the sample signal. The process is repeated six times, each time with a different input power. The resulting six \(P_{3f}(T)\) curves for different input powers/RF field amplitudes are shown in linear format in Fig. 4.
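A minimal sketch of this background subtraction, assuming the raw sweep is stored as arrays of temperature and \(P_{3f}\) in dBm; the background is taken literally as described, the mean linear power between 9.5 K and 10 K:

```python
import numpy as np

def dbm_to_mw(p_dbm):
    return 10.0 ** (np.asarray(p_dbm) / 10.0)

def subtract_probe_background(T, p3f_dbm):
    # Convert the sweep to linear power, estimate the probe background as the
    # mean between 9.5 K and 10 K (above Tc), and subtract it point by point.
    p_lin = dbm_to_mw(p3f_dbm)
    background = p_lin[(T >= 9.5) & (T <= 10.0)].mean()
    return np.clip(p_lin - background, 0.0, None)   # sample contribution, in mW
```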
In Fig. 4, all six \(P_{3f}(T)\) curves exhibit a two-peak feature, and their \(T_{c}^{P_{3f}}\) is consistently around 6.7 K. The two-peak feature suggests that there might be two distinct mechanisms, or features, of nonlinearity. For both peaks, as the RF field amplitude increases (purple to red in Fig. 4), the \(P_{3f}(T)\) maximum increases; in addition, the \(P_{3f}(T)\) maximum and the \(P_{3f}(T)\) low-temperature end both show up at a lower temperature. These features of \(P_{3f}(T)\) are listed in Table 1. We will see that the four features of \(P_{3f}(T)\) listed in Table 1 are the key features of all the nonlinear data in the sense that they show up in all the Nb/Cu film measurement results (see Fig. 4 and Fig. 6) and also in numerical simulations of superconductor nonlinear response (see Fig. 9, Fig. 13, and Fig. 14).
\(P_{3f}(T)\) is measured for the other Nb/Cu samples in the same manner as for the HiPIMS 25 V bias Nb/Cu sample. The HiPIMS 25 V bias sample's \(P_{3f}(T)\) shows the two-peak feature, suggesting that the measured \(P_{3f}(T)\) is the superposition of two defect signals. On the other hand, for some of the Nb/Cu samples (75 V bias, 100 V bias, 125 V bias), the \(P_{3f}(T)\) defect signal shows a single-peak structure, suggesting that the measured \(P_{3f}(T)\) captures only one defect signal. Fig. 6 shows representative \(P_{3f}(T)\) curves for the 75 V bias sample (Fig. 6 (a)) and the 125 V bias sample (Fig. 6 (b)). For both Fig. 6 (a) and (b), \(P_{3f}(T)\) displays Features 1, 2, and 3 (Table 1), suggesting that these features are quite universal. Although we have no access to the temperature regime below 3.5 K, Feature 4 (Table 1) is likely present based on \(P_{3f}(T)\) above 3.5 K.
Besides these common features, Fig. 6 (a) and Fig. 6 (b) do have a qualitative difference. In Fig. 6 (a), a stronger RF field amplitude leads to a stronger \(P_{3f}\) (the red curve is above the blue curve) for all temperatures. In Fig. 6 (b), on the contrary, for \(T>4.4\) K, a stronger RF field amplitude leads to a weaker \(P_{3f}\) (the red curve is below the blue curve). In Sec. IV.4, we will show that such a difference could be related to how deep an RF vortex semi-loop penetrates into a sample through a surface defect.
The results of the defect signal and the Nb signal of all seven Nb/Cu samples are summarized in Table 2. For the six HiPIMS Nb/Cu samples, it is quite universal that \(P_{3f}\) measurements reveal the intrinsic Nb signal around 9 K and the extrinsic defect signal at low temperatures. Specifically, the defect signal with \(T_{c}^{P_{3f}}\) between 6.3 K and 6.8 K is observed for five out of the six HiPIMS Nb/Cu samples, suggesting that such a defect is a generic feature for these HiPIMS Nb/Cu samples. Moreover, the defect signals are always much stronger than the intrinsic Nb signal around 9 K, like the situation shown in Fig. 3. Scanning measurements are performed for three HiPIMS samples (25 V bias sample, 50 V bias sample and 125 V bias sample), and the results show that the defect signals are quite uniform at the \(\mu\)m-scale for all three samples. Data for the HiPIMS 50 V bias sample and the 100 V bias sample are shown in Appendix B.
One may wonder if hysteresis shows up in \(P_{3f}(T)\) measurements. To check hysteresis, we compare \(P_{3f}(T)\) in a warm-up process to \(P_{3f}(T)\) in a cool-down process for a sample showing a defect signal (Appendix C). As shown in Appendix C, \(P_{3f}(T)\) does not exhibit a clear hysteresis.
## IV Discussion
### Introduction to numerical simulations
In our measurements, the applied field is a localized RF magnetic field instead of a uniform DC magnetic field. In addition, the RF magnetic field produced by the probe is non-uniform and is similar to the field of a point dipole. In other words, the situation is different from the case of "a constant DC magnetic field parallel to the sample surface". As a result, the third harmonic response must be studied more carefully.
The Time-Dependent Ginzburg-Landau (TDGL) model is widely used for studying vortex behavior in superconductors [20, 29, 77, 78, 79, 80, 81, 82]. In particular, vortices (see Sec. IV.2) and the proximity effect (see Sec. IV.3) are incorporated naturally in TDGL simulations and hence TDGL is a good tool for SRF material science.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Sample & Fixed-point measurement & Scanning \\ \hline HiPIMS, 125 V bias & \(T_{c}^{P_{3f}}\) = 6.8 K defect & defect is everywhere \\ \hline HiPIMS, 100 V bias & Nb's signal around 8.8 K; \(T_{c}^{P_{3f}}\) = 6.5 K defect & N/A \\ \hline HiPIMS, 75 V bias & \(T_{c}^{P_{3f}}\) = 6.3 K defect & N/A \\ \hline HiPIMS, 50 V bias & Nb's signal around 8.9 K; \(T_{c}^{P_{3f}}\) = 6.4 K and 6.8 K defects & defects are everywhere \\ \hline HiPIMS, 25 V bias & Nb's signal around 9.1 K; two \(T_{c}^{P_{3f}}\) = 6.7 K defects & defects are everywhere \\ \hline HiPIMS, no bias & Nb's signal around 9 K & N/A \\ \hline DCMS, no bias & No recognizable signal & N/A \\ \hline \end{tabular}
\end{table}
Table 2: Summary of the defect signal (red) and the Nb signal (blue) of the seven Nb/Cu samples.
Figure 6: \(P_{3f}(T)\) for two input powers for the HiPIMS 75 V bias Nb/Cu sample with an input frequency of 1.98 GHz (a) and for the HiPIMS 125 V bias Nb/Cu sample with an input frequency of 1.66 GHz (b).
To better understand our data, numerical simulations of the TDGL equations are performed. The full TDGL equations must be solved in this case because the superconductor is subjected to a time-dependent and inhomogeneous RF magnetic field. We do not assume or impose any spatial symmetries in the model, and solve Maxwell's equations for the dipole in free space above the superconductor, as well as inside the superconductor [77]. The TDGL equations solved are the same as those discussed in Ref. [77].
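For orientation, one common dimensionless form of the gauge-invariant TDGL system (written in the zero-electric-potential gauge) is

\[\frac{\partial\psi}{\partial t}=-\left(\frac{i}{\kappa}\nabla+\mathbf{A}\right)^{2}\psi+\psi-|\psi|^{2}\psi,\]

\[\frac{\partial\mathbf{A}}{\partial t}=\frac{1}{2i\kappa}\left(\psi^{*}\nabla\psi-\psi\nabla\psi^{*}\right)-|\psi|^{2}\mathbf{A}-\nabla\times\nabla\times\mathbf{A},\]

with lengths in units of \(\lambda\) and fields in units of \(\sqrt{2}H_{c}\). Conventions for the time unit and coefficients vary between references, and the temperature dependence enters through the material parameters (equivalently through \(\xi(T)\), \(\lambda(T)\), and the local \(T_{c}\)); the precise equations, parameters, and boundary conditions solved in this work are those of Ref. [77].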
In the simulations, the smooth and flat superconducting sample occupies the \(z<0\) region, and the magnetic writer probe is approximated as a pointlike magnetic dipole with a sinusoidal time-dependent magnetic moment (\(M_{dp}sin(\omega t),0,0\)) (namely an RF magnetic dipole pointing in the x direction) whose frequency is 1.7 GHz (\(\omega/2\pi\)=1.7 GHz). The RF dipole is located at \((0,0,h_{dp})\) with \(h_{dp}\)=400 nm. That is, the RF magnetic dipole is above the superconducting sample and parallel to the surface of the sample. Since the magnetic field produced by the RF dipole is non-uniform, the peak RF magnetic field amplitude experienced by the superconductor is specified and is denoted as \(B_{pk}\). Here \((x,y,z)=(0,0,0)\) is the location in the sample that experiences the strongest field, and hence \(B_{pk}\) is the RF field amplitude at \((x,y,z)=(0,0,0)\). Because the RF field is localized, nontrivial dynamics of the sample (vortex nucleation, for example) would show up only in the region that is underneath the RF magnetic dipole (namely near \((x,y,z)=(0,0,0)\)), while the region that is far away from the RF magnetic dipole would be in the vortex-free Meissner state.
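Because the GHz free-space wavelength is vastly larger than the micron-scale geometry, the applied field can be evaluated quasi-statically from the point-dipole formula. A short sketch of \(B_{pk}\) for this geometry (the moment amplitude `M_dp` below is a hypothetical value chosen only for illustration):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def dipole_field(m_vec, r_dipole, r_obs):
    """Quasi-static field of a point dipole: B = (mu0/4pi)[3(m.rhat)rhat - m]/r^3."""
    r = np.asarray(r_obs, float) - np.asarray(r_dipole, float)
    rn = np.linalg.norm(r)
    rhat = r / rn
    return MU0 / (4 * np.pi) * (3 * rhat * np.dot(m_vec, rhat) - m_vec) / rn**3

h_dp = 400e-9   # dipole height above the surface (from the text)
M_dp = 1e-14    # A*m^2; hypothetical moment amplitude, for illustration only
B = dipole_field(np.array([M_dp, 0.0, 0.0]), (0.0, 0.0, h_dp), (0.0, 0.0, 0.0))
B_pk = np.linalg.norm(B)   # here equal to MU0*M_dp/(4*pi*h_dp**3), ~16 mT
```

For an x-oriented dipole directly above the origin, \(\mathbf{m}\cdot\hat{\mathbf{r}}=0\) and the expression collapses to \(B_{pk}=\mu_{0}M_{dp}/(4\pi h_{dp}^{3})\).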
In the simulations, TDGL is used to calculate the time evolution of the order parameter and the vector potential as the superconducting sample is stimulated by the time-dependent RF field produced by the horizontal point dipole above it. (There is no DC magnetic field in the simulations.) With the order parameter and the vector potential, the screening current and hence the magnetic field associated with that screening current (response of the sample) can be calculated. The response magnetic field is calculated at the location of the point dipole and it is assumed that this time-varying magnetic field induces a voltage wave that propagates up to the spectrum analyzer at room temperature. \(\sqrt{P_{3f}}\) is proportional to the third Fourier component of the magnetic field (generated by the screening current) at the dipole location.
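A minimal sketch of that last step, assuming the response field at the dipole location has been sampled over an integer number of RF periods:

```python
import numpy as np

def third_harmonic_amplitude(b_t, dt, f_drive):
    """|B_3f| from a record of the response field at the dipole location;
    sqrt(P3f) is proportional to this amplitude. The record is assumed to
    span an integer number of drive periods so the 3f bin is well defined."""
    n = len(b_t)
    spec = np.fft.rfft(b_t) / n
    freqs = np.fft.rfftfreq(n, dt)
    k = np.argmin(np.abs(freqs - 3.0 * f_drive))   # bin nearest 3*f_drive
    return 2.0 * np.abs(spec[k])
```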
Most TDGL treatments assume a 2D sample and fields that are uniform in the third dimension. This oversimplifies the problem. It creates "artificial features that extend uniformly" in the third dimension and thus creates infinitely long vortices in the superconductor. Our approach (multi-domain 3D simulation) does not make such unrealistic assumptions. In addition, our experiment and model examine the properties of magnetic vortex semi-loops, which are thought to be the generic types of RF vortex excitations created at the surface of SRF cavities [76].
In the following, we first consider the case of a defect-free bulk Nb (\(T_{c}=9.3\) K) (Sec. IV.2) for developing central concepts relevant to nonlinear response and then propose a surface defect model (Sec. IV.3) whose \(P_{3f}\) shares common features with the experimental results (the four key features in Table 1). After that, we consider the surface defect model with various defect heights (Sec. IV.4) and show that \(P_{3f}(T)\) can reveal how deep an RF vortex semi-loop penetrates into a sample through a surface defect. We then study another surface defect model (Sec. IV.5) and compare its \(P_{3f}(T)\) with the \(P_{3f}(T)\) of the first surface defect model and show that \(P_{3f}(T)\) can reveal how many RF vortex semi-loops are nucleated by surface defects in each half of the RF cycle. With physical insights gained by these simulations, the Nb/Cu films can be compared further (Sec. IV.6), and the best Nb/Cu film for SRF applications can be identified.
Parameters for all the TDGL simulations are given in Appendix D.
### Calculated bulk Nb nonlinear response
Unlike a DC vortex whose behavior shows no time dependence, an RF vortex shows nontrivial dynamics, and should be examined in a time-domain manner. Here we demonstrate the time-domain analysis (focusing on the dynamics of RF vortices) for a specific RF field amplitude (\(B_{pk}=61.6\) mT) and a specific temperature (8.23 K). (Material parameters of Nb are used in the defect-free bulk Nb simulations. See Appendix D.)
The dynamics of RF vortex semi-loops for bulk Nb during the first half of an RF cycle (frequency = 1.7 GHz, period = \(5.88\times10^{-10}\) s) is shown in Fig. 7, for a fixed RF field amplitude (\(B_{pk}=61.6\) mT) and a fixed temperature (8.23 K). Fig. 7 (a)-(g) show the space and time dependence of the square of the normalized order parameter (\(|\psi/\psi_{\infty}|^{2}\)) (here 1 means full superconductivity and 0 means no superconductivity); the black region is where \(|\psi/\psi_{\infty}|^{2}<0.03\). Since the order parameter is suppressed significantly at the center of a vortex core, a vortex can be visualized by tracking the black region. Here \(\psi_{\infty}=\psi_{\infty}(T)\) is the value of the order parameter deep inside bulk Nb at temperature T.
In the early stage of the RF cycle, there is no RF vortex (Fig. 7 (a) and (b)), and then an RF vortex semi-loop that is parallel to the direction of the RF dipole (which points in the x direction) shows up (Fig. 7 (c), (d), (e) and (g)). The RF vortex semi-loop disappears later in the RF cycle (Fig. 7 (f)).
Besides examining the spatial distribution of the order parameter, another signature of vortices is the phase of the order parameter. Because Ginzburg-Landau theory is based on the existence of a single-valued complex superconducting order parameter (\(\psi=|\psi|e^{i\theta}\)), the phase \(\theta\) must change by integral multiples of \(2\pi\) in making a
closed contour (see equation (4.45) in [83]), namely
\[\oint ds\cdot\nabla\theta=2\pi N,\]
where N is a positive or negative integer, or zero. The integral \(\frac{1}{2\pi}\oint ds\cdot\nabla\theta\) is quantized, and corresponds to the number of vortices enclosed by the closed contour.
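A short sketch of evaluating \(\frac{1}{2\pi}\oint ds\cdot\nabla\theta\) numerically, assuming the phase \(\theta\) has been sampled at consecutive points around the closed contour densely enough that each true step is smaller than \(\pi\) in magnitude:

```python
import numpy as np

def winding_number(theta_contour):
    """Delta(theta)/2pi around a closed contour, given phases at consecutive
    contour points (last point adjacent to the first)."""
    theta = np.asarray(theta_contour)
    dtheta = np.diff(theta, append=theta[:1])          # include the closing segment
    dtheta = (dtheta + np.pi) % (2.0 * np.pi) - np.pi  # wrap each step into [-pi, pi)
    return int(np.round(dtheta.sum() / (2.0 * np.pi)))
```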
Fig. 7 (i) shows the value of the integral \(\frac{1}{2\pi}\oint ds\cdot\nabla\theta\) (namely \(\Delta\theta/2\pi\)) as a function of time. The contour is on the YZ plane and is large enough to enclose the entire nontrivial region. Note that Fig. 7 (h) and Fig. 7 (i) share a common horizontal axis. Based on Fig. 7 (i), there are no vortices at the moments of (a), (b) and (f), and there is one vortex at the moments of (c), (d) and (e), which agrees with the order parameter analysis (Fig. 7 (a)-(g)).
The time-domain analysis described here (space and time dependence of the order parameter (Fig. 7 (a)-(g)) and \(\Delta\theta/2\pi\) (Fig. 7 (i))) is applied to all TDGL simulations whenever we check whether or not there are RF vortex semi-loops.
Figure 7: Time-domain analysis for the dynamics of RF vortex semi-loops in bulk Nb. A time-dependent magnetic moment (RF dipole) is 400 nm above the superconductor surface, pointing in the x-direction, and producing \(B_{pk}=61.6\) mT at T=8.23 K. (a)-(f) show the square of the normalized order parameter (\(|\psi/\psi_{\infty}|^{2}\)) on the XZ plane cross-section at different times during the first half of the RF period. The black region is where \(|\psi/\psi_{\infty}|^{2}<0.03\). (g) shows \(|\psi/\psi_{\infty}|^{2}\) on the YZ plane cross-section at the same moment as (d). (h) RF field at \((x,y,z)=(0,0,0)\) versus time during the first half of the RF period. (i) Phase change for a closed contour (on the YZ plane) that is large enough to enclose the entire nontrivial region. Red crosses in (h) and (i) correspond to snapshots (a)-(g).
Figure 8: (a) TDGL simulation result of \(P_{3f}(T)\) for a fixed RF field amplitude (\(B_{pk}=61.6\) mT) imposed by a point dipole source for bulk Nb. From left to right, \(P_{3f}\) is weak at low temperatures (below 8.1 K), then increases with temperature (between 8.1 K and 8.8 K), and drops with the temperature at high temperatures (above 8.8 K). (b)-(e) show \(|\psi/\psi_{\infty}|^{2}\) on the YZ plane cross-section at 8.04 K, 8.14 K, 8.42 K, and 8.60 K, respectively. The black region is where \(|\psi/\psi_{\infty}|^{2}<0.03\). These snapshots are taken at \(\omega t=0.6\pi\). (f) shows \(\Delta\theta/2\pi\) versus time for the four temperatures.
Equipped with the picture of RF vortex nucleation, now let's move on to the resulting \(P_{3f}\). The simulation result of \(P_{3f}(T)\) for a fixed RF field amplitude (\(B_{pk}=61.6\) mT) for bulk Nb is shown in Fig. 8 (a). The bell-shaped structure of \(P_{3f}(T)\) in Fig. 8 (a) can be decomposed into three segments (separated by the two dashed vertical black lines) and can be understood with the vortex penetration field \(B_{vortex}^{RF}(T)\) (see Fig. 9) and the strength of superconductivity. An RF vortex semi-loop shows up when \(B_{pk}>B_{vortex}^{RF}(T)\). Below 8.1 K, \(B_{vortex}^{RF}(T)>B_{pk}\) and hence the entire bulk Nb is in the vortex-free Meissner state (see Fig. 8 (b) and the purple curve in (f)), whose nonlinear response is weak. As temperature increases, \(B_{vortex}^{RF}(T)\) decreases, and hence \(B_{pk}\) exceeds \(B_{vortex}^{RF}(T)\) at a certain temperature depending on the strength of the RF stimulus. In this simulation (\(B_{pk}=61.6\) mT), the full time-domain simulation (see the discussion for Fig. 7) shows that one RF vortex semi-loop (underneath the RF magnetic dipole) penetrates the surface of the bulk Nb when the temperature is around 8.14 K (see Fig. 8 (c) and the blue curve in (f)). Roughly speaking, this implies that \(B_{vortex}^{RF}(T=8.14\,\mathrm{K})\approx 61.6\) mT. As temperature increases further, \(B_{vortex}^{RF}(T)\) drops and vortex nucleation becomes more favorable; indeed, the second RF vortex semi-loop shows up around 8.6 K (see Fig. 8 (e) and the red curve in (f)), and thus \(P_{3f}\) increases with temperature between 8.1 K and 8.8 K. Besides examining the order parameter (Fig. 8 (b)-(e)), Fig. 8 (f) also shows how the vortex number changes with temperature.
The nonlinear response of the superconductor is determined not only by the number of vortices (as described above in the language of \(B_{vortex}^{RF}(T)\)) but also by the strength of superconductivity. As the temperature approaches the transition temperature of a superconductor, its superconductivity and hence nonlinear response becomes weak. Such a temperature dependence leads to the decreasing tail of \(P_{3f}(T)\) above 8.8 K in Fig. 8 (a).
The simulation indicates that \(P_{3f}\) and the presence of RF vortex semi-loops are indeed closely related. Specifically, \(P_{3f}\) is weak as the bulk Nb is in the vortex-free Meissner state (below 8.1 K) and is strong in the presence of RF vortex semi-loops (above 8.1 K).
Fig. 9(a) summarizes the simulation results of \(P_{3f}(T)\) for three different RF field amplitudes for bulk Nb. For all three RF field amplitudes, \(P_{3f}\) is weak at low temperatures (\(B_{pk}<B_{vortex}^{RF}(T)\), vortex-free Meissner state), arises at high temperatures (\(B_{pk}>B_{vortex}^{RF}(T)\), RF vortex semi-loops), and then drops as the temperature nears the critical temperature. For the red curve, the first vortex semi-loop shows up around 8.14 K, which implies \(B_{vortex}^{RF}(T=8.14\,\mathrm{K})\approx 61.6\) mT; for the blue curve, the first vortex semi-loop shows up around 8.84 K, which implies \(B_{vortex}^{RF}(T=8.84\,\mathrm{K})\approx 39.2\) mT. Fig. 9(b) illustrates \(B_{vortex}^{RF}(T)\) and the RF field amplitude dependence of the temperature range of \(P_{3f}(T)\)'s bell-shaped structure. In this RF field amplitude-temperature phase diagram, the vortex-free Meissner state occupies the region below \(B_{vortex}^{RF}(T)\) and RF vortex semi-loops show up in the region above \(B_{vortex}^{RF}(T)\). It is clear that the bell-shaped structure of \(P_{3f}(T)\) extends to lower temperatures as the RF field amplitude becomes stronger (from the blue to the green to the red in Fig. 9(a) and (b)) because of the temperature dependence of \(B_{vortex}^{RF}(T)\), and this explains Feature 4 in Table 1. Note that the four key features of \(P_{3f}(T)\) (Table 1) are clearly observed in Fig. 9(a) (except that the onset temperature of \(P_{3f}(T)\) is equal to, but not below, 9.3 K, since this simulation is for defect-free bulk Nb), suggesting that these key features are signatures of RF vortex nucleation.
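The geometry of Fig. 9(b) can be summarized in a few lines: the onset temperature of the bell-shaped structure is where the horizontal line \(B_{pk}\) crosses \(B_{vortex}^{RF}(T)\). The sketch below uses an assumed illustrative parametrization \(B_{vortex}^{RF}(T)=B_{0}\left[1-(T/T_{c})^{2}\right]\), not the curve computed by the TDGL simulations:

```python
import numpy as np

def onset_temperature(B_pk, B0, Tc):
    """Lowest T at which B_pk exceeds an assumed penetration field
    B_vortex(T) = B0*(1 - (T/Tc)**2); B0 is an illustrative input,
    not a value extracted from the simulations."""
    if B_pk >= B0:
        return 0.0                     # vortices enter at every temperature
    return Tc * np.sqrt(1.0 - B_pk / B0)

# A stronger drive crosses the B_vortex(T) curve at a lower temperature,
# which is the trend shown by the three horizontal lines in Fig. 9(b):
for b in (39.2e-3, 50.0e-3, 61.6e-3):  # drive amplitudes in tesla
    print(b, onset_temperature(b, B0=0.26, Tc=9.3))
```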
Equipped with the intuition of \(P_{3f}\) and RF vortex semi-loops and their temperature dependence and RF field amplitude dependence for the defect-free bulk Nb, now let's move on to the case with surface defects, which is the main focus of this paper.
Figure 9: (a) TDGL simulation result of \(P_{3f}(T)\) for three different RF field amplitudes for bulk Nb. (b) Schematic of the vortex penetration field \(B_{vortex}^{RF}(T)\) (the black curve) and the temperature range of \(P_{3f}(T)\)’s bell-shaped structure for the three RF field amplitudes (the three colorful horizontal lines). The y-axis is the RF field amplitude. Note that (a) and (b) share the same horizontal axis and the same color-coded RF field amplitudes.
### Simulation of vortex nucleation in grain boundaries
It is known that surface defects can serve as weak spots for RF vortex nucleation. The dynamics of RF vortices penetrating a sample surface through surface defects can be analyzed from two aspects: how many RF vortices are nucleated by surface defects, and how deep RF vortices travel into a sample through surface defects in half an RF cycle. These two ideas could apply to RF vortex nucleation by various surface defects (grain boundaries, dislocations, etc.). In the following, we consider RF vortex nucleation in the case of grain boundaries, but the lessons about the features of \(P_{3f}\) and RF vortex nucleation should be generic.
As discussed in Sec. I, various kinds of surface defects could exist in SRF materials. One possible scenario of surface defects is that the grain boundaries of Nb are filled with the oxide of Nb [84]. The oxidation of Nb when it is exposed to air is a well-known phenomenon [13, 14, 15, 16, 85]. In such oxidation, oxygen forms a solid solution in Nb and produces materials with critical temperatures below the bulk \(T_{c}\) of pure Nb (9.3 K) [86, 12, 87]. Nb samples with higher oxygen content tend to have a lower critical temperature. For instance, \(T_{c}\) drops to around 7.33 K for 2% oxygen content, and drops to around 6.13 K for 3.5% oxygen content [87].
Motivated by the oxidation of Nb in grain boundaries, here we model the oxide of Nb as a low-\(T_{c}\) material (impurity phase) and consider a surface defect model that the grain boundaries of Nb are filled with the low-\(T_{c}\) material. Such grain boundaries might serve as weak spots for vortex nucleation. As shown later in this section, the
Figure 10: Sketch of the side view (XZ plane) of the grain boundary model. The origin is marked by a blue dot. The RF dipole is represented by the blue arrow. The schematic is not to scale (\(h_{dp}\)=400 nm and \(h_{GB}\)=200 nm). Since the physics around the origin plays a dominant role, the region far away from the origin is set to be defect-free bulk Nb to reduce computational time. As a surface defect model, the grain boundary has a finite height \(h_{GB}\). The top view of the region indicated by the red dashed line is shown in Fig. 11 (b), and the top view of the region indicated by the green dashed line is shown in Fig. 18.
Figure 11: A top view schematic of the grain boundary model. (a) Illustration of Nb grains and grain boundaries, with red being Nb grains and blue being grain boundaries filled with impurities. The realization of this illustration used in the TDGL simulations is shown in (b). (b) The distribution of critical temperature on the XY plane around the origin (corresponds to the region indicated by the red dashed line in Fig. 10), with red being Nb and blue being low-\(T_{c}\) impurity with \(T_{c}^{\text{impurity}}=4\) K. \((x,y,z)=(0,0,0)\) is at the center. The RF dipole locates at \((x,y,z)=(0,0,400\text{nm})\) and points in the x direction. The white dashed curve indicates the grain boundary that is roughly parallel to the x direction. Because the model works well only for the region around the center, only the screening current inside the white dashed rectangle is collected when calculating \(P_{3f}\). (c) A snapshot of the distribution of normalized order parameter \(|\psi/\psi_{\infty}|\) obtained by a TDGL simulation with the temperature being 5.4 K (higher than \(T_{c}^{\text{impurity}}\)) and \(B_{pk}=56.6\) mT. This snapshot is taken at the end of an RF cycle, namely when the RF field drops to zero (\(\omega t=2\pi\) and hence \(B_{RF}sin(\omega t)=0\)). The normalized order parameter of the dark blue region is around 0.15.
proximity effect is active in the grain boundaries. Here we consider one possible toy-model realization of "Nb grain boundaries filled with low-\(T_{c}\) material"; this grain boundary model is, of course, just one scenario of surface defects that might qualitatively explain the experimental results.
The RF dipole is located at \((0,0,h_{dp})\) and hence vortex semi-loops first show up near \((x,y,z)=(0,0,0)\). Therefore, the physics around the origin plays a dominant role. As an approximation, surface defects (Nb grain boundaries filled with low-\(T_{c}\) material) are introduced near the origin (illustrated in Fig. 10), while the region far away from the origin is set to be defect-free bulk Nb to reduce computational time. As a result, only the region around the origin characterizes the grain boundary scenario accurately, and thus the screening current is collected only from this accurate region when calculating \(P_{3f}\).
Fig. 10 shows the side view of the grain boundary model. As a surface defect model, the grain boundary extends (in the z direction) from the sample's surface to a finite depth \(h_{GB}\) (grain boundary height), and the region below is set to be bulk Nb. (\(z=0\): sample surface; \(0>z>-h_{GB}\): defect whose XY cross-section is shown in Fig. 11 (b); \(z<-h_{GB}\): bulk Nb). Here the grain boundary height \(h_{GB}\) is set to be 200 nm.
The side view of the grain boundary model is shown in Fig. 10, and the top view (\((x,y,z=0)\)) is shown in Fig. 11. Fig. 11 (a) is an illustration of an Nb surface containing Nb grains (red) and grain boundaries filled with a low-\(T_{c}\) impurity (blue). The realization used in the TDGL simulations is shown in Fig. 11 (b). Fig. 11 (b) is a top view of the sample's critical temperature distribution around the origin of the grain boundary model: the red region means Nb with \(T_{c}=9.3\) K, and the blue region means low-\(T_{c}\) impurity with \(T_{c}^{\rm impurity}=4\) K. (Parameters of the grain boundary model are given in Appendix D.) The origin is at the center of Fig. 11 (b). The geometry is intentionally asymmetric in both the x direction and the y direction to prevent symmetry-induced artifacts. A top view of the sample's critical temperature distribution with a broader scope (containing the defect region together with the bulk Nb region) is shown in Appendix E.
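One schematic way to construct such a \(T_{c}(x,y)\) map is to place low-\(T_{c}\) grain boundaries on the near-equidistant loci between random grain centers (a Voronoi-edge construction). This is an illustration only, not the specific realization of Fig. 11 (b), and all parameters are hypothetical:

```python
import numpy as np

def tc_map(nx, ny, n_grains, gb_width, tc_nb=9.3, tc_imp=4.0, seed=0):
    """Tc(x, y) map with low-Tc grain boundaries: pixels nearly equidistant
    from their two closest (random) grain centers lie on a Voronoi edge and
    are assigned the impurity Tc."""
    rng = np.random.default_rng(seed)
    centers = rng.uniform(0, [nx, ny], size=(n_grains, 2))
    xx, yy = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    pts = np.stack([xx.ravel(), yy.ravel()], axis=1)
    d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
    d.sort(axis=1)                                  # two nearest centers first
    on_boundary = (d[:, 1] - d[:, 0]) < gb_width
    return np.where(on_boundary, tc_imp, tc_nb).reshape(nx, ny)
```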
An RF vortex semi-loop that nucleates in the sample tends to be parallel to the direction of the RF dipole, which points in the x direction. Therefore, as \(B_{pk}>B_{vortex}^{RF}(T)\), RF vortex semi-loops show up in the grain boundaries that are roughly parallel to the x direction. There is one grain boundary in Fig. 11 (b) that is roughly parallel to the x direction and it is marked by a white dashed curve. Grain boundaries are well-characterized inside the white dashed rectangle (the accurate region), and hence the screening current contribution to the signal recovered at the location of the dipole is collected only from inside the white dashed rectangle when calculating \(P_{3f}\).
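Operationally, obtaining \(P_{3f}\) from the collected screening current amounts to isolating the third-harmonic Fourier component of the time-domain response recovered at the dipole location. A minimal sketch of this extraction step, using a toy nonlinear response in place of the actual TDGL output (the variable names and the cubic toy nonlinearity are illustrative assumptions, not our simulation code):

```python
import numpy as np

# Toy stand-in for the collected screening-current response over one RF cycle:
# a drive at f0 plus a weak cubic nonlinearity that generates a 3*f0 component.
f0 = 1.7e9                          # drive frequency (Hz), illustrative
n = 4096                            # time samples per RF period
t = np.arange(n) / (n * f0)         # one full RF cycle
drive = np.sin(2 * np.pi * f0 * t)
signal = drive + 0.05 * drive**3    # hypothetical nonlinear response

# Complex amplitude of the third harmonic by direct projection.
a3 = 2.0 * np.mean(signal * np.exp(-1j * 2 * np.pi * 3 * f0 * t))
p3f_arb = np.abs(a3) ** 2           # third-harmonic power, arbitrary units
print(f"|A_3f| = {np.abs(a3):.4f}, P_3f ~ {p3f_arb:.2e} (arb. units)")
```

Repeating such an extraction at each temperature and field amplitude yields the \(P_{3f}(T)\) curves discussed below.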
It is worth mentioning that the proximity effect shows up naturally in TDGL simulations. Fig. 11 (c) shows a snapshot of the distribution of normalized order parameter \(|\psi/\psi_{\infty}|\) (here 1 means full superconductivity and 0 means no superconductivity) obtained by a TDGL simulation at a temperature of 5.4 K and \(B_{pk}=56.6\) mT. This snapshot is taken at the end of an RF cycle, namely when the RF field drops to zero (\(\omega t=2\pi\) and hence \(B_{RF}\sin(\omega t)=0\)). Due to the proximity effect, the normalized order parameter of the dark blue region is around 0.15 but not zero, even though the temperature (5.4 K) is higher than \(T_{c}^{\rm impurity}\) (4 K). As a result, a grain boundary filled with a low-\(T_{c}\) impurity can host RF vortices even for \(T>T_{c}^{\rm impurity}\).
Fig. 12 shows the simulation result of \(P_{3f}(T)\) for the grain boundary model shown in Fig. 10 (side view of the model) and Fig. 11 (top view of the model). Compared to the \(P_{3f}\) around 9 K, the \(P_{3f}\) between 4.5 K and 6 K is much stronger. In other words, in the presence of surface defects, \(P_{3f}\) generated by surface defects is much stronger than the intrinsic \(P_{3f}\) of Nb. In the following, we focus on the \(P_{3f}\) generated by surface defects.
Fig. 13 shows the simulation result of \(P_{3f}(T)\) for two different RF field amplitudes for the grain boundary model. At low temperatures, the sample is in the Meissner state and \(P_{3f}\) is weak. As temperature increases, RF vortex semi-loops nucleate in the grain boundary marked
Figure 12: TDGL simulation result of \(P_{3f}(T)\) for the grain boundary model shown in Fig. 10 and Fig. 11. Here \(h_{GB}=200\) nm and \(B_{pk}=56.6\) mT. Inset: enlargement of the figure above 8.1 K.
Figure 13: TDGL simulation result of \(P_{3f}(T)\) for two different RF field amplitudes for the grain boundary model shown in Fig. 10 and Fig. 11. Here \(h_{GB}=200\) nm.
by the white dashed curve in Fig. 11 (b) and result in strong \(P_{3f}\). (The existence of RF vortex semi-loops is verified by examining the order parameter in a time-domain manner as described in Fig. 7.) This can be interpreted as \(B_{pk}>B_{vortex}^{RF}(T)\), where \(B_{vortex}^{RF}(T)\) is the vortex penetration field of the region around that specific grain boundary. Note that RF vortex semi-loops show up in the grain boundary, indicating that the grain boundary serves as the weak spot for RF vortex nucleation.
Here the upper onset temperature of \(P_{3f}\) is around 5.9 K. The grain boundary model is a mixture of the \(T_{c}^{\rm Nb}=9.3\) K Nb and the \(T_{c}^{\rm impurity}=4\) K impurity, and hence \(T_{c}^{P_{3f}}\) is between 4 K and 9.3 K. Of course, the numerical value of \(T_{c}^{P_{3f}}\) depends on how the Nb and the impurity are distributed. Our objective with this model is not to propose a specific microstructure of the sample, but to illustrate the generic nonlinear properties of a proximity-coupled defective region.
Similar to the case of bulk Nb (Fig. 9(a)), \(P_{3f}(T)\) in Fig. 13 also exhibits the four key features (Table 1). Therefore, the ideas illustrated in Fig. 9(b) could describe the physics of the grain boundary model, with \(T_{c}^{P_{3f}}\) being 5.9 K but not 9.3 K and \(B_{vortex}^{RF}(T)\) being the vortex penetration field of the region around the specific grain boundary, but not a bulk property. The fact that experimental data (Fig. 4 and Fig. 6) and TDGL simulation of the grain boundary model (Fig. 13) both exhibit the four key features (Table 1) and a \(T_{c}^{P_{3f}}\) below 9.3 K suggests that "RF vortex semi-loops nucleate in grain boundaries full of low-\(T_{c}\) impurity" is indeed one of the possible mechanisms of the observed \(P_{3f}\) signals for the HiPIMS Nb/Cu samples.
### Effect of height of grain boundaries on \(P_{3f}(T)\)
If the RF field amplitude is not very strong, RF vortex semi-loops would stay in grain boundaries (\(0>z>-h_{GB}\)) instead of entering the bulk Nb regions. Therefore, \(h_{GB}\) determines how deep an RF vortex semi-loop penetrates into a sample through a surface defect. In Sec. IV.3, the grain boundary height \(h_{GB}\) is set to be 200 nm. Here we consider the effect of varying \(h_{GB}\), with everything else being the same as described in Sec. IV.3.
Fig. 14 (a) and (b) show the results for \(h_{GB}=150\) nm and \(h_{GB}=200\) nm, respectively. Fig. 14 (a) is similar to Fig. 6 (a), in the sense that a stronger RF field amplitude leads to a stronger \(P_{3f}\) (the red curve is above the blue curve) for all temperatures; Fig. 14 (b) is similar to Fig. 6 (b), in the sense that \(P_{3f}(T)\) shows a "crossing" effect: a stronger RF field amplitude leads to a weaker \(P_{3f}\) (the red curve is below the blue curve) for temperatures close to \(T_{c}^{P_{3f}}\). The temperature where the red curve (\(P_{3f}(T)\) with a strong RF field amplitude) and the blue curve (\(P_{3f}(T)\) with a weak RF field amplitude) cross is denoted as \(T^{*}\). For Fig. 14 (a), there is no crossing and hence \(T^{*}=T_{c}^{P_{3f}}=5.9\) K. For Fig. 14 (b), \(T^{*}=5.27\) K \(<T_{c}^{P_{3f}}=5.9\) K. Fig. 14 (c) shows how the crossing temperature \(T^{*}\) changes with the grain boundary height \(h_{GB}\). For a shallow grain boundary (small \(h_{GB}\)), there is no crossing and hence \(T^{*}=T_{c}^{P_{3f}}\). The crossing shows up when the grain boundary is beyond a critical depth. As the grain boundary height becomes larger, the crossing effect becomes more significant (\(T^{*}\) becomes smaller, meaning that the temperature window in which a stronger RF field amplitude leads to a weaker \(P_{3f}\) becomes larger) and eventually tends to saturate.
The crossing effect can be understood as follows. In experiments, \(P_{3f}\) is collected by the magnetic writer; in simulations, it is evaluated at the RF dipole location, which is at \(z=h_{dp}\) (above the sample surface). For a weak RF field amplitude, the RF vortex semi-loop in the grain boundary stays close to the sample's surface (\(z=0\)). As the RF field amplitude increases, the RF vortex semi-loop in the
Figure 14: TDGL simulation result of \(P_{3f}(T)\) for two different RF field amplitudes for the grain boundary model shown in Fig. 10 and Fig. 11 for \(h_{GB}=150\) nm (a) and for \(h_{GB}=200\) nm (b) (identical to Fig. 13). (c) Crossing temperature \(T^{*}\) as a function of grain boundary height \(h_{GB}\).
grain boundary is pushed toward the bottom of the grain boundary (\(z=-h_{GB}\)) (see Appendix F), which means that the RF vortex semi-loop is farther away from the magnetic writer or the RF dipole location (\(z=h_{dp}\)), and hence the measured \(P_{3f}\) becomes weaker. Such a phenomenon shows up only when the RF vortex semi-loop in the grain boundary can be pushed far away from the sample surface. For a shallow grain boundary, the RF vortex semi-loop always stays just below the sample surface instead of penetrating deep into the sample, and hence \(P_{3f}\) does not decrease as the RF field amplitude increases (no crossing effect).
The crossing effect is quantified by \(T^{*}\), which could be related to how deep an RF vortex semi-loop penetrates into a sample through a surface defect (denoted as \(h_{\rm penetration}^{\rm defect}\)).
### Simulation of a defect model with two grain boundaries
In the grain boundary model discussed in Sec. IV.3 and Sec. IV.4, there is only one single grain boundary underneath and roughly parallel to the RF dipole, and thus RF vortex semi-loops nucleate in one single grain boundary and \(P_{3f}(T)\) shows a single-peak feature. Such a scenario corresponds to the case that the sample's grain boundary density is low. For a sample whose grain boundary density is high, it can be modeled as a grain boundary model that contains two grain boundaries underneath and roughly parallel to the RF dipole.
Here we consider a grain boundary model that contains two grain boundaries that are near the origin and roughly parallel to the x direction. The basic setting of the model is the same as described in Sec. IV.3. The only difference is how the Nb and the impurity are distributed horizontally, as shown in Fig. 15 (a). Fig. 15 (a) shows the top view of the critical temperature distribution and Fig. 15 (b) shows the TDGL simulation result of \(P_{3f}(T)\) for this model.
\(P_{3f}(T)\) in Fig. 15 (b) exhibits a two-peak feature, which can be understood as follows. At low temperatures, the sample is in the Meissner state and \(P_{3f}\) is weak. As temperature increases, around 4.93 K an RF vortex semi-loop nucleates in the top grain boundary and results in the lower temperature peak, and then around 5.67 K another RF vortex semi-loop nucleates in the bottom grain boundary and results in the higher temperature peak. The nucleation of the two RF vortex semi-loops is verified by monitoring \(\Delta\theta/2\pi\) (the same analysis as shown in Fig. 8). Note that the two-peak feature of the \(P_{3f}(T)\) in Fig. 15 (b) (simulation) is also observed in Fig. 4 (measurement).
In this model, RF vortex semi-loops nucleate in both grain boundaries and thus result in the two-peak feature of \(P_{3f}(T)\). On the contrary, in Sec. IV.3 and Sec. IV.4, only one RF vortex semi-loop nucleates in a single grain boundary, resulting in the single-peak feature of \(P_{3f}(T)\). In other words, the number of \(P_{3f}(T)\) peaks is an indication of the density of surface defects that nucleate RF vortex semi-loops.
### Nb/Cu samples comparison
Larger values of \(P_{3f}\) are closely related to the presence of RF vortex semi-loops. By studying \(P_{3f}\), we investigate surface defects that nucleate RF vortex semi-loops, which are the kinds of surface defects that are closely related to the RF performance of SRF cavities. Surface defects that don't nucleate RF vortex semi-loops are beyond consideration here. For example, if a grain boundary is too narrow (much smaller than the coherence length of a sample) to nucleate an RF vortex semi-loop, it would be less likely to produce large amounts of \(P_{3f}\). Such a grain boundary is likely to be harmless to SRF applications. In the following, surface defects refer to the kind of surface defects
Figure 15: A grain boundary model containing two grain boundaries that are roughly parallel to the x direction. (a) The distribution of critical temperature on the XY plane around the origin, with red being Nb and blue being low-\(T_{c}\) impurity with \(T_{c}^{\rm impurity}=4\) K. \((x,y,z)=(0,0,0)\) is at the center. The RF dipole is located at \((x,y,z)=(0,0,400\,{\rm nm})\), which is above the center of the image, and points in the x direction. The two white dashed curves indicate the grain boundaries (the top grain boundary and the bottom grain boundary) that are roughly parallel to the x direction. Because the model works well only for the region around the center, only the screening current inside the white dashed circle is collected when calculating \(P_{3f}\). (b) TDGL simulation result of \(P_{3f}(T)\) for the grain boundary model shown in (a). Here \(h_{GB}=240\) nm, and \(B_{pk}=56.6\) mT.
that nucleate RF vortex semi-loops.
Equipped with the simulation results, \(P_{3f}(T)\) of surface defects can be analyzed further. First, the number of \(P_{3f}(T)\) peaks can be related to the density of surface defects (\(\rho^{\text{defect}}\)). Single-peak \(P_{3f}(T)\) corresponds to a low density of surface defects, and two-peak \(P_{3f}(T)\) corresponds to a high density of surface defects. Second, the crossing effect of \(P_{3f}(T)\) can be related to how deep an RF vortex semi-loop penetrates into a sample through a surface defect (\(h_{\text{penetration}}^{\text{defect}}\)). \(T^{*}=T_{c}^{P_{3f}}\) (no crossing) corresponds to a small \(h_{\text{penetration}}^{\text{defect}}\) (shallow), and \(T^{*}<T_{c}^{P_{3f}}\) (crossing) corresponds to a large \(h_{\text{penetration}}^{\text{defect}}\) (deep).
The qualitative surface defect properties assigned to all seven Nb/Cu samples are summarized in Table 3. The first column shows an estimate of the density of surface defects (\(\rho^{\text{defect}}\)) and the second column shows how deep an RF vortex semi-loop penetrates into a sample through a surface defect (\(h_{\text{penetration}}^{\text{defect}}\)). RF vortex penetration through surface defects is one of the main enemies of SRF applications. To achieve good SRF performance, the surface defect density should be low and the defects should be shallow. From the point of view of these two properties, the HiPIMS 75 V bias Nb/Cu sample is the best sample among the five HiPIMS Nb/Cu samples that have a non-zero voltage bias.
In the numerical simulations of this work, surface defects that nucleate RF vortex semi-loops are modeled as grain boundaries. Since \(P_{3f}\) is associated with the behavior of RF vortex semi-loops, it is likely that the ideas of \(\rho^{\text{defect}}\) (how many RF vortex semi-loops nucleated by surface defects) and \(h_{\text{penetration}}^{\text{defect}}\) (how deep RF vortex semi-loops travel) could apply to surface defects other than grain boundaries (dislocation tangles, for example).
## V Conclusion
In this work, we study seven Nb/Cu films that are candidates for SRF applications. Local \(P_{3f}\) measurements reveal surface defects with \(P_{3f}\) onset temperatures between 6.3 K and 6.8 K for five out of the six HiPIMS Nb/Cu samples, indicating that such defects are a generic feature of these air-exposed HiPIMS Nb/Cu films. \(P_{3f}\) coming from the low-\(T_{c}\) surface defect is much stronger than the intrinsic Nb response around 9 K, suggesting that our local \(P_{3f}\) measurement is sensitive to surface defects. With the capability of \(\mu\)m-scale scanning, it is found that such a defect is quite uniform in space on the \(\mu\)m-scale.
TDGL simulations are performed to analyze the experimental results further. In particular, the simulations suggest that the density of surface defects that nucleate RF vortices and how deep RF vortices travel through these surface defects can be extracted qualitatively from our local \(P_{3f}\) measurements. From the point of view of these two properties, the HiPIMS 75 V bias Nb/Cu sample is the best sample for SRF applications.
## VI Acknowledgement
The authors would like to thank Javier Guzman from Seagate Technology for providing magnetic write heads. C.Y.W. would like to thank Bakhrom Oripov and Jingnan Cai for helpful discussions. This work is funded by the U.S. Department of Energy/High Energy Physics through grant No. DE-SC0017931 and the Maryland Quantum Materials Center.
## Appendix A Sample preparation
Here we discuss how the seven Nb/Cu films studied in this paper are prepared.
The substrate used for the Nb coatings is a 2 mm thick, oxygen-free electronic (OFE) copper disk measuring 75
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Parameter** & **HiPIMS** & **DCMS** \\ \hline Average power (kW) & 1.3 & 1.3 \\ \hline Discharge voltage (V) & \(-600\) & \(-377\) \\ \hline Current (A) & 160 (peak current) & 3.4 \\ \hline Working gas & Kr & Kr \\ \hline Pressure (mbar) & \(2.3\times 10^{-3}\) & \(2.3\times 10^{-3}\) \\ \hline Temperature (\(^{\circ}\)C) & 150 & 150 \\ \hline Coating duration (min) & 60 & 60 \\ \hline \end{tabular}
\end{table}
Table 4: Sample preparation parameters of the seven Nb/Cu films studied in this paper.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Sample & \(\rho^{\text{defect}}\) & \(h_{\text{penetration}}^{\text{defect}}\) \\ \hline HiPIMS, 125 V bias & low & deep \\ \hline HiPIMS, 100 V bias & low & deep \\ \hline HiPIMS, 75 V bias & low & shallow \\ \hline HiPIMS, 50 V bias & high & shallow \\ \hline HiPIMS, 25 V bias & high & shallow \\ \hline HiPIMS, no bias & N/A & N/A \\ \hline DCMS, no bias & N/A & N/A \\ \hline \end{tabular}
\end{table}
Table 3: Comparison of the seven Nb/Cu samples. The first column shows qualitative assignments of the density of surface defects that nucleate RF vortex semi-loops (\(\rho^{\text{defect}}\)) and the second column shows qualitative assignments of how deep an RF vortex semi-loop penetrates into a sample through a surface defect (\(h_{\text{penetration}}^{\text{defect}}\)).
mm in diameter. Prior to coating, the substrate disk is degreased using a commercial detergent. The sample is then chemically polished using a mixture of sulfamic acid (H\({}_{3}\)NSO\({}_{3}\), 5 g/L), hydrogen peroxide (H\({}_{2}\)O\({}_{2}\), 5% vol.), n-butanol (5% vol.), and ammonium citrate (1 g/L) heated to 72\(^{\circ}\)C for 20 minutes. After polishing, the disk is rinsed with sulfamic acid to remove the build-up of native oxide and cleaned with de-ionized water and ultra-pure ethanol.
The Cu substrate is mounted on an ultra-high vacuum (UHV) stainless steel chamber equipped with a rotatable shutter to expose in turn the areas to be coated, and the chamber is then connected to a sputtering system. Both assemblies are performed inside an ISO5 cleanroom, and the sputtering apparatus is described in detail in [62]. The entire system is transported to the coating bench where it is coupled to the pumping group and gas injection lines, and pumped down to about \(1\times 10^{-7}\) mbar. The pumping group and the sputtering system undergo a 48-hour bakeout at 200\(^{\circ}\)C, during which a 4-hour activation of the Non-Evaporable Getter (NEG) pump is performed. The temperature of the UHV chamber is maintained at 150\(^{\circ}\)C until the start of the coating. After cooling down, the system reaches a base pressure of around \(9.3\times 10^{-10}\) mbar. Ultra-pure krypton (99.998%) is injected into the system until a process pressure of \(2.3\times 10^{-3}\) mbar is reached. The seven coatings are then performed according to the deposition parameters outlined in Table 4. One of the samples is prepared by Direct Current Magnetron Sputtering (DCMS) with zero bias, and its coating thickness is around 3.5 \(\mu\)m. The other six samples are prepared by High Power Impulse Magnetron Sputtering (HiPIMS), with bias voltages ranging from 0 V to -125 V and coating thicknesses ranging from 1.15 \(\mu\)m to 1.5 \(\mu\)m.
During the coating process, the cavity temperature was monitored with an infrared thermal sensor (OMEGA OS100-SOFT) and kept constant at 150\(^{\circ}\)C. The HiPIMS plasma discharge was maintained using a pulsed power supply (Huettinger TruPlasma HighPulse 4006) and the negative bias voltage was applied to the samples using a DC power supply (TruPlasma Bias 3018). The DCMS discharge was maintained using a Huettinger TruPlasma 3005 power supply. The discharge and bias voltages and currents were monitored throughout the entire coating process using voltage (Tektronix P6015A) and current (Pearson current monitor 301\(\times\)) probes whose signals are recorded by a digital oscilloscope (Picoscope 2000). After the coating, the samples were cooled down to room temperature, after which the chamber was vented with dry air. The Nb layer thickness is measured by X-ray fluorescence via the attenuation method.
## Appendix B Data for the HiPIMS 50 V bias and 100 V bias Nb/Cu samples
Here we show representative data for \(P_{3f}(T)\) generated by surface defects for the HiPIMS 50 V bias and 100 V bias Nb/Cu samples. For the HiPIMS 50 V bias sample, \(P_{3f}(T)\) with \(T_{c}^{P_{3f}}\) around 6.4 K is observed in a strong input power regime (Fig. 16 (a)), and \(P_{3f}(T)\) with \(T_{c}^{P_{3f}}\) around 6.8 K is observed in a weak input power regime (Fig. 16 (b)). For the HiPIMS 100 V bias sample, \(P_{3f}(T)\) with \(T_{c}^{P_{3f}}\) around 6.5 K is observed (Fig. 16 (c)).
## Appendix C Check for hysteresis in \(P_{3f}\) measurements
Here we check whether or not \(P_{3f}(T)\) measurements exhibit hysteresis. The HiPIMS 125 V bias Nb/Cu sample first undergoes a cool down from 10 K to 3.6 K with zero RF field. The microwave signal is turned on after the temperature stabilizes at 3.6 K. The sample then
Figure 16: (a) and (b) show \(P_{3f}(T)\) for the HiPIMS 50 V bias Nb/Cu sample with an input frequency of 2.16 GHz in a strong input power regime and in a weak input power regime, respectively. (c) shows \(P_{3f}(T)\) for the HiPIMS 100 V bias Nb/Cu sample with an input frequency of 1.18 GHz.
gradually warms up from 3.6 K to 10 K (warm-up \(P_{3f}(T)\)), and then cools down from 10 K to 3.6 K (cool-down \(P_{3f}(T)\)), with the microwave signal turned on during this process. As shown in Fig. 17, this \(P_{3f}(T)\) measurement does not exhibit a clear hysteresis.
The measurement of Fig. 17 is performed one year and a half after the measurement of Fig. 6 (b). It is quite possible that different regions of the sample surface are studied in these two measurements. This might explain why Fig. 17 (\(T_{c}^{P_{3f}}\)=4.8 K) and Fig. 6 (b) (\(T_{c}^{P_{3f}}\)=6.8 K) show different \(P_{3f}\) signals.
## Appendix D Parameters for TDGL simulations
Values of parameters used in TDGL simulations are summarized in Table 5. The "impurity" sector specifies the material parameters of the low-\(T_{c}\) impurity in the two surface defect models (Sec. IV.3, Sec. IV.4 and Sec. IV.5).
Material parameters (penetration depth \(\lambda\), Ginzburg-Landau parameter \(\kappa\), etc.) of Nb films vary from sample to sample. For the Nb part in the TDGL simulations, we adopt the material parameters of bulk Nb instead of Nb films for simplicity.
The choice of the material parameters of the low-\(T_{c}\) impurity in the two surface defect models is based on the following considerations. For simplicity, the Ginzburg-Landau parameter of the low-\(T_{c}\) impurity is taken to be the same as that of bulk Nb (\(\kappa^{impurity}=\kappa^{Nb}=1.1\)). In experiments, typically the measured \(P_{3f}(T)\) is strong at low temperatures (surface defects) and is weak around 9 K. To simulate such a feature, a mechanism that suppresses \(P_{3f}(T)\) at high temperatures is required.
One possible mechanism is that superconductivity near the sample surface, including both the impurity region and the Nb region, is suppressed significantly when the low-\(T_{c}\) impurity is above its transition temperature (\(T_{c}^{impurity}=4\) K), namely when \(T>4\) K. As the temperature increases significantly above \(T_{c}^{impurity}\) (around 9 K, for example), superconductivity is weak (and thus all of the responses related to the strength of superconductivity are weak, including \(P_{3f}\)) even in the Nb region, due to the proximity effect. The question is, what kind of material parameters should be assigned to the low-\(T_{c}\) impurity to achieve this feature at high temperatures?
The Ginzburg-Landau free energy density is given by \(\alpha|\psi|^{2}+\frac{1}{2}\beta|\psi|^{4}\). One can naively extend Ginzburg-Landau theory to the regime above the transition temperature of a superconductor, where \(\alpha\) becomes positive (\(\alpha=\alpha_{0}(T-T_{c})\), with \(\alpha_{0}/\beta\propto 1/\lambda^{2}\)). In that case, a large and positive \(\alpha\) (of the impurity) means that superconductivity is strongly suppressed, which means that the superconductivity in nearby superconductors (the Nb region in our case) is weak due to the proximity effect. A large and positive \(\alpha\) above \(T_{c}\) implies a large \(|\alpha|\) below \(T_{c}\), which, in turn, implies a small penetration depth \(\lambda\) (\(\alpha_{0}/\beta\propto 1/\lambda^{2}\)). As a result, we choose a small \(\lambda^{impurity}\) (16.3 nm).
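As a quick numerical check of this reasoning (using the Table 5 values; the factor below is only the \(|\alpha|\) scaling at fixed \(\beta\) and \(T-T_{c}\), not a full material model):

```python
lam_nb, lam_imp = 40.0, 16.3       # penetration depths in nm (Table 5)
ratio = (lam_nb / lam_imp) ** 2    # |alpha_imp|/|alpha_Nb|, since alpha0/beta ~ 1/lambda^2
print(f"impurity |alpha| enhancement: {ratio:.1f}x")  # ~6x
```

The small \(\lambda^{impurity}\) thus corresponds to a roughly sixfold stronger suppression term above \(T_{c}^{impurity}\).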
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \multicolumn{2}{|c|}{_Parameter name_} & _Symbol_ & _Value_ \\ \hline \multicolumn{2}{|c|}{Dipole height} & \(h_{dp}\) & 400 nm \\ \hline \multicolumn{2}{|c|}{Period of applied RF field} & \(\frac{2\pi}{\omega_{0}}\) & \(5.88\times 10^{-16}\) s \\ \hline \multirow{3}{*}{Nb} & Critical temperature & \(T_{c}^{\rm Nb}\) & 9.3 K \\ \cline{2-4} & Penetration depth & \(\lambda^{\rm Nb}\) & 40 nm \\ \cline{2-4} & Ginzburg-Landau parameter & \(\kappa^{\rm Nb}\) & 1.1 \\ \hline \multirow{3}{*}{impurity} & Critical temperature & \(T_{c}^{\rm impurity}\) & 4 K \\ \cline{2-4} & Penetration depth & \(\lambda^{\rm impurity}\) & 16.3 nm \\ \cline{2-4} & Ginzburg-Landau parameter & \(\kappa^{\rm impurity}\) & 1.1 \\ \hline \end{tabular}
\end{table}
Table 5: Values of parameters used in TDGL simulations for bulk Nb (Sec. IV.2) and for the two surface defect models (Sec. IV.3, Sec. IV.4, and Sec. IV.5).
Figure 17: Check for hysteresis in \(P_{3f}(T)\) measurement for the HiPIMS 125 V bias Nb/Cu sample. The input frequency is 1.88 GHz and the input power is -25 dBm.
Figure 18: A broader view of the distribution of critical temperature on the XY plane for the grain boundary model discussed in Sec. IV.3. The region shown here corresponds to the region indicated by the green dashed line in Fig. 10.
## Appendix E A top view for the grain boundary model in Sec. IV.3
Fig. 11 (b) shows the region around the origin (which plays the dominant role in RF vortex nucleation and \(P_{3f}\)) for the grain boundary model discussed in Sec. IV.3. Compared to Fig. 11 (b), Fig. 18 shows the setup over a broader range as indicated by the green dashed line in Fig. 10. Fig. 18 contains the entire defect region (the five Nb grains and the blue region) and part of the bulk Nb region as shown in Fig. 10.
## Appendix F Snapshots of a vortex in a grain boundary for the grain boundary model in Sec. IV.3
An RF vortex semi-loop is roughly parallel to the direction of the RF dipole, which points in the x direction, and hence the cross-section of the RF vortex semi-loop is on the YZ plane. Fig. 19 visualizes an RF vortex semi-loop in the grain boundary marked by the white dashed curve in Fig. 11 (b), with the vortex core corresponding to the black region, where \(|\psi/\psi_{\infty}|^{2}<0.001\). The RF vortex semi-loop penetrates the sample surface and the vortex core is around 93.5 nm deep for \(B_{pk}=56.6\) mT (Fig. 19 (a)) and is around 110.6 nm deep for \(B_{pk}=67.9\) mT (Fig. 19 (b)).
Searching for C ii Emission from the First Sample of \(z\)\(\sim\) 6 O i Absorption-associated Galaxies with the Atacama Large Millimeter/submillimeter Array
###### Abstract
We report the first statistical analyses of [C ii] and dust continuum observations in six strong O i absorber fields at the end of the reionization epoch obtained by the Atacama Large Millimeter/submillimeter Array (ALMA). Combined with one [C ii] emitter reported in Wu et al. (2021), we detect one O i-associated [C ii] emitter in six fields. At the redshifts of O i absorbers in nondetection fields, no emitters are brighter than our detection limit within impact parameters of 50 kpc and velocity offsets between \(\pm\)200 km s\({}^{-1}\). The averaged [C ii]-detection upper limit is \(<\)0.06 Jy km s\({}^{-1}\) (3\(\sigma\)), corresponding to a [C ii] luminosity of \(L_{\rm[C\,II]}\)\(<\)5.8 \(\times\) 10\({}^{7}\) \(L_{\odot}\) and a [C ii]-based star formation rate of SFR\({}_{\rm[C\,II]}\)\(<\)5.5 \(M_{\odot}\) yr\({}^{-1}\). Cosmological simulations suggest that only \(\sim\)10\({}^{-2.5}\) [C ii] emitters around O i absorbers have SFRs comparable to our detection limit. Although the detection in only one out of six fields is reported, the order-of-magnitude number excess of emitters obtained from our ALMA observations supports that the contribution of massive galaxies to the metal enrichment cannot be ignored. Further, we also found 14 tentative galaxy candidates with a signal-to-noise ratio of \(\approx\)4.3 at large impact parameters (\(>\)50 kpc) and with larger velocity offsets within \(\pm\)600 km s\({}^{-1}\). If these detections are confirmed in the future, then the mechanism of pushing metals to larger distances with higher velocities needs to be further explored from the theoretical side.
## 1 Introduction
Cosmological reionization occurs when hydrogen transitions from its neutral to ionized state in the early universe. Investigating the behavior of neutral hydrogen (H i) can help to reveal the astrophysics driving reionization and its timing. Unfortunately, H i in the circumgalactic medium (CGM) is quite difficult to observe because of the Gunn-Peterson effect (Gunn & Peterson, 1965), the nearly complete absorption of photons with rest-frame excitation energy higher than that of Ly\(\alpha\) by the intergalactic medium (IGM). Moreover, absorption caused by gas in the CGM at \(\lambda_{\rm rest}<1216\) Å will also fall into the Gunn-Peterson trough, be blanketed by the Ly\(\alpha\) absorption, and thus be undetectable (Simcoe et al., 2020). Thus, alternative tracers are required. Because neutral oxygen has an ionization energy similar to that of H i, it is regarded as a possible tracer for cosmological reionization (Oh, 2002; Finlator et al., 2013; Doughty & Finlator, 2019).
Metal absorption is commonly observed in QSO spectra. Thanks to recent high-\(z\) QSO surveys (e.g., Banados et al., 2016; Wang et al., 2019; Yang et al., 2019), \(\sim\)260 QSOs have been identified at \(z\) = 6-7.5, which enables systematic searches for metal absorbers (e.g., Becker et al., 2019; Cooper et al., 2019; Zou et al., 2021). Additionally, the early universe is expected to be metal-poor. Therefore, at high redshift (\(z\) \(\gtrsim\) 6), the existence of metal absorption systems (e.g., O i, Mg ii, and C iv) constrains the nature and location of the source galaxies that contribute to the early metal enrichment. Connecting galaxies with these gaseous reservoirs thus plays a crucial role in understanding galaxy formation at the end of the epoch of reionization.
Cosmological simulations suggest that galactic winds from typical star-forming galaxies can eject metals into the CGM/IGM (e.g., Keating et al., 2014; Pallottini et al., 2014; Dave et al., 2016). Motivated by these works, the star formation rates (SFRs) and impact parameters of source galaxies are two key parameters that need to be measured to test models of galaxy formation. As such, direct imaging of absorption-associated galaxies is necessary because it provides measurements of these two parameters simultaneously. Diaz et al. (2011) found a C iv
absorption-associated Lyman \(\alpha\) emitter (LAE) at \(z\) = 5.791 with a Ly\(\alpha\)-based SFR of 1.4 \(M_{\odot}\) yr\({}^{-1}\) at a distance of 79 kpc. Additionally, Diaz et al. (2014, 2015) observed LAEs around a sample of C iv absorbers with imaging and follow-up spectroscopy, reaching a detection limit of SFR\({}_{\rm Ly\alpha}\) \(\approx\) 5 \(M_{\odot}\) yr\({}^{-1}\). They surmised that the LAE distribution may potentially trace large-scale outflows. To further constrain the possibility of faint sources as C iv absorber host galaxies, Cai et al. (2017) used Hubble Space Telescope (HST) narrowband-imaging observations to push the detection limit down to 2 \(M_{\odot}\) yr\({}^{-1}\). Moreover, Diaz et al. (2021) used the Very Large Telescope and Multi Unit Spectroscopic Explorer (MUSE) to search for LAEs around 11 C iv absorbers and found LAEs with impact parameters ranging from 11-200 kpc and Ly\(\alpha\) luminosities of 0.18-1.15 \(L^{\ast}_{\rm Ly\alpha}\). However, Ly\(\alpha\) photons are easily scattered due to the resonant nature of the line (e.g., Dijkstra et al., 2006; Zheng et al., 2010), which makes Ly\(\alpha\)-derived star formation rates uncertain.
Given the limitations of Ly\(\alpha\) emission as a galaxy tracer, only a few candidate galaxies in the fields of absorbers have been identified. Because [C ii] 158 \(\mu\)m emission is one of the best ISM indicators (e.g., Wang et al., 2013; Decarli et al., 2017; Neeleman et al., 2019), it is an alternative tracer at \(z\) \(\sim\) 6. Likewise, to trace H i at \(z\) \(\gtrsim\) 6, we use O i absorption as an alternative. To further investigate the connection between absorbers and their host galaxies, we use the Atacama Large Millimeter/submillimeter Array (ALMA) to search for O i absorber-[C ii] emitter pairs in QSO fields.
We chose ALMA because it also provides deep continuum observations, which allow us to investigate the source densities and environments of these high-\(z\) QSOs. Previously, Wu et al. (2021) reported the results in the field of QSO J2054\(-\)0005. Here, we follow up on that work and present observations of the additional four fields. These five systems also form the first sample for studying metal absorber-submillimeter galaxy interactions. Simulations predict that QSOs at \(z\) \(\sim\) 6 with black hole masses of \(M_{\rm BH}\) \(\approx\) 10\({}^{9}M_{\odot}\) are primarily embedded in massive halos and located in overdense regions (Costa et al., 2014). Observationally, Decarli et al. (2017) and Trakhtenbrot et al. (2017) conducted ALMA observations in QSO fields and found several continuum sources and [C ii] companions at the redshifts of the QSOs. After comparing with blind-field number counts, Decarli et al. (2017) concluded that the cumulative number of companion galaxies is in excess in QSO fields. Neeleman et al. (2019) observed more QSOs at this redshift and reached the same conclusion. Furthermore, at \(z\) \(\sim\) 4, Garcia-Vergara et al. (2022) detected five CO(4-3) emitters around 17 QSOs, while only 0.28 CO(4-3) detections were expected in the same volume, suggesting that emitters cluster around QSOs. Conversely, for continuum sources rather than line emitters, Champagne et al. (2018) used ALMA to search 35 bright QSO fields at 6 \(<\) \(z\) \(<\) 7 and found no spatial overabundance on scales of \(\lesssim\)1 cMpc. With our ALMA observations, we can further examine the continuum overabundance in QSO fields.
This paper is organized as follows. In Section 2, we describe the composition of our sample, data reduction, and source-detection details. In Section 3, we present the [C ii]-intensity and continuum maps and compare our observations with simulations. In Section 4, we provide further discussion of the physics behind our observations. In this paper, we assume a flat cosmological model with \(\Omega_{M}\) = 0.3, \(\Omega_{\Lambda}\) = 0.7, and \(H_{0}\) = 70 km s\({}^{-1}\) Mpc\({}^{-1}\); 1\({}^{\prime\prime}\) = 5.7 kpc at \(z\) = 6.
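As a quick consistency check of the quoted angular scale, the conversion follows directly from the adopted cosmology; a minimal sketch with astropy (using astropy here is an illustrative assumption, not necessarily the tooling used in the paper):

```python
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # the cosmology adopted in this paper
scale = cosmo.kpc_proper_per_arcmin(6.0).to(u.kpc / u.arcsec)
print(scale)  # ~5.7 kpc / arcsec at z = 6
```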
## 2 Observations, Data Reduction, and Source Detection
### Survey Description
We used ALMA in its compact configuration (C43-1) to search for [C ii] 158 \(\mu\)m emission from absorber-associated galaxies at \(z\) \(\sim\) 6 (ALMA 2017.1.01088.S, 2019.1.00466.S; PI: Cai). Our sample is composed of five QSO fields containing six strong O i absorbers with log(\(N_{\rm O\,I}\)/cm\({}^{-2}\)) \(>\) 14 (Becker et al., 2011; Cooper et al., 2019). The redshifts, column densities, and on-source times are collected in Table 1. Each individual ALMA observation was done using four 1.875 GHz spectral windows (SPWs). In general, in each spectral tuning, we used one SPW centered on the [C ii] emission at the redshift of the O i absorber, while the remaining SPWs were used to obtain a continuum image of the field. In the field of QSO J0100\(+\)2802, because the two absorbers are located at very similar redshifts, we adapted this SPW setup to cover the [C ii] emission at both absorber redshifts. Our sample thus contains five QSO fields and six absorber fields. Including the field reported in Wu et al. (2021), we describe the observations of all five QSO fields in this paper.
### Data Reduction
We reduced our data following the standard steps of the Common Astronomy Software Application (CASA v.5.6.1-8; McMullin et al., 2007) and calibrated the data based on the archival calibration script supplied by ALMA. The absolute flux uncertainties are expected to be less than 10%. After the calibration, we generated continuum images using tclean, where we excluded frequencies around the expected [C ii] emission and used natural weighting (Li et al., 2021; Wu et al., 2021). Further, to obtain emission-line data cubes, we subtracted the continuum from the data using the task uvcontsub on the line-free channels with a zeroth-order polynomial. Next, all [C ii]-intensity images were cleaned down to the 5\(\sigma\) level. To reach a signal-to-noise ratio (S/N) of S/N \(>\) 5 for the [C ii] emission line over an entire line width (\(\approx\)200 km s\({}^{-1}\)), we need to reach S/N \(>\) 3 over 1/3 of the source line width (\(\approx\)66 km s\({}^{-1}\)). Thus, our reduction procedure yields a channel width of \(\approx\)66 km s\({}^{-1}\).
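The corresponding channel width in frequency follows from \(\Delta\nu=\nu_{\rm obs}\,\Delta v/c\); a minimal sketch at a representative absorber redshift (the specific redshift below is just the J2054\(-\)0005 case):

```python
C_KMS = 299792.458           # speed of light (km/s)
NU_CII_REST = 1900.537       # [CII] rest frequency (GHz)

z_abs = 5.978                          # example: the J2054-0005 absorber
nu_obs = NU_CII_REST / (1.0 + z_abs)   # ~272.4 GHz
dnu_mhz = nu_obs * 66.0 / C_KMS * 1e3  # a 66 km/s channel in MHz
print(f"nu_obs = {nu_obs:.1f} GHz -> channel width ~ {dnu_mhz:.0f} MHz")
```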
\begin{table}
\begin{tabular}{l c c c} \hline QSO Field Name & \(z_{\rm O\,I}\) & log(\(N_{\rm O\,I}\)/cm\({}^{-2}\)) & \(t_{\rm on}\) (hr) \\ (1) & (2) & (3) & (4) \\ \hline J2054\(-\)0005 & 5.978 & 14.2 & 2.3 \\ J2315\(-\)0023 & 5.7529 & 14.5 & 2.8 \\ J0100\(+\)2802 (F1) & 6.144 & 14.7 & 2.5 \\ J0100\(+\)2802 (F2) & 6.112 & 14.4 & 2.5 \\ PSO J183\(+\)05 & 6.064 & 14.4 & 2.0 \\ PSO J159-02 & 6.238 & 14.5 & 1.4 \\ \hline \end{tabular}
\end{table}
Table 1: O i Absorber and QSO Information.
### Source-detection Algorithm
We use the source-detection Python package DAOStarFinder (Bradley et al., 2020) to search for point sources in an input image whose shapes match a defined 2D Gaussian kernel. The adopted kernel shape is defined by the synthesized beam of each observation. We set a threshold for a tentative detection at the 4\(\sigma\) level in terms of peak S/N. The standard deviation of each image is defined by the pixel-to-pixel fluctuation and calculated using the Python code Qubefit (Neeleman et al., 2020). We note that, in each field, no extended sources (defined as spatially larger than a synthesized beam) are found in our data cubes. Thus, all tentative sources are point-like, and our detection algorithm is appropriate. The fidelity is further estimated in Appendix A.
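A minimal sketch of this detection step with photutils (the input array, kernel FWHM, axis ratio, and position angle below are illustrative placeholders; in practice the kernel is matched to the synthesized beam of each observation):

```python
import numpy as np
from photutils.detection import DAOStarFinder

image = np.load("moment0.npy")   # hypothetical moment-zero map (Jy/beam km/s)
sigma = np.std(image)            # pixel-to-pixel rms (cf. Qubefit in the text)

# Gaussian kernel matched to the beam: fwhm = major axis (pixels),
# ratio = minor/major axis ratio, theta = position angle (degrees).
finder = DAOStarFinder(threshold=4.0 * sigma, fwhm=6.0, ratio=0.8, theta=30.0)
sources = finder(image)
if sources is not None:
    print(sources["xcentroid", "ycentroid", "peak"])
```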
## 3 Results
### O i Absorber-associated [C ii] Emitter Sample
#### 3.1.1 [C ii] Emitter Detection
Our primary science goal is to survey the [C ii] emission from galaxies that could host strong O i absorbers. We strictly followed a popular method called FINDCLUMPS (Walter et al., 2016; Decarli et al., 2020; Gonzalez-Lopez et al., 2020). The basic idea of this algorithm is to detect 2D sources on different moment-zero maps built with different line widths at different frequencies. In the first step, the potential [C ii]-intensity images are generated by floating averages over a given number of channels with different window sizes (e.g., three-, four-, and five-channel windows) at different frequencies. Then, to search for candidates, we perform the source-detection algorithm (Section 2.3) on these moment-zero maps.
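A minimal numpy sketch of this first step (here `cube` is a hypothetical continuum-subtracted cube with axes [channel, y, x] and `dv` the channel width in km s\({}^{-1}\); the toy noise cube is only for illustration):

```python
import numpy as np

def moment_maps(cube, dv, widths=(3, 4, 5)):
    """FINDCLUMPS-style running collapse: one moment-zero map per starting
    channel and per window width, each to be searched for 2D sources."""
    maps = []
    for w in widths:
        for start in range(cube.shape[0] - w + 1):
            m0 = cube[start:start + w].sum(axis=0) * dv  # Jy/beam km/s
            maps.append((w, start, m0))
    return maps

cube = np.random.normal(0.0, 1e-4, size=(60, 128, 128))  # toy cube (Jy/beam)
maps = moment_maps(cube, dv=66.0)
print(len(maps), "candidate moment-zero maps to search")
```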
After having these candidates, we selected reliable targets. As mentioned previously, all O i absorbers have column densities of \(\log(N_{\rm O\,I}/{\rm cm}^{-2})>14\). Guided by theoretical works, at \(z\approx 6\), strong absorbers are generally linked to massive dark-matter (DM) halos (\(\log(M_{\rm h}/M_{\odot})\gtrsim 11\); Finlator et al., 2013; Keating et al., 2016), corresponding to stellar masses of \(\log(M_{\ast}/M_{\odot})\gtrsim 9\) (Ma et al., 2018). To search for such galaxies, we aim to detect [C ii] emission from the data cubes by constraining three key parameters: the line width of the [C ii] emission, the velocity offset, and the projected impact parameter between [C ii] emitters and O i absorbers.
To begin with, following Le Fevre et al. (2020), the line width of a reliable [C ii] emitter at \(z\sim 6\) should be \(\gtrsim\)250 km s\({}^{-1}\), as measured by Capak et al. (2015). Similar line-width justifications are also reported elsewhere (e.g., Aravena et al., 2016; Fujimoto et al., 2019; Bethermin et al., 2020). Furthermore, the relative velocity offset between the absorber and its host galaxy should be within \(\pm\)200 km s\({}^{-1}\), because typical star-forming galaxies at \(z\approx 6\) have been shown to have outflow velocities of \(v\lesssim 200\) km s\({}^{-1}\) (e.g., Steidel et al., 2010; Keating et al., 2016; Diaz et al., 2021). We also note that, due to projection effects, the velocity along the line of sight is only one component of the outflow velocity. As such, it is safe to constrain the velocity offset, our second parameter, to the range \(-\)200 to 200 km s\({}^{-1}\). For the projected impact parameter, cosmological simulations suggest that star formation and galactic outflows from star-forming galaxies are the primary mechanisms for enriching the surrounding gas (Oppenheimer et al., 2009). These metal-enriched gases (particularly for strong absorbers, e.g., \(\log(N_{\rm O\,I}/{\rm cm}^{-2})>14\)) are plausibly gravitationally bound on the circumgalactic medium (CGM) scale (Finlator et al., 2013; Keating et al., 2014). Guided by these theoretical models, the hosts of strong O i absorbers tend to reside in dark-matter halos with masses of \(M_{\rm h}\approx 10^{11-12}\,M_{\odot}\), corresponding to virial radii of \(<\)50 proper kpc (pkpc) at \(z\sim 6\) (Keating et al., 2016). Therefore, the impact parameter between absorbers and hosts should be smaller than 50 kpc. We select candidates with a fidelity level of 30%, which corresponds to S/N \(\gtrsim 4\). To summarize, we constrain the following parameters:
1. Line Width \(\gtrsim\) 250 km s\({}^{-1}\)
2. \(-\)200 km s\({}^{-1}\)\(\leq\) Velocity Offset \(\leq\) 200 km s\({}^{-1}\)
3. Impact parameter \(\lesssim\) 50 kpc
4. S/N \(\gtrsim 4\)
We also discuss [C ii]-emitter candidates with large velocity offset and impact parameters in Section 4.2.2.
After applying all of these criteria, no emitters other than the first detection (called [C ii]2054 below), reported by Wu et al. (2021), pass our selection. For the nondetection fields, we used five-channel windows, corresponding to a typical line width close to \(\sim\)300 km s\({}^{-1}\) (Aravena et al., 2016), centered on the expected frequency of the [C ii] emission to obtain the final velocity-integrated flux maps. The detection and nondetection [C ii] maps of our sample are shown in Figure 1. Although only one out of six fields shows a positive detection, we place 3\(\sigma\) upper limits on the velocity-integrated flux (\(S\Delta v_{\rm[C\,II]}\)) of the [C ii] emitters. Additionally, the [C ii] luminosity (\(L_{\rm[C\,II]}\)) and the derived star formation rate (SFR\({}_{\rm[C\,II]}\); e.g., Wang et al., 2013; Schaerer et al., 2020) can be further constrained. The results are reported in Table 2. We conclude that, for the nondetection fields, the average 3\(\sigma\) upper limits are \(S\Delta v_{\rm[C\,II]}<0.06\) Jy km s\({}^{-1}\), \(L_{\rm[C\,II]}<5.8\times 10^{7}\,L_{\odot}\), and SFR\({}_{\rm[C\,II]}<5.5\,M_{\odot}\) yr\({}^{-1}\). Note that several emitters with larger impact parameters in Figure 1 are further discussed in Sections 4.2.2 and 4.3.
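The conversion from velocity-integrated flux to luminosity follows the standard relation \(L_{\rm[C\,II]}=1.04\times 10^{-3}\,S\Delta v\;\nu_{\rm obs}\,D_{L}^{2}\) (Carilli & Walter 2013), with \(S\Delta v\) in Jy km s\({}^{-1}\), \(\nu_{\rm obs}\) in GHz, and \(D_{L}\) in Mpc. A minimal sketch is given below; the linear \(L_{\rm[C\,II]}\)-SFR coefficients are one De Looze et al. (2014)-type calibration, quoted purely for illustration and not necessarily the exact calibration behind Table 2:

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

def cii_luminosity(sdv_jykms, z):
    """L_[CII] in L_sun from S*dv (Jy km/s), per Carilli & Walter (2013)."""
    nu_obs = 1900.537 / (1.0 + z)             # observed [CII] frequency (GHz)
    d_l = cosmo.luminosity_distance(z).value  # luminosity distance (Mpc)
    return 1.04e-3 * sdv_jykms * nu_obs * d_l**2

l_cii = cii_luminosity(0.0758, 5.978)         # the J2054-0005 detection
sfr = 10 ** (np.log10(l_cii) - 7.06)          # illustrative linear calibration
print(f"L_[CII] ~ {l_cii:.1e} L_sun, SFR ~ {sfr:.1f} M_sun/yr")
```

This reproduces \(L_{\rm[C\,II]}\approx 7\times 10^{7}\,L_{\odot}\) for the detected emitter, consistent with Table 2.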
#### 3.1.2 Statistical Results
To constrain the galaxy-absorber cross-correlation function, we model it as follows:
\[\xi_{\rm g-abs}=\frac{1}{n_{0}}\frac{\Delta N(r)}{\Delta V}\,-\,1=\left(\frac{r }{r_{0}}\right)^{-\gamma}. \tag{1}\]
In Equation (1), \(n_{0}\) is the mean number density of galaxies, which can be obtained from the luminosity function (LF) proposed by Loiacono et al. (2021). By integrating the LF over the bright end (\(L>L_{\rm[C\,II],obs}\)), we obtain \(n_{0}=1.3\times 10^{-4}\) \(\rm{cMpc^{-3}}\). Similarly, \(\Delta N(r)\) represents the number of galaxies in a spherical shell of survey volume \(\Delta V\), where \(r\) is the distance between an absorber and a galaxy. \(r_{0}\) and \(\gamma\) are the correlation length and power-law slope, respectively. Here, we recall the initial detection results: we have one [C ii] emitter with SFR\({}_{\rm[C\,II]}\approx 7\)\(M_{\odot}\) yr\({}^{-1}\) located approximately 20 kpc from O i-enriched gas at \(z=5.978\). Within our larger sample, however, only one [C ii] emitter in six O i absorber fields is detected. Plugging the final survey volume and the one detected [C ii] emitter into Equation (1) (\(r=20\) kpc, \(\Delta N(r)=1\), and \(\Delta V\) being the total survey volume of the six fields), we obtain the relation between \(r_{0}\) and \(\gamma\). We provide detailed calculations in Appendix B. This result is shown in the left panel of Figure 2. Although we only detect one [C ii] emitter, the
observed \(r_{\rm 0}\) and \(\gamma\) relation is statistically greater than that of cosmological simulations (Finlator et al., 2020).
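To make the scale of this excess concrete, the expected pair count within \(r\) follows from integrating Equation (1) over the searched volume. A minimal sketch under an idealized spherical geometry around each absorber (the \(r_{0}\) values and the geometry are purely illustrative; the actual calculation uses the true survey volume described in Appendix B):

```python
import numpy as np

N0 = 1.3e-4                # cMpc^-3, from integrating the Loiacono+21 LF
R = 20e-3 * (1.0 + 5.978)  # 20 pkpc impact parameter in comoving Mpc

def expected_pairs(r0, gamma, nfields=6):
    """Expected emitters within R of the absorbers under Eq. (1):
    N = nfields * n0 * [4/3 pi R^3 + 4 pi r0^gamma R^(3-gamma)/(3-gamma)]."""
    random = (4.0 / 3.0) * np.pi * R**3
    clustered = 4.0 * np.pi * r0**gamma * R**(3.0 - gamma) / (3.0 - gamma)
    return nfields * N0 * (random + clustered)

for r0 in (3.0, 30.0):     # illustrative correlation lengths, gamma = 1.8
    print(f"r0 = {r0:4.1f} cMpc: expected N = {expected_pairs(r0, 1.8):.3f}")
```

Even generous correlation lengths of tens of cMpc predict well below one detection within 20 pkpc, which illustrates why a single detection implies a strong excess.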
In addition to comparisons between the measured SFR and impact parameter, the properties of the dark-matter halos that host O i absorbers can also strongly constrain cosmological simulations. Here, we do not directly compare the observed host halo properties with simulations because of the matching ambiguity: it is inevitable that some absorbers are matched to the wrong host galaxies in cosmological simulations. Usually, the host-halo-matching algorithm is based on the closest-match method, i.e., the metal-enriched gas is naturally assumed to be hosted by the nearest halo. However, massive halos can plausibly blow metal-enriched gas far away, which may result in a relatively small halo being closer to the blown-out gas. An example of this scenario can be found in Figure 16 of Keating et al. (2016). Therefore, we compare the number of galaxies clustered with O i absorbers to that predicted by simulations. In the right panel of Figure 2, we show how many galaxies are found around O i absorbers within our given impact parameters. The gray line shows the results from the Technicolor Dawn simulations (Finlator et al., 2020), while the blue line is a best-fit linear relation. The simulations suggest that the number of associated [C ii] emitters is \(\approx\)10\({}^{-2.5}\) within 50 kpc of O i absorbers. However, the identification of [C ii]2054 implies a rate of 1/6 in our O i absorber sample (red square), which is 1-2 orders of magnitude more abundant than predicted by simulations. Considering the Poisson uncertainty at the 99% single-sided confidence level (Gehrels, 1986), the lower limit on the number of [C ii] emitters at our detection limit is 10\({}^{-2.77}\) (red triangle). Given these results, although there is only one detected [C ii] emitter in six absorber fields, the detection of such a bright [C ii] emitter is unexpected; considering the lower limit, however, our results are also consistent with those predicted by simulations. More discussion of the detection and nondetection of [C ii] emitters can be found in Section 4.2.
### Continuum Observations
Our deep ALMA observations also yield several continuum-source detections. Continuum images of five QSO fields are shown in Figure 3. In this section, we further analyze the properties of these targets.
To obtain the number of continuum sources, we run the source-detection algorithm on the continuum images. The effective search areas of our data are defined by the regions where the primary beam response is higher than 20% and are represented by dashed lines in Figure 3. Again, sources with peak S/N higher than 4 are regarded as detections. Here, we give a quick summary. Our five-field observations yield continuum images at 260.0, 286.6, 259.9, 262.4, and 256.0 GHz, respectively. The standard deviations of these
\begin{table}
\begin{tabular}{l c c c} \hline \hline Field Name & \(S\Delta v_{\rm[C\,II]}\) & \(L_{\rm[C\,II]}\) & SFR\({}_{\rm[C\,II]}\) \\ & (Jy km s\({}^{-1}\)) & (10\({}^{7}\)\(L_{\odot}\)) & (\(M_{\odot}\) yr\({}^{-1}\)) \\ (1) & (2) & (3) & (4) \\ \hline J2054\(-\)0005\({}^{*}\) & 0.0758 \(\pm\) 0.0177 & 7.0 \(\pm\) 1.7 & 6.8 \(\pm\) 1.7 \\ J2315\(-\)0023 & \(<\)0.03 & \(<\)2.7 & \(<\)2.5 \\ J0100\(+\)2802 (F1) & \(<\)0.07 & \(<\)7.1 & \(<\)6.8 \\ J0100\(+\)2802 (F2) & \(<\)0.08 & \(<\)7.9 & \(<\)7.6 \\ PSO J183\(+\)05 & \(<\)0.05 & \(<\)5.0 & \(<\)4.8 \\ PSO J159-02 & \(<\)0.06 & \(<\)6.2 & \(<\)5.9 \\ \hline \end{tabular}
\end{table}
Table 2: [C ii] Moment-zero Map Properties.
Figure 1: [C ii] observations at the redshifts of the six O i absorbers, with \(z=5.978\), 5.753, 6.144, 6.112, 6.064, and 6.238, respectively. Color bars represent the integrated [C ii] flux. These moment-zero maps are integrated over the central \(\sim\)300 km s\({}^{-1}\) at the O i absorber redshifts. Solid black lines enclose the search areas for [C ii] candidates, while black dashed lines represent the regions with primary beam limits \(>\)20%. Red contours start at 3\(\sigma\) and increase in steps of 1\(\sigma\). Negative contours are dashed red lines and start at \(-\)2\(\sigma\). The beam size of each moment-zero map is shown in the bottom left corner.
Figure 3: Continuum observations of the five QSO fields at different frequencies. The 1\(\sigma\) rms values of these fields are 0.012, 0.009, 0.018, 0.016, and 0.014 mJy/beam, respectively (from left to right). Contours are drawn at [3, 4, 5] \(\times\)\(\sigma\). Dashed contours represent \(-2\sigma\). Black dashed lines mark the boundary of the primary beam limit \(>\)20%. Beam sizes are plotted at the bottom left. Excluding the five QSOs, we detect nine continuum sources in these QSO fields. Detailed information is given in Table 3. Zoom-in panels are shown in the corner of each panel.
Figure 2: Left: Cross-correlation function between O i absorbers and their host galaxies. Symbols represent \(r_{0}\) and \(\gamma\) measured from the Technicolor Dawn simulation (Finlator et al., 2020) with log(\(N_{\rm O\,I}\)) comparable to our ALMA sample. Error bars assume Poisson fluctuations in the number of simulated host galaxies. The solid line shows the relation between \(r_{0}\) and \(\gamma\) based on one detected galaxy with SFR \(\approx\) 7 \(M_{\odot}\) yr\({}^{-1}\) within 20.0 pkpc in one of six absorber fields (see Section 3.1.2 for more details). The dashed black line shows the results derived from a 99% confidence-level Poisson uncertainty (Gehrels, 1986) of the observed galaxy. Right: the cumulative number of galaxies around strong O i absorbers at \(z\approx 6\). The solid line shows the number of galaxies as a function of SFR within impact parameters of 50 kpc from simulations (Finlator et al., 2020), while the blue line represents a best-fit linear relation. The red square represents the averaged number of galaxies in our six absorber fields, while the red triangle shows the lower limit according to the Poisson uncertainty at the 99% single-sided confidence level.
fields are 0.012, 0.009, 0.018, 0.016, and 0.014 \(\rm mJy/beam\). Excluding the five identified QSOs, nine sources are detected, with a mean flux density of 0.19 mJy and a mean S/N of 7.6. The source catalog is presented in Table 3. We note that these newly detected continuum sources have no redshift information.
## 4 Discussion
### Continuum-source Number Counts
To evaluate the number-count excess of continuum sources in our observed fields, we compare observed continuum-source numbers with those in random fields.
Our continuum observations are conducted around \(\sim\)266 GHz, corresponding to \(\sim\)1.1 mm. To date, a handful of deep submillimeter-continuum surveys provide random-field 1.1 mm continuum galaxy number counts (e.g., Fujimoto et al., 2016; Franco et al., 2018; Gonzalez-Lopez et al., 2020). To reveal continuum-source excesses in our QSO fields, we compared the number of detected sources to the expected number in blank fields based on the 1.1 mm galaxy number counts. For a proper comparison, our observed flux densities at different frequencies need to be converted to flux densities at 1.1 mm. We do this conversion by assuming a modified blackbody emission model, specifically, \(S_{\nu}\propto\nu^{(3+\beta)}/(\exp(h\nu/kT_{\rm dust})-1)\) (Popping et al., 2020). For this model, we adopt \(\beta=2.0\) and \(T_{\rm dust}=38\) K for star-forming galaxies at \(z\sim 6\) (Faisst et al., 2020). For the QSO fields J2054, J2315, J0100, PSO J183, and PSO J159, the flux conversion ratios are \(S_{\rm 1.1\ mm}/S_{\rm 264.0\ GHz}=1.13\), \(S_{\rm 1.1\ mm}/S_{\rm 286.6\ GHz}=0.83\), \(S_{\rm 1.1\ mm}/S_{\rm 259.9\ GHz}=1.20\), \(S_{\rm 1.1\ mm}/S_{\rm 262.4\ GHz}=1.16\), and \(S_{\rm 1.1\ mm}/S_{\rm 256.0\ GHz}=1.27\), respectively. The radii of the effective search areas in these five fields are 16\(\farcs\)5, 15\(\farcs\)2, 16\(\farcs\)7, 16\(\farcs\)6, and 17\(\farcs\)0, where the effective search areas are defined by primary beam limits \(>\)20%. From this analysis, we obtain the completeness-corrected continuum-source densities in the five QSO fields (see Figure 4). We list the details of the completeness correction in Appendix B.
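A minimal sketch of this conversion (evaluating the modified blackbody at the observed frequencies reproduces the quoted ratios to the stated precision):

```python
import numpy as np

H_OVER_K = 0.04799           # h / k_B in units of K per GHz
BETA, T_DUST = 2.0, 38.0     # emissivity index and dust temperature

def mbb(nu_ghz):
    """Modified blackbody: S_nu ~ nu^(3+beta) / (exp(h nu / k T_dust) - 1)."""
    return nu_ghz ** (3.0 + BETA) / np.expm1(H_OVER_K * nu_ghz / T_DUST)

nu_11mm = 299792.458 / 1.1e3  # 1.1 mm in GHz (~272.5 GHz)
for nu_obs in (264.0, 286.6, 259.9, 262.4, 256.0):
    print(f"S_1.1mm / S_{nu_obs:.1f}GHz = {mbb(nu_11mm) / mbb(nu_obs):.2f}")
```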
In Figure 4, the different black symbols represent the observed continuum-source densities in different fields, while the blue star represents the five-field-averaged result. The measurement in the field of QSO J2054-0005 is shown as a red diamond. Error bars are estimated based on the Poissonian noise of the observed number of continuum sources. Our results suggest that, within the field of view of ALMA, only the QSO J2054\(-\)0005 field shows a higher number density of continuum sources than predicted by the continuum number-counts function (Gonzalez-Lopez et al., 2020; Popping et al., 2020). Our observations are also consistent with those described in Champagne et al. (2018). However, we note that, due to the size of the ALMA field of view, single-pointing observations may miss true excesses on scales of \(\gtrsim\)30''. Meyer et al. (2022) conducted multiple-pointing ALMA observations to map the environments surrounding QSOs and also found an overabundance of continuum sources in one of three QSO fields. That scenario is explained by a possible foreground overdensity caused by galaxies at cosmic noon. This effect could also apply to the excess of dusty sources in J2054-0005. Although we only identify one overabundance, we conclude that wider continuum observations in QSO fields will be necessary for future analyses of the environments around QSOs at high redshift, or of galaxy overdensities at any other redshift.
### 4.2 Detection and Nondetection of [C ii] Emitters
#### 4.2.1 Gravitational Lensing Effects
It is intriguing that we only have one detection of a [C ii] emitter. In this section, we use our previous discussion and our overdensity measurements to propose one possible explanation.
Figure 4: The number function of 1.1 mm continuum sources. The black symbols represent the number densities of sources in our QSO fields. The red diamond shows the result in the field of QSO J2054-0005. Error bars are determined by assuming Poisson uncertainties. The blue star is the five-field-averaged result. The black line is the double power-law fit from González-López et al. (2020).
Table 3. Continuum Sources

| Target Name (1) | R.A. (J2000) (2) | Decl. (J2000) (3) | \(S_{\rm cont}\) (mJy) (4) |
| --- | --- | --- | --- |
| J2054-0005C1 | 20:54:05.325 | -00:05:12.127 | 0.64 \(\pm\) 0.06 |
| J2054-0005C2 | 20:54:57.422 | -00:05:10.890 | 0.12 \(\pm\) 0.02 |
| J2054-0005C3 | 20:54:06.358 | -00:05:14.828 | 0.19 \(\pm\) 0.01 |
| J2054-0005Q (C4) | 20:54:06.501 | -00:05:14.435 | 3.37 \(\pm\) 0.01 |
| J2054-0005C5 | 20:54:06.649 | -00:05:16.482 | 0.067 \(\pm\) 0.014 |
| J2054-0005C6 | 20:54:06.792 | -00:05:17.915 | 0.069 \(\pm\) 0.016 |
| J2315-0023Q | 23:15:46.601 | -00:23:57.652 | 0.35 \(\pm\) 0.01 |
| J2315-0023C1 | 23:15:46.055 | -00:23:46.697 | 0.13 \(\pm\) 0.03 |
| J0100+2802Q | 01:00:13.021 | +28:02:25.822 | 1.27 \(\pm\) 0.02 |
| J0100+2802C1 | 01:00:12.763 | +28:02:25.678 | 0.076 \(\pm\) 0.019 |
| J0100+2802C2 | 01:00:12.156 | +28:02:31.593 | 0.18 \(\pm\) 0.04 |
| PSO J183+05Q | 12:12:26.976 | +05:05:33.576 | 4.77 \(\pm\) 0.02 |
| PSO J159-02Q | 10:36:54.184 | -02:32:37.938 | 0.63 \(\pm\) 0.01 |
| PSO J159-02C1 | 10:36:54.718 | -02:32:39.164 | 0.21 \(\pm\) 0.02 |

Note. (1) Name of the continuum source: QSO host galaxies are appended with a Q after the field name, while serendipitous continuum sources are appended with a C. (4) Continuum flux density.
In this field, we find that the observed continuum sources are \(\sim\)3 times as abundant as in a random field (Figure 4). As noted in Wu et al. (2021), these continuum sources are foreground galaxies at \(z\sim\) 2-4. Thus, in this section, we build a lensing model and discuss in detail whether gravitational lensing could be responsible for the detection.
To assess the possibility of a lensing-boosted detection, we take a rough approach and build a lensing model. Three parameters describe the foreground galaxies: redshifts, halo masses, and concentrations. We first use five HST broadband observations (Wu et al., 2021) to estimate the photometric redshifts of the foreground targets. We ran eazy-py\({}^{14}\) with flat priors and found most of them located at a redshift of \(z\sim\) 1.75. The halo masses are then derived from the stellar mass-halo mass relation (Behroozi et al., 2010). We estimate the stellar masses of these targets using CIGALE (Boquien et al., 2019). The averaged stellar mass is \(M_{\ast}\approx\) 10\({}^{9.6}\) \(M_{\odot}\), corresponding to a halo mass of \(M_{\rm h}\approx\) 10\({}^{11.4}\) \(M_{\odot}\). Next, the concentrations of the host halos are estimated from the mass-concentration relation (Klypin et al., 2016; Ludlow et al., 2016; Child et al., 2018); the mean concentration is \(c\approx\) 5.9. Finally, the lensing model is calculated under the assumption of NFW profiles using lenstronomy\({}^{15}\) (Birrer & Amara, 2018; Birrer et al., 2021). With this lensing model, we find a magnification of \(\mu\approx\) 1.1 at the location and redshift of our first detection. We conclude that, under the rough calculation shown above, the first detection is not caused by galaxy-galaxy gravitational lensing.
Footnote 14: [https://github.com/gbrammer/eazy-py](https://github.com/gbrammer/eazy-py)
Footnote 15: [https://github.com/sibirrer/lenstronomy](https://github.com/sibirrer/lenstronomy)
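For illustration, the magnification estimate can be sketched with lenstronomy along the following lines; the halo mass, concentration, and redshifts follow the values quoted above, while the cosmology and the angular source position are placeholders, and call signatures may differ slightly between lenstronomy versions.

```python
# Illustrative NFW magnification estimate with lenstronomy (a sketch, not the
# authors' exact model).
from astropy.cosmology import FlatLambdaCDM
from lenstronomy.Cosmo.lens_cosmo import LensCosmo
from lenstronomy.LensModel.lens_model import LensModel

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)                 # assumed cosmology
lens_cosmo = LensCosmo(z_lens=1.75, z_source=6.0, cosmo=cosmo)

# Convert the physical halo (M_h ~ 10^11.4 Msun, c ~ 5.9) to angular NFW
# lensing parameters (scale radius and deflection at the scale radius).
rs_angle, alpha_rs = lens_cosmo.nfw_physical2angle(M=10**11.4, c=5.9)

lens_model = LensModel(lens_model_list=["NFW"])
kwargs_nfw = [{"Rs": rs_angle, "alpha_Rs": alpha_rs,
               "center_x": 0.0, "center_y": 0.0}]

# Placeholder angular offset of the background emitter (arcsec).
mu = lens_model.magnification(3.0, 0.0, kwargs_nfw)
print(f"magnification mu ~ {float(mu):.2f}")
```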
#### 4.2.2 [C ii] Emitter Candidates with Larger Impact Parameters and Higher Velocities
Except for the field J2054, our observations of the other fields yield nondetections. We therefore consider a relaxed search for [C ii] candidates satisfying the conditions listed below (a minimal filtering sketch follows the list). Diaz et al. (2021) reported LAEs as C iv absorber host galaxies with velocity offsets of up to \(\sim\)600 km s\({}^{-1}\) and impact parameters larger than 50 kpc (Figure 15 and Table 2 in their manuscript).
1. Line Width \(\gtrsim\) 250 km s\({}^{-1}\)
2. \(-\)600 km s\({}^{-1}\)\(\leqslant\) Redshift Offset \(\leqslant\) 600 km s\({}^{-1}\)
3. Impact parameter \(\leqslant\) 100 kpc
4. \(\rm{S/N}\geqslant\) 4
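A minimal sketch of this relaxed selection is given below; the array names are ours (hypothetical catalog columns), not from a released data product.

```python
# Relaxed candidate selection implementing the four criteria above.
import numpy as np

def select_candidates(line_width_kms, dv_kms, impact_kpc, snr):
    """Boolean mask over candidate arrays; True marks a kept candidate."""
    return ((line_width_kms >= 250.0) &     # line width, km/s
            (np.abs(dv_kms) <= 600.0) &     # velocity offset, km/s
            (impact_kpc <= 100.0) &         # impact parameter, kpc
            (snr >= 4.0))                   # detection significance
```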
Adopting these conditions, in the six O i absorber fields, we find 14 candidates with 4 \(<\) S/N \(<\) 5 and no sources with S/N \(\geq\) 5. The averaged S/N of these candidates is \(\approx\)4.3. The [C ii] luminosities of these targets are in the range of 5.4-47.3 \(L_{\odot}\), while the impact parameters range over \(\sim\)24.2-92.1 kpc. To further assess the reliability of these candidates, we measured the fidelity of our data by comparing the numbers of positive and negative targets; more details are given in Appendix A. At the \(\rm{S/N}\approx\) 4.3 level, the fidelity is close to 30%, which implies that roughly four of our 14 candidates are real. Detailed information on these candidates is given in Appendix D. We note that future observations (e.g., with JWST) could help to validate these candidates by checking their rest-frame optical emission lines (Bordoloi et al., 2023; Kashino et al., 2023).
### 4.3 The Role of Outflow Velocities
We analyze the impact parameters again to discuss the physics behind our observations. Theoretical work predicts that metal-line absorbers are blown out by galactic feedback (Muratov et al., 2015; Doughty & Finlator, 2019; Finlator et al., 2020). In an effort to test the efficiency of galactic winds in transporting metals in our sample, following Diaz et al. (2021) and Galbiati et al. (2023), we assume the outflow starts at redshifts higher than those of the observed metal absorbers (\(z=\) 7-10, Galbiati et al., 2023). We then compare the observed and expected impact parameters between galaxies and absorbers. The predicted impact parameters are calculated assuming a constant projected velocity of the galactic wind, \(\langle v(z)\rangle\), via the following equation:
\[\rm{impact\ parameter}=\langle v(z)\rangle\,\times\,[t_{z_{\rm start}}-t(z)], \tag{2}\]

where \(t_{z_{\rm start}}\) is the lookback time at the launching redshift of the galactic outflow, and \(t(z)\) is the lookback time at the observed redshift.
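For reference, Equation (2) can be evaluated with a standard cosmology; the sketch below assumes Planck18 (the adopted cosmology is not restated in this section) and reproduces the order of magnitude of the curves in Figure 5.

```python
# Expected impact parameter from Equation (2), for a constant projected
# outflow speed launched at z_start and observed at z_obs.
import astropy.units as u
from astropy.cosmology import Planck18

def expected_impact_parameter(v_kms, z_start, z_obs=6.0):
    dt = Planck18.lookback_time(z_start) - Planck18.lookback_time(z_obs)
    return (v_kms * u.km / u.s * dt).to(u.kpc)

for z_start in (7.0, 10.0):
    for v in (100.0, 200.0):
        print(z_start, v, expected_impact_parameter(v, z_start))
```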
This comparison allows us to test different feedback models (Keating et al., 2016) with different outflow velocities. Diaz et al. (2021) regarded LAEs located near C iv absorbers as the real absorber host galaxies and found eight C iv-LAE pairs at \(z=\) 5-6 in MUSE observations. Their results suggest that the \(\langle v\rangle\) values of metal-line-absorber-associated galaxies mostly range from 100-150 km s\({}^{-1}\). In Figure 5, we show the expected impact parameters for combinations of different \(t_{z_{\rm start}}\) and \(\langle v\rangle\). The filled gray star and the unfilled stars are our samples. We note that outflows launched more recently before the observations must be linked to more intense star-formation activity, providing strong galactic winds with larger velocities.
## 5 Summary
We present deep ALMA [C ii] and continuum observations of five QSO fields containing six strong O i absorbers at \(z\sim\) 6. By running a source-finding algorithm on the [C ii] moment-zero and continuum maps, we obtain nondetections of [C ii] emitters, while nine new continuum sources are detected. Our findings are summarized as follows:
1. If we constrain the velocity range to \(\pm\)200 km s\({}^{-1}\) and the impact parameter to 50 kpc, our ALMA observations yielded nondetections of the galaxies associated with five strong O i absorbers (\(\log(N_{\rm O\,I}/{\rm cm}^{-2})>\) 14). Including the detection we reported in Wu et al. (2021), we have one detection per six O i absorber fields. Further, we also found 14 tentative detections with S/N of \(\approx\)4.3 and an averaged SFR of 17.4 \(M_{\odot}\) yr\({}^{-1}\), at large impact parameters (\(>\)50 kpc) and with velocity offsets within \(\pm\)600 km s\({}^{-1}\). If these detections are confirmed in the future, then the scenario in which massive galaxies blow metals out to larger distances with higher velocities may be favored on the theoretical side.
2. Although we only detected one [C ii] emitter, the detection rate of bright sources is still much higher than predicted by simulations. The observed correlation length (\(r_{0}\)) and power-law index \(\gamma\) of the galaxy-absorber correlation function \(\xi_{\rm{g-abs}}\) are 1-2 orders of magnitude higher than those measured in cosmological simulations (Finlator et al., 2020). Meanwhile, comparing the cumulative number of galaxies around absorbers with that in cosmological simulations, the detected [C ii] emitter is brighter by a factor of \(\sim\)10 than expected from simulations.
3. Using the method proposed by Diaz et al. (2021) and Galbiati et al. (2023), we use the measured impact parameter to test the mean speeds \(\left<v\right>\) of galactic winds under different assumptions for the launching time of the outflow. For typical galactic-wind velocities (\(\sim\)100-200 km s\({}^{-1}\)), an outflow launched closer to the observed redshift must have been more intense to eject the metal-enriched gas out to the observed impact parameter.
4. ALMA provides deep continuum observations over a field of view \(\sim\)30\({}^{\prime\prime}\) in diameter. No significant excess in continuum-source numbers is observed in the QSO fields other than J2054. Our results suggest that QSOs do not directly trace overabundances, consistent with Champagne et al. (2018). Nevertheless, future observations are required to confirm overabundances on larger scales (Meyer et al., 2022).
Absorbers discovered along QSO sightlines directly trace the gas that fuels star formation and regulates the formation of galaxies. ALMA observations have already provided constraints on theoretical models of early CGM/IGM enrichment up to \(z\approx 6\). Our pilot results also suggest that future James Webb Space Telescope (JWST) observations of absorber-galaxy interactions at the epoch of reionization are necessary. JWST is scheduled to observe the same sightline in slitless spectroscopic mode, which will enable us to detect the [O iii] \(\lambda\lambda\)4959, 5007 emission associated with the [C ii] emitter.
## Acknowledgments
Y.W. thanks Wenshuo Xu, Xiaojing Lin, and Mingyu Li for fruitful discussions. Z.C., Y.W., and S.Z. are supported by the National Key R&D Program of China (grant No. 2018YFA0404503) and the National Science Foundation of China (grant No. 12073014), as well as by science research grants from the China Manned Space Project (No. CMS-CSST2021-A05) and the Tsinghua University Initiative Scientific Research Program (No. 20223080023). The Cosmic Dawn Center is funded by the Danish National Research Foundation. K.F. gratefully acknowledges support from STScI Program \(\#\)HST-AR-16125.001-A, provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. K.F.'s simulations utilized resources from the New Mexico State University High Performance Computing Group, which is directly supported by the National Science Foundation (OAC-2019000), the Student Technology Advisory Committee, and New Mexico State University, and benefits from inclusion in various grants (DoD ARO-W911NF1810454; NSF EPSCoR OIA-1757207; Partnership for the Advancement of Cancer Research, supported in part by NCI grant U54 CA132383 (NMSU)). M.N. acknowledges support from ERC
Figure 5: The impact parameters of absorber-galaxy pairs. The unfilled dots are C iv absorber-LAE pairs reported by Cai et al. (2017) and Díaz et al. (2021). The filled gray star is the identified O i absorber-[C ii] emitter pair, while the unfilled stars are emitters in our data cube that are ruled out because of their large impact parameters. The solid and dashed lines are calculated for projected outflow velocities of 100 and 200 km s\({}^{-1}\), respectively. Different colors indicate different launching redshifts of the galactic outflow, ranging from \(z=7\)-10.
Advanced grant 740246 (Cosmic_Gas). F.W. is thankful for the support provided by NASA through a NASA Hubble Fellowship (grant No. HST-HF2-51448.001-A) awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. L.K. was supported by the European Union's Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No. 885990. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) license to any Author Accepted Manuscript version arising from this submission.
_Facility_: ALMA.
_Software:_ astropy (Astropy Collaboration et al., 2022), lenstronomy (Birrer & Amara, 2018; Birrer et al., 2021), CASA v.5.6.1-8 (McMullin et al., 2007), DAOStarFinder (Bradley et al., 2020), CIGALE (Boquien et al., 2019), Qubefit (Neeleman et al., 2020), EAZY-py (Brammer et al., 2008), interferopy (Boogaard et al., 2021).
## Appendix A Fidelity
To quantify the fidelity of our [C ii] emitter detections, we use _interferopy_ to search for both positive and negative targets in the six O i absorber fields. The blue and orange lines in the left panel of Figure 6 show the cumulative distributions of positive and negative targets, respectively. To avoid small-number statistics in the tail of the distribution, following González-López et al. (2019), we modeled the number of negative sources with a function of the form \(N=1-\mathrm{erf}(\frac{S/N}{\sqrt{2}\sigma})\), where \(\mathrm{erf}\) is the error function. We fit this function at the well-sampled low-\(\mathrm{S/N}\) end to obtain the cumulative distribution (red line in the left panel). Following Aravena et al. (2016), we define the fidelity \(P\) as \(P(>\mathrm{S/N})=1-\frac{N_{\mathrm{negative}}}{N_{\mathrm{positive}}}\), where \(N_{\mathrm{positive}}\) and \(N_{\mathrm{negative}}\) are the cumulative numbers of positive and negative targets, respectively. The measured values are shown as gray bars in the right panel of Figure 6. We also fitted the measured fidelity as a function of \(\mathrm{S/N}\) with the form \(P=\left[1+\mathrm{erf}\left(\frac{S/N-c}{\sigma}\right)\right]/2\). The fitted result is shown as the dark-red line.
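A schematic implementation of this procedure is sketched below; `snr_pos` and `snr_neg` are assumed arrays of positive and negative detection S/N values, and the amplitude in the error-function model is an added normalisation so the fractional form can be fitted to raw counts.

```python
# Fidelity estimate from positive/negative detection statistics (a sketch).
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

def cumulative_counts(snr_values, grid):
    """Cumulative number of detections above each S/N threshold."""
    return np.array([(snr_values >= s).sum() for s in grid], dtype=float)

def neg_model(snr, amp, sigma):
    # Error-function form from the text; 'amp' is our added normalisation.
    return amp * (1.0 - erf(snr / (np.sqrt(2.0) * sigma)))

def fidelity_curve(snr_pos, snr_neg, grid):
    n_pos = cumulative_counts(snr_pos, grid)
    n_neg = cumulative_counts(snr_neg, grid)
    popt, _ = curve_fit(neg_model, grid, n_neg, p0=[n_neg.max(), 1.0])
    # P(>S/N) = 1 - N_negative / N_positive, using the fitted negative counts.
    return 1.0 - neg_model(grid, *popt) / np.clip(n_pos, 1.0, None)
```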
Figure 6: Left: Cumulative number distributions of emitters in the six O i absorber fields as a function of \(\mathrm{S/N}\). The blue and orange lines are the measured numbers of positive and negative targets, respectively. The red dashed line shows the error function fitted to the number of negative targets; this fit avoids small-number statistics at the high-\(\mathrm{S/N}\) end. Right: The measured fidelity as a function of \(\mathrm{S/N}\). The gray bars show the measured fidelity; the fitted fidelity is shown in dark red.
## Appendix B Completeness Correction for Continuum Sources
To estimate the actual number of sources in our observations, we insert pseudo-continuum targets into each image of each field and then obtain the detection completeness. Details of the completeness estimation mostly follow the procedure described in Fujimoto et al. (2016), Franco et al. (2018), and Gomez-Guijarro et al. (2022).
We first mask continuum sources with \(\rm S/N>4\) to avoid contamination by tentative detections. The pseudo-continuum targets were generated using the flux-scaled synthesized beam, with \(\rm S/N\) ranging from 4 to 6 in steps of 0.1. For each \(\rm S/N\) bin, we randomly insert 100 sources into each image and re-run the source-detection algorithm. If an artificial source is detected within one beam size of its inserted position, we regard it as recovered. The completeness of these QSO fields is shown in Figure 7. The red line is a fit of the form \(1-\exp(a\,{\rm S/N}-b)\). We find that sources with \(\rm S/N\) between 4 and 5 have a mean recovery probability of \(\sim\)70%. The completeness-corrected numbers of continuum sources in the fields J2054, J2315, J0100, PSO J183, and PSO J159 are 7.0, 2.8, 4.1, 1.0, and 2.0, respectively.
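The completeness fit itself can be sketched as follows; the recovered fractions below are mock values standing in for the measured ones.

```python
# Completeness fit of the quoted form 1 - exp(a*S/N - b).
import numpy as np
from scipy.optimize import curve_fit

def completeness(snr, a, b):
    return 1.0 - np.exp(a * snr - b)

snr_bins = np.arange(4.0, 6.01, 0.1)
recovered = completeness(snr_bins, -1.2, -4.0)            # mock data
popt, _ = curve_fit(completeness, snr_bins, recovered, p0=[-1.0, -3.0])
print("fitted (a, b) =", popt)
```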
## Appendix C Correlation Length \(r_{0}\) and Power-law Index \(\gamma\) Calculation
In this section, we show the detailed calculations of the correlation length and power-law index. The correlation function is defined as the number excess of galaxies around one metal absorber in a given survey volume.
\[\xi_{\rm g-abs}=\frac{1}{n_{0}}\frac{\Delta N(r)}{\Delta V}-1.\]
Thus we have,
\[n_{0}[1\,+\,\xi_{\rm g-abs}(r)]=\frac{\Delta N(r)}{\Delta V}.\] (C1)
If we then assume a power-law form, \(\xi_{\rm g-abs}=\left(\frac{r}{r_{0}}\right)^{-\gamma}\), we can replace \(\xi_{\rm g-abs}\) by this power law and obtain the following equation:
\[n_{0}\Bigg{[}1\,+\left(\frac{r}{r_{0}}\right)^{-\gamma}\Bigg{]}=\frac{\Delta N (r)}{\Delta V}.\] (C2)
We assumed a 3D spherical survey volume with a given radius \(r\). Integrating this equation over the volume, we obtain:
\[4\pi\int_{0}^{r}\,n_{0}\Bigg{[}1\,+\left(\frac{r^{\prime}}{r_{0}}\right)^{-\gamma}\Bigg{]}r^{\prime 2}\,dr^{\prime}=N\,(r).\] (C3)
To justify the spherical approach, we use the following assumptions from simulations. In simulations (Keating et al., 2016), strong O i absorbers are always contained within the dark matter (DM) halos of galaxies. For the successfully detected system, the halo mass is \(4\times 10^{11}M_{\odot}\) (Wu et al., 2021), corresponding to a virial radius of \(\sim\)20-30 pkpc at \(z\approx\) 6. Furthermore, the observed impact parameter (\(\sim\)20 pkpc) is very close to the halo virial radius. Thus, for this system to reside in the DM halo, the line-of-sight distance must be small. A sphere-like survey volume with a radius of 20 pkpc is therefore large enough to contain this target. In this work, we detect one galaxy at 20 pkpc; thus, \(N\) (20 pkpc) \(=1\). We then obtain:
\[4\pi\int_{0}^{r}\,n_{0}\Bigg{[}1\,+\left(\frac{r^{\prime}}{r_{0}}\right)^{-\gamma}\Bigg{]}r^{\prime 2}\,dr^{\prime}=1\text{, where }r=20\text{ pkpc}.\] (C4)
By integrating this equation, we then obtain a relation between \(r_{0}\) and \(\gamma\):
\[\log(r_{0})=\frac{\log\left[A(\gamma)\right]}{\gamma}\text{, where }r =20\text{ pkpc}.\] (C5)
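Carrying out the integral in (C4) with the spherical volume element gives a closed form for \(r_{0}\) at fixed \(\gamma\); the sketch below encodes this relation under that assumption (which fixes \(A(\gamma)\) in (C5) up to our derivation), with the random-field mean galaxy density \(n_{0}\) left as a placeholder input since its adopted value is not restated here.

```python
# r0(gamma) implied by (C4): 4*pi*n0*[r^3/3 + r0^gamma * r^(3-gamma)/(3-gamma)] = 1,
# valid for gamma < 3.
import numpy as np

def r0_from_gamma(gamma, n0, r=0.020):   # r in Mpc (20 pkpc), n0 in Mpc^-3
    homogeneous = 4.0 * np.pi * n0 * r**3 / 3.0
    r0_pow_gamma = ((1.0 - homogeneous) * (3.0 - gamma)
                    / (4.0 * np.pi * n0 * r**(3.0 - gamma)))
    return r0_pow_gamma ** (1.0 / gamma)

for gamma in (1.5, 1.8, 2.0):
    print(f"gamma = {gamma}: r0 ~ {r0_from_gamma(gamma, n0=1e-2):.1f} Mpc")
```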
## Appendix D Velocity Offsets versus Impact Parameters of [C ii] Candidates
In this section, we compare our two sets of criteria. Our major criteria are guided by theoretical simulations and described in Section 3.1.1. Cosmological simulations suggest that these metal absorbers are mostly generated by star formation and galactic outflows (Oppenheimer et al., 2009). Therefore, the velocity offset between absorbers and their actual hosts should be close to the outflow velocity (\(\sim\)200 km s\({}^{-1}\); Steidel et al., 2010). Further, these strong absorbers need to be gravitationally bound on the circumgalactic medium (CGM) scale (Keating et al., 2014). The impact parameters thus need to be smaller than the virial radii of the DM halos (\(<\)50 pkpc at \(z\sim\) 6; Keating et al., 2016). These criteria are shown as the dark-gray region in Figure 8. We note that galaxies in the dark-gray region could be more physically connected to the O i absorbers.
Figure 8: Velocity offsets vs. impact parameters of [C ii] candidates in this work. These targets are all color-coded by their signal-to-noise ratios. The dark and light gray regions show our major and relaxed selection criteria, respectively.
## ORCID IDs
Yunjing Wu [https://orcid.org/0000-0003-0111-8249](https://orcid.org/0000-0003-0111-8249)
Zheng Cai [https://orcid.org/0000-0001-8467-6478](https://orcid.org/0000-0001-8467-6478)
Jianan Li [https://orcid.org/0000-0002-1815-4839](https://orcid.org/0000-0002-1815-4839)
Kristian Finlator [https://orcid.org/0000-0002-0496-1656](https://orcid.org/0000-0002-0496-1656)
Marcel Neeleman [https://orcid.org/0000-0002-9838-8191](https://orcid.org/0000-0002-9838-8191)
J. Xavier Prochaska [https://orcid.org/0000-0002-7738-6875](https://orcid.org/0000-0002-7738-6875)
Bjorn H. C. Emonts [https://orcid.org/0000-0003-2983-815X](https://orcid.org/0000-0003-2983-815X)
Shiwu Zhang [https://orcid.org/0000-0002-0427-9577](https://orcid.org/0000-0002-0427-9577)
Feige Wang [https://orcid.org/0000-0002-7633-431X](https://orcid.org/0000-0002-7633-431X)
Jinyi Yang [https://orcid.org/0000-0001-5287-4242](https://orcid.org/0000-0001-5287-4242)
Ran Wang [https://orcid.org/0000-0003-4956-5742](https://orcid.org/0000-0003-4956-5742)
Xiaohui Fan [https://orcid.org/0000-0003-3310-0131](https://orcid.org/0000-0003-3310-0131)
Emmet Golden-Marx [https://orcid.org/0000-0001-5160-6713](https://orcid.org/0000-0001-5160-6713)
Laura C. Keating [https://orcid.org/0000-0001-5211-1958](https://orcid.org/0000-0001-5211-1958)
Joseph F. Hennawi [https://orcid.org/0000-0002-7054-4332](https://orcid.org/0000-0002-7054-4332)
|
2301.04960 | Exchange Bias Demonstrated in Bulk Nanocomposites Processed by High
Pressure Torsion | Ferromagnetic (Fe or Fe20Ni80) and antiferromagnetic (NiO) phases were
deformed by high pressure torsion, a severe plastic deformation technique, to
manufacture bulk sized nanocomposites and demonstrate an exchange bias, which
has been reported predominantly for bilayer thin films. High pressure torsion
deformation at elevated temperatures proved to be the key to obtain homogeneous
bulk nanocomposites. X-ray diffraction investigations detected
nanocrystallinity of the ferromagnetic and antiferromagnetic phases.
Furthermore, an additional phase was identified by X-ray diffraction, which
formed during deformation at elevated temperatures through the reduction of NiO
by Fe. Depending on the initial powder composition of Fe50NiO50 or
Fe10Ni40NiO50 the new phase was magnetite or maghemite, respectively.
Magnetometry measurements demonstrated an exchange bias in the high pressure
torsion processed bulk nanocomposites. Additionally, a tailoring of magnetic
parameters was demonstrated by the application of different strains or
post-process annealing. A correlation between the amount of applied strain and
exchange bias was found. The increase of exchange bias through applied strain
was related to the microstructural refinement of the nanocomposite. The
nanocrystalline maghemite was considered to have a crucial impact on the
observed changes of exchange bias through applied strain. | Michael Zawodzki, Lukas Weissitsch, Heinz Krenn, Stefan Wurster, Andrea Bachmaier | 2023-01-12T12:07:44Z | http://arxiv.org/abs/2301.04960v1 | # Exchange Bias Demonstrated in Bulk Nanocomposites Processed by High Pressure Torsion
###### Abstract
Ferromagnetic (Fe or Fe\({}_{20}\)Ni\({}_{80}\)) and antiferromagnetic (NiO) phases were deformed by high pressure torsion, a severe plastic deformation technique, to manufacture bulk sized nanocomposites and demonstrate an exchange bias, which has been reported predominantly for bilayer thin films. High pressure torsion deformation at elevated temperatures proved to be the key to obtain homogeneous bulk nanocomposites. X-ray diffraction investigations detected nanocrystallinity of the ferromagnetic and antiferromagnetic phases. Furthermore, an additional phase was identified by X-ray diffraction, which formed during deformation at elevated temperatures through the reduction of NiO by Fe. Depending on the initial powder composition of Fe\({}_{50}\)NiO\({}_{50}\) or Fe\({}_{10}\)Ni\({}_{40}\)NiO\({}_{50}\) the new phase was magnetite or maghemite, respectively.
Magnetometry measurements demonstrated an exchange bias in the high pressure torsion processed bulk nanocomposites. Additionally, a tailoring of magnetic parameters was demonstrated by the application of different strains or post-process annealing. A correlation between the amount of applied strain and exchange bias was found. The increase of exchange bias through applied strain was related to the microstructural refinement of the nanocomposite. The nanocrystalline maghemite was considered to have a crucial impact on the observed changes of exchange bias through applied strain.
**Keywords:** severe plastic deformation; high pressure torsion; nanocomposite; superior hardness; microstructural characterization; magnetic properties; hysteresis; exchange bias
## 1 Introduction
The introduction and application of nanomaterials is indispensable for progress in technological development. The use of nanomaterials is already well advanced and ranges from applications in medicine (targeted drug delivery for cancer treatment) [39], catalysis (optimisation of the catalyst's nanostructure to improve the selectivity and efficiency of catalytic processes [29], such as the enhanced reduction of volatile organic compounds [52]), gas sensors [30], and microelectronics [44] to engineering [32].
Metallic nanomaterials have outstanding physical properties, such as superior strength, superior hardness and enhanced wear resistance, and have attracted the attention of engineers seeking to reduce weight and improve the efficiency of transportation vehicles [47, 32].
The importance of magnetic materials for power efficiency is commonly known. Materials scientists focus here on the improvement of the nanostructure of soft- or hard-magnetic materials to enhance the magnetic properties [20, 10]. The important role of a nanocrystalline microstructure can be demonstrated for hard-magnetic materials. Here, highly textured nanocrystals are considered to be the key to pushing the energy product (\(BH\)) of a hard-magnetic material to its theoretical maximum of \(BH_{\mathrm{max}}=1/4\mu_{0}M_{\mathrm{S}}{}^{2}\) (\(\mu_{0}...\)vacuum permeability and \(\mathrm{M}_{\mathrm{S}}...\)saturation magnetisation) [10, 43]. An increase in \(BH\) of the hard-magnetic material leads to a reduction in weight and size of permanent-magnet motors and generators, which directly addresses the need for power efficiency [19].
Although synthesis of nanocrystalline material has been challenging, a variety of methods have been established to obtain the desired nanocrystalline state, (e.g. inert gas condensation, electrodeposition, mechanical alloying, crystallisation out of an amorphous state and severe plastic deformation (SPD)) [28]. Regarding synthesis, SPD methods, especially high pressure torsion (HPT), offer unique advantages. For example, it is possible to process a bulk nanocrystalline sample based on powder blends, which allows a large variety of phase combinations. With HPT it is possible to synthesise supersaturated solid solutions [23] or influence the magnetic properties of the nanocrystalline sample through the application of strain [51, 2, 48, 27].
The focus of this study was on the use of rare-earth-free phases due to their abundant availability. Therefore, the choices for a suitable ferromagnetic (FM) material were limited to Fe, Ni or FeNi-alloys. Fe was chosen because of the vast base of experience available concerning its HPT-deformation [34]. A suitable alternative FM-phase is the \(\gamma\)-Fe\({}_{20}\)Ni\({}_{80}\)-alloy, which possesses a lower \(\mathrm{M}_{\mathrm{S}}\) compared to Fe and a far larger domain wall width than Fe or Ni. These two properties of \(\gamma\)-Fe\({}_{20}\)Ni\({}_{80}\) are regarded as beneficial to the enhancement of the exchange bias (\(\mathrm{H}_{\mathrm{eb}}\)).
The \(\mathrm{H}_{\mathrm{eb}}\) was first observed by Meiklejohn and Bean on Co-CoO nanoparticles and introduced as a 'new unidirectional anisotropy' [26]. \(\mathrm{H}_{\mathrm{eb}}\) originates from a FM spin exchange coupling mechanism between the surface spins of adjacent antiferromagnetic (AFM) and FM phases and allows the preservation of an external magnetic field direction through the alignment of the AFM surface spins during field cooling (FC) below the Néel temperature (\(\mathrm{T}_{\mathrm{N}}\)). This phenomenon causes a biased shift of the hysteresis loop with respect to the FC direction.
A further aim of this study was to use a material that exhibits AFM properties at room temperature (RT). NiO has the benefit of possessing a face-centred cubic crystal structure with a smaller lattice parameter mismatch to Fe or the \(\gamma\)-Fe\({}_{20}\)Ni\({}_{80}\)-alloy compared to other possible AFM materials (e.g. FeS or \(\alpha\)-Fe\({}_{2}\)O\({}_{3}\)), and is also AFM far above RT; it was therefore the material of choice.
Until recently, such material combinations have mainly been realised by thin film deposition techniques to study the interfacial phenomenon of \(\mathrm{H}_{\mathrm{eb}}\)[6], due to its technical application in magnetic read heads for hard-disc drives [9]. Consequently, previous investigations have been limited to 2D structures. A first successful synthesis by HPT of bulk composites, which possess an \(\mathrm{H}_{\mathrm{eb}}\), has been reported for the Co-NiO system [27].
The aim of this study was to obtain a bulk nanocomposite possessing enhanced magnetic properties and an \(\mathrm{H}_{\mathrm{eb}}\), and to demonstrate a tailoring of the magnetic parameters via the applied strain. Furthermore, we investigate, in an initial trial, the nanostructure of the new bulk nanocomposites and the influence of the nanostructure on the magnetic properties.
A material synthesis combining a metallic and an oxide phase can be a challenging task, especially for HPT, if a homogeneous microstructure is desired. Although oxide ceramics (i.e. Al\({}_{2}\)O\({}_{3}\), ZrO\({}_{2}\), TiO\({}_{2}\) and Y\({}_{2}\)O\({}_{3}\)) have been processed previously with HPT [13], the present study systematically investigates, for the first time, the deformation process in combination with nanostructural and magnetic characterisation to provide a detailed description of the FM-AFM bulk nanocomposite.
A further aim was to attain deeper insight into the HPT-deformation process of NiO itself and its interaction with the mechanically softer FM-phase. The importance of HPT processing at elevated temperatures for synthesising homogeneous bulk nanocomposites was demonstrated. In addition to a tailoring of the magnetic properties, an unusually high Vickers microhardness was detected when the NiO-phase was preserved during deformation.
## 2 Materials and Methods
Commercially available powders (Fe - Mateck Fe-99,9% 100+200 mesh, Ni - Alfa Aesar Ni 99,9% 100+325 mesh, NiO - Alfa Aesar NiO 99,8 % 325 mesh) were used. All powder compositions are labelled in -at%, if not otherwise noted. Powders were stored and prepared inside a glove box filled with Ar-atmosphere. For powder consolidation with HPT, an air-tight capsule was used to prevent the powder from oxidation. The capsule itself enclosed the HPT-anvils, which can move freely only in the axial direction for powder blend compaction. The obtained pellet was subsequently used in a second step for HPT-deformation. The processed sample discs had dimensions of (\(\varnothing 8\times 0.6\)) \(mm\) [21]. The process parameters were the following: applied hydrostatic pressure of 6 \(GPa\), rotation speed \(\omega=1.25\)\(min^{-1}\) and deformation temperatures (T\({}_{\rm def}\)) of 200-300\({}^{\circ}\)C. The T\({}_{\rm def}\) was provided by inductive heating of both anvils during processing, and the temperature was controlled by a pyrometer [34]. Samples deformed at RT were cooled with pressurised air to ensure temperature stability of the sample. The equivalent von Mises strain (\(\epsilon_{\rm vM}\)) was calculated with \(\epsilon_{\rm vM}=(2\pi Nr)/(\sqrt{3}t)\) (\(N...\)applied rotations, \(r...\)radius and \(t...\)sample thickness) [21].
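For reference, a one-line helper for this relation is sketched below; the local thickness in the example is chosen to reproduce the \(\epsilon_{\rm vM}\sim\)1760 quoted in Section 3.1.1 and is our assumption.

```python
# Equivalent von Mises strain for HPT, eps_vM = 2*pi*N*r / (sqrt(3)*t).
import numpy as np

def von_mises_strain(n_rot, r_mm, t_mm):
    return 2.0 * np.pi * n_rot * r_mm / (np.sqrt(3.0) * t_mm)

# Example: 100 rotations at r = 3 mm and an assumed local thickness of
# ~0.62 mm give eps_vM ~ 1755, close to the quoted eps_vM ~ 1760.
print(round(von_mises_strain(100, 3.0, 0.62)))
```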
Scanning electron microscopy (SEM; LEO-1525 Carl Zeiss GmbH) was used in backscattered electron mode (BSE; Model 7426, Oxford Instruments plc) with an acceleration voltage of 20 \(kV\). Energy dispersive X-ray spectroscopy (EDX; XFlash 6-60, Bruker) analysis with SEM was performed at 5 \(kV\) to maximise lateral resolution. In order to assign the corresponding radial position to the SEM images, the ends of the examined HPT half discs were determined during the SEM examination. \(r\sim\)0 \(mm\) was defined as half the distance between both ends, and \(r\sim\)3 \(mm\) was determined by the relative distance to \(r\sim\)0 \(mm\). \(r\sim\)3 \(mm\) was cross-checked by measuring the relative distance to the end of the disc, which should be 1 \(mm\). Hardness measurements were performed with a Micromet 5104 from Buehler with an indentation load of 500 \(g\) along the radial direction every 250 \(\mu m\).
For X-ray diffraction (XRD), samples were polished on the top side and analysed with a Bruker D2 Phaser (Co-K\({}_{\alpha}\) source). XRD data from synchrotron measurements were collected in transmission mode at the Deutsches Elektronen-Synchrotron (DESY) at beamline P07B (high energy materials science) with a beam energy of 87.1 \(keV\) and a beam size of \(0.5\times 0.5\)\(mm^{2}\). All wide angle X-ray scattering (WAXS) measurements were done in the axial direction [48], and the data were acquired with a Perkin Elmer XRD 1621 detector. Peak analysis was done with a self-written script in Octave based on the pseudo-Voigt method to determine the integral breadth and the coherent scattering domain size (CSDS) [46] via the Scherrer-relation for spherical crystallites. As XRD reference patterns, the American Mineralogist Crystal Structure Database (AMCSD) was used (Fe: AMCSD 0012931, Ni: AMCSD 0012932, NiO: AMCSD 0017028, \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\): AMCSD 0020516, \(\alpha\)-Fe\({}_{2}\)O\({}_{3}\): AMCSD 0000143 and Fe\({}_{3}\)O\({}_{4}\): AMCSD 0009109).
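A Python sketch of this peak-breadth analysis is given below (the authors used a self-written Octave script); the Scherrer constant \(K\) depends on the breadth definition and crystallite shape and is left configurable.

```python
# Scherrer and Williamson-Hall analysis from integral breadths (beta, in rad)
# and Bragg angles (theta, in rad), measured over several reflections.
import numpy as np

def scherrer_size(beta, theta, wavelength_nm, k=0.9):
    """CSDS D = K * lambda / (beta * cos(theta)), per reflection."""
    return k * wavelength_nm / (beta * np.cos(theta))

def williamson_hall(beta, theta, wavelength_nm, k=0.9):
    """Linear fit of beta*cos(theta) = K*lambda/D + 4*eps*sin(theta)."""
    x = 4.0 * np.sin(theta)
    y = beta * np.cos(theta)
    eps, intercept = np.polyfit(x, y, 1)          # slope = residual strain
    return k * wavelength_nm / intercept, eps     # (D_WH in nm, eps_WH)
```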
The annealing experiments were done within a vacuum furnace (XTube, type-1200, Xerion Advanced Heating). For sample preparation, a deformed HPT-disc was cut into quarters. One quarter of the sample material was used to obtain probing material for magnetometry measurements from the same radial position, and the other quarters were used for microstructural characterisation. For the annealing experiment itself, one full quarter and one sample for magnetometry measurements were heated to 450\({}^{\circ}\)C or 550\({}^{\circ}\)C in a vacuum of 10\({}^{\text{-6}}\)\(mbar\) or better and kept at the specific temperature for one hour.
Magnetometry samples were prepared with a diamond wire saw and cut out at the desired radial position, with a volume of \(\sim\)2 \(mm^{3}\) or less [48]. Due to the sample dimensions, the applied strain, as quoted for the magnetometry measurements, is an average value related to the centre of mass of the measured sample.
Field cooling was performed with the applied magnetic field parallel to the radial direction, starting above the Néel temperature of NiO, T\({}_{\text{N}}\sim\)524 \(K\) [45], down to RT inside a vacuum chamber at 10\({}^{\text{-4}}\)\(mbar\) or better. An electromagnet provided a magnetic field of \(\sim\)10 \(kOe\). The field cooling was subsequently continued below RT, from RT to 8 \(K\) at 10 \(kOe\), in a superconducting quantum interference device (SQUID; Quantum Design MPMS-XL-7), which was further used to determine the magnetic hysteresis loops for the Fe\({}_{10}\)Ni\({}_{40}\)NiO\({}_{50}\) composition. Measurements were executed at 8 \(K\) with a maximum magnetic field strength of \(\pm\)70 \(kOe\) to ensure magnetic saturation. After determining H\({}_{\text{C1}}\) and H\({}_{\text{C2}}\), the H\({}_{\text{eb}}\) can be calculated with \(H_{\text{eb}}=(H_{\text{C1}}+H_{\text{C2}})/2\). The coercivity of the symmetric hysteresis is determined by \(H_{\text{C}}=(H_{\text{C2}}-H_{\text{C1}})/2\). M\({}_{\text{S}}\) was approximated with a linear fit to the saturated branches of the hysteresis, taking M\({}_{\text{S}}\) as the intercept with the ordinate.
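A schematic of this loop analysis is sketched below, assuming measured (H, M) arrays for the descending and ascending branches (field in \(Oe\), moment in \(emu/g\)); the saturation-field cut is a placeholder.

```python
# Extraction of H_C1, H_C2, H_eb, H_C and M_S from a hysteresis loop (sketch).
import numpy as np

def zero_crossing(h, m):
    """Field at which M crosses zero, by linear interpolation."""
    i = np.where(np.signbit(m[:-1]) != np.signbit(m[1:]))[0][0]
    return h[i] - m[i] * (h[i + 1] - h[i]) / (m[i + 1] - m[i])

def loop_parameters(h_desc, m_desc, h_asc, m_asc, h_sat=50000.0):
    hc1 = zero_crossing(h_desc, m_desc)          # left coercive field
    hc2 = zero_crossing(h_asc, m_asc)            # right coercive field
    sat = h_asc >= h_sat                         # saturated branch for M_S
    slope, intercept = np.polyfit(h_asc[sat], m_asc[sat], 1)
    return {"H_eb": 0.5 * (hc1 + hc2),           # loop shift
            "H_C": 0.5 * (hc2 - hc1),            # symmetric coercivity
            "M_S": intercept}                    # intercept of linear fit
```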
## 3 Results
### 3.1 Hardness and microstructural evolution
#### 3.1.1 Fe\({}_{\text{50}}\)NiO\({}_{\text{50}}\) composition
The first composition investigated was the binary Fe\({}_{\text{50}}\)NiO\({}_{\text{50}}\) (Fe\({}_{\text{42.8}}\)NiO\({}_{\text{57.2}}\) in -wt%) blend. Figures 1a and b present SEM micrographs at \(r\sim\)0 \(mm\) and \(r\sim\)3 \(mm\). Both samples were processed with 100 rotations, and the deformation temperature was increased from RT (Figure 1a) to 300\({}^{\circ}\)C (Figure 1b). This yielded an applied strain at \(r\sim\)3 \(mm\) of \(\epsilon_{\rm vM}\sim\)1760 for T\({}_{\rm def}\) = RT and of \(\epsilon_{\rm vM}\sim\)1490 for T\({}_{\rm def}\) = 300\({}^{\circ}\)C, due to a slight difference in sample height. For the sample processed at RT, EDX measurements confirmed that the dark phase seen in BSE-contrast consisted of NiO, while the bright phase was identified as the Fe-phase.
When processed at RT, both phases formed lamellar structures (Figure 1a). Although the applied strain was high, the expected refinement of the two-phase microstructure was not observed. Within the phases, grain refinement was evident at \(r\sim\)3 \(mm\). Vickers microhardness measurements detected an increase of the microhardness (Figure 1c) and confirmed the ongoing refinement within the phases seen in SEM. This observation indicated that the applied strain had acted on the composite to a certain degree. However, it was questionable whether the total amount of applied strain had been absorbed by the composite through deformation. The deformation at RT of samples with a NiO content of 50-at% (Figure 1a) or higher was considered unsuccessful because the samples had an inhomogeneous microstructure and the FM-phase dimensions were too large for observing an H\({}_{\text{eb}}\).
Deformation at elevated temperatures improved the deformation behaviour of the nanocomposites and led to a homogeneous microstructure (Figure 1b). SEM micrographs indicated that a homogeneous two-phase microstructure had been formed, having microstructural refinement from \(r\sim\)0 \(mm\) to \(r\sim\)3 \(mm\). The large NiO structures, which were observed for T\({}_{\text{def}}\) = RT, were not observed for T\({}_{\text{def}}\) = 300\({}^{\circ}\)C in the SEM micrographs along the sample radius. Here, the dark phase, which is shown in BSE-contrast, was determined by EDX
measurements in combination with XRD-results (Figure 2) as Fe\({}_{3}\)O\({}_{4}\) and the bright phase was identified as a FeNi-phase.
Vickers microhardness measurements detected a large difference in hardness between samples deformed at RT and 300\({}^{\circ}\)C (Figure 1c). For the T\({}_{\rm def}\) = 300\({}^{\circ}\)C composite, hardness values were considerably higher than for the RT sample, varying from 6 \(GPa\) at \(r\sim\)0 \(mm\) (\(\epsilon_{\rm vM}\sim\)0) to 6.8 \(GPa\) at \(r\sim\)3 \(mm\) (\(\epsilon_{\rm vM}\sim\)1740). Above \(r\sim\)1.5 \(mm\) the T\({}_{\rm def}\) = 300\({}^{\circ}\)C composite became microstructurally saturated according to Vickers microhardness measurements.
Figure 1: SEM micrographs in BSE mode depict samples deformed at RT (**a**) and 300\({}^{\circ}\)C (**b**) at \(r=0\)\(mm\) and \(r=3\)\(mm\). EDX spectroscopy identified the dark phase as NiO for (a) or Fe\({}_{3}\)O\({}_{4}\) for (b). (**c**) Vickers microhardness of Fe\({}_{50}\)NiO\({}_{50}\) samples along the radial direction. The sample processed at 300\({}^{\circ}\)C had a higher saturation microhardness compared to the sample deformed at RT. (**d**) Vickers microhardness values from \(r=2.5\)\(mm\) to \(r=3.75\)\(mm\). Higher deformation temperatures led to a homogeneous microstructure and therefore higher hardness values. (**e**) SEM micrographs in BSE mode of Fe\({}_{50}\)NiO\({}_{50}\) samples at the outer radii (\(r\sim\)3\(mm\)). An onset of the homogenisation of the microstructure could be seen at 225\({}^{\circ}\)C, gradually improving towards 275\({}^{\circ}\)C. The micron bar applies to all micrographs in the same row.
Further deformation experiments were conducted at deformation temperatures between RT and 300\({}^{\circ}\)C (T\({}_{\rm def}\) = 200\({}^{\circ}\)C, 225\({}^{\circ}\)C, 250\({}^{\circ}\)C and 275\({}^{\circ}\)C) to gain insight into the onset of the microstructural homogenisation. Figure 1e displays SEM micrographs at \(r\sim\)3 \(mm\). At T\({}_{\rm def}\) = 200\({}^{\circ}\)C, the microstructure was identical to that at RT; however, at T\({}_{\rm def}\) = 225\({}^{\circ}\)C, the large NiO structures had become more and more refined through the application of strain. This process continued for T\({}_{\rm def}\) = 250\({}^{\circ}\)C and T\({}_{\rm def}\) = 275\({}^{\circ}\)C, gradually extending from outer radius to inner radius. At T\({}_{\rm def}\) = 275\({}^{\circ}\)C, the microstructure is comparable to the T\({}_{\rm def}\) = 300\({}^{\circ}\)C samples at \(r\sim\)3 \(mm\), but still large NiO structures were observed for T\({}_{\rm def}\) = 275\({}^{\circ}\)C at \(r\leq 2\)\(mm\) (not shown).
The Vickers microhardness values obtained reflected the observations from SEM. In Figure 1d, the hardness results at the outer radius are summarised. The inhomogeneous microstructure of the sample deformed at T\({}_{\rm def}\) = 200\({}^{\circ}\)C had a lower microhardness (5 \(GPa\)) than those of samples deformed at temperatures T\({}_{\rm def}\geq\)225\({}^{\circ}\)C, which were approximately 6 to 7 \(GPa\) at \(r\sim\)3 \(mm\) and similar to the hardness values of the sample deformed at T\({}_{\rm def}\) = 300\({}^{\circ}\)C. All samples processed at elevated temperatures exhibited significantly higher Vickers microhardness values than the sample processed at RT.
Phase analysis with XRD detected Fe and NiO phases for the samples deformed at RT and 200\({}^{\circ}\)C, with an onset of Fe\({}_{3}\)O\({}_{4}\) formation at T\({}_{\rm def}\) = 200\({}^{\circ}\)C (Figure 2). An increase of the deformation temperature from T\({}_{\rm def}\) = 225\({}^{\circ}\)C to T\({}_{\rm def}\) = 300\({}^{\circ}\)C yielded a continuously growing amount of Fe\({}_{3}\)O\({}_{4}\), caused by the reduction of NiO by Fe. The liberated Ni became available to form the \(\gamma\)-FeNi phase, which was detected in the XRD-scans. The Fe content of the \(\gamma\)-FeNi phase could be approximated through the shift of its peaks to lower q-values. This shift is caused by the substitution of Fe-atoms into the Ni lattice, resulting in an enlargement of the lattice parameter, owing to the larger atomic radius of Fe compared to Ni [33]. The shifted peaks of the \(\gamma\)-FeNi phase for T\({}_{\rm def}\) = 300\({}^{\circ}\)C implied a composition of approximately \(\gamma\)-Fe\({}_{39}\)Ni\({}_{61}\).
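A rough sketch of such an estimate is given below: the cubic lattice parameter follows from a peak position q, and the Fe content can be interpolated linearly between endmember lattice parameters (a Vegard-type assumption; FeNi alloys deviate from strict Vegard behaviour, and the Fe-rich endmember value below is illustrative, not the authors').

```python
# Lattice parameter from a cubic reflection and a Vegard-type Fe-content
# estimate for the gamma-FeNi phase (first-order sketch only).
import numpy as np

def lattice_parameter(q, hkl=(1, 1, 1)):
    """a = 2*pi*sqrt(h^2 + k^2 + l^2) / q, for q in 1/Angstrom."""
    return 2.0 * np.pi * np.sqrt(sum(i * i for i in hkl)) / q

def fe_fraction(a, a_ni=3.524, a_fe_rich=3.59):
    """Linear interpolation between fcc Ni and an Fe-rich fcc endmember."""
    return (a - a_ni) / (a_fe_rich - a_ni)
```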
The quality of the XRD patterns and the similarity of the Fe\({}_{3}\)O\({}_{4}\) and maghemite (\(\gamma\)-Fe\({}_{2}\)O\({}_{3}\)) patterns allowed a determination of the Fe\({}_{\rm x}\)O\({}_{\rm y}\) phase only at higher q\({}_{\rm z}\geq\)3.5 Å\({}^{-1}\). The peaks at q\({}_{\rm z}\) = 3.889 Å\({}^{-1}\) and q\({}_{\rm z}\) = 4.235 Å\({}^{-1}\), for example, are in accordance with the Fe\({}_{3}\)O\({}_{4}\) pattern. The presence of \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\) could not be excluded, but apparently Fe\({}_{3}\)O\({}_{4}\) dominated the XRD-pattern; \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\) was therefore omitted in the following considerations for the Fe\({}_{50}\)NiO\({}_{50}\) nanocomposite.
The assumption that the \(\alpha\)-Fe phase had been completely consumed by the formation of Fe\({}_{3}\)O\({}_{4}\) or \(\gamma\)-FeNi allowed an estimation of the residual NiO. Using the known composition of the \(\gamma\)-Fe\({}_{39}\)Ni\({}_{61}\) phase for the sample deformed at T\({}_{\rm def}\) = 300\({}^{\circ}\)C yielded a residual NiO-phase of at most 16-wt%, which should have remained in the sample after synthesis but could not be resolved by the laboratory XRD-equipment due to overlapping XRD-peaks (Figure 2). Although the sample deformed at T\({}_{\rm def}\) = 300\({}^{\circ}\)C had a promising microstructure, this nanocomposite was considered unsuitable for observing a significant H\({}_{\rm eb}\) in subsequent magnetometry measurements because it did not contain large amounts of NiO according to the XRD results.
The CSDS, calculated via the Scherrer-relation for the Fe\({}_{50}\)NiO\({}_{50}\) samples deformed at RT and T\({}_{\rm def}\) = 300\({}^{\circ}\)C, displayed nanocrystallinity, whereby Fe\({}_{3}\)O\({}_{4}\) had the largest CSDS. Considering residual strain with the Williamson-Hall (WH) method [46] and comparing the results, the strain in the Fe\({}_{3}\)O\({}_{4}\)-phase is almost half of that in \(\gamma\)-FeNi. Strain-free CSDS estimations yielded approximately 45 \(nm\) for both analysed phases at T\({}_{\rm def}\) = 300\({}^{\circ}\)C (Table 1).
Table 1: Summary of the peak analysis for the Fe\({}_{50}\)NiO\({}_{50}\) samples deformed at RT and 300\({}^{\circ}\)C. \(<\)D\({}_{\rm x}\)\(>\) is the average crystallite size (CSDS), D\({}_{\rm x\_WH}\) the strain-free CSDS, and \(\epsilon_{\rm x\_WH}\) the residual strain in the crystallite; the latter two values are obtained via the WH method.

| | \(<\)D\({}_{\rm Fe}\)\(>\) / [nm] | D\({}_{\rm Fe\_WH}\) / [nm] | \(\epsilon_{\rm Fe\_WH}\) | \(<\)D\({}_{\rm NiO}\)\(>\) / [nm] | D\({}_{\rm NiO\_WH}\) / [nm] | \(\epsilon_{\rm NiO\_WH}\) |
| --- | --- | --- | --- | --- | --- | --- |
| Fe\({}_{50}\)NiO\({}_{50}\) @ T\({}_{\rm def}\)=RT | 11\(\pm\)2 | 40 | 2\(\cdot\)10\({}^{-2}\) | 11\(\pm\)2 | 30 | 2\(\cdot\)10\({}^{-2}\) |

| | \(<\)D\({}_{\rm FeNi}\)\(>\) / [nm] | D\({}_{\rm FeNi\_WH}\) / [nm] | \(\epsilon_{\rm FeNi\_WH}\) | \(<\)D\({}_{\rm Fe_{3}O_{4}}\)\(>\) / [nm] | D\({}_{\rm Fe_{3}O_{4}\_WH}\) / [nm] | \(\epsilon_{\rm Fe_{3}O_{4}\_WH}\) |
| --- | --- | --- | --- | --- | --- | --- |
| Fe\({}_{50}\)NiO\({}_{50}\) @ T\({}_{\rm def}\)=300\({}^{\circ}\)C | 12\(\pm\)2 | 45 | 3\(\cdot\)10\({}^{-2}\) | 19\(\pm\)7 | 43 | 1\(\cdot\)10\({}^{-2}\) |
Figure 2: XRD-data of the Fe\({}_{50}\)NiO\({}_{50}\) samples, measured with laboratory equipment. The formation of Fe\({}_{3}\)O\({}_{4}\), through the reduction of NiO by Fe, was seen for samples deformed at T\({}_{\rm def}\geq\)225\({}^{\circ}\)C. \(\alpha\)-Fe was not detectable at T\({}_{\rm def}\geq\)275\({}^{\circ}\)C.
#### 3.1.2 Fe\({}_{10}\)Ni\({}_{40}\)NiO\({}_{50}\) composition
To reduce the loss of NiO at 300\({}^{\circ}\)C, another composition, Fe\({}_{10}\)Ni\({}_{40}\)NiO\({}_{50}\) (Fe\({}_{8.4}\)Ni\({}_{35.4}\)NiO\({}_{56.2}\) in wt%), was tested, with the idea of conserving the NiO-phase by alloying the Fe with Ni from the start to prevent the reduction of NiO.
The microstructures depicted in Figure 3a were collected from two samples: one was deformed with 30 rotations, the other with 125 rotations; all other parameters were equal. For the micrographs shown, the applied strain was \(\epsilon_{\rm vM}\sim\)90 (\(\triangleq r\sim\)0.5 \(mm\)) and \(\epsilon_{\rm vM}\sim\)610 (\(\triangleq r\sim\)3.5 \(mm\)) for the sample with 30 rotations. For the sample with 125 rotations, the applied strain was: \(\epsilon_{\rm vM}\sim\)270 (\(\triangleq r\sim\)0.375 \(mm\)), \(\epsilon_{\rm vM}\sim\)1280 (\(\triangleq r\sim\)1.75 \(mm\)) and \(\epsilon_{\rm vM}\sim\)2560 (\(\triangleq r\sim\)3.5 \(mm\)). Due to the homogeneous microstructure, a uniform deformation of the sample material was assumed, and the applied strain was therefore used rather than the radius. The dark phase was identified as NiO and the bright phase as Ni. The microstructure differed from that of the Fe\({}_{50}\)NiO\({}_{50}\) sample deformed at T\({}_{\rm def}\) = 300\({}^{\circ}\)C (for comparison, Figure 1b). Through the preservation of NiO, a fragmentation of NiO took place, leading to a highly diverse Ni and NiO nanocomposite structure. Additionally, the SEM micrographs demonstrated a refinement of the nanocomposite microstructure with increasing applied strain.
Vickers microhardness measurements along the radial direction demonstrated a microstructural saturation at \(\epsilon_{\rm vM}\sim\)700, at 9.3 \(GPa\) in microhardness, for the Fe\({}_{10}\)Ni\({}_{40}\)NiO\({}_{50}\) sample after 125 rotations (Figure 3b). Beyond \(\epsilon_{\rm vM}\sim\)700, further deformation caused only a small increase in microhardness of approximately 0.6 \(GPa\), to 9.9 \(GPa\), which correlated with the minor refinement seen in SEM between \(\epsilon_{\rm vM}\sim\)1280 and \(\epsilon_{\rm vM}\sim\)2560. Vickers microhardness values of 9.3 to 9.9 \(GPa\) are significantly higher than previously reported results for similar nanocrystalline materials [42, 1].
Synchrotron WAXS investigations were conducted along the axial direction for the following applied strains: \(\epsilon_{\rm vM}\sim\)270, \(\epsilon_{\rm vM}\sim\)1280, and \(\epsilon_{\rm vM}\sim\)2560. The detected phases were NiO, Ni and \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\). However, the \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\) peaks could not be identified directly. The comparison of the WAXS patterns of the as-deformed and post-process annealed samples (Section 3.1.3) showed a slight shift from the Fe\({}_{3}\)O\({}_{4}\) to the \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\) reference pattern (Figure 4). This shift became more evident when regarding the calculated peak centres.
According to the WAXS measurements, the FM-phase consisted solely of Ni. The expected substitution of Fe was thus not detectable with WAXS, implying that the main part of the Fe was consumed by the formation of \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\). The WAXS results allowed an estimation of the residual amount of NiO within the sample and yielded the following composition: Ni\({}_{48.6}\)NiO\({}_{39.4}\)(\(\gamma\)-Fe\({}_{2}\)O\({}_{3}\))\({}_{12}\), all in wt%.
Analysis of the integral peak breadths with the Scherrer-relation showed minor changes of the CSDS for Ni and NiO, but the CSDS of \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\) was reduced by 50% with applied strain (Table 2). The residual strain was approximated via the WH method and was, for NiO, approximately a factor of 10 higher than for Ni.
Figure 3: (**a**) BSE micrographs of the Fe\({}_{10}\)Ni\({}_{40}\)NiO\({}_{50}\) samples. Microstructural refinement was detected from left to right. Two samples were deformed to obtain the depicted micrographs. The first sample was deformed with 30 rotations, and micrographs at applied strains of \(\epsilon_{\rm vM}\sim\)90 to \(\epsilon_{\rm vM}\sim\)610 were taken. The second sample was deformed with 125 rotations, and micrographs at applied strains of \(\epsilon_{\rm vM}\sim\)270, \(\epsilon_{\rm vM}\sim\)1280 and \(\epsilon_{\rm vM}\sim\)2560 were taken. The micron bar applies to all micrographs in the same row. (**b**) Vickers microhardness of both Fe\({}_{10}\)Ni\({}_{40}\)NiO\({}_{50}\) samples deformed at 300\({}^{\circ}\)C.
#### 3.1.3 Annealing of Fe\({}_{10}\)Ni\({}_{40}\)NiO\({}_{50}\) composition
Because of the complex microstructure of the nanocomposite after deformation, the effects of post-process annealing were investigated to better understand the influence of phase growth and phase-interface morphology on the magnetic properties of the Fe\({}_{10}\)Ni\({}_{40}\)NiO\({}_{50}\) sample. Hence, sample material with a large amount of applied strain, \(\epsilon_{\rm vM}\sim\)1150 (\(r\sim\)2.75 \(mm\)), was used.
The SEM micrographs are displayed in Figure 5 for the following states: as-deformed, annealed in vacuum at 450\({}^{\circ}\)C and 550\({}^{\circ}\)C. For the as-deformed state, the complex phase
Table 2: Summary of the integral peak breadth analysis for the Fe\({}_{10}\)Ni\({}_{40}\)NiO\({}_{50}\) sample deformed at T\({}_{\rm def}=300^{\circ}\)C. \(<\)D\({}_{\rm x}\)\(>\) is the average crystallite size (CSDS), D\({}_{\rm x\_WH}\) the strain-free CSDS, and \(\epsilon_{\rm x\_WH}\) the residual strain in the crystallite; the latter two values were obtained by the WH method.

| | \(<\)D\({}_{\rm Ni}\)\(>\) / [nm] | D\({}_{\rm Ni\_WH}\) / [nm] | \(\epsilon_{\rm Ni\_WH}\) | \(<\)D\({}_{\rm NiO}\)\(>\) / [nm] | D\({}_{\rm NiO\_WH}\) / [nm] | \(\epsilon_{\rm NiO\_WH}\) | \(<\)D\({}_{\gamma\text{-Fe}_{2}\text{O}_{3}}\)\(>\) / [nm] |
| --- | --- | --- | --- | --- | --- | --- | --- |
| \(\epsilon_{\rm vM}\sim\)270 | 18\(\pm\)2 | 21 | 3\(\cdot\)10\({}^{-3}\) | 11\(\pm\)3 | 25 | 2\(\cdot\)10\({}^{-2}\) | 12\(\pm\)2 |
| \(\epsilon_{\rm vM}\sim\)1280 | 15\(\pm\)5 | 19 | 2\(\cdot\)10\({}^{-3}\) | 11\(\pm\)2 | 20 | 1\(\cdot\)10\({}^{-2}\) | 9\(\pm\)4 |
| \(\epsilon_{\rm vM}\sim\)2560 | 15\(\pm\)4 | 18 | 2\(\cdot\)10\({}^{-3}\) | 11\(\pm\)2 | 19 | 1\(\cdot\)10\({}^{-2}\) | 6\(\pm\)2 |
| ann. 450\({}^{\circ}\)C | 21\(\pm\)2 | 22 | 1\(\cdot\)10\({}^{-3}\) | 14\(\pm\)3 | 22 | 9\(\cdot\)10\({}^{-3}\) | 8\(\pm\)2 |
| ann. 550\({}^{\circ}\)C | 24\(\pm\)2 | 25 | 0.5\(\cdot\)10\({}^{-3}\) | 17\(\pm\)3 | 26 | 7\(\cdot\)10\({}^{-3}\) | 14\(\pm\)2 |
Figure 4: Synchrotron WAXS results for different applied strains of the Fe\({}_{10}\)Ni\({}_{40}\)NiO\({}_{50}\) sample. From bottom to top: \(\epsilon_{\rm vM}\sim\)270, \(\epsilon_{\rm vM}\sim\)1280 and \(\epsilon_{\rm vM}\sim\)2560, followed by the two annealed samples: the sample annealed at 450\({}^{\circ}\)C below and the sample annealed at 550\({}^{\circ}\)C on top. At low q-values, the development of \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\) peaks is visible. The \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\)-peaks became weaker at high applied strains due to the reduction of the CSDS.
interface was assumed to have large topographic variations on the nanometre scale, which were expected to smooth out through annealing. The contrast in BSE mode is enhanced for the annealed samples, indicating a growth of the bright and dark phases. The annealing led to a reduction of the phase interface, which was expected to be predominantly driven by the minimisation of the Gibbs free energy.
The XRD-peak analysis confirmed the increase of the CSDS for all three detected phases (Table 2), whereby \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\) had the highest relative increase in crystallite size, of approximately twice its initial CSDS in the as-deformed state. The relative CSDS growth of the Ni and NiO-phases during annealing was significantly smaller.
### 3.2 Magnetic characterisation
For the SQUID investigations, the promising microstructures of the Fe\({}_{10}\)Ni\({}_{40}\)NiO\({}_{50}\) samples were chosen. The hysteresis loops shown in Figure 6a exhibit a continuously growing H\({}_{\rm eb}\), which could be correlated with the refinement of the microstructure through the application of strain. The applied strain changed the hysteresis loop characteristics in general: as M\({}_{\rm S}\) decreased, the coercivity rose. The coercivity nearly doubled, from H\({}_{\rm C}\) = 460 \(Oe\) at \(\epsilon_{\rm vM}\sim\)90 to H\({}_{\rm C}\) = 842 \(Oe\) at \(\epsilon_{\rm vM}\sim\)2560. The magnetic remanence (M\({}_{\rm R}\)) rose from M\({}_{\rm R}\) = 18.5 \(emu/g\) to M\({}_{\rm R}\) = 23.3 \(emu/g\) at \(\epsilon_{\rm vM}\sim\)610 and became independent of the applied strain for further deformation. H\({}_{\rm eb}\) started at \(\epsilon_{\rm vM}\sim\)90 with H\({}_{\rm eb}\) = -52 \(Oe\) and reached H\({}_{\rm eb}\) = -243 \(Oe\) at \(\epsilon_{\rm vM}\sim\)2560 (Table 3). As mentioned previously, M\({}_{\rm S}\) decreased from M\({}_{\rm S,\epsilon_{\rm vM}\sim 90}\) = 37.2 \(emu/g\) to M\({}_{\rm S,\epsilon_{\rm vM}\sim 2560}\) = 27.5 \(emu/g\). These changes in M\({}_{\rm S}\) and H\({}_{\rm eb}\) appeared to be interrelated and might originate from the detected changes in the CSDS, the enlargement of the phase interfaces and the generation of \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\).
The slight increase in Vickers microhardness detected at \(r\leq\)1.25 \(mm\) (Figure 3b) was therefore confirmed through the continuously growing H\({}_{\rm eb}\) for such high amounts of applied strain and was in accordance with the impression of an ongoing refinement gained by SEM and XRD investigations.
Figure 6b shows hysteresis loops of the annealed samples at 8 \(K\). The annealing treatment caused a drastic reduction of H\({}_{\rm eb}\) from H\({}_{\rm eb,\epsilon_{\rm{vM}}}\)\(\sim\)1150 = -200 \(Oe\) to H\({}_{\rm eb,ann.~{}550^{\circ}\rm C}\) = -23 \(Oe\)
Figure 5: SEM micrographs were done in BSE mode of as-deformed and annealed samples of the Fe\({}_{10}\)Ni\({}_{40}\)NiO\({}_{50}\) composition. The annealing initiated a phase growth and change of interface morphology for the samples annealed at 450\({}^{\circ}\)C and 550\({}^{\circ}\)C. The micron bar applies to all micrographs in the same row.
and M\({}_{\rm R}\) (Table 3), whereas M\({}_{\rm S}\) became larger, changing from M\({}_{\rm S,\epsilon_{vM}\sim 1150}\) = 32.3 \(emu/g\) to M\({}_{\rm S,ann.~{}550^{\circ}C}\) = 36.6 \(emu/g\). The rise of M\({}_{\rm S}\) with annealing treatment correlated with the detected growth of CSDS for all involved phases (Table 2).
The annealing initiated topological changes of the phase interface morphology, leading to a reduction of interface roughness. This would be considered to affect the H\({}_{\rm eb}\) as well [24, 31], but its impact would be minor compared to the growth of phase size, especially in a polycrystalline sample [31]. The enlargement of FM-phase dimensions is qualitatively evident in the SEM micrographs shown in Figure 5 and is presumed to have been the main reason for the decline of H\({}_{\rm eb}\) by facilitating magnetic nucleation within the FM-phase, after annealing had been applied.
The detected change in H\({}_{\rm eb}\) versus applied strain for the Fe\({}_{10}\)Ni\({}_{40}\)NiO\({}_{50}\) nanocomposite displayed a saturation behaviour similar to the microstructural saturation in single-phase systems previously reported for several phases deformed by HPT [34, 36]. A simple fit model based on an asymptotic approximation yielded an asymptotic limit H\({}_{\rm eb,max}\) of approximately 300 \(Oe\) (Figure 6c) for the Fe\({}_{10}\)Ni\({}_{40}\)NiO\({}_{50}\) composite. The fit model is not based on a physical theory and was only used here to approximate an assumed saturation of H\({}_{\rm eb}\) through a saturation of microstructural refinement.
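The functional form of that fit is not given in the text; purely as an illustration, the short sketch below fits one plausible saturating form, \(H_{\rm eb}(\epsilon)=H_{\rm max}(1-e^{-\epsilon/\epsilon_{0}})\), to the values of Table 3. The form and both parameters are assumptions, not the authors' model.

```python
# Illustrative fit only (the paper's fit function is not specified): an
# exponential-saturation model for |H_eb| versus applied strain, Table 3 data.
import numpy as np
from scipy.optimize import curve_fit

eps = np.array([90.0, 270.0, 610.0, 1280.0, 2560.0])   # applied strain eps_vM
heb = np.array([52.0, 90.0, 124.0, 186.0, 243.0])      # |H_eb| / Oe (Table 3)

def saturating(e, h_max, e0):
    return h_max * (1.0 - np.exp(-e / e0))

(h_max, e0), _ = curve_fit(saturating, eps, heb, p0=(300.0, 1000.0))
print(f"asymptotic |H_eb,max| ~ {h_max:.0f} Oe")  # compare with the ~300 Oe quoted
```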
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & **H\({}_{\bf eb}\) / [\(Oe\)]** & **H\({}_{\bf C}\) / [\(Oe\)]** & **H\({}_{\bf C1}\) /** & **H\({}_{\bf C2}\) /** & **M\({}_{\bf S}\) /** & **M\({}_{\bf R}\) /** \\ & & & [\(Oe\)] & [\(Oe\)] & [\(emu/g\)] & [\(emu/g\)] \\ \hline \(\epsilon_{\rm vM}\)\(\sim\)90 & -52 & 460 & -512 & 408 & 37.3 & 18.6 \\ \(\epsilon_{\rm vM}\)\(\sim\)270 & -90 & 625 & -715 & 535 & 32.6 & 21.2 \\ \(\epsilon_{\rm vM}\)\(\sim\)610 & -124 & 587 & -710 & 463 & 34.1 & 23.3 \\ \(\epsilon_{\rm vM}\)\(\sim\)1280 & -186 & 786 & -971 & 600 & 28.8 & 23.7 \\ \(\epsilon_{\rm vM}\)\(\sim\)2560 & -243 & 843 & -1085 & 599 & 27.6 & 23.7 \\ \hline \(\epsilon_{\rm vM}\)\(\sim\)1150 & -200 & 697 & -894 & 494 & 32.3 & 26.8 \\ ann. 450\({}^{\circ}\)C & -67 & 515 & -581 & 449 & 35.8 & 27.0 \\ ann. 550\({}^{\circ}\)C & -23 & 373 & -396 & 349 & 36.6 & 19.6 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Summary of magnetic hysteresis loops of Fe\({}_{10}\)Ni\({}_{40}\)NiO\({}_{50}\) samples measured at 8 \(K\), which are shown in Figure 6a and b. Exchange bias (H\({}_{\rm eb}\)), symmetric coercivity (H\({}_{\rm C}\)), coercivity left side (H\({}_{\rm C1}\)), coercivity right side (H\({}_{\rm C2}\)), saturation magnetisation (M\({}_{\rm S}\)) and magnetic remanence (M\({}_{\rm R}\)).
Figure 6: The magnetic hysteresis loops of the Fe\({}_{10}\)Ni\({}_{40}\)NiO\({}_{50}\) sample were measured at 8 \(K\). The graph displays a zoom-in at \(\pm\) 2000 \(Oe\). The inset in the left corner provides the whole hysteresis loops. (**a**) Hysteresis loops of the as-deformed samples. (**b**) Hysteresis loops of the annealed samples. (**c**) H\({}_{\rm eb}\) for different amounts of applied strain. The used fit is not based on a physical model and serves only to approximate the expected saturation of H\({}_{\rm eb}\) for an infinite amount of applied strain.
## 4 Discussion
### Deformation behaviour
The deformation with HPT of composites containing 'ductile-brittle' phases has been reported for several phase systems including Cu-Co and Cu-W [3, 38]. For those systems, a fracturing of the 'brittle' phase has been proposed to have a crucial influence on the refinement and homogenisation of the sample's microstructure. Considering the Fe\({}_{50}\)NiO\({}_{50}\) sample at RT, such a fragmentation was not observed to an extensive degree. However, that does not imply that NiO cannot deform at RT. As shown in Figure 1a, at RT the Fe and NiO deformed to lamella structures, but then reached a 'steady state' without fragmentation of the NiO phase in reasonably large amounts. This effect is clear when the SEM micrographs at \(r\sim\)0 \(mm\) and \(r\sim\)3 \(mm\) are compared (Figure 1a). In contrast, Vickers microhardness increased with radius for the sample deformed at RT, indicating an ongoing grain refinement within the phases (Figure 1c), which appeared to occur predominantly in the more ductile \(\alpha\)-Fe phase.
With a body-centred cubic crystal structure, \(\alpha\)-Fe has {110}\(<\)-111\(>\) as the primary slip system at RT. The {110}\(<\)-111\(>\) slip system family fulfils the von Mises criterion of five independent slip systems for the absorption of general strain via slip [16]. The mechanical properties of \(\alpha\)-Fe are therefore different compared to ceramics such as NiO, which has an NaCl crystal structure with the primary slip systems {110}\(<\)1-10\(>\) at RT [16]. Basic consideration reveals that such NaCl structures have two independent slip systems and that the von Mises criterion is not fulfilled; the absorption of plasticity by NiO is consequently limited. Hence, NiO cannot absorb general strain through slip, and the likelihood of the fragmentation of polycrystalline NiO is increased by the lack of matching slip systems of adjacent grains [16].
These considerations are important for a better understanding of the deformation process in such complex systems. Whether the initial phase is \(\alpha\)-Fe or Ni, both phases are more ductile at RT than NiO. It could be that, especially at RT, the applied strain is absorbed by the \(\alpha\)-Fe to a large extent, as the activation of slip is assumed to be easier in \(\alpha\)-Fe than in the NiO-phase. It is plausible that at elevated deformation temperatures the activation of additional slip systems is eased in NiO. The increased likelihood of slip in NiO could enhance fragmentation and consequently phase mixing. The eased activation of slip systems in NiO could be one reason for the drastic change of deformation behaviour at T\({}_{\rm def}\geq\)225\({}^{\circ}\)C, which was seen for the Fe\({}_{50}\)NiO\({}_{50}\) samples.
The discrete change of deformation behaviour between 200\({}^{\circ}\)C and 225\({}^{\circ}\)C (Figure 1e) could not be related to a specific transition temperature. Although NiO has a T\({}_{\rm N}\) of \(\sim\)251\({}^{\circ}\)C [45], the expected change of plasticity around the T\({}_{\rm N}\) was not reported in [18]. The brittle-to-ductile transition temperature (BDTT) of NiO has been reported to be \(\sim\)0.3 of the melting temperature or approximately \(\sim\)600\({}^{\circ}\)C [14], which is still significantly above 225\({}^{\circ}\)C. In the same study, a brittle-to-ductile transition region was noted, ranging from 300\({}^{\circ}\)C to \(\sim\)550\({}^{\circ}\)C. Above 550\({}^{\circ}\)C, NiO is considered to be ductile due to the activation of additional slip systems [14].
A further important point is the slip transfer across phase boundaries, which was observed in [17, 41]. The likelihood for a glide or dislocation transfer between phases is higher if the angle mismatch of adjacent slip systems is at a minimum [41], in addition to other conditions mentioned in [41]. HPT processing at elevated temperatures could lower the activation energy of additional slip systems within NiO. Therefore, the likelihood of dislocation transfer between adjacent phases is increased, because more slip systems are available inside NiO if processed at elevated temperatures. This assumption could also explain the abrupt change in deformation behaviour observed in the Fe\({}_{50}\)NiO\({}_{50}\) samples when T\({}_{\rm def}\) was elevated from T\({}_{\rm def}\) = 200\({}^{\circ}\)C to T\({}_{\rm def}\) = 225\({}^{\circ}\)C.
Furthermore, the possibility cannot be excluded that the application of high hydrostatic pressure reduced the activation energy of additional slip systems in NiO and caused a shift
to lower BDTT. Activation of additional slip systems through hydrostatic pressure has been discussed in the literature [49, 22] and has been demonstrated for a Mg-alloy, in which non-basal glide systems were activated with an applied pressure of \(\sim\)125 \(MPa\)[22].
Eased slip in NiO can lead to a roughening of interface morphology on a nanometre scale; this effect has been observed in other two-phase systems [17]. The absorption of dislocations from local pile-ups could cause local plastic instabilities, which are believed to support fragmentation of the more brittle phase, as discussed in a recent study [23].
When the Fe\({}_{10}\)Ni\({}_{40}\)NiO\({}_{50}\) composition was deformed at T\({}_{\rm def}=300^{\circ}\)C, the NiO-phase was conserved. In addition to the previously discussed impact of slip system activation and dislocation transfer in NiO, a work-hardening of the softer phase could also result in an enhanced fragmentation of NiO. Such a phenomenon has been reported for the Cu-Co-system during HPT-deformation [3]. The conservation of NiO facilitates a prolonged fragmentation, which could lead to the fine distribution of NiO embedded in Ni (Figure 3a).
NiO can break off from larger agglomerates, which would ease the reduction by Fe through the creation of additional phase interfaces. It is likely that the accumulation of \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\) occurs at phase interfaces or the grain boundary (GB). These small and dispersed clusters of \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\) could inhibit GB-motion and accelerate grain refinement. In addition, through the efficient fragmentation of NiO, a highly diverse Ni and NiO-phase structure is created on a microscale, significantly hindering GB-motion and plastic deformation. Both phenomena could have led to the small observed CSDS of approximately 20 \(nm\) according to the WH method at \(\epsilon_{\rm vM}>\)1280 (Table 2) and could have caused the unusually high Vickers microhardness of 9 to 9.9 \(GPa\).
The mismatch of slip systems and lattice parameters of the participating phases, especially between \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\) and the Ni- and NiO-phases, could have led to a dislocation pile-up at the \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\) phase, resulting in the creation of a non-crystalline region at the phase interface adjacent to \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\). Previous research has cited the build-up of a non-crystalline region at the phase interface of the Cu-Nb system as creating an impenetrable barrier for dislocations [35].
Comparing the observed Vickers microhardness values in Figure 3b to the results obtained for nanocrystalline Ni [42] or slightly oxidized Ni-powder processed by HPT [1] shows the influence of a nanostructured NiO phase on mechanical properties. Despite the reported high hardness values of approximately 7 \(GPa\) for nanocrystalline Ni or Ni with NiO clusters [42, 1], the bulk nanocomposite of Fe\({}_{10}\)Ni\({}_{40}\)NiO\({}_{50}\) surpassed these results by 30-40%.
### Phase formation during deformation
The efficient reduction of NiO and the observed formation of Fe\({}_{3}\)O\({}_{4}\) and \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\) for both investigated compositions were unanticipated. Commercially available Ni powder is reported to oxidize at \(>\)350\({}^{\circ}\)C in ambient atmosphere [7]. In general, NiO is considered to be a stable oxide, and therefore a reduction of NiO by Fe was not expected. This reduction can be explained by comparing the heat of formation, which is significantly less for NiO than for Fe\({}_{3}\)O\({}_{4}\) and \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\)[11, 15]. The difference in heat of formation could also explain the one-way characteristic of the reaction.
According to the data, the onset of NiO reduction by Fe during HPT processing was initiated between 200\({}^{\circ}\)C and 225\({}^{\circ}\)C (Figure 2). This reaction occurred much more efficiently at T\({}_{\rm def}=300^{\circ}\)C in the Fe\({}_{50}\)NiO\({}_{50}\) material system. The formation of the thermodynamically more favourable Fe\({}_{3}\)O\({}_{4}\) appears to have been enhanced by the extreme conditions during sample synthesis combined with the fragmented and nanocrystalline NiO, which provided a very large amount of interfacial phase area for the observed reduction of NiO. A slip in the {110}\(<\)1-10\(>\) system in the NiO phase could facilitate the oxidation of Fe, because the occurrence of unsaturated oxygen bonds would be more likely. These two mentioned factors are considered to be the main cause for the observed two-phase system of Fe\({}_{3}\)O\({}_{4}\) and
\(\gamma\)-Fe\({}_{39}\)Ni\({}_{61}\) for the Fe\({}_{50}\)NiO\({}_{50}\) material system, when deformed at T\({}_{\rm def}=300^{\circ}\)C.
NiO could not be resolved in the XRD scan for the Fe\({}_{50}\)NiO\({}_{50}\) samples deformed at T\({}_{\rm def}\) = 300\({}^{\circ}\)C (Figure 2), although 16-wt% NiO should have been left after synthesis inside the nanocomposite. The possibility that some residual NiO remained after deformation is supported by the analysis of the \(\gamma\)-FeNi phase XRD-pattern. If complete reduction of NiO through Fe and complete incorporation of the residual Ni-atoms in the \(\gamma\)-FeNi phase is assumed, the composition of the \(\gamma\)-FeNi phase would change to \(\gamma\)-Fe\({}_{20}\)Ni\({}_{80}\). The peak pattern of \(\gamma\)-Fe\({}_{20}\)Ni\({}_{80}\) had its main peak at q\({}_{\rm z}\) = 3.073 Å\({}^{-1}\) and was therefore distinguishable from the peak position of the observed \(\gamma\)-Fe\({}_{39}\)Ni\({}_{61}\) phase.
The XRD results of Fe\({}_{50}\)NiO\({}_{50}\) and Fe\({}_{10}\)Ni\({}_{40}\)NiO\({}_{50}\) samples in Figures 2 and 4, respectively, detected different formations of the Fe\({}_{x}\)O\({}_{y}\) phase through the reduction of NiO. The formation of Fe\({}_{x}\)O\({}_{y}\) is influenced by the amount of available O [50]. In the Fe\({}_{10}\)Ni\({}_{40}\)NiO\({}_{50}\) sample, the Fe content was much lower compared to Fe\({}_{50}\)NiO\({}_{50}\), and therefore sufficient O was available to form \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\).
Moreover, \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\) is thermodynamically more stable than Fe\({}_{3}\)O\({}_{4}\)[11] and is therefore considered more likely to be formed, provided that sufficient O is available during deformation. Another difference from the Fe\({}_{50}\)NiO\({}_{50}\) samples at T\({}_{\rm def}=300^{\circ}\)C is the negligible substitution of Fe inside the Ni. After the deformation of the Fe\({}_{10}\)Ni\({}_{40}\)NiO\({}_{50}\) composition, WAXS detected a \(\gamma\)-phase consisting solely of Ni.
### Magnetic properties
Regarding the magnetometry results, the aforementioned Fe-oxide phases (i.e. Fe\({}_{3}\)O\({}_{4}\) and \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\)) were both FM, but distinguished themselves in their magnetic properties, for example bulk-M\({}_{\rm S,Fe_{3}O_{4}}\) = 98.6 \(emu/g\)[8] and bulk-M\({}_{\rm S,\gamma\)-Fe\({}_{2}\)O\({}_{3}}\) = 85.6 \(emu/g\)[37] at 8 \(K\). Based on the results from WAXS measurements, the M\({}_{\rm S,\gamma\)-Fe\({}_{2}\)O\({}_{3}}\) is used in the following discussion of the results for Fe\({}_{10}\)Ni\({}_{40}\)NiO\({}_{50}\) nanocomposite.
The H\({}_{\rm eb}\) can be approximated for thin films with the simple theoretical model \(H_{\rm eb}\sim\sigma/(M_{\rm S}\cdot t_{\rm FM})\) (\(\sigma\) = interfacial coupling energy density, \(t_{\rm FM}\) = FM-thickness) [4]. Utilising this model, with the assumption that it is valid to some extent for nanocomposites measured at 8 \(K\) as well, allows estimation of the origin of the observed change of H\({}_{\rm eb}\) with applied strain.
The increase from H\({}_{\rm eb,\epsilon_{\rm vM}\sim 90}\) = -52 \(Oe\) to H\({}_{\rm eb,\epsilon_{\rm vM}\sim 2560}\) = -242 \(Oe\) cannot be explained by the decline of M\({}_{\rm S}\) alone. As shown in Table 3, M\({}_{\rm S}\) decreased by approximately 26% from M\({}_{\rm S,\epsilon_{\rm vM}\sim 90}\) = 37.3 \(emu/g\) to M\({}_{\rm S,\epsilon_{\rm vM}\sim 2560}\) = 27.6 \(emu/g\), whereas H\({}_{\rm eb}\) increased almost fivefold from H\({}_{\rm eb,min}\) to H\({}_{\rm eb,max}\). Considering the observed changes of CSDS (Table 2) in addition to the decrease of M\({}_{\rm S}\) yields a more accurate estimate (Figure 7). The increase from H\({}_{\rm eb,\epsilon_{\rm vM}\sim 270}\) = -90 \(Oe\) to H\({}_{\rm eb,\epsilon_{\rm vM}\sim 2560}\) = -242 \(Oe\) can be explained to a great extent by the combined decreases of M\({}_{\rm S}\) and of the CSDS of \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\), under the approximation that \(\sigma\) was the same for both applied strains. This explanation implies that NiO was exchange biased to the \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\) as well as to the Ni-crystallites. From this calculation it can be deduced that \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\) crystallites dominated the magnetic characteristic of the Fe\({}_{10}\)Ni\({}_{40}\)NiO\({}_{50}\) samples and that the Ni-crystallites contributed only partially.
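As a rough numerical check of this argument (an illustration only, with \(t_{\rm FM}\) identified with the \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\) CSDS and \(\sigma\) assumed strain-independent, as stated above), the predicted and measured H\({}_{\rm eb}\) ratios between \(\epsilon_{\rm vM}\sim 270\) and \(\epsilon_{\rm vM}\sim 2560\) can be compared:

```python
# Consistency check of H_eb ~ sigma/(M_S * t_FM): t_FM taken as the
# gamma-Fe2O3 CSDS (Table 2), sigma assumed equal at both strains.
ms_270, ms_2560 = 32.6, 27.6       # M_S / (emu/g), Table 3
d_270, d_2560 = 12.0, 6.0          # <D_gamma-Fe2O3> / nm, Table 2
heb_270, heb_2560 = 90.0, 243.0    # |H_eb| / Oe, Table 3

predicted = (ms_270 * d_270) / (ms_2560 * d_2560)  # ratio H_eb(2560)/H_eb(270)
measured = heb_2560 / heb_270
print(f"predicted {predicted:.1f} vs measured {measured:.1f}")  # ~2.4 vs ~2.7
```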
The annealing experiments allow a deeper insight into the dependency of H\({}_{\rm eb}\) on the nanocomposite microstructure. The variable M\({}_{\rm S}\) recovered after deformation, from M\({}_{\rm S,\epsilon_{\rm vM}\sim 1150}\) = 32.3 \(emu/g\) to M\({}_{\rm S,ann.~{}550^{o}C}\) = 36.9 \(emu/g\), and H\({}_{\rm eb}\) decreased to \(\sim\)10% of its initial value of H\({}_{\rm eb,as~{}def}\) = -200 \(Oe\) through annealing at 550\({}^{o}\)C. The annealed samples exhibited continuous CSDS growth for all three phases (Table 2) as well as phase size growth (Figure 5). Changes in CSDS in \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\) were approximately twofold as well, but the decrease of H\({}_{\rm eb}\) was more significant and could be described only partially by the previous estimation.
Although CSDS apparently had a crucial influence on H\({}_{\rm eb}\), the magnetic properties of annealed samples can be influenced by other effects as well. The application of temperature can initiate an optimisation of the phase interface area towards lower surface energies and cause a smoothing of the phase interface topology. Those effects should also transform the morphology of the FM-AFM interface to a sharper phase gradient. The decreased interfacial area could affect the enclosure of FM-grains by the AFM phase and reduce the overall magnetic stiffness inside the FM-phase. Phase size growth could enable magnetic nucleation during field reversal, which could further weaken H\({}_{\rm eb}\).
The variation of M\({}_{\rm S}\) with applied strain or annealing temperature appears to have been related to some degree to the rise of CSDS of \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\), due to its large relative changes in size. Previous studies have found that FM-nanocrystallites possess a size-dependent M\({}_{\rm S}\) in general. The decline of M\({}_{\rm S}\) is related to a non-magnetic region at the crystallite boundary, which begins to influence M\({}_{\rm S}\) non-linearly below approximately 20 \(nm\) of crystallite dimension through a growing surface-to-volume ratio [5].
To obtain a better insight, the simple approximation from [5] was used for M\({}_{\rm S}\), based on
Figure 8: The influence of crystallite size on M\({}_{\rm S}\), according to [5], assuming a non-magnetic layer between adjacent crystallites of 5.7 Å for \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\) and 3 Å for Ni. Due to the smaller CSDS of \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\), the surface-to-volume ratio becomes larger, causing a stronger decrease of M\({}_{\rm S,\gamma\text{-}Fe_{2}O_{3}}\).
the relationship \(M_{\rm S}(D_{\rm Cry})=M_{\rm S}(bulk)\cdot(1-3\cdot t_{\rm non-mag}/D_{\rm Cry})\) (\(D_{\rm Cry}\) is the crystallite size, for which the CSDSs from Table 2 were used in the calculation, and \(t_{\rm non-mag}\) the thickness of a non-magnetic layer between adjacent nanocrystallites) for nanocrystalline material. Utilising this model for the Fe\({}_{10}\)Ni\({}_{40}\)NiO\({}_{50}\) nanocomposite, which consisted of 12-wt% \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\) and 48.6-wt% Ni, with a non-magnetic layer of 5.7 Å for \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\)[5] and 3 Å for Ni (estimated), and assuming a bulk M\({}_{\rm S,Ni}=58.6~{}emu/g\)[12] and M\({}_{\rm S,\gamma\text{-}Fe_{2}O_{3}}=85.6~{}emu/g\)[37], yields: M\({}_{\rm cal,S,\epsilon_{\rm vM}\sim 270}\) = 35.7 \(emu/g\), M\({}_{\rm cal,S,\epsilon_{\rm vM}\sim 1280}\) = 35.1 \(emu/g\) and M\({}_{\rm cal,S,\epsilon_{\rm vM}\sim 2560}\) = 34.2 \(emu/g\), far more than measured (Table 3). These results suggest that the assumed thickness of the non-magnetic layer was underestimated for the considered nanocomposite and should have been much higher to match the experimental results. Since the \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\) and Ni crystallites possess different CSDS and therefore contribute differently to M\({}_{\rm S}\), it can be concluded that the decrease of M\({}_{\rm S}\) was mainly caused by the shrinking CSDS of the \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\) crystallites through the deformation (Figure 8).
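As a quick cross-check of the quoted M\({}_{\rm cal,S}\) values, the sketch below evaluates the weighted size-dependent model with exactly the assumptions stated above (weight fractions, layer thicknesses and bulk magnetisations as in the text); the small residual differences are rounding.

```python
# Reproduces the quoted M_cal,S values from M_S(D) = M_S(bulk)*(1 - 3*t/D),
# weighted by the Ni and gamma-Fe2O3 weight fractions given in the text.
def ms_size(ms_bulk, t_nm, d_nm):
    return ms_bulk * (1.0 - 3.0 * t_nm / d_nm)

w_ni, w_fe = 0.486, 0.12            # weight fractions of Ni and gamma-Fe2O3
t_ni, t_fe = 0.3, 0.57              # non-magnetic layer thicknesses / nm
ms_ni, ms_fe = 58.6, 85.6           # bulk M_S / (emu/g)

# (D_Ni, D_gamma-Fe2O3) in nm from Table 2 for the three strains
for label, d_ni, d_fe in [("eps~270", 18, 12), ("eps~1280", 15, 9), ("eps~2560", 15, 6)]:
    total = w_ni * ms_size(ms_ni, t_ni, d_ni) + w_fe * ms_size(ms_fe, t_fe, d_fe)
    print(label, f"M_cal,S ~ {total:.1f} emu/g")  # ~35.7, 35.1, 34.2 up to rounding
```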
The results from the annealed samples support the previous argument that the \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\) phase mainly influenced M\({}_{\rm S}\). Growth of \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\) from 8 \(nm\) to 14 \(nm\) CSDS would lead to the following calculated M\({}_{\rm S}\): M\({}_{\rm cal,S,\epsilon_{\rm vM}\sim 1150}\) = 34.4 \(emu/g\), M\({}_{\rm cal,S,ann.450^{\circ}C}\) = 35.0 \(emu/g\) and M\({}_{\rm cal,S,ann.550^{\circ}C}\) = 36.0 \(emu/g\). The estimate for M\({}_{\rm S,\epsilon_{\rm vM}\sim 1150}\) is certainly higher than the measured M\({}_{\rm S}\) value; however, M\({}_{\rm S,ann.450^{\circ}C}\) and M\({}_{\rm S,ann.550^{\circ}C}\) are in good accordance with the values presented in Table 3.
The discrepancy between M\({}_{\rm S,\epsilon_{\rm vM}\sim 1150}\) and M\({}_{\rm S,\epsilon_{\rm vM}\sim 2560}\) could be rooted in a larger number of non-magnetic regions than assumed. If complete oxidation of Fe to \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\) is assumed and crystal size effects are neglected, the calculated M\({}_{\rm S}\) for the annealed Fe\({}_{10}\)Ni\({}_{40}\)NiO\({}_{50}\) sample yields a value of M\({}_{\rm S,cal}=38.3~{}emu/g\). The M\({}_{\rm S,ann.~{}550^{\circ}C}\) = 36.9 \(emu/g\) is strikingly similar to the theoretical calculation. This comparison suggests that the non-magnetic phase was predominantly composed of Ni and \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\) and was reduced to a minimum by the annealing treatment.
There have been reports of a phase transition from FM \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\) to the thermodynamically more favourable AFM \(\alpha\)-Fe\({}_{2}\)O\({}_{3}\) at temperatures of 370-600\({}^{\circ}\)C [11]. While this transition could have occurred during annealing, an accumulation of non-crystalline regions through the application of strain is more likely. On one hand, the XRD peak pattern of \(\alpha\)-Fe\({}_{2}\)O\({}_{3}\) is distinct from \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\); on the other hand, a formation of \(\alpha\)-Fe\({}_{2}\)O\({}_{3}\) cannot explain the rise in M\({}_{\rm S}\) through annealing. The plausibly existing non-crystalline regions at interfaces, presumably containing \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\), would have only a fraction of the bulk M\({}_{\rm S}\), M\({}_{\rm S,amorphous}\sim\)2-5% [25], and the transition from a crystalline \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\) phase to a phase with non-crystalline regions at the crystallite boundary could explain the changes in M\({}_{\rm S}\) detected when large amounts of strain had been applied. Composites, such as Nb-Cu, deformed by SPD have been reported to contain non-crystalline regions at phase interfaces [35, 40]. Crystallisation of these non-crystalline regions was observed when annealing was applied [35]. These results suggest that similar behaviour occurred in the Fe\({}_{10}\)Ni\({}_{40}\)NiO\({}_{50}\) nanocomposite.
## 5 Conclusions
This study presents an insight into the deformation behaviour of phases possessing very distinct mechanical properties. Although NiO has, due to its NaCl-structure, different primary slip systems compared to Fe or FeNi, it was possible to synthesise nanocomposites with a homogeneous microstructure in the sub-micrometre regime. The importance of thermal HPT processing was demonstrated on the Fe\({}_{50}\)NiO\({}_{50}\) composition. Through the deformation at 300\({}^{\circ}\)C, a homogeneous two-phase microstructure evolved. XRD investigations confirmed a reduction of NiO through Fe, leading to the formation of Fe\({}_{3}\)O\({}_{4}\) and \(\gamma\)-Fe\({}_{39}\)Ni\({}_{61}\).
To conserve the AFM NiO and thereby enhance the H\({}_{\rm eb}\), a composition of Fe\({}_{10}\)Ni\({}_{40}\)NiO\({}_{50}\) was synthesised by HPT at 300\({}^{\circ}\)C and nanocomposites with a unique microstructure were obtained. SEM micrographs showed a very homogeneous microstructure, which primarily
consisted of NiO and Ni according to WAXS investigations. The formation of an additional phase, consisting of \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\), was detected by WAXS. Vickers microhardness measurements detected a microstructural saturation behaviour with a slight increase towards outer radii, up to an unusually high Vickers microhardness of \(\sim\)9.9 \(GPa\). WAXS, SEM and magnetometry results conveyed an ongoing refinement of the nanocomposite's microstructure.
It was possible to demonstrate the existence of H\({}_{\rm eb}\) in such a bulk-sized nanocomposite, and the continual increase of H\({}_{\rm eb}\) with applied strain was correlated to the evolving microstructure. The rise of H\({}_{\rm eb}\) was mainly attributed to a simultaneous decrease in M\({}_{\rm S}\), a reduction of FM-phase size dimensions and of the crystallite size of the FM-phases, predominantly \(\gamma\)-Fe\({}_{2}\)O\({}_{3}\).
Annealing experiments of Fe\({}_{10}\)Ni\({}_{40}\)NiO\({}_{50}\) samples showed a decrease of H\({}_{\rm eb}\) and an increase of M\({}_{\rm S}\). Those changes of H\({}_{\rm eb}\) and M\({}_{\rm S}\) were attributed to the growth of FM-phase dimensions and the subsequent reduction of phase interfaces.
## 6 Appendix A. Schematic Illustration of the HPT-Synthesis
**Funding:**
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No:757333).
Figure 9: Schematic description of the sample synthesis. From left to right: Powder blends were compacted with HPT inside an airtight capsule. The obtained pellet was used for HPT-deformation experiments. An inductive heating system provided the needed process temperature for the HPT-anvils. The HPT-disc was cut into the desired size for further investigations. The SQUID samples were cut out at the desired radius. For the annealing experiments an HPT-disc was cut into quarters. One quarter provided the SQUID-samples, which were cut out at the same radius. The SQUID sample was annealed simultaneously with a quarter of the HPT-disc.
**Conflicts of Interest:**
The authors declare no conflict of interest.
**Acknowledgement:**
We acknowledge DESY (Hamburg, Germany), a member of the Helmholtz Association HGF, for the provision of experimental facilities. Parts of this research were carried out at PETRA III, and we would like to thank N. Schell and E. Maawad for assistance in using P07B (High Energy Materials Science).
|
2307.00888 | Mixed state branching evolution for cell division models | We prove a scaling limit theorem for two-type Galton-Watson branching
processes with interaction. The limit theorem gives rise to a class of mixed
state branching processes with interaction used to model the evolution of
cell division affected by parasites. Such a process can also be obtained as the
pathwise unique solution to a stochastic equation system. Moreover, we present
sufficient conditions for extinction with probability one and the exponential
ergodicity in the total variation distance of such a process. | Shukai Chen, Lina Ji, Jie Xiong | 2023-07-03T09:37:57Z | http://arxiv.org/abs/2307.00888v2 | ###### Abstract
We prove a scaling limit theorem for two-type Galton-Watson branching processes with interaction. The limit theorem gives rise to a class of mixed state branching processes with interaction used to model the evolution of cell division affected by parasites. Such a process can also be obtained as the pathwise unique solution to a stochastic equation system. Moreover, we present sufficient conditions for extinction with probability one and the exponential ergodicity in the total variation distance of such a process.
**Keywords and phrases: mixed state branching process; stochastic integral equation; interaction.**
**Mixed state branching evolution for cell division models**
Shukai Chen,1 Lina Ji 2 and Jie Xiong 3
Footnote 1: Supported by National Key R&D Program of China (No. 2022YFA1006000) and China Postdoctoral Science Foundation (No. 2022M720735).
Footnote 2: Supported by Guangdong Basic and Applied Basic Research Foundation (No. 2022A1515110986), Guangdong Young Innovative Talents Project (No. 2022KQNCX105) and NSFC grant (No. 12271029).
Footnote 3: Supported by National Key R&D Program of China grant (No. 2022YFA1006102) and NSFC grant (No. 11831010).
_School of Mathematics and Statistics, Fujian Normal University_
_Fuzhou 350007, People's Republic of China._
_Faculty of Computational Mathematics and Cybernetics, Shenzhen MSU-BIT University_
_Shenzhen 518172, People's Republic of China._
_Department of Mathematics and National Center for Applied Mathematics (Shenzhen)_
_Southern University of Science and Technology, Shenzhen 518055, China._
_E-mail: [email protected], [email protected] and [email protected]_
## 1 Introduction
Let \(\mathbb{N}=\{0,1,2,...\}\). We consider a continuous time model in \(D=[0,\infty)\times\mathbb{N}\) for cells and parasites, where the behavior of cell division is affected by parasites. Informally, the quantity of parasites \(\{X_{t}:t\geq 0\}\) in a cell evolves as a continuous state branching process. The cells divide in continuous time at a rate \(h(x,y)\) which may depend on the quantity of parasites \(x\) and cells \(y\). This framework is general enough to be applied to the modelling of other structured populations, for instance, grass-rabbit models in [8].
Many studies have been conducted on branching within branching processes to study such population dynamics in continuous time. In [19], the evolution of parasites is modelled by a birth-death process, while the cells split according to a Yule process. [2] allows the quantity of parasites in a cell to follow a Feller diffusion. A continuous state branching process with jumps is considered to model the quantity of parasites in a cell in [18]. In particular, [19, 2, 18] describe cell populations in a tree structure; in this way, the population of cells at some time may be represented by a random point measure and associated martingale problems can be established by choosing test functions appropriately. In contrast to [19, 2, 18], in this paper we ignore the tree structure and mainly focus on a parasite-cell model from a macroscopic point of view. More precisely,
we use a stochastic equation system to describe the sample path of such models,
\[X_{t}=X_{0}-b\int_{0}^{t}X_{s}\,\mathrm{d}s+\int_{0}^{t}\sqrt{2cX_{ s}}\,\mathrm{d}B_{s}+\int_{0}^{t}\int_{0}^{X_{s-}}\int_{0}^{\infty}\xi\,\tilde{M}( \mathrm{d}s,\mathrm{d}u,\mathrm{d}\xi),\] \[Y_{t}=Y_{0}+\int_{0}^{t}\int_{0}^{Y_{s-}}\int_{0}^{h(X_{s-},Y_{s -})}\int_{\mathbb{N}}(\xi-1)\,N(\mathrm{d}s,\mathrm{d}u,\mathrm{d}r,\mathrm{d} \xi),\]
where \(b\in\mathbb{R}\) and \(c\geq 0\) are constants, \((B_{t}:t\geq 0)\) is a standard Brownian motion, \(h(\cdot,\cdot)\in C(\mathbb{R}_{+}^{2})^{+}\), where \(C(\mathbb{R}_{+}^{2})^{+}\) is the collection of continuous positive functions defined on \(\mathbb{R}_{+}^{2}\). Let \((\xi\wedge\xi^{2})\,m(\mathrm{d}\xi)\) be a finite measure on \((0,\infty)\) and \((p_{\xi}:\xi\in\mathbb{N})\) be an offspring distribution satisfying \(\sum_{\xi}\xi p_{\xi}<\infty\). Without loss of generality, we assume \(p_{1}=0.\) The above \(M(\mathrm{d}s,\mathrm{d}u,\mathrm{d}\xi)\) is a Poisson random measure on \((0,\infty)^{3}\) with intensity \(\mathrm{d}s\mathrm{d}um(\mathrm{d}\xi)\), and \(\tilde{M}(\mathrm{d}s,\mathrm{d}u,\mathrm{d}\xi)=M(\mathrm{d}s,\mathrm{d}u, \mathrm{d}\xi)-\mathrm{d}s\mathrm{d}um(\mathrm{d}\xi)\). The above \(N\) is a Poisson random measure on \((0,\infty)^{3}\times\mathbb{N}\) with intensity \(\mathrm{d}s\mathrm{d}u\mathrm{d}rn(\mathrm{d}\xi)\), where \(n(\mathrm{d}\xi)=p_{\xi}\sharp(\mathrm{d}\xi)\) with \(\sharp(\cdot)=\sum_{j}\delta_{j}(\cdot)\) being the counting measure on \(\mathbb{N}\). Those three random elements are independent of each other. Clearly, \(\{X_{t}:t\geq 0\}\) is a continuous-state branching process (CB-process), see [5, 6]. In particular, when \(h(\cdot,\cdot)\equiv r>0\), the model reduces to \(\{(X_{t},Y_{t}^{r}):t\geq 0\}\), where \(\{Y_{t}^{r}:t\geq 0\}\) is a standard continuous time Markov branching process with branching rate \(r>0\) and offspring \((p_{\xi},\xi\in\mathbb{N})\). In this case, the system can be seen as a particular case of mixed state branching processes, which has been studied in [4].
For simplicity, we introduce another Poisson random measure, again denoted by \(N\), on \([0,\infty)^{3}\times\mathbb{N}^{-1}\), \(\mathbb{N}^{-1}=\mathbb{N}\cup\{-1\}\), with characteristic measure \(n(\mathrm{d}\xi)=p_{\xi}^{\prime}\sharp(\mathrm{d}\xi)\), \(p_{\xi}^{\prime}=p_{\xi+1}\). Then we can rewrite the system as
\[X_{t}=X_{0}-b\int_{0}^{t}X_{s}\,\mathrm{d}s+\int_{0}^{t}\sqrt{2 cX_{s}}\,\mathrm{d}B_{s}+\int_{0}^{t}\int_{0}^{X_{s-}}\int_{0}^{\infty}\xi\, \tilde{M}(\mathrm{d}s,\mathrm{d}u,\mathrm{d}\xi), \tag{1.1}\] \[Y_{t}=Y_{0}+\int_{0}^{t}\int_{0}^{Y_{s-}}\int_{0}^{h(X_{s-},Y_{s -})}\int_{\mathbb{N}^{-1}}\xi\,N(\mathrm{d}s,\mathrm{d}u,\mathrm{d}r,\mathrm{d }\xi). \tag{1.2}\]
In the rest of the paper, we use the stochastic equation system (1.1)-(1.2) to describe the parasite-cell model. In the literature on the theory of branching processes, the rescaling (in time or state) approach plays a valuable role in establishing the connection among those branching processes, see [9], [11, 12], [17], [4] and [15] and the references therein. To the best of our knowledge, limited work has been done on branching processes with interactions. This leads to the first purpose of this paper: a scaling limit theorem and the establishment of strong uniqueness of the solution to (1.1)-(1.2). For a sequence of two-type Galton-Watson processes with interactions \(\{(x_{k}(n),y_{k}(n)):n\in\mathbb{N}\}_{k\geq 1}\), we prove that \(\{(x_{k}(\lfloor\gamma_{k}t\rfloor)/k,y_{k}(\lfloor\gamma_{k}t\rfloor)):t\geq 0\}\) converges in distribution to the solution to (1.1)-(1.2) as \(k\to\infty\) under suitable conditions. The pathwise uniqueness of the solution to (1.1)-(1.2) is also given.
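To make the dynamics of (1.1)-(1.2) concrete, a minimal simulation sketch is given below (not part of the original analysis). It assumes the special case \(m\equiv 0\), so that \(X\) is a Feller diffusion handled by an Euler-Maruyama step, and binary division \(p_{2}=1\), so that \(Y\) jumps by \(+1\) at the state-dependent rate \(h(X_{t},Y_{t})Y_{t}\), approximated per time step by its first-order probability; the chosen rate function \(h\) and the parameter values are arbitrary illustrations.

```python
# Minimal sketch of a special case of (1.1)-(1.2): m = 0 (X is a Feller
# diffusion) and binary division p_2 = 1 (Y jumps by +1 at rate h(X,Y)*Y).
import numpy as np

rng = np.random.default_rng(1)
b, c = 0.5, 1.0                       # drift and diffusion parameters of X
h = lambda x, y: 2.0 / (1.0 + x)      # an arbitrary bounded division rate
dt, T = 1e-4, 2.0                     # dt must keep h(x, y)*y*dt << 1
x, y = 1.0, 1                         # initial parasites and cells

for _ in range(int(T / dt)):
    # Euler-Maruyama step for the CB-process X, clamped at 0
    x = max(x - b * x * dt + np.sqrt(2.0 * c * x) * rng.normal(0.0, np.sqrt(dt)), 0.0)
    # one division on [t, t + dt) with probability ~ h(x, y) * y * dt
    if rng.random() < h(x, y) * y * dt:
        y += 1

print(f"X_T = {x:.4f}, Y_T = {y}")
```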
In addition, the second purpose of this paper is to study several long-time behaviors of such a process; we mainly obtain the extinction behavior and exponential ergodicity in the total variation distance. The result on extinction behavior is inspired by [10]. Furthermore, ergodicity is the foundation for a wide class of limit theorems and long-time behavior of Markov processes. Due to the nonlinearity of the function \(h\), the transition semigroup of \((X,Y)\) is not explicit. We obtain the ergodic property by a coupling approach, which has been proved to be effective in the study of ergodicity in nonlinear cases, see [3], [16] and the references therein.
We now introduce some notation. Let \(\mathrm{e}_{\lambda}(z)=\mathrm{e}^{-\langle\lambda,z\rangle}\) for any \(\lambda=(\lambda_{1},\lambda_{2})\in\mathbb{R}_{+}^{2}\) and \(z=(x,y)\in D\), where \(\langle\lambda,z\rangle=\lambda_{1}x+\lambda_{2}y.\) We use \(C_{b}(D)\) to denote the set of all bounded functions \((x,y)\mapsto f(x,y)\) on \(D\) with \(x\mapsto f(x,\cdot)\) continuous. Let \(C_{b}^{2}(D)\) be the subset of \(C_{b}(D)\) with continuous bounded derivatives up to \(2\)nd order on \(x\). Let \(C_{0}^{2}(D)\) be the subset of \(C_{b}^{2}(D)\) vanishing at infinity, and \(C_{c}^{2}(D)\) be the subset of \(C_{0}^{2}(D)\) with compact support. Define \(C_{b}(\mathbb{R}_{+}^{2})\) to be the collection of all bounded continuous functions on \(\mathbb{R}_{+}^{2}\), which is a subset of \(C_{b}(D).\) Let \(C_{b}^{2,1}(\mathbb{R}_{+}^{2})\) be
the subset of \(C_{b}(\mathbb{R}^{2}_{+})\) with continuous bounded derivatives up to 2nd order on \(x\) and continuous bounded derivatives up to first order on \(y\). Then we have \(C^{2,1}_{b}(\mathbb{R}^{2}_{+})\subset C^{2}_{b}(D).\) Let \(\mathbb{D}([0,\infty),D)\) denote the space of cadlag paths from \([0,\infty)\) to \(D\) furnished with the Skorokhod topology.
This paper is structured as follows. In Section 2, we show that the existence by a scaling limit of a sequence of two-type Galton-Watson processes with interaction and pathwise uniqueness of solution to (1.1)-(1.2) hold. The extinction behavior is studied in Section 3. In Section 4, an exponential ergodic property is proved under some conditions.
## 2 Existence and pathwise uniqueness of solution
The generator \(A\) of \(\{(X_{t},Y_{t}):t\geq 0\}\) satisfying (1.1)-(1.2) is determined by
\[Af(z) = x\Big{[}-bf_{x}^{\prime}+cf_{xx}^{\prime\prime}+\int_{0}^{\infty }\{f(x+\xi,y)-f(x,y)-\xi f_{x}^{\prime}\}m(\mathrm{d}\xi)\Big{]} \tag{2.3}\] \[+\gamma(x,y)\int_{\mathbb{N}^{-1}}\Big{\{}f(x,y+\xi)-f(x,y)\Big{\}} n(\mathrm{d}\xi)\]
for \(f\in C^{2}_{b}(D)\) and \(z=(x,y)\in D,\) where \(\gamma(x,y)=h(x,y)y\). Then
\[A\mathrm{e}_{\lambda}(z)=\mathrm{e}_{\lambda}(z)\Big{[}x\phi_{1}(\lambda_{1})+ \gamma(x,y)\phi_{2}(\lambda_{2})\Big{]}, \tag{2.4}\]
where
\[\phi_{1}(\lambda_{1}) = b\lambda_{1}+c\lambda_{1}^{2}+\int_{0}^{\infty}(\mathrm{e}^{- \lambda_{1}\xi}-1+\lambda_{1}\xi)\,m(\mathrm{d}\xi), \tag{2.5}\] \[\phi_{2}(\lambda_{2}) = \int_{\mathbb{N}^{-1}}(\mathrm{e}^{-\lambda_{2}\xi}-1)\,n( \mathrm{d}\xi). \tag{2.6}\]
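Indeed, (2.4) is the routine substitution \(f=\mathrm{e}_{\lambda}\) in (2.3), spelled out here for convenience: since \(\partial_{x}\mathrm{e}_{\lambda}=-\lambda_{1}\mathrm{e}_{\lambda}\), \(\partial_{x}^{2}\mathrm{e}_{\lambda}=\lambda_{1}^{2}\mathrm{e}_{\lambda}\), \(\mathrm{e}_{\lambda}(x+\xi,y)=\mathrm{e}^{-\lambda_{1}\xi}\mathrm{e}_{\lambda}(z)\) and \(\mathrm{e}_{\lambda}(x,y+\xi)=\mathrm{e}^{-\lambda_{2}\xi}\mathrm{e}_{\lambda}(z)\), one gets

\[A\mathrm{e}_{\lambda}(z)=\mathrm{e}_{\lambda}(z)\,x\Big{[}b\lambda_{1}+c\lambda_{1}^{2}+\int_{0}^{\infty}\big{(}\mathrm{e}^{-\lambda_{1}\xi}-1+\lambda_{1}\xi\big{)}\,m(\mathrm{d}\xi)\Big{]}+\mathrm{e}_{\lambda}(z)\,\gamma(x,y)\int_{\mathbb{N}^{-1}}\big{(}\mathrm{e}^{-\lambda_{2}\xi}-1\big{)}\,n(\mathrm{d}\xi),\]

which is exactly (2.4) with \(\phi_{1}\) and \(\phi_{2}\) as above.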
We first consider the case of \(h\in C_{b}(\mathbb{R}^{2}_{+})^{+}.\) Given the initial value \((x(0),y(0))\in\mathbb{N}\times\mathbb{N}\), let \(\{(x(n),y(n)):n\geq 0\}\) be a two-dimensional process defined by
\[x(n)=\sum_{j=1}^{x(n-1)}\alpha_{n-1,j},\qquad y(n)=\sum_{j=1}^{y(n-1)}\beta_{n -1,j,\theta(x(n-1),y(n-1))},\quad n\geq 1, \tag{2.7}\]
where \(\{\alpha_{n,j}:n\geq 0,j\geq 1\}\) are integer-valued i.i.d. random variables with offspring distribution \((w(i):i\in\mathbb{N})\). Given \(x,y\in\mathbb{N}\), the above \(\{\beta_{n,j,\theta(x,y)}:n\geq 0,j\geq 1\}\) are i.i.d. integer-valued random variables with offspring distribution \((v^{\theta(x,y)}(i):i\in\mathbb{N})\) depending on function \(\theta\). Let \(g_{1}\) and \(g_{2}^{\theta(x,y)}\) be the generating functions of \((w(i):i\in\mathbb{N})\) and \((v^{\theta(x,y)}(i):i\in\mathbb{N})\), respectively. It is known that \(\{(x(n),y(n)):n\geq 0\}\) is a Markov process and we call it _two-type Galton-Watson process with interaction_. Suppose that there exists a sequence of two-type Galton-Watson processes with interaction \(\{(x_{k}(n),y_{k}(n)):n\geq 0\}_{k\geq 1}\) with parameters \((g_{k,1},g_{k,2}^{\theta_{k}(x,y)})\). Let \(\{\gamma_{k}\}_{k\geq 1}\) be a sequence of positive numbers with \(\gamma_{k}\to\infty\) as \(k\to\infty\). For \((x,y)\in\mathbb{N}\times\mathbb{N}\), we introduce several functions on \(\mathbb{R}_{+}\) as below:
\[\bar{\Phi}_{k,1}(\lambda_{1})=k\gamma_{k}\log\Big{[}1-(k\gamma_{k })^{-1}\Phi_{k,1}(\lambda_{1})\mathrm{e}^{\lambda_{1}/k}\Big{]},\] \[\Phi_{k,1}(\lambda_{1})=k\gamma_{k}\Big{[}\mathrm{e}^{-\lambda_{ 1}/k}-g_{k,1}(\mathrm{e}^{-\lambda_{1}/k})\Big{]},\] \[\bar{\Phi}_{k,2}^{\theta_{k}(x,y)}(\lambda_{2})=\gamma_{k}\log \Big{[}1-\gamma_{k}^{-1}\Phi_{k,2}^{\theta_{k}(x,y)}(\lambda_{2})\mathrm{e}^{ \lambda_{2}}\Big{]},\] \[\Phi_{k,2}^{\theta_{k}(x,y)}(\lambda_{2})=\gamma_{k}\Big{[} \mathrm{e}^{-\lambda_{2}}-g_{k,2}^{\theta_{k}(x,y)}(\mathrm{e}^{-\lambda_{2}}) \Big{]}.\]
Let \(E_{k}=\{0,k^{-1},2k^{-1},\cdots\}\) for each \(k\geq 1\). For any \(x\in\mathbb{R}_{+}\), we take \(x_{k}:=\lfloor kx\rfloor/k.\) Then \(x_{k}\in E_{k}\) and \(|x_{k}-x|\leq 1/k\). Now we consider the following conditions:
**Condition 2.1**:
1. _The sequence_ \(\{\Phi_{k,1}(\lambda_{1})\}_{k\geq 1}\) _is uniformly Lipschitz in_ \(\lambda_{1}\) _on each bounded interval, and converges to a continuous function as_ \(k\to\infty\)_;_
2. \(\gamma_{k}[1-v_{k}^{\theta_{k}(kx_{k},y)}(1)]\to h(x,y)\) _uniformly in_ \((x,y)\in\mathbb{R}_{+}\times\mathbb{N}\) _as_ \(k\to\infty;\)__
3. \(\frac{v_{k}^{\theta_{k}(kx_{k},y)}(\xi)}{1-v_{k}^{\theta_{k}(kx_{k},y)}(1)} \to p_{\xi}\) _for_ \(\xi\in\mathbb{N}\backslash\{1\}\) _uniformly in_ \((x,y)\in\mathbb{R}_{+}\times\mathbb{N}\) _as_ \(k\to\infty\)_._
4. _The sequence_ \(\{\Phi_{k,2}^{\theta_{k}(kx_{k},y)}(\lambda_{2})\}_{k\geq 1}\) _is uniformly Lipschitz in_ \(\lambda_{2}\) _on each bounded interval, where the Lipschitz coefficient is independent of_ \(x\) _and_ \(y\)_._
By [14, Proposition 2.5], under Condition (2.1.1), \(\Phi_{k,1}(\lambda_{1})\) converges to a function with representation (2.5) as \(k\to\infty\), see also [11, 12]. Moreover, there exists a constant \(K>0\) such that
\[\sup_{k}\Phi_{k,1}^{\prime}(0+)=\sup_{k}\gamma_{k}[g_{k,1}^{\prime}(1-)-1]\leq K. \tag{2.8}\]
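The equality in (2.8) is the elementary computation (added here for clarity):

\[\Phi_{k,1}^{\prime}(\lambda_{1})=\gamma_{k}\,\mathrm{e}^{-\lambda_{1}/k}\big{[}g_{k,1}^{\prime}(\mathrm{e}^{-\lambda_{1}/k})-1\big{]},\qquad\text{so that}\qquad\Phi_{k,1}^{\prime}(0+)=\gamma_{k}\big{[}g_{k,1}^{\prime}(1-)-1\big{]}.\]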
**Example 2.1**: _Let \(\{p_{\xi}:\xi=0,1,\cdots\}\) be an offspring distribution with \(p_{1}=0.\) Let_
\[v_{k}^{\theta_{k}(kx_{k},y)}(\xi)=p_{\xi}\gamma_{k}^{-1/2}\left(1-e^{-\gamma_{ k}^{-1/2}h(x_{k},y)}\right)\]
_for any \(\xi\in\mathbb{N}\backslash\{1\}\) and_
\[v_{k}^{\theta_{k}(kx_{k},y)}(1)=1-\gamma_{k}^{-1/2}\left(1-e^{-\gamma_{k}^{-1/ 2}h(x_{k},y)}\right).\]
_Then we have_
\[\Phi_{k,2}^{\theta_{k}(kx_{k},y)}(\lambda_{2})=\gamma_{k}^{1/2}\left(1-e^{- \gamma_{k}^{-1/2}h(x_{k},y)}\right)\left(\mathrm{e}^{-\lambda_{2}}-g(\mathrm{ e}^{-\lambda_{2}})\right)\]
_with \(g(\mathrm{e}^{-\lambda_{2}})=\sum_{\xi=0}^{\infty}p_{\xi}e^{-\lambda_{2}\xi}.\) It is easy to check that the above satisfies Condition (2.1.2)-(2.1.4)._
**Proposition 2.2**: _Under Conditions (2.1.2)-(2.1.3), \(\mathrm{e}^{\lambda_{2}}\Phi_{k,2}^{\theta_{k}(kx_{k},y)}(\lambda_{2})\) converges to \(-h(x,y)\phi_{2}(\lambda_{2})\) uniformly for \((x,y,\lambda_{2})\in\mathbb{R}_{+}\times\mathbb{N}\times\mathbb{R}_{+}\) as \(k\to\infty\), where \(h\in C_{b}(\mathbb{R}_{+}^{2})^{+}\) and \(\phi_{2}(\lambda_{2})\) is given by (2.6)._
_Proof._ One can see that
\[\mathrm{e}^{\lambda_{2}}\Phi_{k,2}^{\theta_{k}(kx_{k},y)}(\lambda_{2})\] \[=\gamma_{k}\Big{[}1-\mathrm{e}^{\lambda_{2}}g_{k,2}^{\theta_{k}(kx_{k},y)}(\mathrm{e}^{-\lambda_{2}})\Big{]}\] \[=\gamma_{k}\Big{[}1-\mathrm{e}^{\lambda_{2}}\sum_{j=0}^{\infty}\mathrm{e}^{-\lambda_{2}j}v_{k}^{\theta_{k}(kx_{k},y)}(j)\Big{]}\] \[=\gamma_{k}\sum_{j=0}^{\infty}(1-e^{-\lambda_{2}(j-1)})v_{k}^{\theta_{k}(kx_{k},y)}(j)\]
\[=\gamma_{k}\left[1-v_{k}^{\theta_{k}(kx_{k},y)}(1)\right]\int_{\mathbb{N}^{-1} \backslash\{0\}}(1-e^{-\lambda_{2}\xi})\rho_{k}^{\theta_{k}(kx_{k},y)}(\mathrm{d }\xi),\]
where
\[\rho_{k}^{\theta_{k}(kx_{k},y)}(\mathrm{d}\xi) = \frac{1}{1-v_{k}^{\theta_{k}(kx_{k},y)}(1)}\sum_{j=0}^{\infty}v_{ k}^{\theta_{k}(kx_{k},y)}(j)\delta_{j-1}(\mathrm{d}\xi)\] \[= \frac{v_{k}^{\theta_{k}(kx_{k},y)}(\xi+1)}{1-v_{k}^{\theta_{k}(kx _{k},y)}(1)}\sharp(\mathrm{d}\xi)\]
for \(\xi\in\mathbb{N}^{-1}\backslash\{0\}\) with \(\sharp(\mathrm{d}\xi)\) being the counting measure on \(\mathbb{N}^{-1}.\) The result follows from Conditions (2.1.2)-(2.1.3). \(\square\)
Let \(D_{k}:=E_{k}\times\mathbb{N}\). Then \(D_{k}\) is a subset of \(D\). We define a continuous-time stochastic process taking values on \(D_{k}\) as \(\{(X_{k}(t),Y_{k}(t)):t\geq 0\}:=\{(x_{k}(\lfloor\gamma_{k}t\rfloor)/k,y_{k}( \lfloor\gamma_{k}t\rfloor)):t\geq 0\}.\) Denote \(Z_{k}(t)=(X_{k}(t),Y_{k}(t))\) to simplify the notation. For \(\lambda=(\lambda_{1},\lambda_{2})\in\mathbb{R}_{+}^{2}\), we then have
\[\mathrm{e}_{\lambda}(Z_{k}(t)) = \mathrm{e}_{\lambda}(Z_{k}(0))+\sum_{i=1}^{\lfloor\gamma_{k}t\rfloor}\left[\mathrm{e}_{\lambda}\left(Z_{k}\left(\frac{i}{\gamma_{k}}\right)\right)-\mathrm{e}_{\lambda}\left(Z_{k}\left(\frac{i-1}{\gamma_{k}}\right)\right)\right] \tag{2.9}\] \[= \mathrm{e}_{\lambda}(Z_{k}(0))+\sum_{i=1}^{\lfloor\gamma_{k}t\rfloor}\gamma_{k}^{-1}A_{k}\mathrm{e}_{\lambda}\left(Z_{k}\left(\frac{i-1}{\gamma_{k}}\right)\right)+M_{k,\lambda}(t)\] \[= \mathrm{e}_{\lambda}(Z_{k}(0))+\int_{0}^{\lfloor\gamma_{k}t\rfloor/\gamma_{k}}A_{k}\mathrm{e}_{\lambda}(Z_{k}(s))\mathrm{d}s+M_{k,\lambda}(t),\]
where
\[M_{k,\lambda}(t) = \sum_{i=1}^{\lfloor\gamma_{k}t\rfloor}\left\{\left[\mathrm{e}_{\lambda}\left(Z_{k}\left(\frac{i}{\gamma_{k}}\right)\right)-\mathrm{e}_{\lambda}\left(Z_{k}\left(\frac{i-1}{\gamma_{k}}\right)\right)\right]-\mathbb{E}\left[\mathrm{e}_{\lambda}\left(Z_{k}\left(\frac{i}{\gamma_{k}}\right)\right)-\mathrm{e}_{\lambda}\left(Z_{k}\left(\frac{i-1}{\gamma_{k}}\right)\right)\Big{|}\mathscr{F}_{\frac{i-1}{\gamma_{k}}}\right]\right\} \tag{2.10}\]
is a martingale and for \(z=(x,y)\in D,\)
\[A_{k}\mathrm{e}_{\lambda}(z)=\gamma_{k}\bigg{[}(g_{k,1}(\mathrm{e}^{-\lambda _{1}/k}))^{kx_{k}}\cdot(g_{k,2}^{\theta_{k}(kx_{k},y)}(\mathrm{e}^{-\lambda_{2 }}))^{y}-\mathrm{e}_{\lambda}(z)\bigg{]}.\]
One can check that
\[A_{k}\mathrm{e}_{\lambda}(z)=\mathrm{e}_{\lambda}(z)\bigg{[}x\bar{\Phi}_{k,1} (\lambda_{1})+y\bar{\Phi}_{k,2}^{\theta_{k}(kx_{k},y)}(\lambda_{2})\bigg{]}+o(1).\]
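The check is immediate from the definitions of \(\bar{\Phi}_{k,1}\) and \(\bar{\Phi}_{k,2}^{\theta_{k}(kx_{k},y)}\), spelled out here for clarity:

\[\big{(}g_{k,1}(\mathrm{e}^{-\lambda_{1}/k})\big{)}^{kx_{k}}=\mathrm{e}^{-\lambda_{1}x_{k}}\exp\Big{\{}\gamma_{k}^{-1}x_{k}\bar{\Phi}_{k,1}(\lambda_{1})\Big{\}},\qquad\big{(}g_{k,2}^{\theta_{k}(kx_{k},y)}(\mathrm{e}^{-\lambda_{2}})\big{)}^{y}=\mathrm{e}^{-\lambda_{2}y}\exp\Big{\{}\gamma_{k}^{-1}y\,\bar{\Phi}_{k,2}^{\theta_{k}(kx_{k},y)}(\lambda_{2})\Big{\}},\]

so multiplying the two factors, subtracting \(\mathrm{e}_{\lambda}(z)\), and expanding the exponential to first order in \(\gamma_{k}^{-1}\) (using \(|x_{k}-x|\leq 1/k\)) yields the display above.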
By the above, [14, Proposition 2.5] and Proposition 2.2, we have the following estimate.
**Theorem 2.3**: _Suppose that Condition 2.1 holds. Then for any \(\lambda>0,\) we have_
\[\lim_{k\to\infty}\sup_{z\in D_{k}}|A_{k}\mathrm{e}_{\lambda}(z)-A\mathrm{e}_{ \lambda}(z)|=0,\]
_where \(A\) is the generator defined by (2.3)._
**Proposition 2.4**: _Let \(T>0\) be a fixed constant and \(\sup_{k}\mathbb{E}[X_{k}(0)+Y_{k}(0)]<\infty\). Then we have_
\[\sup_{k}\sup_{0\leq t\leq T}\mathbb{E}[X_{k}(t)+Y_{k}(t)]<\infty.\]
_Proof._ By (2.8) one sees that \(0\leq g^{\prime}_{k,1}(1-)\leq K/\gamma_{k}+1.\) Then for \(t\in[\frac{i}{\gamma_{k}},\frac{i+1}{\gamma_{k}}),\) we have
\[\mathbb{E}[X_{k}(t)] = k^{-1}\mathbb{E}[x_{k}(\lfloor\gamma_{k}t\rfloor)]\] \[= g^{\prime}_{k,1}(1-)k^{-1}\mathbb{E}[x_{k}(\lfloor\gamma_{k}t \rfloor-1)]\] \[\leq (K/\gamma_{k}+1)k^{-1}\mathbb{E}[x_{k}(\lfloor\gamma_{k}t\rfloor- 1)].\]
By induction, we have \(\mathbb{E}[X_{k}(t)]\leq(K/\gamma_{k}+1)^{\lfloor\gamma_{k}t\rfloor}\mathbb{E} [X_{k}(0)].\) Moreover, by Condition (2.1.4), we have
\[\sup_{k}\left|\frac{\partial}{\partial\lambda_{2}}\Phi^{\theta_{k}(kx_{k},y)} _{k,2}(\lambda_{2})\Big{|}_{\lambda_{2}=0}\right|=\sup_{k}\gamma_{k}\left| \frac{\partial}{\partial z}g^{\theta_{k}(kx_{k},y)}_{k,2}(z)\Big{|}_{z=1}-1 \right|\leq K.\]
Similarly, for \(t\in[\frac{i}{\gamma_{k}},\frac{i+1}{\gamma_{k}}),\) one sees that
\[\mathbb{E}[Y_{k}(t)] = \mathbb{E}[y_{k}(\lfloor\gamma_{k}t\rfloor)]=\mathbb{E}\left[\sum_{j=1}^{y_{k}(n-1)}\mathbb{E}\left[\beta_{n-1,j,\theta_{k}(x_{k}(n-1),y_{k}(n-1))}\Big{|}x_{k}(n-1),y_{k}(n-1)\right]\right]\Bigg{|}_{n=\lfloor\gamma_{k}t\rfloor}\] \[= \mathbb{E}\left[y_{k}(n-1)\cdot\frac{\partial}{\partial z}g^{\theta_{k}(kx_{k}(n-1),y_{k}(n-1))}_{k,2}(z)\bigg{|}_{z=1}\right]\Bigg{|}_{n=\lfloor\gamma_{k}t\rfloor}\] \[\leq (1+K/\gamma_{k})\mathbb{E}[y_{k}(\lfloor\gamma_{k}t\rfloor-1)].\]
Then we get \(\mathbb{E}[Y_{k}(t)]\leq(K/\gamma_{k}+1)^{\lfloor\gamma_{k}t\rfloor}\mathbb{E}[Y_{k}(0)]\) by induction. Since \((1+K/\gamma_{k})^{\lfloor\gamma_{k}t\rfloor}\leq\mathrm{e}^{Kt}\leq\mathrm{e}^{KT}\) for \(t\in[0,T]\), the result follows. \(\square\)
Let \(\{\tau_{k}:k\geq 1\}\) be a sequence of bounded stopping times, and \(\{\delta_{k}:k\geq 1\}\) be a sequence of positive constants with \(\delta_{k}\to 0\) as \(k\to\infty.\) For a fixed constant \(T>0,\) we assume that
\[0\leq\tau_{k}\leq\tau_{k}+\delta_{k}\leq T.\]
**Proposition 2.5**: _Suppose that Condition 2.1 holds and \(h\in C_{b}(\mathbb{R}^{2}_{+})^{+}\). Then for any \(\lambda\in\mathbb{R}^{2}_{+},\) we have_
\[\lim_{k\to\infty}\mathbb{E}\left[\left|\mathrm{e}_{\lambda}(Z_{k}(\tau_{k}+ \delta_{k}))-\mathrm{e}_{\lambda}(Z_{k}(\tau_{k}))\right|^{2}\right]=0.\]
_Proof._ For any \(\lambda\in\mathbb{R}^{2}_{+},\) by (2.9) we have
\[\mathbb{E}\left[\left|\mathrm{e}_{\lambda}(Z_{k}(\tau_{k}+\delta _{k}))-\mathrm{e}_{\lambda}(Z_{k}(\tau_{k}))\right|^{2}\right]\] \[\qquad\leq\left|\mathbb{E}\left[\mathrm{e}_{2\lambda}(Z_{k}(\tau_ {k}+\delta_{k}))-\mathrm{e}_{2\lambda}(Z_{k}(\tau_{k}))\right]\right|\] \[\qquad\qquad+\left|\mathbb{E}\left[2\mathrm{e}_{\lambda}(Z_{k}( \tau_{k}))[\mathrm{e}_{\lambda}(Z_{k}(\tau_{k}+\delta_{k}))-\mathrm{e}_{ \lambda}(Z_{k}(\tau_{k}))]\right]\right|\] \[\qquad\qquad\leq I_{1}+I_{2}+I_{3},\]
where
\[I_{1} = \left|\mathbb{E}\left[\int_{\lfloor\gamma_{k}\tau_{k}\rfloor/\gamma_{k}}^{\lfloor\gamma_{k}(\tau_{k}+\delta_{k})\rfloor/\gamma_{k}}A_{k}\mathrm{e}_{2\lambda}(Z_{k}(s))\mathrm{d}s\right]\right|,\] \[I_{2} = \left|\mathbb{E}\left[2\mathrm{e}_{\lambda}(Z_{k}(\tau_{k}))\int_{\lfloor\gamma_{k}\tau_{k}\rfloor/\gamma_{k}}^{\lfloor\gamma_{k}(\tau_{k}+\delta_{k})\rfloor/\gamma_{k}}A_{k}\mathrm{e}_{\lambda}(Z_{k}(s))\mathrm{d}s\right]\right|,\] \[I_{3} = \left|\mathbb{E}\left[2\mathrm{e}_{\lambda}(Z_{k}(\tau_{k}))\left(M_{k,\lambda}(\tau_{k}+\delta_{k})-M_{k,\lambda}(\tau_{k})\right)\right]\right|.\]
Then by (2.4), Theorem 2.3 and Proposition 2.4, one can see that
\[I_{1} \leq \mathbb{E}\left[\int_{\lfloor\gamma_{k}\tau_{k}\rfloor/\gamma_{k}}^{\lfloor\gamma_{k}(\tau_{k}+\delta_{k})\rfloor/\gamma_{k}}|A_{k}\mathrm{e}_{2\lambda}(Z_{k}(s))-A\mathrm{e}_{2\lambda}(Z_{k}(s))|\,\mathrm{d}s\right]+\mathbb{E}\left[\int_{\lfloor\gamma_{k}\tau_{k}\rfloor/\gamma_{k}}^{\lfloor\gamma_{k}(\tau_{k}+\delta_{k})\rfloor/\gamma_{k}}|A\mathrm{e}_{2\lambda}(Z_{k}(s))|\,\mathrm{d}s\right]\leq K\delta_{k}.\]
Similarly,
\[I_{2} \leq 2\mathbb{E}\left[\int_{\lfloor\gamma_{k}\tau_{k}\rfloor/\gamma_{k}}^{\lfloor\gamma_{k}(\tau_{k}+\delta_{k})\rfloor/\gamma_{k}}|A_{k}\mathrm{e}_{\lambda}(Z_{k}(s))|\,\mathrm{d}s\right]\] \[\leq 2\mathbb{E}\left[\int_{\lfloor\gamma_{k}\tau_{k}\rfloor/\gamma_{k}}^{\lfloor\gamma_{k}(\tau_{k}+\delta_{k})\rfloor/\gamma_{k}}|A_{k}\mathrm{e}_{\lambda}(Z_{k}(s))-A\mathrm{e}_{\lambda}(Z_{k}(s))|\,\mathrm{d}s\right]+2\mathbb{E}\left[\int_{\lfloor\gamma_{k}\tau_{k}\rfloor/\gamma_{k}}^{\lfloor\gamma_{k}(\tau_{k}+\delta_{k})\rfloor/\gamma_{k}}|A\mathrm{e}_{\lambda}(Z_{k}(s))|\,\mathrm{d}s\right]\leq K\delta_{k}.\]
On the other hand, by (2.10) and Doob's stopping theorem, it follows that
\[\mathbb{E}\left[\mathrm{e}_{\lambda}(Z_{k}(\tau_{k}))\left(M_{k,\lambda}(\tau_{k}+\delta_{k})-M_{k,\lambda}(\tau_{k})\right)\right]\] \[=\mathbb{E}\left[\mathbb{E}\left[\mathrm{e}_{\lambda}(Z_{k}(\tau_{k}))\left(M_{k,\lambda}(\tau_{k}+\delta_{k})-M_{k,\lambda}(\tau_{k})\right)\Big{|}\mathscr{F}_{\lfloor\gamma_{k}\tau_{k}\rfloor/\gamma_{k}}\right]\right]\] \[=\mathbb{E}\left[\mathrm{e}_{\lambda}(Z_{k}(\tau_{k}))\left[\mathbb{E}\left[M_{k,\lambda}(\tau_{k}+\delta_{k})\Big{|}\mathscr{F}_{\lfloor\gamma_{k}\tau_{k}\rfloor/\gamma_{k}}\right]-M_{k,\lambda}(\tau_{k})\right]\right]\] \[=0,\]
which implies that \(I_{3}=0.\) The result follows. \(\square\)
**Corollary 2.6**: _Suppose that Condition 2.1 holds and \(h\in C_{b}(\mathbb{R}^{2}_{+})^{+}\). Then for any \(\lambda:=(\lambda_{1},\lambda_{2})\in\mathbb{R}^{2}_{+},\) we have_
\[\lim_{k\to\infty}L^{\lambda}_{\tau_{k},\delta_{k}}(Z_{k})=0,\]
_where \(L^{\lambda}_{\tau_{k},\delta_{k}}(Z_{k}):=\mathbb{E}\left[\left|e^{-\lambda_{ 1}X_{k}(\tau_{k}+\delta_{k})}-e^{-\lambda_{1}X_{k}(\tau_{k})}\right|^{2}+\left| e^{-\lambda_{2}Y_{k}(\tau_{k}+\delta_{k})}-e^{-\lambda_{2}Y_{k}(\tau_{k})} \right|^{2}\right].\)_
_Proof._ The result follows by taking \(\lambda=(\lambda_{1},0)\) and \(\lambda=(0,\lambda_{2})\) in Proposition 2.5. \(\square\)
Similar to the proof of [14, Theorem 3.6], we get the following result.
**Proposition 2.7**: _Suppose that Condition 2.1 holds and \(h\in C_{b}(\mathbb{R}^{2}_{+})^{+}\). Let \(Z_{k}(0)=(X_{k}(0),Y_{k}(0))\) be the initial value satisfying \(\sup_{k}\mathbb{E}[X_{k}(0)+Y_{k}(0)]<\infty\). Then the process \(\{Z_{k}(t):t\geq 0\}_{k\geq 1}=\{(X_{k}(t),Y_{k}(t)):t\geq 0\}_{k\geq 1}\) is tight on \(\mathbb{D}([0,\infty),D).\)_
_Proof._ By Aldous's criterion, it suffices to show that, for any \(\epsilon>0,\)
\[\lim_{k\to\infty}\mathbb{P}\left[\|Z_{k}(\tau_{k}+\delta_{k})-Z_{k}(\tau_{k}) \|>\epsilon\right]=0, \tag{2.11}\]
where \(\|\cdot\|\) is the \(L^{2}\) norm on \(D.\) For any \(a:=(a_{1},a_{2}),b:=(b_{1},b_{2})\in D\) satisfying \(\|a-b\|>\epsilon,\) we have \(|a_{1}-b_{1}|\vee|a_{2}-b_{2}|>\epsilon/2.\) Then for a fixed constant \(M>0,\) by taking \(0\leq\|a\|,\|b\|\leq M,\) one sees that
\[|\mathrm{e}^{-\lambda_{1}a_{1}}-\mathrm{e}^{-\lambda_{1}b_{1}}|^{2}+|\mathrm{e }^{-\lambda_{2}a_{2}}-\mathrm{e}^{-\lambda_{2}b_{2}}|^{2}\geq\left(\frac{1}{2} (\lambda_{1}\wedge\lambda_{2})\epsilon e^{-(\lambda_{1}+\lambda_{2})M}\right)^ {2}.\]
By Proposition 2.5, it is easy to see that
\[\mathbb{P}\left\{\|Z_{k}(\tau_{k}+\delta_{k})-Z_{k}(\tau_{k})\|>\epsilon;\|Z_{k}(\tau_{k})\|\vee\|Z_{k}(\tau_{k}+\delta_{k})\|\leq M\right\}\]
\[\leq\left(\frac{1}{2}(\lambda_{1}\wedge\lambda_{2})\epsilon e^{-(\lambda_{1}+ \lambda_{2})M}\right)^{-2}L^{\lambda}_{\tau_{k},\delta_{k}}(Z_{k})\to 0\]
as \(k\to\infty.\) Further, by Proposition 2.4, we have
\[\mathbb{P}\left[\|Z_{k}(\tau_{k}+\delta_{k})\|\geq M\right] \leq \mathbb{P}\left[X_{k}(\tau_{k}+\delta_{k})\geq\frac{M}{2}\right]+\mathbb{P}\left[Y_{k}(\tau_{k}+\delta_{k})\geq\frac{M}{2}\right]\] \[\leq 2\frac{\sup_{0\leq t\leq T}\mathbb{E}[X_{k}(t)+Y_{k}(t)]}{M}\leq\frac{K}{M}\]
and
\[\mathbb{P}\left[\|Z_{k}(\tau_{k})\|\geq M\right]\leq\frac{K}{M}.\]
As a result,
\[\mathbb{P}\left[\|Z_{k}(\tau_{k}+\delta_{k})-Z_{k}(\tau_{k})\|>\epsilon\right]\] \[\leq\mathbb{P}\left[\|Z_{k}(\tau_{k}+\delta_{k})-Z_{k}(\tau_{k})\|>\epsilon;\|Z_{k}(\tau_{k})\|\vee\|Z_{k}(\tau_{k}+\delta_{k})\|\leq M\right]\] \[\quad\quad+\mathbb{P}\left[\|Z_{k}(\tau_{k}+\delta_{k})\|\geq M\right]+\mathbb{P}\left[\|Z_{k}(\tau_{k})\|\geq M\right]\]
goes to \(0\) as \(k\to\infty\) and \(M\to\infty,\) which implies (2.11). The result follows. \(\square\)
**Lemma 2.8**: _For any \(f\in C_{b}^{2}(D)\), there exists a sequence of functions \(f^{m,n}\in C_{0}^{2}(D)\) such that \(f^{m,n}\to f\), \(f^{m,n}_{1}\to f_{1}\) and \(f^{m,n}_{11}\to f_{11}\) uniformly on any bounded subset of \(D\) as \(m,n\to\infty\), where \(f^{m,n}_{1}:=\frac{\partial f^{m,n}(x,y)}{\partial x}\), \(f_{1}:=\frac{\partial f(x,y)}{\partial x}\), \(f^{m,n}_{11}:=\frac{\partial^{2}f^{m,n}(x,y)}{\partial x^{2}}\) and \(f_{11}:=\frac{\partial^{2}f(x,y)}{\partial x^{2}}\)._
_Proof._ For any nonnegative function \(f\in C_{b}^{2}(D),\) we define
\[f^{m,n}(x,y)=\begin{cases}f(x,y),&(x,y)\in[0,m]\times[0,n]\cap D,\\ f(x,y)\left[1-2\int_{m}^{x}\rho(2(z-m)-1)dz\right],&(x,y)\in[m,m+1]\times[0,n ]\cap D;\\ 0,&\text{others},\end{cases}\]
where \(\rho\) is the mollifier defined by
\[\rho(x)=\Lambda\exp\{-1/(1-x^{2})\}1_{\{|x|<1\}}\]
with \(\Lambda\) being the constant such that \(\int_{\mathbb{R}}\rho(x)dx=1.\) It is easy to see that \(f^{m,n}\in C_{0}^{2}(D)\). Notice that, for \((x,y)\in[m,m+1]\times[0,n]\cap D,\)
\[f^{m,n}_{1}(x,y)=f_{1}(x,y)-\frac{d}{dx}\left[2f(x,y)\int_{m}^{x}\rho(2(z-m)- 1)dz\right]\]
and
\[f^{m,n}_{11}(x,y)=f_{11}(x,y)-\frac{d^{2}}{dx^{2}}\left[2f(x,y)\int_{m}^{x} \rho(2(z-m)-1)dz\right].\]
Let \(D^{b}\) be a fixed bounded subset of \(D.\) Then we have
\[\sup_{(x,y)\in D^{b}}\left[|f^{m,n}(x,y)-f(x,y)|+|f^{m,n}_{1}(x,y)-f_{1}(x,y)| +|f^{m,n}_{11}(x,y)-f_{11}(x,y)|\right]\to 0\]
as \(m,n\to\infty.\) The result follows. \(\square\)
Now we are ready to give the existence of the solution to (1.1)-(1.2) for the case of \(h\in C_{b}(\mathbb{R}^{2}_{+})^{+}.\)
**Theorem 2.9**: _Suppose that Condition 2.1 holds and \(h\in C_{b}(\mathbb{R}^{2}_{+})^{+}\). Let \(Z_{k}(0)\) converge in distribution to \(Z_{0}\) as \(k\to\infty\) with \(\sup_{k}\mathbb{E}[X_{k}(0)+Y_{k}(0)]<\infty\). Then \(\{Z_{k}(t):t\geq 0\}_{k\geq 1}\) converges in distribution on \(\mathbb{D}([0,\infty),D)\) to \(\{Z_{t}:t\geq 0\}\), which is a solution to (1.1)-(1.2)._
_Proof._ Let \(P^{(k)}\) be the distributions of \(Z_{k}\) on \(\mathbb{D}([0,\infty),D)\). By Proposition 2.7, the sequence of processes \(\{Z_{k}\}_{k\geq 1}\) is relatively compact. Then there are a probability measure \(Q\) and a subsequence \(P^{(k_{i})}\) on \(\mathbb{D}([0,\infty),D)\) such that \(Q=\lim_{i\to\infty}P^{(k_{i})}.\) By the Skorokhod representation theorem, there exists a probability space \((\tilde{\Omega},\tilde{\mathscr{E}},\tilde{\mathbb{P}})\) on which are defined càdlàg processes \(\{\tilde{Z}_{t}:t\geq 0\}\) and \(\{\tilde{Z}_{k_{i}}(t):t\geq 0\}\) such that the distributions of \(\tilde{Z}\) and \(\tilde{Z}_{k_{i}}\) on \(\mathbb{D}([0,\infty),D)\) are \(Q\) and \(P^{(k_{i})}\), respectively, and \(\lim_{i\to\infty}\tilde{Z}_{k_{i}}=\tilde{Z},\ \tilde{\mathbb{P}}\)-almost surely.
Now it suffices to show that \((\tilde{Z}_{t})_{t\geq 0}\) satisfies the following martingale problem: for any \(f\in C^{2}_{b}(D)\), we have
\[f(\tilde{Z}_{t})=f(\tilde{Z}_{0})+\int_{0}^{t}Af(\tilde{Z}_{s})\,\mathrm{d}s+\text{local mart.}\]
**Theorem 2.10**: _Assume that \(h\in C_{b}(\mathbb{R}_{+}^{2})^{+}\). For any given initial value \((X_{0},Y_{0})\in D\), the pathwise uniqueness holds for (1.1)-(1.2) on \(D\)._
_Proof._ By [5, Theorems 5.1 and 5.2] and [7, Corollary 5.2], there is a unique positive strong solution to (1.1). Moreover, it was shown in [5, 7] that the solution \(\{X_{t}:t\geq 0\}\) is a CB-process. The pathwise uniqueness of the solution for \(Y\) can also be established by a path-stitching method; here we prove it by the Yamada-Watanabe method, since the same method yields the comparison theorem.
Let \(\{(X_{t},Y_{t}^{1}):t\geq 0\}\) and \(\{(X_{t},Y_{t}^{2}):t\geq 0\}\) be two solutions to (1.1)-(1.2) with the same initial value \((X_{0},Y_{0})\). It is easy to see that the processes have bounded first moments since \(h\in C_{b}(D)^{+}\). We define \(\tau_{k}^{1}=\inf\{t\geq 0:Y_{t}^{1}\geq k\}\), \(\tau_{k}^{2}=\inf\{t\geq 0:Y_{t}^{2}\geq k\}\) and \(\tau_{k}=\tau_{k}^{1}\wedge\tau_{k}^{2}\). Then \(\tau_{k}\to\infty\) almost surely as \(k\to\infty\). Let \(Z_{t}=Y_{t}^{1}-Y_{t}^{2}\). One can check that
\[Z_{t\wedge\tau_{k}} = \int_{0}^{t\wedge\tau_{k}}\int_{0}^{Y_{s-}^{1}}\int_{0}^{h(X_{s-},Y_{s-}^{1})}\int_{\mathbb{N}^{-1}}\xi\,N(\mathrm{d}s,\mathrm{d}u,\mathrm{d}r,\mathrm{d}\xi) \tag{2.15}\] \[-\int_{0}^{t\wedge\tau_{k}}\int_{0}^{Y_{s-}^{2}}\int_{0}^{h(X_{s-},Y_{s-}^{2})}\int_{\mathbb{N}^{-1}}\xi\,N(\mathrm{d}s,\mathrm{d}u,\mathrm{d}r,\mathrm{d}\xi)\] \[= \int_{0}^{t\wedge\tau_{k}}\int_{Y_{s-}^{2}}^{Y_{s-}^{1}}\int_{0}^{h(X_{s-},Y_{s-}^{1})}\int_{\mathbb{N}^{-1}}\xi\mathds{1}_{\{Z_{s-}>0\}}\,N(\mathrm{d}s,\mathrm{d}u,\mathrm{d}r,\mathrm{d}\xi)\] \[+\int_{0}^{t\wedge\tau_{k}}\int_{0}^{Y_{s-}^{2}}\int_{h(X_{s-},Y_{s-}^{2})}^{h(X_{s-},Y_{s-}^{1})}\int_{\mathbb{N}^{-1}}\xi\mathds{1}_{\{Z_{s-}>0\}}\,N(\mathrm{d}s,\mathrm{d}u,\mathrm{d}r,\mathrm{d}\xi)\] \[-\int_{0}^{t\wedge\tau_{k}}\int_{Y_{s-}^{1}}^{Y_{s-}^{2}}\int_{0}^{h(X_{s-},Y_{s-}^{2})}\int_{\mathbb{N}^{-1}}\xi\mathds{1}_{\{Z_{s-}<0\}}\,N(\mathrm{d}s,\mathrm{d}u,\mathrm{d}r,\mathrm{d}\xi)\] \[-\int_{0}^{t\wedge\tau_{k}}\int_{0}^{Y_{s-}^{1}}\int_{h(X_{s-},Y_{s-}^{1})}^{h(X_{s-},Y_{s-}^{2})}\int_{\mathbb{N}^{-1}}\xi\mathds{1}_{\{Z_{s-}<0\}}\,N(\mathrm{d}s,\mathrm{d}u,\mathrm{d}r,\mathrm{d}\xi).\]
For each integer \(n\geq 0\) define \(a_{n}=\exp\{-n(n+1)/2\}\). Then \(a_{n}\to 0\) decreasingly as \(n\to\infty\) and \(\int_{a_{n}}^{a_{n-1}}z^{-1}\mathrm{d}z=n,n\geq 1\). Let \(x\to g_{n}(x)\) be a positive continuous function supported by \((a_{n},a_{n-1})\) so that \(\int_{a_{n}}^{a_{n-1}}g_{n}(x)dx=1\) and \(g_{n}(x)\leq 2(nx)^{-1}\) for every \(x>0\). For \(n\geq 1\) and \(z\in\mathbb{R}\) let
\[\phi_{n}(z)=\int_{0}^{|z|}\mathrm{d}y\int_{0}^{y}g_{n}(x)\mathrm{d}x.\]
Then \(\phi_{n}(z)\to|z|\) increasingly as \(n\to\infty\). Moreover, we have \(|\phi_{n}^{\prime}(z)|\leq 1\), and for \(z,\zeta\in\mathbb{R}\) it holds that \(|\phi_{n}(z+\zeta)-\phi_{n}(z)|\leq|\zeta|\).
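For completeness, these standard properties can be checked directly from the definition. Writing \(\Phi_{n}(y):=\int_{0}^{y}g_{n}(x)\mathrm{d}x\), so that \(\phi_{n}(z)=\int_{0}^{|z|}\Phi_{n}(y)\mathrm{d}y\), we have \(0\leq\Phi_{n}\leq 1\) and \(\Phi_{n}(y)=1\) for \(y\geq a_{n-1}\), whence

\[|z|-a_{n-1}\leq\phi_{n}(z)\leq|z|,\qquad|\phi_{n}^{\prime}(z)|=\Phi_{n}(|z|)\leq 1,\]

and the convergence \(\phi_{n}(z)\to|z|\) is monotone since \(\Phi_{n+1}\geq\Phi_{n}\) pointwise. Moreover, for \(z,\zeta\in\mathbb{R}\),

\[|\phi_{n}(z+\zeta)-\phi_{n}(z)|=\Big{|}\int_{z}^{z+\zeta}\phi_{n}^{\prime}(w)\,\mathrm{d}w\Big{|}\leq|\zeta|.\]

By Itô's formula, we have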
\[\phi_{n}(Z_{t\wedge\tau_{k}}) = \int_{0}^{t\wedge\tau_{k}}\int_{Y_{s-}^{2}}^{Y_{s-}^{1}}\int_{0}^{h(X_{s-},Y_{s-}^{1})}\int_{\mathbb{N}^{-1}}[\phi_{n}(Z_{s-}+\xi)-\phi_{n}(Z_{s-})]1_{\{Z_{s-}>0\}}N(\mathrm{d}s,\mathrm{d}u,\mathrm{d}r,\mathrm{d}\xi) \tag{2.16}\] \[+\int_{0}^{t\wedge\tau_{k}}\int_{0}^{Y_{s-}^{2}}\int_{h(X_{s-},Y_{s-}^{2})}^{h(X_{s-},Y_{s-}^{1})}\int_{\mathbb{N}^{-1}}[\phi_{n}(Z_{s-}+\xi)-\phi_{n}(Z_{s-})]1_{\{Z_{s-}>0\}}N(\mathrm{d}s,\mathrm{d}u,\mathrm{d}r,\mathrm{d}\xi)\] \[+\int_{0}^{t\wedge\tau_{k}}\int_{Y_{s-}^{1}}^{Y_{s-}^{2}}\int_{0}^{h(X_{s-},Y_{s-}^{2})}\int_{\mathbb{N}^{-1}}[\phi_{n}(Z_{s-}-\xi)-\phi_{n}(Z_{s-})]1_{\{Z_{s-}<0\}}\,N(\mathrm{d}s,\mathrm{d}u,\mathrm{d}r,\mathrm{d}\xi)\] \[+\int_{0}^{t\wedge\tau_{k}}\int_{0}^{Y_{s-}^{1}}\int_{h(X_{s-},Y_{s-}^{1})}^{h(X_{s-},Y_{s-}^{2})}\int_{\mathbb{N}^{-1}}[\phi_{n}(Z_{s-}-\xi)-\phi_{n}(Z_{s-})]1_{\{Z_{s-}<0\}}\,N(\mathrm{d}s,\mathrm{d}u,\mathrm{d}r,\mathrm{d}\xi)\] \[= \int_{0}^{t\wedge\tau_{k}}Z_{s-}h(X_{s-},Y_{s-}^{1})\int_{\mathbb{N}^{-1}}[\phi_{n}(Z_{s-}+\xi)-\phi_{n}(Z_{s-})]1_{\{Z_{s-}>0\}}n(\mathrm{d}\xi)\,\mathrm{d}s\] \[+\int_{0}^{t\wedge\tau_{k}}Y_{s-}^{2}[h(X_{s-},Y_{s-}^{1})-h(X_{s-},Y_{s-}^{2})]\int_{\mathbb{N}^{-1}}[\phi_{n}(Z_{s-}+\xi)-\phi_{n}(Z_{s-})]1_{\{Z_{s-}>0\}}n(\mathrm{d}\xi)\,\mathrm{d}s\] \[-\int_{0}^{t\wedge\tau_{k}}Z_{s-}h(X_{s-},Y_{s-}^{2})\int_{\mathbb{N}^{-1}}[\phi_{n}(Z_{s-}-\xi)-\phi_{n}(Z_{s-})]1_{\{Z_{s-}<0\}}n(\mathrm{d}\xi)\,\mathrm{d}s\] \[-\int_{0}^{t\wedge\tau_{k}}Y_{s-}^{1}[h(X_{s-},Y_{s-}^{1})-h(X_{s-},Y_{s-}^{2})]\int_{\mathbb{N}^{-1}}[\phi_{n}(Z_{s-}-\xi)-\phi_{n}(Z_{s-})]1_{\{Z_{s-}<0\}}n(\mathrm{d}\xi)\,\mathrm{d}s\] \[+mart.\]
Recall that \(h\in C_{b}(\mathbb{R}_{+}^{2})^{+}\). Then \(h\in C_{b}(D)^{+}.\) For any constant \(k>0,\) one sees that there exists a constant \(K_{k}>0\) depending on \(k\) such that
\[|h(x,y_{1})-h(x,y_{2})|\leq K_{k}|y_{1}-y_{2}| \tag{2.17}\]
for any \(x\in\mathbb{R}_{+}\) and any \(y_{1},y_{2}\in[0,k]\cap\mathbb{N}.\) Taking expectations on both sides of (2.16), we then have
\[\mathbb{E}[\phi_{n}(Z_{t\wedge\tau_{k}})] \leq K_{k}^{1}\mathbb{E}\left[\int_{0}^{t\wedge\tau_{k}}|Z_{s-}|\mathrm{d}s\int_{\mathbb{N}^{-1}}|\xi|n(\mathrm{d}\xi)\right] \leq K_{k}^{2}\mathbb{E}\left[\int_{0}^{t\wedge\tau_{k}}|Z_{s-}|\mathrm{d}s\right],\]
where \(K_{k}^{1},K_{k}^{2}\) are positive constants depending on \(k\). Taking \(n\rightarrow\infty,\) we have \(\mathbb{E}[|Z_{t\wedge\tau_{k}}|]=0\) by Gronwall's inequality, and so \(\mathbb{P}(Y_{t}^{1}=Y_{t}^{2})=1\) for all \(t\geq 0\) by letting \(k\rightarrow\infty.\) Then \(\mathbb{P}(Y_{t}^{1}=Y_{t}^{2}\) for all \(t\geq 0)=1\) by the right continuity of the processes. \(\Box\)
**Theorem 2.11**: _Suppose that \(h\in C(\mathbb{R}_{+}^{2})^{+}\). Then there exists a unique strong solution to (1.1)-(1.2)._
_Proof._ Let \(h_{m}(x,y):=h(x\wedge m,y\wedge m).\) Then \(h_{m}\) is bounded for any \(m\geq 1\) and \(h_{m}\to h\) as \(m\rightarrow\infty\). By Theorems 2.9 and 2.10, there exists a unique strong solution \(\{(X_{t}^{m},Y_{t}^{m}):t\geq 0\}\) to the following stochastic integral equation system:
\[\begin{cases}X_{t}=X_{0}-b\int_{0}^{t}X_{s}\,\mathrm{d}s+\int_{0}^{t}\sqrt{2cX _{s}}\,\mathrm{d}B_{s}+\int_{0}^{t}\int_{0}^{X_{s-}}\int_{0}^{\infty}\xi\, \tilde{M}(\mathrm{d}s,\mathrm{d}u,\mathrm{d}\xi),\\ Y_{t}=Y_{0}+\int_{0}^{t}\int_{0}^{Y_{s-}}\int_{0}^{h_{m}(X_{s-},Y_{s-})}\int_{ \mathbb{N}^{-1}}\xi\,N(\mathrm{d}s,\mathrm{d}u,\mathrm{d}r,\mathrm{d}\xi). \end{cases} \tag{2.18}\]
In fact, \(\{X_{t}^{m}:t\geq 0\}\) is the unique strong solution to (1.1), independent of \(m,\) which is written as \(\{X_{t}:t\geq 0\}\) in the following. Let \(\tau_{m}^{X}:=\inf\{t>0:X_{t}\geq m\},\ \tau_{m}^{Y}:=\inf\{t>0:Y_{t}^{m}\geq m\}\) and \(\tau_{m}=\tau_{m}^{X}\wedge\tau_{m}^{Y}.\) Then \(0\leq X_{t}<m\) and \(0\leq Y_{t}^{m}<m\) for \(0\leq t<\tau_{m},\) and \((X_{t},Y_{t}^{m})\) satisfies (1.1)-(1.2) for \(0\leq t<\tau_{m}.\) Let
\[Y_{\tau_{m}}^{m}=Y_{\tau_{m}-}^{m}+\int_{\{\tau_{m}\}}\int_{0}^{Y_{\tau_{m}-}^ {m}}\int_{0}^{h_{m}(X_{\tau_{m}-},Y_{\tau_{m}-}^{m})}\int_{\mathbb{N}^{-1}}\xi N (\mathrm{d}s,\mathrm{d}u,\mathrm{d}r,\mathrm{d}\xi).\]
For \(n\geq m\geq 1\) there exists a unique strong solution \((X_{t},\tilde{Y}_{t})_{t\geq\tau_{m}}\) to (1.1) and
\[\tilde{Y}_{t}=Y_{\tau_{m}}^{m}+\int_{\tau_{m}}^{t}\int_{0}^{\tilde{Y}_{s-}}\int_{0}^{h_{n}(X_{s-},\tilde{Y}_{s-})}\int_{\mathbb{N}^{-1}}\xi\,N(\mathrm{d}s,\mathrm{d}u,\mathrm{d}r,\mathrm{d}\xi).\]
Let \(Y_{t}^{\prime}=Y_{t}^{m}\) for \(0\leq t<\tau_{m}\) and \(Y_{t}^{\prime}=\tilde{Y}_{t}\) for \(t\geq\tau_{m}.\) Then it is a solution to (2.18) with \(m\) changed to \(n.\) By the strong uniqueness we get \((X_{t},Y_{t}^{\prime})_{t\geq 0}=(X_{t},Y_{t}^{n})_{t\geq 0}\) almost surely. In particular, we infer \(Y_{t}^{n}=Y_{t}^{m}<m\) for \(0\leq t<\tau_{m}\). Consequently, the sequence \(\{\tau_{m}\}\) is non-decreasing. On the other hand, by (2.18) it is easy to check that \(\mathbb{E}[X_{t\wedge\tau_{m}^{X}}]\leq\mathbb{E}[X_{0}]e^{Kt},\) where \(K\) is a constant independent of \(m.\) Then we have \(\tau_{m}^{X}\rightarrow\infty\) almost surely as \(m\rightarrow\infty\). Let \(\tau=\lim_{m\rightarrow\infty}\tau_{m}=\lim_{m\rightarrow\infty}\tau_{m}^{Y}.\) Let \(Y_{t}=Y_{t}^{m}\) for all \(0\leq t<\tau_{m}\) and \(m\geq 1\). It is easily seen that \((X_{t},Y_{t})_{t\in[0,\tau)}\) is the unique strong solution to (1.1)-(1.2) up to \(\tau\). For \(t\geq\tau,\) let \((X_{t},Y_{t})=(X_{t},\infty).\) The result follows. \(\Box\)
**Theorem 2.12**: _Assume \(h\in C(\mathbb{R}_{+}^{2})^{+}\). The comparison property of (1.1)-(1.2) holds._
_Proof._ Suppose that \((X_{t}^{1},Y_{t}^{1})_{t\geq 0}\) and \((X_{t}^{2},Y_{t}^{2})_{t\geq 0}\) are two positive solutions to (1.1)-(1.2) with \(\mathbb{P}(X_{0}^{1}\leq X_{0}^{2},\ Y_{0}^{1}\leq Y_{0}^{2})=1.\) By [14, Theorem 8.4], we have \(\mathbb{P}(X_{t}^{1}\leq X_{t}^{2}\ \text{for all}\ t\geq 0)=1.\) For \(n\geq 0\) let \(\phi_{n}\) be the function defined as in the proof of Theorem 2.10. Let \(\psi_{n}(z)=\phi_{n}(z\lor 0)\) for \(z\in\mathbb{R}.\) Then \(\psi_{n}(z)\to z_{+}:=z\lor 0\) increasingly as \(n\to\infty.\) We define \(\kappa_{k}^{1}=\inf\{t\geq 0:X_{t}^{1}+Y_{t}^{1}\geq k\},\)\(\kappa_{k}^{2}=\inf\{t\geq 0:X_{t}^{2}+Y_{t}^{2}\geq k\}\) and \(\kappa_{k}=\kappa_{k}^{1}\wedge\kappa_{k}^{2}.\) Let \(\kappa:=\lim_{k\to\infty}\kappa_{k}.\) Then \(Y_{t}^{1}=Y_{t}^{2}=\infty\) almost surely for any \(t\geq\kappa.\) Let \(Z_{t}=Y_{t}^{1}-Y_{t}^{2}\) for \(t\in[0,\kappa).\) Similar to the proof of Theorem 2.10, we get \(\mathbb{E}[(Z_{t\wedge\kappa_{k}})_{+}]=0\) for every \(t\geq 0\) by Gronwall's inequality. Then by taking \(k\to\infty\) and using Fatou's lemma we see that \(\mathbb{E}[(Z_{t})_{+}]=0\) for every \(t\in[0,\kappa),\) and so \(\mathbb{P}(Y_{t}^{1}\leq Y_{t}^{2}\ \text{for all}\ t\geq 0)=1\) by the right continuity of the processes. \(\square\)
## 3 Foster-Lyapunov criteria for extinction
In this section, we mainly discuss the extinction behavior of \((X,Y)\) under \(b\geq 0\). Define \(\tau_{0}=\inf\{t>0:X_{t}=0\text{ and }Y_{t}=0\}.\) Moreover, we separately define the extinction times of \(X\) and \(Y\) as \(\tau_{0}^{X}:=\inf\{t>0:X_{t}=0\}\) and \(\tau_{0}^{Y}:=\inf\{t>0:Y_{t}=0\}.\) Then we have \(\tau_{0}=\tau_{0}^{X}\vee\tau_{0}^{Y}.\) For the extinction behavior of the process \(X\), we introduce the so-called Grey condition:
**Condition 3.1**: _There is some constant \(\theta>0\) so that \(\phi_{1}(z)>0\) for \(z\geq\theta\) and \(\int_{\theta}^{\infty}\phi_{1}^{-1}(z)\mathrm{d}z<\infty\), where \(\phi_{1}\) is given by (2.5)._
Since \(b\geq 0,\) under Condition 3.1, one can see that \(\mathbb{P}_{x}(\tau_{0}^{X}<\infty)=1\) for all \(x>0\); see, e.g., [14, Corollary 3.8]. In the following, we present a Foster-Lyapunov-type criterion for the process \((X,Y)\). For \(z_{1}=(x_{1},y_{1}),z_{2}=(x_{2},y_{2})\in D,\) we say \(z_{1}\succeq z_{2}\) if \(x_{1}\geq x_{2}\) and \(y_{1}\geq y_{2}\). Let \(\tilde{z}:=(\tilde{x},\tilde{y})\succeq z_{0}\) and \((X_{t},Y_{t})_{t\geq 0}\) be the mixed state branching process satisfying (1.1)-(1.2) with initial value \(z_{0}.\) We define the stopping time \(\sigma_{\tilde{z}}=\inf\{t>0:X_{t}\geq\tilde{x}\text{ or }Y_{t}\geq\tilde{y}\}.\) It is easy to see that \(X_{t\wedge\sigma_{\tilde{z}}-}\leq\tilde{x}\) and \(Y_{t\wedge\sigma_{\tilde{z}}-}\leq\tilde{y}.\)
**Theorem 3.2**: _Let \(\{(X_{t},Y_{t}):t\geq 0\}\) be the mixed state branching process satisfying (1.1)-(1.2) with initial value \(z_{0}=(x_{0},y_{0})\in D.\) Suppose that \(\phi_{1}(\lambda_{1})>0\) and \(\phi_{2}(\lambda_{2})>0\) for any \(\lambda:=(\lambda_{1},\lambda_{2})\in(0,\infty)^{2}.\) Then we have \(\mathbb{P}_{z_{0}}\{\tau_{0}<\infty\}=1.\)_
_Proof._ It suffices to prove the case of \(z_{0}\in D\backslash\{(0,0)\}.\) The proof is inspired by [10, Lemma 4.1]. By Itô's formula, we have
\[\mathrm{e}_{\lambda}(Z_{t\wedge\tau_{0}\wedge\sigma_{\tilde{z}}})=\mathrm{e}_{ \lambda}(z_{0})+\int_{0}^{t\wedge\tau_{0}\wedge\sigma_{\tilde{z}}}A\mathrm{e}_ {\lambda}(Z_{s-})\mathrm{d}s+mart. \tag{3.19}\]
Taking expectations on both sides, we have
\[\mathbb{E}_{z_{0}}\left[\mathrm{e}_{\lambda}(Z_{t\wedge\tau_{0}\wedge\sigma_{ \tilde{z}}})\right]=\mathrm{e}_{\lambda}(z_{0})+\int_{0}^{t}\mathbb{E}_{z_{0}} \left[A\mathrm{e}_{\lambda}(Z_{s-})1_{\{s<\tau_{0}\wedge\sigma_{\tilde{z}}\}} \right]\mathrm{d}s,\]
which implies that
\[\mathrm{d}(\mathbb{E}_{z_{0}}\left[\mathrm{e}_{\lambda}(Z_{t\wedge\tau_{0} \wedge\sigma_{\tilde{z}}})\right])=\mathbb{E}_{z_{0}}\left[A\mathrm{e}_{ \lambda}(Z_{t-})1_{\{t<\tau_{0}\wedge\sigma_{\tilde{z}}\}}\right]\mathrm{d}t.\]
Recall that \(\phi_{1}(\lambda_{1})>0\) and \(\phi_{2}(\lambda_{2})>0\) for all \(\lambda\in(0,\infty)^{2}.\) Then for all \(\tilde{z}=(\tilde{x},\tilde{y})\in D\) with \(\tilde{z}\succeq z_{0}\) and \(\lambda\in(0,\infty)^{2},\) there exists a constant \(d_{z_{0},\tilde{z},\lambda}>0\) such that for all \(z=(x,y)\in D\) with \(z_{0}\preceq z\preceq\tilde{z},\)
\[x\phi_{1}(\lambda_{1})+h(x,y)y\phi_{2}(\lambda_{2})\geq d_{z_{0},\tilde{z},\lambda}. \tag{3.20}\]
Then by integration by parts,
\[\int_{0}^{\infty}\mathrm{e}^{-d_{z_{0},\tilde{z},\lambda}t}\mathbb{E}_{z_{0}}\left[A\mathrm{e}_{\lambda}(Z_{t})1_{\{t<\tau_{0}\wedge\sigma_{\tilde{z}}\}}\right]\mathrm{d}t\] \[\qquad=\int_{0}^{\infty}\mathrm{e}^{-d_{z_{0},\tilde{z},\lambda}t}\mathrm{d}(\mathbb{E}_{z_{0}}\left[\mathrm{e}_{\lambda}(Z_{t\wedge\tau_{0}\wedge\sigma_{\tilde{z}}})\right])\] \[\qquad=d_{z_{0},\tilde{z},\lambda}\int_{0}^{\infty}\mathrm{e}^{-d_{z_{0},\tilde{z},\lambda}t}\mathbb{E}_{z_{0}}\left[\mathrm{e}_{\lambda}(Z_{t\wedge\tau_{0}\wedge\sigma_{\tilde{z}}})\right]\mathrm{d}t-\mathrm{e}_{\lambda}(z_{0}).\]
Moreover, by (2.4) and (3.20) we have
\[\int_{0}^{\infty}\mathrm{e}^{-d_{z_{0},\tilde{z},\lambda}t}\mathbb{E}_{z_{0}}\left[A\mathrm{e}_{\lambda}(Z_{t})1_{\{t<\tau_{0}\wedge\sigma_{\tilde{z}}\}}\right]\mathrm{d}t\] \[\qquad\geq d_{z_{0},\tilde{z},\lambda}\int_{0}^{\infty}\mathrm{e}^{-d_{z_{0},\tilde{z},\lambda}t}\mathbb{E}_{z_{0}}\left[\mathrm{e}_{\lambda}(Z_{t})1_{\{t<\tau_{0}\wedge\sigma_{\tilde{z}}\}}\right]\mathrm{d}t.\]
It follows that
\[\mathrm{e}_{\lambda}(z_{0}) \leq d_{z_{0},\tilde{z},\lambda}\int_{0}^{\infty}e^{-d_{z_{0},\tilde{z},\lambda}t}\mathbb{E}_{z_{0}}\left[\mathrm{e}_{\lambda}(Z_{\tau_{0}\wedge\sigma_{\tilde{z}}})1_{\{t\geq\tau_{0}\wedge\sigma_{\tilde{z}}\}}\right]\mathrm{d}t\] \[\leq \mathbb{P}_{z_{0}}\{\tau_{0}\leq\sigma_{\tilde{z}}\}+\sup_{z\succeq\tilde{z}}[\mathrm{e}^{-\lambda_{1}x}+\mathrm{e}^{-\lambda_{2}y}]\] \[\leq \mathbb{P}_{z_{0}}\{\tau_{0}<\infty\}+\sup_{z\succeq\tilde{z}}[\mathrm{e}^{-\lambda_{1}x}+\mathrm{e}^{-\lambda_{2}y}].\]
Taking \(\tilde{x},\tilde{y}\to\infty\), we get \(\mathbb{P}_{z_{0}}\{\tau_{0}<\infty\}\geq\mathrm{e}_{\lambda}(z_{0})\), which holds for any \(\lambda\in(0,\infty)^{2}\). The result follows by letting \(\lambda\to(0,0)\). \(\Box\)
**Remark 3.3**: _The processes \(\{X_{t}:t\geq 0\}\) and \(\{Y_{t}:t\geq 0\}\) are independent when \(h>0\) is a constant. In this case, one can check that \(\mathbb{P}(\tau_{0}^{X}<\infty)=1\) when \(\phi_{1}(\lambda_{1})>0\) for any \(\lambda_{1}>0\), and \(\mathbb{P}(\tau_{0}^{Y}<\infty)=1\) if \(\phi_{2}(\lambda_{2})>0\) for any \(\lambda_{2}>0\)._
**Corollary 3.4**: _Assume that \(b\geq 0\), \(R_{1}:=\int_{\mathbb{N}^{-1}}\xi n(\mathrm{d}\xi)<0\) and Condition 3.1 holds. Then we have \(\mathbb{P}_{z_{0}}\{\tau_{0}<\infty\}=1.\)_
_Proof._ By Condition 3.1 and \(b\geq 0\), one sees that \(\phi_{1}(\lambda_{1})>0\) for any \(\lambda_{1}>0.\) Moreover, by the inequality \(1-e^{-\lambda_{2}\xi}\leq\lambda_{2}\xi\), we have \(\phi_{2}(\lambda_{2})\geq-R_{1}\lambda_{2}>0.\) The result follows by Theorem 3.2. \(\Box\)
## 4 Exponential ergodicity in the Wasserstein distance
Recall that the generator of \(\{(X_{t},Y_{t}):t\geq 0\}\) is given by (2.3) for any \(f\in C_{b}^{2,1}(\mathbb{R}_{+}^{2})\). Let \(\mathcal{D}(A)\) denote the linear space consisting of functions \(f\in C_{b}^{2,1}(\mathbb{R}_{+}^{2})\) such that the two integrals on the right-hand side of (2.3) are convergent and define continuous functions on \(D\).
To study the coupling and ergodicity of the process \(\{(X_{t},Y_{t}):t\geq 0\}\), we begin with the construction of a new coupling operator for its generator \(A\). Denote by \(\tilde{A}\) the infinitesimal generator of the Markov coupling process \(\{(X_{t},Y_{t},\tilde{X}_{t},\tilde{Y}_{t}):t\geq 0\}\). Then the operator \(\tilde{A}\) satisfies the following marginal property, i.e., for any \(f,g\in\mathcal{D}(A)\),
\[\tilde{A}F(x,y,\tilde{x},\tilde{y})=Af(x,y)+Ag(\tilde{x},\tilde{y}),\]
where \(F(x,y,\tilde{x},\tilde{y})=f(x,y)+g(\tilde{x},\tilde{y})\) for \((x,y),(\tilde{x},\tilde{y})\in D\). We call \(\tilde{A}\) a coupling operator of \(A\). In order to construct the associated coupling generator, we use the synchronous coupling to the jump system corresponding to \(Y\); see, e.g., [3]. Namely,
\[(y,\tilde{y})\to\begin{cases}(y+\xi,\tilde{y}+\xi),&[\gamma(x,y) \wedge\gamma(\tilde{x},\tilde{y})]n(\mathrm{d}\xi),\\ (y+\xi,\tilde{y}),&[\gamma(x,y)-\gamma(\tilde{x},\tilde{y})]^{+}n(\mathrm{d} \xi),\\ (y,\tilde{y}+\xi),&[\gamma(x,y)-\gamma(\tilde{x},\tilde{y})]^{-}n(\mathrm{d} \xi).\end{cases}\]
For the first component process \(X\), we use the coupling by reflection for the local part but apply the synchronous coupling to the non-local part. More precisely, for any \((x,y),(\tilde{x},\tilde{y})\in D\) with \(x\geq\tilde{x}\geq 0\), let us consider the operator \(\tilde{A}\),
\[\tilde{A}F(x,y,\tilde{x},\tilde{y}) = -bxF^{\prime}_{x}-b\tilde{x}F^{\prime}_{\tilde{x}}+cxF^{\prime \prime}_{xx}+c\tilde{x}F^{\prime\prime}_{\tilde{x}\tilde{x}}-2c\sqrt{x\tilde {x}}F^{\prime\prime}_{x\tilde{x}} \tag{4.21}\] \[+\tilde{x}\int_{0}^{\infty}[F(x+\xi,y,\tilde{x}+\xi,\tilde{y})-F( x,y,\tilde{x},\tilde{y})-\xi(F^{\prime}_{x}+F^{\prime}_{\tilde{x}})]\,m(\mathrm{d}\xi)\] \[+(x-\tilde{x})\int_{0}^{\infty}[F(x+\xi,y,\tilde{x},\tilde{y})-F( x,y,\tilde{x},\tilde{y})-\xi F^{\prime}_{x}]\,m(\mathrm{d}\xi)\] \[+[\gamma(x,y)\wedge\gamma(\tilde{x},\tilde{y})]\int_{\mathbb{N}^ {-1}}[F(x,y+\xi,\tilde{x},\tilde{y}+\xi)-F(x,y,\tilde{x},\tilde{y})]\,n( \mathrm{d}\xi)\] \[+[\gamma(x,y)-\gamma(\tilde{x},\tilde{y})]^{+}\int_{\mathbb{N}^ {-1}}[F(x,y+\xi,\tilde{x},\tilde{y})-F(x,y,\tilde{x},\tilde{y})]\,n(\mathrm{ d}\xi)\] \[+[\gamma(x,y)-\gamma(\tilde{x},\tilde{y})]^{-}\int_{\mathbb{N}^ {-1}}[F(x,y,\tilde{x},\tilde{y}+\xi)-F(x,y,\tilde{x},\tilde{y})]\,n(\mathrm{ d}\xi)\]
for \(F\in\mathcal{D}(\tilde{A})\), where \(\mathcal{D}(\tilde{A})\) denote the linear space consisting of the functions \(F\) such that the integrals in (4.21) are convergent and define continuous functions on \(D\). Similarly, we can define the case that \(0\leq x<\tilde{x}\). In the sequel, it suffices to consider \(x\geq\tilde{x}\geq 0\) due to the comparison theorem w.r.t. general CB processes; see, e.g., [14, Theorem 8.4]. It is not hard to see that \(\tilde{A}\) is indeed a coupling generator of \(A\) defined by (2.3).
**Theorem 4.1**: _There exists a coupling process \(\{((X_{t},Y_{t}),(\tilde{X}_{t},\tilde{Y}_{t})):t\geq 0\}\) whose generator \(\tilde{A}\) is defined by (4.21)._
_Proof._ Consider the following SDE:
\[\begin{cases}X_{t}=X_{0}-b\int_{0}^{t}X_{s}\,\mathrm{d}s+\int_{0}^{t}\sqrt{2cX_{s}}\,\mathrm{d}B_{s}+\int_{0}^{t}\int_{0}^{X_{s-}}\int_{0}^{\infty}\xi\,\tilde{M}(\mathrm{d}s,\mathrm{d}u,\mathrm{d}\xi),\\ Y_{t}=Y_{0}+\int_{0}^{t}\int_{0}^{Y_{s-}}\int_{0}^{h(X_{s-},Y_{s-})}\int_{\mathbb{N}^{-1}}\xi\,N(\mathrm{d}s,\mathrm{d}u,\mathrm{d}r,\mathrm{d}\xi),\\ \tilde{X}_{t}=\tilde{X}_{0}-b\int_{0}^{t}\tilde{X}_{s}\,\mathrm{d}s+\int_{0}^{t}\sqrt{2c\tilde{X}_{s}}\,\mathrm{d}B_{s}^{*}+\int_{0}^{t}\int_{0}^{\tilde{X}_{s-}}\int_{0}^{\infty}\xi\,\tilde{M}(\mathrm{d}s,\mathrm{d}u,\mathrm{d}\xi),\\ \tilde{Y}_{t}=\tilde{Y}_{0}+\int_{0}^{t}\int_{0}^{\tilde{Y}_{s-}}\int_{0}^{h(\tilde{X}_{s-},\tilde{Y}_{s-})}\int_{\mathbb{N}^{-1}}\xi\,N(\mathrm{d}s,\mathrm{d}u,\mathrm{d}r,\mathrm{d}\xi),\end{cases} \tag{4.22}\]
where
\[B_{t}^{*}=\left\{\begin{aligned} &-B_{t},& t\leq T,\\ &-2B_{T}+B_{t},& t>T,\end{aligned}\right.\]
where \(T=\inf\{t>0:X_{t}=\tilde{X}_{t}\}.\) Clearly, \((B_{t}^{*})_{t\geq 0}\) is still a standard Brownian motion. By the results in Section 2, we can determine the unique strong solution \(\{((X_{t},Y_{t}),(\tilde{X}_{t},\tilde{Y}_{t})):t\geq 0\}\) to (4.22).
On the other hand, we can apply Itô's formula to the SDE (4.22) to see that the infinitesimal generator of the process \(\{((X_{t},Y_{t}),(\tilde{X}_{t},\tilde{Y}_{t})):t\geq 0\}\) is indeed the coupling generator defined by (4.21). \(\square\)
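To visualise this construction, the following toy simulation (our own illustration, not part of the original argument) applies an Euler-Maruyama discretisation to the diffusion parts of (4.22): the jump integrals are omitted, the square root is truncated at zero for numerical stability, and the second copy is driven by \(B^{*}=-B\) until a discrete proxy for the meeting time \(T\), after which the two copies move synchronously.

```python
import numpy as np

rng = np.random.default_rng(0)

def reflection_coupling(x0, x0_tilde, b=1.0, c=0.5, dt=1e-4, n_steps=200_000):
    """Euler-Maruyama sketch of the reflection coupling for
    dX = -bX dt + sqrt(2cX) dB and dX~ = -bX~ dt + sqrt(2cX~) dB*,
    with B* = -B before the meeting time T and B* = B afterwards.
    Jump parts are omitted; this is an illustration only."""
    x, xt = x0, x0_tilde
    coupled = False
    for _ in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt))
        dB_star = dB if coupled else -dB          # reflection until meeting
        x = max(x - b * x * dt + np.sqrt(2.0 * c * x) * dB, 0.0)
        xt = max(xt - b * xt * dt + np.sqrt(2.0 * c * xt) * dB_star, 0.0)
        if not coupled and abs(x - xt) < 1e-3:    # discrete proxy for T
            coupled = True
            xt = x                                # the copies merge and stay equal
    return x, xt, coupled

x, xt, met = reflection_coupling(2.0, 0.5)
print(f"met: {met}, final values: {x:.4f}, {xt:.4f}")
```

Heuristically, reflection makes the difference \(X_{t}-\tilde{X}_{t}\) as volatile as possible, which drives the two copies together quickly, while the synchronous coupling of the jump parts prevents the jumps from separating them again.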
By \(\mathcal{P}(D)\) we denote the space of all Borel probability measures over \(D\). Given \(\mu,\nu\in\mathcal{P}(D)\), a coupling \(H\) of \((\mu,\nu)\) is a Borel probability measure on \(D\times D\) which has marginals \(\mu\) and \(\nu\), respectively. We write \(\mathcal{H}(\mu,\nu)\) for the collection of all such couplings. Let \(d\) be a metric on \(D\) such that \((D,d)\) is a complete separable metric space and define
\[\mathcal{P}_{d}(D)=\Big{\{}\rho\in\mathcal{P}(D):\int_{D}d((x,y),(0,0))\,\rho( \mathrm{d}x,\mathrm{d}y)<\infty\Big{\}}.\]
The _Wasserstein distance_ on \(\mathcal{P}_{d}(D)\) is defined by
\[W_{d}(\mu,\nu)=\inf\Big{\{}\int_{D\times D}d((x,y),(\tilde{x},\tilde{y}))\,H( \mathrm{d}x,\mathrm{d}y,\mathrm{d}\tilde{x},\mathrm{d}\tilde{y}):H\in\mathcal{ H}(\mu,\nu)\Big{\}}.\]
Moreover, it can be shown that this infimum is attained; see, e.g., [20, Theorem 6.16]. More precisely, there exists \(H\in\mathcal{H}(\mu,\nu)\) such that
\[W_{d}(\mu,\nu)=\int_{D\times D}d((x,y),(\tilde{x},\tilde{y}))\,H(\mathrm{d}x, \mathrm{d}y,\mathrm{d}\tilde{x},\mathrm{d}\tilde{y}).\]
In the remainder of the article, we will use the following particular example.
Take \(d((x,y),(\tilde{x},\tilde{y}))=\mathbf{1}_{\{(x,y)\neq(\tilde{x},\tilde{y})\}}\), then \(\mathcal{P}_{d}(D)=\mathcal{P}(D)\) and
\[W_{d}(\mu,\nu)=\|\mu-\nu\|_{TV}:=\sup\{|\mu(A)-\nu(A)|:A\ \text{Borel set}\}.\]
We write \(d=d_{TV}\) and \(W_{d_{TV}}\) is the _total variation distance_.
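For two discrete measures on a common finite support, the total variation distance reduces to \(\frac{1}{2}\sum_{i}|\mu_{i}-\nu_{i}|\). A minimal numerical illustration (ours, independent of the text):

```python
import numpy as np

def tv_distance(p, q):
    """Total variation distance between two discrete probability vectors
    on the same finite support: sup_A |p(A) - q(A)| = 0.5 * ||p - q||_1."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return 0.5 * np.abs(p - q).sum()

print(tv_distance([0.5, 0.3, 0.2], [0.4, 0.4, 0.2]))  # 0.1
```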
**Definition 4.2**: _We say the mixed state branching process with interaction \((X,Y)\) or its transition semigroup \((P_{t})_{t\geq 0}\) is exponentially ergodic in the total variation distance with rate \(\lambda_{0}>0\) if it possesses a unique stationary distribution \(\mu\) and there is a nonnegative function \(\nu\mapsto C(\nu)\) on \(\mathcal{P}(D)\) such that_
\[W_{d_{TV}}(\nu P_{t},\mu)\leq C(\nu)\mathrm{e}^{-\lambda_{0}t},\quad t\geq 0,\nu\in\mathcal{P}(D). \tag{4.23}\]
By standard arguments, (4.23) follows if there exists a constant \(C_{0}>0\) such that
\[W_{d_{TV}}(P_{t}((x,y),\cdot),P_{t}((\tilde{x},\tilde{y}),\cdot))\leq C_{0} \mathrm{e}^{-\lambda_{0}t}d_{TV}((x,y),(\tilde{x},\tilde{y})),\quad t\geq 0. \tag{4.24}\]
Fix some positive constant \(l_{0}>0\) and define a function \(f\) on \(D\times D\) by
\[f(x,y,\tilde{x},\tilde{y})=\Big{[}1+\varphi_{l_{0}}(|x-\tilde{x}|)+\psi_{l_{ 0}}(|y-\tilde{y}|)\Big{]}\mathbf{1}_{\{(x,y)\neq(\tilde{x},\tilde{y})\}}, \tag{4.25}\]
where
\[\varphi_{l_{0}}(r):=c_{0}(r\wedge l_{0})+(r\wedge l_{0})^{\theta_{1}},\quad \psi_{l_{0}}(r):=r\wedge l_{0},\qquad r\geq 0\]
with \(c_{0}>0\) and \(\theta_{1}\in(0,1)\); the exact values of these constants will be determined later. It is easy to see that
\[1\leq f(x,y,\tilde{x},\tilde{y})\leq 1+\varphi_{l_{0}}(l_{0})+\psi_{l_{0}}(l_{0}),\quad(x,y)\neq(\tilde{x},\tilde{y}). \tag{4.26}\]
Clearly, the function \(f\) controls the distance \(d_{TV}\) in the sense that there are constants \(\lambda_{2}\geq\lambda_{1}>0\) such that for \((x,y),(\tilde{x},\tilde{y})\in D\),
\[\lambda_{1}f(x,y,\tilde{x},\tilde{y})\leq d_{TV}((x,y),(\tilde{x},\tilde{y})) \leq\lambda_{2}f(x,y,\tilde{x},\tilde{y}). \tag{4.27}\]
**Condition 4.3**: _One of the following two assumptions holds:_
_(4.3.1) \(c>0\)._
_(4.3.2) There exist \(\alpha\in(1,2)\) and \(C_{*}>0\) such that \(\int_{0}^{r}z^{2}\,m({\rm d}z)\geq C_{*}r^{2-\alpha}\) for \(r\in(0,1]\)._
**Condition 4.4**: _There exists \(k_{2}>0\) such that for all \(|y-\tilde{y}|\in[0,l_{0}]\),_
\[[\gamma(x,y)-\gamma(\tilde{x},\tilde{y})]^{+}{\bf 1}_{\{y-\tilde{y}<0\}}+[ \gamma(x,y)-\gamma(\tilde{x},\tilde{y})]^{-}{\bf 1}_{\{y-\tilde{y}\geq 0\}} \leq k_{2}|x-\tilde{x}|. \tag{4.28}\]
**Condition 4.5**: \(b>0\) _and \(R_{1}=\int_{\mathbb{N}^{-1}}\xi\,n({\rm d}\xi)\in(-\infty,0)\)._
**Remark 4.6**: _(4.3.2) in Condition 4.3 means that the driving Lévy jump process has an \(\alpha\)-stable process as a component, where \(\alpha\in(1,2)\). Note that either (4.3.1) or (4.3.2) in Condition 4.3 implies the so-called Grey condition in Condition 3.1, and this is consistent with the results in [13]; see also [14]._
_Condition 4.4 holds when \(h(x,y)\equiv r\) for some fixed positive constant \(r\). In this case, \(\{Y_{t}^{r}:t\geq 0\}\) is a standard continuous-time branching process with branching rate \(r>0\) and offspring distribution \((p_{\xi},\xi\in\mathbb{N})\)._
_\(R_{1}<0\) in Condition 4.5 means that the first moment of the offspring distribution of each individual is strictly less than 1, i.e., \(\sum_{j}jp_{j}<1\), which is the so-called subcritical case; see, e.g., [1, p. 112]. Under Conditions 4.3 and 4.5, the process \(\{Z_{t}:t\geq 0\}\) becomes extinct in finite time by Corollary 3.4._
**Theorem 4.7**: _Suppose that Conditions 4.3-4.5 are satisfied. Then there are constants \(\lambda_{0}>0\) and \(C_{0}>0\) such that (4.24) holds._
_Proof. Step 1._ Assume that (4.3.2) in Condition 4.3 is satisfied. We shall first give some estimates of \(\tilde{A}\varphi_{l_{0}}(x-\tilde{x})\) and \(\tilde{A}\psi_{l_{0}}(|y-\tilde{y}|)\).
\[\tilde{A}\varphi_{l_{0}}(x-\tilde{x})\!\leq\!-bc_{0}(x-\tilde{x})\!+\!(x- \tilde{x})\!\int_{0}^{\infty}\!\!\Big{[}\varphi_{l_{0}}(x-\tilde{x}+\xi)- \varphi_{l_{0}}(x-\tilde{x})-\xi\varphi^{\prime}_{l_{0}}(x-\tilde{x})\Big{]} \!m({\rm d}\xi). \tag{4.29}\]
By Taylor's formula and \(b>0\), \(\tilde{A}\varphi_{l_{0}}(x-\tilde{x})\leq 0\) for all \(x\geq\tilde{x}\geq 0\). In particular, when \(x-\tilde{x}\in[0,l_{0}]\), set \(\delta_{0}=\min\{\frac{1}{l_{0}},\frac{2}{2-\theta_{1}}\}\) and \(\theta_{1}=\frac{\alpha-1}{2}\in(0,1)\). By using Taylor's formula again and some elementary calculations,
\[\tilde{A}\varphi_{l_{0}}(x-\tilde{x}) \leq -bc_{0}(x-\tilde{x})+(x-\tilde{x})\int_{0}^{\infty}\Big{[}\varphi(x-\tilde{x}+\xi)-\varphi(x-\tilde{x})-\xi\varphi^{\prime}(x-\tilde{x})\Big{]}\,m({\rm d}\xi) \tag{4.30}\] \[\leq -bc_{0}(x-\tilde{x})+(x-\tilde{x})\int_{0}^{\delta_{0}(x-\tilde{x})}\Big{[}\frac{\xi^{2}}{2}\varphi^{\prime\prime}(x-\tilde{x})+\frac{\xi^{3}}{6}\varphi^{\prime\prime\prime}(x-\tilde{x})\Big{]}\,m({\rm d}\xi)\] \[= -bc_{0}(x-\tilde{x})+\frac{x-\tilde{x}}{2}\varphi^{\prime\prime}(x-\tilde{x})\Big{[}1+\frac{\delta_{0}(x-\tilde{x})}{3}\frac{\varphi^{\prime\prime\prime}(x-\tilde{x})}{\varphi^{\prime\prime}(x-\tilde{x})}\Big{]}\int_{0}^{\delta_{0}(x-\tilde{x})}\xi^{2}\,m({\rm d}\xi)\] \[\leq -bc_{0}(x-\tilde{x})+\frac{(x-\tilde{x})^{\theta_{1}-1}\theta_{1}(\theta_{1}-1)}{6}\int_{0}^{\delta_{0}(x-\tilde{x})}\xi^{2}\,m({\rm d}\xi)\] \[\leq -bc_{0}(x-\tilde{x})-\frac{C_{*}\theta_{1}(1-\theta_{1})\delta_{0}^{2-\alpha}}{6}(x-\tilde{x})^{-\theta_{1}},\]
where the third inequality is due to the fact that
\[1+\frac{\delta_{0}(x-\tilde{x})}{3}\frac{\varphi^{\prime\prime\prime}(x-\tilde{ x})}{\varphi^{\prime\prime}(x-\tilde{x})}\geq\frac{1}{3}\]
and the last inequality follows from Condition 4.3. On the other hand,
\[\tilde{A}\psi_{l_{0}}(|y-\tilde{y}|) = [\gamma(x,y)-\gamma(\tilde{x},\tilde{y})]^{+}\int_{\mathbb{N}^{-1}}\Big{[}\psi_{l_{0}}(|y-\tilde{y}+\xi|)-\psi_{l_{0}}(|y-\tilde{y}|)\Big{]}\,n(\mathrm{d}\xi)\] \[+[\gamma(x,y)-\gamma(\tilde{x},\tilde{y})]^{-}\int_{\mathbb{N}^{-1}}\Big{[}\psi_{l_{0}}(|y-\tilde{y}-\xi|)-\psi_{l_{0}}(|y-\tilde{y}|)\Big{]}\,n(\mathrm{d}\xi)\] \[\leq \mathbf{1}_{\{|y-\tilde{y}|\leq l_{0}\}}\Big{(}\psi^{\prime}_{l_{0}}(|y-\tilde{y}|)R_{1}\Big{\{}[\gamma(x,y)-\gamma(\tilde{x},\tilde{y})]^{+}\mathbf{1}_{\{y-\tilde{y}>0\}}+[\gamma(x,y)-\gamma(\tilde{x},\tilde{y})]^{-}\mathbf{1}_{\{y-\tilde{y}<0\}}\Big{\}}\] \[\quad+R_{2}\Big{\{}[\gamma(x,y)-\gamma(\tilde{x},\tilde{y})]^{+}\mathbf{1}_{\{y-\tilde{y}<0\}}+[\gamma(x,y)-\gamma(\tilde{x},\tilde{y})]^{-}\mathbf{1}_{\{y-\tilde{y}\geq 0\}}\Big{\}}\Big{)},\]
where \(R_{2}=2\psi(l_{0}).\) Clearly, when \(|y-\tilde{y}|>l_{0}\), \(\tilde{A}\psi_{l_{0}}(|y-\tilde{y}|)\leq 0\). And when \(|y-\tilde{y}|\in[0,l_{0}]\), we use Condition 4.4 and Condition 4.5 to see that
\[\tilde{A}\psi_{l_{0}}(|y-\tilde{y}|)\leq R_{2}k_{2}(x-\tilde{x}). \tag{4.31}\]
**Case 1:**\(x-\tilde{x},|y-\tilde{y}|\in[0,l_{0}]\). In view of (4.30) and (4.31), if we take \(c_{0}\geq\frac{R_{2}k_{2}}{b}\),
\[\tilde{A}\Big{(}\varphi_{l_{0}}(x-\tilde{x})+\psi_{l_{0}}(|y- \tilde{y}|)\Big{)} \leq -\frac{C_{*}\theta_{1}(1-\theta_{1})\delta_{0}^{2-\alpha}}{6}(x- \tilde{x})^{-\theta_{1}}\] \[\leq -\frac{C_{*}\theta_{1}(1-\theta_{1})\delta_{0}^{2-\alpha}}{6}l_{ 0}^{-\theta_{1}}.\]
**Case 2:**\(x-\tilde{x}>l_{0},|y-\tilde{y}|\in[0,l_{0}]\). In view of (4.29) and (4.31) and \(c_{0}=\frac{2R_{2}k_{2}}{b}\),
\[\tilde{A}\Big{(}\varphi_{l_{0}}(x-\tilde{x})+\psi_{l_{0}}(|y- \tilde{y}|)\Big{)}\leq-R_{2}k_{2}(x-\tilde{x})<-R_{2}k_{2}l_{0}.\]
**Case 3:**\(x-\tilde{x}\in[0,l_{0}],|y-\tilde{y}|>l_{0}\). By (4.30) it is not hard to show that
\[\tilde{A}\Big{(}\varphi_{l_{0}}(x-\tilde{x})+\psi_{l_{0}}(|y- \tilde{y}|)\Big{)}\leq-\frac{C_{*}\theta_{1}(1-\theta_{1})\delta_{0}^{2-\alpha }}{6}l_{0}^{-\theta_{1}}.\]
**Case 4:**\(x-\tilde{x}>l_{0},|y-\tilde{y}|>l_{0}\). Similarly,
\[\tilde{A}\Big{(}\varphi_{l_{0}}(x-\tilde{x})+\psi_{l_{0}}(|y- \tilde{y}|)\Big{)}\leq-bc_{0}(x-\tilde{x})\leq-bc_{0}l_{0}.\]
_Step 2._ Assume that (4.3.1) in Condition 4.3 is satisfied. By using Taylor's formula again for the jump-part integral, one can deduce that
\[\tilde{A}\varphi_{l_{0}}(x-\tilde{x}) \leq -bc_{0}(x-\tilde{x})-c\theta_{1}(1-\theta_{1})(x-\tilde{x})^{\theta_{1}-2}(\sqrt{x}+\sqrt{\tilde{x}})^{2} \tag{4.32}\] \[\leq -bc_{0}(x-\tilde{x})-c\theta_{1}(1-\theta_{1})(x-\tilde{x})^{\theta_{1}-1}\]
by the fact that \((\sqrt{x}+\sqrt{\tilde{x}})^{2}\geq(x-\tilde{x})\). One can see that (4.32) is similar to (4.30) since \(\theta_{1}<1\). We omit the details.
In conclusion, by choosing \(c_{0}=\frac{2R_{2}k_{2}}{b}\) and \(\theta_{1}=\frac{\alpha-1}{2}\) if (4.3.2) holds and \(\theta_{1}\in(0,1)\) is arbitrary if (4.3.1) holds, for any \((x,y),(\tilde{x},\tilde{y})\in D\) with \((x,y)\neq(\tilde{x},\tilde{y})\), there exists \(C>0\) such that
\[\tilde{A}f(x,y,\tilde{x},\tilde{y})\leq-Cf(x,y,\tilde{x},\tilde{y})\]
by noticing (4.26). Following arguments similar to Step 2 of the proof of [16, Theorem 3.1], together with (4.27), we obtain the desired result. |
2310.12843 | Local behavior of critical points of isotropic Gaussian random fields | In this paper we examine isotropic Gaussian random fields defined on $\mathbb{R}^N$ satisfying certain conditions. Specifically, we investigate the type of a critical point situated within a small vicinity of another critical point, with both points surpassing a given threshold. It is shown that the Hessian of the random field at such a critical point is equally likely to have a positive or negative determinant. Furthermore, as the threshold tends to infinity, almost all the critical points above the threshold are local maxima and the saddle points with index $N-1$. Consequently, we conclude that the closely paired critical points above a high threshold must comprise one local maximum and one saddle point with index $N-1$. | Paul Marriott, Weinan Qi, Yi Shen | 2023-10-19T15:55:06Z | http://arxiv.org/abs/2310.12843v1 | # Local behavior of critical points of isotropic Gaussian random fields
###### Abstract
In this paper we examine isotropic Gaussian random fields defined on \(\mathbb{R}^{N}\) satisfying certain conditions. Specifically, we investigate the type of a critical point situated within a small vicinity of another critical point, with both points surpassing a given threshold. It is shown that the Hessian of the random field at such a critical point is equally likely to have a positive or negative determinant. Furthermore, as the threshold tends to infinity, almost all the critical points above the threshold are local maxima and the saddle points with index \(N-1\). Consequently, we conclude that the closely paired critical points above a high threshold must comprise one local maximum and one saddle point with index \(N-1\).
Department of Statistics and Actuarial Science, University of Waterloo. Waterloo, ON N2L 3G1, Canada.
Email: [email protected], [email protected], [email protected]
This work is supported by NSERC grant 2020-04356.
## 1 Introduction
Let \(X=\{X(\boldsymbol{t}),\boldsymbol{t}\in\mathbb{R}^{N}\}\) be an isotropic Gaussian random field defined on \(\mathbb{R}^{N}\), \(N=1,2,\ldots\). The critical points of \(X\) are the points at which its gradient vanishes. They are naturally classified into different types such as local maxima, saddle points of different kinds, and local minima. This paper explores the local interactions between critical points for which the values of \(X\) exceed a threshold \(u\). More specifically, we pose and address the following question:
_When two critical points surpassing \(u\) are situated in close proximity to one another, what types can they belong to?_
The motivation for studying the critical points of Gaussian random fields largely stems from random topology, an emerging area of probability that deals with the topological features of random objects, such as random sets or random graphs. Its application in statistics is known as topological data analysis, and has also garnered significant research interest. For an overview of this field, readers can refer to [2] or [10]. An important mathematical tool employed in these areas is Morse theory, which suggests that topological information related to the homology groups of a set can be derived from the critical points of a function defined on that set, provided that the function meets certain non-degeneracy conditions [9]. As a result, understanding the behavior of critical points, such as their locations and heights, proves highly beneficial.
It has long been believed that for a stationary Gaussian random field satisfying some mild conditions, the locations of critical points exceeding \(u\) will converge to a (homogeneous) Poisson point process as \(u\) tends to infinity [1]. Putting its proof aside, another issue when applying this so-called Poisson clumping heuristic is that in real-world scenarios, it is not feasible for \(u\) to truly tend to infinity. Consequently, it is often necessary to consider the deviation from the Poisson limit, which is due to the interactions between the critical points, as Poisson limit corresponds to the independent case. For example, if the covariance is positive and a critical point above a high |
2308.07962 | Detecting Galaxy Tidal Features Using Self-Supervised Representation Learning | Low surface brightness substructures around galaxies, known as tidal features, are a valuable tool in the detection of past or ongoing galaxy mergers, and their properties can answer questions about the progenitor galaxies involved in the interactions. The assembly of current tidal feature samples is primarily achieved using visual classification, making it difficult to construct large samples and draw accurate and statistically robust conclusions about the galaxy evolution process. With upcoming large optical imaging surveys such as the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST), predicted to observe billions of galaxies, it is imperative that we refine our methods of detecting and classifying samples of merging galaxies. This paper presents promising results from a self-supervised machine learning model, trained on data from the Ultradeep layer of the Hyper Suprime-Cam Subaru Strategic Program optical imaging survey, designed to automate the detection of tidal features. We find that self-supervised models are capable of detecting tidal features, and that our model outperforms previous automated tidal feature detection methods, including a fully supervised model. An earlier method applied to real galaxy images achieved 76% completeness for 22% contamination, while our model achieves considerably higher (96%) completeness for the same level of contamination. We emphasise a number of advantages of self-supervised models over fully supervised models including maintaining excellent performance when using only 50 labelled examples for training, and the ability to perform similarity searches using a single example of a galaxy with tidal features. | Alice Desmons, Sarah Brough, Francois Lanusse | 2023-08-15T18:01:02Z | http://arxiv.org/abs/2308.07962v2 | # Detecting Galaxy Tidal Features Using Self-Supervised Representation Learning
###### Abstract
Low surface brightness substructures around galaxies, known as tidal features, are a valuable tool in the detection of past or ongoing galaxy mergers, and their properties can answer questions about the progenitor galaxies involved in the interactions. The assembly of current tidal feature samples is primarily achieved using visual classification, making it difficult to construct large samples and draw accurate and statistically robust conclusions about the galaxy evolution process. With upcoming large optical imaging surveys such as the Vera C. Rubin Observatory's Legacy Survey of Space and Time (LSST), predicted to observe billions of galaxies, it is imperative that we refine our methods of detecting and classifying samples of merging galaxies. This paper presents promising results from a self-supervised machine learning model, trained on data from the Ultradeep layer of the Hyper Suprime-Cam Subaru Strategic Program optical imaging survey, designed to automate the detection of tidal features. We find that self-supervised models are capable of detecting tidal features, and that our model outperforms previous automated tidal feature detection methods, including a fully supervised model. An earlier method achieved 76% completeness for 22% contamination, while our model achieves considerably higher (96%) completeness for the same level of contamination. We emphasise a number of advantages of self-supervised models over fully supervised models including maintaining excellent performance when using only 50 labelled examples for training, and the ability to perform similarity searches using a single example of a galaxy with tidal features.
keywords: galaxies: interactions - galaxies: evolution - methods: data analysis
## 1 Introduction
The currently accepted model of the Universe, known as the Lambda Cold Dark Matter (\(\Lambda\)CDM) Cosmological Model, postulates that galaxies evolve through a process which is referred to as the 'hierarchical merger model', wherein the growth of the universe's highest-mass galaxies is dominated by merging with lower-mass galaxies (e.g. Lacey and Cole, 1994; Cole et al., 2000; Robotham et al., 2014; Martin et al., 2018). During the merging process, the extreme gravitational forces involved cause stellar material to be pulled out from the galaxies, forming diffuse non-uniform regions of stars in the outskirts of the galaxies, known as tidal features (e.g. Toomre and Toomre, 1972). Examples of these features from the optical Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP; Aihara et al., 2018) are shown in Figure 1. These tidal features contain information about the merging history of the galaxy, and can thus be used to study the galaxy evolution process. Using tidal features to detect merging galaxies has a number of advantages over other methods such as spectroscopically detected galaxy close pairs. Not only do they remain observable significantly longer than close pairs (a few Gyr compared to \(\sim\) 600 Myr; Lotz et al., 2011; Hendel and Johnston, 2015; Huang and Fan, 2022) but they can also be used to identify mergers where a companion galaxy has already been ripped apart or is too low mass to be detected spectroscopically. Hence, using tidal features to study galaxy evolution can provide important observational confirmation on the contribution of low-mass galaxies to the merging process (e.g. Johnston et al., 2008).
In order to draw accurate and statistically robust conclusions about this evolution process, we require a large sample of galaxies exhibiting tidal features. This is difficult to achieve due to the extremely low surface brightness of tidal features, which can easily reach \(\mu_{r}\geq 27\) mag arcsec\({}^{-2}\). The limiting surface brightness of wide-field optical astronomical surveys often does not reach these depths and as a consequence, many tidal features will not be identified simply because the features are not visible. This not only causes tidal feature incidence measures to be too low, but also increases the work required and number of images that need to be classified to assemble a sample with a significant number of galaxies with tidal features. With the next generation of wide-field optical imaging surveys reaching new limiting depths, such as the Vera C. Rubin Observatory's Legacy Survey of Space and Time (LSST; Ivezic et al., 2019) which is predicted to reach \(\mu_{r}\sim\) 30.3 mag arcsec\({}^{-2}\) (Martin et al., 2022), assembling a statistically significant sample of galaxies with tidal features is becoming more feasible. One challenge associated with surveys like LSST, due to commence in 2025 and run for 10 years, is the amount of data predicted to be released, with LSST expected
to output over 500 petabytes of imaging data including billions of galaxies (Ivezic et al., 2019). Current tidal feature detection and classification is primarily achieved through visual identification (e.g. Tal et al., 2009; Shen et al., 2012; Atkinson et al., 2013; Hood et al., 2018; Bilek et al., 2020; Martin et al., 2022), but billions of galaxies are virtually impossible to classify visually by humans, even using large community based projects such as Galaxy Zoo (Lintott et al., 2008; Darg et al., 2010), and hence we are in urgent need of tools that can automate this classification task and isolate galaxies with tidal features.
Recent years have seen a steady increase in the use of machine learning for these types of data-intensive astrophysical classification tasks (Huertas-Company and Lanusse, 2023). Such tasks have included grouping galaxies according to colour and morphology using both supervised Convolutional Neural Networks (CNNs; e.g. Hocking et al., 2018; Martin et al., 2020) and self-supervised networks (e.g. Hayat et al., 2021), identifying the formation pathways of galaxies using a supervised CNN (e.g. Cavanagh and Bekki, 2020; Diaz et al., 2019), and identifying new strong gravitational lens candidates using a self-supervised network (e.g. Stein et al., 2021, 2022). A number of works have also focused on classifying mergers and non-mergers using both random forest algorithms (e.g. Snyder et al., 2019) and supervised CNNs (e.g. Pearson et al., 2019; Suelves et al., 2023). CNNs have even shown great potential for the identification of low surface brightness features such as tidal features (e.g. Walmsley et al., 2019; Bickley et al., 2021; Dominguez Sanchez et al., 2023).
With the promising recent results of machine learning in galaxy classification tasks, we turn to machine learning to construct a model which can take galaxy images as input, convert them into representations - low-dimensional maps which preserve the important information in the image - and output a classification based on whether the galaxy exhibits tidal features. We are looking for a tool which can perform classification and be told what features to look for, so an unsupervised machine learning model is not ideal. We also want a tool which does not require large labelled datasets for training, due to there being few such datasets of galaxies with tidal features and these being time-demanding to construct in themselves, so a supervised model is not ideal either. Instead we use a recently developed machine learning method that is essentially a middle-point between supervised and unsupervised learning, known as self-supervised machine learning (SSL; He et al., 2019; Chen et al., 2020; b, a; Chen and He, 2020). Such models do not require labelled data for the training of the encoder, which learns to transform images into meaningful low-dimensional representations, but can perform classification when paired with a linear classifier and a small labelled dataset. Instead of labels, SSL models rely on augmentations (e.g. image rotation, noise addition, PSF blurring) being applied to the encoder training dataset, to learn under which conditions the output low-dimensional representations should be invariant. These types of models have been successfully used for a variety of astronomical applications (Huertas-Company et al., 2023) including classification of galaxy morphology (e.g. Walmsley et al., 2022; Wei et al., 2022; Ciprijanovic et al., 2023), clustering of galaxies according to age, metallicity, and velocity (e.g. Sarmineto et al., 2021), estimation of black hole properties (e.g. Shen et al., 2022), radio galaxy classification (e.g. Slijepcevic et al., 2022, 2023), and classification of solar magnetic field measurements (e.g. Lamdouar et al., 2022). Another benefit to self-supervised models, other than training on unlabelled data, is the computational cost, self-supervised models have been shown to need only 1% of the computational power needed by supervised models (Hayat et al., 2021). Self-supervised models are also much easier to adapt to perform new tasks, and apply to datasets from new data releases or different astronomical surveys (Ciprijanovic et al., 2023), making this kind of model perfect for our goal of applying a tool developed using HSC-SSP data to future LSST data and potentially other future large imaging surveys such as Euclid (Borlaff et al., 2022) and the Nancy Grace Roman Space Telescope 1(Spergel et al., 2015). Hayat et al. (2021) and Stein et al. (2022) show that once the encoder section of a self-supervised model has been trained one can conduct a simple similarity search to find objects of interest, or apply a linear classifier directly onto the encoder's outputs to separate the data into a variety of classes.
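To make the similarity-search idea concrete, the sketch below retrieves the nearest neighbours of a single example by cosine similarity in representation space. It is our illustration, not the specific procedure of Hayat et al. (2021) or Stein et al. (2022); `encoder` is a stand-in for any trained representation network that maps a batch of images to an array of vectors.

```python
import numpy as np

def similarity_search(query_image, images, encoder, top_k=10):
    """Return the indices of the top_k images whose (L2-normalised)
    representations have the highest cosine similarity to the query."""
    reps = encoder(images)                      # shape (N, d)
    query = encoder(query_image[None, ...])     # shape (1, d)
    reps = reps / np.linalg.norm(reps, axis=1, keepdims=True)
    query = query / np.linalg.norm(query, axis=1, keepdims=True)
    sims = (reps @ query.T).ravel()             # cosine similarities
    return np.argsort(sims)[::-1][:top_k]
```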
Footnote 1: [https://roman.gsfc.nasa.gov/](https://roman.gsfc.nasa.gov/)
In this paper, we demonstrate that SSL can be used to detect tidal features amidst a dataset of thousands of otherwise uninteresting galaxies using \(\sim\)50,000 \(grizy\)-band Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP; Aihara et al., 2018) 128\(\times\)128 pixel galaxy images. We also show the advantage of using a self-supervised model, as opposed to a supervised model, for the detection of merging galaxies, particularly in the regime of fewer labels, using the dataset of \(\sim\) 6000 Sloan Digital Sky Survey (SDSS; York et al., 2000) galaxies constructed by Pearson et al. (2019). In Section 2 we detail data and sample selection, as well as the model architecture, including the augmentations we apply to the data. Section 3 details our results including our model's ability to detect tidal features in HSC-SSP data and a comparison of our model's performance to a supervised model when used for the detection of merging galaxies in SDSS images. In Section 4 we compare our results with those in the literature and present our conclusions.
## 2 Methods
### Data Sources and Sample Selection
For this work we use two separate datasets sourced from two different surveys. The first dataset is assembled from HSC-SSP galaxies and is used to show the potential of self-supervised machine learning
Figure 1: Example of galaxies with tidal features from HSC-SSP (\(gri\)-band cutout images). Top row from left to right: shells, stream. Bottom row from left to right: asymmetric halo, and double nucleus.
models for the detection of tidal features. The second dataset consists of SDSS galaxies and is used to compare the performance of this self-supervised model with an earlier supervised model for the detection of merging galaxies.
#### 2.1.1 HSC-SSP Dataset
The HSC-SSP dataset used for this work is sourced from the Ultradeep (UD) layer of the HSC-SSP Public Data Release 2 (PDR2; Aihara et al., 2019) for deep galaxy images. The HSC-SSP survey (Aihara et al., 2018) is a three-layered, _grizy_-band imaging survey carried out with the Hyper Suprime-Cam (HSC) on the 8.2m Subaru Telescope located in Hawaii. During the development of this project the HSC Public Data Release 3 (Aihara et al., 2022) became available. Although this is an updated version of the HSC-PDR2, we do not use this data due to the differences in the data treatment pipelines. HSC-PDR2 has been widely tested for low surface brightness studies (e.g. Huang et al., 2018, 2020; Li et al., 2022; Martinez-Lombilla et al., 2023) and fulfils the requirements for our study. The HSC-SSP survey comprises of three layers: Wide, Deep, and Ultradeep which are observed to varying surface brightness depths. We use the Ultradeep field, which spans an area of 3.5 deg\({}^{2}\) and reaches \(\mu_{r}\sim 29.82\) mag arcsec\({}^{-2}\)(Martinez-Lombilla et al., 2023), a surface brightness depth faint enough to detect tidal features. We use HSC-SSP data not only for its depth, allowing us to detect tidal features, but also due to its similarity to LSST data. HSC-SSP data are reduced using the LSST pipeline (Bosch et al., 2018; Aihara et al., 2019) and since the two surveys produce similar data, it will be more straightforward to adapt our SSL model and train it on LSST data once it is released. The HSC-SSP PDR2 has a median \(i\)-band seeing of 0.6 arcsec and a spatial resolution of 0.168 arcsec per pixel.
We assemble our initial unlabelled dataset of \(\sim\)50,000 galaxies by parsing objects in the HSC-SSP PDR2 database using an SQL search and only selecting objects which satisfy a pre-defined criteria. We describe our criteria, their definitions, and our reasoning for choosing them in Table 1. We filter our sample of 50,000 galaxies to remove repeat objects in the dataset by removing objects whose declinations and right ascensions are too close together (within a 5 arcsec radius circle), leaving us with a dataset of \(\sim\) 44,000 unlabelled objects. We access the HSC-SSP galaxy images using the Unagi Python tool (Huang et al., 2019) which, given a galaxy's right ascension and declination, allows us to create multi-band 'HSC outout' images of size \(128\times 128\) pixels (\(21\times 21\) arcsecs), centred around each galaxy. Each cutout is downloaded in five (\(g,~{}r,~{}i,~{}z,~{}y\)) bands.
As mentioned in Section 1 self-supervised networks encode images into meaningful lower-dimensional representations but cannot inherently perform classification tasks. To perform the classification, we will be using a linear classifier which takes in the encoded representations as input and classifies galaxies based on the presence of tidal features. To train this linear classifier we require a small labelled dataset of galaxies with and without tidal features. We use the HSC-SSP PDR2 dataset assembled by Desmons et al. (2023) composed of 211 galaxies with tidal features and 641 galaxies without tidal features. These galaxies were selected from a volume-limited sample with spectroscopic redshift limits \(0.04\leq z\leq 0.2\) and stellar mass limits \(9.50\leq\log_{10}(M_{\star}/\mbox{M}_{\odot})\leq 11.00\) and have \(i\)-band magnitudes in the range \(12.8<i\leq 21.6\) mag. To increase the size of our tidal feature training sample we classified additional galaxies from our HSC-SSP PDR2 unlabelled dataset of \(\sim\) 44,000 objects, according to the classification scheme outlined in Desmons et al. (2023). The classification was performed by the first author. We use an equal number of galaxies with and without tidal features for model training and testing to prevent the model from learning an accidental bias against the category with fewer images. Our final labelled sample contains 760 galaxies, 380 with tidal features, labelled 1, and 380 without, labelled 0. Usually a labelled dataset is split into 80%, 10%, and 10% for training, validation, and testing respectively. However our labelled dataset is small and we want to maximise the number of galaxies in our testing dataset such that the model can be evaluated accurately. Hence we split our labelled dataset into training, validation, and testing datasets composed of 600 (79%), 60 (8%), and 100 (13%) galaxies respectively.
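A minimal sketch of such a balanced split (our own code; that the 600/60/100 split was stratified per class exactly this way is an assumption):

```python
import numpy as np

def split_balanced(idx_tidal, idx_smooth, n_train=300, n_val=30, n_test=50, seed=0):
    """Split 380 + 380 labelled galaxies into balanced train/validation/test
    sets of 600/60/100 images (equal numbers of each class in every split)."""
    rng = np.random.default_rng(seed)
    splits = {"train": [], "val": [], "test": []}
    for idx in (idx_tidal, idx_smooth):
        idx = rng.permutation(idx)
        splits["train"].append(idx[:n_train])
        splits["val"].append(idx[n_train:n_train + n_val])
        splits["test"].append(idx[n_train + n_val:n_train + n_val + n_test])
    return {name: np.concatenate(parts) for name, parts in splits.items()}
```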
#### 2.1.2 SDSS Dataset
Our second dataset, consisting of SDSS data release 7 (Abazajian et al., 2009) galaxies, was assembled by Pearson et al. (2019) for the purpose of training a supervised network to classify merging and non-merging galaxies. This dataset contains \(\sim\) 10,000 non-merging galaxies and \(\sim\) 3000 merging galaxies which were selected from the Darg et al. (2010a,b) catalogue, assembled from Galaxy Zoo (Lintott et al., 2008) classifications. All galaxies in this dataset have spectroscopic redshift limits \(0.005\leq z\leq 0.1\) and are in the stellar mass range \(9.5\leq\log_{10}(M_{\star}/\mbox{M}_{\odot})\leq 12.0\). Additionally, they were required to have SDSS spectra, which were only taken for objects with apparent magnitude \(r~{}<\) 17.7 mag. Due to this magnitude limit being significantly brighter than the HSC-SSP dataset described above, only the brightest tidal features would be visible in these images. This means that the merging galaxies in this sample were selected not based on the presence of tidal features, but rather based on whether galaxies showed obvious signs of mergers with at least two clearly interacting galaxies or significantly morphologically disturbed systems. To prevent the model learning an accidental bias against the category with fewer images we reduce the size of our non-merging dataset by randomly selecting only 3000 non-merging galaxies from the set of 10,000. The SDSS dataset used to train and test our model now consists of 3000 merging galaxies, labelled 1, and 3000 non-merging galaxies, labelled 0. This gives us a sample of 6000 SDSS objects of which we obtain \(256\times 256\) pixel images, downloaded in three (\(g,~{}r,~{}i\)) bands. For more detail about the construction and classification of the SDSS dataset we refer the reader to Pearson et al. (2019). When a labelled dataset is required for training (i.e. for the supervised model or the linear classifier) we split this dataset into 4800 (80%), 600 (10%), and 600 (10%) galaxies to use for training, validation, and testing respectively. For the portion of the training of our self-supervised model which requires an unlabelled dataset, we increase the size of our SDSS dataset from 6000 galaxies to 50,000 galaxies. This is done by making the dataset repeat, selecting 50,000 samples of the 6000 galaxies, and randomly rotating each image 0, 90, 180, or 270 degrees to create 'new' images.
The analysis presented in Section 3 is focused on comparing the performance of supervised and self-supervised models when a varying number of labels are used for training. When using fewer labels to train a model, we do not reduce the number of images shown to the model during training, but rather the number of unique images. For example, if the model trained on 80% of the full SDSS training set is shown 4800 unique images, the model trained on 2% of the labelled data is still shown 4800 images but only 120 of these images are unique. This ensures that the change in model performance is indeed due to the number of unique labelled examples, and not due to the model being shown fewer images.
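Both resampling steps, the rotation-based enlargement of the SDSS set described above and the fixed-total, fewer-unique-labels protocol, can be sketched in a few lines (our illustration, with hypothetical function and argument names):

```python
import numpy as np

def resample_dataset(images, labels, n_unique, n_total, rotate=False, seed=0):
    """(i) With rotate=True, enlarge a dataset by repeated sampling with
    random 90-degree rotations (e.g. 6,000 SDSS galaxies -> 50,000 images).
    (ii) With rotate=False and n_unique < len(images), keep the number of
    images shown fixed while reducing the number of unique examples."""
    rng = np.random.default_rng(seed)
    unique_idx = rng.choice(len(images), size=n_unique, replace=False)
    idx = rng.choice(unique_idx, size=n_total, replace=True)
    out = images[idx].copy()
    if rotate:
        turns = rng.integers(0, 4, size=n_total)   # 0/90/180/270 degrees
        out = np.stack([np.rot90(im, k) for im, k in zip(out, turns)])
    return out, labels[idx]
```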
### Image Pre-processing and Augmentations
Before the images are augmented and fed through the model we apply a pre-processing function to normalise the images. For the HSC-SSP images this is done by taking a subsample of 1000 galaxies from our unlabelled training sample and calculating the standard deviation \(\sigma_{\text{pixel count}}\) for each band \((g,r,i,z,y)\) of this subsample using the median absolute deviation. The entire sample is then normalised by dividing each image band by the corresponding \(3\sigma\) and then taking the hyperbolic sine of this. In this step we also define an 'unnormalising' function which does the inverse of the normalising function, allowing us to retrieve the original unnormalised images if needed. The SDSS images we obtained from Pearson et al. (2019) were already in PNG format with pixel values between 0 and 255. To normalise these we simply divide each image band by 255.
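A minimal sketch of this normalisation, assuming the images are held in a NumPy array of shape (N, height, width, bands); the function names, and the 1.4826 MAD-to-\(\sigma\) conversion factor, are our assumptions rather than details taken from the original pipeline:

```python
import numpy as np

def fit_band_scales(subsample):
    # Per-band sigma estimated from the median absolute deviation (MAD) of a
    # ~1000-galaxy subsample; 1.4826 converts MAD to a Gaussian-equivalent sigma.
    med = np.median(subsample, axis=(0, 1, 2), keepdims=True)
    mad = np.median(np.abs(subsample - med), axis=(0, 1, 2))
    return 1.4826 * mad  # shape: (bands,)

def normalise(images, sigma):
    # Divide each band by 3*sigma, then take the hyperbolic sine, as in the text.
    return np.sinh(images / (3.0 * sigma))

def unnormalise(images, sigma):
    # Inverse of `normalise`, recovering the original pixel values.
    return np.arcsinh(images) * (3.0 * sigma)
```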
Self-supervised networks work by encoding images into lower-dimensional representations and using augmented versions of the training images to learn under which transformations the encoded representations should be invariant. More specifically, these networks use contrastive loss (Hadsell et al., 2006) which is minimised for different augmentations of the same image, and maximised when the images are different. These augmentations are defined and applied before the images get processed by the network and are chosen based on the task at hand. In this project we are constructing a network to classify whether a galaxy possesses tidal features. This classification should be independent of the level of noise, or the orientation or position of the galaxy in the image. To achieve this we use the following augmentations (a code sketch follows the list):
* **Orientation:** We randomly flip the image across each axis (x and y) with 50% probability.
* **Gaussian Noise**: We sample a scalar from \(\mathcal{U}\)(1,3) and multiply it with the median absolute deviation of each channel (calculated over 1000 training examples) to get a per-channel noise \(\sigma_{C}\). We then introduce Gaussian noise sampled from \(\sigma_{C}\ \times\ \mathcal{N}\)(0,1) for each channel.
* **Jitter and Crop:** For HSC-SSP images we crop the \(128\times 128\) pixel image to the central \(109\times 109\) pixels before randomly cropping the image to \(96\times 96\) pixels. Random cropping means the image centre is translated, or 'jittered', along each respective axis by \(i\), \(j\) pixels where \(i\), \(j\ \sim\ \mathcal{U}\)(-13,13) before cropping to the central \(96\times 96\) pixels. For SDSS images we crop the \(256\times 256\) pixel image to the central \(72\times 72\) pixels before randomly cropping the image to \(64\times 64\) pixels. We use \(64\times 64\) pixel images when training with SDSS data as these are the image dimensions used by Pearson et al. (2019) for their model. We use a smaller maximum centre translation (\(i\), \(j\ \sim\ \mathcal{U}\)(-8,8)) for SDSS images due to their smaller size.
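The three augmentations above can be sketched for a batch of HSC-SSP images as follows (TensorFlow; the order of operations and all names are ours):

```python
import tensorflow as tf

def augment_hsc(images, sigma_c):
    """images: (batch, 128, 128, 5) float32;
    sigma_c: per-channel median absolute deviation, shape (5,)."""
    # Orientation: flip along each image axis with 50% probability.
    images = tf.image.random_flip_left_right(images)
    images = tf.image.random_flip_up_down(images)
    # Gaussian noise: one scalar ~ U(1, 3), scaled by the per-channel MAD.
    scale = tf.random.uniform([], 1.0, 3.0)
    images = images + scale * sigma_c * tf.random.normal(tf.shape(images))
    # Jitter and crop: central 109x109 region (109 px cannot be exactly centred
    # in 128), then a random 96x96 crop, i.e. a translation of up to ~13 px.
    images = images[:, 9:118, 9:118, :]
    images = tf.image.random_crop(images, size=[tf.shape(images)[0], 96, 96, 5])
    return images
```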
### Model Architecture
The model we utilise to perform classification of tidal feature candidates consists of two components: a self-supervised model used for pre-training, and a linear classifier used for classification. The self-supervised model receives galaxy images as input and encodes these into meaningful lower-dimensional representations. The linear classifier is a simple supervised model which takes in galaxy images, encodes them into representations using the trained self-supervised model, and outputs a binary classification. All models described below are built using the TensorFlow framework (Abadi et al., 2016).
#### 2.3.1 The Self-Supervised Architecture
For our task of classifying tidal feature candidates we use a type of self-supervised learning known as Nearest Neighbour Contrastive Learning of visual Representations (NNCLR; Dwibedi et al., 2021). We closely follow Dwibedi et al. (2021) in designing the training process for our model. A general schematic of the NNCLR framework is shown in Figure 2 and we refer the reader to Dwibedi et al. (2021) for a more detailed explanation of the approach.
\begin{table}
\begin{tabular}{l l} \hline Criteria & Definition and Reasoning \\ \hline
15 \(<\) i\_model\_mag \(<\) 20 & \(i-\)band flux (mag) from the final model fit. We set a faint magnitude limit of 20 mag to ensure that objects are bright enough for tidal features to be visible. \\
x\_inputcount\_value \(\geq\) 3 & The number of exposures available in a given band. We only select images which have at least 3 exposures in each band \((g,r,i,z,y)\) to ensure galaxies have full depth and colour information. \\
NOT x\_pixelflags\_bright\_object & Flags objects which are affected by bright sources. We set this criterion to avoid selecting objects affected by bright stars. \\
NOT x\_pixelflags\_edge & Flags objects which intersect the edge of the exposure region. Ensures that objects are fully visible and not cut off by the edge of a region. \\
NOT x\_pixelflags\_saturatedcenter & Flags objects which have saturated central pixels. We set this to avoid selecting objects with saturated pixels. \\
NOT x\_cmodel\_flag & Flags objects which have general cmodel fit failures. Ensures we only select objects with good photometry. \\
i\_extendedness\_value \(>\) 0.5 & Provides information about the extendedness of an object, values \(<\) 0.5 indicate stars. \\
\end{tabular}
\end{table}
Table 1: Criteria used in our SQL search to select our training sample. An ‘x’ indicates the cut was applied to each band \((g,r,i,z,y)\).
Self-supervised models rely on augmentations to create different views of the same images. Given a sample of images \(\mathbf{x}\), a pair of images \((\mathbf{x_{i}},\mathbf{x_{j}})\) is defined as positive when \(\mathbf{x_{j}}\) is an augmented version of image \(\mathbf{x_{i}}\); a positive pair can be expressed as \((\mathbf{x_{i}},\mathbf{x_{i}^{+}})\). If \(\mathbf{x_{j}}\) is not an augmented version of image \(\mathbf{x_{i}}\) the pair is negative and is expressed as \((\mathbf{x_{i}},\mathbf{x^{-}})\). For each of the images in \(\mathbf{x}\), an encoder network creates a 128-dimensional representation \(\mathbf{z}=\text{encoder}(\mathbf{x})\). This encoder is trained to make the representations similar for positive pairs, and dissimilar for negative pairs, by using a contrastive loss function:
\[L_{i}=-\log\left(\frac{\exp(\text{sim}(\mathbf{z_{i}},\mathbf{z_{i}^{+}}))}{\exp(\text{sim}(\mathbf{z_{i}},\mathbf{z_{i}^{+}}))+\sum\limits_{\mathbf{z^{-}}}\exp(\text{sim}(\mathbf{z_{i}},\mathbf{z^{-}}))}\right) \tag{1}\]
where \(\text{sim}(\mathbf{a},\ \mathbf{b})=\mathbf{a}\cdot\mathbf{b}\) / (\(\tau||\mathbf{a}||\ ||\mathbf{b}||\)) is the cosine similarity between vectors \(\mathbf{a}\) and \(\mathbf{b}\), normalised by the tunable "softmax temperature" \(\tau\). This contrastive loss, or InfoNCE loss (Oord et al., 2018), is minimised when the similarity is high for positive pairs and low for negative pairs. Using this contrastive loss, self-supervised networks learn to make the representations similar for positive pairs, and dissimilar for negative pairs, and hence are able to cluster similar (or positive) samples together and push apart dissimilar (or negative) samples. These contrastive learning methods (e.g. SimCLR; Chen et al. 2020) rely only on differently augmented views of the same image to create positive pairs. As a consequence, objects which exhibit large variations but belong to the same class (e.g. galaxies with different types of tidal features) might not be linked using this type of method. NNCLR methods aim to resolve this issue by creating a more diverse set of positive pairs. Instead of defining positive pairs as \((\mathbf{x_{i}},\mathbf{x_{i}^{+}})\) where \(\mathbf{x_{i}^{+}}\) is an augmented version of image \(\mathbf{x_{i}}\), NNCLR uses a queue of examples \(Q\), and defines \(\mathbf{x_{i}^{+}}\) as the nearest-neighbour of \(\mathbf{x_{i}}\) in the queue. The loss function for NNCLR models varies slightly from the InfoNCE loss and is defined as:
\[L_{i}^{\text{NNCLR}} =-\log\left(\frac{\exp(\text{NN}(\mathbf{z_{i}},Q)\cdot\mathbf{z_{ i}^{+}}/\tau)}{\sum\limits_{\mathbf{k}}\exp(\text{NN}(\mathbf{z_{i}},Q)\cdot \mathbf{z_{k}^{+}}/\tau)}\right)\] \[-\log\left(\frac{\exp(\text{NN}(\mathbf{z_{i}},Q)\cdot\mathbf{z_{ i}^{+}}/\tau)}{\sum\limits_{\mathbf{k}}\exp(\text{NN}(\mathbf{z_{k}},Q) \cdot\mathbf{z_{i}^{+}}/\tau)}\right)\]
where NN(\(\mathbf{z}\), \(Q\)) is the nearest neighbour operator, defined as:
\[\text{NN}(\mathbf{z},Q)=\operatorname*{arg\,min}_{i\in Q}\ ||\mathbf{z}-i||_{2} \tag{2}\]
where \(||\mathbf{x}||_{2}\) is the \(l_{2}\) norm of \(\mathbf{x}\), defined as:
\[||\mathbf{x}||_{2}=\sqrt{\sum\limits_{k=1}^{n}|x_{k}|^{2}} \tag{3}\]
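A simplified, non-symmetrised sketch of this loss for one batch (queue maintenance is omitted, and all names are ours); note that for \(l_{2}\)-normalised vectors the nearest-neighbour lookup of Equation 2 is equivalently the queue entry with the highest cosine similarity:

```python
import tensorflow as tf

def nnclr_loss(z, z_plus, queue, temperature=0.1):
    """z, z_plus: (batch, 128) representations of two augmented views of the
    same images; queue: (queue_size, 128) support set Q of past examples."""
    z = tf.math.l2_normalize(z, axis=1)
    z_plus = tf.math.l2_normalize(z_plus, axis=1)
    queue = tf.math.l2_normalize(queue, axis=1)
    # NN(z_i, Q): for unit vectors, argmin l2 distance == argmax cosine similarity.
    nn = tf.gather(queue, tf.argmax(tf.matmul(z, queue, transpose_b=True), axis=1))
    # Softmax over the batch: the positive for row i is z_plus[i] (the diagonal).
    logits = tf.matmul(nn, z_plus, transpose_b=True) / temperature
    labels = tf.range(tf.shape(z)[0])
    return tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))
```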
The self-supervised model was trained using a temperature of 0.1 and a queue size of 10,000. Following Hayat et al. (2021) and Stein et al. (2021), we use ResNet-50 (He et al., 2016) as our encoder followed by a global pooling layer. The architecture of the projection head is two fully connected layers of size 128, which use L2 kernel regularisation with a penalty of 0.0005. Each fully connected layer is followed by a batch-normalisation layer, and the first batch-normalisation layer is followed by ReLU activation. The model was compiled using the Adam optimiser (Kingma and Ba, 2015) and trained for 25 epochs on our unlabelled dataset of \(\sim\) 44,000 HSC-SSP PDR2 galaxies or our dataset of 50,000 SDSS galaxies. Training was completed within \(\sim\) 30 minutes using a single GPU.
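A Keras sketch of this encoder and projection head; the layer sizes and regularisation follow the text, while everything else (e.g. average pooling, training from random weights) is our assumption:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_encoder(input_shape=(96, 96, 5)):
    # ResNet-50 backbone trained from scratch (weights=None, since the inputs
    # have five bands), followed by global pooling and the projection head.
    backbone = tf.keras.applications.ResNet50(
        include_top=False, weights=None, input_shape=input_shape)
    reg = regularizers.l2(0.0005)
    return tf.keras.Sequential([
        backbone,
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, kernel_regularizer=reg),
        layers.BatchNormalization(),
        layers.ReLU(),  # only the first batch-normalisation is followed by ReLU
        layers.Dense(128, kernel_regularizer=reg),
        layers.BatchNormalization(),
    ])
```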
#### 2.3.2 The Linear Classifier Architecture
The second part of the model is a simple linear classifier which takes galaxy images as input and converts them to representations using the pre-trained self-supervised encoder. These encoded representations are passed through a fully connected layer with a sigmoid activation, which outputs a single number between 0 and 1. This fine-tuned model was compiled using the Adam optimiser (Kingma and Ba, 2015) and a binary cross entropy loss. It was trained for 50 epochs using the labelled training set of 600 HSC-SSP galaxies or 4800 SDSS galaxies. Training was completed within \(\sim\) 1 minute using a single GPU.
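A sketch of the linear classifier; whether the encoder weights are kept frozen at this stage is our assumption:

```python
import tensorflow as tf

def build_linear_classifier(encoder):
    # Images -> self-supervised representations -> single sigmoid unit giving
    # a score between 0 (no tidal features) and 1 (tidal features).
    encoder.trainable = False  # freezing the encoder here is our assumption
    model = tf.keras.Sequential([
        encoder,
        tf.keras.layers.Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy')
    return model
```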
#### 2.3.3 The Supervised Architecture
To draw conclusions about the suitability of self-supervised models for the detection and classification of tidal features, we compare our results with those of a fully supervised model. We do not construct this model from scratch, but instead use the model designed by Pearson et al. (2019) to classify merging galaxies. This model consists of four convolutional layers, each followed by a Rectified Linear Unit (ReLU) activation layer and a dropout layer with a dropout rate of 20%. The first, second, and fourth convolutional layers also have max-pooling layers following the dropout layers. The model then has two dense layers, each followed by a ReLU activation layer and dropout layer. The final layer is a dense layer which has two neurons and softmax activation. We do not detail the dimensions of the network layers here but instead focus on the changes made to adapt the network to our data. For further detail on the architecture of this network we refer the reader to Pearson et al. (2019). The output layer was changed from two neurons with softmax activation to a single neuron with sigmoid activation. The network was compiled using the Adam optimiser (Kingma and Ba, 2015) with the default learning rate, and the loss of the network was determined using binary cross entropy. When training the network on our HSC-SSP dataset we additionally changed the input image dimension from 64 \(\times\) 64 pixels with three colour channels to 96 \(\times\) 96 pixels with five colour channels. We do this because tidal features in deeper images can often be seen to extend around the galaxy and using 64 \(\times\) 64 pixel images can cause them to be cut off. We train the supervised network from scratch using the labelled training set of 600 HSC-SSP galaxies or 4800 SDSS galaxies.
### Model Evaluation
There are a number of metrics used in the literature to evaluate the performance of machine learning models, depending on the task at hand. The purpose of our model is to partially automate the creation of large datasets of galaxies with tidal features by reducing the amount of data which has to be visually classified. If the top \(N\) predictions will be visually classified, we want to maximise the number of true positives, or galaxies with tidal features, in the top \(N\) predictions while minimising the number of false positives, or galaxies without tidal features. As such, in terms of model performance, we are primarily concerned with the true positive rate (also known as recall or completeness) and false positive rate (also known as fall-out or contamination). The true positive rate (TPR) ranges from 0 to 1 and is defined as:
\[TPR=\frac{TP}{TP\ +\ FN} \tag{4}\]
where \(TP\) is the number of true positives (i.e. the number of galaxies with tidal features correctly classified by the model) and \(FN\) is the number of false negatives (i.e. the number of galaxies with tidal features incorrectly classified by the model). The false positive rate (FPR) also ranges from 0 to 1 and is defined as:
\[FPR=\frac{FP}{FP\ +\ TN} \tag{5}\]
where \(FP\) is the number of false positives (i.e. the number of galaxies without tidal features incorrectly classified by the model) and \(TN\) is the number of true negatives (i.e. the number of galaxies without tidal features correctly classified by the model).
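Both rates can be computed directly from the binary labels and classifier scores; a minimal NumPy sketch of Equations 4 and 5 (names are ours):

```python
import numpy as np

def tpr_fpr(labels, scores, threshold=0.5):
    """labels: (N,) array of 0/1; scores: (N,) classifier outputs in [0, 1]."""
    pred = scores >= threshold
    tp = np.sum(pred & (labels == 1))   # tidal features, correctly recovered
    fn = np.sum(~pred & (labels == 1))  # tidal features, missed
    fp = np.sum(pred & (labels == 0))   # non-tidal galaxies flagged as tidal
    tn = np.sum(~pred & (labels == 0))  # non-tidal galaxies, correctly rejected
    return tp / (tp + fn), fp / (fp + tn)
```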
In addition to using the TPR for a given FPR to evaluate our model, we also use the area under the receiver operating characteristic (ROC) curve to evaluate performance. The ROC curve is a plot of the TPR against the FPR for a range of threshold values, which is the threshold between outputs being labelled positive or negative. For our model where galaxies with tidal features are labelled 1 and galaxies without are labelled 0, the threshold can take any value between 0 and 1 and determines whether outputs are classified as galaxies with or without tidal features. The area under the ROC curve (AUC) of a perfect model with \(\text{TPR}~{}=~{}1\) and \(\text{FPR}~{}=~{}0\) is unity, while a good model will have an AUC close to unity. A truly random model will have a ROC AUC of 0.5.
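The ROC AUC then follows by sweeping the decision threshold and integrating; a sketch reusing `tpr_fpr` from above (in practice a library routine such as `sklearn.metrics.roc_auc_score` would typically be used):

```python
import numpy as np

def roc_auc(labels, scores, n_thresholds=200):
    # Trace the ROC curve by sweeping the threshold from 1 to 0, then
    # integrate TPR over FPR with the trapezoidal rule to obtain the AUC.
    pairs = [tpr_fpr(labels, scores, t) for t in np.linspace(1.0, 0.0, n_thresholds)]
    tprs, fprs = zip(*pairs)
    return np.trapz(tprs, fprs)
```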
## 3 Results
In this section we first compare the performance of our self-supervised model with a supervised model, before exploring how our self-supervised model organises the galaxy images in representation space.
### Self-Supervised vs. Supervised Performance
Figure 3 illustrates the ROC AUC for a supervised and self-supervised network as a function of percentage of labels used in training for our SDSS dataset. As shown, when the amount of training data is within the range of 80% (4800 labels) to 10% (600 labels) the models show very similar performances. However, in the regime of fewer labels, particularly when training with 5% (300) or 2% (120) unique labelled examples, both models show decreased performance but this decrease is significantly greater for the supervised model. When 2% (120) of labelled training data is used, the self-supervised model AUC only decreases to 0.85 while the supervised AUC drops to 0.77. This figure does not show that self-supervised models can be used in the detection of tidal features, because the criterion for galaxies being classed as merging in the SDSS dataset is mainly based on whether two clearly interacting galaxies could be seen in the images.
Figure 3: ROC AUC as a function of the percentage of SDSS labels used for training for a supervised (blue) and self-supervised (red) model. 80% of labels is equal to 4800 labels. Both models show similar performance in the high number of labels regime. For fewer training labels, the self-supervised model outperforms the supervised model.
Figure 2: Illustration of the self-supervised model architecture. The model takes in a batch of images as input and creates two different views of the batch using random augmentations. Each view is encoded and the nearest neighbour of view 1 is located from the queue of example images. The loss between view 2 and the nearest neighbour of view 1 is calculated using a contrastive loss function which is minimised for similar pairs of images and maximised otherwise.
To show that self-supervised models can be used to detect galaxies with tidal features we rely on our HSC-SSP dataset, which reaches surface brightness depths sufficient to clearly identify tidal features. Figure 4 illustrates the testing set ROC AUC for a supervised and self-supervised network as a function of the number of labels used in training for our HSC-SSP dataset. Each point represents the ROC AUC averaged over ten runs using the same training, validation, and testing sets for each run. We average the ROC AUC over the 10 runs and remove outliers further than \(3\sigma\) from the mean. It is important to note that in this figure, the number of labels for our rightmost data point, where 600 labels are used for training, is equivalent to the 10% of labels data point for the SDSS results in Figure 3. Our SSL model maintains high performance across all amounts of labels used for training, having ROC AUC = \(0.911\pm 0.002\) when training on the maximum number of labels and only dropping to ROC AUC = \(0.89\pm 0.01\) when using only 50 labels for training. This is in contrast to the supervised model, which also maintains its performance regardless of label number, but only reaches ROC AUC = \(0.867\pm 0.004\) when training on the maximum number and ROC AUC = \(0.83\pm 0.01\) when using only 50 labels for training.
This figure not only shows that a SSL model can be used for the detection of tidal features with good performance, but also that it performs consistently better than the supervised network regardless of the number of training labels.
It seems unlikely that a purely supervised network would maintain such good performance with only 50 unique labelled training examples; however, upon inspection we found no problems with our dataset assembly or model training. Instead of only evaluating the model based on its performance with regards to the testing set, we also plot the results of the models based on the validation loss. We do this by choosing the 'best' model from the ten runs for each number of labels, defined as the model which reaches the lowest validation loss at the end of training. Typically, a lower validation loss translates to a better model performance on the testing set. However, our validation and testing sets are very small, 60 and 100 galaxies respectively, making it hard to evaluate our models accurately using just one method.
In Figure 5 we show the testing set ROC AUC for a supervised and self-supervised network as a function of the number of labels used for training. This time, instead of each point being an average of ten runs, we show the result of the model which reached the lowest final validation loss for the given number of labels used for training. This figure shows that both models maintain similar ROC AUC when using more training labels, however, when using less than 300 labels for training, the supervised model begins to decrease in performance. When using as few as 50 labelled examples for training, the self-supervised model's ROC AUC remains stable around 0.9, whereas the supervised model's ROC AUC drops to 0.7. Although this may seem to contradict the results of Figure 4, the two figures compare different methods of assessing the model. Figure 4 is based only on the performance of the model on the testing set, whereas Figure 5 takes into account the training process using the validation loss. The self-supervised network shows consistency in the ROC AUC regardless of which method is chosen to evaluate the model, while the supervised network appears to be more sensitive to the choice of method.
Figure 4 showed that our SSL model maintained its ROC AUC and performed consistently better than the supervised network regardless of the number of training labels. This result is also reflected in Figure 6 which shows the testing set TPR for FPR = \(0.2\) for our supervised and self-supervised networks as a function of the number of labels used in training. Similar to Figure 4, each point represents the TPR averaged over 10 runs, removing outliers further than \(3\sigma\) from the mean. Figure 6 not only shows that the TPR for a given FPR is consistently higher for our self-supervised model than our supervised model but also that the self-supervised model maintains a high TPR regardless of the number of training labels, only dropping from TPR = \(0.94\pm 0.01\) at 600 training labels to TPR = \(0.90\pm 0.01\) with a mere 50 training labels.
Figure 4: Average ROC AUC as a function of the number of HSC-SSP labels used for training for a supervised (blue) and self-supervised (red) model. Each point is an average of ten runs. Both models are able to classify galaxies with tidal features. The self-supervised model performs consistently better, but both models remain consistent for all number of labels.
Figure 5: ROC AUC as a function of the number of HSC-SSP labels used for training for a supervised (blue) and self-supervised (red) model. Each point shows the ROC AUC for the model which reached the lowest final validation loss for the given number of training labels. Both models show similar performance in the high number of labels regime. For fewer training labels, the self-supervised model outperforms the supervised model.
Figure 6: Average TPR when FPR = \(0.2\) as a function of the number of HSC-SSP labels used for training for a supervised (blue) and self-supervised (red) model. Each point is an average of 10 runs. Both models show a slight decrease in TPR with decreasing training label number. The self-supervised model has a consistently higher TPR.
### Detection of Tidal Features
One advantage of self-supervised models over supervised models is the ability to use just one labelled example to find examples of similar galaxies from the full dataset. By using just one image from our labelled tidal feature dataset as a query image, and the encoded 128-dimensional representations from the self-supervised encoder, we can perform a similarity search that assigns high similarity scores to images which have similar representations to the query image. This is done using a similarity function which takes in the encoded representations of two images, \(l_{2}\)-normalises them along the first axis, and returns the reduced sum of the product of these normalised representations. This is demonstrated in Figure 7 where we select two galaxies with tidal features from our training sample and perform a similarity search with the 44,000 unlabelled HSC-SSP galaxies. In Figure 7 the query image is shown on the right alongside the 24 galaxies which received the highest similarity scores. This figure shows the power of self-supervised learning, where using only a single labelled example, we can find a multitude of other tidal feature candidates.
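A sketch of this similarity search, assuming the encoded representations are held as NumPy arrays (function and variable names are ours):

```python
import numpy as np

def similarity_search(query_rep, all_reps, top_k=24):
    """query_rep: (128,) encoded query galaxy; all_reps: (N, 128) encoded sample."""
    # l2-normalise, then take the reduced sum of the element-wise product,
    # i.e. the cosine similarity described in the text.
    q = query_rep / np.linalg.norm(query_rep)
    reps = all_reps / np.linalg.norm(all_reps, axis=1, keepdims=True)
    scores = np.sum(q * reps, axis=1)
    order = np.argsort(scores)[::-1][:top_k]   # indices of the top-k matches
    return order, scores[order]
```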
Another way to visualise how the model organises the galaxy images in representation space, without having to select a query image, is by using Uniform Manifold Approximation and Projection (UMAP; McInnes et al., 2018). UMAP can be trained directly on image data to produce meaningful clusters; however, here we use it merely for visualisation purposes. UMAP takes in our encoded 128-dimensional representations as input and reduces them to an easier-to-visualise 2-dimensional projection. Figure 8 illustrates this 2D projection, created by binning the space into 100 \(\times\) 100 cells and randomly selecting a sample from that cell to plot in the corresponding cell location. To obtain an idea of what attributes galaxy groupings are based on, we hand-select three areas and show zoomed-in versions of these areas around the edges of the figure. From this, it is clear that galaxies are grouped both according to their colour, and their size in the cutout. The fact that we can achieve a high performance of tidal feature classification (shown in Section 3.1) by simply using a linear classifier on the representations means that the representations created by the self-supervised encoder are meaningful. We also determine whether the scores given to galaxies by the linear classifier are related to the galaxies' positions in the UMAP projection. This is done by colouring the UMAP plot according to the scores given to each galaxy by the linear classifier, shown in the right panel of Figure 8. We find that the majority of galaxies which were assigned a high classifier score, indicating a high likelihood of tidal features, are located on the left side of the UMAP projection plot. This reinforces the idea that the encoded representations contain meaningful information about tidal features, but also brings to light a potential bias of our model. The left side of the UMAP projection plot contains the galaxies which cover more of the cutout, indicating a potential bias towards classifying bright galaxies that appear large in the cutouts as having tidal features. This bias is likely introduced by the training set, as tidal features are more likely to be visible and obvious for brighter and larger-appearing galaxies and hence these galaxies are more likely to be classified as having tidal features.
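A sketch of the projection step, assuming the `umap-learn` package and a file of saved encoder outputs (the file name and parameter choices are ours):

```python
import numpy as np
import umap

# 128-d self-supervised representations -> 2-d projection, for plotting only.
representations = np.load("representations.npy")   # hypothetical file, shape (N, 128)
reducer = umap.UMAP(n_components=2, random_state=42)
embedding = reducer.fit_transform(representations)  # shape (N, 2)
```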
## 4 Discussion and Conclusions
In this work, we have shown that SSL models composed of a self-supervised encoder and linear classifier can not only be used to detect galaxies with tidal features, but can do so reaching both high completeness (TPR = 0.94 \(\pm\) 0.01) for low contamination (FPR = 0.20) and high area under the ROC curve (ROC AUC = 0.911 \(\pm\) 0.002). This means that such models can be used to isolate the majority of galaxies with tidal features from a large sample of galaxies, thus drastically reducing the amount of visual classification needed to assemble a large sample of tidal features. One major advantage of this model over other automated classification methods is that this level of performance can be reached using only 600 labelled training examples, and only drops mildly when using a mere 50 labels for training, maintaining ROC AUC = 0.89 \(\pm\) 0.01 and TPR = 0.90 \(\pm\) 0.01 for FPR = 0.2. SSL models are also inexpensive to train, with the encoder needing only \(\sim\) 30 minutes to train for 25 epochs on a single GPU. The linear classifier, applied to the encoded representations, only required \(\sim\) 1 minute to train for 50 epochs on a single GPU. This makes SSL models easy to re-train on data from different surveys with minimal visual classification needed.
Previous works which used SSL models for classification of astronomical objects highlighted a number of advantages of these models compared to fully supervised models. Stein et al. (2021) highlighted the usefulness of being able to use the encoded representations straight from the self-supervised encoder to perform similarity searches based on a single example image in the context of finding strong gravitational lens candidates. Hayat et al. (2021) emphasised the superior performance of their SSL model compared to a supervised model for both galaxy morphology classification, and spectroscopic redshift estimation, particularly when decreasing the number of labels available for training. In this work we find similar advantages of a SSL model compared to a supervised model. When using the models to detect merging galaxies from an SDSS dataset, both models show similar performance when using a larger number of training labels, however, in the regime of fewer training labels, the supervised model performance decreases more drastically than the SSL model. When using the models for the detection of tidal features we find that the SSL model consistently outperforms the supervised model, regardless of the number of labels used for training. Following Stein et al. (2021), we emphasise the usefulness of being able to perform a similarity search using just the self-supervised encoder and one example of a galaxy with tidal features to find other galaxies with tidal features from a dataset of tens of thousands of galaxies.
The level of comparison that can be carried out with respect to the results obtained here and other works is limited due to the scarcity of similar works. There are only two studies focusing on the detection of tidal features using machine learning. The first is the work of Walmsley et al. (2019) who used a supervised network to identify galaxies with tidal features from the Wide layer of the Canada-France-Hawaii Telescope Legacy Survey (Gwyn, 2012). They used a sample of 305 galaxies with tidal features and 1316 galaxies without tidal features, assembled by Atkinson et al. (2013), to train a CNN, using augmentations such as image flipping, rotation, and translation to expand their dataset. Walmsley et al. (2019) found that their method outperformed other automated methods of tidal feature detection, reaching 76% completeness (or TPR) and 22% contamination (or FPR). Our SSL model, trained on 600 (300 tidal, and 300 non-tidal) galaxies performs considerably better, reaching a completeness of 96% for the same contamination percentage. Dominguez Sanchez et al. (2023) used this same CNN, designed by Walmsley et al. (2019), and the dataset presented in Martin et al. (2022) to also identify galaxies exhibiting tidal features. This dataset consists of \(\sim\) 6000 synthetic mock HSC-SSP images, including \(\sim\) 1800 galaxies with tidal features, from the NewHorizon cosmological hydrodynamical simulation at five different surface brightness limits, ranging from \(\mu_{r}\) = 28 mag arcsec\({}^{-2}\) to \(\mu_{r}\) = 35 mag arcsec\({}^{-2}\). They found that the
model was able to successfully identify tidal features, reaching ROC AUC > 0.9 and a completeness of \(\sim 90\%\) for 22% contamination. They also separated the performance of the model according to the image surface brightness limits and found that at surface brightness limits close to HSC-SSP UD limits (\(\mu_{r}\sim 28\) mag arcsec\({}^{-2}\)) the model reached ROC AUC = 0.88 and completeness of \(\sim 83\%\) for 22% contamination. The ROC AUC reached by our model (0.911 \(\pm\) 0.002) is comparable to that found in Dominguez Sanchez et al. (2023), however, we reach a significantly higher completeness of 96% for the same level of contamination. Dominguez Sanchez et al. (2023) also attempted to apply their trained model to real HSC-SSP images, however the model performance decreased significantly, only reaching ROC AUC = 0.64. They noted that this drop in performance could be attributed to the difference in angular resolution between the simulated and real images, or the lack of real background in the simulated images which were used to train their model.
Figure 7: Results from a similarity search using two random galaxies with tidal features as query images. The two query galaxies are displayed on the left, alongside the top 24 galaxies with the highest similarity scores for each similarity search. The similarity score is displayed in the top left corner for each image. The red outlines indicate galaxies which would be visually classified as hosting tidal features, regardless of whether this galaxy is the central object in the image.
Figure 8: Left: 2D UMAP projection of the self-supervised representations. Made by binning the space into \(100~{}\times~{}100\) cells and randomly selecting a sample from that cell to plot in the corresponding cell location. Right: The same 2D UMAP projection without binning, coloured according to the scores assigned to each galaxy by the linear classifier.
In a similar work, Bickley et al. (2021) used a CNN to identify recently merged galaxies in a sample of galaxies from the cosmological magnetohydrodynamical simulation IllustrisTNG (Nelson et al., 2018). Their dataset was constructed by combining IllustrisTNG images with Canada France Imaging Survey (CFIS; Ibata et al., 2017) image data and metadata to create synthetic CFIS images and the sample consisted of \(\sim\)75,000 galaxies, including \(\sim\)37,000 recently merged galaxies. Their CNN achieved good performance, reaching a ROC AUC of 0.95, higher than that of our SSL model, and a similar completeness of \(\sim\)95% for 22% contamination, although their model required a much larger labelled training set compared to our dataset of 600 galaxies. Bickley et al. (2021) also compared the performance of their CNN to that of visual identification performed by 9 classifiers on a subsample of 200 galaxies. They found that although the CNN recovered a higher fraction of the recently merged galaxies, the visual classifiers were capable of achieving higher purity in their classification. Bickley et al. (2021) suggested that the best approach to merger identification is to use a CNN to assemble an initial sample of candidates, followed by visual classification to improve the quality of this merger sample. This conclusion is consistent with our proposal for the intended use of our SSL model.
We can also compare one part of our work with that of Pearson et al. (2019) who used a supervised model to identify merging galaxies from a dataset of \(\sim\)6000 SDSS galaxies. We use the same SDSS dataset to train both a supervised model (based on that of Pearson et al., 2019) and our SSL model to compare the performance of the two models. In their work, Pearson et al. (2019) found that their CNN reached a ROC AUC of 0.966 when used on the testing dataset. When trained on the same number of labels, both our supervised and SSL models reached a similar ROC AUC of 0.96 on the testing set. However, when training using only 2% of the available training data, or 120 galaxies, the supervised model ROC AUC dropped to 0.77, while the SSL model ROC AUC only dropped to 0.85. This shows the advantage of using SSL models, particularly when available training sets are limited in size or have not been assembled yet.
Bottrell et al. (2019) and Snyder et al. (2019) focused on the classification of galaxy merger signatures using supervised machine learning. Bottrell et al. (2019) used a CNN to classify galaxies according to merger stage, using a series of images from a hydrodynamical simulation. Their model reached high (87%) accuracy even when using simulated images inserted into real SDSS survey fields to train and test the model. Snyder et al. (2019) used a random forest algorithm to isolate galaxies that merged or would merge within 250 Myr, using images from the Illustris cosmological simulation (Genel et al., 2014; Vogelsberger et al., 2014, 2014). Their model reached \(\sim\)70% completeness (TPR) for \(\sim\)30% contamination (FPR), which is a significantly lower TPR than that reached by our model. However, both the works of Bottrell et al. (2019) and Snyder et al. (2019) do not focus explicitly on the detection of tidal features and therefore are not directly comparable to the results of our analysis.
In this work we have shown that self-supervised machine learning models can be used to detect galaxies with tidal features from large datasets of galaxies. They can do so reaching both high area under the ROC curve and high completeness (TPR) for low contamination (FPR). To reach good performance, these models do not require large labelled datasets, and can be fully trained using as few as 50 labelled examples. This makes them easy to re-train on new data and therefore, simple to apply to data from different surveys with minimal visual classification needed. Such models can also be used to conduct similarity searches, finding galaxies with similar features given only one labelled example of a galaxy. This can help us understand what image features the model considers important when making links between images, and can be applied to any astronomical dataset to find rare objects. All of these attributes make this SSL model a valuable tool in sorting through the massive amounts of data output by imaging surveys such as LSST, to assemble large datasets of merging galaxies. The code used to create, train, validate, and test the SSL model, along with instructions on loading and using the pre-trained model as well as training the model using different data can be downloaded from GitHub2.
Footnote 2: [https://github.com/LSSTISSC/Tidalsaurus](https://github.com/LSSTISSC/Tidalsaurus)
## Acknowledgements
We acknowledge funding support from LSST Corporation Enabling Science grant LSSTC 2021-5. SB acknowledges funding support from the Australian Research Council through a Discovery Project DP190101943.
The Hyper Suprime-Cam (HSC) collaboration includes the astronomical communities of Japan and Taiwan, and Princeton University. The HSC instrumentation and software were developed by the National Astronomical Observatory of Japan (NAOJ), the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), the University of Tokyo, the High Energy Accelerator Research Organization (KEK), the Academia Sinica Institute for Astronomy and Astrophysics in Taiwan (ASIAA), and Princeton University. Funding was contributed by the FIRST program from the Japanese Cabinet Office, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), the Japan Society for the Promotion of Science (JSPS), Japan Science and Technology Agency (JST), the Toray Science Foundation, NAOJ, Kavli IPMU, KEK, ASIAA, and Princeton University. This paper makes use of software developed for Vera C. Rubin Observatory. We thank the Rubin Observatory for making their code available as free software at [http://pipelines.lsst.io/](http://pipelines.lsst.io/). This paper is based on data collected at the Subaru Telescope and retrieved from the HSC data archive system, which is operated by the Subaru Telescope and Astronomy Data Center (ADC) at NAOJ. Data analysis was in part carried out with the cooperation of Center for Computational Astrophysics (CICA), NAOJ. We are honored and grateful for the opportunity of observing the Universe from Maunakea, which has the cultural, historical and natural significance in Hawaii.
## Data Availability
All data used in this work is publicly available at [https://hsc-release.mtk.nao.ac.jp/doc/](https://hsc-release.mtk.nao.ac.jp/doc/) (for HSC-SSP data).
|
2304.00300 | Modelling how curved active proteins and shear flow pattern cellular
shape and motility | Cell spreading and motility on an adhesive substrate are driven by the active
physical forces generated by the actin cytoskeleton. We have recently shown
that coupling curved membrane complexes to protrusive forces, exerted by the
actin polymerization that they recruit, provides a mechanism that can give rise
to spontaneous membrane shapes and patterns. In the presence of an adhesive
substrate, this model was shown to give rise to an emergent motile phenotype,
resembling a motile cell. Here, we utilize this ``minimal-cell" model to
explore the impact of external shear flow on the cell shape and migration on a
uniform adhesive flat substrate. We find that in the presence of shear the
motile cell reorients such that its leading edge, where the curved active
proteins aggregate, faces the shear flow. The flow-facing configuration is
found to minimize the adhesion energy by allowing the cell to spread more
efficiently over the substrate. For the non-motile vesicle shapes, we find that
they mostly slide and roll with the shear flow. We compare these theoretical
results with experimental observations, and suggest that the tendency of many
cell types to move against the flow may arise from the very general, and
non-cell-type-specific mechanism predicted by our model. | Shubhadeep Sadhukhan, Samo Penič, Aleš Iglič, Nir Gov | 2023-04-01T11:47:49Z | http://arxiv.org/abs/2304.00300v1 | # Modelling how curved active proteins and shear flow pattern cellular shape and motility
###### Abstract
Cell spreading and motility on an adhesive substrate are driven by the active physical forces generated by the actin cytoskeleton. We have recently shown that coupling curved membrane complexes to protrusive forces, exerted by the actin polymerization that they recruit, provides a mechanism that can give rise to spontaneous membrane shapes and patterns. In the presence of an adhesive substrate, this model was shown to give rise to an emergent motile phenotype, resembling a motile cell. Here, we utilize this "minimal-cell" model to explore the impact of external shear flow on the cell shape and migration on a uniform adhesive flat substrate. We find that in the presence of shear the motile cell reorients such that its leading edge, where the curved active proteins aggregate, faces the shear flow. The flow-facing configuration is found to minimize the adhesion energy by allowing the cell to spread more efficiently over the substrate. For the non-motile vesicle shapes, we find that they mostly slide and roll with the shear flow. We compare these theoretical results with experimental observations, and suggest that the tendency of many cell types to move against the flow may arise from the very general, and non-cell-type-specific mechanism predicted by our model.
## I Introduction
Cell migration plays a crucial role during many key biological processes, from morphogenesis to cancer progression. As a result, the molecular components involved in cell migration, most notably the actin cytoskeleton, have been intensively investigated. Despite the great progress that was made, it is still an open question how the different cellular components self-organize in a spatial pattern that maintains the robust motile cell shape. Several theoretical models have been proposed to explain the spontaneous emergence of the motile cell shape.
When cells migrate within the blood and lymphatic vessels, they experience fluid flow, which exerts shear forces on the cells. Outstanding examples include lymphocytes in the lymphatic vessels [31], neutrophils and T cells (types of immune cells) rolling and migrating in the blood vessels towards a site of inflammation [18; 28], endothelial cells [1] and fibroblasts crawling to sites of injury [17]. Therefore, an understanding of how shear stresses influence cell movement is essential, and is still lacking.
One reason for this is that different cells show different responses to shear, and the same cell type responds differently under different conditions, as shown by the following examples. The exposure of sparsely plated endothelial cells, or a wounded monolayer, to shear flow inhibits their migration against the flow [33]. The direction of T-lymphocyte cell migration under shear flow depends on the adhesion receptors [2; 11]: When VCAM-1 (Vascular Cell Adhesion Molecule-1) is used, the cells migrate with the flow, while the cells migrate against the flow when ICAM-1 (Intercellular Adhesion Molecule-1) is used. As the shear rate increases, T-lymphocytes favour migration against the flow when ICAM-1 is present, even in the presence of VCAM-1 [11]. The migration of T-cells with the shear also depends on previous exposure to the flow [24].
However, one prominent feature that appears consistently in many cell types, is a tendency to migrate upstream against the flow. This behavior was observed in T-lymphocyte cells [2; 11; 30], microvascular endothelial cells [1; 22], circulating tumor cells [14], and in the single-celled amoeba _Dictyostelium discoideum_[7; 8; 13]. The origin of this prevalent migration response to shear flow is not understood at present.
Here, we utilize a recently developed theoretical model to explore the response of adherent cells to shear flow. The coarse-grained theoretical model describes the shape dynamics of a vesicle that contains curved membrane proteins that recruit active protrusive forces from the cytoskeleton. This model was shown to give rise to spontaneous pattern formation on the membrane, resulting in different shapes of vesicle. In the presence of adhesion to an external substrate, a motile phenotype emerges in this model. Here we exert on this vesicle an external force field that emulates the viscous drag force due to the fluid flow. This is an approximate description that avoids solving the full flow field around the cell, but may provide us with a qualitative understanding of the main physical effects of the shear forces due to the flow. The simplicity of the model makes the calculated results very general, not cell-type-specific. They may therefore shed light on basic physical processes that apply to many cell types, such as the observed tendency of many types of motile cells to migrate upstream.
## II Model
Our model is based on Monte Carlo (MC) simulations to evolve the shape of a vesicle in time (see Supplementary Material). The vesicle is described by a three-dimensional surface (Fig. 1a) of \(N\) vertices, each connected with bonds of length \(l\), to form a closed, dynamically triangulated, self-avoiding network, with the topology of a sphere [15]. The position vector of the \(i\)th vertex is \(\overrightarrow{r_{i}}\), where \(i\in[1,N]\). The vesicle contains mobile curved membrane complexes, which are also sites of force application, representing the protrusive force exerted by actin polymerization. The vesicle is placed on a flat adhesive surface parallel to the \(x\)-\(y\) plane at \(z=z_{\rm ad}\).
The total energy of the vesicle is the sum of various contributions [26]: (a) the local bending energy due to the membrane curvature, (b) the energy due to binding between nearest-neighbour membrane protein complexes, (c) the energy due to the active cytoskeleton force, (d) the adhesive energy due to the attractive interaction between the vesicle and the substrate and (e) the energy due to the force experienced by the vesicle due to shear flow.
The bending energy is given by the Helfrich expression [16] as
\[W_{b}=\frac{\kappa}{2}\int_{A}(C_{1}+C_{2}-C_{0})^{2}dA \tag{1}\]
where \(C_{1}\) and \(C_{2}\) are principle curvatures, and \(\kappa=20k_{B}T\) is the bending rigidity. We set the spontaneous curvature \(C_{0}=1l_{min}^{-1}\) for the nodes that contain the curved protein complexes, while it is set to zero for the bare membrane. The interaction energy between nearest-neighbour proteins is expressed as
\[W_{d}=-\sum_{i<j}w\mathcal{H}(r_{0}-r_{ij}) \tag{2}\]
where \(\mathcal{H}\) is the Heaviside step function, \(r_{0}\) is the interaction range, \(r_{ij}=|\overrightarrow{r_{i}}-\overrightarrow{r_{j}}|\) is the distance between proteins, and \(w=1\)\(k_{B}T\) is the interaction energy between neighboring proteins in all the simulations in this paper.
The energy (work) due to the active protrusive force exerted by actin polymerization at the positions of the curved protein complexes
\[\delta W_{F}=-F\sum_{i}\hat{n_{i}}\cdot\delta\overrightarrow{r_{i}}, \tag{3}\]
where \(\hat{n_{i}}\) is the outward normal to the membrane and index \(i\) runs over the positions of all proteins, \(F\) is the strength of the active force, and \(\delta\overrightarrow{r_{i}}\) is the MC shift in the position of the node. The total active force is denoted by \(F^{\rm act}\) (Fig. 1a).
The vesicle can adhere to the adhesive surface located at \(z=z_{\rm ad}\), and this energy contribution is
\[W_{\rm ad}=-\int_{A}V(z)dA, \tag{4}\]
where \(V(z)\) is the interaction potential between the adhesive surface and the vesicle. If the node is close to the surface, \(z_{\rm ad}\leq z_{i}\leq z_{\rm ad}+\delta z\), then the adhesion energy is \(V(z)=E_{\rm ad}\), while it is zero for all other nodes. We set \(\delta z=l_{\rm min}\) for this whole paper, which is the minimal permitted bond length (to prevent pathological triangulation). The adhesive surface acts as a rigid barrier that the membrane can not penetrate.
Figure 1: Schematic diagram for a vesicle on an adhesive substrate under active protein forces and the shear force. (a) A highly polarized vesicle is placed on the adhesive surface at \(z=z_{\rm ad}\) facing a shear flow. The curved proteins are shown in red colour on the blue-coloured membrane. The parameters used in the simulations are: \(F=2k_{B}T/l_{min}\), \(E_{\rm ad}=3k_{B}T\). (b) The schematic diagram for the direction of the shear force that is tangential to the vesicle membrane. The direction is calculated as \(\hat{n}\times(\hat{v}^{\rm shear}\times\hat{n})\), where \(\hat{n}\) and \(\hat{v}^{\rm shear}\) are the normal direction to the surface and the far flow field. The shear force magnitude is given by the linear relation: \(F^{\rm shear}=a(z-z_{\rm ad})=a\Delta z\), with \(a\) a parameter that determines the magnitude of the shear force. (c) The effect of the shear force is demonstrated. The cross sections of the vesicle at \(y=y_{\rm avg}\) are shown under three cases. No shear, shear in the positive \(x\)-direction, and shear in the negative \(x\)-direction cases are shown in blue, green, and red. Respective arrows show the magnitude and direction of the shear force. The ratio between the maximum shear force on a node and the active force on a node is \(0.04\). The ratio between the total shear and active force is approximately \(0.7\), when we set \(a=0.01k_{B}T/l_{min}^{2}\).
Within this model we do not explicitly describe the fluid surrounding and within the vesicle. The MC calculation does not describe the correct time-scale of shape changes, as the dissipative processes involving the fluid flow are not included. This model can predict the shape changes of the vesicle as it minimizes the energy terms listed above.
This limitation means that when we wish to add the effects of shear forces due to fluid flow, we have to implement some approximate way for exerting these forces. We consider a fluid flow that has a far-field linear profile close to the surface on which the vesicle is adhered (Fig. 1a), in the direction \(\hat{v}^{\text{shear}}\). We do not solve the exact flow field around the vesicle, but we assume that the force exerted on the vertices of the vesicle by the flow is everywhere tangential to the vesicle surface, i.e. in the direction of the projection of the shear flow direction on the local tangent plane (\(\hat{n}\times(\hat{v}^{\text{shear}}\times\hat{n})\)), where \(\hat{n}\) is the local outwards normal to the surface (Fig. 1b). The force on the vertex due to the shear flow is given by: \(\overrightarrow{F}^{\text{shear}}=F^{\text{shear}}(\hat{n}\times(\hat{v}^{ \text{shear}}\times\hat{n}))\), where the force magnitude due to the shear is assumed to be given by the linear far-field flow velocity at the corresponding distance from the adhesive surface: \(F^{\text{shear}}=a(z-z_{\text{ad}})=a\Delta z\) (Fig. 1a,c).
This force due to the shear flow is applied on the nodes of the vesicle as an external force, which gives the following contribution to the energy (work) of the system due to each MC node move (similar to Eq.3)
\[\delta W_{s}=-a\sum_{i}(z_{i}-z_{\text{ad}})\ (\hat{n}\times(\hat{v}^{\text{ shear}}\times\hat{n}))\cdot\delta\overrightarrow{r_{i}}. \tag{5}\]
Note that the shear force is applied to each node along the local tangent, which fluctuates due to local membrane shape undulations (inset of Fig. 1c).
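A sketch of this per-vertex shear force, implementing the tangential projection and linear magnitude literally as written above (pure NumPy; all names are ours):

```python
import numpy as np

def shear_force(r, n_hat, v_hat, a, z_ad):
    """r: (N, 3) vertex positions; n_hat: (N, 3) outward unit normals;
    v_hat: (3,) far-field flow direction; a: shear strength; z_ad: substrate."""
    # Tangential direction n x (v x n), i.e. the projection of v_hat onto the
    # local tangent plane: v_hat - (v_hat . n_hat) n_hat.
    t = np.cross(n_hat, np.cross(v_hat, n_hat))
    # Linear far-field magnitude F_shear = a * (z - z_ad).
    magnitude = a * (r[:, 2] - z_ad)
    return magnitude[:, None] * t
```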
The total energy change of the system, per MC move, is given by
\[\delta W=W_{b}+W_{d}+\delta W_{F}+W_{\text{ad}}+\delta W_{s}. \tag{6}\]
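For concreteness, a schematic acceptance step for a single trial move, assuming the standard Metropolis criterion (the paper's exact MC scheme may differ, and the energy terms themselves are not implemented here):

```python
import numpy as np

def accept_move(delta_W, kBT=1.0, rng=np.random.default_rng()):
    """Metropolis acceptance for a trial vertex move with total energy
    change delta_W, as in Eq. (6)."""
    if delta_W <= 0.0:
        return True                               # downhill moves always accepted
    return rng.random() < np.exp(-delta_W / kBT)  # uphill moves: Boltzmann factor
```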
We first verified that our implementation of the effective force due to shear flow produces reasonable results, by calculating the cross-sectional shape of a protein-free vesicle. We found that the vesicle deforms, and tends to lift and detach from the adhesive surface, as the shear flow increases (Fig.S1), similar to the results of previous experimental and theoretical work [3]. Adding passive curved proteins can enhance the adhesion of the vesicle, mitigating the tendency of the vesicle to lift and detach due to the shear flow (Fig.S1). Note that similar detachments were also observed for cells exposed to strong shear flows [12]. We next investigated the response of our vesicle that contains active curved proteins to the shear flow.
## III Motile vesicles
We have previously found that our minimal-cell model can describe a variety of steady-state shapes of the adhered vesicle [26]. One such phenotype is a motile, crescent-shaped vesicle, which appears for strong adhesive interaction \(E_{\text{ad}}=3k_{B}T\) and sufficiently large active protrusive forces. Such a motile vesicle has a direction of polarity, determined by the direction of the total active forces (\(F^{\text{act}}\)), due to the local forces applied by the curved proteins that form the leading edge cluster (Fig. 1a). This shape is self-sustaining, with the highly curved leading edge maintained by the active forces, thereby stabilizing the cluster of curved proteins that seek to minimize their bending energy at such high curvature.
As shown in Fig. 1c, when the shear flow acts opposite to the polarity of the vesicle, the two opposing forces tend to deform the vesicle and sharpen its leading edge. This should therefore further stabilize the leading edge cluster of curved proteins. On the contrary, when the shear is in the direction of the polarity, the leading edge becomes less curved, thereby destabilizing the leading edge cluster, as it cannot maintain the high curvature that minimizes the bending energy of the curved proteins. We therefore expect that the motile vesicle will respond and modify its motility due to the shear forces.
In Fig. 2(a) we show the effects of different shear directions on the trajectories of the motile vesicle. If shear is absent, the motile vesicle moves persistently along its polarity direction, which meanders over time due to random fluctuations. When shear is parallel to the polarity, we find that the protein aggregate reorients and with it the migration path of the vesicle makes a U-turn, to end up facing the shear flow direction. If the shear is opposite to the polarity, the speed is greatly diminished due to the competition between the active force and the shear force, but the migration direction is highly stable, maintaining the upstream path. Finally, when the shear flow is perpendicular to the polarity, we find again that the protein aggregate rotates and reorients to face the shear.
To explain the origin of these responses of the migration to the shear, we investigate how the shear forces modify the shape of the vesicle (Fig. 1c), and how these shape changes affect the leading-edge protein aggregate. The average mean-curvature \(c_{\text{avg}}^{\text{pro}}\) of all the vertices with the curved proteins is shown in Fig. 2(b). We can see a significant drop in the curvature of the leading edge at early times for the case when the shear is initially parallel to the polarity direction of the vesicle (green solid line). This is a quantification of the effect shown in Fig. 1c. As the vesicle reorients to face the shear flow, the average mean-curvature \(c_{\text{avg}}^{\text{pro}}\) increases, and at long times, when the vesicle faces the shear, it is slightly higher in the presence of shear compared to the no-shear case. The proteins aggregate at the leading edge of the vesicle, and prefer the configuration with the higher curvature, which is oriented against the shear flow.
We follow this reorientation process in Fig. 2(c), and in Fig. 2(d) we plot the local mean curvature of the proteins along the leading edge (\(c^{\text{pro}}\)). The initial reduction
of the curvature due to the shear is most significant in the direction of the shear (\(\theta\approx 0\), blue line), as the vesicle is pushed from behind and the front flattens. As time evolves, the proteins rotate and the curvature along the leading edge increases, first in the direction facing the flow (red line, compare negative vs positive angles). Finally, the protein aggregate orients in the direction facing the shear flow, and the middle of the protein aggregate has the highest curvature.
A very similar dynamics is observed for shear that is perpendicular to the initial vesicle polarization, as shown in Fig. 2(e). The vesicle experiences a shear force that pushes from one side, which makes the farthest side of the protein aggregate father. Therefore, the protein aggregate
Figure 2: Polarized vesicle under shear flow. (a) The trajectories of the centre of mass of the vesicle are shown in lime colour. The red arrows show the direction of the total active force due to the protein aggregate. The color-coded velocity of the centre of mass is shown a shifted trajectory for each case. (b) The average curvature of the protein aggregate over time for the four shear cases \(F^{\rm shear}=0\); \(F^{\rm shear}=0.01\Delta z\) along \(\hat{v}^{\rm shear}=\hat{x}\), \(\hat{v}^{\rm shear}=-\hat{x}\), \(\hat{v}^{\rm shear}=\hat{y}\) in black, green, red, and blue solid lines. It shows the drop in the average curvature of the leading edge \(\hat{v}^{\rm pro}_{\rm avg}\) when the shear is parallel to the vesicle’s initial polarization \(\hat{v}^{\rm shear}=\hat{x}\). (c) Snapshots of the vesicle during the “U-turn” when the shear is initially parallel to its polarity. (d) The illustration on the left shows the definition of the angle \(\theta\) between the position of a protein in the leading-edge cluster with respect to the centre of mass and the direction of the total active force. The right panel shows the curvature profile for the whole protein aggregate at MC times=1, 1999, 4999, 9999 in blue, red, green and magenta solid lines respectively, when \(\hat{v}^{\rm shear}=\hat{x}\). (e) Snapshots of the vesicle during the rotation of the protein aggregate when the shear is initially perpendicular to its polarity. (f) Curvature profile for the whole protein aggregate when shear is initially perpendicular to its polarity at MC times= 1, 499, 2499, 7999, 11499, in blue, red, green, magenta, and cyan lines respectively. (g) The time evolution of the angle \(\phi\) between the active force and the shear force. (h) The time evolution of bending energy \(W_{b}\), protein-protein binding energy \(W_{d}\), and adhesion energy \(W_{\rm ad}\) for the system shown in (e). The double-headed orange arrow indicates the long-time decrease in adhesion energy when shear is present compared to the case when shear is absent. The inset of the third column shows the adhesive area at MC time=8000, for the sheared and non-sheared cases in green and black lines respectively. We used here: \(E_{\rm ad}=3k_{B}T\),\(F=2k_{B}T/l_{min}\rho=3.45\%\).
rotates towards the more highly curved region and becomes motile against the shear. The curvature at the sites of the proteins \(c^{\rm pro}\) is shown in Fig. 2(f). Initially the far side (\(\theta\approx\pi/2\)) of the vesicle gets flattened, and has lower curvature than the side that faces the shear. This gradient in curvature induces the rotation of the leading edge cluster, as shown in Fig. 2(g).
The dynamics in our model are driven by the minimization of energy and work. In Fig. 2(h) we plot the time evolution of the different energy components: the bending energy \(W_{b}\), the binding energy \(W_{d}\), and the adhesion energy \(W_{\rm ad}\). In the steady-state configuration, when the vesicle is polarized against the flow, we find that the bending energy is increased due to shear, compared to the case when shear is absent, so clearly this energy is not minimized during the reorientation process. The protein-protein binding energy at short times is largest when the leading edge cluster is destabilized by the parallel shear flow (green line), but at long times this energy is essentially unaffected by the presence of shear.
Finally, we find that the adhesion energy \(W_{\rm ad}\) is clearly smaller (more negative) at long times in the presence of shear compared to the no-shear case, as indicated by the two-headed orange arrow. We plot the adhesive area of the vesicle that is in contact with the substrate, for the cases with and without shear, in the inset of the rightmost panel of Fig. 2(h), showing the larger adhesive area when the shear is present. At a lower shear flow strength, we found that the results are qualitatively the same, but the reorientation dynamics take longer to occur.
We therefore conclude that our simplified, minimal-cell model can provide a physical mechanism for the stabilization of upstream cell migration in the presence of shear flow. The basis for this mechanism is the increased cell spreading due to shear flow (Fig. 2(h)), which was also observed in cells [5; 11]. Note that cells also respond to shear through signalling that modifies the overall cell behavior, which corresponds in our model to changes of the model parameters. Nevertheless, the physical mechanism that we find for migration against the flow is not cell-type-specific and is independent of any complex biochemical signalling. It may therefore explain why this behavior appears in many different cell types, as listed in the introduction.
## Non-Motile Vesicles
Next, we explore the response to shear flow for non-motile adhered vesicles in our model. The non-polar, non-motile phenotypes of adhered vesicles in our model have several shapes [26]: at low adhesion or low active force, the vesicle is weakly spread and has a roughly hemispherical shape. For a high concentration of the curved membrane proteins, and at sufficiently high adhesion or active force, the vesicle spreads into a round pancake-like
Figure 3: Un-polarized, two-arc vesicle in shear flow. (a) The two-arc, non-motile vesicle, where two opposing leading-edge clusters pull the membrane in the middle into a tubular shape. (b) The trajectories of the centre of mass of the vesicle are shown in lime colour. Red arrows denote the total active force. The velocity of the centre of mass is shown with a colour map by a shifted trajectory for each case in three different columns. (c) Time evolution of the angle \(\theta_{\rm ax}\) between the long axis of the vesicle and the shear force when the shear flow is initially parallel and perpendicular to its body axis, in green and red colour respectively. (d) Different shapes of the vesicle over a long time and their body axis for the cases of shear along the body axis and perpendicular to the body axis. (e) In the top line, the active force along the \(x\) direction for the two different arcs is plotted as a function of MC time, in red and blue lines respectively. The green line shows the total active force on the vesicle. In the second line, the numbers of proteins (sizes of the arcs) in the two leading-edge clusters are plotted as a function of MC time. In the third line, the time evolution of the efficiency of each arc is quantified by the magnitude of the force along the \(x\) axis per protein in the leading-edge cluster, i.e. \(|F_{x}^{\rm act}|/N_{\rm cluster}\). (f) The projection of the positions of the proteins in the two leading-edge arcs on the \(x-y\) plane. The circle fit gives the radius of curvature of the corresponding arcs. (g) Rolling motion of the membrane due to shear: the cross-section at the middle of the two-arc-shaped vesicle on the \(y\)-\(z\) plane. We plot at three different times \(0,\ 50,\ 100\) (in units of 2000 MC steps) in red, green, and blue respectively. A particular vertex at different times is highlighted with a circular marker to illustrate the rolling motion. We used here: \(E_{\rm ad}=1k_{B}T\), \(F=3k_{B}T/l_{\rm min}\), \(\rho=3.45\%\).
shape with a closed, circular leading edge. The response of both of these shapes to the shear is given in the SI (Fig. S2, S3), where it is shown that they roll and slide with the flow.
A more interesting shape arises on surfaces with weaker adhesion, or at high active forces, where the vesicle spreads into a two-arc shape (Fig 3a). The vesicle is elongated by two leading-edge clusters at opposing ends, with the membrane between them being pulled into a cylindrical shape. Adhered cells often have such elongated shapes, with multiple, competing leading edge lamellipodia, which render them non-polar and non-motile [23; 27; 10; 29].
In Fig. 3(b) we plot the trajectories of the two-arc vesicle for three different shear conditions with respect to the initial long axis of the vesicle (the long axis is calculated as explained in the SI). When shear is absent, the vesicle is almost completely non-motile, while in the presence of flow the vesicle moves with the shear flow. Interestingly, we can see that initially the vesicle moves faster when the shear is in the same direction as its body axis, compared to when the shear is perpendicular to the vesicle's long axis. However, this migration along the long axis is unstable, and at long times the vesicle rotates to being perpendicular to the flow (Fig. 3c), which is the stable configuration. In Fig. 3(d) we show snapshots of the vesicle at different times as it moves with the shear flow, either parallel or perpendicular to the initial long axis of the vesicle.
As shown in Fig. 3(b), the vesicle moves faster when the shear is along the body axis compared to when the shear is perpendicular to the body axis. This indicates that the shear is inducing some polarization of the active forces, which now have a net component that contributes to the active motility along the shear flow. To understand the origin of this shear-induced polarization, we analyzed the two leading-edge clusters at the opposing ends of the vesicle. We denote the arcs pulling towards the positive \(x\) direction (with the flow) and the negative \(x\) direction (against the flow) as arc-1 (red) and arc-2 (blue), respectively (Fig. 3e,f). In Fig. 3(e) we show that the net active force due to the two clusters is positive at early times, indicating that there is indeed a net active force from the leading edges, pulling the vesicle in the flow direction. This shear-induced asymmetry is manifested as a larger active force along the \(x\)-direction due to arc-1 compared to the negative component from arc-2. However, the sizes of the two arcs \(N_{\rm cluster}\) do not show any systematic difference between the two leading edge clusters. Nevertheless, the net force in the flow direction due to arc-1 is stronger than the force due to arc-2, since the efficiency, defined as the net force along the flow direction per protein \(|F_{x}^{\rm act}|/N_{\rm cluster}\), of the proteins in arc-1 is larger (Fig. 3e). Even when the size of arc-1 is smaller than that of arc-2, the proteins in arc-1 can be more efficient and produce a stronger net force along the flow direction, making the vesicle motile in the presence of shear.
To get more insight into this efficiency of the two leading edges, we plotted the positions of the proteins on the \(x\)-\(y\) plane as shown in Fig. 3(f), at a time when the sizes of the two clusters are almost identical, yet there is a net force in the direction of arc-1. We fit a circular arc and find the radius of curvature for each leading edge cluster using the gradient-descent method (a sketch is given below). We find that the radius of curvature is bigger for arc-1 (\(R=10.2l_{\rm min}\)) compared to arc-2 (\(R=7.4l_{\rm min}\)), and this flatter shape of arc-1 makes its proteins' active forces more oriented along the flow, compared to the orientations of the active forces in arc-2. The flatter shape of the leading edge of arc-1 is due to the shear forces pushing the membrane along the tubular part that connects the two leading edges, such that membrane area is forced from the region of arc-2 to that of arc-1, allowing the fan-shaped region of arc-1 to grow larger in area.
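The circle fit can be reproduced with a short gradient-descent routine. The following is a minimal sketch rather than the implementation used here: the initialization, learning rate, iteration count, and function name are all our choices.

```python
import numpy as np

def fit_circle(x, y, iters=2000, lr=0.01):
    """Fit a circle (cx, cy, r) to the projected protein positions of one
    leading-edge arc by gradient descent on the mean squared radial
    residual, mean((d_i - r)^2), where d_i is the distance of point i
    from the current centre."""
    cx, cy = x.mean(), y.mean()                 # initial centre: centroid
    r = np.hypot(x - cx, y - cy).mean()         # initial radius: mean distance
    for _ in range(iters):
        d = np.hypot(x - cx, y - cy)
        res = d - r                             # signed radial residuals
        cx -= lr * np.mean(res * (cx - x) / d)  # gradient w.r.t. cx (up to a factor of 2)
        cy -= lr * np.mean(res * (cy - y) / d)  # gradient w.r.t. cy
        r += lr * np.mean(res)                  # gradient w.r.t. r
    return cx, cy, r
```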
In the stable phase, where the vesicle moves with the shear flow that is perpendicular to its body axis, we plot the cross-section of the vesicle at its middle point (\(x_{\rm avg}\)) at different times (Fig. 3g). By following one particular node we illustrate that the membrane is rolling on the surface due to the flow-induced shear forces. In the SI we present the dynamics of the two-arc shape at different shear flow strengths.
Weakly polarized cells, such as Chinese hamster ovary (CHO) cells, exhibit weak migration with the shear flow, often maintaining an elongated shape that is perpendicular to the shear direction [9]. These CHO cells tend to spread in a circularly symmetric manner, and indeed our vesicles that spread uniformly tend to slide with the shear flow (SI, Fig. S3), as observed for these cells.
## Conclusions
Our "minimal-cell" model, where cell spreading and migration emerges due to curved membrane proteins that recruit the protrusive forces of actin polymerization, is used to explore the effects of shear forces applied to the membrane due to an imposed fluid flow. This model shows that since the self-organization of the curved proteins and active forces are dependent on the membrane shape, the system is strongly affected by these flow-induced shear forces.
We found that the motile crescent-shaped vesicle in our model spontaneously migrates against the shear flow due to the reorganization of the curved proteins in response to the shear flow. This behavior arises simply from the physics of minimizing the adhesion energy to the substrate. Since our mechanism is based on a very simple model and a physical mechanism, it may explain why the tendency of cells to migrate against the flow appears in many different cell types that migrate using lamellipodia protrusions [1; 2; 7; 8; 11; 13; 14; 22; 30]. Though
our model does not include many cellular components, it offers an explanation for the origin of this prevalent migration response to shear flow, which is not understood at present.
For the non-motile vesicles we found that they tend to migrate or roll with the shear flow, which may explain why weakly motile cells tend to move with the flow [8; 9; 13]. Note that cells respond to shear flow due to signalling pathways [4; 25], which modify the overall cell-substrate adhesion and cytoskeleton activity. This layer of biochemical control manifests as modifications to the parameters of the vesicle in our model, beyond the shear-induced shape changes that we investigated.
The MC model we used here does not describe the full fluid-flow field surrounding the cell. Future studies that explicitly include the fluid dynamics [19; 21] and additional cellular components [6] may be used to explore the dynamics predicted by our model with better physical realism, at the price of greatly increased complexity and computation time.
Our results may also explain the motility response of cells to external forces that are exerted on them by other means, not due to fluid flow. In [32] it was shown that when an adhered cell is pulled by a magnetic bead that is attached to the cell, it tends to polarize in the opposite direction to the applied force. Similarly, when cells that are attached to each other exert a pulling force on each other, they tend to polarize in opposite directions to each other [32]. This plays an important role during collective cell migration [20]. These responses may arise from the same behavior that we obtain here, namely the pulling force tends to stabilize the leading edge of the cell in the direction that is opposite to the direction of the external force (Fig. 1).
## Acknowledgement
We thank Raj Kumar Sadhu and Yoav Ravid for many interesting and fruitful discussions. N.S.G. is the incumbent of the Lee and William Abramowitz Professorial Chair of Biophysics, and acknowledges support by the Ben May Center for Theory and Computation, and the Israel Science Foundation (Grant No. 207/22). This research is made possible in part by the historic generosity of the Harold Perlman Family. A.I. and S.P. were supported by the Slovenian Research Agency (ARRS) through Grants No. J3-3066 and J2-4447 and Programme No. P2-0232.
|
2307.16032 | The Kinematic Distance to NGC 6309 | We report an updated value for the distance to the planetary nebula NGC 6309
(the Box Nebula). The distance is found through two Kinematic Distance Methods
(KDMs): the system of two equations reported in Zhu et al. 2013 and the Monte
Carlo method reported by Wenger et al. 2018. We find the kinematic distance to
NGC 6309 to be 4.1 kpc with an upper uncertainty of +0.29 kpc and a lower
uncertainty of -0.38 kpc. We also calculate the distance to Cassiopeia A with
the two KDMs and compare to the value reported by Reed et al 1995. The Zhu et
al. method and Wenger et al. method yield a value within thirty percent and
twenty percent of the Reed et al. method, respectively. The value reported by
Reed et al 1995 was contained within the error bounds produced by the Wenger et
al. method. The distance measurement to Cassiopeia A suggests that both KDMs,
while imperfect, are moderately accurate methods for determining the distance
to NGC objects in the plane of the Milky Way. | Scott C. Scharlach, Colton G. Morgan | 2023-07-29T17:44:23Z | http://arxiv.org/abs/2307.16032v1 | # The Kinematic Distance to NGC 6309
###### Abstract
We report an updated value for the distance to the planetary nebula NGC 6309 (the Box Nebula). The distance is found through two Kinematic Distance Methods (KDMs): the system of two equations reported in Zhu et al. 2013 and the Monte Carlo method reported by Wenger et al. 2018. We find the kinematic distance to NGC 6309 to be 4.1 kpc with an upper uncertainty of +0.29 kpc and a lower uncertainty of -0.38 kpc. We also calculate the distance to Cassiopeia A with the two KDMs and compare to the value reported by Reed et al 1995. The Zhu et al. method and Wenger et al. method yield a value within thirty percent and twenty percent of the Reed et al. method, respectively. The value reported by Reed et al 1995 was contained within the error bounds produced by the Wenger et al. method. The distance measurement to Cassiopeia A suggests that both KDMs, while imperfect, are moderately accurate methods for determining the distance to NGC objects in the plane of the Milky Way.
\({}^{1}\)_Pisgah Astronomical Research Institute_
_1 PARI Drive_
_Rosman, NC 28772, USA_
_Key Words_: Kinematic Distance - NGC 6309 - Planetary Nebula - Box Nebula
## 1 Introduction
Planetary Nebulae (PNe) are highly luminous clouds of gas and dust around post-main-sequence stars. Stars which range in mass from 0.8 to 8 solar masses eject PNe at the end of their red-giant evolution, shortly before becoming white dwarfs. Because PNe are created at a transition period in a star's life cycle, they provide an important source of information on the evolution of stars in the universe.
In order to determine several features of PNe, such as radius, age, mass, and luminosity, astronomers must first accurately measure the object's distance. However, the distances to PNe have historically been difficult to calculate. Methods such as those proposed in Shklovsky 1956 or Daub 1982 provide order-of-magnitude estimates, but they are imprecise.
Two recent astronomical discoveries have made a different distance calculation, the Kinematic Distance Method, far more feasible than it was only a few decades ago. Firstly, Reid et al 2014 published an updated relation between the galactocentric radius of objects in the Milky Way and their circular velocity around the center of the Milky Way. Secondly, Abuter et al 2019 measured a highly accurate distance to the center of the galaxy, with an uncertainty of 0.3%. These two values allow astronomers to calculate the distance to PNe with greater accuracy than in the past.
This paper aims to determine the distance to one object, NGC 6309 (nicknamed the Box Nebula), in order to demonstrate the efficacy of the Kinematic Distance Method. The distances to PNe allow astronomers to calculate other important intrinsic features of PNe, such as their radius, age, absolute magnitude, and luminosity. In other words, an accurate understanding of the distance to an object is essential for discovering a multitude of the object's intrinsic features.
## 2 Methods
### Kinematic Distance
Objects within the Milky Way galaxy rotate in approximately circular orbits around the center of the galaxy. For objects with a galactocentric radius1 of 6 kpc or greater, the function between galactocentric radius and circular velocity around the Milky Way is approximately flat. In other words, objects in the plane of the Milky Way with a galactocentric radius of 6 kpc or greater have roughly equal circular velocities.
Footnote 1: The galactocentric radius of an object is the distance between the object and the center of the Milky Way galaxy.
When astronomers observe an object in the plane of the Milky Way, there is a component of the object's circular velocity which points directly away from the observer. The rate at which an object moves towards or away from an observer in our solar system is called the radial velocity of the object.
Using trigonometry, Brand and Blitz 1993 presented an equation which relates the object's circular velocity to its observed radial velocity. (The equation assumes the object has no proper motion, i.e. the object perfectly adheres to the galactic rotation curve function. The phenomenon of proper motion is discussed in Section 2.3.) The equation is:
\[V_{r}=\left(V_{c}\,\frac{R_{0}}{R}-V_{0}\right)\sin(l)\cos(b), \tag{1}\]
where \(V_{r}\) is heliocentric radial velocity of the object, \(V_{c}\) is the circular velocity of the object around the center of the Milky Way, \(R_{0}\) is the galactocentric radius of the Local Standard of Rest (LSR)2, \(R\) is the galactocentric radius of the object, \(V_{0}\) is galactocentric velocity of the LSR, and \((l,b)\) are the galactic coordinates of the object.
Footnote 2: Additional information on the LSR is found in the Appendix.
Zhu et al 2013 approximated the circular velocity of the object and the circular velocity of the LSR as approximately equal. The equation then simplifies to:
\[V_{r}=V_{0}\left(\frac{R_{0}}{R}-1\right)\sin(l)\cos(b). \tag{2}\]
An object, the LSR, and the center of the galaxy form the points of a triangle. Thus, their relative distances are related via the Law of Cosines:
\[R^{2}=R_{0}^{2}+(d\cos(b))^{2}-2R_{0}\,d\cos(b)\cos(l), \tag{3}\]
where \(d\) is the heliocentric distance to the object. The equation utilizes the term \(d\cos(b)\) to account for the angle between the plane of the Milky Way and the object, as the galactic object is not necessarily in the exact plane of the Milky Way.
Wenger et al 2018 reports that the circular velocity of the LSR (\(V_{0}\)) is defined to be 220 \(km*s^{-1}\). Abuter et al 2019 reports the galactocentric radius of the LSR (\(R_{0}\)) to be 8.178 kpc, with an uncertainty of 0.3%.
\(V_{r}\) and \((l,b)\) are observable quantities which are specific to the observed object. In particular, \(V_{r}\) can be determined from the object's redshift, which in turn can be determined from spectral data. The center of the galaxy has a known location in the celestial sphere; Wenger et al. 2018 reports the galactic coordinates of the center of the Milky Way to be \((0^{\circ},0^{\circ})\) by definition. The galactic coordinates \((l,b)\) can be determined simply by measuring the angle between the center of the galaxy and the object relative to the observer.
Because \(V_{0}\), \(R_{0}\), \(V_{r}\), and \((l,b)\) are either known or observable values, equations 2 and 3 thus form a system of two equations with two unknowns, \(R\) and \(d\). This system can be solved to determine the heliocentric distance to objects such as planetary nebulae.
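As a concrete illustration, the system admits a closed-form solution: equation 2 gives \(R\) directly, and equation 3 is then a quadratic in \(d\cos(b)\). The sketch below (the function name and structure are ours, not from the cited works) recovers, to within rounding, the near and far distances quoted for NGC 6309 in Section 3.1.

```python
import numpy as np

R0 = 8.178   # kpc, galactocentric radius of the LSR (Abuter et al 2019)
V0 = 220.0   # km/s, circular velocity of the LSR (Wenger et al 2018)

def kinematic_distances(v_r, l_deg, b_deg):
    """Solve equations 2 and 3 for the galactocentric radius R and the
    heliocentric distance(s) d of an object with LSR radial velocity
    v_r (km/s) at galactic coordinates (l, b) in degrees."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    # Equation 2 rearranged for R (flat rotation curve approximation):
    R = R0 / (v_r / (V0 * np.sin(l) * np.cos(b)) + 1.0)
    # Equation 3 is quadratic in d*cos(b); its two roots are the near/far distances.
    disc = R**2 - (R0 * np.sin(l))**2
    if disc < 0:
        raise ValueError("no real solution for these observables")
    d_near = (R0 * np.cos(l) - np.sqrt(disc)) / np.cos(b)
    d_far = (R0 * np.cos(l) + np.sqrt(disc)) / np.cos(b)
    return R, d_near, d_far

# NGC 6309, with the radial velocity sign convention of Section 3.1:
R, d_near, d_far = kinematic_distances(32.0, 9.65484, 14.81371)
```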
### The Kinematic Distance Ambiguity and its Resolution
For objects with a galactocentric radius which is smaller than the galactocentric radius of the LSR, the system of two equations described by 2 and 3 provide two possible values for the heliocentric distance \(d\), a near distance and a far distance. Figure 1 demonstrates two possible locations of a hypothetical object which both satisfy the system of two equations. This uncertainty is called the
Kinematic Distance Ambiguity (KDA). Methods which attempt to resolve the KDA are called the Kinematic Distance Ambiguity Resolution (KDAR).
This paper resolves the KDA with the distance estimation method originally reported by Daub 1982 and modified by Cahn et al 1991. Firstly, Daub 1982 presents an equation for the optical thickness parameter \(T\), which is given by
\[T=\log_{10}\left(\frac{4\theta^{2}}{F(5)}\right), \tag{4}\]
where \(\theta\) is the angular radius in arcseconds and \(F(5)\) is the 5 GHz radio flux in janskys. Both \(\theta\) and \(F(5)\) are observable quantities.
Secondly, Cahn 1991 defines an abstract value \(\mu\):
\[\mu=\sqrt{2.266\times 10^{-21}\,D^{5}\,\theta^{3}\,F(5)}, \tag{5}\]
where \(D\) is the distance to the object.
Thirdly, Cahn 1991 measured the distances to several PNe with the statistical parallax method, which allowed for the calculation of \(\mu\) for those PNe. The publication presents a rough, order-of-magnitude correlation between \(log(\mu)\) and optical thickness \(T\). Figure 2 shows this approximate correlation. Cahn 1991 presumed that the correlation remains true for PNe which are too distant for accurate statistical parallax measurements.
Fourthly, by observing the 5 GHz radio flux of an object and its angular radius, astronomers can use equation 4, equation 5, and Figure 2 to roughly estimate the distance to a planetary nebula, even when statistical parallax data is unavailable. However, the correlation shown in 2 is imprecise, and therefore the distance estimate is imprecise as well.
Generally, only one of the two distance values from the system of two equations described above falls within the uncertainty range of the method described in this section. The distance value which satisfies both methods is more likely to accurately reflect the distance to the object than the distance estimate which satisfies only one of the two methods.
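A minimal sketch of this estimate follows (the function names are ours; the value of \(log(\mu)\) must be read off the empirical correlation of Figure 2, which is not reproduced in code, and the distance units follow the convention of Cahn et al 1991 in equation 5):

```python
import numpy as np

def optical_thickness(theta_as, F5_jy):
    """Equation 4: optical thickness parameter T from the angular
    radius (arcseconds) and the 5 GHz radio flux (janskys)."""
    return np.log10(4.0 * theta_as**2 / F5_jy)

def distance_from_mu(log_mu, theta_as, F5_jy):
    """Invert equation 5 for the distance D, given a value of
    log10(mu) read off the T -- log(mu) correlation of Figure 2."""
    mu = 10.0**log_mu
    return (mu**2 / (2.266e-21 * theta_as**3 * F5_jy))**0.2
```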
### Proper Motion and a Monte Carlo Method
The Kinematic Distance Method relies on an accurate understanding of the rate at which the object rotates around the center of the Milky Way. Astronomers determine this from observing the object's radial velocity. However, the object may also have proper motion, i.e. motion which does not perfectly adhere to the predicted rotation rate. Proper motion can affect the object's observed radial velocity to a non-negligible degree.
Proper motion presents an important problem for the KDM because equations 1 and 2 assume that the observed radial velocity is due only to galactic rotation, not proper motion. However, Brand
Figure 1: The small black disk indicates the center of the galaxy. The inner ring indicates the galactic orbital path of a hypothetical object, while the outer ring represents the galactic orbital path of the solar system. The yellow disk indicates the solar system. The system of two equations described in the text provides not one but two possible heliocentric distances at the same galactocentric distance \(R\) when \(R<R_{0}\).
and Blitz 1993 point out that there is no _a priori_ method for determining what percentage of the observed radial velocity is due to galactic rotation and what percentage is due to proper motion.
The problem of proper motion can be partially accounted for with methods developed by Brand and Blitz 1993 and Wenger et al 2018, both of which are described in this section.
Brand and Blitz 1993 developed a method to calculate the proper motions of PNe with previously determined \(d\) and \(R\). (In this context, \(d\) and \(R\) were found by distance estimation methods which are independent of the Kinematic Distance Method.) Firstly, they inserted the galactocentric radius \(R\) into an empirically derived equation for the rotation rate of the Milky Way at a given \(R\):
\[\frac{V_{c}}{V_{0}}=1.00767(\frac{R}{R_{0}})^{0.0394}+0.00712. \tag{6}\]
Secondly, Brand and Blitz substituted the calculated value of \(V_{c}\) into equation 1 in order to calculate the expected radial velocity due only to the circular velocity. Thirdly, they subtracted the actual observed radial velocity from the predicted radial velocity to find a value which Brand and Blitz 1993 referred to as the "velocity residual." The velocity residual measures the degree to which the object's motion deviates from the galactic rotation curve; it is tantamount to the object's proper motion.
Brand and Blitz presented a histogram which displays the velocity residuals of many planetary nebulae, which is reprinted in Figure 3. Most PNe have velocity residuals which are 6.4 \(km*s^{-1}\) or lower, but some have velocity residuals with an absolute magnitude as great as 40 \(km*s^{-1}\).
Wenger et al 2018 presents open access software which accounts for the proper motion of PNe in a statistically robust manner. The software utilizes a Monte Carlo method in which each simulation randomly selects a proper motion for the object. The value of the proper motion is statistically weighted in a manner that matches figure 3. By conducting the simulation 10,000 times, the software produces a Gaussian curve of probable distances to an object with an observed \(V_{r}\) and \((l,b)\).
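The statistical idea can be illustrated with a toy version that reuses kinematic_distances from the sketch above. The Gaussian residual width of 6.4 \(km*s^{-1}\) is our stand-in for the empirical distribution of Figure 3; the actual software also varies the rotation-curve parameters and uses equation 7 rather than a flat curve.

```python
rng = np.random.default_rng(0)

def monte_carlo_distance(v_r, l_deg, b_deg, sigma_pm=6.4, n=10_000):
    """Toy Monte Carlo kinematic distance: perturb the observed radial
    velocity with a random velocity residual drawn from a Gaussian of
    width sigma_pm (km/s), re-solve the kinematic distance, and report
    the median and 68.3% confidence interval of the near distance."""
    samples = []
    for dv in rng.normal(0.0, sigma_pm, size=n):
        try:
            _, d_near, _ = kinematic_distances(v_r + dv, l_deg, b_deg)
            samples.append(d_near)
        except ValueError:
            continue  # this draw yielded no real solution; skip it
    lo, med, hi = np.percentile(samples, [15.85, 50.0, 84.15])
    return med, lo, hi
```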
The software also takes into account uncertainties in other variables which are beyond the scope of this paper. For example, the software does not approximate the galactic rotation curve as flat. Instead, it utilizes an updated, empirically-derived galactic rotation function from Reid et al 2014:
\[V_{c}(R)=241\left[\frac{1.97\,\beta\,x^{1.22}}{(x^{2}+0.78^{2})^{1.43}}+(1-\beta)\,x^{2}\,\frac{1+1.46^{2}}{x^{2}+1.46^{2}}\right]^{1/2}, \tag{7}\]
where \(x=R/(0.90R_{0})\) and \(\beta=0.72+0.44\log_{10}[(1.46/1.5)^{5}]\).
By utilizing a Monte Carlo method with randomly selected proper motions and attending to precise details such as updated rotation curves, the software from Wenger 2018 allows astronomers to calculate
Figure 2: The optical thickness roughly correlates with the abstract quantity \(\mu\), which is related to the object’s heliocentric distance. Reprinted from Cahn et al 1991.
the most likely distance to planetary nebulae even when the exact proper motion of the object is unknown.
## 3 Results
### The Distance to NGC 6309
SIMBAD reports that the galactic coordinates \((l,b)\) of NGC 6309 are \((l,b)=(9.65484^{\circ},+14.81371^{\circ})\).3
Footnote 3: SIMBAD was accessed via webpage through the following hyperlink: [http://simbad.cds.unistra.fr/simbad/sim-id?Ident=NGC+6309&NbIdent=1&Radius=2&Radius.unit=arcmin&submit=submit+id](http://simbad.cds.unistra.fr/simbad/sim-id?Ident=NGC+6309&NbIdent=1&Radius=2&Radius.unit=arcmin&submit=submit+id)
Vazquez et al 2014 and Rubio et al 2014 both report the radial velocity of NGC 6309 to be \(V_{r}=-32\pm km*s^{-1}\). When this negative value for \(V_{r}\) is inserted into the system of two equations, one distance estimate is negative and the other estimate is greater than the diameter of the Milky Way. The peculiar nature of these values suggests that the values are not physically meaningful, and instead the coordinate system ought to be redefined so that \(V_{r}\) is positive. For this reason, we use \(V_{r}=32\pm km*s^{-1}\) as our value for the heliocentric radial velocity of NGC 6309.
The system of two equations yielded two distance values for NGC 6309: a near distance of 4.1 kpc and a far distance of 12.6 kpc. Cahn 1991 estimates the distance to NGC 6309 to be 2.532 kpc, which is much closer to the near distance than the far distance. Therefore, the authors of this paper conclude that the near distance estimate, 4.1 kpc, is the more accurate distance estimate.
The Wenger 2018 software produced a Probability Distribution Function (PDF) of the most likely range of distances to NGC 6309. The most likely distance to NGC 6309 is 4.07 kpc, and the 68.3% confidence interval ranges from a distance of 3.72 kpc to 4.39 kpc. The PDF takes the form of a Gaussian curve and is shown in figure 4.
The Zhu 2013 method and Wenger 2018 method yielded distance estimates which differ only by 0.03 kpc; they are in excellent agreement with each other. This agreement in values suggests that the Kinematic Distance Method is replicable, which in the opinion of the authors adds to the method's reliability.
We use the value 4.1 kpc (with two significant figures) as our primary distance estimate to NGC 6309, and we use the 68.3% confidence interval from the PDF as our interval of uncertainty in our distance estimate. With these values, we conclude that the distance to NGC 6309 is 4.1 +0.29/-0.38 kpc.
Figure 3: A histogram displaying the velocity residuals of planetary nebulae with \(R\) and \(d\) estimates. Some planetary nebulae have a velocity residual whose magnitude is large enough to create a systematic error in the distance estimation method described in Section 2.1. Reprinted from Brandon and Blitz 1993.
### Efficacy Check: Cassiopeia A
In order to check the reliability of the Kinematic Distance Method, we applied the KDM to an object with a known distance: the supernova remnant Cassiopeia A. Reed et al 1995 reported that the distance to Cassiopeia A is 3.4 kpc. Cassiopeia A was chosen for this efficacy check because it is in the plane of the Milky Way, it has a documented radial velocity, and the distance estimate used by Reed 1995 was conducted independently of the Kinematic Distance Method.
Reed 1995 originally calculated the distance to Cassiopeia A with a five-step method. Firstly, the authors utilized the object's redshift to calculate its expansion velocity in \(km*year^{-1}\). Secondly, the authors observed the object's change in angular radius over time, in \(as*year^{-1}\) (arcseconds per year). Thirdly, the authors observed the object's angular radius in \(as\). Fourthly, the authors used the three previous values to calculate the object's radius in \(km\). Finally, the authors calculated the object's distance using the trigonometric equation \(\sin(\theta)=r/d\), where \(\theta\) is the angular radius, \(r\) is the radius, and \(d\) is the heliocentric distance to the object.
With regard to the Kinematic Distance Method, Asgekar et al 2013 reports the radial velocity of Cassiopeia A to be approximately -48.0 \(\pm\) 1 \(km*s^{-1}\). SIMBAD reports the galactic coordinates of Cassiopeia A to be \((l,b)=(111.734751^{\circ},-2.129568^{\circ})\).
The system of two equations described above results in a galactocentric radius \(R\) of 11 kpc, which is greater than the galactocentric radius of the LSR. Therefore, there is no Kinematic Distance Ambiguity, and the system of two equations yields only one plausible value for \(d\): 4.5 kpc. The other value is -10.6 kpc, which is negative, suggesting that this value is not physically meaningful. Although the sign of the radial velocity of NGC 6309 was changed to be positive, the sign of the radial velocity of Cassiopeia A remained negative, as this was the sign which produced physically meaningful results.
The Monte Carlo software estimates a distance of 4.14 +0.79/-0.74 kpc, where the uncertainty indicates the 68.3% confidence interval. The Probability Distribution Function (PDF) for the distance to Cassiopeia A is shown in figure 5.
The value produced by the Wenger 2018 software differs from the value produced by the system
Figure 4: The software created by Wenger 2018 finds the distance to NGC 6309 to be 4.07 +0.32/-0.35 kpc, with the uncertainty indicating the 68.3% confidence interval.
Figure 5: The Probability Distribution Function (PDF) generated by the Wenger 2018 software estimates the distance to Cassiopeia A to be between 3.40 kpc and 4.93 kpc with 68.3% confidence.
of two equations by 8%. This difference is within the Monte Carlo simulation's uncertainty interval, suggesting that the two methods produce values which are mostly, but not perfectly, in agreement with each other.
The distance values found through the Zhu 2013 method and the Wenger 2018 method differ from Reed 1995's distance value by 33% and 22%, respectively. The distance reported by Reed 1995 falls within the 68.3% confidence interval of the PDF. This suggests that the KDMs are imperfect but moderately accurate.
One possible source of error in this distance estimate is that Cassiopeia A has slow but non-negligible proper motion, as described by Kamper and van den Bergh 1976. Additionally, Cassiopeia A is a supernova remnant, not a planetary nebula, and therefore the distribution of proper motions used by the Wenger 2018 software may be slightly inaccurate for supernova remnants.
### Distance-Dependent Features of NGC 6309
The distance to NGC 6309 allowed the authors of this paper to determine a variety of intrinsic properties of the object, such as its diameter, age, absolute magnitude, and luminosity.
Rubio et al 2014 and Vazquez et al 2014 report the angular radius \(\theta\) of NGC 6309 to be approximately 32 arcseconds. We use the trigonometric equation
\[\sin(\theta)=\frac{r}{d}, \tag{8}\]
where \(r\) is the radius of the object in parsecs and \(d\) is the heliocentric distance to the object in parsecs. We calculate the radius to be 0.14 pc, and therefore the diameter of NGC 6309 is 0.28 pc.
Newton's first law of motion suggests that, after being emitted from its central star, NGC 6309 has been expanding at a constant velocity. Therefore, we approximate its expansion as constant for the entirety of its lifespan. Rubio et al 2014 report an expansion velocity of 5 \(km*s^{-1}\) for NGC 6309. Using the radius and expansion velocity, we find a kinematic age of 27,000 years for NGC 6309.
The New General Catalogue reports an apparent magnitude of 11 for NGC 6309.4 The distance modulus equation states that:
Footnote 4: The New General Catalogue was accessed via webpage created by the Students for the Exploration and Development of Space (SEDS). The hyperlink for the webpage is: [http://spider.seds.org/ngc/ngc.cgi?CatalogNumber=NGC+6309](http://spider.seds.org/ngc/ngc.cgi?CatalogNumber=NGC+6309)
\[m-M=5\log_{10}(d)-5, \tag{9}\]
where \(m\) is the apparent magnitude, \(M\) is the absolute magnitude, and \(d\) is the distance to the object in parsecs. We calculate \(M=-1.6\) for NGC 6309. Ciardullo et al 2002 found that the brightest hypothetical planetary nebulae can possess an absolute magnitude of \(M=-4.47\pm 0.05\); a value of -1.6 is well within this limit.
The following equation connects the luminosity of an object in Watts with the bolometric magnitude (i.e. absolute magnitude). The equation is:
\[M_{bol}=-2.5\log_{10}\left(\frac{L}{3.0128\times 10^{28}\,W}\right) \tag{10}\]
We calculate the luminosity of NGC 6309 to be \(1.3\times 10^{29}\) W, which is roughly 340 times brighter than the Sun.
We present these calculations to demonstrate the relative ease at which intrinsic features of a planetary nebula can be calculated when the distance to the object is known.
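As a worked illustration, the calculations above can be transcribed directly (a sketch; the constants and the function name are ours, and the inputs are the observables quoted in this section):

```python
import numpy as np

PC_KM = 3.0857e13                  # kilometres per parsec
AS_RAD = np.pi / (180.0 * 3600.0)  # radians per arcsecond
YR_S = 3.156e7                     # seconds per year

def derived_properties(d_pc, theta_as, v_exp_kms, m_app):
    """Physical radius (equation 8), kinematic age under constant
    expansion, absolute magnitude (equation 9), and bolometric
    luminosity in watts (equation 10)."""
    r_pc = d_pc * np.sin(theta_as * AS_RAD)       # equation 8
    age_yr = r_pc * PC_KM / v_exp_kms / YR_S      # age = radius / expansion velocity
    M_abs = m_app - (5.0 * np.log10(d_pc) - 5.0)  # equation 9
    L_watt = 3.0128e28 * 10.0**(-M_abs / 2.5)     # equation 10
    return r_pc, age_yr, M_abs, L_watt
```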
## 4 Conclusion
We calculate the distance to NGC 6309 (Box Nebula) to be 4.1 +0.29 / -0.38 kpc. The distance allows us to determine other important properties of the nebula, such as its radius, age, absolute magnitude, and luminosity.
One possible source of uncertainty in our calculations, in addition to the sources already discussed, is interstellar reddening. The absorption and subsequent reemission of light by the interstellar medium may create an overly redshifted spectrum of NGC 6309. Therefore, the radial velocity estimates may
possibly be too large if the spectra were subject to interstellar extinction. We encourage future papers to investigate this potential source of error.
We also hope that future papers will utilize this revised distance estimate in order to calculate other intrinsic features of NGC 6309, such as its mass, density, and temperature.
Additionally, we encourage future astrophysicists to perform this method on other planetary nebulae in the New General Catalogue. Many NGC objects only have estimated distances recorded in databases such as SIMBAD.
If astronomers use the KDM to find distances to hundreds of NGC objects, the astronomical community could use the aggregate data to calculate updated estimates of the formation rates of PNe and the galactic distribution of PNe. These future discoveries may help astrophysicists acquire a deeper understanding of stellar life cycles in the Milky Way galaxy.
## 5 Appendix: The Local Standard of Rest
Our solar system does not perfectly follow the galactic rotation curves described in equations 6 or 7. Instead, the solar system has proper motion relative to what the equations would predict. However, this proper motion occurs at a known velocity, both in direction and in magnitude, whose values are reported in Wenger 2018.
The Local Standard of Rest (LSR) is the point in space which the solar system would follow if it had no proper motion and perfectly adhered to the expected galactic rotation curve. In particular, the LSR is at the solar system's galactocentric radius from the center of the galaxy and rotates at a rate of \(220~{}km*s^{-1}\) in the direction of \((l,b)=(90^{\circ},0^{\circ})\).
When astronomers observe an object's redshift, a portion of that redshift is due to the object moving away from the solar system relative to the LSR, but another portion of the redshift is due to the solar system approaching the object relative to the LSR. Only the first of these values is relevant to the Kinematic Distance Method. Therefore, the radial velocities of objects reported in this paper are the radial velocities of objects relative to the LSR.
## 6 Acknowledgements
We would like to thank Melanie Crowson for her helpful feedback and suggestions while writing this paper.
Thank you to Jack Layton for helping us utilize and understand the Monte Carlo software.
Thank you to Timothy DeLisle for his useful feedback and suggestions about what problems to be aware of.
Thank you to Thurburn Barker for his help conceptualizing the physics behind this project.
Thank you to Amanda Peake for her help in understanding the mathematics behind this paper.
|
2306.01667 | Towards In-context Scene Understanding | In-context learning$\unicode{x2013}$the ability to configure a model's
behavior with different prompts$\unicode{x2013}$has revolutionized the field of
natural language processing, alleviating the need for task-specific models and
paving the way for generalist models capable of assisting with any query.
Computer vision, in contrast, has largely stayed in the former regime:
specialized decoders and finetuning protocols are generally required to perform
dense tasks such as semantic segmentation and depth estimation. In this work we
explore a simple mechanism for in-context learning of such scene understanding
tasks: nearest neighbor retrieval from a prompt of annotated features. We
propose a new pretraining protocol$\unicode{x2013}$leveraging attention within
and across images$\unicode{x2013}$which yields representations particularly
useful in this regime. The resulting Hummingbird model, suitably prompted,
performs various scene understanding tasks without modification while
approaching the performance of specialists that have been finetuned for each
task. Moreover, Hummingbird can be configured to perform new tasks much more
efficiently than finetuned models, raising the possibility of scene
understanding in the interactive assistant regime. | Ivana Balažević, David Steiner, Nikhil Parthasarathy, Relja Arandjelović, Olivier J. Hénaff | 2023-06-02T16:42:04Z | http://arxiv.org/abs/2306.01667v2 | # Towards In-context Scene Understanding
###### Abstract
In-context learning--the ability to configure a model's behavior with different prompts--has revolutionized the field of natural language processing, alleviating the need for task-specific models and paving the way for generalist models capable of assisting with any query. Computer vision, in contrast, has largely stayed in the former regime: specialized decoders and finetuning protocols are generally required to perform dense tasks such as semantic segmentation and depth estimation. In this work we explore a simple mechanism for in-context learning of such scene understanding tasks: nearest neighbor retrieval from a prompt of annotated features. We propose a new pretraining protocol--leveraging attention within and across images--which yields representations particularly useful in this regime. The resulting _Hummingbird_ model, suitably prompted, performs various scene understanding tasks _without modification_ while approaching the performance of specialists that have been finetuned for each task. Moreover, _Hummingbird_ can be configured to perform new tasks much more efficiently than finetuned models, raising the possibility of scene understanding in the interactive assistant regime.
## 1 Introduction
In natural language processing (NLP), the pretrain-finetune paradigm has long been the dominant way of acquiring domain-specific knowledge and adapting a model's behavior to a particular task (e.g. question answering, natural language inference, summarization). More recently and predominantly due to the increase in model and dataset sizes, large language models have exhibited impressive, task-agnostic emergent capabilities [11; 37; 68], where a single model, given an appropriate prompt, can perform a wide range of downstream tasks without any change in its parameters.
While large-scale supervised and self-supervised pretraining in vision has yielded powerful encoders which capture useful semantics [15; 22; 31; 32; 35; 42; 43], applying these representations to solve downstream tasks has typically required bespoke decoders and end-to-end finetuning. The most readily applicable representations are trained for image-text alignment, enabling zero-shot classification [53] and image-based dialogue [2; 19; 79; 80], however these models are inherently limited by the coarseness of natural language outputs. Attempts have been made at casting fine-grained tasks (e.g. detection) as language modeling [17], but dense scene understanding tasks requiring millions of outputs do not lend themselves to this format. Indeed, deficiencies in fine-grained spatial understanding have been well documented in visual language models [36; 45; 64; 78].
In this work, we investigate the components required for in-context learning of scene understanding tasks, which we characterize along three axes: generality, data efficiency, and fast adaptation. To this end, we expand the well-known non-parametric nearest neighbor (NN) retrieval method [7; 9; 15; 74] to support a variety of dense scene understanding tasks. This retrieval-based decoding mechanism has the advantage of requiring no task-specific parameters or finetuning, thus enabling effortless adaption of standard encoders (e.g. ResNet [32] or ViT [22]) to any dense task of interest, as well as faster research iteration by allowing for simpler and more efficient model selection during pretraining.
We further show that the NN scene understanding capabilities of canonically-pretrained vision transformers (such as MAE [30] and DINO [15]) vary greatly, despite similar finetuned performance. We find two pretraining components to yield reliable gains: (1) a simple modification to the standard self-supervised pretraining protocol, termed _contextual pretraining_, which performs _attention across images_ by updating the spatial representation of each image with features retrieved from a memory bank, and (2) a spatial _attention-pooling_ mechanism (as opposed to the more standard mean-pooling or the [CLS] token), which computes _attention within an image_ to summarize the (contextualized) spatial grid of features into a single image-level representation to be fed into the self-supervised objective. We showcase the benefits of this approach in a standard contrastive framework, demonstrating large gains in NN scene understanding over prior pretraining methods.
Finally, we find that our model, named _Hummingbird_ due to its fast adaptation properties, **(1)** yields general-purpose representations which perform well in non-parametric semantic segmentation and monocular depth estimation using NN retrieval, **(2)** approaches the performance of fully finetuned models on some tasks, and **(3)** is more data-efficient and faster to adapt to new tasks when equipped with NN retrieval, compared to other pretraining methods and decoding mechanisms. By adapting quickly and efficiently to new tasks specified on the fly, _Hummingbird_ raises the possibility of vision systems providing general-purpose assistants with in-context scene understanding.
## 2 Related Work
**Retrieval-based perception.** Non-parametric evaluation has a long history with roots in the exemplar theory of human cognition [3; 38; 50] and case-based theories of artificial intelligence [1; 58]. In computer vision, non-parametric methods combined with simple features such as SIFT [48] and HOG [21] saw early success in image classification [9], shape matching [7; 8; 59], scene recognition [66; 75], and image parsing [44]. Exemplar-SVMs [49] showcased the versatility of non-parametric methods by retrieving arbitrary meta-data (such as segmentations, geometry, even 3D models) from training examples. We leverage these insights with modern architectures and training paradigms coupled with dense retrieval.
**Retrieval-based training.** To improve retrieval-based performance at test time, retrieval-based classifiers [69; 73] shape their representations for this task, enabling fine-grained classification from coarse supervision. While not explicitly training for it, DINO [15] witnessed NN classification abilities emerge from self-supervised training of vision transformers, enabling global retrieval tasks such as landmark recognition and copy detection. Retrieval has also been proposed as a means of enriching the positive pairs used in self-supervised contrastive learning [24]. These works differ from ours in that they encode and retrieve global representations of entire images, in contrast to the local inferences required by dense scene understanding tasks.
**Fast adaptation.** A number of methods have tackled the problem of adapting to newly specified tasks, most often from the perspective of meta-learning. For example, matching networks [71] and MAML [26] learn to solve new classification and reinforcement learning tasks specified on the fly. Architectural innovations, such as image prompting [4; 39] and adapter layers [27; 55] have also facilitated transfer to new image recognition tasks. While fast adaptation to dense scene understanding tasks has been less studied, image inpainting [6; 72] and VTM [41] have made progress in this direction, particularly in the low-data regime. These approaches differ from ours in that they achieve fast adaptation by training on related dense tasks and (in the case of VTM) adapt to downstream tasks with task-specific weight updates and learned similarity functions. In contrast, we maintain the simplicity of pure retrieval-based approaches by adapting to new downstream tasks without modifying any model parameters, and the generality of self-supervised approaches by learning representations from generic pretraining data with no dense annotations.
**Self-supervised learning.** Methodologically, our representation learning method is most similar to self-supervised learning (SSL) techniques. Similarly to NLP, image-based SSL has witnessed great success in recent years, notably with the advent of contrastive methods [14; 15; 16; 23; 33], self-distillation [12; 18; 28], and masked auto-encoding [30]. Due to their conceptual simplicity, we base our method on standard contrastive baselines such as SimCLR [16] and MoCo [31]. Image-based SSL techniques have since been tailored to learning representations which transfer well to scene understanding tasks [13; 33; 70; 77], and although they have been shown to support zero-shot object discovery [34; 61], they generally still require task-specific decoders and end-to-end finetuning to perform scene understanding tasks.
## 3 Method
The following sections describe the retrieval-based scene understanding decoding protocol (Section 3.1), followed by the contextual pretraining method (Section 3.2) and the self-supervised (Section 3.3) and supervised learning objectives (Section 3.4). We use subscripts \(\mathbf{x}_{i}\) to differentiate between representations and superscripts \(\mathbf{x}^{j}\) to denote spatial locations within a representation.
### Retrieval-based scene understanding
A general-purpose image representation should perform well across a variety of scene understanding tasks out-of-the-box, i.e. without modifying its parameters. To test whether a representation satisfies this condition, we extend the standard image-level nearest neighbor (NN) retrieval [7; 9] decoding mechanism to dense, patch-level retrieval. Given a prompt composed of training images from the downstream task and their corresponding labels \(\{(\mathbf{x}_{i},\mathbf{y}_{i}),i=1,...,N,\mathbf{x}_{i}\!\in\!\mathbb{R}^{H^{\prime} \times W^{\prime}\times C}\}\), our aim is to enable a pretrained image encoder \(f_{\theta}\) to make predictions about a new image \(\mathbf{x}\) from the test set. In tasks considered in this work, labels \(\mathbf{y}_{i}\) are spatial maps of either class labels \(\mathbf{y}_{i}\!\in\!\mathbb{C}^{H^{\prime}\times W^{\prime}}\) (e.g. for semantic segmentation, where \(\mathbb{C}\) is the space of all classes) or scalars \(\mathbf{y}_{i}\!\in\!\mathbb{R}^{H^{\prime}\times W^{\prime}}\) (e.g. for monocular depth estimation).
We encode each prompt image into a spatial map \(\mathbf{k}_{i}\!=\!f_{\theta}(\mathbf{x}_{i})\!\in\!\mathbb{R}^{H\times W\times D}\), where a feature \(\mathbf{k}_{i}^{j}\!\in\!\mathbb{R}^{D}\) at a given spatial location \(j\) is aligned with the local label \(\mathbf{l}_{i}^{j}\) created by averaging the pixel labels \(\mathbf{y}_{i}^{j}\) in that patch. We then sample a subset of features and local labels for each image, which form the keys and values of the memory bank \(\mathcal{M}\!=\!\{(\mathbf{k}_{i}^{j},\mathbf{l}_{i}^{j}),i\!=\!1,...,N,j\!\sim\! \mathcal{S}\}\) (see Appendix A.1 for details on the sampling distribution \(\mathcal{S}\)). In the following, we do not distinguish between entries from different images, and use a single integer \(j\) to index into the memory bank: \(\mathcal{M}\!=\!\{(\mathbf{k}^{j},\mathbf{l}^{j}),j\!=\!1,...,|\mathcal{M}|\}\).
Given a test image \(\mathbf{x}\), we form a representation \(\mathbf{q}\!=\!f_{\theta}(\mathbf{x})\) and use each spatial feature \(\mathbf{q}^{i}\) as a query to cross-attend over the memory bank with temperature \(\beta\). The cross-attention weights are then used to combine the corresponding labels and form a local prediction \(\hat{\mathbf{l}}^{i}\):
\[s^{i,j}=\frac{1}{\beta}\frac{\langle\mathbf{q}^{i},\mathbf{k}^{j}\rangle}{\|\mathbf{q}^{i }\|\|\mathbf{k}^{j}\|},\qquad\mathbf{a}^{i}=\underset{j}{\text{softmax}}(\mathbf{s}^{i}), \qquad\hat{\mathbf{l}}^{i}=\sum_{j}a^{i,j}\;\mathbf{l}^{j}. \tag{1}\]
Equation 1 defines the cross-attention operation as \(\hat{\mathbf{l}}^{i}\!=\!\text{CA}(\mathbf{q}^{i},\mathbf{k}^{j},\mathbf{l}^{j})\). The final prediction \(\hat{\mathbf{y}}\) is simply the concatenation of local predictions \(\hat{\mathbf{l}}^{i}\) upsampled to the original image size via bilinear interpolation. As a result, nearest neighbor retrieval allows a simple image encoder to perform scene understanding tasks _without any decoders_ or _parameter adaptation_ (finetuning or otherwise) to the downstream dataset. The mechanism is also entirely agnostic to the format of the labels, enabling it to perform tasks as diverse as semantic segmentation and depth estimation.
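To make the mechanism concrete, a minimal NumPy sketch of this retrieval-based decoder is given below; the temperature default and array shapes are illustrative choices, not necessarily the settings used in the experiments.

```python
import numpy as np

def nn_retrieval_decode(q, keys, labels, beta=0.1):
    """Equation 1: each query feature cross-attends over the memory
    bank, and the attention weights combine the stored local labels.
    q: [P, D] query features; keys: [M, D]; labels: [M, L] (e.g. one-hot
    class maps for segmentation, scalars for depth)."""
    qn = q / np.linalg.norm(q, axis=-1, keepdims=True)
    kn = keys / np.linalg.norm(keys, axis=-1, keepdims=True)
    s = (qn @ kn.T) / beta                        # cosine similarity / temperature
    a = np.exp(s - s.max(axis=-1, keepdims=True))
    a /= a.sum(axis=-1, keepdims=True)            # softmax over memory entries
    return a @ labels                             # [P, L] local predictions
```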
Figure 1: **In-context scene understanding with nearest neighbor retrieval.** On the left, we provide the system with a “prompt” of annotated images. On the right we ask the system to describe new query images. The network computes dense features for each location and uses them to query features computed from the prompt. The labels associated with the nearest prompt features are then aggregated to make predictions about the query. Note that the system makes no assumptions about the nature of the labels, and as such can be used to solve a variety of different scene understanding tasks in-context. The nearest neighbors and predictions in this example are computed with our _Hummingbird_ model.
### Contextual pretraining
Memory retrieval allows an image encoder to perform various tasks by combining labels of nearby examples. To ensure that a model will perform well in this regime, we propose to train it in a similar manner, by enforcing its representation to be expressed as a combination of representations of nearby examples. Over the course of training, we populate a memory bank \(\mathcal{M}_{p}=\{(\mathbf{k}_{i},\mathbf{v}_{i}),i=1,...,|\mathcal{M}_{p}|\}\) with spatially-averaged keys and values computed from training images \(\mathbf{x}_{i}\) from previous batches:
\[\mathbf{h}_{i}=f_{\theta}(\mathbf{x}_{i})\in\mathbb{R}^{H\times W\times D},\qquad\mathbf{k} _{i}=\frac{1}{H\cdot W}\sum_{j=1}^{H\cdot W}\mathbf{h}_{i}^{j}\in\mathbb{R}^{D}, \qquad\mathbf{v}_{i}=\phi_{\theta}(\mathbf{k}_{i})\in\mathbb{R}^{D}, \tag{2}\]
where we use an MLP as the value head \(\phi_{\theta}\) (see Appendix B for implementation details). We then form a representation \(\mathbf{q}\!=\!f_{\theta}(\mathbf{x})\) of a new training image \(\mathbf{x}\) and use each spatial feature \(\mathbf{q}^{i}\) to attend over the memory bank and compute an update \(\hat{\mathbf{v}}^{i}\!=\!\text{CA}(\mathbf{q}^{i},\mathbf{k}_{j},\mathbf{v}_{j})\). Each feature is "contextualized" as \(\mathbf{c}^{i}\!=\!\psi_{\theta}((1-\lambda)\frac{\mathbf{q}^{i}}{\|\mathbf{q}^{i}\|}+ \lambda\frac{\hat{\mathbf{v}}^{i}}{\|\hat{\mathbf{v}}^{i}\|})\), where \(\psi_{\theta}\) is a linear layer and \(\lambda\) a weighting parameter. The new image representation \(\mathbf{c}\) is simply the concatenation of local features \(\mathbf{c}^{i}\), denoted by \(\mathbf{c}\!=\!g_{\theta}(\mathbf{q},\ \mathcal{M}_{p})\).
Note that the pretraining memory bank \(\mathcal{M}_{p}\) is discarded at test time and differs from the test time memory bank \(\mathcal{M}\) described in Section 3.1, allowing for straightforward comparison of our representations \(f_{\theta}(\mathbf{x})\) to those trained without the memory bank. Amongst self-supervised objectives, we explore the application of contextual pretraining to a simple contrastive formulation described in the following section and leave for future work its application to other objectives.
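In code, contextualization amounts to one more call to the same cross-attention, followed by a convex combination of \(\ell_2\)-normalized features. The sketch below reuses nn_retrieval_decode from above; lam, beta, and the matrix psi_w stand in for \(\lambda\), \(\beta\), and the linear layer \(\psi_{\theta}\), with illustrative default values.

```python
def contextualize(q, mem_keys, mem_values, psi_w, lam=0.5, beta=0.1):
    """Contextual pretraining update (Section 3.2): each spatial feature
    attends over the memory bank of (key, value) pairs from previous
    batches and is mixed with the retrieved update before a final
    linear projection."""
    v_hat = nn_retrieval_decode(q, mem_keys, mem_values, beta)  # [P, D] updates
    qn = q / np.linalg.norm(q, axis=-1, keepdims=True)
    vn = v_hat / np.linalg.norm(v_hat, axis=-1, keepdims=True)
    return ((1.0 - lam) * qn + lam * vn) @ psi_w                # psi_theta as a matrix
```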
### Self-supervised objective
While contextual pretraining updates representations by attending across images, we hypothesize that learning to attend within images will also enable fine-grained predictions required by dense tasks. To that end, we train representations to locate the most distinctive part of an image using a combination of attention pooling and contrastive learning. Following [16; 28] we construct different views of unlabeled images \(\mathbf{x}\) through random data augmentation \(\mathbf{x}_{1}\sim\mathcal{A}_{1}(\mathbf{x}),\mathbf{x}_{2}\sim\mathcal{A}_{2}(\mathbf{x})\), see Appendix C.1. Each view is encoded as \(\mathbf{h}_{i}\!=\!f_{\theta}(\mathbf{x}_{i})\) and further contextualized with the mechanism described above \(\mathbf{c}_{i}\!=\!g_{\theta}(\mathbf{h}_{i},\ \mathcal{M}_{p})\!\in\!\mathbb{R}^{H \times W\times D}\). Following [52], we compute attention-pooled representations \(\mathbf{\hat{c}}_{i}\!\in\!\mathbb{R}^{D}\) using masks \(\mathbf{m}_{i}\) derived from a lightweight attention module \(a_{\theta}\), which we further augment with an additional value head \(\omega_{\theta}\). The pooled features are then used to compute projections \(\mathbf{z}_{i}^{\theta}\):
\[\mathbf{m}_{i}=\underset{j}{\text{softmax}}(a_{\theta}(\mathbf{c}_{i})),\qquad\mathbf{\hat{c}}_{i}=\sum_{j=1}^{H\cdot W}m_{i}^{j}\ \omega_{\theta}(\mathbf{c}_{i}^{j}),\qquad\mathbf{z}_{i}^{\theta}=p_{\theta}(\mathbf{\hat{c}}_{i}). \tag{3}\]
Finally, following [20; 28; 65], each view forms predictions \(q_{\theta}(\mathbf{z}_{i}^{\theta})\) about the other view's targets \(\mathbf{z}_{j}^{\xi}\), which are computed with the same architecture and a different set of weights \(\xi\) which vary more slowly (see Appendix C.2). The online weights \(\theta\) are optimized using a standard contrastive loss:
\[\mathcal{L}_{\text{SSL}}^{ij}(\theta;\xi)=-\log\frac{\exp(q_{\theta}(\mathbf{z}_ {i}^{\theta})\cdot\mathbf{z}_{j}^{\xi})}{\exp(q_{\theta}(\mathbf{z}_{i}^{\theta})\cdot \mathbf{z}_{j}^{\xi})+\sum_{k}\exp(q_{\theta}(\mathbf{z}_{i}^{\theta})\cdot\mathbf{z}_{k}^{ \xi})}. \tag{4}\]
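A minimal NumPy sketch of the loss in Equation 4, assuming unit temperature and treating all quantities as plain vectors (the projection and prediction heads are applied upstream and are not shown):

```python
import numpy as np

def ssl_loss(pred_i, target_j, negatives):
    """L_SSL^{ij} of Eq. (4): score the positive pair against the negatives.

    pred_i:    q_theta(z_i^theta), shape (D,)
    target_j:  z_j^xi from the slow-moving target weights, shape (D,)
    negatives: the z_k^xi of other images, shape (K, D)
    """
    logits = np.concatenate([[pred_i @ target_j], negatives @ pred_i])
    logits = logits - logits.max()        # numerical stability
    return -(logits[0] - np.log(np.exp(logits).sum()))

rng = np.random.default_rng(1)
print(ssl_loss(rng.normal(size=8), rng.normal(size=8), rng.normal(size=(31, 8))))
```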
### Retrieval-based supervised objective
Given the availability of large labeled datasets, and noting that correctly designed supervision does not necessarily hurt generalization [57], we explore the use of label-supervision for learning representations that perform well in dense NN retrieval. While supervision is typically added with a linear classifier atop average-pooled features [32; 43; 62], we instead use it to constrain contextual pretraining and further align our training methodology with NN retrieval [69; 73]. Specifically, we expand the memory bank \(\mathcal{M}_{p}\) to include the labels: \(\mathcal{M}_{p}^{\prime}=\{(\mathbf{k}_{i},\mathbf{v}_{i},\mathbf{y}_{i}),i=1,...,|\mathcal{ M}_{p}^{\prime}|\}\) and query it with attention-pooled features \(\mathbf{\hat{c}}_{i}\!\in\!\mathbb{R}^{D}\) (see Equation 3) to form predictions \(\mathbf{\hat{y}}_{i}\!=\!\text{CA}(\mathbf{\hat{c}}_{i},\mathbf{k}_{j},\mathbf{y}_{j})\). We then use the standard softmax cross entropy loss \(\mathcal{L}_{\text{CE}}^{i}(\mathbf{\hat{y}}_{i},\mathbf{y}_{i})\), which added to the self-supervised objective of Equation 4, forms the total loss \(\mathcal{L}^{ij}=\mathcal{L}_{\text{SSL}}^{ij}+\alpha(\mathcal{L}_{\text{CE}}^{ i}+\mathcal{L}_{\text{CE}}^{j})\), with supervised weight \(\alpha\). Note that the memory bank \(\mathcal{M}_{p}^{\prime}\) is only used during training and the added supervision relates to a global image classification task, not the downstream pixel-level tasks.
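As an illustration, the label-retrieval step can be sketched as follows. This is a toy NumPy version: the memory size, feature dimension, and number of classes are made up, and the same softmax cross-attention as in the earlier sketch is assumed.

```python
import numpy as np

def retrieved_label_prediction(c_pooled, mem_keys, mem_labels):
    """y_hat_i = CA(c_hat_i, k_j, y_j): attention-weighted average of memory labels."""
    logits = mem_keys @ c_pooled
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return w @ mem_labels                 # (num_classes,) soft prediction

def cross_entropy(y_hat, true_class, eps=1e-12):
    """Cross entropy against the ground-truth class index (y_hat already sums to 1)."""
    return -np.log(y_hat[true_class] + eps)

rng = np.random.default_rng(2)
keys = rng.normal(size=(40, 8))
labels = np.eye(10)[rng.integers(0, 10, size=40)]   # one-hot labels in the memory
print(cross_entropy(retrieved_label_prediction(rng.normal(size=8), keys, labels), 3))
```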
## 4 Experiments
We demonstrate the generality of _Hummingbird_ representations through retrieval-based scene understanding on several downstream tasks (Section 4.1): semantic segmentation on PASCAL VOC [25] and ADE20K [81] with mean IoU (mIoU) as the metric, and monocular depth estimation on NYUv2 [60] with root-mean-square error (RMSE) as the metric. We further show that, in the low-data regime (Section 4.2) and when looking at adaptation speed (Section 4.3), _Hummingbird_ with NN retrieval outperforms other pretraining techniques and decoding mechanisms, including end-to-end finetuning. Section 4.4 compares the performance of fully finetuned _Hummingbird_ with prior work.
### Retrieval-based scene understanding
We consider the performance of learned representations in the retrieval-based scene understanding setup described in Section 3.1 across architecture (ViT-B and ViT-L) and dataset (ImageNet-1k and -22k [56]) scales, trained with supervision (_Hummingbird++_) or without (_Hummingbird_). Figure 1 shows an example prediction made by _Hummingbird_ with a ViT-B encoder on PASCAL VOC.
The top part of Table 1 shows an apples-to-apples comparison of _Hummingbird_ with existing methods for pretraining ViT-B encoders, where it outperforms all baselines by a large margin. We also note that _Hummingbird_ scales well with increasing the dataset size from ImageNet-1k to ImageNet-22k, which does not hold for all other methods (e.g. MAE, consistent with [51]). Further, training with supervision is generally beneficial, particularly for semantic segmentation.
The bottom part of Table 1 contains the best performing methods across architectures, showing a performance increase for _Hummingbird_ with encoder size. Note that results achieved by _Hummingbird_ retrieval on PASCAL VOC and ADE20K, without any finetuning, approach the performance of methods fully finetuned on each of those tasks with specialized decoders (see Table 4).
\begin{table}
\begin{tabular}{l l c c c c c} & & & & \multicolumn{3}{c}{Semantic segmentation} & Depth pred. \\ \cline{4-7} Method & Encoder & Params (M) & Dataset & PASCAL \(\uparrow\) & ADE20K \(\uparrow\) & NYUv2 \(\downarrow\) \\ \hline Supervised\({}^{\dagger}\) & ViT-B & 86 & IN1K & 35.1 & 13.8 &.913 \\ DINO [15] & ViT-B & 86 & IN1K & 55.9 & 21.8 &.793 \\ MoCo-v3 [20] & ViT-B & 86 & IN1K & 37.2 & 14.6 &.771 \\ MAE [30] & ViT-B & 86 & IN1K & 6.6 & 3.3 &.981 \\ LOCA [13] & ViT-B & 86 & IN1K & 57.5 & 18.5 &.880 \\ \hline _Hummingbird_ & ViT-B & 86 & IN1K & 70.5 & 28.3 & **.718** \\ _Hummingbird++_ & ViT-B & 86 & IN1K & **72.1** & **30.5** &.738 \\ \hline Supervised\({}^{\dagger}\) & ViT-B & 86 & IN22K & 63.5 & 28.0 & 1.07 \\ MAE\({}^{\dagger}\)[30] & ViT-B & 86 & IN22K & 9.8 & 4.2 &.968 \\ LOCA [13] & ViT-B & 86 & IN22K & 56.4 & 16.8 &.829 \\ \hline _Hummingbird_ & ViT-B & 86 & IN22K & 73.5 & 30.7 &.706 \\ _Hummingbird++_ & ViT-B & 86 & IN22K & **76.2** & **34.1** & **.695** \\ \hline \multicolumn{7}{l}{_Comparison across architectures:_} \\ CLIP\({}^{\dagger}\)[53] & NFNet-F6 [10] & 438 & ALIGN & 57.2 & 25.0 &.844 \\ Supervised\({}^{\dagger}\) & NeXt-XL [46] & 1300 & IN22K & 58.9 & 25.5 &.791 \\ Supervised\({}^{\dagger}\) & ViT-L & 307 & IN22K & 65.8 & 26.1 &.860 \\ MAE [30] & ViT-L & 307 & IN1K & 8.0 & 3.6 &.934 \\ LOCA [13] & ViT-L & 307 & IN22K & 59.5 & 17.6 &.912 \\ \hline _Hummingbird_ & ViT-L & 307 & IN22K & 76.9 & 35.0 & **.671** \\ _Hummingbird++_ & ViT-L & 307 & IN22K & **77.3** & **35.8** & **.671** \\ \hline \end{tabular}
\end{table}
Table 1: **In-context scene understanding. All models are pretrained on source data in a supervised or self-supervised manner, and applied to downstream datasets without modification. All downstream tasks are performed using a single mechanism, nearest neighbor retrieval. \({}^{\dagger}\)indicates our reproduction of external work, all other models were evaluated using publicly available checkpoints.**
### Data-efficient retrieval-based scene understanding
In addition to adapting to downstream tasks with minimal (or ideally no) alterations to the model, a second ingredient for in-context learning is adaptation given only a limited number of examples.
We therefore evaluate the performance of _Hummingbird_ retrieval in the low-data regime, and compare it with other decoding techniques: linear probing and end-to-end finetuning (Figure 2). For PASCAL VOC, NN retrieval outperforms end-to-end finetuning for up to \(1/8\) of the data (\(\sim\)1300 images). For ADE20K the effect is less pronounced; however, NN retrieval still exceeds end-to-end finetuning when given up to \(1/32\) of the data (\(\sim\)600 images). _Hummingbird_ retrieval outperforms linear decoding on top of the frozen encoder in all cases. These results show that given an appropriately designed encoder, NN retrieval provides a data-efficient alternative to end-to-end finetuning, and is strictly more expressive than linear decoding.
Second, we verify the generality of these findings by comparing _Hummingbird_ retrieval to several other representation learning algorithms which transfer to the low-data regime with finetuning (Table 2, see Appendix D.1 for higher-data regime and additional analysis). For PASCAL VOC, _Hummingbird_ with the NN retrieval decoder outperforms the end-to-end finetuned version of all other techniques for both 1/128 (83 images) and 1/64 (165 images) of the data, which holds for both the purely self-supervised _Hummingbird_ and its supervised variant. For ADE20K, _Hummingbird_ is competitive with DINO [15] for 1/128 of the data (158 images) and outperformed by LOCA for 1/64 of the data (316 images), whereas _Hummingbird++_ outperforms all other models, demonstrating the benefit of retrieval-based supervision during pretraining. In summary, in the few-shot regime (e.g. \(\leq\)100 images) relevant for in-context learning, _Hummingbird_ retrieval provides a compelling and robust alternative to end-to-end finetuning.
\begin{table}
\begin{tabular}{l c c c c c} & \multicolumn{3}{c}{PASCAL \(\uparrow\)} & \multicolumn{2}{c}{ADE20K \(\uparrow\)} \\ Method & Decoder & 1/128 (\(n\)=83) & 1/64 (\(n\)=165) & 1/128 (\(n\)=158) & 1/64 (\(n\)=316) \\ \hline Supervised [67] & E2E FT & 41.8 & 53.8 & 10.8 & 14.3 \\ DINO [15] & E2E FT & 36.1 & 44.3 & 11.7 & 14.4 \\ MoCo-v3 [20] & E2E FT & 19.9 & 33.4 & 4.6 & 7.9 \\ MAE [30] & E2E FT & 34.2 & 44.1 & 8.2 & 12.2 \\ LOCA [13] & E2E FT & 40.1 & 53.9 & 11.2 & 15.5 \\ _Hummingbird_ & **NN** & 50.5 & 57.2 & 11.7 & 15.1 \\ _Hummingbird++_ & **NN** & **52.4** & **57.3** & **12.7** & **16.4** \\ \end{tabular}
\end{table}
Table 2: **Data-efficient scene understanding.** After pretraining, models are adapted to downstream tasks on small amounts of data with end-to-end fine-tuning with a linear head (E2E FT) or with nearest neighbor retrieval (NN). \(n\) refers to the number of images a fraction represents. All runs are averaged over five different seeds, with standard deviation of the order of 0.04 / 0.10% for NN and 0.36 / 1.35% for E2E FT on PASCAL / ADE20K. Each method is trained with ViT-B on ImageNet-1k.
Figure 2: **Data efficiency of _Hummingbird_ evaluated with retrieval-based evaluation (“NN retrieval”), linear probing (“Linear + frozen”), or full finetuning (“Linear + E2E FT”).**
### Fast adaptation to downstream tasks
While _Hummingbird_ retrieval displays useful data-efficiency properties relative to fully finetuned methods, finetuning yields better performance when given access to the entire dataset. Yet even in this large-data regime, assistant systems must be quickly adaptable to new tasks. We thus evaluate the amount of computation required to reach good performance with the different decoding schemes from Section 4.2. All decoders are given the full training set and varying compute budgets. We titrate the amount of computation given to NN retrieval by partially populating the memory bank with fractions of the dataset. Figure 3 shows that 5 minutes (1 epoch through the downstream training set) are sufficient to build a performant NN decoder (70% mIoU on PASCAL VOC, 28% on ADE20K). In contrast, given the same amount of time, end-to-end finetuning still exhibits performance near chance, despite benefiting from hyperparameter tuning of learning rates, weight decay, and warm-up length. While a linear classifier converges more quickly than finetuning, it saturates with a significantly lower performance than NN retrieval (50% mIoU on PASCAL VOC, 20% on ADE20K).
We also quantify these benefits in terms of relative convergence: on PASCAL VOC, NN retrieval reaches the performance of full finetuning after 3 minutes rather than 3 hours, and the performance of a linear classifier in 2 seconds rather than 3 minutes. For ADE20K, the speedups are smaller, but significant: 7 minutes rather than 30 minutes (relative to full finetuning), and 1 minute rather than 30 minutes (relative to the linear classifier). By making substantial gains in this near-real-time use case, we believe NN retrieval lays the groundwork for scene understanding in an interactive setting.
Table 3 compares _Hummingbird_ retrieval to other models (equipped with linear or end-to-end finetuned decoders) in the fast-adaptation regime (i.e. when given a single pass over the full downstream dataset): _Hummingbird_ retrieval outperforms all other pretraining techniques and decoding mechanisms on both PASCAL VOC and ADE20K.
\begin{table}
\begin{tabular}{l c c c c c} & & \multicolumn{2}{c}{PASCAL \(\uparrow\)} & \multicolumn{2}{c}{ADE20K \(\uparrow\)} \\ \cline{3-6} Method & Decoder & Frozen & E2E FT & Frozen & E2E FT \\ \hline Supervised [67] & Linear & 61.5 & 66.3 & 27.6 & 15.1 \\ DINO [15] & Linear & 54.9 & 64.0 & 25.6 & 23.4 \\ MoCo-v3 [20] & Linear & 41.2 & 4.8 & 14.6 & 3.2 \\ MAE [30] & Linear & 20.1 & 42.5 & 8.3 & 7.9 \\ LOCA [13] & Linear & 61.9 & 62.9 & 25.4 & 14.6 \\ _Hummingbird_ & NN & 70.5 & -- & 28.3 & -- \\ _Hummingbird++_ & NN & **72.1** & -- & **30.5** & -- \\ \end{tabular}
\end{table}
Table 3: **Fast adaptation to new scene understanding tasks.** After pretraining, models are transferred to downstream tasks with the full dataset, but a small amount of computation: 1 epoch. Models perform the task either with a linear classifier (Frozen), end-to-end fine-tuning (E2E FT) or with our mechanism for in-context scene understanding (NN).
Figure 3: **Adaptation time of _Hummingbird_ using various decoders: our proposed nearest-neighbor decoder (“NN retrieval”), linear probing (“Linear + frozen”) or full finetuning (“Linear + E2E FT”).**
### Fully finetuned scene understanding
Although the primary focus of this work is on fast and effortless adaption to downstream tasks, for completeness, we include a comparison of fully finetuned _Hummingbird_ with fully finetuned state-of-the-art models on the semantic segmentation task. We follow the finetuning protocol of MAE [30] and use UperNet [76] as a decoder. Table 4 shows that both _Hummingbird_ and _Hummingbird++_ are competitive with state-of-the-art when finetuned. Further analysis shows retrieval-based performance to be correlated with the finetuning performance (see Appendix D.2), paving the way for using retrieval-based evaluation as a model selection tool during training.
## 5 Analysis
**Ablating the pretraining components.** We perform an ablation of pretraining components (see Table 5) required for adaptation to downstream tasks through NN retrieval. We find attention pooling to yield superior performance compared to mean pooling or a [CLS] token. Both contextual pretraining and attention pooling separately lead to large performance improvements over a baseline MoCLR [65] model, and the best results are achieved when combining the two. Note that although spatial attention pooling was initially introduced in the context of video understanding [52], this work is the first to show its utility for downstream task adaptation in the NN retrieval setup. We further find that augmenting it ("QK att.") with an additional value head ("QKV att.") improves its performance across all tasks.
**Effect of evaluation memory length \(|\mathcal{M}|\).** When transferring to downstream tasks with many training images (e.g. PASCAL VOC and ADE20K contain \(\sim 10\)k and \(\sim 20\)k images respectively, each image providing 100s of tokens), we see benefits of using large memory banks (e.g. \(|\mathcal{M}|\) of the order of 1-10 million tokens, see Figure 4, right). Since this makes the cross-attention operation computationally intractable, we leverage powerful libraries for approximate NN search [29; 40] to limit cross-attention (Equation 1) to a small set of nearest neighbors for each query (e.g. \(k=30\), see Appendix A.2 for details, where we find increasing \(k\) not to have significant impact on performance).
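A brute-force sketch of this restricted cross-attention (the dedicated approximate-NN libraries are replaced here by a NumPy top-\(k\); `k=30` follows the value quoted above, everything else is illustrative):

```python
import numpy as np

def knn_cross_attention(q, K, V, k=30):
    """Restrict the cross-attention of Eq. (1) to the k most similar memory keys.

    q: (D,) query feature; K, V: (|M|, D) memory keys and values.
    """
    sims = K @ q
    idx = np.argpartition(-sims, k)[:k]   # unordered indices of the k largest sims
    w = np.exp(sims[idx] - sims[idx].max())
    w /= w.sum()
    return w @ V[idx]

rng = np.random.default_rng(3)
out = knn_cross_attention(rng.normal(size=8),
                          rng.normal(size=(1000, 8)),
                          rng.normal(size=(1000, 8)))
print(out.shape)                          # -> (8,)
```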
**Effect of pretraining memory length \(|\mathcal{M}_{p}|\).** In contrast to retrieval-based evaluation, we find contextual pretraining to be remarkably memory-efficient: small memory banks (e.g. \(|\mathcal{M}_{p}|\) = 40k, see Figure 4, left for PASCAL VOC and Appendix D.3 for ADE20K) are sufficient to yield robust gains in retrieval-based scene understanding, adding a relatively small computational overhead to training the representation (e.g. +22% for \(|\mathcal{M}_{p}|\) = 40k). The module is agnostic to how the representation is trained and it benefits both self-supervised and supervised pretraining. Note that contextual pretraining is only present at training time and does not affect inference speed.
## 6 Conclusion
Inspired by impressive examples of in-context learning in language models, we investigate components necessary for in-context learning of dense scene understanding tasks in computer vision. To this end, we propose a simple non-parametric nearest neighbor retrieval mechanism--which is agnostic to
\begin{table}
\begin{tabular}{l l c c c} & & \multicolumn{3}{c}{Fine-tuned accuracy (mIoU)} \\ \cline{3-5} Method & Encoder & Dataset & PASCAL \(\uparrow\) & ADE20K \(\uparrow\) \\ \hline Random & ViT-B & IN1K & 29.1 & 21.1 \\ Supervised [67] & ViT-B & IN1K & 76.1 & 47.3 \\ DINO [15] & ViT-B & IN1K & 74.1 & 44.1 \\ MoCo-v3 [20] & ViT-B & IN1K & 74.5 & 47.3\({}^{\dagger}\) \\ BEiT [5] & ViT-B & IN1K+DALLE [54] & - & 47.1\({}^{\dagger}\) \\ MAE [30] & ViT-B & IN1K & 75.0 & 48.1\({}^{\dagger}\) \\ LOCA [13] & ViT-B & IN1K & 76.7 & 47.9 \\ _Hummingbird_ & ViT-B & IN1K & 80.0 & 44.9 \\ _Hummingbird++_ & ViT-B & IN1K & 81.2 & 44.9 \\ _Hummingbird_ & ViT-B & IN22K & 81.6 & 46.9 \\ _Hummingbird++_ & ViT-B & IN22K & **82.1** & **48.2** \\ \end{tabular}
\end{table}
Table 4: **Scene understanding with end-to-end fine-tuning.** After pretraining, models are equipped with task-specific decoders and finetuned for that task on the entire downstream dataset. \({}^{\dagger}\)indicates results are taken from [30], using UperNet [76] as the decoder. Results for all other baselines are taken from [13] and use the linear decoder from [63].
the downstream task and requires no finetuning or specialized decoders--to serve as a general-purpose decoder which we use to evaluate models on semantic segmentation and monocular depth estimation tasks. We further propose _Hummingbird_, a pretraining method which benefits from attention across images (through contextual pretraining) and within an image (through spatial attention pooling) to produce image representations that can be easily configured to perform downstream tasks in a fast and data-efficient manner. By combining _Hummingbird_ as the encoder with NN retrieval as the decoder, we take an important step towards in-context learning for dense vision tasks.
## 7 Broader Impact and Limitations
Broader impact. In laying the groundwork for scene understanding methods to be used in the interactive regime, our work could potentially benefit general-purpose assistants that are seeing rapid adoption. While these may enable a host of beneficial applications, they suffer from the biases and potential harms associated with visual language models and large language models more generally.
Limitations. Despite offering large relative improvements compared to finetuning and linear classification in the low-data regime, the absolute performance of _Hummingbird_ when given less than 100 examples in the prompt is still far from perfect. To truly match the in-context learning abilities displayed in NLP, we would ideally need good performance from a handful of examples. Further, given that retrieval-based scene understanding is task-agnostic, we leave expanding _Hummingbird_ evaluation to other tasks (e.g. object detection) to future work. Finally, while we have showcased the benefits of attention within and across images in a contrastive framework, we defer adding them to more recent approaches using advanced data curation [51] and self-distillation [15; 82].
Figure 4: Effect of the pretraining (_left_) and evaluation (_right_) memory length on performance. All models were pretrained with ViT-B on ImageNet-22k. _Left_: Since the retrieval-based supervised objective is only defined for memory banks of non-zero length, for the purpose of this ablation we replace it with a simple linear classifier when \(|\mathcal{M}_{p}|=0\). _Right_: For downsample=False, we store representations of all patches into the memory bank. If downsample=True, we sample \(|\mathcal{M}|/N\) patches per image (\(N\) is the length of the downstream training set), allowing for greater diversity.
\begin{table}
\begin{tabular}{l l c c c c} & & \multicolumn{3}{c}{Semantic segmentation} & Depth pred. \\ \cline{3-6} Method & Pool. & Cont. & \multicolumn{1}{c}{PASCAL \(\uparrow\)} & \multicolumn{1}{c}{ADE20K \(\uparrow\)} & NYUv2 \(\downarrow\) \\ \hline MoCLR [65] & mean & ✗ & 38.6 & 4.9 & 1.01 \\ + cont. & mean & ✓ & 55.6 & 15.3 & .901 \\ + [CLS] & [CLS] & ✗ & 64.5 & 23.9 & .741 \\ + [CLS] + cont. & [CLS] & ✓ & 65.6 & 25.1 & .731 \\ + QK att. [52] + cont. & QK att. & ✓ & 68.7 & 26.3 & .728 \\ + QKV att. & QKV att. & ✗ & 68.0 & 27.4 & .742 \\ _Hummingbird_ & QKV att. & ✓ & **70.5** & **28.3** & **.718** \\ \end{tabular}
\end{table}
Table 5: **Ablation of pretraining components. Effect of training with spatial attention pooling (as opposed to mean pooling or a [CLS] token) and memory contextualization (“Cont.”) on performance. All models were pretrained with ViT-B on ImageNet-1k.**
## Acknowledgements
We thank Daniel Zoran, Andrew Zisserman, Evan Shelhamer and Joao Carreira for their thoughtful feedback, Skanda Koppula and Mathilde Caron for their assistance in reproducing baselines, and Aaron van den Oord and Oliver Vikbladh for fruitful discussions at the inception of the project.
|
2302.13794 | Enhanced sensitivity deep subwavelength direction-of-arrival sensing
using temporal modulation | Electromagnetic wave interaction with time-varying systems has gained a lot
of research interest in recent years. The temporal modulation gives
unprecedented control over the response, allowing us to go beyond the
state-of-the-art in passive systems. In this work, we use time variation to
derive a model for a deep-subwavelength direction-of-arrival (DoA) sensing
apparatus with enhanced performance and sensitivity. We formulate the problem,
derive an analytical model, and discuss the various physical mechanisms
responsible for the enhancement. We show that time modulation enables a new
degree of control that can be used to optimize the response for various
incident frequencies, allowing for wideband operation. Additionally, we show
that incorporating the currents from higher generated harmonics into the
sensing scheme allows us to extract more accurate information about the
impinging wave. | Tamir Zchut, Yarden Mazor | 2023-02-05T12:19:19Z | http://arxiv.org/abs/2302.13794v1 | # Enhanced sensitivity deep subwavelength direction-of-arrival sensing using temporal modulation
###### Abstract
Electromagnetic wave interaction with time-varying systems has gained a lot of research interest in recent years. The temporal modulation gives unprecedented control over the response, allowing us to go beyond the state-of-the-art in passive systems. In this work, we use time variation to derive a model for a deep-subwavelength direction-of-arrival (DoA) sensing apparatus with enhanced performance and sensitivity. We formulate the problem, derive an analytical model, and discuss the various physical mechanisms responsible for the enhancement. We show that time modulation enables a new degree of control that can be used to optimize the response for various incident frequencies, allowing for wideband operation. Additionally, we show that incorporating the currents from higher generated harmonics into the sensing scheme allows us to extract more accurate information about the impinging wave.
## 1 Introduction
Detecting a wave's direction of arrival (DoA) has important applications in many fields, ranging from the survival of small species in nature, which rely on understanding the direction a predator is approaching from based on the sounds it makes, through aviation and radar systems, to modern-day imaging technologies such as light-field photography. Usually, DoA detectors rely on the phase differences recorded when the EM signal reaches two (or more) adjacent elements, bringing forward a significant limitation - what if the difference is very small, as when the sensing apparatus is small compared to the received signal's characteristic wavelength?
Two main approaches exist for electromagnetic (EM) waves to alleviate this limitation. One approach is based on bio-mimicking small insects. In large animals, significant ear separation lets information regarding the DoA be extracted from the phase/amplitude difference of the recorded sound. In small insects, this problem is mitigated by direct coupling between the ears. This approach is simple to implement and works very well, mainly around a prescribed angle [1]. It was followed up by further optimization, including the use of high-order modes [2] and non-Foster coupling networks [3]. The second approach uses multiphysics systems. One can couple the EM wave response to a different wave mechanism that operates at similar frequencies but significantly smaller wavelengths, which enhances the recorded signal difference. This is demonstrated for an electro-acoustic system in [4].
On a parallel route, in the past decade, EM wave interaction with time-varying systems has seen a burst of renewed interest, driven by two parallel processes. First, the ability to implement such systems has seen significant advances, making the theoretical ideas more realistic. Second, the rise of metamaterials has pointed to many intriguing and exotic wave phenomena occurring in materials with extreme parameters. The basic theoretical concepts are introduced in [5; 6], and numerous applications have been proposed; among the ones most relevant to this work are the implementation of nonreciprocal elements (gyrators [7], circulators and isolators [8; 9]), nonreciprocal transmission line design [10], extreme energy accumulation [11], nonreciprocal reflection and transmission [12; 13], and improved antenna matching, Q-factor and bandwidth [14; 15; 16; 17] (many other applications are summarized in [18; 19]). Of specific relevance is the control of Mie scattering properties, shown in [20]. Non-periodic variation and switching also play an important role, demonstrated for broadband matching [21], matched filter design [22], engineered time-reversal spectra [23], and more.
In this work, we will examine how we can enhance the sensitivity of DoA detection in deeply subwavelength systems using a different approach - by incorporating temporal modulation to enhance and tailor the physical interactions between the system elements. Since time-modulated systems naturally involve many parameters and a rich set of physical processes, we will focus on unveiling the physical mechanisms that enable the sensitivity enhancement.
Several works have already started examining this avenue of time-modulated DoA systems. First, in [24] and subsequent works, periodic switching of the received signal from a DoA sensing dimer is used to enable angle estimation through the ratio of power content in various higher harmonics. However, since only the received signal is switched, physical interaction between the elements does not benefit from these higher harmonics. Moreover, the coupling between the elements, which will be pivotal in this work, does not play a significant role. This makes this strategy viable mostly for \(d>\lambda/2\), and not the deeply subwavelength scenario we target. In [25], a time-modulated metasurface was proposed for DoA detection, leveraging the deviations in the diffraction/reflection angles to do the estimation. Again, this work utilizes a system whose size is on the order of \(\lambda\).
## 2 Formulation
Naturally, time-modulated wave systems possess many degrees of freedom. On top of the "regular" time-invariant parameters, we also have the modulation frequency, depth, waveform, and the ensuing coupling to higher and lower frequencies. Due to that, the analysis becomes complicated quite rapidly with the addition of components. Since we aim to focus on the physical mechanisms that enhance the DoA sensitivity, we employ a simple model - a 2D dimer. To explore our concept, we use a dimer composed of two infinite wires made of a perfect electric conductor (PEC), as shown in Fig. 1. The incident wave angle \(\theta\) corresponds to a phase difference of \(kd\cos(\theta)\ll 1\). The wires are periodically loaded with \(Z_{L}=Z_{s}+Z_{C}\), where \(Z_{s}\) is a static, non-modulated impedance, and \(Z_{C}\) is the impedance of a time-modulated capacitor \(C(t)\). The wires are modeled using their susceptibility \(\alpha\), which determines the induced current \(I(\omega)\) (without modulation) on each wire via
\[I(\omega)=\alpha(\omega)E_{tan}^{loc}(\omega), \tag{1}\]
where \(E_{tan}^{loc}(\omega)\) is the tangential component of the local field, which contains both the incident wave and the field scattered by the other wire, but in the absence of the examined wire itself. For infinite PEC wires, the susceptibility is [26, 27]
\[\alpha^{-1}(\omega)=\alpha_{0}^{-1}(\omega)+\frac{Z_{L}}{\Delta},\qquad\alpha _{0}^{-1}(\omega)=\frac{\eta k}{4}H_{0}^{(2)}(kr_{0}) \tag{2}\]
here \(\alpha_{0}\) is the susceptibility of the unloaded wires, \(Z_{L}\) is the \(\Delta\)-periodic loading, \(\eta\) is the medium impedance, \(k\) is the free-space wavenumber, and \(H_{0}^{(2)}\) is the zero-order Hankel function of the 2nd kind. To clarify the notation, later we will also use \(\gamma=\alpha^{-1},\gamma_{0}=\alpha_{0}^{-1}\), and for simplicity, we focus on TE incidence (therefore \(\hat{z}\) electric field). For every frequency \(\omega\), the current on the wires generates scattered fields via \(E_{tan}^{scattered}(\rho)=G(\rho,\omega)I\), where \(G(\rho,\omega)\) is the 2D Green's function of the electric field in free-space, i.e. \(G(\rho,\omega)=-\frac{\eta k(\omega)}{4}H_{0}^{(2)}\left(k(\omega)\rho\right)\). Using both \(G\) and \(\alpha\), we can express the total response of the non-modulated dimer as
\[I_{2}=\frac{\alpha_{2}E_{2}^{inc}+G(d)\alpha_{1}\alpha_{2}E_{1}^{inc}}{1-G^{2 }(d)\alpha_{1}\alpha_{2}},\;I_{1}=\alpha_{1}\left(E_{1}^{inc}+G(d)I_{2}\right). \tag{3}\]
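For reference, Eqs. (1)-(3) can be evaluated directly. The following NumPy/SciPy sketch is illustrative only - the incident-field phase convention and the loading values are assumptions, loosely based on the parameters quoted later in Section 3.1:

```python
import numpy as np
from scipy.special import hankel2

eta, c = 376.73, 2.998e8                    # free-space impedance and speed of light

def gamma(w, r0, ZL, Delta):
    """Inverse susceptibility of a loaded wire, Eq. (2)."""
    k = w / c
    return eta * k / 4 * hankel2(0, k * r0) + ZL / Delta

def G(rho, w):
    """2D free-space Green's function for the z-directed electric field."""
    k = w / c
    return -eta * k / 4 * hankel2(0, k * rho)

def dimer_currents(w, theta, d, r0, ZL, Delta, E0=1.0):
    """Eq. (3): currents on the two wires of the unmodulated dimer.

    The phase convention E2 = E0*exp(-1j*k*d*cos(theta)) is an assumption here.
    """
    k = w / c
    E1, E2 = E0, E0 * np.exp(-1j * k * d * np.cos(theta))
    a1 = a2 = 1.0 / gamma(w, r0, ZL, Delta)
    g = G(d, w)
    I2 = (a2 * E2 + g * a1 * a2 * E1) / (1 - g**2 * a1 * a2)
    I1 = a1 * (E1 + g * I2)
    return I1, I2

w0 = 2 * np.pi * 300e6                      # f0 = 300 MHz
ZL = 0.3e-3 + 1 / (1j * w0 * 13e-12)        # resistor plus a static capacitor load
print(dimer_currents(w0, np.pi / 3, d=0.05, r0=0.3e-3, ZL=ZL, Delta=0.01))
```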
Now, let us incorporate the effects of periodic temporal modulation into the system. When the capacitor \(C(t)\) is periodically modulated, \(C(t)^{-1}=C_{0}^{-1}[1+m\cos(\omega_{m}t+\varphi_{m})]\), we can represent every physical quantity as a sum of all the possible harmonics
\[X(t)=\sum_{n=-\infty}^{\infty}\tilde{X}_{n}e^{j(\omega+n\omega_{m})t}+c.c.= \sum_{n=-\infty}^{\infty}\tilde{X}_{n}e^{j\omega_{n}t}+c.c. \tag{4}\]
where \(X\) can be the current \(I\), the electric field \(E\), or any other relevant quantity, \(\omega_{n}=\omega+n\omega_{m}\), and c.c. means complex conjugate. Since both the current \(I\) and the electric field \(E_{z}\) are excited at multiple frequencies (as expected in our temporally modulated system), the amplitudes of the spectral components can be ordered in column vectors \([\tilde{I}]\) and \([\tilde{E}]\). Using these, and following [6, 26], we can write the time-modulated single wire
Figure 1: Dimer model: two infinite wires, with periodic loading \(Z_{L}=Z_{S}+Z_{C}\), where \(Z_{S}\) is a static impedance, and \(Z_{C}\) is the impedance of a time-modulated capacitor.
response:
\[[\tilde{E}]=\underline{\underline{\Gamma}}[\tilde{I}]\leftrightarrow\left[\begin{array}[ ]{c}\vdots\\ \tilde{E}_{-1}\\ \tilde{E}_{0}\\ \tilde{E}_{1}\\ \vdots\end{array}\right]=\left[\begin{array}{cccc}\ddots&\vdots&\vdots&0\\ \dots&\gamma_{0}(\omega_{-1})+\frac{1}{j\omega_{-1}C_{0}\Delta}&\frac{M}{j \omega_{0}C_{0}\Delta}&0&\dots\\ \dots&\frac{M^{*}}{j\omega_{-1}C_{0}\Delta}&\gamma_{0}(\omega_{0})+\frac{1}{j \omega_{0}C_{0}\Delta}&\frac{M}{j\omega_{1}C_{0}\Delta}&\dots\\ \dots&0&\frac{M^{*}}{j\omega_{0}C_{0}\Delta}&\gamma_{0}(\omega_{1})+\frac{1}{j \omega_{1}C_{0}\Delta}&\dots\\ 0&\vdots&\vdots&\vdots&\ddots\end{array}\right]\left[\begin{array}{c}\vdots \\ \tilde{I}_{-1}\\ \tilde{I}_{0}\\ \tilde{I}_{1}\\ \vdots\end{array}\right] \tag{5}\]
where \(M=me^{j\varphi_{m}}\). Since the current is excited at several frequencies, each of these will generate a scattered field at its specific frequency. Since \(G\) is a function of the frequency \(\omega\), it can be formulated using a diagonal matrix, \(\underline{\underline{G}}\)
\[\underline{\underline{\underline{G}}}(\rho)=diag[...,G(\rho,\omega_{-1}),G( \rho,\omega_{0}),G(\rho,\omega_{1}),...] \tag{6}\]
Then, for the modulated dimer, the currents can be solved from the following matrix equation
\[\underline{\underline{\Gamma}}_{1}[\tilde{I}_{1}]-\underline{\underline{ \underline{G}}}[\tilde{I}_{2}]=[\tilde{E_{1}^{i}}]\;,\;\underline{\underline{ \Gamma}}_{2}[\tilde{I}_{2}]-\underline{\underline{\underline{G}}}[\tilde{I}_ {1}]=[\tilde{E_{2}^{i}}] \tag{7}\]
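A minimal sketch of assembling and solving the truncated system of Eqs. (5)-(7), assuming identical wires and a symmetric truncation to harmonics \(n=-N,\dots,N\) (all numerical values are illustrative):

```python
import numpy as np
from scipy.special import hankel2

eta, c = 376.73, 2.998e8

def gamma0(w, r0):          # unloaded-wire inverse susceptibility, Eq. (2)
    return eta * (w / c) / 4 * hankel2(0, (w / c) * r0)

def G(rho, w):              # 2D free-space Green's function
    return -eta * (w / c) / 4 * hankel2(0, (w / c) * rho)

def harmonic_system(w0, wm, N, r0, C0, Delta, m, phim=0.0):
    """Truncated Gamma matrix of Eq. (5) for harmonics n = -N..N."""
    wn = w0 + np.arange(-N, N + 1) * wm
    M = m * np.exp(1j * phim)
    Gam = np.diag(gamma0(wn, r0) + 1 / (1j * wn * C0 * Delta))
    for i in range(2 * N):
        Gam[i, i + 1] = M / (1j * wn[i + 1] * C0 * Delta)
        Gam[i + 1, i] = np.conj(M) / (1j * wn[i] * C0 * Delta)
    return Gam, wn

def solve_modulated_dimer(Gam, Gd, E1, E2):
    """Eq. (7), solved directly as one block-linear system for [I1; I2]."""
    n = Gam.shape[0]
    A = np.block([[Gam, -Gd], [-Gd, Gam]])    # identical wires assumed
    I = np.linalg.solve(A, np.concatenate([E1, E2]))
    return I[:n], I[n:]

w0, d, N = 2 * np.pi * 300e6, 0.05, 3
Gam, wn = harmonic_system(w0, 0.4 * w0, N, r0=0.3e-3, C0=13e-12, Delta=0.01, m=0.2)
Gd = np.diag(G(d, wn))
E1 = np.zeros(2 * N + 1, complex); E1[N] = 1.0       # incidence only at n = 0
E2 = np.zeros(2 * N + 1, complex); E2[N] = np.exp(-1j * (w0 / c) * d * np.cos(np.pi / 3))
I1, I2 = solve_modulated_dimer(Gam, Gd, E1, E2)
```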
### Performance metrics
In order to compare the performance and possible benefits of our approach, both with respect to different modulation scenarios and other enhancement schemes, we use the sensitivity \(S\), defined as
\[S(\theta)=\left|1-\frac{I_{1}(\theta)}{I_{2}(\theta)}\right|^{2} \tag{8}\]
which essentially quantifies the relative variation of \(I_{1},I_{2}\) with respect to the incident angle. To accurately extract \(\theta\) from \(I_{1},I_{2}\), \(S\) should vary with large amplitude against \(\theta\).
In addition to the sensitivity, we would also like to define a simpler metric that captures the properties of \(S(\theta)\) using a single scalar. This will allow us to characterize the sensitivity curve against various system parameters (wire loading, temporal modulation parameters). To that end, we will use the maximum sensitivity, \(S_{max}\):
\[S_{max}=\max_{\theta}\{S(\theta)\} \tag{9}\]
In the modulated system, \(S_{max}\) can be calculated for the different up- and down-converted harmonics (which will prove very beneficial), and therefore we will often use \(S_{max}^{n}\) to indicate
that it was calculated for the \(n\)'th harmonic, where \(n=0\) is the fundamental, incident frequency. \(S^{NM}_{max}\) (or any other occurrence of \(NM\)) will indicate the non-modulated case.
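Both metrics are straightforward to evaluate once the currents are known; a minimal sketch (the current arrays below are toy stand-ins for solver output):

```python
import numpy as np

def sensitivity(I1, I2):
    """Eq. (8): S = |1 - I1/I2|^2, evaluated per incidence angle."""
    return np.abs(1 - I1 / I2) ** 2

def S_max(I1, I2):
    """Eq. (9): maximum of S over the swept angles."""
    return np.max(sensitivity(np.asarray(I1), np.asarray(I2)))

thetas = np.linspace(0, 2 * np.pi, 181)
I1 = 1 + 0.05 * np.cos(thetas)      # toy stand-ins for the solver output
I2 = 1 - 0.05 * np.cos(thetas)
print(S_max(I1, I2))
```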
## 3 Results and Discussion
### Frequency conversion effects
Various physical mechanisms play a role in enhancing the sensitivity of the dimer response to the DoA. To discuss these systematically, let us start by looking at \(S^{n}_{max}/S^{NM}_{max}\) vs. the modulation frequency. Figure 2 shows \(S^{\{-1,0,1\}}_{max}\) for the following parameters: the radius of the wire is \(r_{0}=0.3mm\), the distance between the wires is \(d=5cm\), and the base frequency is \(f_{0}=300MHz\). The period of loading is \(\Delta=1cm\), with the load consisting of a periodic resistor \(R_{L}=0.3m\Omega\) and a capacitance \(C^{-1}(t)=C_{0}^{-1}(1+m\cos(\omega_{m}t))\), where \(C_{0}=13pF\) and \(m=0.2\). At the fundamental frequency, we see that \(S^{0}_{max}\) is mostly similar to the non-modulated case (yielding a normalized value close to 1), except for a small region around \(\omega_{m}\approx 2.4\omega\) which exhibits a moderate improvement, \(S^{0}_{max}/S^{NM}_{max}\approx 2.7\). However, when looking at the higher harmonics, we see that the sensitivity becomes much higher at certain values of \(\omega_{m}\) (\(\omega_{m}\approx 0.21\omega,0.42\omega,2.4\omega\)). The effect behind this is composed of the interaction of two mechanisms: the eigenmodes of the dimer and the frequency conversion induced by the temporal modulation. In the unmodulated dimer, each capacitively loaded wire is resonant (since the pristine wire response is inductive). When two such resonant wires are placed next to each other to form a dimer, two resonance frequencies exist. Each resonant frequency is characteristic of a specific dimer mode - a symmetric and an anti-symmetric one that form a basis for any current combination that may be excited in the wires. We denote the resonance frequencies of these modes \(\omega_{sym},\omega_{anti-sym}\), respectively. How strongly each dimer mode contributes to the total currents is a function of the excitation frequency and the exciting field distribution. Since we want to extract information regarding the DoA from the differences between the currents, we would like to "boost" the content of the anti-symmetric mode as much as possible, together with how steeply this content varies as a function of the incidence angle (considering that for certain values of \(\theta\), namely \(\pi/2\) and \(3\pi/2\), there is only the symmetric mode, regardless of other parameters,
due to symmetry considerations).
When considering a deep subwavelength dimer, we expect the content of the symmetric mode to be dominant, resulting from the slight difference in exciting field between the dimer wires. This makes the current differences between the wires vary only slightly when changing \(\theta\), making it harder to extract information about the DoA.
When adding periodic temporal modulation, another mechanism behind the sensitivity enhancement comes into play - frequency conversion. When the capacitor is allowed to vary with time, we convert the fields in the basic problem to higher (and possibly lower) frequencies, i.e. \(\omega_{n}=\omega_{0}+n\omega_{m}\). When \(\omega_{m}\) is chosen such that one of the \(\omega_{n}\) frequencies coincides with \(\omega_{anti-sym}\), we can obtain a strongly enhanced anti-symmetric mode content in the corresponding \(n\)'th harmonic, which is also steeply dependent on the incident angle \(\theta\). This results in an enhanced \(S_{max}^{n}\) for that corresponding harmonic, as seen in Fig. 2(a,b) for dimer separations \(d=5cm,1cm\), respectively.
Using Fig. 2(c,d), we can see the correlation between these two mechanisms, which enhance the sensitivity. The continuous lines represent different \(\omega_{n}\)s as a function of \(\omega_{m}\)
Figure 2: (a) maximum sensitivity as a function of \(\omega_{m}/\omega\) for harmonics \(-1,0,1\), and \(d=5cm\) (b) same, for \(d=1cm\) (c) frequency conversion map for \(\pm\omega+n\cdot\omega_{m}\), with \(-2\leq n\leq 2\), and the symmetric and anti-symmetric resonance frequencies. The green arrows indicate the connection between peaks in the sensitivity and intersections in the resonance map. (d) Same, for \(d=1cm\).
and the dashed lines show the resonance frequency of the symmetric and anti-symmetric modes in the unmodulated dimer. When a continuous line intersects the green dashed line, \(\omega_{n}=\omega_{anti-sym}\) is satisfied for that \(n\)'th harmonic; the \(\omega_{m}\) values that yield this intersection are roughly the modulation frequencies where we experience a sharp increase in the sensitivity. Thus, by choosing the dimer parameters, we can alter \(\omega_{anti-sym}\) and \(\omega_{m}\), and tailor the frequency response we need, to better sense the expected \(\theta\)s for the required range of operating frequencies. It is essential to add here that while this provides good physical intuition, quantitatively there are minor deviations between the value of \(\omega_{m}\) predicted by this intuition and the actual value at which the enhancement is most prominent. This is due to the fact that we make use of the resonance frequencies of the _unmodulated system_. These frequencies also experience a shift under modulation since the effective impedance of the wires changes.
Next, we would like to explore additional avenues to tune the performance of our dimer. The natural way to do so is by tailoring the periodic loading \(Z_{L}\) using additional passive elements such as inductors, capacitors, or a combination of the two.
To simplify the calculation, we add the components in series. When adding either an inductance \(L\) or a capacitance \(C\), the load impedance \(Z_{L}\) in the \(n\)'th harmonic becomes \(Z_{L}(\omega_{n})=R_{l}+\frac{1}{j\omega_{n}C_{m}}+j\omega_{n}L\) or \(Z_{C}(\omega_{n})=R_{l}+\frac{1}{j\omega_{n}C_{m}}+\frac{1}{j\omega_{n}C}\), respectively. Figure 3 presents the sensitivity dynamics as a function of the values of \(L,C\). In panels (a,b,d,e) we see the sensitivity of both the fundamental harmonic \(S_{0}\) and the first harmonic \(S_{1}\), as a function of the added passive element (\(L\) for (a,d) or \(C\) for (b,e)) and of the normalized modulation frequency \(\frac{\omega_{m}}{\omega}\). In all these panels, we see that as the passive element value, \(L\) or \(C\), changes, the modulation frequency of the peak \(\omega_{m}^{peak}\) changes as well. In these panels, the red line shows the location of the anti-symmetric resonance frequency for the non-modulated circuit, and the yellow lines show the location of the symmetric resonance frequency. As portrayed before, the sensitivity peak modulation frequency moves together with the conversion frequency corresponding to the anti-symmetric mode. These panels show that this concept can be generalized, and the loading can be used to tailor the response to our needs.
When combining inductance and capacitance, we introduce more resonance frequencies into the non-modulated system, which provide additional sensitivity peaks when coinciding
with converted harmonics. For example, when using \(Z_{L}(\omega_{n})=R_{l}+\frac{1}{j\omega_{n}C_{m}+\frac{1}{j\omega_{n}L+\frac{1}{j \omega_{n}C}}}\) we see in Fig. 3(c,f) how additional peaks for different \(\omega_{m}\) values are added to the system, which can be used to design a multi-frequency sensing dimer.
In Fig. 3(g), we see how a change to the modulation capacitor controls the specific \(\theta\) around which the sensitivity changes most drastically. For each value of \(C_{0}\), the optimal \(\omega_{m}\) is chosen, and \(S^{1}\) is calculated.
### Energy balance and conversion efficiency
Since our system is time-modulated, it is not, in general, passive. Here, we would like to examine the balance between the incident power used to excite the currents, \(P_{inc}\), and the radiated power, \(P_{scat}\), to better understand the energy dynamics. Assuming that no material losses are present, the only way energy can exit the system is through radiation. In reality, there are some losses in the wires and the system elements, but the dissipated power is
Figure 3: (a,b,c): Normalized maximum sensitivity in the fundamental harmonic, as a function of \(\omega_{m}/\omega\) and of the value of inductance/capacitance added to \(Z_{L}\). (d,e,f): Normalized sensitivity of the 1st harmonic, as a function of \(\omega_{m}/\omega\) and of the value of inductance/capacitance, on a logarithmic scale. The red lines are the evaluated \(\omega_{m}\) required for the first harmonic to coincide with the antisymmetric resonance frequency of the non-modulated circuit (yellow for the symmetric resonance). For (c,f): \(C=3pF\). (g): \(S^{1}(\theta)\) for varying values of \(C_{0}\).
negligible in comparison with the radiated power. If we define a cylindrical envelope \(\mathcal{S}\) with radius \(R\rightarrow\infty\), the radiated power is given by (see Appendix)
\[P_{scat}(\omega)=\frac{\eta^{2}k}{4}\left(\left[|I_{1}|^{2}+|I_{2}|^{2}\right]+2J _{0}(kd)\Re\left\{I_{1}\cdot I_{2}^{*}\right\}\right) \tag{10}\]
where \(J_{0}(z)\) is the zeroth-order Bessel function. On the other hand, the power extracted from the incident field, \(P_{inc}(\omega)\), is given by
\[P_{inc}(\omega)=\frac{1}{2}\Re\left\{\alpha^{*}(|\bar{E}_{1}^{inc}|^{2}+\bar{E }_{1}^{inc}G^{*}(d)\bar{I}_{2}^{*})\right\}+\frac{1}{2}\Re\left\{\alpha^{*}(| \bar{E}_{2}^{inc}|^{2}+\bar{E}_{2}^{inc}G^{*}(d)\bar{I}_{1}^{*})\right\} \tag{11}\]
When the system is not modulated and lossless, these quantities will be in equilibrium. When time-modulation is introduced, this equilibrium is violated, since the modulation can potentially provide additional power to the system (or act as a sink and extract power). In this case, there are additional frequencies in which current is generated, and therefore power is radiated, rendering the total scattered power as a sum over all possible frequencies \(P_{scat,tot}=\sum_{n}P_{scat}(\omega_{n})\).
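A short sketch of this power bookkeeping (illustrative; the last line also previews the radiation-efficiency argument made below, by comparing in-phase and out-of-phase currents):

```python
import numpy as np
from scipy.special import j0

eta = 376.73                               # free-space impedance

def P_scat(I1, I2, k, d):
    """Eq. (10): power scattered by the dimer at one frequency."""
    return eta**2 * k / 4 * (abs(I1)**2 + abs(I2)**2
                             + 2 * j0(k * d) * np.real(I1 * np.conj(I2)))

def P_scat_total(I1n, I2n, kn, d):
    """Total scattered power: Eq. (10) summed over all retained harmonics."""
    return sum(P_scat(i1, i2, k, d) for i1, i2, k in zip(I1n, I2n, kn))

k, d = 2 * np.pi / 1.0, 0.05               # kd << 1: deeply subwavelength
print(P_scat(1, 1, k, d), P_scat(1, -1, k, d))   # symmetric mode radiates far more
```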
Figure 4: (a) Normalized sum of the total scattered energy (in all harmonics) (blue), and normalized sum of the scattered energy in the up- and down-converted harmonics (excluding the fundamental \(n=0\)) (red). (b) Scattered energy of the 5 harmonics \(-2\leq n\leq 2\). Both are shown as functions of the normalized modulation frequency \(\frac{\omega_{m}}{\omega_{0}}\), and show the difference between the minimal energy as a function of \(\theta\) (thick line) and the maximal energy as a function of \(\theta\) (thin line).
In Fig. 4(a), we examine the total power gain (blue) and the up- and down-converted scattered power as a function of the modulation frequency \(\omega_{m}\). For each choice of \(\omega_{m}\) the obtained scattered power is also a function of the incidence angle \(\theta\), and since we would like to examine these dynamics, we represent each quantity by two different lines: a thick line, which depicts the minimal power as a function of \(\theta\), \(\min_{\theta}\{P_{scat}\}\), and a thin line, which depicts the maximal power as a function of \(\theta\), \(\max_{\theta}\{P_{scat}\}\). Across all examined modulation frequencies a small amount of total gain is provided to the system, since the blue line in Fig. 4(a) is always \(>1\). Additionally, at certain values of \(\omega_{m}\) we see a significant difference between the maximum and minimum for different DoA \(\theta\). This is a complementary mechanism - when the modulation frequency up- or down-converts to the anti-symmetric dimer mode, the gain provided to the system varies significantly as a function of \(\theta\), playing a role in the resulting enhanced sensitivity to the DoA. Figure 4(b) helps us confirm this picture. We see that the total scattered energy in each harmonic exhibits a noticeable dependence on \(\theta\) (manifesting as a gap between the thick and thin lines) when the conversion corresponds to the anti-symmetric dimer mode. When examining the normalized sum of the converted harmonics (all \(n\neq 0\)) (red), we see that for modulation frequencies corresponding to conversion to dimer resonances the scattered power is approximately 3 times larger, which indicates the much higher conversion efficiency in this regime.
In Fig. 4(a) (red), we notice two types of peaks - peaks for which the amplitude varies as a function of \(\theta\) (manifesting as a significant gap between maximum and minimum values), and peaks which do not have this "spreading." Another characteristic that differs between these is the width of the peaks. The peaks associated with the anti-symmetric mode are much narrower than those associated with the symmetric one. This is because the anti-symmetric mode has a significantly higher quality factor (Q factor), a consequence of the fact that the primary "loss" mechanism here is radiation, and the anti-symmetric mode radiates much less efficiently. This can be seen using the scattered energy equation, Eq. 10 - for a given current magnitude on each wire, when the currents are out of phase (\(I_{1}=-I_{2}\)) we have \(I_{1}\cdot I_{2}^{*}<0\), while for the symmetric mode \(I_{1}\cdot I_{2}^{*}>0\). Since \(J_{0}(kd)\approx 1\), the expected radiated power will be much higher for the symmetric mode than for the anti-symmetric mode.
### Fast modulation
Another possible enhancement mechanism is revealed when examining higher modulation frequencies. While it might be challenging to achieve fast modulations (depending on the incident wave frequency and other system parameters), this physical mechanism exists and may be of use. When examining \(\Delta\mathcal{S}_{1}\) for higher values of \(\omega_{m}\), we notice "ripples" that start occurring when plotting \(\Delta\mathcal{S}_{1}(\omega_{m}/\omega)\), as seen in Fig. 5(a), in red.
This effect happens because the dimer is no longer deep-subwavelength at higher harmonics when the modulation is fast enough. Therefore, the interaction between the wires through the surrounding medium contributes a non-negligible phase difference, which causes the exciting field (incident + interaction) to couple better with the anti-symmetric dimer mode, enhancing the sensitivity. Since this enhancement is non-resonant, it is milder than what we saw previously. However, it can be incorporated with resonance by loading the wires with a resonant impedance at these frequencies, as shown in magenta in Fig. 5. The basic parameters are the same as in section 3.1, and the additional resonant loading consists of \(L_{1}\approx 5.24nH,C_{1}\approx 0.195pF\) connected similarly to the results shown in Fig. 3(c,f). These effects are also demonstrated in the scattered power in Fig. 5(b). For the enhancement around \(\omega_{m}\approx 35\omega\) we see no peak in the scattered energy, indicating the non-resonant operation. However, around \(\omega_{m}=15.7\omega\), we see that the added resonance gives rise to a sharp peak in the scattered power in the 1st harmonic, as it significantly increases the conversion efficiency.
### Parametric amplification
Up to now, we have shown several ways the sensitivity benefits from incorporating time modulation. These revolved around the careful design of the dimer resonances, combined with leveraging the frequency conversion processes that occur when applying periodic temporal modulation. In circuits, parametric amplifiers amplify the input voltage by modulating one of the reactive system elements. The simplest case, which we will examine here, is the degenerate regime, where \(\omega_{m}=2\omega_{0}\) [28]. Recently, operating in this regime was also shown to enhance small antenna matching performance and Q-factor [14, 15, 16]. Since the gain in this regime depends on the phase between the incoming signal and the modulation signal, we intuitively expect that this operation mode will both amplify the currents and enhance the differences between them as a function of the DoA, due to differential gain. We start from Eqs. 5, 6 and 7. We substitute \(\omega_{m}=2\omega_{0}\), and assume, in general, a phase difference of \(\delta\phi\) between the modulation and incoming signals. The incident fields, as they should be substituted into Eq. 5, are
\[\begin{split}&[\tilde{E}_{1}]=\left[\ldots,0,\tilde{E}_{0}^{*}e^{-j \delta\phi},\tilde{E}_{0}e^{j\delta\phi},0,\ldots\right]^{T}\\ &[\tilde{E}_{2}]=\left[\ldots,0,\tilde{E}_{0}^{*}e^{-j\delta \phi+jkd\cos\theta},\tilde{E}_{0}e^{j\delta\phi-jkd\cos\theta},0,\ldots\right] ^{T}.\end{split} \tag{12}\]
The \(\underline{\underline{\Gamma}},\underline{\underline{G}}\) matrices would have the same structure as described in Eq. 5. Using these to solve for the currents in the wires results in current components at all frequencies \(\omega=(1+2n)\omega_{0}\). However, since the main interaction we are interested in is the parametric amplification that results from coupling \([-\omega_{0},\omega_{0}]\), we tune the system parameters differently now. We have already established that working at the resonance frequency of the anti-symmetric mode is preferable. In this case, it will have three key effects: first, the baseline sensitivity of the unmodulated dimer will be around the best we can obtain without modulation. Second, the parametric amplification will increase the content of this anti-symmetric mode. And lastly,
operating near a system resonance will greatly simplify the analysis, rendering the currents for \(\omega\neq\pm\omega_{0}\) negligible. Therefore, we use \(C_{0}\approx 27.2pF\), and truncate \(\underline{\underline{\Gamma}}\) and \(\underline{\underline{\underline{G}}}\) into \(2\times 2\) matrices, for the main two frequencies in the system \([-\omega_{0},\omega_{0}]\). In addition, we increase the losses to see the effects of parametric amplification more clearly, adding a \(2\Omega\) resistance to the periodic loading.
\[\underline{\underline{\underline{\Gamma}}}_{2\times 2}=\begin{bmatrix}\gamma_{0}(- \omega_{0})+\frac{1}{-j\omega_{0}C_{0}\Delta}&\frac{m}{j\omega_{0}C_{0}\Delta} \\ \frac{m^{*}}{-j\omega_{0}C_{0}\Delta}&\gamma_{0}(\omega_{0})+\frac{1}{j\omega_{ 0}C_{0}\Delta}\end{bmatrix},\quad\underline{\underline{\underline{G}}}_{2 \times 2}=\begin{bmatrix}G(d,-\omega_{0})&0\\ 0&G(d,\omega_{0})\end{bmatrix} \tag{13}\]
Yielding the wire currents
\[\begin{split}&[\tilde{I}_{2}]=\left[\underline{\underline{ \underline{\Gamma}}}_{2\times 2}\underline{\underline{\underline{G}}}_{2 \times 2}^{-1}\underline{\underline{\underline{\Gamma}}}_{2\times 2}- \underline{\underline{\underline{G}}}_{2\times 2}\right]^{-1}([\tilde{E}_{2}]+ \underline{\underline{\underline{\Gamma}}}_{2\times 2}\underline{\underline{ \underline{G}}}_{2\times 2}^{-1}[\tilde{E}_{1}])\\ &[\tilde{I}_{1}]=\underline{\underline{\underline{\Gamma}}}_{2\times 2}^{-1} \left([\tilde{E}_{1}]+\underline{\underline{\underline{G}}}_{2\times 2}[\tilde{I}_{2}]\right) \end{split} \tag{14}\]
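For illustration, Eq. (14) can equivalently be solved as one 4x4 block system; in the sketch below, the matrices of Eq. (13) are replaced by toy stand-ins, and \(\delta\phi\) and \(kd\cos\theta\) take arbitrary values:

```python
import numpy as np

def pa_currents(Gam2, G2, E1, E2):
    """Eq. (14) for the degenerate regime, written as one 4x4 block solve.

    Gam2, G2: the 2x2 matrices of Eq. (13); E1, E2: the length-2 incident-field
    vectors of Eq. (12). Eliminating I1 as in Eq. (14) gives the same result.
    """
    A = np.block([[Gam2, -G2], [-G2, Gam2]])
    I = np.linalg.solve(A, np.concatenate([E1, E2]))
    return I[:2], I[2:]

E0, dphi, kd_cos = 1.0, 0.0, 0.01               # illustrative values
E1 = np.array([np.conj(E0) * np.exp(-1j * dphi), E0 * np.exp(1j * dphi)])
E2 = np.array([np.conj(E0) * np.exp(-1j * dphi + 1j * kd_cos),
               E0 * np.exp(1j * dphi - 1j * kd_cos)])
Gam2 = np.array([[2 + 1j, 0.2j], [-0.2j, 2 - 1j]])   # toy stand-in for Eq. (13)
G2 = np.diag([0.5 - 0.1j, 0.5 + 0.1j])
print(pa_currents(Gam2, G2, E1, E2))
```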
In Fig. 6(a), the scattered power, normalized by the scattered power of the non-modulated case, is presented as a function of the modulation depth \(m\) and of the DoA \(\theta\), for \(\delta\phi=0\).
Figure 6: (a) Total scattered power in the PA regime, normalized by the total scattered power of the non-modulated case. Strong amplification occurs only around \(m\approx 0.11\). (b) \(S(\theta)\), normalized by the sensitivity of the non-modulation case.
While there is a mild amplification overall, a very strong gain is obtained for a specific value \(m\approx 0.11\). This operation regime strongly resembles the negative impedance parametric amplifier [28], so such a strong response around a specific \(m\) is expected. Working with this value of \(m\) comes along with a sensitive dependence on the choice of other system parameters. However, since the mere amplification of currents is not the sole purpose, we now examine the map of \(\mathcal{S}\). In Fig. 6(b), the normalized sensitivity is presented as a function of \(m,\theta\). Here, we see that increased sensitivity can be obtained for a wide range of \(m\) values. Moreover, the modulation depth can be used to tune the angle around which the optimal sensitivity is obtained, thus adding a great degree of flexibility that can be controlled by simple means. This shows that while the fundamental operation of the time-modulated system suggested here is similar to a parametric amplifier, the fact that other metrics play a key role is crucial, making this scheme relevant for a much broader range of parameters.
## 4 Conclusions
In this work, we have focused on studying the physical mechanisms that enhance a deeply-subwavelength dimer's sensitivity to the DoA angle. We have shown that by carefully tuning the modulation parameters, different frequency conversion processes contribute to increased sensitivity. Since the modulation parameters can be tuned, one obtains a highly flexible detector that can be tailored for wide-band operation and enhanced sensing in a specific, varying region of space. Modulating in a parametric amplification regime also contributes to a significant increase in sensitivity and exhibits tunability via the modulation depth and phase.
When designing a DoA detection system, the different effects can be combined. As an example, one can stack two modulation tones, one to convert the incoming signal close to the desired anti-symmetric resonance and one to enhance it parametrically.
Overall, we have shown that the richer physical interactions within the sensing system caused by the temporal modulation can be used to enhance DoA detection in various ways. This study can benefit many applications where the space occupied by the detection system must be extremely small, and many different fields stand to gain from this research route.
## Acknowledgements
Y. Mazor acknowledges support from the Israeli Science Foundation grant 1089/22.
## Appendix: Incident and scattered power
Let us assume that for a specific frequency we have the currents \(I_{1},I_{2}\) in the dimer wires. The far fields generated by the dimer are [29]
\[\begin{split}\mathbf{E}_{\mathbf{sc}}&=-\frac{\eta k}{4} \sqrt{\frac{2}{\pi k\rho}}\left[I_{1}e^{-jk\rho-j\frac{kd\cos(\varphi)}{2}-j \frac{\pi}{4}}+I_{2}e^{-jk\rho+j\frac{kd\cos(\varphi)}{2}-j\frac{\pi}{4}} \right]\hat{z}=\\ &=-\frac{\eta k}{4}\sqrt{\frac{2}{\pi k\rho}}e^{-jk\rho-j\frac{\pi }{4}}\left[I_{1}e^{-j\frac{kd\cos(\varphi)}{2}}+I_{2}e^{j\frac{kd\cos(\varphi)} {2}}\right]\hat{z}\end{split} \tag{15}\]
and the corresponding magnetic field is
\[\mathbf{H}_{\mathbf{sc}}=\frac{1}{\eta}\hat{\rho}\times\mathbf{E}_{\mathbf{sc}}=\frac{1}{\eta} E_{sc}\ \hat{\varphi}. \tag{16}\]
This yields the scattered power per unit length
\[\begin{split} P_{sc}&=\frac{1}{2}\Re\int_{0}^{2\pi}\left(\mathbf{E}_{sc}\times\mathbf{H}_{sc}^{*}\right)\cdot\hat{\rho}\,\rho d\varphi\\ &=\frac{\eta^{2}k^{2}\rho}{16}\frac{2}{\pi k\rho}|e^{-jk\rho-j\frac{\pi}{4}}|^{2}\int_{0}^{2\pi}d\varphi\left|I_{1}e^{-j\frac{kd\cos(\varphi)}{2}}+I_{2}e^{j\frac{kd\cos(\varphi)}{2}}\right|^{2}\end{split} \tag{17}\]
Performing the integration, we get
\[\begin{split} P_{sc}&=\frac{\eta^{2}k^{2}\rho}{16}\frac{2}{\pi k\rho}|e^{-jk\rho-j\frac{\pi}{4}}|^{2}\int_{0}^{2\pi}d\varphi\left[(|I_{1}|^{2}+|I_{2}|^{2})+2\Re\left\{|I_{1}|\cdot|I_{2}|e^{-jkd\cos(\varphi)-j\angle(I_{1},I_{2})}\right\}\right]\\ &=\frac{\eta^{2}k}{4}\left(\left[|I_{1}|^{2}+|I_{2}|^{2}\right]+2J_{0}(kd)\Re\left\{I_{1}\cdot I_{2}^{*}\right\}\right)\end{split} \tag{18}\]
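The step from Eq. (17) to Eq. (18) rests on the integral representation of \(J_{0}\); a quick numerical sanity check:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

# Numerical check of the identity used above:
#   (1 / (2*pi)) * Int_0^{2pi} cos(z*cos(phi)) dphi = J0(z)
z = 1.7
val, _ = quad(lambda phi: np.cos(z * np.cos(phi)), 0, 2 * np.pi)
print(val / (2 * np.pi), j0(z))   # the two numbers should agree
```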
Next, \(P_{inc}\) is composed of
\[P_{inc}=\frac{1}{2}\Re\left\{E_{wire}^{inc}I_{wire}^{*}\right\}+\frac{1}{2} \Re\left\{E_{other-wire}^{inc}I_{other-wire}^{*}\right\} \tag{19}\]
For each wire, the current can be expressed as:
\[I_{wire}=\alpha E_{wire}=\alpha\left(E^{inc}+G(d)I_{other-wire}\right) \tag{20}\]
Since the problem is symmetric, the same relation holds for each wire. Hence:
\[P_{inc}=\frac{1}{2}\Re\left\{\alpha^{*}(|E_{1}^{inc}|^{2}+E_{1}^{inc}G^{*}(d)I _{2}^{*})\right\}+\frac{1}{2}\Re\left\{\alpha^{*}(|E_{2}^{inc}|^{2}+E_{2}^{inc }G^{*}(d)I_{1}^{*})\right\} \tag{21}\]
When modulation is present, we need, in general, to sum the contributions from all existing harmonics. |
2305.18666 | BiSLS/SPS: Auto-tune Step Sizes for Stable Bi-level Optimization | The popularity of bi-level optimization (BO) in deep learning has spurred a
growing interest in studying gradient-based BO algorithms. However, existing
algorithms involve two coupled learning rates that can be affected by
approximation errors when computing hypergradients, making careful fine-tuning
necessary to ensure fast convergence. To alleviate this issue, we investigate
the use of recently proposed adaptive step-size methods, namely stochastic line
search (SLS) and stochastic Polyak step size (SPS), for computing both the
upper and lower-level learning rates. First, we revisit the use of SLS and SPS
in single-level optimization without the additional interpolation condition
that is typically assumed in prior works. For such settings, we investigate new
variants of SLS and SPS that improve upon existing suggestions in the
literature and are simpler to implement. Importantly, these two variants can be
seen as special instances of a general family of methods with an envelope-type
step-size. This unified envelope strategy allows for the extension of the
algorithms and their convergence guarantees to BO settings. Finally, our
extensive experiments demonstrate that the new algorithms, which are available
in both SGD and Adam versions, can find large learning rates with minimal
tuning and converge faster than corresponding vanilla SGD or Adam BO algorithms
that require fine-tuning. | Chen Fan, Gaspard Choné-Ducasse, Mark Schmidt, Christos Thrampoulidis | 2023-05-30T00:37:50Z | http://arxiv.org/abs/2305.18666v2 | # BiSLS/SPS: Auto-tune Step Sizes for
###### Abstract
The popularity of bi-level optimization (BO) in deep learning has spurred a growing interest in studying gradient-based BO algorithms. However, existing algorithms involve two coupled learning rates that can be affected by approximation errors when computing hypergradients, making careful fine-tuning necessary to ensure fast convergence. To alleviate this issue, we investigate the use of recently proposed adaptive step-size methods, namely stochastic line search (SLS) and stochastic Polyak step size (SPS), for computing both the upper and lower-level learning rates. First, we revisit the use of SLS and SPS in single-level optimization without the additional interpolation condition that is typically assumed in prior works. For such settings, we investigate new variants of SLS and SPS that improve upon existing suggestions in the literature and are simpler to implement. Importantly, these two variants can be seen as special instances of a general family of methods with an envelope-type step-size. This unified envelope strategy allows for the extension of the algorithms and their convergence guarantees to BO settings. Finally, our extensive experiments demonstrate that the new algorithms, which are available in both SGD and Adam versions, can find large learning rates with minimal tuning and converge faster than corresponding vanilla SGD or Adam BO algorithms that require fine-tuning.
## 1 Introduction
Bi-level optimization has found its applications in various fields of machine learning, such as hyperparameter optimization [14; 17; 30; 40], adversarial training [51], data distillation [2; 53], neural architecture search [28; 39], neural-network pruning [52], and meta-learning [13; 37; 11]. Specifically, it is used widely for problems that exhibit a hierarchical structure of the following form:
\[\min_{x\in X}F(x)=\mathbb{E}_{\phi}[f(x,y^{*}(x);\phi)]\qquad\text{s.t.}\qquad y ^{*}(x)=\operatorname*{argmin}_{y\in Y}\mathbb{E}_{\psi}[g(x,y;\psi)]. \tag{1}\]
Here, the solution to the lower-level objective \(g\) becomes the input to the upper-level objective \(f\), and in (1) the upper-level variable \(x\) is fixed when optimizing the lower-level variable \(y\). To solve such bi-level problems using gradient-based methods requires computing the hypergradient of \(F\), which based on the chain rule is given as [15]:
\[\nabla F(x)=\nabla_{x}f(x,y^{*}(x))-\nabla_{xy}^{2}g(x,y^{*}(x))[\nabla_{yy}^ {2}g(x,y^{*}(x))]^{-1}\nabla_{y}f(x,y^{*}(x)). \tag{2}\]
In practice, the closed-form solution \(y^{*}(x)\) is difficult to obtain, and one strategy is to run a few steps of (stochastic) gradient descent on \(g\) w.r.t. \(y\) to get an approximation \(\bar{y}\), and use \(\bar{y}\) in place of \(y^{*}(x)\). We denote the stochastic hypergradient based on \(\bar{y}\) as \(h_{f}(x,\bar{y})\) and the stochastic gradient of \(g\) w.r.t. \(y\) as \(h_{g}\). This leads to a general gradient-based framework for solving bi-level optimization [15; 19; 4]. At each iteration \(k\), run \(T\) (one or more) steps of SGD on \(y\), i.e. \(y^{k,t+1}=y^{k,t}-\beta h_{g}^{k,t}\), then run one step on \(x\) using the approximated hypergradient:
\[x^{k+1}=x^{k}-\alpha h_{f}(x^{k},y^{k+1}),\quad\text{where}\quad y^{k+1}=y^{k, T}. \tag{3}\]
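As a minimal illustration of the alternating framework in (3), the sketch below runs it on a toy quadratic bi-level problem where the exact hypergradient is available in closed form; the problem, matrices, and step sizes are all hypothetical choices, and sampling noise is omitted for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((5, 3))   # hypothetical lower-level design matrix
b = rng.standard_normal(5)        # hypothetical upper-level target

# Toy problem: g(x, y) = 0.5||y - Wx||^2 and f(x, y) = 0.5||y - b||^2,
# so y*(x) = Wx and the hypergradient (2) reduces to W^T (y - b).
def f_upper(x, y):   return 0.5 * np.sum((y - b) ** 2)
def grad_g_y(x, y):  return y - W @ x
def hypergrad(x, y): return W.T @ (y - b)

x, y = np.zeros(3), np.zeros(5)
alpha, beta, T = 0.05, 0.5, 5     # upper/lower step sizes; these need tuning
for k in range(300):
    for t in range(T):            # T lower-level gradient steps, as in Eq. (3)
        y = y - beta * grad_g_y(x, y)
    x = x - alpha * hypergrad(x, y)  # one upper-level step with the approximate y
print(f_upper(x, W @ x))          # upper objective F(x) at the final iterate
```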
Based on this framework, a series of stochastic algorithms have been developed to achieve the optimal or near-optimal rate of their deterministic counterparts [7; 8]. These algorithms can be broadly divided into single-loop (\(T=1\)) or double-loop (\(T>1\)) categories [23].
Unlike minimizing the single-level finite-sum (convex) problem
\[F(x):=\min_{x\in\mathcal{C}}\frac{1}{N}\sum_{i=1}^{N}f_{i}(x), \tag{4}\]
where only one learning rate is involved when using SGD, bi-level optimization involves tuning both the lower and upper-level learning rates (\(\beta\) and \(\alpha\) respectively). This poses a significant challenge due to the potential correlation between these learning rates [19]. Thus, as observed in Figure 1, algorithm divergence can occur when either \(\alpha\) or \(\beta\) is large. While there is considerable literature on achieving faster rates in bi-level optimization [24; 5; 7; 8], only a few studies have focused on stabilizing its training and automating the tuning of \(\alpha\) and \(\beta\). This work addresses the question: **Is it possible to utilize large \(\alpha\) and \(\beta\) without manual tuning?**
In doing so, we explore the use of stochastic adaptive-step size methods, namely stochastic Polyak step size (SPS) and stochastic line search (SLS), which utilize gradient information to adjust the learning rate at each iteration [44; 29]. These methods have been demonstrated to perform well in interpolation settings with strong convergence guarantees [44; 29]. However, applying them to bi-level optimization (BO) introduces significant challenges, as follows. (1) BO requires tuning two correlated learning rates (for the lower and upper levels). (2) The bias in the stochastic approximation of the hypergradient complicates the practical performance and convergence analysis of SLS and SPS. (3) Other algorithmic challenges arise for both algorithms: for SLS, verifying the stochastic Armijo condition at the upper level involves evaluating the objective at a new \((x,y^{*}(x))\) pair, while \(y^{*}(x)\) is only approximately known; for SPS, most existing variants guarantee good performance only in interpolating settings, which are typically not satisfied for the upper-level objective in BO [22]. Before presenting our solutions to the challenges above in Sec 2, we first review the most closely related literature.
### 1.1 Related Work
**Gradient-Based Bi-level Optimization.** Penalty or gradient-based approaches have been used for solving bi-level optimization problems [10; 45; 21]. Here we focus our discussion on stochastic
Figure 1: Results based on hyper-representation learning task (see Sec 4 for details). Validation loss against upper-level iterations for different values of \(\beta\) (left, \(\alpha=0.005\)) and \(\alpha\) (right, \(\beta=0.01\)). Unless carefully tuned, vanilla SGD-based methods for BO are very unstable.
gradient-based methods as they are closely related to this work. For double-loop algorithms, an early work (BSA) by Ghadimi and Wang [15] has derived the sample complexity of \(\phi\) in achieving an \(\epsilon\)-stationary point to be \(\mathcal{O}(\epsilon^{-2})\), but require the number of lower-level steps to satisfy \(T\sim\mathcal{O}(\epsilon^{-1})\). Using a warm start strategy (stocBiO), Ji et al. [22] removed this requirement on \(T\). However, to achieve the same sample complexity, the batch size of stocBiO grows as \(\mathcal{O}(\epsilon^{-1})\). Chen et al. [4] removed both requirements on \(T\) and batch size by using the smoothness properties of \(y^{*}(x)\) and setting the step sizes \(\alpha\) and \(\beta\) at the same scale. For single-loop algorithms, a pioneering work by Hong et al. [19] gave a sample complexity of \(\mathcal{O}(\epsilon^{-2.5})\), provided \(\alpha\) and \(\beta\) are on two different scales (TTSA). By making corrections to the \(y\) variable update (STABLE), Chen et al. [5] improved the rate to \(\mathcal{O}(\epsilon^{-2})\). However, extra matrix projections required by STABLE can incur high computation cost [5, 4]. By incorporating momentum into the updates of \(x\) and \(y\) (SUSTAIN), Khanduri et al. [24] further improved the rate to \(\mathcal{O}(\epsilon^{-1.5})\)[6]. Besides these single or double-loop algorithms, a series of works have drawn ideas from variance reduction to achieve faster convergence rates for BO. For example, Yang et al. [49] designed the VRBO algorithm based on SPIDER [12]. Dagreou et al. [7, 8] designed the SABA and SRBA algorithms based on SAGA and SARAH respectively, and demonstrate that they can achieve the optimal rate of \(\mathcal{O}(\epsilon^{-1})\)[9, 35].
Huang et al. [20] proposes to use Adam-type step sizes in BO. However, it introduces three sequences of learning rates \((\alpha_{k},\beta_{k},\eta_{k})\) that require tuning, which limits its practical usage. To our knowledge, none of these works have explicitly addressed the fundamental problem of how to select \(\alpha\) and \(\beta\) in bi-level optimization. In this work, we focus on the alternating SGD framework (T can be \(1\) or larger), and design efficient algorithms that find large \(\alpha\) and \(\beta\) without tuning, while ensuring the stability of training.
**Adaptive Step Size.** Adaptive step sizes such as Adam have found great success in modern machine learning, and different variants have been proposed [25, 38, 47, 31, 32]. Here, we limit our discussion to the two adaptive step sizes that are most relevant to this work. The Armijo line search is a classic way of finding step sizes for gradient descent [48]. Vaswani et al. [44] extend it to the stochastic setting (SLS) and demonstrate that the algorithm works well with minimal tuning under interpolation, where the model fits the data perfectly. Hence, the method is adaptive to the local smoothness of the objective, which is typically difficult to predict a priori. However, a theoretical guarantee for SLS in the non-interpolating regime is lacking. In fact, the results in Figure 3 suggest that SLS can perform poorly for convex losses when interpolation is not satisfied. Besides SLS, another adaptive method derived from the Polyak step size is proposed by Loizou et al. [29] under the name stochastic Polyak step size (SPS). Loizou et al. [29] further place an upper bound on the step size, resulting in the SPS\({}_{\max}\) variant. Similar to SLS, the algorithm performs well when the model is over-parametrized. Without interpolation, the algorithm converges to a neighborhood of the solution whose size depends on this upper bound. In a later work, Orvieto et al. [36] make SPS converge to the exact solution by ensuring that the step size and its upper bound are both non-increasing (DecSPS). However, enforcing monotonicity may result in the step size being smaller than that of decaying-step SGD and losing the adaptive features of SPS (see Figure 2, 3). In this work, we propose new versions of SLS and SPS that do not require monotonicity and extend them to the alternating SGD bi-level optimization framework (3).
Figure 2: Experiments on quadratic functions adapted from [29]. The objective is the sum of two-dimensional functions \(f_{i}=\frac{1}{2}(x-x_{i}^{*})^{T}H_{i}(x-x_{i}^{*})\), where \(H_{i}\) is positive definite and \(i=1,2\) (see Appendix B for more details). From left to right, we show: the objective value, distance to optimum, step size, and iterate trajectories.
## 2 Summary of Contributions
We discuss our main contributions in this section, which is organized as follows. First, we discuss our variants of SPS and SLS, and unify them under the umbrella of "envelope-type step-size". Then, we extend the envelope-type step size to the bi-level setting. Finally, we discuss our bi-level line-search algorithms based on Adam and SGD.
**Converging SPSB and SLS by Envelope Approach.** We first propose simple variants of SLS and SPS that converge in the non-interpolating setting while not requiring the step size to be monotonic. To this end, we introduce a new stochastic Polyak step size (SPSB). For comparison, we also recall the step-sizes of SPS\({}_{\max}\) and DecSPS. For all methods, the iterate updates are given as \(x_{k+1}=x_{k}-\gamma_{k}\nabla f_{i_{k}}(x^{k})\) where \(i_{k}\) is sampled uniformly from \([n]=\{1,\ldots,n\}\) at each iteration \(k\). The step-sizes \(\gamma_{k}\) are then defined as follows:
\[\text{SPS}_{\max}\] [29]: \[\gamma_{k}=\min\{\frac{f_{i_{k}}(x^{k})-f_{i_{k}}^{*}}{c\|\nabla f _{i_{k}}(x^{k})\|^{2}},\gamma_{b,0}\} \tag{5}\] \[\text{DecSPS}\] [36]: \[\gamma_{0}=\bar{\gamma}\quad\gamma_{k}=\frac{1}{c_{k}}\min\{\frac{ f_{i_{k}}(x^{k})-l_{i_{k}}^{*}}{\|\nabla f_{i_{k}}(x^{k})\|^{2}},c_{k-1}\gamma_{k-1}\} \quad\forall k\geq 1\] (6) \[\text{\bf SPSB (ours):}\quad\gamma_{k}=\min\{\frac{f_{i_{k}}(x^{k})-l_{i _{k}}^{*}}{c_{k}\|\nabla f_{i_{k}}(x^{k})\|^{2}},\gamma_{b,k}\}, \tag{7}\]
where \(f_{i}^{*}=\inf_{x}f_{i}(x)\), \(\bar{\gamma}=\frac{1}{c_{0}}\min\{\frac{f_{i_{0}}(x^{0})-l_{i_{0}}^{*}}{\| \nabla f_{i_{0}}(x^{0})\|^{2}},c_{0}\gamma_{b,0}\}\), \(c_{k}\) is non-decreasing, \(\gamma_{b,k}\) is non-increasing, and \(l_{i}^{*}\leq f_{i}^{*}\) is any lower bound.
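To make the SPSB rule (7) concrete, here is a minimal sketch on a hypothetical non-interpolating least-squares problem; \(l_{i}^{*}=0\) is a valid lower bound for a squared loss, and the constants \(c\) and \(\gamma_{b,0}\) are illustrative. For simplicity this same-sample variant reuses one sample for the loss, the gradient, and the step size, whereas the theory later assumes two independent samples.

```python
import numpy as np

rng = np.random.default_rng(1)
n, dim = 100, 10
A = rng.standard_normal((n, dim))
y = A @ rng.standard_normal(dim) + 0.5 * rng.standard_normal(n)  # noisy: no interpolation

def f_i(x, i):     return 0.5 * (A[i] @ x - y[i]) ** 2
def grad_i(x, i):  return (A[i] @ x - y[i]) * A[i]

x = np.zeros(dim)
c, gamma_b0 = 0.5, 1.0
for k in range(2000):
    i = rng.integers(n)
    g = grad_i(x, i)
    gamma_bk = gamma_b0 / np.sqrt(k + 1)              # non-increasing upper bound
    sps = (f_i(x, i) - 0.0) / (c * (g @ g) + 1e-12)   # l_i* = 0 for squared losses
    x -= min(sps, gamma_bk) * g                       # SPSB step, Eq. (7)
```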
Unlike SPS\({}_{\max}\) in which \(\gamma_{b,0}\) is a constant, our upper bound \(\gamma_{b,k}\) is non-increasing. Also, unlike DecSPS in which both the step size and the upper bound are non-increasing (this is because \(\gamma_{k}\leq\frac{c_{k-1}}{c_{k}}\gamma_{k-1}\) and \(\min\{\frac{1}{2cL_{\max}},\frac{c_{0}\gamma_{b,0}}{c_{k}}\}\leq\gamma_{k}\leq\frac{c_{0}\gamma_{b,0}}{c_{k}}\) [36, Lemma 1]), we simplify the recursive structure and do not require the step-size to be monotonic. As we empirically observe in Figure 3, the step size of DecSPS is similar to that of decaying SGD and in fact can be much smaller. Interestingly, the resulting performance of DecSPS is worse than SPS\({}_{\max}\), despite SPS\({}_{\max}\) eventually becoming unstable once the iterates get closer to the neighborhood of a solution, with its step size naturally behaving erratically. This is not unexpected due to small gradient norms (note the division by the gradient norm in (5)) and the dissimilarity between samples in the non-interpolating scenario. Moreover, note that the adaptivity of SPS in the early stage seems to be lost in DecSPS due to the monotonicity of the latter. On the other hand, SPSB not only takes advantage of the large SPS steps that lead to fast convergence, but also stays regularized due to the non-increasing upper bound \(\gamma_{b,k}\) in (7). These observations are further supported by the experiments on quadratic functions given in Figure 2, where we observe the fast convergence of SPSB and the instability of SPS\({}_{\max}\). Motivated by the good practical performance of SPSB, we take a similar approach for SLS. The SLS proposed and analyzed by Vaswani et al. [44] starts with \(\gamma_{b,0}\) and in each iteration \(k\) finds the largest \(\gamma_{k}\leq\gamma_{b,0}\) that satisfies:
\[f_{i_{k}}(x_{k}-\gamma_{k}\nabla f_{i_{k}}(x_{k}))\leq f_{i_{k}}(x_{k})-\bar{c }\cdot\gamma_{k}\|\nabla f_{i_{k}}(x_{k})\|^{2},\quad 0<\bar{c}<1. \tag{8}\]
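A backtracking implementation of the stochastic Armijo rule (8), capped by a search starting point \(\gamma_{b}\), can look as follows; this sketch reuses `f_i` and `grad_i` from the toy problem above, and the shrinking factor and iteration cap are hypothetical safeguards.

```python
def slsb_step(x, i, gamma_b, c_bar=0.5, shrink=0.7, max_tries=50):
    """Find the largest gamma <= gamma_b satisfying the Armijo condition (8)."""
    g = grad_i(x, i)
    gamma = gamma_b
    for _ in range(max_tries):
        if f_i(x - gamma * g, i) <= f_i(x, i) - c_bar * gamma * (g @ g):
            break                 # condition (8) holds; accept this gamma
        gamma *= shrink           # otherwise backtrack
    return gamma
```

For the SLSB variant introduced next, the cap passed in as `gamma_b` would be the non-increasing sequence \(\gamma_{b,k}\).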
To ensure its convergence without interpolation, we replace \(\gamma_{b,0}\) with an appropriate non-increasing sequence \(\gamma_{b,k}\). We name this variant of SLS SLSB. Interestingly, the empirical performance and step size of SLSB are similar to those of SPSB (see Figure 3). This can be explained by observing that the step sizes of SPSB and SLSB share similar envelope structures, as follows (see Lemma 1 in
Appendix A):
\[\text{SPSB}: \min\{\frac{1}{2cL_{\max}},\gamma_{b,k}\}\leq\gamma_{k}=\min\{\frac{ f_{i_{k}}(x^{k})-l_{i_{k}}^{*}}{c\|\nabla f_{i_{k}}(x^{k})\|^{2}},\gamma_{b,k}\}, \quad 0<c,\] \[\text{SLSB}: \min\{\frac{2(1-\bar{c})}{L_{\max}},\gamma_{b,k}\}\leq\gamma_{k} \leq\min\{\frac{f_{i_{k}}(x^{k})-l_{i_{k}}^{*}}{\bar{c}\|\nabla f_{i_{k}}(x^{k} )\|^{2}},\gamma_{b,k}\},\quad 0<\bar{c}<1.\]
Therefore, we unify their analysis based on the following generic _envelope-type step size_:
\[\gamma_{k}=\min\{\max\{\gamma_{l,k},\tilde{\gamma}_{k}\},\gamma_{b,k}\},\quad \gamma_{l,k}=\min\{\omega,\gamma_{b,k}\}, \tag{9}\]
where \(\omega>0\), \(\gamma_{b,k}\) is non-increasing, and \(\tilde{\gamma}_{k}\) satisfies \(\gamma_{l,k}:=\min\{\omega,\gamma_{b,k}\}\leq\tilde{\gamma}_{k}\leq\gamma_{b,k}\). We show that this envelope-type step size converges at a rate \(\mathcal{O}(\frac{1}{\sqrt{K}})\) and \(\mathcal{O}(\frac{1}{K})\) for convex and strongly-convex losses respectively.
**Envelope Step Size for Bi-level Optimization (BiSPS).** We extend the analysis of envelope-type step size to the bi-level setting. The step sizes for the upper and lower-level objectives of our general envelope-type method are:
\[\text{Upper: }\alpha_{k}=\min\{\max\{\alpha_{l,k},\tilde{ \alpha}_{k}\},\alpha_{b,k}\}\quad\text{hence}\quad\alpha_{l,k}\leq\tilde{ \alpha}_{k}\leq\alpha_{b,k} \tag{10}\] \[\text{Lower: }\beta_{k,t}=\min\{\frac{g(x^{k},y^{k,t};\psi)-g(x^{k},y ^{*}_{x^{k},\psi};\psi)}{p\|\nabla_{y}g(x^{k},y^{k,t};\psi)\|^{2}},\beta_{b,k} \}\quad\forall t, \tag{11}\]
where \(y^{*}_{x^{k},\psi}\) is the minimizer of the function \(g(x^{k},\cdot;\psi)\), and \(\alpha_{l,k}\), \(\alpha_{b,k}\), and \(\beta_{b,k}\) are three non-increasing sequences. Note that \(\beta_{b,k}\) is fixed over the lower-level iterations for a given \(k\), therefore, this is equivalent to running \(T\) steps of \(\text{SPS}_{\max}\) to minimize the function \(g\) at each upper iteration \(k\). However, the decrease in the upper bound \(\beta_{b,k}\) with \(k\) is crucial to guarantee the overall convergence of the algorithm (see Theorem 3). Starting from the general step-size rules in (10), (11), our bi-level extension of SPS, which we call BiSPS, follow by setting \(\alpha_{k}\) in the form of SPS computed using stochastic hypergradient \(h_{f}^{k}\). That is,
\[\bar{\alpha}_{k}=\frac{f(x^{k},y^{k+1};\phi)-l_{f(\cdot,y^{k+1}; \phi)}^{*}}{p\|h_{f}^{k}\|^{2}},\quad\alpha_{l,k}=\frac{\alpha_{l,0}}{\sqrt{k+ 1}},\quad\alpha_{b,k}=\frac{\alpha_{b,0}}{\sqrt{k+1}}, \tag{12}\]
where \(\alpha_{l,0}\leq\alpha_{b,0}\) and \(l_{f(\cdot,y^{k+1};\phi)}^{*}\) is a lower bound for \(\inf_{x}f(x,y^{k+1};\phi)\). For computing \(h_{f}^{k}\), we can take a similar approach as previous works [15, 19, 4] that use Neumann series setting
\[h_{f}^{k}=\nabla_{x}f(x^{k},y^{k+1};\phi)-\nabla_{xy}g(x^{k},y^{k+1};\psi_{0}) \big{[}\frac{N}{L_{g}}\prod_{j=1}^{\bar{N}}(I-\nabla_{yy}^{2}g(x^{k},y^{k+1}; \psi_{j}))\big{]}\nabla_{y}f(x^{k},y^{k+1};\phi), \tag{13}\]
where \(\bar{N}\) is sampled uniformly from \([N]\) and \(N\) is the total number of samples.
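To see the Neumann-series estimator (13) in action, the sketch below checks numerically that its expectation approximates \([\nabla_{yy}^{2}g]^{-1}\nabla_{y}f\) for a fixed positive-definite Hessian. Following the standard construction, we include the customary \(1/L_{g}\) scaling inside the product (so that \(\|I-H/L_{g}\|<1\)) and sample \(\bar{N}\) uniformly from \(\{0,\ldots,N-1\}\); conventions differ slightly across papers, so treat this as an illustrative assumption rather than the exact sampler of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 4
H = rng.standard_normal((dim, dim)); H = H @ H.T / dim + np.eye(dim)  # stands in for grad_yy g
L_g = np.linalg.eigvalsh(H).max()                                     # smoothness constant
v = rng.standard_normal(dim)                                          # stands in for grad_y f

def neumann_estimate(N, draws=20000):
    """Monte-Carlo average of (N / L_g) * (I - H / L_g)^Nbar v over Nbar ~ U{0..N-1}."""
    total = np.zeros(dim)
    for _ in range(draws):
        w = v.copy()
        for _ in range(rng.integers(0, N)):   # apply (I - H / L_g) Nbar times
            w -= (H @ w) / L_g
        total += (N / L_g) * w
    return total / draws

print(np.linalg.norm(neumann_estimate(50) - np.linalg.solve(H, v)))   # small residual
```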
For BiSPS, we use the same sample for \(f(x^{k},y^{k+1};\phi)\) and \(\nabla f_{x}(x^{k},y^{k+1};\phi)\) when evaluating \(\bar{\alpha}_{k}\) in (12). Interestingly, we also empirically observe that using independent samples for computing \(\bar{\alpha}_{k}\) and \(h_{f}^{k}\) results in similar performance as using the same sample. The optimal rate of SGD for non-convex bi-level optimization is \(\mathcal{O}(\frac{1}{\sqrt{K}})\) without a growing batch size [4]. We show that BiSPS can obtain the same rate (see Theorem 3) by taking the envelope-type step-size of the form (10) and (11). We implement BiSPS according to (12) and observe that it performs better than decaying-step SGD, with less variation across different values of \(\alpha_{b,0}\) (see Figure 4 and note that decaying-step SGD is of the form \(\frac{\alpha_{b,0}}{\sqrt{k+1}}\)).
Figure 4: Results on data distillation experiments adapted from Lorraine et al. [30] (see Sec 4 for details). We compare BiSPS and decaying-step SGD for different values of \(\alpha_{b,0}\) where Hessian inverse in (2) is computed based on the Identity matrix (left) or Neumann series (right). The lower-level learning rate is fixed at \(10^{-4}\).
**Stochastic Line-Search Algorithms for Bi-level Optimization.** The challenge of extending SLS to bi-level optimization is rooted in the term \(y^{*}(x)\). In fact, some bi-level objectives are of the form \(F(x)=f(y^{*}(x))\); that is, \(f\) does not have an explicit dependence on \(x\), e.g. the data hyper-cleaning task [22]. This implies that when SLS takes a potential step on \(x\), the approximation of \(y^{*}(x)\) (i.e., \(\tilde{y}(x)\)) also needs to be updated, otherwise there is no change in function values. Moreover, the use of the approximation \(\tilde{y}(x)\) and the stochastic estimation error in the hypergradient do not guarantee that a step size can always be found. To this end, we modify the Armijo line-search rule to be:
\[\begin{split}\text{BiSLS-SGD:}&\quad f\big{(}x^{ k}-\alpha_{k}h_{f}^{k},\hat{y}^{k+1}(x^{k}-\alpha_{k}h_{f}^{k})\big{)}\leq f(x^{k},y^{k+1})-p \alpha_{k}\|h_{f}^{k}\|^{2}+\delta,\\ \text{BiSLS-Adam:}&\quad f\big{(}x^{k}-\alpha_{k}A_{ k}^{-1}h_{f}^{k},\hat{y}^{k+1}(x^{k}-\alpha_{k}A_{k}^{-1}h_{f}^{k})\big{)}\leq f(x^{k},y^{k+1})-p \alpha_{k}\|h_{f}^{k}\|^{2}_{A_{k}^{-1}}+\delta,\end{split} \tag{14}\]
where \(p,\delta>0\) and \(A_{k}\) is a positive definite matrix such that \(A_{k}^{2}=G_{k}\). Similar to the single-level Adam case, the matrix \(G_{k}\) in the bi-level setting is defined as \(G_{k}=(\beta_{2}G_{k-1}+(1-\beta_{2})\operatorname{diag}(h_{f}^{k}h_{f}^{k^{T }}))/(1-\beta_{2}^{k})\)[25, 43]. Moreover, BiSLS-Adam takes the following steps for updating the variable \(x\): \(x^{k+1}=x^{k}-\alpha_{k}A_{k}^{-1}m_{k}\) where \(m^{k+1}=\beta_{1}m^{k}+(1-\beta_{1})h_{f}^{k}\). The details are given in Algorithms 1 and 2. We denote the search starting point for the upper level as \(\alpha_{b,k}\) at iteration \(k\), and denote it as \(\beta_{b,k}^{t}\) at step \(t\) within iteration \(k\) for the lower level. We remark the following key benefits of resetting \(\alpha_{b,k}\) and \(\beta_{b,k}^{t}\) (by using Algorithm 2) to larger values with reference to \(\alpha_{k}\) and \(\beta_{k}^{t}\) (respectively) at each step: (1) avoiding always searching from \(\alpha_{b,0}\) or \(\beta_{b,0}^{0}\), thus reducing computation cost, and (2) preserving an overall non-increasing (not necessarily monotonic) trend for \(\alpha_{b,k}\) and \(\beta_{b,k}^{t}\), thus improving training stability. We found that different values of \(\eta\) all work well (see Appendix B). The key algorithmic challenge we are facing is that during the backtracking process, for any candidate \(\alpha_{k}\), we need to compute \(\hat{x}^{k}:=x^{k}-\alpha_{k}h_{f}^{k}\) and approximate \(y^{*}(\hat{x}^{k})\) with \(\hat{y}^{k+1}\) (see Algorithm 1). To limit the cost induced by this nested loop, we limit the number of steps to obtain \(\hat{y}^{k+1}\) to \(1\). Moreover, \(\delta\) in (14) plays the role of a safeguard that ensures a step size can be found.
We set it to be small to avoid finding unrealistically large learning rates while tolerating some error in the hypergradient estimation (see Appendix B for experiments on the sensitivity of \(\delta\)). In practice, we empirically find that simply setting \(\delta=0\) works well. In Figure 5(a), we observe that BiSLS-Adam outperforms fine-tuned Adam or SGD. Surprisingly, its training is stable even when the search starting point \(\alpha_{b,0}\) is \(5\) orders of magnitude larger than a fine-tuned learning rate (\(\mathcal{O}(10^{-4})\)). Importantly, BiSLS-Adam finds large upper and lower-level learning rates in the early phase (see Figure 5(b), 5(c)) for different values of \(\alpha_{b,0}\) and \(\beta_{b,0}\) that span \(3\) orders of magnitude. Interestingly, the learning rates naturally decay with training (also see Figure 5(c)). In essence, BiSLS is a **truly adaptive (no knowledge of initialization required) and robust (different initializations work) method that finds large \(\alpha\) and \(\beta\) without tuning**. In the next section, we give the convergence results of the envelope-type step size.
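The sketch below shows how the upper-level backtracking in (14) (BiSLS-SGD form) can be organized: every candidate \(\alpha\) requires re-approximating \(y^{*}\) at the trial point, which is done here with a single cheap lower-level step. It reuses `f_upper` and `grad_g_y` from the earlier toy bi-level sketch; the lower-level trial step size, the shrinking factor, and the \(\delta=0\) default are hypothetical choices, not the paper's exact Algorithm 1.

```python
def bisls_upper_step(x, y, h_f, alpha_b, p=0.5, delta=0.0, shrink=0.7, max_tries=30):
    """Backtrack on alpha until the bi-level Armijo rule (14) holds."""
    sq = h_f @ h_f
    alpha = alpha_b
    for _ in range(max_tries):
        x_new = x - alpha * h_f
        y_hat = y - 0.1 * grad_g_y(x_new, y)      # one lower-level step to refresh y*(x_new)
        if f_upper(x_new, y_hat) <= f_upper(x, y) - p * alpha * sq + delta:
            break                                 # accept alpha
        alpha *= shrink
    return alpha, x_new, y_hat
```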
Figure 5: Results on hyper-representation learning task (see Sec 4 for details). (a) Validation loss against upper-level iterations for comparing BiSLS-Adam/SGD to fine-tuned Adam/SGD. (b)(c) Upper (left) and lower-level (right) learning rates found by BiSLS-Adam. For the tuned Adam, the optimal lower and upper-level learning rates are \(\mathcal{O}(1)\) and \(\mathcal{O}(10^{-4})\), respectively. BiSLS-Adam outperforms tuned Adam/SGD with a starting point that is \(5\) orders of magnitude larger than the optimal step size.
## 3 Convergence Results
### 3.1 Envelope-type step size for single-level optimization
We first state the assumptions, which are standard in the literature, that will be used for analyzing single-level problems. Assumption 1 is on the Lipschitz continuity of \(f\) and \(f_{i}\) in Problem (4).
**Assumption 1**.: _The individual function \(f_{i}\) is convex and \(L_{i}\)-smooth such that \(\|\nabla f_{i}(x)-\nabla f_{i}(x^{\prime})\|\leq L_{i}\|x-x^{\prime}\|,\forall i,\forall x\in\operatorname{dom}f\) and the overall function \(f\) is \(L\)-smooth. We denote \(L_{\max}\triangleq\max_{i}L_{i}\). Furthermore, we assume there exists \(l_{i}^{*}\) such that \(l_{i}^{*}\leq f_{i}^{*}:=\inf_{x}f_{i}(x),\forall i\), and \(f\) is lower bounded by \(f^{*}\) obtained by some \(x^{*}\) such that \(f^{*}=f(x^{*})\)._
The following bounded gradient assumption is also used in the analysis of convex problems [41, 33].
**Assumption 2**.: _There exists \(G>0\) such that \(\|\nabla f_{i}(x)\|^{2}\leq G,\forall i\)._
We first state the theorem for the envelope-type step size defined in (9) for convex functions.
**Theorem 1**.: _Suppose Assumptions 1 and 2 hold, each \(f_{i}\) is convex, \(\mathcal{C}=\operatorname{dom}f\), \(\gamma_{k}\) is independent of the sample \(\nabla f_{i_{k}}(x^{k})\), and choose \(\gamma_{b,k}=\frac{\gamma_{b,0}}{\sqrt{k+1}}\). Then, the envelope-type step size in (9) achieves the following rate,_
\[\mathbb{E}[f(\bar{x}^{K})-f(x^{*})]\leq\frac{\|x^{0}-x^{*}\|^{2}}{2\gamma_{l, K-1}K}+\frac{\gamma_{b,0}^{2}G^{2}\log(K)}{2\gamma_{l,K-1}K},\]
_where \(\gamma_{l,K-1}=\min\{\omega,\frac{\gamma_{b,0}}{\sqrt{K}}\}\) and \(\bar{x}^{K}=\frac{1}{K}\sum_{k=0}^{K}x^{k}\)._
We were not able to give a convergence result that uses the same sample for computing the step size and the gradient. However, we empirically observe that the performance is very similar when using either one or two independent samples per iteration (see Figure 2 and Appendix B). When two independent samples \(i_{k}\) and \(j_{k}\) are used per iteration, the first computes the gradient sample \(\nabla f_{i_{k}}(x^{k})\), and the other computes the step-size \(\gamma_{k}\). For example, for SPSB this gives \(\gamma_{k}=\min\{\frac{f_{j_{k}}(x^{k})-l_{j_{k}}^{*}}{c_{k}\|\nabla f_{j_{k} }(x^{k})\|^{2}},\gamma_{b,k}\}\). This type of assumption has been used in several other works for analyzing such adaptive step sizes [27, 44, 29]. Under this assumption, we specialize the results of Theorem 1 to SPSB and SLSB, where \(\gamma_{l,K-1}=\min\{\frac{1}{2cL_{\max}},\frac{\gamma_{b,0}}{\sqrt{K}}\}\) and \(\gamma_{l,K-1}=\min\{\frac{2(1-\bar{c})}{L_{\max}},\frac{\gamma_{b,0}}{\sqrt{K }}\}\) respectively. Concretely, for \(K\geq\gamma_{b,0}^{2}L_{\max}^{2}\), SLSB and SPSB with \(\gamma_{b,k}=\frac{\gamma_{b,0}}{\sqrt{k+1}}\) and \(c=\bar{c}=\frac{1}{2}\) achieve the following rate: \(\mathbb{E}[f(\bar{x}^{K})-f(x^{*})]\leq\frac{\|x^{0}-x^{*}\|^{2}}{2\gamma_{b, 0}\sqrt{K}}+\frac{\gamma_{b,0}G^{2}\log(K)}{2\sqrt{K}}\). Next, we state the result for the envelope-type step size when \(f\) is \(\mu\)-strongly convex.
**Theorem 2**.: _Suppose \(f\) is a \(\mu\)-strongly convex function satisfying Assumptions 1 and 2, assume \(\mathcal{C}\) is a closed and convex set, and \(\gamma_{k}\) is independent of the sample \(\nabla f_{i_{k}}(x^{k})\). Then an envelope-type step size as in (9) with \(\gamma_{b,k}=\frac{\gamma_{b,0}}{k+1}\), \(\gamma_{b,0}\geq\frac{1}{\mu}\), and \(\omega\mu<1\) achieves the following rate_
\[\mathbb{E}[f(\bar{x}_{K})-f(x^{*})]\leq\frac{\mu k_{0}}{2(K-k_{0})}\big{(}e^{-k _{0}\mu\omega}\|x_{0}-x^{*}\|^{2}+\gamma_{b,0}^{2}G^{2}\big{)}+\frac{\gamma_{b,0}G^{2}\log K}{2(K-k_{0})},\]
_where \(\bar{x}_{K}=\frac{1}{K-k_{0}}\sum_{k=k_{0}}^{K-1}x^{k}\) and \(k_{0}=\max\{1,\lceil\gamma_{b,0}/\omega\rceil-1\}\)._
We can again apply the result of Theorem 2 to SPSB and SLSB with \(\gamma_{b,k}=\frac{\gamma_{b,0}}{k+1}\), \(\gamma_{b,0}\geq\frac{1}{\mu}\), \(\omega=1/L_{\max}\), and \(c=\bar{c}=\frac{1}{2}\) to get an explicit rate: \(\mathbb{E}[f(\bar{x}_{K})-f(x^{*})]\leq\frac{\mu k_{0}}{2(K-k_{0})}\big{(}e^{ -\frac{k_{0}\mu}{L_{\max}}}\|x_{0}-x^{*}\|^{2}+\gamma_{b,0}^{2}G^{2}\big{)}+\frac{ \gamma_{b,0}G^{2}\log K}{2(K-k_{0})}\), where \(k_{0}=\max\{1,\lceil\gamma_{b,0}L_{\max}\rceil-1\}\).
**Remark 1**.: _Under the envelope-type step size framework and the assumption of two independent samples, SLSB and SPSB share the same convergence rates of \(\mathcal{O}(\frac{1}{\sqrt{K}})\) and \(\mathcal{O}(\frac{1}{K})\) as SGD with decaying step-size for convex and strongly-convex losses respectively. This is not surprising because of the structure of the envelope step-size in (9). Indeed, the proof is similar to the standard proof of the analogous rate for SGD with decaying step-size. Nonetheless, we include it here for completeness._
### 3.2 Envelope-type step size for bi-level optimization
We start by recalling standard assumptions in BO [22, 19, 15, 4]. We denote \(z=[x;y]\) and recall the bi-level problem in (1). The first assumption is on the lower-level objective \(g\).
**Assumption 3**.: _The function \(g(x,y)\) is \(\mu_{g}\) strongly convex in \(y\) for any given \(x\). Moreover, \(\nabla g\) is Lipschitz continuous: \(\|\nabla g(x_{1},y_{1})-\nabla g(x_{2},y_{2})\|\leq L_{g}\|z_{1}-z_{2}\|\) (also assume that this holds true for each sampled function \(g(x,y;\psi)\)), and \(\nabla^{2}g\) is Lipschitz continuous: \(\|\nabla^{2}g(x_{1},y_{1})-\nabla^{2}g(x_{2},y_{2})\|\leq L_{G}\|z_{1}-z_{2}\|\). We further assume that \(\|\nabla^{2}_{xy}g(x,y)\|\leq C_{g}\), and the condition number is defined as \(\kappa=\frac{L_{g}}{\mu_{g}}\)._
Next, we state the assumptions on the upper objective \(f\).
**Assumption 4**.: _The function \(f\) and its gradients are Lipschitz continuous. That is: \(\|f(x_{1},y_{1})-f(x_{2},y_{2})\|\leq L_{1}\|z_{1}-z_{2}\|\) and \(\|\nabla f(x_{1},y_{1})-\nabla f(x_{2},y_{2})\|\leq L_{f,1}\|z_{1}-z_{2}\|\). We also assume that \(\|\nabla_{y}f(x,y)\|\leq C_{f}\)._
Furthermore, we make the following standard assumptions on the estimates of \(\nabla f\), \(\nabla g\), and \(\nabla^{2}g\).
**Assumption 5**.: _The stochastic gradients are unbiased: \(\mathbb{E}_{\phi}[\nabla f(x,y;\phi)]=\nabla f(x,y)\), \(\mathbb{E}_{\psi}[\nabla g(x,y;\psi)]=\nabla g(x,y)\), and \(\mathbb{E}_{\psi}[\nabla^{2}g(x,y;\psi)]=\nabla^{2}g(x,y)\). The variances of \(\nabla f(x,y;\phi)\) and \(\nabla^{2}g(x,y;\psi)\) are bounded: \(\mathbb{E}_{\phi}[\|\nabla f(x,y;\phi)-\nabla f(x,y)\|^{2}]\leq\sigma_{f}^{2}\) and \(\mathbb{E}_{\psi}[\|\nabla^{2}g(x,y;\psi)-\nabla^{2}g(x,y)\|^{2}]\leq\sigma_{G} ^{2}\)._
Finally, we introduce the bounded optimal function value assumption in (15), which is used specifically for analyzing step size of the form (11) in the bi-level setting:
\[\mathbb{E}_{\psi}[g(x,y^{*}(x);\psi)-g(x,y^{*}_{x,\psi};\psi)]\leq \sigma_{g}^{2},\forall x, \tag{15}\] \[\mathbb{E}[\|\nabla_{y}g(x,y)-\nabla_{y}g(x,y;\psi)\|^{2}]\leq \sigma_{g}^{2},\forall x,y, \tag{16}\]
where \(y^{*}(x)=\min_{y}g(x,y)\) and \(y^{*}_{x,\psi}=\min_{y}g(x,y;\psi)\) for a given \(x\) (recall that at any iteration \(k\), the lower-level steps in BiSPS are SPS\({}_{\max}\) with an upper bound \(\beta_{b,k}\); furthermore, \(\beta_{b,k}\) is non-increasing w.r.t. upper iteration \(k\)). The one-variable analogous assumption of (15) has been used in the analysis of SPS\({}_{\max}\)[29]. Here, we extend it to a two-variable function. Unlike the bounded variance assumption (16), which needs to hold true for all \(x\) and \(y\), we require (15) to hold at \(y^{*}(x)\) for any given \(x\). As mentioned previously, the closed form solution \(y^{*}(x)\) is difficult to obtain. Thus, we define the following expression by replacing \(y^{*}(x)\) with \(y\) in (2):
\[\bar{\nabla}f(x,y)=\nabla_{x}f(x,y)-\nabla^{2}_{xy}g(x,y)[\nabla^{2}_{yy}g(x, y)]^{-1}\nabla_{y}f(x,y). \tag{17}\]
A stochastic Neumann series as in (13) approximates (17) with \(x\) and \(y\) set to \(x^{k}\) and \(y^{k+1}\), respectively; recall that \(y^{k+1}\) is an approximation of \(y^{*}(x^{k})\) obtained by running \(T\) lower-level SGD steps to minimize \(g\) w.r.t. \(y\) for a fixed \(x^{k}\). Based on Assumptions 3, 4, and 5, we have the following results to be used in the analysis [15]: \(\|\nabla F(x_{1})-\nabla F(x_{2})\|\leq L_{F}\|x_{1}-x_{2}\|\), \(\|y^{*}(x_{1})-y^{*}(x_{2})\|\leq L_{y}\|x_{1}-x_{2}\|\), and \(\|\nabla f(x,y^{*}(x))-\nabla f(x,y)\|\leq L_{f}\|y^{*}(x)-y\|\). Furthermore, the bias in the stochastic hypergradient in (13) (denoted as \(B\)) decays exponentially with \(N\), and its variance is bounded, i.e. \(\mathbb{E}[\|h_{f}^{k}-\mathbb{E}[h_{f}^{k}]\|^{2}]\leq\tilde{\sigma}_{f}^{2}\) (see Appendix A for details) [19].
Now, we state our main theorem based on step sizes of the form (10) and (11).
**Theorem 3**.: _Suppose \(f\) and \(g\) satisfy Assumptions 3, 4, and 5, learning-rate upper bounds \(\alpha_{b,k}=\frac{\alpha_{b,0}}{\sqrt{k+1}}\) and \(\alpha_{l,k}=\frac{\alpha_{l,0}}{\sqrt{k+1}}\) with \(\alpha_{b,0}\) and \(\alpha_{l,0}\) satisfying \(\frac{1}{L_{F}+4L_{y}^{2}}\geq\frac{\alpha_{b,0}^{2}}{\alpha_{l,0}}\) and \(\alpha_{l,0}\leq\alpha_{b,0}\). Further assume that \(\alpha_{k}\) is independent of the stochastic hypergradient \(h_{f}^{k}\), and each sampled function \(g(x,y;\psi)\) is convex. Then under the Assumption (15) with \(p\geq\frac{1}{2}\), \(C_{k}=\min\{\frac{1}{2pL_{g}},\beta_{b,k}\}\), \(T\geq\frac{\log(\alpha_{b,0}L_{f}^{2}+2)}{-\log(1-\mu_{g}C_{K-1})}\), and \(\beta_{b,k}=\frac{\beta_{b,0}}{k+1}\). BiSPS achieves the rate:_
\[\frac{1}{K}\sum_{k=0}^{K-1}\mathbb{E}[\|\nabla F(x^{k})\|^{2}]\leq\tilde{ \mathcal{O}}(\frac{\kappa^{3}}{\sqrt{K}}+\frac{\kappa^{2}\log K}{\sqrt{K}}). \tag{18}\]
**Remark 2**.: _We further give the convergence result under the bounded variance assumption (16) in Appendix A. Theorem 3 shows that BiSPS matches the optimal rate of SGD up to a logarithmic factor
without a growing batch size. We notice that the assumption (15) largely simplifies the expression for \(T\) and does not require an explicit upper bound on \(\beta_{b,0}\). As in the single-level case, using one sample or two samples (the latter makes the upper-level step size independent of the gradient) gives similar empirical performance (see Appendix B). Note that the independence assumption is only needed for the upper level; thus, the two-sample requirement of the theorem does not apply to the lower-level problem. This is useful from a computational standpoint, as typical bi-level algorithms run multiple lower-level updates for each upper-level iteration._
## 4 Additional Hyper-Representation and Data Distillation Experiments
**Hyper-representation learning:** The experiments are performed on the MNIST dataset using LeNet [26; 42]. We use the conjugate gradient method for solving the system of equations that arises when computing the hypergradient [17]. The upper and lower-level objectives are to optimize the embedding layers and the classifier (i.e. the last layer of the neural net), respectively (see Appendix B for details). For constant-step SGD and Adam, we tune the lower-level learning rate \(\beta\in\{10.0,5.0,1.0,0.5,0.1,0.05,0.01\}\). For the upper-level learning rate, we tune \(\alpha\in\{0.001,0.0025,0.005,0.01,0.05,0.1\}\) for SGD, and \(\alpha\in\{10^{-5},5\cdot 10^{-5},10^{-4},5\cdot 10^{-4},0.001,0.01\}\) for Adam (recall that \(\delta\) in (14) is set to 0). Based on the results of Figure 6, we make the following key observations: (1) line-search at the upper level is essential for achieving the optimal performance (Figure 6a); (2) BiSLS-Adam/SGD not only converges fast but also generalizes well (Figure 6b); (3) BiSLS-Adam/SGD is highly robust to the search starting points \(\alpha_{b,0}\) and \(\beta_{b,0}\) (Figure 6c, 6d), addressing the fundamental question of how to tune \(\alpha\) and \(\beta\) in bi-level optimization (see Appendix B for additional results on search cost).
**Data distillation:** The goal of data distillation is to generate a small set of synthetic data from an original dataset that preserves the performance of a model when trained using the generated data [46; 50]. We adapted the experimental setup from Lorraine et al. [30] to distill MNIST digits. We present the results in Figure 7, where we observe that BiSLS-SGD converges significantly faster than fine-tuned Adam or SGD, and generates realistic MNIST images (see Appendix B for more results).
Figure 6: Validation loss (a) and accuracy (b) against iterations. (a) Comparison between using and not using line-search at the upper or lower level; (b) Generalization performance of BiSLS-Adam/SGD and fine-tuned Adam/SGD; (c) Validation loss against iterations for different values of \(\alpha_{b,0}\) (\(\beta_{b,0}\) fixed at \(100\)). (d) Same plot as (c) but for different values of \(\beta_{b,0}\) (\(\alpha_{b,0}\) fixed at \(10\)).
Figure 7: (a)(b): Comparison between BiSLS-SGD and Adam/SGD for data distillation on the MNIST dataset. Validation loss plotted against iterations. (a) Hypergradient computed using the Neumann series; (b) Inverse Hessian in (2) treated as the Identity [30] when computing the hypergradient; (c) Distilled MNIST images after 3000 iterations of BiSLS-SGD.
## 5 Conclusion
In this work, we have given simple alternatives to SLS and SPS that show good empirical performance in the non-interpolating scenario without requiring the step size to be monotonic. We unify their analysis based on a simplified envelope-type step size, and extend the analysis to the bi-level setting while designing an SPS-based bi-level algorithm. Finally, we propose the bi-level line-search algorithms BiSLS-Adam/SGD, which are empirically robust and truly adaptive to learning-rate initialization. Our work opens several possible future directions. Given the superior performance of BiSLS, we prioritize an analysis of its convergence rates. The difficulty stems from: (a) the bias in hypergradient estimation; (b) the dual updates in \(x\) and \(y^{*}(x)\) (incurring nested loop structures); (c) the error in estimating \(y^{*}(x)\). For single-level optimization, we highlight relaxing the two-sample assumption on SPSB/SLSB as an important direction. On the other hand, we also find it interesting that the two-sample variants of our algorithms do not appear to deteriorate performance in practice. We hope our ideas motivate further research on practical bi-level optimization algorithms that avoid the often computationally exhausting need for tuning step sizes, and we hope to see more results extending SPS/SLS beyond the originally investigated setting of interpolating single-level optimization.
**Acknowledgement.** This work was partially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grants RGPIN-2021-03677 and RGPIN-2022-03669.
|
2304.08990 | A Comparison of Image Denoising Methods | The advancement of imaging devices and countless images generated everyday
pose an increasingly high demand on image denoising, which still remains a
challenging task in terms of both effectiveness and efficiency. To improve
denoising quality, numerous denoising techniques and approaches have been
proposed in the past decades, including different transforms, regularization
terms, algebraic representations and especially advanced deep neural network
(DNN) architectures. Despite their sophistication, many methods may fail to
achieve desirable results for simultaneous noise removal and fine detail
preservation. In this paper, to investigate the applicability of existing
denoising techniques, we compare a variety of denoising methods on both
synthetic and real-world datasets for different applications. We also introduce
a new dataset for benchmarking, and the evaluations are performed from four
different perspectives including quantitative metrics, visual effects, human
ratings and computational cost. Our experiments demonstrate: (i) the
effectiveness and efficiency of representative traditional denoisers for
various denoising tasks, (ii) a simple matrix-based algorithm may be able to
produce similar results compared with its tensor counterparts, and (iii) the
notable achievements of DNN models, which exhibit impressive generalization
ability and show state-of-the-art performance on various datasets. In spite of
the progress in recent years, we discuss shortcomings and possible extensions
of existing techniques. Datasets, code and results are made publicly available
and will be continuously updated at
https://github.com/ZhaomingKong/Denoising-Comparison. | Zhaoming Kong, Fangxi Deng, Haomin Zhuang, Jun Yu, Lifang He, Xiaowei Yang | 2023-04-18T13:41:42Z | http://arxiv.org/abs/2304.08990v2 | # A Comparison of Image Denoising Methods
###### Abstract
The advancement of imaging devices and the countless images generated every day pose an increasingly high demand on image denoising, which still remains a challenging task in terms of both effectiveness and efficiency. To improve denoising quality, numerous denoising techniques and approaches have been proposed in the past decades, ranging from different regularization terms and algebraic representations to advanced deep neural network (DNN) architectures. Despite their sophistication, many methods may fail to achieve desirable results for simultaneous noise removal and fine detail preservation. In this paper, to investigate the applicability of existing denoising techniques, we compare a variety of denoising methods on both synthetic and real-world datasets for different applications. A new dataset is introduced for benchmarking, and evaluations are performed from four different perspectives including quantitative metrics, visual effects, human ratings and computational cost. Through extensive experiments, we witness the astonishing success brought by DNN methods, and also recognize the competitive performance of traditional denoisers. In spite of the tremendous progress in recent years, we discuss shortcomings and possible extensions of existing techniques. Datasets, code and results are available and will be continuously updated at [https://github.com/ZhaomingKong/Denoising-Comparison](https://github.com/ZhaomingKong/Denoising-Comparison).
Image denoising, nonlocal self-similarity, block-matching filters, deep neural network, real-world images.
## 1 Introduction
Image denoising enjoys a long history, and pioneering works [1] date back decades. The primary goal of denoising is to enhance image quality by estimating the underlying clean image from a noisy observation. As a simple and special form of inverse problem [2], it has drawn extensive attention from both academia and industry. In real-world applications, image denoising can be integrated into many different tasks such as visual tracking [3], image segmentation [4] and classification [5].
In recent years, the rapid development of modern imaging systems and technologies has largely enriched the information preserved and presented by an image, enabling more faithful representations of real scenes. A good example is the rise and spread of advanced mobile phones, which facilitates the production of high-quality images and videos. In practice, noise removal has become a necessity for various imaging sensors and techniques such as multispectral/hyperspectral imaging (MSI/HSI) [6], magnetic resonance imaging (MRI) [7] and computed tomography (CT) [8]. Meanwhile, the increase in image size and dimension also puts forward higher requirements for denoising in terms of both effectiveness and efficiency. Therefore, interest in the realm of denoising has grown consistently, producing a large number of approaches [1, 9], which may be roughly divided into two categories, namely _traditional denoisers_ and _DNN methods_, depending on whether neural network architectures are utilized.
Briefly, traditional denoisers normally filter out noise based solely on the input noisy observation by taking advantage of different regularization terms and image priors [10, 11]. A particular and powerful solution that deserves our specific attention is the block-matching 3D (BM3D) framework [12], which integrates the nonlocal self-similarity (NLSS) characteristic of natural images [13], sparse representation [14] and transform domain techniques [15] into a subtle paradigm. Since the birth of BM3D, there is no shortage of extensions originating from different disciplines. To name a few, Dabov et al. [16] improve BM3D by exploiting shape-adaptive image patches and principal component analysis (PCA). Maggioni et al. [17] and Rajwade et al. [18] introduce 3D cubes of voxels for high-dimensional data. Zhang et al. [19] replace the sparsity constraint with the low-rank assumption. Xu et al. [20] employ the Maximum A-Posterior (MAP) estimation technique [21] and propose a trilateral weighted sparse coding scheme.
Despite the steady improvements brought by classic algorithms, they suffer from several drawbacks [22], such as the need to solve complex optimization problems in the test phase, manually set parameters, and the failure to exploit auxiliary information. To address these issues, DNN methods have been given exceptionally large attention and have shown promising results in image denoising [23]. The arrival of the deep learning era has significantly broadened the scope of denoising and infused new insights into the design of effective denoisers. For example, Zhang et al. [24] incorporate batch normalization (BN) [25], the rectified linear unit (ReLU) [26] and residual learning [27] into a convolutional neural network (CNN) model. Chen et al. [28] introduce generative adversarial networks (GANs) [29] to resolve the problem of unpaired noisy images. Lefkimmiatis et al. [30] and Davy et al. [31] combine NLSS and CNNs to efficiently remove noise.
Accompanying the significant and inspiring progress of denoising algorithms, concerns may arise about their practical applicability, as a large proportion of approaches are verified on a limited number (often less than three) of
datasets. Besides, with a considerable amount of existing methods [1, 7, 9, 13, 15, 23, 32, 33, 34, 35, 36], a study of their performance across different image denoising tasks and applications is still lacking. In this paper, we intend to narrow the gap by collecting and comparing various denoisers to investigate their effectiveness, efficiency, applicability and generalization ability with both synthetic and real-world experiments.
The main contributions of the paper are as follows.
(1) We construct a real-world dataset for image and video denoising tasks with a variety of digital cameras and smartphones. The dataset is composed of images and video sequences of both indoor and outdoor scenes under different lighting conditions, which serves as a good complement to current benchmark datasets.
(2) We compare a variety of methods and perform extensive experiments in both synthetic and real-world scenarios for different denoising tasks and applications, including images, video sequences, 3D MRI volumes and \(\text{MSI}/\text{HSI}\) data. We adopt both objective and subjective metrics and evaluate the denoising quality of compared methods with quantitative results and human visual ratings.
(3) We make several interesting observations based on experimental results. First, representative traditional denoisers such as the BM3D family [12, 17, 37, 38, 39] still demonstrate very competitive performance in several denoising applications. In addition, we argue that a simple modified singular value decomposition (M-SVD) method is able to produce results similar to those of tensor-based approaches in image denoising. For DNN methods, advanced network architectures gain significant improvements over traditional denoisers when fine-tuned with given training/validation samples, but the pretrained models and pre-defined settings may not generalize well to other datasets. Nevertheless, we intend to identify models that exhibit impressive generalizability. For example, FCCF [40], DRUNet [41] and PNGAN [42] produce state-of-the-art results on a number of real-world image datasets. FastDVDNet [43], FloRNN [44] and RVRT [45] show outstanding performance in the video denoising task in terms of both effectiveness and efficiency. Interestingly, many of these effective models, such as FCCF, RVRT and FastDVDNet, take advantage of Gaussian noise modeling and denoisers.
The rest of this paper is organized as follows. Section 2 introduces background knowledge. Section 3 gives a brief review on related denoising techniques and datasets. Section 4 provides experimental results and discussions. Section 5 concludes the paper.
## 2 Background
### _Symbols and Notations_
In this paper, we mainly adopt the mathematical notations and preliminaries of tensors from [46] for image representation. Vectors and matrices are first- and second-order tensors, denoted by boldface lowercase letters \(\mathbf{a}\) and capital letters \(\mathbf{A}\), respectively. A higher-order tensor (of order three or above) is denoted by a calligraphic letter, e.g., \(\mathcal{A}\). An \(N\)th-order tensor is denoted as \(\mathcal{A}\in\mathbb{R}^{I_{1}\times I_{2}\times\cdots\times I_{N}}\). The \(n\)-mode product of a tensor \(\mathcal{A}\) by a matrix \(\mathbf{U}\in\mathbb{R}^{P_{n}\times I_{n}}\), denoted by \(\mathcal{A}\times_{n}\mathbf{U}\in\mathbb{R}^{I_{1}\times\cdots\times I_{n-1}\times P_{n}\times I_{n+1}\times\cdots\times I_{N}}\), is also a tensor. The mode-\(n\) matricization or unfolding of \(\mathcal{A}\), denoted by \(\mathbf{A}_{(n)}\), maps the tensor element \((i_{1},i_{2},\ldots,i_{N})\) to the matrix element \((i_{n},j)\), where \(j=1+\sum_{k=1,k\neq n}^{N}(i_{k}-1)J_{k}\) with \(J_{k}=\prod_{m=1,m\neq n}^{k-1}I_{m}\). The Frobenius norm of a tensor \(\mathcal{A}\in\mathbb{R}^{I_{1}\times I_{2}\times\cdots\times I_{N}}\) is defined as \(\|\mathcal{A}\|_{F}=\sqrt{\sum_{i_{1}=1}^{I_{1}}\cdots\sum_{i_{N}=1}^{I_{N}}\mathcal{A}_{i_{1}\ldots i_{N}}^{2}}\).
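The following numpy sketch implements the mode-\(n\) unfolding and the \(n\)-mode product just defined; the column ordering of this matricization differs from the \(j\)-index formula above (any consistent ordering works for the \(n\)-mode product), and all array sizes are illustrative.

```python
import numpy as np

def unfold(A, n):
    """Mode-n matricization A_(n): mode-n fibers become the columns."""
    return np.moveaxis(A, n, 0).reshape(A.shape[n], -1)

def fold(M, n, shape):
    """Inverse of unfold for a tensor of the given target shape."""
    lead = [shape[n]] + [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(M.reshape(lead), 0, n)

def mode_n_product(A, U, n):
    """Compute A x_n U by multiplying U onto every mode-n fiber of A."""
    out_shape = A.shape[:n] + (U.shape[0],) + A.shape[n + 1:]
    return fold(U @ unfold(A, n), n, out_shape)

A = np.random.rand(4, 5, 6)           # a third-order tensor
U = np.random.rand(3, 5)              # P_2 x I_2
print(mode_n_product(A, U, 1).shape)  # (4, 3, 6)
print(np.linalg.norm(A))              # Frobenius norm (2-norm of the flattened tensor)
```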
### _Noise Modeling_
Consider a noisy observation \(\mathcal{Y}\) and its underlying clean image \(\mathcal{X}\). A general assumption on the noise distribution is additive white Gaussian noise (AWGN) with variance \(\sigma^{2}\), denoted by \(\mathcal{N}(0,\sigma^{2})\), and the degradation process is then given as
\[\mathcal{Y}=\mathcal{X}+\mathcal{N} \tag{1}\]
Indeed, noise modeling can be more complex and challenging in that noise in real-world images may be multiplicative and signal-dependent [47]. Therefore, there are plenty of non-i.i.d. Gaussian models tailored to the needs of different applications, such as the mixed Gaussian impulse noise of grayscale and color images [48, 49], sparse random corruptions of video data [50], mixture noise removal of MSI/HSI [51, 52], and Rician noise reconstruction of MRI [17, 53]. In this paper, our synthetic experiments and analysis are mainly based on the AWGN model because: (i) the majority of existing methods are able to handle Gaussian noise, (ii) certain types of noise can be transformed to a Gaussian distribution, and (iii) Romano et al. [54] have recently pointed out that the removal of AWGN from an image is largely a solved problem, which may help explain the effectiveness of the simplified noise modeling in Eq. (1).
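The degradation model (1) is simple to synthesize; the sketch below generates a noisy observation and evaluates PSNR, the usual quantitative metric in synthetic denoising experiments. The \([0,255]\) scale and \(\sigma=25\) are common but hypothetical choices.

```python
import numpy as np

def add_awgn(x, sigma=25.0, seed=None):
    """Synthesize Y = X + N with N ~ N(0, sigma^2), Eq. (1); x assumed on a [0, 255] scale."""
    rng = np.random.default_rng(seed)
    return x + sigma * rng.standard_normal(x.shape)

def psnr(clean, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB between a clean image and its estimate."""
    mse = np.mean((clean - estimate) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```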
### _Nonlocal Self-similarity_
The nonlocal self-similarity (NLSS) property of natural images is probably the most important prior adopted by many different denoising methods. Briefly, NLSS refers to the fact that a local image patch often has many nonlocal patches similar to it across the image [55]. Usually, the similarity between two patches \(\mathcal{P}_{A}\) and \(\mathcal{P}_{B}\) is measured by their Euclidean distance \(d_{AB}=\|\mathcal{P}_{A}-\mathcal{P}_{B}\|\). In practice, to reduce computation time, the search for similar patches is restricted to a local window \(\Omega_{SR}\) of predefined size. As illustrated in Fig. 1, the patch representation and the rule of similar patch search (SPS) may vary for different types of data. For example, SPS for grayscale/color images can be conducted only on the single/luminance channel; for video sequences, SPS is performed along both temporal and spatial dimensions; and for MRI and MSI/HSI data, a patch can be represented by a 3D cube or a square tube with multiple spectral bands.
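A direct (unoptimized) implementation of similar patch search for a grayscale image can look as follows; the patch size, window radius, and group size \(K\) are illustrative defaults.

```python
import numpy as np

def block_matching(img, ref_pos, patch=8, window=12, K=16):
    """Return positions of the K patches most similar to the reference patch.

    Similarity is the Euclidean distance between patches, and the search is
    restricted to a local window around the reference, as in Fig. 1.
    """
    i0, j0 = ref_pos
    ref = img[i0:i0 + patch, j0:j0 + patch]
    cands = []
    for i in range(max(0, i0 - window), min(img.shape[0] - patch, i0 + window) + 1):
        for j in range(max(0, j0 - window), min(img.shape[1] - patch, j0 + window) + 1):
            d = np.linalg.norm(img[i:i + patch, j:j + patch] - ref)
            cands.append((d, (i, j)))
    cands.sort(key=lambda t: t[0])
    return [pos for _, pos in cands[:K]]   # the reference itself is included (d = 0)
```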
Fig. 1: The nonlocal self-similarity (NLSS) prior and patch representation of different imaging sensors, techniques and applications.
## 3 Related Works
In this section, we briefly introduce related denoising methods and datasets of different applications, which are summarized in Table I, Table II and Table III. More details can be found in the supplemental material and previous works [13, 15, 7, 32, 33, 34, 35].
### _Traditional Denoisers_
For traditional denoisers, learning and denoising are usually accomplished only with the noisy image by leveraging the NLSS property. The design of related algorithms can stem from the Bayesian point of view with various image priors [1]. In our journey, we shall focus on the popular 'grouping-collaborative filtering-aggregation' framework with different algebraic representations. The flowchart of this effective three-stage paradigm is illustrated in Fig. 2.
#### 3.1.1 Grouping
For every \(d\)-dimensional noisy image patch \(\mathcal{P}_{n}\), based on certain patch matching criteria [59, 74, 76, 102, 103, 104], the grouping step stacks \(K\) similar (overlapping) patches located within a local window \(\Omega_{SR}\) into a \(d+1\)-dimensional group. For example, consider a 3D patch \(\mathcal{P}_{n}\in\mathbb{R}^{H\times W\times N}\), where \(H\), \(W\) and \(N\) represents height, width and the number of channels or spectral bands, respectively, the 4D group of \(K\) patches can be directly represented by a fourth-order tensor \(\mathcal{G}_{n}\in\mathbb{R}^{H\times W\times N\times K}\), or a 2D matrix \(\mathbf{G}_{n}\in\mathbb{R}^{HWN\times K}\) if every patch \(\mathcal{P}_{n}\) is reshaped into a long vector \(\mathbf{p}_{n}\in\mathbb{R}^{HWN}\).
#### 3.1.2 Collaborative Filtering
Collaborative filters operate on the noisy patch group \(\mathcal{G}_{n}\) to estimate the corresponding underlying clean group \(\mathcal{G}_{c}\) via
\[\hat{\mathcal{G}}_{c}=\arg\min_{\mathcal{G}_{c}}\|\mathcal{G}_{n}-\mathcal{ G}_{c}\|_{F}^{2}+\rho\cdot\Psi(\mathcal{G}_{c}) \tag{2}\]
or in the matrix form
\[\hat{\mathbf{G}}_{c}=\arg\min_{\mathbf{G}_{c}}\|\mathbf{G}_{n}-\mathbf{G}_{c} \|_{F}^{2}+\rho\cdot\Psi(\mathbf{G}_{c}) \tag{3}\]
where \(\|\mathcal{G}_{n}-\mathcal{G}_{c}\|_{F}^{2}\) or \(\|\mathbf{G}_{n}-\mathbf{G}_{c}\|_{F}^{2}\) indicates the conformity between the clean and noisy groups, and \(\Psi(\cdot)\) is a regularization term for certain priors. For example, to model the nonlocal redundancies, the low-rank prior is adopted in [67, 71, 88, 90] with \(\Psi(\mathbf{G}_{c})=\|\mathbf{G}_{c}\|_{*}\) for matrix and \(\Psi(\mathcal{G}_{c})=\sum_{n=1}^{4}a_{n}\|\mathbf{G}_{c_{(n)}}\|_{*}\) for tensor [105]. In addition, the dictionary learning model with over-complete representations [14, 106, 20] is utilized to reconstruct \(\mathbf{G}_{c}\) with a dictionary \(\mathbf{D}\) and sparse coding coefficients \(\mathbf{C}\) via
\[\hat{\mathbf{C}}=\arg\min_{\mathbf{C}}\|\mathbf{G}_{n}-\mathbf{D}\mathbf{C}\|_{F}^{2}+\lambda\|\mathbf{C}\|_{1} \tag{4}\]
where \(\lambda\|\cdot\|_{1}\) is the regularization term that enforces a sparsity constraint on \(\mathbf{C}\). Once \(\hat{\mathbf{C}}\) is computed, the latent clean patch group \(\hat{\mathbf{G}}_{c}\) can be estimated as \(\hat{\mathbf{G}}_{c}=\mathbf{D}\hat{\mathbf{C}}\). In [6] and [86], Eq. (4) is extended to tensors for MSI/HSI denoising with higher-order SVD (HOSVD) [107, 108] and tensor-SVD (t-SVD) [109, 110] transforms, respectively. A simple and effective alternative is to model the sparsity with certain thresholding techniques [56, 111] to attenuate the noise. For example, the hard-thresholding technique is adopted by the BM3D family and some state-of-the-art methods [18, 92], which shrinks the coefficients \(\mathcal{T}(\mathcal{G}_{n})\) in the transform domain [15] under a threshold \(\tau\) via
\[\mathcal{G}_{t}=\begin{cases}\mathcal{T}(\mathcal{G}_{n}),&|\mathcal{T}( \mathcal{G}_{n})|\geq\tau\\ 0,&|\mathcal{T}(\mathcal{G}_{n})|<\tau\end{cases} \tag{5}\]
where \(\mathcal{T}\) represents a pre-defined or learned transform. The estimated clean group \(\hat{\mathcal{G}}_{c}\) is obtained by inverting the transform via
\[\hat{\mathcal{G}}_{c}=\mathcal{T}^{-1}(\mathcal{G}_{t}) \tag{6}\]
Note that SVD-based transforms are widely adopted among traditional denoisers; this popularity largely results from the invertible orthogonal bases of SVD.
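As a simplified illustration of the shrinkage in Eqs. (5)-(6), the sketch below hard-thresholds the singular values of a noisy group, using the SVD as the invertible orthogonal transform \(\mathcal{T}\); this is our own toy version, not the authors' exact M-SVD, and in practice the threshold \(\tau\) would be tied to the noise level \(\sigma\).

```python
import numpy as np

def svd_hard_threshold(G_n, tau):
    """Eqs. (5)-(6) with SVD as the transform: zero out transform-domain
    coefficients (singular values) below tau, then invert the transform."""
    U, s, Vt = np.linalg.svd(G_n, full_matrices=False)
    s_t = np.where(np.abs(s) >= tau, s, 0.0)  # hard-thresholding, Eq. (5)
    return (U * s_t) @ Vt                     # inverse transform, Eq. (6)
```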
#### 3.1.3 Aggregation
To further smooth out noise, the estimated clean patches of \(\hat{\mathcal{G}}_{c}\) are written back to their original locations and averaged. More specifically, at the pixel level, every pixel \(\hat{p}_{i}\) of the denoised image is the (weighted) average of all pixels at the same position in the filtered groups \(\hat{\mathcal{G}}_{c}\), which can be formulated as
\[\hat{p}_{i}=\sum_{\hat{p}_{i_{k}}\in\hat{\mathcal{G}}_{c}}w_{i_{k}}\hat{p}_{i_{k}} \tag{7}\]
where \(w_{i_{k}}\) and \(\hat{p}_{i_{k}}\) are weight and local pixel, respectively.
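A minimal aggregation routine implementing Eq. (7) with uniform weights could look as follows; BM3D-style implementations instead use sparsity- or noise-dependent weights \(w_{i_{k}}\).

```python
import numpy as np

def aggregate(shape, filtered_groups, locations_list, patch=8):
    """Write every filtered patch back to its original location and average
    overlapping estimates pixel-wise (uniform weights for simplicity).
    `shape` is the full image shape, e.g. image.shape."""
    acc = np.zeros(shape)
    wgt = np.zeros(shape)
    for G_c, locations in zip(filtered_groups, locations_list):
        for k, (i, j) in enumerate(locations):
            acc[i:i + patch, j:j + patch] += G_c[:, k].reshape(patch, patch)
            wgt[i:i + patch, j:j + patch] += 1.0
    return acc / np.maximum(wgt, 1e-8)  # avoid division by zero off-support
```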
#### 3.1.4 Discussion
The major difference among various traditional patch-based denoisers lies mainly in the collaborative filtering step, which varies according to the selection of regularization terms, transforms and algebraic representations. Intuitively, reshaping the 4D group \(\mathcal{G}\) of Eq. (2) into the 2D matrix \(\mathbf{G}\) of Eq. (3) may break the internal structure of natural images. Therefore, a typical assumption [6, 18, 84, 86, 90, 92, 97, 112] is that tensor representation and decomposition techniques can help preserve more structural information, based on the fact that images can be naturally represented by multi-dimensional arrays. However, the conventional and widely used tensor model may fall into the unbalance trap [113, 114], leading to unsatisfactory image restoration performance. In this paper, we further show that with slight modifications, a simple modified SVD (M-SVD) implementation can produce competitive results compared with several tensor-based methods. The M-SVD approach is described in Algorithm 1, with more details given in the supplemental material.
### _DNN Methods_
#### 3.2.1 Overview
The most recent developments in image processing stem largely from the application of deep learning techniques, which demonstrate outstanding performance in a wide variety of tasks [171]. Image denoising is no exception. From the early plain networks [115, 172] to the recently proposed generative and diffusion models [42, 166], numerous network architectures and frameworks have been developed with different training paradigms, including supervised, self-supervised and unsupervised learning1.
Fig. 2: Illustration of the grouping-collaborative filtering-aggregation framework for traditional denoisers.
Footnote 1: It is noticed that the terms _self-supervised, unsupervised and blind_ denoising are often used interchangeably in the literature [36].
#### 3.2.2 Supervised Methods
Different from traditional denoisers that use only internal information of the noisy observation, the supervised training strategy of DNN methods is often guided by external priors and data. The goal is to minimize the distance \(\mathcal{L}\) between predicted and clean images via
\[\min_{\theta}\sum_{i}\mathcal{L}(\mathcal{F}_{\theta}(\mathcal{Y}_{i}), \mathcal{X}_{i})+\rho\cdot\Psi(\mathcal{F}_{\theta}(\mathcal{Y}_{i})) \tag{8}\]
where \(\mathcal{L}\) can be measured by different loss functions [36], \(\mathcal{X}_{i}\) and \(\mathcal{Y}_{i}\) are clean-noisy image (patch) pairs, \(\mathcal{F}_{\theta}\) with parameter \(\theta\) is a nonlinear function that maps noisy patches onto predicted clean patches, and \(\Psi(\cdot)\) represents certain regularizers [30, 173]. Early methods [174, 175] work with the known shift blur function and weighting factors. Burger et al. [115] show that a simple multi-layer perceptron (MLP) network is able to compete with representative
traditional denoisers at certain noise levels. To extract latent features and exploit the self-similarity property of images, a widely adopted network is CNN [176, 177] with flexible size of convolution filters and local receptive fields. Fig. 3 illustrates a simple CNN denoising framework with three convolutional layers. Due to the effectiveness of CNN, its variations are quite extensive. To name a few, GCDN [134] utilizes graph convolution networks (GCNs) to capture self-similar information. SADNet [137] introduces a residual spatial-adaptive block and context block to sample related features and obtain multi-scale information. QRNN3D [132] exploits 3D convolutions to extract structural spatio-spectral correlation of MSI/HSI data.
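As a hedged illustration of such a network, the PyTorch sketch below implements a three-convolutional-layer CNN in the spirit of Fig. 3; the residual (noise-predicting) formulation is a common design choice rather than a requirement of Eq. (8), and all names and sizes here are ours.

```python
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Toy three-layer CNN denoiser; real models are deeper and wider."""
    def __init__(self, channels=3, width=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, y):
        return y - self.body(y)  # residual learning: the network predicts noise

# A supervised step for Eq. (8) with an L2 loss (regularizer omitted) would be:
#   loss = ((model(noisy) - clean) ** 2).mean(); loss.backward()
```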
#### 3.2.3 Self-supervised and Unsupervised Methods
Collecting large-scale matched clean-noisy image pairs for training is expensive and impractical, especially in the medical imaging sector [178]. This limitation of supervised denoising prompts the development of self-supervised and unsupervised denoising networks.
To get rid of the prerequisite on training data and leverage the power of DNNs, pioneering methods such as DIP [173] train a network on a single image to fit itself with early stopping. SURE-based methods [179, 180] impose regularizations on DNNs to avoid overfitting. An interesting alternative strategy is to use noisy-noisy pairs for training [154]. For example, Noise2Self [181] and Noise2Void [156] introduce blind-spot learning by masking pixels of the noisy input. Self2Self [157] adopts a dropout-based scheme with noisy pairs generated by the Bernoulli sampler. Noise-As-Clean [182] and R2R [160] obtain noisy pairs by corrupting the input with certain noise distributions and noise levels. Neighbor2Neighbor [161] creates noisy pairs via subsampling and a pixel-wise independent noise assumption.
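As a rough sketch of the sub-sampling idea (not the official Neighbor2Neighbor implementation, which additionally employs a consistency regularizer), two noisy sub-images with independent noise can be built as follows under the pixel-wise independence assumption:

```python
import numpy as np

def subsample_pair(noisy):
    """From every 2x2 cell of the noisy image, pick two different pixels at
    random to form two half-resolution noisy views g1 and g2."""
    H2, W2 = noisy.shape[0] // 2, noisy.shape[1] // 2
    g1 = np.empty((H2, W2) + noisy.shape[2:], dtype=noisy.dtype)
    g2 = np.empty_like(g1)
    cell = [(0, 0), (0, 1), (1, 0), (1, 1)]
    for i in range(H2):
        for j in range(W2):
            a, b = np.random.choice(4, size=2, replace=False)
            (di1, dj1), (di2, dj2) = cell[a], cell[b]
            g1[i, j] = noisy[2 * i + di1, 2 * j + dj1]
            g2[i, j] = noisy[2 * i + di2, 2 * j + dj2]
    return g1, g2  # train with loss(f(g1), g2): targets are noisy but unbiased
```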
A recent and appealing trend for self-supervised denoising is generative frameworks such as GANs [188, 189, 178] and diffusion models [184, 185, 166]. Briefly, thanks to the powerful image synthesis capability of GANs, they can learn a variety of complex noise distributions and thus generate more realistic noisy images. Diffusion models go beyond image synthesis [166]: they approximate the score function [1, 186] and then denoise with an iterative sampling algorithm.
#### 3.2.4 Discussion
Deep learning has beyond doubt pushed forward the frontiers of image denoising. Despite their effectiveness, DNN methods are not a cure-all: they enjoy three major advantages, each paired with a corresponding challenge. First, by utilizing external information to guide the training process, DNN methods are not confined to the theoretical and practical bounds of traditional denoisers [187]. However, high-quality training datasets and certain prior information such as ISO, shutter speed and camera brand are not always available in practice. Second, with the aid of advanced GPU devices for acceleration, real-time denoising [43] is achievable for certain tasks, yet such expensive computational resources may not be accessible to ordinary users and researchers. Last but not least, deep, powerful and sophisticated architectures are capable of capturing latent features underlying noisy images. But compared with benchmark traditional denoisers, which only store several small predefined transform matrices, complex networks with millions of parameters may drastically increase the storage cost.
### _Datasets_
In this subsection, we provide a short overview of popular datasets for various denoising applications and briefly introduce the proposed dataset. More information is available in the supplemental material and [188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199].
#### 3.3.1 A Brief Overview
The statistics of popular datasets for different denoising applications are summarized in Table III. Some examples are illustrated in Fig. 4. Typically, datasets used for synthetic experiments consist of noise-free (ground-truth) images acquired under ideal conditions with sufficient light and careful camera settings, while the corresponding noisy images are generated by manually adding noise of different levels and distributions to the noise-free ones. In many real-world applications, images are inevitably contaminated by noise to various degrees, often determined by the environment and imaging devices. In such cases, the image averaging strategy is often adopted to generate the ground-truth data by averaging a series of images captured of the same, static scene. It is noteworthy that, compared to grayscale/color images and videos, collecting MSI/HSI and MRI data is more difficult and expensive.
#### 3.3.2 The Proposed Dataset
In this subsection, we briefly introduce the motivation and details regarding the setup and protocol of our indoor-outdoor color image (IOCI) and video (IOCV) datasets; some examples are illustrated in Fig. 5.
**IOCI**. To capture images of various scenes, 13 different camera devices are used to collect data in both indoor and outdoor environments. To reduce human interference and simulate daily usage, we mostly adopt the cameras' _auto mode_ instead of predefined settings such as ISO, shutter speed and aperture [190, 191, 40]. In uncontrollable and dark environments, we use short exposures and increase ISO values to reduce misalignment and produce over 100 images with high noise levels. Sampled data with obvious misalignment and illumination differences are discarded.
Fig. 4: Illustration of datasets for different applications. From the first row to the fourth row: grayscale/color image, video, MSI/HSI and MRI data.
TABLE III: Popular datasets for synthetic and real-world experiments, listing for each application the dataset name, experiment type, ground-truth (GT) availability, data size and number of samples. 'GT': ground-truth, '✓': available, '-': not available, 'F': number of frames.
**IOCV**. To obtain reference videos for benchmarking, we adopt a video-by-video strategy. Instead of manually controlling static objects [194], we propose to move the cameras automatically. The procedure of generating mean videos as ground-truth is illustrated in Fig. 6. We fix the cameras onto a rotatable tripod ball-head placed on top of a motorized slider. The slider and the tripod ball-head can be set to repeatedly move and rotate at different speeds, which simulates the movement of observed objects in more than one direction. Both the slider and the cameras are controlled remotely to reduce human interference.
## 4 Experiments
In this section, we report the results of different methods/models on the denoising tasks for images, videos, MSI/HSI and MRI data. All implementations and source codes are provided by the authors or downloaded from the authors' personal websites. For methods that require GPU devices, we resort to Google Colab Pro's computing resources. All other experiments are performed on a computer equipped with a Core(TM) i5-7500 @ 3.4 GHz and 16GB RAM.
### _Results for Real-World Image Datasets_
#### 4.1.1 Experimental Settings
We compare the performance of over 40 methods or models on real-world image datasets, including DnD, SIDD, Nam (CC15 [71, 189] and CC60 [79, 189]), PolyU, High-ISO and our IOCI, where the SIDD dataset provides training and validation samples. The clean counterparts of the DnD and SIDD test images are not publicly available, and the test results are obtained by uploading denoised images online. In this paper, we focus on the sRGB color space since the raw data [208] are not always accessible.
Typically, the decisive parameters for traditional denoisers are the input noise level, patch size and the number of local similar patches, etc., whereas the number of layers, learning rate and weight size are essential for DNN methods. However, many existing methods are designed for synthetic experiments, while ideally, in real-world cases, the parameters of all compared methods should be fine-tuned to obtain the best possible performance. This can be computationally expensive, and clean reference images for training and validation are not always available. Therefore, a more practical way is to selectively adopt effective pretrained models or predefined parameter settings. Since many DNN methods offer two to six pretrained models with different parameter settings for testing, for fair comparison, we select the input noise level for each Gaussian denoiser from four different values, which may be regarded as the equivalence of four pretrained models corresponding to _low_, _medium-low_, _medium-high_ and _high_ denoising modes. We report the best average values of all compared methods on each dataset. Peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) [209] are employed for objective evaluations. Normally, the higher the PSNR and SSIM values, the better the quality of the denoised images.
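For reference, PSNR can be computed with the standard definition below; SSIM is more involved and is typically taken from an off-the-shelf routine such as scikit-image's `structural_similarity`. This helper is a textbook formula, not code from any compared method.

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means a better estimate."""
    err = reference.astype(np.float64) - estimate.astype(np.float64)
    mse = np.mean(err ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```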
#### 4.1.2 Objective Results
Detailed denoising results are presented in Table IV. Briefly, it is observed that many DNN methods show outstanding performance on both DnD and SIDD datasets, but they do not always demonstrate evident advantages over traditional denoisers on other datasets when training or validation data are not available.
More specifically, for traditional denoisers, three effective transform-domain approaches, namely CBM3D, CMSt-SVD and NLHCC, show comparable denoising performance on almost all datasets. The effectiveness of CBM3D and CMSt-SVD indicates that a one-step implementation with a small number of local patches and the hard-thresholding technique is able to produce very competitive results. In addition, it is noticed that matrix-based methods such as MCWNNM and the proposed M-SVD present denoising capability similar to several tensor-based methods such as LLRT and 4DHOSVD, which indicates that the use of tensor representation may not significantly boost structural information retrieval in realistic cases.
For DNN methods, we can see that with the aid of the training or validation process, models targeting real-world denoising such as AIMDNet, DIDN, NAFNet and Restormer produce much better performance on the SIDD and DnD datasets. For example, compared to CBM3D, Restormer provides PSNR improvements of 1.99 dB on SIDD and 4.79 dB on DnD, respectively. However, once the testing data is no longer compatible with the training conditions, they can exhibit poor generalization and lead to overfitting or degraded performance. As a result, none of the pretrained models significantly advances benchmark traditional denoisers on the other real-world datasets. Nevertheless, we also notice that several network frameworks such as DBE, DIDN, DRUNet, FCCF and PNGAN show competitive and robust performance in the absence of training data. Furthermore, it is noteworthy that DBF, DRUNet and FCCF utilize Gaussian noise modeling and denoisers, which supports the effectiveness of incorporating Gaussian noise modeling.
Fig. 5: Illustration of the proposed IOCI and IOCV datasets.
Fig. 6: The procedure of generating mean video sequences with a motorized slider. The camera is fixed to a rotatable tripod ball-head placed on top of the slider.
TABLE IV: Average PSNR and SSIM values on real-world color image datasets. The average time is calculated based on the PolyU dataset. '-' means the results are not available. 'C': CANON, 'F': FUJIFILM, 'H': HUAWEI, 'I': IPHONE, 'N': NIKON, 'O': OPPO, 'S': SONY, 'X': XIAOMI. Results marked with * are from the original papers or the official websites.
From the perspective of denoising speed, DNN methods are normally much faster than traditional denoisers in the test phase, thanks to the power of advanced GPU devices. Specifically, efficient DNN models are able to handle images of size \(512\times 512\times 3\) within 0.2 seconds. We also notice that representative self-supervised DNN methods such as Self2Self and R2R bear a high computational burden, since for each noisy input, the training process involves thousands of iterations. For traditional denoisers, the time complexity lies mainly in the iterative local patch search and learning. For example, M-SVD spends 26 and 42 seconds on grouping and performing local SVD transforms, respectively. It is nonetheless slightly faster than its tensor counterpart 4DHOSVD, because it avoids folding and unfolding operations of high-dimensional data along different modes. Among all the traditional denoisers, the state-of-the-art CBM3D is the most efficient because its grouping step is performed only on the luminance channel, and it does not need to train local transforms or solve optimization problems.
#### 4.1.3 Visual Evaluation
Denoised results of compared methods on the DnD and SIDD datasets are illustrated in Fig. 7. The drawback of traditional patch-based denoisers is obvious when dealing with severely corrupted images, since the patch search and local transform learning steps are adversely affected by the presence of noise. Consequently, distortion, artifacts, and loss of true color and details are easy to spot. By comparison, fine-tuned supervised DNN models show clear advantages in terms of both noise removal and detail recovery, which demonstrates their powerful feature learning and extraction capability. Interestingly, the difference among the sophisticated and well-trained networks is barely noticeable in many cases on the two benchmark datasets. Therefore, we present visual evaluations of the PolyU and IOCI datasets in Fig. 8 and Fig. 9, respectively. When facing different and unseen noise patterns, the pretrained DNN models may also leave unwanted artifacts and produce over-smooth effects to varying degrees, while traditional denoisers show their strengths by exploiting nonlocal information of the noisy image, which grants them certain robustness and adaptability. Nevertheless, we notice that several DNN models such as DRUNet, FCCF and PNGAN show impressive generalizability and achieve a good balance between noise removal and detail preservation. This observation is consistent with the results reported in Table IV.
### _Results for Video Datasets_
#### 4.2.1 Experimental Settings
Compared to images, videos are more engaging and informative, recording and displaying dynamic objects. In our evaluations, four benchmark datasets are included for video denoising. Briefly, DAVIS-2017-test-dev-480p and Set8 are used for synthetic experiments with Gaussian noise, and the entire CRVD and IOCV datasets for real-world experiments. Our evaluations mainly focus on video denoising in the sRGB space, and the parameters and models of compared methods are carefully chosen in the same way as in Section 4.1.1. Similar to [43], the PSNR and SSIM of a sequence are computed as the average values over its frames.
#### 4.2.2 Objective Results
Table V lists the average PSNR and SSIM results of compared methods. For traditional denoisers, it is noticed that CVMSt-SVD and VBM4D are able to achieve state-of-the-art PSNR results on the CRVD and IOCV datasets, respectively. This demonstrates the effectiveness of the patch-based paradigm in capturing NLSS features across different frames. For DNN methods, representative models such as FastDVDNet, FloRNN, RVRT and VNLNet show dominating performance in synthetic cases, especially when dealing with severe Gaussian noise, which manifests their ability to extract deep features and exploit spatio-temporal information. When it comes to real-world experiments, although DNN methods are no longer significantly superior in the absence of corresponding training data, certain Gaussian-based models exhibit impressive generalization ability. For experiments with the CRVD dataset, by fusing information from multiple frames and benefiting from recurrent designs, RVRT and FloRNN provide PSNR and SSIM improvements of 0.28 dB and 0.0142, respectively. Besides, by integrating the NLSS prior into CNN models, VNLNet produces state-of-the-art results on the IOCV dataset.
In terms of denoising efficiency, FastDVDNet achieves almost real-time noise removal by getting rid of the time-consuming patch search and explicit flow estimation steps. More specifically, it takes FastDVDNet less than 0.1 second to process a single video frame of size \(960\times 540\times 3\), which is 8 times faster than FloRNN, and at least 100 times faster than benchmark traditional denoisers such as VBM4D and CVMSt-SVD. The denoising speed of FastDVDNet is remarkable considering its competitive performance in different cases, making it an exceedingly appealing algorithm for handling high-definition videos.
#### 4.2.3 Visual Evaluation
Visual comparison is presented in Fig. 10. From the results of the synthetic experiments, we can observe that Gaussian-based DNN models such as FloRNN and VNLNet produce pleasant visual effects by suppressing noise and restoring true colors and details, while the traditional denoisers CVMSt-SVD and VBM4D struggle to remove Gaussian noise and therefore generate unwanted color artifacts. Interestingly, from the results on the CRVD data, it can be seen that the powerful DNN models may lead to more obvious over-smooth effects in real-world experiments. In this case, we notice that the background is static, while the toy dog on wheels is dynamic and moves fast in more than one direction, so some details and textures are present only in certain frames. Therefore, traditional denoisers are able to benefit from their NLSS framework to leverage spatial similarity and preserve more structural information.
#### 4.2.4 Human Ratings
Due to the limitations of hardware equipment and the environment, the videos acquired by the motorized slider also inevitably exhibit some noise, flickering, staircase effects, motion blur and misalignment that undermine the accuracy of objective evaluations. In addition, the quality of videos may not be assessed frame-by-frame. Therefore, we conduct additional qualitative evaluations by collecting human opinions [225]. Specifically, we randomly select 10 videos from our IOCV dataset and invite 10 volunteers to rate the mean, noisy and denoised sequences of four compared methods. The invited volunteers have very little background knowledge of denoising, and they are not aware of how the presented video sequences are processed. For each of the 10 videos, the volunteers are asked to choose at least 2 best sequences, which then earn 1 point for the corresponding methods. The detailed human rating results are reported in Table VI. First, we observe that the mean video obtains the highest score on 9 of 10 videos, which suggests that our video averaging strategy may provide an alternative to the approach of generating reference videos frame-by-frame [194]. Second, the human rating results show that DNN methods produce videos of higher quality than the benchmark traditional denoisers on our IOCV dataset, since both CVMSt-SVD and VBM4D leave behind medium-to-low-frequency noise, resulting in noticeable flickering. Last but not least, comparing the two traditional denoisers, VBM4D looks more visually pleasant than CVMSt-SVD because CVMSt-SVD presents more temporally decorrelated low-frequency noise in flat areas, which appears particularly bothersome to viewers.
Fig. 7: Visual evaluation of compared methods on the DnD (first row) and SIDD (second row) datasets.
Fig. 8: Visual evaluation of compared methods on the real-world PolyU dataset. The camera device is NIKON D800.
Fig. 9: Visual evaluation of compared methods on the real-world IOCI dataset. The camera device is IPHONE 13.
Fig. 10: Visual evaluation of compared methods on synthetic and real-world video data. First row: Set8 (\(\sigma=40\)), second row: CRVD (ISO = 6400).
### _Results for MSI/HSI Datasets_
#### 4.3.1 Experimental Settings
MSI/HSI play an important role in a variety of remote sensing applications [183]. In this subsection, we evaluate the performance of related denoising methods on synthetic and real noisy data. For synthetic experiments, due to the high computational cost, we mainly use the CAVE dataset for comparison. We assume that entries in all slices of the noisy data are corrupted by zero-mean i.i.d. Gaussian noise. In addition to the classical spatial-based quality indices PSNR and SSIM, we adopt two widely used spectral-based quality indicators for MSI/HSI data, namely the spectral angle mapper (SAM) [226] and the relative dimensionless global error in synthesis (ERGAS) [227]. Different from PSNR and SSIM, recovered data with lower SAM and ERGAS are considered of better quality.
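For clarity, the two spectral indicators can be computed as sketched below; conventions vary slightly across papers (e.g., the ERGAS scale ratio, which we take as 1 in a denoising setting), so this is an illustrative implementation rather than the exact evaluation code used here.

```python
import numpy as np

def sam_degrees(x, y, eps=1e-12):
    """Spectral angle mapper: mean angle (in degrees) between the spectral
    vectors of reference x and estimate y, both of shape (H, W, bands)."""
    xf = x.reshape(-1, x.shape[-1]).astype(np.float64)
    yf = y.reshape(-1, y.shape[-1]).astype(np.float64)
    cos = np.sum(xf * yf, axis=1) / (
        np.linalg.norm(xf, axis=1) * np.linalg.norm(yf, axis=1) + eps)
    return np.degrees(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))

def ergas(x, y, ratio=1.0, eps=1e-12):
    """Relative dimensionless global error in synthesis; lower is better."""
    terms = []
    for b in range(x.shape[-1]):
        rmse = np.sqrt(np.mean((x[..., b] - y[..., b]) ** 2))
        terms.append((rmse / (np.mean(x[..., b]) + eps)) ** 2)
    return 100.0 * ratio * np.sqrt(np.mean(terms))
```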
#### 4.3.2 Objective Results
Since the ground-truth data of the HHD dataset is not available, we report the objective results of compared methods on CAVE and Real-HSI datasets in Table VII.
| # Video | Mean | Noisy | CVMSt-SVD | VBM4D | FastDVDNet | VNLNet |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | **9** | 0 | 1 | 2 | **5** | 3 |
| 2 | **8** | 1 | 0 | 4 | **7** | 6 |
| 3 | **10** | 0 | 0 | 8 | 3 | 5 |
| 4 | **8** | 0 | 0 | 4 | 7 | **8** |
| 5 | **10** | 0 | 0 | 4 | 4 | **7** |
| 6 | **10** | 0 | 0 | 5 | 6 | **7** |
| 7 | **10** | 0 | 0 | 5 | **8** | 5 |
| 8 | **10** | 2 | 2 | **8** | 5 | 6 |
| 9 | **9** | 0 | 2 | 4 | 2 | **7** |
| 10 | 4 | 4 | 4 | 4 | **5** | **8** |
| Average | **8.80** | 0.70 | 0.90 | 4.80 | 5.20 | **6.20** |

TABLE VI: Human rating results of sequences generated by different methods based on our IOCV dataset. The top two results are bolded.
TABLE VII: Objective results of compared traditional denoisers and DNN methods on the CAVE (synthetic, at various noise levels \(\sigma\)) and Real-HSI datasets.
For traditional denoisers, tensor-based methods such as NGMeet, LLRT and LLDL show outstanding performance by exploiting both spatial and spectral correlation, consistently outperforming other compared approaches at all noise levels by more than 1 dB on the CAVE dataset. However, their iterative denoising strategy with a large number of local similar patches significantly increases the computational burden. By comparison, BM4D and MSt-SVD are more efficient, in that BM4D does not need to learn local patch transforms, and MSt-SVD is a one-step approach that utilizes a global patch representation. In addition, they also produce very competitive performance on the Real-HSI dataset.
For DNN methods, we can observe that their results on the CAVE dataset are not satisfactory, since the networks are trained with a predefined and limited range of noise levels, distributions and bandwidths on other datasets such as ICVL. Nevertheless, it is noticed that MAN and QRNN3D are able to attain state-of-the-art performance on the Real-HSI dataset, which indicates the success of CNN-based frameworks in capturing spatial and spectral correlations in real-world cases. Besides, they are potentially effective and efficient tools for handling large data in practice due to their much faster denoising speed.
#### 4.3.3 Visual Evaluation
Fig. 11, Fig. 12 and Fig. 13 illustrate denoised results of competitive methods on the CAVE, Real-HSI and HHD datasets, respectively. For the HHD dataset, due to the large size of noisy observation, we mainly examine the effectiveness of five efficient methods.
First, we notice that all compared methods can effectively remove noise to a certain extent, but their performance is far from satisfactory. Specifically, BM4D and MSt-SVD tend to produce unexpected artifacts at high noise levels, since the predefined transforms may not fully exploit the correlation among all the spectral bands. Besides, although NGMeet, LLRT and LLDL produce outstanding quantitative results, they fail to preserve high-frequency components such as edges and textures, as can be seen from Fig. 11 and Fig. 12. This observation indicates that increasing the number of iterations and local similar patches may not help preserve the fine details and structure of MSI/HSI data. In addition, the representative CNN-based method QRNN3D shows impressive noise removal ability at the cost of over-smooth and blurry effects on the edges and details. To conclude, all state-of-the-art methods struggle to adapt to different local image contents to balance smoothness and sharp details. For MSI/HSI data, each spectral band signal has unique spectroscopic characteristics [183]; applying the same set of parameters or transforms to multiple spectral bands may result in similar denoising patterns across the whole data. Therefore, apart from the high computational burden, another challenge of filtering MSI/HSI data lies in dealing with the spatial and spectral variation of noise levels and distributions.
### _Results for MRI Datasets_
#### 4.4.1 Experimental Settings
MRI has been shown to be helpful in disease detection, diagnosis, and treatment monitoring [236]. Different from the AWGN noise modeling, MRI data are often corrupted by Rician noise [53]. Specifically, let \(\mathcal{X}\) be the original noise-free signal; the noisy Rician MRI data \(\mathcal{Y}\) is defined by
\[\mathcal{Y}=\sqrt{(\mathcal{X}+\sigma\eta_{r})^{2}+(\sigma\eta_{i})^{2}} \tag{9}\]
where \(\eta_{r},\eta_{i}\sim N(0,1)\) are i.i.d. random vectors following the standard normal distribution, and \(\sigma\) is the standard deviation in both the real and imaginary channels of the noisy 3D MR images. To handle Rician noise, the technique of forward and inverse variance-stabilizing transform (VST) [237] is often adopted by Gaussian denoisers via
\[\hat{\mathcal{X}}=\text{VST}^{-1}(denoise(\text{VST}(\mathcal{Y},\sigma), \sigma_{\text{VST}}),\sigma) \tag{10}\]
where \(\text{VST}^{-1}\) denotes the inverse of the VST, \(\sigma_{\text{VST}}\) is the stabilized standard deviation after the VST, and \(\sigma\) is the standard deviation of the noise in Eq. (9). According to Eq. (10), the noisy data \(\mathcal{Y}\) is first stabilized by the VST and then filtered by a Gaussian denoiser using the constant noise level \(\sigma_{\text{VST}}\); the final estimate is obtained by applying the inverse VST to the denoised output [17].
In our experiments, we use PSNR and SSIM as the objective metrics, and similar to [17, 69], the PSNR value is computed on the foreground defined by \(\mathcal{X}_{f}=\{x\in\mathcal{X}:x>10\cdot D/255\}\), where \(D\) is the peak of clean data \(\mathcal{X}\).
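A small sketch of the Rician corruption in Eq. (9) and of the foreground-restricted PSNR is given below; the percentage-based noise level follows the convention of the experiments, while the helper names are ours.

```python
import numpy as np

def add_rician_noise(x, sigma_pct, rng=None):
    """Corrupt clean data x with stationary Rician noise per Eq. (9); sigma
    is given as a percentage of the peak intensity."""
    rng = rng or np.random.default_rng()
    sigma = sigma_pct / 100.0 * x.max()
    eta_r = rng.standard_normal(x.shape)
    eta_i = rng.standard_normal(x.shape)
    return np.sqrt((x + sigma * eta_r) ** 2 + (sigma * eta_i) ** 2)

def foreground_psnr(x, x_hat):
    """PSNR restricted to the foreground X_f = {x in X : x > 10 * D / 255}."""
    D = x.max()
    mask = x > 10.0 * D / 255.0
    mse = np.mean((x[mask] - x_hat[mask]) ** 2)
    return 10.0 * np.log10(D ** 2 / mse)
```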
#### 4.4.2 Synthetic Experiments
The volume data of Brainweb and fastMRI are corrupted with varying levels of stationary Rician noise, from \(1\%\) to \(19\%\) of the maximum intensity in increments of \(2\%\). In real-world applications, the noise level is usually lower than \(19\%\), but we are also interested in the denoising capability of compared methods under extreme conditions. Table VIII lists detailed quantitative results, and Fig. 14 compares the denoising performance at high noise levels (\(\sigma\geq 11\%\)). We notice that at lower noise levels, PRI-NLPCA is able to take advantage of the high-quality initial estimate of NLPCA, and thus shows outstanding performance when \(\sigma\leq 9\%\). As the noise level increases, the tensor-based method ILR-HOSVD demonstrates its advantage in extracting latent features, producing state-of-the-art results on the fastMRI dataset when \(\sigma\geq 9\%\). However, from Fig. 14(a) we observe that the iterative learning strategy of ILR-HOSVD is also affected by the presence of severe noise and may not generalize well to other datasets, while BM4D benefits from its predefined transforms and outperforms other methods on the Brainweb dataset when \(\sigma\geq 17\%\).
Visual evaluation of compared methods on T1w data (\(\sigma=19\%\)) is illustrated in Fig. 15. BM4D is successful in both noise suppression and detail preservation. Compared with ILR-HOSVD, another tensor-based filter, MSt-SVD, removes noise to a greater extent but blurs the edges and textures. The state-of-the-art NLM methods, namely PRINLM and PRI-NLPCA, exhibit pleasant visual effects in homogeneous areas at the cost of slight over-smoothness along the edges. Furthermore, PRI-NLPCA does not sufficiently remove noise in the background, since the local PCA transforms are severely degraded.
#### 4.4.3 Real-world MRI Dataset
To evaluate the performance of compared methods on real-world 3D MRI data, we carry out experiments on T1w MR images of the OASIS dataset [199]. The Rician noise levels of two selected T1w volumes, namely OAS1_0112 and OAS1_0092, are estimated to be \(3\%\) and \(4.5\%\) of the maximum intensity, respectively [84]. The filtered images of different methods are compared in Fig. 16 and Fig. 17, respectively.
According to the synthetic experimental results in Table VIII, we notice that in most cases all methods achieve promising denoising performance at relatively low noise levels, as can also be seen from the visual effects in Fig. 16. Therefore, from the perspective of real-world denoising, BM4D and PRINLM are more competitive given their low computational cost. Despite the similar denoising results of compared methods, we observe that for certain slices of OAS1_0092, NLPCA restores more details and textures, while other methods suffer from over-smoothness to varying degrees, as illustrated in Fig. 17. This interesting observation is another vivid example showing that the use of tensor representation and transforms does not always help preserve more structural information of the underlying clean images.
Fig. 12: Visual evaluation of the real-world Real-HSI dataset.
Fig. 13: Visual evaluation of the real-world HHD dataset.
Fig. 14: Average PSNR values of compared methods on the Brainweb and fastMRI data at high noise levels (\(\sigma\geq 11\%\)).
### _Discussion_
With numerous denoising techniques at hand, it is natural to ask how well we may expect a denoiser to perform. The theoretical bound of the compared denoising methods is hard to obtain, but it is interesting to investigate their denoising capability in the challenging practical case when prior information, such as training data and camera settings, is unavailable. We use data captured by the FUJIFILM X100T camera as an example, and for each scene, we generate three new mean images by averaging 3, 5 and 10 noisy images, named Mean_3, Mean_5 and Mean_10, respectively. They can be regarded as images captured via different continuous shooting modes with _high_, _medium_ and _low_ noise levels. Fig. 18 illustrates the average PSNR differences of six implementations compared with Mean_3 on the FUJI data. Interestingly, Fig. 18 shows that state-of-the-art denoising methods only produce marginal improvements over Mean_3, which indicates that their denoising performance is similar to obtaining the mean image by averaging 3 consecutive noisy observations.
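Generating such multi-shot baselines is straightforward; assuming the shots are registered, a sketch is:

```python
import numpy as np

def mean_image(shots):
    """Average N registered noisy shots of a static scene; for i.i.d.
    zero-mean noise this divides the noise variance by N."""
    return np.mean(np.stack([s.astype(np.float64) for s in shots]), axis=0)

# e.g., mean_3 = mean_image(noisy_shots[:3])
```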
## 5 Conclusion
Over the past several decades, the rapid development of imaging systems and techniques has fostered the emergence of numerous denoising approaches, varying from traditional Gaussian denoisers to advanced DNN methods. Driven by the curiosity about the applicability and generalization ability of different methods, we conduct both synthetic and real-world experiments. A new dataset is also introduced to enrich benchmarking. Our experiments demonstrate: (i) the effectiveness of traditional denoisers for various denoising tasks, (ii) a simple matrix-based algorithm may be able to produce similar results compared with its tensor counterparts, and (iii) the substantial achievements of DNN models, which exhibit impressive generalization ability and show state-of-the-art performance on different datasets.
Nowadays, image denoising also serves as a perfect test-bed for assessing new ideas and techniques [1]. Many recent studies focus on the extension of image denoising methods to other computer vision tasks such as dehazing [238] and demosaicking [239]. It is therefore interesting to further explore the potential of related works to satisfy needs beyond denoising.
|
2307.10457 | Improving the Reusability of Pre-trained Language Models in Real-world
Applications | The reusability of state-of-the-art Pre-trained Language Models (PLMs) is
often limited by their generalization problem, where their performance
drastically decreases when evaluated on examples that differ from the training
dataset, known as Out-of-Distribution (OOD)/unseen examples. This limitation
arises from PLMs' reliance on spurious correlations, which work well for
frequent example types but not for general examples. To address this issue, we
propose a training approach called Mask-tuning, which integrates Masked
Language Modeling (MLM) training objectives into the fine-tuning process to
enhance PLMs' generalization. Comprehensive experiments demonstrate that
Mask-tuning surpasses current state-of-the-art techniques and enhances PLMs'
generalization on OOD datasets while improving their performance on
in-distribution datasets. The findings suggest that Mask-tuning improves the
reusability of PLMs on unseen data, making them more practical and effective
for real-world applications. | Somayeh Ghanbarzadeh, Hamid Palangi, Yan Huang, Radames Cruz Moreno, Hamed Khanpour | 2023-07-19T21:00:16Z | http://arxiv.org/abs/2307.10457v3 | # Improving the Reusability of Pre-trained Language Models in Real-world Applications
###### Abstract
The reusability of state-of-the-art Pre-trained Language Models (PLMs) is often limited by their generalization problem, where their performance drastically decreases when evaluated on examples that differ from the training dataset, known as Out-of-Distribution (OOD)/unseen examples. This limitation arises from PLMs' reliance on spurious correlations, which work well for frequent example types but not for general examples. To address this issue, we propose a training approach called Mask-tuning, which integrates Masked Language Modeling (MLM) training objectives into the fine-tuning process to enhance PLMs' generalization. Comprehensive experiments demonstrate that Mask-tuning surpasses current state-of-the-art techniques and enhances PLMs' generalization on OOD datasets while improving their performance on in-distribution datasets. The findings suggest that Mask-tuning improves the reusability of PLMs on unseen data, making them more practical and effective for real-world applications.
NLP applications, Pre-trained language models' reusability, Transfer learning, Integrated training.
## I Introduction
Fine-tuning large-scale Pre-trained Language Models (PLMs) on a task-specific dataset has achieved state-of-the-art performance on a variety of Natural Language Processing (NLP) tasks [1, 2, 3]. However, recent studies [4, 5, 6] have shown that PLMs trained on large corpora learn spurious correlations: prediction patterns that work for frequent example types drawn from the same distribution as the training examples but break down in more challenging cases such as Out-of-Distribution (OOD)/unseen datasets. Because such patterns often yield correct outputs, a fine-tuning loss that relies heavily on them cannot incentivize the language model to learn linguistic patterns from the less frequent example types and generalize them to OOD examples. This inability to generalize to OOD datasets is a major obstacle to PLMs' practical use in real-world applications.
To improve the PLMs' reusability on unseen data, several solutions have been proposed such as: transfer learning through continuing pre-training (e.g., [7, 8, 9]) or performing extra fine-tuning (e.g., [10, 11]) on the domain of interest. However, besides pre-training being computationally expensive, the effectiveness of transfer learning is highly limited to the size of the data, the similarity of the source/target domains, and task complexity [12]. Other studies [13, 14, 15, 16, 17, 18] proposed different learning strategies focusing on challenging keywords, called data biases, that lead to forming spurious correlations. Although these solutions achieved promising results, their main weakness is the strong assumption of knowing the datasets' biases in advance. Also, since some of these studies [13, 17] used data augmentation for their proposed learning method, determining the number of examples required for the best performance is challenging [19]. Some of these solutions also (e.g., [14, 16]) decrease the PLMs' performance on in-distribution datasets.
In this study, we propose a novel approach called Mask-tuning to improve the reusability of PLMs on unseen data by enhancing the fine-tuning training process. We draw inspiration from recent data augmentation techniques (e.g., [20, 21, 22]) that utilize the Masked Language Model (MLM) to increase the diversity of examples in the training dataset. Unlike those traditional data augmentation approaches, Mask-tuning integrates the MLM training objective into fine-tuning's training process. In each training batch, this integrated method first perturbs the original training examples using MLM, interrupting the frequent example types (patterns) in the training dataset, and then classifies the perturbed examples through fine-tuning according to the original examples' ground-truth labels. An integrated loss from perturbation and classification trains Mask-tuning (Figure 1). Our analysis shows that Mask-tuning creates three times more diversified examples than MLM alone (Section V), demonstrating its effectiveness in enhancing PLMs' generalization. In summary, our contributions are as follows:
**I.** We study the impact of integrating the MLM training objectives into the fine-tuning for mitigating PLMs' generalization problem. Our proposed method, Mask-tuning, solely uses the downstream task's training dataset and is a plug-and-play tool for any PLM that works with original fine-tuning.
**II.** We conduct comprehensive experiments under a consistent evaluation process to verify the effectiveness of Mask-tuning using BERT, RoBERTa, and autoregressive BART language models. We show that Mask-tuning outperforms six state-of-the-art baselines on three downstream tasks' in-distribution and corresponding five OOD datasets.
**III.** We also conduct three ablation studies to compare Mask-tuning's performance with data augmentation, Mask-tuning without the integrated loss, and sequential training. The results demonstrate the effectiveness of each component of Mask-tuning for improving the PLMs' generalization.
## II Related Works
**Transfer Learning;** Transfer learning for improving the PLMs' generalization has been implemented through two different approaches: domain adaptation and task adaptation. Domain adaptation continues pre-training on examples from the same domain as the downstream task dataset. Some studies proposed to pre-train language models on their domain of interest from scratch (e.g., [23, 24]), while other studies suggested running a second round of pre-training on the domain of interest (e.g., [25, 26]). Task adaptation, in turn, focuses on sharing knowledge across different tasks [27, 28]: the pre-trained model first fine-tunes on a dataset most related to the target task and then fine-tunes again on the target task. Both approaches have shown the benefits of transfer learning for mitigating PLMs' generalization problem, especially when target training data is limited. However, re-running the pre-training phase on a large-scale dataset is computationally expensive. Furthermore, the effectiveness of transfer learning is highly limited by the size of the data, the similarity between the source and target tasks and domains, and the task complexity [28]. In contrast, like original fine-tuning, Mask-tuning solely uses the target downstream task dataset.
**Data Augmentation;** Data augmentation is the most straightforward approach to improving the PLMs' robustness. In this method, the downstream task dataset is enriched with examples from the target distribution using various techniques, such as increasing the size of the training dataset [22, 29], balancing the existing cues [30], adversarial training [5, 20, 21], augmenting the standard training dataset with syntactic information [31, 32], creating a partial view to augment the training data [17], or dropping spans of hidden information [13]. These approaches generate new examples, extend the training dataset, and then fine-tune on the enlarged dataset. However, Yoo and Qi [19] showed that determining the amount of data required for the best performance is challenging. The main advantage of Mask-tuning over traditional data augmentation is that Mask-tuning benefits from both the MLM and fine-tuning objectives. This unique approach allows Mask-tuning to generate perturbed examples that are more diverse and effective for improving the model's generalization. Furthermore, Mask-tuning's perturbed examples are double-validated through the fine-tuning classification, ensuring that only high-quality examples are incorporated into the training process.
**Learning Process Improvement;** Another line of work for enhancing the pre-trained models' generalization introduces new learning techniques that are robust against spurious patterns in the training dataset [14, 33, 34, 35, 16, 36, 37, 18, 30]. These techniques focus on challenging examples or keywords that prevent the pre-trained models from taking shortcuts during training. Thus, they need to design processes to recognize and handle these patterns and keywords in the training examples and re-train the model on extra text corpora. These changes often make the model more complicated and computationally expensive. The main weakness of these methods, however, is their strong assumption of knowing these challenging examples or keywords in advance [12].
## III Proposed Method
We propose Mask-tuning to improve the PLMs' generalization by enhancing the training process of fine-tuning on downstream tasks' datasets. Mask-tuning perturbs the sequence of words in a training example to interrupt the frequent example types in the training dataset and simultaneously validates the perturbed example against the original example's ground-truth label. To this end, Mask-tuning integrates two training objectives: 1) the self-supervised Masked Language Modeling (MLM) training objective, and 2) the fine-tuning classification function. In each training batch, Mask-tuning works as follows:
Mask-tuning uses self-supervised MLM to randomly mask a certain percentage of the input tokens in each training example. The MLM training objective is to predict those masked tokens based only on their context, with a mean cross-entropy loss. We denote this first training phase's loss as the MLM loss (\(\mathcal{L}_{MLM}\)). The training examples with the predicted token(s), called _perturbed examples_, are fed into fine-tuning to be classified based on the ground-truth label (\(y\)). Thus, \(p_{\theta}(y^{\prime}=y|\hat{x})\) is the fine-tuning classification function that predicts the perturbed example's label (\(y^{\prime}\)) from the perturbed example (\(\hat{x}\)) and yields the fine-tuning training loss (\(\mathcal{L}_{Fine-tuning}\)), where \(\theta\) denotes the PLM's parameters for fine-tuning. A weighted aggregation of these two training processes computes the Mask-tuning loss (\(\mathcal{L}_{Mask-tuning}\)) as follows:
\[\mathcal{L}_{Mask-tuning}=\alpha\ \mathcal{L}_{MLM}+\ (1-\alpha)\mathcal{L}_{Fine-tuning} \tag{1}\]
where \(\alpha\) is a weighting factor employed to adjust the contribution of each training phase's loss to the Mask-tuning loss. The Mask-tuning training objective aims to minimize both the self-supervised MLM loss and the supervised fine-tuning classification loss. In the following, we present how Mask-tuning benefits from the aggregated loss in each training batch:
Fig. 1: Illustration of the Mask-tuning's training process. The input of the fine-tuning is only a perturbed version of the in-distribution training example.
**I. The MLM's token-prediction output is correct, so MLM training loss is close to zero (\(L_{MLM}\approx 0\));** In this case, there would be one of the following scenarios:
1) The fine-tuning classification's output is also correct, so fine-tuning loss is close to zero (\(L_{Fine-tuning}\approx 0\)). In this case, the Mask-tuning loss is also close to zero (\(L_{Mask-tuning}\approx 0\)). For instance, we observed the following example in the third training epoch:
\(x\): "take care is nicely performed by a quintet of actresses", label (\(y\)): \(1\Rightarrow\hat{x}\): "take care is nicely performed by a **[group]** of actresses", (label (\(\hat{y}\)): 1 AND prediction: correct).
It shows that the fine-tuning classification validates that the perturbed example has the same label as the original example; thus, there is no need for further training. This scenario is the primary goal of the Mask-tuning training objective and is what differentiates Mask-tuning from sequential training and data augmentation, which are investigated in our ablation studies (Section VI).
2) The output of fine-tuning classification is incorrect, so fine-tuning loss is significant (\(L_{Fine-tuning}>0\)). For instance, we observed the following example from the sentiment analysis task:
\(x\): "mumbles his way through the movie.", label (\(y\)): \(0\Rightarrow\hat{x}\): "[**cleer]** his way through the movie.", (label (\(\hat{y}\)): 1, AND prediction: incorrect).
It means the fine-tuning classifier did not validate that the perturbed example has the same label as the original example. In this case, the aggregated loss (Mask-tuning loss) is large enough to continue training and minimize the fine-tuning loss.
**II. The MLM's token-prediction output is incorrect, so the MLM training loss is significant (\(L_{MLM}>0\));** If the predicted token(s) are unrelated to the original token(s), the prediction output is incorrect and the MLM loss is significant. For instance, we observed the following example in the third training epoch:
\(x\): "he slaps together his own brand of liberalism", label (\(y\)): \(0\Rightarrow\hat{x}\): "he **[accept]** together his own brand of liberalism", (label (\(\hat{y}\)): 1 AND prediction output: incorrect).
In this case, the Mask-tuning loss is large enough, regardless of the fine-tuning classification output, to continue training and minimize both the MLM and fine-tuning losses. As we can see, Mask-tuning is trained on perturbed examples validated by the fine-tuning classifier according to the original examples' ground-truth labels. For this objective, the aggregated loss incentivizes both the MLM and the fine-tuning to train further until both training objectives produce correct outputs and their losses (i.e., the MLM training loss and the fine-tuning loss) are minimized.
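To make one training step concrete, the PyTorch sketch below follows Eq. (1) under assumptions of our own: `mlm` and `classifier` denote MLM-head and classification-head views of the same pre-trained encoder (Hugging Face-style outputs with `.loss`/`.logits`), special tokens are ignored for simplicity, and replacing masked slots with the argmax prediction is our simplification of how perturbed examples are built. It is not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def mask_tuning_step(mlm, classifier, input_ids, attention_mask, labels,
                     mask_token_id, alpha=0.6, mask_prob=0.05):
    """One Mask-tuning batch: mask ~5% of tokens, compute the MLM loss,
    form the perturbed example from the MLM's predictions, classify it
    against the original ground-truth labels, and combine both losses."""
    # Randomly choose positions to mask (only where attention is valid).
    mask = (torch.rand_like(input_ids, dtype=torch.float) < mask_prob) \
           & attention_mask.bool()
    masked_ids = input_ids.masked_fill(mask, mask_token_id)

    # Phase 1: MLM loss on the masked positions only (-100 is ignored).
    mlm_targets = input_ids.masked_fill(~mask, -100)
    mlm_out = mlm(input_ids=masked_ids, attention_mask=attention_mask,
                  labels=mlm_targets)
    l_mlm = mlm_out.loss

    # Build the perturbed example x_hat from the predicted tokens.
    predicted = mlm_out.logits.argmax(dim=-1)
    perturbed = torch.where(mask, predicted, input_ids)

    # Phase 2: fine-tuning classification of x_hat against the original y.
    logits = classifier(input_ids=perturbed,
                        attention_mask=attention_mask).logits
    l_ft = F.cross_entropy(logits, labels)

    return alpha * l_mlm + (1.0 - alpha) * l_ft  # Eq. (1)
```

If `mlm` and `classifier` share the encoder, both loss terms update it in a single backward pass, which is the coupling the aggregated loss is meant to exploit.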
## IV Experimental setup
To evaluate the effectiveness of our proposed method, we first trained Mask-tuning on the in-distribution downstream tasks' training datasets. Then, we evaluated Mask-tuning on the in-distribution and corresponding OOD evaluation sets. OOD datasets are drawn from a different distribution with more linguistic variation than the in-distribution training set [5, 6]. PLMs' performance on OOD datasets indicates their reusability in real-world applications.
**Baselines.** We compared the performance of our proposed approach with state-of-the-art baselines that shed light on the existing training challenges in PLMs. These baselines proposed new training schemes to improve the PLMs' generalization. For a fair comparison, we followed their evaluation process (e.g., using the same tasks, OOD datasets, and PLMs).
**In-Distribution and OOD Tasks and Datasets.** In this study, we conducted comprehensive experiments on three tasks from the GLUE benchmark1 [37] that have widely used OOD datasets available: Stanford Sentiment Treebank (SST-2), Natural Language Inference (MNLI [38]), and Paraphrase Identification (QQP [39]). Their corresponding OOD datasets are IMDB-Cont [40] and IMDB-CAD [4], HANS [5] and AdvNLI [41], and PAWS [6], respectively.
Footnote 1: [https://gluebenchmark.com/tasks](https://gluebenchmark.com/tasks)
**Experimental Setup.** Following the baselines, we performed Mask-tuning using two PLMs: BERT [1] and RoBERTa [3]. We also used BART [2], an autoregressive language model with a different architecture from BERT. Each of these language models comes in a base and a large size; however, a recent study [43] showed that, compared with the base-size models, the large size does not necessarily mitigate the PLMs' generalization issue. Besides, the base-size models are more efficient and deployable in a broader range of computational environments and applications. Thus, we chose the base size of these models to demonstrate the efficacy of the Mask-tuning method.
We followed the original language models' parameters2 for the two training phases of Mask-tuning, but empirically adjusted the learning rate and batch size for the fine-tuning phase. After several trial runs, the batch size was set to \(32\) for sentence-pair datasets (i.e., MNLI and QQP) and \(16\) for single-sentence datasets (i.e., SST-2). The best learning rate was determined through a grid search over {2e-5, 3e-5, 4e-5, 5e-5}. We also selected the optimal value of \(\alpha\) empirically, by a grid search between 0 and 1 with 0.1 increments; the best value of \(\alpha\) is 0.6, 0.7, and 0.8 for SST-2, MNLI, and QQP, respectively. Finally, the masking percentage is set to 5%, which achieved the best performance on in-distribution and OOD datasets. All experiments were run for three epochs on an NVIDIA V100 GPU with five random seeds, and we report the average accuracy with the corresponding standard deviation on the in-distribution and OOD test sets. We used the huggingface code [42] for performing the original fine-tuning. A sketch of this sweep is given below.
Footnote 2: [https://github.com/huggingface/transformers](https://github.com/huggingface/transformers)
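The sketch below organizes the hyperparameter sweep just described; `train_and_eval` is a hypothetical stand-in for one full Mask-tuning run returning development-set accuracy, and is not part of any released codebase.

```python
import itertools

LEARNING_RATES = [2e-5, 3e-5, 4e-5, 5e-5]
ALPHAS = [round(0.1 * k, 1) for k in range(1, 10)]  # 0.1 ... 0.9

def sweep(train_and_eval, task):
    # 16 for single-sentence tasks (SST-2), 32 for sentence pairs (MNLI, QQP)
    batch_size = 16 if task == "SST-2" else 32
    # Keep the configuration with the highest dev accuracy
    return max(((train_and_eval(task, lr=lr, alpha=a, batch_size=batch_size,
                                mask_rate=0.05, epochs=3, n_seeds=5), lr, a)
                for lr, a in itertools.product(LEARNING_RATES, ALPHAS)),
               key=lambda t: t[0])
```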
## V Results and discussion
Table I displays the performance of Mask-tuning and other baselines, with some studies only reporting results for one model. We also compare the performance of Mask-tuning and original fine-tuning for the BART model, which has not previously been reported on OOD datasets. Finally, we present the results for GPT-3 on the in-distribution and OOD datasets, taken from [44].
**Performance on OOD Datasets.** The results show that Mask-tuning outperforms all baseline models. To the best of our knowledge, no study has reported BERT-base performance on IMDB-Cont and IMDB-CAD, so we compare Mask-tuning against original fine-tuning. Compared with BERT fine-tuning, Mask-tuning improves performance on IMDB-Cont by +3.67%, IMDB-CAD by +1.32%, HANS by +12.62%, AdvNLI by +2.2%, and PAWS by +13.94%. Moreover, compared with RoBERTa fine-tuning, Mask-tuning improves performance on IMDB-Cont by +4%, IMDB-CAD by +3.22%, HANS by +7.9%, AdvNLI by +6.2%, and PAWS by +5.92%. Finally, compared with BART fine-tuning, Mask-tuning enhances performance on IMDB-Cont by +0.52%, IMDB-CAD by +1.8%, HANS by +14.18%, AdvNLI by +4.8%, and PAWS by +13.44%.
**Performance on In-Distribution Dataset.** Maintaining high performance on in-distribution datasets while improving performance on OOD datasets is essential in evaluating an approach. Mask-tuning achieves state-of-the-art performance on OOD datasets while boosting performance on in-distribution datasets. These results highlight the effectiveness of Mask-tuning in balancing in-distribution and OOD performance.
**Comparing Mask-tuning and GPT-3 performance on OOD.** Table I illustrates that GPT-3 (175B parameters) with few-shot prompting [44] achieves better accuracy on OOD data than the BERT-based PLMs. However, this improvement comes at the cost of a significant drop in accuracy on the in-distribution downstream tasks' datasets. Our experiments show that Mask-tuning improves the generalization of PLMs by modifying the fine-tuning process: it achieves comparable or even better performance than GPT-3 on benchmarks like HANS using RoBERTa\({}_{base}\) (125M parameters), a larger pre-trained model than BERT\({}_{base}\) (110M parameters), without sacrificing performance on in-distribution data. These findings suggest promising avenues for future research in enhancing the fine-tuning process of PLMs to boost their generalization.
**Analysing the impact of Mask-tuning on training examples' diversification.** Recent data augmentation studies [20, 22] showed that MLM improves PLMs' generalization performance by generating plausible and diversified examples from the original training examples. In this section, we compare the capabilities of MLM and Mask-tuning in creating plausible and diversified examples over three training epochs using BERT. We consider a perturbed example plausible if it contains no syntactic or grammatical error after the masked token is predicted; for example, if a NOUN is predicted for a masked VERB, making the example grammatically incorrect, we consider the example implausible. We randomly selected 200 training examples from each of the three in-distribution training datasets (SST-2, MNLI, and QQP). For each dataset, we ran two experiments, generating perturbed examples with 1) MLM alone and 2) Mask-tuning. We then recorded the masked and predicted tokens for each example through three training epochs. We presented the original and perturbed examples to two annotators, who labeled them as plausible or implausible and reached 93% inter-annotator agreement.
TABLE II: Some perturbed examples generated by Mask-tuning (masked token \(\Rightarrow\) predicted token per epoch).

| # | Original sentence | Epoch 1 | Epoch 2 | Epoch 3 |
|---|---|---|---|---|
| 1 | "manipulative feminist empowerment tale thinly ..." | [thinly] \(\Rightarrow\) [thinly] | [thinly] \(\Rightarrow\) [typically] | [thinly] \(\Rightarrow\) [tenderly] |
| 2 | "Convey a strong sense of the girls' environment." | [strong] \(\Rightarrow\) [strong] | [sense] \(\Rightarrow\) [understanding] | [girls] \(\Rightarrow\) [UNK] |
| 3 | "The lead roles are more than competent." | [roles] \(\Rightarrow\) [roles] | [competent] \(\Rightarrow\) [qualified] | [roles] \(\Rightarrow\) [actors] |
| Model | Baseline | SST-2 | IMDB-Cont. | IMDB-CAD | MNLI (m/mm) | HANS | AdvNLI | QQP | PAWS |
|---|---|---|---|---|---|---|---|---|---|
| BERT\({}_{base}\) | Fine-tuning | 92.43 | 79.08 | 87.00 | 84.50/83.40 | 56.90 | 24.12 | 90.80 | 32.80 |
| | Learned-Mixin+ [14] | - | - | - | 83.97 | 66.15 | - | - | - |
| | PoE [16] | - | - | - | 84.19 | - | - | - | - |
| | Regularized-conf [18] | - | - | - | 84.04 | 69.10 | - | 91.50 | 39.80 |
| | IP-standard [32] | - | - | - | 84.04 | 56.70 | - | - | - |
| | Mask-tuning (ours) | **93.11**\(\pm\)0.1 | **82.75**\(\pm\)0.3 | **88.32**\(\pm\)0.1 | **84.75**\(\pm\)0.2 | **69.52**\(\pm\)0.2 | **26.32**\(\pm\)0.3 | **91.41**\(\pm\)0.1 | **46.74**\(\pm\)0.5 |
| RoBERTa\({}_{base}\) | Fine-tuning | 94.49 | 84.50 | 88.40 | 87.60 | 67.80 | 31.20 | 91.50 | 38.45 |
| | Span Cutoff [17] | 95.40 | 85.50 | 89.20 | 88.40 | 68.40 | 31.10 | 92.00 | 38.80 |
| | HiddenCut [13] | 95.80 | 87.80 | 90.40 | 88.20 | 71.20 | 32.80 | 92.00 | 41.50 |
| | IP-standard [32] | - | - | - | 87.70 | 66.30 | - | - | - |
| | Mask-tuning (ours) | **94.68**\(\pm\)0.1 | **88.50**\(\pm\)0.2 | **91.62**\(\pm\)0.1 | **87.72**\(\pm\)0.1 | **75.70**\(\pm\)0.2 | **37.40**\(\pm\)0.2 | **91.26**\(\pm\)0.2 | **44.37**\(\pm\)0.4 |
| BART\({}_{base}\) | Fine-tuning | 93.23 | 82.48 | 86.03 | 84.68 | 56.30 | 30.51 | 90.50 | 32.27 |
| | Mask-tuning (ours) | **93.80**\(\pm\)0.2 | **83.00**\(\pm\)0.1 | **87.83**\(\pm\)0.3 | **86.08**\(\pm\)0.2 | **70.48**\(\pm\)0.5 | **35.31**\(\pm\)0.4 | **91.03**\(\pm\)0.1 | **45.71**\(\pm\)0.5 |
| GPT-3 | Few-shot Prompting [44] | - | - | - | 77.00 | 75.30 | - | 83.5 | 73.7 |

TABLE I: Performance comparison of Mask-tuning (ours) and various baselines using different PLMs on in-distribution and OOD datasets. All models are trained on the in-distribution training set and then evaluated on the corresponding in-distribution and OOD test sets. The last row shows the results of a Large Language Model (LLM), GPT-3 few-shot prompting [44], discussed in Section V. Fine-tuning was implemented using the code from huggingface [42].
For the remaining 7% of cases, the annotators discussed their views on each example in the presence of the researchers, and full agreement was ultimately reached.
We observed that in the first experiment (using MLM alone), MLM predicted a token identical to the masked one in 76% of examples; predicted a related, plausible token in at least one of the three epochs in 18% of examples; predicted an implausible token in 5% of examples; and removed the masked token ([UNK]) in 1% of examples. The perturbation results with Mask-tuning are markedly different: in 60% of examples the predicted token is a related, plausible alternative to the masked one (versus 18% with MLM alone); in 37% of examples the predicted token is identical to the masked one; in 2% of examples the prediction is implausible; and in 1% of examples the masked token is removed ([UNK]). This analysis shows that Mask-tuning improves MLM's ability to generate plausible and diversified examples by more than a factor of three, which we consider one of the main reasons Mask-tuning improves the PLMs' generalization performance. Table II presents some perturbed examples generated by Mask-tuning.
## VI Ablation Study
Three ablation experiments were performed to show the effectiveness of each component of Mask-tuning. We compare Mask-tuning against three variants of the fine-tuning process: data augmentation, Mask-tuning without the integrated loss, and sequential training. The details of these experiments are as follows:
**Mask-tuning Vs. Data augmentation.** In this experiment, we compare Mask-tuning with conventional data augmentation. We first used MLM to generate perturbed examples from the original downstream task's training dataset, and then fine-tuned the PLMs on the augmented training dataset created by merging the original and perturbed training sets. As Table III shows, directly augmenting the training data only marginally improved the PLMs' performance on both in-distribution and OOD datasets, far behind the performance of Mask-tuning.
**Mask-tuning without integrated loss.** In this experiment, we investigate the effect of the integrated loss on the proposed method's performance. We train Mask-tuning on the in-distribution downstream datasets using only the fine-tuning loss, and then evaluate the model on the in-distribution and OOD evaluation sets. As Table III shows (third row of each model's results), eliminating the MLM loss from the Mask-tuning loss hurts performance on both in-distribution and OOD datasets. Hence, the integrated loss plays an essential role in Mask-tuning's performance and in improving the PLMs' generalization.
**Mask-tuning Vs. Sequential training.** This experiment compares the performance of Mask-tuning and sequential training on downstream tasks. We first use MLM to train the PLM on the original downstream task training dataset, and then tune the PLM by running the original fine-tuning on the perturbed examples. Finally, we evaluate the PLM on the in-distribution and OOD evaluation sets. The results in Table III (fourth row of each PLM's results) show that sequential training noticeably decreased the PLMs' performance in most cases.
All in all, these experiments demonstrate the significance of integrating the three components of Mask-tuning (i.e., input perturbation, fine-tuning classification, and the integrated loss) to improve the PLMs' generalization.
## VII Conclusion
In this study, we proposed Mask-tuning, a novel and effective training approach that addresses challenges in PLMs' learning process which affect their reusability on unseen data. By integrating the MLM training objective into fine-tuning, Mask-tuning enhances the PLMs' generalization. Comprehensive experiments on in-distribution and OOD datasets from three downstream tasks and three PLMs showed that Mask-tuning outperforms six baselines on OOD datasets while also boosting performance on in-distribution datasets. These findings demonstrate that Mask-tuning improves the practicality and effectiveness of PLMs for real-world applications. Moreover, since Mask-tuning is applicable to any PLM that is trained with fine-tuning, it has broad applicability.
| Model | Setup | SST-2 | IMDB-Cont. | IMDB-CAD | MNLI (m/mm) | HANS | AdvNLI | QQP | PAWS |
|---|---|---|---|---|---|---|---|---|---|
| BERT\({}_{base}\) | Fine-tuning on original train set | 92.43 | 79.08 | 87.00 | 84.50/83.40 | 56.90 | 24.12 | 90.80 | 32.80 |
| | Fine-tuning on augmented train set | 93.11 | 80.27 | 86.49 | 84.60/83.42 | 57.00 | 25.70 | 91.14 | 33.93 |
| | Mask-tuning w/o integrated loss | 92.07 | 79.00 | 84.99 | 83.57/84.20 | 50.51 | 23.60 | 89.92 | 38.15 |
| | Sequential training | 91.97 | 79.97 | 86.76 | 83.40/84.00 | 56.80 | 32.20 | 91.00 | 34.82 |
| | Mask-tuning (ours) | **93.12**\(\pm\)0.1 | **82.75**\(\pm\)0.3 | **88.32**\(\pm\)0.1 | **84.75**\(\pm\)0.2 | **69.52**\(\pm\)0.2 | **26.32**\(\pm\)0.3 | **91.54**\(\pm\)0.1 | **46.74**\(\pm\)0.5 |
| RoBERTa\({}_{base}\) | Fine-tuning on original train set | 94.49 | 84.50 | 88.40 | 87.60 | 67.80 | 31.20 | 91.50 | 38.45 |
| | Fine-tuning on augmented train set | 93.46 | 85.54 | 89.76 | 87.94 | 72.66 | 32.20 | 91.26 | 42.06 |
| | Mask-tuning w/o integrated loss | 95.00 | 81.80 | 87.31 | 86.83 | 64.56 | 31.00 | 90.00 | 41.00 |
| | Sequential training | 92.56 | 84.86 | 89.00 | 87.50 | 71.69 | 30.70 | 91.20 | 39.91 |
| | Mask-tuning (ours) | **94.60**\(\pm\)0.1 | **88.50**\(\pm\)0.2 | **91.62**\(\pm\)0.1 | **87.72**\(\pm\)0.1 | **75.70**\(\pm\)0.2 | **37.40**\(\pm\)0.6 | **91.26**\(\pm\)0.2 | **44.37**\(\pm\)0.4 |
| BART\({}_{base}\) | Fine-tuning on original train set | 93.23 | 82.48 | 86.03 | 84.68 | 56.30 | 30.51 | 90.50 | 32.27 |
| | Fine-tuning on augmented train set | 93.11 | 82.99 | 86.63 | 85.90/86.00 | 65.60 | 32.80 | 90.60 | 33.42 |
| | Mask-tuning w/o integrated loss | 95.00 | 82.02 | 85.80 | 81.02/81.40 | 51.52 | 32.40 | 89.34 | 35.21 |
| | Sequential training | 92.88 | 82.65 | 86.13 | 84.81 | 58.00 | 32.50 | 89.95 | 34.62 |
| | Mask-tuning (ours) | **93.80**\(\pm\)0.1 | **83.00**\(\pm\)0.1 | **87.83**\(\pm\)0.3 | **86.08**\(\pm\)0.2 | **70.48**\(\pm\)0.5 | **35.31**\(\pm\)0.4 | **91.03**\(\pm\)0.1 | **45.71**\(\pm\)0.5 |

TABLE III: Performance comparison of Mask-tuning (ours) and our ablation studies on various fine-tuning processes. |
2304.06765 | New Physics searches using ProtoDUNE and the CERN SPS accelerator | The exquisite capabilities of liquid Argon Time Projection Chambers make them
ideal to search for weakly interacting particles in Beyond the Standard Model
scenarios. Given their location at CERN the ProtoDUNE detectors may be exposed
to a flux of such particles, produced in the collisions of 400 GeV protons
(extracted from the Super Proton Synchrotron accelerator) on a target. Here we
point out the interesting possibilities that such a setup offers to search for
both long-lived unstable particles (Heavy Neutral Leptons, axion-like
particles, etc) and stable particles (e.g. light dark matter, or millicharged
particles). Our results show that, under conservative assumptions regarding the
expected luminosity, this setup has the potential to improve over present
bounds for some of the scenarios considered. This could be done within a short
timescale, using facilities that are already in place at CERN, and without
interfering with the experimental program in the North Area. | Pilar Coloma, Jacobo López-Pavón, Laura Molina-Bueno, Salvador Urrea | 2023-04-13T18:09:52Z | http://arxiv.org/abs/2304.06765v2 | # New Physics searches using ProtoDUNE and the CERN SPS accelerator
###### Abstract
The exquisite capabilities of liquid Argon Time Projection Chambers make them ideal to search for weakly interacting particles in Beyond the Standard Model scenarios. Given their location at CERN the ProtoDUNE detectors may be exposed to a flux of such particles, produced in the collisions of 400 GeV protons (extracted from the Super Proton Synchrotron accelerator) on a target. Here we point out the interesting possibilities that such a setup offers to search for both long-lived unstable particles (Heavy Neutral Leptons, axion-like particles, etc) and stable particles (e.g. light dark matter, or millicharged particles). Our results show that, under conservative assumptions regarding the expected luminosity, this setup has the potential to improve over present bounds for some of the scenarios considered. This could be done within a short timescale, using facilities that are already in place at CERN, and without interfering with the experimental program in the North Area.
+
Footnote †: preprint: IFIC/23-12, IFT-UAM/CSIC-23-35, FTUV-23-0329.0644
## I Introduction
Despite intensive searches in the last decades, no conclusive signal of physics Beyond the Standard Model (BSM) has been observed at the Large Hadron Collider or at dark matter (DM) direct detection experiments. A possible explanation is that the new physics (NP) is weakly coupled to the visible sector and lies at low scales. Models of this sort could easily evade current searches, which have so far been aimed at new particles with masses at or above the electroweak scale. From the theory side, several low-scale new physics scenarios have recently been put forward as an alternative approach to address some of the most pressing questions in particle physics: the origin of neutrino masses, the observed baryon asymmetry of the universe, and the DM problem. Weakly interacting particles arise naturally in these types of scenarios, as is the case, for instance, in low-scale seesaw models, which can address the first two. Regarding the DM problem, an exciting possibility is based on the existence of an extended "dark" (or hidden) sector communicating with the Standard Model (SM) via a new (light) mediator that is weakly coupled. Different types of particles can act as mediators between the two sectors, covering a wide range of masses and leading to distinct phenomenological consequences.
This change of paradigm has boosted a worldwide effort, yielding a plethora of novel approaches and proposals, in particular for experiments lying at the edge of the intensity frontier (see for instance [1]). In this sense, some of the most competitive searches are those performed at _beam-dump_ experiments, where the collision of high-energy particles (typically protons or electrons) against a target may produce an intense flux of new light states, typically from meson decays. Being weakly interacting, such particles could propagate over long distances before decaying visibly (or interacting) inside a detector placed downstream. An example of such facilities are neutrino experiments, where the high-intensity proton beams that source the neutrino flux can also be used to produce a variety of new particles, which may lead to observable signals in neutrino detectors.
In this Letter, we propose a new beam-dump experiment using the existing ProtoDUNE detectors at the CERN Neutrino Platform, two kiloton-scale liquid Argon Time Projection Chambers (LArTPCs) constructed to prototype and consolidate the technology of the DUNE Far Detector [2; 3]. These detectors sit downstream of the CERN North Area targets, which are used to produce secondary charged particle beams from the interactions of protons extracted from the CERN Super Proton Synchrotron (SPS) accelerator. As a result, the proton collisions in the primary target may generate a flux of BSM particles which could leave a visible signal in the ProtoDUNE detectors. To the best of our knowledge, this is the first time that this has been pointed out in the literature. Most importantly, we stress that searches for such signals can be carried out parasitically, without interfering with the dense experimental program at the CERN North Area Experimental Hall (EHN1) multipurpose facility.
One of the key features of LArTPCs is their excellent imaging capability, which allows them to fully reconstruct the tracks of ionising particles resulting from the decay or scattering of the produced new particles. This feature, together with the potential time synchronisation with the beam, can significantly reduce the possible background sources, which is crucial for surface detectors exposed to a huge flux of cosmic rays, such as ProtoDUNE. The second intrinsic advantage of this proposal is the wider phase space that can be covered compared to similar searches at neutrino experiments such as T2K [4] or MicroBooNE [5] (for future prospects see Ref. [6]), thanks to the higher proton beam energy available at the SPS (400 GeV, as opposed to the \(80-120\) GeV protons foreseen for example at DUNE [7]). This allows not only the abundant production of light short-lived mesons (such as \(\pi^{0},\eta,\eta^{\prime}\), etc.) but also a significant flux of heavier short-lived mesons (such as \(D\), \(D_{s}\), \(B\), or \(\Upsilon\)), as in the SHiP [8] or SHADOWS [9] experiments. The beam configuration, without a decay volume, does not allow the study of decays of longer-lived mesons such as charged kaons or pions, since these are significantly deflected by a set of magnets located after the primary target. Nevertheless, this peculiarity translates into an intrinsic advantage, as the background from SM neutrinos will be significantly reduced compared to neutrino experiments. Finally, thanks to both its large volume and the high density of liquid Argon (LAr), ProtoDUNE can be used to search for _both_ unstable and stable weakly interacting particles produced in this manner. We will therefore consider both scenarios separately in this work. For each of them, we will first take a model-independent approach, such that our results can be recast to specific scenarios, and then address two well-motivated specific models.
The rest of the paper is structured as follows. We provide an overview of the experimental setup in Sec. II. Our results are presented in Sec. III, where the cases of long-lived states and stable particles are discussed separately. Finally, we summarize and conclude in Sec. IV.
## II Experimental Setup
The two ProtoDUNE detectors were constructed and installed in the CERN Neutrino Platform, approved experiments NP02 (ProtoDUNE-VD) and NP04 (ProtoDUNE-SP/ProtoDUNE-HD), at the end of EHN1 [10; 11]. To produce the secondary beams in the North Area at CERN, the high-energy, high-intensity proton beam extracted from the SPS accelerator impinges on a thin (50 cm) Beryllium target, T2. Downstream of the target, secondary particles are selected with the use of magnetic spectrometers and transported to the various experimental areas. In particular, the ProtoDUNE detectors are relatively well aligned with the secondary H2/H4 beamlines and thus with the primary target T2. This feature puts them in a unique position to act as a beam-dump experiment, being located at a distance of \(L\sim 610-640\) m from the target.
A sketch of the experimental configuration considered in this study is illustrated in Fig. 1. About \(5-7\times 10^{12}\) protons per spill are extracted from the SPS accelerator to the target, with a spill duration of 4.8 s. Therefore, \(\sim 3.5\times 10^{18}\) protons on target (PoT) are dumped against T2 in a year 1. The T2 target is located 15 m underground for radiation shielding and is surrounded by a series of magnets, collimators and other beamline elements, depending on the H2/H4 beam configuration. Here we focus on describing the main elements relevant for our study. Before the target, a set of magnets defines the incoming angle of the protons interacting with T2, which can vary from 0 up to 10 mrad (on average) depending on the desired H2/H4 configuration. The remaining protons (\(\sim 30\%\)) emerging from the target are deflected towards a large collimating structure with 3.2 m of iron, which acts as a dump (designated "TAX"), as shown in the lower panel of Fig. 1. Subsequently, given the slope of the secondary line, especially in the vertical plane (see the top panel of Fig. 1), the remaining particles produced at the dump are essentially absorbed by the various magnetic elements present in the environment and by \(\sim 500\) m of soil, which act as shielding. In our calculations, we take a 0 mrad angle for the proton collisions on the target, and a 7 mrad angle for the protons that collide on the dump. Therefore, the flux of particles has two main components, as illustrated in the bottom panel of Fig. 1.
Footnote 1: The duty cycle considered for our study has been set to that of SPS during 2022 (see [12]). Specifically, for a data-taking period of 6 months and with the beam parameters listed above, this leads to a duty cycle of 30%.
As shown in Fig. 1, the produced flux of BSM particles would reach both ProtoDUNE modules. In this work we will focus on the first detector (NP02), which is closer; however, a similar approach can be applied to the one downstream, and we have explicitly checked that the results obtained are very similar. Specifically, NP02 has a fiducial volume of \(V_{det}=6\) m \(\times\) 7 m \(\times\) 6 m and is located at a distance of \(L=610\) m from the T2 target, as shown in Fig. 1. As already mentioned, an important peculiarity of this experimental configuration is that there is no decay pipe available. In addition, the strong magnetic field that deflects the remaining protons after they hit the target will also deflect other charged particles (such as pions and kaons) with a much larger opening angle. Therefore the expected background from SM neutrinos at the detector will be considerably suppressed, and will be neglected here. Instead, since the detectors are located at the surface, a significant background is expected from cosmic rays; these may however be reduced by applying timing and kinematic cuts, as discussed in more detail below.
## III Results
As the SPS beam hits the T2 target, and subsequently the remaining protons hit the dump, the proton collisions create a plethora of unstable mesons, which may produce BSM particles as they decay. As outlined in the previous section, here we will focus on the production of new particles from the decays of short-lived mesons (namely,
\(\pi^{0},\eta,\eta^{\prime},\rho,\omega,\phi,J/\psi,D,D_{s},B\) and \(\Upsilon\)) as well as from \(\tau\) decays. We extract the event distributions of the parent mesons from Pythia (v8.307) [13] using the SoftQCD flag. The exception to this are the \(D,D_{s}\) mesons, for which the HardQCD flag is used instead, since this approach yields results more closely aligned with the simulations performed by the SHiP collaboration [14]. Note that this is generally conservative, as we are not taking into account production from secondary interactions in the target, which can lead to a non-negligible enhancement of the meson production rate (see for example Fig. 6 in Ref. [14] for \(D\) mesons). Thus, we expect our results to improve with a dedicated flux simulation that takes into consideration the full geometry of the experimental setup, which we leave for future work. Table 1 summarizes the production rates for the parent particles considered in this work, normalized per PoT, for a beam with incident momentum of 400 GeV.
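As an illustration of how such yield estimates can be set up, the snippet below counts \(\eta\) mesons per inelastic \(pp\) event using Pythia's Python interface. It is a minimal sketch: the 400 GeV fixed-target collision is emulated through the equivalent centre-of-mass energy, the SoftQCD flag mirrors the one named above, and the exact generator configuration used for our simulation is not reproduced here.

```python
import pythia8

pythia = pythia8.Pythia()
pythia.readString("Beams:idA = 2212")        # proton beam
pythia.readString("Beams:idB = 2212")        # proton target
pythia.readString("Beams:eCM = 27.4")        # sqrt(2 m_p E) for E = 400 GeV
pythia.readString("SoftQCD:inelastic = on")  # minimum-bias production
pythia.init()

n_events, n_eta = 10000, 0
for _ in range(n_events):
    if not pythia.next():
        continue
    # PDG code 221 is the eta meson; each eta appears once in the record
    n_eta += sum(1 for i in range(pythia.event.size())
                 if pythia.event[i].id() == 221)
print("eta yield per event:", n_eta / n_events)
```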
Starting from the parent particle distributions, we then simulate their decays assuming either a two-body or a three-body process, depending on the parent particle and the scenario being considered. We then follow each BSM particle along its trajectory, keeping only those which intersect the detector fiducial volume (the flux _accepted_ by the detector). The expected number of events is computed from the accepted flux either taking into account the probability of decay inside the detector volume (in the case of unstable particles, which is related to their lifetime) or their probability to interact (in the case of stable particles, which is related to their interaction cross section). The relevant backgrounds will also be different, owing to the different signals expected in each case. Thus, in the remainder of this section, these two scenarios will be discussed separately.
### Long-lived particles
For a long-lived particle \(\Psi\) with mass \(m_{\Psi}\) and lifetime \(\tau_{\Psi}\) that is produced from a given parent meson2 in the decay \(M\to\Psi+\ldots\), the expected number of decays inside the detector can be computed as:
Footnote 2: Although in this section we refer to mesons throughout the text, the expressions can be trivially generalized to the case of \(\tau\) decays.
\[N_{dec}^{M}=N_{\rm PoT}\,Y_{M}\,{\rm BR}(M\to\Psi)\,\int dS\int dE_{\Psi}\,{ \cal P}(c\tau_{\Psi}/m_{\Psi},E_{\Psi},\Omega_{\Psi})\,\frac{dn^{M\to\Psi}}{ dE_{\Psi}dS}\,, \tag{1}\]
Figure 1: Sketch of the experimental configuration considered in this study. The upper panel (side view) indicates the location of the EHN1 Hall, where the ProtoDUNE modules are installed, with respect to the T2 target and the beam dump (TAX). The target is \(\sim 15\) m underground, and there are \(\sim 500\) m of soil between the beam dump and EHN1. In the lower panel (top view), the cones indicate the direction of the beam of BSM particles: solid for the particles produced in the target, and dashed for those produced in the dump. The direction of the proton beam is indicated by the red arrows, see text for details.
| Parent | \(\pi^{0}\) | \(\eta\) | \(\eta^{\prime}\) | \(D\) | \(D_{s}\) | \(\tau\) |
|---|---|---|---|---|---|---|
| Yield | 4.03 | 0.46 | 0.05 | \(4.8\cdot 10^{-4}\) | \(1.4\cdot 10^{-4}\) | \(7.4\cdot 10^{-6}\) |
| **Parent** | \(\rho\) | \(\omega\) | \(\phi\) | \(J/\psi\) | \(B\) | \(\Upsilon\) |
| Yield | 0.54 | 0.53 | 0.019 | \(4.4\cdot 10^{-5}\) | \(1.2\cdot 10^{-7}\) | \(2.3\cdot 10^{-8}\) |

Table 1: Production yield (normalized per PoT) for each of the parent particles considered in this work, see text for details. Note that the number of \(\tau\) leptons receives contributions from direct production as well as from indirect production through \(D_{s}\to\tau\nu\), the latter being dominant.
where \(\text{BR}(M\to\Psi)\) is the production branching ratio in the decay of meson \(M\), \(N_{\text{PoT}}\) is the number of protons on target integrated over a given data taking period, \(Y_{M}\) is the meson production yield (provided in Tab. 1), and \(\frac{dn^{M\to\Psi}}{dE_{\Psi}dS}\) stands for the number of particles produced from the decay \(M\to\Psi\) with energy \(E_{\Psi}\) entering the detector through a differential surface \(dS\), with a trajectory defined by the solid angle \(\Omega_{\Psi}\). The decay probability inside the detector volume reads:
\[\mathcal{P}=e^{-\ell_{det}/L_{\Psi}}\left(1-e^{-\Delta\ell_{det}/L_{\Psi}}\right)\,, \tag{2}\]
where \(\ell_{det}\) is the length of the trajectory before the particle enters the detector, and \(\Delta\ell_{det}\) is the length of the trajectory inside the detector (note that both quantities depend on the solid angle \(\Omega_{\Psi}\)). On the other hand, \(L_{\Psi}\) is the boosted decay length in the laboratory frame: \(L_{\Psi}=\gamma_{\Psi}\beta_{\Psi}c\tau_{\Psi}\simeq c\tau_{\Psi}E_{\Psi}/m_{ \Psi}\).
In the limit of small couplings, the decay length of the particle will be much longer than both the distance to the detector and the length traveled by the particle inside it. In this limit it is illustrative to consider the case where the dependence of \(\ell_{det}\) and \(\Delta\ell_{det}\) on the solid angle is neglected, which leads to:
\[N_{dec}^{M}\simeq N_{\text{PoT}}\,Y_{M}\,\text{BR}(M\to\Psi)V_{det}\int\,\frac {dE_{\Psi}}{L_{\Psi}}\,\left\langle\frac{dn^{M\to\Psi}}{dE_{\Psi}dS}\right\rangle.\]
Here, the notation \(\left\langle\ldots\right\rangle\equiv\frac{1}{S}\int\ldots dS\) indicates the average over the detector cross-sectional area, for a given energy. As can be seen, the number of events approximately scales with the volume of the detector, \(V_{det}=S_{det}\Delta\ell_{det}\). Thus, the number of decays will be enhanced in the ProtoDUNE detectors, thanks to their large fiducial volume. Although this expression is useful to understand the behaviour of the results, we stress again that our numerical calculation of the accepted flux does take into account the detector location, shape, and angle with respect to the beam direction, as outlined in Sec. II.
The final number of events needs to take into account the branching ratio of the decay into a visible final state; the result of Eq. 1 should therefore be multiplied by the corresponding branching ratio, \(\text{BR}(\Psi\to\text{Visible})\), times the detection efficiency for a given final state, \(\epsilon_{det}\). Therefore, for a given mass, the observable number of events depends on three model-dependent quantities: the branching ratio for the production of \(\Psi\), its branching ratio into visible states, and its lifetime in the rest frame. While in specific models these three quantities may be related, it is useful to derive model-independent sensitivities that can be easily recast to specific scenarios.
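As a concrete illustration, the sketch below evaluates Eqs. (1)-(2) in Monte Carlo form. All names are ours: the arrays describing the accepted flux are assumed to come from a decay simulation like the one outlined at the beginning of this section.

```python
import numpy as np

def decay_probability(ell_det, dell_det, E, m, ctau):
    """Eq. (2): decay probability inside the detector for a particle of
    energy E, mass m and proper decay length ctau (all lengths in metres)."""
    L = ctau * E / m  # boosted decay length, valid for E >> m
    return np.exp(-ell_det / L) * (1.0 - np.exp(-dell_det / L))

def expected_decays(E, ell, dell, m, ctau, n_pot, yield_M,
                    br_prod, br_vis, n_sim, eff=1.0):
    """Monte Carlo estimate of Eq. (1) for one parent species M.

    E, ell, dell: energy, path length to the detector and path length
    through it, for each simulated particle whose trajectory intersects
    the detector (the accepted flux); n_sim is the total number of
    simulated parent decays."""
    p = decay_probability(ell, dell, E, m, ctau)
    return eff * br_vis * n_pot * yield_M * br_prod * p.sum() / n_sim
```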
Our results are shown in Fig. 2, where we have computed the regions where the number of signal events would exceed 2.44 in 5 years of data taking, for different fermion production mechanisms, as a function of \(m_{\Psi}\). This would correspond to a 90% confidence level (C.L.) sensitivity in the absence of backgrounds [15]. Regarding the expected ProtoDUNE efficiencies, the reconstruction efficiency for energetic particles is expected to be above 80% (see Sec. 4.3 in Ref. [7]). Nevertheless, the final achievable efficiency will depend on the cuts applied to reduce the backgrounds, which cannot be estimated without a dedicated analysis. Therefore we have assumed perfect detector efficiency, \(\epsilon_{det}=1\), but our results can be easily recast for a different value. The lines are shown as a function of \(c\tau_{\Psi}/m_{\Psi}\), as this ratio determines the point that maximizes the decay probability within the detector in Eq. 2, which to first approximation leads to the optimal sensitivity. However, since the boost kinematics will be slightly different depending on the value of \(m_{\Psi}\), we obtain a different detector acceptance for different masses (see e.g., the discussion in Ref. [16]), which leads to slight variations in the results. This is indicated by the width of each band, which has been obtained scanning masses between 10 MeV (indicated by the dashed lines) and the largest mass kinematically accessible for each production channel.
Let us now focus on a specific model which is well-motivated from the theoretical point of view, such as the Heavy Neutral Lepton (HNL) scenario. In particular, HNLs arise in low-scale Seesaw models, which can generate neutrino masses [17; 18; 19; 20] and the baryon asymmetry of the Universe [21; 22], thus solving two of the main open problems in the SM. At the same time, HNLs with masses in the GeV range are relatively hard to produce at fixed-target experiments, since their main production mechanism is through \(D\), \(D_{s}\) and \(\tau\) decays, and these parents are themselves difficult to produce in the laboratory. Therefore, this mass
Figure 2: Expected sensitivity to long-lived particles in the model-independent scenario, assuming the branching ratios and lifetime at rest of the long-lived particle are uncorrelated. The region above each dashed line would lead to a number of events above 2.44 in 5 years of data taking, which in the absence of backgrounds would correspond to 90% confidence level (C.L.). The width of the bands indicates the variation in the results for masses of the long-lived particle between 10 MeV and the production threshold in each case, see text for details.
region is far less constrained than HNLs with masses below the kaon mass (for a recent review see Ref. [6]). Thus, we expect our setup to yield competitive constraints thanks to the high beam energy available at the SPS. In order to compare to current experimental constraints and future sensitivities, we will consider a simplified scenario with one HNL of mass \(m_{N}\) that mixes exclusively with one SM neutrino of a given flavor. The relevant portion of the Lagrangian reads:
\[\mathcal{L}\supset-\frac{m_{W}}{v}\overline{N}U_{\alpha 4}^{*}\gamma^{\mu}l_{L \alpha}W_{\mu}^{+}-\frac{m_{Z}}{\sqrt{2}v}\overline{N}U_{\alpha 4}^{*}\gamma^{\mu}\nu_{L \alpha}Z_{\mu}\,, \tag{3}\]
where \(l_{\alpha}\) and \(\nu_{\alpha}\) stand for the charged lepton and light neutrino of flavor \(\alpha\equiv e\),\(\mu\),\(\tau\), while \(v\) stands for the Higgs vacuum expectation value, \(m_{Z}\) (\(m_{W}\)) is the mass of the \(Z\) (\(W\)) boson, and \(U_{\alpha 4}\) indicates the mixing matrix elements between the HNL and the light neutrinos. In this scenario the HNL production branching ratio and its decay width will be strongly correlated and depend on the HNL mass and its mixing with the light neutrinos. In our calculations, we take the HNL production branching ratios and decay widths from Ref. [16] (see also Ref. [23]).
Once produced, the HNL will decay back to SM particles (mesons and leptons) through its mixing. In the following, we will consider its decays to the final states \(N\to\nu ee,\nu\mu\mu,\nu e\mu,e\pi,\mu\pi\) and \(\nu\pi^{0}\), which would be easily identifiable in the ProtoDUNE detectors thanks to their excellent particle identification (ID) capabilities. As for the backgrounds, since the neutrino production in the beam is heavily reduced (thanks to the dump, as explained in the previous section), we expect the largest contribution to come from cosmic rays. However, we believe these can also be reduced to a negligible level due to several factors. First, angular and timing cuts could be applied to remove events that do not come from the direction of the target within a beam-spill time window. Second, in the case of fully visible final states (such as \(N\to\mu\pi\)) the event can be fully reconstructed [11; 24], which would offer additional handles to suppress the background with respect to the signal (an example of a similar search at MicroBooNE can be found in Ref. [5]; see also Refs. [25; 26] for sensitivity studies using signal-to-background discrimination techniques at the DUNE near detectors). Therefore our results for this scenario have been obtained assuming a negligible background level, leaving a detailed calculation of the expected background for future work. The expected sensitivity, shown in Fig. 3 for an inclusive search using the decay channels indicated above, shows that this setup would significantly improve over current constraints in the mass window above 400 MeV, and would be competitive with (or even better than) other facilities planned on a similar timescale at CERN, such as NA62-dump [27; 28], FASER [29] or DarkQuest [30], indicated by solid lines. Most notably, the sensitivity of our setup lies approximately in the same ballpark as proposals on a longer timescale such as, e.g., FASER2 [29] (which would start taking data around 2038) or the DUNE ND-GAr [16] (which belongs to Phase II of the DUNE experiment and would therefore start data taking after 2035). For comparison, in Fig. 3 we also show the future sensitivity of SHiP [8] and SHADOWS [9], which are expected to start taking data during LHC Run 5, beyond 2035 [31]. An overview of the planned timeline for the experiments listed above can be found e.g. in Ref. [32].
Finally, note that the results shown in Fig. 3 are obtained for an inclusive search (including a number of final states, as listed above), in order to ease comparison with the literature. However, experimental searches targeting different HNL decay modes may be subject to different optimization strategies and background discrimination techniques. This may affect the final efficiencies and sensitivities differently for each decay channel. Therefore, in App. A we also provide the expected sensitivities separately for each decay mode of the HNL.
### Stable particles
Being filled with LAr, the ProtoDUNE detectors would also be sensitive to the interactions of BSM particles. While unstable particles may also be searched for in this manner, here we focus on the case of stable particles for simplicity. Let us assume that the leading production mechanism for a light stable particle \(\chi\) is the decay of a given parent meson, \(M\to\chi\bar{\chi}+\ldots\); the \(\chi\) then interacts as it arrives at the ProtoDUNE modules, with an interaction cross section \(\sigma\). An average interaction cross section may then be defined as:
\[\langle\sigma\rangle=\frac{1}{\Phi^{\chi}}\int_{0}^{\infty}\int_{T_{\rm min}}^{T_{\rm max}}\frac{d\sigma}{dT}\left(E_{\chi},\{X\}\right)\frac{d\Phi^{\chi}}{dE_{\chi}}\,dT\,dE_{\chi} \tag{4}\]
where \(T\) is the recoil energy of the electron, \(E_{\chi}\) is the energy of the incoming particle, \(\{X\}\) are the model parameters, and \(\Phi^{\chi}\) is the flux of incoming \(\chi\) particles whose trajectories intersect the detector (in units of \(\rm PoT^{-1}cm^{-2}\), and averaged over all possible trajectories). Note that the cross section here is integrated between the minimum observable recoil energy (which depends on the detector technology) and the maximum achievable recoil (which depends on kinematics). The number of events can then be written in terms of the average interaction cross section, as:
\[N_{ev}=\epsilon_{det}\,N_{trg}\,\langle\sigma\rangle\,\Phi^{\chi}\,N_{\rm PoT}\,, \tag{5}\]
where \(N_{trg}\) is the number of targets relevant for the interaction (e.g., electrons, or nuclei) contained in the fiducial volume of the detector. Notice that the flux \(\Phi^{\chi}\) depends on the production branching ratio, which is also a function of the model parameters. Using this formalism, one can derive model-independent sensitivity limits for the quantity \(\langle\sigma\rangle\times\rm BR(\)\(M\to\chi\bar{\chi}+\ldots)\), as shown in the middle panel in Fig. 4. For simplicity, in this figure
we have assumed no backgrounds and perfect detection efficiency, but our results can be easily rescaled for different assumptions. In addition, these results can be easily recast to a specific model using the fluxes provided in the left panel of Fig. 4. In particular, we note that the sensitivity regions shown in the middle panel of Fig. 4 would also be applicable to any BSM scenario inducing the production mechanisms listed above, including (but not restricted to) millicharged particles (MCPs) or a dark portal through a massive vector mediator [36; 37].
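A minimal numerical sketch of Eqs. (4)-(5) is given below. The grids, units, and the fixed recoil window are illustrative assumptions of ours (kinematically, \(T_{\rm max}\) depends on \(E_{\chi}\)), and the flux and cross-section inputs are placeholders for the actual simulation outputs.

```python
import numpy as np

def n_events(E_grid, dphi_dE, dsigma_dT, T_min, T_max,
             n_targets, n_pot, eff=1.0):
    """Eqs. (4)-(5): E_grid holds chi energies [GeV], dphi_dE the accepted
    differential flux [PoT^-1 cm^-2 GeV^-1] on that grid, and dsigma_dT(E, T)
    the differential cross section [cm^2 / GeV]."""
    T = np.linspace(T_min, T_max, 500)
    # sigma(E): integrate dsigma/dT over the observable recoil window
    sigma_E = np.array([np.trapz(dsigma_dT(E, T), T) for E in E_grid])
    # <sigma> * Phi^chi: fold the cross section with the accepted flux
    sigma_phi = np.trapz(sigma_E * dphi_dE, E_grid)
    return eff * n_targets * n_pot * sigma_phi
```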
In order to compare to current bounds from other experiments, let us now focus on a particular scenario. Specifically we consider the case of MCPs, which arise in certain BSM scenarios from the mixing between the SM photon and a massless dark photon [37]. MCPs are fermions with an effective electric charge \(\varepsilon e\) (\(e\) being the electric charge of the electron) and mass \(m_{\chi}\). As they arrive to the detector, these would lead to an excess of electron recoils. The differential electron scattering cross section for this process is given by [46]:
\[\frac{d\sigma}{dT}=\pi\alpha^{2}\varepsilon^{2}\frac{2E_{\chi}^{2}m_{e}+T^{2} m_{e}-T\left(m_{\chi}^{2}+m_{e}\left(2E_{\chi}+m_{e}\right)\right)}{T^{2} \left(E_{\chi}^{2}-m_{\chi}^{2}\right)m_{e}^{2}} \tag{6}\]
where \(m_{e}\) is the electron mass. As can be seen, in the limit \(E_{\chi}\gg T,m_{e},m_{\chi}\) it is enhanced at low recoil energies. Therefore, we naively expect the ProtoDUNE detector to be highly sensitive to such a signal, thanks to the low thresholds achievable at LAr TPCs. While the detection threshold for electron recoils at ProtoDUNE is expected to be around 10-30 MeV [47, 48, 7], we conservatively assume 30 MeV in our calculations. Thus, in the limit of small MCP masses, and taking \(E_{\chi}\gg T,m_{e},m_{\chi}\), the size of the interaction cross section for this model can be estimated as
\[\sigma\sim\varepsilon^{2}\,\left(\frac{30\text{ MeV}}{T_{\text{min}}}\right)\,10^{-26}\text{ cm}^{2},\]
which implies
\[\frac{\langle\sigma\rangle\times\text{BR}}{10^{-26}\text{ cm}^{2}} \sim\text{BR}(\pi^{0}\to\gamma\chi\bar{\chi})\,\varepsilon^{2}\, \left(\frac{30\text{ MeV}}{T_{\text{min}}}\right)\] \[\sim\text{BR}(\pi^{0}\to\gamma e^{-}e^{+})\,\varepsilon^{4}\, \left(\frac{30\text{ MeV}}{T_{\text{min}}}\right)\,. \tag{7}\]
According to the middle panel in Fig. 4, and taking \(\text{BR}(\pi^{0}\to\gamma e^{+}e^{-})=1.174\%\)[49], this implies that our setup could potentially be sensitive to values of \(\varepsilon\) as low as \(\sim 5\times 10^{-5}\), in the absence of backgrounds.
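This estimate is easy to check numerically; the sketch below implements Eq. (6) and integrates it over the observable recoil window. The incoming energy, MCP mass, and integration grid are illustrative choices of ours.

```python
import numpy as np

ALPHA, M_E = 1 / 137.036, 0.511e-3   # fine-structure constant; m_e [GeV]
GEV2_TO_CM2 = 3.894e-28              # conversion of GeV^-2 to cm^2

def dsigma_dT_mcp(E, T, m_chi, eps):
    """Eq. (6): MCP-electron differential cross section [cm^2 / GeV]."""
    num = 2 * E**2 * M_E + T**2 * M_E - T * (m_chi**2 + M_E * (2 * E + M_E))
    return (np.pi * ALPHA**2 * eps**2 * num
            / (T**2 * (E**2 - m_chi**2) * M_E**2)) * GEV2_TO_CM2

# For E = 20 GeV, a light MCP and eps = 1, integrating from T_min = 30 MeV
# reproduces the sigma ~ 10^-26 cm^2 scaling quoted above.
T = np.linspace(0.03, 10.0, 5000)
sigma = np.trapz(dsigma_dT_mcp(20.0, T, 1e-4, 1.0), T)
print(f"sigma ~ {sigma:.1e} cm^2")
```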
However, for electron recoils at such low energies we expect a significant background from cosmic-ray interactions in the detector. Unlike in the case of long-lived particles, these will be harder to disentangle from the signal events, and we therefore expect them to significantly reduce our final sensitivity to this scenario. Thus, the main background will be cosmogenic muons that are energetic enough (\(E_{\mu}\gtrsim 400\) GeV) to penetrate the fiducial volume but do not leave a distinguishable muon-like track. Following Ref. [47], here we define a slightly smaller fiducial volume of \(6\text{ m}\times 7\text{ m}\times 5.65\text{ m}\). Within this volume, the
Figure 3: Expected sensitivity to Heavy Neutral Leptons (HNLs) at 90% C.L., as a function of the HNL mass. In each panel, results are obtained setting the remaining mixing parameters to zero, and assuming backgrounds can be reduced to a negligible level. Our results are given by the solid black lines, while current constraints are indicated by the shaded gray areas (extracted from [1] as well as the phenomenological recasts of results from CHARM [33] and BEBC [34]). The sensitivities of other beam-dump experiments are also shown for comparison, where solid (dashed) lines correspond to experiments taking data before (after) 2035, see text for details. The dashed black line corresponds to the naive seesaw scaling and should be considered only as indicative. It corresponds to \(|U_{\alpha 4}|^{2}=\sqrt{\Delta m_{\text{atm}}^{2}}/m_{N}\), where \(\Delta m_{\text{atm}}^{2}\) is the light neutrino atmospheric mass-squared difference. The brown shaded area is disfavored by BBN bounds extracted from [35, 6].
total number of muons with energies above 400 GeV is approximately \(4\times 10^{11}\) per year [47]. However, only 30% of these muons occur simultaneously with the spill (see Sec. II), and only around 0.1% do not leave a muon-like track [47]. Furthermore, the incoming muon flux has an angular dependence that varies as \(\sim\sin^{2}(\theta)\), where \(\theta\) is the angle with respect to the horizontal. To further reduce this background, we apply an angular cut of \(10^{\circ}\) above the horizontal. After applying these conservative cuts, we are left with approximately \(2\cdot 10^{6}\) background events per year.
Detailed measurements of the cosmic ray background can be performed using beam-off data at ProtoDUNE and, therefore, it is reasonable to assume that our signal significance will be mainly limited by the statistical error on the background, while systematic uncertainties will be subleading. Thus, the sensitivity limit for this scenario can be computed using a Gaussian \(\chi^{2}\), as:
\[\chi^{2}=\left(\frac{N_{ev}-N_{bg}}{\sqrt{N_{bg}}}\right)^{2}\,,\]
which is computed using just the total number of events. We again stress that our results are conservative, as binning in recoil energy may offer additional handles to enhance the signal significance, provided that the background shows a different dependence on recoil energy than the signal. Our results for this scenario are shown in the right panel of Fig. 4, where we show the expected sensitivity of our setup compared to previous limits in the literature and to the expected sensitivity of experiments with a similar timescale, such as NA64\(\mu\)[44] and milliQan (with 300 fb\({}^{-1}\) of integrated luminosity, taken from Ref. [45]). Note that other future experiments such as JUNO, SHiP or DUNE will also deliver very interesting limits for this scenario [39; 50]. As can be seen, using just the total event rates, and even with our conservative estimates regarding luminosity and the treatment of backgrounds, we expect the setup to be competitive with the most stringent limits in the parameter space for a wide range of masses. For comparison, we also show the potential improvement if the background could be reduced significantly below the value considered here. This is indicated by the dashed black lines, obtained considering perfect background rejection. Although this is not realistic, it serves as an indication of the room for improvement for this setup, which for this scenario is background-limited. We note that our numerical calculations show perfect agreement with our naive estimate for the sensitivity based on Eq. 7.
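For completeness, the sketch below inverts this \(\chi^{2}\) for the millicharge, using the \(\varepsilon^{4}\) scaling of the signal implied by Eq. 7. It assumes that \(N_{ev}\) contains the background expectation plus the signal, so that \(\chi^{2}=N_{sig}^{2}/N_{bg}\); the function name and this reading of the formula are ours.

```python
import numpy as np

def epsilon_limit(n_sig_eps1, n_bg, chi2_crit=2.71):
    """90% C.L. limit on the millicharge (chi2_crit = 2.71 for 1 d.o.f.).

    n_sig_eps1: signal events computed at eps = 1; since production and
    detection each scale as eps^2, the signal scales as eps^4 overall."""
    n_sig_limit = np.sqrt(chi2_crit * n_bg)  # chi2 = n_sig^2 / n_bg
    return (n_sig_limit / n_sig_eps1) ** 0.25

# Usage with the background level quoted above (~2e6 events per year):
# eps_90 = epsilon_limit(n_sig_eps1=..., n_bg=2e6 * 5)
```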
## IV Summary and Conclusions
Given their location at CERN, the ProtoDUNE detectors may be exposed to a flux of new particles generated after the collision of 400 GeV protons, extracted from the SPS accelerator, with the T2 target (see Fig. 1). In this Letter, we have explored the possibility of using such a setup to search for BSM weakly interacting particles in a beam-dump configuration. We have shown that it offers the opportunity to search for both long-lived un
Figure 4: Expected fluxes and sensitivity to stable, weakly interacting particles. _Left panel:_ Flux of stable particles that would enter the fiducial volume of the ProtoDUNE detectors as a function of energy, for different production mechanisms and parent particles, as indicated in the legend. _Middle panel_: Expected sensitivity to stable particles for the model-independent case, assuming no backgrounds and perfect detection efficiency. Here, \(\langle\sigma\rangle\) is the average cross section defined in Eq. 4, while \(m_{\chi}\) is the mass of the stable particle. _Right panel_: Expected sensitivity for the millicharged particle scenario as a function of its mass \(m_{\chi}\) (solid black line), see text for details. Since the setup is background-limited, the dashed black line indicates the ultimate sensitivity achievable if backgrounds could be significantly reduced, see text for details. We compare our results to previous constraints from SLAC [38], LSND and MiniBooNE [39], ArgoNeuT [40], milliQan [41] and LEP [42] (gray filled area); recent recasts of results from CHARM II and BEBC [43] (solid violet and blue lines, respectively); and the future expected sensitivity of NA64\(\mu\)[44] (dashed light blue line) and milliQan with 300 fb\({}^{-1}\) of integrated luminosity [45] (dashed red line), with a similar timescale to our proposal.
stable particles and stable particles, thanks to the large fiducial volume of the ProtoDUNE modules and the high density of liquid Argon, respectively. Additional advantages of this setup include the absence of a decay pipe, which leads to a strong suppression of beam-coincident neutrino events, and the excellent particle ID and tracking capabilities of LArTPCs, required to suppress the cosmic-ray-induced background. Our results show that the expected sensitivity goes considerably beyond current constraints for two representative examples (Heavy Neutral Leptons and millicharged particles), using facilities that are already in place at CERN, without interfering with the experimental program in the North Area, and within a relatively short timescale. However, the possibilities offered by this setup are much wider, as it may also be used to search for additional weakly interacting particles such as dark photons, dark scalars, axion-like particles, or light dark matter. To illustrate its reach, we have also presented the expected sensitivity of the setup in a model-independent fashion that allows our results to be easily recast to particular NP models involving either unstable or stable new states. Finally, we would like to remark that while our results have been derived under generally conservative assumptions, a dedicated analysis is required in order to determine the expected background levels and detector efficiencies achievable for such a setup. In particular, the study of a new trigger algorithm optimised for the beam-dump configuration is essential to fully determine the potential of this setup.
_Acknowledgements._ We are very grateful to Nikolaos Charitonidis, Sylvain Girod and Vincent Clerc from the CERN BE-EA group for very useful discussions and insights on the SPS North Area layout and H2/H4 beamlines configurations. We warmly thank Paolo Crivelli and Sergei Gninenko for very useful discussions. We also warmly thank Francesco Pietropaolo for his useful feedback and reading of the manuscript. This work has received partial support from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 860881-HIDDeN and the Marie Sklodowska-Curie Staff Exchange grant agreement No 101086085 - ASYMMETRY. PC acknowledges partial financial support by the Spanish Research Agency (Agencia Estatal de Investigacion) through the grant IFT Centro de Excelencia Severo Ochoa No CEX2020-001007-S and by the grant PID2019-108892RB-I00 funded by MCIN/AEI/ 10.13039/501100011033. She is also supported by Grant RYC2018-024240-I, funded by MCIN/AEI/ 10.13039/501100011033 and by "ESF Investing in your future". JLP and SU acknowledge support from Generalitat Valenciana through the plan GenT program (CIDEGENT/2018/019) and from the Spanish Ministerio de Ciencia e Innovacion through the project PID2020-113644GB-I00. The work of LMB is supported by SNSF Grant No. 186158 (Switzerland), RyC-030551-I, and PID2021-123955NA-100 funded by MCIN/AEI/ 10.13039/501100011033/FEDER, UE (Spain). The authors acknowledge use of the HPC facilities at the IFIC (SOM cluster) and at the IFT (Hydra Cluster).
## Appendix A Non-inclusive sensitivities to HNLs
This Appendix summarizes the expected sensitivities obtained for an HNL decaying into specific final states, so they can be easily recast once the information on expected efficiencies and backgrounds is available for each channel. Our results are shown in Fig. 5, where the different lines correspond to different decay modes of the HNL.
|
2306.08131 | Efficient Adapters for Giant Speech Models | Large pre-trained speech models are widely used as the de-facto paradigm,
especially in scenarios when there is a limited amount of labeled data
available. However, finetuning all parameters from the self-supervised learned
model can be computationally expensive, and becomes infeasible as the size of
the model and the number of downstream tasks scale. In this paper, we propose
a novel approach called Two Parallel Adapter (TPA) that is inserted into the
conformer-based pre-trained model instead. TPA is based on systematic
studies of the residual adapter, a popular approach for finetuning a subset of
parameters. We evaluate TPA on various public benchmarks, and experimental results
demonstrate its superior performance, which is close to full finetuning on
different datasets and speech tasks. These results show that TPA is an
effective and efficient approach for serving large pre-trained speech models.
Ablation studies show that TPA can also be pruned, especially for lower blocks. | Nanxin Chen, Izhak Shafran, Yu Zhang, Chung-Cheng Chiu, Hagen Soltau, James Qin, Yonghui Wu | 2023-06-13T20:51:00Z | http://arxiv.org/abs/2306.08131v1 | # Efficient adapters for giant speech models
###### Abstract
Large pre-trained speech models are widely used as the de-facto paradigm, especially in scenarios when there is a limited amount of labeled data available. However, finetuning all parameters from the self-supervised learned model can be computationally expensive, and becomes infeasible as the size of the model and the number of downstream tasks scale. In this paper, we propose a novel approach called Two Parallel Adapter (TPA) that is inserted into the conformer-based pre-trained model instead. TPA is based on systematic studies of the residual adapter, a popular approach for finetuning a subset of parameters. We evaluate TPA on various public benchmarks, and experimental results demonstrate its superior performance, which is close to full finetuning on different datasets and speech tasks. These results show that TPA is an effective and efficient approach for serving large pre-trained speech models. Ablation studies show that TPA can also be pruned, especially for lower blocks.
Nanxin Chen, Izhak Shafran, Yu Zhang, Chung-Cheng Chiu, Hagen Soltau, James Qin, Yonghui Wu (Google DeepMind, USA)

**Index Terms**: model adaptation, pretrained model, residual adapter
## 1 Introduction
In recent years, researchers have observed a prevalent paradigm shift from random initialization to transfer learning using pretrained self-supervised learning models, which yields superior performance on various downstream tasks [1, 2, 3, 4, 5]. Self-supervised learning methods learn to extract useful feature representations from large amounts of unlabeled data through objectives such as contrastive learning [2, 3], masked language model loss [1, 5], or both [4]. The whole network is then finetuned with a small learning rate to ensure that the learned representations can be exploited on downstream tasks.
This approach introduces two potential issues. First, the whole network needs to be updated during finetuning, which is costly both to train and to store. In the last few years, a large number of speech applications have been built by finetuning such self-supervised models [6, 7, 8, 9]. For a service that provides all of these applications, full finetuning is not feasible for either storage or serving, because a separate copy of all the parameters must be stored and loaded per task. For example, if we want to support a new language with the existing model, it is not realistic to re-train the whole model. Second, to unlock the potential of self-supervised learning, a critical point is to use a large network that can extract expressive representations. When such a network is finetuned on a small amount of labeled data, it tends to overfit quickly, and without proper configuration catastrophic forgetting [10] occurs, so the information learned through self-supervised learning is lost.
The residual adapter [11, 12] provides a neat solution to the two problems mentioned above. A residual adapter is a very simple network consisting of a residual connection around two fully connected layers with a nonlinear activation function such as ReLU [13]. Some previous work [12] also includes an optional layer norm before the first fully connected layer to normalize the input. When we want to build a model for one task, instead of updating the whole network, we only update the added residual adapters. Typically the residual adapter is much smaller than the large frozen encoder. This solves both the storage and the serving issue, since only the adapter needs to be stored and loaded per task while the frozen encoder is kept once. Because of the small number of trainable parameters, overfitting is also observed less frequently.
In this paper, we provide a systematic study of various residual adapters, following a recent study [14] of residual adapters for Transformers [15]. Based on this comprehensive study, we further propose a new approach for the Conformer [16], which reports the best performance on various speech tasks [17]. We report results on different public benchmarks, including Automatic Speech Recognition (ASR) tasks on multiple English datasets [18] and a multilingual dataset [19], and an Automatic Speech Translation (AST) task [20]. Our study is based on our large pre-trained speech foundation model, which includes about 2 billion parameters. We chose a large model because it has been observed that large models benefit more from self-supervised learning [3, 21] and offer impressive representation abilities. The marginal cost of using a large model with adapters decreases significantly as the number of tasks increases, in contrast to small models, which are easy to serve but hard to scale.
The paper is organized as follows. Section 2 gives a brief overview of previous work on residual adapters and related approaches. Section 3 introduces the idea of the residual adapter and its variations. Section 4 starts with extensive experiments on different factors of residual adapters on public benchmarks. Based on those studies, we propose a new variation, Two Parallel Adapter (TPA), which combines the best configuration of each factor. We report comprehensive results of the proposed TPA on downstream tasks including ASR and AST. Section 5 summarizes the paper and discusses the broad impact and future directions.
## 2 Previous Works
Residual adapters have been broadly studied in prior work. They were originally proposed for transfer learning in natural language processing [11, 12, 22, 23] and have since been applied to other modalities, such as speech [24, 25], and to cross-modal settings [26, 27].
Our studies are closely related to the previous work [14] that conducts experiments on natural language processing tasks; we apply similar studies to the Conformer and speech tasks. In [28], the authors proposed adding residual adapters to a self-supervised speech model and applied it to speech recognition. Our work is also related to [29], where residual adapters are used for speech translation. They studied two scenarios: adapting from a pre-trained speech translation backbone, and transfer learning with both a pre-trained encoder and decoder. Our work differs in two respects. First, we propose a new variation of the residual adapter, TPA, specially designed for the Conformer. Experiments suggest TPA leads to better performance than the original residual adapter. Second, we utilize a unified framework where the pre-trained encoder is kept frozen during training while adapters and a randomly initialized decoder are added for different tasks, in contrast to the simple CTC decoder [28] or pretrained decoder [29]. We verify the effectiveness of our framework on both speech recognition and speech translation benchmarks.
## 3 Residual Adapters
The residual adapter is a small neural network component inserted to perform model adaptation. Instead of updating all parameters, only this small component is finetuned, so the functionality from pretraining is largely preserved. The residual adapter consists of a residual connection, two fully connected layers with a bottleneck dimension, one non-linear activation function, and an optional layer normalization [30] before the first fully connected layer. It can be expressed as follows:
\[\mathrm{adapter}(x)=x+(W_{2}(f(W_{1}g(x)+b_{1}))+b_{2}) \tag{1}\]
where \(f\) is the nonlinear activation function and \(g\) is either layer normalization or the identity function.
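A minimal PyTorch sketch of Eq. (1) is given below. The class and argument names are ours, not the authors' released code; the bottleneck width of 256 and the zero initialization of \(W_{2}\) follow the setup described in Section 4, while the biases are left at PyTorch defaults rather than the Xavier scheme used there.

```python
import torch
import torch.nn as nn


class ResidualAdapter(nn.Module):
    """Bottleneck adapter of Eq. (1): adapter(x) = x + W2 f(W1 g(x) + b1) + b2."""

    def __init__(self, dim: int, width: int = 256, use_layer_norm: bool = False):
        super().__init__()
        self.norm = nn.LayerNorm(dim) if use_layer_norm else nn.Identity()  # g
        self.down = nn.Linear(dim, width)  # W1, b1
        self.act = nn.ReLU()               # f
        self.up = nn.Linear(width, dim)    # W2, b2
        # Zero-initializing W2 makes the adapter's learned contribution vanish
        # at the start of finetuning (cf. the initialization in Section 4).
        nn.init.zeros_(self.up.weight)
        nn.init.xavier_uniform_(self.down.weight)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(self.norm(x))))
```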
In this paper we mainly study two different styles of adapters: serial and parallel. We compare results from three different cases, which are shown in Figure 1. The first two are the most widely used ones in the literature [12, 14, 25, 29].
The serial adapter is inserted after a component, such as a Feed-forward Network (FFN) or a conformer block. For instance, it can be added between conformer blocks, as shown in the left part of Figure 1, and the equation is:
\[\mathrm{FFN}_{\mathrm{serial}}(x)=\mathrm{adapter}(\mathrm{Conformer}(x)) \tag{2}\]
since the residual adapter itself includes a residual connection, the output of the conformer block is already included.
The parallel adapter is added alongside the residual connection of a network component, for example an FFN. Since the residual adapter also has a residual connection, the total output becomes the sum of all three paths: the input, the main branch, and the residual adapter:
\[\mathrm{FFN}_{\mathrm{parallel}}(x)=x+0.5\cdot\mathrm{FFN}(x)+(\mathrm{adapter}(x)-x) \tag{3}\]
Here we subtract x to avoid counting the input twice.
The widely used Conformer structure differs from the Transformer in that it has two FFNs instead of one. In this case, it is possible to add two adapters per conformer block, one for each FFN, as shown in Figure 1(c). We discuss all possible combinations of these configurations in the Experiments section; the three placements are sketched below.
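The placements of Eqs. (2) and (3) can be summarized in a few lines of PyTorch-style pseudocode. The function names are ours, and `attn_and_conv` is a hypothetical callable standing in for the self-attention and convolution sub-layers of a Conformer block, not a real library API:

```python
def serial_placement(x, conformer_block, adapter):
    # Eq. (2): the adapter (with its own residual path) follows the whole block.
    return adapter(conformer_block(x))


def parallel_placement(x, ffn, adapter):
    # Eq. (3): input + half-step FFN branch + adapter branch; subtracting x
    # cancels the adapter's internal residual, so the input is counted once.
    return x + 0.5 * ffn(x) + (adapter(x) - x)


def tpa_block(x, ffn1, attn_and_conv, ffn2, adapter1, adapter2):
    # TPA (Figure 1c): one narrow parallel adapter on each macaron FFN; the
    # sub-layers between the two FFNs are passed in as a single callable.
    x = parallel_placement(x, ffn1, adapter1)
    x = attn_and_conv(x)
    return parallel_placement(x, ffn2, adapter2)
```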
## 4 Experiments
In our experiments, we study the impact of different residual adapter factors and report the best configuration, Two Parallel Adapter (TPA), on various downstream tasks and public benchmarks. All our experiments use the same pretrained network, so all results are comparable to each other. Our pretrained model is based on BEST-RQ [5], following the default masking and quantization parameters. The model has about 2 billion parameters and includes 32 conformer blocks with hidden dimension 1536. Layer normalization instead of batch normalization is adopted for training stability. The model is trained on more than 12M hours of data covering more than 300 languages.
After pretraining, we add residual adapters to the conformer encoder and finetune them on in-domain supervised data. For speech recognition tasks, a 2-layer LSTM decoder with a width of 1280 is randomly initialized and added to the pre-trained encoder with adapters. Both the adapters and the decoder are updated with in-domain training data. For speech translation tasks, we attach a 6-layer, 512-dimension Transformer decoder to the pre-trained encoders.
The residual adapter includes two layers with a bottleneck dimension of 256, and a ReLU non-linear activation function [13], unless otherwise specified. We initialize \(W_{2}\) with all zeros and \(W_{1}\), \(b_{1}\), \(b_{2}\) with Xavier initialization [31].
### Ablation Studies
We study the different factors of residual adapters based on the FLEURS dataset [19], a multilingual dataset that includes 102 languages. The total amount of training data is less than 1000 hours, and it varies across languages. For example, western European languages have more than 230 hours of training data, while Chinese, Japanese, and Korean (CJK) languages have less than 40 hours. We believe this dataset represents different cases well and is challenging enough for studying different approaches. We use one shared adapter for all 102 languages, since this gives better performance in practice.
#### 4.1.1 Comparison with or without layer norm
Layer normalization is used in some previous studies [12, 28] but not in others [14], so we want to study whether it is a crucial factor. It was introduced in [12] in order to avoid finetuning the existing layer normalization layers within the pre-trained model. However, the normalization may also be harmful to adapter learning because it changes the scale of the input. We use a serial residual adapter, shown in Figure 1(a), as the baseline system to study whether the layer norm is helpful.
From Table 1, layer normalization clearly does not improve adaptation performance overall. As a result, we do not include any layer normalization layers in the following experiments.
#### 4.1.2 Comparison between different adapters
Next, we aim to determine which configuration of residual adapters is best: serial or parallel. As shown in Figure 1, for parallel adapters we study two different possibilities: one where a large adapter is added to one Feed-forward Network (FFN), as in Figure 1(b), and another where two small adapters are attached to both FFNs, as in Figure 1(c). The intuition is that the Conformer block includes two FFNs, and it might be beneficial to adapt both with half-size adapters. The results are included in Table 2. Despite some small gains in training speed from better parallelism, the single parallel adapter has very similar performance to the serial adapter. However, considering the architecture of the conformer block shown in Figure 1, it is possible to attach two smaller adapters to both FFNs. We call
| System | WE | EE | CMN | SSA | SA | SEA | CJK | Avg |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| w/ LN | **15.8** | **9.9** | **18.4** | **33.6** | **19.4** | **13.0** | 27.0 | **19.4** |
| w/o LN | **15.8** | **9.9** | **18.4** | **33.6** | 19.7 | 13.2 | **25.3** | **19.4** |

Table 1: Ablation studies on layer normalization. The baseline system is the serial adapter shown in Figure 1(a), and Word Error Rate (WER) is reported grouped by geographical language area: Western European (WE), Eastern Europe (EE), Central-Asia/Middle-East/North-Africa (CMN), Sub-Saharan Africa (SSA), South Asia (SA), South-East Asia (SEA), and Chinese, Japanese, and Korean (CJK).
Figure 1: Conformer block with different residual adapter variations. One conformer block is shown; adapters (represented by the fire icon) are added to all conformer blocks without weight sharing. The left image (a) shows the serial adapter inserted between conformer blocks. The other two show the parallel adapters studied in this paper; the difference between the middle (b) and the right (c) is whether one wide adapter or two narrow adapters are used.
this approach Two Parallel Adapter, or TPA. TPA achieves superior performance compared to the two approaches studied before and obtains the best performance on the Fleurs corpus. TPA improves performance on all language groups except CJK (Chinese, Japanese, and Korean), which is a small group with large variations. From this comparison, we conclude that adapting both FFNs within the conformer block is beneficial and improves performance.
#### 4.1.3 Comparison between different sizes
We further investigate performance enhancement by modifying the number of parameters in the TPA. The parameter count is controlled by the width of the residual adapters, i.e., the size of the intermediate layer. Since the adapter's input and output have the same dimension as the surrounding layers, the size depends entirely on the width, as demonstrated in Figure 2. The comparison is provided in Figure 3. The results suggest that performance improves with adapter width. Even on the data-constrained Fleurs dataset, TPA achieved performance comparable to full-network finetuning with only 10% of the original number of free parameters. Performance degradation was less than 5.6% with only 1.4% of the parameters.
### Analysis
Based on these results, we want to study whether a large width like 1024 is necessary, or whether it is possible to shrink the size. Recall that we use ReLU as the non-linear activation function; as shown in Figure 2, it is possible that some intermediate neurons are never activated, meaning their weighted sum is always non-positive. In this case, such a neuron can be removed without changing the final result. Also, it is possible that each language/task only uses a small subset of all neurons, as discussed in [32]. We want to study whether we can prune the size of the adapter based on the activation statistics.
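A sketch of this activation-statistics computation, assuming the `ResidualAdapter` class from the sketch in Section 3; the helper name and the batch layout are illustrative:

```python
import torch


@torch.no_grad()
def prunable_mask(adapter, batches):
    """Mark bottleneck neurons whose ReLU input is never positive over the
    given batches; such neurons can be removed without changing the output."""
    ever_on = None
    for x in batches:                          # x: (batch, time, dim)
        pre = adapter.down(adapter.norm(x))    # pre-ReLU activations
        on = (pre > 0).reshape(-1, pre.shape[-1]).any(dim=0)
        ever_on = on if ever_on is None else ever_on | on
    return ~ever_on                            # True = safe to prune
```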
We use the best system, TPA(1024), and compute the activation statistics on the training set, which covers all 102 languages. From Figure 4, higher layers clearly require a large adapter, and their neurons are mostly activated. For the lower layers, the conclusion is mixed. For the first FFN block, about 10% to 20% of the neurons could be pruned. For the second FFN block, about 60% to 80% of the neurons could be
| System | WE | EE | CMN | SSA | SA | SEA | CJK | Avg |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Serial (1a) | 15.8 | 9.9 | 18.4 | 33.6 | 19.7 | 13.2 | **25.3** | 19.4 |
| Parallel (1b) | 16.1 | 10.0 | 18.6 | 33.7 | 19.6 | 12.9 | 25.8 | 19.5 |
| Parallel (Conv) | 15.8 | 10.1 | 18.2 | 34.3 | 19.5 | 13.1 | 25.2 | 19.3 |
| TPA (1c) | 15.6 | **9.8** | **18.0** | **33.3** | **19.0** | **12.8** | 26.1 | **19.1** |

Table 2: Ablation studies on the different adapter structures listed in Figure 1. Results are reported in Word Error Rate (WER).
Figure 4: Number of activated neurons in different conformer blocks. It is possible to prune 20% to 80% of the neurons in the first 12 layers, depending on the location of the adapter.
Figure 3: Ablation studies on the width of TPA. The width controls the size of each individual residual adapter, and TPA includes two residual adapters per block. As the width increases, the number of parameters increases and the model takes longer to load.
Figure 2: One residual adapter. The residual connection is not plotted for simplicity. The size of the intermediate layer controls the **width** of the adapter. Since the ReLU activation function is used, only a fraction of the neurons are activated.
pruned. Our observation matches previous work claiming that adapters are most important for the higher layers [11]. Indeed, when the task changes from pre-training (MLM) to finetuning (ASR), the upper layers need more adaptation. Also, the activation statistics computed over all languages do not differ much from the highest activation rate for a single language.
Based on this observation, we want to know whether the frozen encoder could be pruned as well, since this also speeds up inference. If we could shrink the size of the FFNs in some layers, we could further reduce the computation cost. The result is shown in Figure 5: it is possible to prune 20% to 40% of the neurons from layers 2-10 without performance loss.
### Studies on different tasks and datasets
Next, we test the proposed TPA approach on different speech tasks and datasets to demonstrate whether it can serve as a general approach: serving the same frozen pre-trained encoder with different task-dependent adapters. Our studies focus on two challenging scenarios: speech recognition and speech translation. For speech recognition, we test on one additional public benchmark, SpeechStew, in addition to the multilingual dataset Fleurs. SpeechStew [18] is a combination of seven public corpora, including AMI, Common Voice, English Broadcast News, LibriSpeech, Switchboard/Fisher, TED-LIUM v3, and Wall Street Journal. All of them are English datasets with different recording environments. The speech recognition results are reported in Table 3. TPA achieves the same or better results as full-network finetuning on half of the datasets and slightly worse results on the more challenging ones, while only 2% of the parameters are finetuned.
We also test our approach on speech translation, using CoVoST 2 [33] to benchmark multilingual speech translation. Following [34], we choose the multilingual XX-to-English task, which covers 21 source languages. The amount of training data varies between 1 and 264 hours depending on the source language. Results are reported in Table 4. We observe only a 0.9 BLEU difference between full finetuning and the proposed TPA approach.
## 5 Summary
We propose a new variation of the residual adapter, Two Parallel Adapters (TPA), for the Conformer architecture, which is widely used in many speech applications. TPA adapts both feed-forward networks of a conformer with parallel residual adapters. Given a pre-trained giant encoder, we attach TPA and a randomly initialized decoder. During adaptation, only the TPA and the decoder are updated. Comparative studies reveal that on multilingual speech recognition tasks, TPA matches the performance of full-network finetuning while updating only 11% of the parameters. Furthermore, TPA outperforms both the serial and parallel adapters proposed in the literature. Ablation studies show TPA could also be pruned, especially in lower layers, without performance loss. Finally, extensive studies suggest that TPA is a general approach applicable to many downstream speech tasks, including speech recognition and translation.
In the future, we plan to focus more on improving the performance of small TPAs. One potential direction is to
| System | % of updated parameters | BLEU |
| :--- | :---: | :---: |
| Finetune All | 100.0% | **28.7** |
| Serial(512) | 1.4% | 27.3 |
| TPA(256) | **1.4%** | 27.8 |

Table 4: Performance on CoVoST 2. Results are reported in BLEU score.
Figure 5: Number of activated neurons within each FFN in different conformer blocks. Not many neurons can be pruned from the frozen encoder, except for layers 2-10.
| Dataset | Subset | Finetune | Serial(256) | TPA(128) | TPA(256) |
| :--- | :--- | :---: | :---: | :---: | :---: |
| AMI | IHM | **8.9** | 9.9 | 9.7 | 9.7 |
| AMI | SDM1 | **20.1** | 22.3 | 21.2 | 20.9 |
| Common Voice | | **9.0** | 9.8 | 9.4 | **9.0** |
| LibriSpeech | clean | 2.0 | 2.0 | **1.9** | **1.9** |
| LibriSpeech | other | 4.0 | 3.8 | 3.8 | **3.7** |
| Switchboard/Fisher | SWBD | **4.5** | 4.6 | 5.6 | **4.5** |
| Switchboard/Fisher | CH | **8.1** | 9.2 | 8.9 | 8.9 |
| WSJ | | 1.5 | 1.6 | 1.6 | **1.4** |
| TED-LIUM | | **5.7** | 6.3 | 6.4 | 6.1 |
| Fleurs | | **18.0** | 19.4 | 19.1 | 18.7 |

Table 3: Speech recognition results on SpeechStew and Fleurs. Results are reported in Word Error Rate (WER).
perform iterative pruning, which prunes non-activated neurons iteratively. We could even introduce a sparsity loss on the adapter neurons to make the ReLU activations sparse, so that more neurons can be pruned. Another possibility is to perform knowledge distillation from a large TPA to a small TPA.
|
2310.12857 | Lie-Trotter means in JB-algebras | We initiate the study of Lie-Trotter means in JB-algebras, which is an
extension of Lie-Trotter formulas in JB-algebras. We show that two-variable
Lie-Trotter means include the weighted arithmetic mean, weighted harmonic mean,
weighted geometric mean, and weighted spectral geometric mean. Consequently,
several generalized Lie-Trotter formulas in JB-algebras are derived.
Additionally, we demonstrate that the Sagae-Tanabe mean and Hansen's induction
mean in JB-algebras are multivariable Lie-Trotter means. In the end, using
arithmetic-geometric-harmonic mean inequalities we provide a characterization
of multivariate Lie-Trotter means. | Zhenhua Wang | 2023-10-19T16:08:26Z | http://arxiv.org/abs/2310.12857v3 | # Lie-Trotter means in JB-algebras+
###### Abstract
We initiate the study of Lie-Trotter means in JB-algebras, which is an extension of Lie-Trotter formulas in JB-algebras. We show that two-variable Lie-Trotter means include the weighted arithmetic mean, weighted harmonic mean, weighted geometric mean, and weighted spectral geometric mean. Consequently, several generalized Lie-Trotter formulas in JB-algebras are derived. Additionally, we demonstrate that the Sagae-Tanabe mean and Hansen's induction mean in JB-algebras are multivariable Lie-Trotter means. In the end, using arithmetic-geometric-harmonic mean inequalities we provide a characterization of multivariate Lie-Trotter means.
Key words: Lie-Trotter means, JB-algebra, Spectral geometric mean, Sagae-Tanabe mean, Hansen's induction mean.
MSC 2020: Primary 46H70, 47A64, 17C90, 15A16; Secondary 17C65, 81R15, 81P45, 94C99.
###### Contents
* 1 Introduction
* 1.1 Some common notation
* 2 Spectral geometric means
* 2.1 Geometric mean revisited
* 2.2 Spectral Geometric mean
* 3 Two-variable Lie-Trotter mean
* 4 Multivariate Lie-Trotter mean
* 4.1 Sagae-Tanabe mean
* 4.2 Hansen's inductive mean
* 5 A characterization of multivariate Lie-Trotter means
## 1 Introduction
In 1934, Jordan formalized quantum theory based on the structure of Jordan algebras [8]. These algebras, characterized by their non-associative properties, provide a broader framework for investigating quantum mechanics. The initial conjecture was that this quantum formulation could extend its applicability to relativity and nuclear phenomena [8]. Alternatively, Jordan algebras
serve as a conceptual extension of real/complex quantum theory. For instance, in conventional quantum theory the product of two observables (self-adjoint operators) need not be an observable, whereas the Jordan product of two observables is always an observable. This research direction is motivated by the fact that observables in a quantum system naturally form a JB-algebra, which is non-associative. Hence, JB-algebras are considered objects of interest in the study of quantum systems. Beyond this, JB-algebras have found extensive applications in various domains, including analysis, geometry, operator theory, and more. Further details on these applications can be found in [4, 14, 15].
The Lie-Trotter formula has played a significant role in quantum mechanics, quantum computing, and quantum simulations. Its significance lies in its ability to decompose a complicated quantum system into simpler systems. Recently, utilizing the Jordan product, the study of generalized Lie-Trotter formulas and Suzuki-type error estimates in JB-algebras and Banach algebras was initiated by the author and collaborators [3, 18]. We demonstrated that Lie-Trotter formulas and Suzuki-type estimates still hold for an arbitrary number of elements in a non-associative JB-algebra.
In a different direction, the notion of mean in JB-algebras was introduced by Lawson and Lim [10]. Later, unaware of their work, the author and his collaborators independently studied the weighted means and their properties in the setting of JB-algebras [16]. Many identities and inequalities for JB-algebras were established [16].
This paper extends the Lie-Trotter formulas established in [3] and further explores the study of means in JB-algebras by introducing the concept of Lie-Trotter means in the setting of JB-algebras. Section 2 is dedicated to the study of the spectral geometric mean, along with its fundamental properties. In Section 3, we show that the weighted arithmetic mean, weighted harmonic mean, weighted geometric mean, and weighted spectral geometric mean are two variable Lie-Trotter means. As a result, several generalized Lie-Trotter formulas in JB-algebras are derived. Furthermore, in Section 4, we prove that Sagae-Tanabe mean and Hansen's induction mean in JB-algebras are multivariable Lie-Trotter means. We conclude our paper by presenting a characterization of multivariate Lie-Trotter means through the arithmetic-geometric-harmonic mean inequalities.
### Some common notation
We give some background on JB-algebras and fix the notation. For more information, we refer the reader to [2, 6, 16, 17].
**Definition 1.1**.: A **Jordan algebra**\(\mathfrak{A}\) over real number is a vector space \(\mathfrak{A}\) over \(\mathbb{R}\) equipped with a bilinear product \(\circ\) that satisfies the following identities:
\[A\circ B=B\circ A,\quad(A^{2}\circ B)\circ A=A^{2}\circ(B\circ A),\]
where \(A^{2}=A\circ A\).
Any associative algebra \(\mathfrak{A}\) has an underlying Jordan algebra structure with Jordan product given by
\[A\circ B=(AB+BA)/2.\]
Jordan subalgebras of such underlying Jordan algebras are called **special**.
**Definition 1.2**.: A **JB-algebra** is a Jordan algebra \(\mathfrak{A}\) over \(\mathbb{R}\) with a complete norm satisfying the following conditions for \(A,B\in\mathfrak{A}:\)
\[\|A\circ B\|\leq\|A\|\|B\|,\ \ \|A^{2}\|=\|A\|^{2},\ \ \text{and}\ \ \|A^{2}\|\leq\|A^{2}+B^{2}\|.\]
As an important object in physics, the set of bounded self-adjoint operators on a Hilbert space \(H\), denoted by \(B(H)_{sa}\), which is the set of observables of a quantum mechanical system, is a JB-algebra. However, it is not an associative algebra.
**Definition 1.3**.: Let \(\mathfrak{A}\) be a unital JB-algebra. We say \(A\in\mathfrak{A}\) is **invertible** if there exists \(B\in\mathfrak{A},\) which is called **Jordan inverse** of \(A,\) such that
\[A\circ B=I\ \ \text{ and }\ \ A^{2}\circ B=A.\]
An element \(S\) of a JB-algebra \(\mathfrak{A}\) satisfying \(S^{2}=I\) is called a **symmetry**. Obviously, \(S\) is invertible and its inverse is itself.
The **spectrum** of \(A\) is defined by
\[\text{Sp}(A):=\{\lambda\in\mathbb{R}\ |\,A-\lambda I\ \ \text{is not invertible in}\,\mathfrak{A}\}.\]
If \(\text{Sp}(A)\subset[0,\infty),\) we say \(A\) is **positive**, and write \(A\geq 0.\)
Let \(\mathfrak{A}\) be a unital JB-algebra and \(\mathfrak{A}_{+}\) be the closed convex cone of positive elements in \(\mathfrak{A}.\) And, we denote \(\mathfrak{A}_{++}\) as the open convex cone of positive invertible elements in \(\mathfrak{A}.\)
**Definition 1.4**.: Let \(\mathfrak{A}\) be a unital JB-algebra and \(A,B\in\mathfrak{A}\). We define a map \(\mathcal{U}_{A}\) on \(\mathfrak{A}\) by
\[\mathcal{U}_{A}(B):=\{ABA\}:=2(A\circ B)\circ A-A^{2}\circ B.\]
Note that \(ABA\) is meaningless unless \(\mathfrak{A}\) is special, in which case \(\{ABA\}=ABA.\) The following proposition will be used repeatedly in this paper.
**Proposition 1.5**.: [2, Lemma 1.23-1.25] _Let \(\mathfrak{A}\) be a unital JB-algebra and \(A,B\in\mathfrak{A}.\)_
1. _If_ \(B\) _is positive, then_ \(\mathcal{U}_{A}(B)=\{ABA\}\geq 0.\)__
2. _If_ \(A,B\) _are invertible, then_ \(\{ABA\}\) _is invertible with inverse_ \(\{A^{-1}B^{-1}A^{-1}\}.\)__
3. _If_ \(A\) _is invertible, then_ \(\mathcal{U}_{A}\) _has a bounded inverse_ \(\mathcal{U}_{A^{-1}}\,.\)__
The weighed means in JB-algebras were introduced in [16] as follows
**Definition 1.6**.: For two positive invertible elements \(A,B\) in a unital JB-algebra \(\mathfrak{A}\) and \(0\leq\lambda\leq 1,\) we denote
\[\mathcal{A}_{2}(A,B)=A\triangledown_{\lambda}B=(1-\lambda)A+\lambda B;\]
\[\mathcal{G}_{2}(A,B)=A\#_{\lambda}B=\left\{A^{\frac{1}{2}}\{A^{-\frac{1}{2}} BA^{-\frac{1}{2}}\}^{\lambda}A^{\frac{1}{2}}\right\};\]
\[\mathcal{H}_{2}(A,B)=A!_{\lambda}B=\left((1-\lambda)A^{-1}+\lambda B^{-1} \right)^{-1}.\]
as **weighted arithmetic mean**, **weighted geometric mean** and **weighted harmonic mean** respectively. When \(\lambda=1/2,\) we denote \(A\#_{\lambda}B\) as \(A\#B\).
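Since the symmetric matrices with the product \(A\circ B=(AB+BA)/2\) form a special JB-algebra, these weighted means can be computed concretely for positive definite matrices. A minimal numpy sketch, for illustration only; all function names are our own assumptions:

```python
import numpy as np


def mpow(X, p):
    """X**p for a symmetric positive definite matrix X, via eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return (V * w**p) @ V.T


def arithmetic(A, B, lam):
    """Weighted arithmetic mean (1-lam)*A + lam*B."""
    return (1.0 - lam) * A + lam * B


def harmonic(A, B, lam):
    """Weighted harmonic mean ((1-lam)*A^{-1} + lam*B^{-1})^{-1}."""
    return np.linalg.inv((1.0 - lam) * np.linalg.inv(A) + lam * np.linalg.inv(B))


def geometric(A, B, lam):
    """Weighted geometric mean A #_lam B; here {XYX} reduces to XYX."""
    Ah, Aih = mpow(A, 0.5), mpow(A, -0.5)
    return Ah @ mpow(Aih @ B @ Aih, lam) @ Ah
```

For \(\lambda=1/2\), the output of `geometric` can be checked against the Riccati equation of Proposition 2.1 below: `np.allclose(X @ np.linalg.inv(A) @ X, B)` holds up to rounding for `X = geometric(A, B, 0.5)`.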
## 2 Spanning geometric means
### Geometric mean revisited
The following result establishes the uniqueness of the positive invertible solution of the generalized Riccati equation in JB-algebras; it is no doubt known to experts, see e.g. [12, Section 2.5].
**Proposition 2.1**.: _Let \(A,B\) be two positive invertible elements in a JB-algebra \(\mathfrak{A}\). The Riccati equation \(\{XA^{-1}X\}=B\) has a unique positive invertible solution \(A\#B\)._
Proof.: By Macdonald Theorem (see e.g. Theorem 1.13 [2]), a direct computation shows that
\[\left\{(A\#B)A^{-1}(A\#B)\right\} =\left\{\left\{A^{\frac{1}{2}}\{A^{-\frac{1}{2}}BA^{-\frac{1}{2}} \}^{\frac{1}{2}}A^{\frac{1}{2}}\right\}A^{-1}\left\{A^{\frac{1}{2}}\{A^{-\frac {1}{2}}BA^{-\frac{1}{2}}\}^{\frac{1}{2}}A^{\frac{1}{2}}\right\}\right\}\] \[=\left\{A^{\frac{1}{2}}\left\{\left\{A^{-\frac{1}{2}}BA^{-\frac{1 }{2}}\right\}^{\frac{1}{2}}\{A^{\frac{1}{2}}A^{-1}A^{\frac{1}{2}}\}\left\{A^{- \frac{1}{2}}BA^{-\frac{1}{2}}\right\}^{\frac{1}{2}}\right\}A^{\frac{1}{2}}\right\}\] \[=\left\{A^{\frac{1}{2}}\left\{A^{-\frac{1}{2}}BA^{-\frac{1}{2}} \right\}A^{\frac{1}{2}}\right\}=B.\]
Suppose \(X,Y\) are two positive solutions to the Riccati equation. Then by [2, equality (1.15)]
\[\left\{A^{-1/2}XA^{-1/2}\right\}^{2}=\left\{A^{-1/2}\left\{XA^{-1}X\right\}A^ {-1/2}\right\}=\left\{A^{-1/2}\left\{YA^{-1}Y\right\}A^{-1/2}\right\}=\left\{ A^{-1/2}YA^{-1/2}\right\}^{2}.\]
The uniqueness of positive square roots in JB-algebra indicates that
\[\left\{A^{-1/2}XA^{-1/2}\right\}=\left\{A^{-1/2}YA^{-1/2}\right\}.\]
Therefore, \(X=Y.\)
**Proposition 2.2**.: _Let \(A,B\) be two positive invertible elements in a unital JB-algebra \(\mathfrak{A}\,.\) Then the map \(d:\mathfrak{A}_{++}\times\mathfrak{A}_{++}\rightarrow[0,\infty)\) defined by_
\[d(A,B)=2\left\|\log(A^{-1}\#B)\right\|\]
_has the following properties:_
1. \(d\) _is a semi-metric._
2. \(d(\alpha A,\alpha B)=d(A,B)\) _for any_ \(\alpha>0.\)__
3. \(d(A^{-1},B^{-1})=d(A,B)\)__
4. \(d\left(\{UAU\},\{UBU\}\right)=d(A,B)\) _for any symmetry_ \(U\in\mathfrak{A}\,.\)__
Proof.: For the proof of (i), the non-negativity of \(d(A,B)\) is obvious. If \(A=B\), then \(A^{-1}\#B=I.\) Thus, \(\log(A^{-1}\#B)=0.\) Conversely, if \(d(A,B)=0,\) then \(\log(A^{-1}\#B)=0.\) By functional calculus in JB-algebra, \(A^{-1}\#B=\exp(0)=I,\) which implies that \(\{A^{1/2}BA^{1/2}\}^{1/2}=A.\) Thus, \(B=A\) since
\(B=\{A^{-1/2}A^{2}A^{-1/2}\}=A.\) By [16, Collory 1 and Proposition 6(v)], \((A^{-1}\#B)^{-1}=A\#B^{-1}=B^{-1}\#A.\) Therefore,
\[d(A,B)=2\left\|\log(A^{-1}\#B)\right\|=2\left\|\log(A^{-1}\#B)^{-1}\right\|=2 \left\|\log(B^{-1}\#A)\right\|=d(B,A).\]
(ii) follows from the fact \((\alpha A)^{-1}\#(\alpha B)=A^{-1}\#B\) by [16, Proposition 6(i)]
For the proof of (iii), by [16, Proposition 6(v)]
\[d(A^{-1},B^{-1})=2\left\|\log(A\#B^{-1})\right\|=2\left\|\log(B\#A^{-1}) \right\|=d(B,A)=d(A,B).\]
As for (iv), [16, equation 16] implies that \(\log\left(\{U(A^{-1}\#B)U\}\right)=\{U\log\left(A^{-1}\#B\right)U\}.\) Thus,
\[d\big{(}\{UAU\},\{UBU\}\big{)} =2\left\|\log(\{UAU\}^{-1}\#\{UBU\})\right\|=2\left\|\log\left( \{U(A^{-1}\#B)U\}\right)\right\|\] \[=2\left\|\{U\log\left(A^{-1}\#B\right)U\}\right\|=2\left\|\log(A^ {-1}\#B)\right\|\] \[=d(A,B)\]
### Spectral Geometric mean
**Definition 2.3**.: Let \(A\) and \(B\) be two positive invertible elements in a unital JB-algebra \(\mathfrak{A}\) and \(\lambda\in\mathbb{R}\,.\) The **weighted spectral geometric mean** of \(A\) and \(B\) is defined as
\[A\natural_{\lambda}B:=\left\{\left(A^{-1}\#B\right)^{\lambda}A\left(A^{-1}\#B \right)^{\lambda}\right\}\]
If \(\lambda=1/2,\) then we simply write \(A\natural_{1/2}B\) as \(A\natural B.\)
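Continuing the numpy sketch from Section 1.1 (again with names that are our own assumptions), the weighted spectral geometric mean reduces, for positive definite matrices, to:

```python
def spectral_geometric(A, B, lam):
    """A natural_lam B = (A^{-1} # B)^lam  A  (A^{-1} # B)^lam (Definition 2.3)."""
    M = mpow(geometric(np.linalg.inv(A), B, 0.5), lam)
    return M @ A @ M
```

Proposition 2.4 below can then be verified numerically: for `X = spectral_geometric(A, B, lam)`, the matrix `geometric(np.linalg.inv(A), X, 0.5)` agrees with `mpow(geometric(np.linalg.inv(A), B, 0.5), lam)` up to rounding.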
We note that the results from [11, Section 4] for the weighted spectral geometric mean on symmetric cones can be generalized to positive invertible elements in a unital JB-algebra.
**Proposition 2.4**.: _Let \(\mathfrak{A}\) be a unital JB-algebra and \(A,B\in\mathfrak{A}_{++}\,.\) Then \(X=A\natural_{\lambda}B\) is the unique positive invertible solution to the equation_
\[(A^{-1}\#B)^{\lambda}=A^{-1}\#X\]
Proof.: If \(X=A\natural_{\lambda}B,\) then
\[A^{-1}\#X =\left\{A^{-\frac{1}{2}}\left\{A^{\frac{1}{2}}XA^{\frac{1}{2}} \right\}^{\frac{1}{2}}A^{-\frac{1}{2}}\right\}\] \[=\left\{A^{-\frac{1}{2}}\left\{A^{\frac{1}{2}}\left\{\left(A^{-1} \#B\right)^{\lambda}A\left(A^{-1}\#B\right)^{\lambda}\right\}A^{\frac{1}{2}} \right\}^{\frac{1}{2}}A^{-\frac{1}{2}}\right\} \tag{2.1}\]
The uniqueness of positive square root in JB-algebras and [2, Jordan identity (1.15)] imply
\[\left\{A^{\frac{1}{2}}\left\{\left(A^{-1}\#B\right)^{\lambda}A\left(A^{-1}\#B \right)^{\lambda}\right\}A^{\frac{1}{2}}\right\}^{\frac{1}{2}}=\left\{A^{\frac {1}{2}}\left(A^{-1}\#B\right)^{\lambda}A^{\frac{1}{2}}\right\}\]
Then, the equation (2.1) becomes
\[A^{-1}\#X =\left\{A^{-\frac{1}{2}}\left\{A^{\frac{1}{2}}\left(A^{-1}\#B\right) ^{\lambda}A^{\frac{1}{2}}\right\}A^{-\frac{1}{2}}\right\}\] \[=\left(A^{-1}\#B\right)^{\lambda}.\]
If \(Y\in\mathfrak{A}\) is another solution, then \(A^{-1}\#X=A^{-1}\#Y.\) By Proposition 2.1,
\[X=\left\{(A^{-1}\#X)A(A^{-1}\#X)\right\}=\left\{(A^{-1}\#Y)A(A^{-1}\#Y) \right\}=Y.\]
**Remark 2.5**.: Proposition 2.4 implies \(A\natural_{1}B=B.\)__
We include some fundamental properties of the weighted spectral geometric mean in JB-algebras, which are analogous to those on symmetric cones.
**Proposition 2.6**.: _Let \(A,B\) be two positive invertible elements in a unital JB-algebra \(\mathfrak{A}\) and \(\lambda\in[0,1].\)_
1. \((\alpha A)\natural_{\lambda}(\beta B)=\alpha^{1-\lambda}\beta^{\lambda}(A \natural_{\lambda}B),\) _for any nonnegative numbers_ \(\alpha\) _and_ \(\beta.\)__
2. \(\{C(A\natural_{\lambda}B)C\}=\{CAC\}\natural_{\lambda}\{CBC\},\) _for any symmetry_ \(C\) _in_ \(\mathcal{A}.\)__
3. \((A\natural_{\lambda}B)=B\natural_{1-\lambda}A.\)__
4. \((A\natural_{\lambda}B)^{-1}=A^{-1}\natural_{\lambda}B^{-1}.\)__
Proof.: For (i), by [16, Proposition 6(i)],
\[(\alpha A)\natural_{\lambda}(\beta B) =\left\{\left[\left(\alpha A\right)^{-1}\#\left(\beta B\right) \right]^{\lambda}\left(\alpha A\right)\left[\left(\alpha A\right)^{-1}\# \left(\beta B\right)\right]^{\lambda}\right\}\] \[=\left\{\left[\alpha^{-\frac{\lambda}{2}}\beta^{\frac{\lambda}{2} }\left(A^{-1}\#B\right)^{\lambda}\right]\left(\alpha A\right)\left[\alpha^{- \frac{\lambda}{2}}\beta^{\frac{\lambda}{2}}\left(A^{-1}\#B\right)^{\lambda} \right]\right\}\] \[=\alpha^{1-\lambda}\beta^{\lambda}\left\{\left(A^{-1}\#B\right)^ {\lambda}A\left(A^{-1}\#B\right)^{\lambda}\right\}.\]
Proof of (ii). Denote \(L=\{CAC\}\natural_{\lambda}\{CBC\}.\) According to [16, Proposition 6(iv)]
\[L =\left\{\left[\left\{CAC\right\}^{-1}\#\left\{CBC\right\}\right]^ {\lambda}\left(\{CAC\}\right)\left[\{CAC\}^{-1}\#\left\{CBC\right\}\right]^{ \lambda}\right\}\] \[=\left\{\left\{C(A^{-1}\#B)C\right\}^{\lambda}\left(\{CAC\} \right)\left\{C(A^{-1}\#B)C\right\}^{\lambda}\right\}\]
From [13, Proposition 2.1], for \(\lambda\in(0,1)\) and \(x\in(0,\infty),\)
\[x^{\lambda}=\frac{\sin(\lambda\pi)}{\pi}\int_{0}^{\infty}t^{\lambda-1}(1+tx^{- 1})^{-1}dt.\]
By functional calculus in JB-algebras
\[\{CAC\}^{\lambda} =\frac{\sin(\lambda\pi)}{\pi}\int_{0}^{\infty}t^{\lambda-1}\left(1+ t\{CAC\}^{-1}\right)^{-1}dt\] \[=\frac{\sin(\lambda\pi)}{\pi}\int_{0}^{\infty}t^{\lambda-1}\{C^{- 1}(1+tA^{-1})C^{-1}\}^{-1}dt\] \[=\frac{\sin(\lambda\pi)}{\pi}\int_{0}^{\infty}t^{\lambda-1}\{C(1+ tA^{-1})^{-1}C\}dt\] \[=\left\{C\left[\frac{\sin(\lambda\pi)}{\pi}\int_{0}^{\infty}t^{ \lambda-1}(1+tA^{-1})^{-1}dt\right]C\right\}\] \[=\left\{CA^{\lambda}C\right\} \tag{2.2}\]
Therefore,
\[L=\left\{\left\{C(A^{-1}\#B)^{\lambda}C\right\}\left\{CAC\right\}\left\{C(A^{ -1}\#B)^{\lambda}C\right\}\right\}.\]
By 2.8.6 in [6],
\[L =\left\{C\left\{\left(A^{-1}\#B\right)^{\lambda}A\left(A^{-1}\#B \right)^{\lambda}\right\}C\right\}\] \[=\left\{C(A\natural_{\lambda}B)C\right\}\]
For (iii), we denote
\[Z=\left\{\left(B^{-1}\#A\right)^{\lambda-1}\left\{\left(A^{-1}\#B\right)^{ \lambda}A\left(A^{-1}\#B\right)^{\lambda}\right\}\left(B^{-1}\#A\right)^{ \lambda-1}\right\} \tag{2.3}\]
From [16, Corollary 1 and Proposition 6(iv)], we know
\[Z=\left\{\left(A^{-1}\#B\right)^{1-\lambda}\left\{\left(A^{-1}\#B\right)^{ \lambda}A\left(A^{-1}\#B\right)^{\lambda}\right\}\left(A^{-1}\#B\right)^{1- \lambda}\right\}\]
By Shirshov-Cohen theorem for JB-algebras (see e.g. [6, Theorem 7.2.5]),
\[Z=\left\{\left(A^{-1}\#B\right)A\left(A^{-1}\#B\right)\right\}=B.\]
Therefore, according to equation (2.3)
\[\left\{\left(B^{-1}\#A\right)^{1-\lambda}B\left(B^{-1}\#A\right)^{1-\lambda} \right\}=\left\{\left(A^{-1}\#B\right)^{\lambda}A\left(A^{-1}\#B\right)^{ \lambda}\right\}.\]
Proof of (iv). A same reasoning as in the proof of (iii) implies
\[\left(A\natural_{\lambda}B\right)^{-1} =\left\{\left(A^{-1}\#B\right)^{\lambda}A\left(A^{-1}\#B\right)^ {\lambda}\right\}^{-1}\] \[=\left\{\left(A^{-1}\#B\right)^{-\lambda}A^{-1}\left(A^{-1}\#B \right)^{-\lambda}\right\}\] \[=A^{-1}\natural_{\lambda}B^{-1}.\]
We establish upper and lower bounds for the weighted spectral geometric mean of positive invertible elements in unital JB-algebras. This result finds its roots in [9, Proposition 2.3], which pertains to matrices.
**Proposition 2.7**.: _Let \(A,B\) be two positive invertible elements in \(\mathfrak{A}\) and \(\lambda\in[0,1].\) Then_
\[2^{1+\lambda}(A+B^{-1})^{-\lambda}-A^{-1}\leqslant A\natural_{\lambda}B \leqslant\left[2^{1+\lambda}(A^{-1}+B)^{-\lambda}-A\right]^{-1}.\]
Proof.: Let \(X=A\natural_{\lambda}B\) for \(\lambda\in[0,1].\) By Proposition 2.4, Young's inequalities for JB-algebras (see e.g. [16, Theorem 3]) and [16, Proposition 5], we have
\[\left(\frac{A+X^{-1}}{2}\right)^{-1}\leqslant A^{-1}\#X=(A^{-1}\#B)^{\lambda} \leqslant\left(\frac{A^{-1}+B}{2}\right)^{\lambda}\]
The desired result follows from similar argument as in the proof of [9, Proposition 2.3].
**Theorem 2.8**.: (Cf. [5, Theorem 3.4]) _For \(\lambda\in(0,1),\) let \(\mathcal{G}_{\lambda}:\mathfrak{A}_{++}\times\mathfrak{A}_{++}\to\mathfrak{A} _{++}\) satisfy \(\mathcal{G}_{\lambda}(A,A)=A\) for all \(A\in\mathfrak{A}_{++}\) and_
\[\mathcal{G}_{\lambda}(A,B)=I\Rightarrow B=A^{1-1/\lambda}\text{ for any }A,B\in\mathfrak{A}_{++}. \tag{2.4}\]
_Then \(A\natural_{\lambda}B\) is the unique solution \(X\in\mathfrak{A}_{++}\) of the equation_
\[\mathcal{G}_{\lambda}(A\#X^{-1},B\#X^{-1})=I.\]
Proof.: Let \(U=A\#X^{-1}\) and \(V=B\#X^{-1}.\) Then \(V=U^{1-1/\lambda}\) since \(G_{\lambda}(U,V)=I.\) Proposition 2.1 indicates \(X=\{U^{-1}AU^{-1}\}=\{V^{-1}BV^{-1}\}.\) Thus,
\[A=\left\{U\left\{V^{-1}BV^{-1}\right\}U\right\}=\{U^{1/\lambda}BU^{1/\lambda}\}\]
where the second equation follows from Macdonald Theorem. Again by Proposition 2.1, we know that \(U^{1/\lambda}=A\#B^{-1}.\) Therefore,
\[X=\left\{(A\#B^{-1})^{-\lambda}A(A\#B^{-1})^{-\lambda}\right\}=\left\{(A^{-1} \#B)^{\lambda}A(A^{-1}\#B)^{\lambda}\right\}=A\natural_{\lambda}B\]
where the second equation follows from [16, Proposition 6(v)].
## 3 Two-variable Lie-Trotter mean
The following result extends [1, Proposition 1.1] to the setting of unital JB-algebras.
**Proposition 3.1**.: _For any differentiable curve \(\gamma:(-\varepsilon,\varepsilon)\to\mathfrak{A}_{++}\) with \(\gamma(0)=I,\)_
\[e^{\gamma^{\prime}(0)}=\lim_{t\to 0}\gamma(t)^{1/t}=\lim_{n\to\infty}\gamma(1/n)^{n}.\]
Proof.: The exponential function \(\exp:\mathfrak{A}\to\mathfrak{A}_{++}\) and the logarithmic function \(\log:\mathfrak{A}_{++}\to\mathfrak{A}\) are well-defined and mutually inverse diffeomorphisms. Following a similar argument as in the proof of [1, Proposition 1.1], we have
\[\gamma^{\prime}(0) =(\log\circ\gamma)^{\prime}(0)=\lim_{t\to 0}\frac{\log(\gamma(t))- \log(\gamma(0))}{t}=\lim_{t\to 0}\frac{\log(\gamma(t))}{t}\] \[=\lim_{t\to 0}\log(\gamma(t)^{1/t})=\lim_{n\to\infty}\log(\gamma(1/n) ^{n}).\]
Therefore, \(\exp\{\gamma^{\prime}(0)\}=\lim_{t\to 0}\gamma(t)^{1/t}=\lim_{n\to\infty} \gamma(1/n)^{n}\).
**Definition 3.2**.: Let \(\lambda\in(0,1).\) A **weighted \(2\)-mean** in a unital JB-algebra \(\mathfrak{A}\) is a map
\[G_{2}(1-\lambda,\lambda;\,\cdot\,):\mathfrak{A}_{++}\times\mathfrak{A}_{++} \to\mathfrak{A}_{++}\]
satisfying \(G_{2}(1-\lambda,\lambda;A,A)=A,\) for any \(A\in\mathfrak{A}_{++}.\)
**Definition 3.3**.: Let \(\lambda\in(0,1).\) A **two-variable Lie-Trotter mean** in a unital JB-algebra \(\mathfrak{A}\) is a weighted \(2\)-mean \(G_{2}(1-\lambda,\lambda;\,\cdot\,):\mathfrak{A}_{++}^{2}\to\mathfrak{A}_{++}\) such that it is differentiable and satisfies
\[\lim_{t\to 0}G_{2}(1-\lambda,\lambda;\gamma_{1}(t),\gamma_{2}(t))^{1/t}=\exp[(1-\lambda)\gamma_{1}^{\prime}(0)+\lambda\gamma_{2}^{\prime}(0)], \tag{3.1}\]
where \(\gamma_{1},\gamma_{2}:(-\varepsilon,\varepsilon)\to\mathfrak{A}_{++}\) are differentiable curves with \(\gamma_{1}(0)=\gamma_{2}(0)=I.\)
**Example 3.4**.: Let \(\gamma_{1},\gamma_{2}:(-\varepsilon,\varepsilon)\to\mathfrak{A}_{++}\) be two differentiable curves with \(\gamma_{1}(0)=\gamma_{2}(0)=I.\) We denote
\[\Gamma_{1}(t) =\mathcal{A}_{2}(1-\lambda,\lambda;\gamma_{1}(t),\gamma_{2}(t))= (1-\lambda)\gamma_{1}(t)+\lambda\gamma_{2}(t)\] \[\Gamma_{2}(t) =\mathcal{H}_{2}(1-\lambda,\lambda;\gamma_{1}(t),\gamma_{2}(t))= [(1-\lambda){\gamma_{1}(t)}^{-1}+\lambda\gamma_{2}(t)^{-1}]^{-1}\] \[\Gamma_{3}(t) =\mathcal{G}_{2}(1-\lambda,\lambda;\gamma_{1}(t),\gamma_{2}(t))= \gamma_{1}(t)\#_{\lambda}\gamma_{2}(t)\] \[\Gamma_{4}(t) =\mathcal{F}_{2}(1-\lambda,\lambda;\gamma_{1}(t),\gamma_{2}(t))= \gamma_{1}(t)\natural_{\lambda}\gamma_{2}(t)\]
We observe that \(\Gamma_{k}\)'s are differentiable curves with \(\Gamma_{k}^{\prime}(0)=(1-\lambda)\gamma_{1}^{\prime}(0)+\lambda\gamma_{2}^{ \prime}(0),\) for any \(1\leq k\leq 4.\) By Proposition 3.1,
\[\exp[(1-\lambda)\gamma_{1}^{\prime}(0)+\lambda\gamma_{2}^{\prime} (0)] =\lim_{t\to 0}[(1-\lambda)\gamma_{1}(t)+\lambda\gamma_{2}(t)]^{1/t}\] \[=\lim_{t\to 0}[(1-\lambda)\gamma_{1}(t)^{-1}+\lambda\gamma_{2}(t)^{ -1}]^{-1/t}\] \[=\lim_{t\to 0}[\gamma_{1}(t)\#_{\lambda}\gamma_{2}(t)]^{1/t}\] \[=\lim_{t\to 0}[\gamma_{1}(t)\natural_{\lambda}\gamma_{2}(t)]^{1/t}\]
Therefore, the weighted arithmetic mean, weighted harmonic mean, weighted geometric mean, and weighted spectral geometric mean are two-variable Lie-Trotter means in \(\mathfrak{A}\).
The following result presents generalized Lie-Trotter formulas in JB-algebras.
**Corollary 3.5**.: _Let \(A,B\) be two elements in a unital JB-algebra \(\mathfrak{A}\). Then_
\[\exp[(1-\lambda)A+\lambda B] =\lim_{t\to 0}\left[(1-\lambda)e^{tA}+\lambda e^{tB}\right]^{ \frac{1}{t}}=\lim_{n\to\infty}\left[(1-\lambda)e^{\frac{A}{n}}+\lambda e^{ \frac{B}{n}}\right]^{n}\] \[=\lim_{t\to 0}\left[(1-\lambda)e^{-tA}+\lambda e^{-tB}\right]^{- \frac{1}{t}}=\lim_{n\to\infty}\left[(1-\lambda)e^{-\frac{A}{n}}+\lambda e^{- \frac{B}{n}}\right]^{-n}\] \[=\lim_{t\to 0}\left(e^{tA}\#_{\lambda}e^{tB}\right)^{\frac{1}{t}}= \lim_{n\to\infty}\left(e^{\frac{A}{n}}\#_{\lambda}e^{\frac{B}{n}}\right)^{n}\] \[=\lim_{t\to 0}\left(e^{tA}\natural_{\lambda}e^{tB}\right)^{1/t}= \lim_{n\to\infty}\left(e^{\frac{A}{n}}\natural_{\lambda}e^{\frac{B}{n}}\right) ^{n}\]
Proof.: Let \(\gamma_{1}(t)=e^{tA}\) and \(\gamma_{2}(t)=e^{tB}\), where \(A,B\in\mathfrak{A}.\) Then \(\gamma_{1}^{\prime}(0)=A\) and \(\gamma_{2}^{\prime}(0)=B,\) and the result follows directly from Example 3.4.
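These formulas can be checked numerically with the helper functions sketched earlier (illustrative code, not part of the proof); for small \(t\), all four expressions agree with the left-hand side up to an error of order \(t\):

```python
def mexp(X):
    """exp(X) for a symmetric matrix X, via eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return (V * np.exp(w)) @ V.T


rng = np.random.default_rng(0)
S = rng.standard_normal((4, 4)); A = (S + S.T) / 8
S = rng.standard_normal((4, 4)); B = (S + S.T) / 8
lam, t = 0.3, 1e-4

lhs = mexp((1.0 - lam) * A + lam * B)
for mean in (arithmetic, harmonic, geometric, spectral_geometric):
    rhs = mpow(mean(mexp(t * A), mexp(t * B), lam), 1.0 / t)
    assert np.allclose(lhs, rhs, atol=1e-3)
```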
## 4 Multivariate Lie-Trotter mean
Let \(\omega=(\omega_{1},\cdots,\omega_{n})\in\Delta_{n},\) the simplex of positive probability vectors in \(\mathbb{R}^{n}\,.\)
**Definition 4.1**.: A **weighted \(n\)-mean**\(G_{n}\) on \(\mathfrak{A}_{++}\) for \(n\geq 2\) is a map \(G_{n}(\omega;\cdot):\mathfrak{A}_{++}^{n}\longrightarrow\mathfrak{A}_{++}\) satisfying \(G_{n}(\omega;A,\cdots,A)=A\) for any \(A\in\mathfrak{A}_{++}.\) The weighted \(n\)-mean \(G_{n}(\omega;\cdot)\) is called a **multivariate Lie-Trotter mean** if it satisfies
\[\lim_{t\to 0}G_{n}(\omega;\gamma_{1}(t),\cdots,\gamma_{n}(t))^{1/t}= \exp\left(\sum_{k=1}^{n}\omega_{k}\gamma_{k}^{\prime}(0)\right), \tag{4.1}\]
where \(\gamma_{k}:(-\varepsilon,\varepsilon)\rightarrow\mathfrak{A}_{++}\) is differentiable with \(\gamma_{k}(0)=I,\) for any \(1\leq k\leq n.\)
For \(\mathbb{A}=(A_{1},A_{2},\cdots,A_{n})\in\mathfrak{A}_{++}^{n},\)\(t\in\mathbb{R}\) and any invertible element \(C\) in \(\mathfrak{A}\), we denote
\[\mathbb{A}^{t} :=(A_{1}^{t},A_{2}^{t},\cdots,A_{n}^{t})\] \[\left\{C\,\mathbb{A}\,C\right\} :=(\left\{CA_{1}C\right\},\left\{CA_{2}C\right\},\cdots,\left\{CA _{n}C\right\})\,.\]
### Sagae-Tanabe mean
**Definition 4.2**.: Let \(\omega=(\omega_{1},\omega_{2},\cdots,\omega_{n})\in\Delta_{n}\) and \(\mathbb{A}=(A_{1},A_{2},\cdots,A_{n})\in\mathfrak{A}_{++}^{n}.\) Assume that a weighted 2-mean \(S_{2}\) is given. Then the associated **Sagae-Tanabe mean** in JB-algebras is defined inductively as
\[S_{n}(\omega;\mathbb{A})=S_{2}(1-\omega_{n},\omega_{n};S_{n-1}( \hat{\omega};A_{1},\cdots,A_{n-1}),A_{n})\]
for \(n\geq 3,\) where \(\hat{\omega}=\dfrac{1}{1-\omega_{n}}(\omega_{1},\cdots,\omega_{n-1})\in\Delta_ {n-1}.\)
**Examples 4.3**.: Let the weighted 2-mean \(S_{2}\) be the weighted arithmetic mean \(\mathcal{A}_{2}\), the weighted harmonic mean \(\mathcal{H}_{2}\), the weighted geometric mean \(\mathcal{G}_{2}\), and the weighted spectral geometric mean \(\mathcal{F}_{2}\) respectively. Then by induction, the corresponding Sagae-Tanabe means are
1. \(\mathcal{A}_{n}(\omega;\mathbb{A}):=\sum_{k=1}^{n}\omega_{k}A_{k};\)
2. \(\mathcal{H}_{n}(\omega;\mathbb{A}):=\left(\sum_{k=1}^{n}\omega_{k}A_{k}^{-1} \right)^{-1};\)
3. \(\mathcal{G}_{n}^{S}(\omega;\mathbb{A}):=\left[\left(A_{1}\#_{\frac{\omega_{2}}{ \omega_{1}+\omega_{2}}}A_{2}\right)\#_{\frac{\omega_{3}}{\omega_{1}+\omega_{2} +\omega_{3}}}\cdots\right]\#_{\omega_{n}}A_{n};\)
4. \(\mathcal{F}_{n}^{S}(\omega;\mathbb{A}):=\left[\left(A_{1}\natural_{\frac{\omega _{2}}{\omega_{1}+\omega_{2}}}A_{2}\right)\natural_{\frac{\omega_{3}}{\omega_{1} +\omega_{2}+\omega_{3}}}\cdots\right]\natural_{\omega_{n}}A_{n}.\)
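The recursion of Definition 4.2 is a simple fold over any two-variable mean. A sketch with assumed names, reusing the two-variable means defined in the earlier numpy sketches:

```python
def sagae_tanabe(mean2, weights, mats):
    """S_n(w; A_1,...,A_n) of Definition 4.2, where mean2(A, B, lam) is any
    weighted 2-mean, e.g. `geometric` or `spectral_geometric` above."""
    if len(mats) == 1:
        return mats[0]
    wn = weights[-1]
    hat = [w / (1.0 - wn) for w in weights[:-1]]
    return mean2(sagae_tanabe(mean2, hat, mats[:-1]), mats[-1], wn)
```

For example, `sagae_tanabe(geometric, [0.2, 0.3, 0.5], [A, B, C])` reproduces \((A\#_{0.6}B)\#_{0.5}C\) from Example 4.3(3).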
The following result is [7, Theorem 3.1] in the setting of JB-algebras.
**Theorem 4.4**.: _Let \(S_{2}\) be a Lie-Trotter mean. Then the Sagae-Tanabe mean \(S_{n}\) for \(n\geq 2\) is a multivariate Lie-Trotter mean._
Proof.: Let \(\gamma_{1},\cdots,\gamma_{n}:(-\varepsilon,\varepsilon)\rightarrow\mathfrak{A }_{++}\) be differentiable curves with \(\gamma_{k}(0)=I\) for all \(1\leq k\leq n.\) By induction, we suppose that \(S_{n-1}\) is a multivariate Lie-Trotter mean, then
\[\lim_{t\to 0}S_{n-1}(\hat{\omega};\gamma_{1}(t),\gamma_{2}(t),\cdots,\gamma_{n-1}(t))^{1/t}=\exp\left(\sum_{k=1}^{n-1}\frac{\omega_{k}}{1-\omega_{n}}\gamma_{k}^{\prime}(0)\right).\]
Let \(\gamma(t):=S_{n-1}\left(\hat{\omega};\gamma_{1}(t),\gamma_{2}(t),\cdots, \gamma_{n-1}(t)\right).\) Then by Proposition 3.1
\[\gamma^{\prime}(0)=\log\left[\exp\left(\sum_{k=1}^{n-1}\frac{\omega_{k}}{1- \omega_{n}}\gamma_{k}^{\prime}(0)\right)\right]=\sum_{k=1}^{n-1}\frac{\omega_ {k}}{1-\omega_{n}}\gamma_{k}^{\prime}(0).\]
Since \(S_{2}\) is a Lie-Trotter mean,
\[\lim_{t\to 0}S_{n}\left(\omega;\gamma_{1}(t),\cdots, \gamma_{n}(t)\right)^{1/t} =\lim_{t\to 0}S_{2}\left(1-\omega_{n},\omega_{n}; \gamma(t),\gamma_{n}(t)\right)^{1/t}\] \[=\exp\left((1-\omega_{n})\gamma^{\prime}(0)+\omega_{n}\gamma_{n}^ {\prime}(0)\right)\] \[=\exp\left[(1-\omega_{n})\left(\sum_{k=1}^{n-1}\frac{\omega_{k}}{1 -\omega_{n}}\gamma_{k}^{\prime}(0)\right)+\omega_{n}\gamma_{n}^{\prime}(0)\right]\] \[=\exp\left(\sum_{k=1}^{n}\omega_{k}\gamma_{k}^{\prime}(0)\right).\]
**Corollary 4.5**.: _Let \(A_{k}\) be positive invertible elements in a unital JB-algebra \(\mathfrak{A}\) and \(t\in\mathbb{R}\,.\) Then \(\mathcal{A}_{n}(\omega;\mathbb{A}^{t}),\mathcal{H}_{n}(\omega;\mathbb{A}^{t}),\mathcal{G}_{n}^{S}(\omega;\mathbb{A}^{t})\) and \(\mathcal{F}_{n}^{S}(\omega;\mathbb{A}^{t})\) are multivariate Lie-Trotter means._
Proof.: Let \(\gamma_{k}(t)=A_{k}^{t}\) for any \(1\leqslant k\leqslant n.\) Then \(\gamma_{k}^{\prime}(0)=\log A_{k}.\) Theorem 4.4 implies that
\[\exp\left(\sum_{k=1}^{n}\omega_{k}\log A_{k}\right) =\lim_{t\to 0}\mathcal{A}_{n}(\omega;\mathbb{A}^{t})^{1/t}=\lim_{t \to 0}\mathcal{H}_{n}(\omega;\mathbb{A}^{t})^{1/t}\] \[=\lim_{t\to 0}\mathcal{G}_{n}^{S}(\omega;\mathbb{A}^{t})^{1/t}= \lim_{t\to 0}\mathcal{F}_{n}^{S}(\omega;\mathbb{A}^{t})^{1/t}.\]
The following result gives generalized Lie-Trotter formulas for an arbitrary finite number of elements in JB-algebras.
**Corollary 4.6**.: _Let \(\omega=(\omega_{1},\cdots,\omega_{n})\in\Delta_{n}.\) For any finite number of elements \(A_{1},A_{2},\cdots,A_{n}\) in a unital JB-algebra \(\mathfrak{A},\)_
\[\exp\left(\sum_{k=1}^{n}\omega_{k}A_{k}\right) =\lim_{t\to 0}\left(\sum_{k=1}^{n}\omega_{k}e^{tA_{k}}\right)^{1/t}\] \[=\lim_{t\to 0}\left(\sum_{k=1}^{n}\omega_{k}e^{-tA_{k}}\right)^{-1/t}\] \[=\lim_{t\to 0}\left\{\left[\left(e^{tA_{1}}\#_{\frac{\omega_{2}}{\omega_{1}+\omega_{2}}}e^{tA_{2}}\right)\#_{\frac{\omega_{3}}{\omega_{1}+\omega_{2}+\omega_{3}}}\cdots\right]\#_{\omega_{n}}e^{tA_{n}}\right\}^{1/t}\] \[=\lim_{t\to 0}\left\{\left[\left(e^{tA_{1}}\natural_{\frac{\omega_{2}}{\omega_{1}+\omega_{2}}}e^{tA_{2}}\right)\natural_{\frac{\omega_{3}}{\omega_{1}+\omega_{2}+\omega_{3}}}\cdots\right]\natural_{\omega_{n}}e^{tA_{n}}\right\}^{1/t}\]
### Hansen's inductive mean
**Definition 4.7**.: Given a weighted \(2\)-mean \(H_{2},\) the associated **Hansen's inductive mean** in JB-algebras is defined by
\[H_{n}(\omega;\mathbb{A})=H_{n-1}\left(\hat{\omega};H_{2}(1-\omega_{n},\omega_ {n};A_{1},A_{n}),\cdots,H_{2}(1-\omega_{n},\omega_{n};A_{n-1},A_{n})\right)\]
for \(n\geqslant 3,\) where \(\hat{\omega}=\dfrac{1}{1-\omega_{n}}(\omega_{1},\cdots,\omega_{n-1})\in\Delta_{n-1}.\)
**Examples 4.8**.: The weighted arithmetic mean and harmonic mean are two Hansen's inductive means.
1. If \(H_{2}\) is the weighted arithmetic mean \(\mathcal{A}_{2},\) then \(H_{n}(\omega;\mathbb{A})=\mathcal{A}_{n}(\omega;\mathbb{A})=\sum_{k=1}^{n} \omega_{k}A_{k}.\)
2. If \(H_{2}\) is the weighted harmonic mean \(\mathcal{H}_{2},\) then \(H_{n}(\omega;\mathbb{A})=\mathcal{H}_{n}(\omega;\mathbb{A})=\left(\sum_{k=1}^{n}\omega_{k}A_{k}^{-1}\right)^{-1}.\)
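Definition 4.7 admits an equally short recursive sketch (names assumed): every \(A_{k}\) is first mixed with \(A_{n}\) at weight \(\omega_{n}\), and the recursion continues on the \(n-1\) mixed elements.

```python
def hansen(mean2, weights, mats):
    """H_n(w; A_1,...,A_n) of Definition 4.7, built from a weighted 2-mean."""
    if len(mats) == 1:
        return mats[0]
    wn = weights[-1]
    hat = [w / (1.0 - wn) for w in weights[:-1]]
    mixed = [mean2(Ak, mats[-1], wn) for Ak in mats[:-1]]
    return hansen(mean2, hat, mixed)
```

With `mean2 = geometric` this computes \(\mathcal{G}_{n}^{H}\), and the generalized Young inequalities of Theorem 4.10 below can be checked numerically by verifying that \(\sum_{k}\omega_{k}A_{k}-\)`hansen(geometric, w, mats)` is positive semidefinite.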
**Proposition 4.9**.: _The Hansen's inductive mean \(\mathcal{G}_{n}^{H}\) induced by the weighted geometric mean \(\#_{\lambda}\) in a unital JB-algebra \(\mathfrak{A}\) has the following properties:_
1. (Joint homogeneity) _For positive real numbers_ \(\alpha_{1},\alpha_{2},\cdots,\alpha_{n},\) \[\mathcal{G}_{n}^{H}(\omega;\alpha_{1}A_{1},\cdots,\alpha_{n}A_{n})=\left(\prod_{k=1}^{n}\alpha_{k}^{\omega_{k}}\right)\mathcal{G}_{n}^{H}(\omega;\mathbb{A})\]
2. (Monotonicity) _If_ \(A_{k}\leq B_{k}\) _for any_ \(1\leq k\leq n,\) _then_ \(\mathcal{G}_{n}^{H}(\omega;\mathbb{A})\leq\mathcal{G}_{n}^{H}(\omega;\mathbb{B})\)__
3. (Concavity)__\(\mathcal{G}_{n}^{H}(\omega;A_{1},\cdots,A_{n})\) _is concave with respect to_ \(A_{1},A_{2},\cdots,A_{n}\) _individually._
4. (Congruence invariance) _For any invertible element_ \(C\) _in JB-algebra,_ \[\mathcal{G}_{n}^{H}\left(\omega;\{C\,\mathbb{A}\,C\}\right)=\left\{C\, \mathcal{G}_{n}^{H}(\omega;\mathbb{A})C\right\}\]
5. (Self duality)__\(\mathcal{G}_{n}^{H}\left(\omega;\mathbb{A}^{-1}\right)^{-1}=\mathcal{G}_{n}^{H} \left(\omega;\mathbb{A}\right)\)__
Proof.: We will prove (1) by induction. For \(n=2,\) it is [16, Proposition 6(i)]. For \(n=3,\)
\[\mathcal{G}_{3}^{H}(\omega;A_{1},A_{2},A_{3})=\mathcal{G}_{2}^{H}(\hat{\omega };A_{1}\#_{\omega_{3}}A_{3},A_{2}\#_{\omega_{3}}A_{3})=(A_{1}\#_{\omega_{3}}A_ {3})\#_{\frac{\omega_{2}}{\omega_{1}+\omega_{2}}}(A_{2}\#_{\omega_{3}}A_{3})\]
Thus,
\[\mathcal{G}_{3}^{H}(\omega;\alpha_{1}A_{1},\alpha_{2}A_{2},\alpha _{3}A_{3}) =\left[(\alpha_{1}A_{1})\#_{\omega_{3}}(\alpha_{3}A_{3})\right]\# _{\frac{\omega_{2}}{\omega_{1}+\omega_{2}}}\left[(\alpha_{2}A_{2})\#_{\omega_ {3}}(\alpha_{3}A_{3})\right]\] \[=\left[\alpha_{1}^{1-\omega_{3}}\alpha_{3}^{\omega_{3}}(A_{1}\#_ {\omega_{3}}A_{3})\right]\#_{\frac{\omega_{2}}{\omega_{1}+\omega_{2}}}\left[ \alpha_{2}^{1-\omega_{3}}\alpha_{3}^{\omega_{3}}(A_{2}\#_{\omega_{3}}A_{3})\right]\] \[=\left(\alpha_{1}^{1-\omega_{3}}\alpha_{3}^{\omega_{3}}\right)^{ \frac{\omega_{1}}{\omega_{1}+\omega_{2}}}\left(\alpha_{2}^{1-\omega_{3}}\alpha _{3}^{\omega_{3}}\right)^{\frac{\omega_{2}}{\omega_{1}+\omega_{2}}}(A_{1}\#_{ \omega_{3}}A_{3})\#_{\frac{\omega_{2}}{\omega_{1}+\omega_{2}}}(A_{2}\#_{ \omega_{3}}A_{3})\] \[=\alpha_{1}^{\omega_{1}}\alpha_{2}^{\omega_{2}}\alpha_{3}^{\omega_ {3}}(A_{1}\#_{\omega_{3}}A_{3})\#_{\frac{\omega_{2}}{\omega_{1}+\omega_{2}}}( A_{2}\#_{\omega_{3}}A_{3})\] \[=\alpha_{1}^{\omega_{1}}\alpha_{2}^{\omega_{2}}\alpha_{3}^{\omega_ {3}}\,\mathcal{G}_{3}^{H}(\omega;A_{1},A_{2},A_{3})\]
Suppose joint homogeneity holds for any \(2\leq m\leq n-1.\) For \(m=n,\)
\[\mathcal{G}_{n}^{H}(\omega;\alpha_{1}A_{1},\cdots,\alpha_{n}A_{n}) =\mathcal{G}_{n-1}^{H}\left(\hat{\omega};(\alpha_{1}A_{1})\#_{\omega_{n}}(\alpha_{n}A_{n}),\cdots,(\alpha_{n-1}A_{n-1})\#_{\omega_{n}}(\alpha_{n}A_{n})\right)\] \[=\mathcal{G}_{n-1}^{H}\left(\hat{\omega};\alpha_{1}^{1-\omega_{n}}\alpha_{n}^{\omega_{n}}(A_{1}\#_{\omega_{n}}A_{n}),\cdots,\alpha_{n-1}^{1-\omega_{n}}\alpha_{n}^{\omega_{n}}(A_{n-1}\#_{\omega_{n}}A_{n})\right)\] \[=\prod_{k=1}^{n-1}\left(\alpha_{k}^{1-\omega_{n}}\alpha_{n}^{\omega_{n}}\right)^{\frac{\omega_{k}}{1-\omega_{n}}}\mathcal{G}_{n-1}^{H}\left(\hat{\omega};A_{1}\#_{\omega_{n}}A_{n},\cdots,A_{n-1}\#_{\omega_{n}}A_{n}\right)\] \[=\left(\prod_{k=1}^{n}\alpha_{k}^{\omega_{k}}\right)\mathcal{G}_{n}^{H}(\omega;\mathbb{A})\]
By induction, [16, Proposition 6(ii), (iii), (iv), (v)] imply (2), (3), (4) and (5) respectively.
The next Theorem generalizes [16, Theorem 3] for an arbitrary finite number of positive invertible elements in a unital JB-algebra \(\mathfrak{A}\).
**Theorem 4.10**.: (Generalized Young inequalities for JB-algebras) _Let \(\omega=(\omega_{1},\omega_{2},\cdots,\omega_{n})\in\Delta_{n}\) and \(\mathbb{A}=(A_{1},A_{2},\cdots,A_{n})\in\mathfrak{A}_{++}^{n}.\) Then_
\[\left(\sum_{k=1}^{n}\omega_{k}A_{k}^{-1}\right)^{-1}\leq\mathcal{G}_{n}^{H}\left( \omega,\mathbb{A}\right)\leq\sum_{k=1}^{n}\omega_{k}A_{k}. \tag{4.2}\]
Proof.: For \(n=2,\) it is [16, Theorem 3]. For \(n=3,\) by [16, Theorem 3]
\[\mathcal{G}_{3}^{H}(\omega;A_{1},A_{2},A_{3}) =(A_{1}\#_{\omega_{3}}A_{3})\#_{\frac{\omega_{2}}{\omega_{1}+ \omega_{2}}}(A_{2}\#_{\omega_{3}}A_{3})\] \[\leq\frac{\omega_{1}}{\omega_{1}+\omega_{2}}\left[(1-\omega_{3} )A_{1}+\omega_{3}A_{3}\right]+\frac{\omega_{2}}{\omega_{1}+\omega_{2}}\left[ (1-\omega_{3})A_{2}+\omega_{3}A_{3}\right]\] \[=\omega_{1}A_{1}+\omega_{2}A_{2}+\omega_{3}A_{3}\]
By Proposition 4.9 (5),
\[\mathcal{G}_{3}^{H}(\omega;A_{1},A_{2},A_{3})=\mathcal{G}_{3}^{H}(\omega;A_{1 }^{-1},A_{2}^{-1},A_{3}^{-1})^{-1}\geq\left(\omega_{1}A_{1}^{-1}+\omega_{2}A_ {2}^{-1}+\omega_{3}A_{3}^{-1}\right)^{-1}.\]
Thus, (4.2) holds for \(n=3.\) Suppose (4.2) holds for any \(2\leq m\leq n-1.\) For \(m=n,\) by induction,
\[\mathcal{G}_{n}^{H}(\omega;\mathbb{A}) =\mathcal{G}_{n-1}^{H}(\hat{\omega};A_{1}\#_{\omega_{n}}A_{n}, \cdots,A_{n-1}\#_{\omega_{n}}A_{n})\] \[\leq\sum_{k=1}^{n-1}\frac{\omega_{k}}{1-\omega_{n}}\left[(1- \omega_{n})A_{k}+\omega_{n}A_{n}\right]\] \[=\sum_{k=1}^{n}\omega_{k}A_{k}. \tag{4.3}\]
Applying Proposition 4.9 (5) to (4.3), we know that
\[\mathcal{G}_{n}^{H}(\omega;\mathbb{A})\geq\left(\sum_{k=1}^{n}\omega_{k}A_{k} ^{-1}\right)^{-1}.\]
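Continuing the matrix sketch above, the two-sided bound (4.2) can be spot-checked in the Loewner order (an eigenvalue test of \(Y-X\succeq 0\)) for random positive definite matrices; `powm`, `geom2`, `inductive_mean`, and `rand_spd` are the helpers defined earlier, and `loewner_leq` is ours.

```python
import numpy as np
# reuses powm, geom2, inductive_mean, rand_spd from the previous sketch

def loewner_leq(X, Y, tol=1e-10):
    """X <= Y in the Loewner order iff Y - X is positive semidefinite."""
    return np.linalg.eigvalsh(Y - X).min() >= -tol

w = np.random.default_rng(1).dirichlet(np.ones(5))   # a random point of Delta_5
As = [rand_spd(4) for _ in range(5)]

G = inductive_mean(w, As, geom2)
harm  = np.linalg.inv(sum(wk * np.linalg.inv(Ak) for wk, Ak in zip(w, As)))
arith = sum(wk * Ak for wk, Ak in zip(w, As))
print(loewner_leq(harm, G), loewner_leq(G, arith))   # True True
```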
The following result extends Theorem 3.5 from [7] to the setting of JB-algebras.
**Theorem 4.11**.: _Suppose a weighted 2-mean \(H_{2}\) is a Lie-Trotter mean. Then the associated Hansen's inductive mean \(H_{n}\) for \(n\geq 2\) is a multivariate Lie-Trotter mean._
Proof.: Let \(\omega=(\omega_{1},\cdots,\omega_{n-1},\omega_{n})\in\Delta_{n}\) and let \(\gamma_{1},\cdots,\gamma_{n}:(-\varepsilon,\varepsilon)\rightarrow\mathfrak{A }_{++}\) be any differentiable curves with \(\gamma_{k}(0)=I\) for any \(1\leq k\leq n.\) We need to show that
\[\lim_{t\to 0}H_{n}(\omega;\gamma_{1}(t),\cdots,\gamma_{n}(t))^{1/t}=\exp \left[\sum_{k=1}^{n}\omega_{k}\gamma_{k}^{\prime}(0)\right]\]
We will use induction to prove the statement. It is obvious for \(n=2\) by our assumption. Suppose that the claim is true for \(n-1.\) Then
\[H_{n}(\omega;\gamma_{1}(t),\cdots,\gamma_{n}(t))=H_{n-1}(\hat{\omega};\beta_{1 }(t),\cdots,\beta_{n-1}(t)),\]
where
\[\beta_{k}(t)=H_{2}(1-\omega_{n},\omega_{n};\gamma_{k}(t),\gamma_{n}(t))\text{ for }1\leqslant k\leqslant n-1.\]
We observe that \(\beta_{k}(t)\) is a differentiable curve and \(\beta_{k}(0)=I\) by idempotence of \(H_{2}.\) In addition, by Proposition 3.1,
\[\beta_{k}^{\prime}(0)=(\log\circ\beta_{k})^{\prime}(0)=\lim_{t\to 0}\frac{\log( \beta_{k}(t))}{t}=\lim_{t\to 0}\log(\beta_{k}(t))^{1/t}=(1-\omega_{n}) \gamma_{k}^{\prime}(0)+\omega_{n}\gamma_{n}^{\prime}(0).\]
Therefore
\[\lim_{t\to 0}H_{n}(\omega;\gamma_{1}(t),\cdots,\gamma_{n}(t))^{1/t} =\lim_{t\to 0}H_{n-1}(\hat{\omega};\beta_{1}(t),\cdots,\beta_{n-1}(t) )^{1/t}=\exp\left(\sum_{k=1}^{n-1}\frac{\omega_{k}}{1-\omega_{n}}\beta_{k}^{ \prime}(0)\right)\] \[=\exp\left\{\sum_{k=1}^{n-1}\frac{\omega_{k}}{1-\omega_{n}} \left[(1-\omega_{n})\gamma_{k}^{\prime}(0)+\omega_{n}\gamma_{n}^{\prime}(0) \right]\right\}\] \[=\exp\left(\sum_{k=1}^{n}\omega_{k}\gamma_{k}^{\prime}(0)\right)\]
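Theorem 4.11 can also be checked numerically for the inductive geometric mean: with the curves \(\gamma_{k}(t)=e^{tB_{k}}\), the quantity \(\mathcal{G}_{n}^{H}(\omega;\gamma_{1}(t),\cdots,\gamma_{n}(t))^{1/t}\) should approach \(\exp\left(\sum_{k}\omega_{k}B_{k}\right)\) as \(t\to 0\). A short continuation of the sketch above (again in the matrix special case):

```python
import numpy as np
from scipy.linalg import expm
# reuses powm, geom2 and inductive_mean from the sketch above

gen = np.random.default_rng(2)
n = 3
w = [0.5, 0.3, 0.2]
Bs = [(X + X.T) / 2 for X in gen.standard_normal((3, n, n))]   # symmetric directions
target = expm(sum(wk * Bk for wk, Bk in zip(w, Bs)))

for t in (1e-1, 1e-2, 1e-3):
    Ht = inductive_mean(w, [expm(t * Bk) for Bk in Bs], geom2)
    # H(t)^{1/t} converges to exp(sum_k w_k B_k); the error shrinks with t
    print(t, np.linalg.norm(powm(Ht, 1.0 / t) - target))
```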
## 5 A characterization of multivariate Lie-Trotter means
In this section, we give a characterization of multivariate Lie-Trotter means in a unital JB-algebra \(\mathfrak{A}.\)
Let \(G_{n}^{\omega}:=G_{n}(\omega;\cdot):\mathfrak{A}_{++}^{n}\rightarrow\mathfrak{ A}_{++}\) be a weighted \(n\)-mean satisfying the following inequality
\[\mathcal{H}_{n}\leqslant G_{n}^{\omega}\leqslant\mathcal{A}_{n}, \tag{5.1}\]
where \(\mathcal{H}_{n}\) and \(\mathcal{A}_{n}\) are the weighted harmonic and arithmetic means, respectively.
Let \(A_{1},\cdots,A_{n}\in\mathfrak{A}.\) Without loss of generality, we suppose that at least one \(A_{k}\) is nonzero. Denote
\[\rho:=\max_{1\leqslant k\leqslant n}\sigma(A_{k}),\]
where \(\sigma(A_{k})\) denotes the spectral radius of \(A_{k}.\) Then \(\rho>0.\) For any \(t\in(-1/\rho,1/\rho),\) let \(\lambda\in\) Sp\((I+tA_{k}).\) By the spectral theory of JB-algebras,
\[\lambda\geq 1-|t|\cdot\rho>0.\]
Therefore, \(G_{n}^{\omega}(I+tA_{1},\cdots,I+tA_{n})\) is well defined on the interval \((-1/\rho,1/\rho).\)
**Lemma 5.1**.: _If \(G_{n}^{\omega}\) as defined above satisfies the inequality (5.1), then \(G_{n}^{\omega}\) is differentiable at \(\mathbb{I}=(I,\cdots,I)\) in the sense that_
\[\lim_{t\to 0}\frac{G_{n}^{\omega}(I+tA_{1},\cdots,I+tA_{n})-G_{n}^{ \omega}(I,\cdots,I)}{t}\]
_exists. Moreover, if we denote the limit as \(DG_{n}^{\omega}(\mathbb{I})(A_{1},\cdots,A_{n}),\) then_
\[DG_{n}^{\omega}(\mathbb{I})(A_{1},\cdots,A_{n})=\sum_{k=1}^{n}\omega_{k}A_{k}.\]
Proof.: Define \(f(t)=\left[\sum_{k=1}^{n}\omega_{k}(I+tA_{k})^{-1}\right]^{-1}\) on \((-1/\rho,1/\rho)\). Then \(f(t):(-1/\rho,1/\rho)\to\mathfrak{A}_{++}\) is well-defined with \(f(0)=I\) and differentiable at \(t=0.\) Indeed,
\[f^{\prime}(0) =\lim_{t\to 0}\frac{f(t)-f(0)}{t}=\lim_{t\to 0}\frac{\left[\sum_{k=1}^{n} \omega_{k}(I+tA_{k})^{-1}\right]^{-1}-I}{t}\] \[=\lim_{t\to 0}\frac{\left[\sum_{k=1}^{n}\omega_{k}(I+tA_{k})^{-1} \right]^{-1}\circ\left[I-\sum_{k=1}^{n}\omega_{k}(I+tA_{k})^{-1}\right]}{t}\] \[=\lim_{t\to 0}\frac{I-\sum_{k=1}^{n}\omega_{k}(I+tA_{k})^{-1}}{t}= \lim_{t\to 0}\sum_{k=1}^{n}\omega_{k}\frac{I-(I+tA_{k})^{-1}}{t}\] \[=\lim_{t\to 0}\sum_{k=1}^{n}\omega_{k}\frac{(I+tA_{k})^{-1} \circ(I+tA_{k}-I)}{t}=\sum_{k=1}^{n}\omega_{k}A_{k}.\]
If \(t\in(0,1/\rho)\) and \(t\to 0^{+}\), then by the inequality (5.1)
\[\lim_{t\to 0^{+}}\frac{f(t)-I}{t}\leq\lim_{t\to 0^{+}}\frac{G_{n}^{\omega}(I+tA_{1}, \cdots,I+tA_{n})-I}{t}\leq\lim_{t\to 0^{+}}\frac{\sum_{k=1}^{n}\omega_{k}(I+tA_{k}) -I}{t}\]
Thus, by the sandwich principle,
\[\lim_{t\to 0^{+}}\frac{G_{n}^{\omega}(I+tA_{1},\cdots,I+tA_{n})-G_{n}^{ \omega}(I,\cdots,I)}{t}=\sum_{k=1}^{n}\omega_{k}A_{k}\]
Similarly,
\[\lim_{t\to 0^{-}}\frac{G_{n}^{\omega}(I+tA_{1},\cdots,I+tA_{n})-G_{n}^{ \omega}(I,\cdots,I)}{t}=\sum_{k=1}^{n}\omega_{k}A_{k}\]
Therefore, \(G_{n}^{\omega}\) is differentiable at \(\mathbb{I}\) with \(DG_{n}^{\omega}(\mathbb{I})(A_{1},\cdots,A_{n})=\sum_{k=1}^{n}\omega_{k}A_{k}\).
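A finite-difference check of Lemma 5.1 for the inductive geometric mean (which satisfies (5.1) by Theorem 4.10), continuing the matrix sketch from Section 4:

```python
import numpy as np
# reuses geom2 and inductive_mean from the sketch in Section 4

gen = np.random.default_rng(3)
n = 3
w = [0.25, 0.25, 0.5]
As = [(X + X.T) / 2 for X in gen.standard_normal((3, n, n))]
I = np.eye(n)
target = sum(wk * Ak for wk, Ak in zip(w, As))

for t in (1e-2, 1e-3, 1e-4):   # small enough that I + t*A_k stays positive definite
    G = inductive_mean(w, [I + t * Ak for Ak in As], geom2)
    print(t, np.linalg.norm((G - I) / t - target))   # residual decays with t
# consistent with DG(I)(A_1, ..., A_n) = sum_k w_k A_k
```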
**Theorem 5.2**.: _If a weighted \(n\)-mean \(G_{n}\) in a unital JB-algebra \(\mathfrak{A}\) satisfies the inequality (5.1), then \(G_{n}\) is a multivariate Lie-Trotter mean._
Proof.: Let \(\omega=(\omega_{1},\cdots,\omega_{n-1},\omega_{n})\in\Delta_{n}\) and let \(\gamma_{1},\cdots,\gamma_{n}:(-\varepsilon,\varepsilon)\to\mathfrak{A}_{++}\) be any differentiable curves with \(\gamma_{k}(0)=I\) for any \(1\leq k\leq n.\) Then the inequality (5.1) indicates that
\[\left[\sum_{k=1}^{n}\omega_{k}\gamma_{k}(t)^{-1}\right]^{-1}\leq G_{n}(\omega;\gamma_{1}(t),\cdots,\gamma_{n}(t))\leq\sum_{k=1}^{n}\omega_{k}\gamma_{k}(t).\]
Since the logarithmic function is operator monotone in JB-algebras by [16, Proposition 5],
\[\log\left[\sum_{k=1}^{n}\omega_{k}\gamma_{k}(t)^{-1}\right]^{-1}\leq\log\left[G_{n}(\omega;\gamma_{1}(t),\cdots,\gamma_{n}(t))\right]\leq\log\left[\sum_{k=1}^{n}\omega_{k}\gamma_{k}(t)\right]\]
For any \(t\in(0,\varepsilon)\) and \(t\to 0^{+}\),
\[\lim_{t\to 0^{+}}\log\left[\sum_{k=1}^{n}\omega_{k}\gamma_{k}(t)^{-1}\right]^{-1/t}\leqslant\lim_{t\to 0^{+}}\log\left[G_{n}(\omega;\gamma_{1}(t),\cdots,\gamma_{n}(t))\right]^{1/t}\leqslant\lim_{t\to 0^{+}}\log\left[\sum_{k=1}^{n}\omega_{k}\gamma_{k}(t)\right]^{1/t}\]
Using the fact that \(\mathcal{A}_{n}\) and \(\mathcal{H}_{n}\) are multivariate Lie-Trotter means by Corollary 4.5, we conclude
\[\lim_{t\to 0^{+}}\log\left[G_{n}(\omega;\gamma_{1}(t),\cdots,\gamma_{n}(t)) \right]^{1/t}=\sum_{k=1}^{n}\omega_{k}\gamma_{k}^{\prime}(0).\]
Since the map \(\log:\mathfrak{A}_{++}\to\mathfrak{A}\) is a diffeomorphism,
\[\lim_{t\to 0^{+}}\left[G_{n}(\omega;\gamma_{1}(t),\cdots,\gamma_{n}(t)) \right]^{1/t}=\exp\left[\sum_{k=1}^{n}\omega_{k}\gamma_{k}^{\prime}(0)\right].\]
Following a similar argument, we obtain
\[\lim_{t\to 0^{-}}\left[G_{n}(\omega;\gamma_{1}(t),\cdots,\gamma_{n}(t)) \right]^{1/t}=\exp\left[\sum_{k=1}^{n}\omega_{k}\gamma_{k}^{\prime}(0)\right].\]
**Acknowledgement:** We extend our gratitude to Shuzhou Wang for his careful reading of the manuscript and valuable comments.
|
2305.16887 | Awesome SOSS: Atmospheric Characterisation of WASP-96 b using the JWST
Early Release Observations | The newly operational JWST offers the potential to study the atmospheres of
distant worlds with precision that has not been achieved before. One of the
first exoplanets observed by JWST in the summer of 2022 was WASP-96 b, a
hot-Saturn orbiting a G8 star. As part of the Early Release Observations
program, one transit of WASP-96 b was observed with NIRISS/SOSS to capture its
transmission spectrum from 0.6-2.85 microns. In this work, we utilise four
retrieval frameworks to report precise and robust measurements of WASP-96 b's
atmospheric composition. We constrain the logarithmic volume mixing ratios of
multiple chemical species in its atmosphere, including: H$_2$O = $-3.59 ^{+
0.35 }_{- 0.35 }$, CO$_2$ = $-4.38 ^{+ 0.47 }_{- 0.57 }$ and K = $-8.04 ^{+
1.22 }_{- 1.71 }$. Notably, our results offer a first abundance constraint on
potassium in WASP-96 b's atmosphere, and important inferences on carbon-bearing
species such as CO$_2$ and CO. Our short wavelength NIRISS/SOSS data are best
explained by the presence of an enhanced Rayleigh scattering slope, despite
previous inferences of a clear atmosphere - although we find no evidence for a
grey cloud deck. Finally, we explore the data resolution required to
appropriately interpret observations using NIRISS/SOSS. We find that our
inferences are robust against different binning schemes. That is, from low $R =
125$ to the native resolution of the instrument, the bulk atmospheric
properties of the planet are consistent. Our systematic analysis of these
exquisite observations demonstrates the power of NIRISS/SOSS to detect and
constrain multiple molecular and atomic species in the atmospheres of hot giant
planets. | Jake Taylor, Michael Radica, Luis Welbanks, Ryan J. MacDonald, Jasmina Blecic, Maria Zamyatina, Alexander Roth, Jacob L. Bean, Vivien Parmentier, Louis-Philippe Coulombe, Adina D. Feinstein, Néstor Espinoza, Björn Benneke, David Lafrenière, René Doyon, Eva-Maria Ahrer | 2023-05-26T12:46:37Z | http://arxiv.org/abs/2305.16887v1 | # Awesome SOSS: Atmospheric Characterisation of WASP-96 b using the JWST Early Release Observations
###### Abstract
The newly operational JWST offers the potential to study the atmospheres of distant worlds with precision that has not been achieved before. One of the first exoplanets observed by JWST in the summer of 2022 was WASP-96 b, a hot-Saturn orbiting a G8 star. As part of the Early Release Observations program, one transit of WASP-96 b was observed with NIRISS/SOSS to capture its transmission spectrum from 0.6-2.85 um. In this work, we utilise four retrieval frameworks to report precise and robust measurements of WASP-96 b's atmospheric composition. We constrain the logarithmic volume mixing ratios of multiple chemical species in its atmosphere, including: H\({}_{2}\)O = \(-3.59^{+0.35}_{-0.35}\), CO\({}_{2}\) = \(-4.38^{+0.47}_{-0.57}\) and K = \(-8.04^{+1.22}_{-1.71}\). Notably, our results offer a first abundance constraint on potassium in WASP-96 b's atmosphere, and important inferences on carbon-bearing species such as CO\({}_{2}\) and CO. Our short wavelength NIRISS/SOSS data are best explained by the presence of an enhanced Rayleigh scattering slope, despite previous inferences of a clear atmosphere -- although we find no evidence for a grey cloud deck. Finally, we explore the data resolution required to appropriately interpret observations using NIRISS/SOSS. We find that our inferences are robust against different binning schemes. That is, from low \(R=125\) to the native resolution of the instrument, the bulk atmospheric properties of the planet are consistent. Our systematic analysis of these exquisite observations demonstrates the power of NIRISS/SOSS to detect and constrain multiple molecular and atomic species in the atmospheres of hot giant planets.
keywords: planets and satellites: atmospheres - planets and satellites: gaseous planets - planets and satellites: individual: WASP-96 b
## 1 Introduction
After launch in December 2021, a careful journey to L2, and a successful commissioning period, JWST finally began its long-awaited science operations on July 12, 2022. It is a credit to how far the field of exoplanet astronomy has progressed in the past couple of decades that some of the very first observations with this revolutionary new observatory were of transiting exoplanets. JWST vastly extends the wavelength range with which exoplanet atmospheres can be probed from space. Previous state-of-the-art observations with the Hubble Space Telescope (HST) probed UV, optical, and near infrared wavelengths out to 1.7 um. The Spitzer Space Telescope enabled predominantly photometric measurements further into the infrared, although only the bluest bandpasses at 3.6 and 4.5 um remained in operation after the coolant ran out in 2009. In combination,
the four instruments on board the JWST allow for the study of exoplanet atmospheres from 0.6-28 um, enabling their characterization at wavelengths never seen before and broadening our discovery space into uncharted territories. This is evident from the first results from the Early Release Science (ERS) observations of the hot-Jupiter WASP-39 b, which yielded the first ever detections of CO\({}_{2}\) and SO\({}_{2}\) (The JWST Transiting Exoplanet Community Early Release Science Team et al., 2022; Alderson et al., 2022; Rustamkulov et al., 2022; Tsai et al., 2022).
The Early Release Observations (ERO) program was designed to provide the astronomical community with publicly available data, touching on many of the key science objectives of JWST, immediately after the end of the commissioning period (Pontoppidan et al., 2022). For the exoplanet portion of the ERO program, transits of two hot-Saturns, WASP-96 b (Hellier et al., 2014) and HAT-P-18b (Hartman et al., 2011; Fu et al., 2022) were observed with the Single Object Slitless Spectroscopy (SOSS) mode (Albert et al. submitted) of the Near Infrared Imager and Slitless Spectrograph (NIRISS) instrument (Doyon et al. submitted).
This work focuses on WASP-96 b, an inflated hot-Saturn exoplanet with a mass of 0.48\(\pm\)0.03 M\({}_{\rm J}\), a radius of 1.2\(\pm\)0.06 R\({}_{\rm J}\), and an equilibrium temperature of \(\sim\)1300 K. It orbits a G8 star in the constellation of Phoenix with an orbital period of 3.4 d. The planet's short orbital period, combined with its low density, makes it an ideal candidate for atmospheric spectroscopy. Indeed, there have been multiple previous atmospheric studies of this planet, with the first being a ground-based spectrum using VLT/FORS2 by Nikolov et al. (2018). This observation covered a spectral range of 0.36-0.82 um using spectroscopic bins with widths of 0.016 um. This spectroscopic precision allowed for the measurement of the pressure-broadened sodium D line with wings reported to cover six atmospheric pressure scale heights (Nikolov et al., 2018). The visibility of the sodium wings suggested that there are no clouds or hazes obscuring them in the atmosphere at the pressure ranges probed in transmission (Fortney, 2005). This was supported by their atmospheric modeling, which found no evidence for additional opacity due to clouds. Nikolov et al. (2018) further conclude that the abundance of Na is consistent with the measured stellar value.
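The six-scale-height amplitude quoted above can be checked with a back-of-the-envelope estimate. Below is a minimal sketch using the planetary parameters quoted in this section; the mean molecular weight of 2.3 amu for a H\({}_{2}\)/He atmosphere and a stellar radius of \(\sim\)1.05 R\({}_{\odot}\) are our assumed inputs, not values taken from the papers cited here.

```python
import numpy as np

# SI constants
k_B, m_u, G_N = 1.380649e-23, 1.66054e-27, 6.674e-11
M_jup, R_jup, R_sun = 1.898e27, 7.1492e7, 6.957e8

# WASP-96 b parameters as quoted above (Hellier et al. 2014)
M_p, R_p, T_eq = 0.48 * M_jup, 1.2 * R_jup, 1300.0
mu = 2.3 * m_u            # assumed H2/He-dominated atmosphere
R_star = 1.05 * R_sun     # assumed G8 host-star radius

g = G_N * M_p / R_p**2    # surface gravity
H = k_B * T_eq / (mu * g) # pressure scale height
print(f"g ~ {g:.1f} m/s^2, H ~ {H / 1e3:.0f} km")

# transit-depth modulation spanned by N scale heights: dD ~ 2 N R_p H / R_star^2
for N in (1, 6):
    print(f"{N} scale height(s) ~ {2 * N * R_p * H / R_star**2 * 1e6:.0f} ppm")
```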
Prior to the commissioning period of JWST, Nikolov et al. (2022) published the transmission spectrum of WASP-96 b using HST and Spitzer, providing the first look at the infrared spectrum of the planet using space-based instrumentation. To explore the atmospheric constituents in detail they couple the HST and Spitzer observations with the previous VLT observations. They find an offset between the space and ground-based data, consistent with what was found by Yip et al. (2021) who explored the impact of combining space-based and ground-based observations. Together, the HST and Spitzer observations confirm previous findings that the transmission spectrum is consistent with a cloud-free atmosphere. They are able to put a constraint on the absolute sodium and oxygen abundance and find them to be 21\({}^{+27}_{-14}\)\(\times\) and 7\({}^{+11}_{-4}\)\(\times\) solar values, respectively.
Later, McGruder et al. (2022) published a study of WASP-96 b adding to the existing ensemble of transit measurements using the ground-based telescope IMACS/Magellan as part of the ACCESS project (Jordan et al., 2013; Rackham et al., 2017). Their transmission spectrum covers the spectral range of 0.44-0.9 um, which overlaps with the VLT/FORS2 observations of Nikolov et al. (2018), enabling an independent confirmation of the sodium feature with its pressure-broadened wings. They combine their two transits with the published VLT/FORS2 and HST data and perform spectral retrievals on the combined spectrum (0.4-1.644 um). Their results indicate solar-to-super-solar abundances of Na and H\({}_{2}\)O, with log-mixing ratios of \(-5.4^{+2.0}_{-1.0}\) and \(-4.5^{+2.0}_{-2.0}\) respectively, in rough agreement with Nikolov et al. (2022) and with previous suggestions of super-solar alkali abundances (Welbanks et al., 2019).
This study is the second paper in a two-part series providing an in-depth treatment of the ERO observations of WASP-96 b. The companion paper, Radica et al. (submitted) focuses on the reduction and extraction of the planet's transmission spectrum from the SOSS time series observations (TSO), as well as provides some initial insights into the composition of WASP-96 b's atmosphere through comparisons of the planet's transmission spectrum with grids of self-consistent atmospheric models. They conclude that the NIRISS/SOSS observations of WASP-96 b are best explained by a cloud-free atmosphere, with a solar-to-super-solar metallicity atmosphere and solar carbon-to-oxygen ratio (C/O). In this study, we perform a detailed atmospheric characterization of WASP-96 b using the spectrum presented in Radica et al. (submitted). NIRISS/SOSS provides spectral coverage from 0.6-2.8 um, covering the red wing of the 0.59 um sodium doublet as well as multiple water bands. Hence we can assess the robustness of previous observations (Nikolov et al., 2022; McGruder et al., 2022) and independently confirm them with an instrument specifically built to study exoplanet atmospheres.
In the era of HST, exoplanet atmosphere observations were generally binned to a resolution which achieved a sufficient signal to noise ratio to obtain spectral information. The choice of binning, though, has remained somewhat arbitrary; for example, with HST/WFC3 G141 (1.1-1.7 microns), Line et al. (2016) use 10 spectral bins for the secondary eclipse of HD 209458b and Kreidberg et al. (2014) use 22 spectral bins for the transmission spectrum and 15 spectral bins for the secondary eclipse of WASP-43b. Hence, for the same instrument, different spectral binning was used throughout the literature, and as of yet, there has been no exploration of whether different spectral bin choices impact the inferred atmospheric properties.
We therefore aim to answer the following questions:
1. What are the chemical species present in WASP-96 b's atmosphere and what are their abundances? Is the atmosphere indeed cloud-free?
2. How robust against the choice of framework and model assumptions are the retrieved chemical abundances?
3. Does binning the data from native to lower resolution produce different inferred abundances?
This work is organized as follows: we provide a brief overview of the observations and data reduction in Section 2. We outline the different modelling strategies, both through inference (e.g., retrievals) and forward models, in Section 3, and present the modelling results in Section 4. Section 5 contains a brief discussion of these results, and we summarize our work in Section 6.
## 2 Observations
One transit of the hot-Saturn WASP-96 b was observed with NIRISS/SOSS on June 21, 2022 as part of the JWST Early Release Observations (ERO) program (Pontoppidan et al., 2022). The total duration of the time series observation (TSO) was 6.4 hr. The SUBSTRIP256 subarray configuration was used to capture the first three diffraction orders of the target star on the detector (Albert et al. submitted), which provides access to the full 0.6-2.8 um wavelength range of the SOSS mode. Order 1 covers the 0.85-2.8 um wavelength range, with the 0.6-0.85 um information provided by order 2. The third order is generally too faint to be extracted, and does not provide any unique wavelength coverage (Albert et al. submitted).
The reduction of these data, using the supreme-SPOON pipeline1(Feinstein et al., 2023; Coulombe et al., 2023, Radica et al. submitted), is treated in depth in Radica et al. (submitted). In that work, they present a walkthrough of the critical reduction steps, including correction of the zodiacal background light and 1/\(f\) noise. Their final transmission spectrum, which we make use of in this work, was extracted with the ATOCA algorithm (Darveau-Bernier et al., 2022; Radica et al., 2022) to explicitly model the self-contamination of the first and second diffraction orders on the detector. The transit depths were additionally fit at the pixel level (that is, one transit depth per pixel column on the detector), and were post-processed to correct for contamination from background field stars, which can occur due to the slitless nature of the SOSS mode. This spectrum is shown at the pixel level, as well as binned to several lower resolutions, in Figure 1. Their spectrophotometric light curves reach an average precision of 1.2\(\times\) and 1.4\(\times\) the photon noise for orders 1 and 2 respectively, resulting in average pixel-level transit depth precisions of 522 ppm and 534 ppm respectively.
Footnote 1: [https://github.com/radicamc/supreme-spoon](https://github.com/radicamc/supreme-spoon)
## 3 Spectral Analysis
We use two different modelling approaches to thoroughly explore WASP-96 b's atmosphere. The first is comparisons with forward models computed using 3D General Circulation Models (GCMs). These allow us to explore the potential formation of different species of cloud condensates in the atmosphere of WASP-96 b (e.g., Samra et al., 2023), and also consider chemical kinetics (Zamyatina et al., 2023). The second is a spectral retrieval analysis that allows us to infer the atmospheric properties of WASP-96 b, such as its chemical composition and temperature structure, directly from the Radica et al. (submitted) transmission spectrum (Figure 1). For the retrieval analysis, we use four different codes: CHIMERA (Line et al., 2013), Aurora (Welbanks and Madhusudhan, 2021), POSEIDON (MacDonald and Madhusudhan, 2017; MacDonald, 2023) and PyratBay (Cubillos and Blecic, 2021) to ensure that the atmospheric composition we retrieve is robust against the choice of retrieval code. The model setups for each of the two approaches are detailed below.
### 3.1 3D Forward Modelling using GCMs
Although all previous observational studies of this planet have concluded a cloud-free upper atmosphere for WASP-96 b (Nikolov et al., 2018, 2022; McGruder et al., 2022), the idea that clouds could be present in the atmosphere of this planet was recently explored by Samra et al. (2023). Their 3D GCM model considers a kinetic, non-equilibrium formation model for mixed material cloud particles. Their GCM models show that clouds could indeed be ubiquitous in the low-pressure, terminator regions of WASP-96 b's atmosphere, with silicate and metal oxide clouds being the most prominent condensate species. They conclude that the Nikolov et al. (2022) transmission spectrum can also be fit with cloudy models. However, whether the clouds predicted by the kinetic model actually form in WASP-96 b depends on whether they are cold-trapped below the photosphere (Parmentier et al., 2016; Powell et al., 2018), a mechanism that cannot currently be resolved with the kinetics models.
We perform our own GCM modelling to investigate the plausibility that WASP-96 b could host clouds. For our first GCM analysis, we use the non-grey SPARC/MITgcm (Showman et al., 2009). Specifically, we make use of the large grid of models generated by Roth et al. (in prep). The GCM setup is very similar to that described in Parmentier et al. (2018, 2021), but does not consider cloud condensation. The model grid spans a wide range of equilibrium temperatures, atmospheric metallicities, orbital periods, and surface gravities. Other parameters, such as the planetary radius, are fixed, and the models assume an infinite drag timescale. The resulting thermal profiles are then interpolated to the system parameters of WASP-96 b.
The thermal profiles are read into CHIMERA (see Section 3.2.1 for more details) to produce a transmission spectrum of an atmosphere that has a solar C/O and metallicity of 1\(\times\) and 10\(\times\) solar. The thermal profiles and transmission spectra are shown in Figure 2. To capture the PT profile parameter space spanned by our range of considered metallicities, we denote the 1\(\times\) solar profile in a solid line, and the edge of the shaded area denotes the 10\(\times\) solar profile. It can be seen that the PT structures cross the condensation curves for various cloud species. Specifically, the SPARC/MITgcm predicts that the morning limb favours high-altitude Na\({}_{2}\)S clouds with deeper MnS clouds, whereas in the evening limb only MnS clouds could condense in the pressure regions probed by transmission. Silicate clouds should form only in the 1\(\times\) solar metallicity case, and only in the deep layers of the atmosphere (\(\sim\)1 bar). Depending on whether vertical mixing is large enough, they could be efficiently mixed up to the pressure levels probed by the observations or remain trapped in the deep layers of the atmosphere (Powell et al., 2018).
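Whether a given limb is expected to condense a species reduces to locating where its PT profile crosses the condensation curve. A schematic sketch of that test follows; the limb profile and the condensation-curve coefficients below are illustrative placeholders, not the actual GCM output or a fitted MnS curve.

```python
import numpy as np

logP = np.linspace(-6, 2, 400)   # log10(pressure / bar), top of atmosphere to deep layers
# placeholder limb temperature profile (K); the real input is the GCM output
T_limb = 1100 + 22.5 * (logP + 6) + 60 * np.tanh(logP)
# placeholder condensation curve of the form 1e4 / T_cond = a - b * log10(P)
T_cond = 1.0e4 / (7.45 - 0.42 * logP)

# condensation is possible wherever the profile is colder than the curve
crossings = np.flatnonzero(np.diff(np.sign(T_limb - T_cond)))
for i in crossings:
    print(f"profile crosses the condensation curve near log10(P/bar) = {logP[i]:.2f}")
```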
Our second GCM analysis utilizes the Met Office Unified Model (UM) run specifically for WASP-96 b. We used the same basic model setup as in Drummond et al. (2020) and Zamyatina et al. (2023) with the following changes: (a) the PHOENIX BT-Settl stellar spectrum (Rajpurohit et al., 2013) closely matching WASP-96, (b) WASP-96 b parameters from Hellier et al. (2014), and (c) the UM version 11.6, initialised with (d) WASP-96 b dayside-average pressure-temperature profile obtained with the 1D radiative-convective-chemistry model ATMO (Drummond et al., 2016) assuming chemical equilibrium for the chemical species present in the Venot et al. (2012) chemical network. We further assume the atmosphere to be cloud/haze free and have a solar metallicity and C/O ratio based on the initial modelling of Radica et al. (submitted).
Within the UM framework, we ran two simulations, each with a different chemical scheme: one assuming chemical equilibrium, and the other a chemical kinetics scheme, which computes the production and loss of the chemical species present in the Venot et al. (2019) reduced chemical network. We will refer to these simulations as "UM 1\(\times\) solar equilibrium" and "UM 1\(\times\) solar kinetics", respectively, with the latter simulation accounting for the opacity changes not only due to changes in pressure and temperature, but also due to the transport of chemical species in the atmosphere. The left panel of Figure 2 shows that both UM simulations predict similar limb-average PT profiles (weighted over all latitudes and \(\pm\)20\({}^{\circ}\) longitude), with the morning limb being colder than the evening limb at pressures \(<\)1 bar. Contrary to the SPARC/MITgcm, pressure-temperature profiles from the UM suggest that only MnS clouds could form on WASP-96 b's limbs. This is because the UM predicts a shallower temperature gradient at pressures \(<\)10\({}^{-2}\) bar causing the UM to have temperatures 100-200 K higher than those predicted at comparable pressures by the SPARC/MITgcm. Both GCMs predict similar positions for the MnS cloud decks on both limbs, when the assumed metallicity is 1\(\times\) solar. However, given that the MnS nucleation rate is relatively low (Gao et al., 2020), these clouds might not form quickly enough for their opacity to be relevant for WASP-96 b.
The right panel of Figure 2 shows that both the UM and the SPARC/MITgcm simulations produce transmission spectra that agree well with WASP-96 b's JWST NIRISS/SOSS transmission spectrum in the range 1.3-2.15 \(\mu\)m. Blueward of 1.3 \(\mu\)m, however, K and H\({}_{2}\)O features are muted relative to those predicted by the haze and cloud free GCM models, suggesting the presence of a scattering opacity source. Redward of 2.15 \(\mu\)m, the observations and the models broadly agree, but the observed transit depths vary highly with wavelength. Of particular note is the region between 2.15-2.5 \(\mu\)m, where the "UM 1\(\times\) solar kinetics" simulation predicts a higher transit depth
Figure 1: The full 0.6–2.8 \(\mu\)m NIRISS/SOSS spectrum of WASP-96 b from Radica et al. (submitted). The pixel level spectrum is shown in grey, as well as when binned to a resolution of R=500, 250, and 125 in purple, blue, and red respectively.
Figure 2: _Left:_ Temperature profiles generated from the SPARC MIT/gcm and the UM. The morning and evening limbs are shown in blue and red respectively. The solid blue line is the 1\(\times\) solar model from the MIT/gcm, with the shading showing the parameter space covered between a 1\(\times\) solar and 10\(\times\) solar model, both assuming chemical equilibrium. The crosses and circles are from the UM, showing equilibrium and kinetics cases respectively, both for a 1\(\times\) solar metallicity. Condensation curves for three different cloud species are shown: Na\({}_{2}\)S, MnS, and MgSiO\({}_{3}\) in orange, purple, and green respectively. The line style denotes curves for different atmospheric metallicities: solid and dashed for 1\(\times\) and 10\(\times\) solar, respectively. _Right:_ WASP-96 b NIRISS/SOSS transmission spectrum from Radica et al. (submitted) binned to a resolution of R=125 (black points with error bars) compared to simulated transmission spectra from outputs of the UM and SPARC MIT/gcm. The orange and red lines are UM models, with and without considering kinetics. The blue and purple lines are from the SPARC MIT/gcm 1\(\times\) and 10\(\times\) solar metallicity runs, respectively.
than the "UM 1\(\times\) solar equilibrium" simulation. This difference is caused by an enhancement of the abundance of CH\({}_{4}\) due to transport-induced quenching, which is captured only in the UM kinetics simulation. However, we are not able to robustly distinguish between these two cases with the current data. Another difference is that, because the solar-composition SPARC/MITgcm predicts cooler limb temperatures than the UM simulation, the spectral features of the SPARC/MITgcm are shallower than the observations. However, the SPARC/MITgcm 10\(\times\) solar metallicity leads to a hotter thermal profile and thus to a better match to the data. This difference highlights the intrinsic dependence of the observables to the modelling framework when using complex, 3D, GCMs (Showman et al., 2020).
Overall, both clear-sky GCMs used in this study provide good agreement with our JWST NIRISS/SOSS transmission spectrum of WASP-96 b. However, we are not able to robustly distinguish between 1\(\times\) and 10\(\times\) solar metallicity models with the current data, and both models struggle to reproduce the observations blueward of 1.3 \(\mu\)m. This further motivates an in-depth investigation using atmospheric retrievals.
### 3.2 Atmospheric Retrieval
Atmospheric retrievals are a powerful tool to extract information about an exoplanet atmosphere directly from the data (Madhusudhan and Seager, 2009). We explore the data in a hierarchical way, from simple (e.g., cloud free, free abundances, isothermal) to complex models (e.g., inclusion of hazes and clouds, chemical equilibrium, non-isothermal), with multiple retrieval codes. The first set of retrievals we perform are 'free chemistry' retrievals, which directly infer the volume mixing ratios (VMR) for a set of chemical species assumed to be present in the atmosphere (the VMRs are assumed constant with altitude). Each retrieval framework assumed that the atmosphere is dominated by H\({}_{2}\) - expected for objects that have physical properties similar to Saturn, and included the same molecules as opacity sources. All frameworks use the WASP-96 system parameters reported in Hellier et al. (2014). The second set of retrievals are performed assuming that the vertical abundances of the chemical species are in thermochemical equilibrium.
To robustly interpret our WASP-96 b observations, we employ four different retrieval frameworks: CHIMERA (Line et al., 2013), Aurora (Welbanks and Madhusudhan, 2021), POSEIDON (MacDonald and Madhusudhan, 2017; MacDonald, 2023) and PyratBay (Cubillos and Blecic, 2021). A multiple-retrieval approach allows us to compare our results in the regime of high-precision data (Barstow et al., 2020, 2022), thereby quantifying the stability of our atmospheric inferences to model implementations. The molecules common to each code are H\({}_{2}\)O, CO, CO\({}_{2}\), CH\({}_{4}\), NH\({}_{3}\), HCN, Na, and K; these have a prior U(\(-\)12,\(-\)1)2 for all VMRs. The set-up of each code is explained in the following subsections. Furthermore, CHIMERA is also used to run a chemical equilibrium retrieval, as an additional test.
Footnote 2: with the exception of Aurora which has U(\(-\)12,\(-\)0.3) (Welbanks et al., 2019)
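In nested sampling, priors enter through a transform from the unit hypercube onto the model parameters. A minimal sketch of how the shared log-VMR prior above would be mapped is given below; the function and species list are illustrative, not code from any of the four frameworks.

```python
import numpy as np

SPECIES = ["H2O", "CO", "CO2", "CH4", "NH3", "HCN", "Na", "K"]

def prior_transform(u):
    """Map unit-cube samples u in [0, 1]^8 onto log10 volume mixing ratios,
    applying the uniform prior U(-12, -1) to every species."""
    return -12.0 + 11.0 * np.asarray(u)

u = np.random.default_rng(0).random(len(SPECIES))
for s, v in zip(SPECIES, prior_transform(u)):
    print(f"log10 VMR({s}) = {v:6.2f}")
```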
#### 3.2.1 Chimera
We use CHIMERA3 to perform both free and chemically consistent spectral retrievals. CHIMERA is the only framework in this study that uses the correlated-\(k\) approach (Lacis and Oinas, 1991) when computing transmission through the atmosphere. The \(k\)-tables are computed at a resolution of R\(\approx\)3000; the line-by-line data used to calculate the \(k\)-tables are from the following sources: H\({}_{2}\)O (Polyansky et al., 2018; Freedman et al., 2014), CO\({}_{2}\)(Freedman et al., 2014), CO (Rothman et al., 2010), CH\({}_{4}\)(Rothman et al., 2010), HCN (Barber et al., 2014), Na (Kramida et al., 2018; Allard et al., 2019), and K (Kramida et al., 2018; Allard et al., 2016), and were computed following the methods described in Gharib-Nezhad et al. (2021); Grimm et al. (2021). We assume the atmosphere is dominated by H\({}_{2}\), with a He/H\({}_{2}\) ratio of 0.1764; therefore, we also model the H\({}_{2}\)-H\({}_{2}\) and H\({}_{2}\)-He collision-induced absorption (CIA) (Richard et al., 2012).
Footnote 3: The open source code can be found here: [https://github.com/mhline/CHIMERA](https://github.com/mhline/CHIMERA)
To compute the thermal structure, we use the parameterisation described in Madhusudhan and Seager (2009). This approach splits the atmosphere into three layers: the upper atmosphere, where no inversion can take place; a middle region, where an inversion is possible; and a deep layer, where the thermal structure is isothermal. We also consider a scenario in which the temperature structure is simply isothermal, and find that the retrieved abundances do not depend on the choice of thermal structure parameterisation; indeed, all retrievals tend towards an isothermal temperature structure.
Our chemically consistent retrievals aim to explore the impact of physical coupling between the atmospheric composition and temperature structure. Specifically, the molecular and atomic vertical abundances are assumed to be in thermochemical equilibrium. The equilibrium abundances are computed using the NASA CEA (Chemical Equilibrium with Applications) model (Gordon and McBride, 1994) for a given C/O, metallicity, and temperature structure. Thus, the C/O ratio and metallicity are free parameters for these retrievals instead of the chemical abundances themselves.
We model hazes following the prescription of Lecavelier Des Etangs et al. (2008), which treats hazes as enhanced H\({}_{2}\) Rayleigh scattering with a free power law slope. This parameterisation expresses the opacity as \(\sigma_{\rm Hazes}=\alpha\,\sigma_{0}(\lambda/\lambda_{0})^{\gamma}\), where \(\alpha\) is the Rayleigh enhancement factor and \(\gamma\) is the scattering slope (equal to \(-4\) for H\({}_{2}\) Rayleigh scattering). \(\sigma_{0}\) is the H\({}_{2}\) Rayleigh cross section at \(\lambda_{0}\), given by 2.3\(\times\)10\({}^{-27}\) cm\({}^{2}\) and 430 nm respectively. Alongside the haze calculation, we fit for a constant-in-wavelength grey cloud with opacity \(\kappa_{\rm cloud}\). Hence, we term this model the "Simple Haze + Cloud" model.
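For concreteness, a minimal sketch of the "Simple Haze + Cloud" opacity described above is given below; the function names are ours, and the numerical values plugged in are the median haze parameters retrieved later in Section 4.

```python
import numpy as np

SIGMA_0 = 2.3e-27   # H2 Rayleigh cross section (cm^2) at the reference wavelength
LAMBDA_0 = 430e-9   # reference wavelength (m)

def haze_cross_section(wl, log_alpha, gamma):
    """Enhanced Rayleigh scattering: sigma = alpha * sigma_0 * (wl / lambda_0)^gamma."""
    return 10.0**log_alpha * SIGMA_0 * (wl / LAMBDA_0) ** gamma

wl = np.linspace(0.6e-6, 2.8e-6, 1000)
sigma_haze = haze_cross_section(wl, log_alpha=1.85, gamma=-4.0)
# grey (wavelength-independent) cloud; treated here as a per-molecule cross
# section to match sigma_haze's units, which is our simplifying assumption
kappa_cloud = np.full_like(wl, 10.0**-32.66)
total_aerosol = sigma_haze + kappa_cloud
```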
To explore the parameter space, we coupled our parametric forward model with the Bayesian Nested Sampling algorithm PyMultiNest (Feroz et al., 2009; Buchner et al., 2014).
#### 3.2.2 Aurora
We complement our atmospheric analysis by inferring the atmospheric properties of WASP-96 b using Aurora (Welbanks and Madhusudhan, 2021), a Bayesian atmospheric retrieval framework for the interpretation of ground- and space-based observations of transiting exoplanets. Our atmospheric model setup generally follows a similar approach to previous atmospheric studies (e.g., Welbanks and Madhusudhan, 2019) with the same priors for WASP-96 b as in the analysis of the existing VLT observations (Nikolov et al., 2018) presented in Welbanks et al. (2019). Our atmospheric model computes line-by-line radiative transfer in transmission geometry in a plane-parallel atmosphere. The pressure structure of the atmosphere assumes hydrostatic equilibrium for a varying-with-height gravity, in a grid of 100 layers uniformly distributed in log-pressure from 10\({}^{-7}\) to 100 bar. The Bayesian inference is performed using the framework MultiNest (Feroz et al., 2009) through its Python implementation PyMultiNest (Buchner et al., 2014) using 2000 live points.
We explore a series of atmospheric model scenarios with Aurora including the possibility of multidimensional clouds and hazes (e.g., Welbanks et al., 2019), terminator inhomogeneities (e.g., Welbanks & Madhusudhan, 2022), and other modelling assumptions regarding the number of free parameters in our retrievals (e.g., Welbanks & Madhusudhan, 2019). Through this exploration of models we determined a fiducial 19 parameter model for our 'free retrieval' analysis using Aurora and other similar frameworks. This model setup considers a non-isothermal pressure-temperature structure parameterized using the six parameter prescription of Madhusudhan & Seager (2009). Eight sources of opacity are considered in our models. These species, expected to be the main absorbers for hot gas giants (e.g., Madhusudhan, 2019), are parameterized by their logarithmic volume mixing ratios assumed to be constant with height. The species and their corresponding line lists are CH\({}_{4}\)(Yurchenko & Tennyson, 2014; Yurchenko et al., 2017), CO (Rothman et al., 2010), CO\({}_{2}\)(Rothman et al., 2010), H\({}_{2}\)O (Rothman et al., 2010), HCN (Barber et al., 2014), K (Allard et al., 2016), Na (Allard et al., 2019), and NH\({}_{3}\)(Yurchenko et al., 2011). We further include H\({}_{2}\)--H\({}_{2}\) and H\({}_{2}\)-He collision induced absorption (CIA; Richard et al., 2012) and H\({}_{2}\)-Rayleigh scattering (Dalgarno & Williams, 1962). The opacities are computed following the methods described in Gandhi & Madhusudhan (2017, 2018); Gandhi et al. (2020) and Welbanks et al. (2019).
We consider the presence of clouds and hazes in our atmospheric models using the modeling strategy for inhomogeneous terminator cover presented in Line & Parmentier (2016). We consider the presence of scattering hazes as deviations from the Rayleigh scattering in the models by following the parameterization of Lecavelier Des Etangs et al. (2008) as described above. The spectroscopic effect of clouds is included by considering the presence of optically thick cloud decks at a specific pressure level. The combination of inhomogeneous clouds and hazes is implemented following the single-sector prescription as explained in Welbanks et al. (2019) using four additional free parameters. Finally, we use one free parameter to infer the reference pressure corresponding to the assumed planetary radius. To compare our high-resolution (R\(\sim\)30,000) spectra to the NIRISS/SOSS observations we follow the model binning strategy presented in Pinhas et al. (2018).
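The inhomogeneous (patchy) terminator combination used here has a simple functional form; a sketch following the Line & Parmentier (2016) linear combination, with illustrative depths rather than actual model output:

```python
import numpy as np

def patchy_depth(d_cloudy, d_clear, phi):
    """Observed transit depth for an inhomogeneous terminator: a fraction phi of
    the limb is covered by clouds/hazes, the rest is clear (Line & Parmentier 2016)."""
    return phi * np.asarray(d_cloudy) + (1.0 - phi) * np.asarray(d_clear)

# illustrative depths (ppm) at a few wavelengths
d_cloudy = np.array([14600.0, 14550.0, 14500.0])
d_clear  = np.array([14000.0, 14250.0, 14400.0])
print(patchy_depth(d_cloudy, d_clear, phi=0.8))
```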
#### 3.2.3 Poseidon
The third atmospheric retrieval code we employ is POSEIDON (MacDonald & Madhusudhan, 2017; MacDonald, 2023). POSEIDON is a well-established atmospheric modelling and spectral retrieval code that was recently released as an open-source4 Python package (MacDonald, 2023). The radiative transfer technique underlying POSEIDON's transmission spectrum forward model is described in MacDonald & Lewis (2022). Our POSEIDON retrieval samples the parameter space using the Bayesian nested sampling algorithm Multi-Nest, deployed via its Python wrapper PyMultiNest(Feroz et al., 2009; Buchner et al., 2014).
Footnote 4: POSEIDON is available here: [https://github.com/MartianColonist/POSEIDON](https://github.com/MartianColonist/POSEIDON)
Our WASP-96 b POSEIDON retrieval analysis employs a 19-parameter model accounting for non-isothermal pressure-temperature profiles, inhomogeneous clouds and hazes, and the eight common chemical species described above. One parameter encodes the planetary radius at a 10 mbar reference pressure. The five-parameter PT profile follows the prescription in Madhusudhan & Seager (2009), modified to place the reference temperature parameter at 10 mbar. The four-parameter inhomogeneous aerosol model follows MacDonald & Madhusudhan (2017). Finally, eight parameters specify the constant-in-altitude free abundances of H\({}_{2}\)O, CO, CO\({}_{2}\), CH\({}_{4}\), HCN, NH\({}_{3}\), Na, and K. The model constructs an atmosphere ranging from 10\({}^{-8}\)-100 bar, with 100 layers uniformly distributed in log-pressure, and assumes a H\({}_{2}\) + He-dominated background atmosphere with He/H\({}_{2}\) = 0.17. The Bayesian retrieval of this 19-parameter space used 1,000 PyMultiNest live points.
At each location in the parameter space, POSEIDON computed WASP-96 b transmission spectra at a resolution of \(R\) = 20,000 from 0.55-2.9 um. The radiative transfer uses opacity sampling of high-resolution pre-computed cross sections (\(R\sim 10^{6}\)) from the following line list sources: H\({}_{2}\)O (Polyansky et al., 2018), CO (Li et al., 2015), CO\({}_{2}\)(Tashkun & Perevalov, 2011), CH\({}_{4}\)(Yurchenko et al., 2017), HCN (Barber et al., 2014), NH\({}_{3}\)(Coles et al., 2019), Na (Ryabchikova et al., 2015), and K (Ryabchikova et al., 2015). We additionally include continuum opacity from H\({}_{2}\) and He CIA (Karman et al., 2019) and H\({}_{2}\) Rayleigh scattering (Hohm, 1994). We convolve each \(R\) = 20,000 model spectrum with the instrument Point Spread Function (PSF), before binning down to the resolution of the observations (here, \(R\) = 125) to compute the likelihood of each parameter combination. We treat NIRISS/SOSS orders 1 and 2 separately during the convolution and binning procedure, accounting for their different intrinsic PSFs and instrument transmission functions.
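Schematically, the model-to-data comparison in this last step amounts to binning the high-resolution model into the observed bins and evaluating a Gaussian likelihood. A minimal sketch follows (not POSEIDON code; it omits the PSF convolution and the order-dependent throughputs described above, and assumes the model oversamples every bin):

```python
import numpy as np

def bin_model(wl_model, depth_model, edges):
    """Average a high-resolution model spectrum within each observed bin."""
    return np.array([depth_model[(wl_model >= lo) & (wl_model < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])

def log_likelihood(model_binned, depth_obs, err_obs):
    """Gaussian log-likelihood of the binned model given the observations."""
    r = (depth_obs - model_binned) / err_obs
    return -0.5 * np.sum(r**2 + np.log(2.0 * np.pi * err_obs**2))
```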
#### 3.2.4 PyratBay
Lastly, we also employed PyratBay, the Python Radiative-transfer in a Bayesian framework. PyratBay5 is an open-source framework for exoplanet atmospheric modelling, spectral synthesis, and Bayesian retrieval. It utilizes the most up-to-date line-by-line opacity sources from ExoMol (Tennyson et al., 2016), HITEMP (Rothman et al., 2010), atomic species Na, K (Burrows et al., 2000) and collision-induced opacities of H\({}_{2}\)-H\({}_{2}\)(Borysow et al., 2001; Borysow, 2002) and H\({}_{2}\)-He pairs (Borysow et al., 1988, 1989; Borysow & Frommhold, 1989). For effective use in retrieval, we compress these large databases, while retaining the information from the dominating line transitions, using the package of Cubillos (2017). To model the vertical temperature structure, we implement three parameterization schemes: isothermal, and the Line et al. (2013) and Madhusudhan & Seager (2009) prescriptions. This retrieval framework also implements a self-consistent 1D radiative-convective equilibrium scheme (Malik et al., 2017), the classic "power law+gray" prescription, a "single-particle-size" haze profile, a "patchy cloud" prescription for transmission geometry (Line & Parmentier, 2016), and two complex Mie-scattering cloud models. The first is a fully self-consistent microphysical kinetic cloud model of Helling & Woitke (2006) which follows the formation of seed particles, growth of various solid materials, evaporation, gravitational settling, elemental depletion and replenishment (Blecic et al., 2023). The other is a parametrized Mie-scattering thermal stability cloud model (Kilpatrick et al., 2018; Venot et al., 2020).
Footnote 5: The open-source PyratBay code can be found here: [https://pyratbay.readthedocs.io/en/latest/](https://pyratbay.readthedocs.io/en/latest/)
In this work, we assumed the atmosphere of WASP-96 b to be hydrogen dominated (He/H\({}_{2}\) = 0.17) and include collision-induced absorption of H\({}_{2}\)-H\({}_{2}\) and H\({}_{2}\)-He. We included molecular opacity sources of H\({}_{2}\)O (Polyansky et al., 2018), CH\({}_{4}\)(Hargreaves et al., 2020), NH\({}_{3}\)(Yurchenko et al., 2011; Yurchenko, 2015), HCN (Harris
et al., 2006, 2008), CO (Li et al., 2015), and CO\({}_{2}\)(Rothman et al., 2010b), and resonant-line cross-sections of Na and K. In addition, we account for the Rayleigh-scattering cross-section of H\({}_{2}\)(Dalgarno & Williams, 1962) and of an unknown haze particulate, by applying the power law prescription of Lecavelier Des Etangs et al. (2008). Our radiative transfer routine uses opacity sampling of high-resolution pre-computed cross-section tables generated at a resolution of \(R\approx 4\times 10^{7}\), calculates the transmission spectra at \(R=\) 20,000, and computes the likelihood of each model by binning it down to a resolution of \(R=125\). We generated the atmosphere between 10\({}^{-9}\)-100 bar, with 81 layers uniformly distributed in log-pressure, retrieving, in addition to the constant-with-altitude molecular and alkali volume mixing ratios listed above, the Lecavelier Des Etangs et al. (2008) haze parameters and the planetary radius at the reference pressure of 0.1 bar. To find the best modelling setup we tested our available temperature parametrizations and the full range of cloud models, from simple to complex Mie-scattering clouds, assuming species expected to be seen in this temperature regime. We compared these models using the Bayesian Information Criterion (BIC; Liddle, 2007). We found the lowest BIC for the model assuming the Madhusudhan & Seager (2009) temperature prescription with a patchy opaque cloud deck and hazes, accounting for both Lecavelier Des Etangs et al. (2008) and Dalgarno & Williams (1962) haze particles and opacities from H\({}_{2}\)O, CO\({}_{2}\), CO, Na and K. To explore the phase space of these parameters, we coupled our atmospheric model with the Bayesian Nested Sampling algorithm PyMultiNest (Feroz et al., 2009; Buchner et al., 2014) and the Multi-core Markov-chain Monte Carlo code MC3 (Cubillos et al., 2016). Both algorithms returned the same constraints.
## 4 Results
In this section we present the results from our retrieval analysis. We also discuss the impact the resolution of the data has on our inferred abundances.
### 4.1 Retrievals
Using the frameworks described above, we infer the atmospheric properties of WASP-96 b using the NIRISS/SOSS observations binned to four different constant resolutions (R = 125, 250, 500, and pixel level). As discussed below (see Section 4.2), we find our inferences robust regardless of the resolution of the binned observations. Therefore, we present our results using the R = 125 binned observations for clarity. Our first consideration is the possible presence of clouds and hazes. As described above, our atmospheric frameworks compute scenarios representative of cloud-free atmospheres, hazy atmospheres, cloudy atmospheres, and atmospheres with inhomogeneous cloud and haze cover. Comparing these atmospheric scenarios using their Bayesian evidence and translating the model preferences onto a 'sigma' scale (e.g., Benneke & Seager, 2013; Welbanks & Madhusudhan, 2021), we find a \(6\sigma\) model preference for inhomogeneous clouds and hazes over simple cloud-free atmospheres. However, we note that it is primarily a Rayleigh scattering slope which we detect as opposed to any opacity from a grey cloud deck (see Section 4.1.2). We thus limit our discussion to the "inhomogeneous clouds and hazes" model runs moving forwards.
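For reference, mapping a Bayes factor onto the 'sigma' scale follows the Benneke & Seager (2013) calibration via the Sellke et al. bound. A sketch of that conversion (our helper; valid only for Bayes factors above unity):

```python
import numpy as np
from scipy.special import erfcinv, lambertw

def sigma_from_bayes_factor(B):
    """Equivalent 'sigma' significance of a Bayes factor B > 1, using
    B = -1 / (e * p * ln p)  =>  ln p = W_{-1}(-1 / (e * B)),
    then sigma = sqrt(2) * erfcinv(p)  (Benneke & Seager 2013)."""
    ln_p = np.real(lambertw(-1.0 / (np.e * B), k=-1))  # lower branch of Lambert W
    return np.sqrt(2.0) * erfcinv(np.exp(ln_p))

for B in (2.5, 21.0, 370.0):
    print(f"B = {B:6.1f}  ->  {sigma_from_bayes_factor(B):.1f} sigma")
```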
The use of more complex prescriptions separating the spectroscopic effects of clouds from those of hazes across inhomogeneous terminators (e.g., Welbanks & Madhusudhan, 2021), may result in lower model preferences but consistent inferred atmospheric properties. The full retrieved posterior distributions for Aurora, POSEIDON, PyratBay, and CHIMERA can be found in Appendix A.
#### 4.1.1 Retrieved Abundances
The results for the inhomogeneous haze and cloud model runs for all four frameworks are presented in Figure 4, where we present the best-fit transmission spectra, thermal structure and posteriors for H\({}_{2}\)O, CO, CO\({}_{2}\), Na and K. We do not present the posteriors for NH\({}_{3}\), HCN or CH\({}_{4}\) as they remain mostly unconstrained given existing observations. The retrieved abundances from all codes are summarized in Table 1, and generally remain consistent within 1-\(\sigma\), demonstrating that the retrieved atmospheric properties are robust against different model implementations. They are also largely consistent with a solar metallicity atmosphere, in agreement with the interpretation of Radica et al. (submitted) using self-consistent radiative thermochemical equilibrium models.
Using the Aurora framework, we then assess the detection significance of each molecule. This is done by computing the Bayesian evidence for a model without each molecule and comparing to the original model with all species included. We present the breakdown in Table 2.
As a final test, we perform a chemically consistent retrieval on the same data using CHIMERA in order to directly retrieve the atmospheric log(C/O) and log(Met). Like the free retrieval, we fit for our Simple Haze + Cloud model. We find log(C/O) = \(-0.30^{+0.17}_{-0.37}\) and log(Met) = \(-0.63^{+0.64}_{-0.44}\), where the solar values are log(C/O) = \(-0.26\) and log(Met) = \(0\). We present the full posterior distribution of this simulation in Appendix A. The data are therefore consistent, within 1\(\sigma\), with a model that has a solar C/O ratio and a solar metallicity. These results are consistent with the modelling work presented in Radica et al. (submitted). We further demonstrate this consistency in Figure 3: the left panel compares the free-retrieval results to the volume mixing ratios obtained from the best-fitting model in Radica et al. (submitted), and the abundances obtained in our free retrieval are consistent with these profiles. The outlier is the abundance of CO\({}_{2}\), which we find to be consistent with a volume mixing ratio of 10\(\times\) solar. The right panel shows the retrieved volume mixing ratios for the chemical equilibrium framework; these are again consistent with the free retrieval and the models of Radica et al. (submitted).
#### 4.1.2 Retrieved Cloud Parameters
We describe in more detail the model preference for inhomogeneous clouds and hazes over the cloud free model described above. The models considering the presence of inhomogeneous clouds and hazes suggest a large fraction (\(\gtrsim\)70% i.e., \(\theta=0.88^{+0.09}_{-0.18}\) Aurora; \(0.74^{+0.08}_{-0.08}\) CHIMERA; \(0.91^{+0.07}_{-0.17}\) POSEIDON; \(0.81^{+0.15}_{-0.15}\) PyratBay) of the planetary terminator covered by either clouds or scattering hazes. However, the retrieved pressure at which the cloud deck is present is consistently high (log\({}_{10}\)(P\({}_{\rm cloud}\)) = \(0.39^{+1.04}_{-1.08}\) Aurora; \(0.38^{+1.05}_{-1.09}\) POSEIDON; \(0.2^{+1.2}_{-1.2}\) PyratBay) suggesting that the spectroscopic impact of these gray clouds is minimal. Similarly, the low cloud opacity (e.g., log(\(\kappa_{\rm cloud}\)) = \(-32.66^{+1.62}_{-1.48}\)) retrieved by our CHIMERA analysis suggests low impact due to clouds.
On the other hand, our inferred haze scattering properties suggest they make a significant contribution in our WASP-96 b observations. While the scattering slope is retrieved to be largely Rayleigh-like (i.e., \(\gamma=-4.00^{+0.76}_{-1.01}\) Aurora; \(-4.31^{+0.80}_{-0.22}\) CHIMERA; \(-3.75^{+0.68}_{-0.92}\) POSEIDON; \(-4.5^{+1.1}_{-1.4}\) PyratBay), the slope is enhanced by more than
one order of magnitude (\(\log_{10}(\alpha)=1.85^{+0.73}_{-0.47}\) Aurora; \(1.70^{+0.60}_{-0.41}\) POSEIDON; \(2.49^{+0.95}_{-0.77}\) PyratBay). The inferences from the chemical equilibrium retrievals with CHIMERA remain largely in agreement and suggestive of spectroscopic signatures of Rayleigh scattering rather than clouds (e.g., \(\log_{10}(\kappa_{\rm cloud})=-33.21^{+1.20}_{-1.11}\), \(\gamma=-3.31^{+0.43}_{-0.50}\), \(\log_{10}(\alpha)=-1.39^{+0.26}_{-0.27}\) and f = \(0.93^{+0.05}_{-0.10}\)). All models thus tell the story of an atmosphere with small aerosol particles that produce a Rayleigh scattering slope at short wavelengths, but no evidence for a grey cloud deck, which, as is also the case with our chemical inferences above, is consistent with the interpretation of Radica et al. (submitted).
### 4.2 Resolution Testing
Atmospheric retrievals can be computationally demanding, and the spectral resolution of the forward model is a large factor in determining the speed of the calculation. To thoroughly study the spectrum of an exoplanet atmosphere, one needs to perform multiple retrieval studies, each requiring on the order of \(10^{4}\) to \(10^{5}\) model calculations, which can become unfeasible at the native R\(\sim\)700 resolution of NIRISS/SOSS. In this section, we seek to answer the question: Do we infer the same abundances if we bin the native resolution data to lower resolutions?
To answer this we perform a retrieval analysis on three different transmission spectrum resolutions: R = 125, R = 250, and R = 500, shown in Figure 1. We use the same parameterised model presented in Figure 4 and correlated-\(k\) tables calculated at R=3000; hence the model has a resolution six times greater than the maximum data resolution. We find that the retrieved abundances for data with a resolution of R=125 are the same as with a resolution of R=500. Hence, no information is lost when binning the data. We present the posteriors of H\({}_{2}\)O, CO\({}_{2}\), and K in Figure 5. The colours correspond to those in Figure 1.
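Binning to a fixed resolving power uses geometrically spaced bin edges. A short sketch (our helper, not the pipeline's binning code) shows how many bins each of the tested resolutions implies over the SOSS bandpass:

```python
import numpy as np

def constant_R_edges(wl_min, wl_max, R):
    """Bin edges with constant resolving power R = lambda / dlambda:
    successive edges grow by the factor (2R + 1) / (2R - 1)."""
    ratio = (2 * R + 1) / (2 * R - 1)
    n = int(np.ceil(np.log(wl_max / wl_min) / np.log(ratio)))
    return wl_min * ratio ** np.arange(n + 1)

for R in (125, 250, 500):
    print(f"R = {R}: {len(constant_R_edges(0.6, 2.8, R)) - 1} bins over 0.6-2.8 um")
```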
Figure 4: _Top_: Best fit models and best fit temperature profiles, both with 1\(\sigma\) error envelopes. The models have the following colours: CHIMERA = purple, PyratBay = green, POSEIDON = blue and Aurora = red. The data are binned to R=125. _Bottom_: Posterior distribution of each molecule that had some constraint, with the same colour coordination as the best fit models. The horizontal line indicates the 1\(\sigma\) range. The full 2D corner plots are presented in Appendix A.
Figure 5: Retrieved posterior distributions on the chemical composition of WASP-96 b’s atmosphere from our resolution test. We binned our observations to R=500, R=250, and R=125 to explore if this binning down of the data causes a loss of information. Posteriors for R=125 are shown in red, blue is R=250 and purple is R=500. The points and error bars show the median retrieved value, and 1\(\sigma\) credible interval for each test. A retrieval on each resolution yields consistent abundances to well within 1\(\sigma\), allowing us to conclude that no information is lost when binning our WASP-96 b NIRISS/SOSS transmission observations.
WASP-96 b held the unique privilege of being one of the few "cloud-free" exoplanets known. Subsequent studies (Nikolov et al., 2022; McGruder et al., 2022) added HST/WFC3 transit depths, as well as additional ground-based transmission observations from Magellan/IMACS; however, the conclusion of the cloud-free nature of WASP-96 b's upper atmosphere remained unchanged. The GCM models of Samra et al. (2023), though, found that the terminator region of WASP-96 b should be entirely covered in clouds given the temperature structure of the planet. Moreover, they showed that cloudy transmission spectra can provide an equally good fit to the ensemble of transmission data analyzed in Nikolov et al. (2022).
Our two independent GCM models also predict that clouds should be able to form at the terminator of WASP-96 b in the pressure regions probed by transmission spectroscopy (see Figure 2). These models predict that the atmosphere is likely dominated by MnS and Na\({}_{2}\)S clouds. MgSiO\({}_{3}\) clouds should form in the deep layers of the atmosphere and would be observable only if the vertical mixing were extremely large, easily replenishing the upper atmosphere with cloud-forming material, an assumption that is inherent to the Samra et al. (2023) calculation.
One solution to this discrepancy could be that smaller particles than predicted by Samra et al. (2023) form in larger quantities at low pressures in WASP-96 b's atmosphere. These could be composed of Na\({}_{2}\)S or KCl, which would naturally form at much lower pressures than the silicate clouds that dominate the cloud composition in the 100 to 10 mbar range. However, the detection of sodium and potassium in WASP-96 b's atmosphere seems to rule out this possibility. MnS is another candidate for forming clouds at low pressures (Parmentier et al., 2016; Morley et al., 2012); however, Gao et al. (2020) predict that the nucleation rates for MnS are so low that such clouds should hardly form. Another option would be the formation of a high-altitude haze layer formed of photochemically produced particles. Photochemistry is known to naturally form small particles at low pressures that can produce strong scattering slopes (Lavvas and Koskinen, 2017; Kawashima and Ikoma, 2019; Helling et al., 2020; Steinrueck et al., 2021). Additional information about the cloud composition could be gathered by targeting the resonant features of the cloud-forming material in the JWST/MIRI LRS bandpass.
We further note that our detection of a strong scattering slope in the optical is partially degenerate with the abundance of gaseous sodium in the atmosphere. Indeed, when a scattering slope is not included in the retrievals, we obtain an unphysical alkali abundance (e.g., \(\log_{10}(\mathrm{Na})=-2.54^{+0.28}_{-0.34}\) with CHIMERA). However, when enhanced Rayleigh scattering is included, the Na abundance drops to slightly super-solar to solar values, in agreement with Nikolov et al. (2022). Our inferred Na abundance and the presence of a scattering slope therefore need to be carefully interpreted because of this degeneracy, driven by the fact that the NIRISS/SOSS bandpass cuts off at 0.6 um and is therefore only able to probe the red wing of the Na feature. Without fully resolving the Na feature peak, it is difficult to differentiate between a slope caused by a Rayleigh scattering haze and the red wing of a broadened Na feature. More work needs to be conducted to further understand this degeneracy in the context of observations with NIRISS/SOSS.
### Comparison to Radica et al., (submitted)
A suite of forward models was compared to the data in our companion paper (Radica et al., submitted). Three different grids of models were used: PICASO, ATMO, and ScCHIMERA, producing a picture of an atmosphere that has a metallicity of \(1-5\times\) solar and a solar C/O. Our free-retrieval results yield an abundance of H\({}_{2}\)O that is consistent with solar values and a CO\({}_{2}\) abundance that is super-solar, demonstrating that our results are consistent with Radica et al. (submitted). Similarly to Radica et al. (submitted), we need to invoke an enhanced Rayleigh scattering slope to match the observations at the shortest wavelengths, but find no spectroscopic impact from a grey cloud deck. We compare the vertical volume mixing ratios obtained from the best-fitting ScCHIMERA model with our retrieved results in Figure 3, which shows that we obtain a consistent picture of the atmosphere.
## 6 Conclusions
In this paper we have performed a detailed atmospheric characterisation of WASP-96 b using the transmission spectrum obtained with NIRISS/SOSS as part of the Early Release Observations, and first presented in Radica et al. (submitted).
We ran GCM simulations in order to model the planet's atmosphere using the SPARC MIT/gcm and the UM. These clear-sky models are able to fit the spectrum well redward of 1.3 um, and favour an atmosphere with solar metallicity. However, blueward of 1.3 um, the GCMs underpredict the observed transit depths, likely indicating missing opacities such as a scattering haze.
We then performed a suite of retrievals using four different modelling frameworks: CHIMERA, Aurora, PyratBay, and POSEIDON. We find that a model with patchy clouds and hazes best describes the data and that each framework produces results which are consistent within 1\(\sigma\). We report the retrieved abundances from Aurora as \(\log_{10}(\mathrm{H_{2}O})=-3.59^{+0.35}_{-0.35}\), \(\log_{10}(\mathrm{K})=-8.04^{+1.22}_{-1.71}\), \(\log_{10}(\mathrm{CO})=-3.25\), and \(\log_{10}(\mathrm{CO}_{2})=-4.38^{+0.47}_{-0.57}\). We find a large tail in the CO posterior, so we quote this abundance as an upper limit. Further transmission observations with JWST, particularly with NIRSpec G395H, are necessary to more accurately constrain the abundance of CO.
The retrieved abundance of H\({}_{2}\)O is consistent with Yip et al. (2021) and McGruder et al. (2022). Our precision is \(\sim\)10\(\times\) better than McGruder et al. (2022) and \(\sim\)4\(\times\) better than Yip et al. (2021). Our range of retrieved abundances of Na is consistent with Nikolov et al. (2022), McGruder et al. (2022), Yip et al. (2021) and Welbanks et al. (2019); however, given that NIRISS' wavelength coverage does not capture the complete Na feature, there is a degeneracy between the abundance of Na and a Rayleigh scattering slope. This is also reflected in the extremely low detection significance for Na (1.24\(\sigma\)). We therefore caution against any strong interpretations of this Na abundance. We also report a constrained abundance of potassium in the atmosphere of WASP-96 b, although with only a marginal detection significance (\(\sim\)2\(\sigma\)); this was not found in previous studies due to the lower resolution of the optical data. The strong potassium constraint in the atmosphere of WASP-39 b from NIRISS/SOSS (Feinstein et al., 2023), and the tentative detection here, demonstrate how powerful this instrument is for studying alkali metals, and open the door for a new tracer of formation history, the K/O ratio (Feinstein et al., 2023).
Our chemically consistent retrievals favour an atmosphere that has a solar C/O ratio within 1\(\sigma\) and a solar metallicity within 1\(\sigma\). We find \(\log(\mathrm{C/O})=-0.30^{+0.17}_{-0.37}\) and \(\log(\mathrm{Met})=-0.63^{+0.64}_{-0.44}\), where solar values are \(\log(\mathrm{C/O})=-0.26\) and \(\log(\mathrm{Met})=0\). This is consistent with the GCM models and the grid models in Radica et al. (submitted), which favour an atmosphere with 1\(\times\) solar C/O and \(1-5\times\) solar M/H.
We explore the appropriate resolution at which to study observations obtained with NIRISS/SOSS. We find that binning the data from native resolution
to R=125 does not impact the inferred abundances. This is useful, given that retrievals at native resolution are computationally demanding. In the era of JWST, we need to explore more complex models, which are computationally demanding in themselves; we should therefore trade data resolution for model complexity.
Finally, it is critical to note that the previous studies retrieved on a transmission spectrum created through the combination of multiple instruments, with _six transits_ required to construct the Nikolov et al. (2022) spectrum. The NIRISS/SOSS transmission spectrum we have presented here was obtained with _one single transit observation_, further highlighting the undeniable potential of JWST to unveil atmospheres of transiting exoplanets.
## Acknowledgements
We thank the anonymous reviewer for their detailed feedback which greatly improved our manuscript. JT thanks the John Fell Fund and the Canadian Space Agency for financially supporting this work. MR acknowledges financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC), the Fonds de Recherche du Quebec - Nature et Technologies (FRQNT), and the Institut Trottier de recherche sur les exoplanetes (iREx). LW and RJM acknowledge support for this work provided by NASA through the NASA Hubble Fellowship grants #HST-HF2-51496.001-A and #HST-HF2-51513.001, respectively, awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555.
JB acknowledges the support received in part from the NYUAD IT High Performance Computing resources, services, and staff expertise.
MZ was supported through a UKRI Future Leaders Fellowship MR/T040866/1. MZ's Met Office Unified Model simulations were performed using the DiRAC Data Intensive service at Leicester, operated by the University of Leicester IT Services, which forms part of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital grants ST/K000373/1 and ST/R002363/1 and STFC DiRAC Operations grant ST/R001014/1. DiRAC is part of the National e-Infrastructure. A portion of this analysis was carried out on the High Performance Computing resources at New York University Abu Dhabi and resources provided by the Research Computing at Arizona State University.
This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST.
## Software
* astropy; Astropy Collaboration et al. (2013, 2018)
* matplotlib; Hunter (2007)
* numpy; Harris et al. (2020)
* scipy; Virtanen et al. (2020)
* Met Office Unified Model materials were produced using Met Office Software.
* PyMultiNest (Buchner et al., 2014)
* corner (Foreman-Mackey, 2016)
## Data Availability
All data used in this study are publicly available from the Barbara A. Mikulski Archive for Space Telescopes6. The models generated in this paper can be made available on request.
Footnote 6: [https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html](https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html)
|
2307.09106 | Theoretical study on $Λ_c^+ \to ΛK^+\bar{K}^0$ decay and
$Ξ^*(1690)$ resonance | We present a theoretical study of $\Xi^*(1690)$ resonance in the $\Lambda_c^+
\to \Lambda K^+ \bar{K}^0$ decay, where the weak interaction part proceeds
through the Cabibbo-favored process $c \to s + u\bar{d}$. Next, the
intermediate two mesons and one baryon state can be constructed with a pair of
$q\bar{q}$ with the vacuum quantum numbers. Finally, the $\Xi^*(1690)$ is
mainly produced from the final state interactions of $\bar{K}\Lambda$ in
coupled channels, and it is shown in the $\bar{K}\Lambda$ invariant mass
distribution. Besides, the scalar meson $a_0(980)$ and nucleon excited state
$N^*(1535)$ are also taken into account in the decaying channels $K^+\bar{K}^0$
and $K^+\Lambda$, respectively. Within model parameters, the $K^+ \bar{K}^0$,
$\bar{K}^0 \Lambda$ and $K^+ \Lambda$ invariant mass distributions are
calculated, and it is found that our theoretical results can reproduce well the
experimental measurements, especially for the clear peak around $1690$ MeV in
the $\bar{K}\Lambda$ spectrum. The proposed weak decay process $\Lambda_c^+ \to
\Lambda K^+ \bar{K}^0$ and the interaction mechanism can provide valuable
information on the nature of the $\Xi^*(1690)$ resonance. | Si-Wei Liu, Qing-Hua Shen, Ju-Jun Xie | 2023-07-18T09:55:01Z | http://arxiv.org/abs/2307.09106v1 | Theoretical study on \(\Lambda_{c}^{+}\to\Lambda K^{+}\bar{K}^{0}\) decay and \(\Xi^{*}(1690)\) resonance
###### Abstract
We present a theoretical study of \(\Xi^{*}(1690)\) resonance in the \(\Lambda_{c}^{+}\to\Lambda K^{+}\bar{K}^{0}\) decay, where the weak interaction part proceeds through the Cabibbo-favored process \(c\to s+u\bar{d}\). Next, the intermediate two mesons and one baryon state can be constructed with a pair of \(q\bar{q}\) with the vacuum quantum numbers. Finally, the \(\Xi^{*}(1690)\) is mainly produced from the final state interactions of \(\bar{K}\Lambda\) in coupled channels, and it is shown in the \(\bar{K}\Lambda\) invariant mass distribution. Besides, the scalar meson \(a_{0}(980)\) and the nucleon excited state \(N^{*}(1535)\) are also taken into account in the decay channels \(K^{+}\bar{K}^{0}\) and \(K^{+}\Lambda\), respectively. Within the model parameters, the \(K^{+}\bar{K}^{0}\), \(\bar{K}^{0}\Lambda\) and \(K^{+}\Lambda\) invariant mass distributions are calculated, and it is found that our theoretical results can reproduce well the experimental measurements, especially the clear peak around \(1690\) MeV in the \(\bar{K}\Lambda\) spectrum. The proposed weak decay process \(\Lambda_{c}^{+}\to\Lambda K^{+}\bar{K}^{0}\) and the interaction mechanism can provide valuable information on the nature of the \(\Xi^{*}(1690)\) resonance.
## I Introduction
The study of the properties of the \(\Xi^{*}\) states attracts much attention in hadron physics [1; 2; 3; 4; 5]. However, our knowledge about the \(\Xi\) spectrum is quite scarce [6]. Except for the ground state \(\Xi(1321)\) with spin-parity \(J^{P}=1/2^{+}\) and the \(\Xi^{*}(1530)\) with \(J^{P}=3/2^{+}\), which are well established with four-star ratings, the situation of the other \(\Xi^{*}\) excited states is still rather uncertain, with less than three-star ratings [6]. For some of them, their existence has not even been confirmed. Hence, further studies of the \(\Xi^{*}\) resonances on both the experimental and theoretical sides are necessary [7; 8; 9; 10].
For the study of \(\Xi^{*}\) resonances, data from \(\bar{K}N\) scattering are available [11]. Indeed, the \(\Xi^{*}(1690)\) resonance was first observed in the \(\bar{K}\Sigma\) invariant mass spectrum [12] in \(K^{-}p\) reactions at \(4.2\) GeV. In the measurements of Ref. [12], the mass and total decay width of \(\Xi^{*}(1690)\) are established as \(M=1694\pm 6\) MeV and \(\Gamma=26\pm 6\) MeV in the charged channel, and \(M=1684\pm 5\) MeV and \(\Gamma=20\pm 4\) MeV in the neutral channel. In Ref. [13], the \(\Xi^{*}(1690)\) resonance was confirmed by the WA89 collaboration in the neutral \(\pi^{+}\Xi^{-}\) channel with measured mass \(M=1686\pm 4\) MeV and width \(\Gamma=10\pm 6\) MeV. In Ref. [14], in addition to the investigations of the \(\Xi^{*}(1530)\) properties in the \(\Lambda_{c}^{+}\to K^{+}\pi^{+}\Xi^{-}\) decay process, evidence for the existence of the \(\Xi^{*}(1690)\) resonance with \(J^{P}=1/2^{-}\) was also found by the \(BABAR\) collaboration. In Ref. [15], the Belle collaboration presented the first evidence for the process \(\Lambda_{c}^{+}\to K^{+}\Xi^{*0}(1690)\to K^{+}K^{-}\Sigma^{+}\), and the fitted mass and width of \(\Xi^{*}(1690)\) are \(M=1688\pm 2\) MeV and \(\Gamma=11\pm 4\) MeV. Furthermore, a contribution of the \(\Xi^{*0}(1690)\) to the \(\Lambda_{c}^{+}\to K^{+}\Xi^{*0}(1690)\to K^{+}\bar{K}^{0}\Lambda\) reaction was also found [15]. In Ref. [16], the \(\Xi(1690)\) resonance was also observed in the \(\bar{K}^{0}\Lambda\) channel in the decay \(\Lambda_{c}^{+}\to K^{+}\bar{K}^{0}\Lambda\) by the \(BABAR\) collaboration, where a coherent amplitude analysis of the \(\Lambda a_{0}(980)\) and \(K^{+}\Xi^{*0}(1690)\) productions was performed; the obtained mass and width of the \(\Xi^{*}(1690)\) resonance are \(M=1684.7\pm 1.3\)(stat.)\({}^{+2.2}_{-1.6}\)(syst.) MeV and \(\Gamma=8.1^{+3.9}_{-3.5}\)(stat.)\({}^{+1.0}_{-0.9}\)(syst.) MeV, and its spin is consistent with \(1/2\). Furthermore, the \(\Xi^{*}(1690)\) resonance has also been found in hyperon-nucleon interactions [17; 18]. In most of these experimental analyses, the spin-parity of the \(\Xi^{*}(1690)\) resonance favors \(J^{P}=1/2^{-}\).
In 2019, the \(\Xi^{*}(1620)\) resonance was observed in the \(\pi^{+}\Xi^{-}\) invariant mass spectrum via the \(\Xi_{c}^{+}\to\Xi^{-}\pi^{+}\pi^{+}\) decay. Meanwhile, evidence for the \(\Xi^{*}(1690)\) resonance was also found in the same data sample, with a significance of \(4\sigma\)[19]. However, up to the present time the quantum numbers of \(\Xi^{*}(1690)\) have not been determined. To fully understand the nature of the \(\Xi^{*}(1690)\) resonance, further experiments are certainly required.
On the theoretical side, within constituent quark models, the first excited state of the \(\Xi\) baryon lies around \(1800\) MeV [20; 21; 22]; thus it is difficult to treat \(\Xi^{*}(1690)\) as a three-quark state. In Ref. [3], the \(\Xi^{*}(1690)\) was treated as a three-quark state with spin-parity quantum numbers \(J^{P}=1/2^{-}\). Note that the mass obtained with the constituent quark model [3] is 1725 MeV, which is still 35 MeV higher than its nominal mass. This implies that \(\Xi^{*}(1690)\) might have some nontrivial structure beyond the usual three-quark state. In fact, using the chiral unitary approach, the \(\Xi^{*}(1690)\) can be interpreted as an \(s\)-wave meson-baryon molecular state [1; 2; 23]. It couples strongly to the \(\bar{K}\Sigma\) channel [23], while its coupling to the \(\pi\Xi\) channel is small; thus its narrow width can be naturally explained [23]. Furthermore, its spin-parity is \(1/2^{-}\) in the chiral unitary approach. In Ref. [24], the \(\Xi^{*}(1690)\) was investigated by means of two-point QCD sum rules, where it was also concluded that the \(\Xi^{*}(1690)\) state most probably has spin-parity \(J^{P}=1/2^{-}\). However, the obtained width for the
\(\Xi^{*}(1690)\) state is about \(100\) MeV [24], which is much larger than the experimental measurements [6].
The main peculiarity of the \(\Xi^{*}(1690)\) resonance is that its decay width is remarkably narrow compared with other baryon resonances of similar mass [6]. Although the \(\Xi^{*}(1690)\) state has a large phase space for decaying into open channels, such as \(\pi\Xi\) and \(\bar{K}\Lambda\), its width is only of the order of \(10\) MeV. In Ref. [25], using the chiral unitary approach, the \(\Xi^{*}(1690)\) state is dynamically generated in the pseudoscalar-baryon and vector-baryon coupled channels 1. It was found that most of the properties of \(\Xi^{*}(1690)\), especially its narrow width, can be well explained [25]. In that work, the \(\Xi^{*}(1690)\) has strong couplings to the \(\bar{K}\Sigma\) and \(\eta\Xi\) channels, while its couplings to the \(\bar{K}\Lambda\) and \(\pi\Xi\) channels are small. Recently, the meson-baryon interaction in the neutral strangeness \(S=-2\) sector was studied using an extended unitarized chiral perturbation theory, where the leading Weinberg-Tomozawa term, the Born terms, and the next-to-leading-order contributions are considered [26]. It was found that both the \(\Xi^{*}(1620)\) and \(\Xi^{*}(1690)\) states can be dynamically generated, and their obtained properties are in reasonable agreement with the known experimental data.
Footnote 1: Note that the transitions between pseudoscalar-baryon and vector-baryon channels are crucial to obtain the pole for \(\Xi^{*}(1690)\) state. If those transitions were zero, the pole of \(\Xi^{*}(1690)\) disappears.
It has been shown that the weak decays of charmed baryons governed by the \(c\to s\) quark transition are beneficial for probing strange baryons, some of which are subjects of intense debate about their nature [27; 28; 29]. For instance, hyperon production from the \(\Lambda_{c}^{+}\to K^{-}p\pi^{+}\) and \(\Lambda_{c}^{+}\to K_{S}^{0}\pi^{0}\) decays was investigated within the effective Lagrangian approach in Ref. [30]. The \(\Xi^{*}(1620)\) and \(\Xi^{*}(1690)\) states were studied in the \(\Xi_{c}\to\pi^{+}MB\) process in Ref. [31], where \(M\) and \(B\) stand for a meson and a baryon, respectively. It was shown that these weak decays might be an ideal tool to study the \(\Xi^{*}(1620)\) and \(\Xi^{*}(1690)\) resonances, which are dynamically produced in the rescattering of \(M\) and \(B\) in the final state.
Therefore, in this work, we take advantage of those ideas and revisit the \(\Lambda_{c}^{+}\to\Lambda K^{+}\bar{K}^{0}\) decay, which requires the creation of an \(s\bar{s}\) quark pair. Following Refs. [23; 25; 31], in addition to the contributions of the \(\Xi^{*}(1690)\) resonance that is dynamically generated from the final state interactions of \(\bar{K}^{0}\Lambda\) in \(S\) wave, we will consider also the contributions of the \(a_{0}(980)\) and \(N^{*}(1535)\) states, where they are dynamically produced by the final state interactions of \(K^{+}\bar{K}^{0}\)[32; 33] and \(\Lambda K^{+}\)[34; 35; 36; 37; 38; 39] in coupled channels, respectively.
This article is organized as follows. In the next section, we present the theoretical formalism for studying the \(\Lambda_{c}^{+}\to\Lambda K^{+}\bar{K}^{0}\) decay. In Sec. III, the theoretical numerical results and discussions are presented, followed by a short summary in Sec. IV.
## II Formalism
For the production of \(\Xi^{*}(1690)\) in the weak decay process \(\Lambda_{c}^{+}\to K^{+}\bar{K}^{0}\Lambda\), we first take it as a dynamically generated state in the final state interaction of \(\bar{K}\Lambda\) in coupled channels. Second, we take \(\Xi^{*}(1690)\) as a Breit-Wigner resonance. The roles of the \(a_{0}(980)\) and \(N^{*}(1535)\) are also investigated, where the contribution of the \(a_{0}(980)\) state is encoded in the \(s\)-wave \(K\bar{K}\) final state interaction, as done in Refs. [40; 41; 42; 43; 44; 45], and that of the \(N^{*}(1535)\) resonance in the \(s\)-wave \(K\Lambda\) final state interaction [34; 35; 36; 37; 38; 39].
The schematic diagram for the Cabibbo-favored process \(\Lambda_{c}^{+}\to\Lambda K^{+}\bar{K}^{0}\) is presented in Fig. 1, where the decay proceeds in three steps:
* In the first step, the \(c\) quark in \(\Lambda_{c}^{+}\) turns into an \(s\) quark via the weak decay.
* A \(q\bar{q}\) pair with the quantum numbers of the vacuum is introduced to form a pseudoscalar-baryon (\(PB\)) or pseudoscalar-pseudoscalar (\(PP\)) pair.
* The final state interactions are taken into account in coupled channels within the chiral unitary approach, which will lead to the dynamical productions of the \(a_{0}(980)\), \(N^{*}(1535)\), and \(\Xi^{*}(1690)\) resonances.
### Contributions of the \(a_{0}(980)\), \(N^{*}(1535)\), and \(\Xi^{*}(1690)\) resonances from final state interactions
Following Refs. [46; 47; 48; 31; 49; 50], we assume two primary decay mechanisms for the process \(\Lambda_{c}^{+}\to\Lambda K^{+}\bar{K}^{0}\), which are presented in Fig. 2. In the weak decays of charmed hadrons, the diagrams are classified into six different topologies [51; 52; 53; 54]: \(W\) external emission, \(W\) internal emission, \(W\)-exchange, \(W\)-annihilation, horizontal \(W\)-loop, and vertical \(W\)-loop. Here, we consider the dominant \(W\) external emission and \(W\) internal emission diagrams, shown in Figs. 2 (a) and (b), respectively. The \(c\) quark in \(\Lambda_{c}^{+}\) turns into an \(s\) quark and a \(W^{+}\) boson, followed by the \(W^{+}\) boson decaying into a \(u\bar{d}\) pair. To get the final states \(\Lambda K^{+}\bar{K}^{0}\), the \(sud\) (\(s\bar{d}\)) cluster forms the \(\Lambda\) (\(\bar{K}^{0}\)), and the \(u\bar{d}\) (\(uud\)), together with the \(q\bar{q}\) (\(=u\bar{u}+d\bar{d}+s\bar{s}\)) pair created from the vacuum, hadronizes into \(K^{+}\bar{K}^{0}\) (\(K^{+}\Lambda\)). In these processes, the \(ud\) diquark in the \(\Lambda_{c}^{+}\) is always kept. Therefore, the \(\Lambda_{c}^{+}\) weak decay processes shown in Fig. 2 can
be formulated as follows [46; 47; 48; 49; 50]:
\[\Lambda_{c}^{+} =\frac{1}{\sqrt{2}}\left|c(ud-du)\right\rangle,\] \[\rightarrow-\frac{\sqrt{6}}{3}V_{1}\left|\Lambda\right\rangle \left|u(u\bar{u}+d\bar{d}+s\bar{s})\bar{d}\right\rangle, \tag{1}\] \[\rightarrow\frac{1}{\sqrt{2}}V_{2}\left|\bar{K}^{0}\right\rangle \left|u(u\bar{u}+d\bar{d}+s\bar{s})(ud-du)\right\rangle, \tag{2}\]
where \(V_{1}\) and \(V_{2}\) are the strengths of the production vertices 2, which contain all the dynamical factors. Following the results of Ref. [31], we then connect the two degrees of freedom, quarks and hadrons, and rewrite the intermediate states as
Footnote 2: We have assumed that these production vertices proceed through \(S\)-wave interaction.
\[\Lambda_{c}^{+} \to V_{1}\left|\Lambda\right\rangle\left|-\frac{2\sqrt{2}}{3}\pi^{+} \eta-\frac{\sqrt{6}}{3}K^{+}\bar{K}^{0}\right\rangle \tag{3}\] \[\to V_{2}\left|\bar{K}^{0}\right\rangle\left|(\frac{\eta}{ \sqrt{3}}+\frac{\pi^{0}}{\sqrt{2}})p+\pi^{+}n-\frac{\sqrt{6}}{3}K^{+}\Lambda \right\rangle, \tag{4}\]
where we have omitted the \(\eta^{\prime}\) terms as in Refs. [23; 25; 31], because its mass threshold is located much higher in energy and its contribution should be small. After the production of the \(PP\) or \(PB\) pair, the final state interactions between the intermediate mesons and baryons take place, as shown in Fig. 3. We can then obtain the explicit expressions of the decay amplitudes \(\mathcal{M}_{i}\) (\(i=a\), \(b\), \(c\), and \(d\), according to the diagrams shown in Fig. 3) using Eqs. (3) and (4):
\[\mathcal{M}_{a} = \mathcal{M}^{\rm Tree}=-\frac{\sqrt{6}}{3}(V_{1}+V_{2}), \tag{5}\] \[\mathcal{M}_{b} = \mathcal{M}_{K^{+}\bar{K}^{0}}^{\rm FSI}=h_{K^{+}\bar{K}^{0}}G_{K^{+}\bar{K}^{0}}T_{K\bar{K}\to K\bar{K}}^{I=1}-h_{\pi^{+}\eta}G_{\pi^{+}\eta}T_{\pi\eta\to K\bar{K}}^{I=1}, \tag{6}\] \[\mathcal{M}_{c} = \mathcal{M}_{\bar{K}^{0}\Lambda}^{\rm FSI}=h_{\bar{K}^{0}\Lambda}G_{\bar{K}^{0}\Lambda}T_{\bar{K}\Lambda\to\bar{K}\Lambda}^{I=1/2}, \tag{7}\] \[\mathcal{M}_{d} = \mathcal{M}_{K^{+}\Lambda}^{\rm FSI}=h_{K^{+}\Lambda}G_{K^{+}\Lambda}T_{K\Lambda\to K\Lambda}^{I=1/2}-\Big(\sqrt{\tfrac{2}{3}}h_{\pi^{+}n}G_{\pi^{+}n}+\tfrac{1}{\sqrt{3}}h_{\pi^{0}p}G_{\pi^{0}p}\Big)T_{\pi N\to K\Lambda}^{I=1/2}+h_{\eta p}G_{\eta p}T_{\eta N\to K\Lambda}^{I=1/2}, \tag{8}\]
where \(G_{PP}\) and \(G_{PB}\) are the loop functions of the \(PP\) or the \(PB\) propagator, respectively. The coefficients \(h_{PP}\) and \(h_{PB}\) for the terms of final state interactions are
\[h_{K^{+}\bar{K}^{0}} = -\frac{\sqrt{6}}{3}(V_{1}+V_{2}),\ h_{\pi^{+}\eta}=-\frac{2\sqrt{ 2}}{3}V_{1}, \tag{9}\] \[h_{\bar{K}^{0}\Lambda} = h_{K^{+}\Lambda}=-\frac{\sqrt{6}}{3}(V_{1}+V_{2}),\] (10) \[h_{\pi^{+}n} = V_{2},\ h_{\pi^{0}p}=\frac{\sqrt{2}}{2}V_{2},\ h_{\eta p}=\frac{ \sqrt{3}}{3}V_{2}. \tag{11}\]
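As a cross-check on the flavor coefficients entering Eqs. (3) and (9)-(11), the hadronized \(u(u\bar{u}+d\bar{d}+s\bar{s})\bar{d}\) state can be computed symbolically as the \((u,\bar{d})\) element of \(P\cdot P\), with \(P\) the standard SU(3) pseudoscalar-meson matrix (with the \(\eta^{\prime}\) dropped, as in the text). The sketch below is an illustration under that assumed convention; the resulting relative weight \(2/\sqrt{3}\) between \(\pi^{+}\eta\) and \(K^{+}\bar{K}^{0}\) matches the ratio \((2\sqrt{2}/3)/(\sqrt{6}/3)\) in Eq. (3).

```python
import sympy as sp

pi0, pip, pim, eta = sp.symbols('pi0 pi+ pi- eta')
Kp, K0, Km, K0b = sp.symbols('K+ K0 K- K0bar')

# Standard SU(3) pseudoscalar matrix with eta' omitted (assumed convention)
P = sp.Matrix([
    [pi0/sp.sqrt(2) + eta/sp.sqrt(3), pip, Kp],
    [pim, -pi0/sp.sqrt(2) + eta/sp.sqrt(3), K0],
    [Km, K0b, -eta/sp.sqrt(3)],
])

# (u, dbar) element of P.P: the hadronized u(q qbar)dbar combination
print(sp.expand((P * P)[0, 1]))  # -> 2*sqrt(3)/3 * eta*pi+  +  K+*K0bar
```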
The analytic form of the loop function \(G\), with dimensional regularization, is given by
\[G(s)=\frac{2M}{16\pi^{2}}\Bigg\{a_{\mu}+\ln\frac{M^{2}}{\mu^{2}}+\frac{m^{2}-M^{2}+s}{2s}\ln\frac{m^{2}}{M^{2}}+\frac{p}{\sqrt{s}}\Big[\ln\left(s-(m^{2}-M^{2})+2p\sqrt{s}\right)+\ln\left(s+(m^{2}-M^{2})+2p\sqrt{s}\right)-\ln\left(-s+(m^{2}-M^{2})+2p\sqrt{s}\right)-\ln\left(-s-(m^{2}-M^{2})+2p\sqrt{s}\right)\Big]\Bigg\}, \tag{12}\]
where \(\mu\) is a scale of dimensional regularization, and \(a_{\mu}\) is the subtraction constant. Any change of \(\mu\) can be reabsorbed by a
change in \(a_{\mu}\). In this work, we choose \(\mu=630\) MeV, while \(a_{\mu}\) will be determined from the experimental data, as discussed in the following. \(m\) and \(M\) are the masses of the meson and the baryon in the loop, respectively. \(p\) represents the magnitude of the three-momentum of one particle in the meson-meson or meson-baryon rest frame,
\[p = \frac{\lambda^{1/2}(s,m^{2},M^{2})}{2\sqrt{s}}, \tag{13}\] \[\lambda^{1/2}(x,y,z) = \sqrt{x^{2}+y^{2}+z^{2}-2xy-2yz-2xz}. \tag{14}\]
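For concreteness, Eqs. (12)-(14) can be transcribed numerically as follows; the small positive imaginary part added to \(\sqrt{s}\) is an implementation choice of this sketch, used to select the physical branch of the complex logarithms on the real axis.

```python
import numpy as np

def kallen(x, y, z):
    """Kaellen triangle function; Eq. (14) is its square root."""
    return x**2 + y**2 + z**2 - 2*x*y - 2*y*z - 2*x*z

def G_loop(sqrt_s, m, M, a_mu, mu=630.0):
    """Meson-baryon loop function of Eq. (12), dimensional regularization.

    All dimensionful inputs in MeV; m (M) is the meson (baryon) mass and
    a_mu the subtraction constant at scale mu. Returns a complex value."""
    s = (sqrt_s + 1e-6j) ** 2            # + i*epsilon prescription
    p = np.sqrt(kallen(s, m**2, M**2) + 0j) / (2.0 * np.sqrt(s))  # Eq. (13)
    d = m**2 - M**2
    logs = (np.log(s - d + 2*p*np.sqrt(s)) + np.log(s + d + 2*p*np.sqrt(s))
            - np.log(-s + d + 2*p*np.sqrt(s)) - np.log(-s - d + 2*p*np.sqrt(s)))
    return (2.0*M / (16.0*np.pi**2)) * (a_mu + np.log(M**2 / mu**2)
                                        + (d + s)/(2.0*s) * np.log(m**2 / M**2)
                                        + (p/np.sqrt(s)) * logs)

# Example: the Kbar-Lambda loop evaluated below and above its threshold.
for w in (1580.0, 1650.0):
    print(w, G_loop(w, 495.6, 1115.7, a_mu=-2.0))
```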
The two body scattering amplitudes \(T_{PP\to PP}\) and \(T_{PB\to PB}\) are obtained by solving the Bethe-Salpeter equation within the chiral unitary approach [55; 56; 33]
\[T=[1-VG]^{-1}V, \tag{15}\]
where \(V\) is the transition potential between the involved channels, given explicitly in Refs. [57; 33; 25; 39]. The scattering amplitude elements \(T\) in Eq. (15) in the particle basis can be related to those in the isospin basis, and we get
\[T_{K^{+}\bar{K}^{0}\to K^{+}\bar{K}^{0}} = T_{K\bar{K}\to K\bar{K}}^{I=1}, \tag{16}\] \[T_{\pi^{+}\eta\to K^{+}\bar{K}^{0}} = -T_{\pi\eta\to K\bar{K}}^{I=1}, \tag{17}\] \[T_{\bar{K}^{0}\Lambda\to\bar{K}^{0}\Lambda} = T_{\bar{K}\Lambda\to\bar{K}\Lambda}^{I=1/2}, \tag{18}\] \[T_{K^{+}\Lambda\to K^{+}\Lambda} = T_{K\Lambda\to K\Lambda}^{I=1/2}, \tag{19}\] \[T_{\pi^{+}n\to K^{+}\Lambda} = -\sqrt{\frac{2}{3}}T_{\pi N\to K\Lambda}^{I=1/2}, \tag{20}\] \[T_{\pi^{0}p\to K^{+}\Lambda} = -\sqrt{\frac{1}{3}}T_{\pi N\to K\Lambda}^{I=1/2}, \tag{21}\] \[T_{\eta p\to K^{+}\Lambda} = T_{\eta N\to K\Lambda}^{I=1/2}. \tag{22}\]
Although the forms of the \(K\bar{K}\), \(K\Lambda\), and \(\bar{K}\Lambda\) interactions have been detailed elsewhere [33; 25; 39; 57; 58], we briefly revisit here the \(\bar{K}\Lambda\) case. This allows us to review the general procedure for calculating the two-body amplitudes entering the total decay amplitude of the \(\Lambda_{c}^{+}\to\Lambda\bar{K}^{0}K^{+}\) reaction.
In order to describe the \(\Xi^{*}(1690)\) state, a coupled-channel analysis was performed. In the isospin \(1/2\) sector, there are four coupled channels: \(\pi\Xi\), \(\bar{K}\Lambda\), \(\bar{K}\Sigma\), and \(\eta\Xi\), labeled by the indices \(j=1,...,4\). The transition potential \(V_{ij}\) is expressed as [58; 25]
\[V_{ij} = -\frac{C_{ij}}{4f_{i}f_{j}}(2\sqrt{s}-M_{i}-M_{j}) \tag{23}\] \[\times\sqrt{\frac{(M_{i}+E_{i})(M_{j}+E_{j})}{4M_{i}M_{j}}},\]
where \(f_{i}\) is the meson decay constant of the \(i\)th channel, with \(f_{\pi}=92.1\) MeV, \(f_{K}=1.2f_{\pi}\), and \(f_{\eta}=1.3f_{\pi}\). \(E_{i}\) and \(E_{j}\) are the initial and final baryon energies, with \(E_{i}=\sqrt{M_{i}^{2}+|\vec{p}_{i}|^{2}}\) and \(|\vec{p}_{i}|=\frac{\lambda^{1/2}(s,m_{i}^{2},M_{i}^{2})}{2\sqrt{s}}\theta(\sqrt{s}-M_{i}-m_{i})\), where \(s\) is the invariant mass squared of the meson-baryon system and \(m_{i}\) and \(M_{i}\) are the meson and baryon masses in the \(i\)th channel, respectively. The factor \(C_{ij}\) is symmetric, and its values are listed in Table 1.
Then one can solve the Bethe-Salpeter equation shown
in Eq. (15) with the on-shell factorized potential, and thus the \(S\)-wave scattering matrix \(T_{ij}\) can be easily obtained. One can then look for poles of the scattering amplitude \(T_{ij}\) in the complex plane of \(\sqrt{s}\). A pole \(Z_{R}\) on the second Riemann sheet can be associated with the \(\Xi^{*}(1690)\) resonance: the real part of \(Z_{R}\) gives the mass \(M_{\Xi^{*}(1690)}\) of the \(\Xi^{*}(1690)\) resonance, while the imaginary part corresponds to one half of its width \(\Gamma_{\Xi^{*}(1690)}\).
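Schematically, the on-shell coupled-channel machinery of Eqs. (15) and (23) reduces to a matrix inversion at each energy. The sketch below reuses the G_loop function from the previous snippet; the channel masses are isospin-averaged values, the subtraction constants are taken from the text and Table 2, and the \(C_{ij}\) matrix is a placeholder that must be filled with the coefficients of Table 1 (not reproduced in this excerpt).

```python
import numpy as np

# Channels (I = 1/2, S = -2): pi Xi, Kbar Lambda, Kbar Sigma, eta Xi.
# Isospin-averaged masses (MeV); f_i as quoted in the text.
m_mes = np.array([138.0, 495.6, 495.6, 547.9])
m_bar = np.array([1318.3, 1115.7, 1193.2, 1318.3])
f_dec = np.array([92.1, 1.2 * 92.1, 1.2 * 92.1, 1.3 * 92.1])
C = np.ones((4, 4))   # PLACEHOLDER: the symmetric C_ij of Table 1 go here

def V_WT(sqrt_s):
    """Weinberg-Tomozawa kernel of Eq. (23) for all channel pairs."""
    s = sqrt_s**2
    lam = s**2 + m_mes**4 + m_bar**4 - 2*(s*m_mes**2 + s*m_bar**2 + (m_mes*m_bar)**2)
    p = np.where(sqrt_s > m_mes + m_bar, np.sqrt(np.abs(lam)) / (2*sqrt_s), 0.0)
    E = np.sqrt(m_bar**2 + p**2)
    fac = np.sqrt((m_bar + E) / (2.0 * m_bar))   # fac_i*fac_j gives Eq. (23)'s root
    V = np.empty((4, 4))
    for i in range(4):
        for j in range(4):
            V[i, j] = (-C[i, j] / (4.0 * f_dec[i] * f_dec[j])
                       * (2.0*sqrt_s - m_bar[i] - m_bar[j]) * fac[i] * fac[j])
    return V

def T_matrix(sqrt_s, a_mu):
    """Eq. (15): T = [1 - V G]^{-1} V, with G a diagonal loop matrix."""
    G = np.diag([G_loop(sqrt_s, m_mes[i], m_bar[i], a_mu[i]) for i in range(4)])
    V = V_WT(sqrt_s)
    return np.linalg.inv(np.eye(4) - V @ G) @ V

# Real-axis scan of the Kbar Lambda -> Kbar Lambda element (with the true
# C_ij a peak appears near 1690 MeV); the pole search itself requires
# continuing G to the second Riemann sheet of the open channels.
a_mu = np.array([-2.0, -2.0, -1.99, -3.53])  # a_{pi Xi}, a_{Kbar Lambda}, fit values
for w in (1680.0, 1686.0, 1692.0):
    print(w, abs(T_matrix(w, a_mu)[1, 1])**2)
```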
### Contribution of \(\Xi^{*}(1690)\) resonance as a Breit-Wigner resonance
On the other hand, the decay process \(\Lambda_{c}^{+}\to\Xi^{*}(1690)K^{+}\to\Lambda\bar{K}^{0}K^{+}\) can also proceed through \(\Xi^{*}(1690)\) as a Breit-Wigner resonance and decaying into \(\bar{K}^{0}\Lambda\), which is shown in Fig. 4. In this case, the \(\Xi^{*}(1690)\) state is formed with \(ud\bar{d}ss\) as shown in Fig. 4 (a), and it decays into \(\bar{K}^{0}\Lambda\) in the final state. The hadron level diagram for the decay of \(\Lambda_{c}^{+}\to\Xi^{*}(1690)K^{+}\to\Lambda K^{+}\bar{K}^{0}\) is also shown in Fig. 4 (b), where the propagator of \(\Xi^{*}(1690)\) resonance is parametrized as the Breit-Wigner form.
Then, the general decay amplitude for the process \(\Lambda_{c}^{+}\to\Xi^{*}(1690)^{0}K^{+}\to\bar{K}^{0}\Lambda K^{+}\) can be expressed as:
\[\mathcal{M}_{e}=\frac{\Gamma_{\Xi^{*}(1690)}}{2}\,\frac{V_{3}}{M_{\bar{K}^{0}\Lambda}-M_{\Xi^{*}(1690)}+i\,\Gamma_{\Xi^{*}(1690)}/2}, \tag{24}\]
where \(M_{\bar{K}^{0}\Lambda}\) is the invariant mass of the \(\bar{K}^{0}\Lambda\) system, and we take \(M_{\Xi^{*}(1690)}=1690\) MeV and \(\Gamma_{\Xi^{*}(1690)}=20\) MeV for the Breit-Wigner mass and width of the \(\Xi^{*}(1690)\) resonance, respectively, as quoted in the PDG [6]. The free model parameter \(V_{3}\) will be determined from the experimental data.
Before going further, we emphasize that the \(ud\bar{d}ss\) component of the \(\Xi^{*}(1690)\) resonance cannot be guaranteed from the decay process shown in Fig. 4. Indeed, the \(\Xi^{*}(1690)\) resonance can also be produced from the \(W\)-exchange diagram [59; 60; 15; 61], where the \(c\) and \(d\) quarks first transition into \(s\) and \(u\) via the weak interaction; then, with an \(s\bar{s}\) pair from the vacuum, the \(u\bar{s}\) forms the \(K^{+}\), while the \(\Xi^{*}(1690)^{0}\) is constructed from the \(uss\) cluster and then decays into \(\bar{K}^{0}\Lambda\). Nevertheless, this kind of contribution can be absorbed into the model parameter \(V_{3}\).
### Invariant mass distributions of the \(\Lambda_{c}^{+}\to\Lambda K^{+}\bar{K}^{0}\) decay
With the ingredients obtained in the previous sections, we can write the double invariant mass distributions for the \(\Lambda_{c}^{+}\to K^{+}\bar{K}^{0}\Lambda\) decay as
\[\frac{d^{2}\Gamma}{dM_{K^{+}\bar{K}^{0}}dM_{\bar{K}^{0}\Lambda}}=\frac{M_{ \Lambda}M_{K^{+}\bar{K}^{0}}M_{\bar{K}^{0}\Lambda}}{16\pi^{3}M_{\Lambda_{c}^ {+}}^{2}}|\mathcal{M}|^{2}, \tag{25}\]
where \(\mathcal{M}\) is the total decay amplitude. We will take two models for \(\mathcal{M}\):
\[\mathcal{M}^{\rm I} = \mathcal{M}_{a}+\mathcal{M}_{b}+\mathcal{M}_{d}+\mathcal{M}_{c}, \tag{26}\] \[\mathcal{M}^{\rm II} = \mathcal{M}_{a}+\mathcal{M}_{b}+\mathcal{M}_{d}+e^{i\theta} \mathcal{M}_{e}, \tag{27}\]
where a relative phase \(\theta\) is added in Model II as a free parameter. In addition, another relative phase, \(\phi\), between \(V_{1}\) and \(V_{2}\) is also taken into account; we replace \(V_{2}\) with \(e^{i\phi}V_{2}\) in the fitting process. Furthermore, it is worth mentioning that \(V_{1}\), \(V_{2}\) and \(V_{3}\) have dimensions of \({\rm MeV}^{-1}\).
Then the invariant \(\bar{K}^{0}\Lambda\) and \(K^{+}\bar{K}^{0}\) mass distributions can be obtained by integrating over the other invariant mass in Eq. (25). For a given value of \(M_{\bar{K}^{0}\Lambda}\), the invariant mass \(M_{K^{+}\bar{K}^{0}}\) satisfies the following relations
\[\begin{split}(M_{K^{+}\bar{K}^{0}}^{\text{min}})^{2}& =\left(E_{K^{+}}+E_{\bar{K}^{0}}\right)^{2}\\ &-(\sqrt{E_{K^{+}}^{2}-m_{K^{+}}^{2}}+\sqrt{E_{\bar{K}^{0}}^{2}-m _{\bar{K}^{0}}^{2}})^{2},\\ (M_{K^{+}\bar{K}^{0}}^{\text{max}})^{2}&=\left(E_{K^{+ }}+E_{\bar{K}^{0}}\right)^{2}\\ &-(\sqrt{E_{K^{+}}^{2}-m_{K^{+}}^{2}}-\sqrt{E_{\bar{K}^{0}}^{2}-m _{\bar{K}^{0}}^{2}})^{2},\end{split} \tag{28}\]
where \(E_{K^{+}}\) and \(E_{\bar{K}^{0}}\) are the particle energies in the \(\bar{K}^{0}\Lambda\) rest frame, which can be expressed explicitly as
\[\begin{split} E_{K^{+}}&=\frac{M_{\Lambda_{c}^{+}}^{2}-M_{\bar{K}^{0}\Lambda}^{2}-m_{K^{+}}^{2}}{2M_{\bar{K}^{0}\Lambda}},\\ E_{\bar{K}^{0}}&=\frac{M_{\bar{K}^{0}\Lambda}^{2}+m_{\bar{K}^{0}}^{2}-M_{\Lambda}^{2}}{2M_{\bar{K}^{0}\Lambda}}.\end{split} \tag{29}\]
Note that the invariant \(K^{+}\Lambda\) mass distribution can be obtained by substituting \(M_{K^{+}\bar{K}^{0}}\) with \(M_{K^{+}\Lambda}\).
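Putting Eqs. (24), (25), (28), and (29) together, each single-differential distribution follows from a one-dimensional numerical integration. The sketch below uses the Breit-Wigner amplitude of Model II alone as the simplest illustrative \(|\mathcal{M}|^{2}\); the masses and the fitted \(V_{3}\) are taken from the text, and the overall normalization is arbitrary.

```python
import numpy as np

M_LC, M_LAM = 2286.46, 1115.68   # Lambda_c^+ and Lambda masses (MeV)
M_KP, M_K0 = 493.68, 497.61      # K^+ and Kbar^0 masses (MeV)

def amp2_BW(M_KK, M_KL, V3=1.81, M_R=1690.0, G_R=20.0):
    """|M_e|^2 from Eq. (24): the Xi*(1690) as a Breit-Wigner in Kbar0 Lambda;
    V3 is the Model II fitted value from Table 2."""
    Me = 0.5 * G_R * V3 / (M_KL - M_R + 0.5j * G_R)
    return abs(Me)**2

def dGamma_dM_KL(M_KL, amp2):
    """d(Gamma)/dM_{Kbar0 Lambda}: integrate Eq. (25) over M_{K+ Kbar0},
    with the limits of Eq. (28) built from the energies of Eq. (29)."""
    E_Kp = (M_LC**2 - M_KL**2 - M_KP**2) / (2.0 * M_KL)   # Eq. (29)
    E_K0 = (M_KL**2 + M_K0**2 - M_LAM**2) / (2.0 * M_KL)
    p_Kp = np.sqrt(max(E_Kp**2 - M_KP**2, 0.0))
    p_K0 = np.sqrt(max(E_K0**2 - M_K0**2, 0.0))
    m_lo = np.sqrt((E_Kp + E_K0)**2 - (p_Kp + p_K0)**2)   # Eq. (28)
    m_hi = np.sqrt((E_Kp + E_K0)**2 - (p_Kp - p_K0)**2)
    grid = np.linspace(m_lo, m_hi, 400)
    vals = [M_LAM * m * M_KL / (16.0 * np.pi**3 * M_LC**2) * amp2(m, M_KL)
            for m in grid]
    return float(np.sum(vals) * (grid[1] - grid[0]))      # simple Riemann sum

# Breit-Wigner line shape across the Xi*(1690) region (arbitrary units):
for w in (1670.0, 1690.0, 1710.0):
    print(w, dGamma_dM_KL(w, amp2_BW))
```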
## III Results and discussion
With the above formalism, we perform \(\chi^{2}\) fits to the experimental data on the invariant mass distributions of the process \(\Lambda_{c}^{+}\to\Lambda K^{+}\bar{K}^{0}\). There are a total of 79 data points. For
Model I, there are five free parameters: \(V_{1}\), \(V_{2}\), \(a_{\bar{K}\Sigma}\), \(a_{\eta\Xi}\), and \(\phi\). For Model II, there are also five free parameters: \(V_{1}\), \(V_{2}\), \(V_{3}\), \(\phi\), and \(\theta\). Note that we have fixed \(a_{\pi\Xi}=a_{\bar{K}\Lambda}=-2\), their natural value, while \(a_{\bar{K}\Sigma}\) and \(a_{\eta\Xi}\) are free parameters in this work. It is found that the pole position of the \(\Xi^{*}(1690)\) state is not sensitive to the values of \(a_{\pi\Xi}\) and \(a_{\bar{K}\Lambda}\), as found in Refs. [23; 58].
The fitted parameters and the corresponding \(\chi^{2}/{\rm dof}\) are listed in Table 2. One can see that both fits in Table 2 yield reasonably small \(\chi^{2}/{\rm dof}\). With the fitted parameters \(a_{\bar{K}\Sigma}\) and \(a_{\eta\Xi}\), the pole position of the \(\Xi^{*}(1690)\) state is \(Z_{R}=1685.83+i2.76\) MeV, and the corresponding mass and width are \(M_{\Xi^{*}(1690)}=1685.83\) MeV and \(\Gamma_{\Xi^{*}(1690)}=5.52\) MeV. The obtained mass of the \(\Xi^{*}(1690)\) state is slightly below the mass threshold (1689 MeV) of the \(\bar{K}\Sigma\) channel and close to the averaged value quoted in the PDG. However, the obtained width is rather narrow, which is a common conclusion of the chiral unitary approach [23; 25].
The fitted invariant mass distributions 3 of the \(\Lambda_{c}^{+}\to\Lambda K^{+}\bar{K}^{0}\) reaction are shown in Fig. 5 for Models I and II. It is found that both models can describe the experimental measurements fairly well. In Fig. 5 (a), the peak of the \(\Xi^{*}(1690)\) resonance in the \(\Lambda K_{S}^{0}\) invariant mass distribution is well reproduced. 4 It is clearly seen that the line shapes of the \(\Xi^{*}(1690)\) resonance are quite different for Model I and Model II: the peak produced by Model I is higher but narrower, and there is a sharp decrease around the mass threshold of the \(\bar{K}\Sigma\) channel. It is expected that more precise data around \(M_{\bar{K}^{0}\Lambda}=1.69\) GeV could be used to clarify this issue. Furthermore, one can see that without the contributions of \(\Xi^{*}(1690)\) (\(\mathcal{M}_{c}\) for Model I; \(\mathcal{M}_{e}\) for Model II), the experimental data cannot be described.
Footnote 3: To compare with the experimental measurements, we take \(\left|K^{0}\right>=\frac{1}{\sqrt{2}}(\left|K_{S}^{0}\right>+\left|K_{L}^{0} \right>)\) and \(\left|\bar{K}^{0}\right>=\frac{1}{\sqrt{2}}(\left|K_{S}^{0}\right>-\left|K_{L} ^{0}\right>)\), where we have ignored the small effect of \(CP\) violation.
In Figs. 5 (b) and (c), we show the \(K_{S}^{0}K^{+}\) and \(\Lambda K^{+}\) invariant mass distributions, respectively. It is found that the theoretical results of Models I and II are very similar. This may indicate that the contributions of the \(\Xi^{*}(1690)\) resonance,
\begin{table}
\begin{tabular}{c|c|c} \hline & Model I & Model II \\ \hline \(V_{1}\) \(({\rm MeV}^{-1})\) & 3.60\(\pm\)0.55 & 0.45\(\pm\)0.83 \\ \(V_{2}\) \(({\rm MeV}^{-1})\) & \(-\)0.89\(\pm\)0.67 & \(-\)4.61\(\pm\)1.16 \\ \(V_{3}\) \(({\rm MeV}^{-1})\) & \(-\) & 1.81\(\pm\)0.18 \\ \(a_{\bar{K}\Sigma}\) & \(-\)1.99\(\pm\)0.09 & \(-\) \\ \(a_{\eta\Xi}\) & \(-\)3.53\(\pm\)0.29 & \(-\) \\ \(\phi\) & 0.60\(\pm\)0.45 & 3.20\(\pm\)1.16 \\ \(\theta\) & \(-\) & 0.67\(\pm\)1.02 \\ \(\chi^{2}/{\rm dof}\) & 1.51 & 1.56 \\ \(M_{\Xi^{*}(1690)}\) \(({\rm MeV})\) & 1685.83 (output) & 1690 (input) \\ \(\Gamma_{\Xi^{*}(1690)}\) \(({\rm MeV})\) & 5.52 (output) & 20 (input) \\ \hline \end{tabular}
\end{table}
Table 2: Values of the parameters used or determined in this work. The values of \(a_{\pi\Xi}\) and \(a_{\bar{K}\Lambda}\) in Model I are fixed to the natural value \(-2\). The mass and width of the \(\Xi^{*}(1690)\) state for Model I are obtained from the fitted parameters, while for Model II they are taken as \(M_{\Xi^{*}}=1690\) MeV and \(\Gamma_{\Xi^{*}}=20\) MeV, as quoted in the PDG [6].
from Models I and II, to the \(K^{0}_{S}K^{+}\) and \(\Lambda K^{+}\) invariant mass distributions are also similar, even though the two models give very different line shapes for the \(\Xi^{*}(1690)\) state. Once again, one can see that the contributions from the \(K\bar{K}\) and \(K\Lambda\) final state interactions in \(S\) wave, shown in Fig. 3, are crucial to reproduce the experimental measurements.
Next, we study the branching fraction of the \(\Lambda^{+}_{c}\to K^{+}\Xi^{*}(1690)\to K^{+}\bar{K}^{0}\Lambda\) decay to quantify the effect of the \(\Xi^{*}(1690)\) resonance in the process \(\Lambda^{+}_{c}\to\Lambda K^{+}\bar{K}^{0}\). With the full amplitudes of Eqs. (26) and (27) and the fitted parameters shown in Table 2, the ratio of branching fractions for the production of the \(\Xi^{*}(1690)\) resonance can be calculated as follows,
\[\frac{\mathcal{B}(\Lambda^{+}_{c}\to K^{+}\Xi^{*}(1690)\to K^{+}\bar{K}^{0} \Lambda)}{\mathcal{B}(\Lambda^{+}_{c}\to\Lambda K^{+}\bar{K}^{0})}=\begin{cases} 0.28\\ 0.07\end{cases}\quad, \tag{30}\]
for Models I and II, respectively. The result of Model I is in good agreement with the experimental value of the Belle collaboration, \(0.26\pm 0.08\pm 0.03\)[15], while the result of Model II is much smaller. We hope that the calculations presented here can be checked by future precise experiments.
## IV Summary
In this work, we have performed an analysis of the \(\Lambda^{+}_{c}\to\Lambda K^{+}\bar{K}^{0}\) decay via two Cabibbo-favored mechanisms. By considering the hadronization process and the final state interactions in the pseudoscalar-baryon and pseudoscalar-pseudoscalar channels within the chiral unitary approach, the roles of the \(\Xi^{*}(1690)\), \(a_{0}(980)\), and \(N^{*}(1535)\) resonances are investigated. Taking into account the contributions of these three resonances, we have calculated the \(K^{+}K^{0}_{S}\), \(\Lambda K^{0}_{S}\) and \(\Lambda K^{+}\) invariant mass distributions. With up to five free parameters, we have performed \(\chi^{2}\) fits to the experimental measurements. It was found that the mechanism proposed here can describe the experimental data of the \(BABAR\) collaboration [16]. In particular, the clear signal of the \(\Xi^{*}(1690)\) state is well reproduced in the \(\Lambda K^{0}_{S}\) invariant mass spectrum.
Within the fitted model parameters, we have further calculated the ratio of branching fractions \(\mathcal{B}(\Lambda^{+}_{c}\to K^{+}\Xi^{*}(1690)\to K^{+}\bar{K}^{0}\Lambda)/\mathcal{B}(\Lambda^{+}_{c}\to\Lambda K^{+}\bar{K}^{0})\) to assess the contribution of the \(\Xi^{*}(1690)\) resonance to the \(\Lambda^{+}_{c}\to\Lambda K^{+}\bar{K}^{0}\) decay. The value 0.28 obtained for Model I, where the \(\Xi^{*}(1690)\) is dynamically generated from the meson-baryon final state interactions, is consistent with the experimental value of the Belle collaboration [15]. It is expected that the \(\Xi^{*}(1690)\) state can be further analyzed with precise measurements of the \(\Lambda^{+}_{c}\to K^{+}\bar{K}^{0}\Lambda\) decay by the BESIII, Belle II, and LHCb collaborations in the future [62].
###### Acknowledgements.
We would like to thank Profs. Wei-Hong Liang and Eulogio Oset for useful discussions. This work is partly supported by the National Natural Science Foundation of China under Grant No. 12075288. It is also supported by the Youth Innovation Promotion Association CAS.
|
2302.03593 | A Systematic Review on Human Modeling: Digging into Human Digital Twin
Implementations | Human Digital Twins (HDTs) are digital replicas of humans that either mirror
a complete human body, some parts of it as can be organs, flows, cells, or even
human behaviors. An HDT is a human specific replica application inferred from
the digital twin (DT) manufacturing concept, defined as a technique that
creates digital replicas of physical systems or processes aimed at optimizing
their performance and supporting more accurate decision-making processes. The
main goal of this paper is to provide readers with a comprehensive overview of
current efforts in the HDT field, by browsing its basic concepts, differences
with DTs, existing developments, and the distinct areas of application. The
review methodology includes an exhaustive review of scientific literature,
patents, and industrial initiatives, as well as a discussion about ongoing and
foreseen HDT research activity, emphasizing its potential benefits and
limitations. | Heribert Pascual, Xavi Masip Bruin, Albert Alonso, Judit Cerdà | 2023-02-04T13:29:33Z | http://arxiv.org/abs/2302.03593v1 | # A Systematic Review on Human Modeling: Digging into Human Digital Twin Implementations
###### Abstract
Human Digital Twins (HDTs) are digital replicas of humans that mirror either a complete human body, some parts of it (such as organs, flows, or cells), or even human behaviors. An HDT is a human-specific replica application inferred from the digital twin (DT) manufacturing concept, defined as a technique that creates digital replicas of physical systems or processes aimed at optimizing their performance and supporting more accurate decision-making processes. The main goal of this paper is to provide readers with a comprehensive overview of current efforts in the HDT field, by browsing its basic concepts, differences with DTs, existing developments, and the distinct areas of application. The review methodology includes an exhaustive review of scientific literature, patents, and industrial initiatives, as well as a discussion about ongoing and foreseen HDT research activity, emphasizing its potential benefits and limitations.
_Digital Twin, Human Digital Twin, modelling_
## I Introduction and Motivation
Human digital twins (HDTs) are digital replicas of either a human body or some of its organs, used in many distinct sectors and verticals, including health (e.g., monitoring disease progression), sport (e.g., supervising an athlete's performance), and manufacturing (e.g., scheduling tasks in factories), just to name a few. An HDT is an extension of the digital twin (DT) concept, which stands for the creation of digital replicas of physical systems or processes in order to both optimize their performance and improve decision-making processes. It is widely known that the concept of DTs is nowadays gaining popularity and is being applied in a wide range of sectors.
### _Motivation_
The main goal of this survey is to provide readers with a comprehensive overview of current HDT efforts, particularly focusing on the following objectives:
I-A1 To understand the basic concepts of a DT, including its definition, types, life cycle, and characteristics, thus helping readers acquire the necessary background about DTs, particularly their capabilities and limitations, and making it easier for them to contextualize HDTs within the broader DT field.
I-A2 To introduce the HDT concept and explain its key differences with respect to a DT, also illustrating how an HDT may be used.
I-A3 To dig into the scientific literature, patents, and industrial initiatives for current efforts addressing HDT implementations, thus providing an extensive overview of real efforts towards developing HDTs.
I-A4 To identify the main sectors HDTs are being and will be applied to, thus emphasizing the potential applicability of HDTs as well as how different sectors may benefit from their deployment.
I-A5 To analyze the rationale for HDTs to be used in all envisioned specific verticals, emphasizing specific goals, challenges, and research objectives, while also highlighting potential benefits and limitations.
### _The Digital Twin Concept (DT)_
A DT is a virtual replica of a physical system, process, or product that is periodically updated with data collected from its corresponding physical entity and environment. By replicating the behavior of the physical entity under different conditions and analyzing the results, DTs can be used to optimize performance, prevent hardware failures, anticipate maintenance needs, and simulate processes by sending actuations back to the physical entity. The concept of DTs has been widely studied, as summarized in Section II-A.
### _The Human Digital Twin Concept (HDT)_
HDTs apply the DT's replica concept to humans, thus creating digital replicas of either a human or part of one (e.g., an organ). HDTs are created by first collecting data from various sources, such as cameras, sensors, wearables, medical devices, and medical records, and then using these data to build a digital model. As explained later in this paper, this information may be processed using different technologies, e.g., Artificial Intelligence (AI), artificial vision, 3D simulations, rule-based systems, etc. The rationale behind the DT concept makes it easy to see that HDTs may be used to deploy proactive strategies that, in the case of humans, may notably contribute to several aspects, e.g., predicting a disease's behavior, anticipating the disease status or the performance of humans under several conditions and contexts (including, for example, healthcare, manufacturing, and sports), proactively supporting patient prognosis, etc.
Indeed, it is widely accepted that HDTs may have the potential to bring many notable benefits to humans, such as improving patient care, optimizing manufacturing processes, enhancing safety in transportation systems, or optimizing athletic performance, to name a few. For example,
in health, HDTs can be used to predict the likelihood that a patient will develop a certain condition, or to evaluate the effectiveness of different treatment options. In manufacturing, HDTs can be used to optimize production schedules, thus improving quality. In transportation, HDTs can be used to optimize vehicle design, improve driver safety systems, and reduce accidents. Finally, in sports, HDTs can be used to evaluate training programs, optimize nutrition and hydration, and improve performance.
### _Review Methodology_
A search was conducted in September 2022 in Google Scholar for the first 100 results ordered by relevance for the terms "human AND digital AND twin*". The same search was also conducted in the Web of Science search engine; in this case, 200 results were collected (half of them patents), in order to analyze a similar volume of results in both searches. The results of both searches (other than patents) were put together in a list, from which repeated papers were eliminated. Only papers written in English were revised by two reviewers, and those that did not include an HDT implementation were rejected. Papers that were marked as not having any implementation by both reviewers were discarded, while papers positively marked by at least one reviewer were included in the eligibility phase. During the eligibility phase, each paper was revised in depth and given a score by the two reviewers, based on the proposal's feasibility, the implementation, and the overall proposal quality. The scores given by the reviewers could be zero, one, or two. Finally, only the most relevant and high-quality papers, as determined by a final score equal to or greater than three, were included in the survey.
Patents that did not contain an English abstract were excluded from further analysis. The abstracts of the remaining patents were evaluated by two reviewers to detect an HDT implementation. Each reviewer assigned a score of 1 if an implementation was identified, and a score of 0 if it was not. For patents that scored a total of 1, a third reviewer was consulted to decide whether to award the additional point. Only patents that received a final score of 2 were included in the survey.
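For clarity, the inclusion rules just described can be condensed into a few lines of illustrative Python (the function names are ours and purely for exposition):

```python
def include_paper(score_rev1, score_rev2):
    """Papers: each reviewer scores 0-2 (feasibility, implementation,
    overall quality); a combined score >= 3 enters the survey."""
    return score_rev1 + score_rev2 >= 3

def include_patent(score_rev1, score_rev2, tiebreak=None):
    """Patents: two reviewers score 0/1 for a detected HDT implementation;
    a total of 1 is sent to a third reviewer, and only a final 2 is kept."""
    total = score_rev1 + score_rev2
    if total == 1 and tiebreak is not None:
        total += tiebreak
    return total == 2
```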
Additionally, we undertook several searches in different search engines looking for private companies' implementations, which allowed us to retrieve documents from the grey literature. These private companies' implementations were also considered, as they constitute part of the state of the art in HDT technology.
### _Related Surveys_
Previous work has already been done to summarize the efforts in the DT area, resulting in several published surveys, which are also revisited and analyzed in this paper. In particular, surveys [1, 2, 3, 4, 5], which provide a comprehensive overview of the DT basic concepts from a global perspective, and surveys conducted on the DT topic [6, 7, 8], which are of interest for further research in HDT areas beyond implementation, were considered.
## II Digital Twin Basic Concepts
The concept of a DT was first introduced by Dr. Michael Grieves in a course on Product Lifecycle Management at the University of Michigan, and later published in a whitepaper [9]. In 2010, NASA researchers published Grieves' idea in their roadmap [10] and began investigating ways to use DTs to reduce costs and resources in their space assets. Recently, the interest in DTs has grown significantly, mainly motivated by current advances in network capabilities and data collection and delivery technologies (leveraging concepts such as the Internet of Things, big data, and Industry 4.0), as well as by the availability of data.
In this context, DTs have become an important tool for simulating and analyzing the behavior and characteristics of physical entities. Some of the key features of DTs include their ability to:
_1) Mimic the structure, environment, and behavior of physical entities._
_2) Be dynamically updated with data from the physical entity throughout its life cycle. DTs supporting near-real-time updates are called active DTs, while semi-active DTs only support asynchronous updates._
_3) Perform simulations, detect failures, and make predictions about the performance of physical entities._
_4) Transfer the outcome of simulations and analytics to the physical entity in order to improve its performance, by both preventing hardware failures and/or malfunctions and anticipating maintenance needs_
In the following subsections, we review the definition, key features, life cycle, and characteristics of DTs, also revisiting some currently common uses of this technology, in order to provide the reader with a more thorough understanding of the concept and its key points.
### _Definition_
There is no universally agreed definition of a DT within the scientific community, in terms of what we may call a standard. However, several definitions may be found, customized to suit specific sectors and verticals. For example, the American Institute of Aeronautics and Astronautics (AIAA) [11] defines a DT as "_a set of virtual information constructs that mimics the structure, context, and behavior of an individual/unique physical asset, or a group of physical assets, is dynamically updated with data from its physical twin throughout its life cycle and informs decisions that realize value_". A slightly distinct approach by Rainer Stark and Thomas Damerau [12] defines a DT as "_A digital representation of an active unique product (real device, object, machine, service, or intangible asset) or unique product-service system (a system consisting of a product and a related service) that comprises its selected characteristics, properties, conditions, and behaviors by means of models, information, and data within a single or even across multiple life cycle phases_". Many other DT definitions exist [13], and
Figure 1: WOS DT publications by year
even other similar concepts can also be found in the literature, such as digital mirroring and digital shadow, which are discussed in detail in [14] and [15].
### _DT Types and Operation_
There are several types of DT implementations, as defined by Grieves and Vickers in [16], each of which serves a specific purpose.
The DT Prototype (DTP) is a digital representation used for collecting data from physical systems or processes, often used as a test before creating other types of DTs. For instance, a manufacturer might use a DTP to collect data from a prototype of a new product to enhance its design and reduce costs. However, it cannot be considered a full DT, as it lacks feedback to the physical system.
The DT Instance (DTI) is a virtual replica of a single physical entity, such as an engine within a complex system. A DTI might be utilized to monitor the performance of an engine in a specific aircraft, and the data collected from the DTI might be utilized, for example, to optimize maintenance schedules and improve efficiency.
Extending the DTI concept, the DT Aggregate (DTA) aggregates data from multiple DTI instances, becoming a virtual model of the physical system or process as a whole rather than of a specific instance of it. For instance, a manufacturer might use a DTA to monitor the performance of hundreds of engines located in different places, to identify recurrent issues and optimize maintenance schedules.
From a distinct perspective, recognizing the fact that the environment can notably impact DT performance, the term DT Environment (DTE) refers to the virtual replica of the conditions a physical system operates in. Thus, a DTE might be utilized to simulate the effects of using a DTI in different locations and to test different weather conditions to assess the performance of the aforementioned engine, as it might not be the same landing in a northern country as in a tropical one.
The DT operation is illustrated in Figure 2, wherein sensory data collected from various sources within the physical entity (depicted in light grey), such as speed, temperature, and pressure readings, as well as data about the environment, can be transmitted to the DT (represented in dark grey) via a physical-to-virtual communication channel (detailed in Figure 3). These data enable simulations and analytics to be performed within the virtual instance. For example, a DTI and DTP may be utilized to assess various configurations of a manufacturing process in order to determine the optimal setup. In a DTI, interventions can be initiated based on DT feedback using the virtual-to-physical communication channel (detailed in Figure 3) and subsequently implemented in the physical system if necessary. For instance, whenever a process modifies any aspect of the physical or virtual system, this information is exchanged between them, enabling the necessary actions to be taken. If a malfunction is detected in the physical system, the DTI can be utilized to identify the root cause and propose potential remedial actions, which can then be implemented in the physical system.
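To make this loop concrete, the following minimal Python sketch mirrors the two channels just described: a physical asset pushes sensor snapshots to its DTI, which stores them as history, runs a (here deliberately trivial) failure check, and returns an intervention over the virtual-to-physical channel. All class names, thresholds, and values are hypothetical illustrations rather than the API of any real DT framework.

```python
class PhysicalAsset:
    """Stands in for the instrumented physical entity."""
    def __init__(self):
        self.temperature = 95.0  # assumed starting condition

    def read_sensors(self):
        # Physical-to-virtual channel: one sensory snapshot.
        return {"temperature": self.temperature, "speed": 120.0}

    def apply_intervention(self, action):
        # Virtual-to-physical channel: feedback from the twin.
        if action == "reduce_load":
            self.temperature -= 5.0

class DigitalTwinInstance:
    """Mirrors one asset (a DTI) and keeps all received data as history."""
    TEMP_LIMIT = 90.0  # assumed failure threshold for the example

    def __init__(self):
        self.history = []

    def update(self, sensor_data):
        self.history.append(sensor_data)

    def simulate_and_decide(self):
        # Trivial stand-in for the simulation / failure-detection step.
        if self.history[-1]["temperature"] > self.TEMP_LIMIT:
            return "reduce_load"
        return None

asset, twin = PhysicalAsset(), DigitalTwinInstance()
for cycle in range(3):  # one pass per synchronization cycle
    twin.update(asset.read_sensors())
    action = twin.simulate_and_decide()
    if action:
        asset.apply_intervention(action)
    print(cycle, twin.history[-1]["temperature"], action)
```

In a real deployment, the `simulate_and_decide` step would of course be a physics simulation or learned model rather than a threshold check, but the update/decide/intervene cycle stays the same.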
The design phase of a DT is a crucial challenge, as it requires a comprehensive understanding of the system to be represented and the selection of the relevant parameters to attain the desired level of fidelity. Fidelity refers to the level of detail and precision with which the DT represents the physical system or process, as well as the number and type of parameters that are transferred between the digital and physical environments. Last but not least, performance should also be considered, in terms of the bandwidth needed or the required computing time. Indeed, this is extremely important, as DT implementations are usually highly demanding in resources. For example, if the goal of the DT is to optimize the maintenance schedule for a specific type of engine, it would be important to include a wide range of parameters related to the engine's performance and operating conditions in order to achieve a high level of fidelity. On the other hand, if the goal is simply to monitor the overall performance of the engine as a part of a more complex vehicle DT, a lower level of fidelity may be sufficient to obtain better performance.
In order to ensure that the DT can accurately represent the physical system and, consequently, meet the goals of the design, it is important to have highly skilled specialists engaged in the design phase. For instance, a team of engineers with expertise in the design and operation of a specific engine type, working together with computer scientists and data analysts, would be well suited to contribute to the design of a successful DT for that engine.
Figure 3: Detail of DT communication channels
Figure 2: DT basic concept
The parameters referenced before include types of data, information, and processes. These parameters should be carefully chosen and regularly checked during the operation of the DT as their quantity and relevance can affect the DT's fidelity and performance.
### _DT Lifecycle Description_
Figure 4 illustrates the typical procedure for the creation of a DT, which is divided into several phases.
During the concept phase, key considerations include the environment where the DT will be deployed, the virtualization of the physical system or process, and the creation of the DTP. The parameters collected from the physical environment and its corresponding DT must be carefully evaluated, and in some cases, it may be necessary to add additional sensors to the physical system and its environment in order to improve the DT fidelity. This tuning might not be so easy in some scenarios. For example, while there should be no issues in an Industry 4.0 context (machines are often highly instrumented with many sensors), it may be far more difficult in a healthcare context (for example, considering the lack of comfort of some of the wearables that patients are asked to use). The key challenge to address toward a successful DT design is to select the data relevant to the problem to be modeled, and to process it properly in order to achieve the intended goals, thus avoiding the need to collect and process irrelevant data.
During the realization phase, the DTP is developed and extended to create an individual DTI leveraging the data collected from each physical system or process to be replicated. It is also common to deploy a DTA that consumes information from all the DTIs in order to make general improvements to the product or process, detect recurring faults, or even anticipate critical maintenance.
During the support and use phase, all elements within the DT are interconnected, and the information gathered is used to improve both the physical and virtual sides. The DTIs can be refined by evaluating which information from the physical processes is the most relevant, thus reducing the amount of data to be processed and increasing the fidelity of the DT. The physical system or process can also use the information obtained through the digital processes of the DTIs to make necessary adjustments or interventions.
During the retire phase, the DTIs and their corresponding DTA and virtual environment are typically archived, allowing the owner to utilize the valuable information generated throughout the entire DT lifecycle in the future to design similar products or processes, considering the potential issues that the previous version encountered.
### _DT Characteristics_
Two main characteristics may be emphasized for a DT to properly and accurately achieve its objectives, i.e., effective communication and proactive performance.
Effective communication is crucial for the success of a DT system. The physical and virtual representations must have a communication system enabling continuous and reliable exchange of information about the processes on each side. While the communication channels may utilize different network architectures and synchronization times, a DT isolated from the real-world data would not be effective. Through the communication channels, the DT receives dynamic information from the physical system or process to keep the virtual instance and its environment updated. Using this data, the DT can provide feedback to the physical system and other physical systems in similar environments. All data received, including data generated by the digital twin's processes, should be stored for the DT to access at any time as historical data.
The other important aspect of the DT is its ability to be updated with new data, thus improving its predictive, corrective, and prescriptive capabilities and accurately reflecting the current state of the physical system or process. To this end, DTs often leverage novel technologies and tools, such as mathematical models, artificial intelligence algorithms, data analysis techniques, data dimensionality reduction, self-adaptation, and self-parameterization, to name a few. By using these technologies, the DT can perform its functions efficiently, achieving real-time cyber-physical synchronization, and users can interact with a virtual representation of the physical system or process to view its state, perform stress tests, or simply modify input parameters to predict the final result.
### _Key DT Verticals_
The following subsections present the most common verticals for DT, each introducing some specific use cases.
#### II-E1 Manufacturing
The DT concept has gained widespread adoption in the manufacturing industry [7], mainly because this was the seed vertical that other sectors mirrored. Within the manufacturing sector, DTs are used for two main purposes: products and manufacturing processes. The data collected from DTs helps companies make informed decisions about products and processes, resulting in time and cost savings as well as increased efficiency.
On the one hand, in the product domain [17], DTs are commonly used to perform simulations [9] with demanding variables (such as speed, resistance, or environment) to achieve improved performance, attain a specific safety level, or determine the product limit specifications in different environments. The collected data is also utilized to identify hardware defects and prevent faults through proactive maintenance planning. All data collected from multiple DTIs or their associated DTA is utilized to inform the design of subsequent product versions.
Figure 4: DT lifecycle
On the other hand, DTs are employed to digitize manufacturing processes, assisting with: i) the management of machinery [18] and personnel workloads; ii) the prevention of occupational risks [19]; iii) the management of space in warehouses and logistics [20]; and iv) the optimization of production machinery speed, performance, and scheduling [21]. In addition to generating valuable information about the manufacturing process, a DT facilitates the rapid resolution of potential problems that may arise, such as the management of available resources following a production line failure. Using DT technology, multiple alternatives can be computed or simulated to select the new resource assignment that yields the optimal results, mitigating the disruption caused by the failure. DTs also enable the monitoring of the work environment to prevent industrial accidents, alleviate work fatigue, and ensure the safety of human-robot interaction. Many DTs in manufacturing processes utilize artificial intelligence and big data techniques to achieve superior results more efficiently.
The automotive industry [22] is a special case within the manufacturing sector, as product failures can result in significant harm to customers and the repair of minor failures in vehicle components can incur high recall costs for the company. DTs are used to digitally test certain vehicle components (such as engines, shocks, control units) in various environments and with varying requirements to identify design, hardware, or recurring faults prior to the finalization of a model. Once the product is complete, DTs can also be created with their corresponding DTA to collect data for the detection of recurring failure patterns or the improvement of future designs.
Aviation is another use case in the manufacturing industry [23], extending its relevance even further due to the paramount importance of safety. Indeed, DTs can reduce design time in the aircraft design stage, as aircraft structures and systems are complex and must be highly reliable. DTs can simulate the behavior of aircraft structures under a variety of conditions, the response of safety systems to failures, and the execution of a complex set of tests. These tests would be time-consuming and costly to perform using alternative methods. During the aircraft building process, DTs are utilized to reduce verification costs for certain parts and ensure that the final product meets the requirements of the design phase. Similar to the impact of minor vehicle failures on cost, safety, and brand reliability, these factors are critical in aviation.
Once an aircraft is constructed, accurate fault detection and proactive maintenance planning are both essential due to the high safety standards in the aviation sector [24]. DTs with their corresponding DTA make it easier for companies to run testing procedures in the digital realm aimed at identifying and preventing malfunctions in aircraft parts or systems, enabling proactive updates to maintenance routines. This information can also be utilized for regular inspections to identify any parts requiring special attention.
Finally, in the aviation field, the DT concept is also utilized for pilot and maintenance personnel training, as well as the improvement of airport management procedures and the enhancement of airport passenger capacity.
#### II-E2 Healthcare
DT technology has the potential to revolutionize healthcare in several ways, depending on the element being twinned. DTs in healthcare can be classified into three main categories: patient twinning, facility or service digitization, and trial digitization.
One area of particular interest in healthcare is patient twinning [25], also referred to as personalized or precision medicine [26]. It aims at creating a DT of a patient using the patient's own data. Usually, the DT mirrors a specific organ related to a specific disease, as a full human body DT is hard to develop and extremely resource-intensive. Some ongoing efforts may be found, such as those referring to the heart [27] for cardiac diseases. The DT model can then be used to predict the progression of the disease [28], evaluate the potential impact of a specific treatment on the patient [29], prepare the patient for surgery [30], and test the size of an implant and the optimal procedure for placement according to the tools and sizes available. Some DT models are also generated from data from multiple patients to train medical personnel who are still gaining experience. The successful implementation of these techniques has the potential to optimize healthcare resources, improve accessibility, and enhance the quality of life for patients.
The DT paradigm is also applicable to hospital infrastructure [31] and services, as it is not significantly different from a manufacturing process. Many hospitals have created DTs for individual services or groups of services [32] to improve productivity. Others have created a replica of a patient's behavior during hospital visits to enhance patient care and reduce congestion at patient care desks. By utilizing the DT technology, hospitals have been able to shorten patients' visits by eliminating the need to wait in multiple locations [31].
Finally, in the pharmaceutical industry, DT technology has also brought significant changes. Some companies have used previously collected data to generate models and conduct pre-clinical trials [33], reducing the cost and time of drug development. In other cases, a digital twin based on a patient's data is used to predict which drug will yield the best results for a patient using data previously collected from multiple patients. This approach has the potential to accelerate the patient's treatment process, saving time and avoiding complications from treatments that may not be as effective as the predicted one.
#### II-E3 Supply Chain
A supply chain DT is a virtual replica of a company's supply chain, used to analyze and optimize the flow of goods, information, and resources. It allows companies to simulate various scenarios and make informed decisions about their supply chain operations.
There are many benefits to implementing a supply chain DT, summarized as: i) enhanced visibility and transparency, by tracking and analyzing data from the entire supply chain; and ii) a more comprehensive view of companies' operations, identifying bottlenecks and inefficiencies and letting them make better-informed decisions about resource allocation. Companies are therefore endowed with higher levels of flexibility and agility, as they can quickly and easily respond to changes in demand, market conditions, and other factors. This can be particularly
valuable in times of uncertainty or disruption. For example: i) by identifying and addressing detected inefficiencies in the supply chain, companies can reduce waste, lower costs, and improve their overall efficiency; ii) by simulating various scenarios, companies can identify and mitigate potential risks before they occur, helping to reduce the impact of disruptions and protect their business; and iii) by optimizing the supply chain, companies can improve their delivery times and responsiveness, leading to increased customer satisfaction. In short, the use of a supply chain DT can help companies better understand and optimize their operations, leading to improved efficiency, agility, and competitiveness.
The implementation of a fully digital twinned supply chain [34] is a complex task, but even partial twinning can bring numerous benefits. From a global perspective, the supply chain consists of many retail shops connected to one or more warehouses through various modes of transportation. These warehouses are in turn linked to many other warehouses, some of which are connected to the manufacturers of the products, who require raw materials, etc. Given the diverse types of transportation available, each with its own unique set of advantages and disadvantages, creating a DT of the infrastructure [35] can be highly advantageous in decision-making processes, particularly in the face of issues within the transport chain, production chain, or shifts in demand. Historical data generated by DTs can also aid in the prediction of significant changes and the implementation of mitigating actions in advance.
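As a toy illustration of the what-if analyses mentioned above, the following Python sketch runs a single-warehouse twin under a baseline scenario and under a five-day transport disruption, and compares the lost demand. All stock, demand, and replenishment figures are invented purely for the example, not taken from any real supply chain.

```python
# Illustrative what-if simulation on a toy supply-chain twin: one
# warehouse serving daily retail demand, replenished from a factory.
def simulate(days=30, demand=100, replenish=100, disruption_days=()):
    stock, stockouts = 300, 0  # assumed initial buffer of 300 units
    for day in range(days):
        inbound = 0 if day in disruption_days else replenish
        stock += inbound
        served = min(stock, demand)
        stockouts += demand - served  # unmet demand accumulates
        stock -= served
    return stockouts

baseline = simulate()
disrupted = simulate(disruption_days=range(10, 15))  # 5-day transport outage
print(f"lost units: baseline={baseline}, disrupted={disrupted}")
```

Running many such scenarios against the twin (varying buffers, routes, or lead times) is exactly how the decision support described above would be obtained in practice, just with far richer models.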
#### II-E4 Communication Networks
The increasing number of devices connected to communication networks, which is expected to grow by billions annually [36], makes the concept of DTs particularly relevant. DTs of network infrastructure can be used in many areas, with a notable focus on improving service quality, detecting anomalies and controlling traffic. First, with the growing number of devices, networks must be constantly adjusted to meet real-time needs, fueling service providers to dynamically increase capacity or redirect traffic as necessary in order to continuously provide the desired quality of service. While this is already being managed to some extent, the use of DT networks (DTNs) allows for the simulation of network upgrades to determine the most beneficial action for the system. DT networks can also be used to simulate an increase in the number of users at a particular point, enabling better planning for network growth. Second, another key application of DT networks is anomaly detection. A properly functioning network should exhibit a behavior similar to that of its DT network, so if both differ, the DT can alert earlier and help to identify the source of the failure. Finally, in terms of data traffic control, a DT network can be beneficial in configuring routes to reduce the load on certain links. Indeed, with real-time traffic information, a DTN can modify routes to reduce the number of hops for transmitted traffic, leading to an increase in network capacity and a decrease in energy costs. Additionally, traffic generated by routing protocols can be handled in the virtual domain and then transferred to physical devices, reducing the load on actual networks.
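The anomaly-detection idea above, comparing live network metrics against the values the twin predicts for the same conditions, can be sketched in a few lines of Python. The metric names, values, and tolerance below are assumptions made purely for illustration.

```python
# Sketch of anomaly detection by divergence between a DTN's predicted
# link metrics and the measured ones. Values/thresholds are invented.
twin_prediction = {"latency_ms": 12.0, "loss_pct": 0.1, "util_pct": 55.0}
measured        = {"latency_ms": 48.0, "loss_pct": 0.1, "util_pct": 57.0}

REL_TOLERANCE = 0.25  # assumed acceptable relative deviation

def divergences(twin, real, tol=REL_TOLERANCE):
    out = []
    for key, expected in twin.items():
        deviation = abs(real[key] - expected) / max(expected, 1e-9)
        if deviation > tol:
            out.append((key, expected, real[key], deviation))
    return out

for key, exp, got, dev in divergences(twin_prediction, measured):
    print(f"ALERT {key}: twin={exp}, measured={got} ({dev:.0%} off)")
```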
One major challenge currently facing DT networks is scalability and interoperability, as there is no standardized framework. However, due to the significant benefits of digital replication, significant research is being conducted to develop DTNs with the aim of successful deployment in the near future.
#### II-E5 Grids - Supply Networks
Obtaining a faithful digital replica of oil and gas networks [37, 38], electric power lines [39] and power plants [40], water supplies [41], and other types of infrastructure can be highly beneficial and strategic for management teams. There are many areas in these scenarios to which a DT may significantly contribute. Indeed, DTs can be used to monitor and regulate the production or flow of energy or service, as well as identify the most efficient paths for transporting it. In addition, DTs can be used to simulate and address anomalies [42] or leakages [43] in the infrastructure, helping to design well-structured and well-dimensioned systems that provide the best service at the lowest cost. However, creating a DT for large supply networks can be challenging due to their size and unique characteristics. For example, the implementation for the water supply network of Valencia, Spain [43] required a simplification in order to be usable in real time, due to the DT's complexity and the amount of information used.
## III Material Evaluation in the HDT Field
After briefly introducing and analyzing the applicability of DTs in different verticals, this section focuses on the deployment of human DTs (HDTs) and discusses the results obtained after revisiting existing efforts in many directions and scopes for HDTs. The results have been divided into three groups: the first referring to scientific publications, the second related to knowledge transfer (i.e., patents), and the third to industrial initiatives, an area of relevance although not much information is disclosed.
### _Scientific Publications_
In Table I a summary of the papers found is presented. The second column indicates the area in which the HDT is applied, while the third column specifies the objective of the HDT proposal. The fourth column lists the type of data or sensors used to generate the input for the HDT, and the fifth column specifies the type of output generated by the HDT. The last column indicates the technology on which the HDT is based.
Two verticals can be identified in the review results where HDTs are designed and partially developed. The first one is healthcare, which puts together the highest number of real HDT deployments, reaching 9 out of 15. The second one is the manufacturing vertical, where 6 deployments have been found. Certainly, the surveyed contributions presented in this paper are those we found after an exhaustive monitoring and tracking process, but this does not exclude that a more recent or insufficiently disseminated effort might not be reported in this manuscript. Ordered by contribution volume, we start by reviewing HDT initiatives in the healthcare area.
Authors in [44] proposed a semi-active DT method for approximating the occlusion of carotid artery stenoses. The DT system uses artificial vision captured through a short, pre-recorded video on a smartphone, rather than receiving real-time data. This is why it is referred to as "semi-active". Once the video is loaded into the system, the system focuses on a region of interest, such as the central forehead or area below the eyes, and defines points to track head movement. After
applying digital filtering and principal component analysis (PCA), the resulting vibration is compared to a mechanical model to determine the percentage of occlusion. The authors tested the method using healthy subjects and only one real patient, with the data from the latter replicated to create synthetic patients. The authors concluded that the proposed method shows significant improvement, although further testing is needed to determine its validity and potential usefulness for other disease types.
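The signal-processing chain described for [44] (digital filtering followed by PCA on the tracked facial points) could look roughly like the following Python sketch. The frame rate, pass band, and synthetic tracking data are assumptions, and the final comparison against the mechanical model is omitted.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import PCA

fs = 60.0                     # assumed video frame rate (Hz)
t = np.arange(0, 5, 1 / fs)   # 5 s of point tracking
# 20 tracked points sharing a synthetic 8 Hz "vibration" plus noise
points = np.sin(2 * np.pi * 8 * t)[:, None] + 0.5 * np.random.randn(len(t), 20)

# Band-pass filter each tracked point's trajectory around the band
# of interest (here assumed to be 5-15 Hz).
b, a = butter(4, [5, 15], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, points, axis=0)

# PCA extracts the dominant shared vibration component.
vibration = PCA(n_components=1).fit_transform(filtered)[:, 0]
peak = np.abs(np.fft.rfft(vibration)).argmax()
print("dominant frequency (Hz):", np.fft.rfftfreq(len(vibration), 1 / fs)[peak])
```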
A framework for utilizing data generated at the edge, such as data collected from monitoring devices connected to a smartphone via Bluetooth or data obtained from phone sensors and social networks, was proposed by the authors in [45]. The goal was to create an artificial intelligence (AI) system that can process these data at the edge, specifically on the smartphone. However, as the project is not yet mature, the authors developed a proof of concept (PoC) based on an inference engine that runs on a smartphone and is trained with public datasets. The PoC was tested on 200 patients, with 166 used for training and 34 for testing, and was able to classify electrocardiogram (ECG) data into myocardial infarction or not with an accuracy of 85.8%, a precision of 95.5%, and a recall of 86.3%.
In [46], the authors aimed to obtain and classify the size of an abdominal aortic aneurysm (AAA) into four categories (healthy, small, medium, and large), using easy-to-access blood pressure waveforms as the main input. To this end, the authors combine several AI techniques. First, they create a patient's DT using patient's profile data, such as height, weight, and mean arterial blood pressure. Using demonstrated empirical relationships, they create a one-dimensional mesh representing the major arteries in the human body. Next, they estimate cardiac flow using a multilayer perceptron. Then, using blood pressure waveforms from the carotid, femoral, and brachial arteries (using only two of these decreases accuracy), they use a recurrent neural network (RNN), specifically a long short-term memory (LSTM) model, to predict blood pressure waveforms at the end of the vessel. Finally, using a convolutional neural network (CNN), the result is classified into one of the aforementioned categories. The accuracy of this system in detecting AAA problems using synthetic data is 99.1%, and its accuracy in determining AAA severity is 97.8%. Two key issues still need to be addressed: developing the system in a real-world environment, and addressing other conditions or diseases that could give similar results in the inverse analysis.
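A hedged PyTorch sketch of this three-stage pipeline is shown below: an MLP over the patient profile, an LSTM over the pressure waveforms, and a 1-D CNN classifier into the four AAA categories. All layer sizes and the simplified wiring are illustrative guesses, not the architecture used in [46].

```python
import torch
import torch.nn as nn

class AAAPipeline(nn.Module):
    def __init__(self):
        super().__init__()
        # MLP estimating cardiac flow from the patient profile
        # (would drive the 1-D arterial mesh; the mesh is not modeled here).
        self.flow_mlp = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 1))
        # LSTM over the three measured pressure waveforms
        self.wave_lstm = nn.LSTM(input_size=3, hidden_size=64, batch_first=True)
        # 1-D CNN classifying into healthy / small / medium / large
        self.classifier = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8), nn.Flatten(),
            nn.Linear(16 * 8, 4),
        )

    def forward(self, profile, waveforms):
        flow = self.flow_mlp(profile)        # (B, 1) cardiac flow estimate
        out, _ = self.wave_lstm(waveforms)   # (B, T, 64)
        predicted = out[..., 0]              # (B, T) predicted end-vessel wave
        return self.classifier(predicted.unsqueeze(1))

model = AAAPipeline()
profile = torch.randn(2, 3)      # height, weight, mean arterial pressure
waves = torch.randn(2, 200, 3)   # carotid, femoral, brachial waveforms
print(model(profile, waves).shape)  # torch.Size([2, 4])
```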
The author in [47] conducted a survey on HDT technology and proposed a structure for constructing HDTs, as well as a set of features that HDTs should possess. The author also presented two projects that were funded by small business innovative research: one on creating patient DTs for scoliosis physiotherapy, and another on twinning an aircraft pilot. The first project focuses on creating a virtual patient DT using both X-ray images and scans as inputs. To this end, the author combines all input information to create a skeleton morphing, followed by muscle morphing. The result can be used to support the scoliosis diagnosis and to design and assess the physiotherapeutic scoliosis-specific exercises prescribed to the patient. The second project, sponsored by the US Air Force, aims at twinning an aircraft pilot using a variety of inputs obtained from wearable sensors and processed using AI. These inputs are applied to the pilot's muscle-skeletal model, anthropometric model, and physiological model to produce outputs such as personalized training to enhance pilot performance, injury mitigation, physiological predictions, and ergonomics for the cockpit and clothing.
Centered on organ replication, authors in [60] presented a dynamic colon model for creating an in-silico performance-testing model of orally ingested dosage forms inside the gastrointestinal tract. The goal of this model was to facilitate the early stages of testing and fluid modification for specific disease stages. The model variables can be tuned to simulate different conditions in the colon. The model was created using MRI scans, and a Discrete Multiphysics (DMP) simulation was performed to obtain shear-rate results for the fluid and its parameters. The simulation is able to replicate the effects of the colon's contractile wall wave propagation speed, media viscosity, and media volume on the mean wall shear rate, similar to results from an in vitro test.
At the cellular level, authors in [48] proposed a framework for creating a 3D map of cells from multiplexed images, with the goal of calculating the distance of cell types from endothelial cells and other valuable features. This framework is intended to be a part of a DT for the National Institutes of Health's (NIH) [49] Human Biomolecular Atlas Program (HuBMAP) [50]. Although the input data for this framework is obtained from skin biopsies, it is designed to be applicable to any tissue type. The input data is processed using micro computerized tomography, and a deep learning (DL) encoder-decoder model is used for segmentation and classification. Autofluorescence [51] images were used as a reference to build the digital 3D model. The model was then used to calculate distances such as that between the centroid of the nucleus of immune cells and the edge of the nearest blood vessel. These results can be compared between different individuals of different ages and states to search for new disease markers. The framework is tested on biopsies from 12 donors ranging in age from 32 to 72, taken from both UV-exposed and non-exposed anatomical regions. While the results of this test were considered relevant and indeed allow for the extraction of many conclusions at the cellular level, they are not discussed in detail in the article. The framework is currently in a development stage and is considered a semi-active DT, as it does not currently provide real-time data. However, once an individualized deployment is released, it has the potential to track the health state of various organs and follow the progression of diseases in each individual using the semi-active paradigm. The main drawback of this proposal is that, nowadays, there is no easy way to perform micro CT on tissues without performing a biopsy.
Authors in [52] presented an HDT that combines X-ray navigation and deep learning to create a cyber-human interaction for surgery training. The main focus of the article is on cybersecurity, which is an important aspect of any DT, as virtual entities are vulnerable to various threats. The HDT was developed to simulate surgeries using augmented reality (AR), virtual reality (VR), and mixed reality (MR) modules. The input data for the HDT is obtained from surgery simulation. The authors developed three modules for peg transfer, vessel cutting, and rope handling, and the main inputs for these modules are the total procedure time and the
instrument pathway. Each module also uses specific metrics, such as error in clipping and cutting vessels. In the experiments conducted, the results have been positive, with novice doctors obtaining lower scores than expert surgeons before using the HDT training. However, after training with the HDT, the scores of the novice doctors improved, indicating that the software is effective in its goal of improving surgical skills.
In a different health area, a DT framework called SmartFit [53] was designed to support trainers and coaches in optimizing athletes' behavior. SmartFit mirrors the athletes' team using DT techniques and inputs data collected from IoT devices embedded in wearables, as well as data logged from applications tracking athletes' meals and mood. The total number of inputs collected is 22, including data from the applications on meals and mood, and data from IoT sensors on daily steps, number of floors climbed, walking distance, seating activity, and type and intensity of activity. The output data from SmartFit are rules or suggestions for trainers to pass on to the athletes. The DT analyzes historical data and calculates a score for athletes' inputs, and once sufficient historical data has been collected, the training phase begins. During this phase, the DT's AI component is generated and processes new data to release updated suggestions, which are then stored as historical data. The authors tested the framework using a dataset recorded in 2016 by a teenage male football team over the course of three days, and the system released suggestions to improve scores in areas where the athletes had performed poorly (e.g., increasing minutes of moderate activity on day 3 by 24 units). As future work, the authors plan to record activity over a longer period of time to build a more solid base of historical data and to develop a faster method for generating the DT based on this data.
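The scoring-and-suggestion step can be illustrated with a toy Python sketch in the spirit of SmartFit: score each tracked input against a target and emit advice for the weakest metrics. All metric names, daily values, and targets here are invented for the example, not taken from [53].

```python
# Toy version of the suggestion step: score tracked inputs against
# targets and suggest improvements for the two weakest metrics.
athlete_day = {"daily_steps": 6500, "floors_climbed": 4,
               "moderate_activity_min": 21, "sleep_hours": 7.5}
targets     = {"daily_steps": 10000, "floors_climbed": 10,
               "moderate_activity_min": 45, "sleep_hours": 8.0}

scores = {m: athlete_day[m] / targets[m] for m in targets}
for metric, score in sorted(scores.items(), key=lambda kv: kv[1])[:2]:
    deficit = targets[metric] - athlete_day[metric]
    print(f"suggestion: increase {metric} by {deficit:g} (score {score:.0%})")
```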
So far, healthcare-related initiatives have been presented. Next, we review HDT initiatives in the manufacturing vertical. As mentioned before, six contributions have been found.
Authors in [54] proposed a system for integrating human workers into a Cyber Physical Production System (CPPS) using HDTs. The goal of the system is to schedule tasks in the CPPS based on the skills, experience, and preferences of the employees. HDTs are used to emulate the behavior of the employees and automatically schedule tasks according to the available machines and the capabilities of the workers. The inputs for the HDTs include the employee's skills, current tasks, experience, and preferred tasks. The system is rule-based, with all necessary information stored in a database that is updated based on the tasks performed by the employees and the time taken to complete them. By using this system, authors aimed to improve the efficiency of the CPPS and the satisfaction of the employees by allowing them to learn new tasks or perform tasks that align with their preferences.
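A minimal sketch of such rule-based assignment is given below: pick the available worker whose HDT reports the required skill, breaking ties by stated preference, and fall back to a broadcast when no HDT matches. The worker records and field names are hypothetical illustrations of the idea in [54], not its actual data model.

```python
# Rule-based task assignment driven by HDT-reported skills/preferences.
workers = [
    {"name": "A", "skills": {"welding", "assembly"}, "prefers": {"assembly"}, "busy": False},
    {"name": "B", "skills": {"assembly"},            "prefers": {"welding"},  "busy": False},
]

def assign(task_skill):
    candidates = [w for w in workers if not w["busy"] and task_skill in w["skills"]]
    if not candidates:
        return None  # would broadcast the task to all physical employees
    best = max(candidates, key=lambda w: task_skill in w["prefers"])
    best["busy"] = True
    return best["name"]

print(assign("assembly"))  # -> "A" (skilled and prefers it)
print(assign("assembly"))  # -> "B" (A is now busy)
```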
In [55], authors aimed to improve ergonomics and station reorganization in shop floors by using a DT that includes an HDT. To obtain the HDT, the authors collect human motion data using low-cost capture devices, to recognize the tasks being performed (e.g., picking or placing an object, walking, carrying something). The input data consists of human joint coordinates represented in a three-dimensional space, which are then simulated using a data-driven motion synthesis approach. The output of this system is not directly related to the HDT, but rather provides production performance measures, 3D spatial requirements, and a standardized ergonomics score that can be used to improve the ergonomics of the work environment. The proposal was tested in a case study where an operator had to pick and place warehouse components from a rack to a trolley, and the real-world data is collected and transferred to the DT to assess the ergonomics and cycle time for the operation. The scenario is set up in a 10 square meter area with two racks of different heights, and the operation is performed by three actors, each performing the task five times. The transformation of the real input data to the virtual representation takes between 5 and 10 minutes to complete, and multiple simulations need to be run in order to find the configuration that resulted in the best performance. This approach allowed the completion of the design of the workstation in a matter of hours, rather than spending days trying different configurations.
Similarly, the work presented in [56] proposed a framework for improving ergonomics in the workplace. The case study involves an assembly task that places two steel components together and joins them using four screws. The experiment is carried out by an operator wearing a device that collects data on her movements using motion sensors. The data collected by the sensors is sent to a smartphone app, which generates a CSV file containing Euler angles, quaternions, and posture angles for each of the ten segments measured (lower legs, upper legs, forearms, arms, trunk, and pelvis). To simulate the task, the authors used the Tecnomatix Process Simulate software by Siemens, incorporating the data collected from the app to generate an accurate simulation. The simulation allows the authors to identify areas in the process where small modifications could improve ergonomics. For example, adding height to a cart reduced the effort required to lift the pieces from a low shelf to within standard ergonomic limits, and replacing a gun screwdriver with an angle screwdriver reduced the counter-reaction force to within acceptable limits. This case study demonstrates the usefulness of the system in accurately assessing the ergonomics of a workplace and in designing workstations, particularly when combined with a DT of the workplace.
Aligned to the references above, authors in [57] proposed a method for designing and evaluating human-robot collaboration tasks in manufacturing environments using a DT. The goal is to improve efficiency and ergonomics by allowing the robot to handle the heavier tasks and the human to handle tasks the robot is unable to perform. In the event that the robot encounters any issues with a task, it can skip the task and delegate it to the human, rescheduling the plans for both accordingly. The DT system allows for real-time integration of the work environment, the robot, and the human, and is used to guide the task rescheduling process. To create the HDT, the authors used a Japanese body dimension database to estimate the body dimensions based on the worker's height and weight, and generate a link structure using the body joints. A digital deformable skin surface is then applied to the link structure. Input data for the HDT is collected using a marker-based optical motion capture system called OptiTrack, in combination with multiple cameras placed in the work area. The HDT is integrated into the shop floor DT space and serves as a guide for the task rescheduling system. The system is tested in a work environment designed for picking parts, with shelves on either side containing 40 boxes of parts. A total of 37 reflective markers are attached
to the worker's clothing and tracked by 16 cameras mounted on the ceiling. A magnetic tape is also placed on the floor to guide the robot. An initial schedule is provided to the system, which adapts dynamically based on the human cycle time, any issues the robot encounters with picking parts, and HDT ergonomic assessments. The system can display the DT on a screen, with the human parts colored according to joint effort.
In contrast to the previous references, authors in [58] presented a worker DT focused on employee preferences. The main goal is to schedule factory tasks to the most suitable employees, considering factors such as their skills, preferences, character, motivation, and mood. The inputs for the HDT implementation are collected manually and stored in a database, and some of them are updated based on the employee's machinery use hours. The HDT is rule-based and will output an acceptance or rejection of a task based on the input and the HDT's properties. In a lab test, the authors replicate a factory 4.0 workplace with an employee picking parts from a flow station and a shelf to join the parts. When a new task is added to the factory flow, the system first checks the availability of materials to perform the task. If the materials are available, the system asks the HDTs of employees who have the skills or are willing to learn new operations whether they can fit the new task into their schedule. With all affirmative answers from the HDTs, the system determines the best way to share the tasks between employees, such as through parallel production. If the system only receives rejections from all HDTs, it sends a broadcast to all physical employees to ask for the new task. This HDT implementation may not be very complex, but it demonstrates how employee wellbeing in the workplace can be improved in a factory by allowing employees to choose their preferences, learn new tasks, and have tasks distributed in the most effective way. This can lead to increased productivity, as employees can do tasks they are familiar with, learn new ones, or alternate between them, and can also improve ergonomics by allowing for the easy switching of tasks to prevent repetitive movements.
In [59], a system for real-time calculation of safety distance in human-robot collaboration was presented. The system captures the shared workspace between the human and the robot using RGBD sensors. One sensor is used to scan a section of the physical working environment, while the second sensor scans the other section and tracks the human's skeletal information. The RGBD images are sent to a deep learning server, which uses machine learning algorithms to generate a point cloud for the robot and a 3D simulation of the human, including the skeletal data. The simulation is then used to calculate the safety distance between the human and the robot. The output of the system is provided to the robot, which can be instructed to stop or slow down based on the risk, and to the human, who can be warned through MR glasses. The system aims to improve safety and efficiency in human-robot collaboration by enabling the robot to adjust its behavior based on the real-time position and movement of the human in the shared workspace.
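The core distance computation in such a system could be sketched as follows: take the minimum distance between the tracked skeleton joints and the robot point cloud, and map it to a stop/slow/continue command. The point data and thresholds below are placeholders for illustration, not values from [59].

```python
import numpy as np

human_joints = np.random.rand(25, 3) * 2.0                    # stand-in skeleton (m)
robot_cloud = np.random.rand(500, 3) * 2.0 + np.array([1.5, 0, 0])

# Pairwise distances between every joint and every cloud point.
diffs = human_joints[:, None, :] - robot_cloud[None, :, :]
min_dist = np.linalg.norm(diffs, axis=-1).min()

STOP, SLOW = 0.3, 0.8  # assumed safety thresholds (m)
command = "stop" if min_dist < STOP else "slow" if min_dist < SLOW else "continue"
print(f"closest approach {min_dist:.2f} m -> robot command: {command}")
```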
| Ref | Area | Goal | Input | Output | How it is processed |
|---|---|---|---|---|---|
| [44] | H | Approximate severity of carotid artery stenoses | Patient anthropometric data and face video | Carotid occlusion percentage | Artificial vision and computational mechanics |
| [45] | H | Ischemic heart disease (IHD) detection | ECG | Classification into myocardial infarction or not | AI (two-CNN pipeline) |
| [46] | H | Abdominal aortic aneurysm detection and classification | Blood pressure waveforms | AAA severity classification | AI (RNN + CNN) |
| [47] | H | Design and prescribe physiotherapeutic scoliosis-specific exercises | Body scan, X-ray, skeletal templates | 3D model | Simulations |
| [47] | H | Obtain a military aircraft pilot replica | Many body models and sensors | Performance degradation and impending casualty | Artificial intelligence |
| [60] | H | Replicate colon dynamics | Magnetic resonance imaging, fluid variables | Colon absorptions | Discrete Multiphysics simulation |
| [48] | H | Distance distributions for cells in three dimensions from 3D digital skin biopsy data | Images of cells taken from biopsies | Interactive 3D visualization of the cells | Extraction of high-quality images |
| [52] | H | Predict health anomalies | Labeled clinical data | A prediction model | CNN |

Table I: Summary of papers in the HDT field
### _Intellectual Property: Patents_
This section presents the included results of the patent analysis described in Section I-D. Most of the patents are authored by institutions from China and Korea. The patents that passed the screening phase followed in this survey are listed in Table II, along with their respective focus area. A word cloud in Figure 5 illustrates the most frequently occurring words in the patent titles, with larger sizes for the most frequently repeated words.
Figure 5: Patent titles word cloud
### _Industrial Initiatives_
The most relevant projects undertaken by the industry sector, identified through the technology radar process used to build this survey, are discussed in this section. As many of these projects have limited scientific literature available, the websites of the products are referenced, along with a brief overview of the details obtained from each project.
Semic Health Digital Body Total [76] is a DT of human biological systems, organs, or molecular systems that utilizes artificial intelligence (AI) to aid in the diagnosis of medical conditions and predict possible health complications. It is an evolution of the Cyber Bio Twin project [77], which aimed to create a digital replica of human biological systems for the purpose of improving healthcare. The Digital Body Total software is based on the collection of data from various subjects to create computational and mathematical models that replicate the physiology of human biological systems. These HDT models are created with the aid of AI. Digital Body Total has obtained approval for use in the US healthcare system by the US Food and Drug Administration (US FDA). This approval highlights the potential value as a tool for healthcare professionals to improve the accuracy of diagnoses and predictions, as well as its adherence to regulatory standards. Digital Body Total software is a valuable tool that can aid in the analysis and management of clinical trials, disease management, outbreak prevention and management, and lifestyle management, while ensuring the privacy and security of an individual's personal medical data. Moreover, the HDT can be modified to examine and predict the effect of changes on an organ and can be experimented on by introducing foreign substances to test their effects. It can also be modeled to consider environmental and medical factors such as air composition and medication or nutrient intake. This allows healthcare professionals to better understand the functions and dysfunctions of various organs and systems within the body and develop more effective treatments and therapies.
Sim&Cure Sim&Size [78] software is used to generate three-dimensional (3D) models of the patient's brain and blood vessels, which allows surgeons to visualize the aneurysm and plan the surgical approach. The software may also include simulation capabilities, enabling surgeons to practice the surgery virtually and test different options before performing the procedure. The goal of the Sim&Size software is to improve the accuracy and success rate of brain aneurysm surgeries by providing interactive visualization and simulation tools that allow surgeons to more effectively plan and execute the procedure.
Focusing on the heart arena, Dassault Systemes presented the Simulia Living Heart [79], a high-fidelity heart model that converts 2D scans into a 3D model. The Living Heart project has signed a 5-year research agreement with the FDA, followed by a 5-year extension, with the goal of developing and validating highly accurate personalized digital human heart models. These models will serve as a common technology base for education and training, medical device design, testing, clinical diagnosis, and regulatory science, with the aim of rapidly translating current and future innovations into improved patient care. For example, the use of a 3D heart model to perform in silico tests can reduce the need for animal testing and the number of patients subjected to potentially harmful early stage trials, thus saving a significant amount of time and resources in the clinical trial process for medical innovations.
Also dealing with the heart, Siemens Healthineers [80] presented its technology [81] at the Radiological Society of North America (RSNA) conference in 2018, based on the use of magnetic resonance imaging and ECG to simulate heart physiological processes through a DT. This approach was used at the University of Heidelberg to predict the outcomes of various treatments and select the one that best fits the patient. Cardiologists have also utilized this method in the context of cardiac resynchronization therapy, a treatment for chronic congestive heart failure in which electrodes are implanted on the right and left ventricles of the heart to resynchronize beating. By training a heart DT with a large amount of data and virtually implanting electrodes to deliver virtually generated pulses to the heart, the potential success of the therapy can be predicted. If the asynchronous pumping of the virtual heart is corrected, it serves as an indication that resynchronization therapy could also be successful in the real patient.
The last initiative, also in the heart arena, builds on Philips developments, namely the Philips Heart Navigator [82] and the Philips Heart Model [83], which are medical software tools that utilize a generic heart model combined with patient-specific information, such as computed tomography (CT) scans, live X-ray images, and ultrasound data, to generate a three-dimensional (3D) model of the patient's heart. This model is used to assist surgeons in planning and performing transcatheter aortic replacements (TAR), a type of minimally invasive heart surgery, by helping to select the most appropriate device for the patient and the optimal insertion angle. The Heart Navigator also provides live image guidance during TAR procedures to support the precise positioning of the device. The ultimate goal of these tools is to improve the accuracy and success rate of TAR procedures. In the future, Philips plans to develop augmented reality (AR) models for practicing TAR procedures and for use in the operating room. These AR models will be patient-specific, allowing for more personalized and accurate guidance during surgery.

| Reference | Area |
|---|---|
| [61] | Shape selection for human eyes |
| [62] | Human-robot interaction |
| [63] | Healthcare |
| [64] | Human-robot interaction |
| [65] | Military |
| [66] | Healthcare |
| [67] | Human activities |
| [68] | Healthcare |

Table II: Summary of patents in the DT field
## IV Conclusions
After the thorough review carried out in this survey, we can conclude with confidence that HDTs are a growing area of research with strong potential to revolutionize the way we approach healthcare, manufacturing, transportation, and many other fields, by simulating and analyzing real-world human bodies and behaviors with the aim of improving decision-making, optimizing processes, and enhancing safety and efficiency. We may also emphasize that even though HDT applicability spans a wide spectrum of verticals, healthcare emerges as a highly promising area, being the one where HDT initiatives are deployed the most. There is also no doubt about the positive impact that HDTs can have on healthcare and on society as a whole. Indeed, the healthcare sector shows 9 HDT deployments as found in this survey, with a broad scope ranging from estimating the occlusion of carotid artery stenoses to tracking football team players' performance.
However, although some deployments are already at a commercial stage, some aspects remain unsolved and thus still require attention from the scientific community. Indeed, the lack of contributions in these aspects is strongly hampering the chances for HDTs to be widely adopted.
One of the major challenges technology progress must face today is the need for robust cybersecurity and data privacy protection in the face of constant and ever-growing cyber threats. Hackers and cybercriminals are continually seeking out new ways to access sensitive information and steal data, which can have serious consequences for both individuals and organizations. An HDT, considered as a virtual instance, inherently poses many concerns about data privacy and security, as the data used to create the virtual representation of humans must be properly protected to prevent unauthorized access or undesired misuse. A parallel issue refers to the lack of transparency around how companies use and share personal data, which can leave individuals feeling uncertain about their privacy online. The use of connected devices, also known as the Internet of Health (IoH), adds an additional layer of complexity to cybersecurity concerns, as these devices are often equipped with weak security and can be exploited by hackers.
In order to address these challenges, it is necessary to prioritize cybersecurity in HDT system design and development. To this end, encryption techniques and secure communication protocols for sending and sharing data may be used, along with prioritizing data processing at the edge rather than relying on centralized servers. Data privacy must also be considered during development, through the use of "data privacy by design" approaches. Ethical considerations are also crucial in these developments, as they raise important questions about privacy violations, discrimination, and other issues. Researchers and developers must engage with ethicists and other experts in these fields and seek input from stakeholders to ensure that HDT technology is used in a responsible and beneficial manner.
Another important aspect concerns legal issues. One of the main concerns is data privacy and protection, as discussed above. Laws in many countries regulate the collection, use, and storage of personal data. These laws often require companies and organizations to obtain consent from individuals before collecting their data and to protect it from unauthorized access or misuse. This is critically related to identifying who owns the HDT. Indeed, many concerns may arise about who owns the data used to create an HDT and who has the right to use or control the twin. This could be an issue if a company creates an HDT or a DT using data collected from an individual without their knowledge or consent. There may also be legal implications related to the use of HDTs in decision-making processes or other situations where they could potentially affect individuals' rights or interests. For example, if a company uses an HDT to make decisions about hiring or promotion, there may be concerns about discrimination or bias.
Browsing the current technology progress, we notice that several avenues for future research in the field of HDTs are open. One area would focus on improving HDT accuracy and realism in order to better mimic the behavior and characteristics of real human systems. Research should focus on enhancing the current state of the art in HDT modeling and simulation techniques, aimed at creating more useful and relevant HDTs, always considering the main final goal, that is, to have a fully twinned human that can be used for many purposes.
Another research area would be the expansion of HDT applications to new fields and sectors. Although HDTs have already been applied mainly in the healthcare and manufacturing sectors, many other areas may benefit from HDTs. Hence, research is critical to identify the areas where HDTs may be most beneficial. That said, it is also important to figure out which areas would also bring a commercial and large societal impact; indeed, some work has started in clothes design and medical device personalization. As an example, in a society where several companies and government entities employ the use of DTs, these digital representations could interact directly with a user DT. For this interaction to be effective, the user's digital twin must be able to accurately represent the user's needs and habitual decisions, and be able to make autonomous decisions and carry out tasks that improve the user's overall quality of life. The application of HDT technology can be utilized in situations where there is human-digital interaction. However, before reaching this goal, the technology must continue to mature and overcome its current limitations, through the completion of ongoing research projects that aim to advance the state of the art.
Taking a non-technological perspective, a key area where efforts are required is the management of ethical and legal concerns, to ensure that HDTs are used in a responsible and beneficial manner. Particular work in this area should focus on generating frameworks or roadmaps to help HDT technology become more secure and regulatory compliant, to respect individuals' privacy rights, and to minimize the potential for bias and discrimination.
Finally, the major technical challenge HDTs must face is achieving much better efficiency and scalability. This is especially important given the resource-intensive nature of HDT developments, which often require large amounts of data and intensive computing power. To address these challenges, there is a need for more efficient HDT implementations and scalable systems that can effectively manage vast amounts of information in networks and process it efficiently to give real-time results.
In summary, any effort toward facing these challenges will help realize the full potential of HDT systems, supported by a secure, ethical, efficient, and accurate adoption of HDTs across many verticals.
## Acknowledgment
This work has been supported by the Spanish Ministry of Science and Innovation under contract PID2021-124463OB-I00, the Catalan Government under contract 2021 SGR 00326 and the Catalan Department of Research and Universities.
|
2305.17541 | Minimal Posets with Prescribed Maximal Chain Cardinalities | Given a nonempty finite multiset $S$ of positive integers, we wish to find a
partially ordered set $P$ of minimal cardinality such that the multiset of
cardinalities of all maximal chains in $P$ equals $S$. This paper establishes
upper and lower bounds on the size of $P$: $\max(S) + \lceil \log_2 |S| \rceil \le |P| \le \max(S) + |S| - 1$, and both bounds are tight. | Todd Bichoupan | 2023-05-27T18:06:38Z | http://arxiv.org/abs/2305.17541v1 | # Minimal posets with prescribed maximal chain cardinalities
###### Abstract.
Given a nonempty finite multiset \(S\) of positive integers, we wish to find a partially ordered set \(P\) of minimal cardinality such that the multiset of cardinalities of all maximal chains in \(P\) equals \(S\). This paper establishes upper and lower bounds on the size of \(P\): \(\max(S)+\lceil\log_{2}|S|\rceil\leq|P|\leq\max(S)+|S|-1\), and both bounds are tight.
## 1. Background
A _partial order_\(\leq\) on a set \(P\) is a binary relation that satisfies the following three conditions for all \(x,y,z\in P\):
* Reflexivity: \(x\leq x\)
* Antisymmetry: \(x\leq y\wedge y\leq x\implies x=y\)
* Transitivity: \(x\leq y\wedge y\leq z\implies x\leq z\)
A set equipped with a partial order is called a _partially ordered set_, or _poset_ for short. As a convention, \(x<y\) for \(x,y\in P\) if \(x\leq y\) and \(x\neq y\). For any subset \(S\) of \(P\), the relation \(\leq\) is still a valid partial order when it is restricted to the elements of \(S\), and the poset \((S,\leq)\) is called a _suborder_ of \((P,\leq)\).
Every poset \((P,\leq)\) has an associated covering relation, \(\prec\). For \(x,y\in P\), \(x\prec y\) if \(x<y\) and there is no \(z\in P\) such that \(x<z\) and \(z<y\). When \(P\) is finite, the partial order can be recovered from the covering relation: \(x\leq y\) if there is sequence \(\{a_{i}\}_{i=0}^{n}\) of members of \(P\) such that \(a_{0}=x,a_{n}=y\), and \(a_{i}\prec a_{i+1}\) for all \(i<n\). When \(P\) is finite, it can be visualized with a _Hasse diagram_, which is drawn by arranging the elements of \(P\) so that \(y\) is above \(x\) whenever \(x<y\), and \(x\) is connected to \(y\) if \(x\prec y\).
A covering relation on a poset \((P,\leq)\) forms a simple directed graph on the members of \(P\). Moreover, a simple directed graph is the covering relation associated with some partial order if and only if the graph is acyclic and every edge is the unique path connecting its endpoints (that is, if an edge connects a vertex \(x\) to a vertex \(y\), then there is no other path from \(x\) to \(y\)).
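This characterization is easy to test computationally. The following sketch is our own illustration (the representation of a graph as a dictionary of successor sets is a convention of the example, not notation from the paper); it checks both conditions: acyclicity, and that every edge is the unique directed path between its endpoints.

```python
from functools import lru_cache

def is_covering_relation(succ):
    """succ: dict mapping every vertex to the set of its out-neighbors
    (sinks map to an empty set).  Returns True iff the directed graph is
    acyclic and every edge is the unique directed path between its ends."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in succ}

    def acyclic_from(v):              # DFS; reaching a gray vertex is a cycle
        color[v] = GRAY
        for w in succ[v]:
            if color[w] == GRAY:
                return False
            if color[w] == WHITE and not acyclic_from(w):
                return False
        color[v] = BLACK
        return True

    if not all(acyclic_from(v) for v in succ if color[v] == WHITE):
        return False

    @lru_cache(maxsize=None)          # memoized path counts on a DAG
    def n_paths(u, v):
        if u == v:
            return 1
        return sum(n_paths(w, v) for w in succ[u])

    return all(n_paths(u, v) == 1 for u in succ for v in succ[u])

diamond = {0: {1, 2}, 1: {3}, 2: {3}, 3: set()}
print(is_covering_relation(diamond))     # True
shortcut = {0: {1, 2, 3}, 1: {3}, 2: {3}, 3: set()}
print(is_covering_relation(shortcut))    # False: edge 0 -> 3 duplicates 0-1-3
```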
For a poset \((P,\leq)\), elements \(x,y\in P\) are _comparable_ if \(x\leq y\) or \(y\leq x\). A set \(C\subseteq P\) is a _chain_ if all the elements of \(C\) are comparable to one another. A chain \(C\) in \(P\) is _maximal_ if no other chain in \(P\) is a proper superset of \(C\).
For a finite poset \((P,\leq)\), chains and maximal chains can also be defined in terms of the covering relation. A nonempty finite set \(C\subseteq P\) is a chain if and only if the elements of \(C\) can be ordered in a sequence \(\{a_{i}\}_{i=0}^{n}\) where \(a_{i}\prec a_{i+1}\) for all \(i<n\), and \(C\) is maximal if and only if \(a_{0}\) is a minimal element of \(P\) and \(a_{n}\) is a maximal element of \(P\). In other words, a
chain is a path in the directed graph associated with the covering relation, and a chain is maximal if the path cannot be extended by adding an element to either end.
## 2. Counting Chains with Matrices
Let \((P,\leq)\) be a nonempty finite poset. Let \(n=|P|\) and index the elements of \(P\) with the nonnegative integers less than \(n\), so \(P=\{p_{i}\}_{i=0}^{n-1}\). The _adjacency matrix_ associated with the covering relation on \((P,\leq)\) is the \(n\)-by-\(n\) matrix \(A\) where \(A_{ij}=1\) if \(p_{i}\prec p_{j}\) and \(A_{ij}=0\) otherwise. It is well known that one can efficiently calculate the number of paths between two vertices of a graph by taking powers of the corresponding adjacency matrix. In particular, for any nonnegative integer \(k\), entry \((i,j)\) of \(A^{k}\) is the number of chains \(C\subseteq P\) such that \(p_{i}\) is the minimum element of \(C\), \(p_{j}\) is the maximum element of \(C\), and \(|C|=k+1\) (\(A^{0}\) is the identity matrix by convention). By summing all the entries \((i,j)\) of \(A^{k}\) where \(p_{i}\) is a minimal element of \(P\) and \(p_{j}\) is a maximal element of \(P\), one obtains the total number of maximal chains of \(P\) that have cardinality \(k+1\). It is worth noting that since the graph associated with \(A\) is acyclic, \(A^{k}\) is only nonzero for \(k<n\).
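The matrix computation described above can be carried out directly. Below is a minimal sketch of ours (the function name and input convention are our own) that recovers the multiset of maximal chain cardinalities from powers of the adjacency matrix of the covering relation.

```python
import numpy as np

def maximal_chain_cardinalities(A):
    """A: n-by-n 0/1 adjacency matrix of the covering relation
    (A[i][j] = 1 iff p_i is covered by p_j).  Returns the multiset of
    maximal chain cardinalities as a dict {cardinality: count}."""
    A = np.asarray(A, dtype=object)          # exact integer arithmetic
    n = A.shape[0]
    minimal = [i for i in range(n) if not A[:, i].any()]   # nothing below
    maximal = [j for j in range(n) if not A[j, :].any()]   # nothing above
    counts = {}
    P = np.eye(n, dtype=object)              # P = A^k, starting at A^0
    for k in range(n):                       # A^k = 0 for k >= n (acyclic)
        total = sum(int(P[i, j]) for i in minimal for j in maximal)
        if total:
            counts[k + 1] = total            # chains of cardinality k + 1
        P = P @ A
    return counts

# Diamond poset 0 < 1 < 3 and 0 < 2 < 3: two maximal chains of size 3.
A = [[0, 1, 1, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 1],
     [0, 0, 0, 0]]
print(maximal_chain_cardinalities(A))        # {3: 2}
```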
## 3. Problem Statement
Given a nonempty finite poset \((P,\leq)\), the previous section shows how one can obtain the multiset \(S\) of cardinalities of maximal chains in \(P\). We are interested in the opposite problem: we are given a nonempty finite multiset \(S\) of positive integers, and we are looking for a poset \((P,\leq)\) such that the multiset of maximal chain cardinalities equals \(S\); furthermore, we want to find a poset \((P,\leq)\) that minimizes \(|P|\) while satisfying the constraint on its maximal chain cardinalities.
## 4. Trivial Bounds
Suppose \(S\) is a nonempty finite multiset of positive integers. Let \(m\) be the maximum element of \(S\) and let \(n\) be the cardinality of \(S\) (counting multiplicities). If \((P,\leq)\) is a poset whose multiset of maximal chain cardinalities equals \(S\), then \(|P|\geq m\), since \(P\) must contain a chain of size \(m\). On the other hand, it is always possible to construct a poset \((P,\leq)\) where \(|P|=m+n-1\) and the multiset of maximal chain cardinalities equals \(S\).
Let \(S^{\prime}=S\setminus\{m\}\) (so the multiplicity of \(m\) in \(S\) minus the multiplicity of \(m\) in \(S^{\prime}\) equals one). Suppose \(P\) has an element \(x_{i}\) for each positive integer \(i\) less than or equal to \(m\), and suppose \(P\) has an element \(s_{jk}\) for each distinct element \(j\in S^{\prime}\) and each positive integer \(k\) less than or equal to the multiplicity of \(j\) in \(S^{\prime}\). So \(|P|=m+n-1\), and if the partial order on \(P\) is defined so that \(x_{i}<x_{j}\) and \(x_{i}<s_{jk}\) when \(i<j\), then the associated multiset of maximal chain cardinalities equals \(S\). Figure 1 shows the Hasse diagram for the poset given by this construction if \(S=\{2,3,3,5,5\}\).
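This construction is mechanical enough to automate and check. The sketch below is our own illustration (the tuple element names `('x', i)` and `('s', k)` are conventions of the example); it builds the covering relation and enumerates the maximal chains by depth-first search from the minimal elements.

```python
from collections import Counter

def trivial_construction(S):
    """Covering relation of the upper-bound poset: a chain x_1 < ... < x_m
    plus, for each remaining entry j of S, one extra element covering
    x_{j-1} (an isolated point when j = 1)."""
    S = sorted(S)
    m = S.pop()                                  # remove one copy of max(S)
    succ = {('x', i): {('x', i + 1)} for i in range(1, m)}
    succ[('x', m)] = set()
    for k, j in enumerate(S):
        s = ('s', k)
        succ[s] = set()
        if j > 1:
            succ[('x', j - 1)].add(s)
    return succ

def maximal_chain_sizes(succ):
    """Enumerate maximal chains by DFS from the minimal elements."""
    has_pred = {w for ws in succ.values() for w in ws}
    sizes = Counter()
    def dfs(v, depth):
        if not succ[v]:                          # maximal element reached
            sizes[depth] += 1
        for w in succ[v]:
            dfs(w, depth + 1)
    for v in succ:
        if v not in has_pred:                    # minimal element
            dfs(v, 1)
    return sizes

succ = trivial_construction([2, 3, 3, 5, 5])
print(len(succ))                                      # 9 = max(S) + |S| - 1
print(sorted(maximal_chain_sizes(succ).elements()))   # [2, 3, 3, 5, 5]
```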
## 5. The Lower Bound
The following theorem shows that the lower bound from Section 4 can be improved.
**Theorem 1**.: _Let \((P,\leq)\) be a finite poset and let \(S\) be the multiset of all cardinalities of maximal chains in \(P\). Let \(m\) be the maximum element of \(S\) and let \(n=|S|\) (counting multiplicities). Then \(|P|\geq m+\log_{2}(n)\)._
Proof.: Let \(k=|P|-m\). Since \(P\) must contain a maximal chain of cardinality \(m\), the elements of \(P\) can be partitioned into sets \(M\) and \(X\) such that \(M\) is a maximal chain of \(m\) elements and \(X\) contains the remaining \(k\) elements. Let \(C\) be a maximal chain in \(P\). Since \(C\cap X\) is a subset of \(X\), there are at most \(2^{k}\) possible values of \(C\cap X\).
The critical idea is that we can determine which elements of \(M\) are in \(C\) if we only know which elements of \(X\) are in \(C\). Since the map \(C\mapsto C\cap X\) is then injective, there are at most \(2^{k}\) possible maximal chains \(C\) in \(P\); as \(S\) has one element for each maximal chain, \(n\leq 2^{k}\), or equivalently, \(k\geq\log_{2}(n)\).
Let \(A=C\cap X\). If an element \(y\in M\) is a member of \(C\), then \(y\) is comparable to every element of \(A\). So if \(B\) is the set of all \(y\in M\) such that \(y\) is comparable to all elements of \(A\), then \(B\supseteq C\cap M\). Furthermore, the set \(A\cup B\) is a chain: every pair of elements in \(A\) is comparable because \(A\subseteq C\), every pair of elements in \(B\) is comparable because \(B\subseteq M\), and every element of \(A\) is comparable to every element of \(B\) by the definition of \(B\). But if \(A\cup B\) is a chain where \(A\cup B\supseteq C\), then \(A\cup B=C\) by the maximality of \(C\). It follows that \(C\cap M=B\).
The following theorem shows that the lower bound given in Theorem 1 is sharp.
**Theorem 2**.: _Let \(k\) be a positive integer, and let \(I\) be the set of all positive integers less than or equal to \(k\). For each \(i\in I\), let \(a_{i}\) be a nonnegative integer. Let \(A\) be the multiset \(\{a_{i}:i\in I\}\). Let \(\operatorname{sums}(A)\) be the multiset \(\{\Sigma_{i\in J}a_{i}:J\subseteq I\}\) (so \(|\operatorname{sums}(A)|=2^{k}\)). If \(S\) is the multiset \(\{k+x:x\in\operatorname{sums}(A)\}\), \(n=|S|=2^{k}\), and \(m=\max(S)=k+\Sigma_{i=1}^{k}a_{i}\), then there is a poset of cardinality \(m+\log_{2}(n)=2k+\Sigma_{i=1}^{k}a_{i}\) whose multiset of maximal chain cardinalities equals \(S\)._
Proof.: We give a method to construct the required poset.
In [1, Section 1.6], Gratzer defines the _ordinal sum_ as an associative binary operation on posets: for posets \((P_{1},\leq_{1})\) and \((P_{2},\leq_{2})\), the ordinal sum \((P_{1},\leq_{1})+(P_{2},\leq_{2})\) is a poset on \(P_{1}\cup P_{2}\) that preserves the order of elements within \(P_{1}\) and \(P_{2}\) and places all elements of \(P_{1}\) below all elements of \(P_{2}\).

Figure 1. A poset with maximal chain cardinalities \(\{2,3,3,5,5\}\).
For each \(i\in I\), let \((P_{i},\leq_{i})\) be a poset containing a chain of size \(a_{i}+1\) and a single additional element not comparable to any element of the chain. If \((P,\leq)=\Sigma_{i=1}^{k}(P_{i},\leq_{i})\), then the multiset of cardinalities of maximal chains in \(P\) equals \(S\), and \(|P|=\Sigma_{i=1}^{k}|P_{i}|=\Sigma_{i=1}^{k}(a_{i}+2)=m+\log_{2}(n)\).
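Because a maximal chain of the ordinal sum picks, within each block \(i\), either the chain part (\(a_{i}+1\) elements) or the lone incomparable element (1 element), the multiset of maximal chain cardinalities can be computed directly without building the poset. A short sketch of ours, reproducing the example of Figure 2 with \(a=\{1,2,4\}\):

```python
from itertools import product
from collections import Counter

def ordinal_sum_chain_sizes(a):
    """Multiset of maximal chain cardinalities of the Theorem 2 poset:
    in block i a maximal chain uses either the chain part (a_i + 1
    elements) or the lone incomparable extra element (1 element)."""
    sizes = Counter()
    for choice in product(*[(ai + 1, 1) for ai in a]):
        sizes[sum(choice)] += 1
    return sizes

a = [1, 2, 4]
print(sorted(ordinal_sum_chain_sizes(a).elements()))  # [3, 4, 5, ..., 10]
print(2 * len(a) + sum(a))                            # |P| = 2k + sum(a) = 13
```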
## 6. The Upper Bound
The purpose of this section is to show that the upper bound from Section 4 is sharp.
An element \(x\) of a poset \((P,\leq)\) is a _splitting element_ of \(P\) if there is more than one maximal chain in the suborder of all elements of \(P\) less than or equal to \(x\) and there is more than one maximal chain in the suborder of all elements of \(P\) greater than or equal to \(x\). Equivalently, for a finite poset \((P,\leq)\), an element \(x\in P\) is splitting if the graph of the covering relation associated with \((P,\leq)\) contains more than one path from a minimal element of \(P\) to \(x\) and contains more than one path from \(x\) to a maximal element of \(P\).
Suppose \((P,\leq)\) is a poset and \(A\) is a subset of \(P\). An _in-edge_ of \(A\) is a pair of elements \(x\in A\), \(y\in P\setminus A\) where \(y\prec x\). An _out-edge_ of \(A\) is a pair of elements \(x\in A\), \(y\in P\setminus A\) where \(x\prec y\). In other words, in-edges of \(A\) are edges in the graph of the covering relation associated with \((P,\leq)\) that point from an element outside of \(A\) to an element in \(A\), and likewise, out-edges of \(A\) are edges pointing from an element in \(A\) to an element outside of \(A\).

Figure 2. A minimal poset whose multiset of maximal chain cardinalities is \(\{3,4,5,6,7,8,9,10\}=3+\text{sums}(\{1,2,4\})\). Each color represents one of the posets in the ordinal sum.
Suppose that \((P,\leq)\) is a poset, \(m\) is a positive integer, and \(C=\{c_{i}\}_{i=1}^{m}\) is a chain in \(P\) where \(c_{i}\leq c_{j}\) when \(i\leq j\). The _index_ of an in-edge of \(C\) with endpoint \(c_{i}\) is \(i\), and the index of an out-edge of \(C\) with endpoint \(c_{j}\) is \(j\).
The following theorem shows that the upper bound from Section 4 is sharp when the given maximal chain cardinalities are sparse.
**Theorem 3**.: _Let \(S\) be a nonempty finite multiset of positive integers, let \(n=|S|\), and let \(m=\max(S)\). Suppose that \(S\) has no element with multiplicity greater than one and that for all \(a,b,c\in S\) with \(a<b<c\), \((m-a)\geq(m-b)+(m-c)+n-3\). Then if \((P,\leq)\) is a finite poset where the multiset of all cardinalities of maximal chains in \(P\) equals \(S\), \(|P|\geq m+n-1\)._
Proof.: If \(n\leq 3\), then \(m+n-1=m+\lceil\log_{2}(n)\rceil\), and since \(|P|\) is an integer, the result follows from Theorem 1. So we may assume that \(n\geq 4\).
Assume for the sake of contradiction that \(P\) has a splitting element, \(x\). Then there are distinct maximal chains of sizes \(e\) and \(f\) in the suborder of elements below \(x\), and there are distinct maximal chains of sizes \(g\) and \(h\) in the suborder of elements above \(x\). So \(P\) has maximal chains of sizes \(e+g-1\), \(e+h-1\), \(f+g-1\), and \(f+h-1\); since \((e+g-1)+(f+h-1)=(e+h-1)+(f+g-1)\), this implies that \(S\) contains four distinct elements \(a\), \(b\), \(c\), and \(d\) with \(a+d=b+c\). Assume without loss of generality that \(a\) is smaller than \(b\), \(c\), and \(d\). Then \((m-a)=(m-b)+(m-c)-(m-d)<(m-b)+(m-c)+1\), which contradicts the condition of the theorem when \(n\geq 4\).
Let \(M\) be a maximal chain in \(P\) of size \(m\), and let \(X=P\setminus M\). Suppose that \(M=\{t_{i}\}_{i=1}^{m}\) where \(t_{i}\leq t_{j}\) when \(i\leq j\). If \((x,t_{i})\) is an in-edge of \(M\), \((t_{j},y)\) is an out-edge of \(M\), and \(i\leq j\), then for every integer \(k\) with \(i\leq k\leq j\), \(t_{k}\) is a splitting element of \(P\). Since \(P\) has no splitting elements, the index of every in-edge of \(M\) must be strictly greater than the index of every out-edge of \(M\). Moreover, every maximal chain in \(P\) includes at most one in-edge of \(M\) and at most one out-edge of \(M\).
Figure 3. A poset with no splitting elements.
Label each maximal chain \(C\neq M\) of \(P\) with a pair of integers \((i,j)\), where \(i\) is the index of the out-edge of \(M\) whose endpoints are in \(C\) or \(i=0\) if no such out-edge exists, and \(j\) is the index of the in-edge of \(M\) whose endpoints are in \(C\) or \(j=m+1\) if no such in-edge exists. By the conclusion of the last paragraph, every label \((i,j)\) has \(i<j\).
If \(P\) beats the trivial construction and \(|P|\leq m+n-2\), then the maximal chain labels are unique. Suppose for the sake of contradiction that \(P\) has two distinct maximal chains \(C_{1}\) and \(C_{2}\) that are not equal to \(M\) and are both labeled \((i,j)\). Let \(a=|C_{1}|\), let \(b=|C_{2}|\), and assume without loss of generality that \(a<b\). By the condition of the theorem, \((m-a)\geq(m-b)+(m-m)+n-3\), so \(b-a\geq n-3\). Since \(C_{1}\cap M=C_{2}\cap M\), \(b-a=|C_{2}\cap X|-|C_{1}\cap X|\). \(C_{1}\) is a maximal chain not equal to \(M\), so \(C_{1}\not\subseteq M\) and \(|C_{1}\cap X|\geq 1\). Then since \(|X|\leq n-2\), we must have \(|C_{2}\cap X|=|X|=n-2\) and \(|C_{1}\cap X|=1\) to satisfy the inequality \(|C_{2}\cap X|-|C_{1}\cap X|\geq n-3\). But then \(X\subseteq C_{2}\), which implies that \(C_{1}\subsetneq C_{2}\), contradicting the maximality of \(C_{1}\).
Let \(L_{O}\) be the set of all labels \((a,b)\) of maximal chains in \(P\) such that \((a,c)\) is not the label of any maximal chain in \(P\) for any \(c<b\). Let \(L_{I}\) be the set of all labels \((a,b)\) of maximal chains in \(P\) such that \((d,b)\) is not the label of any maximal chain in \(P\) for any \(d>a\). Define the function \(f:L_{O}\mapsto X\) so that for each label \((a,b)\in L_{O}\), \(f(a,b)\) is the unique \(x\in X\) such that \(x\) is in the maximal chain labeled \((a,b)\) and \(t_{a}\prec x\). Similarly, define the function \(g:L_{I}\mapsto X\) so that for each label \((a,b)\in L_{I}\), \(g(a,b)\) is the unique \(x\in X\) such that \(x\) is in the maximal chain labeled \((a,b)\) and \(x\prec t_{b}\). \(f\) and \(g\) are injections, since no two labels in \(L_{O}\) can have the same out-edge index and no two labels in \(L_{I}\) can have the same in-edge index. So \(|L_{O}|\leq|X|\) and \(|L_{I}|\leq|X|\).
Let \(L=L_{O}\cup L_{I}\). Define \(h:L\mapsto X\) so that for each label \((a,b)\in L\), \(h(a,b)=f(a,b)\) if \((a,b)\in L_{O}\) and \(h(a,b)=g(a,b)\) if \((a,b)\in L_{I}\setminus L_{O}\). We claim that \(h\) is an injection. Assume for the sake of contradiction that \(h\) is not an injection, and let \((a,b)\in L_{O}\) and \((c,d)\in L_{I}\) be distinct labels such that \(f(a,b)=h(a,b)=h(c,d)=g(c,d)\). Let \(x=f(a,b)=g(c,d)\), so \(t_{a}\prec x\) and \(x\prec t_{d}\). Since \(x<t_{b}\), it follows that \(t_{d}\leq t_{b}\), and since \(t_{c}<x\), it follows that \(t_{c}\leq t_{a}\). But one can obtain a maximal chain labeled \((a,d)\) by taking the union of all elements less than or equal to \(x\) in the maximal chain labeled \((a,b)\) with the set of all elements of \(M\) above \(x\). If \((a,d)\) is the label of a maximal chain in \(P\) and \((a,b)\) is a label in \(L_{O}\) with \(d\leq b\), then by the definition of \(L_{O}\), \(b\) must equal \(d\). Similarly, if \((a,d)\) is the label of a maximal chain in \(P\) and \((c,d)\) is a label in \(L_{I}\) with \(c\leq a\), then by the definition of \(L_{I}\), \(c\) must equal \(a\). Then \((a,b)=(c,d)\), contradicting the assumption that \((c,d)\not\in L_{O}\). So \(|L|\leq|X|\).
The next paragraph will show that if \(P\) beats the trivial construction and \(|X|\leq n-2\), then every label of a maximal chain in \(P\) other than \(M\) appears in \(L\). Since the label for each maximal chain is unique, it will follow that there are at most \(|L|\leq|X|\leq n-2\) maximal chains in \(P\) other than \(M\), which shows that it is impossible for \(P\) to have one maximal chain for each element of \(S\).
Assume for the sake of contradiction that there exists a maximal chain labeled \((a,d)\) in \(P\) where \((a,d)\not\in L\). Then there is a label \((a,c)\in L_{O}\) and a label \((b,d)\in L_{I}\) where \(a<b<c<d\) (\(b<c\) because every out-edge of \(M\) has a smaller index than every in-edge
of \(M\)). Let \(C_{1}\) be the maximal chain labeled \((a,c)\), let \(C_{2}\) be the maximal chain labeled \((b,d)\), and let \(C_{3}\) be the maximal chain labeled \((a,d)\). Let \(q=|C_{1}|\), let \(r=|C_{2}|\), and let \(s=|C_{3}|\). Let \(x=|X\cap C_{1}\setminus C_{2}|\), let \(y=|X\cap C_{1}\cap C_{2}|\), and let \(z=|X\cap C_{2}\setminus C_{1}|\). Then \(m-q=(c-a-1)-(x+y)\) and \(m-r=(d-b-1)-(y+z)\), so \((m-q)+(m-r)=(c-a-1)-(x+y)+(d-b-1)-(y+z)=(d-a)-(x+y+z)+(c-b-y-2)\). If \(y>0\), then \(C_{1}\cap C_{2}\cap X\) is a chain of \(y\) elements in \(X\) strictly between \(t_{b}\) and \(t_{c}\), and because \(M\) is the longest maximal chain in \(P\), it follows that \(c-b-1\geq y+1\); if \(y=0\), then at least \(c-b-1\geq 0=y\). So \((m-q)+(m-r)\geq(d-a)-(x+y+z)-1\). Let \(w=|X\cap C_{3}\setminus(C_{1}\cup C_{2})|\) and let \(v=|X\cap C_{3}\cap(C_{1}\cup C_{2})|\). \((m-s)=(d-a-1)-(w+v)=(d-a)-w-v-1\), so \((m-s)-(m-q)-(m-r)\leq(d-a)-w-v-1-(d-a)+(x+y+z)+1=(x+y+z+w)-2w-v\leq|X|-2w-v\). By condition of the theorem, however, \((m-s)-(m-q)-(m-r)\geq n-3\), so \(|X|\geq 2w+v+n-3\). But for \(P\) to beat the trivial construction, \(|X|\) cannot be any larger than \(n-2\), so \(2w+v\leq 1\). However, \(|C_{3}\cap X|\geq 1\), so \(w+v\geq 1\). But it is not possible to let \(w=0\) and \(v=1\), because no element of \(X\cap(C_{1}\cup C_{2})\) can simultaneously cover \(t_{a}\) and be covered by \(t_{d}\).
## 7. Validation
We wish to show that the problem of determining whether there exists a poset of cardinality at most \(t\) with a prescribed multiset \(S\) of maximal chain cardinalities is in the complexity class NP.
Let \(S\) be a nonempty finite multiset of positive integers. Let \(\text{size}(S)\) be the total number of binary digits of the numbers in \(S\), counting multiplicities. That is, \(\text{size}(S)=\Sigma_{x\in S}(\lfloor\log_{2}(x)\rfloor+1)\). Let \(n=|S|\) and let \(m=\max(S)\). Then \(n+\lfloor\log_{2}(m)\rfloor\leq\text{size}(S)\leq n(\lfloor\log_{2}(m)\rfloor+1)\).
Let \(t\) be a positive integer less than or equal to \(m+n-1\). Suppose that there exists a poset of cardinality at most \(t\) whose multiset of maximal chain cardinalities equals \(S\). The goal is to show that there is a method to verify that such a poset exists in an amount of time bounded by a polynomial in \(\text{size}(S)\).
Naively, one might simply construct the poset \((P,\leq)\) satisfying the necessary conditions and verify that \((P,\leq)\) gives the desired set of maximal chain cardinalities using the method described in Section 2. The problem is that \(|P|\) can be exponentially larger than \(\text{size}(S)\). But this approach can be modified to fit our needs.
Let \(M\) be a maximal chain in \(P\) of size \(m\) and let \(X=P\setminus M\). For each \(x\in X\), there is at most one \(y\in M\) such that \(y\prec x\), and there is at most one \(z\in M\) such that \(x\prec z\). Let \(M^{\prime}\) be the subset of \(M\) that contains the minimum and maximum elements of \(M\) as well as every element of \(M\) that either covers some element of \(X\) or is covered by some element of \(X\). So \(|M^{\prime}|\leq 2n\). Let \(P^{\prime}=M^{\prime}\cup X\). Construct an acyclic simple directed graph \(G\) on the elements of \(P^{\prime}\) and assign a weight to each of the edges of \(G\) as follows: for any \(x,y\in P^{\prime}\) where \(x\prec y\), add an edge from \(x\) to \(y\) with weight \(1\), and for any \(x,y\in M^{\prime}\) where \(x<y\) and no element of \(M^{\prime}\) lies strictly between \(x\) and \(y\), add an edge from \(x\) to \(y\) with weight equal to \(1+|\{z\in M:x<z\wedge z<y\}|\). Then for every maximal chain \(C\) in \(P\), there is a corresponding path in \(G\) that starts from a minimal element and ends at a maximal element where the sum of all weights of edges in the path in
\(G\) equals the length of \(C\) (the _length_ of a chain \(C\) is \(|C|-1\)).
The number of vertices in \(G\) is no greater than \(3n-1\), and the weight of each edge is no greater than \(m\). Suppose that the vertices of \(G\) (i.e. the elements of \(P^{\prime}\)) are indexed and labeled \(\{v_{i}\}_{i=0}^{|P^{\prime}|-1}\). Let \(A\) be the adjacency matrix for \(G\), but instead of placing a \(1\) on entries corresponding to edges of \(G\), place the monomial \(x^{w}\) on entry \((i,j)\) when there is an edge of weight \(w\) connecting vertex \(v_{i}\) to vertex \(v_{j}\), and place \(0\) on entry \((i,j)\) when there is not an edge connecting vertex \(v_{i}\) to vertex \(v_{j}\). Let \(L=\Sigma_{k=1}^{3n-1}A^{k}\). Entry \((i,j)\) of \(L\) is a polynomial in \(x\) where the coefficient of the term \(x^{l}\) is the total number of paths from vertex \(v_{i}\) to vertex \(v_{j}\) where the sum of the weights of all included edges equals \(l\). By summing all the polynomials on entries \((i,j)\) of \(L\) where the vertex \(v_{i}\) of \(G\) is a minimal element of \(P^{\prime}\) and the vertex \(v_{j}\) of \(G\) is a maximal element of \(P^{\prime}\), one obtains a polynomial where the coefficient of the term \(x^{l}\) is equal to the total number of maximal chains in \(P\) of length \(l\) (i.e. of cardinality \(l+1\)).
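This polynomial bookkeeping is easy to reproduce symbolically. The sketch below is our own (using sympy; the small example graph is ours): it compresses a chain \(t_{1}<\cdots<t_{5}\) with one extra element \(y\) covering \(t_{1}\) and covered by \(t_{4}\), so that \(M^{\prime}=\{t_{1},t_{4},t_{5}\}\), and recovers the chain-length polynomial.

```python
import sympy as sp

def chain_length_polynomial(vertices, edges, minimal, maximal):
    """edges: dict {(u, v): weight}.  Returns the polynomial in x whose
    coefficient of x^l is the number of minimal-to-maximal paths of total
    weight l, i.e., the number of maximal chains of length l."""
    x = sp.symbols('x')
    n = len(vertices)
    idx = {v: i for i, v in enumerate(vertices)}
    A = sp.zeros(n, n)
    for (u, v), w in edges.items():
        A[idx[u], idx[v]] = x**w          # monomial x^w instead of 1
    L, P = sp.zeros(n, n), A
    for _ in range(n):                    # paths have at most n edges
        L += P
        P = P * A
    return sp.expand(sum(L[idx[u], idx[v]] for u in minimal for v in maximal))

vertices = ['t1', 't4', 't5', 'y']
edges = {('t1', 't4'): 3,                 # jump edge: 1 + |{t2, t3}| = 3
         ('t4', 't5'): 1,
         ('t1', 'y'): 1, ('y', 't4'): 1}
print(chain_length_polynomial(vertices, edges, ['t1'], ['t5']))
# x**4 + x**3: one maximal chain of cardinality 5, one of cardinality 4
```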
## 8. Future Problems
1. Is there an efficient algorithm that takes a nonempty finite multiset \(S\) of positive integers and determines the minimum possible cardinality of a poset whose multiset of maximal chain cardinalities equals \(S\), or is the problem NP-complete?
2. How do the upper and lower bounds change if we change the minimization criteria on the poset? In particular, what happens if, instead of aiming to minimize the number of elements in the poset, we aim to minimize the number of edges in the graph of the associated covering relation?
## Acknowledgements
Special thanks to David Wang for his contribution in the discussion of this problem, and special thanks to Kira Adaricheva for introducing the problem and providing support in the process of writing this paper.
|
2310.13591 | About contamination by sterile females and residual male fertility on
the effectiveness of the sterile insect technique. Impact on disease vector
control and disease control | The sterile insect technique (SIT) is a technique to control pests and
vectors of diseases by releasing mainly sterile males. Several challenges need
to be solved before large-scale field application in order to guarantee its
success. In this paper we intend to focus on two important issues: residual
(sterile) male fertility and contamination by sterile females. Indeed, sterile
males are never $100\%$ sterile, that is there is always a small proportion,
$\varepsilon$, of fertile males (sperm of) within the sterile male population.
Among the sterile insects that are released, a certain proportion,
$\epsilon_F$, of them are sterile females due to imperfect mechanical
sex-separation technique. This can be particularly problematic when arthropod
viruses are circulating, because mosquito females, even sterile, are vectors of
diseases. In this work, we extend results studied separately in [4,7]. To study
the impact of both issues, we develop and study a
SIT-entomological-epidemiological mathematical model, with application to
Dengue. In order to guarantee the success of SIT control, we recommend to solve
in priority the issue of residual fertility, and, then, to decay the
contamination by sterile females as low as possible. | Yves Dumont, Valaire Ivric Yatat-Djeumen | 2023-10-20T15:35:36Z | http://arxiv.org/abs/2310.13591v1 | About contamination by sterile females and residual male fertility on the effectiveness of the sterile insect technique. Impact on disease vector control and disease control.
###### Abstract
The sterile insect technique (SIT) is a technique to control pests and vectors of diseases by releasing mainly sterile males. Several challenges need to be solved before large-scale field application in order to guarantee its success. In this paper we focus on two important issues: residual (sterile) male fertility and contamination by sterile females. Indeed, sterile males are never 100% sterile, that is, there is always a small proportion, \(\varepsilon\), of fertile sperm within the sterile male population. Among the sterile insects that are released, a certain proportion, \(\epsilon_{F}\), of them are sterile females, due to an imperfect mechanical sex-separation technique. This can be particularly problematic when arthropod viruses are circulating, because mosquito females, even sterile, are vectors of diseases.
Various upper bound values are given in the entomological literature for \(\epsilon_{F}\) and \(\varepsilon\) without clear explanations. In this work, we aim to show that these values are related to the biological parameters of the targeted vector, the sterile insects release rate, and the epidemiological parameters of a vector-borne disease, like Dengue. We extend results studied separately in [4, 7].
To study the impact of both issues, we develop and study a SIT-entomological-epidemiological mathematical model, with application to Dengue. Qualitative analysis of the model is carried out to highlight threshold values that shape the overall dynamics of the system.
We show that vector elimination is possible only when \(\mathcal{N}\varepsilon<1\), where \(\mathcal{N}\) is the basic offspring number related to the targeted wild population. In that case, we highlight a critical sterile male release rate, \(\Lambda_{M}^{crit}\), above which the control of the wild population is always effective, using a strategy of massive releases followed by small releases, to reach elimination and nuisance reduction. On the contrary, when \(\mathcal{N}\varepsilon>1\), SIT-induced vector elimination is unreachable, whatever the size of the releases.
Moreover, we compute a critical value for the release rate of sterile females, \(\Lambda_{F}^{crit}\), such that if the release rate of sterilized females is greater than \(\Lambda_{F}^{crit}\), then the epidemiological risk increases. When the sterile female release rate is low, less than \(\Lambda_{F}^{crit}\), then whatever the value taken by \(\varepsilon\mathcal{N}\), the epidemiological risk can be controlled using SIT. However, this is more difficult when \(\mathcal{N}\varepsilon>1\). We illustrate our theoretical results with numerical simulations, and we show that early SIT control is better to prevent or mitigate the risk of an epidemic when residual fertility and contamination by sterile females occur simultaneously. We also highlight the importance of combining SIT with mechanical control.
In order to guarantee the success of SIT control, we recommend to solve in priority the issue of residual fertility, and, then, to decay the contamination by sterile females as low as possible.
## 1 Introduction
Vector-borne diseases have become major issues all around the world. After decades of chemical control, the use of biological control methods is more than necessary. Many research programs are ongoing
to develop new biocontrol tools. Among them, an old control technique, the Sterile Insect Technique (SIT), is still under study and improvement [8, 14]. SIT is an environmentally safe, cost-effective, species-specific, and efficient method of insect control. It is a form of insect population control that relies on the mass-rearing and release of large numbers of sterile male insects to mate with wild females. This prevents the production of viable eggs, thus reducing the overall population of the target species. The technique was first developed in the 1950s by entomologists Edward Knipling and Raymond Bushland, who were working for the U.S. Department of Agriculture (USDA) [12] (see also [8][chapter 1.1]). The original purpose of SIT was to control the screwworm fly, which was devastating the cattle industry in the southern United States [13]. Since then, SIT has been used to control a variety of other insect pests, including the Mediterranean fruit fly and the tsetse fly, and also against vectors of diseases, including anopheles and aedes mosquitoes, with more or less success [8]. Initially, sterile insects were obtained only by ionization or irradiation, but new techniques have now been developed, for mosquito control in particular. One of them consists of releasing only males carrying the bacterium _Wolbachia_[19]. This is called the Incompatible Insect Technique (IIT) [14], where the sperm of Wolbachia-carrying males, W-males, is altered so that it can no longer successfully fertilize uninfected eggs. Thus, IIT can be seen as a classical SIT. A third method exists but it is more controversial since it relies on genetically modified mosquitoes: this is the RIDL method, where RIDL stands for "Release of Insects carrying Dominant Lethals" [22].
However, while conceptually very simple, SIT involves numerous conditions and difficulties before it can be implemented in the field, which is why strict quality control is needed. To this end, the IAEA (International Atomic Energy Agency) has published several manuals listing the control steps that have to be checked in order to ensure and maximize the success of SIT [9, 26, 16].

While several field programs are ongoing, very few have a mathematical modelling component. This is a pity, because mathematical modelling can bring new insights into several issues that can be detrimental to the efficacy of SIT: see, for instance, [3, 4, 5, 7], and references therein.
Among these controls, it is necessary to evaluate an upper bound for the contamination by sterile females, i.e. the maximal amount of sterile females that can be released during each field release in order to ensure that SIT is efficient. Indeed, in order to produce sterile males only, it is necessary to separate the females from the males. Up to now, the sex-separation system is mechanical, as male nymphs are (in general) smaller than female nymphs. However, since sex-sorting is highly operator-dependent, a certain number of female nymphs can accidentally fall into the male nymph bucket and then be irradiated and fully sterilized. Thus, when sterile mosquitoes are released, if the amount of released sterile females is too large, this could maintain or increase the epidemiological risk. Moreover, when the Incompatible Insect Technique is considered, releasing Wolbachia-carrying females, even in small amounts, can induce a population replacement, as shown in [6].
For _Aedes albopictus_, estimates of contamination by sterile females, done in Mauritius island [10], were around 4%, while in a recent SIT program in Reunion island estimates were around 1%. Note carefully that sterilized females are always 100% sterile and thus cannot participate in the wild insect dynamics. In [7], we showed that when no vector-borne viruses are circulating, the release of sterile females is not an issue, as long as enough sterile males are released. When a virus is circulating, we showed the existence of a contamination threshold for sterile females, such that if the amount of released sterile females per hectare is lower than this threshold, then it is possible to control the wild mosquito population. Otherwise, whatever the size of the releases, the basic reproduction number will always be greater than 1 and thus it will be impossible to control the epidemiological risk, even if the wild population has been reduced using massive sterile insect releases.
Another quality control to take care of is (sterile) male residual fertility: the sperm of sterilized males is not necessarily 100% sterile, even when an optimal radiation dose is used. Indeed, males are sterilized in boxes such that full sterility is not guaranteed: there are always irradiated males with a small amount of sperm that remains fertile. This is called residual fertility. For _Aedes albopictus_, estimates done in Mauritius [11] lead to a residual fertility between 3.8% and 4.1%, while in the SIT program in Reunion island, an average value of 1% was obtained. In Italy [17], the authors found a residual fertility between \(0.82\pm 0.14\%\) and \(4.93\pm 4.72\%\) depending on the age of the males, for an irradiation at 40 Gy.
In [4], using a very simple model, the authors showed that the proportion of fertile sperm, \(\varepsilon\), has to be lower than \(1/\mathcal{N}\), where \(\mathcal{N}\) is the basic offspring number related to the targeted wild population. If, for any reason, \(\varepsilon>1/\mathcal{N}\), then, whatever the amount of sterile males released, the wild population will always remain above a threshold that can be estimated, at least numerically.
Up to now we have studied these two issues separately, in [4, 7], while in fact they occur simultaneously. Thus, it would be useful to know how the combination of both issues could be problematic in the implementation of a SIT program, either for nuisance reduction or to reduce the epidemiological risk.
The paper is organized as follows. In section 2, we present the full SIT-entomological-epidemiological model, recall theoretical results without SIT obtained in [3, 7], and derive theoretical results for the SIT-entomological model. The full SIT model is studied in section 3. Finally, in section 4, we provide some numerical simulations to illustrate our theoretical results and to discuss the impact of low/high residual fertility as well as low/high contamination by sterile females. The paper ends with a conclusion and perspectives in section 5.
## 2 The SIT-entomological-epidemiological Model
Based on [7], we briefly describe the full SIT model, taking into account residual male fertility and contamination by sterile females.
From the entomological point of view, we split the mosquito population into immature stage (larvae and pupae), \(A\), male adults, \(M\), and mature females, \(F_{W}\).
We denote by \(\Lambda_{tot}\) the release rate of all sterile insects, i.e. sterile males and sterile females, such that \(\Lambda_{tot}=\Lambda_{M}+\Lambda_{F}\), where \(\Lambda_{M}=(1-\epsilon_{F})\Lambda_{tot}\), \(\Lambda_{F}=\epsilon_{F}\Lambda_{tot}\), and \(\epsilon_{F}\) is the proportion of sterile females released.
Male residual fertility is modeled by considering that a proportion, \(\varepsilon\), of the sterile males remains fertile, i.e., \(\varepsilon M_{S}\) sterile males are fertile, such that emerging immature females will become fertile with a probability of \(\dfrac{M+\varepsilon M_{S}}{M+M_{S}}\) or they will become sterile with a probability of \(\dfrac{(1-\varepsilon)M_{S}}{M+M_{S}}\).
Thus, in order to take into account the release of sterile females and the effect of residual fertility, we have to consider a sub-population of sterile females, \(S\). Moreover, to take into account the circulation of a vector-borne virus, with an extrinsic incubation period of the virus within the vector population, we consider three epidemiological states, i.e. the susceptible, exposed and infected states, for the sterile and the wild females: \(S_{S}\), \(S_{E}\), \(S_{I}\), \(F_{W,S}\), \(F_{W,E}\), and \(F_{W,I}\). We assume that the total population of humans, \(N_{h}\), is positive and constant. It is also divided into three epidemiological states, i.e. \(N_{h}=S_{h}+I_{h}+R_{h}\). When (wild and sterile) female mosquitoes are infected, we assume that their mortality rate can be impacted. Thus, following [7] and the flow diagram given in Fig. 1, page 4, we derive the following SIT-entomological-epidemiological model
\[\left\{\begin{array}{lll}\dfrac{dS_{h}}{dt}&=&\mu_{h}N_{h}-B \beta_{mh}\dfrac{F_{W,I}+S_{I}}{N_{h}}S_{h}-\mu_{h}S_{h},\\ \dfrac{dI_{h}}{dt}&=&B\beta_{mh}\dfrac{F_{W,I}+S_{I}}{N_{h}}S_{h}-\nu_{h}I_{h }-\mu_{h}I_{h},\\ \dfrac{dR_{h}}{dt}&=&\nu_{h}I_{h}-\mu_{h}R_{h},\end{array}\right. \tag{1}\]
\[\left\{\begin{array}{lll}\dfrac{dA}{dt}&=&\phi(F_{W,S}+F_{W,E}+F_{W,I})-( \gamma+\mu_{A,1}+\mu_{A,2}A)A,\\ \dfrac{dM}{dt}&=&(1-r)\gamma A-\mu_{M}M,\\ \dfrac{dF_{W,S}}{dt}&=&\dfrac{M+\varepsilon M_{S}}{M+M_{S}}r\gamma A-B\beta _{hm}\dfrac{I_{h}}{N_{h}}F_{W,S}-\mu_{S}F_{W,S},\\ \dfrac{dF_{W,E}}{dt}&=&B\beta_{hm}\dfrac{I_{h}}{N_{h}}F_{W,S}-(\nu_{m}+\mu_{S })F_{W,E},\\ \dfrac{dF_{W,I}}{dt}&=&\nu_{m}F_{W,E}-\mu_{I}F_{W,I},\\ \dfrac{dS_{S}}{dt}&=&\epsilon_{F}\Lambda_{tot}+\dfrac{(1-\varepsilon)M_{S}}{ M+M_{S}}r\gamma A-B\beta_{hm}\dfrac{I_{h}}{N_{h}}S_{S}-\mu_{S}S_{S},\\ \dfrac{dS_{E}}{dt}&=&B\beta_{hm}\dfrac{I_{h}}{N_{h}}S_{S}-(\nu_{m}+\mu_{S})S_{ E},\\ \dfrac{dS_{I}}{dt}&=&\nu_{m}S_{E}-\mu_{I}S_{I},\\ \dfrac{dM_{S}}{dt}&=&(1-\epsilon_{F})\Lambda_{tot}-\mu_{M_{S}}M_{S},\end{array}\right. \tag{2}\]
with appropriate non-negative initial conditions.
We summarize all the model parameters in Table 1, page 5. In [27], the authors considered varying parameters to take into account variations of temperature and rainfall along the year in Reunion island, and their impact on SIT strategies to reduce the nuisance or the epidemiological risk. Thus, in Table 1, page 5, we derive the variations of each parameter for a daily average temperature varying between \(15^{\circ}\) and \(30^{\circ}\). These interval values will be used for a global sensitivity analysis in section 4. In the simulation part, we will consider parameter values related to an average temperature of \(25^{\circ}\), which is (close to) the most favorable temperature for _Aedes albopictus_ mosquito dynamics.
### The wild insect model without SIT
We deduce from system (1)-(2) that dynamics of wild insects, without SIT, is modelled by system (3):
\[\left\{\begin{array}{lcl}\dfrac{dA}{dt}&=&\phi F_{W,S}-(\gamma+\mu_{A,1}+ \mu_{A,2}A)A,\\ \dfrac{dM}{dt}&=&(1-r)\gamma A-\mu_{M}M,\\ \dfrac{dF_{W,S}}{dt}&=&r\gamma A-\mu_{S}F_{W,S}.\end{array}\right. \tag{3}\]
System (3) is quite simple and assumes implicitly that there are always adults of both sexes (male and female), such that emerging females will always mate with a male and thus become fertile. In addition, system (3) has been considered and studied in previous works, see e.g. [3, 7]. Hence, below we recall its main qualitative results without proofs.
The basic offspring number related to model (3) is
\[\mathcal{N}=\dfrac{r\gamma\phi}{\mu_{S}(\gamma+\mu_{A,1})}. \tag{4}\]
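At the \(T=25^{\circ}\)C baselines of Table 1, (4) can be evaluated directly; a quick numeric sketch of ours:

```python
# Basic offspring number (4) at the T = 25 C baselines of Table 1
phi, gam, r, muS, muA1 = 10.0, 0.0962, 0.5, 0.0453, 0.0262
N = r * gam * phi / (muS * (gam + muA1))
print(N)   # ~86.7 >> 1: without control the wild population persists
```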
Figure 1: Flow diagram of model (1)-(2).

Setting the right-hand side of system (3) to zero, we obtain the extinction equilibrium \(\mathbf{0}_{\mathbb{R}^{3}}\) and the equilibrium \(E^{*}=(A^{*},M^{*},F^{*}_{W,S})^{T}\) given by
\[\left\{\begin{array}{rcl}A^{*}&=&\frac{(\gamma+\mu_{A,1})}{\mu_{A,2}}(\mathcal{ N}-1),\\ M^{*}&=&\frac{(1-r)\gamma A^{*}}{\mu_{M}}=\frac{(1-r)\gamma}{\mu_{M}}\frac{( \gamma+\mu_{A,1})}{\mu_{A,2}}(\mathcal{N}-1),\\ F^{*}_{W,S}&=&\frac{r\gamma A^{*}}{\mu_{S}}=\frac{r\gamma}{\mu_{S}}\frac{( \gamma+\mu_{A,1})}{\mu_{A,2}}(\mathcal{N}-1).\end{array}\right. \tag{5}\]
The inequalities between vectors are considered here in their usual coordinate-wise sense. Clearly, \(E^{*}>\mathbf{0}_{\mathbb{R}^{3}}\) if and only if \(\mathcal{N}>1\). We summarize these results with some more details related to basins of attraction of equilibria in the following theorem.
**Theorem 1** ([3, 7]).: _Model (3) defines a forward dynamical system on \(\mathcal{D}=\{x\in\mathbb{R}^{3}:x\geq\mathbf{0}_{\mathbb{R}^{3}}\}\). Furthermore,_
1. _If_ \(\mathcal{N}\leq 1\) _then_ \(\mathbf{0}_{\mathbb{R}^{3}}\) _is globally asymptotically stable on_ \(\mathcal{D}\)_._
2. _If_ \(\mathcal{N}>1\) _then_ \(E^{*}\) _is stable with basin of attraction_ \[\mathcal{D}\setminus\{x=(A,M,F_{W,S})^{T}\in\mathbb{R}^{3}_{+}:A=F_{W,S}=0\},\]
_and_ \(\mathbf{0}_{\mathbb{R}^{3}}\) _is unstable with the non-negative_ \(M\)_-axis being a stable manifold._

| Parameter | Description | Unit | Range | Baseline (\(T=25^{\circ}\)C) | Reference |
|---|---|---|---|---|---|
| \(1/\mu_{h}\) | Average human lifespan | Day | \([60,80]\times 365\) | \(78\times 365\) | |
| \(1/\nu_{h}\) | Average DENV viremic period | Day | \([1,7]\) | 7 | [25] |
| \(B\) | Daily number of mosquito bites on a human | - | \([0.1,1]\) | 0.25 | |
| \(\beta_{mh}\) | Transmission rate of DENV from infected mosquito to susceptible human | Day\(^{-1}\) | \([0.12;0.57]\) | 0.3427 | [27] |
| \(\beta_{hm}\) | Transmission rate of DENV from infected human to susceptible mosquito | Day\(^{-1}\) | \([0.4;0.96]\) | 0.872 | [27] |
| \(\mu_{A,1}\) | Natural death rate for larvae and pupae | Day\(^{-1}\) | \([0.019;0.299]\) | 0.0262 | [27] |
| \(\mu_{A,2}\) | Density-induced death rate for larvae and pupae | Day\(^{-1}\)Ind\(^{-1}\) | \([2\times 10^{-5};0.02]\) | \(1.76\times 10^{-4}\) | [7, 27] |
| \(\phi\) | Daily hatching egg deposit | Day\(^{-1}\) | \([0,11]\) | 10 | [27] |
| \(\gamma\) | Transition rate from non-adult stage to adult stage | Day\(^{-1}\) | \([0.028,0.12]\) | 0.0962 | [27] |
| \(r\) | Sex ratio | - | \([0.4,0.6]\) | 0.5 | |
| \(\mu_{S}\) | Female mosquito death rate | Day\(^{-1}\) | \([0.035,0.07]\) | 0.0453 | [27] |
| \(\mu_{I}\) | Infected female mosquito death rate | Day\(^{-1}\) | \([0.035,0.07]\) | 0.0453 | [27] |
| \(\mu_{M}\) | Male mosquito death rate | Day\(^{-1}\) | \([0.05,0.082]\) | 0.0722 | [27] |
| \(\mu_{M_{S}}\) | Sterile male mosquito death rate | Day\(^{-1}\) | \([0.1,0.2]\) | 0.1 | [27] |
| \(\nu_{m}\) | Extrinsic incubation rate | Day\(^{-1}\) | \([0.015,0.25]\) | 0.184 | [27] |
| \(\Lambda_{tot}\) | Sterile insect release rate | Ind Day\(^{-1}\) | \([0;18000]\) | varying | |
| \(\varepsilon\) | Residual fertility | - | \([0;0.05]\) | varying | |
| \(\epsilon_{F}\) | Sterile female contamination | - | \([0;0.05]\) | varying | |

Table 1: Parameter descriptions and values for the entomological-epidemiological model related to Dengue circulation, for an average temperature of \(T=25^{\circ}\)C and \(N_{h}=20000\).
Proof.: See [3, 7, Theorem 1].
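Theorem 1 is easy to illustrate numerically. The following sketch is our own (using scipy and the \(T=25^{\circ}\)C baselines of Table 1): it integrates system (3) and checks that the trajectory approaches the positive equilibrium \(E^{*}\) of (5), as expected since \(\mathcal{N}\approx 86.7>1\).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Table 1 baselines (T = 25 C)
phi, gam, r = 10.0, 0.0962, 0.5
muA1, muA2, muM, muS = 0.0262, 1.76e-4, 0.0722, 0.0453

def rhs(t, y):
    A, M, F = y
    return [phi * F - (gam + muA1 + muA2 * A) * A,
            (1 - r) * gam * A - muM * M,
            r * gam * A - muS * F]

# Positive equilibrium E* from (5)
N = r * gam * phi / (muS * (gam + muA1))
A_s = (gam + muA1) / muA2 * (N - 1)
E_star = [A_s, (1 - r) * gam * A_s / muM, r * gam * A_s / muS]

sol = solve_ivp(rhs, (0, 2000), [100.0, 10.0, 10.0], rtol=1e-8)
print(sol.y[:, -1])   # the trajectory settles at E*
print(E_star)
```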
**Remark 1**.: _Mechanical control, that is, the removal of mosquito breeding sites, has an impact on \(\mu_{A,2}\), because \(\mu_{A,2}\) depends on \(K\), the larvae carrying capacity, defined by \(K=3\times N_{h}\) [7][section 7]:_
\[\mu_{A,2}=\frac{\gamma+\mu_{A,1}}{K}\mathcal{N}. \tag{6}\]
_Thus reducing \(K\) by a certain percentage, say \(p_{mc}\), will increase \(\mu_{A,2}\) by a factor \(\frac{1}{1-p_{mc}}\)._
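As a numeric illustration of ours (baseline values from Table 1 with \(N_{h}=20000\) and \(\mathcal{N}\approx 86.7\)), relation (6) reproduces the baseline \(\mu_{A,2}\), and the effect of removing a fraction \(p_{mc}\) of breeding sites follows immediately:

```python
# Relation (6) at the Table 1 baselines, with K = 3 * N_h
gam, muA1, N_offspring, Nh = 0.0962, 0.0262, 86.7, 20000
K = 3 * Nh                              # larvae carrying capacity
muA2 = (gam + muA1) / K * N_offspring
print(muA2)                             # ~1.77e-4, matching Table 1
p_mc = 0.3                              # removing 30% of breeding sites...
print(muA2 / (1 - p_mc))                # ...raises mu_A2 by a factor 1/(1-p_mc)
```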
### The wild insect model with SIT
We now consider the following SIT-entomological model, which holds when no virus is circulating. Its study is helpful to derive the disease-free equilibria for various release rates. Assuming that \(t\) is sufficiently large, or that the initial releases are such that \(M_{S}(0)=M_{S}^{*}=(1-\epsilon_{F})\frac{\Lambda_{tot}}{\mu_{M_{S}}}\), the entomological model takes the form
\[\left\{\begin{array}{l}\frac{dA}{dt}=\phi F_{W,S}-(\gamma+\mu_{A,1}+\mu_{A,2}A)A,\\ \frac{dM}{dt}=(1-r)\gamma A-\mu_{M}M,\\ \frac{dF_{W,S}}{dt}=\frac{M+\varepsilon M_{S}^{*}}{M+M_{S}^{*}}r\gamma A-\mu _{S}F_{W,S},\\ \frac{dS_{S}}{dt}=\epsilon_{F}\Lambda_{tot}+\frac{(1-\varepsilon)M_{S}^{*}}{ M+M_{S}^{*}}r\gamma A-\mu_{S}S_{S}.\end{array}\right. \tag{7}\]
Since the released sterile females do not play a role in the wild mosquito dynamics, we derive the following reduced SIT-entomological model
\[\left\{\begin{array}{l}\frac{dA}{dt}=\phi F_{W,S}-(\gamma+\mu_{A,1}+\mu_{A, 2}A)A,\\ \frac{dM}{dt}=(1-r)\gamma A-\mu_{M}M,\\ \frac{dF_{W,S}}{dt}=\frac{M+\varepsilon M_{S}^{*}}{M+M_{S}^{*}}r\gamma A-\mu _{S}F_{W,S}.\end{array}\right. \tag{8}\]
We now deal with equilibria of model (8). Of course, given an equilibrium \(\bar{E}=(\bar{A},\bar{M},\bar{F}_{W,S})^{T}\) of system (8), we can recover the \(S_{S}\)-component of the corresponding equilibrium of system (7), by setting
\[\bar{S}_{S}=\frac{1}{\mu_{S}}\left(\epsilon_{F}\Lambda_{tot}+\frac{(1- \varepsilon)M_{S}^{*}}{\bar{M}+M_{S}^{*}}r\gamma\bar{A}\right).\]
We follow the methodology developed in [3]. When \(A=0\), we obtain the elimination equilibrium \(E_{0}=(0,0,0)^{T}\). Assuming \(A\neq 0\), then from the first equation, we derive
\[\frac{\phi r\gamma}{\mu_{S}}\frac{M+\varepsilon M_{S}^{*}}{M+M_{S}^{*}}=( \gamma+\mu_{A,1}+\mu_{A,2}A). \tag{9}\]
Then, using the fact that
\[A=\frac{\mu_{M}}{(1-r)\gamma}M,\]
setting
\[\mathcal{Q}=\frac{\mu_{A,2}\mu_{M}}{(\gamma+\mu_{A,1})(1-r)\gamma},\]
and
\[\alpha=\frac{M_{S}^{*}}{M},\]
we derive
\[\frac{1+\varepsilon\alpha}{1+\alpha}\mathcal{N}=1+\frac{\mathcal{Q}M_{S}^{*}}{ \alpha}. \tag{10}\]
Setting \(\mathcal{Q}_{S}=M_{S}^{*}\mathcal{Q}>0,\) equation (10) becomes
\[\left(1-\mathcal{N}\varepsilon\right)\alpha^{2}+\left(1+\mathcal{Q}_{S}- \mathcal{N}\right)\alpha+\mathcal{Q}_{S}=0. \tag{11}\]
The discriminant of (11) is
\[\Delta(\mathcal{Q}_{S})=(\mathcal{Q}_{S})^{2}+\mathcal{Q}_{S}\left(4\mathcal{ N}\varepsilon-2\left(\mathcal{N}+1\right)\right)+\left(\mathcal{N}-1\right)^{2}. \tag{12}\]
To study the sign of \(\Delta(\mathcal{Q}_{S})\), we consider the discriminant of \(\Delta\), viewed as a quadratic polynomial in \(\mathcal{Q}_{S}\):
\[\Delta^{\prime}=16\left(1-\mathcal{N}\varepsilon\right)\left(1-\varepsilon \right)\mathcal{N}. \tag{13}\]
Since \(1-\varepsilon\geq 0,\)\(\Delta^{\prime}\) has the same sign as \(1-\mathcal{N}\varepsilon.\)
1. Assume that \(\mathcal{N}\varepsilon<1\). Then, \(\Delta^{\prime}>0\) and \(\Delta\) has two real roots \(\mathcal{Q}_{S_{1}}\) and \(\mathcal{Q}_{S_{2}}\) such that: \[\left\{\begin{array}{l}\mathcal{Q}_{S_{1}}\mathcal{Q}_{S_{2}}=\left(1- \mathcal{N}\right)^{2}>0,\\ \mathcal{Q}_{S_{1}}+\mathcal{Q}_{S_{2}}=2\left(1-\mathcal{N}\varepsilon+ \mathcal{N}\left(1-\varepsilon\right)\right)>0,\\ \mathcal{Q}_{S_{1}}=\left(\sqrt{\mathcal{N}\left(1-\varepsilon\right)}-\sqrt{ 1-\mathcal{N}\varepsilon}\right)^{2}>0,\\ \mathcal{Q}_{S_{2}}=\left(\sqrt{\mathcal{N}\left(1-\varepsilon\right)}+\sqrt{ 1-\mathcal{N}\varepsilon}\right)^{2}>\mathcal{Q}_{S_{1}}.\end{array}\right.\] (14) It therefore follows that \(\Delta(\mathcal{Q}_{S})\geq 0\) when \(\mathcal{Q}_{S}\in\left(0,\mathcal{Q}_{S_{1}}\right]\cup\left[\mathcal{Q}_{S_{2} },+\infty\right)\) and \(\Delta(\mathcal{Q}_{S})<0\) when \(\mathcal{Q}_{S}\in\left(\mathcal{Q}_{S_{1}},\mathcal{Q}_{S_{2}}\right)\). The following discussion is valid: * Assume that \(\mathcal{Q}_{S}\in\left(0,\mathcal{Q}_{S_{1}}\right).\) Then, (11) admits two real roots \(\alpha_{-},\)\(\alpha_{+}\) where \[\alpha_{\pm}=\frac{\left(\mathcal{N}-\mathcal{Q}_{S}-1\right)\pm\sqrt{\Delta( \mathcal{Q}_{S})}}{2\left(1-\mathcal{N}\varepsilon\right)}.\] (15) Note that \[\mathcal{N}-1-\mathcal{Q}_{S}>\mathcal{N}-1-\mathcal{Q}_{S_{1}}=2\left(\sqrt{ \left(1-\mathcal{N}\varepsilon\right)\left(1-\varepsilon\right)\mathcal{N}}- \left(1-\mathcal{N}\varepsilon\right)\right)>0.\] Since \[\alpha_{-}\alpha_{+}=\frac{\mathcal{Q}_{S}}{1-\mathcal{N}\varepsilon}>0,\] \(\mathcal{N}-1-\mathcal{Q}_{S}>0\) and \(\alpha_{+}+\alpha_{-}=\frac{\mathcal{N}-1-\mathcal{Q}_{S}}{1-\mathcal{N} \varepsilon}>0,\) we deduce that \(0<\alpha_{-}<\alpha_{+}.\) * Assume that \(\mathcal{Q}_{S}\in\left(\mathcal{Q}_{S_{2}},+\infty\right).\) Then, (11) admits two real roots \(\alpha_{-},\alpha_{+}\) where \[\alpha_{\pm}=\frac{\left(\mathcal{N}-\mathcal{Q}_{S}-1\right)\pm\sqrt{\Delta( \mathcal{Q}_{S})}}{2\left(1-\mathcal{N}\varepsilon\right)}.\] (16) Note that \[\mathcal{N}-1-\mathcal{Q}_{S}<\mathcal{N}-1-\mathcal{Q}_{S_{2}}=-2\left(\sqrt{ \left(1-\mathcal{N}\varepsilon\right)\left(1-\varepsilon\right)\mathcal{N}}+ \left(1-\mathcal{N}\varepsilon\right)\right)<0.\] Since \[\alpha_{-}\alpha_{+}=\frac{\mathcal{Q}_{S}}{1-\mathcal{N}\varepsilon}>0,\] \(\mathcal{N}-1-\mathcal{Q}_{S}<0\) and \(\alpha_{+}+\alpha_{-}=\frac{\mathcal{N}-1-\mathcal{Q}_{S}}{1-\mathcal{N} \varepsilon}<0,\) we deduce that \(\alpha_{-}<\alpha_{+}<0.\) * Assume that \(\mathcal{Q}_{S}\in\left(\mathcal{Q}_{S_{1}},\mathcal{Q}_{S_{2}}\right).\) Then, (11) does not admit real roots. * Assume that \(\mathcal{Q}_{S}=\mathcal{Q}_{S_{1}}.\) Then, (11) has only one real solution \[\alpha_{\circ}=\frac{\mathcal{N}-1-\mathcal{Q}_{S_{1}}}{2\left(1-\mathcal{N} \varepsilon\right)}>0.\] (17)
* Assume that \(\mathcal{Q}_{S}=\mathcal{Q}_{S_{2}}\). Then, (11) has only one real solution \[\alpha_{-}=\alpha_{+}=\frac{\mathcal{N}-1-\mathcal{Q}_{S_{2}}}{2\left(1- \mathcal{N}\varepsilon\right)}<0.\]
2. Assume that \(\mathcal{N}\varepsilon>1\). Then \(\Delta^{\prime}<0\) and \(\Delta(\mathcal{Q}_{S})>0\). Therefore, (11) admits two real roots \(\alpha_{-}\), \(\alpha_{+}\). Since \(\alpha_{-}\alpha_{+}=\frac{\mathcal{Q}_{S}}{1-\mathcal{N}\varepsilon}<0\). It follows that \[\alpha_{-}<0<\alpha_{+}=\frac{-\left(\mathcal{N}-1-\mathcal{Q}_{S}\right)+ \sqrt{\Delta(\mathcal{Q}_{S})}}{2\left(\mathcal{N}\varepsilon-1\right)}.\] (18)
3. Assume that \(\mathcal{N}\varepsilon=1\). Then, (11) admits a unique solution \[\alpha_{\sharp}=\frac{\mathcal{Q}_{S}}{\mathcal{N}-1-\mathcal{Q}_{S}}.\] (19) \(\alpha_{\sharp}>0\) whenever \(\mathcal{Q}_{S}<\mathcal{N}-1\).
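Before summarizing these cases, note that the algebra above can be verified symbolically; a short sketch of ours using sympy checks the discriminant (12) and the roots (14):

```python
import sympy as sp

N, eps, Q, alpha = sp.symbols('N epsilon Q_S alpha', positive=True)

# Quadratic (11) and its discriminant, compared with (12)
eq11 = (1 - N*eps)*alpha**2 + (1 + Q - N)*alpha + Q
target = Q**2 + Q*(4*N*eps - 2*(N + 1)) + (N - 1)**2
print(sp.expand(sp.discriminant(eq11, alpha) - target))          # 0

# Roots (14) of Delta(Q_S) = 0: product and sum
Q1 = (sp.sqrt(N*(1 - eps)) - sp.sqrt(1 - N*eps))**2
Q2 = (sp.sqrt(N*(1 - eps)) + sp.sqrt(1 - N*eps))**2
print(sp.expand(Q1*Q2 - (1 - N)**2))                             # 0
print(sp.expand(Q1 + Q2 - 2*(1 - N*eps + N*(1 - eps))))          # 0
```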
From the previous discussion, we deduce, for \(M_{S}^{*}=\frac{\Lambda_{M}}{\mu_{M_{S}}}\), the following:
**Theorem 2**.: _System (7) always admits the trivial equilibrium \(E_{0}=\left(0,0,0,\frac{\epsilon_{F}\Lambda_{tot}}{\mu_{S}}\right)^{T}\). In addition:_
1. _Assume that_ \(\mathcal{N}\varepsilon<1\)_. Consider the threshold_ \[\Lambda_{M}^{crit}=\frac{\mu_{M_{S}}}{\mathcal{Q}}\left(\sqrt{\mathcal{N}\left(1-\varepsilon\right)}-\sqrt{1-\mathcal{N}\varepsilon}\right)^{2}.\] (20) 1. _If_ \((1-\epsilon_{F})\Lambda_{tot}\in\left(0,\Lambda_{M}^{crit}\right)\)_, then system (_7_) admits two positive equilibria_ \(E_{1}=(A_{1},M_{1},F_{W,S_{1}},S_{S_{1}})^{T}\) _and_ \(E_{2}=(A_{2},M_{2},F_{W,S_{2}},S_{S_{2}})^{T}\)_, such that_ \((A_{1},M_{1},F_{W,S_{1}})^{T}<(A_{2},M_{2},F_{W,S_{2}})^{T}\) _and_ \[\left\{\begin{array}{l}M_{1}=\frac{(1-\epsilon_{F})\Lambda_{tot}}{\mu_{M_{S}}\alpha_{+}},\quad\text{where $\alpha_{+}$ is computed from (15)},\\ M_{2}=\frac{(1-\epsilon_{F})\Lambda_{tot}}{\mu_{M_{S}}\alpha_{-}},\quad\text{where $\alpha_{-}$ is computed from (15)},\\ A_{1,2}=\frac{\mu_{M}}{\left(1-r\right)\gamma}M_{1,2},\\ F_{W,S_{1,2}}=\frac{(\gamma+\mu_{A,1}+\mu_{A,2}A_{1,2})\,A_{1,2}}{\phi},\\ S_{S_{1,2}}=\frac{1}{\mu_{S}}\left(\epsilon_{F}\Lambda_{tot}+\frac{(1-\varepsilon)M_{S}^{*}}{M_{1,2}+M_{S}^{*}}r\gamma A_{1,2}\right).\end{array}\right.\]
2. _If_ \((1-\epsilon_{F})\Lambda_{tot}=\Lambda_{M}^{crit}\)_, then system (_7_) admits a unique positive equilibrium_ \(E_{\circ}=(A_{\circ},M_{\circ},F_{W,S_{\circ}},S_{S_{\circ}})^{T}\) _where_ \[\left\{\begin{array}{l}M_{\circ}=\frac{\Lambda_{M}}{\mu_{M_{S}}\alpha_{\circ}},\quad\text{where $\alpha_{\circ}$ is computed from (17)},\\ A_{\circ}=\frac{\mu_{M}}{\left(1-r\right)\gamma}M_{\circ},\\ F_{W,S_{\circ}}=\frac{(\gamma+\mu_{A,1}+\mu_{A,2}A_{\circ})\,A_{\circ}}{\phi},\\ S_{S_{\circ}}=\frac{1}{\mu_{S}}\left(\epsilon_{F}\Lambda_{tot}+\frac{(1-\varepsilon)M_{S}^{*}}{M_{\circ}+M_{S}^{*}}r\gamma A_{\circ}\right).\end{array}\right.\]
2. _Assume that_ \(\mathcal{N}\varepsilon>1\)_. Then, for any_ \((1-\epsilon_{F})\Lambda_{tot}>0\)_, system (_7_) admits a unique positive equilibrium_
\(E_{\dagger}=\left(A_{\dagger},M_{\dagger},F_{W,S_{\dagger}},S_{S_{\dagger}}\right)^{T}\) _where_ \[\left\{\begin{array}{l}M_{\dagger}=\frac{(1-\epsilon_{F})\Lambda_{tot}}{\mu_{M_{S}}\alpha_{+}},\quad\text{where $\alpha_{+}$ is computed from (18)},\\ A_{\dagger}=\frac{\mu_{M}}{\left(1-r\right)\gamma}M_{\dagger},\\ F_{W,S_{\dagger}}=\frac{(\gamma+\mu_{A,1}+\mu_{A,2}A_{\dagger})\,A_{\dagger}}{\phi},\\ S_{S_{\dagger}}=\frac{1}{\mu_{S}}\left(\epsilon_{F}\Lambda_{tot}+\frac{(1-\varepsilon)M_{S}^{*}}{M_{\dagger}+M_{S}^{*}}r\gamma A_{\dagger}\right).\end{array}\right.\]

3. _Assume that_ \(\mathcal{N}\varepsilon=1\)_. If_ \(\mathcal{Q}_{S}<\mathcal{N}-1\)_, then system (_7_) admits a unique positive equilibrium, obtained from_ \(\alpha_{\sharp}\) _in (19) through the same formulas as above._
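To fix orders of magnitude, the critical release rate (20) can be evaluated numerically; a sketch of ours at the Table 1 baselines (the chosen residual fertility values are illustrative):

```python
import numpy as np

# Critical sterile male release rate (20) at the Table 1 baselines
phi, gam, r = 10.0, 0.0962, 0.5
muA1, muA2, muM, muS, muMS = 0.0262, 1.76e-4, 0.0722, 0.0453, 0.1

N = r * gam * phi / (muS * (gam + muA1))       # ~86.7
Q = muA2 * muM / ((gam + muA1) * (1 - r) * gam)
for eps in [0.0, 0.005, 0.01, 0.02]:           # residual fertility levels
    if N * eps < 1:
        lam = muMS / Q * (np.sqrt(N * (1 - eps)) - np.sqrt(1 - N * eps))**2
        print(f"eps={eps}: Lambda_M^crit ~ {lam:.0f} sterile males/day")
    else:
        print(f"eps={eps}: N*eps = {N*eps:.2f} >= 1, elimination unreachable")
```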
## 3 Qualitative analysis of the full SIT epidemiological model
Now we turn to the more complex model described in the introduction. In the sequel, we assume that \(\mathcal{N}>1\). Indeed, in the case where \(\mathcal{N}\leq 1\), by a comparison argument, the system will always converge toward the trivial disease-free equilibrium.
Without SIT, this model has been studied in [7] where we derived the Basic Reproduction Number defined as follows
\[\mathcal{R}_{0}^{2}=\frac{\nu_{m}}{\nu_{m}+\mu_{S}}\frac{B\beta_{mh}}{\mu_{I}} \frac{B\beta_{hm}}{\nu_{h}+\mu_{h}}\frac{F_{W,S}^{*}}{N_{h}}. \tag{21}\]
We assume that, without any control,
\[\mathcal{R}_{0}^{2}>1.\]
From [7], there exists a unique endemic equilibrium
\[EE=(S_{h}^{\sharp},I_{h}^{\sharp},R_{h}^{\sharp},A^{\sharp},M^{\sharp},F_{W,S}^ {\sharp},F_{E}^{\sharp},F_{I}^{\sharp})^{T}\]
when \(\mathcal{R}_{0}^{2}>1\).
We will now proceed as in [7, Section 5]. In this section, we consider that constant and permanent SIT releases are used as a control tool. Hence, following (8), the dynamics of the human and mosquito populations are described by system (22)-(23):
\[\left\{\begin{array}{lll}\frac{dS_{h}}{dt}&=&\mu_{h}N_{h}-B \beta_{mh}\frac{F_{W,I}+S_{I}}{N_{h}}S_{h}-\mu_{h}S_{h},\\ \frac{dI_{h}}{dt}&=&B\beta_{mh}\frac{F_{W,I}+S_{I}}{N_{h}}S_{h}- \nu_{h}I_{h}-\mu_{h}I_{h},\\ \frac{dR_{h}}{dt}&=&\nu_{h}I_{h}-\mu_{h}R_{h},\end{array}\right. \tag{22}\]
\[\left\{\begin{array}{lll}\frac{dA}{dt}&=&\phi(F_{W,S}+F_{W,E}+F_{W,I})-(\gamma+\mu_{A,1}+\mu_{A,2}A)A,\\ \frac{dM}{dt}&=&(1-r)\gamma A-\mu_{M}M,\\ \frac{dF_{W,S}}{dt}&=&\frac{M+\varepsilon M_{S}^{*}}{M+M_{S}^{*}}r\gamma A-B\beta_{hm}\frac{I_{h}}{N_{h}}F_{W,S}-\mu_{S}F_{W,S},\\ \frac{dF_{W,E}}{dt}&=&B\beta_{hm}\frac{I_{h}}{N_{h}}F_{W,S}-(\nu_{m}+\mu_{S})F_{W,E},\\ \frac{dF_{W,I}}{dt}&=&\nu_{m}F_{W,E}-\mu_{I}F_{W,I},\\ \frac{dS_{S}}{dt}&=&\epsilon_{F}\Lambda_{tot}+\frac{(1-\varepsilon)M_{S}^{*}}{M+M_{S}^{*}}r\gamma A-B\beta_{hm}\frac{I_{h}}{N_{h}}S_{S}-\mu_{S}S_{S},\\ \frac{dS_{E}}{dt}&=&B\beta_{hm}\frac{I_{h}}{N_{h}}S_{S}-(\nu_{m}+\mu_{S})S_{E},\\ \frac{dS_{I}}{dt}&=&\nu_{m}S_{E}-\mu_{I}S_{I}.\end{array}\right. \tag{23}\]
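For readers who want to experiment with the model, a minimal numerical sketch of system (22)-(23) is given below (the simulations of Section 4 use Matlab's ode23; this Python translation is ours, and every parameter value in it is an illustrative placeholder rather than a value from Table 1):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameter values (illustrative only; see Table 1 for the actual ones).
p = dict(B=0.7, beta_mh=0.5, beta_hm=0.5, mu_h=1/(78*365), nu_h=1/7, Nh=1e4,
         phi=10.0, gamma=0.08, mu_A1=0.05, mu_A2=2e-4, r=0.5, mu_M=0.1,
         mu_MS=0.12, mu_S=1/15, mu_I=1/15, nu_m=0.1,
         Lambda_tot=5000.0, eps_F=0.01, eps=0.01)

def rhs(t, x, p):
    Sh, Ih, Rh, A, M, FWS, FWE, FWI, SS, SE, SI = x
    # Sterile males are kept at their equilibrium M_S^* = (1-eps_F)*Lambda_tot/mu_MS,
    # as in system (23).
    MS = (1 - p['eps_F']) * p['Lambda_tot'] / p['mu_MS']
    fh = p['B'] * p['beta_mh'] * (FWI + SI) / p['Nh']  # mosquito-to-human infection force
    fm = p['B'] * p['beta_hm'] * Ih / p['Nh']          # human-to-mosquito infection force
    fert = (M + p['eps'] * MS) / (M + MS)              # residual fertility eps enters here
    dSh = p['mu_h'] * p['Nh'] - fh * Sh - p['mu_h'] * Sh
    dIh = fh * Sh - (p['nu_h'] + p['mu_h']) * Ih
    dRh = p['nu_h'] * Ih - p['mu_h'] * Rh
    dA = p['phi'] * (FWS + FWE + FWI) - (p['gamma'] + p['mu_A1'] + p['mu_A2'] * A) * A
    dM = (1 - p['r']) * p['gamma'] * A - p['mu_M'] * M
    dFWS = fert * p['r'] * p['gamma'] * A - fm * FWS - p['mu_S'] * FWS
    dFWE = fm * FWS - (p['nu_m'] + p['mu_S']) * FWE
    dFWI = p['nu_m'] * FWE - p['mu_I'] * FWI
    dSS = (p['eps_F'] * p['Lambda_tot'] + (1 - p['eps']) * MS / (M + MS)
           * p['r'] * p['gamma'] * A - fm * SS - p['mu_S'] * SS)
    dSE = fm * SS - (p['nu_m'] + p['mu_S']) * SE
    dSI = p['nu_m'] * SE - p['mu_I'] * SI
    return [dSh, dIh, dRh, dA, dM, dFWS, dFWE, dFWI, dSS, dSE, dSI]

x0 = [p['Nh'] - 1, 1.0, 0.0, 5e4, 1e4, 1e4, 0.0, 0.0, 0.0, 0.0, 0.0]
sol = solve_ivp(rhs, (0.0, 1000.0), x0, args=(p,), method='LSODA', rtol=1e-8)
```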
In the sequel, we provide qualitative results of system (22)-(23). Let us set
\[x(t)=(S_{h}(t),I_{h}(t),R_{h}(t),A(t),M(t),F_{W,S}(t),F_{W,E}(t),F_{W,I}(t),S_ {S}(t),S_{E}(t),S_{I}(t))^{T}.\]
### Boundedness of solutions and existence of equilibria
Using arguments similar to those in [7, Lemmas 1 & 2], it is straightforward to obtain the following lemma.
**Lemma 1** (Boundedness of solutions).: _The set_
\[\Gamma= \left\{x\in\mathbb{R}_{+}^{11}:S_{h}+I_{h}+R_{h}=N_{h};\left(A,M \right)^{T}\leq\left(A^{*},M^{*}\right)^{T};F_{W,S}+F_{W,E}+F_{W,I}\leq F_{W,S} ^{*};\right.\] \[\left.S_{S}+S_{E}+S_{I}\leq\frac{\epsilon_{F}\Lambda_{tot}+r \gamma A^{*}}{\mu_{S}}\right\}\]
_is positively invariant for system (22)-(23) where \((A^{*},M^{*},F_{W,S}^{*})^{T}\) is given by (5)._
Using Theorem 2, page 8, we deduce:
**Proposition 1** (Trivial and non-trivial disease-free equilibria).: _Whatever \(\mathcal{N}\varepsilon\geq 0\), system (22)-(23) always has a trivial disease-free equilibrium, \(TDFE\), such that_
\[TDFE=\left(N_{h},0_{\mathbb{R}^{2}},\frac{\epsilon_{F}\Lambda_{tot}}{\mu_{S}},0 _{\mathbb{R}^{2}}\right)^{T}. \tag{24}\]
1. _Assume_ \(\mathcal{N}\varepsilon<1\)_. Let_ \(\Lambda_{M}^{crit}\) _be defined by (_20_), page 8._ * _If_ \((1-\epsilon_{F})\Lambda_{tot}\in(0,\Lambda_{M}^{crit})\)_, then system (_22_)-(_23_) has two non-trivial disease-free equilibria_ \(DFE_{1,2}=\left(N_{h},0_{\mathbb{R}^{2}},A_{1,2},M_{1,2},F_{W,S_{1,2}},0_{\mathbb{R}^{2}},S_{S_{1,2}},0_{\mathbb{R}^{2}}\right)^{T}\) _with_ \((A_{1},M_{1},F_{W,S_{1}})^{T}<(A_{2},M_{2},F_{W,S_{2}})^{T}\) _and_ \(A_{1,2}\)_,_ \(M_{1,2}\)_,_ \(F_{W,S_{1,2}}\)_, and_ \(S_{S_{1,2}}\) _given in Theorem_ 2_._ * _If_ \((1-\epsilon_{F})\Lambda_{tot}=\Lambda_{M}^{crit}\)_, then system (_22_)-(_23_) has one non-trivial disease-free equilibrium_ \[DFE_{\circ}=\left(N_{h},0_{\mathbb{R}^{2}},A_{\circ},M_{\circ},F_{W,S_{\circ}},0_{\mathbb{R}^{2}},S_{S_{\circ}},0_{\mathbb{R}^{2}}\right)^{T},\] _with_ \(A_{\circ}\)_,_ \(M_{\circ}\)_,_ \(F_{W,S_{\circ}}\)_, and_ \(S_{S_{\circ}}\) _given in Theorem_ 2_._
2. _Assume_ \(\mathcal{N}\varepsilon>1\)_. System (_22_)-(_23_) admits one non-trivial disease-free equilibrium_ \[DFE_{\dagger}=\left(N_{h},0_{\mathbb{R}^{2}},A_{\dagger},M_{\dagger},F_{W,S_{ \dagger}},0_{\mathbb{R}^{2}},S_{S_{\dagger}},0_{\mathbb{R}^{2}}\right)^{T},\] _where_ \(A_{\dagger}\)_,_ \(M_{\dagger}\)_,_ \(F_{W,S_{\dagger}}\)_, and_ \(S_{S_{\dagger}}\) _are given in Theorem_ 2_._
3. _Assume that_ \(\mathcal{N}\varepsilon=1\)_. If_ \((1-\epsilon_{F})\Lambda_{tot}\in(0,\Lambda_{M,\sharp}^{crit})\)_, where_ \(\Lambda_{M,\sharp}^{crit}=\mu_{M_{S}}\frac{\mathcal{N}-1}{\mathcal{Q}}\)_, then system (_22_)-(_23_) has the following non-trivial disease-free equilibrium_ \[DFE_{\sharp}=\left(N_{h},0_{\mathbb{R}^{2}},A_{\sharp},M_{\sharp},F_{W,S_{ \sharp}},0_{\mathbb{R}^{2}},S_{S_{\sharp}},0_{\mathbb{R}^{2}}\right)^{T}\] _where_ \(A_{\sharp}\)_,_ \(M_{\sharp}\)_,_ \(F_{W,S_{\sharp}}\)_, and_ \(S_{S_{\sharp}}\) _are given in Theorem_ 2_._
Note that using the relation \(\mathcal{N}\varepsilon=1\) in the expression of \(\Lambda_{M}^{crit}\), we recover \(\Lambda_{M,\sharp}^{crit}\). Thus, to simplify the reading, we will not consider the particular case \(\mathcal{N}\varepsilon=1\) in the rest of the paper, since most of the forthcoming results are similar to those obtained when \(\mathcal{N}\varepsilon<1\).
Following point 1.b) of Theorem 3, page 9, in the disease-free case, equilibrium \(DFE_{1}\) is unreachable because it is always unstable. Therefore, in addition to \(TDFE\), the meaningful disease-free equilibrium of system (22)-(23) is
\[DFE_{SIT_{\varepsilon}}=\left\{\begin{array}{ll}DFE_{\dagger},&\text{when} \quad\mathcal{N}\varepsilon>1,\\ DFE_{2},&\text{when}\quad\mathcal{N}\varepsilon<1\quad\text{and}\quad(1- \epsilon_{F})\Lambda_{tot}\in(0,\Lambda_{M}^{crit}),\\ DFE_{\circ},&\text{when}\quad\mathcal{N}\varepsilon<1\quad\text{and}\quad(1- \epsilon_{F})\Lambda_{tot}=\Lambda_{M}^{crit},\\ TDFE,&\text{when}\quad\mathcal{N}\varepsilon<1\quad\text{and}\quad(1- \epsilon_{F})\Lambda_{tot}>\Lambda_{M}^{crit}.\end{array}\right. \tag{25}\]
**Remark 3**.: _Note that in the last case, only \(TDFE\) exists, while in the other cases \(DFE_{SIT_{\varepsilon}}\) and \(TDFE\) co-exist._
Using the next generation matrix approach, see e.g. [24], the basic reproduction number of system (22)-(23) is
\[\mathcal{R}^{2}_{0,SIT_{c}}=\left\{\begin{array}{ll}\frac{\nu_{m}}{\nu_{m}+\mu_{S }}\frac{B\beta_{mh}}{\mu_{I}}\frac{B\beta_{hm}}{\nu_{h}+\mu_{h}}\frac{(F_{W,S_{ 1}}+S_{S_{1}})}{N_{h}},\quad\text{when}\quad\mathcal{N}\varepsilon>1,\\ \\ \frac{\nu_{m}}{\nu_{m}+\mu_{S}}\frac{B\beta_{mh}}{\mu_{I}}\frac{B\beta_{hm}}{ \nu_{h}+\mu_{h}}\frac{(F_{W,S_{2}}+S_{S_{2}})}{N_{h}},\quad\text{when}\quad \mathcal{N}\varepsilon<1\quad\text{and}\quad(1-\epsilon_{F})\Lambda_{tot} \in(0,\Lambda^{crit}_{M}),\\ \\ \frac{\nu_{m}}{\nu_{m}+\mu_{S}}\frac{B\beta_{mh}}{\mu_{I}}\frac{B\beta_{hm}}{ \nu_{h}+\mu_{h}}\frac{(F_{W,S_{0}}+S_{S_{0}})}{N_{h}},\quad\text{when}\quad \mathcal{N}\varepsilon<1\quad\text{and}\quad(1-\epsilon_{F})\Lambda_{tot}= \Lambda^{crit}_{M},\\ \\ \frac{\nu_{m}}{\nu_{m}+\mu_{S}}\frac{B\beta_{mh}}{\mu_{I}}\frac{B\beta_{hm}}{ \nu_{h}+\mu_{h}}\frac{\epsilon_{F}\Lambda_{tot}}{\mu_{S}N_{h}},\quad\text{ when}\quad\mathcal{N}\varepsilon<1\quad\text{and}\quad(1-\epsilon_{F})\Lambda_{tot}> \Lambda^{crit}_{M}.\end{array}\right. \tag{26}\]
**Remark 4**.: _In some cases, as expected, \(\mathcal{R}^{2}_{0,SIT_{c}}\) has two parts: the first part_
\[\mathcal{R}^{2}_{0,SIT_{c},W}=\frac{\nu_{m}}{\nu_{m}+\mu_{S}}\frac{B\beta_{mh} }{\mu_{I}}\frac{B\beta_{hm}}{\nu_{h}+\mu_{h}}\frac{F_{W,S_{1,2,\circ}}}{N_{h}},\]
_is related to the wild susceptible females that are still fertile while the second part,_
\[\mathcal{R}^{2}_{0,SIT_{c},S}=\frac{\nu_{m}}{\nu_{m}+\mu_{S}}\frac{B\beta_{mh} }{\mu_{I}}\frac{B\beta_{hm}}{\nu_{h}+\mu_{h}}\frac{S_{S_{1,2,\circ}}}{N_{h}},\]
_is related to susceptible females, wild and released ones, that are sterile._
_The main question is: when \(\mathcal{R}^{2}_{0,SIT_{c},W}<1\), is it possible that the releases of sterile females, together with the releases of males, which are assumed not to be fully sterile, imply \(\mathcal{R}^{2}_{0,SIT_{c}}>1\)?_
**Remark 5**.: _Since \(F_{W,S_{2,\dagger,\circ}}+S_{S_{2,\dagger,\circ}}=\frac{r\gamma A_{2,\dagger,\circ}+\epsilon_{F}\Lambda_{tot}}{\mu_{S}}\) and \(F^{*}_{W,S}=\frac{r\gamma A^{*}}{\mu_{S}}\), and using (21), it is interesting to observe that_
\[\mathcal{R}^{2}_{0,SIT_{c}}=\mathcal{R}^{2}_{0}\left\{\begin{array}{ll}\frac{\epsilon_{F}\Lambda_{tot}}{r\gamma A^{*}}+\frac{A_{\dagger}}{A^{*}},\quad\text{when}\quad\mathcal{N}\varepsilon>1,\\ \\ \frac{\epsilon_{F}\Lambda_{tot}}{r\gamma A^{*}}+\frac{A_{2}}{A^{*}},\quad\text{when}\quad\mathcal{N}\varepsilon<1\quad\text{and}\quad(1-\epsilon_{F})\Lambda_{tot}\in(0,\Lambda^{crit}_{M}),\\ \\ \frac{\epsilon_{F}\Lambda_{tot}}{r\gamma A^{*}}+\frac{A_{\circ}}{A^{*}},\quad\text{when}\quad\mathcal{N}\varepsilon<1\quad\text{and}\quad(1-\epsilon_{F})\Lambda_{tot}=\Lambda^{crit}_{M},\\ \\ \frac{\epsilon_{F}\Lambda_{tot}}{r\gamma A^{*}},\quad\text{when}\quad\mathcal{N}\varepsilon<1\quad\text{and}\quad(1-\epsilon_{F})\Lambda_{tot}>\Lambda^{crit}_{M},\end{array}\right. \tag{27}\]
_where \(A^{*}\) is defined in (5), page 5. Thus, clearly, when \(\epsilon_{F}\Lambda_{tot}\) is too large, i.e. \(\epsilon_{F}\Lambda_{tot}>r\gamma A^{*}\), we always have \(\mathcal{R}^{2}_{0,SIT_{c}}>\mathcal{R}^{2}_{0}\). In this case, if we already have \(\mathcal{R}^{2}_{0}>1\), then \(\mathcal{R}^{2}_{0,SIT_{c}}>1\), such that the SIT will fail to lower the epidemiological risk. Conversely, since \(A^{*}>A_{2,\dagger,\circ}\), we have \(\mathcal{R}^{2}_{0,SIT_{c}}<\mathcal{R}^{2}_{0}\) whenever \(\epsilon_{F}\Lambda_{tot}\) is sufficiently low, i.e._
\[\epsilon_{F}\Lambda_{tot}<r\gamma(A^{*}-A_{2,\dagger,\circ}). \tag{28}\]
_We recover the same result as in [7] when \(\mathcal{N}\varepsilon\leq 1\)._
**Remark 6**.: _Since \(A_{2,\dagger,\circ}\) is an increasing function of \(\epsilon_{F}\Lambda_{tot}\), it is straightforward to deduce that \(\mathcal{R}^{2}_{0,SIT_{c}}\) increases with respect to \(\epsilon_{F}\Lambda_{tot}\)._
**Remark 7**.: _According to (27), when \(\mathcal{N}\varepsilon\leq 1\) and \((1-\epsilon_{F})\Lambda_{tot}>\Lambda^{crit}_{M}\), then \(\mathcal{R}^{2}_{0,SIT_{c}}<1\) iff_
\[\epsilon_{F}\Lambda_{tot}<\frac{r\gamma A^{*}}{\mathcal{R}^{2}_{0}}=\frac{r \gamma(\gamma+\mu_{A,1})(\mathcal{N}-1)}{\mu_{A,2}\mathcal{R}^{2}_{0}}:=\Lambda^ {crit}_{F}. \tag{29}\]
_Also, it follows from (27) that_
\[\epsilon_{F}\Lambda_{tot}>\Lambda^{crit}_{F}\Rightarrow\mathcal{R}^{2}_{0,SIT_{ c}}>1.\]
**Remark 8**.: _Clearly, \(\epsilon_{F}\) has to be chosen such that_
\[\epsilon_{F}<\frac{\Lambda_{F}^{crit}}{\Lambda_{tot}}. \tag{30}\]
_This result is in complete contradiction with the constant maximal percentage given by IAEA for contamination by sterile females: we can clearly see that the percentage of contamination may depend on the total amount of sterile insects per release._
In the case of sterile female contamination, straightforward computations lead to:
**Proposition 2**.: _When \(\epsilon_{F}\Lambda_{tot}>\Lambda_{F}^{crit}\), then there exists a wild insects-free boundary equilibrium, \(WIFE\), such that \(A^{\#}=M^{\#}=F_{S}^{\#}=F_{E}^{\#}=F_{I}^{\#}=0\), \(S_{S}^{\#}>0\), \(S_{E}^{\#}>0\), \(S_{I}^{\#}>0\) and_
\[S_{h}^{\#} = \frac{\mu_{S}+B\beta_{hm}\frac{\mu_{h}}{\mu_{h}+\nu_{h}}}{B\beta_ {hm}\frac{\mu_{h}}{\mu_{h}+\nu_{h}}+\mu_{S}\frac{\epsilon_{F}\Lambda_{tot}}{ \Lambda_{F}^{crit}}}N_{h}, \tag{31}\] \[S_{I}^{\#} = \frac{\nu_{m}}{\mu_{I}\left(\nu_{m}+\mu_{S}\right)}\left(1-\frac{ \mu_{S}}{\mu_{S}+B\beta_{hm}\frac{\mu_{h}}{\mu_{h}+\nu_{h}}\left(1-\frac{S_{h }^{\#}}{N_{h}}\right)}\right)\epsilon_{F}\Lambda_{tot}.\]
Proof.: See Appendix C.
We now have a look at the existence of non-trivial endemic equilibria.
**Proposition 3**.: _Assume \(\mu_{I}=\mu_{S}\)._
* _Let_ \(\mathcal{N}\varepsilon\leq 1\)_, and set_ \[\Lambda_{M,EE}^{crit}=\frac{\mu_{M_{S}}}{\mathcal{Q}}\left(\sqrt{\mathcal{N}+\left(1-\mathcal{N}\varepsilon\right)}-\sqrt{1-\mathcal{N}\varepsilon}\right)^{2}.\] (32) _Assume_ \(0<(1-\epsilon_{F})\Lambda_{tot}<\Lambda_{M,EE}^{crit}\)_, and_ \(\epsilon_{F}\Lambda_{tot}\geq 0\) _is chosen such that_ \[\epsilon_{F}\Lambda_{tot}+r\gamma A_{1}^{EE}>\frac{F_{W,S}^{*}}{\mathcal{R}_{0}^{2}},\] (33) _where_ \[A_{1}^{EE}=\frac{1}{2\mathcal{Q}\frac{\left(1-r\right)\gamma}{\mu_{M}}}\left(\mathcal{N}-\mathcal{Q}M_{S}^{*}-\sqrt{\left(\left(\mathcal{Q}M_{S}^{*}-\mathcal{N}\right)^{2}-4\mathcal{Q}\left(1-\mathcal{N}\varepsilon\right)M_{S}^{*}\right)}\right).\] _Then there exist two endemic equilibria,_ \(EE_{SIT,1}\) _and_ \(EE_{SIT,2}\)_. In addition,_ \(EE_{SIT,1}=EE_{SIT,2}\) _when_ \(\mathcal{N}\varepsilon=1\)_._
* _Let_ \(\mathcal{N}\varepsilon>1\)_. For all_ \((1-\epsilon_{F})\Lambda_{tot}>0\)_, assume that_ \(\epsilon_{F}\Lambda_{tot}\geq 0\) _is chosen such that_ \[\epsilon_{F}\Lambda_{tot}+r\gamma A_{*}^{EE}>\frac{F_{W,S}^{*}}{\mathcal{R}_{0}^{2}},\] _where_ \[A_{*}^{EE}=\frac{1}{2\mathcal{Q}\frac{\left(1-r\right)\gamma}{\mu_{M}}}\left(\mathcal{N}-\mathcal{Q}M_{S}^{*}+\sqrt{\left(\left(\mathcal{Q}M_{S}^{*}-\mathcal{N}\right)^{2}+4\mathcal{Q}\left(\mathcal{N}\varepsilon-1\right)M_{S}^{*}\right)}\right).\] _Then, there exists one positive equilibrium_ \(EE_{SIT,*}\)_._
Proof.: See Appendix C.
We consider the case where \(\mu_{S}<\mu_{I}\). We first set the following thresholds
\[\alpha = \frac{\nu_{m}}{\mu_{I}}\frac{B\beta_{mh}}{\nu_{h}+\mu_{h}}\frac{B\beta_{hm}}{\nu_{m}+\mu_{S}}\frac{1}{N_{h}^{2}}, \tag{34}\] \[\Lambda_{tot}^{crit,1}=\frac{\mu_{S}}{\epsilon_{F}\alpha}\frac{\mathcal{N}\varepsilon\left(1-\frac{1+\frac{\nu_{m}}{\mu_{I}}}{1+\frac{\nu_{m}}{\mu_{S}}}\right)}{1-\mathcal{N}\varepsilon\left(\frac{1+\frac{\nu_{m}}{\mu_{I}}}{1+\frac{\nu_{m}}{\mu_{S}}}\right)},\] \[\Lambda_{tot}^{crit,2}=\frac{r\gamma(\gamma+\mu_{A,1})\left(\mathcal{N}\frac{1+\frac{\nu_{m}}{\mu_{I}}}{1+\frac{\nu_{m}}{\mu_{S}}}-1\right)}{\mu_{A,2}\left((1-\epsilon_{F})\frac{r}{1-r}\frac{\mu_{M}}{\mu_{M_{S}}}+\epsilon_{F}\right)}, \tag{35}\]
where
\[\Delta=\left(\left(\frac{\alpha\mu_{M}(1-\epsilon_{F})r}{(1-r)\,\mu_{M_{S}}} \left(1-\mathcal{N}\varepsilon\frac{1+\frac{\nu_{m}}{\mu_{I}}}{1+\frac{\nu_{m }}{\mu_{S}}}\right)+\alpha\epsilon_{F}\left(1-\mathcal{N}\frac{1+\frac{\nu_{m }}{\mu_{I}}}{1+\frac{\nu_{m}}{\mu_{S}}}\right)\right)^{2}+4\frac{\mathcal{Q}( 1-\epsilon_{F})}{\mu_{M_{S}}}\alpha\epsilon_{F}\mathcal{N}\mu_{S}\left(1-\frac {1+\frac{\nu_{m}}{\mu_{I}}}{1+\frac{\nu_{m}}{\mu_{S}}}\right)>0.\]
Then, we derive
**Proposition 4**.: _Assume \(\mu_{S}<\mu_{I}\)._
* _Let_ \(\mathcal{N}\varepsilon\leq\frac{1+\frac{\nu_{m}}{\mu_{S}}}{ 1+\frac{\nu_{m}}{\mu_{I}}}\)_._
* _If_ \(\Lambda_{tot}^{crit,1}<\Lambda_{tot}<\Lambda_{tot}^{crit,3}\)_, or_
* _If_ \(\Lambda_{tot}>\max\{\Lambda_{tot}^{crit,3},\Lambda_{tot}^{crit,1}\}\)_, and_ \(\mathcal{N}\geq\frac{1+\frac{\nu_{m}}{\mu_{S}}}{1+\frac{\nu_{m}}{\mu_{I}}}\) _and_ \(\Lambda_{tot}<\Lambda_{tot}^{crit,2}\)_,_ _then there exist either zero or two endemic equilibria._
* _If_ \(\Lambda_{tot}>\max\{\Lambda_{tot}^{crit,3},\Lambda_{tot}^{crit,1}\}\)_, and_
* \(\mathcal{N}>\frac{1+\frac{\nu_{m}}{\mu_{S}}}{ 1+\frac{\nu_{m}}{\mu_{I}}}\) _and_ \(\Lambda_{tot}>\Lambda_{tot}^{crit,2}\)_, or_
* \(\mathcal{N}<\frac{1+\frac{\nu_{m}}{\mu_{S}}}{ 1+\frac{\nu_{m}}{\mu_{I}}}\)_,_ _then, no endemic equilibrium exists._
* _If_ \(\Lambda_{tot}<\min\{\Lambda_{tot}^{crit,3},\Lambda_{tot}^{crit,1}\}\)_, then there exists one endemic equilibrium._
* _Let_ \(\mathcal{N}\varepsilon\geq\dfrac{1+\dfrac{\nu_{m}}{\mu_{S}}}{1+\dfrac{\nu_{m}}{ \mu_{I}}}\)_._
* _If_ \(\Lambda_{tot}<\Lambda_{tot}^{crit,3}\)_, or_
* _If_ \(\Lambda_{tot}>\max\{\Lambda_{tot}^{crit,2},\Lambda_{tot}^{crit,3}\}\)_,_ _then, only one endemic equilibrium exists._
* _If_ \(\Lambda_{tot}^{crit,3}<\Lambda_{tot}<\Lambda_{tot}^{crit,2}\)_, then there exist either one or three endemic equilibria._
Proof.: See Appendix C.
### Stability analysis of the disease-free equilibria and uniform persistence
Let us set
\[\mathcal{R}^{2}_{0,TDFE}=\dfrac{B\beta_{mh}}{\nu_{h}+\mu_{h}}\dfrac{\nu_{m}}{ \left(\nu_{m}+\mu_{S}\right)\mu_{S}}\dfrac{B\beta_{hm}}{N_{h}}\dfrac{ \epsilon_{F}\Lambda_{tot}}{\mu_{S}}=\mathcal{R}^{2}_{0}\dfrac{\epsilon_{F} \Lambda_{tot}}{r\gamma A^{*}}. \tag{37}\]
A straightforward computation of the Jacobian related to system (22)-(23) at equilibrium \(TDFE\) leads to
**Theorem 4**.: _Assume \(\mathcal{N}\varepsilon<1\) and \(\Lambda_{tot}>0\). Let \(\epsilon_{F}\geq 0\) such that \(\mathcal{R}^{2}_{0,TDFE}<1\), then, the Trivial Disease-Free Equilibrium, \(TDFE\), is locally asymptotically stable, and unstable when \(\mathcal{R}^{2}_{0,TDFE}>1\)._
The previous theorem shows that, when \(\mathcal{N}\varepsilon<1\), nuisance reduction with SIT is always possible with low contamination by sterile females, as long as \(\Lambda_{tot}>0\) and the wild population is small or not yet established. When the wild population is large or established, we need further results.
Using [24, Theorem 2], the stability properties of the biological disease-free equilibrium \(DFE_{SIT_{c}}\in\{DFE_{\dagger,2,\circ},TDFE\}\) are summarized as follows.
**Theorem 5**.: _The following results hold true for system (22)-(23). Assume \(\mathcal{N}\varepsilon<1\)._
1. _Let_ \((1-\epsilon_{F})\Lambda_{tot}\in(0,\Lambda_{M}^{crit})\)__ 1. _If_ \(\mathcal{R}^{2}_{0,SIT_{c}}<1\)_, then_ \(DFE_{2}\)_, defined in Proposition_ 1_-(1), is locally asymptotically stable._ 2. _If_ \(\mathcal{R}^{2}_{0,SIT_{c}}>1\)_, then_ \(DFE_{2}\) _is unstable._
2. _Let_ \((1-\epsilon_{F})\Lambda_{tot}=\Lambda_{M}^{crit}\)_._ 1. _If_ \(\mathcal{R}^{2}_{0,SIT_{c}}<1\)_, then_ \(DFE_{\circ}\)_, defined in Proposition_ 1_-(1), is locally asymptotically stable._ 2. _If_ \(\mathcal{R}^{2}_{0,SIT_{c}}>1\)_, then_ \(DFE_{\circ}\) _is unstable._
3. _Let_ \((1-\epsilon_{F})\Lambda_{tot}>\Lambda_{M}^{crit}\)_._ 1. _If_ \(\mathcal{R}^{2}_{0,SIT_{c}}=\mathcal{R}^{2}_{0,TDFE}<1\)_, then_ \(TDFE\)_, defined in Proposition_ 1_, is globally asymptotically stable._ 2. _If_ \(\mathcal{R}^{2}_{0,SIT_{c}}=\mathcal{R}^{2}_{0,TDFE}>1\)_, then_ \(TDFE\) _is unstable._
_Assume \(\mathcal{N}\varepsilon>1\)._
1. _If_ \(\mathcal{R}^{2}_{0,SIT_{c}}<1\)_, then_ \(DFE_{\dagger}\)_, defined in Proposition_ 1_-(2), is locally asymptotically stable._
2. _If_ \(\mathcal{R}^{2}_{0,SIT_{c}}>1\)_, then_ \(DFE_{\dagger}\) _is unstable._
In fact, when the residual fertility level is low, i.e. \(\varepsilon<\dfrac{1}{\mathcal{N}}\), system (22)-(23) may exhibit bistable dynamics in the disease-free context. Indeed, based on Theorem 3 together with Theorems 4 and 5, it is straightforward to establish:
**Theorem 6**.: _Assume \(\mathcal{N}\varepsilon<1\) and \((1-\epsilon_{F})\Lambda_{tot}\in(0,\Lambda_{M}^{crit})\). If \(\mathcal{R}^{2}_{0,SIT_{c}}<1\), then equilibria \(DFE_{2}\) and \(TDFE\) are locally asymptotically stable (LAS)._
Clearly, from the two previous theorems, when contamination by sterile females is low, such that \(\mathcal{R}^{2}_{0,TDFE}<1\), we derive that:
* nuisance reduction is only possible when \(\mathcal{N}\varepsilon<1\). In particular, for an established wild population, massive sterile insect releases can drive the wild population close to \(TDFE\).
* reducing the epidemiological risk is possible whatever the values taken by \(\mathcal{N}\varepsilon\).
**Remark 9**.: _Based on a comparison argument and a limit system argument we observe the following:_
* _System (_22_)-(_23_) may undergo a bistability involving the wild insects-free boundary equilibrium, WIFE and the 'full' endemic equilibrium_ \(EE\) _when_ \(\mathcal{N}\varepsilon\leq 1\)_,_ \(\mathcal{R}^{2}_{0,TDFE}>1\) _and_ \((1-\epsilon_{F})\Lambda_{tot}\in(0,\Lambda_{M}^{crit})\)_._
* _The wild insects-free boundary equilibrium, WIFE is GAS when_ \(\mathcal{N}\varepsilon\leq 1\)_,_ \(\mathcal{R}^{2}_{0,TDFE}>1\) _and_ \((1-\epsilon_{F})\Lambda_{tot}>\Lambda_{M}^{crit}\)_._
In order to deal with the uniform persistence of system (22)-(23), we prove the following result:
**Theorem 7**.: _If \(\mathcal{N}\varepsilon>1\) and \(\mathcal{R}^{2}_{0,TDFE}>1\), then the system is uniformly persistent._
Proof.: See Appendix D.
However, the previous result does not give information on how SIT can impact \(\mathcal{R}^{2}_{0,SIT_{c}}\).
### Impact of insect releases on the SIT basic reproduction number
Now, we want to find \(\Lambda_{tot}\) and \(\epsilon_{F}\) such that the epidemiological risk is low, i.e. such that \(\mathcal{R}^{2}_{0,SIT_{c}}<1\).
As stated in Remark 7, page 12, if \(\epsilon_{F}\Lambda_{tot}\) is large, that is \(\epsilon_{F}\Lambda_{tot}>\Lambda_{F}^{crit}\), then whatever the release rate of sterile males \((1-\epsilon_{F})\Lambda_{tot}\) is, we will always have \(\mathcal{R}^{2}_{0,SIT_{c}}>1\). Hence, in the sequel, we first assume that
\[\epsilon_{F}\Lambda_{tot}<\Lambda_{F}^{crit}.\]
Moreover, following Remark 5, page 12, \(\mathcal{R}^{2}_{0,SIT_{c}}\leq\mathcal{R}^{2}_{0}\) iff \(\epsilon_{F}\Lambda_{tot}\) is sufficiently low. However, this does not necessarily imply that there exists \((1-\epsilon_{F})\Lambda_{tot}>0\) such that \(\mathcal{R}^{2}_{0,SIT_{c}}<1\). Straightforward computations lead to:
\[\left\{\begin{array}{rcl}A_{2}&=&\frac{1}{2}A^{*}\left(1-\frac{\mathcal{Q}_{S}}{\mathcal{N}-1}\right)\left(1+\sqrt{1-\frac{4\mathcal{Q}_{S}\left(1-\mathcal{N}\varepsilon\right)}{\left(\mathcal{N}-1-\mathcal{Q}_{S}\right)^{2}}}\right)>0,\,\text{when }\mathcal{N}\varepsilon\leq 1\quad\text{and}\quad(1-\epsilon_{F})\Lambda_{tot}\in(0,\Lambda_{M}^{crit})\\ A_{\circ}&=&\frac{A^{*}}{\mathcal{N}-1}\left(\sqrt{\frac{(1-\varepsilon)\,\mathcal{N}}{(1-\mathcal{N}\varepsilon)}}-1\right),\qquad\qquad\qquad\qquad\qquad\text{when }\mathcal{N}\varepsilon<1\quad\text{and}\quad(1-\epsilon_{F})\Lambda_{tot}=\Lambda_{M}^{crit}\\ A_{\dagger}&=&\frac{1}{2}A^{*}\left(1-\frac{\mathcal{Q}_{S}}{\mathcal{N}-1}+\sqrt{\left(1-\frac{\mathcal{Q}_{S}}{\mathcal{N}-1}\right)^{2}+\frac{4\mathcal{Q}_{S}\left(\mathcal{N}\varepsilon-1\right)}{\left(\mathcal{N}-1\right)^{2}}}\right)>0,\,\text{when}\quad\mathcal{N}\varepsilon>1.\end{array}\right. \tag{38}\]
Using (27), (29) and (38), we deduce that
\[\mathcal{R}^{2}_{0,SIT_{c}}=\left\{\begin{array}{ll}\frac{ \epsilon_{F}\Lambda_{tot}}{\Lambda_{F}^{crit}}+\frac{\mathcal{R}^{2}_{0}}{2} \left(1-\frac{\mathcal{Q}_{S}}{\mathcal{N}-1}\right)\left(1+\sqrt{1-\frac{4 \mathcal{Q}_{S}(1-\mathcal{N}\varepsilon)}{\left(\mathcal{N}-1-\mathcal{Q}_{S} \right)^{2}}}\right),&\text{when }\mathcal{N}\varepsilon\leq 1\quad\text{and}\quad(1- \epsilon_{F})\Lambda_{tot}\in(0,\Lambda_{M}^{crit}),\\ \\ \frac{\epsilon_{F}\Lambda_{tot}}{\Lambda_{F}^{crit}}+\frac{\mathcal{R}^{2}_{0}} {\mathcal{N}-1}\left(\sqrt{\frac{(1-\varepsilon)\mathcal{N}}{(1-\mathcal{N} \varepsilon)}}-1\right),&\text{when }\mathcal{N}\varepsilon<1\quad\text{and}\quad(1- \epsilon_{F})\Lambda_{tot}=\Lambda_{M}^{crit}\\ \\ \frac{\epsilon_{F}\Lambda_{tot}}{\Lambda_{F}^{crit}},&\text{when} \quad\mathcal{N}\varepsilon\leq 1\quad\text{and}\quad(1- \epsilon_{F})\Lambda_{tot}>\Lambda_{M}^{crit},\\ \\ \frac{\epsilon_{F}\Lambda_{tot}}{\Lambda_{F}^{crit}}+\frac{\mathcal{R}^{2}_{0}} {2}\left(1-\frac{\mathcal{Q}_{S}}{\mathcal{N}-1}+\sqrt{\left(1-\frac{ \mathcal{Q}_{S}}{\mathcal{N}-1}\right)^{2}+\frac{4\mathcal{Q}_{S}(\mathcal{N }\varepsilon-1)}{\left(\mathcal{N}-1\right)^{2}}}\right),&\text{when}\quad \mathcal{N}\varepsilon>1.\end{array}\right. \tag{39}\]
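Formula (39) is easy to evaluate numerically when exploring the release plane; the sketch below is ours and covers the first, second, and fourth branches, with \(\mathcal{Q}_{S}\) and the ratio \(\epsilon_{F}\Lambda_{tot}/\Lambda_{F}^{crit}\) supplied by the caller:

```python
import math

def R0sq_SITc(R0sq, N, eps, QS, ratio):
    """Evaluate (39). Here ratio = eps_F*Lambda_tot/Lambda_F_crit and QS is the
    dimensionless male-release term entering (38). For massive releases,
    (1-eps_F)*Lambda_tot > Lambda_M_crit with N*eps <= 1, (39) reduces to ratio."""
    Ne = N * eps
    if Ne > 1:
        root = math.sqrt((1 - QS / (N - 1))**2 + 4 * QS * (Ne - 1) / (N - 1)**2)
        return ratio + 0.5 * R0sq * (1 - QS / (N - 1) + root)
    # N*eps <= 1 and (1-eps_F)*Lambda_tot in (0, Lambda_M_crit):
    root = math.sqrt(1 - 4 * QS * (1 - Ne) / (N - 1 - QS)**2)
    return ratio + 0.5 * R0sq * (1 - QS / (N - 1)) * (1 + root)
```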
It is straightforward to obtain the following result.
**Lemma 2**.:
1. _If_ \(\epsilon_{F}\Lambda_{tot}>\Lambda_{F}^{crit}\)_, then_ \(\mathcal{R}^{2}_{0,SIT_{c}}>1\)_._
2. _Assume that_ \(\mathcal{N}\varepsilon\leq 1\) _and_ \((1-\epsilon_{F})\Lambda_{tot}>\Lambda_{M}^{crit}\)_. Then_ \(\mathcal{R}^{2}_{0,SIT_{c}}<1\) _iff_ \(0<\epsilon_{F}\Lambda_{tot}<\Lambda_{F}^{crit}\)_._
Lemma 2 reflects the fact that when the epidemiological risk is high, that is, when \(\mathcal{R}^{2}_{0}>1\), and if in addition the release rate of sterile females is large, that is \(\epsilon_{F}\Lambda_{tot}>\Lambda_{F}^{crit}\), then whatever the amount of released sterile males, the SIT will fail since we will always have \(\mathcal{R}^{2}_{0,SIT_{c}}>1\). However, massive releases of sterile males (\((1-\epsilon_{F})\Lambda_{tot}>\Lambda_{M}^{crit}\)) could be successful provided that \(\epsilon_{F}\Lambda_{tot}<\Lambda_{F}^{crit}\).
The next question to investigate is the possibility of lowering the epidemiological risk using small sterile male releases when \(\epsilon_{F}\Lambda_{tot}<\Lambda_{F}^{crit}\), and also whether there exist necessary conditions ensuring that \(\mathcal{R}^{2}_{0,SIT_{c}}<1\) when \(\mathcal{N}\varepsilon>1\).
### When \(\mathcal{N}\varepsilon<1\)
Using (39)\({}_{2}\), we define the following threshold
\[\mathcal{R}^{2}_{0,\mathcal{N}\varepsilon<1}=\frac{\mathcal{N}-1}{\frac{ \epsilon_{F}\Lambda_{tot}\mu_{A,2}}{r\gamma(\gamma+\mu_{A,1})}+\sqrt{\frac{(1- \varepsilon)\,\mathcal{N}}{(1-\mathcal{N}\varepsilon)}}-1}. \tag{40}\]
We derive the following result
**Theorem 8**.: _Assume \(0\leq\epsilon_{F}\Lambda_{tot}<\Lambda_{F}^{crit}\). Consider system (22)-(23) and set_
\[\Lambda_{M,\mathcal{R}^{2}_{0},\varepsilon}^{*}=\frac{\mu_{M_{S}}\left( \mathcal{N}-1\right)}{Q}\left(1-\frac{\mathcal{R}^{4}_{0}(1-\mathcal{N} \varepsilon)+\left(\mathcal{N}-1\right)\left(1-\frac{\epsilon_{F}\Lambda_{tot }}{\Lambda_{F}^{crit}}\right)^{2}}{\mathcal{R}^{4}_{0}(1-\mathcal{N} \varepsilon)+\mathcal{R}^{2}_{0}\left(\mathcal{N}-1\right)\left(1-\frac{ \epsilon_{F}\Lambda_{tot}}{\Lambda_{F}^{crit}}\right)}\right). \tag{41}\]
1. _If_ \(\mathcal{R}^{2}_{0}\geq\mathcal{R}^{2}_{0,\mathcal{N}\varepsilon<1}\)_, then the following results hold true:_ * _When_ \((1-\epsilon_{F})\Lambda_{tot}>\Lambda_{M}^{crit}\)_, the equilibrium_ \(TDFE\) _is globally asymptotically stable._ * _When_ \((1-\epsilon_{F})\Lambda_{tot}\leq\Lambda_{M}^{crit}\)_, then_ \(\mathcal{R}^{2}_{0,SIT_{c}}>1\) _and SIT fails._
2. _If_ \(1<\mathcal{R}^{2}_{0}<\mathcal{R}^{2}_{0,\mathcal{N}\varepsilon<1}\)_, then the following results hold true:_ * _When_ \((1-\epsilon_{F})\Lambda_{tot}>\Lambda_{M}^{crit}\)_, the equilibrium_ \(TDFE\) _is globally asymptotically stable._ * _When_ \((1-\epsilon_{F})\Lambda_{tot}=\Lambda_{M}^{crit}\)_, then_ \(\mathcal{R}^{2}_{0,SIT_{c}}<1\)_,_ \(DFE_{\circ}\) _and TDFE are locally asymptotically stable. The set_ \[\{(S,I,R,A,M,F_{W,S},F_{W,E},F_{W,I},S_{S},S_{E},S_{I})^{T}\in\mathbb{R}^{11}_{+}:(A,M,F_{W,S})^{T}<(A_{\circ},M_{\circ},F_{W,S_{\circ}})^{T}\}\]
belongs to the basin of attraction of TDFE while the set_ \[\{(S,I,R,A,M,F_{W,S},F_{W,E},F_{W,I},S_{S},S_{E},S_{I})^{T}\in\mathbb{R}_{+}^{11}: (A,M,F_{W,S})^{T}\geq(A_{\circ},M_{\circ},F_{W,S_{\circ}})^{T}\}\] _belongs to the basin of attraction of_ \(DFE_{\circ}\)_._
* _when_ \((1-\epsilon_{F})\Lambda_{tot}>\Lambda_{M,\mathcal{R}_{0}^{2},\varepsilon}^{*}\)_, then_ \(\mathcal{R}_{0,SIT_{c}}^{2}<1\)_, and the equilibria_ \(DFE_{2}\) _and_ \(TDFE\) _are locally asymptotically stable. Moreover, the set_ \[\{(S,I,R,A,M,F_{W,S},F_{W,E},F_{W,I},S_{S},S_{E},S_{I})^{T}\in\mathbb{R}_{+}^{1 1}:(A,M,F_{W,S})^{T}<(A_{1},M_{1},F_{W,S_{1}})^{T}\}\] _belongs to the basin of attraction of TDFE while the set_ \[\{(S,I,R,A,M,F_{W,S},F_{W,E},F_{W,I},S_{S},S_{E},S_{I})^{T}\in\mathbb{R}_{+}^{1 1}:(A,M,F_{W,S})^{T}>(A_{1},M_{1},F_{W,S_{1}})^{T}\}\] _belongs to the basin of attraction of_ \(DFE_{2}\)_._
Proof.: We follow the same methodology used in [7, Theorem 6] to derive (41). Then, the results follow from Theorem 6, page 16.
**Remark 10**.: _Of course, when \(\varepsilon=0\), we recover the results obtained in [7]._
Clearly, the constraint on the release size given by (41) can be strong, i.e. close to \(\Lambda_{M}^{crit}\), so that it seems preferable to use massive releases, i.e. \((1-\epsilon_{F})\Lambda_{tot}>\Lambda_{M}^{crit}\).
In that case, the strategy developed in [1, 3], using massive and then small releases, can be adequate to reduce the epidemiological risk and maintain this risk at a lower level.
Thus, in terms of vector control: when \(\mathcal{R}_{0}^{2}\leq 1\), vector control is not necessary; when \(\mathcal{R}_{0}^{2}>1\) and \(0\leq\epsilon_{F}\Lambda_{tot}<\Lambda_{F}^{crit}\), then two cases should be considered:
* when \(\mathcal{R}_{0}^{2}\geq\mathcal{R}_{0,\mathcal{N}\varepsilon<1}^{2}\), then massive releases of sterile insect, i.e. \((1-\epsilon_{F})\Lambda_{tot}>\Lambda_{M}^{crit}\), should be advocated.
* When \(\mathcal{R}_{0}^{2}<\mathcal{R}_{0,\mathcal{N}\varepsilon<1}^{2}\), small, but large enough (\(\Lambda_{M,\mathcal{R}_{0}^{2},\varepsilon}^{*}<(1-\epsilon_{F})\Lambda_{tot}\leq\Lambda_{M}^{crit}\)), releases of sterile insects could be useful to control the disease. However, since \(\Lambda_{M,\mathcal{R}_{0}^{2},\varepsilon}^{*}\) is close to \(\Lambda_{M}^{crit}\), from a practical point of view, it is preferable to consider massive releases of sterile insects too (a numerical sketch of the thresholds (40) and (41) is given below).
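The two quantities appearing in this discussion, (40) and (41), can be evaluated as follows (a sketch, ours; valid for \(\mathcal{N}\varepsilon<1\), with all rates supplied with the units of Table 1):

```python
import math

def R0sq_Ne_lt_1(N, eps, eps_F, Lambda_tot, r, gamma, mu_A1, mu_A2):
    # Equation (40): threshold on R_0^2 separating the two release regimes.
    a = eps_F * Lambda_tot * mu_A2 / (r * gamma * (gamma + mu_A1))
    return (N - 1) / (a + math.sqrt((1 - eps) * N / (1 - N * eps)) - 1)

def Lambda_M_star(N, eps, R0sq, ratio, mu_MS, Q):
    # Equation (41); ratio = eps_F*Lambda_tot/Lambda_F_crit.
    num = R0sq**2 * (1 - N * eps) + (N - 1) * (1 - ratio)**2
    den = R0sq**2 * (1 - N * eps) + R0sq * (N - 1) * (1 - ratio)
    return mu_MS * (N - 1) / Q * (1 - num / den)
```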
When \(\mathcal{N}\varepsilon\leq 1\), we summarize all qualitative results of system (22)-(23) related to the disease-free equilibria in Table 2, page 18.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|}
\hline
\(\mathcal{N}\) & \(\mathcal{R}_{0}^{2}\) & \(\epsilon_{F}\Lambda_{tot}\) & \(\mathcal{R}_{0}^{2}\) vs \(\mathcal{R}_{0,\mathcal{N}\varepsilon<1}^{2}\) & \((1-\epsilon_{F})\Lambda_{tot}\) & Observations \\
\hline
\(\leq 1\) & & & & & \(TDFE\) is GAS \\
\hline
\(>1\) & \(\leq 1\) & & & & Releases of sterile insects are useless because the \(DFE\) is already GAS \\
\cline{2-6}
 & \(>1\) & \(\geq\Lambda_{F}^{crit}\) & & & Even massive releases could not be efficient to reduce the epidemiological risk: \(\mathcal{R}_{0,SIT_{c}}^{2}>1\); WIFE and/or EE are/is LAS \\
\cline{3-6}
 & & \(<\Lambda_{F}^{crit}\) & \(\geq\mathcal{R}_{0,\mathcal{N}\varepsilon<1}^{2}\) & \(>\Lambda_{M}^{crit}\) & \(TDFE\) is GAS \\
\cline{5-6}
 & & & & \(\leq\Lambda_{M}^{crit}\) & SIT fails since \(\mathcal{R}_{0,SIT_{c}}^{2}>1\) \\
\cline{4-6}
 & & & \(<\mathcal{R}_{0,\mathcal{N}\varepsilon<1}^{2}\) & \(>\Lambda_{M}^{crit}\) & \(TDFE\) is GAS \\
\cline{5-6}
 & & & & \(=\Lambda_{M}^{crit}\) & \(\mathcal{R}_{0,SIT_{c}}^{2}<1\): \(TDFE\) and \(DFE_{\circ}\) are both stable \\
\cline{5-6}
 & & & & \(>\Lambda_{M,\mathcal{R}_{0}^{2},\varepsilon}^{*}\) & \(\mathcal{R}_{0,SIT_{c}}^{2}<1\): \(TDFE\) and \(DFE_{2}\) are both stable \\
\hline
\end{tabular}
\end{table}
Table 2: Summary table of the qualitative analysis of system (22)-(23) when \(\mathcal{N}\varepsilon\leq 1\).
### The case where \(\mathcal{N}\varepsilon>1\)
We want to determine if, for a given \(\epsilon_{F}\Lambda_{tot}<\Lambda_{F}^{crit}\), there exists \(\Lambda_{M,\mathcal{N}\varepsilon>1}^{crit}\) such that for all \((1-\epsilon_{F})\Lambda_{tot}>\Lambda_{M,\mathcal{N}\varepsilon>1}^{crit}\), we always have \(\mathcal{R}_{0,SIT_{c}}^{2}<1\). Conversely, for a given \(\Lambda_{tot}\), is it possible to find a rate \(\epsilon_{F}\) such that \(\mathcal{R}_{0,SIT_{c}}^{2}<1\)?
Assuming \(\mathcal{R}_{0}^{2}>1\), \(0\leq\epsilon_{F}\Lambda_{tot}<\Lambda_{F}^{crit}\), and using (39)\({}_{4}\), we have the following:
\(\bullet\) Assume that \(\frac{\epsilon_{F}\Lambda_{tot}}{\Lambda_{F}^{crit}}+\frac{\mathcal{R}_{0}^{2 }}{2}\left(1-\frac{\mathcal{Q}_{S}}{\mathcal{N}-1}\right)\geq 1\) or equivalently \(\left(1-\frac{\epsilon_{F}\Lambda_{tot}}{\Lambda_{F}^{crit}}\right)\frac{2}{ \mathcal{R}_{0}^{2}}-\left(1-\frac{\mathcal{Q}_{S}}{\mathcal{N}-1}\right)\leq 0\). Then it holds
\[\mathcal{R}_{0,SIT_{c}}^{2}>1.\]
Note also that
\[\left(1-\frac{\epsilon_{F}\Lambda_{tot}}{\Lambda_{F}^{crit}}\right)\frac{2}{ \mathcal{R}_{0}^{2}}-\left(1-\frac{\mathcal{Q}_{S}}{\mathcal{N}-1}\right)\leq 0 \Longleftrightarrow(1-\epsilon_{F})\Lambda_{tot}\leq\frac{\mu_{M_{S}}}{ \mathcal{Q}}(\mathcal{N}-1)\left(1-\frac{2\left(1-\frac{\epsilon_{F}\Lambda_ {tot}}{\Lambda_{F}^{crit}}\right)}{\mathcal{R}_{0}^{2}}\right):=\Lambda_{M}^{ crit,\sharp}.\]
\(\bullet\) Assume that \(\frac{\epsilon_{F}\Lambda_{tot}}{\Lambda_{F}^{crit}}+\frac{\mathcal{R}_{0}^{2}}{2}\left(1-\frac{\mathcal{Q}_{S}}{\mathcal{N}-1}\right)<1\) or equivalently \(\left(1-\frac{\epsilon_{F}\Lambda_{tot}}{\Lambda_{F}^{crit}}\right)\frac{2}{ \mathcal{R}_{0}^{2}}-\left(1-\frac{\mathcal{Q}_{S}}{\mathcal{N}-1}\right)>0\) or equivalently
\[(1-\epsilon_{F})\Lambda_{tot}>(\mathcal{N}-1)\frac{\mu_{M_{S}}}{\mathcal{Q}} \left(1-\frac{2\left(1-\frac{\epsilon_{F}\Lambda_{tot}}{\Lambda_{F}^{crit}} \right)}{\mathcal{R}_{0}^{2}}\right):=\Lambda_{M}^{crit,\sharp}.\]
Let us set
\[\mathcal{R}_{0,\mathcal{N}\varepsilon>1}^{2}=\frac{(\mathcal{N}-1)}{(\mathcal{ N}\varepsilon-1)}\left(1-\frac{\epsilon_{F}\Lambda_{tot}}{\Lambda_{F}^{crit}} \right).\]
Then we have
\[\mathcal{R}_{0,SIT_{c}}^{2}>1 \Longleftrightarrow \frac{\epsilon_{F}\Lambda_{tot}}{\Lambda_{F}^{crit}}+\frac{\mathcal{R}_{0}^{2}}{2}\left(1-\frac{\mathcal{Q}_{S}}{\mathcal{N}-1}+\sqrt{\left(1-\frac{\mathcal{Q}_{S}}{\mathcal{N}-1}\right)^{2}+\frac{4\mathcal{Q}_{S}(\mathcal{N}\varepsilon-1)}{\left(\mathcal{N}-1\right)^{2}}}\right)>1,\] \[\Longleftrightarrow 1-\frac{\mathcal{Q}_{S}}{\mathcal{N}-1}+\sqrt{\left(1-\frac{\mathcal{Q}_{S}}{\mathcal{N}-1}\right)^{2}+\frac{4\mathcal{Q}_{S}(\mathcal{N}\varepsilon-1)}{\left(\mathcal{N}-1\right)^{2}}}>\left(1-\frac{\epsilon_{F}\Lambda_{tot}}{\Lambda_{F}^{crit}}\right)\frac{2}{\mathcal{R}_{0}^{2}},\] \[\Longleftrightarrow \sqrt{\left(1-\frac{\mathcal{Q}_{S}}{\mathcal{N}-1}\right)^{2}+\frac{4\mathcal{Q}_{S}(\mathcal{N}\varepsilon-1)}{\left(\mathcal{N}-1\right)^{2}}}>\left(1-\frac{\epsilon_{F}\Lambda_{tot}}{\Lambda_{F}^{crit}}\right)\frac{2}{\mathcal{R}_{0}^{2}}-\left(1-\frac{\mathcal{Q}_{S}}{\mathcal{N}-1}\right),\] \[\Longleftrightarrow \mathcal{Q}_{S}\left(\frac{\mathcal{N}\varepsilon-1}{(\mathcal{N}-1)^{2}}-\frac{1-\frac{\epsilon_{F}\Lambda_{tot}}{\Lambda_{F}^{crit}}}{\mathcal{R}_{0}^{2}(\mathcal{N}-1)}\right)>\frac{1-\frac{\epsilon_{F}\Lambda_{tot}}{\Lambda_{F}^{crit}}}{\mathcal{R}_{0}^{2}}\left(\frac{1-\frac{\epsilon_{F}\Lambda_{tot}}{\Lambda_{F}^{crit}}}{\mathcal{R}_{0}^{2}}-1\right),\] \[\Longleftrightarrow \mathcal{Q}_{S}\left(\frac{\mathcal{N}\varepsilon-1}{\mathcal{N}-1}\frac{\mathcal{R}_{0}^{2}}{1-\frac{\epsilon_{F}\Lambda_{tot}}{\Lambda_{F}^{crit}}}-1\right)>(\mathcal{N}-1)\left(\frac{1-\frac{\epsilon_{F}\Lambda_{tot}}{\Lambda_{F}^{crit}}}{\mathcal{R}_{0}^{2}}-1\right),\] \[\Longleftrightarrow \mathcal{Q}_{S}\left(1-\frac{\mathcal{R}_{0}^{2}}{\mathcal{R}_{0,\mathcal{N}\varepsilon>1}^{2}}\right)<(\mathcal{N}-1)\left(1-\frac{1-\frac{\epsilon_{F}\Lambda_{tot}}{\Lambda_{F}^{crit}}}{\mathcal{R}_{0}^{2}}\right).\]
Thus, we deduce the following two cases:
* If \(\mathcal{R}_{0}^{2}>\mathcal{R}_{0,\mathcal{N}\varepsilon>1}^{2}\), then \(\mathcal{R}_{0,SIT_{c}}^{2}>1\) for all \((1-\epsilon_{F})\Lambda_{tot}>0\).
* If \(\mathcal{R}_{0}^{2}<\mathcal{R}_{0,\mathcal{N}\varepsilon>1}^{2}\), then we set \[\Lambda_{M,\mathcal{N}\varepsilon>1}^{crit}=\frac{\mu_{M_{S}}}{\mathcal{Q}} \frac{(\mathcal{N}-1)\left(1-\frac{1-\frac{\epsilon_{F}\Lambda_{tot}}{\Lambda_{ F}^{crit}}}{\mathcal{R}_{0}^{2}}\right)}{\left(1-\frac{\mathcal{R}_{0}^{2}}{ \mathcal{R}_{0,\mathcal{N}\varepsilon>1}^{2}}\right)}\] and we have \[\left\{\begin{array}{l}\mathcal{R}_{0,SIT_{c}}^{2}>1\Longleftrightarrow(1- \epsilon_{F})\Lambda_{tot}<\Lambda_{M,\mathcal{N}\varepsilon>1}^{crit},\\ \mathcal{R}_{0,SIT_{c}}^{2}<1\Longleftrightarrow(1-\epsilon_{F})\Lambda_{tot} >\Lambda_{M,\mathcal{N}\varepsilon>1}^{crit}.\end{array}\right.\]
To summarize the previous discussion, when \(\mathcal{N}\varepsilon>1\), we have three configurations:
1. When \((1-\epsilon_{F})\Lambda_{tot}\leq\Lambda_{M}^{crit,\sharp}\) or \(((1-\epsilon_{F})\Lambda_{tot}>\Lambda_{M}^{crit,\sharp}\) and \(\mathcal{R}_{0}^{2}>\max(1,\mathcal{R}_{0,\mathcal{N}\varepsilon>1}^{2}))\), then \(\mathcal{R}_{0,SIT_{c}}^{2}>1\).
2. When \(1<\mathcal{R}_{0}^{2}<\mathcal{R}_{0,\mathcal{N}\varepsilon>1}^{2}\) and \((1-\epsilon_{F})\Lambda_{tot}>\max(\Lambda_{M,\mathcal{N}\varepsilon>1}^{crit},\Lambda_{M}^{crit,\sharp})\), then \(\mathcal{R}_{0,SIT_{c}}^{2}<1\).
3. When \(1<\mathcal{R}_{0}^{2}<\mathcal{R}_{0,\mathcal{N}\varepsilon>1}^{2}\) and \(\Lambda_{M}^{crit,\sharp}<(1-\epsilon_{F})\Lambda_{tot}<\Lambda_{M,\mathcal{N}\varepsilon>1}^{crit}\), then \(\mathcal{R}_{0,SIT_{c}}^{2}>1\).
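These three configurations can be checked numerically; a sketch (ours), with ratio standing for \(\epsilon_{F}\Lambda_{tot}/\Lambda_{F}^{crit}\) and male_release for \((1-\epsilon_{F})\Lambda_{tot}\):

```python
def sit_configuration(R0sq, N, eps, ratio, mu_MS, Q, male_release):
    # Classification of the three configurations above (N*eps > 1, R0^2 > 1 assumed).
    assert N * eps > 1 and R0sq > 1 and 0 <= ratio < 1
    R0sq_thr = (N - 1) / (N * eps - 1) * (1 - ratio)                # R^2_{0, N eps > 1}
    lam_sharp = mu_MS * (N - 1) / Q * (1 - 2 * (1 - ratio) / R0sq)  # Lambda_M^{crit, sharp}
    if R0sq >= R0sq_thr or male_release <= lam_sharp:
        return "R0_SITc^2 > 1: SIT fails"                           # configuration 1
    lam_crit = (mu_MS * (N - 1) / Q * (1 - (1 - ratio) / R0sq)
                / (1 - R0sq / R0sq_thr))                            # Lambda_M^{crit, N eps > 1}
    if male_release > max(lam_crit, lam_sharp):
        return "R0_SITc^2 < 1"                                      # configuration 2
    return "R0_SITc^2 > 1"                                          # configuration 3
```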
We therefore summarize all qualitative results of system (22)-(23) related to the disease-free equilibria in Table 3, page 20.
## 4 Numerical simulations
### Sensitivity analysis
It is interesting to study the impact of parameter changes on the dynamics of our systems, and to find which parameters the outputs are the most sensitive to. In Figs 2, 3, 4 and 5, we provide an LHS-PRCC sensitivity analysis, where LHS stands for Latin Hypercube Sampling and PRCC for Partial Rank Correlation Coefficient. The LHS-PRCC method mainly provides information about how the outputs are impacted if we increase (or decrease) the value of a specific input parameter. The analysis is done on the time interval [800,1000]. The results are ordered from the most negative to the most positive ones. We derive an LHS-PRCC analysis for the variable \(F\) from the entomological model, and the variables \(S_{I}\), \(F_{W,I}\) and \(I_{h}\) from the epidemiological model. It is instructive to compare the impact of the parameters across the considered variables. In Fig. 2, the parameters \(\phi\), \(\varepsilon\), \(\mu_{M_{S}}\) and \(\mu_{A,1}\) are the parameters to which the female variable, related to the entomological model (7), is the most sensitive. The infected sterile female variable, \(S_{I}\), is mostly sensitive to \(\mu_{M_{S}}\), \(\nu_{h}\), \(\mu_{A,2}\), \(\epsilon_{F}\), \(\Lambda_{tot}\), and \(B\). A similar trend is observed in Fig. 4, when dealing with the wild infected female variable, \(F_{W,I}\), except that \(\beta_{hm}\) and \(\phi\) are now the main parameters, while \(\epsilon_{F}\) and \(\Lambda_{tot}\) are not. The residual fertility parameter, \(\varepsilon\), also has almost no effect. Finally, the infected human variable, \(I_{h}\), is mostly sensitive to the parameters \(\mu_{M_{S}}\), \(\mu_{A,1}\), \(\varepsilon\) and \(\phi\) (see also Fig. 5).
We can notice that the two parameters of interest throughout this work, \(\varepsilon\) and \(\epsilon_{F}\), have a strong impact on \(F\), \(I_{h}\), and \(S_{I}\).
For all PRCC analyses, we used the PCC function (R software [18]) with 1000 bootstrap replicates, and a probability level of 0.95 for the (bootstrap) confidence intervals.
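For completeness, the two ingredients of the method, Latin Hypercube Sampling and the partial rank correlation coefficient, can also be sketched in Python (ours; the figures themselves were produced with the R function cited above, and the sampling bounds below are placeholders):

```python
import numpy as np
from scipy.stats import qmc, rankdata

def prcc(X, y):
    # Rank-transform inputs and output, then correlate the residuals obtained
    # after regressing out the other (rank-transformed) inputs.
    R = np.column_stack([rankdata(c) for c in X.T])
    ry = rankdata(y)
    coeffs = []
    for j in range(R.shape[1]):
        Z = np.column_stack([np.ones(len(ry)), np.delete(R, j, axis=1)])
        rx_res = R[:, j] - Z @ np.linalg.lstsq(Z, R[:, j], rcond=None)[0]
        ry_res = ry - Z @ np.linalg.lstsq(Z, ry, rcond=None)[0]
        coeffs.append(np.corrcoef(rx_res, ry_res)[0, 1])
    return np.array(coeffs)

# Latin Hypercube Sample over, e.g., (phi, eps, eps_F).
U = qmc.LatinHypercube(d=3, seed=0).random(n=1000)
X = qmc.scale(U, l_bounds=[5.0, 0.0, 0.0], u_bounds=[15.0, 0.05, 0.05])
# y would be a model output, e.g. the mean of F_{W,I} over t in [800, 1000].
```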
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|}
\hline
\(\mathcal{N}\) & \(\mathcal{R}_{0}^{2}\) & \(\epsilon_{F}\Lambda_{tot}\) & \(\mathcal{R}_{0}^{2}\) vs \(\mathcal{R}_{0,\mathcal{N}\varepsilon>1}^{2}\) & \((1-\epsilon_{F})\Lambda_{tot}\) & Observations \\
\hline
\(>1\) & \(\leq 1\) & & & & Releases of sterile insects are useless because the \(DFE\) is already GAS \\
\cline{2-6}
 & \(>1\) & \(\geq\Lambda_{F}^{crit}\) & & & Even massive releases could not be efficient to reduce the epidemiological risk \\
\cline{3-6}
 & & \(<\Lambda_{F}^{crit}\) & \(\geq\mathcal{R}_{0,\mathcal{N}\varepsilon>1}^{2}\) & & SIT fails since \(\mathcal{R}_{0,SIT_{c}}^{2}>1\) \\
\cline{4-6}
 & & & \(<\mathcal{R}_{0,\mathcal{N}\varepsilon>1}^{2}\) & \(>\max\{\Lambda_{M,\mathcal{N}\varepsilon>1}^{crit},\Lambda_{M}^{crit,\sharp}\}\) & \(\mathcal{R}_{0,SIT_{c}}^{2}<1\), \(DFE_{\dagger}\) is LAS \\
\cline{5-6}
 & & & & \(<\max\{\Lambda_{M}^{crit,\sharp},\Lambda_{M,\mathcal{N}\varepsilon>1}^{crit}\}\) & SIT fails since \(\mathcal{R}_{0,SIT_{c}}^{2}>1\) \\
\hline
\end{tabular}
\end{table}
Table 3: Summary table of the qualitative analysis of system (22)-(23) when \(\mathcal{N}\varepsilon>1\).
### Simulations
All forthcoming numerical simulations are done using the ode23 solver of Matlab [15]. Results are obtained in a couple of seconds.
As in [7], we will consider the effective reproduction number \(\mathcal{R}_{eff}(t)\) for all times \(t>0\). Indeed, SIT control is a long-term strategy, and the starting time of the SIT treatment matters relative to the starting time of the risky period from the epidemiological point of view, that is, when the Dengue virus starts to circulate, \(t_{DENV}\). That is why it is important to consider the effective reproduction number \(\mathcal{R}_{eff}(t)\), defined as follows:
\[\mathcal{R}_{eff}(t)=\frac{\nu_{m}}{\nu_{m}+\mu_{S}}\frac{B^{2}\beta_{mh} \beta_{hm}}{\mu_{I}\left(\nu_{h}+\mu_{h}\right)}\frac{F_{W,S}(t)+S_{S}(t)}{N_{ h}}. \tag{42}\]
In particular, we will estimate \(\mathcal{R}_{eff}\) at time \(t_{DENV}\). Clearly, if \(\mathcal{R}_{eff}(t_{DENV})<1\) and \(\mathcal{R}_{0,SIT_{c}}^{2}<1\), then no epidemics will occur. On the contrary, if \(\mathcal{R}_{0,SIT_{c}}^{2}<1\) but \(\mathcal{R}_{eff}(t_{DENV})>1\), then an outbreak will occur.
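Along a numerical solution, (42) is evaluated pointwise; a sketch (ours), reusing the sol and p objects from the sketch following system (23):

```python
def R_eff(FWS, SS, p):
    # Equation (42): effective reproduction number along a trajectory.
    return (p['nu_m'] / (p['nu_m'] + p['mu_S'])
            * p['B']**2 * p['beta_mh'] * p['beta_hm']
            / (p['mu_I'] * (p['nu_h'] + p['mu_h']))
            * (FWS + SS) / p['Nh'])

Reff_t = R_eff(sol.y[5], sol.y[8], p)  # F_{W,S} and S_S are states 5 and 8
```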
We consider the parameter values defined in Table 1, page 5. For these values we derive \(\mathcal{N}\approx 86.75\). This is a high value but meaningful since we have considered the "best" case for the mosquito dynamics, i.e. the most difficult case in terms of control. For the epidemiological parameters, at a mean temperature of \(T=25^{\circ}C\), we find that \(\mathcal{R}_{0}^{2}\approx 7.298\), which is quite a large value.
Then, according to formula (29) and the parameter values, the critical sterile female release rate, \(\Lambda_{F}^{crit}\), is around 391.
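A sketch of this computation (ours; Table 1's rates are not reproduced here, so the function below only encodes formula (29)):

```python
def Lambda_F_crit(r, gamma, mu_A1, mu_A2, N, R0sq):
    # Equation (29): Lambda_F^crit = r*gamma*(gamma + mu_A1)*(N - 1) / (mu_A2 * R0^2).
    return r * gamma * (gamma + mu_A1) * (N - 1) / (mu_A2 * R0sq)
```

With the Table 1 rates, \(\mathcal{N}\approx 86.75\) and \(\mathcal{R}_{0}^{2}\approx 7.298\), this evaluates to the value of about 391 quoted above.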
We provide simulations with several combinations of values for \(\epsilon_{F}\), from 0% to 3%, and \(\varepsilon\), from 0% to 2%.
Since \(\Lambda_{tot}\) varies from 0 to 20000, then, according to \(\epsilon_{F}\), \(\Lambda_{F}\) varies from 0 to 400 when \(\epsilon_{F}=0.01\), from 0 to 800 when \(\epsilon_{F}=0.02\), and from 0 to 1200 when \(\epsilon_{F}=0.03\). Thus, in the forthcoming simulations, for sufficiently large values of \(\Lambda_{tot}\), we will have \(\Lambda_{F}>\Lambda_{F}^{crit}\).
In Tables 4 and 5, we illustrate some of the cases given in Tables 2 and 3. Clearly, when \(\mathcal{N}\varepsilon>1\) (see Table 5), it is more difficult to control the epidemiological risk, even with a release rate just above the critical threshold and such that \(\epsilon_{F}\Lambda_{tot}\ll\Lambda_{F}^{crit}\). On the contrary, when \(\mathcal{N}\varepsilon<1\), epidemiological control is easier to reach, even with a substantial increase of the contamination by sterile females: see Table 4. These results are also supported by the forthcoming simulations.
In Figs. 6 and 7, page 25, we consider the case where there is no contamination by sterile females, with \(\varepsilon\) such that \(\varepsilon\mathcal{N}<1\) and \(\varepsilon\mathcal{N}>1\), that is, \(\varepsilon=0\) and \(\varepsilon=0.02\), respectively. Roughly speaking, it is easy to observe
Figure 2: LHS-PRCC Sensitivity analysis of the Entomological model - Wild Females
that residual fertility has less impact on the release rate needed to bring \(\mathcal{R}_{eff}\) below 0.5. When \(\varepsilon\mathcal{N}>1\), it is not possible to lower the wild population under any given small threshold, to reduce the nuisance for instance, but it is still possible to reduce the epidemiological risk, at least when no female contamination occurs.
From Fig. 8, page 26, to Fig. 14, page 29, we consider contamination by sterile females with a residual fertility varying from 1% to 2%, in order to consider both cases \(\mathcal{N}\varepsilon<1\) and \(\mathcal{N}\varepsilon>1\). It is interesting to notice that the shape of the level sets changes according to \(\epsilon_{F}\): when \(\epsilon_{F}\) increases, the area where \(\mathcal{R}_{eff}<0.5\) shrinks. In fact, when \(\epsilon_{F}\) is large, say 2% or 3%, very massive releases are such that \(\epsilon_{F}\Lambda_{tot}>\Lambda_{F}^{crit}\), which implies \(\mathcal{R}_{0,TDFE}^{2}>1\) and \(\mathcal{R}_{eff}>1\): see Figs. 11, 13, and 14. This simulation clearly shows that increasing the release rate is not the right response, whether \(\varepsilon\mathcal{N}\) is less than or greater than 1, when SIT is used to reduce the epidemiological risk. Clearly, as long as the female contamination is large, increasing the release rate will take the sterile females close to the release rate threshold, \(\Lambda_{F}^{crit}\), such that
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \(\epsilon_{F}\) & 0 & 0.01 & 0.02 & 0.03 & 0.05 \\ \hline \hline \((1-\epsilon_{F})\Lambda_{tot}\) & 3700 & 3663 & 3626 & 3589 & 3515 \\ \hline \(\epsilon_{F}\Lambda_{tot}\) & 0 & 37 & 74 & 111 & 185 \\ \hline \(\mathcal{R}_{0,\mathcal{N}\varepsilon<1}^{2}\) & 3.51 & 3.314 & 3.143 & 2.99 & 2.72 \\ \hline \(\mathcal{R}_{0,SIT_{c},W}^{2}\) & 0 & 0 & 0.422 & 0.527 & 0.701 \\ \hline \(\mathcal{R}_{0,SIT_{c},S}^{2}\) & 0 & 0.095 & 0.189 & 0.284 & 0.406 \\ \hline \(\mathcal{R}_{0,SIT_{c}}^{2}\) & 0 & 0.095 & 0.61 & 0.81 & 1.17 \\ \hline \end{tabular}
\end{table}
Table 4: Threshold values to lower the epidemiological risk for DENV when \(\varepsilon=0.01\), such that \(\mathcal{N}\varepsilon<1\), \(\Lambda_{M}^{crit}=3653\), \(\Lambda_{F}^{crit}=391\), and \(\mathcal{R}_{0}^{2}>\mathcal{R}_{0,\mathcal{N}\varepsilon<1}^{2}\).
Figure 3: LHS-PRCC Sensitivity analysis of the Epidemiological model - Infected Sterile Females
\(\mathcal{R}_{eff}>1\). Note also that our simulations show that an optimal release rate exists for a given, sufficiently large, SIT starting time.
Mechanical control is clearly beneficial to reduce the time needed to bring \(\mathcal{R}_{eff}\) below 0.5, and also the (optimal) release rate: compare Figs. 13 and 15, page 29, where the time needed for \(\mathcal{R}_{eff}\) to reach 0.5 before DENV starts to circulate decreases from 500 days, with \(\Lambda_{opt}\approx 6000\), to only 300 days, with \(\Lambda_{opt}\approx 4000\). Compare also Figs. 11 and 14 with Figs. 16 and 17.
In fact, when \(\mathcal{N}\varepsilon>1\), a serious problem occurs when contamination by sterile females increases without mechanical control: see Fig. 14, page 29. As seen, it is no longer possible to bring \(\mathcal{R}_{eff}\) below 0.5 and,
Figure 4: LHS-PRCC Sensitivity analysis of the Epidemiological model - Infected Wild Females
as explained before, very massive releases can be such that \(\mathcal{R}_{eff}>1\). In that case, SIT cannot be used to control the epidemiological risk, at least without mechanical control. In Fig. 17, page 30, mechanical control makes it possible to lower the time needed to reduce \(\mathcal{R}_{eff}\), but does not really increase the maximal release rate such that \(\mathcal{R}_{eff}<1\).
Altogether, our numerical simulations show that the first parameter to lower is \(\varepsilon\), the residual fertility. However, even with a low residual fertility, say 1%, contamination by sterile females should be contained: compare Fig. 9, page 26, with Fig. 10, page 27.
## 5 Conclusion
Conducting SIT programs in the field is a very complex and difficult task. To be successful, several steps have to be checked in the laboratory and in semi-field conditions, before and during field releases. In fact, it is better to find and solve issues before starting field releases: to this aim, quality control is an essential process within SIT programs. However, SIT programs against mosquitoes can fail, and this is in general due to a combination of several factors, among them residual male fertility and contamination by sterile females, which do not always seem to be studied as deeply as they should be. Indeed, sometimes (numerical) upper bound values are given for these parameters, but they rely neither on biological parameters related to the targeted vectors nor on epidemiological parameters when epidemiological control is the main objective. We aim to fill this gap.
Thus, using modelling and mathematical analysis, we provide threshold parameters for residual male fertility and contamination by sterile females. We also show that these thresholds impose constraints that SIT programs have to meet; if they are not met, the risk of SIT failure is high.
Our results could be useful for field experts to estimate the risk of SIT failure and, thus, to
Figure 5: LHS-PRCC Sensitivity analysis of the Epidemiological model - Infected Humans
target the main parameters to improve before field releases and to follow carefully throughout the SIT process.
Theoretically, we show that while residual fertility can be an issue for controlling the wild population, i.e. for lowering it under a given threshold to reduce the nuisance, it is not when it comes to controlling the epidemiological risk. In other words, when \(\varepsilon\mathcal{N}<1\), both nuisance reduction and epidemiological risk reduction are feasible
Figure 6: \(\mathcal{R}_{eff}(t_{I})\) vs the starting time without sterile female contamination, without residual fertility
Figure 7: \(\mathcal{R}_{eff}(t_{I})\) vs the starting time and the level of the control, without contamination by sterile females, and with 2% of residual fertility, without Mechanical control
as long as the sterile female contamination is low, that is \(\epsilon_{F}\Lambda_{tot}<\Lambda_{F}^{crit}\). In contrast, when \(\varepsilon\mathcal{N}>1\), only epidemiological risk reduction is feasible, but under rather severe constraints, that is \(\epsilon_{F}\Lambda_{tot}<\Lambda_{F}^{crit}\) and \(\mathcal{R}_{0}^{2}<\mathcal{R}_{0,\mathcal{N}\varepsilon>1}^{2}\), with releases that are sufficiently massive.
In fact, whenever the condition \(\varepsilon\mathcal{N}<1\) is not met, we strongly encourage the SIT program to solve this issue before going further.
Finally, in several SIT reports/manuals or SIT papers [10], a percentage is given for the maximal contamination by sterile females. We show that this percentage is useless since the maximal amount of sterile
Figure 8: \(\mathcal{R}_{eff}(t_{I})\) vs the starting time and the level of the control with 1% of contamination by sterile females, 0% of residual fertility, and without Mechanical control
Figure 9: \(\mathcal{R}_{eff}(t_{I})\) vs the starting time and the level of the control with 1% of contamination by sterile females, 1% of residual fertility, and without Mechanical control
females allowed to be released will depend on the size of the total release. Indeed, one does not release the same amount of sterile females when considering 1% of 10000 or 1% of 20000 sterile insects: in the first case, \(\Lambda_{F}<\Lambda_{F}^{crit}\), while in the second case, \(\Lambda_{F}>\Lambda_{F}^{crit}\), such that the dynamics of the whole system, and hence the impact of SIT, are completely different.
Figure 11: \(\mathcal{R}_{eff}(t_{I})\) vs the starting time and the level of the control with 3% of contamination by sterile females, 1% of residual fertility, and without Mechanical control
Figure 10: \(\mathcal{R}_{eff}(t_{I})\) vs the starting time and the level of the control with 2% of contamination by sterile females, 1% of residual fertility, and without Mechanical control
To conclude, our study shows that both contamination by sterile females, \(\epsilon_{F}\Lambda_{tot}\), and residual male fertility, \(\varepsilon\), matter in the efficiency of SIT. We provide upper bounds for these values that guarantee the efficiency of SIT, both for nuisance and epidemiological risk reduction.
Of course, several improvements are possible, like considering impulsive releases, as in [7]. In addition, other quality control tests could be taken into account in future SIT models in order to provide more realistic results, and eventually, when possible, variable parameters could be considered, as in [27], to take into account temporal and spatial variations of the environmental parameters that can affect the dynamics of the
Figure 12: \(\mathcal{R}_{eff}(t_{I})\) vs the starting time and the level of the control with 1% of contamination by sterile females, 2% of residual fertility, and without Mechanical control
Figure 13: \(\mathcal{R}_{eff}(t_{I})\) vs the starting time and the level of the control with 2% of contamination by sterile females, 2% of residual fertility, and without Mechanical control
vectors and thus their control. Last, migration could also be taken into account [5].
Acknowledgments: YD is (partially) supported by the DST/NRF SARChI Chair in Mathematical Models and Methods in Biosciences and Bioengineering at the University of Pretoria, South Africa (Grant 82770). YD acknowledges the support of the Conseil Régional de La Réunion (France), the Conseil Départemental
Figure 14: \(\mathcal{R}_{eff}(t_{I})\) vs the starting time and the level of the control with 3% of contamination by sterile females, 2% of residual fertility, and without Mechanical control
Figure 15: \(\mathcal{R}_{eff}(t_{I})\) vs the starting time and the level of the control with 2% of contamination by sterile females, 2% of residual fertility, and 40% of Mechanical control
de La Réunion (France), the European Agricultural Fund for Rural Development (EAFRD), and the Centre de Coopération Internationale en Recherche Agronomique pour le Développement (CIRAD), France.
Figure 16: \(\mathcal{R}_{eff}(t_{I})\) vs the starting time and the level of the control with 3% of contamination by sterile females, 1% of residual fertility, and 40% of Mechanical control
Figure 17: \(\mathcal{R}_{eff}(t_{I})\) vs the starting time and the level of the control with 3% of contamination by sterile females, 2% of residual fertility, and 40% of Mechanical control |
2305.18672 | Universality theorems of the Selberg zeta functions for arithmetic
groups | After Voronin proved the universality theorem of the Riemann zeta function in
the 1970s, universality theorems have been proposed for various zeta and
L-functions. Drungilas-Garunkstis-Kacenas' work at 2013 on the universality
theorem of the Selberg zeta function for the modular group is one of them and
is probably the first universality theorem of the zeta function of order
greater than one. Recently, Mishou (2021) extended it by proving the joint
universality theorem for the principal congruence subgroups. In the present
paper, we further extend these works by proving the (joint) universality
theorem for subgroups of the modular group and co-compact arithmetic groups
derived from indefinite quaternion algebras, which is available in the region
wider than the regions in the previous two works. | Yasufumi Hashimoto | 2023-05-30T01:07:46Z | http://arxiv.org/abs/2305.18672v1 | # Universality theorems of the Selberg zeta functions
###### Abstract
After Voronin proved the universality theorem of the Riemann zeta function in the 1970s, universality theorems have been proposed for various zeta and L-functions. Drungilas-Garunkstis-Kacenas' work in 2013 on the universality theorem of the Selberg zeta function for the modular group is one of them and is probably the first universality theorem for a zeta function of order greater than one. Recently, Mishou (2021) extended it by proving the joint universality theorem for the principal congruence subgroups. In the present paper, we further extend these works by proving the (joint) universality theorem for subgroups of the modular group and for co-compact arithmetic groups derived from indefinite quaternion algebras, which holds in a region wider than the regions in the previous two works.
Footnote †: MSC: primary: 11M36; secondary: 11F72
## 1 Introduction
### Universality theorem
The value distributions of the zeta functions in the critical strips are, in general, much more complicated than those in the regions of absolute convergence. One of the earliest remarkable works providing insight into the value distribution of the Riemann zeta function
\[\zeta(s)=\prod_{p}(1-p^{-s})^{-1},\qquad\mathrm{Re}s>1\]
was Bohr-Courant's result [5] in the 1910s that the set \(\{\zeta(\sigma+it)\in\mathbb{C}\}_{t\in\mathbb{R}}\) for \(\frac{1}{2}<\sigma\leq 1\) is dense in \(\mathbb{C}\). In the 1970s, Voronin [29] made a breakthrough by proving the following theorem, called the _universality theorem_.
**Theorem 1.1**.: _(Voronin [29], 1975) Let \(0<r<1/4\) and suppose that \(f(s)\) is a non-vanishing analytic function in the interior of the disc \(|s|\leq r\) and is continuous up to the boundary of this disc. Then, for any \(\epsilon>0\), we have_
\[\liminf_{T\to\infty}\frac{1}{T}\mu\left\{\tau\in[0,T]\ \bigg{|}\ \max_{|s|<r}\bigg{|}\zeta \left(s+\frac{3}{4}+i\tau\right)-f(s)\bigg{|}<\epsilon\right\}>0,\]
_where \(\mu\) is the Lebesgue measure on \(\mathbb{R}\)._
Note that the region \(|s|\leq r\) in the theorem above was improved to a compact subset \(K\) of the strip \(\{1/2<\mathrm{Re}s<1\}\) with connected complement [2, 3]. After that, universality theorems have been proved for various zeta and \(L\)-functions (see, e.g. [19, 21, 27] for the progress of the studies of value distributions and universality theorems). In 2013, Drungilas-Garunkstis-Kacenas [7] proved the universality theorem of the Selberg zeta function associated with the modular group, which is, to the best of our knowledge, the first universality theorem of the zeta function of order greater than one.
### Universality theorem of the Selberg zeta function
Let \(H:=\{x+y\sqrt{-1}\mid x,y\in\mathbb{R},y>0\}\) be the upper half plane and \(\Gamma\) a discrete subgroup of \(\mathrm{SL}_{2}(\mathbb{R})\) with \(\mathrm{vol}(\Gamma\backslash H)<\infty\). Denote by \(\mathrm{Prim}(\Gamma)\) the set of primitive hyperbolic conjugacy classes of \(\Gamma\) and \(N(\gamma)\) the square of the larger eigenvalue of \(\gamma\). The Selberg zeta function for \(\Gamma\) is defined by
\[Z_{\Gamma}(s):=\prod_{\gamma\in\mathrm{Prim}(\Gamma),n\geq 0}(1-N(\gamma)^{-s-n}), \qquad\mathrm{Re}s>1.\]
It is well known that \(Z_{\Gamma}(s)\) is analytically continued to the whole complex plane as a meromorphic function of order \(2\) and its singular points in \(\{1/2<\mathrm{Re}s\leq 1\}\) are only a simple zero at \(s=1\) and at most a finite number of zeros on the real line (see, e.g. [12]). The following analogue of the prime number theorem, called the prime geodesic theorem, holds for any \(\eta>0\).
\[\begin{split}\pi_{\Gamma}(x):=&\#\{\gamma\in \mathrm{Prim}(\Gamma)\mid N(\gamma)<x\}\\ =&\mathrm{li}(x)+\sum_{\frac{1}{2}<\rho<1}\mathrm{li} (x^{\rho})+O_{\eta,\Gamma}\left(x^{\frac{3}{4}+\eta}\right),\quad\text{as} \quad x\to\infty,\end{split} \tag{1.1}\]
where \(\mathrm{li}(x):=\int_{2}^{x}(\log t)^{-1}dt\) and \(\{\frac{1}{2}<\rho<1\}\) is the set of at most a finite number of zeros of \(Z_{\Gamma}(s)\) on the real line. Note that the exponent \(3/4\) of the error term of the prime geodesic theorem above was improved to \(\frac{25}{36}\) for the modular group and the principal congruence subgroups [26, 4, 6], and to \(\frac{7}{10}\) for the congruence subgroups of the modular group and co-compact arithmetic groups derived from indefinite quaternion algebras [20, 16]. We also note that, if the Lindelof conjecture holds for the Dirichlet \(L\)-function, it can be improved to \(2/3\) for the modular group and the principal congruence subgroups.
At 2013, Drungilas-Garunkstis-Kacenas [7] proved the following universality theorem of \(Z_{\Gamma}(s)\) associated with the modular group.
**Theorem 1.2**.: _(Drungilas-Garunkstis-Kacenas [7], 2013) Let \(\frac{1}{2}<\alpha<1\) be the exponent of the error term of the prime geodesic theorem for \(\mathrm{SL}_{2}(\mathbb{Z})\), i.e. \(\alpha\) is the constant satisfying_
\[\pi_{\mathrm{SL}_{2}(\mathbb{Z})}(x)-\mathrm{li}(x)\ll_{\eta}x^{\alpha+\eta}, \quad\text{as}\quad x\to\infty\]
_for any \(\eta>0\), \(K\) a compact subset of the strip \(\{\frac{\alpha+1}{2}<\mathrm{Re}s<1\}\) with connected complement, and \(f(s)\) a non-vanishing function which is continuous in \(K\) and analytic in the interior of \(K\). Then, for any \(\epsilon>0\), we have_
\[\liminf_{T\to\infty}\frac{1}{T}\mu\left\{\tau\in[0,T]\ \Big{|}\ \max_{s\in K} \big{|}Z_{\mathrm{SL}_{2}(\mathbb{Z})}(s+i\tau)-f(s)\big{|}<\epsilon\right\}>0.\]
Note that the range \(\{\frac{\alpha+1}{2}<\mathrm{Re}s<1\}\) in the universality theorem above was given by Landau's formula for square integrals of the Dirichlet series (see Section 233-226 in [18]). Since \(\alpha\leq\frac{25}{36}\) holds for \(\Gamma=\mathrm{SL}_{2}(\mathbb{Z})\) [26, 4], we see that the universality theorem above holds in \(\{\frac{61}{72}<\mathrm{Re}s<1\}\). We also note that it will be improved to \(\{\frac{5}{6}<\mathrm{Re}s<1\}\) if the Lindelof conjecture for the Dirichlet \(L\)-function holds. Recently, Mishou [22] extended Theorem 1.2 by proving the _joint universality theorem_ for the principal congruence subgroups
\[\bar{\Gamma}(N):=\left\{\gamma\in\mathrm{SL}_{2}(\mathbb{Z})\;\Big{|}\;\gamma \equiv\pm\begin{pmatrix}1&\\ &1\end{pmatrix}\bmod N\right\}\]
of the modular group.
**Theorem 1.3**.: _(Mishou [22], 2021) Let \(r\geq 1\) be an integer, \(N_{0}=1\) and \(N_{1},\ldots,N_{r}\geq 3\) the integers relatively prime to each other. Denote by \(\frac{1}{2}<\alpha<1\) the exponent of the prime geodesic theorem for the principal congruence subgroup of \(\mathrm{SL}_{2}(\mathbb{Z})\), i.e. \(\alpha\) is the constant satisfying_
\[\pi_{\Gamma}(x)-\mathrm{li}(x)\ll_{\Gamma,\eta}x^{\alpha+\eta},\quad\text{as} \quad x\to\infty\]
_for any \(\eta>0\) and any principal congruence subgroup \(\Gamma\) of \(\mathrm{SL}_{2}(\mathbb{Z})\). For \(0\leq j\leq r\), let \(K_{j}\) be a compact subset of the strip \(\{\frac{\alpha+1}{2}<\mathrm{Re}s<1\}\) with connected complement and \(f_{j}(s)\) a non-vanishing function which is continuous in \(K_{j}\) and analytic in the interior of \(K_{j}\). Then, for any \(\epsilon>0\), we have_
\[\liminf_{T\to\infty}\frac{1}{T}\mu\left\{\tau\in[0,T]\;\Big{|}\;\max_{0\leq j \leq r}\max_{s\in K_{j}}\Bigl{|}Z_{\bar{\Gamma}(N_{j})}(s+i\tau)-f_{j}(s) \Bigr{|}<\epsilon\right\}>0.\]
Note that, since \(\alpha\leq\frac{25}{36}\) holds for the principal congruence subgroups [6], the range that the joint universality theorem above holds is also \(\{\frac{61}{72}<\mathrm{Re}s<1\}\).
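For concreteness, the numerical ranges \(\{\frac{61}{72}<\mathrm{Re}s<1\}\) and \(\{\frac{5}{6}<\mathrm{Re}s<1\}\) quoted in this subsection are just the lower endpoint \(\frac{\alpha+1}{2}\) evaluated at the two exponents \(\alpha=\frac{25}{36}\) and \(\alpha=\frac{2}{3}\):

\[\frac{1}{2}\left(\frac{25}{36}+1\right)=\frac{61}{72},\qquad\frac{1}{2}\left(\frac{2}{3}+1\right)=\frac{5}{6}.\]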
### Main results
In the present paper, we study the universality of the Selberg zeta function for subgroups of the modular group \(\mathrm{SL}_{2}(\mathbb{Z})\) and the quaternion groups defined as follows.
Let \(a,b\) be square-free and relatively prime positive integers and \(B:=\mathbb{Q}+\mathbb{Q}\alpha+\mathbb{Q}\beta+\mathbb{Q}\alpha\beta\) the quaternion algebra over \(\mathbb{Q}\) with \(\alpha^{2}=a\), \(\beta^{2}=b\), \(\alpha\beta=-\beta\alpha\). Suppose that \(B\) is a division algebra and fix a maximal order \(\mathcal{O}\) of \(B\). Then the group \(\mathcal{O}^{1}\) consisting of \(q_{0}+q_{1}\alpha+q_{2}\beta+q_{3}\alpha\beta\in\mathcal{O}\) with \(q_{0}^{2}-q_{1}^{2}a-q_{2}^{2}b+q_{3}^{2}ab=1\) can be identified with a co-compact discrete subgroup \(\Gamma=\Gamma_{\mathcal{O}}\) of \(\mathrm{SL}_{2}(\mathbb{R})\) by the map
\[q_{0}+q_{1}\alpha+q_{2}\beta+q_{3}\alpha\beta\mapsto\begin{pmatrix}q_{0}+q_{1} \sqrt{a}&q_{2}\sqrt{b}+q_{3}\sqrt{ab}\\ q_{2}\sqrt{b}-q_{3}\sqrt{ab}&q_{0}-q_{1}\sqrt{a}\end{pmatrix}.\]
The main theorem in the present paper is the following universality theorem of the Selberg zeta function.
**Theorem 1.4**.: _Let \(\Gamma\) be a (not necessarily congruence) subgroup of the modular group \(\mathrm{SL}_{2}(\mathbb{Z})\) or a co-compact arithmetic group \(\Gamma_{\mathcal{O}}\) of finite index, \(K\) a compact subset of the strip \(\{\frac{5}{6}<\mathrm{Re}s<1\}\)
_with connected complement and \(f(s)\) a non-vanishing function which is continuous in \(K\) and analytic in the interior of \(K\). Then, for any \(\epsilon>0\), we have_
\[\liminf_{T\to\infty}\frac{1}{T}\mu\left\{\tau\in[0,T]\;\Big{|}\;\max_{s\in K}|Z_ {\Gamma}(s+i\tau)-f(s)|<\epsilon\right\}>0.\]
Note that the universality theorem above is available in \(\{\frac{5}{6}<\mathrm{Re}s<1\}\) without the condition \(\alpha\leq\frac{2}{3}\) for the corresponding \(\Gamma\). To improve it, we use the upper bound (Lemma 3.1 in [11] and Lemma 3.2 in this paper) of the number \(m_{\Gamma}(n)\), which is almost the same as the number of \(\gamma\in\mathrm{Prim}(\Gamma)\) with the same \(N(\gamma)\). Since the multiplicity is more informative than the error term of the prime geodesic theorem, we can obtain a better region than those in the previous works.
We also prove the following joint universality theorem as an improvement of Theorem 1.3.
**Theorem 1.5**.: _Let \(r\geq 1\) be an integer and \(\Gamma_{1},\ldots,\Gamma_{r}\) congruence subgroups of \(\mathrm{SL}_{2}(\mathbb{Z})\). Suppose that_
\[\hat{T}_{j}:=\{n\in\mathrm{Tr}\Gamma_{j}\;|\;n\not\in\mathrm{Tr}\Gamma_{i}\text { for }1\leq i\leq j-1\}\neq\emptyset \tag{1.2}\]
_for \(1\leq j\leq r\), where \(\mathrm{Tr}\Gamma:=\{\mathrm{tr}\gamma>2\;|\;\gamma\in\mathrm{Prim}(\Gamma)\}\). Let \(K_{j}\) be a compact subset of the strip \(\{\frac{5}{6}<\mathrm{Re}s<1\}\) with connected complement and \(f_{j}(s)\) a non-vanishing function continuous in \(K_{j}\) and analytic in the interior of \(K_{j}\). Then, for any \(\epsilon>0\), we have_
\[\liminf_{T\to\infty}\frac{1}{T}\mu\left\{\tau\in[0,T]\;\Big{|}\;\max_{1\leq j \leq r}\max_{s\in K_{j}}\big{|}Z_{\Gamma_{j}}(s+i\tau)-f_{j}(s)\big{|}< \epsilon\right\}>0.\]
It is easy to see that, for an integer \(N\geq 2\), \(\mathrm{Tr}\bar{\Gamma}(N)=\{n\in\mathbb{Z}\;|\;n\geq 3,n\equiv\pm 2\bmod N^{2}\}\) and \(\mathrm{Tr}\mathrm{SL}_{2}(\mathbb{Z})=\{n\in\mathbb{Z}\;|\;n\geq 3\}\). Then \(\Gamma_{1}=\bar{\Gamma}(N_{1})\),..., \(\Gamma_{r}=\bar{\Gamma}(N_{r})\) and \(\Gamma_{r+1}=\mathrm{SL}_{2}(\mathbb{Z})\) satisfy the condition (1.2) in Theorem 1.5. This means that Theorem 1.5 is an improvement of Theorem 1.3. Checking whether the condition (1.2) holds for given congruence subgroups \(\Gamma_{1},\ldots,\Gamma_{r}\) is not difficult since the sets \(\mathrm{Tr}\Gamma_{1},\ldots,\mathrm{Tr}\Gamma_{r}\) of traces are described by arithmetic progressions. Remark that whether the condition (1.2) holds sometimes depends on the ordering of \(\Gamma_{1},\ldots,\Gamma_{r}\). For example, when \(\Gamma_{1}=\bar{\Gamma}(3)\) and \(\Gamma_{2}=\mathrm{SL}_{2}(\mathbb{Z})\), we have \(\hat{T}_{1}=\mathrm{Tr}\Gamma_{1}=\{n\geq 3,n\equiv\pm 2\bmod 9\}\) and \(\hat{T}_{2}=\mathrm{Tr}\Gamma_{2}\backslash\mathrm{Tr}\Gamma_{1}=\{n\geq 3,n\not\equiv\pm 2\bmod 9\}\), so the condition (1.2) holds. However, when \(\Gamma_{1}=\mathrm{SL}_{2}(\mathbb{Z})\) and \(\Gamma_{2}=\bar{\Gamma}(3)\), we have \(\hat{T}_{1}=\{n\geq 3\}\) and \(\hat{T}_{2}=\emptyset\). Thus, we should be careful about the ordering of \(\Gamma_{1},\ldots,\Gamma_{r}\) when checking whether the joint universality holds by the condition (1.2). We also remark that there are pairs of subgroups of \(\mathrm{SL}_{2}(\mathbb{Z})\) which have the same trace set [25, 17]. For example, \(\Gamma_{1}=\bar{\Gamma}_{1}(p^{2})=\left\{\gamma\equiv\pm\begin{pmatrix}1&*\\ &1\end{pmatrix}\bmod p^{2}\right\}\) and \(\Gamma_{2}=\bar{\Gamma}(p)\) satisfy \(\mathrm{Tr}\Gamma_{1}=\mathrm{Tr}\Gamma_{2}=\{n\geq 3,n\equiv\pm 2\bmod p^{2}\}\). In such cases, the condition (1.2) does not hold, and we should use a different approach to checking joint universality. A small computational check of this ordering dependence is sketched below.
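To make the remark above concrete, the following minimal sketch (in Python; all names are illustrative, not from the paper) checks condition (1.2) for trace sets given by residue descriptions like the ones above. A bounded search can only certify that some \(\hat{T}_{j}\) is non-empty up to the chosen bound, so a negative answer is a strong hint rather than a proof.

```python
# Hypothetical helpers: trace sets represented as predicates.  For the
# principal congruence subgroup Gamma(N): Tr = {n >= 3 : n = +-2 mod N^2}.
def tr_gamma_bar(N):
    return lambda n: n >= 3 and n % N**2 in (2 % N**2, -2 % N**2)

def tr_sl2z():
    # Tr(SL_2(Z)) = {n >= 3}
    return lambda n: n >= 3

def condition_1_2_holds(trace_sets, bound=10**4):
    """Check (1.2): for each j there is some n <= bound lying in
    Tr(Gamma_j) but in none of Tr(Gamma_1), ..., Tr(Gamma_{j-1})."""
    for j, tr_j in enumerate(trace_sets):
        earlier = trace_sets[:j]
        if not any(tr_j(n) and not any(t(n) for t in earlier)
                   for n in range(3, bound)):
            return False
    return True

# The ordering matters, exactly as remarked above:
print(condition_1_2_holds([tr_gamma_bar(3), tr_sl2z()]))  # True
print(condition_1_2_holds([tr_sl2z(), tr_gamma_bar(3)]))  # False
```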
## 2 Generalized Dirichlet series
Drungilas-Garunkstis-Kacenas [7] proposed several propositions and lemmas to approximate analytic functions by generalized Dirichlet series. In this section, we state some of them with minor modifications.
Let \(\Lambda=\{\lambda\}\) be a monotone increasing sequence of positive real numbers tending to infinity, \(\{a_{\lambda}\}_{\lambda\in\Lambda}\subset\mathbb{C}\) and define
\[N(x):=\sum_{\lambda\in\Lambda,\lambda<x}|a_{\lambda}|.\]
We say that the series
\[\sum_{\lambda\in\Lambda}\frac{a_{\lambda}}{e^{\lambda s}} \tag{2.1}\]
satisfies the _packing condition_ if
\[\left|N\left(x\pm\frac{c}{x^{2}}\right)-N(x)\right|\gg_{c,\eta}e^{(1-\eta)x}, \quad\text{as}\quad x\to\infty \tag{2.2}\]
holds for any \(c>0\) and \(\eta>0\). The following proposition was given in [7] to approximate analytic functions by the Dirichlet series associated with \(\Lambda\) and \(\{a_{\lambda}\}\).
**Proposition 2.1**.: _(Proposition 2.3 in [7] and Proposition 4 in [22]) Let \(\Lambda\) and \(\{a_{\lambda}\}\) be as above. Suppose that the Dirichlet series (2.1) satisfies the packing condition (2.2). Let \(\frac{1}{2}<\sigma_{1}<\sigma_{2}<1\), \(K\) a compact subset of \(\{\sigma_{1}<\mathrm{Re}s<\sigma_{2}\}\) with connected complement and \(g(s)\) a non-vanishing function which is continuous on \(K\) and analytic in the interior of \(K\). Then, for any \(Q>0\), there exist a constant \(Y_{0}>0\) depending on \(\sigma_{1},\sigma_{2},K,g,\mu\) and a sequence \(\{\theta_{\lambda}\}_{\lambda\in\Lambda}\subset[0,1)\) satisfying_
\[\max_{s\in K}\left|g(s)-\sum_{\lambda\in\Lambda,Q<e^{\lambda}\leq Y}\frac{a_{ \lambda}e(\theta_{\lambda})}{e^{\lambda s}}\right|\ll\sum_{\lambda\in\Lambda, Q<e^{\lambda}\leq Y}\frac{|a_{\lambda}|^{2}}{e^{2\lambda\sigma_{1}}},\]
_for any \(Y>Y_{0}\), where \(e(x):=e^{2\pi ix}\) and the implied constant depends only on \(\sigma_{1},\sigma_{2},K\) and \(\Lambda\)._
Next, we state the following proposition, which is a minor modification of Proposition 2.8 and Lemma 2.9 in [7] (see also Proposition 5 in [22]).
**Proposition 2.2**.: _Let \(\alpha>0\) and let \(\Lambda\subset\mathbb{R}_{>0}\) be a monotone increasing sequence, tending to infinity and linearly independent over \(\mathbb{Q}\). Suppose that the series_
\[\sum_{\lambda\in\Lambda}\frac{|a_{\lambda}|^{2}}{e^{2\lambda\sigma}}\]
_converges for \(\sigma>\alpha\). Then the following (1) and (2) hold. (1) For a given series \(\{\theta_{\lambda}\}_{\lambda\in\Lambda}\subset[0,1)\), a finite subset \(\Lambda_{1}\) of \(\Lambda\) and \(0<\delta<1/2\), let_
\[S_{T}=S_{T}(\delta,\Lambda_{1}):=\left\{\tau\in[0,T]\ \Big{|}\ \left\|-\frac{ \tau\lambda}{2\pi}-\theta_{\lambda}\right\|<\delta\quad\text{for any }\lambda\in\Lambda_{1} \right\},\]
_where \(\|x\|\) is the distance from the nearest integer to \(x\). Then we have_
\[\lim_{T\to\infty}\frac{\mu(S_{T})}{T}=(2\delta)^{\#\Lambda_{1}}.\]
_(2) Let \(Q>0\) be a large number, \(\Lambda_{2}=\Lambda_{2}(Q)\) a finite subset of \(\Lambda\) with \(\Lambda_{1}\cap\Lambda_{2}=\emptyset\), \(\min\limits_{\lambda\in\Lambda_{2}}e^{\lambda}\geq Q\) and \(K\) a compact subset of \(\{\alpha<\mathrm{Re}s<1\}\). Denote by \(S_{T}^{\prime}=S_{T}^{\prime}(K,\Lambda_{2},Q)\) the set of \(\tau\in S_{T}\) satisfying_
\[\max\limits_{s\in K}\left|\sum\limits_{\lambda\in\Lambda_{2}(Q)}\frac{a_{ \lambda}}{e^{\lambda(s+i\tau)}}\right|<\left(\sum\limits_{\lambda\in\Lambda_{2} (Q)}\frac{|a_{\lambda}|^{2}}{e^{2\lambda\sigma_{1}}}\right)^{1/4},\]
_where \(\alpha<\sigma_{1}<\max\limits_{s\in K}\mathrm{Re}s\). Then, for any \(0<\beta<1\), there exists \(Q>0\) such that_
\[\lim\limits_{T\rightarrow\infty}\frac{\mu(S_{T}^{\prime})}{T}>(1-\beta)\lim \limits_{T\rightarrow\infty}\frac{\mu(S_{T})}{T}. \tag{2.3}\]
We use the following lemmas in the proof of the proposition above.
**Lemma 2.3**.: _(Lemma 2.5 in [9] and Lemma 2.5 in [7]) Let \(K\) be a compact subset of a bounded rectangle \(U\), \(d:=\min\limits_{z\in\partial U}\min\limits_{s\in K}|s-z|\) and \(f(s)\) an analytic function in \(U\). Then, for a given \(\epsilon>0\), we have_
\[\iint_{U}|f(s)|^{2}ds\leq\epsilon\quad\Rightarrow\quad\max\limits_{s\in K}|f( s)|\leq\frac{1}{d}\sqrt{\frac{\epsilon}{\pi}}.\]
**Lemma 2.4**.: _(§8 of Appendix in [15] and Lemma 2.10 in [7]) Let \(w:\mathbb{R}\rightarrow\mathbb{R}^{N}\) be a curve and suppose that \(\{w(t)\mid t\in\mathbb{R}\}\subset\mathbb{R}^{N}\) is uniformly distributed modulo \(1\) in \(\mathbb{R}\). Denote by \(D\) a closed and Jordan measurable subset of the unit cube in \(\mathbb{R}^{N}\) and by \(\Omega\) a family of complex-valued continuous functions on \(D\). If \(\Omega\) is uniformly bounded and equi-continuous, then_
\[\lim\limits_{T\rightarrow\infty}\frac{1}{T}\int_{0}^{T}f(\{w(t)\})1_{D}(t)dt =\int_{D}f(x_{1},\ldots,x_{N})dx_{1}\cdots dx_{N}\]
_uniformly with respect to \(f\in\Omega\), where \(\{w(t)\}:=(w_{1}(t)-[w_{1}(t)],\ldots,w_{N}(t)-[w_{N}(t)])\) for \(w(t)=(w_{1}(t),\ldots,w_{N}(t))\) and \(1_{D}(t)=1\) if \(w(t)\in D\) modulo \(1\) and \(1_{D}(t)=0\) otherwise._
**Proof of Proposition 2.2.** (1) See Lemma 2.9 in [7] and Proposition 5 in [22].
(2) Let \(U\) be a bounded rectangle with \(K\subset U\subset\{\alpha<\mathrm{Re}s<\sigma_{1}(<1)\}\) and \(d:=\min\limits_{z\in\partial U}\min\limits_{s\in K}|s-z|\). We study the integral
\[\frac{1}{T}\int_{S_{T}}\int_{U}\left|\sum\limits_{\lambda\in\Lambda_{2}(Q)} \frac{a_{\lambda}}{e^{\lambda(s+i\tau)}}\right|^{2}dsd\tau=\int_{U}\frac{1}{ T}\int_{S_{T}}\left|\sum\limits_{\lambda\in\Lambda_{2}(Q)}\frac{a_{\lambda}}{e^{ \lambda(s+i\tau)}}\right|^{2}d\tau ds \tag{2.4}\]
by using Lemma 2.4.
Let \(N_{1}:=\#\Lambda_{1}\),\(N_{2}:=\#\Lambda_{2}\) and \(N:=N_{1}+N_{2}\). Denote by \(\Lambda_{1}=\{\lambda_{1},\ldots,\lambda_{N_{1}}\}\), \(\Lambda_{2}=\{\lambda_{N_{1}+1},\ldots,\lambda_{N}\}\) and put \(w(t):=\left(\frac{\lambda_{1}}{2\pi}t,\ldots,\frac{\lambda_{N}}{2\pi}t\right).\) For a series \(\{\theta_{\lambda}\}_{\lambda\in\Lambda}\subset[0,1)\) and \(0<\delta<1/2\)
let
\[R_{1}= \left\{(y_{1},\ldots,y_{N_{1}})\in[0,1]^{N_{1}}\ \big{|}\ \big{\|}y_{j}- \theta_{\lambda_{j}}\big{\|}<\delta\quad(1\leq j\leq N_{1})\right\},\] \[R_{2}= \left\{(y_{1},\ldots,y_{N})\in[0,1]^{N}\ \big{|}\ \begin{array}{ll} \big{\|}y_{j}-\theta_{\lambda_{j}}\big{\|}<\delta\quad(1\leq j\leq N_{1}),\\ \|y_{j}-1/2\|<1/2\quad(N_{1}+1\leq j\leq N)\end{array}\right\}.\]
It is clear that \(\mu(R_{1})=\mu(R_{2})=(2\delta)^{N_{1}}\). According to Lemma 2.4, we have
\[\lim_{T\to\infty}\frac{1}{T}\int_{S_{T}}\left|\sum_{\lambda\in \Lambda_{2}(Q)}\frac{a_{\lambda}}{e^{\lambda(s+i\tau)}}\ \right|^{2}d\tau= \lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}\left|\sum_{\lambda\in \Lambda_{2}(Q)}\frac{a_{\lambda}}{e^{\lambda(s+i\tau)}}\right|^{2}1_{R_{2}}( \tau)d\tau\] \[= \int_{R_{2}}\left|\sum_{N_{1}+1\leq j\leq N}\frac{a_{\lambda_{j} }}{e^{\lambda_{j}s}}e\left(\lambda_{j}y_{j}\right)\right|^{2}dy_{1}\cdots dy_{N}\] \[= \mu(R_{1})\int_{0}^{1}\cdots\int_{0}^{1}\left|\sum_{N_{1}+1\leq j \leq N}\frac{a_{\lambda_{j}}}{e^{\lambda_{j}s}}e\left(\lambda_{j}y_{j}\right) \right|^{2}dy_{N_{1}+1}\cdots dy_{N}\] \[\leq \mu(R_{1})\sum_{\lambda\in\Lambda_{2}(Q)}\frac{|a_{\lambda}|^{2} }{e^{2\lambda\sigma_{1}}}\leq\mu(R_{1})\sum_{\lambda\in\Lambda,e^{\lambda}\geq Q }\frac{|a_{\lambda}|^{2}}{e^{2\lambda\sigma_{1}}}.\]
Then the integral (2.4) is bounded by
\[\frac{1}{T}\int_{S_{T}}\int_{U}\left|\sum_{\lambda\in\Lambda_{2}(Q)}\frac{a_{ \lambda}}{e^{\lambda(s+i\tau)}}\right|^{2}dsd\tau\leq\mu(U)\frac{\mu(S_{T})}{ T}\sum_{\lambda\in\Lambda,e^{\lambda}\geq Q}\frac{|a_{\lambda}|^{2}}{e^{2 \lambda\sigma_{1}}}+o(1),\quad\text{as}\quad T\to\infty. \tag{2.5}\]
Since the sum \(\sum_{e^{\lambda}\geq Q}(...)\) in the inequality above tends to zero as \(Q\to\infty\), we see that, for any \(0<\beta<1\), there exists \(Q>0\) such that
\[\mu\left\{\tau\in S_{T}\ \bigg{|}\ \int_{U}\left|\sum_{\lambda\in\Lambda_{2}(Q)} \frac{a_{\lambda}}{e^{\lambda(s+i\tau)}}\right|^{2}ds<d^{2}\pi\left(\sum_{ \lambda\in\Lambda,e^{\lambda}\geq Q}\frac{|a_{\lambda}|^{2}}{e^{2\lambda\sigma_ {1}}}\right)^{1/2}\right\}>(1-\beta)\mu(S_{T}).\]
Thus (2) of Proposition 2.2 follows immediately from Lemma 2.3.
## 3 Explicit formula for the Selberg zeta function
In this section, we study the logarithm \(\log Z_{\Gamma}(s)\) of the Selberg zeta function, whose branch is chosen such that \(\log Z_{\Gamma}(s)\to 0\) as \(\mathrm{Re}s\to\infty\). It is easy to see that
\[\log Z_{\Gamma}(s)=-\sum_{\gamma\in\mathrm{Prim}(\Gamma),j\geq 1}\frac{1}{j(1-N( \gamma)^{-j})}N(\gamma)^{-js}\]
for \(\mathrm{Re}s>1\). For \(\frac{1}{2}<\mathrm{Re}s\leq 1\), it has the following expression as a sum over \(\gamma\in\mathrm{Prim}(\Gamma)\).
**Lemma 3.1**.: _Let \(\Gamma\) be a discrete subgroup of \(\mathrm{SL}_{2}(\mathbb{R})\) with \(\mathrm{vol}(\Gamma\backslash H)<\infty\), \(x>0\) and \(s=\sigma+iT\in\mathbb{C}\) with \(\frac{1}{2}<\sigma<1\), \(T\geq 1\). Set_
\[\psi_{\Gamma,s}(x):=\sum_{\begin{subarray}{c}\gamma\in\mathrm{Prim}(\Gamma),j \geq 1\\ N(\gamma)^{j}<x\end{subarray}}\frac{1}{j(1-N(\gamma)^{-j})}\left(1-\frac{N( \gamma)^{j}}{x}\right)N(\gamma)^{-js}.\]
_Then we have_
\[\log Z_{\Gamma}(s)= -\psi_{\Gamma,s}(x)+O_{\eta,\Gamma}\left(T^{-2}x^{1-\sigma}+T^{1+ \eta}x^{1/2-\sigma+\eta}\right)\quad\text{as}\quad T,x\to\infty\]
_for any \(\eta>0\)._
Proof.: According to Proposition 2.1 in [11], we have
\[\frac{Z_{\Gamma}^{\prime}(s)}{Z_{\Gamma}(s)}= \sum_{\begin{subarray}{c}\gamma\in\mathrm{Prim}(\Gamma),j\geq 1\\ N(\gamma)^{j}<x\end{subarray}}\frac{\log N(\gamma)}{1-N(\gamma)^{-j}}\left(1- \frac{N(\gamma)^{j}}{x}\right)N(\gamma)^{-js}\] \[-\sum_{\frac{1}{2}<\rho\leq 1}\frac{x^{\rho-s}}{(\rho-s)(1+ \rho-s)}+O_{\eta,\Gamma}\left(T^{1+\eta}x^{\frac{1}{2}-\sigma+\eta} \right),\quad\text{as}\quad T,x\to\infty\]
for any \(\eta>0\), where \(\{\frac{1}{2}<\rho\leq 1\}\) is the finite set of zeros of \(Z_{\Gamma}(s)\) on the real line. The logarithm of \(Z_{\Gamma}(s)\) is given by
\[\log Z_{\Gamma}(s)= \int_{2+iT}^{\sigma+iT}\frac{Z_{\Gamma}^{\prime}(z)}{Z_{\Gamma}( z)}dz+\log Z_{\Gamma}(2+iT)\] \[= -\psi_{\Gamma,\sigma+iT}(x)+\psi_{\Gamma,2+iT}(x)+\log Z_{\Gamma}(2+iT )+O_{\eta,\Gamma}\left(T^{-2}x^{1-\sigma}+T^{1+\eta}x^{1/2-\sigma+\eta}\right).\]
Due to the prime geodesic theorem (1.1), we have
\[|\psi_{\Gamma,2+iT}(x)+\log Z_{\Gamma}(2+iT)|\] \[\leq x^{-1}\sum_{\begin{subarray}{c}\gamma\in\mathrm{Prim}(\Gamma ),j\geq 1\\ N(\gamma)^{j}<x\end{subarray}}\frac{1}{j(1-N(\gamma)^{-j})}N(\gamma)^{-j}+ \sum_{\begin{subarray}{c}\gamma\in\mathrm{Prim}(\Gamma),j\geq 1\\ N(\gamma)^{j}\geq x\end{subarray}}\frac{1}{j(1-N(\gamma)^{-j})}\left(1-\frac{N (\gamma)^{j}}{x}\right)N(\gamma)^{-2j}\] \[\ll_{\eta,\Gamma}x^{-1+\eta},\quad\text{as}\quad x\to\infty\]
for any \(\eta>0\). We thus obtain Lemma 3.1.
Next, we study the function \(\psi_{\Gamma,s}(x)\) in more detail when \(\Gamma\) is a subgroup of the modular group or a quaternion group \(\Gamma_{\mathcal{O}}\). Let
\[\mathrm{Tr}\Gamma:=\{\mathrm{tr}\gamma\mid\gamma\in\Gamma,\mathrm{tr}\gamma>2 \},\qquad m_{\Gamma}(n):=\sum_{\begin{subarray}{c}\gamma\in\mathrm{Prim}( \Gamma),j\geq 1\\ \mathrm{tr}\gamma^{j}=n\end{subarray}}\frac{1}{j}.\]
Since
\[N(\gamma)^{j/2}=\frac{1}{2}\left(\mathrm{tr}\gamma^{j}+\sqrt{(\mathrm{tr}\gamma^{j })^{2}-4}\right),\]
the function \(\psi_{\Gamma,s}(x)\) can be expressed by
\[\psi_{\Gamma,s}(x)=\sum_{\begin{subarray}{c}n\in\mathrm{Tr}(\Gamma)\\ n<X\end{subarray}}m_{\Gamma}(n)\Delta(n)\left(1-\frac{\epsilon(n)^{2}}{x} \right)\epsilon(n)^{-2s},\]
where \(X:=x^{1/2}+x^{-1/2}\) and
\[\epsilon(n):=\frac{1}{2}\left(n+\sqrt{n^{2}-4}\right),\qquad\Delta(n):=\frac{ 1}{1-\epsilon(n)^{-2}}.\]
It is easy to see that \(\mathrm{Tr}(\mathrm{SL}_{2}(\mathbb{Z}))=\mathbb{Z}_{\geq 3}\), and \(\mathrm{Tr}\Gamma\) is a non-sparse subset of \(\mathbb{Z}_{\geq 3}\) when \(\Gamma\) is a finite-index subgroup of the modular group \(\mathrm{SL}_{2}(\mathbb{Z})\) or a quaternion group \(\Gamma_{\mathcal{O}}\). Then, by taking \(m_{\Gamma}(n)=0\) for \(n\not\in\mathrm{Tr}\Gamma\), we can express \(\psi_{\Gamma,s}(x)\) by
\[\psi_{\Gamma,s}(x)=\sum_{\begin{subarray}{c}n\in\mathbb{Z}\\ 3\leq n<X\end{subarray}}m_{\Gamma}(n)\Delta(n)\left(1-\frac{\epsilon(n)^{2}}{x} \right)\epsilon(n)^{-2s}. \tag{3.1}\]
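As a quick numerical illustration of (3.1), the sketch below (Python; all names are illustrative assumptions) evaluates the truncated sum, taking the multiplicity function as a user-supplied callable `m_gamma`, since computing \(m_{\Gamma}(n)\) itself requires class-number data not reproduced here.

```python
import math

def eps(n):
    # epsilon(n) = (n + sqrt(n^2 - 4)) / 2
    return (n + math.sqrt(n * n - 4)) / 2

def delta(n):
    # Delta(n) = 1 / (1 - epsilon(n)^{-2})
    return 1 / (1 - eps(n) ** (-2))

def psi(s, x, m_gamma):
    """Truncated sum (3.1): n runs over 3 <= n < X with X = x^{1/2} + x^{-1/2};
    m_gamma(n) stands in for m_Gamma(n), taken as zero outside Tr(Gamma)."""
    X = math.sqrt(x) + 1 / math.sqrt(x)
    total = 0j
    for n in range(3, math.ceil(X)):
        e = eps(n)
        total += m_gamma(n) * delta(n) * (1 - e ** 2 / x) * e ** (-2 * s)
    return total

# Toy usage with a fictitious constant multiplicity:
print(psi(0.9 + 10j, 10**4, lambda n: 1.0))
```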
When \(\Gamma=\mathrm{SL}_{2}(\mathbb{Z})\), \(\Gamma=\Gamma_{\mathcal{O}}\) or \(\Gamma\) is a congruence subgroup of \(\mathrm{SL}_{2}(\mathbb{Z})\), it is known that \(m_{\Gamma}(n)\) can be written as a sum of the class numbers of primitive indefinite binary quadratic forms in the narrow sense [24, 1, 10]. It is also known that \(m_{\Gamma}(n)\) is bounded as follows.
**Lemma 3.2**.: _(Lemma 3.1 in [11]) When \(\Gamma\) is a subgroup of the modular group \(\mathrm{SL}_{2}(\mathbb{Z})\) or a quaternion group \(\Gamma_{\mathcal{O}}\), we have_
\[m_{\Gamma}(n)\ll_{\eta,\Gamma}n^{1+\eta},\quad\text{as}\quad n\to\infty \tag{3.2}\]
_for any \(\eta>0\)._
## 4 Proof of the universality theorem
In this section, we prove Theorem 1.4 by using the propositions and lemmas stated in §2 and §3.
### Linearly independent subset of \(\{\log\epsilon(n)\}\)
As studied in §3, we see that \(\log Z_{\Gamma}(s)\) can be written as a Dirichlet series over \(\Lambda=\{2\log\epsilon(n)\mid n\in\mathbb{Z}_{\geq 3}\}\). We now generate a linearly independent subset of \(\{\log\epsilon(n)\}\) over \(\mathbb{Q}\). Let \(\mathcal{T}\subset\mathbb{Z}\) be
\[\mathcal{T}:=\left\{n\geq 3\mid\epsilon(n)\neq\epsilon(n_{0})^{k}\text{ for any }k\geq 2,\,n_{0}\geq 3\right\}.\]
For example, \(7,14,18\not\in\mathcal{T}\) since \(\epsilon(7)=\epsilon(3)^{2}\), \(\epsilon(14)=\epsilon(4)^{2}\), \(\epsilon(18)=\epsilon(3)^{3}\), and \(\{3\leq n\leq 18\mid n\neq 7,14,18\}\subset\mathcal{T}\). The following lemma shows that \(\mathcal{T}\) gives a linearly independent subset of \(\{\log\epsilon(n)\}\) and the integers not in \(\mathcal{T}\) are distributed sparsely.
**Lemma 4.1**.: _Let \(\mathcal{T}\) be as above. Then the following (1) and (2) hold. (1) The set \(\left\{\log\epsilon(n)\mid n\in\mathcal{T}\right\}\) is linearly independent over \(\mathbb{Q}\). (2) For any \(\eta>0\), we have_
\[\#\left\{n\in\mathbb{Z}\mid 3\leq n<x,n\not\in\mathcal{T}\right\}\ll_{\eta}x^{1/2+ \eta},\quad\text{as}\quad x\to\infty.\]
Proof.: (1) Assume that \(\left\{\log\epsilon(n)\mid n\in\mathcal{T}\right\}\) is not linearly independent over \(\mathbb{Q}\), i.e. there exist distinct \(n_{1},\ldots,n_{N}\in\mathcal{T}\) and non-zero \(k_{1},\ldots,k_{N}\in\mathbb{Z}\) such that
\[k_{1}\log\epsilon(n_{1})+\cdots+k_{N}\log\epsilon(n_{N})=0.\]
According to Lemma 4.1 in [23], we see that \(\epsilon(n_{1}),\ldots,\epsilon(n_{N})\) lie in the same quadratic field, which means that there exist a non-square integer \(D>0\) and integers \(u_{1},\ldots,u_{N}\geq 1\) such that \(n_{i}^{2}-4=Du_{i}^{2}\) for \(1\leq i\leq N\). Since \(\epsilon(n_{i})=\frac{1}{2}(n_{i}+\sqrt{n_{i}^{2}-4})>1\) is a unit of the integer ring \(\mathcal{O}\) of \(\mathbb{Q}(\sqrt{D})\), there exists an integer \(l_{i}\geq 1\) such that
\[\epsilon(n_{i})=\epsilon_{0}(D)^{l_{i}} \tag{4.1}\]
for \(1\leq i\leq N\), where \(\epsilon_{0}(D)=\frac{1}{2}\left(n_{0}+u_{0}\sqrt{D}\right)\) is the fundamental unit of \(\mathcal{O}_{D}\). Here \((n_{0},u_{0})\) satisfies \(n_{0}^{2}-Du_{0}^{2}=4\) and then
\[\epsilon_{0}(D)=\frac{1}{2}\left(n_{0}+\sqrt{n_{0}^{2}-4}\right)=\epsilon(n_ {0}).\]
This means that (4.1) contradicts the fact that \(n_{1},\ldots,n_{N}\) are distinct and are elements of \(\mathcal{T}\).
(2) For \(k\geq 2\), let
\[T_{k}(x):=\left\{n\in\mathbb{Z}\mid 3\leq n\leq x,\epsilon(n)=\epsilon(n_{0})^{k }\text{ for some }n_{0}\geq 3\right\}.\]
We see that \(n\in T_{k}\) satisfies
\[n=\epsilon(n)+\epsilon(n)^{-1}=\epsilon(n_{0})^{k}+\epsilon(n_{0})^{-k}=n_{0} ^{k}-O(n_{0}^{k-2})\]
for some \(n_{0}\geq 3\) and then
\[\#T_{k}(x)\ll x^{1/k},\quad\text{as}\quad x\to\infty.\]
We thus have
\[\#\left\{n\in\mathbb{Z}\mid 3\leq n\leq x,n\not\in\mathcal{T}\right\}\leq\sum_{k \geq 2}\#T_{k}(x)\ll_{\eta}x^{1/2+\eta},\quad\text{as}\quad x\to\infty \tag{4.2}\]
for any \(\eta>0\).
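The set \(\mathcal{T}\) and the sparsity estimate in (2) can be checked by exact integer arithmetic: \(\epsilon(n_{0})^{k}+\epsilon(n_{0})^{-k}\) is the trace of a matrix power, so the traces \(t_{k}\) of powers satisfy the recurrence \(t_{k}=n_{0}t_{k-1}-t_{k-2}\) with \(t_{0}=2\), \(t_{1}=n_{0}\). A minimal sketch (Python; names are illustrative):

```python
def not_in_T(bound):
    """Integers 3 <= n <= bound with eps(n) = eps(n0)^k for some k >= 2,
    enumerated via the trace recurrence t_k = n0*t_{k-1} - t_{k-2}."""
    excluded = set()
    for n0 in range(3, bound + 1):
        prev, cur = 2, n0                     # t_0, t_1
        while True:
            prev, cur = cur, n0 * cur - prev  # advance to the next trace
            if cur > bound:
                break
            excluded.add(cur)                 # eps(cur) = eps(n0)^k, k >= 2
    return sorted(excluded)

# Reproduces the examples 7, 14, 18 above:
print(not_in_T(100))  # [7, 14, 18, 23, 34, 47, 52, 62, 79, 98]
```

Consistently with (2), only 10 of the 98 integers \(3\leq n\leq 100\) fall outside \(\mathcal{T}\).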
### Partial Dirichlet series
In the proof of Theorem 1.4, we will divide \(\log Z_{\Gamma}(s)\) by several partial Dirichlet series. For a set \(A\) of the integers \(n\geq 3\) and a series \(\{a_{n}\}_{n\in A}\subset[0,1)\), define
\[L_{\Gamma}(s;A):= \sum_{n\in A}m_{\Gamma}(n)\Delta(n)\epsilon(n)^{-2s},\] \[L_{\Gamma}(s;A;\{a_{n}\}):= \sum_{n\in A}m_{\Gamma}(n)\Delta(n)\epsilon(n)^{-2s}e\left(a_{n} \right),\]
where \(e(x):=e^{2\pi ix}\). For example,
\[L_{\Gamma}(s;\mathcal{T}\cap[X,Y);\{a_{n}\})= \sum_{\begin{subarray}{c}n\in\mathcal{T}\\ X\leq n<Y\end{subarray}}m_{\Gamma}(n)\Delta(n)\epsilon(n)^{-2s}e\left(a_{n} \right),\] \[L_{\Gamma}(s;\mathcal{T}\cap[Y,\infty))= \sum_{\begin{subarray}{c}n\in\mathcal{T}\\ n\geq Y\end{subarray}}m_{\Gamma}(n)\Delta(n)\epsilon(n)^{-2s}.\]
It is clear that \(L_{\Gamma}(s;\mathrm{Tr}\Gamma)=L_{\Gamma}(s;\mathbb{Z}_{\geq 3})=-\log Z_{ \Gamma}(s)\). We now prove the following lemma.
**Lemma 4.2**.: _Let \(\Gamma\) be a subgroup of the modular group \(\mathrm{SL}_{2}(\mathbb{Z})\) or a co-compact arithmetic group \(\Gamma_{\mathcal{O}}\). Then the following (1) and (2) hold. (1) Let \(\bar{\mathcal{T}}:=\mathbb{Z}_{\geq 3}\backslash\mathcal{T}\). The series_
\[L_{\Gamma}(s;\bar{\mathcal{T}}):=\sum_{n\not\in\mathcal{T},n\geq 3}m_{\Gamma}(n) \Delta(n)\epsilon(n)^{-2s} \tag{4.3}\]
_converges absolutely for \(\mathrm{Re}s>3/4\). (2) For \(\frac{1}{2}<\sigma<1\) and \(0<Y<T^{3/2}\), we have_
\[\frac{1}{T}\int_{1}^{T}\left|L_{\Gamma}\left(\sigma+it;\mathcal{T}\cap[Y, \infty)\right)\right|^{2}dt\ll_{\eta,\Gamma}Y^{3-4\sigma+\eta}+T^{5-6\sigma+ \eta},\quad\text{as}\quad Y,T\to\infty \tag{4.4}\]
_for any \(\eta>0\)._
Proof.: (1) Due to Lemma 3.2 and (2) of Lemma 4.1, we have
\[\sum_{n\not\in\mathcal{T},3\leq n\leq x}m_{\Gamma}(n)\Delta(n)\epsilon(n)^{-2s }\ll_{\eta}\sum_{n\not\in\mathcal{T},3\leq n\leq x}n^{1-2\mathrm{Re}s+\eta} \ll_{\eta}x^{\max\left(0,3/2-2\mathrm{Re}s+\eta\right)}\]
for any \(\eta>0\). Then the series (4.3) converges absolutely for \(\mathrm{Re}s>3/4\).
(2) Recall that \(X=x^{1/2}+x^{-1/2}\) and suppose that \(X>Y\). Due to Lemma 3.1, we have
\[L_{\Gamma}\left(s;\mathcal{T}\cap[Y,\infty)\right)= -\log Z_{\Gamma}(s)-L_{\Gamma}\left(s;\mathcal{T}\cap[3,Y)\right)-L _{\Gamma}(s;\bar{\mathcal{T}})\] \[= \psi_{\Gamma,s}(x)-L_{\Gamma}\left(s;\mathcal{T}\cap[3,Y)\right)-L_{ \Gamma}(s;\bar{\mathcal{T}})+O_{\eta,\Gamma}(T^{-2}x^{1-\sigma}+T^{1+\eta}x^{1/ 2-\sigma+\eta})\] \[= \sum_{n\in\mathcal{T},3\leq n<X}m_{\Gamma}(n)\Delta(n,x,Y)\epsilon (n)^{-2s}\] \[+\sum_{n\notin\mathcal{T},n\geq 3}m_{\Gamma}(n)\Delta(n,x,Y) \epsilon(n)^{-2s}+O_{\eta,\Gamma}\left(T^{-2}x^{1-\sigma}+T^{1+\eta}x^{1/2- \sigma+\eta}\right)\] \[=: M_{1}+M_{2}+O_{\eta,\Gamma}\left(T^{-2}x^{1-\sigma}+T^{1+\eta}x^ {1/2-\sigma+\eta}\right),\]
where
\[\Delta(n,x,Y)=\begin{cases}-\Delta(n)\epsilon(n)^{2}x^{-1},&(n<Y),\\ \Delta(n)\left(1-\frac{\epsilon(n)^{2}}{x}\right),&(Y\leq n<X).\end{cases}\]
We can get \(M_{2}\ll_{\eta,\Gamma}x^{3/4-\sigma+\eta}\) easily from (1). Then the square integral of \(L_{\Gamma}(s;\mathcal{T}\cap[Y,\infty))\) is estimated as follows.
\[\frac{1}{T}\int_{1}^{T}\left|L_{\Gamma}(\sigma+it;\mathcal{T} \cap[Y,\infty))\right|^{2}dt\] \[\ll \sum_{n_{1},n_{2}<X}m_{\Gamma}(n_{1})m_{\Gamma}(n_{2})\Delta(n_{1 },x,Y)\Delta(n_{2},x,Y)\epsilon(n_{1})^{-2\sigma}\epsilon(n_{2})^{-2\sigma} \frac{1}{T}\int_{1}^{T}\left(\frac{\epsilon(n_{1})}{\epsilon(n_{2})}\right)^ {2it}dt\] \[+O_{\eta,\Gamma}\left(x^{3/2-2\sigma+\eta}+T^{-1}x^{2-2\sigma}+T^ {2+\eta}x^{1-2\sigma+\eta}\right). \tag{4.5}\]
Divide the double sum in the right hand side by \(\sum_{n_{1}=n_{2}}+\sum_{n_{1}\neq n_{2}}=:S_{1}+S_{2}\). Due to Lemma 3.2, we see that
\[S_{1} =\sum_{3\leq n<X}m_{\Gamma}(n)^{2}\Delta(n,x,Y)^{2}\epsilon(n)^{- 4\sigma}\] \[=x^{-2}\sum_{3\leq n<Y}m_{\Gamma}(n)^{2}\Delta(n)^{2}\epsilon(n)^ {4-4\sigma}+\sum_{Y\leq n<X}m_{\Gamma}(n)^{2}\Delta(n)^{2}\left(1-\frac{ \epsilon(n)^{2}}{x}\right)^{2}\epsilon(n)^{-4\sigma}\] \[\ll_{\eta,\Gamma}x^{-2}Y^{7-4\sigma+\eta}+Y^{3-4\sigma+\eta}. \tag{4.6}\]
Furthermore, since
\[\int_{1}^{T}\left(\frac{\epsilon(n_{1})}{\epsilon(n_{2})}\right)^{2it}dt\ll \frac{1}{\left|\log\frac{\epsilon(n_{1})}{\epsilon(n_{2})}\right|}\ll \begin{cases}\frac{n_{1}}{n_{2}-n_{1}},&(n_{1}<n_{2}<2n_{1}),\\ 1,&(n_{2}\geq 2n_{1}).\end{cases}\]
we have
\[S_{2} \ll T^{-1}\sum_{n_{1}<X}m_{\Gamma}(n_{1})\Delta(n_{1},x,Y)\epsilon(n_ {1})^{-2\sigma}\sum_{n_{1}<n_{2}<X}m_{\Gamma}(n_{2})\Delta(n_{2},x,Y)\epsilon(n_ {2})^{-2\sigma}\frac{1}{\left|\log\frac{\epsilon(n_{1})}{\epsilon(n_{2})}\right|}\] \[\ll_{\eta,\Gamma}T^{-1}\sum_{n_{1}<X}n_{1}^{1-2\sigma+\eta}\left( \sum_{n_{1}<n_{2}<2n_{1}}n_{2}^{1-2\sigma+\eta}\frac{n_{1}}{n_{2}-n_{1}}+\sum_ {n_{2}\geq 2n_{1}}n_{2}^{1-2\sigma+\eta}\right)\] \[\ll_{\eta,\Gamma}T^{-1}x^{2-2\sigma+\eta}. \tag{4.7}\]
Choosing \(x=T^{3}\), i.e. \(Y<X\sim T^{3/2}\), we can obtain (4.4) from (4.5)-(4.7).
### Proof of Theorem 1.4
We now give the proof of the universality theorem of \(\log Z_{\Gamma}(s)\), which is enough to prove the universality theorem of \(Z_{\Gamma}(s)\) itself since
\[Z_{\Gamma}(s+i\tau)-f(s)=f(s)\left(e^{\log Z_{\Gamma}(s+i\tau)-\log f(s)}-1 \right).\]
Let \(X_{1}>0\) and divide \(\log f(s)-\log Z_{\Gamma}(s+i\tau)\) by
\[\log f(s)-\log Z_{\Gamma}(s+i\tau)= \left(\log f(s)+L_{\Gamma}(s;[3,X_{1}))+L_{\Gamma}(s+i\tau; \mathcal{T}\cap[X_{1},\infty))\right)\] \[+\left(L_{\Gamma}(s+i\tau;[3,X_{1}))-L_{\Gamma}(s;[3,X_{1}))+L_{ \Gamma}(s+i\tau;\bar{\mathcal{T}}\cap[X_{1},\infty))\right). \tag{4.8}\]
Due to the prime geodesic theorem (1.1) and (2) of Lemma 4.1, we can easily check that the series
\[\sum_{n\in\mathcal{T}}m_{\Gamma}(n)\Delta(n)\epsilon(n)^{-2s}\]
satisfies the packing condition (2.2). Then, applying Proposition 2.1 with
\[g(s)=\log f(s)+L_{\Gamma}(s;[3,X_{1})),\]
we see that there exist \(X_{2}>X_{1}\) and a series \(\{\theta_{n}\}_{n\in\mathcal{T}\cap[X_{1},X_{2}]}\subset[0,1)\) such that
\[\left|\log f(s)+L_{\Gamma}(s;[3,X_{1}))-L_{\Gamma}(s;\mathcal{T} \cap[X_{1},X_{2});\{\theta_{n}\})\right|\] \[\ll \sum_{\begin{subarray}{c}n\in\mathcal{T}\\ X_{1}\leq n<X_{2}\end{subarray}}m_{\Gamma}(n)^{2}\Delta(n)^{2}\epsilon(n)^{-4 \sigma}\ll_{\eta}X_{1}^{3-4\sigma+\eta}. \tag{4.9}\]
Define the series \(\{\bar{\theta}_{n}\}_{n\in\mathcal{T}}\) and the subset \(S_{T}(\delta)\) of the interval \([0,T]\) by
\[\bar{\theta}_{n}:= \begin{cases}\theta_{n},&(X_{1}\leq n<X_{2}),\\ 0,&(\text{otherwise}),\end{cases}\] \[S_{T}(\delta):= \left\{\tau\in[0,T]\ \Big{|}\ \left\|\frac{\tau\log\epsilon(n)}{\pi}-\bar{ \theta}_{n}\right\|<\delta\quad\text{for any }n\in\mathcal{T}\cap[3,X_{2})\right\}.\]
Then, for \(\tau\in S_{T}(\delta)\), we have
\[L_{\Gamma}(s+i\tau;{\cal T}\cap[X_{1},X_{2}))-L_{\Gamma}(s;{\cal T} \cap[X_{1},X_{2});\{\theta_{n}\})\] \[=\sum_{\begin{subarray}{c}n\in{\cal T}\\ X_{1}\leq n<X_{2}\end{subarray}}m_{\Gamma}(n)\Delta(n)\epsilon(n)^{-2s}( \epsilon(n)^{-2i\tau}-e(\theta_{n}))\ll_{\eta}\delta X_{2}^{2-2\sigma+\eta}. \tag{4.10}\]
We also have
\[L_{\Gamma}(s+i\tau;[3,X_{1}))-L_{\Gamma}(s;[3,X_{1}))+L_{\Gamma} (s+i\tau;\bar{\cal T}\cap[X_{1},\infty))\] \[=\sum_{n\in{\mathbb{Z}},3\leq n<X_{1}}m_{\Gamma}(n)\Delta(n) \epsilon(n)^{-2s}(\epsilon(n)^{-2i\tau}-1)+\sum_{n\not\in{\cal T},n\geq X_{1}}m _{\Gamma}(n)\Delta(n)\epsilon(n)^{-2s-2i\tau}\] \[\ll_{\eta}\delta X_{1}^{2-2\sigma+\eta}+X_{1}^{\frac{3}{2}-2 \sigma+\eta} \tag{4.11}\]
for \(\tau\in S_{T}(\delta)\) and \(\sigma={\rm Re}s>3/4\).
Summarizing (4.8)-(4.11), we see that
\[\log f(s)-\log Z_{\Gamma}(s+i\tau)-L_{\Gamma}(\sigma+it;{\cal T}\cap[X_{2}, \infty))\ll_{\eta}X_{1}^{\frac{3}{2}-2\sigma+\eta}+\delta X_{2}^{2-2\sigma+ \eta}, \tag{4.12}\]
if \(\tau\in S_{T}(\delta)\) and \(\sigma>3/4\). Now, choose a sufficiently large \(X_{3}>X_{2}\) and let \(S^{\prime}_{T}(\delta)\) be the set of \(\tau\in S_{T}(\delta)\) satisfying
\[|L_{\Gamma}(s+i\tau;{\cal T}\cap[X_{2},X_{3}))|<\left(\sum_{n\in{\cal T},n>X_{ 2}}m_{\Gamma}(n)^{2}\Delta(n)^{2}\epsilon(n)^{-4\sigma}\right)^{1/4}\ll_{\eta }X_{2}^{\frac{3}{4}-\sigma+\eta}<X_{1}^{\frac{3}{4}-\sigma+\eta}.\]
Then, if \(\tau\in S^{\prime}_{T}(\delta)\), we see that, for any \(\epsilon>0\), there exist a sufficiently large \(X_{1}>0\) and a sufficiently small \(\delta>0\) such that
\[|\log f(s)-\log Z_{\Gamma}(s+i\tau)-L_{\Gamma}(\sigma+it;{\cal T}\cap[X_{3}, \infty))|<\frac{1}{2}\epsilon. \tag{4.13}\]
Due to Proposition 2.2, we can estimate the measure of \(S^{\prime}_{T}(\delta)\) by
\[\frac{1}{T}\mu\left(S^{\prime}_{T}(\delta)\right)>\frac{1}{2}(2\delta)^{\#{ \cal T}\cap[3,X_{2})}=:\epsilon_{1}. \tag{4.14}\]
The remaining part of this proof is to estimate the measure of \(\tau\) such that
\[L_{\Gamma}(s+i\tau;{\cal T}\cap[X_{3},\infty))\]
is small enough. Let \(U\) be a bounded rectangle with \(K\subset U\subset\{\frac{5}{6}<{\rm Re}s<1\}\), not including the zeros of \(Z_{\Gamma}(s)\). Put \(d:=\min\limits_{z\in\partial U}\min\limits_{s\in K}|s-z|\) and \(\epsilon_{2}:=\min\left(\frac{\epsilon}{2},\epsilon_{1}\right)>0\). According to (2) of Lemma 4.2, we can choose sufficiently large \(X_{3},T\) such that
\[\frac{1}{T}\int_{1}^{T}|L_{\Gamma}(\sigma+it;{\cal T}\cap[X_{3},\infty))|^{2} dt<\frac{d^{2}\pi}{2\mu(U)}\epsilon_{2}^{3}.\]
We then obtain
\[\mu\left\{\tau\in[0,T]\ \Big{|}\ \max_{s\in K}|L_{\Gamma}(s+i\tau;\mathcal{T}\cap[X _{3},\infty))|<\epsilon_{2}\right\}>\left(1-\frac{1}{2}\epsilon_{2}\right)T \tag{4.15}\]
from Lemma 2.3. The universality theorem
\[\mu\left\{\tau\in[0,T]\ \Big{|}\ \max_{s\in K}|\log f(s)-\log Z_{\Gamma}(s+i\tau) |<\epsilon\right\}>\frac{1}{2}\epsilon_{2}T\]
of \(\log Z_{\Gamma}(s)\) follows from (4.14) and (4.15).
## 5 Proof of Theorem 1.5
In this section, we prove Theorem 1.5. Before proving it, we prepare subsets of \(\mathcal{T}\), which are disjoint from each other and generate Dirichlet series satisfying the packing condition (2.2).
### Subsets of \(\mathcal{T}\) and partial Dirichlet series
Recall that \(r\geq 1\) is an integer, \(\Gamma_{1},\ldots,\Gamma_{r}\) are congruence subgroups of \(\mathrm{SL}_{2}(\mathbb{Z})\) and
\[\hat{T}_{j}:=\{n\in\mathrm{Tr}\Gamma_{j}\mid n\not\in\mathrm{Tr}\Gamma_{i}\ \text{for}\ 1\leq i\leq j-1\}=\mathrm{Tr}\Gamma_{j}\setminus\biggl{(}\bigcup_{1\leq i \leq j-1}\mathrm{Tr}\Gamma_{i}\biggr{)}\]
for \(1\leq j\leq r\). Put \(\mathcal{T}_{j}:=\mathcal{T}\cap\mathrm{Tr}\Gamma_{j}\),
\[\hat{\mathcal{T}}_{j}:=\mathcal{T}\cap\hat{T}_{j}=\mathcal{T}_{j}\setminus \biggl(\bigcup_{1\leq i\leq j-1}\mathcal{T}_{i}\biggr),\qquad\tilde{ \mathcal{T}}_{j}:=\mathcal{T}_{j}\setminus\hat{\mathcal{T}}_{j}=\mathcal{T}_{j }\bigcap\biggl(\bigcup_{1\leq i\leq j-1}\mathcal{T}_{i}\biggr).\]
It is easy to see that \(\hat{\mathcal{T}}_{i}\cap\hat{\mathcal{T}}_{j}=\emptyset\) for \(i\neq j\), \(\hat{\mathcal{T}}_{j}\cap\tilde{\mathcal{T}}_{j}=\emptyset\) and
\[\bigsqcup_{1\leq i\leq j}\hat{\mathcal{T}}_{i}=\bigcup_{1\leq i\leq j} \mathcal{T}_{i},\qquad\hat{\mathcal{T}}_{j}\bigsqcup\tilde{\mathcal{T}}_{j}= \mathcal{T}_{j}.\]
We now prove the following lemma.
**Lemma 5.1**.: _Let \(\Gamma_{1},\ldots,\Gamma_{r}\) be congruence subgroups of \(\mathrm{SL}_{2}(\mathbb{Z})\). If \(\hat{T}_{j}\neq\emptyset\), then the series_
\[\sum_{n\in\hat{\mathcal{T}}_{j}}m_{\Gamma_{j}}(n)\Delta(n)\epsilon(n)^{-2s} \tag{5.1}\]
_satisfies the packing condition (2.2)._
Proof.: Let \(N\geq 1\) be an integer such that the principal congruence subgroup \(\bar{\Gamma}(N)\) is a normal subgroup of \(\Gamma_{1},\ldots,\Gamma_{r}\) of finite index. Denote by \(\hat{\Gamma}_{j}:=\{g\in\Gamma_{j}\mid\mathrm{tr}g\not\in\mathrm{Tr} \Gamma_{i}\ \text{for}\ 1\leq i\leq j-1\}\), i.e. \(\hat{\Gamma}_{j}\) is a subset of \(\Gamma_{j}\) with \(\mathrm{Tr}\hat{\Gamma}_{j}=\hat{T}_{j}\). We first prove that \(g\bar{\Gamma}(N)\subset\hat{\Gamma}_{j}\) holds for \(g\in\hat{\Gamma}_{j}\). Assume that it does not hold, namely there exist \(1\leq i\leq j-1\), \(\alpha\in\bar{\Gamma}(N)\) and \(\beta\in\Gamma_{i}\) such that
\(\mathrm{tr}g\alpha=\mathrm{tr}\beta\). Since \(\alpha\in\bar{\Gamma}(N)\), it holds \(\mathrm{tr}g\equiv\mathrm{tr}\beta\bmod N\) and then there exists \(\alpha_{1}\in\bar{\Gamma}(N)\) such that \(\mathrm{tr}g=\mathrm{tr}\beta\alpha_{1}\in\mathrm{Tr}\Gamma_{i}\). This contradicts \(g\in\hat{\Gamma}_{j}\).
When \(\hat{T}_{j}\neq\emptyset\), there exists an element \(g\in\hat{\Gamma}_{j}\). It is clear that \(h^{-1}gh\in\hat{\Gamma}_{j}\) holds for any \(h\in\Gamma_{j}\) and \(h^{-1}gh\bar{\Gamma}(N)\subset\hat{\Gamma}_{j}\) also holds. This means that there exists a conjugacy class \([g]\) of the finite group \(\Gamma_{j}/\bar{\Gamma}(N)\) such that
\[\sum_{\begin{subarray}{c}n\in\hat{T}_{j}\\ Y_{1}\leq n<Y_{2}\end{subarray}}m_{\Gamma_{j}}(n)\Delta(n)\geq\sum_{ \begin{subarray}{c}\gamma\in\mathrm{Prim}(\Gamma_{j})\\ \gamma\subset\hat{\Gamma}_{j}\\ \epsilon(Y_{1})^{2}\leq N(\gamma)<\epsilon(Y_{2})^{2}\end{subarray}}\frac{1}{ 1-N(\gamma)^{-1}}\geq\sum_{\begin{subarray}{c}\gamma\in\mathrm{Prim}(\Gamma_{ j})\\ \gamma\bar{\Gamma}(N)=[g]\\ \epsilon(Y_{1})^{2}\leq N(\gamma)<\epsilon(Y_{2})^{2}\end{subarray}}\frac{1}{ 1-N(\gamma)^{-1}}.\]
We can check that the series (5.1) satisfies the packing condition (2.2) from the following variant of the prime geodesic theorem, called the Chebotarev-type prime geodesic theorem [25, 29].
\[\#\{\gamma\in\mathrm{Prim}(\Gamma_{j})\mid\gamma\bar{\Gamma}(N)=[g],N(\gamma)<x \}=C\mathrm{li}(x)+O(x^{a}),\quad\text{as}\quad x\to\infty,\]
where \(0<C\leq 1\) and \(1/2<a<1\) are constants depending on \(\Gamma_{j},\bar{\Gamma}(N)\) and \(g\).
### Proof of Theorem 1.5
Let \(X_{1}>0\) and divide \(\log f_{j}(s)-\log Z_{\Gamma_{j}}(s+i\tau)\) by
\[\log f_{j}(s)-\log Z_{\Gamma_{j}}(s+i\tau)= \left(\log f_{j}(s)+L_{\Gamma_{j}}(s;[3,X_{1}))+L_{\Gamma_{j}}(s+ i\tau;\mathcal{T}\cap[X_{1},\infty))\right)\] \[+\left(L_{\Gamma_{j}}(s+i\tau;[3,X_{1}))-L_{\Gamma_{j}}(s;[3,X_{1 }))+L_{\Gamma_{j}}(s+i\tau;\bar{\mathcal{T}}\cap[X_{1},\infty))\right).\]
First, we study the case \(j=1\). Since the series
\[\sum_{n\in\mathcal{T}_{1}}m_{\Gamma_{1}}(n)\Delta(n)\epsilon(n)^{-2s}\]
satisfies the packing condition (2.2), applying Proposition 2.1 with
\[g_{1}(s)=\log f_{1}(s)+L_{\Gamma_{1}}(s;[3,X_{1})),\]
we see that there exist \(X_{2}^{(1)}>X_{1}\) and a series \(\{\theta_{n}^{(1)}\}_{n\in\hat{\mathcal{T}}_{1}\cap[X_{1},X_{2}^{(1)})}\subset[ 0,1)\) such that
\[\left|\log f_{1}(s)+L_{\Gamma_{1}}(s;[3,X_{1}))-L_{\Gamma_{1}}(s; \hat{\mathcal{T}}_{1}\cap[X_{1},X_{2}^{(1)});\{\theta_{n}^{(1)}\})\right|\] \[\ll \sum_{\begin{subarray}{c}n\in\hat{\mathcal{T}}_{1}\\ X_{1}\leq n<X_{2}^{(1)}\end{subarray}}m_{\Gamma_{1}}(n)^{2}\Delta(n)^{2} \epsilon(n)^{-4\sigma}\ll_{\eta}X_{1}^{3-4\sigma+\eta}.\]
Put \(\mathcal{U}_{1}:=\hat{\mathcal{T}}_{1}\cap[X_{1},X_{2}^{(1)})\) and define the series \(\{\bar{\theta}_{n}^{(1)}\}_{n\in\mathcal{T}}\) by \(\bar{\theta}_{n}^{(1)}=\theta_{n}^{(1)}\) if \(n\in\mathcal{U}_{1}\) and \(\bar{\theta}_{n}^{(1)}=0\) otherwise.
Next, we fix the value \(X_{2}^{(j)}>X_{1}\), the series \(\{\theta_{n}^{(j)}\}\), \(\{\bar{\theta}_{n}^{(j)}\}\) and the set \(\mathcal{U}_{j}\) for \(2\leq j\leq r\) recursively as follows. Suppose that \(X_{2}^{(i)}\), \(\{\theta_{n}^{(i)}\}\), \(\{\bar{\theta}_{n}^{(i)}\}\) and \(\mathcal{U}_{i}\) are already given for \(1\leq i\leq j-1\). Due to Lemma 5.1, we see that the series
\[\sum_{n\in\widehat{\mathcal{T}}_{j}}m_{\Gamma_{j}}(n)\Delta(n)\epsilon(n)^{-2s}\]
satisfies the packing condition (2.2). Then, applying Proposition 2.1 with
\[g_{j}(s)=\log f_{j}(s)+L_{\Gamma_{j}}(s;[3,X_{1}))-L_{\Gamma_{j}}(s;\tilde{ \mathcal{T}}_{j}\cap\mathcal{U}_{j-1};\{\bar{\theta}_{n}^{(j-1)}\}),\]
we see that there exist a value \(X_{2}^{(j)}>X_{1}\) and a series \(\{\theta_{n}^{(j)}\}_{n\in\widehat{\mathcal{T}}_{j}\cap[X_{1},X_{2}^{(j)})} \subset[0,1)\) such that
\[\left|\log f_{j}(s)+L_{\Gamma_{j}}(s;[3,X_{1}))-L_{\Gamma_{j}}(s; \tilde{\mathcal{T}}_{j}\cap\mathcal{U}_{j-1};\{\bar{\theta}_{n}^{(j-1)}\})-L _{\Gamma_{j}}(s;\hat{\mathcal{T}}_{j}\cap[X_{1},X_{2}^{(j)});\{\theta_{n}^{(j )}\})\right|\] \[\ll\sum_{\begin{subarray}{c}n\in\widehat{\mathcal{T}}_{j}\\ X_{1}\leq n<X_{2}^{(j)}\end{subarray}}m_{\Gamma_{j}}(n)^{2}\Delta(n)^{2} \epsilon(n)^{-4\sigma}\ll_{\eta,j}X_{1}^{3-4\sigma+\eta}.\]
Put \(\mathcal{U}_{j}:=\mathcal{U}_{j-1}\coprod\left(\widehat{\mathcal{T}}_{j}\cap[ X_{1},X_{2}^{(j)})\right)\) and define the series \(\{\bar{\theta}_{n}^{(j)}\}_{n\in\mathcal{T}}\) by \(\bar{\theta}_{n}^{(j)}=\theta_{n}^{(j)}\) if \(n\in\hat{\mathcal{T}}_{j}\cap[X_{1},X_{2}^{(j)})\) and \(\bar{\theta}_{n}^{(j)}=\bar{\theta}_{n}^{(j-1)}\) otherwise. Note that \(\bar{\theta}_{n}^{(r)}=\theta_{n}^{(j)}\) if \(n\in\hat{\mathcal{T}}_{j}\cap[X_{1},X_{2}^{(j)})\) for some \(1\leq j\leq r\) and \(\bar{\theta}_{n}^{(r)}=0\) otherwise.
We also put \(\{\bar{\theta}_{n}\}_{n\in\mathcal{T}}:=\{\bar{\theta}_{n}^{(r)}\}_{n\in \mathcal{T}}\), \(\mathcal{U}:=\mathcal{U}_{r}\coprod\left(\mathcal{T}\cap[3,X_{1})\right)\) and
\[S_{T}=S_{T}(\delta,\mathcal{U}):=\left\{\tau\in[0,T]\ \Big{|}\ \left\|\frac{\tau\log \epsilon(n)}{\pi}-\bar{\theta}_{n}\right\|<\delta\quad\text{for any }n\in\mathcal{U}\right\}\]
for \(\delta>0\) and \(T>0\). Then, if \(\tau\in S_{T}\), we have
\[L_{\Gamma_{j}}\left(s+i\tau;\hat{\mathcal{T}}_{j}\cap[X_{1},X_{2 }^{(j)})\right)-L_{\Gamma_{j}}\left(s;\hat{\mathcal{T}}_{j}\cap[X_{1},X_{2}^{( j)});\{\bar{\theta}_{n}\})\right)\] \[=\sum_{\begin{subarray}{c}n\in\hat{\mathcal{T}}_{j}\\ X_{1}\leq n<X_{2}^{(j)}\end{subarray}}m_{\Gamma_{j}}(n)\Delta(n)\epsilon(n)^{- 2s}(\epsilon(n)^{-2i\tau}-e(\bar{\theta}_{n}))\ll_{\eta,j}\delta\left(X_{2}^{ (j)}\right)^{2-2\sigma+\eta}, \tag{5.2}\] \[L_{\Gamma_{j}}\left(s+i\tau;\hat{\mathcal{T}}_{j}\cap\mathcal{U}_ {j-1}\right)-L_{\Gamma_{j}}\left(s;\hat{\mathcal{T}}_{j}\cap\mathcal{U}_{j-1}; \{\bar{\theta}_{n}\}\right)\] \[=\sum_{n\in\hat{\mathcal{T}}_{j}\cap\mathcal{U}_{j-1}}m_{\Gamma_{ j}}(n)\Delta(n)\epsilon(n)^{-2s}(\epsilon(n)^{-2i\tau}-e(\bar{\theta}_{n}))\ll_{\eta,j} \delta\left(\bar{X}_{2}^{(j-1)}\right)^{2-2\sigma+\eta}, \tag{5.3}\]
where \(\bar{X}_{2}^{(j-1)}:=\max_{1\leq l\leq j-1}X_{2}^{(l)}\). We also have
\[L_{\Gamma_{j}}(s+i\tau;[3,X_{1}))-L_{\Gamma_{j}}(s;[3,X_{1}))+L_ {\Gamma_{j}}(s+i\tau;\bar{\mathcal{T}}\cap[X_{1},\infty))\] \[\ll\sum_{n\in\mathbb{Z},3\leq n<X_{1}}m_{\Gamma_{j}}(n)\Delta(n) \epsilon(n)^{-2s}|\epsilon(n)^{-2i\tau}-1|+\sum_{n\notin\mathcal{T},n\geq X_{1 }}m_{\Gamma_{j}}(n)\Delta(n)\epsilon(n)^{-2\sigma}\] \[\ll_{\eta,j}\delta X_{1}^{2-2\sigma+\eta}+X_{1}^{\frac{3}{2}-2 \sigma+\eta}. \tag{5.4}\]
if \(\sigma>3/4\) and \(\tau\in S_{T}\). Summarizing (5.2)-(5.4), if \(\tau\in S_{T}\), we have
\[\log f_{j}(s)-\log Z_{\Gamma_{j}}(s+i\tau)-L_{\Gamma_{j}}\left(s+i \tau;\hat{\mathcal{T}}_{j}\cap[X_{2}^{(j)},\infty)\right)-L_{\Gamma_{j}}\left(s +i\tau;\tilde{\mathcal{T}}_{j}\cap[X_{1},\infty)\backslash\mathcal{U}_{j-1}\right)\] \[\ll_{\eta,j}\delta\left(\left(X_{2}^{(j)}\right)^{2-2\sigma+\eta} +\left(\bar{X}_{2}^{(j-1)}\right)^{2-2\sigma+\eta}+X_{1}^{2-2\sigma+\eta} \right)+X_{1}^{\frac{3}{2}-2\sigma+\eta}. \tag{5.5}\]
Now, choose a sufficiently large \(X_{3}>\max\limits_{1\leq j\leq r}X_{2}^{(j)}\) and let
\[\bar{\mathcal{U}}_{j}=\bar{\mathcal{U}}_{j}(X_{1},X_{3}):=\left(\hat{\mathcal{ T}}_{j}\cap[X_{2}^{(j)},X_{3})\right)\coprod\left(\tilde{\mathcal{T}}_{j}\cap[X_{1 },X_{3})\backslash\mathcal{U}_{j-1}\right).\]
Define the set \(S_{T}^{(j)}\) of \(\tau\in S_{T}\) satisfying
\[\left|L_{\Gamma_{j}}\left(s+i\tau;\bar{\mathcal{U}}_{j}\right)\right| <\left(\sum_{n\in\bar{\mathcal{U}}_{j}}m_{\Gamma_{j}}(n)^{2}\Delta( n)^{2}\epsilon(n)^{-4\sigma}\right)^{1/4}\] \[\ll_{\eta,j}\left(\sum_{n\in\mathcal{T},n>X_{1}}m_{\Gamma_{j}}(n) ^{2}\Delta(n)^{2}\epsilon(n)^{-4\sigma}\right)^{1/4}\ll X_{1}^{3/4-\sigma+\eta}.\]
This means that, if \(\tau\in S_{T}^{(j)}\), for any \(\epsilon>0\), there exist \(X_{1},\delta\) such that
\[\left|\log f_{j}(s)-\log Z_{\Gamma_{j}}(s+i\tau)-L_{\Gamma_{j}}(s+i\tau; \mathcal{T}\cap[X_{3},\infty))\right|<\frac{1}{2}\epsilon. \tag{5.6}\]
Since \(\mathcal{U},\bar{\mathcal{U}}_{j}\) are disjoint and \(\min\{n\in\bar{\mathcal{U}}_{j}\}>X_{1}\), due to (2) of Proposition 2.2, we see that
\[\mu\left(S_{T}^{(j)}\right)>\left(1-\frac{1}{2r}\right)\mu\left(S_{T}\right) \tag{5.7}\]
if \(X_{1}\) is sufficiently large. According to (5.7) and (1) of Proposition 2.2, we have
\[\frac{1}{T}\mu\left(\bigcap_{1\leq j\leq r}S_{T}^{(j)}\right)>\frac{1}{2}\frac {\mu(S_{T})}{T}>\frac{1}{2}(2\delta)^{\#\mathcal{U}}=:\epsilon_{1} \tag{5.8}\]
for sufficiently large \(T>0\). Thus the set of \(\tau\in[0,T]\) satisfying (5.6) for \(1\leq j\leq r\) simultaneously has a positive measure.
The remaining part of this proof is to study \(L_{\Gamma_{j}}(s+i\tau;\mathcal{T}\cap[X_{3},\infty))\). Let \(U\) be a bounded rectangle with \(\cup_{1\leq j\leq r}K_{j}\subset U\subset\{\frac{5}{6}<\text{Re}s<1\}\), not including the zeros of \(Z_{\Gamma_{j}}(s)\). Put \(d:=\max\limits_{1\leq j\leq r}\min\limits_{z\in\partial U}\min\limits_{s\in K_ {j}}|s-z|\) and \(\epsilon_{2}:=\min\left(\frac{\epsilon}{2},\epsilon_{1}\right)>0\). According to Lemma 4.2, we see that there exist \(X_{3},T\) such that
\[\frac{1}{T}\int_{1}^{T}|L_{\Gamma_{j}}(\sigma+it;\mathcal{T}\cap[X_{3},\infty) )|^{2}dt<\frac{d^{2}\pi}{2\mu(U)}\epsilon_{2}^{3}\]
for any \(1\leq j\leq r\). We then obtain
\[\mu\left\{\tau\in[0,T]\ \Big{|}\ \max_{1\leq j\leq r}\max_{s\in K_{j}}|L_{\Gamma_{j}}(s +i\tau;\mathcal{T}\cap[X_{3},\infty))|<\epsilon_{2}\right\}>\left(1-\frac{1}{2} \epsilon_{2}\right)T \tag{5.9}\]
from Lemma 2.3. The desired result
\[\mu\left\{\tau\in[0,T]\ \Big{|}\ \max_{1\leq j\leq r}\max_{s\in K_{j}}|\log f_{j} (s)-\log Z_{\Gamma_{j}}(s+i\tau)|<\epsilon\right\}>\frac{1}{2}\epsilon_{2}T \tag{5.10}\]
follows from (5.8) and (5.9).
**Acknowledgment.** The author was supported by JST CREST no.JPMJCR2113 and JSPS Grant-in-Aid for Scientific Research (C) no. 22K03234.
|
2302.02855 | Thermal Hall effect in a van der Waals triangular magnet FeCl2 | Thermal transport is a pivotal probe for studying low-energy, charge-neutral
quasi-particles in insulating magnets. In this Letter, we report an observation
of large magneto-thermal conductivity and thermal Hall effect (THE) in a van
der Waals antiferromagnet FeCl2. The magneto-thermal conductivity reaches over
~700%, indicating strong magnon-phonon coupling. Furthermore, we find an
appreciable thermal Hall signal which changes sign concurrently with the
spin-flip transition from the antiferromagnetic state to the polarized
ferromagnetic state. Our theoretical calculations suggest that, in addition to
the Berry curvature induced at the anticrossing points of the hybridized magnon
and acoustic phonon modes of FeCl2, other mechanisms are needed to account for
the magnitude of the observed THE. | Chunqiang Xu, Caitlin Carnahan, Heda Zhang, Milos Sretenovic, Pengpeng Zhang, Di Xiao, Xianglin Ke | 2023-02-06T15:23:22Z | http://arxiv.org/abs/2302.02855v1 | # Thermal Hall effect in a van der Waals triangular magnet FeCl\({}_{2}\)
###### Abstract
Thermal transport is a pivotal probe for studying low-energy, charge-neutral quasiparticles in insulating magnets. In this Letter, we report an observation of large magneto-thermal conductivity and thermal Hall effect (THE) in a van der Waals antiferromagnet FeCl\({}_{2}\). The magneto-thermal conductivity reaches over \(\sim\)700%, indicating strong magnon-phonon coupling. Furthermore, we find an appreciable thermal Hall signal which changes sign concurrently with the spin-flip transition from the antiferromagnetic state to the polarized ferromagnetic state. Our theoretical calculations suggest that, in addition to the Berry curvature induced at the anticrossing points of the hybridized magnon and acoustic phonon modes of FeCl\({}_{2}\), other mechanisms are needed to account for the magnitude of the observed THE.
Chunqiang Xu\({}^{1,2}\), Caitlin Carnahan\({}^{3}\), Heda Zhang\({}^{1}\), Milos Sretenovic\({}^{1}\), Pengpeng Zhang\({}^{1}\), Di Xiao\({}^{4,5}\), Xianglin Ke\({}^{1}\)

\({}^{1}\)Department of Physics and Astronomy, Michigan State University, East Lansing, Michigan 48824-2320, USA
\({}^{2}\)School of Physical Science and Technology, Ningbo University, Ningbo 315211, China
\({}^{3}\)Department of Physics, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA
\({}^{4}\)Department of Materials Science and Engineering, University of Washington, Seattle, Washington 98195, USA
\({}^{5}\)Department of Physics, University of Washington, Seattle, Washington 98195, USA
|
2305.03534 | Blockchain for smart cities improvement: an architecture proposal | The combination between innovative topics and emerging technologies lets
researchers define new processes and models. New needs regard the definition of
modular and scalable approaches, with society and environment in mind. An
important topic to focus on is the smart city one. The use of emerging
technologies lets smart cities develop new processes to improve services
offered by various actors, either industries or government. Smart cities were
born to improve quality of life for citizens. To reach this goal, various
approaches have been proposed, but they lack a common interface to let each
stakeholder communicate in a simple and fast way. This paper shows the proposal
of an architecture to overcome the actual limitations of smart cities: it uses
Blockchain technology as a distributed database to let everyone join the
network and feel part of a community. Blockchain can improve processes
development for smart cities. Scalability is granted thanks to a context-aware
approach: applications do not need to know about the back-end implementation,
they just need to adapt to an interface. With Blockchain, it is possible to
collect data anonymously to make some statistical analysis, to access public
records to ensure security in the city and to guarantee the origin of products
and energy. | Marco Fiore, Marina Mongiello | 2023-05-05T13:41:18Z | http://arxiv.org/abs/2305.03534v1 | # Blockchain for smart cities improvement: an architecture proposal
###### Abstract
The combination of innovative topics and emerging technologies lets researchers define new processes and models. New needs concern the definition of modular and scalable approaches, with society and the environment in mind. An important topic to focus on is the smart city. The use of emerging technologies lets smart cities develop new processes to improve the services offered by various actors, either industries or government. Smart cities were born to improve the quality of life of citizens. To reach this goal, various approaches have been proposed, but they lack a common interface to let each stakeholder communicate in a simple and fast way. This paper proposes an architecture to overcome the current limitations of smart cities: it uses Blockchain technology as a distributed database to let everyone join the network and feel part of a community. Blockchain can improve process development for smart cities. Scalability is granted thanks to a context-aware approach: applications do not need to know about the back-end implementation, they just need to adapt to an interface. With Blockchain, it is possible to collect data anonymously for statistical analysis, to access public records to ensure security in the city, and to guarantee the origin of products and energy.
Blockchain, smart city, common interface, proposal
## I Introduction
The rise of emerging technologies defines new and improved software processes. Modeling an architecture is the first step in adapting a process to specific needs defined by functional and non-functional requirements. Innovative approaches must be designed during the definition of the architecture to create a solid and scalable system. Among the various innovative topics in Software Engineering, we focus on smart city improvement. The smart city trend is growing constantly as new and emerging technologies help it spread. A smart city's goal is to improve the quality of life of citizens, as well as to make operations easier and more efficient. To make a smart city desirable, the network should be reliable and offer high performance; moreover, privacy and encryption of data should be guaranteed, and trust should be a solid foundation of the entire process.
A typical problem in smart city development is modularity: new applications must be contextualized and developed with that specific smart city in mind. At present, there are no standards or guidelines that can adapt to every smart city.
Our proposal is based on the use of Blockchain technology to improve our ability to develop, manage and apply new software and system applications for smart cities. To illustrate the main aim of this paper, let us consider a sample scenario: suppose we want to develop a system that provides smart city services, using a single distributed database that gives citizens access to city-related information. Traditional cities can become smart without using new systems, simply by interfacing with existing ones and with the distributed databases used by other smart cities. Hence, as shown in Fig. 1, a smart city actor must have a single interface to gather different data and to use the database; this means that the interface and the implementation of an object can vary independently, being separated from one another. The implementation can be realized just once and be compliant with every other smart city that implements the proposed interface.
The paper is organized as follows: Section II lists some characteristics of Blockchain technology and presents a state-of-the-art analysis of the benefits and research directions of Blockchain applied to smart cities. Section III discusses the purpose of the paper. Section IV presents the proposal of a scalable architecture that connects each smart city actor to the Blockchain using a common interface, together with some considerations on different applications of such a system. Finally, Section V concludes the paper.
## II Blockchain characteristics
Blockchain technology belongs to the family of Distributed Ledger Technologies (DLTs): born in 2008 thanks to Satoshi Nakamoto [1], it can be conceived as a distributed database where information is stored in blocks. Each block is connected to the previous and the next one thanks to cryptographic (hash) functions. The main Blockchain features are listed below, to better understand its peculiarities.
* Decentralization: all transactions in a Blockchain are inserted and validated through a consensus protocol, then replicated across all the nodes participating in the network. In this way, there is no need for a central authority (i.e., a bank, from the financial point of view) to maintain all the transaction data.
* Immutability: transactions in a Blockchain are stored in blocks. Each block contains its own hash and the hash of the previous block, creating a chain that is immutable, because any change to a block will affect all the subsequent blocks (see the sketch after this list). An attacker cannot change the information of a block N without changing all the blocks N+1, N+2,..., M, where M is the total number of blocks in the chain. This change is computationally expensive, so it is not possible to execute it in a short amount of time.
* Transparency: the only way to update the ledger is by reaching consensus among most of the network nodes. All changes are publicly visible: this ensures transparency and security.
* Traceability: it becomes easy to trace all transactions in a Blockchain thanks to its immutability and transparency features. In this way, every transaction can be traced back to its origin.
* Trust-less: it is possible to make transactions in a Blockchain between unknown parties, even if they do not trust each other. Thanks to the absence of a central authority, it is possible to trust the validity of a transaction without knowing who was involved in it.
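To make the immutability property concrete, the following is a minimal Python sketch (with illustrative field names, not a production design) of a hash-chained ledger: each block stores the hash of its predecessor, so editing block N invalidates the hash links of every later block.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash the block's content (excluding its own hash field) deterministically."""
    payload = json.dumps({k: v for k, v in block.items() if k != "hash"},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    """Link a new block to the tip of the chain via the previous block's hash."""
    block = {
        "index": len(chain),
        "transactions": transactions,
        "prev_hash": chain[-1]["hash"] if chain else "0" * 64,
    }
    block["hash"] = block_hash(block)
    chain.append(block)

def is_valid(chain: list) -> bool:
    """Tampering with any block breaks the hash links of all later blocks."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append_block(chain, [{"from": "A", "to": "B", "amount": 3}])
append_block(chain, [{"from": "B", "to": "C", "amount": 1}])
assert is_valid(chain)
chain[0]["transactions"][0]["amount"] = 999  # attacker edits block 0
assert not is_valid(chain)                   # detected immediately
```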
Different reviews show the role of Blockchain in smart cities, with some focus on smart healthcare, smart transportation and supply chains [2]. They also underline the combination of Blockchain with other technologies such as the Internet of Things and Machine Learning [3]. Blockchain can help smart cities be more sustainable thanks to its peculiarities: a) it is immutable, so any information added to the chain cannot be modified; b) it is anonymous, meaning that everyone can join the network without worrying about privacy; c) it is trustworthy even if participants do not know each other.
The authors of [4] focus their research on Blockchain smart contracts in smart real estate. They propose a conceptual framework for the adoption of this topic in smart cities. The real estate process becomes more immersive and user-friendly, in line with Industry 4.0 requirements.
Blockchain can help smart city development [5] from both the performance and security sides. The distributed nature of this technology makes architectures more scalable and with fewer points of failure: as long as one node is active, the entire network is up. Data sharing takes advantage of this approach: education, healthcare and buildings can communicate using a single common interface. Artificial Intelligence intervenes in data management and analysis [6]: deep learning techniques can enrich green energy production [7, 8], while neural networks can improve road management [9].
The traceability characteristic of Blockchain is helpful for waste management [10] thanks to notarized documentation, compliance with laws and fleet management. It is also useful for public emergency services [11]: it can help security workers manage different anomalies, from fires to crimes.
A state-of-the-art summary is shown in Table I. The analysis of these publications raises some open challenges: a) Sustainability is an important aspect of Blockchain applied to smart cities, yet it is the topic least covered by most of the analyses [3]. b) There is a need for a single interface to the Blockchain, to create a bridge between the different actors in the smart city and a single, common distributed database. c) Security and privacy should be emphasized [12]: Blockchain preserves privacy and ensures that only authorized nodes can access sensitive information. d) The cost of deploying a complete Blockchain network in a smart city is not yet known, and it is difficult to predict [2]. e) Regulations are needed to correctly share information: smart contracts can come in handy for this.
## III Our vision
Fig. 1: Layers of the system showing a single interface in the access layer

We envision a scenario in which Blockchain is the foundation of smart city processes. Each process can be easily added to the system thanks to a common interface that embraces every aspect of the city. Information can be exchanged in JSON format, so the communication between a front-end decentralized application and the Blockchain is context-aware. Blockchain technology has the potential to play an important role in the development of smart cities. It can provide multiple advantages in many areas:
* Supply chain management: Blockchain can be used to track goods and materials through a supply chain, thus increasing transparency and reducing the risk of fraud.
* Sustainability: Blockchain can be used to manage and track the use of renewable energy. A sustainable smart city can be obtained if actors reduce their carbon footprint and promote green approaches.
* Authentication and identification: Blockchain can be used to verify identities in a secure and decentralized way, making it easier for citizens to access services and participate in civic life.
* Public records: Blockchain can be used to store and manage public records, such as property titles or licenses.
* Transportation: Blockchain can be used to manage and track the use of public transportation, helping cities optimize their transportation systems and reduce congestion. On the transport side, Blockchain can be used to gather information to improve routes, waiting times and overall services.
Overall, the role of Blockchain in smart city management is to improve different aspects, from sustainability (i.e., notarization of clean energy production) to hijacking avoidance (i.e., guaranteeing the path of a bus or a taxi, performing statistical analysis for public transport, identifying passengers). The ultimate goal is to improve the quality of life of citizens.
### _Clean energy production_
Smart buildings must be energy efficient and incorporate clean energy production technologies. There are different ways to accomplish this goal: a) solar panels can be installed on the roof of a building to capture sunlight and generate electricity; b) wind turbines can convert wind into electricity; c) storage systems, such as batteries, can store excess clean energy produced during low-demand periods. Blockchain technology can support the production of clean energy in multiple ways: a) it can track and verify energy production, to ensure that a building is sustainable; b) it can support energy trading, notarizing transactions between a building with enough energy in its storage and a building with less energy than required; c) it can help people understand whether a building is really sustainable and green (i.e., by showing a building's carbon footprint), thus letting people choose and prefer smarter and more efficient buildings.
### _Encryption of sensitive information_
On the citizens' side, information should be encrypted to ensure privacy and anonymity. The encryption process can be either symmetric or asymmetric. In this proposal, we follow an asymmetric key encryption scheme that takes advantage of the key pair already present in every Blockchain architecture. In this way, everyone can encrypt any kind of message using the recipient's public key, thus guaranteeing that only the recipient can decrypt the message using his or her own private key.
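As a concrete illustration, a minimal sketch using the Python `cryptography` package follows. RSA-OAEP is an assumption made purely for illustration: Blockchain key pairs are typically elliptic-curve signing keys, for which a hybrid scheme such as ECIES would be used in practice.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Recipient's key pair; on a Blockchain the public key also acts as the address.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Anyone can encrypt with the recipient's public key...
ciphertext = public_key.encrypt(b"certificate of residence request", oaep)
# ...but only the holder of the private key can decrypt.
assert private_key.decrypt(ciphertext, oaep) == b"certificate of residence request"
```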
In the case of public service requests, it is possible to use smart contracts to make the process secure and transparent. The authentication and request process is shown in Fig. 2.
1. The citizen requests a service from the institution (i.e., a certificate of residence). The request is managed by a smart contract. It would also be possible to directly upload documents to the InterPlanetary File System (IPFS) [13], but due to the lack of regulations and laws, we decided to let institutions keep sensitive documents.
2. The smart contract, together with the institution, authenticates the citizen and ensures that the requested certificate is obtainable.
3. The smart contract requests the document from the institution.
4. The institution returns the requested service through the same smart contract.
5. The citizen receives the requested service or document. Thanks to the smart contract's intervention, the process is transparent, secure and fast.
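A minimal plain-Python sketch of the five steps above, with the smart contract modeled as a class; the names `ServiceContract` and `Institution` are hypothetical, and a real deployment would express this logic in a contract language such as Solidity.

```python
from typing import Optional

class Institution:
    """Holds sensitive documents off-chain, as discussed in step 1."""
    def __init__(self, registry: dict):
        self.registry = registry  # citizen_id -> {doc_type: document}

    def is_registered(self, citizen_id: str) -> bool:
        return citizen_id in self.registry

    def fetch(self, citizen_id: str, doc_type: str) -> Optional[str]:
        return self.registry.get(citizen_id, {}).get(doc_type)

class ServiceContract:
    """Smart-contract stand-in mediating citizen/institution exchanges."""
    def __init__(self, institution: Institution):
        self.institution = institution
        self.log = []  # publicly auditable, on-chain trace of the process

    def request(self, citizen_id: str, doc_type: str) -> str:
        self.log.append(("request", citizen_id, doc_type))        # step 1
        if not self.institution.is_registered(citizen_id):        # step 2
            raise PermissionError("authentication failed")
        document = self.institution.fetch(citizen_id, doc_type)   # step 3
        if document is None:
            raise LookupError("requested document is not obtainable")
        self.log.append(("delivered", citizen_id, doc_type))      # steps 4-5
        return document

institution = Institution({"alice": {"residence": "Certificate #42"}})
contract = ServiceContract(institution)
print(contract.request("alice", "residence"))  # -> Certificate #42
```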
### _Statistical analysis_
The use of a distributed database such as Blockchain lets people read public records, which are stored in the chain and accessible to anyone. These data are stored anonymously, meaning that any piece of information can be related to a public key (wallet), but there is no way to link that wallet to a person. In this way, the data can be used for statistical analysis to understand how to improve the services offered to citizens. Public transport operators can easily understand whether there is a gap in the offer and improve it, knowing exactly where to act.
### _Path planning_
The process of determining an optimal route for a vehicle to travel from one location to another is defined as path planning. This approach can be used to avoid traffic congestion in smart cities [14] or to intervene quickly in case of disasters [15]. Path planning can be used for vehicles, drones and people. With Blockchain technology, it is possible to avoid hijacking: in the Internet of Drones (IoD) field, various approaches have been proposed [16, 17, 18]. They all share a common point of view: every time a drone approaches a new Point of Interest (PoI), it writes new information on the Blockchain to notarize its position. In this way, every hijacking attempt can be identified in a short time. The same approach can be considered for Autonomous Guided Vehicles (AGVs) in a smart city: AGVs can read from the Blockchain where they have to go, then create an optimal path and notarize the time of arrival. This information can further be used for statistical analysis, as underlined before.
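A minimal sketch of the notarization idea above: a drone (or AGV) appends its position to an append-only log at every PoI, and a verifier flags any deviation from the agreed path. The path coordinates and the tolerance value are illustrative assumptions.

```python
from math import dist

PLANNED_PATH = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]  # agreed PoI coordinates
TOLERANCE = 0.1  # assumed maximum allowed deviation per PoI

notarized_log = []  # stand-in for positions written to the Blockchain

def notarize(position, timestamp):
    notarized_log.append({"pos": position, "ts": timestamp})

def detect_hijack(log, planned, tolerance=TOLERANCE):
    """Flag the first PoI where the notarized position deviates from the plan."""
    for i, entry in enumerate(log):
        if i >= len(planned) or dist(entry["pos"], planned[i]) > tolerance:
            return i  # index of the suspicious PoI
    return None

notarize((0.0, 0.0), 0)
notarize((1.0, 0.5), 60)
notarize((5.0, 5.0), 120)  # drone diverted here
assert detect_hijack(notarized_log, PLANNED_PATH) == 2
```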
## IV Proposed architecture
Our contribution concerns the design of an architecture where every actor in a smart city can benefit from using Blockchain as the back end of the system. The main goal is the development of a common interface to communicate with the database, so that everyone can join the network in a secure and fast way. Smart contracts can receive any kind of data in JSON format: new actors just need to upload JSON-formatted information. Data are then managed by the contract, which gathers them and converts them into values before uploading them to the Blockchain. An architecture showing the different actors is proposed in Fig. 3.
In our scenario, each actor uses a Decentralized Application (DApp) to connect to the Blockchain. DApps are designed to be distributed and to run on multiple nodes, rather than being controlled by a single entity. Some applications of such an architecture can be summed up as follows:
* _A. Clean energy production._ For sustainability purposes, clean energy production can be notarized on the Blockchain. Everyone can verify that the energy used in a building comes from renewable sources.
* _B. Encryption of sensitive information._ Sensitive information can be encrypted using the nodes' key pairs. Autonomous shared vehicles (i.e., taxis) can use this signature to authenticate passengers and ensure that only the passenger who paid for the ride can use that vehicle.
* _C. Statistical analysis._ Public transportation operators can perform statistical analysis (i.e., preferred destinations, waiting times, etc.) to improve the offer, while still guaranteeing anonymity.
* _D. Path planning._ Path planning can be used to avoid hijacking. In the Internet of Drones field, this is a useful approach to ensure that the path followed by a drone is correct and that there is no tampering [18, 19].
Besides, every actor in the smart city can feel part of a community, easily accessing any public information in the Blockchain and exchanging messages with other actors in a transparent way.
The architecture respects the requirements for building a system process with modularity and scalability in mind, thus ensuring the high performance and reliability that are guaranteed by the presence of Blockchain.
Fig. 3: Architecture of the proposed Blockchain-based smart city

Fig. 2: Sample process for a citizen requesting a service from government institutions

A prototype is being developed to show the advantages of adopting a single access layer. To upload JSON information to the Blockchain, some context-aware smart contracts are designed, as proposed in Fig. 4. These smart contracts take the input, perform some checks on the correctness of the data, and then upload them to the Blockchain. The data are accessible by smart city actors; the retrieval process gives a JSON object as output. The specific front-end distributed application can manage the JSON output to show the information requested by the user. The described process is shown in Fig. 5: steps 1.2 to 1.3 are independent of the front-end distributed application.
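A minimal sketch of this context-aware contract logic: JSON from any actor is validated against a simple schema and persisted, and retrieval returns JSON regardless of the front end. `REQUIRED_FIELDS` and the `ledger` list are illustrative stand-ins for the real contract state.

```python
import json

REQUIRED_FIELDS = {"actor", "service", "payload"}  # assumed minimal schema
ledger = []  # stand-in for data persisted on the Blockchain

def upload(json_string: str) -> bool:
    """Validate incoming JSON and persist it; reject malformed records."""
    try:
        record = json.loads(json_string)
    except json.JSONDecodeError:
        return False
    if not REQUIRED_FIELDS.issubset(record):
        return False
    ledger.append(record)
    return True

def retrieve(actor: str) -> str:
    """Return the actor's records as JSON, independent of the front end."""
    return json.dumps([r for r in ledger if r["actor"] == actor])

assert upload('{"actor": "bus_company", "service": "waiting_times", "payload": [4, 7]}')
assert not upload('{"actor": "bus_company"}')  # schema check fails
print(retrieve("bus_company"))
```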
## V Conclusion
In this paper we propose an architecture for applying Blockchain technology in smart cities. Thanks to its characteristics, Blockchain guarantees anonymity, optional encryption of data, and traceability of goods, clean energy production and vehicles. We show how different stakeholders can benefit from using this architecture, then we make some considerations on sample workflows (i.e., the process for a citizen who requests a service or document from government institutions) and on process improvement. The context-aware approach grants scalability and lets applications interface in the same way with every service offered in the city. Future work concerns the development of a complete simulator using smart contracts to implement some smart city services, showing how each actor can interact, exchange data and access public records.
|
2301.09288 | Evidence of electron correlation and weak bulk plasmon in SrMoO$_{3}$ | We investigate the electronic structure of highly conducting perovskite
SrMoO$_{3}$ using valence band photoemission spectroscopy and electronic
structure calculations. A large intensity corresponding to the coherent
feature close to the Fermi level is captured by the density functional theory
(DFT) calculation. An
additional satellite at $\sim$ 3 eV binding energy remains absent in DFT,
hybrid functional (DFT-hybrid) and dynamical mean field theory (DFT + DMFT)
calculations. Mo 4$d$ spectra obtained with photoemission spectroscopy of
different surface sensitivities suggest different surface and bulk electronic
structures. The DFT + DMFT spectral function is in excellent agreement with the
coherent feature in the bulk Mo 4$d$ spectra, revealing moderate electron
correlation strength. A large plasmon satellite and a signature of strong
electron correlation are observed in the surface spectra, while the bulk
spectra exhibit a $weak$ plasmon satellite. | Asif Ali, B. H. Reddy, Ravi Shankar Singh | 2023-01-23T06:23:27Z | http://arxiv.org/abs/2301.09288v1 | # Evidence of electron correlation and weak bulk plasmon in SrMoO\({}_{3}\)
###### Abstract
We investigate the electronic structure of the highly conducting perovskite SrMoO\({}_{3}\) using valence band photoemission spectroscopy and electronic structure calculations. A large intensity corresponding to the coherent feature close to the Fermi level is captured by the density functional theory (DFT) calculation. An additional satellite at \(\sim\) 3 eV binding energy remains absent in DFT, hybrid functional (DFT-hybrid) and dynamical mean field theory (DFT + DMFT) calculations. Mo \(4d\) spectra obtained with photoemission spectroscopy of different surface sensitivities suggest different surface and bulk electronic structures. The DFT + DMFT spectral function is in excellent agreement with the coherent feature in the bulk Mo \(4d\) spectra, revealing moderate electron correlation strength. A large plasmon satellite and a signature of strong electron correlation are observed in the surface spectra, while the bulk spectra exhibit a _weak_ plasmon satellite.
Transition metal oxides (TMOs) exhibit diverse physical phenomena such as metal-insulator transitions [1], superconductivity [2], multiferroicity [3], giant and colossal magnetoresistance [4], non-Fermi liquid behaviour [5; 6], quantum phase transitions [7] and various exotic magnetic orders [8; 9; 10; 11; 12]. It is well recognized that electron correlation plays a crucial role in describing such exotic properties. The electron correlation is expected to be weak in \(4d\) TMOs compared to \(3d\) TMOs due to the larger spatial extension of \(4d\) orbitals. However, against this general belief, varying strengths of electron correlation have been observed in \(4d\) TMOs, leading to exotic ground states [13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23].
Among \(4d\) TMOs, molybdenum based perovskites \(A\)MoO\({}_{3}\) (\(A\) = Ca, Sr, Ba) exhibit metallic behaviour with Pauli paramagnetism [24]. Exceptionally high conductivity is found in SrMoO\({}_{3}\), associated with highly delocalized Mo \(4d^{2}\) electrons and a weak contribution from phonon scattering [25]. Resistivity measurements show a Fermi-liquid behaviour, and the Sommerfeld coefficient evaluated from specific heat measurements is twice that obtained from band structure calculations, suggesting an enhanced quasiparticle mass [25]. Such quasiparticle mass enhancement is considered a signature of electron correlation, also observed in other strongly correlated systems [8; 26]. SrMoO\({}_{3}\) exhibits two structural transitions with lowering temperature, where the room temperature cubic \(Pm\bar{3}m\) structure transforms to the tetragonal \(I4/mcm\) structure and further to the orthorhombic \(Imma\) structure at around 266 K and 124 K, respectively [27]. The low temperature pseudo-cubic structures arise due to the rotation and/or tilting of the MoO\({}_{6}\) octahedra. Phonon calculations within the DFT + \(U\) (\(U\): on-site Coulomb repulsion) framework emphasize that electron correlation is responsible for the structural transitions [28]. Also, DFT + DMFT correctly predicts the non-magnetic ground state along with the octahedra rotation for the \(Imma\) structure [29]. The quasiparticle spectral weight (\(Z\)) is found to be \(\sim\) 0.6 [29; 30], commensurate with specific heat measurements and a recent angle-resolved photoemission spectroscopy (ARPES) measurement [25; 31]. However, DFT + DMFT fails to capture an intense satellite observed at \(\sim\) 2.5 eV binding energy in hard \(x\)-ray valence band photoemission spectra. This satellite has been argued to have a plasmonic origin [32]. Various GW calculations have also predicted a plasmon satellite at about 3 eV binding energy, but with much smaller spectral weight in contrast to the experiment [33; 34; 35].
In this letter, we investigate the electronic structure of SrMoO\({}_{3}\) using valence band photoemission spectroscopy on an _in-situ_ fractured polycrystalline sample. We observe considerably different surface and bulk electronic structures. The observation of a significantly _weak_ plasmon satellite, along with moderate electron correlation in the bulk Mo \(4d\) spectra, is consistent with many-body theoretical calculations. We further discuss the surface electronic structure.
High quality polycrystalline samples of SrMoO\({}_{3}\) were prepared by the solid state reaction method using high purity MoO\({}_{3}\) (99.99 %) and SrCO\({}_{3}\) (99.995 %). The thoroughly ground mixture was pelletized and heated at 600 \({}^{\circ}\)C for 8 hours and further at 1250 \({}^{\circ}\)C for 48 hours with intermittent grindings. Heat treatments were performed under a flow of argon gas mixed with 5% hydrogen, resulting in very hard brick-red pellets. The phase purity and crystal structure were confirmed by the \(x\)-ray diffraction pattern collected at room temperature (Fig. S1 of the supplemental material (SM) [36]). The cubic lattice parameter was found to be 3.975(8) Å, in excellent agreement with earlier reports [25; 27]. Photoemission spectroscopic measurements were carried out at 30 K using monochromatic Al \(K_{\alpha}\) (\(h\nu\) = 1486.6 eV) and He ii (\(h\nu\) = 40.8 eV) radiation on the _in-situ_ fractured sample (base pressure \(\sim\) 4\(\times\)10\({}^{-11}\) mbar). The Fermi level (\(E_{F}\)) and energy resolution were determined by measuring the Fermi cut-off of clean polycrystalline silver at 30 K. The energy resolutions for the Al \(K_{\alpha}\) and He ii spectra were set to \(\sim\) 300 meV and \(\sim\) 10 meV, respectively.
Electronic structure calculations were performed for the orthorhombic \(Imma\) structure with structural parameters adopted from Ref. [27]. The full potential linearized augmented plane wave method, as implemented in wien2k [37], was used for the DFT calculation within the generalized gradient approximation of Perdew-Burke-Ernzerhof [38]. For the DFT-hybrid calculations, the screened hybrid functional (YS-PBE0) [39] was constructed by replacing the \(\alpha\) fraction of semi-local exchange with Hartree-Fock exchange, where \(\alpha\) is the mixing parameter. The energy and charge convergence criteria were set to \(10^{-5}\) eV and \(10^{-4}\) electronic charge per formula unit (f.u.), respectively. Fully charge self-consistent DFT + DMFT calculations for 50 K (\(\beta\approx 232\) eV\({}^{-1}\)) were performed using the eDMFT code with a continuous time quantum Monte Carlo impurity solver and "exact" double counting [40; 41; 42]. A hybridization window of \(\pm\) 10 eV was used, and all five \(d\) orbitals were chosen as the basis for the correlated Mo atoms. Analytical continuation was performed using the maximum entropy method to calculate the self-energy on the real axis [40]. The chosen local axes were nearly aligned with the Mo-O bond directions of the MoO\({}_{6}\) octahedra. The Hubbard \(U\) and Hund's coupling \(J\) were set to 6.0 eV and 0.7 eV, respectively. A total of 4000 \(k\)-points were used for the DFT and DFT + DMFT calculations, and 500 \(k\)-points were used for the DFT-hybrid calculation. The density of states (DOS) was calculated using 4000 \(k\)-points for all the calculations.
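As a quick numerical check (an illustrative aside, not part of the eDMFT workflow), the quoted inverse temperature follows from \(\beta=1/(k_{B}T)\) at \(T=50\) K:

```python
k_B = 8.617333262e-5  # Boltzmann constant in eV/K
T = 50.0              # temperature in K
beta = 1.0 / (k_B * T)
print(f"beta = {beta:.0f} eV^-1")  # -> beta = 232 eV^-1
```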
The atomic arrangement in orthorhombic SrMoO\({}_{3}\) is shown in Fig. 1(a). Each Mo atom is surrounded by two types of crystallographically different oxygens (two apical (O1) and four basal (O2)), while the Sr atom sits in the void created by the surrounding MoO\({}_{6}\) octahedra. The valence band is formed by the hybridization between Mo \(4d\) and O \(2p\) states, and contributions from Sr states are expected to be negligible in the occupied energy range. Valence band photoemission spectra of SrMoO\({}_{3}\) collected at 30 K using Al \(K_{\alpha}\) and He ii radiations are shown in Fig. 1(a). Both spectra exhibit two distinctly separated groups of features below and above 3.5 eV binding energy. Considering the larger photo-ionization cross-section ratio of Mo \(4d\) states to O \(2p\) states in the case of Al \(K_{\alpha}\) than in the case of He ii [43], the features below 3.5 eV binding energy can be attributed to Mo \(4d\) states and the features above 3.5 eV binding energy can be attributed to O \(2p\) states.
The attributed characters of the spectral features are further examined using the DFT calculation. The calculated partial density of states (PDOS) for the Sr, Mo, O1 and O2 atoms are shown in Fig. 1(b). It is clear that the valence band is formed by Mo \(4d\) and O \(2p\) hybridized states, while Sr states have a negligible contribution in the occupied region (appearing between -2 eV and -8 eV). There are three sets of features above \(\sim\) 3.5 eV binding energy: a feature centered at \(\sim\) 4.5 eV corresponding to non-bonding states primarily of O \(2p\) character, and features at \(\sim\) 6 eV and \(\sim\) 8 eV corresponding to bonding states of mixed Mo \(4d\) and O \(2p\) character. The anti-bonding states located between 2 eV and -2 eV binding energy primarily have Mo \(4d\)\(t_{2g}\) character, while the Mo \(4d\)\(e_{g}\) states appear between -2 eV and -7.2 eV binding energy. The average \(\angle\)Mo-O-Mo reduces to \(\sim\) 172.6\({}^{\circ}\) (\(\angle\)Mo-O1-Mo = \(\sim\) 171.3\({}^{\circ}\) and \(\angle\)Mo-O2-Mo = \(\sim\) 173.2\({}^{\circ}\)) in the orthorhombic \(Imma\) structure from 180\({}^{\circ}\) in the cubic \(Pm\bar{3}m\) structure [27]. It is to be noted here that the PDOS corresponding to O1 and O2 (\(\times\) 0.5) are almost degenerate, as evident from the figure, suggesting a negligible influence of the octahedra rotation and/or tilt on the electronic structure of SrMoO\({}_{3}\). This is also confirmed by the very similar calculated DOS and the essentially similar valence band spectra collected at 300 K and 30 K using Al \(K_{\alpha}\) (Fig. S2 of the SM [36]).
Figure 1: (color online) (a) Valence band photoemission spectra of SrMoO\({}_{3}\) collected at 30 K using Al \(K_{\alpha}\) (black circles) and He ii (red circles) radiations. The O \(2p\) band contributions (see text) are shown by the blue line. The atomic arrangement in the orthorhombic structure is shown, where red, black, pink and blue spheres represent Sr, Mo, O1 (apical) and O2 (basal) atoms, respectively. (b) DFT calculated PDOS for Sr, Mo, O1 and O2 atoms for the orthorhombic \(Imma\) structure.

The overall comparison of the experimental spectra with the DFT results reveals two significant differences: (i) a mismatch in the energy position of the O \(2p\) band and (ii) an additional satellite feature around 3 eV binding energy in the experimental spectra. The feature at \(\sim\) 5 eV binding energy in the experimental spectra appears at about 4.5 eV in the DOS calculated within DFT. This overestimation of the O \(2p\) band energies has been attributed to the underestimation of electron correlation in DFT, as also observed in various TMOs [44]. Methods beyond DFT, such as DFT-hybrid and DFT + DMFT, have been found to be quite successful in the case of TMOs with varying strengths of electron correlation. Hybrid functionals containing some part of the Hartree-Fock exchange have been very successful in correctly describing the electronic properties of many semiconductors [39; 45; 33], insulators [46; 47; 39] and correlated metallic systems [48; 49; 50]. The mixing parameter \(\alpha\) is related to the dielectric properties of a material and has been varied in order to match experimental results [51], even in the case of metals [50]. Indeed, a shift of about 0.5 eV towards higher binding energy in the O \(2p\) band position is found for \(\alpha\) = 0.10, as shown in Fig. 2. DFT-hybrid calculations for the cubic structure with \(\alpha\) varying from 0.25 to 0.10 are shown in the SM [36]. To further examine the observed shift, we have also performed a DFT + DMFT calculation. DFT + DMFT has emerged as a successful method to treat weak to strong electron correlation in many \(d\) and \(f\) electron systems [52; 53]. The total DOS calculated using DFT + DMFT is shown in Fig. 2, which also exhibits a shift of the O \(2p\) band towards higher binding energy as compared to DFT, consistent with the experimental spectra and DFT-hybrid. The reduced Mo \(4d\) bandwidth in DFT + DMFT suggests strong renormalization, with quasiparticle weight \(Z\approx 0.5\) (for the \(t_{2g}\) orbitals), consistent with earlier calculations and specific heat measurements [29; 30; 25; 31]. DFT + DMFT provides a better description of the experimental spectra, as discussed later.
Interestingly, all of the calculations discussed above fail to describe the \(\sim\) 3 eV satellite feature of the experimental spectra. Such a higher binding energy feature observed in photoemission spectra is a typical signature of the correlation induced lower Hubbard band (LHB) in strongly correlated TMOs and has been successfully captured by DFT + DMFT [54; 55; 56]. However, in the present case the satellite feature cannot be attributed to the LHB, since it remains absent in DFT + DMFT. A similar feature observed in an earlier photoemission experiment was suggested to be a plasmon satellite [32], as also observed in subsequent GW calculations [33; 34; 35].
In order to investigate further, we extract the Mo \(4d\) band contributions from the valence band spectra. This can be reliably done by subtracting the O \(2p\) band features (simulated using three Gaussians and shown as blue lines), since they are distinctly separate from the Mo \(4d\) band in the experimental and calculated valence bands shown in Fig. 1. The extracted Mo \(4d\) bands, normalized by total integrated intensity, are shown in Fig. 3 for both the Al \(K_{\alpha}\) and He ii spectra. Both spectra exhibit an intense feature close to \(E_{F}\) with a hump-like satellite feature. The relatively weak satellite feature observed here is consistent with our previous \(x\)-ray photoemission study on an _ex-situ_ thin film [57]. The He ii spectra exhibit a sharper Fermi cut-off due to the higher resolution compared to the Al \(K_{\alpha}\) spectra. For a direct comparison, we broaden the He ii spectra by a Gaussian of 0.3 eV width (\(\sim\) the energy resolution of the Al \(K_{\alpha}\) spectra), as shown by the line. Now, these two spectra having similar resolution broadening differ from each other only in the probing depth employed in the photoemission spectroscopy: the Al \(K_{\alpha}\) spectra are more bulk sensitive, while the He ii spectra are more surface sensitive. The different line shapes of these two spectra suggest that the surface and bulk electronic structures are distinctly different in this system. Lowered symmetry, reduced coordination number and/or surface reconstruction, etc., at the surface may lead to differences in the electronic structure. Thus, it is essential to disentangle the contribution of the surface to understand the intrinsic bulk electronic structure [19; 58].
Figure 3: (color online) Extracted Mo \(4d\) band of SrMoO\({}_{3}\) from the Al \(K_{\alpha}\) (black circles) and He ii (red circles) valence band spectra. The resolution broadened He ii spectra is shown by the red line.

Figure 2: (color online) Total DOS of orthorhombic SrMoO\({}_{3}\) calculated using DFT, DFT-hybrid and DFT + DMFT.

The photoemission spectral intensity for incident photon energy \(h\nu\) can be expressed as \(I_{h\nu}(E)=e^{-l/\lambda_{h\nu}}f_{b}(E)+(1-e^{-l/\lambda_{h\nu}})f_{s}(E)\), where \(l\) is the surface layer thickness, \(\lambda\) is the photoelectron mean free path (probing depth), and \(f_{b}(E)\) and \(f_{s}(E)\) represent the bulk and surface spectra, respectively. \(f_{b}(E)\) and \(f_{s}(E)\) were estimated using \(l/\lambda_{\mathrm{Al}\,K_{\alpha}}=0.45\) and \(l/\lambda_{\mathrm{He\,\textsc{ii}}}=1.75\) and are shown in Fig. 4. The obtained spectra are quite robust with respect to the energy positions and relative intensities of the features within a 20% variation of \(l/\lambda\), providing confidence in the analysis [58; 19].
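At each energy bin this decomposition reduces to a 2\(\times\)2 linear system built from the two measured spectra; a minimal numpy sketch, with synthetic spectra standing in for the measured Mo \(4d\) bands, is:

```python
import numpy as np

# Probing-depth ratios used in the text.
r_al, r_he = np.exp(-0.45), np.exp(-1.75)  # bulk weights for Al K-alpha, He II

# I_Al = r_al * f_b + (1 - r_al) * f_s
# I_He = r_he * f_b + (1 - r_he) * f_s
A = np.array([[r_al, 1.0 - r_al],
              [r_he, 1.0 - r_he]])

def decompose(I_al: np.ndarray, I_he: np.ndarray):
    """Solve for bulk and surface spectra f_b(E), f_s(E) at every energy bin."""
    f_b, f_s = np.linalg.solve(A, np.vstack([I_al, I_he]))
    return f_b, f_s

# Synthetic stand-ins for the measured (resolution-matched) spectra.
E = np.linspace(0.0, 4.0, 401)               # binding energy grid (eV)
f_b_true = np.exp(-((E - 0.7) / 0.5) ** 2)   # coherent-feature-like bulk
f_s_true = np.exp(-((E - 2.0) / 0.7) ** 2)   # satellite-like surface
I_al = r_al * f_b_true + (1 - r_al) * f_s_true
I_he = r_he * f_b_true + (1 - r_he) * f_s_true

f_b, f_s = decompose(I_al, I_he)
assert np.allclose(f_b, f_b_true) and np.allclose(f_s, f_s_true)
```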
As evident from Fig. 4, the surface and bulk spectra (normalized by total integrated intensity) are distinctly different. The intrinsic bulk spectra, exhibiting an intense coherent feature below 2 eV binding energy, are compared with the resolution broadened occupied part of the DFT + DMFT spectral function (matched at the highest intensity). The excellent agreement for the coherent feature confirms moderate electron correlation in this system. A significantly _weak_ satellite (P\({}_{B}\)) in the bulk spectra at \(\sim\) 3 eV binding energy is commensurate with the broad and weak plasmon satellite observed in various GW calculations [33; 34; 35]. It is to be noted here that the DFT, DFT-hybrid and DFT + DMFT calculations performed here fail to describe the plasmonic satellite, since these calculations do not incorporate any long-range (non-local) interactions, which are essential for the description of plasmonic excitations [59; 32].
The surface spectra exhibit a significantly reduced total width and an enhanced satellite feature (appearing below 2.5 eV) in contrast to the bulk spectra. Interestingly, the surface spectra obtained here are strikingly similar to the hard \(x\)-ray photoemission spectra of _ex-situ_ thin films [32]. Plasmon satellites in core level photoemission have been extensively studied, revealing different lineshapes and intensities for the bulk and surface plasmons. The surface plasmon satellite is expected to appear at a smaller energy (\(\sqrt{2}\) times lower) than the bulk plasmon satellite due to reduced dimensionality and/or confinement effects [60; 61]. These have indeed been observed in grazing angle core-level photoemission experiments [62], while the enhancement of surface plasmon satellites is attributed to the enhancement of extrinsic plasmon losses at the surface [63; 64]. A careful look at the surface spectra reveals that the coherent feature remains at a very similar position, while an intense and shifted satellite suggests an enhanced surface plasmon (P\({}_{S}\)) at roughly \(\sqrt{2}\) times lower energy than the bulk plasmon (P\({}_{B}\)). Deconvolution of the surface spectra, as shown in Fig. 4, requires at least two Gaussian type features in addition to the bulk coherent feature (obtained from the bulk spectra after subtracting the Gaussian type P\({}_{B}\) and matched at the highest intensity), suggesting an additional feature at \(\sim\) 1.5 eV binding energy (Gaussian peaks have been used for simplistic illustration). This additional feature can be a signature of the LHB appearing due to enhanced effective electron correlation at the surface, and requires further investigation on high-quality crystals/thin films.
In conclusion, we have investigated the electronic structure of SrMoO\({}_{3}\) using valence band photoemission spectroscopy. An accurate description of the O 2\(p\) band position is found with the DFT-hybrid and DFT + DMFT calculations. Valence band spectra obtained using different photon sources suggest that the surface and bulk electronic structures are quite different in this system. The large coherent feature in the bulk spectra is commensurate with the DFT + DMFT spectral function, suggesting moderate electron correlation in the Mo 4\(d\) orbitals. An enhanced plasmon satellite and a signature of the strong correlation induced LHB are observed in the surface spectra. The intrinsic bulk spectra exhibit a _weak_ plasmon satellite, as also observed in various GW calculations.
We thank S. K. Pandey (IIT Mandi) for fruitful discussions. We acknowledge the support of CIF and HPC facilities at IISER Bhopal. We also thankfully acknowledge the funding from DST-FIST (Project No. SR/FST/PSI-195/2014C).
|
2302.10289 | Tackling Shortcut Learning in Deep Neural Networks: An Iterative
Approach with Interpretable Models | We use concept-based interpretable models to mitigate shortcut learning.
Existing methods lack interpretability. Beginning with a Blackbox, we
iteratively carve out a mixture of interpretable experts (MoIE) and a residual
network. Each expert explains a subset of data using First Order Logic (FOL).
While explaining a sample, the FOL from biased BB-derived MoIE detects the
shortcut effectively. Finetuning the BB with Metadata Normalization (MDN)
eliminates the shortcut. The FOLs from the finetuned-BB-derived MoIE verify the
elimination of the shortcut. Our experiments show that MoIE does not hurt the
accuracy of the original BB and eliminates shortcuts effectively. | Shantanu Ghosh, Ke Yu, Forough Arabshahi, Kayhan Batmanghelich | 2023-02-20T20:25:41Z | http://arxiv.org/abs/2302.10289v9 | # Dividing and Conquering a BlackBox to a Mixture of Interpretable Models: Route, Interpret, Repeat
###### Abstract
ML model design either starts with an interpretable model or a Blackbox and explains it post hoc. Blackbox models are flexible but difficult to explain, while interpretable models are inherently explainable. Yet, interpretable models require extensive ML knowledge and tend to be less flexible and underperforming than their Blackbox variants. This paper aims to blur the distinction between a post hoc explanation of a Blackbox and constructing interpretable models. Beginning with a Blackbox, we iteratively _carve out_ a mixture of interpretable experts (MoIE) and a _residual network_. Each interpretable model specializes in a subset of samples and explains them using First Order Logic (FOL), providing basic reasoning on concepts from the Blackbox. We route the remaining samples through a flexible residual. We repeat the method on the residual network until all the interpretable models explain the desired proportion of data. Our extensive experiments show that our _route, interpret, and repeat_ approach (1) identifies a diverse set of instance-specific concepts with high concept completeness via MoIE without compromising performance, (2) identifies the relatively "harder" samples to explain via residuals, (3) outperforms the interpretable-by-design models by significant margins during test-time interventions, and (4) fixes the shortcut learned by the original Blackbox. The code for MoIE is publicly available at: [https://github.com/batmanlab/ICML-2023-Route-interpret-repeat](https://github.com/batmanlab/ICML-2023-Route-interpret-repeat).
## 1 Introduction
Model explainability is essential in high-stakes applications of AI, _e.g.,_ healthcare. While Blackbox models (_e.g.,_ Deep Learning) offer flexibility and modular design, post hoc explanation is prone to confirmation bias (Wan et al., 2022), lack of fidelity to the original model (Adebayo et al., 2018), and insufficient mechanistic explanation of the decision-making process (Rudin, 2019). Interpretable-by-design models overcome those issues but tend to be less flexible than Blackbox models and demand substantial expertise to design. Using a post hoc explanation or adopting an inherently interpretable model is a mutually exclusive decision to be made at the initial phase of AI model design. This paper blurs the line on that dichotomous model design.
The literature on post hoc explanations is extensive. This includes model attributions (Simonyan et al., 2013; Selvaraju et al., 2017), counterfactual approaches (Abid et al., 2021; Singla et al., 2019), and distillation methods (Alharbi et al., 2021; Cheng et al., 2020). Those methods either identify key input features that contribute the most to the network's output (Shrikumar et al., 2016), generate input perturbations to flip the network's output (Samek et al., 2016; Montavon et al., 2018), or estimate simpler functions to approximate the network output locally. Post hoc methods preserve the flexibility and performance of the Blackbox but suffer from a lack of fidelity and mechanistic explanation of the network output (Rudin, 2019). Without a mechanistic explanation, recourse to a model's undesirable behavior is unclear. Interpretable models are alternative designs to the Blackbox without many of these drawbacks. For example, modern interpretable methods highlight human-understandable _concepts_ that contribute to the downstream prediction.
Several families of interpretable models have existed for a long time, such as rule-based approaches and generalized additive models (Hastie and Tibshirani, 1987; Letham et al., 2015; Breiman et al., 1984). They primarily focus on tabular data. Such models for high-dimensional data (_e.g.,_ images) primarily rely on projecting to a lower-dimensional human-understandable _concept_ or _symbolic_ space (Koh et al., 2020) and predicting the output with an interpretable classifier. Despite their utility, the current State-Of-The-Art (SOTA) methods are limited in design; for example, with a few exceptions (Ciravegna et al., 2021; Barbiero et al., 2022), they do not model the interaction between the concepts, offering limited reasoning capabilities and robustness. Furthermore, if a portion of the samples does not fit the template design of the interpretable model, such models do not offer any flexibility, compromising performance.
**Our contributions** We propose an interpretable method aiming to achieve the best of both worlds: not sacrificing Blackbox performance, similar to post hoc explainability, while still providing actionable interpretation. We hypothesize that a Blackbox encodes several interpretable models, each applicable to a different portion of data. Thus, a single interpretable model may be insufficient to explain all samples. We construct a hybrid neuro-symbolic model by progressively _carving out_ a mixture of interpretable models and a _residual network_ from the given Blackbox. We coin the term _expert_ for each interpretable model, as they specialize over a subset of data. All the interpretable models together are termed a "Mixture of Interpretable Experts" (MoIE). Our design identifies a subset of samples and _routes_ them through the interpretable models to explain the samples with FOL, providing basic reasoning on concepts from the Blackbox. The remaining samples are routed through a flexible residual network. On the residual network, we repeat the method until MoIE explains the desired proportion of data. We quantify the sufficiency of the identified concepts to explain the Blackbox's prediction using the concept completeness score (Yeh et al., 2019). Using FOL for interpretable models offers recourse when undesirable behavior is detected in the model. We provide an example of fixing shortcut learning by modifying the FOL. FOL can also be used in human-model interaction (not explored in this paper). Our method is a divide-and-conquer approach, where the instances covered by the residual network need progressively more complicated interpretable models. Such insight can be used to inspect the data and the model further. Finally, our model allows for an _unexplainable_ category of data, which current interpretable models do not.
## 2 Method
**Notation:** Assume we have a dataset \(\{\mathcal{X}\), \(\mathcal{Y}\), \(\mathcal{C}\}\), where \(\mathcal{X}\), \(\mathcal{Y}\), and \(\mathcal{C}\) are the input images, class labels, and human interpretable attributes, respectively. \(f^{0}:\mathcal{X}\rightarrow\mathcal{Y}\), is our pre-trained initial Blackbox model. We assume that \(f^{0}\) is a composition \(h^{0}\circ\Phi\), where \(\Phi:\mathcal{X}\rightarrow\mathbb{R}^{l}\) is the image embeddings and \(h^{0}:\mathbb{R}^{l}\rightarrow\mathcal{Y}\) is a transformation from the embeddings, \(\Phi\), to the class labels. We denote the learnable function \(t:\mathbb{R}^{l}\rightarrow\mathcal{C}\), projecting the image embeddings to the concept space. The concept space is the space spanned by the attributes \(\mathcal{C}\). Thus, function \(t\) outputs a scalar value representing a concept for each input image.
**Method Overview:** Figure 1 summarizes our approach. We iteratively carve out an interpretable model from the given Blackbox. Each iteration yields an interpretable model (the downward grey paths in Figure 1) and a residual (the straightforward black paths in Figure 1). We start with the initial Blackbox \(f^{0}\). At iteration \(k\), we distill the Blackbox from the previous iteration \(f^{k-1}\) into a neuro-symbolic interpretable model, \(g^{k}:\mathcal{C}\rightarrow\mathcal{Y}\). Our \(g\) is flexible enough to be any interpretable model (Yuksekgonul et al., 2022; Koh et al., 2020; Barbiero et al., 2022). The _residual_ \(r^{k}=f^{k-1}-g^{k}\) emphasizes the portion of \(f^{k-1}\) that \(g^{k}\) cannot explain. We then approximate \(r^{k}\) with \(f^{k}=h^{k}\circ\Phi\). \(f^{k}\) will be the Blackbox for the subsequent iteration and be explained by the respective interpretable model. A learnable gating mechanism, denoted by \(\pi^{k}:\mathcal{C}\rightarrow\{0,1\}\) (shown as the _selector_ in Figure 1), routes an input sample towards either \(g^{k}\) or \(r^{k}\). The thickness of the lines in Figure 1 represents the proportion of samples covered by the interpretable models (grey line) and the residuals (black line). With every iteration, the cumulative coverage of the interpretable models increases, but that of the residual decreases. We name our method _route, interpret_ and _repeat_.
### Neuro-Symbolic Knowledge Distillation
Knowledge distillation in our method involves three parts: (1) a series of trainable selectors, _routing_ each sample through the interpretable models and the residual networks, (2) a sequence of learnable neuro-symbolic interpretable models, each providing FOL explanations to _interpret_ the Blackbox, and (3) _repeating_ with residuals for the samples that cannot be explained with their interpretable counterparts. We detail each component below.
Figure 1: Schematic view of _route, interpret_ and _repeat_. At iteration \(k\), the selector _routes_ each sample either towards the interpretable model \(g^{k}\) (to _interpret_) with probability \(\pi^{k}(.)\) or the residual \(r^{k}=f^{k-1}-g^{k}\) with probability \(1-\pi^{k}(.)\) (to _repeat_ in the further iterations). \(f^{k-1}\) is the Blackbox of the \((k-1)^{th}\) iteration. \(g^{k}\) generates FOL-based explanations for the samples it covers. Otherwise, the selector routes through the next step until it either goes through a subsequent interpretable model or reaches the last residual. Components in black and grey indicate the fixed and trainable modules in our model, respectively.
#### 2.1.1 The selector function
As the first step of our method, the selector \(\pi^{k}\) _routes_ the \(j^{th}\) sample through the interpretable model \(g^{k}\) or the residual \(r^{k}\) with probability \(\pi^{k}(\mathbf{c_{j}})\) and \(1-\pi^{k}(\mathbf{c_{j}})\), respectively, where \(k\in[0,K]\), with \(K\) being the number of iterations. We define the empirical coverage of the \(k^{th}\) iteration as \(\zeta(\pi^{k})=\frac{1}{m}\sum_{j=1}^{m}\pi^{k}(\mathbf{c_{j}})\), the empirical mean of the samples selected by the selector for the associated interpretable model \(g^{k}\), with \(m\) being the total number of samples in the training set. Thus, the entire selective risk is:
\[\mathcal{R}^{k}(\pi^{k},g^{k})=\frac{\frac{1}{m}\sum_{j=1}^{m}\mathcal{L}^{k}_ {(g^{k},\pi^{k})}\big{(}\mathbf{x_{j}},\mathbf{c_{j}}\big{)}}{\zeta(\pi^{k})}, \tag{1}\]
where \(\mathcal{L}^{k}_{(g^{k},\pi^{k})}\) is the optimization loss used to learn \(g^{k}\) and \(\pi^{k}\) together, discussed in Section 2.1.2. For a given coverage of \(\tau^{k}\in(0,1]\), we solve the following optimization problem:
\[\theta^{*}_{s^{k}},\theta^{*}_{g^{k}}= \operatorname*{arg\,min}_{\theta_{s^{k}},\theta_{g^{k}}}\mathcal{R}^{k}\Big{(}\pi^{k}(.;\theta_{s^{k}}),g^{k}(.;\theta_{g^{k}})\Big{)}\] \[\text{s.t.}\quad\zeta\big{(}\pi^{k}(.;\theta_{s^{k}})\big{)}\geq \tau^{k}, \tag{2}\]
where \(\theta^{*}_{s^{k}},\theta^{*}_{g^{k}}\) are the optimal parameters at iteration \(k\) for the selector \(\pi^{k}\) and the interpretable model \(g^{k}\), respectively. In this work, the \(\pi\)'s of different iterations are neural networks with sigmoid activation. At inference time, the selector routes the \(j^{th}\) sample with concept vector \(\mathbf{c_{j}}\) to \(g^{k}\) if and only if \(\pi^{k}(\mathbf{c_{j}})\geq 0.5\) for \(k\in[0,K]\).
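A minimal sketch of the empirical coverage and selective risk of Equations (1) and (2), with the coverage constraint handled as a soft quadratic penalty in the spirit of SelectiveNet; all names and the toy tensors are illustrative, and the actual optimization (Appendix A.4) involves additional loss terms:

```python
import torch

def selective_risk(loss_per_sample, pi, tau, lam=32.0):
    """Coverage-constrained selective risk, Eqs. (1)-(2), as a soft objective.

    loss_per_sample: per-sample loss L^k of the expert g^k
    pi:  selector outputs pi^k(c_j) in [0, 1]
    tau: target coverage tau^k; lam: penalty weight (32 in SelectiveNet)
    """
    coverage = pi.mean()                              # zeta(pi^k), empirical coverage
    risk = (loss_per_sample * pi).mean() / coverage   # Eq. (1)
    penalty = lam * torch.clamp(tau - coverage, min=0.0) ** 2
    return risk + penalty                             # soft form of the constraint in Eq. (2)

# Toy usage: 8 samples with random losses; gradients reach the selector logits.
logits = torch.randn(8, requires_grad=True)  # stand-in for selector parameters
pi = torch.sigmoid(logits)
objective = selective_risk(torch.rand(8), pi, tau=0.5)
objective.backward()
```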
#### 2.1.2 Neuro-Symbolic interpretable models
In this stage, we design the interpretable model \(g^{k}\) of the \(k^{th}\) iteration to _interpret_ the Blackbox \(f^{k-1}\) from the previous \((k-1)^{th}\) iteration by optimizing the following loss function:
\[\mathcal{L}^{k}_{(g^{k},\pi^{k})}(\mathbf{x_{j}},\mathbf{c_{j}})=\underbrace{\ell \big{(}f^{k-1}(\mathbf{x_{j}}),g^{k}(\mathbf{c_{j}})\big{)}\pi^{k}(c_{j})}_{ \begin{subarray}{c}\text{variable component}\\ \text{for current iteration $k$}\end{subarray}}(\underbrace{\prod_{i=1}^{k-1}(1-\pi^{i}(\mathbf{c_{j}}))}_ {\begin{subarray}{c}\text{fixed component trained}\\ \text{in the previous iterations}\end{subarray}}), \tag{3}\]
where the term \(\pi^{k}(\mathbf{c_{j}})\prod_{i=1}^{k-1}\big{(}1-\pi^{i}(\mathbf{c_{j}})\big{)}\) denotes the probability of the \(j^{th}\) sample being routed through the interpretable model \(g^{k}\). It is the probability of the sample going through the residuals for all the previous iterations from \(1\) through \(k-1\) (_i.e.,_ \(\prod_{i=1}^{k-1}\big{(}1-\pi^{i}(\mathbf{c_{j}})\big{)}\)) times the probability of going through the interpretable model at iteration \(k\) (_i.e.,_ \(\pi^{k}(\mathbf{c_{j}})\)). Refer to Figure 1 for an illustration. The selectors \(\pi^{1},\dots,\pi^{k-1}\) are learned in the prior iterations and are not trainable at iteration \(k\). As each interpretable model \(g^{k}\) specializes in explaining a specific subset of samples (denoted by coverage \(\tau\)), we refer to it as an _expert_. We use SelectiveNet's (Geifman and El-Yaniv, 2019) optimization method to optimize Equation (2), since the selectors need a rejection mechanism to route samples through the residuals. Appendix A.4 details the optimization procedure of Equation (3). After training, we refer to the interpretable experts of all the iterations cumulatively as a "Mixture of Interpretable Experts" (MoIE). Furthermore, we utilize E-LEN, _i.e.,_ a Logic Explainable Network (Ciravegna et al., 2023) implemented with an Entropy Layer as the first layer (Barbiero et al., 2022), as the interpretable symbolic model \(g\) to construct First Order Logic (FOL) explanations of a given prediction.
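A short sketch of the routing weight and the weighted distillation loss of Equation (3): the weight of expert \(k\) is its own selector probability times the product of the previous selectors' rejection probabilities. Choosing a KL divergence for \(\ell\) and all names here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def expert_weight(pis, k):
    """pi^k * prod_{i<k}(1 - pi^i): probability mass routed to expert k."""
    w = pis[k - 1]                  # pi^k (1-indexed, as in the text)
    for pi_prev in pis[:k - 1]:
        w = w * (1.0 - pi_prev)     # earlier selectors are frozen
    return w

def distillation_loss(bb_logits, g_logits, pis, k):
    """Eq. (3): per-sample distillation loss weighted by the routing mass."""
    per_sample = F.kl_div(F.log_softmax(g_logits, dim=1),
                          F.softmax(bb_logits, dim=1),
                          reduction="none").sum(dim=1)
    return (per_sample * expert_weight(pis, k)).mean()

# Toy usage at iteration k = 2: 4 samples, 3 classes.
pis = [torch.rand(4).detach(),                              # pi^1, frozen
       torch.sigmoid(torch.randn(4, requires_grad=True))]   # pi^2, trainable
loss = distillation_loss(torch.randn(4, 3), torch.randn(4, 3), pis, k=2)
loss.backward()
```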
#### 2.1.3 The Residuals
The last step is to _repeat_ with the residual \(r^{k}\), as \(r^{k}(\mathbf{x_{j}},\mathbf{c_{j}})=f^{k-1}(\mathbf{x_{j}})-g^{k}(\mathbf{c_{j}})\). We train \(f^{k}=h^{k}\big{(}\Phi(.)\big{)}\) to approximate the residual \(r^{k}\), creating a new Blackbox \(f^{k}\) for the next iteration \((k+1)\). This step is necessary to specialize \(f^{k}\) over samples not covered by \(g^{k}\). Optimizing the following loss function yields \(f^{k}\) for the \(k^{th}\) iteration:
\[\mathcal{L}^{k}_{f}(\mathbf{x_{j}},\mathbf{c_{j}})=\underbrace{\ell\big{(}r^{k}(\mathbf{x _{j}},\mathbf{c_{j}}),f^{k}(\mathbf{x_{j}})\big{)}}_{\begin{subarray}{c}\text{ trainable component}\\ \text{for iteration $k$}\end{subarray}}\underbrace{\prod_{i=1}^{k}\big{(}1-\pi^{i}(\mathbf{c_{j}}) \big{)}}_{\begin{subarray}{c}\text{non-trainable component}\\ \text{for iteration $k$}\end{subarray}} \tag{4}\]
Notice that we fix the embedding \(\Phi(.)\) for all the iterations. Due to computational overhead, we only finetune the last few layers of the Blackbox (\(h^{k}\)) to train \(f^{k}\). At the final iteration \(K\), our method produces a MoIE and a Residual, explaining the interpretable and uninterpretable components of the initial Blackbox \(f^{0}\), respectively. Appendix A.5 describes the training procedure of our model, the extraction of FOL, and the architecture of our model at inference.
**Selecting the number of iterations \(K\):** We follow two principles to select the number of iterations \(K\) as a stopping criterion: 1) each expert should have enough data to be trained reliably (coverage \(\zeta^{k}\)); if an expert covers an insufficient number of samples, we stop the process. 2) If the final residual (\(r^{K}\)) underperforms a threshold, it is not reliable to distill from the Blackbox, and we stop the procedure to ensure that overall accuracy is maintained.
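Putting the pieces together, the overall procedure and its two stopping criteria can be sketched as follows; `train_expert` and `train_residual` are hypothetical stubs standing in for the optimization of Equations (1)-(4), and `MIN_COVERAGE` is an illustrative threshold:

```python
import random

MIN_COVERAGE = 0.05      # illustrative: an expert needs enough data (criterion 1)
MIN_RESIDUAL_ACC = 0.70  # residual-accuracy threshold used in this paper

# Toy stubs standing in for the optimization of Eqs. (1)-(4).
def train_expert(f_prev, data, selectors):
    pi_k = {"coverage": random.uniform(0.0, 0.4)}
    g_k = {"name": f"g^{len(selectors) + 1}"}
    return pi_k, g_k

def train_residual(f_prev, g_k, selectors, data):
    return {"name": "f^k", "accuracy": random.uniform(0.5, 1.0)}

def route_interpret_repeat(f0, data, max_iters=10):
    experts, selectors, f_prev = [], [], f0
    for _ in range(max_iters):
        pi_k, g_k = train_expert(f_prev, data, selectors)
        if pi_k["coverage"] < MIN_COVERAGE:
            break                              # criterion 1: too few samples
        selectors.append(pi_k)
        experts.append(g_k)
        f_k = train_residual(f_prev, g_k, selectors, data)
        if f_k["accuracy"] < MIN_RESIDUAL_ACC:
            break                              # criterion 2: residual unreliable
        f_prev = f_k
    return experts, selectors, f_prev          # MoIE + final residual

experts, selectors, residual = route_interpret_repeat({"name": "f^0"}, data=None)
print(f"{len(experts)} experts carved out; final model: {residual['name']}")
```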
\begin{table}
\begin{tabular}{l c c} \hline \hline DATASET & BLACKBOX & \# EXPERTS \\ \hline CUB-200 (Wah et al., 2011) & RESNET101 (He et al., 2016) & 6 \\ CUB-200 (Wah et al., 2011) & VIT (Wang et al., 2021) & 6 \\ AWA2 (Xian et al., 2018) & RESNET101 (He et al., 2016) & 4 \\ AWA2 (Xian et al., 2018) & VIT (Wang et al., 2021) & 6 \\ HAM10000 (Tschandl et al., 2018) & INCEPTION (Szegedy et al., 2015) & 6 \\ SIIM-ISIC (Rotemberg et al., 2021) & INCEPTION (Szegedy et al., 2015) & 6 \\ EFFUSION IN MIMIC-CXR (Johnson et al., 2019) & DENSENET121 (Huang et al., 2017) & 3 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Datasets and Blackboxes.
## 3 Related work
**Post hoc explanations:** Post hoc explanations retain the flexibility and performance of the Blackbox. Post hoc explanation has many categories, including feature attribution (Simonyan et al., 2013; Smilkov et al., 2017; Binder et al., 2016) and counterfactual approaches (Singla et al., 2019; Abid et al., 2021). For example, feature attribution methods associate a measure of importance to features (e.g., pixels) that is proportional to the feature's contribution to the Blackbox's predicted output. Many methods were proposed to estimate the importance measure, including gradient-based methods (Selvaraju et al., 2017; Sundararajan et al., 2017) and a game-theoretic approach (Lundberg and Lee, 2017). The post hoc approaches suffer from a lack of fidelity to the input (Adebayo et al., 2018) and ambiguity in the explanation due to a lack of correspondence to human-understandable concepts. Recently, Posthoc Concept Bottleneck Models (PCBMs) (Yuksekgonul et al., 2022) learn the concepts from a trained Blackbox embedding and use an interpretable classifier for classification. Also, they fit a residual in their hybrid variant (PCBM-h) to mimic the performance of the Blackbox. We will compare against the performance of the PCBM method. Another major shortcoming is that, due to the lack of a mechanistic explanation, post hoc explanations do not provide a recourse when an undesirable property of a Blackbox is identified. Interpretable-by-design models provide a remedy to those issues (Rudin, 2019).
**Concept-based interpretable models:** Our approach falls into the category of concept-based interpretable models. Such methods provide a mechanistically interpretable prediction that is a function of human-understandable concepts. The concepts are usually extracted from the activations of the middle layers of the neural network (bottleneck). Examples include Concept Bottleneck Models (CBMs) (Koh et al., 2020), an ante-hoc concept decoder (Sarkar et al., 2022), and high-dimensional Concept Embedding Models (CEMs) (Zarlenga et al., 2022), which use high-dimensional concept embeddings to allow extra supervised learning capacity and achieve SOTA performance in the interpretable-by-design class. Most concept-based interpretable models do not model the interaction between concepts and cannot be used for reasoning. An exception is E-LEN (Barbiero et al., 2022), which uses an entropy-based approach to derive explanations in terms of FOL using the concepts. The underlying assumption of those methods is that one interpretable function can explain the entire set of data, which can limit flexibility and consequently hurt the performance of the models. Our approach relaxes that assumption by allowing multiple interpretable functions and a residual. Each function is appropriate for a portion of the data, and a small portion of the data is allowed to be uninterpretable by the model (_i.e.,_ the residual). We will compare our method with CBMs, CEMs, and their E-LEN-enhanced variants.
**Application in fixing the shortcut learning:** Shortcuts are spurious features that correlate with both input and the label on the training dataset but fail to generalize in more challenging real-world scenarios. Explainable AI (X-AI) aims to identify and fix such an undesirable property. Related work in X-AI includes LIME (Ribeiro et al., 2016), utilized to detect spurious background as a shortcut to classify an
Figure 2: MoIE identifies diverse concepts for specific subsets of a class, unlike the generic ones by the baselines. **(i)** We construct the FOL explanations of the samples of "Bay breasted warbler" in the CUB-200 dataset for VIT-based **(a)** CBM + E-LEN as an _interpretable-by-design_ baseline, **(b)** PCBM + E-LEN as a _posthoc_ baseline, **(c)** experts in MoIE at inference. We highlight the unique concepts for experts 1, 2, and 3 in _red, blue_, and _magenta_, respectively. **(ii)** Comparison of FOL explanations by MoIE with the PCBM + E-LEN baselines for HAM10000 **(top)** and ISIC **(bottom)** to classify a Malignant lesion. We highlight unique concepts for experts 3, 5, and 6 in _red, blue_, and _violet_, respectively. For brevity, we combine FOLs for each expert for the samples covered by them.
animal. Recently, interpretable models (Rosenzweig et al., 2021) involving local image patches have been used as proxies to the Blackbox to identify shortcuts. However, both methods operate in pixel space, not concept space. Also, both approaches are post hoc and do not provide a way to eliminate the shortcut learning problem. Our MoIE discovers shortcuts using the high-level concepts in the FOL explanation of the Blackbox's prediction and eliminates them via metadata normalization (MDN) (Lu et al., 2021).
## 4 Experiments
We perform experiments on a variety of vision and medical imaging datasets to show that 1) MoIE captures a diverse set of concepts, 2) the performance of the residuals degrades over successive iterations as they cover "harder" instances, 3) MoIE does not compromise the performance of the Blackbox, 4) MoIE achieves superior performances during test time interventions, and 5) MoIE can fix the shortcuts using the Waterbirds dataset (Sagawa et al., 2019). We repeat our method until MoIE covers at least 90% of samples or the final residual's accuracy falls below 70%. Refer to Table 1 for the datasets and Blackboxes experimented with. For ResNets and Inception, we flatten the feature maps from the last convolutional block to extract the concepts. For VITs, we use the image embeddings from the transformer encoder to perform the same. We use SIIM-ISIC as a real-world transfer learning setting, with the Blackbox trained on HAM10000 and evaluated on a subset of the SIIM-ISIC Melanoma Classification dataset (Yuksekgonul et al., 2022). Appendix A.6 and Appendix A.7 expand on the datasets and hyperparameters.
**Baselines:** We compare our methods to two concept-based baselines - 1) interpretable-by-design and 2) posthoc. They consist of two parts: a) a concept predictor \(\Phi:\mathcal{X}\rightarrow\mathcal{C}\), predicting concepts from images; and b) a label predictor \(g:\mathcal{C}\rightarrow\mathcal{Y}\), predicting labels from the concepts. The end-to-end CEMs and sequential CBMs serve as interpretable-by-design baselines. Similarly, PCBM and PCBM-h serve as post hoc baselines. Convolution-based \(\Phi\) includes all layers till the last convolution block. VIT-based \(\Phi\) consists of the transformer encoder block. The standard CBM and PCBM models do not show how the concepts are composed to make the label prediction. So, we create CBM + E-LEN, PCBM + E-LEN and PCBM-h + E-LEN by using the identical \(g\) of MOIE (shown in Appendix A.7), as a replacement for the standard classifiers of CBM and PCBM. We train the \(\Phi\) and \(g\) in these new baselines to sequentially generate FOLs (Barbiero et al., 2022). Due to the unavailability of concept annotations, we extract the concepts from the Derm7pt dataset (Kawahara et al., 2018) using the pretrained embeddings of the Blackbox (Yuksekgonul et al., 2022) for HAM10000. Thus, we do not have interpretable-by-design baselines for HAM10000 and ISIC.
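For concreteness, the two-stage baseline training can be sketched as follows; this is a minimal PyTorch illustration under our own naming (`ConceptPredictor`, `LabelPredictor`, and a loader yielding embedding/concept/label triples are assumptions for illustration), not the exact training code of any baseline.

```python
import torch
import torch.nn as nn


class ConceptPredictor(nn.Module):
    """Phi: Blackbox embedding -> concept probabilities."""

    def __init__(self, feat_dim: int, n_concepts: int):
        super().__init__()
        self.head = nn.Linear(feat_dim, n_concepts)

    def forward(self, feats):
        return torch.sigmoid(self.head(feats))


class LabelPredictor(nn.Module):
    """g: concepts -> class logits."""

    def __init__(self, n_concepts: int, n_classes: int):
        super().__init__()
        self.head = nn.Linear(n_concepts, n_classes)

    def forward(self, concepts):
        return self.head(concepts)


def train_sequential(phi, g, loader, epochs=1):
    """Stage 1: fit phi on concept annotations; stage 2: fit g on
    phi's detached concept predictions (the sequential recipe)."""
    opt_phi = torch.optim.Adam(phi.parameters(), lr=1e-3)
    bce = nn.BCELoss()
    for _ in range(epochs):
        for feats, concepts, _ in loader:
            opt_phi.zero_grad()
            bce(phi(feats), concepts).backward()
            opt_phi.step()

    opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for feats, _, labels in loader:
            opt_g.zero_grad()
            ce(g(phi(feats).detach()), labels).backward()
            opt_g.step()
```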
### Results
#### 4.1.1 Expert driven explanations by MoIE
First, we show qualitatively that MoIE captures a rich set of diverse instance-specific concepts. Next, we show quantitatively that MoIE-identified concepts are faithful to the Blackbox's final prediction, using the "completeness score" metric and by zeroing out relevant concepts.
**Heterogeneity of Explanations:** At each iteration of MoIE, the Blackbox \(h^{k}(\Phi(.))\) splits into an interpretable expert (\(g^{k}\)) and a residual (\(r^{k}\)). Figure 2i shows this mechanism for VIT-based MoIE and compares the FOLs with the CBM + E-LEN and PCBM + E-LEN baselines to classify "Bay Breasted Warbler" of CUB-200. The experts of different iterations specialize in specific instances of "Bay Breasted Warbler". Thus, each expert's FOL comprises its instance-specific concepts of the same class (Figure 2i-c). For example, the concept _leg_color_grey_ is unique to expert 4, but _belly_pattern_solid_ and _back_pattern_multicolored_ are unique to experts 1 and 2, respectively, to classify the instances of "Bay Breasted Warbler". Unlike MoIE, the baselines employ a single interpretable model \(g\), resulting in a generic FOL with identical concepts for all the samples of "Bay Breasted Warbler" (Figure 2i(a-b)). Thus the baselines fail to capture the heterogeneity of explanations. For additional results on CUB-200, refer to Appendix A.10.6.
Figure 2ii shows such diverse explanations for HAM10000 (_top_) and ISIC (_bottom_). In Figure 2ii-(top), the baseline-FOL consists of concepts such as _AtypicalPigmentNetwork_ and _BlueWhitishVeil (BWV)_ to classify "Malignancy" for all the instances of HAM10000. However, expert 3 relies on _RegressionStructures_ along with \(BWV\) to classify the same for the samples it covers, while expert 5 utilizes several other concepts, _e.g._, _IrregularStreaks_ and _Irregular dots and globules (IrregularDG)_. Due to space constraints, Appendix A.10.7 reports similar results for the Awa2 dataset. Also, VIT-based experts compose fewer concepts per sample than the ResNet-based experts, as shown in Appendix A.10.8.
**MoIE-identified concepts attain higher completeness scores.** Figure 5(a-b) shows the completeness scores (Yeh et al., 2019) for a varying number of concepts. The completeness score is a post hoc measure, signifying that the identified concepts are a "sufficient statistic" of the predictive capability of the Blackbox. Recall that \(g\) utilizes E-LEN (Barbiero et al., 2022), associating each concept with an attention weight after training. A concept with a high attention weight implies high predictive significance. Iteratively, we select the top relevant concepts based on their attention weights and compute the completeness scores for the top concepts for MoIE and the PCBM + E-LEN baseline in Figure 5(a-b) (see Appendix A.8 for details). For example, MoIE achieves a completeness score of 0.9 compared to 0.75 for the baseline (\(\sim 20\%\uparrow\)) for the 10 most significant concepts on the CUB-200 dataset with VIT as the Blackbox.
**MoIE identifies more meaningful instance-specific concepts.** Figure 5(c-d) reports the drop in accuracy from zeroing out the significant concepts. Any interpretable model (\(g\)) supports concept-intervention (Koh et al., 2020). After identifying the top concepts from \(g\) using the attention weights, as in the last section, we set these concepts' values to zero, compute the model's accuracy drop, and plot it in Figure 5(c-d). When zeroing out the top 10 essential concepts for VIT-based CUB-200 models, MoIE records a drop of 53% compared to 28% and 42% for the CBM + E-LEN and PCBM + E-LEN baselines, respectively, showing the faithfulness of the identified concepts to the prediction.
In both of the last experiments, MoIE outperforms the baselines as the baselines mark the same concepts as significant for all samples of each class. However, MoIE leverages various experts specializing in different subsets of samples of different classes. For results of MIMIC-CXR and Awa2, refer to Appendix A.10.2 and Appendix A.10.4 respectively.
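The zeroing-out protocol above can be summarized in a short sketch; the function names and the assumption that \(g\) consumes a concept matrix directly are ours, not the paper's implementation.

```python
import torch


def accuracy(g, concepts, labels):
    return (g(concepts).argmax(dim=-1) == labels).float().mean().item()


def drop_after_zeroing(g, attn_weights, concepts, labels, top_k):
    """Zero out the top_k concepts ranked by g's attention weights and
    report the resulting accuracy drop."""
    base = accuracy(g, concepts, labels)
    top = torch.topk(attn_weights, top_k).indices
    ablated = concepts.clone()
    ablated[:, top] = 0.0  # remove the most relevant concepts
    return base - accuracy(g, ablated, labels)
```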
#### 4.1.2 Identification of harder samples by successive residuals
Figure 3(a-c) displays the proportional accuracy of the experts and the residuals of our method per iteration. The proportional accuracy of each model (experts and/or residuals) is defined as the accuracy of that model times its coverage.
Figure 4: Test time interventions of concepts across architectures: on all the samples **(a-d)** and on the "hard" samples **(e)** covered by only the last two experts of MoIE.
Figure 3: The performance of experts and residuals across iterations. **(a-c)** Coverage and proportional accuracy of the experts and residuals. **(d-f)** We route the samples covered by the residuals across iterations to the initial Blackbox \(f^{0}\) and compare the accuracy of \(f^{0}\) (red bar) with the residual (blue bar). Figures **d-f** show the progressive decline in performance of the residuals across iterations as they cover samples in increasing order of "hardness". We observe similarly poor performance of the initial Blackbox \(f^{0}\) on these samples.
Recall that a model's coverage is the empirical mean of the samples selected by its selector. Figure 3(a) shows that the experts and residual cumulatively achieve an accuracy of \(\sim\) 0.92 for the CUB-200 dataset in iteration 1, with more contribution from the residual (black bar) than expert 1 (blue bar). Later iterations cumulatively increase the performance of the experts and worsen that of the corresponding residuals. The final iteration carves out the entire interpretable portion from the Blackbox \(f^{0}\) via all the experts, resulting in their more significant contribution to the cumulative performance. The residual of the last iteration covers the "hardest" samples, achieving low accuracy. Tracing these samples back to the original Blackbox \(f^{0}\), it also classifies these samples poorly (Figure 3(d-f)). As shown in the coverage plot, this experiment reinforces Figure 1, where the flow through the experts gradually becomes thicker compared to the narrower flow of the residual with every iteration. Refer to Figure 12 in Appendix A.10.3 for the results of the ResNet-based MoIEs.
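As a small illustration of the metric used in these plots (names ours):

```python
def proportional_accuracy(accuracy: float, coverage: float) -> float:
    """Accuracy of an expert (or residual) weighted by the empirical
    fraction of samples routed to it."""
    return accuracy * coverage


# e.g., an expert with 0.95 accuracy on 40% of the samples contributes
# proportional_accuracy(0.95, 0.40) == 0.38 to the cumulative performance.
```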
#### 4.1.3 Quantitative analysis of MoIE with the Blackbox and baselines
**Comparing with the interpretable-by-design baselines:** Table 2 shows that MoIE achieves performance comparable to the Blackbox. Recall that "MoIE" refers to the mixture of all interpretable experts (\(g\)) only, excluding any residuals. MoIE outperforms the interpretable-by-design baselines for all the datasets except Awa2. Since Awa2 is designed for zero-shot learning, its rich concept annotation makes it appropriate for interpretable-by-design models. In general, VIT-derived MoIEs perform better than their ResNet-based variants.
**Comparing with the PCBMs:** Table 2 shows that interpretable MoIE outperforms the interpretable posthoc baselines - PCBM and PCBM + E-LEN for all the datasets, especially by a significant margin for CUB-200 and ISIC. We also report "MoIE + Residual" as the mixture of interpretable experts plus the final residual to compare with the residualized PCBM, _i.e.,_ PCBM-h. Table 2 shows that PCBM-h performs slightly better than MoIE + Residual. Note that PCBM-h learns the residual by fitting the complete dataset to fix the interpretable PCBM's mistakes to replicate the performance of the Blackbox, resulting in better performance for PCBM-h than PCBM. However, we assume the Blackbox to be a combination of interpretable and uninterpretable components. So, we train the experts and the final residual to cover the interpretable and uninterpretable portions of the Blackbox respectively. In each iteration, our method learns the residuals to focus on the samples, which are not covered by the respective interpretable experts. Therefore, residuals are not designed to fix the mistakes made by the experts. In doing so, the final residual in MoIE + Residual covers the "hardest" examples, lowering its overall performance compared to MoIE.
#### 4.1.4 Test time interventions
Figure 4(a-d) shows the effect of test time interventions. Any concept-based model (Koh et al., 2020; Zarlenga et al., 2022) allows test time interventions for datasets with concept annotation (_e.g.,_ CUB-200, Awa2). We identify the significant concepts via their attention scores in \(g\), as during the computation of completeness scores, and set their values to the ground truths, considering the ground truth concepts as an oracle. As MoIE identifies a more diverse set of concepts by focusing on different subsets of classes, MoIE
\begin{table}
[Table 2 reports, for each dataset (CUB-200, CUB-200 (VIT), Awa2, Awa2 (VIT), HAM10000, SIIM-ISIC, and Effusion), the performance of the Blackbox, the interpretable-by-design baselines (CBM, CEM, and CBM + E-LEN), the posthoc baselines (PCBM, PCBM-h, PCBM + E-LEN, and PCBM-h + E-LEN), and our MoIE (with its coverage) and MoIE + Residual.]
\end{table}
Table 2: MoIE does not hurt the performance of the original Blackbox on a held-out test set. We provide the mean and standard errors of AUROC and accuracy for medical imaging (_e.g.,_ HAM10000, ISIC, and Effusion) and vision (_e.g.,_ CUB-200 and Awa2) datasets, respectively, over 5 random seeds. For MoIE, we also report the percentage of test set samples covered by all experts as "coverage". Here, MoIE + Residual represents the experts together with the final residual. Following the setting of (Zarlenga et al., 2022), we only report the performance of the convolutional CEM, leaving the construction of a VIT-based CEM as future work. Recall that interpretable-by-design models cannot be constructed for HAM10000 and ISIC as they have no concept annotation; we learn the concepts from the Derm7pt dataset. For all the datasets, MoIE covers a significant portion of data (at least 90%) cumulatively. We boldface our results.
outperforms the baselines in terms of accuracy for such test time interventions. Instead of manually deciding which samples to intervene on, it is generally preferred to intervene on the "harder" samples, making the process efficient. As per Section 4.1.2, experts of different iterations cover samples in increasing order of "hardness". To intervene efficiently, we perform identical test-time interventions with varying numbers of concepts for the "harder" samples covered by the final two experts and plot the accuracy in Figure 4(e). For the VIT-derived MoIE of CUB-200, intervening on only 20 concepts enhances the accuracy of MoIE from 91% to 96% (\(\sim 6.1\%\) \(\uparrow\)). We cannot perform the same for the baselines as they cannot directly identify the "harder" samples. Also, Figure 4 shows a relatively higher gain for ResNet-based models in general. Appendix A.10.5 demonstrates an example of test time intervention of concepts for relatively "harder" samples, identified by the last two experts of MoIE.
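A minimal sketch of such a test time intervention, assuming access to oracle concept values and the per-concept attention weights of an expert \(g\) (all names are illustrative):

```python
import torch


def intervene(g, concepts, true_concepts, attn_weights, n_intervene):
    """Overwrite the n_intervene concepts with the highest attention
    weights by their oracle (ground-truth) values and re-predict."""
    top = torch.topk(attn_weights, n_intervene).indices
    corrected = concepts.clone()
    corrected[:, top] = true_concepts[:, top]
    return g(corrected).argmax(dim=-1)
```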
#### 4.1.5 Application in the removal of shortcuts
First, we create the Waterbirds dataset as in (Sagawa et al., 2019) by using forest and bamboo as the spurious land concepts of the Places dataset for landbirds of the CUB-200 dataset. We do the same by using oceans and lakes as the spurious water concepts for waterbirds. We utilize ResNet50 as the Blackbox \(f^{0}\) to identify each bird as a Waterbird or a Landbird. The Blackbox quickly latches on to the spurious backgrounds to classify the birds. As a result, the Blackbox's accuracy differs for land-based versus aquatic subsets of the bird species, as shown in Figure 6a. A Waterbird on water is classified more accurately than one on land (96% vs. 67%, orange bars in Figure 6a). The FOL from the biased Blackbox-derived MoIE captures the spurious concept _forest_ for a waterbird misclassified as a landbird. Treating the background concepts as metadata, we minimize the background bias in the representation of the Blackbox using Metadata Normalization (MDN) layers (Lu et al., 2021) between two successive layers of the convolutional backbone to fine-tune the biased Blackbox. Next, we train \(t\), using the embedding \(\Phi\) of the robust Blackbox, and compare the accuracy of the spurious concepts with that of the biased Blackbox in Figure 6d. The validation accuracy of all the spurious concepts retrieved from the robust Blackbox falls well short of the predefined 70% threshold, unlike for the biased Blackbox. Finally, we re-train the MoIE, distilling from the new robust Blackbox. Figure 6b illustrates similar accuracies of MoIE for Waterbirds on water vs. Waterbirds on land (91% vs. 88%). The FOL from the robust Blackbox does not include any background concepts (Figure 6c, bottom row). Refer to Figure 8 in Appendix A.9 for the flow diagram of this experiment.
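For illustration, a simplified per-batch variant of such a metadata-normalization layer could look as follows; the actual MDN of Lu et al. (2021) precomputes the regression statistics over the whole dataset, so this is a sketch rather than their implementation.

```python
import torch
import torch.nn as nn


class MetadataNorm(nn.Module):
    """Regress the features on the metadata (e.g., one-hot background
    indicators) and keep only the residual, removing the metadata's
    linear contribution to the representation."""

    def forward(self, feats, metadata):
        # feats: (B, D); metadata: (B, K)
        beta = torch.linalg.lstsq(metadata, feats).solution  # (K, D)
        return feats - metadata @ beta
```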
Figure 5: Quantitative validation of the extracted concepts. **(a-b)** Completeness scores of the models for a varying number of top concepts. **(c-d)** Drop in accuracy compared to the original model after zeroing out the top significant concepts iteratively. The highest drop for MoIE indicates that MoIE selects more instance-specific concepts than generic ones by the baselines.
Figure 6: MoIE fixes shortcuts. **(a)** Performance of the biased Blackbox. **(b)** Performance of the final MoIE extracted from the robust Blackbox after removing the shortcuts using MDN. **(c)** Examples of samples **(top row)** and their explanations by the biased **(middle row)** and robust **(bottom row)** Blackboxes. **(d)** Comparison of accuracies of the spurious concepts extracted from the biased vs. the robust Blackbox.
## 5 Discussion & Conclusions
This paper proposes a novel method to iteratively extract a mixture of interpretable models from a flexible Blackbox. The comprehensive experiments on various datasets demonstrate that our method 1) captures more meaningful instance-specific concepts with higher completeness scores than the baselines without losing the performance of the Blackbox, 2) does not require explicit concept annotation, 3) identifies the "harder" samples using the residuals, 4) achieves significant performance gains over the baselines during test time interventions, and 5) eliminates shortcuts effectively. In the future, we aim to apply our method to other modalities, such as text or video. Also, as in prior work, MoIE-captured concepts may not reflect a causal effect. The assessment of causal concept effects necessitates estimating inter-concept interactions, which will be the subject of future research.
## 6 Acknowledgement
We would like to thank Mert Yuksekgonul of Stanford University for providing the code to construct the concept bank of Derm7pt to conduct the skin experiments. This work was partially supported by NIH Award Number 1R01HL141813-01 and the Pennsylvania Department of Health. We are grateful for the computational resources provided by Pittsburgh Supercomputing grant number TG-ASC170024.
|
2307.02053 | Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN
Fine-Tuning | Recently, the release of INSTRUCTEVAL has provided valuable insights into the
performance of large language models (LLMs) that utilize encoder-decoder or
decoder-only architecture. Interestingly, despite being introduced four years
ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest
decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general
problem-solving skills. This performance discrepancy can be attributed to three
key factors: (1) Pre-training data, (2) Backbone architecture, and (3)
Instruction dataset. In this technical report, our main focus is on
investigating the impact of the third factor by leveraging VICUNA, a large
language model based on LLAMA, which has undergone fine-tuning on ChatGPT
conversations. To achieve this objective, we fine-tuned VICUNA using a
customized instruction dataset collection called FLANMINI. This collection
includes a subset of the large-scale instruction dataset known as FLAN, as well
as various code-related datasets and conversational datasets derived from
ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand
problem-solving skills. Our experimental findings strongly indicate that the
enhanced problem-solving abilities of our model, FLACUNA, are obtained through
fine-tuning VICUNA on the FLAN dataset, leading to significant improvements
across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly
available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria | 2023-07-05T06:36:54Z | http://arxiv.org/abs/2307.02053v1 | # Flacuna: Unleashing the Problem Solving Power of Vicuna using Flan Fine-Tuning
###### Abstract
Recently, the release of InstructEval[4] has provided valuable insights into the performance of large language models (LLMs) that utilize encoder-decoder or decoder-only architecture. Interestingly, despite being introduced four years ago, T5-based LLMs, such as Flan-T5, continue to outperform the latest decoder-based LLMs, such as LLaMA and Vicuna, on tasks that require general problem-solving skills. This performance discrepancy can be attributed to three key factors: (1) Pre-training data, (2) Backbone architecture, and (3) Instruction dataset. In this technical report, our main focus is on investigating the impact of the third factor by leveraging Vicuna, a large language model based on LLaMA, which has undergone fine-tuning on ChatGPT conversations. To achieve this objective, we fine-tuned Vicuna using a customized instruction dataset collection called Flan-mini. This collection includes a subset of the large-scale instruction dataset known as Flan, as well as various code-related datasets and conversational datasets derived from ChatGPT/GPT-4. This dataset comprises a large number of tasks that
demand problem-solving skills. Our experimental findings strongly indicate that the enhanced problem-solving abilities of our model, Flacuna, are obtained through fine-tuning Vicuna on the Flan dataset, leading to significant improvements across numerous benchmark datasets in InstructEval. Flacuna is publicly available at [https://huggingface.co/declare-lab/flacuna-13b-v1.0](https://huggingface.co/declare-lab/flacuna-13b-v1.0).
## 1 Introduction
ChatGPT and its successor GPT-4 have surpassed their prior state-of-the-art models on a vast majority of the benchmarking tasks and datasets. However, to preserve privacy, natively running a 175B+ sized model like GPT-3 is beyond the capabilities of most organizations, let alone individuals. This has prompted many researchers to fine-tune manageable-sized LLMs -- from 7B to 30B -- on a diverse set of instruction examples generated by ChatGPT or GPT-4. This has birthed LLMs, such as Alpaca (Taori et al., 2023) and Vicuna (Chiang et al., 2023), that are fine-tuned checkpoints of LLaMA (Touvron et al., 2023). These models have attained close to ChatGPT-level performance on some specific benchmarking tasks, but overall generalization still remains elusive. Recent works like InstructEval (Chia et al., 2023) strongly hint that the fine-tuning datasets dictate the task-specific performances. For instance, it has been observed that Flan-T5 -- a T5 checkpoint fine-tuned on the Flan Collection instruction dataset -- outperforms Vicuna and Alpaca on tasks involving strong reasoning and problem-solving skills. This spurred us to fine-tune Vicuna on the Flan-mini Collection dataset, anticipating improvement on reasoning-intensive tasks in InstructEval (Chia et al., 2023).
To this end, we first sample a 1M-sized instruction dataset from the 15M-sized Flan Collection dataset (Longpre et al., 2023) and combine it with several other datasets comprising coding tasks and ChatGPT/GPT-4 distilled conversations. The resulting smaller dataset, Flan-mini, is then cast into the conversational format of Vicuna. To ensure a reasonable computational cost for the fine-tuning process, we retrofit a LoRA (Hu et al., 2021) adapter into the LLaMA (Touvron et al., 2023) decoder-transformer of Vicuna. Following a parameter-efficient LoRA fine-tuning of the Vicuna checkpoint on Flan-mini, we obtain Flacuna. As expected, Flacuna outperforms Vicuna by a substantial margin on most benchmark datasets, especially on reasoning-intensive tasks. However, the performance of Flacuna still remains below Flan-T5 on the same reasoning benchmarks. This could be attributed to the 15-times smaller instruction dataset, which may contain less diverse samples. Furthermore, full fine-tuning of Vicuna may narrow the gap with Flan-T5.
This work overall has the following contributions:
1. Improving the problem-solving capability of Vicuna through parameter efficient fine-tuning on Flan-mini.
2. Introducing an instruction tuning dataset, Flan-mini, comprising a diverse set of tasks and templates.
## 2 Training Details
Preparing the Flan-mini Collection. Given the enormous size of the Flan Collection (Longpre et al., 2023), we opted to work with a carefully selected subset that maintains a high level of task diversity while reducing the overall dataset size. In Table 1, we present the specific tasks included in our subset of Flan, along with their respective dataset sizes. As the public release of the Flan Collection does not include programming tasks, we augment the collection with existing code datasets. Specifically, we include CodeContests (Li et al., 2022), APPS (Hendrycks et al., 2021) and CodeSearchNet (Husain et al., 2019). Following the data processing pipeline of the Flan Collection, we sample a fixed number of examples from each dataset, where each example is randomly augmented with different prompt templates. Specifically, the examples are processed with a pool of handcrafted prompt templates and may be used as zero-shot examples or grouped together with few-shot demonstrations (Longpre et al., 2023).
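A minimal sketch of this subsampling-and-templating step (the dataset and template structures here are illustrative assumptions, not the exact Flan processing code):

```python
import random


def build_flan_mini(datasets, samples_per_dataset, templates, seed=0):
    """Draw a fixed number of examples per source dataset and wrap each
    in a randomly chosen prompt template (zero-shot formatting)."""
    rng = random.Random(seed)
    mixture = []
    for name, examples in datasets.items():
        n = min(samples_per_dataset[name], len(examples))
        for ex in rng.sample(examples, n):
            template = rng.choice(templates[name])
            mixture.append({
                "source": name,
                "prompt": template.format(**ex),
                "target": ex["answer"],
            })
    rng.shuffle(mixture)
    return mixture
```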
Maintaining Vicuna's Chatting Ability. Vicuna has demonstrated remarkable chatting ability, achieving 90% of the performance of ChatGPT. This indicates its significant potential as an open-source alternative to closed-source large language models (LLMs) like ChatGPT. To ensure
that Flacuna retains Vicuna's learned knowledge and chatting ability, we incorporated various ChatGPT datasets, including Alpaca (Taori et al., 2023), Code Alpaca (Chaudhary, 2023), and ShareGPT (Chiang et al., 2023), into our Flan collection. Among these three datasets, Vicuna was originally fine-tuned using the ShareGPT dataset. The final collection was then used to train Flacuna.
Architecture. We employed LoRA in the Vicuna model for fine-tuning on the Flan-mini collection. We inserted the low-rank adapters on all the query and value projection layers, resulting in a total trainable parameter count of 6.55M, which is only around 0.05% of the parameter count of the original 13B Vicuna model. The maximum input sequence length was set to 1280, and efficient training was facilitated by utilizing bf16 precision.
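With the HuggingFace `peft` library, the adapter setup described above could be sketched as follows; the rank is chosen as \(r=8\), which is consistent with the reported 6.55M trainable parameter count on a 13B LLaMA, while the remaining hyperparameters and the checkpoint name are illustrative assumptions.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("lmsys/vicuna-13b-v1.3")
lora = LoraConfig(
    r=8,                                  # consistent with ~6.55M trainable params
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # query/value projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()        # ~0.05% of the 13B parameters
```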
Hyperparameter Details. Flacuna was trained on 4\(\times\)A6000 GPUs for 1 epoch. We use 16 gradient accumulation steps with a per-device batch size of 2, resulting in a total batch size of 128. We used 3000 warm-up steps and a learning rate of 2e-5.
## 3 Evaluation Tasks and Results
### Problem Solving Evaluation
To assess the problem-solving prowess of instructed large language models (LLMs), InstructEval employs a range of benchmarks encompassing real-world exams that delve into diverse topics. These benchmarks encompass complex instructions, arithmetic problems, programming challenges, and causal reasoning tasks. In order to excel in these benchmarks, models need to exhibit a profound understanding of the world, demonstrate multi-hop reasoning capabilities, showcase creativity, and employ a plethora of other cognitive skills.
World Knowledge. The Massive Multitask Language Understanding (MMLU) benchmark, introduced in the work by Hendrycks et al. (2021), serves as an assessment tool to gauge the problem-solving aptitude and world knowledge of language models across various subjects. It offers evaluations in both zero-shot and few-shot settings, presenting a more challenging and human-like evaluation scenario. The MMLU benchmark encompasses a comprehensive range of 57 subjects spanning STEM, humanities, social sciences, and other domains. The difficulty levels of the tasks within the benchmark vary from elementary to advanced professional levels, providing a comprehensive assessment of the model's capabilities in problem-solving and domain understanding.
Complex Instructions. The subset known as BIG-Bench Hard (BBH) comprises 23 highly demanding tasks carefully selected from the BIG-Bench benchmark (Srivastava et al., 2022) to specifically target tasks that are considered to surpass the current capabilities of language models (Suzgun et al., 2022). BBH presents models with intricate instructions that require advanced skills in navigation, logical deduction, and fallacy detection.
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Dataset Name** & **Source** & **Dataset Size** \\ \hline Flan2021 & Flan & 388K \\ Public Pool of Prompts & Flan & 320K \\ Natural instructions v2 & Flan & 200K \\ CoT & Flan & 100K \\ Code Search & Husain et al. (2019) & 100K \\ Code Contest & Li et al. (2022) & 50K \\ Apps & Hendrycks et al. (2021) & 50K \\ \hline GPT4-Alpaca & GPT-4 & 52K \\ Code-Alpaca & ChatGPT & 20K \\ ShareGPT & ChatGPT & 60K \\ \hline Total & - & 1.34M \\ \hline \hline \end{tabular}
\end{table}
Table 1: The Flan-mini Collection, used to train Flacuna.
Comprehension and Arithmetic. Discrete Reasoning Over Paragraphs (DROP) is a reading comprehension task with a mathematical focus. It challenges systems to engage in discrete reasoning by analyzing passages extracted from Wikipedia articles. In order to excel in the DROP task, a system needs to adeptly navigate references within a question and identify the appropriate sections of the provided passage. Additionally, the system must demonstrate proficiency in performing discrete operations like addition, counting, or sorting.
Programming. HumanEval serves as a problem-solving benchmark specifically designed for assessing the performance of large language models that are trained on code (Chen et al., 2021). The benchmark comprises 164 unique programming problems, encompassing areas such as language comprehension, algorithms, and basic mathematics. Some of the problems included in HumanEval are similar in nature to straightforward software interview questions. In the evaluation process, models are assessed based on the functional correctness of the code programs they generate, with the criteria for correctness determined by the given docstrings. HumanEval provides a comprehensive evaluation framework for assessing the problem-solving capabilities of language models in a code-centric context.
Causality. The Counterfactual Reasoning Assessment (CRASS) benchmark is a novel dataset and evaluation tool developed specifically to assess the causal reasoning abilities of large language models. By employing counterfactual scenarios, CRASS tests the model's capability to identify and select appropriate causal explanations. This benchmark provides a unique and rigorous evaluation framework to gauge the causal reasoning capabilities of language models.
### Alignment to Human Values
Noting the importance of aligning LLMs to human values, InstructEval incorporates the Helpful, Honest, and Harmless (HHH) benchmark (Askell et al., 2021). The benchmark showcases engaging dialogues between humans and conversational assistants, challenging the model to discern and provide the most appropriate response. It encompasses a diverse array of 61 honesty-related, 59 helpfulness-related, and 58 harmlessness-related samples, along with 43 unique instances falling within the "other" category. The inclusion of the "other" category accounts for examples that embody values not explicitly covered by honesty, helpfulness, or harmlessness.
### Writing Experiments
For the writing experiment, we utilized the IMPACT dataset, which is readily available in InstructEval. This comprehensive dataset consists of 50 prompts across distinct categories, namely informative, professional, argumentative, and creative. Following that, ChatGPT was assigned the responsibility of scoring the models' responses in terms of relevance (Rel.) and coherence (Coh.) on a scale of 1 to 5. For more comprehensive information regarding this evaluation, we refer readers to Chia et al. (2023).
### Results
Comparative Baselines. As baselines, we selected Vicuna (Zheng et al., 2023) and StableVicuna1.
Footnote 1: [https://huggingface.co/CarperAI/stable-vicuna-13b-delta](https://huggingface.co/CarperAI/stable-vicuna-13b-delta)
Few-shot Problem-solving. We present the results of Flacuna on five datasets (see Table 2) from the InstructEval benchmark, focusing on problem-solving tasks. On 4 out of 5 tasks, Flacuna outperformed Vicuna, showing an average performance improvement of 5.6 points over the LLaMA backbone. However, it performed slightly worse on code-related problem-solving tasks in the HumanEval dataset, by a margin of 0.6 points. Overall, the improvement of Flacuna over Vicuna is 5.1 points averaged over the five tasks.
Out of the five problem-solving datasets, one of them, DROP, is categorized as a held-in dataset. It is a part of our Flan collection and was utilized for training Flacuna. As a result, we observed a significant performance boost of 11 points compared to Vicuna. The remaining datasets are considered held out.
0-shot Problem-solving. We conducted a 0-shot performance evaluation of Flacuna and compared it against both Vicuna and StableVicuna. The results presented in Table 3 demonstrate a noteworthy performance leap by Flacuna compared to its competitors. This improvement can be attributed to the training of Flacuna on the high-quality Flan instruction dataset.
HHH Evaluation. We conducted a further evaluation using BBH's HHH evaluation dataset (see Table 4), where Flacuna exhibited an impressive 11% improvement over Vicuna. Notably, our instruction dataset collection aimed to enhance Vicuna's problem-solving abilities, but it also had a positive impact on its HHH performance. This observation aligns with the experience of Flan-T5, which achieved a 24.2% performance improvement over its T5 backbone after fine-tuning on Flan.
Writing Evaluation. While Flacuna primarily excels in problem-solving tasks, we made efforts to maintain the impressive writing and chatting ability of Vicuna. To achieve this, we incorporated conversational datasets generated by GPT-4, such as GPT-4-Alpaca and ShareGPT, into the Flan-mini collection. However, despite these efforts, we observed certain issues in Flacuna's writing performance. In some cases, it generates code snippets in response to prompts that are unrelated to coding. We attribute this behavior to the significant data imbalance, where the conversational dataset constitutes only 8.2% of the entire data mixture. Prompt engineering techniques can help rectify such issues.
We discovered that Flacuna generates responses of reasonable quality when provided with the following template: """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: definition of the task./n/n question/n Output:
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Size**} & \multicolumn{2}{c}{**MMLU (5-shot)**} & \multicolumn{2}{c}{**BBH (3-shot)**} & \multicolumn{2}{c}{**DROP\({}^{\star}\) (3-shot)**} & \multicolumn{2}{c}{**CRASS (3-shot)**} & \multicolumn{2}{c}{**HumanEval (0-shot)**} & \multicolumn{2}{c}{**Avg.**} \\ \cline{3-14} & & Perf. & \(\Delta\) & Perf. & \(\Delta\) & Perf. & \(\Delta\) & Perf. & \(\Delta\) & Perf. & \(\Delta\) & Perf. & \(\Delta\) \\ \hline GPT-4 & - & 86.4 & - & - & - & 80.9 & - & - & - & 67.0 & - & - & - \\ ChatGPT & - & 70.0 & - & 49.5 & - & 64.1 & - & 90.5 & - & 48.1 & - & 64.5 & - \\ \hline Flan-UL2 & 20B & 55.0 & - & 44.7 & - & 64.3 & - & 94.2 & - & 0.0 & - & 51.6 & - \\ Alpaca-Lora & 30B & 58.4 & +0.6 & 41.3 & +2.0 & 45.1 & -0.3 & 79.2 & +10.6 & 18.9 & +4.9 & 48.6 & +3.6 \\ OpenAssistant & 30B & 56.9 & -0.9 & 39.2 & -0.1 & 46.0 & +0.6 & 67.2 & -1.4 & 23.1 & +9.1 & 46.5 & +1.5 \\ OPT-IML & 30B & 38.6 & +11.3 & 31.3 & +3.0 & 47.5 & +28.0 & 67.2 & +32.5 & 9.1 & +7.9 & 38.7 & +16.5 \\ \hline Flan-T5 & 11B & 54.5 & +29.3 & 43.9 & +13.6 & 67.2 & +49.7 & 88.3 & +54.7 & 0.0 & +0.0 & 50.8 & +29.5 \\ Flan-Alpaca & 11B & 50.9 & +25.7 & 23.3 & -7.0 & 62.3 & +44.8 & 90.2 & +56.6 & 0.0 & +0.0 & 45.3 & +24.0 \\ Dolly V2 & 12B & 25.6 & -1.3 & 29.7 & +0.2 & 16.6 & -0.5 & 35.8 & +1.1 & 8.5 & -0.6 & 23.2 & -0.7 \\ \hline Flan-T5 & 3B & 49.2 & +25.9 & 40.2 & +15.9 & 56.3 & +43.7 & 91.2 & +60.2 & 0.0 & +0.0 & 47.4 & +29.2 \\ ChatGLM & 6B & 36.1 & - & 31.3 & - & 44.2 & - & 51.1 & - & 3.1 & - & 33.2 & - \\ Mosaic-Chat & 7B & 37.1 & +1.9 & 32.0 & +1.1 & 20.2 & -7.4 & 47.5 & +13.6 & 17.7 & +7.4 & 30.9 & +3.3 \\ \hline StableVicuna & 13B & 49.2 & +3.0 & 37.5 & +0.4 & 34.3 & -1.0 & 67.5 & +8.7 & 15.9 & +2.5 & 40.9 & +2.7 \\ Vicuna & 13B & 50.6 & +4.5 & 37.6 & +0.5 & 32.6 & -3.0 & 60.9 & +2.1 & 11.6 & -1.8 & 38.7 & +0.5 \\ Flacuna & 13B & 51.1 & +5.0 & 39.3 & +2.2 & 43.6 & +8.0 & 74.1 & +15.3 & 11.0 & -2.4 & 43.8 & +5.6 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Evaluation results for problem-solving benchmarks. We denote the original performance across the benchmarks as Perf., while \(\Delta\) denotes the change in performance compared to the corresponding foundation LLMs. \(\star\) indicates that DROP is a held-in dataset.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Model** & **Size** & **MMLU (0-shot)** & **BBH (0-shot)** & **CRASS (0-shot)** \\ \hline Flan-UL2 & 20B & 54.4 & 34.9 & - \\ OpenAssistant & 30B & 52.0 & 33.4 & - \\ OPT-IML & 30B & 41.3 & 17.4 & - \\ \hline Tk-Instruct & 11B & 39.4 & 17.1 & - \\ Flan-T5-XXL & 11B & 54.1 & 39.5 & - \\ \hline Dolly V2 & 12B & 25.4 & 22.3 & - \\ \hline StableVicuna & 13B & 47.5 & 18.5 & 64.2 \\ Vicuna & 13B & 48.3 & 28.3 & 65.7 \\ Flacuna & 13B & 49.4 & 32.5 & 67.9 \\ \hline \hline \end{tabular}
\end{table}
Table 3: 0-shot problem-solving evaluation of Flacuna and other baseline models.
ASSISTANT: "". This finding suggests that with the appropriate prompts, we can improve Flacuna's chatting performance.
However, upon careful examination of the generated samples, it becomes apparent that Flacuna does not outperform Vicuna as a writing assistant. This observation is reinforced by the evaluation of the generated responses to the prompts in the IMPACT dataset using ChatGPT, as depicted in Table 5. ChatGPT consistently ranks Vicuna's responses significantly higher than those of Flacuna.
The subpar performance of Flacuna in writing-based scenarios can be attributed to several factors. Firstly, the disproportionate scarcity of conversational datasets in Flan may have contributed to this outcome. Additionally, parameter-efficient tuning methods such as LoRA may limit the effectiveness of the model in learning both problem-solving and general writing abilities. Hence, we may explore other efficient training methods for LLMs in the future (Lv et al., 2023).
An example of the prompt and Flacuna's response is shown below.
**Prompt:** A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: You are tasked to demonstrate your writing skills in professional or work settings for the following question. Write a user manual for a household appliance, providing step-by-step instructions for its operation, maintenance, and troubleshooting, using clear and simple language. Output: ASSISTANT:
**Flacuna's response:**
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline
**Model** & **Size** & **Harmlessnessness** & **Helpfulness** & **Honesty** & **Other** & **Avg.** & \(\Delta\) Avg. \\ \hline ChatGPT & - & 90.7 & 91.2 & 78.1 & 86.3 & 86.6 & - \\ \hline Flan-Alpaca & 11B & 74.2 & 81.4 & 77.4 & 83.4 & 79.1 & +26.6 \\ Flan-T5 & 11B & 75.9 & 75.3 & 75.1 & 79.6 & 76.7 & +24.2 \\ Tk-Instruct & 11B & 70.1 & 54.8 & 62.3 & 76.0 & 65.8 & +13.3 \\ T5 & 11B & 46.4 & 54.8 & 58.1 & 50.7 & 52.5 & - \\ \hline Alpaca & 13B & 49.7 & 51.2 & 51.8 & 45.5 & 49.5 & -12.3 \\ LLaMA & 13B & 57.2 & 61.0 & 57.0 & 72.0 & 61.8 & - \\ \hline Dolly V2 & 12B & 51.7 & 59.9 & 47.0 & 58.1 & 54.2 & +9.1 \\ Pythia & 12B & 41.3 & 46.1 & 43.6 & 49.3 & 45.1 & - \\ \hline StableVicuna & 13B & 61.7 & 67.2 & 57.1 & 79.1 & 66.3 & +4.5 \\ Vicuna & 13B & 62.0 & 66.1 & 52.4 & 74.4 & 63.7 & +1.9 \\ Flacuna & 13B & 72.4 & 71.2 & 70.5 & 83.7 & 74.5 & +12.6 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Evaluation results for alignment to human values on the honesty, helpfulness, and harmlessness (HHH) benchmark. Avg. denotes the average performance, while \(\Delta\) Avg. denotes the average improvement compared to the corresponding foundation model.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Size**} & \multicolumn{2}{c}{**Informative**} & \multicolumn{2}{c}{**Professional**} & \multicolumn{2}{c}{**Argumentative**} & \multicolumn{2}{c}{**Creative**} & \multicolumn{2}{c}{**Avg.**} \\ \cline{3-11} & & Rel. & Coh. & Rel. & Coh. & Rel. & Coh. & Rel. & Coh. & Rel. & Coh. \\ \hline ChatGPT & - & 3.34 & 3.98 & 3.88 & 3.96 & 3.96 & 3.82 & 3.92 & 3.94 & 3.78 & 3.93 \\ Flan-Alpaca & 11B & 3.56 & 3.46 & 3.54 & 3.70 & 3.22 & 3.28 & 3.70 & 3.40 & 3.51 & 3.46 \\ Flan-T5 & 11B & 2.64 & 3.24 & 2.62 & 3.22 & 2.54 & 3.40 & 2.50 & 2.72 & 2.58 & 3.15 \\ Dolly-V2 & 12B & 3.54 & 3.64 & 2.96 & 3.74 & 3.66 & 3.20 & 3.02 & 3.18 & 3.30 & 3.44 \\ \hline StableVicuna & 13B & 3.54 & 3.64 & 2.96 & 3.74 & 3.30 & 3.20 & 3.02 & 3.18 & 3.21 & 3.44 \\ Vicuna & 13B & 3.60 & 3.96 & 3.74 & 3.82 & 3.82 & 3.56 & 3.82 & 3.92 & 3.75 & 3.82 \\ Flacuna & 13B & 3.02 & 3.42 & 3.48 & 3.52 & 3.38 & 3.02 & 3.92 & 3.80 & 3.45 & 3.44 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Evaluation results for writing-based tasks.
1. Introduction Welcome to the user manual for the XYZ household appliance. This appliance is designed to make your life easier and more convenient. With this appliance, you can prepare your meals quickly and efficiently.
2. Operation To operate the appliance, follow these steps:
a. Turn on the appliance by pressing the power button. b. Select the desired cooking mode by using the mode selector. c. Place the food inside the appliance. d. Close the lid and press the start button. e. Wait for the appliance to complete the cooking process.
3. Maintenance To keep your appliance in good working condition, follow these steps:
a. Clean the appliance after each use. b. Check the appliance for any signs of damage or wear and tear. c. If you notice any issues, contact the manufacturer or a certified repair technician.
4. Troubleshooting If you encounter any issues with your appliance, try the following troubleshooting steps:
a. Check the power cord and plug to ensure they are securely connected. b. Check the fuse to ensure it is not blown. c. Check the appliance manual for troubleshooting tips. d. If the issue persists, contact the manufacturer or a certified repair technician.
5. Safety Precautions Always follow these safety precautions when using the appliance:
a. Keep the appliance clean and free of debris. b. Do not use the appliance if it is damaged or showing signs of wear and tear. c. Do not leave the appliance unattended while it is in operation. d. Always use the appliance according to the manufacturer's instructions.
6. Conclusion Congratulations on your new XYZ household appliance! With proper use and maintenance, your appliance will provide you with years of reliable service. If you have any questions or concerns, please refer to the appliance manual or contact the manufacturer.
## 4 Limitations and Future Work
Despite the promising advancements of Flacuna compared to Vicuna, we have identified some issues that require addressing:
* If Flacuna is asked to provide descriptive answers to questions like "Present arguments for or against lowering the age bar for drinking," Flacuna **generates code snippets instead**. This behavior could be attributed to its **imperfect understanding of instructions or a tendency to hallucinate**.
* Flacuna is still **significantly behind Flan-T5** in terms of problem-solving abilities.
* Surprisingly, Flacuna exhibits **inferior performance compared to both LLaMA and Vicuna on coding-related problems**. This outcome is unexpected, considering that we incorporated numerous coding problem-solving datasets into our instruction tuning collection.
* Flacuna is **trained with a maximum input sequence length of 1280**, which limits its ability to comprehend longer input sequences.
To address these limitations and known issues, we can explore the following steps:
* Based on previous studies, it has been observed that LoRA performs better with larger models (Chia et al., 2023), such as those with 30B or 65B parameters, and excels in task-specific settings. Therefore, in future work, we could enhance Flacuna by **fully fine-tuning Vicuna without LoRA**, particularly on the Flan collection. Another future direction is to train Flacuna with a longer input sequence length.
* We can **incorporate the original Flan collection into the training process**, as it is fifteen times larger than the instruction dataset we used in this study. Flan-T5 underwent training on this extensive collection, which resulted in remarkable problem-solving performance.
* The chatting or writing performance of Flacuna could be improved by **incorporating larger conversational datasets in Flan-mini** and subsequently training Flacuna on it.
|
2310.10098 | PAC Learning Linear Thresholds from Label Proportions | Learning from label proportions (LLP) is a generalization of supervised
learning in which the training data is available as sets or bags of
feature-vectors (instances) along with the average instance-label of each bag.
The goal is to train a good instance classifier. While most previous works on
LLP have focused on training models on such training data, computational
learnability of LLP was only recently explored by [Saket'21, Saket'22] who
showed worst case intractability of properly learning linear threshold
functions (LTFs) from label proportions. However, their work did not rule out
efficient algorithms for this problem on natural distributions.
In this work we show that it is indeed possible to efficiently learn LTFs
using LTFs when given access to random bags of some label proportion in which
feature-vectors are, conditioned on their labels, independently sampled from a
Gaussian distribution $N(\mathbf{\mu}, \mathbf{\Sigma})$. Our work shows that a
certain matrix -- formed using covariances of the differences of
feature-vectors sampled from the bags with and without replacement --
necessarily has its principal component, after a transformation, in the
direction of the normal vector of the LTF. Our algorithm estimates the means
and covariance matrices using subgaussian concentration bounds which we show
can be applied to efficiently sample bags for approximating the normal
direction. Using this in conjunction with novel generalization error bounds in
the bag setting, we show that a low error hypothesis LTF can be identified. For
some special cases of the $N(\mathbf{0}, \mathbf{I})$ distribution we provide a
simpler mean estimation based algorithm. We include an experimental evaluation
of our learning algorithms along with a comparison with those of [Saket'21,
Saket'22] and random LTFs, demonstrating the effectiveness of our techniques. | Anand Brahmbhatt, Rishi Saket, Aravindan Raghuveer | 2023-10-16T05:59:34Z | http://arxiv.org/abs/2310.10098v1 | # PAC Learning Linear Thresholds
###### Abstract
Learning from label proportions (LLP) is a generalization of supervised learning in which the training data is available as sets or _bags_ of feature-vectors (instances) along with the average instance-label of each bag. The goal is to train a good instance classifier. While most previous works on LLP have focused on training models on such training data, computational learnability of LLP was only recently explored by [25, 26] who showed worst case intractability of properly learning _linear threshold functions_ (LTFs) from label proportions. However, their work did not rule out efficient algorithms for this problem on natural distributions.
In this work we show that it is indeed possible to efficiently learn LTFs using LTFs when given access to random bags of some label proportion in which feature-vectors are, conditioned on their labels, independently sampled from a Gaussian distribution \(N(\mathbf{\mu},\mathbf{\Sigma})\). Our work shows that a certain matrix - formed using covariances of the differences of feature-vectors sampled from the bags with and without replacement - necessarily has its principal component, after a transformation, in the direction of the normal vector of the LTF. Our algorithm estimates the means and covariance matrices using subgaussian concentration bounds which we show can be applied to efficiently sample bags for approximating the normal direction. Using this in conjunction with novel generalization error bounds in the bag setting, we show that a low error hypothesis LTF can be identified. For some special cases of the \(N(\mathbf{0},\mathbf{I})\) distribution we provide a simpler mean estimation based algorithm. We include an experimental evaluation of our learning algorithms along with a comparison with those of [25, 26] and random LTFs, demonstrating the effectiveness of our techniques.
## 1 Introduction
In _learning from label proportions_ (LLP), the training data is aggregated into sets or _bags_ of feature-vectors (instances). For each bag we are given its constituent feature-vectors along with only the sum or average of their labels. The goal is to obtain a _good_ instance-level classifier - one that minimizes the classification error on a test set of instances or bags. In this work we study the LLP learnability over Gaussian distributions of linear threshold functions (LTFs), also called _linear classifiers_ or _halfspaces_, given by \(f(\mathbf{x})=\mathsf{pos}\left(\mathbf{r}^{\mathsf{T}}\mathbf{x}+c\right)\) where \(\mathsf{pos}(a):=1\) if \(a>0\) and \(0\) otherwise.
The _probably approximately correct_ (PAC) model of [29] states that a _concept_ class \(\mathcal{C}\) of \(\{0,1\}\)-valued functions can be learnt by a _hypothesis_ class \(\mathcal{H}\) if there is an algorithm to efficiently obtain, using iid samples from
a distribution on \((\mathbf{x},f(\mathbf{x}))\), a hypothesis \(h\in\mathcal{H}\) of arbitrarily high accuracy on that distribution, for any unknown \(f\in\mathcal{C}\). If \(\mathcal{H}=\mathcal{C}\) we say that \(\mathcal{C}\) is _properly_ learnable, for e.g. LTFs are known to be properly learnable using linear programming ([3]). This notion can be extended to the LLP setting - which for brevity we call PAC-LLP - as follows: distribution \(D\) is over bags and their label proportions \((B,\sigma(B,f))\) where \(B=\{\mathbf{x}_{1},\ldots,\mathbf{x}_{q}\}\) is a bag of feature vectors and \(\sigma(B,f)=\operatorname{Avg}\{f(\mathbf{x})\mid\mathbf{x}\in B\}\). A bag \((B,\sigma(B,f))\) is said to be _satisfied_ by \(h\) iff \(\sigma(B,h)=\sigma(B,f)\), and the accuracy of \(h\) is the fraction of bags satisfied by it.
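The bag satisfaction accuracy of PAC-LLP can be sketched as follows (a hypothetical helper, with \(h\) a \(\{0,1\}\)-valued function):

```python
import numpy as np


def bag_label_proportion(h, bag):
    """sigma(B, h): average predicted label over the bag."""
    return np.mean([h(x) for x in bag])


def bag_accuracy(h, bags_with_props):
    """Fraction of (B, sigma(B, f)) pairs that h satisfies, i.e. bags on
    which h reproduces the given label proportion exactly."""
    hits = [abs(bag_label_proportion(h, B) - sigma) < 1e-9
            for B, sigma in bags_with_props]
    return float(np.mean(hits))
```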
With the above notion of PAC-LLP, [25, 26] studied the learnability of LTFs and rather disturbingly showed that for any constant \(\varepsilon>0\) it is NP-hard to PAC-LLP learn an LTF using an LTF which satisfies a \((1/q+\varepsilon)\)-fraction of the bags when all bags are of size at most \(q\). This is in contrast to supervised learning (i.e., with unit-sized bags) in which an LTF can be efficiently learnt by an LTF using linear programming. Their work also gave convex programming algorithms to find an LTF satisfying a \((2/5)\)-fraction of bags of size \(\leq 2\) and a \((1/12)\)-fraction of bags of size \(\leq 3\). While these results show that PAC-LLP learning LTFs using LTFs is intractable on _hard_ bag distributions, they raise the question of whether the problem is tractable on natural distributions that may arise out of real world scenarios.
We answer the above question in the affirmative when the feature-vectors are distributed according to some (unknown) Gaussian distribution \(\mathcal{D}=N(\boldsymbol{\mu},\boldsymbol{\Sigma})\) in \(d\)-dimensions. Gaussian distributions are ubiquitous in machine learning and in many applications the input data distribution is modeled as multivariate Gaussians, and several previous works [8, 30] have studied learnability in Gaussian distributions. An unknown target LTF is given by \(f(\mathbf{x}):=\text{pos}\left(\mathbf{r}_{s}^{\top}\mathbf{x}+c_{s}\right)\) where \(\|\mathbf{r}_{s}\|_{2}=1\). Let \(\mathcal{D}_{a}\) be the distribution of \(\mathbf{x}\leftarrow\mathcal{D}\) conditioned on \(f(\mathbf{x})=a\), for \(a\in\{0,1\}\). Using this we formalize the notion of a distribution \(\mathcal{O}\) on bags of size \(q\) and average label \(k/q\): a random bag \(B\) sampled from \(\mathcal{O}\) consists of \(k\) iid samples from \(\mathcal{D}_{1}\) and \((q-k)\) iid samples from \(\mathcal{D}_{0}\). The case of \(k\in\{0,q\}\) is uninteresting as all instances in such bags are labeled either \(0\) or \(1\) and traditional PAC-learning for LTFs can be employed directly. Unlike [25, 26] our objective is to directly maximize the instance-level accuracy on \(\mathcal{D}\). With this setup we informally describe our main result.
**Our PAC-LLP LTF Learner** (Informal): Assuming mild conditions on \(\boldsymbol{\Sigma},\boldsymbol{\mu}\) and \(c_{s}\), for any \(q,k\in\mathbb{Z}^{+}\) s.t. \(1\leq k\leq q-1\) and \(\varepsilon,\delta>0\), there is an algorithm that samples at most \(m\) bags from \(\mathcal{O}\) and runs in time \(O(t+m)\) and with probability \(1-\delta\) produces an LTF \(h\) s.t.
\(\Pr_{\mathcal{D}}\left[f(\mathbf{x})\neq h(\mathbf{x})\right]\leq\varepsilon\) if \(k\neq q/2\), and
\(\Pr_{\mathcal{D}}\left[f(\mathbf{x})\neq h(\mathbf{x})\right]\leq\varepsilon\) or \(\Pr[f(\mathbf{x})\neq(1-h(\mathbf{x}))]\leq\varepsilon\) if \(k=q/2\),
where \(t,m\) are fixed polynomials in \(d,q,(1/\varepsilon),\log(1/\delta)\). We also obtain a more efficient algorithm when \(k\neq q/2\), \(\boldsymbol{\mu}=\boldsymbol{0}\), \(c_{s}=0\) and \(\boldsymbol{\Sigma}=\mathbf{I}\). The ambiguity in the case of \(k=q/2\) is inherent since bags of label proportion \(1/2\) consistent with an LTF \(f(\mathbf{x})\) are also consistent with \((1-f(\mathbf{x}))\).
**Remark 1.1** (Mixtures of \((q,k)\)): _The training data could consist of bags of different sizes and label proportions; however, typically the maximum size of bags is bounded by (say) \(Q\), and in a large enough sample we would have at least a \((1/Q^{2})\)-fraction of bags of a particular size and label proportion, and we can apply our PAC-LLP LTF Learner above to that subsample._
### Related Work
The LLP problem is motivated by many real applications where labels are available not for each feature-vector but only as the average labels of bags of feature-vectors. This may occur because of privacy and legal ([24, 33]) reasons, supervision cost ([5]), or lack of labeling instrumentation ([10]). Previous works ([9, 15, 20, 24]) on LLP have applied techniques such as clustering, linear classifiers, and MCMC. Specifically for LLP, assuming class conditional independence of bags, [23] gave an algorithm to learn an exponential generative model, which was further generalized by [22]. On the other hand, the work of [34] proposed novel _proportional_ SVM based algorithms which optimized the SVM loss over instance-labels constrained by a bag-level loss w.r.t. the given label-proportions. Subsequently, approaches based on deep neural nets for large-scale and multi-class data ([18, 11, 19, 21]), as well as bag pre-processing techniques ([28, 27]), have been developed. Recently, [4, 6] have proposed model training methods for either random or curated bags.
The LLP framework (as an analogue of PAC learning) was first formalized in the work of [35]. They bounded the generalization error of a trained classifier when taking the (bag, label-proportion)-pairs as instances sampled iid from some distribution. Their loss function was different - a weaker notion than the strict bag satisfaction predicate of [25, 26]. A single-bag variant - _class ratio estimation_ - of LLP was studied by [13] in which learning LTFs has a simple algorithm (see Appendix G). Nevertheless, the study of computational learning in the LLP framework has been fairly limited, apart from the works of [25, 26] whose results of learning LTFs in the LLP setting have been described earlier in this section.
In the fully supervised setting, [3] showed that LTFs can be properly learnt (i.e., using LTFs as hypotheses) via linear programming without any distributional assumptions. Adversarial label noise makes the problem NP-hard to approximate beyond the trivial \((\nicefrac{{1}}{{2}})\)-factor even using constant degree polynomial thresholds as hypotheses ([12, 14, 2]). However, under distributional assumptions a series of results ([16, 17, 1, 7]) have given efficient algorithms to learn adversarially noisy LTFs.
Next, Sec. 1.2 mathematically defines our problem statement. Sec. 1.3 states the main results of this paper. Sec. 1.4 provides an overview of our techniques. Sec. 2 mentions some preliminary results which are used in our proofs. Sec. 3 defines and analyses a subroutine which we use in all our algorithms. Sec. 4 provides a complete proof for one of our main results. Sec. 5 gives brief proof sketches of our other results. Sec. 6 mentions some experiments which support our results.
### Problem Definition
**Definition 1.2** (Bag Oracle): _Given distribution \(\mathcal{D}\) over \(\mathbb{R}^{d}\) and a target concept \(f:\mathbb{R}^{d}\to\{0,1\}\), the bag oracle for size \(q\) and label proportion \(k/q\) (\(1\leq k\leq q-1\)), denoted by \(\mathsf{Ex}(f,\mathcal{D},q,k)\), generates a bag \(\{\mathbf{x}^{(i)}\}_{i=1}^{q}\) such that \(\mathbf{x}^{(i)}\) is independently sampled from_ (i)_\(\mathcal{D}_{f,1}\) for \(i=\{1,\ldots,k\}\), and_ (ii)_\(\mathcal{D}_{f,0}\) for \(i=\{k+1,\ldots,q\}\), where \(\mathcal{D}_{f,a}\) is \(\mathbf{x}\leftarrow\mathcal{D}\) conditioned on \(f(\mathbf{x})=a\), for \(a\in\{0,1\}\)._
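For concreteness, the bag oracle is straightforward to simulate for a Gaussian \(\mathcal{D}\) by rejection sampling. The sketch below is a minimal Python/NumPy rendering of Definition 1.2 (the function names are ours, not the paper's); the later sketches in this section reuse it. Rejection sampling is exact here but can be slow when one of the classes has small measure.

```python
import numpy as np

def sample_bag(rng, mu, Sigma, r_star, c_star, q, k):
    """Draw one bag from Ex(f, N(mu, Sigma), q, k): the first k vectors
    satisfy f(x) = 1 and the remaining q - k satisfy f(x) = 0, where
    f(x) = pos(r_star^T x + c_star)."""
    ones, zeros = [], []
    while len(ones) < k or len(zeros) < q - k:
        x = rng.multivariate_normal(mu, Sigma)
        (ones if r_star @ x + c_star > 0 else zeros).append(x)
    return np.array(ones[:k] + zeros[:q - k])  # shape (q, d)
```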
### Our results
We first state our result (proved in Appendix A) for the case of standard \(d\)-dimensional Gaussian distribution \(N(\mathbf{0},\mathbf{I})\), homogeneous target LTF and unbalanced bags.
**Theorem 1.3**: _For \(q>2\) and \(k\in\{1,\ldots,q-1\}\) s.t. \(k\neq q/2\) and LTF \(f(\mathbf{x}):=\mathsf{pos}(\mathbf{r}_{*}^{\mathsf{T}}\mathbf{x})\), there is an algorithm that samples \(m\) iid bags from \(\mathsf{Ex}(f,N(\mathbf{0},\mathbf{I}),q,k)\) and runs in \(O(m)\) time to produce a hypothesis \(h(\mathbf{x}):=\mathsf{pos}(\hat{\mathbf{r}}^{\mathsf{T}}\mathbf{x})\) s.t. w.p. at least \(1-\delta\) over the sampling, \(\Pr_{\mathcal{D}}\bigl{[}f(\mathbf{x})\neq h(\mathbf{x})\bigr{]}\leq\varepsilon\), for any \(\varepsilon,\delta>0\), when \(m\geq O\left((d/\varepsilon^{2})\log(d/\delta)\right)\)._
The above algorithm is based on estimating the mean of the bag vectors, which unfortunately does not work when \(k=q/2\) or for a general covariance matrix \(\Sigma\). We instead use a covariance estimation based approach - albeit with a worse running time - for our next result which is proved in Sec. 4. \(\lambda_{\min}\) and \(\lambda_{\max}\) denote the minimum and maximum eigenvalues of the covariance matrix \(\Sigma\).
**Theorem 1.4**: _For \(q>2\), \(k\in\{1,\ldots,q-1\}\), \(f(\mathbf{x}):=\mathsf{pos}(\mathbf{r}_{*}^{\mathsf{T}}\mathbf{x})\), and positive definite \(\Sigma\) there is an algorithm that samples \(m\) iid bags from \(\mathsf{Ex}(f,N(\mathbf{0},\mathbf{\Sigma}),q,k)\) and runs in \(\mathsf{poly}(m)\) time to produce a hypothesis \(h(\mathbf{x}):=\mathsf{pos}(\hat{\mathbf{r}}^{\mathsf{T}}\mathbf{x})\) s.t. w.p. at least \(1-\delta\) over the sampling_
* _if_ \(k\neq q/2\)_,_ \(\Pr_{\mathcal{D}}\bigl{[}f(\mathbf{x})\neq h(\mathbf{x})\bigr{]}\leq\varepsilon\)_, and_
* _if_ \(k=q/2\)_,_ \(\min\{\Pr_{\mathcal{D}}\bigl{[}f(\mathbf{x})\neq h(\mathbf{x})\bigr{]},\Pr_{ \mathcal{D}}\bigl{[}f(\mathbf{x})\neq\tilde{h}(\mathbf{x})\bigr{]}\}\leq\varepsilon\)_, where_ \(\tilde{h}(\mathbf{x}):=\mathsf{pos}(-\hat{\mathbf{r}}^{\mathsf{T}}\mathbf{x})\)__
_for any \(\varepsilon,\delta>0\), when \(m\geq O((d/\varepsilon^{4})\log(d/\delta)(\lambda_{\max}/\lambda_{\min})^{6}q ^{8})\)._
Our general result stated below (proved in Appendix C), extends our algorithmic methods to the case of non-centered Gaussian space and non-homogeneous LTFs.
**Theorem 1.5**: _For \(q>2\), \(k\in\{1,\ldots,q-1\}\), \(f(\mathbf{x}):=\mathsf{pos}(\mathbf{r}_{*}^{\mathsf{T}}\mathbf{x}+c_{*})\), and positive definite \(\mathbf{\Sigma}\) there is an algorithm that samples \(m\) iid bags from \(\mathsf{Ex}(f,N(\mathbf{\mu},\mathbf{\Sigma}),q,k)\) and runs in \(\mathsf{poly}(m)\) time to produce a hypothesis \(h(\mathbf{x}):=\mathsf{pos}(\hat{\mathbf{r}}^{\mathsf{T}}\mathbf{x}+\hat{c})\) s.t. w.p. at least \(1-\delta\) over the sampling_
* _if_ \(k\neq q/2\)_,_ \(\Pr_{\mathcal{D}}\left[f(\mathbf{x})\neq h(\mathbf{x})\right]\leq\varepsilon\)_, and_
* _if_ \(k=q/2\)_,_ \(\min\left\{\Pr_{\mathcal{D}}\left[f(\mathbf{x})\neq h(\mathbf{x})\right],\Pr_{\mathcal{D}}\left[f(\mathbf{x})\neq\bar{h}(\mathbf{x})\right]\right\}\leq\varepsilon\)_, where_ \(\bar{h}(\mathbf{x}):=\mathsf{pos}(-\hat{\mathbf{r}}^{\mathsf{T}}\mathbf{x}-\hat{c})\)__
_for any \(\varepsilon,\delta>0\), when \(m\geq O\left(\left(d/\varepsilon^{4}\right)\frac{\ell^{2}}{(\Phi(\ell)(1-\Phi(\ell)))^{2}}\log(d/\delta)\left(\frac{\lambda_{\max}}{\lambda_{\min}}\right)^{4}\left(\frac{\sqrt{\lambda_{\max}}+\|\boldsymbol{\mu}\|_{2}}{\sqrt{\lambda_{\min}}}\right)^{4}q^{8}\right)\), where \(\Phi(\cdot)\) is the standard Gaussian cdf and \(\ell=-\frac{c_{*}+\mathbf{r}_{*}^{\mathsf{T}}\boldsymbol{\mu}}{\|\boldsymbol{\Sigma}^{1/2}\mathbf{r}_{*}\|_{2}}\)._
The value of \(\hat{\mathbf{r}}\) output by our algorithms is a close estimate of \(\mathbf{r}_{*}\) (or possibly \(-\mathbf{r}_{*}\) in the case of balanced bags). Note that our algorithms do not require knowledge of \(\boldsymbol{\mu}\) or \(\boldsymbol{\Sigma}\), and only the derived parameters in Thms. 1.4 and 1.5 are used for the sample complexity bounds. They are based on certain properties of the empirical mean-vectors and covariance matrices formed by sampling vectors or pairs of vectors from random bags of the bag oracle. An empirical mean based approach has been previously developed by [23] in the LLP setting to estimate the parameters of an exponential generative model, when bag distributions satisfy the so-called class conditioned independence, i.e., given its label, the feature-vector distribution is the same for all the bags. These techniques were extended by [22] to linear classifiers with loss functions satisfying certain smoothness conditions. While the bag oracle in our setup satisfies such conditioned independence, we aim to minimize the instance classification error, on which the techniques of [23, 22] are not applicable.
For the case when \(q=1\) (ordinary classification), the sample complexity is \(O((d/\varepsilon)\log(d/\delta))\) as one can solve a linear program to obtain an LTF and then use uniform convergence to bound the generalization error. The sample complexity expressions in Theorems 1.3, 1.4 and 1.5 have the same dependence on \(d\) and \(\delta\). However, they have higher powers of \(1/\varepsilon\). They also include other parameters like the bag size (\(q\)), the condition number of \(\boldsymbol{\Sigma}\) (\(\lambda_{\max}/\lambda_{\min}\)) and the normalized distance of the mean of the Gaussian to the LTF (\(\ell\)). The origins and significance of these discrepancies are discussed in Sec. 1.4.
### Our Techniques
**Theorem 1.3**: **Case \(N(\mathbf{0},\mathbf{I})\), \(f(\mathbf{x})=\mathsf{pos}(\mathbf{r}_{*}^{\mathsf{T}}\mathbf{x}),k\neq q/2\).** Assume that \(k>q/2\). A randomly sampled bag with label proportion \(k/q\) has \(k\) vectors iid sampled from the positive side of the separating hyperplane passing through the origin, and \((q-k)\) iid sampled from its negative side. It is easy to see that the expected sum of the vectors vanishes in all directions orthogonal to the normal vector \(\mathbf{r}_{*}\), and in the direction of \(\mathbf{r}_{*}\) it has a constant magnitude. The case of \(k<q/2\) is analogous with the direction of the expectation opposite to \(\mathbf{r}_{*}\). Sampling a sufficient number of bags and a random vector from each of them, and taking their normalized expectation (negating if \(k<q/2\)) yields a close estimate \(\hat{\mathbf{r}}\) of \(\mathbf{r}_{*}\), which in turn implies low classification error. The sample complexity is the same as that for mean estimation of bag-vectors (see Section 3) and thus the power of \(1/\varepsilon\) is 2.
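A minimal sketch of this mean-estimation learner (ours, not the paper's pseudocode; `sample_bag` is the rejection sampler above, and `oracle()` returns one \((q,d)\) bag, e.g. `oracle = lambda: sample_bag(rng, np.zeros(d), np.eye(d), r_star, 0.0, q, k)`):

```python
def learn_ltf_mean(rng, oracle, d, q, k, m):
    """Thm. 1.3 sketch: average one uniformly chosen vector per bag,
    normalize, and negate the result when k < q/2."""
    assert 2 * k != q, "balanced bags leave the sign of r_* ambiguous"
    total = np.zeros(d)
    for _ in range(m):
        bag = oracle()
        total += bag[rng.integers(q)]  # uniformly random member of the bag
    r_hat = total / np.linalg.norm(total)
    return r_hat if 2 * k > q else -r_hat
```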
This simple approach however does not work when \(k=q/2\), in which case the expectation vanishes completely, or for general Gaussian distributions which (even if centered) could be skewed in arbitrary directions. We present our variance based method to handle these cases.
**Theorem 1.4**: **Case \(N(\mathbf{0},\mathbf{\Sigma})\), \(f(\mathbf{x})=\mathsf{pos}(\mathbf{r}_{*}^{\mathsf{T}}\mathbf{x})\).** To convey the main idea of our approach, consider two different ways of sampling two feature-vectors from a random bag of the oracle. The first way is to sample two feature-vectors \(\mathbf{Z}_{1},\mathbf{Z}_{2}\) independently and u.a.r from a random bag. In this case, the probability that they have different labels (given by \(f\)) is \(2k(q-k)\big{/}q^{2}\). The second way is to sample a random _pair_\(\tilde{\mathbf{Z}}_{1},\tilde{\mathbf{Z}}_{2}\) of feature-vectors i.e., without replacement. In this case, the probability of different labels is \(2k(q-k)/(q(q-1))\) which is strictly greater than \(2k(q-k)\big{/}q^{2}\). Since the labels are given by thresholding in the direction of \(\mathbf{r}_{*}\), this suggests that the variance of \((\tilde{\mathbf{Z}}_{1}-\tilde{\mathbf{Z}}_{2})\) w.r.t. that of \((\mathbf{Z}_{1}-\mathbf{Z}_{2})\) is maximized in the direction of \(\mathbf{r}_{*}\). Indeed, let \(\mathbf{\Sigma}_{D}:=\mathrm{Var}[\tilde{\mathbf{Z}}_{1}-\tilde{\mathbf{Z}}_{2}]\) be the _pair_ covariance matrix and let \(\mathbf{\Sigma}_{B}:=\mathrm{Var}\left[\mathbf{Z}_{1}\right]=(1/2)\mathrm{Var} \left[\mathbf{Z}_{1}-\mathbf{Z}_{2}\right]\) be the _bag_ covariance matrix. Then we show that \(\pm\mathbf{r}_{*}=\mathrm{argmax}_{\mathbf{r}}\rho(\mathbf{r})\) where \(\rho(\mathbf{r}):=\frac{\mathbf{r}^{\mathsf{T}}\mathbf{\Sigma}_{D}\mathbf{r}}{ \mathbf{r}^{\mathsf{T}}\mathbf{\Sigma}_{B}\mathbf{r}}\). A simple
transformation gives us that
\[\pm\mathbf{r}_{*}=\boldsymbol{\Sigma}_{B}^{-1/2}\mathsf{PrincipalEigenVector}( \boldsymbol{\Sigma}_{B}^{-1/2}\boldsymbol{\Sigma}_{D}\boldsymbol{\Sigma}_{B}^{- 1/2}) \tag{1}\]
This suggests the following algorithm: sample enough bags to construct the corresponding empirical estimates \(\hat{\boldsymbol{\Sigma}}_{D}\) and \(\hat{\boldsymbol{\Sigma}}_{B}\) and then compute the empirical proxy of the RHS of (1). We show that using close enough empirical estimates w.h.p the algorithm computes a vector \(\hat{\mathbf{r}}\) s.t. one of \(\pm\hat{\mathbf{r}}\) is close to \(\mathbf{r}_{*}\), and via a geometric stability argument this implies that one of \(\pm\hat{\mathbf{r}}\) yields an LTF that has small instance-level classification error.
At this point, if \(k=q/2\), there is no way to identify the correct solution from \(\pm\hat{\mathbf{r}}\), since a balanced bag, if consistent with an LTF, is also consistent with its complement. On the other hand, if \(k\neq q/2\) we can obtain the correct solution as follows. It is easy to show that since \(f\) is homogeneous and the instance distribution is a centered Gaussian, the measure of \(\{\mathbf{x}\mid f(\mathbf{x})=a\}\) is \(1/2\) for \(a\in\{0,1\}\). Thus, one of \(h(\mathbf{x}):=\mathsf{pos}(\hat{\mathbf{r}}^{\mathsf{T}}\mathbf{x})\), \(\tilde{h}(\mathbf{x}):=\mathsf{pos}(-\hat{\mathbf{r}}^{\mathsf{T}}\mathbf{x})\) will have a high bag satisfaction accuracy, and a large enough sample of bags can be used to identify which of \(h,\tilde{h}\) it is. Lastly, we use a novel generalization error bound (see below) to show that the identified LTF also has a high instance classification accuracy.
The sample complexity expression includes the \(4^{\text{th}}\) power of \(1/\varepsilon\) and the \(6^{\text{th}}\) power of the condition number of \(\boldsymbol{\Sigma}\). This comes from the sample complexity to estimate \(\boldsymbol{\Sigma}_{D}\) and \(\boldsymbol{\Sigma}_{B}\) (see Section 3), which need to be estimated up to an error of \(O((\varepsilon^{2}(\lambda_{\min}/\lambda_{\max})^{3})/q^{4})\) to ensure that the misclassification error between LTFs with normal vectors \(\hat{\mathbf{r}}\) (or \(-\hat{\mathbf{r}}\)) and \(\mathbf{r}_{*}\) is bounded by \(\varepsilon\). One power of \(\lambda_{\min}/\lambda_{\max}\) comes from translating bounds from a normalized space to the original space (see Lemma 2.1). Another power comes from bounding the sample error from the geometric bound on \(\hat{\mathbf{r}}\) and \(\mathbf{r}_{*}\) (see Lemma 2.3 followed by Chernoff). The remaining power of \(\lambda_{\min}/\lambda_{\max}\) and the powers of \(q\) are artifacts of the analysis. The higher power of \(1/\varepsilon\) is expected as the algorithm estimates second moments. A higher condition number and bag size make it harder to estimate \(\rho(\mathbf{r})\) and find where it is maximized. They also make it harder for a geometrically close estimator to generalize. It is important to note that the sample complexity explicitly depends on the bag size \(q\) (and not just on the label proportion \(k/q\)). This is because when two feature-vectors are sampled without replacement, the probability of sampling a pair of differently labeled feature-vectors is \(2(k/q)(1-k/q)/(1-1/q)\). Keeping \(k/q\) the same, this probability decreases with increasing bag size, which increases the sample complexity for larger bags.
**Theorem 1.5**: **Case \(N(\boldsymbol{\mu},\boldsymbol{\Sigma})\)**, \(f(\mathbf{x})=\mathsf{pos}(\mathbf{r}_{*}^{\mathsf{T}}\mathbf{x}+c_{*})\). We show that (1) also holds in this case, and therefore a similar approach of empirically estimating the pair and bag covariance matrices and solving (1) works in principle. However, there are complications; in particular, the presence of \(\boldsymbol{\mu}\) and \(c_{*}\) degrades the error bounds in the analysis, thus increasing the sample complexity of the algorithm. This is because the measures of \(\{\mathbf{x}\mid f(\mathbf{x})=a\}\) for \(a\in\{0,1\}\) could be highly skewed if \(\left\|\boldsymbol{\mu}\right\|_{2}\) and/or \(\left|c_{*}\right|\) is large. Moreover, the spectral algorithm only gives a solution \(\pm\hat{\mathbf{r}}\) for \(\mathbf{r}_{*}\). An additional step is required to obtain an estimate of \(c_{*}\). This we accomplish using the following procedure which, given a sample of \(s\) bags and any \(\mathbf{r}\), outputs a \(\hat{c}\) with the following property: if \(s^{*}=\max_{c}\{\text{no. of bags satisfied by }\mathsf{pos}(\mathbf{r}^{\mathsf{T}}\mathbf{x}+c)\}\), then \(\hat{c}\) will satisfy at least \(s^{*}-1\) bags. This is done by ordering the values \(\mathbf{r}^{\mathsf{T}}\mathbf{x}\) of the vectors \(\mathbf{x}\) within each bag in decreasing order, and then constructing the set of the \(k\)th values of each bag. Out of these \(s\) values, the one which, taken as \(c\) in \(\mathsf{pos}(\mathbf{r}^{\mathsf{T}}\mathbf{x}+c)\), satisfies the most bags is chosen to be \(\hat{c}\).
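A sketch of this intercept-selection step (our reading of the description: since \(\mathsf{pos}(\mathbf{r}^{\mathsf{T}}\mathbf{x}+c)=1\) iff \(\mathbf{r}^{\mathsf{T}}\mathbf{x}>-c\), each \(k\)th largest projection \(t\) yields the candidate \(c=-t\), nudged by a tiny amount so the \(k\)th vector itself counts as positive; the function name and the nudge are ours):

```python
def estimate_intercept(bags, r_hat, k, eps=1e-9):
    """Given s bags (each a (q, d) array) and a direction r_hat, return the
    candidate intercept satisfying the most bags; by the argument in the
    text it satisfies at least s* - 1 of them."""
    kth = [np.sort(bag @ r_hat)[::-1][k - 1] for bag in bags]

    def n_satisfied(c):
        # A bag is satisfied iff exactly k of its members are labeled 1.
        return sum(int(np.sum(bag @ r_hat + c > 0) == k) for bag in bags)

    return max((-t + eps for t in kth), key=n_satisfied)
```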
The sample complexity expression for this case differs from that of Theorem 1.4 in two ways. First, it includes the \(4^{\text{th}}\) power of \(((\sqrt{\lambda_{\max}}+\left\|\boldsymbol{\mu}\right\|_{2})/\sqrt{\lambda_{\min}})\). This comes from the bound on the sample error derived from the geometric bound on \(\hat{\mathbf{r}}\) and \(\mathbf{r}_{*}\) (Lemma 2.3). Another change is the term \(\ell^{2}/(\Phi(\ell)(1-\Phi(\ell)))^{2}\), where the \(\ell^{2}\) comes from the sample complexity of estimating \(\boldsymbol{\Sigma}_{B}\) and \(\boldsymbol{\Sigma}_{D}\) and \((\Phi(\ell)(1-\Phi(\ell)))^{2}\) comes from bounding the sample error from the geometric bound on \(\hat{\mathbf{r}}\) and \(\mathbf{r}_{*}\). The term \(\ell\) is the perpendicular distance from the center of the Gaussian to the unknown LTF's hyperplane, normalized by the stretch induced by \(\boldsymbol{\Sigma}\) in the direction of \(\mathbf{r}_{*}\). This is required to estimate the density of the Gaussian distribution near the unknown LTF's hyperplane, which directly affects the sample complexity - the less the density, the more the sample complexity. Thus it makes sense that the sample complexity increases with \(\left|\ell\right|\).
**Generalization Error Bounds.** We prove (Thm. 2.2) bounds relating the error of a hypothesis LTF \(h\) in satisfying sampled bags to its distributional instance-level error. Using this, we are able to distinguish (for \(k\neq q/2\)) between the two possible solutions our principal component algorithm yields:
the one which satisfies more of the sampled bags has w.h.p. low instance-level error. For proving these bounds, the first step is to use a bag-level generalization error bound shown by [26] using the techniques of [35]. Next, we show that low distributional bag satisfaction error by \(h\) implies low instance-level error. This involves a fairly involved combinatorial analysis of two independent binomial random variables formed from the incorrectly classified labels within a random bag. Essentially, unless \(h\) closely aligns with \(f\) at the instance level, with significant probability there will be an imbalance in these two random variables, leading to \(h\) not satisfying the bag.
**Subgaussian concentration bounds.** The standard estimation bounds for Gaussians are not directly applicable in our case, since the random vector sampled from a random bag is biased according to its label given by \(f\), and is therefore not a Gaussian vector. To obtain sample complexity bounds linear in \(\log(1/\delta)\) we use subgaussian concentration bounds for mean and covariance estimation ([32, 31]). For this, we show \(O(\ell)\) bound on the expectation and subgaussian norm of the thresholded Gaussian given by \(\{g\sim N(0,1)\mid g>\ell\}\) for some \(\ell>0\). The random vectors of interest to us are (in a transformed space) distributed as a combination of thresholded Gaussians in one of the coordinates, and \(N(0,1)\) in the rest. We show that they satisfy the \(O(\ell)\) bound on their subgaussian norm and admit the corresponding subgaussian Hoeffding (for empirical mean) and empirical covariance concentration bounds. Based on this, in Sec. 3 we abstract out the procedure used in our learning algorithms for obtaining the relevant mean and covariance estimates.
**Experiments.** We include in Sec. 6 an experimental evaluation of our learning algorithms along with a comparison with those of [25, 26] and random LTFs, demonstrating the effectiveness of our techniques.
## 2 Preliminaries
We begin with some useful linear algebraic notions. Let \(\lambda_{\text{max}}(\mathbf{A})\) and \(\lambda_{\text{min}}(\mathbf{A})\) denote the maximum and minimum eigenvalue of a real symmetric matrix \(\mathbf{A}\). The _operator norm_ \(\|\mathbf{A}\|_{2}:=\max_{\|\mathbf{x}\|_{2}=1}\|\mathbf{A}\mathbf{x}\|_{2}\) of such a matrix equals its largest absolute eigenvalue; for the positive definite matrices considered below it is \(\lambda_{\text{max}}(\mathbf{A})\).
We shall restrict our attention to symmetric _positive definite_ (p.d.) matrices \(\mathbf{A}\) which satisfy \(\mathbf{x}^{\mathsf{T}}\mathbf{A}\mathbf{x}>0\) for all non-zero vectors \(\mathbf{x}\), implying that \(\lambda_{\text{min}}(\mathbf{A})>0\) and \(\mathbf{A}^{-1}\) exists and is symmetric p.d. as well. Further, for such matrices \(\mathbf{A}\), \(\mathbf{A}^{1/2}\) is well defined to be the unique symmetric p.d. matrix \(\mathbf{B}\) satisfying \(\mathbf{B}\mathbf{B}=\mathbf{A}\). The eigenvalues of \(\mathbf{A}^{1/2}\) are the square-roots of those of \(\mathbf{A}\). We have the following lemma which is proved in Appendix B.6.
**Lemma 2.1**: _Let \(\mathbf{A}\) and \(\mathbf{B}\) be symmetric p.d. matrices such that \(\|\mathbf{A}-\mathbf{B}\|_{2}\leq\varepsilon_{1}\|\mathbf{A}\|_{2}\). Let \(\mathbf{r}_{1},\mathbf{r}_{2}\in\mathbb{R}^{d}\) be two unit vectors such that \(\|\mathbf{r}_{1}-\mathbf{r}_{2}\|_{2}\leq\varepsilon_{2}\). Then, \(\left\|\frac{\mathbf{A}\mathbf{r}_{1}}{\|\mathbf{A}\mathbf{r}_{1}\|_{2}}-\frac{\mathbf{B}\mathbf{r}_{2}}{\|\mathbf{B}\mathbf{r}_{2}\|_{2}}\right\|_{2}\leq 4\frac{\lambda_{\text{max}}(\mathbf{A})}{\lambda_{\text{min}}(\mathbf{A})}(\varepsilon_{2}+\varepsilon_{1})\) when \(\frac{\lambda_{\text{max}}(\mathbf{A})}{\lambda_{\text{min}}(\mathbf{A})}(\varepsilon_{2}+\varepsilon_{1})\leq\frac{1}{2}\)._
**Bag Oracle and related statistics.** Let \(\mathcal{O}:=\operatorname{Ex}(f,\mathcal{D},q,k)\) be any bag oracle with \(k\in\{1,\ldots,q-1\}\) for an LTF \(f(\mathbf{x}):=\mathsf{pos}(\mathbf{r}_{*}^{\mathsf{T}}\mathbf{x}+c_{*})\) in \(d\)-dimensions, and let \(\mathcal{M}\) be a collection of \(m\) bags sampled iid from the oracle. Define for any hypothesis LTF \(h\),
\[\mathsf{BagErr}_{\text{oracle}}(h,f,\mathcal{D},q,k):= \Pr_{B\leftarrow\mathcal{O}}\left[\operatorname{Avg}\{h(\mathbf{x})\mid \mathbf{x}\in B\}\neq k/q\right]\text{, and,} \tag{2}\] \[\mathsf{BagErr}_{\text{sample}}(h,\mathcal{M}):=|\{B\in\mathcal{M }\mid\operatorname{Avg}\{h(\mathbf{x})\mid\mathbf{x}\in B\}\neq k/q\}|\ /m. \tag{3}\]
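The sample bag error (3) is a simple count; the small helper below (ours) is reused in the later sketches.

```python
def bag_err_sample(h, bags, k):
    """Fraction of bags whose number of predicted positives differs from k,
    i.e. whose average predicted label differs from k/q (Eq. (3))."""
    return sum(int(sum(h(x) for x in bag) != k) for bag in bags) / len(bags)
```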
We define the following statistical quantities related to \(\mathcal{O}\). Let \(\mathbf{X}\) be a random feature-vector sampled uniformly from a random bag sampled from \(\mathcal{O}\). Let,
\[\mu_{B}:=\mathbb{E}[\mathbf{X}]\qquad\text{ and,}\qquad\Sigma_{B}:=\mathbb{E} \left[(\mathbf{X}-\mu_{B})\left(\mathbf{X}-\mu_{B}\right)^{\mathsf{T}}\right] =\operatorname{Var}[\mathbf{X}]. \tag{4}\]
Now, let \(\mathbf{Z}=\mathbf{X}_{1}-\mathbf{X}_{2}\) where \((\mathbf{X}_{1},\mathbf{X}_{2})\) are a random pair of feature-vectors sampled (without replacement) from a random bag sampled from \(\mathcal{O}\). Clearly \(\mathbb{E}[\mathbf{Z}]=\mathbf{0}\). Define
\[\Sigma_{D}:=\mathbb{E}\left[\mathbf{Z}\mathbf{Z}^{\mathsf{T}}\right]= \operatorname{Var}[\mathbf{Z}]. \tag{5}\]
**Generalization and stability bounds.** We prove in Appendix D.1 the following generalization bound from bag classification error to instance classification error.
**Theorem 2.2**: _For any \(\varepsilon<1/4q\) if \(\mathsf{BagErr}_{\mathrm{sample}}(h,\mathcal{M})\leq\varepsilon\) then,_
(i) _if \(k\neq q/2\), \(\mathrm{Pr}_{\mathcal{D}}\left[f(\mathbf{x})\neq h(\mathbf{x})\right]\leq 4\varepsilon\), and_
(ii) _if \(k=q/2\), \(\mathrm{Pr}_{\mathcal{D}}\left[f(\mathbf{x})\neq h(\mathbf{x})\right]\leq 4\varepsilon\) or \(\Pr[f(\mathbf{x})\neq(1-h(\mathbf{x}))]\leq 4\varepsilon\),_
_w.p. \(1-\delta\), when \(m\geq C_{0}d\left(\log q+\log(1/\delta)\right)/\varepsilon^{2}\), for any \(\delta>0\) and absolute constant \(C_{0}>0\)._
In some cases we directly obtain geometric bounds on the hypothesis classifier and the following lemma (proved in Appendix D.2) allows us to straightaway bound the classification error.
**Lemma 2.3**: _Suppose \(\|\mathbf{r}-\mathbf{\hat{r}}\|_{2}\leq\varepsilon_{1}\) for unit vectors \(\mathbf{r},\mathbf{\hat{r}}\). Then, \(\Pr\left[\mathsf{pos}\left(\mathbf{r}^{\mathsf{T}}\mathbf{X}+c\right)\neq \mathsf{pos}\left(\mathbf{\hat{r}}^{\mathsf{T}}\mathbf{X}+c\right)\right]\leq\varepsilon\) where \(\varepsilon=\varepsilon_{1}(c_{0}\sqrt{\lambda_{\max}/\lambda_{\min}}+c_{1}\| \boldsymbol{\mu}\|_{2}/\sqrt{\lambda_{\min}})\) for some absolute constants \(c_{0},c_{1}>0\) and \(\lambda_{\max}\),\(\lambda_{\min}\) are the maximum and minimum eigenvalues of \(\boldsymbol{\Sigma}\) respectively._
## 3 Bag distribution statistics estimation
We provide the following estimator for \(\boldsymbol{\mu}_{B}\), \(\boldsymbol{\Sigma}_{B}\) and \(\boldsymbol{\Sigma}_{D}\) defined in (4) and (5).
```
Input:\(\mathsf{Ex}(f,\mathcal{D}=N(\boldsymbol{\mu},\boldsymbol{\Sigma}),q,k),m\), where \(f=\mathsf{pos}\left(\mathbf{r}^{\mathsf{T}}\mathbf{X}+c\right)\).
1. Sample \(m\) bags from \(\mathsf{Ex}(f,\mathcal{D},q,k)\). Let \(\{B_{i}\}_{i=1}^{m}\) be the sampled bags.
2. \(V:=\{\mathbf{x}_{i}\mid\mathbf{x}_{i}\text{ u.a.r. }\leftarrow B_{i},i\in\{1,\ldots,m\}\}\).
3. \(\hat{\boldsymbol{\mu}}_{B}=\sum_{\mathbf{x}\in V}\mathbf{x}/m\).
4. \(\hat{\boldsymbol{\Sigma}}_{B}=\sum_{\mathbf{x}\in V}(\mathbf{x}-\hat{\boldsymbol{\mu}}_{B})(\mathbf{x}-\hat{\boldsymbol{\mu}}_{B})^{\mathsf{T}}/m\).
5. Sample \(m\) bags from \(\mathsf{Ex}(f,\mathcal{D},q,k)\). Let \(\{\tilde{B}_{i}\}_{i=1}^{m}\) be the sampled bags.
6. \(\tilde{V}:=\{\mathbf{z}_{i}=\mathbf{x}_{i}-\tilde{\mathbf{x}}_{i}\mid(\mathbf{x}_{i},\tilde{\mathbf{x}}_{i})\text{ u.a.r. without replacement from }\tilde{B}_{i},i\in\{1,\ldots,m\}\}\).
7. \(\hat{\boldsymbol{\Sigma}}_{D}=\sum_{\mathbf{z}\in\tilde{V}}\mathbf{z}\mathbf{z}^{\mathsf{T}}/m\).
8. Return:\(\hat{\boldsymbol{\mu}}_{B},\hat{\boldsymbol{\Sigma}}_{B},\hat{\boldsymbol{ \Sigma}}_{D}\).
```
**Algorithm 1** MeanCovEstimator.
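A direct NumPy rendering of Algorithm 1 (a sketch under the same conventions as before; `oracle()` returns one bag as a \((q,d)\) array):

```python
def mean_cov_estimator(rng, oracle, m):
    """Steps 1-4: empirical mu_B and Sigma_B from one uniform vector per
    bag.  Steps 5-7: Sigma_D from one uniform unordered pair per fresh bag."""
    V = np.array([bag[rng.integers(len(bag))]
                  for bag in (oracle() for _ in range(m))])
    mu_B = V.mean(axis=0)
    Sigma_B = (V - mu_B).T @ (V - mu_B) / m
    Z = []
    for _ in range(m):
        bag = oracle()
        i, j = rng.choice(len(bag), size=2, replace=False)
        Z.append(bag[i] - bag[j])
    Z = np.array(Z)
    Sigma_D = Z.T @ Z / m
    return mu_B, Sigma_B, Sigma_D
```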
We have the following lemma - which follows from the subgaussian distribution based mean and covariance concentration bounds shown for thresholded Gaussians (see Appendix E) - whose proof is given in Appendix E.3.
**Lemma 3.1**: _If \(m\geq O\left((d/\varepsilon^{2})\ell^{2}\log(d/\delta)\right)\) where \(\ell\) is as given in Lemma E.13 then Algorithm 1 returns \(\hat{\boldsymbol{\mu}}_{B},\hat{\boldsymbol{\Sigma}}_{B},\hat{\boldsymbol{ \Sigma}}_{D}\) such that \(\|\hat{\boldsymbol{\mu}}_{B}-\boldsymbol{\mu}_{B}\|_{2}\leq\varepsilon\sqrt{ \lambda_{\max}}/2\), \(\|\hat{\boldsymbol{\Sigma}}_{B}-\boldsymbol{\Sigma}_{B}\|_{2}\leq\varepsilon \lambda_{\max}\), and \(\|\hat{\boldsymbol{\Sigma}}_{D}-\boldsymbol{\Sigma}_{D}\|_{2}\leq\varepsilon \lambda_{\max}\), w.p. at least \(1-\delta\), for any \(\varepsilon,\delta>0\). Here \(\lambda_{\max}\) is the maximum eigenvalue of \(\boldsymbol{\Sigma}\)._
## 4 Proof of Theorem 1.4
For the setting of Theorem 1.4, we provide Algorithm 2. It uses as a subroutine a polynomial time procedure PrincipalEigenVector for the principal eigen-vector of a symmetric matrix, and first computes two LTFs given by a normal vector and its negation, returning the one that has lower error on a sampled collection of bags.
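The listing of Algorithm 2 is omitted from this extract; the sketch below (ours, assuming the `mean_cov_estimator` and `bag_err_sample` helpers above) follows the description: estimate the two covariance matrices, solve (1) spectrally, and break the \(\pm\hat{\mathbf{r}}\) tie by the empirical bag error on a fresh sample.

```python
from scipy.linalg import sqrtm

def algorithm_2(rng, oracle, m, s, k):
    """Spectral estimate of +/- r_* via Eq. (1), then tie-breaking on s
    freshly sampled bags (the tie-break is meaningful only for k != q/2)."""
    _, S_B, S_D = mean_cov_estimator(rng, oracle, m)
    B_inv_half = np.linalg.inv(sqrtm(S_B).real)        # Sigma_B^{-1/2}
    _, eigvecs = np.linalg.eigh(B_inv_half @ S_D @ B_inv_half)
    r = B_inv_half @ eigvecs[:, -1]                    # principal eigenvector
    r_hat = r / np.linalg.norm(r)
    bags = [oracle() for _ in range(s)]
    err = lambda v: bag_err_sample(lambda x: int(v @ x > 0), bags, k)
    return r_hat if err(r_hat) <= err(-r_hat) else -r_hat
```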
**Lemma 4.1**: _For any \(\varepsilon,\delta\in(0,1)\), if \(m\geq O((d/\varepsilon^{4})\log(d/\delta)(\lambda_{\max}/\lambda_{\min})^{4}q^{ 4})\), then \(\hat{\mathbf{r}}\) computed in Step 3 of Alg. 2 satisfies \(\min\{\|\hat{\mathbf{r}}-\mathbf{r}_{*}\|_{2},\|\hat{\mathbf{r}}+\mathbf{r}_{*} \|_{2}\}\leq\varepsilon\), w.p. \(1-\delta/2\)._
The above, whose proof is deferred to Sec. 4.1, is used in conjunction with the following lemma.
**Lemma 4.2**: _Let \(k\neq q/2\), \(\varepsilon,\delta\in(0,1)\) and suppose \(\hat{\mathbf{r}}\) computed in Step 3 of Alg. 2 satisfies \(\min\{\|\hat{\mathbf{r}}-\mathbf{r}_{*}\|_{2},\|\hat{\mathbf{r}}+\mathbf{r}_{*}\|_{2}\}\leq\varepsilon\). Then, with \(s\geq O\left(d(\log q+\log(1/\delta))/\varepsilon^{2}\right)\), \(h^{*}\) in Step 3.c satisfies \(\Pr_{\mathcal{D}}\left[h^{*}(\mathbf{x})\neq f(\mathbf{x})\right]\leq 16c_{0}q\varepsilon\sqrt{\frac{\lambda_{\max}}{\lambda_{\min}}}\) w.p. \(1-\delta/2\), where the constant \(c_{0}>0\) is from Lem. 2.3._
With the above we complete the proof of Theorem 1.4 as follows.
Proof.: (of Theorem 1.4) Let the parameters \(\delta,\varepsilon\) be as given in the statement of the theorem.
For \(k=q/2\), we use \(O(\varepsilon\sqrt{\lambda_{\min}/\lambda_{\max}})\) for the error bound in Lemma 4.1 thereby taking \(m=O((d/\varepsilon^{4})\log(d/\delta)(\lambda_{\max}/\lambda_{\min})^{6}q^{4})\) in Alg. 2, so that Lemma 4.1 along with Lemma 2.3 yields the desired misclassification error bound of \(\varepsilon\) for one of \(h\), \(\tilde{h}\).
For \(k\neq q/2\), we use \(O(\varepsilon\sqrt{\lambda_{\min}/\lambda_{\max}}/q)\) for the error bound in Lemma 4.1. In this case, taking \(m=O((d/\varepsilon^{4})\log(d/\delta)(\lambda_{\max}/\lambda_{\min})^{6}q^{8})\) in Alg. 2 we obtain the following bound: \(\min\{\|\hat{\mathbf{r}}-\mathbf{r}_{*}\|_{2},\|\hat{\mathbf{r}}+\mathbf{r}_{*}\|_{2}\}\leq\varepsilon\sqrt{\lambda_{\min}/\lambda_{\max}}/(16c_{0}q)\) with probability \(1-\delta/2\). Using \(s\geq O\left(d(\log q+\log(1/\delta))q^{2}\frac{\lambda_{\max}}{\varepsilon^{2}\lambda_{\min}}\right)\), Lemma 4.2 yields the desired misclassification error bound of \(\varepsilon\) on \(h^{*}\) w.p. \(1-\delta\).
Proof.: (of Lemma 4.2) Applying Lemma 2.3 we obtain that at least one of \(h\), \(\tilde{h}\) has an instance misclassification error of at most \(O(\varepsilon\sqrt{\lambda_{\max}/\lambda_{\min}})\). WLOG assume that \(h\) satisfies this error bound i.e., \(\Pr_{\mathcal{D}}[f(\mathbf{x})\neq h(\mathbf{x})]\leq c_{0}\varepsilon\sqrt{ \lambda_{\max}/\lambda_{\min}}=:\varepsilon^{\prime}\). Since the separating hyperplane of the LTF \(f\) passes through the origin, and \(\mathcal{D}=N(\mathbf{0},\boldsymbol{\Sigma})\) is centered, \(\Pr_{\mathcal{D}}[f(\mathbf{x})=1]=\Pr_{\mathcal{D}}[f(\mathbf{x})=0]=1/2.\) Thus,
\[\Pr_{\mathcal{D}}[h(\mathbf{x})\neq f(\mathbf{x})\;\mid\;f(\mathbf{x})=1],\ \Pr_{\mathcal{D}}[h(\mathbf{x})\neq f(\mathbf{x})\;\mid\;f(\mathbf{x})=0]\leq 2\varepsilon^{\prime}.\]
Therefore, the probability that a random bag from the oracle contains a feature vector on which \(f\) and \(h\) disagree is at most \(2q\varepsilon^{\prime}\). Applying the Chernoff bound (see Appendix B.1) we obtain that with probability at least \(1-\delta/6\), \(\mathsf{BagErr}_{\text{sample}}(h,\mathcal{M})\leq 4q\varepsilon^{\prime}\). Therefore, in Step 3.c, \(h^{*}\) satisfies \(\mathsf{BagErr}_{\text{sample}}(h^{*},\mathcal{M})\leq 4q\varepsilon^{\prime}\).
On the other hand, applying Theorem 2.2, except with probability \(\delta/3\), \(\Pr_{\mathcal{D}}[f(\mathbf{x})\neq h^{*}(\mathbf{x})]\leq 16q\varepsilon^{ \prime}=16c_{0}q\varepsilon\sqrt{\lambda_{\max}/\lambda_{\min}}\). Therefore, except with probability \(\delta/2\), the bound in Lemma 4.2 holds.
### Proof of Lemma 4.1
We define and bound a few useful quantities depending on \(k,q,\lambda_{\min}\) and \(\lambda_{\max}\) using \(1\leq k\leq q-1\).
**Definition 4.3**: _Define, (i) \(\kappa_{1}:=\left(\frac{2k}{q}-1\right)^{2}\frac{2}{\pi}\) so that \(0\leq\kappa_{1}\leq 2/\pi\), (ii) \(\kappa_{2}:=\frac{1}{q-1}\frac{k}{q}\left(1-\frac{k}{q}\right)\frac{16}{\pi}\) so that \(\frac{16}{\pi q^{2}}\leq\kappa_{2}\leq\frac{4}{\pi(q-1)}\), (iii) \(\kappa_{3}:=\frac{\kappa_{2}}{1-\kappa_{1}}\) so that \(\frac{16}{\pi q^{2}}\leq\kappa_{3}\leq\frac{4}{(\pi-2)(q-1)}\), and (iv) \(\theta:=\frac{2\lambda_{\max}}{\lambda_{\min}}\left(\frac{1}{2-\max(0,2\kappa_{1}-\kappa_{2})}+\frac{1}{1-\kappa_{1}}\right)\) so that \(\frac{3\lambda_{\max}}{\lambda_{\min}}\leq\theta\leq\frac{3\lambda_{\max}}{(1-2/\pi)\lambda_{\min}}\)._
For the analysis we begin by showing in the following lemma that \(\hat{\mathbf{r}}\) in the algorithms is indeed \(\pm\mathbf{r}_{*}\) if the covariance estimates were the actual covariances.
**Lemma 4.4**: _The ratio \(\rho(\mathbf{r}):=\mathbf{r}^{\mathsf{T}}\boldsymbol{\Sigma}_{D}\mathbf{r}/\mathbf{r}^{\mathsf{T}}\boldsymbol{\Sigma}_{B}\mathbf{r}\) is maximized when \(\mathbf{r}=\pm\mathbf{r}_{*}\). Moreover,_
\[\rho(\mathbf{r})=2+\frac{\gamma(\mathbf{r})^{2}\kappa_{2}}{1-\gamma(\mathbf{r})^{2}\kappa_{1}}\qquad\text{ where }\qquad\gamma(\mathbf{r}):=\frac{\mathbf{r}^{\mathsf{T}}\boldsymbol{\Sigma}\mathbf{r}_{*}}{\sqrt{\mathbf{r}^{\mathsf{T}}\boldsymbol{\Sigma}\mathbf{r}}\sqrt{\mathbf{r}_{*}^{\mathsf{T}}\boldsymbol{\Sigma}\mathbf{r}_{*}}},\text{ and }\]
\[\mathbf{r}^{\mathsf{T}}\boldsymbol{\Sigma}_{B}\mathbf{r}=\mathbf{r}^{\mathsf{T}}\boldsymbol{\Sigma}\mathbf{r}(1-\gamma(\mathbf{r})^{2}\kappa_{1}),\quad\mathbf{r}^{\mathsf{T}}\boldsymbol{\Sigma}_{D}\mathbf{r}=\mathbf{r}^{\mathsf{T}}\boldsymbol{\Sigma}\mathbf{r}(2-\gamma(\mathbf{r})^{2}(2\kappa_{1}-\kappa_{2}))\]
Proof.: Let \(\mathbf{\Gamma}:=\mathbf{\Sigma}^{1/2}\), then \(\mathbf{X}\sim N(\mathbf{0},\mathbf{\Sigma})\Leftrightarrow\mathbf{X}= \mathbf{\Gamma}\mathbf{Z}\) where \(\mathbf{Z}\sim N(\mathbf{0},\mathbf{I})\). Further, \(\mathsf{pos}\left(\mathbf{r}^{\mathsf{T}}\mathbf{X}\right)=\mathsf{pos}\left( \mathbf{u}^{\mathsf{T}}\mathbf{Z}\right)\) where \(\mathbf{u}=\mathbf{\Gamma}\mathbf{r}/\|\mathbf{\Gamma}\mathbf{r}\|_{2}\). Using this, we can let \(\mathbf{X}_{B}=\mathbf{\Gamma}\mathbf{Z}_{B}\) as a random feature-vector sampled uniformly from a random bag sampled from \(\mathcal{O}\). Also, let \(\mathbf{X}_{D}=\mathbf{\Gamma}\mathbf{Z}_{D}\) be the difference of two random feature vectors sampled uniformly without replacement from a random bag sampled from \(\mathcal{O}\). Observe that the ratio \(\rho(\mathbf{r})=\mathrm{Var}[\mathbf{r}^{\mathsf{T}}\mathbf{X}_{D}]/\, \mathrm{Var}[\mathbf{r}^{\mathsf{T}}\mathbf{X}_{B}]=\mathrm{Var}[\mathbf{u}^{ \mathsf{T}}\mathbf{Z}_{D}]/\,\mathrm{Var}[\mathbf{u}^{\mathsf{T}}\mathbf{Z}_{ B}]\).
Let \(\mathbf{u}_{*}:=\mathbf{\Gamma}\mathbf{r}_{*}/\|\mathbf{\Gamma}\mathbf{r}_{*}\|_{2}\), and \(g^{*}:=\mathbf{u}_{*}^{\mathsf{T}}\mathbf{Z}\) which is \(N(0,1)\). For \(a\in\{0,1\}\), let \(\mathbf{Z}_{a}\) be \(\mathbf{Z}\) conditioned on \(\mathsf{pos}\left(\mathbf{u}_{*}^{\mathsf{T}}\mathbf{Z}\right)=a\). Let \(g_{a}^{*}:=\mathbf{u}_{*}^{\mathsf{T}}\mathbf{Z}_{a}\), \(a\in\{0,1\}\), be the half normal distributions satisfying \(\mathrm{E}[(g_{a}^{*})^{2}]=1\) and \(\mathrm{E}[g_{a}^{*}]=(-1)^{1-a}\sqrt{2/\pi}\). With this setup, letting \(g_{B}^{*}:=\mathbf{u}_{*}^{\mathsf{T}}\mathbf{Z}_{B}\) and \(g_{D}^{*}:=\mathbf{u}_{*}^{\mathsf{T}}\mathbf{Z}_{D}\) we obtain (using Lemma B.2 in Appendix B.2)
\[\mathrm{Var}[g_{B}^{*}]=1-\kappa_{1},\quad\mathrm{Var}[g_{D}^{*}]=2(1-\kappa_ {1})+\kappa_{2}\]
Now let \(\tilde{\mathbf{u}}\) be a unit vector orthogonal to \(\mathbf{u}_{*}\). Let \(\tilde{g}=\tilde{\mathbf{u}}^{\mathsf{T}}\mathbf{Z}\) be \(N(0,1)\). Also, let \(\tilde{g}_{a}=\tilde{\mathbf{u}}^{\mathsf{T}}\mathbf{Z}_{a}\) for \(a\in\{0,1\}\). Since \(\mathbf{Z}_{a}\) are given by conditioning \(\mathbf{Z}\) only along \(\mathbf{u}_{*}\), \(\tilde{g}_{a}\sim N(0,1)\) for \(a\in\{0,1\}\). In particular, the component along \(\tilde{\mathbf{u}}\) of \(\mathbf{Z}_{B}\) (call it \(\tilde{g}_{B}\)) is \(N(0,1)\) and that of \(\mathbf{Z}_{D}\) (call it \(\tilde{g}_{D}\)) is the difference of two iid \(N(0,1)\) variables. Thus, \(\mathrm{Var}[\tilde{g}_{B}]=1\) and \(\mathrm{Var}[\tilde{g}_{D}]=2\). Moreover, due to orthogonality all these Gaussian variables corresponding to \(\tilde{\mathbf{u}}\) are independent of those corresponding to \(\mathbf{u}_{*}\) defined earlier. Now let \(\mathbf{u}=\alpha\mathbf{u}_{*}+\beta\tilde{\mathbf{u}}\), where \(\beta=\sqrt{1-\alpha^{2}}\), be any unit vector. From the above we have,
\[\frac{\mathrm{Var}\left[\mathbf{u}^{\mathsf{T}}\mathbf{Z}_{D}\right]}{\mathrm{Var}\left[\mathbf{u}^{\mathsf{T}}\mathbf{Z}_{B}\right]}=\frac{\mathrm{Var}\left[\alpha g_{D}^{*}+\beta\tilde{g}_{D}\right]}{\mathrm{Var}\left[\alpha g_{B}^{*}+\beta\tilde{g}_{B}\right]}=\frac{\alpha^{2}\,\mathrm{Var}\left[g_{D}^{*}\right]+\beta^{2}\,\mathrm{Var}\left[\tilde{g}_{D}\right]}{\alpha^{2}\,\mathrm{Var}\left[g_{B}^{*}\right]+\beta^{2}\,\mathrm{Var}\left[\tilde{g}_{B}\right]}=\frac{2\alpha^{2}(1-\kappa_{1})+\alpha^{2}\kappa_{2}+2\beta^{2}}{\alpha^{2}(1-\kappa_{1})+\beta^{2}}=2+\frac{\alpha^{2}\kappa_{2}}{1-\alpha^{2}\kappa_{1}} \tag{6}\]
where the last equality uses \(\beta=\sqrt{1-\alpha^{2}}\). Letting \(\mathbf{u}=\mathbf{\Gamma}\mathbf{r}/\|\mathbf{\Gamma}\mathbf{r}\|_{2}\) we obtain that \(\alpha=\frac{\langle\mathbf{\Gamma}\mathbf{r},\mathbf{\Gamma}\mathbf{r}_{*}\rangle}{\|\mathbf{\Gamma}\mathbf{r}\|_{2}\|\mathbf{\Gamma}\mathbf{r}_{*}\|_{2}}=\gamma(\mathbf{r})\), completing the proof.
**Lemma 4.5**: \(\underset{\|\mathbf{r}\|_{2}=1}{\operatorname{argmax}}\rho(\mathbf{r})=\mathbf{ \Sigma}_{B}^{-1/2}\mathsf{PrincipalEigenVector}(\mathbf{\Sigma}_{B}^{-1/2} \mathbf{\Sigma}_{D}\mathbf{\Sigma}_{B}^{-1/2})\)__
Proof.: This follows directly from its generalization in Appendix B.5.
We now prove that the error in the estimate \(\hat{\mathbf{r}}\) given by the algorithm is bounded if the errors in the covariance estimates are bounded. The sample complexity of computing these estimates gives the sample complexity of our algorithm.
**Theorem 4.6**: _The unit vector \(\hat{\mathbf{r}}\) computed in Step 3 of Algorithm 2 satisfies \(\min\{\|\hat{\mathbf{r}}-\mathbf{r}_{*}\|_{2},\|\hat{\mathbf{r}}+\mathbf{r}_{*} \|_{2}\}\leq\varepsilon\) w.p. at least \(1-\delta\) when \(m\geq O\left((d/\epsilon^{4})\log(d/\delta)\left(\frac{\lambda_{\max}}{\lambda_{ \min}}\right)^{4}q^{4}\right)\)._
Proof.: By Lemma 3.1, taking \(m\geq O\left((d/\epsilon_{1}^{2})\log(d/\delta)\right)\) ensures that \(\|\mathbf{E}_{B}\|_{2}\leq\epsilon_{1}\lambda_{\max}\) and \(\|\mathbf{E}_{D}\|_{2}\leq\epsilon_{1}\lambda_{\max}\) w.p. at least \(1-\delta\) where \(\mathbf{E}_{B}=\hat{\mathbf{\Sigma}}_{B}-\mathbf{\Sigma}_{B}\) and \(\mathbf{E}_{D}=\hat{\mathbf{\Sigma}}_{D}-\mathbf{\Sigma}_{D}\). We start by defining \(\hat{\rho}(\mathbf{r}):=\frac{\mathbf{r}^{\mathsf{T}}\hat{\mathbf{\Sigma}}_{D}\mathbf{r}}{\mathbf{r}^{\mathsf{T}}\hat{\mathbf{\Sigma}}_{B}\mathbf{r}}\), which is the equivalent of \(\rho\) using the estimated matrices. Observe that it can be written as \(\hat{\rho}(\mathbf{r})=\frac{\mathbf{r}^{\mathsf{T}}\mathbf{\Sigma}_{D}\mathbf{r}+\mathbf{r}^{\mathsf{T}}\mathbf{E}_{D}\mathbf{r}}{\mathbf{r}^{\mathsf{T}}\mathbf{\Sigma}_{B}\mathbf{r}+\mathbf{r}^{\mathsf{T}}\mathbf{E}_{B}\mathbf{r}}\). Using
these we can obtain the following bound on \(\hat{\rho}\): for any \(\mathbf{r}\in\mathbb{R}^{d}\), \(|\hat{\rho}(\mathbf{r})-\rho(\mathbf{r})|\leq\theta\varepsilon_{1}|\rho(\mathbf{ r})|\) w.p. at least \(1-\delta\) (*) as long as \(\varepsilon_{1}\leq\frac{(1-\kappa_{1})}{2}\frac{\lambda_{\min}}{\lambda_{\max}}\), which we shall ensure (see Appendix B.4).
For convenience we denote the normalized projection of any vector \(\mathbf{r}\) as \(\bar{\mathbf{r}}:=\frac{\mathbf{\Sigma}^{1/2}\mathbf{r}}{\|\mathbf{\Sigma}^{1/2}\mathbf{r}\|_{2}}\). Now let \(\mathbf{r}\in\mathbb{R}^{d}\) be a unit vector such that \(\min\{\|\bar{\mathbf{r}}-\bar{\mathbf{r}}_{*}\|_{2},\|\bar{\mathbf{r}}+\bar{\mathbf{r}}_{*}\|_{2}\}\geq\varepsilon_{2}\). Hence, using the definitions from Lemma 4.4, \(|\gamma(\mathbf{r})|\leq 1-\varepsilon_{2}^{2}/2\) while \(\gamma(\mathbf{r}_{*})=1\), which implies \(\rho(\mathbf{r}_{*})-\rho(\mathbf{r})\geq\kappa_{3}\varepsilon_{2}^{2}/2\). Note that \(\rho(\mathbf{r})\leq\rho(\mathbf{r}_{*})=2+\kappa_{3}\). Choosing \(\varepsilon_{1}<\frac{\kappa_{3}}{4\theta(2+\kappa_{3})}\varepsilon_{2}^{2}\), we obtain that \(\rho(\mathbf{r}_{*})(1-\theta\varepsilon_{1})>\rho(\mathbf{r})(1+\theta\varepsilon_{1})\). Using this along with the bound (*) we obtain that w.p. at least \(1-\delta\), \(\hat{\rho}(\mathbf{r}_{*})>\hat{\rho}(\mathbf{r})\) for any such \(\mathbf{r}\). Since our algorithm returns \(\hat{\mathbf{r}}\) as the maximizer of \(\hat{\rho}\), w.p. at least \(1-\delta\) we get \(\min\{\|\hat{\bar{\mathbf{r}}}-\bar{\mathbf{r}}_{*}\|_{2},\|\hat{\bar{\mathbf{r}}}+\bar{\mathbf{r}}_{*}\|_{2}\}\leq\varepsilon_{2}\). Using Lemma 2.1, \(\min\{\|\hat{\mathbf{r}}-\mathbf{r}_{*}\|_{2},\|\hat{\mathbf{r}}+\mathbf{r}_{*}\|_{2}\}\leq 4\sqrt{\frac{\lambda_{\max}}{\lambda_{\min}}}\varepsilon_{2}\). Substituting \(\varepsilon_{2}=\frac{\varepsilon}{4}\sqrt{\frac{\lambda_{\min}}{\lambda_{\max}}}\), \(\min\{\|\hat{\mathbf{r}}-\mathbf{r}_{*}\|_{2},\|\hat{\mathbf{r}}+\mathbf{r}_{*}\|_{2}\}\leq\varepsilon\) w.p. at least \(1-\delta\). The conditions on \(\varepsilon_{1}\) are satisfied by taking it to be \(\leq O\left(\frac{\kappa_{3}\varepsilon^{2}\lambda_{\min}}{\theta(2+\kappa_{3})\lambda_{\max}}\right)\), and thus we can take \(m\geq O\left(\left(d/\varepsilon^{4}\right)\log(d/\delta)\left(\frac{\lambda_{\max}}{\lambda_{\min}}\right)^{2}\theta^{2}\left(\frac{2+\kappa_{3}}{\kappa_{3}}\right)^{2}\right)=O\left(\left(d/\varepsilon^{4}\right)\log(d/\delta)\left(\frac{\lambda_{\max}}{\lambda_{\min}}\right)^{4}q^{4}\right)\), using Defn. 4.3. This completes the proof.
## 5 Proof Sketch for Theorem 1.3 and 1.5
**Theorem 1.3**: **Case \(N(\mathbf{0},\mathbf{I}),f(\mathbf{x})=\mathsf{pos}(\mathbf{r}_{*}^{\mathsf{T}}\mathbf{x}),k\neq q/2\).** We argue that a vector sampled uniformly at random from a bag is distributed as \(\omega\mathbf{X}_{1}+(1-\omega)\mathbf{X}_{0}\) where \(\mathbf{X}_{a}\sim N(\mathbf{0},\mathbf{I})\) conditioned on \(f(\mathbf{X}_{a})=a\) and \(\omega\) is an independent \(\{0,1\}\)-Bernoulli r.v. s.t. \(\Pr(\omega=1)=k/q\). This, along with the fact that uncorrelated Gaussians are independent, allows us to show that the expectation is \(\mathbf{0}\) in any direction orthogonal to \(\mathbf{r}_{*}\) and allows us to compute the expectation in the direction of \(\mathbf{r}_{*}\). We then use Lemma 3.1 to get the sample complexity expression. The detailed proof is in Appendix A.
**Theorem 1.5**: **Case \(N(\mathbf{\mu},\mathbf{\Sigma}),f(\mathbf{x})=\mathsf{pos}(\mathbf{r}_{*}^{ \mathsf{T}}\mathbf{x}+c_{*})\).** We start by generalizing the high probability geometric error bound in Lemma 4.1 to this case (Lemma C.1 proven in Appendix C.1). We redefine \(\kappa_{1},\kappa_{2},\kappa_{3}\) and \(\theta\) so that they generalize to this case. The rest of the proof is similar to Section 4.1. This introduces an extra factor of \(\ell^{2}\) to the sample complexity which comes from Lemma 3.1. Next, assuming the geometric bound, we give a high probability bound on the generalization error similar to Lemma 4.2 (Lemma C.2). Using the geometric bound and Lemma 2.3, we bound the \(\mathsf{BagErr}_{\mathsf{sample}}(h,\mathcal{M})\) where \(h(\mathbf{x})=\mathsf{pos}(\hat{\mathbf{r}}^{\mathsf{T}}\mathbf{x}+c_{*})\) and \(\mathcal{M}\) is a sample. Lemma 2.3 introduces the term of \(O((\sqrt{\lambda_{\max}}+\|\boldsymbol{\mu}\|_{2})/\sqrt{\lambda_{\min}})\) instead of \(O(\sqrt{\lambda_{\max}/\lambda_{\min}})\) as it did in the proof of Lemma 4.1. Bounding the sample error also introduces terms with \(\Phi(\ell)\). Notice that \(h^{*}(\mathbf{x})=\mathsf{pos}(\hat{\mathbf{r}}^{\mathsf{T}}\mathbf{x}+\hat{c})\) will satisfy the same number of samples in \(\mathcal{M}\) as \(h\) by design of the algorithm to find \(\hat{c}\). Thus, \(\mathsf{BagErr}_{\mathsf{sample}}(h,\mathcal{M})\) has the same bound. We then use Theorem 2.2 to bound the generalization error of \(h^{*}\). Lemma C.1 and Lemma C.2 together imply Theorem 1.5.
## 6 Experimental Results
_General Gaussian_. We empirically evaluate our algorithmic technique on centered and general Gaussian distributions for learning homogeneous LTFs. For homogeneous LTFs the general case algorithm (Alg. 4 in Appendix C) boils down to Alg. 2 in Sec. 4. The experimental LLP datasets are created using samples from both balanced as well as unbalanced bag oracles. In particular, for each dimension \(d\in\{10,50\}\) and each pair \((q,k)\in\{(2,1),(3,1),(10,5),(10,8),(50,25),(50,35)\}\) we create 25 datasets as follows: for each dataset (i) sample a random unit vector \(\mathbf{r}_{*}\) and let \(f(\mathbf{x}):=\mathsf{pos}\left(\mathbf{r}_{*}^{\mathsf{T}}\mathbf{x}\right)\), (ii) sample \(\boldsymbol{\mu}\) and \(\mathbf{\Sigma}\) randomly (see Appendix F for details), (iii) sample \(m=2000\) training bags from \(\mathsf{Ex}\big{(}f,N(\boldsymbol{\mu},\mathbf{\Sigma}),q,k\big{)}\), (iv) sample \(1000\) test instances \((\mathbf{x},f(\mathbf{x}))\), \(\mathbf{x}\leftarrow N(\boldsymbol{\mu},\mathbf{\Sigma})\). We fix \(\boldsymbol{\mu}=\mathbf{0}\) for the centered Gaussian case.
For comparison we include the random LTF algorithm in which we sample \(100\) random LTFs and return the one that satisfies the most bags. In addition, we evaluate the algorithm of [25] on \((q,k)=(2,1)\), and the algorithm of [26] on \((q,k)=(3,1)\). We measure the accuracy of each method on the test set of each
dataset. The algorithms of [25, 26] are considerably slower and we use 200 training bags for them. The results for the centered Gaussian are in Table 1(a) and for the general Gaussian in Table 1(b). We observe that our algorithms perform significantly better in terms of accuracy than the comparative methods in all the bag distribution settings. Further, our algorithms have significantly lower error bounds (see Appendix F).
Notice in Tables 1(a) and 1(b) that the test accuracy for Algorithm 2 decreases with an increase in \(q\) and \(d\). This is consistent with the sample complexity expressions in Thm. 1.4 and Thm. 1.5. Also, notice that the test accuracy of Algorithm 2 for the general Gaussian (Table 1(b)) is usually lower than that for the centered Gaussian (Table 1(a)). This supports the theoretical result that the sample complexity increases with the increase in \(|\ell|\).
Appendix F has additional details and further experiments for the \(N(\mathbf{0},\mathbf{I})\) with homogeneous LTFs, \(N(\boldsymbol{\mu},\boldsymbol{\Sigma})\) with non-homogeneous LTFs as well as on noisy label distributions.
## 7 Conclusion and Future work
Our work shows that LTFs can be efficiently properly learnt in the LLP setting from random bags with given label proportion whose feature-vectors are sampled independently from a Gaussian space, conditioned on their underlying labels. For the simple case of \(N(\mathbf{0},\mathbf{I})\) distribution and bags with unbalanced labels we provide a mean estimation based algorithm. For the general scenarios we develop a more sophisticated approach using the principal component of a matrix formed from certain covariance matrices. To resolve the ambiguity between the obtained solutions we employ novel generalization error bounds from bag satisfaction to instance classification. We also show that subgaussian concentration bounds are applicable on the thresholded Gaussians, yielding efficient sample complexity bounds. Our experimental results validate the performance guarantees of our algorithmic techniques.
In future work, classes of distributions other than Gaussian could be similarly investigated. Classifiers other than LTFs are also interesting to study in the LLP setting.
Table 1: Algorithm A2 vs. rand. LTF (R) vs SDP algorithms (S) |
2306.15606 | Photometric variability of blue straggler stars in M67 with TESS and K2 | Blue straggler stars (BSSs) are formed through mass transfer or mergers in
binaries. The recent detections of white dwarf (WD) companions to BSSs in M67
suggested a mass transfer pathway of formation. In search of a close companion
to five BSSs in M67 that are known to be spectroscopic binaries, we study the
light curves from K2 and TESS data. We use PHOEBE to analyse the light curves
and estimate the properties of the companions. We detect variability in WOCS
1007, and the light curve is dominated by ellipsoidal variation. Using the
light curve and radial velocity measurements, we estimate its orbital period to
be 4.212$\pm$0.041 d and $e$ = 0.206$\pm$0.002. The mass of the companion is
estimated to be 0.22$\pm$0.05 M$_{\odot}$ with a radius of 0.078$\pm$0.027
R$_{\odot}$, confirming it to be a low mass WD with T$_{\rm eff}$ =
14300$\pm$1100 K. The estimated mass of the BSS, 1.95$\pm$0.26 M$_{\odot}$, is
similar to that estimated from isochrones. The BSS in WOCS 1007 shows $\delta$
Scuti pulsations, although it is slightly deformed and likely to be formed
through an efficient mass transfer. Though we detect a light curve for WOCS
4003 showing grazing eclipse with ellipsoidal variation, the estimated
parameters are inconclusive. Apart from the 0.44 d period, we found smaller
eclipses with a period of 1.1 d, suggesting a compact triple system. In the
case of WOCS 4006, WOCS 5005, and WOCS 1025, no eclipses or pulsations are
detected, confirming the absence of any short-period inner binary with high
inclination in these BSSs. | Nagaraj Vernekar, Annapurni Subramaniam, Vikrant V. Jadhav, Dominic M. Bowman | 2023-06-27T16:45:12Z | http://arxiv.org/abs/2306.15606v1 | # Photometric variability of blue straggler stars in M67 with _Tess_ and _K2_
###### Abstract
Blue straggler stars (BSSs) are formed through mass transfer or mergers in binaries. The recent detections of white dwarf (WD) companions to BSSs in M67 suggested a mass transfer pathway of formation. In search of a close companion to five BSSs in M67 that are known to be spectroscopic binaries, we study the light curves from _K2_ and _TESS_ data. We use PHOEBE to analyse the light curves and estimate the properties of the companions. We detect variability in WOCS 1007, and the light curve is dominated by ellipsoidal variation. Using the light curve and radial velocity measurements, we estimate its orbital period to be 4.212\(\pm\)0.041 d and \(e\) = 0.206\(\pm\)0.002. The mass of the companion is estimated to be 0.22\(\pm\)0.05 M\({}_{\bigodot}\) with a radius of 0.078\(\pm\)0.027 R\({}_{\bigodot}\), confirming it to be an LM WD with T\({}_{\rm eff}\) = 14300\(\pm\)1100 K. The estimated mass of the BSS, 1.95\(\pm\)0.26 M\({}_{\bigodot}\), is similar to that estimated from isochrones. The BSS in WOCS 1007 shows \(\delta\) Scuti pulsations, although it is slightly deformed and likely to be formed through an efficient mass transfer. Though we detect a light curve for WOCS 4003 showing a grazing eclipse with ellipsoidal variation, the estimated parameters are inconclusive. Apart from the 0.44 d period, we found smaller eclipses with a period of 1.1 d, suggesting a compact triple system. In the case of WOCS 4006, WOCS 5005, and WOCS 1025, no eclipses or pulsations are detected, confirming the absence of any short-period inner binary with high inclination in these BSSs.
keywords: stars: blue stragglers - binaries: eclipsing - open clusters and associations: M67 - techniques: photometric
## 1 Introduction
Open clusters are known to host a large fraction of binaries. The evolution of single and binary stars is dictated by the complex interaction of stars within the cluster. Blue straggler stars (BSSs) are dwarfs that are brighter and bluer than the stars at the main sequence turn-off of a cluster and are thought to have gained mass to continue to stay on the main sequence. The formation pathways of BSSs are through merger or mass transfer in binaries or triples (McCrea, 1964; Perets & Fabrycky, 2009; Naoz & Fabrycky, 2014). The rich, old and dynamically active open cluster, M67, is known to host several BSSs (Manteiga et al., 1989; Gilliland & Brown, 1992; Deng et al., 1999; Bruntt et al., 2007; Lu et al., 2010; Geller et al., 2015). In M67, about \(\sim\)79 per cent of the BSSs are in binaries with a range of orbital periods and eccentricities (Latham & Milone, 1996; Sandquist & Shetrone, 2003). The high binary fraction combined with a large period range suggests that BSSs could be formed through different mechanisms.
The mass transfer in binaries results in the formation of a stellar remnant, mostly a white dwarf (WD), as a companion to the BSS. A BSS formed through the merger of stars does not have a binary companion. As the stellar density in open clusters is too low for direct collisions, such a formation pathway is less common (Jadhav & Subramaniam, 2021). Detections of WD companions to BSSs have implied mass transfer as a formation pathway (Gosnell et al., 2015, 2019; Sindhu et al., 2019, 2020). The BSSs are thought to be formed via case A, B, or C mass transfer, based on whether the donor transfers mass within the main sequence, before the core-helium burning stage, or after core-helium exhaustion (Kippenhahn & Weigert, 1967; Lauterborn, 1970). Case A mass transfer could lead to the coalescence of the binary, resulting in a single massive BSS, or a binary with a less massive BSS and a short-period companion. Case B mass transfer produces a BSS in a short-period system with a helium core remnant of mass \(<\) 0.45 M\({}_{\odot}\) from an initial mass of \(<\) 3 M\({}_{\odot}\). In Case C, we expect a CO WD with mass \(\geq\) 0.5 M\({}_{\odot}\). BSSs in long-period binaries are expected to have undergone Case C mass transfer (Perets, 2015).
UV investigations have revealed the presence of hot WD companions to the BSSs and main sequence stars (Knigge et al., 2008; Gosnell et al., 2015; Sindhu et al., 2019; Leiner et al., 2019; Jadhav et al., 2019; Sahu et al., 2019; Sindhu et al., 2020; Subramaniam et al., 2020). Studies by Sindhu et al. (2019) and Pandey et al. (2021) on BSSs in M67 revealed many such hot companions. Based on the number of low-mass (LM) WD companion detections, Sindhu et al. (2020) suggested that 35 % of the stars in their sample must have undergone Case B mass transfer. The issues that require reconciling are as follows: (1) BSSs with LM WD should be in short-period systems such as WOCS 1007, whereas the spectroscopic orbital periods are found to be \(\geq\) 1000 d (WOCS 5005 and WOCS 1025); (2) photomet
ric masses of BSSs (that are estimated using isochrones) are thought to be over-estimated when compared to kinematic masses (which are difficult to estimate); (3) parameter estimations of the BSSs and the hot companions have been done by modelling multi-wavelength SEDs, and it is good to have an independent estimation.
This paper sheds light on the above using the light curve analysis of _K2_ and _TESS_ data for a few select BSSs in M67. In this study, we present (1) the detection of pulsation and (2) the detection and modelling of light curves to study the properties of the BSSs and the binary systems. The paper is arranged as follows: Sec. 2 describes the data and analytical methods, Sec. 3 contains the results for individual systems, and we discuss the results in Sec. 4.
## 2 Data and Methods
### _K2_
This study uses data from the _K2_ mission. The _K2_ mission was divided into 19 campaigns that were about 80 days in length (Howell et al., 2014). _K2_ has two types of data, short-cadence (1 min) and long-cadence (30 min). The accuracy of the _K2_ photometry is not as high as _Kepler_ photometry due to inaccurate pointing, leading to a drift in the field of view. To correct for the drift, thrusters aboard the satellite were fired once every 6 hrs (Howell et al., 2014). This process introduces systematics into the light curves, which are not inherent to the stars. Therefore, we use the light curves corrected by the _K2_ Systematics Correction (K2SC) algorithm (Aigrain et al., 2016). K2SC is a detrending algorithm that uses Gaussian processes to determine and remove all the systematics due to the pointing jitter. _K2_ observed M67 in campaigns 05, 16, and 18.
### _Tess_
The Transiting Exoplanet Survey Satellite (_TESS_) (Ricker et al., 2015) is a broad-band photometric survey mission covering most of the sky. Each hemisphere is divided into 13 sectors and is observed for a year, with each sector being observed for 27.4 days. _TESS_ provides 30-min cadence data for over 10 million stars through its full-frame images. And for a pre-determined sample of 200,000 stars (Nascimbeni et al., 2016; Raddi et al., 2017), _TESS_ collects 2-min cadence data. In the extended mission that started in July 2020, the cadence was augmented to 20 s and 10 min for the short and long cadence, respectively. The light curves for the stars with short cadence data are extracted by the _TESS_ Science Processing Operation Center (SPOC) pipeline (Jenkins et al., 2016) through Simple Aperture Photometry (SAP). As SAP flux retains systematics, the pipeline performs corrections to it, referred to as Pre-search Data Conditioned SAP (PDCSAP) flux. During its extended mission, _TESS_ has observed M67 in sectors 44, 45, and 46 (2021 October 12 to 2021 December 30).
### Sample selection
M67 is an excellent cluster to study the BSS population due to the extensive literature available about its 14 known BSSs (Latham and Milone, 1996; Yakut et al., 2009; Geller et al., 2015; Jadhav et al., 2019; Sindhu et al., 2020). Among these, our primary focus is on BSSs with possible binary companions, where light curves could be used to constrain the binary properties. Five systems listed in Table 1 are found to be suitable for carrying out a detailed study. All the systems are observed by both _TESS_ and _K2_ missions, but not all systems have corrected data from these missions. Two systems, WOCS 1007 and WOCS 4006, have light curves corrected by the K2SC algorithm and the SPOC pipeline. The other three systems do not have corrected _TESS_ light curves but have a K2SC light curve. We note that the crowding of the systems in M67 increases the complexity of extracting and correcting the _TESS_ data obtained through full-frame images, as it could suffer from severe contamination that adversely affects the analysis.
### Light curve analysis
If a BSS is found in a binary system with a compact object like a WD, the eclipses in the light curves are expected to be shallow and may not be visually evident. In most systems, the amplitude of variability in the out-of-eclipse regions is comparable to the depth of the eclipses. An amplitude spectrum was calculated using a Fourier transform to extract the eclipses from such light curves. This is because when a light curve of an eclipsing system is phase folded using the significant frequencies obtained through the amplitude spectrum, the eclipses line up on top of each other for the correct orbital period, appearing as two dips (primary and secondary eclipses) in the phase folded curve. The phase-folded light curve can also show out-of-eclipse sinusoidal variability. This could be caused by pulsations in the star, surface activity such as spots, or ellipsoidal variability (see Sec. 2.4.3).
#### 2.4.1 Orbital period estimation
A discrete Fourier transform of the light curves provides the amplitude spectrum of the variability. The significant frequencies were extracted from the amplitude spectrum through an iterative pre-whitening process using the software tool Pythia1. For each step, the frequency with the highest S/N in the amplitude spectrum was selected with the corresponding amplitude and phase calculated from non-linear least-squares fitting of a sinusoid to the light curve (see Bowman and Michielsen (2021)). Frequencies were extracted until the S/N was \(<\) 4 (Breger et al., 1993). Once all the significant frequencies were obtained, Period04 (Lenz and Breger, 2014) was used to phase-fold the light curve for each of those frequencies to identify possible eclipses.
Footnote 1: [https://github.com/colej/pythia](https://github.com/colej/pythia)
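As a concrete illustration, the pre-whitening loop described above can be sketched in a few lines of Python; this is a simplified stand-in for Pythia, not the tool itself (the S/N threshold follows Breger et al. 1993, and the array names `time` and `flux` are assumptions):

```python
import numpy as np
from astropy.timeseries import LombScargle
from scipy.optimize import curve_fit

def sine(t, amp, freq, phase):
    return amp * np.sin(2.0 * np.pi * (freq * t + phase))

def prewhiten(time, flux, snr_limit=4.0, max_terms=50):
    """Iteratively extract significant frequencies until S/N < snr_limit."""
    residual = flux - np.mean(flux)
    extracted = []
    for _ in range(max_terms):
        freq, power = LombScargle(time, residual).autopower(nyquist_factor=1)
        peak = np.argmax(power)
        snr = np.sqrt(power[peak]) / np.median(np.sqrt(power))  # crude S/N proxy
        if snr < snr_limit:
            break
        # refine amplitude, frequency and phase by non-linear least squares
        p0 = [np.std(residual) * np.sqrt(2.0), freq[peak], 0.0]
        popt, _ = curve_fit(sine, time, residual, p0=p0)
        extracted.append(popt)                 # (amplitude, frequency, phase)
        residual = residual - sine(time, *popt)
    return extracted
```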
#### 2.4.2 Binary modelling
To understand the formation pathway of a BSS, it is essential to obtain constraints on the nature of its companion. As eclipsing systems provide a direct way of determining physical properties, the companion's nature can be constrained through binary modelling. This study used an eclipsing binary modelling software called PHOEBE 1.0\({}^{2}\) (Prsa and Zwitter, 2005). PHOEBE uses a model based on the Wilson-Devinney code (Wilson and Devinney, 1971). It simultaneously fits both the light curve and the radial velocity curve to determine the stellar and orbital parameters of a system.
Footnote 2: [http://phoebe-project.org/](http://phoebe-project.org/)
The modelling of systems was carried out in steps. The first step was to manually fit the radial velocity curve of the system to estimate the mass ratio, eccentricity, and argument of the periastron. After obtaining a close enough solution, an iterative minimisation method was carried out to find the best-fit model. This was carried out by repeatedly performing the differential correction method (as described in Prsa and Zwitter, 2005), where the input parameter set was updated
for the next iteration until the convergence of the cost function. The light curve was introduced into the modelling once the best-fit radial velocity model was obtained. Similar to the radial velocity curve, the two curves are manually fitted, followed by an iterative minimisation. The light curve was directly modelled for a system without radial velocity data.
#### 2.4.3 Ellipsoidal Variability
In close binary systems, stars can be tidally distorted due to the gravitational force of each other. The non-spherical shape of the companions causes a variation in the light curve known as ellipsoidal variability (Beech, 1985; Morris, 1985). Ellipsoidal variables have lower inclination angles, such that the dominant effect on the light curve is due to the distorted shape of the star. The light curves of ellipsoidal variables are usually quasi-sinusoidal in nature and can be modelled using the Fourier series (Beech, 1985; Morris, 1985; Morris & Naftilan, 1993; Dal & Sipahi, 2013):
\[L(\phi)=A_{0}+\sum_{n=1}^{N}A_{n}\,\cos(n\phi)+\sum_{n=1}^{N}B_{n}\,\sin(n\phi) \tag{1}\]
where \(\phi\) is the orbital phase and \(N\) is the number of harmonics fitted. The coefficients of the \(\cos(n\phi)\) terms provide information about the nature of the variation present in the light curve. The coefficient of the \(\cos(2\phi)\) term provides the amplitude of the ellipsoidal variation. In contrast, the coefficient of the \(\cos(\phi)\) term provides the amplitude of light variations taking place within the orbital period, such as temperature spots and magnetic activity (Henry & Kaye, 1999; Dal & Sipahi, 2013). If ellipsoidal variability is dominant in a light curve, then the coefficient of the \(\cos(2\phi)\) term is the largest. Here, non-linear least-squares fitting of the Fourier series was used to characterise the system's ellipsoidal variability.
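For illustration, Eqn. (1) can be fitted with standard non-linear least squares; the sketch below uses synthetic stand-in data in place of an observed, phase-folded light curve:

```python
import numpy as np
from scipy.optimize import curve_fit

def fourier_series(phi, *coeffs):
    """coeffs = [A0, A1..AN, B1..BN]; phi is the orbital phase in radians."""
    n = (len(coeffs) - 1) // 2
    a0, a, b = coeffs[0], coeffs[1:1 + n], coeffs[1 + n:]
    out = a0 * np.ones_like(phi)
    for k in range(1, n + 1):
        out += a[k - 1] * np.cos(k * phi) + b[k - 1] * np.sin(k * phi)
    return out

phi = np.linspace(0.0, 2.0 * np.pi, 500)     # stand-in for observed phases
flux = 1.0 + 1e-3 * np.cos(2.0 * phi)        # stand-in for binned flux
p0 = np.r_[1.0, np.zeros(8)]                 # A0 plus four (A_n, B_n) pairs
popt, pcov = curve_fit(fourier_series, phi, flux, p0=p0)
A2 = popt[2]   # coefficient of cos(2*phi): the ellipsoidal-variability amplitude
```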
## 3 Results
### Wocs 1007
WOCS 1007 is a short-period eccentric (\(e=0.205\)) binary system (Lacy, 1992) with an orbital period of 4.18 d (Milone & Latham, 1992). The primary companion has a spectral type of B9V (Allen & Strom, 1995) with a mass of 2.0-2.2 M\({}_{\odot}\)(Lacy, 1992). The star is known to be a \(\delta\) Scuti pulsator (Tkachenko et al., 2013) with a rotational velocity of \(v\sin i=79\) km s\({}^{-1}\) (Bertelli, 2018). Sindhu et al. (2019) used SED analysis to estimate the parameters of the BSS and the companion. By fitting a double SED, they estimated the BSS to have T\({}_{\rm eff}=7500\)-7750 K with a radius of 2.94-2.97 R\({}_{\odot}\) and the hot companion to have T\({}_{\rm eff}=13\,250-13\,750\) K with a radius of 0.095\(\pm\)0.021 R\({}_{\odot}\). By comparing the parameters of the hot companion with the models, they classified the companion as an LM WD and estimated a mass of 0.19 M\({}_{\odot}\). _Gaia_ DR3 (Gaia Collaboration et al., 2022) classified this source as a non-variable and non-eclipsing SB1 binary. The parameters from _Gaia_ DR3 are included in Table 2. As the secondary star is optically sub-luminous, we can assume the photometric astrophysical parameters provided in _Gaia_ DR3 represent the primary (with the caveats presented in the _Gaia_ validation papers).
We used the PDCSAP flux from _TESS_ and the K2SC light curve from _K2_ for the frequency analysis. We performed frequency analysis on both the light curves to cross-check whether the significant frequencies were present across the instruments. Fig. 1a shows the amplitude spectrum of WOCS 1007 obtained using the _TESS_ and _K2_ data in blue and orange, respectively. From the _TESS_ data, a total of 42 significant frequencies were obtained with the dominant frequency at 19.60 d\({}^{-1}\), whereas for the _K2_ data, only 25 significant frequencies were obtained with the dominant frequency at 17.01 d\({}^{-1}\). From Fig. 1a, it is evident that the majority of frequencies are present in both the data sets with different amplitudes except for a few frequencies. The prominent peak at 5.5 d\({}^{-1}\) seen in the _TESS_ data is not present in the _K2_ data. Upon checking the light curves and amplitude spectra of all the neighbouring stars, this peak was found to be the second harmonic of the orbital frequency of a nearby source (about 53"). As the peak and its harmonics were present due to contamination, they were excluded from the analysis.
Here, we are more interested in the low-frequency peaks (f < 5 d\({}^{-1}\)), which are present both in _K2_ and _TESS_ data with similar amplitudes. For further analysis, only the _TESS_ data were used because of the shorter 2-min cadence compared to the 30-min _K2_ cadence. In the low-frequency region of the amplitude spectrum, there are five significant frequencies at 0.236, 0.477, 0.717, 0.955 and 1.195 d\({}^{-1}\) as shown in Fig. 1a. The first frequency is the orbital frequency (f\({}_{\rm orb}\)), and the other four are harmonics of the orbital frequency (\(n\)f\({}_{\rm orb}\)). A dip resembling an eclipse was obtained by phase folding the light curve at the 0.237 d\({}^{-1}\) frequency (4.212 d period). This orbital period is consistent with the period found by Milone & Latham (1992). As all the low frequencies are associated with the binarity, we estimate the period of the WOCS 1007 binary system from its light curve for the first time, along with the \(\delta\) Scuti pulsations (Breger, 2000; Bowman & Kurtz, 2018).
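For reference, the phase-folding step can be written compactly; the sketch below assumes the `time` and `flux` arrays introduced in the previous sketch and uses the orbital frequency found above:

```python
import numpy as np

f_orb = 0.237                          # d^-1, from the amplitude spectrum above
phase = np.mod(time * f_orb, 1.0)      # orbital phase in [0, 1)
order = np.argsort(phase)
ph, fl = phase[order], flux[order]

# bin into 500 phase bins, as done for the PHOEBE modelling below
edges = np.linspace(0.0, 1.0, 501)
idx = np.digitize(ph, edges) - 1
binned = np.array([fl[idx == k].mean() for k in range(500)])
```

Eclipses line up on top of each other only when the light curve is folded at the correct period, which is what makes this a useful diagnostic.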
For the binary modelling of WOCS 1007, we performed PHOEBE analysis using three different models (detached binary, semi-detached with the primary filling its Roche lobe, and unconstrained binary). The fitting was performed multiple times using each model with different initial guesses for the free parameters. Along with the radial velocity data from Milone et al. (1991), we used the binned phase folded curve (binned into 500 bins) to reduce the amplitude of the pulsations and the computation time of the modelling. In all the runs, the period of the system was fixed as 4.212 d, obtained from the frequency analysis. The albedo and gravity brightening of the primary were fixed to 1 due to the radiative envelope of the primary star. Given that the light curve is available only in one passband, the T\({}_{\rm eff}\) of the primary was fixed to 7600 K (Sindhu et al., 2019). As the semi
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Name & _Gaia_ DR3 source id & RA & Dec & P (d) & \(e\) & Corrected data & Remarks \\ \hline WOCS 1007 & 604918213472162432 & 08 51 34.31 & +11 51 10.49 & 4.182 & 0.205 & _K2_, _TESS_ & SB1\({}^{a}\), \(\delta\) Scuti\({}^{b}\) \\ WOCS 4006 & 604918179110923520 & 08 51 32.57 & +11 50 40.63 & - & - & _K2_, _TESS_ & SB1\({}^{a}\), \(\delta\) Scuti\({}^{b}\) \\ WOCS 4003 & 604918014617889792 & 08 51 28.14 & +11 49 27.49 & 0.441 & 0.0 & _K2_ & SB1\({}^{a}\) \\ WOCS 5005 & 604917285757663872 & 08 51 19.90 & +11 47 00.44 & 4913 & 0.34 & _K2_ & SB1\({}^{a}\) \\ WOCS 1025 & 604896566835438464 & 08 51 37.69 & +11 37 03.79 & 1154 & 0.07 & _K2_ & SB1\({}^{a}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Details of known parameters of the targets. Remarks are taken from \({}^{a}\)Geller et al. (2015) and \({}^{b}\)Tkachenko et al. (2013).
detached model could not properly reproduce the phase folded curve, leading to a large cost function, it was discarded from the analysis. All the runs of the other two models converged close together, irrespective of the initial conditions. Due to this convergence, we classified the system as a detached binary. Fig. 1b shows the best-fitting model obtained for the light curve and radial velocity data. A critical feature of the fit is that the ingress and egress of the eclipse in the synthetic curve are not equal. Table 2 shows the values and errors of the fitting parameters. The errors from the PHOEBE GUI are usually underestimated. Hence, we used a Python script\({}^{3}\) to estimate errors (Prsa & Zwitter, 2005). This code allows a user to run the Markov Chain Monte Carlo (MCMC) method (on the PHOEBE 1.0 backend) using the emcee package (Foreman-Mackey et al., 2013), which is a pure-Python implementation of Goodman & Weare's Affine Invariant MCMC Ensemble sampler (Goodman & Weare, 2010). For the MCMC sampling, we provided the following nine free parameters: semi-major axis, centre-of-mass velocity, inclination, eccentricity, mass ratio, argument of periastron, two surface potentials and secondary T\({}_{\rm eff}\). The number of walkers was set to 50, and the total number of iterations to 3000. The first 200 iterations were discarded as the burn-in period. Figure A1 shows the corner plots of the free parameters. The standard deviation of the marginalised posterior parameter distributions, i.e. 1\(\sigma\), was taken as the error. The orbital parameters and masses of the companions obtained from PHOEBE agree with the solution from Milone & Latham (1992). The secondary temperature of 14300\(\pm\)1100 K and radius of 0.078\(\pm\)0.027 R\({}_{\odot}\) are consistent with the values obtained through SED fitting by Sindhu et al. (2019). If we vary the T\({}_{\rm eff}\) of the primary by 500 K, the stellar parameters do not show a significant change, but orbital parameters such as the inclination and mass ratio were found to show a change of the order of 5-8 per cent. The light curve analysis, in combination with the
Figure 1: _1a:_ Amplitude spectrum of WOCS 1007 obtained through Fourier analysis of the _TESS_ (blue) and _K2_ (orange) data. The low-frequency region of WOCS 1007 is shown in the inserted panel with the red triangle representing the orbital frequency of the system (0.237 d\({}^{-1}\) or 4.212 d) and the green triangles representing the harmonics of the orbital frequency. _1b:_ Phase folded curve (top panels) and radial velocity curve (bottom panels) fitting of WOCS 1007 obtained from the PHOEBE modelling along with the residuals.
radial velocity, finds the secondary to have a mass of 0.22\(\pm\)0.05 M\({}_{\sun}\), suggesting it to be an LM WD. This is in excellent agreement with the results from the SED analysis. We, therefore, confirm that the WOCS 1007 is indeed a BSS+LM WD system. The log(g) of the secondary is found to be 6.00\(\pm\)0.27 cm s\({}^{-2}\), which agrees with the values expected for LM WDs. The mass of the primary is estimated as 1.95\(\pm\)0.26 M\({}_{\sun}\), which is also similar to the literature values and estimated mass using isochrones (Sindhu et al., 2018).
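A schematic version of the MCMC error-estimation step described above is sketched below; `phoebe_forward` is a hypothetical wrapper standing in for the PHOEBE 1.0 backend call, and `best_fit`, `obs_flux`, and `obs_err` are assumed arrays, while the walker and iteration settings follow the text:

```python
import numpy as np
import emcee

def log_prob(theta, obs_flux, obs_err):
    model_flux = phoebe_forward(theta)   # hypothetical wrapper around PHOEBE 1.0
    return -0.5 * np.sum(((obs_flux - model_flux) / obs_err) ** 2)

ndim, nwalkers, nsteps, burn_in = 9, 50, 3000, 200      # settings from the text
p0 = best_fit + 1e-4 * np.random.randn(nwalkers, ndim)  # ball around the optimum
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(obs_flux, obs_err))
sampler.run_mcmc(p0, nsteps, progress=True)

chain = sampler.get_chain(discard=burn_in, flat=True)
errors = chain.std(axis=0)    # 1-sigma of the marginalised posteriors
```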
As seen from the geometry of the binary system, shown in Fig. 2, it is evident that the components do not eclipse each other. An inclination of 73.6\(\pm\)2.4\({}^{\circ}\) is not high enough to produce eclipses in binaries containing compact objects. Therefore, the light curve variation is likely due to the ellipsoidal variability caused by the distorted shape of either or both stars. The fit obtained using the Fourier series given in Eqn. 1 is shown in Fig. 3 and the coefficients are given in Table 3. The coefficient of the cos(2\(\phi\)) term (A\({}_{2}\)) is the largest, indicating that the ellipsoidal variability dominates the light curve. Even though ellipsoidal variability is dominant, the presence of the \(\delta\) Scuti pulsations in the BSS indicates that the distortions in shape are not large enough to inhibit pulsations. This is supported by Fig. 2, where we do not see a significant distortion.
Therefore, WOCS 1007 is a binary system that shows ellipsoidal variability, where the primary is a slightly distorted BSS showing \(\delta\) Scuti pulsations, and the secondary is an LM WD in a short-period orbit.
### Wocs 4003
WOCS 4003 was identified as an F3V type star by Ebbighausen (1940) and as an eclipsing binary by Ferreira Lopes et al. (2015). Yakut et al. (2009) conducted a detailed photometric analysis of WOCS 4003 and found an orbital period of 0.44 d. Even though the nature of the light curve suggested a contact binary, some unusual features were reported. They observed the O'Connell effect in the light curve where the maxima were not of equal brightness. Rucinski (1993) and Selam (2004) compared the Fourier coefficients (a\({}_{4}\) and a\({}_{2}\)) to differentiate between contact and non-contact systems. They found WOCS 4003 to be deviating from the boundary condition expected for a W UMa system. Later, the system was classified as semi-detached using the diagram from Paczynski et al. (2006). From PHOEBE modelling, Yakut et al. (2009) found the effective temperatures of the component stars to be 6900 K and 5200-5830 K with radii of 0.38-0.45 R\({}_{\sun}\) and 0.29-0.33 R\({}_{\sun}\), respectively. Their modelling also included three spots (two cold spots and one hot spot) on the cooler star. Jadhav et al. (2019) noted a large UV excess flux in the SED of WOCS 4003, which is not expected for the binary parameters obtained by Yakut et al. (2009). Though Jadhav et al. (2019) estimated the probable parameters of a binary companion, they suggested the UV excess to be of a different origin, related to the presence of X-ray emission from the system. Their estimates of only the primary are likely to be reliable (T\({}_{\rm eff}\) = 6500\(\pm\)125 K and radius = 1.78\(\pm\)0.02 R\({}_{\sun}\)) unless the secondary has a large contribution to flux. Gaia Collaboration et al. (2022) classified WOCS 4003 as a variable eclipsing binary with a period of 0.441 d. Unfortunately, _Gaia_ DR3 does not provide additional orbital parameters, and the spectro-photometric parameters may be incorrect as both components are optically bright.
Only the K2SC light curve was used for the analysis due to the unavailability of PDCSAP flux from _TESS_, and 17 significant frequencies were found in the amplitude spectrum (Fig. 4a). There are two peaks in the low-frequency region, as shown by the green and red triangles. The frequency of 2.265 d\({}^{-1}\) (period of 0.44 d) is taken as the orbital frequency, and 4.53 d\({}^{-1}\) as its harmonic. The orbital period derived is consistent with the period found by Yakut et al. (2009). We also report the presence of the O'Connell effect in the light curve as seen in Fig. 4b.
Apart from the large eclipses in the phase folded curve, smaller dips were also present at regular intervals throughout the orbital phase (Fig. 4b). By removing the dominant eclipses through subtraction of the sine waves corresponding to the three frequencies (4.53, 2.26, and 6.79 d\({}^{-1}\)) and their respective amplitudes (also known as pre-whitening), we found the period of the small dips to be 1.11 d. Fig. 4c shows the pre-whitened phase folded curve.
For the PHOEBE analysis, due to the uncertainties in the system's configuration, the fitting was performed with three different models (W UMa type over-contact system, non-thermal over-contact system, and semi-detached with the primary filling its Roche lobe). We fixed the period (0.44 d), gravity brightening and albedo of unity for all three models. The surface temperature of the primary was allowed to vary in the W UMa model but was fixed to 6500 K (Jadhav et al., 2019) in the other two. To improve the fit of the synthetic to the observed light curve, we introduced two spots (one hot and one cold spot) in the non-thermal model and one cold spot in the semi-detached model. The fits obtained for the three models are shown in Fig. 4b. Most of the parameters obtained from the three models disagreed with each other, and none of the three agreed with the SED results, especially the secondary radius. The three models returned secondary radii significantly larger than the SED value, leading to an over-luminous system.
In all three models, the inclination of the system was determined to be low. However, due to the larger radii of the components and the small distance between them, they might be exhibiting a grazing eclipse. The geometric configuration (in semi-detached mode) showed that only a small portion of the flux is blocked during eclipses. Therefore, confirming if ellipsoidal variability has a dominant effect on the light curve is vital. For this, Eqn. (1) was fitted to the light curve. Fig. 5 shows the fit, and Table 4 gives the values of the coefficients. We obtained similar values for A\({}_{1}\) and A\({}_{2}\), which are the coefficients of cos (\(\phi\)) and cos (2\(\phi\)). This indicates almost equal contributions from the eclipse and the ellipsoidal variability.
The potential eclipsing and ellipsoidal variable WOCS 4003 is a complicated system, as the total luminosity from the SED and light curve analyses do not match. Notably, the radii and T\({}_{\rm eff}\) obtained from the light curve analysis suggest the system to be very luminous, whereas the SED analysis does not support such a total luminosity. Most importantly, the detected period of 1.11 d suggests a third component, which needs confirmation through further studies.
### Wocs 4006
Manteiga et al. (1989) classified WOCS 4006 as an SB1 system with v sin i = 80-100 km s\({}^{-1}\) (Pritchet & Glaspey, 1991; Rodriguez et al., 2000). Gilliland et al. (1991) discovered \(\delta\) Scuti pulsations in the system and Bruntt et al. (2007) detected 41 significant frequencies. Two studies reported an effective temperature of 7244 K (Gilliland et al., 1991) and 8090 K (Mathys, 1991) for the BSS. Pandey et al. (2021) used a double SED fitting technique and found effective temperatures of 7500\(\pm\)125 and 16000\(\pm\)125 K with radii of 1.61\(\pm\)0.02 and 0.051\(\pm\)0.005 R\({}_{\sun}\) for the primary and secondary, respectively. The _Gaia_ DR3 light curve did not exhibit significant variability, and the _Gaia_ DR3 parameters are as follows: T\({}_{\rm eff}\) = 7958\(\pm\)44 K, log (g) = 4.29\(\pm\)0.02 dex, R = 1.52\(\pm\)0.05 R\({}_{\sun}\) and v sin i = 112\(\pm\)8 km s\({}^{-1}\). This is a rapidly rotating BSS.
As the PDCSAP flux from _TESS_ and K2SC data from _K2_ were available, both were used for the frequency analysis. Fig. 6 shows the
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Parameters & This work & SED\({}^{a}\) & Literature\({}^{b}\) & _Gaia_ DR3 \\ \hline Semi-major axis (\(R_{\sun}\)) & \(14.22\begin{subarray}{c}+0.19\\ -0.16\end{subarray}\) & & & \\ Eccentricity & \(0.206\begin{subarray}{c}+0.002\\ -0.002\end{subarray}\) & & 0.256 & 0.16\(\pm\)0.10 \\ Argument of periastron (rad) & \(5.48\begin{subarray}{c}+0.01\\ -0.01\end{subarray}\) & & & 5.63\(\pm\)0.60 \\ Center of mass velocity ( km s\({}^{-1}\)) & \(34.12\begin{subarray}{c}+0.14\\ -0.13\end{subarray}\) & & & 31.93\(\pm\)1.31 \\ Inclination (deg) & \(73.6\begin{subarray}{c}+2.4\\ -2.3\end{subarray}\) & & & \\ Mass ratio & \(0.115\begin{subarray}{c}+0.002\\ -0.002\end{subarray}\) & & \(>0.092\) & & \\ Period (d) & \(4.212\pm 0.041\) & & 4.182 & 4.185\(\pm\)0.001 \\ Mass 1 (M\({}_{\sun}\)) & \(1.95\begin{subarray}{c}+0.26\\ -0.24\end{subarray}\) & & 2.08 & 1.95\(\pm\)0.06 \\ Mass 2 (M\({}_{\sun}\)) & \(0.22\begin{subarray}{c}+0.05\\ -0.05\end{subarray}\) & \(\sim\)0.2 & \(<0.2\) & \(>\)0.18 \\ Radius 1 (R\({}_{\sun}\)) & \(2.54\begin{subarray}{c}+0.81\\ -0.81\end{subarray}\) & \(2.94\pm 0.04\) & & \\ Radius 2 (R\({}_{\sun}\)) & \(0.078\begin{subarray}{c}+0.023\\ -0.027\end{subarray}\) & \(0.094\pm 0.001\) & & \\ Eff. Temperature 1 (K) & 7600 & & & 7536\(\pm\)38, 7655\(\pm\)10 \\ Eff. Temperature 2 (K) & \(14300\begin{subarray}{c}+1100\\ -1000\end{subarray}\) & 13250–13750 & & & \\ log g\({}_{1}\) ( cm s\({}^{-2}\)) & \(3.91\begin{subarray}{c}+0.16\\ -0.15\end{subarray}\) & \(\sim\) 3.5 & & 3.61\(\pm\)0.01 \\ log g\({}_{2}\) ( cm s\({}^{-2}\)) & \(6.00\begin{subarray}{c}+0.22\\ -0.27\end{subarray}\) & \(\sim\) 7.75–9.0 & & \\ \hline \hline \end{tabular}
\end{table}
Table 2: Orbital and stellar parameters of WOCS 1007 obtained from PHOEBE modelling. The SED and literature parameters are taken from \({}^{a}\)Sindhu et al. (2019) and \({}^{b}\)Milone & Latham (1992), respectively.
Figure 2: Geometric configuration of WOCS 1007 at four phases (0,0.25,0.5 and 0.75), constructed using the PHOEBE results.
amplitude spectra of both data sets. Being located in a dense field, WOCS 4006 suffers from contamination by multiple sources. We see from Fig. 6 that frequencies between 2 and 6 d\({}^{-1}\) are present only in the _TESS_ data (top panel) but not in the _K2_ data (bottom panel). Due to this large contamination, _TESS_ data were not used for the analysis. The amplitude spectrum using _K2_ data shows \(\delta\) Scuti pulsations between 15 and 25 d\({}^{-1}\). We did not detect any eclipses or ellipsoidal variations in the phase-folded _K2_ data. This study, therefore, confirms the source as a \(\delta\) Scuti pulsator and rules out the presence of a close companion (period of a few days) with a large inclination in this SB1 system.
### Wocs 5005
WOCS 5005 is an SB1 system (Geller et al., 2015) with a spectral type of F5IV (Allen & Strom, 1995). Latham & Milone (1996) used spectroscopic data covering over 4000 d to determine the orbital period to be 4913 d with an eccentricity of 0.342. The star is a slow rotator with a rotational velocity of less than 30 km s\({}^{-1}\) (Pritchet & Glaspey, 1991). van den Berg et al. (2004) detected X-rays from Chandra observations and suggested the presence of a close binary. Pandey et al. (2021) used double SED fitting to estimate surface temperatures of 6500 K and 13000 K for the primary and secondary, respectively. They also found the radius of the secondary to be 0.035 R\({}_{\sun}\) and suggested it to be an LM WD, which requires it to be in a short-period orbit with the BSS. WOCS 5005 also falls outside the region resulting from stable mass transfer and requires a period of \(<\) 10 d to comply with formation through mass transfer from the estimated WD companion (Pandey et al., 2021). _Gaia_ DR3 did not demonstrate any variability or binarity in the system. Assuming the optical spectrum is dominated by the primary, the _Gaia_ DR3 parameters for the primary are as follows: T\({}_{\rm eff}\) = 6890\(\pm\)200 K, log(g) = 4.17\(\pm\)0.20 dex and R = 2.36\(\pm\)0.06 R\({}_{\sun}\).
WOCS 5005 has light curves from three algorithms (Kepler SOC pipeline (Jenkins et al., 2010), K2SFF (Vanderburg & Johnson, 2014), and K2SC (Aigrain et al., 2016)). A total of nine significant frequencies were obtained from the _K2_ pipeline (top panel of Fig. 7). The light curve folded for 7.95 d showed shallow dips. Due to their small depth, the dips could not be directly classified as eclipses. We analysed K2SC and K2SFF data to verify the period and the dips. However, the two data sets did not contain the 7.95 d period. Even after analysing the extracted _TESS_ data of WOCS 5005, the period of 7.95 d was not obtained. We discard this estimation as it is likely to be spurious. This analysis, therefore, does not support the presence of a close companion (P \(\sim\) a few days) with a large inclination in the system.
### Wocs 1025
Geller et al. (2015) classified WOCS 1025 as an SB1 system. The system's orbital period was determined to be 1154 d with an eccentricity of 0.066\(\pm\)0.082 (Latham & Milone, 1996). Latham & Milone (1996) also found the rotational velocity to be v sin i = 60 km s\({}^{-1}\). Pandey et al. (2021) used SED fitting and determined the effective temperature of the primary to be 7000\(\pm\)125 K. They could not estimate the secondary effective temperature precisely but stated that it could be between 21 000 K and 50 000 K. _Gaia_ DR3 classified the source as a non-variable binary. The binary parameters are as follows: centre of mass velocity = 33.77\(\pm\)1.87 km s\({}^{-1}\), \(e\) = 0.52\(\pm\)0.06, P = 773\(\pm\)9 d, M\({}_{1}\) = 1.54\(\pm\)0.14 M\({}_{\sun}\) and M\({}_{2}\) = 0.61 to 1.72 M\({}_{\sun}\). If we assume the primary dominates the optical spectrum, the primary parameters are as follows: log(g) = 4.11\(\pm\)0.02 dex, R = 1.61\(\pm\)0.05 R\({}_{\sun}\), T\({}_{\rm eff}\) = 6954\(\pm\)600 K.
WOCS 1025 is observed by both _K2_ and _TESS_ missions, but due to the unavailability of PDCSAP flux from _TESS_, only the K2SC light curve was used. From the frequency analysis, six significant frequencies were obtained (bottom panel of Fig. 7). However, no eclipse-like dips were obtained by phase folding the light curve. This analysis does not support the presence of a close companion (P \(\sim\) a few days) with a large inclination in the system.
## 4 Discussion
This study utilised the capabilities of _TESS_ and _K2_ to trace small amplitude variability among the BSSs in M67. The targets of interest are BSSs with detected LM WD companions and known \(\delta\) Scuti pulsators. A binary system with an LM WD companion demands a short-period orbit, and the light curves of such a system may show eclipses or ellipsoidal variability if the orbit is highly inclined.
In the case of WOCS 1007, we could combine the light curve
\begin{table}
\begin{tabular}{c c c c} \hline \hline Coefficients & values & Coefficients & values \\ \hline A\({}_{0}\) & 1.00001(1) & & \\ A\({}_{1}\) & 0.03686(7) & B\({}_{1}\) & 0.00535(5) \\ A\({}_{2}\) & 0.03968(8) & B\({}_{2}\) & 0.00244(1) \\ A\({}_{3}\) & 0.00165(4) & B\({}_{3}\) & 0.00007(1) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Coefficients obtained through Fourier modelling of WOCS 4003. A\({}_{n}\) are coefficients of cos(n \(\phi\)) and B\({}_{n}\) are coefficients of sin(n \(\phi\)).
Figure 3: The Fourier fit obtained for WOCS 1007 using Eqn. (1) in the top panel, with the residuals in the bottom panel.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Coefficients & values & Coefficients & values \\ \hline A\({}_{0}\) & 0.00011(1) & & \\ A\({}_{1}\) & 0.00031(4) & B\({}_{1}\) & 0.00005(2) \\ A\({}_{2}\) & 0.00105(4) & B\({}_{2}\) & 0.00022(1) \\ A\({}_{3}\) & 0.00077(4) & B\({}_{3}\) & 0.00004(1) \\ A\({}_{4}\) & 0.00029(5) & B\({}_{4}\) & 0.00005(2) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Coefficients of Eqn. (1) obtained through Fourier modelling of WOCS 1007 with the errors given in brackets. A\({}_{n}\) are coefficients of cos(n \(\phi\)) and B\({}_{n}\) are coefficients of sin(n \(\phi\)).
with the radial velocity data to estimate the parameters of the binary components. We consider the orbital results quite robust, with better estimates of the period and eccentricity. The estimated radius and \(\mathrm{T_{eff}}\) of the companion match well with those estimated from the SED analysis. The mass of the hot companion requires it to be an LM WD, and the log(g) value also supports this.
BSSs can be significantly more massive than the turn-off mass, possibly requiring a large amount of mass accumulated onto them from an external source (Perets, 2015). It is suggested that the mass of the BSSs, as estimated from the colour-magnitude diagram (CMD), is about 15 per cent higher than dynamical estimates (Perets, 2015). Therefore, estimating the BSS parameters through
Figure 4: _4a: Top panel:_ Amplitude spectrum of WOCS 4003 obtained using the _K2_ data. The red triangles represent the orbital frequency of the system (2.265 d\({}^{-1}\) or period of 0.44 d) and its harmonics. _Bottom panel:_ Amplitude spectrum of WOCS 4003 after removing the two dominant frequencies. The frequency associated with the period of the smaller dip (1.1 d), along with its harmonics, are represented by yellow triangles. _4b:_ Fitting of the phase folded curve along with the residual of WOCS 4003 assuming two different models. _4c:_ WOCS 4003 light curve obtained by removing the eclipses and phase folding using a period of 1.11 d.
methods other than CMD is essential. From the binary modelling of WOCS 1007, we estimated a mass of 1.95\(\pm\)0.26 M\({}_{\odot}\) (see table 2) for the BSS, which is not very different from that estimated from isochrones (\(\sim\) 2.0 M\({}_{\odot}\)) (Sindhu et al., 2018).
The BSS is likely to have gained mass through a Case-B mass-transfer in its binary system if we consider the lower limit of the BSS mass, which is at least \(\sim\) 0.3 M\({}_{\odot}\) more than the turn-off mass of the cluster. As the LM WD companion needs to shed the outer layers early in the RGB evolution, the BSS could have gained that mass, making WOCS 1007 a post-mass-transfer system. We also note that a conventional mass-transfer scenario cannot explain the mass gained if the upper limit of the primary mass is considered, as the mass gained (\(\sim\) 1.0 M\({}_{\odot}\)) is much larger than the conventional limit and therefore demands a more efficient mass transfer mechanism. We conclude that this binary system is a post-mass-transfer system.
We suggest that the progenitor of WOCS 1007 is likely to be an equal mass (\(\sim\) 1.35 M\({}_{\odot}\)) binary. The evolution of the current secondary towards the red-giant branch resulted in the filling up of the Roche lobe and subsequent mass transfer. The current primary gained \(\sim\) 0.6 M\({}_{\odot}\), with the secondary ending up as LM Helium core WD of mass \(\sim\) 0.22 M\({}_{\odot}\).
We also note that the BSS in WOCS 1007 is a \(\delta\) Scuti star. The pulsations are in addition to the ellipsoidal variability due to the slightly deformed BSS. We detect 19 frequencies in the amplitude spectrum. We do not detect any signatures of tidally perturbed pulsations (Bowman et al., 2019) or single-sided pulsations such as those reported by Handler et al. (2020). To understand the evolutionary stage of the BSS, WOCS 1007 was placed on the HR diagram as shown in Figure 8, along with other known \(\delta\) Scuti stars in binary systems using the catalogue from Liakos and Niarchos (2017). The \(\delta\) Scuti instability region indicated by the two yellow lines was taken from Murphy et al. (2019). The BSS is located on the main sequence and within the empirical instability region. This agrees with the \(\delta\) Scuti nature of the BSS and suggests that the star is placed in the instability strip owing to rejuvenation by the mass gained through mass transfer. The location of the BSS also agrees with the theoretical single-star evolutionary tracks obtained from MIST (MESA Isochrones and Stellar Tracks (Choi et al., 2016; Dotter, 2016; Paxton et al., 2011, 2013, 2015, 2018)) according to its mass. All the MIST tracks in Fig. 8 were calculated with zero rotational velocity and an initial metallicity of 0.02. The above results indicate that the current stellar properties of the BSS in WOCS 1007 are similar to those of single \(\delta\) Scuti stars.
We also compared the properties of the primary star in WOCS 1007 with binary \(\delta\) Scuti stars. The correlation between the dominant pulsation and the orbital periods of \(\delta\) Scuti stars in eclipsing binaries is well established (Liakos and Niarchos, 2017; Liakos, 2020). For the comparison, we used detached eclipsing systems and oscillating eclipsing Algol-type binary (oEA) systems. We considered \(\delta\) Scuti stars in detached systems from Liakos and Niarchos (2017) and Liakos (2020), and semi-detached or oEA systems (taken from Liakos and Niarchos (2017)). In Fig. A2, the BSS was placed on the \(\rm{P_{orb}}\)-\(\rm{P_{pl}}\) (top panel) and \(\rm{P_{pl}}\)-\(\log g\) (bottom panel) diagrams. In the top panel, the BSS is located below the fitted lines but within the region occupied by the detached and oEA binaries. In the bottom panel, it is consistent with both the distributions as well as the empirical relations. Therefore the properties of the WOCS 1007 system are consistent with the properties of binary systems hosting \(\delta\) Scuti stars.
The system WOCS 4003 is reported as an SB1 (Geller et al., 2015) but shows a \(\beta\) Lyrae-type light curve. We observed certain features in the light curve, such as the O'Connell effect and small dips equally spaced throughout the light curve. The orbital period estimate from this study is in good agreement with Yakut et al. (2009). As there was no proper constraint on the system's configuration, we modelled the system using three different models (W UMa type overcontact binary, non-thermal overcontact binary, and semi-detached binary with primary filling Roche lobe). Our PHOEBE fitting results from all three models were different from each other. Even though the radius of the primary and effective temperature of the secondary agreed with the SED values, the secondary radius from all three models was significantly higher than the SED solution, leading to an over-luminous system. Overall, we were unable to obtain satisfactory parameters for the components. In all three models, the inclination of the system was determined to be quite low, suggesting the system to be exhibiting a grazing eclipse. Fourier analysis showed that the light curve is affected by both eclipses and ellipsoidal variability. Along with the large eclipses with a period of 0.44 d, the light curve also shows periodic dips of 1.1 d. Though we excluded contamination from nearby sources by checking their light curves, we are unable to confirm the presence of a tertiary component in the system.
WOCS 4006 was classified as an SB1 system with \(\delta\) Scuti pulsations. We did not detect any eclipses though \(\delta\) Scuti pulsations were detected. Similar to WOCS 1007, this BSS also appears to have been rejuvenated after the mass transfer. Upon comparing the SED parameters with single-star evolutionary tracks, the star's mass is estimated to be around 1.6 M\({}_{\odot}\). We detect 29 frequencies in the amplitude spectrum, higher than the number of frequencies detected in WOCS 1007, though not as high as the 41 reported by Bruntt et al. (2007).
We did not detect eclipses in the light curves for two more BSSs that are found to be SB1 systems: WOCS 5005 and WOCS 1025. This could be due to the low inclination of the systems. In the literature, WOCS 5005 (4913 d) and WOCS 1025 (1154 d) were reported as long-period binaries with an LM WD and a WD as the secondary companions, respectively. As an LM WD cannot form through single star evolution within the Hubble time, Pandey et al. (2021) speculated that systems like WOCS 5005 could either be formed due to dynamical interactions (Khurana et al., 2022) or have the LM WD in the close inner binary of a tertiary system. As we did not detect any eclipses in the light curves of these systems, we can confirm the absence of a short-period inner binary with a high inclination that gives rise to eclipses or ellipsoidal variations. We note that this study does not rule out the presence of a binary component with slightly larger periods and/or low inclinations. Therefore, this study shows that TESS/K2 light curves are helpful in detecting close companions
Figure 5: The Fourier fit obtained for WOCS 4003 using Eqn. (1) is in the top panel, with the residuals to the fit shown in the bottom panel.
of BSSs in highly-inclined orbits. If detected, they provide essential constraints on the properties as well as the formation of the binary.
## 5 Summary
This study presents a light curve analysis of select BSSs in M67 using the TESS and K2 data. We summarise the results below:
* All the sources studied are known to be SB1 systems, suggesting a low-luminosity companion to the BSS primary.
* We analysed the light curve for the BSS WOCS 1007 for the first time. Our analysis using PHOEBE confirms WOCS 1007 to have an LM WD companion. We estimate P = 4.212\(\pm\)0.041 d and \(e\) = 0.206\(\pm\)0.002. The masses of the companions are estimated to be 1.95\(\pm\)0.26 and 0.22\(\pm\)0.05 M\({}_{\sun}\) with radii of 2.54\(\pm\)0.81 and 0.078\(\pm\)0.027 R\({}_{\sun}\), respectively. The temperature of the LM WD is found to be 14300\(\pm\)1100 K. The estimated mass of the BSS is similar to the mass estimated from isochrones.
* We detect the \(\delta\) Scuti pulsations in the BSS of WOCS 1007 (19 frequencies). The observed light curve is primarily due to ellipsoidal variations in the system, suggestive of a slight deformation in the BSS, not enough to inhibit the \(\delta\) Scuti pulsations.
* WOCS 1007 was likely formed by efficient mass transfer in a close binary, leaving behind an LM WD. The increased mass has pushed it to within the instability strip. This BSS is found to be similar to other \(\delta\) Scuti stars in eclipsing binaries.
* WOCS 4003 is modelled with various options, but PHOEBE results suggest an over-luminous system. The best model shows a grazing eclipse, but ellipsoidal variation is dominant in the light curve.
* Apart from the 0.44 d period eclipses in WOCS 4003, we found
Figure 6: Amplitude spectrum of WOCS 4006 obtained through Fourier analysis of the _TESS_ (top panel) and _K2_ (bottom panel) data.
Figure 7: Amplitude spectrum of WOCS 5005 (top) and WOCS 1025 (bottom) obtained using the _K2_ data.
smaller eclipses with a period of 1.1 d. There may be a close tertiary, making it a compact triple system. WOCS 4003 is a complex system, and multi-wavelength light curves are needed to identify and characterise the individual components.
* WOCS 4006 is a known \(\delta\) Scuti star, and we detect 29 pulsation frequencies, whereas no eclipses are detected. No eclipses or pulsations are detected in the case of WOCS 5005 and WOCS 1025. We confirm the absence of any short-period inner binary with high inclination in these BSSs. This method cannot rule out a slightly distant inner binary companion or a close companion in an almost face-on inclination.
In conclusion, our study provides additional evidence supporting the mass transfer mechanism of BSS formation in lower-density regions. It also demonstrates that the light curve analysis of BSSs in open clusters, along with the multi-wavelength SED analysis, can address the nature and formation pathways of the BSSs. We plan to use this technique to analyse BSSs in other clusters. From the identification and analysis of WOCS 1007, we have shown that light curve analysis is a valuable technique for studying such short-period post-mass transfer objects. Combining this technique and the one used by Murphy et al. (2018) will help identify both short and long-period post-mass transfer systems. The short-period detached nature of WOCS 1007 BSS, along with its \(\delta\) Scuti pulsations, makes it a prime candidate for a further detailed analysis using asteroseismology, as this is the first such system identified in an open cluster. Also, only 44 such systems are known in the field (Liakos, 2020). Considering WOCS 1007 has already interacted, such a study will not only provide information on the effects of binarity on pulsations but also help understand the influence of mass transfer on pulsations.
## Acknowledgements
We thank the referee for constructive comments and suggestions. AS acknowledges support from a SERB Power Fellowship. DMB gratefully acknowledges a senior postdoctoral fellowship from the Research Foundation Flanders (FWO) with the grant agreement no. 1286521N. VJ thanks the Alexander von Humboldt Foundation for their support. This paper includes data collected by the _TESS_ mission, which are publicly available from the Mikulski Archive for Space Telescopes (MAST). Funding for the _TESS_ mission is provided by NASA's Science Mission Directorate. This paper includes data collected by the _Kepler_ mission and obtained from the MAST data archive at the Space Telescope Science Institute (STScI). Funding for the _Kepler_ mission is provided by the NASA Science Mission Directorate. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. This work presents results from the European Space Agency (ESA) space mission _Gaia_. _Gaia_ data are being processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC) ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)). This work and the Pythia code used in this work make use of Python packages (Matplotlib (Hunter, 2007), Numpy (Harris et al., 2020) and PyMC3 (Salvatier et al., 2015)). This research made use of exoplanet(Foreman-Mackey et al., 2020) and its dependencies (Agol et al., 2020; Astropy Collaboration et al., 2013, 2018; Kipping, 2013; Luger et al., 2019; Theano Development Team, 2016). This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France (Wenger et al., 2000).
Figure 8: HR diagram with the \(\delta\) Scuti instability region taken from Murphy et al. (2019). SED results of WOCS 1007, WOCS 4006, and WOCS 4003 are represented using red, blue, and green crosses, respectively. The PHOEBE result of WOCS 1007 is represented using a red square, whereas a square and a star of the colour green represent the PHOEBE results of WOCS 4003 obtained using the non-thermal and semi-detached model, respectively. The grey dashed lines represent the MIST single-star evolutionary tracks for different masses. The black stars represent \(\delta\) Scuti stars taken from Liakos & Niarchos (2017).
## Data Availability
All the _TESS_ and _K2_ data can be obtained through the Mikulski Archive for Space Telescopes (MAST) database.
|
2310.19902 | Herd: Using multiple, smaller LLMs to match the performances of
proprietary, large LLMs via an intelligent composer | Currently, over a thousand LLMs exist that are multi-purpose and are capable
of performing real world tasks, including Q&A, text summarization, content
generation, etc. However, accessibility, scale and reliability of free models
prevents them from being widely deployed in everyday use cases. To address the
first two issues of access and scale, organisations such as HuggingFace have
created model repositories where users have uploaded model weights and
quantized versions of models trained using different paradigms, as well as
model cards describing their training process. While some models report
performance on commonly used benchmarks, not all do, and interpreting the real
world impact of trading off performance on a benchmark for model deployment
cost, is unclear. Here, we show that a herd of open source models can match or
exceed the performance of proprietary models via an intelligent router. We show
that a Herd of open source models is able to match the accuracy of ChatGPT,
despite being composed of models that are effectively 2.5x smaller. We show
that in cases where GPT is not able to answer the query, Herd is able to
identify a model that can, at least 40% of the time. | Surya Narayanan Hari, Rex Liu, Matt Thomson | 2023-10-30T18:11:02Z | http://arxiv.org/abs/2310.19902v2 | Herd: Using multiple, smaller LLMs to match the performances of proprietary, large LLMs via an intelligent composer
###### Abstract
Currently, over a thousand LLMs exist that are multi-purpose and are capable of performing real world tasks, including Q&A, text summarization, content generation, etc. However, accessibility, scale and reliability of free models prevents them from being widely deployed in everyday use cases. To address the first two issues of access and scale, organisations such as HuggingFace have created model repositories where users have uploaded model weights and quantized versions of models trained using different paradigms, as well as model cards describing their training process. While some models report performance on commonly used benchmarks, not all do, and interpreting the real world impact of trading off performance on a benchmark for model deployment cost, is unclear. Here, we show that a herd of open source models can match or exceed the performance of proprietary models via an intelligent router. We show that a Herd of open source models is able to match the accuracy of ChatGPT, despite being composed of models that are effectively 2.5x smaller. We show that in cases where GPT is not able to answer the query, Herd is able to identify a model that can, at least 40% of the time.
## 1 Introduction
Large language models have found novel ways to increase the number of use cases, such as by expanding the number of parameters, combining existing models to augment a single model's functionality and quantizing large models to fit on smaller devices [4; 12; 9; 18; 2; 8; 13; 3; 4; 5]. The rapid expansion of model availability has created a significant challenge in practice, where corporations want to expose performant LLM endpoints for their users, and have to spend time evaluating models to find the best one that works for them in practice. To overcome this problem, engineers often resort to proprietary models without knowing if there are open-source models available at a comparable performance standard.
This often leads to the problem elaborated in Figure 1, showing examples of questions taken from MMLU that ChatGPT (GPT 3.5 Turbo) answers incorrectly, but there is some open source model that can answer the question correctly. We use this insight to try and construct a herd of models such that at least one model in the herd can answer any incoming query correctly.
Recent model evaluation frameworks [6; 19] help users compare LLMs against each other, but the growing variety of model formats outpaces one-size-fits-all comparison software suites. Empirical evidence in this work reveals that open source models have caught up with leading proprietary models, but not all open source models feature on leaderboards, due to their vast number.
Deployment of models also remains a key challenge. The 70B-parameter Llama-2, in 16-bit precision, requires two 80 GB A100 GPUs, and in practice, users might want several models running in parallel. Sacrificing parameter count to cut costs risks performance degradation, the exact magnitude of which is unknown before deployment.
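A quick back-of-envelope check of this claim (weight memory only, ignoring activations and the KV cache):

```python
params = 70e9                            # Llama-2 70B
bytes_per_param = 2                      # 16-bit weights
print(params * bytes_per_param / 1e9)    # 140 GB of weights -> two 80 GB A100s
# 8-bit or 4-bit quantization shrinks this to ~70 GB or ~35 GB, with an
# accuracy cost that is generally unknown before deployment, as noted above
```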
While quantized models might alleviate some of the challenges associated with model deployment, finding performant quantized models, navigating their formats and knowing their training details, such as what datasets were used in their quantisation calibration, requires expertise.
In addition to quantized variants of models, specific model variants exist with chat capabilities, with different performance metrics from non-chat models. Others with more specific domain expertise, such as science or code [17; 1], might be useful for some user applications but aren't fine-tuned for chat capability, making it harder to pick one model to use in production.
Today the Huggingface (HF) model repository contains \(\sim\)24,000 machine learning models for text generation. While model cards might provide some insight into the dataset that a model is trained on, common practices such as fine-tuning models using inputs from other large language models or model merging [10; 16; 14; 11] have made it difficult to track what data was used to train the model. This has also made it challenging to track what datasets or tasks one can expect the models to be performant on. Further, not all open source models have detailed model cards, making trusting them in deployment even more challenging.
Together, it would be a useful service to expose an endpoint that would process an incoming user's request by abstracting away model selection. Here, we explore the advantage of exposing a model herd of open source models, which outperforms a larger, proprietary large language model, offering size advantages. We also train a Tryage router [7] to predict model performance, and show that the model herd is able to answer 74% of incoming queries with performance comparable to or better than ChatGPT.
Figure 1: In practice, not all models are able to answer all questions accurately (the ones that do answer the questions correctly have their answers boxed in green), which leads to the practical challenge of picking an ensemble of models that has at least one highly performant model for every question. Herd attempts to solve this problem by constructing a herd of large language models that collectively can answer the query accurately, and by learning the association between input text and the performance of each LLM.
## Herd Architecture
Define a model Herd \(M\), which is a collection of models. In an oracle model system, an incoming query \(z\) is assigned to \(\operatorname*{arg\,max}_{j}M_{j}(z)\). However, in practice, evaluating \(M_{j}(z)\) for all \(j\) is expensive. To this end, we choose the model \(\operatorname*{arg\,max}_{j}\hat{M}_{j}(z)\), where \(\hat{M}_{j}\) is learned by a router \(R\) [7]. We implement the router model as a language model. In practice, the router optimizes its weight set \(W\) over the following loss function.
\[\min_{W}\ \ \mathbb{E}_{z\sim p(z)}\ \left(\frac{1}{|M|}\sum_{M_{i}}D\big{(}R( z,M_{i};W)||L(z,M_{i})\big{)}\right) \tag{1}\]
where \(D(\cdot||\cdot)\) is divergence between predicted loss \(R(z,M_{i};W)\) and ground truth loss \(L(z,M_{i})\) (here the L1 distance function was used between predicted and ground truth F1s measured character-wise) for prompts \(z\) drawn from data distribution \(p(z)\) from the MMLU dataset.
In this work, we found that Bert-medium [15] was the best performing router when a variety of router models were trained on 12,000 examples and validated on 3001 examples from MMLU, with a fixed batch size of 16, using the Adam optimizer with a learning rate of \(2e-5\).
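A condensed sketch of this training setup is given below; the checkpoint name, herd size, and `train_loader` wiring are illustrative assumptions rather than the exact code used here, while the batch size, optimizer, learning rate, and L1 objective follow the text:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

n_models = 8                                # illustrative herd size
name = "prajjwal1/bert-medium"              # a public BERT-medium checkpoint
tok = AutoTokenizer.from_pretrained(name)
router = AutoModelForSequenceClassification.from_pretrained(
    name, num_labels=n_models, problem_type="regression")
opt = torch.optim.Adam(router.parameters(), lr=2e-5)
loss_fn = torch.nn.L1Loss()                 # D(.||.): L1 between predicted and true F1

for prompts, f1_targets in train_loader:    # batches of 16 (prompt, per-model F1)
    batch = tok(list(prompts), padding=True, truncation=True, return_tensors="pt")
    pred = router(**batch).logits           # one predicted F1 per herd member
    loss = loss_fn(pred, f1_targets)
    opt.zero_grad(); loss.backward(); opt.step()

# at inference, a query is routed to the member with the highest predicted F1:
# best_model = router(**tok([query], return_tensors="pt")).logits.argmax(-1)
```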
In this work, we composed the herd by replicating realistic user constraints of models that would fit on an 8\(\times\)48 GB cluster, using a mixture of 7B-, 15B-, 30B- and 70B-scale models. We used a mix of quantized and non-quantized models, since their performances were previously unknown.
## Demonstrating Herd
We find that a herd of open source models is able to beat ChatGPT (Figure 2) despite being effectively less than 30% of the size (effective size measured as the average size of the models weighted by the number of examples allocated to them). Further, none of the models in the herd were individually better than ChatGPT, but together, they were able to surpass ChatGPT's performance. Further, all the models are open source, and the herd can be seamlessly expanded, contracted or interchanged for other models.
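As a worked example of this effective-size metric (with illustrative sizes and allocations, not the exact herd used here):

```python
import numpy as np

sizes_b = np.array([7, 13, 30, 70])       # model sizes in billions of parameters
queries = np.array([500, 300, 150, 50])   # examples the router sent to each model
effective_size = (sizes_b * queries).sum() / queries.sum()
print(effective_size)                     # 15.4 B for this allocation
```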
We trained a Tryage router [7] to model the performances of a herd and found that the router was able to successfully allocate incoming queries to models that produced aggregate performance comparable to GPT 3.5 Turbo, despite the herd being effectively 2.5x smaller (Fig. 3a)\({}^{1}\). Further, some models in the herd are quantized, meaning they can be run on edge compute / cloud compute - a user can trade off the size of a herd for compute cost.
Footnote 1: exact number of parameters in ChatGPT (GPT 3.5 Turbo) unknown, based on reported information
Figure 2: Open source model Herds outperform proprietary models such as ChatGPT on MMLU with decreased model size.
We show that Herd can capture knowledge in cases where ChatGPT fails to answer an incoming query. While any single model might not be able to answer all the incoming queries, Herd is able to find a model that can answer each query, based on the input text of the prompt. ChatGPT is only able to beat a herd of open source models 26% of the time, implying 74% of the queries can be answered by open source models (Fig. 3b; 'beat' is defined as an F1 in excess of 5%).
In the cases where ChatGPT was wrong, defined as when ChatGPT had an F1 score of less than 0.9, Herd was able to achieve a correct answer (defined as when any model in the Herd had an F1 score greater than 0.95) 69.3% of the time. A predictive router was able to identify a model that can answer the query correctly 40% of the time (Tryage bar in Fig. 3c). The mean of the F1s of the answers from each model, as well as the aggregate F1s from Herd and the predictive router, are shown in Fig. 3c.
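These coverage statistics can be computed directly from per-question F1 arrays; a minimal sketch, with the array names (`gpt_f1`, `herd_f1`, `routed_f1`) assumed:

```python
import numpy as np

gpt_wrong = gpt_f1 < 0.90                         # ChatGPT "wrong", as defined above
herd_right = (herd_f1 > 0.95).any(axis=1)         # some herd member is correct

coverage = herd_right[gpt_wrong].mean()           # ~69.3% reported above
router_hit = (routed_f1[gpt_wrong] > 0.95).mean() # ~40% with the trained router
```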
## Conclusion and discussion
In this work, we present the result that a Herd of open-source models can achieve performance comparable to or better than ChatGPT, at a fraction of the compute cost and zero query cost. Further, when proprietary models cannot answer a query, a herd of open source models is able to cover a
Figure 3: a) A router trained to model the performance of a herd offers comparable performance to GPT 3.5 Turbo (mean performances shown as horizontal lines). b) GPT exceeds the performance of the Herd in only 26% of incoming queries, implying 74% of incoming queries can be answered by open source models in the Herd. c) In questions that ChatGPT gets wrong, the Herd can find models that answer correctly (average F1 of 0.9). A routing model achieves an aggregate of 0.76 F1 on these questions.
significant portion of the deficit. This system offers a new model paradigm to compete against closed source models, by leveraging widely available open source technology. |
2306.12169 | HumanDiffusion: diffusion model using perceptual gradients | We propose {\it HumanDiffusion,} a diffusion model trained from humans'
perceptual gradients to learn an acceptable range of data for humans (i.e.,
human-acceptable distribution). Conventional HumanGAN aims to model the
human-acceptable distribution wider than the real-data distribution by training
a neural network-based generator with human-based discriminators. However,
HumanGAN training tends to converge in a meaningless distribution due to the
gradient vanishing or mode collapse and requires careful heuristics. In
contrast, our HumanDiffusion learns the human-acceptable distribution through
Langevin dynamics based on gradients of human perceptual evaluations. Our
training iterates a process to diffuse real data to cover a wider
human-acceptable distribution and can avoid the issues in the HumanGAN
training. The evaluation results demonstrate that our HumanDiffusion can
successfully represent the human-acceptable distribution without any heuristics
for the training. | Yota Ueda, Shinnosuke Takamichi, Yuki Saito, Norihiro Takamune, Hiroshi Saruwatari | 2023-06-21T10:44:38Z | http://arxiv.org/abs/2306.12169v1 | # HumanDiffusion: diffusion model using perceptual gradients
###### Abstract
We propose _HumanDiffusion_, a diffusion model trained from humans' perceptual gradients to learn an acceptable range of data for humans (i.e., human-acceptable distribution). Conventional HumanGAN aims to model the human-acceptable distribution wider than the real-data distribution by training a neural network-based generator with human-based discriminators. However, HumanGAN training tends to converge in a meaningless distribution due to the gradient vanishing or mode collapse and requires careful heuristics. In contrast, our HumanDiffusion learns the human-acceptable distribution through Langevin dynamics based on gradients of human perceptual evaluations. Our training iterates a process to diffuse real data to cover a wider human-acceptable distribution and can avoid the issues in the HumanGAN training. The evaluation results demonstrate that our HumanDiffusion can successfully represent the human-acceptable distribution without any heuristics for the training.
Yota Ueda, Shinnosuke Takamichi, Yuki Saito, Norihiro Takamune, Hiroshi Saruwatari The University of Tokyo, Japan.
[email protected], [email protected]
**Index Terms**: diffusion models, human computation, black-box optimization, crowdsourcing, speech perception
## 1 Introduction
Generative models can produce data that are indistinguishable from real data. In particular, deep generative models [1, 2, 3, 4], which are based on deep neural networks (DNNs), have significantly improved the quality of generated data in media research, including speech synthesis [5, 6, 7], natural language processing [8], and image synthesis [9]. The generative models can represent real-data distributions and produce data that follow the learned distributions [1]. In contrast, humans may accept data as natural even when the data are outliers of a real-data distribution [10]. For example, in speech perception, humans can accept synthesized or processed speech. In this paper, we use the term _human-acceptable distribution_ defined as the data distribution whose data humans can accept as natural [10].
HumanGAN [10] was proposed to model a human-acceptable distribution, whereas the basic generative adversarial network (GAN) [1] can only represent a real-data distribution. GAN is a type of deep generative model and consists of a DNN-based generator and a DNN-based discriminator. HumanGAN replaces the discriminator of GAN with a human-based discriminator. HumanGAN regards humans as black-box systems that output perceptual evaluation values, given the generated data. By using the estimated perceptual gradient of the human-based discriminator, one can then train a generator to represent a human-acceptable distribution. However, HumanGAN often suffers from vanishing gradients and mode collapse during training, similar to the original GAN [11]. In other words, it requires careful heuristics to prevent the learned distribution from converging to a meaningless region, such as outside the human-acceptable distribution or around data whose evaluation value is extremely high (i.e., the mode of perceptual evaluation).
To solve these problems, we propose _HumanDiffusion_, a diffusion model using perceptual gradients. For modeling a human-acceptable distribution, we introduce diffusion models [12] to sample data that follows a real-data distribution using the gradient of the distribution. **Fig. 1** shows a comparison of HumanDiffusion and HumanGAN, and the distributions these models represent. On the basis of a diffusion model, our HumanDiffusion iterates a process to diffuse real data to cover a wider human-acceptable distribution thereby avoiding the issues encountered during the HumanGAN training. HumanDiffusion is evaluated in terms of its phoneme perception. The experimental results show that HumanDiffusion can successfully represent a human-acceptable distribution.
## 2 Related Work
Human perception has been incorporated into DNN training through approaches such as reinforcement learning [13] and genetic algorithms [14]. HumanGAN [10] deals with perceptual gradients to train a generator. Because humans are better at relative evaluation than absolute evaluation, gradient-based methods can train DNNs well.
### HumanGAN
HumanGAN [10] represents the human-acceptable distribution \(p_{\mathrm{human}}\left(\mathbf{x}\right)\), where \(\mathbf{x}\) is data, that is wider than the real-data distribution \(p_{\mathrm{real}}\left(\mathbf{x}\right)\). A DNN-based generator of HumanGAN is trained using a human-based discriminator (humans' perceptual evaluation) instead of the DNN-based discriminator of GAN [1]. Let \(N\) be the number of data. The generator \(G\left(\cdot\right)\) transforms prior data \(\{\mathbf{z}_{n}\}_{1\leq n\leq N}\) produced from a prior distribution \(\pi\left(\mathbf{z}\right)\) to data \(\{\mathbf{\hat{x}}_{n}\}_{1\leq n\leq N}\). The human-based discriminator \(D\left(\cdot\right)\) imitates humans' perceptual evaluations. \(D\left(\cdot\right)\) takes \(\mathbf{\hat{x}}_{n}\) as an input and outputs a posterior probability that the input is perceptually acceptable. The objective function is
\[L_{\mathrm{HumanGAN}}=\sum_{n=1}^{N}D\left(G\left(\mathbf{z}_{n}\right)\right). \tag{1}\]
\(G\left(\cdot\right)\) is trained to maximize \(L_{\mathrm{HumanGAN}}\). A model parameter \(\mathbf{\theta}\) of \(G\left(\cdot\right)\) is iteratively updated as \(\mathbf{\theta}\leftarrow\mathbf{\theta}+\alpha\partial L_{\mathrm{HumanGAN}}/\partial \mathbf{\theta}\)
Figure 1: HumanDiffusion and HumanGAN. HumanDiffusion can represent a human-acceptable distribution whereas HumanGAN cannot without a heuristic solution.
where \(\alpha\) is the learning rate, and \(\partial L_{\mathrm{HumanGAN}}/\partial\mathbf{\theta}=\partial L_{\mathrm{HumanGAN}}/ \partial\mathbf{x}\cdot\partial\mathbf{x}/\partial\mathbf{\theta}=\partial D\left(\mathbf{x} \right)/\partial\mathbf{x}\cdot\partial\mathbf{x}/\partial\mathbf{\theta}\).
\(\partial\mathbf{x}/\partial\mathbf{\theta}\) can be estimated analytically, but \(\partial D\left(\mathbf{x}\right)/\partial\mathbf{x}\) cannot because the human-based discriminator \(D\left(\cdot\right)\) is not differentiable. HumanGAN uses the natural evolution strategy (NES) [15] algorithm to approximate the gradient. A small perturbation \(\Delta\mathbf{x}_{n}^{(i)}\) is randomly generated from the multivariate Gaussian distribution \(\mathcal{N}\left(\mathbf{0},\sigma_{\mathrm{NES}}^{2}\mathbf{I}\right)\), and it is added to the generated data \(\mathbf{\hat{x}}_{n}\). \(i\) is the perturbation index \(\left(1\leq i\leq I\right)\). \(\sigma_{\mathrm{NES}}\) and \(\mathbf{I}\) are the standard deviation and the identity matrix, respectively. Next, a human observes two perturbed data \(\left\{\mathbf{\hat{x}}_{n}+\Delta\mathbf{x}_{n}^{(i)},\mathbf{\hat{x}}_{n}-\Delta\mathbf{x}_{ n}^{(i)}\right\}\) and evaluates their difference in the posterior probability of naturalness:
\[\Delta D\left(\mathbf{\hat{x}}_{n}^{(i)}\right)\equiv D\left(\mathbf{\hat{x}}_{n}+ \Delta\mathbf{x}_{n}^{(i)}\right)-D\left(\mathbf{\hat{x}}_{n}-\Delta\mathbf{x}_{n}^{(i)} \right). \tag{2}\]
\(\Delta D\left(\mathbf{\hat{x}}_{n}^{(i)}\right)\) ranges from \(-1\) to \(1\). For instance, a human will answer \(\Delta D\left(\mathbf{\hat{x}}_{n}^{(i)}\right)=1\) when the human perceives that \(\mathbf{\hat{x}}_{n}+\Delta\mathbf{x}_{n}^{(i)}\) is substantially more acceptable than \(\mathbf{\hat{x}}_{n}-\Delta\mathbf{x}_{n}^{(i)}\). \(\partial D\left(\mathbf{\hat{x}}_{n}\right)/\partial\mathbf{x}\) is approximated as [15]
\[\frac{\partial D\left(\mathbf{\hat{x}}_{n}\right)}{\partial\mathbf{x}}=\frac{1}{2 \sigma_{\mathrm{NES}}I}\sum_{i=1}^{I}\Delta D\left(\mathbf{\hat{x}}_{n}^{(i)} \right)\cdot\Delta\mathbf{x}_{n}^{(i)}. \tag{3}\]
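As an illustration, a minimal sketch of the NES estimate in Eq. (3) could look as follows; it assumes the human-based discriminator is exposed as a callable `D` returning values in \([0,1]\) (in practice, a crowdsourced evaluation), and all names are ours, not the authors'.

```python
import numpy as np

def nes_gradient(D, x, sigma_nes=1.0, num_perturbations=20, rng=None):
    """Approximate dD(x)/dx for a non-differentiable, human-based evaluator D."""
    rng = np.random.default_rng() if rng is None else rng
    grad = np.zeros_like(x)
    for _ in range(num_perturbations):
        dx = rng.normal(0.0, sigma_nes, size=x.shape)  # perturbation ~ N(0, sigma^2 I)
        delta_d = D(x + dx) - D(x - dx)                # paired evaluation, Eq. (2)
        grad += delta_d * dx
    return grad / (2.0 * sigma_nes * num_perturbations)  # Eq. (3)
```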
However, HumanGAN requires various heuristics during training, such as initialization and training stops. First, HumanGAN needs to roughly know the shape of the human-acceptable distribution in advance to make the training easier: data that deviate significantly from the human-acceptable distribution cannot be distinguished by humans, resulting in an inaccurate estimate of the gradient in Eq. (3). Moreover, the generator training must be stopped before convergence, because maximizing \(L_{\mathrm{HumanGAN}}\) causes mode collapse. Concretely, if the generator finds one region of the data space where humans evaluate data as highly acceptable during training, the learned distribution converges to a limited range of the whole human-acceptable distribution.
### Diffusion models
The diffusion models [12] use the gradient of the data distribution to train a model that represents the real-data distribution \(p_{\mathrm{real}}\left(\mathbf{x}\right)\). The score of the real-data distribution \(p_{\mathrm{real}}\left(\mathbf{x}\right)\) is defined as \(\frac{\partial\log p_{\mathrm{real}}\left(\mathbf{x}\right)}{\partial\mathbf{x}}\). The score network \(S\left(\cdot\right)\) is parameterized by \(\mathbf{\theta}\) and trained to approximate the score.
Even if the true \(p_{\mathrm{real}}\left(\mathbf{x}\right)\) is not observable, as long as the score \(\frac{\partial\log p_{\mathrm{real}}\left(\mathbf{x}\right)}{\partial\mathbf{x}}\) is observable, an iterative update with Langevin dynamics [16] can be applied as follows. Langevin dynamics is a kind of Markov chain Monte Carlo (MCMC) method using the gradients of the data distributions. First, an input data \(\mathbf{x}_{0}\) is sampled from a prior distribution \(\pi\left(\mathbf{z}\right)\). Then, the data at \(\left(t+1\right)\)th step (\(t=0,1,\dots,T-1\)) is calculated based on the Langevin dynamics using the score:
\[\mathbf{x}_{t+1}=\mathbf{x}_{t}+\frac{\epsilon^{2}}{2}\frac{\partial\log p_{\mathrm{ real}}\left(\mathbf{x}_{t}\right)}{\partial\mathbf{x}}+\epsilon\mathbf{r}, \tag{4}\]
where \(\epsilon>0\) and \(\mathbf{r}\) are a fixed step size and random vector from the isotropic Gaussian \(N\left(\mathbf{0},I\right)\), respectively. Finally, the distribution of the output data at the final step \(T\), \(\mathbf{x}_{T}\), will equal \(p_{\mathrm{real}}\left(\mathbf{x}\right)\) when \(\epsilon\to 0\) and \(T\rightarrow\infty\). Because sampling from Eq. (4) only requires the score \(\frac{\partial\log p_{\mathrm{real}}\left(\mathbf{x}\right)}{\partial\mathbf{x}}\), the score network \(S\left(\mathbf{x}\right)\approx\frac{\partial\log p_{\mathrm{real}}\left(\mathbf{x} \right)}{\partial\mathbf{x}}\) can be trained, and it can then approximately produce samples with Langevin dynamics. Given real data \(\{\mathbf{x}_{n}\}_{1\leq n\leq N}\), the score \(\frac{\partial\log p_{\mathrm{real}}\left(\mathbf{x}\right)}{\partial\mathbf{x}}\) is predicted by using \(S\left(\cdot\right)\).
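For concreteness, a minimal sketch of the sampler in Eq. (4) is given below; `score` stands in for \(\frac{\partial\log p_{\mathrm{real}}\left(\mathbf{x}\right)}{\partial\mathbf{x}}\) (in practice the trained score network), and the hyperparameter values are placeholders.

```python
import numpy as np

def langevin_sample(score, x0, eps=1e-4, num_steps=10_000, rng=None):
    """Draw one sample by iterating the Langevin update of Eq. (4)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(num_steps):
        r = rng.standard_normal(x.shape)           # isotropic Gaussian noise
        x = x + 0.5 * eps**2 * score(x) + eps * r  # Eq. (4)
    return x
```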
The score network \(S\left(\mathbf{x};\mathbf{\theta}\right)\) for estimating \(\frac{\partial\log p_{\mathrm{real}}\left(\mathbf{x}\right)}{\partial\mathbf{x}}\) is trained by score matching without training a model to estimate \(p_{\mathrm{real}}\left(\mathbf{x}\right)\). The objective function for training \(S\left(\cdot\right)\) is
\[L_{\mathrm{Diffusion}}=\sum_{n=1}^{N}\left|S\left(\mathbf{x}_{n};\mathbf{\theta}\right)- \frac{\partial\log p_{\mathrm{real}}\left(\mathbf{x}_{n}\right)}{\partial\mathbf{x}} \right|^{2}. \tag{5}\]
As explained above, the diffusion model can generate data that follow a real-data distribution by using scores of the real-data distribution. If we can observe scores of the human-acceptable distribution instead of the real-data distribution, and if the score network can learn them, we expect to be able to generate data according to the human-acceptable distribution.
## 3 HumanDiffusion
### Components of HumanDiffusion
Fig. 2 shows the process of HumanDiffusion. HumanDiffusion uses the gradient of perceptual evaluation to train a model representing the human-acceptable distribution \(p_{\mathrm{human}}\left(\mathbf{x}\right)=\frac{1}{Z}D\left(\mathbf{x}\right)\), where \(Z=\int D\left(\mathbf{x}\right)d\mathbf{x}\) is the normalization coefficient. The score is defined as \(\frac{\partial\log D\left(\mathbf{x}\right)}{\partial\mathbf{x}}\). We process _periphery data_ sampled from real data \(\mathbf{x}_{n}\) to estimate the score around the real-data distribution. Let \(M\) be the number of periphery data per real datum. We sample periphery data \(\{\mathbf{\hat{x}}_{n,m}\}_{1\leq n\leq N,1\leq m\leq M}\) from \(N(\mathbf{x}_{n},\sigma_{\mathrm{per}}^{2}\mathbf{I})\), where \(\sigma_{\mathrm{per}}\) is the standard deviation. The perceptual evaluation function \(D\left(\cdot\right)\) is driven by humans; it takes data \(\mathbf{x}\) as the input and outputs the perceptual evaluation value of the naturalness of the data in the range of \(\left[0,1\right]\): if the data is natural, the evaluation value is \(1\), and if the data is unnatural, the evaluation value is \(0\). The score network \(S\left(\cdot\right)\) is trained to approximate the score of \(D\left(\mathbf{x}\right)\). The objective function is defined as the minimization of
\[L_{\mathrm{HumanDiff}}=\sum_{n=1}^{N}\sum_{m=1}^{M}\left|S\left(\mathbf{\hat{x}}_{n,m};\mathbf{\theta}\right)-\frac{\partial\log D\left(\mathbf{\hat{x}}_{n,m}\right)}{\partial\mathbf{x}}\right|^{2}. \tag{6}\]
### Estimating score of perceptual evaluation
Similar to the HumanGAN, the score \(\frac{\partial\log D\left(\mathbf{x}\right)}{\partial\mathbf{x}}\) cannot be estimated because the evaluation functions are based on the perceptual evaluation. We use the NES algorithm to estimate the
Figure 3: Training score network and inference in HumanDiffusion. The score network outputs gradients and evaluation values to calculate scores. Samples are produced with Langevin dynamics.
Figure 2: Tasks for crowdworkers to estimate the score. Periphery data are sampled from real data. A human observes two perturbed data and evaluates their perceptual difference in naturalness acceptability. The score is estimated using evaluation and perturbation.
score. The NES algorithm uses evaluation values of two perturbed data. With the index \(i\), a human observes two perturbed data \(\left\{\mathbf{\hat{x}}_{n,m}+\Delta\mathbf{x}_{n,m}^{(i)},\,\mathbf{\hat{x}}_{n,m}-\Delta\mathbf{x}_{n,m}^{(i)}\right\}\), where \(\Delta\mathbf{x}_{n,m}^{(i)}\) is sampled from \(N(0,\sigma_{\text{NES}}^{2}\mathbf{I})\) in the same way as in Eq. (3). In HumanGAN, humans responded only to the difference between the two samples (Eq. (2)), whereas in HumanDiffusion, humans respond with the absolute naturalness value for each of the two samples.
To estimate the score using the NES algorithm in the same manner as in HumanGAN, we apply the chain rule to the score: \(\frac{\partial\log D\left(\mathbf{x}\right)}{\partial\mathbf{x}}=\frac{1}{D\left(\mathbf{x}\right)}\frac{\partial D\left(\mathbf{x}\right)}{\partial\mathbf{x}}\). The score is thus obtained by estimating \(D\left(\mathbf{x}\right)\) and \(\frac{\partial D\left(\mathbf{x}\right)}{\partial\mathbf{x}}\). We remodel the score network \(S\left(\cdot\right)\) to output \(S_{D}\left(\cdot\right)\approx D\left(\mathbf{x}\right)\) and \(S_{\nabla D}\left(\cdot\right)\approx\frac{\partial D\left(\mathbf{x}\right)}{\partial\mathbf{x}}\) instead of \(\frac{\partial\log D\left(\mathbf{x}\right)}{\partial\mathbf{x}}\) directly, as shown in **Fig. 3**. If the dimension of the data is \(d\), this network outputs values in \(d+1\) dimensions. The loss function of the score network (Eq. (6)) is redefined as
\[L_{\text{HumanDiff}}^{\text{unconditional}}=\sum_{n=1}^{N}\sum_{m=1}^{M}\left(\left|S_{D}\left(\mathbf{\hat{x}}_{n,m}\right)-D\left(\mathbf{\hat{x}}_{n,m}\right)\right|^{2}+\left|S_{\nabla D}\left(\mathbf{\hat{x}}_{n,m}\right)-\frac{\partial D\left(\mathbf{\hat{x}}_{n,m}\right)}{\partial\mathbf{x}}\right|^{2}\right). \tag{7}\]
\(\frac{\partial D\left(\mathbf{\hat{x}}_{n,m}\right)}{\partial\mathbf{x}}\) can be approximated in the same manner as in HumanGAN:
\[\frac{\partial D\left(\mathbf{\hat{x}}_{n,m}\right)}{\partial\mathbf{x}}= \frac{1}{2\sigma_{\text{NES}}I}\sum_{i=1}^{I}\biggl{\{}D\left(\mathbf{ \hat{x}}_{n,m}+\Delta\mathbf{x}_{n,m}^{(i)}\right)\] \[-D\left(\mathbf{\hat{x}}_{n,m}-\Delta\mathbf{x}_{n,m}^{(i)}\right) \biggr{\}}\cdot\Delta\mathbf{x}_{n,m}^{(i)}. \tag{8}\]
Next is the estimation of \(D\left(\mathbf{x}\right)\), where \(D^{(i)}\left(\mathbf{\hat{x}}_{n,m}\right)\) at index \(i\) is given by \(D^{(i)}(\mathbf{\hat{x}}_{n,m})=\left(D\left(\mathbf{\hat{x}}_{n,m}+\Delta\mathbf{x}_{n,m}^{(i)}\right)+D\left(\mathbf{\hat{x}}_{n,m}-\Delta\mathbf{x}_{n,m}^{(i)}\right)\right)/2\), and \(D\left(\mathbf{\hat{x}}_{n,m}\right)\) is estimated from it. The most intuitive method is to estimate the mean \(D\left(\mathbf{\hat{x}}_{n,m}\right)=\frac{1}{I}\sum_{i=1}^{I}D^{(i)}\left(\mathbf{\hat{x}}_{n,m}\right)\). However, since \(D^{(i)}\left(\mathbf{\hat{x}}_{n,m}\right)\) takes values in a bounded range, the distribution of \(D^{(i)}\left(\mathbf{\hat{x}}_{n,m}\right)\) is not symmetric near its lower bound (0, i.e., the perceived quality of the data is extremely poor) or its upper bound (1, the perceived quality is extremely good), so the mean is not a suitable representative value of the distribution. In this paper, we therefore apply kernel density estimation to obtain \(P\left(D^{(i)}\left(\mathbf{\hat{x}}_{n,m}\right)\right)\) and take its mode as \(D\left(\mathbf{\hat{x}}_{n,m}\right)\). Preliminary experiments were conducted using both the mean and the mode, and the mean did not lead to sampling convergence.
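A minimal sketch of this mode estimate, assuming SciPy's Gaussian kernel density estimator and illustrative names, is:

```python
import numpy as np
from scipy.stats import gaussian_kde

def estimate_D_mode(d_plus, d_minus):
    """Estimate D(x) as the mode of the KDE over per-perturbation evaluations.

    d_plus[i]  = D(x + dx_i), d_minus[i] = D(x - dx_i), both in [0, 1];
    assumes the I evaluations are not all identical (KDE needs spread).
    """
    d_i = 0.5 * (np.asarray(d_plus) + np.asarray(d_minus))  # D^(i)(x)
    kde = gaussian_kde(d_i)
    grid = np.linspace(0.0, 1.0, 201)
    return grid[np.argmax(kde(grid))]                       # mode of the KDE
```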
From the above formulation, the score network outputs both \(D\left(\mathbf{x}\right)\) and \(\frac{\partial D\left(\mathbf{x}\right)}{\partial\mathbf{x}}\), as shown in **Fig. 3**, and is trained using Eq. (6) by computing \(\frac{\partial\log D\left(\mathbf{x}\right)}{\partial\mathbf{x}}\) from these two outputs.
#### 3.2.1 Compensating for inaccurate gradient
If two perturbed data of extremely poor quality (e.g., sounds that do not resemble speech) are observed and evaluated by a human, the estimated score is very inaccurate1. Therefore, we assume that the naturalness of the real data is not poor, and we apply a regularization that pulls the gradient \(\frac{\partial D\left(\mathbf{x}\right)}{\partial\mathbf{x}}\) toward the real data used to sample the periphery data:
Footnote 1: Intuitively, it is difficult to evaluate the naturalness of two extremely poor quality data. Therefore, the score of periphery data \(\mathbf{x}_{n,m}\) of extremely poor quality cannot be estimated from the gradient alone.
\[\frac{\partial D\left(\mathbf{\hat{x}}_{n,m}\right)}{\partial\mathbf{x}}\leftarrow \frac{\partial D\left(\mathbf{\hat{x}}_{n,m}\right)}{\partial\mathbf{x}}+b\left(\mathbf{ \hat{x}}_{n,m}-\mathbf{x}_{n}\right), \tag{9}\]
where \(b\) is a hyperparameter. This method is based on an existing diffusion model that robustly estimates scores in the absence of any data [12]. Preliminary experiments have confirmed that without this regularization, the scores for extremely poor-quality data are very close to zero, and the sampling described below does not converge.
### Sampling with Langevin dynamics
The score is computed using Eqs. (8) and (9) and modeled using Eq. (6). Langevin dynamics is then used to sample data according to the human-acceptable distribution \(p_{\text{human}}\left(\mathbf{x}\right)\):
\[\mathbf{x}_{t+1}=\mathbf{x}_{t}+\frac{\epsilon^{2}}{2}S\left(\mathbf{x}_{t}\right)+\epsilon\mathbf{r}. \tag{10}\]
The initial data \(\mathbf{x}_{0}\) are sampled from the real-data distribution (e.g., data generated by a trained GAN). The sampling iteration is continued until convergence.
## 4 Experiments and Results
### Experimental setup
**Data space and speech analysis-synthesis.** Phoneme perception experiments were conducted with basically the same experimental setup as the HumanGAN paper [10]: specifically, we used the phoneme (Japanese /a/), the JVPD corpus [17], the same preprocessing, the WORLD vocoder [18, 19], and speech features including log spectral envelopes. We applied principal component analysis (PCA) to the log spectral envelopes and used the first and second principal components. The two-dimensional principal components were normalized to have zero mean and unit variance. The speech synthesis step also followed the HumanGAN paper [10]. First, the first and second principal components were generated by the neural network and de-normalized. Then, a 1-second speech sample was synthesized using the remaining speech features, i.e., F0 and aperiodicities. The evaluation was performed using the Lancers crowdsourcing platform [20].
**Perceptual test.** We carried out perceptual evaluations. Two speech waveforms generated from \(\mathbf{\hat{x}}_{n,m}+\Delta\mathbf{x}_{n,m}^{(i)}\) and \(\mathbf{\hat{x}}_{n,m}-\Delta\mathbf{x}_{n,m}^{(i)}\) were presented to a listener, who provided the naturalness values \(D\left(\mathbf{\hat{x}}_{n,m}+\Delta\mathbf{x}_{n,m}^{(i)}\right)\) and \(D\left(\mathbf{\hat{x}}_{n,m}-\Delta\mathbf{x}_{n,m}^{(i)}\right)\) using slide bars ranging from 0 to 1. Listeners selected \(1\) if the speech was natural and \(0\) if it was unnatural. The total number of listeners was \(150\).
**Score network and HumanDiffusion sampling.** The score network was a feed-forward neural network. It inputs two-dimensional data \(\mathbf{x}\) and outputs one-dimensional \(D\left(\mathbf{x}\right)\) and two-dimensional \(\frac{\partial D\left(\mathbf{x}\right)}{\partial\mathbf{x}}\). The network consisted of a two-unit input layer, \(3\times 128\)-unit softplus [21] hidden layers, and three-unit sigmoid (for \(D\left(\mathbf{x}\right)\)) and linear (for \(\frac{\partial D\left(\mathbf{x}\right)}{\partial\mathbf{x}}\)) output layers. We used the Adam [22] optimizer with a learning rate of \(\alpha=0.001\) for the training. The number of real data \(N\), the number of periphery data per real datum \(M\), the number of perturbations \(I\), the number of training iterations, the NES standard deviation \(\sigma_{\text{NES}}\), and the periphery standard deviation \(\sigma_{\text{per}}\) were set to \(100\), \(3\), \(20\), \(10000\), \(1.0\), and \(10\), respectively. For the Langevin dynamics sampling, the number of data sampled from the real-data distribution \(N\left(0,1\right)\), the step size \(\epsilon\), and the number of iterations were set to \(200\), \(0.0001\), and \(100000\), respectively. We confirmed that the distribution converged after \(10,000\) iterations of sampling.
**HumanGAN.** For comparison, we also trained HumanGAN. The generator was a feed-forward neural network. The model consisted of a two-unit input layer, \(3\times 128\)-unit softplus hidden layers, and a two-unit linear output layer. Although the original training of HumanGAN requires listening experiments to determine the perceptual gradient for each iteration, in this study, we used the output of the score network to reduce the cost of experiments. Specifically, the output \(\frac{\partial D(\mathbf{x})}{\partial\mathbf{x}}\) of the score network was used for training. We used the Adam optimizer with a learning rate of \(\alpha=0.01\) for the training. The number of training iterations was set to \(10000\). The initial parameters were set to cover a neighborhood of the perceptual distribution as described in the report on HumanGAN [10].
### Qualitative evaluation of score network training
First, we qualitatively examine the observed scores and the score network training. The left panel of **Fig. 4** shows the estimated gradient \(\frac{\partial D(\mathbf{\hat{x}}_{n,m})}{\partial\mathbf{x}}\) for each periphery datum. As described in Section 3, the real-data distribution has zero mean and unit variance. The figure shows that gradients occur over a wider range than the real-data distribution; in other words, as shown in the HumanGAN paper, the perceptual distribution extends over a wider range than the real-data distribution. On the other hand, when data are outliers far from the real-data distribution (e.g., when the x-axis value exceeds \(\pm 5\)), the gradient is almost zero. This indicates that the perceptual gradient has vanished owing to extremely poor sound quality. Regularization compensates for this gradient vanishing, and the result is shown in the middle panel of **Fig. 4**: the regularization produces a gradient toward the real-data distribution even for outlier data. The result of training the score network on these gradients is shown in the right panel of the figure; a gradient toward the center is learned.
### Quantitative/qualitative evaluation of sampling
Next, we compare the data generated by HumanGAN and HumanDiffusion to show that the HumanDiffusion sampling is reasonable. **Figs. 5** a) and b) show the distributions of data generated by HumanGAN and HumanDiffusion, respectively. The gradients are also shown for reference.
HumanGAN learns only to move data in the gradient direction, which causes mode collapse, as shown on the right panel of **Fig. 5** a) unless a heuristic iteration limit is set. In contrast, HumanDiffusion includes a stochastic movement term based on MCMC and thus captures a wider range than the real data distribution, as shown on the right panel of **Fig. 5** b). The results demonstrate that HumanDiffusion can learn a wider human-acceptable distribution without introducing heuristics used in HumanGAN. The variance of the real data distribution is \(\left[1.0,1.0\right]\) (unit variance), whereas the variance of the data distribution obtained by HumanDiffusion is \(\left[9.2,8.9\right]\), indicating that HumanDiffusion represents a wider distribution than the real data distribution.
### Evaluation using real and generated data
Finally, we quantitatively verified whether HumanDiffusion can generate samples from a human-acceptable distribution wider than the real-data distribution. We prepared two data sets, one from the real-data distribution and one from the HumanDiffusion distribution, each containing 200 samples. The posterior probabilities of these data sets were evaluated using the MUSHRA test. The posterior probability ranges within \(\left[0,1\right]\) and corresponds to \(D\left(\mathbf{x}\right)\). The total number of listeners was \(40\), and each listener evaluated 20 samples.
**Fig. 6** shows the violin plots of the posterior probability. The average for the real-data distribution is 73, and the average for the generated data is 69. We therefore consider the generated data sufficiently natural, even though the variance of the generated distribution is much larger than that of the real-data distribution. This demonstrates that HumanDiffusion can represent a human-acceptable distribution.
## 5 Conclusions
In this paper, we presented HumanDiffusion, which can represent a human-acceptable (humans' perceptually recognizable) distribution. We evaluated the use of HumanDiffusion in modeling phoneme perception, and we qualitatively and quantitatively demonstrated that HumanDiffusion can represent a human-acceptable distribution. As future work, we will examine the scalability of HumanDiffusion in terms of feature dimensionality.
**Acknowledgements:** This work was supported by JSPS KAKENHI 23H03418, 21H04900, and JST FOREST JPMJFR226V.
Figure 4: Observed gradients, gradients with regularization, and modeled gradient by score network.
Figure 5: Data generated by a) HumanGAN and b) HumanDiffusion.
Figure 6: Violin plots of posterior acceptability. The white point indicates the average. |
2310.06822 | Neural Bounding | Bounding volumes are an established concept in computer graphics and vision
tasks but have seen little change since their early inception. In this work, we
study the use of neural networks as bounding volumes. Our key observation is
that bounding, which so far has primarily been considered a problem of
computational geometry, can be redefined as a problem of learning to classify
space into free or occupied. This learning-based approach is particularly
advantageous in high-dimensional spaces, such as animated scenes with complex
queries, where neural networks are known to excel. However, unlocking neural
bounding requires a twist: allowing -- but also limiting -- false positives,
while ensuring that the number of false negatives is strictly zero. We enable
such tight and conservative results using a dynamically-weighted asymmetric
loss function. Our results show that our neural bounding produces up to an
order of magnitude fewer false positives than traditional methods. In addition,
we propose an extension of our bounding method using early exits that
accelerates query speeds by 25%. We also demonstrate that our approach is
applicable to non-deep learning models that train within seconds. Our project
page is at: https://wenxin-liu.github.io/neural_bounding/. | Stephanie Wenxin Liu, Michael Fischer, Paul D. Yoo, Tobias Ritschel | 2023-10-10T17:50:09Z | http://arxiv.org/abs/2310.06822v5 | # Neural Bounding
###### Abstract
Bounding volumes are an established concept in computer graphics and vision tasks but have seen little change since their early inception. In this work, we study the use of neural networks as bounding volumes. Our key observation is that bounding, which so far has primarily been considered a problem of computational geometry, can be redefined as a problem of learning to classify space into free or occupied. This learning-based approach is particularly advantageous in high-dimensional spaces, such as animated scenes with complex queries, where neural networks are known to excel. However, unlocking neural bounding requires a twist: allowing - but also limiting - false positives, while ensuring that the number of false negatives is strictly zero. We enable such tight and conservative results using a dynamically-weighted asymmetric loss function. Our results show that our neural bounding produces up to an order of magnitude fewer false positives than traditional methods.
## 1 Introduction
Efficiently testing two, three or higher-dimensional points or ranges for intersections with extended primitives is at the core of many interactive graphics tasks. Examples include testing the 3D position of a particle in a fluid simulation against an animated character mesh, testing rays against a 3D medical scan volume or testing a drone's flight path against time-varying obstacles.
To accelerate all these queries, it is popular to use a hierarchy of tests: if intersection with a simple _bounding primitive_ - such as a box - that conservatively contains a more complex primitive fails, one can skip the costly test with the more complex primitive. For a correct algorithm, the false-negative (FN) rate of the first test must be zero, i.e. bounding must never miss a true positive intersection.
For efficiency, the main trade-off is i) the cost of testing the bounding primitive, ii) the cost of intersecting the original primitive, and iii) the false-positive (FP) rate of the bounding primitive. The FP rate measures how often an initial positive intersection with the bounding primitive turns out to be negative, upon more detailed testing with the original primitive, leading to wasted computation. A successful bounding method will have both a low testing cost and a low false-positive rate. Typical bounding solutions include spheres, boxes, oriented boxes or \(k\) discretely-oriented polytopes (\(k\)-DOPs) [1]. However, fitting those primitives, in particular to higher
dimensions, can result in a poor FP rate since they remain convex, and further may require significant implementation effort [11]. In this article, we thus show how to train neural networks in order to unlock high-dimensional, non-linear, concave bounding with a combination of simplicity, flexibility and testing speed.
While the FP rate is the main concern for _efficiency_, for _correctness_ of the bounding algorithm, the challenge is to develop a neural network (NN) that is trained to produce bounds with strictly zero false negatives. This is crucial, as the FN rate quantifies how often the algorithm erroneously classifies an actual intersection as non-intersection - such misclassifications will result in truncated geometry features and cut-off object parts, as exemplified by the fins of the dolphin in Fig. 1, d. A straightforward solution would be to first find a bounding primitive and then compress it using a NN. Another approach would be to learn the NN to approximate the complex primitive and later make the approximation conservative. Instead, we show that with the right initialization and schedule for weighting FP and FN, it is possible to directly learn a neural bounding in any dimension, for both point and range queries.
As it could appear that executing a neural network for testing bounds is too time-intensive to be useful, we carefully study architectures that are both small and simple (inspired by [12] or [13]), such that they are only slightly more expensive than linear ones or traditional intersection tests. We further demonstrate that our approach is also amenable to optimizing non-neural representations, such as \(k\)-DOPs.
We show application to two, three and 4D point queries, 2D and 3D range queries as well as queries of dynamic scenes, including scenes with multiple degrees of freedom and compare these results to classic bounding methods, such as spheres, boxes and \(k\)-DOPs.
## 2 Previous work
Bounding \(n\)-D objects is a core operation in graphics. Classic algorithms can be extremely straightforward, such as axis-aligned boxes, but already fitting a sphere can be more involved than it seems at first. An established textbook with many bounding and intersection algorithms is the one by Schneider and Eberly [11].
When it comes to complex objects, the situation is more difficult for bounding. For a single object that is dominantly convex \(k\)-DOPs have shown useful [14, 15]. One further option is to perform convex decomposition on the objects [1, 2]. 3D ray-queries, which are the most relevant of such tests, are typically also performed on a hierarchy, e.g. [1, 10]; for a survey please see Meister et al. [2].
Recently, NNs have changed many operations in graphics, but notably not bounding. Neural intersection functions (NIFs) [16] predict intersections and could also be used to intersect boundings, but do not attempt to be conservative, which allows the possibility to miss parts of the object they enclose. Moreover, NIFs are trained on static scenes, requiring a re-training of the networks with each change of scene configuration or camera viewpoint [16]. Our neural bounding boxes, in contrast, are learned on object-level, and hence can easily be rearranged in a scene without retraining, as we show in our experiments. Both our method and NIF are inspired by neural fields, that have successfully modeled occupancy [21], signed distance or surface distance [22, 23]. For a comprehensive survey of recent works employing coordinate-based NNs please see Xie et al. [21].
In concurrent work, not specific to rendering, very simple primitives are fitted conservatively to polytopes [15], but are unable to handle general shapes. Neural concepts have been used to create bounding sphere hierarchies by Weller and colleagues [20] using a neural-inspired optimizer; however, the bounding representation itself remained classic spheres, while in this article we use non-linear functions. Others [23] attempt to optimize collision testing by replacing the test with a neural network. While that work is similar to ours in the sense that it represents the bounding itself as a nonlinear function, it does not strictly bound but simply fits the surface of the indicator with a multi-layer perceptron (MLP) under a common loss. This is also applicable to higher-dimensional spaces (C-spaces) of, e.g. robot configurations [17]. Essentially, these methods train signed distance or occupancy functions, but without any special considerations for the difference of FP and FN, which is at the heart of bounding. We compare to such approaches and show that we can combine their advantages with the guarantee of never missing an intersection. More advanced combinations of fields can be learned so as to not collide [15], but again only by penalizing intersections, not by producing conservative results. Other constraints such as eikonality [1], Lipschitz [18], or indefinite integrals [24] can be incentivized similarly to how we incentivize conservativeness. Sharp and Jacobsen [23] have proposed a method to query any trained implicit NN over regions using interval arithmetic. That is orthogonal to the question of training the NN to bound a function conservatively, which we study here.
To ensure no FNs, we make use of asymmetric losses, which are typically applied with aims different from ours, such as reducing class imbalance [19], to become robust to noise [10], to regularize a space [16] or, closer to graphics, to control bias and variance in Monte Carlo (MC) path tracing denoisers [20].
## 3 Our Approach
This section will outline how we construct our networks and the asymmetric loss in order to achieve tight, conservative bounding (strictly zero false negatives) in arbitrary dimensions.
### Method
To achieve our task of conservatively bounding in \(n\)-dimensional spaces, we seek to learn a NN that classifies concave regions of space into inside and outside, while ensuring strictly no false negatives. Input to our algorithm is an \(n\)-dimensional _indicator_ function \(f(\mathbf{x})\in\mathbb{R}^{n}\rightarrow\{0,1\}\) that returns \(1\) inside and on the surface of the object, and \(0\) everywhere else. In 2D, the indicator could be visualized as a regular image grid, a voxel grid in 3D, an animated object in 4D, or a multi-dimensional state space of robot arm poses (or even another network, e.g. a neural density field) in higher dimensions. We assume we can evaluate the indicator function exactly and at arbitrary coordinates. It further is not required to differentiate this function with respect to anything.
On top of the indicator, we define a _query_ function \(g(\mathbf{r})\in\mathbb{R}^{n}\rightarrow\{0,1\}\) that is 1 if the indicator function returns 1 for at least one point in the region \(\mathbf{r}\). For the case of point queries, the indicator and query function are identical, i.e. \(g=f\). For extended queries, such as range queries, \(\mathbf{r}\) would be a parametrization of a region, e.g. the two corners that define an axis-aligned bounding box (AABB). While in lower dimensions, the indicator and region could be converted into another indicator (akin to the morphological "open" operation on 2D images [14]), our method also supports queries on high-dimensional indicators that can only be sampled and not be stored in practice.
At the core of our approach is another function \(h_{\theta}(\mathbf{r})\in\mathbb{R}^{n}\rightarrow\{0,1\}\), with learnable parameters \(\theta\), that is strictly 1 where \(g\) is 1, but is allowed to also be 1 in other places (FP). While traditional approaches use computational geometry to infer \(\theta\) (e.g., via the repeated projection step in \(k\)-DOPs or the simple min/max-operation in AABBs), we leverage the power of gradient-based optimization of neural networks to learn the most suitable non-linear \(h_{\theta}\).
The training objective \(\mathcal{L}\) to approximate \(g\) via \(h_{\theta}\) is the combined cost of all FNs and FPs across a region \(\mathbf{r}\):
\[\mathcal{L}(\theta)=\int c(\mathbf{r})\,\mathrm{d}\mathbf{r},\quad c(\mathbf{r})=\begin{cases}0&\text{if }g(\mathbf{r})=0\text{ and }h_{\theta}(\mathbf{r})=0,&\text{(TN)}\\ \alpha&\text{if }g(\mathbf{r})=1\text{ and }h_{\theta}(\mathbf{r})=0,&\text{(FN)}\\ \beta&\text{if }g(\mathbf{r})=0\text{ and }h_{\theta}(\mathbf{r})=1,&\text{(FP)}\\ 0&\text{if }g(\mathbf{r})=1\text{ and }h_{\theta}(\mathbf{r})=1,&\text{(TP)}\end{cases}\]
where \(\alpha\) is the cost of a false negative, which needs to be \(\alpha=\infty\) to be conservative, and \(\beta\) is the cost of a false positive, which we define to be 1. The first and last clauses are true negatives and true positives, respectively, and incur no cost, while the second clause ensures conservativeness, and the third ensures that the bounding is tight.
However, it is not obvious how to proceed with a loss that can be infinite. Moreover, \(\mathcal{L}\) is discontinuous in \(\theta\) and has zero gradients almost everywhere, as the observed loss values only change in the proximity of the surface of the bounded region and are constant everywhere else. While \(\mathcal{L}\) is required to ensure conservativeness, its optimization is not feasible in practice for the aforementioned reasons.
We therefore employ two modifications to \(\mathcal{L}\) in order to make it usable in practice. First, we suggest replacing the fixed constant \(\alpha\) with a variable value \(\alpha(t)\) that linearly depends on the learning iteration \(t\in\mathbb{N}^{+}\), as in \(\alpha(t)=t\). This ensures that, in the limit, the cost of a false negative is unbounded, so the solution will eventually become conservative. Second, in order to compute smooth gradients for our neural bounding network \(h_{\theta}\), we approximate the previously defined \(\mathcal{L}\) via a variant of a weighted binary cross-entropy \(\hat{\mathcal{L}}\):
\[\hat{\mathcal{L}}(\theta)=-\mathbb{E}_{i}[\alpha(t)\cdot y_{i}\log(\hat{y}_{i,\theta})+\beta\cdot(1-y_{i})\log(1-\hat{y}_{i,\theta})], \tag{1}\]
where \(\hat{y}_{\theta}=h_{\theta}(\mathbf{x})\) is the bounding network prediction with the current parameters \(\theta\) for the current input \(\mathbf{x}\), and the supervisory signal \(y=f(\mathbf{x})\) is the result of evaluating the indicator function \(f\) at the same location. The pseudocode for our loss can be seen in Alg. 1.
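Since Alg. 1 is not reproduced here, the following PyTorch sketch shows one way Eq. (1) could be implemented; the clamping constant and names are our assumptions.

```python
import torch

def asymmetric_bce(y_hat, y, alpha_t, beta=1.0, eps=1e-7):
    """Asymmetric weighted binary cross-entropy of Eq. (1).

    y_hat:   network prediction in (0, 1); y: indicator labels in {0, 1}.
    alpha_t: iteration-dependent false-negative weight (grows with t).
    """
    y_hat = y_hat.clamp(eps, 1.0 - eps)                  # numerical safety
    fn_term = alpha_t * y * torch.log(y_hat)             # punishes missed insides
    fp_term = beta * (1.0 - y) * torch.log(1.0 - y_hat)  # punishes loose bounds
    return -(fn_term + fp_term).mean()
```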
### Neural Bounding Hierarchies
We can also stack our neural bounding volumes into hierarchies, similar to how classic bounding boxes are used to compute a bounding volume hierarchy (BVH), with the added benefit that our neural hierarchy's higher levels are again tightly and conservatively bounding the inner levels.
### Implementation
While our approach is realized as a neural field, making it generally applicable and independent of any specific architecture, we have observed that certain architectural decisions do impact the tightness of the bounding. We detail these choices in the following sections.
**Architecture** In all cases, the input to our algorithm is the indicator function to be bounded, which is then sampled at \(m\)-dimensional query locations that are used to fit the network with our asymmetric loss from Alg. 1. The output of the network is a floating point number that represents occupancy, restricted to (0,1) via a sigmoid and then rounded to {0,1}. For our concrete implementation, our network is implemented as an MLP in order to deal with arbitrary-dimensional point queries. The architecture details for all results shown in this paper are reported in the appendix Tbl. 4. ReLUs are used in the hidden layers, and the output layer is activated by a sigmoid function. We have experimented with both residual- and skip-connections [11, 12] as well as Batch-Normalization [10], but found little improvement, presumably due to the shallow network depth. For some results, we use positional encodings.
**Training** We build and train our networks in PyTorch [13] and use the standard layer initialization, which we have found to perform especially favorably in higher dimensions. We use the Adam optimizer [10] with a learning rate of \(1\times 10^{-4}\) and implement \(\alpha(t)\) as a linear step-wise schedule that is incremented from 1 to 200 every 10 % of the total training iterations (5 million). We use a batch size of 50,000 and early-stop the training as soon as FN = 0 is reached and has been stable for the past three scheduling iterations. Depending on dimensionality and query complexity, this takes between 5 and 60 minutes on a modern workstation.
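The training loop with the step-wise \(\alpha(t)\) schedule might then be sketched as follows, reusing `asymmetric_bce` from above; the network shape, the sampling domain and the `indicator` callable are simplifying assumptions (the actual architectures are listed in Tbl. 4 of the appendix).

```python
import torch

net = torch.nn.Sequential(                 # a small MLP bounding a 3D indicator
    torch.nn.Linear(3, 50), torch.nn.ReLU(),
    torch.nn.Linear(50, 50), torch.nn.ReLU(),
    torch.nn.Linear(50, 1), torch.nn.Sigmoid(),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
total_iters, batch = 5_000_000, 50_000

for t in range(total_iters):
    # step-wise ramp of alpha from 1 to 200, incremented every 10% of training
    alpha_t = 1.0 + 199.0 * (t // (total_iters // 10)) / 9.0
    x = torch.rand(batch, 3) * 2.0 - 1.0   # query locations in [-1, 1]^3
    y = indicator(x)                       # ground-truth occupancy in {0, 1} (assumed given)
    loss = asymmetric_bce(net(x).squeeze(-1), y, alpha_t)
    opt.zero_grad(); loss.backward(); opt.step()
    # early stopping on a stable FN = 0 estimate is omitted for brevity
```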
Please note that the concept of "epochs" or train/test splits applies differently to generalization across a continuous space: For learning and validation, we randomly sample this space, and for testing we do the same. Our proposed method does not aim to learn generalization across objects, but a generalization of bounds across the hypercube of space, time, query type, and combinations thereof. We would like to emphasize that this is the same task which classic bounding geometry performs, where a bounding box would also not generalize from a bunny to, e.g. a dragon.
## 4 Evaluation
The analysis of our results is structured around studying different bounding methods (e.g. boxes, spheres, \(k\)-DOPs, etc., see Sec. 4.1) on different tasks, which we define as different query types in varying dimension (Sec. 4.2).
### Methods
We evaluate our approach's performance against different classic bounding primitives. First, we compare to axis-aligned and non-axis-aligned (oriented) bounding boxes (AABox and OBox), followed by bounding by a Sphere and its anisotropically scaled counterparts, axis-aligned and oriented ellipsoids (AAElli and OElli), respectively, which all can be fit in closed form [14]. Another widely-used bounding method we consider is \(k\)-DOPs, method kDOP, implemented following Ericson [1]. We set \(k\), the number of planes, to \(4m\), scaling with the dimensionality. OurNeural implements our neural bounding method as detailed in Sec. 3.3.
Interestingly, the application of our proposed asymmetric loss is not restricted to neural networks, but can also be applied to other optimization problems. We therefore explore replacing our network with a set of \(k\)-DOP planes which are then optimized with our asymmetric loss. We call this method OurkDOP, which has the speed and memory usage of traditional k-DOPs but benefits from parameters found by modern gradient descent.
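One possible parameterization of such a differentiable \(k\)-DOP — the exact formulation used for OurkDOP may differ — relaxes the \(k\) half-space tests into a soft, trainable membership that can be optimized with the asymmetric loss above:

```python
import torch

class SoftKDOP(torch.nn.Module):
    """k learnable half-spaces; their soft intersection gives a (0,1) occupancy."""
    def __init__(self, k, dim):
        super().__init__()
        self.normals = torch.nn.Parameter(torch.randn(k, dim))  # plane directions
        self.offsets = torch.nn.Parameter(torch.ones(k))        # plane offsets

    def forward(self, x, sharpness=20.0):
        slack = self.offsets - x @ self.normals.t()  # >0 means inside that plane
        inside = torch.sigmoid(sharpness * slack)    # soft half-space tests
        return inside.prod(dim=-1)                   # soft intersection of all k

# After training, the hard, conservative test is simply all(slack >= 0),
# which has the speed and memory footprint of a classic k-DOP query.
```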
To verify the contribution of our asymmetric training loss, we study a variant of our approach that uses a symmetric loss which weights both FPs and FNs with the same weight, i.e. \(\alpha=\beta=1\). Due to its similarity to the classic occupancy networks [15], we call this method OccNet. Please note that OccNet is not a method that can be deployed for bounding in practical scenarios, as it does not produce conservative results (i.e. the false-negative rate is not zero). We would like to re-emphasize that deploying non-conservative bounding method in graphics would lead to missing geometry, i.e., rays that do actually intersect a 3D object will wrongly test negative (e.g. column OccNet in Fig. 3). We hence only show qualitative, not quantitative results of this method.
### Tasks
In this analysis, a "task" combines two properties: the dimension of the indicator function (we study \(n=2\), \(3\) and \(4\)) and the type of query (points, rays, planes and boxes), which combines up to eight-dimensional problems.
**Indicators**: For 2D data we use images of single natural objects in front of a white background where the alpha channel defines the indicator function. We use \(9\) such images. For 3D data, we use \(9\) voxelized shapes of popular test meshes, such as the Stanford Bunny and the Utah teapot. For 4D data, we study sets of animated objects: we load random shapes from our 3D data and create time-varying occupancy data by rotating them around their center. We obtained \(3\) samples of this distribution and would like to emphasize that this is a strategy that favors the baselines: if object transformations were characterized by translational instead of rotational motions, the performance of the baseline approaches would deteriorate significantly. AABox, for instance, would have to bound the entire spatial extent between the initial and terminal object locations, thereby yielding an exceedingly high number of false-positive intersections.
**Query types**: Our query types are point-, ray-, plane- and box-queries. For all query types, the goal is to ask: given a point, ray, plane or box, does it intersect the object to be bounded? Rays are parameterized by an origin and a direction vector, planes by a normal vector and a point \(p_{0}\) on the plane surface, and boxes by their minimum and maximum corners. For every query region, the result is computed as any() of a sample of the indicator across the region.
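A minimal sketch of how such a ground-truth label can be obtained for a box query — by Monte Carlo sampling the indicator over the region, as stated above — is given below; the sample count is an assumed choice.

```python
import numpy as np

def box_query_label(indicator, box_min, box_max, num_samples=256, rng=None):
    """g(r) for an AABB query region r given by its min/max corners."""
    rng = np.random.default_rng() if rng is None else rng
    pts = rng.uniform(box_min, box_max, size=(num_samples, len(box_min)))
    return bool(np.any([indicator(p) for p in pts]))  # any() across the region
```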
### Metrics
We report results for the two relevant metrics which define the quality of a bounding approach: _tightness_ and execution _speed_.
**Tightness**: In terms of tightness, FP is the only figure of merit to study. If a bounding method has a high number of false positives, it will result in many unnecessary intersection tests. This effect is worsened if not working with hierarchies, or if we are on the lowest hierarchy level, as then every FP translates to a very expensive test against the entire bounded geometry (which is often in the order of millions to billions of triangles). We compute the FP and FN rates using MC with between 25,000 and one million samples, depending on the dimensionality of the task.
Even though our task is the classification of space, we do not employ classification metrics such as F1, precision and recall. These metrics aim to capture the relationship between FPs and FNs, which is not relevant for our study since the FN rate for all bounding methods must be strictly zero.
**Speed**: This is the speed of the bounding operation itself (e.g. evaluating the closed-form sphere intersection, or, for method OurNeural, a network forward pass). We report both query speed and ray throughput as the average number over 500 independent runs with 10 million randomly sampled, forward-facing 3D rays. For fairness, our methods and all baselines have been implemented as vectorized PyTorch code and make full use of GPU acceleration.
### Results
**Quality**: We show our quantitative results and the comparisons against the baselines in Tbl. 1. As is evident, our method OurNeural consistently outperforms the other baselines by a large margin (up to \(8\times\) improvement on 4D point queries). The second-best methods are on average kDOP and OurkDOP, sometimes roughly on-par with AABox. Interestingly, using modern gradient-descent based construction for OurkDOP improves on kDOP results significantly. Unsurprisingly, all methods increase their FP rate when going to higher dimension, a testimony to the increased query complexity. Notably, our method scales favorably with dimension and achieves acceptable FP rates even at high dimensions (e.g. for 4D box queries). Interestingly, looking at the relative intra-dimensional performance of our method reveals that it performs almost consistently worst on plane queries, which leaves room for future research and improvement. Moreover, especially in higher dimensions (e.g. 4D ray, plane, box), the baselines approach near uniform performance, differing by only a few percentage points. We attribute this to their rigidity: the rotating object carves out the same amount of
space, and any rigid fit to this time-varying volume will perform almost equally poorly. Fig. 2 visualizes the same data in the form of a rank plot, see figure caption for discussion. Qualitative results for 2D and 3D are shown and discussed in Fig. 1 and Fig. 3, respectively.
Fig. 6 shows OurNeural for NNs of different complexity. Fig. 7 shows a bounding hierarchy on a school of 2D fish.
In Fig. 4 we show results for OurkDOP, \(k=6\): the resulting parameters are compatible with any \(k\)-DOP implementation and as fast to test. The only difference is that the planes were found via stochastic gradient descent (using Adam) and our loss. Ours performs better than the common heuristics [1]. This indicates that our approach to finding bounding parameters can be superior to heuristics, even if the model itself is not neural.
We further show a result for an interesting variant of our method that does not conservatively state which spatial locations might be hit, but instead conservatively bounds which spatial locations certainly are hit. This is achieved by flipping the asymmetry weights \(\alpha\) and \(\beta\). An example is seen in Fig. 5 for 2D and Fig. 10 in 3D. This is useful for a quick broad-phase test in collision: an object only needs to be tested if it is neither certainly out nor certainly in.
**Speed**: Tbl. 2 shows that our method is only marginally slower than kDOP and some of the oriented bounding methods. We attribute this to the fact that inverting the bounding primitive's orientation, which is necessary for oriented boundings in order to test against their simpler axis-aligned counterparts, already requires one matrix multiplication, whereas our network architecture only requires three in total. Therefore, even if we nominally lag behind in this comparison (by a factor of at most 2.5\(\times\), OurNeural vs AAElli), we would argue that this is offset by the substantial reduction in FP that our method achieves (see Tbl. 1; on average our method incurs only 23.8 % of AAElli's FPs, and Sec. 5).
## 5 Discussion
Can a network that is slower than traditional bounding primitives be useful in practical graphics problems? There are two main supportive arguments we will discuss in the following: tightness and scalability.
**Tightness**: In order to evaluate the speed of a bounding method, one must additionally take into account the cost of a false-positive query, i.e. having to perform an intersection against the detailed,
\begin{table}
\begin{tabular}{l r r r} \hline \hline & Speed (ms) & Throughput & DoF \\ \hline AABox & 0.028 & 35.98 & \(2n\) \\ OBox & 0.044 & 22.79 & \(3n\) \\ Sphere & 0.022 & 45.13 & \(n+1\) \\ AAElli & 0.020 & 49.50 & \(2n\) \\ OElli & 0.036 & 28.08 & \(3n\) \\ kDOP & 0.048 & 21.06 & \(k(n+1)\) \\ OurkDOP & 0.018 & 52.80 & \(k(n+1)\) \\ OurNeural & 0.050 & 19.90 & \(50n\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Speed and throughput of different methods. Throughput is reported in billions of rays per second. The last column shows the respective method’s DOF in indicator dimension \(n\).
Figure 3: Comparison of methods (columns) for different 3D shapes (rows). AABox and kDOP both bound the objects conservatively, but create false positives, whereas OccNet bounds overtight and produces false negatives (red). Our approach is both tight and conservative.
Figure 6: Our learned bounding volumes with increasingly complex neural networks from left to right. A simple network, left, fits the shape only roughly, but still conservative. With increasing expressiveness towards the right, the fit gets more accurate and recovers concave regions, and later, the curvature of the starfish’s arms. For a complex NN it becomes almost pixel-exact, but even then, small approximation errors are made but never violating the bounding requirement (zero FN). This indicates we can guarantee conservativeness independent of complexity.
bounded geometry. Assume a bounding method B can be queried in time \(t_{b}\), and the competing bounding method A is five times slower, i.e. \(t_{a}=5t_{b}\). Assume further that A, in spite of being slower, produces significantly fewer false-positives (\(p_{a}=0.1\)) than B (\(p_{b}=0.3\)). The total time for \(N\) tests, regardless of the method used, is \(Nt_{i}+p_{i}\cdot N\cdot t\), of which the first term marks indispensable checks (as every ray must be checked against the bounding method), and the second term marks unnecessary checks due to false-positive bounding queries. Finally, assume that performing tests with the actual detailed geometry needs time \(t\), which is usually much larger than \(t_{a}\) and \(t_{b}\). Hence, for method A to win, the following must hold:
\[N\cdot t_{a}+Np_{a}\cdot t <N\cdot t_{b}+Np_{b}\cdot t\] \[t_{a}+p_{a}\cdot t <t_{b}+p_{b}\cdot t\] \[t_{a}+p_{a}\cdot t-p_{b}\cdot t <t_{b}\] \[p_{a}\cdot t-p_{b}\cdot t <t_{b}-t_{a}\] \[t(p_{a}-p_{b}) <t_{b}-t_{a}\] \[\text{and hence if }(p_{a}-p_{b})>0: t <(t_{b}-t_{a})/(p_{a}-p_{b})\] \[\text{or otherwise if }(p_{a}-p_{b})<0: t >(t_{b}-t_{a})/(p_{a}-p_{b}). \tag{2}\]
For the aforementioned example values, this produces
\[t>(t_{b}-5t_{b})/(0.1-0.3)=20t_{b},\]
which means that method A is to be preferred if tests against the actual bounded geometry are at least 20 times as expensive as the bounding query. As traditional triangle meshes often have millions to billions of triangles, this is easily achieved by our approach.
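The break-even point of Eq. (2) is straightforward to evaluate; the snippet below reproduces the example above and is purely illustrative.

```python
def break_even_ratio(t_a, t_b, p_a, p_b):
    """Geometry-test cost (in units of bounding-test cost) at which methods tie.

    If p_a < p_b, method A (slower but tighter) wins for t above this ratio.
    """
    return (t_b - t_a) / (p_a - p_b)

print(break_even_ratio(t_a=5.0, t_b=1.0, p_a=0.1, p_b=0.3))  # -> 20.0
```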
We quantify the exact ratio of times a 3D ray-geometry test has to be more expensive than the bounding test in Tbl. 3 and see that, while with increasing number of rays the ratio rises, our method is on average to be preferred when the geometry test is as little as 2.43\(\times\) more expensive than the bounding test, which certainly is achieved in most real-world applications.
**Scalability**: From our results one can easily see that our method's advantage increases with dimensionality. We visualize this in detail in Fig. 8. This is because in concave shapes of natural high-dimensional signals, empty space grows much more quickly than intuition suggests: high-dimensional space is mostly empty, but not always [1]. This is also why NNs excel as classifiers. It is best imagined in four dimensions, space plus time, where the space-time indicator of a moving object is highly concave but mostly empty and well represented by a NN of low complexity: a bit of bending already does much better than any box or sphere.
To demonstrate scalability to complex high-dimensional spaces, we finally bound an entire generative model itself in Fig. 9. We use a pre-trained variational auto-encoder (VAE) of MNIST digits as the indicator. This is a 12-dimensional space, consisting of 10 latent dimensions of the VAE and two more dimensions that denote the pixel coordinate in the output image. Note that a pre-trained MNIST VAE is a deterministic 12-D indicator function like any other function in this paper, independent of the fact that it was trained non-deterministically. This allows predicting which pixels will belong to a digit without even running the generative model. While this is a toy problem, in more engineered applications it would, e.g., allow conservative ray intersection with complex 3D models that have not even been generated yet.
**Limitations**: However, our approach also comes with certain limitations, which we will highlight here to inspire future work:
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline
N & **AABox** & **OBox** & **Sphere** & **AAElli** & **OElli** & **kDOP** & **Ours** \\ \hline
0.1 M & 1.42 & 1.66 & 1.60 & 1.65 & 1.52 & 2.40 & 3.29 \\
1.0 M & 1.55 & 1.99 & 2.14 & 2.40 & 2.21 & 3.27 & 3.34 \\
10.0 M & 2.66 & 2.81 & 3.64 & 4.14 & 2.89 & 3.75 & 5.97 \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Times a test against the actual geometry has to be more expensive than the bounding test for our method to pay off, as per the previously developed inequality Eq. 2.
Figure 8: False-positive rate in percent (vertical axis) as a function of model complexity, i.e., number of tuned parameters (horizontal axis in each subplot), for three different dimensions (subplots). Different colors are different methods (OBox, Sphere, OurNeural); the baselines are constant with respect to complexity, which remains fixed in each dimension. Across dimensions, we see that FPs increase for the baselines, while for ours the rate (itself still a function of model complexity) remains roughly constant (funnels).
Figure 7: A hierarchy of neural bounding networks. The first level is shown in blue, the two child nodes in yellow, and the leaves in pink. Note that it is not required for the higher-level bounding to bound the lower-level boundaries. It is only required to bound the indicator.
As we overfit a network per shape, our approach requires additional storage on the order of the network parameter count (e.g., 2,951 parameters for a 3D point-query network), which naturally is larger than for traditional baselines (see Tbl. 2). However, storage is cheaper than compute, and moreover, applications which use bounding geometry usually handle meshes with several million to trillion triangles. We therefore believe that our modest storage requirement for the network parameters is negligible in comparison.
Moreover, training one of our bounding networks takes significantly longer than inferring traditional bounding primitives, many of which have closed-form solutions. While this can easily be out-sourced to a pre-processing stage, we do see further potential in maximizing training speed by either incorporating the fully-fused NN architecture proposed by [10] into our approach, or by meta-learning [13, 14, 15, 16] a space of bounding networks, which would be especially useful for bounding similar geometry that only slightly differs in shape or pose.
Our results are only almost always conservative, as they involve two stages of sampling that, in expectation, will be conservative, but we lack a proof of under what conditions, and with what probability, the result is truly conservative. The step of sampling the query region could be replaced by an unbiased (and potentially closed-form) one; we only choose sampling here as it works on any indicator in any dimension. However, the loss is still an empirical loss, and there is a nonzero chance that a tiny part of the indicator would remain unattended with finitely many samples. In practice, our results show dozens of tasks with dozens of instances, each tested with hundreds of thousands of samples, with no FN on any sample.
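The asymmetric objective behind this empirical conservativeness can be sketched in a few lines (an illustrative form; the weight `fn_weight` and the exact functional shape are our choices for exposition, not the exact training code):

```python
import torch

def conservative_bounding_loss(pred, indicator, fn_weight=1000.0):
    """pred: network output in [0, 1] at sampled query points;
    indicator: ground-truth indicator values {0, 1} at the same points.
    Under-coverage (indicator=1 but pred small) is penalized far more
    heavily than over-coverage, pushing the learned bound towards
    conservativeness (few/no false negatives, some false positives)."""
    under = indicator * torch.relu(indicator - pred)         # false negatives
    over = (1.0 - indicator) * torch.relu(pred - indicator)  # false positives
    return (fn_weight * under + over).mean()
```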
## 6 Conclusion
In future work, the idea of asymmetric losses might be applicable to other neural primitives like hashing. In a similar vein, distance fields could be trained to maybe underestimate, but never overestimate distance so as to aid sphere tracing. It is further conceivable to bound not only geometry but also other quantities like radiance fields or their statistics for image synthesis. While we train to not have FN, the same idea can be made conservative on the inverse indicator, so to have no FP. This would require a second network, which as a pair with the inverse conservatively forms a crust, or trust region. In terms of application, future work could study robot configurations, where a typical application to study with this proxy is collision-free path planning.
Using our technique, bounding in graphics can benefit from many of the exciting recent innovations around NNs such as improved architectures, advanced training, or dedicated neural hardware.
|
2308.13903 | Disjoint Pose and Shape for 3D Face Reconstruction | Existing methods for 3D face reconstruction from a few casually captured
images employ deep learning based models along with a 3D Morphable Model (3DMM)
as a face geometry prior. Structure From Motion (SFM), followed by Multi-View
Stereo (MVS), on the other hand, uses dozens of high-resolution images to
reconstruct accurate 3D faces. However, it produces noisy and stretched-out
results with only two views available. In this paper, taking inspiration from
both these methods, we propose an end-to-end pipeline that disjointly solves
for pose and shape to make the optimization stable and accurate. We use a face
shape prior to estimate face pose and use stereo matching followed by a 3DMM to
solve for the shape. The proposed method achieves end-to-end topological
consistency, enables an iterative face pose refinement procedure, and shows
remarkable improvement in both quantitative and qualitative results over
existing state-of-the-art methods. | Raja Kumar, Jiahao Luo, Alex Pang, James Davis | 2023-08-26T15:18:32Z | http://arxiv.org/abs/2308.13903v1 | # Disjoint Pose and Shape for 3D Face Reconstruction
###### Abstract
Existing methods for 3D face reconstruction from a few casually captured images employ deep learning based models along with a 3D Morphable Model (3DMM) as a face geometry prior. Structure From Motion (SFM), followed by Multi-View Stereo (MVS), on the other hand, uses dozens of high-resolution images to reconstruct accurate 3D faces. However, it produces noisy and stretched-out results with only two views available. In this paper, taking inspiration from both these methods, we propose an end-to-end pipeline that disjointly solves for pose and shape to make the optimization stable and accurate. We use a face shape prior to estimate face pose and use stereo matching followed by a 3DMM to solve for the shape. The proposed method achieves end-to-end topological consistency, enables an iterative face pose refinement procedure, and shows remarkable improvement in both quantitative and qualitative results over existing state-of-the-art methods.
## 1 Introduction
3D face models find a wide range of applications in scenarios such as 3D avatars [53], biometric identification [9], photo editing [49] and film production [10]. Traditionally, specialized setups and hardware [15, 10] have been used to generate high-fidelity 3D faces. However, reconstructing accurate 3D faces from a few casually captured uncalibrated images remains a challenging problem.
Recent learning based methods use a Deep Neural Network (DNN) model to reconstruct 3D faces. Most often, a 3D Morphable Model (3DMM) is employed to represent the face shape using a vector space representation that is learned from a linear combination of principal components of a collection of face scans. These methods aim to recover the 3DMM parameters from the given facial images, often by analysis-by-synthesis optimization or by employing multi-view constraints. However, recovering these parameters requires a complicated, non-linear optimization that has difficulty converging in practice.
In contrast, the highest quality 3D faces are generated using multi-view stereo methods with dozens of calibrated high-resolution images captured in a laboratory setting. Unfortunately, pre-calibrated camera poses are not readily available in a casual capture setting. Recent learning-based methods consisting of Structure From Motion (SFM) followed by Multi-View Stereo (MVS) have been proposed for 3D reconstruction from uncalibrated sets of images. However, solving for camera poses with this method when only a few views are available is under-constrained and results in inaccurate estimates, as shown in Figure 1 above.
The key observation in this paper is that when only two views are available, jointly optimizing for pose parameters, shape parameters, and the correspondence between image and model is theoretically optimal but yields sub-optimal results in practice. Instead, we propose a disjoint solution in which pose and shape are solved separately. This allows
Figure 1: Comparison of raw stereo output from SFM/MVS [37][36] (left), our method (middle), and ground truth (right). Clearly, SFM/MVS produces a stretched-out face with more noise due to under-constrained optimization with only two views. The proposed face shape prior solves this problem by providing a strong prior.
appropriate regularization priors to be used in each stage, allowing a more stable overall solution.
Our end-to-end pipeline consists of three stages: face pose estimation using a face shape prior, 3D reconstruction using stereo matching, and iterative camera pose refinement. In the first stage, given two views, we detect topologically consistent dense 2D landmarks for both views and use a strongly constrained 3D face shape prior with the same topology as the dense 2D landmarks. Given the 2D landmarks and the 3D face shape prior, we solve for camera poses using the Quadratically Constrained Quadratic Program (QCQP) based optimization proposed by Terzakis [40]. In the second stage of our pipeline, we perform stereo matching to generate a 3D point cloud. This step uses no face shape prior, in order to allow a full range of shape variation. It is followed by a fit to a modern 3DMM called FLAME to fill in the missing regions. The 3DMM acts as a face prior on the raw 3D point cloud; however, it allows a great deal of variation in shape and is thus much less strongly constrained than the face prior used to find pose.
Since our output accuracy is determined in part by the estimate of pose derived from 2D landmarks, we perform an iterative refinement on these. The 2D landmarks are refined using the current estimates of pose and shape. This refinement converges in only a few iterations. Our analysis on the FaceScape and Stirling datasets shows that the proposed pipeline outperforms the state-of-the-art multi-view methods in quantitative as well as qualitative comparisons.
The contribution of this paper is an end-to-end pipeline for 3D face reconstruction by solving pose and shape disjointly using two uncalibrated images, achieving state-of-the-art accuracy.
## 2 Related Work
### 3D Morphable Face Model
3DMM is a statistical model which transforms the shape and texture into a vector space representation that is derived from hundreds of 3D face scans. It was first proposed by Blanz [8], and numerous variations such as [33, 29] were introduced to include identity, expression, and pose factors. This allowed for separate control of the model, and using these attributes, it can be transformed into a set of blendshapes [28] which can then be rigged to create unique animations for each individual. More recent deep learning based methods [3, 42, 44, 43] have been proposed to enhance the representation power of 3DMM. We recommend referring to the recent survey [18] for a comprehensive review. In this paper, we use the FLAME model [29] for its simplicity and wide applicability.
Figure 2: An overview of the proposed method. In the proposed method, we solve for pose and shape disjointly for an accurate reconstruction. The blue box shows the proposed face pose estimation using the face shape prior, and the pink box shows the 3D face reconstruction pipeline. \(\pi_{1}\) and \(\pi_{2}\) represent the projection matrices.
### Multi-View Stereo
Traditional MVS methods [20, 25, 27] estimate the 3D shape of the object from a set of calibrated multi-view images. These methods perform feature matching, followed by triangulation, and then the application of a depth map fusion algorithm to obtain 3D meshes [20, 11, 37, 41]. Parts of this pipeline that solve for specific missing information or improve the speed and accuracy of reconstructions have also been proposed. Depth fusion methods have been used to produce a valid 3D watertight surface [14, 32, 24]. Multiple stereo-based methods [46, 7] have been proposed specifically for faces. Valgaerts [46] proposed a lightweight passive facial performance capture approach that uses image-based scene flow computation, lighting estimation, and shading-based refinement algorithms to reconstruct the face. Beeler [7] used a combination of smoothness, ordering, and uniqueness constraints to recover facial shape robustly. However, these methods generally require pre-calibrated cameras or many images. Our work focuses on face reconstruction from two uncalibrated images.
### Deep Face Reconstruction
More recent learning-based methods [23, 13, 51] train DNNs with different objective functions to improve 3D face reconstruction. Other methods [17, 34, 5] propose to
Figure 3: A qualitative comparison of output renderings and error maps with the SOTA methods. The first and second rows show the output and error map on a sample input from the FaceScape dataset, while the third and fourth show the output and error map on a sample from the Stirling dataset. The error maps visualize a range from blue (0 mm) to yellow (3 mm). It is evident that the proposed method produces a smooth reconstruction with lower error.
recover facial geometry from uncalibrated images or videos. Dou _et al_. [17] use a DNN to disentangle facial and identity features and a Recurrent Neural Network (RNN) to build a subspace representation of 3D shapes. Ramon _et al_. [34] use a Siamese network to extract features from multiple views. Wu _et al_. [48] incorporate multi-view geometric constraints into the network by establishing dense correspondences between different views, leveraging a novel self-supervised view alignment loss. Though these methods achieve good results, they require multiple views, and performance is sub-optimal when only two views are available.
3D face reconstruction from a single view has also been studied and is challenging due to its ill-posed nature. Generally, a deep neural network is trained to regress 3DMM model parameters [45, 52], reconstruct 3D geometry [22, 29, 35], or render images using analysis-by-synthesis [16, 21, 19]. Prior work [30, 48] has shown that multi-view face methods generally perform better than single-view ones. In this paper, we compare directly with several state-of-the-art multi-view reconstruction methods.
## 3 Proposed Method
The proposed method consists of three stages, as shown in Figure 2. In the first stage, we estimate the face poses using topologically consistent dense 2D landmarks and a 3D face shape prior. In the second stage, an accurate 3D face is reconstructed using two-view stereo matching, which utilizes the face pose obtained in the first stage. This is followed by 3DMM model fitting to fill in the missing region. Finally, we perform Face Pose Refinement (FPR) by iteratively projecting the 3D face to image space using the face pose estimated in the previous iteration. We explain each stage in detail in the following sections.
### Pose Estimation Using Face Shape Prior
In the first stage of our pipeline, we use topologically consistent dense 2D landmarks and a 3D face shape prior for face pose estimation. In this section, we first provide details about dense 2D landmark detection and face shape prior computation, followed by an explanation of the face pose estimation method.
Facial landmark detection is a well-explored field of research. However, most existing methods focus on a set of 68 frequently used landmarks. Martyniuk _et al_. [31] and Wood _et al_. [47] propose dense landmark detection methods with 10\(\times\) more landmark points. We first use a face detector [6] to find the facial region, since landmark detection often fails to find accurate landmarks without it. DADNet [31] is used to find the dense landmarks.
We use a face shape prior as a regularizer when solving for face poses: the mean face with no variability, providing a very strong prior. Given a set of \(m\) training faces \(\mathcal{F}=\{\mathbf{F_{1}},\mathbf{F_{2}},...,\mathbf{F_{m}}\}\), we compute the face shape prior \(\mathbf{F_{p}}\) as the mean of all the face vertices in correspondence
\[\mathbf{F_{p}}^{k}=\frac{1}{m}\sum_{i=1}^{m}\mathbf{F_{i}}^{k}\hskip 28.452756pt \forall k\in\{1,2,...,n\} \tag{1}\]
where \(\mathbf{F_{p}}^{k}\) and \(\mathbf{F_{i}}^{k}\) denote the \(k^{th}\) vertex of the face prior \(\mathbf{F_{p}}\) and the \(i^{th}\) face \(\mathbf{F_{i}}\), respectively.
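Eq. (1) is a simple per-vertex average over registered scans; as a minimal NumPy sketch (the array layout is our assumption):

```python
import numpy as np

def face_shape_prior(faces: np.ndarray) -> np.ndarray:
    """faces: (m, n, 3) array of m training faces with n vertices each,
    all in dense correspondence. Returns the (n, 3) mean face F_p of Eq. (1)."""
    return faces.mean(axis=0)
```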
```
Input: Two-view images (\(I_{1},I_{2}\)), face shape prior (\(F_{p}\)) computed as explained in Section 3.1
Output: Reconstructed 3D face and face poses
```
**Algorithm 1** Iterative FPR and 3D Reconstruction
Face pose estimation requires solving for the rotation \(\mathbf{R}\in\) SO(3) and translation \(\mathbf{t}\in\mathbb{R}^{3}\) for an image \(I\). However, solving for face poses using only two views when the face shape is not yet known is a highly under-constrained problem with shallow minima leading to many solutions. Hence, to constrain our hypothesis space, we use the face shape prior. Formally, given an image \(I\), we find the topologically consistent landmark points \(\mathbf{L}=\{\mathbf{p_{1}},\mathbf{p_{2}},...,\mathbf{p_{n}}\}\in\mathbb{R}^{n\times 2}\) and face shape prior \(\mathbf{F_{p}}=\{\mathbf{v_{1}},\mathbf{v_{2}},...,\mathbf{v_{n}}\}\in\mathbb{R}^{n\times 3}\), and we seek the rotation \(\mathbf{R}\) and translation \(\mathbf{t}\) minimizing the cumulative squared projection error
\[S(\mathbf{R,t})=\sum_{i=1}^{n}||\mathbf{p_{i}}-\mathbf{K}\cdot(\mathbf{R} \cdot\mathbf{v_{i}}+\mathbf{t})||^{2}=\sum_{i=1}^{n}||\mathbf{p_{i}}-\pi\cdot v _{i}||^{2} \tag{2}\]
where \(\mathbf{R}\) and \(\mathbf{t}\) are the rotation and translation parameters to be estimated, \(\mathbf{K}\) is the scaling factor of the weak perspective projection, \(\pi\) is the projection matrix, and \(\mathbf{p_{i}}\) and \(\mathbf{v_{i}}\) represent the \(i^{th}\) points in the image and 3D face prior space, respectively. This is the well-known Perspective-n-Point (PnP) optimization problem, which we solve using the Quadratically Constrained Quadratic Program (QCQP) based implementation proposed by Terzakis _et al_. [40]. Figure 2 (blue box) shows our face pose estimation pipeline.
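Eq. (2) is a standard PnP objective. As a hedged illustration, recent OpenCV builds (4.5.3+, an assumption about the environment) expose the QCQP solver of Terzakis _et al_. [40] via the `SOLVEPNP_SQPNP` flag; any other PnP solver could be substituted, and a full-perspective intrinsic matrix is used here in place of the weak-perspective scaling:

```python
import cv2
import numpy as np

def estimate_face_pose(landmarks_2d: np.ndarray, prior_3d: np.ndarray,
                       K: np.ndarray):
    """Solve for (R, t) minimizing the reprojection error of the face shape
    prior onto the detected dense 2D landmarks."""
    ok, rvec, tvec = cv2.solvePnP(
        prior_3d.astype(np.float64),      # (n, 3) face shape prior vertices
        landmarks_2d.astype(np.float64),  # (n, 2) dense 2D landmarks
        K.astype(np.float64), None,       # intrinsics; no lens distortion
        flags=cv2.SOLVEPNP_SQPNP)
    R, _ = cv2.Rodrigues(rvec)            # rotation vector -> 3x3 matrix
    return R, tvec
```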
### 3D Face Reconstruction
Given two images and the associated face poses estimated using the method described in the previous section, we use the classical stereo method for 3D reconstruction. We do not employ any face shape prior during stereo matching. This allows maximum shape variability, however it produces noisy results and leaves gaps in areas not visible to both cameras.
To obtain a complete 3D facial model, a statistical blend-shape 3D face model called FLAME is utilized as the geometry prior. This model consists of the head, neck, eyeball, and shoulder region and was developed from more than 33,000 scans. Despite its low dimensionality, the FLAME model is considered more expressive than other models, such as the FaceWarehouse [12] and Basel Face Model [29]. The model's parameters include both shape and expression, and this prior allows significantly more variation while solving for shape than the static prior used when solving for pose. The authors' original implementation of FLAME is used to determine the best-fitting parameters for the available data.
### Iterative Face Pose Refinement
In the proposed pipeline, we find the 2D landmarks for the two views independently using DADNet for initialization. However, the two-view constraints introduced by stereo can help improve these landmarks and, consequently, the face poses. To this end, we perform an iterative Face Pose Refinement (FPR). Once we obtain the 3D face and face pose in the first iteration, we perform 2D projection and pick the corresponding landmarks. This is possible because the landmarks of our 2D detector are topologically consistent with the vertices in our morphable model. More formally,
Figure 4: Our dense intermediate stereo output can be used to generate shape refinement beyond 3DMM. A qualitative comparison of deformed renderings to SOTA methods on the FaceScape [50] dataset: (from left to right) DECA [19], HRN [26], ours without deformation, our deformed outputs with high-resolution inputs, deformed outputs with 50-view inputs, and the ground truth.
Figure 5: A qualitative comparison of deformed renderings to SOTA methods on the Stirling [1] dataset. Our method can recover mid-frequency details which match the ground truth.
given the generated face \(\mathbf{F_{g}}=\{\mathbf{v_{1}},\mathbf{v_{2}},...,\mathbf{v_{n}}\}\) and face pose \((\mathbf{R},\mathbf{t})\), we find the 2D projection as
\[\mathbf{L^{{}^{\prime}}}=\mathbf{K}\cdot(\mathbf{R}\cdot\mathbf{F_{g}}+\mathbf{ t})=\pi\cdot\mathbf{F_{g}} \tag{3}\]
where \(\mathbf{L^{{}^{\prime}}}\) is the new set of dense landmarks used for the next iteration of face pose estimation. This converges in just a few iterations. Algorithm 1 shows our FPR method in detail.
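A compact sketch of the refinement loop of Algorithm 1 follows (the stage functions are passed in as callables; all names are ours, for illustration):

```python
import numpy as np

def project(face_3d, R, t, K):
    """Eq. (3): project face vertices with pose (R, t) and intrinsics K,
    returning (n, 2) pixel coordinates."""
    cam = face_3d @ R.T + t.reshape(1, 3)
    pix = cam @ K.T
    return pix[:, :2] / pix[:, 2:3]

def refine_pose(landmarks, K, solve_pose, reconstruct, n_iter=3):
    """landmarks: list of (n, 2) arrays, one per view. solve_pose maps
    landmarks -> (R, t); reconstruct maps a list of poses -> (n, 3) face
    (stereo matching + FLAME fit in our pipeline)."""
    face, poses = None, None
    for _ in range(n_iter):
        poses = [solve_pose(lm) for lm in landmarks]
        face = reconstruct(poses)
        # re-project the current face to obtain refined landmarks, Eq. (3)
        landmarks = [project(face, R, t, K) for R, t in poses]
    return face, poses
```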
## 4 Results
In this section, we first discuss the datasets, evaluation metrics, and implementation details for conducting the experiments. We then perform a qualitative and quantitative comparison to several State-Of-The-Art (SOTA) methods to show the effectiveness of our method. Finally, we provide an ablation study on different components to validate the proposed method.
### Implementation Details
We perform our experiments on two widely used face datasets, FaceScape [50] and the Stirling dataset [1]. The FaceScape dataset contains high-resolution scans of hundreds of identities, each with more than 50 high-resolution images captured by DSLR cameras. We downsample them to a lower resolution (512×341) for a fair comparison. We use 40 randomly selected individuals as our test set. The Stirling dataset contains high-resolution scans of more than 100 people, each with four views. We follow prior work [4] and use 31 individuals as our test set.
For error analysis, the predicted meshes are aligned to the ground truth using the Iterative Closest Point (ICP) algorithm. For each point on the ground truth scan, we calculate the point-to-face distance in millimeters by finding the closest triangle in the predicted mesh. From this set of distances, we calculate summary statistics: the mean-squared error (MSE), the median, and a robust approximation of the maximum error which discards 10% of high-error points as outliers (M90). For the face shape prior used during pose estimation, we use 80% of the training set scans, none of which are a part of our test set. During the 3DMM fitting, we crop the face to include only the frontal region.
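These statistics can be computed directly from the per-point scan-to-mesh distances; a minimal sketch (we read the MSE column as a mean distance in mm here; squaring would be a one-line change):

```python
import numpy as np

def error_summary(dists_mm: np.ndarray):
    """dists_mm: per-scan-point point-to-face distances in millimeters.
    Returns (mean, median, M90), where M90 is the maximum error after
    discarding the worst 10% of points as outliers."""
    d = np.sort(np.asarray(dists_mm, dtype=np.float64))
    keep = int(np.ceil(0.9 * d.size))
    return d.mean(), float(np.median(d)), float(d[:keep].max())
```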
### Comparison to the State-Of-The-Art
#### 4.2.1 Qualitative Comparison
In this section, we perform a qualitative comparison of the proposed method with four SOTA methods: MVF-Net [48], DFNRMVS [5], INORig [4] and HRN [26] on the FaceScape [50] and Stirling [1] datasets. We show the rendering of example 3D faces generated using different methods, along with their error map distribution over the face, in Figure 3. HRN [26] uses geometry disentanglement and introduces a hierarchical representation to achieve detailed face modeling. Although it captures high-frequency details, the deformation introduces error near the head, chin, and cheekbone regions, as evident from the error map in Figure 3. MVF-Net [48] and DFNRMVS [5] train convolutional neural networks to explicitly enforce multi-view appearance consistency and learn the pose and shape jointly. In contrast, we solve for the pose and shape disjointly, which enables our pipeline to enforce multi-view consistency using accurate multi-view stereo. The qualitative results shown in Figure 3 validate our claim. The first and second rows of Figure 3 show the output and error map on sample data from FaceScape, while the third and fourth rows show the same
Figure 6: Comparison of performance plotted as a CDF with error on the x-axis and the fraction of reconstructed points with error below this bound on the y-axis. Our proposed method has accuracy substantially better than the existing methods. A horizontal line through the plot at 0.9 is identical to the M90 error metric reported in the results.
on a sample from the Stirling dataset. As evident from the error maps, our output face approximates the ground truth more closely than the other methods.
The integration of dense stereo reconstruction within our method enables further non-rigid deformation, generating shape refinement beyond the overly smooth FLAME face space. We upsample the FLAME face and perform as-rigid-as-possible deformation [2] followed by non-rigid ICP [38]. We then apply Taubin smoothing, since 2-view stereo is still noisy [39]. We compare our qualitative results with DECA [19] and HRN [26], as these methods propose to recover higher-frequency features beyond 3DMM models. As shown in Figures 4 and 5, we are able to recover mid-frequency features which look visually closer to the ground truth.
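For reference, Taubin smoothing alternates a shrinking and an inflating Laplacian step; below is a minimal, dependency-free sketch with a uniform graph Laplacian (the parameter values are common defaults, not necessarily those used for our results):

```python
import numpy as np

def taubin_smooth(verts, faces, lam=0.5, mu=-0.53, iters=10):
    """verts: (n, 3) float array; faces: (f, 3) int array of triangles."""
    nbrs = [set() for _ in range(len(verts))]
    for a, b, c in faces:                       # vertex adjacency from edges
        nbrs[a].update((b, c)); nbrs[b].update((a, c)); nbrs[c].update((a, b))
    v = np.array(verts, dtype=np.float64)
    for _ in range(iters):
        for step in (lam, mu):                  # shrink, then inflate back
            avg = np.stack([v[list(nb)].mean(axis=0) if nb else v[i]
                            for i, nb in enumerate(nbrs)])
            v = v + step * (avg - v)
    return v
```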
In contrast to DNNs, which often downsample input images, stereo techniques excel at extracting features from high-resolution data. Therefore, in Figure 4 we show the scenario where we intentionally use two high-resolution inputs. Stereo is generally used in a multi-view setting, so we also show results where we increase the number of images (50+) to serve as an upper bound. Note that these two are already similar; we thus conclude that a mid-frequency detailed 3D face reconstruction can be achieved with our method.
In order to demonstrate the problem that arises when attempting to solve pose and shape jointly, we perform a qualitative comparison of our proposed method with an SFM/MVS method [36, 37] in Figure 1. An ambiguity exists between pose and shape, resulting in a stretched-out face when the optimization converges to a low-error solution that is incorrect. Our disjoint pose and shape pipeline allows the use of a strong prior on shape while solving pose, and a known pose while solving shape. This results in finding the correct solution that closely matches the ground truth shape.
#### 4.2.2 Quantitative Comparison
In this section, we present a quantitative comparison of the proposed method with SOTA multi-view methods. Table 1 shows the MSE, Median and M90 error numbers on the FaceScape test set. It can be seen that the proposed method significantly outperforms SOTA methods on all three error metrics, achieving an MSE as low as 1.07 mm.
Next, we provide the comparison on the Stirling dataset in Table 1 (right half). Although Stirling images have lower resolution and higher compression than FaceScape images, the proposed method outperforms SOTA methods, demonstrating its effectiveness on different datasets. Also, notice that our method outperforms DFNRMVS [5] and INORig [4], which use the Stirling data as their training set.
We have also evaluated the error as a cumulative distribution function, as shown in Figure 6, which provides a more complete summary than a single aggregate metric. A horizontal line through the plot at 0.9 is identical to the M90 error metric reported above. The proposed method exhibits a lower error for all fractions of points on the FaceScape dataset (Figure 6, left). The Stirling plot (Figure 6, right) indicates that even with lower image quality, the proposed method outperforms the other methods.
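The CDF in Figure 6 is just the empirical distribution of per-point errors; a two-line sketch:

```python
import numpy as np

def error_cdf(dists_mm):
    """Return (sorted errors, cumulative fraction) for plotting; the error
    at a cumulative fraction of 0.9 is exactly the M90 metric."""
    d = np.sort(np.asarray(dists_mm, dtype=np.float64))
    return d, np.arange(1, d.size + 1) / d.size
```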
As mentioned in the qualitative comparison, our intermediate stereo output can be used to deform the 3DMM output to recover detailed features. In Table 2 we show the error before and after deformation. While the qualitative results appear more visually appealing, we note only a modest improvement in the error on the FaceScape dataset.
The proposed method easily generalizes to 3 and more views. We show the reconstruction error for 3 views in Table 3. It can be seen that the proposed method outperforms the existing methods even with 3 views.
### Ablation Study
In this section, we provide an ablation study of different components of the proposed pipeline. In the pose estimation
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline
 & \multicolumn{3}{c|}{**FaceScape**} & \multicolumn{3}{c|}{**Stirling**} \\ \hline
Methods & **MSE** & **Median** & **M90** & **MSE** & **Median** & **M90** \\ \hline
MVF-Net [48] & 1.75 & 1.45 & 3.60 & 1.52 & 1.22 & 3.14 \\
DFNRMVS [5] & 1.79 & 1.45 & 3.64 & 1.36 & 1.04 & 2.79 \\
INORig [4] & 1.54 & 1.29 & 3.09 & 1.20 & 0.90 & 2.50 \\
HRN [26] & 1.31 & 0.97 & 2.61 & 1.69 & 1.18 & 3.53 \\
Ours & **1.07** & **0.83** & **2.27** & **1.04** & **0.83** & **2.07** \\ \hline
\end{tabular}
\end{table}
Table 1: Quantitative comparison of face shape estimation with SOTA methods on the FaceScape and Stirling datasets. MSE, Median, and max error after rejecting 10% outliers (M90) are provided over each test set. Our method clearly outperforms the existing methods on both datasets.
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline
**Stirling** & **MSE** & **Median** & **M90** \\ \hline
Ours (No Deformation) & 1.04 & 0.83 & 2.07 \\
Ours (Deformed) & 0.97 & 0.78 & 1.98 \\ \hline
**FaceScape** & **MSE** & **Median** & **M90** \\ \hline
Ours (No Deformation) & 1.07 & 0.83 & 2.27 \\
Ours (Deformed) & 1.06 & 0.82 & 2.26 \\ \hline
\end{tabular}
\end{table}
Table 2: Comparison after performing the deformation of the 3DMM output using the stereo output.
stage of the pipeline, results are affected by both 2D dense landmark inaccuracies and face prior inaccuracies. Mistakes made estimating pose will propagate to errors in shape reconstruction. Therefore, we perform an analysis of how the reconstruction accuracy is impacted by changes in these two factors.
In the first experiment, we aim to find the upper bound of the proposed method if we improve our face shape prior. We replace the mean face prior with an ideal prior, the ground truth 3D scan, when estimating the face pose. It can be observed in Table 4 that the MSE improves from 1.07 to 1.05 mm. However, this is a very small improvement, and we conclude that a mean face prior, though simple, is a sufficient prior for estimating pose.
Next, we perform an analysis to find the upper-bound reconstruction accuracy with improved 2D landmarks. For this, we use ground truth 2D landmarks to find the face pose. In Table 4, the first row shows the MSE using the proposed method without FPR, while the last row shows the MSE when ground truth landmarks are used, again without FPR. A significant improvement of \(\sim 0.3\) mm (from 1.16 to 0.87) is observed when landmark localization is improved. Based on this observation, we introduced the iterative FPR portion of our pipeline to improve the landmarks. As shown in Table 4, second row, the proposed FPR method does improve accuracy, reducing the error to 1.07. However, there is still scope for improvement.
If both the ground truth 3D face and the ground truth 2D landmarks exist, it is possible to solve for pose exactly. This is equivalent to pre-calibrated cameras. In this case the remaining error is due to stereo reconstruction and morphable model fitting. However, this error is substantially lower, at 0.77 mm. We conclude that although pose has many fewer parameters than shape, finding the correct pose is critical to future progress in 3D face reconstruction.
Finally, we also perform an analysis of how the reconstruction error improves after each iteration of the proposed FPR process. In Figure 7, we show the MSE for the first three iterations on a sample input. It can be observed that the proposed FPR improves the MSE from 1.16 mm to 1.07 mm. The iterative FPR converges in just three iterations due to good initialization and an end-to-end topologically consistent pipeline.
## 5 Limitations
In comparison to the state-of-the-art methods, our proposed method achieves significantly better 3D face reconstruction both quantitatively and qualitatively. However, our method's reliance on stereo matching as an intermediate step brings its own challenges. For instance, it struggles when there is a significant angle between the input views, leading to difficulties in stereo matching due to large disparity and matching ambiguity. Additionally, images must be captured under uniform conditions for effective stereo matching.
## 6 Conclusion
This paper proposes an end-to-end method for 3D face reconstruction from two uncalibrated images. We introduce a strong face shape prior to the face pose estimation in order to make the optimization stable and accurate. This pose is then used for stereo reconstruction, followed by a 3DMM fitting to find the face shape. An iterative face pose refinement procedure improves the face pose and consequently reconstruction accuracy. The proposed method is evaluated on two widely used face datasets, and outperforms SOTA methods.
\begin{table}
\begin{tabular}{|l|c|c|} \hline
 & Mean Face & GT Face \\ \hline
Ours (No FPR) & 1.16 & 1.09 \\
Ours (Using FPR after 3 iter) & 1.07 & 1.05 \\
GT Landmarks (No FPR) & 0.87 & 0.77 \\ \hline
\end{tabular}
\end{table}
Table 4: Ablation study of the change in reconstruction performance (MSE in mm) with changes in the 2D landmarks and the face shape prior. It can be observed that using GT 2D landmarks provides a significant improvement of \(0.3\) mm, while the GT face provides less than \(0.1\) mm improvement. The proposed FPR method reduces the error to 1.07 mm.
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline
 & \multicolumn{3}{c|}{**FaceScape (3-views)**} \\ \hline
Methods & **MSE** & **Median** & **M90** \\ \hline
MVF-Net [48] & 1.75 & 1.45 & 3.60 \\
DFNRMVS [5] & 1.73 & 1.40 & 3.56 \\
INORig [4] & 1.46 & 1.18 & 3.05 \\
HRN [26] & 1.28 & 0.96 & 2.54 \\
Ours & **0.97** & **0.80** & **2.01** \\ \hline
\end{tabular}
\end{table}
Table 3: Quantitative comparison using 3-view inputs with SOTA methods on the FaceScape dataset.
Figure 7: (Top) error map for a sample input and (Bottom) average error on FaceScape dataset after each iteration of FPR. A clear improvement in reconstruction performance is observed using the proposed FPR. |
2305.12439 | Large shift current via in-gap and charge-neutral exciton excitations in
BN nanotubes and single BN layer | We perform {\it ab initio} many-body calculations to investigate the exciton
shift current in small diameter zigzag BN nanotubes and also single BN sheet,
using the GW plus Bethe-Salpeter equation (GW-BSE) method with the newly
developed efficient algorithms. Our GW-BSE calculations reveal a giant in-gap
peak in the shift current spectrum in all the studied BN systems due to the
excitation of the A exciton. The peak value of the excitonic shift current is
more than three times larger than that of the quasiparticle shift current, and
is attributed to the gigantic enhancement of the optical dipole matrix element
by the A exciton resonance. The effective exciton shift current conductivity is
nearly ten times larger than the largest shift conductivity observed in
ferroelectric semiconductors. Importantly, the direction of the shift current
in the BN nanotubes is found to be independent of the tube chirality ($n,0$)
(or diameter), contrary to the simple rule of
$\text{sgn}(J_\text{shift})=\text{mod}(n,3)$ predicted by previous model Hamiltonian
studies. Finally, our {\it ab initio} calculations also show that the exciton
excitation energies decrease significantly with the decreasing diameter due to
the curvature-induced orbital rehybridization in small diameter zigzag BN
nanotubes. | Yi-Shiuan Huang, Yang-Hao Chan, Guang-Yu Guo | 2023-05-21T12:05:23Z | http://arxiv.org/abs/2305.12439v1 | Large shift current via in-gap and charge-neutral exciton excitations in BN nanotubes and single BN layer
###### Abstract
We perform _ab initio_ many-body calculations to investigate the exciton shift current in small diameter zigzag BN nanotubes and also single BN sheet, using the GW plus Bethe-Salpeter equation (GW-BSE) method with the newly developed efficient algorithms. Our GW-BSE calculations reveal a giant in-gap peak in the shift current spectrum in all the studied BN systems due to the excitation of the A exciton. The peak value of the excitonic shift current is more than three times larger than that of the quasiparticle shift current, and is attributed to the gigantic enhancement of the optical dipole matrix element by the A exciton resonance. The effective exciton shift current conductivity is nearly ten times larger than the largest shift conductivity observed in ferroelectric semiconductors. Importantly, the direction of the shift current in the BN nanotubes is found to be independent of the tube chirality \((n,0)\) (or diameter), contrary to the simple rule of \(sgn(J_{\rm shift})={\rm mod}(n,3)\) predicted by previous model Hamiltonian studies. Finally, our _ab initio_ calculations also show that the exciton excitation energies decrease significantly with the decreasing diameter due to the curvature-induced orbital rehybridization in small diameter zigzag BN nanotubes.
## I Introduction
Shift current is one of the primary mechanisms for the bulk photovoltaic effect (BPVE) (also known as the photogalvanic effect), which can generate a DC photocurrent in noncentrosymmetric crystals due to their second-order optical response to light irradiation. The shift of the real-space Wannier charge center of the excited electron is responsible for the shift current [1; 2; 3; 4]. In contrast to the conventional photovoltaic effect, the shift current is a bulk phenomenon that does not require a p-n junction to separate the optically generated electron-hole pairs for a DC photocurrent. Consequently, the BPVE can be exploited to generate above-band-gap photovoltages [5] and thus to fabricate high power-conversion-efficiency solar cells [6; 7]. Therefore, there has been a resurgence of interest in the BPVE in recent years.
An exciton is a collective excitation formed by a bound electron-hole pair interacting via the Coulomb interaction, and it can be optically generated. Although an exciton itself is charge-neutral, the process of exciton excitation can generate a shift current, called the exciton shift current, due to the Wannier charge center shift of the electron and hole in noncentrosymmetric crystals [8]. Furthermore, the exciton shift current can exhibit exotic sub-bandgap peaks owing to the exciton binding energy [8; 9]. Recently, the sub-bandgap exciton shift current was observed in the noncentrosymmetric semiconductor CdS by using THz emission spectroscopy at the low temperature of 2 K [10]. The amplitude of the exciton shift current is comparable to the shift current driven by free electron-hole pair excitation.
BN nanotubes (BN-NTs) are formed by rolling up a single hexagonal BN sheet (see Fig. 1) along a specific chiral vector \((n,m)\). [11] There are three types of BN-NTs, namely armchair \((n,n)\) nanotubes, zigzag \((n,0)\) nanotubes (Fig. 1), and chiral \((n,m)\) nanotubes where \(n\neq m\)[11]. However, there is no second-order nonlinear optical (NLO) response (e.g., second-harmonic generation and BPVE) in armchair BN-NTs [12]. Moreover, previous experiments [13] indicate that among the grown BN-NTs, the zigzag structure is usually favored. Therefore, we focus on zigzag BN-NTs in this paper.
BN-NTs exhibit large many-body interaction effects owing to their one-dimensional (1D) and wide-band-gap nature, as demonstrated by previous _ab initio_ many-body theory studies at the level of GW plus Bethe-Salpeter equation (GW-BSE) [14; 15]. Furthermore, huge diameter-dependent A exciton peaks have been observed in BN-NTs [16; 17], consistent with the theoretical predictions [14; 15]. Therefore, we expect zigzag BN-NTs to be ideal candidates for observing the large sub-bandgap exciton shift current due to the following two properties [10]. First, exciton absorption spectra in BN-NTs are well separated from the continuum of free electron-hole excitations owing to their significantly renormalized optical spectra and large exciton binding energies [14; 15]. Second, strongly bound excitons with a large binding energy impede thermal dissociation into free electron-hole pairs [14; 15].
However, _ab initio_ studies of the BPVE in BN-NTs at the GW-BSE level have not been reported, mainly because such _ab initio_ calculations are technically and computationally challenging, as will be explained below. Nevertheless, simple model Hamiltonian calculations without [18] and with [19] the inclusion of the excitonic effect have been performed, predicting that zigzag BN-NTs have a shift current along the tube axis with the direction
determined by the chiral index \((n,0)\). In particular, the shift current direction would follow the simple rule of \(\text{sgn}(J_{\text{sh}})=\text{mod}(n,3)\) [18; 19]. According to this rule, in a bundle of zigzag BN-NTs, 1/3 of them would have the shift current along the positive tube axis, another 1/3 would possess a current flowing in the opposite direction, and the remaining 1/3 would carry zero current. That is to say, a bundle of zigzag BN-NTs would have either zero or a very small net shift current, i.e., zigzag BN-NTs would not be suitable for applications in photovoltaic solar cells and nonlinear optoelectronic devices. On the other hand, the prediction of this simple rule is theoretically surprising. A symmetry analysis (see Sec. II A below) indicates that the shift current would flow along the \(y\)-axis in a single BN sheet [see Fig. 1(c)], which is also the tube axis when the BN sheet is rolled up to form a zigzag BN-NT. Consequently, for zigzag BN-NTs with a large diameter, the shift current would always be along the positive tube axis since the curvature effect would be very small [12; 20], i.e., the above-mentioned rule would not hold. In this context, it is important to perform state-of-the-art _ab initio_ calculations to investigate this detrimental prediction.
In this work, therefore, we perform state-of-the-art _ab initio_ GW-BSE calculations of the shift current in small diameter zigzag BN-NTs [\((5,0),(6,0),(7,0),(8,0)\)] as well as the single BN layer. Because of the possibly large curvature effect on the optical properties [12; 20], the shift current in small BN-NTs could depend significantly on the tube diameter. On the other hand, the optical properties of large diameter BN-NTs would be rather similar to those of the single BN sheet [12; 20], and thus are not considered here. Among other things, our _ab initio_ calculations reveal that the shift current due to exciton excitations is dramatically enhanced. Furthermore, the direction of the shift current, calculated both without and with the excitonic effect included, is always along the \(c\)-axis, i.e., independent of the tube index \((n,0)\) (or tube diameter). Therefore, our work demonstrates that zigzag BN-NT bundles are promising materials for high power-conversion-efficiency solar cells as well as high-sensitivity photodetectors.
We would like to comment that extending the existing _ab initio_ GW-BSE approach to calculate the exciton shift current is not straightforward [9]. In fact, there has been only one reported fully _ab initio_ study on the exciton shift current [9]. In Ref. [21], to account for the excitonic effect, Fei et al. used the linear optical coefficients derived from their _ab initio_ GW-BSE calculations. However, since the Coulomb interaction between the electron and hole in an exciton is not explicitly taken into account, such calculations are still based on an independent-particle approximation [21] and hence are not a fully _ab initio_ GW-BSE approach. In Ref. [9], to take the strong excitonic effect in two-dimensional (2D) materials into account, a time-dependent adiabatic GW (TD-aGW) approach was developed and used to calculate the exciton shift current. Although the _ab initio_ TD-aGW method can properly include the excitonic effect on the optical responses [9], the extremely high computational cost of conducting real-time propagation prevents it from studying complex structures such as BN-NTs. In this work, we thus develop a computationally efficient approach that combines the GW-BSE and sum-over-state formalism derived from the perturbative density-matrix approach within the mean-field approximation [22] to calculate the exciton shift current.
The rest of this paper is organized as follows. In Sec. II, we introduce the crystal structures of the BN-NTs and the BN sheet, followed by a brief description of the theories and computational details used in this work. In particular, the computationally efficient approach mentioned above for the exciton shift current calculations is outlined. The main results are presented in Sec. III. In Sec. III A, we present the calculated electronic and optical properties of the BN sheet. The distinctive features in the optical spectra are analyzed in terms of the electronic band structure and interband optical transition matrix elements. In Sec. III B, the electronic properties of the BN-NTs are reported, which are used to understand the calculated optical absorption and shift current spectra in the subsections that follow. In Secs. III C and III D, the calculated optical absorption and shift current spectra of the BN-NTs are presented, respectively. Finally, the conclusions drawn from this work are summarized in Sec. IV.
## II Crystal structures and computational methods
### Symmetry and shift conductivity tensor
Single hexagonal BN sheet is a noncentrosymmetric crystal with space group \(P\overline{6}m2\) and point group \(D_{3h}\). A symmetry analysis would show that the single BN
Figure 1: Crystal structures of the zigzag BN-NT (6,0), and the hexagonal BN sheet. (a) Side and (b) top views of the (6,0) BN-NT. (c) Top view of the single BN sheet. The solid box in (a) indicates the unit cell. Dashed lines in (a) and (c) denote the mirror planes.
sheet has three nonzero shift conductivity tensor elements \(\sigma^{xxy}\), \(\sigma^{yxx}\), and \(\sigma^{yyy}\), where the first Cartesian index is the direction of the current and the second and third indices are the polarization directions of the external field. Furthermore, \(\sigma^{xxy}=\sigma^{yxx}=-\sigma^{yyy}\) [12; 23], i.e., there is only one inequivalent nonzero element. As an example, let us consider the difference between the in-plane shift conductivity tensor elements \(\sigma^{xxx}\) and \(\sigma^{yyy}\) in the single BN sheet [see Fig. 1(c)]. Figure 1(c) indicates that the atomic structures in the left and right regions relative to the mirror plane (denoted by the dashed line, along the \(y\)-axis) are symmetric. Therefore, the shift current along the direction normal to the mirror plane (the \(x\)-axis) would be zero if the second-order combination of external fields is even under the mirror symmetry, i.e., \(\sigma^{xxx}=0\). On the other hand, such mirror symmetry does not exist along the mirror plane (the \(y\)-axis), and thus the conductivity element \(\sigma^{yyy}\) need not vanish.
When a hexagonal BN sheet is wrapped up to form a zigzag BN-NT with chiral index \((n,0)\), the \(y\)-axis becomes the tube axis [the \(c\)-axis, see Fig. 1(a)] and the point group of the resultant BN-NT \((n,0)\) is \(C_{2nv}\). Thus, the conductivity tensor element \(\sigma^{ccc}\) is nonzero. On the other hand, there is no azimuthal shift current in the zigzag BN-NTs because it corresponds to the direction of the \(x\)-axis in the single BN sheet (see Fig. 1).
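This counting can be verified numerically by imposing invariance of a generic in-plane rank-3 tensor under the point-group generators and extracting the null space; the single independent element is recovered. A small illustrative sketch (ours, not part of the production workflow):

```python
import numpy as np
from itertools import product

def invariant_rank3_basis(generators, tol=1e-10):
    """Solve sigma_abc = R_aa' R_bb' R_cc' sigma_a'b'c' for every generator R
    and return a basis of the invariant subspace, flattened over (a, b, c)."""
    dim = generators[0].shape[0]
    idx = list(product(range(dim), repeat=3))
    blocks = []
    for R in generators:
        T = np.zeros((len(idx), len(idx)))
        for i, (a, b, c) in enumerate(idx):
            for j, (p, q, r) in enumerate(idx):
                T[i, j] = R[a, p] * R[b, q] * R[c, r]
        blocks.append(T - np.eye(len(idx)))
    _, s, vt = np.linalg.svd(np.vstack(blocks))
    return vt[s < tol]  # rows spanning the null space (invariant tensors)

mirror = np.diag([-1.0, 1.0])                        # x -> -x
co, si = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
c3 = np.array([[co, -si], [si, co]])                 # three-fold rotation
# One basis vector survives, with sigma_xxy = sigma_yxx = -sigma_yyy.
print(invariant_rank3_basis([mirror, c3]).round(3))
```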
### Density functional theory calculations
_Ab initio_ calculations based on density functional theory (DFT) with the local density approximation (LDA) are performed to determine the ground state properties of the considered zigzag BN-NTs and also the single BN sheet. A supercell geometry is adopted to simulate a BN-NT, in which the nanotubes are arranged in a square array with a minimum distance of 12 Å between neighboring nanotubes. A slab supercell method is used to model the single BN sheet, and the inter-sheet distance used is over 16 Å. In the structural optimization calculations, the accurate projector-augmented wave (PAW) method plus the conjugate gradient approach, as implemented in the VASP package [24; 25], is used to determine the atomic positions and lattice constants of the BN-NTs. A large plane-wave cutoff energy of 450 eV is adopted. Theoretical equilibrium structures are obtained when the forces acting on all the atoms and the uniaxial stress are less than 0.005 eV/Å and 2.0 kBar, respectively.
The ground-state electronic structure calculations are performed using the plane-wave pseudopotential method as implemented in the Quantum Espresso package [26]. The optimized norm-conserving Vanderbilt pseudopotentials [27] are used here. The \(1\times 1\times 32\) and \(18\times 18\times 1\) Monkhorst-Pack \(k\)-grids [28] are used to evaluate the Brillouin zone (BZ) integrals for the BN-NTs and the single BN sheet, respectively. The energy cutoff for the plane-wave basis set is 50 Ry. The resultant electronic structures are used in the subsequent GW-BSE calculations, as described below.
### Quasiparticle band structure calculations
The present GW-BSE calculations are performed via the BerkeleyGW package [29; 30; 31]. The quasiparticle energy bands are calculated by solving the Dyson equation,
\[[-\frac{1}{2}\nabla^{2}+V_{\rm ion}+V_{\rm H}+\Sigma(E_{n\bf k}^{\rm QP})] \psi_{n\bf k}^{\rm QP}=E_{n\bf k}^{\rm QP}\psi_{n\bf k}^{\rm QP}, \tag{1}\]
where \(\Sigma\), \(E_{n\bf k}^{\rm QP}\), and \(\psi_{n\bf k}^{\rm QP}\) are the self-energy operator, the energy, and the wave function of the quasiparticles within the \(G_{0}W_{0}\) approximation, respectively [31].
In the present one-shot \(G_{0}W_{0}\) calculations [32], a nonuniform neck subsampling (NNS) \(k\)-grid of \(1\times 1\times 8\) (\(18\times 18\times 1\)) with a subsampling of 10 points in the mini-Brillouin zone, 600 (1600) bands, and a dielectric cutoff energy of 50 (50) Ry are used for the BN-NTs (the single BN sheet). Truncation of the Coulomb interactions between the BN-NT (the BN sheet) and its periodic images is implemented [33]. The dynamic dielectric matrix is computed within the independent particle approximation (IPA) and the Hybertsen-Louie generalized plasmon pole model [29].
### Exciton excitation calculations
The exciton wavefunction can be expressed as the linear combination of the quasiparticle electron-hole pairs,
\[\Psi_{s}({\bf r}_{e},{\bf r}_{h})=\sum_{{\bf k},m,n}A_{mn\bf k}^{s}\psi_{{\bf k },n}({\bf r}_{e})\psi_{{\bf k},m}^{*}({\bf r}_{h}), \tag{2}\]
where the band index \(m\) (\(n\)) runs over the valence (conduction) bands only. The exciton envelope function \(A_{mn\bf k}^{s}\) of the \(s\)-th exciton state can be obtained by solving the BSE,
\[(E_{n\bf k}^{\rm QP}-E_{m\bf k}^{\rm QP})A_{mn\bf k}^{s}+\sum_{m^{\prime}n^{\prime}{\bf k}^{\prime}}\langle mn{\bf k}|K^{eh}|m^{\prime}n^{\prime}{\bf k}^{\prime}\rangle A_{m^{\prime}n^{\prime}{\bf k}^{\prime}}^{s}=\Omega^{s}A_{mn\bf k}^{s} \tag{3}\]
where \(\Omega^{s}\) and \(E_{m\bf k}^{\rm QP}\) (\(E_{n\bf k}^{\rm QP}\)) are the \(s\)-th exciton excitation energy and the valence (conduction) band quasiparticle excitation energies, respectively. \(K^{eh}\) is the electron-hole interaction kernel, which includes an exchange repulsive bare Coulomb term and a direct electron-hole attractive screened Coulomb term [30; 31]. The imaginary part of the dielectric function including the excitonic effect can be expressed as
\[\tilde{\varepsilon}_{\rm BSE} = \frac{g_{s}\pi e^{2}}{\epsilon_{0}N_{k}V_{c}}\sum_{s}\Big|\sum_{mn\bf k}A_{mn\bf k}^{s}\,{\bf x}_{mn\bf k}\cdot{\bf e}\Big|^{2}\delta(\omega-E^{s}) = \frac{g_{s}\pi e^{2}}{\epsilon_{0}N_{k}V_{c}}\sum_{s}|{\bf X}_{s}\cdot{\bf e}|^{2}\delta(\omega-E^{s}) \tag{4}\]
where \(\mathbf{x}_{mn\mathbf{k}}\cdot\mathbf{e}\) is the dipole matrix element along the polarization direction \(\mathbf{e}\) and \(g_{s}\) accounts for the spin degeneracy. \(N_{k}\) is the number of \(k\) points and \(V_{c}\) is the unit cell volume. The exciton dipole matrix element \(\mathbf{X}_{s}\) is defined as
\[\mathbf{X}_{s}\equiv\sum_{mn\mathbf{k}}A^{s}_{nm\mathbf{k}}\mathbf{x}_{mn \mathbf{k}}. \tag{5}\]
When neglecting the excitonic effect, optical excitations are given by direct transitions between quasiparticle electron-hole pairs. Within the IPA approximation, the imaginary part of the dielectric function reduces to [30],
\[\tilde{\varepsilon}_{\text{IPA}}=\frac{g_{s}\pi e^{2}}{\epsilon_{0}N_{k}V_{c}}\sum_{mn\mathbf{k}}|\mathbf{x}_{mn\mathbf{k}}\cdot\mathbf{e}|^{2}\delta(\omega-(E^{\text{QP}}_{n\mathbf{k}}-E^{\text{QP}}_{m\mathbf{k}})). \tag{6}\]
Equation (4) has the same structure as Eq. (6), except that the dipole matrix element is replaced by the exciton dipole matrix element. Our BSE calculations for the BN-NTs (the single BN sheet) are computed on a dense \(k\)-grid of \(1\times 1\times 64\) (\(72\times 72\times 1\)) with a dielectric cutoff of 10 (10) Ry.
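Once the exciton energies and envelopes are available from the BSE solver, Eq. (4) reduces to a broadened sum over excitons; a minimal post-processing sketch (the array conventions, including index order and conjugation, are ours):

```python
import numpy as np

def eps2_bse(A, dip, E_exc, omega, sigma=0.1, prefac=1.0):
    """A: (S, nv, nc, Nk) exciton envelopes A^s_{mnk}; dip: (nv, nc, Nk)
    projected dipole matrix elements x_mnk . e; E_exc: (S,) exciton energies
    (eV); omega: (W,) photon energies. prefac collects the factor
    g_s*pi*e^2/(eps0*Nk*Vc). The delta function is replaced by a Gaussian
    of width sigma, as described in the text."""
    omega = np.asarray(omega, dtype=np.float64)
    X = np.einsum('svck,vck->s', A, dip)          # exciton dipoles, Eq. (5)
    g = np.exp(-0.5 * ((omega[:, None] - E_exc[None, :]) / sigma) ** 2)
    g /= sigma * np.sqrt(2.0 * np.pi)
    return prefac * g @ (np.abs(X) ** 2)
```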
### Exciton shift current calculations
The shift current density along the \(a\)-axis is given by [4]
\[J^{a}_{sh}(\omega)=2\sum_{bc}\sigma^{abc}(0;\omega,-\omega)E^{b}(\omega)E^{c} (-\omega), \tag{7}\]
where \(\sigma^{abc}\) is the third-rank conductivity tensor, \(b\) and \(c\) denote the polarization directions of the electric fields of the incident light. In the IPA, \(\sigma^{abc}\) can be expressed as [4; 34]
\[\sigma^{abc}(0;\omega,-\omega)=-\frac{i\pi e^{3}}{4N_{k}V_{c}} \sum_{n,m,\mathbf{k}}f_{nm}\] \[(\mathbf{x}^{b}_{mn}[\mathbf{x}^{c}_{nm}]_{;k_{a}}+\mathbf{x}^{c }_{mn}[\mathbf{x}^{b}_{nm}]_{;k_{a}})[\delta(\omega_{mn}-\omega)+\delta(\omega _{nm}-\omega)], \tag{8}\]
where \(\mathbf{x}^{a}_{nm}\) and \([\mathbf{x}^{a}_{nm}]_{;\mathbf{k}}\) are the dipole matrix element and its generalized momentum derivative, respectively. The generalized momentum derivative of quantity \(O_{mn}(\mathbf{k})\) is given by
\[[O^{b}_{nm}(\mathbf{k})]_{;k_{a}}=\partial_{ka}O^{b}_{nm}(\mathbf{k})-i[ \xi^{a}_{nn}(\mathbf{k})-\xi^{a}_{mm}(\mathbf{k})]O^{b}_{nm}(\mathbf{k}) \tag{9}\]
where \(\xi^{a}_{nm}(\mathbf{k})\) is the \(a\) component of the Berry connection [4]. \(f_{nm}=f_{n}-f_{m}\) and \(\hbar\omega_{nm}=E_{n}-E_{m}\), where \(f_{n}\) denotes the occupation factor and \(E_{n}\) is the energy of the \(n\)th band at the \(\mathbf{k}\) point. Note that here the band indices \(m\) and \(n\) sum over all states.
As mentioned before, to include the excitonic effect, we use the efficient perturbative density-matrix approach to calculate \(\sigma^{abc}\) [see Eq. (B1a) in Ref. [22]]. In this case, \(\sigma^{abc}\) can be written as
\[\sigma^{abc}(0,\omega,-\omega) =-\frac{g_{s}e^{3}}{2V}\sum_{ss^{\prime}}\left[\frac{X^{b}_{s}\Pi^ {a}_{ss^{\prime}}X^{c*}_{s^{\prime}}}{\left(\hbar\omega-E_{s}+i\eta\right) \left(\hbar\omega-E_{s^{\prime}}-i\eta\right)}+\frac{V^{a}_{s}X^{b}_{ss^{ \prime}}X^{c}_{s^{\prime}}}{-E_{s}\left(\hbar\omega-E_{s^{\prime}}+i\eta\right) }+\frac{V^{a}_{s}X^{b*}_{ss^{\prime}}X^{c*}_{s^{\prime}}}{-E_{s}\left(-\hbar \omega-E_{s^{\prime}}-i\eta\right)}\right]\] \[+\left(b\leftrightarrow c,\omega\leftrightarrow-\omega\right), \tag{10}\]
where we define \(V^{a}_{s}\equiv\sum_{cv\mathbf{k}}A^{s*}_{cv\mathbf{k}}v^{a}_{cv\mathbf{k}}\) with velocity matrix elements \(v^{a}_{cv\mathbf{k}}\), and the inter-exciton coupling matrix elements \(X_{ss^{\prime}}\) and \(\Pi_{ss^{\prime}}\) are defined as,
\[\Pi^{a}_{ss^{\prime}}\equiv\sum_{cc^{\prime}v\mathbf{k}}A^{s}_{cv\mathbf{k}}v^{a}_{c^{\prime}c\mathbf{k}}A^{s^{\prime}*}_{c^{\prime}v\mathbf{k}}-\sum_{cvv^{\prime}\mathbf{k}}A^{s}_{cv^{\prime}\mathbf{k}}v^{a}_{v^{\prime}v\mathbf{k}}A^{s^{\prime}*}_{cv\mathbf{k}},\]
and \(X^{b}_{ss^{\prime}}=Y^{b}_{ss^{\prime}}+Q^{b}_{ss^{\prime}}\). We note that \(c\) and \(v\) denote summations over the conduction and valence bands, respectively. Here, \(\mathbf{X}_{ss^{\prime}}\) is an ill-defined operator. However, it can be separated into the well-defined \(\mathbf{Y}_{ss^{\prime}}\) (interband) and \(\mathbf{Q}_{ss^{\prime}}\) (intraband) operators, where
\[Q^{b}_{ss^{\prime}}=i\sum_{c^{\prime}v^{\prime}\mathbf{k}^{\prime}}A^{s*}_{c^{\prime}v^{\prime}\mathbf{k}^{\prime}}\left(A^{s^{\prime}}_{c^{\prime}v^{\prime}\mathbf{k}^{\prime}}\right)_{;k^{\prime}_{b}},\]
and
\[Y^{b}_{ss^{\prime}}=\sum_{\begin{subarray}{c}c^{\prime}\neq c_{1}\\ v^{\prime}\neq v_{1}\end{subarray}}\sum_{c_{1}v_{1}\mathbf{k}^{\prime}}A^{s*}_{c^{\prime}v^{\prime}\mathbf{k}^{\prime}}\left[A^{s^{\prime}}_{c_{1}v^{\prime}\mathbf{k}^{\prime}}x^{b}_{c^{\prime}c_{1}}-A^{s^{\prime}}_{c^{\prime}v_{1}\mathbf{k}^{\prime}}x^{b}_{v_{1}v^{\prime}}\right].\]
We develop a post-processing program to implement this formalism, which is then used to calculate the excitonic shift current conductivity using the outputs from the BerkeleyGW package.
It is not straightforward to evaluate the intraband part of the position operator, \(\mathbf{Q}_{ss^{\prime}}\), since it involves the numerical derivative of the exciton envelope function \(A^{s}_{cv\mathbf{k}}\) with respect to \(\mathbf{k}\), and \(A^{s}_{cv\mathbf{k}}\) carries an arbitrary \(k\)-dependent gauge. To solve this problem, we use the locally smooth gauge adopted in Ref. [9] to calculate the intraband position operator. The idea of the locally smooth gauge is to rotate the wave functions at neighboring \(k\) points in such a way that the overlap of connected wave functions is Hermitian [35; 36; 9].
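This rotation can be obtained from the unitary factor of a polar decomposition of the overlap matrix; a minimal numerical sketch (our own illustration, not the implementation used here), where `M` stands for the overlap \(\langle u_{n\mathbf{k}}|u_{m\mathbf{k}+\Delta\mathbf{k}}\rangle\):

```python
import numpy as np

def smooth_gauge_rotation(M):
    """Return a unitary R such that M @ R is Hermitian (positive
    semidefinite); applying R to the states at k + dk fixes the gauge
    locally. R is the unitary part of the polar decomposition of M."""
    V, _, Wh = np.linalg.svd(M)           # M = V @ diag(s) @ Wh
    return Wh.conj().T @ V.conj().T       # M @ R = V @ diag(s) @ V^dagger
```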
In this work, the \(\delta\)-function is approximated by a Gaussian function with a 0.1 eV broadening. For the BN NTs, we use the effective unit cell volume rather than the supercell volume. The effective unit cell volume is given by \(V_{c}=\pi[(D/2+d/2)^{2}-(D/2-d/2)^{2}]T=\pi DdT\), where \(D\) is the tube diameter, \(T\) is the length of the translational vector, and \(d\) is the effective thickness of the nanotube walls, which is set to the interlayer distance (3.28 Å) of \(h\)-BN [20]. For the single BN sheet, the effective unit cell volume is \(V_{c}=A_{c}d\), where \(A_{c}\) and \(d\) are the area of the unit cell and the effective thickness of the sheet, respectively.
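As an illustration, the broadening and volume normalization can be sketched as follows (a schematic of our own; `E_s` and `f_s` stand for transition energies and the corresponding oscillator strengths with all prefactors absorbed, and the \((5,0)\) translation length is an assumed value not quoted in the text):

```python
import numpy as np

def broadened_spectrum(omega, E_s, f_s, sigma=0.1):
    """Replace each delta function by a unit-area Gaussian of width
    sigma = 0.1 eV, as done for all spectra in this work."""
    g = np.exp(-0.5 * ((omega[:, None] - E_s[None, :]) / sigma) ** 2)
    g /= sigma * np.sqrt(2.0 * np.pi)
    return g @ f_s

# Effective unit-cell volume of a nanotube, V_c = pi * D * d * T; e.g. for
# the (5,0) tube: D = 4.11 Angstrom (Table 2), d = 3.28 Angstrom, and an
# assumed zigzag translation length T = 3 * 1.446 ~ 4.34 Angstrom.
V_c = np.pi * 4.11 * 3.28 * (3 * 1.446)   # in Angstrom^3
```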
## III Results and Discussion
### Electronic and optical properties of the single BN sheet
Both the GW and LDA quasiparticle band structures of the single BN sheet are displayed in Fig. 2. Our GW quasiparticle band structure agrees well with previous calculations [37; 38; 39; 40; 41; 42; 43]. The single BN sheet shares the honeycomb lattice of graphene. However, unlike graphene, which is a semimetal with \(\pi\) and \(\pi^{*}\) bands degenerate at the \(K\) point, here the \(\pi\) and \(\pi^{*}\) bands are well separated due to the ionicity difference between B and N [33]. Hence, the single BN sheet has a large band gap (Fig. 2). Table 1 shows that the single BN sheet is an insulator with a direct \(K^{v}\to K^{c}\) band gap of 4.65 eV in the LDA calculation, but acquires a larger indirect \(K^{v}\rightarrow\Gamma^{c}\) band gap of 7.66 eV in the GW calculation. The large GW quasiparticle correction to the LDA band gap can be attributed to the weak dielectric screening, a consequence of the 2D nature and the wide band gap of the BN sheet. The GW quasiparticle correction of the \(K^{v}\to K^{c}\) gap (3.26 eV) is slightly (0.37 eV) larger than that of the \(K^{v}\rightarrow\Gamma^{c}\) gap (2.89 eV). This turns the single BN sheet from a direct band gap (\(K^{v}\to K^{c}\)) insulator in the LDA into an indirect band gap (\(K^{v}\rightarrow\Gamma^{c}\)) insulator at the GW level. The \(\pi^{*}\) band, arising from \(2p_{z}\) orbitals, has an out-of-plane charge density and hence experiences weaker dielectric screening. The weaker dielectric screening of the \(\pi^{*}\) band explains why the quasiparticle correction of the \(K^{v}\to K^{c}\) gap is larger than that of the \(K^{v}\rightarrow\Gamma^{c}\) gap [44]. Consequently, the GW quasiparticle correction not only enlarges the band gap but also changes the dispersion of the bands. This underscores the importance of GW quasiparticle calculations: the complex screening effects cannot be fully captured by a simple scissor correction. We note that our GW band widths are in nearly perfect agreement with the experimental values (Table 1).
Figure 3(a) shows the calculated imaginary (absorptive) part (\(\varepsilon''\)) of the dielectric function of the single BN sheet. We notice that the \(\varepsilon''\) spectrum looks very similar to that of the previous LDA calculation [20], _albeit_ with the onset of the absorption and the peak positions at much higher photon energies due to the GW quasiparticle corrections. For optical excitations below 10 eV with in-plane polarization of the electric field, only transitions between the \(\pi\) and \(\pi^{*}\) bands are dipole allowed, and these transitions contribute most of the oscillator strength of the dielectric function. The onset energy of the absorption spectrum at the GW-IPA level corresponds to the direct transition at the \(K\) point, while the optical transitions near the \(M\) point contribute to the largest peak at \(\sim\)9.0 eV. In other words, the oscillator strength of the dielectric function stems primarily from the optical transitions between the two flat \(\pi\) and \(\pi^{*}\) bands along the high-symmetry \(K-M\) line.
Remarkably, when the excitonic effect is included at the GW+BSE level, two prominent absorption peaks [labelled A and B in Fig. 3(a)] appear within the quasiparticle band gap; they are due to the excitation of the first two bright excitons. Unlike bulk semiconductors such as CdS, in which the exciton peak usually appears as a shoulder on the absorption edge [10], the A and B exciton peaks in the single BN sheet are completely detached from the quasiparticle absorption edge due to their gigantic binding energies of 1.81 and 0.96 eV (Table 1), respectively. Note that the calculated large excitation energy of the A exciton agrees well with the measured optical band gap (see Table 1).
Figure 2: Quasiparticle band structure of the single BN sheet from our GW (solid red lines) and LDA (blue dashed lines) calculations. The valence band maximum is set at 0 eV and is located at the \(K\) point, indicated by the pink diamond. The conduction band minima of the GW and LDA band structures are denoted by the green star and orange square, respectively.
The large exciton binding energies are a consequence of the weak screening in 2D wide band gap materials such as the single BN sheet, as mentioned in the previous paragraph. Figure 4 shows the wave function of exciton A (\(|\psi(\mathbf{r}_{e},\mathbf{r}_{h})|^{2}\)). Clearly, the exciton is strongly localized, and the electron probability distribution beyond the nearest neighbor is lower than 30%. Consequently, the exciton envelope function of the A exciton is widespread in the BZ [37], resulting in the mixing of interband electron-hole excitations along the high-symmetry \(K-M\) line. This mixing strongly redistributes the oscillator strength of the spectrum [see Eq. (4)]. The in-gap exciton states A and B carry almost all the oscillator strength, and the absorption is almost completely suppressed above the quasiparticle band gap. The magnitude of the A exciton peak is more than three times larger than the largest quasiparticle absorption peak at \(\sim\)9.0 eV at the GW-IPA level.
Figure 3(b) displays the calculated shift current conductivity of the BN sheet. Clearly, the shift current conductivity spectra are rather similar to the corresponding absorption spectra. In particular, when the excitonic effect is included, two huge peaks occur within the quasiparticle band gap (red line). Comparing the shift current conductivity with the imaginary part of the dielectric function indicates that the two peaks are due to the A and B exciton excitations. The height of the maximal peak at 5.91 eV is nearly eight times larger than that of the shoulder at 8.15 eV from the GW-IPA calculation, i.e., the exciton excitations not only create two in-gap peaks but also greatly enhance the shift current. The significant increase in the shift current is primarily due to the in-gap A exciton excitation, and is mainly caused by the significant enhancement of the optical dipole matrix element [see Eqs. (4) and (10)] due to the strong overlap of the wave functions of the electron and hole within the electron-hole pair.
\begin{table}
\begin{tabular}{c c c c c c c c c} & \multicolumn{2}{c}{\(\Delta E\) (\(E_{g}\)) (eV)} & \(E_{abs}\) (eV) & \multicolumn{3}{c}{Band width (eV)} & \(\Omega\) (eV) & \(E_{b}\) (eV) \\ \cline{2-9} & \(K^{v}\to K^{c}\) & \(K^{v}\rightarrow\Gamma^{c}\) & & \(\pi\) Band & \(\sigma_{1}\) Band & \(\sigma_{2}\) Band & & Exciton A (B) \\ \hline LDA & 4.65 (4.65) & 4.77 & 4.52 & 5.45 & 5.80 & 7.65 &... &... \\ GW & 7.91 & 7.66 (7.66) & 7.72 & 5.93 & 6.41 & 8.20 &... &... \\ GW+BSE &... &... &... &... &... &... & 5.91 (6.76) & 1.81 (0.96) \\ Exp. &... &... &... & 5.8\({}^{a}\) & 6.5\({}^{a}\) & 8.2\({}^{a}\) & 6.05\({}^{b}\), 6.03\({}^{c}\), 6.1\({}^{d}\) &... \\ \end{tabular}
\end{table}
Table 1: Transition energies from the valence band maximum (\(v\)) to the conduction band minimum (\(c\)) (\(\Delta E\)), band gaps (\(E_{g}\)), band widths, the onset of the continuum optical absorption (\(E_{abs}\)), exciton excitation energies (i.e., the optical band gap) (\(\Omega\)), and exciton binding energies (\(E_{b}\)) of the single BN sheet. The available experimental data (Exp.) are also listed for comparison.
Figure 4: The electron probability distribution \(|\psi(\mathbf{r}_{e},\mathbf{r}_{h})|^{2}\) [see Eq. (2)] of the doubly degenerate A exciton in the single BN sheet with the hole position (\(\mathbf{r}_{h}\)) fixed at the N atom (the pink sphere).
Figure 3: (a) Imaginary part of the dielectric function and (b) shift current conductivity of the single BN sheet from both GW+BSE (solid red line) and GW+IPA (dashed blue line) calculations. The first two bright exciton peaks are labelled by A and B. The green vertical line indicates the onset of the continuum optical absorption (see Table 1).
### Electronic properties of BN nanotubes
Our LDA and GW calculations indicate that the zigzag BN-NTs considered here are direct band gap insulators with a \(\Gamma^{v}\rightarrow\Gamma^{c}\) transition, consistent with previous LDA calculations (see, e.g., Ref. [20]). The calculated band gaps of the zigzag BN-NTs are listed in Table 2 and also displayed in Fig. 5. Figure 5 shows that for small diameter BN-NTs, both the GW and LDA band gaps decrease dramatically as the diameter decreases. Interestingly, the GW correction to the LDA band gap (\(\Delta E_{g}\)) is almost independent of the diameter, being about 3.0 eV (Table 2 and Fig. 5). The lowering of the band gap relative to the BN sheet already occurs at the LDA level [20] and was attributed to the curvature-induced orbital rehybridization. The \(\pi^{*}\) and \(\sigma^{*}\) orbitals are orthogonal to each other in the flat BN sheet. However, such orthogonality does not hold in the BN-NTs, where local curvature exists. Consequently, the \(\pi^{*}\) and \(\sigma^{*}\) orbitals can hybridize and form a ring-like charge distribution, as shown in Fig. 6. The ring-like charge distribution can effectively reduce the ionicity and hence lowers the energies of the \(\pi^{*}\) bands [49; 50; 51], thereby reducing the quasiparticle band gaps of the BN-NTs. As the diameter increases, the effect of the curvature-induced orbital rehybridization decreases and the band gap converges to that of the single BN sheet.
### Absorption spectra of BN nanotubes
We show the calculated absorptive part (\(\varepsilon''\)) of the dielectric function of the considered BN-NTs and also the single BN sheet in Fig. 7(a). In the BN-NTs, optical transitions between the \(\pi\) and \(\pi^{*}\) bands dominate the absorption spectra. First, compared with the single BN sheet, the energies of the \(\pi^{*}\) bands of the BN-NTs are lowered owing to the curvature-induced orbital rehybridization, as discussed above. Consequently, the energies of the \(\varepsilon''\) continuum onset at the GW-IPA level decrease with decreasing diameter, as shown in Fig. 7(a) and Table 2. Second, contrary to the moderate diameter (\(D>\)10 Å) BN-NTs, where the \(\varepsilon''\) spectrum consists of a single distinct peak [20], the absorption spectra of the small diameter BN-NTs considered here feature multiple peaks at the GW-IPA level. For example, the \(\varepsilon''\) spectrum of BN-NT (5,0) possesses three distinct peaks at 7.4, 7.9, and 8.7 eV, respectively.
BN-NTs are 1D wide band gap insulators. Consequently, due to the much reduced dielectric screening in these 1D wide band gap materials, large exciton binding energies arise. For the small diameter BN-NT (5,0), in particular, the exciton binding energy \(E_{b}\) of peak A is 2.23 eV, which is larger than that (1.81 eV) of the 2D single BN sheet (Table 1) and also that (0.63 eV) of the typical 2D transition metal dichalcogenide semiconductor MoS\({}_{2}\) [54]. It is also much larger than that of 3D bulk \(h\)-BN (0.72 eV) [55] and 2H-MoS\({}_{2}\) (0.05 eV) [56]. Note that the exciton binding energy of the typical bulk semiconductor CdS is as small as 0.03 eV [10]. The large \(E_{b}\) results in the emergence of a huge in-gap absorption peak that is completely detached from the onset of the continuum \(\varepsilon''\) spectrum of the BN-NT. Most of the oscillator strength of the \(\varepsilon''\) spectra is carried by the A exciton state of the BN-NTs.
Figure 5: Quasiparticle band gaps of the zigzag BN-NTs vs the tube diameter from both LDA and GW calculations. Band gaps from the previous LDA calculation [20] are denoted by red open squares. The horizontal red and blue lines indicate the quasiparticle band gaps of the single BN sheet from the LDA and GW calculations, respectively.
Figure 6: Charge density distribution (in units of \(10^{-3}\) Å\({}^{-3}\)) of the states in the vicinity of the conduction band minimum of the zigzag \([(5,0),(6,0),(7,0),(8,0)]\) BN-NTs. B and N atoms are indicated by green and gray spheres, respectively.
Table 2 shows the excitation energies (\(\Omega\)) of exciton A in the BN-NTs. Our calculated \(\Omega\) converges to that of the single BN sheet and agrees well with the experimental values [17; 52]. We also compare our calculated \(\varepsilon''\) with the experimental one derived from electron energy-loss spectra. The experimental \(\varepsilon''\) of medium diameter multi-walled BN-NTs with small momentum transfer \(q=0.1\) Å\({}^{-1}\) is plotted in Fig. 7(a) (orange dotted line) [53]. The complex main peak at 5.5 eV in the experimental \(\varepsilon''\) spectrum may consist of exciton A peaks from different walls of the multi-walled BN-NT. The substantial broadening of this main peak could be a consequence of the finite exciton lifetime, and the shoulder from around 6.0 to 8.0 eV may be contributed by minor peaks other than the exciton A peak.
In the low-dimensional BN systems, exciton A is strongly localized, as shown in Fig. 4. The localized nature of exciton A suppresses the curvature effect on the exciton binding energy as the tube diameter increases, leading to the fast convergence of the exciton binding energy (see Table 2) [14]. The red shift of the A peak can be attributed to the curvature-induced orbital rehybridization, which lowers the energies of the \(\pi^{*}\) bands and reduces the onset of the continuum \(\varepsilon''\) at the GW-IPA level.
### Exciton shift current of BN nanotubes
In Fig. 7(b), the calculated shift current conductivity spectra of the BN-NTs are plotted. Figure 7 shows that the shift current spectra are very similar to the corresponding \(\varepsilon''\) spectra. This correlation is facilitated by the dipole matrix elements present in both the expression for \(\varepsilon''\) [Eq. (4)] and the expressions for the shift current [Eqs. (8) and (10)]. In particular, as for the \(\varepsilon''\) spectrum, the gigantic excitonic effect results in the occurrence of a huge in-gap shift current peak in the GW-BSE calculation [Fig. 7(b)]. The prominent shift current peak in the BN-NTs and the single BN sheet is due to the excitation of the A exciton. Furthermore, the excitonic effect substantially increases the largest shift current peak of all the BN-NTs by nearly a factor of three. This large enhancement of the shift current by the A exciton excitation is mainly due to the large enhancement of the dipole matrix element caused by the strong overlap of the wave functions of the electron and hole in the A exciton state.
The optical response of the BN-NTs in the low-energy region is dominated by \(\pi\)-\(\pi^{*}\) transitions. As mentioned before, the curvature-induced orbital rehybridization lowers the energy of the \(\pi^{*}\) bands of the BN-NTs. As a result, the onset of the shift current spectra at the GW-IPA level decreases with decreasing diameter, as shown in Fig. 7(b). Similar to the \(\varepsilon''\) spectra, the shift current spectra of the small diameter BN-NTs feature multiple peaks at the GW-IPA level.
Figure 7(b) shows that the shift current spectra of the \((n,0)\) BN-NTs from both _ab initio_ GW-IPA and GW-BSE calculations do not change sign as the tube diameter (or \(n\)) increases, i.e., the direction of the calculated shift current does not follow the simple rule of \(sgn(J_{\text{shift}})=\text{mod}(n,3)\) predicted by the previous tight-binding model calculations [18; 19]. To better understand this important finding, we compute the \(\mathbf{k}\)-resolved shift conductivity and shift vector, given by \(R_{\text{GW-IPA}}^{yyy}(\mathbf{k})=\text{Im}[\mathbf{x}_{mn}^{y}[\mathbf{x}_{nm}^{y}]_{;k_{y}}]/\left|\mathbf{x}_{mn}^{y}\right|^{2}\) [57], of the single BN sheet, as displayed in Fig. 8 for two incident photon energies of 8.0 eV (the optical absorption edge) and 9.0 eV (the peak position) at the GW-IPA level. Figure 8 indicates that the \(\mathbf{k}\)-resolved shift current and shift vector of the single BN sheet are positive in the whole BZ for both selected photon energies. At the absorption edge (8.0 eV), the shift current and shift vector are localized around the \(K\) and \(K^{\prime}\) points. As the energy increases to the peak position (9.0 eV), the weights of the shift current and shift vector extend toward the \(M\) point and eventually toward the \(K\)-\(\Gamma\) symmetry line. The 1D BZ of a BN-NT consists of a few discrete lines on the 2D BZ of the BN sheet [11], as displayed for the \((6,0)\) and \((8,0)\) BN-NTs in Fig. 8. Consequently, the direction of the shift current of the zigzag \((n,0)\) BN-NTs remains unchanged when the tube diameter [or the chirality \((n,0)\)] varies, contrary to the prediction of the tight-binding model calculations [18; 19].
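The zone-folding argument can be made concrete with a small sketch (ours; the sheet lattice constant \(a\simeq 2.5\) Å is an assumed value): the circumferential wave vector of an \((n,0)\) tube is quantized, so its 1D BZ corresponds to \(n\) parallel cutting lines on the 2D BZ of the sheet, and since the \(\mathbf{k}\)-resolved shift vector is positive everywhere, every cutting line samples contributions of the same sign regardless of \(n\).

```python
import numpy as np

def cutting_lines(n, a=2.5):
    """Quantized circumferential wave vectors k_perp = 2*pi*j/(n*a),
    j = 0..n-1, of an (n,0) zigzag tube: the positions of its cutting
    lines on the sheet BZ (the lines run along the tube-axis direction)."""
    return 2.0 * np.pi * np.arange(n) / (n * a)

print(cutting_lines(6))   # cf. the green (6,0) lines in Fig. 8
```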
\begin{table}
\begin{tabular}{c c c c c c c c} & \(D\) (Å) & \(E_{g}^{LDA}\) (eV) & \(E_{g}^{GW}\) (eV) & \(\Delta E_{g}^{GW-LDA}\) (eV) & \(E_{abs}\) (eV) & \(E_{b}\) (eV) & \(\Omega\) (eV) \\ \hline BN-NT & & & & & & & \\ (5,0) & 4.11 & 2.13 & 5.19 & 3.06 & 7.06 & 2.23 & 4.83 \\ (6,0) & 4.87 & 2.70 & 5.70 & 3.00 & 7.37 & 2.32 & 5.05 \\ (7,0) & 5.65 & 3.31 & 6.31 & 3.00 & 7.61 & 2.27 & 5.34 \\ (8,0) & 6.43 & 3.51 & 6.51 & 3.00 & 7.71 & 2.28 & 5.43 \\ Exp.\({}^{a}\) & \(<10\) &... &... &... &... &... & 5.51 \\ Exp.\({}^{b}\) & 15-30 &... &... &... &... &... & 5.8 \(\pm\) 0.2 \\ Exp.\({}^{c}\) & 50 \(\pm\) 10 &... &... &... &... &... & 5.82 \(\pm\) 0.01 \\ Exp.\({}^{c}\) & 600 \(\pm\) 100 &... &... &... &... &... & 5.90 \(\pm\) 0.01 \\ sheet & \(\infty\) & 4.65 & 7.66 & 3.01 & 7.72 & 1.81 & 5.91 \\ \end{tabular} \({}^{a}\)Reference [16]; \({}^{b}\)Reference [52]; \({}^{c}\)Reference [17]
\end{table}
Table 2: Diameter (\(D\)), band gap (\(E_{g}\)), the onset of the continuum optical absorption (\(E_{abs}\)), exciton excitation energy (\(\Omega\)), exciton binding energy (\(E_{b}\)), and the difference between the LDA and GW band gaps (\(\Delta E_{g}^{GW-LDA}\)) of the zigzag BN-NTs and also the single BN sheet (Table 1). For comparison, the measured optical band gaps (\(\Omega\)) of large diameter BN-NTs are also listed.
We notice that previous tight-binding calculations also predicted that the direction of the electric polarization of the zigzag BN-NTs would follow the same rule of \(sgn(J_{\rm shift})={\rm mod}(n,3)\) as the shift current [58; 18]. This is not surprising, since the electric polarization and shift current are closely connected [59]. However, a subsequent _ab initio_ calculation showed that the electric polarization of the zigzag BN-NTs grows monotonically as the tube diameter increases [60], i.e., the direction of the electric polarization does not depend on the chiral index (\(n\)). Therefore, both the previous [60] and present _ab initio_ calculations demonstrate the importance of full _ab initio_ calculations for the electric polarization and shift current in the BN-NTs.
Furthermore, a close examination of the simple tight-binding model Hamiltonian suggests that the erroneous prediction of the tight-binding model calculations [18; 19] stems from the fact that only the \(\pi\) bands of the single BN sheet, and hence only the transitions between the lowest two azimuthal subbands in the zigzag BN-NTs, were included, whereas the predominant exciton peak in BN-NTs is composed of a coherent superposition of transitions from several different subband pairs [15]. Consequently, the rehybridization effect induced by the curvature of small diameter BN-NTs is missing in the simple tight-binding method [18; 19]. Indeed, previous DFT calculations showed that the rehybridization effect plays a crucial role in the strong band gap renormalization of small diameter zigzag BN-NTs [20].
Let us now compare the magnitude of the excitonic photocurrents in the considered BN systems with the shift currents observed in well-known materials. First, the observed photocurrent in ferroelectric BaTiO\({}_{3}\) above the absorption edge under a light intensity \(I=0.5\) mW/cm\({}^{2}\) and sample width \(w=0.15\) cm is around \(5\times 10^{-13}\) A [61]. We find that under the same conditions, the A exciton shift current of a single BN sheet would reach \(5.0\times 10^{-12}\) A, and that from a thin film consisting of, e.g., a \((5,0)\) BN-NT array with an intertubular distance (\(d\)) of 3.28 Å would be as large as \(\sim\)10 mA. Second, the effective conductivity of the A exciton shift current in the considered BN systems (Fig. 7) is about two orders of magnitude larger than the excitonic shift conductivity (\(\sim 0.2\)\(\mu\)A/V\({}^{2}\)) observed in the semiconductor CdS [10], and also more than five times larger than the largest observed shift conductivity (\(\sim 10.0\)\(\mu\)A/V\({}^{2}\)) in ferroelectric SbSI [62].
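The arithmetic behind such estimates can be sketched as follows (our own back-of-the-envelope scaffold: the peak conductivity value is a placeholder to be read off Fig. 3(b), and normalization details entering the published numbers are not reproduced here):

```python
import numpy as np

eps0, c = 8.854e-12, 2.998e8       # vacuum permittivity, speed of light (SI)
I = 0.5e-3 * 1e4                   # 0.5 mW/cm^2 -> W/m^2
E2 = 2.0 * I / (eps0 * c)          # |E|^2 from I = (1/2) * eps0 * c * E^2
sigma = 100e-6                     # placeholder peak sigma^(2), in A/V^2
w, d = 0.15e-2, 3.28e-10           # sample width and sheet thickness (m)
J = 2.0 * sigma * E2               # shift current density, Eq. (7)
print(J * w * d)                   # current through the w x d cross section (A)
```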
Figure 7: (a) The imaginary (absorptive) part of the dielectric function and (b) shift current conductivity of the single-walled zigzag \([(5,0),(6,0),(7,0),(8,0)]\) BN-NTs and the single BN sheet from both the GW+BSE (solid red lines) and GW+IPA (dashed blue lines) calculations. The prominent peak, labelled A, is due to the first bright exciton (exciton A). Both the polarization direction of the light electric field [(a) and (b)] and the shift current (b) are parallel to the tube axis (\(y\)-axis) in the BN-NTs (BN sheet). For comparison, the experimental dielectric function of the multi-walled BN-NT [53] is also displayed in (a).
Figure 8: The \(\mathbf{k}\)-resolved shift vector (\(R\)) (a, b) and shift current conductivity (\(\sigma\)) (c, d) of the single BN sheet from the GW-IPA calculation. The green lines in (a) and (b) indicate the 1D BZ of the \((6,0)\) and \((8,0)\) BN-NTs, respectively.
## IV Conclusions
Using newly developed, computationally efficient algorithms, we have performed state-of-the-art _ab initio_ GW-BSE calculations to investigate the exciton shift current as well as the electronic and optical properties of the zigzag [\((5,0)\), \((6,0)\), \((7,0)\), \((8,0)\)] BN-NTs and also the single BN sheet. First of all, we find a giant in-gap peak in both the shift current and optical absorption spectra of all the studied BN systems due to the excitation of the A exciton. This excitonic peak is nearly three times higher than the continuum response due to the excitation of free electron-hole pairs (Fig. 7), and may be attributed to the gigantic enhancement of the optical dipole matrix element by the A exciton resonance. Second, our _ab initio_ calculations show that the exciton excitation energies and also the onset of the continuum spectra decrease significantly with decreasing diameter due to the curvature-induced orbital rehybridization in small diameter zigzag BN nanotubes. This orbital rehybridization reduces the effective ionicity through the formation of a ring-like charge distribution inside the BN-NTs (Fig. 6), thus lowering the energy of the \(\pi^{*}\) bands and hence the band gap. The quasiparticle GW correction to the band gap is almost independent of the tube diameter.
Third, we find that the direction of the shift current in the BN-NTs is independent of the tube chirality \((n,0)\) (or diameter), contrary to the simple rule of \(sgn(J_{\rm shift})={\rm mod}(n,3)\) reported by previous model Hamiltonian studies [18; 19]. Importantly, this implies that in a bundle of aligned zigzag BN-NTs, the contributions of the individual BN-NTs to the shift current would be additive rather than cancelling each other, as the simple rule would suggest [18; 19]. Finally, the effective exciton shift current conductivity is nearly ten times larger than the largest shift conductivity observed in ferroelectric semiconductors [62]. Furthermore, the strongly bound excitons with large binding energies in the considered BN systems would impede thermal dissociation into free electron-hole pairs. All these properties make the zigzag BN-NTs excellent candidates for experimentally studying the exciton shift current and also promising for applications in nanoscale optoelectronic devices.
###### Acknowledgements.
The authors gratefully acknowledge the support from the National Science and Technology Council and the National Center for Theoretical Sciences of The R.O.C. The authors also thank the National Center for High-performance Computing (NCHC) in Taiwan for providing computational and storage resources.
|
2310.01195 | Federated K-means Clustering | Federated learning is a technique that enables the use of distributed
datasets for machine learning purposes without requiring data to be pooled,
thereby better preserving privacy and ownership of the data. While supervised
FL research has grown substantially over the last years, unsupervised FL
methods remain scarce. This work introduces an algorithm which implements
K-means clustering in a federated manner, addressing the challenges of varying
number of clusters between centers, as well as convergence on less separable
datasets. | Swier Garst, Marcel Reinders | 2023-10-02T13:32:00Z | http://arxiv.org/abs/2310.01195v2 | # Federated K-Means clustering
###### Abstract
Federated learning is a technique that enables the use of distributed datasets for machine learning purposes without requiring data to be pooled, thereby better preserving privacy and ownership of the data. While supervised FL research has grown substantially over the last years, unsupervised FL methods remain scarce. This work introduces an algorithm which implements K-means clustering in a federated manner, addressing the challenges of varying number of clusters between centers, as well as convergence on less separable datasets.
Federated Learning, K-Means clustering, Distributed machine learning.
## 1 Introduction
Nowadays, a lot of data is being generated in a distributed fashion. Mobile phones and other personal devices such as smart watches enable the collection of massive amounts of data. If made accessible, this data could prove useful for improving the performance of the services provided by these devices. However, due to a growing concern about data privacy, more and more users of these devices are hesitant to share their data. Furthermore, regulations such as the General Data Protection Regulation (GDPR) prevent the collection of data of this kind in bulk. Federated learning (FL) [13] was introduced as a solution to this problem. In short, instead of pooling data to train a single model, instances of a model are shared with data owners (clients), which then train the model on their local data. These trained models are then sent back to the central server, which aggregates the results. Next, a new round begins with the server sending out the updated models. This cycle continues until convergence.
Over the past couple of years, research has shown FL to be a promising technique, reaching performances comparable to a central approach in which all data of the clients is pooled at a single location [9][14]. The vast majority of federated learning research has focused on the supervised learning paradigm. Little work has been done on unsupervised federated learning methods, even less so when specifically looking into clustering techniques [11][10]. One of these clustering techniques is k-means clustering [5]. In a federated learning setting, k-means clustering can be described as trying to find overarching cluster means over data which is distributed among different datasets (clients). Only a few papers discuss possible options for federated k-means clustering. Kumar et al. introduce a method which assumes the availability of some data at the central server to pretrain the model, which is not always feasible [8]. Hou et al. use secure multiparty computation and blockchain to share encrypted data on which k-means is performed [6]. However, in some cases (e.g. in the medical domain), even sharing encrypted data might not be acceptable.
Bringing k-means into the federated domain comes with a specific challenge: not all clients need to have the same number of clusters in their data. Therefore, they might also not have data from each cluster. Due to this possible heterogeneity, a way of matching clusters from different clients is required. Liu et al. introduced a method for federated k-means in a setting where each client only holds one sample [12]. Although this is highly valuable in the cross-device setting, it does not address the challenge of varying numbers of local clusters. Servetnyk et al. propose a dual averaging approach for k-means clustering, which does not seem to address the challenge of how to match clusters from different clients [15]. Dennis et al. propose a one-shot method by running a k-means clustering over the
cluster means found locally [4]. However, their experiments assume an equal number of local clusters in the available data, as well as highly separable clusters. Furthermore, the number of local clusters is an input parameter to their method, even though it is not always possible to determine this. Nevertheless, their choice to run a clustering centrally over the cluster means from the local datasets inspired us to do this in an iterative approach. Next to that, we introduce a method that deals with a variable number of local clusters. We show that both changes lead to improvements, especially in settings where the number of local clusters varies strongly between clients, or when clusters are less separable. Moreover, we show that in a 2-dimensional setting, a clustering of similar quality can be obtained compared to a conventional k-means clustering (on a pooled dataset).
## 2 Methods
Notation used throughout this section is found in table 1. The pseudocode for our proposed federated k-means algorithm (FKM) can be found in algorithm 1. The algorithm can be divided into two parts: an initialization step, in which we generate initial cluster means on each client using k-means++ initialization [1], and an iterative k-means step in which clients communicate their cluster means to the server, which aggregates these means into a 'global' set of means, which then gets redistributed to the clients for the next k-means iteration. See appendix A for background on k-means and k-means++.
\begin{table}
\begin{tabular}{c|c} symbol & description \\ \hline \(X_{i}\) & data on client \(i\) \\ \(K_{g}\) & global number of clusters \\ \(K_{i}\) & number of clusters on client \(i\) \\ \(C_{g}\) & global cluster means \\ \(C_{i}\) & cluster means of client \(i\) \\ \(M\) & total number (sum) of local clusters \\ \(S_{i}\) & number of samples for each cluster on client \(i\) \\ \(N\) & total number of clients \\ \end{tabular}
\end{table}
Table 1: notations used
**Determining the number of local clusters.**
While the global number of clusters is set (the main parameter \(k\) of the k-means procedure), it is not a given that each client has data for each of these clusters. In other words, the number of clusters can differ between clients, and is not necessarily equal to the number of clusters in the pooled data. In order to solve this problem, each client determines in each round which global clusters correspond to local data. Before a client applies a new k-means step locally, it assigns its data to the global cluster means it has received (line 14). Next, clients check whether there are empty clusters, i.e. cluster means which did not get any points assigned to them. If so, clients discard these empty clusters (line 15). The remaining (global) cluster means are then used as the initialization for the next local k-means step (line 16). This way, \(k\) can locally become smaller when running k-means on the clients.
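A minimal sketch of this client-side step (our own illustration in Python; names follow Table 1 where possible, and the privacy threshold \(p\) is the one introduced at the end of the next subsection):

```python
import numpy as np

def client_update(X_i, C_g, p=2):
    """One local FKM round: assign the client's data to the received
    global means C_g, drop empty clusters, and take a single k-means
    step. Clusters with fewer than p samples are withheld (see below)."""
    dists = np.linalg.norm(X_i[:, None, :] - C_g[None, :, :], axis=2)
    labels = dists.argmin(axis=1)                 # nearest global mean
    kept = np.unique(labels)                      # non-empty clusters only
    C_i = np.stack([X_i[labels == k].mean(axis=0) for k in kept])
    S_i = np.array([np.sum(labels == k) for k in kept])
    mask = S_i >= p                               # privacy threshold
    return C_i[mask], S_i[mask]
```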
**Cluster alignment**
After each client has calculated one iteration of k-means (not until convergence, to avoid local minima) on its local data (each with its own number of local clusters), it sends its cluster means as well as the number of samples per cluster back to the server. The server then concatenates all cluster means and aggregates them. It does so by running a k-means clustering on the received local means until convergence (using the global \(k\) parameter), aligning clusters from different clients with each other. This global k-means is weighted by the number of samples found per cluster, such that a cluster with many samples has a bigger impact on the aggregation step than a cluster with fewer samples. That is, we modify the k-means objective function (see appendix A for the original) into:
\[F_{km}=\sum_{j=0}^{M}\min_{C_{i}\in C_{g}}(S_{j}||C_{j}-C_{i}||^{2}) \tag{1}\]
where \(S_{j}\) is the number of samples corresponding to local cluster mean \(C_{j}\). Note that the _cluster means_ sent back by the clients are used at the server as the _samples_ for the k-means clustering. By performing the aggregation with a k-means clustering, we solve the cluster alignment problem, since similar clusters will be close to each other and thus be merged by the global k-means step.
Because the number of samples has to be reported to the central server, a privacy risk exists if a client finds a cluster containing only one sample. To prevent this, any clusters holding fewer than \(p\) samples (we used \(p=2\) throughout this work) are simply omitted from the list of means sent to the server.
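On the server side, the aggregation of Eq. (1) amounts to a sample-weighted k-means over the concatenated client means. A minimal sketch (ours, not the authors' code), using scikit-learn's `sample_weight` support to realize the weighting by \(S_{j}\):

```python
import numpy as np
from sklearn.cluster import KMeans

def server_aggregate(client_means, client_counts, K_g):
    """Run weighted k-means to convergence on all received local means:
    the local means act as samples, weighted by their sample counts."""
    C = np.vstack(client_means)                   # all M local cluster means
    S = np.concatenate(client_counts).astype(float)
    km = KMeans(n_clusters=K_g, n_init=10).fit(C, sample_weight=S)
    return km.cluster_centers_                    # the new global means C_g
```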
## 3 Results
We compared our federated k-means (FKM) with a k-means clustering executed centrally on all data, as well as with the one-shot method of Dennis et al. [4]. Our first set of experiments is on simulated data, such that the ground truth labels of the cluster centers are known. We therefore calculate the Adjusted Rand Index (ARI) for both the central and federated approaches with respect to the labelled sample data. Since there are no labels for the clustering in the FEMNIST experiment, the silhouette score was used instead. In some cases, we added an "informed" setting for Dennis et al., in which we set \(K_{l}\) by exhaustive search so as to achieve the highest ARI score. In all other cases, we run their method using \(K_{l}\) = \(K_{g}\), as the ARI score is only available when ground truth labels are known, which is not always the case.
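Since the ARI is invariant under permutations of cluster indices, no explicit matching of federated to ground-truth cluster labels is needed before scoring; a small usage sketch:

```python
from sklearn.metrics import adjusted_rand_score

# Identical partitions with swapped labels still score 1.0.
print(adjusted_rand_score([0, 0, 1, 1], [1, 1, 0, 0]))   # -> 1.0
```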
**Clients holding different parts of the data.**
In order to validate the FKM algorithm, a synthetic two-dimensional dataset was generated. The generation procedure is taken from Servetnyk et al. [15]. Sixteen cluster centers were chosen with an equal distance (here 5) from one another; see Fig. 1(a). Then, 50 data points were sampled around each cluster center using a normal distribution (with variance 1). This data was then distributed among four clients in the following way: first, each client is assigned a 'location' within the field (\(X_{1},X_{2}\in(-12.5,12.5)\)). From there, the probability \(P\) that a data point is assigned to a certain client scales inversely with the Euclidean distance \(d\) to that data point:
\[P=1-\exp\left(-\frac{\beta}{d}\right) \tag{2}\]
where \(\beta\) is a parameter which can be tuned to promote more or less heterogeneity in the data separation. Differing from [15], if a data point happens to be assigned to multiple clients, it instead gets assigned to one of them at random.
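A sketch of this generation and distribution procedure (our own reconstruction; the grid layout of the 16 centers, the uniformly drawn client locations, and the re-drawing of points left unassigned are assumptions that are not fully specified above):

```python
import numpy as np
rng = np.random.default_rng(0)

grid = np.arange(4) * 5.0 - 7.5                  # 4x4 grid with spacing 5
centers = np.array([(x, y) for x in grid for y in grid])
X = np.vstack([c + rng.normal(size=(50, 2)) for c in centers])

def distribute(X, n_clients, beta):
    """Assign points with P = 1 - exp(-beta/d); multiple takers are
    resolved at random, unassigned points are re-drawn until owned."""
    pos = rng.uniform(-12.5, 12.5, size=(n_clients, 2))  # client locations
    d = np.linalg.norm(X[:, None, :] - pos[None, :, :], axis=2)
    owner = np.full(len(X), -1)
    while (owner == -1).any():
        todo = np.flatnonzero(owner == -1)
        hits = rng.random(d[todo].shape) < 1.0 - np.exp(-beta / d[todo])
        for i, h in zip(todo, hits):
            if h.any():
                owner[i] = rng.choice(np.flatnonzero(h))
    return owner
```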
We wanted to explore the influence of data heterogeneity, i.e. a varying number of clusters per client. To do so, we generated three versions of this dataset, with \(\beta=0.1,1,10\). See Figure 1 c-e for the final distributions. Note that \(\beta\) only changes which points get assigned to which client, meaning that it does not influence the performance in the central case. Figure 1(b) shows that our method is able to attain performance similar to a centralized k-means clustering, while outperforming Dennis et al., regardless of the tuning of the \(K_{l}\) parameter. The performance of our FKM approach seems to be independent of \(\beta\) (in contrast to the method of Dennis et al.), meaning that our algorithm is robust to varying cluster counts between clients.
Figure 1: The regular synthetic datasets. (a) shows the original sampling of the regular synthetic dataset, with the defined cluster means (from which the data are generated using a normal distribution N(0,1)) in red. (b) shows the ARI results on all three datasets. (c) to (e) show how this dataset is distributed over the different clients using different values of \(\beta\). Different colors indicate the different clients.
**Increasing levels of noise.**
Next, we explored the effect of having noisier clusters. We recreated the regular synthetic dataset, but varied the standard deviation of the distribution from which samples are generated from 1 to 1.5 (the original dataset used 1). Figure 2 shows the effect. We generated these datasets twice, once with 50 points per cluster and once with 200 points per cluster.
Results on these datasets are shown in Figure 3. For both central and federated clustering, the ARI scores go down at higher noise levels. This is expected, as more points end up closer to a cluster they did not originally belong to, meaning that even if k-means finds the original cluster means perfectly, the label assignment will be off. Therefore, the relative difference between federated and central clustering is more important than the absolute ARI scores. Our method attains a similar average performance; however, the variance seems to increase compared to centralized clustering. Furthermore, for \(\beta=0.1\), the mean ARI decreases compared to central clustering at high noise levels, meaning that a setting with high noise as well as high cluster variability remains a hard challenge for our federated k-means algorithm.
Regardless, performance does increase significantly compared to the method of Dennis et al.. This can partly be attributed to our ability to iterate: Figures 3(d) to 3(f) show that, especially for noisier datasets, there is a large benefit in being able to iterate more often. The number of points per cluster does not seem to influence the ARI score significantly, see appendix B.
**High variability in number of local clusters.**
Next, we wanted to explore the effect of an even more variable local \(k\). We used the same data as generated for the regular synthetic dataset, but distributed it even more heterogeneously, such that each client only had data from 1, 4, 7, 10 or 16 clusters, respectively. See Fig. 4(a).
Figure 4(b) shows that our method attains a similar average performance compared to the central case, albeit with a larger variation. This is probably caused by differences in initializations. If the algorithm initializes in such a way that clients assign data to more clusters than are present in their data, the algorithm has a hard time correcting for that. Furthermore, it does not help that one client only has ten data points in total, meaning it initializes ten clusters of size one, none of which are sent to the central server due to the privacy threshold. Regardless, our method does outperform the algorithm from Dennis et al.. This is likely due to our algorithm's ability to change the value of \(k\) for its local k-means step between clients.
Figure 2: Some of the data distributions of the simulated datasets with increasing levels of noise (columns), using 50 or 200 points per cluster (rows).
**Clustering higher dimensional real data.**
So far, all our experiments have been done on two-dimensional, simulated data. For many use cases, however, data has a much higher dimensionality. In order to determine performance on a higher dimensional dataset, the Federated Extended MNIST (FEMNIST) dataset from LEAF [2], which has a dimensionality of 784, was used; it separates the original Extended MNIST [3] handwritten numbers and letters based on the person who wrote them. Only 10 clients were used from the original FEMNIST, as this drastically sped up the experiments while keeping enough data for a meaningful assessment. This leaves approximately 110 data points per client; see appendix C for the distribution. We set \(k=60\), in line with earlier experiments from Dennis et al. Figure 5(a) shows that our method outperforms both settings of the method from Dennis et al.. There is still a difference with a central clustering, however. This could be due to the relatively small number of samples per client compared to the number of dimensions, decreasing the quality of the local clusters.
Figure 4: Assessment of the method on data with a large variability of local clusters per client. (a) shows the distribution per client, (b) the ARI results for different methods.
Figure 3: Clustering results on the synthetic dataset when using different levels of noise for different values of \(\beta\). (a) to (c) show the final ARI scores for beta = 0.1, 1 and 10, respectively. (d) to (f) show how the ARI score for FKM converges over time, each corresponding to the figure above it.
The FEMNIST experiments use the silhouette score as their performance metric. The silhouette score involves calculating distances from each point in a dataset to every other point. This means that, to calculate a 'global' silhouette score, distances between data points from different clients need to be determined, something that cannot be done in a straightforward federated manner. In our case, the simulated federated environment made it possible to calculate the silhouette score for evaluation purposes. In a real-life setting, the simplified silhouette score [7] could be a suitable alternative, as it only calculates distances between data points and cluster means, something which can be done on each client separately.
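A sketch of the per-client computation (ours): with nearest-mean assignment, a point's own-cluster term is its smallest distance to any mean and the neighbor term is the second smallest, so each client can score its own data against the global means without any cross-client communication.

```python
import numpy as np

def simplified_silhouette(X, means):
    """Simplified silhouette using distances to cluster means only
    (assumes at least two means), hence computable locally per client."""
    d = np.sort(np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2), axis=1)
    a, b = d[:, 0], d[:, 1]           # own mean vs. closest other mean
    return float(np.mean((b - a) / np.maximum(a, b)))
```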
We compare the simplified silhouette score with the silhouette score from the same experiments in Figure 5(b). There seems to be a high correlation between the two scores for a given method, which is in line with previous work [16].
Figure 5: Results on (a subset of) FEMNIST. (a) shows the silhouette score, (b) the simplified silhouette score.
## 4 Discussion and conclusion
This work describes the implementation and validation of a federated k-means clustering algorithm (FKM), enabling clustering over multiple datasets without sharing the underlying data. Our results show performance close to a central method in which all data is brought to a single location. There are still some scenarios in which our method shows larger variability in performance compared to a central clustering, however. These are mostly the more difficult scenarios, such as when there is an extreme distribution in the number of clusters present on each client, or when the data has a high dimensionality, as in the FEMNIST experiment. Assessment of our method on more heterogeneous and 'real life' datasets is therefore an important direction for future work. Nevertheless, FKM has shown to be a promising method for finding similarities among distributed datasets without the need to share any data.
|
2301.08196 | Illuminating trap density trends in amorphous oxide semiconductors with
ultrabroadband photoconduction | Under varying growth and device processing conditions, ultrabroadband
photoconduction (UBPC) reveals strongly evolving trends in the defect density
of states (DoS) for amorphous oxide semiconductor thin-film transistors (TFTs).
Spanning the wide bandgap of amorphous InGaZnO$_x$ (a-IGZO), UBPC identifies
seven oxygen-deep donor vacancy peaks that are independently confirmed by
energetically matching to photoluminescence emission peaks. The sub-gap DoS
from 15 different types of a-IGZO TFTs all yield similar DoS, except only
back-channel etch TFTs can have a deep acceptor peak seen at 2.2 eV below the
conduction band mobility edge. This deep acceptor is likely a zinc vacancy,
evidenced by trap density which becomes 5-6x larger when TFT wet-etch methods
are employed. Certain DoS peaks are strongly enhanced for TFTs with active
channel processing damage caused by plasma exposure. While Ar implantation and
He plasma processing damage are similar, Ar plasma yields more disorder showing
a 2x larger valence-band Urbach energy and two orders of magnitude increase in
the deep oxygen vacancy trap density. Changing the growth conditions of a-IGZO
also impacts the DoS, with zinc-rich TFTs showing much poorer electrical
performance compared to 1:1:1 molar ratio a-IGZO TFTs owing to the former
having a ~10x larger oxygen vacancy trap density. Finally, hydrogen is found to
behave as a donor in amorphous indium tin gallium zinc oxide TFTs. | George W. Mattson, Kyle T. Vogt, John F. Wager, Matt W. Graham | 2023-01-19T17:47:15Z | http://arxiv.org/abs/2301.08196v1 | Illuminating trap density trends in amorphous oxide semiconductors with ultrabroadband photoconduction
###### Abstract
Under varying growth and device processing conditions, ultrabroadband photoconduction (UBPC) reveals strongly evolving trends in the defect density of states (DoS) for amorphous oxide semiconductor thin-film transistors (TFTs). Spanning the wide bandgap of amorphous InGaZnO\({}_{x}\) (a-IGZO), UBPC identifies seven oxygen deep donor vacancy peaks that are independently confirmed by energetically matching to photoluminescence emission peaks. The sub-gap DoS from 15 different types of a-IGZO TFTs all yield similar DoS, except only back-channel etch TFTs can have a deep acceptor peak seen at 2.2 eV below the conduction band mobility edge. This deep acceptor is likely a zinc vacancy, evidenced by trap density which becomes 5-6\(\times\) larger when TFT wet-etch methods are employed. Certain DoS peaks are strongly enhanced for TFTs with active channel processing damage caused from plasma exposure. While Ar implantation and He plasma processing damage are similar, Ar plasma yields more disorder showing a \(\sim 2\times\) larger valence-band Urbach energy, and two orders of magnitude increase in the deep oxygen vacancy trap density. Changing the growth conditions of a-IGZO also impacts the DoS, with zinc-rich TFTs showing much poorer electrical performance compared to 1:1:1 molar ratio a-IGZO TFTs owing to the former having a \(\sim\)10\(\times\) larger oxygen vacancy trap density. Finally, hydrogen is found to behave as a donor in amorphous indium tin gallium zinc oxide TFTs.
amorphous IGZO, thin-film transistor, density of states, ultrabroadband photoconduction
## I Introduction
Amorphous oxide semiconductors (AOS) such as amorphous indium gallium zinc oxide (a-IGZO) have achieved widespread adoption as the active channel material in optical display thin-film transistors (TFTs) because of their high mobility, low processing cost, and high on-off current ratio.[1; 2] a-IGZO is a wide bandgap semiconductor material with \(\mathrm{E_{g}}\) ranging from 3.1 to 3.5 eV. Owing to its amorphous structure, a-IGZO has a large concentration of sub-gap vacancy sites that serve as the dominant electron donation mechanism for its n-type TFT operation.[1; 3] The subgap states in a-IGZO also act as electron traps that impact device performance by introducing transfer curve hysteresis and bias illumination stressing.[4; 5; 6; 7; 8; 9] Fabrication processes for AOS TFTs have a marked effect on the overall characteristics of the resulting devices, which is reflected in the composition of the subgap states.
Energetically, these trap states span different ranges of the subgap depending on their local charge environment and consequent trapping behavior. Multiple configurations of donor-like oxygen vacancies (\(\mathrm{V_{O}}\)) dominate the shallow and midgap regions.[10; 11; 12; 13] Additionally, both acceptor-like zinc vacancies (\(\mathrm{V_{Zn}}\))[6] and an OH-related state (\(\mathrm{[O_{O}^{2}\mathrm{-H}^{+}]^{-}}\))[14] have been documented; these states are convolved with an exponentially decaying Urbach [15] tail reflecting O 2p disorder in the amorphous matrix.[16]
The donor-like \(\mathrm{V_{O}}\) states have received considerable attention from researchers over the years.[10; 11; 12; 13]\(\mathrm{V_{O}}\) are the dominant donor-like trap states in AOS materials. In a-IGZO, this electron donation pins the Fermi energy (\(\mathrm{E_{F}}\)) near the conduction band minimum (CBM), making these materials n-type semiconductors. While electron donation from thermally depopulated shallow donor \(\mathrm{V_{O}}\) states typically contributes the n-type carriers necessary for AOS enhancement-mode operation, \(\mathrm{V_{O}}\) states have been variously linked to effects such as hysteresis[17] and persistent photocurrents[18]. Process tuning of \(\mathrm{V_{O}}\) trap states to amplify AOS TFT electronic performance while avoiding deleterious stability or leakage phenomena remains an ongoing research challenge.
This work reveals trends in how the subgap density of states (DoS) in AOS TFTs evolves across different processing methods, TFT architectures, and channel compositions. In section III, we employ the ultrabroadband photoconduction (UBPC) method to reveal systematic trends in the subgap DoS. Section III is broken down into five sub-sections that explore the DoS of AOS materials in the context of: (A) comparison of ultrabroadband optical vs. UBPC photoconductive methods of defect state identification, (B) etch process-induced vacancy state formation in back-channel etch vs. top-gate a-IGZO TFTs, (C) plasma and ion implantation treated a-IGZO top-gate TFTs, (D) a-IGZO TFTs with non-stoichiometric active channel compositions, and (E) hydrogen incorporation into amorphous indium tin gallium zinc oxide (a-ITGZO). Below, we summarize the motivation for each study.
Section III.A compares the subgap defect peaks identified by UBPC in AOS TFTs by matching the peaks obtained to those from more conventional optical methods, including photoluminescence (PL). AOS TFT device electrical metrics
have improved over the last two decades. The optical response at subgap defect energies in AOS materials is often \(\sim 10^{6}\) times smaller than at the bandgap. While x-ray and ultraviolet photoelectron spectroscopy (XPS, UPS) have become popular tools for AOS characterization, they are not capable of achieving the necessary resolution to reliably observe the subgap trap states.[19; 20] The recently developed UBPC method has the potential to become a standard experimental technique to quantify the subgap trap state density analytically.
Section III.B statistically compares the DoS trends for back channel etch (BCE) vs. top-gate (TG) a-IGZO TFTs. Fabrication of BCE AOS TFTs is convenient for TFT arrays in display backplane applications.[21] However, multiple researchers have observed that the etch process traditionally employed in fabricating BCE a-IGZO TFTs results in device degradation,[21] ostensibly due to the creation of metal vacancy states which are introduced via the etchant.[22; 23; 24]
Section III.C compares the change in DoS of a-IGZO TFTs after plasma and ion implantation methods are applied to the full active channel. Extensive prior research has been performed on plasma[25; 26; 27; 28; 29; 30; 31; 32; 33; 34] and ion implantation[35] treatment of a-IGZO TFTs. Such plasma treatments have a variety of purposes, ranging from defect creation[25; 31] to defect passivation[36; 26; 37] to use as a dry etchant for the source-drain electrode metal.[38; 39] In a-IGZO, such plasma methods commonly increase the TFT conductivity by up to \(\sim 50\times\), which is desirable for improving electrical contacts.[25; 27; 31] After plasma treatments, the UBPC method will be used to understand how the oxygen vacancy deep donor trap density is correlated with the conductivity enhancement. The plasmas selected for these treatments vary greatly depending on the intended application: oxygen[27; 36] plasmas can passivate V\({}_{\text{O}}\) defects, while other plasmas can deplete the device CB via either hydrogen[37; 26; 34] incorporation or the formation of V\({}_{\text{O}}\) states (Ar, He[31] plasmas); mixed Ar-O[28; 29; 33] plasmas have also been explored.
In Section III.D, AOS growth conditions are changed by comparing the DoS of a zinc-rich growth to that of '111' (1:1:1 molar % In\({}_{2}\)O\({}_{3}\)/Ga\({}_{2}\)O\({}_{3}\)/ZnO ratio) a-IGZO. Exploring non-stoichiometric compositions of a-IGZO (with greater or lesser relative concentrations of the In, Ga, and/or Zn metal constituents compared to 111 a-IGZO) is an active area of research for improving upon a-IGZO TFT characteristics such as the field effect mobility.[40; 41; 42; 43; 44; 45] These efforts are sometimes linked to the deposition of bilayer[46; 47; 48; 49; 50] (or trilayer)[51; 52; 53] active channels. These bilayer TFTs often contain a stoichiometric or near-stoichiometric stability layer[43] together with a boost layer (richer in certain constituent elements such as In[45; 49], doped with metals such as Ti[54], or composed of a conductive material such as ITO[55]) intended to enhance the TFT electrical performance.
Lastly, Section III.E discusses hydrogen incorporation into indium tin gallium zinc oxide (a-ITGZO) TFTs. a-ITGZO has been studied in recent years to reduce the reliance of AOS TFTs on the scarce indium constituent without the adverse device mobility effects that might be expected from reducing or eliminating the molar proportion of indium.[56; 57; 58] While a-ITGZO TFTs exhibit promising carrier mobility metrics, they also exhibit high susceptibility to positive bias stressing (PBS)-related degradation effects attributed to subgap trap states.[57] Hydrogen can be an abundant defect in AOS materials[59], but the electrically active nature of this large interstitial defect is more controversial. Recent work suggests this state results from its interaction with oxygen states located near the valence band mobility (VBM) edge.[14] Although hydrogen has a donor effect on a-IGZO, its negative-U nature means that the hydrogen electron donation precedes its incorporation into a-IGZO.[60; 14; 61] Specifically for a-IGZO TFTs, the hydrogen complex state \(\left[\text{O}_{\text{O}}^{2-}\text{H}^{+}\right]^{1-}\) has been observed centered \(\sim 0.4\) eV below the VBM edge.[14; 16] Section III.E extends prior hydrogen studies to explore other AOS TFT materials.
## II Experimental methods
### Amorphous oxide TFT device characterization
Trends in the subgap DoS are observed for a diverse selection of AOS back channel etch (BCE) and top-gate a-IGZO TFTs discussed in each sub-section. ITO/a-IGZO co-sputtered (a-ITGZO) TFTs possessing different hydrogen concentrations are also fabricated. The TFTs possessing different hydrogen concentrations were synthesized by subjecting the devices to varying annealing time intervals to cause hydrogen migration from an H-rich SiN\({}_{y}\) passivation layer adjacent to the active channel.[14] The photoluminescence (PL) emission spectra of a-IGZO thin films and devices were taken using a Horiba Nanolog fluorimeter, using photomultiplier detection for the visible range and liquid nitrogen-cooled InGaAs detection over the 0.7 to 1.5 eV range. All PL data shown use a 3.8 eV excitation source. The PL spectral shape on active TFTs was confirmed using an Ocean Optics spectrometer under diffraction-limited illumination. The corresponding ultrabroadband absorption spectrum for Tauc bandgap analysis was taken using a Cary UV-VIS-IR spectrometer.
### Sub-gap DoS by Ultrabroadband Photoconduction (UBPC) microscopy
In this work, the on-chip spectroscopic technique called ultrabroadband photoconduction (UBPC)[6] is employed to obtain the experimental subgap trap density and DoS for AOS TFTs. The essential elements of the UBPC setup are depicted in Fig. 1(a). A laser source tunable over the a-IGZO subgap energy range is focused onto the TFT active channel using a piezo scanning mirror within a 4-f confocal scanning geometry which couples the source into
an Olympus BX61W microscope; all-reflective optics are employed throughout the line to reduce spectral aberrations. The laser source was a tunable Ti:Sapphire laser system (Coherent Chameleon Ultra II) coupled to an APE Compact optical parametric oscillator, which provides a spectral range from 0.2 to 3.7 eV, enabling probing of both near-conduction and near-valence band tail states. A homebuilt difference frequency generation line was employed to achieve laser energies below 0.2 eV for probing of near-conduction band states reported in Section III.5. Select measurements were also verified using a Fianium supercontinuum white light laser source coupled to a laser line tunable filter (Photon Etc.), which yields high-throughput characterization over the 0.7 to 3.1 eV range. Both setups are designed to maintain Poynting vector illumination stability on the TFT during the spectral scan.
A 52X Cassegrain reflective objective is used to focus the laser onto a diffraction-limited spot centered on the TFT. The a-IGZO TFT active channel is operated under a forward bias of \(\mathrm{V_{ON}+5}\) V; the TFT is electrically connected to the measurement interface via a homebuilt electrical probe setup consisting of RF source-drain electrical probes. At each illumination wavelength, the photoconduction (PC) signal is retrieved from the noise using a current pre-amplifier and a lock-in amplifier (Zurich HFLI). To eliminate the contribution of dark current background and hysteretic drift, we modulate the illumination source using an optical chopper at a frequency of 585 Hz and reference the lock-in amplifier to this frequency. Simultaneous scanning photoconduction and back reflection maps are taken at each illumination energy using the 4-f confocal scanning geometry to ensure uniformity of the illumination spot over the energy measurement range and across successive measurements.
Figure 1(c) shows a scanning back reflection map (_left_) and scanning photoconduction maps (SPCM, all other panels) for a back channel etch a-IGZO TFT upon photoexcitation at \(h\nu=\)1.8 eV. The PC images are colorized via a heatmap corresponding to the relative amplitude of the integrated trap density that is optically excited. Line cuts as a function of gate voltage (\(\mathrm{V_{G}}\)) ranging from depletion-mode to enhancement-mode operating conditions are shown in Fig. 1(b). As the TFT gate voltage is scanned from depletion-mode (\(\mathrm{V_{G}<0}\) V) to enhancement-mode (\(\mathrm{V_{G}>0}\) V) operation, the photoconduction amplitude becomes orders of magnitude larger and becomes uniformly distributed across the active channel length. In depletion-mode operation (\(\mathrm{V_{G}=-4}\) V), the device PC response is localized to the immediate area of the drain electrode, indicative of a Schottky-like barrier at the interface. The amplitude of the spatially-mapped UBPC response can be directly mapped to generate the sub-gap DoS by spectrally scanning the energy of the power-normalized excitation laser from 0.3 to 4 eV.
The photoconduction signal at each given photon energy (\(\mathrm{h\nu}\)) is directly proportional to the total integrated subgap trap density. Specifically, the integrated trap density \(\mathrm{N_{TOT}(h\nu)}\) between the conduction band mobility edge and \(h\nu\) is given by:
\[\mathrm{N_{TOT}(h\nu)=\left(\frac{qN_{o}C_{I}}{tm}\right)\frac{I_{PC}}{N_{ph}}} \tag{1}\]
where \(\mathrm{I_{PC}/N_{ph}}\) represents the PC signal (\(\mathrm{I_{PC}}\)) which has been photon normalized via through-chip laser power correction at each photon energy, \(\mathrm{h\nu}\). The bracketed quantity is a scaling constant composed of the electronic charge (q), the gate insulator capacitance density (\(\mathrm{C_{I}}\)), the slope (m) of the non-illuminated \(\mathrm{I_{D}-V_{G}}\) transfer curve within \(\pm 0.5\) V of the constant \(\mathrm{V_{ON}+5}\) V forward operating gate bias (taken immediately after each PC measurement), and the accumulation layer thickness (t). Finally, \(\mathrm{N_{o}}\) is a constant calibration term obtained by finding the saturation photon flux, corresponding to the maximum photon flux that yields a detectable change in the PC signal for illumination just below the VB mobility edge. After directly mapping the total integrated trap density as a function of photon energy from the raw observable PC signal, we take its derivative to obtain the experimental density of states (DoS): \(\mathrm{DoS(h\nu)=\frac{dN_{TOT}}{d(h\nu)}}\). Throughout this work, we refer to the experimental subgap \(\mathrm{DoS(E-E_{C})}\), which is equivalent to \(\mathrm{DoS(-h\nu)}\).
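To make the reduction pipeline concrete, here is a minimal sketch of the Eq. (1) processing chain; the calibration constants (t, m, \(\mathrm{N_{o}}\)) and the synthetic signal below are placeholder assumptions for illustration, not values from this experiment.

```python
import numpy as np

# Placeholder calibration constants (assumed for illustration only).
q   = 1.602e-19   # electronic charge (C)
C_I = 9.7e-9      # gate insulator capacitance density (F/cm^2)
t   = 5e-7        # accumulation layer thickness (cm), assumed
m   = 1e-6        # dark I_D-V_G slope near the operating bias (A/V), assumed
N_o = 1.0         # saturation-flux calibration term, assumed

hv = np.linspace(0.3, 3.2, 300)               # photon energy grid (eV)
Ipc_over_Nph = np.exp((hv - 3.2) / 0.1)       # stand-in photon-normalized PC signal

N_tot = (q * N_o * C_I / (t * m)) * Ipc_over_Nph   # integrated trap density, Eq. (1)
DoS   = np.gradient(N_tot, hv)                     # DoS(hv) = dN_TOT/d(hv)
# Plot DoS against E - E_C = -hv to match the convention used in the text.
```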
Figure 1: (**a**) The UBPC method illustrated uses tunable lasers and an all-reflective 4\(f\) confocal scanning microscope to measure the DoS for the AOS TFTs to within 0.3 eV of the CB mobility edge. (**b**) Shown for \(h\nu=1.5\) eV, UBPC microscopy spatially-resolves the TFT channel length PC-response, \(\mathrm{I_{PC}}\). Gate voltages resolve ’turn-on’ change from depletion-mode (\(\mathrm{V_{G}<0}\) V) to enhancement-mode (\(\mathrm{V_{G}>0}\) V) operation. (**c**) Scanning-PC microscopy spatial maps corresponding to (_red arrow_) linecuts shown in b with back-reflection map of a BCE a-IGZO TFT on right.
## III Results and Analysis
### a-IGZO DoS by UBPC vs. ultrabroadband optical methods
The predictive trends of the UBPC DoS method are independently verified by overlaying prominent DoS peaks with the a-IGZO photoluminescence emission spectrum collected over an ultrabroadband spectral range of 0.7 to 4 eV. Figure 2(a) plots Tauc[62] absorption (\((\alpha h\nu)^{1/2}\)) and photoluminescence (PL) emission spectra. The estimated optical bandgap (E\({}_{\rm g}\)) for a-IGZO from the Tauc plot is \(3.17\pm 0.03\) eV. PL reveals two strong subgap peaks at \(\sim-1.4\) and \(\sim-1.7\) eV, and two weak subgap peaks at \(\sim-0.8\) and \(\sim-1.1\) eV. The apparent drop-off in emission at \(\sim-3.5\) eV is an artifact associated with a cut-off filter used to eliminate higher harmonics in the monochromator.
The four subgap PL peaks (_red_) of Fig. 2(a), as obtained from a 100 nm a-IGZO thin film, are plotted on an expanded energy scale in Fig. 2(b) and are shown to be correlated to photoconduction (PC) derivative (\(\mu\)AW\({}^{-1}\)eV\({}^{-1}\)) spectrum peaks (_black_), as obtained from a BCE a-IGZO TFT. As evident from Fig. 2(b), the match between PL and PC derivative peaks is quite striking. Three other PC derivative spectral features are indicated in Fig. 2(b) at \(\sim-0.3\), \(\sim-2.2\), and \(\sim-2.8\) eV.
Atomic identification of the PL and PC derivative peaks of Fig. 2(b) is presented in Fig. 2(c) and summarized in Table 1. In Fig. 2(c), raw UBPC data (_black_ closed circles) are simulated (_black_ solid line) by convolving a series of Gaussian subgap defect peaks with an exponential valence band Urbach[15] tail. The UBPC spectrum indicated in Fig. 2(c) corresponds to the BCE a-IGZO TFT density of states parameters collected in Table 1. UBPC density of states parameters for a top-gate (TG) a-IGZO TFT are also included in Table 1, and are expected to be representative of the 100 nm a-IGZO thin film used for PL assessment, as both a-IGZO films were prepared in a similar manner.
Several aspects of Fig. 2(c) are notable. First, the four PL peaks (_red_) of Fig. 2(b) are ascribed to oxygen vacancies O-2, O-3, O-4, and O-5 (see Table 1). Second, the strong peak occurring at \(-2.2\) eV in both the PC derivative and UBPC spectra is attributed to a zinc vacancy acceptor, Zn-8. A zinc vacancy has previously been identified as the only cation vacancy energetically favored to exist in a-IGZO.[6] Also, recently reported electron spin resonance measurements confirm the existence of a zinc vacancy in a-IGZO at concentrations up to about an order of magnitude greater than estimated peak oxygen vacancy concentrations.[63] Note the absence of a \(-2.2\) eV zinc vacancy peak in the PL spectrum of Fig. 2(b), consistent with the low concentration of this peak in Table 1 for a TG a-IGZO TFT. Third, the PC derivative and UBPC spectra show evidence of the O-1 oxygen vacancy peak at \(-0.3\) eV. Fourth, the PC derivative curve shows evidence of an \(\left[\mathrm{O}_{\mathrm{O}}^{2-}\mathrm{H}^{+}\right]^{1-}\) (or OH\({}^{-}\)) defect complex state at \(-2.8\) eV, while this state is not observed in the UBPC spectrum, presumably because it is obscured by the valence band Urbach tail. Simulation of the UBPC spectrum reveals a valence band Urbach energy, E\({}_{\mathrm{U}}=110\) meV.
Figure 2: _Ultrabroadband PL emission and photoconduction spectra, both correlated to a-IGZO subgap density of states peaks._ (**a**) Tauc absorption (_black_) and photoluminescence (PL, _red_) emission spectra. (_Inset_) PL emission is due to excited conduction band electrons recombining into deep trap states. (**b**) a-IGZO thin film PL emission spectrum (_red_) is plotted on the same axis as the energy derivative of photoconduction (PC) spectrum of a back channel etch (BCE) a-IGZO TFT. Dotted lines highlight correspondence between PL and UBPC peaks. (**c**) UBPC density of states for a BCE a-IGZO TFT.
\begin{table}
\begin{tabular}{c|c c|c c|c c} Defect & \multicolumn{2}{c|}{Peak Energy (eV)} & \multicolumn{2}{c|}{Peak DoS (\(\times 10^{15}\)cm\({}^{-3}\)eV\({}^{-1}\))} & \multicolumn{2}{c}{FWHM (meV)} \\ \hline & BCE & TG & BCE & TG & BCE & TG \\ \hline O-1 & \(-0.3\) & \(-0.3\) & \(2.5\) & \(1.3\) & \(80\) & \(80\) \\ O-2 & \(-0.85\) & \(-0.75\) & \(11\) & \(8.8\) & \(70\) & \(70\) \\ O-3 & \(-1.2\) & \(-1.2\) & \(24\) & \(9.2\) & \(80\) & \(80\) \\ O-4 & \(-1.35\) & \(-1.45\) & \(37\) & \(30\) & \(110\) & \(80\) \\ O-5 & \(-1.8\) & \(-1.8\) & \(23\) & \(27\) & \(70\) & \(70\) \\ O-6 & \(-2.0\) & \(-2.0\) & \(4\) & \(12\) & \(70\) & \(70\) \\ O-7 & \(-2.4\) & \(-2.4\) & \(50\) & \(67\) & \(50\) & \(80\) \\ Zn-8 & \(-2.2\) & \(-2.2\) & \(\mathbf{155}\) & \(\mathbf{0.1}\) & \(130\) & \(130\) \\ \([\mathrm{O}_{\mathrm{O}}^{2-}\mathrm{H}^{+}]^{1-}\) & \(-2.8\) & \(-2.8\) & \(20\) & \(40\) & \(320\) & \(320\) \\ \end{tabular}
\end{table}
Table 1: Extracted parameters from UBPC measurements on the BCE and TG a-IGZO TFTs plotted in Fig. 3a. The sub-gap defect peak energies (\(\mathrm{E-E_{C}}\)), peak densities of states (DoS), and peak full-widths-at-half-maximum (FWHM) are similar, with the exception of the Zn-8 deep acceptor row (_in bold_).
Figure 3: _Density of states (DoS) trends for a top-gate (TG) vs. a back channel etch (BCE) a-IGZO TFT._**(a)** (_Upper_) UBPC comparison of a TG and a BCE a-IGZO TFT. (_Lower_) Simulated Gaussian subgap state peaks corresponding to the UBPC experimental data. **(b)** Linear DoS scaling highlights the Zn-8 zinc vacancy peak (_red_) for different BCE processing conditions: _1_: Mo S/D wet etch, _2_: Ti/Cu S/D wet etch, _3_: Ti/Cu S/D wet/dry etch, and _4_: Ti/Cu S/D low damage etch. **(c)** Drain current-gate voltage (\(\mathrm{I_{D}-V_{G}}\)) transfer curves for different BCE processing conditions.
### Top-gate (TG) vs. Back Channel Etch (BCE) a-IGZO TFT Trends
Figure 3(a, _upper_) compares UBPC trends (_circles_: experimental data, _solid curves_: simulation) for a TG a-IGZO TFT (_blue, solid_ circles) to that of a BCE a-IGZO TFT (_green_, _solid_ circles). Figure 3(a, _lower_) plots the UBPC experimental data of a TG a-IGZO TFT (solid circles) with a simulation of the UBPC spectra based on Gaussian peaks convolved with an exponentially decaying valence band Urbach tail. Simulated subgap peaks are enumerated according to species (V\({}_{\rm O}\) donor, V\({}_{\rm Zn}\) acceptor, or [O\({}_{\rm O}^{2-}\)H\({}^{+}\)]\({}^{1-}\) donor) and by peak energetic location (from CB \(\sim 0\) eV to VB \(\sim-3.2\) eV). The oxygen vacancy (_blue_) and [O\({}_{\rm O}^{2-}\)H\({}^{+}\)]\({}^{1-}\) (_orange_) simulated peaks plotted in Fig. 3(a) correspond to simulation fitted to a TG UBPC spectrum, while the V\({}_{\rm Zn}\) peak (_red_) corresponds to simulation fitted to a BCE UBPC spectrum. The characteristic energy of the VB tail exponential decay (or Urbach energy) of the UBPC spectrum is derived from the simulation of the experimental data by fixing all Gaussian subgap state peak energies and amplitudes, leaving the Urbach energy (TG: 89 meV, BCE: 95 meV) as the only free simulation parameter. The fitted Gaussian peak characteristics for the enumerated peaks are provided in Table 1.
In Fig. 3(b), the UBPC DoS in the region of the Zn-8 zinc vacancy peak is compared on a linear ordinate scale for several different BCE processing conditions: _1_: Ti/Cu S/D wet etch; _2_: Mo S/D wet etch; _3_: Ti/Cu S/D wet/dry etch (wet etch of the S/D metal far from the back channel surface, followed by dry etch near the back channel surface); and _4_: Ti/Cu S/D low damage etch. Figure 3(c) shows the drain current-gate voltage (\(\rm I_{D}-V_{G}\)) transfer curves for three of the TFTs plotted in Fig. 3(b), as well as for a TG a-IGZO TFT.
As clearly evident from Figs. 3(a) and 3(b), the Zn-8 zinc vacancy concentration is strongly affected by the type of BCE etchant used. Also, a larger Zn-8 zinc vacancy concentration leads to a more positive shift in the turn-on voltage. From a charge balance perspective, an increase in the deep acceptor concentration, i.e., Zn-8 zinc vacancies, is balanced by an increase in the concentration of ionized deep donors, i.e., oxygen vacancies, e.g., O-1, O-2, O-3, etc. Mathematically, \(\rm\Delta N_{DA}\approx\Delta N_{DD}^{+}(E_{F})\), where the deep donor Fermi level dependence recognizes that the Fermi level is modulated deeper into the bandgap, away from the conduction band mobility edge, until a sufficient density of oxygen vacancy donors is ionized in order to achieve charge neutrality.
The Zn-8 zinc vacancy concentration can be estimated by integration of its UBPC peak area or, alternatively, from the shift in the turn-on voltage of an \(\rm I_{D}-V_{G}\) transfer curve via \(\rm N_{DA,ID-VG}\approx(C_{I}\Delta V_{ON}/q)^{3/2}\), where \(\rm C_{I}\) is the gate insulator capacitance density (9.7 nF cm\({}^{-2}\)), q is the electronic charge, and \(\rm\Delta V_{ON}=\Delta V_{ON,BCE}-\Delta V_{ON,TG}\).
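As a quick numerical check (a minimal sketch; only \(\rm C_{I}\) and the \(\rm\Delta V_{ON}\) values come from this work), the turn-on-voltage estimate reproduces the \(\rm N_{DA,ID-VG}\) column of Table 2:

```python
# Deep-acceptor density from the turn-on voltage shift: N_DA ~ (C_I * dV_ON / q)^(3/2).
q, C_I = 1.602e-19, 9.7e-9                  # C and F/cm^2
for label, dVon in [("wet", 2.55), ("wet/dry", 1.35), ("low damage", 0.50)]:
    N_da = (C_I * dVon / q) ** 1.5          # areal density raised to 3/2 -> cm^-3
    print(f"{label:10s} {N_da:.1e} cm^-3")  # -> ~6.1e16, ~2.3e16, ~5.3e15
```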
Table 2 compares values of \(\rm N_{DA,ID-VG}\) and \(\rm N_{DA,UBPC}\) for the three BCE a-IGZO TFTs plotted in Fig. 3(b). Note that the UBPC estimate is invariably larger, as UBPC also measures non-electrically active valence band tail donor-like traps.[14, 6]
The UBPC spectra provided in Fig. 3(a-b) suggest that: (i) a standard BCE wet etch process results in the formation of a Gaussian zinc vacancy state centered at \(\sim-2.2\) eV (Zn-8), as well as a slight increase in the Urbach energy; (ii) the zinc vacancy state formation effect is particularly severe for BCE TFTs with Mo source-drain electrodes; and (iii) the zinc vacancy concentration can be reduced through the use of less-damaging etch methods. The presence of this zinc vacancy peak and its particular susceptibility to form in a-IGZO TFTs processed using a Mo S/D wet etch is consistent with previously published work[22, 23, 24, 25] on BCE etch damage. In Supplemental Materials Fig. S1, we provide the UBPC DoS for 6 BCE and 2 TG a-IGZO TFTs subjected to various processing conditions, along with the averaged BCE and TG DoS across all the devices studied. For all BCE devices, the Zn-8 peak is larger than that observed for any of the TG TFTs.
A number of strategies have been explored to mitigate the deleterious effects of the etch process on BCE a-IGZO TFT electrical operation: careful selection of the source/drain metal;[23] plasma treatments intended to reduce metal residues;[29] the use of etch stop layers (ESL);[22, 64, 65, 23] less damaging wet etchants such as \(\rm H_{2}O_{2}\);[66] or the adoption of more complex device architectures such as a dual-gate TFT.[67] A comparison of an as-grown BCE TFT to a BCE TFT with an ESL is included in Supplemental Materials Fig. S2. UBPC measurements on the ESL device show a smaller zinc vacancy DoS peak and improved electrical performance compared to the as-grown TFT.
### Plasma treatments of a-IGZO TFTs
Figure 4(a) depicts the fabrication of an n\({}^{+}\) active channel TG a-IGZO TFT via plasma or ion implantation treatment. In (_Panel_ (1)), an a-IGZO channel deposited onto a glass substrate with a SiO\({}_{2}\) buffer layer is sub-
\begin{table}
\begin{tabular}{l c|c|c c} BCE & Etchant & \(\rm\Delta V_{ON}\) (V) & \(\rm N_{DA,ID-VG}\) (cm\({}^{-3}\)) & \(\rm N_{DA,UBPC}\) (cm\({}^{-3}\)) \\ \hline \hline
1: & wet & 2.55 & \(6.1\times 10^{16}\) & \(6.2\times 10^{16}\) \\
3: & wet/dry & 1.35 & \(2.3\times 10^{16}\) & \(2.4\times 10^{16}\) \\
4: & low damage & 0.50 & \(5.3\times 10^{15}\) & \(1.0\times 10^{16}\) \\ \end{tabular}
\end{table}
Table 2: Comparison of changes in deep acceptor (DA) trap density for three BCE a-IGZO TFTs with different etch process conditions (from Fig. 3b). The TFT \(\rm I_{D}-V_{G}\) transfer curve turn-on voltage shifts (\(\rm\Delta V_{ON}\)) are used to predict the DA trap density (\(\rm N_{DA,ID-VG}\)), which is also measured directly by the UBPC method (\(\rm N_{DA,UBPC}\)).
jected to a plasma or ion implantation treatment, resulting in an n\({}^{+}\) a-IGZO active channel. In (_Panel_ (2)), source-drain electrodes and the top-gate are patterned or deposited onto the n\({}^{+}\) a-IGZO active channel to form the n\({}^{+}\) active channel TFT. Note that in this process, the entire active channel is subjected to the n\({}^{+}\) doping treatment.
In Fig. 4(b), \(\rm log(I_{D})-V_{G}\) transfer curves are compared for three TG a-IGZO TFTs in which the channel is as-grown (_black_) or is subjected to an \(\rm Ar^{+}\) ion implantation (_green_) or a He plasma (_blue_) treatment to obtain a \(\rm n^{+}\) a-IGZO active channel. _Arrows_ represent the direction of hysteresis. Compared to the as-grown a-IGZO TFT, the plasma-processed a-IGZO TFTs are strongly depletion-mode, with large negative turn-on voltages of \(<-20\) V.
Figure 4(c) plots the \(\rm I_{D}-V_{G}\) transfer curves of three TG a-IGZO TFTs whose active channel is subjected to treatment by \(\rm Ar^{+}\) ion implantation (_green_), He plasma (_blue_), or Ar plasma (_red_). The \(\rm Ar^{+}\) ion-implanted and He plasma-treated TFTs exhibit no measurable hysteresis when plotted on a linear ordinate scale (Fig. 4(c)), and only a very small amount of clockwise hysteresis when plotted on a logarithmic scale (Fig. 4(b)). In contrast, the Ar plasma-treated TFT possesses a large amount of hysteresis and, notably, this hysteresis is counterclockwise. Typically, clockwise hysteresis in an n-channel TFT is ascribed to electron trapping, while counterclockwise hysteresis is attributed to ion migration.[68] Thus, the counterclockwise hysteresis witnessed for the Ar plasma-treated TFT is tentatively attributed to ion migration. However, we note that this case is unusual since ion migration normally occurs within the gate insulator rather than the semiconductor.
The deleterious effects of plasma damage to the channel layer of an a-IGZO TFT are unambiguously revealed by UBPC, as shown in Fig. 4(d) and summarized in Table 3. Plasma processing dramatically increases the amplitude of all six oxygen vacancy peaks. \(\rm Ar^{+}\) ion implantation and He plasma treatments modestly increase the amplitudes of the shallower O-1 and O-2 deep traps, by less than an order of magnitude. Ar plasma damage is the most dramatic, increasing the amplitudes of all of the oxygen vacancy subgap peaks by well over two orders of magnitude.
In addition to increasing the amplitude of the oxygen vacancy subgap peaks, the valence band Urbach energy increases after plasma processing. This Urbach energy increase is relatively modest for the \(\rm Ar^{+}\) ion implantation or He plasma treatment - from 117 \(\pm\) 3 meV (as-grown) to 133 \(\pm\) 8 meV (\(\rm Ar^{+}\) ion implantation) - and is more significant for the Ar plasma treatment at \(\rm E_{U}\) = 180 \(\pm\) 9 meV. The linear ordinate density of states plot included as an inset to Fig. 4(d) shows how much more damaging the Ar plasma treatment is than either the \(\rm Ar^{+}\) ion implantation or the He plasma treatment. After Ar plasma treatment, the anion sublattice appears to be so heavily damaged that it is not surprising that a TFT channel layer subjected to such abuse would exhibit \(\rm I_{D}-V_{G}\) transfer curve counterclockwise hysteresis due to ion migration (perhaps by hydrogen) on its sub-lattice.
Table 3 reports the strongest oxygen vacancy deep donor DoS peaks ( \(\times 10^{16}\)\(\rm cm^{-3}eV^{-1}\)) as obtained from the UBPC measurement data plotted in Fig. 4(d). As the
Figure 4: _Plasma processing of a-IGZO TFTs._**(a)**\(\rm n^{+}\) doping of the active channel of a top-gate a-IGZO TFT via plasma treatment or ion implantation prior to deposition of the top-gate electrode. **(b)** The drain current-gate voltage (\(\rm I_{D}-V_{G}\)) transfer curve of an as-grown top-gate a-IGZO TFT compared to those subjected to an \(\rm Ar^{+}\) ion implantation or He plasma active channel pre-treatment. **(c)** The \(\rm I_{D}-V_{G}\) transfer curves of three \(\rm n^{+}\) a-IGZO active channel TFTs subjected to different plasma treatments. **(d)** Subgap density of states (\(\rm cm^{-3}eV^{-1}\)) of as-grown and plasma-treated top-gate TFTs whose \(\rm I_{D}-V_{G}\) transfer curves are plotted in (b-c). (_Inset_) Linear ordinate scaling of DoS in the oxygen vacancy region of the subgap.
\(\mathrm{I_{D}-V_{G}}\) curves plotted in Fig. 4(b-c) indicate, all of the plasma-treated or ion-implanted TFTs experienced large negative shifts in the \(\mathrm{I_{D}-V_{G}}\) curve \(\mathrm{V_{ON}}\), ranging from \(\sim-10\) to \(-30\) V. This trend is consistent with n\({}^{+}\) doping of the active channel via the Ar\({}^{+}\) ion implantation or He/Ar plasma treatments.
A primary application of ion implantation or plasma treatment is the formation of n\({}^{+}\) source/drain regions at the active channel edge; after plasma treatment (or ion implantation), these n\({}^{+}\) doped regions function as low contact resistance source-drain regions. The UBPC DoS spectra and \(\mathrm{I_{D}-V_{G}}\) transfer curves in Fig. 4 confirm that treatments that induce a moderate amount of disorder in the active channel material (forming shallow donor oxygen vacancy states) such as Ar\({}^{+}\) ion implantation and He plasma are useful for achieving the desired n\({}^{+}\) doping effect.
### a-IGZO channel stoichiometry variations
Figure 5 plots the density of states (_dots_: UBPC experimental data, _shaded regions_: simulation) of a BCE a-IGZO TFT with a stoichiometric (1:1:1 molar % \(\mathrm{In_{2}O_{3}/Ga_{2}O_{3}/ZnO}\) ratio) active channel composition (111 a-IGZO, _black_) against a BCE a-IGZO TFT with a higher molar proportion of Zn constituent atoms than In and Ga (Zn-rich, _blue_). The (_Inset_) of Fig. 5 plots the drain current-gate voltage (\(\mathrm{I_{D}-V_{G}}\)) transfer curves of the 111 as-grown and Zn-rich TFTs. Table 4 reports the DoS peak characteristics for the 111 and Zn-rich a-IGZO TFTs.
The DoS peak amplitudes of the O-3, O-4, O-5, and O-6 peaks are significantly larger for the Zn-rich a-IGZO TFT compared to the 111 a-IGZO TFT, but the O-7 peak is smaller. The \(\mathrm{I_{D}-V_{G}}\) transfer curve of the Zn-rich a-IGZO TFT also exhibits a lower drain current, a positive shift in the turn-on voltage, and a poorer subthreshold slope than the 111 a-IGZO TFT. These trends are consistent with an electron trapping-induced reduction of TFT electrical performance due to the enhanced density of subgap oxygen vacancy states. The appreciable increase in shallow oxygen vacancy states is particularly concerning with respect to TFT operation. Surprisingly, the valence band Urbach energy of the Zn-rich TFT is \(\sim 30\%\) smaller than that of the 111 a-IGZO TFT.
### Hydrogen incorporation into a-ITGZO TFTs
Figure 6 displays a comparison between two a-ITGZO TFTs possessing different concentrations of hydrogen. The \(\mathrm{I_{D}-V_{G}}\) transfer curves (Fig. 6(a)) show that the larger incorporated hydrogen concentration (_orange_ curve) shifts the turn-on voltage by \(-3.6\) V, corresponding to an estimated increase in the hydrogen concentration of \(\mathrm{\Delta[H]_{ID-V_{G}}=2.8\times 10^{17}\ cm^{-3}}\), where \(\mathrm{\Delta[H]_{ID-V_{G}}=(C_{I}\Delta V_{ON}/q)^{1.5}}\) and \(\mathrm{C_{I}=19.2\ nF\,cm^{-2}}\) is the insulator capacitance density. (Since hysteresis is present in the \(\mathrm{I_{D}-V_{G}}\) transfer curves shown in Fig. 6(a), \(\mathrm{V_{ON}}\) is estimated as the midpoint between the right-going and left-going turn-on voltages.) The nega-
Figure 5: UBPC DoS spectra comparing different a-IGZO growth conditions, plotted for a BCE 111 (_black_) and a BCE Zn-rich (_blue_) a-IGZO growth. Fitted DoS peaks are plotted as _shaded_ regions. (_Inset_) Drain current-gate voltage (\(\mathrm{I_{D}-V_{G}}\)) transfer curves for a BCE 111 a-IGZO and a BCE Zn-rich a-IGZO TFT.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} & \(\mathrm{E-E_{C}(eV)}\) & As-grown & Ar\({}^{+}\) ion & He plasma & Ar plasma \\ \hline \(\mathrm{O-1}\) & \(-0.30\) & \(1.1\) & \(3.2\) & \(3.2\) & \(500\) \\ \(\mathrm{O-2}\) & \(-0.70\) & \(1.7\) & \(3.7\) & \(7.0\) & \(210\) \\ \(\mathrm{O-3}\) & \(-1.30\) & \(3.2\) & \(27\) & \(23\) & \(590\) \\ \(\mathrm{O-6}\) & \(-1.95\) & \(0.8\) & \(190\) & \(140\) & \(520\) \\ \(\mathrm{O-7}\) & \(-2.45\) & \(5.9\) & \(420\) & \(500\) & \(1000\) \\ \(\mathrm{E_{U}(meV)}\) & & \(117\pm 3\) & \(133\pm 8\) & \(135\pm 6\) & \(180\pm 9\) \\ \end{tabular}
\end{table}
Table 3: Comparison of density of states peak maximum amplitudes (\(\times 10^{16}\mathrm{cm^{-3}eV^{-1}}\)) and Urbach energies (meV) from Fig. 4(d) for four top-gate a-IGZO TFTs with the full-channel subjected to different plasma and ion-etch processes.
\begin{table}
\begin{tabular}{c|c|c|c} & \(\mathrm{E-E_{C}(eV)}\) & 111 & Zn-rich \\ \hline \(\mathrm{O-2}\) & \(-1.05\) & \(0.25\) & \(1.45\) \\ \(\mathrm{O-3}\) & \(-1.25\) & \(4.6\) & \(32\) \\ \(\mathrm{O-4}\) & \(-1.40\) & \(2.2\) & \(32\) \\ \(\mathrm{O-5}\) & \(-1.75\) & \(2.45\) & \(22\) \\ \(\mathrm{O-6}\) & \(-1.90\) & \(11.9\) & \(117\) \\ \(\mathrm{O-7}\) & \(-2.50\) & \(59\) & \(5.2\) \\ \(\mathrm{E_{U}(meV)}\) & & \(105\) & \(78\) \\ \end{tabular}
\end{table}
Table 4: Comparison of the DoS peak maximum amplitudes (\(\times 10^{16}\mathrm{cm^{-3}eV^{-1}}\)) from Fig. 5 for the 111 and Zn-rich growth recipes for BCE a-IGZO TFTs.
tive turn-on voltage shift indicates that the incorporated species (hydrogen) behaves as a donor.
Since donor ionization energies are normally found nearest to the conduction band, Fig. 6(b) is included in order to demonstrate that the UBPC-estimated trap density is extremely small (less than \(10^{14}\) cm\({}^{-3}\)), and that the near conduction band trap distribution is quite similar for the two a-ITGZO BCE TFTs under consideration. These observations support our contention [14] that hydrogen does not behave as a normal shallow donor when incorporated into an amorphous oxide semiconductor, such as a-ITGZO.
Figure 6(c) reveals an enhanced density of states at \(\mathrm{E-E_{C}\approx-2.7}\) eV for the high hydrogen (_orange_) curve, compared to the low H (_black_) curve. This corresponds to an increase in the concentration of \(\mathrm{[O_{O}^{2-}H^{+}]^{1-}}\) defect complexes (or, equivalently, OH\({}^{-}\)) with increasing hydrogen incorporation. This increase in hydrogen between the _black_ and _orange_ curves can be quantified via UBPC, i.e., \(\mathrm{\Delta[H]_{UBPC}=N_{TOT}(E_{g})|_{orange}-N_{TOT}(E_{g})|_{black}}\)\(=9.2\times 10^{17}\mathrm{cm^{-3}}\)\(-6.0\times 10^{17}\mathrm{cm^{-3}}\)\(=3.2\times 10^{17}\mathrm{cm^{-3}}\). This value of \(\mathrm{\Delta[H]_{UBPC}}\) (\(3.2\times 10^{17}\mathrm{cm^{-3}}\)) is quite similar to that estimated previously for \(\mathrm{\Delta[H]_{ID-VG}}\) (\(2.8\times 10^{17}\mathrm{cm^{-3}}\)), indicating that a correlation exists between the enhancement in density of states and the enhancement in electrically-active donor activity with increasing hydrogen concentration. However, note that \(\mathrm{\Delta[H]_{UBPC}>\Delta[H]_{ID-VG}}\) since the UBPC-estimated density of states enhancement includes a contribution associated with valence band tail state density arising from enhanced anion sublattice disorder, as well as from a simple increase in the \(\mathrm{[O_{O}^{2-}H^{+}]^{1-}}\) defect complex density centered at \(\mathrm{E-E_{C}=-2.7}\) eV, due to direct incorporation of hydrogen into the amorphous network.[14]
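The agreement between the two estimates can be reproduced in a few lines (only \(\rm C_{I}\), \(\rm\Delta V_{ON}\), and the two \(\rm N_{TOT}(E_{g})\) values are taken from the text):

```python
# Cross-check of the two hydrogen-increase estimates for the a-ITGZO TFTs.
q, C_I, dVon = 1.602e-19, 19.2e-9, 3.6
dH_idvg = (C_I * dVon / q) ** 1.5               # from the turn-on voltage shift (cm^-3)
dH_ubpc = 9.2e17 - 6.0e17                       # from the integrated UBPC trap densities
print(f"{dH_idvg:.1e} vs {dH_ubpc:.1e} cm^-3")  # ~2.8e17 vs 3.2e17
```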
Simple charge balance considerations are useful for rationalizing turn-on voltage trends similar to those presented in Fig. 6(a).[6] Enhancement-mode operation, in which \(\mathrm{V_{ON}}\) is positive, is witnessed for the two curves included in Fig. 6(a). Charge balance describing this type of behavior likely arises from \(\mathrm{N_{DD}^{+}(E_{F})=N_{DA}-N_{SD}}\), where \(\mathrm{N_{DD}^{+}}\) refers to positively ionized deep donors, the concentration of which is controlled by the position of the Fermi level, \(\mathrm{N_{DA}}\) is the density of deep acceptors, and \(\mathrm{N_{SD}}\) is the density of shallow donors. Identifying deep donors as oxygen vacancies that are distributed across the upper portion of the a-ITGZO bandgap (see _Inset_ to Fig. 6(c)), deep acceptors as zinc vacancies located at \(\mathrm{E-E_{C}\approx-2.3}\) to \(-2.4\) eV, and shallow donors as \(\mathrm{[O_{O}^{2-}H^{+}]^{1-}}\) defect complexes centered at \(\mathrm{E-E_{C}\approx-2.8}\) eV, enhancement-mode behavior is a balancing act between zinc vacancies and hydrogen. Although \(\mathrm{[O_{O}^{2-}H^{+}]^{1-}}\) defect complexes are certainly not energetically 'shallow' since they are centered at \(\mathrm{E-E_{C}\approx-2.8}\) eV, they can be considered to be 'shallow' from the perspective of charge balance assessment since they remain ionized, independent of the position of the Fermi level. This odd behavior is due to the non-equilibrium nature of the hydrogen donor, in which hydrogen ionization occurs prior to its incorporation into the amorphous network.[14]
When the zinc vacancy density is much larger than that of incorporated hydrogen, \(\mathrm{N_{DD}^{+}(E_{F})\approx N_{DA}}\), so that strongly enhancement-mode behavior obtains (_black_ curve of Fig. 6(a)): \(\mathrm{V_{ON}}\) is positive and large, and \(\mathrm{E_{F}}\) is positioned deep in the gap (perhaps \(-1.5\) to \(-2\) eV), leaving many oxygen vacancy traps empty (empty traps (\(\mathrm{N_{DD}^{+}}\)) are responsible for enhancement-mode behavior). In contrast, when the zinc vacancy and incorporated hydrogen concentrations are similar, \(\mathrm{N_{DD}^{+}(E_{F})\approx 0}\), such that \(\mathrm{V_{ON}\approx 0}\) V, \(\mathrm{E_{F}}\) is positioned about \(0.15-0.3\) eV below the conduction band mobility edge, and most oxygen vacancy traps are filled. Finally, if the incorporated hydrogen concentration exceeds that of the zinc vacancy concentration, then depletion-mode behavior occurs, as described by a different charge balance relationship, \(\mathrm{n(E_{F})\approx N_{SD}-N_{DA}}\), where n is the free electron density, as controlled by the position of the Fermi level.
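The three regimes can be summarized in a small schematic; the densities below are illustrative assumptions, not measured values.

```python
# Schematic of the charge-balance bookkeeping described above.
def operating_mode(N_DA, N_SD, tol=0.1):
    """Classify TFT behavior from deep-acceptor vs. hydrogen shallow-donor densities."""
    if N_DA > N_SD * (1 + tol):    # N_DD+(E_F) ~ N_DA - N_SD > 0: empty deep donors
        return "enhancement-mode (V_ON > 0)"
    if N_SD > N_DA * (1 + tol):    # n(E_F) ~ N_SD - N_DA free electrons
        return "depletion-mode (V_ON < 0)"
    return "V_ON ~ 0 (E_F just below the CB mobility edge)"

for N_DA, N_SD in [(5e17, 1e16), (3e17, 3e17), (1e16, 9e17)]:
    print(f"N_DA={N_DA:.0e}, N_SD={N_SD:.0e}: {operating_mode(N_DA, N_SD)}")
```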
Figure 6: **(a)** Drain current-gate voltage (\(\mathrm{I_{D}-V_{G}}\)) transfer curves, **(b)** near conduction band total integrated trap density, \(\mathrm{N_{TOT}}\), and **(c)** near valence band DoS for two a-ITGZO BCE TFTs possessing different concentrations of hydrogen. (c, _Inset_) UBPC DoS plotted across full bandgap (logarithmic scale).
## IV Conclusions
The subgap DoS measured by the ultrabroadband photoconduction (UBPC) method reveals systematic trends in AOS subgap trap density that correlate with different TFT architectures, doping treatments, and AOS growth compositions. Notably, we observed the following: (1) a Zn vacancy peak centered at \(-2.2\) eV is introduced to back channel etch a-IGZO TFTs as a result of the source/drain metal etch process, and its peak density can be modulated via different etch conditions; (2) Ar and He plasma and Ar\({}^{+}\) ion implantation treatment of the a-IGZO active channel enhance the peak densities of deep-donor oxygen vacancy states by up to \(>100\times\); (3) Zn-rich a-IGZO TFTs exhibit larger deep oxygen vacancy peaks and significantly worse electrical performance compared to a-IGZO with a 1:1:1 stoichiometric molar proportion of constituent cations; and (4) as previously observed for a-IGZO,[14] hydrogen incorporation in a-ITGZO rigidly shifts the \(\rm I_{D}-V_{G}\) transfer curve negative, and is accompanied by a corresponding increase in the \(\left[O_{O}^{2-}H^{+}\right]^{1-}\) complex peak density centered at \(-2.8\) eV.
The defect DoS obtained with the UBPC method has both high (\(\sim 10^{6}\)) signal-to-noise sensitivity and a broad energy range not yet obtainable by established purely optical and XPS/UPS methods. This newly demonstrated capacity of UBPC to identify defects arising from on-chip TFT processing and growth recipes further suggests its use as an analytical tool in AOS TFT development. The different categories of defects (acceptor, donor, and interstitial hydrogen) that compose the subgap of AOS impact both the TFT \(\rm I_{D}-V_{G}\) transfer curve turn-on voltage and hysteresis. The next challenge is to extend the UBPC method over the conduction band Urbach tails to complete the connection between subgap trap density and TFT performance metrics. The clear DoS trends with TFT production processing demonstrate that UBPC is an emerging post-assembly defect characterization method that can work down to the single-pixel limit.
###### Acknowledgements.
**Acknowledgments**: This work was supported in part by NSF Grant DMR-1920368. We would like to thank Jessica Waymire for her contribution to the optical photoluminescence measurements of a-IGZO.
**Supporting Information Available**: Supplemental experiment DoS curves showing a statistical comparison between many TFT amorphous oxide devices.
|
2307.14631 | Denjoy Domains and BMOA | A Denjoy domain is a plane domain whose complement is a closed subset $E$ of
the extended real line $\bar{R}$ containing $\infty$ : such a domain is called
Carleson-homogeneous if there exists $C>0$ such that for all $z\in E$ and
$r>0$, one has $\vert E\cap [z-r,z+r]\vert\geq Cr$, where $\vert\cdot\vert$ is
the Lebesgue measure on the line. We prove that if $U=\bar{ \mathbb
C}\backslash K$ is a Carleson-homogeneous Denjoy domain then, if $f$ stands for
one of its universal coverings, $\log {f'}\in BMOA.$ In order to prove this
result, we develop ideas from
[On Carleson measures induced by Beltrami coefficients being compatible with
Fuchsian groups, Ann. Fenn. Math. 46(2021),67-77] leading to a general theorem
about planar domains giving sufficient conditions ensuring that $\log {f'}\in
BMOA$ for any universal covering $f.$ | Shengjin Huo, Michel Zinsmeister | 2023-07-27T05:39:14Z | http://arxiv.org/abs/2307.14631v1 | # Denjoy domains and BMOA
###### Abstract.
A Denjoy domain is a plane domain whose complement is a closed subset \(E\) of the extended real line \(\bar{R}\) containing \(\infty\) : such a domain is called Carleson-homogeneous if there exists \(C>0\) such that for all \(z\in E\) and \(r>0\), one has \(|E\cap[z-r,z+r]|\geq Cr\), where \(|\cdot|\) is the Lebesgue measure on the line. We prove that if \(U=\bar{\mathbb{C}}\backslash K\) is a Carleson-homogeneous Denjoy domain then, if \(f\) stands for one of its universal coverings, \(\log f^{\prime}\in BMOA\). In order to prove this result, we develop ideas from [On Carleson measures induced by Beltrami coefficients being compatible with Fuchsian groups, Ann. Fenn. Math. 46(2021),67-77] leading to a general theorem about planar domains giving sufficient conditions ensuring that \(\log f^{\prime}\in BMOA\) for any universal covering \(f\).
Key words and phrases:Denjoy domain, Carleson measure, Carleson homogeneous 2010 Mathematics Subject Classification: 30F35, 30C62 This work was supported by the National Natural Science Foundation of China (Grant No.11401432).
A rectifiable curve \(\gamma:\,I\to\mathbb{C}\) is said to be Ahlfors-David regular if there exists \(C>0\) such that for every \(z\in\gamma(I)\) and every \(0<r<\)diam(\(\gamma(I)\)),
\[\text{length}(D(z,r)\cap\gamma(I))\leq Cr,\]
where if \(F\subset\gamma(I)\) is, say, a Borel set,
\[\text{length}\ (F)=\int_{\gamma^{-1}(F)}|\gamma^{\prime}(s)|ds.\]
Finally, a Jordan curve \(\Gamma:I\to\mathbb{C}\) is said to be a chord-arc curve if it is at the same time Ahlfors-David regular and a quasicircle. Equivalently, \(\Gamma\) is chord-arc if, given two points \(z,\zeta\in\Gamma(I)\), then
\[\min\{\text{length}(\gamma_{1}),\text{length}(\gamma_{2})\}\leq C|z-\zeta|,\]
where \(\gamma_{1},\gamma_{2}\) are the two subarcs of \(\Gamma\) defined by \(z,\zeta\).
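As a hedged numerical illustration (ours, not part of the paper), the chord-arc constant of a sampled smooth Jordan curve can be estimated by comparing the shorter subarc length to the chord length over all sample pairs; bounded values under mesh refinement are consistent with the chord-arc condition.

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
pts = (1 + 0.3 * np.cos(5 * theta)) * np.exp(1j * theta)   # a smooth Jordan curve

seg = np.abs(np.roll(pts, -1) - pts)           # edge lengths of the polygonal model
total = seg.sum()
cum = np.concatenate([[0.0], np.cumsum(seg)])  # arc length from pts[0] to pts[i]

worst = 0.0
for i in range(len(pts)):
    for j in range(i + 1, len(pts)):
        arc = cum[j] - cum[i]
        arc = min(arc, total - arc)            # shorter of the two subarcs
        chord = abs(pts[j] - pts[i])
        worst = max(worst, arc / chord)
print("empirical chord-arc constant:", worst)
```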
We end this introductory section by recalling some facts from Riemann surfaces. A Riemann surface \(\mathcal{R}\) is a one-dimensional complex manifold. The universal cover \(\mathcal{R}^{*}\) of \(\mathcal{R}\) is conformally equivalent to the Riemann sphere, the complex plane or the unit disk, see [1]. In the latter case we say that the surface is hyperbolic, and the surface is then conformally equivalent to \(\mathbb{D}/G\), where \(G\) is a Fuchsian group, that is, a discrete group of automorphisms of the disk. We moreover call universal covering any conformal isomorphism from \(\mathbb{D}\) onto \(\mathcal{R}^{*}\).
In this paper we will be mainly interested in planar Riemann surfaces, which are nothing but the subdomains of the Riemann sphere. Such a surface is hyperbolic if and only if the complement of \(\Omega\) in the sphere contains at least 3 points. In the case of a planar Riemann surface, a universal covering (or more precisely its projection on the Riemann sphere) is then a holomorphic function \(F\) from \(\mathbb{D}\) onto \(\Omega\), locally injective and such that \(F\circ\gamma=F\) for all \(\gamma\in G\), a Fuchsian group such that \(\Omega\) is conformally equivalent to \(\mathbb{D}/G\). Notice that in the case where \(\Omega\) is simply-connected this map is a Riemann map, that is, a bi-holomorphism. The universal covering may thus be seen as a Riemann map for general planar domains.
## 2. Main result and motivation
A Denjoy domain is a plane domain whose complement is a closed subset \(E\) of the extended real line \(\bar{R}\) containing \(\infty\). These domains have been intensively studied, the reason being that they are in some sense the simplest infinitely connected domains in the plane. For example, Rubel and Ryff ([15]) have shown how to construct quasi-explicitly the uniformizing Fuchsian group of such a domain. Also (see [5], [8], [10]) these domains were the first infinitely connected domains for which the corona property was proven to hold.
We recall that a plane domain \(U\) has the corona property if for any \(n>0\) and for any \(n\)-tuple \((f_{1},..,f_{n})\) of holomorphic functions in \(U\) such that there exists \(\delta>0\)
with
\[\delta\leq\inf_{z\in U}(\max_{j=1,..,n}|f_{j}(z)|)\leq\sup_{z\in U}(\max_{j=1,..,n }|f_{j}(z)|)\leq 1,\]
there exist \(n\) bounded holomorphic functions \(g_{1},..g_{n}\) in \(U\) such that
\[\forall z\in U,\,f_{1}(z)g_{1}(z)+..+f_{n}(z)g_{n}(z)=1.\]
In [8], Garnett and Jones eventually proved that all Denjoy domains possess the corona property, but Carleson ([5]) was the first to raise the question and to solve it in the special case of what is now called Carleson-homogeneous Denjoy domains, a notion that will happen to be central in the present paper.
A Denjoy domain \(\mathbb{C}\backslash E\) is said to be Carleson-homogeneous if, writing \(|\cdot|\) for the Lebesgue measure on \(\mathbb{R}\), there is a constant \(C\) such that for all \(x\in E\) and \(t>0\),
\[|(x-t,x+t)\cap E|\geq Ct.\]
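As a hedged numerical illustration (ours, not from the paper), this condition can be probed for a finite union of intervals by estimating the infimum of \(|E\cap(x-t,x+t)|/t\); a full interval gives a constant close to 1, while middle-thirds removals, as in the Cantor set discussed later, drive the constant down.

```python
import numpy as np

def measure_in_window(intervals, x, t):
    """Lebesgue measure of E intersected with (x - t, x + t), E a union of intervals."""
    lo, hi = x - t, x + t
    return sum(max(0.0, min(b, hi) - max(a, lo)) for a, b in intervals)

def homogeneity_constant(intervals, ts):
    # Probe x at the interval endpoints, all of which belong to E.
    xs = [e for a, b in intervals for e in (a, b)]
    return min(measure_in_window(intervals, x, t) / t for x in xs for t in ts)

ts = np.geomspace(1e-4, 1.0, 40)
print(homogeneity_constant([(0.0, 1.0)], ts))   # full interval: 1.0
# Two levels of a middle-thirds removal already lower the constant well below 1:
print(homogeneity_constant([(0, 1/9), (2/9, 1/3), (2/3, 7/9), (8/9, 1)], ts))
```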
The idea in Carleson's approach is that the thickness of the boundary implies that a Carleson-homogeneous Denjoy domain "behaves" like a simply-connected one, for which Carleson [4] had previously proven the corona property. Here the "thickness" of the boundary is measured by the Lebesgue measure. There are other ways to measure this thickness, as for example by logarithmic capacity: a plane domain \(U\) is said to be uniformly perfect if there exists \(C>0\) such that for all \(z\in\partial U\) and \(0<t<\operatorname{diam}(U)\),
\[\operatorname{cap}(D(z,t)\cap\partial U)\geq Ct,\]
where \(D(z,t)\) stands for the disk of center \(z\) and radius \(t\) and cap is logarithmic capacity. If \(V\neq\mathbb{C}\) is any simply-connected plane domain, a famous theorem of Koebe asserts that, \(f:\mathbb{D}\to V\) being a Riemann map, \(\log f^{\prime}\in\mathcal{B}\), the Bloch class defined as
\[\mathcal{B}=\{b\operatorname{holomorphic}\operatorname{in}\mathbb{D}:\sup_{z \in\mathbb{D}}(1-|z|)|b^{\prime}(z)|<+\infty\}.\]
Pommerenke ( [12]) has proven that if \(f\) is the universal covering of a hyperbolic plane domain \(V,\) then \(\log f^{\prime}\in\mathcal{B}\) if and only if \(V\) is uniformly perfect, so that we may say that uniformly perfect domains "behave" with this respect like simply-connected ones.
There is yet another instance of this comparison between simply and multiply-connected domains, namely the Hayman-Wu theorem:
**Theorem 2.1**.: _There exists a (universal) constant \(C>0\) such that for any hyperbolic simply-connected planar domain \(V\), for any holomorphic bijection \(f\) from \(\mathbb{D}\) onto \(V\) and for any line \(L\), length\((f^{-1}(L\cap V))\leq C\)._
By analogy we will say that a planar hyperbolic domain \(V\) has Hayman-Wu property if there exists a constant \(C>0\) such that, \(f\) being any universal covering of \(V\), length\((f^{-1}(L\cap V))\leq C\) for all lines \(L\). Fernandez and Hamilton ( [7]) proved the following:
**Theorem 2.2**.: _Let \(U\) be a Denjoy domain; then \(U\) has Hayman-Wu property if and only if it is Carleson-homogeneous._
Since if \(f\) is a universal covering and \(T\) is an automorphism of the unit disk then \(f\circ T\) is another universal covering, we see that for a line \(L\), not only does \(f^{-1}(L)\) have finite length, but also there exists a constant \(C>0\) such that \(\operatorname{length}(T(f^{-1}(L)))\leq C\), meaning that arclength on \(f^{-1}(L)\) is a Carleson measure (see below the definition and other instances of this property).
In the spirit of what precedes, here is the main result of the paper:
**Theorem 2.3**.: _Let \(U\) be a Carleson-homogeneous Denjoy domain and \(f\) one of its universal coverings, then \(\log f^{\prime}\in BMOA(\mathbb{D})\)._
In this statement, \(BMOA(\mathbb{D})\) is the subset of the Hardy space \(H^{2}(\mathbb{D})\) whose elements have boundary values in the John-Nirenberg space \(BMO\) (bounded mean oscillation). Notice that \(BMOA(\mathbb{D})\) is a subspace of \(\mathcal{B}\) and that the theorem is obvious if \(U\) is simply-connected (i.e. if \(E\) is an interval).
## 3. Fuchsian groups
Let \(G\) be a uniformizing group for a domain \(\Omega\): it is a Fuchsian group, meaning that it acts on the unit disk \(\mathbb{D}\) of the plane properly discontinuously and freely, and it uniformizes \(\Omega\) in the sense that \(\Omega\) is conformally equivalent to the Riemann surface \(\mathbb{D}/G.\) For an element \(g\) in \(G\), we denote by \(\mathcal{H}_{z}(g)\) the closed hyperbolic half-plane containing \(z\), bounded by the perpendicular bisector of the hyperbolic segment \([z,g(z)]\). The Dirichlet fundamental domain \(\mathcal{F}_{z}(G)\) of \(G\) centered at \(z\) is the intersection of all the sets \(\mathcal{H}_{z}(g)\) with \(g\) in \(G\backslash\{Id\}\). For simplicity, in this paper we use the notation \(\mathcal{F}\) for the Dirichlet fundamental domain \(\mathcal{F}_{0}(G)\) of \(G\) centered at \(z=0.\) It is easy to see that a Dirichlet domain is a convex subset of \(\mathbb{D}\) for the hyperbolic metric. For the Dirichlet fundamental domain \(\mathcal{F}\), let \(\mathcal{F}^{\circ}\) denote its interior and \(\bar{\mathcal{F}}\) its closure. Let \(g\) be a nontrivial element of \(G\). When the intersection of \(\mathcal{F}\) and \(g(\mathcal{F})\) is non-empty, it is contained in the perpendicular bisector of the segment \([z,g(z)]_{h}\). This intersection is a point, a non-trivial geodesic segment, a geodesic ray or a geodesic. In the latter three cases, we say this intersection is an edge. The vertices are the endpoints of the edges. An infinity vertex is a vertex contained in the unit circle \(\partial\mathbb{D}.\) When \(G\) is of the first kind (i.e. if its limit set is the whole unit circle), the infinity vertex set \(\mathcal{F}(\infty)\) is at most countable (possibly empty), see [3].
Let \(\lambda(E)\) denote the Hausdorff linear measure of a set \(E.\) A Fuchsian group \(G\) is said to be of weak-finite length type if
\[\sum_{g\in G}\lambda(g(\partial\mathcal{F}))<+\infty.\]
We also say that a hyperbolic planar domain \(\Omega\) is of finite length type if the universal cover group is of finite length type. Notice that in the case of a Denjoy domain, the
quantity
\[\sum_{g\in G}\lambda(g(\partial\mathcal{F}))\]
is precisely the length of \(f^{-1}(\mathbb{R})\) for some universal covering \(f\). We will say that \(G\) is of strong-finite length type if there exists \(C>0\) such that the length of \(f^{-1}(\mathbb{R})\) is bounded by \(C\) for all universal coverings of \(\Omega.\) As we will see later, this is equivalent to saying that arclength on \(\cup_{G}g(\partial\mathcal{F})\) is a Carleson measure.
It is interesting to connect this notion of finite length type with the exponent of convergence of the Fuchsian group, defined as the infimum of the \(\alpha\)'s such that
\[\sum_{g\in G}(1-|g(0)|)^{\alpha}<\infty.\]
We denote this number by \(\delta(G)\) or \(\delta(\Omega)\). Fernandez and Hamilton [7] have shown that if \(\delta(G)<1/2\) then \(G\) is of finite length type. Conversely Fernandez has shown that if \(\Omega\) is uniformly perfect then \(\delta(\Omega)<1\). In particular \(G\) is of convergence type, that is \(\sum_{G}(1-|g(0)|)<\infty\). In general if \(G\) is not co-compact, it is easy to see that
\[(1-|g(0)|)\leq C\operatorname{length}{(g(\partial\mathcal{F}))},\]
which gives another proof of the preceding assertion. If \(G\) is co-compact then the two quantities are of the same order, but in this case the group is of divergence type. We do not know if there are convergence-type groups for which these quantities are comparable. Also, convergence type does not imply finite length type. An example is given by \(\Omega=\bar{\mathbb{C}}\backslash K\) where \(K\) is the triadic Cantor set: \(\Omega\) is then uniformly perfect and thus of convergence type, but is not of finite length type, not being Carleson-homogeneous.
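For orientation, here is a standard toy computation (not from the paper, and stated only up to multiplicative constants): a cyclic group generated by a single hyperbolic element \(g\) with translation length \(\ell\) has exponent of convergence zero, and therefore falls under the Fernandez-Hamilton criterion \(\delta<1/2\).

```latex
% Exponent of convergence of a cyclic hyperbolic group <g>, translation length l.
\begin{aligned}
d_{h}\bigl(0,g^{n}(0)\bigr) &= |n|\,\ell + O(1),\\
1-|g^{n}(0)| &\asymp e^{-d_{h}(0,g^{n}(0))} \asymp e^{-|n|\ell},\\
\sum_{n\in\mathbb{Z}}\bigl(1-|g^{n}(0)|\bigr)^{\alpha}
 &\asymp \sum_{n\geq 0} e^{-\alpha n\ell} < \infty
 \quad\text{for all }\alpha>0,\qquad\text{so }\delta(\langle g\rangle)=0.
\end{aligned}
```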
## 4. Carleson measures
Recall that a positive measure \(\lambda\) defined in a domain \(\Omega\) is called a Carleson measure if
\[\parallel\lambda\parallel^{2}=\sup\left\{\frac{\lambda(\Omega\cap D(z,r))}{r}:z\in\partial\Omega,\ 0<r<\operatorname{diam}(\partial\Omega)\right\}<+\infty.\]
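As a hedged numerical illustration (ours, not from the paper), this supremum can be approximated for a discrete measure, here point masses in the upper half-plane with the real axis as boundary, by scanning a grid of boundary centers and radii.

```python
# Approximate Carleson supremum for a discrete measure sum_i w_i * delta_{z_i}.
import numpy as np

rng = np.random.default_rng(0)
z = rng.uniform(-1, 1, 2000) + 1j * rng.uniform(0, 1, 2000)  # masses in the upper half-plane
w = rng.uniform(0, 1, 2000) * z.imag                         # weights shrinking near the boundary

def carleson_sup(z, w, centers, radii):
    best = 0.0
    for c in centers:                 # boundary points on the real axis
        d = np.abs(z - c)
        for r in radii:
            best = max(best, w[d < r].sum() / r)
    return best

print(carleson_sup(z, w, np.linspace(-1, 1, 21), np.geomspace(0.05, 2.0, 20)))
```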
In order to motivate what follows, let us start with an example coming from Teichmüller theory. If \(G\) is a Fuchsian group and \(\mu(z)\) is a bounded measurable function on \(\mathbb{D}\) which satisfies
\[||\mu(z)||_{\infty}<1\text{ and }\mu(z)=\mu(g(z))\overline{g^{\prime}(z)}/g^{ \prime}(z)\]
for every \(g\in G\), then we say that \(\mu\) is a \(G\)-compatible Beltrami coefficient (or complex dilatation). We denote by \(M(G)\) the set of all \(G\)-compatible Beltrami coefficients.
We recall that for a Fuchsian group \(G\), the unit disk \(\mathbb{D}\) can be tessellated by the images under \(G\) of any Dirichlet fundamental domain \(\mathcal{F}(G)\). Is it possible to check that the measure
\[\lambda_{\mu}=|\mu|^{2}/(1-|z|^{2})dxdy\]
induced by \(\mu\in M(G)\) is in \(CM(\mathbb{D})\) directly from its value on the Dirichlet fundamental domain \(\mathcal{F}_{z}(G)\) of \(G\)?
Very recently, the first author in [9] has given a positive answer for the finitely generated groups of the second kind (i.e. not of the first kind) without parabolic elements.
**Theorem 4.1**.: _[_9_]_ _Let \(G\) be a finitely generated Fuchsian group of the second kind without parabolic elements and \(\mathcal{F}\) be the Dirichlet fundamental domain of \(G\) centered at \(0\). Let \(\mu\) be a \(G\)-compatible Beltrami coefficient in \(M(G)\). Then the measure \(|\mu|^{2}(1-|z|^{2})^{-1}\) is a Carleson measure on the unit disk if and only if the restriction of the measure \(|\mu|^{2}(1-|z|^{2})^{-1}\) to the Dirichlet fundamental domain \(\mathcal{F}\) is a Carleson measure on \(\mathcal{F}\)._
In this theorem, the assumption that the finitely generated Fuchsian group \(G\) does not contain parabolic elements is essential. A counter-example is given by a cyclic parabolic group, for instance the covering group of a punctured disk. In this case the fundamental domain in the upper half-plane is a vertical strip, and setting \(|\mu|\) to be constant in the part of the strip above height \(1\) provides a counterexample, since if \(|\mu|\) is constant on a horodisk, then it does not give rise to a Carleson measure. Notice also that in this theorem the associated Riemann surface need not be planar.
Let us generalize the problem slightly. In what precedes we considered measures of the form
\[m=|\mu|^{2}(1-|z|^{2})^{-1}dxdy\]
where \(\mu\in M(G)\). It is easy to see that in this case
\[m|\gamma(\mathcal{F})=\gamma^{*}(|\gamma^{\prime}|(m|\mathcal{F})).\]
Let \(\nu\) be a positive and finite measure on \(\bar{\mathcal{F}}\); we define the measure \(\tilde{\nu}\) on the closed unit disk as
\[\tilde{\nu}=\sum_{\gamma\in G}\gamma^{*}(|\gamma^{\prime}|\nu),\]
and we ask the question: for which groups is it true that \(\tilde{\nu}\) is a Carleson measure on the closed disk if \(\nu\) is a Carleson measure on \(\mathcal{F}\)? It is thus natural to investigate the groups \(G\) satisfying the following property, where \(\mathcal{F}\) is the Dirichlet fundamental domain:
\[(H)\ \nu\in CM(\mathcal{F})\Rightarrow\tilde{\nu}\in CM(\mathbb{D}).\]
**Proposition 4.1**.: _The properties \((H)\) and \((SFLT)\) (strongly-finite length type) are equivalent for a Fuchsian group \(G\)._
Proof: In order to prove that \((H)\Rightarrow(SFLT)\) it suffices to apply \((H)\) to
\[\nu=\text{Arclength }(\partial\mathcal{F}).\]
Suppose now that \((SFLT)\) holds and let \(\nu\in CM(\mathcal{F})\). Let \(T\) be any automorphism of the disk: we may write
\[\int|T^{\prime}|d\tilde{\nu} =\sum_{\gamma\in G}\int_{\gamma(\mathcal{F})}|T^{\prime}|d\gamma^{ *}(|\gamma^{\prime}|\nu)\] \[=\sum_{\gamma\in G}\int_{\mathcal{F}}|(T\circ\gamma)^{\prime}|d\nu\]
and this last quantity is bounded by a constant times the length of \(T(\cup_{G}\gamma(\partial\mathcal{F}))\), which is finite (uniformly in \(T\)) by \((SFLT)\): this proves that \(\tilde{\nu}\in CM(\mathbb{D})\) and thus \((H)\).
We finish this section with an application of the last proposition: let \(\Omega\) be a hyperbolic planar domain which is uniformly perfect, \(G\) a uniformizing Fuchsian group and \(f:\,\mathbb{D}\to\Omega\) a universal covering of \(\Omega\). Gonzalez has shown that under these hypotheses there exists a fundamental domain \(\mathcal{F}\) for \(G\) which is a chord-arc domain. The restriction of \(f\) to \(\mathcal{F}\) is then a conformal mapping onto a simply connected domain \(U\subset\Omega\) which may be seen as a fundamental domain on the Riemann surface \(\Omega\).
**Theorem 4.2**.: _If \(G\) has property \((H)\) for \(\mathcal{F}\) and if \(U\) is Ahlfors-David regular then \(\log f^{\prime}\in BMOA(\mathbb{D})\)._
Proof.: Let \(\varphi\) be a Riemann map from the disk onto \(\mathcal{F}\). By [16], \(\log(f\circ\varphi)^{\prime}\in BMOA(\mathbb{D})\) as well as \(\log\varphi^{\prime}\). It follows that \(\log f^{\prime}\circ\varphi\in BMOA(\mathbb{D})\). By definition this means that \(\log f^{\prime}\in BMOA(\mathcal{F})\). By results in [16] it implies that
\[d(z,\partial\mathcal{F})^{3}|S_{f}(z)|^{2}dxdy\in CM(\mathcal{F}).\]
At this point we use the already mentioned result of Pommerenke that \(\log f^{\prime}\in\mathcal{B}\): it implies that \((1-|z|^{2})^{3}|S_{f}(z)|^{2}dxdy\in CM(\mathcal{F})\). Using now property \((H)\) an elementary computation shows that it implies that
\[(1-|z|^{2})^{3}|S_{f}(z)|^{2}dxdy\in CM(\mathbb{D}).\]
It remains to show that this last property implies that \(\log f^{\prime}\in BMOA(\mathbb{D})\); this has been proven in [2] in the case where \(f\) is conformal (that is, when \(G=\{Id\}\)). But, thanks to Pommerenke's result, the proof of Lemma 2 in [2] carries over to universal coverings, the set
\[\{g:\,\mathbb{D}\to\mathbb{C}\,\text{holomorphic and locally injective}:\|\log g ^{\prime}\|_{\mathcal{B}}\leq M\}\]
being for every \(M>0\) a normal family.
## 5. Denjoy Domains
We are now in position to prove our main theorem, whose statement we recall:
**Theorem 5.1**.: _Let \(\Omega\) be a Carleson-homogeneous Denjoy domain and \(f\) one of its universal coverings, then \(\log f^{\prime}\in BMOA(\mathbb{D})\)._
Proof: Let us thus consider a Carleson-homogeneous domain \(\Omega\) and let us consider a Fuchsian group \(G\) as constructed by Rubel and Ryff that uniformizes \(\Omega\).
**Lemma 5.1**.: _Let \(\mathcal{F}\) be the fundamental domain for \(G\) constructed by Rubel and Ryff. Then \(\mathcal{F}\) is a chord-arc domain._
Notice that this result is close to Gonzalez's, except that we specify the fundamental domain. Notice also that the lemma actually holds for more general uniformly perfect Denjoy domains.
Proof.: We start by recalling Rubel and Ryff's construction, see [15]. Let \(F\) be a compact subset of the unit circle, supposed to be symmetric with respect to \(\mathbb{R}\): its complement in the circle is a countable union of intervals \(I_{j}\). Our fundamental domain \(\mathcal{F}\) will be the hyperbolic convex hull of \(F\), which is the domain obtained by replacing each \(I_{j}\) by the hyperbolic geodesic \(L_{j}\) with the same endpoints. Now let \(f\) be a Riemann map from \(\mathcal{F}\cap\{y>0\}\) onto the half-plane \(\{y>0\}\) fixing \(-1\) and \(1\). We extend \(f\) by Schwarz reflection to \(\mathcal{F}\) as an isomorphism from \(\mathcal{F}\) onto \((\mathbb{C}\backslash\mathbb{R})\cup(-1,1)\). We finally extend \(f\) to all of the unit disk by successive Schwarz reflections using the group \(G\) generated by the \(S\circ R_{j}\), with \(S(z)=\bar{z}\) and \(R_{j}\) being the reflection across \(L_{j}\). This extended function is then the universal covering of some Denjoy domain, and Rubel and Ryff have shown that for every \(K\subset\bar{\mathbb{R}}\) a universal covering of \(\Omega=\bar{\mathbb{C}}\backslash K\) may be obtained by this method with some \(F\) as above.
For the rest of the proof of the lemma we will only use the fact that \(\Omega\) is uniformly perfect: Pommerenke [11] has proven that for such domains there exists \(c>0\) such that for every nontrivial \(\gamma\) belonging to a uniformizing Fuchsian group, we have
\[\operatorname{Trace}(\gamma)\geq 2+c.\]
Now the boundary of \(\mathcal{F}\) is the graph of a function in polar coordinates. So we must show that for any interval \(I\) of the unit circle, the length of the part of the graph lying over \(I\) is controlled by the distance between the two endpoints of this part of the graph.
Suppose first that the two endpoints of \(I\) lie in \(F\): then obviously the length of the graph above \(I\) is less than \(\pi|I|\), and \(|I|\) is in this case comparable to the distance between the endpoints. Suppose now that one of the endpoints of \(I\) is in \(F\) while the other is in some \(I_{j}\). Let \(C_{j}\) be the part of the graph lying over \(I_{j}\cap I\): if
\[\operatorname{length}\left(C_{j}\right)\leq 10|I|,\]
we are essentially back to the preceding case. If
\[\operatorname{length}\left(C_{j}\right)\geq 10|I|,\]
then the total length over \(I\) is controlled by \(\operatorname{length}(C_{j})\) and we are also done.
There remains the most interesting case, i.e. when both endpoints of \(I\) lie in, say, \(I_{j},I_{k}\) with \(k\neq j\) (the case \(k=j\) is obvious). For simplicity of the computations we work in the upper half-plane instead of the disk. We denote by \(r_{j},r_{k}\) the lengths of
the intervals \(I_{j},I_{k}\) and by \(\varepsilon\) the distance between the two intervals. A lengthy but elementary computation shows that, if \(R_{j},R_{k}\) denote the reflections through \(L_{j},L_{k}\), then
\[\operatorname{Trace}\left(R_{j}R_{k}\right)=2+\frac{2\varepsilon}{\frac{1}{r_ {j}}+\frac{1}{r_{k}}}.\]
Suppose without loss of generality that \(r_{j}\leq r_{k}\). Then by Pommerenke's result
\[|I|\geq\varepsilon\geq\frac{cr_{j}}{2}\]
and we are back to a preceding case. This completes the proof of the lemma.
With this lemma, the proof of Theorem 5.1 is now an immediate corollary of Theorem 4.2 and of Fernandez's result [6]. Indeed, the image \(U\) of \(\mathcal{F}\) by \(f\) is of the form \(\mathbb{C}\backslash\mathbb{R}\cup(a,b)\) for some \(a<b\in\mathbb{R}\), which is obviously an Ahlfors-David regular domain, and Fernandez proved that \(G\) satisfies \((H)\), which is equivalent to \((SFLT)\) by Proposition 4.1.
We believe that the converse of the last theorem is true, i.e. if \(\log f^{\prime}\in BMOA(\mathbb{D})\) for all universal coverings of a Denjoy domain then this domain is Carleson-homogeneous: one possible way to prove this would be to show that on each circular arc the map \(f\) "behaves" like the corresponding Joukovsky map \(f_{0}\), thus allowing one to get a lower bound of the form
\[\iint_{\mathcal{F}}(1-|z|^{2})^{3}|\gamma^{\prime}(z)|dxdy\geq c\mathrm{diam} (\gamma(\partial\mathcal{F}))\]
uniformly on \(\gamma\in G.\) But despite many efforts we could not make this concrete.
|
2306.03702 | Bayesian post-hoc regularization of random forests | Random Forests are powerful ensemble learning algorithms widely used in
various machine learning tasks. However, they have a tendency to overfit noisy
or irrelevant features, which can result in decreased generalization
performance. Post-hoc regularization techniques aim to mitigate this issue by
modifying the structure of the learned ensemble after its training. Here, we
propose Bayesian post-hoc regularization to leverage the reliable patterns
captured by leaf nodes closer to the root, while potentially reducing the
impact of more specific and potentially noisy leaf nodes deeper in the tree.
This approach allows for a form of pruning that does not alter the general
structure of the trees but rather adjusts the influence of leaf nodes based on
their proximity to the root node. We have evaluated the performance of our
method on various machine learning data sets. Our approach demonstrates
competitive performance with the state-of-the-art methods and, in certain
cases, surpasses them in terms of predictive accuracy and generalization. | Bastian Pfeifer | 2023-06-06T14:15:29Z | http://arxiv.org/abs/2306.03702v1 | # Bayesian post-hoc regularization of random forests
###### Abstract
Random Forests are powerful ensemble learning algorithms widely used in various machine learning tasks. However, they have a tendency to overfit noisy or irrelevant features, which can result in decreased generalization performance. Post-hoc regularization techniques aim to mitigate this issue by modifying the structure of the learned ensemble after its training.
Here, we propose _Bayesian post-hoc regularization_ to leverage the reliable patterns captured by leaf nodes closer to the root, while potentially reducing the impact of more specific and potentially noisy leaf nodes deeper in the tree. This approach allows for a form of pruning that does not alter the general structure of the trees but rather adjusts the influence of leaf nodes based on their proximity to the root node. We have evaluated the performance of our method on various machine learning data sets. Our approach demonstrates competitive performance with the state-of-the-art methods and, in certain cases, surpasses them in terms of predictive accuracy and generalization.
Random forest, Feature importance, explainable AI, Regularization
## I Introduction
Post-regularization techniques for random forests refer to methods used to reduce overfitting and improve the generalization performance of random forest models. Random forests are powerful ensemble learning algorithms that combine multiple decision trees to make predictions. However, they can still suffer from overfitting, especially when the trees in the forest become highly complex and tailored to the training data. Post-regularization techniques aim to address this issue by modifying or refining the random forest model after the initial training phase. These techniques typically focus on adjusting the complexity of individual trees or applying ensemble-level modifications. Some commonly used post-regularization techniques for random forests include:
**Pruning:** Pruning involves removing unnecessary branches or nodes from individual trees to simplify their structure. This helps prevent overfitting and promotes better generalization by reducing the complexity of the trees [1].
**Feature selection:** Random forests can sometimes include irrelevant or redundant features, which can degrade performance. Feature selection techniques aim to identify and remove such features from the model, allowing it to focus on the most informative ones and potentially reducing overfitting [2][3].
**Calibration:** Calibration techniques aim to refine the predicted probabilities of random forests to better align with the true class probabilities. This can be particularly useful in tasks where reliable probability estimates are important, such as in certain risk assessment or medical diagnosis scenarios [4].
These post-regularization techniques provide various approaches to combat overfitting and enhance the generalization ability of random forest models. By incorporating these techniques into the random forest
workflow, practitioners can often achieve better performance and more reliable predictions on unseen data.
Here, we present a Bayesian post-hoc regularization method for the calibration of the decision trees' leaf node probabilities. Our method is implemented within the Python package _TreeSmoothing_, which seamlessly interfaces with sklearn functionalities and thus can be employed on any trained tree-based classifier.
## II Related Work - Hierarchical Shrinkage
Agarwal et al. (2022) [4] proposed a post-hoc regularization technique known as _Hierarchical Shrinkage_ (HS). Unlike modifying the tree structure, HS focuses on shrinking tree predictions and adjusting sample weights during training. This additional regularization improves generalization performance and allows for smaller ensembles without sacrificing accuracy. HS also enhances post-hoc interpretations by reducing noise in feature importance measures, leading to more reliable and robust interpretations. The method replaces the average prediction of a leaf node with a weighted average of the mean responses of the leaf and its ancestors, controlled by a regularization parameter \(\lambda\) as defined in Eq. (2).
The following is a brief summary of the ideas proposed in [4]. Assume that we are given a training set \(\mathcal{D}_{n}=(X;y)\). Our goal is to learn a tree model \(\hat{f}\) that accurately represents the regression function based on this training data. Given a query point \(\mathbf{x}\), let \(t_{L}\subset t_{L-1}\subset\cdots\subset t_{0}\) denote its leaf-to-root path, with \(t_{L}\) and \(t_{0}\) representing its leaf node and the root node respectively. For any node \(t\), let \(N(t)\) denote the number of samples it contains, and \(\hat{\mathbb{E}}_{t}\{y\}\) the average response. The tree model prediction can be written as the telescoping sum
\[\hat{f}(\mathbf{x})=\hat{\mathbb{E}}_{t_{0}}\{y\}+\sum_{l=1}^{L}\left(\hat{ \mathbb{E}}_{t_{l}}\{y\}-\hat{\mathbb{E}}_{t_{l-1}}\{y\}\right) \tag{1}\]
HS transforms \(\hat{f}\) into a shrunk model \(\hat{f}_{\lambda}\) via the formula:
\[\hat{f}_{\lambda}(\mathbf{x}):=\hat{\mathbb{E}}_{t_{0}}\{y\}+\sum_{l=1}^{L} \frac{\hat{\mathbb{E}}_{t_{l}}\{y\}-\hat{\mathbb{E}}_{t_{l-1}}\{y\}}{1+\lambda /N(t_{l-1})}, \tag{2}\]
where \(\lambda\) is a hyperparameter chosen by the user, for example by cross validation. HS maintains the tree structure, and only modifies the prediction over each leaf node.
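To make the telescoping shrinkage concrete, here is a minimal Python sketch of Eq. (2); it assumes the per-node mean responses and sample counts along a root-to-leaf path have already been extracted from a fitted tree (the function and argument names are illustrative, not the API of any package).

```python
def hs_prediction(path_means, path_counts, lam):
    """Hierarchical Shrinkage, Eq. (2): shrink each telescoping increment
    along the root-to-leaf path t_0, ..., t_L by 1 + lam / N(t_{l-1}).

    path_means  -- mean response E_{t_l}[y] for l = 0, ..., L (root first)
    path_counts -- sample counts N(t_l) for l = 0, ..., L
    lam         -- regularization strength lambda (chosen, e.g., by CV)
    """
    pred = path_means[0]  # the root mean is left unshrunk
    for l in range(1, len(path_means)):
        increment = path_means[l] - path_means[l - 1]
        pred += increment / (1.0 + lam / path_counts[l - 1])
    return pred

# lam = 0 recovers the plain leaf mean; lam -> infinity collapses the
# prediction to the root mean (maximal regularization).
print(hs_prediction([0.4, 0.6, 0.9], [100, 40, 10], lam=25.0))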
## III Proposed approach: Bayesian post-hoc regularization
### _Intuition of approach_
The core idea of our approach is inspired by the pruning concept, where the complexity of the trees is reduced by decreasing their depth. Here, we do not aim to manipulate the general structure of the trees, but we adapt the leaf node probabilities such that more weight is given to nodes near the root node (pruning by calibration). Following this idea, we propose to update a conjugate Beta prior \(\mathbf{B}_{prior}(\alpha,\beta)\) from the root node to the leaf nodes by successively adding the number of classified samples to the model parameters \(\alpha\) (class 0) and \(\beta\) (class 1). The leaf node probabilities are determined using the probability of observing a specific class given the inferred posterior Beta distribution \(\mathbf{B}_{posterior}(\alpha,\beta)\).
### _Mathematical formulation_
\[\mathbf{B}_{posterior}(\alpha,\beta)=\mathbf{B}_{prior}(\alpha,\beta)+\sum_{ l=0}^{L}\mathbf{B}_{t_{l}}(\alpha+N_{0}(t_{l}),\beta+N_{1}(t_{l})), \tag{3}\]
where \(N_{0}(t_{l})\) refers to the number of samples classified as class 0, and \(N_{1}(t_{l})\) is the number of samples classified as class 1 at node \(t_{l}\).
The leaf node probabilities are calculated as
\[\hat{f}_{\alpha,\beta}(\mathbf{x})=\mathbf{PPF}(\frac{\alpha}{\alpha+\beta}| \mathbf{B}_{posterior}(\alpha,\beta)), \tag{4}\]
where \(\mathbf{PPF}\) is percent point function (inverse of the cumulative distribution function - percentiles).
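A minimal sketch of Eqs. (3)-(4) in Python follows; it assumes the per-node class counts along the root-to-leaf path are available, and it reads the \(\alpha,\beta\) inside the PPF quantile as the prior parameters (one plausible reading of the notation above). The names are illustrative, not the TreeSmoothing API.

```python
from scipy.stats import beta as beta_dist

def bayes_leaf_probability(path_counts, a_prior, b_prior):
    """Bayesian post-hoc leaf calibration, Eqs. (3)-(4).

    path_counts      -- list of (N0, N1) class counts at t_0, ..., t_L
    a_prior, b_prior -- Beta prior parameters (alpha, beta)

    Returns the calibrated leaf probability for class 0 (the class
    associated with alpha in the paper's convention).
    """
    a, b = a_prior, b_prior
    for n0, n1 in path_counts:   # Eq. (3): accumulate counts node by node
        a += n0
        b += n1
    # Eq. (4): percent point function of the posterior, evaluated at the
    # quantile alpha / (alpha + beta) of the prior (our assumption).
    q = a_prior / (a_prior + b_prior)
    return beta_dist.ppf(q, a, b)

# A strong symmetric prior (alpha = beta) pulls small, deep leaves
# towards the class balance accumulated on the path from the root.
print(bayes_leaf_probability([(60, 40), (20, 30), (2, 8)], 100, 100))
```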
## IV Evaluation
### _Data sets_
We assessed the accuracy of our proposed methodology on four machine learning benchmark datasets (see Table I), downloaded using the Python packages imodels [5] and PMLB [6].
### _Evaluation strategy_
In a first experiment, we used grid search based on 5-fold cross-validation to infer the optimal values for the hyperparameters. We then performed 5-fold cross-validation and calculated the mean accuracy. This procedure was repeated 20 times, and we report results based on balanced accuracy and ROC-AUC.
In a second experiment, we split the data into a train and a test dataset. On the train dataset, 5-fold cross-validation was performed to tune the hyperparameters. The tuned model was then tested on the independent test dataset. The described procedure was repeated 20 times, and we again report results based on balanced accuracy and ROC-AUC as performance metrics.
In the case of the _Bayesian regularization_ technique proposed here, the model-specific \(\alpha\) and \(\beta\) hyperparameters were grid-searched within \([1500,1000,800,500,100,50,30,10,1]\). For the _Hierarchical Shrinkage_ method we used \(\lambda=[0.001,0.01,0.1,1,10,25,50,100,200]\).
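The second protocol can be sketched with standard scikit-learn primitives as below; `make_regularized_forest` is a hypothetical factory standing in for a TreeSmoothing-wrapped classifier (the package's real API may differ).

```python
from sklearn.model_selection import GridSearchCV, StratifiedKFold, train_test_split
from sklearn.metrics import balanced_accuracy_score

def evaluate_once(X, y, make_regularized_forest, param_grid, seed):
    # Hold out an independent test set; tune only on the training split.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, stratify=y, random_state=seed)
    search = GridSearchCV(make_regularized_forest(), param_grid,
                          cv=StratifiedKFold(5), scoring="balanced_accuracy")
    search.fit(X_tr, y_tr)
    return balanced_accuracy_score(y_te, search.best_estimator_.predict(X_te))

# Repeat 20 times and average; ROC-AUC is handled analogously with
# scoring="roc_auc" and roc_auc_score on the held-out predictions.
# scores = [evaluate_once(X, y, make_forest, grid, s) for s in range(20)]
```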
### _Metrics used for evaluation_
The _Balanced Accuracy_ is a performance metric used in classification tasks to measure the accuracy of a model by taking into account the imbalance in the class distribution. It is defined as the average of the per-class accuracies, where the per-class accuracy is the ratio of the correctly classified instances to the total number of instances in that class. The formula for calculating Balanced Accuracy is given by:
\[\text{Balanced Accuracy}=\frac{1}{N_{c}}\sum_{i=1}^{N_{c}}\frac{TP_{i}}{TP_{ i}+FN_{i}} \tag{5}\]
where \(N_{c}\) is the number of classes, \(TP_{i}\) represents the number of true positive instances in class \(i\), and \(FN_{i}\) represents the number of false negative instances in class \(i\). The Balanced Accuracy ranges from 0 to 1, where 1 indicates perfect classification performance and 0 indicates random classification.
The _Receiver Operating Characteristic Area Under the Curve_ (ROC-AUC) quantifies the performance of a model by measuring the area under the receiver operating characteristic curve. The ROC-AUC is
**TABLE I: Benchmark datasets**

| Datasets | Sample size | Features | Class 0 | Class 1 |
| --- | --- | --- | --- | --- |
| Breast cancer | 286 | 9 | 196 | 81 |
| Habermann | 306 | 3 | 81 | 225 |
| Heart | 270 | 15 | 150 | 120 |
| Diabetes | 768 | 8 | 500 | 268 |
computed as the area under the ROC curve, which ranges from 0 to 1. An ROC-AUC score of 1 indicates a perfect classifier, while a score of 0.5 suggests a random classifier. A higher ROC-AUC score indicates better discriminative ability of the model in distinguishing between the positive and negative classes.
To calculate the ROC-AUC, various methods can be used, including the trapezoidal rule or the Mann-Whitney U statistic. The formula for calculating ROC-AUC using the trapezoidal rule is given by:
\[\text{ROC-AUC}=\int_{0}^{1}\text{TPR}(FPR^{-1}(t))\,dt \tag{6}\]
where \(\text{TPR}(FPR^{-1}(t))\) represents the true positive rate at the threshold corresponding to the inverse of the false positive rate \(t\), and \(FPR^{-1}(t)\) is the inverse of the false positive rate. The ROC-AUC provides a single scalar value that summarizes the model's performance across all possible classification thresholds.
The difference between the two metrics is that Balanced Accuracy focuses on the accuracy of individual classes, accounting for class imbalance, while ROC-AUC evaluates the overall discriminative ability of a classifier across all possible classification thresholds, providing a single scalar value. Both metrics have their own significance and are used in different contexts based on the requirements of the classification problem at hand.
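Both metrics are available in scikit-learn; the toy example below illustrates the difference between them (values checked by hand: the balanced accuracy is \((1/2+2/3)/2\approx 0.583\), and the AUC counts \(5\) of \(6\) concordant positive-negative pairs, \(\approx 0.833\)).

```python
from sklearn.metrics import balanced_accuracy_score, roc_auc_score

y_true = [0, 0, 1, 1, 1]
y_prob = [0.1, 0.6, 0.3, 0.8, 0.9]          # predicted P(class 1)
y_pred = [int(p >= 0.5) for p in y_prob]    # threshold at 0.5

# Eq. (5): mean of per-class recalls -> (1/2 + 2/3) / 2
print(balanced_accuracy_score(y_true, y_pred))
# Eq. (6): area under the ROC curve -> 5/6 concordant pairs
print(roc_auc_score(y_true, y_prob))
```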
## V Results and discussion
The results on the four benchmark datasets analyzed here suggest that our approach is competitive with Hierarchical Shrinkage in terms of ROC-AUC values. When Balanced Accuracy is used as a performance metric, however, our Bayesian post-hoc regularization procedure is superior in most cases (see Figures 1-4). In all cases, the post-hoc regularization techniques under investigation clearly improve the accuracy of the vanilla random forest. We are currently working on an in-depth evaluation of the proposed Bayesian regularization technique, while also extending our method to regression trees.
## VI Data availability
The proposed methodology is implemented within the Python package _TreeSmoothing_, freely available on GitHub ([https://github.com/pievos101/TreeSmoothing](https://github.com/pievos101/TreeSmoothing)).
## VII Acknowledgments
We would like to thank Arne Gevaert, Martin Uhrschler, and Markus Loecher for helpful discussions.
## References
* [1] B. Liu and R. Mazumder, "ForestPrune: Compact depth-pruned tree ensembles," in _Proceedings of The 26th International Conference on Artificial Intelligence and Statistics_, ser. Proceedings of Machine Learning Research, F. Ruiz, J. Dy, and J.-W. van de Meent, Eds., vol. 206. PMLR, 25-27 Apr 2023, pp. 9417-9428. [Online]. Available: [https://proceedings.mlr.press/v206/liu23h.html](https://proceedings.mlr.press/v206/liu23h.html)
* [2] M. B. Kursa and W. R. Rudnicki, "Feature selection with the Boruta package," _Journal of statistical software_, vol. 36, pp. 1-13, 2010.
* [3] B. Pfeifer, A. Holzinger, and M. G. Schimek, "Robust random forest-based all-relevant feature ranks for trustworthy AI," _Studies in Health Technology and Informatics_, vol. 294, pp. 137-138, 2022.
* [4] A. Agarwal, Y. S. Tan, O. Ronen, C. Singh, and B. Yu, "Hierarchical shrinkage: Improving the accuracy and interpretability of tree-based models." in _International Conference on Machine Learning_. PMLR, 2022, pp. 111-135.
* [5] C. Singh, K. Nasseri, Y. S. Tan, T. Tang, and B. Yu, "imodels: a python package for fitting interpretable models," _Journal of Open Source Software_, vol. 6, no. 61, p. 3192, 2021. [Online]. Available: [https://doi.org/10.21105/joss.03192](https://doi.org/10.21105/joss.03192)
* [6] J. D. Romano, T. T. Le, W. La Cava, J. T. Gregg, D. J. Goldberg, P. Chakraborty, N. L. Ray, D. Himmelstein, W. Fu, and J. H. Moore, "Pmlb v1.0: an open source dataset collection for benchmarking machine learning methods," _arXiv preprint arXiv:2012.00058v2_, 2021.
|
2305.15430 | Bounded Projection Matrix Approximation with Applications to Community
Detection | Community detection is an important problem in unsupervised learning. This
paper proposes to solve a projection matrix approximation problem with an
additional entrywise bounded constraint. Algorithmically, we introduce a new
differentiable convex penalty and derive an alternating direction method of
multipliers (ADMM) algorithm. Theoretically, we establish the convergence
properties of the proposed algorithm. Numerical experiments demonstrate the
superiority of our algorithm over its competitors, such as the semi-definite
relaxation method and spectral clustering. | Zheng Zhai, Hengchao Chen, Qiang Sun | 2023-05-21T06:55:10Z | http://arxiv.org/abs/2305.15430v1 | # Bounded Projection Matrix Approximation with Applications to Community Detection
###### Abstract
Community detection is an important problem in unsupervised learning. This paper proposes to solve a projection matrix approximation problem with an additional entrywise bounded constraint. Algorithmically, we introduce a new differentiable convex penalty and derive an alternating direction method of multipliers (ADMM) algorithm. Theoretically, we establish the convergence properties of the proposed algorithm. Numerical experiments demonstrate the superiority of our algorithm over its competitors, such as the semi-definite relaxation method and spectral clustering.
Projection matrix approximation, boundedness, convex relaxation, ADMM, community detection.
## I Introduction
Community detection is an important problem in unsupervised learning that has attracted the attention of researchers from various fields, such as mathematics, statistics, applied mathematics, physics, and social sciences. The goal of this problem is to partition \(n\) data points into \(K\) groups based on their pairwise similarities, which can be represented as a similarity matrix \(A\in\mathbb{R}^{n\times n}\). A common approach to solve this problem is to first derive a lower-dimensional representation [1, 2, 3, 4] of the data from \(A\) and then apply a clustering algorithm such as \(k\)-means [5] or EM algorithm [6] to identify the clusters. The efficacy of this method is contingent on the quality of the data representation.
A popular choice for the data representation is to use the top \(K\) eigenvectors of \(A\) as in spectral clustering [7]. Finding these eigenvectors is equivalent, up to rotations, to determining the subspace spanned by these vectors. The latter is also equivalent to the following projection matrix approximation problem:
\[X=\operatorname*{argmin}_{X\in\mathcal{P}_{K}}\|A-X\|_{\mathrm{F}}^{2}, \tag{1}\]
where \(\mathcal{P}_{K}\subseteq\mathbb{R}^{n\times n}\) is the set of rank-\(K\) projection matrices. Thus, the effectiveness of spectral clustering is highly dependent on the quality of the projection matrix approximation.
The unconstrained projection matrix approximation (1) may be less effective when extra information is available. In community detection, for instance, an intermediate step is to estimate the projection matrix \(X\) associated with the assignment matrix \(\Theta\in\{0,1\}^{n\times K}\), where \(\Theta_{ik}=1\) if and only if node \(i\) belongs to group \(k\); see Section IV for details. Such a projection matrix has certain structures:
1. \(X\) has non-negative elements.
2. \(X\) has elements upper bounded by \(\max_{k}\frac{1}{n_{k}}\), where \(n_{k}\) is the size of the \(k\)-th group.
Hence, it may be beneficial to seek projection matrices with these desired structures enforced. Inspired by this example, we propose to study the following bounded projection matrix approximation (BPMA) problem:
\[X=\operatorname*{argmin}_{\substack{X\in\mathcal{P}_{K}\\ \alpha\leq X_{ij}\leq\beta}}\|A-X\|_{\mathrm{F}}^{2}, \tag{2}\]
where \(\alpha,\beta\in\mathbb{R}\) are lower and upper bounds set _a priori_. In the above example, we simply set \(\alpha=0\) and \(\beta=\max_{k}\frac{1}{n_{k}}\).
Due to the projection matrix and boundedness constraints, it is challenging to solve (2) directly. To address this difficulty, this paper proposes a new differentiable convex penalty to relax the boundedness constraint. We employ the alternating direction method of multipliers (ADMM) [8, 9] to solve this relaxed problem. Moreover, we show that any limiting point of the solution sequence is a stationary point of the relaxed problem. Finally, we apply the proposed method to community detection and demonstrate its superiority over its competitors in both synthetic and real world datasets.
### _Related Work_
Low-rank matrix optimization with additional structural constraints is a common problem in machine learning and signal processing [10, 11]. The problem aims to find the best low-rank matrix approximation that also satisfies certain structural constraints, such as non-negativity, symmetry, boundedness, and sparsity. One line of research studies the matrix factorization approach, such as non-negative matrix factorization [12], semi-nonnegative matrix factorization [13], bounded low-rank matrix approximation [14]. Another line of research studies simultaneously low-rank and sparse matrix approximation [15, 16, 17]. These works, however, only seek a low-rank matrix, which is not necessarily a projection matrix. In contrast, motivated by the problem of community detection, our paper studies the projection matrix approximation problem with additional boundedness constraints. We then propose an ADMM algorithm, and prove the convergence properties.
## II Bounded Projection Matrix Approximation
In this section, we study how to solve (2). First, we relax the BPMA problem (2) using a differentiable convex penalty. Then we derive the ADMM algorithm that can efficiently solve the relaxed BPMA problem.
### _Differentiable Convex Penalty_
To start with, we define the following indicator function:
\[I_{\alpha,\beta}(x)=\left\{\begin{array}{ll}+\infty,&x<\alpha\\ 0,&\alpha\leq x\leq\beta\\ +\infty,&x>\beta.\end{array}\right.\]
Then we can rewrite the BPMA problem (2) as the following optimization problem:
\[X=\operatorname*{argmin}_{X\in\mathcal{P}_{K}}\|A-X\|_{\mathrm{F}}^{2}+\sum_{ ij}I_{\alpha,\beta}(X_{ij}). \tag{3}\]
It is challenging to solve problem (3) due to the discontinuity of the penalty function \(I_{\alpha,\beta}(\cdot)\). To alleviate this issue, we propose to replace \(I_{\alpha,\beta}(\cdot)\) by \(\lambda g_{\alpha,\beta}(\cdot)\) and solve
\[X=\operatorname*{argmin}_{X\in\mathcal{P}_{K}}\|A-X\|_{\mathrm{F}}^{2}+\lambda \sum_{ij}g_{\alpha,\beta}(X_{ij}), \tag{4}\]
where \(\lambda>0\) is a tuning parameter and \(g_{\alpha,\beta}(\cdot)\) is a differentiable convex penalty function given by
\[g_{\alpha,\beta}(x)=(\min\{x-\alpha,0\})^{2}+(\min\{\beta-x,0\})^{2}. \tag{5}\]
Since \(g_{\alpha,\beta}(\cdot)\) is non-negative and \(g_{\alpha,\beta}(x)=0\) if and only if \(x\in[\alpha,\beta]\), problem (4) reduces to problem (3) when \(\lambda\to\infty\). We shall refer to problem (4) as the relaxed bounded projection matrix approximation (RBPMA) problem.
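As a sketch, Eq. (5) translates directly into a vectorized NumPy function, applied entrywise to the matrix iterates of the algorithm below.

```python
import numpy as np

def g(x, alpha, beta):
    """Differentiable convex penalty of Eq. (5): zero on [alpha, beta],
    quadratic outside the box; works entrywise on arrays."""
    return np.minimum(x - alpha, 0.0) ** 2 + np.minimum(beta - x, 0.0) ** 2
```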
### _Algorithm_
In this subsection, we develop an ADMM algorithm to solve the RBPMA problem. First, we define the augmented Lagrangian \(\mathcal{L}_{\rho}(X,Y,\Lambda)\) as
\[\mathcal{L}_{\rho}(X,Y,\Lambda)= \|A-X\|_{\mathrm{F}}^{2}+\lambda\sum_{ij}g_{\alpha,\beta}(Y_{ij}) +\frac{\rho}{2}\|X-Y\|_{\mathrm{F}}^{2}\] \[+\langle\Lambda,X-Y\rangle,\qquad\forall X,Y,\Lambda\in\mathbb{R }^{n\times n}.\]
Starting from initialization points \(\{X^{0},Y^{0},\Lambda^{0}\}\), our algorithm updates \(\{X^{k},Y^{k},\Lambda^{k}\}\) alternatively as:
\[X^{k+1} =\operatorname*{argmin}_{X\in\mathcal{P}_{K}}\mathcal{L}_{\rho}(X,Y^{k},\Lambda^{k}), \tag{6}\] \[Y^{k+1} =\operatorname*{argmin}_{Y}\mathcal{L}_{\rho}(X^{k+1},Y,\Lambda^ {k}),\] (7) \[\Lambda^{k+1} =\Lambda^{k}+\rho(X^{k+1}-Y^{k+1}). \tag{8}\]
These updates have closed-form solutions and thus can be implemented efficiently. Specifically, problem (6) is equivalent to the following problem:
\[X^{k+1}=\operatorname*{argmax}_{X\in\mathcal{P}_{K}}\langle X,W^{k}\rangle, \quad W^{k}=2A+\rho Y^{k}-\Lambda^{k},\]
and \(X^{k+1}\) is given by the projection matrix associated with the leading \(K\) eigenvectors of \(W^{k}\). Problem (7) is equivalent to:
\[Y^{k+1}=\operatorname*{argmin}_{Y}\|Y-V^{k+1}\|_{\mathrm{F}}^{2}+\tau\sum_{ij }g_{\alpha,\beta}(Y_{ij}),\]
where \(V^{k+1}=X^{k+1}+\Lambda^{k}/\rho\) and \(\tau=2\lambda/\rho\). This is a separable problem, and each entry \(Y_{ij}^{k+1}\) can be solved by
\[Y_{ij}^{k+1}=\operatorname*{argmin}_{Y_{ij}}\left(Y_{ij}-V_{ij}^{k+1}\right)^{ 2}+\tau g_{\alpha,\beta}(Y_{ij}).\]
In a compact form, the solution \(Y^{k+1}\) can be written as
\[Y^{k+1}=\frac{V^{k+1}+\tau\mathcal{P}_{\alpha,\beta}(V^{k+1})}{1+\tau},\]
where \(\mathcal{P}_{\alpha,\beta}(\cdot)\) is an entrywise projection operator given by
\[\mathcal{P}_{\alpha,\beta}(V)=\min\{\max\{V,\alpha\},\beta\}.\]
Here \(\min\) and \(\max\) are defined entrywise.
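Putting the closed-form updates together, a minimal NumPy sketch of the ADMM iteration (6)-(8) reads as follows; it takes \(\rho=4\lambda\), the choice justified by the convergence theory of the next section, and initializes \(Y\) and \(\Lambda\) at zero for simplicity (the initialization is our assumption).

```python
import numpy as np

def rbpma_admm(A, K, lam, alpha, beta, n_iter=500):
    """ADMM for the relaxed BPMA problem (4), following updates (6)-(8)."""
    rho = 4.0 * lam
    tau = 2.0 * lam / rho          # = 0.5 for this choice of rho
    n = A.shape[0]
    Y = np.zeros((n, n))
    Lam = np.zeros((n, n))
    for _ in range(n_iter):
        # X-update (6): projection matrix of the top-K eigenvectors of W^k
        W = 2.0 * A + rho * Y - Lam
        _, vecs = np.linalg.eigh((W + W.T) / 2.0)   # ascending eigenvalues
        U = vecs[:, -K:]
        X = U @ U.T
        # Y-update (7): entrywise closed form with the clipping projection
        V = X + Lam / rho
        P = np.clip(V, alpha, beta)                 # P_{alpha,beta}(V)
        Y = (V + tau * P) / (1.0 + tau)
        # Dual update (8)
        Lam = Lam + rho * (X - Y)
    return X
```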
## III Convergence Theory
This section provides convergence properties of the proposed ADMM algorithm. We show that any limiting point of the solution sequence is a stationary point of problem (4). Our proof consists of three components. First, we show in Lemma 1 that the successive change of the dual variable \(\Lambda\) is controlled by that of \(Y\).
**Lemma 1**.: \(\|\Lambda^{k+1}-\Lambda^{k}\|_{\mathrm{F}}\leq 2\lambda\|Y^{k+1}-Y^{k}\|_{ \mathrm{F}}\)_._
Proof.: By definition (8) of \(\Lambda^{k+1}\), we have
\[\Lambda^{k+1}_{ij}=\Lambda^{k}_{ij}+\rho(X^{k+1}_{ij}-Y^{k+1}_{ij})=\lambda g ^{\prime}_{\alpha,\beta}(Y^{k+1}_{ij}), \tag{9}\]
where the second equality uses the fact that \(Y^{k+1}\) is a stationary point of \(\mathcal{L}_{\rho}(X^{k+1},Y,\Lambda^{k})\). By (5), we can compute
\[g^{\prime}_{\alpha,\beta}(x)=2\min\{x-\alpha,0\}-2\min\{\beta-x,0\}.\]
This is a Lipschitz continuous function with Lipschitz constant \(2\). Hence, we have
\[|\Lambda^{k+1}_{ij}-\Lambda^{k}_{ij}|=\lambda|g^{\prime}_{\alpha,\beta}(Y^{k+1 }_{ij})-g^{\prime}_{\alpha,\beta}(Y^{k}_{ij})|\leq 2\lambda|Y^{k+1}_{ij}-Y^{k}_{ij}|.\]
Summing over all \(i,j\), we prove the lemma.
Next, we show that \(\mathcal{L}_{\rho}(X^{k},Y^{k},\Lambda^{k})\) is decreasing in \(k\) and the difference is lower bounded by \(\lambda\|Y^{k+1}-Y^{k}\|_{\mathrm{F}}^{2}\) when we set \(\rho=4\lambda\).
**Lemma 2**.: _Let \(\rho=4\lambda\). The following inequality holds:_
\[D \coloneqq\mathcal{L}_{\rho}(X^{k},Y^{k},\Lambda^{k})-\mathcal{L}_{ \rho}(X^{k+1},Y^{k+1},\Lambda^{k+1})\] \[\geq\lambda\|Y^{k+1}-Y^{k}\|_{\mathrm{F}}^{2}.\]
Proof.: First, we write \(D=D_{1}+D_{2}\), where
\[D_{1} \coloneqq\mathcal{L}_{\rho}(X^{k},Y^{k},\Lambda^{k})-\mathcal{L} _{\rho}(X^{k+1},Y^{k},\Lambda^{k}),\] \[D_{2} \coloneqq\mathcal{L}_{\rho}(X^{k+1},Y^{k},\Lambda^{k})-\mathcal{L} _{\rho}(X^{k+1},Y^{k+1},\Lambda^{k+1}).\]
The term \(D_{1}\) is non-negative because of (6). For the term \(D_{2}\), we have
\[D_{2}=E_{1}+E_{2}+E_{3},\]
where
\[E_{1} =\frac{\rho}{2}\left(\|X^{k+1}-Y^{k}\|_{F}^{2}-\|X^{k+1}-Y^{k+1} \|_{F}^{2}\right),\] \[E_{2} =\langle\Lambda^{k},X^{k+1}-Y^{k}\rangle-\langle\Lambda^{k+1},X^{k+ 1}-Y^{k+1}\rangle,\] \[E_{3} =\lambda\sum_{ij}(g_{\alpha,\beta}(Y^{k}_{ij})-g_{\alpha,\beta}(Y^{ k+1}_{ij})).\]
The term \(E_{1}\) can be rewritten as
\[E_{1} =\frac{\rho}{2}\|Y^{k+1}-Y^{k}\|_{\mathrm{F}}^{2}+\rho\langle X^{k+1}-Y^{k+1},Y^{k+1}-Y^{k}\rangle\] \[=\frac{\rho}{2}\|Y^{k+1}-Y^{k}\|_{\mathrm{F}}^{2}+\underbrace{ \langle\Lambda^{k+1}-\Lambda^{k},Y^{k+1}-Y^{k}\rangle}_{E_{4}}.\]
The term \(E_{2}\) can be rewritten as
\[E_{2} =\langle\Lambda^{k}-\Lambda^{k+1},X^{k+1}-Y^{k+1}\rangle+\langle \Lambda^{k},Y^{k+1}-Y^{k}\rangle\] \[=-\frac{1}{\rho}\|\Lambda^{k+1}-\Lambda^{k}\|_{\mathrm{F}}^{2}+ \underbrace{\langle\Lambda^{k},Y^{k+1}-Y^{k}\rangle}_{E_{5}},\]
where the last equality uses (8). Since \(\Lambda^{k+1}_{ij}=\lambda g^{\prime}_{\alpha,\beta}(Y^{k+1}_{ij})\) by (9) and \(g_{\alpha,\beta}(\cdot)\) is a convex function, we have
\[E_{3}+E_{4}+E_{5}\] \[=\lambda\sum_{ij}\left[g_{\alpha,\beta}(Y^{k}_{ij})-g_{\alpha, \beta}(Y^{k+1}_{ij})-g^{\prime}_{\alpha,\beta}(Y^{k+1}_{ij})(Y^{k}_{ij}-Y^{k+ 1}_{ij})\right]\] \[\geq 0.\]
Combining the above analysis, we have
\[D \geq\frac{\rho}{2}\|Y^{k+1}-Y^{k}\|_{\mathrm{F}}^{2}-\frac{1}{ \rho}\|\Lambda^{k}-\Lambda^{k+1}\|_{\mathrm{F}}^{2}\] \[\geq\lambda\|Y^{k+1}-Y^{k}\|_{\mathrm{F}}^{2},\]
where we use Lemma 1 and \(\rho=4\lambda\).
Finally, we show that any limiting point of \(\{X^{k},Y^{k},\Lambda^{k}\}\) is a stationary point of problem (4).
**Theorem 1**.: _Let \(\rho=4\lambda\) and \(\{X^{*},Y^{*},\Lambda^{*}\}\) be a limiting point of \(\{X^{k},Y^{k},\Lambda^{k}\}\). Then the KKT conditions of problem (4) hold:_
\[\begin{cases}X^{*}=Y^{*},\\ X^{*}\ \text{is the projection matrix associated with the top }K\\ \quad\text{eigenvectors of }2A+\rho Y^{*}-\Lambda^{*},\\ \Lambda^{*}_{ij}=\lambda g^{\prime}_{\alpha,\beta}(Y^{*}_{ij})\ \text{for all }i,j.\end{cases}\]
## V Experiments
We compare the proposed RBPMA approach with other community detection methods, including SDP-1 [19], SDP-2 [20], and spectral clustering [7], on both synthetic and real-world datasets. The details of these algorithms are listed in Table I. We use the solution of problem (1) as the initial point \(X_{0}\) for RBPMA and set the hyper-parameter \(\lambda\) to a large enough constant, such as \(10^{8}\). After solving for \(X\), we apply an eigen-decomposition to compute its top \(K\) eigenvectors \(U\in\mathbb{R}^{n\times K}\). Then we normalize each row of \(U\) to get \(\tilde{U}\in\mathbb{R}^{n\times K}\). Finally, we perform \(k\)-means clustering on the rows of \(\tilde{U}\). We evaluate the clustering results using two standard criteria: accuracy (ACC) and normalized mutual information (NMI).
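The post-processing just described can be sketched as follows (a plain NumPy/scikit-learn transcription of the pipeline, not the authors' code):

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_from_projection(X, K, seed=0):
    """Top-K eigenvectors of the estimated projection matrix X,
    row normalization, then k-means on the normalized rows."""
    _, vecs = np.linalg.eigh((X + X.T) / 2.0)
    U = vecs[:, -K:]                                   # top-K eigenvectors
    norms = np.linalg.norm(U, axis=1, keepdims=True)
    U_tilde = U / np.maximum(norms, 1e-12)             # guard zero rows
    return KMeans(n_clusters=K, n_init=10,
                  random_state=seed).fit_predict(U_tilde)
```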
### _Synthetic Data_
For synthetic data, we set \(n=80\) and \(K=3\). The sizes of the three groups are \(30,20,30\), respectively. Let \(\Psi\in\mathbb{R}^{3\times 3}\) be the connectivity probability matrix with \(\Psi_{k\ell}=0.2,\,\forall k\neq\ell\), and \(\Psi_{kk}=0.49,\,\forall k\). In four other settings, we simply change \(\Psi_{kk}\) to each of \(\{0.46,0.43,0.40,0.37\}\). We generate the similarity matrix \(A\) such that \(A_{ij}\sim\mathrm{Bernoulli}(\Psi_{k\ell})\) if \(i\in S_{k}\) and \(j\in S_{\ell}\), where \(S_{k}\) denotes the index set of the \(k\)-th group.
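A possible generator for these synthetic similarity matrices, assuming a symmetric adjacency obtained from an upper-triangular Bernoulli draw (the handling of the diagonal is our assumption):

```python
import numpy as np

def sbm_similarity(sizes, p_in, p_out, rng=None):
    """A_ij ~ Bernoulli(Psi_kl) with Psi_kk = p_in, Psi_kl = p_out (k != l)."""
    if rng is None:
        rng = np.random.default_rng(0)
    labels = np.repeat(np.arange(len(sizes)), sizes)
    Psi = np.where(labels[:, None] == labels[None, :], p_in, p_out)
    A = np.triu(rng.binomial(1, Psi), 1).astype(float)
    return A + A.T, labels

# The setting above: three groups of sizes 30, 20, 30.
A, labels = sbm_similarity([30, 20, 30], p_in=0.49, p_out=0.2)
```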
We apply all four community detection algorithms to \(A\) and compute the corresponding ACC and NMI. In our RBPMA algorithm, we set \(\alpha=0\) and \(\beta=1/20\). This experiment is repeated 20 times for each \(\Psi\), and the average ACC and NMI are reported in Table II. The results show that our model outperforms its competitors uniformly. This agrees with the intuition that the quality of the rank-\(K\) projection matrix is improved by using the extra boundedness constraint.
### _Real Data_
This subsection compares RBPMA with SDP-1, SDP-2, and spectral clustering on the Coil10, Coil20, and handwritten digit datasets [21]. Digit5 consists of 1000 images of shape \(15\times 16\) from five groups, and Digit10 consists of 2000 images from 10 groups. Since all groups of the Coil and Digit datasets share the same size, we set \(\alpha=0\) and \(\beta=K/n\) in this experiment. The similarity matrix \(A\) is constructed using the Gaussian kernel: \(A_{ij}=\exp(-\|x_{i}-x_{j}\|_{2}^{2}/\sigma^{2})\), where \(x_{i}\) is the \(i\)-th data vector and \(\sigma^{2}=\frac{2}{n(n-1)}\sum_{i<j}\|x_{i}-x_{j}\|_{2}^{2}\) is the average of squared pairwise distances. Table III shows that RBPMA outperforms its competitors in terms of ACC and NMI on all Coil and Digit datasets.
|
2308.05807 | Constraints on interacting dark energy revisited: implications for the
Hubble tension | In this paper, we have revisited a class of coupled dark energy models where
dark energy interacts with dark matter via phenomenological interactions. We
included correction terms on the perturbation equations taking into account the
perturbation of the Hubble rate, which was absent in previous works. We also
consider more recent data sets such as cosmic microwave background (CMB)
anisotropies from \textit{Planck} 2018, type I-a supernovae (SNIa) measurements
from Pantheon+ and data from baryon acoustic oscillations (BAO), and redshift
space distortions (RSD). One of the models presents a strong incompatibility
when different cosmological datasets are used. We analyzed the influence of the
SH0ES Cepheid host distances on the results and, although for one model the
discrepancy of $H_0$ is reduced to $1.3\sigma$ when compared to $\Lambda$CDM
and $4.6\sigma$ when compared to the SH0ES team, joint analysis is
incompatible. Including BAO with RSD shows incompatibility with SH0ES for all
models considered here. We performed a model comparison, but there is no clear
preference for interacting dark energy over $\Lambda$CDM ($|\Delta \chi^2|<1$
for all the models for joint analysis CMB+BAO+RSD+SNIa). We conclude that the
models of interactions in the dark sector considered in this paper are not
flexible enough to fit all the cosmological data including values of $H_0$ from
SH0ES in a statistically acceptable way, either the models would need to be
modified to include further flexibility of predictions or that there remains a
tension in this coupled dark energy paradigm. | Gabriel A. Hoerning, Ricardo G. Landim, Luiza O. Ponte, Raphael P. Rolim, Filipe B. Abdalla, Elcio Abdalla | 2023-08-10T18:03:39Z | http://arxiv.org/abs/2308.05807v2 | # Constraints on interacting dark energy revisited: implications for the Hubble tension
###### Abstract
In this paper, we have revisited a class of coupled dark energy models where dark energy interacts with dark matter via phenomenological interactions. We included correction terms on the perturbation equations taking into account the perturbation of the Hubble rate, which was absent in previous works. We also consider more recent data sets such as cosmic microwave background (CMB) anisotropies from _Planck_ 2018, type I-a supernovae (SNIa) measurements from Pantheon+ and data from baryon acoustic oscillations (BAO), and redshift space distortions (RSD). One of the models presents a strong incompatibility when different cosmological datasets are used. We analyzed the influence of the SH0ES Cepheid host distances on the results and, although for one model the discrepancy of \(H_{0}\) is reduced to \(1.3\sigma\) when compared to \(\Lambda\)CDM and \(4.6\sigma\) when compared to the SH0ES team, joint analysis is incompatible. Including BAO with RSD shows incompatibility with SH0ES for all models considered here. We performed a model comparison, but there is no clear preference for interacting dark energy over \(\Lambda\)CDM (\(|\Delta\chi^{2}|<1\) for all the models for joint analysis CMB+BAO+RSD+SNIa). We conclude that the models of interactions in the dark sector considered in this paper are not flexible enough to fit all the cosmological data including values of \(H_{0}\) from SH0ES in a statistically acceptable way, either the models would need to be modified to include further flexibility of predictions or that there remains a tension in this coupled dark energy paradigm.
## I Introduction
The lack of a satisfactory theoretical explanation for the nature of the cosmological constant, used as the standard candidate for the late time accelerated expansion of the Universe [1], added to the existing tensions in the \(\Lambda\)-Cold Dark Matter (\(\Lambda\)CDM) model, such as the Hubble tension [2; 3; 4; 5], makes room for alternative explanations of Dark Energy (DE). There has been a plethora of alternative candidates to explain the cosmic acceleration, such as scalar and vector fields [6], metastable DE [7], holographic DE [8], models using extra dimensions [9], alternative fluids [10; 11], etc. Among the many possibilities, interacting DE (IDE) [12; 13; 14; 15; 16] can help alleviating the coincidence problem [17] and the Hubble tension [18; 19].
The interaction between Dark Matter (DM) and DE has been extensively explored in the literature, using different forms for the interaction. While the main aim is to have a canonical field theory description of the dark sector, this is still out of reach. A full description should include gravity; thus non-renormalizability is a severe constraint, and the problem possibly has to go through the construction of quantum gravity [20] (see also [21]).
A sub-class of IDE consists of an interaction at the background level proportional to a weighted sum of the energy densities of DM and DE (\(Q=H(\lambda_{1}\rho_{\rm dm}+\lambda_{2}\rho_{\rm de})\) - see [13] for a review). Latest constraints on the coupling constants were presented in [16], using _Planck_ 2015, while forecasts on such models have been produced for several upcoming observational programs [22].
Although such kernels for IDE have been widely investigated, none of the previous works took into account the perturbation of the Hubble rate in the perturbation equations. A similar situation occurred in works that assumed another form for the coupling (\(Q^{\mu}=\xi H\rho_{\rm de}u^{\mu}\)[23]), until the authors in [24] identified the neglected term from the Hubble rate and corrected the perturbation equations. However, such a correction has not been done yet for the interaction considered here. The presence of the perturbation of the Hubble rate is required by gauge invariance although one may simply guess that there is no physical reason to ignore such a term since the perturbation of the product of two variables acts as a chain rule.
In addition to the correction of the perturbation equations, we use more recent cosmological data sets than the ones assumed in [16] to constrain the models, including cosmic microwave background (CMB) anisotropy measurements from _Planck_ 2018 [25], type I-a supernovae (SNIa) measurements from Pantheon+ [26], and data from baryon acoustic oscillations (BAO) and redshift space distortions (RSD). Additionally, we performed some analyses using the SH0ES Cepheid host distance anchors in the likelihood (R22) [3], aiming to investigate whether or not these models can alleviate the Hubble tension. The tension arises from the \(5\sigma\) discrepancy between the value of \(H_{0}\) obtained by _Planck_ (\(67.4\pm 0.5\) km/s/Mpc) [2] and that derived by the SH0ES team (\(73.04\pm 1.04\) km/s/Mpc).
After a detailed analysis of the different kernels, we obtained more stringent constraints on these IDE models; however, for one model the 2D contours do not overlap when the combination of the CMB, BAO, RSD, and SNIa datasets is taken into account. After calculating the evidence, we found no strong preference for IDE models over \(\Lambda\)CDM. For one of the models we found \(H_{0}=68.11^{+0.21}_{-0.22}\) km/s/Mpc, a discrepancy of \(1.3\sigma\) compared to \(\Lambda\)CDM and \(4.6\sigma\) compared to R22.
This paper is organized in the following manner. Sec. II presents the models and the background and perturbation equations. In Sec. III we show the cosmological datasets used and the results. Sec. IV is reserved for conclusions. We use Natural units (\(c=\hbar=1\)) throughout the text.
## II Interacting dark energy models
Interaction in the dark sector can be a complex and difficult issue. We assume here that both DE and DM are described by ideal fluids - thus a non-Lagrangian description. In that case, with an interaction between DE and DM the energy-momentum tensor of each component is no longer individually conserved, but both tensors satisfy
\[\nabla_{\mu}T^{\mu\nu}_{(i)}=Q^{\nu}_{(i)}\,, \tag{1}\]
where the index \(i\) represents either the DM component (with index 'dm') or the DE component (with index 'de'). The four-vector \(Q^{\nu}_{(i)}\) regulates the flux between the two species of the dark sector and due to the Bianchi identities, the total energy-momentum tensor should be conserved, implying that \(Q^{\nu}_{\rm dm}=-Q^{\nu}_{\rm de}\). Many different phenomenological couplings \(Q^{\nu}\) have been used in the literature. Here we restrict to the class of models where the interaction is present only in the zeroth component of the coupling vector, i.e., \(Q^{i}_{\rm dm,de}=0\).
Assuming a Friedmann-Lemaitre-Robertson-Walker metric, the continuity equations for DE and DM are
\[\dot{\rho}_{\rm dm}+3\mathcal{H}\rho_{\rm dm} =a^{2}Q^{0}_{\rm dm}=aQ\,, \tag{2}\] \[\dot{\rho}_{\rm de}+3\mathcal{H}(1+w)\rho_{\rm de} =a^{2}Q^{0}_{\rm de}=-aQ\,, \tag{3}\]
where \(w\) is a constant DE equation of state, a dot represents a conformal time derivative, \(\mathcal{H}=aH\) is the Hubble rate for the conformal time and we assume the interaction to be given by \(Q=H(\lambda_{1}\rho_{\rm dm}+\lambda_{2}\rho_{\rm de})\)[27]. A positive \(Q\) corresponds to DE being transformed into DM (as in the case of the alleviation of the coincidence problem in [17]), while negative \(Q\) means the transformation in the opposite direction.
Considering the usual three different choices for the coupling constants (\(\{\lambda_{1}\neq 0,\,\lambda_{2}=0\}\), \(\{\lambda_{1}=0,\,\lambda_{2}\neq 0\}\), \(\{\lambda_{1}=\lambda_{2}\equiv\lambda\}\)) it is possible to find analytic solutions for the continuity equations above. For \(\lambda_{1}\neq 0\), \(\lambda_{2}=0\) we have
\[\rho_{\rm dm} =\rho_{\rm dm,0}a^{-3(1+w_{1}^{\rm eff})}\,, \tag{4}\] \[\rho_{\rm de} =\rho_{\rm de,0}a^{-3(1+w)}+\lambda_{1}\frac{\rho_{\rm dm,0}a^{-3 (1+w)}}{3(w-w_{1}^{\rm eff})}\bigg{[}1-a^{3(w-w_{1}^{\rm eff})}\bigg{]}\,, \tag{5}\]
where \(w_{1}^{\rm eff}=-\lambda_{1}/3\).
For \(\lambda_{1}=0\), \(\lambda_{2}\neq 0\) the solutions are [19]
\[\rho_{\rm dm} =\rho_{\rm dm,0}a^{-3}+\lambda_{2}\frac{\rho_{\rm de,0}a^{-3}}{3w _{2}^{\rm eff}}\bigg{[}1-a^{-3w_{2}^{\rm eff}}\bigg{]}\,, \tag{6}\] \[\rho_{\rm de} =\rho_{\rm de,0}a^{-3(1+w_{2}^{\rm eff})}\,, \tag{7}\]
where \(w_{2}^{\rm eff}=w+\lambda_{2}/3\).
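For numerical work, the analytic solutions translate directly into code; the sketch below implements Eqs. (6)-(7) for the \(\lambda_{1}=0,\ \lambda_{2}\neq 0\) case (valid for \(w_{2}^{\rm eff}\neq 0\)); the variable names are ours.

```python
def densities_lambda2(a, rho_dm0, rho_de0, w, lam2):
    """Background densities of Eqs. (6)-(7) for Q = lam2 * H * rho_de."""
    w2 = w + lam2 / 3.0                     # effective EoS w_2^eff
    rho_de = rho_de0 * a ** (-3.0 * (1.0 + w2))
    rho_dm = (rho_dm0 * a ** -3.0
              + lam2 * rho_de0 * a ** -3.0 / (3.0 * w2)
              * (1.0 - a ** (-3.0 * w2)))
    return rho_dm, rho_de

# lam2 -> 0 recovers the uncoupled scalings a^-3 and a^(-3(1+w)).
```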
The solution for the case \(\lambda_{1}=\lambda_{2}\equiv\lambda\) is [17]
\[\rho_{\rm dm} =w_{\rm eff}^{-1}\bigg{\{}\bigg{[}\bigg{(}1+w+\frac{\lambda}{3} \bigg{)}\rho_{\rm dm,0}+\frac{\lambda}{3}\rho_{\rm de,0}\bigg{]}(a^{S_{-}}-a^ {S_{+}})+\rho_{\rm dm,0}(S_{-}a^{S_{-}}-S_{+}a^{S_{+}})\bigg{\}}\,, \tag{8}\] \[\rho_{\rm de} =w_{\rm eff}^{-1}\bigg{\{}\bigg{[}\frac{\lambda}{3}\rho_{\rm dm,0}-\bigg{(}1-\frac{\lambda}{3}\bigg{)}\rho_{\rm de,0}\bigg{]}(a^{S_{+}}-a^{S_{- }})+\rho_{\rm de,0}(S_{-}a^{S_{-}}-S_{+}a^{S_{+}})\bigg{\}}\,, \tag{9}\]
where \(w_{\rm eff}=(w^{2}+4\lambda w/3)^{1/2}\) and \(S_{\pm}=-(1+w/2)\mp w_{\rm eff}/2\).
An advantage of the IDE models is that the coincidence problem can be alleviated. When the interaction is either proportional to \(\rho_{\rm dm}\) or \(\rho_{\rm dm}+\rho_{\rm de}\), the ratio \(\rho_{\rm dm}/\rho_{\rm de}\) reaches a plateau for very high or very low redshifts. Therefore, for these models, the ratio is practically constant. One can then use the continuity equations for DM and DE to obtain the solutions of the equation \(\dot{r}=0\), where \(r\equiv\rho_{\rm dm}/\rho_{\rm de}\). The two solutions are [27]
\[\lambda_{1}r_{+} =-\frac{3}{2}\bigg{(}w+\frac{\lambda_{1}}{3}+\frac{\lambda_{2}}{3} \bigg{)}+\frac{3}{2}\sqrt{w^{2}+\frac{2}{3}w(\lambda_{1}+\lambda_{2})+\frac{1}{9 }(\lambda_{1}-\lambda_{2})^{2}}\,, \tag{10}\] \[\lambda_{1}r_{-} =-\frac{3}{2}\bigg{(}w+\frac{\lambda_{1}}{3}+\frac{\lambda_{2}}{3} \bigg{)}-\frac{3}{2}\sqrt{w^{2}+\frac{2}{3}w(\lambda_{1}+\lambda_{2})+\frac{1}{9 }(\lambda_{1}-\lambda_{2})^{2}}\,. \tag{11}\]
These equations will be useful later.
The linear order perturbation equations for DM and DE can be found using the gauge-invariant equations presented in [23; 24; 28]. Using Eqs. (10) and (11), the initial conditions become, for \(Q\propto\rho_{\rm dm}\)
\[\delta^{(i)}_{\rm de}=\delta^{(i)}_{\rm dm}=\frac{3}{4}\delta^{(i)}_{r} \bigg{(}1-\frac{\lambda_{1}}{3}\bigg{)}\,, \tag{20}\]
while for \(Q\propto\rho_{\rm de}\) we have
\[\delta^{(i)}_{\rm de} =\frac{3}{4}\delta^{(i)}_{r}\bigg{(}1+w+\frac{\lambda_{2}}{3} \bigg{)}\,, \tag{21}\] \[\delta^{(i)}_{\rm dm} =\frac{3}{4}\delta^{(i)}_{r}\,. \tag{22}\]
The initial conditions for \(Q\propto\rho_{\rm dm}+\rho_{\rm de}\) remain the ones in Eqs. (17) and (18) with \(r\) given by Eq. (10) and \(\lambda_{1}=\lambda_{2}\).
In order to avoid complex or negative energy densities and early or late time instabilities, the couplings and the equation of state for DE should be restricted to the values presented in Table I [16; 27; 30]. Although the phantom behavior might be problematic because it leads to an increasing energy density in some cases, or to an inconsistent relativistic energy-momentum relation, if one uses a scalar field with the opposite sign kinetic term, we leave the equation of state free to reach values smaller than \(-1\) for the sake of completeness.
### Comparison with model \(Q^{\nu}=\xi H\rho_{\rm de}v_{\rm dm}^{\nu}\)
Before moving to the results, we present here the differences between the models we investigated and one well-studied model in the literature (which we will label as model IV). This scenario is presented in [19] and references therein and we just present the main differences in terms of equations, which will be used to show a comparison in the next section.
The interaction is proportional to the energy density of DE and to the CDM four-velocity \(u_{\rm dm}^{\nu}\). Among other reasons, this parametrization is chosen to avoid momentum transfer in the DM rest frame. At the background level, this model corresponds to our models I or II, but with \(w\simeq-1\) (\(w=-0.999\) in practice), with the energy density for DM and DE given by Equations (6) and (7), respectively.
Because the difference in the interactions only comes from the DM four-velocity, the equations for the DE overdensities are the same, Equations (12) and (14), with \(\lambda_{2}=\xi\) and \(v_{T}\sim 0\).1 Equations for the fluid velocities, on the other hand, become
Footnote 1: Because \(w\simeq-1\) and \(\Omega_{\rm rad}\) was neglected in Eq. (16) [19].
\[\dot{\theta}_{\rm dm}= -\mathcal{H}\theta_{\rm dm}\,, \tag{23}\] \[\dot{\theta}_{\rm de}= 2\mathcal{H}\theta_{\rm de}\bigg{[}1+\frac{\xi}{1+w}\bigg{(}1- \frac{\theta_{\rm dm}}{2\theta_{\rm de}}\bigg{)}\bigg{]}+\frac{k^{2}}{1+w} \delta_{\rm de}\,. \tag{24}\]
## III Data sets and results
In order to constrain the three different IDE models assumed here, we use the most recent data from different surveys: CMB anisotropies from _Planck_ 2018 high-\(\ell\) and low-\(\ell\) temperature and polarization power spectra (TT, TE, EE) [31] and lensing measurements [32], 1701 light curves of SNIa from Pantheon+ [26], BAO measurements from 6dFGS [33], MGS [34], BOSS DR12 [35], DES [36], eBOSS [37], WiggleZ [38] and RSD data from 6dFGS [39], Fastsound [40], GAMA [41] and WiggleZ [42]. We also included in some analyses the SH0ES Cepheid host distance anchors in the likelihood [3], denoting the joint constraints by '\(H_{0}\)'. We implemented the background and perturbation equations in a modified version of CLASS [19; 43]2 and used the code PLINY [44] to execute the Markov Chain Monte Carlo (MCMC) analysis with the Nested Sampling technique [45]. PLINY also produces an evidence calculation, though it becomes relevant only when comparing models with different parameterizations. For this, the Bayes ratio is employed, incorporating the priors of the models into the evaluation. This ratio is defined as:
Footnote 2: Available at [https://github.com/ricardoclandim/class_IDE_pheno](https://github.com/ricardoclandim/class_IDE_pheno).
\[\mathcal{B}=\frac{\mathcal{E}(\mathcal{D}|{\rm IDE})}{\mathcal{E}(\mathcal{D}| \Lambda{\rm CDM})}\,, \tag{25}\]
where \(\mathcal{E}(\mathcal{D}|M)\) represents the evidence of a model \(M\) considering the data \(\mathcal{D}\). According to Jeffrey's scale, a negative (positive) value of \(2\ln\mathcal{B}\) signifies an inclination towards IDE (\(\Lambda\)CDM).
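In practice the evidences come out of the nested sampler on a log scale, so the comparison statistic is simply a difference; a trivial helper, shown for completeness:

```python
def two_ln_bayes_ratio(ln_evidence_ide, ln_evidence_lcdm):
    """2 ln B of Eq. (25) from log-evidences; negative values lean
    towards IDE, positive towards LambdaCDM on Jeffreys' scale."""
    return 2.0 * (ln_evidence_ide - ln_evidence_lcdm)
```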
The cosmological parameters in our analysis are:
\[\left\{\Omega_{\rm b}h^{2},\Omega_{\rm c}h^{2},100\theta_{\rm s},\ln(10^{10}A_ {\rm s}),n_{\rm s},\tau,w,\lambda_{1(2)}\right\}. \tag{26}\]
The influence of curvature, neutrino masses, and the effective number of neutrinos is out of the scope of this work and we fix these parameters to their standard values: \(\Omega_{K}=0\), \(\Sigma m_{\nu}=0.06\) eV and \(N_{\rm eff}=3.046\), respectively. We use flat priors, indicated in Table 2, which have distinct boundaries far from the distribution's peak. The results, with best-fit and 68% values of the parameters, are shown in Tables 4, 5 and 6.
Contrary to \(\Lambda\)CDM, some cosmological parameters are not very well constrained when using only CMB data, a behavior similar to what was obtained in previous works [16]. In particular, \(\Omega_{\rm c}h^{2}\) and \(H_{0}\) have values very different from the ones in \(\Lambda\)CDM. However, from Figures 1, 2 and 3 we see that \(w\) and \(H_{0}\) are degenerate, while there is also a degeneracy between \(\Omega_{\rm c}h^{2}\) and the coupling constant, indicating that \(\Lambda\)CDM can be recovered and that a deviation in one parameter brings the other away from the fiducial value. The combination of CMB with BAO, RSD, and SNIa reduces the uncertainties on the parameters; however, the resulting contours do not overlap for Model I. The low-redshift data suggest a preference for values of the parameters that are consistent with the ones of \(\Lambda\)CDM. The aforementioned degeneracies are maintained for Model I, although with more constrained contours, while for Models II and III they are broken in some cases.
**Table 2: Priors**

| Parameter | Prior |
| --- | --- |
| \(\Omega_{\rm b}h^{2}\) | \([0.005,0.04]\) |
| \(\Omega_{\rm c}h^{2}\) | \([0.001,0.5]\) |
| \(100\theta_{\rm s}\) | \([1.03,1.05]\) |
| \(\ln(10^{10}A_{\rm s})\) | \([2.7,4.0]\) |
| \(n_{\rm s}\) | \([0.9,1.07]\) |
| \(\tau\) | \([0.01,0.1]\) |

| Parameter | Model I | Model II | Model III | Model IV |
| --- | --- | --- | --- | --- |
| \(w\) | \([-3.0,-0.3]\) | \([-3.0,-1.0]\) | \([-3.0,-1.0]\) | \(-0.999\) |
| \(\lambda_{1(2)}\) | \([-1.5,1.5]\) | \([0.0,0.04]\) | \([0.0,0.04]\) | \([-1.5,0.0]\) |
**Table 1: Stability conditions for the different models analyzed in the present work.**

| Model | \(Q\) | Equation of state | Constraints |
| --- | --- | --- | --- |
| I | \(\lambda_{2}H\rho_{\rm de}\) | \(w\neq-1\) | \(\lambda_{2}<-w\) |
| II | \(\lambda_{1}H\rho_{\rm dm}\) | \(w<-1\) | \(0\leq\lambda_{1}<-3w\) |
| III | \(\lambda H(\rho_{\rm dm}+\rho_{\rm de})\) | \(w<-1\) | \(0\leq\lambda\leq-3w/4\) |
The non-overlapping of the joint contour of all datasets with the CMB one for Model I suggests that this model is not compatible with all cosmological data. A similar situation happens for Model III for the \(w-H_{0}\) contour. Only Model II is consistent with all datasets (except \(H_{0}\)).
The value of \(H_{0}\) using CMB data alone is large enough to possibly alleviate the Hubble tension, and in order to investigate its impact on the results we also use the SH0ES Cepheid host distance anchors in the likelihood, producing two joint analyses with CMB. The parameters are then even more constrained; however, the contours no longer overlap for all data, as they did in some cases.
When all datasets are considered together, the \(H_{0}\) discrepancy is reduced to \(1.3\sigma\) (\(4.6\sigma\)) when compared to the standard \(\Lambda\)CDM (R22) value for Model III, while for Models I and II the values are \(4.9\sigma\) (\(2.7\sigma\)) away from \(\Lambda\)CDM (R22). However, only Model II presents a consistent overlap of the CMB and CMB+\(H_{0}\) contours. When all datasets are analyzed together, the 2-D contours overlap only for some parameters, indicating that SH0ES is not compatible with BAO, RSD, and SNIa for any of the IDE models assumed here.
Moreover, we constrained the model presented in Sec. II.1, whose results are found in Table 7 and Fig. 4. They partially reproduce the findings in [19]; the difference here is the inclusion of more recent datasets (such as Pantheon+) and the usage of R22 instead of [46]. The results for the cosmological parameters using only _Planck_ are equivalent; however, the other combinations of surveys produce generally tighter constraints. The inclusion of all datasets but \(H_{0}\) produces an incompatibility with the case of CMB alone. Here we used more cosmological data than [19]. The value of \(H_{0}\), for the combination of all datasets, gives a discrepancy of \(3.9\sigma\) from \(\Lambda\)CDM (instead of the previously found \(2.5\sigma\)) and \(3.4\sigma\) from R22 (instead of \(2.6\sigma\)). The Hubble tension is therefore still alleviated, although no longer at the \(2.5\sigma\) level. The reduced uncertainty on \(H_{0}\) found here, when compared to [19], added to the fact that R22 increased the tension to \(5\sigma\) (instead of \(4.4\sigma\) with [46]), is what makes the tension less alleviated than in the previous analysis [19]. The joint contours of all datasets are still consistent with CMB alone or CMB + \(H_{0}\).
Finally, we have used the Bayes ratio in Eq. (25) to calculate the \(2\ln\mathcal{B}\) values shown in Table 3. We can observe a positive inclination towards \(\Lambda\)CDM within the CMB and CMB+H\({}_{0}\) datasets. Conversely, when incorporating the BAO+SNIa+RSD datasets, a preference for IDE emerges because these data effectively break the degeneracies depicted in Figures 1, 2, 3 and 4. Consequently, we conclude that the preference for the IDE models over \(\Lambda\)CDM is not distinctly pronounced.
## IV Conclusions
Interacting Dark Energy has been an alternative to the standard \(\Lambda\)CDM model for many years. With the release of new datasets, the free parameters of the model can be even more constrained, reaching a level of precision not obtained before.
In this paper, we have investigated a class of coupled DE models, where DE interacts with DM phenomenologically. We have included a term in the perturbation equations for both fluids that was absent in previous works but is required by gauge invariance and accounts for the perturbation of the Hubble rate. Furthermore, we have obtained updated constraints on the cosmological parameters, using more recent data from CMB anisotropies, BAO, RSD, and SNIa, resulting in tighter constraints on the coupling constants. Model III and especially Model I, however, show incompatibility with the full set of cosmological data, since the contours do not overlap when different datasets are used to constrain the cosmological parameters.
We used the SH0ES Cepheid host distance anchors combined with other likelihoods to constrain \(H_{0}\) in the different models. Model III is the one that presents a better alleviation of the Hubble tension. Model IV also produces a lower discrepancy of \(H_{0}\), although the new data (including the new result from SH0ES, R22) make the tension less alleviated than in the previous work [19]. However, when all datasets are taken into account, no overlap between the 2D contours exists for any of the models, still indicating an incompatibility between the low-redshift data and \(H_{0}\).
Finally, we performed a statistical model comparison, and no strong preference for IDE over \(\Lambda\)CDM is obtained, with \(|\Delta\chi^{2}|<1\) for all the models, when the datasets CMB, BAO, RSD, and SNIa are used.
We conclude that the IDE models considered in this paper are not flexible enough to fit all cosmological data, including the values of \(H_{0}\) from SH0ES, in a statistically acceptable way. Either the models would need to be modified to allow more flexible predictions, or a tension remains in this coupled dark energy scenario.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{Dataset 1} & \multicolumn{2}{c}{Dataset 2} & \multicolumn{2}{c}{Dataset 3} & \multicolumn{2}{c}{Dataset 4} \\ \cline{2-9} & \(\Delta\chi^{2}\) & \(2\ln\mathcal{B}\) & \(\Delta\chi^{2}\) & \(2\ln\mathcal{B}\) & \(\Delta\chi^{2}\) & \(2\ln\mathcal{B}\) & \(\Delta\chi^{2}\) & \(2\ln\mathcal{B}\) \\ \hline I & 3.7 & 4.8 & 0.8 & \(-\)3.9 & 20.5 & 19.3 & \(-\)22.0 & \(-\)23.5 \\ II & 2.6 & 2.5 & \(-\)0.6 & \(-\)6.3 & 14.7 & 15.7 & \(-\)82.3 & \(-\)84.5 \\ III & 2.5 & 2.5 & 0.5 & \(-\)2.9 & 18.8 & 17.1 & \(-\)162.8 & \(-\)162.0 \\ IV & 0.1 & 2.3 & \(-\)0.6 & \(-\)2.8 & 25.6 & 27.3 & \(-\)39.6 & \(-\)38.7 \\ \hline \hline \end{tabular}
\end{table}
Table 3: \(\Delta\chi^{2}\) and evidence level in \(2\ln\mathcal{B}\) scale, both compared to \(\Lambda\)CDM. Datasets 1, 2, 3, and 4 correspond to MCMC runs using the datasets CMB, CMB+BAO+SNIa+RSD, CMB+H\({}_{0}\), and CMB+BAO+SNIa+RSD+H\({}_{0}\) respectively.
###### Acknowledgements.
R.G.L. thanks Matteo Lucca for the useful comments. G.A.H. acknowledges a CAPES grant 8887.622333/2021-00. L.O.P. acknowledges a CNPq grant 137172/2022-2. R.P.R. acknowledges a CNPq grant 125103/2022-0. This work was developed using the Brazilian cluster SDumont.
|
2306.13511 | Onboarding Citizens to Digital Identity Systems | Digital Identity (DI) technologies have the potential to enhance the quality
of life of citizens through the provision of seamless services, improve the
effectiveness of public services, and increase overall economic
competitiveness. However, lack of access to DIs can limit these benefits, while
unequal access can lead to uneven distribution of these benefits across social
groups and escalate existing tensions. Accessible, user-friendly and efficient
onboarding can play a key role in ensuring equitable access and wide adoption
of DI technologies. This paper proposes the development of physical locations
(Experience Centres) that can be used for citizen onboarding to national DI
systems, positively shaping citizens' first impression with the technology and,
in turn, promoting adoption. To this end, we outline a multidisciplinary
research approach for identifying and addressing the considerations necessary
for designing, developing and operating a model Experience Centre for DI
onboarding in an inclusive manner. | Tasos Spiliotopoulos, Al Tariq Sheik, Debora Gottardello, Robert Dover | 2023-06-23T14:23:11Z | http://arxiv.org/abs/2306.13511v1 | # Onboarding Citizens to Digital Identity Systems
###### Abstract
Digital Identity (DI) technologies have the potential to enhance the quality of life of citizens through the provision of seamless services, improve the effectiveness of public services, and increase overall economic competitiveness. However, lack of access to DIs can limit these benefits, while unequal access can lead to uneven distribution of these benefits across social groups and escalate existing tensions. Accessible, user-friendly and efficient onboarding can play a key role in ensuring equitable access and wide adoption of DI technologies. This paper proposes the development of physical locations (Experience Centres) that can be used for citizen onboarding to national DI systems, positively shaping citizens' first impression with the technology and, in turn, promoting adoption. To this end, we outline a multidisciplinary research approach for identifying and addressing the considerations necessary for designing, developing and operating a model Experience Centre for DI onboarding in an inclusive manner.
## 1 Introduction
In recent years, policymakers, researchers, and practitioners around the globe have recognised the potential benefits of Digital Identity (DI) systems [1]. Governments have begun implementing digital identity programmes to provide legal and regulated digital identities to citizens. The UK government has taken a large step in this direction by publishing a beta version of the _Digital Identity and Attributes Trust Framework_ [2], which is intended to provide a policy framework that enables and encourages individuals to hold reusable, certified Digital IDs.
DIs provide citizens with easy, efficient, privacy-preserving and secure access to services. This, in turn, allows governments and businesses to innovate, streamline their services, comply with regulations and compete at the international level. For example, researchers have highlighted how the selective disclosure and self-sovereignty afforded by DI technologies can address financial exclusion [3, 4], while also supporting innovation [5].
Despite the potential advantages, research indicates that the design and implementation of national DI systems may have significant socio-economic, ethical, privacy, and human rights implications [6]. When designing and implementing DI systems, these implications must be carefully considered, as they have the potential to affect a wide variety of individuals and groups. One important consideration involves the onboarding process, which needs to be designed in a way that promotes equitable access to DIs for all citizens. This paper outlines a research approach for identifying and addressing the considerations necessary for designing, developing and operating a physical location (an Experience Centre) for DI onboarding in an inclusive manner.
## 2 Digital Identity Technologies
### Technical Background
In broad terms, a digital identity refers to what an entity, object or subject is [7]. A key characteristic of a digital identity is that it can be used to prove something about an entity. This means that a third entity can verify a claim that an issuer of the identity has made about the identity holder. This, in turn, means that services and organisations can trust these claims. With this in mind, Cameron [8] has defined a digital identity as 'a set of claims made by one digital subject about itself or another digital subject'. When considering identities at the national level, the UK Digital Identity and Attributes Trust Framework refers to a DI as a 'digital representation of a person acting as an individual or as a representative of an organisation' and highlights the importance of the ability for people to prove claims about themselves and the impact that this foundation of trust can have on organisations, service providers and the country's economy [2].
A DI _system_ is a mechanism that permits the creation and verification of an individual's identity using digital means. The process of using an identity within this system consists primarily of two steps: (i) onboarding, and (ii) authentication and ID management. During the initial phase of the onboarding procedure, an individual's Personal Identifiable Information (PII) is collected, validated and verified. This information is used to identify and establish the user within the system. Document validation, email verification, and phone verification may be included in the de-duplication and verification process. Once the individual's identity has been proven, the administrator facilitates the creation of an identity record. The second stage of the process is authentication and ID management, in which the individual's identity is verified and managed whenever they attempt to use a service. This is achieved through a variety of methods, including password-based authentication, two-factor authentication, biometric authentication, and credential issuance, recording, binding, expiration, renewal and revocation. The objective of the authentication phase is to ensure that only authorised users can access the service and that their credentials are properly managed.
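To make the claims-based picture above concrete, the following minimal sketch (our own illustration, not part of any DI standard; the issuer key and a shared-secret HMAC stand in for the asymmetric signatures and certified issuers a real system would use) shows an issuer attesting a claim and a relying party verifying it:

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-issuer-secret"  # hypothetical key, for illustration only

def issue_claim(subject: str, attribute: str, value: str) -> dict:
    # The issuer binds a claim about the subject to a verifiable tag.
    claim = {"subject": subject, "attribute": attribute, "value": value}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["tag"] = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_claim(claim: dict) -> bool:
    # A relying party recomputes the tag to check the issuer's attestation
    # without needing any PII beyond the claim itself.
    body = {k: v for k, v in claim.items() if k != "tag"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, claim["tag"])

claim = issue_claim("alice", "over_18", "true")
assert verify_claim(claim)
```

The authentication and ID management stage described above then handles the lifecycle (binding, expiration, renewal, revocation) of such credentials on top of this basic issue-and-verify loop.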
### Policy Background
The UK Digital Identity and Attributes Trust Framework (DIATF) provides a preliminary and evolving set of rules and standards for those providing (or independently certifying) DIs. The DI providers will need to follow these rules and standards in order to provide secure and trustworthy digital identity and attribute solutions in the UK market. The drafting of the DIATF is being overseen by the UK government's Department for Digital, Culture, Media and Sport (DCMS), but this process involves a number of government, scientific, policy and corporate stakeholders injecting expertise into the process. At the time of writing there is no definitive publication date for the final version of the DIATF, with the DCMS stating that it will be published in the short term. However, the currently published beta version makes clear the government's intention to use the trust framework to enable services such as digital right to work, rent and criminal record checks [2].
The DIATF is an important element of the UK government's digital economy initiatives, which the government argues will bring increased levels of innovation, competition, and transparency to the digital identity market. The initial DIATF also puts special emphasis on the protection of individual privacy and security. Such ambitions are tempered by the government's parallel policy agenda of deanonymising individuals to improve the work and effectiveness of law enforcement and intelligence agencies.
The transnationality of digital markets sits uneasily with Westphalian constructs of sovereignty, and so the DIATF will need to be compatible (in important ways) with the European Union's proposed European Digital Identity Framework [9], which builds on the existing cross-border legal framework for trusted digital identities, the European electronic identification and trust services initiative (eIDAS Regulation) [10].
### Considerations around Access to DI Technologies
DI systems, when not designed appropriately, may fail to address the needs of those who already carry significant markers of social and economic marginalisation. Such exclusion results in negative economic consequences and serves to further exclude these groups from formal mechanisms based upon trustworthy identity systems. For example, biometric identification methods, such as fingerprints, may not be accessible to disabled or elderly individuals, creating 'digital barriers' to accessing economic and social resources [11, 12]. Additionally, digital identification may enable more efficient discrimination against marginalised groups, such as women, ethnic minorities, religious groups, disabled individuals, and members of the LGBT community [13]. These systems also pose a threat to the personal safety of marginalised groups. Therefore, more trustworthy digital identity systems that address the rights of all individuals, particularly the most marginalised, will be necessary in the future.
Research has shown that in the UK around 11 million people, especially from marginalised backgrounds, do not have a passport or a driving licence. Women, those living in urban areas, the under 20s and over 65s are less likely to hold a driving licence [14]. Data from the UK Electoral Commission show that disadvantaged groups are more likely not to have an ID. For example, older people (aged 85+) were less likely to have a recognisable form of ID (91% compared to 95%-98% for those in younger age groups). It also found that the unemployed, people with disabilities, and people without qualifications were all less likely to hold any form of photo ID [15].
Onboarding can be difficult for people who do not have access to technological resources or who are not technologically skilled. Not having an ID can have important implications for marginalised people as they may lack access to services that ensure credit accumulation or profit storage. As a consequence, marginalised people may have difficulty accessing many basic services, including work, social protection, banking or education. Likewise, the lack of a documented identity puts vulnerable and already marginalised people at constant risk of transgressing the lines between legal and illegal.
In recent years, digital identity verification through a mobile account has proven to be an effective verification method in several countries. This method has been used to prove identity in order to receive benefits from the government or private entities, or to obtain microloans. Accessing financial services online or through mobile devices provides independence and the opportunity to pay daily expenses and to make longer-term plans, and therefore removes a key source of anxiety [16]. Having access to financial services helps marginalised groups not only to survive but also to bring themselves closer to the mainstream of society in terms of access whilst maintaining their individual identities, potentially facilitating a greater level of respect from those who do not find themselves marginalised. Poverty or social isolation driven by lack of access to services, including financial services, affects minorities in all nations.
In the western world, communities seen as marginalised often exhibit low recorded levels of literacy, a lack of access to financial services and, consequently, a reliance on outside agencies, making this loosely confederated group a challenge to onboard to digital services [17]. Although richer countries, such as the United Kingdom, tend to have better quality services than poorer nations, the security measures that prevent unauthorised use of these services are much stricter. The cost of buying a passport or learning to drive to obtain a photo ID can easily prevent minorities from possessing these essential 'entrance' documents.
## 3 Digital Identity Onboarding
As noted above, a DI system is a mechanism that permits the creation and verification of an individual's identity using digital means. The process of using an identity within this system consists primarily of two steps: (i) onboarding, and (ii) authentication and ID management.
The implementation of these two phases incurs respective challenges. The onboarding phase contains many challenges that are social, economic, political and technological in nature. These include data collection, data verification, privacy, security, user experience, scalability and compliance.
The challenges faced during the authentication phase and ID management are analogous to those experienced in other technological developments, for example: security, usability, scalability, false acceptance and false rejection rates, privacy, compliance and interoperability. Unlike the technological challenges of the authentication phase, which depend on the initial stage, the challenges of the onboarding phase are interdisciplinary in nature. This makes onboarding the referent object for a multidisciplinary inquiry, requiring a number of disciplinary perspectives and inputs to generate solutions. As a result, it is crucial to formally address the challenges of the onboarding phase and design interdisciplinary solutions to create a trustworthy, efficient, and user-friendly experience.
The main obstacles in adopting DIs are: 1) the information gap that exists between the consuming public and the technology companies, and 2) people's hesitation to initially engage with the technology. Trust in technology in general, trust in a specific technology, and trust in the people and institutions behind a technology play an important role in shaping people's beliefs and behaviour [18, 19]. To establish trust, the DI system's onboarding, authentication, and ID lifecycle management processes must be demonstrated as trustworthy: this is both a measure that can be technically benchmarked and is also subject to sentiment. The consuming public's first impression and initial experience with a technology are also particularly important in shaping adoption and post-adoption behaviours [20]. Because these early beliefs and behaviours establish a path-dependency, we are identifying the DI onboarding phase as a key research consideration for ensuring equitable access and wide adoption of digital identities.
## 4 Achieving a Smooth Digital Identity Onboarding Experience
In order to increase the adoption of DIs, and to do so in a fair and equitable manner, we propose the use of physical locations for citizen onboarding to DI systems in the UK context. Such an approach has similarities to the use of Experience Centres (ECs) developed in other countries1. These ECs, which are physical locations, will allow users to register and collect digital IDs and credentials, and integrate them with other systems and services such as civil registration systems, e-sign, and electronic health records management.
Footnote 1: [https://mosip.io/news-events/announcing-the-launch-of-the-first-mosip-experience-centre-an-end-to-end-walk-in-mosip-experience-in-](https://mosip.io/news-events/announcing-the-launch-of-the-first-mosip-experience-centre-an-end-to-end-walk-in-mosip-experience-in-)
An Experience Centre can facilitate trustworthy and inclusive onboarding to DI technologies. This has the potential both to address uneven access to DI technologies and to increase DI adoption overall. Ensuring that access to DI technologies is inclusive can profoundly reduce inequalities, as proving one's identity is rapidly becoming an essential part of exercising human rights on a day-to-day basis. An EC can also improve the efficiency of the DI onboarding process, with additional services taking place on-site, such as document verification, biometric capture, and identity document scanning. Such an EC would provide a secure and user-friendly environment for users to interact with the DI system, close the information gap between the public and DI providers, and develop confidence in the technology and related services. ECs also allow us to iterate on and improve the user experience within them, thus constantly improving accessibility and trust. We envision an EC as a _Digital Identity Playground_ that can positively shape citizens' first impression of the technology and, in turn, promote adoption of DI technologies.
ECs are complex sociotechnical systems and, as such, are very difficult to design and implement [21]. A number of considerations need to be taken into account in order to address fundamental design questions, such as:
* What are the most important features and services of a model EC in the UK?
* What are the specific requirements of a model EC in terms of technical infrastructure, staffing, spatial architecture, cost and security?
* What are the main design considerations to ensure that the model EC can engage citizens in an inclusive manner, increase adoption, build confidence in the use of DIs, and act effectively as a digital playground?
## 5 Our Research Approach
To operationalise our _multidisciplinary_ approach we have designed a series of research activities and methods that are necessary steps to effectively address these questions. The contribution from Foreign Policy Analysis (FPA), _horizon scanning_, will be used to identify the key drivers shaping the DI onboarding operational environment and key action points to proactively shape desirable futures. The output of a horizon scan is a formal assessment document that provides a probabilistic measure of the likelihood of various future trends occurring and allows the recipient to make evidence-based judgements about resourcing and framing responses to the initial challenge. In this context, the horizon scan will identify trends over a ten-year time period, from most likely to wild-card possibilities, and also provide assessments of the sourcing base for these judgements. The output from the horizon scan will exist as a standalone document, but it also helps to inform the creation of requirements (e.g., specific use cases and design diagrams) and recommendations, which in turn provide an underpinning for the design of an Experience Centre. In general terms, a horizon scan is an empiricist tool for identifying the key elements of the phenomenon or issue at hand - in this case onboarding. Further, a horizon scan assists in generating areas for further research, action and mitigation [22].
The contribution from Operational Sciences and Human-Computer Interaction is a _Literature review and science mapping analysis_, which investigates the state of current research as well as implementation trends and opportunities in DIs, with a specific focus on the onboarding process. A structured analysis of this large body of academic information relating to DIs will allow us to infer research trends over time, recognise themes, identify shifts in the boundaries of the disciplines, detect the most prolific scholars, institutions and countries, and present the 'big picture' of extant research around DI systems, user adoption and onboarding [23, 24].
_Semi-structured interviews_ - derived from a social scientific underpinning - will help to provide an understanding of stakeholder and end-user perceptions and attitudes towards DI systems, with a focus on inclusivity as a framing device within onboarding. The main purpose of the interviews with stakeholders will be to investigate possible challenges, barriers, attitudes and opportunities and identify major trends to inform the development and design of future trustworthy digital identities that guarantee equality and inclusion and are accessible for all. Moreover, by interviewing stakeholders we will be able to understand how ECs can ensure an equal and inclusive society.
Finally, _Threat and risk assessment_ will determine the methods, practices, and approaches that provide the greatest traction for identifying and assessing security threats and evaluating the associated risks for inclusivity in future digital identity onboarding systems. A threat and risk assessment encompasses a comprehensive examination of both the users and the digital identity system for potential threats and the subsequent evaluation of the associated security risks. This assessment takes into account the likelihood and potential impact of a threat event occurring, as well as the capability of a threat actor to exploit any weaknesses within the system. Based on the level of threat and risk identified, appropriate risk management strategies can be developed, which may include the acceptance of the risk, the implementation of mitigation measures, or the adoption of avoidance strategies [25, 26].
We expect that this combination of policy perspective, multidisciplinary academic perspectives, the perspective of end-users and stakeholders, and the technical and security perspectives will complement one another to provide a more complete picture of what is required for the development of an EC in a UK context. This, in turn, can be very useful input for policy making, regulation and inform best practices and specifications for the DIATF.
Two types of results are expected from these research activities. First, this research approach will provide a set of requirements and specifications that can be used for the design and operation of a model EC. These will take the form of commonly used artefacts that are used for this purpose, such as use case descriptions, use case diagrams, data flow diagrams and process flow diagrams. These will not be meant to provide an exhaustive set of rigid specifications, but instead will focus on the DI-specific characteristics of the design and operation of an EC. The focus will also be on addressing 'pain points' or 'critical incidents' [27] identified in the onboarding process. These artefacts also have the potential to be used as 'boundary objects' to facilitate communication, engagement and feedback from stakeholders [28, 29]. Second, we expect to provide a set of qualitative recommendations that arise from these research activities. These recommendations will ensure that aspects of citizen inclusion and empowerment are adequately addressed (e.g., taking into account the needs of diverse groups of citizens), and will provide more flexibility in the output.
## 6 Conclusion
The implementation of a socially inclusive and technically robust onboarding process for digital identities is an under-emphasised but highly impactful component of the development of digital identities. It is an important element of the future economic success of the UK, of the trust and participation of all elements of British society in this digital future, and of the strength of digitally platformed or cyber-influenced social relationships within the UK and beyond. The multidisciplinary research approach outlined in this work will provide impact-laden research that can be utilised by government policy officials and technology partners to improve their DI offers.
## Acknowledgements
This research was part-funded by SPRITE+: The Security, Privacy, Identity, and Trust Engagement NetworkPlus (EPSRC grant number EP/S035869/1).
|
2301.06805 | Stability and guaranteed error control of approximations to the
Monge--Ampère equation | This paper analyzes a regularization scheme of the Monge--Amp\`ere equation
by uniformly elliptic Hamilton--Jacobi--Bellman equations. The main tools are
stability estimates in the $L^\infty$ norm from the theory of viscosity
solutions which are independent of the regularization parameter $\varepsilon$.
They allow for the uniform convergence of the solution $u_\varepsilon$ to the
regularized problem towards the Alexandrov solution $u$ to the Monge--Amp\`ere
equation for any nonnegative $L^n$ right-hand side and continuous Dirichlet
data. The main application are guaranteed a posteriori error bounds in the
$L^\infty$ norm for continuously differentiable finite element approximations
of $u$ or $u_\varepsilon$. | Dietmar Gallistl, Ngoc Tien Tran | 2023-01-17T11:03:15Z | http://arxiv.org/abs/2301.06805v1 | # Stability and guaranteed error control of approximations to the Monge-Ampere equation
###### Abstract.
This paper analyzes a regularization scheme of the Monge-Ampere equation by uniformly elliptic Hamilton-Jacobi-Bellman equations. The main tools are stability estimates in the \(L^{\infty}\) norm from the theory of viscosity solutions which are independent of the regularization parameter \(\varepsilon\). They allow for the uniform convergence of the solution \(u_{\varepsilon}\) to the regularized problem towards the Alexandrov solution \(u\) to the Monge-Ampere equation for any nonnegative \(L^{n}\) right-hand side and continuous Dirichlet data. The main application are guaranteed a posteriori error bounds in the \(L^{\infty}\) norm for continuously differentiable finite element approximations of \(u\) or \(u_{\varepsilon}\).
Key words and phrases: Monge-Ampere equation, regularization, a posteriori.
2010 Mathematics Subject Classification: 35J96, 65N12, 65N30, 65Y20.
This project received funding from the European Union's Horizon 2020 research and innovation programme (project DAFNE, grant agreement No. 891734).
For continuous right-hand sides \(0\leq f\in C(\Omega)\), the Monge-Ampere equation (1.1) is equivalent to
\[F_{0}(f;x,\mathrm{D}^{2}u)=0\text{ in }\Omega\quad\text{and}\quad u=g\text{ on }\partial\Omega\]
with \(F_{0}(f;x,M)\coloneqq\sup_{A\in\mathbb{S}(0)}(-A:M+f\sqrt[n]{\det A})\) for any \(x\in\Omega\) and \(M\in\mathbb{R}^{n\times n}\). Here, \(\mathbb{S}(0)\coloneqq\{A\in\mathbb{S}:A\geq 0\text{ and }\operatorname{tr}A=1\}\) denotes the set of positive semidefinite symmetric matrices \(A\) with unit trace \(\operatorname{tr}A=1\). Since \(F_{0}\) is only degenerate elliptic, the regularization scheme proposed in [12] replaces \(\mathbb{S}(0)\) by a compact subset \(\mathbb{S}(\varepsilon)\coloneqq\{A\in\mathbb{S}(0):A\geq\varepsilon\}\subset \mathbb{S}(0)\) of matrices with eigenvalues bounded from below by the regularization parameter \(0<\varepsilon\leq 1/n\). The solution \(u_{\varepsilon}\) to the regularized PDE solves
\[F_{\varepsilon}(f;x,\mathrm{D}^{2}u_{\varepsilon})=0\text{ in }\Omega\quad \text{and}\quad u_{\varepsilon}=g\text{ on }\partial\Omega \tag{1.2}\]
where, for any \(x\in\Omega\) and \(M\in\mathbb{R}^{n\times n}\), the function \(F_{\varepsilon}\) is defined as
\[F_{\varepsilon}(f;x,M)\coloneqq\sup_{A\in\mathbb{S}(\varepsilon)}(-A:M+f\sqrt[n]{\det A}). \tag{1.3}\]
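For orientation, the supremum in (1.3) can be evaluated numerically; the following sketch (our own illustration, restricted to \(n=2\) and a brute-force grid, not part of the discretization analysed below) parameterises every \(A\in\mathbb{S}(\varepsilon)\) by its eigenvalues \((t,1-t)\) with \(t\in[\varepsilon,1-\varepsilon]\) and an eigenbasis rotation:

```python
import numpy as np

def F_eps(M: np.ndarray, f: float, eps: float,
          nt: int = 65, ntheta: int = 64) -> float:
    # Approximate F_eps(f; x, M) = sup_{A in S(eps)} (-A:M + f (det A)^(1/2))
    # for n = 2 on a grid: A = R(theta) diag(t, 1 - t) R(theta)^T.
    best = -np.inf
    for t in np.linspace(eps, 1.0 - eps, nt):
        root_det = np.sqrt(t * (1.0 - t))  # (det A)^(1/n) with n = 2
        for theta in np.linspace(0.0, np.pi, ntheta, endpoint=False):
            c, s = np.cos(theta), np.sin(theta)
            R = np.array([[c, -s], [s, c]])
            A = R @ np.diag([t, 1.0 - t]) @ R.T
            best = max(best, -np.sum(A * M) + f * root_det)
    return best

# Sanity check: for M = I and f = 2 the maximiser is A = I/2, so F_eps = 0.
print(F_eps(np.eye(2), f=2.0, eps=0.1))  # ~0.0 up to grid resolution
```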
In two space dimensions \(n=2\), uniformly elliptic HJB equations satisfy the Cordes condition [15] and this allows for a variational setting for (1.2) with a unique strong solution \(u_{\varepsilon}\in H^{2}(\Omega)\) in the sense that \(F_{\varepsilon}(f;x,\mathrm{D}^{2}u_{\varepsilon})=0\) holds a.e. in \(\Omega\)[18, 19]. The paper [12] establishes uniform convergence of \(u_{\varepsilon}\) towards the generalized solution \(u\) to the Monge-Ampere equation (1.1) as \(\varepsilon\searrow 0\) under the assumption \(g\in H^{2}(\Omega)\cap C^{1,\alpha}(\overline{\Omega})\) and that \(0\leq f\in L^{2}(\Omega)\) can be approximated from below by a pointwise monotone sequence of positive continuous functions.
### Contributions of this paper
The variational setting of (1.2) in two space dimensions leads to \(H^{2}\) stability estimates that deteriorate with \(\varepsilon^{-1}\to\infty\) as the regularization parameter \(\varepsilon\to 0\). This can be explained by the regularity of Alexandrov solutions to the Monge-Ampere equation (1.1) as they are, in general, not in \(H^{2}(\Omega)\) without additional assumptions on the domain \(\Omega\) and the data \(f,g\). Consequently, error estimates in the \(H^{2}\) norm may not be of interest, and the focus is on error estimates in the \(L^{\infty}\) norm.
The analysis departs from the following \(L^{\infty}\) stability estimate that arises from the Alexandrov maximum principle. If \(v_{1},v_{2}\in C(\overline{\Omega})\) are viscosity solutions to \(F_{\varepsilon}(f_{j};x,\mathrm{D}^{2}v_{j})=0\) in \(\Omega\) with \(0\leq\varepsilon\leq 1/n\) and \(f_{1},f_{2}\in C(\overline{\Omega})\), then
\[\|v_{1}-v_{2}\|_{L^{\infty}(\Omega)}\leq\|v_{1}-v_{2}\|_{L^{\infty}(\partial \Omega)}+C(n,\operatorname{diam}(\Omega))\|f_{1}-f_{2}\|_{L^{n}(\Omega)}. \tag{1.4}\]
The constant \(C(n,\operatorname{diam}(\Omega))\) exclusively depends on the dimension \(n\) and the diameter \(\operatorname{diam}(\Omega)\) of \(\Omega\), but not on the ellipticity constant of (1.2) or on the regularization parameter \(\varepsilon\). Consequently, this allows for control of the \(L^{\infty}\) error even as \(\varepsilon\to 0\). By density of \(C(\overline{\Omega})\) in \(L^{n}(\Omega)\), the \(L^{\infty}\) stability estimate (1.4) can be extended to solutions \(v_{1},v_{2}\in C(\overline{\Omega})\) with right-hand sides \(f_{1},f_{2}\in L^{n}(\Omega)\) for \(0<\varepsilon\leq 1/n\) (or \(\varepsilon=0\) if \(f_{1},f_{2}\geq 0\)) with the following two applications. First, this paper establishes, in extension to [12], uniform convergence of (generalized) viscosity solutions \(u_{\varepsilon}\) of the regularized PDE (1.2) to the Alexandrov solution \(u\in C(\overline{\Omega})\) of the Monge-Ampere equation (1.1) under the (essentially) minimal assumptions \(0\leq f\in L^{n}(\Omega)\) and \(g\in C(\partial\Omega)\) on the data. Second, (1.4) provides guaranteed error control in the \(L^{\infty}\) norm (even for inexact solve) for \(H^{2}\) conforming FEM.
### Outline
The principal tool we use for establishing our results is the celebrated Alexandrov maximum principle. It provides an upper bound for the \(L^{\infty}\) norm of any convex function in dependence of its Monge-Ampere measure.
**Lemma 1.1** (Alexandrov maximum principle).: _There exists a constant \(c_{n}\) solely depending on the dimension \(n\) such that any convex function \(v\in C(\overline{\Omega})\) with homogeneous boundary data \(v|_{\partial\Omega}=0\) over an open bounded convex domain \(\Omega\) satisfies_
\[|v(x)|^{n}\leq c_{n}^{n}\mathrm{dist}(x,\partial\Omega)\mathrm{diam}(\Omega)^{n -1}\mathcal{L}^{n}(\partial v(\Omega))\quad\text{for any $x\in\Omega$}. \tag{1.5}\]
Proof.: This is [11, Theorem 2.8] and the constant \(c_{n}\coloneqq 2(2\pi)^{n/2-1}/((n-1)!!\,n)\) arises therein from the \(n\)-dimensional volume formula for a cone \(\mathcal{C}\subset\partial v(\Omega)\). If \(n=2\), then \(c_{2}=1\).
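As a quick numerical cross-check of the constant (our illustration only, assuming the parenthesisation of \(c_{n}\) as stated in the proof), the value \(c_{2}=1\) is recovered:

```python
from math import pi

def double_factorial(k: int) -> int:
    # k!! with the usual conventions 0!! = (-1)!! = 1
    out = 1
    while k > 1:
        out *= k
        k -= 2
    return out

def c(n: int) -> float:
    # c_n = 2 (2 pi)^(n/2 - 1) / ((n - 1)!! n) as in the proof of Lemma 1.1
    return 2.0 * (2.0 * pi) ** (n / 2 - 1) / (double_factorial(n - 1) * n)

assert abs(c(2) - 1.0) < 1e-12   # c_2 = 1 as stated
print([round(c(n), 4) for n in range(2, 6)])
```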
The remaining parts of this paper are organized as follows. Section 2 establishes \(L^{\infty}\) stability estimates for viscosity solutions to the HJB equation (1.2) for all parameters \(0\leq\varepsilon\leq 1/n\) in any space dimension. Section 3 provides a proof of convergence of the regularization scheme. A posteriori error estimates for the discretization error in the \(L^{\infty}\) norm for \(H^{2}\)-conforming FEM are presented in Section 4. The three numerical experiments in Section 5 conclude this paper.
Standard notation for function spaces applies throughout this paper. Let \(C^{k}(\Omega)\) for \(k\in\mathbb{N}\) denote the space of scalar-valued \(k\)-times continuously differentiable functions. Given a positive parameter \(0<\alpha\leq 1\), the Hölder space \(C^{k,\alpha}(\Omega)\) is the subspace of \(C^{k}(\overline{\Omega})\) such that all partial derivatives of order \(k\) are Hölder continuous with exponent \(\alpha\). For any set \(\omega\subset\mathbb{R}^{n}\), \(\chi_{\omega}\) denotes the indicator function associated with \(\omega\). For \(A,B\in\mathbb{R}^{n\times n}\), the Euclidean scalar product \(A:B\coloneqq\sum_{j,k=1}^{n}A_{jk}B_{jk}\) induces the Frobenius norm \(|A|\coloneqq\sqrt{A:A}\) in \(\mathbb{R}^{n\times n}\). The notation \(|\cdot|\) also denotes the absolute value of a scalar or the length of a vector. The relation \(A\leq B\) of symmetric matrices \(A,B\in\mathbb{S}\) holds whenever \(B-A\) is positive semidefinite.
## 2. Stability estimate
We first recall the concept of viscosity solutions to the HJB equation (1.2).
**Definition 2.1** (viscosity solution).: _Let \(f\in C(\Omega)\) and \(0\leq\varepsilon\leq 1/n\) be given. A function \(v\in C(\overline{\Omega})\) is a viscosity subsolution (resp. supersolution) to \(F_{\varepsilon}(f;x,\mathrm{D}^{2}v)=0\) if, for all \(x_{0}\in\Omega\) and \(\varphi\in C^{2}(\Omega)\) such that \(v-\varphi\) has a local maximum (resp. minimum) at \(x_{0}\), it holds that \(F_{\varepsilon}(f;x_{0},\mathrm{D}^{2}\varphi(x_{0}))\leq 0\) (resp. \(F_{\varepsilon}(f;x_{0},\mathrm{D}^{2}\varphi(x_{0}))\geq 0\)). If \(v\) is both a viscosity subsolution and a viscosity supersolution, then \(v\) is called a viscosity solution to \(F_{\varepsilon}(f;x,\mathrm{D}^{2}v)=0\)._
The following result provides the first tool in the analysis of this section.
**Lemma 2.2** (classical comparison principle).: _Given \(0\leq\varepsilon\leq 1/n\) and a continuous right-hand side \(f\in C(\Omega)\), where we assume \(f\geq 0\) if \(\varepsilon=0\), let \(v^{*}\in C(\overline{\Omega})\) resp. \(v_{*}\in C(\overline{\Omega})\) be a super- resp. subsolution to the PDE_
\[F_{\varepsilon}(f;x,\mathrm{D}^{2}v)=0\text{ in }\Omega. \tag{2.1}\]
_If \(v_{*}\leq v^{*}\) on \(\partial\Omega\), then \(v_{*}\leq v^{*}\) in \(\overline{\Omega}\)._
Proof.: The proof applies the arguments from [7, Section 3] to the PDE (2.1) and can follow [10, Lemma 3.6] with straightforward modifications; further details are therefore omitted.
An extended version of Lemma 2.2 is the following.
**Lemma 2.3** (comparison principle).: _Given any \(0\leq\varepsilon_{*}\leq\varepsilon^{*}\leq 1/n\) and \(f_{*},f^{*}\in C(\Omega)\) with \(f_{*}\leq f^{*}\) in \(\Omega\), where we assume \(f_{*}\geq 0\) if \(\varepsilon_{*}=0\), let \(v_{*},v^{*}\in C(\overline{\Omega})\) be viscosity solutions to_
\[F_{\varepsilon^{*}}(f_{*};x,\mathrm{D}^{2}v^{*})=0\text{ in }\Omega\quad\text{ and } \quad F_{\varepsilon_{*}}(f^{*};x,\mathrm{D}^{2}v_{*})=0\text{ in }\Omega.\]
_If \(v_{*}\leq v^{*}\) on \(\partial\Omega\), then \(v_{*}\leq v^{*}\) in \(\overline{\Omega}\)._
Proof.: Given any test function \(\varphi\in C^{2}(\Omega)\) and \(x\in\Omega\) such that \(v^{*}-\varphi\) has a local minimum at \(x\), then \(F_{\varepsilon^{*}}(f_{*};x,\mathrm{D}^{2}v^{*})=0\) in the sense of viscosity solutions implies \(0\leq F_{\varepsilon^{*}}(f_{*};x,\mathrm{D}^{2}\varphi(x))\). This, \(f_{*}\leq f^{*}\) in \(\Omega\), and \(\mathbb{S}(\varepsilon^{*})\subset\mathbb{S}(\varepsilon_{*})\) show
\[0\leq F_{\varepsilon^{*}}(f_{*};x,\mathrm{D}^{2}\varphi(x))\leq F_{\varepsilon _{*}}(f^{*};x,\mathrm{D}^{2}\varphi(x)), \tag{2.2}\]
whence \(v^{*}\) is viscosity supersolution to the PDE \(F_{\varepsilon_{*}}(f^{*};x,\mathrm{D}^{2}v_{*})=0\). Therefore, the comparison principle from Lemma 2.2 with \(v_{*}\leq v^{*}\) on \(\partial\Omega\) concludes \(v_{*}\leq v^{*}\) in \(\overline{\Omega}\).
The comparison principle from Lemma 2.2 allows for the existence and uniqueness of viscosity solutions to (1.2) by Perron's method.
**Proposition 2.4** (properties of HJB equation).: _Given any \(0\leq\varepsilon\leq 1/n\), \(f\in C(\Omega)\cap L^{n}(\Omega)\), where we assume \(f\geq 0\) if \(\varepsilon=0\), and \(g\in C(\partial\Omega)\), there exists a unique viscosity solution \(u\in C(\overline{\Omega})\) to the HJB equation (1.2). It satisfies (a)-(c):_
_(a) (viscosity = Alexandrov) If \(\varepsilon=0\) and \(f\geq 0\), then the viscosity solution to the HJB equation (1.2) and the Alexandrov solution to the Monge-Ampere equation (1.1) coincide._
_(b) (interior regularity for HJB) If \(\varepsilon>0\) and \(f\in C^{0,\alpha}(\Omega)\) with \(0<\alpha<1\), then \(u\in C(\overline{\Omega})\cap C^{2,\kappa}_{loc}(\Omega)\) with a constant \(0<\kappa<1\) that solely depends on \(\alpha\) and \(\varepsilon\)._
_(c) (interior regularity for Monge-Ampere) If \(\varepsilon=0\), \(f\in C^{0,\alpha}(\Omega)\) with \(0<\alpha<1\), \(f>0\) in \(\overline{\Omega}\), and \(g\in C^{1,\beta}(\partial\Omega)\) with \(\beta>1-2/n\), then \(u\in C(\overline{\Omega})\cap C^{2,\alpha}_{loc}(\Omega)\)._
Proof.: On the one hand, an elementary reasoning as in the proof of Lemma 2.3 proves that the viscosity solution \(v^{*}\) to the Poisson equation \(F_{\varepsilon^{*}}(f_{*};x,\mathrm{D}^{2}v^{*})=0\) with \(\varepsilon^{*}\coloneqq 1/n\), \(f_{*}\coloneqq f\), and Dirichlet data \(v^{*}=g\) on \(\partial\Omega\) is a viscosity supersolution to (1.2). On the other hand, the Alexandrov solution \(v_{*}\) to the Monge-Ampere equation (1.1) with the right-hand side \(|f|\)[11, Theorem 2.14] is the viscosity solution to the HJB equation \(F_{\varepsilon_{*}}(f^{*};x,\mathrm{D}^{2}v_{*})=0\) with \(\varepsilon_{*}\coloneqq 0\), \(f^{*}\coloneqq|f|\), and Dirichlet data \(v_{*}=g\) on \(\partial\Omega\)[13, Proposition 1.3.4]. Hence, the function \(v_{*}\) is viscosity subsolution to (1.2). Therefore, Perron's method [7, Theorem 4.1] and the comparison principle from Lemma 2.2 conclude the existence and uniqueness of viscosity solutions to (1.2). The combination of [10, Theorem 3.3 and Theorem 3.5] with [13, Proposition 1.3.4] implies the assertion in (a). The interior regularity in (b) is a classical result from [5, 17]. For the Monge-Ampere equation, the interior regularity in (c) holds under the assumption that the Alexandrov solution \(u\) is strictly convex [11, Corollary 4.43]. Sufficient conditions for this are that \(f>0\) is bounded away from zero and \(g\in C^{1,\beta}(\partial\Omega)\) is sufficiently smooth [11, Corollary 4.11].
Some comments are in order, before we state a precise version of the \(L^{\infty}\) stability estimate (1.4) from the introduction. In general, these estimates arise from the Alexandrov-Bakelman-Pucci maximum principle for the uniform elliptic Pucci operator, cf. [3] and the references therein for further details. However, the constant therein may depend on the ellipticity constant of \(F_{\varepsilon}\) and therefore, on \(\varepsilon\). In the case of the HJB equation (1.2) that approximates the Monge-Ampere equation (1.1) as \(\varepsilon\to 0\), the Alexandrov maximum principle is the key argument to avoid a dependency on \(\varepsilon\). Recall the constant \(c_{n}\) from Lemma 1.1.
**Theorem 2.5** (\(L^{\infty}\) stability).: _Given a nonnegative parameter \(0\leq\varepsilon\leq 1/n\) and right-hand sides \(f_{1},f_{2}\in C(\overline{\Omega})\), where we assume \(f_{1},f_{2}\geq 0\) if \(\varepsilon=0\), let \(v_{1},v_{2}\in C(\overline{\Omega})\) be viscosity solutions to the HJB equation \(F_{\varepsilon}(f_{j};x,\mathrm{D}^{2}v_{j})=0\) in \(\Omega\) for \(j\in\{1,2\}\). Then, for any subset \(\omega\subset\Omega\),_
\[\|v_{1}-v_{2}\|_{L^{\infty}(\omega)}\leq\|v_{1}-v_{2}\|_{L^{\infty}(\partial \Omega)}+\frac{C}{n}\max_{x\in\overline{\omega}}\mathrm{dist}(x,\partial \Omega)^{1/n}\|f_{1}-f_{2}\|_{L^{n}(\Omega)} \tag{2.3}\]
_with the constant \(C\coloneqq c_{n}\mathrm{diam}(\Omega)^{(n-1)/n}\). In particular,_
\[\|v_{1}-v_{2}\|_{L^{\infty}(\Omega)}\leq\|v_{1}-v_{2}\|_{L^{\infty}(\partial \Omega)}+\frac{C}{n}(\mathrm{diam}(\Omega)/2)^{1/n}\|f_{1}-f_{2}\|_{L^{n}( \Omega)}. \tag{2.4}\]
Proof.: The proof is divided into two steps.
_Step 1:_ The first step establishes (2.3) under the assumptions \(f_{2}\leq f_{1}\) in \(\overline{\Omega}\) and \(v_{1}\leq v_{2}\) on \(\partial\Omega\). For \(f_{\Delta}\coloneqq f_{1}-f_{2}\geq 0\), let the sequence \((f_{\Delta,k})_{k\in\mathbb{N}}\) of smooth functions \(f_{\Delta,k}\in C^{\infty}(\overline{\Omega})\) approximate \(f_{\Delta}\in C(\overline{\Omega})\) from above such that \(f_{\Delta}\leq f_{\Delta,k}\) and \(0<f_{\Delta,k}\) in \(\overline{\Omega}\) for all \(k\in\mathbb{N}\) and \(\lim_{k\to\infty}\|f_{\Delta}-f_{\Delta,k}\|_{L^{\infty}(\Omega)}=0\). Let \(w_{k}\in C(\overline{\Omega})\) be viscosity solutions to the PDE, for all \(k\in\mathbb{N}\),
\[F_{\varepsilon}(f_{\Delta,k};x,\mathrm{D}^{2}w_{k})=0\text{ in }\Omega\quad \text{and}\quad w_{k}=0\text{ on }\partial\Omega. \tag{2.5}\]
Since \(v_{1}\leq v_{2}\) on \(\partial\Omega\) and \(f_{2}\leq f_{1}\) by assumption of Step 1, Lemma 2.3 proves
\[v_{1}\leq v_{2}\text{ in }\overline{\Omega}. \tag{2.6}\]
Proposition 2.4(b)-(c) provides the interior regularity \(w_{k}\in C^{2,\alpha}_{\mathrm{loc}}(\Omega)\) for some positive parameter \(\alpha\) that (possibly) depends on \(\varepsilon\). In particular, \(w_{k}\in C^{2}(\Omega)\) is a classical solution to the PDE (2.5). We define the continuous function \(v_{*}\coloneqq v_{2}-\|v_{1}-v_{2}\|_{L^{\infty}(\partial\Omega)}+w_{k}\in C( \overline{\Omega})\). Given any \(x\in\Omega\) and \(\varphi\in C^{2}(\Omega)\) such that \(v_{*}-\varphi=v_{2}-\left(\|v_{1}-v_{2}\|_{L^{\infty}(\partial\Omega)}-w_{k}+\varphi\right)\) has a local maximum at \(x\), the function \(\psi\coloneqq\|v_{1}-v_{2}\|_{L^{\infty}(\partial\Omega)}-w_{k}+\varphi\in C ^{2}(\Omega)\) is smooth and, therefore, an admissible test function in the definition of viscosity solutions. Since \(v_{2}\) is viscosity solution to \(F_{\varepsilon}(f_{2};x,\mathrm{D}^{2}v_{2})=0\), \(F_{\varepsilon}(f_{2};x,\mathrm{D}^{2}\psi(x))\leq 0\) follows. This, \(\mathrm{D}^{2}\psi=\mathrm{D}^{2}(\varphi-w_{k})\), the sub-additivity \(\sup(X+Y)\leq\sup X+\sup Y\) of the supremum, \(f_{\Delta}\leq f_{\Delta,k}\), and \(F_{\varepsilon}(f_{\Delta,k};x,\mathrm{D}^{2}w_{k}(x))=0\) from (2.5) lead to
\[F_{\varepsilon}(f_{1};x,\mathrm{D}^{2}\varphi(x)) \leq F_{\varepsilon}(f_{2};x,\mathrm{D}^{2}\psi(x))+F_{ \varepsilon}(f_{\Delta};x,\mathrm{D}^{2}w_{k}(x))\] \[\leq F_{\varepsilon}(f_{2};x,\mathrm{D}^{2}\psi(x))+F_{ \varepsilon}(f_{\Delta,k};x,\mathrm{D}^{2}w_{k}(x))\leq 0,\]
whence \(v_{*}\) is viscosity subsolution to the PDE \(F_{\varepsilon}(f_{1};x,\mathrm{D}^{2}v)=0\) in \(\Omega\). Therefore, \(v_{*}\leq v_{1}\) on \(\partial\Omega\) by design and the comparison principle from Lemma 2.2 provide
\[v_{*}\leq v_{1}\text{ in }\overline{\Omega}. \tag{2.7}\]
On the one hand, the zero function with \(F_{\varepsilon}(f_{\Delta,k};x,0)\geq 0\) is a viscosity supersolution to \(F_{\varepsilon}(f_{\Delta,k};x,\mathrm{D}^{2}w_{k})=0\). Hence, the comparison principle from Lemma 2.2 shows \(w_{k}\leq 0\) in \(\overline{\Omega}\). On the other hand, Proposition 2.4(a) proves that the Alexandrov solution \(z_{k}\in C(\overline{\Omega})\) to \(\det\mathrm{D}^{2}z_{k}=(f_{\Delta,k}/n)^{n}\) with homogeneous boundary data is a viscosity solution to \(F_{0}(f_{\Delta,k};x,\mathrm{D}^{2}z_{k})=0\) and Lemma 2.3 reveals \(z_{k}\leq w_{k}\), whence \(z_{k}\leq w_{k}\leq 0\) in \(\overline{\Omega}\). Consequently, the Alexandrov maximum principle from Lemma 1.1 and \(\mathcal{L}^{n}(\partial z_{k}(\Omega))^{1/n}=\|(f_{\Delta,k}/n)^{n}\|_{L^{1}(\Omega)}^{1/n}=\|f_{\Delta,k}\|_{L^{n}(\Omega)}/n\) imply
\[0\leq-w_{k}\leq-z_{k}\leq\frac{C}{n}\max_{x\in\overline{\omega}}\mathrm{ dist}(x,\partial\Omega)^{1/n}\|f_{\Delta,k}\|_{L^{n}(\Omega)}\quad\text{in }\overline{\omega} \tag{2.8}\]
for any subset \(\omega\subset\Omega\). The combination of (2.6)-(2.8) with \(v_{*}=v_{2}-\|v_{1}-v_{2}\|_{L^{\infty}(\partial\Omega)}+w_{k}\) results in
\[\|v_{1}-v_{2}\|_{L^{\infty}(\omega)}\leq\|v_{2}-v_{*}\|_{L^{ \infty}(\omega)}=\|v_{1}-v_{2}\|_{L^{\infty}(\partial\Omega)}+\|w_{k}\|_{L^{ \infty}(\omega)}\] \[\leq\|v_{1}-v_{2}\|_{L^{\infty}(\partial\Omega)}+\frac{C}{n}\max_ {x\in\overline{\omega}}\mathrm{dist}(x,\partial\Omega)^{1/n}\|f_{\Delta,k}\|_{L^ {n}(\Omega)}.\]
Passing the right-hand side to the limit as \(k\to\infty\) with \(\lim_{k\to\infty}\|f_{\Delta,k}\|_{L^{n}(\Omega)}=\|f_{\Delta}\|_{L^{n}(\Omega)}=\|f_{1}-f_{2}\|_{L^{n}(\Omega)}\) concludes (2.3).
_Step 2:_ The second step establishes (2.3) without the additional assumptions from Step 1. For the functions \(f_{*}\coloneqq\min\{f_{1},f_{2}\}\), \(f^{*}\coloneqq\max\{f_{1},f_{2}\}\), and \(f_{\Delta}\coloneqq f^{*}-f_{*}=|f_{1}-f_{2}|\geq 0\), let \(v^{*},v_{*}\in C(\overline{\Omega})\) be viscosity solutions to the PDE
\[F_{\varepsilon}(f_{*};x,\mathrm{D}^{2}v^{*})=0\text{ in }\Omega\quad\text{and}\quad v^{*}=\max\{v_{1},v_{2}\}\text{ on }\partial\Omega, \tag{2.9}\]
\[F_{\varepsilon}(f^{*};x,\mathrm{D}^{2}v_{*})=0\text{ in }\Omega\quad\text{and}\quad v_{*}=\min\{v_{1},v_{2}\}\text{ on }\partial\Omega. \tag{2.10}\]
\[\|v_{1}-v_{2}\|_{L^{\infty}(\omega)}\leq\|v^{*}-v_{*}\|_{L^{\infty}(\omega)} \quad\text{for any open subset }\omega\subset\Omega. \tag{2.11}\]
The application of Step 1 to the viscosity solutions \(v^{*},v_{*}\) of (2.9)-(2.10) with \(f_{*}\leq f^{*}\) and \(v_{*}\leq v^{*}\) on \(\partial\Omega\), and the identity \(\max\{a,b\}-\min\{a,b\}=|a-b|\) reveal
\[\|v^{*}-v_{*}\|_{L^{\infty}(\omega)}\leq\|v_{1}-v_{2}\|_{L^{\infty}(\partial \Omega)}+\frac{C}{n}\max_{x\in\Xi}\operatorname{dist}(x,\partial\Omega)^{1/n} \|f_{1}-f_{2}\|_{L^{n}(\Omega)}.\]
The combination of this with (2.11) concludes (2.3).
The stability estimate from Theorem 2.5 motivates a solution concept for the HJB equation (1.2) with \(L^{n}\) right-hand sides.
**Lemma 2.6** (generalized viscosity solution).: _Given \(f\in L^{n}(\Omega)\), \(g\in C(\partial\Omega)\) and \(0\leq\varepsilon\leq 1/n\), where we assume \(f\geq 0\) if \(\varepsilon=0\), there exists a unique function \(u\in C(\overline{\Omega})\) such that \(u\) is the uniform limit of any sequence \((u_{j})_{j\in\mathbb{N}}\) of viscosity solutions \(u_{j}\in C(\overline{\Omega})\) to_
\[F_{\varepsilon}(f_{j};x,\mathrm{D}^{2}u_{j})=0\text{ in }\Omega\quad\text{and} \quad u_{j}=g_{j}\text{ on }\partial\Omega \tag{2.12}\]
_for right-hand sides \(f_{j}\in C(\overline{\Omega})\) and Dirichlet data \(g_{j}\in C(\overline{\Omega})\) with \(\lim_{j\to\infty}\|f-f_{j}\|_{L^{n}(\Omega)}=0\) and \(\lim_{j\to\infty}\|g-g_{j}\|_{L^{\infty}(\partial\Omega)}=0\). The function \(u\) is called generalized viscosity solution to (1.2). If \(\varepsilon=0\) and \(f\geq 0\), then the generalized viscosity solution to (1.2) and the Alexandrov solution to (1.1) coincide._
Proof.: Let \((f_{j})_{j\in\mathbb{N}}\subset C(\overline{\Omega})\) (resp. \((g_{j})_{j\in\mathbb{N}}\subset C(\overline{\Omega})\)) approximate \(f\) in \(L^{n}(\Omega)\) (resp. \(g\) in \(C(\partial\Omega)\)). For any index \(j,k\in\mathbb{N}\), the stability estimate (2.4) from Theorem 2.5 provides
\[\|u_{j}-u_{k}\|_{L^{\infty}(\Omega)}\leq\|g_{j}-g_{k}\|_{L^{\infty}(\partial \Omega)}+\frac{C}{n}(\operatorname{diam}(\Omega)/2)^{1/n}\|f_{j}-f_{k}\|_{L^{n }(\Omega)}.\]
Since \((f_{j})_{j\in\mathbb{N}}\) (resp. \((g_{j})_{j\in\mathbb{N}}\)) is a Cauchy sequence in \(L^{n}(\Omega)\) (resp. \(C(\partial\Omega)\)), this implies that \((u_{j})_{j\in\mathbb{N}}\) is a Cauchy sequence in the Banach space \(C(\overline{\Omega})\) endowed with the \(L^{\infty}\) norm. Therefore, there exists \(u\in C(\overline{\Omega})\) with \(\lim_{j\to\infty}\|u-u_{j}\|_{L^{\infty}(\Omega)}=0\). It remains to prove that \(u\) is independent of the choice of the approximation sequences for \(f\) and \(g\). To this end, let \((\widetilde{f_{j}})_{j\in\mathbb{N}}\) be another sequence of continuous functions \(\widetilde{f_{j}}\in C(\overline{\Omega})\) with \(\lim_{j\to\infty}\|f-\widetilde{f_{j}}\|_{L^{n}(\Omega)}=0\). Then the sequence \((\widetilde{u}_{j})_{j\in\mathbb{N}}\) of viscosity solutions \(\widetilde{u}_{j}\in C(\overline{\Omega})\) to (2.12) with \(f_{j}\) replaced by \(\widetilde{f_{j}}\) converges uniformly to some \(\widetilde{u}\in C(\overline{\Omega})\). The stability estimate (2.4) from Theorem 2.5 shows
\[\|u_{j}-\widetilde{u}_{j}\|_{L^{\infty}(\Omega)}\leq\frac{C}{n}(\operatorname{ diam}(\Omega)/2)^{1/n}\|f_{j}-\widetilde{f_{j}}\|_{L^{n}(\Omega)}\]
for any \(j\in\mathbb{N}\). The right-hand side of this vanishes in the limit and the left-hand side converges to \(\|u-\widetilde{u}\|_{L^{\infty}(\Omega)}\) as \(j\to\infty\), whence \(u=\widetilde{u}\) in \(\overline{\Omega}\). If \(f\geq 0\), then there exists a sequence \((f_{j})_{j\in\mathbb{N}}\) of nonnegative continuous functions \(0\leq f_{j}\in C(\overline{\Omega})\) with \(\lim_{j\to\infty}\|f-f_{j}\|_{L^{n}(\Omega)}=0\) (e.g., from convolution with a nonnegative mollifier). Proposition 2.4(a) provides, for all \(j\in\mathbb{N}\), that the viscosity solution \(u_{j}\) to (2.12) with \(\varepsilon=0\) is the Alexandrov solution to \(\det\mathrm{D}^{2}u_{j}=(f_{j}/n)^{n}\) in \(\Omega\). Since \(u_{j}\) converges uniformly to the generalized viscosity solution \(u\) to (1.2), the stability of
Alexandrov solutions [11, Corollary 2.12 and Proposition 2.16] concludes that \(u\) is the Alexandrov solution to (1.1).
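The mollification step in the proof above is easy to illustrate numerically. The following sketch (our own illustration on a uniform grid, with scipy's Gaussian filter standing in for a generic nonnegative mollifier) shows nonnegative smooth approximations \(f_{j}\) approaching a nonnegative \(f\) as the mollifier width shrinks:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
f = np.maximum(rng.standard_normal((128, 128)), 0.0)  # a nonnegative sample f

# Convolution with a Gaussian kernel is a nonnegative mollifier: it keeps
# f_j >= 0, and f_j approaches f in L^n as the width shrinks (n = 2 here).
for sigma in [8.0, 4.0, 2.0, 1.0, 0.5]:
    f_j = gaussian_filter(f, sigma=sigma, mode="nearest")
    err = np.sqrt(np.mean((f - f_j) ** 2))  # grid proxy for the L^2 norm
    print(f"sigma = {sigma:4.1f}   ||f - f_j||_L2 ~ {err:.4f}")
```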
By approximation of the right-hand sides, the stability estimates from Theorem 2.5 also apply to generalized viscosity solutions to the HJB equation (1.2).
**Corollary 2.7** (extended \(L^{\infty}\) stability).: _Given any \(0\leq\varepsilon\leq 1/n\), \(f_{j}\in L^{n}(\Omega)\), where we assume \(f_{j}\geq 0\) if \(\varepsilon=0\), and \(g_{j}\in C(\overline{\Omega})\), the generalized viscosity solutions \(v_{j}\in C(\overline{\Omega})\) to \(F_{\varepsilon}(f_{j};x,\mathrm{D}^{2}v_{j})=0\) in \(\Omega\) for \(j\in\{1,2\}\) satisfy (2.3)-(2.4)._
Proof.: For any index \(j\in\{1,2\}\), there exists a sequence \((f_{j,k})_{k\in\mathbb{N}}\) of smooth functions \(f_{j,k}\in C^{\infty}(\overline{\Omega})\) that approximates \(f_{j}\) in \(L^{n}(\Omega)\), i.e., \(\lim_{k\to\infty}\|f_{j}-f_{j,k}\|_{L^{n}(\Omega)}=0\). Given any \(j\in\{1,2\}\) and \(k\in\mathbb{N}\), let \(v_{j,k}\in C(\overline{\Omega})\) denote the viscosity solution to the HJB equation \(F_{\varepsilon}(f_{j,k};x,\mathrm{D}^{2}v_{j,k})=0\) in \(\Omega\) and \(v_{j,k}=v_{j}\) on \(\partial\Omega\). The \(L^{\infty}\) stability estimate (2.3) from Theorem 2.5 shows, for any \(k\in\mathbb{N}\), that
\[\|v_{1,k}-v_{2,k}\|_{L^{\infty}(\omega)}\leq\|v_{1}-v_{2}\|_{L^{\infty}( \partial\Omega)}+\frac{C}{n}\max_{x\in\overline{\omega}}\mathrm{dist}(x, \partial\Omega)^{1/n}\|f_{1,k}-f_{2,k}\|_{L^{n}(\Omega)}.\]
The left-hand side of this converges to \(\|v_{1}-v_{2}\|_{L^{\infty}(\omega)}\) as \(k\to\infty\) by the definition of generalized viscosity solutions in Lemma 2.6. Hence, \(\lim_{k\to\infty}\|f_{1,k}-f_{2,k}\|_{L^{n}(\Omega)}=\|f_{1}-f_{2}\|_{L^{n}(\Omega)}\) concludes the proof.
_Remark 2.8_ (\(L^{\infty}\) stability for Alexandrov solutions).: If the right-hand sides \(0\leq f_{1},f_{2}\in L^{n}(\Omega)\) are nonnegative, then the generalized solutions \(v_{1},v_{2}\) from Corollary 2.7 are Alexandrov solutions to \(\det\mathrm{D}^{2}v_{j}=(f_{j}/n)^{n}\), cf. Lemma 2.6. Therefore, Corollary 2.7 provides \(L^{\infty}\) stability estimates for Alexandrov solutions.
The convexity of the differential operator \(F_{\varepsilon}\) in \(\mathbb{S}\) leads to existence (and uniqueness) of strong solutions \(u_{\varepsilon}\in C(\overline{\Omega})\cap W^{2,n}_{\mathrm{loc}}(\Omega)\) to (1.2) for any \(\varepsilon>0\), \(f\in L^{n}(\Omega)\), and \(g\in C(\partial\Omega)\)[3]. It turns out that strong solutions are generalized viscosity solutions. For the purpose of this paper, we only provide a weaker result.
**Theorem 2.9** (strong solution implies generalized viscosity solution).: _Let \(0<\varepsilon\leq 1/n\), \(f\in L^{n}(\Omega)\), and \(g\in C(\partial\Omega)\) be given. Suppose that \(u_{\varepsilon}\in W^{2,n}(\Omega)\) is a strong solution to (1.2) in the sense that (1.2) is satisfied a.e. in \(\Omega\). Then this strong solution \(u_{\varepsilon}\) is the unique generalized viscosity solution to (1.2)._
The proof of Theorem 2.9 utilizes the following elementary result.
**Lemma 2.10** (computation and stability of right-hand side).: _Let \(\varepsilon>0\) be given. For any \(M\in\mathbb{S}\), there exists a unique \(\xi(M)\in\mathbb{R}\) such that \(\max_{A\in\mathbb{S}(\varepsilon)}(-A:M+\xi(M)\sqrt[n]{\det A})=0\). Furthermore, any \(M,N\in\mathbb{S}\) satisfy the stability \(|\xi(M)-\xi(N)|\leq C(\varepsilon)|M-N|\) with a constant depending on the regularization parameter \(\varepsilon\)._
Proof.: Given a symmetric matrix \(M\in\mathbb{S}\), define the continuous real-valued function
\[\Psi_{M}(\xi)\coloneqq\max_{A\in\mathbb{S}(\varepsilon)}(-A:M+\xi\sqrt[n]{ \det A}). \tag{2.13}\]
Since \(\Psi_{M}\) is strictly monotonically increasing with the limits \(\lim_{\xi\to-\infty}\Psi_{M}=-\infty\) and \(\lim_{\xi\to\infty}\Psi_{M}=+\infty\), there exists a unique root \(\xi(M)\) such that \(\Psi_{M}(\xi(M))=0\). For any \(M,N\in\mathbb{S}\), the inequality \(\max X-\max Y\leq\max(X-Y)\) shows
\[0=\Psi_{M}(\xi(M))-\Psi_{N}(\xi(N))\leq\Psi_{M-N}(\xi(M)-\xi(N)). \tag{2.14}\]
Let \(A\in\mathbb{S}(\varepsilon)\) be chosen such that \(\Psi_{M-N}(\xi(M)-\xi(N))=-A:(M-N)+(\xi(M)-\xi(N))\sqrt[n]{\det A}\). Then it follows from (2.14) that
\[\xi(N)-\xi(M)\leq A:(N-M)/\sqrt[n]{\det A}\leq|A||M-N|/\sqrt[n]{\det A}. \tag{2.15}\]
Exchanging the roles of \(M\) and \(N\) in (2.15) leads to \(\xi(M)-\xi(N)\leq|B||M-N|/\sqrt[n]{\det B}\) for some \(B\in\mathbb{S}(\varepsilon)\). Since \(|A|/\sqrt[n]{\det A}\leq 1/\sqrt[n]{\varepsilon^{n-1}(1-(n-1)\varepsilon)}\) holds for any \(A\in\mathbb{S}(\varepsilon)\), the combination of this with (2.15) concludes \(|\xi(N)-\xi(M)|\leq|M-N|/\sqrt[n]{\varepsilon^{n-1}(1-(n-1)\varepsilon)}\).
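Since \(\Psi_{M}\) is strictly increasing in \(\xi\), the root \(\xi(M)\) of Lemma 2.10 is amenable to bisection; a sketch for \(n=2\) (our illustration, reusing the brute-force evaluator F_eps from the sketch after (1.3), so accurate only up to its grid resolution):

```python
import numpy as np

def xi_of(M: np.ndarray, eps: float, tol: float = 1e-8) -> float:
    # Psi_M(xi) equals F_eps with right-hand side value xi; it is strictly
    # increasing in xi, so the unique root is bracketed and then bisected.
    lo, hi = -1.0, 1.0
    while F_eps(M, lo, eps) > 0.0:  # expand until Psi_M(lo) <= 0
        lo *= 2.0
    while F_eps(M, hi, eps) < 0.0:  # expand until Psi_M(hi) >= 0
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F_eps(M, mid, eps) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

# Sanity check: for M = I the maximiser A = I/2 lies in S(eps), so xi(I) = 2.
print(xi_of(np.eye(2), eps=0.1))  # ~2.0
```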
Proof of Theorem 2.9.: Let \(v_{j}\in C^{2}(\overline{\Omega})\) be a sequence of smooth functions that approximate \(u_{\varepsilon}\) with \(\lim_{j\to\infty}\|u_{\varepsilon}-v_{j}\|_{W^{2,n}(\Omega)}=0\). Lemma 2.10 proves that there exists a (unique) function \(f_{j}\coloneqq\xi(\mathrm{D}^{2}v_{j})\) with \(F_{\varepsilon}(f_{j};x,\mathrm{D}^{2}v_{j})=0\) in \(\Omega\). We apply the stability from Lemma 2.10 twice. First, \(|f_{j}(x)-f_{j}(y)|\leq C(\varepsilon)|\mathrm{D}^{2}v_{j}(x)-\mathrm{D}^{2}v _{j}(y)|\) for any \(x,y\in\Omega\) implies continuity \(f_{j}\in C(\overline{\Omega})\) of \(f_{j}\) and second, \(|f(x)-f_{j}(x)|\leq C(\varepsilon)|\mathrm{D}^{2}u_{\varepsilon}(x)-\mathrm{D} ^{2}v_{j}(x)|\) for a.e. \(x\in\Omega\) implies the convergence \(\lim_{j\to\infty}\|f-f_{j}\|_{L^{n}(\Omega)}=0\). Notice from the Sobolev embedding that \(v_{j}\) converges uniformly to \(u_{\varepsilon}\) in \(\overline{\Omega}\) as \(j\to\infty\). In conclusion, \(u_{\varepsilon}\) is the uniform limit of classical (and in particular, viscosity) solutions \(v_{j}\) such that the corresponding right-hand sides and Dirichlet data converge in the correct norm, i.e., \(\lim_{j\to\infty}\|f-f_{j}\|_{L^{n}(\Omega)}=0\) and \(\lim_{j\to\infty}\|g-v_{j}\|_{L^{\infty}(\partial\Omega)}=0\). Lemma 2.6 proves that \(u_{\varepsilon}\) is the unique (generalized) viscosity solution.
## 3. Convergence of the regularization
This section establishes the uniform convergence of the generalized viscosity solution \(u_{\varepsilon}\) of the regularized HJB equation (1.2) to the Alexandrov solution \(u\) of the Monge-Ampere equation (1.1) for any nonnegative right-hand side \(0\leq f\in L^{n}(\Omega)\). The proof is carried out in any space dimension \(n\) and does not rely on the concept of strong solutions in two space dimensions from [18, 19]. Instead, it takes a main result of [12] as its point of departure.
**Theorem 3.1** (convergence of regularization for smooth data).: _Let \(f\in C^{0,\alpha}(\Omega)\), \(0<\lambda\leq f\leq\Lambda\), and \(g\in C^{1,\beta}(\partial\Omega)\) with positive constants \(0<\alpha,\beta<1\) and \(0<\lambda\leq\Lambda\) be given. Let \(u\in C(\overline{\Omega})\cap C^{2,\alpha}_{\mathrm{loc}}(\Omega)\) be the unique classical solution to (1.1) from Proposition 2.4(c)._
_(a) For any sequence \(0<(\varepsilon_{j})_{j\in\mathbb{N}}\leq 1/n\) with \(\lim_{j\to\infty}\varepsilon_{j}=0\), the sequence \((u_{\varepsilon_{j}})_{j\in\mathbb{N}}\) of classical solutions \(u_{\varepsilon_{j}}\in C(\overline{\Omega})\cap C^{2}(\Omega)\) to (1.2) with \(\varepsilon\coloneqq\varepsilon_{j}\) from Proposition 2.4(b) converges uniformly to \(u\) in \(\Omega\) as \(j\to\infty\)._
_(b) If \(g\equiv 0\), \(f\in C^{2,\alpha}(\Omega)\), and \(f>0\) in \(\overline{\Omega}\), then, for some constant \(C\) and all \(0<\varepsilon\leq 1/n\), the generalized viscosity solution \(u_{\varepsilon}\) to (1.2) satisfies_
\[\|u-u_{\varepsilon}\|_{L^{\infty}(\Omega)}\leq C\varepsilon^{1/(n^{2}(2n+3))}.\]
Proof.: The proof of Theorem 3.1 can follow the lines of the proof of [12, Theorem 4.1], where Lemma 3.2 below replaces its counterpart [12, Lemma 4.2] in two space dimensions. We note that the assumption \(g\in H^{2}(\Omega)\) in [12, Theorem 4.1] is only required for the existence of strong solutions \(u_{\varepsilon}\in H^{2}(\Omega)\) and can be dropped. Further details of the proof are omitted.
**Lemma 3.2** (effect of regularization).: _Given \(0<\varepsilon\leq 1/n\), \(M\in\mathbb{S}\), and \(\xi>0\), suppose that \(|M|_{n}^{n}\leq\xi^{n}(1/\varepsilon-(n-1))/n^{n}\) and \(\max_{A\in\mathbb{S}(0)}(-A:M+\xi\sqrt[n]{\det A})=0\). Then \(\max_{A\in\mathbb{S}(\varepsilon)}(-A:M+\xi\sqrt[n]{\det A})=0\)._
Proof.: The assumption \(\max_{A\in\mathbb{S}(0)}(-A:M+\xi\sqrt[n]{\det A})=0\) implies that \(M>0\) is positive definite and \(\det M=(\xi/n)^{n}\)[14, p. 51]. Let \(\varrho_{1},\ldots,\varrho_{n}\) denote the positive eigenvalues of \(M\) and \(t_{j}\coloneqq\varrho_{j}^{-1}/(\sum_{k=1}^{n}\varrho_{k}^{-1})\) for \(j=1,\ldots,n\). By design of \(t_{j}\),
\[\varrho_{j}^{-1}=t_{j}\left(\frac{\varrho_{1}^{-1}\cdots\varrho_{n}^{-1}}{t_{ 1}\ldots t_{n}}\right)^{1/n},\]
whence \(\varrho_{j}=\xi(t_{1}\dots t_{n})^{1/n}/(nt_{j})\). Without loss of generality, suppose that \(t_{1}\leq t_{2}\leq\dots\leq t_{n}\). The elementary bound \(t_{1}\dots t_{n}\geq t_{1}^{n-1}(1-(n-1)t_{1})\) proves
\[\xi^{n}(1-(n-1)t_{1})/t_{1}\leq\xi^{n}(t_{1}\dots t_{n})/t_{1}^{n}=n^{n} \varrho_{1}^{n}\leq n^{n}|M|_{n}^{n}.\]
Hence, \(1/t_{1}\leq n^{n}|M|_{n}^{n}/\xi^{n}+(n-1)\leq 1/\varepsilon\) by assumption and so, \(t_{1}\geq\varepsilon\). In particular, \(\varepsilon\leq t_{1}\leq\dots\leq t_{n}\) and \(t_{1}+\dots+t_{n}=1\). Notice that \(t\coloneqq(t_{1},\dots,t_{n})\in\mathbb{R}^{n}\) maximizes the scalar-valued function \(\psi:\mathbb{R}^{n}\to\mathbb{R}\) with
\[\psi(s)\coloneqq-s_{1}\varrho_{1}-\dots-s_{n}\varrho_{n}+\xi\sqrt[n]{s_{1} \dots s_{n}}\]
among \(s\in S(0)\), where \(S(\varepsilon)\coloneqq\{s=(s_{1},\dots,s_{n}):s\geq\varepsilon\text{ and }s_{1}+\dots+s_{n}=1\}\). Since \(\psi(t)=\max_{s\in S(0)}\psi(s)=\max_{A\in\mathbb{S}(0)}(-A:M+\xi\sqrt[n]{\det A})\)[14, p. 51-52] and \(t\in S(\varepsilon)\), this implies that \(0=\psi(t)=\max_{A\in\mathbb{S}(\varepsilon)}(-A:M+\xi\sqrt[n]{\det A})\).
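The proof suggests a simple numerical cross-check, sketched below in Python: for random SPD matrices \(M\) with \(\xi\coloneqq n\sqrt[n]{\det M}\), so that \(\det M=(\xi/n)^{n}\), the optimal weights \(t_{j}\) must stay above \(\varepsilon\) whenever the norm hypothesis of Lemma 3.2 holds. Reading \(|M|_{n}\) as the Schatten-\(n\) norm of the eigenvalues is our assumption here; any norm dominating the largest eigenvalue makes the checked implication valid.

```python
import numpy as np

# Numerical cross-check of Lemma 3.2 in n = 3 dimensions.
rng = np.random.default_rng(1)
n, eps = 3, 1e-2
for _ in range(1000):
    B = rng.standard_normal((n, n))
    M = B @ B.T + 1e-3 * np.eye(n)          # random SPD test matrix
    rho = np.linalg.eigvalsh(M)              # positive eigenvalues
    xi = n * np.linalg.det(M) ** (1.0 / n)   # so that det M = (xi / n)^n
    t = (1.0 / rho) / np.sum(1.0 / rho)      # optimal weights from the proof
    if np.sum(rho**n) <= xi**n * (1.0 / eps - (n - 1)) / n**n:
        # |M|_n^n <= xi^n (1/eps - (n-1)) / n^n forces t_min >= eps.
        assert t.min() >= eps - 1e-12
```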
The approximation of nonsmooth data leads to the following convergence result under (almost) minimal assumptions (general Borel measures as right-hand sides are excluded).
**Theorem 3.3** (convergence of regularization).: _Let a sequence \((\varepsilon_{j})_{j\in\mathbb{N}}\subset(0,1/n]\) with \(\lim_{j\to\infty}\varepsilon_{j}=0\), a nonnegative right-hand side \(0\leq f\in L^{n}(\Omega)\), and Dirichlet data \(g\in C(\partial\Omega)\) be given. Then the sequence \((u_{j})_{j\in\mathbb{N}}\) of generalized viscosity solutions \(u_{j}\in C(\overline{\Omega})\) to_
\[F_{\varepsilon_{j}}(f;x,\mathrm{D}^{2}u_{j})=0\text{ in }\Omega\quad\text{and} \quad u_{j}=g\text{ on }\partial\Omega\]
_converges uniformly \(\lim_{j\to\infty}\|u-u_{j}\|_{L^{\infty}(\Omega)}=0\) to the Alexandrov solution \(u\) to the Monge-Ampere equation (1.1)._
Proof.: Recall the constant \(c_{n}\) from Lemma 1.1 and \(C\coloneqq c_{n}\mathrm{diam}(\Omega)^{(n-1)/n}\). Given \(\delta>0\), there exist smooth functions \(f_{\delta},g_{\delta}\in C^{\infty}(\overline{\Omega})\) such that
* \(f_{\delta}>0\) in \(\overline{\Omega}\) and \(\|f-f_{\delta}\|_{L^{n}(\Omega)}\leq n\delta/(8C(\mathrm{diam}(\Omega)/2)^{1/n})\) (the approximation \(f_{\delta}\) can be constructed by the convolution of \(f\) with a nonnegative mollifier plus an additional small constant),
* \(\|g-g_{\delta}\|_{L^{\infty}(\partial\Omega)}\leq\delta/4\).
Notice that the bound \(f_{\delta}>0\) in \(\overline{\Omega}\) and the smoothness of the Dirichlet data \(g_{\delta}\in C^{\infty}(\partial\Omega)\) guarantee the strict convexity of the Alexandrov solution \(u_{\delta}\) to the Monge-Ampere equation \(\det\mathrm{D}^{2}u_{\delta}=(f_{\delta}/n)^{n}\) with Dirichlet data \(u_{\delta}=g_{\delta}\) on \(\partial\Omega\)[11, Corollary 4.11]. This is a crucial assumption in Theorem 3.1, which leads to the uniform convergence of the sequence \((u_{\delta,j})_{j\in\mathbb{N}}\) of viscosity solutions \(u_{\delta,j}\in C(\overline{\Omega})\) to the HJB equation
\[F_{\varepsilon_{j}}(f_{\delta};x,\mathrm{D}^{2}u_{\delta,j})=0\text{ a.e. in }\Omega\quad\text{and}\quad u_{\delta,j}=g_{\delta}\text{ on }\partial\Omega\]
towards \(u_{\delta}\) as \(j\to\infty\). Therefore, there exists a \(j_{0}\in\mathbb{N}\) such that \(\|u_{\delta}-u_{\delta,j}\|_{L^{\infty}(\Omega)}\leq\delta/4\) for all \(j\geq j_{0}\). The stability estimate (2.4) from Corollary 2.7 and (i)-(ii) provide
\[\|u-u_{\delta}\|_{L^{\infty}(\Omega)}+\|u_{j}-u_{\delta,j}\|_{L^{ \infty}(\Omega)}\] \[\qquad\leq 2\|g-g_{\delta}\|_{L^{\infty}(\partial\Omega)}+\frac{2C}{n }(\mathrm{diam}(\Omega)/2)^{1/n}\|f-f_{\delta}\|_{L^{n}(\Omega)}\leq 3\delta/4.\]
This, the triangle inequality, and \(\|u_{\delta}-u_{\delta,j}\|_{L^{\infty}(\Omega)}\leq\delta/4\) verify, for all \(j\geq j_{0}\), that \(\|u-u_{j}\|_{L^{\infty}(\Omega)}\leq\delta\), whence \(u_{j}\) converges uniformly to \(u\) as \(j\to\infty\).
## 4. A posteriori error estimate
In this section we prove an a posteriori error bound for a given approximation \(v_{h}\) to the Alexandrov solution \(u\) of the Monge-Ampere equation. In what follows we assume a given finite partition \(\mathcal{T}\) of \(\overline{\Omega}\) into closed polytopes such that the interiors of any distinct \(T,K\in\mathcal{T}\) are disjoint and the union over \(\mathcal{T}\) equals \(\overline{\Omega}\). Let \(V_{h}\subset C^{1,1}(\overline{\Omega})\) be a subspace of functions that belong to \(C^{2}(T)\) when restricted to any set \(T\in\mathcal{T}\) of the partition. (Here, \(C^{2}\) up to the boundary of \(T\) means that there exists a sufficiently smooth extension of the function \(v_{h}|_{\mathrm{int}(T)}\) to \(T\) for \(v_{h}\in V_{h}\).) The piecewise Hessian of any \(v_{h}\in V_{h}\) is denoted by \(\mathrm{D}^{2}_{\mathrm{pw}}v_{h}\). In practical examples, we think of \(V_{h}\) as a space of \(C^{1}\)-regular finite element functions. Given any \(v\in C(\Omega)\), its convex envelope is defined as
\[\Gamma_{v}(x)\coloneqq\sup_{\begin{subarray}{c}w:\mathbb{R}^{n}\to\mathbb{R} \text{ affine}\\ w\leq v\end{subarray}}w(x)\quad\text{for any }x\in\Omega. \tag{4.1}\]
Let \(\mathcal{C}_{v}\coloneqq\{x\in\Omega:v(x)=\Gamma_{v}(x)\}\) denote the contact set of \(v\).
**Theorem 4.1** (guaranteed error control for Monge-Ampere).: _Given a nonnegative right-hand side \(f\in L^{n}(\Omega)\) and \(g\in C(\partial\Omega)\), let \(u\in C(\overline{\Omega})\) be the Alexandrov solution to (1.1). Let \(v_{h}\in V_{h}\) with its convex envelope \(\Gamma_{v_{h}}\) be given and define \(f_{h}\coloneqq\chi_{\mathcal{C}_{v_{h}}}n(\det\mathrm{D}^{2}_{\mathrm{pw}}v_{h})^{1/n}\). For any convex subset \(\Omega^{\prime}\subset\Omega\), we have_
\[\|u-\Gamma_{v_{h}}\|_{L^{\infty}(\Omega)}\leq\limsup_{x\to\partial\Omega}|( g-\Gamma_{v_{h}})(x)|+\frac{c_{n}}{2^{1/n}n}\mathrm{diam}(\Omega^{\prime})\|f-f_{h}\| _{L^{n}(\Omega^{\prime})} \tag{4.2}\] \[+\frac{c_{n}}{n}\mathrm{diam}(\Omega)^{(n-1)/n}\max_{x\in\overline{\Omega \setminus\Omega^{\prime}}}\mathrm{dist}(x,\partial\Omega)^{1/n}\|f- f_{h}\|_{L^{n}(\Omega)}=:\mathrm{RHS}_{0}.\]
The proof of Theorem 4.1 requires the following result on the Monge-Ampere measure of the convex envelope \(\Gamma_{v_{h}}\).
**Lemma 4.2** (MA measure of the convex envelope).: _The convex envelope \(\Gamma_{v_{h}}\) of any \(v_{h}\in V_{h}\) satisfies \(\det\mathrm{D}^{2}\Gamma_{v_{h}}=\widetilde{f}_{h}\,\mathrm{d}x\) in the sense of Monge-Ampere measure with the nonnegative function \(\widetilde{f}_{h}\coloneqq\chi_{\mathcal{C}_{v_{h}}}\det\mathrm{D}^{2}_{\mathrm{pw}}v_{h} \in L^{\infty}(\Omega)\)._
Proof.: We first claim that \(\partial\Gamma_{v_{h}}(x)=\partial v_{h}(x)=\{\nabla v_{h}(x)\}\) holds for all \(x\in\Omega\cap\mathcal{C}_{v_{h}}\). In fact, if \(p\in\partial\Gamma_{v_{h}}(x)\), then \(\ell_{x,p}(z)\coloneqq\Gamma_{v_{h}}(x)+p\cdot(z-x)\) is a supporting hyperplane touching \(\Gamma_{v_{h}}\) from below at \(x\). By design of the convex envelope \(\Gamma_{v_{h}}\), \(\ell_{x,p}\leq v_{h}\). Since \(\ell_{x,p}(x)=v_{h}(x)\) because \(x\in\Omega\cap\mathcal{C}_{v_{h}}\), \(\ell_{x,p}\) touches \(v_{h}\) at \(x\) from below. We deduce \(p=\nabla v_{h}(x)\) from the differentiability of \(v_{h}\). The claim then follows from the fact that the subdifferential \(\partial\Gamma_{v_{h}}\) is nonempty in \(\Omega\)[16, Theorem 23.4]. The set \(\partial\Gamma_{v_{h}}(\Omega\setminus\mathcal{C}_{v_{h}})\) has Lebesgue measure zero [8, p. 995] and \(\partial\Gamma_{v_{h}}(x)=\partial v_{h}(x)=\{\nabla v_{h}(x)\}\) holds for all \(x\in\Omega\cap\mathcal{C}_{v_{h}}\). Therefore, the area formula [11, Theorem A.31] implies, for any Borel set \(\omega\subset\Omega\), that
\[\mu_{\Gamma_{v_{h}}}(\omega)=\mathcal{L}^{n}(\partial\Gamma_{v_{h}}(\omega))= \mathcal{L}^{n}(\nabla v_{h}(\omega\cap\mathcal{C}_{v_{h}}))=\int_{\omega\cap \mathcal{C}_{v_{h}}}\det\mathrm{D}^{2}_{\mathrm{pw}}v_{h}\,\mathrm{d}x.\]
This formula implies that \(\chi_{\mathcal{C}_{v_{h}}}\det\mathrm{D}^{2}_{\mathrm{pw}}v_{h}\geq 0\) is a nonnegative function a.e. in \(\Omega\). Consequently, \(\mu_{\Gamma_{v_{h}}}=\widetilde{f}_{h}\,\mathrm{d}x\) with \(\widetilde{f}_{h}\coloneqq\chi_{\mathcal{C}_{v_{h}}}\det\mathrm{D}^{2}_{\mathrm{pw}}v_{h}\geq 0\).
Proof of Theorem 4.1.: Lemma 4.2 proves that the Monge-Ampere measure \(\mu_{\Gamma_{v_{h}}}=(f_{h}/n)^{n}\,\mathrm{d}x\) of \(\Gamma_{v_{h}}\) can be expressed by the \(L^{1}\) density function \((f_{h}/n)^{n}\). In particular, \(\Gamma_{v_{h}}\) is the generalized viscosity solution to \(F_{0}(f_{h};x,\mathrm{D}^{2}\Gamma_{v_{h}})=0\) in \(\Omega\). The application of the stability estimate (2.4) from Corollary 2.7 on the convex subset \(\Omega^{\prime}\subset\Omega\) instead of \(\Omega\) leads to
\[\|u-\Gamma_{v_{h}}\|_{L^{\infty}(\Omega^{\prime})}\leq\|u-\Gamma_{v_{h}}\|_{L^{ \infty}(\partial\Omega^{\prime})}+\frac{c_{n}}{2^{1/n}n}\mathrm{diam}(\Omega^{ \prime})\|f-f_{h}\|_{L^{n}(\Omega^{\prime})}.\]
The unknown error \(\|u-\Gamma_{v_{h}}\|_{L^{\infty}(\partial\Omega^{\prime})}\leq\|u-\Gamma_{v_{h}}\| _{L^{\infty}(\Omega\setminus\Omega^{\prime})}\) can be bounded by the local estimate (2.3) from Corollary 2.7 with \(\omega\coloneqq\Omega\setminus\Omega^{\prime}\). If \(\Gamma_{v_{h}}\in C(\overline{\Omega})\) is continuous up to the boundary \(\partial\Omega\) of \(\Omega\), this reads
\[\|u-\Gamma_{v_{h}}\|_{L^{\infty}(\Omega\setminus\Omega^{\prime})} \leq\|g-\Gamma_{v_{h}}\|_{L^{\infty}(\partial\Omega)}\] \[\quad+\frac{c_{n}}{n}\mathrm{diam}(\Omega)^{(n-1)/n}\max_{x\in \overline{\Omega\setminus\Omega^{\prime}}}\mathrm{dist}(x,\partial\Omega)^{1/ n}\|f-f_{h}\|_{L^{n}(\Omega)}.\]
Since \(\Gamma_{v_{h}}\) may only be continuous in the domain \(\Omega\), \(\|g-\Gamma_{v_{h}}\|_{L^{\infty}(\partial\Omega)}\) is replaced by \(\limsup_{x\to\partial\Omega}|(g-\Gamma_{v_{h}})(x)|\) in general. The combination of the two previously displayed formulas concludes the proof.
We note that, for certain examples, the convex envelope \(\Gamma_{v_{h}}\) of an approximation \(v_{h}\) is continuous up to the boundary.
**Proposition 4.3** (continuity at boundary).: _Let \(v\in C^{0,1}(\overline{\Omega})\) be Lipschitz continuous such that \(v|_{\partial\Omega}\) can be extended to a Lipschitz-continuous convex function \(g\in C^{0,1}(\overline{\Omega})\). Then \(\Gamma_{v}\in C(\overline{\Omega})\) and \(\Gamma_{v}=v\) on \(\partial\Omega\)._
Proof.: We first prove the assertion for the homogeneous boundary condition \(v|_{\partial\Omega}=0\). Given any point \(x\in\Omega\), let \(x^{\prime}\in\partial\Omega\) denote a best approximation of \(x\) on the boundary \(\partial\Omega\) so that \(|x-x^{\prime}|=\mathrm{dist}(x,\partial\Omega)\). Define the affine function \(a_{x}(z)\coloneqq L(z-x^{\prime})\cdot(x^{\prime}-x)/|x-x^{\prime}|\) for \(z\in\Omega\), where \(L\) denotes the Lipschitz constant of the function \(v\in C^{0,1}(\overline{\Omega})\). It is straightforward to verify that \(a_{x}\leq v\) in \(\overline{\Omega}\)[13, p. 12]. Therefore, \(-L\mathrm{dist}(x,\partial\Omega)=a_{x}(x)\leq\Gamma_{v}(x)\leq 0\) by definition of the convex envelope. This shows \(\Gamma_{v}\in C(\overline{\Omega})\) with \(\Gamma_{v}\equiv 0\) on \(\partial\Omega\). In the general case, we observe that \(v-g\in C^{0,1}(\overline{\Omega})\) is Lipschitz continuous. The first case proves \(\Gamma_{v-g}\in C(\overline{\Omega})\) with \(\Gamma_{v-g}=v-g\) on \(\partial\Omega\). We deduce that \(w\coloneqq g+\Gamma_{v-g}\in C(\overline{\Omega})\) is a convex function with \(w\leq v\) in \(\Omega\) and \(w=v\) on \(\partial\Omega\). Let \((x_{j})_{j}\subset\Omega\) be a sequence of points converging to some \(x\in\partial\Omega\). For a given \(\gamma>0\), there exists, from the uniform continuity of \(v-w\) in the compact set \(\overline{\Omega}\), a \(\delta>0\) such that \(|(v-w)(x_{j})-(v-w)(x)|\leq\gamma\) whenever \(|x-x_{j}|\leq\delta\). Since \(w\leq\Gamma_{v}\leq v\) in \(\Omega\), this implies \(|(v-\Gamma_{v})(x_{j})|\leq\gamma\) for sufficiently large \(j\). In combination with the triangle inequality and the Lipschitz continuity of \(v\), we conclude \(|v(x)-\Gamma_{v}(x_{j})|\leq\gamma+|v(x)-v(x_{j})|\leq\gamma+L|x-x_{j}|\). Therefore, \(\lim_{j\to\infty}\Gamma_{v}(x_{j})=v(x)\).
The theory of this paper also allows for an a posteriori error control for the regularized HJB equation (1.2). We state this for the sake of completeness as, in general, it is difficult to quantify the regularization error \(\|u-u_{\varepsilon}\|_{L^{\infty}(\Omega)}\).
**Theorem 4.4** (guaranteed \(L^{\infty}\) error control for uniform elliptic HJB).: _Given a positive parameter \(0<\varepsilon\leq 1/n\) and a \(C^{1}\) conforming finite element function \(v_{h}\in V_{h}\), there exists a unique \(f_{h}\in L^{\infty}(\Omega)\) such that_
\[F_{\varepsilon}(f_{h};x,\mathrm{D}^{2}v_{h})=0\text{ a.e. in }\Omega. \tag{4.3}\]
_The viscosity solution \(u_{\varepsilon}\) to (1.2) with right-hand side \(f\in L^{n}(\Omega)\) and Dirichlet data \(g\in C(\partial\Omega)\) satisfies, for any convex subset \(\Omega^{\prime}\Subset\Omega\), that_
\[\|u_{\varepsilon}-v_{h}\|_{L^{\infty}(\Omega)}\leq\|g-v_{h}\|_{L^ {\infty}(\partial\Omega)}+\frac{c_{n}}{2^{1/n}n}\mathrm{diam}(\Omega^{\prime} )\|f-f_{h}\|_{L^{n}(\Omega^{\prime})} \tag{4.4}\] \[+\frac{c_{n}}{n}\mathrm{diam}(\Omega)^{(n-1)/n}\max_{x\in \overline{\Omega\setminus\Omega^{\prime}}}\mathrm{dist}(x,\partial\Omega)^{1/ n}\|f-f_{h}\|_{L^{n}(\Omega)}\eqqcolon\mathrm{RHS}_{\varepsilon}.\]
Proof.: As in the proof of Theorem 2.9, Lemma 2.10 provides a (unique) piecewise continuous and essentially bounded function \(f_{h}\coloneqq\xi(\mathrm{D}^{2}_{\mathrm{pw}}v_{h})\in L^{\infty}(\Omega)\) with (4.3). Theorem 2.9 shows that \(v_{h}\) is the generalized viscosity solution to (4.3). Therefore,
the stability estimates from Corollary 2.7 can be applied to \(u_{\varepsilon}\) and \(v_{h}\). First, the application of (2.4) to the subdomain \(\Omega^{\prime}\) instead of \(\Omega\) leads to
\[\|u_{\varepsilon}-v_{h}\|_{L^{\infty}(\Omega^{\prime})}\leq\|u_{\varepsilon}-v _{h}\|_{L^{\infty}(\partial\Omega^{\prime})}+\frac{c_{n}}{2^{1/n}n}\mathrm{ diam}(\Omega^{\prime})\|f-f_{h}\|_{L^{n}(\Omega^{\prime})}.\]
Second, the local estimate (2.3) with \(\omega\coloneqq\Omega\setminus\Omega^{\prime}\) implies
\[\|u_{\varepsilon}-v_{h}\|_{L^{\infty}(\Omega\setminus\Omega^{ \prime})}\leq\|g-v_{h}\|_{L^{\infty}(\partial\Omega)}\] \[\qquad+\frac{c_{n}}{n}\mathrm{diam}(\Omega)^{(n-1)/n}\max_{x\in \overline{\Omega\setminus\Omega^{\prime}}}\mathrm{dist}(x,\partial\Omega)^{1/ n}\|f-f_{h}\|_{L^{n}(\Omega)}.\]
Since \(\|u_{\varepsilon}-v_{h}\|_{L^{\infty}(\partial\Omega^{\prime})}\leq\|u_{ \varepsilon}-v_{h}\|_{L^{\infty}(\Omega\setminus\Omega^{\prime})}\), the combination of the two previously displayed formulas concludes the proof.
We point out that in both theorems of this section, it is possible to apply the stability estimate (2.3) to further subsets of \(\Omega\) to localize the error estimator.
## 5. Numerical examples
In this section, we apply the theory from Section 4 to numerical benchmarks on the (two-dimensional) unit square domain \(\Omega\coloneqq(0,1)^{2}\).
### Implementation
Some remarks on the practical realization precede the numerical benchmarks of this section.
#### 5.1.1. Setup
Given \(\mathcal{T}\) as a rectangular partition of the domain \(\Omega\) with the set \(\mathcal{E}\) of edges, we choose \(V_{h}\) to be the Bogner-Fox-Schmit finite element space [6]. It is the space of global \(C^{1,1}(\overline{\Omega})\) functions that are bicubic when restricted to any element \(T\in\mathcal{T}\). We compute the discrete approximation in \(V_{h}\) by approximating the regularized problem (1.3) with a Galerkin method. In the two-dimensional setting, this yields a strongly monotone problem with a unique discrete solution \(u_{h,\varepsilon}\)[12]. Since \(v_{h}\coloneqq u_{h,\varepsilon}\) is a \(C^{1,1}(\overline{\Omega})\) function, we can apply Theorem 4.1 to obtain error bounds for \(\|u-\Gamma_{v_{h}}\|_{L^{\infty}(\Omega)}\), which motivates an adaptive scheme as outlined below.
#### 5.1.2. Evaluation of the upper bound of Theorem 4.1
We proceed as follows for the computation of the right-hand side \(\mathrm{RHS}_{0}\) of (4.2).
_Integration of \(f-f_{h}\) for \(f_{h}\coloneqq 2\chi_{\mathcal{C}_{v_{h}}}(\det\mathrm{D}^{2}_{\mathrm{pw}}v_{h}) ^{1/2}\)._ The integral \(\|f-f_{h}\|_{L^{2}(\omega)}\) for any subset \(\omega\subset\Omega\) is computed via numerical integration. Given a set of Gauss points \(\mathcal{N}_{\ell}\) associated to the degree of exact integration \(\ell\), this reads
\[\sum_{T\in\mathcal{T}}\sum_{x\in\mathcal{N}_{\ell}\cap T\cap\omega}\mathrm{ meas}(T)w_{\ell,T}(x)(f(x)-2\chi_{\mathcal{C}_{v_{h}}}(x)(\det\mathrm{D}^{2}_{ \mathrm{pw}}v_{h}(x))^{1/2})^{2} \tag{5.1}\]
with some positive weight function \(w_{\ell,T}\in L^{\infty}(T)\). A point \(x\in\mathcal{N}_{\ell}\) is in the contact set \(\mathcal{C}_{v_{h}}\) of \(v_{h}\) if (and only if)
\[0\leq v_{h}(z)-v_{h}(x)-\nabla v_{h}(x)\cdot(z-x)\quad\text{for all }z\in\Omega \tag{5.2}\]
(because \(\partial\Gamma_{v_{h}}(x)=\{\nabla v_{h}(x)\}\) for any \(x\in\Omega\cap\mathcal{C}_{v_{h}}\) from the proof of Lemma 4.2). While this condition can be checked explicitly, it leads to a global problem for each Gauss point, which may become rather expensive. Instead, (5.2) is verified at only a finite number of points, e.g., \(z\in\mathcal{V}_{\ell}\coloneqq\mathcal{N}_{\ell}\cup\mathcal{N}_{\ell}^{b}\), where \(\mathcal{N}_{\ell}^{b}\subset\partial\Omega\) is a discrete subset of \(\partial\Omega\). The set of points \(\mathcal{V}_{\ell}\) creates a quasi-uniform refinement \(\mathcal{T}_{\ell}\) of the partition \(\mathcal{T}\) into triangles and we assume that the mesh-size of \(\mathcal{T}_{\ell}\) tends to zero as \(\ell\to\infty\). Let \(\mathrm{I}_{\ell}v_{h}\) denote the nodal interpolation of \(v_{h}\) w.r.t. the mesh \(\mathcal{T}_{\ell}\). We replace the function \(\chi_{\mathcal{C}_{v_{h}}}\) in (5.1) by the indicator function \(\chi_{\mathcal{C}_{v_{h}}^{\ell}}\) of the set
\[\mathcal{C}_{v_{h}}^{\ell}\coloneqq\mathcal{C}_{\mathrm{I}_{\ell}v_{h}}\cap \{x\in\Omega\setminus\cup\mathcal{E}:\mathrm{D}^{2}_{\mathrm{pw}}v_{h}(x)\geq 0 \text{ is positive semi-definite}\}.\]
In practice, the numerical integration formula for \(\|f-f_{h}\|_{L^{2}(\omega)}\) reads
\[\sum_{T\in\mathcal{T}}\sum_{x\in\mathcal{N}_{\ell}\cap T\cap\omega}\mathrm{meas}( T)w_{\ell,T}(x)(f(x)-2\chi_{\mathcal{C}^{\ell}_{v_{h}}}(x)(\det\mathrm{D}_{ \mathrm{pw}}^{2}v_{h}(x))^{1/2})^{2}. \tag{5.3}\]
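A minimal Python sketch of the quadrature (5.3) reads as follows; the callables `f` and `f_h` (the latter already containing the discrete contact indicator), the rectangle list, and the quadrature order are placeholders for the data of the actual computation.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def err_L2_sq(f, f_h, rects, q=5):
    """Tensor Gauss-Legendre approximation of ||f - f_h||_{L^2}^2 over
    axis-aligned rectangles (x0, x1, y0, y1), mimicking (5.3)."""
    xg, wg = leggauss(q)                     # nodes and weights on (-1, 1)
    total = 0.0
    for (x0, x1, y0, y1) in rects:
        hx, hy = 0.5 * (x1 - x0), 0.5 * (y1 - y0)
        for xi, wi in zip(xg, wg):
            for yj, wj in zip(xg, wg):
                # Map the reference point to the rectangle; hx * hy is
                # the Jacobian of the affine transformation.
                x, y = x0 + hx * (xi + 1.0), y0 + hy * (yj + 1.0)
                total += hx * hy * wi * wj * (f(x, y) - f_h(x, y)) ** 2
    return total
```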
The convex envelope \(\Gamma_{\mathrm{I}_{\ell}v_{h}}\) of \(\mathrm{I}_{\ell}v_{h}\) can be computed, for instance, by the quickhull algorithm [2]. Therefore, it is straightforward to compute (5.3). We note that if \(x\in\mathcal{C}_{v_{h}}\cap\mathcal{N}_{\ell}\), then (5.2) holds for any \(z\in\mathcal{V}_{\ell}\). Since the convex envelope of the continuous piecewise affine function \(\mathrm{I}_{\ell}v_{h}\) only depends on the nodal values of \(v_{h}\), this implies \(x\in\mathcal{C}^{\ell}_{v_{h}}\cap\mathcal{N}_{\ell}\). However, the reverse is not true. Hence, (5.3) and (5.1) may not coincide. From the uniform convergence of \(\mathrm{I}_{\ell}v_{h}\) to \(v_{h}\) as \(\ell\to\infty\), we deduce
\[\limsup_{\ell\to\infty}\mathcal{C}^{\ell}_{v_{h}}\coloneqq\cap_{\ell\in \mathbb{N}}\cup_{k\geq\ell}\mathcal{C}^{k}_{v_{h}}\subset\mathcal{C}_{v_{h}},\]
cf. [3, Lemma A.1]. Given any \(\delta>0\), this implies \(\mathcal{C}^{\ell}_{v_{h}}\setminus\mathcal{C}_{v_{h}}\subset\{x\in\Omega \setminus\mathcal{C}_{v_{h}}:\mathrm{dist}(x,\mathcal{C}_{v_{h}})\leq\delta\}\) for sufficiently large \(\ell\). Therefore, the set of all points \(x\in\mathcal{N}_{\ell}\) with \(\chi_{\mathcal{C}_{v_{h}}}(x)\neq\chi_{\mathcal{C}^{\ell}_{v_{h}}}(x)\) is a subset of \(\mathcal{C}^{\ell}_{v_{h}}\setminus\mathcal{C}_{v_{h}}\), whose Lebesgue measure vanishes in the limit as \(\ell\to\infty\). In conclusion, the limits of (5.1) and (5.3) coincide.
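In practice, the envelope \(\Gamma_{\mathrm{I}_{\ell}v_{h}}\) and the nodal contact test can be realized with a convex hull of the lifted graph, e.g., via `scipy.spatial.ConvexHull` (which wraps the quickhull library [2]). The following sketch is one possible realization and assumes that the evaluation points lie in the convex hull of the nodes.

```python
import numpy as np
from scipy.spatial import ConvexHull      # quickhull, cf. [2]

def envelope_and_contact(nodes, values, tol=1e-12):
    """Convex envelope of the piecewise affine interpolant through
    (nodes, values) and the indices of contact nodes; nodes is an
    (N, 2) array of the points of V_ell, values the nodal values."""
    hull = ConvexHull(np.column_stack([nodes, values]))
    # Lower-hull faces have outward normals with negative last entry;
    # each face plane a*x + b*y + c*z + d = 0 supports the envelope,
    # so the envelope is the maximum of the planes z = -(a x + b y + d)/c.
    low = hull.equations[hull.equations[:, 2] < 0]

    def gamma(x, y):
        return np.max(-(low[:, 0] * x + low[:, 1] * y + low[:, 3]) / low[:, 2])

    contact = np.array([j for j in range(len(nodes))
                        if values[j] <= gamma(*nodes[j]) + tol])
    return gamma, contact
```

The indicator \(\chi_{\mathcal{C}^{\ell}_{v_{h}}}\) then combines this nodal contact test with the pointwise positive semi-definiteness check of \(\mathrm{D}^{2}_{\mathrm{pw}}v_{h}\).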
_Computation of \(\mu\coloneqq\limsup_{x\to\partial\Omega}|(g-\Gamma_{v_{h}})(x)|\)_. The boundary residual \(\mu\) is approximated by \(\|g-\Gamma_{\mathrm{I}\epsilon v_{h}}\|_{L^{\infty}(\partial\Omega)}\). Since \(\Gamma_{v_{h}}\leq\mathrm{I}_{\ell}v_{h}\) and \(\mathrm{I}_{\ell}v_{h}\) is piecewise affine, \(\Gamma_{v_{h}}\leq\Gamma_{\mathrm{I}_{\ell}v_{h}}\) holds in \(\Omega\). On the other hand, we have \(\lim_{\ell\to\infty}\|v_{h}-\mathrm{I}\epsilon v_{h}\|_{L^{\infty}(\Omega)}=0\). Hence, any supporting hyperplane \(a_{x}\) of \(\Gamma_{\mathrm{I}\epsilon v_{h}}\) at \(x\in\Omega\) satisfies \(a_{x}-\delta_{\ell}\leq v_{h}\) in \(\Omega\) with \(\delta_{\ell}\coloneqq\|v_{h}-\mathrm{I}_{\ell}v_{h}\|_{L^{\infty}(\Omega)}\). Since \(a_{x}-\delta_{\ell}\) is an affine function, \(\Gamma_{\mathrm{I}\epsilon v_{h}}(x)-\delta_{\ell}=a_{x}(x)-\delta_{\ell}\leq \Gamma_{v_{h}}(x)\). We conclude \(\Gamma_{\mathrm{I}\epsilon v_{h}}-\delta_{\ell}\leq\Gamma_{v_{h}}\leq\Gamma_{ \mathrm{I}\epsilon v_{h}}\) in \(\Omega\). In particular, \(\lim_{\ell\to\infty}\|g-\Gamma_{\mathrm{I}\epsilon v_{h}}\|_{L^{\infty}( \partial\Omega)}=\mu\).
_Choice of \(\Omega^{\prime}\)._ Let \(\delta\coloneqq\min_{E\in\mathcal{E}}h_{E}\) denote the minimal edge length of the mesh \(\mathcal{T}\). For all integers \(0\leq j<1/(2\delta)\), define \(\Omega_{j\delta}\coloneqq\{x\in\Omega:\mathrm{dist}(x,\partial\Omega)\geq j\delta\}\). It seems canonical to choose \(\Omega^{\prime}\coloneqq\Omega_{j\delta}\), where \(j\) is the index that minimizes \(\mathrm{RHS}_{0}\). However, this choice may lead to significant computational effort. From the interior regularity of Alexandrov solutions [4], we can expect that the error is concentrated on the boundary and so, the best \(j\) will be close to one. Accordingly, the smallest \(j\geq 0\) is chosen so that \(\mathrm{RHS}_{0}\) with \(\Omega^{\prime}\coloneqq\Omega_{(j+1)\delta}\) is larger than \(\mathrm{RHS}_{0}\) with \(\Omega^{\prime}\coloneqq\Omega_{j\delta}\).
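The described selection of \(\Omega^{\prime}\) amounts to a short loop; in the sketch below, the callable `rhs0(j)` is assumed to evaluate the bound (4.2) with \(\Omega^{\prime}=\Omega_{j\delta}\).

```python
def choose_j(rhs0, j_max):
    """Smallest j >= 0 such that RHS_0 for Omega_{(j+1) delta} exceeds
    RHS_0 for Omega_{j delta}; rhs0(j) evaluates the bound (4.2)."""
    j = 0
    while j + 1 < j_max and rhs0(j + 1) <= rhs0(j):
        j += 1
    return j
```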
#### 5.1.3. Adaptive marking strategy
We define the refinement indicator
\[\eta(T)\coloneqq j\delta\sqrt{2}\|f-f_{h}\|_{L^{2}(T)}^{2}+(1-2j\delta)^{2}\| f-f_{h}\|_{L^{2}(T\cap\Omega_{j\delta})}^{2}\]
for any \(T\in\mathcal{T}\), where the scaling in \(\delta\) arises from (4.2) with \(n=2\). Let \(\sigma\coloneqq\mathrm{RHS}_{0}-\mu\) denote the remaining contributions of \(\mathrm{RHS}_{0}\), where \(\mu=\limsup_{x\to\partial\Omega}|(g-\Gamma_{u_{h,\varepsilon}})(x)|\) from above. If \(\sigma/10<\|g-u_{h,\varepsilon}\|_{L^{\infty}(\partial\Omega)}\), then we mark one fifth of all boundary edges \(E\in\mathcal{E}\) with the largest contributions \(\|g-u_{h,\varepsilon}\|_{L^{\infty}(E)}\). Otherwise, we mark a set \(\mathcal{M}\) of rectangles with minimal cardinality so that
\[\frac{1}{2}\sum_{T\in\mathcal{T}}\eta(T)\leq\sum_{T\in\mathcal{M}}\eta(T).\]
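One possible Python realization of this marking step is sketched below; `eta` collects the element indicators \(\eta(T)\), `edge_res` the boundary-edge contributions \(\|g-u_{h,\varepsilon}\|_{L^{\infty}(E)}\), and `sigma` the value \(\sigma\) from above.

```python
import numpy as np

def mark(eta, edge_res, sigma):
    """Marking step from Section 5.1.3: boundary edges if the boundary
    residual dominates, otherwise a minimal-cardinality bulk criterion."""
    if sigma / 10.0 < np.max(edge_res):
        order = np.argsort(edge_res)[::-1]
        return "edges", order[:max(1, len(edge_res) // 5)]
    order = np.argsort(eta)[::-1]          # sort indicators descending
    csum = np.cumsum(eta[order])
    k = int(np.searchsorted(csum, 0.5 * csum[-1])) + 1
    return "elements", order[:k]           # smallest prefix with half the sum
```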
#### 5.1.4. Displayed quantities
The convergence history plots display the errors \(\|u-u_{h,\varepsilon}\|_{L^{\infty}(\Omega)}\), \(\mathrm{LHS}\coloneqq\|u-\Gamma_{u_{h,\varepsilon}}\|_{L^{\infty}(\Omega)}\) as well as the error estimator \(\mathrm{RHS}_{0}\) against the number of degrees of freedom ndof in a log-log plot. (We note that ndof scales like \(h_{\max}^{-2}\) on uniformly refined meshes.) Whenever the solution \(u\) is sufficiently smooth, the errors \(\|u-u_{h,\varepsilon}\|_{H^{1}(\Omega)}\) and \(\|u-u_{h,\varepsilon}\|_{H^{2}(\Omega)}\) are also displayed. Solid lines in the convergence history plots indicate adaptive mesh-refinements, while dashed lines are associated with uniform mesh-refinements. The experiments are carried out for the regularization parameter \(\varepsilon=10^{-3}\) in the first experiment and \(\varepsilon=10^{-4}\) in the second and third experiments. For a numerical comparison of various \(\varepsilon\), we refer to [12].
### Regular solution
In this example from [9], the exact solution \(u\) is given by
\[u(x)=\frac{(2|x|)^{3/2}}{3}\]
with \(f(x)=1/|x|\). The solution belongs to \(H^{5/2-\nu}(\Omega)\) for any \(\nu>0\), but not to \(C^{2}(\overline{\Omega})\). It is proven in [12] that \(u\) is the viscosity solution to \(F_{\varepsilon}(f;x,\mathrm{D}^{2}u)=0\) in \(\Omega\) for any regularization parameter \(0<\varepsilon\leq 1/3\). Accordingly, we observed no visual differences in the convergence history plots for different \(0<\varepsilon\leq 1/3\). Figure 1 displays the convergence rates \(0.8\) for \(\|u-u_{h,\varepsilon}\|_{L^{\infty}(\Omega)}\) and \(\mathrm{RHS}_{0}\), \(3/4\) for \(\|u-u_{h,\varepsilon}\|_{H^{1}(\Omega)}\), and \(1/4\) for \(\|u-u_{h,\varepsilon}\|_{H^{2}(\Omega)}\) on uniform meshes. The adaptive algorithm refines towards the singularity of \(u\) at \(0\) and leads to improved convergence rates for all displayed quantities. We observe the rate \(1.75\) for \(\|u-u_{h,\varepsilon}\|_{L^{\infty}(\Omega)}\), \(1\) for LHS, \(\mathrm{RHS}_{0}\), and \(\|u-u_{h,\varepsilon}\|_{H^{2}(\Omega)}\), and \(1.5\) for \(\|u-u_{h,\varepsilon}\|_{H^{1}(\Omega)}\). It is also worth noting that \(\mathrm{RHS}_{0}\) seems to be efficient on adaptive meshes.
### Convex envelope of boundary data
In the second example, we approximate the exact solution
\[u(x,y)\coloneqq|x-1/2|\]
to \(\det\mathrm{D}^{2}u=0\) in \(\Omega\), which is the largest convex function with prescribed boundary data. The solution belongs to \(H^{3/2-\delta}(\Omega)\) for any \(\delta>0\), but not to \(H^{3/2}(\Omega)\). It was observed in [12] that the regularization error \(u-u_{\varepsilon}\) dominates the discretization error \(u-u_{h,\varepsilon}\) on finer meshes. Therefore, the errors \(\|u-u_{h,\varepsilon}\|_{L^{\infty}(\Omega)}\) and \(\|u-u_{h,\varepsilon}\|_{H^{1}(\Omega)}\) stagnate at a certain value (depending on \(\varepsilon\)) as displayed in Figure 2. However, LHS converges with convergence rate \(1/2\) on uniform meshes even for fixed \(\varepsilon\). A first glance at the discrete solution shown in Figure 3 suggests that the maximum of \(|u-u_{h,\varepsilon}|\) is attained along the line \(\mathrm{conv}\{(1/2,0),(1/2,1)\}\). This error depends on the regularization parameter and only vanishes in the limit as \(\varepsilon\to 0\), but the convex envelope of \(u_{h,\varepsilon}\) provides an accurate approximation of \(u\) along this line. In fact, Figure 4 shows that the adaptive algorithm refines towards the points \((1/2,0)\) and \((1/2,1)\), but the whole line \(\mathrm{conv}\{(1/2,0),(1/2,1)\}\) is only of minor interest. We observe the improved convergence rate \(2.5\) for LHS on adaptive meshes. The guaranteed upper bound \(\mathrm{RHS}_{0}\) can provide an accurate estimate of LHS, but seems to oscillate due to the nature of the problem. The goal of the adaptive algorithm is the reduction of \(\mathrm{RHS}_{0}\), which consists of the error \(\|f-f_{h}\|_{L^{2}(\Omega)}\) in the Monge-Ampere measures and of some boundary data approximation error.
Figure 1. Convergence history for the first experiment with \(\varepsilon=10^{-3}\).
Thanks to the additional regularization provided by the convex envelope, \(\|f-f_{h}\|_{L^{2}(\Omega)}\) is concentrated at the points \((1/2,0)\) and \((1/2,1)\), but becomes very small after some mesh-refining steps. We even observed in Figure 2 that \(\mathrm{LHS}=\mathrm{RHS}_{0}\) on two meshes, i.e., \(\|f-f_{h}\|_{L^{2}(\Omega)}=0\). Then \(\mathrm{RHS}_{0}\) is dominated by the boundary data approximation error and leads to mesh refinements on the boundary. This may result in significant changes in the Monge-Ampere measure of \(\Gamma_{u_{h,\varepsilon}}\), because the convex envelope of the discrete function \(u_{h,\varepsilon}\) depends heavily on its values on the boundary in this class of problems.
Figure 3. Discrete solution on a uniform mesh with 4225 nodes.
Figure 2. Convergence history for the second experiment with \(\varepsilon=10^{-4}\).
### Nonsmooth exact solution
In this example, the function
\[u(x,y)\coloneqq-\big{(}\sin(\pi x)^{-1}+\sin(\pi y)^{-1}\big{)}^{-1}\]
is the solution to the Monge-Ampere equation (1.1) with homogenous boundary data and right-hand side
\[f(x,y)=\frac{4\pi^{2}\sin(\pi x)^{2}\sin(\pi y)^{2}(2-\sin(\pi x)\sin(\pi y))}{( \sin(\pi x)+\sin(\pi y))^{4}}.\]
The function \(u\) belongs to \(C^{2}(\Omega)\cap H^{2-\delta}(\Omega)\) for all \(\delta>0\), but neither to \(H^{2}(\Omega)\) nor \(C^{2}(\overline{\Omega})\). The convergence history is displayed in Figure 5. Notice from Proposition 4.3 that \(\mathrm{RHS}_{0}\) consists solely of the error in the Monge-Ampere measures. In this example, \(f\) exhibits strong oscillations at the four corners of the domain \(\Omega\) and the adaptive algorithm seems to solely refine towards these corners as displayed in Figure 6. While \(\mathrm{RHS}_{0}\) converges on uniform meshes (although with a slow rate), there is only a marginal reduction of \(\mathrm{RHS}_{0}\) for adaptive computation. We can conclude that the discrete approximation cannot resolve the infinitesimal oscillation of the Monge-Ampere measure of \(u\) properly. This results in the stagnation of \(\|u-u_{h,\varepsilon}\|_{L^{\infty}(\Omega)}\) and LHS at an early level in comparison to uniform mesh refinements. However, we also observed that the stagnation point depends on the maximal mesh-size. In fact, if we start from an initial uniform mesh with a small mesh-size \(h_{0}\), significant improvements of \(\mathrm{RHS}_{0}\) are obtained on the first levels of adaptive mesh refinements as displayed in Figure 7. Undisplayed experiments show the same behaviour for \(\|u-u_{h,\varepsilon}\|_{L^{\infty}(\Omega)}\). This leads us to believe that, in this example, a combination of uniform and adaptive mesh-refining strategies provides the best results.